% Source: https://arxiv.org/abs/1103.1072
\title{Orthogonal decomposition of Lorentz transformations}
\begin{abstract}
The canonical decomposition of a Lorentz algebra element into a sum of orthogonal simple (decomposable) Lorentz bivectors is discussed, as well as the decomposition of a proper orthochronous Lorentz transformation into a product of commuting Lorentz transformations, each of which is the exponential of a simple bivector. As an application, we obtain an alternative method of deriving the formulas for the exponential and logarithm for Lorentz transformations.
\end{abstract}
\section{Introduction} It is common to define a three--dimensional rotation by specifying its axis and rotation angle. However, since the axis of rotation is fixed, we may also define the rotation in terms of the two--plane orthogonal to the axis. This point of view is useful in higher dimensions, where one no longer has a single rotation axis. Indeed we may define a rotation in $n$--dimensional space by choosing a two--plane and a rotation angle in that plane; vectors orthogonal to the plane are fixed by the rotation. The most general rotation, however, is a composition of such planar, or {\em simple}, rotations. Moreover, the planes of rotation may be taken to be mutually orthogonal, so that the simple factors commute with one another. On the Lie algebra level, where rotations correspond to antisymmetric matrices, a simple rotation corresponds to a {\em decomposable} matrix; i.e., a matrix of the form $u\wedge v=uv^T-vu^T$, where $u,v$ are (column) vectors. Geometrically, the vectors $u,v$ span the plane of rotation. A general rotation corresponds to the sum of decomposable matrices: $A=\sum_iu_i\wedge v_i$, and the mutual orthogonality of the two--planes of rotation corresponds to the mutual annihilation of summands: $(u_i\wedge v_i)(u_j\wedge v_j)=0$ for $i\neq j$. Note that mutual annihilation trivially implies commutation, so that $\exp(\sum_iu_i\wedge v_i)=\prod_i\exp(u_i\wedge v_i)$, and we recover algebraically the commutation of simple factors in a rotation. In the particular case of four--dimensional Euclidean space, a general rotation is the composition of two simple commuting rotations in orthogonal two--planes. Correspondingly, a $4\times 4$ antisymmetric matrix can be written as the sum of two mutually annihilating decomposable matrices. 
It is possible (although apparently not of sufficient interest to be publishable) to write down explicit formulas for this decomposition --- both on the Lie algebra level and on the level of rotations. It is natural to ask if such an orthogonal decomposition is possible for Lorentz transformations. This article answers this question in the affirmative. If one assumes a Minkowski metric and allows the use of complex numbers, a Wick rotation may be applied to the formulas for four--dimensional Euclidean rotations. However our discussion will not proceed along this line; instead, we will limit ourselves to algebra over the reals (see below) and develop the formulas from first principles, without any assumptions on the specific form of the Lorentz metric. It should be noted that the orthogonal decomposition of a Lorentz transformation $\Lambda$ into commuting factors is distinct from the usual polar decomposition $\Lambda=BR$, where $B$ is a boost and $R$ is a rotation. While $B$ and $R$ are simple in the sense that they are exponentials of decomposable Lie algebra elements: $B=\exp(b)$ and $R=\exp(r)$, they do not necessarily commute, nor do the Lie algebra elements $b,r$ annihilate each other. In particular, $\exp(b+r)\neq\exp(b)\exp(r)$ in general. From the point of view of the geometry of Lorentz transformations, there is nothing new presented in this work. It is known that any proper orthochronous Lorentz transformation is the product of commuting factors which act only on orthogonal two--planes (\cite{Schremp}). That the corresponding Lorentz algebra element can be written as the sum of mutually annihilating simple elements is a natural consequence. However, the formulas given here, which actually allow one to compute these decompositions using only basic arithmetic operations, appear to be original. Moreover they are not metric--specific, and so are applicable to any space--time manifold in a natural way, without the need to use Riemann normal coordinates. 
Since it may seem oddly unnecessary to restrict ourselves only to algebra over the reals, we briefly explain our motivation for this. The author is interested in efficient numerical interpolation of Lorentz transformations, with a view towards a real--time relativistic simulation. That is, we are given (proper) Lorentz transformations $\Lambda_0$ and $\Lambda_1$, which we think of as giving coordinate transformations, measured at times $t=0$ and $t=1$, between an observer and an object in nonuniform motion. For any time $t$ with $0\leq t\leq 1$, we would like to find a Lorentz transformation $\Lambda(t)$ that is ``between'' $\Lambda_0$ and $\Lambda_1$. Analogous to linear interpolation, we may use geodesic interpolation in the manifold of all Lorentz transformations: $\Lambda(t)=\Lambda_0\exp(t\log{\Lambda_0^{-1}\Lambda_1})$. Thus we need to efficiently compute the logarithm and exponential for Lorentz transformations. Although many computer languages offer arithmetic over the complex numbers, not all do; in any case, complex arithmetic is typically not supported at the hardware level, so its use introduces additional computational overhead. \section{Preliminaries} \subsection{Generalized trace}\label{sec:traces} We will make use of the {\bf $k$--th order trace} ${\rm tr}_kM$ of an $n\times n$ matrix $M$, which is defined by the identity \begin{equation}\label{eq:ktr} \det(I+tM)=\sum_{k=0}^n({\rm tr}_kM)t^k. \end{equation} Equivalently, the $(n-k)$--th degree term of the characteristic polynomial $\det(\lambda I-M)$ of $M$ has coefficient $(-1)^k{\rm tr}_kM$. In particular, the zeroth order trace is unity, the first order trace is the usual trace, and the $n$--th order trace is the determinant. For $k\neq 1$, ${\rm tr}_kM$ is not linear. However, the identities (i) ${\rm tr}_kM^T={\rm tr}_kM$, (ii) ${\rm tr}_kMN={\rm tr}_kNM$, and (iii) ${\rm tr}_k\alpha M=\alpha^k{\rm tr}_kM$ for any scalar $\alpha$, hold for any $k$. 
These properties follow from equation \eqref{eq:ktr} and the basic properties of determinants. Of primary interest is the second order trace, for which we have the explicit formula \begin{equation}\label{eq:2tr} {\rm tr}_2M=\tfrac{1}{2}({\rm tr}^2M-{\rm tr}M^2) \end{equation} This follows by computing the second derivative of equation \eqref{eq:ktr} and evaluating at $t=0$. In engineering, ${\rm tr}_2M$ is often denoted by ${\rm II}_M$. \subsection{Antisymmetry and decomposability} In preparation for the next section, let us derive a few facts that we will need concerning $4\times 4$ antisymmetric matrices. Such a matrix can be written in the form \begin{equation}\label{eq:skew} A_{\bf xy}=\begin{pmatrix} 0 & {\bf x}^T\\ -{\bf x} & W_{\bf y} \end{pmatrix} \quad\text{where}\quad W_{\bf y}=\begin{pmatrix} 0 & -y_3 & y_2\\ y_3 & 0 & -y_1\\ -y_2 & y_1 & 0 \end{pmatrix} \end{equation} for some ${\bf x},{\bf y}\in{\mathbb R}^3$. Here $W_{\bf y}$ is chosen so that it has the defining property $W_{\bf y}{\bf v}={\bf y}\times{\bf v}$. By computing the determinant of \eqref{eq:skew} directly, we obtain the following, which is a special case of the general fact that the determinant of an antisymmetric matrix (in any dimension) is the square of its Pfaffian. \begin{lemma}\label{lem:skewdet} $\det{A_{\bf xy}}=({\bf x}\cdot{\bf y})^2$\qed \end{lemma} We may identify the {\em wedge (exterior) product} of four--vectors $u,v$ with the matrix $u\wedge v\doteq uv^T-vu^T$. This matrix is evidently antisymmetric, and if we write $u=(u_0,{\bf u})$, $v=(v_0,{\bf v})$, we find that $u\wedge v=A_{\bf xy}$, where ${\bf x}=u_0{\bf v}-v_0{\bf u}$ and ${\bf y}={\bf v}\times{\bf u}$. In particular, lemma \ref{lem:skewdet} implies that $\det{u\wedge v}=0$. This is again a special case of a general fact: the determinant of a wedge is zero in dimensions greater than two, since we can always find a vector orthogonal to the vectors $u,v$. In dimension four, the converse is also true. 
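The generalized trace and the Pfaffian-square identity above are easy to check numerically. The following is a minimal NumPy sketch (the helper name \verb|A_xy| and the test data are ours, not part of the text); it uses the fact that \verb|numpy.poly| returns the characteristic polynomial coefficients of a matrix, highest degree first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generalized traces: np.poly(M) returns the coefficients [1, c_1, ..., c_n]
# of det(lambda*I - M), where c_k = (-1)^k tr_k M.
M = rng.standard_normal((4, 4))
c = np.poly(M)
tr_k = [(-1) ** k * c[k] for k in range(5)]
assert np.isclose(tr_k[1], np.trace(M))               # first order: usual trace
assert np.isclose(tr_k[4], np.linalg.det(M))          # fourth order: determinant
# second order trace: tr_2 M = (tr^2 M - tr M^2)/2
assert np.isclose(tr_k[2], 0.5 * (np.trace(M) ** 2 - np.trace(M @ M)))

# Antisymmetric parametrization A_xy and det A_xy = (x . y)^2.
def A_xy(x, y):
    W = np.array([[0.0, -y[2], y[1]],
                  [y[2], 0.0, -y[0]],
                  [-y[1], y[0], 0.0]])        # W_y v = y x v
    A = np.zeros((4, 4))
    A[0, 1:], A[1:, 0], A[1:, 1:] = x, -x, W
    return A

x, y = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(np.linalg.det(A_xy(x, y)), (x @ y) ** 2)
```

The last assertion is the Pfaffian-square lemma for $A_{\bf xy}$; taking ${\bf x}\cdot{\bf y}=0$ makes the determinant vanish, which is the decomposability criterion of the next lemma.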
\begin{lemma}\label{lem:wedge} $A_{\bf xy}$ is a wedge--product if and only if $\det{A_{\bf xy}}=0$. \end{lemma} \begin{proof} For the sufficiency, first assume that ${\bf y}\neq{\bf 0}$. Choose vectors ${\bf u},{\bf v}$ such that ${\bf v}\times{\bf u}={\bf y}$. Since ${\bf x}\cdot{\bf y}=0$ (lemma \ref{lem:skewdet}), we may write ${\bf x}=u_0{\bf v}-v_0{\bf u}$ for some scalars $u_0,v_0$. If ${\bf y}={\bf 0}$, then take $u=(1,{\bf 0})$ and $v=(0,{\bf x})$. \end{proof} In general, the vector space of two--forms in $n$ dimensions can be identified with the space of $n\times n$ antisymmetric matrices. In dimensions $n\geq 4$, not every two--form is {\em decomposable;} i.e., the wedge of two one--forms. Lemma \ref{lem:wedge} gives a simple test for decomposability in dimension $n=4$. \section{Lorentz algebra element decomposition} Let $g$ be a Lorentz inner product on ${\mathbb R}^4$; i.e., $g$ is symmetric, nondegenerate, and $\det{g}<0$. Recall that the {\bf Lorentz algebra} $so(g)$ is the collection of all linear transformations $L$ on ${\mathbb R}^4$ for which $L^Tg+gL=0$; or equivalently, $L^T=-gLg^{-1}$. We will refer to an element of $so(g)$ as a {\bf bivector}. \subsection{Algebra invariants and properties} \begin{lemma}\label{lem:antisym} $Ag\in so(g)$ if and only if $A$ is antisymmetric. \end{lemma} \begin{proof} Since $g$ is symmetric, $(Ag)^Tg+g(Ag)=g(A^T+A)g$, which is zero if and only if $A^T=-A$. \end{proof} Thus any Lorentz bivector $L$ can be written in the form $L=Ag$ for some antisymmetric matrix $A$. Although this identification gives us a vector space isomorphism between $so(g)$ and the space of all antisymmetric $4\times 4$ matrices, the identification is not compatible with their respective Lie algebra structures; i.e., does not give a Lie algebra isomorphism. \begin{lemma}\label{lem:Ltr13} For any $L\in so(g)$, ${\rm tr}L=0={\rm tr}_3L$. 
\end{lemma} \begin{proof} ${\rm tr}_kL={\rm tr}_kL^T={\rm tr}_k(-gLg^{-1})={\rm tr}_k(-L)=(-1)^k{\rm tr}_kL$. \end{proof} \begin{lemma}\label{lem:detL} $\det{L}\leq 0$ for all $L\in so(g)$. \end{lemma} \begin{proof} Write $L=Ag$. Then $\det{L}=\det{A}\,\det{g}\leq 0$, by lemma \ref{lem:skewdet}. \end{proof} By the comments in section \ref{sec:traces}, the characteristic equation for a Lorentz bivector $L$ is \begin{equation}\label{eq:charL} \lambda^4+({\rm tr}_2L)\lambda^2+\det{L}=0. \end{equation} The squared--roots (that is, the roots of $x^2+({\rm tr}_2L)x+\det{L}=0$) are \begin{equation}\label{eq:eigenL} \mu_\pm=\tfrac{1}{2}\bigl(-{\rm tr}_2L \pm\sqrt{{\rm tr}_2^2L-4\det{L}}\bigr) \end{equation} Since $\det{L}\leq 0$, $\mu_\pm$ are real numbers. Note that they are solutions to the equations $\mu_++\mu_-=-{\rm tr}_2L$ and $\mu_+\mu_-=\det{L}$, and that $\mu_+\geq 0$ and $\mu_-\leq 0$. Moreover, $\det{L}\neq 0$ if and only if $\mu_+>0$ and $\mu_-<0$. \begin{lemma}\label{lem:4} $L^4+({\rm tr}_2L)L^2+(\det{L})I=0$. \end{lemma} \begin{proof} This follows from equation \eqref{eq:charL} and the Cayley--Hamilton theorem. \end{proof} The determinant and second order trace of a Lorentz bivector may be computed from the traces of its powers. \begin{lemma}\label{lem:pow} ${\rm tr}_2L=-\tfrac{1}{2}{\rm tr}L^2$ and $\det{L}=\tfrac{1}{8}({\rm tr}^2L^2-2\,{\rm tr}L^4)$. \end{lemma} \begin{proof} Lemma \ref{lem:Ltr13} and equation \eqref{eq:2tr} imply the first identity. For the second, lemma \ref{lem:4} implies ${\rm tr}L^4+{\rm tr}_2L\,{\rm tr}L^2+4\det{L}=0$. \end{proof} \subsection{Simple bivectors} Given two four--vectors $u,v$, we may construct the {\bf simple Lorentz bivector} $u\wedge^g v$, which is the $4\times 4$ matrix \begin{equation}\label{eq:bivector} u\wedge^g v\doteq uv^Tg-vu^Tg \end{equation} In covariant coordinate notation, $(u\wedge^gv)_\alpha^\beta=u^\beta v_\alpha-v^\beta u_\alpha$. 
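The invariants just derived can be verified on a randomly generated bivector. This NumPy sketch (variable names ours) checks the sign of the determinant, the formulas of lemma pow, the reduced Cayley--Hamilton relation of lemma 4, and the reality and signs of the squared roots $\mu_\pm$.

```python
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # any metric with det g < 0 would do
A = rng.standard_normal((4, 4))
L = (A - A.T) @ g                    # L = Ag with A antisymmetric (lemma antisym)

# Lemma pow: tr_2 L and det L from traces of powers.
tr2L = -0.5 * np.trace(L @ L)
detL = (np.trace(L @ L) ** 2 - 2.0 * np.trace(L @ L @ L @ L)) / 8.0
assert np.isclose(detL, np.linalg.det(L))
assert detL <= 1e-9                  # det L <= 0 for any Lorentz bivector

# Lemma 4: L^4 + (tr_2 L) L^2 + (det L) I = 0.
L2 = L @ L
assert np.allclose(L2 @ L2 + tr2L * L2 + detL * np.eye(4), 0.0, atol=1e-8)

# The squared roots are real, with mu_+ >= 0 >= mu_-.
disc = np.sqrt(tr2L ** 2 - 4.0 * detL)
mu_p, mu_m = 0.5 * (-tr2L + disc), 0.5 * (-tr2L - disc)
assert mu_p >= 0.0 >= mu_m
```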
Observe that $(u\wedge^g v)=(u\wedge v)g$, whence lemma \ref{lem:antisym} implies that $u\wedge^g v\in so(g)$. \begin{lemma}\label{lem:simple} $L\in so(g)$ is simple if and only if $\det{L}=0$. \end{lemma} \begin{proof} Write $L=Ag$, with $A$ antisymmetric. Since $\det{L}=\det{A}\det{g}$, the lemma follows from lemma \ref{lem:wedge}. \end{proof} \begin{lemma}\label{lem:3} $L\in so(g)$ is simple if and only if $L^3+({\rm tr}_2L)L=0$. Moreover if $L=u\wedge^gv$, then ${\rm tr}_2L=(u^Tgu)(v^Tgv)-(u^Tgv)^2$. \end{lemma} \begin{proof} For $L=u\wedge^gv$, one computes directly from \eqref{eq:bivector} that ($\ast$) ${\rm tr}L^2=2\gamma$ and $L^3=\gamma L$, where $\gamma$ is the scalar $(u^Tgv)^2-(u^Tgu)(v^Tgv)$. Lemma \ref{lem:pow} and ($\ast$) imply that ${\rm tr}_2L=-\gamma$. For the converse, $L^3+({\rm tr}_2L)L=0$ implies that $L^4+({\rm tr}_2L)L^2=0$. Lemma \ref{lem:pow} then yields $\det{L}=-\tfrac{1}{4}({\rm tr}L^4+{\rm tr}_2L\,{\rm tr}L^2)=0$, and lemma \ref{lem:simple} applies. \end{proof} The four--vectors $u,v$ are parallel if and only if $u\wedge^gv=0$. However, even if $u,v$ are linearly independent (so that $u\wedge^gv\neq 0$), it is possible that the metric is no longer nondegenerate when restricted to the plane spanned by $u,v$. We will call such a two--plane {\bf degenerate}. The next lemma (combined with the previous lemma) shows that degenerate two--planes are synonymous with {\em null two--planes (two--flats)} in the sense of \cite{Synge}. \begin{lemma}\label{lem:degen} $u,v$ span a nondegenerate two--plane if and only if ${\rm tr}_2u\wedge^gv\neq 0$. \end{lemma} \begin{proof} The plane is degenerate precisely when some nonzero $w=\alpha u+\beta v$ satisfies $w^Tgu=0=w^Tgv$; that is, when the homogeneous linear system $\alpha(u^Tgu)+\beta(v^Tgu)=0$, $\alpha(u^Tgv)+\beta(v^Tgv)=0$ has a nontrivial solution. This occurs exactly when its determinant $(u^Tgu)(v^Tgv)-(u^Tgv)^2={\rm tr}_2u\wedge^gv$ vanishes. \end{proof} The following lemma implies that almost all simple Lorentz bivectors are determined up to constant multiple by their squares. 
\begin{lemma}\label{lem:2plane} Suppose $L=u\wedge^gv$ is such that ${\rm tr}_2L\neq 0$. Then $P_L\doteq-L^2/{\rm tr}_2L$ is orthogonal projection (with respect to $g$) onto the nondegenerate two--plane spanned by $u,v$; that is, $P_L^2=P_L$, ${\rm tr}P_L=2$, and $(gP_L)^T=gP_L$. Conversely, if $P$ is orthogonal projection onto a two--plane, the two--plane is necessarily nondegenerate, and we have $P=P_L$, where $L=u\wedge^gv$ and $u,v$ are any linearly independent four--vectors in the image of $P$. \end{lemma} \begin{proof} The first statement follows from lemmas \ref{lem:3}, \ref{lem:pow}, and \ref{lem:degen}. For the second statement, let ${\mathcal P}$ denote the two--plane forming the image of $P$. Suppose that $w\in{\mathcal P}$ is such that $w^Tgy=0$ for all $y\in{\mathcal P}$. Thus for any four--vector $x$, $w^Tgx=(Pw)^Tgx=w^TgPx=0$; hence $w=0$, since $g$ is nondegenerate. Thus ${\mathcal P}$ is nondegenerate. The remainder of the lemma follows from the uniqueness of orthogonal projection. Indeed, given a four--vector $x$, for any $y\in{\mathcal P}$ we have $(Px)^Tgy=x^TgPy=x^Tgy$. Since $g$ is nondegenerate on $\mathcal P$, the value of $Px$ is thus uniquely determined by $x$ and $g$. \end{proof} \subsection{Orthogonal sum of simple bivectors} Although an element $L\in so(g)$ will not be a simple bivector in general, we may write it as a sum of simple bivectors. Such a sum is not unique; however, we will do so in a canonical way using the operators \begin{equation}\label{eq:proj} P_\pm\doteq\pm\frac{L^2-\mu_\mp I}{\mu_+-\mu_-} \end{equation} Here $\mu_\pm$ are defined as in equation \eqref{eq:eigenL}. For $P_\pm$ to be well--defined, $L$ cannot be a simple bivector; that is, $\det{L}\neq 0$, since by the comments following equation \eqref{eq:eigenL}, this guarantees that $\mu_+-\mu_-\neq 0$. 
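Before turning to the decomposition theorem, lemmas 3 and 2plane can be checked numerically. The following NumPy sketch (our own test harness, not part of the text) builds a generic simple bivector $u\wedge^gv$ and verifies its characteristic identity and the associated projection operator.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(2)
u, v = rng.standard_normal(4), rng.standard_normal(4)

# Simple bivector u ^g v = u v^T g - v u^T g.
L = np.outer(u, g @ v) - np.outer(v, g @ u)
tr2L = (u @ g @ u) * (v @ g @ v) - (u @ g @ v) ** 2   # lemma 3
assert np.isclose(tr2L, -0.5 * np.trace(L @ L))
assert np.isclose(np.linalg.det(L), 0.0, atol=1e-8)    # simple => det L = 0
assert np.allclose(L @ L @ L + tr2L * L, 0.0, atol=1e-8)

# Lemma 2plane: P_L = -L^2 / tr_2 L projects g-orthogonally onto span{u, v}.
P = -L @ L / tr2L
assert np.allclose(P @ P, P)                           # idempotent
assert np.isclose(np.trace(P), 2.0)                    # rank two
assert np.allclose((g @ P).T, g @ P)                   # g-self-adjoint
assert np.allclose(P @ u, u) and np.allclose(P @ v, v) # fixes the plane
```

A generic pair $u,v$ spans a nondegenerate plane, so ${\rm tr}_2L\neq 0$ and the division defining $P$ is safe.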
\begin{theorem}\label{thm:decomp} If $L\in so(g)$ is such that $\det{L}\neq 0$, then (i) $P_++P_-=I$, (ii) $P_+P_-=0=P_-P_+$, (iii) $P_\pm^2=P_\pm$, and (iv) $P_\pm L=LP_\pm$. Moreover, $P_\pm L$ are simple Lorentz bivectors with $L=P_+L+P_-L$, $(P_+L)(P_-L)=0=(P_-L)(P_+L)$, and ${\rm tr}_2P_\pm L=-\mu_\pm$. \end{theorem} \begin{proof} Set $N\doteq\mu_+-\mu_-$. Then $L^4+({\rm tr}_2L)L^2+(\det{L})I=(L^2-\mu_+I)(L^2-\mu_-I)=-N^2P_+P_-$. Thus $P_+P_-=0$ by lemma \ref{lem:4}. Properties (i), (iii), (iv) also follow from the definitions of $P_\pm$, coupled with lemma \ref{lem:4} and the remarks after equation \eqref{eq:eigenL}. The remaining two algebraic statements follow from (i), (ii), and (iv), so it suffices to establish that $P_\pm L$ are simple Lorentz bivectors with the second order trace as stated. First, note that $(L^3)^T=(L^T)^3=(-gLg^{-1})^3=-gL^3g^{-1}$. It then follows from equation \eqref{eq:proj} that $(P_\pm L)^T=-g(P_\pm L)g^{-1}$; hence $P_\pm L\in so(g)$. Second, $(P_\pm L)^3=P_\pm L^3=\pm(L^5-\mu_\mp L^3)/N$. Using lemma \ref{lem:4}, we have $L^5=-({\rm tr}_2L)L^3-(\det{L})L$. Therefore, $(P_\pm L)^3=\pm\mu_\pm(L^3-\mu_\mp L)/N=\mu_\pm P_\pm L$. From lemma \ref{lem:3}, $P_\pm L$ is a simple bivector with ${\rm tr}_2P_\pm L=-\mu_\pm$. \end{proof} \begin{corollary}\label{cor:bivector} Any nonsimple Lorentz bivector $L$ may be written as the sum of two simple Lorentz bivectors: $L=L_++L_-$ with $L_+L_-=0=L_-L_+$, where $$L_\pm=\pm\frac{L^3-\mu_\mp L}{\mu_+-\mu_-}$$ and ${\rm tr}_2L_\pm=-\mu_\pm$, with $\mu_\pm$ defined as in equation \eqref{eq:eigenL}.\qed \end{corollary} We shall refer to the decomposition $L=L_++L_-$ in the corollary as the {\bf orthogonal decomposition} of $L$. 
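The corollary translates directly into code. This NumPy sketch (variable names ours) decomposes a random nonsimple bivector and verifies that the summands are simple, mutually annihilating, and sum to $L$.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
L = (A - A.T) @ g                       # generic, hence nonsimple, bivector

tr2L = -0.5 * np.trace(L @ L)
detL = np.linalg.det(L)
assert abs(detL) > 1e-9                 # nonsimple: det L != 0
disc = np.sqrt(tr2L ** 2 - 4.0 * detL)
mu_p, mu_m = 0.5 * (-tr2L + disc), 0.5 * (-tr2L - disc)

# L_pm = +/- (L^3 - mu_mp L) / (mu_+ - mu_-)   (corollary)
L3 = L @ L @ L
Lp = (L3 - mu_m * L) / (mu_p - mu_m)
Lm = -(L3 - mu_p * L) / (mu_p - mu_m)

assert np.allclose(Lp + Lm, L)                       # orthogonal decomposition
assert np.allclose(Lp @ Lm, 0.0, atol=1e-8)          # mutual annihilation
assert np.allclose(Lm @ Lp, 0.0, atol=1e-8)
assert np.isclose(np.linalg.det(Lp), 0.0, atol=1e-7) # each summand is simple
assert np.isclose(np.linalg.det(Lm), 0.0, atol=1e-7)
```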
We remark that since $\mu_+>0$ and $\mu_-<0$ for a nonsimple Lorentz bivector, lemma \ref{lem:3} implies that the two--plane associated to $L_+$ (as in lemma \ref{lem:2plane}) is {\em time--like} (intersects the null--cone) while that of $L_-$ is {\em space--like} (does not intersect the null--cone) in Synge's classification of two--planes given in \cite{Synge}. \subsection{Special case: Minkowski metric} In the case when $g=\eta\doteq{\rm diag}(-1,1,1,1)$ is the Minkowski metric, we may make use of the explicit parametrization of antisymmetric matrices given in equation \eqref{eq:skew}. That is, any element of $so(\eta)$ can be written in the form \begin{equation}\label{eq:minkowski} L_{\bf xy} =\begin{pmatrix} 0 & {\bf x}^T\\ {\bf x} & W_{\bf y} \end{pmatrix} \end{equation} for some ${\bf x}, {\bf y}\in{\mathbb R}^3$; i.e., $L_{\bf xy}=A_{\bf xy}\eta$. Note that $\det{L_{\bf xy}}=-({\bf x}\cdot{\bf y})^2$, so that $L_{\bf xy}$ is simple if and only if ${\bf x}\cdot{\bf y}=0$. One computes, using lemma \ref{lem:pow}, that ${\rm tr}_2L_{\bf xy}=|{\bf y}|^2-|{\bf x}|^2$. \begin{theorem} If ${\bf x}\cdot{\bf y}\neq 0$, then $L_{\bf xy}=L_{{\bf a}_+{\bf b}_+}+L_{{\bf a}_-{\bf b}_-}$ is the orthogonal decomposition of $L_{\bf xy}$: $L_{{\bf a}_+{\bf b}_+}L_{{\bf a}_-{\bf b}_-}=0=L_{{\bf a}_-{\bf b}_-}L_{{\bf a}_+{\bf b}_+}$, where $${\bf a}_\pm=\pm\frac{\mu_\pm{\bf x}+({\bf x}\cdot{\bf y}){\bf y}} {\mu_+-\mu_-} \quad\text{and}\quad {\bf b}_\pm=\pm\frac{\mu_\pm{\bf y}-({\bf x}\cdot{\bf y}){\bf x}} {\mu_+-\mu_-} $$ and $\mu_\pm=\tfrac{1}{2}\bigl(|{\bf x}|^2-|{\bf y}|^2\pm\sqrt{(|{\bf x}|^2-|{\bf y}|^2)^2+4({\bf x}\cdot{\bf y})^2}\bigr)$. \end{theorem} \begin{proof} Using equation \eqref{eq:minkowski}, compute $L_{\bf xy}^3=L_{{\bf x}'{\bf y}'}$, where ${\bf x}'=(|{\bf x}|^2-|{\bf y}|^2){\bf x}+({\bf x}\cdot{\bf y}){\bf y}$, and ${\bf y}'=-({\bf x}\cdot{\bf y}){\bf x}+(|{\bf x}|^2-|{\bf y}|^2){\bf y}$. Now use corollary \ref{cor:bivector}. 
\end{proof} One may show that ${\bf a}_+\cdot{\bf a}_-=0$, ${\bf b}_+\cdot{\bf b}_-=0$, and ${\bf a}_\pm\times{\bf b}_\mp=0$. Necessarily, ${\bf a}_\pm\cdot{\bf b}_\pm=0$, since $L_{{\bf a}_+{\bf b}_+}$ and $L_{{\bf a}_-{\bf b}_-}$ are simple. \subsubsection{Relation with the Hodge dual} We may also interpret orthogonal decomposition in terms of Hodge duals. Write the Lorentz bivector $L$ as the two--form $L=\tfrac{1}{2}L_\beta^\alpha e^\beta\wedge e_\alpha$, where $e_\alpha$ ($\alpha=0,1,2,3$) is a basis of ${\mathbb R}^4$ and $e^\beta=\eta^{\alpha\beta}e_\alpha$ is the dual basis (the summation convention is used). Observe that we may do this as $L\eta^{-1}=L\eta$ is antisymmetric. The Hodge dual (with respect to $\eta$) of $A_{\mathbf x\mathbf y}$ is $A_{(-{\mathbf y}){\mathbf x}}$, so that $\ast L_{{\mathbf x}{\mathbf y}}=L_{(-{\mathbf y}){\mathbf x}}$. Moreover, one computes that $({\mathbf x}\cdot{\mathbf y}){\mathbf a}_\pm=\mu_\pm{\mathbf b}_\mp$ and $({\mathbf x}\cdot{\mathbf y}){\mathbf b}_\pm=-\mu_\pm{\mathbf a}_\mp$. It then follows that $$*L_{{\mathbf a}_+{\mathbf b}_+} =\frac{\mu_+}{{\mathbf x}\cdot{\mathbf y}}\, L_{{\mathbf a}_-{\mathbf b}_-} \quad\text{and}\quad *L_{{\mathbf a}_-{\mathbf b}_-} =-\frac{{\mathbf x}\cdot{\mathbf y}}{\mu_+}\, L_{{\mathbf a}_+{\mathbf b}_+}. $$ \subsubsection{Relation with the spin group}\label{sssec:spin} In addition, we describe the orthogonal decomposition in terms of the double covering group homomorphism $\Psi:{\it SL_2}{\mathbb C}\rightarrow{\it SO^+}(\eta)$. 
Since we are using the mathematician's choice of metric signature, we use $\rho(u)\doteq\bigl(\begin{smallmatrix}u_1+iu_2 & u_0+u_3\\ u_0-u_3 & u_1-iu_2\end{smallmatrix}\bigr)$ as our linear embedding of ${\mathbb R}^4$ into the algebra of $2\times 2$ complex matrices, along with the involution $\bigl(\begin{smallmatrix}A & B\\ C & D\end{smallmatrix}\bigr)^\star\doteq\bigl(\begin{smallmatrix}\bar{D} & \bar{B}\\ \bar{C} & \bar{A}\end{smallmatrix}\bigr)$, so that $\rho(u)^\star=\rho(u)$ and $\det\rho(u)=u^T\eta u$. The map $\Psi$ sends the $2\times 2$ complex matrix $M$ with unit determinant to the Lorentz transformation $u\mapsto\rho^{-1}(M\rho(u)M^\star)$. On the Lie algebra level, the map $\psi:{\it sl_2}{\mathbb C}\rightarrow{\it so}(\eta)$ corresponding to $\Psi$ sends the traceless complex matrix $m=\bigl(\begin{smallmatrix} a & b\\ c & -a\end{smallmatrix}\bigr)$ to the Lorentz bivector $u\mapsto\rho^{-1}(m\rho(u)+\rho(u)m^\star)$. One computes that $\psi(m)=L_{{\mathbf x}{\mathbf y}}$, where ${\mathbf x}=\bigl({\it Re}(b+c),{\it Im}(b-c),2{\it Re}(a)\bigr)$ and ${\mathbf y}=\bigl({\it Im}(b+c),-{\it Re}(b-c),2{\it Im}(a)\bigr)$. Thus ${\mathbf x}\cdot{\mathbf y}=2{\it Im}(a^2+bc)$ and $|{\mathbf y}|^2-|{\mathbf x}|^2=-4{\it Re}(a^2+bc)$. In particular, $\psi(m)$ is simple if and only if $a^2+bc$ is real. While it is possible to work out $\psi^{-1}(L_{{\mathbf a}_+{\mathbf b}_+})$, the resulting expression is not particularly nice. However, one computes that $\psi(im)=\ast L_{{\mathbf x}{\mathbf y}}$, so that by the previous discussion, if $\psi(m_+)=L_{{\mathbf a}_+{\mathbf b}_+}$, then $\psi(i\theta m_+)=L_{{\mathbf a}_-{\mathbf b}_-}$ for some real $\theta$. To get back to the Lie group level, we only need to exponentiate: if $\psi(m)=L$, then $\Psi(\exp(m))=\exp(L)$. We will return to this briefly at the end of section \ref{ssec:factordecomp}. 
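The explicit Minkowski formulas for ${\bf a}_\pm,{\bf b}_\pm$ can be checked numerically. The sketch below (helper \verb|L_xy| is our name for the matrix of equation \eqref{eq:minkowski}) verifies the decomposition and the orthogonality relations just noted.

```python
import numpy as np

def L_xy(x, y):
    """The Minkowski bivector with first row (0, x^T), first column (0, x)^T,
    and lower-right block W_y (so W_y v = y x v)."""
    W = np.array([[0.0, -y[2], y[1]],
                  [y[2], 0.0, -y[0]],
                  [-y[1], y[0], 0.0]])
    M = np.zeros((4, 4))
    M[0, 1:], M[1:, 0], M[1:, 1:] = x, x, W
    return M

rng = np.random.default_rng(5)
x, y = rng.standard_normal(3), rng.standard_normal(3)
xy = x @ y
assert abs(xy) > 1e-9                        # nonsimple case of the theorem

d = np.hypot(x @ x - y @ y, 2.0 * xy)        # sqrt of the discriminant
mu_p = 0.5 * (x @ x - y @ y + d)
mu_m = 0.5 * (x @ x - y @ y - d)
a_p = (mu_p * x + xy * y) / (mu_p - mu_m)
b_p = (mu_p * y - xy * x) / (mu_p - mu_m)
a_m = -(mu_m * x + xy * y) / (mu_p - mu_m)
b_m = -(mu_m * y - xy * x) / (mu_p - mu_m)

assert np.allclose(L_xy(a_p, b_p) + L_xy(a_m, b_m), L_xy(x, y))
assert np.allclose(L_xy(a_p, b_p) @ L_xy(a_m, b_m), 0.0, atol=1e-8)
assert np.isclose(a_p @ b_p, 0.0, atol=1e-9)   # each summand is simple
assert np.isclose(a_m @ b_m, 0.0, atol=1e-9)
```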
\section{Exponential on $so(g)$} Recall that the exponential of a square matrix $M$ is defined as $\exp(M)=\sum_{n=0}^\infty\frac{1}{n!}M^n$. The series always converges, and the exponential of a Lorentz bivector lies in the identity component of the Lie group of all Lorentz transformations. \begin{theorem}\label{thm:expsimple} If $L$ is a simple Lorentz bivector, then $\exp(L)=I+d_1L+d_2L^2$, where \begin{align*} &d_1=\frac{\sinh\sqrt{-{\rm tr}_2L}}{\sqrt{-{\rm tr}_2L}} &\text{and} & &d_2=\frac{1-\cosh\sqrt{-{\rm tr}_2L}}{{\rm tr}_2L} & &\text{if ${\rm tr}_2L<0$,}\\ &d_1=\frac{\sin\sqrt{{\rm tr}_2L}}{\sqrt{{\rm tr}_2L}} &\text{and} & &d_2=\frac{1-\cos\sqrt{{\rm tr}_2L}}{{\rm tr}_2L} & &\text{if ${\rm tr}_2L>0$,} \end{align*} and if ${\rm tr}_2L=0$, then $d_1=1$ and $d_2=\tfrac{1}{2}$. \end{theorem} \begin{proof} Lemma \ref{lem:3} implies that $L^{2k+1}=(-1)^k({\rm tr}_2L)^kL$ for all $k\geq 0$, which is used to sum the exponential series. \end{proof} This formula appears in \cite{Geyer} with $-{\rm tr}_2L$ replaced by the symbol $\alpha$; however, in that reference, $\alpha$ is not identified with a matrix invariant, the coefficient of $L^2$ is incorrect, and the limiting case when ${\rm tr}_2L=0$ is not handled explicitly. For nonsimple Lorentz bivectors, one may sum the exponential series using the identities $L^{2k+1}=\mu_+^kL_++\mu_-^kL_-$ and $L^{2k+2}=\mu_+^kL_+^2+\mu_-^kL_-^2$ for all $k\geq 0$, which one verifies by induction in conjunction with lemma \ref{lem:4}. The resulting formula, which is cubic in $L$, appears in \cite{Geyer} and in \cite{Coll}, although the formula in the latter takes a different form than that in the former; both references use somewhat different methods than ours, and in particular, make use of algebra over the complex numbers. Alternatively, we may make use of corollary \ref{cor:bivector} to obtain the exponential of a nonsimple Lorentz algebra element. 
Since $L_+,L_-$ trivially commute, we have $\exp(L)=\exp(L_+)\exp(L_-)$, and each factor can be computed from theorem \ref{thm:expsimple} to obtain the following. \begin{theorem}\label{thm:exp} If $L\in so(g)$ is nonsimple and $L=L_++L_-$ is the orthogonal decomposition of $L$ as in corollary \ref{cor:bivector}, then $$\exp(L)=I+d_1^+L_++d_1^-L_-+d_2^+L_+^2+d_2^-L_-^2$$ where \begin{align*} d_1^+ &=\sinh\sqrt{\mu_+}/\sqrt{\mu_+} & d_1^- &=\sin\sqrt{-\mu_-}/\sqrt{-\mu_-}\\ d_2^+ &=(\cosh\sqrt{\mu_+}-1)/\mu_+ & d_2^- &=(\cos\sqrt{-\mu_-}-1)/\mu_-\qed \end{align*} \end{theorem} Note that if we write $L_\pm$ in terms of $L$ (according to corollary \ref{cor:bivector}), we obtain a polynomial of degree six in $L$. However, the degree can be reduced to three by lemma \ref{lem:4}; the resulting expression is necessarily the same as that obtained in \cite{Coll} and \cite{Geyer}. \section{Lorentz transformation factorization} The {\bf Lorentz group} $O(g)$ is the set of all linear transformations $\Lambda$ on ${\mathbb R}^4$ that preserve the metric: $\Lambda^Tg\Lambda=g$. Necessarily $\det\Lambda=\pm 1$, and $\Lambda$ is said to be {\bf proper} if $\det\Lambda=1$. It is well--known that the group of all proper Lorentz transformations has two connected components; the Lorentz transformations within the identity component are said to be {\bf orthochronous}. In the case when $g=\eta$ is the Minkowski metric, the orthochronous transformations are characterized by the component $\Lambda_0^0$ being positive. The set of all proper orthochronous Lorentz group elements will be denoted by $SO^+(g)$. {\em A note on proofs.} In the remainder, all proofs will use the following abbreviations without warning: $\tau_k\doteq{\rm tr}_k\Lambda$, $l_k\doteq{\rm tr}_kL$, $x\doteq\sqrt{-l_2}$ when $l_2<0$, $y\doteq\sqrt{l_2}$ when $l_2>0$, $c_+\doteq\cosh{x}$, $c_-\doteq\cos{y}$, $s_+\doteq\sinh{x}/x$, and $s_-\doteq\sin{y}/y$. 
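Before proceeding to factorization, the closed form of theorem \ref{thm:exp} is easy to test against a direct series summation of the matrix exponential. A NumPy sketch (helper \verb|expm_series| and the test data are ours):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential by direct summation of the defining series."""
    out, term = np.eye(4), np.eye(4)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

g = np.diag([-1.0, 1.0, 1.0, 1.0])
rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
L = 0.5 * (A - A.T) @ g              # generic nonsimple bivector, modest norm

tr2L, detL = -0.5 * np.trace(L @ L), np.linalg.det(L)
assert detL < -1e-12                 # nonsimple: det L < 0
disc = np.sqrt(tr2L ** 2 - 4.0 * detL)
mu_p, mu_m = 0.5 * (-tr2L + disc), 0.5 * (-tr2L - disc)
L3 = L @ L @ L
Lp = (L3 - mu_m * L) / (mu_p - mu_m)
Lm = -(L3 - mu_p * L) / (mu_p - mu_m)

# Coefficients of the theorem: hyperbolic for L_+, trigonometric for L_-.
sp, sm = np.sqrt(mu_p), np.sqrt(-mu_m)
d1p, d2p = np.sinh(sp) / sp, (np.cosh(sp) - 1.0) / mu_p
d1m, d2m = np.sin(sm) / sm, (np.cos(sm) - 1.0) / mu_m
expL = np.eye(4) + d1p * Lp + d1m * Lm + d2p * Lp @ Lp + d2m * Lm @ Lm
assert np.allclose(expL, expm_series(L), atol=1e-8)
```

The result is a proper orthochronous Lorentz transformation; one can additionally confirm that \verb|expL.T @ g @ expL| reproduces \verb|g| to rounding error.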
Note that since $\Lambda^{-1}=g^{-1}\Lambda^Tg$, we have ${\rm tr}_k\Lambda^{-1}=\tau_k={\rm tr}_k\Lambda$. We remind the reader that ${\rm tr}_2\Lambda=\tfrac{1}{2}({\rm tr}^2\Lambda-{\rm tr}\Lambda^2)$, from equation \eqref{eq:2tr}. \subsection{Some Lorentz transformation relations} \begin{lemma}\label{lem:trace13} For all $\Lambda\in SO^+(g)$, ${\rm tr}_3\Lambda={\rm tr}\Lambda$. Consequently, $\Lambda$ satisfies the relation $\Lambda^4-({\rm tr}\Lambda)\Lambda^3+({\rm tr}_2\Lambda)\Lambda^2-({\rm tr}\Lambda)\Lambda+I=0$. \end{lemma} \begin{proof} The characteristic polynomials of $\Lambda$ and $\Lambda^{-1}$ are the same, namely $\lambda^4-\tau_1\lambda^3+\tau_2\lambda^2-\tau_3\lambda+1$. By the Cayley--Hamilton theorem, ($\ast$) $\Lambda^4-\tau_1\Lambda^3+\tau_2\Lambda^2-\tau_3\Lambda+I=0$ and ($\ast\ast$) $\Lambda^{-4}-\tau_1\Lambda^{-3}+\tau_2\Lambda^{-2}-\tau_3\Lambda^{-1}+I=0$. Multiplying ($\ast\ast$) by $\Lambda^4$ and comparing with ($\ast$), we get $\tau_3=\tau_1$. \end{proof} \begin{lemma}\label{lem:spow} If $\Lambda\in SO^+(g)$, then $\Lambda^k+\Lambda^{-k}=A_kI+B_k(\Lambda+\Lambda^{-1})$, where $A_2=-{\rm tr}_2\Lambda$, $B_2={\rm tr}\Lambda$, $A_3=(2-{\rm tr}_2\Lambda){\rm tr}\,\Lambda$, $B_3={\rm tr}^2\Lambda-{\rm tr}_2\Lambda-1$, $A_4=(2-{\rm tr}_2\Lambda)\,{\rm tr}^2\Lambda+{\rm tr}_2^2\Lambda-2$, and $B_4=({\rm tr}^2\Lambda-2\,{\rm tr}_2\Lambda)\,{\rm tr}\Lambda$ \end{lemma} \begin{proof} From lemma \ref{lem:trace13}, we have $\Lambda^2-\tau_1\Lambda+\tau_2I-\tau_1\Lambda^{-1}+\Lambda^{-2}=0$; so that $\Lambda^2+\Lambda^{-2}=-\tau_2I+\tau_1(\Lambda+\Lambda^{-1})$. Moreover, $\Lambda^3-\tau_1\Lambda^2+\tau_2\Lambda-\tau_1I+\Lambda^{-1}=0$ and $\Lambda^{-3}-\tau_1\Lambda^{-2}+\tau_2\Lambda^{-1}-\tau_1I+\Lambda=0$; thus $\Lambda^3+\Lambda^{-3}=\tau_1(\Lambda^2+\Lambda^{-2})-(\tau_2+1)(\Lambda+\Lambda^{-1})+2\tau_1I=\tau_1(2-\tau_2)I+(\tau_1B_2-\tau_2-1)(\Lambda+\Lambda^{-1})$. 
Similarly, we have $\Lambda^4+\Lambda^{-4}=\tau_1(\Lambda^3+\Lambda^{-3})-\tau_2(\Lambda^2+\Lambda^{-2})+\tau_1(\Lambda+\Lambda^{-1})-2I=(\tau_1A_3-\tau_2A_2-2)I+(\tau_1B_3-\tau_2B_2+\tau_1)(\Lambda+\Lambda^{-1})$. \end{proof} \subsection{Simple Lorentz transformations} We will say that a Lorentz transformation is {\bf simple} if it is the exponential of a simple Lorentz bivector. Our first goal is to give an algebraic criterion for simplicity. We will make use of the fact that the orthochronous Lorentz group is exponential (see \cite{Nishikawa}); that is, $\exp:so(g)\rightarrow SO^+(g)$ is surjective. \begin{lemma}\label{lem:lorentztraces} For nonsimple $\Lambda\in SO^+(g)$, there is a nonsimple Lorentz bivector $L$ with orthogonal decomposition $L=L_++L_-$ such that $\Lambda=\exp(L)$ and ${\rm tr}_2L_\pm=-\mu_\pm$, where $\sqrt{\mu_+}=\cosh^{-1}\tfrac{1}{4}({\rm tr}\Lambda+\sqrt\Delta)$, $\sqrt{-\mu_-}=\cos^{-1}\tfrac{1}{4}({\rm tr}\Lambda-\sqrt\Delta)$, and $\Delta={\rm tr}^2\Lambda-4{\rm tr}_2\Lambda+8$. Moreover, we have ${\rm tr}\Lambda=2(c_++c_-)$ and ${\rm tr}_2\Lambda=4c_+c_-+2$, where $c_+\doteq\cosh\sqrt{\mu_+}$, $c_-\doteq\cos\sqrt{-\mu_-}$. \end{lemma} \begin{proof} Since $\Lambda$ is not simple, $\Lambda=\exp(L)$ for some nonsimple Lorentz bivector $L$, and theorem \ref{thm:exp} applies. Moreover, using lemma \ref{lem:3} one computes \begin{equation}\label{eq:2} \Lambda^2 =I+2d_1^+(1+\mu_+d_2^+)L_++2d_1^-(1+\mu_-d_2^-)L_- +A_+L_+^2+A_-L_-^2 \end{equation} where $A_\pm\doteq(2d_2^\pm+d_1^{\pm 2}+\mu_\pm d_2^{\pm 2})=2(c_\pm^2-1)/\mu_\pm$. Computing the traces in theorem \ref{thm:exp} and equation \eqref{eq:2}, we obtain ${\rm tr}\Lambda=4+d_2^+{\rm tr}L_+^2+d_2^-{\rm tr}L_-^2$ and ${\rm tr}\Lambda^2=4+A_+{\rm tr}L_+^2+A_-{\rm tr}L_-^2$. Using ${\rm tr}L_\pm^2=-2{\rm tr}_2L_\pm=2\mu_\pm$, one then computes that ($\star$) ${\rm tr}\Lambda=2(c_++c_-)$ and ${\rm tr}\Lambda^2=4(c_+^2+c_-^2-1)$. 
Consequently, $(\star\star$) ${\rm tr}_2\Lambda=\tfrac{1}{2}({\rm tr}^2\Lambda-{\rm tr}\Lambda^2)=4c_+c_-+2$. Equations ($\star$) and ($\star\star$) can then be solved for $c_\pm$ to obtain the formulas in the statement of the lemma. \end{proof} \begin{lemma}\label{lem:simplecriterion} $\Lambda$ is simple if and only if ${\rm tr}_2\Lambda=2{\rm tr}\Lambda-2$ and ${\rm tr}\Lambda\geq 0$. \end{lemma} \begin{proof} If $\Lambda$ is simple, then $\Lambda=\exp(L)$ for some simple Lorentz bivector $L$. From theorem \ref{thm:expsimple} and lemma \ref{lem:pow}, ($\ast$) $\tau_1=4-2d_2l_2$. Moreover, we have $\Lambda^2=I+2d_1L+(2d_2+d_1^2)L^2+2d_1d_2L^3+d_2L^4$, and one computes $\tau_2=\tfrac{1}{2}(\tau_1^2-{\rm tr}\Lambda^2)=6+(d_1^2-6d_2)l_2+d_2^2l_2^2$ using lemma \ref{lem:pow} and $\det{L}=0$. Thus ($\ast\ast$) $\tau_2-2\tau_1+2=(d_1^2-2d_2)l_2+d_2^2l_2^2$. By theorem \ref{thm:expsimple}, if $l_2<0$, then $d_2l_2=1-\cosh{x}$ and $d_1^2l_2=-\sinh^2{x}=1-\cosh^2{x}$. Thus ($\ast$) yields $\tau_1=2(1+\cosh{x})\geq 4$, and ($\ast\ast$) yields $\tau_2-2\tau_1+2=0$. Similarly, if $l_2>0$, then $\tau_1=2(1+\cos{y})\geq 0$ and $\tau_2-2\tau_1+2=0$; and if $l_2=0$, then $\tau_1=4$ and $\tau_2-2\tau_1+2=0$. Now suppose that $\Lambda$ is not simple, and write $\Lambda=\exp(L)$ as in lemma \ref{lem:lorentztraces}. Thus $\tau_2-2\tau_1+2=-4(c_+-1)(1-c_-)$, which is only zero when $c_+=1$ or $c_-=1$. As $\Lambda$ is nonsimple, $\mu_+>0$ and $\mu_-<0$; whence $c_+>1$, and $c_-=1$ only if $\sqrt{-\mu_-}$ is a nonzero multiple of $2\pi$. However in the latter case, theorem \ref{thm:exp} implies that $\exp(L)=\exp(L_+)$, which cannot be the case, since $\Lambda$ is assumed to be nonsimple. \end{proof} We note that the proof of lemma \ref{lem:simplecriterion} implies that if $\Lambda$ is nonsimple, then $c_+>1$ and $-1\leq c_-<1$. In particular, ${\rm tr}\Lambda>0$. \subsection{Simple logarithm} \begin{theorem}\label{thm:simplelog} Suppose $\Lambda\in SO^+(g)$ is simple. 
For ${\rm tr}\Lambda>0$, let us define $L_\Lambda=\tfrac{1}{2}k(\Lambda-\Lambda^{-1})$, where \begin{description} \item{(i)} $\displaystyle k=\frac{\sqrt{\mu}}{\sinh\sqrt{\mu}}$ and $\sqrt{\mu}=\cosh^{-1}(\tfrac{1}{2}{\rm tr}\Lambda-1)$, if ${\rm tr}\Lambda>4$ \item{(ii)} $\displaystyle k=\frac{\sqrt{-\mu}}{\sin\sqrt{-\mu}}$ and $\sqrt{-\mu}=\cos^{-1}(\tfrac{1}{2}{\rm tr}\Lambda-1)$, if $0<{\rm tr}\Lambda<4$ \item{(iii)} $k=1$ and $\mu=0$, if ${\rm tr}\Lambda=4$ \end{description} Then $L_\Lambda$ is a simple Lorentz bivector with $\exp(L_\Lambda)=\Lambda$ and ${\rm tr}_2L_\Lambda=-\mu$. For ${\rm tr}\Lambda=0$, $P_\Lambda=\tfrac{1}{2}(I-\Lambda)$ is orthogonal projection onto a nondegenerate two--plane such that $-\pi^2P_\Lambda$ is the square of a simple Lorentz bivector $L$ with $\exp{L}=\Lambda$ and ${\rm tr}_2L=\pi^2$. \end{theorem} Thus if ${\rm tr}\Lambda\neq 0$, then $L_\Lambda$ may be taken as the logarithm of $\Lambda$: $L_\Lambda=\log\Lambda$. In the case ${\rm tr}\Lambda=0$, we may reconstruct $\log\Lambda$ from the two independent four--vectors in the image of $P_\Lambda$; it should be noted that in this case, $\Lambda$ is necessarily an involution: $\Lambda^2=I$. In general, the logarithm is not unique; see \cite{Shaw}. In particular, if $L$ is simple with ${\rm tr}_2L=(2n\pi)^2$ for some nonzero integer $n$, then $\Lambda\doteq\exp(L)=I$ by theorem \ref{thm:expsimple}. In this case, the above theorem gives $L_\Lambda=0$. A version of the formula for $\log\Lambda$ in the case ${\rm tr}\Lambda\neq 0$ that involves algebra over the complex numbers appears in \cite{Coll}. \begin{proof} Write $\Lambda=\exp(L)$ for some simple $L$. By theorem \ref{thm:expsimple}, we have ($\star$) $d_1L=\tfrac{1}{2}(\Lambda-\Lambda^{-1})$. As in the proof of lemma \ref{lem:simplecriterion}, ($\star\star$) $\tau_1=4-2d_2l_2$. 
From theorem \ref{thm:expsimple}, we see that $d_1=0$ only if $l_2>0$ and $\sqrt{l_2}$ is a nonzero multiple of $\pi$; by the periodicity of the sine and cosine, we may assume that $l_2=\pi^2$. In this case, $d_2=2/\pi^2$ and by ($\star\star$), $\tau_1=0$. Conversely, if $\tau_1=0$, then ($\star\star$) implies $d_2l_2=2$, which only happens if $l_2>0$ and $\sqrt{l_2}$ is an odd multiple of $\pi$. Thus if $\tau_1=0$, then $\Lambda=I+(2/\pi^2)L^2=I-2P_L$, with $P_L$ as in lemma \ref{lem:2plane}. Thus, $P_L=P_\Lambda$. If $\tau_1\neq 0$, then $d_1\neq 0$; and from ($\star$) and theorem \ref{thm:expsimple}, we only need to deduce the value of $l_2$ in terms of $\Lambda$ in order to obtain the formulas (i)---(iii). If we assume that $l_2<0$, then theorem \ref{thm:expsimple} states that $d_2l_2=1-\cosh{x}$; and so ($\star\star$) can be solved to yield $\cosh{x}=\tfrac{1}{2}\tau_1-1$. Note that this equation has a solution for $x>0$ if and only if $\tau_1>4$. The cases when $l_2>0$ and $l_2=0$ are handled similarly. \end{proof} \subsection{Decomposition into simple factors}\label{ssec:factordecomp} If $\Lambda$ is nonsimple, then $L=\log\Lambda$ is a nonsimple bivector, and we have the orthogonal decomposition $L=L_++L_-$ into simple mutually annihilating summands. Thus $\Lambda=\exp(L_+)\exp(L_-)$ is a product of commuting simple factors. We give explicit formulas. \begin{theorem}\label{thm:lorentzproj} If $\Lambda\in SO^+(g)$ is nonsimple, then $\Lambda=\exp(L)$ where $L$ is a nonsimple Lorentz bivector with projection operators $$P_\pm=\pm\frac{\tfrac{1}{2}(\Lambda+\Lambda^{-1})-c_\mp I}{c_+-c_-}$$ where $c_\pm$ are as in lemma \ref{lem:lorentztraces}. 
\end{theorem} \begin{proof} From theorem \ref{thm:exp}, we have ($\ast$) $\Lambda^{-1}=\exp(-L)=I-d_1^+L_+-d_1^-L_-+d_2^+L_+^2+d_2^-L_-^2$, and we obtain the equation $\tfrac{1}{2}(\Lambda+\Lambda^{-1})=I+d_2^+L_+^2+d_2^-L_-^2$, which we rewrite as ($\circ$) $d_2^+L_+^2+d_2^-L_-^2=-I+\tfrac{1}{2}(\Lambda+\Lambda^{-1})$. In addition, from equation \eqref{eq:2}, we have $\tfrac{1}{2}(\Lambda^2+\Lambda^{-2})=I+A_+L_+^2+A_-L_-^2$. However, by lemma \ref{lem:spow}, this equation may be rewritten as ($\circ\circ$) $A_+L_+^2+A_-L_-^2=-\tfrac{1}{2}(\tau_2+2)I+\tfrac{1}{2}\tau_1(\Lambda+\Lambda^{-1})$. Equations ($\circ$) and ($\circ\circ$) form a linear system in $L_\pm^2$, whose determinant is computed to be $2(c_+-1)(1-c_-)(c_+-c_-)/\mu_+\mu_-$, which is never zero. After inverting the system ($\circ$), ($\circ\circ$), one computes $$L^2=L_+^2+L_-^2 = \frac{1}{c_+-c_-}\left\{(\mu_-c_+-\mu_+c_-)I +\tfrac{1}{2}(\mu_+-\mu_-)(\Lambda+\Lambda^{-1})\right\} $$ Equation \eqref{eq:proj} then yields the desired formulas for $P_\pm$. \end{proof} \begin{theorem} Suppose $\Lambda\in SO^+(g)$ is nonsimple. Let $c_\pm$ and $\mu_\pm$ be defined as in lemma \ref{lem:lorentztraces}. If $\mu_-\neq -\pi^2$, then $\Lambda=\exp(L)$ with $L$ a nonsimple Lorentz bivector whose orthogonal decomposition $L=L_++L_-$ is given by $$L_\pm=\mp\frac{1}{(c_+-c_-)s_\pm} \left\{\tfrac{1}{2}c_\mp(\Lambda-\Lambda^{-1}) -\tfrac{1}{4}(\Lambda^2-\Lambda^{-2})\right\} $$ where $s_+\doteq\sinh\sqrt{\mu_+}/\sqrt{\mu_+}$ and $s_-\doteq\sin\sqrt{-\mu_-}/\sqrt{-\mu_-}$. Moreover, ${\rm tr}_2L_\pm=-\mu_\pm$. \end{theorem} \begin{proof} From equation \eqref{eq:2} and equation ($\ast$) in the proof of the previous theorem, we obtain the equations $\tfrac{1}{2}(\Lambda-\Lambda^{-1})=d_1^+L_++d_1^-L_-$ and $\tfrac{1}{4}(\Lambda^2-\Lambda^{-2})=d_1^+(1+\mu_+d_2^+)L_++d_1^-(1+\mu_-d_2^-)L_-$. These define a linear system for $L_\pm$. 
The determinant is $-(c_+-c_-)s_+s_-$, which is zero only when $\sqrt{-\mu_-}$ is a nonzero multiple of $\pi$. Inverting the linear system yields the formulas in the statement of the theorem. \end{proof} \begin{theorem} For nonsimple $\Lambda\in SO^+(g)$, $\Lambda=\Lambda_+\Lambda_-$, where $\Lambda_\pm$ are the commuting simple Lorentz transformations given by $$\Lambda_\pm=\pm\frac{1}{2(c_+-c_-)} \left\{(1+2c_\pm)I-\Lambda^{-1}-(1+2c_\mp)\Lambda+\Lambda^2\right\} $$ where $c_\pm$ are as in lemma \ref{lem:lorentztraces}. \end{theorem} \begin{proof} Let $L$ be a nonsimple Lorentz bivector such that $\Lambda=\exp(L)$, and $P_\pm$ as in theorem \ref{thm:lorentzproj}. Observe that $\exp(P_\pm L)=P_\mp+P_\pm\Lambda$. Indeed, $(P_\pm L)^n=P_\pm^nL^n=P_\pm L^n$ for all $n>0$, since $P_\pm$ is a projection that commutes with $L$. Thus, $\exp(P_\pm L)=I+\sum_{n>0}\frac{1}{n!}(P_\pm L)^n=I+P_\pm(\sum_{n>0}\frac{1}{n!}L^n)=I+P_\pm(\exp(L)-I)$. The formulas for $\Lambda_\pm$ then follow from theorem \ref{thm:lorentzproj}. \end{proof} As we noted in section \ref{sssec:spin}, in the special case when $g=\eta$ is the Minkowski metric, we may write $L_\pm$ in terms of the Lie algebra homomorphism $\psi:{\it sl_2}{\mathbb C}\rightarrow{\it so}(\eta)$. Namely, $\psi(m_+)=L_+$ and $\psi(i\theta m_+)=L_-$ for some traceless $2\times 2$ complex matrix $m_+=\bigl(\begin{smallmatrix} a & b\\ c & -a\end{smallmatrix}\bigr)$ such that $a^2+bc$ is real. It follows that $M_+\doteq\exp(m_+)$ and $M_-\doteq\exp(i\theta m_+)$ give (commuting) matrices in ${\it SL_2}{\mathbb C}$ with $\Psi(M_\pm)=\Lambda_\pm$; i.e., $M_\pm$ gives the decomposition of $\Psi(M_+M_-)=\Lambda$ into simple factors. Observe that $\exp(m_+)$ is readily computed, since $m_+^2=(a^2+bc)I$. We may view the decomposition of a nonsimple Lorentz transformation into commuting simple factors as a generalization of Synge's {\it 4--screw:} a product of a boost and a rotation in orthogonal two--planes. 
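As a consistency check of theorem \ref{thm:lorentzproj} in the special case when $g=\eta$ and $\Lambda$ is a 4--screw already in block--diagonal form, write $\Lambda=B(x)\oplus R(\theta)$, where $B(x)$ is a boost of rapidity $x>0$ and $R(\theta)$ is a rotation by an angle $0<\theta<\pi$ in the orthogonal two--plane. Then $B(x)+B(x)^{-1}=2\cosh{x}\,I_2$ and $R(\theta)+R(\theta)^{-1}=2\cos{\theta}\,I_2$, so that $$\tfrac{1}{2}(\Lambda+\Lambda^{-1})=\cosh{x}\,I_2\oplus\cos{\theta}\,I_2$$ Moreover ${\rm tr}\Lambda=2(\cosh{x}+\cos{\theta})$, in agreement with lemma \ref{lem:lorentztraces}, so that $c_+=\cosh{x}$ and $c_-=\cos{\theta}$; the formulas of theorem \ref{thm:lorentzproj} then give $$P_+=I_2\oplus 0 \qquad P_-=0\oplus I_2$$ the orthogonal projections onto the boost and rotation two--planes, as expected.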
While it is known that every proper nonsimple Lorentz transformation can be expressed as the product of two commuting simple Lorentz transformations (see \cite{Schremp}), the formulas presented here appear to be original and are, in any case, independent of the specific form of the Lorentz metric.
https://arxiv.org/abs/1803.01290
Second homotopy and invariant geometry of flag manifolds
We use the Hopf fibration to explicitly compute generators of the second homotopy group of the flag manifolds of a compact Lie group. We show that these $2$-spheres have nice geometrical properties such as being totally geodesic surfaces with respect to any invariant metric on the flag manifold. We characterize when the generators with the same invariant geometry are in the same homotopy class. This is done by exploring the action of the Weyl group on the irreducible components of the isotropy representation of the flag manifold.
\section*{Introduction} We consider (generalized) flag manifolds of a compact Lie group $U$, that is, homogeneous manifolds $\mathbb{F}_\Theta=U/U_\Theta$, where the isotropy $U_\Theta$ is a connected subgroup with maximal rank. These omnipresent manifolds have been studied from several points of view: symplectic geometry, algebraic geometry, Riemannian geometry and combinatorics (see for instance \cite{besse}, \cite{biron}, \cite{GGSM}, \cite{lonardo-jordan}). Moreover, several geometric objects on flag manifolds can be described in a very explicit way in terms of Lie theory and representation theory. In this work, we describe explicitly topological phenomena on flag manifolds via invariant geometry and geometry of root systems. One of our motivations is the work of Dur\'an-Mendoza-Rigas \cite{duran-rigas}, where the authors give a geometric description of the Blakers-Massey element, the generator of the homotopy group $\pi_6(S^3)$, by considering the principal bundle $S^3 \cdots {\rm Sp}(2) \to S^7$ and explicitly describing the boundary homomorphism $\pi_7(S^7) \stackrel{\partial}{\to} \pi_6(S^3)$. In the first part of the paper (Section \ref{boundary-sec}) we consider the second homotopy group of the flag manifold $\mathbb{F}_\Theta$. It is well known that it is generated by 2-spheres given by the Schubert cells associated to reflections of the {\em simple} roots outside the simple roots of the isotropy. In our first result, we consider the principal bundle $U_\Theta \cdots U \to \mathbb F_\Theta$ and explicitly describe the boundary homomorphism $\pi_2(\mathbb F_\Theta) \stackrel{\partial}{\to} \pi_1(U_\Theta)$ on special 2-spheres $\sigma^\vee_\alpha: S^2 \to \mathbb F_\Theta$ associated to {\em all} the roots outside the roots of the isotropy. As an application we describe precisely the homotopy class of some 2-spheres by using the combinatorics of the root system. 
Regarding differential geometry, in \cite{twistor} Burstall-Rawnsley showed that the generators of $\pi_2(\mathbb F_\Theta)$ are totally geodesic with respect to a specific metric (the so-called normal metric); in particular, each generator is a harmonic map with respect to that metric. In the second part of this paper (Section \ref{equiharmonic-spheres}) we explore the properties of our generators $\sigma^\vee_\alpha$ in terms of harmonic map theory and, in our second result, we prove that these generators are totally geodesic with respect to {\em any} invariant metric on $\mathbb F_\Theta$. It follows that $\pi_2(\mathbb F_\Theta)$ is generated by equiharmonic maps, that is, harmonic maps with respect to any invariant metric on $\mathbb F_\Theta$. A very important ingredient in the study of the geometry of a flag manifold is its isotropy representation and the corresponding isotropy components. These allow us to describe invariant tensors on the flag manifold (e.g.\ invariant metrics, almost complex structures, symplectic forms and so on). In the third part of the paper (Sections \ref{theta-rigid} and \ref{actionW}) we explore the relation between the invariant geometry of the generators $\sigma^\vee_\alpha$ and their homotopy classes. This is done after a careful analysis of the action of the Weyl group on the irreducible components of the isotropy representation, which is purely a problem about the geometry of the root system. In particular we introduce the concept of $\Theta$-rigid roots to answer, in our third result, the following question: when are two {\em isometric} generators of $\pi_2$, say $\sigma^\vee_\alpha$ and $\sigma^\vee_\beta$, in the same {\em homotopy class}? 
We feel that there are interesting examples of the interplay between invariant geometry and topology to be explored when the invariant geometry and the homotopy classes of these spheres do not coincide; according to our result, this happens for example in the partial flag manifolds of type $B$, $C$, $F$ or $G$ (see Example \ref{exemplog2} for type $G$). We finish the introduction with two remarks on the generality of our setup. In the first place, we avoided taking the compact connected group $U$ to be semisimple and simply connected from the start. This makes the setup more flexible to produce examples and easily adaptable to the context of maximal rank homogeneous spaces $U/L$, that is, where the isotropy $L$ has maximal rank but can be disconnected. Let $L_0$ be the connected component of the identity of $L$; then $U/L_0$ is a flag manifold of $U$ and the fibration $U/L_0 \to U/L$ is the universal covering of $U/L$, which thus induces isomorphisms in the higher homotopy groups. Thus, the results of Section 2 carry over: the 2-spheres $\sigma^\vee_\alpha$ project to generators of $\pi_2(U/L)$, whose images can be 2-spheres or projective planes. This can be of use to study the topology of {\em real flag manifolds} of maximal rank. For example, the real flag manifold of maximal rank ${\rm SO}(4)/{\rm S}({\rm O}(2) \times {\rm O}(2))$ is the Grassmannian of planes of $\mathbb{R}^4$ and its universal cover is ${\rm SO}(4)/({\rm SO}(2) \times {\rm SO}(2))$, the oriented Grassmannian of planes of $\mathbb{R}^4$, the maximal flag manifold of the non-simply connected group ${\rm SO}(4)$. In the second place, we prove the results on the action of the Weyl group on the isotropy components in the more general context of {\em nonreduced} root systems. Again, this may be of future interest for the study of the invariant geometry of real flag manifolds (see \cite{mauro-luiz}). 
\section{Preliminaries} We recall some results and constructions on the fundamental group of compact Lie groups, since we will use similar arguments in what follows. See Helgason \cite{helgason} or Hilgert and Neeb \cite{neeb} for details and proofs. Let a compact connected Lie group $U$ have Lie algebra $\u$ and exponential map $\exp: \u \to U$. Let $T \subset U$ be a maximal torus with Lie algebra $\t$ and {\em lattice} given by $$ \Gamma = \{ \gamma \in \t: \, \exp( 2\pi \gamma ) = 1 \} $$ which is canonically isomorphic to $\pi_1(T)$ by the map $\Gamma \to \pi_1(T)$ that takes $\gamma \in \Gamma$ to the based homotopy class of the loop $S^1 \to T$, $e^{\theta {\rm \bf i}}\mapsto \exp(\theta \gamma)$. Composing with the map induced by the inclusion $T \subset U$ gives a surjective map $\Gamma \to \pi_1(U)$ whose kernel is given by the following construction (see Theorem 12.4.14 p.494 of \cite{neeb}). \subsubsection*{Coroot vectors} We have that $\u$ is the compact real form of the complex reductive Lie algebra $\mathfrak{g} = \u^\mathbb{C}$. The adjoint representation of the Cartan subalgebra $\mathfrak{h} = \t^\mathbb{C}$ splits as the root space decomposition $ \mathfrak{g} = \mathfrak{h} \oplus \sum_{\alpha \in \Pi} \mathfrak{g}_\alpha $ with root space $$ \mathfrak{g}_\alpha = \{ X \in \mathfrak{g}:\, \mathrm{ad}(H) X = \alpha(H) X, \, \forall H \in \mathfrak{h} \} $$ where $\Pi \subset \mathfrak{h}^*$ is the root system. We have that $\dim \mathfrak{g}_\alpha = 1$ and that each root $\alpha$ is imaginary valued on $\t$, so that $\alpha \in {\rm \bf i} \t^*$. To each root $\alpha$ there corresponds the unique {\em coroot vector} $H^\vee_\alpha \in \mathfrak{h}$ such that $$ H^\vee_\alpha \in [\mathfrak{g}_\alpha, \, \mathfrak{g}_{-\alpha}]\quad\text{and}\quad \alpha(H^\vee_\alpha) = 2 $$ Now, let $X \mapsto \ov{X}$ denote conjugation in $\mathfrak{g}$ w.r.t.\ $\u$; it is an automorphism of $\mathfrak{g}$ as a real Lie algebra. 
We have that $\ov{\mathfrak{h}} = \mathfrak{h}$ and, for a root $\alpha$, $\ov{\alpha(H)} = -\alpha( \ov{H} )$ for $H \in \mathfrak{h}$, so that $\ov{\mathfrak{g}_\alpha} = \mathfrak{g}_{-\alpha}$. For $X_{\alpha} \in \mathfrak{g}_{\alpha}$, we have that both $$ A_\alpha = \frac{1}{2}(X_\alpha - \ov{X_\alpha}) \qquad S_\alpha = \frac{1}{2{\rm \bf i}}(X_\alpha + \ov{X_\alpha}) $$ belong to $\u$ and are such that $[H, A_\alpha] = {\rm \bf i} \alpha(H) S_\alpha$ and $[H, S_\alpha] = -{\rm \bf i} \alpha(H) A_\alpha$, for $H \in \t$. Consider the real root space $$ \u_\alpha = {\rm span}_\mathbb{R} \{ A_\alpha, S_{\alpha} \} $$ Let $\Pi^+$ be a choice of positive roots; then $\u$ splits as the real root space decomposition $$ \u = \t \oplus \sum_{\alpha \in \Pi^+} \u_\alpha $$ Choosing $X_{\alpha} \in \mathfrak{g}_\alpha$ such that $[X_\alpha, \ov{X_\alpha}] = H^\vee_\alpha$, we have that ${\rm \bf i} H^\vee_\alpha \in \t$ and that $[S_\alpha, A_\alpha] = {\rm \bf i} H^\vee_\alpha/ 2$. Furthermore $[{\rm \bf i} H^\vee_\alpha, A_\alpha] = - 2 S_\alpha$ and $[{\rm \bf i} H^\vee_\alpha, S_\alpha] = 2 A_\alpha$ so that $$ \u(\alpha) = {\rm span}_\mathbb{R} \{ S_\alpha, \, {\rm \bf i} H^\vee_\alpha, \, A_{\alpha} \} $$ is a subalgebra of $\u$ isomorphic to $\mathfrak{su}(2)$ by the explicit isomorphism $$ \frac{1}{2{\rm \bf i}} \begin{pmatrix} 0 & \phantom{-}1 \\ 1 & \phantom{-}0 \end{pmatrix} \mapsto S_\alpha \qquad \begin{pmatrix} {\rm \bf i} & \phantom{-}0 \\ 0 & -{\rm \bf i} \end{pmatrix} \mapsto {\rm \bf i} H^\vee_\alpha \qquad \frac{1}{2} \begin{pmatrix} \phantom{-}0 & \phantom{-}1 \\ -1 & \phantom{-}0 \end{pmatrix} \mapsto A_\alpha $$ Let $U(\alpha)$ be the connected subgroup of $U$ with Lie algebra $\u(\alpha)$. Since $\u(\alpha)$ is semisimple, it follows that $U(\alpha)$ is compact. This isomorphism $ \psi: \mathfrak{su}(2) \to \u(\alpha)$ can be uniquely integrated to a group epimorphism $ \Psi: SU(2) \to U(\alpha)$, since $SU(2)$ is simply connected. 
It maps $$ \Psi \begin{pmatrix} e^{\theta {\rm \bf i}}& \\ & e^{-\theta {\rm \bf i}}\\ \end{pmatrix} = \exp( \theta {\rm \bf i} H^\vee_\alpha) $$ so that $\exp(2 \pi {\rm \bf i} H^\vee_\alpha) = 1$ and thus ${\rm \bf i} H^\vee_\alpha \in \Gamma$. The $\mathbb{Z}$-span of the coroot vectors ${\rm \bf i} H^\vee_\alpha$, $\alpha \in \Pi$, gives the {\em coroot group} $\Gamma^\vee$, a subgroup of $\Gamma$. The map $\Gamma \to \pi_1(U)$ is then an epimorphism with kernel $\Gamma^\vee$ so that it induces a natural isomorphism \begin{equation} \label{eq:isompi1} \Gamma / \Gamma^\vee \to \pi_1(U) \end{equation} This can be interpreted geometrically as: the loops in $T$ generate the loops in $U$, but the loops in $U$ coming from a coroot ${\rm \bf i} H^\vee_\alpha$ shrink to the identity since they come from the ``equator'' of the simply connected $SU(2)$, which is then immersed in $U(\alpha)$. \subsubsection*{Dual roots and Weyl group} Choose an $\mathrm{Ad}(U)$-invariant inner product $\prod{\cdot,\cdot}$ on $\u$ and extend it to a Hermitian product on $\mathfrak{g}$ for which $\u$ and ${\rm \bf i} \u$ are orthogonal. Restricting it to $\mathfrak{h}$ gives a linear isomorphism $\mathfrak{h}^* \to \mathfrak{h}$ that associates to each functional $\phi \in \mathfrak{h}^*$ a vector $H_\phi \in \mathfrak{h}$ such that $\phi(H) = \prod{H_\phi, H}$ for all $H \in \mathfrak{h}$. This isomorphism furnishes $\mathfrak{h}^*$ with a Hermitian product, and its restriction to the real subspace spanned by the roots is an inner product. 
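\begin{example} To illustrate the isomorphism (\ref{eq:isompi1}), take $\u = \mathfrak{su}(2)$ with its single pair of roots $\pm\alpha$, so that $\Gamma^\vee = \mathbb{Z}\, {\rm \bf i} H^\vee_\alpha$. For $U = SU(2)$ we have $\Gamma = \Gamma^\vee$, recovering $\pi_1(SU(2)) = 0$. For $U = SO(3) = SU(2)/\{ \pm 1 \}$, we have $\exp(2\pi\gamma) = 1$ in $SO(3)$ if and only if $\exp(2\pi\gamma) = \pm 1$ in $SU(2)$, so that the lattice doubles: $\Gamma = \mathbb{Z}\, \tfrac{1}{2}{\rm \bf i} H^\vee_\alpha$, while $\Gamma^\vee = \mathbb{Z}\, {\rm \bf i} H^\vee_\alpha$; hence $\Gamma/\Gamma^\vee \simeq \mathbb{Z}_2 = \pi_1(SO(3))$. \end{example}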
To a root $\alpha$, since $\alpha \in {\rm \bf i} \t^*$, there corresponds an $H_\alpha \in {\rm \bf i} \t$ that is proportional to the coroot vector $H^\vee_\alpha$, with proportionality given by $$ H^\vee_\alpha = \frac{2 H_\alpha}{\prod{\alpha, \alpha}} $$ The dual root system $\Pi^\vee$ of $\Pi$ is given by the dual roots $$ \alpha^\vee = \frac{2 \alpha}{\prod{\alpha, \alpha}} $$ It follows that, under this isomorphism, the coroot vectors correspond to dual roots, in other words, $H^\vee_\alpha = H_{\alpha^\vee}$. The reflection $r_\alpha$ of $\t$ around the hyperplane $\alpha = 0$ is given by $r_\alpha(H) = H - \alpha(H) H^\vee_\alpha$. Let $W = N(T)/T$ be the Weyl group of $U$. Fix a choice $\Sigma$ of simple roots. We view an element $w \in W$ either as an element of $N(T)$ acting by the adjoint action on $\t$, or as an element of the isometry subgroup of $\t$ generated by the simple reflections $r_\alpha$, $\alpha \in \Sigma$. Let $H \in \t$ and $u \in U$ be such that $\mathrm{Ad}(u) H \in \t$; then $\mathrm{Ad}(u)H = w H$ for some $w \in W$ (see Proposition VII.2.2 p.285 of \cite{helgason}). We have that $W$ also acts on the roots $\Pi$ by the coadjoint action $w^*\alpha(H)=\alpha(w^{-1}H)$. \section{Boundary homomorphism and $\pi_2$} \label{boundary-sec} In this section we use the boundary homomorphism to investigate the second homotopy group of flag manifolds. \subsection{Hopf fibration} \label{sec:hopf} Consider the Hopf fibration $U(1) \cdots SU(2) \stackrel{h\,}{\to} \mathbb{C} P^1$ over the Riemann sphere $\mathbb{C} P^1$, given by $$ h \begin{pmatrix} z & -\ov{w} \\ w & \phantom{-}\ov{z} \end{pmatrix} = z/w \qquad \text{with fiber} \qquad U(1) = \left\{ \begin{pmatrix} \lambda & \\ & \ov{\lambda} \end{pmatrix} :\, \begin{array}{r} \lambda \in \mathbb{C} \\ |\lambda|=1 \end{array} \right\} $$ over the basepoint $\infty =z/0$, where $z, w \in \mathbb{C}$ are such that $|z|^2 + |w|^2 = 1$. 
We explicitly compute the boundary isomorphism of the homotopy exact sequence \begin{equation} \label{eq:seqexatahopf} 0= \pi_2(SU(2)) \to \pi_2(\mathbb{C} P^1) \stackrel{\partial}{\to} \pi_1(U(1)) \to \pi_1(SU(2)) = 0 \end{equation} Consider the continuous map $$ f: D \subset \mathbb{C} \to \mathbb{C} P^1 \qquad z = r e^{\theta {\rm \bf i}} \mapsto \tan(\pi r/2) e^{\theta {\rm \bf i}} $$ from the disk $D$ of radius $r \leq 1$, $\theta \in [0,2\pi]$, to the Riemann sphere. It maps the origin to the origin, the disk interior $r < 1$ homeomorphically onto the sphere minus $\infty$, and the whole boundary $r = 1$ onto the basepoint $\infty$. It follows that it induces the generator of $\pi_2(\mathbb{C} P^1)$. Consider a continuous lift of $f$ to $SU(2)$ given by \begin{equation} F: D \to SU(2) \qquad z = r e^{\theta {\rm \bf i}} \mapsto \left( \begin{array}{ll} \sin(\pi r/2) e^{\theta {\rm \bf i}}& -\cos(\pi r/2) \\ \cos(\pi r/2) & \phantom{-}\sin(\pi r/2) e^{-\theta {\rm \bf i}} \end{array} \right) \end{equation} where, clearly, $h \circ F = f$. At the boundary $r = 1$ of $D$ we have $$ F|_{\partial D}:\, e^{\theta {\rm \bf i}} \mapsto \begin{pmatrix} e^{\theta {\rm \bf i}}& \\ & e^{-\theta {\rm \bf i}} \end{pmatrix} $$ Thus, in homotopy we have $$ \partial(1) = \partial( [f] ) = [F|_{\partial D}] = \left[ e^{\theta {\rm \bf i}}\mapsto \begin{psmallmatrix} e^{\theta {\rm \bf i}}& \\ & e^{-\theta {\rm \bf i}} \end{psmallmatrix} \right] = 1 $$ where the leftmost $1$ is the generator of $\pi_2(\mathbb{C} P^1)$ on the base and the rightmost $1$ is the generator of $\pi_1(U(1))$ on the fiber. Consider now the quotient Hopf fibration $ U(1)/\{ \pm 1 \} \cdots SU(2)/\{ \pm 1 \} \stackrel{h\,}{\to} \mathbb{C} P^1 $ given by the same map. 
It is actually the frame bundle fibration $$ SO(2) \cdots SO(3) \stackrel{g}{\to} S^2 $$ over the Euclidean unit sphere $S^2 \subset \mathbb{R}^3$, given by $$ g(*) = \text{last column of }* \quad\quad \text{with fiber} \quad\quad \begin{pmatrix} SO(2) & \\ & 1 \\ \end{pmatrix} $$ over the basepoint $e_3 = (0,0,1)$, where $SO(3)$ acts canonically on $\mathbb{R}^3$ and we identify the fiber with $SO(2)$. In fact, let $\mathbb{R}^3$ be the imaginary quaternions spanned by ${\rm \bf i}, \j, \k$, so that $e_3 = \k$. Since $SU(2)$ can be identified with the unit quaternions $q = z + \j w$, its Lie algebra is then this $\mathbb{R}^3$. The 2-covering epimorphism $SU(2) \to SO(3)$ is then the adjoint representation $\mathrm{Ad}$, the action of the unit quaternions on $\mathbb{R}^3$ by conjugation, and has kernel $\{\pm 1\}$. Note that $h(q) = z/w$ and that $g( \mathrm{Ad}(q) ) = q \k q^{-1}$. The following diagram commutes \begin{equation} \label{eq:SU2-SO3} \begin{array}{ccccc} U(1) & \cdots & SU(2) & \stackrel{h}{\to} & \mathbb{C} P^1 \\ \text{\tiny 1:2} \downarrow \phantom{\mathrm{Ad}} & & \text{\tiny 1:2} \downarrow \mathrm{Ad} & & \text{\tiny 1:1} \downarrow \phi \\ SO(2) & \cdots & SO(3) & \stackrel{g}{\to} & S^2 \end{array} \end{equation} where $\phi$ is the diffeomorphism defined by $\phi( h(q) ) = q \k q^{-1}$, $q$ a unit quaternion in $SU(2)$. Quotienting the groups in the first row of the diagram above we get that the frame bundle fibration is isomorphic to the quotient Hopf fibration. Computing its boundary $\ov{\partial}: \pi_2(S^2) \to \pi_1(SO(2))$ we have, by naturality, that the following diagram commutes $$ \begin{array}{ccc} \pi_2(\mathbb{C} P^1) & \stackrel{\partial}{\to} & \pi_1(U(1)) \\ \text{\tiny 1:1} \downarrow \phi & & \text{\tiny 1:2} \downarrow \wt{\pi} \\ \pi_2(S^2) & \stackrel{\ov{\partial}}{\to} & \pi_1(SO(2)) \end{array} $$ where we denoted the induced maps in homotopy again by their names. 
It follows that $$ \ov{\partial}(1) = 2 $$ where the leftmost $1$ is the generator of $\pi_2(S^2)$ on the base and the rightmost $2$ is twice the generator of $\pi_1(SO(2))$ on the fiber. From now on we topologically identify $$ \mathbb{C} P^1 = S^2 $$ \begin{remark} \label{remark:quat} Consider the 1 by 1 unitary symplectic group ${\rm Sp}(1)$ of unit quaternions, isomorphic to $SU(2)$, the 2 by 2 unitary symplectic group of quaternionic matrices $$ {\rm Sp}(2) = \left\{ \begin{pmatrix} p & r \\ q & s \end{pmatrix}:\, p, q, r, s \in \H, \quad \begin{array}{ll} |p|^2 + |q|^2 = 1, & |r|^2 + |s|^2 = 1 \\ \ov{p}r + \ov{q}s = 0, & p\ov{q} + r\ov{s} = 0 \end{array} \right\} $$ and the Hopf fibration ${\rm Sp}(1) \times {\rm Sp}(1) \cdots {\rm Sp}(2) \stackrel{h}{\to} \H P^1$ over the quaternionic projective line $\H P^1 = S^4$, given by $$ h \begin{pmatrix} p & r \\ q & s \end{pmatrix} = p/q \qquad \text{with fiber} \qquad {\rm Sp}(1) \times {\rm Sp}(1) = \left\{ \begin{pmatrix} \lambda & \\ & \mu \end{pmatrix} \right\} $$ over the basepoint $\infty =p/0$, where $\lambda, \mu \in \H$ are such that $|\lambda| = |\mu| = 1$. The same argument as before can be used to compute the boundary map $\pi_4(\H P^1) \stackrel{\partial}{\to} \pi_3({\rm Sp}(1) \times {\rm Sp}(1))$. Indeed, consider the continuous map $$ f: D \subset \H \to \H P^1 \qquad q \mapsto \tan(\pi |q|/2) \frac{q}{|q|} $$ from the ball $D$ of radius $\leq 1$. It maps the origin to the origin, the ball interior $|q| < 1$ homeomorphically onto the sphere minus $\infty$, and the whole boundary $|q| = 1$ onto the basepoint $\infty$. It follows that it induces the generator of $\pi_4(\H P^1)$. 
Consider a continuous lift of $f$ to ${\rm Sp}(2)$ given by \begin{equation} F: D \to {\rm Sp}(2) \qquad q \mapsto \left( \begin{array}{ll} \displaystyle \sin(\pi |q|/2)\frac{q}{|q|} & -\cos(\pi |q|/2) \\ \cos(\pi |q|/2) & \displaystyle \phantom{-}\sin(\pi |q|/2) \frac{\ov{q}}{|q|} \end{array} \right) \end{equation} where, clearly, $h \circ F = f$. Thus, in homotopy we have $$ \partial(1) = \partial( [f] ) = [F|_{\partial D}] = \left[ q \mapsto \begin{psmallmatrix} q & \\ & \ov{q} \end{psmallmatrix} \right] = (1, -1) $$ where $q$ stands for unit quaternions $|q|=1$, the leftmost $1$ is the generator of $\pi_4(\H P^1)$ on the base and the rightmost $1$ is the generator of $\pi_3( {\rm Sp}(1) ) = \mathbb{Z}$ on each factor of the fiber. We also have the quotient Hopf fibration $$ ({\rm Sp}(1) \times {\rm Sp}(1))/\{ \pm 1 \} \cdots {\rm Sp}(2)/\{ \pm 1 \} \stackrel{h\,}{\to} \H P^1 $$ given by the same map, which is actually the frame bundle fibration $$ {\rm SO}(4) \cdots {\rm SO}(5) \stackrel{g}{\to} S^4 $$ over the Euclidean unit sphere $S^4 \subset \mathbb{R}^5$. This is because ${\rm Sp}(2)$ is the universal cover of ${\rm SO}(5)$ and ${\rm Sp}(1) \times {\rm Sp}(1)$ is the universal cover of ${\rm SO}(4)$, both with the same kernel $\{\pm 1\}$ (see Examples 6.50 (b) and (c), p.208 of \cite{poor}). Computing the boundary $\ov{\partial}: \pi_4(S^4) \to \pi_3({\rm SO}(4))$ we have, by naturality, that the induced diagram in homotopy commutes $$ \begin{array}{ccc} \pi_4(\H P^1) & \stackrel{\partial}{\to} & \pi_3( {\rm Sp}(1) \times {\rm Sp}(1) ) \\ \text{\tiny 1:1} \downarrow & & \text{\tiny 1:1} \downarrow \\ \pi_4(S^4) & \stackrel{\ov{\partial}}{\to} & \pi_3({\rm SO}(4)) \end{array} $$ where the rightmost column is an isomorphism, since the universal covering induces isomorphisms in the higher homotopy groups. 
It follows that $$ \ov{\partial}(1) = (1,-1) $$ where the leftmost $1$ is the generator of $\pi_4(S^4)$ on the base and the rightmost $1$ is the generator of $\pi_3( {\rm Sp}(1) ) = \mathbb{Z}$ on each factor of the fiber $\pi_3({\rm SO}(4)) \simeq \pi_3({\rm Sp}(1) \times {\rm Sp}(1))$. \end{remark} \subsection{Flag fibration} Let $\mathbb F_\Theta = U/U_\Theta$ be a flag manifold of a connected compact Lie group $U$, that is, the isotropy $U_\Theta$ is the centralizer of a torus. It follows that $U_\Theta$ is connected. In this section we consider the flag fibration $U_\Theta \cdots U \to \mathbb F_\Theta$ with basepoint $1$ on the fiber (thus on the total space) and basepoint $o = U_\Theta$ (trivial coset) on the base. By reducing to rank one, we compute the {\em boundary map} of the homotopy exact sequence \begin{equation} \label{eq:seqexata} 0 = \pi_2(U) \to \pi_2(\mathbb F_\Theta) \stackrel{\partial}{\to} \pi_1(U_\Theta) \to \pi_1(U) \end{equation} on special elements of $\pi_2(\mathbb F_\Theta)$. Then we use this to show that these special elements provide a $\mathbb{Z}$-basis for $\pi_2(\mathbb F_\Theta)$. Note that the second homotopy group of a connected Lie group is zero so that, by exactness, the boundary map is always injective. Note that different groups $U$ may yield diffeomorphic quotients $\mathbb F_\Theta$; nonetheless, the boundary homomorphism depends on the chosen groups since it depends on the topology of the whole fibration. \begin{example} The only flag manifold of rank 1 is the sphere $S^2$ considered in the previous section. If the group is simply connected, the flag fibration is the Hopf fibration whose boundary map is surjective. If the group is not simply connected, the flag fibration is the frame bundle fibration whose boundary map is not surjective. \end{example} First we identify the image of the boundary map. By exactness, it is the kernel of the map $\pi_1(U_\Theta) \to \pi_1(U)$. Fix $T$ a maximal torus of $U_\Theta$, hence of $U$. 
It follows that $U$ and $U_\Theta$ share the same lattice $\Gamma$. Denote by $\Pi$ the roots of $\u$; it contains the roots $\Pi_\Theta$ of $\u_\Theta$, the Lie algebra of $U_\Theta$. Denote by $\Gamma^\vee$ the coroot lattice of $\u$; it contains the coroot lattice $\Gamma^\vee_\Theta$ of the isotropy $\u_\Theta$. Fix from now on the natural isomorphisms that follow from (\ref{eq:isompi1}) $$ \Gamma / \Gamma^\vee_\Theta \to \pi_1(U_\Theta) \qquad \Gamma / \Gamma^\vee \to \pi_1(U) $$ Note that $\Gamma/\Gamma^\vee$ can have torsion, as in $\pi_1(SO(3)) = \mathbb{Z}_2$. Denote by $R^\vee$ the dual root group, given by the $\mathbb{Z}$-span of the dual roots $\Pi^\vee$. Since the isomorphism $\phi \in \mathfrak{h}^* \mapsto {\rm \bf i} H_\phi \in \mathfrak{h}$ takes a dual root $\alpha^\vee$ to the coroot vector ${\rm \bf i} H_{\alpha^\vee} = {\rm \bf i} H^\vee_{\alpha}$, it follows that it restricts to a group isomorphism $$ R^\vee \to \Gamma^\vee $$ Denote by $R^\vee_\Theta$ the dual root group of the isotropy, given by the $\mathbb{Z}$-span of the dual roots $\Pi^\vee_\Theta$. Then the isomorphism above restricts to an isomorphism $R^\vee_\Theta \to \Gamma^\vee_\Theta$ so that we have the natural group isomorphism \begin{equation} \label{isom:raizdual-coraiz} R^\vee/R^\vee_\Theta \to \Gamma^\vee/\Gamma^\vee_\Theta \end{equation} Fix a system of simple roots $\Theta$ for $\Pi_\Theta$ and extend it to a system of simple roots $\Sigma$ for $\Pi$. This gives a choice of positive roots $\Pi^+$ and then $\Pi^+_\Theta$. \begin{proposition} \label{prop:imagem} The image of $\partial$ is $\Gamma^\vee/\Gamma^\vee_\Theta$ and has no torsion. Furthermore, $\partial$ is surjective iff $U$ is simply connected. 
\end{proposition} \begin{proof} Under the above isomorphisms, the map $\pi_1(U_\Theta) \to \pi_1(U)$ induced by the inclusion $U_\Theta \subset U$ clearly becomes the natural map $\Gamma / \Gamma^\vee_\Theta \to \Gamma / \Gamma^\vee$, which has kernel $\Gamma^\vee/\Gamma^\vee_\Theta$. For the torsion statement, note that the dual simple roots $\Sigma^\vee$ form a $\mathbb{Z}$-basis for $R^\vee$ and the dual simple roots $\Sigma^\vee_\Theta$ form a $\mathbb{Z}$-basis for $R^\vee_\Theta$. Since this $\mathbb{Z}$-basis of $R^\vee_\Theta$ is a subset of this $\mathbb{Z}$-basis of $R^\vee$, it follows that $R^\vee/R^\vee_\Theta$ has no torsion. The result then follows from the isomorphism (\ref{isom:raizdual-coraiz}). If $\pi_1(U)=0$ then $\partial$ is surjective by the exact sequence (\ref{eq:seqexata}). Conversely, if $\partial$ is surjective then $\Gamma^\vee / \Gamma^\vee_\Theta = \Gamma / \Gamma^\vee_\Theta$, so that $\Gamma^\vee = \Gamma$ and then $\pi_1(U) = \Gamma/\Gamma^\vee = 0$. \end{proof} The previous result shows that the image of the boundary homomorphism depends only on the Lie algebras of $U$ and $U_\Theta$. We now construct a special element of $\pi_2(\mathbb F_\Theta)$ which gets mapped to each coroot vector in this image. Let $\alpha \in \Pi$ be a root, not necessarily simple. Consider the compact subgroup $U(\alpha)$ of $U$ and the epimorphism $\Psi: SU(2) \to U(\alpha)$. Since $ \Psi\begin{psmallmatrix} e^{\theta {\rm \bf i}}& \\ & e^{-\theta {\rm \bf i}}\\ \end{psmallmatrix} = \exp( \theta {\rm \bf i} H^\vee_\alpha) $, it follows that $\Psi(U(1)) \subset T \subset U_\Theta$.
We then get the commutative diagram of immersions \begin{equation} \label{eq:hopf-em-U} \begin{array}{rcrcl} U(1) \,\,\,\, & \cdots & SU(2) & \stackrel{h}{\to} & S^2 \quad \\ \downarrow \Psi & & \downarrow \Psi & & \downarrow \sigma^\vee_\alpha \\ U_\Theta \,\,\,\, & \cdots & U \quad & \to & \mathbb F_\Theta \end{array} \end{equation} where the upper row is the Hopf bundle and $\Psi$ descends to the immersion of a 2-sphere into $\mathbb F_\Theta$ given by $$ \sigma^\vee_\alpha: S^2 \to \mathbb F_\Theta \qquad z/w \mapsto \Psi \begin{pmatrix} z & -\ov{w} \\ w & \ov{z} \end{pmatrix} o $$ whose image is the orbit of $U(\alpha)$ through the basepoint $o$ of $\mathbb F_\Theta$. Denote the based homotopy class of this map again by $\sigma^\vee_\alpha \in \pi_2(\mathbb F_\Theta)$. \begin{proposition} \label{teo:bordo} $\partial( \sigma^\vee_\alpha ) = {\rm \bf i} H^\vee_\alpha \mod \Gamma^\vee_\Theta$ \end{proposition} \begin{proof} The homotopy class of $\sigma^\vee_\alpha$ in $\mathbb F_\Theta$ is represented by $\Psi \circ f: D \to \mathbb F_\Theta$. This has a lift $\Psi \circ F: D \to U$ such that $$ \Psi \circ F|_{\partial D} = \Psi \begin{pmatrix} e^{\theta {\rm \bf i}}& \\ & e^{-\theta {\rm \bf i}} \end{pmatrix} = \exp(\theta {\rm \bf i} H^\vee_\alpha) $$ It follows that $$ \partial( \sigma^\vee_\alpha ) = [e^{\theta {\rm \bf i}}\mapsto \exp(\theta {\rm \bf i} H^\vee_\alpha) ] \in \pi_1(U_\Theta) $$ which corresponds to ${\rm \bf i} H^\vee_\alpha$ mod $\Gamma^\vee_\Theta$ under the natural isomorphism. \end{proof} The real root decomposition of the isotropy Lie algebra is $$ \u_\Theta = \t \oplus \sum_{\alpha \in \Pi^+_\Theta} \u_\alpha $$ By the real root decomposition of $\u$ it follows that $\u_\Theta$ is the centralizer in $\u$ of the torus \begin{equation} \label{eq:toro-isotropia} \t_\Theta = \{ H \in \t: \, \alpha(H) = 0 \quad \forall \alpha \in \Theta \} \end{equation} Since $U_\Theta$ is connected, it is the centralizer of $\t_\Theta$ in $U$.
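To illustrate (\ref{eq:toro-isotropia}) in a standard case of type $A$, consider the following example.

\begin{example}
Let $U = {\rm SU}(3)$, with $\t$ the diagonal matrices ${\rm diag}(a_1 {\rm \bf i}, a_2 {\rm \bf i}, a_3 {\rm \bf i})$, $a_1 + a_2 + a_3 = 0$, and simple roots $\Sigma = \{\alpha_1, \alpha_2\}$, where, in the usual normalization, $\alpha_j$ takes such a diagonal matrix to $a_j - a_{j+1}$. For $\Theta = \{\alpha_1\}$, equation (\ref{eq:toro-isotropia}) gives $$ \t_\Theta = \{ {\rm diag}(a {\rm \bf i}, a {\rm \bf i}, -2a {\rm \bf i}): \, a \in \mathbb{R} \} $$ whose centralizer in $U$ is the block diagonal subgroup $U_\Theta = {\rm S}({\rm U}(2) \times {\rm U}(1))$, the isotropy of the flag manifold $\mathbb F_\Theta = \mathbb{C}P^2$.
\end{example}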
Note that if $\alpha \in \Pi_\Theta$ then $U(\alpha) \subset U_\Theta$. \begin{lemma} \label{lema:isotropia-rank1} If $\alpha \not\in \Pi_\Theta$ then $U(\alpha) \cap U_\Theta = T(\alpha)$ \end{lemma} \begin{proof} Let $T(\alpha)$ be the centralizer of ${\rm \bf i} H^\vee_\alpha$ in $U(\alpha)$; it is contained in $T$ and is a maximal torus of the rank 1 group $U(\alpha)$. We have that $ U(\alpha) \cap T = T(\alpha) $, since any $u \in U(\alpha) \cap T$ centralizes $ {\rm \bf i} H^\vee_\alpha \in \t$, so that $u \in T(\alpha)$, and the other inclusion is immediate. Now consider the subspace orthogonal to $H_\alpha$ given by $\t_\alpha = \{ H \in \t: \, \alpha(H) = 0 \}$. If a root $\beta$ is such that $\beta|_{\t_\alpha} = 0$ then $H_\beta$ is a multiple of $H_\alpha$, so that $\beta = \pm \alpha$, since the root system is reduced. It follows from the real root decomposition of $\u$ that the centralizer of $\t_\alpha$ in $\u$ is $\t \oplus \u_\alpha$. Denote by $U_\alpha$ the centralizer of $\t_\alpha$ in $U$; its Lie algebra is then $\t \oplus \u_\alpha$. It follows that $U_\alpha \cap U_\Theta$ is the centralizer of the torus $\t_\alpha + \t_\Theta$, hence it is connected. By the real root decompositions of $\u$ and $\u_\Theta$, it follows that the centralizer of $\t_\alpha + \t_\Theta$ in $\u$ is $$ (\t \oplus \u_\alpha) \cap (\t \oplus \sum_{\beta \in \Pi_\Theta} \u_\beta) = \t $$ since $\alpha \not\in \Pi_\Theta$. It follows that $ U_\alpha \cap U_\Theta = T $. Since $\u(\alpha) \subset \t \oplus \u_\alpha$, we have that $U(\alpha) \subset U_\alpha$, and since $T(\alpha) \subset T \subset U_\Theta$, it follows from the previous intersections that $$ T(\alpha) \subset U(\alpha) \cap U_\Theta \subset U(\alpha) \cap ( U_\alpha \cap U_\Theta) = U(\alpha) \cap T = T(\alpha) $$ as claimed. \end{proof} The next result is the main result of this section.
\begin{theorem} \label{teo:pi2} \begin{enumerate}[(i)] \item The dual roots $\alpha^\vee, \beta^\vee, \delta^\vee \in \Pi^\vee$ satisfy $ \delta^\vee = \alpha^\vee + \beta^\vee$ {\rm mod} $R^\vee_\Theta $ if, and only if, in $\pi_2(\mathbb F_\Theta)$ we have $$ \sigma^\vee_\delta = \sigma^\vee_\alpha * \sigma^\vee_\beta $$ \item If $\alpha \not\in \Pi_\Theta$ then $\sigma^\vee_\alpha$ is a diffeomorphism onto a 2-sphere. \item With regard to the topology of $\mathbb F_\Theta$ and of the map $\sigma^\vee_\alpha$, we can assume that $U$ is compact semisimple and simply connected. \item We have that $\{ \sigma^\vee_\alpha: \, \alpha \in \Sigma - \Theta \}$ is a $\mathbb{Z}$-basis of $\pi_2(\mathbb F_\Theta)$. \end{enumerate} \end{theorem} \begin{proof} For item (i), by the isomorphism (\ref{isom:raizdual-coraiz}) it follows that $\delta^\vee = \alpha^\vee + \beta^\vee$ {\rm mod} $R^\vee_\Theta$ is equivalent to ${\rm \bf i} H_\delta^\vee = {\rm \bf i} H_\alpha^\vee + {\rm \bf i} H_\beta^\vee \mod \Gamma^\vee_\Theta$. Since the ${\rm \bf i} H_\bullet^\vee$ are coroot vectors and $\partial$ is injective, Proposition \ref{teo:bordo} then implies item (i). For item (ii), the image of $\sigma^\vee_\alpha$ is the orbit $U(\alpha)o$ of the basepoint $o$. It has isotropy $U(\alpha) \cap U_\Theta = T(\alpha)$, by the previous Lemma. Hence the image of $\sigma^\vee_\alpha$ is diffeomorphic to $U(\alpha)/T(\alpha)$, which is a 2-sphere. To show that $\sigma^\vee_\alpha$ is a diffeomorphism, from (\ref{eq:hopf-em-U}) we have the diagram \begin{equation} \begin{array}{rcl} SU(2)/U(1) & \to & S^2 \quad \\ \downarrow \Psi \quad & & \downarrow \sigma^\vee_\alpha \\ U(\alpha)/T(\alpha) & \to & U(\alpha)o \end{array} \end{equation} whose rows are diffeomorphisms, so that we only have to prove that the first column is a diffeomorphism. Since it is induced by the epimorphism $SU(2) \stackrel{\Psi}{\to} U(\alpha)$, we only have to prove that $\Psi^{-1}(T(\alpha)) = U(1)$.
We have already established the inclusion $\supseteq$, right before (\ref{eq:hopf-em-U}). Let $u \in SU(2)$ be such that $\Psi(u) \in T(\alpha)$. Then $\Psi(u)$ centralizes ${\rm \bf i} H_\alpha^\vee = \psi\begin{psmallmatrix} {\rm \bf i} & \\ & -{\rm \bf i} \\ \end{psmallmatrix}$ so that $$ \psi \left( \mathrm{Ad}(u) \begin{psmallmatrix} {\rm \bf i} & \\ & -{\rm \bf i} \\ \end{psmallmatrix} \right) = \mathrm{Ad}(\Psi(u)) \psi\begin{psmallmatrix} {\rm \bf i} & \\ & -{\rm \bf i} \\ \end{psmallmatrix} = \psi\begin{psmallmatrix} {\rm \bf i} & \\ & -{\rm \bf i} \\ \end{psmallmatrix} $$ and, since $\psi$ is an isomorphism of Lie algebras, it follows that $u$ centralizes $\begin{psmallmatrix} {\rm \bf i} & \\ & -{\rm \bf i} \\ \end{psmallmatrix}$, so that it lies in $U(1)$, as claimed. For item (iii), since the center $Z$ of $U$ is contained in $T$, it is contained in $U_\Theta$. Consider the quotient groups $\ov{U} = U/Z$ and $\ov{U_\Theta} = U_\Theta/Z$. Then $\ov{U}$ is connected and semisimple. Furthermore, its maximal torus $T/Z$ is contained in $\ov{U_\Theta}$, which is then a maximal rank subgroup. Let $\ov{\Psi}: SU(2) \to \ov{U}$ be the homomorphism constructed as before, now for $\ov{U}$, and let $\ov{\pi}: U \to \ov{U}$ be the quotient homomorphism. Denote by $\ov{\exp}$ the exponential map of $\ov{U}$. Since $\ov{\pi} \circ \exp = \ov{\exp} \circ \, d\ov{\pi}_1$, it follows that the leftmost diagram below commutes \begin{equation*} \begin{tikzcd} SU(2) \arrow["\Psi"]{r} \arrow["\ov{\Psi}"]{rd} & U \arrow["\ov{\pi}"]{d}\\ & \ov{U} \end{tikzcd} \qquad\qquad\qquad \begin{tikzcd} S^2 \arrow["\sigma^\vee_\alpha"]{r} \arrow["\ov{\sigma^\vee_\alpha}"]{rd} & \mathbb F_\Theta \arrow["\ov{\phi}"]{d}\\ & \ov{U}/\ov{U_\Theta} \end{tikzcd} \end{equation*} The quotient map $\ov{\pi}: U \to \ov{U}$ induces the diffeomorphism $\ov{\phi}: \mathbb F_\Theta \to \ov{U}/\ov{U_\Theta}$.
Let $\ov{\sigma^\vee_\alpha}: S^2 \to \ov{U}/\ov{U_\Theta}$ be the immersion constructed as before, now for $ \ov{U}/\ov{U_\Theta}$. The commutativity of the leftmost diagram above implies that the rightmost diagram also commutes. Thus, replacing the pair $U, U_\Theta$ with $\ov{U}, \ov{U}_\Theta$, we can assume that $U$ is compact semisimple, since the diffeomorphism $\ov{\phi}$ takes $\sigma^\vee_\alpha$ to $\ov{\sigma^\vee_\alpha}$. Now, $U$ being compact semisimple, it has a compact semisimple simply connected covering $\wt{U}$ with covering homomorphism $\wt{\pi}: \wt{U} \to U$. Consider the subgroup $\wt{U}_\Theta = \wt{\pi}^{-1}( U_\Theta )$. It is closed, since $U_\Theta$ is closed and $\wt{\pi}$ is continuous, and it is of maximal rank in $\wt{U}$, since this is a local property. Let $\wt{\Psi}: SU(2) \to \wt{U}$ be the homomorphism constructed as before, now for $\wt{U}$. Denote by $\wt{\exp}$ the exponential map of $\wt{U}$. Since $\wt{\pi} \circ \wt{\exp} = \exp \circ \, d\wt{\pi}_1$, it follows that the leftmost diagram below commutes \begin{equation*} \begin{tikzcd} & \wt{U} \arrow["\wt{\pi}"]{d} \\ SU(2) \arrow["\Psi"]{r} \arrow["\wt{\Psi}"]{ru} & U \end{tikzcd} \qquad\qquad\qquad \begin{tikzcd} & \wt{U}/\wt{U}_\Theta \arrow["\wt{\phi}"]{d} \\ S^2 \arrow["\sigma^\vee_\alpha"]{r} \arrow["\wt{\sigma^\vee_\alpha}"]{ru} & \mathbb F_\Theta \end{tikzcd} \end{equation*} The covering map $\wt{\pi}: \wt{U} \to U$ induces the diffeomorphism $\wt{\phi}: \wt{U}/\wt{U}_\Theta \to \mathbb F_\Theta$. Let $\wt{\sigma^\vee_\alpha}: S^2 \to \wt{U}/\wt{U}_\Theta$ be the immersion constructed as before, now for $ \wt{U}/\wt{U}_\Theta$. The commutativity of the leftmost diagram above implies that the rightmost diagram also commutes. Thus, replacing the pair $U, U_\Theta$ with $\wt{U}, \wt{U}_\Theta$, we can assume that $U$ is compact semisimple and simply connected, since the diffeomorphism $\wt{\phi}$ takes $\wt{\sigma^\vee_\alpha}$ to $\sigma^\vee_\alpha$.
For item (iv), we use the previous item to assume that $U$ is simply connected, so that, by the exact sequence (\ref{eq:seqexata}), the boundary map is an isomorphism. By the no torsion part of Proposition \ref{prop:imagem}, it follows that $\alpha^\vee$, $\alpha \in \Sigma - \Theta$, projects to a $\mathbb{Z}$-basis of $R^\vee/R^\vee_\Theta$. By the isomorphism (\ref{isom:raizdual-coraiz}) it follows that ${\rm \bf i} H^\vee_{\alpha}$, $\alpha \in \Sigma - \Theta$, projects to a $\mathbb{Z}$-basis of $\Gamma^\vee/\Gamma^\vee_\Theta$. Proposition \ref{teo:bordo} then implies item (iv). \end{proof} It follows that $\pi_2(\mathbb F_\Theta) \simeq \Gamma^\vee/\Gamma^\vee_\Theta$, where we do not have a canonical isomorphism (since the boundary depends on the choice of $U$) but we do have a preferred $\mathbb{Z}$-basis. By the isomorphism (\ref{isom:raizdual-coraiz}) it follows that \begin{equation} \label{isom:raizdual-raizdual} \pi_2(\mathbb F_\Theta) \simeq R^\vee/R^\vee_\Theta \end{equation} where, for roots $\alpha$, $\beta$, we have that $\sigma_\alpha^\vee = \sigma_\beta^\vee$ in $\pi_2(\mathbb F_\Theta)$ iff $\alpha^\vee = \beta^\vee \mod R^\vee_\Theta$. \begin{remark} Note that in items (i)-(iii) of the previous Theorem $\alpha$ is not necessarily a simple root. For a simple root $\alpha$, the image of $\sigma_\alpha^\vee$ is a so-called minimal Schubert variety of $\mathbb F_\Theta$. \end{remark} \begin{remark} Following these lines, the computations in Remark \ref{remark:quat} may be of use to describe the $\pi_4$ of {\em quaternionic} flag manifolds. \end{remark} \section{Invariant geometry} In this section we relate the well-known invariant riemannian geometry of a flag manifold $\mathbb F_\Theta = U/U_\Theta$, which has a nice description in terms of the roots $\Pi$ of $U$ and the roots $\Pi_\Theta$ of the isotropy $U_\Theta$, with the sphere embeddings $\sigma^\vee_\alpha$ into $\mathbb F_\Theta$ of the previous section.
A $U$-invariant riemannian metric on $\mathbb F_\Theta$ furnishes an inner product on the tangent space $T_o(\mathbb F_\Theta)$ at the basepoint $o$ that is invariant by the isotropy representation of $U_\Theta$. Conversely, once we fix a $U_\Theta$-invariant inner product on $T_o(\mathbb F_\Theta)$, we can use the action of $U$ to spread it around $\mathbb F_\Theta$ and obtain a $U$-invariant riemannian metric. It follows that a $U$-invariant riemannian metric on $\mathbb F_\Theta$ is uniquely determined by the $U_\Theta$-invariant inner product on $T_o(\mathbb F_\Theta)$. Note that the projection $U \to \mathbb F_\Theta$ differentiates at the identity to the linear epimorphism \begin{equation} \label{eq:esptg} \u \to T_o(\mathbb F_\Theta) \end{equation} whose kernel is the isotropy subalgebra $\u_\Theta$. Fix in $\u$ an $\mathrm{Ad}(U)$-invariant inner product $\prod{\cdot,\cdot}$. Consider $\mathfrak{m}_\Theta = \u_\Theta^\perp$, the orthogonal complement in $\u$ of $\u_\Theta$. The restriction of the above epimorphism to $\mathfrak{m}_\Theta$ then canonically identifies it with $T_o(\mathbb F_\Theta)$. The restriction of $\prod{\cdot,\cdot}$ to $\mathfrak{m}_\Theta$ then furnishes an inner product on the tangent space. Since $\mathfrak{m}_\Theta$ is $U_\Theta$-invariant, the isotropy representation becomes $U_\Theta \to {\rm O}(\mathfrak{m}_\Theta)$ and is given by $\mathrm{Ad}|_{\mathfrak{m}_\Theta}$, the restriction to $\mathfrak{m}_\Theta$ of the adjoint representation of $U_\Theta$. Now let $\alpha \not\in \Pi_\Theta$ and denote by $S^2(\alpha)$ the 2-sphere given by the image of the embedding $\sigma^\vee_\alpha$. It is the orbit of the basepoint $o$ by $U(\alpha)$, whose tangent space at $o$ is the real root space $\u_\alpha$ and whose isotropy is, by Lemma \ref{lema:isotropia-rank1}, $U(\alpha) \cap U_\Theta = T(\alpha)$, which acts by rotations on $\u_\alpha$.
Now, the restriction to $\u_\alpha$ of a $U_\Theta$-invariant inner product on $\mathfrak{m}_\Theta$ furnishes another $T(\alpha)$-invariant inner product on $\u_\alpha$, which is thus a positive multiple $\lambda_\alpha$ of the previous one, by the uniqueness, up to a positive scale, of an inner product invariant by rotations. The diameter of $S^2(\alpha)$ in the corresponding invariant metric is the previous diameter multiplied by $\sqrt{\lambda_\alpha}$. From the real root space decompositions of $\u$ and $\u_\Theta$, since distinct real root spaces are orthogonal under any $U$-invariant inner product, it follows that \begin{equation} \label{eq:decompos-raizes-tg} \mathfrak{m}_\Theta = \sum_{\alpha \not\in \Pi_\Theta} \u_{\alpha} \end{equation} so that the various parameters $\lambda_\alpha$, $\alpha \not\in \Pi_\Theta$, which determine the diameters of the 2-spheres $S^2(\alpha)$, also determine the invariant metric. Nevertheless, these parameters cannot in general be chosen independently, since the isotropy action of $U_\Theta$ may take some $\u_{\alpha}$ to another $\u_\beta$, forcing $\lambda_\alpha = \lambda_\beta$, since it acts by isometries. \begin{example} The $SO(3)$-invariant metrics of the 2-sphere $S^2$ are the multiples of the round metric of the embedding $S^2 \subset \mathbb{R}^3$. In fact, let $\mathbb{R}^3$ be spanned by ${\rm \bf i}, \j, \k$; with the cross product this is the Lie algebra of $SO(3)$. The isotropy subgroup of the basepoint $\k$ is ${\rm SO}(2)$ and the isotropy subalgebra is the $\k$ axis. Fix in $\mathbb{R}^3$ the usual inner product. It follows that $\mathfrak{m} = \k^\perp$ is the ${\rm \bf i},\j$ plane $\mathbb{R}^2$, on which ${\rm SO}(2)$ acts by rotations. The only rotation-invariant inner products on this plane are the positive multiples of the inner product of $\mathbb{R}^2$, which is the restriction of the inner product of $\mathbb{R}^3$.
The latter is precisely the inner product induced by the embedding $S^2 \subset \mathbb{R}^3$ on the tangent plane at $\k$, which proves our claim. The same conclusion applies to the ${\rm SU}(2)$-invariant riemannian metrics of the Riemann sphere, when identified with the euclidean sphere $S^2$ as in (\ref{eq:SU2-SO3}). This is because the adjoint action of ${\rm SU}(2)$ on its Lie algebra is the canonical action of ${\rm SO}(3)$ on $\mathbb{R}^3$. \end{example} Being an orthogonal representation, $\mathfrak{m}_\Theta$ decomposes as the sum of mutually orthogonal and irreducible representations \begin{equation} \label{eq:decompos-esptg} \mathfrak{m}_\Theta = \mathfrak{m}_1 \oplus \mathfrak{m}_2 \oplus \cdots \oplus \mathfrak{m}_k \end{equation} We call this an isotropy decomposition and each $\mathfrak{m}_i$ an {\em isotropy component}. The fundamental result is then Siebenthal's theorem (Théorème 7.1 of \cite{siebenthal}), which proves that the isotropy components are unique (apart from reordering), irreducible and mutually {\em inequivalent} representations, and describes each one of them in terms of roots. Now, an $\mathrm{Ad}(U_\Theta)$-invariant inner product on $\mathfrak{m}_\Theta$ is given by $\prod{\Lambda X, Y}$, where $\Lambda: \mathfrak{m}_\Theta \to \mathfrak{m}_\Theta$ is a symmetric, positive linear operator that commutes with the adjoint action of $U_\Theta$. From the decomposition (\ref{eq:decompos-esptg}) and Schur's Lemma it follows that $\Lambda|_{\mathfrak{m}_i}$ is $\lambda_i$ times the identity of $\mathfrak{m}_i$. From the inequivalence of the isotropy components it follows that the $\lambda_i$'s are not related. Thus, the $\mathrm{Ad}(U_\Theta)$-invariant inner products on $\mathfrak{m}_\Theta$, and the corresponding $U$-invariant riemannian metrics of $\mathbb F_\Theta$, are described by the $k$ positive parameters $\lambda_1, \ldots, \lambda_k$, one for each isotropy component.
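The simplest illustrations of this parameter count are the following.

\begin{example}
When the isotropy representation is irreducible we have $k = 1$, so that the invariant metric is unique up to a positive scale. This is the case of the projective space $\mathbb{C}P^2 = {\rm SU}(3)/{\rm S}({\rm U}(2) \times {\rm U}(1))$, whose ${\rm SU}(3)$-invariant metrics are the positive multiples of the Fubini--Study metric. At the other extreme, for the maximal flag manifold ${\rm SU}(3)/T$ each of the three positive real root spaces is an isotropy component, so that $k = 3$ and the invariant metrics form a 3-parameter family.
\end{example}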
We now recall Siebenthal's description of the isotropy components $\mathfrak{m}_i$. Denote by $R_\Theta$ the root group of the isotropy, given by the $\mathbb{Z}$-span of the roots $\Pi_\Theta$. We consider the quotient group $\mathfrak{h}^*/R_\Theta$, so that two functionals $\alpha,\, \beta \in \mathfrak{h}^*$ are equivalent mod $R_\Theta$ when their difference is in $R_\Theta$. Partitioning the union of the positive roots $\Pi^+$ with the zero functional according to the equivalence classes mod $R_\Theta$ we get $$ \Pi^+ \cup \{ 0 \} = \Pi_0 \cup \Pi_1 \cup \cdots \cup \Pi_s $$ where the residue class of $0$ is $$ \Pi_0 = \Pi^+_\Theta \cup \{0\} $$ The residue classes mod $R_\Theta$ characterize the isotropy components by $$ \mathfrak{m}_i = \sum_{\alpha \in \Pi_i} \u_{\alpha} $$ where $i = 1,2,\ldots, s$, so that $s = k$. \begin{example} In type A, the mod $R_\Theta$ equivalence classes of positive roots and the isotropy components are related to block decompositions of the Lie algebra. Consider for example the partial flag manifold $\mathbb F_\Theta = {\rm SU}(9)/{\rm S}({\rm U}(3)\times {\rm U}(1) \times {\rm U}(2))$. Then its set of simple roots $\Theta$ corresponds to the three intervals of simple roots given by the black dots above the diagonal in the picture below. \begin{center} \def\svgwidth{5cm} \input{blocos-isotropia_pdf.pdf} \end{center} The other hollow black dots in the blocks right above $\Theta$ correspond to the other positive roots in the isotropy. The colored dots outside the blocks on the diagonal correspond to positive roots outside the isotropy. Their block decomposition corresponds to the equivalence classes mod $R_\Theta$: say the red ones give $\Pi_1$, the green ones give $\Pi_2$ and the blue ones give $\Pi_3$.
The corresponding isotropy components $\mathfrak{m}_1$, $\mathfrak{m}_2$, $\mathfrak{m}_3$ are then given by picturing the above picture as a block decomposition of anti-hermitian matrices, where the upper triangular block is appropriately reflected onto the lower triangular block. \end{example} \begin{remark} In the maximal flag manifold we have $\Theta = \emptyset$, so that $R_\Theta = \{ 0 \}$, each residue class $\Pi_i$ is a singleton, each $\mathfrak{m}_i$ is a real root space, and then (\ref{eq:decompos-raizes-tg}) is already the isotropy decomposition. In partial flag manifolds, where $\Theta \neq \emptyset$, apart from $\Pi_0 - \{0\}$, the other residue classes do not, in general, define root subsystems. For example, if there exist two $\Theta$-equivalent roots $\alpha, \beta$ in $\Pi_i$, $i \neq 0$, with Cartan integer $\prod{\alpha, \beta^\vee}=1$, then $r_\beta(\alpha) = \alpha - \beta$ is a root that lies in $R_\Theta$, and so in $\Pi_\Theta$, thus outside $\Pi_i$. \end{remark} \subsection{Equiharmonic spheres} \label{equiharmonic-spheres} We now establish an interesting relation between the spheres $S^2(\alpha)$ and harmonic maps into $\mathbb F_\Theta$. Consider a compact Riemann surface $M^2$ equipped with a metric $g$, let $(N,h)$ be a compact Riemannian manifold and $\phi:(M^2,g)\to (N,h)$ a differentiable map. The {\em energy} of $\phi$ is given by $$ E(\phi)=\frac{1}{2}\int_{M}{|d\phi|^2\, \omega_g}, $$ where $\omega_g$ is the volume measure defined by the metric $g$ and $|d\phi|$ is the Hilbert-Schmidt norm of $d\phi$. The differentiable map $\phi$ is {\em harmonic} if it is a critical point of the energy functional. Examples of harmonic maps are the minimal immersions and the totally geodesic immersions. \begin{definition} A map $\phi:M^2\to \mathbb F$ is called an {\em equiharmonic map} if it is harmonic with respect to {\em any} invariant metric on $\mathbb F$.
\end{definition} From now on we restrict ourselves to the case $M^2=S^2$, the 2-sphere. We will also use the concept of {\em homogeneous equigeodesics}: curves of the form $\gamma(t)=(\exp tX)\cdot o$, $X\in\mathfrak{m}_\Theta$, that are geodesics with respect to any invariant metric. The vector $X\in \mathfrak{m}_\Theta$ is called an {\em equigeodesic vector}. One can think of equigeodesics as equiharmonic maps whose domain is 1-dimensional. In \cite{CGN} it is proved that any flag manifold admits such curves and that each isotropy component of (\ref{eq:decompos-esptg}) consists of equigeodesic vectors, that is, for all $X\in \mathfrak{m}_i$ the curve $\gamma(t)=(\exp tX)\cdot o$ is an equigeodesic on $\mathbb F_\Theta$. \begin{proposition} For each root $\alpha \not \in \Pi_\Theta$ the sphere $S^2(\alpha)$ is equiharmonic, that is, the map $\sigma^\vee_\alpha: S^2 \to \mathbb F_\Theta$ is equiharmonic. In particular $\pi_2(\mathbb F_\Theta)$ is generated by equiharmonic 2-spheres. \end{proposition} \begin{proof} We will show that $S^2(\alpha)$ is totally geodesic, independently of the $U$-invariant metric on $\mathbb F_\Theta$. Fix a $U$-invariant metric on $\mathbb F_\Theta$ (and therefore an $\mathrm{Ad}(U_\Theta)$-invariant inner product on $\mathfrak{m}_\Theta$). Keep in mind the decomposition (\ref{eq:decompos-esptg}) associated with the isotropy representation. It is clear that the tangent space of $S^2(\alpha)$ at $o$ is $\mathfrak{u}_\alpha$. Since $S^2(\alpha)$ is a $U(\alpha)$-orbit with isotropy $U(\alpha) \cap U_\Theta = T(\alpha)$ (see Lemma \ref{lema:isotropia-rank1}), it follows that the $U$-invariant metric induces a $T(\alpha)$-invariant inner product on $\u_\alpha$. Since $T(\alpha)$ acts on $\u_\alpha$ by rotations, it follows that this inner product on $\u_\alpha$ is a multiple of the Cartan--Killing inner product of $\u(\alpha)$.
It follows that the induced metric on $S^2(\alpha)$ is a multiple of the round metric, and every geodesic of $S^2(\alpha)$ through $o$ is of the form $\gamma(t)=(\exp tX)\cdot o$ with $X\in\mathfrak{u}_\alpha$. We remark that $\gamma$ is also a geodesic on $\mathbb F_\Theta$, since the vector $X\in\mathfrak{u}_\alpha$ is an equigeodesic vector for $\mathbb F_\Theta$. By homogeneity it follows that every geodesic on $S^2(\alpha)$ is also a geodesic on $\mathbb F_\Theta$, so that $S^2(\alpha)$ is totally geodesic with respect to the fixed invariant metric on $\mathbb F_\Theta$. Since the fixed invariant metric is arbitrary, the result follows. \end{proof} \subsection{$\Theta$-rigid roots} \label{theta-rigid} To interpret the invariant metrics of $\mathbb F_\Theta$ geometrically, denote by $S^2(\alpha)$ the embedded 2-sphere in $\mathbb F_\Theta$ given by the image of $\sigma^\vee_\alpha$, for a root $\alpha \not \in \Pi_\Theta$. \begin{proposition} The invariant metrics of $\mathbb F_\Theta$ are given by choosing the diameters of the 2-spheres $S^2(\alpha)$, for the roots $\alpha \not\in \Pi_\Theta$, where two spheres $S^2(\alpha)$, $S^2(\beta)$ must be given the same diameter whenever their corresponding roots $\alpha, \beta$ are congruent {\rm mod} $R_\Theta$. In this case we say that these two spheres have the same invariant geometry. \end{proposition} This partitions the spheres $S^2(\alpha)$ into classes with the same invariant geometry under any invariant metric of the ambient space $\mathbb F_\Theta$. Clearly, a sufficient condition for $S^2(\alpha)$ and $S^2(\beta)$ to have the same invariant geometry is that there exists $u \in U$ such that $u S^2(\alpha) = S^2(\beta)$. The next result shows that this condition may be checked purely on the root system. Denote by $W$ the Weyl group of $U$ and by $W_\Theta$ the Weyl group of the isotropy $U_\Theta$.
\begin{proposition} \label{propos:isometria-esferas} There exists $u \in U$ such that $u S^2(\alpha) = S^2(\beta)$ in $\mathbb F_\Theta$ iff there exists $w \in W_\Theta$ such that $w^*\alpha = \beta$. In particular, $\alpha$ and $\beta$ have the same length. \end{proposition} \begin{proof} Let $w^*\alpha = \beta$, $w \in W_\Theta$. There exists $u \in U_\Theta$ such that $\mathrm{Ad}(u)|_{\t} = w$. Then $\mathrm{Ad}(u)\u_{\alpha} = \u_{\mathrm{Ad}(u)^*\alpha} = \u_{w^*\alpha} = \u_{\beta}$ and also $\mathrm{Ad}(u) {\rm \bf i} H_\alpha = {\rm \bf i} H_{\mathrm{Ad}(u)^*\alpha} = {\rm \bf i} H_{w^*\alpha} = {\rm \bf i} H_{\beta}$. It follows that $\mathrm{Ad}(u) \u(\alpha) = \u(\beta)$, so that $u U(\alpha) u^{-1} = U(\beta)$. Since $u, u^{-1} \in U_\Theta$ fix the basepoint $o$, we get $$ u\, S^2(\alpha) = u\, U(\alpha)\, o = u\, U(\alpha)\, u^{-1} o = U(\beta)\, o = S^2(\beta) $$ as claimed. Conversely, let $u \in U$ be such that $u S^2(\alpha) = S^2(\beta)$. Since $u \, o \in S^2(\beta) = U(\beta) \, o$, there exists $v \in U(\beta)$ such that $u \, o = v \, o$. Thus $ v^{-1} u \in U_\Theta$, since $v^{-1} u \, o = o$, and it satisfies $v^{-1} u S^2(\alpha) = v^{-1} S^2(\beta) = S^2(\beta)$. Hence, replacing $u$ by $v^{-1} u$, we can assume that $u \in U_\Theta$, which, together with $u S^2(\alpha) = S^2(\beta)$, implies that $\mathrm{Ad}(u) \u_{\alpha} = \u_\beta$, by differentiating at the fixed basepoint $o$. It follows that $\mathrm{Ad}(u) \mathfrak{g}_{\alpha} = \mathfrak{g}_\beta$ and, since $\mathrm{Ad}(u) \mathfrak{g}_{\alpha} = \mathfrak{g}_{\mathrm{Ad}(u)^*\alpha}$, we have that $\mathrm{Ad}(u)^*\alpha = \beta$. It follows that $\mathrm{Ad}(u)H_\alpha = H_\beta$, so that $\mathrm{Ad}(u) {\rm \bf i} H_\alpha = {\rm \bf i} H_\beta$. Since $u \in U_\Theta$ and both ${\rm \bf i} H_\alpha, {\rm \bf i} H_\beta \in \t$, there exists $w \in W_\Theta$ such that $\mathrm{Ad}(u) {\rm \bf i} H_\alpha = w\, {\rm \bf i} H_\alpha = {\rm \bf i} H_{w^*\alpha} = {\rm \bf i} H_\beta$, so that $w^*\alpha = \beta$.
\end{proof} It is interesting to note that there may be spheres $S^2(\alpha)$, $S^2(\beta)$ with the same invariant geometry but with roots $\alpha, \beta$ of different lengths (see Example \ref{exemplog2}). The 2-spheres $\sigma^\vee_\alpha$ that induce generators of $\pi_2(\mathbb F_\Theta)$, a homotopical object, also generate the invariant geometry of $\mathbb F_\Theta$, a geometrical object. When do these homotopical generators of $\pi_2(\mathbb F_\Theta)$ carry the same invariant geometry? This reduces to a question about root systems, as follows. For a subset of roots $A \subset \Pi$, write $A^\vee = \{ \alpha^\vee:\, \alpha \in A\}$. \begin{remark} Note that in general $R^\vee_\Theta$ is not the same as $(R_\Theta)^\vee$, since $R_\Theta$ is the $\mathbb{Z}$-span of $\Theta$, $R^\vee_\Theta$ is the $\mathbb{Z}$-span of $\Theta^\vee$ and the map $\alpha \mapsto \alpha^\vee$ is in general not linear. Thus, $\alpha = \beta \mod R_\Theta$ does not in general imply that $\alpha^\vee = \beta^\vee \mod R^\vee_\Theta$, and vice versa; see the examples in the next section. \end{remark} Denote by $\Pi_\Theta(\alpha)$ the mod $R_\Theta$ class of $\alpha$ in $\Pi$ and by $\Pi^\vee_\Theta(\alpha^\vee)$ the mod $R^\vee_\Theta$ class of $\alpha^\vee$ in $\Pi^\vee$. We say that a root $\alpha$ is {\em $\Theta$-rigid} when $\alpha \not\in \Pi_\Theta$ and $$ \Pi_\Theta(\alpha)^\vee \subseteq \Pi^\vee_\Theta(\alpha^\vee) $$ equivalently, when the 2-spheres $S^2(\beta)$ that have the same invariant geometry as $S^2(\alpha)$ are in the same homotopy class as $S^2(\alpha)$. Note that $\Theta$-rigidity is a property of the whole residue class $\Pi_\Theta(\alpha)$. By the duality $(\alpha^\vee)^\vee = \alpha$, it follows that $\alpha^\vee$ is $\Theta^\vee$-rigid when $$ \Pi^\vee_\Theta(\alpha^\vee)^\vee \subseteq \Pi_\Theta(\alpha) $$ equivalently, when the 2-spheres $S^2(\beta)$ that are in the same homotopy class as $S^2(\alpha)$ have the same invariant geometry as $S^2(\alpha)$.
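The following rank two computation illustrates the definition.

\begin{example}
Consider $B_2$ with simple roots $\Sigma = \{\alpha_1, \alpha_2\}$, where $\alpha_1$ is long and $\alpha_2$ is short, normalized so that $|\alpha_1|^2 = 2$ and $|\alpha_2|^2 = 1$, and take $\Theta = \{\alpha_2\}$. The positive roots outside $\Pi_\Theta$ are $\alpha_1$, $\alpha_1 + \alpha_2$ and $\alpha_1 + 2\alpha_2$, which form a single residue class mod $R_\Theta = \mathbb{Z}\alpha_2$. Their duals are $\alpha_1^\vee = \alpha_1$, $(\alpha_1 + \alpha_2)^\vee = 2\alpha_1 + 2\alpha_2$ and $(\alpha_1 + 2\alpha_2)^\vee = \alpha_1 + 2\alpha_2$, while $R^\vee_\Theta = 2\mathbb{Z}\alpha_2$. Thus $\alpha_1^\vee = (\alpha_1 + 2\alpha_2)^\vee \mod R^\vee_\Theta$, but $(\alpha_1 + \alpha_2)^\vee$ lies in a different class mod $R^\vee_\Theta$. Hence $\alpha_1$ is not $\Theta$-rigid: the spheres $S^2(\alpha_1)$ and $S^2(\alpha_1 + \alpha_2)$ have the same invariant geometry but distinct homotopy classes in $\pi_2(\mathbb F_\Theta)$.
\end{example}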
In Section \ref{actionW} we characterize $\Theta$-rigid roots by using the action of the Weyl group $W_\Theta$ on the residue classes of roots mod $R_\Theta$. Below we state the main consequences of these results for the invariant geometry of flag manifolds. \begin{theorem} For a flag manifold $\mathbb F_\Theta$ with root system $\Pi$ and isotropy root system $\Pi_\Theta$, we have the following. \begin{enumerate}[(i)] \item If a root $\beta \in \Pi_\Theta(\alpha)$ has the same length as $\alpha$, then the 2-sphere $S^2(\beta)$ has the same invariant geometry and the same homotopy class as $S^2(\alpha)$. \item A root $\alpha$ is $\Theta$-rigid iff all the roots of the residue class $\Pi_\Theta(\alpha)$ have the same length, iff each sphere with the same invariant geometry as $S^2(\alpha)$ is the translate of $S^2(\alpha)$ by an isometry of $U$. \item The homotopy classes of all the 2-spheres $S^2(\alpha)$, $\alpha \not\in \Pi_\Theta$, coincide with their invariant geometry classes iff either the root system $\Pi$ is simply laced and $\Theta$ is arbitrary, or else the root system is not simply laced and $\Theta = \emptyset$. \end{enumerate} \end{theorem} \begin{proof} For item (i), Proposition \ref{propos1} gives that $W_\Theta$ acts transitively on the roots of the same length in $\Pi_\Theta(\alpha)$, thus $\beta = w^*\alpha$ for some $w \in W_\Theta$. Now Proposition \ref{resid-invar} gives the invariance of $\Pi_\Theta(\alpha)$ and $\Pi^\vee_\Theta(\alpha^\vee)$ under $W_\Theta = W_{\Theta^\vee}$. Thus $\beta^\vee = (w \alpha)^\vee = w \alpha^\vee$, so that $\beta^\vee$ is in the same mod $R^\vee_\Theta$ class as $\alpha^\vee$. Hence $S^2(\beta)$ has the same homotopy class as $S^2(\alpha)$, as claimed. Item (ii) follows from Theorem \ref{propos3} items (ii) and (i), using Proposition \ref{propos:isometria-esferas} for the last conclusion. Item (iii) is an immediate consequence of Corollary \ref{corol3}.
\end{proof} For the flag manifolds of {\em simple} compact Lie groups, it follows that the invariant geometry classes of spheres coincide with their homotopy classes iff the root system is simply laced, of type $A$, $D$ or $E$, with arbitrary $\Theta$ (thus any flag manifold of these types), or else of type $B$, $C$, $F$ or $G$ with $\Theta = \emptyset$ (thus only the maximal flag manifold of these types). \section{Action of $W$ on the isotropy components} \label{actionW} The results of this section are stated and proved using purely the combinatorics of root systems. We consider, more generally, nonreduced root systems. Recall the partition $$ \Pi \cup \{0\} = \Pi_0 \cup \Pi_1 \cup \cdots \cup \Pi_s $$ into mod $R_\Theta$ residue classes considered in the last section, which depends solely on the root data $\Pi \supseteq \Pi_\Theta$, where $\Pi_0$ is the residue class of zero. Let $W_\Theta$ be the Weyl group of the root system $\Pi_\Theta$. In this section we characterize when $W_\Theta$ is transitive on each nonzero residue class. Note first that it leaves each residue class mod $R_\Theta$ invariant. \begin{proposition} \label{resid-invar} Each residue class of roots mod $R_\Theta$ is $W_\Theta$-invariant. \end{proposition} \begin{proof} Recall the annihilator $\t_\Theta$ of $\Theta$ in $\t$ given by (\ref{eq:toro-isotropia}). Then $\phi \in \t^*$ satisfies $\phi|_{\t_\Theta} = 0$ iff $\phi$ is a linear combination of roots in $\Theta$. It follows that two roots satisfy $\alpha = \beta$ mod $R_\Theta$ iff their restrictions $\alpha|_{\t_\Theta} = \beta|_{\t_\Theta}$ coincide. Since $W_\Theta$ centralizes $\t_\Theta$, it follows that each residue class mod $R_\Theta$ is invariant by $W_\Theta$. \end{proof} Recall the duality bijection $\Pi \to \Pi^\vee$, $\alpha \mapsto \alpha^\vee = 2\alpha/\prod{\alpha,\alpha}$, which is in general not linear, even though $(\alpha^\vee)^\vee = \alpha$. It is linear only for the simply laced root systems.
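A minimal instance of this nonlinearity occurs already in $B_2$.

\begin{example}
In $B_2$, with simple roots $\alpha_1$ long ($|\alpha_1|^2 = 2$) and $\alpha_2$ short ($|\alpha_2|^2 = 1$), the sum $\alpha_1 + \alpha_2$ is a short root, so that $$ (\alpha_1 + \alpha_2)^\vee = 2\alpha_1 + 2\alpha_2 \neq \alpha_1 + 2\alpha_2 = \alpha_1^\vee + \alpha_2^\vee $$ Thus duality is not additive on roots, although it is an involution.
\end{example}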
We fix a subset of simple roots $\Theta \subset \Sigma$. Recall that $W$ is generated by the simple reflections $r_\alpha$, $\alpha \in \Sigma$, while $W_\Theta$ is generated by the simple reflections $r_\alpha$, $\alpha \in \Theta$. The Weyl groups of $\Pi_\Theta$ and of its dual root system $\Pi^\vee_{\Theta}$ coincide, $W_\Theta = W_{\Theta^\vee}$, since the corresponding reflections satisfy $r_{\alpha} = r_{\alpha^\vee}$. Since the Weyl group elements are isometries, duality respects the Weyl group actions, that is, $(w\alpha)^\vee = w \alpha^\vee$ for all $w \in W$. Recall that the possible squared lengths of roots in a root system are 1, 2 and 4, called short, long and longer roots, respectively. Simple roots are either short or long. In a doubly laced root system, for a short root $\alpha$ we have that $\alpha^\vee = 2 \alpha$, for a long root $\alpha$ we have that $\alpha^\vee = \alpha$, and for a longer root we have that $\alpha^\vee = \alpha/2$. For a triply laced root system, exchange 2 for 3 and exclude the longer roots. It follows that duality $\Sigma \to \Sigma^\vee$ exchanges short and long simple roots. \begin{example} The following picture illustrates the nonzero classes mod $R_\Theta$ for the nonreduced root system of type $BC_2$; only the positive roots are displayed. \begin{center} \def\svgwidth{12cm} \input{bc2-alfa-e-beta.pdf_tex} \end{center} \end{example} A sequence of roots $\beta_1, \ldots, \beta_k$ connecting $\beta_1$ to $\beta_k$ is a $\Theta$-sequence when $\beta_{i+1} - \beta_i \in\Pi_\Theta$ $(i=1,2,\ldots,k-1)$. A subset of roots is $\Theta$-connected when every pair of roots in the subset can be connected by a $\Theta$-sequence. Siebenthal's fundamental result is that each nonzero residue class $\Pi_i$ is $\Theta$-connected (Proposition 2.1 of \cite{siebenthal}). What about the Weyl group of the root system: can it be used to connect the roots in each residue class?
Certainly not if the residue class has roots of two distinct lengths. Nor for the residue class of zero, since the diagram of $\Theta$ can be disconnected. Apart from that, we show in the next result that the answer is positive, by adapting Siebenthal's proof. \begin{proposition} \label{propos1} $W_\Theta$ is transitive on the roots of the same length in each nonzero residue class mod $R_\Theta$. \end{proposition} \begin{proof} Fix a possible length $L$ of the roots, $L^2 =$ 1, 2 or 4. Let $\beta_1 \neq \beta_2$ be distinct roots of equal length $L$ which are $\Theta$-equivalent. Since $\beta_1$ and $\beta_2$ are distinct and have the same length, the only way they can be proportional is $\beta_1 = - \beta_2$. By the Cauchy--Schwarz inequality we have that $$ \prod{\beta_1 - \beta_2, \beta_1} = |\beta_1|^2 - \prod{\beta_2, \beta_1} > |\beta_1|^2 - |\beta_2||\beta_1| = 0 $$ Writing $\Theta = \{ \alpha_1, \ldots, \alpha_n \}$, we have that $\beta_1 - \beta_2 = \sum_i n_i \alpha_i$, for $n_i \in \mathbb{Z}$. It follows that $$ \prod{\beta_1 - \beta_2, \beta_1} = \sum_i n_i \prod{\alpha_i,\beta_1} > 0 $$ We claim that there exists $n_j \neq 0$ such that $\prod{\beta_1,\alpha_j} \neq 0$ and has the same sign as $n_j$, that is, $n_j \prod{\beta_1,\alpha_j} > 0$. Indeed, suppose not; then $n_j \prod{\beta_1,\alpha_j} \leq 0$ for all $j$ and adding up we get $\prod{\beta_1 - \beta_2, \beta_1} = \sum_i n_i \prod{\alpha_i,\beta_1} \leq 0$, a contradiction. We thus get the root \begin{eqnarray*} r_{\alpha_j}(\beta_1) & = & \beta_1 - \prod{\beta_1,\alpha_j^\vee} \alpha_j \\ & = & \beta_2 + n_1 \alpha_1 + \cdots + (n_j - \prod{\beta_1,\alpha_j^\vee})\alpha_j + \cdots \end{eqnarray*} where the coefficient of $\alpha_j$ in the above root has absolute value strictly smaller than that of $n_j$, since $n_j$ and $\prod{\beta_1,\alpha_j^\vee}$ have the same sign.
Also note that $r_{\alpha_j}(\beta_1)$ has the same length $L$ as $\beta_1$ and lies in the same mod $R_\Theta$ residue class as $\beta_1$. Proceeding inductively, the coefficients in $\Theta$ decrease strictly and in a finite number of steps we get roots $\alpha_j, \ldots, \alpha_k$ in $\Theta$ such that $$ r_{\alpha_k} \cdots r_{\alpha_j} (\beta_1) = \beta_2 $$ where $r_{\alpha_k} \cdots r_{\alpha_j} \in W_\Theta$. \end{proof} It follows that a necessary and sufficient criterion for $W_\Theta$ to be transitive on a nonzero residue class mod $R_\Theta$ is that all roots of this class have the same length: sufficiency is the above proposition, while necessity holds since Weyl group elements preserve root lengths. Next, we give some criteria to check this. First, a long root criterion. Note that, since $\Theta$ is a subset of simple roots, it can only contain short or long roots. \begin{lemma} \label{lemma2} Let $\alpha$ be a short simple root. Then there exists a long root $\phi$ in $\Pi$ such that $\prod{\alpha, \phi} \neq 0$. \end{lemma} \begin{proof} Since $\alpha$ is short simple, its connected component on the Dynkin diagram of $\Pi$ contains a long simple root. It follows that $\alpha$ is connected to a long simple root $\beta$, more precisely, there exists a sequence of short simple roots $ \alpha = \alpha_0,\, \alpha_1,\, \ldots,\, \alpha_n$ such that $\prod{\alpha_i, \alpha_{i+1}^\vee} = -1$ and $k = \prod{\alpha_n,\beta^\vee} < 0$. If $n=0$ then $\prod{\alpha,\beta^\vee} < 0$ and we are done. Else, denote by $r_i$ the reflection on the simple root $\alpha_i$ and consider $$ \begin{array}{clll} \phi_0 & = r_n(\beta) & = \beta - k \alpha_n \\ \phi_1 & = r_{n-1}(\phi_0) & = \beta - k( \alpha_n + \alpha_{n-1}) \\ \phi_2 & = r_{n-2}(\phi_1) & = \beta - k( \alpha_n + \alpha_{n-1} + \alpha_{n-2}) \\ \cdots \\ \phi_n & = r_{0}(\phi_{n-1}) & = \beta - k( \alpha_n + \ldots + \alpha_1 + \alpha ) \end{array} $$ Then $\phi = \phi_n$ is a long root, since it is the image of $\beta$ by the Weyl group.
Thus $\phi^\vee = \phi$ and then $$ \prod{\alpha,\phi^\vee} = -k( \prod{\alpha, \alpha_1} + \prod{\alpha, \alpha} ) = -k( -1 + 2 ) = -k > 0 $$ since, by construction, $\alpha$ is orthogonal to $\beta,\, \alpha_n,\, \ldots, \alpha_2$. This proves our claim. \end{proof} \begin{proposition} \label{propos2} The following are equivalent: \begin{enumerate}[(i)] \item $W_\Theta$ is transitive on each nonzero residue class mod $R_\Theta$. \item Each nonzero residue class mod $R_\Theta$ consists of roots of the same length. \item The roots in $\Theta$ are long. \end{enumerate} \end{proposition} \begin{proof} It is immediate that (i) implies (ii). To prove that (ii) implies (iii), suppose by contradiction that $\Theta$ contains a short root $\alpha$. Then by the previous Lemma there exists a long root $\phi$ such that the $\alpha$-string of roots through $\phi$ is nontrivial. Since $\phi$ is long and $\alpha$ is short, this $\alpha$-string contains both long and short roots, and since $\alpha \in \Theta$, this $\alpha$-string is contained in the residue class of $\phi$ mod $R_\Theta$, which contradicts (ii). To prove that (iii) implies (i), proceed as in Lemma 4.3, Item 1, of \cite{mauro-luiz}. \end{proof} \begin{remark} \label{remark:simply-laced} It follows that for simply laced root systems, which contain only long roots, $W_\Theta$ is always transitive on the nonzero residue classes mod $R_\Theta$, for arbitrary $\Theta$. \end{remark} Now we characterize transitivity on each residue class by using duality. First, an example that will illustrate our next result. \begin{example} \label{exemplog2} The following picture illustrates the nonzero classes mod $R_\Theta$ for the root system of type $G_2$, the nonzero classes mod $R^\vee_\Theta$ of its (isomorphic) dual $G_2^\vee$ and also the image of these residue classes under duality, for $\Theta = \{\alpha\}$. Only the positive roots are displayed.
\begin{center} \def\svgwidth{13.5cm} \input{g2-curta-longa-e-dual.pdf_tex} \end{center} Note that $W_\Theta$ is not transitive on one of the two mod $R_\Theta$ classes and that their images by duality in the dual $G_2^\vee$ are not contained in the mod $R_\Theta^\vee$ classes. In the dual $G_2^\vee$, where $\alpha^\vee$ takes the role of the long root, note that $W_\Theta$ is now transitive on the three classes mod $R_\Theta^\vee$ and that their images by duality in the original $G_2$ are contained in the two mod $R_\Theta$ classes. \end{example} \begin{theorem} \label{propos3} Given a root $\alpha \not\in \Pi_\Theta$, the following are equivalent: \begin{enumerate}[(i)] \item $W_\Theta$ is transitive on the residue class of $\alpha$ mod $R_\Theta$. \item $\Pi_\Theta(\alpha)^\vee \subseteq \Pi^\vee_\Theta(\alpha^\vee)$. \item The roots in $\Pi_\Theta(\alpha)$ have the same length. \end{enumerate} \end{theorem} \begin{proof} To prove that (i) implies (ii), first note that transitivity implies that the residue class of $\alpha$ is given by the orbit $W_\Theta \alpha = \Pi_\Theta(\alpha)$. For the orbit we have $$ (W_\Theta \alpha)^\vee = W_\Theta \alpha^\vee \subseteq \Pi^\vee_\Theta(\alpha^\vee) $$ since duality respects the Weyl group action and since the mod $R_\Theta^\vee$ classes in $\Pi^\vee$ are $W_\Theta$-invariant. To prove that (ii) implies (iii), first note that if a nonzero multiple $k \alpha$ belongs to $\Pi_\Theta$, then $\alpha \in \Pi_\Theta$. In fact, write $\alpha = \sum_i r_i \alpha_i$ for integers $r_i$ and simple roots $\alpha_i$, so that, by linear independence of the simple roots, $k\alpha = \sum_i kr_i \alpha_i \in \Pi_\Theta$ implies that $kr_i = 0$ for $\alpha_i \not\in \Theta$. It follows that $r_i = 0$ for $\alpha_i \not\in \Theta$ and thus $\alpha \in \Pi_\Theta$. Now, suppose that the residue class of $\alpha$ has roots of different lengths.
Then, there exists a root $\alpha + \gamma$ with length different from the length of $\alpha$, with $\gamma \in R_\Theta$. We have the following possibilities. \begin{enumerate}[(a)] \item[] Suppose first that the root system is doubly laced. \item $\alpha$ is long and $\alpha + \gamma$ is short: it follows that $(\alpha + \gamma)^\vee = 2 \alpha + 2 \gamma$ and $\alpha^\vee = \alpha$. But from (ii) we get that $(\alpha + \gamma)^\vee = \alpha^\vee + \phi$, for some $\phi \in R^\vee_\Theta$, so that $$ 2 \alpha + 2 \gamma = \alpha + \phi \quad \text{and thus} \quad \alpha = \phi - 2\gamma $$ Write $\gamma = \sum_i r_i \alpha_i$, with $r_i \in \mathbb{Z}$, $\alpha_i \in \Theta$. Then $2\gamma = \sum_i 2 r_i \alpha_i = \sum_i s_i \alpha^\vee_i$, where $s_i = r_i$ if $\alpha_i$ is short and $s_i = 2 r_i$ if $\alpha_i$ is long. This shows that $2\gamma \in R^\vee_\Theta$ and thus $\alpha \in R^\vee_\Theta$. But then $\alpha^\vee = \alpha \in \Pi^\vee_\Theta \cap \Pi_\Theta$, so that $\alpha \in \Pi_\Theta$, a contradiction. \item $\alpha$ is short and $\alpha + \gamma$ is long: it follows that $(\alpha + \gamma)^\vee = \alpha + \gamma$ and $\alpha^\vee = 2\alpha$. Again from (ii) we get that $$ \alpha + \gamma = 2\alpha + \phi \quad \text{and thus} \quad \alpha = \gamma - \phi $$ for some $\phi \in R^\vee_\Theta$. Write $\phi = \sum_i r_i \alpha^\vee_i$, with $r_i \in \mathbb{Z}$, $\alpha_i \in \Theta$. Since for the simple roots we have that $\alpha^\vee_i$ is either $\alpha_i$ or $2\alpha_i$, it follows that $\phi$ lies in $R_\Theta$. This shows that the root $\alpha$ lies in $R_\Theta$ and thus in $\Pi_\Theta$, a contradiction. \item[] If the root system is triply laced we can replace 2 by 3 in the previous arguments and get that $2 \alpha \in R_\Theta$. By the above equation it follows that $\alpha \in \Pi_\Theta$, a contradiction.
The next possibilities can only happen for doubly laced root systems: \item $\alpha$ is long and $\alpha + \gamma$ is longer: it follows that $(\alpha + \gamma)^\vee = (\alpha + \gamma)/2$ and $\alpha^\vee = \alpha$. Again from (ii) we get that $$ (\alpha + \gamma)/2 = \alpha + \phi \quad \text{and thus} \quad \alpha = \gamma - 2\phi $$ for some $\phi \in R^\vee_\Theta$. As in item (b), we have that $\phi$ lies in $R_\Theta$. This shows that the root $\alpha$ lies in $R_\Theta$ and thus in $\Pi_\Theta$, a contradiction. \item $\alpha$ is short and $\alpha + \gamma$ is longer: it follows that $(\alpha + \gamma)^\vee = (\alpha + \gamma)/2$ and $\alpha^\vee = 2\alpha$. Again from (ii) we get that $$ (\alpha + \gamma)/2 = 2\alpha + \phi \quad \text{and thus} \quad 3\alpha = \gamma - 2\phi $$ As in item (b) we have that $\phi \in R_\Theta$, which shows that $3\alpha \in R_\Theta$. By the above equation this implies that the root $\alpha$ lies in $R_\Theta$ and thus in $\Pi_\Theta$, a contradiction. \end{enumerate} That (iii) implies (i) follows directly from Proposition \ref{propos1}. \end{proof} The next result characterizes full transitivity by duality. \begin{corollary} \label{corol3} The following are equivalent: \begin{enumerate}[(i)] \item $W_\Theta$ is transitive on each nonzero residue class mod $R_\Theta$ and mod $R_\Theta^\vee$. \item $\Pi_\Theta(\alpha)^\vee = \Pi^\vee_\Theta(\alpha^\vee)$ for all roots $\alpha$. \item Either the root system is simply laced and $\Theta$ is arbitrary or else the root system is not simply laced and $\Theta = \emptyset$. \end{enumerate} \end{corollary} \begin{proof} Item (i) implies, by the previous Theorem, that $\Pi_\Theta(\alpha)^\vee \subseteq \Pi^\vee_\Theta(\alpha^\vee)$ and $\Pi^\vee_\Theta(\alpha^\vee)^\vee \subseteq \Pi_\Theta(\alpha)$, for all roots $\alpha$. Applying the duality map to the last inclusion we get $\Pi^\vee_\Theta(\alpha^\vee) \subseteq \Pi_\Theta(\alpha)^\vee$ and thus get item (ii).
Again by the previous Theorem, item (ii) clearly implies item (i). To prove that (i) implies (iii), use the long root criterion (Proposition \ref{propos2}): (i) implies that the roots in both $\Theta$ and $\Theta^\vee$ are long. Since duality exchanges long and short roots, this can only happen if $\Theta$ is empty or if the root system is simply laced and does not contain short roots. Item (iii) implies (i) by Remark \ref{remark:simply-laced} when the root system is simply laced. When $\Theta = \emptyset$, (i) follows trivially, since in this case $W_\Theta = \{1\}$ and the classes mod $R_\Theta$ and $R^\vee_\Theta$ are singletons. \end{proof}
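The criteria above are easy to verify by brute force in small rank. The sketch below is our own illustration (not part of the text) in ad hoc coordinates for $B_2$, whose roots are $\pm e_1$, $\pm e_2$, $\pm e_1 \pm e_2$, taking $\Theta$ to consist of a single simple root: in agreement with Proposition \ref{propos2}, $W_\Theta$ is transitive on every nonzero residue class mod $R_\Theta$ when the chosen root is long, and fails to be when it is short.

```python
import numpy as np
from itertools import product

# Roots of B2 in ad hoc coordinates: all nonzero vectors with entries in {-1, 0, 1}.
ROOTS = [np.array(v) for v in product((-1, 0, 1), repeat=2) if any(v)]

def reflect(beta, theta):
    # r_theta(beta) = beta - <beta, theta^vee> theta; the Cartan integer
    # 2<beta,theta>/<theta,theta> is exact in integer arithmetic for B2.
    return beta - 2 * np.dot(beta, theta) // np.dot(theta, theta) * theta

def same_class(b1, b2, theta):
    # For Theta = {theta}: beta ~ beta' mod R_Theta iff their difference
    # is an integer multiple of theta (small brute-force range suffices).
    return any(((b1 - b2) == k * theta).all() for k in range(-4, 5))

def transitive_on_nonzero_classes(theta):
    # Roots in the class of zero (i.e. in Pi_Theta) are excluded.
    nonzero = [b for b in ROOTS if not same_class(b, 0 * theta, theta)]
    classes = []
    for b in nonzero:
        for cl in classes:
            if same_class(b, cl[0], theta):
                cl.append(b)
                break
        else:
            classes.append([b])
    # W_Theta = {1, r_theta}; transitivity means each class is one orbit.
    for cl in classes:
        for b in cl:
            orbit = {tuple(b), tuple(reflect(b, theta))}
            if any(tuple(c) not in orbit for c in cl):
                return False
    return True
```

With the short choice $\theta = e_2$, the residue class of $e_1$ is $\{e_1,\, e_1 + e_2,\, e_1 - e_2\}$, which mixes the two lengths, in agreement with Theorem \ref{propos3}.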
https://arxiv.org/abs/1302.6632
Constructive proof of the Carpenter's Theorem
We give a constructive proof of Carpenter's Theorem due to Kadison. Unlike the original proof, our approach also yields the real case of this theorem.
\section{Kadison's theorem} In \cite{k1} and \cite{k2} Kadison gave a complete characterization of the diagonals of orthogonal projections on a Hilbert space $\mathcal H$. \begin{thm}[Kadison]\label{Kadison} Let $\{d_{i}\}_{i\in I}$ be a sequence in $[0,1]$. Define \[a=\sum_{d_{i}<1/2}d_{i} \quad\text{and}\quad b=\sum_{d_{i}\geq 1/2}(1-d_{i}).\] There exists a projection $P$ with diagonal $\{d_{i}\}$ if and only if one of the following holds: \begin{enumerate}[(i)] \item $a,b<\infty$ and $a-b\in\mathbb{Z}$, \item $a=\infty$ or $b=\infty$. \end{enumerate} \end{thm} The goal of this paper is to give a constructive proof of the sufficiency direction of Kadison's theorem. Kadison \cite{k1,k2} referred to the necessity part of Theorem \ref{Kadison} as the Pythagorean Theorem and the sufficiency as Carpenter's Theorem. Arveson \cite{a} gave a necessary condition on the diagonals of a certain class of normal operators with finite spectrum. When specialized to the case of two-point spectrum, Arveson's theorem yields the Pythagorean Theorem, i.e., the necessity of (i) or (ii) in Theorem \ref{Kadison}. Whereas Kadison's original proof is a beautiful direct argument, Arveson's proof uses Fredholm index theory. In contrast, up to now there were no proofs of Carpenter's Theorem other than the original one by Kadison, although its extension to $\mathrm{II}_1$ factors was studied by Argerami and Massey \cite{am}. In this paper we give an alternative proof of Carpenter's Theorem which has two main advantages over the original. First, the original proof does not yield the real case, which ours does. Second, our proof is constructive in the sense that it gives a concrete algorithmic process for finding the desired projection. This is distinct from Kadison's original proof, which is mostly existential. The paper is organized as follows. In Section 2 we state preliminary results such as Horn's Theorem for finite rank operators.
These results are then used in Section 3 to show the sufficiency of (i) in Theorem \ref{Kadison}. The key role in the proof is played by a lemma from \cite{mbjj} which enables modifications of diagonal sequences into more favorable configurations. Section 4 contains the proof of sufficiency of (ii) in Theorem \ref{Kadison}. To this end we introduce an algorithmic procedure for constructing a projection with prescribed diagonal which is reminiscent of the spectral tetris construction introduced by Casazza et al. \cite{cfmwz} in their study of tight fusion frames. Finally, in Section 5 we formulate an open problem of characterizing spectral functions of shift-invariant spaces in $L^2(\mathbb R^d)$, introduced by the first author and Rzeszotnik in \cite{br}, which was a motivating force behind this paper. \section{Preliminary results} The main goal of this section is to give a constructive proof of Horn's Theorem \cite[Theorem 9.B.2]{moa}, which is the sufficiency part of the Schur-Horn Theorem \cite{horn, schur}. We present this proof both to make the proof of part (i) of Carpenter's Theorem self-contained and to cover the more general case of finite rank operators on an infinite dimensional Hilbert space, see also \cite{ak, kw0, kw}. Moreover, we also give an argument reducing Theorem \ref{Kadison} to the countable case. \begin{thm}[Horn's Theorem]\label{Horn finite rank} Let $\{\lambda_{i}\}_{i=1}^{N}$ be a positive nonincreasing sequence, and let $\{d_{i}\}_{i=1}^{M}$ be a nonnegative nonincreasing sequence, where $M\in\mathbb{N} \cup\{\infty\}$ and $M\ge N$.
If \begin{equation}\label{finite rank majorization}\begin{split} \sum_{i=1}^{n}d_{i} \leq \sum_{i=1}^{n}\lambda_{i} & \quad \text{for all }n\leq N,\\ \sum_{i=1}^{M}d_{i} = \sum_{i=1}^{N}\lambda_{i}, & \\ \end{split}\end{equation} then there is a positive rank $N$ operator $S$ on a real $M$-dimensional Hilbert space $\mathcal{H}$ with positive eigenvalues $\{\lambda_{i}\}_{i=1}^{N}$ and diagonal $\{d_{i}\}_{i=1}^{M}$. \end{thm} We need a basic lemma. \begin{lem}\label{Horn rank 1} Let $M\in\mathbb{N}\cup\{\infty\}$. If $\{d_{i}\}_{i=1}^{M}$ is a nonzero nonnegative sequence with \[\sum_{i=1}^{M}d_{i}=\lambda<\infty,\] then there is a positive rank $1$ operator $S$ on an $M$-dimensional Hilbert space $\mathcal H$ with eigenvalue $\lambda$ and diagonal $\{d_{i}\}$.\end{lem} \begin{proof} Let $\{e_{i}\}_{i=1}^{M}$ be an orthonormal basis for the Hilbert space $\mathcal{H}$. Set \[v = \sum_{i=1}^{M}\sqrt{d_{i}}e_{i},\] and define $S:\mathcal{H}\to\mathcal{H}$ by $Sf = \langle f,v\rangle v$ for each $f\in\mathcal{H}$. Clearly $S$ is rank $1$, and since $\norm{v}^{2}=\lambda$ the vector $v$ is an eigenvector with eigenvalue $\lambda$. Finally, it is simple to check that $S$ has the desired diagonal.\end{proof} \begin{proof}[Proof of Theorem \ref{Horn finite rank}] The proof proceeds by induction on $N$. The base case $N=1$ follows from Lemma \ref{Horn rank 1}. Suppose that Theorem \ref{Horn finite rank} holds for ranks up to $N-1$.
Define \[m_{0}=\max\bigg\{m :\sum_{i=m}^{M}d_{i}\geq \lambda_{N}\bigg\}\] and \[\delta=\bigg(\sum_{i=m_{0}}^{M}d_{i}\bigg)-\lambda_{N}.\] Note that $m_{0}\geq N$ and define $\{\tilde{\lambda}_{i}\}_{i=1}^{m_{0}-1}$ by \begin{equation*}\tilde{\lambda}_{i}=\left\{\begin{array}{ll} \lambda_{i} & i=1,2,\ldots,N-1\\ 0 & i=N,\ldots,m_{0}-1.\end{array}\right.\end{equation*} Note that $d_{m_{0}}>\delta$, and define the sequence $\{\tilde{d}_{i}\}_{i=m_{0}}^{M}$ by \begin{equation*}\tilde{d}_{i}=\left\{\begin{array}{ll} d_{m_{0}}-\delta & i=m_{0}\\ d_{i} & i>m_{0}.\end{array}\right.\end{equation*} Since \[\sum_{i=m_{0}}^{M}\tilde{d}_{i}=\lambda_{N},\] we can apply Lemma \ref{Horn rank 1} to get a positive, rank 1 operator $\tilde{S}_{2}$ with eigenvalue $\lambda_{N}$ and diagonal $\{\tilde{d}_{i}\}_{i=m_{0}}^{M}$. Now define $\{\tilde{d}_{i}\}_{i=1}^{m_{0}-1}$ by \begin{equation*}\tilde{d}_{i}=\left\{\begin{array}{ll} d_{i} & i<m_{0}-1\\ d_{m_{0}-1}+\delta & i=m_{0}-1.\end{array}\right.\end{equation*} Note that \[\sum_{i=1}^{m_{0}-1}\tilde{d}_{i} = \sum_{i=1}^{m_{0}-1}\tilde{\lambda}_{i}\] and clearly we have \[\sum_{i=1}^{n}\tilde{d}_{i} \leq \sum_{i=1}^{n}\tilde{\lambda}_{i}\] for all $n=1,\ldots,m_{0}-2$. Thus, by the induction hypothesis there is a positive rank $N-1$ operator $\tilde{S}_{1}$ with diagonal $\{\tilde{d}_{i}\}_{i=1}^{m_{0}-1}$ and eigenvalues $\{\tilde{\lambda}_{i}\}_{i=1}^{m_{0}-1}$. Now, the operator $\tilde{S}=\tilde{S}_{1}\oplus\tilde{S}_{2}$ has the desired eigenvalues, but the diagonal $\{\tilde{d}_{i}\}_{i=1}^{M}$. However, $\{\tilde{d}_{i}\}$ only differs from $\{d_{i}\}$ at $i=m_{0}-1$ and $m_{0}$. Let $\alpha\in[0,1]$ be such that $\alpha(d_{m_{0}-1}+\delta) + (1-\alpha)(d_{m_{0}}-\delta)=d_{m_{0}-1}$.
Define the unitary operator $U$ on the standard orthonormal basis $\{e_{i}\}_{i=1}^{M}$ by \begin{equation*} U(e_{i})=\begin{cases} \sqrt{\alpha}e_{m_{0}-1} - \sqrt{1-\alpha}e_{m_{0}} & i=m_{0}-1\\ \sqrt{1-\alpha}e_{m_{0}-1} + \sqrt{\alpha}e_{m_{0}} & i=m_{0}\\ e_{i} & \text{otherwise}. \end{cases} \end{equation*} It is a simple calculation to see that $S=U^{\ast}\tilde{S}U$ has the desired diagonal in the basis $\{e_{i}\}_{i=1}^{M}$. This completes the proof of Theorem \ref{Horn finite rank}.\end{proof} The following ``moving toward $0$-$1$'' lemma first appeared in \cite{mbjj}. Its proof is constructive as it consists of a finite number of ``convex moves'' as at the end of the previous proof. Moreover, from the proof in \cite{mbjj} it follows that Lemma \ref{ops} holds for real Hilbert spaces as well as complex. \begin{lem}\label{ops} Let $\{d_{i}\}_{i\in I}$ be a sequence in $[0,1]$. Let $I_0, I_1 \subset I$ be two disjoint finite subsets such that $\max\{d_{i}: i \in I_0\}\leq\min\{d_{i}: i \in I_1\}$. Let $\eta_{0}\geq 0$ and \[ \eta_{0}\leq\min\bigg\{\sum_{i\in I_0} d_{i},\sum_{i\in I_1} (1-d_{i}) \bigg\}.\] (i) There exists a sequence $\{\tilde d_{i}\}_{i\in I}$ in $[0,1]$ satisfying \begin{align} \label{ops0} \tilde d_i = d_i &\quad\text{for } i \in I \setminus (I_0 \cup I_1), \\ \label{ops1} \tilde d_i \leq d_i \quad i\in I_0, &\quad\text{and}\quad \tilde d_i \ge d_i, \quad i\in I_1, \\ \label{ops2} \eta_0+\sum_{i\in I_0}\tilde{d}_{i} =\sum_{i\in I_0}d_{i} &\quad\text{and}\quad \eta_0+\sum_{i\in I_1} (1-\tilde{d}_{i})=\sum_{i\in I_1} (1-d_{i}). \end{align} (ii) For any self-adjoint operator $\tilde E$ on $\mathcal{H}$ with diagonal $\{\tilde d_{i}\}_{i\in I}$, there exists an operator $E$ on $\mathcal{H}$ unitarily equivalent to $\tilde E$ with diagonal $\{d_{i}\}_{i\in I}$. \end{lem} We end this section by remarking that the indexing set $I$ in Theorem \ref{Kadison} need not be countable.
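The induction in the proof of Theorem \ref{Horn finite rank} is genuinely algorithmic. As an illustration (ours, not part of the paper), the following numpy sketch assembles a finite-dimensional operator exactly as in that proof: a rank one block as in Lemma \ref{Horn rank 1}, a recursive call for the remaining eigenvalues, and a final two-dimensional ``convex move'' restoring the diagonal. It assumes finite $M$ and the majorization conditions \eqref{finite rank majorization}.

```python
import numpy as np

def rank_one(d):
    # Lemma `Horn rank 1': v_i = sqrt(d_i) gives a rank one positive
    # operator with eigenvalue sum(d) and diagonal d.
    v = np.sqrt(np.asarray(d, dtype=float))
    return np.outer(v, v)

def horn(lams, d):
    # lams: positive nonincreasing eigenvalues; d: nonnegative nonincreasing
    # diagonal, assumed to satisfy the majorization conditions.  Returns a
    # symmetric matrix with the prescribed nonzero eigenvalues and diagonal.
    lams, d = list(lams), list(d)
    N, M = len(lams), len(d)
    if N == 1:
        return rank_one(d)
    tails = np.cumsum(d[::-1])[::-1]          # tails[m] = d_m + ... + d_{M-1}
    m0 = max(m for m in range(M) if tails[m] >= lams[-1])
    delta = tails[m0] - lams[-1]
    S2 = rank_one([d[m0] - delta] + d[m0 + 1:])            # eigenvalue lam_N
    S1 = horn(lams[:-1], d[:m0 - 1] + [d[m0 - 1] + delta])  # induction step
    S = np.zeros((M, M))
    S[:m0, :m0], S[m0:, m0:] = S1, S2
    if delta == 0:
        return S
    # convex move: rotate coordinates m0-1, m0 to restore the diagonal
    x, y = d[m0 - 1] + delta, d[m0] - delta
    alpha = (d[m0 - 1] - y) / (x - y)
    U = np.eye(M)
    U[m0 - 1, m0 - 1] = U[m0, m0] = np.sqrt(alpha)
    U[m0 - 1, m0], U[m0, m0 - 1] = np.sqrt(1 - alpha), -np.sqrt(1 - alpha)
    return U.T @ S @ U
```

For instance, `horn([1, 1], [0.9, 0.7, 0.25, 0.15])` returns a symmetric $4\times 4$ matrix with eigenvalues $1,1,0,0$ and the prescribed diagonal, i.e. an orthogonal projection.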
In \cite{k2} the possibility that $I$ is an uncountable set is addressed in all but the most difficult case where $\{d_{i}\}$ and $\{1-d_{i}\}$ are nonsummable \cite[Theorem 15]{k2}. However, the case when $I$ is uncountable is a simple extension of the countable case, as we explain below. \begin{proof}[Proof of reduction of Theorem \ref{Kadison} to countable case] First, we consider a projection $P$ with diagonal $\{d_{i}\}_{i\in I}$ with respect to some orthonormal basis $\{e_{i}\}_{i\in I}$ of a Hilbert space $\mathcal H$. If $a$ or $b$ is infinite then there is nothing to show, so we may assume $a,b<\infty$. Set $J = \{i\in I:d_{i}=0\}\cup\{i\in I:d_{i} = 1\}$, and let $P'$ be the restriction of $P$ to the subspace $\mathcal H'=\overline{\lspan}\{e_{i}\}_{i\in I\setminus J}$. Since $e_{i}$ is an eigenvector of $P$ for each $i\in J$, the subspace $\mathcal H'$ is invariant, $P(\mathcal H') \subset \mathcal H'$. Hence, $P'$ is a projection with diagonal $\{d_{i}\}_{i\in I\setminus J}$. The assumption that $a,b<\infty$ implies $I\setminus J$ is at most countable. Thus, the countable case of Theorem \ref{Kadison} applied to the operator $P'$ yields $a-b\in\mathbb{Z}$. This shows the necessity of (i) or (ii). To show that (i) or (ii) is sufficient, we claim that it is enough to assume that all of the $d_{i}$'s are in $(0,1)$. If we can find a projection $P$ with only these $d_{i}$'s, then we take $\mathbf I$ to be the identity and ${\bf 0}$ the zero operator on Hilbert spaces with dimensions equal to the cardinalities of the sets $\{i\in I: d_i=1\}$ and $\{i\in I: d_i=0\}$, respectively. Then, $P\oplus \mathbf I\oplus {\bf 0}$ has diagonal $\{d_{i}\}$. Since $a$ and $b$ do not change when we restrict to $(0,1)$, we may assume that $\{d_{i}\}_{i\in I}$ has uncountably many terms and is contained in $(0,1)$. There is some $n\in\mathbb{N}$ such that $J=\{i\in I:1/n < d_{i}<1-1/n\}$ has the same cardinality as $I$.
Thus, we can partition $I$ into a collection of countably infinite sets $\{I_{k}\}_{k\in K}$ such that $I_{k}\cap J$ is infinite for each $k\in K$. Each sequence $\{d_{i}\}_{i\in I_{k}}$ contains infinitely many terms bounded away from $0$ and $1$, thus (ii) holds. Again, by the countable case of Theorem \ref{Kadison}, for each $k\in K$ there is a projection $P_{k}$ with diagonal $\{d_{i}\}_{i\in I_{k}}$. Thus, $\bigoplus_{k\in K}P_{k}$ is a projection with diagonal $\{d_{i}\}_{i\in I}$. \end{proof} \section{Carpenter's Theorem part i} The goal of this section is to give a proof of the sufficiency of (i) in Theorem \ref{Kadison}. As a corollary of Theorem \ref{Horn finite rank} we have the summable version of Carpenter's Theorem. \begin{thm}\label{fcpt} Let $M\in\mathbb{N}\cup\{\infty\}$, and let $\{d_{i}\}_{i=1}^{M}$ be a sequence in $[0,1]$. If $\sum_{i=1}^{M}d_{i}\in\mathbb{N}$, then there is a projection $P$ with diagonal $\{d_{i}\}$.\end{thm} \begin{proof} Let $\{d_{i}'\}_{i=1}^{M'}$ be the terms of $\{d_{i}\}$ in $(0,1]$, listed in nonincreasing order. Set $N=\sum_{i=1}^{M}d_{i}$, and define $\lambda_{i}=1$ for $i=1,\ldots,N$. Since $d_{i}'\leq 1$ for all $i$ we have \begin{equation}\label{frcpt1}\sum_{i=1}^{n}d_{i}'\leq \sum_{i=1}^{n}\lambda_{i}\qquad \text{for } n=1,2,\ldots,N.\end{equation} We also have \[\sum_{i=1}^{M'}d_{i}' = N = \sum_{i=1}^{N}\lambda_{i}.\] By Theorem \ref{Horn finite rank} there is a rank $N$ self-adjoint operator $P'$ with positive eigenvalues $\{\lambda_{i}\}_{i=1}^{N}$ and diagonal $\{d_{i}'\}_{i=1}^{M'}$. Since $\lambda_{i} = 1$ for each $i$, the operator $P'$ is a projection. Let $\mathbf{0}$ be the zero operator on a Hilbert space with dimension equal to $|\{i\colon d_{i}=0\}|$. The operator $P'\oplus \mathbf{0}$ is a projection with diagonal $\{d_{i}\}_{i=1}^{M}$. \end{proof} \begin{cor}\label{cptf} Let $M\in\mathbb{N}\cup\{\infty\}$ and $\{d_{i}\}_{i=1}^{M}$ be a sequence in $[0,1]$.
If $\sum_{i=1}^{M}(1-d_{i})\in\mathbb{N}$, then there is a projection $P$ with diagonal $\{d_{i}\}$. \end{cor} \begin{proof} This follows immediately from the observation that a projection $P$ has diagonal $\{d_{i}\}$ if and only if $\mathbf I-P$ is a projection with diagonal $\{1-d_{i}\}$. \end{proof} Finally, we can handle the general case (i) of Carpenter's Theorem. \begin{thm}\label{cptk} Let $\{d_{i}\}_{i\in I}$ be a sequence in $[0,1]$. If \begin{equation}\label{cptk1} a = \sum_{d_{i}<1/2}d_{i}<\infty,\ b =\sum_{d_{i}\geq 1/2}(1-d_{i}) < \infty, \text{ and } a-b\in\mathbb{Z},\end{equation} then there exists a projection $P$ with diagonal $\{d_{i}\}$. \end{thm} \begin{proof} First, note that if $\{d_{i}\}$ or $\{1-d_{i}\}$ is summable, then by \eqref{cptk1} its sum is in $\mathbb{N}$. Thus, we can appeal to Theorem \ref{fcpt} or Corollary \ref{cptf}, respectively, to obtain the desired projection. Hence, we may assume both $0$ and $1$ are limit points of the sequence $\{d_{i}\}$. Next, we claim that it is enough to prove the theorem under the assumption that $d_{i}\in(0,1)$ for all $i$. Indeed, if $P$ is a projection with diagonal $\{d_{i}\}_{d_{i}\in(0,1)}$, $\mathbf I$ is the identity operator on a space of dimension $|\{i\colon d_{i}=1\}|$ and $\mathbf{0}$ is the zero operator on a space of dimension $|\{i\colon d_{i}=0\}|$ then $P\oplus \mathbf I\oplus\mathbf{0}$ is a projection with diagonal $\{d_{i}\}_{i\in I}$. Define $J_{0} = \{i\in I:d_{i}<1/2\}$ and $J_{1}=\{i\in I:d_{i}\geq1/2\}$. Choose $i_{1}\in J_{1}$ such that $d_{i_{1}}\leq d_{i}$ for all $i\in J_{1}$. Choose $J'_{0}\subseteq J_{0}$ such that $J_{0}\setminus J'_{0}$ is finite and \[ \sum_{i\in J'_{0}}d_{i}<1-d_{i_{1}}. \] Let $i_{2}\in J_{1}$ be such that $d_{i_{2}}> d_{i_{1}}$ and \[d_{i_{2}} + \sum_{i\in J'_{0}}d_{i} \geq 1.\] Set \begin{equation}\label{cptk2} \eta_{0} = \sum_{i\in J'_{0}}d_{i}-(1-d_{i_{2}} ) <\sum_{i\in J'_{0}}d_{i}< 1- d_{i_1}.
\end{equation} Let $I_0\subset J'_0$ be a finite set such that \begin{equation}\label{cptk3} \sum_{i\in I_0}d_{i}>\eta_{0}. \end{equation} By \eqref{cptk2} and \eqref{cptk3}, we can apply Lemma \ref{ops} to finite subsets $I_0$ and $I_1=\{i_1\}$ to obtain a sequence $\{\tilde{d}_{i}\}_{i\in I}$ coinciding with $\{d_{i}\}_{i\in I}$ outside of $I_0\cup I_1$ and such that \[ \sum_{i\in I_0}\tilde{d}_{i} = \sum_{i\in I_0}d_{i} - \eta_{0}\qquad\text{and}\qquad 1-\tilde{d}_{i_{1}} = 1-d_{i_{1}} - \eta_{0}.\] Note that \[ \sum_{i\in J'_0\cup\{i_{2}\}}\tilde{d}_{i} = d_{i_{2}} + \sum_{i\in J'_{0}\setminus I_{0}}d_{i} + \sum_{i\in I_{0}}\tilde{d}_{i} = d_{i_{2}} + \sum_{i\in J'_{0}\setminus I_{0}}d_{i} + \sum_{i\in I_{0}}d_{i} - \eta_{0} = 1. \] Thus, by Theorem \ref{fcpt} there is a projection $P_{1}$ with diagonal $\{\tilde{d}_{i}\}_{i\in J'_{0}\cup\{i_{2}\}}$. Next, we note that \begin{align*} \sum_{i\in I\setminus(J'_{0}\cup\{i_{2}\})} (1-\tilde{d}_{i}) &= \sum_{i\in J_{0}\setminus J'_{0}}(1-\tilde{d}_{i}) + \sum_{i\in J_{1}\setminus \{i_{2}\}}(1-\tilde{d}_{i}) \\ & = |J_{0}\setminus J'_{0}| - \sum_{i\in J_{0}\setminus J'_{0}}d_{i} + \sum_{i\in J_{1}\setminus \{i_{2}\}}(1-d_{i}) - \eta_{0}\\ & = |J_{0}\setminus J'_{0}| - \sum_{i\in J_{0}}d_{i} + \sum_{i\in J_{1}}(1-d_{i}) = |J_{0}\setminus J'_{0}| - a+b\in\mathbb{N}. \end{align*} By Corollary \ref{cptf} there is a projection $P_{2}$ with diagonal $\{\tilde{d}_{i}\}_{i\in I\setminus(J'_{0}\cup\{i_{2}\})}$. The projection $P_{1}\oplus P_{2}$ has diagonal $\{\tilde{d}_{i}\}_{i\in I}$. By Lemma \ref{ops} (ii) there is an operator $P$ with diagonal $\{d_{i}\}_{i\in I}$ which is unitarily equivalent to $P_{1}\oplus P_{2}$. Thus, $P$ is the required projection. \end{proof} In \cite[Remark 8]{k1} Kadison asked whether it is possible to construct a projection with specified diagonal so that all of its entries are real and nonnegative. While the answer is positive for rank one, in general it is negative for higher rank projections.
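A quick numeric instance of this negativity (our own sketch, choosing the constant diagonal $2/3$, so that $d_1+d_2+d_3=2$): the rank one complement construction forces negative off-diagonal entries.

```python
import numpy as np

# A projection on R^3 with constant diagonal 2/3: take P = I - v v^T with
# v the normalized all-ones vector.  This is our own choice of instance;
# any unit v with all coordinates nonzero exhibits the same phenomenon.
v = np.ones(3) / np.sqrt(3)
P = np.eye(3) - np.outer(v, v)
# diag(P) = (2/3, 2/3, 2/3), while every off-diagonal entry equals -1/3.
```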
\begin{ex} Consider any sequence $\{d_i\}_{i=1}^3$ of numbers in $(0,1)$ such that $d_1+d_2+d_3=2$. By Theorem \ref{fcpt} there exists a projection $P$ on $\mathbb{R}^3$ with such diagonal. However, some entries of $P$ must be negative. Indeed, $\mathbf I -P$ is a rank one projection. Hence, $(\mathbf I -P)x=\langle x, v \rangle v $ for some unit vector $v=(v_1,v_2,v_3)\in \mathbb{R}^3$. That is, the $(i,j)$ entry of $\mathbf I -P$ equals $v_iv_j$. In particular, $(v_i)^2=1-d_i>0$ for each $i$. This implies that for some $i\ne j$, the off-diagonal entry $(i,j)$ of $\mathbf I -P$ must be positive; indeed, the product $(v_1v_2)(v_1v_3)(v_2v_3)=(v_1v_2v_3)^2$ is positive, so the three off-diagonal products cannot all be negative. Consequently, the $(i,j)$ entry of $P$ is negative. \end{ex} \section{The algorithm and Carpenter's Theorem part ii} In this section we introduce an algorithmic technique for finding a projection with prescribed diagonal. The main result of this section is Theorem \ref{cptalg}. Given a non-summable sequence $\{d_{i}\}$ with all terms in $[0,1/2]$, except possibly one term in $(1/2,1)$, Theorem \ref{cptalg} produces an orthogonal projection with the diagonal $\{d_{i}\}$. Applying this result countably many times allows us to deal with all possible diagonal sequences in part (ii) of Carpenter's Theorem. The procedure of Theorem \ref{cptalg} is reminiscent of the spectral tetris construction of tight frames introduced by Casazza et al. in \cite{cfmwz}, and further investigated in \cite{chkwz}. In fact, the infinite matrix constructed in the proof of Theorem \ref{cptalg} consists of column vectors forming a Parseval frame with squared norms prescribed by the sequence $\{d_{i}\}$. However, our construction was discovered independently with a totally different aim than that of \cite{cfmwz}. \begin{lem}\label{translem} Let $\sigma,d_{1},d_{2}\in[0,1]$.
If $\max\{d_{1},d_{2}\}\leq\sigma$ and $\sigma\leq d_{1}+d_{2}$, then there exists a number $a\in[0,1]$ such that the matrix \begin{equation}\label{transmatrix} \left[\begin{array}{cc} a & \sigma-a\\ d_{1}-a & d_{2}-\sigma+a \end{array}\right] \end{equation} has entries in $[0,1]$ and \begin{equation}\label{transeq}a(d_{1}-a)=(\sigma-a)(d_{2}-\sigma+a).\end{equation} Moreover, if $d_{1}+d_{2}<2\sigma$, then $a$ is unique and given by \begin{equation}\label{transsol} a=\frac{\sigma(\sigma-d_{2})}{2\sigma-d_{1}-d_{2}}.\end{equation} \end{lem} \begin{proof} First, assume $\max\{d_{1},d_{2}\}\leq\sigma$ and $\sigma\leq d_{1}+d_{2}$. If $d_{1}=d_{2}=\sigma$ then any $a\in[0,\sigma]$ will satisfy \eqref{transeq} and the matrix \eqref{transmatrix} will have entries in $[0,1]$. Thus, we may additionally assume $d_{1}+d_{2}<2\sigma$, and hence $\sigma>0$. Since the quadratic terms in \eqref{transeq} cancel out, the equation is linear and the unique solution is given by \eqref{transsol}. It remains to show that the entries of the matrix in \eqref{transmatrix} are in $[0,1]$. It is clear that $a\geq 0$. Next, we calculate \begin{equation}\label{translem.1}\sigma-a = \sigma\left(1-\frac{\sigma-d_{2}}{2\sigma-d_{1}-d_{2}}\right) = \frac{\sigma(\sigma-d_{1})}{2\sigma-d_{1}-d_{2}},\end{equation} which implies that $\sigma-a\geq 0$. Since $\sigma\leq1$ we clearly have $a,\sigma-a\in[0,1]$. It remains to prove that the second row of \eqref{transmatrix} has nonnegative entries. Since $d_{1}+d_{2}\in[\sigma,2\sigma)$ we have \[(d_{1}-a) + (d_{2}-\sigma+a) = d_{1}+d_{2} - \sigma\in[0,\sigma).\] If one of $d_{1}-a$ or $d_{2}-\sigma+a$ is negative, then the other must be positive. Then \eqref{transeq} forces $a=\sigma-a=0$, and hence $\sigma=0$. This contradicts the assumption that $\sigma > 0$. Thus, both $d_{1}-a$ and $d_{2}-\sigma+a$ are nonnegative.
\end{proof} \begin{lem}\label{inj} Let $\{d_{i}\}_{i\in \mathbb{N}}$ be a sequence such that $d_{1}\in [0,1)$, $d_{i}\in[0,\frac{1}{2}]$ for $i\geq 2$ and $\sum_{i=1}^{\infty}d_{i}=\infty$. There is a bijection $\pi:\mathbb{N}\to\mathbb{N}$ such that for each $n\in\mathbb{N}$ we have \begin{equation}\label{inj.1} d_{\pi(k_{n}-1)}\geq d_{\pi(k_{n})}\quad\text{where}\quad k_{n} := \min\left\{k\in \mathbb{N}:\sum_{i=1}^{k}d_{\pi(i)}\geq n\right\}.\end{equation} \end{lem} \begin{proof}For $n\in \mathbb{N}$ define \begin{equation}\label{alg.1} m_{n}:= \min\left\{k\in \mathbb{N}:\sum_{i=1}^{k}d_{i}\geq n\right\}. \end{equation} With the convention that $m_0=0$, define a bijection $\pi_{n}:\{m_{n-1}+1,\ldots,m_{n}\}\to\{m_{n-1}+1,\ldots,m_{n}\}$ such that $\{d_{\pi_n(i)}\}_{i=m_{n-1}+1}^{m_{n}}$ is in nonincreasing order. Finally, define a bijection $\pi:\mathbb{N} \to \mathbb{N}$ by \[ \pi(i)=\pi_n(i) \qquad\text{if } m_{n-1}<i \le m_n,\ n\in \mathbb{N}. \] We claim that \begin{equation}\label{alg.2} m_{n-1}+2 \le k_n \le m_n \qquad\text{for all }n\in \mathbb{N}. \end{equation} Indeed, by the minimality of $m_{n-1}$ we have for $n\ge 2$, \[ \sum_{i=1}^{m_{n-1}+1}d_{\pi(i)} = \sum_{i=1}^{m_{n-1}}d_{i}+d_{\pi(m_{n-1}+1)}<(n-1/2)+1/2=n. \] The above also holds trivially for $n=1$. Thus, $k_n >m_{n-1}+1$ for all $n\in \mathbb{N}$. On the other hand, we have \[ \sum_{i=1}^{m_{n}}d_{\pi(i)} = \sum_{i=1}^{m_{n}}d_{i}\ge n. \] This yields $k_n \le m_n$ and, thus, \eqref{alg.2} is shown. By \eqref{alg.2} we have $m_{n-1}+1 \le k_n-1 <k_n \le m_n$. Since $\{d_{\pi(i)}\}_{i=m_{n-1}+1}^{m_n}$ is nonincreasing, this yields \eqref{inj.1}. \end{proof} \begin{thm}\label{cptalg} Let $\{d_{i}\}_{i\in I}$ be a sequence such that $d_{i_{0}}\in [0,1)$ for some $i_{0}\in I$, $d_{i}\in[0,\frac{1}{2}]$ for all $i\neq i_{0}$, and $\sum_{i\in I}d_{i}=\infty$. There exists an orthogonal projection $P$ with diagonal $\{d_{i}\}_{i\in I}$.
\end{thm} \begin{proof} Since $I$ is a countable set and $\sum_{i\in I} d_{i} = \infty$ we may assume without loss of generality that $I = \mathbb{N}$ and $i_{0} = 1$. By Lemma \ref{inj} there is a bijection $\pi:\mathbb{N}\to\mathbb{N}$ such that \eqref{inj.1} holds. For each $n\in \mathbb{N}$ set \begin{equation}\label{alg.7}\sigma_{n} = n - \sum_{i=1}^{k_{n}-2}d_{\pi(i)}.\end{equation} From the definition of $k_{n}$ we see that \begin{equation}\label{alg.8}\sigma_{n} = n - \sum_{i=1}^{k_{n}}d_{\pi(i)} + d_{\pi(k_{n}-1)} + d_{\pi(k_{n})}\leq d_{\pi(k_{n}-1)} + d_{\pi(k_{n})}.\end{equation} From the minimality of $k_{n}$ and \eqref{inj.1} we see that \[\sigma_{n} = n - \sum_{i=1}^{k_{n}-1}d_{\pi(i)} + d_{\pi(k_{n}-1)} \geq d_{\pi(k_{n}-1)}\geq d_{\pi(k_{n})},\] which implies that \begin{equation}\label{alg.9} \sigma_{n}\geq \max\{d_{\pi(k_{n}-1)}, d_{\pi(k_{n})}\}.\end{equation} By Lemma \ref{translem} for each $n$ there exists $a_{n}\in[0,1]$ such that the matrix \begin{equation*}\left[\begin{array}{cc} a_{n} & \sigma_{n}-a_{n}\\ d_{\pi(k_{n}-1)}-a_{n} & d_{\pi(k_{n})}-\sigma_{n}+a_{n} \end{array}\right]\end{equation*} has non-negative entries and \begin{equation}\label{orth}a_{n}(d_{\pi(k_{n}-1)}-a_{n})=(\sigma_{n}-a_{n})(d_{\pi(k_{n})}-\sigma_{n}+a_{n}).\end{equation} Let $\{e_{i}\}_{i\in\mathbb{N}}$ be an orthonormal basis for a Hilbert space $\mathcal{H}$. Set \[v_{1} = \sum_{i=1}^{k_{1}-2} d_{\pi(i)}^{1/2}e_{i} + a_{1}^{1/2}e_{k_{1}-1} - (\sigma_{1}-a_{1})^{1/2}e_{k_{1}},\] and for $n\geq 2$ define \begin{align*} v_{n} & = (d_{\pi(k_{n-1}-1)}-a_{n-1})^{1/2}e_{k_{n-1}-1} + (d_{\pi(k_{n-1})}-\sigma_{n-1}+a_{n-1})^{1/2}e_{k_{n-1}}\\ & + \sum_{i=k_{n-1}+1}^{k_{n}-2}d_{\pi(i)}^{1/2}e_{i} + a_{n}^{1/2}e_{k_{n}-1} - (\sigma_{n}-a_{n})^{1/2}e_{k_{n}}. \end{align*} We can visualize $\{v_{n}\}_{n\in \mathbb{N}}$ as row vectors expanded in the orthonormal basis $\{e_i\}_{i\in I}$ by the following infinite matrix. 
\[ \begin{bmatrix} v_1 \\ \hline v_2 \\ \hline v_3 \\ \hline \cdots \end{bmatrix} = \left[\begin{array}{ccccccccccccc} \sqrt{d_\cdot} & \cdots & \sqrt{a_1} & -\sqrt{\sigma_1-a_1} \\ & & \sqrt{d_\cdot-a_1} & \sqrt{d_\cdot-\sigma_1+a_1} &\sqrt{d_\cdot} & \cdots & \sqrt{a_2} & -\sqrt{\sigma_2-a_2} \\ & & & & & &\sqrt{d_\cdot-a_2} & \sqrt{d_\cdot-\sigma_2+a_2}& \cdots \\ & & & & & & & & \cdots \end{array} \right] \] In the above matrix empty spaces represent $0$ and $d_\cdot$ is an abbreviation for $d_{\pi(i)}$ in the $i$th column. We claim that $\{v_{n}\}_{n\in \mathbb{N}}$ is an orthonormal set in $\mathcal H$. Indeed, by \eqref{alg.7} we have for $n\geq 2$ \begin{align*} \norm{v_{n}}^{2} & = d_{\pi(k_{n-1}-1)}-a_{n-1} + d_{\pi(k_{n-1})}-\sigma_{n-1}+a_{n-1} + \sum_{i=k_{n-1}+1}^{k_{n}-2}d_{\pi(i)} + a_{n} + \sigma_{n}-a_{n}\\ & = \sum_{i=k_{n-1}-1}^{k_{n}-2}d_{\pi(i)} + \sigma_{n}-\sigma_{n-1}\\ & = \sum_{i=k_{n-1}-1}^{k_{n}-2}d_{\pi(i)} + \left(n-\sum_{i=1}^{k_{n}-2}d_{\pi(i)}\right)-\left(n-1-\sum_{i=1}^{k_{n-1}-2}d_{\pi(i)}\right) = 1. \end{align*} A similar calculation yields $\norm{v_{1}}=1$. This means that each row of our infinite matrix has norm $1$. Moreover, the rows are mutually orthogonal: two vectors $v_n$ and $v_m$ have disjoint supports unless they are consecutive, and the orthogonality of consecutive vectors $v_n$ and $v_{n+1}$ is a consequence of \eqref{orth}. Define the orthogonal projection $P$ by \[Pv = \sum_{n\in\mathbb{N}} \langle v,v_{n}\rangle v_{n}, \qquad v\in \mathcal H.\] It is easy to check that the $i$th column of our infinite matrix has norm equal to $\sqrt{d_{\pi(i)}}$. In other words, for each $i\in\mathbb{N}$ we have \[ \langle Pe_{i},e_{i}\rangle =||Pe_i||^2=\sum_{n\in \mathbb{N}} |\langle e_i,v_n\rangle|^2= d_{\pi(i)}. \] This completes the proof of Theorem \ref{cptalg}. \end{proof} We are now ready to prove Carpenter's Theorem under assumption (ii).
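Before that, the construction in the proof above can be test-driven on a finite block of rows. The sketch below is a hypothetical helper (the function name and the constant test sequence are ours, not the paper's): it computes $k_n$ and $\sigma_n$ as in \eqref{alg.7}, $a_n$ as in \eqref{transsol}, assembles the first rows of the infinite matrix, and checks that the rows are orthonormal while the completed columns have squared norm $d_i$.

```python
import math

def carpenter_rows(d, nrows):
    """First nrows rows v_n of the infinite matrix in the proof above,
    assuming the (0-indexed) sequence d is already arranged as Lemma
    `inj` provides, i.e. d_{k_n - 1} >= d_{k_n}."""
    sq = lambda t: math.sqrt(max(t, 0.0))   # clamp tiny negative rounding
    partial = [0.0]
    for x in d:
        partial.append(partial[-1] + x)
    # k[n] = min k with d_1 + ... + d_k >= n + 1  (k is 1-based)
    k = [next(j for j in range(1, len(partial)) if partial[j] >= n + 1)
         for n in range(nrows)]
    sigma, a = [], []
    for n in range(nrows):
        s = (n + 1) - partial[k[n] - 2]                    # eq. (alg.7)
        d1, d2 = d[k[n] - 2], d[k[n] - 1]                  # d_{k_n-1}, d_{k_n}
        denom = 2 * s - d1 - d2
        # degenerate case d1 = d2 = sigma: any a in [0, sigma] works
        a.append(s / 2 if denom <= 0 else s * (s - d2) / denom)  # eq. (transsol)
        sigma.append(s)
    rows = []
    for n in range(nrows):
        v = [0.0] * k[-1]
        if n == 0:
            for i in range(k[0] - 2):
                v[i] = sq(d[i])
        else:
            v[k[n-1] - 2] = sq(d[k[n-1] - 2] - a[n-1])
            v[k[n-1] - 1] = sq(d[k[n-1] - 1] - sigma[n-1] + a[n-1])
            for i in range(k[n-1], k[n] - 2):
                v[i] = sq(d[i])
        v[k[n] - 2] = sq(a[n])
        v[k[n] - 1] = -sq(sigma[n] - a[n])
        rows.append(v)
    return rows, k

rows, k = carpenter_rows([0.4] * 10, 3)
dot = lambda u, w: sum(x * y for x, y in zip(u, w))
assert all(abs(dot(u, u) - 1) < 1e-6 for u in rows)            # unit rows
assert all(abs(dot(rows[m], rows[n])) < 1e-6
           for m in range(3) for n in range(m))                # orthogonal rows
for i in range(k[-1] - 2):   # completed columns carry squared norm d_i = 0.4
    assert abs(sum(v[i] ** 2 for v in rows) - 0.4) < 1e-6
```

The orthogonality of consecutive rows here is exactly the relation \eqref{orth} enforced by the choice of $a_n$.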
\begin{thm}\label{abinf} If $\{d_{i}\}_{i\in I}$ is a sequence in $[0,1]$ such that \begin{equation}\label{abinf1} a = \sum_{d_{i}<1/2}d_{i} = \infty\qquad\text{or}\qquad b =\sum_{d_{i}\geq 1/2}(1-d_{i}) = \infty, \end{equation} then there is a projection $P$ with diagonal $\{d_{i}\}$. \end{thm} \begin{proof} Set \[I_{0} = \{i : d_{i} \leq 1/2\}\quad \text{and}\quad I_{1} = \{i :d_{i} > 1/2\}.\] Our hypothesis \eqref{abinf1} implies that \begin{equation}\label{abinf2} a' = \sum_{i\in I_{0}}d_{i} = \infty \qquad\text{or}\qquad b=\infty. \end{equation} {\bf{Case 1.}} Assume that $a'=\infty$. We can partition $I$ into countably many sets $\{J_{n}\}_{n\in \mathbb{N}}$ such that each $J_n$ contains at most one element in $I_1$ and \[ \sum_{i\in J_{n}} d_{i} = \infty \qquad\text{for all } n\in \mathbb{N}.\] This is possible since $I_0$ satisfies \eqref{abinf2}. By Theorem \ref{cptalg}, for each $n\in \mathbb{N}$ there is a projection $P_{n}$ with diagonal $\{d_{i}\}_{i\in J_n}$. Thus, the projection \[P = \bigoplus_{n\in \mathbb{N}}P_{n}\] has the desired diagonal $\{d_i\}_{i\in I}$. This completes the proof of Case 1. {\bf{Case 2.}} Assume that $b=\infty$. Note that \[b = \sum_{1-d_{i}\leq 1/2}(1-d_{i}). \] Thus, by Case 1 there is a projection $P'$ with diagonal $\{1-d_{i}\}$. Hence, $P=\mathbf I-P'$ is a projection with diagonal $\{d_{i}\}$. \end{proof} \section{A selector problem} Kadison's Theorem \ref{Kadison} is closely connected with an open problem of characterizing all spectral functions of shift-invariant spaces. Shift-invariant (SI) spaces are closed subspaces of $L^2(\mathbb{R}^d)$ that are invariant under all shifts, i.e., integer translations. That is, a closed subspace $V \subset L^2(\mathbb{R}^d)$ is SI if $T_k(V)=V$ for all $k\in\mathbb{Z}^d$, where $T_kf(x)=f(x-k)$ is the translation operator. 
The theory of shift-invariant spaces plays an important role in many areas, most notably in the theory of wavelets, spline systems, Gabor systems, and approximation theory \cite{BDR1, BDR2, Bo, RS1, RS2}. The study of analogous spaces for $L^2(\mathbb{T}, \mathcal H)$ with values in a separable Hilbert space $\mathcal H$ in terms of the range function, often called doubly-invariant spaces, is quite classical and goes back to Helson \cite{He}. In the context of SI spaces a {\it range function} is any mapping $$J: \mathbb{T}^d \to \{\text{closed subspaces of }\ell^2(\mathbb{Z}^d)\},$$ where $\mathbb{T}^d=\mathbb{R}^d/\mathbb{Z}^d$ is identified with its fundamental domain $[-1/2,1/2)^d$. We say that $J$ is {\it measurable} if the associated orthogonal projections $P_J(\xi): \ell^2(\mathbb{Z}^d) \to J(\xi)$ are operator measurable, i.e., $\xi \mapsto P_J(\xi) v$ is measurable for any $v\in \ell^2(\mathbb{Z}^d)$. We follow the convention which identifies range functions if they are equal a.e. A fundamental result due to Helson \cite[Theorem 8, p.~59]{He} gives a one-to-one correspondence between SI spaces $V$ and measurable range functions $J$, see also \cite[Proposition 1.5]{Bo}. Among several equivalent ways of introducing the spectral function of an SI space the most relevant definition uses a range function. \begin{defn} The {\it spectral function} of a SI space $V$ is a measurable mapping $\sigma_V: \mathbb{R}^d \to [0,1]$ given by \begin{equation}\label{dsp} \sigma_V(\xi+k) = ||P_J(\xi) e_k||^2=\langle P_J(\xi)e_k,e_k \rangle \qquad\text{for } \xi\in \mathbb{T}^d,\ k\in\mathbb{Z}^d, \end{equation} where $\{e_k\}_{k\in\mathbb{Z}^d}$ denotes the standard basis of $\ell^2(\mathbb{Z}^d)$ and $\mathbb{T}^d=[-1/2,1/2)^d$. In other words, $\{\sigma_V(\xi+k)\}_{k\in \mathbb{Z}^d}$ is the diagonal of the projection $P_J(\xi)$.
\end{defn} Note that $\sigma_V(\xi)$ is well defined for a.e.~$\xi\in\mathbb{R}^d$, since $\{ k+ \mathbb{T}^d: k\in \mathbb{Z}^d\}$ is a partition of $\mathbb{R}^d$. As an immediate consequence of Theorem \ref{Kadison} we have the following result. \begin{thm}\label{sp} Suppose that $V \subset L^2(\mathbb{R}^d)$ is an SI space. Let $\sigma=\sigma_V:\mathbb{R}^d \to [0,1]$ be its spectral function. For $\xi\in \mathbb{T}^d$ define \[ a(\xi)=\sum_{k\in \mathbb{Z}^d, \ \sigma(\xi+k)<1/2} \sigma(\xi+k) \quad\text{and}\quad b(\xi)=\sum_{k\in \mathbb{Z}^d, \ \sigma(\xi+k) \ge1/2}(1-\sigma(\xi+k)). \] Then, for a.e. $\xi \in \mathbb{R}^d$ we either have \begin{enumerate} \item $a(\xi),b(\xi)<\infty$ and $a(\xi)-b(\xi)\in\mathbb{Z}$, or \item $a(\xi)=\infty$ or $b(\xi)=\infty$. \end{enumerate} \end{thm} It is an open problem whether the converse to Theorem \ref{sp} holds. \begin{problem} Suppose that a measurable function $\sigma:\mathbb{R}^d \to [0,1]$ satisfies either (i) or (ii) for a.e. $\xi \in \mathbb{R}^d$. Does there exist an SI space $V\subset L^2(\mathbb{R}^d)$ such that its spectral function satisfies $\sigma_V=\sigma$? \end{problem} The sufficiency part of Theorem \ref{Kadison}, i.e., Carpenter's Theorem, suggests a positive answer to this problem. That is, for a.e. $\xi$ it yields a projection $P_J(\xi)$ whose diagonal satisfies \eqref{dsp}. However, it does not guarantee a priori that the corresponding range function $J$ is measurable. This naturally leads to the following selector problem. \begin{problem} Let $X$ be a finite (or $\sigma$-finite) measure space and let $I$ be a countable index set. Let $\sigma:X \times I \to [0,1]$ be a measurable function. For $\xi\in X$ define \[ a(\xi)=\sum_{i\in I, \ \sigma(\xi,i)<1/2} \sigma(\xi,i) \quad\text{and}\quad b(\xi)=\sum_{i\in I, \ \sigma(\xi,i)\ge 1/2} (1-\sigma(\xi,i)). \] Suppose that for a.e.
$\xi \in X$ we either have \begin{enumerate} \item $a(\xi),b(\xi)<\infty$ and $a(\xi)-b(\xi)\in\mathbb{Z}$, or \item $a(\xi)=\infty$ or $b(\xi)=\infty$. \end{enumerate} Does there exist a measurable range function $J:X \to \{\text{closed subspaces of }\ell^2(I)\}$ such that the corresponding orthogonal projections $P_J(\xi)$ have diagonal $\{\sigma(\xi,i)\}_{i\in I}$ for a.e. $\xi \in X$? \end{problem} In other words, Problem 2 asks whether it is possible to find a measurable selector of projections in Theorem \ref{Kadison}. The constructive proof of Carpenter's Theorem given in this paper might be a first step toward resolving this problem. However, Problem 2 remains open.
https://arxiv.org/abs/1110.0994
A Spectral Sequence Connecting Continuous With Locally Continuous Group Cohomology
We present a spectral sequence connecting the continuous and 'locally continuous' group cohomologies for topological groups. As an application it is shown that for contractible topological groups these cohomology concepts coincide. Similar results for k-groups and smooth cochains on Lie groups are also obtained.
\section*{Introduction} There exist various cohomology concepts for topological groups $G$ and topological coefficient groups $V$ which take the topologies of the group and that of the coefficients into account. One is obtained by restricting oneself to the complex $C_c^* (G;V)$ of continuous group cochains only, whose cohomology is called the \emph{continuous group cohomology} $H_c (G;V)$. For abstract groups $G$ and $G$-modules $V$ the first cohomology group $H^1 (G;V)$ classifies crossed morphisms modulo principal derivations, the second cohomology group $H^2 (G;V)$ classifies equivalence classes of group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ and the third cohomology group $H^3 (G;V)$ classifies equivalence classes of crossed modules with kernel $V$ and cokernel $G$ (cf. \cite[Theorem 6.4.5, Theorem 6.6.3 and Theorem 6.6.13]{WeHA}). Analogous considerations show that for topological groups $G$ and $G$-modules $V$ the first cohomology group $H_c^1 (G;V)$ classifies continuous crossed morphisms modulo principal derivations, the second cohomology group $H_{c}^2 (G;V)$ classifies equivalence classes of topological group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ which admit a global section (i.e. $\hat{G} \twoheadrightarrow G$ is a trivial $V$-principal bundle) and the third cohomology group $H_{c}^3 (G;V)$ classifies equivalence classes of topologically split crossed modules. The continuous group cohomology has the drawback that even for the compact Hausdorff group $G=\mathbb{R}/\mathbb{Z}$ the short exact sequence \begin{equation*} 0 \rightarrow \mathbb{Z} \hookrightarrow \mathbb{R} \twoheadrightarrow \mathbb{R}/\mathbb{Z} \rightarrow 0 \end{equation*} of coefficients does not induce a long exact sequence of cohomology groups.
(The group $H_{c}^1 (G;\mathbb{R})$ is trivial because the projection $\mathbb{R} \twoheadrightarrow \mathbb{R}/\mathbb{Z}$ does not admit global sections, $H_{c}^n (G;\mathbb{Z})=0$ because all continuous group cochains on $G$ are constant, whereas the group $H_{c}^1 (G;G)$ of all continuous endomorphisms of $G$ is non-trivial.) This drawback is remedied by a second, more general cohomology concept, which is obtained by considering the complex $C_{cg}^* (G;V)$ of group cochains which are continuous on some identity neighbourhood in $G$. By abuse of language some people call the corresponding cohomology groups $H_{cg} (G;V)$ the `locally continuous group cohomology'. The first cohomology group $H_{cg}^1 (G;V)$ classifies continuous crossed morphisms modulo principal derivations, the second cohomology group $H_{cg}^2 (G;V)$ classifies equivalence classes of topological group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ which admit local sections (i.e. $\hat{G} \twoheadrightarrow G$ is a locally trivial $V$-principal bundle) and the third cohomology group $H_{cg}^3 (G;V)$ classifies equivalence classes of crossed modules in which all homomorphisms admit local sections. The inclusion $C_c^* (G;V) \hookrightarrow C_{cg}^* (G;V)$ of cochain complexes induces a morphism $H_c^* (G;V) \rightarrow H_{cg}^* (G;V)$ of cohomology groups, which is used to compare the two cohomology concepts. As the above example shows, these cohomology concepts do not even coincide for connected compact Hausdorff groups and real coefficients. In the following we will show that the contractibility of a topological group $G$ forces the two cohomologies to coincide (cf. Corollary \ref{gcontriso}): \begin{theorem*} For contractible groups $G$ the inclusion $C_c^* (G;V) \hookrightarrow C_{cg}^* (G;V)$ induces an isomorphism in cohomology.
\end{theorem*} This is proved by constructing a row-exact double complex $A_{cg,eq} (G;V)$ whose rows and columns can be augmented by the complexes $C_{cg}^* (G;V)$ and $C_{c}^* (G;V)$ respectively. The contractibility of $G$ will be shown to force the columns of this double complex to be exact as well, which in turn is shown to imply that the inclusion $C_c^* (G;V) \hookrightarrow C_{cg}^* (G;V)$ induces an isomorphism in cohomology. In fact we will work in the more general setting of transformation groups $(G,X)$ and $G$-equivariant cochains on $X$ and prove these results there. Similar results for $k$-groups and smooth transformation groups will also be obtained. \section{Basic Concepts} In this section we recall the definitions of various cochain complexes and the interpretation of some of their cohomology groups. For topological spaces $X$ and abelian topological groups $V$ one can consider variations of the exact \emph{standard complex} $A^* (X;V)=\hom_\mathbf{Set} (X^{*+1};V)$ of abelian groups. \begin{definition} For every topological space $X$ and abelian topological group $V$ the subcomplex $A_c^* (X;V):= C (X^{*+1};V)$ of the standard complex is called the \emph{continuous standard complex}. \end{definition} For transformation groups $(G,X)$ and $G$-modules $V$ the group $G$ acts on the spaces $X^{n+1}$ via the diagonal action and the groups $A^n (X;V)$ can be endowed with a $G$-action via \begin{equation}\label{defgact} G \times A^n (X;V) \rightarrow A^n (X;V), \quad [g.f] (\vec{x})=g.[ f (g^{-1} .\vec{x})] \, . \end{equation} The $G$-fixed points of this action are the $G$-equivariant cochains. Because the differential of the standard complex intertwines the $G$-action, the equivariant cochains form a subcomplex $A^* (X;V)^G$ of the standard complex and the continuous equivariant cochains form a subcomplex $A_c^* (X;V)^G$ of the continuous standard complex. These complexes are not exact in general.
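As a finite sanity check of these definitions, take $X = G = \mathbb{Z}/3$ acting on itself by translation and the trivial module $V = \mathbb{R}$ (an illustrative choice; all helper names below are ours): the simplicial differential then squares to zero and intertwines the action \eqref{defgact}.

```python
import itertools, random

random.seed(0)
X = range(3)                     # X = G = Z/3, acting on itself by translation
act = lambda g, x: (g + x) % 3   # g.x = g + x; the action on V = R is trivial

def d(f, n):
    """Simplicial differential A^n(X;V) -> A^{n+1}(X;V)."""
    return lambda *xs: sum((-1) ** i * f(*(xs[:i] + xs[i+1:]))
                           for i in range(n + 2))

def g_dot(g, f):
    """[g.f](x) = f(g^{-1}.x)  (trivial action on the coefficients)."""
    return lambda *xs: f(*(act(-g, x) for x in xs))

table = {xs: random.random() for xs in itertools.product(X, repeat=2)}
f = lambda x0, x1: table[(x0, x1)]          # a random 1-cochain

ddf = d(d(f, 1), 2)                          # d o d = 0
assert all(abs(ddf(*xs)) < 1e-12 for xs in itertools.product(X, repeat=4))

g = 1                                        # the differential intertwines
lhs, rhs = d(g_dot(g, f), 1), g_dot(g, d(f, 1))   # the G-action
assert all(abs(lhs(*xs) - rhs(*xs)) < 1e-12
           for xs in itertools.product(X, repeat=3))
```

The second check is exactly why the equivariant cochains form a subcomplex: the differential of a $G$-fixed cochain is again $G$-fixed.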
\begin{example} For any group $G$ which acts on itself by left translation and $G$-module $V$ the complex $A^* (G;V)^G$ is the complex of (homogeneous) group cochains; for topological groups $G$ and $G$-modules $V$ the complex $A_c^* (G;V)^G$ is the complex of continuous (homogeneous) group cochains. \end{example} \begin{definition} The cohomology $H_{eq} (X;V)$ of the complex $A^* (X;V)^G$ is called the equivariant cohomology of $X$ (with values in $V$). The cohomology $H_{eq,c} (X;V)$ of the subcomplex $A_c^* (X;V)^G$ is called the equivariant continuous cohomology of $X$ (with values in $V$). \end{definition} \begin{example} For any group $G$ which acts on itself by left translation and $G$-module $V$ the cohomology $H_{eq} (G;V)$ is the group cohomology of $G$ with values in $V$; for topological groups $G$ and $G$-modules $V$ the cohomology $H_{eq,c} (G;V)$ is the continuous group cohomology of $G$ with values in $V$. \end{example} For transformation groups $(G,X)$ and $G$-modules $V$ there exists a $G$-invariant complex $A_{cg}^* (X;V)$ between $A_c^* (X;V)$ and $A^* (X;V)$ which we are going to define now. For each open covering $\mathfrak{U}$ of $X$ and each $n \in \mathbb{N}$ one can define an open neighbourhood $\mathfrak{U} [n]$ of the diagonal in $X^{n+1}$ via \begin{equation*} \mathfrak{U} [n] := \bigcup_{U \in \mathfrak{U}} U^{n+1} \, . \end{equation*} These neighbourhoods of the diagonals in $X^{*+1}$ form a simplicial subspace of $X^{*+1}$ which allows us to consider the subcomplex of $A^* (X;V)$ formed by the groups \begin{equation*} A_{cr}^n (X,\mathfrak{U};V) := \left\{ f \in A^n (X;V) \mid \, f_{\mid \mathfrak{U} [n]} \in C (\mathfrak{U}[n];V) \right\} \end{equation*} of cochains whose restrictions to the subspaces $\mathfrak{U}[n]$ of $X^{n+1}$ are continuous. The cohomology of the cochain complex $A_{cr}^* (X,\mathfrak{U};V)$ is denoted by $H_{cr} (X,\mathfrak{U};V)$.
If the covering $\mathfrak{U}$ of $X$ is $G$-invariant, then $\mathfrak{U}[*]$ is a simplicial $G$-subspace of the simplicial $G$-space $X^{*+1}$. \begin{example} If $G=X$ is a topological group which acts on itself by left translation and $U$ an open identity neighbourhood, then $\mathfrak{U}_U :=\{ g.U \mid g \in G \}$ is a $G$-invariant open covering of $G$ and $\mathfrak{U}_U[*]$ is an open simplicial $G$-subspace of $G^{*+1}$. \end{example} For $G$-invariant coverings $\mathfrak{U}$ of $X$ the cohomology of the subcomplex $A_{cr}^* (X,\mathfrak{U};V)^G$ of $G$-equivariant cochains is denoted by $H_{cr,eq} (X,\mathfrak{U};V)$. \begin{example} If $G=X$ is a topological group which acts on itself by left translation and $U$ an open identity neighbourhood, then the complex $A_{cr}^* (X,\mathfrak{U}_U;V)^G$ is the complex of homogeneous group cochains whose restrictions to the subspaces $\mathfrak{U}_U [*]$ are continuous. (These are sometimes called $\mathfrak{U}$-continuous cochains.) \end{example} For directed systems $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ one can also consider the colimit complex $\colim_i A_{cr}^* (X,\mathfrak{U}_i ;V)$. In particular for the directed system of all open coverings of $X$ one observes that the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, hence one obtains the complex \begin{equation*} A_{cg}^* (X;V):= \colim_{\mathfrak{U} \text{ is an open cover of } X} A_{cr}^* (X,\mathfrak{U};V) \end{equation*} of global cochains whose germs at the diagonal are continuous. This is a subcomplex of the standard complex $A^* (X;V)$ which is invariant under the $G$-action (Eq. \ref{defgact}) and thus a subcomplex of $G$-modules. The $G$-equivariant cochains with continuous germ form a subcomplex $A_{cg}^* (X;V)^G$ thereof, whose cohomology is denoted by $H_{cg,eq} (X;V)$.
The latter subcomplex can also be obtained by taking the colimit over all $G$-invariant open coverings of $X$ only: \begin{proposition} \label{natinclofeqccisiso} The natural morphism of cochain complexes \begin{equation*} A_{cg,eq}^* (X;V):= \colim_{\mathfrak{U} \text{ is a $G$-invariant open cover of } X} A_{cr}^* (X,\mathfrak{U};V)^G \rightarrow A_{cg}^* (X;V)^G \end{equation*} is a natural isomorphism. \end{proposition} \begin{proof} We show that this morphism is surjective and injective. Every equivalence class in $A_{cg}^n (X;V)^G$ can be represented by a cochain $f \in A_{cr}^n (X,\mathfrak{U};V)^G$, where $\mathfrak{U}$ is an open cover of $X$. The cochain $f$ is continuous on $\mathfrak{U}[n]$ by definition. Its equivariance implies that it is also continuous on $G . \mathfrak{U}[n]=( G. \mathfrak{U})[n]$, hence an element of $A_{cr}^n (X, G. \mathfrak{U};V)^G$. The equivalence class $[f] \in A_{cg,eq}^n (X;V)$ is mapped onto $[f]\in A_{cg}^n (X;V)^G$. This proves surjectivity. Every equivalence class in $A_{cg,eq}^n (X;V)$ can be represented by an equivariant $n$-cochain $f$ in $A_{cr}^n (X, \mathfrak{U};V)^G$, where $\mathfrak{U}$ is a $G$-invariant open cover of $X$. If the image of the class $[f] \in A_{cg}^* (X;V)^G$ is trivial, then the cochain $f$ itself is trivial and so is its class $[f] \in A_{cg,eq}^n (X;V)$. This proves injectivity. \end{proof} \begin{corollary} The cohomology $H_{cg,eq} (X;V)$ is the cohomology of the complex of equivariant cochains which are continuous on some $G$-invariant neighbourhood of the diagonal. \end{corollary} \begin{example} If $G=X$ is a topological group which acts on itself by left translation, then the complex $A_{cg}^* (G;V)^G$ is the complex of homogeneous group cochains whose germs at the diagonal are continuous. (By abuse of language these are sometimes called 'locally continuous' group cochains.)
\end{example} \section{The Spectral Sequence} \label{sectss} Let $(G,X)$ be a transformation group, $V$ be a $G$-module and $\mathfrak{U}$ be an open covering of $X$. We will show (in Section \ref{seccontanduccc}) that the inclusion $A_c^* (X;V) \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)$ induces an isomorphism in cohomology provided the space $X$ is contractible. For this purpose we consider the abelian groups \begin{equation} \label{defrcu} A_{cr}^{p,q} ( X,\mathfrak{U} ; V ) := \left\{ f: X^{p+1} \times X^{q+1} \rightarrow V \mid f_{\mid X^{p+1} \times \mathfrak{U}[q]} \; \text{is continuous} \right\} \, . \end{equation} The abelian groups $A_{cr}^{p,q} ( X,\mathfrak{U} ; V )$ form a first quadrant double complex whose horizontal and vertical differentials are given by \begin{align*} d_{h}^{p,q} : A_{cr}^{p,q} \to A_{cr}^{p+1,q}, & \quad d_{h}^{p,q}(f^{p,q})(\vec{x},\vec{x}') =\sum_{i=0}^{p+1}(-1)^{i}f^{p,q}(x_{0},...,\widehat{x_{i}},...,x_{p+1},\vec{x}')\\ d_{v}^{p,q} : A_{cr}^{p,q}\to A_{cr}^{p,q+1}, &\quad d_{v}^{p,q}(f^{p,q})(\vec{x},\vec{x}') = (-1)^p \sum_{i=0}^{q+1}(-1)^{i}f^{p,q}(\vec{x},x_{0}',...,\widehat{x_{i}}',...,x_{q+1}') \, . \end{align*} The double complex $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$ can be filtered column-wise to obtain a spectral sequence $E_{cr,*}^{*,*} (X,\mathfrak{U};V)$ (cf. \cite[Theorem 2.15]{Mcl}). Since the double complex is a first quadrant double complex, the spectral sequence $E_{cr,*}^{*,*} (X,\mathfrak{U};V)$ converges to the cohomology of the total complex of $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$.
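The sign $(-1)^p$ in $d_v$ makes the horizontal and vertical differentials anticommute, so the total differential $D = d_h + d_v$ squares to zero. This can be checked numerically on a small finite discrete space (an illustrative stand-in on which every continuity condition is vacuous; the helper names are ours):

```python
import itertools, random

random.seed(1)
X = range(2)

def d_h(f, p, q):
    """Horizontal differential A^{p,q} -> A^{p+1,q}."""
    def g(xs, ys):
        return sum((-1) ** i * f(xs[:i] + xs[i+1:], ys) for i in range(p + 2))
    return g

def d_v(f, p, q):
    """Vertical differential A^{p,q} -> A^{p,q+1}, with the sign (-1)^p."""
    def g(xs, ys):
        return (-1) ** p * sum((-1) ** i * f(xs, ys[:i] + ys[i+1:])
                               for i in range(q + 2))
    return g

p, q = 1, 1
table = {(xs, ys): random.random()
         for xs in itertools.product(X, repeat=p + 1)
         for ys in itertools.product(X, repeat=q + 1)}
f = lambda xs, ys: table[(xs, ys)]           # a random (p,q)-cochain

lhs = d_h(d_v(f, p, q), p, q + 1)   # A^{p,q} -> A^{p,q+1} -> A^{p+1,q+1}
rhs = d_v(d_h(f, p, q), p + 1, q)   # A^{p,q} -> A^{p+1,q} -> A^{p+1,q+1}
for xs in itertools.product(X, repeat=p + 2):
    for ys in itertools.product(X, repeat=q + 2):
        assert abs(lhs(xs, ys) + rhs(xs, ys)) < 1e-12   # d_h d_v + d_v d_h = 0
```

Together with $d_h^2 = d_v^2 = 0$ this gives $D^2 = 0$ on the total complex.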
The rows of the double complex $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$ can be augmented by the complex $A_{cr}^* (X,\mathfrak{U} ;V)$ for the covering $\mathfrak{U}$ and the columns can be augmented by the exact complex $A_c^* (X;V)$ of continuous cochains: \begin{equation*} \vcenter{ \xymatrix{ \vdots & \vdots & \vdots & \vdots \\ A_{cr}^2 (X, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{cr}^{0,2} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{1,2} ( X, \mathfrak{U}; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{2,2} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{cr}^1 (X, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{cr}^{0,1} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{1,1} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{2,1} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{cr}^0 (X, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{cr}^{0,0} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{1,0} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{cr}^{2,0} ( X, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ & A_c^0 ( X ; V) \ar[r]^{d_{h}}\ar[u] & A_c^1 ( X ; V) \ar[r]^{d_{h}}\ar[u] & A_c^2 ( X ; V) \ar[r]^{d_{h}}\ar[u] & \cdots }} \end{equation*} We denote the total complex of the double complex $A_{cr}^{*,*} ( X, \mathfrak{U} ; V)$ by $\mathrm{Tot} A_{cr}^{*,*} ( X, \mathfrak{U} ; V)$. The augmentations of the rows and columns of this double complex induce morphisms $i^* : A_{cr}^* ( X, \mathfrak{U} ; V) \rightarrow \mathrm{Tot} A_{cr}^{*,*} ( X, \mathfrak{U} ; V)$ and $j^*: A_c^* ( X ; V) \rightarrow \mathrm{Tot} A_{cr}^{*,*} ( X,\mathfrak{U} ; V)$ of cochain complexes respectively. \begin{lemma} \label{columnsexact} The morphism $i^*: A_{cr}^* ( X,\mathfrak{U} ; V) \rightarrow \mathrm{Tot} A_{cr}^{*,*} (X,\mathfrak{U} ; V)$ induces an isomorphism in cohomology. 
\end{lemma} \begin{proof} On each augmented row $A_{cr}^q ( X, \mathfrak{U} ; V) \hookrightarrow A_{cr}^{*,q} ( X, \mathfrak{U} ; V)$ one can define a contraction $h^{*,q}$ via \begin{equation} \label{defrowcontr} h^{p,q} : A_{cr}^{p,q} ( X, \mathfrak{U} ; V) \rightarrow A_{cr}^{p-1,q} ( X, \mathfrak{U} ; V) , \quad h^{p,q} (f) (\vec{x},\vec{x}')= f ( x_0, \ldots, x_{p-1}, x_0 ', \vec{x}' ) \, . \end{equation} Therefore the augmented rows are exact and the augmentation $i^*$ induces an isomorphism in cohomology. \end{proof} \begin{remark} Note that for non-trivial $\mathfrak{U}$ this construction does not work for the column complexes, because the cochains so constructed would not fulfil the continuity condition in Def. \ref{defrcu}. \end{remark} For $G$-invariant open coverings $\mathfrak{U}$ of $X$ one can consider the sub double complex $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )^G$ of $A_{cr}^{*,*} ( X,\mathfrak{U} ; V )$ whose rows are augmented by the cochain complex $A_{cr}^* (X,\mathfrak{U} ;V)^G$ for the covering $\mathfrak{U}$ and whose columns can be augmented by the complex $A_c^* (X;V)^G$ of continuous equivariant cochains (which is not exact in general). \begin{lemma} \label{columnsexacteq} For $G$-invariant coverings $\mathfrak{U}$ of $X$ the morphism $i_{eq}^* := {i^*}^G$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The contraction $h^{*,q}$ of the augmented rows $A_{cr}^q ( X, \mathfrak{U} ; V) \hookrightarrow A_{cr}^{*,q} ( X, \mathfrak{U} ; V)$ defined in Eq. \ref{defrowcontr} is $G$-equivariant and thus restricts to a row contraction of the augmented sub-row $A_{cr}^q ( X, \mathfrak{U} ; V)^G \hookrightarrow A_{cr}^{*,q} ( X, \mathfrak{U} ; V)^G$. \end{proof} So the morphism $H (i_{eq}) : H_{cr,eq} (X,\mathfrak{U};V) \rightarrow H ( \mathrm{Tot} A_{cr}^{*,*} ( X, \mathfrak{U} ; V)^G )$ is invertible.
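With the conventions as we read them, a direct computation gives the homotopy identity $d_h \circ h - h \circ d_h = (-1)^p\,\mathrm{id}$ on $A_{cr}^{p,q}$ for $p \geq 1$, so every horizontal cocycle $f$ equals $\pm d_h(hf)$; this is one way to see the exactness of the rows. A finite numerical sketch (helper names are ours; on a finite discrete space the continuity condition is vacuous):

```python
import itertools, random

random.seed(2)
X = range(2)

def d_h(f, p):
    """Horizontal differential on the row, as above."""
    return lambda xs, ys: sum((-1) ** i * f(xs[:i] + xs[i+1:], ys)
                              for i in range(p + 2))

def h(f):
    """Row contraction (defrowcontr): insert x_0' after the first group."""
    return lambda xs, ys: f(xs + (ys[0],), ys)

p, q = 1, 1
table = {(xs, ys): random.random()
         for xs in itertools.product(X, repeat=p + 1)
         for ys in itertools.product(X, repeat=q + 1)}
f = lambda xs, ys: table[(xs, ys)]           # a random (p,q)-cochain

lhs = d_h(h(f), p - 1)    # h(f) lies in A^{p-1,q}
rhs = h(d_h(f, p))
for xs in itertools.product(X, repeat=p + 1):
    for ys in itertools.product(X, repeat=q + 1):
        # d_h(h f) - h(d_h f) = (-1)^p f
        assert abs((lhs(xs, ys) - rhs(xs, ys)) - (-1) ** p * f(xs, ys)) < 1e-12
```

In particular, if $d_h f = 0$ then $f = (-1)^p d_h(hf)$ is a coboundary.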
For the composition $H (i_{eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cr,eq}(X,\mathfrak{U};V)$ we observe: \begin{proposition} \label{contiscohtocr} The image $j^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in $\mathrm{Tot} A_{cr}^{*,*} (X,\mathfrak{U};V)^G$ is cohomologous to the image $i_{eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{cr}^n (X,\mathfrak{U};V)^G$ in $\mathrm{Tot} A_{cr}^{*,*} (X,\mathfrak{U};V)^G$. \end{proposition} \begin{proof} The proof is a variation of the proof of \cite[Proposition 14.3.8]{F10}: Let $f:X^{n+1}\to V$ be a continuous equivariant $n$-cocycle on $X$ and (for $p+q=n-1$) define equivariant cochains $\psi^{p,q} : X^{p+1}\times X^{q+1} \cong X^{n+1} \to V$ in $A_{cr}^{p,q} (X,\mathfrak{U};V)^G$ via $\psi^{p,q} ( \vec{x},\vec{x}')= (-1)^p f (\vec{x},\vec{x}')$. The vertical coboundary of the cochain $\psi^{p,q}$ is given by \begin{eqnarray*} [d_v \psi^{p,q}] (\vec{x},x_0',\ldots,x_{q+1}') & = & (-1 )^p \sum_{i=0}^{q+1} (-1)^i f (\vec{x},x_0',\ldots,\hat{x}_i',\ldots,x_{q+1}') \\ & = & - \sum_{i=0}^{p} (-1)^{p+1+i} f ( x_0,\ldots,\hat{x}_i, \ldots, x_p,\vec{x}') \\ & = & [d_h \psi^{p-1,q+1}](x_0,...,x_p,\vec{x}'), \end{eqnarray*} where the second equality uses the cocycle identity $df=0$. The anti-commutativity of the horizontal and the vertical differentials ensures that the coboundary of the cochain $\sum_{p+q=n-1} (-1)^p \psi^{p,q}$ in the total complex is the cochain $j^n (f) - i_{eq}^n (f)$. Thus the cocycles $j^n (f)$ and $i_{eq}^n (f)$ are cohomologous in $\mathrm{Tot} A_{cr}^{*,*} (X,\mathfrak{U};V)^G$. \end{proof} \begin{corollary} The composition $H (i_{eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cr,eq}(X,\mathfrak{U};V)$ is induced by the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$.
\end{corollary} \begin{corollary} If the morphism $j^*_{eq}:={j^*}^G : A_c^* (X;V)^G \rightarrow \mathrm{Tot} A_{cr}^{*,*}(X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. \end{corollary} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ one can also consider the corresponding augmented colimit double complexes. In particular for the directed system of all open coverings of $X$ one obtains the double complex \begin{equation*} A_{cg}^{*,*} (X;V):= \colim_{\mathfrak{U} \text{ is an open cover of $X$}} A_{cr}^{*,*} (X,\mathfrak{U};V) \end{equation*} whose rows and columns are augmented by the colimit complex $A_{cg}^* (X;V)$ and by the complex $A_c^* (X;V)$ respectively. \begin{lemma} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ the morphism $\colim_i i^*: \colim_i A_{cr}^* ( X,\mathfrak{U}_i ; V) \rightarrow \mathrm{Tot} \colim_i A_{cr}^{*,*} (X,\mathfrak{U}_i ; V)$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \ref{columnsexact}). \end{proof} As a consequence the colimit morphism $i_{cg}^* : A_{cg}^* ( X; V) \rightarrow \mathrm{Tot} A_{cg}^{*,*} (X; V)$ induces an isomorphism in cohomology. The colimit double complex $A_{cg}^{*,*} (X;V)$ is a double complex of $G$-modules and the $G$-equivariant cochains form a sub double complex $A_{cg}^{*,*} (X;V)^G$, whose rows and columns are augmented by the colimit complex $A_{cg,eq}^* (X;V)$ and by the complex $A_c^* (X;V)^G$ respectively. 
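If $\mathfrak{V}$ refines $\mathfrak{U}$, then $\mathfrak{V}[q] \subseteq \mathfrak{U}[q]$, so the connecting morphisms of this directed system are the inclusions $A_{cr}^{p,q} (X,\mathfrak{U};V) \subseteq A_{cr}^{p,q} (X,\mathfrak{V};V)$ of subgroups of the group of all cochains, and the colimit can be computed as a union:
\begin{equation*}
A_{cg}^{p,q} (X;V) = \bigcup_{\mathfrak{U}} A_{cr}^{p,q} (X,\mathfrak{U};V) = \left\{ f : X^{p+1} \times X^{q+1} \rightarrow V \mid \, f_{\mid X^{p+1} \times \mathfrak{U}[q]} \text{ is continuous for some open cover } \mathfrak{U} \text{ of } X \right\} \, ,
\end{equation*}
i.e. the double complex $A_{cg}^{*,*} (X;V)$ consists of those global cochains whose germ at the diagonal in the primed variables is continuous.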
\begin{lemma} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of $G$-invariant open coverings of $X$ the morphism $\colim_i i_{eq}^*: \colim_i A_{cr}^* ( X,\mathfrak{U}_i ; V)^G \rightarrow \mathrm{Tot} \colim_i A_{cr}^{*,*} (X,\mathfrak{U}_i ; V)^G$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \ref{columnsexacteq}). \end{proof} Moreover, since the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, we observe: \begin{lemma} \label{natinclofeqdcisiso} The natural morphism of double complexes \begin{equation*} A_{cg,eq}^{*,*} (X;V):= \colim_{\mathfrak{U} \text{ is a $G$-invariant open cover of $X$}} A_{cr}^{*,*} (X,\mathfrak{U};V)^G \rightarrow A_{cg}^{*,*} (X;V)^G \end{equation*} is a natural isomorphism. \end{lemma} \begin{proof} The proof is analogous to that of Proposition \ref{natinclofeqccisiso}. \end{proof} As a consequence the colimit morphism $i_{cg,eq}^* : A_{cg,eq}^* ( X; V) \rightarrow \mathrm{Tot} A_{cg}^{*,*} (X; V)^G$ induces an isomorphism in cohomology, and the morphism $H (i_{cg,eq})$ is invertible. For the composition $H(i_{cg,eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cg,eq}(X;V)$ we observe: \begin{proposition} \label{contiscohtocreq} The image $j^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in $\mathrm{Tot} A_{cg}^{*,*} (X;V)^G$ is cohomologous to the image $i_{cg,eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{cg,eq}^n (X;V)$ in $\mathrm{Tot} A_{cg}^{*,*} (X;V)^G$. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{contiscohtocr}. \end{proof} \begin{corollary} The composition $H (i_{cg,eq})^{-1} H(j_{eq}):H_{c,eq}(X;V)\rightarrow H_{cg,eq}(X;V)$ is induced by the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$. 
\end{corollary} \begin{corollary} If the morphism $j^*_{eq}:={j^*}^G : A_c^* (X;V)^G \rightarrow \mathrm{Tot} A_{cg}^{*,*}(X;V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg,eq}^* (X;V)$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. \end{corollary} \section{Continuous and $\mathfrak{U}$-Continuous Cochains} \label{seccontanduccc} In this section we consider transformation groups $(G,X)$ and $G$-modules $V$ for which we show that the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ of the complex of continuous equivariant cochains into the complex of equivariant $\mathfrak{U}$-continuous cochains induces an isomorphism $H_{c,eq} (X;V) \cong H_{cr,eq} (X,\mathfrak{U};V)$ provided the topological space $X$ is contractible. The proof relies on the row exactness of the double complexes $A_c^{*,*} (X,\mathfrak{U};V)^G$ and $A_{cr}^{*,*} ( X,\mathfrak{U};V)^G$. At first we reduce the problem to the non-equivariant case: \begin{proposition} \label{noneqextheneqex} If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*}(X,\mathfrak{U};V)$ are exact, then the augmented sub column complexes $A_c^p (X;V)^G \hookrightarrow A_{cr}^{p,*}(X,\mathfrak{U};V)^G$ of equivariant cochains are exact as well. \end{proposition} \begin{proof} Assume that the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*}(X,\mathfrak{U};V)$ are exact. Then each equivariant vertical cocycle $f_{eq}^{p,q} \in A_{cr}^{p,q} (X,\mathfrak{U};V)^G$ is the vertical coboundary $d_v f^{p,q-1}$ of a cochain $f^{p,q-1} \in A_{cr}^{p,q-1} (X,\mathfrak{U};V)$ (which is not necessarily equivariant). Define an equivariant cochain $f_{eq}^{p,q-1}$ of bidegree \mbox{$(p,q-1)$ via} \begin{equation*} f_{eq}^{p,q-1} (\vec{x},\vec{x}'):= x_0 . f^{p,q-1} (x_0^{-1} . \vec{x}, x_0^{-1} . \vec{x}') \, . 
\end{equation*} This cochain is continuous on $X^{p+1} \times \mathfrak{U}[q-1]$ because $f^{p,q-1}$ is continuous on $X^{p+1} \times \mathfrak{U}[q-1]$, and it is indeed equivariant: the first entry of $g.\vec{x}$ is $g x_0$, so $f_{eq}^{p,q-1} (g.\vec{x},g.\vec{x}') = (g x_0) . f^{p,q-1} (x_0^{-1} . \vec{x}, x_0^{-1} . \vec{x}') = g . f_{eq}^{p,q-1} (\vec{x},\vec{x}')$ for all $g \in G$. We assert that the vertical coboundary $d_v f_{eq}^{p,q-1}$ of $f_{eq}^{p,q-1}$ is the equivariant vertical cocycle $f_{eq}^{p,q}$. Indeed, since the differential $d_v$ is equivariant, the vertical coboundary of $f_{eq}^{p,q-1}$ computes to \begin{equation*} d_v f_{eq}^{p,q-1} (\vec{x},\vec{x}') = x_0 . \left[ d_v f^{p,q-1} (x_0^{-1} . \vec{x} , x_0^{-1} . \vec{x}') \right]= f_{eq}^{p,q} (\vec{x} , \vec{x}') \, . \end{equation*} Thus every equivariant vertical cocycle $f_{eq}^{p,q}$ in $A_{cr}^{*,*} (X,\mathfrak{U};V)^G$ is the vertical coboundary of an equivariant cochain $f_{eq}^{p,q-1}$ of bidegree $(p,q-1)$. \end{proof} \begin{corollary} \label{augexthenjeqindiso} If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{cr}^{*,*}(X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. \end{corollary} \begin{corollary} \label{augextheninclindiso} If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. \end{corollary} To achieve the announced result it remains to show that for contractible spaces $X$ the colimit augmented columns $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ are exact. 
For this purpose we first consider the cochain complex associated to the cosimplicial abelian group $A^{p,*} (X;V) := \left\{ f : X^{p+1} \times X^{*+1} \rightarrow V \mid \, \forall \vec{x}' \in X^{*+1} : f (-,\vec{x}') \in C (X^{p+1},V) \right\}$ of global cochains, its subcomplex $A_{cr}^{p,*} (X,\mathfrak{U};V)$ and the cochain complexes associated to the cosimplicial abelian groups \begin{eqnarray*} A^{p,*} (X,\mathfrak{U};V) & := & \{ f : X^{p+1} \times \mathfrak{U}[*] \rightarrow V \mid \, \forall \vec{x}' \in \mathfrak{U}[*] : f (-,\vec{x}') \in C (X^{p+1},V) \} \quad \text{and} \\ A_c^{p,*} (X,\mathfrak{U};V) & := & C ( X^{p+1} \times \mathfrak{U}[*] , V) \, . \end{eqnarray*} Restriction of global to local cochains induces morphisms of cochain complexes $\res^{p,*} : A^{p,*} (X;V) \twoheadrightarrow A^{p,*} (X,\mathfrak{U};V)$ and $\res_{cr}^{p,*} : A_{cr}^{p,*} (X,\mathfrak{U};V) \twoheadrightarrow A_c^{p,*} (X,\mathfrak{U};V)$ intertwining the inclusions of the subcomplexes $A_{cr}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X;V)$ and $A_c^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X, \mathfrak{U};V)$, so one obtains the following commutative diagram \begin{equation} \label{morphexseq} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\res_{cr}^{p,*} ) & \longrightarrow & A_{cr}^{p,*} (X,\mathfrak{U};V) & \longrightarrow & A_c^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\res^{p,*} ) & \longrightarrow & A^{p,*} (X;V) & \longrightarrow & A^{p,*} (X , \mathfrak{U};V) & \longrightarrow 0 \end{array} \end{equation} of cochain complexes whose rows are exact. The kernel $\ker (\res^{p,q} )$ is the subspace of those $(p,q)$-cochains which are trivial on $X^{p+1} \times \mathfrak{U} [q]$. Since these $(p,q)$-cochains are in particular continuous on $X^{p+1} \times \mathfrak{U}[q]$, they are contained in $A_{cr}^{p,q} (X,\mathfrak{U};V)$ and both kernels coincide. 
We abbreviate the complex $\ker (\res^{p,*} ) = \ker (\res_{cr}^{p,*} )$ by $K^{p,*}$ and denote the cohomology groups of the complex $A_{cr}^{p,*} (X,\mathfrak{U};V)$ by $H_{cr}^{p,*} (X,\mathfrak{U};V)$, the cohomology groups of the complex $A_c^{p,*} (X,\mathfrak{U};V)$ of continuous cochains by $H_c^{p,*} (X,\mathfrak{U};V)$ and the cohomology groups of the complex $A^{p,*} (X,\mathfrak{U};V)$ by $H^{p,*} (X,\mathfrak{U};V)$. \begin{lemma} The cochain complexes $A^{p,*} (X;V)$ are exact. \end{lemma} \begin{proof} For any point $* \in X$ the homomorphisms $h^{p,q} : A^{p,q} (X;V) \rightarrow A^{p,q-1} (X;V)$ given by $h^{p,q} (f) (\vec{x},\vec{x}'):=f (\vec{x},*,\vec{x}')$ form a contraction of the complex $A^{p,*} (X;V)$; up to a global sign (depending on the sign convention for the differential) they satisfy the homotopy identity $h^{p,q+1} \circ d_v + d_v \circ h^{p,q} = \mathrm{id}$. \end{proof} The morphism of short exact sequences of cochain complexes in Diagram \ref{morphexseq} gives rise to a morphism of long exact cohomology sequences, in which the cohomology of the complex $A^{p,*} (X;V)$ is trivial: \begin{equation} \label{diaglecs} \xymatrix@R-10pt@C-4pt{ \ar[r] & H^q (K^{p,*} ) \ar[r] \ar@{=}[d] & H_{cr}^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H_c^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H^{q+1} (K^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K^{p,*} ) \ar[r]& 0 \ar[r] & H^{p,q} (X,\mathfrak{U};V) \ar[r]^\cong & H^{q+1} (K^{p,*} ) \ar[r] & {} } \end{equation} \begin{lemma} The augmented complex $A_c^p (X;V) \hookrightarrow A_{cr}^{p,*} (X,\mathfrak{U};V)$ is exact if and only if the inclusion $A_c^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology. 
\end{lemma} \begin{proof} This is an immediate consequence of Diagram \ref{diaglecs}. \end{proof} \begin{proposition} If the inclusion $A_c^{p,*} (X,\mathfrak{U};V) \hookrightarrow A^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the inclusions $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{cr}^{*,*}(X,\mathfrak{U};V)^G$ and $A_c^* (X;V)^G \hookrightarrow A_{cr}^* (X,\mathfrak{U};V)^G$ also induce isomorphisms in cohomology. \end{proposition} \begin{proof} This follows from the preceding Lemma and Corollaries \ref{augexthenjeqindiso} and \ref{augextheninclindiso}. \end{proof} The passage to the colimit over all open coverings of $X$ yields the corresponding results for the complexes of cochains with continuous germs: \begin{proposition} \label{noneqextheneqexcg} If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*}(X;V)$ are exact, then the augmented sub column complexes $A_c^p (X;V)^G \hookrightarrow A_{cg}^{p,*}(X;V)^G$ of equivariant cochains are exact as well. \end{proposition} \begin{proof} The proof is similar to that of Proposition \ref{noneqextheneqex}. \end{proof} \begin{corollary} \label{augexthenjeqindisocg} If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ are exact, then the inclusion $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{cg}^{*,*}(X;V)^G$ induces an isomorphism in cohomology. \end{corollary} \begin{corollary} \label{augextheninclindisocg} If the augmented column complexes $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ are exact, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology. \end{corollary} \begin{remark} \label{remonlyginvcov} Alternatively to taking the colimit over all open coverings $\mathfrak{U}$ of $X$ one may consider $G$-invariant open coverings only to obtain the same results. 
(This was shown in Proposition \ref{natinclofeqccisiso} and Lemma \ref{natinclofeqdcisiso}.) \end{remark} \begin{example} If $G=X$ is a topological group which acts on itself by left translation and the augmented columns $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V) := \colim A_{cr}^{p,*} (X,\mathfrak{U}_U;V)$ (where $U$ runs over all open identity neighbourhoods in $G$) are exact, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology. \end{example} The complex $A^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the complex $A^* (\mathfrak{U}; C (X^{p+1},V))$ via $f \mapsto \check{f}$, $\check{f} (\vec{x}') (\vec{x}) := f (\vec{x},\vec{x}')$. The colimit $A_{AS}^* (X; C (X^{p+1} ,V)):=\colim A^* (\mathfrak{U}; C (X^{p+1} ,V))$, where $\mathfrak{U}$ runs over all open coverings of $X$, is the complex of Alexander-Spanier cochains on $X$. Therefore the colimit complex $\colim A^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the cochain complex $A_{AS}^* (X;C (X^{p+1} ,V))$. A similar observation can be made for the cochain complex $A_c^{p,*} (X,\mathfrak{U};V)$ if the exponential law $C (X^{p+1} \times \mathfrak{U}[q],V) \cong C (X^{p+1},C (\mathfrak{U}[q],V))$ holds for a cofinal set of open coverings $\mathfrak{U}$ of $X$. Passing to the colimit in Diagram \ref{morphexseq} yields the morphism \begin{equation} \label{morphexseqcg} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\res_{cg}^{p,*} ) & \longrightarrow & A_{cg}^{p,*} (X;V) & \longrightarrow & \colim A_c^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\res^{p,*} ) & \longrightarrow & A^{p,*} (X;V) & \longrightarrow & A_{AS}^* (X; C (X^{p+1},V)) & \longrightarrow 0 \end{array} \end{equation} of short exact sequences of cochain complexes. The kernel $\ker (\res^{p,q})$ is the subspace of those $(p,q)$-cochains which are trivial on $X^{p+1} \times \mathfrak{U} [q]$ for some open covering $\mathfrak{U}$ of $X$. 
Since these $(p,q)$-cochains are in particular continuous on $X^{p+1} \times \mathfrak{U}[q]$, both kernels coincide. We abbreviate the complex $\ker (\res^{p,*} ) = \ker (\res_{cg}^{p,*} )$ by $K_{cg}^{p,*}$ and denote the cohomology groups of the complex $A_{cg}^{p,*} (X;V)$ by $H_{cg}^{p,*} (X;V)$. The morphism of short exact sequences of cochain complexes in Diagram \ref{morphexseqcg} gives rise to a morphism of long exact cohomology sequences: \begin{equation} \label{diaglecscg} \xymatrix@C-11pt@R-10pt{ \ar[r] & H^q (K_{cg}^{p,*} ) \ar[r] \ar@{=}[d] & H_{cg}^{p,q} (X;V) \ar[r] \ar[d] & H^q (\colim A_c^{p,*} (X,\mathfrak{U};V)) \ar[r] \ar[d] & H^{q+1} (K_{cg}^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_{cg}^{p,*} ) \ar[r]& 0 \ar[r] & H_{AS}^q (X; C (X^{p+1},V)) \ar[r]^\cong & H^{q+1} (K_{cg}^{p,*} ) \ar[r] & {} } \end{equation} \begin{lemma} The augmented complex $A_c^p (X;V) \hookrightarrow A_{cg}^{p,*} (X;V)$ is exact if and only if the inclusion $\colim A_c^{p,*} (X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ of cochain complexes induces an isomorphism in cohomology. \end{lemma} \begin{proof} This is an immediate consequence of Diagram \ref{diaglecscg}. \end{proof} \begin{proposition} \label{inclcolimacpsindisothenjeqiso} If the inclusion $\colim A_c^{p,*} (X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ induces an isomorphism in cohomology, then $j_{eq}^* : A_c^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{cg}^{*,*}(X;V)^G$ and $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ also induce isomorphisms in cohomology. \end{proposition} \begin{proof} This follows from the preceding Lemma and Corollaries \ref{augexthenjeqindisocg} and \ref{augextheninclindisocg}. \end{proof} As observed before (cf. Remark \ref{remonlyginvcov}) one may restrict oneself to the directed system of $G$-invariant open coverings only to achieve the same result. 
Thus we observe: \begin{corollary} If $G=X$ is a locally contractible topological group which acts on itself by left translation and the inclusion $\colim A_c^{p,*} (X,\mathfrak{U}_U;V)\hookrightarrow A_{AS}^* (X;C(X^{p+1},V))$ (where $U$ runs over all open identity neighbourhoods in $G$) induces an isomorphism in cohomology, then the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology as well. \end{corollary} \begin{proof} It has been shown in \cite{vE62b} that the cohomology of the colimit cochain complex $\colim A^* ( \mathfrak{U} ;C(X^{p+1},V))$ is the Alexander-Spanier cohomology of $X$. \end{proof} \begin{lemma} \label{xcontrthenacpsistriv} If the topological space $X$ is contractible, then the cohomology of the complex $\colim A_c^{p,*} (X,\mathfrak{U};V)$ is trivial. \end{lemma} \begin{proof} The reasoning is analogous to that for the Alexander-Spanier presheaf. The proof of \cite[Theorem 2.5.2]{F10} carries over almost verbatim. \end{proof} \begin{theorem} For contractible $X$ the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ induces an isomorphism in cohomology. \end{theorem} \begin{proof} If the topological space $X$ is contractible, then the Alexander-Spanier cohomology $H_{AS}^* (X;C(X^{p+1},V))$ is trivial and the cohomology of the cochain complex $\colim A_c^{p,*} (X,\mathfrak{U};V)$ is trivial by Lemma \ref{xcontrthenacpsistriv}. By Proposition \ref{inclcolimacpsindisothenjeqiso} the inclusion $A_c^* (X;V)^G \hookrightarrow A_{cg}^* (X;V)^G$ then induces an isomorphism in cohomology. \end{proof} \begin{corollary} \label{gcontriso} For contractible topological groups $G$ the continuous group cohomology is isomorphic to the cohomology of homogeneous group cochains with continuous germ at the diagonal. 
\end{corollary} \section{Working in the Category of $k$-Spaces} \label{secwinktop} In this section we consider transformation groups $(G,X)$ in the category $\mathbf{kTop}$ of $k$-spaces and $G$-modules $V$ in $\mathbf{kTop}$. Working only in the category $\mathbf{kTop}$ we construct a spectral sequence analogous to that in Section \ref{sectss} and derive results analogous to those obtained there. \begin{definition} For every $k$-space $X$ and abelian $k$-group $V$ the subcomplex $A_{kc}^* (X;V):= C ( \mathrm{k} X^{*+1};V)$ of the standard complex is called the \emph{continuous standard complex in $\mathbf{kTop}$}. \end{definition} For open coverings $\mathfrak{U}$ of a $k$-space $X$ we also consider the subcomplex of $A^* (X;V)$ formed by the groups \begin{equation*} A_{kcr}^n (X,\mathfrak{U};V) := \left\{ f \in A^n (X;V) \mid \, f_{\mid \mathrm{k} \mathfrak{U} [n]} \in C ( \mathrm{k} \mathfrak{U}[n];V) \right\} \end{equation*} of cochains whose restrictions to the open subspaces $\mathrm{k} \mathfrak{U}[n]$ of $\mathrm{k} X^{n+1}$ are continuous. The cohomology of the cochain complex $A_{kcr}^* (X,\mathfrak{U};V)$ is denoted by $H_{kcr} (X,\mathfrak{U};V)$. If the covering $\mathfrak{U}$ of $X$ is $G$-invariant, then $\mathrm{k} \mathfrak{U}[*]$ is a simplicial $G$-subspace of the simplicial $G$-space $\mathrm{k} X^{*+1}$. \begin{example} If $G=X$ is a $k$-group which acts on itself by left translation and $U$ an open identity neighbourhood, then $\mathfrak{U}_U :=\{ g.U \mid g \in G \}$ is a $G$-invariant open covering of $G$ and $\mathrm{k} \mathfrak{U}_U[*]$ is a simplicial $G$-subspace of $\mathrm{k} G^{*+1}$. \end{example} For $G$-invariant coverings $\mathfrak{U}$ of $X$ the cohomology of the subcomplex $A_{kcr}^* (X,\mathfrak{U};V)^G$ of $G$-equivariant cochains is denoted by $H_{kcr,eq} (X,\mathfrak{U};V)$. 
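For the coverings $\mathfrak{U}_U$ of the preceding example the diagonal neighbourhoods admit an explicit description. With $\mathfrak{U}[n]$ denoting the diagonal neighbourhood $\bigcup_{W \in \mathfrak{U}} W^{n+1}$ determined by a covering $\mathfrak{U}$, one finds
\begin{equation*}
\mathfrak{U}_U [n] = \bigcup_{g \in G} (g.U)^{n+1} = \left\{ (g_0,\ldots,g_n) \in G^{n+1} \mid \, \exists \, g \in G : g^{-1} g_i \in U \text{ for all } 0 \leq i \leq n \right\} \, ,
\end{equation*}
so a homogeneous cochain lies in $A_{kcr}^n (G,\mathfrak{U}_U;V)$ precisely if it is continuous on (the $k$-ification of) the set of those tuples whose entries lie in a common left translate of $U$.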
\begin{example} If $G=X$ is a $k$-group which acts on itself by left translation and $U$ an open identity neighbourhood, then the complex $A_{kcr}^* (X,\mathfrak{U}_U;V)^G$ is the complex of homogeneous group cochains whose restrictions to the subspaces $\mathrm{k} \mathfrak{U}_U [*]$ are continuous. (These are sometimes called $\mathfrak{U}$-continuous cochains.) \end{example} For directed systems $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ one can also consider the colimit complex $\colim_i A_{kcr}^* (X,\mathfrak{U}_i ;V)$. In particular, if the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods, one obtains the complex \begin{equation*} A_{kcg}^* (X;V):= \colim_{\mathfrak{U} \text{ is an open cover of $X$}} A_{kcr}^* (X,\mathfrak{U};V) \end{equation*} of global cochains whose germs at the diagonal are continuous. This happens for all $k$-spaces $X$ for which the finite products $X^{n+1}$ in $\mathbf{Top}$ are already compactly Hausdorff generated, e.g. metrisable spaces, locally compact spaces or Hausdorff $k_\omega$-spaces. The complex $A_{kcg}^* (X;V)$ is then a subcomplex of the standard complex $A^* (X;V)$ which is invariant under the $G$-action (Eq. \ref{defgact}) and thus a subcomplex of $G$-modules. The $G$-equivariant cochains with continuous germ form a subcomplex $A_{kcg}^* (X;V)^G$ thereof, whose cohomology is denoted by $H_{kcg,eq} (X;V)$. 
The latter subcomplex can also be obtained by taking the colimit over all $G$-invariant open coverings of $X$ only: \begin{proposition} \label{natinclofeqccisisok} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods then the natural morphism of cochain complexes \begin{equation*} A_{kcg,eq}^* (X;V):= \colim_{\mathfrak{U} \text{ is a $G$-invariant open cover of $X$}} A_{kcr}^* (X,\mathfrak{U};V)^G \rightarrow A_{kcg}^* (X;V)^G \end{equation*} is a natural isomorphism. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{natinclofeqccisiso}. \end{proof} \begin{corollary} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods then the cohomology $H_{kcg,eq} (X;V)$ is the cohomology of the complex of equivariant cochains which are continuous on some $G$-invariant neighbourhood of the diagonal. \end{corollary} \begin{example} If $G=X$ is a metrisable or locally compact topological group or a real or complex Kac-Moody group which acts on itself by left translation, then the complex $A_{kcg}^* (G;V)^G$ is the complex of homogeneous group cochains whose germs at the diagonal are continuous. (By abuse of language these are sometimes called ``locally continuous'' group cochains.) \end{example} Analogously to the procedure in Section \ref{sectss} we can construct a spectral sequence relating $A_{kcr}^* (X,\mathfrak{U};V)$ and $A_{kc}^* (X;V)$. For this purpose we consider the abelian groups \begin{equation} \label{defrcuk} A_{kcr}^{p,q} ( X,\mathfrak{U} ; V ) := \left\{ f: X^{p+1} \times X^{q+1} \rightarrow V \mid f_{\mid \mathrm{k} X^{p+1} \times_k \mathrm{k} \mathfrak{U}[q]} \; \text{is continuous} \right\} \, . 
\end{equation} The abelian groups $A_{kcr}^{p,q} ( X,\mathfrak{U} ; V )$ form a first quadrant double complex whose vertical and horizontal differentials are given by the same formulas as for the double complex $A_{cr}^{p,q} ( X,\mathfrak{U} ; V )$ introduced in Section \ref{sectss}. Analogously to the latter double complex the rows of the double complex $A_{kcr}^{*,*} ( X,\mathfrak{U} ; V )$ can be augmented by the complex $A_{kcr}^* (X,\mathfrak{U} ;V)$ for the covering $\mathfrak{U}$ and the columns can be augmented by the exact complex $A_{kc}^* (X;V)$ of continuous cochains. We denote the total complex of the double complex $A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)$ by $\mathrm{Tot} A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)$. The augmentations of the rows and columns induce morphisms $i_k^* : A_{kcr}^* ( X, \mathfrak{U} ; V) \rightarrow \mathrm{Tot} A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)$ and $j_k^*: A_{kc}^* ( X ; V) \rightarrow \mathrm{Tot} A_{kcr}^{*,*} ( X,\mathfrak{U} ; V)$ of cochain complexes respectively. \begin{lemma} \label{columnsexactk} The morphism $i_k^*: A_{kcr}^* ( X,\mathfrak{U} ; V) \rightarrow \mathrm{Tot} A_{kcr}^{*,*} (X,\mathfrak{U} ; V)$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The proof of Lemma \ref{columnsexact} also works in the category $\mathbf{kTop}$ of $k$-spaces. \end{proof} For $G$-invariant open coverings $\mathfrak{U}$ of $X$ one can consider the sub double complex $A_{kcr}^{*,*} ( X,\mathfrak{U} ; V )^G$ of $A_{kcr}^{*,*} ( X,\mathfrak{U} ; V )$ whose rows are augmented by the cochain complex $A_{kcr}^* (X,\mathfrak{U} ;V)^G$ for the covering $\mathfrak{U}$ and the columns can be augmented by the complex $A_{kc}^* (X;V)^G$ of continuous equivariant cochains (which is not exact in general). \begin{lemma} \label{columnsexacteqk} For $G$-invariant coverings $\mathfrak{U}$ of $X$ the morphism $i_{k,eq}^* := {i_k^*}^G$ induces an isomorphism in cohomology. 
\end{lemma} \begin{proof} The proof is analogous to that of Lemma \ref{columnsexacteq}. \end{proof} So the morphism $H (i_{k,eq}) : H_{kcr,eq} (X,\mathfrak{U};V) \rightarrow H ( \mathrm{Tot} A_{kcr}^{*,*} ( X, \mathfrak{U} ; V)^G )$ is invertible. For the composition $H (i_{k,eq})^{-1} H(j_{k,eq}):H_{kc,eq}(X;V)\rightarrow H_{kcr,eq}(X,\mathfrak{U};V)$ we observe: \begin{proposition} \label{contiscohtocrk} The image $j_k^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in $\mathrm{Tot} A_{kcr}^{*,*} (X,\mathfrak{U};V)^G$ is cohomologous to the image $i_{k,eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{kcr}^n (X,\mathfrak{U};V)^G$ in $\mathrm{Tot} A_{kcr}^{*,*} (X,\mathfrak{U};V)^G$. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{contiscohtocr}. \end{proof} \begin{corollary} The map $H (i_{k,eq})^{-1} H(j_{k,eq}) :H_{kc,eq}(X;V) \rightarrow H_{kcr,eq}(X,\mathfrak{U};V)$ is induced by the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$. \end{corollary} \begin{corollary} If the morphism $j^*_{k,eq}:={j_k^*}^G : A_{kc}^* (X;V)^G \rightarrow \mathrm{Tot} A_{kcr}^{*,*}(X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. \end{corollary} \begin{lemma} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $X$ the morphism $\colim_i i_k^*: \colim_i A_{kcr}^* ( X,\mathfrak{U}_i ; V) \rightarrow \mathrm{Tot} \colim_i A_{kcr}^{*,*} (X,\mathfrak{U}_i ; V)$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \ref{columnsexactk}). 
\end{proof} \begin{lemma} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of $G$-invariant open coverings of $X$ the morphism $\colim_i i_{k,eq}^*: \colim_i A_{kcr}^* ( X,\mathfrak{U}_i ; V)^G \rightarrow \mathrm{Tot} \colim_i A_{kcr}^{*,*} (X,\mathfrak{U}_i ; V)^G$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \ref{columnsexacteqk}). \end{proof} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods then one obtains the double complex \begin{equation*} A_{kcg}^{*,*} (X;V):= \colim_{\mathfrak{U} \text{ is an open cover of $X$}} A_{kcr}^{*,*} (X,\mathfrak{U};V) \end{equation*} whose rows and columns are augmented by the complexes $A_{kcg}^* (X;V)$ and $A_{kc}^* (X;V)$ respectively. In this case the colimit morphism $i_{kcg}^* : A_{kcg}^* ( X; V) \rightarrow \mathrm{Tot} A_{kcg}^{*,*} (X; V)$ induces an isomorphism in cohomology. Furthermore the colimit double complex $A_{kcg}^{*,*} (X;V)$ is then a double complex of $G$-modules and the $G$-equivariant cochains form a sub double complex $A_{kcg}^{*,*} (X;V)^G$, whose rows and columns are augmented by the colimit complex $A_{kcg,eq}^* (X;V)$ and by the complex $A_{kc}^* (X;V)^G$ respectively. In addition we observe: \begin{lemma} \label{natinclofeqdcisisok} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods then the natural morphism of double complexes \begin{equation*} A_{kcg,eq}^{*,*} (X;V):= \colim_{\mathfrak{U} \text{ is a $G$-invariant open cover of $X$}} A_{kcr}^{*,*} (X,\mathfrak{U};V)^G \rightarrow A_{kcg}^{*,*} (X;V)^G \end{equation*} is a natural isomorphism. 
\end{lemma} \begin{proof} The proof is analogous to that of Proposition \ref{natinclofeqccisisok}. \end{proof} As a consequence the colimit morphism $i_{kcg,eq}^* : A_{kcg,eq}^* ( X; V) \rightarrow \mathrm{Tot} A_{kcg}^{*,*} (X; V)^G$ then induces an isomorphism in cohomology, and the morphism $H (i_{kcg,eq})$ is invertible. For the composition $H(i_{kcg,eq})^{-1} H(j_{k,eq}):H_{kc,eq}(X;V)\rightarrow H_{kcg,eq}(X;V)$ we observe: \begin{proposition} \label{contiscohtocreqk} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods then the image $j_k^n (f)$ of a continuous equivariant $n$-cocycle $f$ on $X$ in $\mathrm{Tot} A_{kcg}^{*,*} (X;V)^G$ is cohomologous to the image $i_{kcg,eq}^n (f)$ of the equivariant cocycle $f\in A_{kcg,eq}^n (X;V)$ in $\mathrm{Tot} A_{kcg}^{*,*} (X;V)^G$. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{contiscohtocr}. \end{proof} \begin{corollary} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods then the composition $H (i_{kcg,eq})^{-1} H(j_{k,eq}):H_{kc,eq}(X;V)\rightarrow H_{kcg,eq}(X;V)$ is induced by the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$. 
\end{corollary} \begin{corollary} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the morphism $j^*_{k,eq}:={j_k^*}^G : A_{kc}^* (X;V)^G \rightarrow \mathrm{Tot} A_{kcg}^{*,*}(X;V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg,eq}^* (X;V)$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. \end{corollary} \section{Continuous and $\mathfrak{U}$-Continuous Cochains on $k$-Spaces} \label{seccontanduccck} In this section we consider transformation $k$-groups $(G,X)$ and $G$-modules $V$ in $\mathbf{kTop}$ for which we show that the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ of the complex of continuous equivariant cochains into the complex of equivariant $\mathfrak{U}$-continuous cochains induces an isomorphism $H_{kc,eq} (X;V) \cong H_{kcr,eq} (X,\mathfrak{U};V)$ provided the $k$-space $X$ is contractible. The procedure is similar to that in Section \ref{seccontanduccc}. At first we reduce the problem to the non-equivariant case: \begin{proposition} \label{noneqextheneqexk} If the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*}(X,\mathfrak{U};V)$ are exact, then the augmented sub column complexes $A_{kc}^p (X;V)^G \hookrightarrow A_{kcr}^{p,*}(X,\mathfrak{U};V)^G$ of equivariant cochains are exact as well. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{noneqextheneqex}. \end{proof} \begin{corollary} \label{augexthenjeqindisok} If the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $j_{k,eq}^* : A_{kc}^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{kcr}^{*,*}(X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. 
\end{corollary} \begin{corollary} \label{augextheninclindisok} If the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*} (X,\mathfrak{U};V)$ are exact, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. \end{corollary} To achieve the announced result it remains to show that for contractible $k$-spaces $X$ the colimit augmented columns $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ are exact. For this purpose we first consider the cochain complex associated to the cosimplicial abelian group \begin{equation*} A_k^{p,*} (X;V) := \left\{ f : X^{p+1} \times X^{*+1} \rightarrow V \mid \, \forall \vec{x}' \in X^{*+1} :f (-,\vec{x}') \in C ( \mathrm{k} X^{p+1},V) \right\} \end{equation*} of global cochains, its subcomplex $A_{kcr}^{p,*} (X,\mathfrak{U};V)$ and the cochain complexes associated to the cosimplicial abelian groups \begin{eqnarray*} A_k^{p,*} (X,\mathfrak{U};V) & := & \{ f : X^{p+1} \times \mathfrak{U}[*] \rightarrow V \mid \, \forall \vec{x}' \in \mathfrak{U}[*] : f (-,\vec{x}') \in C (\mathrm{k} X^{p+1},V) \} \quad \text{and} \\ A_{kc}^{p,*} (X,\mathfrak{U};V) & := & C ( \mathrm{k} X^{p+1} \times_k \mathrm{k} \mathfrak{U}[*] , V) \, .
\end{eqnarray*} Restriction of global to local cochains induces morphisms of cochain complexes $\res^{p,*}_k : A_k^{p,*} (X;V) \twoheadrightarrow A_k^{p,*} (X,\mathfrak{U};V)$ and $\res_{kcr}^{p,*} : A_{kcr}^{p,*} (X,\mathfrak{U};V) \twoheadrightarrow A_{kc}^{p,*} (X,\mathfrak{U};V)$ intertwining the inclusions of the subcomplexes $A_{kcr}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X;V)$ and $A_{kc}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X,\mathfrak{U};V)$, so one obtains the following commutative diagram \begin{equation} \label{morphexseqk} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\res_{kcr}^{p,*} ) & \longrightarrow & A_{kcr}^{p,*} (X,\mathfrak{U};V) & \longrightarrow & A_{kc}^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\res^{p,*}_k ) & \longrightarrow & A_k^{p,*} (X;V) & \longrightarrow & A_k^{p,*} (X , \mathfrak{U};V) & \longrightarrow 0 \end{array} \end{equation} of cochain complexes whose rows are exact. The kernel $\ker (\res^{p,q}_k )$ is the subspace of those $(p,q)$-cochains which are trivial on $\mathrm{k} X^{p+1} \times_k \mathrm{k} \mathfrak{U} [q]$. Since cochains vanishing on $\mathrm{k} X^{p+1} \times_k \mathrm{k} \mathfrak{U}[q]$ are in particular continuous there, both kernels coincide. We abbreviate the complex $\ker (\res^{p,*}_k ) = \ker (\res_{kcr}^{p,*} )$ by $K_k^{p,*}$ and denote the cohomology groups of the complex $A_{kcr}^{p,*} (X,\mathfrak{U};V)$ by $H_{kcr}^{p,*} (X,\mathfrak{U};V)$, the cohomology groups of the complex $A_{kc}^{p,*} (X,\mathfrak{U};V)$ of continuous cochains by $H_{kc}^{p,*} (X,\mathfrak{U};V)$ and the cohomology groups of the complex $A_k^{p,*} (X,\mathfrak{U};V)$ by $H_k^{p,*} (X,\mathfrak{U};V)$. \begin{lemma} The cochain complexes $A_k^{p,*} (X;V)$ are exact.
\end{lemma} \begin{proof} For any point $* \in X$ the homomorphisms $h^{p,q} : A_k^{p,q} (X;V) \rightarrow A_k^{p,q-1} (X;V)$ given by $h^{p,q} (f) (\vec{x},\vec{x}'):=f (\vec{x},*,\vec{x}')$ form a contraction of the complex $A_k^{p,*} (X;V)$. \end{proof} The morphism of short exact sequences of cochain complexes in Diagram \ref{morphexseqk} gives rise to a morphism of long exact cohomology sequences, in which the cohomology of the complex $A_k^{p,*} (X;V)$ is trivial: \begin{equation} \label{diaglecsk} \xymatrix@R-10pt@C-4pt{ \ar[r] & H^q (K_k^{p,*} ) \ar[r] \ar@{=}[d] & H_{kcr}^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H_{kc}^{p,q} (X,\mathfrak{U};V) \ar[r] \ar[d] & H^{q+1} (K_k^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_k^{p,*} ) \ar[r]& 0 \ar[r] & H_k^{p,q} (X,\mathfrak{U};V) \ar[r]^\cong & H^{q+1} (K_k^{p,*} ) \ar[r] & {} } \end{equation} \begin{lemma} The augmented complex $A_{kc}^p (X;V) \hookrightarrow A_{kcr}^{p,*} (X,\mathfrak{U};V)$ is exact if and only if the inclusion $A_{kc}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} This is an immediate consequence of Diagram \ref{diaglecsk}. \end{proof} \begin{proposition} If the inclusion $A_{kc}^{p,*} (X,\mathfrak{U};V) \hookrightarrow A_k^{p,*} (X,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the inclusions $A_{kc}^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{kcr}^{*,*}(X,\mathfrak{U};V)^G$ and $A_{kc}^* (X;V)^G \hookrightarrow A_{kcr}^* (X,\mathfrak{U};V)^G$ also induce an isomorphism in cohomology. \end{proposition} \begin{proof} This follows from the preceding Lemma and Corollaries \ref{augexthenjeqindisok} and \ref{augextheninclindisok}.
\end{proof} For $k$-spaces $X$ for which the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods the passage to the colimit over all open coverings of $X$ yields the corresponding results for the complexes of cochains with continuous germs: \begin{proposition} \label{noneqextheneqexcgk} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*}(X;V)$ are exact, then the augmented sub column complexes $A_{kc}^p (X;V)^G \hookrightarrow A_{kcg}^{p,*}(X;V)^G$ of equivariant cochains are exact as well. \end{proposition} \begin{proof} The proof is similar to that of Proposition \ref{noneqextheneqex}. \end{proof} \begin{corollary} \label{augexthenjeqindisocgk} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ are exact, then the inclusion $j_{k, eq}^* : A_{kc}^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{kcg}^{*,*}(X;V)^G$ induces an isomorphism in cohomology. \end{corollary} \begin{corollary} \label{augextheninclindisocgk} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the augmented column complexes $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ are exact, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology. 
\end{corollary} \begin{remark} \label{remonlyginvcovk} Alternatively to taking the colimit over all open coverings $\mathfrak{U}$ of $X$ one may consider $G$-invariant open coverings only to obtain the same results. (This was shown in Proposition \ref{natinclofeqccisisok} and Lemma \ref{natinclofeqdcisisok}.) \end{remark} \begin{example} If $G=X$ is a metrisable, locally compact or Hausdorff $k_\omega$ topological group which acts on itself by left translation and the augmented columns $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V) := \colim A_{kcr}^{p,*} (X,\mathfrak{U}_U;V)$ (where $U$ runs over all open identity neighbourhoods in $G$) are exact, then $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology. \end{example} The complex $A_k^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the complex $A^* (\mathfrak{U}; C (\mathrm{k} X^{p+1},V))$. If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods then the colimit $A_{AS}^* (X; C (\mathrm{k} X^{p+1} ,V)):=\colim A^* (\mathfrak{U}; C (\mathrm{k} X^{p+1} ,V))$, where $\mathfrak{U}$ runs over all open coverings of $X$, is the complex of Alexander-Spanier cochains on $X$ (with values in $C (\mathrm{k} X^{p+1} ,V)$). In this case the colimit complex $\colim A_k^{p,*} (X,\mathfrak{U};V)$ is isomorphic to the cochain complex $A_{AS}^* (X;C (\mathrm{k} X^{p+1} ,V))$. A similar observation can be made for the cochain complex $A_{kc}^{p,*} (X,\mathfrak{U};V)$ because the exponential law $C ( \mathrm{k} X^{p+1} \times_k \mathrm{k} \mathfrak{U}[q],V) \cong C (\mathrm{k} X^{p+1}, \mathrm{k} C (\mathrm{k} \mathfrak{U}[q],V))$ holds in $\mathbf{kTop}$.
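Concretely, the isomorphism $A_k^{p,*} (X,\mathfrak{U};V) \cong A^* (\mathfrak{U}; C (\mathrm{k} X^{p+1},V))$ can be realised by currying the first variable,
\begin{equation*}
f \longmapsto \check{f} \, , \qquad \check{f} (\vec{x}') (\vec{x}) := f (\vec{x},\vec{x}') \, ,
\end{equation*}
where $\check{f}$ is an arbitrary (not necessarily continuous) function $\mathfrak{U}[*] \rightarrow C (\mathrm{k} X^{p+1},V)$; this is the reason why the colimit over all open coverings yields the set-theoretic Alexander-Spanier complex with coefficients in the abelian group $C (\mathrm{k} X^{p+1},V)$.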
Passing to the colimit in Diagram \ref{morphexseqk} yields the morphism \begin{equation} \label{morphexseqcgk} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\res_{kcg}^{p,*} ) & \longrightarrow & A_{kcg}^{p,*} (X;V) & \longrightarrow & \colim A_{kc}^{p,*} (X,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\res_k^{p,*} ) & \longrightarrow & A_k^{p,*} (X;V) & \longrightarrow & A_{AS}^* (X; C (\mathrm{k} X^{p+1},V)) & \longrightarrow 0 \end{array} \end{equation} of short exact sequences of cochain complexes. The kernel $\ker (\res^{p,q}_k)$ is the subspace of those $(p,q)$-cochains which are trivial on $\mathrm{k} X^{p+1} \times_k \mathrm{k} \mathfrak{U} [q]$ for some open covering $\mathfrak{U}$ of $X$. Since cochains vanishing on $\mathrm{k} X^{p+1} \times_k \mathrm{k} \mathfrak{U}[q]$ are in particular continuous there, both kernels coincide. We abbreviate the complex $\ker (\res^{p,*}_k ) = \ker (\res_{kcg}^{p,*} )$ by $K_{kcg}^{p,*}$ and denote the cohomology groups of the complex $A_{kcg}^{p,*} (X;V)$ by $H_{kcg}^{p,*} (X;V)$. The morphism of short exact sequences of cochain complexes in Diagram \ref{morphexseqcgk} gives rise to a morphism of long exact cohomology sequences: \begin{equation} \label{diaglecscgk} \xymatrix@C-11pt@R-10pt{ \ar[r] & H^q (K_{kcg}^{p,*} ) \ar[r] \ar@{=}[d] & H_{kcg}^{p,q} (X;V) \ar[r] \ar[d] & H^q (\colim A_{kc}^{p,*} (X,\mathfrak{U};V)) \ar[r] \ar[d] & H^{q+1} (K_{kcg}^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_{kcg}^{p,*} ) \ar[r]& 0 \ar[r] & H_{AS}^q (X; C (\mathrm{k} X^{p+1},V)) \ar[r]^\cong & H^{q+1} (K_{kcg}^{p,*} ) \ar[r] & {} } \end{equation} \begin{lemma} The augmented complex $A_{kc}^p (X;V) \hookrightarrow A_{kcg}^{p,*} (X;V)$ is exact if and only if the inclusion $\colim A_{kc}^{p,*} (X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C(\mathrm{k} X^{p+1},V))$ of cochain complexes induces an isomorphism in cohomology.
\end{lemma} \begin{proof} This is an immediate consequence of Diagram \ref{diaglecscgk}. \end{proof} \begin{proposition} \label{inclcolimacpsindisothenjeqisok} If the open diagonal neighbourhoods $\mathrm{k} \mathfrak{U}[n]$ in $\mathrm{k} X^{n+1}$ for open coverings $\mathfrak{U}$ of $X$ are cofinal in the directed set of all open diagonal neighbourhoods and the inclusion $\colim A_{kc}^{p,*}(X,\mathfrak{U};V)\hookrightarrow A_{AS}^* (X;C(\mathrm{k} X^{p+1},V))$ induces an isomorphism in cohomology, then $j_{k, eq}^* : A_{kc}^* (X;V)^G \hookrightarrow \mathrm{Tot} A_{kcg}^{*,*}(X;V)^G$ and $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ also induce an isomorphism in cohomology. \end{proposition} \begin{proof} This follows from the preceding Lemma and Corollaries \ref{augexthenjeqindisocgk} and \ref{augextheninclindisocgk}. \end{proof} As observed before (cf. Remark \ref{remonlyginvcovk}) one may restrict oneself to the directed system of $G$-invariant open coverings only to achieve the same result. Thus we observe: \begin{corollary} If $G=X$ is a locally contractible topological group which is metrisable, locally compact or Hausdorff $k_\omega$, acts on itself by left translation and the inclusion $\colim A_{kc}^{p,*} (X,\mathfrak{U}_U;V)\hookrightarrow A_{AS}^* (X;C(\mathrm{k} X^{p+1},V))$ (where $U$ runs over all open identity neighbourhoods in $G$) induces an isomorphism in cohomology, then the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology as well. \end{corollary} \begin{proof} It has been shown in \cite{vE62b} that the cohomology of the colimit cochain complex $\colim A^* ( \mathfrak{U} ;C(\mathrm{k} X^{p+1},V))$ is the Alexander-Spanier cohomology of $X$ with coefficients $C(\mathrm{k} X^{p+1},V)$. \end{proof} \begin{lemma} \label{xcontrthenacpsistrivk} If the topological space $X$ is contractible, then the cohomology of the complex $\colim A_{kc}^{p,*} (X,\mathfrak{U};V)$ is trivial.
\end{lemma} \begin{proof} The reasoning is analogous to that for the Alexander-Spanier presheaf. The proof of \cite[Theorem 2.5.2]{F10} carries over almost verbatim. \end{proof} \begin{theorem} For contractible $X$ the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ induces an isomorphism in cohomology. \end{theorem} \begin{proof} If the $k$-space $X$ is contractible, then the Alexander-Spanier cohomology of $X$ is trivial and the cohomology of the cochain complex $\colim A_{kc}^{p,*} (X,\mathfrak{U};V)$ is trivial by Lemma \ref{xcontrthenacpsistrivk}. By Proposition \ref{inclcolimacpsindisothenjeqisok} the inclusion $A_{kc}^* (X;V)^G \hookrightarrow A_{kcg}^* (X;V)^G$ then induces an isomorphism in cohomology. \end{proof} \begin{corollary} For metrisable, locally compact or Hausdorff $k_\omega$ topological groups $G$ which are contractible, the continuous group cohomology $H_{kc,eq} (G;V)$ is isomorphic to the cohomology $H_{kcg,eq} (G;V)$ of homogeneous group cochains with continuous germ at the diagonal. \end{corollary} \section{Complexes of Smooth Cochains} In this section we introduce the corresponding subcomplexes and sub double complexes for smooth transformation groups $(G,M)$ and smooth $G$-modules $V$, where $V$ is an abelian Lie group. (We use the general infinite dimensional calculus introduced in \cite{BGN04}.) Let $(G,M)$ be a smooth transformation group, $V$ be a smooth $G$-module and $\mathfrak{U}$ be an open covering of $M$. \begin{definition} For every manifold $M$ and abelian Lie group $V$ the subcomplex $A_s^* (M;V):= C^\infty (M^{*+1};V)$ of the standard complex is called the \emph{smooth standard complex}. The cohomology $H_{s,eq} (M;V)$ of the subcomplex $A_s^* (M;V)^G$ is called the equivariant smooth cohomology of $M$ (with values in $V$).
\end{definition} \begin{example} For any Lie group $G$ which acts on itself by left translation and smooth $G$-module $V$ the complex $A_s^* (G;V)^G$ is the complex of smooth (homogeneous) group cochains; its cohomology $H_{s,eq} (G;V)$ is the smooth group cohomology of $G$ with values in $V$. \end{example} For Lie groups $G$ and $G$-modules $V$ the first cohomology group $H_{s,eq}^1 (G;V)$ classifies smooth crossed homomorphisms modulo principal derivations, the second cohomology group $H_{s,eq}^2 (G;V)$ classifies equivalence classes of Lie group extensions $V \hookrightarrow \hat{G} \twoheadrightarrow G$ which admit a smooth global section (i.e. $\hat{G} \twoheadrightarrow G$ is a trivial smooth $V$-principal bundle) and the third cohomology group $H_{s,eq}^3 (G;V)$ classifies equivalence classes of smoothly split crossed modules. For each open covering $\mathfrak{U}$ of $M$ one can consider the subcomplex of $A^* (M;V)$ formed by the groups \begin{equation*} A_{sr}^n (M,\mathfrak{U};V) := \left\{ f \in A^n (M;V) \mid \, f_{\mid \mathfrak{U} [n]} \in C^\infty (\mathfrak{U}[n];V) \right\} \end{equation*} of cochains whose restrictions to the subspaces $\mathfrak{U}[n]$ of $M^{n+1}$ are smooth. The cohomology of the cochain complex $A_{sr}^* (M,\mathfrak{U};V)$ is denoted by $H_{sr} (M,\mathfrak{U};V)$. If the covering $\mathfrak{U}$ of $M$ is $G$-invariant, then $\mathfrak{U}[*]$ is a simplicial $G$-subspace of the simplicial $G$-space $M^{*+1}$. For $G$-invariant coverings $\mathfrak{U}$ of $M$ the cohomology of the subcomplex $A_{sr}^* (M,\mathfrak{U};V)^G$ of $G$-equivariant cochains is denoted by $H_{sr,eq} (M,\mathfrak{U};V)$. \begin{example} If $G=M$ is a Lie group which acts on itself by left translation and $U$ an open identity neighbourhood, then the complex $A_{sr}^* (M,\mathfrak{U}_U;V)^G$ is the complex of homogeneous group cochains whose restrictions to the subspaces $\mathfrak{U}_U [*]$ are smooth.
(These are sometimes called $\mathfrak{U}$-smooth cochains.) \end{example} For directed systems $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $M$ one can also consider the colimit complex $\colim_i A_{sr}^* (M,\mathfrak{U}_i ;V)$. In particular for the directed system of all open coverings of $M$ one observes that the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $M^{n+1}$ for open coverings $\mathfrak{U}$ of $M$ are cofinal in the directed set of all open diagonal neighbourhoods, hence one obtains the complex \begin{equation*} A_{sg}^* (M;V):= \colim_{\mathfrak{U} \text{ is an open cover of } M} A_{sr}^* (M,\mathfrak{U};V) \end{equation*} of global cochains whose germs at the diagonal are smooth. This is a subcomplex of the standard complex $A^* (M;V)$ which is invariant under the $G$-action (Eq. \ref{defgact}) and thus a subcomplex of $G$-modules. The $G$-equivariant cochains with smooth germ form a subcomplex $A_{sg}^* (M;V)^G$ thereof, whose cohomology is denoted by $H_{sg,eq} (M;V)$. The latter subcomplex can also be obtained by taking the colimit over all $G$-invariant open coverings of $M$ only: \begin{proposition} \label{natinclofeqccisisosmooth} The natural morphism of cochain complexes \begin{equation*} A_{sg,eq}^* (M;V):= \colim_{\mathfrak{U} \text{ is a $G$-invariant open cover of } M} A_{sr}^* (M,\mathfrak{U};V)^G \rightarrow A_{sg}^* (M;V)^G \end{equation*} is a natural isomorphism. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{natinclofeqccisiso}. \end{proof} \begin{corollary} The cohomology $H_{sg,eq} (M;V)$ is the cohomology of the complex of equivariant cochains which are smooth on some $G$-invariant neighbourhood of the diagonal. \end{corollary} \begin{example} If $G=M$ is a Lie group which acts on itself by left translation, then the complex $A_{sg}^* (G;V)^G$ is the complex of homogeneous group cochains whose germs at the diagonal are smooth.
(By abuse of language these are sometimes called 'locally smooth' group cochains.) \end{example} We will show (in Section \ref{secsmoothanduscc}) that the inclusion $A_s^* (M;V) \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)$ induces an isomorphism in cohomology provided the manifold $M$ is smoothly contractible. For this purpose we consider the abelian groups \begin{equation} \label{defrsu} A_{sr}^{p,q} ( M,\mathfrak{U} ; V ) := \left\{ f: M^{p+1} \times M^{q+1} \rightarrow V \mid f_{\mid M^{p+1} \times \mathfrak{U}[q]} \; \text{is smooth} \right\} \, . \end{equation} The abelian groups $A_{sr}^{p,q} ( M,\mathfrak{U} ; V )$ form a first quadrant sub double complex of the double complex $A_{cr}^{p,q} ( M,\mathfrak{U} ; V )$. The rows of the double complex $A_{sr}^{*,*} ( M,\mathfrak{U} ; V )$ can be augmented by the complex $A_{sr}^* (M,\mathfrak{U} ;V)$ for the covering $\mathfrak{U}$ and the columns can be augmented by the exact complex $A_s^* (M;V)$ of smooth cochains: \begin{equation*} \vcenter{ \xymatrix{ \vdots & \vdots & \vdots & \vdots \\ A_{sr}^2 (M, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{sr}^{0,2} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{1,2} ( M, \mathfrak{U}; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{2,2} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{sr}^1 (M, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{sr}^{0,1} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{1,1} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{2,1} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ A_{sr}^0 (M, \mathfrak{U};V) \ar[r] \ar[u]_{d_{v}} & A_{sr}^{0,0} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{1,0} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & A_{sr}^{2,0} ( M, \mathfrak{U} ; V) \ar[r]^{d_{h}}\ar[u]_{d_{v}} & \cdots \\ & A_s^0 ( M ; V) \ar[r]^{d_{h}}\ar[u] & A_s^1 ( M ; V) \ar[r]^{d_{h}}\ar[u] & A_s^2 ( M ; V) \ar[r]^{d_{h}}\ar[u] & \cdots }}
\end{equation*} We denote the total complex of the double complex $A_{sr}^{*,*} ( M, \mathfrak{U} ; V)$ by $\mathrm{Tot} A_{sr}^{*,*} ( M, \mathfrak{U} ; V)$. The augmentations of the rows and columns of this double complex induce morphisms $i^* : A_{sr}^* ( M, \mathfrak{U} ; V) \rightarrow \mathrm{Tot} A_{sr}^{*,*} ( M, \mathfrak{U} ; V)$ and $j^*: A_s^* ( M ; V) \rightarrow \mathrm{Tot} A_{sr}^{*,*} ( M,\mathfrak{U} ; V)$ of cochain complexes respectively. \begin{lemma} \label{columnsexactsmooth} The morphism $i^*: A_{sr}^* ( M,\mathfrak{U} ; V) \rightarrow \mathrm{Tot} A_{sr}^{*,*} (M,\mathfrak{U} ; V)$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The row contraction given in the proof of Lemma \ref{columnsexact} restricts to one of the augmented sub row complexes $A_{sr}^* (M,\mathfrak{U} ; V) \hookrightarrow A_{sr}^{*,*} (M,\mathfrak{U} ; V)$. \end{proof} \begin{remark} Note that this construction does not work for the column complexes. \end{remark} For $G$-invariant open coverings $\mathfrak{U}$ of $M$ one can consider the sub double complex $A_{sr}^{*,*} ( M,\mathfrak{U} ; V )^G$ of $A_{sr}^{*,*} ( M,\mathfrak{U} ; V )$ whose rows are augmented by the cochain complex $A_{sr}^* (M,\mathfrak{U} ;V)^G$ for the covering $\mathfrak{U}$ and whose columns can be augmented by the complex $A_s^* (M;V)^G$ of smooth equivariant cochains (which is not exact in general). \begin{lemma} \label{columnsexacteqsmooth} For $G$-invariant coverings $\mathfrak{U}$ of $M$ the morphism $i_{eq}^* := {i^*}^G$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The contraction $h_{*,q}$ of the augmented rows $A_{sr}^q ( M, \mathfrak{U} ; V) \hookrightarrow \mathrm{Tot} A_{sr}^{*,q} ( M, \mathfrak{U} ; V)$ defined in Eq. \ref{defrowcontr} is $G$-equivariant and thus restricts to a row contraction of the augmented sub-row $A_{sr}^q ( M, \mathfrak{U} ; V)^G \hookrightarrow \mathrm{Tot} A_{sr}^{*,q} ( M, \mathfrak{U} ; V)^G$.
\end{proof} So the morphism $H (i_{eq}) : H_{sr,eq} (M,\mathfrak{U};V) \rightarrow H ( \mathrm{Tot} A_{sr}^{*,*} ( M, \mathfrak{U} ; V)^G )$ is invertible. For the composition $H (i_{eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{sr,eq}(M,\mathfrak{U};V)$ we observe: \begin{proposition} \label{contiscohtocrsmooth} The image $j^n (f)$ of a smooth equivariant $n$-cocycle $f$ on $M$ in $\mathrm{Tot} A_{sr}^{*,*} (M,\mathfrak{U};V)^G$ is cohomologous to the image $i_{eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{sr}^n (M,\mathfrak{U};V)^G$ in $\mathrm{Tot} A_{sr}^{*,*} (M,\mathfrak{U};V)^G$. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{contiscohtocr}. \end{proof} \begin{corollary} The composition $H (i_{eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{sr,eq}(M,\mathfrak{U};V)$ is induced by the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$. \end{corollary} \begin{corollary} If the morphism $j^*_{eq}:={j^*}^G : A_s^* (M;V)^G \rightarrow \mathrm{Tot} A_{sr}^{*,*}(M,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. \end{corollary} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $M$ one can also consider the corresponding augmented colimit double complexes. In particular for the directed system of all open coverings of $M$ one obtains the double complex \begin{equation*} A_{sg}^{*,*} (M;V):= \colim_{\mathfrak{U} \text{ is an open cover of } M} A_{sr}^{*,*} (M,\mathfrak{U};V) \end{equation*} whose rows and columns are augmented by the colimit complex $A_{sg}^* (M;V)$ and by the complex $A_s^* (M;V)$ respectively.
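For reference, the total complexes used here are the direct sum total complexes; with the usual sign convention (which we assume throughout) one has
\begin{equation*}
\mathrm{Tot}^n A_{sg}^{*,*} (M;V) = \bigoplus_{p+q=n} A_{sg}^{p,q} (M;V) \, , \qquad D := d_h + (-1)^p d_v \, ,
\end{equation*}
and the augmentation morphisms $j^n$ and $i^n$ map into the edge summands of bidegree $(n,0)$ and $(0,n)$ respectively.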
\begin{lemma} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of open coverings of $M$ the morphism $\colim_i i^*: \colim_i A_{sr}^* ( M,\mathfrak{U}_i ; V) \rightarrow \mathrm{Tot} \colim_i A_{sr}^{*,*} (M,\mathfrak{U}_i ; V)$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \ref{columnsexactsmooth}). \end{proof} As a consequence the colimit morphism $i_{sg}^* : A_{sg}^* ( M; V) \rightarrow \mathrm{Tot} A_{sg}^{*,*} (M; V)$ induces an isomorphism in cohomology. The colimit double complex $A_{sg}^{*,*} (M;V)$ is a double complex of $G$-modules and the $G$-equivariant cochains form a sub double complex $A_{sg}^{*,*} (M;V)^G$, whose rows and columns are augmented by the colimit complex $A_{sg,eq}^* (M;V)$ and by the complex $A_s^* (M;V)^G$ respectively. \begin{lemma} For any directed system $\{ \mathfrak{U}_i \mid i \in I \}$ of $G$-invariant open coverings of $M$ the morphism $\colim_i i_{eq}^*: \colim_i A_{sr}^* ( M,\mathfrak{U}_i ; V)^G \rightarrow \mathrm{Tot} \colim_i A_{sr}^{*,*} (M,\mathfrak{U}_i ; V)^G$ induces an isomorphism in cohomology. \end{lemma} \begin{proof} The passage to the colimit preserves the exactness of the augmented row complexes (Lemma \ref{columnsexacteqsmooth}). \end{proof} Moreover, since the open diagonal neighbourhoods $\mathfrak{U}[n]$ in $M^{n+1}$ for open coverings $\mathfrak{U}$ of $M$ are cofinal in the directed set of all open diagonal neighbourhoods, we observe: \begin{lemma} \label{natinclofeqdcisisosmooth} The natural morphism of double complexes \begin{equation*} A_{sg,eq}^{*,*} (M;V):= \colim_{\mathfrak{U} \text{ is a $G$-invariant open cover of } M} A_{sr}^{*,*} (M,\mathfrak{U};V)^G \rightarrow A_{sg}^{*,*} (M;V)^G \end{equation*} is a natural isomorphism. \end{lemma} \begin{proof} The proof is analogous to that of Proposition \ref{natinclofeqccisiso}.
\end{proof} As a consequence the colimit morphism $i_{sg,eq}^* : A_{sg,eq}^* ( M; V) \rightarrow \mathrm{Tot} A_{sg}^{*,*} (M; V)^G$ induces an isomorphism in cohomology, and the morphism $H (i_{sg,eq})$ is invertible. For the composition $H(i_{sg,eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{sg,eq}(M;V)$ we observe: \begin{proposition} \label{smoothiscohtosreq} The image $j^n (f)$ of a smooth equivariant $n$-cocycle $f$ on $M$ in $\mathrm{Tot} A_{sg}^{*,*} (M;V)^G$ is cohomologous to the image $i_{sg,eq}^n (f)$ of the equivariant $n$-cocycle $f\in A_{sg,eq}^n (M;V)$ in $\mathrm{Tot} A_{sg}^{*,*} (M;V)^G$. \end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{contiscohtocr}. \end{proof} \begin{corollary} The composition $H (i_{sg,eq})^{-1} H(j_{eq}):H_{s,eq}(M;V)\rightarrow H_{sg,eq}(M;V)$ is induced by the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$. \end{corollary} \begin{corollary} If the morphism $j^*_{eq}:={j^*}^G : A_s^* (M;V)^G \rightarrow \mathrm{Tot} A_{sg}^{*,*}(M;V)^G$ induces a monomorphism, epimorphism or isomorphism in cohomology, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg,eq}^* (M;V)$ induces a monomorphism, epimorphism or isomorphism in cohomology respectively. \end{corollary} \section{Smooth and $\mathfrak{U}$-Smooth Cochains} \label{secsmoothanduscc} In this section we derive results for smooth transformation groups $(G,M)$ and smooth $G$-modules $V$ which are analogous to those concerning continuous cochains. Let $(G,M)$ be a smooth transformation group, $V$ be a smooth $G$-module and $\mathfrak{U}$ be an open covering of $M$. \begin{proposition} \label{noneqextheneqexsmooth} If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*}(M,\mathfrak{U};V)$ are exact, then the augmented sub column complexes $A_s^p (M;V)^G \hookrightarrow A_{sr}^{p,*}(M,\mathfrak{U};V)^G$ of equivariant cochains are exact as well.
\end{proposition} \begin{proof} The proof is analogous to that of Proposition \ref{noneqextheneqex}. \end{proof} \begin{corollary} \label{augexthenjeqindisosmooth} If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*} (M,\mathfrak{U};V)$ are exact, then the inclusion $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow \mathrm{Tot} A_{sr}^{*,*}(M,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. \end{corollary} \begin{corollary} \label{augextheninclindisosmooth} If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*} (M,\mathfrak{U};V)$ are exact, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$ induces an isomorphism in cohomology. \end{corollary} It remains to show that for smoothly contractible manifolds $M$ the colimit augmented columns $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ are exact. For this purpose we first consider the cochain complex associated to the cosimplicial abelian group $A^{p,*} (M;V) := \left\{ f : M^{p+1} \times M^{*+1} \rightarrow V \mid \, \forall \vec{m}' \in M^{*+1} : f (-,\vec{m}') \in C^\infty (M^{p+1},V) \right\}$ of global cochains, its subcomplex $A_{sr}^{p,*} (M,\mathfrak{U};V)$ and the cochain complexes associated to the cosimplicial abelian groups \begin{eqnarray*} A^{p,*} (M,\mathfrak{U};V) & := & \{ f : M^{p+1} \times \mathfrak{U}[*] \rightarrow V \mid \, \forall \vec{m}' \in \mathfrak{U}[*] : f (-,\vec{m}') \in C^\infty (M^{p+1},V) \} \quad \text{and} \\ A_s^{p,*} (M,\mathfrak{U};V) & := & C^\infty ( M^{p+1} \times \mathfrak{U}[*] , V) \, .
\end{eqnarray*} Restriction of global to local cochains induces morphisms of cochain complexes $\res^{p,*} : A^{p,*} (M;V) \twoheadrightarrow A^{p,*} (M,\mathfrak{U};V)$ and $\res_{sr}^{p,*} : A_{sr}^{p,*} (M,\mathfrak{U};V) \twoheadrightarrow A_s^{p,*} (M,\mathfrak{U};V)$ intertwining the inclusions of the subcomplexes $A_{sr}^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M;V)$ and $A_s^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M, \mathfrak{U};V)$, so one obtains the following commutative diagram \begin{equation} \label{morphexseqsmooth} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\res_{sr}^{p,*} ) & \longrightarrow & A_{sr}^{p,*} (M,\mathfrak{U};V) & \longrightarrow & A_s^{p,*} (M,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\res^{p,*} ) & \longrightarrow & A^{p,*} (M;V) & \longrightarrow & A^{p,*} (M , \mathfrak{U};V) & \longrightarrow 0 \end{array} \end{equation} of cochain complexes whose rows are exact. The kernel $\ker (\res^{p,q} )$ is the subspace of those $(p,q)$-cochains which are trivial on $M^{p+1} \times \mathfrak{U} [q]$. Since cochains vanishing on $M^{p+1} \times \mathfrak{U}[q]$ are in particular smooth there, both kernels coincide. We abbreviate the complex $\ker (\res^{p,*} ) = \ker (\res_{sr}^{p,*} )$ by $K^{p,*}$ and denote the cohomology groups of the complex $A_{sr}^{p,*} (M,\mathfrak{U};V)$ by $H_{sr}^{p,*} (M,\mathfrak{U};V)$, the cohomology groups of the complex $A_s^{p,*} (M,\mathfrak{U};V)$ of smooth cochains by $H_s^{p,*} (M,\mathfrak{U};V)$ and the cohomology groups of the complex $A^{p,*} (M,\mathfrak{U};V)$ by $H^{p,*} (M,\mathfrak{U};V)$. \begin{lemma} The cochain complexes $A^{p,*} (M;V)$ are exact. \end{lemma} \begin{proof} For any point $* \in M$ the homomorphisms $h^{p,q} : A^{p,q} (M;V) \rightarrow A^{p,q-1} (M;V)$ given by $h^{p,q} (f) (\vec{m},\vec{m}'):=f (\vec{m},*,\vec{m}')$ form a contraction of the complex $A^{p,*} (M;V)$.
\end{proof} The morphism of short exact sequences of cochain complexes in Diagram \ref{morphexseqsmooth} gives rise to a morphism of long exact cohomology sequences, in which the cohomology of the complex $A^{p,*} (M;V)$ is trivial: \begin{equation} \label{diaglecssmooth} \xymatrix@R-10pt@C-4pt{ \ar[r] & H^q (K^{p,*} ) \ar[r] \ar@{=}[d] & H_{sr}^{p,q} (M,\mathfrak{U};V) \ar[r] \ar[d] & H_s^{p,q} (M,\mathfrak{U};V) \ar[r] \ar[d] & H^{q+1} (K^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K^{p,*} ) \ar[r]& 0 \ar[r] & H^{p,q} (M,\mathfrak{U};V) \ar[r]^\cong & H^{q+1} (K^{p,*} ) \ar[r] & {} } \end{equation} \begin{lemma} If the inclusion $A_s^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the augmented complex $A_s^p (M;V) \hookrightarrow A_{sr}^{p,*} (M,\mathfrak{U};V)$ is exact. \end{lemma} \begin{proof} This is an immediate consequence of Diagram \ref{diaglecssmooth}. \end{proof} \begin{proposition} If the inclusion $A_s^{p,*} (M,\mathfrak{U};V) \hookrightarrow A^{p,*} (M,\mathfrak{U};V)$ induces an isomorphism in cohomology, then the inclusions $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow \mathrm{Tot} A_{sr}^{*,*}(M,\mathfrak{U};V)^G$ and $A_s^* (M;V)^G \hookrightarrow A_{sr}^* (M,\mathfrak{U};V)^G$ also induce an isomorphism in cohomology. \end{proposition} \begin{proof} This follows from the preceding Lemma and Corollaries \ref{augexthenjeqindisosmooth} and \ref{augextheninclindisosmooth}. \end{proof} The passage to the colimit over all open coverings of $M$ yields the corresponding results for the complexes of cochains with smooth germs: \begin{proposition} \label{noneqextheneqexsg} If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*}(M;V)$ are exact, then the augmented sub column complexes $A_s^p (M;V)^G \hookrightarrow A_{sg}^{p,*}(M;V)^G$ of equivariant cochains are exact as well.
\end{proposition} \begin{proof} The proof is similar to that of Proposition \ref{noneqextheneqex}. \end{proof} \begin{corollary} \label{augexthenjeqindisosg} If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ are exact, then the inclusion $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow \mathrm{Tot} A_{sg}^{*,*}(M;V)^G$ induces an isomorphism in cohomology. \end{corollary} \begin{corollary} \label{augextheninclindisosg} If the augmented column complexes $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ are exact, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ induces an isomorphism in cohomology. \end{corollary} \begin{remark} \label{remonlyginvcovsmooth} As an alternative to taking the colimit over all open coverings $\mathfrak{U}$ of $M$, one may consider $G$-invariant open coverings only to obtain the same results. (This was shown in Proposition \ref{natinclofeqccisisosmooth} and Lemma \ref{natinclofeqdcisisosmooth}.) \end{remark} \begin{example} If $G=M$ is a Lie group which acts on itself by left translation and the augmented columns $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V) := \colim A^{p,*} (M,\mathfrak{U}_U;V)$ (where $U$ runs over all open identity neighbourhoods in $G$) are exact, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ induces an isomorphism in cohomology. \end{example} The complex $A^{p,*} (M,\mathfrak{U};V)$ is isomorphic to the complex $A^* (\mathfrak{U}; C (M^{p+1},V))$. The colimit $A_{AS}^* (M; C (M^{p+1} ,V)):=\colim A^* (\mathfrak{U}; C (M^{p+1} ,V))$, where $\mathfrak{U}$ runs over all open coverings of $M$, is the complex of Alexander-Spanier cochains on $M$. Therefore the colimit complex $\colim A^{p,*} (M,\mathfrak{U};V)$ is isomorphic to the cochain complex $A_{AS}^* (M;C (M^{p+1} ,V))$.
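The isomorphism $A^{p,q}(M,\mathfrak{U};V) \cong A^{q}(\mathfrak{U};C(M^{p+1},V))$ underlying this identification is the exponential (currying) law; sketched in the notation above (continuity requirements as in the surrounding definitions):

```latex
% Currying: a (p,q)-cochain in the two groups of variables corresponds
% to a q-cochain on the covering with values in cochains on M^{p+1}.
A^{p,q}(M,\mathfrak{U};V) \xrightarrow{\ \cong\ }
  A^{q}\bigl(\mathfrak{U};\, C(M^{p+1},V)\bigr),
\qquad f \longmapsto \hat f, \quad \hat f(\vec{y})(\vec{x}) := f(\vec{x},\vec{y}),
```

with inverse $g \mapsto \bigl((\vec{x},\vec{y}) \mapsto g(\vec{y})(\vec{x})\bigr)$; this is compatible with the differentials, since both act only on the $\vec{y}$-variables.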
A similar observation can be made for the cochain complex $A_s^{p,*} (M,\mathfrak{U};V)$ if the exponential law $C (M^{p+1} \times \mathfrak{U}[q],V) \cong C (M^{p+1},C (\mathfrak{U}[q],V))$ holds for a cofinal set of open coverings $\mathfrak{U}$ of $M$. Passing to the colimit in Diagram \ref{morphexseqsmooth} yields the morphism \begin{equation} \label{morphexseqsg} \begin{array}{cccccccc} 0 \longrightarrow & \ker (\res_{sg}^{p,*} ) & \longrightarrow & A_{sg}^{p,*} (M;V) & \longrightarrow & \colim A_s^{p,*} (M,\mathfrak{U};V) & \longrightarrow 0 \\ & \downarrow & & \downarrow & & \downarrow & \\ 0 \longrightarrow & \ker (\res^{p,*} ) & \longrightarrow & A^{p,*} (M;V) & \longrightarrow & A_{AS}^* (M; C (M^{p+1},V)) & \longrightarrow 0 \end{array} \end{equation} of short exact sequences of cochain complexes. The kernel $\ker (\res^{p,q})$ is the subspace of those $(p,q)$-cochains which are trivial on $M^{p+1} \times \mathfrak{U} [q]$ for some open covering $\mathfrak{U}$ of $M$. Since these $(p,q)$-cochains are in particular continuous on $M^{p+1} \times \mathfrak{U}[q]$, both kernels coincide. We abbreviate the complex $\ker (\res^{p,*} ) = \ker (\res_{sg}^{p,*} )$ by $K_{sg}^{p,*}$ and denote the cohomology groups of the complex $A_{sg}^{p,*} (M;V)$ by $H_{sg}^{p,*} (M;V)$.
The morphism of short exact sequences of cochain complexes in Diagram \ref{morphexseqsg} gives rise to a morphism of long exact cohomology sequences: \begin{equation} \label{diaglecssg} \xymatrix@C-11pt@R-10pt{ \ar[r] & H^q (K_{sg}^{p,*} ) \ar[r] \ar@{=}[d] & H_{sg}^{p,q} (M;V) \ar[r] \ar[d] & H^q (\colim A_s^{p,*} (M,\mathfrak{U};V)) \ar[r] \ar[d] & H^{q+1} (K_{sg}^{p,*} ) \ar[r] \ar@{=}[d] & {} \\ \ar[r]^\cong & H^q (K_{sg}^{p,*} ) \ar[r]& 0 \ar[r] & H_{AS}^q (M; C (M^{p+1},V)) \ar[r]^\cong & H^{q+1} (K_{sg}^{p,*} ) \ar[r] & {} } \end{equation} \begin{lemma} If the inclusion $\colim A_s^{p,*} (M,\mathfrak{U};V)\hookrightarrow A_{AS}^* (M;C(M^{p+1},V))$ of cochain complexes induces an isomorphism in cohomology, then the augmented complex $A_s^p (M;V) \hookrightarrow A_{sg}^{p,*} (M;V)$ is exact. \end{lemma} \begin{proof} This is an immediate consequence of Diagram \ref{diaglecssg}. \end{proof} \begin{proposition} \label{inclcolimacpsindisothenjeqisosmooth} If the inclusion $\colim A_s^{p,*} (M,\mathfrak{U};V)\hookrightarrow A_{AS}^* (M;C(M^{p+1},V))$ induces an isomorphism in cohomology, then $j_{eq}^* : A_s^* (M;V)^G \hookrightarrow \mathrm{Tot} A_{sg}^{*,*}(M;V)^G$ and $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ also induce isomorphisms in cohomology. \end{proposition} \begin{proof} This follows from the preceding Lemma and Corollaries \ref{augexthenjeqindisosg} and \ref{augextheninclindisosg}. \end{proof} As observed before (cf. Remark \ref{remonlyginvcovsmooth}), one may restrict oneself to the directed system of $G$-invariant open coverings only to achieve the same result.
Thus we observe: \begin{corollary} If $G=M$ is a Lie group which acts on itself by left translation and the inclusion $\colim A_s^{p,*} (M,\mathfrak{U}_U;V)\hookrightarrow A_{AS}^* (M;C(M^{p+1},V))$ (where $U$ runs over all open identity neighbourhoods in $G$) induces an isomorphism in cohomology, then the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ induces an isomorphism in cohomology as well. \end{corollary} \begin{proof} It has been shown in \cite{vE62b} that the cohomology of the colimit cochain complex $\colim A^* ( \mathfrak{U} ;C(M^{p+1},V))$ is the Alexander-Spanier cohomology of $M$. \end{proof} \begin{lemma} \label{xsmoothlycontrthenacpsistriv} If the manifold $M$ is contractible, then the cohomology of the complex $\colim A_s^{p,*} (M,\mathfrak{U};V)$ is trivial. \end{lemma} \begin{proof} The reasoning is analogous to that for the Alexander-Spanier presheaf. The proof of \cite[Theorem 2.5.2]{F10} carries over almost verbatim. \end{proof} \begin{corollary} For smoothly contractible Lie groups $G$ the continuous group cohomology is isomorphic to the cohomology of homogeneous group cochains with continuous germ at the diagonal. \end{corollary} \begin{proof} If the manifold $M$ is contractible, then the Alexander-Spanier cohomology $H_{AS} (M;C(M^{p+1},V))$ is trivial and the cohomology of the cochain complex $\colim A_s^{p,*} (M,\mathfrak{U};V)$ is trivial by Lemma \ref{xsmoothlycontrthenacpsistriv}. By Proposition \ref{inclcolimacpsindisothenjeqisosmooth} the inclusion $A_s^* (M;V)^G \hookrightarrow A_{sg}^* (M;V)^G$ then induces an isomorphism in cohomology. \end{proof} \bibliographystyle{amsalpha}
https://arxiv.org/abs/1110.0994
A Spectral Sequence Connecting Continuous With Locally Continuous Group Cohomology
We present a spectral sequence connecting the continuous and 'locally continuous' group cohomologies for topological groups. As an application it is shown that for contractible topological groups these cohomology concepts coincide. Similar results for k-groups and smooth cochains on Lie groups are also obtained.
https://arxiv.org/abs/1305.3587
Perimeter-minimizing Tilings by Convex and Non-convex Pentagons
We study the presumably unnecessary convexity hypothesis in the theorem of Chung et al. [CFS] on perimeter-minimizing planar tilings by convex pentagons. We prove that the theorem holds without the convexity hypothesis in certain special cases, and we offer direction for future research.
\section{Introduction} \label{intro} \subsection{Tilings of the plane by pentagons} Chung et al. \cite[Thm. 3.5]{pen11} proved that certain ``Cairo'' and ``Prismatic'' pentagons provide least-perimeter tilings by (mixtures of) convex pentagons, and they conjecture that the restriction to convex pentagons is unnecessary. In this paper we consider tilings by mixtures of convex and non-convex pentagons, and we prove that under certain conditions the convexity hypothesis in the results of Chung et al. can in fact be removed. The conjecture remains open. \begin{figure} \centering \includegraphics[scale=.4]{CairoPrismatic.png} \caption{Tilings by Cairo and Prismatic pentagons provide least-perimeter tilings by unit-area convex pentagons. Can the convexity hypothesis be removed?} \label{fig:cairoprismatic} \end{figure} Throughout the paper, we assume all tilings are unit area and edge-to-edge. We focus on tilings of flat tori, although Section 5 begins the extension of our results to the plane by limit arguments. Our main results are Theorems \ref{oneeffn&mTorus} and \ref{dihedraltype2n&mTorus}. The first shows that tilings by an efficient pentagon and non-convex quadrilaterals cannot have less perimeter than Cairo or Prismatic tilings. The second shows that dihedral tilings by efficient pentagons and Type 2 non-convex pentagons cannot have less perimeter than Cairo or Prismatic tilings. The general strategy employed in our main results begins with the assumption that a mixed tiling with convex pentagons and non-convex pentagons (or in Section 3, quadrilaterals) exists that has less perimeter than a Cairo or Prismatic tiling. The first step in the proof is to show that such tilings must have at least one degree-four efficient vertex (Props. \ref{gendihedralSomeDeg4Verts} and \ref{type2dihedralSomeDeg4Verts}). We then obtain a large lower bound on the ratio of efficient to non-convex pentagons (or quadrilaterals in Section 3; Props. 
\ref{generalQuadRatio} and \ref{type2-ratio}). This is primarily done by showing that in order to tile the plane, the convex pentagons must have perimeter substantially higher than that of the regular pentagon, though a sufficiently high lower bound on the perimeter of the non-convex pentagons would also suffice. Second, we show (Thms. \ref{oneeffn&mTorus} and \ref{dihedraltype2n&mTorus}) that the ratio of convex pentagons to non-convex pentagons (or quadrilaterals) has an upper bound, obtained by bounding the number of efficient vertices and counting the number of angles appearing at such vertices. We derive a contradiction by showing that the upper bound is less than the lower bound, and thus conclude that a tiling cannot have perimeter less than that of a Cairo/Prismatic tiling. In addition to our main results, we categorize non-convex pentagons in Proposition \ref{3typesnonconvex}, and we bound the angles and edge-lengths of efficient pentagons (Props. \ref{minmaxangle} and \ref{min-edgelength-pent}). We further restrict the behavior of efficient pentagons in perimeter-minimizing tilings in Proposition \ref{four-angles-don't-tile}, which shows that some efficient pentagons in the tiling must have five angles that tile with the efficient pentagons' angles. Definition \ref{def:perimeterRatio} generalizes the concept of the perimeter of a tiling to the planar case by defining the perimeter ratio as the limit supremum of the perimeters of the tiling restricted to increasingly large disks. Lemma \ref{truncation-lemma} shows that the limit infimum of the perimeter to area ratio of tiles completely contained within disks of radius $R$ centered at the origin does not exceed the perimeter ratio of a tiling. Propositions \ref{plane-ext-ratio1} and \ref{plane-ext-stronger-ratio1} generalize our results on the lower bound of the ratio of convex to non-convex pentagons in the general case, and in the special case when all the convex pentagons in the tiling are efficient.
Proposition \ref{plane-ext-angles-convex-between} shows that planar tilings by non-convex pentagons and pentagons with angles strictly between $\pi/2$ and $2\pi/3$ have a perimeter ratio higher than that of a Cairo/Prismatic tiling, generalizing Proposition \ref{angles-convex-between}. Finally, Proposition \ref{equilateralPentagonMin} finds the perimeter-minimizing unit-area equilateral convex pentagon that tiles the plane monohedrally. \subsection{Organization} Section 2 explores tilings of large flat tori by efficient and non-convex pentagons. It provides results restricting the angles and edge-lengths of efficient pentagons and describes particular efficient pentagons of interest. Additionally it limits the ways in which efficient and non-convex pentagons interact in mixed tilings with perimeter less than Cairo/Prismatic, if such tilings exist, and considers efficient and non-convex pentagons outside the context of a tiling. The propositions in Section 2 are used to prove the main results in Sections 3 and 4. Section 3 shows that a tiling of a large, flat torus by an efficient pentagon and any number of non-convex quadrilaterals cannot have perimeter less than a Cairo/Prismatic tiling. Section 4 shows that a dihedral tiling of a large flat torus by an efficient pentagon and so-called Type 2 non-convex pentagons cannot have perimeter less than a Cairo/Prismatic tiling. Section 5 generalizes results on large, flat tori to similar results on the plane by limit arguments. Section 6 considers special cases of the main conjecture, such as dihedral tilings by efficient non-convex pentagons, where it may be easier to show that Cairo and Prismatic tilings are perimeter minimizing. The final appendix section provides the perimeter-minimizing equilateral pentagon that tiles the plane monohedrally. 
\subsection{Acknowledgements} This paper is the work of the 2012 ``SMALL'' Geometry Group, an undergraduate research group at Williams College, continued in Martin's thesis \cite{ZaneThesis}. Thanks to our advisor Frank Morgan, for his patience, guidance, and invaluable input. Thanks to Professor William Lenhart for his excellent comments and suggestions. Thanks to Andrew Kelly for contributions to the summer work that laid the groundwork for this paper. Thanks to the National Science Foundation for grants to the Williams College ``SMALL'' Research Experience for Undergraduates, and Williams College for additional funding. Additional thanks to the Mathematical Association of America (MAA), MIT, the University of Chicago, the University of Texas at Austin, Williams College, and the NSF (through a grant to Professor Morgan) for funding in support of trips to speak at MathFest 2012, the MAA Northeastern Sectional Meeting at Bridgewater State, the Joint Meetings 2013 in San Diego, and the Texas Undergraduate Geometry and Topology Conference at UT Austin (texTAG). \setcounter{equation}{0} \section{Pentagonal Tilings} \label{penta} In 2001, Thomas Hales \cite[Thm. 1-A]{hales} proved that the regular hexagon provides a most efficient unit-area tiling of the plane. Of course, for triangles and quadrilaterals the perimeter-minimizing tiles are the equilateral triangle and the square. Unfortunately, the regular pentagon does not tile. There are, however, two nice pentagons which do tile. \begin{definitions} \label{def:CairoPrismaticEfficient} \emph{While the terms are sometimes used in a broader sense, we define a pentagon as }Cairo \emph{or} Prismatic \emph{if it has three angles of $2\pi/3$, two right angles, nonadjacent or adjacent, respectively, and is circumscribed about a circle, as in Figure \ref{fig:cairoprismatic}.
For unit area, both have perimeter $2\sqrt{2+\sqrt{3}} \approx 3.86$.} \emph{In this paper, we assume that all tilings by polygons are} edge-to-edge; \emph{ that is, if two tiles are adjacent they meet only along entire edges or at vertices.} \emph{We say that a unit-area pentagon is} efficient \emph{if it has perimeter less than or equal to that of a Cairo pentagon, and that a tiling is} efficient \emph{if it has perimeter per tile less than half the perimeter of a Cairo pentagon. Note that a non-convex pentagon can never be efficient because it has more perimeter than a square, the optimal quadrilateral.} \emph{An} efficient vertex \emph{is a vertex in a tiling which is surrounded exclusively by efficient pentagons.} \emph{Finally, given a sequence of angles $a_i$, we say that an angle $a_j$} tiles \emph{if for some positive integers $m_i$ including $m_j$, $\sum m_i a_i = 2 \pi$.} \end{definitions} \begin{remarks} \emph{Note that an efficient pentagon cannot tile monohedrally. If it did, it would violate Theorem \ref{chungthm}. But an efficient pentagon could have five angles that tile.} \emph{An efficient pentagon cannot have more than two edges greater than $\sqrt{2}$ because by definition its perimeter is at most a Cairo pentagon's, about 3.86, which is less than $3\sqrt{2}$.} \emph{While} isoperimetric \emph{tilings by pentagons have been considered only recently \cite{pen11}, there has been extensive research on pentagonal tilings in general. There are 14 known types of convex pentagons which tile the plane monohedrally, but no proof that these types form a complete list, despite notable recent progress by Bagina \cite{bagina11} and Sugimoto and Ogawa \cite{sugi2006}, \cite{sugi1}, \cite{sugi2}, \cite{sugi3}. There is a complete list for equilateral convex pentagons (\cite{hirsch&hunt}, see also \cite{bagina04}) and apparently for all equilateral pentagons \cite{hirhunt}.
These sources provide partial results regarding the properties of convex pentagons which tile, and focus their attention on showing that the known list of 14 types of pentagonal tiles is complete. Hirschhorn and Hunt consider non-convex equilateral pentagons which tile the plane (\cite{hirhunt}), but more general studies of types of non-convex pentagons which tile are absent from the literature, as are any in-depth considerations of tilings by mixtures of convex and non-convex pentagons.} \emph{Chung et al. \cite[Thm. 3.5]{pen11} proved that Cairo and Prismatic pentagons provide optimal ways to tile the plane using (mixtures of) \emph{convex} pentagons, but were unable to remove the convexity assumption. We conjecture that their results hold without the convexity assumption, and rule out certain tilings with mixtures of convex and non-convex pentagons, though the main conjecture remains open. We begin with the main result from Chung et al.} \end{remarks} \begin{theorem} \cite[Thm 3.5]{pen11} \label{chungthm} Perimeter-minimizing planar tilings by unit-area convex polygons with at most five sides are given by Cairo and Prismatic tiles. \end{theorem} At various points throughout the paper we use the following planar case of a theorem of Lindel\"{o}f (\cite{lind}; see Florian \cite[pp. 174-180]{florian}, and Chung et al. \cite[Prop 3.1]{pen11}, who proved this case before learning of Lindel\"{o}f's work): \begin{theorem}[Lindel\"{o}f's Theorem \cite{lind}] \label{lindelof-lemma} For $n$ given angles, the $n$-gon circumscribed about a circle is uniquely perimeter minimizing for its area. \end{theorem} Chung et al. give an explicit formula for finding the perimeter of an $n$-gon circumscribed about a circle and add an immediate corollary to the result: \begin{lemma}\cite[Prop.
3.1]{pen11} \label{cot-perimeter-lemma} Scaled to unit area, an $n$-gon with angles $0 < a_i \leq \pi$ has perimeter greater than or equal to \begin{equation} 2\sqrt{\sum_{i=1}^n \cot(a_i/2)} , \end{equation} with equality holding if and only if the $n$-gon is circumscribed about a circle. For convex $n$-gons, since cotangent is strictly convex up to $\pi/2$, the more nearly equal the angles, the smaller the perimeter. \end{lemma} The following proposition follows directly from the above, and will be useful later on in proving our main results. \begin{proposition} \label{efficient-pentagon-one-small-angle} If two angles in a pentagon average less than $\pi/2$ then the pentagon cannot be efficient. If two angles average exactly $\pi/2$, the pentagon is efficient only if it is Cairo or Prismatic. \end{proposition} \begin{proof} Suppose that two angles average at most $\pi/2$. By Lemma \ref{cot-perimeter-lemma}, the perimeter is uniquely minimized when exactly two angles equal $\pi/2$, the other angles are equal, and the pentagon is circumscribed about a circle, i.e. for Cairo and Prismatic. Therefore the pentagon is not efficient if the average of two angles is less than $\pi/2$, and if the average is equal to $\pi/2$ the pentagon is Cairo or Prismatic. \end{proof} We begin our analysis of non-convex pentagons, first by categorizing them into two types. \begin{proposition} \label{3typesnonconvex} There are two types of non-convex pentagons, as in Figure \ref{fig:nonconvexfig}: \begin{enumerate} \item a non-convex pentagon with one interior angle larger than $\pi$, \item a non-convex pentagon with two interior angles (these can be adjacent or non-adjacent) larger than $\pi$ whose average is less than $3\pi/2$. \end{enumerate} A unit-area Type 1 pentagon has perimeter greater than a square's (4). A unit-area Type 2 pentagon has perimeter greater than an equilateral triangle's (about 4.559).
\end{proposition} \begin{proof} If a pentagon has more than two interior angles larger than $\pi$, then the sum of the interior angles will be greater than $3\pi$, which is a contradiction since the sum of all the interior angles of a pentagon is always $3\pi$. Therefore, either it has one angle larger than $\pi$, as in Case 1, or it has two. If it has two angles larger than $\pi$, and the average of the two angles is greater than $3\pi/2$, then they will sum to more than $3\pi$, which is a contradiction. Hence, the average of the two angles must be less than $3\pi/2$. These large angles are either adjacent or not adjacent, so we have Case 2. To prove the final statement, just note that taking the convex hull (a quadrilateral for Type 1, a triangle for Type 2) and then scaling down to unit area reduces perimeter, and that the square and equilateral triangle minimize perimeter for given area and number of sides. \end{proof} \begin{figure} \centering \includegraphics[scale=0.4]{nonconvex1.png} \includegraphics[scale=0.4]{nonconvex2.png} \includegraphics[scale=0.4]{nonconvex3.png} \caption{A non-convex pentagon can have one or two interior angles greater than $\pi$.} \label{fig:nonconvexfig} \end{figure} \begin{remark} \emph{By Proposition \ref{3typesnonconvex} a unit-area Type 1 non-convex pentagon must have at least one edge with length at least 4/5, since its perimeter exceeds 4 and it has five edges.} \end{remark} We now bound the edge-lengths of unit-area non-convex quadrilaterals and then extend this bound to Type 2 non-convex pentagons. \begin{proposition} \label{quad-min-biggest-edge} The quadrilateral formed by taking a right isosceles triangle and adding a vertex at the midpoint of the hypotenuse minimizes longest edge-length for given area among quadrilaterals with an angle measuring $\pi$. \end{proposition} \begin{proof} Given two sides of length at most $a$, a right isosceles triangle with legs of length $a$ maximizes area. The result follows.
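As a quick numeric illustration (a sketch, not part of the original argument), at unit area the extremal quadrilateral just described has legs, and hence longest edge, of length exactly $\sqrt{2}$:

```python
import math

# Right isosceles triangle with legs of length a, plus an extra vertex at
# the midpoint of the hypotenuse: the four edges are a, a, and two of
# length a / sqrt(2); the area is a^2 / 2.
a = math.sqrt(2.0)                 # unit area forces a^2 / 2 = 1
area = a * a / 2.0
edges = [a, a, a / math.sqrt(2.0), a / math.sqrt(2.0)]
longest_edge = max(edges)
print(area, longest_edge)          # 1.0 and sqrt(2) ≈ 1.4142
```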
\end{proof} \begin{lemma} \label{quad-quadrilateral-max-edge} For a unit-area non-convex quadrilateral, some edge must be greater than or equal to $\sqrt{2}$. \end{lemma} \begin{proof} Assume to the contrary that there exists a unit-area non-convex quadrilateral with four edges less than $\sqrt{2}$. Replacing the non-convex angle with a straight angle, i.e. moving the reflex vertex onto the segment joining its neighbors, and scaling down to unit area would contradict Proposition \ref{quad-min-biggest-edge}. \end{proof} \begin{lemma} \label{ncvx-pent-max-edge} For a unit-area Type 2 non-convex pentagon, some edge must be greater than or equal to $\sqrt{2}$. \end{lemma} \begin{proof} Assume to the contrary that there exists a unit-area Type 2 non-convex pentagon with five edges less than $\sqrt{2}$. Replacing one of the non-convex angles with a straight angle in the same way and scaling down to unit area would contradict Lemma \ref{quad-quadrilateral-max-edge}. \end{proof} Chung et al. \cite[Prop. 2.11]{tile11} prove that in a pentagonal tiling of a flat torus with perimeter per tile less than that of a Prismatic tiling, the ratio of convex to non-convex pentagons is greater than 2.6. We can immediately infer from their proof that the ratio of \emph{efficient} pentagons to non-convex pentagons is greater than 2.6. We further strengthen this result. \begin{proposition} \label{ratio1} Let $T$ be a tiling of a flat torus by unit-area pentagons, with perimeter per tile less than or equal to half the perimeter of a Prismatic pentagon. Then the fractions $C_1$, $N_1$, and $N_2$ of efficient, Type 1 non-convex, and Type 2 non-convex pentagons in the tiling satisfy $C_1 > 2.6N_1 + 13.4N_2$. \end{proposition} \begin{proof} We follow a proof similar to that given by Chung et al. \cite[Prop. 2.11]{tile11}.
The perimeters of a unit-area regular pentagon, a Cairo/Prismatic pentagon, the unit square, and the unit-area equilateral triangle are $P_0 = 2\sqrt{5} \sqrt[4]{5 - 2\sqrt{5}}$, $P_1 = 2\sqrt{2 + \sqrt{3}}$, $P_2 = 4$ and $P_3 = 3 \sqrt{4/\sqrt{3}}$. Let $C_2 = 1 - C_1 - N_1 - N_2$ denote the fraction of inefficient convex pentagons; by definition each such tile has perimeter at least $P_1$, while efficient pentagons have perimeter at least $P_0$ and, by Proposition \ref{3typesnonconvex}, Type 1 and Type 2 non-convex pentagons have perimeter at least $P_2$ and $P_3$, respectively. Since each edge lies on the boundary of two tiles, twice the perimeter per tile equals the average perimeter of a tile, which by hypothesis is at most $P_1$. Hence $$ C_1 P_0 + C_2 P_1 + N_1 P_2 + N_2 P_3 \leq P_1 = (C_1 + C_2 + N_1 + N_2)P_1. $$ Therefore, $$ C_1 \geq N_1 \frac{P_2 - P_1}{P_1 - P_0} + N_2 \frac{P_3 - P_1}{P_1 - P_0} > 2.6 N_1 + 13.4 N_2. $$ \end{proof} Under certain conditions, the convexity hypothesis is easy to rule out, as in the following proposition. \begin{proposition} \label{angles-convex-between} A unit-area tiling of a flat torus by non-convex pentagons and pentagons with angles strictly between $\pi/2$ and $2\pi/3$ has more perimeter per tile than half the perimeter of a Prismatic pentagon. \end{proposition} \begin{proof} Assume, on the contrary, that there exists a unit-area tiling of a flat torus by non-convex pentagons and convex pentagons with angles strictly between $\pi/2$ and $2\pi/3$ which has less perimeter per tile than half the Prismatic pentagon's. By Proposition \ref{ratio1}, the ratio of convex pentagons to non-convex pentagons must be greater than 2.6. Since all the angles of the convex pentagons are strictly between $\pi/2$ and $2\pi/3$, there is at least one non-convex pentagon at each vertex. By definition, a non-convex pentagon has at least one angle greater than $\pi$. Thus at least $1/5$ of the vertices must contain an angle greater than $\pi$. At such vertices there is at most one convex pentagon. At the remaining vertices, there are at most three convex pentagons, because their angles are greater than $\pi/2$. Thus the ratio of convex pentagons to non-convex pentagons is at most $3(4/5)+1(1/5) = 2.6$. This is a contradiction of Proposition \ref{ratio1}, which says the ratio of convex to non-convex pentagons must be strictly greater than 2.6.
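The constants 2.6 and 13.4 appearing in Proposition \ref{ratio1} can be recomputed numerically; the following is a hedged sanity check of the stated perimeter values, not part of the original argument:

```python
import math

# Perimeters of the unit-area comparison shapes from the proof of
# Proposition ratio1: regular pentagon, Cairo/Prismatic pentagon,
# square, and equilateral triangle.
P0 = 2 * math.sqrt(5) * (5 - 2 * math.sqrt(5)) ** 0.25  # regular pentagon
P1 = 2 * math.sqrt(2 + math.sqrt(3))                    # Cairo/Prismatic
P2 = 4.0                                                # unit square
P3 = 3 * math.sqrt(4 / math.sqrt(3))                    # equilateral triangle

coeff_N1 = (P2 - P1) / (P1 - P0)   # coefficient of N_1, slightly above 2.6
coeff_N2 = (P3 - P1) / (P1 - P0)   # coefficient of N_2, slightly above 13.4
print(round(coeff_N1, 3), round(coeff_N2, 3))
```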
\end{proof} \begin{remark} \emph{The reason we need the angles to be strictly between $\pi/2$ and $2\pi/3$ is that our argument depends on having no vertices completely covered by convex pentagons. We deal with other cases separately. Some special cases are easy to eliminate. For example, if a pentagon has two $3\pi/4$ angles and three $\pi/2$ angles, then the perimeter-minimizing pentagon has perimeter equal to about 3.91, which is more than the Prismatic pentagon's.} \end{remark} The next few propositions better describe efficient pentagons by bounding their angles and edge-lengths. \begin{proposition} \label{minmaxangle} The interior angles $a_i$ of an efficient pentagon satisfy $80.91^\circ < a_i < 142.29^\circ$. \end{proposition} \begin{proof} By Lemma \ref{cot-perimeter-lemma}, it is enough to check the proposition when the smallest angle is $80.92^\circ$ and the others are equal and, similarly, when the largest angle is $142.29^\circ$ and the others are equal. At these values, the perimeter is about 3.8638 (to four decimal places), greater than the Prismatic perimeter of about 3.8637. \begin{figure} \centering \includegraphics[scale=0.6]{PentSmallestAngleGraph.png} \caption{A pentagon with angles far from $108^\circ$ has lots of perimeter.} \label{fig:pentsmallangle} \end{figure} \end{proof} \begin{corollary} \label{convex-vertices-degnot-2} An efficient vertex in a tiling must have degree three or four. \end{corollary} \begin{proof} The angles of an efficient pentagon are less than $142.29^\circ < 180^\circ$, so at least three tiles meet at an efficient vertex; they are greater than $80.91^\circ$, so at most four can, since five would sum to more than $360^\circ$. \end{proof} Figure \ref{fig:pentsmallangle} shows the excess perimeter over the Prismatic perimeter for pentagons circumscribed about a circle with one angle $a$ and the other angles equal. We now provide an alternate proof of a proposition of Chung et al.: \begin{proposition} \label{min-edgelength-pent} \cite[Lem. 3.6]{tile11} The perimeter-minimizing unit-area pentagon with a given edge-length $e$ is the one inscribed in a circle with one edge of length $e$ and the other four equal.
\end{proposition} \begin{figure} \includegraphics[scale=0.6]{MinGivenEdge.png} \caption{A method for scaling pentagon ABCDE down to unit area, keeping edge DE fixed: replacing line segment $AE$ with $A_3E$ decreases perimeter.} \label{fig:MinGivenEdge} \end{figure} \begin{proof} Let $ABCDE$ be a perimeter-minimizing pentagon with $DE$ of given length $e$. It is well known that for given edge-lengths, the $n$-gon inscribed in a circle uniquely maximizes area. If $ABCDE$ were not inscribed in a circle, we could increase its area by inscribing it, keeping the edges, and thus the perimeter, fixed. We then scale back down to unit area keeping $DE$ fixed. One method to perform this is shown in Figure \ref{fig:MinGivenEdge}: we simply replace line segment $EA$ with line segment $EA_i$ until we arrive at unit area. This decreases the perimeter, contradicting that $ABCDE$ is perimeter-minimizing. Therefore $ABCDE$ must be inscribed in a circle. Now we show that the other four edges must be equal. Suppose two such adjacent edges, say $AB$ and $BC$, have different lengths. We can replace triangle $ABC$ with an isosceles triangle $AB'C$ such that $|AB'| + |B'C| = |AB| + |BC|$, but $AB'C$ has greater area than $ABC$. We do not need to worry about $AB'C$ overlapping or bumping into another edge or vertex. Scaling down to unit area but keeping $DE$ fixed (using the method in Figure \ref{fig:MinGivenEdge}) we have decreased the perimeter, a contradiction as $ABCDE$ is perimeter minimizing. Therefore the other four edge-lengths must be equal, and the proposition follows. \end{proof} \begin{lemma} \label{quad-pentagon-min-edge} A unit-area efficient pentagon cannot have an edge greater than 1.081 or less than .4073. \end{lemma} \begin{proof} By Proposition \ref{min-edgelength-pent}, for a given edge-length $e$, the pentagon $X$ inscribed in a circle with four equal sides and one side $e$ minimizes perimeter.
Let $r$ be the radius of the circle and $\alpha$ be the angle between rays from the center of the circle to adjacent vertices of the four equal sides. Then the perimeter $P$ and area $A$ of $X$ are given by: $$ P = 8r \sin(\alpha/2) + 2r \sin(2\alpha), $$ $$ A = 2r^2 \sin(\alpha) - (1/2)r^2 \sin(4\alpha). $$ By assumption the area is one. Setting $P$ equal to the perimeter of a Cairo/Prismatic pentagon, approximately $3.86$, and solving for $r^2$ we get $$ r^2 = \frac{3.86^2}{(8 \sin(\alpha/2) + 2 \sin(2\alpha))^2} $$ and therefore, substituting into the area formula, $$\alpha \approx 62.8942^\circ,\ 81.0705^\circ. $$ From this we conclude that the pentagon is efficient only when $$ 62.8941^\circ < \alpha < 81.0706^\circ. $$ Then $e = 2r\sin(2\alpha)$, which implies that $.4073 < e < 1.081$. \end{proof} Recall that an angle $a_j$ tiles if given a sequence of angles $a_i$, there exist some positive integers $m_i$ including $m_j$ such that $\sum m_i a_i = 2 \pi$. Then for tilings by efficient pentagons and Type 2 non-convex pentagons we have the following: \begin{proposition} \label{four-angles-don't-tile} Consider a unit-area tiling of a flat torus by efficient pentagons and Type 2 non-convex pentagons. Assume that each efficient pentagon has at most four angles which tile with efficient pentagons. Then the tiling has more perimeter per tile than half the perimeter of a Prismatic pentagon. \end{proposition} \begin{proof} Because each efficient pentagon has at most four angles which tile, it cannot be surrounded entirely by efficient pentagons. Therefore the ratio of efficient pentagons to non-convex pentagons is at most the maximum number of efficient pentagons which can surround a non-convex pentagon. At each of the three angles of the non-convex pentagon which are less than $\pi$, there are at most four efficient pentagons. If there were five or more, then the angles of the efficient pentagons would be too small, in violation of Proposition \ref{minmaxangle}.
By the same logic, there are at most two efficient pentagons at each of the two angles of the non-convex pentagon which are greater than $\pi$. Since the tiling is edge-to-edge, five of the efficient pentagons surrounding the non-convex pentagon appear at two vertices, and we must avoid double counting these. So the total number of efficient pentagons surrounding the non-convex pentagon is $4(3) + 2(2) - 5 = 11$. Then the ratio of efficient pentagons to non-convex pentagons is at most eleven to one. Therefore by Proposition \ref{ratio1} the tiling has more perimeter per tile than half that of a Prismatic pentagon. \end{proof} We have a similar result for certain Type 1 non-convex pentagons, though the proposition does not hold for all Type 1 non-convex pentagons. \begin{proposition} Consider a tiling of a flat torus by efficient pentagons and Type 1 non-convex pentagons with perimeter greater than 4.537. Assume that each efficient pentagon has at most four angles which tile with efficient pentagons. Then the tiling has more perimeter per tile than half the perimeter of a Prismatic pentagon. \end{proposition} \begin{proof} Because the efficient pentagon has at most four angles which tile, it cannot be surrounded entirely by efficient pentagons. Therefore the ratio of efficient pentagons to non-convex pentagons is at most the maximum number of efficient pentagons which can surround a non-convex pentagon. At the four angles of the non-convex pentagon which are less than $\pi$, there are at most four efficient pentagons; otherwise the efficient pentagons would contradict Proposition \ref{minmaxangle}. By the same logic, there are at most two efficient pentagons at the angle which is greater than $\pi$. Since the tiling is edge-to-edge, five of the efficient pentagons surrounding the non-convex pentagon will appear at two vertices, and we need to avoid double counting these.
Then the ratio of efficient pentagons to non-convex pentagons is at most $4(4) + 2(1) - 5 = 13.$ Assume such a tiling had perimeter per tile less than or equal to half the perimeter of a Cairo/Prismatic pentagon. Let $P_0$ and $P_1$ denote the perimeters of a regular pentagon and of a Cairo/Prismatic pentagon, and let $P_2 = 4.537$. If there are $m$ efficient pentagons and $n$ non-convex pentagons then by hypothesis $$ mP_0 + nP_2 < (m+n)P_1, $$ which implies $$ \frac{m}{n} > \frac{P_2 - P_1}{P_1 - P_0} > 13, $$ which is a contradiction. \end{proof} The following will be useful in proving our main results, as it limits the perimeter of a certain class of efficient pentagons. \begin{figure} \includegraphics[scale=0.4]{5TileMin.png} \caption{The perimeter-minimizing pentagon, with five angles that tile, at least one angle of which can tile a degree-four vertex.} \label{fig:FiveTileMinimizer} \end{figure} \begin{proposition} \label{pminimizing5tileDeg4Vert} The perimeter-minimizing pentagon $P$ with five angles that tile, at least one angle of which can tile a degree-four vertex, is the one circumscribed about a circle with one $90^\circ$ angle, three $108^\circ$ angles, and one $126^\circ$ angle, as in Figure \ref{fig:FiveTileMinimizer}. The perimeter is approximately $3.8414$. \end{proposition} \begin{proof} First note that such a perimeter-minimizing pentagon must exist. Consider a sequence of such pentagons $P_n$ with perimeter converging to the infimum. We may assume that the pentagons are convex (see Definitions \ref{def:CairoPrismaticEfficient}). By standard compactness, the desired limit exists. Since by hypothesis the pentagon can tile a degree-four vertex, we have the following cases: \newline \newline \noindent \textbf{Case 1: Exactly one angle tiles a degree-four vertex.} \newline Note that this implies there must be one $90^\circ$ angle and that the other four angles must tile a degree-three vertex.
If the other four angles tiled a vertex of degree greater than or equal to five, the pentagon would not be efficient by Corollary \ref{convex-vertices-degnot-2}. We proceed to cover all the cases with regard to which of the four non-right angles are equal to one another. First note that these four angles cannot all be equal, as each would equal $112.5^\circ$, which does not tile with itself or $90^\circ$. Next suppose three of the other angles, $x$, are equal and one, $y$, is different. Then $$ 3x + y = 450^\circ, $$ as the angles of a pentagon sum to $540^\circ$. Note that as $x$ and $y$ are not equal to each other, they cannot both equal $120^\circ$, so they must tile together in some way. So either $2x + y = 360^\circ$ or $x + 2y = 360^\circ$. In the first case $y = 180^\circ$, which violates Proposition \ref{minmaxangle}. The second has $x = 108^\circ$ and $y = 126^\circ$, and is the pentagon $P$. Next suppose there are two pairs of equal angles, denoted $x$ and $y$. Then $2x + 2y = 450^\circ$, and as before either $2x + y = 360^\circ$ or $x + 2y = 360^\circ$. In either case there are three $90^\circ$ angles and two $135^\circ$ angles. This pentagon has perimeter approximately $3.9132$, greater than the perimeter of $P$. Next suppose two angles are equal and two are unequal. Let $x$ denote the equal angles and $y$ and $z$ the unequal ones. Then $2x + y + z = 450^\circ$. First note that if $x$, $y$, or $z$ tile with the $90^\circ$ angle, they will equal either $180^\circ$ (if there are two $90^\circ$ angles tiling with them) or $135^\circ$ (if there is one $90^\circ$ angle and two copies of the angle). The first case is not efficient by Proposition \ref{minmaxangle}. The second case does not have perimeter less than that of $P$ by Lemma \ref{cot-perimeter-lemma}. So $x$, $y$, and $z$ tile together. If $x + y + z = 360^\circ$, then $x = 90^\circ$, contradicting the assumption that exactly one angle tiles a degree-four vertex.
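The pentagon perimeters quoted so far can be reproduced from the perimeter of a unit-area pentagon circumscribed about a circle: each angle $\theta$ contributes tangent-lengths $r\cot(\theta/2)$, giving $P = 2\sqrt{\sum_i \cot(\theta_i/2)}$ for unit area (our reading of Lemma \ref{cot-perimeter-lemma}). A sketch, not part of the proof:

```python
from math import sqrt, tan, radians

def pentagon_perimeter(angles_deg):
    # Least perimeter of a unit-area pentagon with the given angles
    # (circumscribed about a circle): P = 2*sqrt(sum cot(theta/2)).
    return 2 * sqrt(sum(1 / tan(radians(t) / 2) for t in angles_deg))

three_right = pentagon_perimeter([90, 90, 90, 135, 135])    # ~3.9132
P_candidate = pentagon_perimeter([90, 108, 108, 108, 126])  # the pentagon P, ~3.8414
```

The first value matches the $3.9132$ quoted above, and the second matches the $3.8414$ claimed for $P$.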
If $x = 120^\circ$ and it tiles with $y$ or $z$, the optimal pentagon of this form is Cairo, which has perimeter greater than that of $P$. If $x$ does not tile with $y$ or $z$, then $y$ and $z$ tile together. Without loss of generality, assume $2y + z = 360^\circ$. Then since we also know $y + z = 210^\circ$, we conclude that $y$ must be $150^\circ$, which means the pentagon is not efficient by Proposition \ref{minmaxangle}. So the possible cases for how $x$ tiles are either $2x + y = 360^\circ$ or $x + 2y = 360^\circ$, with $x \not= 120^\circ$ (if $x$ tiles with $z$, just switch the labels on $y$ and $z$). First note that if $2x + y = 360^\circ$ and $2z + x = 360^\circ$, then $x = 180^\circ$, which contradicts Proposition \ref{minmaxangle}. Now if $2x + y = 360^\circ$, then subtracting this from $2x + y + z = 450^\circ$ gives $z = 90^\circ$, contradicting the assumption that exactly one angle tiles a degree-four vertex. If instead $x + 2y = 360^\circ$, either $2y + z = 360^\circ$ or $2z + y = 360^\circ$. Then we have two possible pentagons: one has $x = 108^\circ$, $y = 126^\circ$, and $z =108^\circ$, and so is just $P$, and the second has $x= 720^\circ/7$, $y= 900^\circ/7$, and $z = 810^\circ/7$, and perimeter greater than $3.849$. This completes the case when two angles are equal and two are unequal. The final subcase is when all four angles are unequal. Then $w + x + y + z = 450^\circ$. We have two options with regard to how angle $x$ tiles -- either $x + y + z = 360^\circ$ or $2x + y = 360^\circ$ (switching the labels on $x$ and $y$ allows us to assume this). If $x + y + z = 360^\circ$, then $w = 90^\circ$, contradicting the assumption that exactly one angle tiles a degree-four vertex. Assume that $$ 2x + y = 360^\circ. $$ Since $w + x + y + z = 450^\circ$, we can express $x = z+w - 90^\circ$ and $y = 540^\circ - 2(z+w)$.
Substituting in $s$ for $z + w$, we know that the perimeter-minimizing pentagon satisfying these requirements has angles measuring $90^\circ, s - 90^\circ, 540^\circ - 2s, s/2, s/2$. Note that here we assume $w = z$, and do not consider tiling by these two angles. While this violates the conditions on these pentagons, it provides a lower bound for pentagons of this general form. The only case when these pentagons have perimeter less than $3.8414$ is when $ 210^\circ < s < 216^\circ$, found by applying Lemma \ref{cot-perimeter-lemma} and using Mathematica to calculate the perimeter for all $s$ in the allowable range. So $210^\circ < z+w < 216^\circ$. If $2z + w = 360^\circ$ or $z + 2w = 360^\circ$ then either $w$ or $z$ is greater than $144^\circ$, so the pentagon will not be efficient by Proposition \ref{minmaxangle}. Also note that $2x + z = 360^\circ$ and $2z + y = 360^\circ$ are not allowable: combined with $2x + y = 360^\circ$ they would force $z = y$ or $z = x$, respectively. If $z + 90^\circ + x = 360^\circ$ or $z + y + 90^\circ = 360^\circ$, this implies that $x$ or $y$ equals $270^\circ - z$. The largest an angle can be in a pentagon with one $90^\circ$ angle and perimeter less than 3.8414 is $127^\circ$. This implies $x$ or $y$ is at least $143^\circ$, which implies that the pentagon is not efficient by Proposition \ref{minmaxangle}. So either $2y + z = 360^\circ$ or $2z + x = 360^\circ$. Then given that $x + y + z + w = 450^\circ$ and $2x + y = 360^\circ$, we have three equations and four variables. Putting everything in terms of $w$, we get two pentagons. One has angles $x = (450^\circ-w)/3$, $y = 60^\circ+2w/3$, and $z = 240^\circ-4w/3$. The other has angles $x = 2(90^\circ+w)/3$, $y = 240^\circ-4w/3$, and $z = 150^\circ-w/3$. Using Lemma \ref{cot-perimeter-lemma} and Mathematica, we can plot the minimum perimeter in terms of $w$. We observe that for no value of $w$ will the perimeter be less than $3.8414$.
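The Mathematica check just described can be replicated in any language; a sketch in Python, assuming the circumscribed-pentagon formula $P = 2\sqrt{\sum_i \cot(\theta_i/2)}$ for the least perimeter of a unit-area pentagon with given angles (cf. Lemma \ref{cot-perimeter-lemma}). The two families are those just derived, scanned over the window $210^\circ < z + w < 216^\circ$:

```python
from math import sqrt, tan, radians

def pentagon_perimeter(angles_deg):
    # Least perimeter of a unit-area pentagon with the given angles
    # (circumscribed about a circle): P = 2*sqrt(sum cot(theta/2)).
    return 2 * sqrt(sum(1 / tan(radians(t) / 2) for t in angles_deg))

def family_one(w):
    # x = (450 - w)/3, y = 60 + 2w/3, z = 240 - 4w/3; here z + w = 240 - w/3,
    # so the window 210 < z + w < 216 corresponds to 72 < w < 90.
    return [90, w, (450 - w) / 3, 60 + 2 * w / 3, 240 - 4 * w / 3]

def family_two(w):
    # x = 2(90 + w)/3, y = 240 - 4w/3, z = 150 - w/3; here z + w = 150 + 2w/3,
    # so the window corresponds to 90 < w < 99.
    return [90, w, 2 * (90 + w) / 3, 240 - 4 * w / 3, 150 - w / 3]

min_p = min(
    [pentagon_perimeter(family_one(w)) for w in [72 + k / 10 for k in range(1, 180)]] +
    [pentagon_perimeter(family_two(w)) for w in [90 + k / 10 for k in range(1, 90)]]
)
```

Over both families the perimeter stays above $3.8414$, in agreement with the plot described in the text.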
Then we have seen that no matter how the other four angles interact, pentagons with exactly one angle which can tile a degree-four vertex have perimeter greater than or equal to that of $P$. \newline \newline \noindent \textbf{Case 2: Exactly two angles tile a degree-four vertex.} \newline If they are of the form $x + y = 180^\circ$, by Lemma \ref{efficient-pentagon-one-small-angle} the perimeter will be at least that of a Cairo pentagon, which is greater than the perimeter of $P$. If they are of the form $x + 3y = 360^\circ$ and $x < y$, then the average of the two must be less than $90^\circ$, so the perimeter will be greater than that of a Cairo pentagon. If $y < x$ then for given $y$ the pentagon with least perimeter has angles $y$, $360^\circ - 3y$, and three angles of $(180^\circ + 2y)/3$. Then for values of $y$ from $80.92^\circ$ to $90^\circ$ -- the only allowable range for $y$ -- we can see graphically, using Mathematica, that the perimeter is greater than $3.84143$. These are the only possible cases; therefore the perimeter-minimizing pentagon with exactly two angles that tile a degree-four vertex has perimeter greater than or equal to $3.8414$. \newline \newline \noindent \textbf{Case 3: Exactly three angles tile a degree-four vertex.} \newline In this case the angles must tile the vertex as $2x + y + z = 360^\circ$. Thus $x + y + z$ is at most $279.08^\circ$ by Proposition \ref{minmaxangle}. So the other two angles, $a$ and $b$, sum to at least $260.92^\circ$. The perimeter is minimized when $a = b$, i.e. when there are two angles measuring at least $130.46^\circ$. So the perimeter is at least $3.88$ by Lemma \ref{cot-perimeter-lemma}, and the pentagon will not be efficient. \newline \newline \noindent \textbf{Case 4: Exactly four angles tile a degree-four vertex.} \newline Then the fifth angle must equal $180^\circ$, which violates Proposition \ref{minmaxangle}.
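Before closing the proof, the numerical claims in Cases 2 and 3 can be spot-checked (a sketch, again assuming the circumscribed-pentagon formula $P = 2\sqrt{\sum_i \cot(\theta_i/2)}$ of Lemma \ref{cot-perimeter-lemma}):

```python
from math import sqrt, tan, radians

def pentagon_perimeter(angles_deg):
    # Least perimeter of a unit-area pentagon with the given angles
    # (circumscribed about a circle): P = 2*sqrt(sum cot(theta/2)).
    return 2 * sqrt(sum(1 / tan(radians(t) / 2) for t in angles_deg))

# Case 2, form x + 3y = 360 with y < x: angles y, 360 - 3y, and three
# copies of (180 + 2y)/3, scanned over the allowable range of y.
case2 = min(
    pentagon_perimeter([y, 360 - 3 * y] + 3 * [(180 + 2 * y) / 3])
    for y in [80.92 + k / 100 for k in range(909)]
)

# Case 3: two angles of at least 130.46 degrees, remaining three equal.
case3 = pentagon_perimeter([130.46, 130.46] + 3 * [(540 - 2 * 130.46) / 3])
```

The Case 2 scan stays above $3.84143$, and the Case 3 configuration gives a perimeter above $3.88$, as claimed.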
\end{proof} \begin{proposition} \label{type1-cvx-best} The least-perimeter unit-area pentagon with angles not strictly between $\pi/2$ and $2\pi/3$ is circumscribed about a circle with one $2\pi/3$ angle and four $7\pi/12$ angles. It has perimeter approximately $3.819$. \end{proposition} \begin{proof} This follows directly from Lemma \ref{cot-perimeter-lemma}: the more nearly equal the angles, the smaller the perimeter. Excluding pentagons with angles strictly between $\pi/2$ and $2\pi/3$, we are left with the following two pentagons as the optimal choices: a pentagon circumscribed about a circle with one $2\pi/3$ angle and four $7\pi/12$ angles, and a pentagon circumscribed about a circle with one $\pi/2$ angle and four $5\pi/8$ angles. Using Lemma \ref{cot-perimeter-lemma} to calculate the perimeters yields the result. \end{proof} \section{Tilings of Pentagons and Quadrilaterals} We now turn our attention to tilings by pentagons and quadrilaterals. In particular we consider tilings with one efficient pentagon and any number of non-convex quadrilaterals, though some of our results hold for tilings by efficient pentagons and non-convex quadrilaterals. Tilings with efficient pentagons, non-convex quadrilaterals, and non-efficient convex pentagons or convex quadrilaterals remain relatively unstudied, and we do not know whether our results might generalize to that case. Recall from Proposition \ref{type1-cvx-best} we have the following: \begin{repproposition}{type1-cvx-best} The least-perimeter unit-area pentagon with angles not strictly between $\pi/2$ and $2\pi/3$ is circumscribed about a circle with one $2\pi/3$ angle and four $7\pi/12$ angles. It has perimeter approximately $3.819$. \end{repproposition} \begin{proposition} \label{perimeterMinimizingDegreeFourPentagon} The perimeter-minimizing unit-area pentagon which can tile a degree-four efficient vertex has perimeter at least 3.8328.
\end{proposition} \begin{proof} In order to tile a degree-four vertex, the perimeter-minimizing pentagon must have at least one angle measuring at most $90^\circ$. By Lemma \ref{cot-perimeter-lemma}, the pentagon circumscribed about a circle with one angle measuring $90^\circ$ and four angles measuring $112.5^\circ$ is perimeter-minimizing, with perimeter greater than 3.8328. \end{proof} Recall from Section 2 we have Proposition \ref{pminimizing5tileDeg4Vert}: \begin{repproposition}{pminimizing5tileDeg4Vert} The perimeter-minimizing pentagon with five angles that tile and at least one angle which can tile a degree-four vertex is the one circumscribed about a circle with one $90^\circ$ angle, three $108^\circ$ angles, and one $126^\circ$ angle. This has perimeter approximately $3.8414$. \end{repproposition} \begin{proposition} \label{nonconvex-quad-not-surrounded} In a unit-area tiling by non-convex quadrilaterals and a single type of efficient pentagon, a non-convex quadrilateral cannot be surrounded by efficient pentagons. \end{proposition} \begin{proof} By Lemma \ref{quad-quadrilateral-max-edge}, some edge of each non-convex quadrilateral exceeds $\sqrt{2} > 1.41$, but by Lemma \ref{quad-pentagon-min-edge}, the longest edge in an efficient pentagon is less than $1.081$. \end{proof} The following proof is loosely based on a previous result by Chung et al. \cite[Lem. 3.6]{tile11}, who determine the perimeter-minimizing pentagon with a given edge-length. We adapt their proof to find the perimeter-minimizing triangle with a given edge-length. \begin{proposition} \label{EdgeMinTriangle} The perimeter-minimizing unit-area triangle with given edge-length $e$ is an isosceles triangle with base $e$. \end{proposition} \begin{proof} It is well known that for given edge-lengths $e_i$, $i = 1, 2, \ldots, n$, the area-maximizing $n$-gon is the one inscribed in a circle.
The area is given by $$ \frac{1}{2}r^2\sum_{i = 1}^3 \sin \theta_i, $$ where $r$ is the circumradius and $\theta_i$ is the central angle corresponding to edge $e_i$. Since sine is concave down from $0$ to $\pi$, with one central angle fixed the area is maximized when the other two central angles are equal; by scaling, maximizing area for a given perimeter is equivalent to minimizing perimeter for a given area. Therefore given one edge, the perimeter is minimized when the other two edges are equal. \end{proof} \begin{proposition} \label{quadVertexBound} For any tiling, any $m$ quadrilateral tiles have at least $m$ vertices. \end{proposition} \begin{proof} The angles of a quadrilateral sum to $2\pi$, and in a tiling the angles at a vertex sum to exactly $2\pi$. Therefore each quadrilateral contributes the angle-measure of one full vertex, so the $m$ quadrilaterals contribute $m$ vertices (or portions of more than $m$ vertices). \end{proof} \begin{proposition} \label{gendihedralSomeDeg4Verts} In a tiling of a flat torus by efficient pentagons and non-convex quadrilaterals, if the ratio of efficient pentagons to non-convex quadrilaterals exceeds $14$, then the tiling will have at least one degree-four efficient vertex. \end{proposition} \begin{proof} Consider a tiling of a flat torus by $n$ efficient pentagons and $m$ non-convex quadrilaterals. The area of the torus is $n + m$, so by the Euler characteristic formula there are $3n/2 + m$ vertices. By Proposition \ref{nonconvex-quad-not-surrounded}, the non-convex quadrilaterals cannot be surrounded entirely by efficient pentagons because at least one edge in each non-convex quadrilateral is too long to tile with any efficient pentagons. Therefore each non-convex quadrilateral shares an edge with at least one other non-convex quadrilateral, so there are at most six inefficient vertices for each pair of non-convex quadrilaterals. By Proposition \ref{minmaxangle}, there are at most $2(2) + 4(4) - 6 = 14$ efficient pentagons (subtracting six because we assume the tiling is edge-to-edge) surrounding each pair of non-convex quadrilaterals. Let $k_3$ and $k_4$ be the number of degree-three and degree-four efficient vertices.
Then counting each efficient vertex as one-fifth of a pentagon, we know $$ (3/5)k_3 + (4/5)k_4 \geq n - (14/5)(m/2). $$ Additionally $$ 3n/2 + m - m = 3n/2 \geq k_3 + k_4, $$ as by Proposition \ref{quadVertexBound} the $m$ non-convex quadrilaterals contribute at least $m$ inefficient vertices. These bounds imply $k_4 \geq n/2 - 7m$. Then $k_4 \geq n/2 - 7m > 0$ when $n > 14m$. \end{proof} \begin{proposition} \label{genquad-bound-four-angles-don't-tile} Consider a unit-area tiling of a flat torus by efficient pentagons and non-convex quadrilaterals. Assume that each efficient pentagon has at most four angles which tile with the efficient pentagon's angles. Then the ratio of efficient pentagons to non-convex quadrilaterals is less than or equal to ten. \end{proposition} \begin{proof} Because the efficient pentagon has at most four angles which tile, it cannot be surrounded entirely by efficient pentagons. Therefore the ratio of efficient pentagons to non-convex quadrilaterals is at most the maximum number of efficient pentagons which can surround a non-convex quadrilateral. At the three angles of the non-convex quadrilateral which are less than $\pi$, there are at most four efficient pentagons. If there were five or more, then the angles of the efficient pentagon would be too small, in violation of Proposition \ref{minmaxangle}. By the same logic, there are at most two efficient pentagons at the angle in the non-convex quadrilateral which is greater than $\pi$. Since the tiling is edge-to-edge, four of the efficient pentagons surrounding the non-convex quadrilateral appear at two vertices, and we must avoid double counting these. So the total number of efficient pentagons surrounding the non-convex quadrilateral is $4(3) + 2(1) - 4 = 10$. Then the ratio of efficient pentagons to non-convex quadrilaterals is at most ten to one. \end{proof} We adapt Proposition \ref{ratio1} to the case of dihedral tilings with quadrilaterals and pentagons.
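Two routine computations from the proof of Proposition \ref{gendihedralSomeDeg4Verts} above can be written out explicitly, using only facts already stated (Euler's formula $V - E + F = 0$ on the torus, and the two displayed bounds):

```latex
% Vertex count: F = n + m faces; the tiling is edge-to-edge, so each edge is
% shared by two tiles and E = (5n + 4m)/2, giving
% V = E - F = (5n + 4m)/2 - (n + m) = 3n/2 + m.
%
% Elimination yielding the degree-four bound, with (14/5)(m/2) = 7m/5:
\[
n - \frac{7m}{5} \;\le\; \frac{3}{5}k_3 + \frac{4}{5}k_4
  \;=\; \frac{3}{5}\left(k_3 + k_4\right) + \frac{1}{5}k_4
  \;\le\; \frac{3}{5}\cdot\frac{3n}{2} + \frac{1}{5}k_4,
\]
so $k_4 \ge 5\left(\frac{n}{10} - \frac{7m}{5}\right) = \frac{n}{2} - 7m$.
```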
\begin{proposition} \label{generalQuadRatio} In a tiling $T$ of a flat torus by a unit-area efficient pentagon and non-convex quadrilaterals, with perimeter per tile less than or equal to half the perimeter of a Prismatic pentagon, the ratio of efficient pentagons to non-convex quadrilaterals is greater than $31.1753$. \end{proposition} \begin{proof} We adapt a proof given by Chung et al. \cite[Prop. 2.11]{tile11} and Proposition \ref{ratio1}. By Proposition \ref{angles-convex-between}, $T$ cannot have an efficient pentagon with angles strictly between $\pi/2$ and $2\pi/3$. Then by Proposition \ref{type1-cvx-best}, the least-perimeter unit-area pentagon with angles not strictly between $\pi/2$ and $2\pi/3$ has perimeter $P_0$ greater than $3.819$. The perimeter of a Cairo/Prismatic pentagon is $P_1 = 2\sqrt{2+\sqrt{3}} < 3.86$. The convex pentagons have perimeter at least $P_0$ by definition, and the non-convex quadrilaterals have perimeter $P_2$ greater than that of a unit-area equilateral triangle ($3\sqrt{4 / \sqrt{3}}$). Let $n$ and $m$ denote the number of efficient pentagons and non-convex quadrilaterals. By hypothesis, $$ nP_0 + mP_2 < (n+m)P_1. $$ Therefore $$ n/m > \frac{P_2-P_1}{P_1-P_0} \approx 15.554.$$ Since this ratio exceeds $14$, Proposition \ref{gendihedralSomeDeg4Verts} implies that the tiling must contain at least one degree-four efficient vertex, and Proposition \ref{genquad-bound-four-angles-don't-tile} implies that the efficient pentagon must have five angles which tile. Therefore by Proposition \ref{pminimizing5tileDeg4Vert}, the efficient pentagon must have perimeter at least $P_0' = 3.8414$. Substituting $P_0'$ in for $P_0$ yields $n/m > 31.1753$. \end{proof} \begin{proposition} \label{gen-quad-four-angles-don't-tile} Consider a unit-area tiling of a flat torus by an efficient pentagon and non-convex quadrilaterals. Assume that each efficient pentagon has at most four angles which tile with the efficient pentagon's angles.
Then the tiling has more perimeter per tile than half the perimeter of a Prismatic pentagon. \end{proposition} \begin{proof} This follows immediately from Propositions \ref{generalQuadRatio} and \ref{genquad-bound-four-angles-don't-tile}. \end{proof} \begin{proposition} \label{quadCan'tTileDeg3} Let $P$ be an efficient pentagon with perimeter less than $3.8574$ and let $s$ and $a$ be angles in $P$ with the following properties: \begin{enumerate} \item All five angles of $P$ tile; \item $s$ is strictly less than $90^\circ$; \item $3s + a = 360^\circ$. \end{enumerate} Then $a$ cannot tile a degree-three vertex with the angles in $P$. \end{proposition} \begin{proof} As $P$ is efficient, by Proposition \ref{minmaxangle}, $80.92^\circ < s < 90^\circ$. From the given we know $a = 360^\circ-3s$. Let $x, y$ be two additional angles in $P$. Suppose that $a$ did tile a degree-three vertex with the angles in $P$. Then there are five cases, corresponding to the five ways $a$ can be combined with itself, $s$, $x$, and $y$. \begin{enumerate} \item $a + 2s = 360^\circ$; \item $a + s + x = 360^\circ$; \item $a + x + y = 360^\circ$; \item $a + 2x = 360^\circ$; \item $2a + x = 360^\circ$. \end{enumerate} Case 1 can be easily ruled out, as $a + 3s = 360^\circ$ and $a + 2s = 360^\circ$ hold together only when $s$ is zero and $a$ is $360^\circ$, which obviously violates the properties of $P$. In Case 2, $360^\circ - 2s + x = 360^\circ$ and therefore $x = 2s$. So $161.84^\circ < x$, which by Proposition \ref{minmaxangle} implies that $P$ is not efficient. By Lemma \ref{cot-perimeter-lemma}, Case 3 reduces to Case 4, as the perimeter will be lowest when $x = y$. Then we can solve for $x$ in terms of $s$ to get $x = 3s/2$, and therefore the remaining two angles sum to $540^\circ - (s + a + x) = 180^\circ + s/2$, averaging $(180^\circ + s/2)/2$. Using Mathematica and Lemma \ref{cot-perimeter-lemma} to calculate the minimum perimeter of $P$ in terms of $s$ yields a perimeter greater than 3.8574. In Case 5, $x = 6s - 360^\circ$.
So $540^\circ - (s + a + x) = 540^\circ-4s$, which means the remaining two angles in the pentagon average $(540^\circ-4s)/2$ (with perimeter minimized when the two are equal to the average). Using Lemma \ref{cot-perimeter-lemma} and Mathematica to calculate the minimum perimeter of $P$ in terms of $s$ shows that $P$ will never be efficient in this case. Therefore if the perimeter of $P$ is less than $3.8574$, none of these five cases is possible. \end{proof} Recall from Proposition \ref{efficient-pentagon-one-small-angle} we have the following: \begin{repproposition}{efficient-pentagon-one-small-angle} If two angles in a pentagon average less than $\pi/2$ then the pentagon cannot be efficient. If two angles average exactly $\pi/2$, the pentagon is efficient only if it is Cairo or Prismatic. \end{repproposition} \begin{theorem} \label{oneeffn&mTorus} A unit-area tiling of a flat torus by an efficient pentagon and non-convex quadrilaterals cannot have perimeter per tile less than half the perimeter of a Cairo/Prismatic pentagon. \end{theorem} \begin{proof} Assume there exists a unit-area tiling of a flat torus by $n$ efficient pentagons and $m$ non-convex quadrilaterals with perimeter per tile less than half the perimeter of a Cairo/Prismatic pentagon. Note that $m \not= 0$, otherwise the tiling would contradict Theorem \ref{chungthm}. The area of the torus is $n + m$, so by the Euler characteristic formula there are $3n/2 + m$ vertices. By Proposition \ref{nonconvex-quad-not-surrounded}, a non-convex quadrilateral cannot be surrounded entirely by efficient pentagons because at least one edge in the non-convex quadrilateral is too long. Therefore each non-convex quadrilateral shares an edge with at least one other non-convex quadrilateral, so there are at most six inefficient vertices for each pair of non-convex quadrilaterals. Then we may assume that there are at most $3m$ inefficient vertices, so the number of efficient vertices is at least $$ \frac{3n}{2} + m - 3m = \frac{3n}{2} - 2m.
$$ Let $k_3$ and $k_4$ be the number of degree-three and degree-four efficient vertices. By Corollary \ref{convex-vertices-degnot-2}, these are the only two types of efficient vertices which can appear in the tiling. Therefore $$ k_3 + k_4 \geq \frac{3n}{2} - 2m. $$ Additionally, since there are $n$ efficient pentagons, considering each vertex as a fifth of a pentagon we conclude $$ \frac{3}{5} k_3 + \frac{4}{5} k_4 \leq n. $$ Thus $k_3 \geq n - 8m$ and $k_4 \leq n/2 + 6m$. Now by Proposition \ref{generalQuadRatio}, it must be the case that $n > 31.1753m$, as the tiling has perimeter per tile less than half that of a Cairo pentagon. Therefore by Proposition \ref{gendihedralSomeDeg4Verts}, there exists at least one efficient vertex of degree four in the tiling. So there must be at least one angle, $s$, in the efficient pentagon which measures less than or equal to $90^\circ$. \begin{figure} \centering \includegraphics[scale=0.5]{DihedralQuadBad.png} \caption{A tiling with an efficient pentagon and non-convex quadrilaterals is always worse than a Prismatic tiling.} \label{fig:nonconvexfig} \end{figure} Suppose $s$ is the only angle which tiles an efficient vertex of degree four. Then $s = 90^\circ$, as four copies of $s$ must fit around such a vertex. Therefore at the large angle of each non-convex quadrilateral there can be at most one efficient angle, and at the small angles there can be at most three. As the non-convex quadrilaterals are paired because of their edge-lengths, there are at most $2(1) + 4(3) - 6 = 8$ efficient pentagons (subtracting six because we assume the tiling is edge-to-edge) per pair of non-convex quadrilaterals. Therefore counting each efficient vertex as one-fifth of a pentagon, we know $$ (3/5)k_3 + (4/5)k_4 \geq n - (8/5)(m/2). $$ Additionally $$ 3n/2 + m \geq k_3 + k_4, $$ as there cannot be more efficient vertices than total vertices in the tiling. We can use these two inequalities to determine that $k_4 \geq n/2 - 7m$. We will show this implies there are too many $s$ angles in the tiling.
As $s$ is the only angle that can tile a degree-four efficient vertex, there are at least $4k_4 \geq 2n - 28m$ $s$ angles. But as there cannot be more than $n$ $s$ angles, we have that $n \geq 4k_4$, and so $n \geq 2n-28m$, i.e. $28m \geq n$. But $n > 31.1753m$ by Proposition \ref{generalQuadRatio}, a contradiction: the tiling would require more than $n$ $s$ angles. Therefore there must be at least two angles which tile a degree-four vertex. Next we show there cannot be more than one angle other than $s$ which tiles a degree-four efficient vertex. Let $a$, $b$, and $c$ be three angles in the efficient pentagon different than $s$. If $a + b + c + s = 360^\circ$ then the four angles average $90^\circ$ and the pentagon is not efficient by Proposition \ref{efficient-pentagon-one-small-angle}. If $2s + a + b = 360^\circ$, then by Lemma \ref{cot-perimeter-lemma} the perimeter is minimized when two angles measure $(360^\circ - 2s)/2$, two measure $(180^\circ + s)/2$, and one measures $s$. By definition $s \leq 90^\circ$ and by Proposition \ref{minmaxangle} $s > 80.92^\circ$; it follows from Lemma \ref{cot-perimeter-lemma} that the pentagon is not efficient. The case when $2a + s + b = 360^\circ$ is similar: just replace $s$ with $a$ and let $a$ range from $80.92^\circ$ to $142.29^\circ$ by Proposition \ref{minmaxangle}. As before, the pentagon is not efficient in this case. Therefore only one angle, say $a$, can tile a degree-four vertex with $s$. If there are two $s$ and two $a$ angles at such a vertex, then they average exactly $\pi/2$. Since the pentagon is efficient, by Lemma \ref{cot-perimeter-lemma} and Proposition \ref{efficient-pentagon-one-small-angle} it must be a Cairo or Prismatic pentagon, and the theorem follows as $m \not= 0$. Therefore there are either three $s$ or three $a$ angles at a degree-four efficient vertex. First suppose there are three $a$ angles and one $s$ angle. Then $3a + s = 360^\circ$.
By Proposition \ref{minmaxangle}, $s$ must be greater than $80.92^\circ$, and by hypothesis less than or equal to $90^\circ$. It follows that the average of $a$ and $s$ is between $86.973^\circ$ and $90^\circ$. Then by Proposition \ref{efficient-pentagon-one-small-angle}, the efficient pentagon must be Cairo or Prismatic, and the theorem follows as $m \not= 0$. Therefore it must be the case that $3s + a = 360^\circ$. Because there is only one type of efficient pentagon in the tiling, there are exactly $n$ $s$ angles. At all $k_4$ vertices there will be three $s$ angles. By Proposition \ref{minmaxangle}, there are at most $2(2) + 4(4) - 6 = 14$ efficient pentagons surrounding each pair of non-convex quadrilaterals. Therefore counting each efficient vertex as one-fifth of a pentagon, we know $$ (3/5)k_3 + (4/5)k_4 \geq n - (14/5)(m/2). $$ Additionally $$ 3n/2 + m \geq k_3 + k_4, $$ which we can use to determine that $k_4 \geq n/2 - 10m$. Since there are three $s$ angles at each $k_4$ vertex, there are at least $3k_4 \geq 3n/2 - 30m$ $s$ angles. But as there cannot be more than $n$ $s$ angles, we have that $n \geq 3k_4$, and so $n \geq 3n/2 - 30m$, which means $n \leq 60m$. In order to have a ratio of efficient pentagons to non-convex quadrilaterals less than or equal to 60, the efficient pentagon must have perimeter less than $3.8458$. Here it is convenient to note that since the tiling has perimeter per tile less than half that of a Cairo/Prismatic pentagon, by Proposition \ref{genquad-bound-four-angles-don't-tile} the efficient pentagon must have five angles which tile. Thus the efficient pentagon satisfies the hypotheses of Proposition \ref{quadCan'tTileDeg3}, so it will not be the case that $a$ tiles an efficient vertex of degree three. Therefore $a$ appears only at degree-four efficient vertices and inefficient vertices. There will be at most one $a$ angle at each degree-four vertex, and at most three at each non-efficient vertex, as $a$ is greater than $90^\circ$.
Since $k_4 \leq n/2 + 6m$ and the number of inefficient vertices is at most $3m$, the number of $a$ angles in the tiling is at most $n/2 + 6m + 3(3m)$, and since each of the $n$ efficient pentagons has an $a$ angle, $$ n/2 + 6m + 9m \geq n. $$ Solving this yields $30m \geq n$; otherwise there would not be enough room for the $a$ angles in the tiling. But this contradicts Proposition \ref{generalQuadRatio}, which requires the ratio of efficient pentagons to non-convex quadrilaterals to be greater than $31.1753$. Therefore it must be the case that the perimeter per tile is greater than or equal to half the perimeter of a Cairo/Prismatic pentagon. \end{proof} \begin{remark} \emph{Note that the proof of Theorem \ref{oneeffn&mTorus} assumes at points that the non-convex quadrilaterals are spread out in the tiling, and at other points assumes they are densely packed. This is done in order to ensure the bounds on the number of inefficient vertices are correct. The proof might be improved if these assumptions were either avoided or treated more carefully.} \end{remark} \begin{remark} \emph{An earlier draft of this paper showed the above results only for dihedral tilings with a single efficient pentagon and a single non-convex quadrilateral. We tried several other initial approaches to generalize the above proposition before finding the current one. Initially we attempted to find ratios of quadrilaterals which shared an edge with the efficient pentagons and those that did not, as the first type have high perimeter. Additionally we attempted to find the perimeter-minimizing pentagon with five angles that tile, but were unable to do so.} \end{remark} \section{Dihedral Tiling of Efficient and Type 2 Non-Convex Pentagons} We adapt the technique used in Section 5 to the dihedral case with one efficient pentagon and one Type 2 non-convex pentagon. First recall from Section 2 that we have the following lemma: \begin{replemma}{ncvx-pent-max-edge} For a unit-area Type 2 non-convex pentagon, some edge must be greater than or equal to $\sqrt{2}$.
\end{replemma} \begin{proposition} \label{nonconvex-type2-not-surrounded} In a unit-area dihedral tiling by Type 2 non-convex pentagons and efficient pentagons, a non-convex pentagon cannot be surrounded by efficient pentagons. \end{proposition} \begin{proof} By Lemma \ref{ncvx-pent-max-edge}, some edge of the non-convex pentagon measures at least $\sqrt{2} > 1.41$. But by Lemma \ref{quad-pentagon-min-edge}, the longest edge in an efficient pentagon is less than $1.081$, so no efficient pentagon can share that edge. \end{proof} \begin{proposition} \label{type2NonconvexPerimeterMin} In a dihedral tiling by an efficient pentagon and a Type 2 non-convex pentagon, the non-convex pentagon has perimeter at least 4.93594. \end{proposition} \begin{proof} By Lemma \ref{quad-pentagon-min-edge}, an edge of a unit-area efficient pentagon measures at most 1.081. Therefore a non-convex pentagon in a dihedral tiling with an efficient pentagon must have an edge measuring at least 1.081. By Proposition \ref{EdgeMinTriangle}, the perimeter-minimizing unit-area triangle given one edge-length, $e$, is the isosceles triangle with base $e$. Since the non-convex pentagon will have perimeter greater than or equal to this triangle's, we simply solve for the perimeter of such a triangle with edge-length 1.081, which is at least 4.93594. \end{proof} \begin{proposition} \label{type2dihedralSomeDeg4Verts} In a dihedral tiling of a torus by efficient pentagons and Type 2 non-convex pentagons, if the ratio of efficient pentagons to non-convex pentagons exceeds 24, then the tiling will have at least one degree-four efficient vertex. \end{proposition} \begin{proof} Consider a tiling of a flat torus by $n$ efficient pentagons and $m$ non-convex pentagons. The area of the torus is $n + m$, so by the Euler characteristic formula there are $3(n+m)/2$ vertices. By Proposition \ref{nonconvex-type2-not-surrounded}, the non-convex pentagons cannot be surrounded entirely by efficient pentagons. 
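The triangle computation in Proposition \ref{type2NonconvexPerimeterMin} can be checked numerically; this sketch (ours, not part of the proof) uses the fact that a unit-area triangle with base $e$ has height $2/e$:

```python
import math

# Perimeter of the unit-area isosceles triangle with base e:
#   P(e) = e + 2*sqrt((e/2)^2 + (2/e)^2),  since the height is 2/e.
def iso_perimeter(e):
    return e + 2.0 * math.hypot(e / 2.0, 2.0 / e)

# The quoted bound for a forced edge of length 1.081:
assert abs(iso_perimeter(1.081) - 4.93594) < 1e-4
```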
Therefore each non-convex pentagon shares an edge with at least one other non-convex pentagon, so there are at most eight inefficient vertices for every two non-convex pentagons. Two non-convex pentagons which share an edge have at most four small angles and four large angles between them. Assume there are two efficient pentagons at all four of the large angles. Then all the large angles are less than $198.16^\circ$ by Proposition \ref{minmaxangle}. So the four small angles average more than $71.84^\circ$, and since at least one angle must be greater than or equal to the average, at one of the small angles there are only three efficient pentagons. At the rest of the small angles there are four, so the total number of efficient pentagons around two non-convex pentagons is at most $4(2) + 3(4) + 3 - 8 = 15$ (subtracting eight because we assume the tiling is edge-to-edge). Suppose there were not two efficient pentagons at all the large angles. Then at least two of the large angles are greater than $198.16^\circ$ and there is only one efficient pentagon at these two large angles. There are two efficient pentagons at the other two large angles, and four at the four small angles for a total of $14$. We conclude that there are at most $15$ efficient pentagons around each pair of non-convex pentagons. Let $k_3$ and $k_4$ be the number of degree three and degree four efficient vertices. Then counting each efficient vertex as one-fifth of a pentagon, we know $$ (3/5)k_3 + (4/5)k_4 \geq n - (15/5)(m/2). $$ We subtract $(15/5)(m/2)$ as this accounts for the efficient pentagons appearing at inefficient vertices. Additionally $$ 3n/2 + 3m/2 \geq k_3 + k_4, $$ as there cannot be more efficient vertices than the total number of vertices in the tiling. Thus $k_4 \geq n/2 - 12m$, which is positive when $n > 24m$. \end{proof} We adapt Proposition \ref{ratio1} to the case of dihedral tilings with efficient and Type 2 non-convex pentagons. 
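The elimination in the proof above can be verified with exact rational arithmetic (a sanity check, not part of the argument):

```python
from fractions import Fraction as F

# Substituting k3 <= (3/2)n + (3/2)m - k4 into
# (3/5)k3 + (4/5)k4 >= n - (3/2)m gives (1/5)k4 >= n_coef*n + m_coef*m.
n_coef = F(1) - F(3, 5) * F(3, 2)                 # 1 - 9/10 = 1/10
m_coef = -F(15, 5) * F(1, 2) - F(3, 5) * F(3, 2)  # -3/2 - 9/10 = -12/5
assert (5 * n_coef, 5 * m_coef) == (F(1, 2), F(-12))  # k4 >= n/2 - 12m

# k4 > 0 whenever n/2 - 12m > 0, i.e. n > 24m:
assert F(12) / F(1, 2) == 24
```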
\begin{proposition} \label{type2-ratio} In a unit-area dihedral tiling $T$ of a flat torus by an efficient pentagon and a Type 2 non-convex pentagon, with perimeter per tile less than or equal to half the perimeter of a Prismatic pentagon, the ratio of efficient pentagons to non-convex pentagons is greater than $34.77$. \end{proposition} \begin{proof} We adapt a proof given by Chung et al. \cite[Prop. 2.11]{tile11} and Proposition \ref{ratio1}. By Proposition \ref{angles-convex-between}, $T$ cannot have an efficient pentagon with angles strictly between $\pi/2$ and $2\pi/3$. By Proposition \ref{type1-cvx-best}, the least-perimeter unit-area pentagon with angles not strictly between $\pi/2$ and $2\pi/3$ is a Type 1 special convex pentagon, which by Definition \ref{type1-cvx-best} has perimeter $P_0$ greater than $3.819$. The perimeter of a Cairo/Prismatic pentagon is $P_1 = 2\sqrt{2+\sqrt{3}} < 3.87$. The efficient pentagons have perimeter at least $P_0$ by the above, and the non-convex pentagons have perimeter greater than $P_2 = 4.93594$ by Proposition \ref{type2NonconvexPerimeterMin}. Let $n$ and $m$ denote the number of efficient pentagons and non-convex pentagons. By hypothesis, $$ nP_0 + mP_2 < (n+m)P_1. $$ Therefore $$ n/m > \frac{P_2-P_1}{P_1-P_0} \approx 24.117.$$ Proposition \ref{type2dihedralSomeDeg4Verts} implies that the tiling must contain at least one degree-four efficient vertex. But then by Proposition \ref{perimeterMinimizingDegreeFourPentagon}, the efficient pentagon cannot have perimeter exceeding $3.8328$. Substituting $3.8328$ in for $P_0$ yields $n/m > 34.77$. \end{proof} \begin{proposition} \label{type2-four-angles-don't-tile} Consider a unit-area dihedral tiling of a flat torus by an efficient pentagon and a Type 2 non-convex pentagon. Assume that each efficient pentagon has at most four angles which tile with the efficient pentagon's angles. Then the tiling has more perimeter per tile than half the perimeter of a Prismatic pentagon. 
\end{proposition} \begin{proof} Because the efficient pentagon has at most four angles which tile, it cannot be surrounded entirely by efficient pentagons. Therefore the ratio of efficient pentagons to non-convex pentagons is at most the maximum number of efficient pentagons which can surround a non-convex pentagon. At the three angles of the non-convex pentagon which are less than $\pi$, there are at most four efficient pentagons. If there were five or more, then the angles of the efficient pentagon would be too small, in violation of Proposition \ref{minmaxangle}. By the same logic, there are at most two efficient pentagons at the two angles in the non-convex pentagon which are greater than $\pi$. Since the tiling is edge-to-edge, five of the efficient pentagons surrounding the non-convex pentagon appear at two vertices, and we must avoid double counting these. So the total number of efficient pentagons surrounding the non-convex pentagon is $4(3) + 2(2) - 5 = 11$. Thus the ratio of efficient pentagons to non-convex pentagons is at most eleven to one. Therefore by Proposition \ref{type2-ratio} the tiling has more perimeter per tile than half that of a Prismatic pentagon. \end{proof} \begin{theorem} \label{dihedraltype2n&mTorus} A unit-area dihedral tiling of a flat torus by an efficient pentagon and a Type 2 non-convex pentagon cannot have perimeter per tile less than half the perimeter of a Cairo/Prismatic pentagon. \end{theorem} \begin{proof} Assume there exists a unit-area tiling of a flat torus by $n$ efficient pentagons and $m$ Type 2 non-convex pentagons with perimeter per tile less than a Cairo/Prismatic tiling's. Note that $m \not= 0$; otherwise the tiling would contradict Theorem \ref{chungthm}. The area of the torus is $n + m$, so by the Euler characteristic formula there are $3(n + m)/2$ vertices. 
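Before continuing, the constants behind Proposition \ref{type2-ratio} can be checked numerically. This sketch is ours; since the inputs $P_0$ and $P_2$ are rounded bounds, only coarse inequalities are asserted (the sharper quoted figures 24.117 and 34.77 presumably use the exact values):

```python
import math

P1 = 2 * math.sqrt(2 + math.sqrt(3))   # Cairo/Prismatic perimeter ~ 3.8637
P0 = 3.819                             # efficient-pentagon lower bound
P2 = 4.93594                           # Type 2 non-convex lower bound
assert 3.86 < P1 < 3.87
assert (P2 - P1) / (P1 - P0) > 23.9          # quoted as approximately 24.117
assert (P2 - P1) / (P1 - 3.8328) > 34.5      # quoted as > 34.77
```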
By Proposition \ref{nonconvex-type2-not-surrounded}, the non-convex pentagon cannot be surrounded entirely by efficient pentagons because at least one edge in the non-convex pentagon is too long. Therefore each non-convex pentagon shares an edge with at least one other non-convex pentagon, so there are at most eight inefficient vertices for each pair of non-convex pentagons. Therefore there are at most $4m$ inefficient vertices, so the number of efficient vertices is at least $$ \frac{3n}{2} + \frac{3m}{2} - 4m = \frac{3n}{2} - \frac{5m}{2}. $$ Let $k_3$ and $k_4$ be the number of degree three and degree four efficient vertices. By Corollary \ref{convex-vertices-degnot-2}, these are the only two types of efficient vertices which can appear in the tiling. Thus $$ k_3 + k_4 \geq \frac{3n}{2} - \frac{5m}{2}. $$ Additionally, since there are $n$ efficient pentagons, considering each vertex as a fifth of a pentagon we conclude that $$ \frac{3}{5} k_3 + \frac{4}{5} k_4 \leq n. $$ Therefore $k_3 \geq n - 10m$ and $k_4 \leq n/2 + (15/2)m$. Now by Proposition \ref{type2-ratio} it must be the case that $n > 34.77m$, as the tiling has perimeter per tile less than a Cairo/Prismatic tiling's. By Proposition \ref{type2dihedralSomeDeg4Verts}, there exists at least one efficient vertex of degree four in the tiling. So there must be at least one angle, $s$, in the efficient pentagon which measures less than or equal to $90^\circ$. Suppose $s$ is the only angle that tiles an efficient vertex of degree four. Then $s = 90^\circ$. Therefore at the large angles of the non-convex pentagon there can be at most one efficient pentagon, and at small angles there can be at most three. As the non-convex pentagons are paired because of their edge-lengths, there are at most $4(1) + 4(3) - 8 = 8$ efficient pentagons (subtracting eight because we assume the tiling is edge-to-edge) per pair of non-convex pentagons. 
Therefore counting each efficient vertex as one-fifth of a pentagon, we know $$ (3/5)k_3 + (4/5)k_4 \geq n - (8/5)(m/2). $$ Additionally $$ 3n/2 + 3m/2 \geq k_3 + k_4, $$ as there cannot be more efficient vertices than total vertices in the tiling. We can use these two inequalities to determine that $k_4 \geq n/2 - (17/2)m.$ We will show this implies there are too many $s$ angles in the tiling. As $s$ is the only angle that can tile a degree-four efficient vertex, there are at least $4k_4 \geq 2n - 34m$ $s$ angles. But as there cannot be more than $n$ $s$ angles, we have that $n \geq 4k_4$, and so $n \geq 2n-34m$. Thus $34m \geq n$. But since $n > 34.77m$ by Proposition \ref{type2-ratio}, this is impossible: the tiling would require more than $n$ $s$ angles. As this is a contradiction, there must be at least two angles which tile a degree four vertex. Next we show there cannot be more than one angle other than $s$ which tiles a degree four efficient vertex. Let $a$, $b$, and $c$ be three angles in the efficient pentagon different from $s$. If $a + b + c + s = 360^\circ$ then the four angles average $90^\circ$ and the pentagon is not efficient by Proposition \ref{efficient-pentagon-one-small-angle}. If $2s + a + b = 360^\circ$, then by Lemma \ref{cot-perimeter-lemma} the perimeter is minimized when two angles measure $(360 - 2s)/2$, two measure $(180 + s)/2$, and one measures $s$. By definition $s \leq 90^\circ$ and by Proposition \ref{minmaxangle} $s > 80.92^\circ$; it follows from Lemma \ref{cot-perimeter-lemma} that the pentagon is not efficient. The case when $2a + s + b = 360^\circ$ is similar: just replace $s$ with $a$ and let $a$ range from $80.92^\circ$ to $142.29^\circ$ by Proposition \ref{minmaxangle}. As before, the pentagon is not efficient in this case. Therefore only one angle, say $a$, can tile a degree four vertex with $s$. If there are two $s$ and two $a$ angles, then they average exactly $\pi/2$. 
Since the pentagon is efficient, by Lemma \ref{cot-perimeter-lemma} and Proposition \ref{efficient-pentagon-one-small-angle} it must be a Cairo or Prismatic pentagon, and the theorem follows as $m \not= 0$. Therefore there are either three $s$ or three $a$ angles at a degree four efficient vertex. First suppose there are three $a$ angles and one $s$ angle. Then $3a + s = 360$. By Proposition \ref{minmaxangle}, $s$ must be greater than $80.92^\circ$, and by hypothesis less than or equal to $90^\circ$. It follows that the average of $a$ and $s$ is between $86.973^\circ$ and $90^\circ$. Then by Proposition \ref{efficient-pentagon-one-small-angle}, the efficient pentagon must be Cairo or Prismatic and the theorem follows as $m \not= 0$. Therefore it must be the case that $3s + a = 360$. Because the tiling is dihedral, there are exactly $n$ $s$ angles. At all $k_4$ vertices there will be three $s$ angles. By Proposition \ref{minmaxangle}, there are at most $2(4) + 4(4) - 8 = 16$ efficient pentagons surrounding each pair of non-convex pentagons. Therefore counting each efficient vertex as one-fifth of a pentagon, we know $$ (3/5)k_3 + (4/5)k_4 \geq n - (16/5)(m/2). $$ Additionally $$ 3n/2 + 3m/2 \geq k_3 + k_4 $$ which we can use to determine that $k_4 \geq n/2 - (25/2)m.$ Then since there are three $s$ angles at each $k_4$ vertex, there are at least $3k_4 \geq 3n/2 - (75/2)m$ $s$ angles. But as there cannot be more than $n$ $s$ angles, we have that $n \geq 3k_4$, and so $n \geq 3n/2 - (75/2)m$, which means $n \leq 75m$. In order to have a ratio of efficient pentagons to non-convex pentagons less than or equal to 75, the efficient pentagon must have perimeter less than $3.8495$. Here it is convenient to note that since the tiling has perimeter per tile less than Cairo/Prismatic, by Proposition \ref{type2-four-angles-don't-tile} the efficient pentagon must have five angles which tile. 
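The degree-four counting just carried out can be verified with exact rational arithmetic (ours, not part of the proof):

```python
from fractions import Fraction as F

# Substituting k3 <= (3/2)n + (3/2)m - k4 into
# (3/5)k3 + (4/5)k4 >= n - (16/5)(m/2) yields (1/5)k4 >= n/10 - (5/2)m.
n_coef = F(1) - F(3, 5) * F(3, 2)                 # 1/10
m_coef = -F(16, 5) * F(1, 2) - F(3, 5) * F(3, 2)  # -5/2
assert (5 * n_coef, 5 * m_coef) == (F(1, 2), F(-25, 2))  # k4 >= n/2 - (25/2)m

# Then 3*k4 >= (3/2)n - (75/2)m and n >= 3*k4 force n <= 75m:
bound = (-3 * (5 * m_coef)) / (3 * (5 * n_coef) - 1)
assert bound == 75
```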
Then the efficient pentagon satisfies the hypothesis of Proposition \ref{quadCan'tTileDeg3}, so it will not be the case that $a$ tiles an efficient vertex of degree three. Therefore $a$ appears only at degree four efficient vertices and inefficient vertices. There will be at most one $a$ angle at each degree four vertex. Because at least two-fifths of the inefficient vertices have a large angle at them, there is at most one $a$ angle at at least two-fifths of the non-efficient vertices, and at most three $a$ angles at at most three-fifths of the non-efficient vertices, as $a$ is greater than $90^\circ$. Since $k_4 \leq n/2 + (15/2)m$ and the number of inefficient vertices is at most $4m$, the number of $a$ angles in the tiling is at most $n/2 + (15/2)m + (2/5)(4m) + (3)(3/5)(4m)$, and therefore $$ n/2 + (163/10)m \geq n. $$ Solving this yields $32.6m \geq n$; otherwise there will not be enough $a$ angles in the tiling. But this contradicts Proposition \ref{type2-ratio}: that the ratio of efficient pentagons to Type 2 non-convex pentagons must be greater than $34.77$. Therefore it must be the case that the perimeter per tile is greater than or equal to that of a Cairo/Prismatic pentagon. \end{proof} \begin{remark} \emph{Two key values in this section would need slight improvement for the proof to apply to tilings by an efficient pentagon and any number of Type 2 non-convex pentagons. The ratio of efficient to Type 2 non-convex pentagons which guarantees at least one degree-four vertex would need to come down from 24 to 22.57, and the upper bound on this ratio in a tiling with less perimeter than Cairo/Prismatic tilings would need to come down to 31.26.} \end{remark} \begin{remark} \emph{Considering trihedral tilings is a logical next step in generalizing the results from this section. 
While we have solved the case of one efficient pentagon and one Type 2 non-convex pentagon in Theorem \ref{dihedraltype2n&mTorus}, there is still the case of two different efficient pentagons and a Type 2 non-convex pentagon, and the case of an efficient pentagon, a non-efficient convex pentagon, and a Type 2 non-convex pentagon, both of which remain open.} \end{remark} \section{Extension of Results to the Plane} This section contains early work toward extending the above results, many of which hold only for large flat tori, to the plane. We first generalize the concept of the perimeter, as considered on a flat torus, to the plane. \begin{definition} \label{def:perimeterRatio} \emph{The} perimeter ratio, $\rho$, \emph{of a planar tiling is the limit supremum as $R$ goes to infinity of the perimeter inside a disk of radius $R$ about the origin, divided by $\pi R^2$. We intend to prove that this does not depend on the choice of origin.} \end{definition} The following result allows us to generalize results from finite cases on large tori to infinite cases in the plane. \begin{lemma}[Truncation Lemma] \cite[15.3]{morggeo} \label{truncation-lemma} Consider a tiling of the plane. Let $P(R)$ and $A(R)$ be the perimeter and area of the tiling in a disk of radius $R$ centered at the origin. Additionally let $P_0(R)$ and $A_0(R)$ be the perimeter and area of tiles completely contained within the disk. Then given $\epsilon > 0$, $R_0 > 0$, there exists some $R \geq R_0$ such that the number $N(R)$ of points where the circle $S(0, R)$ intersects edges of tiles is less than $\epsilon A(R)$. In particular, the number of tiles which intersect the disk but are not contained in the disk is at most $\epsilon A(R)$. Furthermore $$ \liminf_{R \to \infty} \frac{P_0}{A_0} \leq \rho. 
$$ \end{lemma} The following proposition generalizes Proposition \ref{ratio1} to the plane: \begin{proposition} \label{plane-ext-ratio1} Let $T$ be a tiling of the plane by unit-area pentagons, with perimeter ratio less than or equal to half the perimeter of a Prismatic pentagon. Then the fractions $C_1$, $N_1$ and $N_2$ of the pentagons completely inside a disk about the origin of radius $R$ which are efficient, non-convex Type 1, and non-convex Type 2, respectively, satisfy: $$ \liminf_{R \to \infty} (C_1 - 2.6N_1 - 13.4N_2) \geq 0. $$ \end{proposition} \begin{proof} The perimeters of a regular pentagon, a Cairo/Prismatic pentagon, the unit square, and the unit-area equilateral triangle are $P_1 = 2\sqrt{5} \sqrt[4]{5 - 2\sqrt{5}}$, $P_2 = 2\sqrt{2 + \sqrt{3}}$, $P_3 = 4$ and $P_4 = 3 \sqrt{4/\sqrt{3}}$. Let $\epsilon > 0$. By the definition of $\rho$, there exists an $R_0 > 0$ such that for $R_1 > R_0$ the perimeter to area ratio of the disk of radius $R_1$ is less than $\rho + \epsilon$. By the Truncation Lemma \ref{truncation-lemma} there exists an $R > R_0$ such that the perimeter $P_0(R)$ and area $A_0(R)$ of the tiles completely within the disk of radius $R$ satisfy $$ \frac{P_0}{A_0} < \rho + \epsilon. $$ Letting $C_2$ denote the fraction of non-efficient convex pentagons, since the efficient, non-efficient convex, Type 1 non-convex, and Type 2 non-convex pentagons have perimeter at least $P_1, P_2, P_3$ and $P_4$ respectively, $$ C_1 P_1 + C_2 P_2 + N_1 P_3 + N_2 P_4 \leq 2\frac{P_0}{A_0}. $$ Therefore $C_1 P_1 + C_2 P_2 + N_1 P_3 + N_2 P_4 \leq 2(\rho + \epsilon)$, which by hypothesis implies $$ C_1 P_1 + C_2 P_2 + N_1 P_3 + N_2 P_4 < P_2 + 2\epsilon. $$ Then, $$ C_1 P_1 + C_2 P_2 + N_1 P_3 + N_2 P_4 < P_2(C_1 + C_2 + N_1 + N_2) + 2\epsilon, $$ and $$ C_1 > N_1 \frac{P_3 - P_2}{P_2 - P_1} + N_2 \frac{P_4 - P_2}{P_2 - P_1} - \frac{2\epsilon}{P_2 - P_1}. $$ By the definition of the $P_i$, $$ C_1 - 2.6N_1 - 13.4N_2 > -40\epsilon. $$ By the definition of the limit inferior the proposition holds. 
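The numerical constants above, along with the $\epsilon$ threshold $5.39 \times 10^{-5}$ that appears in the following proposition, can be checked directly (a sketch of ours, not part of the proof):

```python
import math

P1 = 2 * math.sqrt(5) * (5 - 2 * math.sqrt(5)) ** 0.25  # regular pentagon
P2 = 2 * math.sqrt(2 + math.sqrt(3))                    # Cairo/Prismatic
P3, P4 = 4.0, 3 * math.sqrt(4 / math.sqrt(3))           # square, eq. triangle
assert abs((P3 - P2) / (P2 - P1) - 2.63) < 0.01
assert abs((P4 - P2) / (P2 - P1) - 13.43) < 0.01
assert abs(2 / (P2 - P1) - 38.63) < 0.01   # so the "-40 eps" slack is safe

# (13.43 - 38.63 e)/(1 + 38.63 e) >= 13.4 iff e <= 0.03/(38.63*14.4),
# the binding one of the two constraints:
e2 = (13.43 - 13.4) / (38.63 * (1 + 13.4))
assert abs(e2 - 5.39e-5) < 1e-7
```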
\end{proof} \begin{proposition} \label{plane-ext-stronger-ratio1} Let $T$ be a tiling of the plane with only efficient and non-convex pentagons, with perimeter ratio less than or equal to half the perimeter of a Prismatic pentagon. Then the fractions $C_1$, $N_1$, and $N_2$ of the pentagons completely inside a disk about the origin of radius $R$ which are efficient, non-convex Type 1, and non-convex Type 2, respectively, satisfy: $$ C_1 > 2.6N_1 + 13.4N_2. $$ \end{proposition} \begin{proof} The perimeters of a regular pentagon, a Cairo/Prismatic pentagon, the unit square, and the unit-area equilateral triangle are $P_1 = 2\sqrt{5} \sqrt[4]{5 - 2\sqrt{5}}$, $P_2 = 2\sqrt{2 + \sqrt{3}}$, $P_3 = 4$ and $P_4 = 3 \sqrt{4/\sqrt{3}}$. Let $\epsilon > 0$. By the definition of $\rho$, there exists an $R_0 > 0$ such that for $R_1 > R_0$ the perimeter to area ratio of the disk of radius $R_1$ is less than $\rho + \epsilon$. By the Truncation Lemma \ref{truncation-lemma} there exists an $R > R_0$ such that the perimeter $P_0(R)$ and area $A_0(R)$ of the tiles completely within the disk of radius $R$ satisfy $$ \frac{P_0}{A_0} < \rho + \epsilon. $$ Then since the efficient, Type 1 non-convex, and Type 2 non-convex pentagons have perimeter at least $P_1, P_3$ and $P_4$ respectively, $$ C_1 P_1 + N_1 P_3 + N_2 P_4 \leq 2\frac{P_0}{A_0}. $$ Therefore $C_1 P_1 + N_1 P_3 + N_2 P_4 \leq 2(\rho + \epsilon)$, which by hypothesis implies $$ C_1 P_1 + N_1 P_3 + N_2 P_4 < P_2 + 2\epsilon. $$ Then, $$ C_1 P_1 + N_1 P_3 + N_2 P_4 < P_2(C_1 + N_1 + N_2) + 2\epsilon, $$ and $$ C_1 > N_1 \frac{P_3 - P_2}{P_2 - P_1} + N_2 \frac{P_4 - P_2}{P_2 - P_1} - \frac{2\epsilon}{P_2 - P_1}. 
$$ Then since $C_1 + N_1 + N_2 = 1$, $$ C_1 > 2.63 N_1 + 13.43 N_2 - 38.63 \epsilon (C_1 + N_1 + N_2), $$ and therefore $$ C_1 > \frac{2.63 - 38.63 \epsilon}{1 + 38.63 \epsilon} N_1 + \frac{13.43 - 38.63 \epsilon}{1 + 38.63 \epsilon} N_2. $$ Choosing any value of $\epsilon < 5.39 \times 10^{-5}$ implies $$ C_1 > 2.6N_1 + 13.4N_2. $$ \end{proof} We extend Proposition \ref{angles-convex-between} to the plane: \begin{proposition} \label{plane-ext-angles-convex-between} A unit-area tiling of the plane by non-convex pentagons and pentagons with angles strictly between $\pi/2$ and $2\pi/3$ has a perimeter ratio more than half the perimeter of a Prismatic pentagon. \end{proposition} \begin{proof} By the Truncation Lemma \ref{truncation-lemma}, we can find some $R$ large enough so that for tiles within a disk $D_R$ of radius $R$, the perimeter per tile is less than half the perimeter of a Prismatic pentagon. Note that the pentagons in $D_R$ will be a finite collection. Assume, on the contrary, that there existed some large $R$ such that a tiling of $D_R$ by non-convex pentagons and efficient pentagons with angles strictly between $\pi/2$ and $2\pi/3$ had perimeter per tile less than half the Prismatic pentagon's. By Proposition \ref{plane-ext-ratio1}, the ratio of the number of efficient pentagons to the number of non-convex pentagons must be greater than 2.6. Since all the angles of the efficient pentagons are strictly between $\pi/2$ and $2\pi/3$, there is at least one non-convex pentagon at each vertex. By definition, a non-convex pentagon has at least one angle greater than $\pi$. Thus at least $1/5$ of the vertices must contain an angle greater than $\pi$. At such vertices there is at most one efficient pentagon. At the remaining vertices, there are at most three efficient pentagons, because their angles are greater than $\pi/2$. At vertices near the boundary, where truncation occurs, these bounds may not hold, as some pentagons may be removed. 
However by the Truncation Lemma \ref{truncation-lemma} for any $\epsilon$ we can find some $R$ such that the number of pentagons being removed is at most $\epsilon A(R)$. Therefore for large $R$ the number of such pentagons is insignificant. Thus the ratio of efficient pentagons to non-convex pentagons is at most $3(4/5)+1(1/5) = 2.6$. This contradicts Proposition \ref{plane-ext-ratio1}, which says the ratio of efficient to non-convex pentagons must be strictly greater than 2.6. \end{proof} We extend Proposition \ref{four-angles-don't-tile} to the plane: \begin{proposition} \label{plane-four-angles-don't-tile} Consider a unit-area tiling of the plane by efficient pentagons and Type 2 non-convex pentagons. Assume that each efficient pentagon has at most four angles which tile with the efficient pentagon's angles. Then the tiling has a perimeter ratio greater than half the perimeter of a Prismatic pentagon. \end{proposition} \begin{proof} At the three angles of a non-convex pentagon which are less than $\pi$, there are at most four efficient pentagons. If there were five or more, then the angles of the efficient pentagon would be too small, in violation of Proposition \ref{minmaxangle}. By the same logic, there are at most two efficient pentagons at the two angles in the non-convex pentagon which are greater than $\pi$. Since the tiling is edge-to-edge, five of the efficient pentagons surrounding a non-convex pentagon appear at two vertices, and we must avoid double counting these. So the maximum number of efficient pentagons surrounding a non-convex pentagon in the tiling is $4(3) + 2(2) - 5 = 11$. Now given $\epsilon > 0$, by Lemma \ref{truncation-lemma} there is a large disk of radius $R$ such that the number of pentagons which intersect the disk but are not contained within it is at most $\epsilon A(R)$. Let $C_1$ and $N_2$ be the fraction of efficient and Type 2 non-convex pentagons with at least one vertex in the disk. 
Note that by hypothesis each efficient pentagon within shares a vertex with at least one non-convex pentagon, and by the conclusion of the first paragraph there are at most 11 efficient pentagons which surround each non-convex pentagon, so we might expect that $C_1 < 11N_2.$ However near the boundary of the disk we may have efficient pentagons within the disk which share a vertex with a non-convex pentagon which is not contained within the disk. There are at most $\epsilon A(R)$ non-convex pentagons of this type, so in fact we have $$ C_1(1 - \epsilon) < 11N_2. $$ Therefore $C_1 < 11N_2/(1 - \epsilon)$. But if $\epsilon < 1 - 11/13.4$ and the perimeter ratio of the tiling is less than or equal to half the perimeter of a Prismatic pentagon this contradicts Proposition \ref{plane-ext-stronger-ratio1} as $C_1 \not> 13.4N_2$. Therefore the tiling has a perimeter ratio greater than half that of a Prismatic pentagon. \end{proof} \section{Projects for Future Study} This section outlines future projects which might be used to remove the convexity assumption in Theorem \ref{chungthm} and prove the main conjecture: \begin{conjt} \label{mainConjt} Perimeter-minimizing planar tilings by unit-area polygons with at most five sides are given by Cairo and Prismatic tiles. \end{conjt} \begin{proj} \emph{Extend the results from Sections 5 and 6 on flat tori to similar results on the plane, considering methods from Morgan \cite{morggeo} and Chung et al. \cite{pen11}. See Section 7 for partial progress.} \end{proj} The following may be a more accessible version of Conjecture \ref{mainConjt}: \begin{projs} \emph{(1) Prove \ref{mainConjt} on flat tori of area $n$ for small $n$, known for $n = 4$ (\cite[Prop. 3.9]{tile11}) and some tori of area 2 (\cite[Prop. 3.3]{tile11}). \newline (2) Prove \ref{mainConjt} for dihedral tilings (the case with Type 1 non-convex pentagons remains open). 
\newline (3) Prove \ref{mainConjt} for tilings of the plane by convex pentagons and Type 2 pentagons.} \end{projs} The following may be helpful in proving some of the above or \ref{mainConjt}: \begin{projs} \emph{(1) Prove that if a tiling by convex and non-convex pentagons has less perimeter per tile than a Cairo tiling, then the convex pentagons must have at least one angle of $90^\circ$ or $120^\circ$. \newline (2) Without using Theorem \ref{chungthm}, show a tiling of a large flat torus by unit-area convex pentagons and unit squares must have more perimeter per tile than one half the Prismatic pentagon's perimeter. \newline (3) Without using Theorem \ref{chungthm}, show a tiling of a large flat torus by unit-area convex pentagons and unit-area equilateral triangles must have more perimeter per tile than one half the Prismatic pentagon's perimeter.} \end{projs}
https://arxiv.org/abs/1908.11683
Identities and Properties of Multi-Dimensional Generalized Bessel Functions
The Generalized Bessel Function (GBF) extends the single variable Bessel function to several dimensions and indices in a nontrivial manner. Two-dimensional GBFs have been studied extensively in the literature and have found application in laser physics, crystallography, and electromagnetics. In this article, we document several properties of $m$-dimensional GBFs including an underlying partial differential equation structure, asymptotics for simultaneously large order and argument, and analysis of generalized Neumann, Kapteyn, and Schlömilch series. We extend these results to mixed-type GBFs where appropriate.
\section{Introduction} Bessel functions \cite{watson22} are pervasive in mathematics and physics and are particularly important in the study of wave propagation. Bessel functions were first studied in the context of solutions to a second order differential equation known as Bessel's equation: \begin{equation} x^2f''(x)+xf'(x)+(x^2-n^2)f(x)=0. \label{eq:besselEquation} \end{equation} This equation is parameterized by the value $n$. Solutions to this equation are known as Bessel functions of order $n$. Since equation \eqref{eq:besselEquation} is a second order linear differential equation, there exist two linearly independent solutions. These solutions, denoted $J_n (x)$ and $Y_n (x)$, are referred to as Bessel functions of the first and second kind respectively, where $J_n (0)$ is finite and $Y_n (x)$ has a singularity at $x=0$. These functions commonly arise in physical problems involving cylindrical equations including the Laplace, Helmholtz, and Schr\"{o}dinger equations \cite{farlow93}. For integer order $n$, Bessel functions of the first kind admit an integral representation: \begin{equation} J_n(x)=\frac{1}{2\pi}\int _{-\pi}^\pi e^{i(n\theta -x\sin\theta )}d\theta . \label{eq:bfInt} \end{equation} Recognizing equation \eqref{eq:bfInt} as the coefficients to a complex Fourier series expansion, we recover the following equation: \begin{equation} e^{-ix\sin{\theta}}=\sum _{n=-\infty}^\infty J_n(x)e^{-in\theta} . \label{eq:jacobiAnger} \end{equation} Equation \eqref{eq:jacobiAnger} is referred to as a Jacobi-Anger expansion \cite{erdelyi53} and essentially states that the complex Fourier series coefficients for the function $e^{-ix\sin\theta}$ are in fact the $n^{th}$ order cylindrical Bessel functions of the first kind. The Bessel functions are oscillatory functions of the variable $x$ with even and odd symmetry in $x$ for even and odd orders $n$ respectively. 
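As a numerical aside (ours, not from the text), the integral representation \eqref{eq:bfInt} can be checked against the classical power series for $J_n$; the midpoint rule is spectrally accurate here because the integrand is smooth and $2\pi$-periodic:

```python
import math, cmath

def J_int(n, x, N=2000):
    """J_n(x) via (1/2pi) * Int_{-pi}^{pi} exp(i(n t - x sin t)) dt."""
    h = 2 * math.pi / N
    total = 0j
    for k in range(N):
        t = -math.pi + (k + 0.5) * h   # midpoint rule nodes
        total += cmath.exp(1j * (n * t - x * math.sin(t)))
    return (total * h / (2 * math.pi)).real

def J_series(n, x, terms=30):
    """Classical power series sum_k (-1)^k (x/2)^(n+2k) / (k! (n+k)!)."""
    return sum((-1) ** k * (x / 2) ** (n + 2 * k)
               / (math.factorial(k) * math.factorial(n + k))
               for k in range(terms))

assert abs(J_int(0, 1.5) - J_series(0, 1.5)) < 1e-10
assert abs(J_int(3, 2.0) - J_series(3, 2.0)) < 1e-10
```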
Now consider a complex exponential function whose argument itself is represented as a Fourier sine series \cite{HagueDiss, HagueI}, represented as: \begin{equation} s(\theta )=\exp \left[ -i\sum_{k=1}^m x_k\sin{k\theta}\right] \end{equation} Further suppose we wish to find the complex Fourier series coefficients for that function. By leveraging a more general form of the Jacobi-Anger expansion, we may write the following representation of $s(\theta )$ \cite{dattoli96} as \begin{equation} s(\theta )=\sum _{n=-\infty}^\infty J_n^{\bf p}({\bf x})e^{-in\theta} \end{equation} where ${\bf x}:\{ 1,...,m\}\rightarrow\mathbb R$, ${\bf p}:\{ 1,...,m\}\rightarrow\mathbb N$ with ${\bf p}=\{ p_1,...,p_m\}$, and \begin{equation} J_n^{\bf p}({\bf x})=\frac{1}{2\pi}\int _{-\pi}^\pi\exp \left[ i\left( n\theta -\sum_{k=1}^m x_k\sin{p_k\theta}\right)\right] d\theta . \label{eq:equation6} \end{equation} We call $J_n^{p_1,\dots, p_m}({\bf x}) = J_n^{\bf p}({\bf x})$ in \eqref{eq:equation6} an $m$-dimensional Generalized Bessel Function (GBF) with indices ${\bf p}$ and require that $\text{gcd}({\bf p})=1$ to avoid trivial simplifications. If any of the arguments in ${\bf x}$ are set to zero, then \eqref{eq:equation6} simplifies to a lower dimensional GBF. We can extend this definition to the mixed-type generalized Bessel functions (MT-GBF) such that the argument also includes cosines \cite{dattoli96}: \begin{equation} J_n({\bf x};{\bf y})= \frac{1}{2\pi}\int _{-\pi}^\pi\exp \left[ i\left( n\theta -\sum _{k=1}^m (x_k\sin{k\theta}+y_k\cos{k\theta})\right)\right] d\theta . \end{equation} An infinite-dimensional variant of the MT-GBF (and the GBF) can be constructed by requiring ${\bf x},{\bf y}\in\ell ^2(\mathbb N )$ such that the integral converges \cite{dattoli1997}. \begin{figure}[h] \includegraphics[width=1.0\textwidth]{./GBF_Figure2_Contour.pdf} \caption{Plots of $|J_n^{1,2}(x,y)|$ for $-20<x,y<20$ for (a) $n=0$, (b) $n=1$, (c) $n=2$ and (d) $n=3$. 
Much like their one-dimensional counterparts, the GBFs are oscillatory in both the $x$ and $y$ dimensions. They additionally display symmetries over the $x$ and $y$ axes.} \label{fig:GBF2} \end{figure} \begin{figure}[h] \includegraphics[width=1.0\textwidth]{./GBF_Figure3_Contour.pdf} \caption{Plots of $|J_n^{p,q}(x,y)|$ for $-20<x,y<20$ for (a) $n=0$, $p=1$, $q=3$, (b) $n=1$, $p=4$, $q=5$, (c) $n=2$, $p=2$, $q=3$, (d) $n=3$, $p=4$, $q=7$. These GBFs with their more general indices $p$ and $q$ still possess similar oscillatory properties to their $J_n^{1,2}$ counterparts, with more complex symmetry and structure.} \label{fig:GBF3} \end{figure} GBFs first appeared in the literature in the early 1900s as a simple but isolated extension of the Bessel functions \cite{appell15}. The 2-D GBFs were extensively utilized in the physics community \cite{Reiss_1962, nikishov1964quantum, Reiss_1980, Reiss_1983} and their mathematical properties were thoroughly explored by the efforts of \cite{dattoli1990theory, dattoli1991theory, dattoli1992linear}. These functions were re-examined and further generalized to the $m$-dimensional and infinite-dimensional \cite{dattoli1995, dattoli96, dattoli1997} cases as more of an effort was made to connect the GBF to other notable special functions \cite{dattoli1992generating, andrews85}. Additionally, the authors have taken an interest in the GBFs as they provide insight into the analysis and design of frequency modulated signals for radar and sonar applications \cite{HagueDiss, HagueI, Hague_MTFFM, hague2020adaptive}. However, many of the well understood properties and identities of one-dimensional Bessel functions have not been extended to the $m$-dimensional GBFs. In this paper we concentrate on three fundamental properties of GBFs that, to the best of the authors' knowledge, have not been fully established. 
These are (1) finding a second order linear Partial Differential Equation (PDE) of which the GBF is a solution, (2) deriving asymptotic approximations of GBFs with arbitrary dimension for large arguments and/or orders with general indices ${\bf p}$, and (3) establishing analytical expressions for infinite summations of GBFs such as the Neumann, Kapteyn and Schl\"{o}milch series. These three fundamental properties for 1-D Bessel functions have found extensive use in physics. It is the hope of the authors that extending these properties to $m$-dimensional GBFs could provide further insight into problems of interest to the physics community at large. While the Bessel function was defined as a solution to a differential equation, it is more difficult to discern whether the GBF satisfies a similar partial differential equation. Previous efforts \cite{dattoli96} have shown that the $J_n^{1,m}$ GBF satisfies an order $m^2$ linear PDE which bears no resemblance to Bessel's differential equation. Work by the authors in \cite{kuklinski18} determined that the GBF does not solve a second order linear PDE but certain modifications of GBFs do; in particular, the MT-GBF satisfies a first order linear Schr\"{o}dinger-type equation. A similar Schr\"{o}dinger-type equation result was obtained by \cite{cesarano} for the special case of 2-dimensional 1-parameter GBFs. However, this PDE only describes the simple cross sectional structure between two variables $x_k$ and $y_k$ and ignores the more intricate interplay among the ${\bf x}$ variables alone. In this paper, we demonstrate that it is indeed possible to write a second order linear PDE for $J_n^{1,2}\left(x,y\right)$ of which Bessel's original differential equation is a special case. Most analysis of GBFs has been limited to the two dimensional case, and for a complete asymptotic development, even further limited to the case of $(p_1,p_2)=(1,2)$ \cite{korsch06}. 
Using techniques from algebraic geometry \cite{cox06}, we describe a general method to analytically compute bifurcation surfaces for both GBFs with large argument and for GBFs with simultaneously large order and argument. These bifurcation surfaces are revealed to be piecewise algebraic. We conclude this paper with a discussion of various summations involving MT-GBFs. In \cite{watson22}, Neumann, Kapteyn, and Schl\"{o}milch series were developed for one-dimensional Bessel functions. We consider generalizations of these series where the MT-GBF replaces the Bessel function. It is possible to write closed forms for the moments of the Neumann and Kapteyn series, while for the Schl\"{o}milch series we can compute smoothness boundaries. Both the region of convergence for the Kapteyn series and the smoothness boundaries of the Schl\"{o}milch series are connected in an algebraic sense to the region of polynomial decay for MT-GBFs of increasing order and argument. The rest of the paper is organized as follows: in Section \ref{sec:PDE} we write linearly independent PDE representations for $J_n^{1,2}(x,y)$ and note a possible application to computing level sets. In Section 3 we discuss asymptotics of the GBF and provide closed form results. We then develop closed forms for generalized Neumann and Kapteyn series in Section 4, and also describe smoothness boundaries for the generalized Schl\"{o}milch series. Finally, Section 5 concludes the paper. \section{Partial Differential Equations Representation} \label{sec:PDE} In this section, we demonstrate that the index (1,2) GBF $J_n^{1,2}(x,y)$ solves a set of linearly independent partial differential equations. These equations can be derived from a combination of trigonometric identities and an integral identity. \subsection{Partial Differential Equations for the 2-D GBF and m-D MT-GBF} If $f:\mathbb R\rightarrow\mathbb R$ is a differentiable periodic function with period $P$, then \begin{equation} \int _{x_0}^{x_0+P}f'(x)dx=0. 
\label{eq:equation8} \end{equation} Let $h(\theta )=n\theta -x\sin\theta -y\sin 2\theta$, $f(x,y)=2\pi J_n^{1,2}(x,y)=\int _{-\pi}^\pi e^{ih(\theta )}d\theta$, and $g(x,y)=\int _{-\pi}^\pi \cos\theta e^{ih(\theta )}d\theta$. To take derivatives of these expressions, we can interchange the derivative and integral operators; computing these derivatives involves integrating products of trigonometric functions and $e^{ih(\theta )}$. This allows us to exploit trigonometric identities to relate the derivatives of $f$ and $g$; for instance by $\sin 2\theta=2\sin\theta\cos\theta$ we have $f_y=2g_x$. We apply equation \eqref{eq:equation8} to $e^{ih(\theta )}$, $(\sin\theta )e^{ih(\theta )}$, and $(\cos\theta )e^{ih(\theta )}$ to generate the partial differential equations. The first identity becomes \begin{equation} 0=nf-xg-2y\int _{-\pi}^\pi (\cos 2\theta )e^{ih(\theta )}d\theta \end{equation} Using the identity $\cos 2\theta =1-2\sin^2\theta$, we have \begin{equation} 0=(n-2y)f-xg-4yf_{xx} \label{eq:equation10} \end{equation} This allows us to represent $g$ in terms of $f$ and $f_{xx}$. Applying equation \eqref{eq:equation8} to $(\sin\theta )e^{ih(\theta )}$ leads to the following formula: \begin{equation} 0=g-nf_x+\frac{1}{2}xf_y-2iy\int_{-\pi}^\pi(\sin\theta \cos 2\theta )e^{ih(\theta )}d\theta \end{equation} Using the identity $\sin\theta\cos 2\theta =\sin 2\theta\cos\theta -\sin\theta$, we can write: \begin{equation} 0=g-(n+2y)f_x+\frac{1}{2}xf_y+2yg_y \end{equation} By taking a derivative of this equation with respect to $x$ and using the identities $f_y=2g_x$ and $f_{yy}=2g_{xy}$, we can write the first partial differential equation: \begin{equation} (n+2y)f_{xx}-yf_{yy}-\frac{x}{2}f_{xy}-f_y=0 \label{eq:eq2_6} \end{equation} We can derive the second partial differential equation by applying equation \eqref{eq:equation8} to $(\cos\theta )e^{ih(\theta )}$. 
This leads us to the following formula: \begin{equation} 0=f_x-ng+x\int _{-\pi}^\pi (\cos ^2\theta )e^{ih(\theta )}d\theta + 2y\int _{-\pi}^\pi (\cos\theta\cos 2\theta )e^{ih(\theta )}d\theta \end{equation} By using trigonometric substitutions, we can write these integrals in terms of the derivatives of $f$ and $g$: \begin{equation} 0=xf_{xx}+f_x+xf-(n-2y)g+2yf_{xy} \end{equation} Using the previous substitution for $g$ in equation \eqref{eq:equation10}, we have: \begin{equation} 0=[x^2+4y(n-2y)]f_{xx}+2xyf_{xy}+ xf_x+[x^2-(n-2y)^2]f \label{eq:diffEQ} \end{equation} There are several promising similarities between equation \eqref{eq:diffEQ} and Bessel's differential equation in equation \eqref{eq:besselEquation}, particularly that a polynomial in $(x,y)$ of the appropriate degree multiplies each corresponding derivative, and that the coefficient of $f$ depends on $n^2$. Moreover, substituting $y=0$ recovers Bessel's differential equation. We can use a similar procedure to find linearly independent PDEs which govern MT-GBFs of arbitrary dimension. Note that by $\sin^2\theta +\cos ^2\theta =1$, we have \begin{equation} \frac{\partial ^2J_n}{\partial x_k^2}+\frac{\partial ^2J_n}{\partial y_k^2}+J_n=0 \end{equation} This shows that every $(x_k,y_k)$ plane has circular level sets when all other variables are held fixed. It was shown in \cite{kuklinski18, cesarano} that the MT-GBF also satisfies a Schr\"{o}dinger-type PDE: \begin{equation} nJ_n=i\sum _{k=1}^m k\left( x_k\frac{\partial J_n}{\partial y_k}-y_k\frac{\partial J_n}{\partial x_k}\right) \end{equation} A complete list of linearly independent second order PDE identities for the $J_n^{1,2}$ MT-GBF could be compiled using equation \eqref{eq:equation8}; however, since there are four input variables, we omit this list for brevity. 
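Equation \eqref{eq:diffEQ} can be verified numerically by differentiating under the integral sign, since each $\partial /\partial x$ inserts a factor $-i\sin\theta$ into the integrand and each $\partial /\partial y$ a factor $-i\sin 2\theta$. A sketch (the grid size and test points are arbitrary choices):

```python
import numpy as np

def pde_residual(n, x, y, num=4096):
    # f = J_n^{1,2}(x, y) and its partials, all computed by trapezoidal
    # quadrature of the defining integral over a uniform grid on [-pi, pi)
    t = np.linspace(-np.pi, np.pi, num, endpoint=False)
    e = np.exp(1j * (n * t - x * np.sin(t) - y * np.sin(2 * t)))
    f = np.mean(e)
    f_x = np.mean(-1j * np.sin(t) * e)
    f_xx = np.mean(-np.sin(t) ** 2 * e)
    f_xy = np.mean(-np.sin(t) * np.sin(2 * t) * e)
    # left-hand side of the generalized Bessel equation
    return ((x**2 + 4 * y * (n - 2 * y)) * f_xx + 2 * x * y * f_xy
            + x * f_x + (x**2 - (n - 2 * y) ** 2) * f)

# the residual vanishes at arbitrary points; y = 0 recovers Bessel's equation
worst = max(abs(pde_residual(n, x, y))
            for (n, x, y) in [(0, 1.3, 0.7), (2, 3.1, -1.2), (5, 0.4, 2.2)])
```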
\subsection{A Comment on Level Sets} If we could find another second order linear PDE which $J_n^{1,2}(x,y)$ solves independent of \eqref{eq:eq2_6} and \eqref{eq:diffEQ}, then it would be possible to parameterize its vanishing level sets using a second order variant of the method of characteristics \cite{evans98}. Suppose we have smooth functions $(x(t),y(t))$ such that $f(x(t),y(t))=J_n^{1,2}(x(t),y(t))=0$. Then $x'(t)f_x+y'(t)f_y=0$, and taking a second derivative of this equation gives \begin{equation} (x')^2f_{xx}+2x'y'f_{xy}+(y')^2f_{yy}+x''f_x+y''f_y=0 \end{equation} This allows us to write the following matrix representation of the system: \begin{equation} \begin{bmatrix} 2(n+2y) & -x & -2y & 0 & -2 \\ x^2+4y(n-2y) & 2xy & 0 & x & 0 \\ (x')^2 & 2x'y' & (y')^2 & x'' & y'' \\ 0 & 0 & 0 & x' & y'\end{bmatrix}\begin{bmatrix} f_{xx} \\ f_{xy} \\ f_{yy} \\ f_x \\ f_y\end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0\end{bmatrix} \end{equation} We could then write a system of second order nonlinear ordinary differential equations with initial conditions $(x(0),y(0),x'(0),y'(0))=(j_{n,k},0,0,1)$ or for even $n$ $(x(0),y(0),x'(0),y'(0))=(0,j_{n/2,k},1,0)$ where $j_{n,k}$ is the $k^\text{th}$ root of the one-dimensional Bessel function $J_n(x)$. It may be possible to determine the topology of the level sets of $J_n^{1,2}(x,y)$ by analyzing this ODE system. Note in Figure \ref{fig:GBF2} that for even $n$, the zero surfaces intersecting the $y$-axis appear to be closed loops, while for $n$ odd there appears to be a single infinite contour winding about the $y$-axis. This is a potential topic for future investigation. \section{Asymptotic Properties} In this section, we characterize the bifurcation surfaces of the GBF using the method of stationary phase \cite{stein93}. Consider the integral: \begin{equation} I(t)=\int_a^b g(\theta )e^{itf(\theta )}d\theta \label{eq:equation21} \end{equation} where $g$ and $f$ are smooth functions compactly supported in the interval. 
Consider the set $S=\{ x\in (a,b):f'(x)=0,f''(x)\ne 0\}$ whose members we refer to as \emph{points of stationary phase}. If this set is nonempty, then we can make an approximation on $I(t)$ for large $t$: \begin{equation} I(t)=\sum _{x\in S}g(x)e^{itf(x)}\sqrt{\frac{\pi}{t|f''(x)|}}(1\pm i)+O(t^{-1}) \end{equation} Here, the phase of the summand (i.e. the sign of $1\pm i$) is determined by the sign of $f''(x)$. If the set of stationary phase points is empty, then $I(t)$ decays superpolynomially, and under minor conditions, exponentially. Some care must be taken that there are no stationary phase points at the endpoints of the integral, and that the stationary phase points do not give the phase function a vanishing second derivative. The latter type of stationary phase points are referred to as \emph{critical points}, and crossing over these points changes the structure of the approximation (i.e. changing the number of stationary phase points in the region of integration). Compare the integral in equation \eqref{eq:equation21} with our definition of the GBF in equation \eqref{eq:equation6}. For an $m$-dimensional GBF with fixed indices, there are $m+1$ terms which can vary (the order and $m$ input arguments), any combination of which we can choose to be large and apply a stationary phase approximation to. In this section we will consider two cases; we first let the arguments be large relative to the order, and then we let the arguments and the order be simultaneously large. In practice, when the indices are arbitrary integers, these approximations can be difficult to analytically display as they involve solving high order polynomials. However, we can analytically find the locus of critical points which trace out \emph{bifurcation surfaces} in $m$-dimensional space. For relatively small order these bifurcation surfaces become two linear $m-1$ dimensional surfaces, but for large order the bifurcation surfaces include an additional algebraic surface. 
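As a one-dimensional sanity check of this approximation: for $J_0(t)=\frac{1}{2\pi}\int _{-\pi}^\pi e^{-it\sin\theta}d\theta$ the phase $f(\theta )=-\sin\theta$ has stationary points $\theta =\pm\frac{\pi}{2}$ with $f(\pm\frac{\pi}{2})=\mp 1$ and $f''(\pm\frac{\pi}{2})=\pm 1$, and summing the two contributions reproduces the classical large-argument asymptotic $\sqrt{2/(\pi t)}\cos (t-\pi /4)$. A sketch:

```python
import numpy as np
from scipy.special import jv

def j0_stationary_phase(t):
    # the two stationary-phase contributions at theta = +/- pi/2 combine
    # into the familiar leading-order asymptotic of J_0 for large t
    return np.sqrt(2.0 / (np.pi * t)) * np.cos(t - np.pi / 4)

# the error is one power of t smaller than the O(t^{-1/2}) leading term
errs = [abs(j0_stationary_phase(t) - jv(0, t)) for t in (20.0, 50.0, 200.0)]
```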
The collection of bifurcation surfaces of the MT-GBF generally contains only higher order algebraic surfaces. These more complex algebraic surfaces satisfy systems of polynomial equations which can be solved using the Sylvester matrix determinant. If we have a system of polynomial equations $f(x)=0$ and $g(x)=0$ with coefficients in the set $\{ a_1,...,a_n\}$ and which have no common factors, then there exists a nontrivial multivariate polynomial called the \emph{resultant} which satisfies $\text{Res}(f,g;x)=F(a_1,...,a_n )=0$ \cite{cox06}. If $f(x)=\sum _{k=0}^n a_kx^k$ and $g(x)=\sum _{k=0}^n b_kx^k$, the resultant may be written as the determinant of the Sylvester matrix: \begin{equation} \text{Res}(f,g;x)=\det\begin{bmatrix} a_n & a_{n-1} & \hdots & a_0 & 0 & ~ & ~ & ~ \\ b_n & b_{n-1} & \hdots & b_0 & 0 & \ddots & ~ & ~ \\ 0 & a_n & \hdots & a_1 & a_0 & \ddots & ~ & ~ \\ 0 & b_n & \hdots & b_1 & b_0 & \ddots & ~ & ~ \\ ~ & ~ & \ddots & \ddots & \ddots & \ddots & ~ & ~ \\ ~ & ~ & ~ & ~ & ~ & b_n & \hdots & b_0\end{bmatrix} \end{equation} Suppose we have a system of $m+1$ polynomial equations in $m$ variables such that $f_{1,k}(x_1,...,x_m)=0$ for $k\le m+1$. Then we can generate a system of $m$ polynomial equations $f_{2,k}(x_1,...,x_{m-1})=\text{Res}(f_{1,k},f_{1,m+1};x_m)$ and continue iteratively until there is one polynomial with none of the $\{ x_k\}$ variables. This polynomial is the resultant of the entire system. One caveat of this method is that the equation $\text{Res}(f,g)=F(a_1,...,a_n)=0$ will include solutions $\{a_1,...,a_n\}$ which do not simultaneously satisfy $f(x)=0$ and $g(x)=0$ for some $x$, whereas if the system is satisfied for some $x$, the coefficients must solve $F(a_1,...,a_n )=0$. For example, consider the system $f(x)=ax+b+1=0$ and $g(x)=cx+d=0$. For this system to be simultaneously satisfied for some $x$, we must have $F(a,b,c,d)=ad-bc-c=0$. 
The trivial solution $\{ a,b,c,d\} =\{ 0,0,0,0\}$ satisfies this equation but clearly does not satisfy the system $f\left(x\right) = g\left(x\right) = 0$ for any $x$. Moreover, it will often occur that we would like for the simultaneous solution $x$ to be real or otherwise satisfy some chosen constraint. Only a portion of the bifurcation curve will correspond to this situation, and this information is not encoded in the resultant. We will examine these cases as they arise. We also note that while we can solve for the bifurcation surfaces analytically, these do not give a complete asymptotic description of the GBF. This would require us to solve for the points of stationary phase, which satisfy a polynomial of degree $\text{max }{\bf p}$ in $\cos\theta$; this is typically not analytically possible. \subsection{Large Arguments} \label{subsec:largeArgs} We first consider bifurcation curves of the GBF with large arguments relative to the order, as first elucidated by Korsch et al. \cite{korsch06}. Otherwise stated, let us consider the function $J_n^{{\bf p}} (tx_1,...,tx_m )$ for large values of $t$. We write this function in a stationary phase form \begin{equation} J_n^{{\bf p}}(tx_1,...,tx_m)=\frac{1}{2\pi}\int _{-\pi}^\pi e^{in\theta}\exp\left[ -it\sum _{k=1}^m x_k\sin{p_k\theta}\right] d\theta \label{eq:equation25} \end{equation} where $g(\theta )=e^{in\theta}$ and $f(\theta )=-\sum _{k=1}^m x_k\sin{p_k\theta}$. If $(x_1,...,x_m)$ is an element of the bifurcation surface, there must exist some $\theta\in [-\pi ,\pi ]$ such that $f'(\theta )=f''(\theta )=0$. That is, \begin{align} f'(\theta )&=-\sum _{k=1}^m x_kp_k\cos{p_k\theta}=0, \\ f''(\theta )&=\sum _{k=1}^m x_kp_k^2\sin{p_k\theta}=0. \label{eq:equation26} \end{align} We note that these functions can be written as polynomials in $\cos \theta$ and $\sin \theta$. 
In particular, the Chebyshev polynomials of the first and second kinds respectively satisfy the identities \begin{align} T_n(\cos\theta )&=\cos{n\theta}, \\ U_{n-1}(\cos\theta )&=\frac{\sin{n\theta}}{\sin\theta}. \label{eq:equation27} \end{align} Using these identities, we can rewrite the system of equations \eqref{eq:equation26}: \begin{align} \sum _{k=1}^m x_kp_k T_{p_k}(\cos\theta )&=0, \\ \sin\theta\left[\sum _{k=1}^m x_kp_k^2U_{p_k-1}(\cos\theta )\right] &=0. \label{eq:equation28} \end{align} The right equation in \eqref{eq:equation28} is trivially satisfied if $\sin\theta =0$, i.e., when $\theta =0$ or $\theta =\pi$. These two solutions lead to the following representations of bifurcation surfaces when plugged back into the left equation of \eqref{eq:equation28}: \begin{align} \sum _{k=1}^m x_kp_k&=0, \\ \sum _{k=1}^m x_kp_k(-1)^{p_k}&=0. \label{eq:equation29} \end{align} Notice that if all of the $p_k$ are odd (they cannot all be even or else their greatest common divisor would be at least 2), then both equations of \eqref{eq:equation29} denote the same surface. In this case, letting $\theta =\frac{\pi}{2}$ or $\theta =-\frac{\pi}{2}$ trivially satisfies the left equation of \eqref{eq:equation28}. Substituting these values into the right equation of \eqref{eq:equation28} leads to the following alternate surface: \begin{equation} \sum _{k=1}^m x_kp_k^2 (-1)^\frac{p_k+1}{2}=0. \end{equation} We illustrate this dichotomy in behavior in Figure \ref{fig:GBF4}; notice that the bifurcation curves of $J_n^{1,2}(x,y)$ and $J_n^{2,3}(x,y)$ are symmetric about the coordinate axes, but the bifurcation curves of $J_n^{1,3}(x,y)$ and $J_n^{3,5}(x,y)$ have only rotational symmetry about the origin. For the MT-GBF, there is no factor of $\sin\theta$ in $f''(\theta )$, so the bifurcation surfaces will be higher order algebraic surfaces. 
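Before moving to the MT-GBF case, the linear surfaces in \eqref{eq:equation29} are easy to confirm symbolically; a sketch using sympy for the arbitrarily chosen index pair ${\bf p}=(2,3)$ (which has $\gcd =1$):

```python
import sympy as sp

theta, x1, x2 = sp.symbols('theta x1 x2', real=True)
p1, p2 = 2, 3
f = -(x1 * sp.sin(p1 * theta) + x2 * sp.sin(p2 * theta))
fp, fpp = sp.diff(f, theta), sp.diff(f, theta, 2)

# f''(theta) vanishes identically at theta = 0 and theta = pi, so the
# degeneracy condition there reduces to f'(theta) = 0, which is exactly
# the pair of planes x1*p1 + x2*p2 = 0 and x1*p1*(-1)^p1 + x2*p2*(-1)^p2 = 0
cond0 = sp.simplify(fp.subs(theta, 0))       # proportional to 2*x1 + 3*x2
condpi = sp.simplify(fp.subs(theta, sp.pi))  # proportional to 2*x1 - 3*x2
```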
The bifurcation surface of the $J_n^{1,2}(tx_1,tx_2;ty_1,ty_2)$ MT-GBF is an eighth order algebraic surface whose equation is too large to display here. \begin{figure}[!h] \includegraphics[width=1.0\textwidth]{./GBF_Figure4.pdf} \caption{Plots of $|J_0^{p,q}(x,y)|$ for $-20<x,y<20$ for (a) $p=1$, $q=2$, (b) $p=2$, $q=3$, (c) $p=1$, $q=3$ and (d) $p=3$, $q=5$. Notice that the top two figures, each containing an even index, have bifurcation curves which are symmetric about both coordinate axes. Meanwhile, the bottom two figures, both with odd indices, contain bifurcation curves with only rotational symmetry.} \label{fig:GBF4} \end{figure} \subsection{Large Order and Arguments} \label{subsec:largeOrdandArgs} We now compute bifurcation curves of the GBF with simultaneously large order and large arguments, i.e., of $J_{tn}^{\bf p} (t{\bf x})$ for large values of $t$. In these cases, there is a region of exponential decay containing the origin which grows linearly with $n$. The bifurcation surfaces will bound this region. In the stationary phase representation, we have $f(\theta )=n\theta -\sum_{k=1}^m x_k\sin p_k\theta$ such that the bifurcation curves now satisfy: \begin{align} f'(\theta )&=n-\sum _{k=1}^m x_kp_k\cos{p_k\theta}=0, \\ f''(\theta )&=\sum _{k=1}^m x_kp_k^2\sin{p_k\theta}=0. \label{eq:equation30} \end{align} As stated previously, we will be able to write a multivariate polynomial equation in $\{ x_k\}$, a subset of whose solutions satisfy equations \eqref{eq:equation30} for some $\theta$. By factoring out $\sin\theta$ from the second equation, we are able to derive two linear bifurcation surfaces: \begin{align} \sum _{k=1}^m x_kp_k&=n, \\ \sum _{k=1}^m x_kp_k(-1)^{p_k}&=n. \end{align} We refer to these solutions as the \emph{trivial solutions}. Notice that unlike the small order case of the previous section, these two surfaces are distinct even if ${\bf p}$ contains only odd indices, in which case they are parallel. 
If ${\bf p}=\{ {\bf p_e},{\bf p_o}\}$ corresponds to the even and odd indices respectively, then the intersection of the trivial surfaces can be represented as the intersection of two orthogonal surfaces: \begin{align} \sum _{k\in {\bf p_e}}x_kp_k&=0, \\ \sum _{k\in {\bf p_o}}x_kp_k&=n. \end{align} The nontrivial bifurcation surfaces can be computed using the resultant as described at the beginning of the section. For instance, the nontrivial bifurcation curve of $J_{tn}^{1,2}(tx,ty)$ becomes \begin{equation} x^2+32y^2+16yn=0 \label{eq:equation33} \end{equation} which is the result obtained by \cite{lotstedt09}. We plot the bifurcation curves of this GBF in Figure \ref{fig:GBF5}. Notice here that the upper section of the ellipse as predicted by equation \eqref{eq:equation33} does not actually behave as a bifurcation curve, i.e. it subdivides a coherent region of exponential decay. We remedy this discrepancy by noting that the simultaneous solutions of equation \eqref{eq:equation30} must satisfy $|\cos\theta |\le 1$, otherwise $\theta$ is not in the region of integration. In this example, this implies that we must have $\left\lvert\frac{x}{8y}\right\rvert\le 1$, or equivalently that the section of the ellipse which lies above the line $y=-\frac{n}{6}$ is not truly part of the bifurcation curve. \begin{figure}[!h] \includegraphics[width=1.0\textwidth]{./GBF_Figure5_Orig.pdf} \caption{Plot of $|J_{40}^{1,2}(x,y)|$ for $-100<x,y<100$ (a) and its corresponding bifurcation curves (b). Notice that crossing the upper part of the ellipse does not lead to a difference in asymptotic behavior.} \label{fig:GBF5} \end{figure} Since previous asymptotic analysis of GBFs \cite{korsch06, lotstedt09} was dependent on explicitly solving for stationary phase points, those authors could only consider the $J_{tn}^{1,2}(tx,ty)$ GBF, because there the stationary phase points satisfy a quadratic function in $\cos\theta$. 
However, the resultant allows us to solve for bifurcation curves of GBFs with arbitrary indices. For example, the nontrivial bifurcation curve of $J_{tn}^{1,3}(tx,ty)$ becomes \begin{equation} (x-9y)^3+81yn^2=0 \end{equation} This representation overstates the true bifurcation curve, and we can show that only the section of this curve which satisfies $\frac{1}{4}-\frac{x}{36y}\le 1$ acts as a legitimate bifurcation curve, as shown in Figure \ref{fig:GBF6}. We could additionally compute bifurcation surfaces of the MT-GBF in this way, but the equations would be far too large to display concisely. \begin{figure}[!h] \includegraphics[width=1.0\textwidth]{./GBF_Figure6_Orig.pdf} \caption{Plots of $|J_{40}^{1,3}(x,y)|$ for $-100<x,y<100$ (a) and its corresponding bifurcation curves (b). Notice that crossing the central part of the cubic curve does not lead to a change in asymptotic behavior.} \label{fig:GBF6} \end{figure} \section{MT-GBF Series} In this section, we generalize various infinite sums involving the one-dimensional Bessel function to the MT-GBF. The techniques used to compute these sums are not unique to the dimension of the Bessel function and are easily extended to the MT-GBF. For the generalized Neumann and Kapteyn series, we use generating functions to compute $\ell^\text{th}$ moments. These moments could then be used to approximate series involving arbitrary coefficients. The moments of the Schl\"{o}milch series are not so easily calculated even in the one-dimensional case. However, a distinguishing feature of this series in one dimension is that it is only piecewise smooth; the boundaries of these pieces occur at $x=2\pi n$ for $n\in\mathbb Z$. We document a similar property in the GBF case and compute the smoothness boundaries using the resultant of the previous section. 
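The resultant eliminations behind these curves are small enough to reproduce symbolically; a sketch using sympy (with the substitution $c=\cos\theta$) recovering equation \eqref{eq:equation33} and the cubic curve above, each up to a spurious polynomial factor of the kind discussed earlier:

```python
import sympy as sp

x, y, n, c = sp.symbols('x y n c')

# J_{tn}^{1,2}: f'(theta) = 0 gives x*c + 2*y*(2*c**2 - 1) = n, and the
# sin(theta)-free factor of f''(theta) = 0 is x + 8*y*c = 0
res12 = sp.resultant(x * c + 2 * y * (2 * c**2 - 1) - n, x + 8 * y * c, c)

# J_{tn}^{1,3}: cos(3*theta) = 4*c**3 - 3*c and sin(3*theta)/sin(theta)
# = 4*c**2 - 1 (the Chebyshev polynomial U_2)
res13 = sp.resultant(x * c + 3 * y * (4 * c**3 - 3 * c) - n,
                     x + 9 * y * (4 * c**2 - 1), c)

curve12 = x**2 + 32 * y**2 + 16 * y * n        # equation (33)
curve13 = (x - 9 * y) ** 3 + 81 * y * n**2
```

Squaring in the comparison below makes the check insensitive to the overall sign convention of the Sylvester determinant; the spurious factors ($-4y$ and $576y^2$ respectively) are exactly the extraneous-solution artifacts noted above.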
\subsection{Generalized Neumann Series} We first present a compact method to compute the moments of the following MT-GBF based Neumann series \begin{align} f_{1,\ell}\left({\bf x};{\bf y}\right) &=\sum _{n=-\infty}^\infty n^{\ell}J_n({\bf x};{\bf y}) \\ f_{2,\ell}\left({\bf x};{\bf y}\right) &=\sum _{n=-\infty}^\infty n^{\ell}J_n({\bf x};{\bf y})^2 \\ f_{3,\ell}\left({\bf x};{\bf y}\right) &=\sum _{n=-\infty}^\infty n^{\ell}\left|J_n({\bf x};{\bf y})\right|^2. \end{align} These derivations will primarily use the MT-GBF Jacobi-Anger identity as well as the derivative recursion of the MT-GBF \begin{align} \dfrac{\partial J_n}{\partial x_k} &= \frac{1}{2}\left(J_{n-k}-J_{n+k}\right) \\ \dfrac{\partial J_n}{\partial y_k} &= \frac{1}{2i}\left(J_{n-k}+J_{n+k}\right). \end{align} We will also utilize the square MT-GBF identity \begin{align} J_n\left(x_k;y_k\right)^2 &= \frac{1}{2\pi}\int_{-\pi}^{\pi}J_{2n}\left(2x_k\cos k\theta; 2y_k\cos k\theta\right)d\theta \label{eq:GBF_Square_1} \\ \left|J_n\left(x_k;y_k\right)\right|^2 &= \frac{1}{2\pi}\int_{-\pi}^{\pi}J_{2n}\left(2x_k\cos k\theta-2y_k\sin k\theta; 0\right)d\theta. \label{eq:GBF_Square_2} \end{align} These integral identities can be readily derived from their 1-D Bessel function counterparts. \subsubsection{Moments of $f_{1,\ell}\left({\bf x};{\bf y}\right)$} \label{subsubsec:Series1} Computing these sums becomes nearly trivial by utilizing the MT-GBF Jacobi-Anger expansion. Let $g\left(\theta\right) = \sum_{n=-\infty}^{\infty}J_n\left(x_k;y_k\right)e^{-i n\theta}$. 
If we let $X_{\ell}=\sum _{n=1}^\infty n^{\ell}x_n$ be the ${\ell}^\text{th}$ moment of ${\bf x}$ and likewise for $Y_{\ell}$, then we can simply represent the first few moments as: \begin{align} \mu _0&=e^{-iY_0}, \\ \mu _1&=X_1e^{-iY_0}, \\ \mu _2&=(X_1^2-iY_2)e^{-iY_0}, \\ \mu _3&=(X_3-3iX_1Y_2+X_1^3)e^{-iY_0}. \end{align} These results are MT-GBF generalizations of the results obtained in \cite{dattoli96}. \subsubsection{Moments of $f_{2,\ell}\left({\bf x};{\bf y}\right)$} \label{subsubsec:Series2} For the series $f_{2,\ell}\left({\bf x};{\bf y}\right)$, using the identity in \eqref{eq:GBF_Square_1}, we can interchange the summation and the integral to write \begin{equation} f_{2,\ell} = \frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{n=-\infty}^{\infty}n^{\ell} J_{2n}\left(2x_k \cos k\theta; 2y_k\cos k\theta\right)d\theta \label{eq:series2Eq1} \end{equation} Consider the following summation formula \begin{equation} h_{\ell}\left(\theta\right) = \sum_{n=-\infty}^{\infty}n^{\ell}J_{2n}\left(x_k;y_k\right)\cos^2{n\theta} \end{equation} Expanding the $\cos^2{n\theta}$ in terms of complex exponentials results in the expression \begin{equation} h_{\ell}\left(\theta\right) = i^{\ell}\left(\frac{1}{4}f^{\left(\ell\right)}\left(2\theta\right) + \frac{1}{4}f^{\left(\ell\right)}\left(-2\theta\right) + \frac{1}{2}f^{\left(\ell\right)}\left(0\right) \right) \end{equation} recognizing that the following identity holds: \begin{equation} \sum_{n=-\infty}^{\infty}n^{\ell}J_{2n}\left(x_k;y_k\right) = \frac{1}{2^{\ell}}h_{\ell}\left(\pi\right) \end{equation} We may now substitute the above result into \eqref{eq:series2Eq1} and carry out the integral, the result of which will be another MT-GBF expression. 
Using the fact that $J_n\left(0;y_k\left(-1\right)^k\right) = \left(-1\right)^n J_n\left(0;y_k\right)$, we can derive the following quantities \begin{align} f_{2,0} &= J_0\left(0;2y_k\right) \\ f_{2,1} &= i\sum_{k=1}^{\infty}k x_k \dfrac{\partial J_0 \left(0;2y_k\right)}{\partial y_k} \\ f_{2,2} &= \frac{1}{2}\sum_{k=1}^{\infty}k^2y_k\dfrac{\partial J_0\left(0;2y_k\right)}{\partial y_k}- \sum_{j,k=1}^{\infty}jkx_jx_k\frac{\partial^2 J_0\left(0;2y_k\right)}{\partial y_j \partial y_k} \end{align} \subsubsection{Moments of $f_{3,\ell}\left({\bf x};{\bf y}\right)$} \label{subsubsec:Series3} For the series $f_{3,\ell}$, again using the integral identity \eqref{eq:GBF_Square_2}, we may write \begin{equation} f_{3,\ell}=\frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{n=-\infty}^{\infty}n^{\ell}J_{2n}\left(2x_k\cos k\theta -2y_k\sin k\theta; 0\right)d\theta . \end{equation} Again using the function $h_{\ell}\left(\theta\right)$ from the previous section, we can readily compute the first few moments in this series \begin{align} f_{3,0} &= 1 \\ f_{3,1} &= 0 \\ f_{3,2} &= \sum_{k=1}^{\infty}\dfrac{k^2\left(x_k^2+y_k^2\right)}{2}. \label{eq:series3eq1} \end{align} Reassuringly, the result for $f_{3,2}$ in \eqref{eq:series3eq1} collapses back to a well known identity for 1-D Bessel functions in the case where $m=1$ \cite{watson22}, \begin{equation} f_{3,2} = \sum_{n=-\infty}^{\infty}n^2|J_n\left(x\right)|^2 = \frac{x^2}{2}. \end{equation} \subsection{Generalized Kapteyn Series} We now compute moments of the generalized Kapteyn series, where the summation index is included both in the order and the argument: \begin{equation} \mu_{\ell}=\sum _{n=-\infty}^\infty n^{\ell}J_n(n{\bf x};n{\bf y}) \end{equation} To compute these moments, we develop a generating function from a variant of the multi-dimensional feedback equation \cite{dattoli98, kuklinski18}. Let $f(\theta )=\theta -\sum _{k=1}^\infty (x_k\sin k\theta +y_k\cos k\theta )$ and suppose we wish to invert this function, i.e. 
represent $\theta =f^{-1}(t)$ as a function of $t$. Notice that this inversion is only valid on the set $\Omega$ of coordinates $\{ {\bf x},{\bf y}\}$ on which $f(\theta )$ is monotone increasing. It can be shown that $\Omega$ corresponds to the region of exponential decay of the previous section with $n=1$. This is intuitive, as for $\{ {\bf x},{\bf y}\}$ outside this region there will be stationary phase points in the integral representation such that $J_n(n{\bf x};n{\bf y})=O(n^{-1/2})$, and therefore the series will not converge. If we assume $\{ {\bf x},{\bf y}\}\in\Omega$, then both $f$ and $f^{-1}$ are monotone increasing, continuous, and bijective. Since $f(\theta +2\pi )=f(\theta )+2\pi$, we have \begin{equation} f(f^{-1}(t+2\pi ))=t+2\pi =f(f^{-1}(t)+2\pi ) \end{equation} This implies that $g(t)=f^{-1}(t)-t$ is $2\pi$ periodic such that $g$ admits a Fourier expansion $g(t)=\sum _{m=-\infty}^\infty a_me^{imt}$ and the coefficients satisfy $a_m=\frac{1}{2\pi}\int _{-\pi}^\pi g(t)e^{-imt}dt$. By integration by parts, a change of variables to $\theta$, and recognizing that \\ $\int _{-\pi}^\pi f'(\theta )e^{-imf(\theta )}d\theta =0$, we can write these coefficients as MT-GBFs: \begin{equation} a_{-m}=\frac{1}{2\pi mi}\int _{-\pi}^\pi e^{imf(\theta )}d\theta =-\frac{i}{m}J_m(m{\bf x};m{\bf y}) \end{equation} If we define $f(\theta )=\theta -h(\theta )$, then we can write a generating function for the generalized Kapteyn series: \begin{equation} h(\theta )=-i\sum _{m\ne 0}\frac{1}{m}J_m(m{\bf x};m{\bf y})e^{im(\theta -h(\theta ))} \end{equation} To solve for moments of the Kapteyn series, we must use the quantity $\theta _0$ which satisfies $f(\theta _0)=0$. Since $f$ is bijective, $\theta _0$ exists and is unique. Furthermore if ${\bf y}=0$, then $\theta _0=0$, $h^{(2n)}(\theta _0)=0$, and $h^{(2n+1)}(\theta _0)=\sum k^{2n+1}x_k$. 
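Both the generating function above and the Neumann moment \eqref{eq:series3eq1} collapse to classical one-dimensional identities, which gives a convenient numerical spot check. For $m=1$, ${\bf y}=0$, pairing the $\pm m$ terms of the generating function yields the Kapteyn expansion familiar from Kepler's equation, $x\sin\theta =\sum _{m\ge 1}\frac{2}{m}J_m(mx)\sin (mf(\theta ))$, while \eqref{eq:series3eq1} reduces to $\sum n^2|J_n(x)|^2=x^2/2$. A sketch (test values are arbitrary):

```python
import numpy as np
from scipy.special import jv

# Kepler-equation form of the generating function with m = 1, y = 0:
# x*sin(theta) = sum_{m>=1} (2/m) J_m(m*x) sin(m*f(theta)), valid for |x| < 1
x, theta = 0.5, 1.0
f_theta = theta - x * np.sin(theta)
kepler = sum(2.0 / m * jv(m, m * x) * np.sin(m * f_theta)
             for m in range(1, 120))

# m = 1 collapse of the Neumann second moment: sum n^2 |J_n(x)|^2 = x^2 / 2
moment = sum(k**2 * jv(k, 3.7) ** 2 for k in range(-60, 61))
```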
The moments arise by taking derivatives of both sides of equation (40) with respect to $\theta$ and evaluating at $\theta _0$. We list the first few moments here: \begin{align} \mu _0&=\frac{h'(\theta _0)}{1-h'(\theta _0)}, \\ \mu _1&=-\frac{ih''(\theta _0)}{(1-h'(\theta _0))^3}, \\ \mu_2&=-\frac{h'''(\theta _0)}{(1-h'(\theta _0))^3}-\frac{3h''(\theta _0)^2}{(1-h'(\theta _0))^4}. \end{align} These are MT-GBF versions of the equations that appear in Dattoli et al. \cite{dattoli98}. \subsection{Generalized Schl\"{o}milch Series} We conclude this section with a discussion of generalized Schl\"{o}milch series, which we write as \begin{equation} \mu_{\ell}=\sum _{n=1}^\infty n^{-{\ell}}J_m(n{\bf x};n{\bf y}) \end{equation} where $m$ is fixed and $\ell$ is a positive integer. In Watson \cite{watson22}, only special cases of the one-dimensional Schl\"{o}milch series admit a simple algebraic representation. However, it is noted that these summations are not smooth in neighborhoods about $x=2\pi n$ for $n\in\mathbb Z$. We attempt to recover a similar result in the higher dimensional MT-GBF case. For $\ell\ge 2$, we can interchange the sum and integral to write \begin{equation} \mu _{\ell}=\frac{1}{2\pi}\int _{-\pi}^\pi e^{im\theta}\left[\sum _{n=1}^\infty\frac{e^{-inh(\theta )}}{n^{\ell}}\right] d\theta = \frac{1}{2\pi}\int _{-\pi}^\pi e^{im\theta}\text{Li}_{\ell}(e^{-ih(\theta )})d\theta \end{equation} where $\text{Li}_{\ell}(z)$ is the polylogarithm function \cite{nielsen09}. The polylogarithm is valid for $|z|<1$ but can be extended to an analytic function on $z \in \mathbb C\backslash [1,\infty )$. As such, the integrand of equation (43) passes through the branch point at $z=1$ if there exists $\theta\in [-\pi ,\pi]$ such that $h(\theta )=2\pi n$ for some $n\in\mathbb Z$. 
This creates a collection of surfaces in $({\bf x},{\bf y})$ space which we term \emph{smoothness boundaries}; $\mu _{\ell}$ is not smooth in neighborhoods of the smoothness boundaries, and crossing over one of these boundaries changes the number of times the integrand passes through the branch point. Therefore, we index the smoothness boundaries by the simultaneous equations \begin{equation} h(\theta )=2\pi n,\hspace{1cm} h'(\theta )=0 \end{equation} Note that this closely relates to the discussion of bifurcation curves in the previous section, and we can use similar methods to compute these smoothness boundaries. For $J_n^{1,2}(x,y)$, the boundaries become: \begin{equation} 64(2\pi n)^4y^2+(2\pi n)^2(x^4-80x^2y^2-128y^4)+ (64y^6-48x^2y^4+12x^4y^2-x^6)=0 \end{equation} We display these surfaces in Figure 7 along with the $J_n^{1,3}(x,y)$ smoothness boundaries located at $(2\pi n)^2y-(x^3+9x^2y+27xy^2+27y^3)=0$. \begin{figure}[!h] \includegraphics[width=1.0\textwidth]{./GBF_Figure7.pdf} \caption{(\emph{Left}) Smoothness boundaries of the $J_n^{1,2}(x,y)$ Schl\"{o}milch series. (\emph{Right}) Smoothness boundaries of the $J_n^{1,3}(x,y)$ Schl\"{o}milch series. Moments of the Schl\"{o}milch series are continuous on their domain but not smooth; they are, however, smooth within the regions bounded by these curves.} \label{fig:GBF7} \end{figure} \section{Conclusion} In this paper we have documented several novel properties of multi-dimensional GBFs that extend those of their one-dimensional counterparts. The properties listed here, while bearing similarity to analogous results on one-dimensional Bessel functions, have a distinctly algebraic-geometric interpretation. The authors believe that the results in each of these sections can be improved upon with further research. 
Though it is likely that $J_n^{1,2}(x,y)$ does not satisfy another linearly independent partial differential equation, it may be possible to determine the topology of its level sets via the ordinary differential equation system generated from equation (16). It would also be worthwhile to pursue more precise asymptotics for higher-dimensional GBFs beyond the bifurcation surface calculations presented here. Since analogous properties of the one-dimensional Bessel functions have found such extensive use in mathematical physics, the authors hope that these GBF properties might prove similarly applicable.
https://arxiv.org/abs/1908.11683
Identities and Properties of Multi-Dimensional Generalized Bessel Functions
The Generalized Bessel Function (GBF) extends the single variable Bessel function to several dimensions and indices in a nontrivial manner. Two-dimensional GBFs have been studied extensively in the literature and have found application in laser physics, crystallography, and electromagnetics. In this article, we document several properties of $m$-dimensional GBFs including an underlying partial differential equation structure, asymptotics for simultaneously large order and argument, and analysis of generalized Neumann, Kapteyn, and Schlömilch series. We extend these results to mixed-type GBFs where appropriate.
https://arxiv.org/abs/1110.6851
Finite dimensional ordered vector spaces with Riesz interpolation and Effros-Shen's unimodularity conjecture
It is shown that, for any field F \subseteq R, any ordered vector space structure on F^n with Riesz interpolation is given by an inductive limit of a sequence with finite stages (F^n, F_{>= 0}^n) (where n does not change). This relates to a conjecture of Effros and Shen, since disproven, which is given by the same statement except with F replaced by the integers, Z. Indeed, it shows that although Effros and Shen's conjecture is false, it is true after tensoring with Q.
\section{Introduction} In this article we prove the following. \begin{thm} \label{MainResult} Let $\mathbb{F}$ be a subfield of the real numbers, let $n$ be a natural number, and suppose that $(V,V^+)$ is an ordered directed $n$-dimensional vector space over $\mathbb{F}$ with Riesz interpolation. Then there exists an inductive system \[ (\mathbb{F}^n, \mathbb{F}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_1^2} (\mathbb{F}^n, \mathbb{F}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_2^3} \cdots \] of ordered vector spaces over $\mathbb{F}$ whose inductive limit is $(V,V^+)$. \end{thm} The inductive limit may be taken either in the category of ordered abelian groups (with positivity-preserving homomorphisms as the arrows) or of ordered vector spaces over $\mathbb{F}$ (with positivity-preserving linear transformations as the arrows). Here, $\mathbb{F}_{\geq 0} := \mathbb{F} \cap [0,\infty)$, so that the ordering on $(\mathbb{F}^n, \mathbb{F}_{\geq 0}^n)$ is simply given by coordinatewise comparison. In \cite{EffrosShen:DimGroups}, Effros and Shen conjectured that every ordered, directed, unperforated, rank $n$ free abelian group $(G,G^+)$ with Riesz interpolation can be realized as the inductive limit of a system of ordered groups $(\mathbb{Z}^n, \mathbb{Z}_{\geq 0}^n)$. This was called the ``unimodularity conjecture,'' as the connecting maps would necessarily (eventually) be unimodular. This conjecture was disproven by Riedel in \cite{Riedel}. Theorem \ref{MainResult} shows that, nonetheless, upon tensoring with the rational numbers (or any other field contained in $\mathbb{R}$), the conjecture is true. As a consequence, Corollary \ref{QSimplicial} says that if $(G,G^+)$ is an ordered $n$-dimensional $\mathbb{Q}$-vector space with Riesz interpolation then it is an inductive limit of $(\mathbb{Z}^n,\mathbb{Z}_{\geq 0}^n)$ (where the maps are, of course, not unimodular). 
In \cite{Handelman:RealDG}, David Handelman shows that every vector space with Riesz interpolation can be realized as an inductive limit of ordered vector spaces $(\mathbb{F}^n,\mathbb{F}_{\geq 0}^n)$, though of course the number $n$ isn't assumed to be constant among the finite stages. The focus of \cite{Handelman:RealDG} is on the infinite-dimensional case, and indeed, an interesting example is given of a countable-dimensional ordered vector space that cannot be expressed as an inductive limit of a \textit{sequence} of ordered vector spaces $(\mathbb{F}^n,\mathbb{F}_{\geq 0}^n)$. Combined with this article, this gives a dichotomy between the behaviour of infinite- versus finite-dimensional ordered vector spaces with Riesz interpolation. \section{Preliminaries} \label{PrelimSec} We shall say a little here about the theory of ordered vector spaces with Riesz interpolation. Although the focus is on vector spaces, much of the interesting theory holds in the more general setting of ordered abelian groups (particularly when the group is unperforated, as ordered vector spaces automatically are). An excellent account of this theory can be found in the book \cite{Goodearl:book} by Ken Goodearl. \begin{defn} An \textbf{ordered vector space} consists of a vector space $V$ together with a subset $V^+ \subseteq V$, called the positive cone, giving an ordering compatible with the vector space structure; that is to say: \begin{enumerate} \item[(\textbf{OV1})] $V^+ \cap (-V^+) = 0$ ($V^+$ gives an order, not just a preorder); \item[(\textbf{OV2})] $V^+ + V^+ \subseteq V^+$; and \item[(\textbf{OV3})] $\lambda V^+ \subseteq V^+$ for all $\lambda \in \mathbb{F}_{\geq 0}$. \end{enumerate} The ordering on $V$ is of course given by $x \leq y$ if $y-x \in V^+$. The ordered vector space $(V,V^+)$ is \textbf{directed} if for all $x,y \in V$, there exists $z \in V$ such that \[ \begin{array}{c} x \\ y \end{array} \leq z. 
\] The ordered vector space $(V,V^+)$ has \textbf{Riesz interpolation} if for any $a_1,a_2,c_1,c_2 \in V$ such that \[ \begin{array}{c} a_1 \\ a_2 \end{array} \leq \begin{array}{c} c_1 \\ c_2 \end{array}, \] there exists $b \in V$ such that \[ \begin{array}{c} a_1 \\ a_2 \end{array} \leq b \leq \begin{array}{c} c_1 \\ c_2 \end{array}. \] \end{defn} Note that $(V,V^+)$ being directed is an extremely natural condition, as it is equivalent to saying that $V^+ - V^+ = V$. Riesz interpolation for an ordered vector space $(V,V^+)$ is equivalent to Riesz decomposition, which says that for any $x_1,x_2,y \in V^+$, if $y \leq x_1 + x_2$ then there exist $y_1,y_2 \in V^+$ such that $y = y_1+y_2$ and $y_i \leq x_i$ for $i=1,2$ \ccite{Section 23}{Birkhoff:LatticeGroups}. The category of ordered vector spaces (over a fixed field $\mathbb{F}$) has as arrows linear transformations which are positivity-preserving, meaning that they map the positive cone of the domain into the positive cone of the codomain. This category admits inductive limits, and for an inductive system $((V_\alpha, V_\alpha^+)_{\alpha \in A},(\phi_\alpha^\beta)_{\alpha \leq \beta})$, the inductive limit is given concretely as $(V,V^+)$ where $V$ is the inductive limit of $((V_\alpha)_{\alpha \in A},(\phi_\alpha^\beta)_{\alpha \leq \beta})$ in the category of vector spaces, and if $\phi_\alpha^\infty:V_\alpha \to V$ denotes the canonical map then \[ V^+ = \bigcup_{\alpha \in A} \phi_\alpha^\infty(V_\alpha^+). \] If $(V_\alpha,V_\alpha^+)$ has Riesz interpolation for every $\alpha$ then so does the inductive limit $(V,V^+)$. Theorem 1 of \cite{Handelman:RealDG} states that every ordered $\mathbb{F}$-vector space with Riesz interpolation can be realized as an inductive limit of a net of ordered vector spaces of the form $(\mathbb{F}^n,\mathbb{F}^n_{\geq 0})$. 
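In the building blocks $(\mathbb{F}^n,\mathbb{F}^n_{\geq 0})$ themselves, both directedness and Riesz interpolation are immediate, since the coordinatewise order is a lattice: the coordinatewise supremum $b_i = \max((a_1)_i,(a_2)_i)$ interpolates between $a_1,a_2$ and any common upper bounds. A small exact-arithmetic sketch over $\mathbb{Q}$ (the sample vectors are arbitrary choices of ours):

```python
from fractions import Fraction as F

def leq(u, v):
    # Coordinatewise order on F^n: u <= v iff v - u lies in F^n_{>=0}.
    return all(ui <= vi for ui, vi in zip(u, v))

def interpolant(a1, a2):
    # Riesz interpolation witness in (F^n, F^n_{>=0}): the coordinatewise
    # supremum sits between a1, a2 and every common upper bound.
    return tuple(max(p, q) for p, q in zip(a1, a2))

a1 = (F(1, 2), F(-1))
a2 = (F(0), F(1, 3))
c1 = (F(2), F(1))
c2 = (F(1), F(2))
assert all(leq(a, c) for a in (a1, a2) for c in (c1, c2))
b = interpolant(a1, a2)
assert leq(a1, b) and leq(a2, b) and leq(b, c1) and leq(b, c2)
```

Note that $b$ depends only on $a_1,a_2$, which is stronger than interpolation requires; general cones with interpolation (such as the ones constructed later) need not be lattices.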
The proof uses the techniques of \cite{EffrosHandelmanShen}, where it was shown that every ordered directed unperforated abelian group with Riesz interpolation is an inductive limit of a net of ordered groups of the form $(\mathbb{Z}, \mathbb{Z}_{\geq 0})$. In the case that $\mathbb{F}=\mathbb{Q}$, \ccite{Theorem 1}{Handelman:RealDG} follows from \cite{EffrosHandelmanShen} and the theory of ordered-group tensor products found in \cite{GoodearlHandelman:tens}. Certainly, if $(V,V^+)$ is an ordered directed $\mathbb{Q}$-vector space with Riesz interpolation then it can be written as an inductive limit of $G_\alpha = (\mathbb{Z}^{n_\alpha}, \mathbb{Z}_{\geq 0}^{n_\alpha})$, and then we have \[ (V,V^+) \cong (\mathbb{Q},\mathbb{Q}^+) \otimes_{\mathbb{Z}} (V,V^+) \cong \lim (\mathbb{Q}, \mathbb{Q}_{\geq 0}) \otimes_{\mathbb{Z}} (\mathbb{Z}^{n_\alpha}, \mathbb{Z}_{\geq 0}^{n_\alpha}) \cong \lim (\mathbb{Q}^{n_\alpha}, \mathbb{Q}_{\geq 0}^{n_\alpha}). \] But in the case of other fields, we no longer have $(V,V^+) \cong (V,V^+) \otimes_{\mathbb{Z}} (\mathbb{F},\mathbb{F}_{\geq 0})$ (indeed, $\mathbb{F} \otimes_{\mathbb{Z}} \mathbb{F} \not\cong \mathbb{F}$). Indeed, although in the countable case the net of groups in \cite{EffrosHandelmanShen} can be chosen to be a sequence, not every countable-dimensional ordered vector space with Riesz interpolation is the limit of a sequence of ordered vector spaces $(\mathbb{F}^n,\mathbb{F}_{\geq 0}^n)$. Theorem 2 of \cite{Handelman:RealDG} characterizes when the net from \ccite{Theorem 1}{Handelman:RealDG} can be chosen to be a sequence: exactly when the positive cone is countably generated. Using \cite{EffrosHandelmanShen}, one sees that an obviously sufficient condition for $(V,V^+)$ to be the limit of a sequence of ordered vector spaces of the form $(\mathbb{F}^n,\mathbb{F}_{\geq 0}^n)$ is that \begin{equation} \label{DGtens} (V,V^+) \cong (G,G^+) \otimes_{\mathbb{Z}} (\mathbb{F},\mathbb{F}_{\geq 0}). 
\end{equation} This is the case whenever $\mathbb{F}=\mathbb{Q}$. \ccite{Proposition 5}{Handelman:RealDG} also shows that \eqref{DGtens} holds when $(V,V^+)$ is simple, since in this case we can in fact take $(G,G^+)$ to be a rational vector space. Also, \eqref{DGtens} holds in the finite rank case, as \ccite{Theorem 3.2 and Corollary 6.2}{FinRiesz} likewise show that we can take $(G,G^+)$ to be a rational vector space. However, Theorem \ref{MainResult} improves on this result in the finite rank case, by showing that the finite stages have an even more special form -- their dimension does not exceed the dimension of the limit. \section{Outline of the proof} In light of the concrete description above of the inductive limit of ordered vector spaces, saying that $(V,V^+)$ (where $\dim_\mathbb{F} V = n$) can be realized as an inductive limit of a system \[ (\mathbb{F}^n, \mathbb{F}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_1^2} (\mathbb{F}^n, \mathbb{F}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_2^3} \cdots \] is equivalent to saying that there exist linear transformations $\phi_i^\infty:\mathbb{F}^n \to V$ such that: \begin{enumerate} \item $\phi_i^\infty$ is invertible for all $i$; \item $V^+ = \bigcup \phi_i^\infty(\mathbb{F}_{\geq 0}^n)$; and \item For all $i$, $\phi_i^\infty(\mathbb{F}_{\geq 0}^n) \subseteq \phi_{i+1}^\infty(\mathbb{F}_{\geq 0}^n)$. \end{enumerate} This idea is used in the proof of Theorem \ref{MainResult}, which we outline now. We rely on \cite{FinRiesz} for a combinatorial description of the ordered vector space $(V,V^+)$. Using this description, linear transformations $\alpha^\epsilon,\beta^R:\mathbb{F}^n \to \mathbb{F}^n$ are defined for all $\epsilon,R \in \mathbb{F}_{>0} := \mathbb{F} \cap (0,\infty)$. It is shown in Lemma \ref{Invertibility} that both $\alpha^\epsilon$ and $\beta^R$ are invertible. 
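As an illustration of conditions (1)-(3), consider the lexicographic-type cone $V^+ = \{(a,b): b > 0\}\cup\{(a,0): a \geq 0\}$ on $\mathbb{Q}^2$ (this worked example is ours, not taken from the paper): the invertible maps $\phi_k$ with matrix $\left(\begin{smallmatrix}1 & -k\\ 0 & 1\end{smallmatrix}\right)$ send the positive quadrant to an increasing sequence of cones whose union is exactly $V^+$. A sketch in exact rational arithmetic:

```python
from fractions import Fraction as F

def phi(k, v):
    # phi_k = [[1, -k], [0, 1]]: invertible, maps the positive
    # quadrant onto the cone generated by (1, 0) and (-k, 1).
    return (v[0] - k * v[1], v[1])

def in_image(k, w):
    # w lies in phi_k(Q^2_{>=0})  <=>  phi_k^{-1}(w) = [[1, k], [0, 1]] w
    # has nonnegative entries.
    s, t = w[0] + k * w[1], w[1]
    return s >= 0 and t >= 0

def in_lex_cone(w):
    # V^+ = {(a, b): b > 0} ∪ {(a, 0): a >= 0}.
    return w[1] > 0 or (w[1] == 0 and w[0] >= 0)

for k in range(5):
    for gen in [phi(k, (F(1), F(0))), phi(k, (F(0), F(1)))]:
        assert in_lex_cone(gen)      # each image sits inside V^+
        assert in_image(k + 1, gen)  # and the images increase with k

# sample points of V^+ are all caught at some finite stage
samples = [(F(-7), F(1, 3)), (F(5), F(0)), (F(-100), F(2))]
for w in samples:
    assert in_lex_cone(w)
    assert any(in_image(k, w) for k in range(200))
```

The connecting maps $\phi_{k+1}^{-1}\phi_k = \left(\begin{smallmatrix}1 & 1\\ 0 & 1\end{smallmatrix}\right)$ are positivity-preserving, as condition (3) requires.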
In \eqref{Udefn}, we associate to $(V,V^+)$ another ordered vector space $(\mathbb{F}^n,U^+)$ whose cone is like $V^+$ but such that the positive functionals on $(\mathbb{F}^n,U^+)$ separate the points. We show in Lemma \ref{ForwardImagePositive} (i) and Lemma \ref{UnionIsAll} (i), that \[ U^+ = \bigcup_{\epsilon \in \mathbb{F}_{>0}} \alpha^\epsilon(\mathbb{F}_{\geq 0}^n), \] and in Lemma \ref{ForwardImagePositive} (ii) and Lemma \ref{UnionIsAll} (ii), that \[ V^+ = \bigcup_{R \in \mathbb{F}_{>0}} \beta^R(U^+). \] Although we don't have \[ \beta^{R_1}(\alpha^{\epsilon_1}(\mathbb{F}_{\geq 0}^n)) \subseteq \beta^{R_2}(\alpha^{\epsilon_2}(\mathbb{F}_{\geq 0}^n)) \] whenever $R_1 < R_2$ and $\epsilon_1 > \epsilon_2$, Lemma \ref{EpsilonExists} does allow us to extract an increasing sequence from among all the images $\beta^R(\alpha^\epsilon(\mathbb{F}_{\geq 0}^n))$, such that their union is still all of $V^+$. \section{The proof in detail} We begin with a useful matrix inversion formula. \begin{lemma} \label{Jinverse} Let $J_n \in M_n$ denote the matrix all of whose entries are $1$. Then for $\lambda \neq -1/n$, $I_n+\lambda J_n$ is invertible and \[ (I_n+\lambda J_n)^{-1} = I_n - \frac{\lambda}{\lambda n + 1} J_n. \] \end{lemma} \begin{proof} Using the fact that $J^2 = nJ$, we can easily verify \[ (I + \lambda J) \left(I - \frac{\lambda}{\lambda n + 1} J \right) = I. \] \end{proof} The main result of \cite{FinRiesz} shows that every finite dimensional ordered directed $\mathbb{F}$-vector space with Riesz interpolation looks like $\mathbb{F}^n$ with a positive cone given by unions of products of $\mathbb{F}$, $\mathbb{F}_{>0}$ and $\{0\}$. To fully describe the result, the following notation for such products is quite useful. \begin{notation} For a partition $\{1,\dots,n\} = S_1 \amalg \cdots \amalg S_k$ and subsets $A_1,\dots,A_k$ of a set $A$, define \[ A_1^{S_1} \cdots A_k^{S_k} = \{(a_1,\dots,a_n) \in A^n: a_i \in A_j \ \forall i \in S_j, j=1,\dots,k\}. 
\] \end{notation} \begin{thm} \label{CombDescr} Every finite dimensional ordered directed $\mathbb{F}$-vector space with Riesz interpolation is isomorphic to $(\mathbb{F}^n,V^+)$ where \[ V^+ = \bigcup_{S \in \mathcal{S}} 0^{E^0_S}\, \mathbb{F}_{>0}^{E^>_{S}}\, \mathbb{F}^{E^*_{S}}, \] $\mathcal{S}$ is a sublattice of $2^{\{1,\dots,n\}}$ containing $\emptyset$ and $\{1,\dots,n\}$, and for each $S \in \mathcal{S}$, we have a partition \[ \{1,\dots,n\} = E^0_S \amalg E^>_S \amalg E^*_S, \] where $E^0_S = S^c$. We also use the notation $E^\geq_S := E^0_S \amalg E^>_S$. The sets $E^0_S, E^>_S, E^*_S$ satisfy the following conditions: \begin{enumerate} \item[(\textbf{RV1})] \label{UnionFormula} For any $S_1,S_2 \in \mathcal{S}$, \[ E^\geq_{S_1 \cup S_2} = E^\geq_{S_1} \cap E^\geq_{S_2}; \text{ and} \] \item[(\textbf{RV2})] \label{PositiveIdealSeparation} For any $S_1,S_2 \in \mathcal{S}$, if $S_2 \not\subseteq S_1$ then $E^>_{S_2} \setminus S_1 \neq \emptyset$. \end{enumerate} \end{thm} \begin{remark*} Corollaries 5.2 and 6.2 of \cite{FinRiesz} say that, in the cases $\mathbb{F}=\mathbb{R}$ and $\mathbb{F}=\mathbb{Q}$, every $V^+$ given as in the above theorem does actually have Riesz interpolation. Moreover, the proof of \ccite{Corollary 5.2}{FinRiesz} works for any other field $\mathbb{F} \subseteq \mathbb{R}$. However, the proof of Theorem \ref{MainResult} only uses that $V^+$ has the form described in the above theorem, and therefore it gives an entirely different proof of \ccite{Corollary 5.2}{FinRiesz}, that $V^+$ has Riesz interpolation (since Riesz interpolation is preserved under taking inductive limits). \end{remark*} \begin{proof} This is simply a special case of \ccite{Theorem 3.2}{FinRiesz}. Note that (\textbf{RV2}) appears in \ccite{Theorem 3.2}{FinRiesz} as: if $S_1 \subsetneq S_2$ then $E^>_{S_2} \setminus S_1 \neq \emptyset$. 
This is equivalent to (\textbf{RV2}): if $S_2 \not\subseteq S_1$ then $S_1 \cap S_2 \subsetneq S_2$, while if $S_1 \subsetneq S_2$ then of course $S_2 \not\subseteq S_1$. \end{proof} For each $i =1,\dots,n$, define \begin{align*} Z_i &:= \bigcup \{S \in \mathcal{S}: i \in S^c\}, \text{ and} \\ P_i &:= \bigcup \{S \in \mathcal{S}: i \in E^{\geq}_S\}. \end{align*} Note that $i \not\in Z_i$ and $i \in E^\geq_{P_i}$. For $\epsilon \in \mathbb{F}_{>0}$, define functionals $\alpha^\epsilon_i:\mathbb{F}^n \to \mathbb{F}$ by \begin{equation} \label{alpha-defn} \alpha^\epsilon_i(z_1,\dots,z_n) := z_i + \epsilon \sum_{j \not\in Z_i} z_j; \end{equation} and for $R \in \mathbb{F}_{>0}$, define functionals $\beta^R_i:\mathbb{F}^n \to \mathbb{F}$ by \begin{equation} \label{beta-defn} \beta^R_i(y_1,\dots,y_n) := y_i - R \sum_{j \not\in P_i, P_i \neq P_j} y_j. \end{equation} Let us denote $\alpha^\epsilon := (\alpha^\epsilon_1,\dots, \alpha^\epsilon_n):\mathbb{F}^n \to \mathbb{F}^n$ and $\beta^R := (\beta^R_1,\dots,\beta^R_n):\mathbb{F}^n \to \mathbb{F}^n$. Then $\alpha^\epsilon$ is block-triangular, and $\beta^R$ is triangular, as we shall now explain. For indices $i$ and $j$, we have $j \not\in Z_i$ if and only if $Z_i \subseteq Z_j$. We therefore label the blocks of $\alpha^\epsilon$ by sets $Z \in \mathcal{S}$, where the $Z^\text{th}$ block consists of indices $i$ such that $Z_i = Z$; we shall use $B_Z$ to denote this set of indices, i.e. \[ B_Z := \{i =1,\dots,n: Z_i = Z\}. \] For $\beta^R$, note that if $j \not\in P_i$ then $P_i \subseteq P_j$, and from this it follows that $\beta^R$ is triangular. \begin{lemma} \label{Invertibility} For all $\epsilon,R \in \mathbb{F}_{>0}$, $\alpha^\epsilon$ and $\beta^R$ are invertible. \end{lemma} \begin{proof} That $\beta^R$ is invertible follows from the fact that it is triangular with $1$'s on the diagonal. 
To show that $\alpha^\epsilon$ is invertible, as we already noted that it is block-triangular, we need to check that each block is invertible. In matrix form, the $Z^\text{th}$ block of $\alpha^\epsilon$ is equal to \[ I_{|B_Z|} + \epsilon J_{|B_Z|}, \] and by Lemma \ref{Jinverse}, this block is invertible. \end{proof} \begin{notation} \label{ZeroSet-Defn} For $x \in \mathbb{F}^n$, let us use $S_x$ to denote the smallest set $S \in \mathcal{S}$ such that $S$ contains \[ \{i=1,\dots,n : x_i \neq 0\}. \] \end{notation} \begin{lemma} \label{ZeroSet-Lemma} Let $\epsilon,R \in \mathbb{F}_{>0}$ be scalars and let $z \in \mathbb{F}^n$. Then \[ S_z = S_{\alpha^\epsilon(z)} = S_{\beta^R(\alpha^\epsilon(z))}. \] \end{lemma} \begin{proof} To show that $S_{\alpha^\epsilon(z)} \subseteq S_z$, it suffices to show that $\alpha^\epsilon_i(z)=0$ for all $i \not\in S_z$, which we show in (a). Likewise we show in (b) that $z_i = 0$ for all $i \not\in S_{\alpha^\epsilon(z)}$, in (c) that $\beta^R_i(\alpha^\epsilon(z)) = 0$ for all $i \not\in S_{\alpha^\epsilon(z)}$, and in (d) that $\alpha_i^\epsilon(z) = 0$ for all $i \not\in S_{\beta^R(\alpha^\epsilon(z))}$. (a) If $i \not\in S_z$ then $S_z \subseteq Z_i$ and therefore, for every $j \not\in Z_i$ we have $j \not\in S_z$ and so $z_j = 0$. Since $\alpha^\epsilon_i(z)$ is a linear combination of $\{z_j: j \not\in Z_i\}$, it follows that $\alpha^\epsilon_i(z) = 0$. (b) We shall prove this by induction on the blocks $B_Z$, iterating $Z \in \mathcal{S}$ in a nonincreasing order. Since $i \not\in S_{\alpha^\epsilon(z)}$ if and only if $Z_i \supseteq S_{\alpha^\epsilon(z)}$, we only need to consider $Z \supseteq S_{\alpha^\epsilon(z)}$. For a block $Z \supseteq S_{\alpha^\epsilon(z)}$ and an index $i \in B_Z$, we have \begin{equation} \label{ZeroSet-Eqb} 0 = \alpha^\epsilon_i(z) = z_i + \epsilon \sum_{j \in B_Z} z_j + \epsilon \sum_{j: Z_j \supsetneq Z} z_j. 
\end{equation} By induction, we have that $z_j = 0$ for all $j$ satisfying $Z_j \supsetneq Z$; that is to say, the last term in \eqref{ZeroSet-Eqb} vanishes. Hence, the system \eqref{ZeroSet-Eqb} becomes \[ 0 = (I_{|B_Z|} + \epsilon J_{|B_Z|})(z_i)_{i \in B_Z}; \] and by Lemma \ref{Jinverse}, it follows that $z_i = 0$ for all $i \in B_Z$, as required. (c) For (c) and (d), let us set $y := \alpha^\epsilon(z)$. If $i \not\in S_y$ then again $S_y \subseteq Z_i \subseteq P_i$, so $y_i = 0$ and $y_j = 0$ for all $j \not\in P_i$. Since $\beta^R_i(y)$ is a linear combination of $\{y_i\} \cup \{y_j: j \not\in P_i\}$, $\beta^R_i(y) = 0$. (d) If $i \not\in S_{\beta^R(y)}$ then we have \[ 0 = \beta^R_i(y) = y_i - R\sum_{j \not\in P_i, P_j \neq P_i} y_j. \] As above, since $S_{\beta^R(y)} \subseteq Z_i \subseteq P_i$, $j \not\in P_i$ implies that $j \not\in S_{\beta^R(y)}$. Hence, if we iterate the indices $i \in S_{\beta^R(y)}^c$ in a nonincreasing order of the sets $P_i$ then induction proves $y_i=0$ for all $i \not\in S_{\beta^R(y)}$. \end{proof} Our proof makes use of the following positive cone: \begin{equation} \label{Udefn} U^+ := \bigcup_{S \in \mathcal{S}} \mathbb{F}_{>0}^S\, 0^{S^c}. \end{equation} \begin{lemma} \label{ForwardImagePositive} Let $R,\epsilon \in \mathbb{F}_{>0}$ be scalars. Then: \begin{enumerate} \item $\alpha^\epsilon(\mathbb{F}_{\geq 0}^n) \subseteq U^+$, and \item $\beta^R(U^+) \subseteq V^+$. \end{enumerate} \end{lemma} \begin{proof} (i) Let $z \in \mathbb{F}_{\geq 0}^n$. By Lemma \ref{ZeroSet-Lemma}, we know that $\alpha^\epsilon_i(z)=0$ for $i \not\in S_z$. Let us show that $\alpha^\epsilon_i(z) > 0$ for $i \in S_z$, from which it follows that $\alpha^\epsilon(z) \in U^+$. For $i \in S_z$, we have \[ \alpha^\epsilon_i(z) = z_i + \epsilon \sum_{j \not\in Z_i} z_j; \] so evidently $\alpha^\epsilon_i(z) \geq 0$, and $\alpha^\epsilon_i(z) = 0$ would imply that $z_j = 0$ for all $j \not\in Z_i$. 
But if that were the case, then we would have $S_z \subseteq Z_i$, and in particular, $i \not\in S_z$, which is a contradiction. Hence $\alpha^\epsilon_i(z) > 0$. (ii) Let $y \in U^+$. Then we must have $y_i > 0$ for all $i \in S_y$. By Lemma \ref{ZeroSet-Lemma}, we already know that $\beta^R_i(y) = 0$ for all $i \in S_y^c = E^0_{S_y}$. Thus, we need only show that $\beta^R_i(y) > 0$ for $i \in E^>_{S_y}$. For such an $i$, we have \[ \beta^R_i(y) = y_i - R\sum_{j \not\in P_i, P_j \neq P_i} y_j. \] As noted above, $y_i > 0$. Since $i \in E^>_{S_y}$, we have $S_y \subseteq P_i$. Therefore if $j \not\in P_i$ then $j \not\in S_y$ and so $y_j = 0$. Thus, we in fact have $\beta^R_i(y) = y_i > 0$. \end{proof} \begin{lemma} \label{UnionIsAll} Let $U^+$ be as defined in \eqref{Udefn}. Then: \begin{enumerate} \item $U^+ = \bigcup_{\epsilon \in \mathbb{F}_{>0}} \bigcap_{\epsilon' \in \mathbb{F}_{>0}, \epsilon' < \epsilon} \alpha^{\epsilon'}(\mathbb{F}_{\geq 0}^n).$ \item $V^+ = \bigcup_{R \in \mathbb{F}_{>0}} \bigcap_{R' \in \mathbb{F}_{>0}, R' > R} \beta^{R'}(U^+).$ \end{enumerate} \end{lemma} \begin{proof} (i) Let $y \in U^+$. Define $m := \min \{|y_i|: i \in S_y\} > 0$ and $M := \max \{|y_i|: i \in S_y\}$, and suppose that $\epsilon \in \mathbb{F}_{>0}$ is such that \[ \epsilon < \frac{m}{2nM}. \] Let us show that $z = (\alpha^{\epsilon})^{-1}(y)$ satisfies $z_i \geq 0$ for all $i$. We will show, by induction on the blocks $B_Z$ (iterating $Z \in \mathcal{S}$ in a nonincreasing order), that \[ 0 \leq z_i \leq M \] for all $i \in B_Z$. By the definition of $S_y$, we already know that this holds for $Z \supseteq S_y$ (if $i \not\in S_y$ then $z_i = 0$). For $i \in B_Z \cap S_y$, set \[ C_i := z_i + \epsilon \sum_{j \in B_Z} z_j = y_i - \epsilon \sum_{j: Z_j \supsetneq Z} z_j. \] Then we have \[ C_i \geq m - \epsilon n M > m - m/2 = m/2 \] and \[ C_i \leq M. 
\] By Lemma \ref{Jinverse}, we have \[ z_i = C_i - \frac{\epsilon}{n\epsilon+1} \sum_{j \in B_Z} C_j. \] On the one hand, this gives \[ z_i > m/2 - \epsilon nM > m/2 - m/2 = 0, \] and on the other, it gives \[ z_i \leq C_i \leq M, \] as required. (ii) Let $x \in V^+$. For $R \in \mathbb{F}_{>0}$ let us denote $y^R = (y^R_1,\dots,y^R_n) := (\beta^R)^{-1}(x)$. For $i \not\in S_x$ we already know that $y^R_i = 0$ for all $R$. Moreover, for all $i \in E^>_{S_x}$ and all $R$, we have \[ x_i = y^R_i - R \sum_{j \not\in P_i, P_j \neq P_i} y^R_j; \] but note that if $j \not\in P_i \supseteq S_x$ then $j \not\in S_x$ and so $y^R_j = 0$; therefore we have $y^R_i = x_i > 0$. We will show by induction that, for each $i \in E^*_{S_x}$ there exists $R_i \in \mathbb{F}_{>0}$ such that for all $R'' \geq R' \geq R_i$, we have \[ y_i^{R''} > y_i^{R'} > 0. \] We iterate the indices $i$ in a nonincreasing order of $P_i$. For the index $i$, we have \begin{equation} \label{UnionIsAll-yEq} y^R_i = x_i + R \sum_{j \not\in P_i, P_j \supsetneq P_i} y^R_j. \end{equation} If we require that $R \geq \max\{R_j: P_j \supsetneq P_i\}$ then, by induction, we know that $y^R_j \geq 0$ for all $j \not\in P_i$. Moreover, since $i \not\in E^{\geq}_{S_x}$, this means that $S_x \not\subseteq P_i$, and therefore by (\textbf{RV2}) in Theorem \ref{CombDescr}, there exists some $j_0 \in E^>_{S_x} \setminus P_i$. Notice that $S_x \subseteq P_{j_0}$ while $S_x \not\subseteq P_i$, so that $y^R_{j_0}$ does appear as a summand in the right-hand side of \eqref{UnionIsAll-yEq}. Thus, we have \[ y^R_i = x_i + R \sum_{j \not\in P_i, P_j \neq P_i} y^R_j \geq x_i + R y^R_{j_0} = x_i + R x_{j_0}. \] Since $x_{j_0} > 0$, there exists $R_i$ for which the right-hand side is positive for all $R \geq R_i$, and so $y^R_i > 0$. Since $y^R_j$ is a nondecreasing function of $R$ for all $j$ for which $P_j \supsetneq P_i$, it is clear from \eqref{UnionIsAll-yEq} that so is $y^R_i$. \end{proof} \begin{lemma} \label{EpsilonExists} Let $R_1,\epsilon_1 \in \mathbb{F}_{>0}$ be scalars. 
For any $R' > R_1$, there exist $R_2,\epsilon_2 \in \mathbb{F}_{>0}$ with $R_2 > R'$ and $\epsilon_2 < \epsilon_1$ such that \[ \beta^{R_1}(\alpha^{\epsilon_1}(\mathbb{F}_{\geq 0}^n)) \subseteq \beta^{R_2}(\alpha^{\epsilon_2}(\mathbb{F}_{\geq 0}^n)). \] \end{lemma} \begin{proof} Let $e_1,\dots,e_n$ be the canonical basis for $\mathbb{F}^n$, so that $\mathbb{F}_{\geq 0}^n$ is the cone generated by $e_1,\dots,e_n$. Then for each of $i=1,\dots,n$, we have by Lemma \ref{ForwardImagePositive} that \[ \beta^{R_1}(\alpha^{\epsilon_1}(e_i)) \in V^+; \] and thus by Lemma \ref{UnionIsAll} (ii), there exists $R_2 > R'$ such that \[ (\beta^{R_2})^{-1}(\beta^{R_1}(\alpha^{\epsilon_1}(e_i))) \in U^+ \] for all $i=1,\dots,n$. By Lemma \ref{UnionIsAll} (i), there then exists $\epsilon_2 < \epsilon_1$ such that \[ (\alpha^{\epsilon_2})^{-1}((\beta^{R_2})^{-1}(\beta^{R_1}(\alpha^{\epsilon_1}(e_i)))) \in \mathbb{F}_{\geq 0}^n \] for all $i=1,\dots,n$, which is to say, \[ \beta^{R_1}(\alpha^{\epsilon_1}(e_i)) \in \beta^{R_2}(\alpha^{\epsilon_2}(\mathbb{F}_{\geq 0}^n)). \] Since $\mathbb{F}_{\geq 0}^n$ is the cone generated by $e_1,\dots,e_n$, it follows that \[ \beta^{R_1}(\alpha^{\epsilon_1}(\mathbb{F}_{\geq 0}^n)) \subseteq \beta^{R_2}(\alpha^{\epsilon_2}(\mathbb{F}_{\geq 0}^n)), \] as required. \end{proof} \begin{proof}[Proof of Theorem \ref{MainResult}] Let $R_1,\epsilon_1 \in \mathbb{F}_{>0}$, and, using Lemma \ref{EpsilonExists}, inductively construct sequences $(R_i), (\epsilon_i) \subset \mathbb{F}_{>0}$, such that $R_i \to \infty$, $\epsilon_i \to 0$ and for each $i$, \[ \beta^{R_i}(\alpha^{\epsilon_i}(\mathbb{F}_{\geq 0}^n)) \subseteq \beta^{R_{i+1}}(\alpha^{\epsilon_{i+1}}(\mathbb{F}_{\geq 0}^n)). \] Set $\phi_i = \beta^{R_i} \circ \alpha^{\epsilon_i}:\mathbb{F}^n \to \mathbb{F}^n$. By Lemma \ref{UnionIsAll}, we have $V^+ = \bigcup_{i=1}^\infty \phi_i(\mathbb{F}_{\geq 0}^n)$. 
Our inductive system is thus \[ (\mathbb{F}^n,\mathbb{F}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_{2}^{-1} \circ \phi_1} (\mathbb{F}^n,\mathbb{F}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_3^{-1} \circ \phi_2} \cdots; \] as explained in Section \ref{PrelimSec}, the inductive limit is \[ (\mathbb{F}^n, \bigcup_{i=1}^\infty \phi_i(\mathbb{F}_{\geq 0}^n)) = (V,V^+), \] as required. \end{proof} \section{Consequences} \begin{cor} \label{QSimplicial} Let $(V,V^+)$ be an $n$-dimensional ordered directed $\mathbb{Q}$-vector space with Riesz interpolation. Then there exists an inductive system of ordered groups \[ (\mathbb{Z}^n,\mathbb{Z}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_1^2} (\mathbb{Z}^n,\mathbb{Z}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_2^3} \cdots \] whose inductive limit is $(V,V^+)$. \end{cor} \begin{proof} By Theorem \ref{MainResult}, let \[ (\mathbb{Q}^n, \mathbb{Q}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_1^2} (\mathbb{Q}^n, \mathbb{Q}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_2^3} \cdots \] be an inductive system whose limit is $(V,V^+)$. Since positive scalar multiplication gives an isomorphism of any ordered vector space, we may replace any of the connecting maps with positive scalar multiples of themselves, and still get $(V,V^+)$ in the limit. Hence, we may assume without loss of generality that $\phi_i^{i+1}(\mathbb{Z}^n) \subseteq \mathbb{Z}^n$. Then, letting $\overline{\phi}_i^{i+1} = \phi_i^{i+1}|_{\mathbb{Z}^n}$, we have an inductive system \[ (\mathbb{Z}^n,\mathbb{Z}_{\geq 0}^n) \labelledthing{\longrightarrow}{\overline{\phi}_1^2} (\mathbb{Z}^n, \mathbb{Z}_{\geq 0}^n) \labelledthing{\longrightarrow}{\overline{\phi}_2^3} \cdots, \] whose limit $(G,G^+)$ satisfies $(G,G^+) \otimes_\mathbb{Z} (\mathbb{Q},\mathbb{Q}^+) \cong (V,V^+)$. 
Now, we may easily find an inductive system \[ (\mathbb{Z},\mathbb{Z}_{\geq 0}) \labelledthing{\longrightarrow}{\psi_1^2} (\mathbb{Z},\mathbb{Z}_{\geq 0}) \labelledthing{\longrightarrow}{\psi_2^3} \cdots \] whose limit is $(\mathbb{Q},\mathbb{Q}_{\geq 0})$. (Such an inductive system necessarily has $\psi_i^{i+1}$ given by multiplication by a positive integer $N_i$; and the limit is $(\mathbb{Q},\mathbb{Q}_{\geq 0})$ as long as every prime occurs as a factor of infinitely many $N_i$.) Thus, by \ccite{Lemma 2.2}{GoodearlHandelman:tens}, $(V,V^+)$ is the inductive limit of \[ (\mathbb{Z}^n,\mathbb{Z}_{\geq 0}^n) \otimes_\mathbb{Z} (\mathbb{Z},\mathbb{Z}_{\geq 0}) \labelledthing{\longrightarrow}{\overline{\phi}_1^2 \otimes_\mathbb{Z} \psi_1^2} (\mathbb{Z}^n, \mathbb{Z}_{\geq 0}^n) \otimes_\mathbb{Z} (\mathbb{Z},\mathbb{Z}_{\geq 0}) \labelledthing{\longrightarrow}{\overline{\phi}_2^3 \otimes_\mathbb{Z} \psi_2^3} \cdots, \] which is what we require, since $(G,G^+) \otimes_\mathbb{Z} (\mathbb{Z},\mathbb{Z}_{\geq 0}) = (G,G^+)$ for any ordered abelian group $(G,G^+)$. \end{proof} \begin{cor} Let $(G,G^+)$ be a rank $n$ ordered directed free abelian group with Riesz interpolation. Then there exists an inductive system of ordered groups \[ (\mathbb{Z}^n,\mathbb{Z}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_1^2} (\mathbb{Z}^n,\mathbb{Z}_{\geq 0}^n) \labelledthing{\longrightarrow}{\phi_2^3} \cdots \] whose inductive limit is $(G,G^+) \otimes_{\mathbb{Z}} (\mathbb{Q},\mathbb{Q}_{\geq 0})$. \end{cor} \begin{proof} This follows immediately, as $(G,G^+) \otimes_{\mathbb{Z}} (\mathbb{Q},\mathbb{Q}_{\geq 0})$ is an $n$-dimensional ordered directed $\mathbb{Q}$-vector space with Riesz interpolation. \end{proof}
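As a concrete instance of the machinery above (a worked example of ours, not from the paper): the lexicographic-type cone $V^+ = \{(a,b): b>0\}\cup\{(a,0): a\geq 0\}$ on $\mathbb{F}^2$ arises from $n=2$, $\mathcal{S}=\{\emptyset,\{1\},\{1,2\}\}$ with $E^>_{\{1\}}=\{1\}$, $E^>_{\{1,2\}}=\{2\}$, $E^*_{\{1,2\}}=\{1\}$. One computes $Z_1=\emptyset$, $Z_2=\{1\}$, $P_1=\{1\}$, $P_2=\{1,2\}$, so $\alpha^\epsilon$ has matrix $\left(\begin{smallmatrix}1+\epsilon & \epsilon\\ 0 & 1+\epsilon\end{smallmatrix}\right)$ and $\beta^R$ has matrix $\left(\begin{smallmatrix}1 & -R\\ 0 & 1\end{smallmatrix}\right)$. The following exact-arithmetic sketch checks Lemma \ref{ForwardImagePositive} on sample points:

```python
from fractions import Fraction as F

def alpha(eps, z):
    # alpha^eps for Z_1 = {}, Z_2 = {1}:
    # alpha_1 = z_1 + eps*(z_1 + z_2), alpha_2 = z_2 + eps*z_2.
    return ((1 + eps) * z[0] + eps * z[1], (1 + eps) * z[1])

def beta(R, y):
    # beta^R for P_1 = {1}, P_2 = {1, 2}:
    # beta_1 = y_1 - R*y_2, beta_2 = y_2.
    return (y[0] - R * y[1], y[1])

def in_V_plus(w):
    # V^+ = {0} ∪ F_{>0} x {0} ∪ F x F_{>0} (lexicographic-type cone)
    return w[1] > 0 or (w[1] == 0 and w[0] >= 0)

eps, R = F(1, 10), F(5)
# beta^R(alpha^eps(F^2_{>=0})) is contained in V^+
for z in [(F(0), F(0)), (F(3), F(0)), (F(0), F(2)), (F(1), F(7))]:
    assert in_V_plus(beta(R, alpha(eps, z)))
# both maps are triangular with nonzero diagonal, hence invertible
assert (1 + eps) * (1 + eps) != 0
```

Here $\alpha^\epsilon$ has the single nontrivial block $B_\emptyset = \{1\}$ together with $B_{\{1\}} = \{2\}$, so its block-triangular form is genuinely triangular in this small case.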
https://arxiv.org/abs/1710.08253
Differential posets and restriction in critical groups
In recent work, Benkart, Klivans, and Reiner defined the critical group of a faithful representation of a finite group $G$, which is analogous to the critical group of a graph. In this paper we study maps between critical groups induced by injective group homomorphisms and in particular the map induced by restriction of the representation to a subgroup. We show that in the abelian group case the critical groups are isomorphic to the critical groups of a certain Cayley graph and that the restriction map corresponds to a graph covering map. We also show that when $G$ is an element in a differential tower of groups, critical groups of certain representations are closely related to words of up-down maps in the associated differential poset. We use this to generalize an explicit formula for the critical group of the permutation representation of the symmetric group given by the second author, and to enumerate the factors in such critical groups.
\section{Introduction} \label{sec:intro} The critical group $K(\Gamma)$ is a well-studied abelian group invariant of a finite graph $\Gamma$ which encodes information about the dynamics of a process called \textit{chip firing} on the graph (see \cite{LP} where critical groups are called \textit{sandpile groups}). Recent work of Benkart, Klivans, and Reiner defined analogous abelian group invariants $K(V)$, also called \textit{critical groups}, associated to a faithful representation $V$ of a finite group $G$ \cite{BKR}. It is known (see, for example, \cite{Tr}) that graph covering maps induce surjective maps between graph critical groups. This paper investigates maps on critical groups of group representations which are induced by group homomorphisms. Differential posets, introduced by Stanley \cite{St}, generalize many of the combinatorial and enumerative properties of Young's lattice. In \cite{MR}, Miller and Reiner introduced a very strong conjecture about the Smith normal form of $UD+tI$ where $U,D$ are the up and down maps in a differential poset, and $t$ is a variable. We investigate how this conjecture, which was proven for powers of Young's lattice by Shah \cite{Sh}, can be used to determine the structure of critical groups in certain \textit{differential towers of groups.} Section \ref{sec:defs} defines critical groups for graphs and group representations and gives background results. It also discusses background on differential posets and differential towers of groups which will be used throughout the later sections. In Section \ref{sec:maps} we study maps between critical groups which are induced by group homomorphisms. In particular, restriction of representations to a subgroup $H \subset G$ induces a map $\bar{\text{Res}}:K(V) \to K(\text{Res}^G_H V)$. 
When $G$ is abelian, Theorem \ref{thm:cay-graph} shows that $K(V)$ can be identified with the critical group of a certain Cayley graph $\text{Cay}(\hat{G},\mathcal{S}_V)$, and that the restriction map $\bar{\text{Res}}$ agrees with a map on graph critical groups induced by a natural graph covering. In \cite{G1}, the second author determined the exact structure of the critical group for the permutation representation of the symmetric group $\mathfrak{S}_n$. This result depended on a relationship between tensor products with the permutation representation and the up and down maps in Young's lattice of integer partitions. Section \ref{sec:gen-perm-rep} formalizes this connection and generalizes it to the context of differential towers of groups, allowing us to explicitly compute the critical group for a generalized permutation representation of the wreath product $A \wr \mathfrak{S}_n$ in Theorem \ref{thm:gen-perm-rep}. It also investigates properties of the critical groups associated to representations $V(w)$ which occur by repeatedly applying restriction and induction to the trivial representation in a differential tower of groups. The pattern of restriction and induction is specified by a word $w \in \{U,D\}^*$, where $U,D$ are the up and down operators in the corresponding differential poset. In Theorem \ref{thm:structure of K(V(f))} we show that the structure of the critical group $K(V(w))$ is closely related to combinatorial properties of the up and down operators, as studied in \cite{St}. Finally, in Section \ref{sec:factors}, Theorem \ref{thm:ones of w} gives an enumeration of the factors in the elementary divisor form of $K(V(w))$ in terms of the rank sizes of the corresponding differential tower of groups. We conclude by presenting a conjecture for the size and multiplicity of the smallest nontrivial factor in $K(V(U^kD^k))$ in $A \wr \mathfrak{S}_n$. 
This conjecture also relates the larger factors in this critical group to the factors in a critical group for the subgroup $A \wr \mathfrak{S}_{n-k}$. \section{Background and definitions} \label{sec:defs} \subsection{Critical groups of graphs} This section gives some background on critical groups of graphs; see \cite{chip-firing survey} for a thorough survey. We will be interested in critical groups of graphs primarily as motivation for our study of critical groups of group representations; however, Section \ref{subsec:cayley} below gives a close relationship between the two concepts when $G$ is abelian. Let $\Gamma$ be a finite directed graph with a designated sink vertex $s$, whose outgoing edges we ignore. Fix some ordering $s=v_0,v_1,...,v_{\ell}$ of the vertices of $\Gamma$, and let $d_i$ be the out-degree of $v_i$ and $a_{ij}$ be the number of edges from $v_i$ to $v_j$. Then the \textit{Laplacian matrix} $\tilde{L}(\Gamma)$ has entries \[ \tilde{L}(\Gamma)_{ij}=\begin{cases} d_i-a_{ii} & \text{for } i=j, \\ -a_{ij} & \text{for } i \neq j. \end{cases} \] The \textit{reduced Laplacian matrix} $L(\Gamma)$ is the $\ell \times \ell$ matrix obtained from $\tilde{L}(\Gamma)$ by removing the row and column corresponding to the sink. The \textit{critical group} $K(\Gamma)$ (also called the \textit{sandpile group} in the literature), defined as \[ K(\Gamma) = \text{coker}(L(\Gamma): \mathbb{Z}^{\ell} \to \mathbb{Z}^{\ell}), \] is a finite abelian group whose order is the number of spanning trees of $\Gamma$ which are directed towards the sink. There are two sets of distinguished coset representatives for $K(\Gamma)$, the \textit{superstable} and \textit{recurrent} configurations, which encode the dynamics of a process called \textit{chip-firing} on $\Gamma$. 
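As a concrete instance of the definition above, the following sympy sketch (our illustration, not from the paper) computes $K(\Gamma)$ for the complete graph $K_4$, viewing each undirected edge as a pair of opposite directed edges and choosing one vertex as the sink.

```python
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

# Reduced Laplacian of the complete graph K_4 (each undirected edge viewed
# as a pair of opposite directed edges), with one vertex chosen as the sink.
L = Matrix([[ 3, -1, -1],
            [-1,  3, -1],
            [-1, -1,  3]])

S = smith_normal_form(L)
# Invariant factors 1, 4, 4, so K(K_4) = Z/4 x Z/4, of order 16,
# the number of spanning trees of K_4 (sympy may vary signs).
assert [abs(S[i, i]) for i in range(3)] == [1, 4, 4]
assert abs(L.det()) == 16
```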
In the remainder of the paper we will primarily be interested in the group structure of $K(\Gamma)$, and not in these particular coset representatives or the chip-firing process. Given two directed multigraphs $\Gamma, \Gamma'$, a \textit{graph map} is a continuous map $\varphi: \Gamma \to \Gamma'$ of the underlying topological spaces which maps the interior of each edge homeomorphically to the interior of another edge, preserving orientation; by continuity this also defines a map from the vertices of $\Gamma$ to the vertices of $\Gamma'$. A graph map is a \textit{graph covering} if in addition each vertex of $\Gamma$ has a neighborhood on which the restriction of $\varphi$ is a homeomorphism. The following proposition is well-known, see for example \cite{Tr}: \begin{prop} \label{prop:graph covering surj} The underlying map on vertices of a graph covering $\varphi: \Gamma \to \Gamma'$ induces a surjective group homomorphism $\bar{\varphi}: K(\Gamma) \twoheadrightarrow K(\Gamma')$. \end{prop} Section \ref{subsec:cayley} discusses the relationship between maps induced on critical groups of Cayley graphs by certain graph coverings $\bar{\varphi}$ and the map $\bar{\text{Res}}$ on critical groups of group representations. \subsection{Critical groups of group representations} Let $G$ be a finite group and $V$ a faithful complex (not-necessarily-irreducible) representation of $G$; let $\mathbbm{1}_G=V_0, V_1,...,V_{\ell}$ denote the irreducible complex representations and $\chi_i$, $i=0,...,\ell$ denote their characters. Let $R(G)$ denote the \textit{representation ring} of $G$. This is the commutative $\mathbb{Z}$-algebra of formal integer combinations of representations of $G$ modulo the relations $[W \oplus W']=[W] + [W']$; the product structure is defined as $[W] \cdot [W'] = [W \otimes_{\mathbb{C}} W']$. 
As a $\mathbb{Z}$-module, $R(G)$ is isomorphic to $\mathbb{Z}^{\ell+1}$, since the classes of irreducible representations $[\mathbbm{1}_G],[V_1],...,[V_{\ell}]$ form a basis. We define elements \[ \delta^{(g)}=\sum_{W \in \text{Irr}(G)} \chi_W(g)\cdot [W] \] of $R(G)$ corresponding to the columns in the character table of $G$. The representation ring $R(G)$ is endowed with a $\mathbb{Z}$-algebra homomorphism $\dim: R(G) \to \mathbb{Z}$ sending representations $[W]$ to their dimensions as vector spaces (which we also denote by $\dim(W)$), and extending by linearity to virtual representations. The kernel of this map, which we denote by $R_0(G)$, is the ideal of elements in $R(G)$ with virtual dimension 0. Then multiplication by the element $\dim(V)[\mathbbm{1}_G]-[V]$ defines a linear map $\tilde{C}_V:R(G) \to R(G)$. Since $\dim(V)[\mathbbm{1}_G]-[V] \in R_0(G)$, this descends to a linear map \[ C_V: R_0(G) \to R_0(G) \] \begin{def-prop}[\cite{BKR}, Proposition 5.20] \label{def-prop:critical group} If $V$ is a faithful finite dimensional representation of $G$, then the linear map $C_V$ is nonsingular, and so $\text{coker}(C_V)$ is a finite abelian group. We define the \textit{critical group} $K(V)$ to be this cokernel. We also have that $\text{coker}(\tilde{C}_V) = \mathbb{Z} \cdot \delta^{(e)} \oplus K(V)$. \end{def-prop} \begin{remark} As a quotient of $R_0(G)$, the critical group $K(V)$ inherits a multiplicative structure in addition to its (additive) abelian group structure. We are interested here only in the additive structure of $K(V)$. See Sections 5 and 6 of \cite{BKR} for some discussion of the multiplicative structure. 
\end{remark} We will need the following facts about critical groups and the maps $\tilde{C}_V$: \begin{prop}[\cite{BKR}, Proposition 5.3] \label{prop:eigenvectors} A full set of orthogonal eigenvectors for $\tilde{C}_V$ is given by the column vectors $\delta^{(g)}$ in the character table for $G$: \[ \tilde{C}_V\delta^{(g)}=(\dim(V)-\chi_V(g))\delta^{(g)} \] where $g$ ranges over a set of conjugacy class representatives for $G$, and $\chi_V$ denotes the character of $V$. \end{prop} \begin{remark} When $V$ is faithful, $\chi_V(g) \neq \dim(V)$ for $g \neq e$; thus Proposition \ref{prop:eigenvectors} shows that $\ker(\tilde{C}_V)$ is spanned by $\delta^{(e)}=\sum_{i=0}^{\ell} \dim(V_i)[V_i]=[V_{reg}]$, where $V_{reg}$ is the regular representation of $G$. \end{remark} \begin{theorem}[\cite{G1}, Theorem 3]\label{thm:repeated} \text{} \begin{itemize} \item[a.] Let $e=c_0,...,c_{\ell}$ be a set of conjugacy class representatives for $G$. Then \begin{equation} |K(V)|=\frac{1}{|G|}\prod_{i=1}^{\ell}(\dim(V)-\chi_V(c_i)) \end{equation} \item[b.] Suppose $a$ is an integer value of $\chi_V$ achieved on $m$ different conjugacy classes. Then $K(V)$ contains a subgroup isomorphic to $(\mathbb{Z}/(\dim(V)-a)\mathbb{Z})^{m-1}$. \end{itemize} \end{theorem} \begin{example} \label{ex:perm rep of S4} Let $G=\mathfrak{S}_4$ and let $V=\mathbb{C}^4$ be the 4-dimensional representation where $G$ acts by permuting coordinates. Working in the basis $\{[V_0],...,[V_4]\}$ of $R(G)$ given by the character table below, we decompose each tensor product $V \otimes V_i$ into irreducibles, giving the rows of the matrix $\tilde{C}_V$. 
\begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline & $e$ & $(12)$ & $(123)$ & $(1234)$ & $(12)(34)$ \\ \hline \hline $\chi_V$ & 4 & 2 & 1 & 0 & 0 \\ \hline $\chi_0$ & 1 & 1 & 1 & 1 & 1 \\ \hline $\chi_1$ & 3 & 1 & 0 & -1 & -1 \\ \hline $\chi_2$ & 2 & 0 & -1 & 0 & 2 \\ \hline $\chi_3$ & 3 & -1 & 0 & 1 & -1 \\ \hline $\chi_4$ & 1 & -1 & 1 & -1 & 1 \\ \hline \end{tabular} \end{center} \[ \tilde{C}_V = \begin{pmatrix} 3 & -1 & 0 & 0 & 0 \\ -1 & 2 & -1 & -1 & 0 \\ 0 & -1 & 3 & -1 & 0 \\ 0 & -1 & -1 & 2 & -1 \\ 0 & 0 & 0 & -1 & 3 \end{pmatrix}\] To calculate the cokernel of $\tilde{C}_V: R(G) \to R(G)$, we compute the Smith normal form (see Section \ref{sec:snf} below) of $\tilde{C}_V$ to get $\text{diag}(1,1,1,4,0)$. This shows that $\text{coker}(\tilde{C}_V)\cong \mathbb{Z} \oplus \mathbb{Z}/4\mathbb{Z}$, and so $K(V) \cong \mathbb{Z}/4 \mathbb{Z}$. Alternatively, we could apply Theorem \ref{thm:repeated}(a) to see that \[ |K(V)|=\frac{1}{4!}(4-2)(4-1)(4-0)(4-0)=4 \] and apply part (b) to see that $K(V)$ has a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}$. This forces $K(V) \cong \mathbb{Z}/4\mathbb{Z}$. The critical groups for the permutation representation of $\mathfrak{S}_n$ were computed by the second author in \cite{G1}. In Section \ref{sec:gen-perm-rep} we generalize this result further. \end{example} \subsection{Differential posets} Differential posets are a class of partially ordered sets defined by Stanley in \cite{St}. Differential posets retain many of the striking enumerative and combinatorial properties of Young's lattice $Y$, the lattice of integer partitions ordered by containment of Young diagrams. 
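The computations in Example \ref{ex:perm rep of S4} can be replayed mechanically; the following sympy sketch (ours, not from the paper) verifies both the Smith normal form of $\tilde{C}_V$ and the order formula of Theorem \ref{thm:repeated}(a).

```python
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

# Matrix of C~_V for the permutation representation of S_4, with rows
# obtained by decomposing each V (x) V_i into irreducibles.
C = Matrix([[ 3, -1,  0,  0,  0],
            [-1,  2, -1, -1,  0],
            [ 0, -1,  3, -1,  0],
            [ 0, -1, -1,  2, -1],
            [ 0,  0,  0, -1,  3]])
S = smith_normal_form(C)
# Invariant factors 1, 1, 1, 4, 0: coker = Z + Z/4, so K(V) = Z/4.
assert [abs(S[i, i]) for i in range(5)] == [1, 1, 1, 4, 0]

# Order formula: |K(V)| = (1/|G|) * prod (dim V - chi_V(c_i)) over the
# nonidentity conjugacy classes of S_4, with |S_4| = 24.
dim_V, chi_values = 4, [2, 1, 0, 0]
order = 1
for chi in chi_values:
    order *= dim_V - chi
assert order == 4 * 24   # so |K(V)| = 96 / 24 = 4
```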
\begin{figure}[ht] \begin{center} \begin{tikzpicture}[shorten >=1pt, auto, node distance=3cm, ultra thick, node_style/.style={fill=white!20!,font=\sffamily\Large\bfseries}, edge_style/.style={draw=black, thick}] \node[node_style, scale=1] (v0) at (5,-1) {$\emptyset$}; \node[node_style, scale=0.7] (v1) at (5,0) {\yng(1)}; \node[node_style, scale=0.7] (v2) at (4,1) {\yng(2)}; \node[node_style, scale=0.7] (v3) at (6,1){\yng(1,1)}; \node[node_style, scale=0.7] (v4) at (3,2){\yng(3)}; \node[node_style, scale=0.7] (v5) at (5,2){\yng(2,1)}; \node[node_style, scale=0.7] (v6) at (7,2){\yng(1,1,1)}; \node[node_style, scale=0.7] (v7) at (2,3){\yng(4)}; \node[node_style, scale=0.7] (v8) at (3.65,3){\yng(3,1)}; \node[node_style, scale=0.7] (v9) at (5,3.15){\yng(2,2)}; \node[node_style, scale=0.7] (v10) at (6.15,3.1){\yng(2,1,1)}; \node[node_style, scale=0.6] (v11) at (7.8,3.1){\yng(1,1,1,1)}; \node[node_style, scale=0.6] (v12) at (0.5,4.40){\yng(5)}; \node[node_style, scale=0.6] (v13) at (2.15,4.40){\yng(4,1)}; \node[node_style, scale=0.6] (v14) at (3.5,4.40){\yng(3,2)}; \node[node_style, scale=0.6] (v15) at (4.94,4.40){\yng(3,1,1)}; \node[node_style, scale=0.6] (v16) at (6.22, 4.45){\yng(2,2,1)}; \node[node_style, scale=0.6] (v17) at (7.35,4.6){\yng(2,1,1,1)}; \node[node_style, scale=0.6] (v18) at (8.8,4.6){\yng(1,1,1,1,1)}; \draw[edge_style] (v0) edge node{} (v1); \draw[edge_style] (v1) edge node{} (v2); \draw[edge_style] (v1) edge node{} (v3); \draw[edge_style] (v2) edge node{} (v4); \draw[edge_style] (v2) edge node{} (v5); \draw[edge_style] (v3) edge node{} (v5); \draw[edge_style] (v3) edge node{} (v6); \draw[edge_style] (v4) edge node{} (v7); \draw[edge_style] (v4) edge node{} (v8); \draw[edge_style] (v5) edge node{} (v8); \draw[edge_style] (v5) edge node{} (v9); \draw[edge_style] (v5) edge node{} (v10); \draw[edge_style] (v6) edge node{} (v10); \draw[edge_style] (v6) edge node{} (v11); \draw[edge_style] (v7) edge node{} (v12); \draw[edge_style] (v7) edge node{} (v13); 
\draw[edge_style] (v8) edge node{} (v13); \draw[edge_style] (v8) edge node{} (v14); \draw[edge_style] (v8) edge node{} (v15); \draw[edge_style] (v9) edge node{} (v14); \draw[edge_style] (v9) edge node{} (v16); \draw[edge_style] (v10) edge node{} (v15); \draw[edge_style] (v10) edge node{} (v16); \draw[edge_style] (v10) edge node{} (v17); \draw[edge_style] (v11) edge node{} (v17); \draw[edge_style] (v11) edge node{} (v18); \end{tikzpicture} \end{center} \caption{Young's lattice $Y$, a 1-differential poset.} \label{fig:youngs lattice} \end{figure} We refer the reader to \cite{EC1} for basic definitions related to posets in what follows. \begin{definition}[\cite{St}, Definition 1.1 and Theorem 2.2] For $r \in \mathbb{Z}_{>0}$, a poset $P$ is called an \textit{r-differential poset} if the following properties hold: \begin{itemize} \item[(DP1)] $P$ is a graded locally-finite poset with $\hat{0}$. \item[(DP2)] Let $\mathbb{Z}^{P_n}$ be the free abelian group spanned by elements of the $n$-th rank of $P$. Define the \textit{up and down maps} $U_n:\mathbb{Z}^{P_n} \to \mathbb{Z}^{P_{n+1}}$ and $D_n:\mathbb{Z}^{P_n}\to \mathbb{Z}^{P_{n-1}}$ by \begin{align*} &U_n x := \sum_{x \lessdot y} y, &D_n y := \sum_{x \lessdot y} x \end{align*} where $x \lessdot y$ means that $y$ covers $x$. Then we require that for all $n$ we have \begin{align*} D_{n+1}U_n - U_{n-1}D_n = rI. \end{align*} \end{itemize} \end{definition} \noindent When the context is clear we omit the subscripts from the up and down maps. When $P$ is a differential poset, we let $p_n=|P_n|$ denote the size of the $n$-th rank, and we let $\Delta p_n = p_n-p_{n-1}$ denote the difference in the sizes of consecutive ranks. We make the convention that $p_i=0$ for $i<0$. In the case of Young's lattice, $p_n=p(n)$ where $p(n)$ denotes the number of integer partitions of $n$. If $\lambda$ is a partition, we let $\lambda'$ denote the conjugate partition obtained by reflecting the Young diagram across the diagonal. 
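The defining relation (DP2) can be verified directly for the first few ranks of Young's lattice. In the sketch below (our illustration; the function names are ours), $D_n$ is represented by the transpose of the matrix of $U_{n-1}$, and we also check the path-count identity $\sum_{\lambda \vdash n} e(\lambda)^2 = n!$ from \cite{St}, which is used again in Proposition \ref{prop: size of DTG}.

```python
from math import factorial
from sympy import Matrix, eye

def partitions(n):
    """Integer partitions of n as weakly decreasing tuples."""
    if n == 0:
        return [()]
    out = []
    def gen(rem, cap, prefix):
        if rem == 0:
            out.append(tuple(prefix))
            return
        for k in range(min(rem, cap), 0, -1):
            gen(rem - k, k, prefix + [k])
    gen(n, n, [])
    return out

def up_covers(lam):
    """Partitions obtained from lam by adding a single box."""
    lam = list(lam) + [0]
    res = set()
    for i in range(len(lam)):
        if i == 0 or lam[i] < lam[i - 1]:
            mu = lam[:]
            mu[i] += 1
            res.add(tuple(p for p in mu if p))
    return res

def U(n):
    """Matrix of the up map U_n: Z^{P_n} -> Z^{P_{n+1}} for Young's lattice."""
    rows, cols = partitions(n + 1), partitions(n)
    return Matrix([[1 if mu in up_covers(lam) else 0 for lam in cols]
                   for mu in rows])

# (DP2) with r = 1: D_{n+1} U_n - U_{n-1} D_n = I, where D_n = U_{n-1}^T.
for n in range(1, 7):
    Un, Um = U(n), U(n - 1)
    assert Un.T * Un - Um * Um.T == eye(len(partitions(n)))

# dim V_lambda = e(lambda), and the sum of e(lambda)^2 over rank n is n!,
# matching |S_n| (Stanley's Corollary 3.9 with r = 1).
e = Matrix([1])
for k in range(6):
    e = U(k) * e
    assert sum(x**2 for x in e) == factorial(k + 1)
```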
The following results of Stanley characterize the eigenspaces of $UD$ in terms of the rank sizes. \begin{theorem}[\cite{St}, Theorem 4.1] \label{thm:DP eigenvalues} Let $P$ be an $r$-differential poset and let $n \in \mathbb{N}$. Then $UD_n$ is semisimple and has characteristic polynomial: \[ \text{ch}(UD_n)=\prod_{i=0}^n (x-ri)^{\Delta p_{n-i}}. \] \end{theorem} \begin{theorem}[\cite{St}, Proposition 4.6] \label{thm:DP eigenspaces} Let $E_n(ri)$ denote the eigenspace of $UD_n$ belonging to the eigenvalue $ri$. Then \[ E_n(0)=\ker(D_n)=(UP_{n-1})^{\perp} \] and \[ E_n(ri)=U^iE_{n-i}(0) \] for $1 \leq i \leq n$. \end{theorem} \subsection{Differential towers of groups} \begin{definition}[\cite{MR}, Definition 6.1] For $r \in \mathbb{Z}_{>0}$ define an \textit{r-differential tower of groups} $\mathfrak{G}$ to be an infinite tower of finite groups \[ \mathfrak{G}: \{e\}=G_0 \subseteq G_1 \subseteq G_2 \subseteq \cdots \] such that for all $n$: \begin{itemize} \item[(DTG1)] The branching rules for restricting irreducibles from $G_n$ to $G_{n-1}$ are multiplicity-free, and \item[(DTG2)] $\text{Res}^{G_{n+1}}_{G_n} \text{Ind}_{G_n}^{G_{n+1}}- \text{Ind}_{G_{n-1}}^{G_n}\text{Res}_{G_{n-1}}^{G_n} = r \cdot \text{id}$, where both sides are regarded as linear operators on $R(G_n)$. \end{itemize} \end{definition} An $r$-differential tower of groups $\mathfrak{G}$ corresponds to an $r$-differential poset $P=P(\mathfrak{G})$ whose $n$-th rank $P_n$ is in bijection with the set $\text{Irr}(G_n)$ of irreducible representations of $G_n$. We will use Greek letters like $\lambda$ to denote elements of $P(\mathfrak{G})$ and $V_{\lambda}$ to denote the corresponding irreducible representation. We write $|\lambda|=n$ if $\lambda \in P_n$, or equivalently if $V_{\lambda}$ is a representation of $G_n$. 
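Theorem \ref{thm:DP eigenvalues} can be checked by hand at rank $5$ of Young's lattice, where $p_0,\dots,p_5 = 1,1,2,3,5,7$. The sympy sketch below is our illustration, not from the paper; the hard-coded matrix encodes the cover relations between ranks $4$ and $5$ of $Y$.

```python
from sympy import Matrix

# Up map U_4 of Young's lattice: columns indexed by the partitions
# 4, 31, 22, 211, 1111 of 4; rows by 5, 41, 32, 311, 221, 2111, 11111.
U4 = Matrix([
    [1, 0, 0, 0, 0],   # 5     covers 4
    [1, 1, 0, 0, 0],   # 41    covers 4, 31
    [0, 1, 1, 0, 0],   # 32    covers 31, 22
    [0, 1, 0, 1, 0],   # 311   covers 31, 211
    [0, 0, 1, 1, 0],   # 221   covers 22, 211
    [0, 0, 0, 1, 1],   # 2111  covers 211, 1111
    [0, 0, 0, 0, 1],   # 11111 covers 1111
])
UD5 = U4 * U4.T   # D_5 = U_4^T, so this is UD acting on rank 5

# Predicted multiplicities: eigenvalue i occurs Delta p_{5-i} times (r = 1).
p = [1, 1, 2, 3, 5, 7]
dp = lambda k: p[k] - (p[k - 1] if k >= 1 else 0)
predicted = {i: dp(5 - i) for i in range(6) if dp(5 - i) > 0}
# predicted == {0: 2, 1: 2, 2: 1, 3: 1, 5: 1}

assert {int(k): int(v) for k, v in UD5.eigenvals().items()} == predicted
```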
For $\lambda \in P_n$ and $\mu \in P_{n+1}$, $\lambda \lessdot \mu$ in $P$ if and only if $\text{Res}^{G_{n+1}}_{G_n} V_{\mu}$ contains $V_{\lambda}$ in its irreducible decomposition; thus condition (DTG2) becomes condition (DP2). \begin{example} Let $Y$ denote Young's lattice of integer partitions (see Figure \ref{fig:youngs lattice} above). It is well known that irreducible representations of the symmetric group $\mathfrak{S}_n$ are indexed by partitions $\lambda=(\lambda_1, \lambda_2,...)$ with $|\lambda|=\sum_i \lambda_i=n$; we refer the reader to \cite{James} for background on the representation theory of the symmetric group. Young's rule says that $\text{Res}^{\mathfrak{S}_n}_{\mathfrak{S}_{n-1}} V_{\lambda}$ decomposes as a direct sum of $V_{\nu}$ where $\nu$ ranges over all possible ways to remove a single box from the Young diagram for $\lambda$. It is well known \cite{St} that $Y$ is a 1-differential poset, so (DP2) holds, and by the above identification (DTG2) also holds. Thus \[ \mathfrak{S}: \{e\} \subset \mathfrak{S}_1 \subset \mathfrak{S}_2 \subset \cdots \] is a 1-differential tower of groups, with $P(\mathfrak{S})=Y$. More generally, if $A$ is an abelian group of size $r$, then Okada \cite{O} showed that the tower of wreath products $A \wr \mathfrak{S}: \{e\} \subset A \subset A \wr \mathfrak{S}_2 \subset A \wr \mathfrak{S}_3 \subset \cdots$ is an $r$-differential tower of groups with $P(A \wr \mathfrak{S})=Y^r$. \end{example} In the following result we show that the groups in any differential tower of groups have the same order as those in $A \wr \mathfrak{S}$. \begin{prop} \label{prop: size of DTG} Let $\mathfrak{G}: \{e\}=G_0 \subset G_1 \subset \cdots$ be an $r$-differential tower of groups. Then $|G_n|=r^n \cdot n!$ for all $n \geq 0$. \end{prop} \begin{proof} Let $P=P(\mathfrak{G})$ be the corresponding $r$-differential poset. 
Since restriction of representations does not change dimension, we have \[ \dim(V_{\lambda})=\sum_{\mu \lessdot_P \lambda} \dim(V_{\mu}) \] It follows by induction that $\dim(V_{\lambda})=e(\lambda)$, where $e(\lambda)$ denotes the number of upward paths from $\hat{0}$ to $\lambda$ in the Hasse diagram of $P$. Stanley showed in Corollary 3.9 of \cite{St} that for any $r$-differential poset $Q$ we have $\sum_{x \in Q_n} e(x)^2 = r^n \cdot n!$. Applying this to $P$, and recalling that $\sum_{V \in \text{Irr}(G)}\dim(V)^2=|G|$ for any finite group $G$ gives the desired result. \end{proof} \begin{remark} Recent work \cite{G2} of the second author shows that the tower $\mathfrak{S}$ of symmetric groups is the only 1-differential tower of groups. \end{remark} \subsection{Smith normal form and cokernels of linear maps} \label{sec:snf} The cokernel of a linear map over a PID is described by the Smith normal form of the corresponding matrix. In this section we review basic facts about Smith normal form, and state a conjecture about the Smith normal form of the map $UD$ in a differential poset. \begin{definition} Let $M \in R^{n \times n}$ be an $n \times n$ matrix with entries in some ring $R$. We say that $S$ is a \textit{Smith normal form} for $M$ if: \begin{itemize} \item $S=PMQ$ for some $P,Q \in GL_n(R)$, and \item $S$ is a diagonal matrix $S=\text{diag}(s_1,...,s_n)$ such that successive diagonal entries divide one another: $s_i | s_{i+1}$ for all $i=1,...,n-1$. \end{itemize} \end{definition} The following facts are well-known (see, for example \cite{St SNF}). \begin{prop} \text{} \begin{itemize} \item[a.] If $M$ has a Smith normal form, then it is unique up to multiplication of the $s_i$ by units in $R$. \item[b.] If $R$ is a PID, then all matrices $M$ have a Smith normal form. \item[c.] If $M$ has a Smith normal form $S=\text{diag}(s_1,...,s_n)$, then \[ \text{coker}(M) \cong \bigoplus_{i=1}^n R/(s_i). 
\] \end{itemize} \end{prop} \begin{prop} \label{prop:SNF from minors} Let $R$ be a PID, and suppose that the $n \times n$ matrix $M$ over $R$ has Smith normal form $\text{diag}(s_1,...,s_n)$. Then for $1 \leq k \leq n$ we have that $s_1s_2 \cdots s_k$ is equal to the greatest common divisor of all $k \times k$ minors of $M$, with the convention that if all $k \times k$ minors are 0, then their greatest common divisor is 0. \end{prop} We will primarily be interested in determining Smith normal forms over $\mathbb{Z}$, but we will use some results about Smith normal forms over $\mathbb{Z}[t]$ as a computational tool. When $R=\mathbb{Z}$ we will always assume that the $s_i$ are nonnegative (this can be achieved since $\pm 1$ are the units in $\mathbb{Z}$). When referring to an abelian group $A = \text{coker}(M)$, we say that $A$ has $k$ \textit{factors} if exactly $k$ of the $s_i$ are different from 1; dually, we write $\text{ones}(A)=k$ if exactly $k$ of the $s_i$ are equal to 1. In \cite{MR} Miller and Reiner make the following remarkable conjecture (note that $\mathbb{Z}[t]$ is not a PID): \begin{conjecture}[\cite{MR}, Conjecture 1.1] \label{conj:Miller-Reiner} For all differential posets $P$, and for all $n$, the map $U_{n-1}D_n+tI:\mathbb{Z}[t]^{p_n} \to \mathbb{Z}[t]^{p_n}$ has a Smith normal form over $\mathbb{Z}[t]$. \end{conjecture} We are interested in Smith normal forms over $\mathbb{Z}[t]$ because of the following very strong consequence: \begin{prop}[\cite{MR}, Proposition 8.4] \label{prop:snf} Let $M \in \mathbb{Z}^{n \times n}$ be semisimple and have integer eigenvalues, and suppose $M+tI$ has a Smith normal form over $\mathbb{Z}[t]$. Then the Smith normal form $S=\text{diag}(s_1,...,s_n)$ of $M$ is given by \[ s_{n+1-i}=\prod_{\substack{k \\ m(k) \geq i}} k \] where $m(k)$ denotes the multiplicity of the eigenvalue $k$ of $M$. 
\end{prop} Since $UD$ is semisimple and has integer eigenvalues for any differential poset by Theorem \ref{thm:DP eigenvalues}, if Conjecture \ref{conj:Miller-Reiner} holds for some differential poset $P$, then Proposition \ref{prop:snf} uniquely determines the Smith normal form of $UD$. Shah showed that Conjecture \ref{conj:Miller-Reiner} is true in the cases of interest to us here: \begin{theorem}[\cite{Sh}, Corollary 5.2] \label{thm: MR for Y} For any $r \geq 1$: \begin{itemize} \item[a.] Conjecture \ref{conj:Miller-Reiner} holds for $Y^r$. \item[b.] For all $n$ the down maps $D_n: \mathbb{Z}^{p_n} \to \mathbb{Z}^{p_{n-1}}$ in $Y^r$ are surjective. \end{itemize} \end{theorem} \section{Maps induced between critical groups} \label{sec:maps} For $\sigma: H \to G$ a group homomorphism and $W$ a representation of $G$, we let $W^{\sigma}$ denote the representation of $H$ given by $h \cdot w := \sigma(h)w$ for all $h \in H, w \in W$. If $\sigma$ is the inclusion of a subgroup, then $W^{\sigma}=\text{Res}^G_H W$. If $\sigma$ is an automorphism of $G$, then $W^{\sigma}$ corresponds to the usual notion of \textit{twisting} by $\sigma$. We extend by linearity to define $W^{\sigma}$ for $W$ a virtual representation. \begin{theorem} \label{Thm:induced-map} Let $\sigma: H \hookrightarrow G$ be an injective group homomorphism and $V$ a faithful representation of $G$. Then $\bar{\sigma}: [W]\mapsto [W^{\sigma}]$ is a well-defined group homomorphism $K(V) \to K(V^{\sigma})$. If $\sigma$ is an isomorphism, then so is $\bar{\sigma}$. \end{theorem} \begin{proof} First observe that the following diagram commutes: \[ \begin{CD} R(G) @> \bar{\sigma} >> R(H) \\ @V[\text{$V$}]\cdot(-)VV @VV [\text{$V^{\sigma}$}]\cdot (-) V \\ R(G) @> \bar{\sigma} >> R(H) \end{CD} \] This is because for any genuine representation $W$, we have \[ [V^{\sigma}]\cdot [W^{\sigma}]=[V^{\sigma} \otimes W^{\sigma}]=[(V \otimes W)^{\sigma}] \] and extending by linearity gives the desired result. 
Since $\bar{\sigma}$ preserves virtual dimension, the above diagram restricts to $R_0(G)$ and $R_0(H)$. The commutativity of the resulting diagram is exactly the condition needed to ensure that the map of quotient groups $K(V) \to K(V^{\sigma})$ is well-defined, and the injectivity of $\sigma$ guarantees that $V^{\sigma}$ is a faithful representation of $H$. If $\sigma$ is invertible, then it is easy to see that $\bar{\sigma^{-1}}=\bar{\sigma}^{-1}$. \end{proof} \begin{example} Let $\sigma$ denote the unique outer automorphism of $\mathfrak{S}_6$ (the map $\bar{\sigma}$ is uninteresting for inner automorphisms since $W \cong W^{\sigma}$). Indexing the irreducible representations of $\mathfrak{S}_6$ by partitions in the usual way, the action of $\bar{\sigma}$ is \begin{align*} V_{(5,1)} & \leftrightarrow V_{(2,2,2)} \\ V_{(2,1^4)} & \leftrightarrow V_{(3,3)} \\ V_{(4,1,1)} & \leftrightarrow V_{(3,1^3)} \end{align*} with the remaining irreducible representations fixed \cite{Wi}. One can calculate that \begin{align*} K(V_{(5,1)}) & \cong K(V_{(2,2,2)}) \cong (\mathbb{Z}/6 \mathbb{Z})^2 \oplus \mathbb{Z}/120 \mathbb{Z} \\ K(V_{(2,1^4)}) & \cong K(V_{(3,3)}) \cong \mathbb{Z}/24 \mathbb{Z} \oplus \mathbb{Z}/480 \mathbb{Z} \\ K(V_{(4,1,1)}) & \cong K(V_{(3,1^3)}) \cong \mathbb{Z}/3 \mathbb{Z} \oplus \mathbb{Z}/90 \mathbb{Z} \oplus \mathbb{Z}/47520 \mathbb{Z}. \end{align*} \end{example} In the case $\sigma: H \hookrightarrow G$, one might have hoped that, in analogy with Proposition \ref{prop:graph covering surj}, the map $\bar{\sigma}: [W] \mapsto [\text{Res}^{G}_H W]$ would be surjective on critical groups; the following example shows that this is not the case for general groups $H \subset G$. \begin{example} Let $G=D_5$ be the dihedral group of order 10, and let $V$ be the direct sum of a two-dimensional irreducible and the non-trivial one-dimensional irreducible. 
This is the complexification of the action of $G$ on $\mathbb{R}^3$ by rotations of a fixed plane and reflections across that plane. One can calculate (see \cite{G3}, Appendix C) that $K(V)\cong \mathbb{Z}/2 \mathbb{Z}$. Letting $H=C_5$ be the cyclic subgroup, however, one can show that $K(\text{Res}^G_H V) \cong \mathbb{Z}/5 \mathbb{Z}$. Thus $\bar{\text{Res}}: K(V) \to K(\text{Res}^G_H V)$ cannot be surjective. This is a natural counterexample to pick, since $C_5$ has more conjugacy classes than $D_5$, and so $\text{Res}: R(D_5) \to R(C_5)$ cannot be surjective. \end{example} There are two classes of groups for which $\bar{\text{Res}}$ can be seen to be surjective for all $V$, both of which will be investigated further throughout the paper. \begin{prop} \label{prop:surj} The map $\bar{\text{Res}}:K(V) \to K(\text{Res}^G_H V)$ is surjective if: \begin{itemize} \item[(i)] $G$ is abelian, or \item[(ii)] $G=A \wr \mathfrak{S}_n$ and $H=A \wr \mathfrak{S}_m$ for $A$ an abelian group and $m \leq n$. \end{itemize} \end{prop} \begin{proof} In both cases the map $\text{Res}:R(G) \to R(H)$ is already surjective. This is clear in case (i); in case (ii) this follows from Theorem \ref{thm: MR for Y} and the fact that $A \wr \mathfrak{S}$ is a differential tower of groups with corresponding differential poset $Y^r$, where $r=|A|$. \end{proof} For completeness, we also mention that induction induces a map in the opposite direction: \begin{prop} \label{prop:ind map} The map $[W] \mapsto [\text{Ind}^G_H W]$ induces a map on critical groups $\bar{\text{Ind}}: K(\text{Res} V) \to K(V)$. If $\bar{\text{Res}}$ is a surjection, then $\bar{\text{Ind}}$ is an injection. \end{prop} \begin{proof} To see that $\text{Ind}$ induces a map on critical groups, one easily checks that $(\text{Ind} \: W) \otimes_{\mathbb{C}} V \cong \text{Ind}(W \otimes \text{Res} \: V)$, so that the required diagram commutes. 
The second claim follows from the observation that the map $\text{Ind}:R(H) \to R(G)$ is the transpose of the map $\text{Res}:R(G) \to R(H)$ and a standard application of the Snake Lemma. \end{proof} \subsection{Cayley graph covering maps} \label{subsec:cayley} In this section we investigate the relationship between critical groups of group representations and critical groups of graphs when $G$ is abelian. For any finite group $G$, we let $\hat{G}=\text{Hom}(G, \mathbb{C})$ denote the Pontryagin dual group. When $G$ is abelian, all irreducible representations are 1-dimensional, and so $\hat{G}$ is equal to the group of irreducible characters of $G$ under point-wise multiplication. If $V$ is a faithful representation of an abelian group $G$, then the multiset $\mathcal{S}_V$ of characters of irreducible components appearing in $V$ generates $\hat{G}$ as a group. This follows from the standard fact that all irreducible representations of a finite group appear as factors in a sufficiently large tensor power of a fixed faithful representation. If $G$ is a group with generating multiset $\mathcal{S}$, the \textit{Cayley graph} $\text{Cay}(G, \mathcal{S})$ is the directed multigraph with vertex set $G$ and directed edges $g \to gx$ whenever $x \in \mathcal{S}$. See Figure \ref{fig:Cayley-graph} for an example of this construction. \begin{theorem} \label{thm:cay-graph} For $V$ a faithful representation of an abelian group $G$ the critical groups $K(V)$ and $K(\text{Cay}(\hat{G}, \mathcal{S}_V))$ can be naturally identified, and the diagram \begin{center} $\begin{CD} K(V) @= K(\text{Cay}(\hat{G}, \mathcal{S}_V)) \\ @V\bar{\text{Res}}VV @VV\bar{\varphi}V \\ K(\text{Res}^G_H V) @= K(\text{Cay}(\hat{H}, \mathcal{S}_{\text{Res} V})) \end{CD}$ \end{center} commutes, where $\bar{\varphi}$ is the surjection on critical groups induced by the natural graph covering map $\varphi:\text{Cay}(\hat{G},\mathcal{S}_{V}) \to \text{Cay}(\hat{H}, \mathcal{S}_{\text{Res} V})$. 
\end{theorem} \begin{proof} \begin{comment} Suppose $G=\prod_{i=1}^k G_i$ with each $G_i$ cyclic of order $d_i$, and write $H=\prod_{i=1}^k H_i$ where $H_i$ is cyclic of order $d_i' | d_i$. Then the elements of $\hat{G}$ (that is, the irreducible characters of $G$) are indexed by $k$-tuples $(a_1,...,a_k)$ with $0 \leq a_i \leq d_i-1$ corresponding to the representation sending a generator of $G_i$ to $\zeta_{d_i}^{a_i}$ for each $i$, where $\zeta_d$ denotes a primitive $d$-th root of unity. Consider the matrix for $\tilde{C}_V$ in the basis of irreducibles. \end{comment} Define a map $\varphi: \hat{G} \twoheadrightarrow \hat{H}$ by $\chi \mapsto \text{Res}^G_H \chi$. Now, $\hat{G}$ and $\hat{H}$ are the vertex sets of the Cayley graphs in question, so $\varphi$ induces a graph map $\varphi: \text{Cay}(\hat{G}, \mathcal{S}_V) \twoheadrightarrow \text{Cay}(\hat{H}, \mathcal{S}_{\text{Res}{V}})$ by sending each edge $\chi \to \chi \cdot \psi$ to the edge $\text{Res} \chi \to (\text{Res} \chi)\cdot (\text{Res} \psi)$. Now, identifying the basis of irreducibles in $R(G)$ and $R(H)$ with the elements of $\hat{G}$ and $\hat{H}$ respectively, it is clear from the definitions that $\tilde{C}_V$ and $\tilde{L}(\text{Cay}(\hat{G},\mathcal{S}_V))$ define the same linear maps, and that, under this identification $\bar{\varphi}$ and $\bar{\text{Res}}$ agree. \end{proof} \begin{comment} \begin{example} Let $G = \langle g \rangle \cong \mathbb{Z}/6 \mathbb{Z}$ and $H = \langle g^3 \rangle \cong \mathbb{Z}/2 \mathbb{Z}$. Let $\zeta$ be a primitive sixth root of unity, and let $V$ be given by $g \mapsto \begin{pmatrix} \zeta & 0 \\ 0 & \zeta^3 \end{pmatrix}$. 
Then the Cayley graphs $\text{Cay}(\hat{G},\mathcal{S}_V)$ and $\text{Cay}(\hat{H},\mathcal{S}_{\text{Res} V})$ are as shown below: \end{comment} \begin{figure} \begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.6cm, thick,main node/.style={circle,draw,font=\Large\bfseries}, baseline=(current bounding box.center)] \node[main node] (1) {1}; \node[main node] (2) [right of=1, fill=gray] {$\zeta$}; \node[main node] (3) [below right of=2] {$\zeta^2$}; \node[main node] (4) [below left of=3, fill=gray] {$\zeta^3$}; \node[main node] (5) [left of=4] {$\zeta^4$}; \node[main node] (6) [below left of=1, fill=gray] {$\zeta^5$}; \path (1) edge node [green, thick]{} (2) edge node []{} (4) (2) edge node []{} (3) edge node []{} (5) (3) edge node []{} (4) edge node []{} (6) (4) edge node []{} (5) edge node []{} (1) (5) edge node []{} (6) edge node []{} (2) (6) edge node []{} (1) edge node []{} (3); \end{tikzpicture} \hspace{0.2 cm} $\xrightarrow{\varphi}$ \hspace{0.2 cm} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.6cm, thick,main node/.style={circle,draw,font=\Large\bfseries}, baseline=(current bounding box.center)] \node[main node] (1) {1}; \node[main node] (2) [below of=1, fill=gray] {$\zeta^3$}; \path (1) [<->,thick] edge node [right]{2} (2); \end{tikzpicture} \end{center} \caption{The Cayley graphs for the case $G=\langle g \rangle \cong \mathbb{Z}/6 \mathbb{Z}$ and $H=\langle g^3 \rangle \cong \mathbb{Z}/ 2 \mathbb{Z}$ with $V$ given by $g \mapsto \text{diag}(\zeta, \zeta^3)$, where $\zeta$ is a primitive sixth root of unity.} \label{fig:Cayley-graph} \end{figure} \begin{comment} Then, using the basis of irreducibles for $R(G)$ and $R(H)$ we have: \begin{align*} \tilde{C}_V &= \tilde{L}(\text{Cay}(\hat{G},\mathcal{S}_V)) = \begin{pmatrix} 2 & -1 & 0 & -1 & 0 & 0 \\ 0 & 2 & -1 & 0 & -1 & 0 \\ 0 & 0 & 2 & -1 & 0 & -1 \\ -1 & 0 & 0 & 2 & -1 & 0 \\ 0 & -1 & 0 & 0 & 2 & -1 \\ -1 & 0 & -1 & 0 & 0 & 2 \end{pmatrix} \\ 
\tilde{C}_{\text{Res} V} &= \tilde{L}(\text{Cay}(\hat{H},\mathcal{S}_{\text{Res}{V}})) = \begin{pmatrix} 2 & -2 \\ -2 & 2 \end{pmatrix} \end{align*} \end{example} \end{comment} \begin{remark} For graph covering maps $\varphi:\Gamma \to \Gamma'$, Reiner and Tseng \cite{RT} give an interpretation of the kernel of $\bar{\varphi}:K(\Gamma) \twoheadrightarrow K(\Gamma')$ as a certain ``voltage graph critical group''. Thus the identification in Theorem \ref{thm:cay-graph} allows one to describe the kernel of $\bar{\text{Res}}$ in these same terms in the abelian group case. \end{remark} \section{Critical groups and differential posets}\label{sec:gen-perm-rep} By a \textit{word} of length $2k$, we mean a sequence $w=w_1...w_{2k}$ of $U$'s and $D$'s. A word $w$ is \textit{balanced} if the number of $U$'s is equal to the number of $D$'s. When a tower of groups $G_0 \subset G_1 \subset \cdots$ is clear from context, we let $w(\text{Ind}, \text{Res})$ denote the linear operator $\bigoplus_i R(G_i) \to \bigoplus_i R(G_i)$ defined by replacing the $U$'s in $w$ with $\text{Ind}$ and the $D$'s with $\text{Res}$ and viewing the resulting sequence as a composition of linear operators. We always assume that induction and restriction are between consecutive groups in the sequence and that $\text{Res} [V]=0$ for $[V] \in R(G_0)$. Similarly, if $P$ is a differential poset, then we let the linear map $w(U,D): \bigoplus_i \mathbb{Z}^{P_i} \to \bigoplus_i \mathbb{Z}^{P_i}$ be defined as the natural composition of linear operators. When $w$ is balanced, then for each $i$, $w(\text{Ind}, \text{Res})$ (resp. $w(U,D)$) restricts to a map $R(G_i) \to R(G_i)$ (resp. $\mathbb{Z}^{P_i} \to \mathbb{Z}^{P_i}$). \begin{example} Let $\mathfrak{S}$ be the tower of symmetric groups, and $Y=P(\mathfrak{S})$ denote Young's lattice. 
Fix $i \geq 1$; then $w(U,D)=UD$ is a map $\mathbb{Z}^{Y_i} \to \mathbb{Z}^{Y_i}$ and $w(\text{Ind},\text{Res})$ is a map $R(\mathfrak{S}_i) \to R(\mathfrak{S}_i)$ sending $[W]\mapsto [\text{Ind}^{\mathfrak{S}_{i}}_{\mathfrak{S}_{i-1}}\text{Res}^{\mathfrak{S}_i}_{\mathfrak{S}_{i-1}} W]$. Thus $w(\text{Ind}, \text{Res})[\mathbbm{1}_{\mathfrak{S}_i}]=[\text{Ind}_{\mathfrak{S}_{i-1}}^{\mathfrak{S}_i}\mathbbm{1}_{\mathfrak{S}_{i-1}}]$ is the class of the permutation representation of $\mathfrak{S}_i$. If we identify $\mathbb{Z}^{Y_i}$ with $R(\mathfrak{S}_i)$ via the differential tower of group structure, then one can check that $w(\text{Ind}, \text{Res})[\mathbbm{1}_{\mathfrak{S}_i}]\cdot (-)$ and $w(U,D)$ in fact agree as linear maps. This fact will be explained below. \end{example} The following basic fact follows from (\cite{O}, Appendix, Theorem A). \begin{lemma} \label{lem:tensor induced trivial} Let $G$ be a finite group, and let $H \subset G$ be a subgroup. \begin{itemize} \item[a.] The operator $\text{Ind} \text{Res}$ on $R(G)$ has a complete system of orthogonal eigenvectors given by \[ \delta^{(g)} := \sum_{V \in \text{Irr}(G)} \chi_V(g) \cdot [V] \] as $g$ ranges through a set of conjugacy class representatives of $G$. \item[b.] The associated eigenvalue equation is \[ \text{Ind} \text{Res} \cdot \delta^{(g)} = \chi_{\text{Ind} \mathbbm{1}_H}(g) \cdot \delta^{(g)}. \] That is, $[\text{Ind} \mathbbm{1}_H] \cdot (-)$ and $\text{Ind} \text{Res}$ are equal as linear maps $R(G) \to R(G)$.
\end{itemize} \end{lemma} When $f=\sum_i c_i w^{(i)}$ is a finite nonnegative sum of balanced words, and a differential tower of groups $\mathfrak{G}$ is understood, write $V(f)_n$ for the representation of $G_n$ whose class is \[ f(\text{Ind}, \text{Res})[\mathbbm{1}_{G_n}]=\sum_i c_i w^{(i)}(\text{Ind}, \text{Res})[\mathbbm{1}_{G_n}]. \] For example, if $f$ consists of the single word $U^kD^k$, and we are working in the tower $\mathfrak{S}$ of symmetric groups, then $V(f)_n=\text{Ind}_{\mathfrak{S}_{n-k}}^{\mathfrak{S}_n} \mathbbm{1}$ is a basic object of study in the representation theory of the symmetric group. Under the standard characteristic map $\text{ch}: R(\mathfrak{S}_n) \xrightarrow{\sim} \Lambda_n$ between the representation ring of $\mathfrak{S}_n$ and the ring of degree-$n$ symmetric functions, this representation is sent to the complete homogeneous symmetric function $h_{(n-k,1^k)}$ indexed by a ``hook shape''. \begin{prop} \label{prop:IndRes = UD} Let $\mathfrak{G}: G_0 \subset G_1 \subset \cdots$ be an $r$-differential tower of groups with corresponding differential poset $P=P(\mathfrak{G})$, and let $f$ be a finite nonnegative sum of balanced words. Then, identifying $\mathbb{Z}^{P_n}$ and $R(G_n)$, the maps $f(U,D)_n$ and $[V(f)_n] \cdot (-)$ are equal. Furthermore, the character values of $V(f)_n$ are equal to the eigenvalues of $f(U,D)_n$. \end{prop} \begin{proof} We prove the result for a balanced word $w$; the case for general $f$ then follows by linearity. By the differential tower of group structure, we have \[ DU-UD = rI = \text{Res} \text{Ind} - \text{Ind} \text{Res}. \] Therefore we can write \begin{align*} w(U,D)&=\sum_{i\geq 0} c_iU^iD^i \\ w(\text{Ind}, \text{Res})&=\sum_{i \geq 0} c_i \text{Ind}^i \text{Res}^i \end{align*} with the same coefficients $c_i$. For each $i$, the result for the word $U^iD^i$ follows from Lemma \ref{lem:tensor induced trivial}, and so the result for $w$ holds by linearity.
The eigenvalues of $[V(w)_n]\cdot (-)$ are exactly the character values of $V(w)_n$ by Proposition \ref{prop:eigenvectors}, so these must agree with the eigenvalues of $w(U,D)_n$. \end{proof} \begin{remark} Any (not-necessarily-differential) tower of groups gives rise to a graded multigraph in the same way that a differential tower of groups gives rise to the Hasse diagram of a differential poset. One could write down a statement analogous to Proposition \ref{prop:IndRes = UD} in this context; however, we are interested in this special case since the combinatorial and algebraic properties of the up and down maps in differential posets are so strong. \end{remark} \subsection{The generalized permutation representation} The representation \\ $\text{Ind}_{\mathfrak{S}_{n-1}}^{\mathfrak{S}_n} \mathbbm{1}$ of the symmetric group $\mathfrak{S}_n$ is easily seen to be isomorphic to the $n$-dimensional permutation representation, where $\mathfrak{S}_n$ acts by permuting coordinates in $\mathbb{C}^n$. In \cite{G1}, the second author was able to explicitly compute the critical group for this representation, generalizing Example \ref{ex:perm rep of S4} to arbitrary $n$. Here we extend that result to a broader class of differential towers of groups: \begin{theorem} \label{thm:gen-perm-rep} Let $\mathfrak{G}=G_0 \subset G_1 \subset \cdots$ be an $r$-differential tower of groups such that the associated differential poset $P=P(\mathfrak{G})$ satisfies Conjecture \ref{conj:Miller-Reiner} (such as $\mathfrak{G}=A \wr \mathfrak{S}$ with $A$ abelian of order $r$). Let $m_0(i)$ denote the smallest $m$ such that $\Delta p_m \geq i$. Let $V=V(UD)_n=\text{Ind}_{G_{n-1}}^{G_n} \mathbbm{1}_{G_{n-1}}$. Then \[ K(V)=\bigoplus_{i=2}^{p_n} \mathbb{Z}/q_i \mathbb{Z} \] where $q_i=r^{n-m_0(i)+1}\cdot n(n-1)\cdots m_0(i)$. \end{theorem} \begin{proof} By Proposition \ref{prop: size of DTG}, $\dim(V)=|G_n|/|G_{n-1}|=rn$.
By Proposition \ref{prop:IndRes = UD}, the matrix for the map $\tilde{C}$ of multiplication by $rn[\mathbbm{1}]-[V]$ in $R(G_n)$ in the basis $\{V_{\lambda}|\lambda \in P_n\}$ is equal to the matrix for the map $rnI-UD$ in the basis $\{\lambda|\lambda \in P_n \}$. Since $P$ satisfies Conjecture \ref{conj:Miller-Reiner} by hypothesis, Proposition \ref{prop:snf} and Theorem \ref{thm:DP eigenvalues} together imply that the Smith normal form of $\tilde{C}$ is $\text{diag}(s_1,...,s_{p_n})$ with \[ s_{p_n+1-i}=\prod_{\substack{0 \leq j \leq n \\ \Delta p_{n-j} \geq i}}(rn-rj). \] Re-indexing via $j \mapsto n-j$ we get that \[ s_{p_n+1-i}=\prod_{\substack{0 \leq j \leq n \\ \Delta p_j \geq i}}rj. \] (Note that the factor $j=0$ contributes the unique zero when $i=1$, and is vacuous for $i \geq 2$ since $\Delta p_0 = 1$.) The theorem follows by rewriting this product using $m_0$. \end{proof} \subsection{The structure of $K(V(f))$} In this section we investigate the order and subgroup structure of $K(V(f)_n)$ for general finite sums of balanced words $f$. Although exact formulas for the critical group, like that given in Theorem \ref{thm:gen-perm-rep} for the case $f(U,D)=UD$, remain elusive in general, the results below considerably restrict the structure of $K(V(f)_n)$. The following proposition of Stanley characterizes eigenspaces for sums of balanced words in a differential poset: \begin{prop}[\cite{St}, Proposition 4.12] \label{prop:word-eigenspaces} Let $P$ be an $r$-differential poset and let $f(U,D)$ be a finite sum of balanced words. Write \[ f(U,D)=\sum_{j \geq 0} \beta_j(UD)^j \] and define \[ \alpha_i=\sum_{j \geq 0} \beta_j(ri)^j. \] Then the characteristic polynomial of $f(U,D)_n:\mathbb{Z}^{P_n} \to \mathbb{Z}^{P_n}$ is given by \[ \text{ch} f(U,D)_n = \prod_{i=0}^n (x-\alpha_i)^{\Delta p_{n-i}}. \] \end{prop} Proposition \ref{prop:word-eigenspaces} allows us to characterize the order and subgroup structure of critical groups $K(V(f)_n)$.
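The closed formula of Theorem \ref{thm:gen-perm-rep} is easy to check by machine in small cases. The following Python sketch (our own illustrative addition, not part of the original computations) treats the $r=1$ tower of symmetric groups: it builds the matrix of $UD$ on $\mathbb{Z}^{Y_n}$ from the cover relations of Young's lattice, forms $\tilde{C}=nI-UD$, and computes its Smith normal form with SymPy.

```python
# Sanity check of the permutation-representation critical group (r = 1 case):
# K(Ind_{S_{n-1}}^{S_n} 1) is read off from the Smith normal form of
# C~ = n*I - UD acting on Z^{Y_n}, where Y_n is the set of partitions of n.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def covers(lam):
    """Partitions covered by lam in Young's lattice (remove one corner box)."""
    lam = list(lam)
    below = set()
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            mu = lam[:i] + [lam[i] - 1] + lam[i + 1:]
            below.add(tuple(part for part in mu if part > 0))
    return below

def critical_group_perm_rep(n):
    """Diagonal of the Smith normal form of C~ = n*I - UD on Z^{Y_n}."""
    Yn, Ym = list(partitions(n)), list(partitions(n - 1))
    # Matrix of the down map D : Z^{Y_n} -> Z^{Y_{n-1}}; U is its transpose.
    D = Matrix(len(Ym), len(Yn),
               lambda i, j: 1 if Ym[i] in covers(Yn[j]) else 0)
    C = n * Matrix.eye(len(Yn)) - D.T * D  # C~ = n*I - UD
    snf = smith_normal_form(C, domain=ZZ)
    return [snf[i, i] for i in range(len(Yn))]

print(sorted(critical_group_perm_rep(4)))  # → [0, 1, 1, 1, 4]
```

For $n=4$ the nonzero elementary divisors are $(1,1,1,4)$ together with the unique zero, so $K(V)\cong\mathbb{Z}/4\mathbb{Z}$, matching $q_2=4$ and $q_i=1$ for $i\geq 3$ (here $m_0(2)=4$).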
Since it is clear from the definition that $K(V \oplus \mathbbm{1}) = K(V)$ for all representations $V$, we are free to assume in Proposition \ref{prop:word-eigenspaces} that $\beta_0=0$, and we use this convention in what follows. \begin{prop} \label{prop:dim of V(f)} Let $f$ be a nonnegative finite sum of balanced words, and maintain the notation of Proposition \ref{prop:word-eigenspaces}. Assume further that $P=P(\mathfrak{G})$ for $\mathfrak{G}$ a differential tower of groups. Then, \begin{itemize} \item[a.] $\dim(V(f)_n)=\alpha_n$, and \item[b.] $V(f)_n$ is a faithful representation. \end{itemize} \end{prop} \begin{proof} We have $f(U,D)=\sum_{j>0} \beta_j (UD)^j$ and $\alpha_n=\sum_{j > 0} \beta_j (rn)^j$. Part (a) is immediate from the fact that $V((UD)^j)_n$ is obtained by applying $(\text{Ind} \text{Res})^j$ to the representation $\mathbbm{1}_{G_n}$, so it has dimension $[G_n:G_{n-1}]^j=(rn)^j$ by Proposition \ref{prop: size of DTG}. It is clear from the definition that $\alpha_i < \alpha_n$ for $i \neq n$; thus, since the $\alpha_i$ are the character values of $V(f)_n$ and since $\alpha_n$ has multiplicity $\Delta p_0=1$, we see that $V(f)_n$ is faithful. \end{proof} \begin{theorem} \label{thm:structure of K(V(f))} Let $\mathfrak{G}$ be an $r$-differential tower of groups and let $f(U,D)$ be a nonnegative finite sum of balanced words. Then, using the notation of Proposition \ref{prop:word-eigenspaces}, we have: \begin{itemize} \item[a.] The size of the critical group $K(V(f)_n)$ is given by: \[ |K(V(f)_n)|=\frac{1}{r^n \cdot n!} \prod_{i=0}^{n-1} (\alpha_n - \alpha_i)^{\Delta p_{n-i}}. \] \item[b.] For each $i=1,...,n-1$, the critical group $K(V(f)_n)$ has a subgroup isomorphic to $(\mathbb{Z}/(\alpha_n - \alpha_i) \mathbb{Z})^{\Delta p_{n-i} -1}$.
\end{itemize} \end{theorem} \begin{proof} This follows from applying Theorem \ref{thm:repeated} with the information about character values given by Proposition \ref{prop:dim of V(f)} and Proposition \ref{prop:IndRes = UD} and the size of $G_n$ as calculated in Proposition \ref{prop: size of DTG}. \end{proof} \section{Enumeration of factors in critical groups} \label{sec:factors} In what follows, when a differential tower of groups and a rank $n$ are understood, we let $\text{ones}(w)$ denote the number of ones in the Smith normal form of $\tilde{C}_{V(w)_n}$, where $w$ is a balanced word. Then the number of nontrivial factors in the critical group $K(V(w)_n)$ is $p_n-1-\text{ones}(w)$, since $\tilde{C}$ is a $p_n \times p_n$-matrix and there is always a unique zero in the Smith form, by Definition-Proposition \ref{def-prop:critical group}. The elements of $Y^r$ of rank $n$ are indexed by $r$-tuples $\boldsymbol{\lambda}=(\lambda^{(1)},...,\lambda^{(r)})$ of partitions such that $\sum_i |\lambda^{(i)}|=n$. Define the \textit{$r$-lexicographic order} on $Y^r$ by first applying lexicographic order on $\lambda^{(1)}$, and then on $\lambda^{(2)}$, and so on; if $|\lambda|>|\mu|$ then we use the convention that $\lambda$ is greater than $\mu$ in the lexicographic order. We write $\boldsymbol{\lambda} \geq \boldsymbol{\mu}$ to denote this order. Our main tool for proving results in this section will be the characterization of Smith normal form in terms of minors of the matrix, as given in Proposition \ref{prop:SNF from minors}. If $M$ is a matrix with rows and columns indexed by sets $S,T$ respectively, we let $M_{S',T'}$ denote the submatrix indexed by rows $S' \subset S$ and columns $T' \subset T$. \begin{theorem} \label{thm:ones of w} Let $w$ be any balanced word of length $2k \leq 2n$. Consider the $r$-differential tower of groups $A \wr \mathfrak{S}$, where $A$ is an abelian group of order $r \geq 2$, and the corresponding differential poset $Y^r$.
Then \[ \text{ones}(w)=|(Y^r)_{n-k}|=\sum_{\substack{i_1+\cdots+i_r=n-k \\ i_1,...,i_r \geq 0}} \prod_{j=1}^r p(i_j). \] In particular, $\text{ones}(w)$ depends only on $r$ and $n-k$, and not on the particular $w$ chosen. \end{theorem} We first prove the case $w=U^kD^k$; note that, unlike in Theorem \ref{thm:ones of w}, we do not require $r \geq 2$ in Proposition \ref{prop:ones of UkDk}. \begin{prop} \label{prop:ones of UkDk} Let $w=U^kD^k$ with $k \leq n$. Consider the $r$-differential tower of groups $A \wr \mathfrak{S}$, where $A$ is an abelian group of order $r$, and the corresponding differential poset $P=Y^r$. Then, \[ \text{ones}(w)=|(Y^r)_{n-k}|. \] \end{prop} \begin{proof} Consider the matrix $M=M^{(k)}$ for $U^kD^k : \mathbb{Z}^{P_n} \to \mathbb{Z}^{P_n}$ in the standard basis, ordering the rows and columns from least to greatest in the $r$-lexicographic order, and note that $M$ is symmetric. We say a partition $\lambda$ has $\ell$ ones if $\lambda_i=1$ for $\ell$ values of $i$. Define subsets of the rows and columns, respectively: \begin{align*} S_k &= \{\boldsymbol{\lambda} \in P_n | \lambda^{(r)} \text{ has at least $k$ ones}\} \\ T_k &= \{\boldsymbol{\lambda} \in P_n | (\lambda^{(1)})' \text{ has at least $k$ ones}\} \end{align*} Then we claim that $M_{S_k,T_k}$ is lower unitriangular (see the example in Figure \ref{fig:unitriangular}). This follows since for each $\boldsymbol{\lambda} \in S_k$, the $r$-lexicographically greatest $\boldsymbol{\mu}$ which can be obtained from $\boldsymbol{\lambda}$ by removing and then adding $k$ boxes from the tuple of Young diagrams can be reached in only one way. Namely, $k$ of the ones from $\lambda^{(r)}$ are removed, and all of these boxes are added to the largest part of $\lambda^{(1)}$. The resulting $\boldsymbol{\mu}$ is clearly an element of $T_k$. Now, easy bijections show that $|S_k|=|T_k|=p_{n-k}$. Thus $M^{(k)}$ and $\tilde{C}_{V(w)_n}$ have a $p_{n-k} \times p_{n-k}$ minor equal to $\pm 1$.
This forces $\text{ones}(w) \geq p_{n-k}$. \begin{figure} \[ \begin{pmatrix} 1 & 2 & 1 & 2 & 2 & 1 & \boldsymbol{1} & 0 & 0 & 0 \\ 2 & 4 & 2 & 4 & 4 & 2 & 2 & 0 & 0 & 0 \\ 1 & 2 & 1 & 2 & 2 & 1 & 1 & 0 & 0 & 0 \\ 2 & 4 & 2 & 5 & 5 & 4 & 4 & 1 & 2 & \boldsymbol{1} \\ 2 & 4 & 2 & 5 & 5 & 4 & 4 & 1 & 2 & 1 \\ 1 & 2 & 1 & 4 & 4 & 5 & 5 & 2 & 4 & 2 \\ 1 & 2 & 1 & 4 & 4 & 5 & 5 & 2 & 4 & 2 \\ 0 & 0 & 0 & 1 & 1 & 2 & 2 & 1 & 2 & 1 \\ 0 & 0 & 0 & 2 & 2 & 4 & 4 & 2 & 4 & 2 \\ 0 & 0 & 0 & 1 & 1 & 2 & 2 & 1 & 2 & 1 \\ \end{pmatrix} \hspace{0.4in} \begin{pmatrix} 1 & 1 & 0 & \boldsymbol{1} & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 1 & 1 & \boldsymbol{1} & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 2 & 1 & 1 & \boldsymbol{1} & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 2 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 2 & 1 & 1 & \boldsymbol{1} & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 2 & 0 & 1 & \boldsymbol{1} \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 2 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\ \end{pmatrix} \] \caption{The matrices $M^{(2)}$ and $M^{(1)}$, in the case $r=2, n=3$ with $w=UDUD=U^2D^2+2UD$. The diagonal entries of the unitriangular submatrices are shown in bold.} \label{fig:unitriangular} \end{figure} Now, the map $U^kD^k$ factors through the $(n-k)$-th rank: \[ \mathbb{Z}^{P_n} \xrightarrow{D^k} \mathbb{Z}^{P_{n-k}} \xrightarrow{U^k} \mathbb{Z}^{P_n} \] Since we know $D$ is surjective for $Y^r$ and $U$ is injective in any differential poset, we see that in fact $\dim \: \ker(U^kD^k) = p_n-p_{n-k}$. Thus $V(U^kD^k)_n$ has the repeated character value 0 with multiplicity $p_n-p_{n-k}$, and thus a subgroup $(\mathbb{Z}/d \mathbb{Z})^{p_n-p_{n-k}-1}$ where $d>1$ is the dimension of $V(U^kD^k)_n$ by Theorem \ref{thm:repeated}. Therefore \[ \text{ones}(U^kD^k) \leq (p_n-1)-(p_n-p_{n-k}-1) = p_{n-k}. \] \end{proof} \begin{example} \label{ex:ones of UkDk} Let $r=1$ in Proposition \ref{prop:ones of UkDk}. 
Then $V(U^kD^k)_n= \text{Ind}_{\mathfrak{S}_{n-k}}^{\mathfrak{S}_n} \mathbbm{1}$ is the representation which corresponds to the complete homogeneous symmetric function $h_{(n-k,1^k)}$ under the standard characteristic map (see \cite{M}, Chapter 1). The proposition implies that the critical group $K(V(U^kD^k)_n)$ has $p(n)-p(n-k)-1$ nontrivial factors. \textit{Note}: When $r>1$, Proposition \ref{prop:ones of UkDk} still applies; however, $V(U^kD^k)_n$ no longer corresponds to the wreath-product analog of $h_{(n-k,1^k)}$ (see \cite{M}, Chapter 1, Appendix B). That representation is obtained by inducing from $\mathfrak{S}_{n-k} \times A^k$, rather than from $\mathfrak{S}_{n-k}$. \end{example} We now return to the proof of the main theorem. \begin{proof}[Proof of Theorem \ref{thm:ones of w}.] We can write \[ w(U,D)=U^kD^k+c_{k-1}U^{k-1}D^{k-1}+\cdots + c_1UD+c_0I \] using the relation $DU-UD=rI$; clearly all coefficients $c_i$ are divisible by $r$. Let $M_w$ be the matrix for $w(U,D)$ using our standard ordered basis, and let $M^{(i)}$ be the matrix for $U^iD^i$, so $M_w=\sum c_i M^{(i)}$. Using the notation from the proof of Proposition \ref{prop:ones of UkDk}, it is clear that $S_i \subset S_{i-1}$ and $T_i \subset T_{i-1}$ for all $i$, and that the diagonal of $M^{(i)}_{S_i, T_i}$ is above that of $M^{(i-1)}_{S_{i-1},T_{i-1}}$ for all $i$. Therefore $M_w$ still contains a $p_{n-k} \times p_{n-k}$ lower unitriangular submatrix corresponding to rows and columns $S_k, T_k$. Since this gives a $p_{n-k} \times p_{n-k}$ minor equal to 1, we know that $\text{ones}(w) \geq p_{n-k}$. For the other inequality, let $P,Q$ be invertible integer matrices which put $M^{(k)}$ in Smith form: $PM^{(k)}Q=S$. By Proposition \ref{prop:ones of UkDk}, all but $p_{n-k}$ of the columns of $PM^{(k)}Q$ are divisible by $r$. Therefore, since the $c_i$ are divisible by $r$, all but $p_{n-k}$ of the columns of $PM_wQ$ are divisible by $r$.
The dimension of $V(w)_n$ is divisible by $r$ as well; thus at most $p_{n-k}$ of the columns of $P\tilde{C}_V Q$ are not divisible by $r$. Then any $m \times m$ minor with $m > p_{n-k}$ must be divisible by $r$, and so $\text{ones}(w) \leq p_{n-k}$ by Proposition \ref{prop:SNF from minors}. \end{proof} \begin{example} \label{ex:r=1 doesn't work} This example shows that the hypothesis $r \geq 2$ in Theorem \ref{thm:ones of w} is necessary. Let $w=(UD)^2$; then for $n=7$ one can calculate that $\text{ones}(w)=9 \neq p(7-2)=7$. \end{example} We can still give some upper and lower bounds in the $r=1$ case. For a balanced word $w$ of length $2k$, write \begin{equation} \label{eq:ci def} w(U,D)=\sum_{i=0}^k c_i U^iD^i \end{equation} Then define $\ell(w)=\min \{ i | c_i \neq 0 \}$; clearly $0 \leq \ell(w) \leq k$, with equality on the right if and only if $w=U^kD^k$. \begin{prop}\label{prop:r=1 bounds} Let $w$ be a balanced word of length $2k \leq 2n$. Then, working in the tower $\mathfrak{S}$ of symmetric groups, we have \begin{itemize} \item[a.] $p(n-k) \leq \text{ones}(w) \leq p(n-\ell(w))$, and \item[b.] $\text{ones}(w)=p(n-k)$ if $\gcd(c_1,...,c_k,\dim(V(w)))>1$, where the $c_i$ are defined as in Equation \ref{eq:ci def}. \end{itemize} \end{prop} \begin{proof} The argument for the lower bound $p(n-k) \leq \text{ones}(w)$ in the proof of Theorem \ref{thm:ones of w} still holds in the $r=1$ case. For the upper bound, note that $$\ker(w(U,D)) \supset \ker(D^{\ell(w)}:\mathbb{Z}^{Y_n} \to \mathbb{Z}^{Y_{n-\ell(w)}}).$$ Thus $\dim \: \ker(w(U,D)) \geq p(n)-p(n-\ell(w))$. By Theorem \ref{thm:repeated}, this gives the upper bound, proving part (a). For part (b), notice that if $\gcd(c_1,...,c_k,\dim(V(w)))=s>1$, then the argument for the upper bound in the proof of Theorem \ref{thm:ones of w} still applies. \end{proof} \begin{example} Continuing Example \ref{ex:r=1 doesn't work}, we see that \[ (UD)^2=UDUD=U(UD+rI)D=U^2D^2+rUD \] and so $\ell((UD)^2)=1$.
Then for $n=7$, Proposition \ref{prop:r=1 bounds} gives that \[ 7=p(5) \leq \text{ones}(w) \leq p(6) = 11. \] In fact we have $\text{ones}(w)=9$, so that we cannot hope for either bound in Proposition \ref{prop:r=1 bounds} to be an equality in general. \end{example} \subsection{Smallest factors} In this section, we give a conjecture on the size and multiplicity of the smallest nontrivial factor in critical groups $K(V(U^kD^k)_n)$. \begin{conjecture} \label{conj:final-conj} Working in the differential tower of groups $A \wr \mathfrak{S}$ with $A$ abelian of order $r$, when $k \leq n$ the critical group $K(V(U^kD^k)_n)=K(\text{Ind}_{A \wr \mathfrak{S}_{n-k}}^{A \wr \mathfrak{S}_n} \mathbbm{1})$ is given, as a list of elementary divisors, by: \[ \left(1^{p_{n-k}}, \left(r^{k} \frac{n!}{(n-k)!}\right)^{p_n-2p_{n-k}+p_{n-2k}},r^{k}\frac{n!}{(n-k)!}e_i \right) \] where the exponents involving rank sizes denote multiplicities, and where $e_i$ ranges over the non-unit elementary divisors in the critical group $K(V(D^kU^k)_{n-k})$. \end{conjecture} \begin{remark} The claim that the multiplicity of 1 as an elementary divisor is $p_{n-k}$ is the content of Proposition \ref{prop:ones of UkDk}. In the case $r=1$, we were able to prove Conjecture \ref{conj:final-conj} using explicit row and column operations related to the unitriangular submatrices identified in the proofs of Proposition \ref{prop:ones of UkDk} and Theorem \ref{thm:ones of w}, which we were unable to generalize to the $r>1$ case. In the $r=1$ case, letting $k=n-2$ in Conjecture \ref{conj:final-conj} allows us to explicitly compute that \[ K(\text{Ind}_{\mathfrak{S}_2}^{\mathfrak{S}_n} \mathbbm{1})=K(V(U^{n-2}D^{n-2})_n)=\left(\mathbb{Z}/\frac{n!}{2}\mathbb{Z} \right)^{p(n)-4} \times \left(\mathbb{Z}/\frac{1}{8}n!(n-2)!(n-2)(n+1) \mathbb{Z} \right) \] where the largest factor is determined by the formula for the size of the critical group given in Theorem \ref{thm:repeated}.
In the $k=n-3$ case one can again obtain an explicit formula, but this formula depends on $n$ modulo 36, suggesting that a simple general formula is unlikely to exist. \end{remark} \section*{Acknowledgements} The authors wish to thank Vic Reiner for suggesting the analogy between restriction of representations and graph covering maps which led to this project, and for helpful conversations throughout. The first author wishes to thank the MIT PRIMES program and its organizers Pavel Etingof, Slava Gerovitch, and Tanya Khovanova for their feedback and support throughout the program. This material is based upon work supported by the National Science Foundation under Grant no. DMS-1519580.
https://arxiv.org/abs/2206.12269
Data-driven reduced order models using invariant foliations, manifolds and autoencoders
This paper explores how to identify a reduced order model (ROM) from a physical system. A ROM captures an invariant subset of the observed dynamics. We find that there are four ways a physical system can be related to a mathematical model: invariant foliations, invariant manifolds, autoencoders and equation-free models. Identification of invariant manifolds and equation-free models requires closed-loop manipulation of the system. Invariant foliations and autoencoders can also use off-line data. Only invariant foliations and invariant manifolds can identify ROMs; the rest identify complete models. Therefore, the common case of identifying a ROM from existing data can only be achieved using invariant foliations. Finding an invariant foliation requires approximating high-dimensional functions. For function approximation, we use polynomials with compressed tensor coefficients, whose complexity increases linearly with increasing dimensions. An invariant manifold can also be found as the fixed leaf of a foliation. This only requires us to resolve the foliation in a small neighbourhood of the invariant manifold, which greatly simplifies the process. Combining an invariant foliation with the corresponding invariant manifold provides an accurate ROM. We analyse the ROM in the case of a focus-type equilibrium, typical in mechanical systems. The nonlinear coordinate system defined by the invariant foliation or the invariant manifold distorts instantaneous frequencies and damping ratios, which we correct. Through examples we illustrate the calculation of invariant foliations and manifolds, and at the same time show that Koopman eigenfunctions and autoencoders fail to capture accurate ROMs under the same conditions.
\section{Introduction} There is great interest in the scientific community in identifying explainable and/or parsimonious mathematical models from data. In this paper we classify these methods and identify one concept that is best suited to accurately calculate reduced order models (ROM) from off-line data. A ROM must track some selected features of the data over time and predict them into the future. We call this property of the ROM \emph{invariance}. A ROM may also be \emph{unique}, which means that, barring a (nonlinear) coordinate transformation, the mathematical expression of the ROM is independent of the sampled data as long as the sample size is sufficiently large and the distribution of the data satisfies some minimum requirements. However, uniqueness may not be as important as invariance in certain applications. \begin{figure} \begin{centering} \includegraphics[width=0.49\linewidth]{DataManif}\includegraphics[width=0.49\linewidth]{DataScattered} \par\end{centering} \caption{\label{fig:data-structures}Two cases of data distribution. The blue dots (initial conditions) are mapped into the red triangles by the nonlinear map $\boldsymbol{F}$ given by equation (\ref{eq:caricature-mod}). a) Data points are distributed in the neighbourhood of the solid green curve, which is identified as an approximate invariant manifold. b) Initial conditions are well-distributed in the neighbourhood of the steady state and there is no obvious manifold structure. For completeness, the second invariant manifold is denoted by the dashed line.} \end{figure} Not all methods that identify low-dimensional models produce ROMs. In some cases the data lie on a low-dimensional (nonlinear) manifold embedded in a high-dimensional (typically Euclidean) space (figure \ref{fig:data-structures}(a)). In this case the task is to parametrise the low-dimensional manifold and fit a model to the data in the coordinates of the parametrisation.
The choice of parametrisation influences the form of the model. It is desirable to use a parametrisation that yields a model with the fewest parameters. Approaches that reduce the number of parameters include compressed sensing \cite{Donoho2006,billings2013nonlinear,Brunton2014,BruntonPNAS2016,Champion2019Autoencoder} and normal form methods \cite{Read1998NormalForm,Yair2017DiffusionNormal,Cenedese2022NatComm}. Methods to obtain a parametrisation of the manifold include diffusion maps \cite{Coifman2006}, isomaps \cite{Tenenbaum2000isomap} and autoencoders \cite{Kramer1991autoencoder,Champion2019Autoencoder,KaliaMeijerBrunton2021,Cenedese2022NatComm}. The main focus of this paper is genuine reduced order models, where we need to find order in a cloud of data, as illustrated in figure \ref{fig:data-structures}(b). The dynamics within the data defines the reduced order model. We are not merely finding a parametrisation of a manifold; we are choosing a particular one that is invariant. Alternatively, one may seek a family of manifolds that are mapped onto each other, which is the case of an invariant foliation. The lesson learnt from previous work \cite{Fenichel,delaLlave1997,CabreLlave2003} is that invariant manifolds are not defined by the dynamics on them but by the dynamics in their neighbourhood. Therefore, any method that does not identify the dynamics in the neighbourhood of a manifold cannot claim invariance. Autoencoders do not identify the surrounding dynamics and therefore fail to guarantee invariance. Obviously, this is not a problem if there is no surrounding dynamics to identify, because all the data lie on the manifold. \subsection{Set-up} We assume a deterministic system and small additive noise of zero mean that appears in our off-line collected data. The state of the system belongs to a real $n$-dimensional Euclidean space $X$ (a vector space with an inner product $\left\langle \cdot,\cdot\right\rangle _{X}:X\times X\to\mathbb{R}$).
The state of the system is uniformly sampled \begin{equation} \boldsymbol{y}_{k}=\boldsymbol{F}\left(\boldsymbol{x}_{k}\right)+\boldsymbol{\xi}_{k},\;k=1,\ldots,N,\label{eq:MAP-simple} \end{equation} where $\boldsymbol{\xi}_{k}\in X$ is a small noise term sampled from a distribution with zero mean. Equation (\ref{eq:MAP-simple}) describes pieces of trajectories if for some $k\in\left\{ 1,\ldots,N\right\} $, $\boldsymbol{x}_{k+1}=\boldsymbol{y}_{k}$. The state of the system can also be defined on a manifold, in which case $X$ is chosen such that the manifold is embedded in $X$ according to Whitney's embedding theorem \cite{WhitneyEmbedding1936}. It is also possible that the state cannot be directly measured, in which case Takens's delay embedding technique \cite{TakensEmbedding1981} can be used to reconstruct the state, which we will do subsequently in an optimal manner \cite{Casdagli1989DelayEmbed}. As a minimum, we require that our system is observable \cite{Hermann1977NonlinearContrObs}. For the purpose of this paper we also assume a fixed point at the origin, such that $\boldsymbol{F}\left(\boldsymbol{0}\right)=\boldsymbol{0}$, and that the domain of $\boldsymbol{F}$ is a compact and connected subset $G\subset X$ that includes a neighbourhood of the origin. \section{Reduced order models} We now describe a weak definition of a ROM, which only requires invariance. A ROM has two ingredients: a function connecting the vector space $X$ with another vector space $Z$ of lower dimensionality, and a map on vector space $Z$. The connection can go two ways, either from $Z$ to $X$ or from $X$ to $Z$. To make this more precise, we also assume that $Z$ has an inner product $\left\langle \cdot,\cdot\right\rangle _{Z}:Z\times Z\to\mathbb{R}$ and that $\dim Z<\dim X$. The connection $\boldsymbol{U}:X\to Z$ is called an \emph{encoder} and the connection $\boldsymbol{W}:Z\to X$ is called a \emph{decoder}.
We also assume that both the encoder and the decoder are continuously differentiable and that their Jacobians have full rank. Our terminology is borrowed from computer science \cite{Kramer1991autoencoder}, but we can also use mathematical terminology, which calls $\boldsymbol{U}$ a manifold submersion \cite{Lawson1974} and $\boldsymbol{W}$ a manifold immersion \cite{lang2012fundamentals}. In accordance with our assumption that $\boldsymbol{F}\left(\boldsymbol{0}\right)=\boldsymbol{0}$, we also assume that $\boldsymbol{U}\left(\boldsymbol{0}\right)=\boldsymbol{0}$ and $\boldsymbol{W}\left(\boldsymbol{0}\right)=\boldsymbol{0}$. \begin{defn} \label{def:ROM}Assume two maps $\boldsymbol{F}:X\to X$, $\boldsymbol{S}:Z\to Z$ and an encoder $\boldsymbol{U}:X\to Z$ or a decoder $\boldsymbol{W}:Z\to X$. \begin{enumerate} \item The encoder-map pair $\left(\boldsymbol{U},\boldsymbol{S}\right)$ is a \emph{reduced order model} (ROM) of $\boldsymbol{F}$ if for all initial conditions $\boldsymbol{x}_{0}\in G\subset X$ the trajectory $\boldsymbol{x}_{k+1}=\boldsymbol{F}\left(\boldsymbol{x}_{k}\right)$ and, for initial condition $\boldsymbol{z}_{0}=\boldsymbol{U}\left(\boldsymbol{x}_{0}\right)$, the second trajectory $\boldsymbol{z}_{k+1}=\boldsymbol{S}\left(\boldsymbol{z}_{k}\right)$ are connected such that $\boldsymbol{z}_{k}=\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)$ for all $k>0$.
\item The decoder-map pair $\left(\boldsymbol{W},\boldsymbol{S}\right)$ is a \emph{reduced order model} of $\boldsymbol{F}$ if for all initial conditions $\boldsymbol{z}_{0}\in H=\left\{ \boldsymbol{z}\in Z:\boldsymbol{W}\left(\boldsymbol{z}\right)\in G\right\} $ the trajectory $\boldsymbol{z}_{k+1}=\boldsymbol{S}\left(\boldsymbol{z}_{k}\right)$ and, for initial condition $\boldsymbol{x}_{0}=\boldsymbol{W}\left(\boldsymbol{z}_{0}\right)$, the second trajectory $\boldsymbol{x}_{k+1}=\boldsymbol{F}\left(\boldsymbol{x}_{k}\right)$ are connected such that $\boldsymbol{x}_{k}=\boldsymbol{W}\left(\boldsymbol{z}_{k}\right)$ for all $k>0$. \end{enumerate} \end{defn} \begin{rem} In essence, a ROM is a model whose trajectories are connected to the trajectories of our system $\boldsymbol{F}$. We call this essential property \emph{invariance.} It is possible to define a weaker ROM, where the connections $\boldsymbol{z}_{k}=\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)$ or $\boldsymbol{x}_{k}=\boldsymbol{W}\left(\boldsymbol{z}_{k}\right)$ only hold approximately, which is not discussed here. \end{rem} The following corollary can be thought of as an equivalent definition of a ROM. \begin{cor} The encoder-map pair $\left(\boldsymbol{U},\boldsymbol{S}\right)$ or decoder-map pair $\left(\boldsymbol{W},\boldsymbol{S}\right)$ is a reduced order model (ROM) if and only if the respective invariance equation \begin{align} \boldsymbol{S}\left(\boldsymbol{U}\left(\boldsymbol{x}\right)\right) & =\boldsymbol{U}\left(\boldsymbol{F}\left(\boldsymbol{x}\right)\right),\quad\boldsymbol{x}\in G\quad\text{or}\label{eq:MAP-U-invariance}\\ \boldsymbol{W}\left(\boldsymbol{S}\left(\boldsymbol{z}\right)\right) & =\boldsymbol{F}\left(\boldsymbol{W}\left(\boldsymbol{z}\right)\right),\;\boldsymbol{z}\in H,\label{eq:MAP-W-invariance} \end{align} holds, where $H=\left\{ \boldsymbol{z}\in Z:\boldsymbol{W}\left(\boldsymbol{z}\right)\in G\right\} $.
\end{cor} \begin{proof} Let us assume that (\ref{eq:MAP-U-invariance}) holds, choose an $\boldsymbol{x}_{0}\in G$ and let $\boldsymbol{z}_{0}=\boldsymbol{U}\left(\boldsymbol{x}_{0}\right)$. First we show that $\boldsymbol{z}_{k}=\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)$, where $\boldsymbol{x}_{k}=\boldsymbol{F}^{k}\left(\boldsymbol{x}_{0}\right)$ and $\boldsymbol{z}_{k}=\boldsymbol{S}^{k}\left(\boldsymbol{z}_{0}\right)$. Substituting $\boldsymbol{x}=\boldsymbol{F}^{k-1}\left(\boldsymbol{x}_{0}\right)$ into (\ref{eq:MAP-U-invariance}) and applying induction on $k$ yields \begin{align*} \boldsymbol{S}^{k}\left(\boldsymbol{z}_{0}\right) & =\boldsymbol{U}\left(\boldsymbol{F}^{k}\left(\boldsymbol{x}_{0}\right)\right)\\ \boldsymbol{z}_{k} & =\boldsymbol{U}\left(\boldsymbol{x}_{k}\right). \end{align*} Now in reverse, assuming that $\boldsymbol{z}_{k}=\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)$, $\boldsymbol{z}_{k}=\boldsymbol{S}^{k}\left(\boldsymbol{z}_{0}\right)$, $\boldsymbol{x}_{k}=\boldsymbol{F}^{k}\left(\boldsymbol{x}_{0}\right)$ and setting $k=1$ yields \begin{align*} \boldsymbol{z}_{1}=\boldsymbol{S}\left(\boldsymbol{z}_{0}\right) & =\boldsymbol{U}\left(\boldsymbol{x}_{1}\right),\\ \boldsymbol{S}\left(\boldsymbol{U}\left(\boldsymbol{x}_{0}\right)\right) & =\boldsymbol{U}\left(\boldsymbol{F}\left(\boldsymbol{x}_{0}\right)\right), \end{align*} which holds for all $\boldsymbol{x}_{0}\in G$, hence (\ref{eq:MAP-U-invariance}) holds.
The proof for the decoder is identical, except that we swap $\boldsymbol{F}$ and $\boldsymbol{S}$ and replace $\boldsymbol{U}$ with $\boldsymbol{W}.$ \end{proof} \begin{figure}[H] \begin{centering} \begin{center} {\normalsize a) \begin{tikzcd} \tikz \node[draw,circle]{\textcolor{blue}{$X$}}; \arrow[r, dashed, "\boldsymbol{F}"] \arrow[d, "\boldsymbol{U}"] & X \arrow[d, dashed, "\boldsymbol{U}"] \\ Z \arrow[r, "\boldsymbol{S}"]& \tikz \node[draw,rectangle]{\textcolor{red}{$Z$}}; \end{tikzcd} b) \begin{tikzcd} X \arrow[r, dashed, "\boldsymbol{F}"] & \tikz \node[draw,rectangle]{\textcolor{red}{$X$}}; \\ \tikz \node[draw,circle]{\textcolor{blue}{$Z$}}; \arrow[r, "\boldsymbol{S}"] \arrow[u, dashed, "\boldsymbol{W}"]& Z \arrow[u, "\boldsymbol{W}"] \end{tikzcd}\\ c)\begin{tikzcd} \tikz \node[draw,circle]{\textcolor{blue}{$X$}}; \arrow[r, dashed, "\boldsymbol{F}"] \arrow[d, "\boldsymbol{U}"] & \tikz \node[draw,rectangle]{\textcolor{red}{$X$}}; \\ Z \arrow[r, "\boldsymbol{S}"] & Z \arrow[u, "\boldsymbol{W}"] \end{tikzcd} d) \begin{tikzcd} X \arrow[r, dashed, "\boldsymbol{F}"] & X \arrow[d, dashed, "\boldsymbol{U}"] \\ \tikz \node[draw,circle]{\textcolor{blue}{$Z$}}; \arrow[r, "\boldsymbol{S}"] \arrow[u, dashed, "\boldsymbol{W}"] & \tikz \node[draw,rectangle]{\textcolor{red}{$Z$}}; \end{tikzcd}} \par\end{center} \par\end{centering} \caption{\label{fig:straight-commutative}Commutative diagrams of a) invariant foliations, b) invariant manifolds, and connection diagrams of c) autoencoders and d) reverse autoencoders. The dashed arrows denote the chain of function composition that involves $\boldsymbol{F}$, while the solid arrows denote the chain of function composition that involves map $\boldsymbol{S}$. The encircled vector space denotes the domain of the invariance equation and the boxed vector space is the target of the invariance equation.} \end{figure} Now the question arises whether we can use a combination of an encoder and a decoder.
This is the case of the autoencoder, or nonlinear principal component analysis \cite{Kramer1991autoencoder}. Indeed, there are four combinations of encoders and decoders, which are depicted in diagrams \ref{fig:straight-commutative}(a,b,c,d). We name these four scenarios as follows. \begin{defn} \label{def:Foil-Manif-AEnc}We call the connections displayed in figures \ref{fig:straight-commutative}(a,b,c,d) invariant foliation, invariant manifold, autoencoder and reverse autoencoder, respectively. \end{defn} \begin{figure} \begin{centering} \includegraphics{Autoencoder} \par\end{centering} \caption{\label{fig:autoencoder}Autoencoder. Encoder $\boldsymbol{U}$ maps data to parameter space $Z$, then the low-order map $\boldsymbol{S}$ takes the parameter forward in time and finally the decoder $\boldsymbol{W}$ brings the new parameter back to the state space. This chain of maps, denoted by solid purple arrows, must be the same as map $\boldsymbol{F}$, denoted by the dashed arrow. The two results can only match between the two manifolds $\mathcal{M}$ and $\mathcal{N}$ and not elsewhere in the state space. Manifold $\mathcal{M}$ is invariant if and only if $\mathcal{M}\subset\mathcal{N}$, which is not guaranteed by this construction.} \end{figure} As we walk through the dashed and solid arrows in diagram \ref{fig:straight-commutative}(a), we find the two sides of the invariance equation (\ref{eq:MAP-U-invariance}). If we do the same for diagram \ref{fig:straight-commutative}(b), we find equation (\ref{eq:MAP-W-invariance}). This implies that invariant foliations and invariant manifolds are ROMs. The same is not true for autoencoders and reverse autoencoders.
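The invariance property of diagram \ref{fig:straight-commutative}(a) can be verified numerically. The following is a minimal sketch for a linear toy system, where the encoder is a projection onto a left eigenvector and $\boldsymbol{S}$ is multiplication by the corresponding eigenvalue; the matrix and the test point are illustrative choices, not part of the method described above.

```python
import numpy as np

# Toy linear system F(x) = A x; the slow left eigenspace defines the encoder.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.5, 0.2],
              [0.0, 0.0, 0.3]])

# Left eigenvector p for the slowest eigenvalue mu, i.e. p A = mu p.
eigvals, left_vecs = np.linalg.eig(A.T)
i = np.argmax(np.abs(eigvals))
mu, p = eigvals[i].real, left_vecs[:, i].real

F = lambda x: A @ x        # full model
U = lambda x: p @ x        # linear encoder, one-dimensional Z
S = lambda z: mu * z       # reduced map

x = np.array([1.0, -2.0, 0.5])
# invariance equation (eq. MAP-U-invariance): S(U(x)) = U(F(x))
assert np.isclose(S(U(x)), U(F(x)))
```

Here the invariance equation holds exactly, because a left eigenvector satisfies $\boldsymbol{p}^{T}A=\mu\,\boldsymbol{p}^{T}$, so that $\boldsymbol{U}\left(\boldsymbol{F}\left(\boldsymbol{x}\right)\right)=\mu\,\boldsymbol{U}\left(\boldsymbol{x}\right)$.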
Reading off diagram \ref{fig:straight-commutative}(c), the autoencoder must satisfy \begin{equation} \boldsymbol{W}\left(\boldsymbol{S}\left(\boldsymbol{U}\left(\boldsymbol{x}\right)\right)\right)=\boldsymbol{F}\left(\boldsymbol{x}\right).\label{eq:MAP-AE-noninvar} \end{equation} Equation (\ref{eq:MAP-AE-noninvar}) is depicted in figure \ref{fig:autoencoder}. Since the decoder $\boldsymbol{W}$ maps onto a manifold $\mathcal{M}$, equation (\ref{eq:MAP-AE-noninvar}) can only hold if $\boldsymbol{x}$ is chosen from the preimage of $\mathcal{M}$, that is, $\boldsymbol{x}\in\mathcal{N}=\boldsymbol{F}^{-1}\left(\mathcal{M}\right)$. For invariance, we need $\boldsymbol{F}\left(\mathcal{M}\right)\subset\mathcal{M}$, which is the same as $\mathcal{M}\subset\mathcal{N}$ if $\boldsymbol{F}$ is invertible. However, the inclusion $\mathcal{M}\subset\mathcal{N}$ is not guaranteed by equation (\ref{eq:MAP-AE-noninvar}). The only way to guarantee that $\mathcal{M}\subset\mathcal{N}$ is to stipulate that the function composition $\boldsymbol{W}\circ\boldsymbol{U}$ is the identity map on the data. A trivial case is when $\dim Z=\dim X$ or, in general, when all data fall onto a $\dim Z$-dimensional submanifold of $X$. Indeed, the standard way to find an autoencoder \cite{Kramer1991autoencoder} is to solve \begin{equation} \arg\min_{\boldsymbol{U},\boldsymbol{W}}\sum_{k=1}^{N}\left\Vert \boldsymbol{W}\left(\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right)-\boldsymbol{x}_{k}\right\Vert ^{2}.\label{eq:AE-find} \end{equation} Unfortunately, if the data is not on a $\dim Z$-dimensional submanifold of $X$, which is the case for genuine reduced order models, the minimum of $\sum_{k=1}^{N}\left\Vert \boldsymbol{W}\left(\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right)-\boldsymbol{x}_{k}\right\Vert ^{2}$ will be far from zero and the location of $\mathcal{M}$ as the solution of (\ref{eq:AE-find}) will only indicate where the data is in the state space.
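The behaviour of (\ref{eq:AE-find}) on scattered data can be illustrated with a minimal sketch. With linear $\boldsymbol{U}$ and $\boldsymbol{W}$, the minimiser of the reconstruction error is given by principal component analysis; the data cloud and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # scattered data cloud, NOT on a 1-D manifold
X = X - X.mean(axis=0)

# With linear U, W the minimiser of (eq. AE-find) is principal component
# analysis: encode onto the leading right singular vector of the data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U = Vt[:1]        # linear encoder, dim Z = 1
W = Vt[:1].T      # linear decoder; U @ W is the identity on Z

residual = np.sum((X @ U.T @ W.T - X) ** 2)
# residual stays far from zero: the fit locates the data cloud,
# but it does not certify invariance of the identified subspace.
```

This mirrors the argument above: when the data is not confined to a $\dim Z$-dimensional submanifold, the reconstruction error cannot vanish, and the fitted $\mathcal{M}$ merely tracks where the data lies.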
It is customary to seek a solution to (\ref{eq:AE-find}) under the normalising condition that $\boldsymbol{U}\circ\boldsymbol{W}$ is the identity, and hence $\boldsymbol{S}=\boldsymbol{U}\circ\boldsymbol{F}\circ\boldsymbol{W}$. The reverse autoencoder in diagram \ref{fig:straight-commutative}(d) is identical to the autoencoder \ref{fig:straight-commutative}(c) if we replace $\boldsymbol{F}$ with $\boldsymbol{F}^{-1}$ and $\boldsymbol{S}$ with $\boldsymbol{S}^{-1}$. Reading off diagram \ref{fig:straight-commutative}(d), the reverse autoencoder must satisfy \begin{equation} \boldsymbol{S}\left(\boldsymbol{z}\right)=\boldsymbol{U}\left(\boldsymbol{F}\left(\boldsymbol{W}\left(\boldsymbol{z}\right)\right)\right).\label{eq:MAP-RAE-noninvar} \end{equation} Equation (\ref{eq:MAP-RAE-noninvar}) immediately provides $\boldsymbol{S}$ for any $\boldsymbol{U}$, $\boldsymbol{W}$, without identifying any structure in the data. Only invariant foliations, through equation (\ref{eq:MAP-U-invariance}), and autoencoders, through equations (\ref{eq:MAP-AE-noninvar}) and (\ref{eq:AE-find}), can be fitted to data, because the parameter $\boldsymbol{z}$ does not appear in the data.
Indeed, if we take into account our data (\ref{eq:MAP-simple}), the foliation invariance equation (\ref{eq:MAP-U-invariance}) turns into the optimisation problem \begin{equation} \arg\min_{\boldsymbol{S},\boldsymbol{U}}\sum_{k=1}^{N}\left\Vert \boldsymbol{x}_{k}\right\Vert ^{-2}\left\Vert \boldsymbol{S}\left(\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right)-\boldsymbol{U}\left(\boldsymbol{y}_{k}\right)\right\Vert ^{2},\label{eq:MAP-U-optim} \end{equation} and the autoencoder equation (\ref{eq:MAP-AE-noninvar}) turns into \begin{equation} \arg\min_{\boldsymbol{S},\boldsymbol{U},\boldsymbol{W}}\sum_{k=1}^{N}\left\Vert \boldsymbol{x}_{k}\right\Vert ^{-2}\left\Vert \boldsymbol{W}\left(\boldsymbol{S}\left(\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right)\right)-\boldsymbol{y}_{k}\right\Vert ^{2}.\label{eq:MAP-AE-optim} \end{equation} For the optimisation problem (\ref{eq:MAP-U-optim}) to have a unique solution, we need to apply a constraint to $\boldsymbol{U}$. One possible constraint is explained in remark \ref{rem:U-constraint}. In the case of the autoencoder (\ref{eq:MAP-AE-optim}), the constraint, in addition to the one restricting $\boldsymbol{U}$, can be that $\boldsymbol{U}\circ\boldsymbol{W}$ is the identity, as also stipulated in \cite{Cenedese2022NatComm}. \subsection{Invariant foliations and invariant manifolds} \begin{figure} \begin{centering} \includegraphics[width=0.49\linewidth]{Fibres}\includegraphics[width=0.49\linewidth]{SSMequil} \par\end{centering} \caption{\label{fig:manifold-foliation}(a) Invariant foliation: a leaf $\mathcal{L}_{\boldsymbol{z}}$ is mapped onto the leaf $\mathcal{L}_{\boldsymbol{S}\left(\boldsymbol{z}\right)}$. Since the origin $\boldsymbol{0}\in Z$ is a steady state of $\boldsymbol{S}$, the leaf $\mathcal{L}_{\boldsymbol{0}}$ is mapped into itself and therefore it is an invariant manifold.
(b) Invariant manifold: a single trajectory, represented by red dots, starting on the invariant manifold $\mathcal{M}$ remains on the green invariant manifold.} \end{figure} An encoder represents a family of manifolds, called a \emph{foliation}, which are the constant level surfaces of $\boldsymbol{U}$. A single level surface is called a leaf of the foliation; hence the foliation is a collection of leaves. In mathematical terms, the \emph{leaf} with parameter $\boldsymbol{z}\in Z$ is \begin{equation} \mathcal{L}_{\boldsymbol{z}}=\left\{ \boldsymbol{x}\in G\subset X:\boldsymbol{U}\left(\boldsymbol{x}\right)=\boldsymbol{z}\right\} .\label{eq:MAP-leaf-def} \end{equation} All leaves are $\left(\dim X-\dim Z\right)$-dimensional differentiable manifolds, because we assumed that the Jacobian $D\boldsymbol{U}$ has full rank \cite{Lawson1974}. The collection of all leaves is a \emph{foliation,} denoted by \[ \mathcal{F}=\left\{ \mathcal{L}_{\boldsymbol{z}}:\boldsymbol{z}\in H\right\} , \] where $H=\boldsymbol{U}\left(G\right)$. The co-dimension of the foliation equals $\dim Z$. The invariance equation (\ref{eq:MAP-U-invariance}) means that each leaf of the foliation is mapped into another leaf; in particular, the leaf with parameter $\boldsymbol{z}$ is mapped into the leaf with parameter $\boldsymbol{S}\left(\boldsymbol{z}\right)$, that is, \[ \boldsymbol{F}\left(\mathcal{L}_{\boldsymbol{z}}\right)\subset\mathcal{L}_{\boldsymbol{S}\left(\boldsymbol{z}\right)}. \] Due to our assumptions, leaf $\mathcal{L}_{\boldsymbol{0}}$ is an invariant manifold, because $\boldsymbol{F}\left(\mathcal{L}_{\boldsymbol{0}}\right)\subset\mathcal{L}_{\boldsymbol{0}}$. This geometry is illustrated in figure \ref{fig:manifold-foliation}(a). We now characterise the existence and uniqueness of invariant foliations about a fixed point.
We assume that $\boldsymbol{F}$ is a $C^{r}$ map, $r\ge2$, and that the Jacobian matrix $D\boldsymbol{F}\left(\boldsymbol{0}\right)$ has eigenvalues $\mu_{1},\ldots,\mu_{n}$ such that $\left|\mu_{i}\right|<1$, $i=1,\ldots,n$. To select the invariant manifold or foliation we assume two $\nu$-dimensional linear subspaces $E$ and $E^{\star}$ of $X$, corresponding to the eigenvalues $\mu_{1},\ldots,\mu_{\nu}$, such that $D\boldsymbol{F}\left(\boldsymbol{0}\right)E\subset E$ and, for the adjoint map, $\left(D\boldsymbol{F}\left(\boldsymbol{0}\right)\right)^{\star}E^{\star}\subset E^{\star}$. \begin{defn} The number \[ \beth_{E^{\star}}=\frac{\min_{k=1\ldots\nu}\log\left|\mu_{k}\right|}{\max_{k=1\ldots n}\log\left|\mu_{k}\right|} \] is called the spectral quotient of the left-invariant linear subspace $E^{\star}$ of $\boldsymbol{F}$ about the origin. \end{defn} \begin{thm} \label{thm:MapFoliation}Assume that $D\boldsymbol{F}\left(\boldsymbol{0}\right)$ is semisimple and that there exists an integer $\sigma\ge2$ such that $\beth_{E^{\star}}<\sigma\le r$. Also assume that \begin{equation} \prod_{k=1}^{n}\mu_{k}^{m_{k}}\neq\mu_{j},\;j=1,\ldots,\nu\label{eq:MapFoliationNonResonance} \end{equation} for all $m_{k}\ge0$, $1\le k\le n$, with at least one $m_{l}\neq0$, $\nu+1\le l\le n$, and with $\sum_{k=1}^{n}m_{k}\le\sigma-1$. Then in a sufficiently small neighbourhood of the origin there exists an invariant foliation $\mathcal{F}$ tangent to the left-invariant linear subspace $E^{\star}$ of the $C^{r}$ map $\boldsymbol{F}$. The foliation $\mathcal{F}$ is unique among the $\sigma$ times differentiable foliations and it is also $C^{r}$ smooth. \end{thm} \begin{proof} The proof is carried out in \cite{Szalai2020ISF}. Note that the assumption of $D\boldsymbol{F}\left(\boldsymbol{0}\right)$ being semisimple was only used to simplify the proof in \cite{Szalai2020ISF}, therefore it is likely not needed.
\end{proof} \begin{rem} \label{rem:U-constraint}Theorem \ref{thm:MapFoliation} only concerns the uniqueness of the foliation, but not that of the encoder $\boldsymbol{U}$. Indeed, for any smooth and invertible map $\boldsymbol{R}:Z\to Z$, the encoder $\tilde{\boldsymbol{U}}=\boldsymbol{R}\circ\boldsymbol{U}$ represents the same foliation and the nonlinear map $\boldsymbol{S}$ transforms into $\tilde{\boldsymbol{S}}=\boldsymbol{R}\circ\boldsymbol{S}\circ\boldsymbol{R}^{-1}$. If we want to solve the invariance equation (\ref{eq:MAP-U-invariance}), we need to constrain $\boldsymbol{U}$. The simplest such constraint is that \begin{equation} \boldsymbol{U}\left(\boldsymbol{W}_{1}\boldsymbol{z}\right)=\boldsymbol{z},\label{eq:FOIL-graph} \end{equation} where $\boldsymbol{W}_{1}:Z\to X$ is a linear map with full rank such that $E^{\star}\cap\ker\boldsymbol{W}_{1}^{\star}=\left\{ \boldsymbol{0}\right\} $. To explain the meaning of equation (\ref{eq:FOIL-graph}), we note that the image of $\boldsymbol{W}_{1}$ is a hyperplane $\mathcal{H}$ in $X$. Equation (\ref{eq:FOIL-graph}) therefore means that each leaf $\mathcal{L}_{\boldsymbol{z}}$ must intersect this hyperplane at the point $\boldsymbol{W}_{1}\boldsymbol{z}$. The condition $E^{\star}\cap\ker\boldsymbol{W}_{1}^{\star}=\left\{ \boldsymbol{0}\right\} $ then means that the leaf $\mathcal{L}_{\boldsymbol{0}}$ has a proper intersection with hyperplane $\mathcal{H}$ and therefore is not tangent to $\mathcal{H}$. This is similar to the graph-style parametrisation of a manifold over a hyperplane. \end{rem} \begin{rem} \label{rem:Koopman}Level sets of eigenfunctions of the Koopman operator \cite{Mezic2005,Mezic2021} form invariant foliations. Indeed, the Koopman operator is defined as $\left(\mathcal{K}\boldsymbol{u}\right)\left(\boldsymbol{x}\right)=\boldsymbol{u}\left(\boldsymbol{F}\left(\boldsymbol{x}\right)\right)$.
If we assume that $\boldsymbol{U}=\left(\boldsymbol{u}_{1},\ldots,\boldsymbol{u}_{\nu}\right)$ is a collection of functions $\boldsymbol{u}_{j}:X\to\mathbb{R}$ and $Z=\mathbb{R}^{\nu}$, then $\boldsymbol{U}$ spans an invariant subspace of $\mathcal{K}$ if there exists a matrix $\boldsymbol{S}$ such that $\mathcal{K}\left(\boldsymbol{U}\right)=\boldsymbol{S}\boldsymbol{U}$. Expanding this equation yields $\boldsymbol{U}\left(\boldsymbol{F}\left(\boldsymbol{x}\right)\right)=\boldsymbol{S}\boldsymbol{U}\left(\boldsymbol{x}\right)$, which is the same as the invariance equation (\ref{eq:MAP-U-invariance}), except that $\boldsymbol{S}$ is linear. The existence of the linear map $\boldsymbol{S}$ requires further non-resonance conditions, namely \begin{equation} \prod_{k=1}^{\nu}\mu_{k}^{m_{k}}\neq\mu_{j},\;j=1,\ldots,\nu\label{eq:Internal-nonresonance} \end{equation} for all $m_{k}\ge0$ such that $\sum_{k=1}^{\nu}m_{k}\le\sigma-1$. Equation (\ref{eq:Internal-nonresonance}) is referred to as the set of \emph{internal non-resonance} conditions, because these are intrinsic to the invariant subspace $E^{\star}$. In many cases $\boldsymbol{S}$ represents the slowest dynamics, hence even if there are no exact internal resonances, the products in (\ref{eq:Internal-nonresonance}) can come close to $\mu_{j}$ for some set of $m_{1},\ldots,m_{\nu}$, which causes numerical issues leading to undesired inaccuracies. We will illustrate this in section \ref{subsec:10dimsys-example}. \end{rem} Now we discuss invariant manifolds. A decoder $\boldsymbol{W}$ defines a differentiable manifold \[ \mathcal{M}=\left\{ \boldsymbol{W}\left(\boldsymbol{z}\right):\boldsymbol{z}\in H\right\} , \] where $H=\left\{ \boldsymbol{z}\in Z:\boldsymbol{W}\left(\boldsymbol{z}\right)\in G\right\} $. Invariance equation (\ref{eq:MAP-W-invariance}) is equivalent to the geometric condition that $\boldsymbol{F}\left(\mathcal{M}\right)\subset\mathcal{M}$.
This geometry is shown in figure \ref{fig:manifold-foliation}(b), which illustrates that if a trajectory starts on $\mathcal{M}$, all subsequent points of the trajectory stay on $\mathcal{M}$. Invariant manifolds as a concept cannot be used to identify ROMs from off-line data. As we will see below, invariant manifolds can still be identified as a leaf of an invariant foliation, but not through the invariance equation (\ref{eq:MAP-W-invariance}). Indeed, it is not possible to guess the manifold parameter $\boldsymbol{z}\in Z$ from data. Introducing an encoder $\boldsymbol{U}:X\to Z$ to calculate $\boldsymbol{z}=\boldsymbol{U}\left(\boldsymbol{x}\right)$ transforms the invariant manifold into an autoencoder, which does not guarantee invariance. We now state the conditions for the existence and uniqueness of an invariant manifold. \begin{defn} The number \[ \aleph_{E}=\frac{\min_{k=\nu+1\ldots n}\log\left|\mu_{k}\right|}{\max_{k=1\ldots\nu}\log\left|\mu_{k}\right|} \] is called the spectral quotient of the right-invariant linear subspace $E$ of map $\boldsymbol{F}$ about the origin. \end{defn} \begin{thm} \label{thm:MapManifold}Assume that there exists an integer $\sigma\ge2$ such that $\aleph_{E}<\sigma\le r$. Also assume that \begin{equation} \prod_{k=1}^{\nu}\mu_{k}^{m_{k}}\neq\mu_{j},\;j=\nu+1,\ldots,n\label{eq:MapManifNonResonance} \end{equation} for all $m_{k}\ge0$ such that $\sum_{k=1}^{\nu}m_{k}\le\sigma-1$. Then in a sufficiently small neighbourhood of the origin there exists an invariant manifold $\mathcal{M}$ tangent to the invariant linear subspace $E$ of the $C^{r}$ map $\boldsymbol{F}$. The manifold $\mathcal{M}$ is unique among the $\sigma$ times differentiable manifolds and it is also $C^{r}$ smooth. \end{thm} \begin{proof} The theorem is a subset of theorem 1.1 in \cite{CabreLlave2003}.
\end{proof} \begin{rem} \label{rem:W-constraint}To calculate an invariant manifold with a unique representation, we need to impose a constraint on $\boldsymbol{W}$ and/or $\boldsymbol{S}$. The simplest constraint is imposed by \begin{equation} \boldsymbol{U}_{1}\boldsymbol{W}\left(\boldsymbol{z}\right)=\boldsymbol{z},\label{eq:MAP-W-graphstyle} \end{equation} where $\boldsymbol{U}_{1}:X\to Z$ is a linear map with full rank such that $E\cap\ker\boldsymbol{U}_{1}=\left\{ \boldsymbol{0}\right\} $. This is similar to, but more general than, a graph-style parametrisation (akin to theorem 1.2 in \cite{CabreLlave2003}), where the range of $\boldsymbol{U}_{1}^{\star}$ must span the linear subspace $E$. Constraint (\ref{eq:MAP-W-graphstyle}) can break down for large $\left\Vert \boldsymbol{z}\right\Vert $, when $\boldsymbol{U}_{1}D\boldsymbol{W}\left(\boldsymbol{z}\right)$ does not have full rank. A globally suitable constraint is that $D\boldsymbol{W}^{\star}\left(\boldsymbol{z}\right)D\boldsymbol{W}\left(\boldsymbol{z}\right)=\boldsymbol{I}$. \end{rem} For linear subspaces $E$ and $E^{\star}$ with eigenvalues closest to the complex unit circle (representing the slowest dynamics), $\beth_{E^{\star}}=1$ and $\aleph_{E}$ is maximal. Therefore the foliation corresponding to the slowest dynamics requires the least smoothness for uniqueness, while the corresponding invariant manifold requires the most smoothness. The ultimate deciding factor between the methods is whether one can be fitted to data and produce a ROM at the same time; only invariant foliations are capable of both. Table \ref{tab:Comparison} summarises the main properties of the three conceptually different model identification techniques. \begin{table} \begin{centering} \begin{tabular}{|>{\centering}p{3cm}|>{\centering}p{3cm}|>{\centering}p{3cm}|>{\centering}p{3cm}|} \hline & Invariant Foliation & Invariant Manifold & Autoencoder\tabularnewline \hline \hline Applicable to Math.
Models & YES & YES & YES\tabularnewline \hline Applicable to Data & YES & NO & YES\tabularnewline \hline Obtains a ROM & YES & YES & NO\tabularnewline \hline Uniqueness & slowest most unique & slowest least unique & NO\tabularnewline \hline References & \cite{Szalai2020ISF,Mezic2005} & \cite{ShawPierre,delaLlave1997,CabreLlave2003,CabreP3-2005,Haller2016,Szalai20160759,VIZZACCARO2021normalForm} & \cite{Cenedese2022NatComm,Champion2019Autoencoder,KaliaMeijerBrunton2021}\tabularnewline \hline \end{tabular} \par\end{centering} \caption{\label{tab:Comparison}Comparison of invariant foliations, invariant manifolds and autoencoders.} \end{table} \subsection{\label{subsec:LocallyAccurateEncoder}Invariant manifolds represented by locally accurate invariant foliations} As discussed before, we cannot fit invariant manifolds to data; instead, we can fit an invariant foliation that contains our invariant manifold as the leaf $\mathcal{L}_{\boldsymbol{0}}$ through the fixed point. This invariant foliation only needs to be accurate near the invariant manifold, and therefore we can simplify the functional representation of the encoder that defines the foliation. In this section we discuss this simplification. To begin with, assume that we already have an invariant foliation $\mathcal{F}$ with an encoder $\boldsymbol{U}$ and nonlinear map $\boldsymbol{S}$. Our objective is to find the invariant manifold $\mathcal{M}$, represented by a decoder $\boldsymbol{W}$, that has the same dynamics $\boldsymbol{S}$ as the foliation. This is useful if we want to know quantities that are only defined for invariant manifolds, such as instantaneous frequencies and damping ratios. Formally, we are looking for a simplified invariant foliation $\hat{\mathcal{F}}$ with encoder $\hat{\boldsymbol{U}}:X\to\hat{Z}$ that, together with $\mathcal{F}$, forms a coordinate system in $X$. (Technically speaking, $\left(\boldsymbol{U},\hat{\boldsymbol{U}}\right):X\to Z\times\hat{Z}$ must be a homeomorphism.)
In this case our invariant manifold is the zero level surface of the encoder $\hat{\boldsymbol{U}}$, i.e., $\mathcal{M}=\hat{\mathcal{L}}_{0}\in\hat{\mathcal{F}}$. Naturally, we must have $\dim Z+\dim\hat{Z}=\dim X$. \begin{figure} \begin{centering} \includegraphics[width=0.7\linewidth]{ApproxFoliation} \par\end{centering} \caption{\label{fig:approx-foil}Approximate invariant foliation. The linear map $\boldsymbol{U}^{\perp}$ and the nonlinear map $\boldsymbol{U}$ create a coordinate system, so that in this frame the invariant manifold $\mathcal{M}$ is given by $\boldsymbol{U}^{\perp}\boldsymbol{x}=\boldsymbol{W}_{\boldsymbol{0}}\left(\boldsymbol{z}\right)$, where $\boldsymbol{z}=\boldsymbol{U}\left(\boldsymbol{x}\right)$. We then shift manifold $\mathcal{M}$ in the $\boldsymbol{U}^{\perp}$ direction such that $\boldsymbol{U}^{\perp}\boldsymbol{x}-\boldsymbol{W}_{\boldsymbol{0}}\left(\boldsymbol{z}\right)=\hat{\boldsymbol{z}}$, which creates a foliation parametrised by $\hat{\boldsymbol{z}}\in\hat{Z}$. If this foliation satisfies the invariance equation in a neighbourhood of $\mathcal{M}$ with a linear map $\boldsymbol{B}$, as per equation (\ref{eq:Uhat-invariance}), then $\mathcal{M}$ is an invariant manifold. In other words, nearby blue dashed leaves are mapped towards $\mathcal{M}$ by the linear map $\boldsymbol{B}$ the same way as the underlying dynamics does.} \end{figure} The sufficient condition that $\mathcal{F}$ and $\hat{\mathcal{F}}$ form a coordinate system locally about the fixed point is that the square matrix \[ \boldsymbol{Q}=\begin{pmatrix}D\boldsymbol{U}\left(\boldsymbol{0}\right)\\ D\hat{\boldsymbol{U}}\left(\boldsymbol{0}\right) \end{pmatrix} \] is invertible.
Let us represent our approximate (but locally accurate) encoder as
\begin{equation}
\hat{\boldsymbol{U}}\left(\boldsymbol{x}\right)=\boldsymbol{U}^{\perp}\boldsymbol{x}-\boldsymbol{W}_{0}\left(\boldsymbol{U}\left(\boldsymbol{x}\right)\right),\label{eq:Uhat-decoder}
\end{equation}
where $\boldsymbol{W}_{0}:Z\to\hat{Z}$ is a nonlinear map with $D\boldsymbol{W}_{0}\left(\boldsymbol{0}\right)=\boldsymbol{0}$, $\boldsymbol{U}^{\perp}:X\to\hat{Z}$ is a linear map with orthonormal rows, that is $\boldsymbol{U}^{\perp}\left(\boldsymbol{U}^{\perp}\right)^{T}=\boldsymbol{I}$, and $\boldsymbol{U}^{\perp}\boldsymbol{W}_{0}\left(\boldsymbol{z}\right)=\boldsymbol{0}$. Here, the linear map $\boldsymbol{U}^{\perp}$ measures coordinates in a direction transversal to the manifold and $\boldsymbol{W}_{0}$ prescribes where the actual manifold is along this transversal direction, while $\boldsymbol{U}$ provides the parametrisation of the manifold. All other leaves of the approximate foliation $\hat{\mathcal{F}}$ are shifted copies of $\mathcal{M}$ along the $\boldsymbol{U}^{\perp}$ direction, as displayed in figure \ref{fig:approx-foil}. A locally accurate foliation means that $\hat{\boldsymbol{z}}=\hat{\boldsymbol{U}}\left(\boldsymbol{x}\right)\in\hat{Z}$ is assumed to be small, hence we can also assume linear dynamics among the leaves of $\hat{\mathcal{F}}$, represented by a linear operator $\boldsymbol{B}:\hat{Z}\to\hat{Z}$. Therefore the invariance equation (\ref{eq:MAP-U-invariance}) becomes
\begin{equation}
\boldsymbol{B}\hat{\boldsymbol{U}}\left(\boldsymbol{x}\right)=\hat{\boldsymbol{U}}\left(\boldsymbol{F}\left(\boldsymbol{x}\right)\right).\label{eq:Uhat-invariance}
\end{equation}
Once $\boldsymbol{B}$, $\boldsymbol{U}^{\perp}$ and $\boldsymbol{W}_{0}$ are found, the final step is to reconstruct the decoder $\boldsymbol{W}$ of our invariant manifold $\mathcal{M}$.
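As an illustration of the representation (\ref{eq:Uhat-decoder}), the following minimal numpy sketch (not the implementation used in this paper; all maps and dimensions are hypothetical placeholders) assembles $\hat{\boldsymbol{U}}$ from $\boldsymbol{U}^{\perp}$, $\boldsymbol{W}_{0}$ and $\boldsymbol{U}$, and checks the local coordinate-system condition that $\boldsymbol{Q}$ is invertible:

```python
import numpy as np

# Hypothetical dimensions: dim X = 4, dim Z = 2, dim Zhat = 2.
n, nu = 4, 2

# Encoder U of the pre-computed foliation: linear part plus a toy quadratic term.
DU0 = np.hstack([np.eye(nu), np.zeros((nu, n - nu))])  # D U(0), orthonormal rows

def U(x):
    return DU0 @ x + 0.1 * (DU0 @ x) ** 2

# Transversal linear map with orthonormal rows; U_perp W0(z) = 0 holds
# trivially here because W0 maps straight into the Zhat coordinates.
Uperp = np.hstack([np.zeros((n - nu, nu)), np.eye(n - nu)])

def W0(z):                      # nonlinear, with D W0(0) = 0
    return 0.05 * np.array([z[0] * z[1], z[0] ** 2])

def Uhat(x):                    # the locally accurate encoder
    return Uperp @ x - W0(U(x))

# Coordinate-system check: Q = [D U(0); D Uhat(0)] must be invertible;
# since D W0(0) = 0, we have D Uhat(0) = Uperp.
Q = np.vstack([DU0, Uperp])
assert abs(np.linalg.det(Q)) > 1e-12
```

With these toy choices $\boldsymbol{Q}$ is simply the identity; in general the check guards against $\boldsymbol{U}^{\perp}$ being chosen too close to the tangent directions of the leaves.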
\begin{prop}
The decoder $\boldsymbol{W}$ of the invariant manifold $\mathcal{M}=\left\{ \boldsymbol{x}\in X:\hat{\boldsymbol{U}}\left(\boldsymbol{x}\right)=\boldsymbol{0}\right\} $ is the unique solution of the system of equations
\begin{equation}
\left.\begin{array}{rl}
\boldsymbol{U}\left(\boldsymbol{W}\left(\boldsymbol{z}\right)\right) & =\boldsymbol{z}\\
\boldsymbol{U}^{\perp}\boldsymbol{W}\left(\boldsymbol{z}\right) & =\boldsymbol{W}_{0}\left(\boldsymbol{z}\right)
\end{array}\right\} .\label{eq:Uhat-W-reconstruct}
\end{equation}
\end{prop}

\begin{proof}
First we show that conditions (\ref{eq:Uhat-W-reconstruct}) imply $\hat{\boldsymbol{U}}\left(\boldsymbol{W}\left(\boldsymbol{z}\right)\right)=\boldsymbol{0}$. We expand our expression using (\ref{eq:Uhat-decoder}) into $\hat{\boldsymbol{U}}\left(\boldsymbol{W}\left(\boldsymbol{z}\right)\right)=\boldsymbol{U}^{\perp}\boldsymbol{W}\left(\boldsymbol{z}\right)-\boldsymbol{W}_{0}\left(\boldsymbol{U}\left(\boldsymbol{W}\left(\boldsymbol{z}\right)\right)\right)$, then use equations (\ref{eq:Uhat-W-reconstruct}), which yields $\hat{\boldsymbol{U}}\left(\boldsymbol{W}\left(\boldsymbol{z}\right)\right)=\boldsymbol{0}$. To solve equations (\ref{eq:Uhat-W-reconstruct}) for $\boldsymbol{W}$, we decompose $\boldsymbol{U}$, $\boldsymbol{W}$ into linear and nonlinear components, such that $\boldsymbol{U}\left(\boldsymbol{x}\right)=D\boldsymbol{U}\left(0\right)\boldsymbol{x}+\widetilde{\boldsymbol{U}}\left(\boldsymbol{x}\right)$ and $\boldsymbol{W}\left(\boldsymbol{z}\right)=D\boldsymbol{W}\left(0\right)\boldsymbol{z}+\widetilde{\boldsymbol{W}}\left(\boldsymbol{z}\right)$.
Expanding equation (\ref{eq:Uhat-W-reconstruct}) with the decomposed $\boldsymbol{U}$, $\boldsymbol{W}$ yields \begin{equation} \left.\begin{array}{rl} D\boldsymbol{U}\left(0\right)\left(D\boldsymbol{W}\left(0\right)\boldsymbol{z}+\widetilde{\boldsymbol{W}}\left(\boldsymbol{z}\right)\right)+\widetilde{\boldsymbol{U}}\left(D\boldsymbol{W}\left(0\right)\boldsymbol{z}+\widetilde{\boldsymbol{W}}\left(\boldsymbol{z}\right)\right) & =\boldsymbol{z}\\ \boldsymbol{U}^{\perp}\left(D\boldsymbol{W}\left(0\right)\boldsymbol{z}+\widetilde{\boldsymbol{W}}\left(\boldsymbol{z}\right)\right) & =\boldsymbol{W}_{0}\left(\boldsymbol{z}\right) \end{array}\right\} .\label{eq:Uhat-W-expand} \end{equation} The linear part of equation (\ref{eq:Uhat-W-expand}) is \[ \boldsymbol{Q}D\boldsymbol{W}\left(0\right)=\begin{pmatrix}\boldsymbol{I}\\ \boldsymbol{0} \end{pmatrix}, \] hence $D\boldsymbol{W}\left(0\right)=\boldsymbol{Q}^{-1}\begin{pmatrix}\boldsymbol{I}\\ \boldsymbol{0} \end{pmatrix}$. The nonlinear part of (\ref{eq:Uhat-W-expand}) is \[ \boldsymbol{Q}\widetilde{\boldsymbol{W}}\left(\boldsymbol{z}\right)=\begin{pmatrix}-\widetilde{\boldsymbol{U}}\left(D\boldsymbol{W}\left(0\right)\boldsymbol{z}+\widetilde{\boldsymbol{W}}\left(\boldsymbol{z}\right)\right)\\ \boldsymbol{W}_{0}\left(\boldsymbol{z}\right) \end{pmatrix}, \] which can be solved by the iteration \begin{equation} \widetilde{\boldsymbol{W}}_{k+1}\left(\boldsymbol{z}\right)=\boldsymbol{Q}^{-1}\begin{pmatrix}-\widetilde{\boldsymbol{U}}\left(D\boldsymbol{W}\left(0\right)\boldsymbol{z}+\widetilde{\boldsymbol{W}}_{k}\left(\boldsymbol{z}\right)\right)\\ \boldsymbol{W}_{0}\left(\boldsymbol{z}\right) \end{pmatrix},\;\widetilde{\boldsymbol{W}}_{1}\left(\boldsymbol{z}\right)=\boldsymbol{0},\;k=1,2,\ldots.\label{eq:Wtilde-iteration} \end{equation} Iteration (\ref{eq:Wtilde-iteration}) converges for $\left|\boldsymbol{z}\right|$ sufficiently small, due to 
$\widetilde{\boldsymbol{U}}\left(\boldsymbol{x}\right)=\mathcal{O}\left(\left|\boldsymbol{x}\right|^{2}\right)$ and $\boldsymbol{W}_{0}\left(\boldsymbol{z}\right)=\mathcal{O}\left(\left|\boldsymbol{z}\right|^{2}\right)$.
\end{proof}
As we will see in section \ref{sec:Examples}, this approach provides better results than using an autoencoder. Here we have resolved the dynamics transversal to the invariant manifold $\mathcal{M}$ up to linear order. It is essential to resolve this dynamics to find invariance, not just the location of data points.
\begin{rem}
This method is not without problems. The first is that $\hat{\boldsymbol{U}}\left(\boldsymbol{x}_{k}\right)$ might assume large values over some data points. This either requires replacing $\boldsymbol{B}$ with a nonlinear map or filtering out data points that are not in a small neighbourhood of the invariant manifold $\mathcal{M}$. Due to $\boldsymbol{B}$ being high-dimensional, replacing it with a nonlinear map leads to computational difficulties. Filtering data is easier. For example, we can assign weights to each term in our optimisation problem (\ref{eq:MAP-U-optim}) depending on how far a data point is from the predicted manifold, which is the zero level surface of $\hat{\boldsymbol{U}}$. This can be done using the optimisation problem
\begin{equation}
\arg\min_{\boldsymbol{B},\boldsymbol{U}^{\perp},\boldsymbol{W}_{0}}\sum_{k=1}^{N}\left\Vert \boldsymbol{x}_{k}\right\Vert ^{-2}\exp\left(-\frac{1}{2\kappa^{2}}\left\Vert \hat{\boldsymbol{U}}\left(\boldsymbol{x}_{k}\right)\right\Vert ^{2}\right)\left\Vert \boldsymbol{B}\hat{\boldsymbol{U}}\left(\boldsymbol{x}_{k}\right)-\hat{\boldsymbol{U}}\left(\boldsymbol{y}_{k}\right)\right\Vert ^{2},\label{eq:MAP-Uhat-optim}
\end{equation}
where $\kappa>0$ determines the size of the neighbourhood of the invariant manifold $\mathcal{M}$ that we take into account.
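A minimal numpy sketch of the weighting scheme in (\ref{eq:MAP-Uhat-optim}) follows; the data, the encoder and the map $\boldsymbol{B}$ below are toy placeholders, not the representations used in the paper:

```python
import numpy as np

# Toy setup: the "manifold" is the leaf x2 = 0 in R^2, the encoder measures
# the transversal coordinate, and the toy dynamics y_k = 0.8 x_k is linear.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
Y = 0.8 * X

kappa = 0.2
B = np.array([[0.8]])

def Uhat(x):                   # placeholder locally accurate encoder
    return x[..., 1:2]

ux, uy = Uhat(X), Uhat(Y)
# Gaussian weights suppress points far from the zero level surface of Uhat,
# together with the |x_k|^{-2} scaling of the objective.
weights = np.exp(-0.5 * np.sum(ux**2, axis=1) / kappa**2) / np.sum(X**2, axis=1)
residual = np.sum((ux @ B.T - uy) ** 2, axis=1)
loss = np.sum(weights * residual)
```

Because the toy dynamics is exactly linear and $\boldsymbol{B}$ matches it, the residual, and hence the loss, vanishes; with real data the weights merely de-emphasise points outside the $\kappa$-neighbourhood of $\mathcal{M}$.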
\end{rem}
\begin{rem}
The approximation (\ref{eq:Uhat-invariance}) can be made more accurate if we allow matrix $\boldsymbol{U}^{\perp}$ to vary with the parameter of the manifold $\boldsymbol{z}=\boldsymbol{U}\left(\boldsymbol{x}\right)$. In this case the encoder becomes
\[
\breve{\boldsymbol{U}}\left(\boldsymbol{x}\right)=\boldsymbol{U}^{\perp}\left(\boldsymbol{U}\left(\boldsymbol{x}\right)\right)\boldsymbol{x}-\boldsymbol{W}_{0}\left(\boldsymbol{U}\left(\boldsymbol{x}\right)\right).
\]
This does increase computational costs, but not nearly as much as calculating a globally accurate invariant foliation would.
\end{rem}
\begin{rem}
\label{rem:extrasimpleROM}It is also possible to eliminate the a priori calculation of $\boldsymbol{U}$. We can assume that $\boldsymbol{U}$ is a linear map, such that $\boldsymbol{U}\boldsymbol{U}^{T}=\boldsymbol{I}$, and treat it as an unknown in representation (\ref{eq:Uhat-decoder}). The assumption that $\boldsymbol{U}$ is linear makes sense if we limit ourselves to a small neighbourhood of the invariant manifold $\mathcal{M}$ by setting $\kappa<\infty$ in (\ref{eq:MAP-Uhat-optim}), as we have already assumed linear dynamics among the leaves of the associated foliation $\hat{\mathcal{F}}$, given by the linear map $\boldsymbol{B}$. Once $\boldsymbol{B}$, $\boldsymbol{U},$ $\boldsymbol{U}^{\perp}$ and $\boldsymbol{W}_{0}$ are found, the map $\boldsymbol{S}$ can also be fitted to the invariance equation (\ref{eq:MAP-U-invariance}). The equation to fit $\boldsymbol{S}$ to data is
\[
\arg\min_{\boldsymbol{S}}\sum_{k=1}^{N}\left\Vert \boldsymbol{x}_{k}\right\Vert ^{-2}\exp\left(-\frac{1}{2\kappa^{2}}\left\Vert \hat{\boldsymbol{U}}\left(\boldsymbol{x}_{k}\right)\right\Vert ^{2}\right)\left\Vert \boldsymbol{U}\boldsymbol{y}_{k}-\boldsymbol{S}\left(\boldsymbol{U}\boldsymbol{x}_{k}\right)\right\Vert ^{2},
\]
which is a straightforward linear least squares problem if $\boldsymbol{S}$ is linear in its parameters.
This approach will be further explored elsewhere.
\end{rem}

\section{\label{sec:freq-damp}Instantaneous frequencies and damping ratios}

Instantaneous damping ratios and frequencies are usually defined with respect to a model that is fitted to data \cite{JinBrake2020FreqDamp}. Here we take a similar approach and stress that these quantities only make sense in a Euclidean frame and not in the nonlinear frame of an invariant manifold or foliation. The geometry of a manifold or foliation depends on an arbitrary parametrisation, hence uncorrected results are not unique. Many studies mistakenly use nonlinear coordinate systems, for example one by the present author \cite{Szalai20160759} and colleagues \cite{Breunung2017,PONSIOEN2018269}. Such calculations are only asymptotically accurate near the equilibrium. Here we describe how to correct this error. We assume a two-dimensional invariant manifold $\mathcal{M}$, parametrised by a decoder $\boldsymbol{W}$ in polar coordinates $r,\theta$. The invariance equation (\ref{eq:MAP-W-invariance}) for the decoder $\boldsymbol{W}$ can be written as
\begin{equation}
\boldsymbol{W}\left(R\left(r\right),\theta+T\left(r\right)\right)=\boldsymbol{F}\left(\boldsymbol{W}\left(r,\theta\right)\right).\label{eq:Polar-Invariance}
\end{equation}
Naively (as described in \cite{Szalai20160759,Szalai2020ISF}), the instantaneous frequency and damping ratio could be calculated as
\begin{align}
\omega\left(r\right) & =T\left(r\right) & \left[\text{rad}/\text{step}\right],\label{eq:Naive-natFreq}\\
\zeta\left(r\right) & =-\frac{\log\frac{R\left(r\right)}{r}}{\omega\left(r\right)} & \left[-\right],\label{eq:Naive-dampRatio}
\end{align}
respectively.
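In code, the naive formulas (\ref{eq:Naive-natFreq})--(\ref{eq:Naive-dampRatio}) amount to the following sketch, where the maps $R$ and $T$ are hypothetical stand-ins for a fitted reduced model:

```python
import numpy as np

# Hypothetical polar reduced model: uniform rotation with weak contraction.
def T(r):
    return 0.1 + 0.0 * r             # [rad/step]

def R(r):
    return 0.99 * r

def omega_naive(r):                  # naive instantaneous frequency
    return T(r)

def zeta_naive(r):                   # naive instantaneous damping ratio
    return -np.log(R(r) / r) / omega_naive(r)
```

For this toy model both quantities come out constant in $r$, which is exactly the situation where the parametrisation-dependence discussed next goes unnoticed.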
The instantaneous amplitude is defined through a norm, $A\left(r\right)=\left\Vert \boldsymbol{W}\left(r,\cdot\right)\right\Vert $, for example
\begin{equation}
\left\Vert \boldsymbol{f}\right\Vert =\sqrt{\left\langle \boldsymbol{f},\boldsymbol{f}\right\rangle },\;\left\langle \boldsymbol{f},\boldsymbol{f}\right\rangle =\frac{1}{2\pi}\int_{0}^{2\pi}\left\langle \boldsymbol{f}\left(\theta\right),\boldsymbol{f}\left(\theta\right)\right\rangle _{X}\mathrm{d}\theta,\label{eq:amplitude}
\end{equation}
where $\left\langle \cdot,\cdot\right\rangle _{X}$ is the inner product on vector space $X$. The frequency and damping ratio values are only accurate if there is a linear relation between $\left\Vert \boldsymbol{W}\left(r,\cdot\right)\right\Vert $ and $r$, for example
\begin{equation}
\left\Vert \boldsymbol{W}\left(r,\cdot\right)\right\Vert =r,\label{eq:Polar-amplitude}
\end{equation}
and the relative phase between two closed curves satisfies
\begin{equation}
\arg\min_{\gamma}\left\Vert \boldsymbol{W}\left(r_{1},\cdot\right)-\boldsymbol{W}\left(r_{2},\cdot+\gamma\right)\right\Vert =0.\label{eq:Polar-phase}
\end{equation}
Equation (\ref{eq:Polar-amplitude}) means that the instantaneous amplitude of the trajectories on manifold $\mathcal{M}$ is the same as parameter $r$, hence the map $r\mapsto R\left(r\right)$ determines the change in amplitude. Equation (\ref{eq:Polar-phase}) stipulates that the parametrisation in the angular variable is such that there is no phase shift between the closed curves $\boldsymbol{W}\left(r_{1},\cdot\right)$ and $\boldsymbol{W}\left(r_{2},\cdot\right)$ for $r_{1}\neq r_{2}$. If there were a phase shift $\gamma$, a trajectory that moves from amplitude $r_{1}$ to $r_{2}$ within a period would misrepresent its instantaneous period of vibration by phase $\gamma$, hence the frequency given by $T\left(r\right)$ would be inaccurate.
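The norm (\ref{eq:amplitude}) is straightforward to evaluate by quadrature in $\theta$; a minimal numpy sketch with a hypothetical decoder (an equispaced periodic grid makes the trapezoidal rule spectrally accurate):

```python
import numpy as np

def amplitude(W, r, m=512):
    """A(r) = ||W(r, .)|| via the mean over an equispaced periodic theta grid,
    which approximates the (1/2 pi)-normalised integral in the norm."""
    theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    vals = W(r, theta)                      # shape (dim X, m)
    return np.sqrt(np.mean(np.sum(vals ** 2, axis=0)))

# For the circle W(r, theta) = (r cos theta, r sin theta) the amplitude
# constraint ||W(r, .)|| = r holds exactly.
def W_circle(r, theta):
    return np.vstack([r * np.cos(theta), r * np.sin(theta)])
```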
In fact, one can set a continuous phase shift $\gamma\left(r\right)$ among the closed curves $\boldsymbol{W}\left(r,\cdot\right)$, such that the frequency given by $T\left(r\right)$ is arbitrary. The following result provides accurate values for the instantaneous frequency and damping ratio.
\begin{prop}
\label{prop:Polar-fr-dm}Assume a decoder $\boldsymbol{W}:\left[0,r_{1}\right]\times\left[0,2\pi\right]\to X$ and functions $R,T:\left[0,r_{1}\right]\to\mathbb{R}$ such that they satisfy the invariance equation (\ref{eq:Polar-Invariance}).
\begin{enumerate}
\item A new parametrisation $\tilde{\boldsymbol{W}}$ of the manifold generated by $\boldsymbol{W}$ that satisfies the constraints (\ref{eq:Polar-amplitude}) and (\ref{eq:Polar-phase}) is given by
\[
\tilde{\boldsymbol{W}}\left(r,\theta\right)=\boldsymbol{W}\left(t,\theta+\delta\left(t\right)\right),\;t=\kappa^{-1}\left(\frac{r^{2}}{2}\right),
\]
where
\begin{align}
\delta\left(r\right) & =-\int_{0}^{r}\frac{\int_{0}^{2\pi}\left\langle D_{1}\boldsymbol{W}\left(\rho,\theta\right),D_{2}\boldsymbol{W}\left(\rho,\theta\right)\right\rangle _{X}\mathrm{d}\theta}{\int_{0}^{2\pi}\left\langle D_{2}\boldsymbol{W}\left(\rho,\theta\right),D_{2}\boldsymbol{W}\left(\rho,\theta\right)\right\rangle _{X}\mathrm{d}\theta}\mathrm{d}\rho,\label{eq:Polar-delta-prime-1}\\
\kappa\left(r\right) & =\frac{1}{2\pi}\int_{0}^{r}\int_{0}^{2\pi}\left\langle D_{1}\boldsymbol{W}\left(\rho,\theta\right),\boldsymbol{W}\left(\rho,\theta\right)\right\rangle _{X}\mathrm{d}\theta\mathrm{d}\rho\label{eq:Polar-kappa-prime-1}
\end{align}
and $\left\langle \cdot,\cdot\right\rangle _{X}$ is the inner product on vector space $X$.
\item The instantaneous natural frequency and damping ratio are calculated as
\begin{align}
\omega\left(r\right) & =T\left(t\right)+\delta\left(t\right)-\delta\left(R\left(t\right)\right) & \left[\text{rad}/\text{step}\right],\label{eq:Polar-omega-correct}\\
\zeta\left(r\right) & =-\frac{\log r^{-1}\sqrt{2\kappa\left(R\left(t\right)\right)}}{\omega\left(r\right)} & \left[-\right],\label{eq:Polar-zeta-correct}
\end{align}
where $t=\kappa^{-1}\left(\frac{r^{2}}{2}\right)$.
\end{enumerate}
\end{prop}

\begin{proof}
A proof is given in appendix \ref{sec:proof-FreqDamp}.
\end{proof}
\begin{rem}
The transformed expressions (\ref{eq:Polar-freq-newpar}), (\ref{eq:Polar-damp-newpar}) for the instantaneous frequency and damping ratio show that any instantaneous frequency can be achieved for all $r>0$ if $R\left(r\right)\neq r$ by choosing appropriate functions $\rho,\gamma$. For example, zero frequency is achieved by solving
\begin{align}
\gamma\left(r\right) & =\gamma\left(\rho^{-1}\left(R\left(\rho\left(r\right)\right)\right)\right)-T\left(\rho\left(r\right)\right).\label{eq:Polar-zero-freq}
\end{align}
For the particular choice of $\rho=r$, equation (\ref{eq:Polar-zero-freq}) turns into
\[
\gamma\left(r\right)=\gamma\left(R\left(r\right)\right)-T\left(r\right),
\]
which is a functional equation. For an $\epsilon>0$, fix $\gamma\left(R\left(\epsilon\right)\right)=0$, $\gamma\left(\epsilon\right)=T\left(\epsilon\right)$ and some interpolating values in the interior of the interval $\left[R\left(\epsilon\right),\epsilon\right]$, then use the contraction mapping principle to arrive at a unique solution for the function $\gamma$.
\end{rem}
\begin{rem}
\label{rem:VF-freq-dm}The same calculation applies to vector fields, $\dot{\boldsymbol{x}}=\boldsymbol{f}\left(\boldsymbol{x}\right)$, but the final result is somewhat different.
Assume a decoder $\boldsymbol{W}:\left[0,r_{1}\right]\times\left[0,2\pi\right]\to X$ and functions $R,T:\left[0,r_{1}\right]\to\mathbb{R}$ such that they satisfy the invariance equation
\begin{equation}
D_{1}\boldsymbol{W}\left(r,\theta\right)R\left(r\right)+D_{2}\boldsymbol{W}\left(r,\theta\right)T\left(r\right)=\boldsymbol{f}\left(\boldsymbol{W}\left(r,\theta\right)\right).\label{eq:Polar-VF-invariance}
\end{equation}
The instantaneous natural frequency and damping ratio are calculated as
\begin{align*}
\omega\left(r\right) & =T\left(t\right)-\delta^{\prime}\left(t\right)R\left(t\right) & \left[\text{rad}/\text{unit time}\right],\\
\zeta\left(r\right) & =-\frac{\kappa^{\prime}\left(t\right)R\left(t\right)}{r^{2}\omega\left(r\right)} & \left[-\right],
\end{align*}
where $t=\kappa^{-1}\left(\frac{r^{2}}{2}\right)$. All other quantities are as in proposition \ref{prop:Polar-fr-dm}. A proof is given in appendix \ref{sec:proof-FreqDamp}.
\end{rem}
The following example illustrates that if a system is linear in a nonlinear coordinate system, we can recover the actual instantaneous damping ratios and frequencies using proposition \ref{prop:Polar-fr-dm}. This is the case for Koopman eigenfunctions, or normal form transformations where all the nonlinear terms are eliminated.
\begin{example}
Let us consider the linear map
\begin{equation}
\begin{array}{rl}
r_{k+1} & =\mathrm{e}^{\zeta_{0}\omega_{0}}r_{k}\\
\theta_{k+1} & =\theta_{k}+\omega_{0}
\end{array}\label{eq:Polar-Linear-Model}
\end{equation}
and the corresponding nonlinear decoder
\begin{equation}
\boldsymbol{W}\left(r,\theta\right)=\left(\begin{array}{c}
r\cos\theta-\frac{1}{4}r^{3}\cos^{3}\theta\\
r\sin\theta+\frac{1}{2}r^{3}\cos^{3}\theta
\end{array}\right).\label{eq:Polar-Linear-Decoder}
\end{equation}
In terms of the polar invariance equation (\ref{eq:Polar-Invariance}), the linear map (\ref{eq:Polar-Linear-Model}) translates to $T\left(r\right)=\omega_{0}$ and $R\left(r\right)=\mathrm{e}^{\zeta_{0}\omega_{0}}r$. If we disregard the nonlinearity of $\boldsymbol{W}$, the instantaneous frequency and the instantaneous damping ratio of our hypothetical system would be constant, that is, $\omega\left(r\right)=\omega_{0}$ and $\zeta\left(r\right)=\zeta_{0}$. Using proposition \ref{prop:Polar-fr-dm}, we calculate the effect of $\boldsymbol{W}$, and find that
\begin{align*}
\delta(r) & =-\frac{2}{\sqrt{19}}\left(\tan^{-1}\left(\frac{15r^{2}-8}{8\sqrt{19}}\right)+\tan^{-1}\left(\frac{1}{\sqrt{19}}\right)\right),\\
\kappa(r) & =\frac{1}{512}r^{2}\left(25r^{4}-48r^{2}+256\right).
\end{align*}
Finally, we plot the expressions (\ref{eq:Polar-omega-correct}) and (\ref{eq:Polar-zeta-correct}) in figure \ref{fig:Freq-Damp-graph}(a) and \ref{fig:Freq-Damp-graph}(b), respectively. It can be seen that the frequencies and damping ratios change with the vibration amplitude (red lines), but they are constant without taking the decoder (\ref{eq:Polar-Linear-Decoder}) into account (blue lines).
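The closed-form expressions above can be cross-checked against the quadrature formulas (\ref{eq:Polar-delta-prime-1}) and (\ref{eq:Polar-kappa-prime-1}); a minimal numpy sketch (illustrative only, using a periodic trapezoidal rule in $\theta$ and a plain trapezoidal rule in $\rho$):

```python
import numpy as np

# Decoder (Polar-Linear-Decoder) and its partial derivatives on a theta grid.
th = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
c, s = np.cos(th), np.sin(th)

def W(r):   return np.array([r*c - r**3 * c**3 / 4, r*s + r**3 * c**3 / 2])
def D1W(r): return np.array([c - 3 * r**2 * c**3 / 4, s + 3 * r**2 * c**3 / 2])
def D2W(r): return np.array([-r*s + 3 * r**3 * c**2 * s / 4,
                             r*c - 3 * r**3 * c**2 * s / 2])

def inner(a, b):              # (1/2 pi) integral over theta of <a, b>_X
    return np.mean(np.sum(a * b, axis=0))

def trap(f, x):               # plain trapezoidal rule
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(x))

def delta_kappa(r, n=2000):   # integrate the delta' and kappa' formulas in rho
    rho = np.linspace(1e-9, r, n)
    dp = np.array([-inner(D1W(p), D2W(p)) / inner(D2W(p), D2W(p)) for p in rho])
    kp = np.array([inner(D1W(p), W(p)) for p in rho])
    return trap(dp, rho), trap(kp, rho)

def delta_exact(r):
    s19 = np.sqrt(19.0)
    return -(2 / s19) * (np.arctan((15 * r**2 - 8) / (8 * s19))
                         + np.arctan(1 / s19))

def kappa_exact(r):
    return r**2 * (25 * r**4 - 48 * r**2 + 256) / 512
```

Both pairs agree to quadrature accuracy over the range of interest.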
\begin{figure}
\begin{centering}
\includegraphics[height=7cm]{FreqDamp-F}\includegraphics[height=7cm]{FreqDamp-D}
\par\end{centering}
\caption{\label{fig:Freq-Damp-graph}The instantaneous frequency (a) and instantaneous damping ratio (b) of system (\ref{eq:Polar-Linear-Model}) together with the decoder (\ref{eq:Polar-Linear-Decoder}). The blue line represents the naive calculation just from system (\ref{eq:Polar-Linear-Model}) and the red lines represent the corrected values (\ref{eq:Polar-freq-newpar}) and (\ref{eq:Polar-damp-newpar}).}
\end{figure}
The geometry of the reparametrisation is illustrated in figure \ref{fig:Polar-repair}.
\begin{figure}
\begin{centering}
\includegraphics[height=5cm]{FreqDamp-PhaseShift}
\par\end{centering}
\caption{\label{fig:Polar-repair}Geometry of the decoder (\ref{eq:Polar-Linear-Decoder}). The blue grid represents the polar coordinate system with maximum radius $r=1$. The red curves are the images of the blue concentric circles under the decoder $\boldsymbol{W}$. Each red dot corresponds to a phase $k\pi/5$, $k=1,\ldots,10$, that is, an intersection of a blue radial line with a blue circle, mapped by $\boldsymbol{W}$. Therefore the red dots deviating from the radial blue lines indicate a phase shift. The green curves correspond to the images of the blue concentric circles under the re-parametrised decoder $\tilde{\boldsymbol{W}}$ for the same parameters as the red curves. The amplitudes of the green curves are now the same as the amplitudes of the corresponding blue curves. On average there is no phase shift among the green curves, that is, the displacement of the black dots from the blue radial lines is zero on average.}
\end{figure}
\end{example}

\section{ROM identification procedures}

Here we describe our methodology for finding invariant foliations, manifolds and autoencoders. These steps involve methods described so far and further methods from the appendices.
First we start with finding an invariant foliation together with the invariant manifold that has the same dynamics as the foliation.
\begin{lyxlist}{00.00.0000}
\item [{F1.}] If the data is not from the full state space, especially when it is a scalar signal, we use a state-space reconstruction technique as described in appendix \ref{sec:DelayEmbedding}. At this point we have data points $\boldsymbol{x}_{k},\boldsymbol{y}_{k}\in X$, $k=1,\ldots,N$.
\item [{F2.\label{F2}}] To ensure that the solution of (\ref{eq:MAP-U-optim}) converges to the desired foliation, we calculate a linear estimate of the foliation. We only consider a small neighbourhood of the fixed point, where the dynamics is nearly linear. Hence, we define the index set $\mathcal{I}_{\rho}=\left\{ k\in\left\{ 1,\ldots,N\right\} :\left|\boldsymbol{x}_{k}\right|<\rho\right\} $ with $\rho$ sufficiently small, but large enough that it encompasses enough data for linear parameter estimation. Then we create a least-squares estimate \cite{boyd_vandenberghe_2018} of the linearised system about the equilibrium by calculating
\begin{align*}
\boldsymbol{K} & =\sum_{k\in\mathcal{I}_{\rho}}\left|\boldsymbol{x}_{k}\right|^{-2}\boldsymbol{x}_{k}\otimes\boldsymbol{x}_{k},\\
\boldsymbol{L} & =\sum_{k\in\mathcal{I}_{\rho}}\left|\boldsymbol{x}_{k}\right|^{-2}\boldsymbol{y}_{k}\otimes\boldsymbol{x}_{k},\\
\boldsymbol{A} & =\boldsymbol{L}\boldsymbol{K}^{-1},
\end{align*}
where matrix $\boldsymbol{A}$ approximates the Jacobian $D\boldsymbol{F}\left(\boldsymbol{0}\right)$. We calculate the left and right invariant subspaces ($E^{\star}$ and $E$, respectively) of matrix $\boldsymbol{A}$ using the real Schur decomposition $\boldsymbol{A}=\boldsymbol{Q}\boldsymbol{H}\boldsymbol{Q}^{T}$, where $\boldsymbol{Q}$ is orthogonal and $\boldsymbol{H}$ is quasi-upper-triangular, that is, upper triangular apart from $2\times2$ blocks on the diagonal.
The Schur decomposition is calculated (or rearranged) such that the first $\nu$ column vectors, $\boldsymbol{q}_{1},\ldots,\boldsymbol{q}_{\nu}$ of $\boldsymbol{Q}$ span the required right invariant subspace $E$ and correspond to eigenvalues $\mu_{1},\ldots,\mu_{\nu}$. Therefore we define $\boldsymbol{W}_{1}=\left[\boldsymbol{q}_{1},\ldots,\boldsymbol{q}_{\nu}\right]$. We repeat the Schur decomposition for $\boldsymbol{A}^{T}=\hat{\boldsymbol{Q}}\hat{\boldsymbol{H}}\hat{\boldsymbol{Q}}^{T}$, where $\hat{\boldsymbol{Q}}=\left[\hat{\boldsymbol{q}}_{1},\ldots,\hat{\boldsymbol{q}}_{n}\right]$ with the same order of eigenvalues in the diagonal blocks of $\boldsymbol{H}$ and $\hat{\boldsymbol{H}}$. This gives us the initial guess $D\boldsymbol{U}\left(\boldsymbol{0}\right)\approx\boldsymbol{U}_{1}=\left[\hat{\boldsymbol{q}}_{1},\ldots,\hat{\boldsymbol{q}}_{\nu}\right]^{T}$, which spans the left invariant subspace $E^{\star}$. The initial guess for the map $\boldsymbol{S}$ is such that $D\boldsymbol{S}\left(\boldsymbol{0}\right)\approx\boldsymbol{S}_{1}=\boldsymbol{U}_{1}\boldsymbol{A}\boldsymbol{U}_{1}^{T}$. Finally, we rearrange the decomposition $\boldsymbol{A}^{T}=\tilde{\boldsymbol{Q}}\tilde{\boldsymbol{H}}\tilde{\boldsymbol{Q}}^{T}$ such that the selected eigenvalues $\mu_{1},\ldots,\mu_{\nu}$ now appear last in the diagonal of $\tilde{\boldsymbol{H}}$, and we set $\boldsymbol{U}_{1}^{\perp}=\left[\tilde{\boldsymbol{q}}_{1},\ldots,\tilde{\boldsymbol{q}}_{n-\nu}\right]^{T}$ and $\boldsymbol{B}_{1}=\boldsymbol{U}_{1}^{\perp}\boldsymbol{A}\boldsymbol{U}_{1}^{\perp T}$. This provides the initial guesses $\boldsymbol{U}^{\perp}\approx\boldsymbol{U}_{1}^{\perp}$ and $\boldsymbol{B}\approx\boldsymbol{B}_{1}$ in optimisation (\ref{eq:MAP-Uhat-optim}). 
\item [{F3.}] We solve problem (\ref{eq:MAP-U-optim}) with the initial guess $D\boldsymbol{U}\left(\boldsymbol{0}\right)=\boldsymbol{U}_{1}$ and $D\boldsymbol{S}\left(\boldsymbol{0}\right)=\boldsymbol{S}_{1}$ as calculated in the previous step. We also prescribe the constraints that $D\boldsymbol{U}\left(\boldsymbol{0}\right)\left(D\boldsymbol{U}\left(\boldsymbol{0}\right)\right)^{T}=\boldsymbol{I}$ and that $\boldsymbol{U}\left(\boldsymbol{W}_{1}\boldsymbol{z}\right)$ is linear in $\boldsymbol{z}$. The computational representation of the encoder $\boldsymbol{U}$ is described in appendix \ref{subsec:sparse-poly}.
\item [{F4.\label{F4}}] We perform a normal form transformation on map $\boldsymbol{S}$ and transform $\boldsymbol{U}$ into the new coordinates. The normal form $\boldsymbol{S}_{n}$ satisfies the invariance equation $\boldsymbol{U}_{n}\circ\boldsymbol{S}=\boldsymbol{S}_{n}\circ\boldsymbol{U}_{n}$ (cf.\ equation (\ref{eq:MAP-U-invariance})), where $\boldsymbol{U}_{n}:Z\to Z$ is a nonlinear map. We then replace $\boldsymbol{U}$ with $\boldsymbol{U}_{n}\circ\boldsymbol{U}$ and $\boldsymbol{S}$ with $\boldsymbol{S}_{n}$ as our reduced order model. This step is optional and only required if we want to calculate instantaneous frequencies and damping ratios. In a two-dimensional coordinate system, where we have a complex conjugate pair of eigenvalues $\mu_{1}=\overline{\mu}_{2}$, the real valued normal form is
\[
\begin{pmatrix}z_{1}\\
z_{2}
\end{pmatrix}_{k+1}=\begin{pmatrix}z_{1}f_{r}\left(z_{1}^{2}+z_{2}^{2}\right)-z_{2}f_{i}\left(z_{1}^{2}+z_{2}^{2}\right)\\
z_{1}f_{i}\left(z_{1}^{2}+z_{2}^{2}\right)+z_{2}f_{r}\left(z_{1}^{2}+z_{2}^{2}\right)
\end{pmatrix},
\]
which leads to the polar form (\ref{eq:Polar-Invariance}) with $R\left(r\right)=r\sqrt{f_{r}^{2}\left(r^{2}\right)+f_{i}^{2}\left(r^{2}\right)}$ and $T\left(r\right)=\tan^{-1}\frac{f_{i}\left(r^{2}\right)}{f_{r}\left(r^{2}\right)}$.
This normal form calculation is described in \cite{Szalai2020ISF}.
\item [{F5.}] To find the invariant manifold, we use the approximate foliation (\ref{eq:Uhat-decoder}) and solve the optimisation problem (\ref{eq:MAP-Uhat-optim}) to find the decoder $\boldsymbol{W}$ of invariant manifold $\mathcal{M}$. The initial guess in problem (\ref{eq:MAP-Uhat-optim}) is such that $\boldsymbol{U}^{\perp}=\boldsymbol{U}_{1}^{\perp}$ and $\boldsymbol{B}=\boldsymbol{B}_{1}$. We also need to set the parameter $\kappa$, which is taken to be $\kappa=0.2$ throughout the paper. We have found that the results are only sensitive to the value of $\kappa$ if extreme values, such as $0$ or $\infty$, are chosen.
\item [{F6.\label{F6}}] In the case of oscillatory dynamics in a two-dimensional ROM, we recover the actual instantaneous frequencies and damping ratios using proposition \ref{prop:Polar-fr-dm}.
\end{lyxlist}
The procedure for the Koopman eigenfunction calculation is the same as steps F1--F6, except that $\boldsymbol{S}$ is assumed to be linear. To identify an autoencoder, we use the same setup as in \cite{Cenedese2022NatComm}. The computational representation of the autoencoder is described in appendix \ref{subsec:AENC-repr}.
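The linear estimate of step F2 above is a weighted least-squares problem; the following minimal numpy sketch (with synthetic linear data standing in for the snapshot pairs) recovers the Jacobian:

```python
import numpy as np

# Synthetic snapshot pairs y_k = A_true x_k near the fixed point.
rng = np.random.default_rng(2)
n, N = 4, 400
A_true = np.diag([0.9, 0.8, 0.5, 0.3])
Xs = 0.05 * rng.normal(size=(N, n))
Ys = Xs @ A_true.T

# Weighted least squares of step F2: accumulate K and L, then A = L K^{-1}.
K = np.zeros((n, n))
L = np.zeros((n, n))
for x, y in zip(Xs, Ys):
    w = 1.0 / (x @ x)                 # the |x_k|^{-2} weighting
    K += w * np.outer(x, x)
    L += w * np.outer(y, x)
A = L @ np.linalg.inv(K)              # approximates D F(0)
```

The invariant subspaces $E$ and $E^{\star}$ are then read off from reordered real Schur decompositions of $\boldsymbol{A}$ and $\boldsymbol{A}^{T}$, as described in step F2.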
We carry out the following steps.
\begin{lyxlist}{00.00.0000}
\item [{AE1.}] We identify $\boldsymbol{W}_{1}$, $\boldsymbol{S}_{1}$ as in step F2 and set $\boldsymbol{U}=\boldsymbol{W}_{1}^{T}$ and $D\boldsymbol{S}\left(\boldsymbol{0}\right)=\boldsymbol{S}_{1}$.
\item [{AE2.}] Solve the optimisation problem
\[
\arg\min_{\boldsymbol{U},\boldsymbol{W}_{nl}}\sum_{k=1}^{N}\left\Vert \boldsymbol{y}_{k}\right\Vert ^{-2}\left\Vert \boldsymbol{W}\left(\boldsymbol{U}\boldsymbol{y}_{k}\right)-\boldsymbol{y}_{k}\right\Vert ^{2},
\]
which tries to ensure that $\boldsymbol{W}\circ\boldsymbol{U}$ is the identity, and finally solve
\[
\arg\min_{\boldsymbol{S}}\sum_{k=1}^{N}\left\Vert \boldsymbol{x}_{k}\right\Vert ^{-2}\left\Vert \boldsymbol{W}\left(\boldsymbol{S}\left(\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right)\right)-\boldsymbol{y}_{k}\right\Vert ^{2}.
\]
\item [{AE3.}] Perform a normal form transformation on $\boldsymbol{S}$ by seeking the simplest $\boldsymbol{S}_{n}$ that satisfies $\boldsymbol{W}_{n}\circ\boldsymbol{S}_{n}=\boldsymbol{S}\circ\boldsymbol{W}_{n}$. This is similar to step F4, except that the normal form is in the style of the invariance equation of a manifold (\ref{eq:MAP-W-invariance}).
\item [{AE4.}] Same as step F6, applied to the nonlinear map $\boldsymbol{S}_{n}$ and decoder $\boldsymbol{W}\circ\boldsymbol{W}_{n}$.
\end{lyxlist}

\section{\label{sec:Examples}Examples}

We consider three examples. The first is a caricature model that illustrates all the techniques discussed in this paper and why certain techniques fail. The second example is a series of synthetic data sets with higher dimensionality, which illustrates the methods in more detail using a polynomial representation of the encoder with HT tensor coefficients (see appendix \ref{subsec:sparse-poly}). This example also illustrates two different methods to reconstruct the state space of the system from a scalar measurement.
The final example is a physical experiment of a jointed beam, where only a scalar signal is recorded and we need to reconstruct the state space with our previously tested technique. \subsection{\label{sec:Caricature}A caricature model} To illustrate the three conceptually different connections in definition \ref{def:Foil-Manif-AEnc}, we construct a simple two-dimensional map $\boldsymbol{F}$ with a node-type fixed point using the expression \begin{equation} \boldsymbol{F}\left(\boldsymbol{x}\right)=\boldsymbol{V}\left(\boldsymbol{A}\boldsymbol{V}^{-1}\left(\boldsymbol{x}\right)\right),\label{eq:caricature-mod} \end{equation} where \[ \boldsymbol{A}=\begin{pmatrix}\frac{9}{10} & 0\\ 0 & \frac{4}{5} \end{pmatrix}, \] the near-identity coordinate transformation is \begin{align*} \boldsymbol{V}\left(\boldsymbol{x}\right) & =\begin{pmatrix}\begin{array}{l} x_{1}+\frac{1}{4}\left(x_{1}^{3}-3\left(x_{1}-1\right)x_{2}x_{1}+2x_{2}^{3}+\left(5x_{1}-2\right)x_{2}^{2}\right)\\ x_{2}+\frac{1}{4}\left(2x_{2}^{3}+\left(2x_{1}-1\right)x_{2}^{2}-x_{1}^{2}\left(x_{1}+2\right)\right) \end{array}\end{pmatrix}, \end{align*} and the state vector is defined as $\boldsymbol{x}=\left(x_{1},x_{2}\right)$. In a neighbourhood of the origin, transformation $\boldsymbol{V}$ has a unique inverse, which we calculate numerically. Map $\boldsymbol{F}$ is constructed such that we can immediately identify the smoothest (hence unique) invariant manifolds corresponding to the two eigenvalues of $D\boldsymbol{F}\left(0\right)$ as \begin{align*} \mathcal{M}_{\frac{9}{10}} & =\left\{ \boldsymbol{V}\left(z,0\right),z\in\mathbb{R}\right\} ,\\ \mathcal{M}_{\frac{4}{5}} & =\left\{ \boldsymbol{V}\left(0,z\right),z\in\mathbb{R}\right\} . 
\end{align*}
We can also calculate the leaves of the two invariant foliations as
\begin{align}
\mathcal{L}_{\frac{9}{10},z} & =\left\{ \boldsymbol{V}\left(z,\overline{z}\right),\overline{z}\in\mathbb{R}\right\} ,\label{eq:caricature-foil-vert}\\
\mathcal{L}_{\frac{4}{5},z} & =\left\{ \boldsymbol{V}\left(\overline{z},z\right),\overline{z}\in\mathbb{R}\right\} .\label{eq:caricature-foil-horiz}
\end{align}
To test the methods, we created 500 trajectories, each 30 points long, with initial conditions sampled from a uniform distribution over the rectangle $\left[-\frac{4}{5},\frac{4}{5}\right]\times\left[-\frac{1}{4},\frac{1}{4}\right]$ to fit our reduced model to. We first attempt to fit an autoencoder to the data. We assume that the encoder, decoder and the nonlinear map $\boldsymbol{S}$ are
\begin{align}
\boldsymbol{U}\left(x_{1},x_{2}\right) & =x_{1},\label{eq:exa-AU-enc}\\
\boldsymbol{W}\left(z\right) & =\left(z,h\left(z\right)\right)^{T},\label{eq:exa-AU-dec}\\
\boldsymbol{S}\left(z\right) & =\lambda\left(z\right),\label{eq:exa-AU-map}
\end{align}
where $\lambda,h$ are order-5 polynomials. Our expressions already contain the invariant subspace $E=\mathrm{span}\left(1,0\right)^{T}$, which should make the fitting easier. Finally, we solve optimisation problem (\ref{eq:MAP-AE-optim}). The result of the fitting can be seen in figure \ref{fig:caricature-sim}(a), as depicted by the red curve. The fitted curve depends more on the distribution of data than on the actual position of the invariant manifold, which is represented by the blue dashed line in figure \ref{fig:caricature-sim}(a). Various other expressions for $\boldsymbol{U}$ and $\boldsymbol{W}$ that do not assume the direction of the invariant subspace $E$ were also tried, with similar results.
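The exact leaves (\ref{eq:caricature-foil-vert}) can also be verified numerically: the first coordinate of $\boldsymbol{V}^{-1}$ is an exact encoder satisfying $\boldsymbol{U}\left(\boldsymbol{F}\left(\boldsymbol{x}\right)\right)=\frac{9}{10}\boldsymbol{U}\left(\boldsymbol{x}\right)$. A minimal numpy sketch, inverting the near-identity map $\boldsymbol{V}$ by fixed-point iteration:

```python
import numpy as np

A = np.diag([9/10, 4/5])

def V(x):
    x1, x2 = x
    return np.array([
        x1 + (x1**3 - 3*(x1 - 1)*x2*x1 + 2*x2**3 + (5*x1 - 2)*x2**2) / 4,
        x2 + (2*x2**3 + (2*x1 - 1)*x2**2 - x1**2*(x1 + 2)) / 4])

def V_inv(y, iters=100):
    # V = id + (cubic terms): near the origin, x -> y - (V(x) - x) contracts.
    x = np.array(y, dtype=float)
    for _ in range(iters):
        x = y - (V(x) - x)
    return x

def F(x):                     # the caricature map F = V o A o V^{-1}
    return V(A @ V_inv(x))

def U_exact(x):               # exact encoder of the leaves: first coord of V^{-1}
    return V_inv(x)[0]
```

For small points such as $\boldsymbol{x}=(0.2,0.1)$, the relation $\boldsymbol{U}\left(\boldsymbol{F}\left(\boldsymbol{x}\right)\right)=\frac{9}{10}\boldsymbol{U}\left(\boldsymbol{x}\right)$ holds to numerical precision, confirming the linear conjugate dynamics $\boldsymbol{S}\left(z\right)=\frac{9}{10}z$ on this foliation.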
To calculate the invariant foliation in the horizontal direction, we assume that \begin{equation} \left.\begin{array}{rl} \boldsymbol{U}\left(x_{1},x_{2}\right) & =x_{1}+u\left(x_{1},x_{2}\right)\\ \boldsymbol{S}\left(z\right) & =\lambda z \end{array}\right\} ,\label{eq:exa-FOL-enc} \end{equation} where $u$ is an order-5 polynomial without constant and linear terms. The exact expression of $\boldsymbol{U}$ is not a polynomial, because it is the first coordinate of the inverse of function $\boldsymbol{V}$. The fitting is carried out by solving the optimisation problem (\ref{eq:MAP-U-optim}). The result can be seen in figure \ref{fig:caricature-sim}(b), where the red curves are contour plots of the identified encoder $\boldsymbol{U}$ and the dashed blue lines are the leaves as defined by equation (\ref{eq:caricature-foil-vert}). Figure \ref{fig:caricature-sim}(c) is produced in the same way as \ref{fig:caricature-sim}(b), except that the encoder is defined as $\boldsymbol{U}\left(x_{1},x_{2}\right)=x_{2}+h\left(x_{1},x_{2}\right)$ and the blue lines are the leaves given by (\ref{eq:caricature-foil-horiz}). As we have discussed in section \ref{subsec:LocallyAccurateEncoder}, an approximate encoder can also be constructed from a decoder. In the expression of the encoder (\ref{eq:Uhat-decoder}) we take \[ \boldsymbol{W}_{0}\left(z\right)=h\left(z\right) \] and $\boldsymbol{U}^{\perp}=\left(0,1\right)$, where $h$ is an order-9 polynomial without constant and linear terms. The expressions for $\boldsymbol{U}$ and $\boldsymbol{S}$ were already found as (\ref{eq:exa-FOL-enc}), hence our approximate encoder becomes \[ \hat{\boldsymbol{U}}\left(\boldsymbol{x}\right)=x_{2}-h\left(x_{1}+u\left(x_{1},x_{2}\right)\right). \] We solve optimisation problem (\ref{eq:MAP-Uhat-optim}) with $\kappa=0.13$. We do not reconstruct the decoder $\boldsymbol{W}$, as it is straightforward to plot the level surfaces of $\hat{\boldsymbol{U}}$ directly.
The result can be seen in figure \ref{fig:caricature-sim}(d), where the green line is the approximate invariant manifold (the zero level surface of $\hat{\boldsymbol{U}}$) and the red lines are other level surfaces of $\hat{\boldsymbol{U}}$. In conclusion, this simple example shows that only invariant foliations can be fitted to data, even if only approximately. Autoencoders give spurious results. \begin{figure} \begin{centering} \includegraphics[width=0.35\linewidth]{SSM-nonfit}\includegraphics[width=0.35\linewidth]{ISF-x1fit}\\ \includegraphics[width=0.35\linewidth]{ISF-x2fit}\includegraphics[width=0.35\linewidth]{ISF-SSM-fit} \par\end{centering} \caption{\label{fig:caricature-sim}Identifying invariant objects in equation (\ref{eq:caricature-mod}). The data contains 500 trajectories of length 30 with initial conditions picked from a uniform distribution; (a) fitting an autoencoder (red continuous curve) does not reproduce the invariant manifold (blue dashed curve); instead it follows the distribution of the data; (b) invariant foliation in the horizontal direction; (c) invariant foliation in the vertical direction; (d) an invariant manifold calculated as the leaf of a locally accurate invariant foliation. The green line is the invariant manifold. Each diagram represents the box $\left[-\frac{4}{5},\frac{4}{5}\right]\times\left[-\frac{1}{4},\frac{1}{4}\right]$; the axis labels are intentionally hidden.} \end{figure} \subsection{\label{subsec:10dimsys-example}A ten-dimensional system} To create a numerically challenging example, we construct a ten-dimensional differential equation from five decoupled second-order nonlinear oscillators using two coordinate transformations.
The system of decoupled oscillators is denoted by $\dot{\boldsymbol{x}}=\boldsymbol{f}_{0}\left(\boldsymbol{x}\right)$, where the state variable is in the form of \[ \boldsymbol{x}=\left(r_{1},\ldots,r_{5},\theta_{1},\ldots,\theta_{5}\right) \] and the dynamics is given by \begin{equation} \begin{array}{ll} \dot{r}_{1}=-\frac{1}{500}r_{1}+\frac{1}{100}r_{1}^{3}-\frac{1}{10}r_{1}^{5}, & \dot{\theta}_{1}=1+\frac{1}{4}r_{1}^{2}-\frac{3}{10}r_{1}^{4},\\ \dot{r}_{2}=-\frac{\mathrm{e}}{500}r_{2}-\frac{1}{10}r_{2}^{5}, & \dot{\theta}_{2}=\mathrm{e}+\frac{3}{20}r_{2}^{2}-\frac{1}{5}r_{2}^{4},\\ \dot{r}_{3}=-\frac{1}{50}\sqrt{\frac{3}{10}}r_{3}+\frac{1}{100}r_{3}^{3}-\frac{1}{10}r_{3}^{5},\quad & \dot{\theta}_{3}=\sqrt{30}+\frac{9}{50}r_{3}^{2}-\frac{19}{100}r_{3}^{4},\\ \dot{r}_{4}=-\frac{1}{500}\pi^{2}r_{4}+\frac{1}{100}r_{4}^{3}-\frac{1}{10}r_{4}^{5}, & \dot{\theta}_{4}=\pi^{2}+\frac{4}{25}r_{4}^{2}-\frac{17}{100}r_{4}^{4},\\ \dot{r}_{5}=-\frac{13}{500}r_{5}+\frac{1}{100}r_{5}^{3}, & \dot{\theta}_{5}=13+\frac{4}{25}r_{5}^{2}-\frac{9}{50}r_{5}^{4}. \end{array}\label{eq:10dim-model} \end{equation} The first transformation brings the polar form of equation (\ref{eq:10dim-model}) into Cartesian coordinates using the transformation $\boldsymbol{y}=\boldsymbol{g}\left(\boldsymbol{x}\right)$, which is defined by $y_{2k-1}=r_{k}\cos\theta_{k}$ and $y_{2k}=r_{k}\sin\theta_{k}$. 
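Because equation (\ref{eq:10dim-model}) is in polar form, the amplitude-dependent frequency and damping of each oscillator can be read off directly; for the first oscillator $\omega\left(r\right)=\dot{\theta}_{1}$ and, adopting the common convention $\zeta\left(r\right)=-\dot{r}_{1}/\left(r_{1}\,\omega\left(r\right)\right)$ (the paper fixes its exact convention in section \ref{sec:freq-damp}), a minimal Python sketch of the analytic reference curves is:

```python
import numpy as np

def omega1(r):
    # instantaneous angular frequency of the first oscillator: theta1-dot
    return 1.0 + r**2/4.0 - 3.0*r**4/10.0

def zeta1(r):
    # damping ratio -r1dot/(r1 * omega); the r -> 0 limit recovers 1/500
    rdot_over_r = -1.0/500.0 + r**2/100.0 - r**4/10.0
    return -rdot_over_r / omega1(r)

r = np.linspace(0.0, 0.5, 51)
freq = omega1(r)   # analytic backbone curve (frequency vs. amplitude)
damp = zeta1(r)    # analytic damping ratio vs. amplitude
```

These are the analytic reference curves (labelled VF in the figures below) against which the identified foliations are compared.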
Finally, we couple all variables using the second nonlinear transformation $\boldsymbol{y}=\boldsymbol{h}\left(\boldsymbol{z}\right)$, which reads \begin{equation} \begin{array}{rlrl} y_{1} & =z_{1}+z_{3}-\frac{1}{12}z_{3}z_{5}, & y_{2} & =z_{2}-z_{3},\\ y_{3} & =z_{3}+z_{5}-\frac{1}{12}z_{5}z_{7}, & y_{4} & =z_{4}-z_{5},\\ y_{5} & =z_{5}+z_{7}+\frac{1}{12}z_{7}z_{9}, & y_{6} & =z_{6}-z_{7},\\ y_{7} & =z_{7}+z_{9}-\frac{1}{12}z_{1}z_{9}, & y_{8} & =z_{8}-z_{9},\\ y_{9} & =z_{9}+z_{1}-\frac{1}{12}z_{3}z_{1},\quad & y_{10} & =z_{10}-z_{1}, \end{array}\label{eq:10dim-transform} \end{equation} where $\boldsymbol{y}=\left(y_{1},\ldots,y_{10}\right)$ and $\boldsymbol{z}=\left(z_{1},\ldots,z_{10}\right)$. The two transformations give us the differential equation $\dot{\boldsymbol{z}}=\boldsymbol{f}\left(\boldsymbol{z}\right)$, where \begin{equation} \boldsymbol{f}\left(\boldsymbol{z}\right)=\left[D\boldsymbol{g}^{-1}\left(\boldsymbol{h}\left(\boldsymbol{z}\right)\right)D\boldsymbol{h}\left(\boldsymbol{z}\right)\right]^{-1}\boldsymbol{f}_{0}\left(\boldsymbol{g}^{-1}\left(\boldsymbol{h}\left(\boldsymbol{z}\right)\right)\right).\label{eq:10dim-finmod} \end{equation} The natural frequencies of our system at the origin are \[ \omega_{1}=1,\omega_{2}=\mathrm{e},\omega_{3}=\sqrt{30},\omega_{4}=\pi^{2},\omega_{5}=13 \] and the damping ratios are all equal: $\zeta_{1}=\cdots=\zeta_{5}=1/500$. We select the first natural frequency to test the various methods. We also test the methods on three types of data: firstly, full state space information is used; secondly, the state space is reconstructed from the signal $\xi_{k}=\frac{1}{10}\sum_{j=1}^{10}z_{k,j}$ using principal component analysis (PCA) as described in appendix \ref{subsec:PCA}; finally, the state space is reconstructed from $\xi_{k}$ using a discrete Fourier transform (DFT) as described in appendix \ref{subsec:DFT}.
When data is recorded in state space form, $1000$ trajectories, each $16$ points long with time step $\Delta T=0.01$, were created by numerically solving (\ref{eq:10dim-finmod}). Initial conditions were sampled from balls of radius $0.8$, $1.0$, $1.2$ and $1.4$ about the origin, with the Euclidean norm of the initial conditions uniformly distributed. The four data sets are labelled ST-1, ST-2, ST-3, ST-4 in the diagrams. For state space reconstruction, 100 trajectories, 3000 points each, with time step $\Delta T=0.01$ were created by numerically solving (\ref{eq:10dim-finmod}). The initial conditions for these data were similarly sampled from balls of radius $0.8$, $1.0$, $1.2$ and $1.4$ about the origin, such that the Euclidean norm of the initial conditions is uniformly distributed. The PCA reconstructed data are labelled PCA-1, PCA-2, PCA-3, PCA-4 and the DFT reconstructed data are labelled DFT-1, DFT-2, DFT-3, DFT-4. The amplitude for each ROM is calculated as $A\left(r\right)=\sqrt{\frac{1}{2\pi}\int\left(\boldsymbol{w}^{\star}\cdot\boldsymbol{W}\left(r,\theta\right)\right)^{2}\mathrm{d}\theta}$, where $\boldsymbol{w}^{\star}=\frac{1}{10}\left(1,1,\ldots,1\right)$ for the state-space data and $\boldsymbol{w}^{\star}$ is calculated in appendices \ref{subsec:PCA}, \ref{subsec:DFT} when state-space reconstruction is used. We can also attach an amplitude to each data point $\boldsymbol{x}_{k}$ through the encoder and the decoder. In the polar parametrisation of the ROM, the radial coordinate of a data point is an instantaneous amplitude, and it is the same as the radial parameter $r$ of $\boldsymbol{W}$, because we have re-parametrised $\boldsymbol{W}$ in section \ref{sec:freq-damp} for this to be true. Therefore we can use $\left\Vert \boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right\Vert $ as the instantaneous amplitude of data point $\boldsymbol{x}_{k}$.
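The amplitude $A\left(r\right)$ is a root mean square over the phase $\theta$ and is straightforward to evaluate by quadrature. A sketch with a toy decoder $\boldsymbol{W}\left(r,\theta\right)=r\left(\cos\theta,\sin\theta,0,\dots,0\right)$, which is our own illustrative choice and for which $A\left(r\right)=r/10$ exactly when $\boldsymbol{w}^{\star}=\frac{1}{10}\left(1,\dots,1\right)$:

```python
import numpy as np

w_star = np.full(10, 0.1)   # observation vector used for state-space data

def W(r, theta):
    # toy decoder: a circle of radius r in the first two coordinates
    out = np.zeros(10)
    out[0] = r*np.cos(theta)
    out[1] = r*np.sin(theta)
    return out

def amplitude(r, n=512):
    # A(r) = sqrt( (1/2pi) * integral of (w_star . W(r,theta))^2 dtheta ),
    # evaluated by averaging over a uniform grid in theta, which is exact
    # for low-order trigonometric integrands
    thetas = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    vals = np.array([(w_star @ W(r, t))**2 for t in thetas])
    return np.sqrt(vals.mean())
```

In the actual computation $\boldsymbol{W}$ is the fitted polynomial decoder; the quadrature step is unchanged.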
\begin{figure} \begin{centering} \includegraphics[width=0.99\linewidth]{FullState}\\ \includegraphics[width=0.99\linewidth]{PCAState}\\ \includegraphics[width=0.99\linewidth]{DFTState} \par\end{centering} \caption{\label{fig:10dim-foliations}Instantaneous frequencies and damping ratios of differential equation (\ref{eq:10dim-finmod}) using invariant foliations. The first column shows the data density with respect to the scalar value $\left\Vert \boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right\Vert $, the second column is the instantaneous frequency and the third column is the instantaneous damping. The first row corresponds to state space data, the second row shows PCA reconstructed data and the third row is DFT reconstructed data. The data density is controlled by sampling initial conditions from different sized neighbourhoods of the origin. Too small a neighbourhood results in inaccurate high-amplitude predictions; too large a neighbourhood invalidates assumptions about embedding dimensions. The amplitude $A\left(r\right)$ is calculated using formula (\ref{eq:amplitude}), which is a root mean square calculation.} \end{figure} Figure \ref{fig:10dim-foliations} shows the result of our calculation for the three types of data. In the first column, data density is displayed with respect to amplitude in the ROM, that is, the density with respect to $\left\Vert \boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right\Vert $. Lower amplitudes have higher densities, because trajectories converge exponentially to the origin. This also means that the accuracy of the foliation will be higher at lower amplitudes. In figure \ref{fig:10dim-foliations} we also display the identified instantaneous frequencies and damping ratios. The results are then compared to the analytically calculated frequencies and damping ratios. State space data gives the closest match to the analytical reference (labelled as VF).
We find that the PCA method cannot embed the data in a 10-dimensional space; only an 18-dimensional embedding would be acceptable. However, there are technical complications with using the 18-dimensional state space reconstruction, because the data would still lie on a 10-dimensional submanifold, and our method is best suited for cases when the data is not on a submanifold. This is not necessarily a problem in experimental settings, because most systems have infinite dimensional state spaces, which we must truncate at a finite dimension by selecting a sampling frequency. Using a perfect reproducing filter bank yields better results, probably because the original signal can be fully reconstructed and we expect a correct state-space reconstruction at small amplitudes. Indeed, the results only diverge at higher amplitudes, where the state space reconstruction is no longer valid. Moreover, including more data with high amplitudes further increases inaccuracies, as can be seen for curves labelled DFT-2, DFT-3, DFT-4. The author has also tried non-optimal delay embedding, with inferior results. None of the techniques had any problem with the less challenging Shaw-Pierre example \cite{ShawPierre,Szalai2020ISF} (data not shown). \begin{figure} \begin{centering} \includegraphics[width=0.99\columnwidth]{Koopman} \par\end{centering} \caption{\label{fig:10dim-Koopman}ROM by Koopman eigenfunctions. The same quantities are calculated as in figure \ref{fig:10dim-foliations}, except that map $\boldsymbol{S}$ is assumed to be linear. There is some variation in the frequencies and damping ratios with respect to the amplitude due to the corrections in section \ref{sec:freq-damp}, but accurate values could not be recovered, as the linear approximation does not account for near internal resonances, as in formula (\ref{eq:Internal-nonresonance}).} \end{figure} When restricting map $\boldsymbol{S}$ to be linear, we are identifying Koopman eigenfunctions.
Even though the identified dynamics is linear, we should still be able to reproduce the nonlinearities, as illustrated in section \ref{sec:freq-damp}. However, we also have near internal resonances as per equation (\ref{eq:Internal-nonresonance}), which make certain terms of the encoder $\boldsymbol{U}$ large and therefore difficult to find by optimisation. The result can be seen in figure \ref{fig:10dim-Koopman}. The identified frequencies and damping ratios show little variation with amplitude and mostly capture the average of the reference values. \begin{figure} \begin{centering} \includegraphics[width=0.99\linewidth]{AENC-figure} \par\end{centering} \caption{\label{fig:10dim-autoencoder}Data analysis by autoencoder. An autoencoder can recover the system dynamics if all data is on the invariant manifold. For the solid line MAN, data was artificially forced to be on an a-priori calculated invariant manifold. However, if the data is not on an invariant manifold, such as for data set ST-1, the autoencoder calculation is meaningless. The dotted line VF-1 represents the analytic calculation for the first natural frequency, the dash-dotted VF-2 depicts the analytic calculation for the second natural frequency of vector field (\ref{eq:10dim-finmod}).} \end{figure} Knowing that autoencoders are only useful if all the dynamics is on the manifold, we have synthetically created data consisting of trajectories with initial conditions on the invariant manifold of the first natural frequency. We used 800 trajectories, 24 points each, with time-step $\Delta t=0.2$, starting on the manifold. Fitting an autoencoder to this data yields a good match in figure \ref{fig:10dim-autoencoder}; the corresponding lines are labelled MAN. Then we tried data set ST-1, which matched the reference best when calculating an invariant foliation. However, our data does not lie on a manifold, and it is impossible to make $\boldsymbol{W}\circ\boldsymbol{U}$ close to the identity on our data.
The results in figure \ref{fig:10dim-autoencoder} show this: the frequency and damping curves are far from the reference. \subsection{Jointed beam} \begin{figure} \begin{centering} \includegraphics[width=0.99\linewidth]{beamdiag-01} \par\end{centering} \caption{\label{fig:JointedBeam}(a) Experimental setup. (b) Schematic of the jointed beam. The width of the beam (not shown) is 25mm. All measurements are in millimetres.} \end{figure} Here we analyse the data published in~\cite{Titurus2016}. The experimental setup can be seen in figure \ref{fig:JointedBeam}. The two halves of the beam were joined together with an M6 bolt. The two interfaces of the beams were sandblasted to increase friction and a polished steel plate was placed between them; finally, the bolt was tightened using four different torques: a minimal torque so that the beam does not collapse under its own weight (denoted as 0~Nm), 1~Nm, 2~Nm and 3.1~Nm. The free vibration of the steel beam was recorded using an accelerometer placed at the end of the beam. The vibration was initiated using an impact hammer at the position of the accelerometer. Calibration data for the accelerometer is not available. For each torque value a number of 20-second-long signals were recorded. The impacts were of different magnitudes so that the amplitude dependency of the dynamics could be tracked. In~\cite{Titurus2016}, a linear model was fitted to each signal and the first five vibration frequencies and damping ratios were identified. These are represented by various markers in figure \ref{fig:JointedBeam-ROM}.
To make a connection between the peak impact force and the instantaneous amplitude, we also calculated the peak root mean square (RMS) amplitude for the signals with the largest impact force for each tightening torque and found that the average conversion factor between the peak RMS amplitude and the peak impact force was 443, which we used to divide the peak force and plot the equivalent peak RMS in figure \ref{fig:JointedBeam-ROM}. To calculate the invariant foliation we used a 12-dimensional PCA-reconstructed state space, as described in appendix \ref{subsec:PCA}. In contrast to our synthetic data, the signal could be accurately reproduced in 12 dimensions and, at the same time, the data filled up a non-negligible volume of the reconstructed state space. We chose $\kappa=0.2$ in problem (\ref{eq:MAP-Uhat-optim}) when finding the invariant manifold. The result can be seen in figure \ref{fig:JointedBeam-ROM}. Since we do not have the ground truth for this system, it is not possible to tell which method is more accurate. However, we can argue that linear identification at high amplitudes should produce errors because of non-negligible nonlinearities, hence the invariant foliation should be more accurate there. \begin{figure} \begin{centering} \includegraphics[width=0.99\linewidth]{beam_pca} \par\end{centering} \caption{\label{fig:JointedBeam-ROM}Instantaneous frequencies and damping ratios of the jointed beam setup in figure \ref{fig:JointedBeam}. Solid red lines correspond to the invariant foliation of the system with minimal tightening torque, blue dashed lines with tightening torque of 2.1 Nm and green dotted lines with tightening torque of 3.1 Nm. The markers of the same colour show results calculated in \cite{Titurus2016} using curve fitting.
(a) data density with respect to amplitude $\left\Vert \boldsymbol{U}\left(\boldsymbol{x}_{k}\right)\right\Vert $, (b) instantaneous frequency, (c) instantaneous damping ratio.} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=0.49\linewidth]{reconstruct-beam-1}\includegraphics[width=0.49\linewidth]{reconstruct-beam-3} \par\end{centering} \caption{\label{fig:JointedBeam-reconstruction}Reconstruction error of the foliation. (a,b) Reconstructing the first signal of the series of experimental data with 0 Nm tightening torque. (c,d) Reconstructing the first signal of the series of experimental data with 2.1 Nm tightening torque. The compared values are calculated through the encoder $\boldsymbol{z}=\boldsymbol{U}\left(\boldsymbol{x}_{k}\right)$ and the reconstructed values are from the iteration $\boldsymbol{z}_{k+1}=\boldsymbol{S}\left(\boldsymbol{z}_{k}\right)$. The amplitude error ($\left\Vert \boldsymbol{z}\right\Vert $) is minimal; however, over time phase error accumulates, hence a direct comparison of a coordinate ($z_{1}$) can look unfavourable.} \end{figure} We can also assess whether the measured signal is reproduced by the invariant foliation $\left(\boldsymbol{U},\boldsymbol{S}\right)$ in figure \ref{fig:JointedBeam-reconstruction}. For this we apply the encoder $\boldsymbol{U}$ to our original signal $\boldsymbol{x}_{k}$, $k=1,\ldots,N$ and compare this signal to the one produced by the recursion $\boldsymbol{z}_{k+1}=\boldsymbol{S}\left(\boldsymbol{z}_{k}\right)$, where $\boldsymbol{z}_{1}=\boldsymbol{U}\left(\boldsymbol{x}_{1}\right)$. In the case of our data series with 2.1 Nm tightening torque there was a small amplitude error, and even the phase error accumulated only slowly over time. For the loosest tightening torque (0 Nm) there was a reasonably low amplitude error, but the phase error accumulated to its maximum at about one second, from which point onwards there was no further accumulation.
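The behaviour seen in figure \ref{fig:JointedBeam-reconstruction}, with small one-step and amplitude errors but a steadily accumulating phase error, is exactly what a tiny frequency mismatch produces. A self-contained toy illustration, with our own parameter choices rather than anything fitted to the beam data:

```python
import cmath

n = 3000
rho = 0.9995                       # per-step amplitude decay, shared by both
om_true, om_model = 1.000, 1.001   # per-step rotation angles in radians

z_true = [0.5*(rho*cmath.exp(1j*om_true))**k for k in range(n)]
z_model = [0.5*(rho*cmath.exp(1j*om_model))**k for k in range(n)]

# one-step prediction error of the model applied to the *true* states:
# this is the quantity that a single-step fitting procedure minimises
one_step = max(abs(rho*cmath.exp(1j*om_model)*z_true[k] - z_true[k + 1])
               for k in range(n - 1))

# the amplitude error stays at machine precision ...
amp_err = max(abs(abs(a) - abs(b)) for a, b in zip(z_true, z_model))

# ... while the phase error accumulates to roughly (om_model - om_true)*n
phase_err = abs(cmath.phase(z_true[-1]/z_model[-1]))
```

The one-step error is of order $10^{-3}$ of the amplitude, yet the final phase error is of order one radian, which is why long reconstructed trajectories can look unfavourable in a direct coordinate comparison.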
Note that our fitting procedure does not accumulate phase errors; it only minimises the prediction error for a single time step, the maximum of which over all data points was in the range of $10^{-3}$ for all four tightening torques and occurred at the largest amplitudes. Unfortunately, accumulating phase error is rarely shown in the literature, where only short trajectories are compared, in contrast to the $28685$ and $25149$ time-steps that are displayed in figure \ref{fig:JointedBeam-reconstruction}. \section{Discussion} The main conclusion of this study is that only invariant foliations are suitable for ROM identification from off-line data. Using an invariant foliation avoids the need to use resonance decay \cite{EHRHARDT2016612}, or to wait for the signal to settle near the most attracting invariant manifold, thereby throwing away valuable data. Invariant foliations can make use of unstructured data with arbitrary initial conditions, such as impact hammer tests. Invariant foliations produce genuine reduced order models and do not merely parametrise a-priori known invariant manifolds, like other methods \cite{Cenedese2022NatComm,Champion2019Autoencoder,Yair2017DiffusionNormal}. We have shown that the high-dimensional function required to represent an encoder can be represented by polynomials with sparse tensor coefficients, which significantly reduces the computational and memory costs. Sparse tensors are also amenable to further analysis, such as singular value decomposition \cite{GrasedyckSVD}, which gives way to mathematical interpretations of the encoder $\boldsymbol{U}$. The low dimensional map $\boldsymbol{S}$ is also amenable to normal form transformations, which can be used to extract information such as instantaneous frequencies and damping ratios. We have tested the related concept of Koopman eigenfunctions, which differs from an invariant foliation in that map $\boldsymbol{S}$ is assumed to be linear.
If there are no internal resonances, Koopman eigenfunctions are theoretically equivalent to invariant foliations. However, in numerical settings, unresolved near internal resonances become important and therefore Koopman eigenfunctions become inadequate. We have also tried to fit autoencoders to our data \cite{Cenedese2022NatComm}, but apart from the artificial case where the invariant manifold was pre-computed, they performed even worse than Koopman eigenfunctions. Fitting an invariant foliation to data is extremely robust when state space data is available. However, when the state space needed to be reconstructed from a scalar signal, the results were not as accurate as we had hoped. While Takens's theorem allows for any generic delay coordinates, in practice a non-optimal choice can lead to poor results. We expected that the embedding dimension, at least for low amplitude signals, would be the same as the attractor dimension. This is, however, not true if the data also includes higher amplitude points. Despite not being theoretically optimal, we have found that perfect reproducing filter banks produce accurate results for low amplitude signals and at the same time provide a state-space reconstruction with the same dimensionality as that of the attractor. For our experimental data, the optimal PCA-based state-space reconstruction was appropriate, as the underlying system was a continuum and therefore the signal was not on a submanifold for any choice of dimensionality of the reconstructed state space. Future work should include exploring various state-space reconstruction techniques in combination with fitting an invariant foliation to data. We did not fully explore the idea of locally accurate invariant foliations in remark \ref{rem:extrasimpleROM}, which can lead to computationally efficient methods. Further research can also be directed towards cases where the data is on a high-dimensional submanifold of an even higher dimensional vector space $X$.
This is where an autoencoder and invariant foliation may be combined.
https://arxiv.org/abs/1606.03670
The Probability That All Eigenvalues are Real for Products of Truncated Real Orthogonal Random Matrices
The probability that all eigenvalues of a product of $m$ independent $N \times N$ sub-blocks of a Haar distributed random real orthogonal matrix of size $(L_i+N) \times (L_i+N)$, $(i=1,\dots,m)$ are real is calculated as a multi-dimensional integral, and as a determinant. Both involve Meijer G-functions. Evaluation formulae of the latter, based on a recursive scheme, allow it to be proved that for any $m$ and with each $L_i$ even the probability is a rational number. The formulae furthermore provide for explicit computation in small order cases.
\section{Introduction} In general an $N \times N$ real matrix may have both real and complex eigenvalues. A natural question is thus to ask for the probability $p_{N,k}^{X}$ that a random real matrix $X$ chosen from a particular distribution has a specific number $k$ of real eigenvalues. Due to the complex eigenvalues coming in complex conjugate pairs, $k$ must have the same parity as $N$ for $p_{N,k}^{X}$ to be non-zero. This question is of interest from a number of different viewpoints. In probability theory, for large $N$, the distribution of $p_{N,k}^{X}$ in the case that $X=G_1^{-1} G_2$, where $G_1, G_2$ are standard real Gaussian matrices, can be proved \cite{FM11} to satisfy a local central limit theorem with \begin{equation}\label{VM} {\rm Var}_N \sim (2 - \sqrt{2}) \mu_N, \qquad \mu_N \asymp \sqrt{N}. \end{equation} The recent work~\cite{Si15} proves a central limit theorem for polynomial functionals of the real eigenvalues in this setting, and the relation~\eqref{VM} is found. This latter relation was first observed in \cite{FN07} in the case of matrices $X$ drawn from the real Ginibre ensemble, and thus with independent standard real Gaussian entries. Very recently (\ref{VM}) has been observed in numerical simulations to hold for a wide class of random matrix ensembles, and with a corresponding distribution consistent with a local limit theorem~\cite{GPT16}. The large $N$ form of the probability $p_{N,0}^X$ ($N$ even) that no eigenvalues are real has been observed in \cite{KPTTZ15} to be intimately related to the large $s$ form of the probability that the interval $[-s/2,s/2]$ of the real line is free of eigenvalues with the limit $N \to \infty$ already taken. For $X$ a member of the real Ginibre ensemble, the latter asymptotic form was computed in \cite{Fo15e}. Paper [17] also calculates the large $N$ asymptotics of $p_{N,k}^X$ for $k\ll \sqrt{N}/\log N$.
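Such probabilities are easily estimated by simulation; for the real Ginibre ensemble with $N=2$ the exact value $p_{2,2}^X=1/\sqrt{2}$ is known \cite{Ed97}, which provides a convenient check. A minimal Monte Carlo sketch, assuming Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)

def n_real_eigs(X, tol=1e-10):
    # for a real input matrix, LAPACK's dgeev returns exactly zero
    # imaginary parts for the real eigenvalues, so tol is uncritical
    return int(np.sum(np.abs(np.linalg.eigvals(X).imag) < tol))

N, trials = 2, 20000
hits = sum(n_real_eigs(rng.standard_normal((N, N))) == N for _ in range(trials))
p22_hat = hits / trials   # Monte Carlo estimate of p_{2,2}; exact value 2**-0.5
```

With $20000$ samples the statistical error is of order $0.003$, so the estimate agrees with $1/\sqrt{2} \approx 0.7071$ to about two digits.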
The other extreme, the probability $p_{N,N}^X$ that all eigenvalues are real, is known to be related to large deviation principles \cite{GPTW16}, allowing for its asymptotic determination in the case of $X$ equal to the product of $m$ real Ginibre matrices \cite{Fo13}. From the viewpoint of applications, the probability $p_{N,N}^X$ in the case $X$ of the form $G_1^{-1} G_2$ gives the probability that random elements from a certain tensor structure have minimal rank \cite{tB91}. With $G_1, G_2$ real Ginibre matrices this probability can be computed exactly \cite{FM11,BF12}. The probability $p_{N,N}^X$ with $N=2$ and $X = G_1 G_2$ was shown in \cite{La13} to have an interpretation in quantum entanglement, quantifying when two qubits $|\phi_1 \rangle$ and $|\phi_2 \rangle$ chosen from a uniform distribution on the 3-sphere are an optimal pair. Another interest in $p_{N,k}^X$, for $X$ a real Ginibre matrix \cite{Ed97,AK07,Ma11}, the inverse of a real Ginibre matrix times a real Ginibre matrix \cite{FM11}, and the product of two real Ginibre matrices \cite{Fo13,Ku15,FI16}, is its special arithmetic properties. The ability to probe these properties relies on integrable structures associated with the computation of probabilities in these cases. In addition, the proof in the case of the product of two real Ginibre matrices relies on new evaluation formulae for certain Meijer G-functions \cite{Ku15}. Integrable structures are also present in the computation of statistical distributions relating to products of truncated real orthogonal matrices \cite{KSZ09,Fo10,IK14}. It is our aim in this paper to make use of the integrable structures to give a formula for the probability $p_{N,N}^X$ in the case that $X$ is formed from the product of $m$ truncated real orthogonal matrices, and to isolate arithmetic properties in the case $m=1$ (general $N$, $L_1$), the case $m=2$ ($L_1=1$, $L_2=2$, small $N$), and for $m\geq 2$ (general $N$, all $L_i$ even).
The latter requires the derivation of some further evaluation formulae for certain Meijer G-functions. The evaluations of $p_{N,N}^X$, both in the form of a multi-dimensional integral and of a determinant, are given in Section \ref{S2}. The multi-dimensional integral can be evaluated as a product of gamma functions in the case $m=1$, making the arithmetic properties immediate. Section \ref{S3} contains the evaluation formulae for certain Meijer G-functions in terms of recurrences, and from this it follows that $p_{N,N}^X$ is rational when the $L_i$ are even. We conclude in Section \ref{S4} by implementing the recurrences in some low order cases (equivalent to $m=1$ and $m=2$) to give some explicit evaluations of $p_{N,N}^X$. \section{The Probability $p_{N,N}^X$ for Products of Truncated Real Orthogonal Matrices}\label{S2} \subsection{Multidimensional Integral Formula} Let $R$ be a Haar distributed random real orthogonal matrix of size $(L+N) \times (L+N)$, and let $D$ be an $N \times N$ sub-block of $R$. A peculiarity of this setting is that only for $L \ge N$ is the corresponding probability density function of $D$ free of delta function constraints, and given by the smooth function (see e.g.~\cite[Eq.~(3.113)]{Fo10}) \begin{equation}\label{1} P(D) = {1 \over C_{N,L}} \det ( \mathbb I - D^T D)^{(L-N-1)/2}, \end{equation} where \begin{equation}\label{2} C_{N,L} = \pi^{N^2/2} \prod_{j=0}^{N-1} {\Gamma((L-N+1+j)/2) \over \Gamma((L+1+j)/2)}. \end{equation} Our interest is in the real eigenvalues of the product of random matrices \begin{equation}\label{D} P_m=D_1 D_2 \cdots D_m, \end{equation} where each $D_i$ is of size $N \times N$, but constructed as a sub-block of a $(L_i + N) \times (L_i + N)$ random real orthogonal matrix. To be able to make use of (\ref{1}) we will require each $L_i \ge N$; however, the final formulae to be obtained are well defined for all $L_i \ge 0$ and remain valid in this range. \begin{prop} Let $P_m$ be specified as in (\ref{D}).
Define \begin{equation}\label{W} w_m(x) = G^{m,0}_{m,m} \Big ( {L_1/2,L_2/2,\dots,L_m/2 \atop 0,0,\dots,0} \Big | x^2 \Big ), \end{equation} where $G_{m,m}^{m,0}$ denotes a particular Meijer G-function as specified in e.g.~\cite{Lu69}. Then for $L_i \ge 0$ ($i=1,\dots,m$), \begin{equation}\label{pb} p_{N,N}^{P_m} = \prod_{i=1}^m \prod_{s=0}^{N-1} {\Gamma((L_i + 1 + s)/2) \over \Gamma((s+1)/2) } \int_{\lambda_1 > \lambda_2 > \cdots > \lambda_N} \prod_{l=1}^N w_m(\lambda_l) \prod_{1 \le j < k \le N} (\lambda_j - \lambda_k) \, d \lambda_1 \cdots d \lambda_N. \end{equation} \end{prop} \noindent Proof. \quad Following an idea in \cite{ARRS13}, we decompose each $D_i$ so that $ D_i = Q_i R_i Q_{i+1}^T, $ where each $Q_i$ is real orthogonal and $R_i$ is upper triangular, $$ R_i = \begin{bmatrix} \lambda_1^{(i)} & & & \\ & \lambda_2^{(i)} & & \\ & & \ddots & \\ & & & \lambda_N^{(i)} \end{bmatrix} + T_i, $$ with the matrix $T_i$ being strictly upper triangular. To keep the working succinct, we suppose for the time being that $L_i \ge N$ ($i=1,\dots,m$), in which case each $D_i$ has probability density function~\eqref{1}. Ignoring the label $(i)$ for now, we know from the working in \cite{Ma11} that \begin{align*} & {1 \over C_{N,L}} \int \det ( \mathbb I - D^T D)^{(L-N-1)/2} \, (dT) (dQ)\\ & \qquad = {1 \over C_{N,L}} {\pi^{N(N+1)/4} \over \prod_{j=1}^N \Gamma(j/2)} \prod_{s=1}^N \pi^{(N-s)/2} {\Gamma((L-N+s)/2) \over \Gamma(L/2)} \, \prod_{s=1}^N (1 - \lambda_s^2)^{L/2 - 1} \\ & \qquad = \prod_{s=0}^{N-1} {\Gamma((L+1+s)/2) \over \Gamma((s+1)/2) \Gamma(L/2)} \, \prod_{s=1}^N (1 - \lambda_s^2)^{L/2 - 1} . \end{align*} We now re-instate the $(i)$ by writing $L \mapsto L_i$, $\lambda_s \mapsto \lambda_s^{(i)}$, and we require that $ \lambda_s = \prod_{i=1}^m \lambda_s^{(i)}, $ where $\{\lambda_s \}$ are the eigenvalues of the product matrix $P_m$.
In changing variables to $\{\lambda_s \}$, and integrating out over $(dQ)$ and $(dT)$, we obtain for the joint probability density of these eigenvalues the functional form \begin{multline}\label{F1} \prod_{i=1}^m \prod_{s=0}^{N-1} {\Gamma((L_i + 1 + s)/2) \over \Gamma((s+1)/2) \Gamma(L_i/2)} \\ \times \prod_{l=1}^N \int_{-1}^1 d \lambda_l^{(1)} \cdots \int_{-1}^1 d \lambda_l^{(m)} \, \delta \Big (\lambda_l - \prod_{i=1}^m \lambda_l^{(i)} \Big ) \prod_{i=1}^m (1 - (\lambda_l^{(i)})^2 )^{L_i/2 - 1} \prod_{1 \le j < k \le N} (\lambda_j - \lambda_k). \end{multline} Now write \begin{equation}\label{F2} F(\lambda) = \int_{-1}^1 d \lambda^{(1)} \cdots \int_{-1}^1 d \lambda^{(m)} \, \delta \Big (\lambda - \prod_{i=1}^m \lambda^{(i)} \Big ) \prod_{i=1}^m (1 - (\lambda^{(i)})^2 )^{L_i/2 - 1}. \end{equation} Taking the Mellin transform, we then have that $$ \int_{-\infty}^\infty F(\lambda) |\lambda|^{p-1} \, d \lambda = \int_{-1}^1 \cdots \int_{-1}^1 \prod_{i=1}^m | \lambda^{(i)}|^{p-1} (1 - (\lambda^{(i)})^2)^{L_i/2 - 1} \, d \lambda^{(i)}. $$ After a change of variables, these are all Euler beta integrals, and so $$ \int_0^\infty F(\lambda) \lambda^{p-1} \, d \lambda = \phi(p), \qquad \phi(p) = {1 \over 2} \prod_{i=1}^m {\Gamma(p/2) \Gamma(L_i/2) \over \Gamma((p+L_i)/2)}. $$ Taking the inverse Mellin transform, we thus have that \begin{align*} F(x) = {1 \over 2 \pi i} \int_{c-i \infty}^{c+i \infty} x^{-2s} \prod_{l=1}^m {\Gamma(s) \Gamma(L_l/2) \over \Gamma(s+L_l/2)} \, ds = \prod_{l=1}^m \Gamma(L_l/2) \, w_m(x), \end{align*} where $w_m(x)$ is given by (\ref{W}). Recalling the definition of $F(x)$ as given by (\ref{F2}), and substituting in (\ref{F1}), we obtain (\ref{pb}), although with the restriction $L_i \ge N$. This restriction was imposed because of the essential use of the density~\eqref{1}.
However, at the expense of more complex and lengthy working, it is possible to do without (\ref{1}), making use instead of the fact that, with $D$ the top $N \times N$ sub-block of the first $N$ columns of the $(L + N) \times (L+N)$ Haar distributed real orthogonal matrix and $B$ the bottom $L \times N$ sub-block, the joint distribution of $(D,B)$ is proportional to $\delta(D^TD + B^T B - \mathbb I_{N})$. In the case $m=1$, the necessary working is sketched in \cite{KSZ09}, with the full details, including a generalisation to so-called induced ensembles \cite{FBKSZ12}, given in the thesis \cite{Fi12}. This has the consequence of allowing (\ref{pb}) to be established for all $L_i \ge 0$. {}~\hfill $\square$ In the special case $m=1$ we have \begin{equation}\label{Ga} G^{1,0}_{1,1} \Big ( {L_1/2 \atop 0} \Big | x^2 \Big ) = {1 \over\Gamma(L_1/2)} (1 - x^2)^{L_1/2-1} \chi_{|x| < 1}. \end{equation} Substituting in~\eqref{pb}, the resulting multidimensional integral can be evaluated in terms of a product of gamma functions, allowing the arithmetic properties of $p_{N,N}^{P_1}$ to be specified. \begin{cor}\label{cor3} We have \begin{equation}\label{F3} p_{N,N}^{P_1} = \prod_{j=0}^{N-1} {\Gamma(L_1+j) \Gamma((L_1+j)/2) \over \Gamma(L_1 + (N+j-1)/2) \Gamma(L_1/2)}. \end{equation} As a consequence, for $L_1$ even, $p_{N,N}^{P_1} $ is rational, while for $L_1$ odd, it is equal to a rational number times $1/\pi^{\lfloor N/2 \rfloor}$. \end{cor} \noindent Proof.
\quad Substituting (\ref{Ga}) in (\ref{pb}) gives \begin{align*} p_{N,N}^{P_1} & = \prod_{s=0}^{N-1} {\Gamma((L_1 + 1 + s)/2) \over \Gamma((s+1)/2) \Gamma(L_1/2)} \int_{\lambda_1 > \lambda_2 > \cdots > \lambda_N} \prod_{l=1}^N (1 - \lambda_l^2)^{L_1/2-1} \prod_{1 \le j < k \le N} (\lambda_j - \lambda_k) \, d \lambda_1 \cdots d \lambda_N \\ & = {1 \over N!} \prod_{s=0}^{N-1} {\Gamma((L_1 + 1 + s)/2) \over \Gamma((s+1)/2) \Gamma(L_1/2)} \int_{-1}^1 \cdots \int_{-1}^1 \prod_{l=1}^N (1 - \lambda_l^2)^{L_1/2-1} \prod_{1 \le j < k \le N} |\lambda_j - \lambda_k| \, d \lambda_1 \cdots d \lambda_N. \end{align*} Changing variables in the integral shows that it is equal to $$ 2^{N L_1} 2^{N(N-3)/2} \int_0^1 \cdots \int_0^1 \prod_{j=1}^N x_j^{L_1/2-1}(1 - x_j)^{L_1/2-1} \prod_{1 \le j < k \le N} |x_k - x_j| \, dx_1 \cdots dx_N. $$ This is a particular Selberg integral (see e.g.~\cite[Ch.~4]{Fo10}) and so can be evaluated to give $$ 2^{N L_1} 2^{N(N-3)/2} \prod_{j=0}^{N-1} {(\Gamma(L_1/2+j/2))^2 \Gamma(1+(j+1)/2) \over \Gamma (L_1 + (N+j-1)/2) \Gamma(3/2)}. $$ Substituting and simplifying, (\ref{F3}) results. \hfill $\square$ \begin{remark}\label{Rb} Writing $D = X/\sqrt{L}$ in (\ref{1}) and taking the limit $L \to \infty$ shows that the distribution tends to a Gaussian in $X$, proportional to $e^{-{1 \over 2} {\rm Tr} \, X^T X}$. It has been known for some time that the probability of all eigenvalues being real for a real standard Gaussian matrix is equal to $2^{-N(N-1)/4}$ \cite{Ed97}. Indeed taking the limit $L_1 \to \infty$ in (\ref{F3}) reclaims this value. \end{remark} \subsection{Determinant Formula} It is a standard exercise in random matrix theory to write the multidimensional integral in terms of a Pfaffian, using a method based on integration over alternate variables due to de Bruijn~\cite{deB55}. The details depend on the parity of $N$.
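As a numerical illustration (a sketch of ours, not taken from the paper), the product formula \eqref{F3} is conveniently evaluated with log-gamma functions for stability; this also exhibits the real Ginibre limit of Remark \ref{Rb}.

```python
from math import lgamma, exp

def p_NN_m1(N, L1):
    """Product-of-gammas formula (F3) for p_{N,N} at m = 1; log-gamma avoids overflow.
    Function name is ours."""
    s = 0.0
    for j in range(N):
        s += lgamma(L1 + j) + lgamma((L1 + j) / 2)
        s -= lgamma(L1 + (N + j - 1) / 2) + lgamma(L1 / 2)
    return exp(s)

print(p_NN_m1(2, 2))  # 2/3 for N = 2, L1 = 2
# For L1 large this should approach the real Ginibre value 2^(-N(N-1)/4)
print(p_NN_m1(3, 10**5), 2 ** (-3 * 2 / 4))
```

The second line of output illustrates the limit $L_1 \to \infty$ of the remark, with the two printed numbers agreeing to several digits.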
As noted in earlier studies of the computation of probabilities relating to real eigenvalues for certain random matrix ensembles \cite{FN07,FN08p,FM11,Fo13,FI16}, the fact that the resulting matrix entries vanish in a checkerboard fashion allows the Pfaffian, which equals the square root of the determinant of the corresponding anti-symmetric matrix, to then be expressed as a determinant of half the size. \begin{prop} With $w_m(x)$ given by~\eqref{W}, define \begin{align}\label{alpha} \nonumber &\alpha_{j,k} = \int_{-1}^1 dx \int_{-1}^1 dy \, w_m(x) w_m(y) x^{j-1} y^{k-1} {\rm sgn} (y - x),\\ &\nu_j = \int_{-1}^1 w_m(x) x^{j-1} \, dx. \end{align} We then have \begin{equation}\label{det} p_{N,N}^{P_m} = \prod_{i=1}^m \prod_{s=0}^{N-1} {\Gamma((L_i + 1 + s)/2) \over \Gamma((s+1)/2) } \det A, \end{equation} where for $N$ even \begin{equation} A = [\alpha_{2j-1,2k} ]_{j,k=1,\dots,N/2}, \end{equation} while for $N$ odd \begin{equation} A = \Big [ [\alpha_{2j-1,2k} ]_{j =1,\dots,(N+1)/2 \atop k=1,\dots,(N-1)/2} \: [\nu_{2j-1} ]_{j=1,\dots,(N+1)/2} \Big ]. \end{equation} Moreover, the matrix elements~\eqref{alpha} permit the evaluations \begin{equation}\label{alp} \alpha_{2j-1,2k} = G^{m+1,m}_{2m+1,2m+1} \Big ( {3/2-j,\dots,3/2-j ;1,L_1/2+k,L_2/2+k,\dots,L_m/2+k \atop 0,k,\dots,k;3/2-j-L_1/2,\dots,3/2-j-L_m/2} \Big | 1 \Big ), \end{equation} as well as \begin{equation}\label{nu} \nu_{2j-1} = \prod_{l=1}^m {\Gamma(j-1/2) \over \Gamma(L_l/2+j-1/2)}. \end{equation} \end{prop} \noindent Proof. \quad In addition to the original paper~\cite{deB55}, the de Bruijn formulae for the expression of the multiple integral~\eqref{pb} as a Pfaffian are given in e.g.~\cite[Prop.~6.3.4 ($N$ even), Exercises 6.3 q.1 ($N$ odd)]{Fo10}. Explicitly, for $N$ even this gives~\eqref{det} with $\det A$ replaced by ${\rm Pf} \,[\alpha_{j,k}]_{j,k=1,\dots,N}$.
Since $w_m(x)$ is even, it follows from the definition~\eqref{alpha} that $$ \alpha_{2j,2k} = \alpha_{2j-1,2k-1} = 0 \qquad (j,k=1,\dots,N/2), $$ so every alternate element in the matrix $[\alpha_{j,k}]$ is zero. Interchanging rows and columns so that the zero elements are all in the top left and bottom right block and noting $\alpha_{2k,2j-1} = - \alpha_{2j-1,2k}$ shows that ${\rm Pf} \,[\alpha_{j,k}]_{j,k=1,\dots,N} = \det A$ as required. The $N$ odd case is similar. The evaluations~\eqref{alp} and~\eqref{nu} follow from standard Meijer G-function formulae (see e.g.~\cite{Lu69}).\\ ${}_{}$ \hfill $\square$ \begin{remark} \label{Rc} Taking the limits $L_1,\dots,L_m \to \infty$, it follows from the definition of the Meijer G-function as a contour integral \cite{Lu69} that $$ p_{N,N}^{P_m} \to \prod_{s=0}^{N-1} \Big ( {1 \over \Gamma((s+1)/2)} \Big )^m \left \{ \begin{array}{ll} \det [ \tilde{\alpha}_{2j-1,2k} ]_{j,k=1,\dots,N/2}, & N \: {\rm even} \\[.2cm] \det \Big [ [\tilde{\alpha}_{2j-1,2k} ]_{j =1,\dots,(N+1)/2 \atop k=1,\dots,(N-1)/2} \: [\tilde{\nu}_{2j-1} ]_{j=1,\dots,(N+1)/2} \Big ],& N \: {\rm odd} \end{array} \right. $$ with $$ \tilde{\alpha}_{j,k} = G^{m+1,m}_{m+1,m+1} \Big ( {3/2-j,\dots,3/2-j ;1\atop 0, k,\dots, k} \Big | 1 \Big ), \quad \tilde{\nu}_{2j-1} = (\Gamma(j-1/2))^m. $$ In keeping with Remark \ref{Rb}, this is the functional form derived in \cite{Fo13} for the probability that all eigenvalues are real for a product of $m$ real standard Gaussian matrices of size $N \times N$. \end{remark} \begin{remark} \label{Rd} Using the working that led to \cite[Prop.~6]{Fo13}, we can show that, assuming $L_i \ne 0$ for all $i$, $$ \lim_{m \to \infty} {\prod_{i=1}^m \Gamma(L_i/2+j-1/2) \Gamma(L_i/2+k) \over (\Gamma(j-1/2) \Gamma(k))^m} \, \alpha_{2j-1,2k} = \left \{ \begin{array}{ll} 1, & j \le k \\ 0, & j > k \end{array} \right. $$ and thus $\lim_{m \to \infty} p_{N,N}^{P_m} = 1$, in accordance with an effect first noted in \cite{La13}.
\end{remark} \section{Evaluation of some Meijer G-functions}\label{S3} \begin{prop} Consider, for positive integers $\mu, \nu, j,k$, the expression \begin{align} \label{Kmunu} K^{\mu,\nu}_{j,k}:=\frac{\Gamma \left(j-1/2\right)}{\Gamma (\mu) \Gamma \left(\mu+\nu+j+k -3/2\right)} \sum _{r=1}^{\nu} \frac{\Gamma \left(j+k+r-3/2\right) \Gamma (\mu+\nu-r)}{\Gamma \left(j+r-1/2\right) \Gamma (\nu-r+1)}. \end{align} Then we have the following finite-sum results: \begin{align}\label{Gmunu} &G^{m+1,m}_{2m+1,2m+1} \Big ( {3/2-j,\dots,3/2-j ;1,\mu+k,k\dots,k \atop 0,k,\dots,k;3/2-j-\nu,3/2-j,\dots,3/2-j} \Big | 1 \Big )\nonumber\\ &=G^{2,1}_{3,3} \Big ( {3/2-j ;1,\mu+k \atop 0,k;3/2-j-\nu} \Big | 1 \Big)=K^{\mu,\nu}_{j,k} , \end{align} \begin{align}\label{Gmunu2} \nonumber &G^{m+1,m}_{2m+1,2m+1} \Big ( {3/2-j,\dots,3/2-j ;1,\mu+k,\nu+k,k,\dots,k \atop 0,k,\dots,k;3/2-j-\mu,3/2-j-\nu,3/2-j,\dots,3/2-j} \Big | 1 \Big )\nonumber\\ =\,&G^{3,2}_{5,5} \Big ( {3/2-j,3/2-j ;1,\mu+k,\nu+k \atop 0,k,k;3/2-j-\mu,3/2-j-\nu} \Big | 1 \Big)\nonumber\\ \nonumber =&\sum_{\xi=1}^\mu\sum_{\eta=1}^\nu \frac{\Gamma(2\mu-\xi)\Gamma(\xi+j+k-3/2)\Gamma(2\nu-\eta)\Gamma(\eta+j+k-3/2)}{\Gamma(\mu)\Gamma(\mu-\xi+1)\Gamma(2\mu+j+k-3/2)\Gamma(\nu)\Gamma(\nu-\eta+1)\Gamma(2\nu+j+k-3/2)}\\ &\times\bigg(K_{j,k}^{\xi,\eta}+K_{j,k}^{\eta,\xi}+\frac{\Gamma^2(j-1/2)}{\Gamma(\xi+j-1/2)\Gamma(\eta+j-1/2)}\bigg). \end{align} We note that these Meijer G-functions are rational numbers. These results will be used in Section~\ref{S4} to obtain explicit evaluation of the probability $p^{P_m}_{N,N}$ for $m=2$ and $L_1,L_2$ even. \end{prop} \noindent Proof. 
\quad We begin with the following three-term recurrence relation which is satisfied by Meijer G-functions: \begin{align} \label{recur} \nonumber &G^{m,n}_{p,q} \Big( {a_1, ...\,a_n; a_{n+1},...,a_{p-1},a_p-1 \atop b_1, ...\,b_m; b_{m+1},...,b_q}\,\Big|z \Big) +G^{m,n}_{p,q} \Big( {a_1, ...\,a_n; a_{n+1},...,a_{p-1},a_p \atop b_1, ...\,b_m; b_{m+1},...,b_{q-1},b_q+1 }\,\Big|z \Big)\\ &=(a_p-b_q-1)G^{m,n}_{p,q} \Big( {a_1, ...\,a_n; a_{n+1},...,a_p \atop b_1, ...\,b_m; b_{m+1},...,b_q}\,\Big|z \Big);~~~~~~~~~~~~n<p, m<q. \end{align} With the aid of this relation we can construct the diagram as shown in Fig.~1. \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{MeijerDiagram1.pdf} \caption{Diagram facilitating the derivation of recurrence relation~\eqref{Grec}.} \label{MeijerDiagram1} \end{figure} In this diagram we use the shorthand notation $G\binom{\alpha}{\beta}$ to represent the Meijer G-function $G^{m+1,m}_{2m+1,2m+1} \Big( {a_1, ...\,a_m; a_{m+1},...,a_{2m},\alpha \atop b_1, ...\,b_{m+1}; b_{m+2},...,b_{2m},\beta}\,\Big|1\Big)$. The arrows show how successively new Meijer G-functions can be constructed from the previous ones. The `inverse terms' (in red -- colour online) are the weights that have to be considered while constructing the Meijer G-functions. For example, $G\binom{\alpha+1}{-\beta-1}=(\alpha+\beta+1)^{-1}\left[G\binom{\alpha}{-\beta-1}+G\binom{\alpha+1}{-\beta}\right],$ $G\binom{\alpha+1}{-\beta-2}=(\alpha+\beta+2)^{-1}\left[G\binom{\alpha}{-\beta-2}+G\binom{\alpha+1}{-\beta-1}\right],$ etc. All the Meijer G's below the topmost line can be constructed using those at the topmost line by following the arrows.
A careful observation leads to the following recurrence relation: \begin{align} \label{Grec} \nonumber G\binom{\alpha+\mu}{-\beta-\nu}=\sum_{r=1}^{\nu} C_{\mu,\nu+1-r}\bigg(\prod_{s=r}^{\mu+\nu-1}(\alpha+\beta+s)^{-1}\bigg)G\Big({\alpha\atop -\beta-r}\Big)\\ +\sum_{r=1}^{\mu} C_{\nu,\mu+1-r}\bigg(\prod_{s=r}^{\mu+\nu-1}(\alpha+\beta+s)^{-1}\bigg)G\Big({\alpha+r\atop -\beta}\Big). \end{align} Here the coefficients $C_{i,j}$ are given by \begin{equation}\label{coeff} C_{i,j}=\binom{i+j-2}{j-1}=\frac{\Gamma(i+j-1)}{\Gamma(i)\Gamma(j)}=\frac{1}{(i+j-1)\text{B}(i,j)}. \end{equation} Interestingly, these coefficients satisfy the recurrence relation \begin{align} C_{1,j}=1,~~ C_{i,j}=\sum_{r=1}^j C_{i-1,r}, \end{align} and form a tilted Pascal's triangle, as depicted in Fig.~2. \begin{figure}[ht!] \centering \includegraphics[width=0.25\linewidth]{Pascal.pdf} \caption{Some of the coefficients as defined in~\eqref{coeff}. These coefficients constitute a tilted Pascal's triangle.} \label{Pascal} \end{figure} Now, for non-negative integers $l_1,...,l_m$, not all of them $0$, we have the following identities that follow from the standard contour integral formula for the Meijer G-function~\cite{Lu69}: \begin{align} \label{id1} G^{m+1,m}_{2m+1,2m+1} \Big({3/2-j, ...\,3/2-j; 1,l_1+k,...,l_m+k \atop 0, k,...,k; 3/2-j,...3/2-j }\,\Big|1 \Big)=0, \end{align} \begin{align} \label{id2} \nonumber G^{m+1,m}_{2m+1,2m+1} \Big({3/2-j, ...\,3/2-j; 1,k,...,k \atop 0, k,...,k; 3/2-j-l_1,...3/2-j-l_m}\,\Big|1 \Big) =\prod_{s=1}^m\prod_{r_s=1}^{l_s} \frac{1}{(j+r_s-3/2)}\\ =\prod_{s=1}^m\frac{\Gamma(j-1/2)}{\Gamma(j+l_s-1/2)}. \end{align} We note that \eqref{id2} is related to \eqref{nu}.
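As a quick machine check (a sketch using only the Python standard library; not part of the proof), the coefficients \eqref{coeff} can be verified to satisfy the tilted-Pascal recurrence stated above.

```python
from math import comb

def C(i, j):
    # Coefficients (coeff): C_{i,j} = binom(i+j-2, j-1); helper name ours
    return comb(i + j - 2, j - 1)

# Check C_{1,j} = 1 and C_{i,j} = sum_{r=1}^{j} C_{i-1,r} on a small range
assert all(C(1, j) == 1 for j in range(1, 8))
for i in range(2, 8):
    for j in range(1, 8):
        assert C(i, j) == sum(C(i - 1, r) for r in range(1, j + 1))
print("tilted Pascal recurrence verified for i, j < 8")
```

The recurrence is the hockey-stick identity for binomial coefficients in disguise, which is why the verification succeeds identically.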
When $l_1=\cdots=l_m=0$, the Meijer G-function $G^{m+1,m}_{2m+1,2m+1} \Big({3/2-j, ...\,3/2-j; 1,k,...,k \atop 0, k,...,k; 3/2-j,...3/2-j }\,\Big|z \Big)$ reduces to $G^{1,0}_{1,1} \Big({1 \atop 0}\,\Big|z \Big)$, which is the Heaviside step function of $|z|$, \begin{equation}\label{G1011} G^{1,0}_{1,1} \Big({1 \atop 0}\,\Big|z \Big)=\Theta(1-|z|)=\begin{cases}1, & |z|<1,\\0, & |z|>1,\end{cases} \end{equation} and hence discontinuous at $z=1$. It turns out that taking its value to be $1/2$ at $z=1$ gives the correct result for a certain probability, as observed in Section~\ref{S4}. With the above results at hand,~\eqref{Gmunu} follows as \begin{align*} &G^{m+1,m}_{2m+1,2m+1} \Big({3/2-j,...,3/2-j; 1,\mu+k,k,...,k \atop 0, k,...,k; 3/2-j-\nu,3/2-j,...,3/2-j}\,\Big|1 \Big) =G^{2,1}_{3,3} \Big({3/2-j; 1,\mu+k \atop 0,k; 3/2-j-\nu}\Big|1 \Big)\\ =&\sum_{r=1}^{\nu} C_{\mu,\nu+1-r}\bigg(\prod_{s=r}^{\mu+\nu-1}(j+k+s-3/2)^{-1}\bigg)G^{2,1}_{3,3} \Big({3/2-j; 1,k \atop 0,k; 3/2-j-r }\,\Big|1 \Big)\\ +&\sum_{r=1}^{\mu} C_{\nu,\mu+1-r}\bigg(\prod_{s=r}^{\mu+\nu-1}(j+k+s-3/2)^{-1}\bigg)G^{2,1}_{3,3} \Big({3/2-j; 1,k+r \atop 0,k; 3/2-j}\,\Big|1 \Big)\\ =&\sum_{r=1}^{\nu}C_{\mu,\nu+1-r}\bigg(\prod_{s=r}^{\mu+\nu-1}(j+k+s-3/2)^{-1}\bigg)\bigg(\prod_{t=1}^{r}(j+t-3/2)^{-1}\bigg)+0\\ =&\frac{\Gamma \left(j-1/2\right)}{\Gamma (\mu ) \Gamma \left(j+k+\mu+\nu -3/2\right)} \sum _{r=1}^{\nu } \frac{\Gamma \left(j+k+r-3/2\right) \Gamma (\mu+\nu-r)}{\Gamma \left(j+r-1/2\right) \Gamma (\nu-r+1)}. \end{align*} In the second-to-last step we used Eqs.~\eqref{id1} and \eqref{id2}.
Now, let us consider \begin{align} \nonumber U^{\mu,\nu}_{j,k}:= &G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\mu+k,k \atop 0,k,k; 3/2-j-\mu,3/2-j-\nu }\,\Big|1 \Big) =G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,k,\mu+k \atop 0,k,k; 3/2-j-\nu,3/2-j-\mu }\,\Big|1 \Big)\\ \nonumber =&\sum_{\xi=1}^\mu C_{\mu,\mu+1-\xi}\bigg(\prod_{s=\xi}^{2\mu-1}(j+k+s-3/2)^{-1}\bigg) \bigg(G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,k,k \atop 0,k,k; 3/2-j-\nu,3/2-j-\xi }\,\Big|1 \Big)\\ \nonumber &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,k,k+\xi \atop 0,k,k; 3/2-j,3/2-j-\nu}\,\Big|1 \Big)\bigg)\\ =&\sum_{\xi=1}^\mu \frac{\Gamma(2\mu-\xi)\Gamma(\xi+j+k-3/2)}{\Gamma(\mu)\Gamma(\mu-\xi+1)\Gamma(2\mu+j+k-3/2)}\bigg(\frac{\Gamma^2(j-1/2)}{\Gamma(\nu+j-1/2)\Gamma(\xi+j-1/2)}+K^{\xi,\nu}_{j,k}\bigg), \end{align} \begin{align} \nonumber V^{\mu,\nu}_{j,k}:= &G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\mu+k,\nu+k \atop 0,k,k; 3/2-j-\mu,3/2-j }\,\Big|1 \Big) =G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\nu+k,\mu+k \atop 0,k,k; 3/2-j,3/2-j-\mu }\,\Big|1 \Big)\\ \nonumber =&\sum_{\xi=1}^\mu C_{\mu,\mu+1-\xi}\bigg(\prod_{s=\xi}^{2\mu-1}(j+k+s-3/2)^{-1}\bigg) \bigg(G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\nu+k,k \atop 0,k,k; 3/2-j,3/2-j-\xi }\,\Big|1 \Big)+0\bigg)\\ =&\sum_{\xi=1}^\mu \frac{\Gamma(2\mu-\xi)\Gamma(\xi+j+k-3/2)}{\Gamma(\mu)\Gamma(\mu-\xi+1)\Gamma(2\mu+j+k-3/2)}\,K^{\nu,\xi}_{j,k}. 
\end{align} Finally, the result~\eqref{Gmunu2} follows as \begin{align*} &G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\mu+k,\nu+k \atop 0,k,k; 3/2-j-\mu,3/2-j-\nu }\,\Big|1 \Big) =G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\nu+k,\mu+k \atop 0,k,k; 3/2-j-\nu,3/2-j-\mu }\,\Big|1 \Big)\\ =&\sum_{\xi=1}^\mu C_{\mu,\mu+1-\xi}\bigg(\prod_{s=\xi}^{2\mu-1}(j+k+s-3/2)^{-1}\bigg) \bigg(G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\nu+k,k \atop 0,k,k; 3/2-j-\nu,3/2-j-\xi }\,\Big|1 \bigg)\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +G^{3,2}_{5,5} \Big({3/2-j,3/2-j; 1,\nu+k,\xi+k \atop 0,k,k; 3/2-j-\nu,3/2-j }\,\Big|1 \Big)\bigg)\\ =&\sum_{\xi=1}^\mu \frac{\Gamma(2\mu-\xi)\Gamma(\xi+j+k-3/2)}{\Gamma(\mu)\Gamma(\mu-\xi+1)\Gamma(2\mu+j+k-3/2)}\big(U^{\nu,\xi}_{j,k}+V^{\nu,\xi}_{j,k}\big)\\ =&\sum_{\xi=1}^\mu\sum_{\eta=1}^\nu \frac{\Gamma(2\mu-\xi)\Gamma(\xi+j+k-3/2)\Gamma(2\nu-\eta)\Gamma(\eta+j+k-3/2)}{\Gamma(\mu)\Gamma(\mu-\xi+1)\Gamma(2\mu+j+k-3/2)\Gamma(\nu)\Gamma(\nu-\eta+1)\Gamma(2\nu+j+k-3/2)}\\ &\times\left(K_{j,k}^{\xi,\eta}+K_{j,k}^{\eta,\xi}+\frac{\Gamma^2(j-1/2)}{\Gamma(\xi+j-1/2)\Gamma(\eta+j-1/2)}\right). \end{align*} \hfill $\square$ \begin{remark}\label{Re} We used the identities~\eqref{id1} and~\eqref{id2} along with the recurrence relation~\eqref{Grec} to obtain relevant Meijer G-functions for calculating $p_{N,N}^{P_m}$ for $m=1,2$ when the $L_i$ are even. It turns out that actually~\eqref{id1},~\eqref{id2} and~\eqref{Grec} can be used repeatedly in a systematic manner to obtain Meijer G-functions for any $m$ and even $L_i$. This is facilitated by the diagram shown in Fig.~3. \begin{figure}[ht!] 
\centering \includegraphics[width=1\linewidth]{MeijerDiagram2.pdf} \caption{Diagram showing how higher order Meijer G-functions can be obtained from the lower order ones using the recurrence~\eqref{Grec}.} \label{MGD2} \end{figure} In this diagram we have used the notation $G_{2m+1,2m+1}^{m+1,m}\Big({a_1,...,a_m \atop b_1,...,b_m }\Big)$ to represent $G^{m+1,m}_{2m+1,2m+1} \Big({3/2-j, ...\,3/2-j;\, 1, a_1,..., a_m \atop 0, k, ...,k;\, 3/2-b_1,..., 3/2-b_m}\,\Big|1\Big)$. The arrows emanating from two Meijer G-functions and terminating at a third indicate that the former two can be used in the recurrence~\eqref{Grec} to obtain the third one. The $\alpha$'s and $\beta$'s are positive integers that may serve as dummy summation variables when used in~\eqref{Grec}. Meijer G-functions with unequal sets of indices, say $G^{3,2}_{5,5}\Big({k,k\atop -j-\beta_1,-j-\beta_2}\Big)$ and $G^{2,1}_{3,3}\Big({k+\alpha_1 \atop -j-\beta_1}\Big)$, can be used in the recurrence relation by noting that the latter is the same as $G^{3,2}_{5,5}\Big({k+\alpha_1,k \atop -j-\beta_1,-j}\Big)$. We also note that the Meijer G-functions at the top are already known from~\eqref{id1} and~\eqref{id2}. Now, since the recurrence relation involves multiplication of the Meijer G's by rational coefficients, and the initial functions at the top are all rational, it follows that repeated application of the recurrence relation also produces rational numbers. Therefore the probability $p_{N,N}^{P_m}$ for any $m$ with each $L_i$ even is always a rational number.
\end{remark} \begin{remark}\label{Rf} Computer algebra, using Mathematica~\cite{Mathematica}, suggests the following results: \begin{align} \label{GEv1} G_{3,3}^{2,1}\Big({3/2-j;1,\mu +1/2+k \atop 0,k;3/2-j-1/2 }\Big|1\Big) =\frac{\Gamma (k) }{\sqrt{\pi }\, \Gamma \left(j+k+\mu -1/2\right)}\sum _{\alpha =1}^k \frac{\Gamma (\alpha +\mu ) \Gamma \left(j+k-\alpha -1/2\right)}{\Gamma \left(\alpha +\mu +1/2\right) \Gamma (k-\alpha +1)}, \end{align} \begin{align} \label{GEv2} G_{3,3}^{2,1}\Big({3/2-j;1,1/2+k \atop 0,k;3/2-j-\nu-1/2 }\Big|1\Big)&=\frac{\Gamma (k)}{\Gamma \left(\nu +1/2\right) \Gamma \left(j+k+\nu -1/2\right)} \sum _{\alpha =1}^k \frac{ \Gamma (\alpha +\nu ) \Gamma (j+k-\alpha -1/2)}{\Gamma (\alpha +1/2) \Gamma (k-\alpha +1)}. \end{align} \end{remark} It is clear that, when evaluated, these two yield $1/\pi$ times a rational number. Also, when used in a recurrence relation similar to~\eqref{Grec}, these two lead to $G_{3,3}^{2,1}\Big({3/2-j;1,\mu +1/2+k \atop 0,k;3/2-j-\nu-1/2 }\Big|1\Big)$, and hence we have for positive integer $\mu$, \begin{align} \label{GEv3} \nonumber G_{3,3}^{2,1}\Big({3/2-j;1,\mu +1/2+k \atop 0,k;3/2-j-\mu-1/2 }\Big|1\Big)=&\sum _{r=1}^{\mu } \sum _{\alpha =1}^k \frac{\Gamma (k) \Gamma (r+\alpha ) \Gamma (2 \mu -r) \Gamma \left(j+k-\alpha -1/2\right)}{\Gamma (\mu ) \Gamma (k-\alpha +1) \Gamma (\mu-r +1) \Gamma \left(j+k+2 \mu -1/2\right)}\\ &\times\bigg( \frac{1}{\sqrt{\pi }\, \Gamma \left(r+\alpha +1/2\right)}+\frac{1}{\Gamma \left(\alpha +1/2\right) \Gamma \left(r+1/2\right)}\bigg). \end{align} Clearly, this also equals $1/\pi$ times a rational number. This result can be used in the determinantal formula~\eqref{det} for $m=1$ and odd $L$. 
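The claimed arithmetic structure of \eqref{GEv1} can be probed numerically: evaluating the finite sum and multiplying by $\pi$ should land, to machine precision, on a rational number with modest denominator. A sketch (our own helper name; standard library only):

```python
from fractions import Fraction
from math import gamma, pi

def GEv1(j, k, mu):
    """Finite-sum evaluation (GEv1) of G^{2,1}_{3,3} at argument 1; helper name ours."""
    pref = gamma(k) / (pi ** 0.5 * gamma(j + k + mu - 0.5))
    return pref * sum(
        gamma(a + mu) * gamma(j + k - a - 0.5)
        / (gamma(a + mu + 0.5) * gamma(k - a + 1))
        for a in range(1, k + 1)
    )

# pi times the value should be rational; recover the fraction from the float
for (j, k, mu) in [(1, 1, 1), (1, 2, 1), (2, 2, 2)]:
    x = GEv1(j, k, mu) * pi
    print((j, k, mu), Fraction(x).limit_denominator(10 ** 6))
```

For instance $(j,k,\mu)=(1,1,1)$ returns the fraction $16/9$, consistent with the value being a rational multiple of $1/\pi$.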
It appears that $$G_{5,5}^{3,2}\Big({3/2-j,3/2-j;1,\mu +1/2+k,\nu +1/2+k \atop 0,k,k;3/2-j-\mu-1/2,3/2-j-\nu-1/2 }\Big|1\Big)~~ {\rm and} ~~ G_{5,5}^{3,2}\Big({3/2-j,3/2-j;1,\mu+k ,\nu+1/2+k \atop 0,k,k;3/2-j-\mu,3/2-j-\nu-1/2 }\Big|1\Big),$$ with the first being the analog of~\eqref{Gmunu2}, do not possess such simple arithmetic structures. For instance, we find that for $m = 2$, $L_1 = 1$, $L_2 = 2$,~\eqref{W} reduces to $w_m(x) = (2/\sqrt{\pi}) \tanh^{-1}(\sqrt{1-x^2})\Theta(1-|x|^2)$, which when used in~\eqref{alpha} leads to the Meijer G-values $\alpha_{1,2}=(20+8\mathcal{G})/(3\pi)$, $\alpha_{1,4}=(181+162\mathcal{G})/(90\pi)$, $\alpha_{3,2}=(17-6\mathcal{G})/(15\pi)$, and $\alpha_{3,4}=(1157+450\mathcal{G})/(3780\pi)$. Here $\mathcal{G}\approx 0.915966$ is Catalan's constant. Using these in~\eqref{det} gives the probability values as $p_{2,2}^{P_2}=(2\mathcal{G}+5)/(3\pi)$, $p_{3,3}^{P_2}=(38\mathcal{G}-1)/(30\pi)$, and $p_{4,4}^{P_2}=(29412\mathcal{G}^2+10612\mathcal{G}-6767)/(25200\pi^2)$. \section{Some special cases}\label{S4} \subsection*{\tikz\draw[black,fill=black] (0,0) circle (.5ex); $ \boldsymbol{L_1=\cdots=L_m=0}$} This case corresponds to the product of $m$ orthogonal matrices each of dimension $N$, which is again an $N$-dimensional orthogonal matrix. For the trivial case $N=1$,~\eqref{det} gives the probability as 1, as expected. For $N=2$, with $G^{1,0}_{1,1} \big({1 \atop 0}\,\big| 1 \big):=1/2$, as discussed near~\eqref{G1011}, we get the probability value 1/2. This is in conformity with the fact that a Haar distributed $2\times2$ orthogonal matrix is either a rotation (almost surely complex eigenvalues) or a reflection (real eigenvalues $\pm 1$), each with probability $1/2$. For $N\geq 3$, the determinant in~\eqref{det} vanishes and hence gives the probability as 0.
To summarize, the probability of all eigenvalues real for the product of $m$ $N$-dimensional orthogonal matrices is \begin{align*} p_{N,N}^{P_m}=\begin{cases} 1, & N=1,\\ 1/2, & N=2,\\ 0, & N\geq 3. \end{cases} \end{align*} \subsection*{\tikz\draw[black,fill=black] (0,0) circle (.5ex); $ \boldsymbol{ L_1>0,L_2=\cdots= L_m=0}$} This scenario gives the probability of all eigenvalues real for the product of the $N$-dimensional block of an $(L_1+N)$-dimensional orthogonal matrix and $m-1$ orthogonal matrices of dimension $N$. This probability turns out to be the same as the probability of all eigenvalues real for the $N$-dimensional block of an $(L_1+N)$-dimensional orthogonal matrix alone; i.e., taking the product with the other $m-1$ orthogonal matrices does not change the probability. This can be seen from~\eqref{W}: when only one of the $L_i$ is nonzero (here $L_1$), the weight function $w_m(x)$ reduces from $G^{m,0}_{m,m} \Big({L_1/2,...,L_m/2 \atop 0,..., 0 }\,\Big|x^2 \Big)$ to $G^{1,0}_{1,1} \Big({L_1/2 \atop 0 }\,\Big|x^2 \Big)$. Thus the probability, as given by~\eqref{pb}, becomes the same as that for the case $m=1$, and is obtained using~\eqref{F3}. We also note that if we consider $L_1=2\mu>0$ then using~\eqref{Gmunu} we have \begin{align} \nonumber G^{m+1,m}_{2m+1,2m+1} \Big({3/2-j,...,3/2-j; 1,\mu+k,k,...,k \atop 0, k,...,k; 3/2-j-\mu,3/2-j,...,3/2-j}\,\Big|1 \Big)\nonumber =G^{2,1}_{3,3} \Big({3/2-j; 1,\mu+k \atop 0,k; 3/2-j-\mu }\,\Big|1 \Big)=K^{\mu,\mu}_{j,k}, \end{align} where $K^{\mu,\mu}_{j,k}$ is given by~\eqref{Kmunu}. This result can be used in the determinantal formula~\eqref{det} to calculate the probability $p_{N,N}^{P_m}$ with $L_1=2\mu,L_2=\cdots= L_m=0$. Similarly, equations~\eqref{GEv1} and~\eqref{GEv3} can be used to calculate the probability for $L_1=2\mu+1,L_2=\cdots= L_m=0$. However, these yield the same value for $p_{N,N}^{P_1}$ as given by (\ref{F3}), as they must.
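To illustrate the consistency just noted, the following sketch (Python standard library; helper names ours) assembles the determinant formula \eqref{det} for $m=1$ and even $L_1=2\mu$, taking the $\alpha$ entries from \eqref{Kmunu} and the $\nu$ column from \eqref{nu}, and recovers the values of the product formula \eqref{F3}.

```python
from math import gamma

def K(mu, nu, j, k):
    """Closed form (Kmunu); helper name ours."""
    pref = gamma(j - 0.5) / (gamma(mu) * gamma(mu + nu + j + k - 1.5))
    return pref * sum(
        gamma(j + k + r - 1.5) * gamma(mu + nu - r)
        / (gamma(j + r - 0.5) * gamma(nu - r + 1))
        for r in range(1, nu + 1)
    )

def det(M):
    # Laplace expansion along the first row; fine for the small matrices used here
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(n))

def p_det(N, L1):
    """Determinant formula (det) for m = 1 and even L1 = 2*mu."""
    mu = L1 // 2
    pref = 1.0
    for s in range(N):
        pref *= gamma((L1 + 1 + s) / 2) / gamma((s + 1) / 2)
    if N % 2 == 0:
        A = [[K(mu, mu, j, k) for k in range(1, N // 2 + 1)]
             for j in range(1, N // 2 + 1)]
    else:  # N odd: append the nu-column nu_{2j-1} = Gamma(j-1/2)/Gamma(mu+j-1/2)
        A = [[K(mu, mu, j, k) for k in range(1, (N - 1) // 2 + 1)]
             + [gamma(j - 0.5) / gamma(mu + j - 0.5)]
             for j in range(1, (N + 1) // 2 + 1)]
    return pref * det(A)

print(p_det(2, 2))  # 2/3, agreeing with the product formula (F3)
print(p_det(3, 2))  # 4/15, again agreeing with (F3)
```

Both printed values coincide with direct evaluation of \eqref{F3}, as the text asserts they must.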
\subsection*{\tikz\draw[black,fill=black] (0,0) circle (.5ex); $ \boldsymbol{ L_{1}>0, L_{2}>0,L_3=\cdots=L_{m}=0}$} Here we consider the case when all but two of the $L_i$ are zero, the nonzero pair being $L_1$ and $L_2$. Applying reasoning similar to that in the preceding case, we find that the probability $p_{N,N}^{P_m}$ in the present scenario is the same as the probability for $m=2$, i.e., the probability $p_{N,N}^{P_2}$ of all eigenvalues real for the product of $N$-dimensional blocks of orthogonal matrices of dimensions $L_1+N$ and $L_2+N$, respectively. In this case we do need to calculate the determinant, and therefore need the values of the Meijer G-functions. Equation~\eqref{Gmunu2} can be used to obtain the explicit answers when the $L_i$ are even: $L_1=2\mu, L_2=2\nu$. As is clear from the form of~\eqref{Gmunu2}, these probabilities are rational numbers. When one of $L_1, L_2$ is odd, or both are odd, such a simple arithmetic structure does not seem to exist, as already indicated in Remark~\ref{Rf}. In Table~\ref{meq2} we present probability values for various combinations of $N$ and even $L_1, L_2$, giving the explicit rational numbers. \begin{table}[h!] \renewcommand{\arraystretch}{1.5} \caption{Exact values and numerical values (6 significant digits) for some probabilities $p_{N,N}^{P_2}$.
} \centering {\small \begin{tabular}{|c|c|c|c|c| } \hline \multirow{2}{*}{$N$} & \multirow{2}{*}{$L_1$} & \multirow{2}{*}{$L_2$} & \multicolumn{2}{|c|}{$p_{N,N}^{P_2}$} \\ \cline{4-5} & & & Exact & Numerical value \\ \hline\hline 2 (3) & 2 & 2 &$\frac{20}{27}$ ($\frac{1312}{3375}$) & $0.740741$ ($0.388741$)\\ \hline 2 (3) & 2 & 4 & $\frac{1184}{1575}$ ($\frac{4544}{11025}$)& $0.751746$ ($0.412154$)\\ \hline 2 (3)& 2 & 6 & $\frac{6112}{8085}$ ($\frac{665216}{1576575}$)& $0.755968$ ($0.421937$)\\ \hline 2 (3)& 4 & 4 & $\frac{97984}{128625}$ ($\frac{1504768}{3472875}$)& $0.761780 $ ($0.433292$)\\ \hline 2 (3)& 4 & 6 & $\frac{649984}{848925}$ ($\frac{161046016}{364188825}$)& $0.765655$ ($0.442205$)\\ \hline \end{tabular}} \label{meq2} \end{table}
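The entries of Table \ref{meq2} can be reproduced programmatically from \eqref{Kmunu}, \eqref{Gmunu2}, \eqref{nu} and \eqref{det}. The sketch below (helper names ours; standard library only) does this for $N=2,3$ and even $L_1, L_2$.

```python
from math import gamma

def Kf(mu, nu, j, k):
    """Closed form (Kmunu); helper name ours."""
    pref = gamma(j - 0.5) / (gamma(mu) * gamma(mu + nu + j + k - 1.5))
    return pref * sum(
        gamma(j + k + r - 1.5) * gamma(mu + nu - r)
        / (gamma(j + r - 0.5) * gamma(nu - r + 1))
        for r in range(1, nu + 1)
    )

def alpha(mu, nu, j, k):
    """Double-sum evaluation (Gmunu2) of alpha_{2j-1,2k} for m = 2, L1 = 2*mu, L2 = 2*nu."""
    total = 0.0
    for xi in range(1, mu + 1):
        for eta in range(1, nu + 1):
            c = (gamma(2 * mu - xi) * gamma(xi + j + k - 1.5)
                 * gamma(2 * nu - eta) * gamma(eta + j + k - 1.5)
                 / (gamma(mu) * gamma(mu - xi + 1) * gamma(2 * mu + j + k - 1.5)
                    * gamma(nu) * gamma(nu - eta + 1) * gamma(2 * nu + j + k - 1.5)))
            total += c * (Kf(xi, eta, j, k) + Kf(eta, xi, j, k)
                          + gamma(j - 0.5) ** 2
                          / (gamma(xi + j - 0.5) * gamma(eta + j - 0.5)))
    return total

def p_m2(N, L1, L2):
    """Probability p_{N,N}^{P_2} via (det); even L1, L2; N = 2 or 3 here."""
    mu, nu = L1 // 2, L2 // 2
    pref = 1.0
    for L in (L1, L2):
        for s in range(N):
            pref *= gamma((L + 1 + s) / 2) / gamma((s + 1) / 2)
    if N == 2:
        d = alpha(mu, nu, 1, 1)
    else:  # N == 3: 2 x 2 determinant with the nu-column from (nu)
        nu1 = gamma(0.5) ** 2 / (gamma(mu + 0.5) * gamma(nu + 0.5))
        nu3 = gamma(1.5) ** 2 / (gamma(mu + 1.5) * gamma(nu + 1.5))
        d = alpha(mu, nu, 1, 1) * nu3 - nu1 * alpha(mu, nu, 2, 1)
    return pref * d

print(p_m2(2, 2, 2))  # 20/27 = 0.740740..., first row of Table 1
print(p_m2(3, 2, 2))  # 1312/3375 = 0.388740..., first row of Table 1, N = 3
```

The same routine reproduces the remaining rows of the table, e.g. $p_{2,2}^{P_2} = 1184/1575$ at $L_1=2$, $L_2=4$.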
https://arxiv.org/abs/1606.03670
The Probability That All Eigenvalues are Real for Products of Truncated Real Orthogonal Random Matrices
The probability that all eigenvalues of a product of $m$ independent $N \times N$ sub-blocks of a Haar distributed random real orthogonal matrix of size $(L_i+N) \times (L_i+N)$, $(i=1,\dots,m)$ are real is calculated as a multi-dimensional integral, and as a determinant. Both involve Meijer G-functions. Evaluation formulae of the latter, based on a recursive scheme, allow it to be proved that for any $m$ and with each $L_i$ even the probability is a rational number. The formulae furthermore provide for explicit computation in small order cases.
https://arxiv.org/abs/1801.03741
Double asymptotic for random walks on hypercubes
We consider the sum of the coordinates of a simple random walk on the K-dimensional hypercube, and prove a double asymptotic of this process, as both the time parameter n and the space parameter K tend to infinity. Depending on the asymptotic ratio of the two parameters, they converge towards either a Brownian motion, an Ornstein-Uhlenbeck process or an i.i.d. collection of Gaussian variables.
\section{Introduction} Many results (like the Law of Large Numbers or the Central Limit Theorem) are already known for the asymptotic behavior in time of an additive functional of a Markov chain (see for instance \cite{MT09}). But the case where we consider a sequence of such processes is only partially studied. Here we address the problem of a double asymptotic as both the time and the index in the sequence tend to infinity. For instance, a well understood case is the discretization of a diffusion process: as we consider larger time horizons and finer meshes, the discrete processes converge to the continuous diffusion they come from. Actually this paper was initially motivated by the study of a constrained random walk introduced in \cite{BCEN15}, where an additive observable of a simple random walk on a graph $G_K$ whose vertices are $\lbrace -1,1\rbrace^K$ is described. The authors used a discrete Hodge decomposition to rewrite their observable as the sum of a divergence-free vector field and a bounded gradient vector field, and then proved that for every $K$ the rescaled constrained random walk converges in time to a Brownian motion with variance $\sigma_K^2=\frac{2}{K+2}$. A natural generalization of this result would be to let $K$ grow to $+\infty$, but the diffusivity tends to $0$ as $K$ grows, which means that the normalization $\sqrt{n}$ used in \cite{BCEN15} is too strong to get a non-trivial limit in this case. Moreover the gradient part was neglected, since it is bounded when $K$ is fixed. In fact it is a function of $K$, and when $K$ tends to infinity it is no longer obvious that it can be neglected. In our setting we are dealing with a simplified version of this model. After removing some edges from the graphs $G_K$, the additive observable corresponds to a pure gradient term. Hence in this model we have $\sigma_K$ vanishing for every $K$. Moreover this toy model is more amenable to computations, since the dependence on $K$ of the gradient term is quite simple.
Even though the diffusivity is zero, we obtain convergence to Gaussian processes when both $n$ and $K$ tend to $+\infty$. Surprisingly, the correct normalization and the limiting process both depend on the asymptotic behavior of the ratio of the two parameters. Indeed, if the limit of $\frac{n}{K}$ is a positive constant, the rescaled process converges to an Ornstein-Uhlenbeck process. If the ratio tends to $+\infty$ the limiting process is a Gaussian white noise (i.e. a collection of i.i.d. Gaussian random variables). Finally, if $\frac{n}{K}$ tends to $0$ the initial value may diverge (for instance in the stationary case); hence we first prove convergence to a Brownian motion after subtracting the value of the process at time $0$, and then obtain a ``stationary Brownian motion'' (i.e. some weak form of Brownian motion starting from the Lebesgue measure) as a singular limit. \section{The model} Let the graph $H_K=(V_K,E_K)$ be the $K$-dimensional hypercube, more precisely: \[V_K=\lbrace -1,+1\rbrace ^K \text{ and } E_K=\left\lbrace \lbrace u, u'_i\rbrace : u\in V_K, i\in [K]\right\rbrace\] with $u'_i=(u^{(1)},\dots, u^{(i-1)},-u^{(i)},u^{(i+1)},\dots,u^{(K)})$ and $[K]\overset{def}{=}\llbracket 1,K\rrbracket$. \medskip We define $\left(Y_K(n)\right)_{n\geq 0}$ as the simple random walk on $H_K$ starting from a law $\mu_K$ on $V_K$.
Let $f_K:V_K\rightarrow \RR$ be the function giving the sum of the coordinates in $H_K$, namely: \[\forall \ v\in V_K : \ f_K(v)=\sum_{i=1}^K v^{(i)}.\] \bigskip We are interested in the behavior of $f_K(Y_K(n))$, and more specifically we want to derive a scaling limit, as $n$ and $K$ both tend to infinity, of the processes defined by: \[X_{n,K}(t)=f_K(Y_K(\lfloor nt\rfloor)), \ \ t\geq 0.\] In other words we want to find some normalization $c_{n,K}$ and a random process $\left( X_t\right)_{t\geq 0}$ such that: \[\left( \frac{X_{n,K}(t)}{c_{n,K}}\right)_{t\geq 0} \underset{n,K\to\infty}{\longrightarrow} \left(X_t\right)_{t\geq 0}.\] The mode of convergence will be either convergence in distribution in the set of càdlàg functions $D(\RR_+,\RR)$ (endowed with the Skorokhod topology, as in \cite{EK91}), or convergence of the finite-dimensional marginals (weakly or vaguely). In the next sections $X\overset{\mathcal{D}}{=}Y$ will mean that the random variables or processes $X$ and $Y$ have the same probability law, and we will consider $K$ as a function of $n$ (but we will keep writing $K$ instead of $K(n)$ to lighten notation). \bigskip We begin with the intermediate regime (both parameters grow at comparable speeds) using diffusion approximation results from \cite{EK91}. Then in the fast regime ($n$ growing faster than $K$) the processes no longer converge to a diffusion process, so we use an ersatz of Donsker's theorem (proven in the appendix) to establish convergence of the finite-dimensional laws. Lastly, in the slow regime ($n$ grows slower than $K$) we state a convergence in law of the increments of the processes, before proving a vague convergence to a Brownian motion starting from its invariant measure (i.e. the Lebesgue measure), using the results from the intermediate regime.
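Although the paper works entirely analytically, the model is straightforward to simulate; the following Python sketch (our illustration, with hypothetical helper names, not part of the paper) generates the walk $Y_K$ and records the observable $f_K(Y_K(i))$ along the way.

```python
import random

def simulate_observable(K, n_steps, seed=0):
    """Simulate the simple random walk Y_K on the hypercube {-1,+1}^K,
    started from the uniform law, and record f_K(Y_K(i)) at every step.
    At each step one coordinate, drawn uniformly in [K], is flipped."""
    rng = random.Random(seed)
    Y = [rng.choice((-1, 1)) for _ in range(K)]
    path = [sum(Y)]
    for _ in range(n_steps):
        i = rng.randrange(K)   # the drawn coordinate
        Y[i] = -Y[i]           # flipping it changes f_K by -2 * (old value)
        path.append(path[-1] + 2 * Y[i])
    return path

path = simulate_observable(K=50, n_steps=200)
```

Each step changes $f_K$ by $\pm 2$, so the path stays in $\llbracket -K,K\rrbracket$ and keeps the parity of its starting value; the process $X_{n,K}$ is obtained by reading this path at times $\lfloor nt\rfloor$.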
\bigskip One can remark that $X_{n,K}$ is an affine transformation of an Ehrenfest urn, which explains the convergence to an Ornstein-Uhlenbeck process in the intermediate regime (see for instance \cite{CM16} for a proof of this scaling limit). \section{Intermediate regime} Consider the sequences of random variables defined by: \begin{equation*} Z_{n,K}(i) \overset{def}{=} \frac{f_K(Y_K(i))}{c_{n,K}} \end{equation*} for every $i\in\NN$. In this section we will assume that $K$ and $n$ grow at comparable speeds, namely that there exists $\lambda >0$ such that: \[\frac{n}{K}\underset{n\to\infty}{\longrightarrow}\lambda .\] One can check that for every $n\geq 1$ the sequence $\left(Z_{n,K}(i)\right)_{i\in\NN}$ is a homogeneous Markov chain with values in: \[\mathcal{S}_n=\left\lbrace \frac{2k-K}{c_{n,K}}:k\in\llbracket 0,K\rrbracket\right\rbrace\] whose transition kernel is given for every $x\in\mathcal{S}_n$ by: \[\PP_n(x,.)=\left(\frac{1}{2}+\frac{xc_{n,K}}{2K}\right)\delta_{x-\frac{2}{c_{n,K}}} + \left(\frac{1}{2}-\frac{xc_{n,K}}{2K}\right)\delta_{x+\frac{2}{c_{n,K}}}.\] Following the same kind of proof as Example 27.8 in \cite{CM16}, we only have to compute the two following functions: \begin{align*} b_n(x) &\overset{def}{=}n\int_{\vert y-x\vert\leq 1}(y-x)\ \PP_n(x,dy) = -2x\frac{n}{K} \\ a_n(x) &\overset{def}{=}n\int_{\vert y-x\vert\leq 1}(y-x)^2\ \PP_n(x,dy) = 4\frac{n}{c_{n,K}^2}.
\end{align*} We will mostly use the following result, which provides convergence to a diffusion process for homogeneous Markov chains: \begin{proposition}\label{EK} Suppose there exist a random variable $X_0$ and two continuous functions $b:\RR\longrightarrow\RR$ and $a:\RR\longrightarrow\RR_+$ such that all the following properties hold for any $r,\varepsilon >0$: \begin{align*} \sup_{\vert x\vert\leq r}\left\vert a_n(x)-a(x)\right\vert\underset{n\to\infty}{\longrightarrow}0,\\ \sup_{\vert x\vert\leq r}\left\vert b_n(x)-b(x)\right\vert\underset{n\to\infty}{\longrightarrow}0,\\ \sup_{\vert x\vert\leq r}n\times\PP_n(x,[x-\varepsilon,x+\varepsilon]^c)\underset{n\to\infty}{\longrightarrow}0,\\ \text{ and }\hspace{1cm} Z_{n,K}(0)\overset{\mathcal{D}}{\underset{n\to\infty}{\longrightarrow}}X_0. \end{align*} Then we get the following convergence of processes: \begin{equation*} \left(Z_{n,K}(\lfloor nt\rfloor)\right)_{t\geq 0}\overset{\mathcal{D}}{\underset{n\to\infty}{\longrightarrow}}\left(X_t\right)_{t\geq 0} \end{equation*} where $\left(X_t\right)_{t\geq 0}$ is the diffusion starting from $X_0$ and solving: \[ dX_t=b(X_t)dt+\sqrt{a(X_t)}dW_t \] with $(W_t)_{t\geq 0}$ a standard Brownian motion starting from $0$. \end{proposition} \begin{proof} This result is an adapted version of Corollary 4.2 (p.~355) in \cite{EK91} on diffusion approximation.
\end{proof} Once applied to the processes $Z_{n,K}$, it yields the following result: \begin{theorem}\label{RegimInter} Under the following assumptions: \[\frac{n}{c_{n,K}^2}\underset{n\to\infty}{\longrightarrow}\sigma^2\geq 0, \hspace{1cm} \frac{n}{K}\underset{n\to\infty}{\longrightarrow}\lambda\geq 0 \hspace{0.5cm} \text{ and } \hspace{0.5cm} Z_{n,K}(0)\overset{\mathcal{D}}{\underset{n\to\infty}{\longrightarrow}} Z_0\] we have the convergence in distribution of the random processes: \[\left(Z_{n,K}(\lfloor nt\rfloor)\right)_{t\geq 0}\overset{\mathcal{D}}{\underset{n\to\infty}{\longrightarrow}}\left(O_{\lambda,\sigma} (t)\right)_{t\geq 0}\] where $\left(O_{\lambda,\sigma}(t)\right)_{t\geq 0}$ denotes the diffusion process solving: \[\left\lbrace \begin{array}{l} dO_{\lambda,\sigma}(t)=-2\lambda O_{\lambda,\sigma}(t)dt+2\sigma dW_t \\ O_{\lambda,\sigma}(0)\overset{\mathcal{D}}{=} Z_0. \end{array} \right. \] \end{theorem} \begin{proof} We simply apply Proposition \ref{EK}: $a_n(x)=4\frac{n}{c_{n,K}^2}$ converges uniformly to $a(x)=4\sigma^2$, $b_n(x)=-2x\frac{n}{K}$ converges locally uniformly to $b(x)=-2x\lambda$ and: \[n\PP_n(x,[x-\varepsilon,x+\varepsilon]^c) = n\II_{\frac{2}{c_{n,K}}>\varepsilon}\] which tends uniformly to $0$, since the condition $\frac{n}{c_{n,K}^2}\underset{n\to\infty}{\longrightarrow}\sigma^2$ guarantees that $c_{n,K}$ tends to infinity. \end{proof} \begin{Remarque} We allow $\lambda = 0$ in Theorem \ref{RegimInter} because we will use this specific case later to prove Theorem \ref{TheorRegimLent} in the slow regime. But keep in mind that the intermediate regime itself corresponds to positive values of $\lambda$. \end{Remarque} We can distinguish two different cases for the intermediate regime in the previous theorem: \begin{corollary} \begin{itemize} \item[$\bullet$]If $\sigma^2>0$ (i.e. $c_{n,K}$ is equivalent to $\frac{\sqrt{n}}{\sigma}$) then the limit diffusion is an Ornstein-Uhlenbeck process.
Moreover, if $Y_K(0)$ is uniformly distributed, $Z_{n,K}(0)$ converges in law to a $\mathcal{N}(0,\frac{\sigma^2}{\lambda})$ random variable, making the process $O_{\lambda, \sigma}$ temporally stationary. \item[$\bullet$]If $\sigma=0$ (i.e. $c_{n,K}$ grows faster than $\sqrt{n}$) the limit process is the (random) function $Z_0e^{-2\lambda t}$. Note that if $c_{n,K}$ grows faster than $K$, or if $Y_K(0)$ is uniform over $V_K$, then the limit process is constantly equal to $0$. \end{itemize} \end{corollary} \begin{Exemple} If there exists $C\in [-1,1]$ such that $\frac{f_K(Y_K(0))}{K}\overset{\PP}{\underset{K\to\infty}{\longrightarrow}}C$, then we may set $c_{n,K}=K$ to check the assumptions of Theorem \ref{RegimInter}, and the limit is the deterministic function $t\mapsto Ce^{-2\lambda t}$. \end{Exemple} \section{Fast regime} In this section we consider the regime $\frac{n}{K}\to\infty$, and we will always assume that $\mu_K$ is the uniform distribution on $V_K$. Theorem \ref{RegimInter} suggests that the correct normalization is of order $\sqrt{K}$, but the limit process will no longer be a diffusion. Since we cannot use the diffusion approximation theorems from \cite{EK91}, we need to take a closer look at the finite-dimensional laws of the processes. Let us define $\mathbf{t}_n=(t_1(n),\dots,t_s(n))$ such that \begin{equation}\label{HypotTempsAleat} \frac{n}{K}\left(t_{j+1}(n)-t_j(n)\right)\underset{n,K\to\infty}{\longrightarrow}+\infty \ \forall\ 1\leq j\leq s-1 \end{equation} and set the following notation: \[X_{n,K}(\mathbf{t}_n)=\left(X_{n,K}(t_1(n)),\dots,X_{n,K}(t_s(n)) \right).\] Our goal is to show that: \begin{equation}\label{EquatBut} \frac{X_{n,K}(\mathbf{t}_n)}{\sqrt{K}}=\mathbf{F}\left(\left(\frac{1}{\sqrt{K}}\sum_{i\in B_k}\xi_i\right)_{k\in [N]}\right) \end{equation} for some linear functional $\mathbf{F}$, where $\left(\xi_i\right)_{i\in\NN}$ is an i.i.d.
sequence of Rademacher random variables, and $\left(B_k\right)_{k\in [N]}$ is a random partition of $[K]$ (independent of the $\xi_i$). \medskip Since $X_{n,K}(t)=f_K(Y_K(\lfloor nt\rfloor))$, we will rewrite $f_K(Y_K(n))$ in a more suitable form: \begin{proposition}\label{PropoChangNotat} Let $\left(\xi_k\right)_{k\in\NN}$ be i.i.d. Rademacher random variables and let $\left(U_k^{(K)}\right)_{k\in\NN}$ be i.i.d. uniform random variables on $[K]$, independent of the $\xi_k$. Then: \[f_K(Y_K(n))\overset{\mathcal{D}}{=}\sum_{i=1}^K\xi_i\left(\II_{i\notin O_0^n}- \II_{i\in O_0^n}\right)\] where $O_i^j\overset{def}{=}\lbrace k\in [K] : \vert \lbrace i<l\leq j : U_l^{(K)}=k\rbrace\vert \text{ is odd}\rbrace$. \end{proposition} \begin{proof} We use the simple fact that for every integer $n\geq 0$: \[f_K(Y_K(n+1))=f_K(Y_K(n))-2Y_K^{(U_n^{(K)})}(n).\] By induction we get: \[f_K(Y_K(n))=f_K(Y_K(0))-2\sum_{i=1}^K Y_K^{(i)}(0)\II_{\epsilon_i(n)\text{ odd}}= \sum_{i=1}^K (-1)^{\epsilon_i(n)} Y_K^{(i)}(0)\] with $\epsilon_i(n)\overset{def}{=}\vert\lbrace k\in\llbracket 0,n-1\rrbracket : U_k^{(K)}=i\rbrace\vert$. Since only the parity of $\epsilon_i(n)$ impacts the value of $f_K(Y_K(n))$, we get the expected result by writing $\xi_i$ for the value of $Y_K^{(i)}(0)$. \end{proof} Now that the $\left( \xi_i\right)_{i\in\NN}$ appear in the value of $X_{n,K}(\mathbf{t}_n)$, we want to construct a suitable partition in order to get equation \eqref{EquatBut}. Since we are only interested in the coordinates which have been drawn an odd number of times between $\lfloor nt_{i-1}(n)\rfloor$ and $\lfloor nt_i(n)\rfloor$, we define the following sets: \begin{equation}\label{DefinB(J)} \forall \ J\subset [s], \ B(J)\overset{def}{=}\left(\bigcap_{k\in J}O_{\lfloor nt_{k-1}(n)\rfloor}^{\lfloor nt_k(n)\rfloor}\right)\bigcap\left(\bigcap_{k\notin J}\left(O_{\lfloor nt_{k-1}(n)\rfloor}^{\lfloor nt_k(n)\rfloor}\right)^c\right) \end{equation} (with the convention $t_0(n)=0$).
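The coupling behind Proposition \ref{PropoChangNotat} can be checked pathwise: writing $\xi_i$ for the initial coordinates and counting how many times each coordinate is drawn, the rewriting of $f_K(Y_K(n))$ holds exactly along every trajectory. A small numerical verification (our sketch, not part of the paper):

```python
import random

rng = random.Random(0)
K, n = 25, 300
xi = [rng.choice((-1, 1)) for _ in range(K)]   # xi_i = Y_K^{(i)}(0)
U = [rng.randrange(K) for _ in range(n)]       # coordinate draws U_0, ..., U_{n-1}

# Direct computation: run the walk, flipping the drawn coordinate each step.
Y = xi[:]
for u in U:
    Y[u] = -Y[u]
direct = sum(Y)                                # f_K(Y_K(n))

# Rewritten form: O_0^n is the set of coordinates drawn an odd number of times.
counts = [0] * K
for u in U:
    counts[u] += 1
O = {i for i in range(K) if counts[i] % 2 == 1}
rewritten = sum(-xi[i] if i in O else xi[i] for i in range(K))
```

The two computations agree because $Y_K^{(i)}(n)=\xi_i(-1)^{\epsilon_i(n)}$, which is exactly the induction in the proof above.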
To lighten notation, we will write for every $k\in [s]$: \[O_k(J)\overset{def}{=}\left\lbrace\begin{array}{ll}\phantom{(} O_{\lfloor nt_{k-1}(n)\rfloor}^{\lfloor nt_k(n)\rfloor}\phantom{)} & \text{if } k\in J, \\ \left(O_{\lfloor nt_{k-1}(n)\rfloor}^{\lfloor nt_k(n)\rfloor}\right)^c & \text{else.}\end{array}\right.\] Thus with this new notation we have: \begin{equation*} B(J) = \bigcap_{k=1}^s O_k(J). \end{equation*} In fact, if we cut the integer interval $\llbracket 1,\lfloor nt_s(n)\rfloor\rrbracket$ into the $s$ intervals of the form $\llbracket \lfloor nt_{i-1}(n)\rfloor+1,\lfloor nt_i(n)\rfloor\rrbracket$, then the set $B(J)$ contains all coordinates which have been drawn an odd number of times on the $j$-th interval for all $j\in J$ and an even number of times on the $j$-th interval for all $j\notin J$. For example, $B(\emptyset)$ is the set of coordinates which have been drawn an even number of times on each $\llbracket \lfloor nt_{i-1}(n)\rfloor+1,\lfloor nt_i(n)\rfloor\rrbracket$. By construction, one can see that $\left\lbrace B(J):J\subset [s]\right\rbrace$ is a partition of $[K]$. Moreover we can see every set $O_0^{\lfloor nt_j(n)\rfloor}$ for $j\in [s]$ as a (disjoint) union of some $B(J)$: the coordinates which have been drawn an odd number of times between $0$ and $\lfloor nt_j(n)\rfloor$ are exactly the ones which have been drawn an odd number of times in an odd number of the intervals preceding $\lfloor nt_j(n)\rfloor$, i.e.: \begin{equation}\label{EquatUnionBJO} O_0^{\lfloor nt_j(n)\rfloor}=\bigsqcup_{\substack{J\subset [s] \\ \vert J\cap [j]\vert \text{ odd}}}B(J).
\end{equation} Combining \eqref{EquatUnionBJO} with Proposition \ref{PropoChangNotat}, we get the following: \begin{align}\label{EquatFinal} X_{n,K}(\mathbf{t}_n)&\overset{\mathcal{D}}{=}\left(\sum_{i\in [K]}\left(\II_{i\notin O_0^{\lfloor nt_j(n)\rfloor}}-\II_{i\in O_0^{\lfloor nt_j(n)\rfloor}}\right)\xi_i\right)_{j\in [s]}\nonumber \\ &=\left(\sum_{J\subset [s]}(-1)^{\vert J\cap [j]\vert}\sum_{i\in B(J)}\xi_i\right)_{j\in [s]}. \end{align} \setcounter{truc}{\value{lemma}} We now state a lemma (whose proof is in the appendix), which is an ersatz of Donsker's theorem with converging random time vectors: \begin{lemma}\label{TheorDonskAleat} Let $\left(\xi_i\right)_{i\in\NN}$ be i.i.d. Rademacher random variables and $\left(\mathbf{t}_K\right)_{K\geq 1}$ a sequence of random vectors in $\left(\RR_+\right)^k$ independent of $\left(\xi_i\right)_{i\in\NN}$. Define for all $K\geq 1$ and $\mathbf{s}=(s_1,\dots,s_k)\in\left(\RR_+\right)^k$: \[T_K(\mathbf{s})=\left(\frac{1}{\sqrt{K}}\sum_{i=1}^{\lfloor Ks_j\rfloor}\xi_i\right)_{j\in [k]}.\] If there exists a deterministic $\mathbf{t}=(t_1,\dots,t_k)\in\left(\RR_+\right)^k$ such that $\mathbf{t}_K\overset{\PP}{\underset{K\to\infty}{\longrightarrow}}\mathbf{t}$, then: \[T_K(\mathbf{t}_K)\overset{\mathcal{D}}{\underset{K\to\infty}{\longrightarrow}}\left(W_{t_j}\right)_{j\in [k]}\] where $\left(W_s\right)_{s\in\RR_+}$ denotes a standard Brownian motion starting from $0$. \end{lemma} To apply Lemma \ref{TheorDonskAleat}, we need to reorder the $\xi_i$, and also to prove that the ratios $\frac{\vert B(J)\vert}{K}$ converge to constants.
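Under the same coupling, equation \eqref{EquatFinal} is again an exact pathwise identity: the partition $\lbrace B(J)\rbrace_{J\subset[s]}$ can be computed from the interval draw counts, and the signed block sums reproduce the values $f_K(Y_K(\lfloor nt_j\rfloor))$. A sketch of this check (our code; the variable names are hypothetical):

```python
import random
from itertools import chain, combinations

rng = random.Random(1)
K, s = 20, 3
times = [0, 50, 120, 200]                      # step indices 0 = t_0 < ... < t_s
xi = [rng.choice((-1, 1)) for _ in range(K)]   # xi_i = Y_K^{(i)}(0)
draws = [rng.randrange(K) for _ in range(times[-1])]

# Direct computation: run the walk and read f_K at the prescribed times.
Y, direct = xi[:], []
for step, u in enumerate(draws, start=1):
    Y[u] = -Y[u]
    if step in times[1:]:
        direct.append(sum(Y))

# Odd-count sets on each interval, then the partition {B(J) : J subset of [s]}.
def odd_set(a, b):
    counts = [0] * K
    for u in draws[a:b]:
        counts[u] += 1
    return {i for i in range(K) if counts[i] % 2 == 1}

O = [odd_set(times[k], times[k + 1]) for k in range(s)]
subsets = list(chain.from_iterable(combinations(range(s), r) for r in range(s + 1)))
B = {J: [i for i in range(K) if all((i in O[k]) == (k in J) for k in range(s))]
     for J in map(frozenset, subsets)}

# Signed block sums, as on the right-hand side of the identity (0-indexed intervals).
rewritten = [sum((-1) ** len(J & set(range(j))) * sum(xi[i] for i in B[J])
                 for J in B) for j in range(1, s + 1)]
```

The blocks $B(J)$ form a partition of $[K]$ by construction, which is what allows the reordering of the $\xi_i$ used below.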
\begin{lemma}\label{LemmeConveCardi} For every $s\geq 1$, under assumption \eqref{HypotTempsAleat} we have: \[\forall \ J\subset [s], \ \frac{\vert B(J)\vert}{K}\overset{\LL^2}{\underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow}}\frac{1}{2^s}.\] \end{lemma} \begin{proof} We compute the mean and the variance of $\frac{\vert B(J)\vert}{K}$ and show that they converge to $2^{-s}$ and $0$ respectively. First we use the fact that the coordinates are exchangeable, namely: \begin{equation}\label{EquatInterProba} \forall \ i\in [K], \ \PP(i\in O_k(J))=\PP(1\in O_k(J))=\frac{1}{K}\sum_{j=1}^K\PP(j\in O_k(J)). \end{equation} Then, since the sets $(O_k(J))_{k\in [s]}$ are independent, we get: \begin{align*} \EE\left[\vert B(J)\vert\right] &= \EE\left[\sum_{i=1}^K\prod_{k=1}^s \II_{i\in O_k(J)}\right] = \sum_{i=1}^K\prod_{k=1}^s \PP(i\in O_k(J)) \\ &= \sum_{i=1}^K\prod_{k=1}^s \frac{1}{K}\sum_{j=1}^K \PP(j\in O_k(J)) = \frac{1}{K^{s-1}}\prod_{k=1}^s \EE\left[\vert O_k(J)\vert\right]. \end{align*} In particular: \begin{equation}\label{EquatEsperBJ} \EE\left[ \frac{\vert B(J)\vert}{K}\right]=\prod_{k=1}^s \EE\left[\frac{\vert O_k(J)\vert}{K}\right].
\end{equation} Next we use Lemma \ref{LemmeConveEhren} (stated and proved below) to get: \[\EE\left[\frac{\vert O_k(J)\vert}{K}\right] \underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow} \frac{1}{2}.\] Hence: \[\EE\left[\frac{\vert B(J)\vert}{K}\right] \underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow} \left(\frac{1}{2}\right)^s.\] Now we compute the variance using \eqref{EquatInterProba} and \eqref{EquatEsperBJ}: \begin{align*} \VV\left[\frac{\vert B(J)\vert}{K}\right] &=\EE\left[\left(\frac{\vert B(J)\vert}{K}\right)^2\right] - \EE\left[\frac{\vert B(J)\vert}{K}\right]^2 \\ & = \EE\left[\frac{1}{K^2}\sum_{1\leq i,j\leq K}\II_{i\in B(J)}\II_{j\in B(J)}\right] - \EE\left[\frac{\vert B(J)\vert}{K}\right]^2 \\ & = \frac{1}{K^2}\sum_{1\leq i,j\leq K}\prod_{k=1}^s\PP(i\in O_k(J),j\in O_k(J)) \ - \EE\left[\frac{\vert B(J)\vert}{K}\right]^2 \\ &= \frac{1}{K^2}\sum_{1\leq i\leq K}\prod_{k=1}^s\PP(1\in O_k(J)) \\ &\hspace{0.3cm} +\ \ \frac{1}{K^2}\sum_{\substack{1\leq i,j\leq K\\i\neq j}}\prod_{k=1}^s\PP(1\in O_k(J),2\in O_k(J)) \ \ - \EE\left[\frac{\vert B(J)\vert}{K}\right]^2 \\ & = \frac{1}{K}\prod_{k=1}^s \EE\left[\frac{\vert O_k(J)\vert}{K}\right] \\ &\hspace{0.3cm} +\ \ \frac{K-1}{K}\prod_{k=1}^s\frac{1}{K(K-1)}\sum_{\substack{1\leq i,j\leq K\\i\neq j}}\EE\left[\II_{i\in O_k(J)}\II_{j\in O_k(J)}\right] \ \ - \EE\left[\frac{\vert B(J)\vert}{K}\right]^2 \\ & = \frac{1}{K}\EE\left[\frac{\vert B(J)\vert}{K}\right] \ \ +\ \ \frac{K-1}{K}\prod_{k=1}^s\EE\left[\frac{\vert O_k(J)\vert (\vert O_k(J)\vert -1)}{K(K-1)}\right]-\EE\left[\frac{\vert B(J)\vert}{K}\right]^2. \end{align*} The first term obviously tends to $0$, and the subtracted term tends to $\frac{1}{4^s}$ by the convergence of $\EE\left[\frac{\vert B(J)\vert}{K}\right]$ established above, so we just have to rewrite the middle one in a more suitable way: \begin{align*} \frac{K-1}{K}\prod_{k=1}^s\EE\left[\frac{\vert O_k(J)\vert (\vert O_k(J)\vert -1)}{K(K-1)}\right] =& \\ &\hspace{-2.4cm} \frac{K-1}{K}\prod_{k=1}^s \frac{K}{K-1}\left(\EE\left[\left(\frac{\vert O_k(J)\vert}{K}\right)^2\right]-\EE\left[\frac{\vert O_k(J)\vert}{K^2}\right]\right).
\end{align*} Using Lemma \ref{LemmeConveEhren}, we get for every $k\in [s]$: \begin{align*} & \EE\left[\frac{\vert O_k(J)\vert}{K^2}\right] \underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow} 0, \hspace{2cm} \ \EE\left[\left(\frac{\vert O_k(J)\vert}{K}\right)^2\right] \underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow} \frac{1}{4}, \\ & \ \ \ \ \text{and finally }\ \VV\left[\frac{\vert B(J)\vert}{K}\right] \underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow} 0+\frac{1}{4^s}-\frac{1}{4^s}=0. \end{align*} \end{proof} \begin{lemma}\label{LemmeConveEhren} For every $J\subset [s]$ and $k\in [s]$ we have the following convergence: \[ \frac{\vert O_k(J)\vert}{K}\overset{\LL^2}{\underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow}}\frac{1}{2}.\] \end{lemma} \begin{proof} It is easy to check that $\left\vert O_i^j\right\vert\overset{\mathcal{D}}{=}E_K(j-i)$ where $\left(E_K(n)\right)_{n\in\NN}$ is an Ehrenfest urn with $K$ balls starting from $0$. We can then use the moments of the Ehrenfest urn to get the convergence (see for instance \cite{Bar82} for computations of its first two moments).
Setting $\Delta_k(n) \overset{def}{=} \lfloor nt_k(n)\rfloor - \lfloor nt_{k-1}(n)\rfloor$ we have: \begin{equation}\label{EquatEsperEhren} \EE\left[\left\vert O_{\lfloor nt_{k-1}(n)\rfloor}^{\lfloor nt_k(n)\rfloor}\right\vert\right] = \frac{K}{2}\left(1-(1-\frac{2}{K})^{\Delta_k(n)}\right) = K-\EE\left[\left\vert\left(O_{\lfloor nt_{k-1}(n)\rfloor}^{\lfloor nt_k(n)\rfloor}\right)^c\right\vert\right] \end{equation} and then in both cases ($k\in J$ or $k\notin J$) we get: \begin{align*} \EE\left[\frac{\vert O_k(J)\vert}{K}\right] &= \frac{1}{2}\left(1\pm (1-\frac{2}{K})^{\Delta_k(n)}\right)= \frac{1}{2}\left(1\pm\exp\left(\Delta_k(n)\ln(1-\frac{2}{K})\right)\right) \\ &= \frac{1}{2}\left(1\pm\exp\left(\Delta_k(n)(-\frac{2}{K}+O(K^{-2}))\right)\right) \underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow} \frac{1}{2} \end{align*} since we assumed $\frac{\Delta_k(n)}{K}\underset{n,K\to\infty }{\longrightarrow}+\infty$ in \eqref{HypotTempsAleat}. And for the variance (which is the same in both cases): \begin{align*} &\VV\left[\frac{\vert O_k(J)\vert}{K}\right] = \frac{1}{4}\left(\frac{1}{K}+\frac{K-1}{K}(1-\frac{4}{K})^{\Delta_k(n)} -(1-\frac{2}{K})^{2\Delta_k(n)}\right) \\ &= \frac{1}{4}\left(\frac{1}{K}+\frac{K-1}{K}\exp\left(-4\frac{\Delta_k(n)}{K}+O\left(\frac{\Delta_k(n)}{K^2}\right)\right)\right. \\ &\hspace{5cm}\left.-\exp\left(-4\frac{\Delta_k(n)}{K}+O\left(\frac{\Delta_k(n)}{K^2}\right)\right)\right) \\ &= \frac{1}{4K}\left(1-\exp\left(-4\frac{\Delta_k(n)}{K}+O\left(\frac{\Delta_k(n)}{K^2}\right)\right)\right) \underset{\substack{n,K\to\infty \\ n/K\to\infty}}{\longrightarrow} 0.
\end{align*} \end{proof} With Lemmas \ref{TheorDonskAleat} and \ref{LemmeConveCardi} we get the following convergence: \begin{proposition}\label{PropoConveLFD} \[\left( \frac{1}{\sqrt{K}}\sum_{i\in B(J)}\xi_i\right)_{J\subset [s]}\overset{\mathcal{D}}{\underset{\substack{n,K\to\infty\\ n/K\to\infty}}{\longrightarrow}}\left( G^{(s)}_J\right)_{J\subset [s]} \] where $\left( G^{(s)}_J\right)_{J\subset [s]}$ denotes an i.i.d. collection of $\mathcal{N}(0,2^{-s})$ Gaussian random variables. \end{proposition} \begin{proof} Let us write $\lbrace J\subset [s]\rbrace = \lbrace J_1,J_2,\dots ,J_{2^s}\rbrace$, and set $S_0\overset{def}{=}0$ and $S_k\overset{def}{=}\sum_{i=1}^k \vert B(J_i)\vert$. By reordering the $\xi_i$, we get: \[\left(\frac{1}{\sqrt{K}}\sum_{i\in B(J)}\xi_i\right)_{J\subset [s]} \overset{\mathcal{D}}{=} \left(\frac{1}{\sqrt{K}}\sum_{i=S_{k-1}+1}^{S_k}\xi_i\right)_{k\in [2^s]}.\] We can use Lemma \ref{LemmeConveCardi} to show the following convergence: \[\left(\frac{S_k}{K}\right)_{k\in [2^s]}\overset{\PP}{\underset{\substack{n,K\to\infty\\ n/K\to\infty}}{\longrightarrow}} \left(\frac{k}{2^s}\right)_{k\in [2^s]}.\] Since $\left( S_k\right)_{k\in [2^s]}$ and $\left(\xi_i\right)_{i\in\NN}$ are independent, Lemma \ref{TheorDonskAleat} gives: \[\left(\frac{1}{\sqrt{K}}\sum_{i=S_{k-1}+1}^{S_k}\xi_i\right)_{k\in [2^s]}\overset{\mathcal{D}}{\underset{\substack{n,K\to\infty\\ n/K\to\infty}}{\longrightarrow}} \left(W_{\frac{k}{2^s}}-W_{\frac{k-1}{2^s}}\right)_{k\in [2^s]}\] where $\left( W_t\right)_{t\geq 0}$ is a standard Brownian motion starting from $0$.
We can conclude thanks to the identity: \[\left(W_{\frac{k}{2^s}}-W_{\frac{k-1}{2^s}}\right)_{k\in [2^s]} \overset{\mathcal{D}}{=} \left( G^{(s)}_J\right)_{J\subset [s]}.\] \end{proof} We finally get the convergence of the rescaled finite-dimensional laws of the processes $X_{n,K}$ in the fast regime: \begin{theorem}\label{PropoConveInfin}Under the assumption \eqref{HypotTempsAleat} we have: \[\frac{X_{n,K}(\mathbf{t}_n)}{\sqrt{K}} \overset{\mathcal{D}}{\underset{\substack{n,K\to\infty\\ n/K\to\infty}}{\longrightarrow}}\left( G_j\right)_{1\leq j\leq s}\] where $\left( G_j\right)_{1\leq j\leq s}$ are i.i.d. $\mathcal{N}(0,1)$ Gaussian random variables. \end{theorem} \begin{proof} Using Proposition \ref{PropoConveLFD} we know that the limit is a centered Gaussian vector, so we just need to compute its covariance, namely $cov(G_i,G_j)$ for all $1\leq i,j\leq s$. In the case $i< j$, using \eqref{EquatFinal} with $\mathbf{t}'_n=(t_i(n),t_j(n))$ we get \begin{align*} &cov(G_i,G_j) = cov\left(\sum_{J\subset [2]}(-1)^{\vert J\cap [1]\vert}G_J^{(2)},\sum_{J\subset [2]}(-1)^{\vert J\cap [2]\vert}G_J^{(2)}\right) \\ &= cov(G_\emptyset^{(2)}-G_{\lbrace 1\rbrace}^{(2)}+G_{\lbrace 2\rbrace}^{(2)}-G_{\lbrace 1,2\rbrace}^{(2)},G_\emptyset^{(2)}-G_{\lbrace 1\rbrace}^{(2)}-G_{\lbrace 2\rbrace}^{(2)}+G_{\lbrace 1,2\rbrace}^{(2)}) \\ &= \VV[G_\emptyset^{(2)}]+\VV[G_{\lbrace 1\rbrace}^{(2)}]-\VV[G_{\lbrace 2\rbrace}^{(2)}]-\VV[G_{\lbrace 1,2\rbrace}^{(2)}] = 0. \end{align*} For the variance, consider $s=1$: for any $j$ we have \begin{align*} \VV[G_j] = \VV\left[\sum_{J\subset [1]}(-1)^{\vert J\cap [1]\vert}G_J^{(1)}\right] = \VV[G_\emptyset^{(1)}-G_{\lbrace 1\rbrace}^{(1)}] = \frac{1}{2}+\frac{1}{2} = 1. \end{align*} Then $\left(G_j\right)_{1\leq j \leq s}$ is a centered Gaussian vector whose covariance is given by: \[cov(G_i,G_j)=\II_{i=j}\] which characterizes the standard normal random vector, i.e. a random vector whose marginals are i.i.d.
$\mathcal{N}(0,1)$ random variables. \end{proof} \begin{corollary} \[\left(\frac{X_{n,K}(t)}{\sqrt{K}}\right)_{t\geq 0}\overset{f.d.l.}{\underset{\substack{n,K\to\infty\\ n/K\to\infty}}{\longrightarrow}}\left( G_t\right)_{t\geq 0}\] where $\left( G_t\right)_{t\geq 0}$ is an i.i.d. collection of $\mathcal{N}(0,1)$ Gaussian random variables. \end{corollary} \begin{proof} Any constant vector $\mathbf{t}=(t_1,\dots, t_s)$ such that $0\leq t_1<t_2<\dots < t_s$ fulfills the assumption \eqref{HypotTempsAleat}, and thus we can apply the previous theorem. \end{proof} We could try to prove the results in the fast regime by dilating time in the intermediate one, but the fact that the dilation goes to infinity raises many technical issues. That is why we take a different route to the asymptotic behavior in the fast regime. \section{Slow regime} Now we consider the slow regime, i.e. the case where $\frac{n}{K}\to 0$. This condition on the asymptotics of $\frac{n}{K}$ raises the following issue: if $Y_K(0)$ is distributed under the invariant law, $\VV[X_{n,K}(0)]=K$ while $\VV[X_{n,K}(t)-X_{n,K}(0)]$ is of order $nt$. Then either we set $c_{n,K}=\sqrt{K}$ and the limit process is constant, or we consider $c_{n,K}=\sqrt{n}$ and the law of $\frac{X_{n,K}(0)}{c_{n,K}}$ diverges as $n$ and $K$ tend to infinity. We will consider the second option, and thus set $Z_{n,K}(t)=n^{-\frac{1}{2}}X_{n,K}(t)$. If $n^{-\frac{1}{2}}X_{n,K}(0)$ converges in law we can apply Theorem \ref{RegimInter}, but this does not cover the case where $Y_K(0)$ is distributed under the invariant law.
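The order-$nt$ claim for the variance of the increments can be made explicit by a short computation of ours (not in the paper): a fixed coordinate is flipped with probability $\frac1K$ at each step, so the probability $p_m$ that it has been flipped an odd number of times after $m$ steps satisfies $p_{m+1}=p_m(1-\frac1K)+(1-p_m)\frac1K$, whose solution is $p_m=\frac12\big(1-(1-\frac2K)^m\big)$; since the initial coordinates are centered and independent, this gives $\VV[X_{n,K}(t)-X_{n,K}(0)]=4Kp_{\lfloor nt\rfloor}\approx 4nt$ when $nt\ll K$. A quick numerical check of the recursion against its closed form:

```python
def p_odd(K, m):
    """P(a fixed coordinate is flipped an odd number of times in m steps),
    computed by the one-step recursion p -> p * (1 - 1/K) + (1 - p) / K."""
    p = 0.0
    for _ in range(m):
        p = p * (1 - 1 / K) + (1 - p) / K
    return p

def p_odd_closed(K, m):
    """Closed form of the same probability."""
    return (1 - (1 - 2 / K) ** m) / 2

K, m = 100, 37
increment_variance = 4 * K * p_odd_closed(K, m)  # Var of X_{n,K} increment after m steps
```

For $m\gg K$ the probability tends to $\frac12$ and the increment variance saturates at $2K$, which is consistent with the initial value being the dominant fluctuation in the slow regime.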
We will then consider a wider class of initial laws, but in order to get rid of the ``diverging initial value'' problem we will focus on the increments of the processes, namely we set: \[\Delta Z_{n,K}(t)\overset{def}{=}Z_{n,K}(t)-Z_{n,K}(0)=\frac{1}{\sqrt{n}}\left(X_{n,K}(t)-X_{n,K}(0)\right).\] \begin{theorem}\label{PropoConveZero} If the sequence $\mu_K$ is such that $\frac{\sqrt{n}}{K}X_{n,K}(0)\overset{\PP}{\underset{\substack{n,K\to\infty \\ n/K\to 0}}{\longrightarrow}}0$, then: \[\left(\Delta Z_{n,K}(t)\right)_{t\geq 0}\overset{\mathcal{D}}{\underset{\substack{n,K\to\infty \\ n/K\to 0}}\longrightarrow}\left(2W_t\right)_{t\geq 0}\] where $\left(W_t\right)_{t\geq 0}$ denotes a standard Brownian motion starting from $0$. \end{theorem} \begin{proof} Proposition \ref{EK} can no longer be used since the processes $\Delta Z_{n,K}$ are not Markovian. We need a more general result, namely Theorem 4.1 from \cite{EK91}. To apply this theorem, we need to find two sequences of random processes $A_n$ and $B_n$ such that $M_n(t)\overset{def}{=}\Delta Z_{n,K}(t)-B_n(t)$ and $M_n^2(t)-A_n(t)$ are $(\mathcal{F}_t^n)$-local martingales, with $\mathcal{F}_t^n=\sigma\left(\Delta Z_{n,K}(s),A_n(s),B_n(s):s\leq t\right)$. \medskip Let $\tilde{\mathcal{F}}_t^n=\sigma\left(X_{n,K}(s):s\leq t\right)$. If we set $\nabla X_{n,K}(i) \overset{def}{=}X_{n,K}\left(\frac{i+1}{n}\right)-X_{n,K}\left(\frac{i}{n}\right)$ we get for any integer $i\geq 0$: \begin{align*} \EE\left[\left. \nabla X_{n,K}(i)\right\vert \tilde{\mathcal{F}}_{i/n}^n\right] &= \EE\left[ f_{K}(Y_{K}(i+1))\left.-f_K(Y_{K}(i)) \right\vert \tilde{\mathcal{F}}_{i/n}^n\right] \\ &=\EE\left[\left. \sum_{k=1}^{K} Y_{K}^{(k)}(i+1)-\sum_{k=1}^{K} Y_{K}^{(k)}(i)\right\vert Y_{K}(i)\right] \\ &=\EE\left[\left.
-2Y_{K}^{(j)}(i)\right\vert Y_{K}(i)\right] = -2\sum_{k=1}^{K} \frac{Y_{K}^{(k)}(i)}{K} \\ &= -\frac{2}{K}X_{n,K}\left(\frac{i}{n}\right) \end{align*} where $j$ denotes the changing coordinate between $Y_{K}(i)$ and $Y_{K}(i+1)$. Similarly we set $\nabla Z_{n,K}(i) \overset{def}{=}\Delta Z_{n,K}\left(\frac{i+1}{n}\right)-\Delta Z_{n,K}\left(\frac{i}{n}\right)$ and then: \begin{align*} \EE\left[\left. \nabla Z_{n,K}(i)\right\vert \tilde{\mathcal{F}}_{i/n}^n\right] &= \frac{1}{\sqrt{n}}\EE\left[\nabla X_{n,K}(i)\left. \right\vert \tilde{\mathcal{F}}_{i/n}^n\right] = \frac{1}{\sqrt{n}}\times\frac{-2}{K}X_{n,K}\left(\frac{i}{n}\right) \\ &= -\frac{2}{K}\left(\Delta Z_{n,K}\left(\frac{i}{n}\right)+\frac{X_{n,K}(0)}{\sqrt{n}}\right). \end{align*} So if we set: \begin{align*} B_n(t) &\overset{def}{=} \sum_{i=0}^{\lfloor nt\rfloor -1}\EE\left[\left. \nabla Z_{n,K}(i)\right\vert \tilde{\mathcal{F}}_{i/n}^n\right] = -\frac{2}{K}\sum_{i=0}^{\lfloor nt\rfloor -1}\left(\Delta Z_{n,K}\left(\frac{i}{n}\right)+\frac{X_{n,K}(0)}{\sqrt{n}}\right) \\ A_n(t) &\overset{def}{=} \sum_{i=0}^{\lfloor nt\rfloor -1}\left(\EE\left[\left. \left(\nabla Z_{n,K}(i)\right)^2\right\vert \tilde{\mathcal{F}}_{i/n}^n\right] -\EE\left[\left. \nabla Z_{n,K}(i)\right\vert \tilde{\mathcal{F}}_{i/n}^n\right]^2\right) \\ & \ = 4\frac{\lfloor nt\rfloor}{n}-\frac{4}{K^2}\sum_{i=0}^{\lfloor nt\rfloor -1}\left(\Delta Z_{n,K}\left(\frac{i}{n}\right)+\frac{X_{n,K}(0)}{\sqrt{n}}\right)^2 \end{align*} then the processes $M_n(t)$ and $M_n^2(t)-A_n(t)$ will be $(\tilde{\mathcal{F}}_t^n)$-martingales, and therefore $(\mathcal{F}_t^n)$-martingales (since they are $(\mathcal{F}_t^n)$-adapted).
We now just have to check the technical requirements of Theorem 4.1 from \cite{EK91}: \begin{proposition} For all $T,r>0$, if $\tau_n^r=\inf\left\lbrace t:\vert \Delta Z_{n,K}(t)\vert\geq r\right\rbrace$ we have: \begin{align*} \EE\left[\sup_{0<t\leq T\wedge\tau_n^r}\left\vert\Delta Z_{n,K}(t)- \lim_{\varepsilon\to 0}\Delta Z_{n,K}(t-\varepsilon)\right\vert ^2\right]\underset{n\to\infty}{\longrightarrow}0, \\ \EE\left[\sup_{0<t\leq T\wedge\tau_n^r}\left\vert B_n(t)- \lim_{\varepsilon\to 0}B_n(t-\varepsilon)\right\vert ^2\right]\underset{n\to\infty}{\longrightarrow}0, \\ \EE\left[\sup_{0<t\leq T\wedge\tau_n^r}\left\vert A_n(t)- \lim_{\varepsilon\to 0}A_n(t-\varepsilon)\right\vert \right]\underset{n\to\infty}{\longrightarrow}0, \\ \sup_{t\leq T\wedge\tau_n^r}\vert B_n(t)\vert \overset{\PP}{\underset{n\to\infty}{\longrightarrow}}0 \ \text{and }\sup_{t\leq T\wedge\tau_n^r}\vert A_n(t)-4t\vert \overset{\PP}{\underset{n\to\infty}{\longrightarrow}}0. \end{align*} \end{proposition} \begin{proof} The first convergence is obvious since the jumps of $\Delta Z_{n,K}$ are bounded by $\frac{2}{\sqrt{n}}$. Then we deal with the jumps of the compensator $B_n$: \begin{align*} &\EE\left[\sup_{0<t\leq T\wedge\tau_n^r}\left\vert B_n(t)- \lim_{\varepsilon\to 0}B_n(t-\varepsilon)\right\vert ^2\right] \\ &\hspace{2cm} = \EE\left[\sup_{\frac{i}{n}\leq T\wedge\tau_n^r}\left\vert -\frac{2}{K}\left(\Delta Z_{n,K}\left(\frac{i}{n}\right)+\frac{X_{n,K}(0)}{\sqrt{n}}\right) \right\vert ^2\right] \\ &\hspace{2cm} \leq \EE\left[\sup_{\frac{i}{n}\leq T\wedge\tau_n^r}8\left(\left(\frac{\Delta Z_{n,K}(i/n)}{K}\right)^2 + \left( \frac{X_{n,K}(0)}{K\sqrt{n}} \right)^2\right)\right] \\ &\hspace{2cm} \leq 8\left(\left(\frac{r}{K}\right)^2 + \left( \frac{1}{\sqrt{n}} \right)^2\right) \end{align*} which obviously tends to $0$. The same kind of computation applied to $A_n$ ensures the third convergence.
The trickiest assumptions in this proposition are the last two, namely: \[\sup_{t\leq T\wedge\tau_n^r}\vert B_n(t)\vert \overset{\PP}{\underset{n\to\infty}{\longrightarrow}}0 \text{ and }\sup_{t\leq T\wedge\tau_n^r}\vert A_n(t)-4t\vert \overset{\PP}{\underset{n\to\infty}{\longrightarrow}}0.\] Making use of the time horizon $T$ and the stopping time $\tau_n^r$ we get the following bound: \begin{align*} \sup_{t\leq T\wedge\tau_n^r}\vert B_n(t)\vert &= \sup_{t\leq T\wedge\tau_n^r}\left\vert -\frac{2}{K}\left(\frac{\lfloor nt\rfloor X_{n,K}(0)}{\sqrt{n}}+\sum_{i=0}^{\lfloor nt\rfloor -1}\Delta Z_{n,K}\left(\frac{i}{n}\right)\right)\right\vert \\ &\leq \frac{2\lfloor nT\rfloor}{K\sqrt{n}}\vert X_{n,K}(0)\vert + \frac{2\lfloor nT\rfloor r}{K} \end{align*} where the second term tends to $0$ in this regime. Since we assumed: \[\frac{\sqrt{n}}{K}X_{n,K}(0)\overset{\PP}{\underset{\substack{n,K\to\infty \\ n/K\to 0}}{\longrightarrow}}0\] the supremum of $\vert B_n\vert$ converges to $0$ in probability as well. The last convergence is proven in exactly the same way. \end{proof} We have thus fulfilled the assumptions of Theorem 4.1 from \cite{EK91}, which proves that the processes $\Delta Z_{n,K}$ converge in distribution to a diffusion $(X_t)_{t\geq 0}$ such that $X_0=0$ a.s. and $dX_t=0\,dt + 2dW_t$. We easily conclude that: \[\left(\Delta Z_{n,K}(t)\right)_{t\geq 0}\overset{\mathcal{D}}{\underset{n,K\to\infty}\longrightarrow}\left(2W_t\right)_{t\geq 0}.\] \end{proof} Theorem \ref{PropoConveZero} correctly describes the increments of the processes $Z_{n,K}$ (especially in the stationary case), but does not help approximate the processes themselves, since the information about the initial value is lost. In fact, if $Y_K(0)$ is uniform on $V_K$ the processes $Z_{n,K}$ tend to behave like a ``stationary Brownian motion'', which starts from its invariant measure, namely the Lebesgue measure on $\RR$.
Obviously this is no longer a stochastic process, which is why we cannot use convergence in distribution; instead we will prove a vague convergence of the finite-dimensional laws of our processes: \begin{theorem}\label{TheorRegimLent} If $\mu_K$ is the uniform law on $V_K$, then for any $0\leq t_1<\dots <t_s$ and for every smooth compactly supported function $\varphi\in\mathcal{C}_c^\infty (\RR^s,\RR)$ we have the following convergence: \begin{align*} &\sqrt{2\pi\frac{K}{n}}\times \EE\left[\varphi \left(Z_{n,K}(t_1),\Delta Z_{n,K}(t_2),\dots,\Delta Z_{n,K}(t_s) \right)\right] \\ &\hspace{5.5cm} \underset{n,K\to\infty}{\longrightarrow}\int_{\RR^s}\EE\left[\varphi\left(x,2W_{t_2},\dots,2W_{t_s}\right)\right]dx \end{align*} where $(W_t)_{t\geq 0}$ is a standard Brownian motion starting from $0$. \end{theorem} \begin{proof} Since the processes $Z_{n,K}$ are temporally stationary we can assume that $t_1=0$. We will write $\varphi(Z_{n,K})\overset{def}{=}\varphi \left(Z_{n,K}(0),\Delta Z_{n,K}(t_2),\dots,\Delta Z_{n,K}(t_s) \right)$ to lighten notation, and consider the following consequence of Theorem \ref{RegimInter} from the intermediate regime: \begin{corollary} Let $(x_n)_{n\in\NN}$ be a deterministic sequence such that $x_n\underset{n\to\infty}{\longrightarrow}x$ for some $x\in\RR$. Then under the slow regime we get: \begin{align*} \EE\left[\left. \varphi(Z_{n,K})\right\vert Z_{n,K}(0) = x_n\right]\underset{n,K\to\infty}{\longrightarrow}\EE\left[\varphi\left(x, 2W_{t_2},\dots, 2W_{t_s}\right)\right]. \end{align*} \end{corollary} \begin{proof} We apply Theorem \ref{RegimInter} with $c_{n,K}=n^{-\frac{1}{2}}$, $\lambda = 0$ and $Z_{n,K}(0)\overset{a.s.}{=}x_n$. Then the limit process is just $(2W_t + x)_{t\geq 0}$ where $W$ is a standard Brownian motion starting from $0$.
\end{proof} Then we can write: \begin{align*} &\sqrt{2\pi\frac{K}{n}}\times \EE\left[\varphi (Z_{n,K})\right] = \EE\left[\sqrt{2\pi\frac{K}{n}}\times \EE\left[\left. \varphi (Z_{n,K})\right\vert Z_{n,K}(0) \right]\right] \\ &= \sum_{k\in\ZZ} \sqrt{2\pi\frac{K}{n}}\times \EE\left[\left. \varphi (Z_{n,K})\right\vert Z_{n,K}(0)=\frac{k}{\sqrt{n}} \right]\times\PP\left(Z_{n,K}(0)=\frac{k}{\sqrt{n}}\right)\\ &= \int_\RR \sqrt{2\pi K}\times \EE\left[\left. \varphi (Z_{n,K})\right\vert Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}} \right]\times\PP\left(Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}}\right)dx. \end{align*} We will need the following result (a consequence of the de Moivre--Laplace theorem; see for instance \cite{GK68}) to replace the probability in the previous line by a "Gaussian equivalent": \begin{theorem}\label{DLLT} If $(\xi_i)_{i\in\NN}$ is an i.i.d. sequence of Rademacher random variables, then: \begin{equation*} \sup_{m\in\ZZ}\sqrt{N}\times\left\vert\PP\left(\sum_{i=1}^N \xi_i = m\right) - \frac{2}{\sqrt{2\pi N}}e^{-\frac{m^2}{2N}}\II_{m\equiv N[2]}\right\vert\underset{N\to\infty}{\longrightarrow} 0 \end{equation*} where $x\equiv y[2]$ means that the integers $x$ and $y$ have the same parity. \end{theorem} In our case we get: \begin{equation*} \sup_{x\in\RR}\sqrt{K}\times\left\vert\PP\left(Z_{n,K}(0) = \frac{\lfloor \sqrt{n}x\rfloor}{\sqrt{n}}\right) - \frac{2}{\sqrt{2\pi K}}e^{-\frac{\lfloor \sqrt{n}x\rfloor^2}{2K}}\II_{\lfloor \sqrt{n}x\rfloor\equiv K[2]}\right\vert\underset{n,K\to\infty}{\longrightarrow} 0. \end{equation*} Since $\varphi$ is compactly supported, there exists a compact set $C_0$ such that $x\notin C_0 \Longrightarrow \EE\left[\left. \varphi (Z_{n,K})\right\vert Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}} \right]=0$. Then we have: \begin{align*} &\left\vert \int_\RR \sqrt{2\pi K}\times \EE\left[\left.
\varphi (Z_{n,K})\right\vert Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}} \right]\times\PP\left(Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}}\right)dx \right. \\ &\hspace{2cm}\left. - \int_\RR \EE\left[\left. \varphi (Z_{n,K})\right\vert Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}} \right]2e^{-\frac{\lfloor \sqrt{n}x\rfloor^2}{2K}}\II_{\lfloor \sqrt{n}x\rfloor\equiv K[2]}dx\right\vert \\ &\hspace{0.5cm} \leq \vert\vert\varphi\vert\vert_\infty \int_\RR\sqrt{2\pi K}\left\vert\PP\left(Z_{n,K}(0) = \frac{\lfloor \sqrt{n}x\rfloor}{\sqrt{n}}\right) \right. \\ &\hspace{5cm} \left. - \frac{2}{\sqrt{2\pi K}}e^{-\frac{\lfloor \sqrt{n}x\rfloor^2}{2K}}\II_{\lfloor \sqrt{n}x\rfloor\equiv K[2]}\right\vert\II_{x\in C_0}dx \end{align*} which tends to $0$ as $n$ and $K$ tend to infinity. The next step is to get rid of the "same parity indicator", but the reader can easily check that the error term we get by turning $\II_{\lfloor \sqrt{n}x\rfloor\equiv K[2]}$ into $\frac{1}{2}$ will also vanish as $n$ and $K$ go to infinity (thanks to the uniform continuity of $\varphi$ and the compactness of $C_0$). Then: \begin{align*} &\left\vert \sqrt{2\pi\frac{K}{n}}\times \EE\left[\varphi (Z_{n,K})\right] - \int_\RR \EE\left[\left. \varphi (Z_{n,K})\right\vert Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}} \right]e^{-\frac{\lfloor \sqrt{n}x\rfloor^2}{2K}}dx\right\vert \\ &\hspace{1cm}\underset{n,K\to\infty}{\longrightarrow} 0. \end{align*} The result of Theorem \ref{TheorRegimLent} follows from dominated convergence since in the slow regime: \begin{align*} &\EE\left[\left.
\varphi (Z_{n,K})\right\vert Z_{n,K}(0)=\frac{\lfloor\sqrt{n}x\rfloor}{\sqrt{n}} \right] \overset{\forall x\in\RR}{\underset{n,K\to\infty}{\longrightarrow}} \EE\left[\varphi\left(x, 2W_{t_2},\dots, 2W_{t_s}\right)\right], \\ &\hspace{1cm}e^{-\frac{\lfloor \sqrt{n}x\rfloor^2}{2K}}\overset{\forall x\in\RR}{\underset{n,K\to\infty}{\longrightarrow}} 1 \hspace{0.5cm} \text{and}\hspace{0.5cm} \int_\RR \vert\vert\varphi\vert\vert_\infty\II_{x\in C_0}dx <+\infty. \end{align*} \end{proof}
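The key estimate in the proof above, the local limit approximation of Theorem \ref{DLLT}, is easy to check numerically. The sketch below compares the exact Rademacher-sum probabilities with their Gaussian replacement on a three-standard-deviation window around $0$ (the supremum in the theorem runs over all $m\in\ZZ$, but the tails contribute negligibly); the function names are ours.

```python
from math import comb, exp, pi, sqrt

def p_exact(N, m):
    """P(xi_1 + ... + xi_N = m) for i.i.d. Rademacher xi_i: the sum equals m
    exactly when (N + m)/2 of the signs are +1."""
    if (N + m) % 2 or abs(m) > N:
        return 0.0
    return comb(N, (N + m) // 2) / 2 ** N

def p_gauss(N, m):
    """The Gaussian local approximation 2/sqrt(2 pi N) * exp(-m^2/(2N)),
    supported on the m with the same parity as N."""
    if (N + m) % 2:
        return 0.0
    return 2 / sqrt(2 * pi * N) * exp(-m ** 2 / (2 * N))

N = 10_000
width = 3 * int(sqrt(N))              # a three-sigma window around 0
err = max(sqrt(N) * abs(p_exact(N, m) - p_gauss(N, m))
          for m in range(-width, width + 1))
print(err)                            # small, and -> 0 as N grows
```

The rescaled error $\sqrt{N}\,\vert\PP - \text{Gaussian}\vert$ is already tiny at this $N$, which is what makes the "Gaussian equivalent" substitution in the proof harmless.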
https://arxiv.org/abs/1801.03741
Double asymptotic for random walks on hypercubes
We consider the sum of the coordinates of a simple random walk on the K-dimensional hypercube, and prove a double asymptotic of this process, as both the time parameter n and the space parameter K tend to infinity. Depending on the asymptotic ratio of the two parameters, they converge towards either a Brownian motion, an Ornstein-Uhlenbeck process or an i.i.d. collection of Gaussian variables.
https://arxiv.org/abs/1410.5032
Monotonicity of Avoidance Coupling on $K_N$
Answering a question by Angel, Holroyd, Martin, Wilson and Winkler, we show that the maximal number of non-colliding coupled simple random walks on the complete graph $K_N$, which take turns, moving one at a time, is monotone in $N$. We use this fact to couple $\lceil \frac N4 \rceil$ such walks on $K_N$, improving the previous $\Omega(N/\log N)$ lower bound of Angel et al. We also introduce a new generalization of simple avoidance coupling which we call partially ordered simple avoidance coupling and provide a monotonicity result for this extension as well.
\section{Introduction} Let $G=([N],E)$ be a graph whose vertices are the set of integers $[N]=\{1,\dots,N\}$. A \emph{simple random walk} on this graph is a Markov chain $(X_t)_{t\in\mathbb{Z}}$ of elements in $[N]$ such that for all $t\in\mathbb{Z}$ the distribution of $X_t$ is uniform on the neighbors of $X_{t-1}$. A \emph{Simple Avoidance Coupling (SAC)} of $k$ walks on $G$ is a sequence of random maps $(U_t)_{t\in \mathbb{Z}}$ from $[k]$ to $[N]$ which satisfy two conditions: \begin{equation}\label{eq: SAC 1st} \forall i\in [k] : (U_t(i))_{t\in\mathbb{Z}}\text{ is a simple random walk on }G \end{equation} \begin{equation}\label{eq: SAC 2nd} \forall t\in \mathbb{Z},\ 1 \le i<j\le k\ :\ \mathbb{P}\Big(U_t(i)=U_t(j)\Big)= \mathbb{P}\Big(U_t(i)=U_{t-1}(j)\Big)= 0 \end{equation} Angel, Holroyd, Martin, Wilson and Winkler introduced this notion in \cite{AHMWW} in order to investigate couplings of $k$ simple random walks which move in turns in discrete time and avoid collisions. One possible application of SACs on the complete graph $K_N$ is semi-synchronous orthogonal frequency hopping. A communication network consists of several transmitters. As there are overlaps between the transmission ranges, they wish to use distinct frequencies at every given time. Malicious adversaries, each located in the vicinity of one of these transmitters, are trying to interfere with the communication by noising several frequencies at every given time. Once an adversary hits his target transmitter's frequency, he can tell that his interruption succeeded. In order to avoid persistent interference the transmitters wish to change frequencies often. Being unable to perfectly synchronize their clocks, the transmitters must take turns at hopping. In this scenario it is desirable for each transmitter to perform a simple random walk, as this would make each of its frequency changes (hops) independent from the past with maximal entropy.
Independence is desirable since the adversary has some access to the frequency history of its target transmitter. An ideal hopping scheme in this setting is a SAC. An important result of \cite{AHMWW} is that there exists a SAC of $\Omega(N/\log N)$ walks on $K_N$. The authors also show in \cite[Theorem 6.1]{AHMWW} that when $N=2^\ell+1$ for some $\ell\in \mathbb{N}$, there exists an avoidance coupling of $2^{\ell-1}$ walks on $K_N$. Angel, Holroyd, Martin, Wilson and Winkler ask whether the existence of an avoidance coupling of $k$ walks on $K_{N-1}$ implies the existence of an avoidance coupling of $k$ walks on $K_{N}$. Our main result is a positive answer to this question: \begin{thm}\label{thm: main1} If there exists a simple avoidance coupling of $k$ walks on $K_{N-1}$, then there exists a simple avoidance coupling of $k$ walks on $K_{N}$. \end{thm} Combining this with \cite[Theorem 6.1]{AHMWW} we obtain the following improved bound. \begin{thm}\label{thm: main3} There exists a simple avoidance coupling of $\lceil N/4 \rceil$ walks on $K_{N}$. \end{thm} We find it interesting that a byproduct of the proof of Theorem~\ref{thm: main1} is that in the extended coupling on $K_{N+1}$ one finds an additional $(k+1)$-th special walk, which is a simple random walk as well, but does not obey the order in which the walkers move in every round. In Section~\ref{sec: posac} we investigate this observation and discuss a possible extension of avoidance couplings to models where the order by which the walks move changes from one round to the next, subject to some restrictions. \subsection{Markovian Couplings} In \cite{AHMWW} the authors give special attention to Markovian Simple Avoidance Couplings. These have the property that whenever a walker's turn to move arrives, he needs only to look at the current configuration of the walkers to determine the distribution of its next location.
In particular the simple avoidance coupling of $\Omega(N/\log N)$ walkers on $K_N$ constructed in \cite{AHMWW} has this property, as does the coupling of $2^{\ell-1}$ walkers on $K_{2^\ell+1}$. While our extension theorem does not preserve this property, we preserve the following weaker version. Consider a SAC in which each site of the underlying graph $K_{N}$ is assigned a label. At the end of every round a random permutation is applied to these labels. Such a SAC is called \emph{Label Markovian} if whenever a walker's turn to move arrives he needs only to look at the current configuration of the walkers and, in addition, at the current labels of the vertices. Observe that every Markovian SAC is also a Label Markovian SAC. It is straightforward to check that our construction preserves the Label Markov property. \section{Background} Probabilistic coupling of several stochastic processes sharing the same distribution has been introduced to probability theory mainly as a tool to study and prove various properties of that common distribution. Such methods have been successfully used in showing properties such as monotonicity, stochastic dominance and convergence. Nevertheless, probabilistic coupling can also be a subject of study. In this context, the natural question is "in what sense is a collection of coupled identically distributed stochastic processes different from a collection of independent processes with the same distribution?". A classical example is that of two random walks on some finite graph $G$. If two independent random walks move on $G$, then they are bound to collide with high probability after a polynomial number of steps. Collisions occur even if a scheduler is allowed to control the times at which each walk makes its move (see \cite{CTW},\cite{TW}), and can be avoided only if the scheduler has some knowledge of the future of each walk, and only on special graphs (see \cite{G1}).
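The collision claim above is easy to observe in a toy experiment. The sketch below (our own illustration, not a construction from \cite{AHMWW}) runs two \emph{independent} turn-taking walks on $K_{20}$ and records the first round at which one of them steps onto the other's current position; the mean collision round is of order $N$.

```python
import random

def collision_round(N, rng, max_rounds=10_000):
    """Two *independent* simple random walks on K_N, moving in turns within
    each round; return the first round at which one walk steps onto the
    other's current position."""
    x, y = 1, 2                             # distinct start vertices in [N]
    for t in range(1, max_rounds + 1):
        x = rng.choice([v for v in range(1, N + 1) if v != x])
        if x == y:
            return t                        # walk 1 hit walk 2
        y = rng.choice([v for v in range(1, N + 1) if v != y])
        if y == x:
            return t                        # walk 2 hit walk 1
    return None

rng = random.Random(1)
times = [collision_round(20, rng) for _ in range(300)]
print(sum(times) / len(times))              # mean collision round, order N
```

Each round a collision happens with probability roughly $2/(N-1)$, so without coordination the walks cannot survive for long; an avoidance coupling removes this failure mode entirely.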
On the other hand, there exist many graphs on which coupled random walks can easily avoid each other. On the cycle graph $C_n$ for example, two walks which start on non-adjacent vertices can avoid each other by moving in the same direction at every step, either clockwise or counter-clockwise. Coupling of walks on $K_N$, the complete graph on $N$ vertices, appears to be more difficult. In \cite{AHMWW}, the authors use various techniques inspired by discrete harmonic analysis to create an avoidance coupling of $\Omega(N/\log N)$ walks on $K_N$, and of $N/2-1$ walks for an infinite collection of special values of $N$. They also investigate avoidance coupling on $K^*_N$, the complete graph with loops on $N$ vertices, and obtain a lower bound of $N/4$ walks on this graph. The authors further show that no coupling exists for $N-1$ walks on $K^*_N$ if $N\ge 4$. The study of avoidance couplings is closely related to that of Brownian motions which keep at least a constant distance from each other. This subject and its relation to pursuit-evasion problems is investigated in \cite{BBC}, \cite{BBK} and \cite{K1}. \section{Extending an avoidance coupling} This section consists of the proof of Theorem~\ref{thm: main1}. Let $\mathcal{U}^{N-1}_k = \big(U_t(j)\big)_{t\in\mathbb{Z},j\in[k]}$ be a SAC of $k$ walks on $K_{N-1}$. Our goal is to define $\mathcal{W}^{N}_k$, a SAC of $k$ walks on $K_{N}$. \subsection{The extended coupling}\label{sec: ext} We begin by introducing an auxiliary sequence of random permutations. Let $P_0$ be a uniformly chosen random permutation in $S_{N}$. Let $(a_t)_{t\in\mathbb{Z}}$ be an i.i.d. sequence where $a_0$ is a uniformly chosen element of $[N-1]$. For $t\in \mathbb{N}$ define inductively $P_t,P_{-t}\in{S_N}$ as follows. \begin{align*} P_t&:=P_{t-1}\circ (N\ a_t ),\\ P_{-t}&:= P_{1-t} \circ (N\ a_{1-t}), \end{align*} where $(a\ b)$ is the transposition of the two elements $a$ and $b$. Write $\mathcal{P}^{N}=(P_t)_{t\in\mathbb{Z}}$.
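The auxiliary chain $\mathcal{P}^{N}$ is straightforward to simulate. In the sketch below (with $0$-indexed lists, and a helper name of our choosing) each step implements $P_t = P_{t-1}\circ (N\ a_t)$ by swapping two entries; since $P_t(N)=P_{t-1}(a_t)$ with $a_t$ uniform on $[N-1]$, the trajectory of $P_t(N)$ never stays put and is in fact a simple random walk on $K_N$.

```python
import random

def step(p, a):
    """P_t = P_{t-1} o (N a_t): swap the entries of p (a 0-indexed list
    representing a permutation of {1,...,N}) at positions N-1 and a."""
    q = list(p)
    q[-1], q[a] = q[a], q[-1]
    return q

N = 6
rng = random.Random(2)
p = rng.sample(range(1, N + 1), N)   # P_0: uniform random permutation of [N]
track = [p[-1]]                      # trajectory of P_t(N)
for _ in range(5000):
    a = rng.randrange(N - 1)         # a_t uniform on [N-1] (0-indexed here)
    p = step(p, a)
    track.append(p[-1])

# P_t(N) = P_{t-1}(a_t) is uniform on [N] \ {P_{t-1}(N)}, so the label of
# the extra vertex performs a simple random walk on K_N.
moves = sum(track[i] != track[i - 1] for i in range(1, len(track)))
print(moves, len(track) - 1)         # never stays put
```

Because $a_t$ avoids the position of $N$ itself and $p$ has distinct entries, every step moves $P_t(N)$ to a new vertex, which is the elementary observation underlying the extension.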
It is straightforward to check that $\mathcal{P}^{N}$ is a stationary Markov chain on $S_{N}$ which is independent of $\mathcal{U}^{N-1}_k$. We define $\mathcal{W}^{N}_k=\big(W_t(j)\big)_{t\in\mathbb{Z},j\in[k]}$ where $W_t:[k]\to[N]$, as follows: \begin{equation*} W_t(j) = P_tU_t(j), \quad j\in [k], \: t\in \mathbb{Z}. \end{equation*} An example of $\mathcal{U}^{5}_2$, $\mathcal{P}^{6}$ and $\mathcal{W}^{6}_2$ is given in Figure~1. Below we prove that $\mathcal{W}^N_k$ is an avoidance coupling of $k$ walks on $K_{N}$. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{avoidance0.pdf} \rput(-7,7.8){\resizebox{1cm}{!}{$U_t$}} \rput(-7,2.8){\resizebox{1cm}{!}{$W_t$}} \rput(-5.9,5.45){\resizebox{0.8cm}{!}{$t=$}} \rput(-5.2,5.45){\resizebox{0.3cm}{!}{$0$}} \rput(-2.15,5.45){\resizebox{0.3cm}{!}{$1$}} \rput(0.78,5.45){\resizebox{0.3cm}{!}{$2$}} \rput(3.7,5.45){\resizebox{0.3cm}{!}{$3$}} \rput(6.6,5.45){\resizebox{0.3cm}{!}{$4$}} \caption{Above: $U_t$, a SAC of $2$ walks on $K_5$. Below: $W_t$, the extended SAC on $K_6$. The label permutation $P_t$ is given at the end of every time unit. Observe that the light blue walk always moves before the dark one. Also observe how $W_t$ is determined by $P_t$ and $U_t$.} \end{figure} \subsection{The extension is a SAC} To show that $\mathcal{W}^{N}_k$ is a SAC we must show that it satisfies \eqref{eq: SAC 1st} and \eqref{eq: SAC 2nd}. We begin by showing $\eqref{eq: SAC 2nd}$. Let $t\in \mathbb{Z}$ and $1 \le i<j\le k$. We have $$\mathbb{P}\Big(W_t(i)=W_t(j)\Big)= \mathbb{P}\Big(P_tU_t(i)=P_tU_t(j)\Big)= \mathbb{P}\Big(U_t(i)=U_t(j)\Big)=0,$$ where the central equality uses the fact that $P_t$ is a permutation, and the right-most equality follows from the fact that $\mathcal{U}^{N-1}_k$ satisfies $\eqref{eq: SAC 2nd}$. Recall the definition of the sequence $(a_t)_{t\in\mathbb{Z}}$ and write $P'_t$ for the transposition $(N\ a_t)$.
We have \begin{equation} \begin{aligned} \mathbb{P}\Big(W_t(i)=W_{t-1}(j)\Big)&= \mathbb{P}\Big(P_{t}U_{t}(i)=P_{t-1}U_{t-1}(j)\Big)= \mathbb{P}\Big((P_{t-1}\circ P'_t)U_{t}(i)=P_{t-1}U_{t-1}(j)\Big)\\ &=\mathbb{P}\Big(P'_tU_{t}(i)=U_{t-1}(j)\Big)=0, \end{aligned}\label{eq: SAC 2n pt 2} \end{equation} where the last equality follows from the fact that $\mathcal{U}^{N-1}_k$ satisfies $\eqref{eq: SAC 2nd}$, and from the fact that $U_t(i),U_{t-1}(j)\in [N-1]$. We are left with showing that $\mathcal{W}^{N}_k$ satisfies \eqref{eq: SAC 1st}. Fix $j\in [k]$; we must show that $W_t(j)$ is a simple random walk on $K_{N}$. Equivalently, for every $\ell\in\mathbb{N}$, every history $w_{t-\ell},...,w_{t-1}\in [N]$ such that $$\mathbb{P}\Big(W_{t-1}(j)=w_{t-1},\dots,W_{t-\ell}(j)=w_{t-\ell}\Big)>0,$$ and for every $v \neq w_{t-1}$, we have \begin{equation}\label{eq: showing simple random walkness1} \mathbb{P}\Big(W_t(j)=v\ \big|\ W_{t-1}(j)=w_{t-1},\dots,W_{t-\ell}(j)=w_{t-\ell}\Big)=\frac1{N-1}. \end{equation} To obtain this we show a stronger claim. Fix $\ell\in\mathbb{N}$ and let $p = (p_{t-\ell},...,p_{t-1})\in (S_{N})^\ell$, $u= (u_{t-\ell},...,u_{t-1})\in [N-1]^\ell$. Consider the event $$A_t^{p,u}=\Big\{U_{t-1}(j)=u_{t-1},\dots,U_{t-\ell}(j)=u_{t-\ell}\text{ and }P_{t-1}=p_{t-1},\dots,P_{t-\ell}=p_{t-\ell}\Big\}.$$ We show that for all $p,u$ such that $\mathbb{P}(A_t^{p,u})\neq0$ and for all $v\neq p_{t-1}(u_{t-1})$ we have \begin{equation}\label{eq: showing simple random walkness2} \mathbb{P}\Big(W_t(j)=v\ \big|\ A_t^{p,u}\Big)=\frac1{N-1}. \end{equation} Indeed, \eqref{eq: showing simple random walkness2} is stronger than \eqref{eq: showing simple random walkness1}, as the values of $P_{t-1},\dots,P_{t-\ell}$ and $U_{t-1}(j),\dots,U_{t-\ell}(j)$ determine the values of $W_{t-1}(j),\dots,W_{t-\ell}(j)$.
Since $w_{t-1}=p_{t-1}(u_{t-1})\neq p_{t-1}(N)$ and using the fact that by \eqref{eq: SAC 2nd} we have $$\sum_{n\in [N]\setminus \{w_{t-1}\}} \mathbb{P}\Big(W_t(j)=n\ \big|\ A_t^{p,u}\Big)=1,$$ it would suffice to show \eqref{eq: showing simple random walkness2} in the case $v\neq p_{t-1}(N)$. Thus, let $v\in [N]\setminus \{p_{t-1}(N),w_{t-1}\}$ and use the law of total probability to write \begin{align}\label{eq: main} \mathbb{P}\Big(W_t(j)=v\ \big|\ A_t^{p,u}\Big) &=\mathbb{P}\Big(W_t(j)=v \ \big|\ A_t^{p,u}, P_t(N)=v\Big)\mathbb{P}\Big(P_{t}(N)=v\Big) \notag \\ &\ \ \ \ +\ \mathbb{P}\Big(W_t(j)= v\ \big|\ A_t^{p,u}, P_t(N)\neq v\Big)\mathbb{P}\Big(P_t(N)\neq v\Big) \notag \\ &=\mathbb{P}\Big(U_t(j)=N \ \big|\ A_t^{p,u}, P_t(N)=v\Big)\cdot \frac 1 {N-1} \notag \\ &\ \ \ \ +\mathbb{P}\Big(U_t(j)=P_t^{-1}(v) \ \big|\ A_t^{p,u}, P_t(N)\neq v\Big)\cdot \frac {N-2}{N-1}\notag\\ &=0\cdot \frac 1 {N-1} +\mathbb{P}\Big(U_t(j)=P_t^{-1}(v) \ \big|\ A_t^{p,u}, P_t(N)\neq v\Big)\cdot \frac {N-2}{N-1}. \end{align} We now observe that \begin{align}\label{eq: conditioned 1 over N-1} &\mathbb{P}\Big(U_t(j)=P_t^{-1}(v) \ \big|\ A_t^{p,u}, v \neq P_{t}(N)\Big) = \notag \\ &\mathbb{P}\Big(U_t(j)=p_{t-1}^{-1}(v) \ \big|\ A_t^{p,u}, v \neq P_{t}(N)\Big) = \frac{1}{N-2}, \end{align} where the first equality follows from the fact that for all $v \notin\{P_{t}(N), P_{t-1}(N)\}$, we have $P_t^{-1}(v)=P_{t-1}^{-1}(v)$, and the last equality uses our assumption that $v\neq w_{t-1} = P_{t-1}U_{t-1}(j)$. Plugging \eqref{eq: conditioned 1 over N-1} into \eqref{eq: main} we deduce \eqref{eq: showing simple random walkness2}, concluding the proof. \qed \section{Partially ordered avoidance coupling}\label{sec: posac} Consider the following generalization of an avoidance coupling. Let $R$ be a partial order on $[k]$.
An $R$ \emph{Partially Ordered Avoidance Coupling} (POSAC) of $k$ walks on $G$ is a sequence of random maps \begin{equation*} U_t:[k] \to [N], t\in \mathbb{Z}, \end{equation*} such that there exists a sequence of permutations $\sigma_t\in S_k$ which respect $R$ (i.e., $i<_Rj \rightarrow \sigma_t(i)<\sigma_t(j)$) such that $(U_t)_{t\in \mathbb{Z}}$ and $\sigma_t$ satisfy two conditions: \begin{enumerate} \item $\forall i\in [k] : (U_t(i))_{t\in\mathbb{Z}}\text{ is a simple random walk on }G,$ \leavevmode\hfill\refstepcounter{equation}\textup{\tagform@{\theequation}}\label{eq: PSAC 1st} \item $\forall t\in \mathbb{Z},\ 1 \le \sigma_t(i)<\sigma_t(j)\le k\ :\ \mathbb{P}\Big(U_t(i)=U_{t-1}(j)\Big)= \mathbb{P}\Big(U_t(i)=U_t(j)\Big)=0$. \leavevmode\hfill\refstepcounter{equation}\textup{\tagform@{\theequation}}\label{eq: PSAC 2nd} \end{enumerate} A POSAC is a generalization of a SAC to a situation where the order in which the walks take turns can change from one round to the next, restricted by some partial order constraint (in the application to orthogonal hopping, consider a situation where two transmitters can alter the order of their hops only if they receive each other's signal). The proof of Theorem~\ref{thm: main1} extends in this case to the following. \begin{thm}\label{thm: main2} If there exists an $R$ POSAC of $k$ walks on $K_{N-1}$, then there exists an $R$ POSAC of $k+1$ walks on $K_{N}$. \end{thm} Observe that in this case, although the extension does not allow adding additional relations, it does allow increasing the number of walks. \subsection{Extending a POSAC} This section is dedicated to the proof of Theorem~\ref{thm: main2}. Let $R$ be a partial order on $[k]$, let $\mathcal{U}^{N-1}_{k,R}$ be an $R$ POSAC of $k$ walks on $K_{N-1}$, and let $(s_t)_{t\in\mathbb{Z}}$ be a sequence of permutations which respect $R$ and satisfy \eqref{eq: PSAC 2nd}.
Let $\mathcal{P}^{N}=(P_t)_{t\in\mathbb{Z}}$ and $\mathcal{W}^{N}_{k}=\big(W_t(j)\big)_{t\in\mathbb{Z},j\in[k]}$ be as in Section~\ref{sec: ext}, and define $\mathcal{W}^{N}_{k+1,R}=\big(W_t(j)\big)_{t\in\mathbb{Z},j\in[k+1]}$ with $W_t(k+1):=P_t(N)$. Observe that given $t\in \mathbb{Z}$, for any distinct $i,j\in [k+1]$ we have \begin{equation}\label{eq: first part uniqueness ext} \mathbb{P}\Big(W_t(i)=W_t(j)\Big)= \mathbb{P}\Big(P_tU_t(i)=P_tU_t(j)\Big)= \mathbb{P}\Big(U_t(i)=U_t(j)\Big)=0, \end{equation} as before. Using this we define $(\sigma_t)_{t\in\mathbb{Z}}$ in the following way. If there exists $b\in[k]$ such that $W_{t-1}(b)=W_t(k+1)$, we set \begin{equation}\label{eq: sigma case1} \sigma_t (j)=\begin{cases} s_t(j) & s_t(j)\le s_t(b) \\ s_t(b)+1 & j=k+1 \\ s_t(j)+1 & s_t(j) > s_t(b) \end{cases} \end{equation} while otherwise we set \begin{equation}\label{eq: sigma case2} \sigma_t (j)=\begin{cases} s_t(j)+1 & j\le k \\ 1 & j=k+1. \end{cases} \end{equation} Our purpose is to show that $\mathcal{W}^{N}_{k+1,R}$ and $(\sigma_t)_{t\in\mathbb{Z}}$ satisfy \eqref{eq: PSAC 1st} and \eqref{eq: PSAC 2nd}. An example of $\mathcal{U}^{5}_{2,R}$, $\mathcal{P}^{6}$, $\mathcal{W}^{6}_{3,R}$ and $(\sigma_t)_{t\in\mathbb{Z}}$ is given in Figure~2. \begin{figure}[ht!] \centering \includegraphics[width=\textwidth]{avoidance.pdf} \rput(-7,7.8){\resizebox{1cm}{!}{$U_t$}} \rput(-7,2.6){\resizebox{1cm}{!}{$W_t$}} \rput(-5.9,5.3){\resizebox{0.8cm}{!}{$t=$}} \rput(-5.2,5.3){\resizebox{0.3cm}{!}{$0$}} \rput(-2.3,5.3){\resizebox{0.3cm}{!}{$1$}} \rput(0.7,5.3){\resizebox{0.3cm}{!}{$2$}} \rput(3.7,5.3){\resizebox{0.3cm}{!}{$3$}} \rput(6.7,5.3){\resizebox{0.3cm}{!}{$4$}} \caption{Above: $U_t$, the same SAC of $2$ walks on $K_5$ as in Figure~1. Faded are duplicates of previous steps used to synchronize with the extension below. Below: $W_t$, a POSAC of $3$ walks, under the partial order of the light blue walker walking before the dark blue one.
The permutation is given at the end of every time unit, while the order can be inferred from the diagram. Observe how the order of the blue walks changes with respect to the extended pink walk between different time units. The rule is that the pink walk waits until its new place is clear and then moves. Also notice that the pink walk always ends its motion in place number $6$.} \end{figure} We begin by showing \eqref{eq: PSAC 1st}. Since the first $k$ walks of $\mathcal{W}^{N}_{k+1,R}$ are defined in exactly the same way as those of $\mathcal{W}^{N}_{k}$, the proof that each of these walks performs a simple random walk is identical to the proof of this fact for $\mathcal{W}^{N}_{k}$ and we omit it. The fact that $\{W_t(k+1)\}_{t\in\mathbb{Z}}$ is a simple random walk is straightforward from the fact that $W_t(k+1)=P_t(N)$ and from the definition of $P_t$. Next let us show that $\mathcal{W}^{N}_{k+1,R}$ satisfies \eqref{eq: PSAC 2nd}. Observe that we have obtained the first part of \eqref{eq: PSAC 2nd} in \eqref{eq: first part uniqueness ext}. For the second part, consider the event $$B_t^{i,j}=\{\sigma_t(i)<\sigma_t(j)\},$$ and write again $P'_t$ for the transposition $(N\ a_t)$. For $i,j\in[k]$ we have \begin{align*} \mathbb{P}\Big(W_t(i)=W_{t-1}(j), B_t^{i,j}\Big)&= \mathbb{P}\Big(P_{t}U_{t}(i)=P_{t-1}U_{t-1}(j),B_t^{i,j}\Big)= \mathbb{P}\Big((P_{t-1}\circ P'_t)U_{t}(i)=P_{t-1}U_{t-1}(j),B_t^{i,j}\Big)\\ &=\mathbb{P}\Big(P'_tU_{t}(i)=U_{t-1}(j),B_t^{i,j}\Big)=0, \end{align*} following similar arguments to those used in \eqref{eq: SAC 2n pt 2}. We are thus left with the case $k+1\in \{i,j\}$. However, if $i=k+1$ and $W_t(k+1)=W_{t-1}(j)$, then by the definition of $\sigma_t$ we would have $\sigma_t(i)=\sigma_t(k+1)=s_t(j)+1$ and $\sigma_t(j)=s_t(j)$. Thus $$\mathbb{P}\Big(W_t(k+1)=W_{t-1}(j), B_t^{i,j}\Big)=0.$$ Finally consider the case $j=k+1$.
If $B_t^{i,k+1}$ holds then, by the definition of $\sigma_t$, there must exist some $b\in[k]$ satisfying $B_t^{i,b}$ such that $W_{t-1}(b)=W_t(k+1)=P_t(N)$. This $b$ satisfies $W_{t-1}(b)=P_{t-1}U_{t-1}(b)=P_t(N)$ and hence, by the definition of $P_t$, we have $P_{t}U_{t-1}(b)=P_{t-1}(N)$. We get that \begin{align*} \mathbb{P}\Big(W_t(i)=W_{t-1}(k+1), B_t^{i,k+1}\Big)&=\mathbb{P}\Big(W_t(i)=P_{t-1}(N), B_t^{i,k+1}\Big) =\mathbb{P}\Big(\exists b\in[k]\ :\ P_tU_t(i)=P_{t}U_{t-1}(b), B_t^{i,b}\Big)\\&= \mathbb{P}\Big(\exists b\in[k]\ :\ U_t(i)=U_{t-1}(b), B_t^{i,b}\Big)= 0, \end{align*} where the last equality follows from the fact that $\mathcal{U}^{N-1}_{k,R}$ satisfies \eqref{eq: PSAC 2nd}. Theorem~\ref{thm: main2} follows. \section{Acknowledgments} The author would like to thank David Wilson for introducing him to the problem and for useful discussions, and Omer Angel for his corrections and comments.
https://arxiv.org/abs/1103.1339
Isotone maps on lattices
Let (L_i : i\in I) be a family of lattices in a nontrivial lattice variety V, and let \phi_i: L_i --> M, for i\in I, be isotone maps (not assumed to be lattice homomorphisms) to a common lattice M (not assumed to lie in V). We show that the maps \phi_i can be extended to an isotone map \phi: L --> M, where L is the free product of the L_i in V. This was known for V the variety of all lattices (Yu. I. Sorkin 1952). The above free product L can be viewed as the free lattice in V on the partial lattice P formed by the disjoint union of the L_i. The analog of the above result does not, however, hold for the free lattice L on an arbitrary partial lattice P. We show that the only codomain lattices M for which that more general statement holds are the complete lattices. On the other hand, we prove the analog of our main result for a class of partial lattices P that are not-quite-disjoint unions of lattices. We also obtain some results similar to our main one, but with the relationship lattices:orders replaced either by semilattices:orders or by lattices:semilattices. Some open questions are noted.
\section{Introduction}\label{S.intro} By Yu.\,I. Sorkin \cite[Theorem~3]{sorkin}, if $\E L = (L_i \mid i\in I)$ is a family of lattices and $\gf_i\colon L_i\to M$ are isotone maps of the lattices $L_i$ into a lattice $M$, then there exists an isotone map $\gf$ from the free product $\Free\E L$ of the $L_i$ to $M$ that extends all the $\gf_i$. (For a clarification of Sorkin's proof, see A.\ Kravchenko \cite{AK}; and for an alternative, simpler proof, G. Gr\"{a}tzer, H. Lakser and C.\,R. Platt \cite[\S4]{GLP70}.) Our main result, proved in Section~\ref{S.main}, is a generalization of this fact, with $\Free\E L$ replaced by $\Free_{\mathbf{V}}\E L$, the free product of the $L_i$ in any nontrivial variety $\mathbf{V}$ of lattices containing them --- though not necessarily containing~$M$. (In Section~\ref{S.alt_pfs}, we explore some variants of our proof of this result.) We may regard $\Free_{\mathbf{V}}\E L$ as the free lattice in $\mathbf{V}$ on the partial lattice $P$ given by the disjoint union of the $L_i$. Does the analog of the above result hold for more general partial lattices $P$ and their free lattices $L$? In Section~\ref{S.complete} we find that the lattices $M$ such that this statement holds for \emph{all} partial lattices $P$ are the complete lattices. On the other hand, we describe in Section~\ref{S.retracts} a class of partial lattices $P$, related to but distinct from the class considered in Section~\ref{S.main}, for which the full analog of the result of that section holds. Since \emph{semilattices} lie between orders and lattices, it is plausible that statements similar to our main result should hold, either with ``lattice'' weakened to ``semilattice'', or with ``lattice'' unchanged but ``isotone map'' strengthened to ``semilattice homomorphism''. In~Section~\ref{S.semilat1} we shall find that the former statement is easy to prove. 
In that section and Section~\ref{S.semilat2}, we obtain several approximations to the latter statement; we do not know whether the full statement holds. The reader familiar with the concepts of \emph{quasi}variety and \emph{pre}variety will find that the proofs given in this note for varieties of lattices in fact work for those more general classes. However, varieties are not sufficient for the result of Section~\ref{S.semilat2}, so we develop that in terms of prevarieties. In Section~\ref{S.questions} we note some open questions. For general definitions and results in lattice theory see \cite{GLT2} or \cite{GLT3}. \section{Extending isotone maps to free product lattices}\label{S.main} Let $\mathbf{V}$ be a nontrivial lattice variety, that is, a variety $\mathbf{V}$ of lattices having a member with more than one element. Let $\E L = (L_i \mid i\in I)$ be a family of lattices in~$\mathbf{V}$, and $L = \Free_{\mathbf{V}}\E L$ their free product in $\mathbf{V}$. Finally, let $(\gf_i\colon L_i\to M\mid i\in I)$ be a family of isotone maps into a lattice $M$, not assumed to lie in $\mathbf{V}$. To show that the $\gf_i$ have a common extension to $L$, it suffices, by the universal property of $L$, to find some $L'\in\mathbf{V}$ such that each map $\gf_i$ factors $L_i\to L'\to M$, where the first map is a lattice homomorphism, and the second an isotone map not depending on $i$. So~let us, for now, forget free products, and obtain such a lattice~$L'$. We first note (as remarked in~\cite[last paragraph]{GLP70} for $\mathbf{V}=\mathbf{L}$) that this is easy if $M$ has a least element, or more generally, if its subsets $\gf_i(L_i)$ have a common lower bound $e\in M$. In that case, we begin by enlarging all the $L_i$ to lattices $\bar{L}_i=\{e_i\}+L_i$, where $e_i$ is a new least element, and extend the $\gf_i$ to maps $\bar{\gf}_i\colon \bar{L}_i\to M$ mapping $e_i$ to $e$. 
Now let $L'$ be the sublattice of $\prod(\bar{L}_i \mid i\in I)$ consisting of the elements $x=(x_i\mid i\in I)$ such that $x_i=e_i$ for all but finitely many $i$; and let us map each $L_i$ into $L'$ by the homomorphism sending $x\in L_i$ to the element having $i$-component $x$, and $j$-component $e_j$ for all $j\neq i$. We now map $L'$ to $M$ using the isotone map $\gy$ given by \begin{equation}\begin{minipage}[c]{25pc}\label{d.pre_gy} $\gy(x)=\JJm{\bar{\gf}_i(x_i)}{i\in I}$ \quad for $x=(x_i\mid i\in I)\in L'$. \end{minipage}\end{equation} This infinite join is defined because all but finitely many of the joinands are $e$; and it is easy to verify that for each $i$, the composite map $L_i\to L'\to M$ is the given isotone map $\gf_i$, as required. If, rather, the $\gf_i(L_i)$ have a common \emph{upper} bound, the dual construction is, of course, available. In the absence of either sort of bound, we shall follow the same pattern of adjoining to the $L_i$ elements $e_i$ with a common image $e$ in $M$ (this time an arbitrary element of that lattice); but that construction takes a bit more work, as does the one analogous to the definition~(\ref{d.pre_gy}) of the isotone map $\gy:L'\to M$. The first of these steps is carried out in the following lemma (where $L$ corresponds to the above~$L_i$). \begin{lemma}\label{L.L&e} Let $\gf\colon L\to M$ be any isotone map of lattices, and $e$ any element of $M$. Then there exists a lattice extension $\bar{L}$ of $L$, and an isotone map $\bar{\gf}\colon \bar{L}\to M$ extending $\gf$, such that $e\in\bar{\gf}(\bar{L})$. Moreover, $\bar{L}$ can be taken to lie in any nontrivial lattice variety $\mathbf{V}$ containing $L$. \end{lemma} \begin{proof} Let $\bar{L}=L\times \SC 2 \times \SC 2$, where $\SC 2$ is the $2$-element lattice $\{0,1\}$, and embed $L$ in $\bar{L}$ by $x\mapsto(x,0,1)$. 
Define $\bar{\gf}\colon \bar{L}\to M$ by\vspace{.3em} \begin{equation}\begin{minipage}[c]{25pc}\label{d.gf} $\begin{cases} \begin{array}{rrc} \bar{\gf}(x,0,1) &=& \gf(x),\\[.3em] \bar{\gf}(x,1,0) &=& e,\\[.3em] \bar{\gf}(x,0,0) &=& \gf(x)\wedge e,\\[.3em] \bar{\gf}(x,1,1) &=& \gf(x)\vee e. \end{array} \end{cases}$\vspace{.3em} \end{minipage}\end{equation} It is easy to check that $\bar{\gf}$ is isotone, and it clearly has $e$ in its range. Since~$\SC 2=\{0,1\}$ belongs to every nontrivial variety of lattices, $\bar{L}$ will belong to any nontrivial variety~$\mathbf{V}$ containing~$L$. \end{proof} The next lemma gives the construction we will use to weld our $I$-tuple of isotone maps into one map. \begin{lemma}\label{L.M'toM} Let $M$ be a lattice, $e$ any element of $M$, and $I$ a nonempty set. Let $M'$ be the sublattice of $M^I$ consisting of those elements $f$ such that $f(i)=e$ for all but finitely many $i\in I$. Then there exists a map $\gy\colon M'\to M$ such that \begin{equation}\begin{minipage}[c]{25pc}\label{d.isotone} $\gy$ is isotone \end{minipage}\end{equation} and \begin{equation}\begin{minipage}[c]{25pc}\label{d.all_but_one} For every $i\in I$, and every $f\in M'$ satisfying $f(j)=e$ for all $j\neq i$, we have $\gy(f) = f(i)$. \end{minipage}\end{equation} \end{lemma} \begin{proof} For $f\in M'$, define \begin{equation}\begin{minipage}[c]{25pc}\label{d.gy} $\gy(f) = \begin{cases} \MMm{f(i)}{i\in I} & \text{if $f(i)\leq e$ for all $i\in I$,}\\[.3em] \JJm{f(i)}{i\in I,\ f(i) \nle e} & \text{otherwise.} \end{cases}$ \end{minipage}\end{equation} These meets and joins are defined because for each $f \in M'$, there are only finitely many distinct values $f(i)$. It is easy to see that $\gy$ satisfies~(\ref{d.all_but_one}). To obtain~(\ref{d.isotone}), observe that \begin{equation}\begin{minipage}[c]{25pc}\label{d.subset} For $f\leq g$ in $M'$, we have $\{i\mid f(i)\not\leq e\}\ \ci\ \{i\mid g(i)\not\leq e\}$. 
\end{minipage}\end{equation} Hence given $f\leq g$, there are three possibilities: Either the definitions of $\gy(f)$ and $\gy(g)$ both fall under the first case of~(\ref{d.gy}), or they both fall under the second, or that of $\gy(f)$ falls under the first and that of $\gy(g)$ under the second. If both fall under the first case, then $\gy(f)\leq\gy(g)$ because the meet operation of $M$ is isotone. If both fall under the second, the same conclusion follows using the fact that the join operation is isotone, together with the fact that bringing in more joinands, as can happen in view of~(\ref{d.subset}), yields a join greater than or equal to what we would get without those additional terms. Finally, if the evaluation of $\gy(f)$ falls under the first case and that of $\gy(g)$ under the second, we may choose, in view of the latter fact, an $i$ such that $g(i)\not\leq e$. Then \begin{equation}\begin{minipage}[c]{25pc}\label{d.leqleq} $\gy(f)\ \leq\ f(i)\ \leq\ g(i)\ \leq\ \gy(g)$, \end{minipage}\end{equation} completing the proof of~(\ref{d.isotone}). \end{proof} We can now fill in the proof of our main theorem. \begin{theorem}\label{T.main} Let $\mathbf{V}$ be a nontrivial variety of lattices, $\E L = (L_i \mid i\in I)$ a family of lattices in $\mathbf{V}$, and $(\gf_i\colon L_i\to M\mid i\in I)$ a family of isotone maps from the $L_i$ to a lattice $M$ not necessarily in $\mathbf{V}$. Then there exists an isotone map $\gf\colon \Free_{\mathbf{V}}\E L\to M$ whose restriction to each $L_i\ci \Free_{\mathbf{V}}\E L$ is $\gf_i$. \end{theorem} \begin{proof} Choose any element $e\in M$, and extend each $\gf_i$ as in Lemma~\ref{L.L&e} to a map $\bar{\gf}_i\colon \bar{L}_i\to M$ on a lattice extension $\bar{L}_i\ce L_i$ in $\mathbf{V}$, so that some $e_i\in\bar{L}_i$ is mapped by $\bar{\gf}_i$ to $e\in M$. 
Now map each $L_i$ into $\prod(\bar{L}_i\mid i \in I)$ by sending every element $x\in L_i$ to the element having $i$-th coordinate $x$, and $j$-th coordinate $e_j$ for all $j\neq i$. These maps are lattice homomorphisms, hence together they induce a homomorphism $\Free_{\mathbf{V}}\E L\to\prod(\bar{L}_i \mid i\in I)$. Moreover, this map has range in the sublattice $L'$ of elements whose $j$-coordinates are $e_j$ for almost all $j$, since the image of each $L_i$ lies in that sublattice. Mapping $\prod\bar{L}_i$ to $M^I$ by the isotone map $\prod\bar{\gf}_i$, we see that the above sublattice $L'\ci\prod\bar{L}_i$ is carried into the sublattice $M'\ci M^I$ of Lemma~\ref{L.M'toM}. Bringing in the isotone map $\gy\colon M'\to M$ of that lemma, we get our desired isotone map $\gf$ as the composite $\Free_{\mathbf{V}}\E L\to L'\to M'\to M$. It follows from~(\ref{d.all_but_one}) that the restriction of $\gf$ to each $L_i$ is $\gf_i$. \end{proof} We note a curious consequence of the fact that the $M$ of Theorem~\ref{T.main} need not lie in $\mathbf{V}$. \begin{corollary}\label{C.VtoL} Let $\E L = (L_i \mid i\in I)$ be a family of lattices in a nontrivial lattice variety~$\mathbf{V}$, let $\Free_{\mathbf{V}}\E L$ be their free product in $\mathbf{V}$, and let $\Free\E L$ be their free product in the variety $\mathbf{L}$ of all lattices. Then there exists an isotone map $\Free_{\mathbf{V}}\E L\to\Free\E L$ which acts as the identity on each $L_i$. In particular, for any nontrivial lattice variety $\mathbf{V}$ and any set $X$, there exists an isotone map $\Free_{\mathbf{V}}(X)\to\Free(X)$ \textup{(}where these denote the free lattices on the set $X$ in $\mathbf{V}$ and in $\mathbf{L}$ respectively\textup{)}, which acts as the identity map on $X$. \end{corollary} \begin{proof} For the first statement, apply Theorem~\ref{T.main} to the inclusions of the $L_i$ in~$\Free\E L$. The second is the case of the first where all $L_i$ are one-element lattices.
\end{proof} \section{Digression: sketches of some alternate proofs of Theorem~\ref{T.main}}\label{S.alt_pfs} The definition~(\ref{d.gy}) of the isotone map $\gy$ used in the proof of Lemma~\ref{L.M'toM} is clearly asymmetric in the meet and join operations. We sketch below a variant proof of Theorem~\ref{T.main} which uses a function that is symmetric in these operations --- but lacks instead (when $|I|>2$) the symmetry in the family of lattices $L_i$ which the proof given above clearly has. We shall then show that one cannot have it both ways: a map of the required sort having both sorts of symmetry does not, in general, exist. However, we show that we \emph{can} get such a map if $M$ lies in the given variety~$\mathbf{V}$. This section will be sketchier than the rest of the paper. In particular, we will be informal about our two sorts of symmetry; though in the next-to-last paragraph we will indicate how to make these considerations precise. Our new proof of Theorem~\ref{T.main} starts with a generalization of the construction of Lemma~\ref{L.L&e}. Namely, suppose we are given isotone maps of \emph{two} lattices into a common lattice, $\gf_i\colon L_i\to M$ for $i=0,1$. Let \begin{equation}\begin{minipage}[c]{25pc}\label{d.L0L1C2C2} $L'\ =\ L_0\times L_1\times\SC 2\times\SC 2$. \end{minipage}\end{equation} Then taking any $e_0\in L_0$, $e_1\in L_1$, we can embed our two lattices in $L'$ by the homomorphisms \begin{equation}\begin{minipage}[c]{25pc}\label{d.embed} $L_0\to L'$\quad acting by\quad $x \mapsto(x,e_1,1,0)$,\\[.3em] $L_1\to L'$\quad acting by\quad $y \mapsto(e_0,y,0,1)$. \end{minipage}\end{equation} Now define the isotone map $\gf'\colon L'\to M$ by \begin{equation}\begin{minipage}[c]{25pc}\label{d.gf'} $\begin{cases} \begin{array}{rlc} \gf'(x,y,1,0) &=& \gf_0(x),\\ \gf'(x,y,0,1) &=& \gf_1(y),\\ \gf'(x,y,0,0) &=& \gf_0(x) \mm \gf_1(y),\\ \gf'(x,y,1,1) &=& \gf_0(x) \jj \gf_1(y). 
\end{array} \end{cases}$ \end{minipage}\end{equation} Clearly, $\gf'$ acts on the embedded images of the $L_i$ by the $\gf_i$; and as before, since $\SC 2$ belongs to every nontrivial variety of lattices, $L'$ belongs to any nontrivial variety~$\mathbf{V}$ containing the~$L_i$. This, in fact, gives us Theorem~\ref{T.main} for $|I|=2$ by a construction symmetric both in meet and join, and in our family of lattices. Now suppose more generally that we have lattices $L_i\in\mathbf{V}$ and isotone maps $\gf_i\colon L_i\to M$ indexed by an arbitrary set $I$. Assuming without loss of generality that $I$ is an ordinal, we shall construct the desired $L'\in\mathbf{V}$ and isotone map $\gf'\colon L'\to M$ by a recursive transfinite iteration of the above construction. It is the recursion that will lose us our symmetry in the $L_i$, via the arbitrary choice of an identification of $I$ with an ordinal, i.e., of a well-ordering on~$I$. To describe the recursion, let $1<k\leq I$, and assume that we have constructed lattices $L'_{(j)}$ for all $1\leq j<k$, which satisfy \begin{equation}\begin{minipage}[c]{25pc}\label{d.L2...Lj} $L_0\ =\ L'_{(1)}\ \ci\ L'_{(2)}\ \ci\ \dots \ \ci\ L'_{(j)}\ \ci\ \dots$, \end{minipage}\end{equation} together with lattice embeddings $L_i\to L'_{(j)}$ for $i<j$, and isotone maps $L'_{(j)}\to M$, and that these form a coherent system, in the sense that for $i<j<j'$, the composite $L_i\to L'_{(j)}\ci L'_{(j')}$ is the embedding $L_i\to L'_{(j')}$, and the composite $L'_{(j)}\ci L'_{(j')}\to M$ is the isotone map $L'_{(j)}\to M$; and, finally, such that for every $i<j$, the composite $L_i\to L'_{(j)}\to M$ is the given isotone map $\gf_i$. 
If $k$ is a successor ordinal, $k=j+1$, we apply the $|I|=2$ case of our construction, described in~(\ref{d.L0L1C2C2})-(\ref{d.gf'}), to the pair of lattices $L'_{(j)}$ and $L_{j}$ and their isotone maps to~$M$, calling the resulting lattice \begin{equation}\begin{minipage}[c]{25pc}\label{d.L'k=} $L'_{(j+1)}\ =\ L'_{(j)}\times L_{j}\times \SC 2\times\SC 2$, \end{minipage}\end{equation} and identifying $L'_{(j)}$ with its image therein under the first map of~(\ref{d.embed}). If, on the other hand, $k$ is a limit ordinal, we let $L'_{(k)}$ be the union of the $L'_{(j)}$ over all $j<k$. In each case, the asserted properties are immediate. Thus, we can carry our construction up to $k=I$, the resulting lattice $L'_{(I)}$ being our desired $L'$. What are the consequences of the different kinds of symmetry of the construction of the preceding section and the one just sketched? Because the former was symmetric in the $L_i$, we can deduce, for instance, that in the final statement of Corollary~\ref{C.VtoL}, if $X$ is finite, then the isotone map $\Free_{\mathbf{V}}(X)\to\Free(X)$ can be taken to respect the actions of the symmetric group $\operatorname{Sym}(X)$ on these two lattices. (Why assume $X$ to be finite? So that in applying Lemma~\ref{L.L&e}, we can choose an $e\in M=\Free(X)$ invariant under that group action, say the join of the given generators. Alternatively, without this finiteness assumption, if we choose any $x_0\in X$ and perform our construction with $e=x_0$, we can get our map to respect the action of $\operatorname{Sym}(X-\{x_0\})$.) On the other hand, using our new construction we can deduce that if $\mathbf{V}$ is closed under taking dual lattices, we can, instead, in that same final statement of Corollary~\ref{C.VtoL}, take the isotone map $\Free_{\mathbf{V}}(X)\to\Free(X)$ to respect the anti-automorphisms of the domain and codomain that fix the free generators but interchange meet and join. 
(Again, we have to decide what to use for our distinguished elements $e_0$, $e_1$ at each application of~(\ref{d.embed}). In this case, we may, at each such step, take $e_0$ to be any of the preceding generators, while for $e_1$ we have no choice but to use the generator we are adjoining.) Let us now show that for $|I|=3$, we cannot get a construction with both sorts of symmetry. If we could, then letting $\mathbf{D}$ denote the variety of distributive lattices, we could get an isotone map $\gf\colon\Free_{\mathbf{D}}(3)\to\Free(3)$ respecting all permutations of the generators, and also the anti-automorphisms that interchange meets and joins. Now $\Free_{\mathbf{D}}(3)$ has an element invariant under all these symmetries; namely, writing its three generators $a$, $b$, $c$, the element \begin{equation}\begin{minipage}[c]{25pc}\label{d.sym} $(a\mm b)\jj(b\mm c)\jj(c\mm a)\ =\ (a\jj b)\mm(b\jj c)\mm(c\jj a)$. \end{minipage}\end{equation} On the other hand, for any set $X$, the only elements of $\Free(X)$ that can be invariant under an anti-automorphism are the given free generators; this follows from the fact that every element of $\Free(X)$ other than those generators is either meet-reducible or join-reducible, but never both. (Cf.\ \cite[condition (W) on p.477, and Corollary~534(iii)]{GLT3}.) So $\Free(3)$ has no element with both sorts of symmetry to which one could send the element given by~(\ref{d.sym}). Finally, let us show that we \emph{can} get both sorts of symmetry if the lattice $M$ lies in $\mathbf{V}$. (Of course, this restriction makes it impossible to use the result to prove a version of Corollary~\ref{C.VtoL}.) We record in the next lemma the raw construction used in the proof. Though that lemma requires $M$ to lie in $\mathbf{V}$, it does not require the same of the $L_i$. 
But it is easy to see that if we add the assumption that the $L_i$ lie in $\mathbf{V}$, the lattice $L'$ obtained will lie there as well, hence the construction will induce, as in Theorem~\ref{T.main}, an isotone map $\gf\colon\Free_{\mathbf{V}}\E L\to M$ acting as $\gf_i$ on each $L_i$. Moreover, the construction clearly has all the asserted symmetries. (We remark that the factor $\Free_{\mathbf{V}}(I)$ in the construction reduces, when $|I|=2$, to the lattice $\SC 2\times\SC 2$ of~(\ref{d.L0L1C2C2}). So one could say it was the fact that all nontrivial lattice varieties have the same $2$-generator free lattice that allowed us to get the doubly symmetric construction in that two-lattice case with no added restriction on $M$.) \begin{lemma}\label{L.prodtimesfree} Let $\E L=(L_i \mid i\in I)$ be a family of lattices, let $M$ be a lattice, and for each $i\in I$, let $\gf_i\colon L_i\to M$ be an isotone map. Let $\mathbf{V}$ be a lattice variety containing~$M$, and in the free lattice $\Free_{\mathbf{V}}(I)$, let the $i$-th generator be denoted $g_i$ for each $i$. Let \begin{equation}\begin{minipage}[c]{25pc}\label{d.L'=prod} $L'\ =\ \prod(L_i\mid i\in I)\,\times\,\Free_{\mathbf{V}}(I)$. \end{minipage}\end{equation} Suppose we choose, for each $i\in I$, a lattice homomorphism $\gx_i\colon L_i\to L'$ which takes every $x\in L_i$ to an element whose $i$-th coordinate is $x$ and whose coordinate in $\Free_{\mathbf{V}}(I)$ is the generator $g_i$. 
\textup{(}For instance, such a family of homomorphisms $\gx_i$ can be determined by fixing an element $e_j$ in each $L_j$, and letting $\gx_i(x)$ have, in addition to the two coordinates just specified, $j$-th coordinate $e_j$ for all $j\in I-\{i\}$.\textup{)} Finally, let $\gf'\colon L'\to M$ be the function taking each pair $(x,w)$, where \begin{equation}\begin{minipage}[c]{25pc}\label{d.x=} $x\ =\ (x_i\mid i\in I)\in\prod(L_i\mid i\in I)$\quad and\quad $w\in\Free_{\mathbf{V}}(I)$, \end{minipage}\end{equation} to $\bar{w}(\gf_i(x_i)\mid i\in I)$, where $\bar{w}\colon M^I\to M$ is the operation of evaluating the lattice expression $w\in\Free_{\mathbf{V}}(I)$ at $I$-tuples of elements of $M$. Then $\gf'$ is an isotone map, and for each $i\in I$, $\gf'\gx_i=\gf_i$. \end{lemma} \begin{proof}[Sketch of proof] Clear, except, perhaps, for the conclusion that $\gf'$ is isotone. To get this, suppose $(x,w)\leq(x',w')$ in $L'$. Let us pass from $(x,w)$ to $(x',w')$ in finitely many steps. At the first step, replace the coordinates $x_i$ of $x$ by $x'_i$ for all $i$ \emph{other} than the finitely many values corresponding to the variables occurring in the term $w$. This does not affect the image of our element under $\gf'$. Next, one by one, replace the finitely many remaining $x_i$ with the values $x'_i\geq x_i$. The result is clearly greater than or equal to what we had. Finally, replace $w$ by $w'\geq w$. Again, the new value is greater than or equal to the old. \end{proof} We remark that the isotone map $\gf'$ of the above lemma is not, in general, a~lattice homomorphism. (For instance, let $I=\{0,1\}$, let $L_0=L_1=M=\SC 2$, let the $\gf_i\colon L_i\to M$ be the identity map, and let $\mathbf{V}$ be any nontrivial lattice variety. Denoting the generators of $\Free_\mathbf{V}(I)$ by $g_0$ and $g_1$, we note that \begin{equation}\begin{minipage}[c]{25pc}\label{d.neq} $((0,1),g_0\mm g_1)\jj((1,0),g_0\mm g_1)\ =\ ((1,1),g_0\mm g_1)$ \quad in $L'$.
\end{minipage}\end{equation} However, $\gf'$ maps each of the joinands on the left to $0$, but the term on the right to~$1$.) We have discussed symmetries of our construction informally above, leaving it to the reader to see that a construction with a certain sort of symmetry would imply corresponding properties of the maps constructed. These considerations can, of course, be formalized. If we describe our constructions as functors on appropriate categories of systems of lattices, isotone maps, and distinguished elements, then constructions with various sorts of symmetry allow us to strengthen the conclusion of Theorem~\ref{T.main} to say that we have functors respecting certain additional structure on those categories. We shall not go into these details here, however. We turn now to some variants of our main result. \section{When does the same result hold for the inclusion\\ of a general partial lattice $P$ in its free lattice $L$?}\label{S.complete} If the lattice $M$ of Theorem~\ref{T.main} happens to be a \emph{complete} lattice, the conclusion of that theorem follows from a much more general fact: Any isotone map from an order $P$ into a complete lattice can be extended to an isotone map on any order extension $Q$ of $P$. In other words, in the category of orders, complete lattices are \emph{injective} with respect to inclusions of orders. The inclusions of orders are not, up to isomorphism, the only monomorphisms in the category of orders and isotone maps. B.\ Banaschewski and G.\ Bruns \cite{B+B} characterize the inclusions category-theoretically among the monomorphisms, calling them the \emph{strict} monomorphisms, and they formulate the above result as the statement that every complete lattice (in their terminology, every complete partially ordered set) is a ``strict injective''; to which they also prove the converse \cite[Proposition~1, (i)$\!\iff\!$(ii)]{B+B}.
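The injectivity fact just quoted has a well-known explicit witness: given an isotone map $\gf$ from an order $P$ into a complete lattice $M$, one may extend it to an order extension $Q\ce P$ by setting $\bar{\gf}(q)=\JJm{\gf(p)}{p\in P,\ p\leq q}$, the empty join being the least element of $M$. The following Python fragment is our own informal sketch of this construction on a toy example; it is not part of the paper, and all names in it are ours:

```python
# Informal sanity check (ours, not from the paper): extending an isotone map
# into a complete lattice along an order extension, via
#   phi_bar(q) = join of { phi(p) : p in P, p <= q }.

def extend_isotone(P, leq, phi, bottom, join):
    """Return the extension phi_bar; 'bottom' is the least element of M,
    'join' its binary join (the data here are finite, so binary joins suffice)."""
    def phi_bar(q):
        out = bottom                      # empty join = least element of M
        for p in P:
            if leq(p, q):
                out = join(out, phi[p])
        return out
    return phi_bar

# Toy example: P = {a, b} (incomparable), Q = P plus a new top element t;
# M = the complete lattice of subsets of {1, 2} under inclusion.
order = {('a', 'a'), ('b', 'b'), ('t', 't'), ('a', 't'), ('b', 't')}
leq = lambda x, y: (x, y) in order
phi = {'a': frozenset({1}), 'b': frozenset({2})}

phi_bar = extend_isotone(['a', 'b'], leq, phi, frozenset(), frozenset.union)

assert phi_bar('a') == phi['a'] and phi_bar('b') == phi['b']   # extends phi
assert phi_bar('t') == frozenset({1, 2})                       # value at the new top
assert all(phi_bar(x) <= phi_bar(y) for (x, y) in order)       # isotone on Q
```

Since $\{p\mid p\leq q_1\}\ci\{p\mid p\leq q_2\}$ whenever $q_1\leq q_2$, the extension is automatically isotone; that monotonicity, and agreement with $\gf$ on $P$, are all the assertions check.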
Theorem~\ref{T.main} can thus be looked at as saying that if $P$ is the disjoint union of a family of lattices $L_i$ belonging to a variety $\mathbf{V}$, regarded as a partial lattice, then the inclusion of $P$ in its free lattice $L=\Free_{\mathbf{V}} P$ behaves a little better than a general inclusion of orders, in that isotone maps of $P$ to arbitrary lattices, and not only to complete lattices, can be extended to $L$. In contrast, we shall see below that the inclusion of a general partial lattice $P$ in its free lattice $\Free P$ behaves no better in this way than do arbitrary extensions of orders, at least insofar as isotone maps to lattices are concerned. (For the concepts of a partial lattice and of the free lattice on such an object, see \cite[Section~I.5]{GLT2}, \cite[Sections~I.5.4--I.5.5]{GLT3}.) We begin with the building blocks from which the ``test cases'' showing this will be put together. \begin{lemma}\label{L.Bool} Let $B$ be a Boolean lattice with more than two elements. Then as an extension of $B-\{0,1\}$, the lattice $\Free(B-\{0,1\})$ is isomorphic to $B$. \end{lemma} \begin{proof} Let $P=B-\{0,1\}$. The only joins that $P$ is missing are those that in $B$ yield $1$; likewise, the only missing meets are those that yield $0$. We shall show that all pairs of elements which had join $1$ in $B$ give equal joins in any lattice $L$ into which we map $P$ by a homomorphism of partial lattices. By symmetry, the dual statement holds for $0$ and meets. Hence the free lattice on $P$ just restores these two elements, i.e., it is naturally isomorphic to~$B$. So suppose we are given a map of $P$ into a lattice $L$, which preserves the meets and joins of $P$. By abuse of notation, we shall use the same symbols for elements of~$P$ and their images in $L$.
Let us first consider any two elements $a,b\in P$ which are distinct in $P$ from each other and from each other's complements, and compare the joins $a\jj a^\mathrm{c}$ and $b\jj b^\mathrm{c}$ in $L$ (writing $(\ )^\mathrm{c}$ for complementation in $B$). Note that in $B$, we have \begin{equation}\begin{minipage}[c]{25pc}\label{d.a=vee} $a\ =\ (a\mm b)\jj(a\mm b^\mathrm{c})$\quad and\quad $a^\mathrm{c}\ =\ (a^\mathrm{c}\mm b)\jj(a^\mathrm{c}\mm b^\mathrm{c})$. \end{minipage}\end{equation} If the four meets appearing in these two expressions are all nonzero, then they belong to $P$, and the relations~(\ref{d.a=vee}) hold there, and hence in $L$. In this situation, if we expand $a\jj a^\mathrm{c}$ in $L$ using these two formulas, we can rearrange the result as $((a\mm b)\jj(a^\mathrm{c}\mm b))\jj ((a\mm b^\mathrm{c})\jj(a^\mathrm{c}\mm b^\mathrm{c}))$, which (by~(\ref{d.a=vee}) with $a$ and $b$ interchanged) simplifies to $b\jj b^\mathrm{c}$, giving the desired equality. On the other hand, if any of the four pairwise meets of $a$ and~$a^\mathrm{c}$ with $b$ and~$b^\mathrm{c}$ is zero, this can, under the assumptions made in the preceding paragraph, be true only of one such meet; say $a\mm b=0$. Then we can repeat the above computation, everywhere omitting ``$(a\mm b)\jj$''. (Thus, we have a version of~(\ref{d.a=vee}) with the first equation simplified to $a=a\mm b^\mathrm{c}$, and the second unchanged.) With this slight modification, our computation still works, and again gives $a\jj a^\mathrm{c}=b\jj b^\mathrm{c}$. So let us write $i$ for the common value, for all $a\in P$, of $a\jj a^\mathrm{c}\in L$. We now consider two elements $a,b\in P$ which are not assumed to be complements, but whose join in $B$ is $1$. This relation implies that $b\geq a^\mathrm{c}$; note that both these terms lie in $P$. Hence in $L$ we have $a\jj b\ \geq\ a\jj a^\mathrm{c}\ =\ i$, while the reverse inequality holds because $a\leq i,\ b\leq i$. 
Thus, $a\jj b=i$, completing the proof that all pairs of elements having join $1$ in $B$ have the same join, namely $i$, in~$L$. \end{proof} Let us now consider, independent of the above result, the same inclusion $B-\nolinebreak\{0,1\}\ci B$ in the context of isotone maps. \begin{lemma}\label{L.isot_on_freeB} Let $M$ be a lattice, $X$ a subset of $M$, and $B$ the free Boolean lattice on~$X$. Then there exists an isotone map $\gf\colon B-\{0,1\}\to M$ with the property that \begin{equation}\begin{minipage}[c]{25pc}\label{d.bounds} The pairs of elements $y,z\in M$ such that $\gf$ can be extended to an isotone map $\bar{\gf}\colon B\to M$ taking $0$ to $y$ and $1$ to $z$, are precisely those for which $y$ is a lower bound, and $z$ an upper bound, for $X$ in $M$. \end{minipage}\end{equation} \end{lemma} \begin{proof} Let $B$ have free generators $g_x$ for $x\in X$. For every $a\in B-\{0,1\}$, let $\gf(a)$ be the join in $M$ of all elements of the form \begin{equation}\begin{minipage}[c]{25pc}\label{d.x1,xn} $x_1\mm\cdots\mm x_n$ \end{minipage}\end{equation} where $n\geq 1$, and $x_1,\dots,x_n$ are distinct elements of $X$ such that for some choice of $\ge_1,\dots,\ge_n\in\{1,\mathrm{c}\}$, we have \begin{equation}\begin{minipage}[c]{25pc}\label{d.ageq} $a\ \geq\ g_{x_1}^{\ge_1}\mm\cdots\mm g_{x_n}^{\ge_n}$. \end{minipage}\end{equation} Here for $b\in B$, $b^\ge$ denotes $b$ if $\ge=1$, the complement of $b$ if $\ge=\mathrm{c}$. Because $a\neq 0$, the set of instances of~(\ref{d.ageq}) is nonempty, hence so is the set of joinands~(\ref{d.x1,xn}). These sets are in general infinite; however, if we take the least subset $X_0\ci X$ such that $a$ is in the Boolean sublattice generated by the $g_x$ with $x\in X_0$, then $X_0$ is finite and (since $a\neq 0,1$) nonempty; and we find that the irredundant relations~(\ref{d.ageq}) (those relations~(\ref{d.ageq}) from which no meetand can be dropped) involve only terms $g_x^\ge$ with $x\in X_0$. 
Thus, each expression~(\ref{d.x1,xn}) in our description of~$\gf(a)$ is majorized by one that arises from one of these finitely many irredundant relations~(\ref{d.ageq}); so the join describing $\gf(a)$ is effectively a finite join, and so exists in~$M$. It is not hard to see from our definition that $\gf$ is isotone, and that for all $x\in X$, $\gf(g_x)=\gf(g_x^\mathrm{c})=x$. Suppose now that we have an extension $\bar{\gf}\colon B\to M$ of this isotone map $\gf$. Then for every $x\in X$, $\bar{\gf}(1)\geq\gf(g_x)=x$, so $\bar{\gf}(1)$ is an upper bound of $X$. Conversely, any upper bound for $X$ in $M$ will majorize all elements~(\ref{d.x1,xn}), and hence all joins of such elements, hence will indeed be an acceptable choice for a value of $\bar{\gf}(1)$ making $\bar{\gf}$ isotone. Though our construction of $\gf$ is not symmetric in $\jj$ and $\mm$, the duals of these observations are easily seen to hold, so the choices for $\bar{\gf}(0)$ are, likewise, the lower bounds of~$X$. \end{proof} Note that (\ref{d.bounds}) above can be summarized as saying that \begin{equation}\begin{minipage}[c]{25pc}\label{d.bounds_alt} The upper and lower bounds in $M$ of $\gf(B-\{0,1\})$ are the same as the upper and lower bounds in $M$ of $X$. \end{minipage}\end{equation} In our next result, for any two partial lattices $P$ and $Q$, we will denote by $P+Q$ the disjoint union of $P$ and $Q$, made a partial lattice using the partial meet and join operations of $P$ and $Q$, together with the further meet and join relations corresponding to the condition that every element of $P$ be majorized by every element of $Q$ (namely, $p\mm q=p$ and $p\jj q=q$ for all $p\in P,\ q\in Q)$. It is not hard to see that \begin{equation}\begin{minipage}[c]{25pc}\label{d.FreeP+Q} $\Free(P+Q)\ \cong\ \Free P+\Free Q$. \end{minipage}\end{equation} \begin{theorem}\label{T.complete} Let $M$ be a lattice. Then the following conditions are equivalent. 
\vspace{.3em} \begin{equation}\begin{minipage}[c]{25pc}\label{d.complete} $M$ is complete. \end{minipage}\end{equation} \begin{equation}\begin{minipage}[c]{25pc}\label{d.free_extend} Any isotone map from a partial lattice $P$ to $M$ can be extended to an isotone map $\Free P\to M$. \end{minipage}\end{equation} \begin{equation}\begin{minipage}[c]{25pc}\label{d.Boole_extend} For $B$ a free Boolean lattice on a nonempty set, any isotone map $B-\{0,1\}\to M$ can be extended to an isotone map $B\to M$; and for $B_1$, $B_2$ any two free Boolean lattices on nonempty sets, any isotone map $(B_1-\{0,1\})+(B_2-\{0,1\})\to M$ can be extended to an isotone map $B_1+B_2\to M$. \end{minipage}\end{equation} \end{theorem} \begin{proof} (\ref{d.complete})$\!\implies\!$(\ref{d.free_extend}) is a case of \cite[Proposition~1, (i)$\!\implies\!$(ii)]{B+B}, which says that every complete lattice is injective with respect to inclusions of orders. In view of Lemma~\ref{L.Bool} and~(\ref{d.FreeP+Q}), the implication (\ref{d.free_extend})$\!\implies\!$(\ref{d.Boole_extend}) is clear. To complete the argument, assume~(\ref{d.Boole_extend}). Calling on the first statement of~(\ref{d.Boole_extend}), together with the case $X=M$ of the preceding lemma, we see that $M$ must have a greatest and a least element. Now take any nonempty subset $X_1\ci M$, let $X_2$ be the set of its upper bounds (which is nonempty, since $M$ has a greatest element), let $B_1$ be the free Boolean lattice on $X_1$, and let $B_2$ be the free Boolean lattice on $X_2$. Map $B_1-\{0,1\}$ and~$B_2-\{0,1\}$ into $M$ by maps $\gf_1$, $\gf_2$ satisfying~(\ref{d.bounds}) with respect to $X_1$ and $X_2$, respectively.
By the equivalence of~(\ref{d.bounds}) and~(\ref{d.bounds_alt}), $\gf_1(B_1-\{0,1\})$ is majorized by all upper bounds of $X_1$, i.e., by all elements of $X_2$, hence (again using that equivalence) by all elements of $\gf_2(B_2-\{0,1\})$; so $\gf_1$ and $\gf_2$ together constitute an isotone map $\gf\colon (B_1-\{0,1\})+(B_2-\{0,1\})\to M$. Extending this to the free lattice $B_1+B_2$ on that partial lattice, we see that the image of the $1$ of $B_1$ (and likewise that of the $0$ of $B_2)$ will be both an upper bound of $X_1$ and a lower bound of $X_2$, hence must be a least upper bound of $X_1$. So $M$ is upper semicomplete. By symmetry (or by the known fact that in a lattice with $0$ and $1$, upper semicompleteness and lower semicompleteness are equivalent), $M$ is also lower semicomplete, establishing~(\ref{d.complete}). \end{proof} \section{Lattices amalgamated over convex retracts}\label{S.retracts} The results of the preceding section show that Theorem~\ref{T.main}, looked at as a property of the inclusion of a certain kind of partial lattice $P$ in $\Free_{\mathbf{V}} P$, does not go over to the inclusion of a general partial lattice $P$ in its free lattice. Can we describe other interesting partial lattices $P$ for which it does? In proving Theorem~\ref{T.main}, after reducing to the case where the given lattices contained elements $e_i$ that mapped to the same element of $M$, we effectively proved that the free lattice on the union of those lattices with amalgamation of the $e_i$ had the desired extension property. The next theorem will slightly generalize this result, replacing the singletons $\{e_i\}$ with any family of isomorphic sublattices that are both retracts of the $L_i$, and convex therein. We will need the following observation. \begin{lemma}\label{L.cvx_retr} Let $M$ be a lattice, and $\gr$ a lattice-theoretic retraction of $M$ to a convex sublattice. 
Then if an element $x\in M$ is majorized by some element of $\gr(M)$, it is majorized by $\gr(x)$. \end{lemma} \begin{proof} Say $x\leq r\in\gr(M)$. Applying $\gr$ to this relation, and taking the join of the original relation with the resulting one, we get $x\jj\gr(x)\leq r$. Hence $x\jj\gr(x)$ lies in the interval between $\gr(x)$ and $r$, so as $\gr(M)$ is assumed convex, $x\jj\gr(x)\in\gr(M)$. This means that $x\jj\gr(x)$ is fixed under the idempotent lattice homomorphism $\gr$; but its image under that map is $\gr(x)\jj\gr(x)=\gr(x)$. Thus, $x\jj\gr(x)=\gr(x)$, which is equivalent to the desired conclusion $x\leq\gr(x)$. \end{proof} In the above lemma, the assumption that $\gr$ is a lattice homomorphism could have been weakened to say that it is a join-semilattice homomorphism. We have stated it as above for conceptual simplicity, and because in the proof of the next result, the maps $\gr_i$ must be lattice homomorphisms anyway. \begin{theorem}\label{T.retract} Let $(L_i \mid i\in I)$ be a family of lattices which are disjoint except for a common sublattice $K$, which is convex in each $L_i$, and is a retract of each $L_i$ via a lattice-theoretic retraction $\gr_i\colon L_i\to K$. Let $P$ denote the partial lattice given by the union of the $L_i$ with amalgamation of the common sublattice $K$, and let $L=\Free_\mathbf{V} P$, where $\mathbf{V}$ is any variety containing all the $L_i$. Then for any lattice $M$ \textup{(}not necessarily belonging to $\mathbf{V}$\textup{)} given with isotone maps $\gf_i\colon L_i\to M$ agreeing on $K$, there exists an isotone map $\gf\colon L\to M$ extending all the $\gf_i$. In other words, every isotone map $P\to M$ extends to $L$. \end{theorem} \begin{proof} Let us assume that $I$ does not contain the symbol $0$, and use $0$ to index the factor $K$ in $K\times\prod(L_i\mid i\in I)$.
Now let $L'$ denote the sublattice of that direct product consisting of those elements $f$ such that $f(i)=f(0)$ for almost all $i$, and $\gr_i(f(i))=f(0)$ for all $i$. Then we can map each $L_i$ into $L'$ by sending $x\in L_i$ to the element having $i$-th coordinate $x$, and having $\gr_i(x)$ for all other coordinates (including the $0$-th coordinate). These maps are lattice homomorphisms (this is where we need the $\gr_i$ to be lattice homomorphisms and not just join-semilattice homomorphisms), which agree on $K$; hence they extend to a lattice homomorphism $L\to L'$. We shall now map $L'$ isotonely to $M$ using the idea of Lemma~\ref{L.M'toM}. Namely, given $f\in L'$, we define \begin{equation}\begin{minipage}[c]{25pc}\label{d.gy_again} $\gy(f)\ =\ \begin{cases} \MMm{\gf_i(f(i))}{i\in I} & \mbox{if for all $i\in I$, $f(i)\leq f(0)$,}\\[.3em] \JJm{\gf_i(f(i))}{i\in I,\,f(i)\not\leq f(0)} & \mbox{otherwise.} \end{cases}$ \end{minipage}\end{equation} These are defined because for each $f$, all but finitely many $i\in I$ have $\gf_i(f(i))$ equal to the image of $f(0)$ in $M$. (Recall that $f(0)\in K$, and all $\gf_i$ agree on $K.)$ We now claim that \begin{equation}\begin{minipage}[c]{25pc}\label{d.isotone_again} $\gy$ is isotone, \end{minipage}\end{equation} and \begin{equation}\begin{minipage}[c]{25pc}\label{d.all_but_one_again} for every $i\in I$, and every $f\in L'$ such that $f(j)=f(0)$ for all $j\neq i$, we have $\gy(f) = \gf_i(f(i))$. \end{minipage}\end{equation} Assertion~(\ref{d.all_but_one_again}) is clear. The proof of~(\ref{d.isotone_again}) is exactly like that of the corresponding statement,~(\ref{d.isotone}), in the proof of Lemma~\ref{L.M'toM}, once we know the analog of~(\ref{d.subset}), namely \begin{equation}\begin{minipage}[c]{25pc}\label{d.subset_again} for $f\leq g$ in $L'$, we have $\{i\mid f(i)\not\leq f(0)\}\ \ci\ \{i\mid g(i)\not\leq g(0)\}$. 
\end{minipage}\end{equation} To prove~(\ref{d.subset_again}), consider any $i$ not lying in the right-hand side. Then \begin{equation}\begin{minipage}[c]{25pc}\label{d.fi_<gi_<g0} $f(i)\ \leq\ g(i)\ \leq\ g(0)\in K=\gr_i(L_i)$, \end{minipage}\end{equation} so by Lemma~\ref{L.cvx_retr}, $f(i)\leq\gr_i(f(i))=f(0)$, showing that $i$ also fails to lie in the left-hand set. Composing $\gy$ with the map $L\to L'$ of the first paragraph of this proof, we get our desired isotone map $L\to M$. \end{proof} (If we think of the constant $e$ of Lemma~\ref{L.M'toM} as ``sea level'', then the $f(0)$ of the above proof brings in ``tides''.) We remark that though in the free-lattice-with-amalgamation $L$ of the above proof, $K$ is necessarily a retract, since it was a retract in each of the $L_i$, it does not follow similarly that $K$ is convex in $L$. To see this, let us first note an example of a lattice $L'$ having a sublattice $K$ which is convex and a retract in each of two sublattices $L_0$ and $L_1$ containing $K$, but is not convex in the sublattice that these generate. Let $L'$ be the lattice of all subspaces of a $3$-dimensional vector space $V$ over any field, let $K=\{\{0\},a\}$ where $a$ is a $2$-dimensional subspace of $V$, and let each $L_i$ be the sublattice generated by $a$ and a $1$-dimensional subspace $b_i$ not contained in $a$, with $b_0\neq b_1$. Then the stated hypotheses are satisfied, but $0<(b_0\jj b_1)\mm a<a$, so $K$ is not convex in the lattice generated by $L_0$ and $L_1$. It easily follows that in the free product $L$ of $L_0$ and $L_1$ with amalgamation of $K$, we likewise have $0<(b_0\jj b_1)\mm a<a$ with the middle term not in $K$. \section{Semilattice variants---two easy results}\label{S.semilat1} In our main theorem, free products of lattices $L_i$, whose normal role is to admit a lattice homomorphism extending a given family of lattice homomorphisms on the~$L_i$, were made to do the same for isotone maps (homomorphisms of orders).
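Returning briefly to the subspace-lattice example at the end of the preceding section: over the two-element field the computation can be verified directly. In the sketch below (an illustrative aside; the specific subspaces are our own choices), vectors of $\mathrm{GF}(2)^3$ are encoded as bitmasks, a subspace as its set of vectors, joins as spans, and meets as intersections:

```python
from functools import reduce
from itertools import combinations

# Vectors of GF(2)^3 encoded as bitmasks 0..7; vector addition is XOR.
def span(gens):
    gens = list(gens)
    return frozenset(
        reduce(lambda x, y: x ^ y, c, 0)
        for r in range(len(gens) + 1)
        for c in combinations(gens, r)
    )

def join(S, T):
    return span(S | T)   # join of subspaces = span of their union

def meet(S, T):
    return S & T         # meet of subspaces = intersection

zero = frozenset({0})
a = span([1, 2])               # a 2-dimensional subspace
b0, b1 = span([4]), span([5])  # distinct 1-dimensional subspaces not in a
assert b0 != b1 and not b0 <= a and not b1 <= a

mid = meet(join(b0, b1), a)
assert zero < mid < a   # strictly between: K = {zero, a} is not convex
```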
One might expect it to be easier to get similar results if the gap between lattices and orders is replaced by one of the smaller gaps between lattices and semilattices, or between semilattices and orders. For the latter case, the result is indeed easy; it is only for parallelism with our other results that we dignify it with the title of theorem. \begin{theorem}\label{T.semilat_to_order} Let $(L_i \mid i\in I)$ be a family of join-semilattices, and $\gf_i\colon L_i\to M$ a family of isotone maps from the $L_i$ to a common join-semilattice. Let $L$ denote the free product of the $L_i$ as join-semilattices. Then there exists an isotone map $\gf\colon L\to M$ whose restrictions to the $L_i\ci L$ are the $\gf_i$. \end{theorem} \begin{proof} The general element $x\in L$ is a formal join $x_{i_1}\jj\cdots\jj x_{i_n}$ of elements $x_{i_m}\in L_{i_m}$, where $i_1,\dots,i_n$ are a finite nonempty family of distinct indices in $I$. If we send each such $x$ to $\gf_{i_1}(x_{i_1})\jj\cdots\jj\gf_{i_n}(x_{i_n})$, this is easily seen to have the desired properties. \end{proof} There was no analog, in the above result, to the $\mathbf{V}$ of Theorem~\ref{T.main}, since the variety of semilattices has no proper nontrivial subvarieties. On the other hand, if we wish to get an analog of Theorem~\ref{T.main} with the $L_i$ and~$M$ again lattices, but for semilattice homomorphisms, rather than isotone maps, we may again start with lattices $L_i$ in an arbitrary lattice variety $\mathbf{V}$. For this situation the authors have not been able to prove the full analog of Theorem~\ref{T.main}. The difficulty with adapting our proofs of that theorem is that the map $\gy$ of Lemma~\ref{L.M'toM}, though isotone, does not respect joins; nor do the variant constructions of Section~\ref{S.alt_pfs}. The map of Lemma~\ref{L.M'toM} does, however, respect joins when the $e_i$ are least elements in the $\bar{L}_i$. 
In that case, the composite $L'\to M'\to M$ reduces to the map~(\ref{d.pre_gy}) in our sketch of the ``easy case'' of Theorem~\ref{T.main}, and we find that if the $\gf_i$ are join-semilattice homomorphisms, that composite will also be one. Hence we get \begin{proposition}\label{P.semilat_bdd_below} Let $\mathbf{V}$ be a nontrivial variety of lattices, $\E L=(L_i \mid i\in I)$ a~family of lattices in $\mathbf{V}$, $L=\Free_{\mathbf{V}}\E L$, and $\gf_i\colon L_i\to M$ a family of join-semilattice homomorphisms from the $L_i$ to a common lattice, not necessarily belonging to $\mathbf{V}$. If the image-sets $\gf_i(L_i)$ have a common lower bound $e\in M$, then there exists a join-semilattice homomorphism $\gf\colon L\to M$ whose restrictions to the $L_i\ci L$ are the~$\gf_i$. In particular, this is so if $M$ has a least element, or if $I$ is finite and every~$L_i$ has a least element.\qed \end{proposition} One could modify this result in the spirit of Theorem~\ref{T.retract}, assuming that each $L_i$ has a retraction $\gr_i$ to a common \emph{ideal} $K$ on which the $\gf_i$ agree. In another direction, the condition in Proposition~\ref{P.semilat_bdd_below} that there exist a common lower bound $e$ in $M$ to all the $\gf_i(L_i)$ can be weakened slightly (for $I$ infinite) to say that $M$ contains a chain $C$ such that every $\gf_i(L_i)$ is bounded below by some member of $C$. Let us sketch the argument that gets this, by transfinite induction, from the statement as given. First, by passing to a subchain, assume without loss of generality that $C$ is dually well-ordered. Then apply Proposition~\ref{P.semilat_bdd_below}, first, to those $L_i$ such that $\gf_i(L_i)$ is bounded below by the top element, $c_0$, of $C$, concluding that those $\gf_i$ can be factored through some lattice $L'_{(0)}$ in $\mathbf{V}$. 
Then go to the next member, $c_1$, of $C$, and combine $L'_{(0)}$ with all the $L_i$ that are bounded below by $c_1$ but not by $c_0$, factoring these together through a lattice $L'_{(1)}\in\mathbf{V}$; and so on. As in the discussion following~(\ref{d.L'k=}), we take the union of the preceding steps whenever we hit a limit ordinal. \section{Semilattice variants---a harder result}\label{S.semilat2} What if we have nothing like the lower-bound condition of Proposition~\ref{P.semilat_bdd_below}? For free products taken in the variety $\mathbf{L}$ of all lattices, the analog of that proposition, without the lower bound condition, is obtained in~\cite[middle of p.~239, ``We note finally\,...'']{GLP70}. Indeed, the map $f$ used in~\cite{GLP70} to prove Theorem~\ref{T.main} for $L=\Free\E L$ has the property that $f(x\jj y)=f(x)\jj f(y)$ \emph{except} possibly when $x$ and $y$ are bounded below by elements $x_{(i)},\,y_{(i)}\in L_i$ for some $i$, and $\gf_i(x_{(i)}\jj y_{(i)})>\gf_i(x_{(i)})\jj \gf_i(y_{(i)})$. (Cf.\ \cite[p.238, (ii)]{GLP70}.) If the $\gf_i$ are join-semilattice homomorphisms, that strict inequality never occurs, so $f$ is also a join-semilattice homomorphism. If $\mathbf{V}$ is a nontrivial variety of lattices containing the $L_i$, we do not know whether we can get the corresponding result for the free product of the $L_i$ in $\mathbf{V}$, but we shall show below that we can get such a result for their free product in the larger class $\mathbf{D}\circ\mathbf{V}$ (definition recalled in~(\ref{d.circ}) and~(\ref{d.D}) below). Our construction will be similar in broad outline to those used in preceding sections, but the intermediate lattice $L'$, rather than being a subdirect product, will be a certain lattice of downsets in a direct product.
We recall the definition: \begin{equation}\begin{minipage}[c]{25pc}\label{d.circ} If $\mathbf{K}_1$ and $\mathbf{K}_2$ are classes of lattices, then the class of those lattices $L$ which admit homomorphisms $\ge\colon L\to L_2$ such that $L_2\in\mathbf{K}_2$, and such that the inverse image of every element of $L_2$ lies in $\mathbf{K}_1$, is denoted $\mathbf{K}_1\circ\mathbf{K}_2$. \end{minipage}\end{equation} The class $\mathbf{K}_1\circ\mathbf{K}_2$ is often called the \emph{product} of the classes $\mathbf{K}_1$ and $\mathbf{K}_2$, but we will not use that name here, to avoid confusion with direct products and free products of lattices. If $\mathbf{K}_1$ and $\mathbf{K}_2$ are varieties, the class $\mathbf{K}_1\circ\mathbf{K}_2$ need not be a variety; but as noted in~\cite{AIM}, if $\mathbf{K}_1$ and $\mathbf{K}_2$ are prevarieties or quasivarieties (classes closed under taking direct products and sublattices; respectively, under taking direct products, ultraproducts, and sublattices), then $\mathbf{K}_1\circ\mathbf{K}_2$ will also be a prevariety, respectively a quasivariety. In~particular, if $\mathbf{K}_1$ and $\mathbf{K}_2$ are varieties, $\mathbf{K}_1\circ\mathbf{K}_2$ is, at least, a quasivariety. We also recall the standard notation: \begin{equation}\begin{minipage}[c]{25pc}\label{d.D} The variety of distributive lattices is denoted $\mathbf{D}$. \end{minipage}\end{equation} Now suppose $(L_i \mid i\in I)$ is a family of lattices. To begin the construction of the lattice $L'$ that we shall use in proving our final result, let us adjoin to each $L_i$ a new top element, $1_i$, form the direct product $\prod(L_i+\{1_i\})$, and define the subset \begin{equation}\begin{minipage}[c]{25pc}\label{d.P} $P\ =\ \{f\in\prod(L_i+\{1_i\})\mid\{i\mid f(i)\neq 1_i\}$\ is finite but nonempty$\}$. 
\end{minipage}\end{equation} The condition that $\{i\mid f(i)\neq 1_i\}$ be nonempty means that we are excluding the top element of $\prod(L_i+\{1_i\})$; hence if $|I|>1$, $P$ is not a lattice, though it is a lower semilattice. For each $i\in I$, let us define a map $\gq_i\colon L_i\to P$ by \begin{equation}\begin{minipage}[c]{25pc}\label{d.theta} $\gq_i(x)(i)=x,\qquad\gq_i(x)(j)=1_j$\ \ for $j\neq i$. \end{minipage}\end{equation} We see that every element $p\in P$ has a representation \begin{equation}\begin{minipage}[c]{25pc}\label{d.finite_meet} $p\ =\ \gq_{i_1}(x_1)\mm\cdots\mm\gq_{i_n}(x_n)$\quad with $n>0$, \end{minipage}\end{equation} unique up to order of terms, where $i_1,\dots,i_n$ are distinct elements of $I$, and $x_m\in\nolinebreak L_{i_m}$. We now let \begin{equation}\begin{minipage}[c]{25pc}\label{d.L'} $L'\ =$ the set of all nonempty finitely generated downsets $F\ci P$ such that \end{minipage}\end{equation} \begin{equation}\begin{minipage}[c]{25pc}\label{d.x_vee_y} for all $i\in I$ and $x,\,y\in L_i$, if $\gq_i(x),\,\gq_i(y)\in F$, then $\gq_i(x\jj y)\in F$. \end{minipage}\end{equation} Thus \begin{equation}\begin{minipage}[c]{25pc}\label{d.elts_of_L'} Each element $F\in L'$ is the union of the principal downsets \mbox{$\downarrow(\gq_{i_1}(x_1)\mm\dots\mm\gq_{i_n}(x_n))$} determined by its finitely many maximal elements $\gq_{i_1}(x_1)\mm\dots\mm\gq_{i_n}(x_n)$. Moreover, for each $i$, $F$ can have at most one such maximal element of the form $\gq_i(x)$ (i.e., with $n=1$, and with the one meetand arising from $L_i$). \end{minipage}\end{equation} The last sentence above follows from~(\ref{d.x_vee_y}): Given distinct $\gq_i(x),\,\gq_i(y)\in F$, we also have $\gq_i(x\jj y)\in F$, so $\gq_i(x)$ and $\gq_i(y)$ cannot both be maximal in $F$. Let us now prove \begin{lemma}\label{L.DoV} Let $(L_i \mid i\in I)$ be a family of lattices, and let $L'$ be constructed as in\textup{~(\ref{d.P})-(\ref{d.x_vee_y})} above.
Then \begin{equation}\begin{minipage}[c]{25pc}\label{d.L'_is_lattice} $L'$, partially ordered by inclusion, is a lattice. \end{minipage}\end{equation} \begin{equation}\begin{minipage}[c]{25pc}\label{d.xi_i_is_hom} For each $i\in I$, the map $\gx_i\colon L_i\to L'$ defined by $\gx_i(x)=\,\downarrow\gq_i(x)$ is a lattice homomorphism. \end{minipage}\end{equation} \begin{equation}\begin{minipage}[c]{25pc}\label{d.DoV} If $\mathbf{V}$ is any prevariety containing all the $L_i$, then $L'\in\mathbf{D}\circ\mathbf{V}$. \end{minipage}\end{equation} \end{lemma} \begin{proof} In verifying~(\ref{d.L'_is_lattice}), the only points that need a moment's thought are (i) that the intersection $F\cap G$ of two sets as in~(\ref{d.elts_of_L'}) remains nonempty and finitely generated; but indeed, in any meet-semilattice, the intersection of two nonempty finitely generated downsets $\bigcup\downarrow p_i$ and $\bigcup\downarrow q_j$ is the nonempty finitely generated downset $\bigcup\downarrow p_i\mm q_j$; (ii) that the closure operation of~(\ref{d.x_vee_y}) cannot produce the element $(1_i)_{i\in I}\notin P$; this follows from the fact that each $L_i$ is closed under joins in $L_i+\{1_i\}$; and (iii) that repeated application of that operation when we form a join $F\jj G$ cannot lead to a violation of finite generation as a downset. This is clear once we observe that in constructing $F\jj G$ from $F\cup G$, it is enough to apply the closure operation of~(\ref{d.x_vee_y}) to pairs consisting of one of the finitely many maximal elements of $F$ and one of the finitely many maximal elements of $G$ (and then close as a downset). Statement~(\ref{d.xi_i_is_hom}) is easily checked. (Here~(\ref{d.x_vee_y}) guarantees that $\gx_i$ respects joins --- that is the point of that condition.) To show~(\ref{d.DoV}), let us now adjoin to each $L_i$ a bottom element $0_i$, and define maps $\gp_i\colon L'\to \{0_i\}+L_i$ as follows. 
For $F\in L'$, \begin{equation}\begin{minipage}[c]{25pc}\label{d.pi_i} If there are elements $x\in L_i$ such that $\gq_i(x)\in F$, let $\gp_i(F)$ be the largest such $x$ (cf.\ second sentence of~(\ref{d.elts_of_L'})).\\[0.5em] If there are no such $x$, let $\gp_i(F)\ =\ 0_i$. \end{minipage}\end{equation} In view of~(\ref{d.x_vee_y}), each $\gp_i$ is a homomorphism; hence together they give us a homomorphism $\gp\colon L'\to\prod(\{0_i\}+L_i\mid i\in I)\in\mathbf{V}$. We claim that the inverse image under $\gp$ of each $f\in\prod(\{0_i\}+L_i\mid i\in I)$ is distributive. Indeed, when we take the join of two elements $F,\,G\in\gp^{-1}(f)$, we see that for each $i$, the sets $F$ and $G$ agree in what elements $\gq_i(x)$ they contain, hence there is no occasion for enlarging $F\cup G$ via~(\ref{d.x_vee_y}). So $F\jj G=F\cup G$. We always have $F\mm G=F\cap G$ in $L'$; hence $\gp^{-1}(f)$ is a lattice of subsets of $P$ under unions and intersections, hence it is distributive. Thus, $L'\in\mathbf{D}\circ\mathbf{V}$, as claimed. \end{proof} Now suppose that for each $i\in I$ we are given an upper semilattice homomorphism $\gf_i\colon L_i\to M$, for a fixed lattice $M$. We define $\gy\colon L'\to M$ by \begin{equation}\begin{minipage}[c]{25pc}\label{d.gy_new} $\gy(F)\ = \ \JJm{\gf_{i_1}(x_1)\mm\dots\mm\gf_{i_n}(x_n)\ }% {\ \gq_{i_1}(x_1)\mm\dots\mm\gq_{i_n}(x_n)\in F}$. \end{minipage}\end{equation} This is formally an infinite join; but it is clearly equivalent to the corresponding join over the finitely many maximal elements of $F$, hence is defined. We claim that \begin{equation}\begin{minipage}[c]{25pc}\label{d.join-hom} $\gy$ is a join-semilattice homomorphism. \end{minipage}\end{equation} To see this, note that if we temporarily extend the definition~(\ref{d.gy_new}) to arbitrary finitely generated downsets $F$, not necessarily satisfying~(\ref{d.x_vee_y}), then we have \begin{equation}\begin{minipage}[c]{25pc}\label{d.cup=jj} $\gy(F\cup G)\ =\ \gy(F)\jj\gy(G)$. 
\end{minipage}\end{equation} Now for $F,\,G\in L'$, the element $F\jj G$ is obtained by bringing into $F\cup G$ elements $\gq_i(x\jj y)$ where $\gq_i(x)\in F$ and $\gq_i(y)\in G$ (and the elements they majorize). In this situation, the join defining $\gy(F\cup G)$ already contains joinands $\gf_i(x)$ and $\gf_i(y)$, resulting from the presence of $\gq_i(x)$ and $\gq_i(y)$ in $F$ and $G$, hence its value in $M$ already majorizes $\gf_i(x)\jj\gf_i(y) =\gf_i(x\jj y)$. So bringing $\gq_i(x\jj y)$ into $F\cup G$ does not increase its image under $\gy$, establishing~(\ref{d.join-hom}). Finally, comparing the definition~(\ref{d.xi_i_is_hom}) of the $\gx_i$ and the definition~(\ref{d.gy_new}) of $\gy$, we see that \begin{equation}\begin{minipage}[c]{25pc}\label{d.gfgy_i} For all $i\in I,\quad\gf_i\ =\ \gy\,\gx_i$. \end{minipage}\end{equation} Now given $\mathbf{V}$ as in~(\ref{d.DoV}), let $\E L=(L_i\mid i\in I)$ and $L=\Free_{\mathbf{D}\circ\mathbf{V}}\E L$. Then the lattice homomorphisms $\gx_i\colon L_i\to L'$ are equivalent to a single homomorphism $\gx\colon L\to L'$; and we see that by taking $\gf=\gy\,\gx\colon L\to L'\to M$, we get our desired result: \begin{theorem}\label{T.semilat} Let $\E L=(L_i \mid i\in I)$ be a family of lattices, and $\gf_i\colon L_i\to M$ a family of join-semilattice homomorphisms from the $L_i$ to a common lattice. Suppose all $L_i$ lie in some prevariety $\mathbf{V}$ of lattices, and let $L=\Free_{\mathbf{D}\circ\mathbf{V}}\E L$. Then there exists a join-semilattice homomorphism $\gf\colon L\to M$ whose restrictions to the $L_i$ are the~$\gf_i$.\qed \end{theorem} Let us show now by example that the lattice $L'$ constructed in the above proof may fail to lie in $\mathbf{V}$ itself. We start with two distributive lattices, namely, the one-element lattice $L_0=\{e\}$, and the four-element lattice $L_1$ generated by two elements $a$ and $b$. 
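Anticipating the verification carried out below, the failure of distributivity in this $L'$ can also be checked by machine. The following sketch (an aside; the encodings are our own) realizes $L_1$ as the lattice of subsets of $\{a,b\}$, builds the ordered set $P$ of~(\ref{d.P}) and the downset operations of~(\ref{d.L'})-(\ref{d.x_vee_y}), and confirms that $\bar{e}\mm(\bar{a}\jj\bar{b})$ and $(\bar{e}\mm\bar{a})\jj(\bar{e}\mm\bar{b})$ differ:

```python
from itertools import product

# L1 = the four-element distributive lattice, realized as subsets of {a, b};
# L0 = {e}.  T0, T1 stand for the adjoined top elements 1_0, 1_1.
L1 = [frozenset(), frozenset('a'), frozenset('b'), frozenset('ab')]
T0, T1 = 'T0', 'T1'
P = [(u, v) for u in ['e', T0] for v in L1 + [T1] if (u, v) != (T0, T1)]

def leq(p, q):  # componentwise order on P
    (u1, v1), (u2, v2) = p, q
    ok_u = u1 == u2 or u2 == T0
    ok_v = v2 == T1 or (v1 != T1 and v1 <= v2)
    return ok_u and ok_v

def down(S):  # downward closure in P
    return frozenset(p for p in P if any(leq(p, q) for q in S))

theta1 = lambda x: (T0, x)   # theta_1; theta_0(e) is ('e', T1)
ebar = down([('e', T1)])
abar = down([theta1(frozenset('a'))])
bbar = down([theta1(frozenset('b'))])

def meet(F, G):
    return F & G

def join(F, G):  # close F u G as a downset and under condition (d.x_vee_y)
    S = F | G
    while True:
        xs = [v for (u, v) in S if u == T0]  # the elements theta_1(x) in S
        closed = down(S | {theta1(x | y) for x, y in product(xs, xs)})
        if closed == S:
            return S
        S = closed

lhs = meet(ebar, join(abar, bbar))
rhs = join(meet(ebar, abar), meet(ebar, bbar))
assert lhs != rhs                             # L' is not distributive
assert lhs == down([('e', frozenset('ab'))])  # the left side is principal
```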
Let us use bar notation for the images of these generators under the embeddings $\gx_i\colon L_i\to L'$, so that $\bar{e}=\gx_0(e)=\ \downarrow(e,1_1)$, $\bar{a}=\gx_1(a)=\ \downarrow(1_0,a)$, $\bar{b}=\gx_1(b)=\ \downarrow(1_0,b)$. We claim that in $L'$, \begin{equation}\begin{minipage}[c]{25pc}\label{d.nondist} $\bar{e}\mm(\bar{a}\jj\bar{b})\ \neq \ (\bar{e}\mm\bar{a})\jj(\bar{e}\mm\bar{b})$. \end{minipage}\end{equation} Indeed, one finds that the left-hand side of~(\ref{d.nondist}) is the principal down-set $\downarrow(e,a\jj b)$, while the right-hand side is $(\downarrow(e,a))\cup(\downarrow(e,b))$, a nonprincipal down-set. Hence $L'$ is not distributive. It is not even modular: one can similarly verify that a copy of $N_5$ is given by the elements \begin{equation}\begin{minipage}[c]{25pc}\label{d.N_5} $(\bar{e}\mm\bar{a})\jj(\bar{e}\mm \bar{b})\jj(\bar{a}\mm\bar{b}),\quad \bar{a}\jj(\bar{e}\mm\bar{b}),\quad \bar{a}\jj(\bar{e}\mm(\bar{a}\jj\bar{b})),\quad \bar{a}\jj\bar{b},\quad (\bar{e}\mm\bar{a})\jj\bar{b}$. \end{minipage}\end{equation} Evidence suggesting that the task of extending semilattice homomorphisms from a family of lattices to their free product is likely to be harder than the corresponding task for isotone maps is \cite[Theorem~1]{B+L} $=$ \cite[Theorem 2.8]{H+K}, which says that the injective objects in the category of meet-semilattices are the \emph{frames}, i.e., the complete lattices satisfying the join-infinite distributive identity. (This result is generalized in \cite[Theorem~3.1]{Z+Z}.) Thus, dually, the injective join-semilattices are the complete lattices satisfying the meet-infinite distributive identity; in particular, they are distributive; so Theorem~\ref{T.semilat} does not ``almost'' follow from a general injectivity statement, as Theorem~\ref{T.main} did. (While on the topic of injective objects, what are the injectives in the variety of lattices? 
It is shown in \cite[next-to-last paragraph]{B+B} that the only one is the trivial lattice. This is generalized in \cite{AD} to any nontrivial variety $\mathbf{V}$ of lattices other than the variety of distributive lattices, and in \cite{EN}, with a very quick proof, to any class of lattices containing a $3$-element chain and a nondistributive lattice.) \section{Questions.}\label{S.questions} The example following Theorem~\ref{T.semilat} does not mean that there is no way to factor a family of maps as in that theorem through the free product of the $L_i$ in $\mathbf{V}$; only that the construction by which we have proved that theorem doesn't lead to such a factorization. Indeed, for that particular pair of lattices, one does have such a factorization, by the final clause of Proposition~\ref{P.semilat_bdd_below}. So we ask \begin{question}\label{Q.semilat} For $\mathbf{V}$ a general nontrivial variety of lattices, can one prove the full analog of Theorem~\ref{T.main} with join-semilattice homomorphisms in place of isotone maps \textup{(}i.e., a result like Theorem~\ref{T.semilat} with $\mathbf{V}$ in place of $\mathbf{D}\circ\mathbf{V}$; equivalently, a result like Proposition~\ref{P.semilat_bdd_below} without the assumptions on lower bounds\textup{)}? If that result is not true in general, is it true if $M$ also belongs to the given variety~$\mathbf{V}$? \end{question} A counterexample to either version of the above question would probably have to be fairly complicated, in view of Proposition~\ref{P.semilat_bdd_below}. In a different direction, note that in our main result, Theorem~\ref{T.main}, the assumption that $M$ had a lattice structure did not come into the statement, except to make the concept of isotone map meaningful, for which a structure of order would have sufficed; though the lattice structure was used in the proof. The same observation applies to many of our other results. This suggests a family of questions. 
\begin{question}\label{Q.M_not_lat} For each of Lemmas~\ref{L.L&e} and~\ref{L.M'toM} and Theorems~\ref{T.main},~\ref{T.complete}, and~\ref{T.retract}, does the same conclusion hold for a significantly wider class of orders $M$ than the underlying orders of lattices? Likewise, for Proposition~\ref{P.semilat_bdd_below} and Theorem~\ref{T.semilat}, does the same conclusion hold for a significantly wider class of join-semilattices $M$ than the underlying join-semilattices of lattices? \end{question} When we showed in Section~\ref{S.alt_pfs} that our main result could not be proved by a construction with ``too much symmetry'', we called on the fact that in a free lattice in the variety $\mathbf{L}$ of all lattices, no element is doubly reducible (both a proper meet and a proper join; see sentence following display~(\ref{d.sym})). Lattices (not necessarily free) with the latter property were considered in~\cite{tfbl}. We do not know the answer to \begin{question}\label{Q.m-j-red} Are there any nontrivial proper subvarieties $\mathbf{V}$ of $\mathbf{L}$ such that in every free lattice $\Free_{\mathbf{V}}(X)$, no element is doubly reducible? \end{question} A final tantalizing question is, \begin{question}\label{C.section} In the situation of Corollary~\ref{C.VtoL}, can the isotone map $\Free_{\mathbf{V}}\E L\to\Free\E L$ be taken to be a section \textup{(}right inverse\textup{)} to the natural lattice homomorphism $\Free\E L\to\Free_{\mathbf{V}}\E L$? In particular, for every variety of lattices $\mathbf{V}$ and every set $X$, does the natural lattice homomorphism $\Free(X)\to\Free_{\mathbf{V}}(X)$ admit an isotone section? \end{question}
https://arxiv.org/abs/2205.09049
Random graph embeddings with general edge potentials
In this paper, we study random embeddings of polymer networks distributed according to any potential energy which can be expressed in terms of distances between pairs of monomers. This includes freely jointed chains, steric effects, Lennard-Jones potentials, bending energies, and other physically realistic models. A configuration of $n$ monomers in $\mathbb{R}^d$ can be written as a collection of $d$ coordinate vectors, each in $\mathbb{R}^n$. Our first main result is that entries from different coordinate vectors are uncorrelated, even when they are different coordinates of the same monomer. We predict that this property holds in realistic simulations and in actual polymer configurations (in the absence of an external field). Our second main contribution is a theorem explaining when and how a probability distribution on embeddings of a complicated graph may be pushed forward to a distribution on embeddings of a simpler graph to aid in computations. This construction is based on the idea of chain maps in homology theory. We use it to give a new formula for edge covariances in phantom network theory and to compute some expectations for a freely-jointed network.
\section{Introduction} In the study of network polymers, it is common to represent the polymer topology by a graph and study the spatial distribution of the monomers in terms of the eigenvalues and eigenvectors of the Kirchhoff (or architecture~\cite{Kuchanov88}) matrix. In mathematics, this matrix is usually known as the graph Laplacian and studied as a discrete analogue of the usual Laplacian operator from continuum physics~\cite{Chung:1997tk}. The classical phantom network theory of James and Guth~\cite{James1947} restricts attention to the case where the probability distribution of positions of bonded pairs of monomers is described by a Gaussian spring and there are no other monomer-monomer interactions. Combining the linear algebra of the graph Laplacian with the simple behavior of Gaussian probability distributions under linear maps has allowed for exact calculations of striking simplicity and power (see~\cite{Yang1998,Wei1995b,Eichinger1985,tcrw,tcrw-theory}). However, mixing algebra and probability has a cost: presenting the theory this way makes it difficult to understand which results might generalize to different bond potentials. In this paper, we give a new formulation for the linear algebra of network polymers which is compatible with any combination of potentials between monomers which depend only on distance, such as Lennard-Jones, FENE, excluded-volume or elastic energy potentials, including fixed edge lengths, as in the case of the freely jointed chain. We note that different pairs of monomers are permitted to have different potentials, so we might model steric effects by a repulsive potential on some pairs and bonds by an attractive potential on others. 
In the classical theory, a random embedding of a graph $\mathbf{G}$ with $\mathbf{v}$ vertices and $\mathbf{e}$ edges into $\mathbb{R}^1$ is described by a vector space $\mathbb{R}^{\mathbf{e}}$ of edge displacements and a vector space $\mathbb{R}^{\mathbf{v}}$ of vertex positions connected by an incidence (or boundary) matrix $\bdry \colon \mathbb{R}^{\mathbf{e}} \rightarrow \mathbb{R}^{\mathbf{v}}$.\footnote{The incidence matrix is usually called $B$ in the literature, but we use $\bdry$ here to match the notation used in the rest of the paper.} The Kirchhoff matrix $L$ is the $\mathbf{v} \times \mathbf{v}$ matrix $\bdry\bdry^T$. The special symmetries of Gaussian potentials allow us to study the problem of randomly embedding $\mathbf{G}$ into $\mathbb{R}^d$ coordinate-by-coordinate as a collection of $d$ independent one-dimensional problems. For our more general potentials, this will not be possible. For instance, if there is a fixed bond length between two monomers in space, the $x$, $y$, and $z$ coordinates of the vector between them are clearly not independent random variables. Our first idea, described in Section~\ref{sec:definitions}, is to replace the two vector spaces $\mathbb{R}^{\mathbf{v}}$ and $\mathbb{R}^{\mathbf{e}}$ with four vector spaces: $\operatorname{EC} \simeq \mathbb{R}^{\mathbf{e}}$ and $\operatorname{VC} \simeq \mathbb{R}^{\mathbf{v}}$, which are spaces of (scalar) weights on edges and vertices, and $\operatorname{ED} \simeq \mathbb{R}^{d\mathbf{e}}$ and $\operatorname{VP} \simeq \mathbb{R}^{d\mathbf{v}}$ which are spaces of vector edge displacements and vertex positions. These pairs of spaces are related by the contravariant functor $\operatorname{Hom}(-,\mathbb{R}^d)$, which exchanges the boundary map $\bdry \colon \operatorname{EC} \rightarrow \operatorname{VC}$ for the displacement map $\bdry^* \colon \operatorname{VP} \rightarrow \operatorname{ED}$.
We then introduce natural inner products on these vector spaces which make $\bdry$ and $\bdry^*$ partial isometries (Propositions~\ref{prop:bdy is a partial isometry} and \ref{prop:bdystar is a partial isometry}). Polymer models are specified in terms of a probability distribution on $\operatorname{ED}$. However, if the topology of the network $\mathbf{G}$ is nontrivial, only a subspace of configurations in $\operatorname{ED}$ correspond to valid embeddings of the graph. In this case, we have to condition our distribution on membership in the appropriate subspace. In James--Guth theory (where the probability distribution on $\operatorname{ED}$ is Gaussian), this presents no problems. However, in general there are some technical difficulties involved in conditioning an arbitrary probability distribution on a hypothesis of measure zero. In Section~\ref{sec:probability measures} we resolve this problem, giving in Definition~\ref{def:G-compatible}, Proposition~\ref{prop:disintegration with density}, and Corollary~\ref{cor:compatibility for joint distributions} easy-to-check conditions under which the construction can be mathematically justified. In Section~\ref{sec:means and variances}, we consider the mean and variance of edge displacements and vertex positions, showing in~Propositions~\ref{prop:covariance structure for VP} and~\ref{prop:covariance structure for ED} that (as in James--Guth theory) different coordinates of these displacements or positions are uncorrelated,\footnote{Even though, as we noted above, they are~\emph{not} independent.} giving the covariance matrices a special structure. We are then able to give a general formula for the radius of gyration (Theorem~\ref{thm:gyradius formula}), which we use to compute the expected radius of gyration of a ring polymer whose edges each have an~\emph{arbitrary} symmetric probability distribution on $\mathbb{R}^d$ in terms of the edge variance (Proposition~\ref{prop:cycle graph gyradius}). 
It is then easy to recover the standard formulae for the expected radius of gyration of the freely jointed ring (Corollary~\ref{cor:freely jointed ring}) and the Gaussian ring polymer (Corollary~\ref{cor:Gaussian ring}). It's common in phantom network theory to try to relate the distribution of embeddings of a complicated graph to the distribution of embeddings of a simpler graph obtained, for instance, by contracting the graph by deleting ``bifunctional\footnote{That is, degree 2.}'' vertices. There are many variations of this construction. We unify the theory of such strategies in Section~\ref{sec:chain maps} by borrowing the idea of~\emph{chain maps} from homology theory. Our main result, Theorem~\ref{thm:chain maps and probability}, gives precise information on when and how a probability measure may be pushed from a more complicated graph to a simpler one. In our final results, we demonstrate the utility of this theorem by giving a new, simple method for computing edge covariances in James--Guth theory~(\prop{projections}) and numerically computing the expectation of junction-junction distance in a tetrahedral network whose edges are freely-jointed chains (Figure~\ref{fig:numerical integration versus markov}). Various useful results from linear algebra are reviewed in Appendix~\ref{sec:background} and referenced throughout. Everything in the appendix is basically standard, but we are particularly interested in using nonstandard inner products (so that adjoints and transposes don't coincide), as well as more general spaces of linear transformations than just dual spaces, so none of this material is presented in quite the way we need in any textbooks we are familiar with. \section{Definitions} \label{sec:definitions} We begin with some definitions.
\begin{definition}\label{def:chainspaces} Given a multigraph $\mathbf{G}$ with vertices $\vertex_1, \dotsc, \vertex_\verticesV$, we define the space $\operatorname{VC}$ of \emph{vertex chains}\footnote{The terminology comes from homology theory~\cite{Hatcher:2002ut}, which will be a continuing inspiration for our point of view.} to be the vector space of formal linear combinations $x = x_1 v_1 + \dots + x_\mathbf{v} v_\mathbf{v}$. The vertices form a canonical basis for this space, which is isomorphic to $\mathbb{R}^\mathbf{v}$. If $\mathbf{G}$ has edges $\edge_1, \dotsc, \edge_\edgesE$, we define the space $\operatorname{EC}$ of \emph{edge chains} to be the vector space of formal linear combinations $w = w_1 e_1 + \dots + w_\mathbf{e} e_\mathbf{e}$. The edges form a canonical basis for this space, which is isomorphic to $\mathbb{R}^\mathbf{e}$. \end{definition} The vector spaces $\operatorname{VC}$ and $\operatorname{EC}$ are joined by a natural linear map: \begin{definition}\label{def:boundary} The \emph{boundary map} $\bdry \colon \operatorname{EC} \rightarrow \operatorname{VC}$ is defined by $\bdry (e_i) = \operatorname{head}(e_i) - \operatorname{tail}(e_i)$. \end{definition} The name ``boundary map'' comes from a natural convention: the (signed) boundary of an oriented edge consists of two ``oriented'' vertices: the head vertex with orientation $+1$ and the tail vertex with orientation $-1$. The transpose $\mat{\bdry}^T$ is often called the~\emph{incidence matrix} of $\mathbf{G}$, although we are choosing a particular convention for how loop edges are recorded in the incidence matrix. Vertex and edge chains are linear combinations of vertices and edges; we think of them as weights on the vertices and edges. Varying the weights allows us to pick out various subsets and averages of vertices in the graph.
For instance, $1 v_i$ represents vertex $i$ alone, while $\frac{1}{\mathbf{v}} (v_1 + \cdots + v_\mathbf{v}) = \frac{1}{\mathbf{v}} \ones{\verticesV}{1}$ represents a sort of average vertex,\footnote{We use the notation $\mat{\mathbf{1}}_{n \times m}$ for the $n \times m$ matrix containing all $1$s.} in the sense that evaluating any linear functional on this chain gives the average of the functional over the whole graph. \begin{definition}\label{def:loop space} The subspace $\ker \bdry \subset \operatorname{EC}$ is called the \emph{loop space} of $\mathbf{G}$. \end{definition} The name comes from the fact that if the oriented edges $e_1, \dotsc, e_n$ form a closed loop, then there are vertices $v_1, \dotsc, v_n$ so that $\bdry e_i = v_{i+1} - v_i$ for $i \in 1, \dotsc, n-1$ and $\bdry e_n = v_1 - v_n$. Thus $\bdry (e_1 + \cdots + e_n) = 0$. \begin{proposition}\label{prop:loop space} Every $w \in \ker \bdry \subset \operatorname{EC}$ is a linear combination of closed loops. The dimension of $\ker \bdry$ is the cycle rank $\xi(\mathbf{G}) = \mathbf{e} - \mathbf{v} + 1$ of $\mathbf{G}$. \end{proposition} For a proof of the above proposition in the context of a gentle introduction to homology on graphs, see Chapter~4 in Sunada's book~\cite{Sunada:2013jt}. We now want to consider assignments of vectors (instead of scalar weights) to our vertices and edges. \begin{definition}\label{def:VP} Given a graph $\mathbf{G}$ with vertices $\vertex_1, \dotsc, \vertex_\verticesV$, the vector space $\operatorname{Hom}(\operatorname{VC},\mathbb{R}^d)$ of linear maps $X \colon \operatorname{VC} \rightarrow \mathbb{R}^d$ is the \emph{vertex positions space}, denoted $\operatorname{VP}$. \end{definition} Each $X \in \operatorname{VP}$ describes an embedding of $\mathbf{G}$ in $\mathbb{R}^d$: if we think of $X \in \operatorname{VP}$ as a $d \times \mathbf{v}$ matrix, then the $(i,j)$ entry is the $i$th coordinate of the position of vertex $v_j$. 
This is the same as the standard basis for $\operatorname{Hom}(\operatorname{VC},\mathbb{R}^d)$ that appears in~\defn{hom}, which we now call $X_{ij}$: $X_{ij}(x_1 v_1 + \cdots + x_\mathbf{v} v_\mathbf{v}) = (0, \dotsc, x_j, \dotsc, 0)$, where $x_j$ is in the $i$th position. As a linear map, $X$ takes a weighted sum of (abstract) vertices to the corresponding weighted sum of their positions in $\mathbb{R}^d$. For instance, $\frac{1}{\mathbf{v}} X(v_1 + \dots + v_\mathbf{v}) = \frac{1}{\mathbf{v}} \mat{X} \ones{\verticesV}{1}$ is the position of the center of mass of the vertices. \begin{definition}\label{def:ED} If $\mathbf{G}$ has edges $\edge_1, \dotsc, \edge_\edgesE$, the vector space $\operatorname{Hom}(\operatorname{EC},\mathbb{R}^d)$ of linear maps $W \colon \operatorname{EC} \rightarrow \mathbb{R}^d$ is the \emph{edge displacements space}, denoted $\operatorname{ED}$. \end{definition} This space associates a vector in $\mathbb{R}^d$ with every edge of $\mathbf{G}$, rather than every vertex. If we represent $W \in \operatorname{ED}$ by a $d \times \mathbf{e}$ matrix, then the $(i,j)$ entry is the $i$th coordinate of the vector associated with $e_j$. Again, this is the same as the standard basis for $\operatorname{Hom}(\operatorname{EC},\mathbb{R}^d)$ that appears in~\defn{hom}: $W_{ij}(w_1 e_1 + \cdots + w_\mathbf{e} e_\mathbf{e}) = (0, \dotsc, w_j, \dotsc, 0)$, where $w_j$ is in the $i$th position. As a linear map, $W$ maps a weighted sum of (abstract) edges to the corresponding weighted sum of vectors associated with those edges. \begin{definition}\label{def:displacement map} The displacement map $\operatorname{disp}: \operatorname{VP} \rightarrow \operatorname{ED}$ is defined by \begin{equation*} \operatorname{disp}(X)(e_i) := X(\operatorname{head} e_i) - X(\operatorname{tail} e_i). 
\end{equation*} \end{definition} Every embedding of the vertices of $\mathbf{G}$ in $\mathbb{R}^d$ given by an $X \in \operatorname{VP}$ has a corresponding set of displacement vectors $W = \operatorname{disp}(X) \in \operatorname{ED}$. However, not every $W \in \operatorname{ED}$ is derived from a set of positions for the vertices. For instance, if $\mathbf{G}$ is the cycle graph with three edges $e_1 = v_1 \rightarrow v_2$, $e_2 = v_2 \rightarrow v_3$, and $e_3 = v_3 \rightarrow v_1$, any $W = \operatorname{disp}(X)$ must have $W(e_1 + e_2 + e_3) = \vec{0} \in \mathbb{R}^d$. We now give a surprising connection between $\operatorname{disp}$ and our boundary map $\bdry \colon \operatorname{EC} \rightarrow \operatorname{VC}$. Recall that any linear map $F \colon V \rightarrow W$ induces a corresponding linear map $F^* \colon \operatorname{Hom}(W,U) \rightarrow \operatorname{Hom}(V,U)$ as defined in~\eqref{eq:induced hom map}. \begin{definition}\label{def:bdystar} The map $\bdry^* \colon \operatorname{VP} \rightarrow \operatorname{ED}$ is the linear map induced by $\bdry \colon \operatorname{EC} \rightarrow \operatorname{VC}$. \end{definition} \begin{proposition}\label{prop:disp is coboundary} We have $\operatorname{disp} = \bdry^*$. \end{proposition} \begin{proof} We observe that in our bases, $\bdry^*(X_{ik}) = \sum_{j=1}^\mathbf{e} \operatorname{ht}_{kj} W_{ij}$, where \begin{equation*} \operatorname{ht}_{kj} = \begin{cases} 0, & \text{if $v_k = \operatorname{head}(e_j) = \operatorname{tail}(e_j)$}\\ +1, & \text{if $v_k = \operatorname{head}(e_j)$} \\ -1, & \text{if $v_k = \operatorname{tail}(e_j)$} \\ 0, & \text{otherwise} \end{cases} \end{equation*} It follows that $\bdry^*(X)(e_j) = X(\operatorname{head}(e_j) - \operatorname{tail}(e_j)) = \operatorname{disp}(X)(e_j)$. \end{proof} It follows immediately that \begin{lemma}\label{lem:net displacement} We say $P \in \operatorname{EC}$ is a path from $v_i$ to $v_j$ if $\bdry P = v_j - v_i$.
Then $(\bdry^* X)(P) = (\operatorname{disp} X)(P) = X(v_j) - X(v_i)$ is the net displacement between the ends of the path $P$. \end{lemma} We now want to characterize $\ker \bdry^*$ and $\operatorname{im} \bdry^*$. \begin{proposition} \label{prop:ker and im of bdystar} If $\mathbf{G}$ is a connected graph, then \begin{align*} \ker \bdry^* &= \{ X \in \operatorname{VP} : X(v_i) = X(v_j) \in \mathbb{R}^d \text{ for all } i,j \in 1, \dotsc, \mathbf{v} \} \\ &= \{ Z \otimes \ones{\verticesV}{1} : \text{$Z \in \mathbb{R}^d$ is a $d \times 1$ column vector} \} \\ \operatorname{im} \bdry^* &= \{ W \in \operatorname{ED} : W(u) = 0 \text{ for all $u \in \ker \bdry \subset \operatorname{EC}$} \}. \end{align*} As a consequence, $\dim \ker \bdry^* = d$ and $\dim \operatorname{im} \bdry^* = d(\mathbf{v}-1)$. \end{proposition} \begin{proof} Using \prop{annihilator props}, $\ker \bdry^*$ is the annihilator $(\operatorname{im} \bdry)^0$ of $\operatorname{im} \bdry$. In other words, if $X \in \ker \bdry^*$, then $0 = X(\bdry e_i) = X(\operatorname{head} e_i) - X(\operatorname{tail} e_i)$, so $X(\operatorname{head} e_i) = X(\operatorname{tail} e_i)$. Since $\mathbf{G}$ is connected, this implies that $X(v_i)$ is the same for all vertices $v_i$. Similarly, $\operatorname{im} \bdry^* = (\ker \bdry)^0$, which completes the proof. \end{proof} This proposition means that two configurations of vertices $X$ and $Y$ in $\operatorname{VP}$ have the same edge displacements $\bdry^* X = \bdry^* Y$ if and only if they are translations of each other. In our description of $\ker \bdry^*$ as the set of maps of the form $Z \otimes \ones{\verticesV}{1}$, the vector $Z \in \mathbb{R}^d$ is the translation vector. We have now reached an important point: the space $\operatorname{ED}$ is the space of \emph{arbitrary} assignments of vectors $W(e_i) \in \mathbb{R}^d$ to the edges of $\mathbf{G}$.
However, only $W \in \operatorname{im} \bdry^* \subset \operatorname{ED}$ are assignments of vectors which are displacements between a choice of vertex positions $X \in \operatorname{VP}$. \prop{ker and im of bdystar} tells us that $W \in \operatorname{im} \bdry^*$ if and only if the total displacement around any element of the loop space of $\mathbf{G}$ is zero. Casassa~\cite{Casassa1965} adopted this point of view for ring polymers, where the loop space is one-dimensional and spanned by $e_1 + \cdots + e_\mathbf{e}$,\footnote{At least, this is true if we orient the edges consistently around the loop. If not, we'd have to reverse some signs in the sum to arrive at a consistent orientation for the ring.} by observing that a set of edge displacements forms a closed ring if and only if their sum is the zero vector. However, he did not generalize this point of view to other network topologies: for a more complicated graph, it's clear that there are infinitely many possible loops, but without characterizing the loops as $\ker \bdry$, it's not at all clear that the loops form a subspace and hence that it suffices to require that total displacements around a finite basis for $\ker \bdry$ vanish. \subsection{The graph Laplacian} We now introduce the graph Laplacian,\footnote{The graph Laplacian is also known as the Kirchhoff adjacency matrix. It is central to the theory of James and Guth~\cite{James1947}, and also to Flory~\cite{Flory1976} and Eichinger~\cite{Eichinger1972} as a quadratic form expressing the potential energy of a phantom network with Gaussian chains joining the junctions. We will put it to more general use.} which will be a key part of the story. \begin{definition}\label{def:L} The \emph{graph Laplacian} $L \colon \operatorname{VC} \rightarrow \operatorname{VC}$ is defined by $L = \bdry \bdry^T$.
Thought of as a matrix with respect to the standard basis $\vertex_1, \dotsc, \vertex_\verticesV$ for $\operatorname{VC}$, we have \begin{equation} \mat{L}_{ij} = \begin{cases} \deg(v_i) - 2 \# \text{(loop edges $v_i \rightarrow v_i$)}, & \text{if $i = j$,} \\ -\# \text{(edges $v_j \rightarrow v_i$)} - \# \text{(edges $v_i \rightarrow v_j$)}, & \text{if $i \neq j$}. \end{cases} \end{equation} \end{definition} Much is known~\cite{Chung:1997tk} about the graph Laplacian and how it reveals various properties of the (multi)graph~$\mathbf{G}$. We will record a couple of useful facts here. \begin{proposition}\label{prop:basic Laplacian properties} The $\mathbf{v} \times \mathbf{v}$ matrix $\mat{L}$ is symmetric and positive semidefinite. We have \begin{equation*} \operatorname{im} L = \operatorname{im} \bdry = \ker \ones{\verticesV}{\verticesV} \quad\text{and}\quad \ker L = \ker \bdry^T = \operatorname{im} \ones{\verticesV}{\verticesV}. \end{equation*} \end{proposition} \begin{proof} Since $\bdry^T = \bdry^*$ if $d=1$,~\prop{ker and im of bdystar} tells us that $\ker \bdry^T$ is spanned by the constant chains $\ones{\verticesV}{1} \in \operatorname{VC}$, which are the image of $\ones{\verticesV}{\verticesV} = \ones{\verticesV}{1} \ones{1}{\verticesV}$. Since $\ker \bdry^T \subset \ker L$, this tells us that $\operatorname{im} \ones{\verticesV}{\verticesV} \subset \ker L$. Similarly, if $\bdry^T \ones{\verticesV}{1} = 0$, then $\ones{1}{\verticesV} \bdry = 0$, or $\operatorname{im} L \subset \ker \ones{\verticesV}{\verticesV}$. But $\operatorname{im} \bdry^T = \star \operatorname{im} \bdry^* = \star (\ker \bdry)^0$ only intersects $\ker \bdry$ at the origin. Thus $\ker L = \ker \bdry^T$ is one-dimensional. It follows that $\operatorname{im} L$ is $\mathbf{v} - 1$ dimensional. Since $\operatorname{im} \ones{\verticesV}{\verticesV}$ is one-dimensional, $\ker \ones{\verticesV}{\verticesV}$ is $\mathbf{v} - 1$ dimensional. The result now follows from the inclusions above. 
\end{proof} Since $L$ has a kernel, we cannot invert it. However, we can get an invertible operator by defining an operator which is the identity on $\ker L$ rather than collapsing it: \begin{definition}\label{def:Ltilde} The \emph{augmented graph Laplacian} is the operator $\tilde{L} \colon \operatorname{VC} \to \operatorname{VC}$ which is represented in the basis $\vertex_1, \dotsc, \vertex_\verticesV$ by the $\mathbf{v} \times \mathbf{v}$ matrix $\mat{\tilde{L}} = \mat{L} + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}$. \end{definition} \begin{proposition}\label{prop:im of subspaces under Ltilde} $\ker \tilde{L} = \{0\}$ and $\operatorname{im} \tilde{L} = \operatorname{VC}$. Thus $\tilde{L}$ is invertible. Further, \[ \tilde{L}(\operatorname{im} \ones{\verticesV}{\verticesV}) = \operatorname{im} \ones{\verticesV}{\verticesV} \quad\text{and}\quad \tilde{L}(\operatorname{im} \bdry) = \operatorname{im} \bdry. \] \end{proposition} \begin{proof} Since $L$ is symmetric, it is self-adjoint in the standard $\dotp{-}{-}$ inner product on $\operatorname{VC}$ and, by \lem{orthogonal decomposition}, $\operatorname{VC} = \ker L \oplus \operatorname{im} L$ is an orthogonal decomposition of $\operatorname{VC}$. Using~\prop{basic Laplacian properties}, this means that any $x \in \operatorname{VC}$ can be written as $x = \ones{\verticesV}{\verticesV} y + \bdry z$. Further, \begin{equation*} \tilde{L}x = (L + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV})(\ones{\verticesV}{\verticesV} y + \bdry z) = \ones{\verticesV}{\verticesV} y + L \bdry z. \end{equation*} Since $L \bdry z$ and $\ones{\verticesV}{\verticesV} y$ are in the orthogonal subspaces $\operatorname{im} L$ and $\ker L$, we can first conclude that $\tilde{L}x = 0$ if and only if $x=0$. 
Further, if $x \in \operatorname{im} \ones{\verticesV}{\verticesV}$, then $z = 0$, and $\tilde{L}x = \ones{\verticesV}{\verticesV} y \in \operatorname{im} \ones{\verticesV}{\verticesV}$ while if $x \in \operatorname{im} \bdry$, then $y = 0$ and $\tilde{L}x = L \bdry z \in \operatorname{im} L = \operatorname{im} \bdry$. Thus $\tilde{L}(\operatorname{im} \ones{\verticesV}{\verticesV}) \subset \operatorname{im} \ones{\verticesV}{\verticesV}$ and $\tilde{L}(\operatorname{im} \bdry) \subset \operatorname{im} \bdry$. Counting dimensions yields the reverse inclusions immediately. \end{proof} \subsection{Inner products} Up until this point, we have had vector spaces and linear maps, but (except as a convenience in the proof of \prop{im of subspaces under Ltilde}) not inner product spaces. Our next goal is to introduce natural inner products on all four spaces $\operatorname{VC}$, $\operatorname{EC}$, $\operatorname{VP}$, and $\operatorname{ED}$. While the inner products on $\operatorname{EC}$ and $\operatorname{ED}$ will be the expected ones, the inner products on $\operatorname{VC}$ and $\operatorname{VP}$ will be non-standard. \begin{definition} The inner product space $\left( \VC, \VCp{-}{-} \right)$ is the vector space $\operatorname{VC}$, together with the inner product given in the $\vertex_1, \dotsc, \vertex_\verticesV$ basis by $\tilde{L}^{-1}$. The inner product space $\left( \EC, \ECp{-}{-} \right)$ is the vector space $\operatorname{EC}$, together with the standard inner product in the $\edge_1, \dotsc, \edge_\edgesE$ basis. \label{def:VCprod and ECprod} \end{definition} The induced inner products make $\operatorname{VP}$ and $\operatorname{ED}$ inner product spaces as well: \begin{definition} The inner product space $\left( \VP, \VPp{-}{-} \right)$ is the vector space $\operatorname{VP}$, together with the inner product given in the $X_{11}, \dotsc, X_{d\verticesV}$ basis by $\mat{\tilde{L}^*} = \mat{I_d} \otimes \mat{\tilde{L}}$. 
The inner product space $\left( \ED, \EDp{-}{-} \right)$ is the vector space $\operatorname{ED}$, together with the standard inner product in the $W_{11}, \dotsc, W_{d\edgesE}$ basis. \label{def:VPprod and EDprod} \end{definition} Adjoints, orthogonality, Moore--Penrose pseudoinverses, and singular value decompositions all depend on inner products, so in principle we must be careful when using any of these (or referring to the literature) that our results hold in the desired inner product. This is simplified by the following useful fact: \begin{proposition} The operators $\bdry^+$, $\bdry^{T+}$ and $L^+ = \bdry^{T+} \bdry^+$ in the basis $\vertex_1, \dotsc, \vertex_\verticesV$ are the same whether they are computed with respect to the $\VCp{-}{-}$ or the $\dotp{-}{-}$ inner product on $\operatorname{VC}$. \label{prop:samesies} \end{proposition} \begin{proof} Since the first two Moore--Penrose properties $F^+ F F^+ = F^+$ and $F F^+ F = F$ don't refer to an inner product, pseudoinverses computed with respect to any inner product obey these conditions. To see that $\bdry$ has the same Moore--Penrose pseudoinverse with respect to both the $\VCp{-}{-}$ and $\dotp{-}{-}$ inner products on $\operatorname{VC}$, it suffices to check that if $\mat{\bdry^+ \bdry}$ and $\mat{\bdry \bdry^+}$ are symmetric (that is, they are self-adjoint in $\dotp{-}{-}$ on both $\operatorname{VC}$ and $\operatorname{EC}$), then $\bdry \bdry^+$ and $\bdry^+ \bdry$ are self-adjoint (in $\left( \VC, \VCp{-}{-} \right)$ and $\left( \EC, \ECp{-}{-} \right)$). Since the inner product on $\left( \EC, \ECp{-}{-} \right)$ is standard, $(\bdry^+ \bdry)^\dag = (\bdry^+ \bdry)^T = \bdry^+ \bdry$ immediately. So suppose we have computed $\bdry^+$ with respect to $\dotp{-}{-}$ on both $\operatorname{VC}$ and $\operatorname{EC}$. 
It's known~(\cite[eq.\ 7]{Ghosh2008}) that if $L^+$ is computed with respect to $\dotp{-}{-}$ on $\operatorname{VC}$, then \begin{equation} \tilde{L}^{-1} = L^+ + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV} = \bdry^{T+} \bdry^+ + \frac{1}{\mathbf{v}}\ones{\verticesV}{\verticesV}. \label{eq:Ltilde inverse} \end{equation} Thus, using \lem{adjoint formula}, \begin{equation*} (\bdry \bdry^+)^\dag = \tilde{L} (\bdry \bdry^+)^T \tilde{L}^{-1} = (L + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}) \bdry \bdry^+ (L^+ + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}). \end{equation*} \prop{basic Laplacian properties} tells us that $\ones{\verticesV}{\verticesV} \bdry = 0$ since $\operatorname{im} \bdry = \ker \ones{\verticesV}{\verticesV}$. Thus, we can simplify the right hand side above and get \begin{equation*} (\bdry \bdry^+)^\dag = L (\bdry \bdry^+)(L^+ + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}) = L (\bdry^{T+} \bdry^T)(L^+ + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}), \label{eq:self-adjoint 2} \end{equation*} where we used $\bdry \bdry^+ = (\bdry \bdry^+)^T = \bdry^{T+} \bdry^T$. Again,~\prop{basic Laplacian properties} tells us that $\bdry^T \ones{\verticesV}{\verticesV} = 0$ since $\operatorname{im} \ones{\verticesV}{\verticesV} = \ker \bdry^T$. So we can simplify the right-hand side above and get \begin{equation} (\bdry \bdry^+)^\dag = L(\bdry^{T+} \bdry^T)L^+ = \bdry (\bdry^T \bdry^{T+}) (\bdry^T \bdry^{T+}) \bdry^+ = \bdry \bdry^+ \bdry \bdry^+ = \bdry \bdry^+ , \end{equation} where we used $ \bdry^{T} \bdry^{T+} = (\bdry^+ \bdry)^T = \bdry^+ \bdry$ and $(\bdry^+ \bdry)(\bdry^+ \bdry) = \bdry^+ \bdry$. This proves that $\bdry^+$ does not depend on whether we compute in $\left( \VC, \VCp{-}{-} \right)$ or $(\operatorname{VC},\dotp{-}{-})$. To prove the second part, assume that $(\bdry^T)^+$ has been computed with respect to $\dotp{-}{-}$, so that $\bdry^{T+} \bdry^T$ and $\bdry^T \bdry^{T+}$ are symmetric. 
We must show that $\bdry^{T+} \bdry^T$ and $\bdry^T \bdry^{T+}$ are self-adjoint. But $\bdry^{T+} \bdry^T = (\bdry \bdry^+)^T = \bdry \bdry^+$, which we just proved is self-adjoint in $\left( \VC, \VCp{-}{-} \right)$, and $\bdry^T \bdry^{T+} = (\bdry^+ \bdry)^T = \bdry^+ \bdry$, which is symmetric and hence self-adjoint in $\left( \EC, \ECp{-}{-} \right)$. Last, if we assume that $L^+$ has been computed with respect to $\dotp{-}{-}$, we know that $LL^+$ and $L^+L$ are symmetric and need to show they are self-adjoint. We note first that $L L^+ = \operatorname{proj}_{\operatorname{im} L}$ (in $\dotp{-}{-}$) and $L^+ L = \operatorname{proj}_{\operatorname{im} L^T} = \operatorname{proj}_{\operatorname{im} L}$ since $L = L^T$. Thus $L L^+ = L^+ L$. Now this proof goes exactly along the lines of the first one. As above, \begin{equation*} (L^+L)^\dag = \tilde{L}(L^+ L)^T\tilde{L}^{-1} = (L + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV})(L L^{T+})\tilde{L}^{-1}, \end{equation*} where we used $L^T = L$. But $\operatorname{im} L = \ker \ones{\verticesV}{\verticesV}$, so this simplifies to \begin{equation*} (L^+ L)^\dag = L (L L^{T+}) \tilde{L}^{-1} = L (L^+ L) (L^+ + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}), \end{equation*} where we've used $L L^{T+} = L^T L^{T+} = (L^+ L)^T = L^+ L$. Since $\operatorname{im} \ones{\verticesV}{\verticesV} = \ker L$, we have \begin{equation*} (L^+ L)^\dag = L (L^+ L) (L^+ + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}) = L L^+ L L^+ = L L^+ = L^+ L, \end{equation*} which completes the proof. \end{proof} As an immediate consequence, $\operatorname{im} \bdry^+ = \operatorname{im} \bdry^\dag = \operatorname{im} \bdry^T$, and the orthogonal projections $\bdry \bdry^+ = \operatorname{proj}_{\operatorname{im} \bdry}$, $\bdry^+ \bdry = \operatorname{proj}_{\operatorname{im} \bdry^+}$ are the same in either inner product.
Similarly, $L L^+ = \operatorname{proj}_{\operatorname{im} L} = \operatorname{proj}_{\operatorname{im} \bdry}$ is the same in either inner product. \begin{corollary} $\bdry^{*+} \colon \left( \ED, \EDp{-}{-} \right) \rightarrow \left( \VP, \VPp{-}{-} \right)$ and $\bdry^{*+} \colon \left( \ED, \EDp{-}{-} \right) \rightarrow (\operatorname{VP}, \dotp{-}{-})$ are the same operator. \label{cor:samesies star} \end{corollary} \begin{proof} We know from \prop{starplus is plusstar} that in the inner products on $\operatorname{Hom}(\operatorname{VC},\mathbb{R}^d)$ and $\operatorname{Hom}(\operatorname{EC},\mathbb{R}^d)$ induced by inner products on $\operatorname{VC}$, $\operatorname{EC}$, and $\mathbb{R}^d$, $(\bdry^*)^+ = (\bdry^+)^* = \bdry^{*+}$. Since we just proved that $\bdry^+$ is the same operator in either inner product on $\operatorname{VC}$, this implies that $\bdry^{*+}$ is as well. \end{proof} \subsection{Partial isometries} We have now done some careful technical work to set things up, and can start to collect some of the rewards. We will see that with respect to our inner products, the maps $\bdry$ and $\bdry^*$ have extremely nice properties. We first recall a definition from functional analysis: \begin{definition} A map $A \colon \left( V, \Vp{-}{-} \right) \rightarrow \left( W, \Wp{-}{-} \right)$ is a \emph{partial isometry} if, for all $x, y \in (\ker A)^\perp$, we have $\Vp{x}{y} = \Wp{Ax}{Ay}$. We call $(\ker A)^\perp = \operatorname{im} A^\dag = \operatorname{im} A^+$ the \emph{initial space} of the partial isometry and $\operatorname{im} A$ the \emph{final space} of the isometry. \label{def:partial isometry} \end{definition} \begin{proposition} The map $\bdry \colon \left( \EC, \ECp{-}{-} \right) \rightarrow \left( \VC, \VCp{-}{-} \right)$ is a partial isometry. \label{prop:bdy is a partial isometry} \end{proposition} \begin{proof} The proof is a computation. 
Using~\prop{samesies} and~\eqn{Ltilde inverse} we have \begin{align*} \VCp{\bdry u}{\bdry w} = \dotp{\bdry u}{\tilde{L}^{-1} \bdry w} = \dotp{\bdry u}{L^+ \bdry w} + \frac{1}{\mathbf{v}}\dotp{\bdry u}{\ones{\verticesV}{\verticesV} \bdry w}. \end{align*} On the far right $\dotp{\bdry u}{\ones{\verticesV}{\verticesV} \bdry w} = \dotp{u}{\bdry^T \ones{\verticesV}{\verticesV} \bdry w} = 0$, since $\operatorname{im} \ones{\verticesV}{\verticesV} = \ker \bdry^T$ by~\prop{basic Laplacian properties}. Again using~\prop{samesies}, we are left with \begin{equation*} \VCp{\bdry u}{\bdry w} = \dotp{\bdry u}{L^+ \bdry w} = \dotp{\bdry u}{\bdry^{T+} \bdry^+ \bdry w} = \dotp{\bdry^+ \bdry u}{\bdry^+ \bdry w} = \dotp{\operatorname{proj}_{\operatorname{im} \bdry^+} u}{\operatorname{proj}_{\operatorname{im} \bdry^+} w}. \end{equation*} Thus if $u, w$ are in the initial space $\operatorname{im} \bdry^+$, we have $\dotp{\operatorname{proj}_{\operatorname{im} \bdry^+} u}{\operatorname{proj}_{\operatorname{im} \bdry^+} w} = \dotp{u}{w}$. Note that $\dotp{-}{-}$ is our inner product on $\left( \EC, \ECp{-}{-} \right)$, so we have completed the proof. \end{proof} \begin{proposition} The map $\bdry^* \colon \left( \VP, \VPp{-}{-} \right) \rightarrow \left( \ED, \EDp{-}{-} \right)$ is a partial isometry. \label{prop:bdystar is a partial isometry} \end{proposition} \begin{proof} Suppose $X,Y \in \operatorname{VP}$. Now $\tilde{L} = \bdry\bdry^T + \frac{1}{\mathbf{v}}\ones{\verticesV}{\verticesV}$, so $\tilde{L}^* = (\bdry^{T*})(\bdry^*) + \frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV}^*$ and \begin{equation}\label{eq:Bstar partial isometry} \VPp{X}{Y} = \dotp{X}{\bdry^{*T} \bdry^* Y} + \dotp{X}{\frac{1}{\mathbf{v}}\ones{\verticesV}{\verticesV}^* Y} = \dotp{\bdry^*X}{\bdry^*Y} + \dotp{X}{\frac{1}{\mathbf{v}}\ones{\verticesV}{\verticesV}^* Y}. \end{equation} Suppose $X, Y$ are in the initial space $\operatorname{im} (\bdry^*)^\dag$.
Since $\operatorname{im} (\bdry^*)^\dag = \operatorname{im} (\bdry^*)^T$, we know $Y = (\bdry^{*T})W$ for some $W \in \operatorname{ED}$. Then the second term in~\eqref{eq:Bstar partial isometry} is equal to \begin{equation*} \dotp{X}{\frac{1}{\mathbf{v}}\ones{\verticesV}{\verticesV}^* \bdry^{T*} W} = \frac{1}{\mathbf{v}} \dotp{X}{(\bdry^T \ones{\verticesV}{\verticesV})^* W} = 0, \end{equation*} since $\operatorname{im} \ones{\verticesV}{\verticesV} = \ker \bdry^T$. \end{proof} \section{Probability measures} \label{sec:probability measures} We are now ready to build probability measures on our spaces. \begin{definition} \label{def:admissible everything} We say that $\mu$ is an~\emph{admissible measure} on $\operatorname{ED}$ if $\mu$ is an $O(d)$-invariant finite Radon measure on $\operatorname{ED}$ with $\mu(\operatorname{ED}) > 0$ and finite first moment. \end{definition} Recall that, while a purely measure-theoretic definition of Radon measure as a function on sets is standard, we can also view a Radon measure on $\mathbb{R}^n$ as a linear functional on the space $\mathcal{K}(\mathbb{R}^n)$ of continuous functions with compact support $\mathbb{R}^n \rightarrow \mathbb{R}$ via $\mu(f) = \mean{\mu}{f} = \int f(x) \mu(dx)$. A~\emph{probability measure} is a Radon measure with total mass $\mu(\mathbb{R}^n)$ one. A measure has~\emph{finite first moment} if the expected distance between two points is finite~\cite{Fritz2019}. We note that every $O(d)$-invariant probability measure on $\operatorname{ED}$ with finite first moment is certainly admissible, but we are not requiring the total mass of the measure to be normalized to one. This is mostly a matter of notational convenience. If our network model involves only interactions such as Gaussian springs, FENE potentials, and Lennard-Jones potentials, then $\mu$ is absolutely continuous with respect to Lebesgue measure and we have $\mu = p(W) \lambda^{d\mathbf{e}}$ for some continuous density function $p(W)$.
However, if our model involves equality constraints (such as fixed edge lengths), then $\mu$ may be singular. We know that $\operatorname{im} \bdry^*$ is the subspace of $\operatorname{ED}$ of edge displacements which may actually be reassembled (via $\bdry^{*+}$) into vertex positions in a way that's compatible with the graph structure. Therefore, our ultimate goal is to condition $\mu$ on the hypothesis $W \in \operatorname{im} \bdry^*$. However, $\operatorname{im} \bdry^*$ is a measure-zero subset of $\operatorname{ED}$, and conditioning on such sets is not always well-defined. So we now introduce some (basically technical) constructions designed to ensure that we can build a conditional probability measure $\mu_\mathbf{G}$ supported on $\operatorname{im} \bdry^*$. Recall that we have already introduced the loop space (\defn{loop space}) $\ker \bdry \subset \operatorname{EC}$. We now introduce the corresponding subspace of $\operatorname{ED}$. \begin{definition} The~\emph{incompatible displacement} space $\operatorname{ID} \subset \operatorname{ED}$ is $(\operatorname{im} \bdry^*)^\perp = \ker \bdry^{*+}$. If $\ell \in \operatorname{EC}$ is in the loop space, we call $W(\ell)$~\emph{the failure to close} of $W$ around $\ell$. \end{definition} To motivate our second definition, suppose we can write $\ell = e_{\ell_1} + \cdots + e_{\ell_k}$ where without loss of generality we assume that the edges are oriented so that $\operatorname{head} e_{\ell_i} = \operatorname{tail} e_{\ell_{i+1}}$ and $\operatorname{head} e_{\ell_k} = \operatorname{tail} e_{\ell_1}$. Then since $W(\ell) = W(e_{\ell_1}) + \dots + W(e_{\ell_k})$, it is natural to think of $W(\ell)$ as the failure of the loop $\ell$ to close. More generally, \prop{loop space} tells us that every $\ell \in \ker \bdry$ is a linear combination of loops in this simple form, and hence $W(\ell)$ is the corresponding linear combination of failures to close around those simple loops.
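The failure to close and the orthogonal projection onto $\operatorname{im} \bdry^*$ are easy to compute numerically. The following sketch (ours, not from the paper) uses the triangle graph; in matrix form $\bdry^* X = \mat{X}\mat{\bdry}$, so $\operatorname{im} \bdry^*$ consists of the $d \times \mathbf{e}$ matrices whose rows lie in the row space of $\mat{\bdry}$, and the projection below is taken in the standard inner product on $\operatorname{ED}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# The triangle graph: oriented edges e1 = v1 -> v2, e2 = v2 -> v3,
# e3 = v3 -> v1, so the chain ell = e1 + e2 + e3 spans ker(bdry).
bdry = np.array([[-1.0,  0.0,  1.0],
                 [ 1.0, -1.0,  0.0],
                 [ 0.0,  1.0, -1.0]])
ell = np.array([1.0, 1.0, 1.0])
assert np.allclose(bdry @ ell, 0)  # ell lies in the loop space

d = 3
W = rng.standard_normal((d, 3))  # arbitrary edge displacements in ED

# A generic W has a nonzero failure to close around ell.
print("failure to close:", W @ ell)

# Project W orthogonally onto im(bdry^*): row-wise projection onto
# the row space of bdry; ID is the orthogonal complement.
P = np.linalg.pinv(bdry) @ bdry
W_closed = W @ P
assert np.allclose(W_closed @ ell, 0)  # projected displacements close up

# Reassemble vertex positions via the pseudoinverse and check that
# their displacements recover W_closed.
X = W_closed @ np.linalg.pinv(bdry)
assert np.allclose(X @ bdry, W_closed)
```

The last two assertions illustrate Proposition~\ref{prop:basic ID properties}: a displacement with vanishing failure to close around a basis of $\ker \bdry$ comes from vertex positions.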
A few facts about $\operatorname{ID}$ will be useful: \begin{proposition}\label{prop:basic ID properties} We have: \begin{enumerate} \item $\dim \operatorname{ID} = d \xi(\mathbf{G})$, where $\xi(\mathbf{G})$ is the cycle rank of $\mathbf{G}$. \item If $\ell \in \ker \bdry \subset \operatorname{EC}$ is a loop in $\mathbf{G}$, then $W(\ell) = (\operatorname{proj}_{\operatorname{ID}} W)(\ell)$. \item $W$ is in $\operatorname{im} \bdry^*$ and hence $W = \operatorname{disp} X$ for some $X \in \operatorname{VP}$ if and only if the failure to close $W(\ell) = 0$ for all loops $\ell \in \ker \bdry$. \end{enumerate} \end{proposition} \begin{proof} We know from~\prop{ker and im of bdystar} that $\dim \operatorname{im} \bdry^* = d(\mathbf{v} - 1)$. Therefore \begin{equation*} \dim \operatorname{ID} = \dim (\operatorname{im} \bdry^*)^\perp = \dim \operatorname{ED} - \dim \operatorname{im} \bdry^* = d(\mathbf{e} - \mathbf{v} + 1) = d \xi(\mathbf{G}). \end{equation*} We know that $\operatorname{ID} \oplus \operatorname{im} \bdry^*$ is an orthogonal decomposition of $\operatorname{ED}$, so every $W \in \operatorname{ED}$ can be written as $W = W_{\operatorname{ID}} + W_{\operatorname{im} \bdry^*}$. But $(\operatorname{im} \bdry^*) = (\ker \bdry)^0$, so $W(\ell) = W_{\operatorname{ID}}(\ell)$, as required. The last claim follows immediately from $(\operatorname{im} \bdry^*) = (\ker \bdry)^0$ as well. \end{proof} We now recall a little background from probability theory: \begin{notation}\label{pushforward notation} If $(S_1,\mathcal{A}_1)$ and $(S_2, \mathcal{A}_2)$ are Borel spaces, $f: S_1 \to S_2$ is measurable, and $\mu$ is a measure on $S_1$, then we will use $f_\sharp \mu$ to denote the pushforward measure on $S_2$; i.e., the measure defined by \[ (f_\sharp\mu)(B) := \mu(f^{-1}(B)). 
\] \end{notation} The standard method in probability to construct conditional distributions is now to build a disintegration (cf.~\cite{Chang1997}) of $\mu$ relative to the map $\operatorname{proj}_{\operatorname{ID}}$ and Lebesgue measure on $\operatorname{ID}$. This construction yields conditional probability measures $\mu^W_{\graphG}$ concentrated on $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$ which are well-defined for~$\lambda^{d\xi(\mathbf{G})}$-almost every ${W \in \operatorname{ID}}$. However, we would like to restrict our attention to cases where we can define a unique probability measure on $\operatorname{im} \bdry^* = \operatorname{proj}_{\operatorname{ID}}^{-1}(0)$, so we will require a slightly stronger idea originally proposed by Tjur~\cite{Tjur1975,Tjur1980}. We first give a version of Tjur's definition of a conditional probability which applies in the cases we study: \begin{definition}[{Tjur~\cite[p.\ 6]{Tjur1975},~\cite[Sec.\ 9.7]{Tjur1980}}]\label{def:tjur conditional probability} Suppose we have open sets $X \subset \mathbb{R}^m$ and $Y \subset \mathbb{R}^n$, a Radon probability measure $\mu$ on $X$, and a continuous map $t \colon X \rightarrow Y$. For any $t_\sharp \mu$-measurable set $B \subset Y$ with $(t_\sharp \mu)(B) > 0$, we can define a Radon probability measure $\mu^{B}$ on $X$ by \begin{equation*} \mu^B(f) = \frac{1}{(t_\sharp\mu)(B)} \int_{t^{-1}(B)} f(x) \mu(dx) \end{equation*} for any $f \in \mathcal{K}(X)$.\footnote{Remember, $\mathcal{K}(X)$ is the space of continuous functions on $X$ with compact support.} For any $y \in Y$, a measure $\mu^y$ on $X$ is~\emph{the conditional distribution of $\mu$ given $t(x) = y$} if, for any $f \in \mathcal{K}(X)$ and any $\epsilon > 0$, there is an open neighborhood $V$ of $y$ in $Y$ so that, for any $B \subset V$ with $t_\sharp \mu(B) > 0$, we have $\abs{\mu^y(f) - \mu^B(f)} < \epsilon$. We note that if $\mu^y$ exists, it is unique. Further, it is concentrated on $t^{-1}(y)$.
\end{definition} Intuitively, this definition says that $\mu^y$ (if it exists) is the (weak$^*$) limit of $\mu^B$ as the sets $B$ approach $y$. Tjur makes precise the notion of ``sets $B$ approaching a point $y$''~\cite[Definition 3.1]{Tjur1975}, but we don't need to worry about the details here. The observation that $\mu^y$ is unique if it exists is due to Tjur as well, so any reasonable definition of $\mu^y$ as a limit of $\mu^B$ yields the same result. We can now define compatibility for one of our measures with a graph structure: \begin{definition}\label{def:G-compatible} We say that $\mu$ is~\emph{compatible with $\mathbf{G}$} if $\mu$ is admissible and there is an open ball $U \subset \operatorname{ID}$ centered at $0$ so that the conditional distributions $\mu^W_{\graphG} := \mu^W$ constructed using $\mu$ as the Radon measure on $\operatorname{ED}$ and $\operatorname{proj}_{\operatorname{ID}}$ as the continuous map $\operatorname{ED} \rightarrow \operatorname{ID}$ are defined for all $W \in U$. We define $\mu_{\operatorname{ID}} := (\operatorname{proj}_{\operatorname{ID}})_\sharp \mu$ and $\mu_{\graphG} := \mu^0_{\graphG}$. \end{definition} This definition is certainly straightforward, but as written it may seem difficult to check. The following proposition shows that compatibility is automatic (and the conditional distributions have a familiar form) in a wide variety of cases where $\mu$ is given in terms of a density function. \begin{proposition}\label{prop:disintegration with density} Suppose that $\mu$ on $\operatorname{ED}$ is $O(d)$-invariant and has a continuous density $p(Z)$ with respect to Lebesgue measure $\lambda^{d\mathbf{e}}$ on $\operatorname{ED}$. Further, suppose there are open sets $U' \subset \operatorname{ED}$ and $U \subset \operatorname{ID}$ so that $\operatorname{proj}_{\operatorname{ID}}(U') = U$, $0 \in U$, and $p(Z) > 0$ on $U'$. Last, suppose $p(Z) \in o(\norm{Z}^n)$ for all $n \in \mathbb{Z}$.
Then $\mu$ is admissible and compatible with $\mathbf{G}$; moreover, $\mu_{\operatorname{ID}}$ has a continuous, positive density with respect to Lebesgue measure $\lambda^{d\xi(\mathbf{G})}$ on $U$ given by \begin{equation*} m_W = \int_{Z \in \operatorname{proj}_{\operatorname{ID}}^{-1}(W)} p(Z) \haus{d(\mathbf{v}-1)}(dZ), \end{equation*} and the conditional distributions $\mu^W_{\graphG}$ (in the sense of~\defn{tjur conditional probability}) for all $W \in U$ are given explicitly by \begin{equation*} \mu^W_{\graphG}(f) = \frac{1}{m_W} \int_{Z \in \operatorname{proj}_{\operatorname{ID}}^{-1}(W)} f(Z) p(Z) \haus{d(\mathbf{v}-1)}(dZ). \end{equation*} \end{proposition} Above, $\haus{d(\mathbf{v}-1)}$ is the Hausdorff (or surface) measure on the $d(\mathbf{v}-1)$-dimensional affine subspace $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$ of $\operatorname{ED}$. Our hypotheses require that any neighborhood of $0$ in $\operatorname{ID}$ is assigned a positive probability by $\mu_{\operatorname{ID}}$ (that is, there is a nonzero chance of random configurations of edge displacements which are nearly consistent with the graph $\mathbf{G}$), which certainly seems reasonable. We note that if our model implies a mixture of lower and upper bounds on distances between vertices, this hypothesis can fail to be satisfied in somewhat subtle ways (all of the generalized triangle inequalities on these distances must be simultaneously satisfiable), so it's difficult to make a more general statement about when this happens. Further, we require the decay condition $p(Z) \in o(\norm{Z}^n)$ for all $n \in \mathbb{Z}$, which states that $p(Z)$ decays faster than any polynomial as $\norm{Z} \rightarrow \infty$. This will ensure continuity of the $m_W$ by preventing $m_{W_i} \rightarrow \infty$ as $W_i \rightarrow W$. We note that this is automatic when $p(Z)$ has compact support, and that it could be weakened to polynomial decay for a particular $n$ depending on $\mathbf{G}$.
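In one special case the conditional measure is easy to describe without computing any of these integrals: if the edge displacements are i.i.d. standard Gaussians, the components of $W$ along $\operatorname{im} \bdry^*$ and along $\operatorname{ID}$ are independent, so conditioning on $\operatorname{proj}_{\operatorname{ID}} W = 0$ simply restricts the standard Gaussian to $\operatorname{im} \bdry^*$. A sampling sketch (our own illustration, with the boundary map of the 3-cycle written as an assumed incidence matrix and $\operatorname{ED}$ as $d \times \mathbf{e}$ matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
B = np.array([[-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  1.0, -1.0]])   # incidence matrix of the 3-cycle
v, e = B.shape
# Projection of R^e onto the row space of B, applied to each coordinate row
# of W; this is the matrix form of projection onto im(boundary^*).
P_im = np.linalg.pinv(B) @ B

# For iid standard Gaussian edge displacements, a sample from the conditional
# measure is the orthogonal projection of a full Gaussian sample.
samples = [rng.standard_normal((d, e)) @ P_im for _ in range(1000)]

# Every conditional sample closes up around the loop e_1 + e_2 + e_3 ...
ell = np.ones(e)
assert all(np.allclose(Wg @ ell, 0, atol=1e-9) for Wg in samples)

# ... and the empirical mean is near zero, as symmetry demands.
mean = sum(samples) / len(samples)
assert np.linalg.norm(mean) < 0.5
```

This projection trick is special to isotropic Gaussians; for the general densities of the proposition one really does need the conditional density $p(Z)/m_W$ on the fiber.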
This is very close to the definition of conditional probability in terms of marginal densities that is usually given in textbooks; we are simply giving alternate hypotheses which ensure that the marginal density $m_W$ is everywhere defined, positive, and continuous. \begin{proof} We note that Lemma~2.5 of~\cite{Fritz2019} observes that $\mu$ has finite first moment $\iff$ there is some $Z_0 \in \operatorname{ED}$ so that $\mean{\mu}{\norm{Z-Z_0}}$ is finite. Our decay condition on $p(Z)$ implies this for $Z_0 = 0$, and also implies that the total mass $\mean{\mu}{1}$ is finite. Of course, any measure with a continuous density with respect to Lebesgue measure is Radon. Theorem 8.1 of~\cite{Tjur1975} does almost all the work here (recalling that $\operatorname{proj}_{\operatorname{ID}}$ is an orthogonal projection, so its normal Jacobian is constant and equal to one); the only thing we have to prove is that $m_W$ is a continuous, positive function of $W$. We note that for any $W \in U$, the set $\operatorname{proj}_{\operatorname{ID}}^{-1}(W) \cap U'$ is nonempty (since $\operatorname{proj}_{\operatorname{ID}}(U') = U$), open in $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$, and $p(Z) > 0$ on it. Therefore, the integral $m_W$ of the (everywhere non-negative) $p(Z)$ over $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$ is positive. We now show continuity. Suppose we have $W_i \rightarrow W$ in $\operatorname{ID}$. The affine subspaces $\operatorname{proj}_{\operatorname{ID}}^{-1}(W_i)$ are all of the form $\operatorname{im} \bdry^* + W_i$. For each $Z \in \operatorname{im} \bdry^*$, we can define $q_i(Z) := p(W_i + Z)$ and $q(Z) := p(W + Z)$. Now \begin{equation*} m_{W_i} = \int_{Z \in \operatorname{proj}_{\operatorname{ID}}^{-1}(W_i)} p(Z) \haus{d(\mathbf{v}-1)}(dZ) = \int_{Z \in \operatorname{im} \bdry^*} q_i(Z) \haus{d(\mathbf{v}-1)}(dZ). \end{equation*} Since $p$ is continuous on $\operatorname{ED}$ and $W_i \rightarrow W$, we have $q_i(Z) \rightarrow q(Z)$ pointwise.
By the Lebesgue dominated convergence theorem, to show that the integrals $m_{W_i} \rightarrow m_W$ it now suffices to show that the $q_i$ are dominated by some integrable function $g$ on $\operatorname{im}\bdry^*$. Since the $q_i$ are shifts of the continuous function $p$ by the $W_i$, which lie in a bounded subset of $\operatorname{ED}$ (because the $W_i$ converge), the $q_i$ are uniformly bounded on any compact subset of $\operatorname{im}\bdry^*$. By the same logic, since $p(Z)/\norm{Z}^n \rightarrow 0$ as $\norm{Z} \rightarrow \infty$ for any $n$, for each $n$ there is a single ball $B_n \subset \operatorname{im}\bdry^*$ so that $q_i(Z) < \norm{Z}^n$ for all $i$ and all $Z$ outside $B_n$. We've already argued that there is some constant $C_n$ with $q_i < C_n$ on $B_n$ for all $i$, so we can now construct a function $g_n(Z)$ equal to $C_n$ inside $B_n$ and $\norm{Z}^n$ outside $B_n$ and observe that $g_n > q_i$. If we take $n < 0$ so that $\abs{n}$ is sufficiently large, $g_n$ will be integrable on $\operatorname{im} \bdry^*$ (that is, $\int_{\operatorname{im} \bdry^*} g_n(Z) \haus{d(\mathbf{v}-1)}(dZ) < \infty$), completing the proof. \end{proof} It's sometimes easier to use these alternate hypotheses: \begin{corollary}\label{cor:compatibility for joint distributions} Suppose that we have $O(d)$-invariant probability distributions $\rho_1, \dots, \rho_\mathbf{e}$ on the edges $e_1, \dots, e_\mathbf{e}$ of $\mathbf{G}$ and further suppose that each $\rho_i$ has a continuous density with respect to Lebesgue measure $\rho_i(dx) = p_i(\norm{x}) \lambda^{d}$ on $\mathbb{R}^d$ and that each $p_i$ is bounded, positive in a neighborhood of $0$, and has $p_i(\norm{x}) \in o(\norm{x}^n)$ for all $n \in \mathbb{Z}$. If $\mu$ is the joint distribution of independent edge displacements sampled from the $\rho_i$, then $\mu$ has density \begin{equation*} p(Z) = p_1(\norm{Z(e_1)}) \,\cdots\, p_\mathbf{e}(\norm{Z(e_\mathbf{e})})
\end{equation*} with respect to Lebesgue measure $\lambda^{d\mathbf{e}}$ on $\operatorname{ED}$ and the conclusions of~\prop{disintegration with density} hold. \end{corollary} \begin{proof} As in the proof of~\prop{disintegration with density} we need only show that each \begin{equation*} m_W = \int_{Z \in \operatorname{proj}_{\operatorname{ID}}^{-1}(W)} p(Z) \haus{d(\mathbf{v} - 1)}(dZ) \end{equation*} is positive and continuous in a neighborhood of $0$ in $\operatorname{ID}$. To see that $m_W$ is positive for small enough $W$, observe that $S(W) := \operatorname{proj}_{\operatorname{ID}}^{-1}(W)$ is an affine subspace of $\operatorname{ED}$ whose closest point to the origin is at distance $\norm{W}$. Therefore, by choosing $W$ small enough, we can guarantee that $S(W)$ intersects any open neighborhood of $0$ in $\operatorname{ED}$ in a set of positive measure. Now we know that \begin{equation*} p(Z) = p_1(\norm{Z(e_1)}) \,\cdots\, p_\mathbf{e}(\norm{Z(e_\mathbf{e})}). \end{equation*} It's an exercise to show that \begin{equation}\label{eq:ri bounds} \frac{1}{\sqrt{\mathbf{e}}} \norm{Z} \leq \max_i \norm{Z(e_i)} \leq \norm{Z}. \end{equation} In particular, since we've assumed that each $p_i$ is positive in a neighborhood of $0$, there is some $B>0$ so that all $p_i(\norm{Z(e_i)})$ are positive for $\norm{Z(e_i)} \leq \norm{Z} \leq B$. Thus for any $W$ with $\norm{W} < B$, $p(Z)$ is positive on a subset of $S(W)$ with positive measure. This establishes that these $m_W$ are positive. Since the $p_i$ are all bounded, without loss of generality they are all bounded by a common constant $C>0$. It follows from~\eqn{ri bounds} that for each $Z$ there is some $i$ so that \begin{equation*} p(Z) \leq C^{\mathbf{e}-1} p_i\left(\frac{1}{\sqrt{\mathbf{e}}} \norm{Z}\right). \end{equation*} Since each $p_i(\norm{x})$ is in $o(\norm{x}^{n})$ (and there are a fixed number of $p_i$), this implies that $p(Z)$ is in $o(\norm{Z}^{n})$.
The remainder of the argument follows as in the proof of~\prop{disintegration with density}. \end{proof} Most of the models we'd like to consider fall under~\cor{compatibility for joint distributions}: \begin{corollary}\label{cor:phantom network theory works} Suppose we have the Gaussian phantom network model of James--Guth, where $\mu$ is the joint distribution of independent edge displacements, each distributed according to a mean-zero Gaussian on $\mathbb{R}^d$. Then $\mu$ is admissible and compatible with $\mathbf{G}$. \end{corollary} \begin{corollary} Suppose that $\mu$ is the joint distribution of independent edge displacements distributed according to Boltzmann distributions $p_i(x) \sim \exp( -f_i(x) )$ where the energy functions $f_i(x)$ are in $\Omega(\norm{x}^\alpha)$ for some positive $\alpha$ as $\norm{x} \rightarrow \infty$. Then $\mu$ is admissible and compatible with $\mathbf{G}$. \end{corollary} Now that we have shown that $\mu$ is compatible with $\mathbf{G}$ often enough to make~\defn{G-compatible} interesting, we establish some properties of the $\mu^W_{\graphG}$. \begin{proposition}\label{prop:disintegration} If $\mu$ is compatible with $\mathbf{G}$, we have for each $W$ in $U$ that $\mu^W_{\graphG}$ is concentrated on $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$, and $\mu_{\graphG}$ is concentrated on $\operatorname{im} \bdry^* = \operatorname{proj}_{\operatorname{ID}}^{-1}(0)$. Further, if $Q \in O(d)$, then $\mu_{\operatorname{ID}} = Q_\sharp \mu_{\operatorname{ID}}$, $Q_\sharp \mu^W_{\graphG} = \mu^{Q(W)}_{\graphG}$ and $Q_\sharp \mu_{\graphG} = \mu_{\graphG}$. \end{proposition} \begin{proof} Proving this requires us to introduce Tjur's idea of a~\emph{decomposition} of a measure with respect to a map, which is like a disintegration~(cf.\, \cite{Chang1997}) but somewhat stronger: \begin{definition}[{\cite[Definition 6.1]{Tjur1975}}]\label{defn:decomposition} Let $t \colon X \rightarrow Y$ be a continuous function and $\lambda$ be a measure on $X$.
A family $\lambda_y$ (for $y \in Y$) and a measure $\lambda'$ on $Y$ are called a~\emph{decomposition of $\lambda$ with respect to $t$} if \begin{enumerate} \item The mapping $y \mapsto \lambda_y$ is continuous (in the weak$^*$ topology on measures), \item Each $\lambda_y$ is concentrated on $t^{-1}(y)$, and \item For any $f \in \mathcal{K}(X)$, $\int \lambda_y(f) \lambda'(dy) = \lambda(f)$. \end{enumerate} \end{definition} Conditional distributions are connected to decompositions very tightly by the following: \begin{theorem}[{\cite[Theorem 7.1]{Tjur1975}}]\label{thm:decomposition and existence} Let $X \subset \mathbb{R}^m$ and $Y \subset \mathbb{R}^n$ be open sets, $t \colon X \rightarrow Y$ be a continuous map, and $\mu$ be a probability measure on $X$. Suppose that $\lambda_y$ ($y \in Y$) is a family of probability measures on $X$. The $\lambda_y$ together with $t_\sharp \mu$ form a decomposition of $\mu$ with respect to $t$ $\iff$ the conditional distribution $\mu^y$ is defined for all $y \in Y$ and $\mu^y = \lambda_y$. \end{theorem} By~\thm{decomposition and existence}, $\mu$ is compatible with $\mathbf{G}$ $\iff$ the $\mu^W_{\graphG}$ and $\mu_{\operatorname{ID}}$ are a decomposition of $\mu$ with respect to $\operatorname{proj}_{\operatorname{ID}}$. This implies first that each $\mu^W_{\graphG}$ is concentrated on $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$. Because $\mu$ is $\mathbf{G}$-compatible, it is admissible, and therefore $O(d)$-invariant: for any $Q \in O(d)$, $Q_\sharp \mu = \mu$. We now show that this implies that $Q_\sharp \mu_{\operatorname{ID}} = \mu_{\operatorname{ID}}$. The action of $Q$ on $\operatorname{ED}$ is multiplication by the $d\mathbf{e} \times d\mathbf{e}$ matrix $Q \otimes I_{\mathbf{e}}$. As a matrix, $\operatorname{proj}_{\operatorname{ID}} = I - \bdry^* \bdry^{*+} = I_d \otimes (I_\mathbf{e} - \bdry^{T} \bdry^{T+})$.
It follows that \begin{equation*} Q \operatorname{proj}_{\operatorname{ID}} = (Q \otimes I_{\mathbf{e}})(I_d \otimes (I_\mathbf{e} - \bdry^{T} \bdry^{T+})) = Q \otimes (I_\mathbf{e} - \bdry^{T} \bdry^{T+}) = (I_d \otimes (I_\mathbf{e} - \bdry^{T} \bdry^{T+}))(Q \otimes I_{\mathbf{e}}) = \operatorname{proj}_{\operatorname{ID}} Q, \end{equation*} where we used \lem{mixed product} in the middle steps. Now we know that \begin{equation*} Q_\sharp \mu_{\operatorname{ID}} = Q_\sharp (\operatorname{proj}_{\operatorname{ID}})_\sharp \mu = (\operatorname{proj}_{\operatorname{ID}})_\sharp Q_\sharp \mu = (\operatorname{proj}_{\operatorname{ID}})_\sharp \mu = \mu_{\operatorname{ID}}. \end{equation*} We now prove $Q_\sharp \mu^W_{\graphG} = \mu^{Q(W)}_{\graphG}$, following the lines of Example 7 in~\cite{Chang1997}. Fix some $Q \in O(d)$. By~\thm{decomposition and existence}, it suffices to show that $\mu^{Q(W)}_{\graphG}$ and $Q_\sharp \mu^W_{\graphG}$ are both decompositions of $\mu$ with respect to $Q^{-1} \circ \operatorname{proj}_{\operatorname{ID}}$. We start with the $\mu^{Q(W)}_{\graphG}$. We already know that $W \mapsto \mu^W_{\graphG}$ is (weak$^*$) continuous in $W$. Since $W \mapsto Q(W)$ is also continuous, the composition $W \mapsto \mu^{Q(W)}_{\graphG}$ is continuous. Since each of the measures $\mu^W_{\graphG}$ is concentrated on $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$, the measure $\mu^{Q(W)}_{\graphG}$ is concentrated on $\operatorname{proj}_{\operatorname{ID}}^{-1}(Q(W)) = (Q^{-1} \circ \operatorname{proj}_{\operatorname{ID}})^{-1}(W)$. Now suppose we have some $f \in \mathcal{K}(\operatorname{ED})$. We have \begin{align*} \int_{\operatorname{ED}} f(Z) \mu(dZ) &= \int_{U} \mu^W_{\graphG}(f) \, \mu_{\operatorname{ID}}(dW) = \int_{U} \mu^W_{\graphG}(f) \, Q_\sharp \mu_{\operatorname{ID}}(dW) = \int_{U} \mu^{Q(W)}_{\graphG}(f) \, \mu_{\operatorname{ID}}(dW).
\end{align*} We have now established that the $\mu^{Q(W)}_{\graphG}$ are a $Q^{-1} \circ \operatorname{proj}_{\operatorname{ID}}$-decomposition of $\mu$. We now show the same for the $Q_\sharp \mu^W_{\graphG}$, but it will help to prove this as a lemma about decompositions in general, because we'll repeat similar arguments later. \begin{lemma}\label{lem:decompositions push forward} Suppose $t \colon X \rightarrow Y$ is a continuous function and $\lambda$ is a measure on $X$, and further suppose we have a family of measures $\lambda_y$ (for $y \in Y$) and a measure $\lambda'$ on $Y$ which are a decomposition of $\lambda$ with respect to $t$. Further, suppose that we have a Borel map $g \colon X \rightarrow Z$ and a map $s \colon Z \rightarrow Y$ so that $s \circ g = t$. Then the pushforwards $g_\sharp \lambda_y$ and the measure $\lambda'$ on $Y$ are a decomposition of $g_\sharp \lambda$ with respect to $s$. \end{lemma} \begin{proof}[Proof of lemma] Since $y \mapsto \lambda_y$ is a continuous map from $Y$ to the space of Radon measures, and $\nu \mapsto g_\sharp \nu$ is a continuous map between spaces of measures, the composition $y \mapsto g_\sharp \lambda_y$ is continuous. Suppose we have some open set $U \subset Z$ so that $U \cap s^{-1}(y) = \emptyset$. We claim that $g_\sharp \lambda_y (U) = 0$. We know that $g_\sharp \lambda_y (U) = \lambda_y (g^{-1}(U))$. We observe that $t(g^{-1}(U)) = s(g(g^{-1}(U))) \subset s(U)$. In particular, $y \not\in t(g^{-1}(U))$ since $y \notin s(U)$. Thus $\lambda_y (g^{-1}(U)) = 0$, as desired. Suppose we have $f \in \mathcal{K}(Z)$. Then \begin{equation*} g_\sharp \lambda(f) = \lambda (f \circ g) = \int \lambda_y(f \circ g) \lambda'(dy) = \int (g_\sharp \lambda_y) (f) \lambda'(dy), \end{equation*} as desired.
\end{proof} Now if we let $Q \colon \operatorname{ED} \rightarrow \operatorname{ED}$ and $\operatorname{proj}_{\operatorname{ID}} \colon \operatorname{ED} \rightarrow \operatorname{ID}$, we see that $(Q^{-1} \circ \operatorname{proj}_{\operatorname{ID}}) \circ Q = Q^{-1} \circ Q \circ \operatorname{proj}_{\operatorname{ID}} = \operatorname{proj}_{\operatorname{ID}}$ (since $Q$ and $\operatorname{proj}_{\operatorname{ID}}$ commute), and so by~\lem{decompositions push forward}, the $Q_\sharp \mu^W_{\graphG}$ are a decomposition of $\mu$ with respect to $Q^{-1} \circ \operatorname{proj}_{\operatorname{ID}}$, as desired. \end{proof} We are now going to examine two singular measures $\mu$: one where we can establish compatibility and one where we can see that compatibility fails. In these cases, our arguments will be much more specific to the model. We are first going to recall a key fact about the freely jointed chain: \begin{proposition}\label{prop:conditional probability for arms} If $\mathbf{e} \geq 3$, $X = \mathbb{R}^{3\mathbf{e}}$, and $\mu$ is the product of uniform area measures $\mu_i$ on the unit spheres $S^2 \subset \mathbb{R}^3$, $Y = \mathbb{R}^3$, and $\operatorname{ftc} \colon X \rightarrow Y$ is the vector sum $\operatorname{ftc}(x) = x_1 + \cdots + x_\mathbf{e} = y$, then there are well-defined conditional probabilities $\mu^y$ for each $y$ with $\norm{y} < \mathbf{e}$, and the pushforward measure is \begin{equation}\label{eq:failure to close for equilateral edges} \begin{aligned} \operatorname{ftc}_\sharp \mu(dy) &= \left(\frac{1}{2\pi^2 \ell} \int_0^\infty s \sin(\ell s) \operatorname{sinc}^\mathbf{e} s \,ds \right) \lambda^{3}(dy) \\ &= \left( \frac{\mathbf{e}-1}{2^{\mathbf{e}+1}\pi\ell}\sum_{k=0}^{\mathbf{e}-1}\frac{(-1)^k }{k!(\mathbf{e}-k-1)!} \left((\mathbf{e}+\ell-2k-2)_+^{\mathbf{e}-2}-(\mathbf{e}+\ell-2k)_+^{\mathbf{e}-2}\right) \right) \lambda^{3}(dy) \end{aligned} \end{equation} where $x_+ = \max \{ x, 0\}$ and $\ell = \norm{y}$. These also form a decomposition of $\mu$ with respect to $\operatorname{ftc}$.
\end{proposition} \begin{proof} The computation of the pushforward density and the expression as a $\operatorname{sinc}$ integral can be traced back to Rayleigh~\cite{Rayleigh:1919do}. The existence of the conditional probabilities is more or less standard, but we outline the argument in order to connect it to decompositions explicitly. We start by defining a singular set $\Delta \subset (S^2)^\mathbf{e}$ as the set of all configurations where $x_i = \pm x_j$ for all $i, j \in 1, \dots, \mathbf{e}$. This is a finite union of submanifolds of dimension $2$ inside $(S^2)^\mathbf{e}$ and so we note that $\haus{k}(\Delta) = 0$ for any $k > 2$. We now restrict our attention to $(S^2)^\mathbf{e} - \Delta$. The map $(z,\theta) \mapsto (\sqrt{1 - z^2} \cos \theta, \sqrt{1 - z^2} \sin \theta, z)$ writing $S^2$ in cylindrical coordinates is area-preserving and so pushes forward Lebesgue measure $\lambda^2(dz d\theta)$ to the area measure on $S^2$. These particular coordinates don't cover the north and south poles $(0,0,\pm 1)$, but employing similar constructions with respect to $x$ and $y$ and a partition of unity, we may construct a finite number of coordinate patches $\operatorname{cyl}_k \colon Z_k \rightarrow X_k$ where the $Z_k$ are an open cover of $(-1,1)^\mathbf{e} \times (0,2\pi)^\mathbf{e}$, the $X_k$ are an open cover of $(S^2)^\mathbf{e}$, the $\operatorname{cyl}_k$ are surjective, and each patch comes with a smooth, positive density function $\alpha_k$ so $\sum (\operatorname{cyl}_k)_\sharp \alpha_k \lambda^{2 \mathbf{e}} = \mu$. We define $\Delta_k := (\operatorname{cyl}_k)^{-1}(\Delta)$, and note that $\Delta_k$ is some dimension $2$ submanifold of $Z_k$. We now consider the maps $\operatorname{ftc} \circ \operatorname{cyl}_k \colon Z_k \rightarrow \mathbb{R}^3$. Since $\operatorname{cyl}_k$ is a diffeomorphism, its differential is always invertible. 
But the differential of $\operatorname{ftc}$ at $x$ (restricted to the tangent space of $(S^2)^\mathbf{e}$) fails to be surjective if and only if all the $x_i$ are collinear: exactly when $x \in \Delta$. Combining these, we see that the differential of $\operatorname{ftc} \circ \operatorname{cyl}_k$ is surjective on $Z_k - \Delta_k$. The normal Jacobian $J_3(\operatorname{ftc} \circ \operatorname{cyl}_k) = \det( D\operatorname{ftc} D\operatorname{cyl}_k D\operatorname{cyl}_k^T D\operatorname{ftc}^T )^{1/2}$ is therefore positive on $Z_k - \Delta_k$. By Theorem 8.1 of~\cite{Tjur1975}, we can construct decompositions $\alpha_k^y$ of each $\alpha_k \lambda^{2\mathbf{e}}$ with respect to $\operatorname{ftc} \circ \operatorname{cyl}_k$ on the open sets $Z_k - \Delta_k$, where \begin{equation*} q_k(y) = \int\limits_{z \in (\operatorname{ftc} \circ \operatorname{cyl}_k)^{-1}(y) - \Delta_k} \frac{\alpha_k(z)}{J_3 (\operatorname{ftc} \circ \operatorname{cyl}_k)(z)} \haus{2\mathbf{e}-3}(dz) \end{equation*} and\footnote{Here the notation $\nu \lefthalfcup U$ means the restriction of the measure $\nu$ to the subset $U$.} \begin{equation*} \alpha_k^y = \frac{1}{q_k(y)} \frac{\alpha_k(z)}{J_3 (\operatorname{ftc} \circ \operatorname{cyl}_k)(z)} (\haus{2\mathbf{e}-3} \lefthalfcup ((\operatorname{ftc} \circ \operatorname{cyl}_k)^{-1}(y) - \Delta_k))(dz) \end{equation*} and the pushforward measure $(\operatorname{ftc} \circ \operatorname{cyl}_k)_\sharp \alpha_k \lambda^{2\mathbf{e}}$ has density $q_k(y)$ with respect to $\lambda^{3}$ as long as the $q_k(y)$ are continuous and positive in $y$. The $q_k$ are integrals of positive quantities and hence positive; to see that they are finite, recall that $\sum_k (\operatorname{cyl}_k)_\sharp \alpha_k \lambda^{2\mathbf{e}} = \mu$, so $\sum_k (\operatorname{ftc} \circ \operatorname{cyl}_k)_\sharp \alpha_k \lambda^{2\mathbf{e}} = \operatorname{ftc}_\sharp \mu$, which is given by the finite positive density in~\eqn{failure to close for equilateral edges}. We now want to assemble our work.
We can define a map $\operatorname{cyl} \colon \bigsqcup_k (Z_k - \Delta_k) \rightarrow (S^2)^\mathbf{e} - \Delta$ by letting the restriction of $\operatorname{cyl}$ to $Z_k - \Delta_k$ be $\operatorname{cyl}_k$ and a measure $\alpha$ on $\bigsqcup_k (Z_k - \Delta_k)$ whose restriction to $Z_k - \Delta_k$ is $\alpha_k \lambda^{2\mathbf{e}}$. If we also define $\alpha^y$ on $\bigsqcup_k (Z_k - \Delta_k)$ by letting its restriction to $Z_k - \Delta_k$ be $\alpha_k^y$, it then follows that the $\alpha^y$ are a decomposition of $\alpha$ with respect to $\operatorname{ftc} \circ \operatorname{cyl}$. Now we have noted above that $\haus{m}(\Delta) = 0$ for any $m > 2$, and the same holds for each $\Delta_k$. Since $\mathbf{e} \geq 3$, we see that $\haus{2\mathbf{e}-3}(\Delta_k) = 0$ and we may rewrite our decomposition as \begin{equation*} q_k(y) = \int\limits_{z \in (\operatorname{ftc} \circ \operatorname{cyl}_k)^{-1}(y)} \frac{\alpha_k(z)}{J_3 (\operatorname{ftc} \circ \operatorname{cyl}_k)(z)} \haus{2\mathbf{e}-3}(dz) \end{equation*} and \begin{equation*} \alpha_k^y = \frac{1}{q_k(y)} \frac{\alpha_k(z)}{J_3 (\operatorname{ftc} \circ \operatorname{cyl}_k)(z)} (\haus{2\mathbf{e}-3} \lefthalfcup (\operatorname{ftc} \circ \operatorname{cyl}_k)^{-1}(y))(dz) \end{equation*} without changing it, as long as (by convention) we replace the integrand by $1$ where it is not defined. Therefore the $\alpha^y$ are~\emph{also} a decomposition of $\alpha$ with respect to $\operatorname{ftc} \circ \operatorname{cyl} \colon \bigsqcup_k Z_k \rightarrow (S^2)^\mathbf{e}$.
We would like to call attention to this step because it is the only one in the proof which is nonstandard: in a generic situation, one would know from Sard's theorem that the singular set $\Delta$ had $\haus{2\mathbf{e}}$-measure zero, so it could be ignored when computing expectations over all of $(S^2)^\mathbf{e}$ with respect to $\mu$, but one would~\emph{not} then be able to conclude that one could ignore the singular set when computing expectations with respect to~\emph{conditional} probabilities, which are integrals over lower-dimensional spaces. We know that $\mu = \sum (\operatorname{cyl}_k)_\sharp \alpha_k \lambda^{2 \mathbf{e}} = \operatorname{cyl}_\sharp \alpha$. We define $\mu^y = \operatorname{cyl}_\sharp \alpha^y$. It follows from~\lem{decompositions push forward} that the $\mu^y$ are a decomposition of $\mu$ with respect to $\operatorname{ftc} \colon (\mathbb{R}^3)^\mathbf{e} \rightarrow \mathbb{R}^3$. \end{proof} We have an immediate corollary: \begin{corollary}\label{cor:freely jointed ring admissible} Let $\mathbf{e} \geq 3$, suppose $\mathbf{G}$ is the $\mathbf{e}$-edge cycle graph, and let $\mu$ be the product of (uniform) area measures on the product of unit spheres $(S^2)^\mathbf{e} \subset \operatorname{ED} = (\mathbb{R}^3)^\mathbf{e}$. Then $\mu$ is admissible and compatible with~$\mathbf{G}$. \end{corollary} \begin{proof} We note that the fact that $\mu$ has finite mass and finite first moment follows directly from the fact that $\mu$ has compact support. \end{proof} By contrast, suppose we took $\mathbf{G}$ to be the $\mathbf{e}$-cycle graph and let $\mu$ be supported on the intersection of $(S^2)^\mathbf{e}$ and the set where $\norm{W(\ell_1)} = 1$. The measure $\mu$ would still be Radon and $O(3)$-invariant, and therefore admissible. But the pushforward measure $\mu_{\operatorname{ID}}$ would be supported on the unit sphere $\norm{W(\ell_1)} = 1$ (and by $O(3)$-invariance, be the area measure on that sphere).
Thus we cannot define conditional probabilities for $W(\ell_1)$ near $0$ and this $\mu$ is~\emph{not} compatible with $\mathbf{G}$. As we pointed out after the proof of Proposition~\ref{prop:ker and im of bdystar}, collections of vertex positions have the same edge displacements if and only if they are related by a translation. This means that the map $\bdry^{*+}$, which reconstructs vertex positions from edge displacements, must choose a particular translation of the vertices. We now show that this choice is natural. \begin{definition}\label{def:centered} We say that $X \in \operatorname{VP}$ is a~\emph{centered} embedding of $\mathbf{G}$ if the center of mass $\frac{1}{\mathbf{v}} X(v_1 + \cdots + v_\mathbf{v}) = 0$. \end{definition} \begin{proposition}\label{prop:centered} The space $\operatorname{im} \bdry^{*+}$ is the space of centered embeddings. We also have \begin{equation*} \{ \text{centered embeddings} \} = (\operatorname{im} \ones{\verticesV}{\verticesV})^0 = \ker \ones{\verticesV}{\verticesV}^* = (\ker \bdry^T)^0 = \operatorname{im} \bdry^{T*} = (\ker \bdry^*)^\perp. \end{equation*} \end{proposition} \begin{proof} If $X$ is centered, then \defn{centered} tells us that $X(y) = 0$ for every $y \in \operatorname{im} \ones{\verticesV}{\verticesV}$. Using~\prop{annihilator props}, the centered embeddings are then $(\operatorname{im} \ones{\verticesV}{\verticesV})^0 = \ker \ones{\verticesV}{\verticesV}^*$. Since $\operatorname{im} \ones{\verticesV}{\verticesV} = \ker \bdry^T$ by~\prop{basic Laplacian properties}, they are also $(\ker \bdry^T)^0 = \operatorname{im} \bdry^{T*} = \operatorname{im} \bdry^{*+}$. \end{proof} This proposition gives a formal explanation for choosing centered configurations of vertices. We now explain why this is also a physically natural choice to make. A probability distribution for configurations of a polymer in the absence of an external field should not depend on the position or orientation of the polymer in space.
We have ruled out dependence on orientation by insisting that our probability distributions be $O(d)$-invariant. To rule out dependence on translation we must\footnote{We cannot simply insist on a translation-invariant probability measure because every translation-invariant Radon measure on a finite-dimensional vector space has infinite or zero total mass and so cannot be a probability measure.} restrict our attention, and our probability distribution, to vertex positions in a particular subspace $\mathcal{S}$ of $\operatorname{VP}$. Every $X$ in $\operatorname{VP}$ must have a translation which lies in $\mathcal{S}$ and no two $X$ and $Y$ in the subspace can be related by a translation (formally, $\mathcal{S}$ is a cross-section of the action of the translation group on $\operatorname{VP}$). In other words, $\operatorname{VP}$ is the direct sum of the translation subspace $\ker \bdry^*$ and the transverse subspace $\mathcal{S}$: $\operatorname{VP} = \ker \bdry^* \oplus \mathcal{S}$. An obvious choice for $\mathcal{S}$ is the orthogonal complement $(\ker \bdry^*)^\bot = \operatorname{im} \bdry^{*+}$ of the translations. By \prop{centered}, these are the centered configurations where we put the center of mass at the origin, as in Eichinger~\cite{Eichinger1972}.\footnote{We could also have fixed the position of a vertex, as in James--Guth theory, or fixed any other linear combination of vertex positions. All possible choices of $\mathcal{S}$ are linearly isomorphic, and indeed the isomorphism can be realized as orthogonal projection, so it is straightforward to translate between different conventions.} This is physically reasonable since polymer networks are usually very large, so the fluctuations of the center of mass are small. Further, we have found that computations of the response of a polymer to an external force are simplified for centered configurations. 
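The centering projection can be made concrete numerically. The following sketch is our own illustration, not part of the paper's development; it assumes \texttt{numpy}, writes $\bdry$ as the $\mathbf{v} \times \mathbf{e}$ incidence matrix of a 3-vertex path graph, and checks that right-multiplication by $\bdry \bdry^{+}$ sends an arbitrary $d \times \mathbf{v}$ configuration to its centered translate with the same edge displacements:

```python
import numpy as np

# Incidence (boundary) matrix of the path graph v1 -> v2 -> v3:
# column k is the boundary of edge k, with -1 at its tail and +1 at its head.
B = np.array([[-1.0,  0.0],
              [ 1.0, -1.0],
              [ 0.0,  1.0]])  # v x e = 3 x 2

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 3))    # an arbitrary d x v configuration, d = 2

W = X @ B                          # d x e matrix of edge displacements
X_rec = W @ np.linalg.pinv(B)      # reconstruction via the e x v pseudoinverse

# X_rec has the same displacements as X, and is centered:
assert np.allclose(X_rec @ B, W)
assert np.allclose(X_rec.mean(axis=1), 0.0)
# indeed X_rec is exactly X translated so its center of mass is 0:
assert np.allclose(X_rec, X - X.mean(axis=1, keepdims=True))
```

For a connected graph $\ker \bdry^T$ is spanned by the all-ones vector, so $\bdry \bdry^+$ is orthogonal projection onto its complement, i.e.\,subtraction of the center of mass, as in the proposition above.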
We now verify that every physically meaningful probability distribution on centered embeddings corresponds to a distribution on edge displacements. \begin{proposition} \label{prop:always a nug} If $\nu_{\graphG}$ is any probability measure on $\operatorname{VP}$ which is $O(d)$-invariant and concentrated on the centered configurations $\operatorname{im} \bdry^{*+}$, then $\nu_{\graphG} = (\bdry^{*+})_\sharp \mu_{\graphG}$ for some $O(d)$-invariant $\mu_{\graphG}$ concentrated on $\operatorname{im} \bdry^*$. \end{proposition} \begin{proof} Since the centered configurations are $\operatorname{im} \bdry^{*+}$, the map $\bdry^{*+} \bdry^* = \operatorname{proj}_{\operatorname{im} \bdry^{*+}}$ fixes the centered configurations. Since $\nu_{\graphG}$ is concentrated on the centered configurations, this means that \begin{equation*} \nu_{\graphG} = (\bdry^{*+} \bdry^*)_\sharp \nu_{\graphG} = (\bdry^{*+})_\sharp (\bdry^*_\sharp \nu_{\graphG}). \end{equation*} Set $\mu_{\graphG} = \bdry^*_\sharp \nu_{\graphG}$. Since $\mu_{\graphG}$ is the pushforward of an $O(d)$-invariant measure by the $O(d)$-equivariant function $\bdry^*$ (see \lem{linear equivariance}), it is $O(d)$-invariant. \end{proof} Incidentally, this solves a computational problem of some interest in numerical experiments: if one is given a set of edge displacements $W = \operatorname{disp}(X) = \bdry^* X$, one can reconstruct $X$ by applying the pseudoinverse matrix $\bdry^{*+}$. Koohestani and Guest~\cite{Koohestani2013} use this same idea to reconstruct conformations of tensegrities by imposing loop conditions. \begin{algorithm} If $W \in \operatorname{ED}$ satisfies $W = \operatorname{disp}(X)$ then $X = \bdry^{*+} W$ is the unique centered $X \in \operatorname{VP}$ with $\operatorname{disp} X = \bdry^* X = W$.
However, computing $\bdry^{*+}$ as a $d\mathbf{v} \times d\mathbf{e}$ matrix is awkward because it requires us to choose bases for $\operatorname{VP}$ and $\operatorname{ED}$ and write $X$ and $W$ as vectors. It is more convenient to write $X$ and $W$ as $d \times \mathbf{v}$ and $d \times \mathbf{e}$ matrices, respectively, so that $\mat{X} = \mat{W} \mat{\bdry^+}$ or $\mat{X}^T = \mat{\bdry^{T+}} \mat{W}^T$. Then we need only find the $\mathbf{e} \times \mathbf{v}$ matrix $\mat{\bdry^{+}}$ to solve the embedding problem for $\mathbf{G}$. The amount of work required to do so does not depend on the embedding dimension $d$. \end{algorithm} \section{Means and Variances} \label{sec:means and variances} We would now like to draw whatever conclusions we can about the means and variances of vertex positions and edge displacements for a generic $\mu_{\graphG}$ and $\nu_{\graphG}$ from $O(d)$-invariance and concentration on $\operatorname{im} \bdry^*$ and $(\ker \bdry^*)^\perp$. \begin{lemma}\label{lem:physicality implies mean zero} If $\nu$ is any $O(d)$-invariant probability measure on $\operatorname{VP}$ then $\mean{\nu}{X} = 0$. If $\mu$ is any $O(d)$-invariant probability measure on $\operatorname{ED}$, then $\mean{\mu}{W} = 0$. \end{lemma} \begin{proof} Suppose we have a $d \times d$ matrix $Q \in O(d)$. The action of $Q$ on $\operatorname{VP}$ is multiplication by the $d\mathbf{v} \times d\mathbf{v}$ matrix $Q \otimes I_\mathbf{v}$. Since $\nu$ is $O(d)$-invariant, we know $(Q \otimes I_\mathbf{v})_\sharp \nu = \nu$. But then \begin{equation*} \mean{\nu}{X} = \mean{(Q \otimes I_\mathbf{v})_\sharp \nu}{(Q \otimes I_\mathbf{v}) X} = \mean{\nu}{(Q \otimes I_\mathbf{v}) X} = (Q \otimes I_\mathbf{v}) \mean{\nu}{X}. \end{equation*} This is true for all $Q \in O(d)$ only if $\mean{\nu}{X} = 0$. The proof for $\mean{\mu}{W}$ is the same.
\end{proof} We recall that if $x$ is a random vector chosen according to a probability measure $\rho$ on an inner product space $\left( V, \Vp{-}{-} \right)$, then by definition \begin{equation*} \cov{\rho}{u}{v} := \mean{\rho}{\Vp{x}{u} \Vp{x}{v}}. \end{equation*} If the inner product $\Vp{-}{-} = \dotp{-}{-}_A$ for a symmetric matrix $A$, then \begin{equation} \begin{aligned} \cov{\rho}{u}{v} &= \mean{\rho}{\dotp{x}{Au} \dotp{x}{Av}} = \mean{\rho}{\dotp{Ax}{u} \dotp{Ax}{v}} = \mean{\rho}{u^T Axx^TA^T v} \\ &= u^T\mean{\rho}{Axx^TA^T}v = \dotp{u}{\mean{\rho}{Axx^TA^T} v} = \dotp{u}{A \mean{\rho}{xx^T} A v}. \end{aligned} \label{eq:covariance in general inner product} \end{equation} Thus we really need to compute the expectations of the outer products $X X^T$ and $W W^T$ to understand the covariances of vertex positions and edge vectors. \begin{proposition} \label{prop:covariance structure for VP} If $\nu$ is any $O(d)$-invariant probability measure on $\operatorname{VP}$, then the $d\mathbf{v} \times d\mathbf{v}$ matrix $\mean{\nu}{X X^T} = I_d \otimes \Sigma_\mathbf{v}$ where $\Sigma_\mathbf{v}$ is a symmetric positive semidefinite $\mathbf{v} \times \mathbf{v}$ matrix giving the expectations of products of \emph{a fixed coordinate} of the positions of different vertices of $\mathbf{G}$. It follows immediately that if we view $X$ as a $d \times \mathbf{v}$ matrix, the $d \times d$ matrix of expected dot products (in $\mathbb{R}^\mathbf{v}$) of coordinates of vertex positions $\mean{\nu_{\graphG}}{XX^T} = (\operatorname{tr} \Sigma_\mathbf{v}) I_d$ and the $\mathbf{v} \times \mathbf{v}$ matrix of expected dot products (in $\mathbb{R}^d$) of vertex vectors $\mean{\nu_{\graphG}}{X^TX} = d \Sigma_\mathbf{v}$.
\label{prop:block structure of XXt} \end{proposition} \begin{proof} As above, \begin{align*} \mean{\nu}{XX^T} &= \mean{(Q \otimes I_\mathbf{v})_\sharp \nu}{(Q \otimes I_\mathbf{v}) X X^T (Q^T \otimes I_\mathbf{v})} \\ &= \mean{\nu}{(Q \otimes I_\mathbf{v}) X X^T (Q^T \otimes I_\mathbf{v})} = (Q \otimes I_\mathbf{v}) \mean{\nu}{X X^T} (Q^T \otimes I_\mathbf{v}). \end{align*} But if $\mean{\nu}{X X^T}$ is invariant under conjugation by $Q \otimes I_\mathbf{v}$, as a $d \times d$ matrix of $\mathbf{v} \times \mathbf{v}$ blocks, it must be\footnote{To expand on this point, first consider the diagonal matrix $Q \in O(d)$ with $(-1,1,\dots,1)$ on the diagonal. Conjugation by this $Q$ reverses the sign of all the blocks in the first column and first row except the block on the diagonal. Thus all these blocks (and by extension, all the off-diagonal blocks) must be zero matrices. Now consider the permutation matrix $Q \in O(d)$ which swaps $i$ and $j$. Conjugation by this matrix swaps the $i$-th and $j$-th diagonal blocks (and the $ij$ and $ji$ off-diagonal blocks) so the diagonal blocks must be equal.} the block analogue of a $d \times d$ scalar matrix, that is, of the form $I_d \otimes \Sigma_\mathbf{v}$, just as scalar matrices are the only $d \times d$ matrices fixed by conjugation by all of $O(d)$. Since $\mean{\nu}{XX^T}$ is an average of symmetric positive semidefinite matrices $XX^T$, it is symmetric and positive semidefinite. Therefore, the $\mathbf{v} \times \mathbf{v}$ diagonal block $\Sigma_\mathbf{v}$ is symmetric and positive semidefinite as well. \end{proof} \begin{proposition} \label{prop:covariance structure for ED} If $\mu$ is an $O(d)$-invariant probability measure on $\operatorname{ED}$, then the $d\mathbf{e} \times d\mathbf{e}$ matrix $\mean{\mu}{W W^T} = I_d \otimes \Sigma_\mathbf{e}$ where $\Sigma_\mathbf{e}$ is a symmetric positive semidefinite $\mathbf{e} \times \mathbf{e}$ matrix giving the expectations of products of \emph{a fixed coordinate} of the displacements assigned to edges in $\mathbf{G}$.
As above, if we view $W$ as a $d \times \mathbf{e}$ matrix, $\mean{\mu}{W W^T} = (\operatorname{tr} \Sigma_\mathbf{e}) I_d$ and $\mean{\mu}{W^T W} = d \, \Sigma_\mathbf{e}$. \label{prop:block structure of WWt} \end{proposition} The proof is the same as that of the previous proposition. We have now shown that different coordinates of the vertex positions and edge vectors~\emph{must always be uncorrelated}, not only when they are assigned to different vertices or edges, but even when they are different coordinates of the same vertex position or edge vector! In phantom network theory one can go much further-- because of the special properties of Gaussian probability measures, the different coordinates are actually \emph{independent}, and not just uncorrelated. However, it is quite surprising to find the same structure for the covariance matrix even in cases (like the FENE potential or freely-jointed chain) where it is very clear that the different coordinates of an edge vector are dependent random variables. We note that so far, we have not used concentration of $\mu_{\graphG}$ and $\nu_{\graphG}$ on their respective subspaces or the fact that $\nu_{\graphG}$ is the pushforward of $\mu_{\graphG}$; only $O(d)$-invariance. We now introduce these other hypotheses, and see that they imply that with our inner products, \emph{edge covariances and vertex covariances are exactly the same.} \begin{proposition}\label{prop:matching covariances} Suppose that either \begin{enumerate} \item $\mu_{\graphG}$ is any $O(d)$-invariant probability measure on $\left( \ED, \EDp{-}{-} \right)$ which is concentrated on $\operatorname{im} \bdry^*$ and $\nu_{\graphG} = \bdry^{*+}_\sharp \mu_{\graphG}$ or, \item $\nu_{\graphG}$ is any $O(d)$-invariant probability measure on $\left( \VP, \VPp{-}{-} \right)$ which is concentrated on the centered configurations $\operatorname{im} \bdry^{*+}$ and $\mu_{\graphG} = \bdry^*_\sharp \nu_{\graphG}$. \end{enumerate} These hypotheses are equivalent. 
Further, if $X, Y \in \operatorname{VP}$ are centered, then $\cov{\nu_{\graphG}}{X}{Y} = \cov{\mu_{\graphG}}{\bdry^*X}{\bdry^*Y}$. If $W, U \in \operatorname{ED}$ are in $\operatorname{im} \bdry^*$, then $\cov{\mu_{\graphG}}{W}{U} = \cov{\nu_{\graphG}}{\bdry^{*+}W}{\bdry^{*+}U}$. If $X \in \operatorname{VP}$ is in the orthogonal complement $\ker \bdry^*$ of the centered configurations and $Y \in \operatorname{VP}$, then $\cov{\nu_{\graphG}}{X}{Y} = 0$. If $W \in \operatorname{ED}$ is in $(\operatorname{im} \bdry^*)^\perp$ and $U \in \operatorname{ED}$, then $\cov{\mu_{\graphG}}{W}{U} = 0$. \end{proposition} \begin{proof} Suppose $X, Y$ are centered in $\operatorname{VP}$. We begin by computing \begin{align*} \cov{\mu_{\graphG}}{\bdry^* X}{\bdry^* Y} = \mean{\mu_{\graphG}}{\EDp{W}{\bdry^* X} \EDp{W}{\bdry^* Y}} & = \int_{W \in \operatorname{ED}} \EDp{W}{\bdry^* X} \EDp{W}{\bdry^*Y} \mu_{\graphG}(dW) \\ & = \int_{W \in \operatorname{im} \bdry^*} \EDp{W}{\bdry^* X} \EDp{W}{\bdry^*Y} \mu_{\graphG}(dW) \end{align*} since $\mu_{\graphG}$ is concentrated on $\operatorname{im} \bdry^*$. Therefore, since $\mu_{\graphG} = (\bdry^*)_\sharp \nu_{\graphG}$, this integral is equal to \begin{align*} \cov{\mu_{\graphG}}{\bdry^* X}{\bdry^* Y} &= \int_{Z \in \operatorname{VP}} \EDp{\bdry^* Z}{\bdry^* X} \EDp{\bdry^*Z}{\bdry^*Y} \nu_{\graphG}(dZ) \\ &= \int_{Z \in (\ker \bdry^*)^\perp} \EDp{\bdry^* Z}{\bdry^* X} \EDp{\bdry^*Z}{\bdry^*Y} \nu_{\graphG}(dZ) \\ &= \int_{Z \in (\ker \bdry^*)^\perp} \VPp{Z}{X} \VPp{Z}{Y} \nu_{\graphG}(dZ) = \cov{\nu_{\graphG}}{X}{Y}. \end{align*} Here we have used the fact that $\nu_{\graphG}$ is supported on the centered configurations $(\ker \bdry^*)^\perp$ in the second equality, and~\prop{bdystar is a partial isometry} in the third. The second half of the proof follows from the fact that $\nu_{\graphG}$ and $\mu_{\graphG}$ are concentrated on $(\ker \bdry^*)^\perp$ and $\operatorname{im} \bdry^*$. 
In general, if we have a random vector $x$ chosen according to a probability measure $\rho$ which is concentrated on a subspace $S$ of $\left( V, \Vp{-}{-} \right)$, then for any $u \in S^\perp$, \begin{equation*} \cov{\rho}{u}{v} = \int_{x \in V} \Vp{u}{x} \Vp{v}{x} \rho(dx) = \int_{x \in S} \Vp{u}{x} \Vp{v}{x} \rho(dx) = 0, \end{equation*} because we know that $\Vp{u}{x} = 0$ since $x \in S$ and $u \in S^\perp$. \end{proof} \begin{corollary}\label{cor:XXt and WWt} With the hypotheses of~\prop{matching covariances}, and the notation of~\prop{block structure of XXt} and~\prop{block structure of WWt}, we have $\bdry^* \mean{\nu_{\graphG}}{XX^T} \bdry^{*T} = \mean{\mu_{\graphG}}{WW^T}$ and so $\bdry^T \Sigma_\mathbf{v} \bdry = \Sigma_\mathbf{e}$. Further, \begin{equation*} \mean{\nu_{\graphG}}{XX^T} = \bdry^{*+} \mean{\mu_{\graphG}}{WW^T} \bdry^{*T+} \quad\text{and}\quad \Sigma_\mathbf{v} = \bdry^{T+} \Sigma_\mathbf{e} \bdry^+. \end{equation*} \end{corollary} \begin{proof} Suppose $U, V \in \operatorname{VP}$ are centered. Then $\cov{\nu_{\graphG}}{U}{V} = \cov{\mu_{\graphG}}{\bdry^* U}{\bdry^* V}$. Using~\eqn{covariance in general inner product}, \begin{equation*} \cov{\nu_{\graphG}}{U}{V} = \mean{\nu_{\graphG}}{\VPp{X}{U} \VPp{X}{V}} = \dotp{\tilde{L}^* U}{\mean{\nu_{\graphG}}{XX^T} \tilde{L}^* V}. \end{equation*} Using~\prop{centered} and $\tilde{L}^* = L^* + (\frac{1}{\mathbf{v}} \ones{\verticesV}{\verticesV})^*$, we have $\tilde{L}^*U = L^* U$ and $\tilde{L}^* V = L^* V$. 
So \begin{equation} \begin{aligned} \cov{\nu_{\graphG}}{U}{V} &= \dotp{L^* U}{\mean{\nu}{XX^T}L^* V} \\ &= \dotp{\bdry^{T*} \bdry^* U}{\mean{\nu_{\graphG}}{XX^T}\bdry^{*T} \bdry^* V} \\ &= \dotp{\bdry^* U}{\bdry^* \mean{\nu_{\graphG}}{XX^T} \bdry^{*T} \bdry^*V} \end{aligned} \label{eq:covariance in nu of U and V} \end{equation} On the other hand, using~\eqn{covariance in general inner product} and the fact that the dot product on $\left( \ED, \EDp{-}{-} \right)$ is standard, \begin{equation} \cov{\mu_{\graphG}}{\bdry^* U}{\bdry^* V} = \dotp{\bdry^* U}{\mean{\mu_{\graphG}}{WW^T} \bdry^* V} \label{eq:covariance in mu of bdystar U and bdystar V} \end{equation} We now have two matrices $\bdry^* \mean{\nu}{XX^T} \bdry^{*T}$ and $\mean{\mu}{WW^T}$. The kernel of each matrix contains the loop space $\ker \bdry^{*T} = (\operatorname{im} \bdry^*)^\perp$ in $\operatorname{ED}$. The image of each matrix is contained in $\operatorname{im} \bdry^*$. Combining~\eqn{covariance in nu of U and V} and~\eqn{covariance in mu of bdystar U and bdystar V} and using the shared kernel, we have for all $Z \in \operatorname{im} \bdry^*$ and $P \in \operatorname{ED}$ that \begin{equation*} \dotp{Z}{\mean{\mu_{\graphG}}{WW^T} P} = \dotp{Z}{\bdry^* \mean{\nu_{\graphG}}{XX^T} \bdry^{*T} P}. \end{equation*} Thus we can conclude that $\mean{\mu_{\graphG}}{WW^T} P = \bdry^* \mean{\nu_{\graphG}}{XX^T} \bdry^{*T} P$ for all $P \in \operatorname{ED}$ and hence that the two matrices are equal. This proves the first part. Multiplying this identity on the left by $\bdry^{*+}$ and on the right by $\bdry^{*T+}$, \begin{equation*} \bdry^{*+} \bdry^* \mean{\nu_{\graphG}}{XX^T} \bdry^{*T} \bdry^{*T+} = \bdry^{*+} \mean{\mu_{\graphG}}{WW^T} \bdry^{*T+}. \end{equation*} But $\bdry^{*+} \bdry^* = \operatorname{proj}_{\operatorname{im} \bdry^{*+}} = \operatorname{proj}_{(\ker \bdry^*)^\perp}$ is orthogonal projection onto the centered configurations. 
Since $\nu$ is already supported on the centered configurations, $\operatorname{im} \mean{\nu_{\graphG}}{XX^T} \subset (\ker \bdry^*)^\perp$ and composing $\mean{\nu_{\graphG}}{XX^T}$ with this matrix has no effect. Similarly, $\bdry^{*T} \bdry^{*T+} = \operatorname{proj}_{\operatorname{im} \bdry^{*T}} = \operatorname{proj}_{\operatorname{im} \bdry^{*+}} = \operatorname{proj}_{(\ker \bdry^*)^\perp}$ is the same projection. Again, $\ker \mean{\nu_{\graphG}}{XX^T} \supset \ker \bdry^*$, so projecting to the orthogonal complement of this kernel before applying $\mean{\nu_{\graphG}}{XX^T}$ has no effect. This proves that $\mean{\nu_{\graphG}}{XX^T} = \bdry^{*+} \mean{\mu_{\graphG}}{WW^T} \bdry^{*T+}$, as desired. \end{proof} \section{Expected radius of gyration; phantom network theory} We have now proved a quite general structural result about variances. In practice, we are often interested in computing the expectation of the radius of gyration\footnote{This expectation is usually referred to as ``the radius of gyration'', even though it's technically an ensemble average of the radius of gyration of all possible conformations of the network.} of a polymer structure. \begin{definition}\label{def:gyradius} The \emph{expected radius of gyration} of a polymer whose vertex positions $X \in \operatorname{VP}$ are distributed according to probability measure $\nu_{\graphG}$ concentrated on centered configurations in $\operatorname{VP}$ is \begin{equation}\label{eq:gyradius} \left< \operatorname{R^2_g} \right>_{\nu_{\graphG}} = \frac{1}{\mathbf{v}} \mean{\nu_{\graphG}}{\sum_i \norm{X(v_i)}^2} = \frac{1}{\mathbf{v}} \operatorname{tr} \mean{\nu_{\graphG}}{XX^T}.
\end{equation} \end{definition} We now give a general formula for the expected radius of gyration: \begin{theorem}\label{thm:gyradius formula} If $\nu_{\graphG} = (\bdry^{*+})_\sharp \mu_{\graphG}$ where $\mu_{\graphG}$ is any $O(d)$-invariant probability measure concentrated on $\operatorname{im} \bdry^*$, and $\Sigma_\mathbf{v}$ and $\Sigma_\mathbf{e}$ are as defined in~\prop{block structure of XXt} and~\prop{block structure of WWt}, then \begin{align*} \left< \operatorname{R^2_g} \right>_{\nu_{\graphG}} = \frac{d}{\mathbf{v}} \operatorname{tr} (\Sigma_\mathbf{v}) = \frac{d}{\mathbf{v}} \operatorname{tr} (\bdry^{T+} \Sigma_\mathbf{e} \bdry^{+}). \end{align*} \end{theorem} \begin{proof} We only have to combine~\eqn{gyradius} with~\prop{block structure of XXt} to see the first equality. The second follows directly from~\cor{XXt and WWt}. \end{proof} Here is an easy corollary. \begin{corollary}\label{cor:phantom network theory} In James--Guth phantom network theory $\mu$ is a standard Gaussian on $\operatorname{ED}$. For any graph $\mathbf{G}$, $\mu$ is admissible and compatible with $\mathbf{G}$ and $\mu_{\graphG} = \mu^0_{\graphG}$ is a standard Gaussian on $\operatorname{im} \bdry^*$. Further, \begin{equation} \left< \operatorname{R^2_g} \right>_{\nu_{\graphG}} = \frac{d}{\mathbf{v}} \operatorname{tr} L^+. \end{equation} \end{corollary} \begin{proof} We showed in~\cor{phantom network theory works} that $\mu$ is admissible and compatible with $\mathbf{G}$, so $\mu_{\graphG}$ is well-defined. Further, by~\prop{disintegration}, $\mu_{\graphG}$ is $O(d)$-invariant and concentrated on $\operatorname{im} \bdry^*$, so~\thm{gyradius formula} applies. It is easy to see that $\mean{\mu_{\graphG}}{WW^T}$ is orthogonal projection onto $\operatorname{im} \bdry^*$. It follows from~\prop{block structure of WWt} that $\Sigma_\mathbf{e}$ is the orthogonal projection $\bdry^T \bdry^{T+}$ to $\operatorname{im} \bdry^T$.
Thus \begin{equation*} \left< \operatorname{R^2_g} \right>_{\nu_{\graphG}} = \frac{d}{\mathbf{v}} \operatorname{tr} \bdry^{T+} (\bdry^{T} \bdry^{T+}) \bdry^+ = \frac{d}{\mathbf{v}} \operatorname{tr} \bdry^{T+} \bdry^+ = \frac{d}{\mathbf{v}} \operatorname{tr} L^+, \end{equation*} as claimed. \end{proof} Here is another example of our method. \begin{proposition}\label{prop:cycle graph gyradius} Suppose that $\mathbf{G}$ is the $\mathbf{e}$-edge cycle graph, $\mu$ is the joint distribution of $\mathbf{e}$ i.i.d.\,random edges in $\mathbb{R}^d$ chosen from some $O(d)$-invariant probability measure on $\mathbb{R}^d$, and $\mu$ is compatible with $\mathbf{G}$. Then $\mu_{\graphG}$ is permutation invariant, all $\mean{\mu_{\graphG}}{\norm{W(e_i)}^2}$ are equal, and if their common value is $\lambda$, then \begin{equation*} \left< \operatorname{R^2_g} \right>_{\nu_{\graphG}} = \frac{\lambda (\mathbf{e} + 1)}{12}. \end{equation*} \end{proposition} \begin{proof} It is clear that $\mu$ is an $O(d)$-invariant probability distribution as it's the joint distribution of $O(d)$-invariant random variables. Thus $\mu$ is admissible. Further, $\mu$ is permutation invariant on edges since the individual distributions are independent and identical. We have assumed that $\mu$ is compatible with $\mathbf{G}$, so $\mu_{\graphG}$ is well-defined. Permutation invariance of the conditional distribution $\mu_{\graphG}$ can be proved by uniqueness of decompositions as we did in the proof of $O(d)$-invariance in~\prop{disintegration}. Using this permutation symmetry, all of the off-diagonal elements of the $\mathbf{e} \times \mathbf{e}$ matrix $\mean{\mu_{\graphG}}{W^T W}$ of edge covariances are equal, and all of the on-diagonal elements are equal as well.
Since the loop space of the cycle graph $\ker \bdry \subset \operatorname{EC}$ is one-dimensional and spanned by $\mat{1}_{\mathbf{e} \times 1}$, we know $\mat{1}_{\mathbf{e} \times 1}$ spans $\ker \mean{\mu_{\graphG}}{W^T W}$ and the sum of each row and column is zero. Thus \begin{equation} \mean{\mu_{\graphG}}{W^T W} = \frac{\lambda \mathbf{e}}{\mathbf{e} - 1} I_\mathbf{e} - \frac{\lambda}{\mathbf{e}-1} \ones{\edgesE}{\edgesE}, \end{equation} where the diagonal element $\lambda$ is the expected squared norm of a single edge~\emph{in $\mu_{\graphG}$}. We have $\mathbf{v} = \mathbf{e}$ and $\bdry$ is a (square) circulant matrix with first row $-1, 0, \dotsc, 0, 1$. It follows that $\bdry^T \bdry$ is a symmetric circulant matrix; its first row is $2, -1, 0, \dots, 0, -1$. Since the row and column sums of $\bdry^T \bdry$ vanish, the row and column sums of $(\bdry^T \bdry)^+$ vanish as well and $\ones{\edgesE}{\edgesE} (\bdry^T \bdry)^+ = 0$. Further, the trace of $(\bdry^T \bdry)^+$ is\footnote{Eigenvalues of a circulant are well-known; the nonzero eigenvalues of $\bdry^T \bdry$ are $4 \sin^2(\pi j/\mathbf{e})$, and to sum their reciprocals use $\sum_{j=1}^{v-1} \csc^2 (\pi j/v)=(1/3)(v^2-1)$~\cite[4.4.6.5, p.\ 644]{Prudnikov:1986vp}.} $\frac{1}{12} (\mathbf{e}^2-1)$. Now we are ready to compute. Since $\mu_{\graphG}$ is (by construction) $O(d)$-invariant and concentrated on $\operatorname{im} \bdry^*$,~\thm{gyradius formula} applies. Rearranging it and using $d \Sigma_\mathbf{e} = \mean{\mu_{\graphG}}{W^T W}$, \begin{align*} \left< \operatorname{R^2_g} \right> = \frac{1}{\mathbf{e}} \operatorname{tr} \left(\mean{\mu_{\graphG}}{W^T W} (\bdry^T \bdry)^+ \right) = \frac{1}{\mathbf{e}} \operatorname{tr} \left(\lambda \frac{\mathbf{e}}{\mathbf{e}-1} (\bdry^T \bdry)^+ \right) = \frac{\lambda (\mathbf{e} + 1)}{12}. \end{align*} \end{proof} We note that we can satisfy the hypothesis that $\mu$ is compatible with $\mathbf{G}$ in many cases using decay estimates (cf. \prop{disintegration with density} and \cor{compatibility for joint distributions}).
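The circulant computation in this proof is easy to sanity-check numerically. The sketch below is our own verification, not part of the paper; it assumes \texttt{numpy}, takes $\mathbf{e} = 7$ as an arbitrary example, builds $\bdry$ for the cycle graph directly, and confirms both $\operatorname{tr} (\bdry^T \bdry)^+ = \frac{1}{12}(\mathbf{e}^2 - 1)$ and the final identity $\left< \operatorname{R^2_g} \right> = \lambda(\mathbf{e}+1)/12$:

```python
import numpy as np

ee = 7  # number of edges (and vertices) in the cycle graph

# Boundary matrix of the ee-cycle: edge k runs from vertex k to vertex k+1 (mod ee),
# so column k has -1 at its tail and +1 at its head.
B = np.zeros((ee, ee))
for k in range(ee):
    B[k, k] = -1.0
    B[(k + 1) % ee, k] = 1.0

# Pseudoinverse of the circulant with first row 2, -1, 0, ..., 0, -1.
Lplus = np.linalg.pinv(B.T @ B)

# The trace of (B^T B)^+ is (e^2 - 1)/12.
assert np.isclose(np.trace(Lplus), (ee**2 - 1) / 12)

# The edge covariance matrix from the proposition, with lam playing the role of lambda.
lam = 1.0
Sigma = lam * ee / (ee - 1) * np.eye(ee) - lam / (ee - 1) * np.ones((ee, ee))

gyradius = np.trace(Sigma @ Lplus) / ee
assert np.isclose(gyradius, lam * (ee + 1) / 12)
```

The all-ones part of $\mean{\mu_{\graphG}}{W^T W}$ drops out of the trace exactly as in the proof, since the rows of $(\bdry^T\bdry)^+$ sum to zero.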
For the freely jointed ring polymer with $\mathbf{e}$ equilateral edges, we can immediately recover the standard formula for $\left< \operatorname{R^2_g} \right>$~\cite{Zirbel2012}: \begin{corollary} \label{cor:freely jointed ring} Let $\mathbf{G}$ be the $\mathbf{e}$-edge cycle graph and $\mu$ be the product of (uniform) area measures on the product of spheres $(S^2)^\mathbf{e} \subset \operatorname{ED} = (\mathbb{R}^3)^\mathbf{e}$. Then $\mu$ is admissible and compatible with $\mathbf{G}$ and $\left< \operatorname{R^2_g} \right>_{\nu_{\graphG}} = (\mathbf{e} + 1)/12$. \end{corollary} \begin{proof} Admissibility and compatibility were established above in~\cor{freely jointed ring admissible}, so there is a well-defined conditional probability $\mu_{\graphG} = \mu^0_{\graphG}$ with $\lambda = \mean{\mu_{\graphG}}{\norm{W(e_i)}^2} = 1$. Applying~\prop{cycle graph gyradius} completes the proof. \end{proof} We can also recover the standard result for the Gaussian ring polymer: \begin{corollary}\label{cor:Gaussian ring} Let $\mathbf{G}$ be the $\mathbf{e}$-edge cycle graph and $\mu$ be the standard Gaussian on $\operatorname{ED} = (\mathbb{R}^3)^\mathbf{e}$. Then $\mu$ is admissible and compatible with $\mathbf{G}$, $\mu_{\graphG} = \mu^0_{\graphG}$ is well-defined, and \begin{equation*} \left< \operatorname{R^2_g} \right>_{\nu_{\graphG}} = (\mathbf{e}^2 - 1)/(12 \mathbf{e}). \end{equation*} \end{corollary} \begin{proof} In~\cor{phantom network theory}, we showed that this $\mu$ is compatible with any $\mathbf{G}$, along with the fact that $\mu_{\graphG} = \mu^0_{\graphG}$ was a standard Gaussian on $\operatorname{im} \bdry^*$. The computation of $\lambda = (\mathbf{e}-1)/\mathbf{e}$ can be done very easily by the method of~\prop{projections} below.
\end{proof} \section{Chain maps; pushforwards to simpler graphs} \label{sec:chain maps} We now want to consider the distribution of more local quantities in a graph-- for instance, the squared distance between a particular pair of vertices instead of the ensemble sum which appears in the radius of gyration. To do this, it's helpful to compute the marginal distribution of the subset of vertices which are needed to compute the local quantity in question and then take expectations with respect to this marginal distribution. We start by defining the random variables we'll consider. \begin{definition} \label{def:expressed} Suppose we have graphs $\mathbf{G}$ and $\mathbf{G}'$ and an injective map $f_0 \colon \operatorname{VC}' \rightarrow \operatorname{VC}$. A function $g \colon \operatorname{VP} \rightarrow \mathbb{R}$ is~\emph{expressed in terms of $\mathbf{G}'$} if there is a map $g' \colon \operatorname{VP}' \rightarrow \mathbb{R}$ so that $g = g' \circ f_0^*$. \end{definition} It is standard that \begin{proposition} \label{prop:expectations under pushforwards} If $g$ is expressed in terms of $\mathbf{G}'$ and $\nu_{\graphG}$ is any probability distribution on $\operatorname{VP}$, then $\mean{\nu_{\graphG}}{g} = \mean{(f_0^*)_\sharp \nu_{\graphG}}{g'}$. \end{proposition} This proposition is evidently true but it's not very useful in practice, as computing an expectation with respect to $(f_0^*)_\sharp \nu_{\graphG}$ is likely just as hard as computing one with respect to $\nu_{\graphG}$ in the first place. On the other hand, if $\nu_{\graphG}$ is a probability distribution concentrated on the centered configurations which we have obtained from a probability distribution $\mu$ on $\operatorname{ED}$, it would be useful to construct some probability distribution $\mu'$ on $\operatorname{ED}'$ so that the corresponding $\nu'_{\graphG'}$ had $\mean{\nu_{\graphG}}{g} = \mean{\nu'_{\graphG'}}{g'}$. 
The purpose of this section is to establish a construction of $\mu'$ which, coupled with mild restrictions on $g'$, will accomplish this goal. We start by connecting the map $f_0$ on vertex chains to a corresponding map $f_1$ between edge chains. \begin{definition} Given two graphs $\mathbf{G}$ and $\mathbf{G}'$ and a pair of maps $f_0 \colon \operatorname{VC}' \rightarrow \operatorname{VC}$ and $f_1 \colon \operatorname{EC}' \rightarrow \operatorname{EC}$, we say that $f_0$ and $f_1$ are~\emph{chain maps} if $\bdry f_1 = f_0 \bdry'$. \end{definition} Chain maps $f_0$ and $f_1$ induce maps $f_0^* \colon \operatorname{VP} \rightarrow \operatorname{VP}'$ and $f_1^* \colon \operatorname{ED} \rightarrow \operatorname{ED}'$ and the definition ensures the squares below commute: \begin{equation} \label{eq:diagrams} \begin{tikzcd} \operatorname{VC} & \arrow[l,"\bdry"] \operatorname{EC} \\ \operatorname{VC}' \arrow[u,"f_0"] & \operatorname{EC}' \arrow[l,"\bdry'"] \arrow[u,"f_1"] \end{tikzcd} \qquad \begin{tikzcd} \operatorname{VP} \arrow[r,"\bdry^*"] \arrow[d,"f_0^*"] & \operatorname{ED} \arrow[d,"f_1^*"] \\ \operatorname{VP}' \arrow[r,"(\bdry')^*"] & \operatorname{ED}' \end{tikzcd} \end{equation} We will think of $\mathbf{G}'$ as a simpler graph which we include in $\mathbf{G}$ by the chain maps $f_0$ and $f_1$. The chain map hypothesis ensures that our assignments of edges and vertices are compatible with each other and the graph structures. \begin{proposition}\label{prop:chain map pushforwards} Suppose $f_0$ and $f_1$ are injective chain maps $\mathbf{G}' \rightarrow \mathbf{G}$, and $\mathbf{G}$ and $\mathbf{G}'$ have the same cycle rank $\mathbf{e} - \mathbf{v} + 1 = \mathbf{e}' - \mathbf{v}' + 1$. 
Then \begin{enumerate} \item $\operatorname{proj}_{\operatorname{ID}'} f_1^*$ is an invertible linear map between $\operatorname{ID}$ and $\operatorname{ID}'$, \label{projidprime f1star is invertible} \item \label{mu and mup compatible with gprime} $\mu$ is compatible with $\mathbf{G}$ $\implies$ $\mu' := (f_1^*)_\sharp \mu$ is compatible with $\mathbf{G}'$, \item \label{mugwp are pushmugw} Since $\mu$ is compatible with $\mathbf{G}$, by definition there is an open ball $U \subset \operatorname{ID}$ centered at $0$ so that $\mu^W_{\graphG}$ is defined for all $W \in U$. If we let $W' := (\operatorname{proj}_{\operatorname{ID}'} f_1^*) W$ then for all $W'$ in an open ball $U'$ centered at $0$ in $\operatorname{ID}'$, there are corresponding $W \in U$ so that \begin{equation*} (\mu')_{\graphG'}^{W'} = (f_1^*)_\sharp \mugw. \end{equation*} \end{enumerate} \end{proposition} Here the $\mu^W_{\graphG}$ and $(\mu')_{\graphG'}^{W'}$ are the conditional probabilities guaranteed by the $\mathbf{G}$-compatibility of $\mu$ and the $\mathbf{G}'$-compatibility of $\mu'$ (cf.\,\defn{G-compatible}). \begin{proof} We start by working out some properties of $f_1$ and $f_1^*$ which follow from our hypotheses. \begin{claim}\label{claim:chain maps} We have $f_1 (\ker \bdry') = \ker \bdry$. Thus $\ker f_1^* \subset \operatorname{im} \bdry^*$. \end{claim} \begin{proof} If $w' \in \ker \bdry'$, then $0 = f_0 \bdry' w' = \bdry f_1 w'$, so $f_1 w' \in \ker \bdry$. Thus $f_1(\ker \bdry') \subset \ker \bdry$. Since $f_1$ is injective, $\dim f_1 (\ker \bdry') = \dim \ker \bdry'$. But $\dim \ker \bdry' = \dim \ker \bdry$ because each is the cycle rank $\mathbf{e} - \mathbf{v} + 1 = \mathbf{e}' - \mathbf{v}' + 1$. Thus $f_1(\ker \bdry') = \ker \bdry$. Next, since $\ker \bdry \subset \operatorname{im} f_1$, we know $(\operatorname{im} f_1)^0 \subset (\ker \bdry)^0$. 
But $\ker f_1^* = (\operatorname{im} f_1)^0$ and $(\ker \bdry)^0 = \operatorname{im} \bdry^*$, so this proves $\ker f_1^* \subset \operatorname{im} \bdry^*$. \end{proof} \begin{claim} \label{claim:images} We have $f_1^*(\operatorname{im} \bdry^*) = \operatorname{im} (\bdry')^*$ and $(f_1^*)^{-1}(\operatorname{im} (\bdry')^*) = \operatorname{im} \bdry^*$. \end{claim} \begin{proof} Since $f_1(\ker \bdry') = \ker \bdry$, $f_1^* (\ker \bdry)^0 = (\ker \bdry')^0$, or $f_1^*(\operatorname{im} \bdry^*) = \operatorname{im} (\bdry')^*$. It follows immediately that $(f_1^*)^{-1}(\operatorname{im} (\bdry')^*) = (f_1^*)^{-1}(f_1^*(\operatorname{im} \bdry^*))$, so $\operatorname{im} \bdry^* \subset (f_1^*)^{-1}(\operatorname{im} (\bdry')^*)$. Now suppose $f_1^* W \in \operatorname{im} (\bdry')^*$. Then $f_1^* W = (\bdry')^* X' = (\bdry')^* f_0^* X = f_1^* \bdry^* X$ for some $X \in \operatorname{VP}$, where the second equality follows from surjectivity of $f_0^*$ (\cor{injective dual to surjective}). This means that $\bdry^*X - W \in \ker f_1^*$. But $\ker f_1^* \subset \operatorname{im} \bdry^*$ by the last claim, so $\bdry^*X - W \in \operatorname{im} \bdry^*$, and $W \in \operatorname{im} \bdry^*$, as required. \end{proof} \begin{claim} \label{claim:id isomorphism} The map $\operatorname{proj}_{\operatorname{ID}'} f_1^*$ is a linear isomorphism from $\operatorname{ID}$ to $\operatorname{ID}'$. \end{claim} \begin{proof} First, it's clear that $\operatorname{im} \operatorname{proj}_{\operatorname{ID}'} f_1^* \subset \operatorname{ID}'$. Next, we show that $\operatorname{proj}_{\operatorname{ID}'} f_1^*$ restricted to $\operatorname{ID}$ has kernel $0$. Suppose that $W \in \operatorname{ID}$ has $\operatorname{proj}_{\operatorname{ID}'} f_1^* W = 0$. Then $f_1^* W \in \ker \operatorname{proj}_{\operatorname{ID}'} = \operatorname{im} (\bdry')^*$. By~\clm{images}, this implies $W \in \operatorname{im} \bdry^*$. 
But $\operatorname{im} \bdry^* \cap \operatorname{ID} = 0$, so this means $W = 0$. Thus $\operatorname{proj}_{\operatorname{ID}'} f_1^*$ restricted to $\operatorname{ID}$ is injective. But $\dim \operatorname{ID}' = d \xi(\mathbf{G}') = d \xi(\mathbf{G}) = \dim \operatorname{ID}$, which means that $\operatorname{proj}_{\operatorname{ID}'} f_1^*$ restricted to $\operatorname{ID}$ is an isomorphism. \end{proof} \begin{claim}\label{claim:proj f1 proj is proj f1} We claim that $\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}} = \operatorname{proj}_{\operatorname{ID}'} f_1^*$. \end{claim} \begin{proof} Since $\operatorname{ED} = \operatorname{im} \bdry^* \oplus \operatorname{ID}$, any $W \in \operatorname{ED}$ can be written as $W = \bdry^* X + W_{\operatorname{ID}}$ for some $X \in \operatorname{VP}$ and a unique $W_{\operatorname{ID}} \in \operatorname{ID}$. Now $f_1^* \bdry^* X \in \operatorname{im} (\bdry')^* = \ker \operatorname{proj}_{\operatorname{ID}'}$ by~\clm{images}. Thus $\operatorname{proj}_{\operatorname{ID}'} f_1^* W = \operatorname{proj}_{\operatorname{ID}'} f_1^* W_{\operatorname{ID}}$. But $\operatorname{proj}_{\operatorname{ID}} W = W_{\operatorname{ID}}$, so $\operatorname{proj}_{\operatorname{ID}'} f_1^* W_{\operatorname{ID}} = \operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}} W$, as required. \end{proof} \begin{claim}\label{claim:fiber is pushforward of fiber} Recalling that $W' = \operatorname{proj}_{\operatorname{ID}'} f_1^* W$, we claim that $\operatorname{proj}_{\operatorname{ID}'}^{-1}(W') = f_1^* \operatorname{proj}_{\operatorname{ID}}^{-1}(W)$. \end{claim} \begin{proof}[Proof of claim] We first show that $f_1^* \operatorname{proj}_{\operatorname{ID}}^{-1}(W) \subset \operatorname{proj}_{\operatorname{ID}'}^{-1}(W')$. Suppose $Z \in \operatorname{proj}_{\operatorname{ID}}^{-1}(W)$. 
Then $\operatorname{proj}_{\operatorname{ID}} Z = W$, so $\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}} Z = \operatorname{proj}_{\operatorname{ID}'} f_1^* W = W'$. But by~\clm{proj f1 proj is proj f1}, $\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}} Z = \operatorname{proj}_{\operatorname{ID}'} f_1^* Z$. Therefore, $f_1^* Z \in \operatorname{proj}_{\operatorname{ID}'}^{-1}(W')$, as required. Now we show that $\operatorname{proj}_{\operatorname{ID}'}^{-1}(W') \subset f_1^* \operatorname{proj}_{\operatorname{ID}}^{-1}(W)$. Suppose that we have $Z' \in \operatorname{proj}_{\operatorname{ID}'}^{-1}(W')$. Then $\operatorname{proj}_{\operatorname{ID}'} Z' = W'$. Now $f_1^*$ is surjective, so there exists some $Z \in \operatorname{ED}$ with $f_1^* Z = Z'$. We now know that $\operatorname{proj}_{\operatorname{ID}'} f_1^* Z = W'$. We must show that $Z \in \operatorname{proj}_{\operatorname{ID}}^{-1}(W)$. By~\clm{proj f1 proj is proj f1}, we know \begin{equation*} \operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}} Z = \operatorname{proj}_{\operatorname{ID}'} f_1^* Z = W' = \operatorname{proj}_{\operatorname{ID}'} f_1^* W. \end{equation*} But by~\clm{id isomorphism}, since $\operatorname{proj}_{\operatorname{ID}} Z \in \operatorname{ID}$ and $W \in \operatorname{ID}$, this implies that $\operatorname{proj}_{\operatorname{ID}} Z = W$, as required. \end{proof} \begin{claim} $(f_1^*)_\sharp \mu$ is an admissible measure on $\operatorname{ED}$. \end{claim} \begin{proof} Since $f_1^*$ is a linear map, it is Lipschitz. Further, Lipschitz pushforwards of Radon probability measures with finite first moment are also Radon probability measures of finite first moment~\cite{Fritz2019}, so $(f_1^*)_\sharp \mu$ is admissible. \end{proof} We are now ready for the body of the proof. 
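Before diving in, the mechanism behind conclusion~\ref{mugwp are pushmugw} can be previewed in a finite toy model, where Tjur conditioning reduces to ordinary normalized restriction: conditioning a pushforward measure on a fiber agrees with pushing forward the conditional on the corresponding fiber. The measure, map, and names below are illustrative inventions, not objects from this paper; \texttt{T} plays the role of $f_1^*$ and \texttt{pi\_prime} that of $\operatorname{proj}_{\operatorname{ID}'}$.

```python
# Finite toy model: conditioning a pushforward on a fiber agrees with pushing
# forward the conditional on the corresponding fiber.  (Illustrative only.)
from fractions import Fraction

def pushforward(mu, T):
    """Push a finite measure (dict point -> mass) forward along T."""
    out = {}
    for x, m in mu.items():
        y = T(x)
        out[y] = out.get(y, Fraction(0)) + m
    return out

def conditional(mu, pi, w):
    """Normalized restriction of mu to the fiber {pi = w}."""
    fiber = {x: m for x, m in mu.items() if pi(x) == w}
    total = sum(fiber.values())
    return {x: m / total for x, m in fiber.items()}

# mu: a measure on Z^2; T: an injective linear map; pi_prime: a "projection".
mu = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
      (1, 0): Fraction(1, 4), (1, 2): Fraction(1, 4)}
T = lambda x: (x[0] + x[1], x[1])     # plays the role of f_1^*
pi_prime = lambda y: y[0]             # plays the role of proj_{ID'}
pi = lambda x: pi_prime(T(x))         # induced projection on the source

w = 1
lhs = conditional(pushforward(mu, T), pi_prime, w)   # condition the pushforward
rhs = pushforward(conditional(mu, pi, w), T)         # push forward the conditional
assert lhs == rhs
```

In the infinite-dimensional setting the normalized restrictions are replaced by the Tjur limits recalled next, but the bookkeeping of fibers is the same.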
We noted in~\defn{tjur conditional probability} that Tjur conditional probabilities are unique if they are defined. Therefore, we can establish both~\ref{mu and mup compatible with gprime} and~\ref{mugwp are pushmugw} by showing that the $(f_1^*)_\sharp \mugw$ are conditional probabilities for $\mu'$ given $\operatorname{proj}_{\operatorname{ID}'} Z' = W'$. \begin{claim} The $\mu^W_{\graphG}$ are conditional probabilities for $\mu$ given $\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}}(Z) = W'$. \end{claim} \begin{proof}[Proof of claim] By~\clm{proj f1 proj is proj f1}, we know that $\mu'_{\operatorname{ID}'} = (\operatorname{proj}_{\operatorname{ID}'})_\sharp \mu' = (\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}})_\sharp \mu$. So suppose we fix some $f \in \mathcal{K}(\operatorname{ED})$ and some $\epsilon > 0$. By hypothesis, there is some open neighborhood $V$ of $W$ in $\operatorname{ID}$ so that for every $B \subset V$ with $\mu_{\operatorname{ID}}(B) > 0$ we have $\abs{\mu^W_{\graphG}(f) - \mu^B(f)} < \epsilon$. Now the map $\operatorname{proj}_{\operatorname{ID}'} f_1^*$ is a linear isomorphism from $\operatorname{ID}$ to $\operatorname{ID}'$ by~\clm{id isomorphism}. Therefore, there is some open neighborhood $V'$ of $W' = \operatorname{proj}_{\operatorname{ID}'} f_1^* W$ so that $(\operatorname{proj}_{\operatorname{ID}'} f_1^*)^{-1}(V') \subset V$. Now suppose we have any $B' \subset V'$ with $\mu'_{\operatorname{ID}'}(B') > 0$. Defining $B$ to be the inverse image $B := (\operatorname{proj}_{\operatorname{ID}'} f_1^*)^{-1}(B')$, and applying the definition of pushforward, we now know that \begin{equation*} 0 < \mu'_{\operatorname{ID}'}(B') = (\operatorname{proj}_{\operatorname{ID}})_\sharp \mu( (\operatorname{proj}_{\operatorname{ID}'} f_1^*)^{-1}(B') ) = \mu_{\operatorname{ID}}(B). \end{equation*} Further, since $B' \subset V'$, we know $B \subset V$. 
We now compute \begin{align*} (\mu')^{B'}(f) &= \frac{1}{\mu'_{\operatorname{ID}'}(B')} \int_{(\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}})^{-1}(B')} f(Z) \mu(dZ) \\ &= \frac{1}{\mu_{\operatorname{ID}}(B)} \int_{(\operatorname{proj}_{\operatorname{ID}})^{-1}(B)} f(Z) \mu(dZ) = \mu^B(f). \end{align*} This proves that $\abs{\mu^W_{\graphG}(f) - (\mu')^{B'}(f)} = \abs{\mu^W_{\graphG}(f) - \mu^B(f)} < \epsilon$, and hence that the $\mu^W_{\graphG}$ are conditional probabilities for $\mu$ given $\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}}(Z) = W'$. \end{proof} Applying~\thm{decomposition and existence}, we now know that the $\mu^W_{\graphG}$ and $\mu'_{\operatorname{ID}'}$ are a decomposition of $\mu$ with respect to the map $\operatorname{proj}_{\operatorname{ID}'} f_1^* \operatorname{proj}_{\operatorname{ID}} = \operatorname{proj}_{\operatorname{ID}'} f_1^*$. By~\lem{decompositions push forward}, the $(f_1^*)_\sharp \mugw$ and $\mu'_{\operatorname{ID}'}$ are thus a decomposition of $(f_1^*)_\sharp \mu = \mu'$ with respect to $\operatorname{proj}_{\operatorname{ID}'}$. One last application of~\thm{decomposition and existence} shows that the $(f_1^*)_\sharp \mugw$ are conditional probabilities for $\mu'$ given $\operatorname{proj}_{\operatorname{ID}'} (Z') = W'$, as desired. \end{proof} We can now prove the main theorem of the section. \begin{theorem}\label{thm:chain maps and probability} Suppose we have injective chain maps $f_0$ and $f_1$ between connected graphs $\mathbf{G}$ and $\mathbf{G}'$ of the same cycle rank $\xi(\mathbf{G})$, together with a measure $\mu$ on $\operatorname{ED}$ which is compatible with $\mathbf{G}$ and its pushforward $\mu' = (f_1^*)_\sharp \mu$ on $\operatorname{ED}'$. 
Then $\mu'$ is compatible with $\mathbf{G}'$ and the corresponding measures $\nu_{\graphG} := \bdry^{*+}_\sharp \mu_{\graphG}$ and $\nu'_{\graphG'} := (\bdry')^{*+}_\sharp (\mu')_{\graphG'}$ are related by \begin{equation*} (\operatorname{proj}_{\operatorname{im} (\bdry')^{*+}} f_0^*)_\sharp \nu_{\graphG} = \nu'_{\graphG'}. \end{equation*} It follows that if $g \colon \operatorname{VP} \rightarrow \mathbb{R}$ is any $O(d)$ and translation-invariant function which is expressed in terms of $\mathbf{G}'$, we have $\mean{\nu_{\graphG}}{g} = \mean{\nu'_{\graphG'}}{g'}$. \end{theorem} We remind the reader that $\mathbf{G}$-compatibility of $\mu$ $\implies$ $\mathbf{G}'$-compatibility of $\mu'$ was established in~\prop{chain map pushforwards}. Compatibility guarantees the existence of the conditional probabilities $\mu_{\graphG}, \nu_{\graphG} := \bdry^{*+}_\sharp \mu_{\graphG}$ and $(\mu')_{\graphG'}, \nu'_{\graphG'} := (\bdry')^{*+}_\sharp (\mu')_{\graphG'}$. The map $g'$ is defined so $g' \circ f_0^* = g$. Its existence is guaranteed by~\defn{expressed}, since we have assumed that $g$ is expressed in terms of $\mathbf{G}'$. \begin{proof} We are first going to do a bit of linear algebra. \begin{claim} \label{claim:f0star and ker bdystar} $f_0^*(\ker \bdry^*) = \ker (\bdry')^*$. \end{claim} \begin{proof}[Proof of claim] We know that $\bdry f_1 = f_0 \bdry'$ since $f_0$ and $f_1$ are chain maps. Thus if $x \in \operatorname{im} \bdry'$, say $x = \bdry' w$, then $f_0 x = f_0 \bdry' w = \bdry f_1 w$, so $f_0 x \in \operatorname{im} \bdry$. Thus $f_0( \operatorname{im} \bdry' ) \subset \operatorname{im} \bdry$. But this means that $(\operatorname{im} \bdry')^0 \subset f_0^* (\operatorname{im} \bdry)^0$, or $\ker (\bdry')^* \subset f_0^*(\ker \bdry^*)$. However, from~\prop{ker and im of bdystar}, we know that $\dim \ker (\bdry')^* = \dim \ker \bdry^* = d$. Since $\dim f_0^*(\ker \bdry^*) \leq \dim \ker \bdry^* = d$, we have shown that $\ker (\bdry')^* = f_0^*(\ker \bdry^*)$. 
\end{proof} \begin{claim} The map $g'$ is $O(d)$ and translation-invariant. \end{claim} \begin{proof}[Proof of claim] By definition, $g(X) = g'(f_0^*(X))$. To show that $g'$ is translation-invariant, we observe that the translations of $\operatorname{VP}'$ are given by $\ker (\bdry')^*$. So if $Y'$ is a translation, then $Y' = f_0^* Y$ for some $Y \in \ker \bdry^*$ (by~\clm{f0star and ker bdystar}). Further, $f_0^*$ is surjective, so for any $X' \in \operatorname{VP}'$, there is some $X \in \operatorname{VP}$ so that $X' = f_0^* X$. Now suppose $X \in \operatorname{VP}$ and $Y \in \ker \bdry^*$: \begin{equation*} g'(X' + Y') = g'(f_0^*(X) + f_0^*(Y)) = g'(f_0^*(X + Y)) = g(X + Y) = g(X) = g'(f_0^*(X)) = g'(X'), \end{equation*} where the step $g(X+Y) = g(X)$ follows from $Y \in \ker \bdry^*$ and the translation-invariance of $g$. By \lem{linear equivariance}, $f_0^* = I_d \otimes f_0^T$ is $GL(d)$-equivariant, and hence in particular $O(d)$-equivariant. To see what this means precisely, suppose that $Q \in O(d)$, so that the action of $Q$ on $\operatorname{VP}$ is $Q \otimes I_{\mathbf{v}}$ and the action of $Q$ on $\operatorname{VP}'$ is $Q \otimes I_{\mathbf{v}'}$. Since $f_0^T$ is a $\mathbf{v}' \times \mathbf{v}$ matrix: \begin{equation*} (Q \otimes I_{\mathbf{v}'}) f_0^* = (Q \otimes I_{\mathbf{v}'})(I_d \otimes f_0^T) = (Q \otimes f_0^T) = (I_d \otimes f_0^T)(Q \otimes I_{\mathbf{v}}) = f_0^* (Q \otimes I_{\mathbf{v}}), \end{equation*} where we used \lem{mixed product} in the middle steps. We then have \begin{equation*} g'( (Q \otimes I_{\mathbf{v}'}) X' ) = g'( (Q \otimes I_{\mathbf{v}'}) f_0^* X ) = g' (f_0^* (Q \otimes I_{\mathbf{v}}) X) = g((Q \otimes I_{\mathbf{v}}) X) = g(X) = g'(f_0^* X) = g'(X'), \end{equation*} which completes the proof. \end{proof} We know from~\prop{expectations under pushforwards} that $\mean{\nu_{\graphG}}{g} = \mean{(f_0^*)_\sharp \nu_{\graphG}}{g'}$. So we must show that $\mean{(f_0^*)_\sharp \nu_{\graphG}}{g'} = \mean{\nu'_{\graphG'}}{g'}$. 
Now we have just proved that $g'$ is translation-invariant, so the expectation of $g'$ with respect to $(f_0^*)_\sharp \nu_{\graphG}$ is equal to the expectation of $g'$ with respect to $(\operatorname{proj}_{\operatorname{im} (\bdry')^{*+}})_\sharp (f_0^*)_\sharp \nu_{\graphG}$. It now suffices to show that $(\operatorname{proj}_{\operatorname{im} (\bdry')^{*+}} f_0^*)_\sharp \nu_{\graphG} = \nu'_{\graphG'}$. After all our work above, this is mostly a matter of unpacking definitions. Recall that $\nu_{\graphG} := \bdry^{*+}_\sharp \mu_{\graphG}$ and $\nu'_{\graphG'} := (\bdry')^{*+}_\sharp (\mu')_{\graphG'}$. In the proof of~\prop{chain map pushforwards}, we showed $(\mu')_{\graphG'} = (f_1^*)_\sharp \mug$. Now we can just compute: \begin{align*} (\operatorname{proj}_{\operatorname{im} (\bdry')^{*+}} f_0^*)_\sharp \nu_{\graphG} &= (\operatorname{proj}_{\operatorname{im} (\bdry')^{*+}} f_0^* \bdry^{*+})_\sharp \mu_{\graphG} = ((\bdry')^{*+} (\bdry')^* f_0^* \bdry^{*+})_\sharp \mu_{\graphG} \\ &= ((\bdry')^{*+} f_1^* \bdry^* \bdry^{*+})_\sharp \mu_{\graphG} = ((\bdry')^{*+} f_1^*)_\sharp (\operatorname{proj}_{\operatorname{im} \bdry^*})_\sharp \mu_{\graphG} \\ &= ((\bdry')^{*+} f_1^*)_\sharp \mu_{\graphG} = (\bdry')^{*+}_\sharp (f_1^*)_\sharp \mug \\ &= (\bdry')^{*+}_\sharp (\mu')_{\graphG'} = \nu'_{\graphG'}, \end{align*} where $(\operatorname{proj}_{\operatorname{im} \bdry^*})_\sharp \mu_{\graphG} = \mu_{\graphG}$ because $\mu_{\graphG}$ is concentrated on $\operatorname{im} \bdry^*$ by construction. \end{proof} \section{Applications} Proving~\thm{chain maps and probability} was somewhat complicated, but applying the theorem is a much easier process. We now give several examples which show how this result can greatly simplify calculations and numerical experiments regarding network polymers. We start by analyzing in some detail a very common model: subdivided graphs. 
\begin{definition}\label{def:subdivision setup} The $n$-part edge subdivision $\mathbf{G}_n$ of a multigraph $\mathbf{G}$ is the graph obtained by dividing each edge of $\mathbf{G}$ into $n$ smaller edges (see \figr{subdivisions}), oriented to agree with the original graph. If $\mathbf{G}$ has $\mathbf{v}$ vertices $v_1, \dotsc, v_{\mathbf{v}}$, then $\mathbf{G}_n$ has $\mathbf{v}$~\emph{junction vertices} $v_{1 0}, \dotsc, v_{\mathbf{v} 0}$ corresponding to the vertices of $\mathbf{G}$ and $(n-1) \mathbf{e}$ \emph{subdivision vertices} $v_{11}, \dotsc, v_{1(n-1)}, v_{21}, \dotsc, v_{\mathbf{e} (n-1)}$ located along the subdivided edges. If $\mathbf{G}$ has $\mathbf{e}$ edges $e_1, \dotsc, e_{\mathbf{e}}$, then $\mathbf{G}_n$ has $n\mathbf{e}$ edges $e_{11}, \dotsc, e_{1n}, e_{21}, \dotsc, e_{\mathbf{e} n}$. We will call each group $e_{j1}, \dotsc, e_{jn}$ the~\emph{subdivided edge} corresponding to $e_j$ in $\mathbf{G}$. There are canonical chain maps $f_0(v_i) = v_{i0}$ and $f_1(e_j) = e_{j1} + \cdots + e_{jn}$ from $\mathbf{G}$ to $\mathbf{G}_n$ which take vertices of $\mathbf{G}$ to the corresponding junction vertices in $\mathbf{G}_n$ and edges of $\mathbf{G}$ to the corresponding subdivided edges in $\mathbf{G}_n$. We will reserve our usual notations $\mu, \mu_{\graphG}, \bdry, \operatorname{EC}, \operatorname{VC}, \operatorname{VP}, \operatorname{ED}$ to refer to $\mathbf{G}$ and use the notations $\mu_n, \mu_{\mathbf{G}_n}, \bdry_n, \operatorname{EC}_n, \operatorname{VC}_n, \operatorname{VP}_n, \operatorname{ED}_n$ for the corresponding objects for the subdivided graph $\mathbf{G}_n$. \end{definition} \begin{figure} \hphantom{.} \hfill \includegraphics[width=2in]{theory-rewrite-sub-1.pdf} \hfill \includegraphics[width=2in]{theory-rewrite-sub-4.pdf} \hfill \hphantom{.} \caption{A directed $\alpha$-graph $\mathbf{G}$ (left) and its four-part edge subdivision $\mathbf{G}_4$ (right). 
Note that the edges of $\mathbf{G}_4$ obtain orientations from the edges of $\mathbf{G}$.} \label{fig:subdivisions} \end{figure} It follows immediately from~\thm{chain maps and probability} that \begin{proposition}\label{prop:subdivisions and expectations} If $\mathbf{G}_n$ is the $n$-part edge subdivision of $\mathbf{G}$, and we have a measure $\mu_n$ on $\operatorname{ED}_n$ which is compatible with $\mathbf{G}_n$, then $\mu := (f_1^*)_\sharp \mu_n$ is compatible with $\mathbf{G}$, and any $O(d)$ and translation-invariant measurable function $g_n$ on $\operatorname{VP}_n$ which can be expressed in terms of $\mathbf{G}$ as $g_n = g \circ f_0^*$ satisfies \begin{equation*} \mean{\nu_{\mathbf{G}_n}}{g_n} = \mean{\nu_{\graphG}}{g}. \end{equation*} \end{proposition} This is already useful in many cases. An easy consequence is \begin{corollary}\label{cor:subdivisions in phantom network theory} In James--Guth phantom network theory (edges are i.i.d. according to standard Gaussians on $\mathbb{R}^d$), the joint distribution of squared distances between junction vertices in $\mathbf{G}_n$ is the joint distribution of $n$ times the squared distances between vertices in $\mathbf{G}$. \end{corollary} We note that this same result follows from computing expected squared distance as resistance distance between junctions, and regarding the subdivided edges as composed of $n$ unit resistors in series, as in~\cite{Chen:2010da}. Here is a second example application in James--Guth phantom network theory. 
\begin{figure} \hphantom{.} \hfill \includegraphics[width=2in]{theory-rewrite-flower.pdf} \hfill \includegraphics[width=2in]{theory-rewrite-sub-4.pdf} \hfill \hphantom{.} \caption{In the proof of~\prop{projections}, $\mathbf{G}'$ is the loop-edge graph with 3 loops (left) and $\mathbf{G}$ is the four-part edge subdivision of the $\alpha$-graph (right).} \label{fig:flower} \end{figure} \begin{proposition}\label{prop:projections} Suppose $\mathbf{G}$ is a connected graph with cycle rank $r$. Take any orthonormal basis $\ell_1, \dots, \ell_r$ for the loop space $\ker \bdry \subset \operatorname{EC}$ and any $p \in \operatorname{EC}$ with $\bdry p = v_i - v_j$. In phantom network theory (that is, when the probability measure $\mu$ on $\operatorname{ED}$ is a standard Gaussian) for embeddings of $\mathbf{G}$ in $\mathbb{R}^d$, \begin{equation*} \mean{\nu_\mathbf{G}}{\norm{X(v_i) - X(v_j)}^2} = d \left(\ECp{p}{p} - \sum\limits_{k=1}^r \ECp{p}{\ell_k}^2 \right). \end{equation*} \end{proposition} \begin{proof} Let $\mathbf{G}'$ be the graph with two vertices $v'_1$ and $v'_2$, $r$ loop edges $e'_1, \dots, e'_r$ joining $v'_1 \rightarrow v'_1$, and a single edge $e'_{r+1}$ joining $v'_1 \rightarrow v'_2$. Now define chain maps $f_0$ and $f_1$ by $f_0(v'_1) = v_j$ and $f_0(v'_2) = v_i$, while $f_1(e'_i) = \ell_i$ and $f_1(e'_{r+1}) = p$. It is easy to verify that $f_0 \bdry' = \bdry f_1$, as $\bdry f_1 (e'_i) = \bdry \ell_i = 0$ and $\bdry f_1(e'_{r+1}) = v_i - v_j = f_0 \bdry'(e'_{r+1})$. An example of this construction is shown in~\figr{flower} where $\mathbf{G}$ is a subdivision of the $\alpha$-graph (which has cycle rank $3$). 
Since $\mu$ has covariance matrix $I_d \otimes I_\mathbf{e}$ on $\operatorname{ED}$, the pushforward $(f_1^*)_\sharp \mu$ has covariance matrix \begin{equation*} (f_1^*) (f_1^*)^T = (I_d \otimes f_1^T)(I_d \otimes f_1) = (I_d \otimes f_1^T f_1). \end{equation*} It follows from our definition of $f_1$ that $f_1^T f_1$ is a $2 \times 2$ block matrix with \begin{equation*} (f_1^T f_1)_{11} = I_{r}, \quad (f_1^T f_1)_{12} = ( \ECp{\ell_1}{p} \cdots \ECp{\ell_r}{p} ), \quad (f_1^T f_1)_{21} = (f_1^T f_1)_{12}^T, \quad (f_1^T f_1)_{22} = \ECp{p}{p}. \end{equation*} The $d \times d$ covariance matrix of $((f_1^*)_\sharp \mu)_{\mathbf{G}'}$ is the conditional variance of $W'(e'_{r+1})$ conditioned on $W'(e'_1) = \dotsb = W'(e'_r) = 0$. So far, everything we have said is true for an arbitrary $\mu$ on $\operatorname{ED}$ with covariance matrix $I_{d\mathbf{e}}$. For an arbitrary $\mu$, we would need more information to continue, because the covariance matrix does not determine the conditional variance in general. However, since we also know that $(f_1^*)_\sharp \mu$ is a Gaussian distribution in this special case, the conditional covariance matrix, which is the covariance matrix of $((f_1^*)_\sharp \mu)_{\mathbf{G}'}$, can be computed by taking the Schur complement of $(f_1^T f_1)_{22}$ inside $f_1^T f_1$: \begin{align*} \cov{((f_1^*)_\sharp \mu)_{\mathbf{G}'}}{-}{-} &= I_d \otimes ((f_1^T f_1)_{22} - (f_1^T f_1)_{12} (f_1^T f_1)_{11}^{-1} (f_1^T f_1)_{21}) \\ &= I_d \otimes (\ECp{p}{p} - \sum \ECp{\ell_i}{p}^2) = (\ECp{p}{p} - \sum \ECp{\ell_i}{p}^2) I_d. \end{align*} Now the expectation of $\norm{X(v_i) - X(v_j)}^2$, which equals the expectation of $\norm{X'(v'_2) - X'(v'_1)}^2$, is given by \begin{equation*} \cov{((f_1^*)_\sharp \mu)_{\mathbf{G}'}}{\ones{d}{1}}{\ones{d}{1}} = d(\ECp{p}{p} - \sum \ECp{\ell_i}{p}^2), \end{equation*} as claimed. 
\end{proof} \begin{figure} \hphantom{.} \hfill \begin{overpic}[width=1.2in]{theory-rewrite-loop-1.pdf} \put(45,14){$w_1$} \end{overpic} \hfill \begin{overpic}[width=1.2in]{theory-rewrite-loop-2.pdf} \put(57,33){$w_2$} \end{overpic} \hfill \begin{overpic}[width=1.2in]{theory-rewrite-loop-3.pdf} \put(29,33){$w_3$} \end{overpic} \hfill \begin{overpic}[width=1.2in]{theory-rewrite-edge-1.pdf} \put(19,57){$e_i$} \end{overpic} \hfill \hphantom{.} \caption{On the left, we see three loops $w_1$, $w_2$ and $w_3$ which form a basis for the loop space of the subdivided $\alpha$-graph. On the right, we see a single edge $e_i$. Without loss of generality, we may choose corresponding $w_1$, $w_2$, $w_3$ with this relationship to an arbitrary $e_i$.} \label{fig:basis} \end{figure} This Proposition makes it relatively easy to do particular computations in phantom network theory. For instance, we now compute the edgelength variance and junction-junction variance of the subdivided $\alpha$ graph.~\figr{basis} shows a (non-orthonormal) basis $w_1, w_2, w_3$ for the $3$-dimensional loop space of this graph, together with an edge $e_i$. Without loss of generality, we can assume this is the situation for any $e_i$. Orienting each loop counterclockwise and counting shared edges and orientations, we see that $\ECp{w_j}{w_j} = 3n$ and $\ECp{w_j}{w_k} = -n$ for all pairs of loops. We now construct an orthonormal basis $\ell_1, \ell_2, \ell_3$ by Gram-Schmidt orthogonalization: \begin{equation*} \ell_1 = \sqrt{\frac{1}{3n}} w_1, \quad \ell_2 = \sqrt{\frac{1}{24n}} w_1 + \sqrt{\frac{3}{8n}} w_2, \quad \ell_3 = \sqrt{\frac{1}{8n}} w_1 + \sqrt{\frac{1}{8n}} w_2 + \sqrt{\frac{1}{2n}} w_3. \end{equation*} Since $e_i$ is disjoint from $w_1$ and $w_2$, $\ECp{e_i}{w_1} = \ECp{e_i}{w_2} = 0$, and so $\ECp{e_i}{\ell_1} = \ECp{e_i}{\ell_2} = 0$. Since $e_i$ is part of $w_3$ (and agrees in orientation with $w_3$), we have $\ECp{e_i}{w_3} = 1$ and so $\ECp{e_i}{\ell_3} = \sqrt{\frac{1}{2n}}$. 
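These inner product computations are easy to get wrong by a sign, so a quick numerical check may be reassuring. The sketch below is an illustration, not part of the derivation: it works in coordinates with respect to the spanning set $w_1, w_2, w_3, e_i$, using only the Gram matrix entries stated above.

```python
# Numerical check of the orthonormal loop basis for the subdivided alpha-graph,
# in coordinates with respect to the spanning set (w1, w2, w3, e_i).
import math

n = 5  # number of subdivisions per edge; any n >= 1 works here
# Gram matrix of (w1, w2, w3, e_i): <wj,wj> = 3n, <wj,wk> = -n,
# <e_i,w1> = <e_i,w2> = 0, <e_i,w3> = 1, <e_i,e_i> = 1.
G = [[3 * n, -n, -n, 0],
     [-n, 3 * n, -n, 0],
     [-n, -n, 3 * n, 1],
     [0, 0, 1, 1]]

def ip(u, v):
    """Inner product of coefficient vectors via the Gram matrix."""
    return sum(u[i] * G[i][j] * v[j] for i in range(4) for j in range(4))

l1 = [math.sqrt(1 / (3 * n)), 0, 0, 0]
l2 = [math.sqrt(1 / (24 * n)), math.sqrt(3 / (8 * n)), 0, 0]
l3 = [math.sqrt(1 / (8 * n)), math.sqrt(1 / (8 * n)), math.sqrt(1 / (2 * n)), 0]
e = [0, 0, 0, 1]

# The l's are orthonormal ...
for a, u in enumerate((l1, l2, l3)):
    for b, v in enumerate((l1, l2, l3)):
        assert abs(ip(u, v) - (1 if a == b else 0)) < 1e-9
# ... and the stated inner products with e_i hold,
# so <e,e> - sum <e,l_k>^2 = 1 - 1/(2n), giving the expectation d(1 - 1/(2n)).
assert abs(ip(e, l1)) < 1e-9 and abs(ip(e, l2)) < 1e-9
assert abs(ip(e, l3) - math.sqrt(1 / (2 * n))) < 1e-9
assert abs(sum(ip(e, l) ** 2 for l in (l1, l2, l3)) - 1 / (2 * n)) < 1e-9
```

The same Gram-matrix bookkeeping with $e_i$ replaced by a full subdivided edge reproduces the junction-junction computation.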
Thus \begin{equation*} \mean{\mu_\mathbf{G}}{\norm{W(e_i)}^2} = d\left(1 - \frac{1}{2n}\right). \end{equation*} To compute the expectation of the squared junction-junction distance, we replace $e_i$ with the sum $w$ of the $n$ edges along the same subdivided edge. We get $\ECp{w}{w_1} = \ECp{w}{w_2} = \ECp{w}{\ell_1} = \ECp{w}{\ell_2} = 0$, but $\ECp{w}{w_3} = n$, and so $\ECp{w}{\ell_3} = \sqrt{\frac{n}{2}}$. The expected squared junction-junction distance in the $n$-part subdivided $\alpha$-graph is then $dn/2$. \prop{subdivisions and expectations} is very useful, but we still have to understand $\mu_n$ well enough to establish compatibility of $\mu_n$ with $\mathbf{G}_n$ to get started. We will now show that in many cases, we can work around this limitation. \begin{proposition}\label{prop:subdivisions and compatibility} Suppose that $\mathbf{G}_n$ is the $n$-part edge subdivision of $\mathbf{G}$. Further, suppose that we have $n\mathbf{e}$ independent $O(d)$-invariant probability distributions $\rho_{11}, \dots, \rho_{\mathbf{e} n}$ on $\mathbb{R}^d$. Let $\rho_j$ be the joint distribution on $(\mathbb{R}^d)^n$ of $n$ independent vectors in $\mathbb{R}^d$ chosen from $\rho_{j1}, \dotsc, \rho_{jn}$. Let $\operatorname{ftc} \colon (\mathbb{R}^d)^n \rightarrow \mathbb{R}^d$ be defined by $\operatorname{ftc}(x_1, \dots, x_n) = \sum x_i$. If $\mu_n$ is the measure on $\operatorname{ED}_n$ obtained by choosing the $n\mathbf{e}$ edge displacements $W(e_{11}), \dots, W(e_{\mathbf{e} n})$ independently from $\rho_{11}, \dotsc, \rho_{\mathbf{e} n}$, then the pushforward $(f_1^*)_\sharp \mu_n = \mu$ is obtained by choosing the $\mathbf{e}$ edge displacements $W(e_1), \dots, W(e_{\mathbf{e}})$ independently from $\operatorname{ftc}_\sharp \rho_1, \dots, \operatorname{ftc}_\sharp \rho_\mathbf{e}$. 
If $\mu$ is compatible with $\mathbf{G}$ and each $\rho_j$ has a decomposition with respect to $\operatorname{ftc}$ given by a family of measures $\rho_j^{W}$ on $(\mathbb{R}^d)^n$ and the pushforward $\operatorname{ftc}_\sharp \rho_j$, then $\mu_n$ is compatible with $\mathbf{G}_n$ and~\prop{subdivisions and expectations} holds. \end{proposition} \begin{proof} We are going to construct the conditional probabilities $\mugn^{W_n}$ by constructing a decomposition of $\mu_n$ with respect to $\operatorname{proj}_{\operatorname{ID}_n}$ and the measure $(\operatorname{proj}_{\operatorname{ID}_n})_\sharp \mu_n$, keeping in mind that $W_n$ is a member of $\operatorname{ID}_n \subset \operatorname{ED}_n = ((\mathbb{R}^d)^n)^\mathbf{e}$. We do this in several stages. We know that we have maps \begin{equation*} \operatorname{ED}_n \xrightarrow{f_1^*} \operatorname{ED} \xrightarrow{\operatorname{proj}_{\operatorname{ID}}} \operatorname{ID} \end{equation*} We first note that $\mu_n$ is the joint distribution of independent vectors in $(\mathbb{R}^d)^n$ chosen from the distributions $\rho_1, \dots, \rho_{\mathbf{e}}$. Now as a $d \times dn$ matrix, $\operatorname{ftc} = I_d \otimes \ones{1}{n}$. Further, we can compute \begin{equation} \label{eq:kronecker subdivision} f_1^* = I_d \otimes f_1^T = I_d \otimes I_{\mathbf{e}} \otimes \ones{1}{n} = I_{\mathbf{e}} \otimes I_d \otimes \ones{1}{n} = I_{\mathbf{e}} \otimes \operatorname{ftc}, \end{equation} where the third equality reorders coordinates to identify $\operatorname{ED}_n$ with $((\mathbb{R}^d)^n)^{\mathbf{e}}$. Therefore, the pushforward $\mu = (f_1^*)_\sharp \mu_n$ is the joint distribution of $\operatorname{ftc}_\sharp \rho_1, \dots, \operatorname{ftc}_\sharp \rho_\mathbf{e}$ on $\operatorname{ED} = (\mathbb{R}^d)^{\mathbf{e}}$. We have assumed that the $\rho_i$ have decompositions with respect to $\operatorname{ftc}$, so we may construct a family of $(\mu_n)^Z$ decomposing $\mu_n$ with respect to $f_1^*$ by defining $(\mu_n)^Z$ as the joint distribution of the decomposing distributions $\rho_1^{Z(e_1)}, \dots, \rho_{\mathbf{e}}^{Z(e_\mathbf{e})}$. 
Further, we have assumed that $\mu$ is compatible with $\mathbf{G}$, so there are $\mu^W_{\graphG}$ decomposing $\mu$ with respect to $\operatorname{proj}_{\operatorname{ID}}$. Suppose we have some $g_n \in \mathcal{K}(\operatorname{ED}_n)$. We can define a new function $g$ by taking $g(Z) = (\mu_n)^Z(g_n)$. Since the $\mu_n^Z$ are weak$^*$-continuous in $Z$ as measures on $\operatorname{ED}_n$, their values on the fixed function $g_n$ are also a continuous function of $Z$. Further, since $g_n$ has compact support on $\operatorname{ED}_n$, the new function $g$ has compact support on $\operatorname{ED}$, and $g \in \mathcal{K}(\operatorname{ED})$. We can then define a measure $(\mu_n)^W$ on $\operatorname{ED}_n$ by $(\mu_n)^W(g_n) = \mu^W_{\graphG}(g)$ for each $W$ in $U$ where $\mu^W_{\graphG}$ is defined. We claim that these $(\mu_n)^W$ and the measure $(\operatorname{proj}_{\operatorname{ID}} \circ f_1^*)_\sharp \mu_n$ are a decomposition of $\mu_n$ with respect to $\operatorname{proj}_{\operatorname{ID}} \circ f_1^*$. Continuity of $(\mu_n)^W$ in $W$ follows from continuity of $\mu^W_{\graphG}$ in $W$. To show that $(\mu_n)^{W}$ is concentrated on $(\operatorname{proj}_{\operatorname{ID}} f_1^*)^{-1}(W)$, we argue as follows. Suppose $A$ is an open set in $\operatorname{ED}_n$ which is disjoint from $(\operatorname{proj}_{\operatorname{ID}} f_1^*)^{-1}(W)$ and $\chi_A$ is its characteristic function. The corresponding function $g(Z) = (\mu_n)^Z(\chi_A)$ is supported on $f_1^*(A)$, but by hypothesis, $f_1^*(A)$ is disjoint from $\operatorname{proj}_{\operatorname{ID}}^{-1}(W)$. Since $\mu^W_{\graphG}$ is concentrated on $\operatorname{proj}_{\operatorname{ID}}^{-1} W$, this means that $\mu^W_{\graphG}(g) = 0$. We last have to check the averaging property. 
This is a computation: \begin{equation*} \int (\mu_n)^W(g_n) (\operatorname{proj}_{\operatorname{ID}} \circ f_1^*)_\sharp \mu_n(dW) = \int \mu^W_{\graphG}(g) (\operatorname{proj}_{\operatorname{ID}})_\sharp \mu(dW) = \mu(g) = (f_1^*)_\sharp \mu_n(g) = \mu_n(g_n). \end{equation*} Now it is clear from the definition of the canonical chain maps that they have no kernel. Therefore they are injective. One can give a sophisticated proof that $\xi(\mathbf{G}) = \xi(\mathbf{G}_n)$ because the two spaces are homotopy equivalent and $\xi$ is the first Betti number. However, it is easier to compute \begin{equation*} \xi(\mathbf{G}) = \mathbf{e} - \mathbf{v} + 1 = n \mathbf{e} - (\mathbf{v} + (n-1) \mathbf{e}) + 1 = \xi(\mathbf{G}_n). \end{equation*} Therefore the hypotheses of~\prop{chain map pushforwards} hold. We've already proved in~\clm{id isomorphism} of the proof of that Proposition that $\operatorname{proj}_{\operatorname{ID}} f_1^*$ is a linear isomorphism from $\operatorname{ID}_n$ to $\operatorname{ID}$. Therefore, there is an inverse map $(\operatorname{proj}_{\operatorname{ID}} f_1^*)^{-1} \colon \operatorname{ID} \rightarrow \operatorname{ID}_n$. Further, we saw in~\clm{proj f1 proj is proj f1} that $\operatorname{proj}_{\operatorname{ID}} f_1^* = \operatorname{proj}_{\operatorname{ID}} f_1^* \operatorname{proj}_{\operatorname{ID}_n}$. Pushing our measure $(\operatorname{proj}_{\operatorname{ID}} f_1^*)_\sharp \mu_n$ on $\operatorname{ID}$ forward by $(\operatorname{proj}_{\operatorname{ID}} f_1^*)^{-1}$ to $\operatorname{ID}_n$, we see that \begin{equation*} (\operatorname{proj}_{\operatorname{ID}} f_1^*)^{-1}_\sharp (\operatorname{proj}_{\operatorname{ID}} f_1^*)_\sharp \mu_n = (\operatorname{proj}_{\operatorname{ID}} f_1^*)^{-1}_\sharp (\operatorname{proj}_{\operatorname{ID}} f_1^* \operatorname{proj}_{\operatorname{ID}_n})_\sharp \mu_n = (\operatorname{proj}_{\operatorname{ID}_n})_\sharp \mu_n. 
\end{equation*} Thus we can define measures $\mugn^{W_n} := (\mu_n)^{(\operatorname{proj}_{\operatorname{ID}} f_1^*)(W_n)}$ which decompose $\mu_n$ with respect to the map $\operatorname{proj}_{\operatorname{ID}_n}$ and the measure $(\operatorname{proj}_{\operatorname{ID}_n})_\sharp \mu_n$. By~\thm{decomposition and existence}, this shows that $\mu_n$ is compatible with $\mathbf{G}_n$. \end{proof} We note that this proposition also covers generalized subdivisions of $\mathbf{G}$ where the number of subdivisions of each edge varies between the edges of $\mathbf{G}$; this can be proved by choosing $n$ to be the largest number of subdivisions and setting unused $\rho_{ij}$ to $\delta_0$ so that some ``edges'' are forced to have length~$0$. Alternatively, one can repeat the proof above; the only difficulties in writing the analogue of~\eqn{kronecker subdivision} are notational. In particular, let's consider a generalization of the freely-jointed chain. \begin{definition} If $\mathbf{G}_n$ is an $n$-part edge subdivision of any graph $\mathbf{G}$ with $n \geq 3$, and $\mu_n$ is the joint distribution of independent edge displacements chosen from the normalized area measure on $S^2 \subset \mathbb{R}^3$, we will call the pair $\mathbf{G}_n$, $\mu_n$ a~\emph{freely jointed network} with~\emph{structure graph} $\mathbf{G}$. \end{definition} \begin{proposition}\label{prop:freely jointed network} The measure $\mu_n$ in the freely jointed network $\mathbf{G}_n$ is compatible with $\mathbf{G}_n$. The corresponding measure $\mu$ on the structure graph $\mathbf{G}$ independently samples edge displacements from \begin{equation} \label{eq:freely jointed end-to-end} \rho(x) = \left(\frac{1}{2\pi^2 \ell} \int_0^\infty y \sin(\ell y) \operatorname{sinc}^n y \,dy \right) \lambda^{3}(dx) \end{equation} where $\ell = \norm{x}$. 
Further, any $O(d)$ and translation-invariant function $g_n \colon \operatorname{VP}_n \rightarrow \mathbb{R}$ which can be expressed in terms of $\mathbf{G}$ as $g_n = g \circ f_0^*$ has \begin{equation*} \mean{\nu_{\mathbf{G}_n}}{g_n} = \mean{\nu_{\mathbf{G}}}{g}. \end{equation*} \end{proposition} \begin{proof} This is a combination of our existing results. $\mu$ is compatible with $\mathbf{G}$ by~\prop{disintegration with density} because it has a continuous density, given by a product of copies of the density of $\rho$, which is positive in a neighborhood of the origin. By~\prop{conditional probability for arms}, $\rho$ has a decomposition with respect to $\operatorname{ftc}$ and $\operatorname{ftc}_\sharp \rho$, so we can apply~\prop{subdivisions and compatibility} to show that $\mu_n$ is compatible with $\mathbf{G}_n$. Now we can apply~\thm{chain maps and probability} to complete the proof. \end{proof} \prop{freely jointed network} makes the computation of many expectations quite feasible for arbitrary freely jointed networks. We now describe an example numerical computation using~\prop{freely jointed network}. Suppose that $\mathbf{G}$ is the $\alpha$-graph (that is, the complete graph $K_4$) and we consider the freely jointed network with graph $\mathbf{G}_n$ in $\mathbb{R}^3$. The graph $\mathbf{G}$ has $\mathbf{v} = 4$ and $\mathbf{e} = 6$, so the cycle rank $\xi(\mathbf{G}) = 3$. Therefore $\operatorname{ID}$ is $d\xi(\mathbf{G}) = 9$ dimensional, $\operatorname{im} \bdry^*$ is $d(\mathbf{v}-1) = 9$ dimensional, and $\operatorname{ED}$ is $d \mathbf{e} = 18$ dimensional. 
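This dimension count can be double-checked directly from an incidence matrix of $K_4$. The short sketch below is illustrative (the edge orientations are an arbitrary choice, not taken from the paper); it verifies that $\operatorname{rank} \bdry = \mathbf{v} - 1 = 3$ and $\xi(\mathbf{G}) = \mathbf{e} - \mathbf{v} + 1 = 3$, so the stated dimensions follow for $d = 3$.

```python
# Dimension bookkeeping for the alpha-graph K4 (d = 3), checked against the
# incidence matrix.  Edge orientations below are an arbitrary choice.
from fractions import Fraction

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
v, e, d = 4, len(edges), 3
bdry = [[Fraction(0)] * e for _ in range(v)]
for j, (tail, head) in enumerate(edges):
    bdry[tail][j] -= 1          # convention: bdry(edge) = head - tail
    bdry[head][j] += 1

def rank(M):
    """Rank by exact Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

assert rank(bdry) == v - 1      # dim im bdry* = 3, so 9 dimensions for d = 3
xi = e - rank(bdry)             # cycle rank = dim ker bdry
assert xi == e - v + 1 == 3     # so ID is d * xi = 9 dimensional
assert d * e == 18              # and ED is 18 dimensional
```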
We parametrized centered configurations of four vertices in $\mathbb{R}^3$ ($\operatorname{im} \bdry^{*+} \subset \operatorname{VP}$) by $\mathbb{R}^9 = (\mathbb{R}^3)^3$ using \begin{equation*} (\vec{x}_1,\vec{x}_2,\vec{x}_3) \mapsto \frac{1}{4}(\vec{x}_1 + \vec{x}_2 + \vec{x}_3, \vec{x}_1 - \vec{x}_2 - \vec{x}_3, -\vec{x}_1 + \vec{x}_2 - \vec{x}_3, -\vec{x}_1 - \vec{x}_2 + \vec{x}_3), \end{equation*} and composed with $\bdry^*$ to parametrize $\operatorname{im} \bdry^* \subset \operatorname{ED}$ by \begin{equation*} (\vec{x}_1,\vec{x}_2,\vec{x}_3) \mapsto \frac{1}{2} \left(\vec{x}_1 - \vec{x}_2, \vec{x}_1 + \vec{x}_2, \vec{x}_1 + \vec{x}_3, \vec{x}_1 - \vec{x}_3, \vec{x}_2 + \vec{x}_3, \vec{x}_2 - \vec{x}_3 \right). \end{equation*} Now the (unnormalized) probability density for a given configuration is given by the product of $\rho$ from \eqn{freely jointed end-to-end} evaluated on the six edge displacements above. We found the partition function $m_0$ for $n$ between $3$ and $10$ by performing a $6$-dimensional numerical integral\footnote{Reduced from a 9-dimensional integral using the $O(3)$-symmetry.} for each $n$. We emphasize that although the dimension of $\operatorname{ED}(\mathbf{G}_n)$ rises with $n$, the dimension of $\operatorname{ED}(\mathbf{G})$ does not, so these integrals were all of comparable difficulty. Similarly, we were able to (numerically) integrate the squared length $\norm{W(e_1)}^2$ over this space to compute the expectation of squared junction-junction distance. We compared these results to the averages over 10,000 samples from the Markov chain method of Deguchi and Uehara~\cite{Uehara:2018bb} for freely jointed networks with maximum vertex degree 3, where we made 1,000 random moves between samples. The results are shown in~\figr{numerical integration versus markov}. They are quite close, supporting the conjecture that the Markov chain is converging to the correct measure.
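The radial density in~\eqn{freely jointed end-to-end} is straightforward to evaluate by direct quadrature. A minimal sketch (unit-length edges; the truncation $y_{\max}$ and step count are illustrative choices, and the integral converges absolutely for $n \ge 3$):

```python
import math

def sinc(y):
    # the unnormalized sinc function sin(y)/y, with sinc(0) = 1
    return math.sin(y) / y if y != 0.0 else 1.0

def rho(ell, n, y_max=120.0, steps=12000):
    """Spatial density of the end-to-end displacement of a freely
    jointed chain of n unit steps, at distance ell > 0, computed by
    the trapezoid rule applied to the Fourier inversion integral."""
    h = y_max / steps
    total = 0.0
    for i in range(steps + 1):
        y = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * y * math.sin(ell * y) * sinc(y) ** n
    return h * total / (2.0 * math.pi ** 2 * ell)
```

As a sanity check, the normalization $\int_0^n 4\pi\ell^2\rho(\ell)\,d\ell = 1$ is reproduced numerically at these settings for $n = 4$.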
\begin{figure}[t] \hphantom{.} \hfill \includegraphics[height=2in]{theory-rewrite-sub-4.pdf} \hfill \begin{overpic}[height=2in]{junction-junction-data.pdf} \put(15,-3){Number of subdivisions $n$ of each edge of $\alpha$-graph} \put(-22,60){\begin{minipage}{2in}Expectation of squared\\ junction-junction distance\end{minipage}} \end{overpic} \hfill \hphantom{.} \vspace{0.1in} \caption{The right-hand graph shows the expectation of the squared distance between junctions in freely jointed networks obtained by subdividing the $\alpha$-graph (as shown at left). The circles are results obtained by $6$-dimensional numerical integration (following the discussion after~\prop{freely jointed network}) while the fences are 95\% confidence intervals for Monte Carlo integration using the method of~\cite{Uehara:2018bb}. The linear fit is to a line of slope $0.497981 \simeq 0.5$.} \label{fig:numerical integration versus markov} \end{figure} \section{Conclusion} We have now given a theory of random embeddings of graphs with respect to a very general class of probability distributions on the edges. From a mathematical point of view, it would be interesting to see how much further these results can be pushed. We established our theory for freely jointed networks by carefully proving the existence of conditional distributions for the freely jointed arm. This is not yet conclusive: for instance, what if we had fixed bond~\emph{angles} instead of lengths? An alternative (and more standard) approach to the theory above would be to build conditional probabilities via~\emph{disintegrations} (cf.~\cite{Chang1997}) rather than decompositions. This allows one to establish the existence of a conditional $\mu^W_{\graphG}$ for~\emph{almost every} $W \in U \subset \operatorname{ID}$ in our theorems above. The only hypothesis needed for this approach is that the pushforward measure $\mu_{\operatorname{ID}}$ has a density with respect to Lebesgue measure on~$\operatorname{ID}$. 
We have not followed this path above because our primary interest is in cases where one can build a single well-defined probability distribution $\mu_{\graphG}$. It has not escaped our attention that the explicit construction of $(\mu_n)^0$ in~\prop{subdivisions and compatibility} suggests various explicit sampling algorithms, particularly for freely jointed networks. We will develop these in a future publication. Last, we note that when one is considering problems with self-avoidance or steric constraints, the relevant graph is clearly the complete graph, where the bonds and the repulsive forces are distinguished by different probability distributions on different edges. In this case, there are various useful simplifications to be made to the theory above. We hope to say more about this in the future. \section*{Acknowledgments} The authors would like to acknowledge many friends and colleagues whose helpful discussions and generous explanations shaped this work. In particular we would like to acknowledge Yasuyuki~Tezuka and Satoshi Honda for helpful discussions of topological polymer chemistry and thank Fan Chung for introducing us to spectral graph theory. This paper stemmed from a long series of discussions which started at conferences at Ochanomizu University and the Tokyo Institute of Technology. Cantarella and Shonkwiler are grateful to the organizers and the Japan Science and Technology Agency for making these possible. In addition, we are grateful for the support of the Simons Foundation (\#524120 to Cantarella, \#354225 and \#709150 to Shonkwiler), the Japan Science and Technology Agency (CREST Grant Number JPMJCR19T4) and the Japan Society for the Promotion of Science (KAKENHI Grant Number JP17H06463).
https://arxiv.org/abs/1407.6701
Uniform growth rate
In an evolutionary system in which the rules of mutation are local in nature, the number of possible outcomes after $m$ mutations is an exponential function of $m$ but with a rate that depends only on the set of rules and not the size of the original object. We apply this principle to find a uniform upper bound for the growth rate of certain groups including the mapping class group. We also find a uniform upper bound for the growth rate of the number of homotopy classes of triangulations of an oriented surface that can be obtained from a given triangulation using $m$ diagonal flips.
\section{Introduction} Let $G$ be a group and $S$ be a generating set for $G$. We denote the word length in $G$ associated to $S$ with $\norm{\param}_S$. Recall that the growth rate of $G$ (relative to $S$) is defined to be \[ h_G = \lim_{R \to \infty} \frac{\log \, \# B_R(G)}{R}, \qquad\text{where}\qquad B_R(G) = \Big\{g \in G \ \Big| \ \norm{g}_S\leq R \Big\}. \] At his $60^{\text{th}}$ birthday conference, Bill Thurston mentioned that the mapping class group has a growth rate that is independent of the genus. Namely, consider the following set of curves on a surface $\Sigma=\Sigma_{g,p}$ of genus $g$ with $p$ punctures: \begin{figure}[htp] \begin{center} \includegraphics[width=.8\textwidth]{figures/lickorish.pdf} \end{center} \end{figure} Let $S$ be the set of Dehn (or half) twists around these curves. This set $S$ generates \mcg, the mapping class group of $\Sigma$ \cite{Lic64, FM12, Art47, Bir74}. (Note that $S$ is a combination of the Lickorish generators of the mapping class group of a closed surface and the standard generators of a braid group). We will refer to $S$ as the set of \emph{extended Lickorish generators} for \mcg. Then the growth rate of \mcg equipped with the word metric associated to $S$ has an upper bound that is independent of the topology of $\Sigma$. This, Thurston asserted, is true since most pairs of elements in $S$ commute. Note that, in fact, the number of elements in $S$ that do not commute with a given element in $S$ is uniformly bounded. We show that this is enough to obtain the uniform growth rate in general. \begin{introthm} \label{Thm:IntroCommute} Given any $c_0$, let $S$ be a generating set for a group $G$ such that, for every $s \in S$, the number of elements of $S$ that do not commute with $s$ is bounded by $c_0$. Then $h_G \leq \log (2c_0+2) +1$.
\end{introthm} Since each curve in the extended Lickorish generators intersects at most $3$ other curves, we obtain: \begin{introcor} \label{Cor:MCG} The growth rate of \mcg relative to the extended Lickorish generators is bounded by $\log 8+1$. \end{introcor} A uniform growth rate can also be established for the groups \aut, \out, \GL, and similar groups if the generating set is chosen such that the number of generators that do not commute with a given generator is uniformly bounded. In fact, these groups have natural generating sets with this property. For example, in the case of \aut, let \F be the free group with basis $\{a_1,\ldots, a_n\}$, and consider the following three types of automorphisms of \F. \begin{enumerate} \item Inversion: For $1\le i \le n$, $I_i(a_i) = \overline{a_i}$ and fixes all other $a_j$. \item Transposition: For $1 \le i \le n-1$, $P_i(a_i) = a_{i+1}$ and $P_i(a_{i+1}) = a_i$ and fixes all other $a_j$. \item Multiplication: For $1 \le i \le n-1$, $M_i(a_i) = a_ia_{i+1}$ and fixes all other $a_j$. \end{enumerate} The collection of inversions, transpositions, and multiplications generates \aut \cite{MKS66,LS77} and is called the set of \emph{local Nielsen generators}. Since, for each $s \in S$, the number of elements of $S$ that do not commute with $s$ is at most $7$, we obtain: \begin{introcor} The growth rate of \aut relative to local Nielsen generators is bounded by $\log 16+1$. \end{introcor} \subsection{Evolving structures} Another context to apply this philosophy is the setting of evolving structures. We follow in the footsteps of the work of Sleator--Tarjan--Thurston \cite{STT92}, where they showed that if a graph is allowed to evolve using a set of rules that change the graph locally, then the growth rate of the number of possible outcomes after $R$ mutations is bounded by a constant depending on the rules of evolution and not the size of the graph.
This was used in \cite{STT92} to estimate the diameter of the space of plane triangulations equipped with the diagonal flip metric and in \cite{RT13} to estimate the diameter of the space of cubic graphs equipped with the Whitehead move metric. Similar to their work, one can also consider the evolution of labeled graphs. Generalizing the results in \cite{STT92} slightly, we prove: \begin{introthm} \label{Thm:IntroGraph} Let $G$ be any group and $\Gamma$ be a $G$--labeled trivalent graph (see \secref{Graph} for definition). Let $B_R(\Gamma)$ be the set of $G$--labeled graphs that are obtained from $\Gamma$ by at most $R$ splits. Then, \[ \lim_{R \to \infty}\frac{\log \# B_R(\Gamma)}{R} \leq 3 \log 4. \] That is, the growth rate of $B_R(\Gamma)$ is independent of the size and shape of the starting graph $\Gamma$ and of the group $G$. \end{introthm} As an application, we can prove a combinatorial version of \corref{MCG}. Namely, let $\T(\Sigma)$ be the space of homotopy classes of triangulations of the surface $\Sigma$ with $n$ vertices. \begin{introthm} \label{Thm:IntroTriangle} For $T \in \T(\Sigma)$, let $B_R(T)$ be the set of triangulations in $\T(\Sigma)$ that are obtained from $T$ using $R$ diagonal flips. Then \[ \lim_{R \to \infty} \frac{\log \# B_R(T)}{R} \le 3 \log 4 \] for every surface $\Sigma$ and any number of vertices $n$. \end{introthm} Note that, even though \thmref{IntroTriangle} is a direct analogue of \corref{MCG} it does not follow from it. This is because the quotient of $\T(\Sigma)$ by \mcg has a size that goes to infinity as the number of vertices $n$ approaches infinity. \subsection{Remarks and references} Our \thmref{IntroCommute} follows immediately from an upper bound on the growth rate of a right-angled Artin group $A(\Theta)$ with defining graph $\Theta$, in terms of the maximum degree of the complementary graph \bT (\thmref{RAAG}). 
Other results relating the growth rate of $A(\Theta)$ to the shape of $\Theta$ have been obtained in the past. For instance, it was shown in \cite{Sco07} that the growth series of $A(\Theta)$ can be computed in terms of the clique polynomial of $\Theta$. Similar results can be found in \cite{AP14} and \cite{McM14}. However, the degree of \bT cannot be recovered from the coefficients of the clique polynomial of $\Theta$, so these results are independent of ours. Our proof of \thmref{RAAG} is related to normal forms for elements of a right-angled Artin group. A normal form for a word representing an element in $A(\Theta)$ is obtained by shuffling commuting elements and removing inverse pairs of generators of $A(\Theta)$ whenever possible \cite{HM95}. If we fix an ordering of $V(\Theta)$, then every element of $A(\Theta)$ admits a unique normal form, obtained by additionally shuffling lower-order letters to lower positions whenever possible. In our proof of \thmref{RAAG}, we construct a \emph{canonical representative} for a given word, obtained similarly by shuffling lower-order letters to lower positions. However, we do not need to cancel inverse pairs, so the canonical representative of a word may not be in normal form. \subsection*{Acknowledgements} We thank Benson Farb for useful conversations following the talk by Thurston. We thank the GEAR network for their support. We also thank the referee for helpful comments. \section{Uniform growth rates} \label{Sec:RAAGs} \subsection{Preliminaries} Let $G$ be a finitely generated group. By convention, the inverse of an element $g \in G$ will be represented by $\overline{g}$; and for any subset $S \subset G$, let $\overline{S} = \{ \overline{s} \colon s \in S\}$. A \emph{word} in $S \cup \overline{S}$ is a sequence $w=[s_1,\ldots,s_R]$, where $s_i \in S \cup \overline{S}$; $R$ is the \emph{length} of $w$. We allow the empty word whose length is $0$.
A word $w=[s_1,\ldots,s_R]$ represents an element $g \in G$ if $g=s_1 \cdots s_R$. (The empty word represents the identity element.) By a \emph{generating set} for $G$ we will mean a finite set $S \subset G \setminus \{1\}$ such that every element $g \in G$ is represented by a word in $S \cup \overline{S}$. The \emph{word length} $\norm{g}_S$ of $g$ relative to a generating set $S$ is the length of the shortest word in $S \cup \overline{S}$ representing $g$. For any $R$, $B_R(G)$ is the set of elements of $G$ with word length at most $R$. The \emph{growth rate} (also called the entropy) of $G$ relative to $S$ is \[ h_G = \lim_{R \to \infty} \frac{\log \# B_R(G)}{R},\] where the above limit exists by subadditivity. We remark that the growth rate of $G$ depends on the generating set, but positivity of the growth rate does not. The growth rate of $\FF_n$ relative to a basis is $\log(2n-1)$. If $G$ contains a subgroup isomorphic to $\FF_2$, then $h_G$ is strictly positive. See \cite{GdlH97} and the references therein for more details. \subsection{RAAGs} A graph is a 1-dimensional CW complex. It is \emph{simple} if there are no self-loops or double edges. Let $\Theta$ be a finite simple graph. Let $V(\Theta)$ and $E(\Theta)$ be the sets of vertices and edges of $\Theta$. An element of $E(\Theta)$ will be denoted by $vw$, where $v$ and $w$ are the vertices of the edge. The \emph{complementary graph} of $\Theta$ is the graph $\bT$ with $V(\Theta) = V(\bT)$ in which two vertices span an edge in $\bT$ if and only if they do not span an edge in $\Theta$. The \emph{right-angled Artin group} or RAAG associated to $\Theta$ is the group $A(\Theta)$ with the presentation: \[ A(\Theta) = \gen{s_v \text{ for } v \in V(\Theta) \colon [s_v,s_w]=1 \text{ for } vw \in E(\Theta) }. \] The collection $S=\{s_v\}$ will be called the \emph{standard generating set} of $A(\Theta)$. We will often ignore the distinction between a vertex $v$ and the generator $s_v$.
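The commutation hypothesis in results like the introduction's bound for \aut can be checked mechanically. A minimal sketch (rank $6$ is an illustrative choice; automorphisms are encoded by the images of the basis, as freely reduced tuples of signed integers, so this verifies equality of compositions in \aut directly):

```python
import math

N = 6  # rank of the free group; i encodes a_i, -i encodes its inverse

def freely_reduce(word):
    out = []
    for x in word:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return tuple(out)

def image(auto, word):
    # apply an automorphism (given by the images of a_1, ..., a_N) to a word
    img = []
    for x in word:
        if x > 0:
            img.extend(auto[x])
        else:
            img.extend(-y for y in reversed(auto[-x]))
    return freely_reduce(img)

def compose(f, g):
    # (f o g)(a_i) = f(g(a_i))
    return {i: image(f, g[i]) for i in range(1, N + 1)}

def identity():
    return {i: (i,) for i in range(1, N + 1)}

gens = []
for i in range(1, N + 1):   # inversions I_i
    s = identity(); s[i] = (-i,); gens.append(s)
for i in range(1, N):        # transpositions P_i
    s = identity(); s[i] = (i + 1,); s[i + 1] = (i,); gens.append(s)
for i in range(1, N):        # multiplications M_i
    s = identity(); s[i] = (i, i + 1); gens.append(s)

def commute(f, g):
    return compose(f, g) == compose(g, f)

non_commuting = [sum(1 for t in gens if t is not s and not commute(s, t))
                 for s in gens]
c0 = max(non_commuting)
print(c0, math.log(2 * c0 + 2) + 1)  # c0 = 7, so the bound is log 16 + 1
```

The maximum is attained by interior transpositions and multiplications, which is the "at most $7$" count quoted in the introduction.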
\begin{theorem} \label{Thm:RAAG} If the valence of every vertex in $\bT$ is bounded above by a constant $c_0$, then the growth rate of $A(\Theta)$ relative to the standard generating set $S$ is bounded by $\log (2c_0 + 2)+1$. \end{theorem} From \thmref{RAAG}, we derive \thmref{IntroCommute} as a corollary. \begin{corollary} \label{Cor:Commute} Let $S_G$ be a generating set for a group $G$ such that for every $s \in S_G$, the number of elements of $S_G$ that do not commute with $s$ is bounded by $c_0$. Then $h_G \leq \log (2c_0+2)+1$. \end{corollary} \begin{proof} Let $\Theta$ be the graph with vertex set $S_G$ and $ss' \in E(\Theta)$ if and only if $[s,s']=1$ in $G$. The natural map from $A(\Theta)$ to $G$ taking the standard generating set $S$ to $S_G$ extends to a surjective homomorphism, and the hypothesis on $S_G$ implies the valence of every vertex in $\bT$ is bounded by $c_0$. Altogether, we obtain $h_G \le h_{A(\Theta)} \le \log(2c_0+2)+1$. \qedhere \end{proof} The rest of the section is dedicated to proving \thmref{RAAG}. Given a word $w=[s_1,\ldots,s_R]$ in $S \cup \overline{S}$, the $j$--th letter of $w$ is $w(j) = s_j$. A word $w'=[t_1,\dots,t_R]$ is a \emph{reordering} of $w$ if $t_1 \cdots t_R = s_1\cdots s_R$ and there is a permutation $\sigma$ such that $t_j = s_{\sigma(j)}$. We say the letter $s_k$ in $w$ is \emph{ready} for position $i$, $i \le k$, if $s_k$ commutes with every $s_j$, for $i \le j < k$. At every vertex $v$ of $\bT$, label the half-edges at $v$ from $1$ to $d_v$, where $d_v \le c_0$ is the valence of $v$. Let $n$ be the cardinality of $V(\bT)$. Fix a labeling $L_0 \colon\, V(\bT) \to \NN$ whose image is $\{1,\ldots, n\}$. Fix $w_0=[s_1,\ldots,s_R]$. We will inductively construct a sequence $w_1,\ldots,w_R$ of reorderings of $w_0$, in conjunction with a sequence $L_1,\ldots,L_R$ of labelings of $V(\bT)$. The final word $w_R$ will be called the \emph{canonical representative} of $g=s_1 \cdots s_R$ induced by $w_0$.
(The word $w_R$ depends on the choice of $w_0$.) Along this process, we produce an encoding of the canonical representative by a sequence of integers $\ell_1,\ldots,\ell_R$. Suppose for $i \ge 0$, $w_i=[u_1,\ldots,u_i,t_{i+1},\ldots, t_R]$, a labeling $L_i$ of $V(\bT)$, and a sequence $\ell_1, \ldots, \ell_i$ are given. Among $\{t_{i+1},\ldots,t_R\}$, let $U$ be the subset of letters that are ready for position $i+1$. Pick $t \in U$ such that $L_i(t)$ is minimal among all elements of $U$. Set \[ w_{i+1} = \left[ u_1,\ldots,u_i,t,t_{i+1},\ldots, \hat{t}, \ldots,t_R \right] \] and $u_{i+1} = t$. If $t \in S$, then let $\ell_{i+1}=L_i(t)$; if $t \in \overline{S}$, then let $\ell_{i+1} = -L_i(t)$. The word $w_{i+1}$ is a reordering of $w_i$ and hence of $w_0$ by induction. We now define the labeling $L_{i+1} \colon\, V(\bT) \to \NN$. Let $n_i$ be the largest value of $L_i$. Let $(e_1,\ldots,e_d)$ be the half-edges of \bT incident at $t$ listed in order, where $d$ is the valence of $t$. For each $e_k$, let $v_k$ be the vertex connected to $t$ by the edge associated to $e_k$. We set $L_{i+1}(v_k) = n_i+k$ for each $k=1,\ldots,d$, and $L_{i+1}(v) = L_i(v)$ for all other $v \in V(\bT)$. \begin{lemma} \label{Lem:Canonical} Let $n=\#V(\bT)$. Then \[ 1 \le \left| \ensuremath{\ell}\xspace_1 \right| \le \left| \ensuremath{\ell}\xspace_2 \right| \le \cdots \le \left| \ensuremath{\ell}\xspace_R \right| \le n+c_0R. \] \end{lemma} \begin{proof} For each $i \ge 1$, we show $|\ensuremath{\ell}\xspace_i| \le |\ensuremath{\ell}\xspace_{i+1}|$. Let $w_R=[u_1,\ldots,u_R]$. We have $|\ensuremath{\ell}\xspace_i|=L_i(u_i)$. For $v \in V(\bT)$, $L_i(v) = L_{i+1}(v)$ unless $v$ is in the link of $u_i$; in the latter case, $L_{i+1}(v)$ is bigger than the maximal value of $L_i$. If $u_i$ and $u_{i+1}$ do not commute, then $u_{i+1}$ is in the link of $u_i$, therefore $L_{i+1}(u_{i+1})$ exceeds the maximal value of $L_i$, and in particular $L_{i+1}(u_{i+1}) > L_i(u_i)$.
If $u_i$ and $u_{i+1}$ commute, then they were both ready for position $i$. In this case, $L_i(u_{i+1}) = L_{i+1}(u_{i+1})$, and $u_i$ was chosen precisely because its label $L_i(u_i)$ is minimal among all elements in the set $\{u_{i+1},\ldots,u_R\}$ that were ready for position $i$. We conclude $|\ensuremath{\ell}\xspace_i| \le |\ensuremath{\ell}\xspace_{i+1}|$. The largest value of $L_{i+1}$ is at most $c_0$ plus the largest value of $L_i$. Hence the largest value of $L_R$ is at most $n+c_0R$. This bounds all $|\ell_i|$. \qedhere \end{proof} Set $C_R = n+c_0 R$. Let $D_R = \big\{ \pm1, \pm 2, \ldots, \pm C_R, C_R+1 \big\}$ and let \[ W_R = \big\{ \left( \ensuremath{\ell}\xspace_1,\ldots,\ensuremath{\ell}\xspace_R \right) \colon \ensuremath{\ell}\xspace_i \in D_R \text{ and } |\ensuremath{\ell}\xspace_1| \le \cdots \le |\ensuremath{\ell}\xspace_R| \big\}. \] \begin{proposition} There exists an embedding of $B_R(A(\Theta))$ into $W_R$, hence $\#B_R(A(\Theta)) \le \#W_R$. \end{proposition} \begin{proof} Let $g \in B_R(A(\Theta))$ have $\norm{g}_S=r$. Pick any word $w=[s_1,\ldots,s_r]$ representing $g$ and let $w_r$ be the canonical representative of $g$ induced from $w$. Let $(\ell_1,\ldots,\ell_r)$ be the code of $w_r$. If $r<R$, then extend the sequence to $\left( \ensuremath{\ell}\xspace_1, \ldots, \ensuremath{\ell}\xspace_r, \ensuremath{\ell}\xspace_{r+1}, \ldots, \ensuremath{\ell}\xspace_R \right)$ by setting $\ensuremath{\ell}\xspace_{r+i} = C_R+1$ for all $i=1,\ldots, R-r$. By \lemref{Canonical}, $(\ell_1,\ldots,\ell_R) \in W_R$. This gives a map $B_R(A(\Theta)) \to W_R$. To see this is an embedding, we show how to recover $w_r$ and hence $g$ from the sequence $\left( \ensuremath{\ell}\xspace_1,\ldots,\ensuremath{\ell}\xspace_R \right)$. Recall \bT is equipped with a cyclic ordering of the half-edges at every vertex and a labeling $L_0$ of the vertices from $1$ to $n$. Let $w_0$ be the empty word.
Suppose for $0 \le i \le r-1$, $L_i \colon\, V(\bT) \to \NN$ and a word $w_i=[u_1,\ldots,u_i]$ are defined. If $\ensuremath{\ell}\xspace_{i+1}=C_R+1$, then set $u_{i+1} = u_{i+2} = \cdots = u_R = 1 $. Otherwise, let $v$ be the unique vertex in \bT with label $|\ensuremath{\ell}\xspace_{i+1}| =L_i(v)$. Set $u_{i+1}=v$ if $\ensuremath{\ell}\xspace_{i+1}$ is positive and $u_{i+1} = \overline{v}$ if $\ensuremath{\ell}\xspace_{i+1}$ is negative. Let $(v_1,\ldots,v_d)$ be the vertices in the link of $u_{i+1}$ listed in cyclic order. Let $n_i$ be the largest value of $L_i$. Construct $L_{i+1} \colon\, V(\bT) \to \NN$ by setting $L_{i+1}(v_k)=n_i+k$ and $L_{i+1}(u)=L_i(u)$ for all other $u \in V(\bT)$. Then $w_r=[u_1,\ldots,u_r]$. \qedhere \end{proof} We now give an upper bound for the growth rate of $\#W_R$, which will complete the proof of \thmref{RAAG}. \begin{lemma} $\lim_{R \to \infty} \frac{\log \#W_R}{R} \le \log (2c_0+2)+1$. \end{lemma} \begin{proof} Suppose $p(R)$ and $q(R)$ are two functions of $R$ with $\lim_{R \to \infty} \frac{p(R)}{q(R)} = \frac{1}{\epsilon}$. Then, using Stirling's formula, $\log \binom{p}{q}$ is asymptotic to $p H(\epsilon)$ as $R \to \infty$, where \[ H(\epsilon) = \epsilon \log \frac{1}{\epsilon} + (1-\epsilon) \log \frac{1}{1-\epsilon} \] is the binary entropy function. (See \cite[Ch. 1]{Mac03}.) For any $R \ge 1$ and $C\ge R$, by a simple counting argument, the set \[ W(R,C) = \big\{ (x_1,\ldots,x_R) \colon x_i \in \{1,\ldots,C\}, x_1 \le \cdots \le x_R \big\}\] has cardinality $\#W(R,C) = \binom{C+R-1}{R}.$ Let $C=C_R+1=n+c_0R+1$. We have: \[ \#W_R \le 2^R \binom{n+R(c_0+1)}{R} \quad \text{and} \quad \lim_{R \to \infty} \frac{n+R(c_0+1)}{R} = c_0 + 1.
\] Therefore, \begin{align*} \lim_{R \to \infty} \frac{\log \#W_R}{R} & \le \lim_{R \to \infty} \frac{\log 2^R \binom{n+R(c_0+1)}{R} }{R} = \lim_{R \to \infty} \frac{ R\log 2 + \Big( n+R(c_0+1) \Big) H\left( \frac{1}{c_0+1} \right)}{R}\\ & = \log 2 + (c_0+1) H\left( \frac{1}{c_0+1} \right) = \log 2 + \log(c_0+1) + c_0 \log \left(1+\frac{1}{c_0} \right) \\ &\le \log (2 c_0+2) + 1. \qedhere \end{align*} \end{proof} \section{Evolving structures on $G$--Labeled graphs} \label{Sec:Graph} A graph is \emph{oriented} if each edge is oriented. For any edge $e$ of an oriented graph, denote by $\init(e)$ and $\term(e)$ the initial and terminal vertex of $e$. If $e$ is a \emph{loop}, then $\init(e) = \term(e)$. The orientation of $e$ induces an orientation on each half-edge of $e$: the half-edge $e_l$ containing $\init(e)$ is oriented so that $\init(e_l) = \init(e)$ ($\term(e_l)$ is a point in the interior of $e$), and the half-edge $e_r$ containing $\term(e)$ is oriented so that $\term(e_r) = \term(e)$. Given a group $G$, an oriented graph is $G$--labeled if each edge is labeled by an element of $G$. Two $G$--labeled graphs are \emph{equivalent} if one is obtained from the other by reversing the orientation of some of the edges and relabeling those edges with the inverses of their labels. This defines an equivalence relation on the set of $G$--labeled graphs. Fix $n$ and let $\G(G)$ be the set of equivalence classes of trivalent $G$--labeled graphs of rank $n$. (Recall the rank of a graph is the rank of its fundamental group.) We now consider operations that produce from one element of $\G(G)$ another element of $\G(G)$. \begin{figure}[htp!] \begin{center} \includegraphics{figures/split.pdf} \end{center} \caption{Double split and loop split} \label{Fig:Split} \end{figure} Let $\Gamma \in \G(G)$. Let $e$ be an edge with label $g_e$. There are two \emph{types} of edges in $\Gamma$: loop or non-loop. First assume $e$ is not a loop.
Choose a half-edge $\tilde{a}$ (not a half-edge of $e$) incident at $\init(e)$, and let $a$ be the edge associated to $\tilde{a}$ with label $g_a$. Disconnect $\tilde{a}$ from $\init(e)$ and reattach it to $\term(e)$, while changing the label $g_a \to g_a g_e$ if $\term(\tilde{a}) = \init(e)$, or $g_a \to \overline{g_e} g_a$ if $\init(\tilde{a}) = \init(e)$. We call this a \emph{forward split} along $e$. A forward split along $e$ is well defined: if we reverse the orientation of $a$ and invert $g_a$, then the resulting graph is equivalent. Similarly, take a half-edge $\tilde{b}$ incident at $\term(e)$. A \emph{backward split} along $e$ is obtained by disconnecting $\tilde{b}$ from $\term(e)$ and reattaching it to $\init(e)$, while changing the label $g_b \to g_e g_b$ if $\init(\tilde b) = \term(e)$, or $g_b \to g_b \overline{g_e}$ if $\term(\tilde{b}) = \term(e)$. This is again well defined. A \emph{double split} along $e$ (see \figref{Split}) is the composition of a forward and a backward split along $e$. The resulting graph from a double split is trivalent, unlike from a backward or forward split alone. If we reverse the orientation of $e$ and invert $g_e$, then a forward split along $e$ becomes a backward split along $e$ and vice versa. Therefore, a double split is well-defined on the equivalence class of $\Gamma$. For a loop $e$, let $a$ be the edge connected to $e$ with label $g_a$. A \emph{forward split} along $e$ changes the label $g_a \to g_a g_e$ if $\term(a) = \init(e)$, or $g_a \to \overline{g_e} g_a$ if $\init(a) = \init(e)$. A \emph{backward split} changes the label $g_a \to g_a \overline{g_e}$ if $\term(a) = \init(e)$, or $g_a \to g_e g_a$ if $\init(a) = \term(e)$ (see \figref{Split}). By a \emph{loop split} along $e$ we will mean either a forward or a backward split along $e$. This is again well defined on the equivalence class of $\Gamma$.
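The label bookkeeping in a forward split can be checked directly. A minimal sketch (the edge names and the encoding of labels as freely reduced tuples of signed integers in a free group are illustrative, not the paper's notation) verifying that splitting agrees with reversing the moved edge first, splitting, and reversing back:

```python
def inverse(w):
    return tuple(-x for x in reversed(w))

def reduce_word(w):
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return tuple(out)

def mul(u, v):
    return reduce_word(u + v)

# A labeled graph: edge name -> [init vertex, term vertex, label].
def forward_split(graph, e, a, side):
    """Move the half-edge of `a` on `side` ('init' or 'term'), currently
    attached at init(e), over to term(e), updating the label of `a`."""
    g = {k: list(v) for k, v in graph.items()}
    vi, vt, ge = graph[e]
    ai, at, ga = graph[a]
    if side == 'term':   # term half of a is at init(e): g_a -> g_a g_e
        assert at == vi
        g[a] = [ai, vt, mul(ga, ge)]
    else:                # init half of a is at init(e): g_a -> g_e^{-1} g_a
        assert ai == vi
        g[a] = [vt, at, mul(inverse(ge), ga)]
    return g

def reverse_edge(graph, a):
    # reverse the orientation of a and invert its label (an equivalence)
    g = {k: list(v) for k, v in graph.items()}
    vi, vt, ga = graph[a]
    g[a] = [vt, vi, inverse(ga)]
    return g

# Example: e runs 0 -> 1 with label a_1; a runs 2 -> 0 with label a_2,
# so the moved half-edge is the 'term' half of a.
G0 = {'e': [0, 1, (1,)], 'a': [2, 0, (2,)]}
G1 = forward_split(G0, 'e', 'a', 'term')  # a becomes 2 -> 1, label a_2 a_1
```

Reversing $a$ first, splitting with the other label rule, and reversing back yields the same labeled graph, which is the well-definedness claim above.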
For any edge $e$ of $\Gamma$, a \emph{split} along $e$ will mean either a double split or a loop split depending on the type of $e$. We will represent a split along $e$ by $\Gamma \splitt{s} \Gamma'$ and call $e=\supp(s)$ the \emph{support} of $s$. Fix $\Gamma_0 \in \G(G)$. A \emph{derivation} $D=[s_1,\ldots,s_R]$ of length $R$ is a sequence of splits \[ \Gamma_0 \splitt{s_1} \Gamma_1 \splitt{s_2} \cdots \splitt{s_R} \Gamma_R. \] Set $\Gamma_i = [s_1,\ldots,s_i](\Gamma_0)$; we also write $\Gamma_R = D(\Gamma_0)$. We will say $\Gamma_R$ is derived from $\Gamma_0$ by $D$. Let $B_R(\Gamma_0)$ be the set of all trivalent graphs (up to equivalence) that are derived from $\Gamma_0$ by a derivation of length at most $R$. Our main result is \thmref{IntroGraph}, restated below. \begin{theorem} \label{Thm:Graph} For any $\Gamma_0 \in \G(G)$, \[ \lim_{R \to \infty} \frac{\log \# B_R(\Gamma_0)}{R} \le 3 \log 4.\] \end{theorem} The main idea behind the proof of \thmref{Graph} is to give a normal form for a derivation. This was done in \cite{STT92} for unlabeled graphs. It turns out labeled graphs do not pose significant additional difficulties. We are also careful to obtain an explicit upper bound for the growth rate of $\#B_R(\Gamma)$. \begin{figure}[htp!] \begin{center} \includegraphics{figures/configurations.pdf} \end{center} \caption{Configurations of splits} \label{Fig:Configurations} \end{figure} A split $\Gamma \splitt{s} \Gamma'$ defines a bijection between the edges of $\Gamma$ and $\Gamma'$. For any edge $e$ in $\Gamma$, let $s(e)$ be its image in $\Gamma'$. We say $\supp(s)$ and its vertices are \emph{destroyed} by $s$, and $s(\supp(s))$ and its vertices are \emph{created} by $s$. All other vertices of $\Gamma$ \emph{survive} $s$. An edge of $\Gamma$ survives $s$ if all of its vertices survive $s$. Given $\Gamma \splitt{s} \Gamma'$, if a representative of $\Gamma$ is chosen, then $s$ naturally induces a representative for $\Gamma'$.
Fix a representative in the equivalence class of $\Gamma_0$. This way, for any derivation $D=[s_1,\ldots,s_R]$, we can inductively define a representative for each $\Gamma_i = [s_1,\ldots,s_i](\Gamma_0)$. We will refer to \figref{Configurations} for the following discussion. For each vertex $v$ of $\Gamma_0$, label the half-edges at $v$ from $1$ to $3$ so they can be cyclically ordered. Let $D=[s_1,\ldots,s_R]$ be a derivation. We will cyclically order the half-edges at each vertex of $\Gamma_i=[s_1,\ldots,s_i](\Gamma_0)$ and \emph{label} each $s_i$ as follows. Let $X$ be a fixed planar binary tree with four valence-$1$ vertices. The distinguished middle edge of $X$ is oriented (see \figref{Configurations}). Let $P$ be the planar graph which is the wedge of an interval and an oriented loop (also see \figref{Configurations}). Now suppose for $i\ge 0$, the half-edges of $\Gamma_i$ are labeled. Let $e$ be the support of $s_{i+1}$ in $\Gamma_i$. If $e$ is not a loop, then the cyclic ordering at $\init(e)$ and $\term(e)$ allows us to identify a contractible neighborhood of $e$ with $X$. The four configurations in the left column of \figref{Configurations} represent all possible double splits with support the middle edge of $X$. Record the label of the configuration that $s_{i+1}$ identifies with; this is the label of $s_{i+1}$. Similarly, if $e$ is a loop, then identify a neighborhood of $e$ with $P$. Label $s_{i+1}$ by $0$ if $s_{i+1}$ is a forward split along $e$ and label $s_{i+1}$ by $1$ otherwise (see \figref{Configurations}). Let $\ell \in \{0,1,2,3\}$ be the label of $s_{i+1}$. Note that we always know if $e$ is a loop or not, so there is no confusion with the duplication of the labels $0$ and $1$.
For each vertex $v$ of $\Gamma_{i+1}$, if $v$ is not created by $s_{i+1}$, then the half-edges at $v$ will inherit their labels from $\Gamma_i$; otherwise, label the half-edges at $v$ from $1$ to $3$ according to the right side of configuration $\ensuremath{\ell}\xspace$ in \figref{Configurations}. Let $D=[s_1,\ldots,s_R]$ and compute the label of each $s_i$ as above. Fix $i \ge 0$ and let $e$ be any edge in $\Gamma_i$. For $k > i+1$, we will say $e$ \emph{survives} $[s_{i+1},\ldots,s_{k-1}]$ if for all $j=i+1,\ldots,k-1$, the image of $e$ in $\Gamma_{j-1}$ survives $s_j$. In particular, $e$ remains the same type from $\Gamma_i$ to $\Gamma_{k-1}$. Let $e_i$ be the preimage of $\supp(s_k)$ in $\Gamma_i$. We say $s_k$ is \emph{ready} for $\Gamma_i$ if $e_i$ survives $[s_{i+1},\ldots,s_{k-1}]$. In this case, we can apply $s_k$ to $\Gamma_i$ with support $e_i$ using the label of $s_k$; this is well-defined since $e_i$ is of the same type as $\supp(s_k)$. Consider \[ \Gamma_{k-2} \splitt{s_{k-1}} \Gamma_{k-1} \splitt{s_k} \Gamma_k.\] Suppose $s_k$ is ready for $\Gamma_{k-2}$. Apply $s_k$ to $\Gamma_{k-2}$ and let $\Gamma_{k-1}'$ be the resulting graph. Propagate the labels of half-edges from $\Gamma_{k-2}$ to $\Gamma_{k-1}'$ as before. Since $e_{k-2}$ and $e=\supp(s_{k-1})$ are disjoint in $\Gamma_{k-2}$, $e$ survives $s_k$, so we may apply $s_{k-1}$ to $\Gamma_{k-1}'$ with support $s_k(e)$ using the label of $s_{k-1}$. Let \[ \Gamma_{k-2} \splitt{s_k} \Gamma_{k-1}' \splitt{s_{k-1}} \Gamma_k'\] be the derivation obtained by switching the order of $s_{k-1}$ and $s_k$. We claim the following: \begin{lemma} \label{Lem:Commute} With the same notation as above, if $s_k$ is ready for $\Gamma_{k-2}$, then $s_{k-1}$ and $s_k$ commute; that is, $\Gamma_k = \Gamma_k'$. \end{lemma} \begin{figure}[htp!] \begin{center} \includegraphics{figures/commute.pdf} \end{center} \caption{If $s_k$ is ready for position $k-2$, then $s_{k-1}$ and $s_k$ commute.
} \label{Fig:Commute} \end{figure} \begin{proof} For any split $s$, we say an edge is \emph{affected} by $s$ if its label is changed by $s$. Any split affects at most two edges. Let $e$ and $e'$ be the supports of $s_{k-1}$ and $s_k$ in $\Gamma_{k-2}$, respectively. If $s_{k-1}$ and $s_k$ do not affect the same edge in $\Gamma_{k-2}$, then they clearly commute. So let $a$ be an edge in $\Gamma_{k-2}$ affected by both $s_{k-1}$ and $s_k$. The edge $a$ must share a vertex with both $e$ and $e'$. Since $e$ and $e'$ are disjoint, $a$ cannot be a loop. The proof that the labels of $s_{k-1} \circ s_k(a)$ and $s_k \circ s_{k-1}(a)$ are the same now follows from a case analysis; the proofs in all cases are similar. \figref{Commute} shows the case when neither $e$ nor $e'$ is a loop and $\term(e') = \init(a)$ and $\term(a) = \init(e)$. Since $s_k \circ s_{k-1}$ and $s_{k-1} \circ s_k$ affect the edge labels in the same way, $\Gamma_k' = \Gamma_k$. \qedhere \end{proof} Let $D=[s_1,\ldots,s_R]$. For any $i \le k-1$, if $s_k$ is ready for $\Gamma_i$, then $s_k$ is ready for $\Gamma_j$, for all $i \le j \le k-1$. By applying \lemref{Commute} $k-i$ times, we see that \[ D' = [s_1,\ldots,s_i,s_k,s_{i+1},\ldots, \widehat{s_k}, \ldots, s_R] \] is a well-defined derivation and $D(\Gamma_0) = D'(\Gamma_0)$. Set $D_0 = D$ and let $\Gamma_R = D(\Gamma_0)$. Inductively, we will construct a sequence $D_1$, \ldots, $D_R$ of derivations such that $D_j(\Gamma_0) = \Gamma_R$ for all $j=1,\ldots,R$. The final derivation $D_R$ will be called the \emph{canonical derivation} of $\Gamma_R$ coming from $D_0$. Let $M=2n-2$ be the number of vertices of $\Gamma_0$. \emph{Label} each vertex of $\Gamma_0$ by a distinct integer from $1$ to $M$. Similarly, \emph{label} the edges of $\Gamma_0$ from $1$ to $N$, where $N=3n-3$ is the number of edges of $\Gamma_0$. Suppose for $i \ge 0$, $D_i=[u_1,\ldots,u_i,t_{i+1},\ldots,t_R]$ has been constructed.
Also, suppose the vertices and edges of $\Gamma_i'$ have been labeled, where $\Gamma_i' = [u_1,\ldots,u_i](\Gamma_0)$. Let $U$ be the subset of $\{t_{i+1},\ldots,t_R\}$ consisting of splits that are ready for $\Gamma_i'$. Pick $t \in U$ such that $t$ destroys the vertex of $\Gamma_i'$ with the lowest label. Set \[ D_{i+1} = [u_1, \ldots, u_i, t, t_{i+1}, \ldots, \hat{t}, \ldots t_R]. \] Set $u_{i+1} = t$ and let $\Gamma_{i+1}'=[u_1,\ldots,u_{i+1}](\Gamma_0)$. Let $M_i$ and $N_i$ be the maximal vertex and edge labels of $\Gamma_i'$. Let $f$ be the edge in $\Gamma_{i+1}'$ created by $t$. Label $f$ by $N_i+1$. If $f$ is a loop, then label its vertex by $M_i+1$. If $f$ is not a loop, then label $\init(f)$ by $M_i+1$ and $\term(f)$ by $M_i+2$. All other vertices and edges of $\Gamma_{i+1}'$ will inherit their labels from $\Gamma_i'$. This completes the construction. Since $\Gamma_0$ has $2n-2$ vertices and at most two vertices are created in each stage, the maximal vertex label of $\Gamma_R$ is at most $2n-2+2R$. Similarly, the maximal edge label of $\Gamma_R$ is at most $3n-3+R$. Let \[ V = \{1,\ldots, 2n-2+2R \}, \quad E = \{1, \ldots, 3n-3 + R\}. \] Let $F_R$ be the set of all pairs of functions $(\phi,\psi)$, where $\phi \colon\, V \to \{0,1,2,3\}$ and $\psi \colon\, E \to \{0,1,2,3\}$. \begin{proposition} There exists an embedding of $B_R(\Gamma_0)$ into $F_R$, hence $\#B_R(\Gamma_0) \le \#F_R$. \end{proposition} \begin{proof} By definition, any $\Gamma \in B_R(\Gamma_0)$ can be obtained from $\Gamma_0$ by a derivation $D$ of length $r \le R$. Let $D_r$ be the canonical derivation of $\Gamma$ coming from $D$. We now \emph{encode} $D_r$ by a pair of maps $\phi \colon\, V \to \{0,1,2,3\}$ and $\psi \colon\, E \to \{0,1,2,3\}$. Write $D_r$ as \[ \Gamma_0 \splitt{u_1} \Gamma_1 \splitt{u_2} \cdots \splitt{u_r} \Gamma_r. \] For $i \in V$, let $j$ be the largest index so that $\Gamma_j$ has a vertex $v$ with label $i$.
If $j \ge r$ (with the convention $j > r$ if no such label exists), then set $\phi(i) =0$. If $j < r$, then $u_{j+1}$ must destroy $v$, so the support of $u_{j+1}$ is an edge $e$ in $\Gamma_j$ where $v$ is either $\init(e)$ or $\term(e)$. Define $\phi(i) \in \{1,2,3\}$ to be the label of the half-edge of $e$ containing vertex $v$. If $e$ is a loop, then choose $\phi(i)$ to be the label of any half-edge of $e$. For $i \in E$, let $k$ be the largest index so that $\Gamma_k$ has an edge $e$ of label $i$. If $k\ge r$, then $\psi(i)=0$. If $k<r$, then $e$ is the support of $u_{k+1}$. In this case, define $\psi(i)\in \{0,\ldots,3\}$ to be the label of $u_{k+1}$. To see this is an embedding, we will give a \emph{decoding} procedure that will recover from $(\phi,\psi)$ the canonical derivation $D_r=[u_1,\ldots,u_r]$ and hence $\Gamma$. For each $k \ge 0$, suppose $\Gamma_k'$ has been constructed. In $\Gamma_k'$, let $i$ range from $1$ to $2n-2+2k$ in order and let $v$ be the vertex in $\Gamma_k'$ with label $i$. If $\phi(i) = 0$, then move on to $i+1$. Otherwise, $\phi(i)$ determines a unique edge $e$, where $v$ is either the initial or terminal vertex of $e$, such that the half-edge of $e$ at $v$ has label $\phi(i)$. We now explain a \emph{matching} procedure that can occur in two ways. If $e$ is a loop, then we have a match. If $e$ is not a loop, then let $w$ be the other vertex of $e$, with label $i'$. If $\phi(i')$ is exactly the label of the half-edge of $e$ at $w$, then we have a match. In all other cases, there is no match and we move on to $i+1$. If there is a match, then let $j \in E$ be the label of $e$. The configuration $\psi(j)$ determines a split supported on $e$ which we call $u_{k+1}'$, and applying $u_{k+1}'$ to $\Gamma_k'$ yields $\Gamma_{k+1}'$. Proceeding in this way until $k=r$ results in a derivation $D'$. To see that $D'=D_r$, let $e$ be the support of $u_k$.
The encoding procedure ensures that the values of $\phi$ on the labels of $\init(e)$ and $\term(e)$ determine $e$, and hence a match, and the value of $\psi$ on the label of $e$ agrees with $u_k$. Furthermore, since only the splits that are ready at $\Gamma_{k-1}$ can determine a match, and $u_k$ is the unique one among them that destroys the vertex of $\Gamma_{k-1}$ with the smallest label, the match coming from $u_k$ will always be the first match the decoding procedure finds. This shows $\Gamma_k = \Gamma_k'$ and $u_k = u_k'$ for all $k$. Therefore, $B_R(\Gamma_0)$ embeds in $F_R$. \qedhere \end{proof} Since $\#F_R = 4^{5n-5+3R}$, we have $\lim_{R \to \infty} \frac{ \log \#F_R}{R} = 3 \log 4$. This completes the proof of \thmref{Graph}. \subsection{Triangulations of a surface} Let $\Sigma=\Sigma_{g,p}$ be an oriented surface of genus $g$ with $p$ punctures. For any $n \ge p$, let $\T = \T(\Sigma)$ be the set of homotopy classes of triangulations of $\Sigma$ with $n$ vertices (the punctures of $\Sigma$ are always vertices of triangles). A natural transformation of a triangulation is a \emph{diagonal flip}. Given $T \in \T$, let $\Delta$ and $\Delta'$ be two triangles in $T$ that share a common edge $E$. View $\Delta \cup \Delta'$ as a quadrilateral with diagonal $E$. Replacing $E$ by the other diagonal of the quadrilateral yields a triangulation $T' \in \T$. Call this process a (diagonal) flip about $E$ and denote it by $T \splitt{d} T'$. Let $B_R(T)$ be the set of all triangulations of $\Sigma$ obtained from $T$ by a sequence of at most $R$ diagonal flips. Fix $T_0 \in \T$. Dual to $T_0$ is a trivalent graph $\Gamma_0$ obtained by putting a vertex in the interior of each triangle and connecting two vertices by an edge when the corresponding triangles share an edge. Pick a vertex $x_0$ in $\Gamma_0$ and let $G = \pi_1(\Sigma,x_0)$. We will label each edge of $\Gamma_0$ by an element of $G$ as follows. Orient the edges of $\Gamma_0$ arbitrarily.
Pick a spanning tree $K_0$ in $\Gamma_0$ and label each edge of $K_0$ by $1$. Each edge $e$ in the complement of $K_0$ represents an element of $G$: connect the endpoints of $e$ to $x_0$ along $K_0$ and orient the resulting closed curve so that it matches the orientation of $e$ in $\Gamma_0$. Now, label $e$ by the element that this closed curve represents in $G$. This makes $\Gamma_0$ a $G$--labeled graph in $\mathcal{G}_m(G)$, where $m=2g+n-1$ is the rank of $\Gamma_0$. By a pair $(\Gamma,f)$ we will mean a $G$--labeled graph $\Gamma \in \mathcal{G}_m(G)$ together with an embedding $ f \colon\, \Gamma \to \Sigma$. We say a pair $(\Gamma,f)$ is \emph{well-labeled} if for any closed path $p$ in $\Gamma$, the product of the labels of the edges along $p$ lies in the conjugacy class in $G$ represented by $f(p)$. By construction, the pair $(\Gamma_0,i)$, where $\Gamma_0$ is the dual graph to $T_0$ and $i$ is the inclusion map, is well-labeled. \begin{proposition} \label{Prop:T-to-G} There exists an embedding of $B_R(T_0)$ into $B_R(\Gamma_0)$, hence $\#B_R(T_0) \le \#B_R(\Gamma_0)$. \end{proposition} \begin{proof} Assume $T$ and a well-labeled dual graph $(\Gamma,i)$ are given. Consider a flip $T \splitt{d} T'$ about an edge $E$ in $T$ and let $e$ be the edge in $\Gamma$ dual to $E$. Identify the quadrilateral containing $E$, together with a contractible neighborhood of $e$ dual to the quadrilateral, with the left-hand side of \figref{Flip}. We define a split move $\Gamma \splitt{s} \Gamma'$ supported on $e$, where $(\Gamma',i)$ is also embedded in $\Sigma$, as indicated by \figref{Flip}. We refer to $s$ as the split associated to the flip $d$. Note that a closed path $p$ in $\Gamma$ can naturally be mapped to a homotopic closed path $p'$ in $\Gamma'$ and the products of labels along the edges of $p$ and $p'$ are the same. That is, the pair $(\Gamma',i)$ is still well-labeled. \begin{figure}[htp!]
\begin{center} \includegraphics{figures/flip.pdf} \end{center} \caption{Flip and dual split.} \label{Fig:Flip} \end{figure} We now define a map from $B_R(T_0)$ to $B_R(\Gamma_0)$. For any $T \in B_R(T_0)$, choose an arbitrary sequence of flips $T_0 \splitt{d_1} T_1 \splitt{d_2} \cdots \splitt{d_R} T_R=T$ and let $\Gamma_0 \splitt{s_1} \Gamma_1 \splitt{s_2} \cdots \splitt{s_R} \Gamma_R$ be the associated sequence of \emph{dual splits} as constructed above. The map from $B_R(T_0)$ to $B_R(\Gamma_0)$ is defined by sending $T_R$ to $(\Gamma_R,i)$ and then to $\Gamma_R$. We show that this map is injective. In fact, for triangulations $T$ and $T'$ with well-labeled dual graphs $(\Gamma,i)$ and $(\Gamma',i)$, we show that if $\Gamma$ and $\Gamma'$ are equivalent $G$--labeled graphs, then there exists a homeomorphism of $\Sigma$ homotopic to the identity taking $T$ to $T'$. Since $\Gamma$ and $\Gamma'$ are equivalent, there is a graph isomorphism $\phi \colon\, \Gamma \to \Gamma'$ such that the label of any edge $e \in \Gamma$ matches the label of $\phi(e) \in \Gamma'$. Since $\Gamma$ and $\Gamma'$ are dual graphs to the triangulations $T$ and $T'$ respectively, we can build a homeomorphism $f \colon\, \Sigma \to \Sigma$ mapping a triangle of $T$ associated to a vertex $v \in \Gamma$ to the triangle of $T'$ associated to the vertex $\phi(v)$. To show that $f$ is homotopic to the identity, it suffices to show that every closed path $q$ in $\Sigma$ is homotopic to $f(q)$. First perturb $q$ so that it misses the vertices of $T$. Then $q$ can be pushed to a closed path $p$ in $\Gamma$. Since $q$ is homotopic to $p$, we have that $f(q)$ is homotopic to $p' = f(p)$. But the products of the labels along the closed paths $p$ and $p'$ are identical, which means $p$ and $p'$ represent the same conjugacy class in $G$ and hence are homotopic. This finishes the proof.
\qedhere \end{proof} \thmref{IntroTriangle} from the introduction now follows from \thmref{Graph} and \propref{T-to-G}. \bibliographystyle{alpha}
https://arxiv.org/abs/1407.6701
Uniform growth rate
In an evolutionary system in which the rules of mutation are local in nature, the number of possible outcomes after $m$ mutations is an exponential function of $m$ but with a rate that depends only on the set of rules and not the size of the original object. We apply this principle to find a uniform upper bound for the growth rate of certain groups including the mapping class group. We also find a uniform upper bound for the growth rate of the number of homotopy classes of triangulations of an oriented surface that can be obtained from a given triangulation using $m$ diagonal flips.
https://arxiv.org/abs/2105.04684
An automatic system to detect equivalence between iterative algorithms
When are two algorithms the same? How can we be sure a recently proposed algorithm is novel, and not a minor twist on an existing method? In this paper, we present a framework for reasoning about equivalence between a broad class of iterative algorithms, with a focus on algorithms designed for convex optimization. We propose several notions of what it means for two algorithms to be equivalent, and provide computationally tractable means to detect equivalence. Our main definition, oracle equivalence, states that two algorithms are equivalent if they result in the same sequence of calls to the function oracles (for suitable initialization). Borrowing from control theory, we use state-space realizations to represent algorithms and characterize algorithm equivalence via transfer functions. Our framework can also identify and characterize some algorithm transformations including permutations of the update equations, repetition of the iteration, and conjugation of some of the function oracles in the algorithm. To support the paper, we have developed a software package named Linnaeus that implements the framework to identify other iterative algorithms that are equivalent to an input algorithm. More broadly, this framework and software advances the goal of making mathematics searchable.
\section{Introduction}\label{intro} Large-scale optimization problems in machine learning, signal processing, and imaging have fueled ongoing interest in iterative optimization algorithms. New optimization algorithms are regularly proposed in order to capture more complicated models, reduce computational burdens, or obtain stronger performance and convergence guarantees. However, the \emph{novelty} of an algorithm can be difficult to establish because algorithms can be written in different equivalent forms. For example, algorithm~\ref{algo_i1} was originally proposed by Popov~\cite{popov1980modification} in the context of solving saddle point problems. This method was later generalized by Chiang et al.~\cite[\S4.1]{chiang2012online} in the context of online optimization. Algorithm~\ref{algo_i2} is a reformulation of algorithm~\ref{algo_i1} adapted for use in generative adversarial networks (GANs)~\cite{gidel2018a}. Algorithm~\ref{algo_i3} is an adaptation of \emph{Optimistic Mirror Descent}~\cite{OMD_rakhlin} used by Daskalakis et al.~\cite{daskalakis2018training} and also used to train GANs. Finally, algorithm~\ref{algo_i4} was proposed by Malitsky~\cite{malitsky2015projected} in the context of solving monotone variational inequality problems. In all four algorithms, the vectors $x^k_1$ and $x^k_2$ are algorithm states, $\eta$ is a tunable parameter, and $F^k(\cdot)$ is the gradient of the loss function at time step $k$. 
\vspace{-1em} \noindent\hfil \begin{minipage}[t]{0.45\textwidth} \begin{algorithm}[H] \centering \caption{(Modified Arrow--Hurwicz)} \label{algo_i1} \begin{algorithmic} \FOR{$k=1, 2,\dots$} \STATE{$x^{k+1}_1 = x^k_1 - \eta F^k(x^k_2)$} \STATE{$x^{k+1}_2 = x^{k+1}_1 - \eta F^k(x^k_2)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.45\textwidth} \begin{algorithm}[H] \centering \caption{(Extrapolation from the past)} \label{algo_i2} \begin{algorithmic} \FOR{$k=1, 2,\dots$} \STATE{$x^k_2 = x^k_1 - \eta F^{k-1}(x^{k-1}_2)$} \STATE{$x^{k+1}_1 = x^k_1 - \eta F^k(x^{k}_2)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \noindent\hfill \begin{minipage}[t]{0.45\textwidth} \begin{algorithm}[H] \centering \caption{(Optimistic Mirror Descent)} \label{algo_i3} \begin{algorithmic} \FOR{$k=1, 2,\dots$} \STATE{$x^{k+1}_2 = x^k_2 - 2\eta F^k(x^k_2) + \eta F^{k-1}(x^{k-1}_2)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfill \begin{minipage}[t]{0.45\textwidth} \begin{algorithm}[H] \centering \caption{(Reflected Gradient Method)} \label{algo_i4} \begin{algorithmic} \FOR{$k=1, 2,\dots$} \STATE{$x^{k+1}_1 = x^k_1 - \eta F^k( 2x^k_1 - x^{k-1}_1 )$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfill \vspace{1em} Algorithms \ref{algo_i1}--\ref{algo_i4} are equivalent in the sense that when suitably initialized, the sequences $(x^k_1)_{k\ge 0}$ and $(x^k_2)_{k\ge 0}$ are identical for all four algorithms.\footnote{ In their original formulations, algorithms~\ref{algo_i1}, \ref{algo_i2}, and \ref{algo_i4} included projections onto convex constraint sets. We assume an unconstrained setting here for illustrative purposes. Some of the equivalences no longer hold in the constrained case.} Although these particular equivalences are not difficult to verify and many have been explicitly pointed out in the literature, for example in~\cite{gidel2018a}, algorithm equivalence is not always immediately apparent. 
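As an illustrative numerical sketch (ours, not from the cited works), one can check this equivalence directly for algorithms \ref{algo_i1} and \ref{algo_i3}: with a time-invariant oracle $F$ and matched initialization, both produce the same sequence $(x_2^k)$. The quadratic oracle and all constants below are arbitrary test choices.

```python
import numpy as np

# F: a fixed (time-invariant) test oracle, here the gradient of a strongly
# convex quadratic, F(x) = Q x with Q positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = M @ M.T + np.eye(3)
F = lambda x: Q @ x
eta, K = 0.05, 40

# Algorithm 1 (modified Arrow--Hurwicz): both updates use F(x2^k)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
seq = [x2.copy()]                     # record x2^0, x2^1, ...
for _ in range(K):
    g = F(x2)
    x1 = x1 - eta * g                 # x1^{k+1} = x1^k - eta F(x2^k)
    x2 = x1 - eta * g                 # x2^{k+1} = x1^{k+1} - eta F(x2^k)
    seq.append(x2.copy())

# Algorithm 3 (Optimistic Mirror Descent), seeded with (x2^0, x2^1)
# taken from Algorithm 1's run ("suitable initialization")
y_prev, y = seq[0], seq[1]
for k in range(1, K):
    y_prev, y = y, y - 2 * eta * F(y) + eta * F(y_prev)
    assert np.allclose(y, seq[k + 1]), "iterates diverged"
print("Algorithms 1 and 3 agree on all", K, "iterates")
```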
Indeed, it is not uncommon for algorithms to be unknowingly re-discovered in a different but equivalent form and given a new name before anyone observes that they are not new after all. In this paper, we present a framework for reasoning about algorithm equivalence, with the ultimate goal of making the analysis and design of algorithms more principled and streamlined. This includes: \begin{itemize} \item A universal way of representing algorithms, inspired by the literature on control theory. Specifically, we will use \emph{state-space realizations} and \emph{transfer functions}. \item A description of different ways in which two algorithms can be deemed equivalent, and how these equivalences manifest themselves with regards to the algorithm representation. \item A computationally efficient way of verifying whether two algorithms belong to the same equivalence class and how to transform between equivalent representations. \end{itemize} We also present a software package we named \lin{}\footnote{ Named after Carl Linnaeus, a botanist and zoologist who invented the modern system of naming organisms.}, for the classification and taxonomy of iterative algorithms. The software is a search engine, where the input is an algorithm described using natural syntax, and the output is a canonical form for the algorithm along with any known names and pointers to relevant literature. The approach described in this paper allows \lin{} to search over first order optimization algorithms such as gradient descent with acceleration, the alternating directions method of multipliers (ADMM), and the extragradient method. As the database in \lin{} grows, it will help algorithm researchers understand and efficiently discover connections between algorithms. More generally, \lin{} advances the goal of making mathematics searchable. This paper is organized as follows. In section \ref{example}, we introduce three examples of equivalent algorithms that motivate our framework. 
In section \ref{preliminary}, we briefly review important background on linear systems and optimization used throughout the paper. We formally define two notions of algorithm equivalence, \emph{oracle equivalence} and \emph{shift equivalence}, in section \ref{equivalence} and discuss how to characterize them via transfer functions in sections \ref{charac-oracle} and \ref{charac-shift}. Certain transformations can also be identified and characterized with our framework including \emph{algorithm repetition}, repeating an algorithm multiple times, and \emph{conjugation}, a transformation using conjugate function oracles. These are discussed in sections \ref{repe} and \ref{conjugation} respectively. In section \ref{package}, we introduce our software package \lin{} for the classification of iterative algorithms. \section{Motivating examples}\label{example} To explain what we mean by algorithm equivalence, we introduce three motivating examples in this section. Each provides a different view of how two algorithms might be equivalent. \noindent \hfil \begin{minipage}{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo1} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_1 = 2x^k_1 - x^k_2 - \frac{1}{10} \nabla f(2x^k_1 - x^k_2)$} \STATE{$x^{k+1}_2 = x^k_1$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo2} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$\xi^{k+1}_1 = \xi^k_1 - \xi^k_2 - \frac{1}{5} \nabla f(\xi^k_1)$} \STATE{$\xi^{k+1}_2 = \xi^k_2 + \frac{1}{10} \nabla f(\xi^k_1)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em} The first example consists of algorithms \ref{algo1} and \ref{algo2}. 
These algorithms are equivalent in a strong sense: when suitably initialized, we may transform the iterates of algorithm \ref{algo1} by the invertible linear map $\xi^k_1 = 2x^k_1 - x^k_2, \xi^k_2 = - x^k_1+x^k_2$ to yield the iterates of algorithm \ref{algo2}. We say that the sequences $(x^k_1)_{k\ge 0}$ and $(x^k_2)_{k\ge 0}$ are \emph{equivalent} to sequences $(\xi^k_1)_{k\ge 0}$ and $(\xi^k_2)_{k\ge 0}$ \emph{up to an invertible linear transformation}. \vspace{-1em} \noindent \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo3} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{${x}^{k+1}_1 = 3{x}^k_1 - 2x^k_2 + \frac{1}{5} \nabla f(-x^k_1 + 2x^k_2)$} \STATE{$x^{k+1}_2 = x^k_1$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo4} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$\xi^{k+1} = \xi^k - \frac{1}{5} \nabla f(\xi^k)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em} The second example consists of algorithms \ref{algo3} and \ref{algo4}. These algorithms do not even have the same number of state variables, so they are \emph{not} equivalent up to an invertible linear transformation. But when suitably initialized, we may transform the iterates of algorithm \ref{algo3} by the linear map $\xi^k = -x^k_1 +2 x^{k}_2$ to yield the iterates of algorithm \ref{algo4}. This transformation is linear but not invertible. Instead, notice that the sequences of calls to the gradient oracle are identical: the algorithms satisfy \emph{oracle equivalence}, a notion we will define formally later in this paper.
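These claims are easy to test numerically. The sketch below (ours, for illustration only) iterates both pairs of algorithms on the test function $f(x) = \tfrac{q}{2}x^2$, so that $\nabla f(x) = qx$, and checks the two linear maps at every step; the value of $q$ and the initial points are arbitrary choices.

```python
import numpy as np

# Illustration only: verify the invertible map between algorithms 5 and 6,
# and the non-invertible map between algorithms 7 and 8, on f(x) = (q/2)x^2.
q = 1.5
grad = lambda x: q * x

# First pair: invertible map xi = T x with T = [[2, -1], [-1, 1]]
T = np.array([[2.0, -1.0], [-1.0, 1.0]])
x = np.array([0.7, -0.3])             # (x1, x2), arbitrary start
xi = T @ x                            # matched initialization
for _ in range(25):
    x = np.array([2*x[0] - x[1] - 0.1*grad(2*x[0] - x[1]), x[0]])
    xi = np.array([xi[0] - xi[1] - 0.2*grad(xi[0]),
                   xi[1] + 0.1*grad(xi[0])])
    assert np.allclose(xi, T @ x)

# Second pair: non-invertible map xi = -x1 + 2*x2.  The two-state
# realization carries a redundant mode that grows geometrically, so we
# keep the horizon short to avoid floating-point cancellation.
z = np.array([0.7, -0.3])
w = -z[0] + 2*z[1]
for _ in range(12):
    z = np.array([3*z[0] - 2*z[1] + 0.2*grad(-z[0] + 2*z[1]), z[0]])
    w = w - 0.2*grad(w)
    assert np.isclose(-z[0] + 2*z[1], w)
```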
\vspace{-1em} \noindent \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo5} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_1 = \textnormal{prox}_{f}(x^k_3)$} \STATE{$x^{k+1}_2 = \textnormal{prox}_{g}(2x^{k+1}_1 - x^k_3)$} \STATE{$x^{k+1}_3 = x^k_3 + x^{k+1}_2 - x^{k+1}_1$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo6} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$\xi^{k+1}_1 = \textnormal{prox}_{g}(- \xi^k_1 + 2\xi^k_2) + \xi^k_1 - \xi^k_2$} \STATE{$\xi^{k+1}_2 = \textnormal{prox}_{f}(\xi^{k+1}_1)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em} The third example consists of algorithms \ref{algo5} and \ref{algo6}. With suitable initialization, they will generate the same sequence of calls to the proximal operator, ignoring the very first call to one of the oracles. Specifically, algorithm \ref{algo6} is initialized as $\xi^0_1 = x^0_3$, $\xi^0_2 = x^1_1$, and the first call to $\text{prox}_f$ in algorithm \ref{algo5} is ignored. We will say they are equivalent up to a prefix or shift: they satisfy \emph{shift equivalence}. Generalizing from these motivating examples, we will call algorithms equivalent when they generate an identical sequence (e.g., of states or oracle calls) up to some transformations, with suitable initialization. To make our ideas formal, we need a few definitions and some ideas from control theory. We will then revisit those motivating examples and define algorithm equivalence. \section{Preliminaries}\label{preliminary} We let $\mathbb{R}^n$ denote the standard Euclidean space of $n$-dimensional vectors, and use boldface lowercase symbols to denote semi-infinite sequences of vectors, which we index using superscripts. For example, we may write $\mathbf{x} \colonequals (x^0, x^1, \dots)$, where $x^k \in \mathbb{R}^n$ for each $k\geq 0$.
Subscripts index components or subvectors: for example, we may write $x = \sbmat{x_1 \\ x_2} \in \mathbb{R}^n$, where $x_1\in\mathbb{R}^{n_1}$ and $x_2\in\mathbb{R}^{n - n_1}$. \subsection{Optimization}\label{opt} \titleparagraph{Optimization problem, objective, and constraints} An optimization problem is identified by an objective function and a constraint set. The objective may be written as the sum of several functions, and the constraint set may be the intersection of several sets. As an example, in the optimization problem \cref{eqp1}~\cite{MAL-016} \beq \label{eqp1} \ba{ll} \mbox{minimize} & f(x) + g(z) \\ \mbox{subject to} & Ax + Bz = c, \ea \eeq the objective function is $f(x)+g(z)$ and the constraint set is $\{(x,z): Ax + Bz = c\}$. \titleparagraph{Oracles} We assume an oracle model of optimization: we can only access an optimization problem by querying oracles at discrete query points~\cite[\S4]{boyd_vandenberghe_2004}, \cite[\S1]{MAL-050}, \cite[\S1]{nesterov2018lectures}. Oracles might include the gradient or proximal operator of a function, or projection onto a constraint set~\cite[\S6]{doi:10.1137/1.9781611974997}, \cite[\S2]{fenchel1953convex}, \cite[\S1]{OPT-003}. Each query to the oracle returns an output such as the function value, gradient, or proximal operator. For example, the oracles for problem \cref{eqp1} might include the gradients or proximal operators of $f$ and $g$, and projection onto the hyperplane $\{(x,z): Ax + Bz = c\}$. \subsection{Algorithms}\label{algorithm} Detecting equivalence between \emph{any} pair of algorithms is beyond the scope of this paper. Instead, we restrict our attention to equivalence between iterative, linear, time-invariant optimization algorithms. In what follows, we provide some intuition and define each of these terms. Further formalism of these terms will be provided in the next subsection on control theory.
\titleparagraph{Iterative algorithms} Given an optimization problem and an initial point $x^0 \in \mathcal{X}$, an \emph{iterative algorithm} $\mathcal{A}$ generates a sequence of points $\mathbf{x} \colonequals (x^k)_{k \geq 0}$ by repeated application of the map $\mathcal{A}: \mathcal X \to \mathcal X$. (We do not distinguish the algorithm from its associated map.) Hence, $x^{k+1} = \mathcal{A}(x^k)$ for $k \geq 0$. We call $x^k$ the \emph{state} of the algorithm at \emph{time} $k$. We make two important simplifying assumptions when treating algorithms. First, suppose the operator $\mathcal{A}$ calls each different oracle \emph{exactly once}. (We will see how to extend our ideas to more complex algorithms later.) This assumption forbids trivial repetition, such as $\mathcal{A}' \colonequals \mathcal{A} \circ \mathcal{A}$. Second, we consider algorithms that are \emph{time-invariant}. In general, one could envision an algorithm $\mathcal{A}^k$ that changes at each timestep. Such time-varying algorithms are common in practice: for example, gradient-based methods with diminishing stepsizes. We view time-varying algorithms as a scheme for switching between different time-invariant algorithms. Since our aim is to reason about algorithm equivalence, we restrict our attention to time-invariant algorithms. A nice benefit of this restriction is that we can define algorithm equivalence independently of the choice of initial point. The formulation $x^{k+1} = \mathcal{A}(x^k)$ is general enough to include algorithms with multiple timesteps. For example, consider algorithm \ref{algo_i4}: $x_1^{k+1} = x_1^k - \eta F (2 x_1^k - x_1^{k-1})$. If we define the new state $x_2^k \colonequals x_1^{k-1}$ and let $x^k \colonequals \sbmat{x_1^k \\ x_2^k}$, then we may rewrite the algorithm as \begin{equation}\label{eq:algdemo} x^{k+1} = \bmat{x_1^{k+1} \\ x_2^{k+1} } = \bmat{x_1^k - \eta F( 2x_1^k - x_2^k) \\ x_1^k} = \mathcal{A}\left( \bmat{x_1^k\\x_2^k} \right) = \mathcal{A}(x^k).
\end{equation} The algorithm $\mathcal{A}$ contains a combination of oracle calls and state updates. Define $y^k$ and $u^k$ to be the input and output of the oracles called at time $k$, respectively. Now, write three separate equations for the state update, oracle input, and oracle output. Applying this to \eqref{eq:algdemo}, we obtain: \begin{subequations}\label{eq:algdemo0} \begin{align} \bmat{x_1^{k+1} \\ x_2^{k+1}} &= \bmat{ 1 & 0 \\ 1 & 0} \bmat{x_1^k \\ x_2^k} + \bmat{-\eta \\ 0} u^k \hspace{-1cm}&&\hspace{-1cm} \text{(state update)},\\ y^k &= \bmat{2 & -1} \bmat{x_1^k \\ x_2^k} \hspace{-1cm}&&\hspace{-1cm} \text{(oracle input)}, \\ u^k &= F(y^k) \hspace{-1cm}&&\hspace{-1cm} \text{(oracle output)}. \end{align} \end{subequations} \titleparagraph{Oracle sequence} We have defined an algorithm $\mathcal{A}$ as a map $\mathcal X \to \mathcal X$. In optimization, it is also conventional to write an algorithm as a sequence of update equations that are executed sequentially on a computer to implement the map. When this sequence of updates is executed, we may record the sequence of states or the sequence of oracle calls (oracle and input pairs), which we call the oracle sequence. There may be several ways of writing the algorithm as a sequence of updates, which may produce different state sequences or oracle sequences. We are not aware of any practical algorithm for optimization that may be written to produce two different oracle sequences. Hence we will assume for now that the oracle sequence produced by an algorithm is unique. We will revisit this assumption later in the paper (section \ref{charac-shift}) to see how our ideas extend to more complex (not-yet-discovered) algorithms. \titleparagraph{Linear algorithms} The equations \eqref{eq:algdemo0} have the general \emph{linear} form \begin{subequations}\label{eqp2} \begin{align} x^{k+1} & = Ax^k + Bu^k, \label{eqp2a}\\ y^k & = Cx^k + Du^k, \label{eqp2b} \\ u^k & = \phi (y^k) \label{eqp2c}.
\end{align} \end{subequations} We say that a time-invariant algorithm is \emph{linear} if it can be written in the form of \cref{eqp2}, where $x^k$ is the algorithm state and $\phi$ is the set of oracles. In the rest of the paper, unless specifically noted, our discussion is limited to linear algorithms. We will see that linear algorithms are a rich class that includes many commonly used algorithms, such as many accelerated methods, proximal methods, operator splitting methods, and more~\cite{hu2020analysis, doi:10.1137/15M1009597}. The general form~\eqref{eqp2} represents a convenient parameterization of linear algorithms in terms of matrices $(A,B,C,D)$, but it is only a starting point. For example, algorithms \ref{algo_i1}--\ref{algo_i4} have different $(A,B,C,D)$ parameters despite being equivalent algorithms. In the next section, we show how tools from control theory can be brought to bear on these sorts of representations. \subsection{Control theory}\label{control} This subsection provides a brief overview of relevant methods and terminology from control theory. More detail can be found in standard references such as \cite[Ch.~1--3]{antsaklis2006linear} and \cite[Ch.~1,2,5]{williams2007linear}. \titleparagraph{Algorithms as linear systems} Let $\mathbf{u}$ denote the entire sequence of $u^k$ and $\mathbf{y}$ denote the entire sequence of $y^k$. The equations in \cref{eqp2} can be separated into two parts. Equations \cref{eqp2a} and \cref{eqp2b} define a map $\mathbf{H}$ from $\mathbf{u}$ to $\mathbf{y}$ compactly as $\mathbf{y} = \mathbf{H}\mathbf{u}$, while \cref{eqp2c} defines a map $\boldsymbol{\Phi}$ from $\mathbf{y}$ to $\mathbf{u}$ as $\mathbf{u} = \boldsymbol{\Phi}\mathbf{y}$, where $\boldsymbol{\Phi} = \mathrm{diag}\{\phi,\phi,\dots\}$. We can represent these algebraic relations visually via the \emph{block-diagram} shown in \cref{figp1}.
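As a sanity check on the worked example, the one-step state-space form \eqref{eq:algdemo0} can be simulated and compared against the original two-step recursion. The following Python sketch does this; the oracle $F$, the stepsize $\eta$, and the initialization are placeholder choices made purely for illustration.

```python
import numpy as np

# State-space form of the worked example: state x^k = (x_1^k, x_2^k) with x_2^k = x_1^{k-1}.
# The oracle F and the stepsize eta are placeholder choices.
eta = 0.1
A = np.array([[1.0, 0.0], [1.0, 0.0]])
B = np.array([[-eta], [0.0]])
C = np.array([[2.0, -1.0]])
F = np.tanh  # placeholder oracle

# Simulate x^{k+1} = A x^k + B u^k with y^k = C x^k and u^k = F(y^k)
x = np.array([[0.5], [0.5]])  # x_1^0 = x_1^{-1} = 0.5
ss = []
for _ in range(20):
    u = F(C @ x)           # oracle output
    x = A @ x + B @ u      # state update
    ss.append(x[0, 0])

# Simulate the original recursion x_1^{k+1} = x_1^k - eta * F(2 x_1^k - x_1^{k-1})
prev, cur = 0.5, 0.5
orig = []
for _ in range(20):
    prev, cur = cur, cur - eta * F(2 * cur - prev)
    orig.append(cur)

assert np.allclose(ss, orig)  # both formulations generate the same iterates
```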
\begin{figure}[tbhp] \centering \begin{tikzpicture}[>=latex] \node[scale = 0.75][box9] at (0, 0) (algo) {$\mathbf{H}$}; \node[scale = 0.75][box9] at (0, -1.4) (oracle) {$\boldsymbol{\Phi}$}; \node[scale = 0.75][box10] at (1.2, -0.7) (input) {$\mathbf{u}$}; \node[scale = 0.75][box10] at (-1.2, -0.7) (output) {$\mathbf{y}$}; \draw[->] (algo) -| (output); \draw[->] (output) |- (oracle); \draw[->] (oracle) -| (input); \draw[->] (input) |- (algo); \end{tikzpicture} \caption{Block-diagram representation of an algorithm. This is equivalent to the pair of equations $\mathbf{y} = \mathbf{H} \mathbf{u}$ and $\mathbf{u} = \boldsymbol{\Phi} \mathbf{y}$. } \label{figp1} \end{figure} Consider the map $\mathbf{H}$ defined by \cref{eqp2a} and \cref{eqp2b}. For simplicity, we assume that $x^0 = 0$. Eliminating the states $\{x^1, \dots, x^k\}$ from~\eqref{eqp2a} and \cref{eqp2b}, the map $\mathbf{H}$ can be represented as a semi-infinite matrix, \begin{equation}\label{eqp3} \begin{bmatrix} y^0 \\ y^1 \\ y^2 \\ y^3 \\ \vdots \end{bmatrix} = \underbrace{\begin{bmatrix} D & 0 & 0 & 0 &\cdots \\ CB & D & 0 & 0 &\cdots \\ CAB & CB & D & 0 &\cdots \\ C(A)^2B & CAB & CB & D & \cdots \\ \vdots & \ddots & \ddots & \ddots & \ddots \end{bmatrix}}_\mathbf{H} \begin{bmatrix} u^0 \\ u^1 \\ u^2 \\ u^3 \\ \vdots \end{bmatrix}. \end{equation} In control theory, the map $\mathbf{H}$ is regarded as a (discrete-time) \emph{system} that maps a sequence of \emph{inputs} $\mathbf{u}$ to a sequence of \emph{outputs} $\mathbf{y}$. The map $\mathbf{H}$ is linear since it can be represented as a semi-infinite matrix. The matrix representation is lower triangular, which indicates that $\mathbf{H}$ is \emph{causal}: each output $y^k$ depends only on the inputs $u^0,\dots,u^k$. Further, $\mathbf{H}$ is time-invariant because the matrix representation is (block) \emph{Toeplitz}, which means that $\mathbf{H}$ is (block) constant along diagonals from top-left to bottom-right. Thus, $\mathbf{H}$ is a \emph{causal linear time-invariant system}.
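Truncating the semi-infinite matrix in \eqref{eqp3} to finitely many rows and columns makes this concrete. The sketch below (with arbitrary randomly generated matrices and $x^0 = 0$) builds the lower-triangular block-Toeplitz matrix from the entries $D, CB, CAB, \dots$ and checks that multiplying it against an input sequence reproduces the recursively simulated outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, K = 3, 1, 1, 8  # arbitrary small dimensions and horizon
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))

# Entries of the matrix representation: H^0 = D, H^k = C A^{k-1} B for k >= 1
H = [D] + [C @ np.linalg.matrix_power(A, k - 1) @ B for k in range(1, K)]

# K x K lower-triangular block-Toeplitz matrix (truncation of eqp3)
T = np.zeros((K * p, K * m))
for i in range(K):
    for j in range(i + 1):
        T[i*p:(i+1)*p, j*m:(j+1)*m] = H[i - j]

u = rng.standard_normal((K * m, 1))
y_toeplitz = T @ u

# Recursive simulation of the state-space equations with x^0 = 0
x = np.zeros((n, 1))
y_sim = []
for k in range(K):
    uk = u[k*m:(k+1)*m]
    y_sim.append(C @ x + D @ uk)
    x = A @ x + B @ uk
y_sim = np.vstack(y_sim)

assert np.allclose(y_toeplitz, y_sim)
```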
For the rest of this paper, we will work with such systems, referring to them simply as \emph{linear systems}. Combining the maps $\mathbf{H}$ and $\boldsymbol{\Phi}$, a linear algorithm of the form \cref{eqp2} can be regarded as a linear system connected in feedback with a nonlinearity, as shown in \cref{figp1}. At time $k$, $u^k$ is the input and $y^k$ is the output of the system. The nonlinear feedback $\phi$ represents the set of oracles (such as the gradient or subgradient of a convex function) and maps the output $y^k$ to the input $u^k$. \titleparagraph{State-space realization} Reconsider equations \cref{eqp2a} and \cref{eqp2b}. They correspond to the \emph{state-space realization} of system $\mathbf{H}$. In control theory, a state-space realization is characterized by an internal sequence of \emph{states} $\mathbf{x}$ that evolves according to a difference equation with parameters $(A,B,C,D)$: \begin{equation}\label{eqp4} \begin{aligned} x^{k+1} & = Ax^k + Bu^k, \\ y^k & = Cx^k + Du^k, \end{aligned} \qquad \text{or equivalently, } \bmat{x^{k+1} \\ y^k} = L \bmat{ x^k \\ u^k}, \text{ where } L = \bmat{A & B \\ C & D}. \end{equation} Here, $u^k \in \mathbb{R}^m$, $y^k \in \mathbb{R}^p$, and $x^k \in \mathbb{R}^n$. The parameters $(A,B,C,D)$ are matrices of compatible dimensions, so $A\in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{p\times n}$, and $D \in \mathbb{R}^{p\times m}$. The state-space realization corresponding to the system $\mathbf{H}$ can also be characterized by omitting all vectors and writing the block matrix $L$ shown in~\eqref{eqp4} (right), which is the map from $(x^k,u^k)$ to $(x^{k+1},y^k)$. In this paper, we rely on this formalism, representing each algorithm as a linear system with a state-space realization of the form \cref{eqp4}, following~\cite{hu2020analysis, doi:10.1137/15M1009597}.
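The block matrix $L$ packages one iteration of the recursion: stacking $(x^k, u^k)$ and multiplying by $L$ yields $(x^{k+1}, y^k)$. A small numerical check, with arbitrary matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 3, 2, 2  # arbitrary dimensions
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((p, n)), rng.standard_normal((p, m))

# One iteration of (x^k, u^k) -> (x^{k+1}, y^k) as a single block-matrix multiply
L = np.block([[A, B], [C, D]])
x, u = rng.standard_normal((n, 1)), rng.standard_normal((m, 1))
out = L @ np.vstack([x, u])
assert np.allclose(out[:n], A @ x + B @ u)  # state-update rows
assert np.allclose(out[n:], C @ x + D @ u)  # oracle-input rows
```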
The state-space realization $L$ represents the linear part of an algorithm, and the map $\phi$ represents the nonlinear part; we write $\mathcal{A} = (L, \phi)$. In this way, we can unroll \cref{figp1} in time to obtain the block-diagram shown in \cref{figp2}. Each dashed box in \cref{figp2} represents one application of the map $\mathcal{A}$. \begin{figure}[tbhp] \centering \begin{tikzpicture}[>=latex] \node[scale = 0.75][box6] at (0,-0.2) (algo1) {$L$}; \node[scale = 0.75] at (0,0) (ref1) {}; \node[scale = 0.75] at (0,-0.4) (ref2) {}; \node[scale = 0.75][box3, left of = ref1 , node distance = 8em] (state0) {$x^{k-1}$}; \node[scale = 0.75][left of = ref1, node distance = 13em] (init1) {\ldots}; \node[scale = 0.75][box3, right of = ref1 , node distance = 8em] (state1) {$x^{k}$}; \node[scale = 0.75][box] at (0, -1) (oracle1) {$\phi$}; \node[scale = 0.75][box3, right of = oracle1, node distance = 6em] (output1) {$y^{k-1}$}; \node[scale = 0.75, left of= oracle1, node distance = 6em][box3] (input1) {$u^{k-1}$}; \node[scale = 0.75][box6, right of = algo1, node distance = 16em] (algo2) {$L$}; \node[scale = 0.75][box8] at (0, -0.55) (a1) {}; \draw[->] (init1) -- (state0); \draw[->] (state0) -- (ref1-|algo1.west); \draw[->] (state1-|algo1.east) -- (state1); \draw[->] (input1) |- (ref2-|algo1.west); \draw[->] (output1) -- (oracle1); \draw[->] (oracle1) -- (input1); \draw[->] (ref2-|algo1.east) -| (output1) ; \draw[->] (state1) -- (state1-|algo2.west); \node[scale = 0.75][box3, right of = state1 , node distance = 16em] (state2) {$x^{k+1}$}; \node[scale = 0.75][box, right of = oracle1, node distance = 16em] (oracle2) {$\phi$}; \node[scale = 0.75][box3, right of = oracle2, node distance = 6em] (output2) {$y^{k}$}; \node[scale = 0.75, left of= oracle2, node distance = 6em][box3] (input2) {$u^{k}$}; \node[scale = 0.75][box6, right of = algo2, node distance = 16em] (algo3) {$L$}; \node[scale = 0.75][box8, right of = a1, node distance = 16em] (a2) {}; \draw[->]
(state1-|algo2.east) -- (state2); \draw[->] (input2) |- (ref2-|algo2.west); \draw[->] (output2) -- (oracle2); \draw[->] (oracle2) -- (input2); \draw[->] (ref2-|algo2.east) -| (output2); \draw[->] (state2) -- (state1-|algo3.west); \node[scale = 0.75][right of = state2, node distance = 16em] (end1) {\ldots}; \node[scale = 0.75][box, right of = oracle2, node distance = 16em] (oracle3) {$\phi$}; \node[scale = 0.75][box3, right of = oracle3, node distance = 6em] (output3) {$y^{k+1}$}; \node[scale = 0.75, left of= oracle3, node distance = 6em][box3] (input3) {$u^{k+1}$}; \node[scale = 0.75][box8, right of = a2, node distance = 16em] (a3) {}; \draw[->] (state1-|algo3.east) -- (end1); \draw[->] (input3) |- (ref2-|algo3.west); \draw[->] (output3) -- (oracle3); \draw[->] (oracle3) -- (input3); \draw[->] (ref2-|algo3.east) -| (output3); \end{tikzpicture} \caption{Unrolled-in-time block-diagram representation of an algorithm.} \label{figp2} \end{figure} \titleparagraph{Impulse response and transfer function} Unrolling the state-space equations \cref{eqp4} without the assumption that $x^0 = 0$, we obtain \begin{equation}\label{eqp6} y^k= C(A)^kx^0+ \sum_{j=0}^{k-1}C(A)^{k-(j+1)}Bu^j + Du^k. \end{equation} The output $y^k$ is the sum of $C(A)^kx^0$, which is due to the initial condition $x^0$, and $\sum_{j=0}^{k-1}C(A)^{k-(j+1)}Bu^j + Du^k$, which is due to the inputs $\{u^0,\dots,u^k\}$. The compact form $\mathbf{y} = \mathbf{H}\mathbf{u}$ and its matrix representation~\eqref{eqp3} omit the first term that depends on $x^0$. These representations are formally equivalent to the state-space model only when the state is initialized at $x^0 = 0$. However, linearity of $\mathbf{H}$ allows the two contributions to be studied separately: \[ (\text{total response}) = \underbrace{(\text{zero input response})}_{\text{set $u^k=0$ for $k \geq 0$}} \,+\, \underbrace{(\text{zero state response})}_{\text{set $x^0=0$}}.
\] This decomposition is analogous to writing the general solution to a linear differential (or difference) equation as the sum of a homogeneous solution (due to initial conditions only) and a particular solution (due to the non-homogeneous terms only). We will characterize linear systems by their input-output map. The input-output map depends only on the zero state response, which allows us to avoid details about initialization. For simplicity, we denote the entries in the matrix representation of $\mathbf{H}$ in \eqref{eqp3} as \begin{equation}\label{eqp7} H^k = \begin{cases} D & k = 0 \\ C(A)^{k-1}B & k \geq 1 \end{cases}. \end{equation} To study the zero state response, recall from~\eqref{eqp3} that \begin{equation}\label{eqp8} y^k = H^k u^0 + H^{k-1} u^1 + \cdots + H^1 u^{k-1} + H^0u^k. \end{equation} The sequence $(H^k)_{k\geq 0}$ is called the \emph{impulse response} of $\mathbf{H}$, because it is the output produced by the impulsive input $u^0=1$ and $u^j=0$ for $j \geq 1$. A convenient way to represent $\mathbf{H}$ is via a \emph{transfer function}. To this end, we can represent $\mathbf{y}$ and $\mathbf{u}$ as generating functions in the variable $z^{-1}$. In these terms, the relation \eqref{eqp8} becomes: \begin{equation}\label{eqp9} \underbrace{\left( y^0 + y^1 z^{-1} + y^{2} z^{-2} + \cdots \right)}_{\hat y(z)} = \underbrace{\left( H^0 + H^1 z^{-1} + H^2 z^{-2} + \cdots \right)}_{\hat H(z)} \underbrace{\left( u^0 + u^1 z^{-1} + u^{2} z^{-2} + \cdots \right)}_{\hat u(z)}. \end{equation} We can recover \cref{eqp8} by expanding the multiplication in \cref{eqp9} and grouping terms with the same power of $z^{-1}$. So when written as generating functions, the output is related to the input via multiplication. The functions $\hat y$ and $\hat u$ are the $z$-transforms of the sequences $\mathbf{y}$ and $\mathbf{u}$, respectively, and $\hat H$ is called the \emph{transfer function}.
If $p\geq 2$ or $m\geq 2$ (the $H^k$ are matrices), then $\hat H$ is called the \emph{transfer matrix}. Substituting~\eqref{eqp7} into the definition of the transfer function, we can write a compact form for the formal power series $\hat H$, which converges on some appropriate set: \begin{equation}\label{eqp10} \hat H(z) = \left[\begin{array}{c|c} A & B\\ \hline C & D \end{array}\right] = D+\sum_{k=1}^{\infty}C(A)^{k-1}Bz^{-k} = C(zI - A)^{-1}B +D. \end{equation} The transfer function $\hat H(z) = C(zI - A)^{-1}B + D$ can be directly computed from the state-space matrices $(A,B,C,D)$. Moreover, $\hat H(z)$ is a matrix whose entries are rational functions of $z$. Hence the transfer function provides a computationally efficient way to uniquely characterize the input-output map of a system. We will use the block notation with solid lines to indicate the transfer function, as in \cref{eqp10}. \titleparagraph{Linear transformations of state-space realizations} Consider a linear transformation of the states $x^k$ in~\cref{eqp4}. Specifically, suppose $Q \in \mathbb{R}^{n\times n}$ is invertible, and define $\tilde{x}^k = Qx^k$ for each $k$. The new state-space realization in terms of the new variables $\tilde x^k$ is \begin{equation}\label{eqp11} \begin{aligned} \tilde x^{k+1} & = QAQ^{-1} \tilde x^k + QB u^k \\ y^k & = CQ^{-1} \tilde x^k + Du^k \end{aligned}, \qquad \tilde L = \bmat{ QAQ^{-1} & QB \\ CQ^{-1} & D }. \end{equation} It is straightforward to check that $\mathbf{H}$ and the transformed system $\tilde{\mathbf{H}}$ defined by \cref{eqp11} have the same transfer function. Therefore, whether we apply the linear system $\mathbf{H}$ or $\tilde{\mathbf{H}}$, the same input sequence $\mathbf{u}$ will produce the same output sequence $\mathbf{y}$, although the respective states $x^k$ and $\tilde x^k$ will generally be different. So although the state-space parameters $(A,B,C,D)$ depend on the coordinates used to represent the states $x^k$, the transfer function is invariant under linear transformations.
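Both facts, the closed form \eqref{eqp10} and the invariance of the transfer function under the state transformation \eqref{eqp11}, are easy to verify numerically. A sketch with arbitrary randomly generated matrices (the scaling of $A$ and the evaluation point $z$ are chosen so that the truncated power series converges):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 3, 2, 2
A = 0.4 * rng.standard_normal((n, n))  # scaled so the series converges at z = 2
B, C = rng.standard_normal((n, m)), rng.standard_normal((p, n))
D = rng.standard_normal((p, m))

def tf(A, B, C, D, z):
    """Transfer matrix C (zI - A)^{-1} B + D evaluated at the point z."""
    return C @ np.linalg.inv(z * np.eye(A.shape[0]) - A) @ B + D

z = 2.0
# Closed form vs truncated power series D + sum_k C A^{k-1} B z^{-k}
series = D + sum(C @ np.linalg.matrix_power(A, k - 1) @ B * z**-k
                 for k in range(1, 200))
assert np.allclose(tf(A, B, C, D, z), series)

# Invariance under the state transformation x -> Q x
Q = rng.standard_normal((n, n)) + 3 * np.eye(n)  # generically invertible
At, Bt, Ct = Q @ A @ np.linalg.inv(Q), Q @ B, C @ np.linalg.inv(Q)
assert np.allclose(tf(A, B, C, D, z), tf(At, Bt, Ct, D, z))
```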
This invariance is the key to understanding when two optimization algorithms are the same, even if they look different as written. For example, this idea alone suffices to show that algorithms \ref{algo1} and \ref{algo2} are equivalent. \titleparagraph{Minimal realizations} Every set of appropriately-sized state-space parameters $(A,B,C,D)$ produces a transfer matrix whose entries are rational functions of $z$. Closer inspection of the formula $\hat H(z) = C(zI-A)^{-1}B + D$ reveals that $\hat H(z) \to D$ as $z\to\infty$. Therefore, the rational entries of $\hat H(z)$ must be \emph{proper}: the degree of the numerator cannot exceed the degree of the denominator. Moreover, the degree of the common denominator of all entries of $\hat H(z)$ cannot exceed $n$ (the size of the matrix $A$). The converse is also true: given any transfer matrix $\hat H(z)$ whose entries are proper with common denominator degree $n$, there exists a realization $(A,B,C,D)$, with $A$ of size at most $n$, whose transfer function is $\hat H(z)$. Any realization of $\hat H(z)$ for which the size of $A$ is as small as possible is called \emph{minimal}. All minimal realizations of $\hat H(z)$ are related by a state transformation with a suitably chosen invertible matrix $Q$, as in~\eqref{eqp11}. Realizations can be non-minimal when the transfer function has factors that cancel from both the numerator and denominator. For example, the following two realizations have the same transfer function: \begin{align*} \hat H(z) = & \left[\begin{array}{c|c} 1 & 1 \\ \hline 1 & 0 \end{array}\right] = 1 \cdot (z-1)^{-1} \cdot 1 = \frac{1}{z-1},\\ \hat H(z) = &\left[\begin{array}{cc|c} 1 & 2 & 1 \\ 0 & 3 & 0 \\ \hline 1 & 6 & 0 \end{array}\right] = \begin{bmatrix} 1 & 6\end{bmatrix} \begin{bmatrix}z-1 & -2 \\ 0 & z-3\end{bmatrix}^{-1}\begin{bmatrix}1\\ 0\end{bmatrix} = \frac{z-3}{z^2-4z+3} = \frac{1}{z-1}.
\end{align*} We can detect when two optimization algorithms are equivalent, even when one has additional (redundant) state variables, by computing their minimal realizations. This strategy shows that algorithms \ref{algo3} and \ref{algo4} are equivalent. \titleparagraph{Inverse of state-space realization} Consider a state-space system $\mathbf{H}$ with realization \cref{eqp4} for which $m=p$ (the input and output dimensions are the same). Is it possible to find a state-space system $\mathbf{H}^{-1}$ that maps $\mathbf{y}$ back to $\mathbf{u}$? It turns out this is possible if and only if $D$ is invertible. In this case, the transfer function of $\mathbf{H}^{-1}$ is $\hat H^{-1}(z)$, a matrix whose entries are rational functions of $z$. We write the state-space realization of the inverse system $\mathbf{H}^{-1}$ as \[ \hat H^{-1}(z) = \left[\begin{array}{c|c} A & B \\ \hline C & D \end{array}\right]^{-1} = \left[\begin{array}{c|c} A-BD^{-1}C & BD^{-1} \\ \hline -D^{-1}C & D^{-1} \end{array}\right]. \] This explicit realization can be obtained by applying the matrix inversion lemma to \eqref{eqp10}. We can extend this idea to partial inverses of linear systems. Suppose the input sequence $\mathbf{u}$ is partitioned as \[ \mathbf{u} \colonequals (u^0, u^1, \dots) = \left( \bmat{u^0_1 \\ u^0_2}, \bmat{u^1_1 \\ u^1_2}, \dots \right) = \bmat{\mathbf{u}_1 \\ \mathbf{u}_2},\quad\text{where } u^k_1 \in \mathbb{R}^{m_1}, u^k_2 \in \mathbb{R}^{m_2}\text{ for all }k\geq 0 \] and similarly for $\mathbf{y}$. The matrix $D$ and transfer matrix $\hat H(z)$ can also be partitioned conformally as \begin{equation}\label{eqp12} D = \begin{bmatrix} D_{11} & D_{12} \\ D_{21} & D_{22} \end{bmatrix} \quad\text{and}\quad \hat H(z) = \begin{bmatrix} \hat H_{11}(z) & \hat H_{12}(z) \\ \hat H_{21}(z) & \hat H_{22}(z) \end{bmatrix},\quad\text{where }D_{ij} \in \mathbb{R}^{p_i \times m_j}\text{ and similarly for }\hat H(z).
\end{equation} If $D_{11}$ is invertible, we can partially invert $\mathbf{H}$ with respect to $\mathbf{u}_1$ and $\mathbf{y}_1$ to form a new system $\mathbf{H}'$ that maps $(\mathbf{y}_1,\mathbf{u}_2)\mapsto (\mathbf{u}_1,\mathbf{y}_2)$. The transfer function $\hat H'(z)$ of the new system $\mathbf{H}'$ satisfies \begin{equation}\label{eqp13} \hat H'(z) = \left[\begin{array}{c c} \hat H_{11}^{-1}(z) & -\hat H_{11}^{-1}(z)\hat H_{12}(z) \\ \hat H_{21}(z)\hat H_{11}^{-1}(z) & \hat H_{22}(z) - \hat H_{21}(z)\hat H_{11}^{-1}(z)\hat H_{12}(z) \end{array}\right]. \end{equation} A detailed proof of \cref{eqp13} is presented in \cref{apd1}. Note that if $D_{22}$ is invertible, we can perform a similar partial inversion with respect to the second component. When an optimization algorithm is related to another by conjugation of one of the function oracles, their transfer functions are related by (possibly partial) inversion. \section{Algorithm equivalence}\label{equivalence} We are now ready to revisit the motivating examples and formally define algorithm equivalence. \subsection{Oracle equivalence}\label{oracle-equ}\ In the first motivating example, the algorithms have the same number of states, and the state sequences are equivalent up to an invertible linear transformation. We call these algorithms \emph{state-equivalent}. In the second motivating example, the state sequence of algorithm \ref{algo3} can be transformed into the state sequence of algorithm \ref{algo4} with a linear transformation. However, unlike the first motivating example, the linear transformation is not invertible; indeed, algorithm \ref{algo4} uses fewer state variables than algorithm \ref{algo3}. Instead, recall that the sequences of calls to the gradient oracle are identical for algorithms \ref{algo3} and \ref{algo4}. Hence these algorithms are \emph{oracle-equivalent}.
\begin{definition}\label{def1} Two algorithms are oracle-equivalent on a set of optimization problems if, for any problem in the set and for any initialization for one algorithm, there exists an initialization for the other such that the two algorithms generate the same oracle sequence. \end{definition} Notice that if the oracle sequences (that is, the oracles and their arguments $y^k$) are the same, then the oracles produce the same inputs $u^k$ for the linear systems of each algorithm. Hence, as shown in \cref{fig5}, oracle-equivalent algorithms have matching input $\mathbf{u}$ and output $\mathbf{y}$ sequences. The solid double-sided arrows indicate that the sequences $y^k$ and $\tilde y^k$ are identical, as are the sequences $u^k$ and $\tilde u^k$. \begin{figure}[tbhp] \centering \begin{tikzpicture}[>=latex] \node[scale = 0.75][box6] at (0,1.9) (algo1) {$L$}; \node[scale = 0.75] at (0,1.7) (ref2) {}; \node[scale = 0.75] at (0,2.1) (ref1) {}; \node[scale = 0.75][box3, left of = ref1 , node distance = 8em] (state0) {$x^{k-1}$}; \node[scale = 0.75][left of = ref1, node distance = 13em] (init1) {\ldots}; \node[scale = 0.75][box3, right of = ref1, node distance = 8em] (state1) {$x^{k}$}; \node[scale = 0.75][box] at (0, 1.1) (oracle1) {$\phi$}; \node[scale = 0.75][box3, right of = oracle1, node distance = 6em] (output1) {$y^{k-1}$}; \node[scale = 0.75, left of= oracle1, node distance = 6em][box3] (input1) {$u^{k-1}$}; \node[scale = 0.75][box6, right of = algo1, node distance = 16em] (algo2) {$L$}; \draw[->] (init1) -- (state0); \draw[->] (state0) -- (ref1-|algo1.west); \draw[->] (state1-|algo1.east) -- (state1); \draw[->] (input1) |- (ref2-|algo1.west); \draw[->] (output1) -- (oracle1); \draw[->] (oracle1) -- (input1); \draw[->] (ref2-|algo1.east) -| (output1); \draw[->] (state1) -- (state1-|algo2.west); \node[scale = 0.75][box3, right of = state1, node distance = 16em] (state2) {$x^{k+1}$}; \node[scale = 0.75][box, right of = oracle1, node distance = 16em]
(oracle2) {$\phi$}; \node[scale = 0.75][box3, right of = oracle2, node distance = 6em] (output2) {$y^{k}$}; \node[scale = 0.75, left of= oracle2, node distance = 6em][box3] (input2) {$u^{k}$}; \node[scale = 0.75][box6, right of = algo2, node distance = 16em] (algo3) {$L$}; \draw[->] (state1-|algo2.east) -- (state2); \draw[->] (input2) |- (ref2-|algo2.west); \draw[->] (output2) -- (oracle2); \draw[->] (oracle2) -- (input2); \draw[->] (ref2-|algo2.east) -| (output2); \draw[->] (state2) -- (state1-|algo3.west); \node[scale = 0.75][right of = state2 , node distance = 16em] (end1) {\ldots}; \node[scale = 0.75][box, right of = oracle2, node distance = 16em] (oracle3) {$\phi$}; \node[scale = 0.75][box3, right of = oracle3, node distance = 6em] (output3) {$y^{k+1}$}; \node[scale = 0.75, left of= oracle3, node distance = 6em][box3] (input3) {$u^{k+1}$}; \draw[->] (state1-|algo3.east) -- (end1); \draw[->] (input3) |- (ref2-|algo3.west); \draw[->] (output3) -- (oracle3); \draw[->] (oracle3) -- (input3); \draw[->] (ref2-|algo3.east) -| (output3); \node[scale = 0.75][box6] at (0,-0.8) (algo10) {$\tilde{L}$}; \node[scale = 0.75] at (0,-1) (ref10) {}; \node[scale = 0.75] at (0,-0.6) (ref20) {}; \node[scale = 0.75][box3, left of = ref10, node distance = 8em] (state00) {$\tilde{x}^{k-1}$}; \node[scale = 0.75][left of = ref10, node distance = 13em] (init10) {\ldots}; \node[scale = 0.75][box3, right of = ref10, node distance = 8em] (state10) {$\tilde{x}^{k}$}; \node[scale = 0.75][box] at (0, 0) (oracle10) {$\tilde{\phi}$}; \node[scale = 0.75][box3, right of = oracle10, node distance = 6em] (output10) {$\tilde{y}^{k-1}$}; \node[scale = 0.75, left of= oracle10, node distance = 6em][box3] (input10) {$\tilde{u}^{k-1}$}; \node[scale = 0.75][box6, right of = algo10, node distance = 16em] (algo20) {$\tilde{L}$}; \draw[->] (init10) -- (state00); \draw[->] (state00) -- (ref10-|algo10.west); \draw[->] (state10-|algo10.east) -- (state10); \draw[->] (input10) |- (ref20-|algo10.west); \draw[->]
(output10) -- (oracle10); \draw[->] (oracle10) -- (input10); \draw[->] (ref20-|algo10.east) -| (output10) ; \draw[->] (state10) -- (state10-|algo20.west); \node[scale = 0.75][box3, right of = state10 , node distance = 16em] (state20) {$\tilde{x}^{k+1}$}; \node[scale = 0.75][box, right of = oracle10, node distance = 16em] (oracle20) {$\tilde{\phi}$}; \node[scale = 0.75][box3, right of = oracle20, node distance = 6em] (output20) {$\tilde{y}^{k}$}; \node[scale = 0.75, left of= oracle20, node distance = 6em][box3] (input20) {$\tilde{u}^{k}$}; \node[scale = 0.75][box6, right of = algo20, node distance = 16em] (algo30) {$\tilde{L}$}; \draw[->] (state10-|algo20.east) -- (state20); \draw[->] (input20) |- (ref20-|algo20.west); \draw[->] (output20) -- (oracle20); \draw[->] (oracle20) -- (input20); \draw[->] (ref20-|algo20.east) -| (output20); \draw[->] (state20) -- (state10-|algo30.west); \node[scale = 0.75][right of = state20, node distance = 16em] (end10) {\ldots}; \node[scale = 0.75][box, right of = oracle20, node distance = 16em] (oracle30) {$\tilde{\phi}$}; \node[scale = 0.75][box3, right of = oracle30, node distance = 6em] (output30) {$\tilde{y}^{k+1}$}; \node[scale = 0.75, left of= oracle30, node distance = 6em][box3] (input30) {$\tilde{u}^{k+1}$}; \draw[->] (state10-|algo30.east) -- (end10); \draw[->] (input30) |- (ref20-|algo30.west); \draw[->] (output30) -- (oracle30); \draw[->] (oracle30) -- (input30); \draw[->] (ref20-|algo30.east) -| (output30); \draw[-{Straight Barb[left]}] ($(output1.south) + (0.02,0)$) -- ($(output10.north) + (0.02,0)$); \draw[-{Straight Barb[left]}] ($(output10.north) + (-0.02,0)$) -- ($(output1.south) + (-0.02,0)$); \draw[-{Straight Barb[left]}] ($(output2.south) + (0.02,0)$) -- ($(output20.north) + (0.02,0)$); \draw[-{Straight Barb[left]}] ($(output20.north) + (-0.02,0)$) -- ($(output2.south) + (-0.02,0)$); \draw[-{Straight Barb[left]}] ($(output3.south) + (0.02,0)$) -- ($(output30.north) + (0.02,0)$); \draw[-{Straight Barb[left]}] 
($(output30.north) + (-0.02,0)$) -- ($(output3.south) + (-0.02,0)$); \draw[-{Straight Barb[left]}] ($(input1.south) + (0.02,0)$) -- ($(input10.north) + (0.02,0)$); \draw[-{Straight Barb[left]}] ($(input10.north) + (-0.02,0)$) -- ($(input1.south) + (-0.02,0)$); \draw[-{Straight Barb[left]}] ($(input2.south) + (0.02,0)$) -- ($(input20.north) + (0.02,0)$); \draw[-{Straight Barb[left]}] ($(input20.north) + (-0.02,0)$) -- ($(input2.south) + (-0.02,0)$); \draw[-{Straight Barb[left]}] ($(input3.south) + (0.02,0)$) -- ($(input30.north) + (0.02,0)$); \draw[-{Straight Barb[left]}] ($(input30.north) + (-0.02,0)$) -- ($(input3.south) + (-0.02,0)$); \end{tikzpicture} \caption{Unrolled block-diagram representation of oracle equivalence.} \label{fig5} \end{figure} Further, since oracle-equivalent algorithms have identical input and output sequences, many analytical properties of interest, particularly those pertaining to algorithm convergence or robustness, are preserved. For example, suppose the target problem is to minimize $f(x)$ with $x\in \mathbb{R}^n$, with solution $x^\star$ and corresponding objective value $f(x^\star)$. Further suppose $f$ is convex and differentiable with oracle $\nabla f$. If two algorithms are oracle-equivalent, the sequences of gradient norms $\left \| \nabla f(x) \right \|$, distances to the solution $\left \| x - x^\star \right \|$, and objective value gaps $| f(x) - f(x^\star)|$ evolve identically, so the two algorithms have the same worst-case convergence, and so on. Moreover, even if the oracle is noisy (e.g., suffers from additive or multiplicative noise, or even adversarial noise), from the point of view of the oracle, the algorithms are indistinguishable and any analytical property that involves only the oracle sequence will be the same.
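As a concrete illustration of Definition \ref{def1}, the sketch below runs gradient descent on a toy quadratic in two different state coordinates related by an invertible $Q$, with matched initializations, and confirms that the resulting oracle sequences coincide. The quadratic, the stepsize, and $Q$ are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, eta, K = 2, 0.1, 15
P = np.array([[2.0, 0.3], [0.3, 1.0]])  # toy quadratic f(y) = y'Py/2 + q'y
q = rng.standard_normal((n, 1))
grad = lambda y: P @ y + q              # the gradient oracle

# Realization 1: plain gradient descent, y^k = x^k
A1, B1, C1 = np.eye(n), -eta * np.eye(n), np.eye(n)
# Realization 2: the same algorithm in transformed coordinates x~ = Q x
Q = np.array([[1.0, 2.0], [0.0, 1.0]])
A2, B2, C2 = Q @ A1 @ np.linalg.inv(Q), Q @ B1, C1 @ np.linalg.inv(Q)

def oracle_seq(A, B, C, x0, K):
    """Record the oracle arguments y^0, ..., y^{K-1}."""
    x, ys = x0, []
    for _ in range(K):
        y = C @ x
        ys.append(y)
        x = A @ x + B @ grad(y)
    return np.hstack(ys)

x0 = rng.standard_normal((n, 1))
seq1 = oracle_seq(A1, B1, C1, x0, K)
seq2 = oracle_seq(A2, B2, C2, Q @ x0, K)  # matched initialization
assert np.allclose(seq1, seq2)            # identical oracle sequences
```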
\subsection{Shift equivalence}\label{shift-equ}\ \begin{figure}[tbhp] \centering \begin{tikzpicture}[>=latex] \node[scale = 0.75][box6] at (0.35,1.9) (algo1) {$L$}; \node[scale = 0.75] at (0.35,1.7) (ref2) {}; \node[scale = 0.75] at (0.35,2.1) (ref1) {}; \node[scale = 0.75][box2, left of = ref1 , node distance = 10em] (state0) {$x^{k-1}_1, x^{k-1}_2, x^{k-1}_3$}; \node[scale = 0.75][left of = ref1, node distance = 18em] (init1) {\ldots}; \node[scale = 0.75][box2, right of = ref1, node distance = 10em] (state1) {$x^{k}_1,\quad x^{k}_2,\quad x^{k}_3$}; \node[scale = 0.75][box] at (0.35, 1.1) (oracle1) {$\phi$}; \node[scale = 0.75][box4, right of = oracle1, node distance = 6.5em] (output1) {$y^{k-1}_1, y^{k-1}_2$}; \node[scale = 0.75, left of= oracle1, node distance = 6.5em][box4] (input1) {$u^{k-1}_1, u^{k-1}_2$}; \node[scale = 0.75][box6, right of = algo1, node distance = 20em] (algo2) {$L$}; \draw[->] (init1) -- (state0); \draw[->] (state0) -- (ref1-|algo1.west); \draw[->] (state1-|algo1.east) -- (state1); \draw[->] (input1) |- (ref2-|algo1.west); \draw[->] (output1) -- (oracle1); \draw[->] (oracle1) -- (input1); \draw[->] (ref2-|algo1.east) -| (output1); \draw[->] (state1) -- (state1-|algo2.west); \node[scale = 0.75][box2, right of = state1, node distance = 20em] (state2) {$x^{k+1}_1, x^{k+1}_2, x^{k+1}_3$}; \node[scale = 0.75][box, right of = oracle1, node distance = 20em] (oracle2) {$\phi$}; \node[scale = 0.75][box4, right of = oracle2, node distance = 6.5em] (output2) {$y^{k}_1, \quad y^k_2$}; \node[scale = 0.75, left of= oracle2, node distance = 6.5em][box4] (input2) {$u^{k}_1, \quad u^k_2$}; \draw[->] (state1-|algo2.east) -- (state2); \draw[->] (input2) |- (ref2-|algo2.west); \draw[->] (output2) -- (oracle2); \draw[->] (oracle2) -- (input2); \draw[->] (ref2-|algo2.east) -| (output2); \node[scale = 0.75][right of = state2, node distance = 8em] (end1) {\ldots}; \draw[->] (state2) -- (end1); \node[scale = 0.75][box6] at (-0.35,-0.8) (algo10) {$\tilde{L}$}; 
\node[scale = 0.75] at (-0.35,-1) (ref10) {}; \node[scale = 0.75] at (-0.35,-0.6) (ref20) {}; \node[scale = 0.75][box2, left of = ref10, node distance = 10em] (state00) {$\tilde{x}^{k-1}_1, \tilde{x}^{k-1}_2, \tilde{x}^{k-1}_3$}; \node[scale = 0.75][left of = ref10, node distance = 18em] (init10) {\ldots}; \node[scale = 0.75][box2, right of = ref10, node distance = 10em] (state10) {$\tilde{x}^{k}_1,\quad \tilde{x}^{k}_2,\quad \tilde{x}^{k}_3$}; \node[scale = 0.75][box] at (-0.35, 0) (oracle10) {$\tilde{\phi}$}; \node[scale = 0.75][box4, right of = oracle10, node distance = 6.5em] (output10) {$\tilde{y}^{k-1}_1, \tilde{y}^{k-1}_2$}; \node[scale = 0.75, left of= oracle10, node distance = 6.5em][box4] (input10) {$\tilde{u}^{k-1}_1, \tilde{u}^{k-1}_2$}; \node[scale = 0.75][box6, right of = algo10, node distance = 20em] (algo20) {$\tilde{L}$}; \draw[->] (init10) -- (state00); \draw[->] (state00) -- (ref10-|algo10.west); \draw[->] (state10-|algo10.east) -- (state10); \draw[->] (input10) |- (ref20-|algo10.west); \draw[->] (output10) -- (oracle10); \draw[->] (oracle10) -- (input10); \draw[->] (ref20-|algo10.east) -| (output10) ; \draw[->] (state10) -- (state10-|algo20.west); \node[scale = 0.75][box2, right of = state10 , node distance = 20em] (state20) {$\tilde{x}^{k+1}_1, \tilde{x}^{k+1}_2, \tilde{x}^{k+1}_3$}; \node[scale = 0.75][box, right of = oracle10, node distance = 20em] (oracle20) {$\tilde{\phi}$}; \node[scale = 0.75][box4, right of = oracle20, node distance = 6.5em] (output20) {$\tilde{y}^{k}_1, \quad \tilde{y}^{k}_2$}; \node[scale = 0.75, left of= oracle20, node distance = 6.5em][box4] (input20) {$\tilde{u}^{k}_1, \quad \tilde{u}^{k}_2$}; \node[scale = 0.75][box6, right of = algo20, node distance = 20em] (algo30) {$\tilde{L}$}; \draw[->] (state10-|algo20.east) -- (state20); \draw[->] (input20) |- (ref20-|algo20.west); \draw[->] (output20) -- (oracle20); \draw[->] (oracle20) -- (input20); \draw[->] (ref20-|algo20.east) -| (output20); \node[scale = 0.75][right of = state20, node distance = 8em] (end10) {\ldots}; \draw[->] (state20) -- (end10);
\draw[-{Straight Barb[left]}] ($(output1.south) + (-0.33,0)$) -- ($(output10.north) + (0.37,0)$); \draw[-{Straight Barb[left]}] ($(output10.north) + (0.33,0)$) -- ($(output1.south) + (-0.37,0)$); \draw[-{Straight Barb[left]}] ($(output1.south) + (0.39,0)$) -- ($(output20.north) + (-0.31,+0.03)$); \draw[-{Straight Barb[left]}] ($(output20.north) + (-0.39,0)$) -- ($(output1.south) + (0.31,-0.03)$); \draw[-{Straight Barb[left]}] ($(output2.south) + (-0.33,0)$) -- ($(output20.north) + (0.37,0)$); \draw[-{Straight Barb[left]}] ($(output20.north) + (0.33,0)$) -- ($(output2.south) + (-0.37,0)$); \draw[-{Straight Barb[left]}] ($(input1.south) + (-0.33,0)$) -- ($(input10.north) + (0.37,0)$); \draw[-{Straight Barb[left]}] ($(input10.north) + (0.33,0)$) -- ($(input1.south) + (-0.37,0)$); \draw[-{Straight Barb[left]}] ($(input1.south) + (0.39,0)$) -- ($(input20.north) + (-0.31,+0.03)$); \draw[-{Straight Barb[left]}] ($(input20.north) + (-0.39,0)$) -- ($(input1.south) + (0.31,-0.03)$); \draw[-{Straight Barb[left]}] ($(input2.south) + (-0.33,0)$) -- ($(input20.north) + (0.37,0)$); \draw[-{Straight Barb[left]}] ($(input20.north) + (0.33,0)$) -- ($(input2.south) + (-0.37,0)$); \end{tikzpicture} \caption{Unrolled block-diagram representation of shift equivalence.} \label{fig6} \end{figure} Now consider algorithms \ref{algo5} and \ref{algo6} from the third motivating example. They are not oracle-equivalent. However, their input and output sequences become identical after shifting algorithm \ref{algo5} one step backward: these algorithms are \emph{shift-equivalent}. \begin{definition}\label{def2} Two algorithms are shift-equivalent on a set of problems if, for any problem in the set and for any initialization for one algorithm, there exists an initialization for the other such that the oracle sequences match up to a prefix. \end{definition} Shift equivalence can also be interpreted as oracle equivalence up to a shift. We depict shift equivalence graphically in \cref{fig6}. 
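A degenerate but concrete illustration of Definition \ref{def2}: running one copy of an algorithm, then running a second copy initialized at the first copy's state after one step, produces oracle sequences that agree once the first copy's leading entry is dropped. The objective and stepsize below are placeholder choices.

```python
import numpy as np

eta, K = 0.2, 12
grad = lambda y: 2.0 * (y - 3.0)  # oracle for the placeholder f(y) = (y - 3)^2

def oracle_seq(x0, K):
    """Gradient descent with oracle input y^k = x^k; record y^0, ..., y^{K-1}."""
    x, ys = x0, []
    for _ in range(K):
        ys.append(x)
        x = x - eta * grad(x)
    return ys

seq_a = oracle_seq(1.0, K)
seq_b = oracle_seq(seq_a[1], K)            # start where the first run was at k = 1
assert np.allclose(seq_a[1:], seq_b[:-1])  # sequences match up to a prefix
```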
Conversely, oracle equivalence can be regarded as a special case of shift equivalence, where the oracle sequences match without any shift.
\subsection{Discussion}\label{discussion-equ}\ \titleparagraph{One algorithm, many interpretations} Is it useful to have many different forms of an algorithm, if all the forms are (oracle- or shift-)equivalent? Yes: different rewritings of one algorithm often yield different (``physical'') intuition. For example, algorithm \ref{algo_i1} uses the current loss function for extrapolation~\cite{vasilyev2010extragradient}, while algorithm \ref{algo_i2} seems to extrapolate from the previous loss function~\cite{censor2011subgradient}. Equivalent algorithms can differ in memory usage, computational efficiency, or numerical stability. For example, implementations of algorithms \ref{algo_i3} and \ref{algo_i4} lead to different memory usage~\cite{daskalakis2018training, malitsky2015projected}. At each time step $k$, algorithm \ref{algo_i3} needs to store $x_2^{k}, x_2^{k+1}$ and $F^k(\cdot)$, but algorithm \ref{algo_i4} only needs to store $x_1^{k}$ and $x_1^{k+1}$ in memory. \titleparagraph{Limitations} Do these formal notions of equivalence capture everything an optimization expert might mean by ``equivalent algorithms''? No: an example is shown in algorithm \ref{algop3}. Algorithms \ref{algop3} and \ref{algo4} are related by a nonlinear state transformation, $x^k = \textnormal{exp}(\xi^{k})$. However, none of the equivalences we have discussed captures this example. The difficulty is that algorithm \ref{algop3} is a nonlinear algorithm, while all of our machinery for detecting algorithm equivalence requires linearity. While notions of nonlinear equivalence are certainly interesting, in this paper we will define only those types of equivalence that our framework can detect.
\begin{algorithm}[H] \centering \caption{} \label{algop3} \begin{algorithmic} \FOR{$k=1, 2,\dots$} \STATE{$x^{k+1} = x^k \textnormal{exp} ( - \frac{1}{5} \nabla f(\textnormal{log}\, x^k))$} \ENDFOR \end{algorithmic} \end{algorithm}
\section{A characterization of oracle equivalence}\label{charac-oracle} In this section, we will discuss how to characterize oracle equivalence via transfer functions. Recall that oracle equivalence, introduced in section \ref{equivalence}, characterizes an algorithm by its oracle sequence. This sequence is uniquely determined by the initialization of the algorithm (which we ignore) and the input-output map of the linear system representing the algorithm. While the state-space realizations of two equivalent algorithms may differ, from subsection \ref{control}, recall that the transfer function of a linear system uniquely characterizes the system as an input-output map. Fortunately, using \cref{eqp10}, we can directly calculate the transfer function from the state-space realization of an algorithm, and we can use equality of transfer functions to check if two algorithms are equivalent. This machinery allows us to avoid the issue of initialization (or of the optimization problem!) entirely, as we can check algorithm equivalence without ever producing a sequence of iterates. More formally, consider two oracle-equivalent algorithms with the same number of oracle calls in each iteration. From subsection \ref{oracle-equ}, we know that for every optimization problem, and for every initialization of the first algorithm, there exists an initialization of the second algorithm so that the oracle sequences of the two algorithms are the same. Concretely, by picking the initialization of the second algorithm appropriately, we can ensure that the first outputs of the linear systems match. Hence (since the oracles are the same), the first inputs of the linear systems match, and so the second outputs of the linear systems match, etc.
By induction, for each possible sequence of input $\mathbf{u}$, they produce identical sequences of output $\mathbf{y}$. Then from subsection \ref{control}, the algorithms must have identical impulse responses and consequently identical transfer functions. In light of the previous discussion, we have proved the following proposition, since each step in the reasoning above is necessary and sufficient. \begin{proposition}\label{prop1} Algorithms with the same number of oracle calls in each iteration are oracle-equivalent if and only if they have identical transfer functions. \end{proposition} Importantly, oracle-equivalent algorithms have the same transfer function, even if they have a different number of state variables. But any realization of the algorithm must have at least as many state variables as the minimal realization of the linear system. Oracle-equivalent algorithms have identical oracle sequences and hence converge to the same fixed point (if they converge). Suppose algorithm $\mathcal{A}_1: \mathcal X \to \mathcal X$ with (nonlinear) oracle $\phi: \mathcal X \to \mathcal X$ and state-space realization $(A_1, B_1, C_1, D_1)$, converges to a fixed point $(y^\star, u^\star, x^\star)$ that satisfies \begin{equation}\label{eq36} \begin{aligned} x^\star & = A_1 x^\star + B_1 u^\star \\ y^\star & = C_1 x^\star + D_1 u^\star \\ u^\star & = \phi(y^\star). \end{aligned} \end{equation} If algorithm $\mathcal{A}_2$ is oracle-equivalent to $\mathcal{A}_1$, $\mathcal{A}_2$ converges to a fixed point $(y^\star, u^\star, \tilde x^\star)$ that has the same output and input as the fixed point of $\mathcal{A}_1$; however, the state $\tilde x^\star$ may not be the same, or even have the same dimension. Further, if there is an invertible linear map $Q$ between the states of $\mathcal{A}_1$ and $\mathcal{A}_2$ and $(y^\star, u^\star, x^\star)$ is a fixed point of $\mathcal{A}_1$, then $(y^\star, u^\star, Qx^\star)$ is a fixed point of $\mathcal{A}_2$. 
We can use this fact to derive a relation between the state-space realizations of the two algorithms: the fixed point equation for $\mathcal{A}_2$ can be written as \begin{equation}\label{eq37} \begin{aligned} Qx^\star & = QA_1Q^{-1} Qx^\star + QB_1 u^\star \\ y^\star & = C_1Q^{-1} Qx^\star + D_1 u^\star \\ u^\star & = \phi(y^\star), \end{aligned} \end{equation} which shows that the state-space realization of $\mathcal{A}_2$ is \begin{equation}\label{eq2} \bmat{ QA_1Q^{-1} & QB_1 \\ C_1Q^{-1} & D_1}, \end{equation} as can also be obtained from \cref{eqp11}.
\subsection{Motivating examples: proof of equivalence} Now, we will revisit the first and second motivating examples and apply \cref{prop1} to show equivalence. \titleparagraph{\cref{algo1} and \cref{algo2}} The state-space realization and transfer function of algorithm \ref{algo1} are \begin{displaymath} \hat H_1(z) = \left[\begin{array}{c c|c} 2 & -1 &-\frac{1}{10}\\ 1 & 0 & 0 \\ \hline 2 & -1 & 0 \end{array}\right] = \left[\begin{array}{c c} 2 & -1 \end{array} \right] \left(zI - \left[\begin{array}{c c} 2 & -1 \\ 1 & 0 \end{array}\right]\right)^{-1} \left[\begin{array}{c} -\frac{1}{10}\\ 0 \end{array} \right] = \frac{-2z + 1}{10(z-1)^2}. \end{displaymath} The state-space realization and transfer function of algorithm \ref{algo2} are \begin{displaymath} \hat H_2(z) = \left[\begin{array}{c c|c} 1 & -1 &-\frac{1}{5}\\ 0 & 1 & \frac{1}{10} \\ \hline 1 & 0 & 0 \end{array}\right] = \left[\begin{array}{c c} 1 & 0 \end{array} \right] \left(zI - \left[\begin{array}{c c} 1 & -1 \\ 0 & 1 \end{array}\right]\right)^{-1} \left[\begin{array}{c} -\frac{1}{5}\\ \frac{1}{10} \end{array} \right] = \frac{-2z + 1}{10(z-1)^2}. \end{displaymath} Hence we see algorithms \ref{algo1} and \ref{algo2} have the same transfer function, so by \cref{prop1} they are oracle-equivalent.
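The transfer-function computation above is also easy to confirm numerically. The sketch below (using NumPy; the realizations are transcribed from the displays above) evaluates $C(zI-A)^{-1}B + D$ for both realizations at a few test points and compares the values with the closed form $\frac{-2z+1}{10(z-1)^2}$:

```python
import numpy as np

def transfer_function(A, B, C, D, z):
    """Evaluate H(z) = C (zI - A)^{-1} B + D at the complex point z."""
    n = A.shape[0]
    return C @ np.linalg.solve(z * np.eye(n) - A, B) + D

# Realization of the first algorithm, read off the display above.
A1 = np.array([[2.0, -1.0], [1.0, 0.0]]); B1 = np.array([[-0.1], [0.0]])
C1 = np.array([[2.0, -1.0]]);             D1 = np.array([[0.0]])
# Realization of the second algorithm.
A2 = np.array([[1.0, -1.0], [0.0, 1.0]]); B2 = np.array([[-0.2], [0.1]])
C2 = np.array([[1.0, 0.0]]);              D2 = np.array([[0.0]])

for z in [2.0, -1.5, 3.0 + 1.0j]:
    h1 = transfer_function(A1, B1, C1, D1, z)[0, 0]
    h2 = transfer_function(A2, B2, C2, D2, z)[0, 0]
    closed_form = (-2.0 * z + 1.0) / (10.0 * (z - 1.0) ** 2)
    assert np.isclose(h1, h2) and np.isclose(h1, closed_form)
```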
In fact, since the algorithms have the same number of state variables, there exists an invertible linear transformation \[ Q = \bmat{ 2 & -1 \\ -1 & 1} \] to convert the state-space realization of algorithm \ref{algo1} to the state-space realization of algorithm \ref{algo2} following \cref{eqp11}. \titleparagraph{\cref{algo3} and \cref{algo4}} The state-space realization and transfer function of algorithm \ref{algo3} are \begin{displaymath} \hat H_3(z) = \left[\begin{array}{c c|c} 3 & -2 & \frac{1}{5}\\ 1 & 0 & 0 \\ \hline -1 & 2 & 0 \end{array}\right] = \left[\begin{array}{c c} -1 & 2 \end{array} \right] \left(zI - \left[\begin{array}{c c} 3 & -2 \\ 1 & 0 \end{array}\right]\right)^{-1} \left[\begin{array}{c} \frac{1}{5}\\ 0 \end{array} \right] = -\frac{1}{5(z-1)}. \end{displaymath} The state-space realization and transfer function of algorithm \ref{algo4} are \begin{displaymath} \hat H_4(z) = \left[\begin{array}{c|c} 1 & -\frac{1}{5}\\ \hline 1 & 0 \end{array}\right] = \left[\begin{array}{c} 1 \end{array} \right] \left(zI - \left[\begin{array}{c} 1 \end{array}\right]\right)^{-1} \left[\begin{array}{c} -\frac{1}{5} \end{array} \right] = -\frac{1}{5(z-1)}. \end{displaymath} Algorithms \ref{algo3} and \ref{algo4} have the same transfer function, so by \cref{prop1} they are oracle-equivalent. On the other hand, they have different numbers of states. Consider the invertible linear transformation \begin{displaymath} Q = \left[ \begin{array}{c c} -1 & 2 \\ 0 & 1 \end{array}\right]. \end{displaymath} Applying $Q$ to the state-space realization of algorithm \ref{algo3} leads to \begin{displaymath} \left[\begin{array}{c c:c} 1 & 0 & -\frac{1}{5}\\ -1 & 2 & 0 \\ \hdashline 1 & 0 & 0 \end{array}\right], \end{displaymath} where we have used dashed lines to demarcate the blocks in the state-space realization. This has the same minimal realization as algorithm \ref{algo4} according to subsection \ref{control} on minimal realizations. 
\begin{displaymath} \left[\begin{array}{c:c} 1 & -\frac{1}{5}\\ \hdashline 1 & 0 \end{array}\right]. \end{displaymath} Note that the state-space realization of algorithm \ref{algo4} is a minimal realization. This explains why algorithms \ref{algo3} and \ref{algo4} are equivalent even though they have different numbers of states. Now we show how the sausage was made. Algorithm \ref{algo3} was designed by starting with the more complex \emph{Triple momentum algorithm} (algorithm \ref{algo13}~\cite{doi:10.1137/15M1009597,tmm}) and choosing its parameters so that its transfer function matched that of algorithm \ref{algo4}.
\renewcommand{\thealgorithm}{5.\arabic{algorithm}} \setcounter{algorithm}{0} \begin{algorithm}[H] \caption{Triple momentum algorithm} \label{algo13} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_1 = (1+\beta)x^k_1 - \beta x^k_2 - \alpha \nabla f((1+\eta)x^k_1 - \eta x^k_2)$} \STATE{$x^{k+1}_2 = x^k_1 $} \ENDFOR \end{algorithmic} \end{algorithm}
The state-space realization and transfer function of algorithm \ref{algo13} are \begin{equation}\label{eq7} \hat H_7(z) = \left[\begin{array}{c c|c} 1+\beta & -\beta & -\alpha \\ 1 & 0 & 0 \\ \hline 1+\eta & -\eta & 0 \end{array}\right] = -\frac{\alpha((\eta+1)z-\eta)}{(z-1)(z-\beta)}. \end{equation} We now demand that the right-hand side of \cref{eq7} equal the transfer function of algorithm \ref{algo4} for all values of $z$, resulting in the equations \begin{equation}\label{eq3} \begin{aligned} & 5\alpha(\eta+1) = 1 \\ & 5\alpha\eta = \beta. \end{aligned} \end{equation} Solving \cref{eq3} for the parameters $\alpha$, $\beta$, and $\eta$ yields the solution $\alpha =-\frac{1}{5}$, $\beta = 2$, and $\eta = -2$, which corresponds to algorithm \ref{algo3}. Other solutions exist: for example, $\alpha = 1$, $\beta = -4$ and $\eta = -\frac{4}{5}$ solves \cref{eq3} and yields another (different!) algorithm equivalent to algorithm \ref{algo4}.
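Both parameter choices can be verified numerically. The sketch below evaluates the triple momentum transfer function $-\alpha((\eta+1)z-\eta)/((z-1)(z-\beta))$ at a few sample points (chosen away from the poles $z=1$ and $z=\beta$) and checks that each solution of the matching equations reproduces $-\frac{1}{5(z-1)}$:

```python
def tmm_transfer(alpha, beta, eta, z):
    """Transfer function of the triple momentum template."""
    return -alpha * ((eta + 1.0) * z - eta) / ((z - 1.0) * (z - beta))

def target(z):
    """Transfer function of the simple one-state algorithm: -1/(5(z-1))."""
    return -1.0 / (5.0 * (z - 1.0))

# Two distinct parameter solutions (alpha, beta, eta) of the matching equations.
solutions = [(-0.2, 2.0, -2.0), (1.0, -4.0, -0.8)]
for alpha, beta, eta in solutions:
    for z in [3.0, -2.0, 0.5 + 2.0j]:  # sample points away from the poles
        assert abs(tmm_transfer(alpha, beta, eta, z) - target(z)) < 1e-12
```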
\section{A characterization of shift equivalence}\label{charac-shift} We can also characterize shift equivalence using transfer functions. Suppose an algorithm uses more than one oracle, and the call to the second oracle depends on the value of the first. Take algorithm \ref{algo5} as an example: at iteration $k$, the first update equation calls the oracle $\textnormal{prox}_{f}$ to compute $x^{k+1}_1 = \textnormal{prox}_{f}(x^k_3)$, and the second update equation calls the oracle $\textnormal{prox}_{g}$ to compute $x^{k+1}_2 = \textnormal{prox}_{g}(2x^{k+1}_1 - x^k_3)$. This second update relies on the value of $x^{k+1}_1$. Imagine now that we reorder the update equations by some permutation. Generally this change produces an entirely different algorithm. But if the permutation is a \emph{cyclic} permutation, the order of the oracle calls is preserved. In the example of algorithm \ref{algo5}, we could start with the update equation $x^{k+1}_2 = \textnormal{prox}_{g}(2x^{k+1}_1 - x^k_3)$ and produce exactly the same sequence of oracle calls (after the first) by initializing $x^{k+1}_1$ and $x^k_3$ appropriately. This new algorithm is shift-equivalent to algorithm \ref{algo5} by \cref{def2}. Algorithm \ref{algo5} has three update equations, and so there are two other algorithms that may be produced by cyclic permutations of algorithm \ref{algo5}, shown below as algorithms \ref{algo5_1} and \ref{algo5_2}.
\renewcommand{\thealgorithm}{6.\arabic{algorithm}} \setcounter{algorithm}{0} \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo5_1} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_2 = \textnormal{prox}_{g}(2x^{k+1}_1 - x^k_3)$} \STATE{$x^{k+1}_3 = x^k_3 + x^{k+1}_2 - x^{k+1}_1$} \STATE{$x^{k+1}_1 = \textnormal{prox}_{f}(x^k_3)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo5_2} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_3 = x^k_3 + x^{k+1}_2 - x^{k+1}_1$} \STATE{$x^{k+1}_1 = \textnormal{prox}_{f}(x^k_3)$} \STATE{$x^{k+1}_2 = \textnormal{prox}_{g}(2x^{k+1}_1 - x^k_3)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em} Both are shift-equivalent to algorithm \ref{algo5}, but algorithm \ref{algo5_2} is also oracle-equivalent to algorithm \ref{algo5}. (We will revisit and formally prove this result later.) It is easy to see why: the oracles $\textnormal{prox}_{f}$ and $\textnormal{prox}_{g}$ are called in the same order in algorithms~\ref{algo5} and \ref{algo5_2}, but in the opposite order in algorithm \ref{algo5_1}. We introduce notation to generalize this idea to more complex algorithms. Consider an algorithm $\mathcal{A}$ that consists of $m$ update equations and makes $n$ sequential oracle calls in each iteration. We insist that no update equation may contain more than one oracle call, so $m \ge n$. At iteration $k$, the algorithm generates states $x^k_1, \ldots, x^k_m$, outputs $y^k_1, \ldots, y^k_n$, and inputs $u^k_1, \ldots, u^k_n$, respectively. Consider any permutation $\tilde \pi$ of the sequence $(m) = (1, \ldots, m)$. We call algorithm $\mathcal{B} = P_{\tilde \pi}\mathcal{A}$ a \emph{permutation} of algorithm $\mathcal{A}$ if $\mathcal{B}$ performs the update equations of $\mathcal{A}$ in the order $\tilde \pi$ at each iteration. 
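We can preview this claim numerically before proving it. The sketch below simulates algorithms \ref{algo5}, \ref{algo5_1}, and \ref{algo5_2} on an illustrative scalar test instance of our own choosing ($f(x)=x^2/2$ and $g(x)=(x-3)^2/2$, whose proximal operators have the closed forms used below) and records the argument of every oracle call. With matched initializations, the trace of algorithm \ref{algo5_2} coincides with that of algorithm \ref{algo5} exactly (oracle equivalence), while the trace of algorithm \ref{algo5_1} coincides after dropping the first call (shift equivalence):

```python
def prox_f(v):            # f(x) = x^2 / 2, so prox_f(v) = v / 2
    return v / 2.0

def prox_g(v):            # g(x) = (x - 3)^2 / 2, so prox_g(v) = (v + 3) / 2
    return (v + 3.0) / 2.0

def run_original(x3, iters):          # f-call, g-call, then the x3 update
    trace = []
    for _ in range(iters):
        x1 = prox_f(x3); trace.append(('f', x3))
        x2 = prox_g(2 * x1 - x3); trace.append(('g', 2 * x1 - x3))
        x3 = x3 + x2 - x1
    return trace

def run_perm_5_2(x1, x2, x3, iters):  # the x3 update moved to the front
    trace = []
    for _ in range(iters):
        x3 = x3 + x2 - x1
        x1 = prox_f(x3); trace.append(('f', x3))
        x2 = prox_g(2 * x1 - x3); trace.append(('g', 2 * x1 - x3))
    return trace

def run_perm_5_1(x1, x3, iters):      # g-call first; f-call closes the iteration
    trace = []
    for _ in range(iters):
        x2 = prox_g(2 * x1 - x3); trace.append(('g', 2 * x1 - x3))
        x3 = x3 + x2 - x1
        x1 = prox_f(x3); trace.append(('f', x3))
    return trace

t5 = run_original(5.0, 7)                   # 14 oracle calls
t52 = run_perm_5_2(0.0, 0.0, 5.0, 6)        # init with x2 - x1 = 0
t51 = run_perm_5_1(prox_f(5.0), 5.0, 6)     # init as if the first f-call already happened
assert t52 == t5[:len(t52)]                 # oracle-equivalent: traces coincide
assert t51 == t5[1:len(t51) + 1]            # shift-equivalent: coincide after one dropped call
```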
The algorithms $\mathcal{A}$ and $\mathcal{B}$ are shift-equivalent if and only if $\tilde \pi$ is a cyclic permutation of $(m)$. \begin{proposition}\label{prop3} An algorithm and any of its cyclic permutations are shift-equivalent. \end{proposition} \begin{proof} We provide a proof sketch here, and defer a detailed proof to \cref{apd2}. Let us name the oracle calls of the original algorithm $\mathcal A$ so that the oracles are called in order $(n)$. Suppose $\mathcal{B} = P_{\tilde \pi}\mathcal{A}$ where $\tilde \pi$ is a cyclic permutation of $(m)$. The permutation of update equations may reorder the oracle calls within one iteration, so that the oracle calls in algorithm $\mathcal{B}$ follow a cyclic permutation $\pi$ of $(n)$ (possibly the identity). Hence $\mathcal{A}$ and $\mathcal{B}$ are shift-equivalent. (If the permutation is the identity, then the algorithms are also oracle-equivalent.) \end{proof}
\subsection{Reordering oracle calls}\label{odg} Most optimization algorithms proceed by sequential updates, each of which depends on the previous update. However, for completeness, we consider a more general class of equivalences that arises for algorithms whose oracle updates have a more complex dependency structure. We may express the order of oracle calls at each iteration using a directed graph, where the graph has an edge from oracle $i$ to oracle $j$ if oracle call $j$ depends on the result of oracle call $i$ (within the same iteration). In other words, within the iteration we must call oracle $i$ before oracle $j$. We call this directed graph the \emph{oracle dependence graph} (ODG) of the algorithm. An example is provided below as algorithm \ref{algo6_1}. Note that we are not aware of any practical algorithm for optimization with this ODG. It is constructed only for illustration.
\begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo6_1} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_1 = x^k_4 - t\nabla f(x^k_4)$} \STATE{$x^{k+1}_2 = x^{k+1}_1 - t\nabla g(x^{k+1}_1)$} \STATE{$x^{k+1}_3 = x^{k+1}_1 - t\nabla h(x^{k+1}_1)$} \STATE{$x^{k+1}_4 = \textnormal{prox}_{tf}(x^{k+1}_2 + x^{k+1}_3)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{} \label{algo6_2} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_1 = x^k_4 - t\nabla f(x^k_4)$} \STATE{$x^{k+1}_3 = x^{k+1}_1 - t\nabla h(x^{k+1}_1)$} \STATE{$x^{k+1}_2 = x^{k+1}_1 - t\nabla g(x^{k+1}_1)$} \STATE{$x^{k+1}_4 = \textnormal{prox}_{tf}(x^{k+1}_2 + x^{k+1}_3)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em}
Figure \ref{fig7} expresses the dependency of oracle calls within each iteration of algorithm \ref{algo6_1}. At each iteration, oracle calls 2 ($\nabla g$) and 3 ($\nabla h$) depend on the result of oracle call 1 ($\nabla f$); oracle call 4 ($\textnormal{prox}_{tf}$) depends on the results of oracle calls 1, 2, and 3.
\begin{figure}[tbhp] \centering \begin{tikzpicture} [->,>=stealth',shorten >=1pt,auto,node distance=1.5cm,semithick] \node[state] (A) [minimum size=0.7cm] {$1$}; \node[state] (B) [below right of=A] [minimum size=0.7cm] {$2$}; \node[state] (C) [below left of=A] [minimum size=0.7cm] {$3$}; \node[state] (D) [below left of=B] [minimum size=0.7cm] {$4$}; \path (A) edge node {} (B); \path (A) edge node {} (C); \path (B) edge node {} (D); \path (C) edge node {} (D); \path (A) edge node {} (D); \end{tikzpicture} \caption{Directed graph representing dependency of oracle calls in algorithm \ref{algo6_1}.} \label{fig7} \end{figure}
An algorithm is always written as a sequence of update equations.
But some algorithms have an oracle dependence graph that can be written as a sequence (with all edges pointing forward) in more than one way. These algorithms can be implemented as a sequence of oracle calls in more than one way. For illustration, consider algorithms \ref{algo6_1} and \ref{algo6_2}. At each iteration, the oracle calls of algorithms \ref{algo6_1} and \ref{algo6_2} are identical: that is, calls to oracles $\nabla f$, $\nabla g$, $\nabla h$, and $\textnormal{prox}_{tf}$ are identical. The only difference is that the oracle calls $\nabla g$ and $\nabla h$ are swapped in the oracle sequence at each iteration. Notice that the state-space realizations of these algorithms \emph{still} have the same transfer function (after swapping the second and third columns and rows). This is consistent with the fact that algorithms \ref{algo6_1} and \ref{algo6_2} share the oracle dependence graph shown in \cref{fig7}. We know of no practical optimization algorithm like this. However, were one to be discovered, we would suggest an expanded definition of oracle equivalence: two algorithms are oracle-equivalent if there exists a way of writing each algorithm as a sequence of updates so that both algorithms have the same sequence of oracle calls. We can still identify algorithms that are oracle-equivalent in this expanded sense using the transfer function. The oracle calls in an algorithm at each iteration are always written in sequential form. This sequential form is lost in the state-space realization of the algorithm. However, the order (dependency) of oracle calls is encoded in the $D$ matrix of the state-space realization. In this sense, the $D$ matrix is closely related to the adjacency matrix of the directed graph. We have $D_{ij} \neq 0$ if and only if oracle call $i$ depends on the result of oracle call $j$ at each iteration. For example, the $D$ matrix in the state-space realization of algorithm \ref{algo6_1} is provided below.
\begin{displaymath} \left[\begin{array}{cccc} 0 & 0 & 0 & 0 \\ -t & 0 & 0 & 0 \\ -t & 0 & 0 & 0 \\ -2t & -t & -t & 0 \end{array}\right] \end{displaymath} In light of this discussion, we can strengthen \cref{prop3} to \cref{prop3_1}. \begin{proposition}\label{prop3_1} An algorithm and any of its cyclic permutations are shift-equivalent; further, if they share the same $D$ matrix in their state-space realizations, they are also oracle-equivalent. \end{proposition} If an algorithm contains $m$ update equations and $n$ oracle calls at each iteration ($m \ge n$), there are $m$ possible cyclic permutations of the update equations. According to the $D$ matrix in the state-space realization, we can group the $m$ cyclic permutations into $n$ distinct equivalence classes. Algorithms within each equivalence class are oracle-equivalent and shift-equivalent, while algorithms in different equivalence classes are only shift-equivalent. The $n$ distinct equivalence classes correspond to the $n$ cyclic permutations of the original order of oracle calls $(n)$.
\subsection{Characterization of cyclic permutation}\ In the remainder of this paper, let us restrict our attention to algorithms for which a (cyclic) permutation of the algorithm changes the update order of oracle calls within one iteration, or in other words, changes the $D$ matrix in the state-space realization. Accordingly, we call algorithm $\mathcal{B} = P_{\pi}\mathcal{A}$ a permutation of algorithm $\mathcal{A}$ if $\mathcal{B}$ performs the update equations of $\mathcal{A}$ in a different order such that the update order of oracle calls of $\mathcal B$ is $\pi$ at each iteration. Suppose $\mathcal{A}$ has state-space realization $(A, B, C, D)$, and $\mathcal{B} = P_{\pi}\mathcal{A}$ where $\pi = (j+1,\ldots, n, 1,\ldots, j)$ for $1< j < n$ is a cyclic permutation of $(n)$. We will show how to recognize this relationship between the algorithms by considering their transfer functions.
Partition the oracle calls into two parts, $(1, \ldots, j)$ and $(j+1, \ldots, n)$, and partition the input and output sequences in the same way: $\bar{\mathbf{u}}_1$, $\bar{\mathbf{u}}_2$ for inputs and $\bar{\mathbf{y}}_1$, $\bar{\mathbf{y}}_2$ for outputs. The state-space realization $L_\mathcal{A}$ and transfer function $\hat H_{\mathcal{A}}(z)$ can also be partitioned accordingly as \begin{equation}\label{eq8} L_\mathcal{A} = \left[\begin{array}{c:c c} A & B_1 & B_2 \\ \hdashline C_1 & D_{11} & D_{12} \\ C_2 & D_{21} & D_{22} \end{array}\right], \end{equation} \begin{displaymath} \hat H_{\mathcal{A}}(z) = \left[\begin{array}{c c} C_1(zI - A)^{-1}B_1 + D_{11} & C_1(zI - A)^{-1}B_2 + D_{12} \\ C_2(zI - A)^{-1}B_1 + D_{21} & C_2(zI - A)^{-1}B_2 + D_{22} \\ \end{array}\right] = \left[\begin{array}{c c} \hat H_{11}(z) & \hat H_{12}(z) \\ \hat H_{21}(z) & \hat H_{22}(z) \\ \end{array}\right]. \end{displaymath} The state-space realization corresponds to the state update equations \begin{equation}\label{eq9} \begin{aligned} x^{k+1} & = Ax^k + B_1\bar{u}^k_1 + B_2\bar{u}^k_2 \\ \bar{y}^k_1 & = C_1x^k + D_{11}\bar{u}^k_1 + D_{12}\bar{u}^k_2 \\ \bar{y}^k_2 & = C_2x^k + D_{21}\bar{u}^k_1 + D_{22}\bar{u}^k_2. \end{aligned} \end{equation} Now we can say how the transfer function of an algorithm is related to that of its cyclic permutation. \begin{proposition}\label{prop4} Assume $D_{12} = 0$. Then $\mathcal{B} = P_{\pi}\mathcal{A}$ if and only if the transfer function of $\mathcal{B}$ satisfies \begin{equation}\label{eq11} \hat H_{\mathcal{B}}(z) = \left[\begin{array}{c c} \hat H_{11}(z) & z\hat H_{12}(z) \\ \hat H_{21}(z)/z & \hat H_{22}(z) \\ \end{array}\right]. \end{equation} \end{proposition} \begin{proof} Sufficiency. We will derive the state-space realization of $\mathcal{B}$: \begin{equation}\label{eq12} \left[\begin{array}{c c :c c} A & B_1 & 0 & B_2 \\ 0 & 0 & I & 0\\ \hdashline C_1A & C_1B_1 & D_{11} & C_1B_2 \\ C_2 & D_{21} & 0 & D_{22} \end{array}\right]. 
\end{equation} To verify this realization is correct, we can write the system equations of this state-space realization as \begin{equation}\label{eq13} \begin{aligned} x^{k+1} & = Ax^k + B_1\bar{u}^k_1 + B_2\bar{u}^k_2 \\ \bar{u}^{k+1}_1 & = \bar{u}^{k+1}_1 \\ \bar{y}^{k+1}_1 & = C_1Ax^k + C_1B_1\bar{u}^k_1 +D_{11}\bar{u}^{k+1}_1 + C_1B_2\bar{u}^k_2 \\ \bar{y}^k_2 & = C_2x^k + D_{21}\bar{u}^k_1 + D_{22}\bar{u}^k_2. \\ \end{aligned} \end{equation} Note that equations \cref{eq13} are the result of applying the permutation $\pi$ to equations \cref{eq9}. As we perform the cyclic permutation $\pi$, within each iteration, the update order of the oracles is shifted to $(j+1,\ldots, n, 1,\ldots, j)$, indicating that oracles $(j+1, \ldots, n)$ are updated before $(1, \ldots, j)$. Further, the input and output sequences within one iteration at time step $k$ become $(\bar{u}^k_{2}, \bar{u}^{k+1}_1)$ and $(\bar{y}^k_{2}, \bar{y}^{k+1}_1)$. From the state-space realization, we may compute the transfer function as \begin{equation}\label{eq14} \hat H_{\mathcal{B}}(z) = \left[\begin{array}{c c} C_1(zI - A)^{-1}B_1 + D_{11} & zC_1(zI - A)^{-1}B_2 \\ C_2(zI - A)^{-1}B_1/z + D_{21}/z & C_2(zI - A)^{-1}B_2 + D_{22} \\ \end{array}\right] = \left[\begin{array}{c c} \hat H_{11}(z) & z\hat H_{12}(z) \\ \hat H_{21}(z)/z & \hat H_{22}(z) \\ \end{array}\right]. \end{equation} To arrive at \cref{eq14}, we have used the fact that $D_{12} = 0$ by assumption, and \begin{equation}\label{eq15} \left( zI - \left[\begin{array}{c c} A & B_1 \\ 0 & 0 \\ \end{array}\right]\right)^{-1} = \left[\begin{array}{c c} (zI - A)^{-1} & \frac{1}{z}(zI - A)^{-1}B_1 \\ 0 & \frac{1}{z}I \\ \end{array}\right]. \end{equation} Necessity follows from \cref{prop1}: oracle-equivalent algorithms must have identical transfer functions, so any algorithm whose transfer function equals \cref{eq11} must be oracle-equivalent to $\mathcal{B}$. \end{proof} We have assumed that $D_{12} = 0$ for algorithm $\mathcal{A}$.
This assumption is quite weak. In fact, $D_{12}$ must be $0$ for any algorithm $\mathcal{A}$ that can be represented as a causal linear time-invariant system. Here, causal means that we can implement the algorithm by calling state update equations sequentially. To see this, suppose the state update equations have been arranged in this order, and use \cref{eqp3} to write down the matrix representation of the infinite-dimensional map $\mathbf{H}$ that maps input $\mathbf{u}$ to output $\mathbf{y}$ corresponding to $\mathcal{A}$ as \cref{eq18}: \begin{equation}\label{eq18} \mathbf{H} = \begin{bmatrix} D_{11} & D_{12} & 0 & 0 & 0 & 0 &\cdots \\ D_{21} & D_{22} & 0 & 0 & 0 & 0 &\cdots \\ C_1B_1 & C_1B_2 & D_{11} & D_{12} & 0 & 0 &\cdots \\ C_2B_1 & C_2B_2 & D_{21} & D_{22} & 0 & 0 &\cdots \\ C_1AB_1 & C_1AB_2& C_1B_1 & C_1B_2 & D_{11} & D_{12} & \cdots \\ C_2AB_1 & C_2AB_2 & C_2B_1 & C_2B_2 & D_{21} & D_{22} & \cdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \end{bmatrix}. \end{equation} We can see that the map $\mathbf{H}$ is (block) Toeplitz. Further, if algorithm $\mathcal{A}$ is causal, the map $\mathbf{H}$ must be lower-triangular, and so $D_{12}$ must be $0$. By causality, at each iteration the earlier oracle calls must be independent of the later oracle calls, while the later calls may depend on the earlier ones. This indicates that there are no directed cycles in the directed graph representing oracle calls at each iteration for any causal algorithm. In other words, the graph is a directed acyclic graph (DAG). This is consistent with the fact that any causal algorithm has a lower-triangular $D$ matrix (a lower-triangular adjacency matrix of the directed graph). Note that algorithms are not always written with state update equations ordered causally: for example, the state-space realization \cref{eq12} has a non-zero $D_{12}$ block.
However, we may reorder these equations so that each equation depends only on previously-computed quantities to reveal that the iteration is causal; after this rearrangement, the new $D_{12}$ block is 0. We discuss permutations further in \cref{apd3}. The fixed points of an algorithm and its cyclic permutations are the same up to a permutation. Suppose algorithm $\mathcal{A}: \mathcal X \to \mathcal X$ with state-space realization of the form \cref{eq8} converges to a fixed point $(\bar{y}_1^\star, \bar{y}_2^\star, \bar{u}_1^\star, \bar{u}_2^\star, x^\star)$. Partition the oracle calls into two (nonlinear) oracles $\phi_1$ and $\phi_2$. Formally, write the update equations as \begin{equation}\label{eq38} \begin{aligned} x^\star & = Ax^\star + B_1\bar{u}^\star_1 + B_2\bar{u}^\star_2 \\ \bar{y}^\star_1 & = C_1x^\star + D_{11}\bar{u}^\star_1 + D_{12}\bar{u}^\star_2 \\ \bar{y}^\star_2 & = C_2x^\star + D_{21}\bar{u}^\star_1 + D_{22}\bar{u}^\star_2 \\ \bar{u}^\star_1 & = \phi_1(\bar{y}^\star_1)\\ \bar{u}^\star_2 & = \phi_2(\bar{y}^\star_2). \end{aligned} \end{equation} Suppose the cyclic permutation $\pi$ swaps the first and second sets of oracle calls. Then the cyclic permutation $\mathcal{B} = P_{\pi}\mathcal{A}$ converges to the fixed point $(\bar{y}_2^\star, \bar{y}_1^\star, \bar{u}_2^\star, \bar{u}_1^\star, x^\star)$. To verify this, since $D_{12} = 0$, we have \begin{equation}\label{eq39} \begin{aligned} x^\star & = Ax^\star + B_1\bar{u}^\star_1 + B_2\bar{u}^\star_2 \\ \bar{u}^\star_1 & = \bar{u}^\star_1 \\ \bar{y}^\star_1 & = C_1Ax^\star + C_1B_1\bar{u}^\star_1 +D_{11}\bar{u}^\star_1 + C_1B_2\bar{u}^\star_2 = C_1x^\star + D_{11}\bar{u}^\star_1\\ \bar{y}^\star_2 & = C_2x^\star + D_{21}\bar{u}^\star_1 + D_{22}\bar{u}^\star_2 \\ \bar{u}^\star_1 & = \phi_1(\bar{y}^\star_1)\\ \bar{u}^\star_2 & = \phi_2(\bar{y}^\star_2).
\end{aligned} \end{equation}
\subsection{Applications: proof of shift equivalence}\label{app-shift} \titleparagraph{\cref{algo5} and \cref{algo6}} Now, we can revisit algorithms \ref{algo5} and \ref{algo6} in the third motivating example and show that they are permutations of each other and hence shift-equivalent. The state-space realization and transfer function of algorithm \ref{algo5} are \begin{displaymath} \hat H_5(z) = \left[\begin{array}{ c c c | c c } 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -1 & 1 \\ \hline 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 2 & 0 \\ \end{array}\right] = \left[\begin{array}{ c c } -\frac{1}{z-1} & \frac{1}{z-1} \\ \frac{2z-1}{z-1} & -\frac{1}{z-1} \\ \end{array}\right]. \end{displaymath} The state-space realization and transfer function of algorithm \ref{algo6} are \begin{displaymath} \hat H_6(z) = \left[\begin{array}{ c c | c c } 1 & -1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \hline 1 & -1 & 0 & 1 \\ -1 & 2 & 0 & 0 \\ \end{array}\right] = \left[\begin{array}{ c c } -\frac{1}{z-1} & \frac{z}{z-1} \\ \frac{2z-1}{z(z-1)} & -\frac{1}{z-1} \\ \end{array}\right]. \end{displaymath} From propositions \ref{prop3} and \ref{prop4}, we know that they are cyclic permutations of each other and hence shift-equivalent. \titleparagraph{\cref{algo5_1} and \cref{algo5_2}} Here we revisit algorithms \ref{algo5_1} and \ref{algo5_2} from the beginning of this section and show their relation to algorithm \ref{algo5}. The state-space realization and transfer function of algorithm \ref{algo5_1} are \begin{displaymath} \hat H_8(z) = \left[\begin{array}{ c c c | c c } 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ -1 & 0 & 1 & 0 & 1 \\ \hline -1 & 0 & 1 & 0 & 1 \\ 2 & 0 & -1 & 0 & 0 \\ \end{array}\right] = \left[\begin{array}{ c c } -\frac{1}{z-1} & \frac{z}{z-1} \\ \frac{2z-1}{z(z-1)} & -\frac{1}{z-1} \\ \end{array}\right].
\end{displaymath} The state-space realization and transfer function of algorithm \ref{algo5_2} are \begin{displaymath} \hat H_9(z) = \left[\begin{array}{ c c c | c c } 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ -1 & 1 & 1 & 0 & 0 \\ \hline -1 & 1 & 1 & 0 & 0 \\ 1 & -1 & -1 & 2 & 0 \\ \end{array}\right] = \left[\begin{array}{ c c } -\frac{1}{z-1} & \frac{1}{z-1} \\ \frac{2z-1}{z-1} & -\frac{1}{z-1} \\ \end{array}\right]. \end{displaymath} From propositions \ref{prop3} and \ref{prop4}, we know that algorithms \ref{algo5} and \ref{algo5_1} are cyclic permutations of each other and hence shift-equivalent. From proposition \ref{prop1}, we know algorithms \ref{algo5} and \ref{algo5_2} are oracle-equivalent, and thus also shift-equivalent. \vspace{-1em} \noindent \hfil \begin{minipage}[t]{0.4\textwidth} \begin{algorithm}[H] \centering \caption{Douglas-Rachford splitting} \label{algo7} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_1 = \textnormal{prox}_{tf}(x^k_3)$} \STATE{$x^{k+1}_2 = \textnormal{prox}_{tg}(2x^{k+1}_1 - x^k_3)$} \STATE{$x^{k+1}_3 = x^k_3 + x^{k+1}_2 - x^{k+1}_1$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.54\textwidth} \begin{algorithm}[H] \centering \caption{ADMM} \label{algo8} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$\xi^{k+1}_1 = \textnormal{argmin}_{\xi}\{g(\xi)+ \frac{\rho}{2} \left \| A\xi + B\xi^{k}_2 -c + \xi^{k}_3 \right \|^2 \} $} \STATE{$\xi^{k+1}_2 = \textnormal{argmin}_{\xi}\{f(\xi)+ \frac{\rho}{2} \left \| A\xi^{k+1}_1 + B\xi -c + \xi^{k}_3 \right \|^2 \} $} \STATE{$\xi^{k+1}_3 = \xi^{k}_3 + A\xi^{k+1}_1 + B\xi^{k+1}_2 - c$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em} \titleparagraph{Douglas-Rachford splitting and ADMM} Consider one last example of algorithm permutation: Douglas-Rachford splitting (DR) (algorithm \ref{algo7}~\cite{douglas1956numerical, eckstein1992douglas}) and the alternating direction method of multipliers (ADMM) (algorithm
\ref{algo8}~\cite[\S8]{ryuyinconvex}). Suppose that $A = I$, $B = -I$, and $c = 0$ in \cref{eqp1}. Then both DR and ADMM solve problem \cref{eqp1}~\cite{MAL-016, wen2010alternating, lions1979splitting}, and the update equations of ADMM can be simplified to algorithm \ref{algo8_1}. \begin{algorithm} \centering \caption{Simplified ADMM} \label{algo8_1} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$\xi^{k+1}_1 = \textnormal{prox}_{\frac{1}{\rho}g}(\xi^k_2 - \xi^k_3)$} \STATE{$\xi^{k+1}_2 = \textnormal{prox}_{\frac{1}{\rho}f}(\xi^{k+1}_1 + \xi^k_3)$} \STATE{$\xi^{k+1}_3 = \xi^{k}_3 + \xi^{k+1}_1 - \xi^{k+1}_2$} \ENDFOR \end{algorithmic} \end{algorithm} Further, we assume $\rho = 1/t$ in ADMM. We will compute the transfer function of both algorithms using $\textnormal{prox}_{tf}$ and $\textnormal{prox}_{tg}$ as the oracles. The transfer function of DR is \begin{equation}\label{eq19} \hat H_{10}(z) = \left[\begin{array}{ c c c | c c } 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -1 & 1 \\ \hline 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 2 & 0 \\ \end{array}\right] = \left[\begin{array}{ c c } -\frac{1}{z-1} & \frac{1}{z-1} \\ \frac{2z-1}{z-1} & -\frac{1}{z-1} \\ \end{array}\right] \end{equation} and the transfer function of ADMM is \begin{equation}\label{eq20} \hat H_{11}(z) = \left[\begin{array}{ c c c | c c } 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & -1 & 1 \\ \hline 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & -1 & 0 & 0 \\ \end{array}\right] = \left[\begin{array}{ c c } -\frac{1}{z-1} & \frac{z}{z-1} \\ \frac{2z-1}{z(z-1)} & -\frac{1}{z-1} \\ \end{array}\right]. \end{equation} From Propositions \ref{prop3} and \ref{prop4}, we know that Douglas-Rachford splitting and ADMM (with $\rho = 1/t$) are (cyclic) permutations of each other and shift-equivalent. In fact, it is also possible to write the state-space realization for each algorithm using the gradient (or subgradient) of $f$ and $g$ as the oracle.
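As a quick sanity check on \cref{eq19}, one can read the realization $(A, B, C, D)$ off the block matrix and evaluate $\hat H(z) = C(zI - A)^{-1}B + D$ numerically; the result agrees with the closed-form entries. A minimal sketch (the realization is copied from \cref{eq19}; the sample points are arbitrary):

```python
import numpy as np

# State-space realization (A, B, C, D) read off the block matrix in (eq19):
# Douglas-Rachford splitting with prox_tf and prox_tg as the two oracles.
A = np.array([[0., 0., 0.],
              [0., 0., 0.],
              [0., 0., 1.]])
B = np.array([[ 1., 0.],
              [ 0., 1.],
              [-1., 1.]])
C = np.array([[0., 0.,  1.],
              [0., 0., -1.]])
D = np.array([[0., 0.],
              [2., 0.]])

def transfer(z):
    """Evaluate H(z) = C (zI - A)^{-1} B + D at a scalar z."""
    return C @ np.linalg.solve(z * np.eye(A.shape[0]) - A, B) + D

def closed_form(z):
    """Closed-form transfer function of DR from (eq19)."""
    return np.array([[-1 / (z - 1), 1 / (z - 1)],
                     [(2 * z - 1) / (z - 1), -1 / (z - 1)]])

for z in (2.0, 3.0, -4.0):
    assert np.allclose(transfer(z), closed_form(z))
```

Agreement at a few sample points is of course not a proof, but it is a cheap way to catch transcription errors in a realization.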
The transfer functions depend on the choice of oracle, but in either case we obtain the same results: the algorithms are (cyclic) permutations of each other and they are shift-equivalent. We discuss the details further in \cref{apd3_1}. \section{Algorithm repetition}\label{repe} In previous sections, we defined equivalence between algorithms with the same number of oracle calls in each iteration. This section considers how to identify relations between two algorithms when the number of oracle calls per iteration differs. For example, we would like to detect when one algorithm consists of another, simpler algorithm, repeated twice or more, possibly with changes to variables or shifts that obscure the relation. Consider an algorithm $\mathcal{A}$. Given a problem and an initialization, the algorithm generates a state sequence $(x^k_\mathcal{A})_{k\ge 0}$, an input sequence $(u^k_\mathcal{A})_{k\ge 0}$, and an output sequence $(y^k_\mathcal{A})_{k\ge 0}$. Specifically, the update at time step $k$ can be written as $x^{k+1}_\mathcal{A} = \mathcal{A}(x^k_\mathcal{A})$. Suppose we have another algorithm $\mathcal{B}$ such that $\mathcal{B} = \mathcal A^2$: repeating $\mathcal A$ twice gives the same result as applying $\mathcal B$ once. We call $\mathcal B$ a repetition of $\mathcal A$. Just as in the previous sections, algorithm repetition can be characterized by the transfer function. \begin{proposition}\label{prop6} Suppose $\mathcal{A}$ has state-space realization $(A, B, C, D)$. Then $\mathcal{B} = \mathcal{A}^2$ if and only if its transfer function has the form \begin{equation}\label{eq21} \left[\begin{array}{c c} C(zI - A^2)^{-1}AB +D & C(zI - A^2)^{-1}B \\ CA(zI - A^2)^{-1}AB+CB & CA(zI - A^2)^{-1}B+D \end{array}\right]. \end{equation} \end{proposition} \begin{proof} Sufficiency.
The update equations of $\mathcal{B}$ can be written as \begin{equation}\label{eq22} \begin{aligned} x^k_1 & = Ax^{k}_\mathcal{B}+Bu^k_1 \\ y^k_1 & = Cx^{k}_\mathcal{B}+Du^{k}_1 \\ x^{k+1}_\mathcal{B} & = Ax^k_1+Bu^k_2 \\ y^k_2 & = Cx^{k}_1+Du^{k}_2, \end{aligned} \end{equation} where $x^k_1$ is an intermediate state. After eliminating the intermediate state $x^k_1$, we arrive at a new system of update equations: \begin{equation}\label{eq23} \begin{aligned} x^{k+1}_\mathcal{B} & = A^2x^{k}_\mathcal{B}+ABu^k_1+Bu^k_2 \\ y^k_1 & = Cx^{k}_\mathcal{B}+Du^{k}_1 \\ y^k_2 & = CAx^{k}_\mathcal{B} + CB u^k_1 +Du^{k}_2. \end{aligned} \end{equation} The corresponding state-space realization has transfer function \begin{equation}\label{eq24} \left[\begin{array}{c|c c} A^2 & AB & B \\ \hline C & D & 0 \\ CA & CB & D \end{array}\right] = \left[\begin{array}{c c} C(zI - A^2)^{-1}AB +D & C(zI - A^2)^{-1}B \\ CA(zI - A^2)^{-1}AB+CB & CA(zI - A^2)^{-1}B+D \end{array}\right]. \end{equation} Necessity is provided by \cref{prop1} since the transfer function uniquely characterizes an algorithm. \end{proof} \renewcommand{\thealgorithm}{7.\arabic{algorithm}} \setcounter{algorithm}{0} \vspace{-1em} \noindent \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{Gradient method} \label{algo9} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1} = x^k - t\nabla f(x^k)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.46\textwidth} \begin{algorithm}[H] \centering \caption{Repetition of gradient method} \label{algo10} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$\xi^{k+1}_2 = \xi^k_1 - t\nabla f(\xi^k_1)$} \STATE{$\xi^{k+1}_1 = \xi^{k+1}_2 - t\nabla f(\xi^{k+1}_2)$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em} One example of repetition is given by the gradient method (algorithm \ref{algo9}) and its repetition (algorithm \ref{algo10}).
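Before turning to transfer functions, a direct simulation confirms the relation: one iteration of algorithm \ref{algo10} reproduces two iterations of algorithm \ref{algo9}. A small sketch (the quadratic test function and step size are illustrative choices, not from the text):

```python
# Check that one iteration of the repetition equals two iterations of the
# gradient method, on a test problem chosen for illustration:
# f(x) = 0.5 * x**2, so grad f(x) = x.
t = 0.1
grad_f = lambda x: x

def gradient_step(x):
    # One iteration of the gradient method: x <- x - t * grad f(x)
    return x - t * grad_f(x)

x0 = 3.0
# Two iterations of the gradient method ...
two_steps = gradient_step(gradient_step(x0))
# ... match one iteration of its repetition.
xi2 = x0 - t * grad_f(x0)
xi1 = xi2 - t * grad_f(xi2)
assert abs(xi1 - two_steps) < 1e-12
```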
Note that algorithm \ref{algo4} is algorithm \ref{algo9} with a specific parameter realization. The transfer functions of each algorithm are computed as $\hat H_{12}(z)$ and $\hat H_{13}(z)$ respectively: \[ \hat H_{12}(z) = \left[\begin{array}{ c | c } 1 &-t \\ \hline 1 & 0 \\ \end{array}\right] = - \frac{t}{z-1}, \qquad \hat H_{13}(z) = \left[\begin{array}{ c | c c } 1 & -t & -t \\ \hline 1 & 0 & 0 \\ 1 & -t & 0 \\ \end{array}\right] = \left[\begin{array}{ c c } - \frac{t}{z-1} & - \frac{t}{z-1} \\ - \frac{tz}{z-1} & - \frac{t}{z-1} \\ \end{array}\right]. \] Proposition \ref{prop6} reveals how the transfer function changes when an algorithm is repeated twice. In fact, we can identify an algorithm that has been repeated arbitrarily many times. Suppose algorithm $\mathcal{C}$ is $\mathcal{A}$ repeated $n \geq 1$ times: $\mathcal{C} = \mathcal{A}^n$. \begin{proposition}\label{prop7} Suppose $\mathcal{A}$ has state-space realization $(A, B, C, D)$. Then $\mathcal{C} = \mathcal{A}^n$ for $n\geq 1$ if and only if $\mathcal{C}$ has a transfer function given by \cref{eq26}. \end{proposition} \begin{proof} Sufficiency. We can represent $\mathcal{C}$ with state-space realization \begin{equation}\label{eq25} \left[\begin{array}{c :c c c c c} A^n & A^{n-1}B & \dots & \dots & AB &B \\ \hdashline C & D & 0 & 0 & \dots & 0 \\ CA & CB & D & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ CA^{n-1} & CA^{n-2}B &\dots & \dots & CB & D \\ \end{array}\right]. \end{equation} Note that $(zI - A^n)^{-1}A^l = A^l(zI - A^n)^{-1}$ for any $n$ and $l$. 
Let $\tilde{C} = C(zI - A^n)^{-1}$, and compute the transfer function of $\mathcal C$: \begin{equation}\label{eq26} \left[\begin{array}{c c c c c c} \tilde{C}A^{n-1}B + D & \tilde{C}A^{n-2}B & \dots & \dots & \tilde{C}AB & \tilde{C}B \\ \tilde{C}A^{n}B + CB & \tilde{C}A^{n-1}B + D & \dots & \dots & \tilde{C}A^2B & \tilde{C}AB \\ \vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\ \tilde{C}A^{2n-2}B + CA^{n-2}B & \tilde{C}A^{2n-3}B + CA^{n-3}B & \dots & \dots & \tilde{C}A^nB + CB & \tilde{C}A^{n-1}B + D \\ \end{array}\right]. \end{equation} Necessity is provided by \cref{prop1}, just as in the proof of \cref{prop6}. \end{proof} \titleparagraph{Remark} Proposition \ref{prop6} is the special case of \cref{prop7} with $n = 2$. The dimension of the transfer function of $\mathcal{C}$ is $n$ times that of $\mathcal{A}$; similarly, the dimensions of the input and output of $\mathcal{C}$ are $n$ times those of $\mathcal{A}$. At time step $k$, we have $y^k_\mathcal{C} = (y^{nk}_\mathcal{A}, \dots, y^{n(k+1)-1}_\mathcal{A})$ and $u^k_\mathcal{C} = (u^{nk}_\mathcal{A}, \dots, u^{n(k+1)-1}_\mathcal{A})$. Just as for oracle equivalence and cyclic permutations, the fixed points of an algorithm and its repetitions are related, as shown in \cref{prop13}. \begin{proposition}\label{prop13} If algorithm $\mathcal{A}$ converges to a fixed point $(y^\star, u^\star, x^\star)$, then its repetition $\mathcal{A}^n$ for $n\geq 1$ converges to the fixed point $(y', u', x^\star)$, with $y' = y^\star \otimes \mathbbm{1}^n$ and $u' = u^\star \otimes \mathbbm{1}^n$. Here $\otimes$ is the Kronecker product and $\mathbbm{1}^n$ is an $n$-dimensional vector whose entries are all ones. \end{proposition} A detailed proof is provided in \cref{apd4}.
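To make the formula concrete, we can specialize \cref{eq21} (the case $n = 2$) to the scalar realization of the gradient method, $(A, B, C, D) = (1, -t, 1, 0)$, and check that it reproduces $\hat H_{13}(z)$ from the example above. A sketch in exact rational arithmetic (both sides are low-degree rational functions, so agreement at several sample points is persuasive, though the symbolic computation is the actual argument):

```python
from fractions import Fraction as F

# Scalar realization of the gradient method: (A, B, C, D) = (1, -t, 1, 0),
# with the step size fixed to t = 1/10 for this check.
t = F(1, 10)

def H_repeated(z):
    """Transfer function (eq21) specialized to the scalar gradient method."""
    A, B, C, D = F(1), -t, F(1), F(0)
    inv = 1 / (z - A * A)                  # (zI - A^2)^{-1}, scalar case
    return [[C * inv * A * B + D, C * inv * B],
            [C * A * inv * A * B + C * B, C * A * inv * B + D]]

def H13(z):
    """Closed-form transfer function of the repeated gradient method."""
    return [[-t / (z - 1), -t / (z - 1)],
            [-t * z / (z - 1), -t / (z - 1)]]

# Exact agreement at several rational sample points.
for z in (F(2), F(3), F(-5), F(7, 2)):
    assert H_repeated(z) == H13(z)
```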
Since $\mathcal{A}^n$ repeats $\mathcal{A}$ $n$ times, the input and output at the fixed point of $\mathcal{A}^n$ are obtained by repeating the input and output at the corresponding fixed point of $\mathcal{A}$ $n$ times. Repetition gives us many more ways to combine algorithms into complex and unwieldy (but convergent) new methods. We can repeat a sequence of iterations from different algorithms and regard them together as a new algorithm. Suppose we choose $n$ algorithms $\mathcal{A}_1, \dots, \mathcal{A}_n$ with state-space realizations $(A_1, B_1, C_1, D_1), \dots, (A_n, B_n, C_n, D_n)$ and run one iteration of each as a single iteration of our new monster algorithm. For simplicity, suppose the state-space realization matrices $A_i, B_i, C_i, D_i$ have the same dimensions for all $i=1,\ldots,n$. (Otherwise the result is harder to write down, but still straightforward to compute.) Then we can represent the resulting monster algorithm with the state-space realization \begin{equation}\label{eq27} \left[\begin{array}{c | c c c c c} \prod_{i = n}^{1}A_i &\prod_{i = n}^{2}A_iB_{1} & \dots & \dots & A_nB_{n-1} & B_n \\ \hline C_1 & D_1 & 0 & 0 & \dots & 0 \\ C_2A_1 & C_2B_1 & D_2 & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\ C_n\prod_{i = n-1}^{1}A_i & C_n\prod_{i = n-1}^{2}A_iB_{1} &\dots & \dots & C_nB_{n-1} & D_n \\ \end{array}\right]. \end{equation} Hence one easy way to develop new publishable optimization algorithms --- until the present work --- has been to combine existing algorithms into a new monster algorithm with similar convergence properties but exciting new interpretations. Using our software, it is easy for reviewers to detect such algorithm surgery by searching over all pairs (or trios, etc.) of known algorithms.
This combinatorial search is still not too expensive, since the list of known algorithms is still rather small, and the number of algorithms that make up a monster algorithm is limited by the number of oracle calls in each iteration of the monster algorithm. \section{Algorithm conjugation}\label{conjugation} In this section, we introduce one last algorithm transformation, conjugation, which alters the oracle calls but results in algorithms that still bear a family resemblance. Algorithm conjugation is a natural operation in convex optimization, where some oracles are closely related to others: for example, when $f^\star(y) = \sup_x \{x^Ty - f(x)\}$ is the Fenchel conjugate of $f$ \cite[\S3]{fenchel1953convex}, we have \begin{itemize} \item $(\partial f)^{-1} = \partial f^\star$, and \item \emph{Moreau's identity:} $I - \textnormal{prox}_f = \textnormal{prox}_{f^\star}$ \end{itemize} [\citefirst{ryu2016primer}[\S2]{ryuyinconvex}]. We can rewrite any algorithm in terms of different, also easily computable, oracles using these identities. Consider a simple example: we will obfuscate the proximal gradient method (algorithm \ref{algo11}; [\citefirst[\S10]{doi:10.1137/1.9781611974997}{doi:10.1137/080716542}]) by rewriting it in terms of the conjugate of the original oracle $\textnormal{prox}_g$, using Moreau's identity, as algorithm \ref{algo12}~\cite{moreau:hal-01867187}.
\renewcommand{\thealgorithm}{8.\arabic{algorithm}} \setcounter{algorithm}{0} \vspace{-1em} \noindent \hfil \begin{minipage}[t]{0.42\textwidth} \begin{algorithm}[H] \centering \caption{Proximal gradient method} \label{algo11} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1} = \textnormal{prox}_{tg}(x^k - t\nabla f(x^k))$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \begin{minipage}[t]{0.5\textwidth} \begin{algorithm}[H] \centering \caption{Conjugate of proximal gradient method} \label{algo12} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$\xi^{k+1} = \xi^k - t\nabla f(\xi^k) - t\textnormal{prox}_{\frac{1}{t}g^\star}(\frac{1}{t}(\xi^k - t\nabla f(\xi^k)))$} \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfil \vspace{1em} The transfer function of the algorithm changes when we rewrite the algorithm to call a different oracle, such as calling $\textnormal{prox}_{f^\star}$ instead of $\textnormal{prox}_f$. Yet the sequence of states is preserved! Similarly, when we rewrite an algorithm to call $\partial f^\star$ instead of $\partial f$, the resulting algorithm is related to the original algorithm by swapping the input and output sequences. We say that algorithm $\mathcal B = \mathcal C_\kappa \mathcal A$ is a conjugate of algorithm $\mathcal A$ if algorithm $\mathcal B$ results from rewriting algorithm $\mathcal A$ to use the conjugates of the oracles in set $\kappa \subseteq [n]$, where $[n] = \{1, \ldots, n\}$ is the set of oracle indices for algorithm $\mathcal A$. Interestingly, conjugation preserves the state sequence but not the oracle sequence. We will also call two algorithms conjugates if they are oracle-equivalent to a conjugate pair. Our goal in this section is to describe how to identify conjugate algorithms. For simplicity in the remainder of this section, we suppose that all oracles are (sub)gradients. 
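Both identities used for conjugation, $(\partial f)^{-1} = \partial f^\star$ and Moreau's identity $I - \textnormal{prox}_f = \textnormal{prox}_{f^\star}$, are easy to sanity-check on concrete functions. A sketch (the test functions $f(x) = (a/2)x^2 + bx$ and $f(x) = |x|$ are illustrative assumptions, not from the text):

```python
import math

# 1) (grad f)^{-1} = grad f*: take f(x) = (a/2) x^2 + b x with a > 0,
#    whose conjugate is f*(y) = (y - b)^2 / (2a).
a, b = 2.0, -1.5
grad_f = lambda x: a * x + b
grad_fstar = lambda y: (y - b) / a
for x in (-3.0, 0.0, 0.25, 10.0):
    assert abs(grad_fstar(grad_f(x)) - x) < 1e-12

# 2) Moreau's identity I - prox_f = prox_{f*}: take f(x) = |x|, so prox_f
#    is soft-thresholding (by 1), and f* is the indicator of [-1, 1],
#    whose prox is the projection onto [-1, 1].
prox_f = lambda v: math.copysign(max(abs(v) - 1.0, 0.0), v)
prox_fstar = lambda v: min(max(v, -1.0), 1.0)
for v in (-2.0, -0.3, 0.0, 0.4, 1.5):
    assert abs((v - prox_f(v)) - prox_fstar(v)) < 1e-12
```

The second identity, in its scaled form, is exactly the substitution used to pass from algorithm \ref{algo11} to algorithm \ref{algo12}.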
To detect equivalence of algorithms involving prox using methods presented here, we may write the state-space realization of the algorithm in terms of (sub)gradients: \[ u = \textnormal{prox}_f(y) \quad \iff \quad y \in u + \partial f(u). \] In fact, our software uses this method to check algorithm conjugation. Restricting to (sub)gradients, we see from the identity $(\partial f)^{-1} = \partial f^\star$ that algorithm conjugation swaps the input and output of an algorithm: the algorithm after conjugation takes the output of the original algorithm as input and produces the input of the original one as output. As shown in \cref{fig10}, the input sequence of the algorithm after conjugation is the original output sequence and the output sequence in the algorithm after conjugation is the original input sequence. \begin{figure}[tbhp] \centering \begin{tikzpicture}[>=latex] \node[scale = 0.75][box6] at (0,1.9) (algo1) {$L$}; \node[scale = 0.75] at (0,1.7) (ref2) {}; \node[scale = 0.75] at (0,2.1) (ref1) {}; \node[scale = 0.75][box3, left of = ref1 , node distance = 8em] (state0) {$x^{k-1}$}; \node[scale = 0.75][left of = ref1, node distance = 13em] (init1) {\ldots}; \node[scale = 0.75][box3, right of = ref1, node distance = 8em] (state1) {$x^{k}$}; \node[scale = 0.75][box] at (0, 1.1) (oracle1) {$\phi$}; \node[scale = 0.75][box3, right of = oracle1, node distance = 6em] (output1) {$y^{k-1}$}; \node[scale = 0.75, left of= oracle1, node distance = 6em][box3] (input1) {$u^{k-1}$}; \node[scale = 0.75][box6, right of = algo1, node distance = 16em] (algo2) {$L$}; \draw[->] (init1) -- (state0); \draw[->] (state0) -- (ref1-|algo1.west); \draw[->] (state1-|algo1.east) -- (state1); \draw[->] (input1) |- (ref2-|algo1.west); \draw[->] (output1) -- (oracle1); \draw[->] (oracle1) -- (input1); \draw[->] (ref2-|algo1.east) -| (output1); \draw[->] (state1) -- (state1-|algo2.west); \node[scale = 0.75][box3, right of = state1, node distance = 16em] (state2) {$x^{k+1}$}; \node[scale = 
0.75][box, right of = oracle1, node distance = 16em] (oracle2) {$\phi$}; \node[scale = 0.75][box3, right of = oracle2, node distance = 6em] (output2) {$y^{k}$}; \node[scale = 0.75, left of= oracle2, node distance = 6em][box3] (input2) {$u^{k}$}; \node[scale = 0.75][box6, right of = algo2, node distance = 16em] (algo3) {$L$}; \draw[->] (state1-|algo2.east) -- (state2); \draw[->] (input2) |- (ref2-|algo2.west); \draw[->] (output2) -- (oracle2); \draw[->] (oracle2) -- (input2); \draw[->] (ref2-|algo2.east) -| (output2); \draw[->] (state2) -- (state1-|algo3.west); \node[scale = 0.75][right of = state2 , node distance = 16em] (end1) {\ldots}; \node[scale = 0.75][box, right of = oracle2, node distance = 16em] (oracle3) {$\phi$}; \node[scale = 0.75][box3, right of = oracle3, node distance = 6em] (output3) {$y^{k+1}$}; \node[scale = 0.75, left of= oracle3, node distance = 6em][box3] (input3) {$u^{k+1}$}; \draw[->] (state1-|algo3.east) -- (end1); \draw[->] (input3) |- (ref2-|algo3.west); \draw[->] (output3) -- (oracle3); \draw[->] (oracle3) -- (input3); \draw[->] (ref2-|algo3.east) -| (output3); \node[scale = 0.75][box6] at (0,-0.8) (algo10) {$\tilde{L}$}; \node[scale = 0.75] at (0,-1) (ref10) {}; \node[scale = 0.75] at (0,-0.6) (ref20) {}; \node[scale = 0.75][box3, left of = ref10 , node distance = 8em] (state00) {$\tilde{x}^{k-1}$}; \node[scale = 0.75][left of = ref10, node distance = 13em] (init10) {\ldots}; \node[scale = 0.75][box3, right of = ref10, node distance = 8em] (state10) {$\tilde{x}^{k}$}; \node[scale = 0.75][box] at (0, 0) (oracle10) {$\phi^{-1}$}; \node[scale = 0.75][box3, right of = oracle10, node distance = 6em] (output10) {$\tilde{y}^{k-1}$}; \node[scale = 0.75, left of= oracle10, node distance = 6em][box3] (input10) {$\tilde{u}^{k-1}$}; \node[scale = 0.75][box6, right of = algo10, node distance = 16em] (algo20) {$\tilde{L}$}; \draw[->] (init10) -- (state00); \draw[->] (state00) -- (ref10-|algo10.west); \draw[->] (state10-|algo10.east) -- (state10); 
\draw[->] (input10) |- (ref20-|algo10.west); \draw[->] (output10) -- (oracle10); \draw[->] (oracle10) -- (input10); \draw[->] (ref20-|algo10.east) -| (output10); \draw[->] (state10) -- (state10-|algo20.west); \node[scale = 0.75][box3, right of = state10 , node distance = 16em] (state20) {$\tilde{x}^{k+1}$}; \node[scale = 0.75][box, right of = oracle10, node distance = 16em] (oracle20) {$\phi^{-1}$}; \node[scale = 0.75][box3, right of = oracle20, node distance = 6em] (output20) {$\tilde{y}^{k}$}; \node[scale = 0.75, left of= oracle20, node distance = 6em][box3] (input20) {$\tilde{u}^{k}$}; \node[scale = 0.75][box6, right of = algo20, node distance = 16em] (algo30) {$\tilde{L}$}; \draw[->] (state10-|algo20.east) -- (state20); \draw[->] (input20) |- (ref20-|algo20.west); \draw[->] (output20) -- (oracle20); \draw[->] (oracle20) -- (input20); \draw[->] (ref20-|algo20.east) -| (output20); \draw[->] (state20) -- (state10-|algo30.west); \node[scale = 0.75][right of = state20, node distance = 16em] (end10) {\ldots}; \node[scale = 0.75][box, right of = oracle20, node distance = 16em] (oracle30) {$\phi^{-1}$}; \node[scale = 0.75][box3, right of = oracle30, node distance = 6em] (output30) {$\tilde{y}^{k+1}$}; \node[scale = 0.75, left of= oracle30, node distance = 6em][box3] (input30) {$\tilde{u}^{k+1}$}; \draw[->] (state10-|algo30.east) -- (end10); \draw[->] (input30) |- (ref20-|algo30.west); \draw[->] (output30) -- (oracle30); \draw[->] (oracle30) -- (input30); \draw[->] (ref20-|algo30.east) -| (output30); \draw[-{Straight Barb[left]}] ($(input1.south) + (0.02,0)$) -- ($(output10.north) + (0.02,0.02)$); \draw[-{Straight Barb[left]}] ($(output10.north) + (-0.02,0)$) -- ($(input1.south) + (-0.02,-0.02)$); \draw[-{Straight Barb[left]}] ($(input2.south) + (0.02,0)$) -- ($(output20.north) + (0.02,0.02)$); \draw[-{Straight Barb[left]}] ($(output20.north) + (-0.02,0)$) -- ($(input2.south) + (-0.02,-0.02)$); \draw[-{Straight Barb[left]}] ($(input3.south) + (0.02,0)$) -- 
($(output30.north) + (0.02,0.02)$); \draw[-{Straight Barb[left]}] ($(output30.north) + (-0.02,0)$) -- ($(input3.south) + (-0.02,-0.02)$); \draw[-{Straight Barb[right]}] ($(output1.south) + (-0.02,0)$) -- ($(input10.north) + (-0.02,0.02)$); \draw[-{Straight Barb[right]}] ($(input10.north) + (0.02,0)$) -- ($(output1.south) + (0.02,-0.02)$); \draw[-{Straight Barb[right]}] ($(output2.south) + (-0.02,0)$) -- ($(input20.north) + (-0.02,0.02)$); \draw[-{Straight Barb[right]}] ($(input20.north) + (0.02,0)$) -- ($(output2.south) + (0.02,-0.02)$); \draw[-{Straight Barb[right]}] ($(output3.south) + (-0.02,0)$) -- ($(input30.north) + (-0.02,0.02)$); \draw[-{Straight Barb[right]}] ($(input30.north) + (0.02,0)$) -- ($(output3.south) + (0.02,-0.02)$); \end{tikzpicture} \caption{Unrolled block-diagram representation of algorithm conjugation. Here $\tilde \phi = \phi^{-1}$.} \label{fig10} \end{figure} First, let's introduce a bit of standard notation. Suppose an algorithm $\mathcal{A}$ contains $n$ oracle calls in each iteration. The cardinality of a subset $\kappa \subseteq [n]$ is $ \left | \kappa \right |$ and the complement is $\bar{\kappa}=[n] \setminus \kappa$. For any matrix $M\in \mathbb{R}^{n\times n}$, $M[\kappa, \nu]$ is the sub-matrix of $M$ whose rows and columns are indexed by $\kappa$ and $\nu \subseteq [n]$, respectively. We write $M[\kappa, \kappa]$ as $M[\kappa]$ for simplicity. For $i \in [n]$, the conjugation operator $\mathcal C_{i}$ conjugates oracle $i$: it replaces the $i$th oracle by its inverse. The operator $\mathcal C_{\kappa}$ conjugates all oracles in the set $\kappa \subseteq [n]$ to produce the conjugate algorithm $\mathcal C_{\kappa}\mathcal{A}$. \begin{proposition}\label{prop8} Suppose $\mathcal{A}$ has state-space realization $(A, B, C, D)$ and transfer function $\hat H(z)$, and $D[\kappa]$ is invertible. 
Then $\mathcal{B} = \mathcal C_{\kappa}\mathcal{A}$ if and only if the transfer function $\hat H'(z)$ of $\mathcal{B}$ satisfies \begin{equation}\label{eq28} P{\hat H'(z)}P^{T}= \begin{bmatrix} \hat H[\kappa]^{-1}(z) & -\hat H[\kappa]^{-1}(z)\hat H[\kappa, \bar{\kappa}](z)\\ \hat H[\bar{\kappa}, \kappa](z)\hat H[\kappa]^{-1}(z) & \hat H[\bar{\kappa}](z)-\hat H[\bar{\kappa}, \kappa](z)\hat H[\kappa]^{-1}(z)\hat H[\kappa, \bar{\kappa}](z) \end{bmatrix}. \end{equation} Here $P$ is a permutation matrix that swaps rows and columns so indices in $\kappa$ come first: \begin{equation}\label{eq29} P\hat{H}(z)P^{T}= \begin{bmatrix} \hat H[\kappa](z) & \hat H[\kappa, \bar{\kappa}](z)\\ \hat H[\bar{\kappa}, \kappa](z) & \hat H[\bar{\kappa}](z) \end{bmatrix}. \end{equation} \end{proposition} \begin{proof} Sufficiency. Without loss of generality, suppose the oracles $\kappa = \{1,\ldots,|\kappa|\}$ appear first, \begin{displaymath} \hat H(z) = \begin{bmatrix} \hat H[\kappa](z) & \hat H[\kappa, \bar{\kappa}](z)\\ \hat H[\bar{\kappa}, \kappa](z) & \hat H[\bar{\kappa}](z) \end{bmatrix}, \qquad D = \begin{bmatrix} D[\kappa] & D[\kappa, \bar{\kappa}]\\ D[\bar{\kappa}, \kappa] & D[\bar{\kappa}] \end{bmatrix}, \end{displaymath} and consequently the permutation matrix $P$ is the identity. We obtain the desired results from \cref{eqp13} by setting $D_{11} = D[\kappa]$, $\hat H_{11}(z) = \hat H[\kappa](z)$, $\hat H_{12}(z) = \hat H[\kappa, \bar{\kappa}](z)$, $\hat H_{21}(z) = \hat H[\bar{\kappa}, \kappa](z)$, and $\hat H_{22}(z) = \hat H[\bar{\kappa}](z)$. Necessity is provided by \cref{prop1} as the transfer function uniquely characterizes oracle-equivalent algorithms. \end{proof} From \cref{prop8}, the transfer function $\hat H(z)$ of algorithm $\mathcal{A}$ is partially inverted when the algorithm is conjugated by $\mathcal C_{\kappa}$. 
The new transfer function $\hat H'(z)$ results from applying the Sweep operator with indices $\kappa$ to $\hat H(z)$ \cite{10.2307/2683825, TSATSOMEROS2000151}. Considering the input and output sequences of each oracle separately: for any oracle in $\kappa$, the input sequence of $\mathcal C_{\kappa}\mathcal{A}$ is the original output sequence of $\mathcal{A}$, and the output sequence of $\mathcal C_{\kappa}\mathcal{A}$ is the original input sequence of $\mathcal{A}$. The input and output sequences of the oracles in $[n] \setminus \kappa$ remain unchanged in the new algorithm $\mathcal C_{\kappa}\mathcal{A}$. Proposition \ref{prop8} assumes that $D[\kappa]$ is invertible. In fact, $\mathcal B = \mathcal C_{\kappa}\mathcal A$ is a causal algorithm if and only if $D[\kappa]$ is invertible. We need not condition on causality in the proposition, since any algorithm that can be written down as a set of update equations is necessarily causal. Now we consider two special cases: conjugating (a) a single oracle, or (b) all of the oracles. \begin{corollary}\label{coro1} Consider algorithm $\mathcal{A}$ with state-space realization $(A, B, C, D)$ and transfer function $\hat H(z) \in \mathbb{R}^{n\times n}$. \begin{enumerate}[(a)] \item Suppose $D_{kk} \neq 0$ for some $k \in [n]$. Then the new transfer function $\hat H'(z)$ of $\mathcal C_{k}\mathcal{A}$ can be expressed entrywise as \begin{equation}\label{eq33} h'_{ij}(z) =\begin{cases} 1/h_{kk}(z) & i=k,~j=k \\ -h_{kj}(z)/h_{kk}(z)& i=k,~j\neq k \\ h_{ik}(z)/h_{kk}(z)& i\neq k,~j = k \\ h_{ij}(z)-h_{ik}(z)h_{kj}(z)/h_{kk}(z)& i\neq k,~j\neq k, \end{cases} \end{equation} where $h_{ij}(z)$ and $h'_{ij}(z)$, $1\leq i,j \leq n$, denote the entries of $\hat H(z)$ and $\hat H'(z)$, respectively. \item Suppose $D$ is invertible. Then the transfer function $\hat H'(z)$ of $\mathcal C_{[n]}\mathcal{A}$ satisfies $\hat H'(z) = \hat H^{-1}(z)$.
\end{enumerate} \end{corollary} \paragraph{Proximal gradient} Now we can revisit algorithms \ref{algo11} and \ref{algo12} and show that they are conjugate. The transfer functions of algorithms \ref{algo11} and \ref{algo12} are computed as $\hat H_{14}(z)$ and $\hat H_{15}(z)$ below. Note that the state-space realizations are written in terms of (sub)gradients. From \cref{coro1}, they are conjugate with respect to the second oracle. \begin{displaymath} \hat H_{14}(z) = \left[\begin{array}{ c c } -\frac{t}{z-1} & -\frac{t}{z-1} \\ -\frac{tz}{z-1} & -\frac{tz}{z-1} \\ \end{array}\right], \qquad \hat H_{15}(z) = \left[\begin{array}{ c c } 0 & \frac{1}{z} \\ -1 & -\frac{z-1}{tz} \\ \end{array}\right] \end{displaymath} \begin{algorithm}[H] \caption{Chambolle-Pock method} \centering \label{algo14} \begin{algorithmic} \FOR{$k=0, 1, 2,\ldots$} \STATE{$x^{k+1}_1 = \textnormal{prox}_{\tau f}(x^k_1 - \tau x^k_2)$} \STATE{$x^{k+1}_2 = \textnormal{prox}_{\sigma g^{\star}}(x^k_2 + \sigma (2x^{k+1}_1 - x^k_1))$} \ENDFOR \end{algorithmic} \end{algorithm} \paragraph{DR and Chambolle-Pock} Another important example is the relation between DR (algorithm \ref{algo7}) and the primal-dual optimization method proposed by Chambolle and Pock (algorithm \ref{algo14}~[\citefirst{chambolle2011first}{o2018equivalence}]). Note that algorithm \ref{algo7} is parameterized by the parameter $t$ and algorithm \ref{algo14} by the parameters $\tau$ and $\sigma$. The transfer functions of algorithms \ref{algo7} and \ref{algo14}, again with state-space realizations written in terms of (sub)gradients, are provided below as $\hat H_{10}(z)$ and $\hat H_{16}(z)$ respectively. By \cref{coro1}, we know that they are conjugate with respect to the second oracle if $\tau = t$ and $\sigma = 1/t$. So DR and the Chambolle-Pock method (with $\tau = t$ and $\sigma = 1/t$) are conjugate. We will say more about how to discover the correct parameter restriction in section \ref{package}.
\begin{displaymath} \hat H_{10}(z) = \begin{bmatrix} -\frac{tz}{z-1} & -\frac{t}{z-1}\\ \frac{t(1-2z)}{z-1} & -\frac{tz}{z-1} \end{bmatrix}, \quad \hat H_{16}(z) = \begin{bmatrix} -\frac{\tau z(z-1)}{2\sigma \tau z - \sigma \tau + z^2 -2z + 1 } & \frac{\sigma \tau z}{2\sigma \tau z - \sigma \tau + z^2 -2z + 1 }\\ \frac{\sigma \tau z(1-2z)}{2\sigma \tau z - \sigma \tau + z^2 -2z + 1 } & -\frac{\sigma z(z-1)}{2\sigma \tau z - \sigma \tau + z^2 -2z + 1 }\end{bmatrix} \xrightarrow[\sigma = \frac{1}{t}]{ \tau = t} \begin{bmatrix} \frac{t(1-z)}{z} & \frac{1}{z}\\ \frac{1-2z}{z} & \frac{1-z}{tz} \end{bmatrix} \end{displaymath} The fixed points of an algorithm and its conjugate are related as stated in \cref{prop14}. \begin{proposition}\label{prop14} If an algorithm $\mathcal{A}$ converges to a fixed point $(y[\kappa]^\star, y[\bar{\kappa}]^\star, u[\kappa]^\star, u[\bar{\kappa}]^\star, x^\star)$, then its conjugate $\mathcal{B}= \mathcal C_{\kappa}\mathcal A$ converges to the fixed point $(u[\kappa]^\star, y[\bar{\kappa}]^\star, y[\kappa]^\star, u[\bar{\kappa}]^\star, x^\star)$. \end{proposition} A detailed proof is provided in \cref{apd5}. Intuitively, since conjugation inverts the input-output map between $u[\kappa]$ and $y[\kappa]$, the corresponding parts of the fixed point are swapped. \begin{proposition}\label{prop10} Suppose algorithm $\mathcal{A}$ has state-space realization $(A, B, C, D)$, where $D_{ii} \neq 0$ and $D_{jj} \neq 0$. Then $\mathcal C_i\mathcal C_j\mathcal{A} = \mathcal C_j\mathcal C_i\mathcal{A} = \mathcal C_{\{ij\}}\mathcal{A}$. \end{proposition} \begin{proof} By \cref{coro1}, if $D_{ii} \neq 0$ and $D_{jj} \neq 0$, then $\mathcal C_i\mathcal A$ and $\mathcal C_j\mathcal A$ are causal. Note that the entries above the diagonal of $D$ are all zero because $\mathcal A$ is causal. Thus, $\det(D[\{ij\}]) = D_{ii}D_{jj} \neq 0$ and $\mathcal C_{\{ij\}}\mathcal A$ is causal.
The commutativity of the Sweep operator gives the result $\mathcal C_i\mathcal C_j\mathcal A = \mathcal C_j\mathcal C_i\mathcal A = \mathcal C_{\{ij\}}\mathcal A$ \cite{10.2307/2683825,TSATSOMEROS2000151}. \end{proof} Proposition \ref{prop10} states that conjugation of different oracles commutes. This justifies our notation $\mathcal C_\kappa$ for a set $\kappa$, as the order of the oracles in $\kappa$ is irrelevant. Further, conjugation and cyclic permutation also commute; see \cref{prop11} and its proof in \cref{apd6}. \paragraph{DR and ADMM} We showed in subsection \ref{app-shift} that DR (algorithm \ref{algo7}) and ADMM (algorithm \ref{algo8}) are related by permutation for a certain choice of parameters. Here, we show that they are related by permutation and conjugation (in either order, as they commute) for a different choice of parameters: $A = I, B = I, c = 0, \rho = t$ for ADMM. The transfer function of this special parameterization of ADMM is shown as $\hat H_{17}(z)$ below. The relations between DR and ADMM can then be illustrated as follows; recall that $\hat H_{10}(z)$ is the transfer function of DR. Observe that different choices of algorithm parameters can lead to different relations between the algorithms. \begin{displaymath} \hat H_{17}(z) = \begin{bmatrix} -\frac{z}{t(z-1)} & \frac{2z-1}{tz(z-1)}\\ \frac{z}{t(z-1)} & -\frac{z}{t(z-1)} \end{bmatrix} \xrightarrow{\mathcal C_{12}} \begin{bmatrix} -\frac{tz}{z-1} & \frac{t(1-2z)}{z(z-1)}\\ -\frac{tz}{z-1} & -\frac{tz}{z-1} \end{bmatrix} \xrightarrow{ P_{21}} \begin{bmatrix} -\frac{tz}{z-1} & -\frac{t}{z-1}\\ \frac{t(1-2z)}{z-1} & -\frac{tz}{z-1} \end{bmatrix}= \hat H_{10}(z) \end{displaymath} This commutativity is important for identifying relations between algorithms efficiently. For example, suppose we would like to identify the relations between algorithms \ref{algo7} and \ref{algo8}, with transfer functions $\hat H_{10}(z)$ and $\hat H_{17}(z)$.
We can first perform conjugation and then permutation on algorithm \ref{algo7}, and test equivalence between the resulting algorithm and algorithm \ref{algo8}. We need not try permutation followed by conjugation: since the two operations commute, both orders lead to the same transfer function. We have already shown several relations between DR (algorithm \ref{algo7}), ADMM (algorithm \ref{algo8}), and the Chambolle-Pock method (algorithm \ref{algo14}) using conjugation and permutation. We represent these relations in \cref{fig11}. The figure relates eight different algorithms. Starting from DR, which contains two oracles, permutation yields two possible algorithms, and conjugating any of the four subsets of the two oracles yields four variants of each. Therefore, in total there are $2 \times 4 = 8$ possible different algorithms, including both ADMM and Chambolle-Pock. In the figure, $\mathcal C_1$ and $\mathcal C_2$ denote conjugation with respect to the first and second oracles respectively, $P$ denotes permutation, and we can move between algorithms by applying the transformation on each edge, in either direction, as each transformation is an involution. \begin{figure}[tbhp] \centering \includegraphics[scale=0.55]{dr-group} \caption{Connections between DR, ADMM, and the Chambolle-Pock method.} \label{fig11} \end{figure} \section{Linnaeus}\label{package} We have presented a framework for detecting equivalence between iterative algorithms for continuous optimization. In this section, we introduce a software package called \lin{} that implements these ideas. This package can be used by researchers (or peer reviewers) who wish to understand the novelty of new algorithmic ideas and their connections to existing algorithms. The input is an algorithm described in user-friendly syntax with variables, parameters, functions, oracles, and update equations.
The system will automatically translate the input algorithm into a canonical form (the transfer function) and use the canonical form to identify whether the algorithm is equivalent to any reference algorithm, possibly after transformations such as permutation, conjugation, or repetition. The software can also serve as a search engine, identifying connections from the input algorithm to existing algorithms in the literature that appear in \lin{}'s algorithm library. \subsection{Illustrative examples} We use \lin{} to identify the relations between algorithms presented previously in the paper. These examples demonstrate the power and simplicity of \lin{}. Code for these examples can be found at \url{https://github.com/QCGroup/linnaeus}. \titleparagraph{\cref{algo1} and \cref{algo2}} The following code identifies that algorithms \ref{algo1} and \ref{algo2} are oracle-equivalent. We input algorithms \ref{algo1} and \ref{algo2} with variables, oracles, and update equations, and parse them into state-space realizations. Then we check oracle equivalence using the function \texttt{is\_equivalent}. The system returns \texttt{True}, consistent with our analytical results in sections \ref{example} and \ref{charac-oracle}.
\begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\char`\#{} define Algorithm 2.1} \PY{n}{algo1} \PY{o}{=} \PY{n}{Algorithm}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{Algorithm 2.1}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} add oracle gradient of f to Algorithm 2.1} \PY{n}{gradf} \PY{o}{=} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}oracle}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{gradf}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} add variables x1, x2, and x3 to Algorithm 2.1} \PY{n}{x1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{x3} \PY{o}{=} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}var}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x1}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x2}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x3}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} add update equations} \PY{c+c1}{\char`\#{} x3 \char`\<{}\char`\-{} 2x1 \char`\-{} x2 } \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x3}\PY{p}{,} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{x1} \PY{o}{\char`\-{}} \PY{n}{x2}\PY{p}{)} \PY{c+c1}{\char`\#{} x2 \char`\<{}\char`\-{} x1} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{x1}\PY{p}{)} \PY{c+c1}{\char`\#{} x1 \char`\<{}\char`\-{} x3 \char`\-{} 1/10*gradf(x3)} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{x3} \PY{o}{\char`\-{}} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{10}\PY{o}{*}\PY{n}{gradf}\PY{p}{(}\PY{n}{x3}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} parse Algorithm 2.1, translate it into canonical form} \PY{n}{algo1}\PY{o}{.}\PY{n}{parse}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- Parse Algorithm 2.1. 
State-space realization: \end{Verbatim} $\quad \qquad \begin{aligned} \left[\begin{matrix}x^{+}_{1}\\x^{+}_{2}\\x^{+}_{3}\end{matrix}\right] & = \left[\begin{matrix}2 & -1 & 0\\1 & 0 & 0\\2 & -1 & 0\end{matrix}\right] \left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]+\left[\begin{matrix}-0.1\\0\\0\end{matrix}\right] \left[\begin{matrix}\operatorname{gradf}{\left(y_{0} \right)}\end{matrix}\right] \\ \left[\begin{matrix}y_{0}\end{matrix}\right] & = \left[\begin{matrix}2 & -1 & 0\end{matrix}\right] \left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]+\left[\begin{matrix}0\end{matrix}\right] \left[\begin{matrix}\operatorname{gradf}{\left(y_{0} \right)}\end{matrix}\right] \end{aligned}$ \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- \end{Verbatim} \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{algo2} \PY{o}{=} \PY{n}{Algorithm}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{Algorithm 2.2}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{xi1}\PY{p}{,} \PY{n}{xi2}\PY{p}{,} \PY{n}{xi3} \PY{o}{=} \PY{n}{algo2}\PY{o}{.}\PY{n}{add\char`\_{}var}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi1}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi2}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi3}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{gradf} \PY{o}{=} \PY{n}{algo2}\PY{o}{.}\PY{n}{add\char`\_{}oracle}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{gradf}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} xi3 \char`\<{}\char`\-{} xi1} \PY{n}{algo2}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi3}\PY{p}{,} \PY{n}{xi1}\PY{p}{)} \PY{c+c1}{\char`\#{} xi1 \char`\<{}\char`\-{} xi1 \char`\-{} xi2 \char`\-{} 1/5*gradf(xi1)} \PY{n}{algo2}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi1}\PY{p}{,} \PY{n}{xi1} \PY{o}{\char`\-{}} \PY{n}{xi2} 
\PY{o}{\char`\-{}} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{5}\PY{o}{*}\PY{n}{gradf}\PY{p}{(}\PY{n}{xi3}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} xi2 \char`\<{}\char`\-{} xi2 + 1/10*gradf(xi3)} \PY{n}{algo2}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi2}\PY{p}{,} \PY{n}{xi2} \PY{o}{+} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{10}\PY{o}{*}\PY{n}{gradf}\PY{p}{(}\PY{n}{xi3}\PY{p}{)}\PY{p}{)} \PY{n}{algo2}\PY{o}{.}\PY{n}{parse}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- Parse Algorithm 2.2. State-space realization: \end{Verbatim} $\quad \qquad \begin{aligned} \left[\begin{matrix}\xi^{+}_{1}\\\xi^{+}_{2}\\\xi^{+}_{3}\end{matrix}\right] & = \left[\begin{matrix}1 & -1 & 0\\0 & 1 & 0\\1 & 0 & 0\end{matrix}\right] \left[\begin{matrix}\xi_{1}\\\xi_{2}\\\xi_{3}\end{matrix}\right]+\left[\begin{matrix}-0.2\\0.1\\0\end{matrix}\right] \left[\begin{matrix}\operatorname{gradf}{\left(y_{0} \right)}\end{matrix}\right] \\ \left[\begin{matrix}y_{0}\end{matrix}\right] & = \left[\begin{matrix}1 & 0 & 0\end{matrix}\right] \left[\begin{matrix}\xi_{1}\\\xi_{2}\\\xi_{3}\end{matrix}\right]+\left[\begin{matrix}0\end{matrix}\right] \left[\begin{matrix}\operatorname{gradf}{\left(y_{0} \right)}\end{matrix}\right] \end{aligned}$ \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- \end{Verbatim} \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\char`\#{} check oracle equivalence} \PY{n}{lin}\PY{o}{.}\PY{n}{is\char`\_{}equivalent}\PY{p}{(}\PY{n}{algo1}\PY{p}{,} \PY{n}{algo2}\PY{p}{,} \PY{n}{verbose} \PY{o}{=} \PY{k+kc}{True}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- 
Algorithm 2.1 is equivalent to Algorithm 2.2. -------------------------------------------------------------- True \end{Verbatim} \titleparagraph{\cref{algo5} and \cref{algo6}} The second example identifies that algorithms \ref{algo5} and \ref{algo6} are shift-equivalent. We input and parse the algorithms into state-space realizations and then check shift equivalence (cyclic permutation) using the function \texttt{is\_permutation}. The system returns \texttt{True}, consistent with results in sections \ref{example} and \ref{charac-shift}. \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{algo5} \PY{o}{=} \PY{n}{Algorithm}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{Algorithm 2.5}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{x1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{x3} \PY{o}{=} \PY{n}{algo5}\PY{o}{.}\PY{n}{add\char`\_{}var}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x1}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x2}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x3}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{proxf}\PY{p}{,} \PY{n}{proxg} \PY{o}{=} \PY{n}{algo5}\PY{o}{.}\PY{n}{add\char`\_{}oracle}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{proxf}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{proxg}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} x1 \char`\<{}\char`\-{} proxf(x3)} \PY{n}{algo5}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{proxf}\PY{p}{(}\PY{n}{x3}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} x2 \char`\<{}\char`\-{} proxg(2x1 \char`\-{} x3)} \PY{n}{algo5}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{proxg}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{x1} \PY{o}{\char`\-{}} \PY{n}{x3}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} x3 \char`\<{}\char`\-{} x3 + x2 \char`\-{} x1} 
\PY{n}{algo5}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x3}\PY{p}{,} \PY{n}{x3} \PY{o}{+} \PY{n}{x2} \PY{o}{\char`\-{}} \PY{n}{x1}\PY{p}{)} \PY{n}{algo5}\PY{o}{.}\PY{n}{parse}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- Parse Algorithm 2.5. State-space realization: \end{Verbatim} $\quad \qquad \begin{aligned} \left[\begin{matrix}x^{+}_{1}\\x^{+}_{2}\\x^{+}_{3}\end{matrix}\right] & = \left[\begin{matrix}0 & 0 & 0\\0 & 0 & 0\\0 & 0 & 1\end{matrix}\right] \left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]+\left[\begin{matrix}1 & 0\\0 & 1\\-1 & 1\end{matrix}\right] \left[\begin{matrix}\operatorname{proxf}{\left(y_{0} \right)}\\\operatorname{proxg}{\left(y_{1} \right)}\end{matrix}\right] \\ \left[\begin{matrix}y_{0}\\y_{1}\end{matrix}\right] & = \left[\begin{matrix}0 & 0 & 1\\0 & 0 & -1\end{matrix}\right] \left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]+\left[\begin{matrix}0 & 0\\2 & 0\end{matrix}\right] \left[\begin{matrix}\operatorname{proxf}{\left(y_{0} \right)}\\\operatorname{proxg}{\left(y_{1} \right)}\end{matrix}\right] \end{aligned}$ \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- \end{Verbatim} \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{algo6} \PY{o}{=} \PY{n}{Algorithm}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{Algorithm 2.6}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{xi1}\PY{p}{,} \PY{n}{xi2} \PY{o}{=} \PY{n}{algo6}\PY{o}{.}\PY{n}{add\char`\_{}var}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi1}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi2}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{proxf}\PY{p}{,} \PY{n}{proxg} \PY{o}{=}
\PY{n}{algo6}\PY{o}{.}\PY{n}{add\char`\_{}oracle}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{proxf}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{proxg}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} xi1 \char`\<{}\char`\-{} proxg(\char`\-{}xi1 + 2xi2) + xi1 \char`\-{} xi2} \PY{n}{algo6}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi1}\PY{p}{,} \PY{n}{proxg}\PY{p}{(}\PY{o}{\char`\-{}}\PY{n}{xi1} \PY{o}{+} \PY{l+m+mi}{2}\PY{o}{*}\PY{n}{xi2}\PY{p}{)} \PY{o}{+} \PY{n}{xi1} \PY{o}{\char`\-{}} \PY{n}{xi2}\PY{p}{)} \PY{c+c1}{\char`\#{} xi2 \char`\<{}\char`\-{} proxf(xi1)} \PY{n}{algo6}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi2}\PY{p}{,} \PY{n}{proxf}\PY{p}{(}\PY{n}{xi1}\PY{p}{)}\PY{p}{)} \PY{n}{algo6}\PY{o}{.}\PY{n}{parse}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- Parse Algorithm 2.6. State-space realization: \end{Verbatim} $\quad \qquad \begin{aligned} \left[\begin{matrix}\xi^{+}_{1}\\\xi^{+}_{2}\end{matrix}\right] & = \left[\begin{matrix}1 & -1\\0 & 0\end{matrix}\right] \left[\begin{matrix}\xi_{1}\\\xi_{2}\end{matrix}\right]+\left[\begin{matrix}1 & 0\\0 & 1\end{matrix}\right] \left[\begin{matrix}\operatorname{proxg}{\left(y_{0} \right)}\\\operatorname{proxf}{\left(y_{1} \right)}\end{matrix}\right] \\ \left[\begin{matrix}y_{0}\\y_{1}\end{matrix}\right] & = \left[\begin{matrix}-1 & 2\\1 & -1\end{matrix}\right] \left[\begin{matrix}\xi_{1}\\\xi_{2}\end{matrix}\right]+\left[\begin{matrix}0 & 0\\1 & 0\end{matrix}\right] \left[\begin{matrix}\operatorname{proxg}{\left(y_{0} \right)}\\\operatorname{proxf}{\left(y_{1} \right)}\end{matrix}\right] \end{aligned}$ \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- \end{Verbatim} \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground,
colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\char`\#{} check cyclic permutation (shift equivalence)} \PY{n}{lin}\PY{o}{.}\PY{n}{is\char`\_{}permutation}\PY{p}{(}\PY{n}{algo5}\PY{p}{,} \PY{n}{algo6}\PY{p}{,} \PY{n}{verbose} \PY{o}{=} \PY{k+kc}{True}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- Algorithm 2.5 is a permutation of Algorithm 2.6. -------------------------------------------------------------- True \end{Verbatim} \titleparagraph{DR and ADMM} The third illustrative example shows that DR and ADMM are related by permutation and conjugation, as we saw in section \ref{conjugation}. Further, \lin{} can even reveal the specific parameter choice required for the relation to hold. Just as in section \ref{conjugation}, suppose both DR and ADMM solve problem \cref{eqp1} with $A = I$, $B = I$, and $c = 0$. We input and parse DR and ADMM. To detect the relations, we use the function \texttt{test\_conjugate\_permutation} to check conjugation and permutation between DR and ADMM. The results are the same as in section \ref{conjugation}.
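As a further cross-check, DR's transfer function $\hat H_{10}(z)$ from section \ref{conjugation} can be recomputed by hand from the state-space matrices that \lin{} prints for DR in the listing that follows, using $\hat H(z) = C(zI - A)^{-1}B + D$; a minimal \emph{sympy} sketch:

```python
import sympy as sp

z, t = sp.symbols('z t')

# DR's state-space matrices (A, B, C, D), copied from the parsed realization.
A = sp.Matrix([[0, 0, 1], [0, 0, 1], [0, 0, 1]])
B = sp.Matrix([[-t, 0], [-2*t, -t], [-t, -t]])
C = sp.Matrix([[0, 0, 1], [0, 0, 1]])
D = sp.Matrix([[-t, 0], [-2*t, -t]])

# Transfer function H(z) = C (zI - A)^{-1} B + D.
H = sp.simplify(C * (z*sp.eye(3) - A).inv() * B + D)

# H10 as displayed in the conjugation discussion.
H10 = sp.Matrix([[-t*z/(z - 1), -t/(z - 1)],
                 [t*(1 - 2*z)/(z - 1), -t*z/(z - 1)]])
assert sp.simplify(H - H10) == sp.zeros(2, 2)
```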
\begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{DR} \PY{o}{=} \PY{n}{Algorithm}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{Douglas\char`\-{}Rachford splitting}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{x1}\PY{p}{,} \PY{n}{x2}\PY{p}{,} \PY{n}{x3} \PY{o}{=} \PY{n}{DR}\PY{o}{.}\PY{n}{add\char`\_{}var}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x1}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x2}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{x3}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{t} \PY{o}{=} \PY{n}{DR}\PY{o}{.}\PY{n}{add\char`\_{}parameter}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{t}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} add functions f and g} \PY{n}{f}\PY{p}{,} \PY{n}{g} \PY{o}{=} \PY{n}{DR}\PY{o}{.}\PY{n}{add\char`\_{}function}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{f}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{g}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} x1 \char`\<{}\char`\-{} prox\char`\_{}tf(x3)} \PY{n}{DR}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x1}\PY{p}{,} \PY{n}{lin}\PY{o}{.}\PY{n}{prox}\PY{p}{(}\PY{n}{f}\PY{p}{,} \PY{n}{t}\PY{p}{)}\PY{p}{(}\PY{n}{x3}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} x2 \char`\<{}\char`\-{} prox\char`\_{}tg(2x1 \char`\-{} x3)} \PY{n}{DR}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x2}\PY{p}{,} \PY{n}{lin}\PY{o}{.}\PY{n}{prox}\PY{p}{(}\PY{n}{g}\PY{p}{,} \PY{n}{t}\PY{p}{)}\PY{p}{(}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{x1} \PY{o}{\char`\-{}} \PY{n}{x3}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} x3 \char`\<{}\char`\-{} x3 + x2 \char`\-{} x1} \PY{n}{DR}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{x3}\PY{p}{,} \PY{n}{x3} \PY{o}{+} \PY{n}{x2} \PY{o}{\char`\-{}} \PY{n}{x1}\PY{p}{)} \PY{n}{DR}\PY{o}{.}\PY{n}{parse}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- Parse Douglas-Rachford splitting.
State-space realization: \end{Verbatim} $\quad \qquad \begin{aligned} \left[\begin{matrix}x^{+}_{1}\\x^{+}_{2}\\x^{+}_{3}\end{matrix}\right] & = \left[\begin{matrix}0 & 0 & 1\\0 & 0 & 1\\0 & 0 & 1\end{matrix}\right] \left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]+\left[\begin{matrix}- t & 0\\- 2 t & - t\\- t & - t\end{matrix}\right] \left[\begin{matrix}\frac{d}{d y_{0}} f{\left(y_{0} \right)}\\\frac{d}{d y_{1}} g{\left(y_{1} \right)}\end{matrix}\right] \\ \left[\begin{matrix}y_{0}\\y_{1}\end{matrix}\right] & = \left[\begin{matrix}0 & 0 & 1\\0 & 0 & 1\end{matrix}\right] \left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]+\left[\begin{matrix}- t & 0\\- 2 t & - t\end{matrix}\right] \left[\begin{matrix}\frac{d}{d y_{0}} f{\left(y_{0} \right)}\\\frac{d}{d y_{1}} g{\left(y_{1} \right)}\end{matrix}\right] \end{aligned}$ \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- \end{Verbatim} \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{n}{ADMM} \PY{o}{=} \PY{n}{Algorithm}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{ADMM}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{f}\PY{p}{,} \PY{n}{g} \PY{o}{=} \PY{n}{ADMM}\PY{o}{.}\PY{n}{add\char`\_{}function}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{f}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{g}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{rho} \PY{o}{=} \PY{n}{ADMM}\PY{o}{.}\PY{n}{add\char`\_{}parameter}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{rho}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{n}{xi1}\PY{p}{,} \PY{n}{xi2}\PY{p}{,} \PY{n}{xi3} \PY{o}{=} \PY{n}{ADMM}\PY{o}{.}\PY{n}{add\char`\_{}var}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi1}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi2}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{xi3}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} 
\PY{c+c1}{\char`\#{} xi1 \char`\<{}\char`\-{} argmin(xi1, g(xi1) + 1/2*rho*||xi1 + xi2 + xi3||\char`\^{}2)} \PY{n}{ADMM}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi1}\PY{p}{,} \PY{n}{lin}\PY{o}{.}\PY{n}{argmin}\PY{p}{(}\PY{n}{xi1}\PY{p}{,} \PY{n}{g}\PY{p}{(}\PY{n}{xi1}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{rho}\PY{o}{*}\PY{n}{lin}\PY{o}{.}\PY{n}{norm\char`\_{}square}\PY{p}{(}\PY{n}{xi1} \PY{o}{+} \PY{n}{xi2} \PY{o}{+} \PY{n}{xi3}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} xi2 \char`\<{}\char`\-{} argmin(xi2, f(xi2) + 1/2*rho*||xi1 + xi2 + xi3||\char`\^{}2)} \PY{n}{ADMM}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi2}\PY{p}{,} \PY{n}{lin}\PY{o}{.}\PY{n}{argmin}\PY{p}{(}\PY{n}{xi2}\PY{p}{,} \PY{n}{f}\PY{p}{(}\PY{n}{xi2}\PY{p}{)} \PY{o}{+} \PY{l+m+mi}{1}\PY{o}{/}\PY{l+m+mi}{2}\PY{o}{*}\PY{n}{rho}\PY{o}{*}\PY{n}{lin}\PY{o}{.}\PY{n}{norm\char`\_{}square}\PY{p}{(}\PY{n}{xi1} \PY{o}{+} \PY{n}{xi2} \PY{o}{+} \PY{n}{xi3}\PY{p}{)}\PY{p}{)}\PY{p}{)} \PY{c+c1}{\char`\#{} xi3 \char`\<{}\char`\-{} xi3 + xi1 + xi2} \PY{n}{ADMM}\PY{o}{.}\PY{n}{add\char`\_{}update}\PY{p}{(}\PY{n}{xi3}\PY{p}{,} \PY{n}{xi3} \PY{o}{+} \PY{n}{xi1} \PY{o}{+} \PY{n}{xi2}\PY{p}{)} \PY{n}{ADMM}\PY{o}{.}\PY{n}{parse}\PY{p}{(}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- Parse ADMM.
State-space realization: \end{Verbatim} $\quad \qquad \begin{aligned} \left[\begin{matrix}\xi^{+}_{1}\\\xi^{+}_{2}\\\xi^{+}_{3}\end{matrix}\right] & = \left[\begin{matrix}0 & -1 & -1\\0 & 1 & 0\\0 & 0 & 0\end{matrix}\right] \left[\begin{matrix}\xi_{1}\\\xi_{2}\\\xi_{3}\end{matrix}\right]+\left[\begin{matrix}- \frac{1}{\rho} & 0\\\frac{1}{\rho} & - \frac{1}{\rho}\\0 & - \frac{1}{\rho}\end{matrix}\right] \left[\begin{matrix}\frac{d}{d y_{0}} g{\left(y_{0} \right)}\\\frac{d}{d y_{1}} f{\left(y_{1} \right)}\end{matrix}\right] \\ \left[\begin{matrix}y_{0}\\y_{1}\end{matrix}\right] & = \left[\begin{matrix}0 & -1 & -1\\0 & 1 & 0\end{matrix}\right] \left[\begin{matrix}\xi_{1}\\\xi_{2}\\\xi_{3}\end{matrix}\right]+\left[\begin{matrix}- \frac{1}{\rho} & 0\\\frac{1}{\rho} & - \frac{1}{\rho}\end{matrix}\right] \left[\begin{matrix}\frac{d}{d y_{0}} g{\left(y_{0} \right)}\\\frac{d}{d y_{1}} f{\left(y_{1} \right)}\end{matrix}\right] \end{aligned}$ \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- \end{Verbatim} \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\char`\#{} check permutation and conjugation } \PY{c+c1}{\char`\#{} between DR and ADMM} \PY{n}{lin}\PY{o}{.}\PY{n}{test\char`\_{}conjugate\char`\_{}permutation}\PY{p}{(}\PY{n}{DR}\PY{p}{,} \PY{n}{ADMM}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \begin{Verbatim}[commandchars=\\\{\}] -------------------------------------------------------------- ============================================================== Parameters of Douglas-Rachford splitting: \end{Verbatim} $\quad \qquad t$ \begin{Verbatim}[commandchars=\\\{\}] Parameters of ADMM: \end{Verbatim} $\quad \qquad \rho$ \begin{Verbatim}[commandchars=\\\{\}] Douglas-Rachford splitting is a conjugate permutation of ADMM, if the parameters satisfy: \end{Verbatim} $\quad 
\qquad \rho=t$ \begin{Verbatim}[commandchars=\\\{\}] ============================================================== -------------------------------------------------------------- \end{Verbatim} \subsection{Implementation} In this subsection, we briefly describe the implementation of \lin{}. All expressions in \lin{} are defined symbolically, using the Python package for symbolic mathematics \emph{sympy}. In \lin{}, an algorithm is specified by defining variables, parameters, functions, oracles, and update equations. All variables and parameters are symbolic, so there is no need to specialize problem dimensions or parameter choices. The system automatically translates an input algorithm into its state-space realization and computes the transfer function. The transfer functions can be compared and manipulated as needed to establish various kinds of equivalences or other relations between algorithms. \titleparagraph{Parameter declaration} Parameters of the algorithm can be declared as scalars (commutative) or as vectors or matrices (noncommutative). The following code shows how to add a scalar \texttt{t} and a matrix \texttt{L} to \texttt{algo1}.
\begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\char`\#{} add a scalar parameter t} \PY{n}{t} \PY{o}{=} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}parameter}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{t}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} add a matrix parameter L} \PY{n}{L} \PY{o}{=} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}parameter}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{L}\PY{l+s+s2}{\char`\"{}}\PY{p}{,} \PY{n}{commutative} \PY{o}{=} \PY{k+kc}{False}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \titleparagraph{Parameter specification} Given two input algorithms, \lin{} computes the transfer functions and can compare them to detect equivalence and other relations. Some algorithms are equivalent or related only when the parameters satisfy a certain condition: for example, DR and ADMM. If the transfer functions of the two algorithms use different parameters, \lin{} forms symbolic equations and solves them to determine conditions that, if satisfied by the algorithm parameters, yield the desired relation between the algorithms; see \cref{eq3} in section \ref{charac-oracle}. \titleparagraph{Oracles and functions} Oracles play the starring role in our framework: oracle equivalence is possible only if two algorithms share the same oracles. In \lin{}, we provide two approaches to declare and add oracles to an algorithm. The black-box approach is to define oracles as black boxes. When parsing the algorithm, the system treats each oracle as a distinct entity unrelated to any other oracle. An oracle declared using the syntax \texttt{add\_oracle} uses the black-box approach.
For example, we may add oracles $\nabla f$ and $\textnormal{prox}_g$ to algorithm \texttt{algo1}: \begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\char`\#{} add oracle gradient of f in the first approach} \PY{n}{gradf} \PY{o}{=} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}oracle}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{gradf}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} add oracle prox of g in the first approach} \PY{n}{proxg} \PY{o}{=} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}oracle}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{proxg}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} The functional approach is to define oracles in terms of the (sub)gradient of a function. When parsing an algorithm, all the oracles will be decomposed into (sub)gradients and the state-space realization is given in terms of (sub)gradients. We say that two algorithms are oracle-equivalent in terms of functional oracles if they are oracle-equivalent after rewriting each algorithm to use only (sub)gradient oracles. This approach is critical to allow us to identify algorithm conjugation, since conjugate algorithms use different (conjugate) oracles. If every algorithm is represented in terms of (sub)gradients, algorithm conjugation can be detected using \cref{prop8}. Fortunately, common oracles such as prox and argmin can be easily written in terms of (sub)gradients: for example, $\textnormal{prox}_f(x) = (I + \partial f)^{-1}(x)$ and argmin as \cref{eq58}. To use the functional approach, users must define and add functions to the algorithm first using \texttt{add\_function} and then declare and add oracles. The following code shows how to use the functional approach to declare and add oracles $\nabla f$ and $\textnormal{prox}_f$.
\begin{changemargin}{1cm}{1cm} \begin{tcolorbox}[breakable, size=fbox, boxrule=1pt, pad at break*=1mm,colback=cellbackground, colframe=cellborder] \begin{Verbatim}[commandchars=\\\{\}] \PY{c+c1}{\char`\#{} add function f} \PY{n}{f} \PY{o}{=} \PY{n}{algo1}\PY{o}{.}\PY{n}{add\char`\_{}function}\PY{p}{(}\PY{l+s+s2}{\char`\"{}}\PY{l+s+s2}{f}\PY{l+s+s2}{\char`\"{}}\PY{p}{)} \PY{c+c1}{\char`\#{} gradient of f with respect to x1} \PY{n}{lin}\PY{o}{.}\PY{n}{grad}\PY{p}{(}\PY{n}{f}\PY{p}{)}\PY{p}{(}\PY{n}{x1}\PY{p}{)} \PY{c+c1}{\char`\#{} prox of f with respect to x2 and parameter t} \PY{n}{lin}\PY{o}{.}\PY{n}{prox}\PY{p}{(}\PY{n}{f}\PY{p}{,}\PY{n}{t}\PY{p}{)}\PY{p}{(}\PY{n}{x2}\PY{p}{)} \end{Verbatim} \end{tcolorbox} \end{changemargin} \subsection{Black-box vs functional oracles} Are two algorithms equivalent with respect to black-box oracles if and only if they are equivalent with respect to functional oracles? Intuitively, when oracles are defined in terms of (sub)gradients, it might be possible to identify more relations with other algorithms. However, as stated in \cref{prop12}, for algorithms that use only proximal operators, argmins, and (sub)gradients as oracles, equivalence is preserved under both black-box and functional definitions of oracles. \begin{proposition}\label{prop12} Suppose two algorithms use only proximal operators, argmins, and (sub)gradients as oracles. Then the two algorithms are equivalent with respect to black-box oracles if and only if they are also equivalent with respect to functional oracles. \end{proposition} \begin{proof} Since for any function $g$ and any $t$, $\textnormal{prox}_{tg}(x) = \textnormal{argmin}_y\{tg(y) + \frac{1}{2} \|x - y\|^2\}$, we can treat the proximal operator as a special case of argmin.
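This identity is easy to sanity-check numerically. The sketch below (not part of \lin{}) takes $g(y) = |y|$, whose prox is soft thresholding, and compares a brute-force grid minimization of $tg(y) + \frac{1}{2}(x-y)^2$ against the closed form:

```python
import numpy as np

def prox_by_argmin(x, t, grid=np.linspace(-10, 10, 200_001)):
    # Brute-force argmin of t*g(y) + (1/2)(x - y)^2 for g(y) = |y|.
    objective = t * np.abs(grid) + 0.5 * (x - grid) ** 2
    return grid[np.argmin(objective)]

def soft_threshold(x, t):
    # Closed-form prox of t*|.|.
    return np.sign(x) * max(abs(x) - t, 0.0)

for x in [-3.0, -0.4, 0.0, 0.7, 2.5]:
    assert abs(prox_by_argmin(x, 1.0) - soft_threshold(x, 1.0)) < 1e-3
```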
Without loss of generality, any argmin oracle in a linear algorithm has the form \begin{displaymath} z = \textnormal{argmin}_x \left\{ \lambda g(x) + \frac{1}{2}\left[\begin{array}{c} x \\ y \end{array} \right]^T \left[\begin{array}{cc} Q_{11} & Q_{12}\\ Q_{21} & Q_{22} \end{array} \right] \left[\begin{array}{c} x \\ y \end{array} \right]\right\}. \end{displaymath} Here $z$ is the value of the oracle and $y$ can be regarded as the argument, which means from the perspective of a linear system, $z$ is the input and $y$ is the output. The parameter $\lambda$ can be a scalar or matrix, $g$ is a function, and $Q_{11}$, $Q_{12}$, $Q_{21}$, $Q_{22}$ are parameter matrices. Specifically, \begin{displaymath} \left[\begin{array}{cc} Q_{11} & Q_{12}\\ Q_{21} & Q_{22} \end{array} \right] \end{displaymath} is a symmetric matrix and \begin{displaymath} \frac{1}{2}\left[\begin{array}{c} x \\ y \end{array} \right]^T \left[\begin{array}{cc} Q_{11} & Q_{12}\\ Q_{21} & Q_{22} \end{array} \right] \left[\begin{array}{c} x \\ y \end{array} \right] \end{displaymath} is a quadratic term with respect to $x$ and $y$. The matrix $Q_{11}$ must be invertible if the argmin oracle is single-valued. To recover the proximal operator, choose a scalar $\lambda$ and set \begin{displaymath} \left[\begin{array}{cc} Q_{11} & Q_{12}\\ Q_{21} & Q_{22} \end{array} \right] = \left[\begin{array}{cc} I & -I\\ -I & I \end{array} \right]. \end{displaymath} If $g$ is a convex function, the argmin oracle can be written in terms of the subgradient oracle $\partial g$ as follows \citefirst[\S6]{doi:10.1137/1.9781611974997}[\S2]{fenchel1953convex}: \begin{equation}\label{eq58} z \in -Q_{11}^{-1}\lambda \partial g (z) - Q_{11}^{-1}Q_{12}y. \end{equation} Suppose we have an algorithm with $n+m$ oracles in total, consisting of $n$ argmins and $m$ (sub)gradients.
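The characterization \cref{eq58} can be sanity-checked numerically before proceeding. The sketch below assumes a convex quadratic $g(x) = \frac{1}{2}x^TGx$, for which $\partial g(z) = \{Gz\}$ and the argmin reduces to a linear solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3

G = rng.standard_normal((n, n)); G = G @ G.T  # g(x) = (1/2) x^T G x, convex
lam = 0.7
Q11 = rng.standard_normal((n, n)); Q11 = Q11 @ Q11.T + n * np.eye(n)  # symmetric, invertible
Q12 = rng.standard_normal((n, m))
y = rng.standard_normal(m)

# Stationarity of lam*g(x) + (1/2)[x; y]^T Q [x; y] in x gives
# (lam*G + Q11) z = -Q12 y.
z = np.linalg.solve(lam * G + Q11, -Q12 @ y)

# The subgradient form: z = -Q11^{-1} lam grad g(z) - Q11^{-1} Q12 y.
z_sub = np.linalg.solve(Q11, -lam * (G @ z) - Q12 @ y)
assert np.allclose(z, z_sub)
```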
We can group the argmins together and the (sub)gradients together, and partition the state-space realization accordingly as \begin{equation}\label{eq54} \left[\begin{array}{c:c c} A & B_1 & B_2\\ \hdashline C_1 & D_{11}& D_{12} \\ C_2 & D_{21} & D_{22} \end{array}\right], \end{equation} where $C_1$, $B_1$ correspond to the argmins, $C_2$, $B_2$ correspond to the (sub)gradients, and $D$ is partitioned accordingly into $D_{11}$, $D_{12}$, $D_{21}$, and $D_{22}$. The transfer function can be partitioned correspondingly as \begin{displaymath} \hat H(z) = \left[\begin{array}{c c} \hat H_{11}(z) & \hat H_{12}(z) \\ \hat H_{21}(z) & \hat H_{22}(z) \\ \end{array}\right] = \left[\begin{array}{c c} C_1(zI - A)^{-1}B_1 + D_{11} & C_1(zI - A)^{-1}B_2 + D_{12} \\ C_2(zI - A)^{-1}B_1 + D_{21} & C_2(zI - A)^{-1}B_2 + D_{22} \\ \end{array}\right]. \end{displaymath} The input and output are partitioned as $(\bar{u}_1, \bar{u}_2)$ and $(\bar{y}_1, \bar{y}_2)$, where $\bar{y}_1 = (y_1, \dots, y_n)$, $\bar{y}_2 = (y_{n+1}, \dots, y_{n+m})$, $\bar{u}_1 = (z_1, \dots, z_n)$, and $\bar{u}_2 = (\nabla f_{n+1}(y_{n+1}), \dots, \nabla f_{n+m}(y_{n+m}))$. For each $i \in \{1, \dots, n\}$ we have \begin{equation}\label{eq55} z_i = \textnormal{argmin}_x \left\{ \lambda_i f_i(x) + \frac{1}{2}\left[\begin{array}{c} x \\ y_i \end{array} \right]^T \left[\begin{array}{cc} Q_{11}^i & Q_{12}^i\\ Q_{21}^i & Q_{22}^i \end{array} \right] \left[\begin{array}{c} x \\ y_i \end{array} \right]\right\} \end{equation} where $Q_{11}^i$ is invertible for any $i \in \{1, \dots, n\}$. Now we rewrite the linear system so that the nonlinearities corresponding to the argmins for the new linear system are (sub)gradients. Let $\lambda = \textnormal{diag}(\lambda_{1}, \dots, \lambda_n)$, $Q_1 = \textnormal{diag}(Q^1_{11}, \dots, Q_{11}^n)$, $Q_2 = \textnormal{diag}(Q^1_{12}, \dots, Q_{12}^n)$, $M_1 = Q_1^{-1}Q_2$, and $M_2 = Q_1^{-1}\lambda$.
The new state-space realization in terms of the (sub)gradient oracles is \begin{equation}\label{eq34} \left[\begin{array}{c:c c } A - B_1(I + M_1D_{11})^{-1}M_1C_1 & -B_1(I + M_1D_{11})^{-1}M_2 & B_2- B_1(I + M_1D_{11})^{-1}M_1D_{12} \\ \hdashline -(I + M_1D_{11})^{-1}M_1C_1 & -(I + M_1D_{11})^{-1}M_2 & -(I + M_1D_{11})^{-1}M_1D_{12} \\ C_2-D_{21}(I + M_1D_{11})^{-1}M_1C_1 & -D_{21}(I + M_1D_{11})^{-1}M_2 & D_{22}-D_{21}(I + M_1D_{11})^{-1}M_1D_{12} \\ \end{array}\right]. \end{equation} We can compute the transfer function as \small \begin{equation}\label{eq35} \hat H'(z) = \left[\begin{array}{c c} \hat H'_{11}(z) & \hat H'_{12}(z) \\ \hat H'_{21}(z) & \hat H'_{22}(z) \\ \end{array}\right] = \left[\begin{array}{c c} -(I + M_1\hat H_{11}(z))^{-1}M_2 & -(I + M_1 \hat H_{11}(z))^{-1}M_1\hat H_{12}(z) \\ -\hat H_{21}(z)(I +M_1\hat H_{11}(z))^{-1}M_2 & \hat H_{22}(z)-\hat H_{21}(z)(I +M_1\hat H_{11}(z))^{-1}M_1\hat H_{12}(z) \\ \end{array}\right]. \end{equation} \normalsize Note that $I + M_1D_{11}$ is invertible (otherwise the algorithm is not causal) and consequently $I +M_1\hat H_{11}(z)$ is invertible. The matrix $Q_1$ is also invertible, since $Q^i_{11}$ is invertible for any $i \in \{1, \dots, n\}$. A detailed proof of \cref{eq34} and \cref{eq35} is provided in \cref{apd7}. Therefore, we know that if $\hat H(z)$ is fixed then $\hat H'(z)$ is also fixed. \end{proof} \section{Conclusion and future work} In this paper, we have presented a framework for reasoning about equivalence between a broad class of iterative algorithms by using ideas from control theory to represent optimization algorithms. The main insight is that by representing an algorithm as a linear dynamical system in feedback with a static nonlinearity, we can recognize equivalent algorithms by detecting algebraic relations between the transfer functions of the associated linear systems. 
This framework can identify algorithms that result in the same sequence of oracle calls, or algorithms that are the same up to shifts of the update equations, repetition of the updates with the same unit block, and conjugation of the function oracles. These ideas are implemented in the software package \lin{}, which allows researchers to search for algorithms that are related to a given input and identify parameter settings that make the algorithms equivalent. Our goal is to allow researchers to add new algorithms to \lin{} as they are developed, so that \lin{} can remain a valuable resource for algorithm designers seeking to understand connections (if any) to previous methods. Our framework requires that the algorithm is linear in the state and oracle outputs, but not necessarily in the parameters. This constraint still allows us to handle a surprisingly large class of algorithms. There are several interesting directions for future work. Can we detect equivalence between stochastic or randomized algorithms? Our framework applies to such algorithms with almost no modifications, simply by allowing random oracles. For example, we can accept oracles like random search $\argmin \{ f(x + \omega_i): i=1,\ldots,k \}$, stochastic gradient $\nabla f(x) + \omega$, or noisy gradient $\nabla f(x + \omega)$. The definition of oracle equivalence would need a slight modification: for algorithms that use (pseudo-)randomized oracles, two algorithms are oracle-equivalent if they generate identical sequences of oracle calls given the same random seed. Can we detect equivalence between parallel or distributed algorithms? Surprisingly, our framework still works for parallel or distributed algorithms. Notice that in a parallel algorithm, many oracle calls may be independently executed on different processors at about the same time. The precise ordering of these calls is not determined by the algorithm, and so different runs of the algorithm can generate different oracle sequences.
However, all the possible oracle sequences generated by the same algorithm share the same dependence graph. Using the formalism defined in subsection \ref{odg}, we can see that our framework can identify equivalence between parallel or distributed algorithms using the expanded definition of oracle equivalence: two algorithms are oracle-equivalent if there exists a way of writing each algorithm as a sequence of updates so that they generate identical sequences of oracle calls. Can we detect equivalence between adaptive or nonlinear algorithms? Transfer functions are only defined for linear time-invariant (LTI) systems, so the LTI assumption in our framework is critical. Nevertheless, many of the other concepts from subsection~\ref{control} do extend to systems that are \emph{almost} LTI. For example, an algorithm with parameters that change on a fixed schedule but is otherwise linear, such as gradient descent with a diminishing stepsize, can be regarded as a linear time-varying (LTV) system~\cite{antsaklis2006linear}, and the notion of a transfer function has been generalized to LTV systems~\cite{LTV_TF}. If, instead, the parameters change adaptively based on the other state variables, the system can be regarded as a linear parameter varying (LPV) system~\cite{LPV_book} or a switched system~\cite{sun2006switched}. Examples of such algorithms include nonlinear conjugate gradient methods and quasi-Newton methods. For these more complicated cases, it is still reasonable to ask whether two algorithms invoke the same sequence of oracle calls. Discovering representations for nonlinear or time-varying algorithms that suffice to check equivalence is an interesting direction for future research. \bibliographystyle{siamplain}
https://arxiv.org/abs/1405.1003
From Boltzmann to random matrices and beyond
These expository notes propose to follow, across fields, some aspects of the concept of entropy. Starting from the work of Boltzmann in the kinetic theory of gases, various universes are visited, including Markov processes and their Helmholtz free energy, the Shannon monotonicity problem in the central limit theorem, the Voiculescu free probability theory and the free central limit theorem, random walks on regular trees, the circular law for the complex Ginibre ensemble of random matrices, and finally the asymptotic analysis of mean-field particle systems in arbitrary dimension, confined by an external field and experiencing singular pair repulsion. The text is written in an informal style driven by energy and entropy. It aims to be recreative and to provide curious readers with entry points into the literature and connections across boundaries.
\section{Ludwig Boltzmann and his H-Theorem} \smallskip \subsection{Entropy} A simple way to introduce the Boltzmann entropy is to use the concept of combinatorial disorder. More precisely, let us consider a system of $n$ distinguishable particles, each of them being in one of the $r$ possible states (typically energy levels). We have $n=n_1+\cdots+n_r$ where $n_i$ is the number of particles in state $i$. The vector $(n_1,\ldots,n_r)$ encodes the macroscopic state of the system, while the microscopic state of the system is encoded by the vector $(b_1,\ldots,b_n)\in\{1,\ldots,r\}^n$ where $b_i$ is the state of the $i$-th particle. The number of microscopic states compatible with a fixed macroscopic state $(n_1,\ldots,n_r)$ is given by the multinomial coefficient\footnote{Encoding the occurrence of each face of an $r$-faced die thrown $n$ times.} $n!/(n_1!\cdots n_r!)$. This integer measures the microscopic degree of freedom given the macroscopic state. As a consequence, the additive degree of freedom per particle is then naturally given by $(1/n)\log(n!/(n_1!\cdots n_r!))$. But how does this behave when $n$ is large? Let us suppose simply that $n$ tends to $\infty$ while $n_i/n\to p_i$ for every $1\leq i\leq r$. Then, thanks to the Stirling formula, we get, denoting $p:=(p_1,\ldots,p_r)$, \[ \mathcal{S}(p) :=\lim_{n\to\infty}\frac{1}{n}\log\PAR{\frac{n!}{n_1!\cdots n_r!}} =-\sum_{i=1}^rp_i\log(p_i). \] The quantity $\mathcal{S}(p)$ is the Boltzmann entropy of the discrete probability distribution $p$. It appears here as an asymptotic additive degree of freedom per particle in a system with an infinite number of particles, each of them being in one of the $r$ possible states, with population frequencies $p_1,\ldots,p_r$.
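This limit is easy to observe numerically. A small Python sketch (the probability vector is an arbitrary example; the log-Gamma function avoids computing huge factorials):

```python
import math

def log_multinomial(counts):
    # log( n! / (n_1! ... n_r!) ) via log-Gamma, stable for large n
    n = sum(counts)
    return math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in counts)

def entropy(p):
    # Boltzmann entropy -sum p_i log p_i (natural logarithm)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = (0.5, 0.3, 0.2)
for n in (10**2, 10**4, 10**6):
    counts = [round(pi * n) for pi in p]
    print(n, log_multinomial(counts) / n)  # approaches entropy(p) as n grows
```

The per-particle degree of freedom converges to $\mathcal{S}(p)\approx1.0297$ for this choice of $p$, with a correction of order $(\log n)/n$ coming from the Stirling formula.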
This is nothing else but the first order asymptotic analysis of the multinomial combinatorics: \[ \frac{n!}{n_1!\cdots n_r!}\approx e^{n\mathcal{S}(n_1/n,\ldots,n_r/n)}. \] When the disorder of the system is better described by a probability density function $f:\dR^d\to\dR_+$ instead of a discrete probability measure, we may introduce by analogy, or passage to the limit, the continuous Boltzmann entropy of $f$, denoted $\mathcal{S}(f)$, or $-H(f)$ in the terminology of Boltzmann, \[ \mathcal{S}(f):=-\int_{\dR^d}\!f(x)\log(f(x))\,dx. \] When $X$ is a random variable, we denote by $\mathcal{S}(X)$ the entropy of its law. Here we use the notation $\mathcal{S}$, which is initially the one used by Clausius for the concept of entropy in thermodynamics. \subsection{Maximum entropy under constraints} The Boltzmann entropy $\mathcal{S}$ measures an average disorder. One can seek a probability density $f_*$ that maximizes the linear functional $f\mapsto \mathcal{S}(f)$ over a convex class $\mathcal{C}$ formed by a set of constraints on $f$: \[ \mathcal{S}(f_*)=\max\{\mathcal{S}(f):f\in\mathcal{C}\}. \] The class $\mathcal{C}$ is typically defined by linear (in $f$) statistics of the form $\displaystyle\int\!g(x)f(x)\,dx=c_g$ for $g\in\mathcal{G}$. Following Boltzmann, suppose that the internal state of an isolated system is described by a parameter $x\in\dR^d$ which is statistically distributed according to a probability density $f$, and suppose furthermore that the energy of state $x$ is $V(x)$. Then the average energy of the system is \[ a=\int\!V(x)\,f(x)\,dx.
\] Let $\mathcal{C}$ be the class of probability densities $f$ which satisfy this constraint, and let us seek $f_*\in\mathcal{C}$ that maximizes the entropy $\mathcal{S}$ on $\mathcal{C}$, in other words such that $\mathcal{S}(f_*)=\max_{f\in\mathcal{C}}\mathcal{S}(f)$. A Lagrange variational analysis leads to $-\log f_*=\alpha+\beta V$ where $\alpha,\beta$ are Lagrange multipliers. We select $\alpha,\beta>0$ in such a way that $f_*\in\mathcal{C}$, which gives a unique solution \[ f_*=\frac{1}{Z_\beta}e^{-\beta V}\quad\text{where}\quad Z_\beta:=\int\!e^{-\beta V(x)}\,dx. \] In physics $\beta$ is interpreted as an inverse temperature times a universal constant called the Boltzmann constant, selected in such a way that $f_*\in\mathcal{C}$. Indeed, by using the definition of $f_*$, the fact that $f,f_*\in\mathcal{C}$, and the Jensen inequality for the convex function $u\geq0\mapsto u\log(u)$, we have \[ \mathcal{S}(f_*)-\mathcal{S}(f) =\int\!\frac{f}{f_*}\log\PAR{\frac{f}{f_*}}f_*\,dx \geq0. \] The quantity in the middle is known as the Kullback-Leibler divergence or relative entropy\footnote{This concept was actually introduced by Solomon Kullback (1907 -- 1994) and Richard Leibler (1914 -- 2003) in the 1950's as an information gain, and was inspired from the entropy in the Shannon theory of communication.} with respect to $f_*dx$, see \cite{MR39968,MR1461541}. The Jensen inequality and the strict convexity of $u\geq0\mapsto u\log(u)$ give that $f_*$ is the unique density which achieves $\max_{\mathcal{C}}\mathcal{S}$.
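On a finite state space the displayed identity $\mathcal{S}(f_*)-\mathcal{S}(f)=\mathrm{KL}(f\|f_*)\geq0$ can be checked directly. A small Python sketch with a made-up energy vector $V$ (not from the text): transferring mass between two states of equal energy keeps $f$ in the constraint class, and the entropy drop equals the Kullback-Leibler divergence.

```python
import math

V = [0.0, 0.0, 1.0, 2.0]      # toy energies; states 0 and 1 have equal energy
beta = 1.0
w = [math.exp(-beta * v) for v in V]
Z = sum(w)
fstar = [wi / Z for wi in w]  # Gibbs density f_* = e^{-beta V} / Z

eps = 0.05                    # move mass between the two equal-energy states:
f = list(fstar)               # same normalization, same average energy
f[0] += eps
f[1] -= eps

def entropy(p): return -sum(pi * math.log(pi) for pi in p)
def energy(p):  return sum(pi * vi for pi, vi in zip(p, V))

kl = sum(pi * math.log(pi / qi) for pi, qi in zip(f, fstar))
# f is in the same constraint class as f*, has strictly smaller entropy,
# and the entropy gap is exactly the relative entropy KL(f || f*).
```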
We write $f_*=\arg\max_{\mathcal{C}}\mathcal{S}$. The lower the energy $V(x)$ of state $x$, the higher the value $f_*(x)$ of the maximum entropy density $f_*$. Taking for instance $V(x)=+\infty\mathbf{1}_{K^c}(x)$ reveals that uniform laws maximize entropy under support constraint, while taking $V(x)=\NRM{x}_2^2$ reveals that Gaussian laws maximize entropy under second moment constraint. In particular, on $\dR$, denoting $G$ a Gaussian random variable, \[ \mathbb{E}(X^2)=\mathbb{E}(G^2) \Rightarrow \mathcal{S}(X)\leq \mathcal{S}(G) \quad\text{and}\quad \mathcal{S}(X)=\mathcal{S}(G) \Rightarrow X\overset{d}{=}G. \] It is well known in Bayesian statistics that many other classical discrete or continuous laws are actually maximum entropy laws over classes of laws defined by natural constraints. \subsection{Free energy and the law of large numbers} Still with $f_*=Z_\beta^{-1}e^{-\beta V}$, we have \[ -\frac{1}{\beta}\log(Z_\beta) =\mathcal{A}(f_*) \quad\text{where}\quad \mathcal{A}(f):=\int\!V(x)f(x)\,dx-\frac{1}{\beta}\mathcal{S}(f). \] The functional $\mathcal{A}$ is the Helmholtz\footnote{Hermann Ludwig Ferdinand von Helmholtz (1821 -- 1894), inventor, among other things, of the unified concept of energy and its conservation in physics, in competition with Julius Robert von Mayer (1814 -- 1878).} free energy\footnote{Should not be confused with the Gibbs free energy (free enthalpy) even if they are closely related for ideal gases.}: mean energy minus temperature times entropy. The functional $\mathcal{A}$ is essentially $-\mathcal{S}$ penalized by the average energy.
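The Gaussian maximum-entropy property can be illustrated by comparing a few unit-variance laws on $\dR$ through their classical closed-form differential entropies (a quick sketch; the formulas are standard, not derived in the text):

```python
import math

# Differential entropies (natural log) of three unit-variance laws on R.
s_gauss   = 0.5 * math.log(2 * math.pi * math.e)   # N(0, 1)
s_laplace = 1 + math.log(2 / math.sqrt(2))         # Laplace(b) with 2 b^2 = 1
s_uniform = math.log(2 * math.sqrt(3))             # Uniform[-sqrt(3), sqrt(3)]
# s_gauss ~ 1.419 beats s_laplace ~ 1.347 and s_uniform ~ 1.242
```

Among all unit-variance laws the Gaussian value $\frac{1}{2}\log(2\pi e)$ is the maximum, consistent with the implication displayed above.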
Also, the functional $\mathcal{A}$ admits $f_*$ as a unique minimizer over the class of densities, without constraints. Indeed, the Helmholtz free energy is connected to the Kullback-Leibler relative entropy: for any density $f$, \[ \mathcal{A}(f)-\mathcal{A}(f_*) =\frac{1}{\beta}\int\!\frac{f}{f_*}\log\PAR{\frac{f}{f_*}}f_*\,dx\geq0 \] with equality if and only if $f=f_*$ thanks to the strict convexity of $u\mapsto u\log(u)$. When $f$ and $f_*$ have the same average energy, then we recover the formula for the Boltzmann entropy. As we will see later on, the Helmholtz free energy $\mathcal{A}$ plays a role for Markov processes. It emerges also from the strong law of large numbers. More precisely, let us equip the set $\mathcal{M}_1$ of probability measures on $\dR^d$ with the narrow topology, which is the dual topology with respect to continuous and bounded functions. If $X_1,\ldots,X_N$ are i.i.d.\ random variables with law $\mu_*\in\mathcal{M}_1$, then their empirical distribution $\mu_N:=\frac{1}{N}\sum_{k=1}^N\delta_{X_k}$ is a random variable on $\mathcal{M}_1$, and an asymptotic analysis due to Ivan Sanov (1919 -- 1968) in the 1950's reveals that for every Borel set $A\subset\mathcal{M}_1$, as $N\gg1$, \[ \dP(\mu_N\in A)\approx \exp\PAR{-N\inf_{A}\mathcal{K}} \quad\text{where}\quad \mathcal{K}(\mu):=\int\!\frac{d\mu}{d\mu_*}\log\PAR{\frac{d\mu}{d\mu_*}}\,d\mu_*.
\] The rigorous version, known as the Sanov theorem, says more precisely (see \cite{MR2571413} for a proof) that \[ -\inf_{\mathrm{int}(A)}\mathcal{K} \leq \liminf_{N\to\infty}\frac{\log\dP(\mu_N\in A)}{N} \leq \limsup_{N\to\infty}\frac{\log\dP(\mu_N\in A)}{N} \leq -\inf_{\mathrm{clo}(A)}\mathcal{K} \] where $\mathrm{int}(A)$ and $\mathrm{clo}(A)$ are the interior and the closure of $A$. Using the terminology of Srinivasa Varadhan\footnote{Srinivasa Varadhan (1940 -- ) is the (main) father of modern large deviations theory.}, ${(\mu_N)}_{N\geq1}$ satisfies a large deviations principle with speed $N$ and rate function $\mathcal{K}$. The functional $\mathcal{K}:\mathcal{M}_1\to\dR\cup\{+\infty\}$ is the Kullback-Leibler relative entropy with respect to $\mu_*$. By convention $\mathcal{K}(\mu):=+\infty$ if $\mu\not\ll\mu_*$. If $d\mu_*(x)=f_*(x)\,dx=Z_\beta^{-1}e^{-\beta V}\,dx$ and $d\mu=f\,dx$ then \[ \mathcal{K}(\mu)=\beta\PAR{\mathcal{A}(f)-\mathcal{A}(f_*)}. \] The Sanov theorem is a refinement of the strong law of large numbers, since by the first Borel-Cantelli lemma, one obtains that with probability one, $\lim_{N\to\infty}\mu_N=\mu_*=\arg\inf \mathcal{K}$. The large deviations rate function $\mathcal{K}$ is convex and lower semicontinuous with respect to the narrow topology, which is the topology of convergence with respect to bounded and continuous test functions. This topology can be metrized by the metric $d(\mu,\nu):=\sup_{h\in\cH}\int\!h\,d(\mu-\nu)$ where $\cH:=\{h:\max(\NRM{h}_\infty,\LIP{h})\leq1\}$.
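For the simplest example, empirical frequencies of a fair coin, the large deviations rate can be compared with the exact binomial tail. A Python sketch (the event and the sample size are arbitrary choices): the rate is the Kullback-Leibler divergence of the Bernoulli laws, $\mathcal{K}=a\log\frac{a}{p}+(1-a)\log\frac{1-a}{1-p}$.

```python
import math

def log_binom_tail(N, p, a):
    # log P(S_N / N >= a) for S_N ~ Binomial(N, p), via a stable log-sum-exp
    terms = [math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
             + k * math.log(p) + (N - k) * math.log(1 - p)
             for k in range(math.ceil(a * N), N + 1)]
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

def kl(a, p):
    # relative entropy between Bernoulli(a) and Bernoulli(p)
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

N, a = 2000, 0.7
rate = -log_binom_tail(N, 0.5, a) / N  # approaches kl(0.7, 0.5) as N grows
```

The subexponential corrections are of order $(\log N)/N$, so the agreement is already close at moderate $N$.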
Now for $A=A_\varepsilon=B(\mu,\varepsilon):=\{\nu:d(\mu,\nu)\leq\varepsilon\}$ we have \[ \dP(\mu_N\in A) =\mu_*^{\otimes N}\PAR{(x_1,\ldots,x_N)\in(\dR^d)^N:\sup_{h\in\cH}\PAR{\frac{1}{N}\sum_{i=1}^Nh(x_i)-\int\!h\,d\mu}\leq \varepsilon}, \] and thanks to the Sanov theorem, we obtain the ``volumetric'' formula \[ \inf_{\varepsilon>0} \limsup_{N\to\infty} \frac{1}{N}\log \mu_*^{\otimes N}\PAR{(x_1,\ldots,x_N)\in(\dR^d)^N:\sup_{h\in\cH}\PAR{\frac{1}{N}\sum_{i=1}^Nh(x_i)-\int\!h\,d\mu}\leq \varepsilon} =-\mathcal{K}(\mu). \] \subsection{Names} The letter $\mathcal{S}$ was chosen by Rudolf Clausius (1822 -- 1888) for entropy in thermodynamics, possibly in honor of Sadi Carnot (1796 -- 1832). The term \emph{entropy} was forged by Clausius in 1865 from the Greek «$\eta\ \tau\rho o\pi\eta$». The letter $H$ used by Boltzmann is the capital Greek letter $\eta$. The letter $\mathcal{A}$ used for the Helmholtz free energy comes from the German word ``Arbeit'' for work. \medskip \begin{quote}``I propose to name the quantity $\mathcal{S}$ the entropy of the system, after the Greek word $\eta\ \tau\rho o\pi\eta$ (en tropein), the transformation. I have deliberately chosen the word entropy to be as similar as possible to the word energy: the two quantities to be named by these words are so closely related in physical significance that a certain similarity in their names appears to be appropriate.'' \begin{flushright} Rudolf Clausius, 1865 \end{flushright} \end{quote} \subsection{H-Theorem} Back to the motivations of Boltzmann, let us recall that the first principle of Carnot-Clausius thermodynamics\footnote{According to Vladimir Igorevitch Arnold (1937 -- 2010), ``Every mathematician knows it is impossible to understand an elementary course in thermodynamics.''.
Nevertheless, the reader may try \cite{fermi,MR1724307}, and \cite{MR1881344} for history.} states that the internal energy of an isolated system is constant, while the second principle states that there exists an extensive state variable called the entropy that can never decrease for an isolated system. Boltzmann wanted to derive the second principle from the idea (controversial, at that time) that matter is made of atoms. Let us consider an ideal isolated gas made of particles (molecules) in a box with periodic boundary conditions (torus) to keep things as simple as possible. There are of course too many particles to write the equations of Newton for all of them. Newton is in a way beaten by Avogadro! The idea of Boltzmann was to propose a statistical approach (perhaps inspired by the one of Euler in fluid mechanics, and by the work of Maxwell, see \cite{MR1881344}): instead of keeping track of each particle, let $(x,v)\mapsto f_t(x,v)$ be the probability density of the distribution of position $x\in\dR^d$ and velocity $v\in\dR^d$ of particles at time $t$. Then one can write an evolution equation for $t\mapsto f_t$, that takes into account the physics of elastic collisions. It is a nonlinear partial differential equation known as the Boltzmann equation: \[ \pd_t f_t(x,v) =- v\cdot\pd_x f_t(x,v) + Q(f_t,f_t)(x,v). \] The first term in the right hand side is a linear transport term, while the second term $Q(f_t,f_t)$ is quadratic in $f_t$, a double integral actually, and captures the physics of elastic collisions by averaging over all possible input and output velocities (note here a loss of microscopic information). This equation admits conservation laws. Namely, for every time $t\geq0$, $f_t$ is a probability density and the energy of the system is constant (first principle): \[ f_t\geq0,\quad\pd_t\int\!f_t(x,v)\,dxdv=0,\quad \pd_t\displaystyle\iint\!v^2\,f_t(x,v)\,dxdv=0.
\] These constraints define a class of densities $\mathcal{C}$ on $\dR^d\times\dR^d$ over which the Boltzmann entropy $\mathcal{S}$ achieves its (Gaussian in velocity and uniform in position) maximum \[ f_*=\arg\max_\mathcal{C} \mathcal{S}. \] The H-Theorem states that the entropy $\mathcal{S}=-H$ is monotonic along the Boltzmann equation: \[ \pd_t \mathcal{S}(f_t) \geq 0, \] and more precisely, \[ \mathcal{S}(f_t)\underset{t\to\infty}{\nearrow}\mathcal{S}(f_*)=\max_\mathcal{C} \mathcal{S} \] where $\mathcal{C}$ is the class defined by the conservation laws. A refined analysis gives that \[ f_t\underset{t\to\infty}{\longrightarrow} f_*=\arg\max_\mathcal{C} \mathcal{S}. \] In the space-homogeneous simplified case, $f_t$ depends only on the velocity variable, giving a Gaussian equilibrium for velocities by maximum entropy! In kinetic theory of gases, it is customary to call ``Maxwellian law'' the standard Gaussian law on velocities. We refer to \cite{MR3076094} for a discussion on the concept of irreversibility and the Boltzmann H-Theorem. The work of Boltzmann in statistical physics (nineteenth century) echoes the works of Euler in fluid mechanics (eighteenth century), and of Newton in dynamics (seventeenth century). Before the modern formalization of probability theory and of partial differential equations with functional analysis, Boltzmann, just like Euler, was able to forge a deep concept melting the two! The Boltzmann H-Theorem had, and still has, a deep influence, with for instance the works of Kac, Lanford, Cercignani, Sinai, DiPerna and Lions, Desvillettes and Villani, Saint-Raymond, \ldots.
\medskip \begin{quote} ``Although Boltzmann’s H-Theorem is 135 years old, present-day mathematics is unable to prove it rigorously and in satisfactory generality. The obstacle is the same as for many of the famous basic equations of mathematical physics: we don’t know whether solutions of the Boltzmann equations are smooth enough, except in certain particular cases (close-to-equilibrium theory, spatially homogeneous theory, close-to-vacuum theory). For the moment we have to live with this shortcoming.'' \end{quote} \begin{flushright} Cédric Villani, 2008, excerpt from \cite{MR2509760}\\ H-Theorem and beyond: Boltzmann’s entropy in today’s mathematics \end{flushright} \subsection{Keeping in mind the structure} For our purposes, let us keep in mind this idea of evolution equation, conservation law, monotonic functional, and equilibrium as optimum (of the monotonic functional) under constraint (provided by the conservation law). It will reappear! \subsection{Markov processes and Helmholtz free energy} A Markov process can always be seen as a deterministic evolution equation of a probability law. By analogy with the Boltzmann equation, let us consider a Markov process $(X_t,t\in\dR_+)$ on $\dR^d$. Let us focus on structure and relax the rigor to keep things simple. For any $t\geq0$ and continuous and bounded test function $h$, for any $x\in\dR^d$, \[ P_t(h)(x):=\mathbb{E}(h(X_t)|X_0=x). \] Then $P_0(h)=h$, and, thanks to the Markov property, the one parameter family $P=(P_t,t\in\dR_+)$ forms a semigroup of operators acting on bounded continuous test functions, with $P_0=\mathrm{id}$. Let us assume that the process admits an invariant measure $\mu_*$, meaning that for every $t\geq0$ and $h$, \[ \int\!P_t(h)\,d\mu_*=\int\!h\,d\mu_*. \] The semigroup is contractive in $L^p(\mu_*)$: $\NRM{P_t}_{p\to p}\leq1$ for any $p\in[1,\infty]$ and $t\geq0$. The semigroup is Markov: $P_t(h)\geq0$ if $h\geq0$, and $P_t(1)=1$.
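These semigroup properties are easy to verify on the simplest example, a two-state Markov jump process. A NumPy sketch (the jump rates are arbitrary; the closed form $e^{tL}=I+\frac{1-e^{-(a+b)t}}{a+b}L$ is specific to the $2\times2$ case, where $L^2=-(a+b)L$):

```python
import numpy as np

a, b = 1.5, 0.5                      # jump rates (arbitrary)
L = np.array([[-a, a], [b, -b]])     # generator: rows sum to zero
pi = np.array([b, a]) / (a + b)      # invariant law

def P(t):
    # closed-form matrix exponential e^{tL} for a 2-state generator
    return np.eye(2) + (1 - np.exp(-(a + b) * t)) / (a + b) * L

t = 0.8
Pt = P(t)
row_sums = Pt.sum(axis=1)            # Markov property: P_t(1) = 1
invariance = pi @ Pt                 # invariance: pi P_t = pi
semigroup = P(0.3) @ P(0.5)          # semigroup property: P_s P_t = P_{s+t}
gen_mean = pi @ L                    # integral of L h against mu_* vanishes
```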
The infinitesimal generator of the semigroup is \( Lh=\pd_{t=0}P_t(h), \) for any $h$ in the domain of $L$ (we ignore these aspects for simplicity). In particular \[ \int\!Lh\,d\mu_* =\pd_{t=0}\int\!P_t(h)\,d\mu_* =\pd_{t=0}\int\!h\,d\mu_* =0. \] When $L$ is a second order linear differential operator without constant term, then we say that the process is a Markov diffusion. Important examples of such Markov diffusions are given in Table \ref{tab:diffproc}. The backward and forward Chapman-Kolmogorov equations are \[ \pd_tP_t=LP_t=P_tL. \] Let us denote by $P_t^*$ the adjoint of $P_t$ in $L^2(\mu_*)$, and let us define \[ \mu_t:=\mathrm{Law}(X_t)\quad\text{and}\quad g_t:=\frac{d\mu_t}{d\mu_*}. \] Then $g_t=P_t^*(g_0)$. If we denote $L^*=\pd_{t=0}P_t^*$ then we obtain the evolution equation \[ \pd_tg_t=L^*g_t. \] The invariance of $\mu_*$ can be seen as a fixed point: if $g_0=1$ then $g_t=P_t^*(1)=1$ for all $t\geq0$, and $L^*1=0$. One may prefer to express the evolution of the density \[ f_t:=\frac{d\mu_t}{dx}=g_tf_* \quad\text{where}\quad f_*:=\frac{d\mu_*}{dx}. \] We have then $\pd_tf_t=G^*f_t$ where $G^*h:=L^*(h/f_*)f_*$. The linear evolution equation \[ \pd_tf_t=G^*f_t \] is in a sense the Markovian analogue of the Boltzmann equation (which is nonlinear!). \begin{table} \begin{tabular}{c||c|c|c} & Brownian motion & Ornstein-Uhlenbeck process & Overdamped Langevin process\\\hline S.D.E.\ & $dX_t=\sqrt{2}dB_t$ & $dX_t=\sqrt{2}dB_t-X_tdt$ & $dX_t=\sqrt{2}dB_t-\nabla V(X_t)\,dt$\\ $\mu_*$ & $dx$ & $\cN(0,I_d)$ & $Z^{-1}e^{-V(x)}\,dx$\\ $f_*$ & $1$ & $(2\pi)^{-d/2}e^{-\NRM{x}_2^2/2}$ & $Z^{-1}e^{-V}$ \\ $L f$ & $\Delta f$ & $\Delta f-x\cdot\nabla f$ & $\Delta f-\nabla V\cdot\nabla f$\\ $G^* f$ & $\Delta f$ & $\Delta f+\mathrm{div}(xf)$ & $\Delta f+\mathrm{div}(f\nabla V)$\\ $\mu_t$ & $\mathrm{Law}(X_0+\sqrt{2t}G)$ & $\mathrm{Law}(e^{-t}X_0+\sqrt{1-e^{-2t}}G)$ & Not explicit in general\\ $P_t$ & Heat semigroup & O.-U.
semigroup & General semigroup \end{tabular} \caption[Two fundamental Gaussian processes on $\dR^d$]{Two fundamental Gaussian processes on $\dR^d$, Brownian Motion and Ornstein-Uhlenbeck, as Gaussian special cases of Markov diffusion processes\protect\footnotemark.} \label{tab:diffproc} \end{table} \footnotetext{The $\sqrt{2}$ factor in the S.D.E.\ makes it possible to avoid a factor $1/2$ in the infinitesimal generators.} Let us focus on the case where $f_*$ is a density, meaning that $\mu_*$ is a probability measure. This excludes Brownian motion, but allows the Ornstein-Uhlenbeck process. By analogy with the Boltzmann equation, we have the first two conservation laws $f_t\geq0$ and $\int\!f_t\,dx=1$, but the average energy has no reason to be conserved for a Markov process. Indeed, denoting $\mu_0:=\mathrm{Law}(X_0)$, the following quantity has no reason to have a constant sign (one can check this on the Ornstein-Uhlenbeck process!): \[ \pd_t\int\!Vf_t\,dx =\pd_t\int\!P_t(V)\,d\mu_0 =\int\!P_t(LV)\,d\mu_0. \] Nevertheless, if we set $f_*=Z_\beta^{-1}e^{-\beta V}$ then there exists a functional which is monotonic and which admits the invariant law $\mu_*$ as a unique optimizer: the Helmholtz free energy defined by \[ \mathcal{A}(f):=\int\!V(x)\,f(x)\,dx-\frac{1}{\beta}\mathcal{S}(f). \] In order to compute $\pd_t\mathcal{A}(f_t)$, we first observe that for any test function $g$, \[ \int\!L^*g\,d\mu_* =0. \] Since $\mu_*$ is invariant, we have $P^*_t(1)=1$ for every $t\geq0$, and, since $P_t^*(g)\geq0$ if $g\geq0$, it follows that the linear form $g\mapsto P^*_t(g)(x)$ is a probability measure\footnote{Actually $P_t^*(g)(x)$ is the value at point $x$ of the density with respect to $\mu_*$ of the law of $X_t$ when $X_0\sim g\,d\mu_*$.}. Recall that \[ \mathcal{A}(f)-\mathcal{A}(f_*)=\frac{1}{\beta}\int\!\Phi(g)\,d\mu_* \] where $\Phi(u):=u\log(u)$.
For every $0\leq s\leq t$, the Jensen inequality and the invariance of $\mu_*$ give \[ \beta\PAR{\mathcal{A}(f_t)-\mathcal{A}(f_*)} =\int\!\Phi(P_{t-s}^*(g_s))\,d\mu_* \leq\int\!P_{t-s}^*(\Phi(g_s))\,d\mu_* =\int\!\Phi(g_s)\,d\mu_* =\beta\PAR{\mathcal{A}(f_s)-\mathcal{A}(f_*)}, \] which shows that the function $t\mapsto\mathcal{A}(f_t)$ is non-increasing. Alternatively, the Jensen inequality also gives $\Phi(P^*_t(g))\leq P^*_t(\Phi(g))$, and the derivative at $t=0$ gives $\Phi'(g)L^*g\leq L^*\Phi(g)$, which provides \[ \int\!\Phi'(g)L^*g\,d\mu_*\leq0. \] Used with $g=g_t=f_t/f_*$, this gives \[ \beta\pd_t\mathcal{A}(f_t) =\pd_t\int\!\Phi(g_t)\,d\mu_* =\int\!\Phi'(g_t)L^*g_t\,d\mu_* \leq0. \] It follows that the Helmholtz free energy decreases along the Markov process: \[ \pd_t\mathcal{A}(f_t)\leq0. \] Of course we expect, possibly under more assumptions, that $\mathcal{A}(f_t)\searrow\mathcal{A}(f_*)=\min\mathcal{A}$ as $t\to\infty$. Let us assume for simplicity that $\beta=1$ and that the process is the Markov diffusion generated by $L=\Delta -\nabla V\cdot\nabla$. In this case $\mu_*$ is symmetric for the Markov process, and $L=L^*$, which makes most aspects simpler. By the chain rule $L(\Phi(g))=\Phi'(g)Lg+\Phi''(g)\ABS{\nabla g}^2$, and thus, by invariance, \[ \pd_t\mathcal{A}(f_t) =\int\!\Phi'(g_t)Lg_t\,d\mu_* =-\cF(g_t)\leq0 \quad\text{where}\quad \cF(g):= \int\!\Phi''(g)\ABS{\nabla g}^2\,d\mu_* =\int\!\frac{\ABS{\nabla g}^2}{g}\,d\mu_*. \] The functional $\cF$ is known as the Fisher information\footnote{Named after Ronald Aylmer Fisher (1890 -- 1962), father of modern statistics among other things.}.
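For the Ornstein-Uhlenbeck process these quantities are explicit on Gaussian initial data, which gives a quick numerical illustration (a sketch assuming the standard closed forms for the relative entropy and relative Fisher information between Gaussians; here $\beta=1$ and the initial mean and variance are arbitrary):

```python
import math

m0, v0 = 2.0, 0.25  # OU flow started from N(m0, v0); equilibrium is N(0, 1)

def moments(t):
    # mean and variance of the OU law at time t (see the table above)
    return m0 * math.exp(-t), 1 + (v0 - 1) * math.exp(-2 * t)

def free_energy_gap(t):
    # A(f_t) - A(f_*) = KL( N(m_t, v_t) || N(0, 1) ), beta = 1
    m, v = moments(t)
    return 0.5 * (v + m * m - 1 - math.log(v))

def fisher(t):
    # relative Fisher information F(g_t) for Gaussian g_t = f_t / f_*
    m, v = moments(t)
    return m * m + (v - 1) ** 2 / v

gaps = [free_energy_gap(0.1 * k) for k in range(51)]
# gaps is non-increasing, decays like e^{-2t} (rho = 1 for OU),
# and the de Bruijn identity d/dt A(f_t) = -F(g_t) holds.
```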
The identity $\pd_t\mathcal{A}(f_t)=-\cF(g_t)$ is known as the de Bruijn identity. In the degenerate case $V\equiv0$, $f_*\equiv1$ is no longer a density, $\mu_*$ is the Lebesgue measure, the Markov process is Brownian motion, the infinitesimal generator is the Laplacian $L=\Delta$, the semigroup $(P_t,t\in\dR_+)$ is the heat semigroup, and we still have a de Bruijn identity $\pd_t\mathcal{S}(f_t)=\cF(f_t)$ where $\mathcal{S}=-\mathcal{A}$ since $V\equiv0$. The quantitative version of the monotonicity of the free energy along the Markov semigroup is related to Sobolev type functional inequalities. We refer to \cite{MR1845806,BGL} for more details. For instance, for every constant $\rho>0$, the following three properties are equivalent: \begin{itemize} \item Exponential decay of free energy: $\forall f_0\geq0$, $\forall t\geq0$, $\mathcal{A}(f_t)-\mathcal{A}(f_*)\leq e^{-2\rho t}(\mathcal{A}(f_0)-\mathcal{A}(f_*))$; \item Logarithmic Sobolev inequality: $\forall f\geq0$, $2\rho(\mathcal{A}(f)-\mathcal{A}(f_*))\leq \cF(f/f_*)$; \item Hypercontractivity: $\forall t\geq0$, $\NRM{P_t}_{q(0)\to q(t)}\leq1$ where $q(t):=1+e^{2\rho t}$. \end{itemize} The equivalence between the first two properties follows by taking the derivative in $t$ and by using the Grönwall lemma. The term ``Logarithmic Sobolev inequality'' is due to Leonard Gross, who showed in \cite{MR420249} the equivalence with hypercontractivity, via the basic fact that for any $g\geq0$, \[ \pd_{p=1}\NRM{g}_p^p =\pd_{p=1}\int\!e^{p\log(g)}\,d\mu_* =\int\!g\log(g)\,d\mu_* =\mathcal{A}(gf_*)-\mathcal{A}(f_*).
\]
The concept of hypercontractivity of semigroups goes back at least to Edward Nelson \cite{MR0210416}. One may ask if $t\mapsto\cF(f_t)$ is in turn monotonic. The answer involves a notion of curvature. Namely, using the diffusion property via the chain rule and reversibility, we get, after some algebra,
\[
\pd_t^2\mathcal{A}(f_t)
=-\pd_t\cF(g_t)
=2\!\!\int\!g_t\Gamma_{\!\!2}(\log(g_t))\,d\mu_*
\]
where $\Gamma_{\!\!2}$ is the Bakry-Émery ``gamma-two'' functional quadratic form given by\footnote{The terminology comes from $\Gamma_{\!\!2}(f):=\Gamma_{\!\!2}(f,f):=\frac{1}{2}(L(\Gamma(f,f))-2\Gamma(f,Lf))$ where $\Gamma$ is the ``carré du champ'' functional quadratic form $(f,g)\mapsto\Gamma(f,g)$ defined by $\Gamma(f)=\Gamma(f,f):=\frac{1}{2}(L(f^2)-2fLf)$. Here $\Gamma(f)=\ABS{\nabla f}^2$.}
\[
\Gamma_{\!\!2}(f)
=\NRM{\nabla^2 f}_{\mathrm{HS}}^2
+\nabla f\cdot (\nabla^2 V)\nabla f.
\]
See \cite{MR1845806,BGL} for the details. This comes from the Bochner commutation formula
\[
\nabla L=L\nabla-(\nabla^2V)\nabla.
\]
If $\Gamma_{\!\!2}\geq0$ then, along the semigroup, the Fisher information is non-increasing, and the Helmholtz free energy is convex (we already know that $\pd_t\mathcal{A}(f_t)\leq0$ and $\min\mathcal{A}=\mathcal{A}(f_*)$):
\[
\pd_t\cF(g_t)\leq0\quad\text{and}\quad\pd^2_t\mathcal{A}(f_t)\geq0.
\]
This holds for instance if $\nabla^2 V\geq0$ as quadratic forms\footnote{This means $y\cdot(\nabla^2 V(x))y\geq0$ for all $x,y\in\dR^d$, or equivalently $V$ is convex, or equivalently $\mu_*$ is log-concave.}. This is the case for the Ornstein-Uhlenbeck example, for which $\nabla^2 V=I_d$.
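In one dimension the formulas above reduce to $\Gamma_{\!\!2}(f)=f''^2+V''f'^2$ and $(Lf)'=L(f')-V''f'$, which can be checked symbolically from the definitions in the footnote (a sketch using sympy; the generator $L=\partial_x^2-V'\partial_x$ is the one-dimensional case of $\Delta-\nabla V\cdot\nabla$):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
V = sp.Function('V')(x)

def L(h):
    """1D generator L = d^2/dx^2 - V' d/dx."""
    return sp.diff(h, x, 2) - sp.diff(V, x) * sp.diff(h, x)

def Gamma(h, k):
    """Carre du champ: (L(hk) - h Lk - k Lh) / 2."""
    return (L(h * k) - h * L(k) - k * L(h)) / 2

def Gamma2(h):
    """Gamma_2(h) = (L Gamma(h,h) - 2 Gamma(h, Lh)) / 2."""
    return (L(Gamma(h, h)) - 2 * Gamma(h, L(h))) / 2

# Gamma_2(f) = f''^2 + V'' f'^2 in dimension 1.
claimed = sp.diff(f, x, 2)**2 + sp.diff(V, x, 2) * sp.diff(f, x)**2
assert sp.expand(Gamma2(f) - claimed) == 0

# Bochner commutation formula in dimension 1: (Lf)' = L(f') - V'' f'.
lhs = sp.diff(L(f), x)
rhs = L(sp.diff(f, x)) - sp.diff(V, x, 2) * sp.diff(f, x)
assert sp.expand(lhs - rhs) == 0
```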
Moreover, if there exists a constant $\rho>0$ such that for all $f$,
\[
\Gamma_{\!\!2}(f)\geq\rho\Gamma(f)
\quad\text{where}\quad\Gamma(f)=\ABS{\nabla f}^2
\]
then, for any $t\geq0$, $\pd_t\cF(g_t)\leq-2\rho\cF(g_t)$, and the Grönwall lemma gives the exponential decay of the Fisher information along the semigroup ($\rho=1$ in the Ornstein-Uhlenbeck example):
\[
\cF(g_t)\leq e^{-2\rho t}\cF(g_0).
\]
This also gives the exponential decay of the Helmholtz free energy $\mathcal{A}$ along the semigroup with rate $2\rho$, in other words, a Logarithmic Sobolev inequality with constant $2\rho$: for any $f_0$,
\[
\mathcal{A}(f_0)-\mathcal{A}(f_*)
=-\int_0^\infty\!\pd_t\mathcal{A}(f_t)\,dt
=\int_0^\infty\!\cF(g_t)\,dt
\leq \cF(g_0)\int_0^\infty\!e^{-2\rho t}\,dt
=\frac{\cF(f_0/f_*)}{2\rho}.
\]
We used here the Markov semigroup in order to interpolate between $f_0$ and $f_*$. This interpolation technique was extensively developed by Bakry and Ledoux in order to obtain functional inequalities, see \cite{BGL} and references therein. A modest personal contribution to this topic is \cite{MR2081075}. The best possible constant $\rho$ -- which is the largest -- in the inequality $\Gamma_{\!\!2}\geq\rho\Gamma$ is called the Bakry-Émery curvature of the Markov semigroup. The story remains essentially the same for Markov processes on Riemannian manifolds. More precisely, in the context of Riemannian geometry, the Ricci curvature tensor contributes additively to the $\Gamma_{\!\!2}$, see \cite{BGL}. Relatively recent works on this topic include extensions by Lott and Villani \cite{MR2480619}, von Renesse and Sturm \cite{MR2142879}, Ollivier \cite{MR2484937}, among others. The approach can be adapted to non-elliptic hypoelliptic evolution equations, see for instance Baudoin \cite{baudoin}.
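For a pure mean shift under the Ornstein-Uhlenbeck semigroup everything is explicit: if $f_0=N(m,1)$ then $f_t=N(me^{-t},1)$, the relative Fisher information is $\cF(g_t)=m^2e^{-2t}$, and the logarithmic Sobolev inequality with $\rho=1$ holds with equality. A small sketch (the closed-form Gaussian formulas are standard facts, assumed rather than taken from the text):

```python
import math

def kl_shift(m):       # A(f) - A(f_*) for f = N(m, 1), f_* = N(0, 1)
    return 0.5 * m * m

def fisher_shift(m):   # relative Fisher information F(f / f_*) for f = N(m, 1)
    return m * m

m, rho = 3.0, 1.0
for t in [0.0, 0.5, 1.0, 2.0]:
    mt = m * math.exp(-t)
    # Exponential decay of the Fisher information: F(g_t) = e^{-2 rho t} F(g_0).
    assert abs(fisher_shift(mt) - math.exp(-2 * rho * t) * fisher_shift(m)) < 1e-12
    # Log-Sobolev with constant 2 rho, saturated by mean-shifted Gaussians.
    assert abs(2 * rho * kl_shift(mt) - fisher_shift(mt)) < 1e-12
```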
The Boltzmannian idea of monotonicity along an evolution equation is also used in the work of Grigori Perelman on the Poincaré-Thurston conjecture, and in this case, the evolution equation, known as the Ricci flow of Hamilton, concerns the Ricci tensor itself, see \cite{perelman,xdli}. The exponential decay of the Boltzmann functional $H=-\mathcal{S}$ along the (nonlinear!) Boltzmann equation was conjectured by Cercignani and studied rigorously by Villani, see for instance \cite{MR2765747}. Many aspects remain valid for discrete time/space Markov chains, up to the lack of chain rule if the space is discrete. For instance, if an irreducible Markov chain in discrete time and finite state space $E$ has transition matrix $P$ and invariant law $\mu_*$, then ${(P^n)}_{n\geq0}$ is the discrete time Markov semigroup, $L:=P-I$ is the Markov generator, and one can show that for every initial law $\mu$, the discrete Helmholtz free energy is monotonic along the evolution equation, namely
\[
\sum_{x\in E}\Phi\PAR{\frac{\mu P^n(x)}{\mu_*(x)}}\mu_*(x)
\underset{n\to\infty}{\searrow}
0,
\]
where $\mu P^n(x)=\sum_{z\in E}\mu(z)P^n(z,x)$ and still $\Phi(u):=u\log(u)$. The details are in Thomas Liggett's book \cite[prop.~4.2]{MR2108619}. According to Liggett \cite[p.~120]{MR2108619} this observation goes back at least to Mark Kac \cite[p.~98]{MR0102849}. Quoting Persi Diaconis, ``\emph{the idea is that the maximum entropy Markov transition matrix with a given invariant law is clearly the matrix with all rows equal to this stationary law, taking a step in the chain increases entropy and keeps the stationary law the same.}''. Discrete versions of the logarithmic Sobolev inequality make it possible to refine the quantitative analysis of the convergence to the equilibrium of finite state space Markov chains. We refer for these aspects to the work of Diaconis and Saloff-Coste \cite{MR1410112,MR1490046}, the work of Laurent Miclo \cite{MR1478724}, and the book \cite{MT}.
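This discrete monotonicity is easy to observe numerically: take an irreducible aperiodic transition matrix $P$, compute $\mu_*$, and watch the discrete free energy of $\mu P^n$ decrease to $0$. A minimal sketch (the $3$-state matrix is an arbitrary example, not from the text):

```python
import math

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def step(mu, P):
    """One step of the chain: mu -> mu P."""
    return [sum(mu[z] * P[z][x] for z in range(len(mu))) for x in range(len(mu))]

# Invariant law by power iteration (the chain is irreducible and aperiodic).
pi = [1/3, 1/3, 1/3]
for _ in range(5000):
    pi = step(pi, P)

def free_energy(mu):
    """Discrete free energy sum_x Phi(mu(x)/pi(x)) pi(x) with Phi(u) = u log u."""
    return sum((m / p) * math.log(m / p) * p for m, p in zip(mu, pi) if m > 0)

mu = [1.0, 0.0, 0.0]
vals = []
for _ in range(50):
    vals.append(free_energy(mu))
    mu = step(mu, P)

assert all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))  # monotone decay
assert vals[-1] < 1e-6                                      # -> 0 as n -> infinity
```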
The relative entropy allows one to control the total variation distance: the so-called Pinsker or Csiszár-Kullback inequality states that for any probability measures $\mu$ and $\nu$ on $E$ with $\mu>0$,
\[
\sum_{x\in E}\ABS{\mu(x)-\nu(x)}
\leq
\sqrt{2\sum_{x\in E}\Phi\PAR{\frac{\nu(x)}{\mu(x)}}\mu(x)}.
\]
The analysis of the convexity of the free energy along the semigroup of at most countable state space Markov chains was considered in \cite{CDPP} and references therein. More precisely, let ${(X_t)}_{t\in\dR_+}$ be a continuous time Markov chain with at most countable state space $E$. Let us assume that it is irreducible, positive recurrent, and aperiodic, with unique invariant probability measure $\mu_*$, and with infinitesimal generator $L:E\times E\to\dR$. We have, for every $x,y\in E$,
\[
L(x,y)=\pd_{t=0}\dP(X_t=y|X_0=x).
\]
We see $L$ as a matrix with non-negative off-diagonal elements and zero-sum rows: $L(x,y)\geq0$ for every $x\neq y$, and $L(x,x)=-\sum_{y\neq x}L(x,y)$ for every $x\in E$. The invariance reads $0=\sum_{x\in E}\mu_*(x)L(x,y)$ for every $y\in E$. The operator $L$ acts on functions as $(Lf)(x)=\sum_{y\in E}L(x,y)f(y)$ for every $x\in E$. Since $\mu_*(x)>0$ for every $x\in E$, the free energy at unit temperature corresponds to the energy $V(x)=-\log(\mu_*(x))$, for which we have of course $\mathcal{A}(\mu_*)=0$. For any probability measure $\mu$ on $E$,
\[
\mathcal{A}(\mu)-\mathcal{A}(\mu_*)
=\mathcal{A}(\mu)
=\sum_{x\in E}\Phi\PAR{\frac{\mu(x)}{\mu_*(x)}}\mu_*(x).
\]
One can see $x\mapsto\mu(x)$ as a density with respect to the counting measure on $E$. For any time $t\in\dR_+$, if $\mu_t(x):=\dP(X_t=x)$ then $g_t(x):=\mu_t(x)/\mu_*(x)$ and $\pd_t g_t=L^*g_t$ where $L^*$ is the adjoint of $L$ in $\ell^2(\mu_*)$ which is given by $L^*(x,y)=L(y,x)\mu_*(y)/\mu_*(x)$.
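The adjoint formula can be verified directly on a small example: for a $3$-state generator $L$ (an arbitrary choice for illustration), the matrix $L^*(x,y)=L(y,x)\mu_*(y)/\mu_*(x)$ satisfies $\langle Lf,g\rangle_{\mu_*}=\langle f,L^*g\rangle_{\mu_*}$. A minimal sketch:

```python
L = [[-1.0, 0.7, 0.3],
     [0.4, -0.9, 0.5],
     [0.6, 0.2, -0.8]]
n = len(L)
h = 0.01
P = [[(1.0 if x == y else 0.0) + h * L[x][y] for y in range(n)] for x in range(n)]

# Invariant law by power iteration on P = I + h L.
mu = [1.0 / n] * n
for _ in range(50000):
    mu = [sum(mu[z] * P[z][x] for z in range(n)) for x in range(n)]
assert all(abs(sum(mu[x] * L[x][y] for x in range(n))) < 1e-10 for y in range(n))

# Adjoint of L in ell^2(mu): L*(x,y) = L(y,x) mu(y) / mu(x).
Ls = [[L[y][x] * mu[y] / mu[x] for y in range(n)] for x in range(n)]

def apply(M, u):
    """(Mu)(x) = sum_y M(x,y) u(y)."""
    return [sum(M[x][y] * u[y] for y in range(n)) for x in range(n)]

f, g = [1.0, -2.0, 0.5], [0.3, 1.0, -1.0]
lhs = sum(apply(L, f)[x] * g[x] * mu[x] for x in range(n))
rhs = sum(f[x] * apply(Ls, g)[x] * mu[x] for x in range(n))
assert abs(lhs - rhs) < 1e-9
```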
Some algebra reveals that
\[
\pd_t\mathcal{A}(\mu_t)=\sum_{x\in E}\SBRA{\Phi'(g_t)L^*g_t}(x)\mu_*(x).
\]
The right hand side is, up to a sign, the discrete analogue of the Fisher information. By reusing the convexity argument used before for diffusions, we get that $\pd_t\mathcal{A}(\mu_t)\leq0$. Moreover, we also get
\[
\pd_t^2\mathcal{A}(\mu_t)
=\sum_{x\in E}\SBRA{g_tLL\log(g_t)+\frac{(L^*g_t)^2}{g_t}}(x)\mu_*(x).
\]
The right hand side is a discrete analogue of the $\Gamma_{\!\!2}$-based formula obtained for diffusions. It can be nicely rewritten when $\mu_*$ is reversible. The lack of chain rule in discrete spaces explains the presence of two distinct terms in the right hand side. We refer to \cite{CDPP,venkat} for discussions of examples including birth-death processes. Our modest contribution to this subject can be found in \cite{MR2247924,MR3129037}. We refer to \cite{MR2484937} for some complementary geometric aspects. An analogue on $E=\dN$ of the Ornstein-Uhlenbeck process is given by the so-called M/M/$\infty$ queue for which $(Lf)(x)=\lambda(f(x+1)-f(x))+x\mu(f(x-1)-f(x))$ and $\mu_*=\mathrm{Poisson}(\lambda/\mu)$.
\section{Claude Shannon and the central limit theorem}
The Boltzmann entropy also plays a fundamental role in communication theory, founded in the 1940's by Claude Elwood Shannon (1916--2001), where it is known as ``Shannon entropy''. It has a deep interpretation in terms of uncertainty and information in relation with coding theory \cite{MR2239987}. For example the discrete Boltzmann entropy $\mathcal{S}(p)$ computed with a logarithm in base $2$ is the average number of bits per symbol needed to encode a random text with frequencies of symbols given by the law $p$.
This interpretation plays an essential role in lossless coding, and the Huffman algorithm for constructing Shannon entropic codes is probably one of the most widely used basic algorithms (data compression is everywhere). Another example concerns the continuous Boltzmann entropy which enters the computation of the capacity of continuous telecommunication channels (e.g.\ DSL lines).
\begin{quote}
``My greatest concern was what to call it. I thought of calling it `information', but the word was overly used, so I decided to call it `uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, `You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'\,''
\end{quote}
\begin{flushright}
Claude E. Shannon, 1961\\
Conversation with Myron Tribus, reported in \cite{Tribus:1971:EI}
\end{flushright}
For our purposes, let us focus on the link between the Boltzmann entropy and the central limit theorem, a link suggested by Shannon when he forged information theory in the 1940's.
\subsection{The CLT as an evolution equation}
The Central Limit Theorem (CLT) states that if $X_1,X_2,\ldots$ are i.i.d.\ real random variables with mean $\mathbb{E}(X_i)=0$ and variance $\mathbb{E}(X_i^2)=1$, then
\[
S_n:=
\frac{X_1+\cdots+X_n}{\sqrt{n}}
\underset{n\to\infty}{\overset{d}{\longrightarrow}}
\frac{e^{-\frac{1}{2}x^2}}{\sqrt{2\pi}}dx
\]
where the convergence to the Gaussian law holds in distribution (weak sense).
\subsection{Conservation law}
The first two moments are conserved along the CLT: for all $n\geq1$,
\[
\mathbb{E}(S_n)=0\quad\text{and}\quad\mathbb{E}(S_n^2)=1.
\]
By analogy with the H-Theorem, the CLT concerns an evolution equation of the law of $S_n$ along the discrete time $n$. When the sequence $X_1,X_2,\ldots$ is, say, bounded in $L^\infty$ then the convergence in the CLT holds in the sense of moments, and in particular, the first two moments are constant while the remaining moments of order $>2$ become universal at the limit. In other words, the first two moments are the sole information retained by the CLT from the initial data, via the conservation law. The limiting distribution in the CLT is the Gaussian law, which maximizes the Boltzmann entropy under a second moment constraint. If we denote by $f^{*n}$ the $n$-th convolution power of the density $f$ of the $X_i$'s then the CLT reads $\mathrm{dil}_{n^{-1/2}}(f^{*n})\to f_*$ where $f_*$ is the standard Gaussian and where $\mathrm{dil}_\alpha(h):=\alpha^{-1}h(\alpha^{-1}\cdot)$ is the density of the random variable $\alpha Z$ when $Z$ has density $h$.
\subsection{Analogy with H-Theorem}
Shannon observed \cite{MR0032134} that the entropy $\mathcal{S}$ is monotonic along the CLT when $n$ is a power of $2$, in other words $\mathcal{S}(S_{2^{m+1}})\geq \mathcal{S}(S_{2^{m}})$ for every integer $m\geq0$, which follows (a rigorous proof is due to Stam \cite{stam}) from
\[
\mathcal{S}\PAR{\frac{X_1+X_2}{\sqrt{2}}}=\mathcal{S}(S_2)\geq \mathcal{S}(S_1)=\mathcal{S}(X_1).
\]
By analogy with the Boltzmann H-Theorem, a conjecture attributed to Shannon (see also \cite{MR0506364}) says that the Boltzmann entropy $\mathcal{S}$ is monotonic along the CLT for any $n$, more precisely
\[
\mathcal{S}(X_1)=\mathcal{S}(S_1)\leq\cdots\leq \mathcal{S}(S_n)\leq \mathcal{S}(S_{n+1})
\leq\cdots\underset{n\to\infty}{\nearrow}\mathcal{S}(G),
\]
where $G$ is a standard Gaussian random variable. The idea of proving the CLT using the Boltzmann entropy is very old and goes back at least to Linnik \cite{MR0124081} in the 1950's, who, by the way, uses the term ``Shannon entropy''. But proving convergence differs from proving monotonicity, even if these two aspects are obviously lin(ni)ked \cite{MR2128238}. The approach of Linnik was further developed by Rényi, Csiszár, and many others, and we refer to the book of Johnson \cite{MR2109042} for an account on this subject. The first known proof of the Shannon monotonicity conjecture is relatively recent and was published in 2004 by Artstein, Ball, Barthe, and Naor \cite{MR2083473}. The idea is to pull back the problem to the monotonicity of the Fisher information. Recall that the Fisher information of a random variable $S$ with density $g$ is given by
\[
\cF(S):=\int\!\frac{\ABS{\nabla g}^2}{g}\,dx.
\]
It appears when one takes the derivative of $\mathcal{S}$ along an additive Gaussian perturbation. Namely, the de Bruijn formula states that if $X,G$ are random variables with $G$ standard Gaussian then
\[
\pd_t\mathcal{S}(X+\sqrt{t}G)=\frac{1}{2}\cF(X+\sqrt{t}G).
\]
Indeed, if $f$ is the density of $X$ then the density $P_t(f)$ of $X+\sqrt{t}G$ is given by the heat kernel
\[
P_t(f)(x)=(f*\mathrm{dil}_{\sqrt{t}} f_*)(x)
=\int\!f(y)\frac{e^{-\frac{1}{2t}(y-x)^2}}{\sqrt{2\pi t}}\,dy,
\]
which satisfies $\pd_tP_t(f)(x)=\frac{1}{2}\Delta_x P_t(f)(x)$, and which gives, by integration by parts,
\[
\pd_t\mathcal{S}(X+\sqrt{t}G)
=-\frac{1}{2}\int\!(1+\log P_tf)\Delta P_tf\,dx
=\frac{1}{2}\cF(P_tf).
\]
In the same spirit, we have the following integral representation, taken from \cite{MR2346565},
\[
\mathcal{S}(G)-\mathcal{S}(S)
=\int_0^{\infty}\!
\PAR{\cF(\sqrt{e^{-2t}}S+\sqrt{1-e^{-2t}}G)-1}\,dt.
\]
This allows one to deduce the monotonicity of $\mathcal{S}$ along the CLT from that of $\cF$ along the CLT,
\[
\cF(S_1)\geq \cF(S_2)\geq\cdots\geq \cF(S_n)\geq \cF(S_{n+1})\geq\cdots\searrow \cF(G),
\]
which is a tractable task \cite{MR2083473}. The de Bruijn identity involves Brownian motion started from the random initial condition $X$, and the integral representation above involves the Ornstein-Uhlenbeck process started from the random initial condition $X$. The fact that the Fisher information $\cF$ is non-increasing along a Markov semigroup means that the entropy $\mathcal{S}$ is concave along the Markov semigroup, a property which can be traced back to Stam (see \cite[Chapter 10]{MR1845806}). As we have already mentioned before, the quantitative version of such a concavity is related to Sobolev type functional inequalities and to the notion of Bakry-Émery curvature of Markov semigroups \cite{BGL}, a concept linked with the Ricci curvature in Riemannian geometry. Recent works on this topic include extensions by Villani, Sturm, Ollivier, among others.
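For a Gaussian starting point the de Bruijn formula can be checked in closed form: if $X\sim N(0,\sigma^2)$ then $X+\sqrt{t}G\sim N(0,\sigma^2+t)$, with differential entropy $\frac{1}{2}\log(2\pi e(\sigma^2+t))$ and Fisher information $1/(\sigma^2+t)$, so $\pd_t\mathcal{S}=\frac{1}{2}\cF$ exactly (standard Gaussian formulas, assumed here for illustration):

```python
import math

def entropy_gauss(v):
    """Differential entropy of N(0, v)."""
    return 0.5 * math.log(2 * math.pi * math.e * v)

def fisher_gauss(v):
    """Fisher information of N(0, v)."""
    return 1.0 / v

sigma2 = 0.7
for t in [0.1, 0.5, 2.0]:
    dt = 1e-6
    # Central finite difference of t -> S(X + sqrt(t) G).
    lhs = (entropy_gauss(sigma2 + t + dt) - entropy_gauss(sigma2 + t - dt)) / (2 * dt)
    rhs = 0.5 * fisher_gauss(sigma2 + t)
    assert abs(lhs - rhs) < 1e-8       # de Bruijn: d/dt S = F / 2
```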
\section{Dan-Virgil Voiculescu and the free central limit theorem}
Free probability theory was forged in the 1980's by Dan-Virgil Voiculescu (1946--), while working on isomorphism problems in von Neumann operator algebras of free groups. Voiculescu discovered later in the 1990's that free probability is the algebraic structure that appears naturally in the asymptotic global spectral analysis of random matrix models as the dimension tends to infinity. Free probability theory comes among other things with algebraic analogues of the CLT and the Boltzmann entropy, see \cite{MR1217253,MR1887698,MR2032363,AGZ}. The term ``free'' in ``free probability theory'' and in ``free entropy'' comes from the free group (see below), and has no relation with the term ``free'' in the Helmholtz free energy which comes from thermodynamics (available work obtainable at constant temperature). By analogy, the ``free free energy'' at unit temperature might be $\mathcal{A}_*(a)=\tau(V(a))-\chi(a)$ where $\chi$ is the Voiculescu free entropy. We will see in the last sections that such a functional appears as the rate function of a large deviations principle for the empirical spectral distribution of random matrix models! This is not surprising since the Helmholtz free energy, which is nothing else but a Kullback-Leibler relative entropy, is the rate function of the large deviations principle of Sanov, which concerns the empirical measure version of the law of large numbers.
\subsection{Algebraic probability space}
Let $\mathcal{A}$ be an algebra over $\mathbb{C}$, with unity $id$, equipped with an involution $a\mapsto a^*$ and a normalized linear form $\tau:\mathcal{A}\to\mathbb{C}$ such that $\tau(ab)=\tau(ba)$, $\tau(id)=1$, and $\tau(aa^*)\geq0$.
A basic non commutative example is given by the algebra of square complex matrices: $\mathcal{A}=\mathcal{M}_n(\mathbb{C})$, $id=I_n$, $a^*=\bar{a}^\top$, $\tau(a)=\frac{1}{n}\mathrm{Tr}(a)$, for which $\tau$ appears as an expectation with respect to the empirical spectral distribution: denoting $\lambda_1(a),\ldots,\lambda_n(a)\in\mathbb{C}$ the eigenvalues of $a$, we have
\[
\tau(a)=\frac{1}{n}\sum_{k=1}^n\lambda_k(a)=\int\!x\,d\mu_a(x)
\quad\text{where}\quad
\mu_a:=\frac{1}{n}\sum_{k=1}^n\delta_{\lambda_k(a)}.
\]
If $a=a^*$ (we say that $a$ is real or Hermitian) then the probability measure $\mu_a$ is supported in $\dR$ and is fully characterized by the collection of moments $\tau(a^m)$, $m\geq0$, which can be seen as a sort of algebraic distribution of $a$. Beyond this example, by analogy with classical probability theory, we may see the elements of $\mathcal{A}$ as algebraic analogues of bounded random variables, $\tau$ as an algebraic analogue of an expectation, and $\tau(a^m)$ as the analogue of the $m$-th moment of $a$. We say that $a\in\mathcal{A}$ has mean $\tau(a)$ and variance $\tau((a-\tau(a))^2)=\tau(a^2)-\tau(a)^2$, and that $a$ is centered when $\tau(a)=0$. The $*$-law of $a$ is the collection of mixed moments of $a$ and $a^*$ called the $*$-moments:
\[
\tau(b_1\cdots b_m)\text{ where } b_1,\ldots,b_m\in\{a,a^*\}\text{ and } m\geq1.
\]
In contrast with classical probability, the product of algebraic variables may be non commutative. When $a\in\mathcal{A}$ is real ($a^*=a$), the $*$-law of $a$ boils down to the moments: $\tau(a^m)$, $m\geq0$.
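The identification of $\tau$ with integration against $\mu_a$ can be checked numerically on a random Hermitian matrix (a sketch using numpy; the matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = (b + b.conj().T) / 2                      # Hermitian: a = a*

tau = lambda m: np.trace(m).real / n          # normalized trace
lam = np.linalg.eigvalsh(a)                   # real spectrum since a = a*

# tau(a^m) = (1/n) sum_k lambda_k^m, the m-th moment of mu_a.
for m in range(5):
    assert abs(tau(np.linalg.matrix_power(a, m)) - np.mean(lam**m)) < 1e-10
```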
In classical probability theory, the law of a real bounded random variable $X$ is characterized by its moments $\mathbb{E}(X^m)$, $m\geq0$, thanks to the (Stone-)Weierstrass theorem. When the bounded variable is not real and takes its values in $\mathbb{C}$ then we need the mixed moments $\mathbb{E}(X^m\bar{X}^n)$, $m,n\geq0$. One can connect $*$-law and spectrum even for non real elements. Namely, if $a\in\mathcal{M}_n(\mathbb{C})$ and if $\mu_a:=\frac{1}{n}\sum_{k=1}^n\delta_{\lambda_k(a)}$ is its empirical spectral distribution in $\mathbb{C}$, then, for any $z\not\in\{\lambda_1(a),\ldots,\lambda_n(a)\}$,
\begin{align*}
\frac{1}{2}\tau(\log((a-zid)(a-zid)^*))
&=\frac{1}{n}\log\ABS{\det(a-zI_n)} \\
&=\int\!\log\ABS{z-\lambda}\,d\mu_a(\lambda) \\
&=(\log\ABS{\cdot}*\mu_a)(z) \\
&=:-U_{\mu_a}(z).
\end{align*}
The quantity $U_{\mu_a}(z)$ is exactly the logarithmic potential at point $z\in\mathbb{C}$ of the probability measure $\mu_a$. Since $-\frac{1}{2\pi}\log\ABS{\cdot}$ is the so-called fundamental solution of the Laplace equation in dimension $2$, it follows that in the sense of Schwartz distributions,
\[
\mu_a=\frac{1}{2\pi}\Delta U_{\mu_a}.
\]
Following Brown, beyond the matrix case $\mathcal{A}=\mathcal{M}_n(\mathbb{C})$, this suggests defining the spectral measure of an element $a\in\mathcal{A}$ of an abstract algebra $\mathcal{A}$ as being the probability measure $\mu_a$ on $\mathbb{C}$ given by
\[
\mu_a:=-\frac{1}{\pi}\Delta\tau(\log((a-zid)(a-zid)^*))
\]
where here again $\Delta=\pd\OL{\pd}$ is the two-dimensional Laplacian acting on $z$, as soon as we know how to define the operator $\log((a-zid)(a-zid)^*)$ for every $z$ such that $a-zid$ is invertible. This makes sense for instance if $\mathcal{A}$ is a von Neumann algebra of bounded operators on a Hilbert space, since one may define $\log(b)$ if $b^*=b$ by using functional calculus. The moral of the story is that the $*$-law of $a$ determines the $*$-law of the Hermitian element $(a-zid)(a-zid)^*$ for every $z\in\mathbb{C}$, which in turn determines the Brown spectral measure $\mu_a$. This strategy is known as Hermitization. The so-called Gelfand-Naimark-Segal (GNS, see \cite{AGZ}) construction shows that any algebraic probability space can be realized as a subset of the algebra of bounded operators on a Hilbert space. Using then the spectral theorem, this shows that any compactly supported probability measure is the $*$-law of some algebraic random variable.
\subsection{Freeness}
The notion of freeness is an algebraic analogue of independence. In classical probability theory, a collection of $\sigma$-fields are independent if the product of bounded random variables is centered as soon as the factors are centered and measurable with respect to different $\sigma$-fields.
We say that $\mathcal{B}\subset\mathcal{A}$ is a sub-algebra of $\mathcal{A}$ when it is stable under the algebra operations and the involution, and contains the unity $id$. By analogy with classical probability, we say that the collection ${(\mathcal{A}_i)}_{i\in I}$ of sub-algebras of $\mathcal{A}$ are \emph{free} when for any integer $m\geq1$, $i_1,\ldots,i_m\in I$, and $a_1\in\mathcal{A}_{i_1},\ldots,a_m\in\mathcal{A}_{i_m}$, we have
\[
\tau(a_1\cdots a_m)=0
\]
as soon as $\tau(a_1)=\cdots=\tau(a_m)=0$ and $i_1\neq \cdots\neq i_m$ (only consecutive indices are required to be different). We say that ${(a_i)}_{i\in I}\subset\mathcal{A}$ are free when the sub-algebras that they generate are free. If for instance $a,b\in\mathcal{A}$ are free and centered, then $\tau(ab)=0$, and $\tau(abab)=0$. Note that in classical probability, the analogue of this last expression is never zero if $a$ and $b$ are not zero, due to the commutation relation $abab=a^2b^2$. Can we find examples of free matrices in $\mathcal{M}_n(\mathbb{C})$? Actually, this will not give exciting answers. Freeness is more suited for infinite dimensional operators. It turns out that the definition and the name of freeness come from a fundamental infinite dimensional example constructed from the free group. More precisely, let $F_n$ be the free group\footnote{``Free'' because it is the free product of $n$ copies of $\mathbb{Z}$, without additional relation.} with $2n$ generators (letters and anti-letters) $g_1^{\pm1},\ldots,g_n^{\pm1}$ with $n\geq2$ (for $n=1$, $F_n=\dZ$ is commutative). Let $\varnothing$ be the neutral element of $F_n$ (empty string).
Let $A_n$ be the associated group algebra, identified with a sub-algebra of $\ell_\mathbb{C}^2(F_n)$. Each element of $A_n$ can be seen as a finitely supported complex measure of the form $\sum_{w\in F_n}c_w\delta_w$. The collection ${(\delta_w)}_{w\in F_n}$ is the canonical basis: $\DOT{\delta_w,\delta_{w'}}=\mathbf{1}_{w=w'}$. The product on $A_n$ is the convolution of measures on $F_n$:
\[
\PAR{\sum_{w\in F_n}c_w\delta_w}\PAR{\sum_{w\in F_n}c'_w\delta_w}
=\sum_{w\in F_n}\PAR{\sum_{v\in F_n}c_vc'_{v^{-1}w}}\delta_w
=\sum_{w\in F_n}(c*c')_w\delta_w.
\]
Now, let $\mathcal{A}$ be the algebra over $\mathbb{C}$ of linear operators from $\ell_\mathbb{C}^2(F_n)$ to itself. The product in $\mathcal{A}$ is the composition of operators. The involution $*$ in $\mathcal{A}$ is the transposition-conjugacy of operators. The identity operator is denoted $id$. We consider the linear form $\tau:\mathcal{A}\to\mathbb{C}$ defined for every $a\in\mathcal{A}$ by
\[
\tau(a)=\DOT{a\delta_\varnothing,\delta_\varnothing}.
\]
For every $w\in F_n$, let $u_w\in\mathcal{A}$ be the left translation operator defined by $u_w(\delta_v)=\delta_{wv}$. Then
\[
u_w u_{w'}=u_{ww'},\quad (u_w)^*=u_{w^{-1}},\quad u_\varnothing=id,
\]
and therefore $u_w u_w^*=u_w^* u_w=id$ (we say that $u_w$ is unitary). We have $\tau(u_w)=\mathbf{1}_{w=\varnothing}$. Let us show that the sub-algebras $\mathcal{A}_1,\ldots,\mathcal{A}_n$ generated by $u_{g_1},\ldots,u_{g_n}$ are free. Each $\mathcal{A}_i$ consists of linear combinations of $u_{g_i^r}$ with $r\in\dZ$, and centering forces $r=0$.
For every $w_1,\ldots,w_m\in F_n$, we have
\[
\tau(u_{w_1}\cdots u_{w_m})
=\tau(u_{w_1\cdots w_m})
=\mathbf{1}_{w_1\cdots w_m=\varnothing}.
\]
Let us consider the case $w_j=g_{i_j}^{r_j}$, so that $u_{w_j}\in\mathcal{A}_{i_j}$, with $r_j\in\dZ$, for every $1\leq j\leq m$. Either $r_j=0$ and $w_j=\varnothing$, or $r_j\neq0$ and $\tau(u_{w_j})=0$. Let us assume that $r_1\neq0,\ldots,r_m\neq0$, which implies $\tau(u_{w_1})=\cdots=\tau(u_{w_m})=0$. Now, the Cayley graph of $F_n$ is a tree, and since a tree has no cycles, a path started at the root $\varnothing$ cannot return to the root unless it backtracks locally at some point. Consequently, if additionally $i_1\neq\cdots\neq i_m$ (i.e.\ two consecutive indices are different), then we necessarily have $w_1\cdots w_m\neq\varnothing$, and therefore $\tau(u_{w_1}\cdots u_{w_m})=0$. From this observation one can conclude that $\mathcal{A}_1,\ldots,\mathcal{A}_n$ are free. Beyond the example of the free group: if $G_1,\ldots,G_n$ are groups and $G$ their free product, then the algebras generated by $\{u_g:g\in G_1\},\ldots,\{u_g:g\in G_n\}$ are always free in the one generated by $\{u_g:g\in G\}$.
\begin{table}
\begin{tabular}{c|c}
Classical probability & Free probability\\\hline
Bounded r.v.\ $X$ on $\mathbb{C}$ & Algebra element $a\in\mathcal{A}$\\
$\mathbb{E}(X^m\bar{X}^{n})$ & $\tau(b_1\cdots b_m)$, $b\in\{a,a^*\}^m$ \\
Law = Moments & Law = $*$-moments\\
$X$ is real & $a=a^*$ \\
Independence & Freeness\\
Classical convolution $*$ & Free convolution $\boxplus$\\
Gaussian law (with CLT) & Semicircle law (with CLT)\\
Boltzmann entropy $\mathcal{S}$ & Voiculescu entropy $\chi$
\end{tabular}
\caption{Conceptual dictionary between classical probability and free probability.
The first is commutative while the second is typically non commutative. Free probability is the algebraic structure that emerges from the asymptotic analysis, as the dimension tends to infinity, of the empirical spectral distribution of unitary invariant random matrices. The term free comes from the algebra of linear operators over the group algebra of the free group, an example in which the concept of freeness emerges naturally, in relation with the symmetric random walk on the infinite regular tree of even degree $\geq 4$ (which is the Cayley graph of a non-commutative free group).}
\end{table}
\subsection{Law of free couples and free convolution}
In classical probability theory, the law of a couple of independent random variables is fully characterized by the couple of laws of the variables. In free probability theory, the $*$-law of the couple $(a,b)$ is the collection of mixed moments in $a,a^*,b,b^*$. If $a,b$ are free, then one can compute the $*$-law of the couple $(a,b)$ by using the $*$-law of $a$ and $b$, thanks to the centering trick. For instance, in order to compute $\tau(ab)$, we may write using freeness $0=\tau((a-\tau(a))(b-\tau(b)))=\tau(ab)-\tau(a)\tau(b)$ to get $\tau(ab)=\tau(a)\tau(b)$. As a consequence, one can show that the $*$-law of a couple of free algebraic variables is fully characterized by the couple of $*$-laws of the variables. This works for arbitrary vectors of algebraic variables. In classical probability theory, the law of the sum of two independent random variables is given by the convolution of the laws of the variables. In free probability theory, the $*$-law of the sum $a+b$ of two free algebraic variables $a,b\in\mathcal{A}$ is given by the so-called free convolution $\mathrm{dist}(a)\boxplus\mathrm{dist}(b)$ of the $*$-laws of $a$ and $b$, which can be defined using the $*$-law of the couple $(a,b)$.
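The centering computations, and the contrast with the commutative world, can be made concrete in the free-group model above: representing elements of $F_2$ as reduced words, $\tau(u_{w_1}\cdots u_{w_m})=\mathbf{1}_{w_1\cdots w_m=\varnothing}$, and for the centered elements $a=\frac{u_{g_1}+u_{g_1^{-1}}}{\sqrt{2}}$ and $b=\frac{u_{g_2}+u_{g_2^{-1}}}{\sqrt{2}}$ one finds $\tau(a^2)=\tau(b^2)=1$ but $\tau(ab)=\tau(abab)=0$, whereas independent centered unit-variance real variables give $\mathbb{E}(XYXY)=1$. A minimal sketch:

```python
from itertools import product
from math import sqrt

def reduce_word(word):
    """Reduced form of a word in the free group; letters are (generator, +/-1)."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()                          # cancel adjacent g g^{-1}
        else:
            out.append((g, e))
    return tuple(out)

def tau(*factors):
    """tau of a product of linear combinations of translation operators u_w.

    Each factor is a list of (word, coefficient) pairs, and
    tau(u_w) = 1 iff the word w reduces to the empty word."""
    total = 0.0
    for choice in product(*factors):
        word = sum((w for w, _ in choice), ())
        coeff = 1.0
        for _, c in choice:
            coeff *= c
        if reduce_word(word) == ():
            total += coeff
    return total

s = 1 / sqrt(2)
a = [(((1, +1),), s), (((1, -1),), s)]         # (u_{g1} + u_{g1^{-1}}) / sqrt(2)
b = [(((2, +1),), s), (((2, -1),), s)]         # (u_{g2} + u_{g2^{-1}}) / sqrt(2)

assert abs(tau(a)) < 1e-12                     # centered
assert abs(tau(a, a) - 1.0) < 1e-12            # unit variance
assert abs(tau(a, b)) < 1e-12                  # tau(ab) = tau(a) tau(b) = 0
assert abs(tau(a, b, a, b)) < 1e-12            # vanishes by freeness
```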
Following \cite{MR2032363}, given an at most countable family of compactly supported probability measures on $\dR$, one can always construct an algebraic probability space containing free algebraic variables admitting these probability measures as their $*$-distributions. The free convolution $\boxplus$ of probability measures is associative but is not distributive with respect to convex combinations (beware of mixtures!). We have so far at hand an algebraic framework, called free probability theory, in which the concepts of algebra elements, trace, $*$-law, freeness, and free convolution are the analogues of the concepts of bounded random variables, expectation, law, independence, and convolution of classical probability theory. Do we have a CLT, and an analogue of the Gaussian? The answer is positive.
\subsection{Free CLT and semicircle law}
It is natural to define the convergence in $*$-law, denoted $\overset{*}{\to}$, as being the convergence of all $*$-moments. The Voiculescu free CLT states that if $a_1,a_2,\ldots\in\mathcal{A}$ are free, real $a_i=a_i^*$, with the same $*$-law, zero mean $\tau(a_i)=0$, and unit variance $\tau(a_i^2)=1$, then
\[
s_n:=\frac{a_1+\cdots+a_n}{\sqrt{n}}
\overset{*}{\underset{n\to\infty}{\longrightarrow}}
\frac{\sqrt{4-x^2}\mathbf{1}_{[-2,2]}}{2\pi}dx.
\]
The limiting $*$-law is given by the moments of the \emph{semicircle law}\footnote{Also known as the Wigner distribution (random matrix theory) or the Sato-Tate distribution (number theory).} on $[-2,2]$, which are $0$ for odd moments and the Catalan numbers ${(C_m)}_{m\geq0}$ for even moments: for every $m\geq0$,
\[
\int_{-2}^2\!x^{2m+1}\frac{\sqrt{4-x^2}}{2\pi}dx=0
\quad\text{and}\quad
\int_{-2}^2\!x^{2m}\frac{\sqrt{4-x^2}}{2\pi}dx
=C_m:=\frac{1}{1+m}\binom{2m}{m}.
\]
An algebraic variable $b\in\mathcal{A}$ has semicircle $*$-law when it is real $b=b^*$ and $\tau(b^{2m+1})=0$ and $\tau(b^{2m})=C_m$ for every $m\geq0$.
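The moment identities above can be checked against the Catalan numbers by numerical quadrature (a sketch; the midpoint rule on a fine grid is enough here, despite the square-root singularity at the edges):

```python
import math

def semicircle_moment(k, steps=20000):
    """Midpoint-rule approximation of the k-th moment of the semicircle law."""
    h = 4.0 / steps
    total = 0.0
    for i in range(steps):
        x = -2.0 + (i + 0.5) * h
        total += x**k * math.sqrt(4.0 - x * x) / (2.0 * math.pi) * h
    return total

def catalan(m):
    """Catalan number C_m = binom(2m, m) / (m + 1)."""
    return math.comb(2 * m, m) // (m + 1)

for m in range(5):
    assert abs(semicircle_moment(2 * m) - catalan(m)) < 1e-3   # even moments
    assert abs(semicircle_moment(2 * m + 1)) < 1e-9            # odd moments vanish
```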
The proof of the free CLT consists in computing the moments of $s_n$ using freeness. This reveals three types of terms: terms which are zero at fixed $n$ thanks to freeness and centering, terms whose contribution vanishes asymptotically as $n\to\infty$, and terms which survive in the limit and involve only the second moment of the $a_i$'s. See \cite{MR2316893}\footnote{Beyond freeness, there exist other notions of algebraic independence, allowing one to compute moments, and leading to a CLT with other limiting distributions, including the Bernoulli distribution and the arcsine distribution.}. As for the classical CLT, the first two moments are conserved along the free CLT: $\tau(s_n)=0$ and $\tau(s_n^2)=1$ for all $n\geq1$. The semicircle $*$-law is the free analogue of the Gaussian distribution of classical probability. The semicircle $*$-law is stable under free convolution: if $a_1,\ldots,a_n$ are free with semicircle $*$-law then the $*$-law of $a_1+\cdots+a_n$ is also semicircle and its second moment is the sum of the second moments. In particular $s_n=(a_1+\cdots+a_n)/\sqrt{n}$ is semicircle, just as in the Gaussian case of the classical CLT! If $\mu_1,\ldots,\mu_n$ are semicircle laws then their free convolution $\mu_1\boxplus\cdots\boxplus\mu_n$ is also a semicircle law and its variance is the sum of the variances. \subsection{Random walks and CLT} Let us reconsider the free group $F_n$. The algebraic variables \[ a_1=\frac{u_{g_1}+u_{g_1^{-1}}}{\sqrt{2}},\ldots, a_n=\frac{u_{g_n}+u_{g_n^{-1}}}{\sqrt{2}} \] are free, real, centered, with unit variance. Let us define \[ a=\sum_{i=1}^n(u_{g_i}+u_{g_i^{-1}})=\sqrt{2}\sum_{i=1}^n a_i \quad\text{and}\quad p =\frac{a}{2n} =\frac{a_1+\cdots+a_n}{\sqrt{2}n}. \] Then $a$ is the adjacency operator of the Cayley graph $G_n$ of the free group $F_n$, which is $2n$-regular, without cycles (a tree!), rooted at $\varnothing$.
Moreover, $\DOT{p\delta_v,\delta_w}=\frac{1}{2n}\mathbf{1}_{wv^{-1}\in S}$ where $S=\{g_1^{\pm1},\ldots,g_n^{\pm1}\}$, and thus $p$ is the transition kernel of the simple random walk on $G_n$. For every integer $m\geq0$, the quantity $\tau(a^m)=\DOT{a^m\delta_\varnothing,\delta_\varnothing}$ is the number of paths of length $m$ in $G_n$ starting and ending at the root $\varnothing$. From the Kesten-McKay\footnote{It appears for regular trees of even degree (Cayley graphs of free groups) in the doctoral thesis of Harry Kesten (1931 -- ) published in 1959. It seems that no connection was made at that time with the contemporary works \cite{MR0095527} of Eugene Wigner (1902 -- 1995) on random matrices published in the 1950's. In 1981, Brendan McKay (1951 -- ) showed \cite{mckay1981expected} that these distributions appear as the limiting empirical spectral distributions of the adjacency matrices of sequences of (random) graphs which are asymptotically regular and without cycles (trees!). He does not cite Kesten and Wigner.} theorem \cite{MR0109367,mckay1981expected,MR2316893}, for every integer $m\geq0$, \[ \tau(a^m) =\DOT{a^m\delta_\varnothing,\delta_\varnothing} =\int\!x^m\,d\mu_{d}(x), \] where $\mu_{d}$ is the Kesten-McKay distribution with parameter $d=2n$, given by \[ d\mu_{d}(x) :=\frac{d\sqrt{4(d-1)-x^2}}{2\pi(d^2-x^2)} \mathbf{1}_{[-2\sqrt{d-1},2\sqrt{d-1}]}(x)dx. \] By parity we have $\tau(a^{2m+1})=0$ for every $m\geq0$. When $n=1$, we have $d=2$ and $G_1$ is the Cayley graph of $F_1=\mathbb{Z}$; the corresponding Kesten-McKay law $\mu_{2}$ is the \emph{arcsine law} on $[-2,2]$, $$ \langle a^{2m}\delta_\varnothing,\delta_\varnothing\rangle =\int\!x^{2m}\,d\mu_{2}(x) =\int_{-2}^2\frac{x^{2m}}{\pi\sqrt{4-x^2}}dx =\binom{2m}{m}, $$ and we recover the fact that the number of paths of length $2m$ in $\mathbb{Z}$ starting and ending at the root $\varnothing$ (which is the origin $0$) is given by the central binomial coefficient.
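These loop counts are easy to check by dynamic programming over the distance to the root (a sketch of mine, not from the text): for $d=2$ one recovers the central binomial coefficients, and rescaling by $(d-1)^m$ for large $d$ already hints at the Catalan limit discussed next.

```python
from math import comb

def closed_walks(d, length):
    # number of walks of the given length in the d-regular tree that start
    # and end at the root, via DP on the distance to the root: from level 0
    # there are d forward moves; from level k >= 1 there are one backward
    # move and d - 1 forward moves
    counts = {0: 1}
    for _ in range(length):
        nxt = {}
        for lvl, c in counts.items():
            if lvl == 0:
                nxt[1] = nxt.get(1, 0) + c * d
            else:
                nxt[lvl - 1] = nxt.get(lvl - 1, 0) + c
                nxt[lvl + 1] = nxt.get(lvl + 1, 0) + c * (d - 1)
        counts = nxt
    return counts.get(0, 0)

# d = 2: the 2-regular tree is Z, and loops are counted by central binomials
for m in range(7):
    assert closed_walks(2, 2 * m) == comb(2 * m, m)

# large degree: scaled loop counts approach the Catalan numbers
d = 10 ** 6
for m in range(5):
    catalan = comb(2 * m, m) // (m + 1)
    assert abs(closed_walks(d, 2 * m) / (d - 1) ** m - catalan) < 1e-3
```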
The binomial combinatorics is due to the commutativity of $F_1=\dZ$. At the opposite end of the degree range, when $d=2n\to\infty$, the law $\mu_{d}$, scaled by $(d-1)^{-1/2}$, tends to the semicircle law on $[-2,2]$: \[ \lim_{d\to\infty} \frac{\langle a^{2m}\delta_\varnothing,\delta_\varnothing\rangle}{(d-1)^m} =\lim_{d\to\infty} \int\!\left(\frac{x}{\sqrt{d-1}}\right)^{2m}\!\!\!\!d\mu_{d}(x) =\int_{-2}^2 y^{2m}\frac{\sqrt{4-y^2}}{2\pi}dy =C_m:=\frac{1}{1+m}\binom{2m}{m}. \] As a consequence, we have, thanks to $(d-1)^m\sim_{n\to\infty}d^m=(2n)^m=(\sqrt{2n})^{2m}$, \[ \tau\PAR{\PAR{\frac{a_1+\cdots+a_n}{\sqrt{n}}}^{2m}} =\frac{\tau(a^{2m})}{(2n)^m} = \frac{\DOT{a^{2m}\delta_\varnothing,\delta_\varnothing}}{d^m} \underset{n\to\infty}{\longrightarrow} C_m. \] This is nothing else but a free CLT for the triangular array ${((a_1,\ldots,a_n))}_{n\geq1}$! The free CLT is the algebraic structure that emerges from the asymptotic analysis, as $n\to\infty$, of the combinatorics of loop paths of the simple random walk on the Cayley graph $G_n$ of the free group $F_n$, and more generally on the infinite $d$-regular graph without cycles (a tree!) as the degree $d$ tends to infinity. We have $a=b_1+\cdots+b_n$ where $b_i=\sqrt{2}a_i=u_{g_i}+u_{g_i^{-1}}$. But for any $m\in\dN$ we have $\tau(b_i^{2m+1})=0$ and $\tau(b_i^{2m})=\sum_{r=0}^{2m}\binom{2m}{r}\tau(u_{g_i^{2(r-m)}})=\binom{2m}{m}$, and therefore the $*$-law of $b_i$ is the arcsine law $\mu_2$, and thanks to the freeness of $b_1,\ldots,b_n$ we obtain $\mu_d=\mu_{2}\boxplus\cdots\boxplus\mu_{2}$ ($n$ times). \subsection{Free entropy} Inspired by the Boltzmann H-Theorem view of Shannon on the CLT of classical probability theory, one may ask if there exists, in free probability theory, a free entropy functional, maximized by the semicircle law at fixed second moment, and which is monotonic along the free CLT. We will see that the answer is positive.
Let us consider a real algebraic variable $a\in\mathcal{A}$, $a^*=a$, such that there exists a probability measure $\mu_a$ on $\dR$ with, for every integer $m\geq0$, \[ \tau(a^m)=\int\!x^m\,d\mu_a(x). \] Inspired by the micro-macro construction of the Boltzmann entropy, one may consider an approximation at the level of the moments of the algebraic variable $a$ (which is in general infinite dimensional) by Hermitian matrices (which are finite dimensional). Namely, following Voiculescu, for all real numbers $\varepsilon>0$ and $R>0$ and all integers $m\geq0$ and $d\geq1$, let $\Gamma_R(a;m,d,\varepsilon)$ be the relatively compact set of Hermitian matrices $h\in\mathcal{M}_d(\mathbb{C})$ such that $\NRM{h}\leq R$ and $\max_{0\leq k\leq m}\ABS{\tau(a^k)-\frac{1}{d}\mathrm{Tr}(h^k)}\leq\varepsilon$. The volume $\ABS{\Gamma_R(a;m,d,\varepsilon)}$ measures the degrees of freedom of the approximation of the algebraic variable $a$ by matrices, and is the analogue of the cardinality (which was multinomial) in the combinatorial construction of the Boltzmann entropy. We find \[ \chi(a) :=\sup_{R>0}\inf_{m\in\mathbb{N}}\inf_{\varepsilon>0} \varlimsup_{d\to\infty} \PAR{\frac{1}{d^2}\log\ABS{\Gamma_R(a;m,d,\varepsilon)}+\frac{\log(d)}{2}} =\iint\!\log\ABS{x-y}\,d\mu_a(x)d\mu_a(y). \] This quantity depends only on $\mu_a$ and is also denoted $\chi(\mu_a)$. It is a quadratic form in $\mu_a$. It is the Voiculescu entropy functional \cite{MR1887698}. When $\mu$ is a probability measure on $\mathbb{C}$, we will still denote \[ \chi(\mu):=\iint\!\log\ABS{x-y}\,d\mu(x)d\mu(y). \] However, this is not necessarily the entropy of an algebraic variable when $\mu$ is not supported in $\dR$. The Voiculescu free entropy should not be confused with the von Neumann entropy in quantum probability defined by $S(a)=-\tau(a\log(a))$, which was studied by Lieb in \cite{MR0506364}.
For some models of random graphs, one can imitate Voiculescu and forge some sort of graphical Boltzmann entropy, which can be related to a large deviations rate functional, see \cite{bordenavecaputo} and references therein. The semicircle law is for the free entropy the analogue of the Gaussian law for the Boltzmann entropy. The semicircle law on $[-2,2]$ is the unique law that maximizes the Voiculescu entropy $\chi$ among the laws on $\dR$ with second moment equal to $1$, see \cite{AGZ}: \[ \arg\max\BRA{\chi(\mu):\mathrm{supp}(\mu)\subset\dR,\int\!x^2\,d\mu(x)=1} =\frac{\sqrt{4-x^2}\mathbf{1}_{[-2,2]}(x)}{2\pi}dx. \] How about laws on $\mathbb{C}$ instead of $\dR$? The uniform law on the unit disc is the unique law that maximizes the functional $\chi$ among the set of laws on $\mathbb{C}$ with second moment (mean squared modulus) equal to $\frac{1}{2}$, see \cite{MR1485778} (here $z=x+iy$ and $dz=dxdy$): \[ \arg\max\BRA{\chi(\mu):\mathrm{supp}(\mu)\subset\mathbb{C},\int\!|z|^2\,d\mu(z)=\tfrac{1}{2}} =\frac{\mathbf{1}_{\{z\in\mathbb{C}:\ABS{z}\leq1\}}}{\pi}dz. \] Under the uniform law on the unit disc, the real and the imaginary parts follow the semicircle law on $[-1,1]$, and are not independent. If we say that an algebraic variable $c\in\mathcal{A}$ is circular when its $*$-law is the uniform law on the unit disc of $\mathbb{C}$, and if $s_1,s_2\in\mathcal{A}$ are free with $*$-law equal to the semicircle law on $[-2,2]$, then $\frac{s_1+is_2}{\sqrt{2}}$ is circular (here $i=(0,1)\in\mathbb{C}$ is such that $i^2=-1$).
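To make the extremal role of the semicircle law concrete, here is a small Monte Carlo experiment (an illustration of mine, not from the text): among two unit-second-moment laws on $\dR$, the semicircle law on $[-2,2]$, whose logarithmic energy is classically $-1/4$, beats the arcsine law rescaled to unit second moment, whose energy is $-\tfrac12\log2$.

```python
import random, math

random.seed(0)

def sample_semicircle():
    # rejection sampling from the density sqrt(4 - x^2) / (2*pi) on [-2, 2]
    while True:
        x = random.uniform(-2.0, 2.0)
        if random.random() <= math.sqrt(4.0 - x * x) / 2.0:
            return x

def sample_arcsine():
    # arcsine law on [-2, 2] rescaled by 1/sqrt(2), so second moment is 1
    return math.sqrt(2.0) * math.cos(math.pi * random.random())

def log_energy(sampler, pairs=200000):
    # Monte Carlo estimate of chi(mu) = E log|X - Y|, X, Y independent ~ mu
    total, done = 0.0, 0
    while done < pairs:
        x, y = sampler(), sampler()
        if x != y:  # the log singularity has probability zero; skip ties anyway
            total += math.log(abs(x - y))
            done += 1
    return total / pairs

chi_sc = log_energy(sample_semicircle)
chi_arc = log_energy(sample_arcsine)
assert abs(chi_sc - (-0.25)) < 0.05
assert abs(chi_arc - (-0.5 * math.log(2.0))) < 0.05
assert chi_sc > chi_arc  # semicircle maximizes chi at fixed second moment
```

The gap ($-0.25$ versus $\approx-0.347$) is far larger than the Monte Carlo noise at this sample size.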
It turns out that the Voiculescu free entropy $\chi$ is monotonic along the Voiculescu free CLT: \[ \chi(a_1)=\chi(s_1)\leq\cdots\leq\chi(s_n)\leq\chi(s_{n+1})\leq\cdots \underset{n\to\infty}{\nearrow} \max\chi(s) \] where still $s_n=n^{-1/2}(a_1+\cdots+a_n)$ and where $s$ is a semicircle algebraic variable. Shlyakhtenko gave a proof of this remarkable fact, based on a free Fisher information functional (involving a Hilbert transform), that captures simultaneously the classical and the free CLT \cite{MR2346565,MR2304337}. The Boltzmann-Shannon H-Theorem interpretation of the CLT is thus remarkably valid in classical probability theory, and in free probability theory. \subsection{A theorem of Eugene Wigner} Let $H$ be a random $n\times n$ Hermitian matrix belonging to the Gaussian Unitary Ensemble (GUE). This means that $H$ has density proportional to \[ e^{-\frac{n}{4}\mathrm{Tr}(H^2)} =e^{-\frac{n}{4}\sum_{j=1}^n H_{jj}^2-\frac{n}{2}\sum_{1\leq j<k\leq n}|H_{jk}|^2}. \] The entries of $H$ are Gaussian, centered, independent, with variance $2/n$ on the diagonal and $1/n$ outside. Let $\mu_n=\frac{1}{n}\sum_{j=1}^n\delta_{\lambda_{j}(H)}$ be the empirical spectral distribution of $H$, which is a random discrete probability measure on $\dR$. For every $m\geq0$, the mean $m$-th moment of $\mu_n$ is given by \[ \mathbb{E}\int\!x^m\,d\mu_n(x) =\mathbb{E}\frac{1}{n}\sum_{j=1}^n\lambda_{j}^m =\frac{\mathbb{E}\mathrm{Tr}(H^m)}{n} =\sum_{i_1,\ldots,i_m}\frac{\mathbb{E}(H_{i_1i_2}\cdots H_{i_{m-1}i_m}H_{i_mi_1})}{n}.
\] In particular, the first two moments of $\mathbb{E}\mu_n$ satisfy \[ \mathbb{E}\int\!x\,d\mu_n(x)=\frac{\mathbb{E}(\mathrm{Tr}(H))}{n} =\frac{\mathbb{E}\sum_{j=1}^n H_{jj}}{n}=0 \] and \[ \mathbb{E}\int\!x^2\,d\mu_n(x)=\frac{\mathbb{E}(\mathrm{Tr}(H^2))}{n} =\mathbb{E}\frac{\sum_{j,k=1}^n |H_{jk}|^2}{n} =\frac{n(2/n)+(n^2-n)(1/n)}{n} \underset{n\to\infty}{\longrightarrow}1. \] More generally, for any $m>2$, the computation of the limiting $m$-th moment of $\mathbb{E}\mu_n$ boils down to the combinatorics of the paths $i_1\to i_2\to\cdots\to i_m$. It can be shown that the surviving terms as $n\to\infty$ correspond to paths forming a tree and passing exactly zero or two times through each edge. This gives finally, for every integer $m\geq0$, denoting $C_m$ the $m$-th Catalan number, \[ \lim_{n\to\infty}\mathbb{E}\int\!x^{2m+1}\,d\mu_n(x)=0 \quad\text{and}\quad \lim_{n\to\infty}\mathbb{E}\int\!x^{2m}\,d\mu_n(x)=C_m. \] This means that $\mathbb{E}\mu_n$ tends as $n\to\infty$ in the sense of moments to the semicircle law on $[-2,2]$ (which has unit variance). Just like the CLT, the result is actually universal, in the sense that it remains true if one drops the Gaussian distribution assumption on the entries. This is the famous Wigner theorem \cite{MR0095527}, in its modern general form, named after Eugene Paul Wigner (1902 -- 1995). The GUE case is in fact exactly solvable: one can compute the density of the eigenvalues, which turns out to be proportional to \[ \prod_{j=1}^ne^{-\frac{n}{4}\lambda_{j}^2}\prod_{j<k}(\lambda_j-\lambda_k)^2 =\exp \PAR{-\frac{n}{4}\sum_{j=1}^n\lambda_j^2 -\sum_{j\neq k}\log\frac{1}{\ABS{\lambda_j-\lambda_k}}}.
\] The logarithmic repulsion is the Coulomb repulsion in dimension $2$. This suggests interpreting the eigenvalues $\lambda_1,\ldots,\lambda_n$ as a Coulomb gas of two-dimensional charged particles forced to stay on a one-dimensional ramp (the real line) and experiencing confinement by a quadratic potential. These formulas allow one to deduce the semicircle limit of the one-point correlation (the density of $\mathbb{E}\mu_n$) by various methods, such as orthogonal polynomials or large deviations theory, see \cite{AGZ}. \subsection{Asymptotic freeness of unitary invariant random matrices} If $A$ and $B$ are two Hermitian $n\times n$ matrices, then the spectrum of $A+B$ depends not only on the spectrum of $A$ and the spectrum of $B$, but also on the eigenvectors\footnote{If $A$ and $B$ commute, then they admit the same eigenvectors and the spectrum of $A+B$ is the sum of the spectra of $A$ and $B$, but this depends on the way we label the eigenvalues, which depends in turn on the eigenvectors!} of $A$ and $B$. Now if $A$ and $B$ are two independent random Hermitian matrices, there is no reason to believe that the empirical spectral distribution $\mu_{A+B}$ of $A+B$ depends only on the empirical spectral distributions $\mu_A$ and $\mu_B$ of $A$ and $B$. Let $A$ be an $n\times n$ random Hermitian matrix in the GUE, normalized such that $\mu_A$ has a mean second moment equal to $1$. Then the Wigner theorem says that $\mathbb{E}\mu_A$ tends in the sense of moments, as $n\to\infty$, to the semicircle law of unit variance. If $B$ is an independent copy of $A$, then, thanks to the convolution of Gaussian laws, $A+B$ is identical in law to $\sqrt{2}A$, and thus $\mathbb{E}\mu_{A+B}$ tends, in the sense of moments, as $n\to\infty$, to the semicircle law of variance $2$.
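The first Wigner moments can be checked numerically without any eigenvalue solver, since $\int x^m\,d\mu_n=\mathrm{Tr}(H^m)/n$ (a sketch of mine; the matrix size, the diagonal variance, and the tolerances are arbitrary choices):

```python
import random, math

random.seed(3)
n = 120

# Hermitian matrix with E|H_jk|^2 = 1/n off the diagonal (the scaling that
# drives the semicircle limit); the diagonal variance is asymptotically irrelevant
s = 1.0 / math.sqrt(2 * n)
H = [[0j] * n for _ in range(n)]
for j in range(n):
    H[j][j] = complex(random.gauss(0.0, 1.0 / math.sqrt(n)), 0.0)
    for k in range(j + 1, n):
        z = complex(random.gauss(0.0, s), random.gauss(0.0, s))
        H[j][k] = z
        H[k][j] = z.conjugate()

# Tr(H^2)/n = sum of |H_jk|^2 over n; then M = H^2 and Tr(H^4) = ||M||_F^2
m2 = sum(abs(H[j][k]) ** 2 for j in range(n) for k in range(n)) / n
M = [[sum(H[j][l] * H[l][k] for l in range(n)) for k in range(n)] for j in range(n)]
m4 = sum(abs(M[j][k]) ** 2 for j in range(n) for k in range(n)) / n

assert abs(m2 - 1.0) < 0.2   # second moment of the spectrum tends to 1
assert abs(m4 - 2.0) < 0.5   # fourth moment tends to C_2 = 2
```

Trace moments concentrate at rate $1/n$, so even a single sample of this modest size lands well within the tolerances.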
Then, thanks to the free convolution of semicircle laws, we have \[ \mathbb{E}\mu_{A+B}-\mathbb{E}\mu_A\boxplus\mathbb{E}\mu_B \underset{n\to\infty}{\overset{*}{\longrightarrow}}0, \] where $\overset{*}{\to}$ denotes the convergence of moments. Voiculescu has established that this \emph{asymptotic freeness} phenomenon remains actually true beyond the GUE case, provided that the eigenspaces of the two matrices are randomly decoupled using a random unitary conjugation. For example, let $A$ and $B$ be two $n\times n$ Hermitian matrices such that $\mu_A\to\mu_a$ and $\mu_B\to\mu_b$ in the sense of moments as $n\to\infty$, where $\mu_a$ and $\mu_b$ are two compactly supported laws on $\dR$. Let $U$ and $V$ be independent random unitary matrices uniformly distributed on the unitary group (so-called Haar unitary matrices). Then \[ \mathbb{E}\mu_{UAU^*+VBV^*} \underset{n\to\infty}{\overset{*}{\longrightarrow}} \mu_a\boxplus\mu_b. \] See \cite{AGZ}. This asymptotic freeness reveals that free probability is the algebraic structure that emerges from the asymptotic analysis of large dimensional unitary invariant models of random matrices. \medskip Since the functional $\chi$ is maximized by the uniform law on the unit disc, one may ask about an analogue of the Wigner theorem for non-Hermitian random matrices. The answer is positive. For our purposes, we will focus on a special ensemble of random matrices, introduced by Jean Ginibre\footnote{Jean Ginibre is also famous for FKG inequalities, and for scattering for Schrödinger operators.} in the 1960's in \cite{Ginibre}, for which one can compute explicitly the density of the eigenvalues.
\section{Jean Ginibre and his ensemble of random matrices} One of the most fascinating results in the asymptotic analysis of large dimensional random matrices is the circular law for the complex Ginibre ensemble, which can be proved using the Voiculescu functional $\chi$ (maximized at fixed second moment by the uniform law on the unit disc). \subsection{Complex Ginibre ensemble} A simple model of random matrix is the Ginibre model: \[ G= \begin{pmatrix} G_{11}&\cdots&G_{1n}\\ \vdots & \ddots & \vdots\\ G_{n1} & \cdots & G_{nn} \end{pmatrix} \] where ${(G_{jk})}_{1\leq j,k\leq n}$ are i.i.d.\ random variables on $\mathbb{C}$, with $\Re G_{jk}$ and $\Im G_{jk}$ Gaussian of mean $0$ and variance $1/(2n)$. In particular, $\mathbb{E}(|G_{jk}|^2)=1/n$. The density of $G$ is proportional to \[ \prod_{j,k=1}^ne^{-n\ABS{G_{jk}}^2} = e^{-\sum_{j,k=1}^n n\ABS{G_{jk}}^2} = e^{-n\mathrm{Tr}(GG^*)}. \] This shows that the law of $G$ is unitary invariant, meaning that $UGU^*$ and $G$ have the same law, for every unitary matrix $U$. Can we compute the law of the eigenvalues of $G$? Let $G=UTU^*$ be the Schur unitary triangularization of $G$. Here $T=D+N$ where $D$ is diagonal and $N$ is upper triangular with null diagonal. In particular $N$ is nilpotent, $T$ is upper triangular, and the diagonal of $D$ is formed with the eigenvalues $\lambda_1,\ldots,\lambda_n$ in $\mathbb{C}$ of $G$. The Jacobian of the change of variable $G\mapsto (U,D,N)$ is proportional to $\prod_{1\leq j<k\leq n}\ABS{\lambda_j-\lambda_k}^2$ (for simplicity, we neglect here delicate problems related to the non-uniqueness of the Schur unitary decomposition). On the other hand, \[ \mathrm{Tr}(GG^*) =\mathrm{Tr}(DD^*)+\mathrm{Tr}(DN^*)+\mathrm{Tr}(ND^*)+\mathrm{Tr}(NN^*) =\mathrm{Tr}(DD^*)+\mathrm{Tr}(NN^*), \] since the cross terms have zero trace. This allows one to integrate out the $U,N$ variables.
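The key algebraic step is that the cross terms $\mathrm{Tr}(DN^*)$ and $\mathrm{Tr}(ND^*)$ vanish, because a diagonal matrix times a strictly triangular one is strictly triangular; this is easy to check on a random instance (a sketch of mine, with arbitrary sizes):

```python
import random

random.seed(1)
n = 5
rnd = lambda: complex(random.gauss(0, 1), random.gauss(0, 1))

D = [[rnd() if j == k else 0j for k in range(n)] for j in range(n)]   # diagonal
N = [[rnd() if k > j else 0j for k in range(n)] for j in range(n)]    # strictly upper
T = [[D[j][k] + N[j][k] for k in range(n)] for j in range(n)]         # T = D + N

frob2 = lambda A: sum(abs(A[j][k]) ** 2 for j in range(n) for k in range(n))  # Tr(AA*)
adj = lambda A: [[A[k][j].conjugate() for k in range(n)] for j in range(n)]
mul = lambda A, B: [[sum(A[j][l] * B[l][k] for l in range(n)) for k in range(n)] for j in range(n)]
tr = lambda A: sum(A[j][j] for j in range(n))

assert abs(tr(mul(D, adj(N)))) < 1e-12                  # Tr(DN*) = 0
assert abs(frob2(T) - frob2(D) - frob2(N)) < 1e-9       # Tr(TT*) = Tr(DD*) + Tr(NN*)
```

Since $D$ and $N$ have disjoint supports, the Frobenius identity even holds entrywise, which is what makes the Gaussian $N$-integral factor out as a constant.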
The law of the eigenvalues is then proportional to \[ e^{-n\sum_{j=1}^n\ABS{\lambda_j}^2}\prod_{1\leq j<k\leq n}\ABS{\lambda_j-\lambda_k}^2. \] This defines a determinantal process on $\mathbb{C}$: the complex Ginibre ensemble \cite{MR2641363,MR2932638,MR2552864}. \begin{figure}[htbp] \begin{center} \includegraphics[scale=.5]{circ} \caption{The eigenvalues of a single matrix drawn from the complex Ginibre ensemble of random matrices. The dashed line is the unit circle. This numerical experiment was performed with the promising Julia language \url{http://julialang.org/}.}
\begin{lstlisting}[basicstyle=\footnotesize\ttfamily,frame=single]
# Pkg.add("Winston") is needed once, to install the plotting package.
using Winston
n=1000; (D,U)=eig((randn(n,n)+im*randn(n,n))/sqrt(2*n));
I=[-1:.01:1]; J=sqrt(1-I.^2);
hold(true);
plot(real(D),imag(D),"b.",I,J,"r--",I,-J,"r--")
title(@sprintf("Complex Ginibre Ensemble n=%i",n))
\end{lstlisting}
\end{center} \end{figure} \subsection{Circular law for the complex Ginibre ensemble} In order to interpret the law of the eigenvalues as a Boltzmann measure, we put the Vandermonde determinant inside the exponential: \[ e^{-n\sum_j\ABS{\lambda_j}^2+2\sum_{j<k}\log\ABS{\lambda_j-\lambda_k}}. \] If we encode the eigenvalues by the empirical measure $\mu_n:=\frac{1}{n}\sum_{j=1}^n\delta_{\lambda_j}$, this takes the form \[ e^{-n^2 \mathcal{I}(\mu_n)} \] where the ``energy'' $\mathcal{I}(\mu_n)$ of the configuration $\mu_n$ is defined via \[ \mathcal{I}(\mu):= \int\!\ABS{z}^2\,d\mu(z) +\iint_{\neq}\!\log\frac{1}{\ABS{z-z'}}\,d\mu(z)d\mu(z'). \] This suggests interpreting the eigenvalues $\lambda_1,\ldots,\lambda_n$ of $G$ as a Coulomb gas of two-dimensional charged particles, confined by an external field (quadratic potential) and subject to pairwise Coulomb repulsion.
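For the uniform law on the unit disc, $\mathcal{I}$ works out to $\frac12+\frac14=\frac34$ (second moment $\frac12$, logarithmic energy $\frac14$), and a Monte Carlo estimate of the discrete energy reproduces this value (my own illustration; the sample size is arbitrary):

```python
import random, math

random.seed(4)
N = 800

# sample N points uniformly on the unit disc (r = sqrt(U) gives the right radial law)
pts = []
for _ in range(N):
    r, t = math.sqrt(random.random()), 2.0 * math.pi * random.random()
    pts.append((r * math.cos(t), r * math.sin(t)))

second = sum(x * x + y * y for x, y in pts) / N
energy = sum(-math.log(math.hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]))
             for i in range(N) for j in range(N) if i != j) / N ** 2

est = second + energy   # discrete version of I(mu) evaluated near the minimizer
assert abs(est - 0.75) < 0.05
```

The fluctuations of this estimator are tiny because the confinement and repulsion contributions of each sample point nearly cancel at the equilibrium measure.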
Note that $-\mathcal{I}$ can also be seen as a penalized Voiculescu functional. Minimizing a penalized functional is equivalent to minimizing without penalty but under constraint (Lagrange). Presently, if $\mathcal{M}$ is the set of probability measures on $\mathbb{C}$ then $\inf_{\mathcal{M}}\mathcal{I}>-\infty$ and the infimum is achieved at a unique probability measure $\mu_*$, which is the uniform law on the unit disc of $\mathbb{C}$. How does the random discrete probability measure $\mu_n$ behave as $n\to\infty$? Following Hiai and Petz \cite{MR1606719}\footnote{See also Anderson, Guionnet, and Zeitouni \cite{AGZ}, Ben Arous and Zeitouni \cite{MR1660943}, and Hardy \cite{MR2926763}.}, one may adopt a large deviations approach. One may show that the functional $\mathcal{I}:\mathcal{M}\to\dR\cup\{+\infty\}$ is lower semicontinuous for the topology of narrow convergence, is strictly convex, and has compact level sets. Let us consider a distance compatible with this topology. It can be shown that for every ball $B$ for this distance, \[ \dP(\mu_n\in B) \approx \exp\PAR{-n^2\inf_{B}(\mathcal{I}-\inf_{\mathcal{M}}\mathcal{I})}. \] Now either $\mu_*\in B$, in which case $\dP(\mu_n\in B)\approx 1$, or $\mu_*\not\in B$, in which case $\dP(\mu_n\in B)\to0$ exponentially fast.
Actually, the first Borel-Cantelli lemma allows one to deduce that almost surely \[ \lim_{n\to\infty}\mu_n = \mu_* = \arg\inf \mathcal{I} = \frac{\mathbf{1}_{\{z\in\mathbb{C}:\ABS{z}\leq1\}}}{\pi}dz, \] where $z=x+iy$ and $dz=dxdy$. This phenomenon is known as the circular law. If one starts with a Hermitian random Gaussian matrix -- the Gaussian Unitary Ensemble (GUE) -- then the same analysis is available, and produces a convergence to the semicircle law on $[-2,2]$. The circular law is universal, in the sense that it remains valid if one drops the Gaussian assumption on the entries of the matrix, while keeping the i.i.d.\ structure and the $1/n$ variance. This was the subject of a long series of works by Girko, Bai, Tao and Vu, among others, see \cite{MR2906465,MR2567175,MR2908617}. Another way to go beyond the Gaussian case is to start from the Coulomb gas and to replace the quadratic confining potential $\ABS{\cdot}^2$ by a more general potential $V:\mathbb{C}\to\dR$, not necessarily radial. This type of generalization was studied for instance by Saff and Totik, and by Hiai and Petz, among others, see for instance \cite{MR1485778,MR1746976,AGZ,MR2926763}. Beyond random matrices, how about the empirical measure of random particles in $\dR^d$ with Coulomb type singular repulsion and external field confinement? Is there an analogue of the circular law phenomenon? Does the ball replace the disc? The answer is positive. \section{Beyond random matrices} Most of the material of this section comes from our work \cite{2013arXiv1304.7569C} with N. Gozlan and P.-A. Zitt. \subsection{The model} We consider a system of $N$ particles in $\dR^d$ at positions $x_1,\ldots,x_N$, say with charge $1/N$.
These particles are subject to confinement by an external field via a potential $x\in\dR^d\mapsto V(x)$, and to internal pair interaction (typically repulsion) via a potential $(x,y)\in\dR^d\times\dR^d\mapsto W(x,y)$ which is symmetric: $W(x,y)=W(y,x)$. The idea is that an equilibrium may emerge as $N$ tends to infinity. The configuration energy is \begin{align*} \mathcal{I}_N(x_1,\ldots,x_N) &=\sum_{i=1}^N\frac{1}{N}V(x_i)+\sum_{1\leq i<j\leq N}\frac{1}{N^2}W(x_i,x_j)\\ &=\int\!V(x)\,d\mu_N(x)+\frac{1}{2}\iint_{\neq}\!W(x,y)d\mu_N(x)d\mu_N(y) \end{align*} where $\mu_N$ is the empirical measure of the particles (a global encoding of the particle system) \[ \mu_N:=\frac{1}{N}\sum_{k=1}^N\delta_{x_k}. \] The model is mean-field in the sense that each particle interacts with the others only via the empirical measure of the system. If $1\leq d\leq 2$ then one can construct a random normal matrix which admits our particles $x_1,\ldots,x_N$ as eigenvalues: for any $N\times N$ unitary matrix $U$, \[ M=U\mathrm{Diag}(x_1,\ldots,x_N)U^*, \] whose law is unitary invariant if $U$ is Haar distributed. However, we are more interested in an arbitrarily high dimension $d$, for which no matrix model is available. We make our particles $x_1,\ldots,x_N$ random by considering the exchangeable probability measure $P_N$ on $(\dR^d)^N$ with density proportional to \[ e^{-\beta_N \mathcal{I}_N(x_1,\ldots,x_N)} \] where $\beta_N>0$ is a positive parameter which may depend on $N$. The law $P_N$ is a Boltzmann measure at inverse temperature $\beta_N$, and takes the form $\prod_{i=1}^Nf_1(x_i)\prod_{1\leq i<j\leq N}f_2(x_i,x_j)$ due to the structure and symmetries of $\mathcal{I}_N$.
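The two expressions of $\mathcal{I}_N$ above (particle sum versus empirical-measure integrals) agree because $W$ is symmetric; here is a direct check on random data (my own sketch, with $d=2$, $V(x)=|x|^2$ and a logarithmic $W$ chosen purely for illustration):

```python
import random, math

random.seed(5)
N = 40
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

V = lambda x: x[0] ** 2 + x[1] ** 2
W = lambda x, y: -math.log(math.hypot(x[0] - y[0], x[1] - y[1]))  # symmetric

# particle-sum form: sum_i V(x_i)/N + sum_{i<j} W(x_i, x_j)/N^2
form1 = (sum(V(x) for x in pts) / N
         + sum(W(pts[i], pts[j]) for i in range(N) for j in range(i + 1, N)) / N ** 2)
# empirical-measure form: int V dmu_N + (1/2) iint_{!=} W dmu_N dmu_N
form2 = (sum(V(x) for x in pts) / N
         + 0.5 * sum(W(pts[i], pts[j]) for i in range(N) for j in range(N) if i != j) / N ** 2)

assert abs(form1 - form2) < 1e-9
```

The factor $\frac12$ in front of the off-diagonal double integral exactly compensates counting each unordered pair twice.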
The law $P_N$ on $(\dR^d)^N$ is informally the invariant law\footnote{One may also view $P_N$ as the steady state of a Fokker-Planck evolution equation with conservation laws.} of the reversible diffusion process ${(X_t)}_{t\in\dR_+}$ solution of the system of stochastic differential equations \[ dX_{t,i}= \sqrt{\frac{2}{\beta_N}}dB_{t,i} -\frac{1}{N}\nabla V(X_{t,i})dt -\frac{1}{N^2}\sum_{j\neq i}\nabla_{\!1}W(X_{t,i},X_{t,j})dt. \] This can be seen as a special McKean-Vlasov mean-field particle system with potentially singular interaction. The infinitesimal generator of this Markov process is $L=\beta_N^{-1}\Delta-\nabla \mathcal{I}_N\cdot\nabla$. The process may explode in finite time depending on the singularity of the interaction $W$ on the diagonal (e.g.\ collisions of particles). The Helmholtz free energy of this Markov process is given, for every probability density $f$ on $(\dR^d)^N$, by \[ \int_{(\dR^d)^N}\!\mathcal{I}_N\,f\,dx -\frac{1}{\beta_N}\mathcal{S}(f) =\int\!\PAR{\int\!V\,d\mu_N+\frac{1}{2}\iint_{\neq}W\,d\mu_N^{\otimes 2}}\,f\,dx -\frac{1}{\beta_N}\mathcal{S}(f). \] The model contains the complex Ginibre ensemble of random matrices as the special case \[ d=2,\ \beta_N=N^2,\ V(x)=\ABS{x}^2,\ W(x,y)=2\log\frac{1}{\ABS{x-y}}, \] which is two-dimensional, with quadratic confinement, Coulomb repulsion, and temperature $1/N^2$.
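A rough Euler-Maruyama discretization of this system is straightforward to write down (an illustrative sketch of mine: the particle number, step size, horizon, the quadratic $V$ and the logarithmic $W$ are arbitrary choices, and no care is taken with collisions):

```python
import random, math

random.seed(6)
N, dt, steps = 30, 0.02, 150
beta = N * N                    # Ginibre-like cooling scheme beta_N = N^2
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

for _ in range(steps):
    newX = []
    for i in range(N):
        # drift: -(1/N) grad V - (1/N^2) sum_j grad_1 W, with V(x) = |x|^2
        # and W(x,y) = 2 log(1/|x-y|), so grad_1 W(x,y) = -2 (x-y)/|x-y|^2
        fx, fy = -2.0 * X[i][0] / N, -2.0 * X[i][1] / N
        for j in range(N):
            if j == i:
                continue
            dx, dy = X[i][0] - X[j][0], X[i][1] - X[j][1]
            r2 = dx * dx + dy * dy
            fx += 2.0 * dx / (N * N * r2)
            fy += 2.0 * dy / (N * N * r2)
        s = math.sqrt(2.0 * dt / beta)
        newX.append([X[i][0] + dt * fx + s * random.gauss(0, 1),
                     X[i][1] + dt * fy + s * random.gauss(0, 1)])
    X = newX

# the O(1/N) drifts and the tiny noise keep the cloud bounded over this horizon
assert all(math.isfinite(x) and math.isfinite(y) for x, y in X)
assert max(math.hypot(x, y) for x, y in X) < 10.0
```

This is only a toy integrator: near-collisions make the repulsion term stiff, which is exactly the possible-explosion issue mentioned above.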
Beyond this two-dimensional example, the typical interaction potential $W$ that we may consider is the Coulomb interaction in arbitrary dimension (we denote by $\ABS{\cdot}$ the Euclidean norm of $\dR^d$) \[ W(x,y)=k_{\Delta}(x-y) \quad\text{with}\quad k_\Delta(x) =\begin{cases} -\ABS{x} & \text{if $d=1$}\\ \log\frac{1}{\ABS{x}} & \text{if $d=2$}\\ \frac{1}{\ABS{x}^{d-2}} & \text{if $d\geq3$} \end{cases} \] and the Riesz interaction, for $d\geq1$ and $0<\alpha<d$ (Coulomb if $d\geq3$ and $\alpha=2$), \[ W(x,y)=k_{\Delta_\alpha}(x-y) \quad\text{with}\quad k_{\Delta_\alpha}(x)=\frac{1}{\ABS{x}^{d-\alpha}}. \] The Coulomb kernel $k_\Delta$ is the fundamental solution of the Laplace equation, while the Riesz kernel $k_{\Delta_\alpha}$ is the fundamental solution of the fractional Laplace equation, hence the notations. In other words, in the sense of Schwartz distributions, for some constant $c_d$, \[ \Delta_\alpha k_{\Delta_\alpha}=c_d\delta_0. \] If $\alpha\neq2$ then the operator $\Delta_\alpha$ is a non-local Fourier multiplier. \subsection{Physical control problem} With the Coulomb-Gauss theory of electrostatic phenomena in mind, it is tempting to consider the following physical control problem: given an internal interaction potential $W$ and a target probability measure $\mu_*$ on $\dR^d$, can we tune the external potential $V$ and the cooling scheme $\beta_N$ in order to force the empirical measure $\mu_N=\frac{1}{N}\sum_{i=1}^N\delta_{x_i}$ to converge to $\mu_*$ as $N\to\infty$? One can also instead fix $V$ and seek $W$. \subsection{Topological result: large deviations principle} The confinement is always needed in order to produce a nondegenerate equilibrium. This is done here by an external field. This can also be done by forcing a compact support, as Frostman did in his doctoral thesis \cite{frostman}, or by using a manifold as in Dragnev and Saff \cite{MR2276529} and Berman \cite{2008arXiv0812.4224B}.
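Returning to the Coulomb kernel $k_\Delta$ defined above, its harmonicity away from the origin is easy to confirm with a finite-difference Laplacian (a small sketch of mine; the test points and the step are arbitrary):

```python
import math

def lap(f, p, h=1e-3):
    # second-order central finite-difference Laplacian of f at the point p
    base = f(p)
    total = 0.0
    for i in range(len(p)):
        q1, q2 = list(p), list(p)
        q1[i] += h
        q2[i] -= h
        total += (f(q1) + f(q2) - 2.0 * base) / (h * h)
    return total

k3 = lambda p: 1.0 / math.sqrt(sum(c * c for c in p))        # k_Delta for d = 3
k2 = lambda p: -math.log(math.sqrt(sum(c * c for c in p)))   # k_Delta for d = 2

assert abs(lap(k3, [1.0, 0.7, -0.4])) < 1e-4   # harmonic away from 0 in d = 3
assert abs(lap(k2, [1.0, 0.7])) < 1e-4         # harmonic away from 0 in d = 2
assert abs(lap(k3, [1.0, 0.7])) > 0.1          # but 1/|x| is not harmonic in d = 2
```

The last assertion illustrates why the exponent $d-2$ is tied to the dimension: in $d=2$ the harmonic kernel is the logarithm, not a power.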
Let $\mathcal{M}_1$ be the set of probability measures on $\dR^d$ equipped with the topology of narrow convergence, which is the dual convergence with respect to continuous and bounded test functions. From the expression of $\mathcal{I}_N$ in terms of $\mu_N$, the natural limiting energy functional is the quadratic form with values in $\dR\cup\{+\infty\}$ defined by \[ \mu\in\mathcal{M}_1\mapsto \mathcal{I}(\mu)=\int\!V(x)\,d\mu(x)+\frac{1}{2}\iint\!W(x,y)\,d\mu(x)d\mu(y). \] We make the following assumptions on $V$ and $W$ (fulfilled for instance when the localization potential is quadratic $V=c\ABS{\cdot}^2$ and the interaction potential $W$ is Coulomb or Riesz). \begin{itemize} \item \emph{Localization and repulsion.} \begin{itemize} \item the function $V:\dR^d\to\dR$ is continuous, $V(x)\to+\infty$ as $\ABS{x}\to+\infty$, and $e^{-V}\in L^1(dx)$; \item the function $W:\dR^d\times\dR^d\to\dR\cup\{+\infty\}$ is continuous, finite outside the diagonal, and symmetric, $W(x,y)=W(y,x)$ (however $W$ can be infinite on the diagonal!); \end{itemize} \item \emph{Near infinity, confinement beats repulsion.} For some constants $c\in\dR$ and $\varepsilon_o\in(0,1)$, \[ \forall x,y\in\dR^d,\quad W(x,y)\geq c-\varepsilon_o(V(x)+V(y)); \] \item \emph{Near the diagonal, the repulsion is integrable.} For every compact $K\subset\dR^d$, \[ z\mapsto\sup_{x,y\in K,\ABS{x-y}\geq\ABS{z}}W(x,y)\in L^1(dz); \] \item \emph{Regularity.} $\forall\nu\in\mathcal{M}_1(\dR^d)$, if $\mathcal{I}(\nu)<\infty$ then \[ \exists(\nu_n)\in\mathcal{M}_1(\dR^d), \quad\nu_n\ll dx,\quad \nu_n\to\nu, \quad \mathcal{I}(\nu_n)\to \mathcal{I}(\nu); \] \item \emph{Cooling scheme.} $\beta_N\gg N\log(N)$ (for the Ginibre ensemble $\beta_N=N^2$).
\end{itemize} Under these assumptions, it is proven in \cite{2013arXiv1304.7569C} that the sequence ${(\mu_N)}_{N\geq1}$ of random variables taking values in $\mathcal{M}_1$ satisfies a large deviations principle (LDP) at speed $\beta_N$ with good rate function $\mathcal{I}-\inf_{\mathcal{M}_1}\mathcal{I}$. In other words: \begin{itemize} \item Rate function: $\mathcal{I}$ is lower semicontinuous with compact level sets, and $\inf_{\mathcal{M}_1}\mathcal{I}>-\infty$; \item LDP lower and upper bounds: for every Borel set $A$ in $\mathcal{M}_1$, \[ \liminf_{N\to\infty} \frac{\log P_N(\mu_N\in A)}{\beta_N} \geq-\inf_{\mu\in\mathrm{int}(A)}(\mathcal{I}-\inf \mathcal{I})(\mu) \] and \[ \limsup_{N\to\infty} \frac{\log P_N(\mu_N\in A)}{\beta_N} \leq-\inf_{\mu\in\mathrm{clo}(A)}(\mathcal{I}-\inf \mathcal{I})(\mu); \] \item Convergence: $\arg\inf \mathcal{I}$ is not empty and almost surely $\lim_{N\to\infty}\mathrm{dist}(\mu_N,\arg\inf \mathcal{I})=0$ where $\mathrm{dist}$ is the bounded-Lipschitz dual distance (it induces the narrow topology on $\mathcal{M}_1$). \end{itemize} This LDP must be seen as an attractive tool for showing the convergence of $\mu_N$. This topological result is built on the idea that the density of $P_N$ is, for $N\gg1$, proportional to \[ e^{-\beta_N \mathcal{I}_N(\mu_N)} \approx e^{-\beta_N\mathcal{I}(\mu_N)}, \] and thus, informally, the first order global asymptotics as $N\gg1$ is \[ \mu_N\approx\arg\inf \mathcal{I}.
\]
This generalizes the case of the complex Ginibre ensemble considered in the preceding section. At this level of generality, the set $\arg\inf \mathcal{I}$ is non-empty but is not necessarily a singleton. In the sequel, we provide a more rigid differential result in the case where $W$ is the Riesz potential, which ensures that $\mathcal{I}$ admits a unique minimizer, characterized by simple properties. The $N\log(N)$ in the condition $\beta_N\gg N\log(N)$ comes from volumetric (combinatorial) estimates. It is important to realize that $W$ may be singular on the diagonal, and that this forbids the use of certain LDP tools such as the Laplace-Varadhan lemma or the Gärtner-Ellis theorem, see \cite{MR2571413}. If $W$ is continuous and bounded on $\dR^d\times\dR^d$, then one may deduce our LDP from the LDP with $W\equiv0$ by using the Laplace-Varadhan lemma. Moreover, if $W\equiv0$ then $P_N=\eta_N^{\otimes N}$ is a product measure with $\eta_N\propto e^{-(\beta_N/N)V}$, the particles are independent, the rate functional is
\[
\mathcal{I}(\mu)-\inf_{\mathcal{M}_1}\mathcal{I}=\int\!V\,d\mu-\inf V,
\quad\text{and}\quad
\arg\inf_{\mathcal{M}_1} \mathcal{I}
=\mathcal{M}_V
:=\{\mu\in\mathcal{M}_1:\mathrm{supp}(\mu)\subset\arg\inf V\},
\]
which gives, thanks to $\beta_N\gg N\log(N)\gg N$,
\[
\lim_{N\to\infty}\mathrm{dist}(\mu_N,\mathcal{M}_V)=0.
\]
\subsection{Linear inverse temperature and link with the Sanov theorem}
If $\beta_N=N$ and $W\equiv0$ then $P_N=(\mu_*)^{\otimes N}$ is a product measure, the law $\mu_*$ has density proportional to $e^{-V}$, the particles are i.i.d.\ of law $\mu_*$, and the Sanov theorem states that ${(\mu_N)}_{N\geq1}$ satisfies an LDP in $\mathcal{M}_1$ with rate function given by the Kullback-Leibler relative entropy $\mathcal{K}$ with respect to $\mu_*$. If $W$ is continuous and bounded then the particles are no longer independent, but the Laplace-Varadhan lemma allows one to deduce that ${(\mu_N)}_{N\geq1}$ satisfies an LDP in $\mathcal{M}_1$ with rate function $\mathcal{R}$ given by
\[
\mathcal{R}(\mu)
= -\mathcal{S}(\mu) + \mathcal{I}(\mu)
\]
where $\mathcal{S}(\mu)=\mathcal{S}(f)$ if $d\mu=fd\mu_*$ and $\mathcal{S}(\mu)=-\infty$ otherwise. We have here a contribution of the Boltzmann entropy $\mathcal{S}$ and of a Voiculescu type functional $\chi$ via its penalized version $\mathcal{I}$. Various versions of this LDP were considered in the literature, in various fields and in special situations, for instance in the works of Messer and Spohn \cite{MR704588}, Kiessling \cite{MR1193342}, Caglioti, Lions, Marchioro, and Pulvirenti \cite{MR1145596}, Bodineau and Guionnet \cite{MR1678526}, and Kiessling and Spohn \cite{MR1669669}, among others.
\subsection{Differential result: rate function analysis}
Recall that $\mathcal{I}$ is a quadratic form on measures. For every $t\in(0,1)$ and all $\mu,\nu\in\mathcal{M}_1$,
\[
\frac{t\mathcal{I}(\mu)+(1-t)\mathcal{I}(\nu)-\mathcal{I}(t\mu+(1-t)\nu)}{t(1-t)}
=\frac{1}{2}\iint\!W\,d(\mu-\nu)^2.
\]
This shows that $\mathcal{I}$ is convex if and only if $W$ is weakly positive in the sense of Bochner. The term ``weakly'' comes from the fact that positivity only needs to be checked on differences of probability measures. The Coulomb kernel in dimension $d=2$ is not positive, but it is weakly positive. It turns out that every Coulomb or Riesz kernel is weakly positive, in any dimension. Consequently, the functional $\mathcal{I}$ is convex in the case of Coulomb or Riesz interaction, in every dimension $d$. One may also rewrite the quadratic form $\mathcal{I}$ as
\[
\mathcal{I}(\mu)=\int\!V\,d\mu+\frac{1}{2}\int\!U_\mu\,d\mu
\quad\text{where}\quad
U_\mu(x):=\int\!W(x,y)\,d\mu(y).
\]
In the Coulomb case, $U_\mu$ is the logarithmic potential in dimension $d=2$, and the Coulomb potential, i.e.\ minus the Newton potential, in higher dimensions.
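Since this identity is purely algebraic, it can be checked numerically to machine precision. The following sketch is our own illustration (the grid, the Gaussian kernel, and all variable names are ours; the factor $\frac{1}{2}$ reflects the quadratic nature of $\mathcal{I}$): it discretizes two probability measures on a grid and compares both sides.

```python
import numpy as np

# Numerical check of the quadratic-form identity
#   [t I(mu) + (1-t) I(nu) - I(t mu + (1-t) nu)] / (t (1-t))
#     = (1/2) * iint W d(mu - nu) d(mu - nu)
# for discretized probability measures and a symmetric kernel W.

n = 50
x = np.linspace(-1.0, 1.0, n)
W = np.exp(-np.subtract.outer(x, x) ** 2)   # smooth symmetric kernel W(x, y)
V = x ** 2                                  # confinement potential on the grid

def I(p):
    # discrete analogue of I(mu) = int V dmu + (1/2) iint W dmu dmu
    return V @ p + 0.5 * p @ W @ p

rng = np.random.default_rng(1)
mu = rng.random(n); mu /= mu.sum()          # two probability vectors
nu = rng.random(n); nu /= nu.sum()

t = 0.3
lhs = (t * I(mu) + (1 - t) * I(nu) - I(t * mu + (1 - t) * nu)) / (t * (1 - t))
rhs = 0.5 * (mu - nu) @ W @ (mu - nu)
```

Any symmetric kernel works here: weak positivity of $W$ is only needed for convexity, not for the identity itself.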
An infinite dimensional Lagrange variational analysis shows that the gradient of the functional $\mathcal{I}$ at the point $\mu$ is $V+U_\mu$, and that this quantity must be constant on the support of the minimizer. In the case where $W(x,y)=k_D(x-y)$, where $k_D$ is the fundamental solution of a local (say differential) operator $D$, meaning $Dk_D=-\delta_0$ in the sense of distributions, we have $DU_\mu=-\mu$, which finally gives, on the support of $\mu_*$,
\[
\mu_*=DV.
\]
In particular, if $V$ is the squared Euclidean norm and if $W$ is the Coulomb repulsion, then $D$ is (a multiple of) the Laplacian and $DV$ is constant, which suggests that $\mu_*$ is constant on its support (this is compatible with what we already know in dimension $d=2$, namely that $\mu_*$ is uniform on the unit disc of $\mathbb{C}$). We show in \cite{2013arXiv1304.7569C} that if $W$ is the Riesz interaction, then the functional $\mathcal{I}$ is strictly convex, the set $\arg\inf \mathcal{I}=\{\mu_*\}$ is a singleton, $\mu_*$ is compactly supported, and, almost surely,
\[
\mu_N\underset{N\to\infty}{\longrightarrow}\mu_*.
\]
Moreover, $\mu_*$ is characterized by the Lagrange conditions
\[
U_{\mu_*}+V=C_*\ \text{on $\mathrm{supp}(\mu_*)$, and $U_{\mu_*}+V\geq C_*$ outside}.
\]
In the Coulomb case the constant $C_*$ is known as the modified Robin constant. The support constraint in the Lagrange conditions makes the analysis of the Riesz case beyond the Coulomb case difficult, due to the non-local nature of the fractional Laplacian. Finally, let us mention that it is shown in \cite{2013arXiv1304.7569C}, using the Lagrange conditions, that one can reconstruct $V$ from $\mu_*$ when $\mu_*$ is smooth enough; this gives a positive answer to the physical control problem mentioned before.
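As a sanity check of the Lagrange conditions in the model case $d=2$, $V(x)=\ABS{x}^2$, $W(x,y)=-2\log\ABS{x-y}$, for which $\mu_*$ is the uniform law on the unit disc and the constant is $C_*=1$, one can verify numerically that $U_{\mu_*}+V$ is constant on the support. The midpoint quadrature below is our own illustration (all names and tolerances are ours):

```python
import numpy as np

# Check U_{mu*}(x) + V(x) = C_* on supp(mu*) for the 2D Coulomb model case
# V(x) = |x|^2, W(x, y) = -2 log|x - y|, mu* = uniform law on the unit disc.
# Known closed form: U_{mu*}(x) = 1 - |x|^2 inside the disc, so C_* = 1.

h = 0.01
g = np.arange(-1 + h / 2, 1, h)                 # midpoint grid
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
pts = pts[np.sum(pts**2, axis=1) <= 1.0]        # keep points inside the disc
w = 1.0 / len(pts)                              # uniform weights, total mass 1

def U_plus_V(x):
    d = np.linalg.norm(pts - x, axis=1)
    return np.sum(w * (-2.0) * np.log(d)) + np.dot(x, x)

c0 = U_plus_V(np.array([0.0, 0.0]))             # both should be close to 1
c1 = U_plus_V(np.array([0.5, 0.0]))
```

The log singularity is integrable, so the midpoint rule converges; refining $h$ improves the match with $C_*=1$.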
In the Coulomb case, we have $\Delta U_\mu=-c_d\mu$ for a dimensional constant $c_d>0$, and an integration by parts gives
\[
\mathcal{I}(\mu)-\int\!V\,d\mu
=\frac{1}{2}\int\!U_\mu\,d\mu
=-\frac{1}{2c_d}\int\!U_\mu\,\Delta U_\mu\,dx
=\frac{1}{2c_d}\int\!\ABS{\nabla U_\mu}^2\,dx.
\]
The right hand side is, up to a constant, the integral of the squared norm of the gradient $\nabla U_\mu$ of the electrostatic potential $U_\mu$ generated by $\mu$, in other words the ``squared-field'' (``carré du champ'' in French).
\begin{table}
\begin{tabular}{r|l}
$\cdots$ & $\cdots$\\
1660 & Newton \\
1760 & Coulomb, Euler \\
1800 & Gauss \\
1820 & Carnot \\
1850 & Helmholtz \\
1860 & Clausius\\
1870 & Boltzmann, Gibbs, Maxwell\\
1900 & Markov, Perron and Frobenius \\
1930 & Fisher, Kolmogorov, Vlasov \\
1940 & de Bruijn, Kac, von Neumann, Shannon \\
1950 & Linnik, Kesten, Kullback, Sanov, Stam, Wigner\\
1960 & Ginibre, McKean, Nelson \\
1970 & Cercignani, Girko, Gross \\
1980 & Bakry and Émery, McKay, Lions, Varadhan, Voiculescu\\
2000 & Perelman, Tao and Vu, Villani \\
$\cdots$ & $\cdots$
\end{tabular}
\caption{The arrow of time and some of the main actors mentioned in the text. One may have in mind the Stigler law: ``\emph{No scientific discovery is named after its original discoverer.}'', which is attributed by Stephen Stigler to Robert K. Merton.}
\end{table}
\subsection{Related problems}
In the Coulomb case, when $V$ is radial, $\mu_*$ is supported in a ring and one can compute its density explicitly thanks to the Gauss averaging principle, which states that the electrostatic potential generated by a distribution of charges in a compact set is, outside the compact set, equal to the electrostatic potential generated by a single charge at the origin, see \cite{MR2647570}. In particular, in the Coulomb case, when $V$ is the squared norm, $\mu_*$ is the uniform law on a ball of $\dR^d$, generalizing the circular law of random matrices.
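To illustrate this first order asymptotics numerically, here is a small sketch of ours (the gradient descent, the parameters, and the seed are our own choices, not taken from the text): minimizing the discrete energy $\mathcal{I}_N$ for $d=2$, $V(x)=\ABS{x}^2$ and $W(x,y)=-2\log\ABS{x-y}$ spreads the particles over a disc of radius close to $1$, in agreement with the circular law.

```python
import numpy as np

# Sketch: gradient descent on the discrete energy
#   I_N(x) = (1/N) sum_i V(x_i) + (1/N^2) sum_{i<j} W(x_i, x_j)
# with V(x) = |x|^2 and W(x, y) = -2 log|x - y| (d = 2 Coulomb case).
# The minimizing configuration approximates arg inf I, i.e. the
# uniform law on the unit disc.

rng = np.random.default_rng(0)
N = 80
x = rng.uniform(-1.0, 1.0, size=(N, 2))     # initial configuration

def energy(x):
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(N, k=1)
    return np.sum(x**2) / N + np.sum(-2.0 * np.log(d[iu])) / N**2

e0 = energy(x)
for _ in range(2000):
    diff = x[:, None, :] - x[None, :, :]    # x_i - x_j
    d2 = np.sum(diff**2, axis=-1)
    np.fill_diagonal(d2, np.inf)            # drop the self-interaction
    # N * gradient of I_N with respect to x_i
    grad = 2.0 * x - (2.0 / N) * np.sum(diff / d2[..., None], axis=1)
    x -= 0.01 * grad
e1 = energy(x)
rmax = float(np.max(np.linalg.norm(x, axis=1)))
```

After the descent the particles fill a disc of radius close to $1$ with roughly constant density; increasing $N$ sharpens the agreement with the circular law.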
Beyond the Coulomb case, even in the Riesz case, no Gauss averaging principle is available, and it is not known how to compute $\mu_*$. Even in the Coulomb case, the computation of $\mu_*$ is a problem if $V$ is not rotationally invariant. See Saff and Dragnev \cite{MR2276529}, and Bleher and Kuijlaars \cite{MR2921180}. When $V$ is weakly confining, the LDP may still be available, but $\mu_*$ is no longer compactly supported. This was checked in dimension $d=2$ with the Coulomb potential by Hardy \cite{MR2926763}, using a (stereographic) compactification which can probably be adapted to arbitrary dimensions. It is quite natural to ask about algorithms (exact or approximate) to simulate the law $P_N$. The Coulomb case in dimension $d=2$ is determinantal, and this rigid structure allows exact algorithms \cite{MR2216966}. Beyond this special situation, one may run an Euler-Langevin MCMC approximate algorithm using the McKean-Vlasov system of particles. How can this be done efficiently? Can we do better? Speaking of the McKean-Vlasov system of particles, one may ask about its behavior when $t$ and/or $N$ are large. How does it depend on the initial condition? At which speed does it converge? Do we have propagation of chaos? Does the empirical measure converge to the expected PDE? Is the singular repulsion a problem? Can we imagine kinetic versions, in connection with recent works on Vlasov-Poisson equations, for instance \cite{MR2065020}? Similar questions are still open for models with attractive interaction such as the Keller-Segel model. Beyond the first order global analysis, one may ask about the behavior of $\mu_N-\mu_*$. This may lead to central limit theorems, in which the speed may depend on the regularity of the test function. Some answers are already available in dimension $d=2$ in the Coulomb case, by Ameur, Hedenmalm, and Makarov \cite{MR2817648}.
Another type of second order analysis is studied by Serfaty and her collaborators \cite{2014arXiv1403.6860S,MR3163544}. Similar infinite dimensional mean-field models are studied by Lewin and Rougerie in relation with the Hartree model for Bose-Einstein condensates \cite{lewin-gazette}. Such interacting particle systems with Coulomb repulsion have inspired similar models in complex geometry, in which the Laplacian is replaced by the Monge-Ampère operator. The large deviations approach remains efficient in this context, and was developed by Berman \cite{2008arXiv0812.4224B}. The limit in the first order global asymptotics depends on $V$ and $W$, and is thus non-universal. In the case of the $\beta$-ensembles of random matrix theory (particles confined on the real line $\dR$ with two-dimensional Coulomb repulsion), the asymptotics of the local statistics are universal; this was the subject of many contributions, including the works of Ramírez, Rider, and Virág \cite{MR2813333}, Erdős, Schlein, and Yau \cite{MR2810797}, Bourgade, Erdős, and Yau \cite{MR2905803}, and Bekerman, Figalli, and Guionnet \cite{BFG}. Little is known in higher dimensions or for other interaction potentials $W$. The particle of largest norm exhibits Tracy-Widom fluctuations in $\beta$-ensembles; here again, little is known in higher dimensions or for other interactions, see for instance \cite{chapec} and references therein.
\subsection{Caveat}
The bibliography provided in this text is informative but incomplete, and should not be taken as an exhaustive reference on the history of the subject.
\subsection{Acknowledgments}
The author would like to thank Laurent Miclo and Persi Diaconis for their invitation, and François Bolley for his useful remarks on an early draft of the text. This version benefited from the constructive comments of two anonymous reviewers.
\begin{center}
\begin{figure}[htbp]
\includegraphics[scale=0.4]{diaconis}
\caption*{Persi Diaconis during his talk. Institut de Mathématiques de Toulouse, March 24, 2014.}
\end{figure}
\end{center}
\makeatletter
\def\@MRExtract#1 #2!{#1}
\renewcommand{\MR}[1]{%
  \xdef\@MRSTRIP{\@MRExtract#1 !}%
  \href{http://www.ams.org/mathscinet-getitem?mr=\@MRSTRIP}{MR-\@MRSTRIP}}
\makeatother
https://arxiv.org/abs/2112.03758
Matrix completion and semidefinite matrices
Positive semidefinite Hermitian matrices that are not fully specified can be completed provided their underlying graph is chordal. If the matrix is positive definite, the completion can be uniquely characterized as the matrix that maximizes the determinant, or as the matrix whose inverse has zeroes in those places that were undetermined in the original matrix. This paper extends these uniqueness results to the case of semidefinite matrices. Because the determinant vanishes for singular matrices, and because the inverse does not exist, we introduce a generalized determinant and use generalized inverses to formulate equivalent characterizations in the semidefinite case. For a class of matrices that are singular but of maximal rank, unique characterizations can be given, just as in the positive definite case.
\section{Introduction} The question of which partial Hermitian matrices $H$ can be completed to give a fully specified positive definite Hermitian matrix was solved in \cite{grone}. The solution provided in \cite{grone} was constructive but only gave one element of the completion with each step. While giving the correct solution, the practical implementation of this procedure was rather tedious. In \cite{smith} a faster procedure was proposed that delivered the same matrix in far fewer steps by focusing on whole blocks of the matrix at once. The starting point of both procedures is the graph $\gamma$ that is constructed from the partial matrix. The vertices of $\gamma$ are given by the rows (or columns) $i$, $i=1,\ldots,n$, of the matrix $H$. Two vertices $i$ and $j$ of $\gamma$ are connected by an edge $e$ if the element \begin{equation} h_{ij} \end{equation} of the matrix is given. There is now one condition that the graph $\gamma$ has to satisfy for the procedure to work: The graph has to be \emph{chordal}. A graph is chordal if every loop of four or more elements has a chord, i.e. an edge that connects two nonconsecutive elements of the loop (for an introduction to graph theoretic notions see \cite{golumbic}). Given the graph $\gamma$ we can now proceed by constructing another graph $\Gamma$. The vertices of $\Gamma$ are given by the cliques of $\gamma$ (a clique is a maximal set of vertices that are all connected to each other; again see \cite{golumbic}) and two vertices of $\Gamma$ are connected if the two corresponding cliques have a nonempty intersection (see figure \ref{fig.allgraphs}). \begin{figure}[hbt] \begin{center} \includegraphics[width=12cm]{img/allGraphs.png} \end{center} \caption{a) Given a partial matrix $H$ we can construct a graph $\gamma$ that has an edge for all the specified elements of the matrix. The vertices of the graph are the rows (or columns) of the matrix. For the completion procedure to work the graph $\gamma$ needs to be chordal. 
The graph shown here is chordal because the central loop only has three elements and thus does not require a chord. b) The dashed lines show the cliques of $\gamma$. c) The cliques are the vertices of the new graph $\Gamma$. Edges in $\Gamma$ connect cliques with a nonempty intersection.}\label{fig.allgraphs} \end{figure} The contribution of \cite{smith} is to show that the completion procedure can be reduced to edges of the graph $\Gamma$. Any edge in $\Gamma$ is given by two cliques $\alpha$ and $\beta$ that have a nonempty intersection $\alpha\cap\beta$. Let \begin{equation} H[\alpha] = \begin{pmatrix} A & B \\ B^\star & C \end{pmatrix} \end{equation} be the submatrix of $H$ with row and column indices in $\alpha$ and let \begin{equation} H[\beta] = \begin{pmatrix} C & D \\ D^\star & E \end{pmatrix} \end{equation} be the submatrix of $H$ with row and column indices in $\beta$. We then have \begin{equation} H[\alpha\cap\beta] = C. \end{equation} We can combine these two matrices in one large matrix with row and column indices in $\alpha\cup\beta$, \begin{equation} H[\alpha\cup\beta] = \begin{pmatrix} A & B & X \\ B^\star & C & D \\ X^\star & D^\star & E \end{pmatrix}, \end{equation} where the matrix $X$ remains to be specified. It was shown in \cite[Theorem 3.2]{smith} that a partial positive semidefinite matrix $H[\alpha\cup\beta]$ can be completed to a positive semidefinite matrix by choosing \begin{equation}\label{eqn.defx} X = B C^+ D, \end{equation} where $C^+$ denotes the Moore-Penrose inverse of $C$ (see \cite{roger} and \cite{matrixanalysis} for details on the Moore-Penrose inverse). Note that the matrix does not need to be positive definite for the construction to work. It is sufficient for the matrix to be positive \emph{semi}definite. If the matrix is positive definite the choice of $X$ in equation (\ref{eqn.defx}) can be uniquely characterized in two ways. First, it is the choice that maximizes the determinant of the completed matrix.
It is also the choice for which the inverse of $H[\alpha\cup\beta]$ has zeroes in those places where $X$ sits in $H[\alpha\cup\beta]$. We see that the treatment of positive definite and positive semidefinite matrices differs. For positive definite matrices the completion in equation (\ref{eqn.defx}) has properties that uniquely determine it. For the semidefinite case we are just left with the existence result. Since the determinant of a singular matrix vanishes and since the inverse of a singular matrix does not exist it is not clear if we can hope to improve this situation. In this paper we now make two contributions. The first contribution is a new proof that the $X$ from equation (\ref{eqn.defx}) gives a positive semidefinite completion of $H[\alpha\cup\beta]$. We then extend the uniqueness results to the semidefinite case. To do this we need to introduce a generalized determinant that gives the determinant of the nonsingular part of the matrix. This determinant lacks many of the nice properties that the usual determinant has. If we restrict our attention to a special class of matrices, though, we can prove three results that hold for positive definite matrices: Fischer's inequality, the Schur determinant lemma, and Banachiewicz's form of the inverse. These three results will then be used to uniquely characterize the completion in equation (\ref{eqn.defx}) for semidefinite matrices. We start by introducing the class of matrices for which our results are valid. \section{Partitioned matrices of maximal rank}\label{sec.partition} Let the Hermitian matrix $H\in M_n(\mathbb{C})$ be partitioned as follows: \begin{equation} H = \begin{pmatrix}\label{eqn.defh} A & B \\ B^\star & C \end{pmatrix}, \end{equation} with $A\in M_k(\mathbb{C})$, $C\in M_l(\mathbb{C})$, $1\le k,l\le n-1$, $k+l=n$. 
Let $V = K\oplus L$ be the decomposition of $V\simeq \mathbb{C}^n$ in accordance with the partition of $H$ in equation (\ref{eqn.defh}), so that $A$ is a map from $K$ to $K$, and $C$ is a map from $L$ to $L$. Let us further assume that $H$ is positive semidefinite. Let $w\in N(A)\subset K$ be an element of the nullspace of $A$, i.e. let $A w = 0$. For $v=(w,0)^T\in V$ we then have \begin{equation} v^\star H v=0. \end{equation} Since $H$ is positive semidefinite this implies (see \cite{matrixanalysis}) that we already have \begin{equation} Hv = \begin{pmatrix} 0 \\ B^\star w \end{pmatrix} = 0, \end{equation} or \begin{equation}\label{eqn.nulla} N(A) \subset N(B^\star). \end{equation} In a similar fashion we can establish that the nullspace of $C$ is contained in the nullspace of $B$: \begin{equation}\label{eqn.nullc} N(C) \subset N(B). \end{equation} The relations for the nullspaces are equivalent to these relations for the ranges of $A$ and $C$: \begin{align} R(B) & \subset R(A) \\ R(B^\star) & \subset R(C). \end{align} Because of these properties $H$ is said to have the \emph{column inclusion property} \cite{matrixanalysis}. It follows from equations (\ref{eqn.nulla}) and (\ref{eqn.nullc}) that \begin{equation} N(A) \oplus N(C) \subset N(H). \end{equation} This implies that the rank of $H$ is less than or equal to the sum of the ranks of $A$ and $C$: \begin{align} \text{rank}\ H & = n - \dim N(H) \\ & \le n - (\dim N(A) + \dim N(C)) \\ & = k-\dim N(A) + (n-k) - \dim N(C) \\ & = \text{rank}\ A + \text{rank}\ C. \end{align} We have equality if and only if $N(H) = N(A)\oplus N(C)$. In the following, matrices $H$ for which this equality holds will be of particular interest to us, which is why we make the following definition: \begin{definition} Let $H$ be a positive semidefinite Hermitian matrix that is partitioned as in equation (\ref{eqn.defh}).
We say that $H$ is of \emph{maximal rank} if and only if $N(H) = N(A)\oplus N(C)$\footnote{We should be precise and say that $H$ is of maximal rank with respect to the partition in equation (\ref{eqn.defh}). For the sake of readability we will refrain from doing so and rely on the reader to infer the partition from the context.}. \end{definition} When $H$ is of maximal rank, it vanishes on \begin{equation} N(A)\oplus N(C), \end{equation} and is positive definite when restricted to the sum of the ranges of $A$ and $C$: \begin{equation} R(A) \oplus R(C). \end{equation} In section \ref{sec.ext} we will use this property to extend results that are valid for positive definite matrices to partitioned matrices of maximal rank. We need one more notion before we can formulate these results. \section{The generalized determinant}\label{sec.gendet} A Hermitian matrix $H$ defines a nonsingular map from its range $R(H)$ to its range $R(H)$. If the nullspace $N(H)$ is nonzero, i.e. if $H$ is singular, the determinant of $H$ vanishes. The determinant thus contains no information about the nonsingular map that is $H$ restricted to $R(H)$. To recover this information we introduce a generalized determinant ${\det}_+$: \begin{definition}\label{def.gendet} Let $H$ be Hermitian. Let $\bar H$ be $H$ restricted to the range of $H$: \begin{equation} \bar H = H \vert_{R(H)} : R(H) \longrightarrow R(H) \end{equation} For $H\ne 0$ we then set \begin{equation} {\det}_+ H = \det \bar H. \end{equation} For $H=0$ we set ${\det}_+ H = 1$. \end{definition} We note a number of properties of the generalized determinant: \begin{lemma}\label{lem.gentdetprop} Let $H$ be Hermitian of rank $r\le n$. Let $\lambda_i, i=1, \ldots, n$, be the eigenvalues of $H$. Let us assume that they are ordered in such a way that $\lambda_i = 0$ for $i>r$. We then have: \begin{enumerate} \item If $H$ is of full rank (i.e.
if $r=n$) we have \begin{equation} {\det}_+ H = \det H = \prod_i \lambda_i \end{equation} \item For $r<n$ we have \begin{equation} {\det}_+ H = \prod_{i=1}^r \lambda_i \end{equation} \item We have \begin{equation}\label{eqn.limit} {\det}_+ H = \lim_{\epsilon \rightarrow 0} \frac{\det( H + \epsilon I )}{\epsilon^{n-r}} \end{equation} \item For $c>0$ we have \begin{equation} {\det}_+ c H = c^{r} {\det}_+ H \end{equation} \end{enumerate} \end{lemma} \begin{proof} All of these identities follow from the fact that $H = U D U^\star$ for a unitary $U$ and a diagonal $D$ that contains the eigenvalues of $H$ on the diagonal. \end{proof} We note that the generalized determinant lacks many of the properties the usual determinant has. In particular, it is not a continuous function of $H$, and the generalized determinant of the product of two matrices is not the product of the two generalized determinants. \section{The extensions}\label{sec.ext} With the preparations in section \ref{sec.partition}, and the definition of the generalized determinant in the last section, we now want to extend three results to singular matrices: Fischer's inequality, the determinant equality for the Schur complement, and Banachiewicz's form of the inverse of a matrix. \subsection{Fischer's inequality} Fischer's inequality states that for a Hermitian positive semidefinite $H$ as in equation (\ref{eqn.defh}) we have (see e.g. \cite{horn}): \begin{equation} \det H \le \det A\; \det C. \end{equation} Furthermore, if $H$ is positive definite, we have equality if and only if $B=0$. If $H$ is singular, the determinant of $H$ vanishes and the inequality is no longer much of a constraint. We can also no longer infer that $B$ vanishes in the case of equality. We can do better when $H$ is of maximal rank. In this case $H$ vanishes on $N(A)\oplus N(C)$ and is positive definite on \begin{equation} R(A)\oplus R(C).
\end{equation} We now look at the restriction of $H$ and all its submatrices to this space and use Fischer's inequality there. Note that because $H$ is positive semidefinite it has the column (and row) inclusion property and we have \begin{equation} N(A) \subset N(B^\star) \end{equation} and \begin{equation} R(B^\star) \subset R(C). \end{equation} $B^\star$ thus vanishes on the complement of $R(A)$ and maps into $R(C)$. The restriction of $B^\star$ to $R(A)$ thus keeps the nontrivial part of $B^\star$. The same is true for $B$ and its restriction to $R(C)$. As in definition \ref{def.gendet}, we denote the restriction of $H$ to its range by $\bar H$. We obtain \begin{align} {\det}_+ H & = \det \bar H \\ &\le \det \bar A\; \det \bar C \\ & = {\det}_+ A\; {\det}_+ C. \end{align} Because we are looking at the restriction of $H$ to $R(A)\oplus R(C)$ and because $H$ is positive definite on $R(A)\oplus R(C)$, we also get that $B$, when restricted to $R(C)$, vanishes if and only if equality holds above. Since $B$ vanishes on $N(C)$, this is the case if and only if \begin{equation} B = 0. \end{equation} We thus have the following result: \begin{proposition}\label{prop.fischer} Let a Hermitian $H$ be positive semidefinite and partitioned as in equation (\ref{eqn.defh}). If $H$ is of maximal rank then \begin{equation} {\det}_+ H \le {\det}_+ A\; {\det}_+ C, \end{equation} with equality if and only if $B = 0$. \end{proposition} Let us note that we arrived at this result in two steps. Because $H$ is positive semidefinite we know that $B$ restricted to $N(C)$ is zero. That the restriction of $B$ to the range $R(C)$ is also zero follows from the equality case of Fischer's inequality for the positive definite matrix that is $H$ restricted to $R(A)\oplus R(C)$. We note that to show the inequality we could have also used equation (\ref{eqn.limit}) of lemma \ref{lem.gentdetprop}.
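Proposition \ref{prop.fischer} can be tested numerically. The sketch below is our own illustration: the helper `gen_det` computes ${\det}_+$ as the product of the nonzero eigenvalues (as in lemma \ref{lem.gentdetprop}), and a maximal rank $H$ is built by compressing a positive definite $G$ with a block diagonal projection, which prescribes $N(A)$ and $N(C)$; all names and tolerances are ours.

```python
import numpy as np

def gen_det(H, tol=1e-9):
    # det_+: product of the nonzero (here: > tol) eigenvalues; 1 for H = 0
    ev = np.linalg.eigvalsh(H)
    ev = ev[ev > tol]
    return float(np.prod(ev)) if ev.size else 1.0

rng = np.random.default_rng(2)
k = l = 3

def proj(dim, rank):
    # orthogonal projection onto a random `rank`-dimensional subspace
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return Q[:, :rank] @ Q[:, :rank].T

P = np.zeros((k + l, k + l))
P[:k, :k] = proj(k, 2)      # prescribes N(A)
P[k:, k:] = proj(l, 2)      # prescribes N(C)

G = rng.standard_normal((k + l, k + l))
G = G @ G.T + (k + l) * np.eye(k + l)       # positive definite

H = P @ G @ P               # PSD and of maximal rank: N(H) = N(A) + N(C)
A, C = H[:k, :k], H[k:, k:]
lhs, rhs = gen_det(H), gen_det(A) * gen_det(C)   # Fischer: lhs <= rhs

# Equality case: a block diagonal G forces B = 0.
G0 = np.zeros_like(G)
G0[:k, :k], G0[k:, k:] = G[:k, :k], G[k:, k:]
H0 = P @ G0 @ P
lhs0 = gen_det(H0)
rhs0 = gen_det(H0[:k, :k]) * gen_det(H0[k:, k:])
```

The compression $H=PGP$ with $G$ positive definite gives $N(H)=N(A)\oplus N(C)$, so $H$ is of maximal rank; with a generic $G$ the off-diagonal block $B$ is nonzero and the inequality is strict.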
\subsection{Schur complement} The Schur complement arises naturally when using Gaussian elimination to solve a linear equation. Let $H$ be as in equation (\ref{eqn.defh}) and let us assume we want to solve \begin{align} 0 & = H \begin{pmatrix} k \\ l \end{pmatrix} \\ & = \begin{pmatrix} A k + B l \\ B^\star k + C l \end{pmatrix}. \label{eqn:lineq} \end{align} Assuming that $A$ is invertible we can solve for $k$ in the first equation of (\ref{eqn:lineq}) to obtain \begin{equation}\label{eqn.gauss1} k = - A^{-1}B l. \end{equation} The second equation of (\ref{eqn:lineq}) then gives \begin{equation}\label{eqn.gauss2} (C - B^\star A^{-1}B) l = 0. \end{equation} The expression in the parentheses is called the Schur complement of $A$ in $H$ and is denoted by $H/A$: \begin{equation} H/A = C - B^\star A^{-1}B \end{equation} For a positive semidefinite matrix $H$ we may generalize the definition to \begin{equation} H/A = C - B^\star A^+B, \end{equation} where $A^+$ is the Moore-Penrose inverse of $A$ (again, see \cite{roger} and \cite{matrixanalysis}). Note that the choice of generalized inverse does not matter here since $R(B)\subset R(A)$. Emilie Haynsworth introduced the name and showed that the Schur complement possesses many interesting properties \cite{hayns} (see \cite{horn} for a detailed exposition). In \cite{schur} Issai Schur showed that for a positive definite $H$ the determinant of $H$ satisfies \begin{equation} \det H = \det A\; \det H/A. \end{equation} We now want to show that this equality also holds for the generalized determinant. Again, we focus our attention on $R(A)\oplus R(C)$ where $H$ is positive definite. Using the notation from the previous section we obtain: \begin{align} {\det}_+ H & = \det \bar H \\ & = \det \bar A \; \det \bar H / \bar A \\ & = {\det}_+ A \; \det \bar H / \bar A \end{align} We need to convince ourselves that the last determinant is equal to ${\det}_+ H/A$. 
To check this, we need to show that \begin{align} N(H/A) & = N( C - B^\star A^+ B ) \\ & = N(C). \end{align} We already know that $N(C) \subset N(H/A)$. To show equality let $l\in N(H/A)$. This $l$ thus satisfies equation (\ref{eqn.gauss2}) that was obtained through Gaussian elimination. We define $k$ according to equation (\ref{eqn.gauss1}): \begin{equation} k = - A^+ B l. \end{equation} It then follows that \begin{equation} H\begin{pmatrix} k\\ l \end{pmatrix} = 0. \end{equation} Since $H$ is of maximal rank this implies that $(k,l)^T\in N(A)\oplus N(C)$. In particular, we have $l\in N(C)$. We thus obtain our second result: \begin{proposition}\label{prop.schur} Let a Hermitian $H$ be positive semidefinite and partitioned as in equation (\ref{eqn.defh}). If $H$ is of maximal rank then \begin{equation} {\det}_+ H = {\det}_+ A\; {\det}_+ H/A. \end{equation} \end{proposition} \subsection{The inverse} The last result concerns the inverse of $H$. When $H$ is positive definite its inverse can be written as \begin{equation}\label{eqn.bana} H^{-1} = \begin{pmatrix} A^{-1} + A^{-1}B(H/A)^{-1}B^\star A^{-1} & - A^{-1} B (H/A)^{-1} \\ - (H/A)^{-1} B^\star A^{-1} & (H/A)^{-1} \end{pmatrix}. \end{equation} This is a remarkable formula because to find the inverse of $H$ we just need to invert $A$ and $H/A$. This formula was first established by Banachiewicz \cite{bana} (see also \cite[p.112]{frazer}). We now want to adapt this formula to our situation where $H$ might be singular but is of maximal rank. Again, we start by looking at the restriction $\bar H$ of $H$ to its range $R(A)\oplus R(C)$. $\bar H$ is positive definite and we can express its inverse in the form of equation (\ref{eqn.bana}). In the last section we established that the ranges and nullspaces of $C$ and $H/A$ coincide so that by replacing the inverses in equation (\ref{eqn.bana}) with Moore-Penrose inverses we obtain the unique Moore-Penrose inverse of $H$. 
\begin{proposition}\label{prop.bana} Let a Hermitian $H$ be positive semidefinite and partitioned as in equation (\ref{eqn.defh}). If $H$ is of maximal rank then its Moore-Penrose inverse is given by \begin{equation}\label{eqn.banapenrose} H^{+} = \begin{pmatrix} A^{+} + A^{+}B(H/A)^{+}B^\star A^{+} & - A^{+} B (H/A)^{+} \\ - (H/A)^{+} B^\star A^{+} & (H/A)^{+} \end{pmatrix}. \end{equation} \end{proposition} This result can also be deduced from \cite[Theorem 4.6]{ouellette}. \section{Matrix completion} Let $H$ be a Hermitian matrix in $M_n(\mathbb{C})$ that is partitioned as follows: \begin{equation}\label{eqn.defnewh} H = \begin{pmatrix} A & B & X \\ B^\star & C & D \\ X^\star & D^\star & E \end{pmatrix} \end{equation} As in the introduction we use the notation from \cite{matrixanalysis} to denote submatrices of $H$. For $\alpha\subset \{1, \ldots, n\}$ let \begin{equation} H[\alpha] \end{equation} be the submatrix of $H$ with row and column indices in $\alpha$. For $\alpha,\beta\subset \{1, \ldots, n\}$ we let \begin{equation} H[\alpha, \beta] \end{equation} be the submatrix with row indices in $\alpha$ and column indices in $\beta$. Now let $\alpha$ and $\beta$ be such that \begin{equation} H[\alpha] = \begin{pmatrix} A & B \\ B^\star & C \end{pmatrix} \end{equation} and \begin{equation} H[\beta] = \begin{pmatrix} C & D \\ D^\star & E \end{pmatrix}. \end{equation} For $\gamma = \alpha\cap\beta$ we then have \begin{equation} H[\gamma] = C, \end{equation} and for the matrix $X$ in the upper right corner we have \begin{equation} H[ \alpha-\gamma, \beta-\gamma ] = X. \end{equation} We now want to know under what conditions we can choose $X$ so that the matrix $H$ is positive semidefinite. Before we state the result we note this helpful theorem: \begin{theorem}\label{theo.albert} Let $H$ be Hermitian and partitioned as in equation (\ref{eqn.defh}). Then these two statements are equivalent: \begin{enumerate} \item $H$ is positive semidefinite. 
\item $A$ and $H/A$ are positive semidefinite, and $R(B)\subset R(A)$. \end{enumerate} \end{theorem} A proof of this result can be found in \cite{albert}. We now have: \begin{theorem}\label{theo.x} Let $H\in M_n(\mathbb{C})$ be partitioned as in (\ref{eqn.defnewh}) and let $H[\alpha]$ and $H[\beta]$ be positive semidefinite. Setting \begin{align} X & = B C^+ D \\ & = H[\alpha-\gamma, \gamma] H[\gamma]^+ H[\gamma, \beta-\gamma] \end{align} turns $H$ into a positive semidefinite matrix. \end{theorem} \begin{proof} We apply theorem \ref{theo.albert} to the positive semidefinite matrices $H[\alpha]$ and $H[\beta]$ to obtain: \begin{align} C & \ge 0 \\ H[\alpha]/C & \ge 0 \label{eqn.hac}\\ H[\beta]/C & \ge 0 \label{eqn.hbc} \\ R(B^\star) & \subset R(C) \label{eqn.bsc} \\ R(D) & \subset R(C) \label{eqn.dsc} \end{align} The Schur complement $H/C$ is given by \begin{equation} H/C = \begin{pmatrix} A - B C^+ B^\star & X - B C^+ D \\ X^\star - D^\star C^+ B^\star & E - D^\star C^+ D \end{pmatrix}. \end{equation} If we choose \begin{equation} X = B C^+ D, \end{equation} and also recognize the expressions for $H[\alpha]/C$ and $H[\beta]/C$ we find \begin{equation} H/C = \begin{pmatrix} H[\alpha]/C & 0 \\ 0 & H[\beta]/C \end{pmatrix}. \end{equation} Because of equations (\ref{eqn.hac}) and (\ref{eqn.hbc}) we have \begin{equation} H/C \ge 0. \end{equation} Equations (\ref{eqn.bsc}) and (\ref{eqn.dsc}) then ensure that \begin{equation} R\big(\begin{pmatrix} B^\star & D \end{pmatrix}\big) \subset R(C). \end{equation} Since we also have $C\ge 0$ we can use theorem \ref{theo.albert} one more time, this time in the other direction, to obtain \begin{equation} H \ge 0. \end{equation} This completes the proof. \end{proof} This result already appeared in \cite{smith}. We have provided a different proof that makes use of theorem \ref{theo.albert}. It turns out that the choice of $X$ in theorem \ref{theo.x} gives $H$ unique properties. 
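For concreteness, here is a small numerical sketch of theorem \ref{theo.x} (in numpy; the block sizes and the random construction of the specified blocks are arbitrary choices for illustration). Taking the blocks $A, B, C, D, E$ from some positive definite matrix $H_0$ guarantees that $H[\alpha]$ and $H[\beta]$ are positive semidefinite with a common overlap block $C$; the completion $X = B C^+ D$ then yields a positive semidefinite $H$, and its determinant dominates that of the completion we started from:

```python
import numpy as np

rng = np.random.default_rng(1)
na, nc, ne = 2, 3, 2
n = na + nc + ne

# Take the specified blocks from a positive definite H0, so that
# H[alpha] and H[beta] are automatically positive semidefinite.
M = rng.normal(size=(n, n))
H0 = M @ M.T
A = H0[:na, :na];            B = H0[:na, na:na + nc]
C = H0[na:na + nc, na:na + nc];  D = H0[na:na + nc, na + nc:]
E = H0[na + nc:, na + nc:]

# The completion of the theorem: X = B C^+ D.
X = B @ np.linalg.pinv(C) @ D
H = np.block([[A, B, X], [B.T, C, D], [X.T, D.T, E]])

# H is positive semidefinite (all eigenvalues numerically nonnegative) ...
assert np.linalg.eigvalsh(H).min() > -1e-8
# ... and its determinant is at least that of the original completion H0.
assert np.linalg.det(H) >= np.linalg.det(H0) - 1e-9 * np.linalg.det(H0)
```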
It is in these characterizations of $X$ that we go beyond the results in \cite{smith}, because we include the case in which $H$ is singular. The first result characterizes $X = B C^+ D$ as the unique completion that maximizes the determinant of $H$. If $H$ is nonsingular we can just talk about the regular determinant of $H$. If $H$ is singular we have to use the generalized determinant that we introduced in section \ref{sec.gendet}. \begin{theorem} Let $H\in M_n(\mathbb{C})$ be partitioned as in (\ref{eqn.defnewh}) and let $H[\alpha]$ and $H[\beta]$ be positive semidefinite and of maximal rank. The choice \begin{equation} X = B C^+ D \end{equation} is the unique choice for $X$ for which $H$ is positive semidefinite, of maximal rank, and for which the (generalized) determinant is maximal. \end{theorem} \begin{proof} We have shown in the last theorem that $H$ is positive semidefinite if we set $X = B C^+ D$. $H$ is also of maximal rank. In general we have \begin{equation} \text{rank}\ H \le \text{rank}\ A + \text{rank}\ C + \text{rank}\ E. \end{equation} For a positive semidefinite matrix the rank is additive over the Schur complement (see \cite{ouellette}), so that we actually have equality: \begin{align} \text{rank}\ H & = \text{rank}\ C + \text{rank}\ H/C \\ & = \text{rank}\ C + \text{rank}\ H[\alpha]/C + \text{rank}\ H[\beta]/C \\ & = \text{rank}\ C + \text{rank}\ A + \text{rank}\ E \end{align} To establish the last equality we have used the assumption that both $H[\alpha]$ and $H[\beta]$ are of maximal rank. Thus $X=B C^+ D$ turns $H$ into a matrix of maximal rank. We next want to show that it is the only such choice that also maximizes the determinant. Assume now that $H$ is positive semidefinite and of maximal rank so that we can make use of propositions \ref{prop.fischer} and \ref{prop.schur}. Because of proposition \ref{prop.schur} we have \begin{equation} {\det}_+ H = {\det}_+ C \; {\det}_+ H/C. 
\end{equation} Because of proposition \ref{prop.fischer} we have \begin{equation} {\det}_+ H/C \le {\det}_+ H[\alpha]/C {\det}_+ H[\beta]/C, \end{equation} with equality if and only if \begin{equation} X = B C^+ D. \end{equation} It follows that this $X$ is the unique choice that maximizes the determinant of $H$. \end{proof} For a nonsingular matrix $H$ we can use the determinant to find the inverse of $H$ (see \cite{matrixanalysis}): \begin{equation}\label{eqn.inverseformula} H^{-1} = \frac{1}{\det H}\left(\frac{\partial}{\partial h_{ij}}\det H \right)^T \end{equation} Since $X$ was chosen such that the determinant is maximal the derivative in equation (\ref{eqn.inverseformula}) vanishes for indices $i$ and $j$ that denote elements of $X$ itself. It follows that $H^{-1}$ has zeroes in those places where the matrix $X$ sits in $H$. It turns out that this uniquely determines $X$ even if $H$ is only positive semidefinite and we have to talk about the Moore-Penrose inverse of $H$ instead. \begin{theorem} Let $H\in M_n(\mathbb{C})$ be partitioned as in (\ref{eqn.defnewh}) and let $H[\alpha]$ and $H[\beta]$ be positive semidefinite and of maximal rank. The choice \begin{equation} X = B C^+ D \end{equation} is the unique choice for $X$ for which $H$ is of maximal rank and for which $H^{+}$ has zeroes in those places where $X$ sits in $H$. \end{theorem} \begin{proof} Because $H$ is of maximal rank we can use proposition \ref{prop.bana} to express the Moore-Penrose inverse of $H$ in terms of $C^+$ and $(H/C)^+$. If $H^+$ is to have zeroes where $X$ is in $H$ then we must have \begin{equation} (H/C)^+ = \begin{pmatrix} (H[\alpha]/C)^+ & 0 \\ 0 & (H[\beta]/C)^+ \end{pmatrix}, \end{equation} which can only be the case if $X=B C^+ D$. 
\end{proof} For completeness we give the Moore-Penrose inverse for $H$: \begin{equation} H^+ = \begin{pmatrix} (H[\alpha]/C)^+ & -(H[\alpha]/C)^+ B C^+ & 0 \\ - C^+ B^\star (H[\alpha]/C)^+ & \Xi & - C^+ D (H[\beta]/C)^+ \\ 0 & - (H[\beta]/C)^+ D^\star C^+ & (H[\beta]/C)^+ \end{pmatrix}, \end{equation} with \begin{equation} \Xi = C^+ + C^+ B^\star (H[\alpha]/C)^+ B C^+ + C^+ D (H[\beta]/C)^+ D^\star C^+. \end{equation} \section{Conclusion} The problem of how to complete partial Hermitian matrices arises frequently in practical applications (see \cite{otherpaper} for an example from finance). This problem was solved in \cite{grone} for partial matrices whose corresponding graph is chordal. The procedure provided in \cite{grone} was improved upon in \cite{smith} by giving a way to calculate whole blocks of the completion at once. For positive definite matrices the resulting completion is singled out by two uniqueness results. It is the unique matrix that maximizes the determinant, and it is the unique matrix whose inverse has zeroes in those places that were unspecified in the original matrix. In this paper we have extended these uniqueness results to include semidefinite matrices. To make this extension possible we needed to introduce a generalized determinant that gives the determinant of the nontrivial part of a Hermitian matrix. We also needed to focus on matrices whose rank is determined solely by the ranks of their diagonal blocks. For these matrices the same uniqueness results hold that hold for positive definite matrices. \section*{Acknowledgement} I would like to thank Patrick B\"uchel for his support during the creation of this work, as well as Horst K\"ohler and Thomas Streuer for their initial push to look into maximal determinant completions of matrices and for their continued support.
https://arxiv.org/abs/1107.1638
Weighted algorithms for compressed sensing and matrix completion
This paper is about iteratively reweighted basis-pursuit algorithms for compressed sensing and matrix completion problems. In a first part, we give a theoretical explanation of the fact that reweighted basis pursuit can substantially improve upon basis pursuit for exact recovery in compressed sensing. We exhibit a condition that links the accuracy of the weights to the RIP and incoherency constants, which ensures exact recovery. In a second part, we introduce a new algorithm for matrix completion, based on the idea of iterative reweighting. Since a weighted nuclear "norm" is typically non-convex, it cannot be used easily as an objective function. So, we define a new estimator based on a fixed-point equation. We give empirical evidence that this new algorithm leads to strong improvements over nuclear norm minimization on simulated and real matrix completion problems.
\section{Introduction} \label{sec:introduction} In this paper, we consider the statistical analysis of high dimensional structured data in two closely related setups: vectors with small support and matrices with low rank. In the first setup, known as Compressed Sensing (CS) \cite{MR2241189,MR1639094,MR2243152,MR2236170,IEEE-Dononho,IEEE-CT}, the aim is to reconstruct a high dimensional vector with only few non-zero coefficients, based on a small number of linear measurements. In the second setup, called Matrix Completion \cite{MR2723472,fazel2002matrix,candes-recht08,Gross}, we aim at reconstructing a small rank matrix from the observations of only a few entries. Both problems are motivated by practical applications in many different domains (medical~\cite{medical}, imaging~\cite{MR1440119}, seismology~\cite{Sismo}, recommender systems such as the Netflix Prize, etc.) as well as by theoretical challenges in many different fields of mathematics (random matrices, geometry of Banach spaces, harmonic analysis, empirical processes theory, etc.). From an algorithmic viewpoint, one central idea is the convex relaxation of the $\ell_0$-functional (the function giving the number of non-zero coefficients of a vector) and of the rank function. This idea gave birth to two well-known algorithms: the Basis Pursuit algorithm~\cite{MR1639094} and nuclear norm minimization~\cite{candes-recht08}. Many results have been obtained for these two algorithms and we refer the reader to the next sections for more details. Here we will be interested in weighted versions of these algorithms, see~\cite{MR2461611} in the CS setup. In particular, we will be interested in finding a theoretical explanation for the fact that, empirically, weighted Basis Pursuit is observed to outperform classical Basis Pursuit. We will also propose a way to export the idea of reweighting into the Matrix Completion problem. 
\section{Weighted basis-pursuit in Compressed Sensing} \label{sec:cs-vectors} One way of setting the CS problem is to ask the following question. Starting with a $m \times N$ matrix $A$, called a \emph{sensing} or \emph{measurement} matrix, and with a vector $x$ in $\mathbb{R}^N$, is it possible to reconstruct $x$ from the linear measurements $Ax$? Classical linear algebra tells us that we need $m \geq N$ for the linear system to have a unique solution, and hence to recover $x$ from $Ax$. But, if more is known about $x$, then, hopefully, a smaller number $m$ of measurements may be enough. In the theory of CS, it is now well-understood that it is indeed possible to recover sparse signals (signals with a small support, the support being the set of non-zero entries) from a small number of linear measurements. If $x$ is a sparse vector and $A$ a ``good'' measurement matrix (in a sense to be clarified later), then looking for a vector $t$ with the smallest support satisfying $At = Ax$ can recover $x$ exactly. This procedure, called the $\ell_0$ or support minimization procedure, is known to be the best theoretical procedure to recover any $s$-sparse vector $x$ (a vector with support of size at most $s$) from $Ax$, as long as $A$ is injective on the set of all $s$-sparse vectors. However, this problem is NP-hard, in part because the function $x \mapsto |x|_0$ ($|x|_0$ stands for the cardinality of the support of $x$) is not convex, so alternatives are needed in practice. A natural remedy to this problem is convex relaxation. In~\cite{MR1639094}, the authors propose to minimize the $\ell_1$-norm as the convex envelope of this non-convex function, leading to the so-called Basis-Pursuit algorithm (BP). The BP algorithm minimizes the $\ell_1$ norm on the affine space $x + \ker A$. 
Namely, consider, for any $y \in \mathbb{R}^m$: \begin{equation} \label{eq:BP} \Delta_1(y) \in \argmin_{t \in \mathbb{R}^N} \Big(|t|_1 : At=y\Big), \end{equation} so that $\Delta_1(Ax)$ is a candidate for the reconstruction of $x$ based on $A x$. We say that $x$ is exactly reconstructed by $\Delta_1$, namely $\Delta_1(Ax) = x$, when $x$ is the unique solution of the minimization problem~\eqref{eq:BP} when $y=Ax$. Note that other algorithms have been introduced in the CS literature. For instance, $\ell_p$-minimization algorithms for $0<p<1$ are considered in~\cite{MR2421974,MR2503311,MR2647010,chartrand2008iteratively}. Some greedy algorithms based on the ideas of the Matching Pursuit algorithm of~\cite{MR1321432,MatchingPursuit} have been used in CS, see~\cite{MR2502366,MR2496554,MR2446929} for instance. In the present paper, we consider weighted-$\ell_1$ minimization over $x+\ker A$. This algorithm was introduced in~\cite{MR2461611}. Since then, it has drawn particular attention because it is now acknowledged, although mainly only empirically observed, that a properly weighted basis-pursuit algorithm can improve a lot upon basic basis-pursuit. This is illustrated in Figure~\ref{fig:phase-cs}, and many other numerical experiments can be found in~\cite{MR2461611}. However, theoretical explanations of this fact are still lacking. Some results that go in this direction are given in~\cite{2009arXiv0901.2912A, 2009arXiv0904.0994X, khajehnejadanalyzing, chartrand2008iteratively}. The results given in these papers are of a different nature than ours, however, since they use a random model for the unknown vector $x$, such as a vector with i.i.d. $N(0, 1)$ non-zero entries whose support is drawn uniformly at random given the sparsity. In the statement of our results, $x$ is an arbitrary deterministic sparse vector. 
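For readers who want to experiment, the minimization in~\eqref{eq:BP} is a linear program and can be solved with standard software. A minimal sketch (the split $t = u - v$ with $u, v \geq 0$ is the standard LP reformulation; the problem sizes and the random Gaussian design follow the experiments below, but the specific values are arbitrary choices for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    # min |t|_1 s.t. A t = y, via the split t = u - v with u, v >= 0,
    # which turns the problem into a linear program in (u, v).
    m, N = A.shape
    c = np.ones(2 * N)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(2)
m, N, s = 40, 100, 5
A = rng.normal(size=(m, N)) / np.sqrt(m)
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.normal(size=s)

# With Gaussian A and m well above s*log(N/s), exact recovery holds
# with overwhelming probability.
x_hat = basis_pursuit(A, A @ x)
assert np.linalg.norm(x_hat - x) < 1e-4 * np.linalg.norm(x)
```

The weighted decoders $\Delta_w$ introduced next are obtained from the same linear program by rescaling the columns of the objective with the weights.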
In~\cite{MR2588385} an iteratively reweighted least-squares procedure is studied, as an approximation of basis-pursuit. We introduce the weighted algorithm: for any $y \in \mathbb{R}^m$ and any sequence $w = (w_1, \ldots, w_N)\in\mathbb{R}^N$ of non-negative weights, \begin{equation} \label{eq:general-weighted-algo} \Delta_w(y) \in \argmin_{t\in\mathbb{R}^N} \Big(\sum_{i=1}^N \frac{|t_i|}{w_i} : At = y \Big). \end{equation} We use the convention $t / 0 = \infty$ when $t > 0$ and $0 / 0 = 0$. Note that, under this convention, the algorithm~\eqref{eq:general-weighted-algo} is defined according to the support $I_w$ of $w$ by \begin{equation} \label{eq:values-weight-algo} \big( \Delta_w(y) \big)_{I_w^c} = 0 \; \text{ and } \; \big( \Delta_w(y) \big)_{I_w} \in \argmin_{t \in \mathbb{R}^{I_w}} \Big(\sum_{i\in I_w} \frac{|t_i|}{w_i} : A_{I_w} t = y \Big), \end{equation} where if $t \in \mathbb{R}^N$ and $I \subset \{ 1, \ldots, N \}$, we denote by $t_I$ the vector such that $(t_I)_i = t_i$ if $i \in I$ and $(t_I)_i = 0$ if $i \notin I$. Once again, we say that $x$ is exactly reconstructed by $\Delta_w$, namely $\Delta_w(Ax) = x$, when $x$ is the unique solution of the minimization problem~\eqref{eq:general-weighted-algo} when $y=Ax$. In particular, this requires that the support of $x$ is included in the support of $w$. \subsection{No-loss property} Note that when the weight vector $w$ is close to $x$, then $\sum_{i=1}^N |x_i| / w_i$ is close to $|x|_0$. Moreover, for ``reasonable'' matrices $A$, the vector $x$ is the one with the shortest support in the affine space $x + \ker A$. So, a natural choice for $w$ in~\eqref{eq:general-weighted-algo} is $w = |\Delta_1(Ax)|$. We denote this decoder by $\Delta_2$: \begin{equation} \label{eq:weighted-BP-2} \Delta_2(y) \in \argmin_{t \in \mathbb{R}^N} \Big(\sum_{i=1}^N \frac{|t_i|}{|\Delta_1(y)_i|} : At = y \Big). \end{equation} The next Theorem proves that $\Delta_2$ is at least as good as the Basis Pursuit algorithm $\Delta_1$. 
\begin{theorem} \label{thm:A} Let $x \in \mathbb{R}^N$. If $\Delta_1(Ax)=x$, then $\Delta_2(Ax)=x$. \end{theorem} The proof of Theorem~\ref{thm:A} is based on the well-known null space property and dual characterization of~\cite{MR2236170}, see Section~\ref{sec:proofs} below. However, it was observed empirically in~\cite{MR2461611} that it is better to use positive weights, and thus to consider, for some $\varepsilon > 0$, the weights $w_i = |\Delta_1(y)_i| + \varepsilon$ for $i= 1, \ldots, N$. This is easily understood: if for some $i\in\{1,\ldots,N\}$, $\Delta_1(Ax)_i=0$ while $x_i\neq 0$, then $\Delta_2(Ax)_i$ is also equal to $0$ and there is no hope to recover $x$ using $\Delta_2$ either. By adding an extra $\varepsilon$ term to each weight, the necessary support condition $\supp(x) \subset \supp(w)$ to reconstruct $x$ from $\Delta_w(Ax)$ is satisfied (see for instance Proposition~\ref{prop:equivalence-reconstruction-exacte} in Section~\ref{sec:proofs}). The choice of $\varepsilon > 0$ can be done in a data-driven way, see~\cite{MR2461611}. \subsection{Empirical evidence} \label{sec:empirical-evidence} In Figure~\ref{fig:phase-cs}, we illustrate the fact that weighted basis-pursuit can improve a lot upon basic basis-pursuit, using a simple numerical experiment. For many combinations of $m$ ($y$-axis) and $s$ ($x$-axis), we repeat the following experiment 50 times: draw at random a sensing matrix $A$ with i.i.d $N(0, 1/m)$ entries and draw at random a vector with $s$ non-zero coordinates chosen uniformly, with i.i.d $N(0, 1)$ non-zero entries. 
Then, compute $\hat x_1 = \Delta_1(Ax)$ and $\hat x_w = \Delta_{20}^\varepsilon(Ax)$ (here we take $\varepsilon = 0.01$ without further investigation), where $\Delta_k^\varepsilon(Ax)$ is computed iteratively, using \begin{equation} \label{eq:delta-k-def} \Delta_{k+1}^\varepsilon(Ax) \in \argmin_{t \in \mathbb{R}^N} \Big(\sum_{i=1}^N \frac{|t_i|}{|\Delta_k^\varepsilon(Ax)_i| + \varepsilon} : At = Ax \Big). \end{equation} Then, we count the number of exact reconstructions achieved by $\hat x_1$ and $\hat x_w$ over the 50 repetitions. The plots on the left are the exact recovery counts of $\hat x_1$ (black means exact recovery over the 50 repetitions) while the plots on the right are the exact recovery counts of $\hat x_w$. In these figures, recovery is declared exact when $|\hat x - x|_2 / |x|_2 < \eta$, where we take $\eta = 10^{-5}$ on the first line and $\eta = 10^{-6}$ on the second line. The red curve is a theoretical ``phase-transition'' threshold $s \mapsto s \log(e m / s)$. We observe in these figures that $\hat x_w$ improves a lot upon $\hat x_1$, in particular when $\eta = 10^{-6}$. \newlength{\figwidth} \newlength{\figlength} \setlength{\figwidth}{7cm} \begin{figure}[htbp] \centering \includegraphics[width=\figwidth]{phase1tol1e5}% \includegraphics[width=\figwidth]{phasewtol1e5} \\ \includegraphics[width=\figwidth]{phase1tol1e6}% \includegraphics[width=\figwidth]{phasewtol1e6}% \caption{Exact recovery counts (black means exact recovery) of basis-pursuit (left column) and weighted basis-pursuit (right column), where the $x$-axis is the sparsity ($s$) and the $y$-axis is the number of measurements ($m$). Exact recovery is declared with a tolerance equal to $10^{-5}$ on the first line, and equal to $10^{-6}$ on the second line. 
The red curve is a theoretical phase-transition threshold $s \mapsto s \log(e m / s)$} \label{fig:phase-cs} \end{figure} \subsection{A theoretical explanation} Now, we want to understand if $\Delta_2$ can do better than $\Delta_1$, and why. In particular, if $\Delta_1(Ax)$ is close to $x$ (but fails to reconstruct exactly $x$), under which condition do we get $\Delta_2(Ax)=x$? In general, given a weight vector $w\in\mathbb{R}^N$, what conditions on $w$ can ensure that $\Delta_w(Ax)=x$? In Theorem~\ref{thm:B} below, we use the duality argument of~\cite{MR2236170} to prove that the condition \begin{equation} \label{eq:A0} (A0)(I,C) \hspace{1cm} |w_{I^c}|_\infty\big|(1/w)_I\big|_2\leq C, \end{equation} where $I$ is the support of $x$ and $C \geq 0$ is such that \begin{equation*} C \leq \frac{1 - \delta}{\mu}, \end{equation*} where $\delta$ and $\mu$ are, respectively, the restricted isometry and incoherency constants~\cite{MR2300700,MR2236170,MR2243152} of the matrix $A$, ensures that the $w$-weighted algorithm $\Delta_w$ exactly recovers $x$ from $Ax$. It is interesting to note that, so far, only random matrices are able to satisfy the incoherency and isometry properties for small values of $m$. Thus, if one wants the number $m$ of measurements to be of the order (up to some logarithmic factor) of the sparsity of the vector to recover, one has to consider random matrices. This leads to results in Compressed Sensing that hold with a large probability, with respect to the randomness involved in the construction of the sensing matrix. In practice, however, the most interesting sensing matrices are structured matrices, like the Fourier or the Walsh matrices (see~\cite{MR2300700,MR2417886}), since these matrices can be stored and constructed by efficient algorithms. A lot of research goes in this direction; we do not consider this problem here, but rather focus on weighted algorithms. 
Therefore, we will state our probabilistic results for a simple (and somewhat universal) sensing matrix $A$ with entries being i.i.d. centered Gaussian variables with variance $1/m$. \begin{theorem} \label{thm:B} Let $x\in\mathbb{R}^N$ and denote by $I$ its support and by $s$ the cardinality of $I$. Let $C, \mu>0$ and $0 < \delta < 1$. Assume that \begin{equation*} m \geq c_0 \max\Big[\frac{s}{\delta^2}, \frac{s \log N}{ \mu^2}\Big] \;\mbox{ and }\; C \leq \frac{1 - \delta}{\mu}, \end{equation*} where $c_0$ is a purely numerical constant. Consider the event $\Omega(I, C) = \{ |w_{I^c}|_\infty \big| (1/w)_I \big|_2 \leq C \}$ and let $A$ be a $m\times N$ matrix with entries being i.i.d. centered Gaussian random variables with variance $1/m$. Then, with probability larger than \begin{equation*} 1 - 2 \exp(-c_1 m \delta^2) - \exp\big(-c_2 \mu^2 m /s \big) - \P \Big[ \Omega(I, C)^\complement \Big], \end{equation*} the vector $x$ is exactly reconstructed by $\Delta_w$. \end{theorem} Theorem~\ref{thm:B} gives an explicit condition, linking the incoherency constant $\mu$, the restricted isometry constant $\delta$, and the constant $C$ from condition $A0(I, C)$ on the weights $w$, that ensures the exact reconstruction of $x$ using $\Delta_w$. This is the first result of this nature for weighted basis pursuit. When $w_{I^c}=0$ then $(A0)(I,C)$ holds with $C = 0$, so that one can take $\delta = 1$ and $\mu = +\infty$. This is the case for $w = (|\Delta_1(Ax)_i|)_{i=1}^N$ when $\Delta_1(Ax)=x$. This condition is also satisfied when the weight vector $w$ is close enough to $|x|$ and when the absolute values of the non-zero coordinates of $x$ are sufficiently large. For instance, $(A0)(I,C)$ holds when \begin{equation} \label{eq:exemple-approx-weights} \min_{i\in I} |x_i| \geq \Big(1 + \frac{\sqrt{|I|}}{C}\Big) |w - |x||_\infty. 
\end{equation} Indeed, if we let $\varepsilon = |w - |x||_\infty$, then $(A0)(I,C)$ follows from~\eqref{eq:exemple-approx-weights} since $\max_{i\in I^c}w_i\leq \varepsilon$ and \begin{equation*} \Big| \Big(\frac{1}{w}\Big)_I \Big| \leq \frac{\sqrt{|I|}}{\min_{i\in I} w_i}\leq \frac{\sqrt{|I|}}{\min_{i\in I}|x_i| - \varepsilon}. \end{equation*} In particular, if $A0(I, C)$ is satisfied with $C = c_1 / \sqrt{\log N}$, for some constant $0 < c_1 < 1$, then a number of Gaussian measurements proportional to $s$ will be enough to get $\Delta_w(Ax) = x$ with a large probability. In Figure~\ref{fig:A0-verif} below, we give an empirical illustration of the fact that $A0(I, C)$ is indeed a relevant condition for exact reconstruction by weighted basis-pursuit. We consider exactly the same experiment as in Section~\ref{sec:empirical-evidence}, but this time we fix the number of measurements to $m = 110$ and the sparsity of $x$ to $s = 45$. For this combination of $m$ and $s$, the phase transition occurs, namely basis pursuit can either work or not, see Figure~\ref{fig:phase-cs}, so we can expect for these values a strong improvement of weighted basis-pursuit over the non-weighted one. On the left side of Figure~\ref{fig:A0-verif}, we show the value of the constant $C$ over the reweighting iterations. Namely, if $I$ is the support of the true unknown vector $x$, we compute for $k = 1, \ldots, K$ the values of \begin{equation*} C^{k} = |w_{I^c}^{(k)}|_\infty\big|(1 / w^{(k)})_I\big|_2, \end{equation*} where \begin{equation*} w^{(k)} = |\Delta_k^\varepsilon(Ax)| + \varepsilon \end{equation*} over the 10 repetitions (differentiated by different colors), where we recall that $\Delta_k^\varepsilon(Ax)$ is given by~\eqref{eq:delta-k-def} and where we choose $K = 30$. 
On the right side of Figure~\ref{fig:A0-verif}, we show the logarithm of the relative reconstruction errors over the iterations, namely \begin{equation*} \mathrm{err_k} = \log \Big( \frac{|\Delta_k^\varepsilon(Ax) - x|_2}{|x|_2} \Big) \end{equation*} (we take the logarithm for illustration purposes, so that we can see the cases where exact reconstruction occurs). Each repetition of the experiment is represented with a different color. What we observe is a direct correspondence between the constant $C$ from Assumption $A0(I, C)$ and the quality of reconstruction of weighted basis pursuit along the iterations. This indicates that Assumption $A0(I, C)$ indeed explains (at least in the considered configuration) when exact reconstruction can or cannot happen using weighted basis pursuit. \setlength{\figlength}{6cm}% \setlength{\figwidth}{7.1cm} \begin{figure}[htbp] \centering \includegraphics[width=\figwidth,height=\figlength]{A0verif-C.pdf} \hspace{-0.9cm} \includegraphics[width=\figwidth,height=\figlength]{A0verif-errors.pdf} \caption{Logarithm of the value of the constant $C$ from Assumption~$A0(I, C)$ (left) and logarithm of the relative reconstruction error of weighted basis pursuit over the iterations (right).} \label{fig:A0-verif} \end{figure} \begin{remark} Note that uniform results can also be derived for the weighted-$\ell_1$ algorithm. Indeed, by using classical machinery, it can be proved that 1) implies 2) implies 3) where: \begin{enumerate} \item for all $x \in \Sigma_s$, $A \diag(w)$ satisfies $\mathrm{RIP}(\delta,8s)$ and $I_x\subset I_{w}$, \item $\sup_{x \in \ker(A \diag(w)) \cap B_1^N}\Norm{x}_2<\frac{1}{2\sqrt{s}}$ and $\forall x \in \Sigma_s, I_x\subset I_{w}$, \item for any $x\in \Sigma_s, \Delta_{w}(Ax)=x$. \end{enumerate} But, it is not clear why, for instance when $w = \Delta_1(Ax)$, it would be easier for the matrix $A \diag(\Delta_1(Ax))$ to satisfy $\mathrm{RIP}$ than for $A$ itself. 
The same remark also holds for the Euclidean section of $B_1^N$ by the kernel of $A \diag(\Delta_1(Ax))$ or $A$. These approaches look too crude to perform a study of $\ell_1$-weighted algorithms, where most of the gain can only come from the absolute multiplicative constant in front of the minimal number of measurements $m$ needed for exact reconstruction. \end{remark} \subsection{Verifying exact reconstruction} Thanks to Theorem~\ref{thm:A}, it is easy to test whether we were able to reconstruct exactly a vector $x$ given $Ax$. So far, we have had to rely on the theory to ensure that, with high probability, $\Delta_1(Ax) = x$. Using~\eqref{eq:weighted-BP-2}, we can verify this belief. Indeed, Theorem~\ref{thm:A} entails that $\Delta_2(Ax) = x$ when $\Delta_1(Ax) = x$. In particular, if $\Delta_1(Ax)\neq \Delta_2(Ax)$, then we are sure that $\Delta_1$ did not reconstruct $x$ exactly. Then, we can iterate the mechanism and define for any $k \geq 1$ \begin{equation*} \Delta_{k+1}(Ax) \in \argmin_{t \in \mathbb{R}^N} \Big(\sum_{i=1}^N \frac{|t_i|}{|\Delta_k(Ax)_i|} : At = Ax \Big), \end{equation*} leading to a sequence \begin{equation} \label{eq:sequence-BP} \Delta_1(Ax), \Delta_2(Ax), \cdots, \Delta_r(Ax). \end{equation} If the sequence~\eqref{eq:sequence-BP} does not become constant after a certain number of iterations, then it is very likely that none of the decoders $\Delta_k$ reconstructed $x$ exactly. We also have the following reverse statement. Denote by $\Sigma_k$ the set of all $k$-sparse vectors in $\mathbb{R}^N$. \begin{theorem} \label{thm:C} Let $A$ be a $m \times N$ matrix injective on $\Sigma_m$ and let $x \in \Sigma_{\lfloor m / 2 \rfloor}$. The following statements are equivalent: \begin{enumerate} \item There exists an integer $r$ such that $\Delta_r(Ax)=x$, \item The sequence $\Delta_1(Ax), \Delta_2(Ax),\ldots,$ becomes constantly equal to a $\lfloor m/2 \rfloor$-sparse vector after a certain number of iterations. 
\end{enumerate} \end{theorem} Note that a matrix with i.i.d. standard Gaussian entries is injective on $\Sigma_m$ with probability one. Thus, we propose to compute the sequence~\eqref{eq:sequence-BP} as an empirical test for the exact reconstruction of a vector $x$ from $Ax$. \section{Iteratively weighted soft-thresholding for matrix completion} \label{cs-matrices} In many applications, data can be represented as a database with missing entries. The problem is then to fill in the missing values of the database, leading to the so-called~\emph{matrix completion} problem. For instance, collaborative filtering aims at automatically predicting the tastes of a user, using the collected tastes of all users at the same time~\cite{goldberg1992using}. The Netflix prize is a popular instance of this problem\footnote{\texttt{http://www.netflixprize.com/}}. Other applications include machine learning~\cite{abernethy2006low}, control~\cite{554402}, quantum state tomography~\cite{gross2010quantum}, structure from motion~\cite{tomasi1992shape}, among many others. This problem can be understood as a non-commutative extension of the compressed sensing problem. So, a natural question is the following: \emph{Does the principle of iterative weighting of the $\ell_1$-norm also work for matrix completion?} In this Section, we show empirically that the answer to this question is yes. We show that one can improve the convex relaxation principle for matrices, which is based on the nuclear norm \cite{MR2723472},~\cite{Gross}, by using a weighted nuclear norm, in the same way as we did for vectors in Section~\ref{sec:cs-vectors}. However, note that there is, as explained below, a major difference between the vector and matrix cases at this point, since a weighted nuclear norm is not convex in general, while a weighted $\ell_1$-norm is. Let us first recall standard definitions and notations.
Let $A_0 \in \mathbb{R}^{n_1 \times n_2}$ be a matrix with $n_1$ rows and $n_2$ columns. The matrix $A_0$ is not fully observed. What we observe are the entries of $A_0$ indexed by a given subset $\Omega \subset \{ 1, \ldots, n_1 \} \times \{ 1, \ldots, n_2 \}$ of cardinality $m$, where $m \ll n_1 n_2$. For any matrix $A \in \mathbb{R}^{n_1 \times n_2}$, we define the \emph{masking} operator ${\mathcal P}_\Omega (A) \in \mathbb{R}^{n_1 \times n_2}$ such that $({\mathcal P}_\Omega (A))_{j,k} = A_{j,k}$ when $(j,k) \in \Omega$ and $({\mathcal P}_\Omega (A))_{j,k} = 0$ when $(j,k) \notin \Omega$. We also define ${\mathcal P}_\Omega^\perp(A) = A - {\mathcal P}_\Omega(A)$. Since we consider the case where $m \ll n_1 n_2$, the matrix completion problem is in general severely ill-posed. So, one needs to impose a complexity or sparsity assumption on the unknown matrix $A_0$. This is done by assuming that $A_0$ has low rank, which is the natural extension of the sparsity assumption for vectors to the spectrum of a matrix. For the problem of exact reconstruction, other geometrical assumptions are necessary (such as the incoherence assumption, see \cite{candes-recht08,MR2723472,2009arXiv0901.2912A}). Under such assumptions, it is now well-understood that the principle of convex relaxation of the rank function is able to reconstruct exactly the unknown matrix from few measurements, see \cite{candes-recht08,MR2723472, Gross, recht2009simpler}. Indeed, a natural approach would be to solve the problem \begin{equation} \label{eq:rank-matrix-completion} \begin{split} &\text{ minimize } \rank{A} \\ &\text{ subject to } {\mathcal P}_\Omega(A) = {\mathcal P}_\Omega(A_0), \end{split} \end{equation} but this minimization problem is known to be very hard to solve in practice even for small matrices, see for instance \cite{candes-recht08,MR2723472}.
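As an illustration, the masking operators above are straightforward to implement; the following is a minimal NumPy sketch (the function names are ours, not from the paper), representing $\Omega$ as a list of index pairs:

```python
import numpy as np

def P_Omega(A, Omega):
    """Masking operator: keep the entries of A indexed by Omega, zero elsewhere."""
    M = np.zeros_like(A)
    for (j, k) in Omega:
        M[j, k] = A[j, k]
    return M

def P_Omega_perp(A, Omega):
    """Complementary operator: A - P_Omega(A)."""
    return A - P_Omega(A, Omega)
```

By construction, ${\mathcal P}_\Omega(A) + {\mathcal P}_\Omega^\perp(A) = A$ for any $A$.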
The convex envelope of the rank function over the unit ball of the operator norm is the nuclear norm, see~\cite{fazel2002matrix}, which is given by \begin{equation*} \norm{A}_1 = \sum_{j=1}^{n_1 \wedge n_2} \sigma_j(A), \end{equation*} (it is the bi-conjugate of the rank function over the unit ball of the operator norm), where $\sigma_1(A) \geq \cdots \geq \sigma_{n_1 \wedge n_2}(A)$ are the singular values of $A$ in decreasing order. So, the convex relaxation of~\eqref{eq:rank-matrix-completion} is \begin{equation} \label{eq:nuclear-norm-matrix-completion} \begin{split} &\text{ minimize } \norm{A}_1 \\ &\text{ subject to } {\mathcal P}_\Omega(A) = {\mathcal P}_\Omega(A_0). \end{split} \end{equation} This problem has received a lot of attention quite recently, see \cite{candes-recht08,MR2723472, Gross, keshavan2009matrix,recht2009simpler}, among many others. The point is that, in the same way as basis pursuit for vectors,~\eqref{eq:nuclear-norm-matrix-completion} is able to recover $A_0$ exactly with high probability, based on an almost minimal number of samples (under some geometrical assumption). In the literature concerned with computational aspects \cite{springerlink:10.1007/s10107-009-0306-5}, \cite{mazumder_hastie_tibshirani09}, \cite{toh2009accelerated,liu2009implementable}, among others, the relaxed version of~\eqref{eq:nuclear-norm-matrix-completion} is considered, since it is easier to construct a solver for it (one can apply generic first-order optimal methods, such as proximal forward-backward splitting \cite{MR2203849}, among many other methods) and since it is more stable in the presence of noise. Note that the SVT algorithm of~\cite{MR2600248} gives a solution under equality constraints for an objective function with an extra ridge term $\norm{A}_1 + \tau \norm{A}_2^2$.
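Given an SVD routine, the nuclear norm is a one-liner; a minimal sketch (the helper name is ours):

```python
import numpy as np

def nuclear_norm(A):
    # Sum of the singular values of A (the 1-Schatten norm).
    return np.linalg.svd(A, compute_uv=False).sum()
```

For a diagonal matrix, the singular values are the absolute values of the diagonal entries, so for instance `nuclear_norm(np.diag([3.0, -4.0]))` equals $7$.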
The relaxed problem is simply formulated as penalized least-squares: \begin{equation} \label{eq:matrix-lasso} \hat A_\lambda \in \argmin_{A \in \mathbb{R}^{n_1 \times n_2}} \Big\{ \frac 12 \norm{{\mathcal P}_\Omega(A) - {\mathcal P}_\Omega(A_0)}_2^2 + \lambda \norm{A}_1 \Big\}, \end{equation} where $\lambda > 0$ is a parameter balancing goodness-of-fit and complexity, measured by the nuclear norm. Before we go on, we need some notations. The vector of singular values of $A$ is denoted by $\sigma(A) = (\sigma_1(A), \ldots, \sigma_r(A))$, sorted in non-increasing order, where $r$ is the rank of $A$. We define, for $p \geq 1$, the $p$-Schatten norm by \begin{equation*} \norm{A}_p = |\sigma(A)|_p, \end{equation*} which is the $\ell_p$ norm of $\sigma(A)$. We shall also denote by $\norm{A} = \norm{A}_\infty = \sigma_1(A)$ the operator norm of $A$, and note that $\norm{A}_2$ is the Frobenius norm, associated with the Euclidean inner product $\inr{A, B} = \tr(A^\top B)$, where $\tr(A)$ stands for the trace of $A$. For any matrix $A$, its singular value decomposition (SVD) writes as $A = U \diag(\sigma(A)) V^\top$, where $\diag(\sigma(A))$ is the diagonal matrix with $\sigma(A)$ on its diagonal, and $U$ and $V$ are, respectively, $n_1 \times r$ and $n_2 \times r$ orthonormal matrices. \subsection{A new algorithm for matrix completion} We have in mind to do the same as we did in Section~\ref{sec:cs-vectors} for the reconstruction of sparse vectors.
For a given weight vector $w = (w_1, \ldots, w_{n_1 \wedge n_2})$, with $w_1 \geq \cdots \geq w_{n_1 \wedge n_2} \geq 0$, we consider \begin{equation} \label{eq:weighted-matrix-lasso} \tilde A_{\lambda}^w \in \argmin_{A \in \mathbb{R}^{n_1 \times n_2}} \Big\{ \frac 12 \norm{{\mathcal P}_\Omega(A) - {\mathcal P}_\Omega(A_0)}_2^2 + \lambda \norm{A}_{1, w} \Big\}, \end{equation} where $\norm{A}_{1, w}$ is the weighted nuclear-norm \begin{equation}\label{eq:weighted-S1-norm} \norm{A}_{1, w} = \sum_{j=1}^{n_1 \wedge n_2} \frac{\sigma_j(A)}{w_j}, \end{equation} with the convention $1/0 = +\infty$. Now, we would like to use the idea of reweighting using previous estimates, in the same way as we did in Section~\ref{sec:cs-vectors}: if $\hat A_\lambda$ is a solution to~\eqref{eq:matrix-lasso}, we want to use for instance \begin{equation*} w_j = \sigma_j(\hat A_\lambda), \end{equation*} and find a solution to the problem~\eqref{eq:weighted-matrix-lasso} for this choice of weights. But, let us stress the fact that, while we call $\norm{\cdot}_{1,w}$ the weighted nuclear norm, it is not a norm, since it is not a convex function in general! A simple counter-example is as follows. If $w_1 > w_2$ (which is usually the case since singular values are taken in a non-increasing order) then for $A = \diag(1,0,\ldots,0)$ and $B = \diag(0,1,0,\ldots,0)$, we have \begin{equation*} \frac{\norm{A}_{1,w} + \norm{B}_{1,w}}{2} = \frac{\sigma_1(A) + \sigma_1(B)}{2w_1} = \frac{1}{w_1} < \frac{1}{2} \Big(\frac{1}{w_1} + \frac{1}{w_2}\Big) = \Big\|\frac{A+B}{2} \Big\|_{1,w}, \end{equation*} hence $\norm{\cdot}_{1,w}$ is not convex. Moreover, since the aim of $\norm{\cdot}_{1, w}$ is to promote low-rank matrices, the weight vector $w$ should be chosen non-increasing, corresponding precisely to the case where $\norm{\cdot}_{1, w}$ is non-convex (note that when $0 < w_1 \leq w_2 \leq \cdots\leq w_{n_1\wedge n_2}$, it is easy to prove that $\norm{\cdot}_{1,w}$ is a norm).
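The counter-example above is easy to check numerically; a minimal sketch (helper name ours), with $w = (2, 1)$ so that $w_1 > w_2$:

```python
import numpy as np

def weighted_nuclear(A, w):
    # sum_j sigma_j(A) / w_j, with singular values in decreasing order.
    s = np.sort(np.linalg.svd(A, compute_uv=False))[::-1]
    return (s / w[:len(s)]).sum()

w = np.array([2.0, 1.0])          # w_1 > w_2: the usual (non-convex) case
A = np.diag([1.0, 0.0])
B = np.diag([0.0, 1.0])
lhs = 0.5 * (weighted_nuclear(A, w) + weighted_nuclear(B, w))  # = 1/w_1 = 0.5
rhs = weighted_nuclear(0.5 * (A + B), w)                       # = (1/w_1 + 1/w_2)/2 = 0.75
assert lhs < rhs  # the midpoint has a larger value: convexity fails
```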
Consequently,~\eqref{eq:weighted-matrix-lasso} is not a convex minimization problem in general, and a minimization algorithm is very likely to get stuck in a local minimum. But we would like to stick to the idea of reweighting, since it worked well for CS. The first idea that may come to mind is to use a convex relaxation of the non-convex function $\norm{\cdot}_{1, w}$ (just as convex relaxation of the rank function led to the nuclear norm), but it simply leads back to the nuclear norm itself! Indeed, it can be proved that if $w_1 \geq w_2 \geq \cdots \geq w_{n_1\wedge n_2}>0$, the convex envelope of $\norm{\cdot}_{1,w}$ on the ball $\{ A : \norm{A}_1 \leq 1 \}$ is simply $A \mapsto \norm{A}_{1} / w_1$. Let us go back to the original problem~\eqref{eq:matrix-lasso}. It turns out that~\eqref{eq:matrix-lasso} is equivalent to the fact that $\hat A_\lambda$ satisfies the following fixed-point equation: \begin{equation} \label{eq:fixed-point-matrix-lasso} \hat A_\lambda = S_\lambda( {\mathcal P}_{\Omega}^\perp(\hat A_\lambda) + {\mathcal P}_\Omega(A_0)), \end{equation} where $S_\lambda$ is the spectral soft-thresholding operator defined for every $B\in \mathbb{R}^{n_1\times n_2}$ by \begin{equation*} S_\lambda(B) = U_B \diag\Big( ( \sigma_1(B) - \lambda)_+, \ldots, (\sigma_{\rank(B)}(B) - \lambda)_+ \Big) V_B^\top, \end{equation*} where $B = U_B \Sigma_B V_B^\top$ is the SVD of $B$, with $\Sigma_B = \diag(\sigma_1(B), \ldots, \sigma_{\rank(B)}(B))$. This fact is easily explained. Indeed, define $f_2(A) = \frac 12 \norm{{\mathcal P}_\Omega (A) - {\mathcal P}_\Omega (A_0)}_2^2$, which is a differentiable function with gradient $\nabla f_2(A) = {\mathcal P}_\Omega (A) - {\mathcal P}_\Omega (A_0)$, and $f_1(A) = \lambda \norm{A}_1$, which is a non-differentiable convex function. We will denote by $\partial f_1(A)$ the subdifferential of $f_1$ at $A$.
The fact that $\hat A_\lambda \in \argmin_{A} \{ f_2(A) + f_1(A) \}$ is equivalent to the fact that $0 \in \partial(f_1 + f_2)(\hat A_\lambda) = \{ \nabla f_2(\hat A_\lambda) \} + \partial f_1(\hat A_\lambda)$ (for the Minkowski addition of sets), which we rewrite in the following way: \begin{equation} \label{eq:matrix-lasso-charac} \big(\hat A_\lambda - \nabla f_2(\hat A_\lambda)\big) - \hat A_\lambda \in \partial f_1(\hat A_\lambda). \end{equation} On the other hand, a standard tool in convex analysis is the \emph{proximal} operator, \cite{MR2203849},~\cite{MR0274683}. The proximal operator of a convex function, for instance $f_1$, is given, for every $B\in \mathbb{R}^{n_1\times n_2}$, by \begin{equation*} \prox_{f_1}(B) = \argmin_{A \in \mathbb{R}^{n_1 \times n_2}} \Big\{ \frac{1}{2} \norm{A - B}_2^2 + f_1(A) \Big\}, \end{equation*} the minimizer being unique since $A \mapsto \frac 12 \norm{A - B}_2^2 + f_1(A)$ is strongly convex. But, since $\partial(\frac 12 \norm{\cdot - B}_2^2 + f_1(\cdot))(A) = \{ A - B \} + \partial f_1(A)$, the point $\prox_{f_1}(B)$ is uniquely determined by the inclusion \begin{equation} \label{eq:prox-charac} B - \prox_{f_1}(B) \in \partial f_1(\prox_{f_1}(B)). \end{equation} So, choosing $B = \hat A_\lambda - \nabla f_2(\hat A_\lambda)$ in~\eqref{eq:prox-charac} and identifying with~\eqref{eq:matrix-lasso-charac} leads to the fact that $\hat A_\lambda$ satisfies the fixed-point equation \begin{equation*} \hat A_\lambda = \prox_{f_1}(\hat A_\lambda - \nabla f_2(\hat A_\lambda)), \end{equation*} which leads to~\eqref{eq:fixed-point-matrix-lasso} in this particular case, since we know that $\prox_{f_1}(B) = S_\lambda(B)$ (see Proposition~\ref{prop:weighted-nuclear-prox} below).
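The identity $\prox_{f_1}(B) = S_\lambda(B)$ can be sanity-checked numerically: since the prox objective is strongly convex, spectral soft-thresholding should beat any perturbation of its output. A minimal sketch (function names ours):

```python
import numpy as np

def spectral_soft_threshold(B, lam):
    # Shrink each singular value of B by lam, keeping the SVD factors.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def prox_objective(A, B, lam):
    # (1/2) ||A - B||_2^2 + lam ||A||_1  (Frobenius and nuclear norms).
    return 0.5 * np.sum((A - B) ** 2) + lam * np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 4))
P = spectral_soft_threshold(B, lam=0.5)
# P should have a smaller objective than random perturbations of itself.
assert all(prox_objective(P, B, 0.5) <= prox_objective(P + 0.1 * rng.standard_normal(P.shape), B, 0.5)
           for _ in range(20))
```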
Note that the same argument proves that, if we add a ridge term to the nuclear norm penalization, namely \begin{equation} \label{eq:matrix-enet} \hat A_{\lambda, \tau} = \argmin_{A \in \mathbb{R}^{n_1 \times n_2}} \Big\{ \norm{{\mathcal P}_\Omega(A) - {\mathcal P}_\Omega(A_0)}_2^2 + 2 \lambda \norm{A}_1 + \tau \norm{A}_2^2 \Big\} \end{equation} for any $\tau \geq 0$, then an equivalent formulation is the fixed-point equation \begin{equation} \label{eq:fixed-point-enet} \hat A_{\lambda, \tau} = \frac{1}{1 + \tau} S_\lambda( {\mathcal P}_{\Omega}^\perp(\hat A_{\lambda, \tau}) + {\mathcal P}_\Omega(A_0)), \end{equation} and the minimizer is unique this time, since the objective function is now strongly convex. The argument given above is at the core of proximal operator theory, and leads to the so-called proximal forward-backward splitting algorithms, see~\cite{MR2203849,MR701288} and~\cite{MR2486527}. Since these algorithms are optimal among the class of first-order algorithms, they have drawn a lot of attention in the machine learning community, see for instance the survey~\cite{bach-book-chapter}. Another advantage in the case of matrix completion is that such an algorithm can handle large-scale matrices, see Remark~\ref{rem:large-scale} below. So, we have seen that~\eqref{eq:matrix-lasso} and~\eqref{eq:fixed-point-matrix-lasso}, or~\eqref{eq:matrix-enet} and~\eqref{eq:fixed-point-enet}, are equivalent formulations of the same problem. So, instead of considering~\eqref{eq:weighted-matrix-lasso}, we could consider the corresponding fixed-point problem. Unfortunately, since $\norm{\cdot}_{1, w}$ is non-convex, the above arguments based on the subdifferential do not make sense anymore. But still, we can consider an estimator defined through a fixed-point equation for the weighted soft-thresholding operator. \begin{theorem} \label{prop:existence-and-unicity} Assume that $\tau > 0$ and $w_1 \geq \cdots \geq w_{n_1 \wedge n_2} \geq 0$.
Let us define the matrix $\hat A_{\lambda}^w$ as the solution of the fixed-point equation \begin{equation} \label{eq:weighted-fixed-point} \hat A_{\lambda}^w = \frac{1}{1 + \tau} S_{\lambda}^w( {\mathcal P}_{\Omega}^\perp(\hat A_{\lambda}^w) + {\mathcal P}_\Omega(A_0)), \end{equation} where $S_{\lambda}^w$ is the weighted soft-thresholding operator given by \begin{equation} \label{eq:s-lambda-w-def} S_{\lambda}^w(B) = U_B \diag\Big( \Big( \sigma_1(B) - \frac{\lambda}{w_1} \Big)_+, \ldots, \Big(\sigma_{\rank(B)}(B) - \frac{\lambda}{w_{\rank(B)}}\Big)_+ \Big) V_B^\top, \end{equation} where $B = U_B \diag(\sigma(B)) V_B^\top$ is the SVD of $B$. Then, the solution to~\eqref{eq:weighted-fixed-point} exists and is unique. \end{theorem} Theorem~\ref{prop:existence-and-unicity} is proved in Section~\ref{sec:proofs-cs-matrices} below, and is a by-product of our analysis of the iterative scheme to approximate the solution of~\eqref{eq:weighted-fixed-point}. The parameter $\tau > 0$ can be arbitrarily small (in our numerical experiments we take it equal to zero, see Section~\ref{sec:mc-numerical-study}), but it ensures uniqueness and convergence of the iterative scheme proposed below. Once again, let us stress the fact that~\eqref{eq:weighted-fixed-point} (with $\tau=0$) is not equivalent to~\eqref{eq:weighted-matrix-lasso} in general, since $A \mapsto \norm{A}_{1, w}$ is not convex. The consideration of~\eqref{eq:weighted-fixed-point} has several advantages: we guarantee uniqueness of the solution, while the problem~\eqref{eq:weighted-matrix-lasso} may have several solutions, and it is easy to solve the fixed-point problem~\eqref{eq:weighted-fixed-point} using iterations. Even further, from a numerical point of view, it can easily be used together with a continuation algorithm, as explained in Section~\ref{sec:mc-numerical-study} below, to compute a set of solutions for several values of the smoothing parameter $\lambda$.
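A bare-bones NumPy sketch of $S_\lambda^w$ and of the fixed-point iterations for~\eqref{eq:weighted-fixed-point}, under the paper's conventions ($1/0 = +\infty$), without the continuation and warm-start devices described later; the function names are ours:

```python
import numpy as np

def S_lambda_w(B, lam, w):
    # Weighted spectral soft-thresholding: shrink sigma_j(B) by lam / w_j.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    with np.errstate(divide="ignore"):
        thresh = lam / np.asarray(w, dtype=float)[: len(s)]  # 1/0 = +inf
    return U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

def wsst_iterations(PA0, mask, lam, w, tau=0.0, n_iter=100):
    # A_{k+1} = S_lambda^w(P_Omega^perp(A_k) + P_Omega(A_0)) / (1 + tau),
    # with P_Omega encoded by the boolean array `mask` and PA0 = P_Omega(A_0).
    A = np.zeros_like(PA0)
    for _ in range(n_iter):
        A = S_lambda_w(np.where(mask, PA0, A), lam, w) / (1.0 + tau)
    return A
```

For instance, on $B = \diag(3, 1)$ with $\lambda = 1$ and $w = (1, 1/2)$, the two singular values are shrunk by $1$ and $2$ respectively, giving $\diag(2, 0)$.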
The next theorem proves that the iterates of the fixed-point equation~\eqref{eq:weighted-fixed-point} converge exponentially fast to the solution. \begin{theorem} \label{thm:algorithm_convergence} Take $A^0$ as the matrix with zero entries and define for any $k \geq 0$: \begin{equation} \label{eq:iterations} A^{k+1} = \displaystyle \frac{1}{1 + \tau} S_\lambda^{w}({\mathcal P}_{\Omega}^\perp(A^k) + {\mathcal P}_\Omega(A_0)). \end{equation} Then, for any $n \geq 1$, one has: \begin{equation*} \| \hat A_\lambda^w - A^n \|_2 \leq \frac{1}{\tau (1 + \tau)^{n}} \| {\mathcal P}_\Omega(A_0) \|_2, \end{equation*} where $\hat A_\lambda^w$ is the solution of~\eqref{eq:weighted-fixed-point}. \end{theorem} The proof of Theorem~\ref{thm:algorithm_convergence} is given in Section~\ref{sec:proofs-cs-matrices}. The main step of the proof is to establish the Lipschitz property of the weighted soft-thresholding operator, see Proposition~\ref{prop:S-w-lipshitz}. Since $S_\lambda^w$ is not a proximal operator (the objective function is not convex), we cannot directly use the property of firm nonexpansiveness, which is a direct consequence of the definition of a proximal operator, see the discussion in Section~\ref{sec:proofs-cs-matrices}. \subsection{Numerical study} \label{sec:mc-numerical-study} \subsubsection{Algorithms} In this Section we compare empirically the quality of reconstruction using nuclear norm minimization~\eqref{eq:matrix-lasso} (NNM), or equivalently~\eqref{eq:fixed-point-matrix-lasso}, and weighted spectral soft-thresholding~\eqref{eq:weighted-fixed-point} (WSST). To compute the NNM we use the Accelerated Proximal Gradient (APG) algorithm of~\cite{toh2009accelerated} using the \texttt{MATLAB} package~\texttt{NNLS}, which is a state-of-the-art solver for the minimization problem~\eqref{eq:matrix-lasso}.
This algorithm is based on an accelerated proximal gradient algorithm, itself based on the accelerated gradient of Nesterov, see~\cite{MR701288,nesterov2007gradient} and the FISTA algorithm, see \cite{MR2486527}; see also~\cite{Ji:2009:AGM:1553374.1553434} for a similar algorithm. In the APG algorithm, we use the linesearch and the continuation techniques, see \cite{toh2009accelerated}, but we don't use truncation, since it led to poor results in the problems considered here. The target value of $\lambda$ for NNM and WSST (see~\eqref{eq:matrix-lasso} and~\eqref{eq:weighted-fixed-point}) is simply taken as $\lambda_{\mathrm{target}} = \varepsilon \times \norm{{\mathcal P}_\Omega( A_0)}_\infty$, with $\varepsilon = 10^{-4}$ or $\varepsilon = 10^{-3}$ depending on the problem, see below. The solution coming out of the APG algorithm is denoted by $\hat A_\lambda^{(0)}$. Note that we could have used the FPC~\cite{springerlink:10.1007/s10107-009-0306-5} or SVT~\cite{MR2600248} algorithms instead, but in our experiments they led to poorer results compared to the APG (in particular when looking for solutions with a rank of order, say, 100 on ``real'' matrices, as in the inpainting or recommender system problems, see below). The WSST is computed following Algorithm~\ref{alg:WSST} below. The first while loop is a continuation loop, which decreases $\lambda$ progressively to $\lambda_{\mathrm{target}}$. Doing this instead of using $\lambda_{\mathrm{target}}$ directly is known to improve stability and rate of convergence of the algorithm. It does not take more time than using $\lambda_{\mathrm{target}}$ directly (actually, it usually takes less time), since we use warm starts: when taking a smaller $\lambda$, we use the previous value $A_{\mathrm{new}}$ (the solution with the previous $\lambda$) as a starting point. Once we have reached $\lambda_{\mathrm{target}}$, we obtain a first solution of the fixed-point problem~\eqref{eq:weighted-fixed-point}, denoted by $\hat A_\lambda^{(1)}$.
Then, we update the weights by taking $w_j = \sigma_j(\hat A_\lambda^{(1)})$, and we start all over. We don't use a continuation loop again, since we are already at the desired value of $\lambda$. We keep the parameter $\lambda$ fixed, and we only repeat the process of updating the weights and finding the solution to the fixed point~\eqref{eq:weighted-fixed-point} $K$ times. By doing this, we are typically going to decrease (possibly a lot) the final rank of the WSST, while keeping a good reconstruction accuracy. This process of updating the weights is usually fast. Typically, after a small number of iterations, two fixed-point solutions before and after an update are very close, so that our choice $K = 50$ is typically too large, but we keep it this way to ensure a good stability of the final solution. Note that in Algorithm~\ref{alg:WSST} we use the iterations~\eqref{eq:iterations} with $\tau = 0$, since it gives satisfactory results. We use a simple stopping rule $\norm{A_{\mathrm{new}} - A_{\mathrm{old}}}_2 / \norm{A_{\mathrm{old}}}_2 \leq \text{tol}$ with $\text{tol} = 5 \times 10^{-4}$ or $\text{tol} = 10^{-3}$ depending on the scaling of the problem, see below. We used in all our computations $q = 0.7$ and $K = 50$. For a fair comparison, we always use, for a reconstruction problem, the same parameters $\varepsilon, \mathrm{tol}$ and $\lambda$ for both NNM and WSST. Of course, for the WSST we need to rescale $\lambda$ by multiplying it by $w_1$ (the first coordinate of the weights vector, which is equal to $\sigma_1(\hat A^{(0)})$ at the first iteration). \begin{remark} \label{rem:large-scale} A nice feature of WSST is that it can handle large-scale matrices, since at each iteration one only needs to store $A_{\mathrm{old}}$, which is a low-rank matrix (coming out of a previous spectral soft-thresholding), and ${\mathcal P}_\Omega(A_0 - A_{\mathrm{old}})$, which is a sparse matrix.
\end{remark} \begin{remark} The overall computational cost of WSST is obviously much higher than that of NNM, since we use $K$ iterations, and since we don't use accelerated gradient, linesearch and other accelerating recipes in our implementation of WSST. This is done purposely: we want to compare the quality of reconstruction of the ``pure'' WSST, without computational tricks, which usually improve not only the rate of convergence, but the accuracy of reconstruction as well (this is the case when one compares NNM with and without these tools). \end{remark} \begin{algorithm}[htbp] \small \KwIn{The observed entries ${\mathcal P}_\Omega(A_0)$, a preliminary reconstruction $\hat A_\lambda^{(0)}$ and parameters $\lambda_1 > \lambda_{\text{target}} > 0$, $0 < q, \mathrm{tol} < 1$, $K \geq 1$} \KwOut{The WSST reconstruction $\hat A_\lambda^{(K)}$}% Put $A_{\text{new}} = 0$, $\lambda = \lambda_1$ and take $w_j = \sigma_j(\hat A_\lambda^{(0)})$ \While{$\lambda > \lambda_{\mathrm{target}}$}{ Put $\delta = +\infty$ \While{$\delta > \mathrm{tol}$}{ $A_{\text{old}} = A_{\text{new}}$ $A_{\text{new}} = S_\lambda^w (A_{\text{old}} - {\mathcal P}_\Omega(A_{\text{old}}) + {\mathcal P}_\Omega(A_0) )$ $\delta = \norm{A_{\mathrm{new}} - A_{\mathrm{old}}}_2 / \norm{A_{\mathrm{old}}}_2$ } $\lambda = \lambda \times q$ } Put $\hat A_\lambda^{(1)} = A_{\mathrm{new}}$ \For{$k=1, \ldots, K-1$}{ Put $w_j = \sigma_j(\hat A_\lambda^{(k)})$ and $\delta = +\infty$ \While{$\delta > \mathrm{tol}$}{ $A_{\text{old}} = A_{\text{new}}$ $A_{\text{new}} = S_\lambda^w (A_{\text{old}} - {\mathcal P}_\Omega(A_{\text{old}}) + {\mathcal P}_\Omega(A_0) )$ $\delta = \norm{A_{\mathrm{new}} - A_{\mathrm{old}}}_2 / \norm{A_{\mathrm{old}}}_2$ } Put $\hat A_\lambda^{(k+1)} = A_{\mathrm{new}}$ } \Return $\hat A_\lambda^{(K)}$ \caption{Computation of the iteratively weighted spectral soft-thresholding.} \label{alg:WSST} \end{algorithm} \subsubsection{Phase transition} In Figure~\ref{fig:phase-transision}, we give first empirical evidence of the fact that WSST improves a
lot upon NNM. For each $r \in \{ 5, 10, 15, \ldots, 80 \}$, we repeat the following experiment 50 times. We draw at random $U$ and $V$ as $500 \times r$ matrices with i.i.d. $N(0, 1)$ entries, and put $A_0 = U V^\top$ (which has rank $r$ a.s.). Then, we choose uniformly at random $30\%$ of the entries of $A_0$, and compute the NNM and the WSST based on these entries. In Figure~\ref{fig:phase-transision}, we show, for each $r$ (x-axis), the boxplots of the relative reconstruction errors $\norm{\hat A - A_0}_2 / \norm{A_0}_2$ over the 50 repetitions for $\hat A =$ NNM (top-left) and $\hat A$ = WSST (top-right). On this example, we observe that NNM is not able to recover matrices with a rank larger than 35, while WSST can recover matrices with a rank up to 70. The boxplots of the ranks recovered by NNM and WSST are on the second line, where we observe that WSST always recovers the true rank up to a rank of order $70$, while NNM correctly recovers the rank (only most of the time) up to rank $35$, and overestimates it a lot for larger ranks. So, on this simulated example, we observe a serious improvement upon NNM using WSST, since the latter has the exact reconstruction property for matrices of twice the rank ($70$ instead of $35$).
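The random instances of this experiment are easy to generate; a small-scale sketch (the helper is ours, with smaller dimensions than the $500 \times 500$ matrices of the figure, and a Bernoulli mask keeping each entry independently with probability $0.3$ rather than the exact $30\%$ uniform sampling used above):

```python
import numpy as np

def low_rank_instance(n1, n2, r, frac=0.3, seed=0):
    # A0 = U V^T with i.i.d. N(0,1) factors (rank r almost surely), plus a
    # Bernoulli(frac) mask of observed entries.
    rng = np.random.default_rng(seed)
    A0 = rng.standard_normal((n1, r)) @ rng.standard_normal((n2, r)).T
    mask = rng.random((n1, n2)) < frac
    return A0, mask
```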
\setlength{\figlength}{7.1cm}% \setlength{\figwidth}{7.1cm} \begin{figure}[htbp] \centering \includegraphics[width=\figwidth,height=\figlength]{mcphase1.pdf} \hspace{-0.9cm} \includegraphics[width=\figwidth,height=\figlength]{mcphasew.pdf} \\ \vspace{-1cm} \includegraphics[width=\figwidth,height=\figlength]{mcranks1.pdf} \hspace{-0.9cm} \includegraphics[width=\figwidth,height=\figlength]{mcranksw.pdf} \caption{Boxplots of the recovery errors (first line) and recovered ranks (second line) using NNM (left) and WSST (right) of a $500\times500$ rank $r$ matrix with $r$ between $5$ and $80$ (x-axis).} \label{fig:phase-transision} \end{figure} \subsubsection{Image inpainting} In Figure~\ref{fig:inpainting-examples}, we consider the reconstruction of four test images (``lenna'', ``fingerprint'', ``flinstones'' and ``boat''). Each test image has $512 \times 512$ pixels, and is of rank $50$. We only observe $30\%$ of the pixels, picked uniformly at random, with no noise. The observations are given in the first line of Figure~\ref{fig:inpainting-examples}, where non-observed pixels are represented in white. The second line gives the reconstruction obtained using NNM. The third line shows the difference between the true image and the recovery by NNM, where blue is perfect recovery and red is bad recovery. The fourth line shows the reconstruction using WSST and the fifth shows the difference between the true image and the recovery by WSST. On all four images, the recovery is much better using WSST, in particular on the fingerprint and flinstones images. This can be understood from the fact that these two are very structured images. The most surprising fact is that all four reconstructions using NNM have rank 150 (because of the way we choose $\lambda$, see above), while the rank of the reconstructions obtained with WSST is never more than 90 (with the same choice of $\lambda$).
So, WSST leads to simpler (with a lower rank, which is better in terms of compression/description) and more accurate reconstructions. In particular, we observe that WSST is able to recover the underlying geometry of the true images more precisely (for instance, on the third line, first column, we can still recognize the shape of lenna in the NNM residual, while this is not the case with WSST). \setlength{\figlength}{3cm}% \setlength{\figwidth}{3cm} \begin{figure}[htbp] \centering \includegraphics[width=\figwidth,height=\figlength]{lenaOmega.png} \includegraphics[width=\figwidth,height=\figlength]{fingerprintOmega.png} \includegraphics[width=\figwidth,height=\figlength]{flinstonesOmega.png} \includegraphics[width=\figwidth,height=\figlength]{boatOmega.png} \\ \includegraphics[width=\figwidth,height=\figlength]{lena1.png} \includegraphics[width=\figwidth,height=\figlength]{fingerprint1.png} \includegraphics[width=\figwidth,height=\figlength]{flinstones1.png} \includegraphics[width=\figwidth,height=\figlength]{boat1.png} \\ \includegraphics[width=\figwidth,height=\figlength]{lena1diff.png} \includegraphics[width=\figwidth,height=\figlength]{fingerprint1diff.png} \includegraphics[width=\figwidth,height=\figlength]{flinstones1diff.png} \includegraphics[width=\figwidth,height=\figlength]{boat1diff.png} \\ \includegraphics[width=\figwidth,height=\figlength]{lenaw.png} \includegraphics[width=\figwidth,height=\figlength]{fingerprintw.png} \includegraphics[width=\figwidth,height=\figlength]{flinstonesw.png} \includegraphics[width=\figwidth,height=\figlength]{boatw.png} \\ \includegraphics[width=\figwidth,height=\figlength]{lenawdiff.png} \includegraphics[width=\figwidth,height=\figlength]{fingerprintwdiff.png} \includegraphics[width=\figwidth,height=\figlength]{flinstoneswdiff.png} \includegraphics[width=\figwidth,height=\figlength]{boatwdiff.png} \\ \caption{Image reconstruction using NNM and WSST. \emph{First line}: observed pixels (white means non-observed).
\emph{Second line}: reconstruction using NNM. \emph{Third line}: difference between truth and NNM (red is bad, blue is good). \emph{Fourth line}: recovery using WSST. \emph{Fifth line}: difference between truth and WSST.} \label{fig:inpainting-examples} \end{figure} \subsubsection{Collaborative filtering} Now, we consider matrix completion for a real dataset: the MovieLens data. It contains three datasets, available at \texttt{http://www.grouplens.org/}: \begin{itemize} \item \texttt{movie-100K:} 100,000 ratings for 1682 movies by 943 users \item \texttt{movie-1M:} 1 million ratings for 3900 movies by 6040 users \item \texttt{movie-10M:} 10 million ratings and 100,000 tags for 10681 movies by 71567 users \end{itemize} The ratings are integers between $1$ and $5$. In each of the three datasets, each user has rated at least $20$ movies. For our experiments, we simply choose uniformly at random half of the ratings of each user to form a subset $\Gamma$ of the entire set $\Omega$ of ratings. Then, based on the ratings in $\Gamma$, we try to predict the ratings in $\Omega - \Gamma$. Since many entries are missing, we measure the accuracy of completion by computing the relative error on $\Omega - \Gamma$. If $\hat A$ is a reconstruction matrix, we reproduce in Table~\ref{tab:collaborative} below the values of \begin{equation} \mathrm{err} = \norm{{\mathcal P}_{\Omega-\Gamma}(\hat A) - {\mathcal P}_{\Omega-\Gamma}(A_0)}_2 / \norm{{\mathcal P}_{\Omega-\Gamma}(A_0)}_2, \end{equation} together with the rank used for the reconstruction. We observe in Table~\ref{tab:collaborative} that WSST improves a lot upon NNM on each dataset. The most surprising fact is that the rank used by WSST is much smaller than the one used by NNM, while leading at the same time to strong prediction improvements. For \texttt{movie-1M} for instance, the prediction error of WSST is $30\%$ better than that of NNM, while the NNM solution has rank 200 and the WSST solution has rank 40.
Once again, we can conclude on this example that WSST gives both much simpler reconstructions, and better prediction accuracy. Note that we considered a maximum rank equal to 200 for the \texttt{movie-100K} and \texttt{movie-1M} datasets, and equal to 50 for \texttt{movie-10M} (to make this problem computationally tractable on a normal computer). \begin{table}[htbp] \centering \footnotesize \begin{tabular}{lcccccc} \multicolumn{7}{l}{\rule{12.5cm}{1pt}} \\ & & & \multicolumn{2}{c}{relative error} & \multicolumn{2}{c}{rank} \\ & $n_1 / n_2$ & $m$ & \multicolumn{2}{l}{\rule{2.9cm}{0.5pt}} & \multicolumn{2}{l}{\rule{2.6cm}{0.5pt}} \\ & & & NNM & WSST & NNM & WSST\\ \texttt{movie-100K:} & 943/1682 & 1.00e+5 & 3.92e-01 & 3.30e-01 & 128 & 33 \\ \texttt{movie-1M:} & 6040/3702 & 1.00e+6 & 3.83e-01 & 2.70e-01 & 200 & 40 \\ \texttt{movie-10M:} & 71567/10674 & 9.91e+6 & 2.76e-01 & 2.36e-01 & 50 & 5 \\ \multicolumn{7}{l}{\rule{12.5cm}{1pt}} \end{tabular} \caption{Relative reconstruction errors for the MovieLens datasets.} \label{tab:collaborative} \end{table} \section{Proofs} \label{sec:proofs} \subsection{Proofs for Section~\ref{sec:cs-vectors}} \label{sec:proofs-cs} We denote by $\ell_p^M$ the space $\mathbb{R}^M$ endowed with the $\ell_p$ norm. The unit ball there is denoted by $B_p^M$. We also denote the unit Euclidean sphere in $\mathbb{R}^M$ by $\cS^{M-1}$. We denote by $(e_1,\ldots,e_N)$ the canonical basis of $\mathbb{R}^N$ and for any $I\subset\{1,\ldots,N\}$ denote by $\mathbb{R}^I$ the subspace of $\mathbb{R}^N$ spanned by $(e_i:i\in I)$. Let $A = [A_{\{1\}}, \ldots, A_{\{N\}}]$ be a matrix from $\mathbb{R}^N$ to $\mathbb{R}^m$, where $A_{\{i\}}$ denotes the $i$-th column vector of $A$. Let $x \in \mathbb{R}^N$ and let $I$ be an arbitrary subset of $\{1,\ldots,N\}$. We define $A_I = [A_{\{i\}} : i \in I]$, the matrix from $\mathbb{R}^{I}$ to $\mathbb{R}^m$ with column vectors $A_{\{i\}}$ for $i\in I$.
We denote by $x_I$ the vector in $\mathbb{R}^{I}$ with coordinates $x_i$ for $i\in I$, where $x_i$ is the $i$-th coordinate of $x$. We denote by $x^I$ the vector of $\mathbb{R}^N$ such that $x_i^I=0$ when $i\notin I$ and $x_i^I=x_i$ when $i\in I$. If $w\in\mathbb{R}^N$ has nonnegative coordinates, we denote by $wx$ the vector $(w_1 x_1,\ldots,w_N x_N)$ and by $x/w$ the vector $(x_1/w_1,\ldots,x_N/w_N)$, with the previous convention when $w_i=0$ for some $i$. We denote by $|x|$ the vector $(|x_1|,\ldots,|x_N|)$. The support of $x$, denoted by $I_x$, is the set of all $i\in\{1,\ldots,N\}$ such that $x_i\neq0$. We also consider the $w$-weighted $\ell_1^N$-norm \begin{equation} \label{eq:l1weightednorm} |x|_{1,w} = \sum_{i=1}^N \frac{|x_i|}{w_i}. \end{equation} Note that $|\cdot|_{1,w}$ is a norm only when restricted to $\mathbb{R}^{I_w}$, where $I_w$ is the support of $w$. We start with the well-known null space property and dual characterization~\cite{MR2236170} of exact reconstruction of a vector by $\ell_1$-based algorithms. \begin{proposition} \label{prop:equivalence-reconstruction-exacte} Let $x, w \in \mathbb{R}^N$ and denote by $I_x$ \textup(resp. $I_w$\textup) the support of $x$ \textup(resp. $w$\textup). The following points are equivalent\textup: \begin{enumerate} \item $\Delta_w(Ax) = x$, \item $I_x \subset I_w$ and for any $h \in \ker A_{I_w}$ with $h \neq 0$, \begin{equation*} \Norm{\Big(\frac{h}{w_{I_w}}\Big)_{I_x^\complement}}_1 + \Inr{{\rm sgn}(x_{I_x}), \Big( \frac{h}{w_{I_w}}\Big)_{I_x}} > 0, \end{equation*} \item $I_x\subset I_w$ and there exists $Y \in (\ker A_{I_w})^{\bot}$ such that $(w_{I_w}Y)_{I_x}={\rm sgn}(x_{I_x})$ and $|(w_{I_w}Y)_{I_x^\complement}|_\infty < 1$. \end{enumerate} \end{proposition} \begin{proof} It follows from~\eqref{eq:values-weight-algo} that, under each of the three conditions, we have $I_x\subset I_w$.
Therefore, to simplify notation, we can work as if the ambient space were $\mathbb{R}^{I_w}$. Hence, without loss of generality, we assume that $\mathbb{R}^{I_w}=\mathbb{R}^N$. We also denote by $I=I_x$ the support of $x$. [Point~2. entails Point~1.] Using standard arguments (see for instance~\cite{MR0274683}), we can see that the subdifferential of $|\cdot|_{1,w}$ at $x \in \mathbb{R}^N$ is the set \begin{equation} \label{eq:subgradient-weighted-norm} \begin{split} \partial |x|_{1,w} = \big \{t \in \mathbb{R}^N : t_i &= {\rm sgn}(x_i)/w_i \text{ when } x_i\neq0 \\ &\text{ and } |t_i|\leq 1/w_i \text{ when } x_i=0\big\}. \end{split} \end{equation} Using the definition of the subdifferential of $|\cdot|_{1,w}$ at $x$, it follows that for any $h \in \mathbb{R}^N$, \begin{equation*} |x+h|_{1,w} \geq |x|_{1,w} + |(h/w)_{I^\complement}|_1 + \inr{{\rm sgn}(x_I),(h/w)_I}. \end{equation*} Thus, if Point~2 holds then for any $h \in \ker A$ such that $h \neq 0$, \begin{equation*} |x+h|_{1,w} > |x|_{1,w} \end{equation*} and thus Point~1 is satisfied. [Point~3. entails Point~2.] Let $Y \in (\ker A)^{\bot}$ be such that $(wY)_I={\rm sgn}(x_I)$ and $|(wY)_{I^\complement}|_\infty<1$. For any $h\neq 0$ in $\ker A$, we have \begin{align*} |(h/w)_{I^\complement}|_1 & + \inr{{\rm sgn}(x_I),(h/w)_I} = \inr{{\rm sgn}(x)^I+{\rm sgn}(h)^{I^\complement}, h/w} \\ &= \inr{({\rm sgn}(x)/w)^I + ({\rm sgn}(h)/w)^{I^\complement}, h} \\ &= \inr{({\rm sgn}(x)/w)^I + ({\rm sgn}(h)/w)^{I^\complement} - Y,h} \\ &= \inr{({\rm sgn}(h)/w)_{I^\complement}-Y_{I^\complement},h_{I^\complement}} = \sum_{i\in I^\complement}\frac{h_i}{w_i} \big({\rm sgn}(h_i)-w_iY_i\big)>0, \end{align*} where the third equality uses $Y\in(\ker A)^\bot$ and $h\in\ker A$, the fourth uses $(wY)_I={\rm sgn}(x_I)$, and the final strict inequality uses $|(wY)_{I^\complement}|_\infty<1$. [Point~1. entails Point~3.] This follows from classical results on the minimization of a convex function over a convex set (cf.~\cite{MR0274683}). Nevertheless, we provide a direct proof following the argument of \cite{MR2236170}.
Denote by $\{ e_1, \ldots, e_N \}$ the canonical basis in $\mathbb{R}^N$ and by $B_{1,w}^N$ the unit ball associated to the $w$-weighted $\ell_1^N$-norm: \begin{equation} \label{eq:weighted-l1-norm} B^N_{1,w} = \{t \in\mathbb{R}^N : |t|_{1,w} \leq 1 \}. \end{equation} If $x$ is the unique solution of~\eqref{eq:general-weighted-algo} then $|x|_{1,w} B_{1,w}^N \cap (x + \ker A ) = \{ x \}$. Then by a duality argument (for instance the Hahn--Banach theorem for the separation of convex sets), there exists $Y \in \mathbb{R}^N$ such that $x + \ker A \subset \Gamma_1$, where $\Gamma_1 = \{t : \inr{t,Y} = 1 \}$ and $|x|_{1,w} B_{1,w}^N \subset \Gamma_{\leq 1}$, where $\Gamma_{\leq 1} =\{t : \inr{t,Y} \leq 1 \}$. Introduce $F_{1,w}(x) = |x|_{1,w} \mathop{\rm conv}(w_ie_i : x_i\neq 0)$, the face of $|x|_{1,w} B_{1,w}^N$ containing $x$. By moving the hyperplane $\Gamma_1$, we can assume that $|x|_{1,w} B_{1,w}^N \cap \Gamma_1 \subset F_{1,w}(x)$. Since $|x|_{1,w} B_{1,w}^N\subset \Gamma_{\leq1}$, we have $\sup_{t\in |x|_{1,w} B_{1,w}^N}\inr{t,Y}\leq 1$, thus $|(wY)|_\infty\leq1/|x|_{1,w}$. Moreover, $x \in \Gamma_1$ so $1 = \inr{x,Y} \leq |x|_{1,w} |(wY)|_\infty \leq 1$ because $|(wY)|_\infty \leq 1 / |x|_{1,w}$. This is the equality case in H{\"o}lder's inequality, so it follows that $(wY)_I = {\rm sgn}(x_I) / |x|_{1,w}$. Then, for any $i \notin I$, $|x|_{1,w}w_ie_i\in |x|_{1,w} B^N_{1,w}$, thus $\inr{|x|_{1,w}w_ie_i,Y} \leq 1$ and $|x|_{1,w}w_ie_i\notin F_{1,w}(x)$, so $|x|_{1,w} w_i e_i\notin \Gamma_1$, thus $\inr{|x|_{1,w}w_ie_i,Y}< 1$. That is, $|(wY)_{I^\complement}|_\infty < 1/|x|_{1,w}$. Finally, for any $h\in\ker A$, $1=\inr{x+h,Y}=\inr{x,Y}+\inr{h,Y}=1+\inr{h,Y}$, thus $\inr{h,Y} = 0$ and $Y \in (\ker A)^\bot$. Then, we rescale $Y$ by $|x|_{1,w}$ to obtain Point~3. \end{proof} Both Criteria~2 and~3 in Proposition~\ref{prop:equivalence-reconstruction-exacte} can be used to characterize the exact reconstruction of a vector $x$ by the $\ell_1$-weighted algorithm.
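To make the objects above concrete, the weighted program can be solved as a standard linear program by splitting $x$ into positive and negative parts. The following sketch assumes NumPy and SciPy are available; the instance, the seed, and the single reweighting step are illustrative choices of ours, not the paper's experimental setup:

```python
import numpy as np
from scipy.optimize import linprog

def weighted_basis_pursuit(A, y, w):
    """Minimize |x|_{1,w} = sum_i |x_i| / w_i subject to A x = y
    (assuming w_i > 0 for all i), via the classical LP reformulation
    x = xp - xm with xp, xm >= 0."""
    m, N = A.shape
    cost = np.concatenate([1.0 / w, 1.0 / w])
    res = linprog(cost, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(0)
m, N, s = 30, 40, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian sensing matrix
x0 = np.zeros(N)
x0[:s] = [3.0, -2.0, 1.0]                      # a 3-sparse vector to recover
y = A @ x0

x1 = weighted_basis_pursuit(A, y, np.ones(N))         # plain basis pursuit
x2 = weighted_basis_pursuit(A, y, np.abs(x1) + 1e-3)  # one reweighting step
```

With these dimensions, exact recovery already holds for the unweighted program; the reweighted second solve, with weights built from a first solution, is a practical variant of the decoder $\Delta_2$.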
The vector $Y$ of Criterion~3 is now called an \emph{exact dual certificate} (cf. \cite{MR2236170,Gross}). We will use Criterion~3 and the construction of an exact dual certificate from~\cite{MR2236170} to prove Theorems~\ref{thm:A} and~\ref{thm:B}. Note that Criterion~2 together with the construction of an \emph{inexact dual certificate} (cf.~\cite{Gross}) can also be used. Nevertheless, we do not present this construction here since it does not improve the statement of Theorem~\ref{thm:B}. \subsubsection{Proof of Theorem~\ref{thm:A}} \label{sec:TheoA} In the same way as we did in the proof of Proposition~\ref{prop:equivalence-reconstruction-exacte}, we can work as if the ambient space were $\mathbb{R}^{I_w}$ and assume, without loss of generality, that $\mathbb{R}^{I_w}=\mathbb{R}^N$. We denote by $I$ the support of $x$. We first prove that if $\Delta_1(Ax) = x$, then $A_I$ is injective. Indeed, suppose that there exists some $h\in\mathbb{R}^{I}$ such that $h\neq0$ and $A_Ih=0$. Denote by $h^0 \in \mathbb{R}^N$ the vector such that $h^0_I = h$ and $h^0_{I^\complement}=0$. We have $h^0 \neq 0$ and $A h^0 = A_I h = 0$. In particular, for any $\lambda \neq 0$, $\lambda h^0 \in \ker A \setminus \{0\}$. Therefore, since $x$ is the unique solution of the Basis Pursuit algorithm, it follows from Point~2 of Proposition~\ref{prop:equivalence-reconstruction-exacte} (applied to the weight vector $w=(1,\ldots,1)$) that, for every $\lambda \neq 0$, $\inr{{\rm sgn}(x_I), \lambda h^0_I} > 0$, the $\ell_1$ term vanishing since $h^0$ is supported on $I$. Taking $\lambda$ of both signs, this is impossible, so $A_I$ is injective. Since $\Delta_1(Ax)=x$, the decoder $\Delta_2$ is given here by \begin{equation*} \Delta_2(Ax) \in \argmin_{t \in \mathbb{R}^N} \Big(\sum_{i=1}^N \frac{|t_i|}{|x_i|} : At = Ax \Big). \end{equation*} Therefore, according to~\eqref{eq:values-weight-algo}, we have $\Delta_2(Ax)_i = 0$ for any $i \notin I$, that is ${\rm supp}(\Delta_2(Ax))\subset I$.
As a consequence, $A_Ix_I=Ax=A\Delta_2(Ax)=A_I\Delta_2(Ax)_I$, and since $A_I$ is injective, $x_I=\Delta_2(Ax)_I$. Since $x_{I^\complement} = 0 = \Delta_2(Ax)_{I^\complement}$, we have $x = \Delta_2(Ax)$. \subsubsection{Proof of Theorem~\ref{thm:B}} \label{sec:TheoB} We adapt to our setup the ``dual certificate'' introduced in~\cite{MR2236170} and consider \begin{equation} \label{eq:exact-dual-certificate} Y^0 = A^\top A_I (A_I^\top A_I)^{-1} \Big( \frac{{\rm sgn}(x)}{w} \Big)_I. \end{equation} In particular, we have $Y^0 \in \im (A^\top) = (\ker A)^\bot$ and \begin{equation*} Y_I^0 = A^\top_I A_I(A_I^\top A_I)^{-1} \Big(\frac{{\rm sgn}(x)}{w}\Big)_I = \Big(\frac{{\rm sgn}(x)}{w}\Big)_I. \end{equation*} Thus, we have $(wY^0)_I={\rm sgn}(x_I)$. In view of Proposition~\ref{prop:equivalence-reconstruction-exacte}, it only remains to prove that $|(wY^0)_{I^\complement}|_\infty<1$ with high probability. For $0 < \delta < 1$ and $\mu > 0$, we consider the events \begin{equation} \label{eq:Omega0} \Omega_0(I,\delta) = \big\{(1-\delta) |y|_2^2\leq |A_I y|_2^2 \leq (1 + \delta) |y|_2^2, \quad \forall y \in \mathbb{R}^{I} \big\} \end{equation} and \begin{equation} \label{eq:Omega1} \Omega_1(I,\mu) = \big\{\max_{i\in I^\complement} |A_I^\top A_{\{i\}}|_2 < \mu \big\}. \end{equation} First, note that since $A_I^\top A_I - Id$ is Hermitian, we have \begin{equation*} \norm{A_I^\top A_I - Id}_{2 \rightarrow 2} = \sup_{|y|_2=1} \big| |A_I y|_2^2 - 1 \big|. \end{equation*} Thus, on $\Omega_0(I,\delta)$, we have $ \norm{A_I^\top A_I -Id}_{2\rightarrow2}\leq \delta$ and so for any $y\in\mathbb{R}^{I}$, $\big| (A_I^\top A_I)^{-1} y \big|_2 \leq (1 - \delta)^{-1} |y|_2$. In particular, \begin{equation*} \label{eq:Intermediare-0} \Big|(A_I^\top A_I)^{-1} \Big(\frac{{\rm sgn}(x)}{w}\Big)_I \Big|_2 \leq \frac{1}{1 - \delta} \Big| \Big(\frac{{\rm sgn}(x)}{w}\Big)_I \Big|_2 = \frac{1}{1 - \delta} \big| \big( 1 / w \big)_I \big|_2.
\end{equation*} Then, it follows that, on $\Omega_0(I, \delta) \cap \Omega_1(I, \mu)$ and under condition $(A0)(I, (1-\delta) / \mu)$, \begin{align*} |(wY^0)_{I^\complement}|_{\infty} &= \max_{i\in I^\complement} \Big| w_i A_{\{i\}}^\top A_I (A_I^\top A_I)^{-1} \Big(\frac{{\rm sgn}(x)}{w}\Big)_I \Big| \\ &\leq \max_{i\in I^\complement} w_i \max_{i\in I^\complement} \Big|\inr{A_I^\top A_{\{i\}}, (A_I^\top A_I)^{-1} \Big(\frac{{\rm sgn}(x)}{w}\Big)_I}\Big| \\ &\leq \max_{i\in I^\complement} w_i \max_{i\in I^\complement} \big|A_I^\top A_{\{i\}}\big|_2 \Big|(A_I^\top A_I)^{-1} \Big(\frac{{\rm sgn}(x)}{w}\Big)_I\Big|_2 \\ &< \frac{\mu}{1-\delta} \max_{i\in I^\complement} w_i \big|\big(1/w\big)_I\big|_2 \leq 1. \end{align*} Then, Theorem~\ref{thm:B} follows from the probability estimates of $\Omega_0(I,\delta) \cap \Omega_1(I,\mu)$ provided in the next lemma. \begin{Lemma} \label{lem:proba-estimates} Let $A = m^{-1/2}\big(g_{i, j}\big)$ be an $m\times N$ matrix where the $g_{i, j}$'s are i.i.d. standard Gaussian variables. Assume that \begin{equation*} m \geq c_0\max\Big[ \frac{s}{\delta^2}, \frac{s\log N}{ \mu^2} \Big]. \end{equation*} With probability larger than $1 - 2\exp(-c_1 m \delta^2) - \exp(- c_2\mu^2 m /s)$, we have \begin{equation*} (1 - \delta) |y|_2^2 \leq |A_I y|_2^2 \leq (1 + \delta) |y|_2^2, \quad \forall y \in \mathbb{R}^I \end{equation*} and $\max_{i\in I^\complement} |A_I^\top A_{\{i\}}|_2 < \mu.$ \end{Lemma} \begin{proof} For the sake of completeness, we recall here the classical $\varepsilon$-net argument to prove the first statement of Lemma~\ref{lem:proba-estimates}. It is enough to prove that $\sup_{y\in \cS^I} | |A_Iy|_2^2 - 1 | \leq \delta$, where $\cS^I$ is the set of unit vectors of $\ell_2^N$ supported on $I$.
First, note that \begin{equation*} \sup_{y \in \cS^I} \big| |A_Iy|_2^2 - 1 \big| = \sup_{y \in \cS^I} | \inr{Ty, y} | = \norm{T}_{2\rightarrow2}, \end{equation*} where $T : \mathbb{R}^I \rightarrow \mathbb{R}^I$ is the symmetric operator $A_I^\top A_I - Id$. Let $\Lambda \subset \cS^I$ be a $1/4$-net of $\cS^I$ for the $\ell_2$ metric with a cardinality smaller than $9^s$ (the existence of such a net follows from a volumetric argument, see~\cite{MR1036275}). For any $y\in\cS^I$, there exists $z \in \Lambda$ such that $y = z + u$ with $\Norm{u}_2 \leq 1/4$ and therefore, \begin{equation*} |\inr{Ty,y}|\leq |\inr{Tz,z}|+|\inr{Tu,u}|+2|\inr{Tz,u}|\leq \max_{z\in\Lambda}|\inr{Tz, z}|+\frac{9\norm{T}_{2\rightarrow2}}{16}. \end{equation*} Hence, $\norm{T}_{2 \rightarrow 2}\leq (16/7) \max_{z \in \Lambda}|\inr{Tz, z}|$, and it is enough to control the supremum of $y \mapsto |\inr{Ty,y}|$ over $\Lambda$ instead of $\cS^I$. Let $y\in\Lambda$. We denote by $G_1 / \sqrt{m}, \ldots, G_m / \sqrt{m}$ the row vectors of $A$ where $G_1, \ldots, G_m$ are $m$ independent standard Gaussian vectors of $\mathbb{R}^N$. We have $\inr{Ty, y} = m^{-1}\sum_{i=1}^m \inr{G_i,y}^2 - 1$. Since $\| \inr{G,y}^2 \|_{\psi_1} = \| \inr{G,y} \|_{\psi_2}^2$, it follows from Bernstein's inequality for $\psi_1$ random variables~\cite{vanderVaartWellner} that \begin{equation*} \P\big[ |\inr{Ty,y}| \leq \delta\big] \geq 1 - 2\exp(-c_1 m\delta^2), \end{equation*} and a union bound yields \begin{equation*} \P\big[ |\inr{Ty,y}| \leq \delta \; , \; \forall y \in \Lambda\big] \geq 1 - 2\exp( s \log 9 - c_1m \delta^2). \end{equation*} Combining the $\varepsilon$-net argument with this probability estimate we obtain that when $m \geq c_2 s/\delta^2$ then $\norm{T}_{2\rightarrow2}\leq \delta$ with probability at least $1 - 2\exp\big( -c_3m\delta^2\big)$. Now, we turn to the second part of the statement. Let $i\in I^\complement$.
The $i$-th column vector of $A$ is $A_{\{i\}}=(g_{1i},\ldots,g_{mi})^\top/\sqrt{m}$, and we keep denoting by $G_1,\ldots,G_m$ the rows of $\sqrt{m}A$, which are independent standard Gaussian vectors of $\mathbb{R}^N$, so that $A_I^\top A_{\{i\}}=m^{-1}\sum_{j=1}^m g_{ji}G_{jI}$. Since $i\notin I$, the variables $g_{ji}$ are independent of the vectors $G_{jI}$. Let $q \geq 2$ to be chosen later. By Markov's inequality, \begin{equation} \label{eq:markov-theoB} \mathbb P\Big[\Big|A_I^\top A_{\{i\}} \Big|_2\geq \mu\Big]=\mathbb P\Big[\Big|\sum_{j=1}^m g_{ji}G_{jI}\Big|_2\geq m\mu\Big]\leq (m\mu)^{-q}\mathbb{E}\Big|\sum_{j=1}^mg_{ji}G_{jI}\Big|_2^q. \end{equation} Now, we use the vectorial version of Khintchine's inequality conditionally on $G_{1I},\ldots,G_{mI}$ to obtain, for some absolute constant $c_4$, \begin{equation*} \Big(\mathbb{E}_g\Big|\sum_{j=1}^mg_{ji}G_{jI}\Big|_2^q\Big)^{1/q}\leq c_4\sqrt{q}\Big(\mathbb{E}_g\Big|\sum_{j=1}^mg_{ji}G_{jI}\Big|_2^2\Big)^{1/2}=c_4 \sqrt{q}\Big(\sum_{j=1}^m\big| G_{jI}\big|_2^2\Big)^{1/2}. \end{equation*} It follows that \begin{equation*} \mathbb{E}\Big|\sum_{j=1}^mg_{ji}G_{jI}\Big|_2^q\leq \big(c_4^2 q m s\big)^{q/2}. \end{equation*} Hence, taking $q=\big(\mu/(2c_4^2)\big)^2(m/s)$ in (\ref{eq:markov-theoB}), we obtain \begin{equation*} \mathbb P\Big[\Big|A_I^\top A_{\{i\}} \Big|_2\geq \mu\Big]\leq \exp\Big(-\frac{\mu^2 m \log 2}{s(2c_4^2)^2}\Big). \end{equation*} The result now follows from a union bound. \end{proof} \subsubsection{Proof of Theorem~\ref{thm:C}} \label{sec:TheoC} \begin{proof} Assume that $\Delta_r(Ax) = x$ and define $y = \Delta_{r+1}(Ax)$. By construction of $y$, we have $\supp(y) \subset \supp(x)$ and $Ax = Ay$. So, since $A$ is injective on $\Sigma_m$ and $x-y \in \Sigma_m$, we have $x=y$. This proves that $\Delta_{r+1}(Ax) = x$, and that the sequence $(\Delta_n(Ax))_n$ is constant and equal to a $\lfloor m/2 \rfloor$-sparse vector starting from the $r$-th iteration. Now, assume that there exists an integer $r$ and $y \in \Sigma_{\lfloor m/2 \rfloor}$ such that $\Delta_r(Ax) = \Delta_{r+1}(Ax) = \cdots=y$.
In particular, we have $Ay = Ax$, and since $A$ is injective on $\Sigma_m$ and $x-y \in \Sigma_m$, we have $x=y$. \end{proof} \subsection{Proofs for Section~\ref{cs-matrices}} \label{sec:proofs-cs-matrices} The next proposition shows that weighted spectral soft-thresholding achieves the minimum of the weighted nuclear norm plus a proximity term. Note, however, that weighted spectral soft-thresholding is not a proximal operator, since the weighted nuclear norm is not convex. In particular, the proofs below rely on a direct analysis, since arguments based on subdifferential computations are not available here. \begin{proposition} \label{prop:weighted-nuclear-prox} Let $B \in \mathbb{R}^{n_1 \times n_2}$, $\tau,\lambda \geq 0$ and $w_1 \geq \cdots \geq w_{n_1 \wedge n_2} \geq 0$. Then the minimization problem \begin{equation*} \min_{A \in \mathbb{R}^{n_1 \times n_2}} \Big\{ \frac 12 \norm{A - B}_2^2 + \lambda\sum_{j=1}^{n_1 \wedge n_2} \frac{\sigma_j(A)}{w_j} + \frac{\tau}{2} \norm{A}_2^2 \Big\} \end{equation*} has a unique solution, given by $\frac{1}{1+\tau} S_\lambda^w(B)$, where $S_\lambda^w(B)$ is the weighted soft-thresholding operator~\eqref{eq:s-lambda-w-def}. \end{proposition} \begin{proof}[Proof of Proposition~\ref{prop:weighted-nuclear-prox}] Denote for short $q = n_1\wedge n_2$ and write the SVD of $A$ as $A = U \Sigma V^\top = \sum_{j=1}^q \sigma_j u_j v_j^\top$ where $U = [u_1, \ldots, u_q]$, $V = [v_1, \ldots, v_q]$ and $\Sigma = \diag(\sigma_1, \ldots, \sigma_q)$.
We have \begin{equation*} \norm{A - B}_2^2 + \tau \norm{A}_2^2 = \norm{B}_2^2 - 2 \sum_{j=1}^{q} \sigma_j u_j^\top B v_j + (1+\tau) \sum_{j=1}^{q} \sigma_j^2 \end{equation*} so that, up to the additive constant $\frac12 \norm{B}_2^2$, we want to minimize the function \begin{equation*} \phi(U, V, \Sigma) = \frac 12 \sum_{j=1}^{q} \Big(-2\sigma_j u_j^\top B v_j + (1+\tau)\sigma_j^2 \Big) + \lambda\sum_{j=1}^{q} \frac{\sigma_j}{w_j} \end{equation*} over $U, V, \Sigma$ with the constraints $U^\top U = I$, $V^\top V = I$ and $\sigma_1 \geq \ldots \geq \sigma_q \geq 0$. Using the variational characterization of singular values, if $B = U' \Sigma' V'^\top$ is the SVD of $B$, where $U' = [u_1', \ldots, u_q']$, $V' = [v_1', \ldots, v_q']$, $\Sigma' = \diag(\sigma_1', \ldots, \sigma_q')$, we know that the maximum of $u^\top B v$ over all vectors $u$ and $v$ subject to $|u|_2 = |v|_2 = 1$ and $u$ orthogonal to $u_1', \ldots, u_{j-1}'$ and $v$ orthogonal to $v_1', \ldots, v_{j-1}'$ is achieved at $u_j'$ and $v_j'$, and is equal to $\sigma_j'$. So the minimum of $\phi(U, V, \Sigma)$ over $U$ and $V$ is achieved at $U = U'$ and $V = V'$, and \begin{equation*} \phi(U', V', \Sigma) = \frac 12 \sum_{j=1}^{q} \Big( -2 \sigma_j \sigma_j' + (1+\tau) \sigma_j^2 + 2\lambda\frac{\sigma_j}{w_j}\Big). \end{equation*} It is easy to see that for each $j$ the minimum over $\sigma_j$ is achieved at $\sigma_j = \frac{1}{1+\tau} (\sigma_j' - \frac{\lambda}{w_j})_+$, which is non-increasing in $j$, so that the ordering constraint on the $\sigma_j$'s is satisfied. \end{proof} As mentioned before, $S_\lambda^w$ is not a proximal operator. A nice property of proximal operators is that they are firmly non-expansive, see \cite{MR0274683}. Namely, if $T$ is the proximal operator of some convex function over a Hilbert space $H$, then we have \begin{equation*} \norm{T x - Ty}^2 \leq \norm{x - y}^2 - \norm{x - y - (T x - T y)}^2 \end{equation*} for any $x, y \in H$. However, it turns out that we can prove, using a direct analysis, that $S_\lambda^w$ is non-expansive.
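For concreteness, here is a minimal NumPy sketch of the weighted spectral soft-thresholding operator together with a numerical sanity check of its non-expansiveness (the dimensions, weights, and parameter values are illustrative choices of ours):

```python
import numpy as np

def weighted_soft_threshold(B, lam, w):
    """Weighted spectral soft-thresholding S_lambda^w: shrink the j-th
    singular value of B by lam / w_j and clip at zero. With
    w_1 >= ... >= w_q > 0, the shrunk values stay non-increasing."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam / w, 0.0)) @ Vt

rng = np.random.default_rng(0)
q = 4
w = np.array([4.0, 3.0, 2.0, 1.0])   # non-increasing weights
lam, tau = 0.5, 0.1

B = rng.standard_normal((5, q))
# Minimizer of (1/2)||A - B||^2 + lam * sum_j sigma_j(A)/w_j + (tau/2)||A||^2:
A_star = weighted_soft_threshold(B, lam, w) / (1.0 + tau)

# Numerical check of non-expansiveness on random pairs of matrices.
for _ in range(100):
    X, Y = rng.standard_normal((2, 5, q))
    assert (np.linalg.norm(weighted_soft_threshold(X, lam, w)
                           - weighted_soft_threshold(Y, lam, w))
            <= np.linalg.norm(X - Y) + 1e-9)
```

By Proposition~\ref{prop:weighted-nuclear-prox}, dividing the output by $1+\tau$ gives the minimizer of the penalized problem, as in the computation of \texttt{A\_star} above.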
Once again, the proof uses a direct and technical analysis (since we cannot use arguments based on subdifferential computations), while the firm non-expansiveness of proximal operators is an easy consequence of their definition. \begin{proposition} \label{prop:S-w-lipshitz} Let $w_1 \geq \cdots \geq w_{n_1 \wedge n_2} \geq 0, \lambda \geq 0$. Then, for any $A, B \in \mathbb{R}^{n_1 \times n_2}$, we have \begin{equation*} \norm{S_\lambda^w(A) - S_\lambda^w(B)}_2 \leq \norm{A - B}_2. \end{equation*} \end{proposition} \begin{proof}[Proof of Proposition~\ref{prop:S-w-lipshitz}] Let us assume without loss of generality that $\lambda = 1$. Write the SVD of $A$ and $B$ as $A = U_1 \Sigma_1 V_1^\top$ and $B = U_2 \Sigma_2 V_2^\top$ where $\Sigma_1=\diag[\sigma_{1,1},\ldots,\sigma_{1,r_1}]$, $\Sigma_2=\diag[\sigma_{2,1},\ldots,\sigma_{2,r_2}]$ and $r_1$ (resp. $r_2$) stands for the rank of $A$ (resp. $B$). We also write for short $\bar A = S_1^w(A) = U_1 \bar \Sigma_1 V_1^\top$ and $\bar B = S_1^w(B) = U_2 \bar \Sigma_2 V_2^\top$ where $\bar \Sigma_1 = \diag[ (\sigma_{1,1} - 1/w_1)_+ , \ldots, (\sigma_{1, r_1} - 1/w_{r_1})_+]$ and $\bar \Sigma_2 = \diag[ (\sigma_{2,1} - 1/w_1)_+ , \ldots, (\sigma_{2, r_2} - 1/w_{r_2})_+]$. We want to prove that $\norm{A - B}_2^2 - \norm{\bar A - \bar B}_2^2 \geq 0$. First use the decomposition \begin{align*} \norm{A - B}_2^2 - &\norm{\bar A - \bar B}_2^2 = \norm{A}_2^2 - \norm{\bar A}_2^2 + \norm{B}_2^2 - \norm{\bar B}_2^2 - 2 \inr{A, B} + 2 \inr{\bar A, \bar B} \\ &= \sum_{j=1}^{r_1} \sigma_{1, j}^2 - \sum_{j=1}^{\bar r_1} \Big( \sigma_{1, j} - \frac{1}{w_j} \Big)^2 + \sum_{j=1}^{r_2} \sigma_{2, j}^2 - \sum_{j=1}^{\bar r_2} \Big( \sigma_{2, j} - \frac{1}{w_j} \Big)^2 \\ & \quad - 2 \big(\inr{A, B} - \inr{\bar A, \bar B}\big), \end{align*} where we take $\bar r_1$ such that $\sigma_{1,j} > 1 / w_j$ for $j \leq \bar r_1$ and $\sigma_{1,j} \leq 1 / w_j$ for $j \geq \bar r_1 + 1$, and similarly for $\bar r_2$.
We decompose \begin{equation} \label{eq:dec-inr-lemma-lip} \inr{A, B} - \inr{\bar A, \bar B} = \inr{A - \bar A, B - \bar B} + \inr{\bar A, B - \bar B} + \inr{A - \bar A, \bar B}. \end{equation} Using von Neumann's trace inequality $\inr{X,Y} \leq \sum_{j} \sigma_j(X) \sigma_j(Y)$ (see for instance \cite{MR832183}, Section~7.4.13), it follows for the first term of~\eqref{eq:dec-inr-lemma-lip} that \begin{equation*} \inr{A - \bar A, B - \bar B} \leq \sum_{j=1}^{r_1 \wedge r_2} (\Sigma_{1} - \bar \Sigma_{1})_{j,j} (\Sigma_{2} - \bar \Sigma_{2})_{j,j}. \end{equation*} Using the same argument for the two other terms of~\eqref{eq:dec-inr-lemma-lip}, we obtain \begin{align*} \inr{A, B} - \inr{\bar A, \bar B} &\leq \sum_{j=1}^{r_1 \wedge r_2} \Big( (\Sigma_{1} - \bar \Sigma_{1})_{j,j}(\Sigma_{2} - \bar \Sigma_{2})_{j,j} + (\bar \Sigma_{1})_{j,j}(\Sigma_2 - \bar \Sigma_{2})_{j,j} \\ & \quad +(\Sigma_{1} - \bar \Sigma_{1})_{j,j} (\bar \Sigma_{2})_{j,j}\Big). \end{align*} We explore the case $r_1 \leq r_2$ and $\bar r_1 \leq \bar r_2$; the other cases follow the same argument. We have \begin{equation*} \inr{A, B} - \inr{\bar A, \bar B} \leq \sum_{j = 1}^{\bar r_1} \Big( \frac{\sigma_{1, j}}{w_j} + \Big(\sigma_{2, j} - \frac{1}{w_j} \Big) \frac{1}{w_j} \Big) + \sum_{j = \bar r_1 + 1}^{r_1} \sigma_{1, j} \sigma_{2, j}, \end{equation*} so an easy computation leads to \begin{align*} \norm{A - B}_2^2 - \norm{\bar A - \bar B}_2^2 &\geq \sum_{j=\bar r_2 + 1}^{r_1} \sigma_{1, j}^2 + \sum_{j=\bar r_2 + 1}^{r_2} \sigma_{2, j}^2 - 2 \sum_{j=\bar r_2 + 1}^{r_1} \sigma_{1, j} \sigma_{2, j} \\ & \quad + \sum_{j=\bar r_1 + 1}^{\bar r_2} \Big( \sigma_{1, j}^2 - 2 \sigma_{1, j} \sigma_{2, j} + \frac{2 \sigma_{2, j}}{w_j} - \frac{1}{w_j^2} \Big). \end{align*} We obviously have $\sum_{j=\bar r_2 + 1}^{r_1} \sigma_{1, j}^2 + \sum_{j=\bar r_2 + 1}^{r_2} \sigma_{2, j}^2 - 2 \sum_{j=\bar r_2 + 1}^{r_1} \sigma_{1, j} \sigma_{2, j} \geq 0$.
By definition of $\bar r_2$ and $\bar r_1$, we have $\sigma_{1, j} \leq 1 / w_j < \sigma_{2, j}$ for any $j = \bar r_1+1, \ldots, \bar r_2$. Hence, we have \begin{equation*} \sigma_{1, j}^2 - 2 \sigma_{1, j} \sigma_{2, j} + \frac{2 \sigma_{2, j}}{w_j} - \frac{1}{w_j^2} = (\sigma_{1, j} - 2\sigma_{2, j} + 1/w_j) (\sigma_{1, j} - 1/w_j) \geq 0, \end{equation*} which concludes the proof of Proposition~\ref{prop:S-w-lipshitz}. \end{proof} \begin{proof}[Proof of Theorem~\ref{prop:existence-and-unicity}] Consider the sequence $(A^k)_{k \geq 0}$ defined in~\eqref{eq:iterations}. Using Proposition~\ref{prop:S-w-lipshitz} we have for any $k\geq1$ \begin{align*} \norm{A^{k+1} - A^{k}}_2 &= \frac{1}{(1 + \tau)} \norm{S_{\lambda}^w({\mathcal P}_\Omega(A_0) + {\mathcal P}_\Omega^\perp(A^k)) - S_{\lambda}^w({\mathcal P}_\Omega(A_0) + {\mathcal P}_\Omega^\perp(A^{k-1}))}_2 \\ &\leq \frac{1}{(1 + \tau)} \norm{{\mathcal P}_\Omega^\perp(A^k) - {\mathcal P}_\Omega^\perp(A^{k-1})}_2 \leq \frac{1}{(1 + \tau)} \norm{A^k - A^{k-1}}_2, \end{align*} so that $\norm{A^{k+1} - A^{k}}_2 \leq (1 + \tau)^{-k} \norm{A^{1} - A^{0}}_2.$ This proves that $\sum_{k \geq 0} \norm{A^{k+1} - A^{k}}_2 < +\infty$, so the limit of $(A^k)_{k \geq 0}$ exists and is given by \begin{equation*} A^\infty = \sum_{k \geq 0} (A^{k+1} - A^{k})+A^0. \end{equation*} Now, by continuity of $S_\lambda^w$ and ${\mathcal P}_{\Omega}^\perp$, taking the limit on both sides of~\eqref{eq:iterations}, we obtain that $A^\infty$ satisfies the fixed-point equation \begin{equation*} A^\infty = \frac{1}{1 + \tau} S_{\lambda}^w({\mathcal P}_{\Omega}^\perp(A^\infty) + {\mathcal P}_\Omega(A_0)), \end{equation*} so we have found at least one solution. Let us show now that it is unique, so that $\hat A_\lambda^w = A^\infty$: consider a matrix $B$ satisfying the same fixed point equation. 
We have \begin{align*} \norm{B - A^\infty}_2 &= \frac{1}{1 + \tau} \norm{S_{\lambda}^w({\mathcal P}_\Omega(A_0) + {\mathcal P}_\Omega^\perp(B)) - S_{\lambda}^w({\mathcal P}_\Omega(A_0) + {\mathcal P}_\Omega^\perp(A^\infty))}_2 \\ &\leq \frac{1}{(1 + \tau)} \norm{{\mathcal P}_\Omega^\perp(B) - {\mathcal P}_\Omega^\perp(A^\infty)}_2 \leq \frac{1}{(1 + \tau)} \norm{B - A^\infty}_2, \end{align*} and therefore $B=A^\infty$, since $\tau > 0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:algorithm_convergence}] We know from the proof of Theorem~\ref{prop:existence-and-unicity} that \begin{equation*} \| \hat A_\lambda^w - A^n \|_2 = \norm{\sum_{k \geq n} (A^{k+1} - A^k)}_2 \leq \sum_{k \geq n} \frac{1}{(1 + \tau)^{k}} \norm{A^{1} - A^{0}}_2, \end{equation*} leading to the conclusion. \end{proof} \bibliographystyle{plain} \footnotesize
https://arxiv.org/abs/2204.02933
Quantitative differentiation and the medial axis
We study the medial axis of a set $K$ in Euclidean space (the set of points in space with more than one closest point in $K$) from a "coarse" and "quantitative" perspective. We show that on "most" balls $B(x,r)$ in the complement of $K$, the set of almost-closest points to $x$ in $K$ takes up a small angle as seen from $x$. In other words, most locations and scales in the complement of $K$ "appear" to fall outside the medial axis if one looks with only a certain finite resolution. The word "most" involves a Carleson packing condition, and our bounds are independent of the set $K$.
\section{Introduction} If $K\subseteq \mathbb{R}^k$, the distance from a point $p\in\mathbb{R}^k$ to the set $K$ is $$ d(p,K) = \inf\{d(p,x) : x\in K\},$$ where $d(p,x)$ denotes the Euclidean distance $|p-x|$. If $K$ is closed and $p\in \mathbb{R}^k$, then there is always a point $x\in K$ such that $d(p,x)=d(p,K)$, but this point $x$ may not be unique. The set of points $p\in\mathbb{R}^k$ for which this closest point $x\in K$ is \textit{not} unique is called the \textit{medial axis} of $K$, which we denote $\text{Med}(K)$: \begin{definition} Given $K\subseteq \mathbb{R}^k$, let \begin{equation*} \text{Med}(K)=\{p\in \mathbb{R}^k: \text{ there exist $x,y\in K$ with $x\neq y$ and $d(p,K)=d(p,x)=d(p,y)$\}}. \end{equation*} \end{definition} The medial axis has a fairly long history in both pure and applied mathematics; the results of Erd\H{o}s \cite{erdos} appear even before the name ``medial axis'' was coined by Blum \cite{Blum}. A good overview from the pure mathematical perspective can be found in the introduction and references of \cite{hajlasz}. For one thing, it is well-known that the medial axis of any closed set $K$ has measure zero. A short proof of this fact can be given by applying Rademacher's theorem on the differentiability of Lipschitz functions to the Lipschitz function $x\mapsto d(x,K)$. See \cite{mathoverflow} or \cite[Remark 13]{hajlasz} for details. In fact, much stronger results than this can be proven on the smallness of the medial axis: see \cite{erdos, Fremlin}. In this paper, we prove that the medial axis is small from a ``coarse'' or ``quantitative'' perspective. In essence, our main result (Theorem \ref{thm:main} below) says that given a compact set $K\subseteq\mathbb{R}^k$, the set of locations and scales in the complement of $K$ that ``appear'' to be in the medial axis is ``small'', with a control which is independent of the set $K$. The words ``small'' and ``appear'' need some further elaboration.
To measure the size of a collection of locations and scales in $\mathbb{R}^k$, we use the notion of a Carleson set: \begin{definition}\label{c-set r^k intro} Let $D\subseteq \mathbb{R}^k\times \mathbb{R}^+$ be measurable. Let $D_r=\{x:(x,r)\in D\}.$ We say that $D$ is a \textit{Carleson set} if there is a $C\geq 0$ such that for every $L>0$ and every ball $B\subseteq \mathbb{R}^k$ of radius $L$, $$\int_0^L|D_r\cap B|\frac{dr}{r}\leq C|B|,$$ where $|\cdot|$ denotes the Lebesgue measure of a set in $\mathbb{R}^k$. We call the minimal $C$ for which this is satisfied the \textit{Carleson constant} of $D$. \end{definition} This definition plays a major role in the area of quantitative geometric measure theory developed by David and Semmes \cite{davidsemmes}. Roughly speaking, if one thinks of $D\subseteq \mathbb{R}^k\times \mathbb{R}^+$ as a collection of balls in $\mathbb{R}^k$ (centers and radii), the Carleson condition says that this is a ``small'' collection: most points of $\mathbb{R}^k$ are not contained in too many balls of $D$ of very different radii. In particular, if $D$ is Carleson then every ball in $\mathbb{R}^k$ contains a ball of comparable size lying outside the collection $D$. A nice discussion of this concept is given by Semmes in \cite[B.29]{gromov}. Our main theorem considers the set of all balls $B(x,r)$ in the complement of a given set $K\subseteq\mathbb{R}^k$. Some of these balls may have the property that there are two points $z_1$ and $z_2$ in $K$ such that \begin{itemize} \item $z_1$ and $z_2$ both ``almost'' minimize the distance to $x$ in $K$, i.e., $$ d(x,z_i) \leq d(x,K) + \epsilon r $$ for some small $\epsilon>0$, and \item the angle between the segments $[x,z_1]$ and $[x,z_2]$ is ``large'', i.e., bounded away from zero in some quantitative way. \end{itemize} We think of a ball satisfying these conditions as representing a point that ``appears'' to be in the medial axis if one looks with only some finite degree of resolution. 
Our main theorem says that such balls are rare: they form a Carleson set. Moreover, the Carleson constant $C$ is completely independent of the set $K$, depending only on the dimension and the chosen parameters. \begin{theorem}\label{thm:main} Let $K\subseteq \mathbb{R}^k$ and let $\epsilon\geq 0$ and $\delta>0$ satisfy $2\delta+\epsilon<1$. Let \begin{align*} G=& \{(x,r):x\in \mathbb{R}^k, 0<r<d(x,K),\text{ and there exist }z_1,z_2\in K \text{ such that }\\ &d(x,z_1),d(x,z_2)\leq d(x,K)+\epsilon r\\& \text{ but }|\theta|>\cos^{-1}\left(2\left(\frac{1-(2\delta +\epsilon)}{1+2\delta}\right)^2-1\right)\} \end{align*} where $\theta$ is the angle between $[x,z_1]$ and $[x,z_2]$. Then $G$ is a Carleson set whose Carleson constant can be bounded above depending only on $\delta$ and $k$. \end{theorem} Note that the quantity $\cos^{-1}\left(2\left(\frac{1-(2\delta +\epsilon)}{1+2\delta}\right)^2-1\right)$ is positive if $\delta>0$ and tends to $0$ as $\delta,\epsilon\rightarrow 0$. The theorem is already of interest in the case $\epsilon=0$, in which case it more directly concerns the medial axis of $K$. Thus, Theorem \ref{thm:main} gives a precise sense in which every medial axis is seen only at a small collection of locations and scales, in a way which is robust and independent of the base set $K$. The proof of Theorem \ref{thm:main} uses the philosophy of the proof mentioned above that the medial axis has measure zero, which applies Rademacher's theorem to the distance function to $K$ \cite[Remark 13]{hajlasz}. However, instead of using Rademacher's theorem, which provides only infinitesimal and not ``coarse'' information, we use a result from the theory of ``quantitative differentiation'', explained in Section \ref{sec:background}. In Section \ref{sec:proof} we apply this theorem along with some quantitative estimates on the distance function to prove Theorem \ref{thm:main}. 
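For a finite set $K$, membership of a ball $B(x,r)$ in the set $G$ of Theorem \ref{thm:main} can be tested directly. The following is an illustrative sketch assuming NumPy; the function name and the two-point example are ours:

```python
import numpy as np

def appears_on_medial_axis(x, r, K, eps, delta):
    """Test whether B(x, r) belongs to the set G of the main theorem for a
    finite set K: do two (d(x,K) + eps*r)-almost-closest points of K
    subtend an angle at x exceeding the theorem's threshold?"""
    x, K = np.asarray(x, float), np.asarray(K, float)
    dists = np.linalg.norm(K - x, axis=1)
    d = dists.min()
    assert 0 < r < d, "B(x, r) must lie in the complement of K"
    near = K[dists <= d + eps * r]                 # almost-closest points
    dirs = (near - x) / np.linalg.norm(near - x, axis=1, keepdims=True)
    angles = np.arccos(np.clip(dirs @ dirs.T, -1.0, 1.0))
    threshold = np.arccos(2 * ((1 - (2 * delta + eps)) / (1 + 2 * delta)) ** 2 - 1)
    return angles.max() > threshold

K = [(-1.0, 0.0), (1.0, 0.0)]
# The origin sees the two points of K in opposite directions (angle pi),
# so the ball is flagged as "appearing" to be on the medial axis.
near_flag = appears_on_medial_axis((0.0, 0.0), 0.5, K, eps=0.1, delta=0.1)  # True
# (0, 5) is also equidistant from both points, but they subtend a small
# angle there, so at this resolution the ball is not flagged.
far_flag = appears_on_medial_axis((0.0, 5.0), 0.5, K, eps=0.1, delta=0.1)   # False
```

Note that $(0,5)$ lies on the exact medial axis of this $K$; the example illustrates how the angle condition discards locations and scales at which the medial axis is invisible at the given resolution.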
To conclude the introduction, we emphasize that Theorem \ref{thm:main} is not implied by the fact that the medial axis has measure zero, nor even by the stronger results of \cite{erdos,Fremlin} mentioned earlier, because those results make no statements about the ``large scale'' structure of the medial axis. \section{Background on quantitative differentiation}\label{sec:background} The theory of ``quantitative differentiation'' considers approximation by Lipschitz functions at ``large'' scales, not just infinitesimal ones. It originates in work of Dorronsoro \cite{dorronsoro} and Jones \cite{Jones}, with extensions by many others since. Good overviews of the material we need can be found in Appendix B by Semmes in \cite{gromov} (especially Section B.29), the notes of Young \cite{young}, or the master's thesis of the second named author \cite{hook}. With the dimension $k$ understood from context, let $B(x,r)$ denote the closed ball of radius $r$ centered at $x\in\mathbb{R}^k$. We use $\mathbb{R}^+$ below to denote the positive real numbers. \begin{definition}\label{ ecd r^k} Let $f:\mathbb{R}^k\rightarrow \mathbb{R} \text{ and } \epsilon >0$. We say that $f$ is $\epsilon$-coarsely-differentiable on $B(x,r)$ if there is an affine function $\lambda:\mathbb{R}^k\rightarrow \mathbb{R}$ such that $$|f(p)-\lambda(p)|\leq \epsilon r \text { for all } p\in B(x,r).$$ \end{definition} The main result we need is that every $1$-Lipschitz function is coarsely differentiable on all balls outside of a Carleson set. \begin{theorem}\label{final qd intro} Let $\epsilon>0$. Let $f:\mathbb{R}^k\rightarrow \mathbb{R}$ be 1-Lipschitz. Let $$G=\{(x,r)\in\mathbb{R}^k\times\mathbb{R}^+: f \text { is not $\epsilon$-coarsely differentiable on } B(x,r)\}.$$ Then $G$ is Carleson, and its Carleson constant can be bounded above depending only on $\epsilon$ and $k$. \end{theorem} As Semmes remarks in \cite[B.29]{gromov}, it is difficult to trace the attribution of this precise result. 
It follows from the main results of \cite{dorronsoro}, as discussed in \cite{davidsemmes}. It is stated explicitly as \cite[Theorem B.29.10]{gromov} and as \cite[Theorem 2.4]{young}. An exposition of the proof (following that of Young \cite{young}) is given in \cite{hook}, and a generalization to metric space targets is given in \cite{AzzamSchul}. Further generalizations and analogs are discussed in \cite{cheeger}. \section{Proof of the main theorem}\label{sec:proof} The main work in the proof of Theorem \ref{thm:main} lies in the following result. \begin{theorem}\label{ z_1,z_2 close together e} Let $K\subseteq \mathbb{R}^k$ be compact. Let $f:\mathbb{R}^k\rightarrow \mathbb{R}$ be $f(x)=d(x,K)$. Let $\epsilon\geq 0$ and $\delta>0$ be such that $2\delta+\epsilon<1$. Let $x\in \mathbb{R}^k$ be such that there are $z_1,z_2\in K$, where \begin{equation}\label{d(x,z_1)< d(x,K)+er} d(x,z_1)\leq d(x,K)+\epsilon r, \end{equation} $$d(x,z_2)\leq d(x,K)+\epsilon r, $$ and $0<r<d(x,K)$. Suppose that $f$ is $\delta$-coarsely differentiable on $B(x,r)$. Let $\theta$ be the angle between $[x,z_1]$ and $[x,z_2]$. Then $$|\theta|\leq \cos^{-1}\left(2\left(\frac{1-(2\delta+\epsilon)}{1+2\delta}\right)^2-1\right).$$ \end{theorem} This theorem says that if $f$ is $\delta$-coarsely differentiable on some ball $B(x,r)$, then the set of ``almost-closest'' points to $x$ in $K$ must occupy a small angle as seen from $x$. To prove this, we first need a few lemmas. \begin{lemma}\label{f(x)=d(x,K)} Let $K\subseteq\mathbb{R}^k$. Let $f:\mathbb{R}^k\rightarrow \mathbb{R}$ be $f(x)=d(x,K)$. Then $f$ is 1-Lipschitz. \end{lemma} \begin{proof} This is a well-known consequence of the triangle inequality, so we omit the simple proof. \end{proof} \begin{lemma} \label{ extended f(p)-f(y)} Let $K\subseteq\mathbb{R}^k$ and $f:\mathbb{R}^k\rightarrow \mathbb{R}$ be $f(x)=d(x,K)$. 
Suppose that $x\in \mathbb{R}^k$, $z\in K$, $\epsilon\geq 0$, and $r>0$ satisfy $$d(x,z)\leq d(x,K)+\epsilon r.$$ Let $y\in [x,z]$. Then $$f(x)-f(y)\geq d(x,y)-\epsilon r.$$ \end{lemma} \begin{proof} It suffices to show that $$f(y)\leq f(x)+\epsilon r -d(x,y).$$ As $x,y,z\in [x,z]$, $$d(x,z)=d(x,y)+d(y,z),$$ and so $$d(y,z)=d(x,z)-d(x,y).$$ Thus, \begin{align*} f(y)&=d(y,K) \\&\leq d(y,z) \\&=d(x,z)-d(x,y) \\&\leq d(x,K)+\epsilon r -d(x,y) \\&= f(x)+\epsilon r -d(x,y). \end{align*} \end{proof} \begin{lemma}\label{1+2e} Let $f\colon\mathbb{R}^k\rightarrow\mathbb{R}$ be $1$-Lipschitz. Assume that there is a ball $B(x,r)$ and an affine function $A\colon \mathbb{R}^k\rightarrow\mathbb{R}$ such that \begin{equation}\label{f(t)-A(t)} |f(t)-A(t)|\leq \delta r \text{ for all } t\in B(x,r). \end{equation} Write \begin{equation}\label{A(t)=L(t)+C)} A(t)=L(t)+C, \end{equation} where $L$ is linear and $C\in\mathbb{R}$. Then for all vectors $v\in \mathbb{R}^k$, $$|L(v)|\leq (1+2\delta)||v||.$$ \end{lemma} \begin{proof} It suffices to prove this for all unit vectors $v\in \mathbb{R}^k$; the case $v=0$ is trivial. To see why, assume the statement holds for all unit vectors and let $v\in \mathbb{R}^k$, $v\neq 0$. Then, as $L$ is linear, $$\frac{|L(v)|}{||v||}=\left|L\left(\frac{v}{||v||}\right)\right|\leq 1+2\delta, $$ and thus $$|L(v)|\leq (1+2\delta)||v||.$$ So let $v\in \mathbb{R}^k$ be a unit vector. By the triangle inequality, \eqref{f(t)-A(t)}, and the fact that $f$ is $1$-Lipschitz, \begin{align*} |A(x)-A(x+rv)|&\leq |A(x)-f(x)|+|f(x)-f(x+rv)|+|f(x+rv)-A(x+rv)| \\& \leq \delta r + r+\delta r \\&=r(1+2\delta ). \end{align*} But as $L$ is linear \begin{align*} |A(x)-A(x+rv)|&=|L(x)+C-(L(x+rv)+C)| \\&=|L(x)-L(x+rv)| \\&=|L(rv)| \\&=r|L(v)|. \end{align*} Thus $$r|L(v)|\leq r(1+2\delta)$$ so $$|L(v)|\leq 1+2\delta. $$ \end{proof} \begin{proof}[Proof of Theorem \ref{ z_1,z_2 close together e}] Let $$D=\{p\in \mathbb{R}^k:d(p,x)=r\}.$$ Let $y_1\in D\cap [x,z_1]$ and $y_2\in D\cap [x,z_2]$. Note that these points exist because $$ d(x, z_i) \geq d(x,K) > r$$ by assumption.
Denote the vectors from $y_i$ to $x$ by $$w_1=x-y_1\text{ and } w_2=x-y_2.$$ We wish to find bounds for $$ \left |L\left(\frac{w_1+w_2}{2}\right)\right|\text{ and } \left|\left|\frac{w_1+w_2}{2}\right|\right|.$$ By \eqref{d(x,z_1)< d(x,K)+er} and as $y_1\in [x,z_1]$, we can say by Lemma \ref{ extended f(p)-f(y)} that \begin{align*} f(x)-f(y_1)&\geq d(x,y_1)-\epsilon r \\& =r-\epsilon r\\&=r(1-\epsilon )\\&>0. \end{align*} Thus \begin{equation}\label{ub bound f(y_1) e} f(y_1)\leq f(x)-r(1-\epsilon). \end{equation} Similarly, $$f(y_2)\leq f(x)-r(1-\epsilon).$$ As $f$ is 1-Lipschitz and by our work from above, \begin{align*} f(x)-f(y_1)&= |f(x)-f(y_1)| \\& \leq d(x,y_1) \\& =r. \end{align*} So \begin{equation}\label {lb f(y_1) e} f(y_1)\geq f(x)-r. \end{equation} Similarly, $$f(y_2)\geq f(x)-r.$$ Let $A$ and $L$ be the same functions from Lemma \ref{1+2e}. By \eqref{f(t)-A(t)}, $$f(y_1)-\delta r\leq A(y_1)\leq f(y_1)+\delta r.$$ By \eqref{ub bound f(y_1) e}, and \eqref{lb f(y_1) e}, $$f(x)-r-\delta r\leq A(y_1)\leq f(x)-r(1-\epsilon)+\delta r.$$ By \eqref{A(t)=L(t)+C)}, \begin{equation}\label{u/l bound for L(y_1)} f(x)-r-\delta r -C\leq L(y_1)\leq f(x)-r+r(\delta +\epsilon)-C. \end{equation} Similarly $$f(x)-r-\delta r -C\leq L(y_2)\leq f(x)-r+r(\delta+\epsilon)-C.$$ By \eqref{f(t)-A(t)} and \eqref{A(t)=L(t)+C)} $$-\delta r\leq f(x)-(L(x)+C)\leq \delta r.$$ Thus \begin{equation}\label{bound for L(x)} f(x)-C -\delta r \leq L(x)\leq f(x)-C+\delta r. \end{equation} Thus as $L$ is linear and by \eqref{u/l bound for L(y_1)}, \eqref{bound for L(x)} \begin{align*} L(w_1)&=L(x-y_1) \\&=L(x)-L(y_1) \\&\geq f(x)-C-\delta r -(f(x)-r+r(\epsilon+\delta)-C) \\& =-2\delta r -\epsilon r +r \\&= r(1-(2\delta +\epsilon)). \end{align*} Similarly $$L(w_2)\geq r(1-(2\delta+\epsilon)).$$ Thus as $L$ is linear \begin{equation}\label{lb for (w_1+w_2)/2 e} \left|L\left(\frac{w_1+w_2}{2}\right)\right|\geq L\left(\frac{w_1+w_2}{2}\right)=\frac{L(w_1)+L(w_2)}{2}\geq r(1-(2\delta+\epsilon)). 
\end{equation} Now, to obtain a bound for $\left|\left|\frac{w_1+w_2}{2}\right|\right|^2$, note first that $||w_1||=||w_2||=r$. Thus, \begin{align*} \left|\left|\frac{w_1+w_2}{2}\right|\right|^2&=\frac{1}{4}\left|\left|w_1+w_2\right|\right|^2 \\&=\frac{1}{4}((w_1+w_2)\cdot (w_1+w_2)) \\&=\frac{1}{4}(||w_1||^2+2(w_1\cdot w_2)+||w_2||^2) \\&=\frac{1}{4}(r^2+2(w_1\cdot w_2)+r^2) \\&=\frac{r^2}{2}+\frac{1}{2}(w_1\cdot w_2) \\&=\frac{r^2}{2}+\frac{1}{2}||w_1|| ||w_2||\cos(\theta) \\&=\frac{r^2}{2}+\frac{r^2}{2}\cos(\theta) \\&=r^2\left(\frac{1+\cos(\theta)}{2}\right). \end{align*} Thus, \begin{equation}\label{length of (w_1+w_2)/2 e} \left|\left|\frac{w_1+w_2}{2}\right|\right|=r\sqrt{\frac{1+\cos(\theta)}{2}}. \end{equation} Thus, by \eqref{lb for (w_1+w_2)/2 e}, \eqref{length of (w_1+w_2)/2 e} and Lemma \ref{1+2e}, \begin{align*} r(1-(2\delta+\epsilon))&\leq \left|L\left(\frac{w_1+w_2}{2}\right)\right| \\&\leq (1+2\delta)\left|\left|\frac{w_1+w_2}{2}\right|\right| \\&\leq r(1+2\delta)\sqrt{\frac{1+\cos(\theta)}{2}}. \end{align*} Thus, $$(1-(2\delta+\epsilon))\leq (1+2\delta)\sqrt{\frac{1+\cos(\theta)}{2}}.$$ Now solving for $\theta$ $$\left(2\left(\frac{1-(2\delta+\epsilon)}{1+2\delta}\right)^2-1\right)\leq \cos(\theta),$$ i.e., $$|\theta|\leq \cos^{-1}\left(2\left(\frac{1-(2\delta +\epsilon)}{1+2\delta}\right)^2-1\right).$$ \end{proof} Now Theorem \ref{thm:main} follows directly from Theorem \ref{final qd intro} and Theorem \ref{ z_1,z_2 close together e}. \begin{proof}[Proof of Theorem \ref{thm:main}] Let $f:\mathbb{R}^k\rightarrow \mathbb{R}$ be $f(x)=d(x,K)$. The measurability of the set $G\subseteq \mathbb{R}^k\times\mathbb{R}^+$ in Theorem \ref{thm:main} follows by standard arguments. 
Briefly, given $t>0$, let \begin{align*} G_t=&\{(x,r):x\in \mathbb{R}^k, 0<r<d(x,K),\text{ and there exist }z_1,z_2\in K \text{ such that }\\ &d(x,z_1),d(x,z_2)< d(x,K)+\epsilon r+t\\& \text{ but }|\theta|>\cos^{-1}\left(2\left(\frac{1-(2\delta +\epsilon)}{1+2\delta}\right)^2-1\right)\}, \end{align*} where $\theta$ again denotes the angle between $[x,z_1]$ and $[x,z_2]$. Then each $G_t$ is open in $\mathbb{R}^k\times\mathbb{R}^+$ and $$ G = \bigcap_{n=1}^\infty G_{1/n},$$ so $G$ is measurable. To see the Carleson condition, it follows from Theorem \ref{ z_1,z_2 close together e} that if $(x,r)\in G$ then $f$ is not $\delta$-coarsely differentiable on $B(x,r)$. Let $H$ denote the collection of $(x,r)$ such that $f$ is not $\delta$-coarsely differentiable on $B(x,r)$. Then $G\subseteq H$. By Lemma \ref{f(x)=d(x,K)}, $f$ is 1-Lipschitz. By Theorem \ref{final qd intro}, $H$ is Carleson, with Carleson constant bounded above depending only on $\delta$ and $k$. Thus, $G$ is Carleson with Carleson constant bounded above by that of $H$. \end{proof} \printbibliography \end{document}
https://arxiv.org/abs/1409.2214
Approximation of eigenvalues of spot cross volatility matrix with a view toward principal component analysis
In order to study the geometry of interest rates market dynamics, Malliavin, Mancino and Recchioni [A non-parametric calibration of the HJM geometry: an application of Itô calculus to financial statistics, {\it Japanese Journal of Mathematics}, 2, pp.55--77, 2007] introduced a scheme, which is based on the Fourier Series method, to estimate eigenvalues of a spot cross volatility matrix. In this paper, we present another estimation scheme based on the Quadratic Variation method. We first establish limit theorems for each scheme and then we use a stochastic volatility model of Heston's type to compare the effectiveness of these two schemes.
\section{Introduction} Let $X$ be a $d$-dimensional stochastic process defined on a probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), P)$ by \begin{equation} \label{eqX} d X (t) = \bA (t, w)dt + \bB (t,w)dW(t), \ 0 \leq t \leq T, \end{equation} where $W$ is a $d_1$-dimensional standard Brownian motion, $\bA$ is a $d$-dimensional drift process and $\bB$ is an $\mathbb{R}^{d\times d_1}$-valued c\`adl\`ag volatility process. In mathematical finance it is widely accepted that processes $X$ of the form defined by (\ref{eqX}) are reasonable models for the (log return of) price processes and interest rates. The spot cross volatility matrix $\Sigma = (\Sigma_{i,j})_{1\leq i, j \leq d}$ of process $X$ is defined by $$\Sigma_{i,j}(t) = \sum_{k=1}^{d_1} \bB_{i,k}(t)\bB_{j,k}(t), \quad 0 \leq t \leq T.$$ We are interested in the following problem: given a finite set of observation data $\{X(t_k, \omega_0): \ t_k = kT/n, k = 0,\ldots, n\}$ of a single trajectory $\omega_0 \in \Omega$, we want to estimate the eigenvalues of $\Sigma(t, \omega_0)$ for any $t \in [0,T]$. This problem appears in mathematical finance, especially in principal component analysis (see \cite{AL2011, L2010, MMR2007}). The estimation of the eigenvalues of the integrated volatility matrix was studied by Wang and Zou (\cite{WZ2010}) (see also the references therein). By the time we completed this paper, we learnt that Jacod and Podolskij \cite{JP2012} had previously introduced some statistics, based on a random perturbation approach, for the rank of the volatility matrix of continuous It\^o processes. Our approach differs from that in \cite{JP2012} and can be applied to It\^o processes with jump components. Our method to solve this problem is as follows: first we approximate the spot cross volatility matrix $\Sigma$ by a matrix $\hat{\Sigma}$ using the given observations of $X$; next we approximate the eigenvalues of $\Sigma$ by those of $\hat{\Sigma}$.
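The second step of this method is numerically harmless: eigenvalues of symmetric matrices depend $1$-Lipschitz-continuously on the matrix in operator norm (Weyl's inequality), so a good estimate $\hat{\Sigma}$ yields comparably good eigenvalue estimates. A minimal NumPy sketch (the function name and the toy example are ours):

```python
import numpy as np

def sorted_eigenvalues(S):
    """Eigenvalues of a symmetric matrix, in decreasing order."""
    S = 0.5 * (S + S.T)  # symmetrize, in case the estimate is only nearly symmetric
    return np.linalg.eigvalsh(S)[::-1]

# Weyl's inequality: eigenvalues move by at most the operator norm of
# the perturbation, so estimating Sigma well gives good eigenvalues.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 3))
Sigma = B @ B.T                       # a spot cross volatility matrix
E = 1e-3 * rng.standard_normal((5, 5))
Sigma_hat = Sigma + 0.5 * (E + E.T)   # noisy symmetric estimate
err = np.abs(sorted_eigenvalues(Sigma_hat) - sorted_eigenvalues(Sigma)).max()
print(err <= np.linalg.norm(0.5 * (E + E.T), 2))  # True, by Weyl's inequality
```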
Spot volatility estimation is an important problem in mathematical finance and has been extensively studied by many authors. Up to now, there have been two main approaches to this problem. The first approach, called the Fourier Series method, was introduced in \cite{MM2002} and later developed in \cite{MMR2007, MM2009}. The second approach, called the Quadratic Variation method, was introduced in \cite{OW2007} and later developed in \cite{O2008, ON2010, NO2009} (see also \cite{APPS2012, JR2012}). It should be noted that there is a very rich literature on the problem of measuring the so-called realized volatility, as well as on the problem of estimating parameters of diffusion processes; see \cite{Rao1999, BS2004, AsH2010, BHL2012} and the references therein. In this paper, we present some limit theorems and a numerical study to analyze the effectiveness of the estimation of eigenvalues by the Fourier Series and Quadratic Variation methods. It should be mentioned that in reality, one cannot directly observe either the cross volatility matrix or its eigenvalues. Therefore we perform a numerical study with dummy data for which we know both the volatility matrix and its eigenvalues beforehand. In particular, we show that the Fourier Series method may lead to some unexpected results when estimating small eigenvalues; this situation would never arise using the Quadratic Variation method. \subsection*{Acknowledgment} The authors thank Jiro Akahori, Freddy Delbaen, Arturo Kohatsu-Higa, Maria Elvira Mancino, and Shigeyoshi Ogawa for their helpful comments. The authors are also grateful to the referee for her/his valuable comments which led to improvement of the paper. \section{The first Fourier Series estimation scheme}\label{FS} In a series of papers \cite{MM2002, MMR2007, MM2009}, Malliavin et al. introduced a number of Fourier Series estimation schemes for spot volatilities. Although these schemes are essentially based on the same idea, they are slightly different.
As we will present later, each scheme has both advantages and disadvantages compared to the other. In this section, we summarize the Fourier Series method presented in \cite{MM2002}. By a change of origin and rescaling, one can suppose that $T = 2\pi$ and the Fourier Series method reconstructs $\Sigma_.(t)$ for all $t \in (0,2\pi)$. Let us denote the Fourier coefficients of $dX_j, \ j = 1,\ldots, d,$ by \begin{align*} a_k(dX_j) = \frac{1}{\pi} \int_{(0,2\pi)}\cos(kt)dX_j(t), b_k(dX_j) = \frac{1}{\pi} \int_{(0,2\pi)}\sin(kt)dX_j(t). \end{align*} The Fourier coefficients of each cross volatility $\Sigma_{u,v},\ 1 \leq u, v \leq d,$ are defined by \begin{align*} a_k(\Sigma_{u,v}) = \frac{1}{\pi} \int_{(0,2\pi)}\cos(kt)\Sigma_{u,v}(t)dt, b_k(\Sigma_{u,v}) = \frac{1}{\pi}\int_{(0,2\pi)} \sin(kt)\Sigma_{u,v}(t)dt. \end{align*} It follows from the Fourier-F\'ejer inversion formula that one can reconstruct $\Sigma$ from its Fourier coefficients by \begin{align*} \Sigma_{u,v}(t) = \lim_{N \to \infty} \sum_{k=0}^N \big( 1 - \frac{k}{N}\big) \big(a_k(\Sigma_{u,v})\cos(kt) + b_k(\Sigma_{u,v})\sin(kt) \big). \end{align*} In practice, based on the observation of $X$ at times $t_i = 2\pi i/n, \ i = 0,\ldots, n$, one can approximate $\Sigma$ as follows. We fix some positive integer $N$. \begin{enumerate} \item Fourier coefficients $a_k(dX_j), \ b_k(dX_j), \ k = 0,\ldots, 2N,$ are approximated by \begin{align*} \hat{a}_k(dX_j) &= \frac{1}{\pi} \sum_{i=1}^n \big(\cos( kt_{i-1}) - \cos(kt_i) \big)X_j(t_{i-1}) + \frac{1}{\pi} \big( X_j(t_n) - X_j(t_0) \big),\\ \hat{b}_k(dX_j) &= \frac{1}{\pi} \sum_{i=1}^n \big(\sin( kt_{i-1}) - \sin(kt_i) \big)X_j(t_{i-1}). 
\end{align*} \item Fourier coefficients of each cross volatility $\Sigma_{u,v}, \ 1 \leq u, v \leq d,$ are approximated by \begin{align*} \hat{a}_0(\Sigma_{u,v}) & = \frac{\pi}{2(N + 1 - n_0)} \sum_{s = n_0}^{N} \big( \hat{a}_s(dX_u) \hat{a}_s(dX_v) + \hat{b}_s(dX_u) \hat{b}_s(dX_v)\big),\\ \hat{a}_k(\Sigma_{u,v}) &= \frac{\pi}{N + 1 - n_0} \sum_{s = n_0}^{N} \big( \hat{a}_s(dX_u) \hat{a}_{s+k}(dX_v) + \hat{a}_s(dX_v) \hat{a}_{s+k}(dX_u)\big), \\ \hat{b}_k(\Sigma_{u,v}) &= \frac{\pi}{N + 1 - n_0} \sum_{s = n_0}^{N} \big( \hat{a}_s(dX_u) \hat{b}_{s+k}(dX_v) + \hat{a}_s(dX_v) \hat{b}_{s+k}(dX_u)\big), \end{align*} for each $k = 0,\ldots, N$. \item The volatilities $\Sigma_{u,v}(t)$ are approximated by \begin{equation} \label{def_hatSm} \hat{\Sigma}^{N,n}_{u,v}(t) = \sum_{k=0}^N \big( 1 - \frac{k}{N}\big)\big(\hat{a}_k(\Sigma_{u,v})\cos(kt) + \hat{b}_k(\Sigma_{u,v})\sin(kt) \big). \end{equation} \end{enumerate} Sometimes, it is preferable to smooth the F\'ejer kernel in (\ref{def_hatSm}) by replacing $(1-k/N)$ with $\sin^2(\delta k)/(\delta k)^2$ for some appropriate parameter $\delta >0$. \begin{remark} It should be noted here that although the matrix $\hat{\Sigma}^{N,n}(t)$ is symmetric, it is not non-negative definite in general. Therefore some of its eigenvalues may be negative, which is not expected in practice. \end{remark} \section{The second Fourier Series estimation scheme}\label{FS2} In \cite{MM2009}, the authors introduced another version of the Fourier Series estimation scheme. Their new scheme was designed to deal with asynchronous data. In the following, we will specialize it to the case of regular sampling. We define $\de^j_i := X^{j}(t_{i+1}) - X^{j}(t_{i}), \quad j = 1, \ldots, d.$ For any integer $k$, $|k| \leq 2N$, let \begin{align*} c^j_{k} := \frac{1}{2\pi}\sum^{n-1}_{i=0} e^{-\im kt_{i}}\de^j_i, \quad j = 1, \ldots, d.
\end{align*} For each $1 \leq j_1 \leq j_2 \leq d,$ let $\al_{k}(N, j_1, j_2)$ for $|k| \leq N$ be given by \begin{align*} \al_{k}(N, j_1, j_2) := \frac{2\pi}{2N+1}\sum_{|s| \leq N} c^{j_1}_{s}c^{j_2}_{k-s}. \end{align*} Finally, define \begin{align*} \Sigma^{j_1 j_2}_{n, N}(t) &:= \sum_{|k| \leq N} \Big(1-\frac{|k|}{N}\Big)\al_{k}(N, j_1, j_2)e^{\im kt}\\ &= \al_{0}(N, j_1, j_2) + \sum^{N}_{k=1} \Big(1-\frac{k}{N}\Big)\Big(\al_{k}(N, j_1, j_2)e^{\im kt}+\al_{-k}(N, j_1, j_2)e^{-\im kt}\Big). \end{align*} Since the above estimator is written with complex numbers, it may be inconvenient for simulation. Therefore we rewrite it as follows: \begin{equation} \Sigma^{j_1 j_2}_{n, N}(t) := \al_{0}(N, j_1, j_2) + \sum_{k=1}^N \Big(1 -\frac{k}{N}\Big)\Big(a^{j_1 j_2}_k\cos(kt)+b^{j_1 j_2}_k\sin(kt)\Big), \label{Gammaold} \end{equation} where \begin{align*} a^{j_1 j_2}_k &= \frac{1}{\pi(2N+1)}\Big[ \sum^{N}_{s=1} \Big\{\hat{a}_{s}(dX^{j_1})\hat{a}_{k-s}(dX^{j_2}) - \hat{b}_{s}(dX^{j_1})\hat{b}_{k-s}(dX^{j_2})\\ &\hskip 1cm + \hat{a}_{s}(dX^{j_1})\hat{a}_{k+s}(dX^{j_2}) + \hat{b}_{s}(dX^{j_1})\hat{b}_{k+s}(dX^{j_2})\Big\} + \hat{a}_{k}(dX^{j_2})\Big(X^{j_1}(2\pi) - X^{j_1}(0)\Big)\Big],\\ b^{j_1 j_2}_k &= \frac{1}{\pi(2N+1)}\Big[ \sum^{N}_{s=1} \Big\{\hat{a}_{s}(dX^{j_1})\hat{b}_{k-s}(dX^{j_2}) + \hat{b}_{s}(dX^{j_1})\hat{a}_{k-s}(dX^{j_2})\\ &\hskip 1cm + \hat{a}_{s}(dX^{j_1})\hat{b}_{k+s}(dX^{j_2}) - \hat{b}_{s}(dX^{j_1})\hat{a}_{k+s}(dX^{j_2})\Big\} + \hat{b}_{k}(dX^{j_2})\Big(X^{j_1}(2\pi) - X^{j_1}(0)\Big)\Big], \end{align*} and \begin{align*} &\hat{a}_{s}(dX^{j_1}) = \sum_{i}\cos(st_{i})\de^{j_1}_{i}, \quad \hat{a}_{s}(dX^{j_2}) = \sum_{j}\cos(st_{j})\de^{j_2}_{j},\\ &\hat{b}_{s}(dX^{j_1}) = \sum_{i}\sin(st_{i})\de^{j_1}_{i}, \quad \hat{b}_{s}(dX^{j_2}) = \sum_{j}\sin(st_{j})\de^{j_2}_{j}, \end{align*} and \begin{align*} \al_{0}(N, j_1, j_2) &= \frac{1}{2\pi(2N+1)}\Big[\sum^{N}_{s=1} 2\Big\{\hat{a}_{s}(dX^{j_1})\hat{a}_{s}(dX^{j_2}) +
\hat{b}_{s}(dX^{j_1})\hat{b}_{s}(dX^{j_2})\Big\}\\ &\hskip 4cm + \Big(X^{j_2}(2\pi) - X^{j_2}(0)\Big)\Big(X^{j_1}(2\pi) - X^{j_1}(0)\Big)\Big]. \end{align*} \begin{remark} Since the matrix $\Sigma^{j_1 j_2}_{n, N}(t)$ is not symmetric, the eigenvalues may not be real numbers. In order to overcome this drawback, we propose two symmetrization methods as follows. \end{remark} \subsection{The first symmetrization} A naive idea to symmetrize the covariance matrix is that one first calculates $\Sigma^{j_1 j_2}_{n, N}$ using formula (\ref{Gammaold}) for all $1\leq j_1 \leq j_2 \leq d,$ and then puts $\Sigma^{j_2 j_1}_{n, N}:= \Sigma^{j_1 j_2}_{n, N}$. \subsection{The second symmetrization} Another way to symmetrize the covariance matrix is as follows: denote \begin{align*} \al_{k}(N, j_1, j_2) := \frac{\pi}{2N+1}\sum_{|s| \leq N} \big(c^{j_1}_{s}c^{j_2}_{k-s} + c^{j_2}_{s}c^{j_1}_{k-s}\big), \end{align*} for any $|k| \leq N$ and define \begin{align*} \Sigma^{j_1 j_2}_{n, N}(t) &:= \sum_{|k| \leq N} \Big(1-\frac{|k|}{N}\Big)\al_{k}(N, j_1, j_2)e^{\im kt}\\ &= \al_{0}(N, j_1, j_2) + \sum^{N}_{k=1} \Big(1-\frac{k}{N}\Big)\Big(\al_{k}(N, j_1, j_2)e^{\im kt}+\al_{-k}(N, j_1, j_2)e^{-\im kt}\Big).
\end{align*} To simplify the simulation, we rewrite $\Sigma^{j_1 j_2}_{n, N}$ as follows \begin{align*} \Sigma^{j_1 j_2}_{n, N}(t) := \al_{0}(N, j_1, j_2) + \sum_{k=1}^N \Big(1 -\frac{k}{N}\Big)\Big(a^{j_1 j_2}_k\cos(kt)+b^{j_1 j_2}_k\sin(kt)\Big), \end{align*} where \begin{align*} a^{j_1 j_2}_k &= \frac{1}{2\pi(2N+1)}\Big[\sum^{N}_{s=1} \Big\{\hat{a}_{s}(dX^{j_1})\hat{a}_{k-s}(dX^{j_2}) - \hat{b}_{s}(dX^{j_1})\hat{b}_{k-s}(dX^{j_2}) + \hat{a}_{s}(dX^{j_1})\hat{a}_{k+s}(dX^{j_2})\\ &\quad + \hat{b}_{s}(dX^{j_1})\hat{b}_{k+s}(dX^{j_2}) + \hat{a}_{s}(dX^{j_2})\hat{a}_{k-s}(dX^{j_1}) - \hat{b}_{s}(dX^{j_2})\hat{b}_{k-s}(dX^{j_1}) + \hat{a}_{s}(dX^{j_2})\hat{a}_{k+s}(dX^{j_1})\\ &\quad + \hat{b}_{s}(dX^{j_2})\hat{b}_{k+s}(dX^{j_1})\Big\} + \hat{a}_{k}(dX^{j_2})\Big(X^{j_1}(2\pi) - X^{j_1}(0)\Big) + \hat{a}_{k}(dX^{j_1})\Big(X^{j_2}(2\pi) - X^{j_2}(0)\Big)\Big], \end{align*} \begin{align*} b^{j_1 j_2}_k &= \frac{1}{2\pi(2N+1)}\Big[\sum^{N}_{s=1} \Big\{\hat{a}_{s}(dX^{j_1})\hat{b}_{k-s}(dX^{j_2}) + \hat{b}_{s}(dX^{j_1})\hat{a}_{k-s}(dX^{j_2}) + \hat{a}_{s}(dX^{j_1})\hat{b}_{k+s}(dX^{j_2})\\ &\quad - \hat{b}_{s}(dX^{j_1})\hat{a}_{k+s}(dX^{j_2}) + \hat{a}_{s}(dX^{j_2})\hat{b}_{k-s}(dX^{j_1}) + \hat{b}_{s}(dX^{j_2})\hat{a}_{k-s}(dX^{j_1}) + \hat{a}_{s}(dX^{j_2})\hat{b}_{k+s}(dX^{j_1})\\ &\quad - \hat{b}_{s}(dX^{j_2})\hat{a}_{k+s}(dX^{j_1})\Big\} + \hat{b}_{k}(dX^{j_2})\Big(X^{j_1}(2\pi) - X^{j_1}(0)\Big) + \hat{b}_{k}(dX^{j_1})\Big(X^{j_2}(2\pi) - X^{j_2}(0)\Big)\Big], \end{align*} and \begin{align*} \al_{0}(N, j_1, j_2) &= \frac{1}{2\pi(2N+1)}\Big[\sum^{N}_{s=1} 2\Big\{\hat{a}_{s}(dX^{j_1})\hat{a}_{s}(dX^{j_2}) + \hat{b}_{s}(dX^{j_1})\hat{b}_{s}(dX^{j_2})\Big\}\\ &\hskip 4cm + \Big(X^{j_2}(2\pi) - X^{j_2}(0)\Big)\Big(X^{j_1}(2\pi) - X^{j_1}(0)\Big)\Big], \end{align*} and $\hat{a}, \hat{b}$ are defined as before. \begin{remark} For each $t$, the matrix $\big(\Sigma^{j_1 j_2}_{n, N}(t)\big)_{1\leq j_1, j_2\leq d}$ is symmetric, but not necessarily non-negative definite.
\end{remark} \subsection{Limit theorem} Since for each $t$, $\Sigma(t)$ is a symmetric non-negative definite matrix, we denote its eigenvalues by $\lambda_i(t), i = 1,\ldots, n,$ such that $\lambda_1(t) \geq \lambda_2(t) \geq \ldots \geq \lambda_d(t) \geq 0$. We also denote by $\hat{\lambda}^{n}_1(t) \geq \hat{\lambda}^{n}_2(t) \geq \ldots \geq \hat{\lambda}^{n}_d(t)$ the eigenvalues of the symmetric matrix $\Sigma^{j_1 j_2}_{n, N}(t)$ defined by either the first or the second symmetrization. Now we are in a position to state the first main result of this paper. \begin{thm} \label{theoremFS} Assume that $\Sigma(t)$ is continuous and for $i = 1,\ldots, d, j= 1,\ldots, d_1$, and $\frac{N}{n} \to 0$ as $n \to \infty,$ $$\bE\Big( \int_0^{2\pi} \big[ \| \bA_i(t) \|^2 + \|\bB_{i,j}(t)\|^4\big] dt \Big) < \infty.$$ Then the following convergence in probability holds \begin{align*} \lim_{n, N \to \infty} \sup_{0 \leq t \leq 2\pi} \sum_{i=1}^d | \hat{\lambda}^{n}_i(t) - \lambda_i(t)| = 0. \end{align*} \end{thm} \begin{remark} This method has been used by Malliavin et al. in \cite{MMR2007} to estimate the eigenvalues of the covariance matrix of a time series of Euro swap rates and Euribor rates. However, these authors did not provide any discussion on the asymptotic behaviour of the estimators. \end{remark} \section{Quadratic Variation method} We briefly recall the Quadratic Variation method which was proposed in Ogawa and Wakayama \cite{OW2007}. Let $(h_n)$ be a sequence of positive numbers satisfying $\lim_{n \to \infty} h_n = 0$. For each $t \in (0, T), \ 1 \leq u, v \leq d$, we denote \begin{align*} \tilde{\Sigma}_{u,v}^n(t) = \frac{1}{2h_n} \sum_{i:( t-h_n) \leq t_i < t_{i+1} \leq (t+h_n)} \! \! \! (X_u(t_{i+1}) - X_u(t_i)) \times (X_v(t_{i+1}) - X_v(t_i)). 
\end{align*} We suppose that the diffusion coefficient $\bB$ satisfies the following H\"older continuity condition $H(\alpha)$: for some $\alpha \in (0,1]$, there exists a constant $K$ such that for all $s, t \in [0,T]$, \begin{equation} \label{Holder} \bE\|\bB(s) - \bB(t)\|^2 \leq K|s-t|^{2\alpha}. \end{equation} For each $t\in (0,T)$, the approximating matrix $\tilde{\Sigma}^n(t)$ is symmetric and non-negative definite. Hence all of its eigenvalues are non-negative. Let $\tilde{\lambda}^{n}_1(t) \geq \tilde{\lambda}^{n}_2(t) \geq \ldots \geq \tilde{\lambda}^{n}_d(t)$ denote the eigenvalues of $\tilde{\Sigma}^{n}(t)$. Here is the second main result of this paper. \begin{thm} \label{theoremQV1} Assume that assumption $H(\alpha)$ holds for some $\alpha \in (0,1]$ and $\sup_{t\in (0,T)} \bE (\|\bA(t)\|^4 + \|\bB(t)\|^4)< \infty$. Then we have $$ \sup_{t \in (h_n, T-h_n)}\sum_{i=1}^d \bE| \tilde{\lambda}^{n}_i(t) - \lambda_i(t)| \leq M\Big( h_n^{\alpha} + \sqrt{\frac{T}{nh_n}}\Big),$$ for some constant $M$ which does not depend on $n$.\\ In particular, if $h_n = O(n^{-1/(2\alpha+1)})$ then \begin{equation*} \sup_{t \in (h_n, T-h_n)} n^{\frac{\alpha}{2\alpha+1}} \sum_{i=1}^d \bE| \tilde{\lambda}^{n}_i(t) - \lambda_i(t)| \leq M . \end{equation*} \end{thm} In the following, we will study the case where the price process $X$ contains jump components. More precisely, we suppose that $X$ is a $d$-dimensional stochastic process defined by \begin{equation} \label{eqXwJ} d X (t) = \bA (t, w)dt + \bB (t,w)dW(t) + dJ(t), \ 0 \leq t \leq T, \end{equation} where $W, \bA, \bB$ are defined as in Section 1 and $J$ is a $d$-dimensional L\'evy process which may depend on $W$. The Blumenthal-Getoor index $\beta$ of $J$ is defined by $$\beta = \inf \{p \geq 0: \int_{\|x\| \leq 1} \|x\|^p \nu(dx) < \infty\},$$ where $\nu$ is the L\'evy measure of $J$. It is well-known that $\beta \in [0,2]$.
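In the continuous case, the estimator $\tilde{\Sigma}^n(t)$ defined above is simply a short moving window of realized covariance around $t$. The following is a minimal NumPy sketch (the function name and the constant-volatility sanity check are ours):

```python
import numpy as np

def spot_cov_qv(X, T, t, h):
    """Quadratic Variation spot covariance estimate at time t:
    sum of outer products of increments whose interval lies in
    [t - h, t + h], divided by 2h.  X has shape (n+1, d), observed
    at t_i = i*T/n.  (A sketch for the continuous, jump-free case.)"""
    n = X.shape[0] - 1
    times = np.arange(n + 1) * T / n
    dX = np.diff(X, axis=0)                         # increment over [t_i, t_{i+1}]
    mask = (times[:-1] >= t - h) & (times[1:] <= t + h)
    return dX[mask].T @ dX[mask] / (2.0 * h)

# Sanity check with constant volatility: the estimate should be close to B B^T.
rng = np.random.default_rng(2)
T, n = 1.0, 200_000
B = np.array([[1.0, 0.0], [0.5, 0.8]])
dW = rng.standard_normal((n, 2)) * np.sqrt(T / n)
X = np.vstack([np.zeros(2), np.cumsum(dW @ B.T, axis=0)])
Sig = spot_cov_qv(X, T, t=0.5, h=n ** -0.5)
print(np.round(Sig, 2))  # approximately B @ B.T = [[1, 0.5], [0.5, 0.89]]
```

The window half-width $h = n^{-1/2}$ matches the choice $h_n = O(n^{-1/(2\alpha+1)})$ of Theorem \ref{theoremQV1} with $\alpha = 1/2$.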
For each $t \in (0, T), \ 1 \leq u, v \leq d$, we denote \begin{align*} \bar{\Sigma}_{u,v}^n(t) = \frac{\pi}{8h_n} &\sum_i \Big ( |\Dt_i(X_u + X_v) \ \Dt_{i+1}(X_u + X_v)| - |\Dt_i X_u \ \Dt_{i+1}X_u| - |\Dt_i X_v\ \Dt_{i+1}X_v| \Big), \end{align*} where the summation is taken over all indices $i$ such that $( t-h_n) \leq t_{i-1} < t_{i+1} \leq (t+h_n)$ and $\Dt_i X_* = X_*(t_i) - X_*(t_{i-1}).$ Let $\bar{\lambda}^{n}_1(t) \geq \bar{\lambda}^{n}_2(t) \geq \ldots \geq \bar{\lambda}^{n}_d(t)$ denote the eigenvalues of $\bar{\Sigma}^{n}(t)$. We have the following limit theorem. \begin{thm} \label{theoremQV2} Assume that \begin{itemize} \item $H(\alpha)$ holds for some $\alpha \in (0,1],$ \item $\forall q >0, \ \sup_{t\in (0,T)} \bE \|\bA (t)\|^q + \bE \|\bB (t)\|^q < \infty$, \item $\beta < 2$ and $\int_{\|x\| \geq 1} \|x\|^2 \nu(dx) < \infty$. \end{itemize} Then for any $\gamma \in (0, \frac{\alpha}{2\alpha + 1} \wedge \frac{2-\beta}{2\beta})$, there exists a constant $M$ such that $$ \sup_n \sup_{t \in (h_n, T-h_n)} n^\gamma \sum_{i=1}^d \bE| \bar{\lambda}^{n}_i(t) - \lambda_i(t)| \leq M,$$ provided that $h_n = O(n^{-2\gamma})$. \end{thm} \begin{remark} In \cite{ON2010}, the authors introduced another cross volatility estimation scheme for jump diffusion processes, using a threshold parameter to reduce the effect of large jumps. Furthermore, one can combine the threshold method with the bi-power method presented above to produce a more stable estimation (see \cite{N2010}). \end{remark} \begin{remark} By an argument similar to the one above, one can construct estimation schemes for the eigenvalues of the cross volatility matrix of processes which are contaminated by microstructure noise (see \cite{NO2009, OS2011} for some classes of real-time schemes for the estimation of volatility in the noisy case with/without jumps).
\end{remark} \section{Numerical Study} \subsection{Complexity} The computational cost of the Quadratic Variation method is much lower than that of the Fourier Series method: the former is of order $n^{\frac{2\alpha}{2\alpha+1}}$, while the latter is of order $N^2n$. \subsection{Dummy data} We consider a stochastic volatility model of Heston's type defined by \begin{equation} \begin{cases} dX_i(t) = \gamma_i dt + \sum_{j=1}^{d_1} \lambda_{ij}\sqrt{v_j(t)}dW_j(t) \\ dv_j(t) = \alpha_j(b_j - v_j(t))dt + \sigma_j\sqrt{v_j(t)}dB_j(t) \end{cases} \label{SVmodel} \end{equation} for $ 1\leq i \leq d, \ \ 1 \leq j \leq d_1, \ t \in [0,T]$, where $\gamma_i, \alpha_j, b_j, \sigma_j, \lambda_{ij}, \ 1\leq i \leq d, \ 1\leq j \leq d_1,$ are constants, $\alpha_j, b_j , \ 1\leq j \leq d_1,$ are positive; $W_j, B_j, 1\leq j \leq d_1$ are mutually independent standard Brownian motions. \begin{remark}The class of square-root diffusions \begin{equation} \label{squareroot} dv(t) = \alpha (b -v(t))dt + \sigma \sqrt{v(t)}dB(t), \quad v(0) = v_0, \end{equation} with $B$ a standard one-dimensional Brownian motion, was studied in \cite{F1951}. The author showed that if the parameters $\alpha, b, v_0$ are positive, then $v(t)$ stays non-negative. If one supposes further that $2\alpha b \geq \sigma^2$, then $v(t)$ is strictly positive for all $t$ with probability $1$. \end{remark} Provided that the processes $v_j$ can be simulated at the discretization times $t_k = k\Delta = kT/N_0, \ k= 0,\ldots, N_0,$ one can simulate $X$ by a simple Euler--Maruyama scheme as follows $$X_i(t_{k+1}) = X_i(t_k) + \gamma_i \Delta + \sqrt{\Delta} \sum_{j=1}^{d_1} \lambda_{ij}\sqrt{v_j(t_k)}Z_{jk},$$ where the $Z_{jk}$ are independent standard normal random variables. The simulation of the $v_j$ is more involved because the values of $v_j(t_k)$ produced by the Euler--Maruyama discretization may become negative.
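A common workaround for this negativity problem is the "full truncation" scheme, which replaces $v$ by $\max(v,0)$ inside the coefficients (this is not the method used in our simulations, which sample $v$ from its exact transition law, as described next). A minimal NumPy sketch of the resulting discretization of (\ref{SVmodel}), with a function name of our own choosing:

```python
import numpy as np

def simulate_heston_paths(gamma, lam, alpha, b, sigma, v0, X0, T, N0, rng):
    """Euler-Maruyama for the Heston-type model with 'full truncation'
    v -> max(v, 0) inside the coefficients, a standard workaround for the
    negativity problem.  lam has shape (d, d1); all other model
    parameters are arrays of length d or d1 as appropriate."""
    d, d1 = lam.shape
    dt = T / N0
    X = np.zeros((N0 + 1, d));  X[0] = X0
    v = np.zeros((N0 + 1, d1)); v[0] = v0
    for k in range(N0):
        vp = np.maximum(v[k], 0.0)       # truncate before taking square roots
        Z = rng.standard_normal(d1)      # drives the W_j (price noise)
        Zv = rng.standard_normal(d1)     # drives the B_j (volatility noise)
        X[k + 1] = X[k] + gamma * dt + (lam * np.sqrt(vp)) @ Z * np.sqrt(dt)
        v[k + 1] = v[k] + alpha * (b - vp) * dt + sigma * np.sqrt(vp * dt) * Zv
    return X, v

rng = np.random.default_rng(0)
X, v = simulate_heston_paths(np.zeros(2), np.full((2, 3), 0.1),
                             np.full(3, 2.0), np.full(3, 0.04),
                             np.full(3, 0.4), np.full(3, 0.04),
                             np.zeros(2), 1.0, 400, rng)
print(X.shape, v.shape)  # (401, 2) (401, 3)
```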
We simulate the $v_j$ by sampling from the exact transition laws of the processes (see \cite{G2004}). In the following, we choose $d =5, \ d_1 = 3, \ \gamma_i = i/100, \ b_j = v_j(0) = j/100, \ X_i(0) = 1, \ \alpha_j = 2, \ \lambda_{ij} = (-1)^{i+j}\sin(ij), \ 1\leq i \leq d, \ 1\leq j \leq d_1.$ We choose $\sigma_j = \sqrt{2b_j \alpha_j}$ and $T = 2\pi$ to simplify the simulation. Based on the sample data of $X$, we use both the Quadratic Variation and Fourier Series methods, as stated in the previous sections, to estimate the cross volatility matrices of $X$ as functions of time, and then calculate the eigenvalues of each estimated matrix. In particular, since the volatility coefficients of $X$ satisfy assumption (\ref{Holder}) with $\alpha = 1/2$, we choose $h_n = TN_0^{-1/2}$ for the Quadratic Variation scheme. For the Fourier Series method, we calculate the Fourier coefficients of the cross volatilities up to the Nyquist frequency $2N = N_0/2$ (see \cite{P1981}). We measure the mean square pathwise errors $MSE$ and $mSE$ defined as follows. Suppose that for each $k=0,\ldots, N_0$, $\check{\Sigma}(t_k)$ is an estimator of the matrix $\Sigma(t_k)$. We denote by $\check{\lambda}_1(t_k)$ and $\check{\lambda}_d(t_k)$ the maximum and minimum eigenvalues of $\check{\Sigma}(t_k)$, and by $\lambda_1(t_k)$ and $\lambda_d(t_k)$ the maximum and minimum eigenvalues of $\Sigma(t_k)$. Then we measure the errors of the estimates along the whole paths by $$MSE(\Sigma, \check{\Sigma}) = \frac{1}{N_0}\sum_{k=1}^{N_0}|\check{\lambda}_1(t_k) - \lambda_1(t_k)|^2,$$ and $$mSE(\Sigma, \check{\Sigma}) = \frac{1}{N_0}\sum_{k=1}^{N_0}|\check{\lambda}_d(t_k) - \lambda_d(t_k)|^2.$$ \subsubsection{The results of the first Fourier Series method} The simulations show that the Fourier Series estimate does not work well near $0$ and $T$.
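In code, the pathwise error criteria $MSE$ and $mSE$ defined above reduce to a few lines (a sketch; the function name is ours):

```python
import numpy as np

def eigen_path_errors(Sigma_hat, Sigma):
    """MSE and mSE of the largest/smallest eigenvalue paths.

    Sigma_hat, Sigma: arrays of shape (N0, d, d) holding the estimated and
    true cross volatility matrices at the grid times t_1, ..., t_{N0}."""
    lam_hat = np.linalg.eigvalsh(Sigma_hat)  # ascending eigenvalues, batched
    lam = np.linalg.eigvalsh(Sigma)
    MSE = np.mean((lam_hat[:, -1] - lam[:, -1]) ** 2)  # largest eigenvalue path
    mSE = np.mean((lam_hat[:, 0] - lam[:, 0]) ** 2)    # smallest eigenvalue path
    return MSE, mSE
```

Since \texttt{eigvalsh} returns eigenvalues in ascending order, the first and last columns of the batched result are the minimum and maximum eigenvalue paths.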
In order to better understand the errors of each estimation method away from the endpoints, we eliminate $10$ percent of the estimated cross volatilities near the two end points $0$ and $T$ when we calculate the mean square pathwise errors for each symmetrization of the Fourier Series method and for the Quadratic Variation method. The means of $mSE$ and $MSE$ of each method are shown in Table \ref{errors1} (note that in all tables we write $\epsilon$ for values less than $10^{-10}$). Here QV and FS stand for the Quadratic Variation and Fourier Series methods, respectively. FS$i$, $i = 1, 2, 3, 4,$ stands for the Fourier Series estimator using the smoothed kernel with $\delta = TN_0^{-0.1(i+2)}$. Figures \ref{f3_1} and \ref{f4_1} show the estimates of $\lambda_M$ and $\lambda_m$ on $(0, T)$ with $N_0= 10^3$ and $N_0 = 10^4$. \begin{table} \begin{center} \begin{tabular}{ l r r r r r r r r } & $N_0$ & QV & FS & FS$1$ & FS$2$ & FS$3$ & FS$4$\\ \hline MSE & $10^2$ & 23 & 100 & 21 & 21 & 23 & 31\\ mSE & & $\epsilon$ & 6.882 & $\epsilon$ & 0.006 & 0.071 & 0.391 \\ \hline MSE & $10^3$ & 7 & 88 & 15 & 11 & 9 & 12\\ mSE & & $\epsilon$ & 7.889 & $\epsilon$ & $\epsilon$ & 0.01 & 0.077 \\ \hline MSE & $10^4$ & 2 & 93 & 9 & 5 & 4 & 6 \\ mSE & & $\epsilon$ & 7.609 & $\epsilon$ & $\epsilon$ & $\epsilon$ & 0.007 \\ \hline \end{tabular} \caption{Means of $MSE$ and $mSE$ ($\times 10^{-4}$) corresponding to the Quadratic Variation method and the first Fourier Series method}\label{errors1} \end{center} \end{table} \begin{figure} \begin{center} \hskip -1cm \includegraphics[width=6.5cm,height=4.5cm]{02_max1000_mod.eps} \includegraphics[width=7.5cm,height=4.5cm]{02_max10000_mod.eps} \end{center} \vskip -1cm \caption{Maximum eigenvalue corresponding to the Quadratic Variation method and the first Fourier Series method (left: $N_0 = 10^3$, right: $N_0 = 10^4$)} \label{f3_1} \end{figure} \begin{figure} \begin{center} \hskip -1cm \includegraphics[width=6.5cm,height=4.5cm]{02_min1000_mod.eps}
\includegraphics[width=7.5cm,height=4.5cm]{02_min10000_mod.eps} \end{center} \vskip -1cm \caption{Minimum eigenvalue corresponding to the Quadratic Variation method and the first Fourier Series method (left: $N_0 = 10^3$, right: $N_0 = 10^4$)} \label{f4_1} \end{figure} Note that we omit the graph of FS, since it oscillates so violently that it makes the whole picture difficult to read. The Fourier Series scheme using the modified Fej\'er kernel is able to produce a good estimate provided that one can choose a correct value for the parameter $\delta$. However, Table \ref{errors1} together with Figure \ref{f3_1} shows that this estimate is very sensitive to the choice of $\delta$, and to the best of our knowledge there is still no effective way to select a good $\delta$. Another disadvantage of the Fourier Series method is evident from Figure \ref{f4_1}: the FS3 and FS4 schemes may produce significantly negative estimates of the eigenvalues of the cross volatility matrix. This drawback occurs because the estimated cross volatility matrices obtained from the Fourier Series method need not be non-negative definite. \subsubsection{The results of the second Fourier Series method} We use the same notations as above. Tables \ref{errors2_s1} and \ref{errors2_s2} show the means of $mSE$ and $MSE$ of each method when the Fourier Series method is modified by the first and the second symmetrization, respectively. Figures \ref{fs1_3} and \ref{fs1_4} show the estimates of $\lambda_M$ and $\lambda_m$ on $(0, T)$ with $N_0= 10^3$ and $N_0 = 10^4$ for the first symmetrization, and Figures \ref{fs2_3} and \ref{fs2_4} show the corresponding estimates for the second symmetrization. With the two symmetrization methods, the FS3 and FS4 schemes give good results, as shown in Figures \ref{fs1_3} and \ref{fs2_3}.
Although the Fourier Series estimators are still not necessarily non-negative definite, the negative values of the estimated eigenvalues of the cross volatility matrix are no longer significant, as shown in Figures \ref{fs1_4} and \ref{fs2_4}. \begin{table} \begin{center} \begin{tabular}{ l r r r r r r r r } & $N_0$ & QV & FS & FS$1$ & FS$2$ & FS$3$ & FS$4$\\ \hline MSE & $10^2$ & 18 & 399 & 456 & 391 & 164 & 106 \\ mSE & & $\epsilon$ & 3 & 10 & 32 & 96 & 1721 \\ \hline MSE & $10^3$ & 8 & 298 & 43 & 8 & 10 & 65 \\ mSE & & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ \\ \hline MSE & $10^4$ & 3 & 31 & 5 & 3 & 5 & 67 \\ mSE & & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ \\ \hline \end{tabular} \caption{Means of $MSE$ and $mSE$ ($\times 10^{-4}$) corresponding to the Quadratic Variation method and the second Fourier Series method with first symmetrization}\label{errors2_s1} \end{center} \end{table} \begin{figure} \begin{center} \hskip -1cm \includegraphics[width=6.5cm,height=4.5cm]{09sym1_max1000_mod.eps} \includegraphics[width=7.5cm,height=4.5cm]{09sym1_max10000_mod.eps} \end{center} \vskip -1cm \caption{Maximum eigenvalue corresponding to the Quadratic Variation method and the second Fourier Series method with first symmetrization (left: $N_0 = 10^3$, right: $N_0 = 10^4$)} \label{fs1_3} \end{figure} \begin{figure} \begin{center} \hskip -1cm \includegraphics[width=6.5cm,height=4.5cm]{09sym1_min1000_mod.eps} \includegraphics[width=7.5cm,height=4.5cm]{09sym1_min10000_mod.eps} \end{center} \vskip -1cm \caption{Minimum eigenvalue corresponding to the Quadratic Variation method and the second Fourier Series method with first symmetrization (left: $N_0 = 10^3$, right: $N_0 = 10^4$)} \label{fs1_4} \end{figure} \begin{table} \begin{center} \begin{tabular}{ l r r r r r r r r } & $N_0$ & QV & FS & FS$1$ & FS$2$ & FS$3$ & FS$4$\\ \hline MSE & $10^2$ & 20 & 427 & 491 & 424 & 175 & 100 \\ mSE & & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & 10 &
3768 \\ \hline MSE & $10^3$ & 8 & 207 & 33 & 9 & 11 & 65 \\ mSE & & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & 14 \\ \hline MSE & $10^4$ & 3 & 32 & 4 & 3 & 5 & 66 \\ mSE & & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ & $\epsilon$ \\ \hline \end{tabular} \caption{Means of $MSE$ and $mSE$ ($\times 10^{-4}$) corresponding to the Quadratic Variation method and the second Fourier Series method with second symmetrization}\label{errors2_s2} \end{center} \end{table} \begin{figure} \begin{center} \hskip -1cm \includegraphics[width=6.5cm,height=4.5cm]{09sym2_max1000_mod.eps} \includegraphics[width=7.5cm,height=4.5cm]{09sym2_max10000_mod.eps} \end{center} \vskip -1cm \caption{Maximum eigenvalue corresponding to the Quadratic Variation method and the second Fourier Series method with second symmetrization (left: $N_0 = 10^3$, right: $N_0 = 10^4$)} \label{fs2_3} \end{figure} \begin{figure} \begin{center} \hskip -1cm \includegraphics[width=6.5cm,height=4.5cm]{09sym2_min1000_mod.eps} \includegraphics[width=7.5cm,height=4.5cm]{09sym2_min10000_mod.eps} \end{center} \vskip -1cm \caption{Minimum eigenvalue corresponding to the Quadratic Variation method and the second Fourier Series method with second symmetrization (left: $N_0 = 10^3$, right: $N_0 = 10^4$)} \label{fs2_4} \end{figure} In our simulation, the Quadratic Variation method works quite well. Its mean square pathwise errors are strictly smaller than those of the Fourier Series method. In addition, because the estimated cross volatility matrices produced by the Quadratic Variation method are always symmetric and non-negative definite, all of their eigenvalues are non-negative. Finally, the computation time of the Quadratic Variation scheme is less than $1/100$ of that of the Fourier Series scheme. \section{Proofs} In this section, we sketch the proofs of the main results in Sections 2 and 3. First we need the following auxiliary inequality (see \cite{AGZ2010}).
\begin{lem}[Hoffman--Wielandt] \label{HWlemma} Let $A, B$ be $N \times N$ symmetric matrices, with eigenvalues $\lambda^A_1 \leq \lambda^A_2 \leq \ldots \leq \lambda^A_N$ and $\lambda^B_1 \leq \lambda^B_2 \leq \ldots \leq \lambda^B_N$. Then $$ \sum_{i=1}^N |\lambda^A_i - \lambda^B_i|^2 \leq tr(A-B)^2.$$ \end{lem} \subsection{Proof of Theorem \ref{theoremFS}} By an argument similar to the proof of Theorem 3.4 in \cite{MM2009}, one can show that the following convergence in probability holds \begin{align*} \lim_{n, N \to \infty} \sup_{0 \leq t \leq 2\pi} \| \hat{\Sigma}^{N,n}(t) - \Sigma(t)\| = 0. \end{align*} Hence, one also has \begin{align*} \lim_{n, N \to \infty} \sup_{0 \leq t \leq 2\pi} tr( \hat{\Sigma}^{N,n}(t) - \Sigma(t))^2 = 0 , \ \text{in probability}. \end{align*} Applying Lemma \ref{HWlemma}, we get the desired result. \subsection{Proof of Theorem \ref{theoremQV1}} After some elementary calculations, one gets from Lemma \ref{HWlemma} that \begin{align} \label{eqnQV1} \sum_{i=1}^d | \tilde{\lambda}^{n}_i(t) - \lambda_i(t)| \leq \sqrt{d}\sum_{i,j=1}^d |\tilde{\Sigma}^{n}_{ij}(t) - \Sigma_{ij}(t)|, \end{align} for all $t \in (0,T)$. On the other hand, it follows from Proposition 3.3 of \cite{OW2007} that there exists a constant $M > 0$ such that for all $n$ and all $t \in (h_n, T-h_n)$, one has \begin{align*} \Big( h_n^{\alpha} + \sqrt{\frac{T}{nh_n}}\Big)^{-1} \sum_{i, j=1}^d \bE| \tilde{\Sigma}^{n}_{ij}(t) - \Sigma_{ij}(t)| \leq M, \end{align*} which proves Theorem \ref{theoremQV1}. \subsection{Proof of Theorem \ref{theoremQV2}} It is not hard to see from Propositions 3.13 and 3.14 in \cite{NO2009} that for any $\gamma \in (0, \frac{\alpha}{2\alpha + 1} \wedge \frac{2-\beta}{2\beta})$, there exists a constant $M$ such that \begin{align*} n^\gamma \sum_{i, j=1}^d \bE| \bar{\Sigma}^{n}_{ij}(t) - \Sigma_{ij}(t)| \leq M,\end{align*} for all $n$ and $t \in (0,T)$. This fact together with estimate (\ref{eqnQV1}) yields the desired result.
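Both Lemma \ref{HWlemma} and the Cauchy--Schwarz step leading to \eqref{eqnQV1} are easy to check numerically; a sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
A = rng.standard_normal((d, d)); A = (A + A.T) / 2  # random symmetric matrices
B = rng.standard_normal((d, d)); B = (B + B.T) / 2
lamA = np.linalg.eigvalsh(A)                        # eigenvalues, ascending order
lamB = np.linalg.eigvalsh(B)
# Hoffman-Wielandt: sum_i |lam_i^A - lam_i^B|^2 <= tr(A-B)^2
assert np.sum((lamA - lamB) ** 2) <= np.trace((A - B) @ (A - B)) + 1e-12
# the elementary step used in the proofs:
# sum_i |lam_i^A - lam_i^B| <= sqrt(d) * sum_{i,j} |A_ij - B_ij|
assert np.sum(np.abs(lamA - lamB)) <= np.sqrt(d) * np.abs(A - B).sum() + 1e-12
```

The second inequality follows by Cauchy--Schwarz together with the fact that the Frobenius norm is dominated by the entrywise $\ell^1$ norm.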
\section{Conclusions} In this paper we studied two methods to estimate the eigenvalues of the spot cross volatility matrix. The empirical studies show that, in comparison with the Fourier Series method, the Quadratic Variation method is easier to implement, is much faster, and avoids the negative eigenvalue problem. The Quadratic Variation method is also applicable to diffusion processes with jumps, for which the Fourier Series method is unsuitable.
https://arxiv.org/abs/1409.2214
Approximation of eigenvalues of spot cross volatility matrix with a view toward principal component analysis
In order to study the geometry of interest rates market dynamics, Malliavin, Mancino and Recchioni [A non-parametric calibration of the HJM geometry: an application of Itô calculus to financial statistics, Japanese Journal of Mathematics, 2, pp.55--77, 2007] introduced a scheme, which is based on the Fourier Series method, to estimate eigenvalues of a spot cross volatility matrix. In this paper, we present another estimation scheme based on the Quadratic Variation method. We first establish limit theorems for each scheme and then we use a stochastic volatility model of Heston's type to compare the effectiveness of these two schemes.
https://arxiv.org/abs/1702.00205
Decomposing Weighted Graphs
We solve the following problem: Can an undirected weighted graph G be partitioned into two non-empty induced subgraphs satisfying minimum constraints for the sum of edge weights at vertices of each subgraph? We show that this is possible for all constraints a(x), b(x) satisfying d_G(x) >= a(x) + b(x) + 2W_G(x), for every vertex x, where d_G(x), W_G(x) are, respectively, the sum and maximum of incident edge weights.
\section{Introduction} All graphs considered in this paper are finite, undirected and weighted. A weighted graph is a triple $G = (V, E, w)$ such that $(V, E)$ is an undirected simple finite graph and $w : E \mapsto \mathbb{R}_{>0}$ is a weight function. Where $xy \notin E$, we further define $w_{xy} = w_{yx} = 0$. We denote by $V(G)$ the vertex set of a graph $G$. The degree of vertex $x$ with respect to $G$ is denoted by $d_G(x)$ and is the sum of its incident edge weights: $d_G(x) = \sum\limits_{y \in V(G)} w_{xy}$. If $G(X)$ is the subgraph induced by $X$, we use the notation $d_X(x)$ as shorthand for $d_{G(X)}(x)$. We denote by $W_G(x)$ the maximum weight (not including loop edges) of an edge of $G$ incident to $x$: $W_G(x) := \max\limits_{y \in V(G), x \ne y} w_{xy}$. $W(x)$ stands for $W_G(x)$ when the context is clear. $(A, B)$ is called a {\em partition} of a set $V$ if $A, B$ are disjoint, non-empty subsets of $V$ whose union is $V$. Stiebitz \cite{Stiebitz} proved the following decomposition result for a simple, undirected graph $G$: Let $a, b: V(G) \mapsto \mathbb{N}$ be two functions, and assume $d_G(x) \geq a(x) + b(x) + 1$ for every $x \in V(G)$. Then there is a partition $(A,B)$ of $V(G)$ such that \begin{enumerate}[(1)] \item $d_A(x) \geq a(x)$ for every $x \in A$, and \item $d_B(x) \geq b(x)$ for every $x \in B$. \end{enumerate} Stiebitz' result does not lend itself to a natural generalization to weighted graphs, because of the restriction to integers on the vertex functions $a, b$. If this is relaxed, the theorem breaks. The Stiebitz proof in several places relies on vertex degrees being integers. Nonetheless, in this paper we generalize this result to undirected weighted graphs, and base our proof closely on Stiebitz' proof. We prove the following result: \begin{theorem} \label{weighted} Let $G$ be a graph without loop edges, and $a, b : V(G) \mapsto \mathbb{R}_{\geq 0}$ two functions. 
Assume that $d_G(x) \geq a(x) + b(x) + 2W_G(x)$ for every vertex $x \in V(G)$. Then there is a partition $(A,B)$ of $V(G)$ such that \begin{enumerate}[(1)] \item $d_A(x) \geq a(x)$ for every $x \in A$, and \item $d_B(x) \geq b(x)$ for every $x \in B$. \end{enumerate} \end{theorem} \section{Proof of Theorem \ref{weighted}} Let $G$ be a graph and $a, b : V(G) \mapsto \mathbb{R}_{\geq 0}$ two functions such that \begin{align*} d_G(x) \geq a(x) + b(x) + 2W(x) \end{align*} \noindent for every $x \in V(G)$. Let $f : V(G) \mapsto \mathbb{R}_{\geq 0}$ be a function. $G$ is said to be {\em $f$-meager} if for every induced subgraph $H$ of $G$ there is a vertex $x \in V(H)$ such that $d_H(x) < f(x) + W(x)$.\footnote{Stiebitz uses $d_H(x) \leq f(x)$ and names it {\em $f$-degenerate}, which is the standard terminology for this simple-graph property. The generalization used here is non-standard and non-obvious, and therefore we call it differently.} We say a pair $(A,B)$ is {\em stable} if $A$ and $B$ are disjoint, non-empty subsets of $V(G)$ such that \begin{enumerate}[(1)] \item $d_A(x) \geq a(x)$ for every $x \in A$, and \item $d_B(x) \geq b(x)$ for every $x \in B$. \end{enumerate} We have to show that there is a stable partition of $V(G)$. Following \cite{Stiebitz}, we make the following observation. \begin{proposition} \label{complete} If there exists a stable pair, then there exists a stable partition of $V(G)$, too. \end{proposition} \begin{proof} Let $(A,B)$ be a stable pair such that $A \cup B$ is maximal. We need only to show that $A \cup B = V(G)$. Suppose not, i.e. $C = V(G) \setminus (A \cup B)$ is non-empty. Then the maximality of $A \cup B$ implies that $(A,B \cup C)$ is not stable. Therefore, there is a vertex $x \in C$ such that $d_{B \cup C}(x) < b(x)$. Since $d_G(x) \geq a(x) + b(x) + 2W(x)$, $d_A(x) > a(x) + 2W(x) \geq a(x)$. But then $(A \cup \{x\},B)$ is a stable pair, contradicting the maximality of $A \cup B$. This proves the proposition. 
\qed \end{proof} We define a {\em meager partition} of $V(G)$ as a partition $(A,B)$ of $V(G)$ such that $G(A)$ is $a$-meager and $G(B)$ is $b$-meager. We define a function $h(A,B)$ of a partition $(A,B)$ as \begin{align*} h(A,B) = \sum\limits_{x \in A,y \in A} w_{xy} + \sum\limits_{x \in B,y \in B} w_{xy} + \sum\limits_{x \in A} b(x) + \sum\limits_{x \in B} a(x) \end{align*} For the proof of Theorem \ref{weighted} we consider two possible cases. \begin{itemize} \item There is no meager partition of $V(G)$. Then, among all non-empty subsets of $V(G)$ select one, say $A$, such that \begin{enumerate}[(i)] \item \label{first} $d_A(x) \geq a(x)$ for all $x \in A$, and \item \label{second} $|A|$ is minimum subject to (\ref{first}) \end{enumerate} Let $B = V(G) \setminus A$. Since $V(G) \setminus \{v\}$ satisfies (\ref{first}) for each vertex $v$, $A$ exists and is a proper subset of $V(G)$. Hence $B$ is non-empty. Because of (\ref{second}), for every non-empty proper subset $A'$ of $A$ there is a vertex $x \in A'$ such that $d_{A'}(x) < a(x)$. This implies that $d_A(x) < a(x) + W(x)$ for some $x \in A$. Hence $G(A)$ is $a$-meager. Clearly $G(B)$ is not $b$-meager, since otherwise $(A,B)$ would be a meager partition of $V(G)$. Therefore, there is a non-empty subset $B'$ of $B$ such that $d_{B'}(x) \geq b(x) + W(x) \geq b(x)$ for all $x \in B'$. Then $(A,B')$ is a stable pair and, by Proposition \ref{complete}, there is a stable partition of $V(G)$. \item There is a meager partition of $V(G)$. Then let $(A,B)$ be a meager partition of $V(G)$ such that $h(A,B)$ is maximum. $G(A)$ being $a$-meager, there is a vertex $x \in A$ such that $d_A(x) < a(x) + W(x)$. Since $d_G(x) \geq a(x) + b(x) + 2W(x)$, $d_B(x) > b(x) + W(x)$. This implies that $|B| \geq 2$. By symmetry we also have $|A| \geq 2$. Next, we claim that there is a non-empty subset $\bar{A} \subseteq A$ such that $d_{\bar{A}}(x) \geq a(x)$ for all $x \in \bar{A}$. Suppose not. 
Then, clearly, for each $y \in B$, $G(A \cup \{y\})$ is $a$-meager. $G(B)$ being $b$-meager, there is a vertex $y' \in B$ such that $d_B(y') < b(y') + W(y')$. Let $A' = A \cup \{y'\}$ and $B' = B \setminus \{y'\}$. Obviously, $B'$ is non-empty. Now, we easily conclude that $(A',B')$ is a meager partition of $V(G)$. Since $d_G(y') \geq a(y') + b(y') + 2W(y')$ and $d_B(y') < b(y') + W(y')$, we have $d_{A'}(y') > a(y') + W(y')$ and, therefore, \begin{align*} h(A',B') - h(A,B) = d_{A'}(y') - d_B(y') + b(y') - a(y') > 0 \end{align*} \noindent contradicting the maximality of $h(A,B)$. This proves the claim. By symmetry there is a non-empty subset $\bar{B} \subseteq B$ such that $d_{\bar{B}}(x) \geq b(x)$ for all $x \in \bar{B}$. Then $(\bar{A},\bar{B})$ is a stable pair, and, by Proposition \ref{complete}, there is a stable partition of $V(G)$. This completes the proof of Theorem \ref{weighted}. \end{itemize} \section{Concluding Remarks} Theorem \ref{weighted} is tight in view of graphs where $w_{xy} = 1$ for all $x \neq y$: e.g.\ $K_9$ with unit edge weights has degree $8$ at every vertex, and setting $a(x) = b(x) = 3 + \epsilon$, no stable partition exists for any $\epsilon > 0$. We can generalize Theorem \ref{weighted} to the case of a weighted undirected graph with loops $G = (V, E, w)$: we now allow $w_{xx} > 0$, and even $w_{xx} > W(x)$, for every $x \in V$. \begin{corollary} \label{loop} Let $G$ be a graph with loops and $a, b : V(G) \mapsto \mathbb{R}_{\geq 0}$ two functions. Assume that $d_G(x) \geq a(x) + b(x) + 2W_G(x) - 2w_{xx}$ for every vertex $x \in V(G)$. Then there is a partition $(A,B)$ of $V(G)$ such that \begin{enumerate}[(1)] \item $d_A(x) \geq a(x)$ for every $x \in A$, and \item $d_B(x) \geq b(x)$ for every $x \in B$. \end{enumerate} \end{corollary} This follows by applying Theorem \ref{weighted} to the graph $G'$ obtained from $G$ by omitting all loops.
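For small graphs, Theorem \ref{weighted} and the tightness example above can be confirmed by exhaustive search over all bipartitions; a sketch in Python (exponential time, for illustration only):

```python
from itertools import combinations

def stable_partition(w, a, b):
    """Search all bipartitions (A, B) of V = {0, ..., n-1} for a stable one:
    d_A(x) >= a[x] on A and d_B(x) >= b[x] on B.  Here w is a symmetric
    weight matrix with w[x][y] = 0 whenever xy is not an edge."""
    n = len(w)
    V = set(range(n))
    def deg_in(x, S):
        return sum(w[x][y] for y in S if y != x)
    for r in range(1, n):
        for A in map(set, combinations(range(n), r)):
            B = V - A
            if all(deg_in(x, A) >= a[x] for x in A) and \
               all(deg_in(x, B) >= b[x] for x in B):
                return A, B
    return None

# K_9 with unit weights: d_G(x) = 8 = a(x) + b(x) + 2W(x) when a = b = 3
K9 = [[0 if i == j else 1 for j in range(9)] for i in range(9)]
print(stable_partition(K9, [3] * 9, [3] * 9) is not None)  # True: the bound is met
print(stable_partition(K9, [3.5] * 9, [3.5] * 9))          # None: epsilon = 1/2 breaks it
```

The first search succeeds (any split into sides of sizes $4$ and $5$ works), while the second fails, matching the tightness discussion.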
We provide an example application of our result, whose solution motivated this paper: \begin{figure}[tbp] \centering \begin{tikzpicture} \draw [black, thick] (-1,3) rectangle (0,2); \draw [gray, thin] (-0.5,2.5) circle [radius=2.1]; \draw [black, thick] (0,1) grid (3,4); \draw [black, thick] (0,0) grid (1,2); \draw [gray, thin] (2.5,1.5) circle [radius=2.1]; \draw [black, thick] (3,0) rectangle (4,1); \draw [gray, thin] (3.5,0.5) circle [radius=2.1]; \end{tikzpicture} \caption[Partitioning squares]{Partitioning squares: two-color the squares so that every circle (radius $2.1$) centered at a square's center has most of its colored area in the color of its center} \label{squares} \end{figure} Let $V$ be a set of grid squares of side $1$ in $\mathbb{R}^2$, and fix a radius $r > 0$ (see Figure \ref{squares}). Is there a non-trivial partition $(A,B)$ of $V$ such that: \begin{enumerate}[(1)] \item For each $x \in A$, a circle of radius $r$ drawn around its centre covers at least as much area in $A$ as in $B$, and \item For each $x \in B$, a circle of radius $r$ drawn around its centre covers at least as much area in $B$ as in $A$? \end{enumerate} \noindent (The question is easily extended to higher dimensions and to other metrics.) We are able to answer the question in the affirmative. For $r \leq \sqrt{\frac{2}{\pi}}$, the square $x$ itself covers the majority of the radius-$r$ circle drawn around its centre, so any partition satisfies the requirements. So assume $r > \sqrt{\frac{2}{\pi}}$. We build the weighted graph $G = (V, E, w)$ with the set of squares $V$ as the vertex set. The weight of an edge from square $x$ to square $y$, $w_{xy}$, is the area of the part of $y$ whose distance from $x$'s centre is at most $r$. Clearly $w_{xy} = w_{yx}$ and $w_{xy} \leq 1$ for every $x, y \in V$. Also, since $r > \sqrt{\frac{2}{\pi}} > \sqrt{\frac{1}{2}}$, $w_{xx} = 1$ for every $x \in V$.
Therefore setting $a(x) = b(x) = d_G(x)/2$ for every $x \in V$, the existence of the sought partition follows from Corollary \ref{loop}. \section*{References}
https://arxiv.org/abs/1905.01568
Relations among spheroidal and spherical harmonics
A contragenic function in a domain $\Omega\subseteq\mathbf{R}^3$ is a reduced-quaternion-valued (i.e. the last coordinate function is zero) harmonic function, which is orthogonal in $L^2(\Omega)$ to all monogenic functions and their conjugates. The notion of contragenicity depends on the domain and thus is not a local property, in contrast to harmonicity and monogenicity. For spheroidal domains of arbitrary eccentricity, we relate standard orthogonal bases of harmonic and contragenic functions for one domain to another via computational formulas. This permits us to show that there exist nontrivial contragenic functions common to the spheroids of all eccentricities.
\section{Introduction} In certain physical problems in nonspherical domains, it has been found convenient to replace the classical solid spherical harmonics with harmonic functions better adapted to the domain in question. For example, spheroidal harmonics are used in \cite{Hot} for modeling potential fields around the surface of the earth. A systematic analysis of harmonic functions on spheroidal domains was initiated by Szeg\"o \cite{Szego1935}, followed by Garabedian \cite{Garabedian}, who produced orthogonal bases with respect to certain natural inner products associated to prolate and oblate spheroids, among them the $L^2$-Hilbert space structures on the interior and on the boundary of the spheroid. Some aspects of the generation of harmonic functions which are orthogonal in the region exterior to a prolate spheroid were considered in \cite{MoraisNguyenKou2016} and generalized recently in \cite{Morais_Habilitation2018}. The main question which interests us is to relate systems of harmonic functions associated with the spheroid $\Omega_\mu$ (defined in \eqref{eq:Omegamu} below) to those associated with the unit ball $\Omega_0$. Our starting point is a fundamental formula for spheroidal harmonics which was worked out in the short but beautiful paper \cite{BBS} and is discussed thoroughly in Chapter 22 of the monumental text \cite{Hot}. In classical books such as \cite{Hobson1931,MorseFeshbach1953,NikiforovUvarov1988}, expansions in terms of these bases are treated separately, without specifying the relations among them. We complete the above formulas by relating different systems of harmonic functions associated with spheroids of different eccentricity. While the manipulation of the coefficients is essentially algebraic, it must be borne in mind that we are dealing with continuously varying families of function spaces which are determined by integration over varying domains.
This study is then extended to include the contragenic functions, which are those harmonic functions orthogonal to both the monogenic functions and the antimonogenic functions in the domain under consideration. In \cite{GMP} a short table of contragenic polynomials was provided, which included some which did not depend on the parameter describing the eccentricity of the spheroid. Such polynomials are thus contragenic for all spheroids. Our main result, Theorem \ref{th:intersection}, describes the intersection of the spaces of contragenic functions. \section{Background on spheroidal harmonics} As a preliminary to the discussion of monogenic and contragenic functions on spheroids, we establish the basic facts for harmonics in this and the next section. Consider the family of coaxial spheroidal domains $\Omega_\mu$, scaled so that the major axis is of length 2: \begin{equation}\label{eq:Omegamu} \Omega_\mu = \{x \in \R^3 |\ x_0^2 + \frac{x_1^2 + x_2^2}{e^{2\nu}} < 1 \}, \end{equation} where $\nu\in\R$ and where, following the notation in \cite{GMP}, the parameter $\mu=(1-e^{2\nu})^{1/2}$ will be useful in later formulas. The equations relating the Cartesian coordinates of a point $x=(x_0,x_1,x_2)$ in $\Omega_\mu$ to spheroidal coordinates $(u,v,\phi)$ are \begin{equation}\label{eq:prolatecoords} x_0 = \mu \cos u \cosh v, \ x_1 = \mu \sin u \sinh v \cos \phi, \ x_2 = \mu \sin u \sinh v \sin \phi, \end{equation} where in the case of the prolate spheroid ($\nu<0$) the coordinates range over $u\in[0,\pi]$, $v\in[0,\arctanh e^\nu]$, $\phi\in[0,2\pi)$ and the eccentricity is $0<\mu<1$, while for the oblate spheroid ($\nu>0$) we have $u\,\in\,[0,\pi]$, $v\in[0,\arctanh e^\nu]$, $\phi\in[0,2\pi)$, and $\mu$ is imaginary, $\mu/i>0$. The spheroids reduce to the unit ball $\Omega_0$ for $\nu=0$. In many other treatments of spheroidal functions, which discuss the two (confocal) families separately, the ball is not represented. See \cite{GMP} for a discussion of this question.
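As a quick sanity check of \eqref{eq:prolatecoords} in the prolate case $\nu<0$ (a sketch in Python; the function name is ours): points with $v = \arctanh e^{\nu}$ land exactly on the boundary of $\Omega_\mu$, since there $\cosh v = 1/\mu$ and $\sinh v = e^{\nu}/\mu$.

```python
import numpy as np

def spheroidal_to_cartesian(u, v, phi, nu):
    """Map spheroidal coordinates (u, v, phi) to Cartesian (x0, x1, x2)
    for the prolate spheroid Omega_mu, mu = sqrt(1 - exp(2*nu)), nu < 0."""
    mu = np.sqrt(1 - np.exp(2 * nu))
    x0 = mu * np.cos(u) * np.cosh(v)
    x1 = mu * np.sin(u) * np.sinh(v) * np.cos(phi)
    x2 = mu * np.sin(u) * np.sinh(v) * np.sin(phi)
    return x0, x1, x2

nu = -0.5
v_max = np.arctanh(np.exp(nu))          # boundary value of v
for u in np.linspace(0.1, 3.0, 7):
    x0, x1, x2 = spheroidal_to_cartesian(u, v_max, 1.2, nu)
    # defining equation of the boundary of Omega_mu:
    assert abs(x0**2 + (x1**2 + x2**2) / np.exp(2 * nu) - 1) < 1e-12
```

For $v < \arctanh e^{\nu}$ the same expression is strictly less than $1$, so the coordinates parametrize the interior.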
In terms of the coordinates \eqref{eq:prolatecoords}, the {\it solid spheroidal harmonics} are \begin{equation} \label{eq:prolatepreharmonics} \harorigsol{n,m}{\pm}[\mu](x) = \harorig{n,m}[\mu](u,v) \, \Phi_m^\pm(\phi), \end{equation} where \begin{equation} \Phi_m^+(\phi)=\cos (m\phi), \quad \Phi_m^-(\phi)=\sin (m\phi) \end{equation} and for $\mu\not=0$, \begin{equation} \label{eq:prolatepreharmonicsq} \harorig{n,m}[\mu](u,v) = \frac{(n-m)!}{2^n(1/2)_n}\mu^n P_{n}^{m}(\cos u) P_{n}^{m}(\cosh v) . \end{equation} Here $P_n^m$ are the associated Legendre functions of the first kind \cite[Ch. III]{Hobson1931} of degree $n$ and order $m$, and the (rising) Pochhammer symbol is $(a)_n=a(a+1)\cdots(a+n-1)$ with $(a)_0=1$ by convention. To avoid repetition, we state once and for all that $\harorigsol{n,m}{-}[\mu]$ is only defined for $m\ge1$, i.e.\ $\harorigsol{n,0}{-}[\mu]$ is expressly excluded from all statements of theorems. It was shown in \cite{GMP} that with the scale factor which has been included in \eqref{eq:prolatepreharmonicsq}, the $\harorigsol{n,m}{\pm}[\mu]$ are polynomials in $(x_0,x_1,x_2)$ which are normalized so that the limiting case $\mu\to0$ gives the classical {\it solid spherical harmonics}, \begin{equation*} \harorigsol{n,m}{\pm}[0](x) = |x|^nP_n^m(x_0/|x|) \Phi_m^\pm(\phi). \end{equation*} It is known from \cite{Garabedian} that while the $\harorigsol{n,m}{\pm}[\mu]$ are mutually orthogonal with respect to the Dirichlet norm on $\Omega_\mu$, the closely related functions, which we will call the \textit{Garabedian spheroidal harmonics}, \begin{equation} \label{eq:prolateharmonics} \hargarabsol{n,m}{\pm}[\mu](x) = \frac{\partial}{\partial x_0} \harorigsol{n+1,m}{\pm}[\mu](x) \end{equation} form an orthogonal basis for $\mathcal{H}_2(\Omega_\mu)$, the linear subspace of real-valued harmonic functions in $L_2(\Omega_\mu)$. This property makes the $\hargarabsol{n,m}{\pm}[\mu]$ of greater interest for many considerations.
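As an illustration of this normalization, one can check numerically that $\harorig{2,0}[\mu]$, evaluated through \eqref{eq:prolatepreharmonics}--\eqref{eq:prolatepreharmonicsq}, agrees with the polynomial $(3x_0^2-|x|^2)/2-\mu^2/3$ in the Cartesian coordinates \eqref{eq:prolatecoords}, whose limit as $\mu\to0$ is the solid spherical harmonic $|x|^2P_2(x_0/|x|)$. (The closed Cartesian form here is our own short computation from the definitions; the sketch is in Python.)

```python
import numpy as np

def P2(t):
    # Legendre polynomial P_2, valid for any real argument (e.g. cosh v > 1)
    return (3 * t**2 - 1) / 2

mu = 0.6   # prolate case: mu = sqrt(1 - e^{2 nu}) with nu = log(1 - mu^2)/2 < 0
for u in np.linspace(0.2, 2.8, 5):
    for v in np.linspace(0.1, 0.5, 5):
        c, C = np.cos(u), np.cosh(v)
        # scale factor (n-m)!/(2^n (1/2)_n) equals 2/3 for n = 2, m = 0
        U = (2.0 / 3.0) * mu**2 * P2(c) * P2(C)
        # the same value written as a polynomial in Cartesian coordinates
        x0 = mu * c * C
        r2 = x0**2 + (mu * np.sin(u) * np.sinh(v))**2   # |x|^2, independent of phi
        assert abs(U - ((3 * x0**2 - r2) / 2 - mu**2 / 3)) < 1e-12
```

The extra term $-\mu^2/3$ is precisely the $k=1$ contribution in the conversion formulas of the next section.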
The corresponding boundary Garabedian harmonics $\hargarab{n,m}[\mu]$ in $\Omega_\mu$ are characterized by the relation \begin{equation} \label{eq:Vhat} \hargarabsol{n,m}{\pm}[\mu](x) = \hargarab{n,m}[\mu](u,v) \,\Phi_m^\pm(\phi). \end{equation} We recall \cite{MoraisGuerlebeck2012} that for spherical harmonics, there is a formula analogous to Appell differentiation of monomials, \begin{align} \label{eq:V[0]} \frac{\partial}{\partial x_0} \harorigsol{n+1,m}{\pm}[0](x) = (n+m+1) \harorigsol{n,m}{\pm}[0](x). \end{align} However, $\hargarabsol{n,m}{\pm}[\mu]$ is not so simply related to $\harorigsol{n,m}{\pm}[\mu]$ for $\mu\not=0$, as was explained in \cite{Morais4}. We examine such relations in the next section. \section{Conversions among orthogonal spheroidal harmonics and spherical harmonics} \subsection{Garabedian harmonics expressed by classical harmonics} As mentioned in the Introduction, it is of interest to express the orthogonal basis of harmonic functions for one spheroid $\Omega_\mu$ in terms of those for another spheroid. It is natural to use the unit ball $\Omega_0$ as a point of reference, which will be the case in the first results. We begin the calculation of the coefficients for the relationships among the various classes of harmonic functions by presenting various known formulas in a uniform manner. For $n\ge0$, consider the rational constants \begin{equation} \label{eq:cUU} \cUU{n,m,k} = \frac{ (1/2)_{n-k}\, (n+m-2k+1)_{2k}}{ (-4)^k (1/2)_{n}\, k! } \end{equation} for $0\le m\le n$, $0\le2k\le n$, and let $\cUU{n,m,k}=0$ otherwise. In the present notation, the main result of \cite{BBS} may be expressed as follows (i.e.\ the factor $\cscale{m,n} = (n-m)!/(2n-1)!!$ has been incorporated into \eqref{eq:cUU}). \begin{prop}[\cite{BBS}] \label{prop:BBS} Let $n\ge0$ and $0\le m\le n$. Then \begin{equation*} \harorig{n,m}[\mu] = \sum_{0\le2k\le n-m} \cUU{n,m,k} \mu^{2k}\, \harorig{n-2k,m}[0]. 
\end{equation*} \end{prop} An important characteristic of this relation is that the same coefficients $\cUU{n,m,k}$ work for the ``$+$'' and ``$-$'' cases (cosines and sines) and, strikingly, for all values of $\mu$. By \eqref{eq:prolatepreharmonics}, an equivalent form of expressing Proposition \ref{prop:BBS} is \begin{equation} \label{eq:UqfromUq} \harorigsol{n,m}{\pm}[\mu] = \sum_{0\le2k\le n-m} \cUU{n,m,k} \mu^{2k}\, \harorigsol{n-2k,m}{\pm}[0]. \end{equation} Since $\partial/\partial x_0$ in \eqref{eq:prolateharmonics} is a linear operator, \eqref{eq:UqfromUq} gives automatically the corresponding result for the Garabedian harmonics, \begin{align} \label{eq:VV} \hargarabsol{n,m}{\pm}[\mu] &= \sum_{0\le2k\le n-m+1} \cVV{n,m,k} \mu^{2k} \,\hargarabsol{n-2k,m}{\pm}[0], \end{align} where $\cVV{n,m,k}=\cUU{n+1,m,k}$. This in turn gives via \eqref {eq:V[0]} the following expression in terms of the spherical harmonics: \begin{coro} \label{coro:VfromU} Let $n\ge0$ and $0\le m\le n$. Then \begin{align*} \hargarab{n,m}[\mu] &= \sum_{0\le2k\le n-m+1} \cUV{n,m,k} \mu^{2k}\, \harorig{n-2k,m}[0] , \end{align*} where \[\cUV{n,m,k}=(n+m-2k+1)\cVV{n,m,k}. \] \end{coro} The coefficients \begin{align*} \cVUmu{n,m,k} &= \frac{ (n+m+1)! \,(1/2)_{n-2k+1}}{4^k(n+m-2k)!(1/2)_{n+1}} \end{align*} give a similar expression for the Garabedian basic harmonics $\hargarabsol{n,m}{\pm}[\mu]$ in terms of the standard harmonics $\harorigsol{n,m}{\pm}[\mu]$ for the same spheroid, rather than in terms of $\harorig{n,m}[0]$: \begin{theo} [\cite{Morais4}] \label{th:VfromU} Let $n\ge0$ and $0\le m\le n$. Then \begin{equation*} \DS \hargarab{n,m}[\mu] = \sum_{0\le 2k\le n-m} \cVUmu{n,m,k} \mu^{2k}\, \harorig{n-2k,m}[\mu]. 
\end{equation*} \end{theo} In \cite{BBS} the inverse relation of \eqref{eq:UqfromUq} was also derived, expressing $\harorigsol{n,m}{\pm}[0]$ in terms of $\harorigsol{n,m}{\pm}[\mu]$, via \begin{equation} \label{eq:UbacktoU} \harorig{n,m}[0] = \sum_{0\le 2k\le n-m} \cUU{n,m,k}^0 \mu^{2k} \, \harorig{n-2k,m}[\mu], \end{equation} where the coefficients can be written as \begin{equation} \label{eq:UbacktoUcoef} \cUU{n,m,k}^0 = \frac{ 4^{n-2k}(2n-4k+1)(n-k)!(m+n)!(1/2)_{n-2k}} { k!(2n-2k+1)!(n+m-2k)!}, \end{equation} again independent of $\mu$. In consequence, applying the operator $\partial/\partial x_0$ and using \eqref{eq:V[0]}, we have the following result. \begin{prop}\label{prop:UfromV} Let $n\geq0$ and $0\leq m\leq n$. Then \[ \harorig{n,m}[0] = \sum_{0\leq2k\leq n-m}\cUV{n,m,k}^0\mu^{2k} \,\hargarab{n-2k,m}[\mu], \] where \begin{equation*} \cUV{n,m,k}^0 = \dfrac{\cUU{n+1,m,k}^0}{n+m+1}. \end{equation*} \end{prop} The inverse relation for Theorem \ref{th:VfromU} is a much simpler formula, given as follows: \begin{coro} [\cite{Morais4}] For $n\geq0$ and $0\leq m \leq n$, \[ \harorig{n,m}[\mu] = \frac{1}{n+m+1} \hargarab{n,m}[\mu] - \frac{n+m}{4n^2-1} \mu^2 \,\hargarab{n-2,m}[\mu]. \] \end{coro} This uses the convention $\hargarab{n-2,m}[\mu]=0$ when $m>n-2$; i.e. \begin{align*} \harorig{n,n-1}[\mu] &= \frac{1}{2n} \hargarab{n,n-1}[\mu],\\ \harorig{n,n}[\mu] &= \frac{1}{2n+1} \hargarab{n,n}[\mu]. \end{align*} \subsection{Conversion among Garabedian harmonics} The preceding subsection does not include the inverse relation of \eqref{eq:VV} of the form \begin{equation} \label{eq:VVinv} \hargarab{n,m}[0] = \sum_{0\leq2k\leq n-m} \cVV{n,m,k}^0 \mu^{2k}\hargarab{n-2k,m}[\mu]. \end{equation} Instead of deriving it directly, we verify first the following remarkable conversion formula, which relates the spheroidal harmonics associated with $\Omega_{\mu}$ to those associated with any other $\Omega_{\mut}$.
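The expansions collected so far can be spot-checked numerically in the axially symmetric case $m=0$. The following sketch (Python; the prolate coordinate inversion is an assumption of the sketch, and the Garabedian harmonics are obtained by a central finite difference in $x_0$ rather than by any formula of the paper) verifies \eqref{eq:UqfromUq} and Theorem \ref{th:VfromU} at a sample point:

```python
import math

def legendre_P(n, x):
    # P_n(x) by the three-term recurrence; valid for any real argument
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def poch(a, n):
    # rising Pochhammer symbol (a)_n
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def U(n, mu, x):
    # solid spheroidal harmonic, m = 0 (assumed prolate coordinate inversion)
    x0, x1, x2 = x
    rho2 = x1 * x1 + x2 * x2
    rp = math.sqrt(rho2 + (x0 + mu) ** 2)
    rm = math.sqrt(rho2 + (x0 - mu) ** 2)
    return (math.factorial(n) / (2 ** n * poch(0.5, n)) * mu ** n
            * legendre_P(n, (rp - rm) / (2 * mu))
            * legendre_P(n, (rp + rm) / (2 * mu)))

def V(n, mu, x, h=1e-5):
    # Garabedian harmonic: d/dx0 of U_{n+1}, by a central difference
    return (U(n + 1, mu, (x[0] + h, x[1], x[2]))
            - U(n + 1, mu, (x[0] - h, x[1], x[2]))) / (2 * h)

def c_UU(n, m, k):     # the rational constants of eq. (cUU)
    return (poch(0.5, n - k) * poch(n + m - 2 * k + 1, 2 * k)
            / ((-4.0) ** k * poch(0.5, n) * math.factorial(k)))

def c_VU(n, m, k):     # the constants of Theorem th:VfromU
    return (math.factorial(n + m + 1) * poch(0.5, n - 2 * k + 1)
            / (4.0 ** k * math.factorial(n + m - 2 * k) * poch(0.5, n + 1)))

x = (0.3, 0.4, 0.5)
r = math.sqrt(sum(c * c for c in x))
mu = 0.5

# Proposition (BBS), m = 0: spheroidal harmonic as a finite sum of spherical ones
n = 4
bbs = sum(c_UU(n, 0, k) * mu ** (2 * k) * r ** (n - 2 * k)
          * legendre_P(n - 2 * k, x[0] / r) for k in range(n // 2 + 1))
err1 = abs(U(n, mu, x) - bbs)

# Theorem th:VfromU, m = 0: Garabedian harmonic in terms of U on the same spheroid
n = 3
vfu = sum(c_VU(n, 0, k) * mu ** (2 * k) * U(n - 2 * k, mu, x)
          for k in range(n // 2 + 1))
err2 = abs(V(n, mu, x) - vfu)
print(err1, err2)  # both negligibly small
```

Note that $\mu=0.5$ is not small here: the identities are exact polynomial identities in $x$ for every admissible $\mu$.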
Write \begin{equation*} \cmumut{n,m,k}= \dfrac{(n+m+1)!(1/2)_{n-2k+2}}{4^{k}k!(n+m-2k+1)! (1/2)_{n-k+2}} \end{equation*} when $0 \leq 2k \leq n-m+2$, otherwise $\cmumut{n,m,k}=0$. \begin{theo}\label{th:cVVmu} Let $n\geq0$, $0\leq m\leq n$, and let $\mu,\mut \in [0,1) \cup i\R^{+}$ such that $\mu\neq0$. The coefficients $\cVVmu{n,m,k}[\mut,\mu]$ in the relation \[ \hargarab{n,m}[\mut] = \sum_{0\leq2k\leq n-m} \cVVmu{n,m,k}[\mut,\mu]\,\hargarab{n-2k,m}[\mu] \] are given by \[ \cVVmu{n,m,k}[\mut,\mu] = {}_2F_1(-k,-n+k-3/2;-n-1/2;(\mut/\mu)^2) \, \cmumut{n,m,k}\,\mu^{2k}, \] with $_2F_1$ denoting the classical Gaussian hypergeometric function. \end{theo} \begin{proof} We begin by replacing $\mu$ with $\mut$ in Corollary \ref{coro:VfromU} and substituting the terms on the right-hand side according to Proposition \ref{prop:UfromV}. By linear independence of the harmonic basis elements, it follows that \begin{equation}\label{eq:sumw} \cVVmu{n,m,k}[\mut,\mu] = \mu^{2k}\sum_{l=0}^{k}\cUV{n,m,l}\cUV{n-2l,m,k-l}^0 \left(\frac{\mut}{\mu}\right)^{2l} \end{equation} in which we note that all terms are real valued. Using reductions such as $(2n-4k+3)(1/2)_{n-2k+1}=2(1/2)_{n-2k+2}$ and recalling $0\le l\le k$, one easily sees that \begin{align*} \cUV{n,m,l} &= \frac{ (1/2)_{n-l+1}(n+m-2l+1)_{2l+1}} {(-4)^l\,l!(1/2)_{n+1}}, \\ \cUV{n-2l,m,k-l}^0 &= \frac{2\cdot4^{n-2k+1}(n+m-2l)!(n-k-l+1)!(1/2)_{n-2k+2} } { (k-l)! (2n-2k-2l+3)! (n+m-2k+1)! }. \end{align*} Therefore the product can be expressed as \[ \cUV{n,m,l} \cUV{n-2l,m,k-l}^0 = \cmumut{n,m,k} \chypgeom{n,k,l} \] where \begin{align*} \chypgeom{n,k,l} &= \frac{ 2\cdot 4^{n-2k+1}(n+m+1)!(n-k-l+1)!(1/2)_{n-2k+2} } { (-4)^l\,l!(k-l)! (2n-2k-2l+3)! (n+m-2k+1)! } \\ &= \frac{ (-k)_l(-n+k-3/2)_l }{ l!(-n-1/2)_l} \end{align*} is the coefficient in the polynomial $ {}_2F_1(-k,-n+k-3/2;-n-1/2;(\mut/\mu)^2) = \sum_{l=0}^k \chypgeom{n,k,l} (\mut/\mu)^{2l}$.
\end{proof} \begin{coro}\label{coro:proplim} For each $n\geq0$, $0\leq m\leq n$, the limits \begin{equation*} \lim_{\mut\rightarrow0} \cVVmu{n,m,k}[\mut,\mu], \quad \lim_{\mu\rightarrow0} \cVVmu{n,m,k}[\mut,\mu] \end{equation*} exist and are given, respectively, by \begin{equation*} \cVVmu{n,m,k}[0,\mu] = (n+m+1)\cUV{n,m,k}^0\mu^{2k}, \quad \cVVmu{n,m,k}[\mut,0] = \dfrac{\cUV{n,m,k}}{n+m-2k+1}\mut^{2k}. \end{equation*} \end{coro} \begin{proof} We may write \eqref{eq:sumw} as \begin{align*} \cVVmu{n,m,k}[\mut,\mu] &= \sum_{l=1}^{k-1} \cUV{n,m,l} \cUV{n-2l,m,k-l}^0 \, \mu^{2(k-l)}\mut^{2l} \\ & \quad\ \ +\cUV{n,m,k}\cUV{n-2k,m,0}^0\mut^{2k} + \cUV{n,m,0}\cUV{n,m,k}^0\mu^{2k} \end{align*} and then simply take $\mu=0$ or $\mut=0$ to obtain the desired limit. \end{proof} Referring to \eqref{eq:VVinv}, we have \[ \cVV{n,m,k}^0= (n+m+1)\,\cUV{n,m,k}^0. \] \section{Application to orthogonal monogenic and contragenic functions} The standard bases for spheroidal harmonics have their counterparts for the spaces of orthogonal monogenic polynomials taking values in $\mathbb{R}^3$. Monogenic functions are defined by considering $\mathbb{R}^3$ as the real linear subspace of the quaternions $\mathbb{H} = \{\sum_{i=0}^3 x_ie_i\colon\ x_i \in \mathbb{R}\}$ (with $e_0=1$) for which the last coordinate $x_3$ vanishes. (Quaternionic multiplication is defined, as usual, so that $e_1^2 = e_2^2 = e_3^2 = -1$ and $e_1 e_2 = e_3 = - e_2 e_1$, $e_2 e_3 = e_1 = - e_3 e_2$, $e_3 e_1 = e_2 = - e_1 e_3$.) For background on quaternionic analysis in $\R^3$, see \cite{Delanghe2007,Joao2009, MoraisGuerlebeck2012,MoraisGuerlebeck22012,MoraisAG}. A function $f\colon\Omega_\mu\to\R^3$ is \textit{monogenic} when it is annihilated by the quaternionic differential operator $ \partial = \partial/\partial x_0+e_1\partial/\partial x_1+e_2\partial/\partial x_2$ acting from the left.
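The quaternionic calculus used below can be modelled in a few lines. The following sketch (Python, not part of the original development) represents quaternions as 4-tuples, checks the multiplication rules, and verifies by central differences (exact up to rounding on polynomials of degree at most two, where the truncation term vanishes) that the Fueter-type variable $x_1-x_0e_1$ is monogenic, that $x_1e_1-x_2e_2$ is annihilated by both $\partial$ and $\overline{\partial}=\partial/\partial x_0-e_1\partial/\partial x_1-e_2\partial/\partial x_2$, and that $\partial\overline{\partial}$ acts as the Laplacian, so that it annihilates the harmonic function $x_0^2-x_1^2$:

```python
def qmul(a, b):
    # Hamilton product for a0 + a1 e1 + a2 e2 + a3 e3:
    # e1 e2 = e3 = -e2 e1, e2 e3 = e1, e3 e1 = e2, e1^2 = e2^2 = e3^2 = -1
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0 * b0 - a1 * b1 - a2 * b2 - a3 * b3,
            a0 * b1 + a1 * b0 + a2 * b3 - a3 * b2,
            a0 * b2 - a1 * b3 + a2 * b0 + a3 * b1,
            a0 * b3 + a1 * b2 - a2 * b1 + a3 * b0)

E1, E2, E3 = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

def dirac(f, x, conj=False, h=1e-3):
    # d/dx0 +- e1 d/dx1 +- e2 d/dx2, applied from the left to f: R^3 -> H,
    # with partial derivatives taken as central differences
    x0, x1, x2 = x
    d = []
    for xp, xm in (((x0 + h, x1, x2), (x0 - h, x1, x2)),
                   ((x0, x1 + h, x2), (x0, x1 - h, x2)),
                   ((x0, x1, x2 + h), (x0, x1, x2 - h))):
        d.append(tuple((p - q) / (2 * h) for p, q in zip(f(xp), f(xm))))
    s = -1.0 if conj else 1.0
    t1 = qmul(tuple(s * c for c in E1), d[1])
    t2 = qmul(tuple(s * c for c in E2), d[2])
    return tuple(d[0][i] + t1[i] + t2[i] for i in range(4))

pt = (0.3, 0.4, 0.5)
fueter = lambda x: (x[1], -x[0], 0.0, 0.0)               # x1 - x0 e1
res1 = dirac(fueter, pt)                                  # ~ 0: monogenic

mconst = lambda x: (0.0, x[1], -x[2], 0.0)                # x1 e1 - x2 e2
res2 = dirac(mconst, pt)                                  # ~ 0
res3 = dirac(mconst, pt, conj=True)                       # ~ 0 as well

harm = lambda x: (x[0] ** 2 - x[1] ** 2, 0.0, 0.0, 0.0)   # harmonic scalar
res4 = dirac(lambda y: dirac(harm, y, conj=True), pt)     # ~ 0: d dbar = Laplacian
print(res1, res2, res3, res4)
```

The function $x_1e_1-x_2e_2$, being annihilated by both operators, is an example of the ``monogenic constants'' that reappear in the discussion of ambigenic functions below.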
The \textit{basic spheroidal monogenic polynomials} are constructed \cite{Morais4,Morais5} as \begin{equation} \label{def:monog} \monog{n,m}{\pm}[\mu] = \overline{\partial}(\harorigsol{n+1,m}{\pm}[\mu]), \end{equation} where $ \overline{\partial} = \partial/\partial x_0 - e_1\partial/\partial x_1- e_2\partial/\partial x_2$. This is analogous to the definition \eqref{eq:prolateharmonics} for harmonic polynomials. $\monog{n,m}{\pm}[\mu]$ is monogenic because $\partial\overline{\partial}$ is equal to the Laplacian operator. We continue with the convention that $m\ge1$ when the ``-'' sign appears in a superscript. \begin{theo} [\cite{Morais4,Morais5}] \label{th:sphmonogformula} For all $n\geq0$, the basic spheroidal monogenic polynomial \eqref{def:monog} is equal to \begin{align*} \monog{n,0}{+}[\mu] = \hargarabsol{n,0}{+}[\mu] - \frac{1}{n+2} \big( \hargarabsol{n,1}{+}[\mu] e_1 + \hargarabsol{n,1}{-}[\mu] e_2 \big) \end{align*} for $m=0$, and \begin{align*} \monog{n,m}{\pm}[\mu] &= \hargarabsol{n,m}{\pm}[\mu] + \Bigl[(n+m+1) \hargarabsol{n,m-1}{\pm}[\mu] - \frac{1}{n+m+2} \hargarabsol{n,m+1}{\pm}[\mu] \Bigr]\frac{e_1}{2} \nonumber\\ & \phantom{\quad \hargarabsol{n,m}{\pm}[\mu]\ } \mp \Bigl[(n+m+1) \hargarabsol{n,m-1}{\mp}[\mu] + \frac{1}{n+m+2} \hargarabsol{n,m+1}{\mp}[\mu] \Bigr] \frac{e_2}{2} \end{align*} for $1\leq m\leq n+1$. The polynomials $\monog{n,m}{\pm}[\mu]$ are orthogonal in $L^2(\Omega_\mu)$, i.e.\ in the sense of the scalar product defined by \[ \langle f,g\rangle_{[\mu]} = \int_{\Omega_\mu} {\rm Sc}(\overline{f}g)\,dV. \] \end{theo} \subsection{Bases for monogenics in distinct spheroids} Analogously to \eqref{eq:VV} and \eqref{eq:VVinv}, we now express $\monog{n,m}{\pm}[\mu]$ in terms of the spherical monogenics $\monog{n,m}{\pm}[0]$. 
\begin{theo}\label{th:monogenics} For $n\geq0$ and $0\leq m \leq n+1$, \begin{align*} \monog{n,m}{\pm}[\mu] =& \sum_{0\le 2k\le n-m+1} \cVV{n,m,k} \mu^{2k} \monog{n-2k,m}{\pm}[0],\\ \monog{n,m}{\pm}[0] =& \sum_{0\le 2k\le n-m+1} \cVV{n,m,k}^0 \mu^{2k} \monog{n-2k,m}{\pm}[\mu],\\ \monog{n,m}{\pm}[\mut] =& \sum_{0\leq2k\leq n-m+1} \cVVmu{n,m,k}[\mut,\mu]\, \monog{n-2k,m}{\pm}[\mu], \end{align*} where $\cVV{n,m,k}$, $\cVV{n,m,k}^0$, and $\cVVmu{n,m,k}[\mut,\mu]$ are as in the previous section. \end{theo} \begin{proof} Fix a value of $\mu$. Note that for given $n$, the collections $\{\DS\monog{k,m'}{\pm}[0]\colon\ k\le n,\ 0\le m'\le k+1\}$ and $\{\DS\monog{k,m'}{\pm}[\mu]\colon\ k\le n,\ 0\le m'\le k+1\}$ are bases for the same linear space, namely the monogenic $\mathbb{R}^3$-valued polynomials in the variables $(x_0,x_1,x_2)$ of degree $\le n$. Therefore there must exist real coefficients $a^{\pm}_{k,m'}$ such that $\monog{n,m}{+}[\mu]=\sum_{k}\sum_{m'} \big( a^+_{k,m'}\monog{k,m'}{+}[0] + a^-_{k,m'}\monog{k,m'}{-}[0] \big)$. By Theorem \ref{th:sphmonogformula}, the scalar part of this equation expresses the spheroidal harmonics $\hargarabsol{n,m}{\pm}[\mu]$ as a linear combination of the spherical harmonics $\hargarabsol{k,m'}{\pm}[0]$. By the uniqueness of the representation \eqref{eq:VV} we have that $a^{+}_{n-2k,m} = \cVV{n,m,k} \mu^{2k}$, while all other coefficients vanish. The second formula follows by the same reasoning, and then the relationship between $\monog{n,m}{\pm}[\mu]$ and $\monog{n,m}{\pm}[\mut]$ is a consequence of the fact that by Theorem \ref{th:cVVmu} the matrix $(\cVVmu{n,m,k}[\mut,\mu])_{n,k}$ is essentially the product of $(\cVV{n,m,k}\mut^{2k})_{n,k}$ and the inverse of $(\cVV{n,m,k}^0\mu^{2k})_{n,k}$. \end{proof} \subsubsection{Spheroidal ambigenic polynomials} \textit{Antimonogenic} functions (quaternionic conjugates of monogenics, i.e.\ annihilated by $\overline{\partial}$) are generally not studied independently, since their properties may be obtained by taking the conjugate of facts about monogenic functions.
For example, the basic antimonogenic polynomials satisfy essentially the same relation as given in Theorem \ref{th:monogenics}, \[ \antimonog{n,m}{\pm}[\mu] = \sum_{0\leq2k\leq n-m+1} \cVVmu{n,m,k}[\mu,\mut]\, \antimonog{n-2k,m}{\pm}[\mut]. \] However, the subspace of the $\R^3$-valued harmonic functions generated by the monogenic and antimonogenic functions together, that is, the \textit{ambigenic} functions \cite{Alvarez}, is of interest. An ambigenic function is not represented uniquely as a sum of a monogenic and an antimonogenic function because one may add and subtract a \textit{monogenic constant}, that is, a function which is simultaneously monogenic and antimonogenic. A collection of ambigenic polynomials denoted $\{\ambibasic{n,m}{\pm,\pm}[\mu]\}$ was constructed in \cite{GMP} and shown to be a basis of $2n(n+3)+3$ elements for the ambigenic polynomials of degree no greater than $n$, mutually orthogonal in $L^2(\Omega_\mu)$. For our purposes we will only need the particular ambigenic functions \begin{equation} \label{eq:defambi} \ambig{n,m}{\pm}[\mu] = 2\Vec \monog{n,m}{\pm}[\mu] = \monog{n,m}{\pm}[\mu] - \antimonog{n,m}{\pm}[\mu], \end{equation} where $q=\Sc q + \Vec q$ denotes the decomposition of a quaternionic quantity into its scalar and vector parts. It is simple to verify that for fixed $\mu$, the $\ambig{n,m}{\pm}[\mu]$ are linearly independent. \subsection{Relations among contragenic functions for distinct spheroids} The notion of contragenic harmonic functions was introduced in \cite{Alvarez}, arising from the previously unobserved fact that in contrast to $\C$-valued or $\H$-valued functions, there exist $\R^3$-valued harmonic functions which are not ambigenic. Thus a function is called \textit{contragenic} for a given domain $\Omega$ when it is orthogonal in $L^2(\Omega)$ to all monogenic and antimonogenic functions in $\Omega$.
In contrast to monogenicity and antimonogenicity, this is not a local property and therefore cannot be characterized in general by direct application of any differential operator. It is of interest to have a basis for the contragenic functions, in order to express an arbitrary harmonic function in a calculable way as a sum of an ambigenic function and a contragenic function. In the following, we will write \begin{align*} \N_*^{(n)}[\mu] =\ & \{\parbox[t]{.6\textwidth}{$\R^3$-valued harmonic polynomials of degree $\le n$ in $x_0,x_1,x_2$ which are orthogonal in $L_2(\Omega_\mu)$ to all ambigenic functions in $\Omega_\mu\}$,} \end{align*} for $n\ge 1$ (nonzero constant harmonic functions are never contragenic, so we will have no use for $\N_*^{(0)}[\mu]=\{0\}$), and we have the successive orthogonal complements \[ \N^{(n)}[\mu] = \N_*^{(n)}[\mu] \ominus \N_*^{(n-1)}[\mu], \] which are composed of polynomials of degree precisely $n$. Thus $\N_*^{(n)}[\mu] =\bigoplus_{k=1}^n\N^{(k)}[\mu]$ and there is a Hilbert space orthogonal decomposition $\N_*[\mu] =\bigoplus_{k=1}^\infty\N^{(k)}[\mu]$ of the full collection of contragenic functions in $L^2(\Omega_\mu)$. The following explicit construction of a basis of the $\N^{(n)}[\mu]$, using as building blocks the scalar components of the monogenic functions, can be found in \cite{GMP}. Write \begin{align} \normrat{n,0}[\mu]=& \ 1, \nonumber \\ \normrat{n,m}[\mu] =& \left( \frac{1}{(n+m+1)_2} \frac{ \|\hargarabsol{n,m+1}{+}[\mu]\|_{[\mu]}} { \|\hargarabsol{n,m-1}{+}[\mu] \|_{[\mu]}}\right)^2 \label{eq:normrat} \end{align} for $1\le m\le n-1$, and $\normrat{n,m}[\mu]=0$ for $m\ge n$ since then $\hargarabsol{n,m+1}{\pm}[\mu]=0$ (this definition involves a slight modification of the notation in \cite{GMP}), where integration over the spheroid gives explicitly \begin{equation*}\label{eq:harqnorms} \|\hargarabsol{n,m}{\pm}[\mu]\|_{[\mu]}^2 = (1+\delta_{0,m})\normconst{n,m} \pi \mu^{2n+3} \int_{1}^{\frac{1}{\mu}}P_{n}^{m}(t)P_{n+2}^{m}(t)\,dt.
\end{equation*} Here $\delta_{m,m'}$ is the Kronecker symbol and \[ \normconst{n,m} = \frac{ (n+m+1) (n+m+1)!(n-m+2)!} {2^{2n+1} (1/2)_{n+1}(1/2)_{n+2} } . \] \begin{defi} \label{def:Basic_contragenics} For all $n\ge1$, the \textit{basic contragenic polynomials} $\contra{n,m}{\pm}[\mu]$ associated to $\Omega_\mu$ are \begin{align*} \contra{n,0}{+}[\mu] =& -\ambig{n,0}{+}[\mu] e_3 \end{align*} for $m=0$, and \begin{align*} \contra{n,m}{\pm}[\mu] = \frac{1}{2}\big( \mp(\normrat{n,m}[\mu]+1)\ambig{n,m}{\pm}[\mu] + (\normrat{n,m}[\mu]-1)\ambig{n,m}{\mp}[\mu] e_3 \big) \end{align*} for $1\leq m\leq n-1$, where $\ambig{n,m}{\pm}[\mu]$ are defined by \eqref{eq:defambi}. \end{defi} In \cite{GMP} it was shown that $\{\contra{n,m}{\pm}[\mu]\colon\ 0\le m\le n-1\}$ is an orthogonal basis for $\N^{(n)}[\mu]$, and that the $\R^3$-valued harmonic polynomials of degree $\le n$ in $\Omega_\mu$ decompose as orthogonal direct sums of the ambigenic and contragenic polynomials of degree $\le n$. With the further notation \begin{align*} \angPsi{+,m}{\pm} &= \Phi_m^{\pm}(\phi)e_1 \pm \Phi_m^{\mp}(\phi)e_2, \\ \angPsi{-,m}{\pm} &= \Phi_m^{\pm}(\phi)e_1 \mp \Phi_m^{\mp}(\phi)e_2, \end{align*} which satisfy the obvious relations $\angPsi{+,m}{\pm}e_3=\pm\angPsi{+,m}{\mp}$, $\angPsi{-,m}{\pm}e_3=\mp\angPsi{-,m}{\mp}$, $e_1\hargarabsol{n,m}{\pm}[\mu]+e_2\hargarabsol{n,m}{\mp}[\mu]= \hargarab{n,m}[\mu]\angPsi{\pm,m}{\pm}$, $e_1\hargarabsol{n,m}{\pm}[\mu]-e_2\hargarabsol{n,m}{\mp}[\mu]= \hargarab{n,m}[\mu]\angPsi{\mp,m}{\pm}$ (where the $\hargarab{n,m}[\mu]$ are given by \eqref{eq:Vhat}), the definitions give us almost immediately that \begin{align} \ambig{n,0}{+}[\mu] &= \frac{-2}{n+2} \hargarab{n,1}[\mu] \angPsi{+,1}{+},\nonumber\\ \ambig{n,m}{\pm}[\mu] &= (n+m+1)\hargarab{n,m-1}[\mu] \angPsi{-,m-1}{\pm} \nonumber\\ &\ \ -\dfrac{1}{n+m+2}\hargarab{n,m+1}[\mu] \angPsi{+,m+1}{\pm}, \label{eq:ambiformula} \end{align} \begin{align} \contra{n,0}{+}[\mu] &= \frac{2}{n+2}
\hargarab{n,1}[\mu]\angPsi{+,1}{-},\nonumber\\ \contra{n,m}{\pm}[\mu] &= (n+m+1)\normrat{n,m}[\mu] \hargarab{n,m-1}[\mu] \angPsi{-,m-1}{\mp} \nonumber \\ &\ \ + \dfrac{1}{n+m+2}\hargarab{n,m+1}[\mu] \angPsi{+,m+1}{\mp}, \label{eq:contraformula} \end{align} where $1 \le m \leq n-1$. Adding and subtracting instances of \eqref{eq:ambiformula} and \eqref{eq:contraformula} gives by cancellation decompositions of the harmonic polynomials $\hargarab{n,m}\angPsi{+,m}{\pm}$ and $\hargarab{n,m}\angPsi{-,m}{\pm}$ as the sum of a contragenic and an ambigenic: \begin{lema}\label{lem:VZA} Let $n\geq1$ and $1\leq m \leq n+1$. Then \begin{align*} \hargarab{n,m-1}[\mu]\angPsi{-,m-1}{\pm} = \dfrac{1}{(n+m+1)(\normrat{n,m}[\mu]+1)}\big(&\contra{n,m}{\mp}[\mu] + \ambig{n,m}{\pm}[\mu] \big), \end{align*} and \begin{align*} \hargarab{n,m+1}[\mu]\angPsi{+,m+1}{\pm} = \dfrac{n+m+2}{ \normrat{n,m}[\mu]+1}\big(&\contra{n,m}{\mp}[\mu] - \normrat{n,m}[\mu]\ambig{n,m}{\pm}[\mu] \big) . \end{align*} \end{lema} The definition of contragenic function does not imply that an $L^2$-function which belongs to the space $\N_*^{(n)}[\mut]$ should also be in $\N_*^{(n)}[\mu]$ when $\mut\neq\mu$, because the notion of orthogonality is different for different spheroids. In other words, we may not expect a formula like ``$\contra{n,m}{\pm}[\mut]=\sum z_{n,m,k}[\mut,\mu]\contra{n-2k,m}{\pm}[\mu]$.'' The following result will enable us to give many examples for which $\contra{n,m}{\pm}[\mut]\not\in\N_*^{(n)}[\mu]$ for $m\geq1$. However, it also shows that the intersection of all of the $\N_*^{(n)}[\mu]$ is nontrivial, giving what may be called {\it universal contragenic functions} in the context of spheroids. 
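The conversion coefficients $\cVVmu{n,m,k}[\mut,\mu]$ of Theorem \ref{th:cVVmu}, which drive the construction that follows, can themselves be spot-checked numerically in the case $m=0$. The sketch below (Python; the prolate coordinate inversion and the finite-difference $x_0$-derivative are assumptions of the sketch, and the terminating $_2F_1$ is evaluated as a finite sum) verifies the conversion between the spheroids with $\mu=0.6$ and $\mut=0.35$ at a sample point:

```python
import math

def legendre_P(n, x):
    # P_n(x) by the three-term recurrence; valid for any real argument
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def poch(a, n):
    # rising Pochhammer symbol (a)_n
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def U(n, mu, x):
    # m = 0 solid spheroidal harmonic (assumed prolate coordinate inversion)
    x0, x1, x2 = x
    rho2 = x1 * x1 + x2 * x2
    rp = math.sqrt(rho2 + (x0 + mu) ** 2)
    rm = math.sqrt(rho2 + (x0 - mu) ** 2)
    return (math.factorial(n) / (2 ** n * poch(0.5, n)) * mu ** n
            * legendre_P(n, (rp - rm) / (2 * mu))
            * legendre_P(n, (rp + rm) / (2 * mu)))

def V(n, mu, x, h=1e-5):
    # Garabedian harmonic: d/dx0 of U_{n+1}, by a central difference
    return (U(n + 1, mu, (x[0] + h, x[1], x[2]))
            - U(n + 1, mu, (x[0] - h, x[1], x[2]))) / (2 * h)

def c_conv(n, m, k, mut, mu):
    # coefficient of Theorem th:cVVmu; the terminating 2F1 is a sum over l
    cm = (math.factorial(n + m + 1) * poch(0.5, n - 2 * k + 2)
          / (4.0 ** k * math.factorial(k) * math.factorial(n + m - 2 * k + 1)
             * poch(0.5, n - k + 2)))
    z = (mut / mu) ** 2
    f21 = sum(poch(-k, l) * poch(-n + k - 1.5, l)
              / (math.factorial(l) * poch(-n - 0.5, l)) * z ** l
              for l in range(k + 1))
    return f21 * cm * mu ** (2 * k)

x, mu, mut, n = (0.3, 0.4, 0.5), 0.6, 0.35, 4
lhs = V(n, mut, x)
rhs = sum(c_conv(n, 0, k, mut, mu) * V(n - 2 * k, mu, x)
          for k in range(n // 2 + 1))
print(abs(lhs - rhs))  # negligibly small
```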
We will use the coefficients \begin{align} \cZZmu{n,0,k}{C}[\mut,\mu] =& \ \dfrac{n-2k+2}{n+2}\cVVmu{n,1,k}[\mut,\mu], \nonumber\\ \cZZmu{n,m,k}{C}[\mut,\mu] =&\ \left\{\begin{array}{ll} \dfrac{\normrat{n,m}[\mut]+1}{\normrat{n-2k,m}[\mu]+1} \cVVmu{n,m,k}[\mut,\mu],\ \quad & 0\le2k\le n-m-1,\\[2ex] \dfrac{\normrat{n,m}[\mut]}{\normrat{n-2k,m}[\mu]+1} \cVVmu{n,m,k}[\mut,\mu], & n-m\le2k\le n-m+1; \end{array}\right. \nonumber \\ \cZZmu{n,m,k}{A}[\mut,\mu] =&\ \left\{\begin{array}{ll} \dfrac{\normrat{n,m}[\mut]-\normrat{n,m}[\mu]}{\normrat{n-2k,m}[\mu]+1} \cVVmu{n,m,k}[\mut,\mu], & 0\le2k\le n-m-1,\\[2ex] \dfrac{\normrat{n,m}[\mut] }{\normrat{n-2k,m}[\mu]+1} \cVVmu{n,m,k}[\mut,\mu], & n-m\le2k\le n-m+1; \end{array}\right. \label{eq:CZZmu} \end{align} ($1 \le m \le n-1$) to express the decomposition of contragenics for one spheroid in terms of contragenics and ambigenics of any other. \begin{prop} \label{prop:contragenicrelations} Let $n\geq1$. Then \begin{align*} \contra{n,0}{+}[\mut] &= \sum_{0\leq 2k\leq n-1} \cZZmu{n,0,k}{C}[\mut,\mu]\contra{n-2k,0}{+}[\mu]; \end{align*} and for $1\leq m\leq n-1$, \begin{equation*} \contra{n,m}{\pm}[\mut]=\sum_{0\leq 2k\leq n-m+1} \big( \cZZmu{n,m,k}{C}[\mut,\mu]\contra{n-2k,m}{\pm}[\mu] + \cZZmu{n,m,k}{A}[\mut,\mu] \ambig{n-2k,m}{\pm}[\mu] \big). \end{equation*} \end{prop} \begin{proof} Apply Theorem \ref{th:cVVmu} to the first formula of \eqref{eq:contraformula} with $\mut$ in place of $\mu$ to obtain that \begin{align*} \contra{n,0}{+}[\mut] = \frac{2}{n+2} \sum_{0\leq 2k\leq n-1} \cVVmu{n,1,k}[\mut,\mu]\hargarab{n-2k,1}[\mu] \angPsi{+,1}{-} , \end{align*} which after another application of \eqref{eq:contraformula} reduces to the first statement.
In the same way, for $m\ge1$, \begin{align} \contra{n,m}{\pm}[\mut] =& \ (n+m+1)\normrat{n,m}[\mut] \sum_{0\leq 2k\leq n-m+1} \cVVmu{n,m-1,k}[\mut,\mu]\hargarab{n-2k,m-1}[\mu] \angPsi{-,m-1}{\pm} \nonumber\\ & \ + \dfrac{1}{n+m+2} \sum_{0\leq 2k\leq n-m-1} \cVVmu{n,m+1,k}[\mut,\mu]\hargarab{n-2k,m+1}[\mu] \angPsi{+,m+1}{\pm}. \label{eq:Zsum} \end{align} We observe from the definitions leading to Proposition \ref{prop:UfromV} that \begin{align*} \cUV{n,m-1,l}\,\cUV{n-2l,m-1,k-l}^0 = \dfrac{n+m-2k+1}{n+m+1}\cUV{n,m,l}\,\cUV{n-2l,m,k-l}^0, \end{align*} so \eqref{eq:sumw} tells us that \begin{align*} \dfrac{n+m+1}{n+m-2k+1}\cVVmu{n,m-1,k}[\mut,\mu] &= \cVVmu{n,m,k}[\mut,\mu] \\ &= \dfrac{n+m-2k+2}{n+m+2}\cVVmu{n,m+1,k}[\mut,\mu]. \end{align*} From this and Lemma \ref{lem:VZA} we have that \begin{align*} (n+m+1) \cVVmu{n,m-1,k}[\mut,\mu] \hargarab{n-2k,m-1}[\mu] & \angPsi{-,m-1}{\pm} \\ = \frac{1}{\normrat{n-2k,m}[\mu]+1} \cVVmu{n,m,k}[\mut,\mu] (\contra{n-2k,m}{\mp}&[\mu]+\ambig{n-2k,m}{\pm}[\mu] ), \end{align*} and \begin{align*} \frac{1}{n+m+2}\cVVmu{n,m+1,k}[\mut,\mu] \hargarab{n-2k,m+1}[\mu] & \angPsi{+,m+1}{\pm} \\ = \frac{1}{\normrat{n-2k,m}[\mu]+1} \cVVmu{n,m,k}[\mut,\mu] (\contra{n-2k,m}{\mp}& [\mu]- \normrat{n-2k,m}[\mu]\ambig{n-2k,m}{\pm}[\mu] ). \end{align*} Inserting these two relations into the respective sums of \eqref{eq:Zsum} gives the desired result. \end{proof} Proposition \ref{prop:contragenicrelations} provides us with some information about the intersection of the spaces of contragenic functions up to degree $n$. \begin{theo}\label{th:intersection} Let $n\geq1$. The following statements hold: \begin{enumerate} \item[(i)] $\contra{n,0}{+}[\mu]\in\N_*^{(n)}[0]$ for all $\mu$; \item[(ii)] $\contra{n,m}{\pm}[\mu] \notin \N_*^{(n)}[0]$ when $\mu\neq0$ and $1\le m\le n-1$. \end{enumerate} \end{theo} \begin{proof} The first statement is an immediate consequence of the first formula of Proposition \ref{prop:contragenicrelations}.
Now consider a basic element $\contra{n,m}{\pm}[\mu]$ of $\N_*^{(n)}[\mu]$, with $\mu\not=0$ and $1\leq m\leq n-1$. A particular instance of the second formula of Proposition \ref{prop:contragenicrelations} is \[ \contra{n,m}{\pm}[\mu]=\sum_{0\leq 2k\leq n-m+1} \big( \cZZmu{n,m,k}{C}[\mu,0] \contra{n-2k,m}{\pm}[0] + \cZZmu{n,m,k}{A}[\mu,0]\ambig{n-2k,m}{\pm}[0] \big). \] Suppose that $\contra{n,m}{\pm}[\mu]\in\N_*^{(n)}[0]$. Then since the right hand side is orthogonal to all $\Omega_0$-ambigenics, \[ \sum_{0\leq 2k\leq n-m+1} \cZZmu{n,m,k}{A}[\mu,0]\ambig{n-2k,m}{\pm}[0]=0, \] and so by the linear independence, $\cZZmu{n,m,k}{A}[\mu,0]=0$ for all $k$. The case in \eqref{eq:CZZmu} where $2k$ is $n-m$ or $n-m+1$ tells us that $\normrat{n,m}[\mu]=0$, which is manifestly false by \eqref{eq:normrat}. Consequently, $\contra{n,m}{\pm}[\mu]\not\in\N_*^{(n)}[0]$ as claimed. \end{proof} Note that Theorem \ref{th:intersection} does not assert that $\contra{n,0}{+}[\mu]$ lies in the top-level slice $\N^{(n)}[0]$ of $\N_*^{(n)}[0]$. \begin{coro} Let $n\geq1$. Then \[ \dim \bigcap_{\mu\in[0,1)\cup i\R^{+}}\N_*^{(n)}[\mu] \ge n. \] \end{coro} \begin{proof} The result is an immediate consequence of the fact that Theorem \ref{th:intersection} is applicable to arbitrary $\mu$, so the intersection contains a fixed $n$-dimensional subspace of $\N_*^{(n)}[0]$. \end{proof} It also follows from Theorem \ref{th:intersection} that the common intersection $\N_0=\bigcap\N_*[\mu]$ of the full spaces of contragenic functions on spheroids is infinite dimensional, containing all of the contragenic polynomials $\contra{n,0}{+}[\mu]$, $n\ge1$. It seems likely that these contragenic polynomials have special characteristics because of their simpler structure, cf.\ \eqref{eq:contraformula}. This phenomenon is not yet fully understood. Further questions relating to the exact relations among the spaces $\N_*^{(n)}[\mu]$ still remain open.
If the method of the proof of Theorem \ref{th:intersection} is applied to linear combinations of the $\contra{n,m}{\pm}[\mu]$ instead of just to these generators individually, transcendental equations related to \eqref{eq:normrat} appear. It is not yet known how the angles between the orthogonal complements of the mode-$0$ subspace $\N_0^{(n)}[0]$ in $\N_*^{(n)}[\mu]$, or of their union $\N_0[0]$ in $\N_*[\mu]$, vary with $\mu$.
https://arxiv.org/abs/1905.01568
Relations among spheroidal and spherical harmonics
A contragenic function in a domain $\Omega\subseteq\mathbf{R}^3$ is a reduced-quaternion-valued (i.e.\ the last coordinate function is zero) harmonic function, which is orthogonal in $L^2(\Omega)$ to all monogenic functions and their conjugates. The notion of contragenicity depends on the domain and thus is not a local property, in contrast to harmonicity and monogenicity. For spheroidal domains of arbitrary eccentricity, we relate standard orthogonal bases of harmonic and contragenic functions for one domain to another via computational formulas. This permits us to show that there exist nontrivial contragenic functions common to the spheroids of all eccentricities.
https://arxiv.org/abs/2006.02599
Hamilton Cycles in the Semi-random Graph Process
The semi-random graph process is a single player game in which the player is initially presented an empty graph on $n$ vertices. In each round, a vertex $u$ is presented to the player independently and uniformly at random. The player then adaptively selects a vertex $v$, and adds the edge $uv$ to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. We focus on the problem of constructing a Hamilton cycle in as few rounds as possible. In particular, we present a novel strategy for the player which achieves a Hamiltonian cycle in $(2+4e^{-2}+0.07+o(1)) \, n < 2.61135 \, n$ rounds, assuming that a specific non-convex optimization problem has a negative solution (a premise we numerically support). Assuming that this technical condition holds, this improves upon the previously best known upper bound of $3 \, n$ rounds. We also show that the previously best lower bound of $(\ln 2 + \ln (1+\ln 2) + o(1)) \, n$ is not tight.
\section{Introduction} In this paper, we consider the \textbf{semi-random process} introduced recently in~\cite{process1} that can be viewed as a ``one player game''. The process starts from $G_0$, the empty graph on the vertex set $[n]=\{1,\ldots,n\}$. In each step $t$, a vertex $u_t$ is chosen uniformly at random from $[n]$. Then, the player (who is aware of graph $G_t$ and vertex $u_t$) needs to select a vertex $v_t$ and add an edge $u_tv_t$ to $G_t$ to form $G_{t+1}$. The goal of the player is to build a (multi)graph satisfying a given monotone increasing property ${\mathcal A}$ as quickly as possible. A \textbf{strategy} in this context is a sequence of functions $f_1,f_2,\ldots$, where for each $t \in {\mathbb N}$, $f_t(u_1,v_1,\ldots, u_{t-1},v_{t-1},u_t)$ is a distribution over $[n]$, given the history of the process up to and including step $t-1$, and vertex $u_t$. Then $v_t$ is chosen according to this distribution. If $f_t$ is an atomic distribution, then $v_t$ is determined by $u_1,v_1,\ldots,u_{t-1},v_{t-1},u_t$. As $f_t$ is determined by $u_t$, and the history of the process up to step $t-1$, it means that the player needs to select her strategy in advance, before the game actually starts. Given ${\bf f}=(f_1,f_2,\ldots)$ and real number $0<q<1$, let $\tau_q({\bf f},n)$ be the minimum $t$ for which ${\mathbb P}(G_t\in {\cal A})\ge q$, where $(G_i)_{i=0}^t$ is obtained following strategy ${\bf f}$. Define \[ \tau_{q}({\cal A},n) = \min_{{\bf f}} \tau_q({\bf f},n), \] where the minimum is over all strategies. Let \[ \tau_{{\cal A}}=\lim_{q\to 1-}\limsup_{n\to\infty} \frac{\tau_{q}({\cal A},n)}{n}; \] note that the limit above exists, since for every $n$ the function $q\to \frac{\tau_{q}({\cal A},n)}{n}$ is nondecreasing, as $\cal A$ is monotone increasing. \medskip In this paper, we concentrate on property ${\mathcal A = {\tt HAM}}$ that a graph has a Hamilton cycle.
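To illustrate the framework, the following simulation sketch (Python; a toy example, not taken from the paper) plays the much simpler game of reaching minimum degree at least $1$, with a greedy player who always selects a vertex of degree $0$ as $v_t$. A differential-equation heuristic for the fraction $z$ of degree-$0$ vertices, $z'=-(1+z)$ with $z(0)=1$, suggests that roughly $n\ln 2\approx 0.6931\,n$ rounds suffice, which the simulation reproduces:

```python
import math
import random

def rounds_to_min_degree_one(n, seed=1):
    # Semi-random process with a greedy player: whenever a degree-0 vertex
    # remains, choose one of them as v_t; stop when the minimum degree is 1.
    rng = random.Random(seed)
    zero = set(range(n))        # vertices still of degree 0
    t = 0
    while zero:
        t += 1
        u = rng.randrange(n)    # u_t is presented uniformly at random
        zero.discard(u)
        if zero:                # greedy choice of v_t among degree-0 vertices
            zero.pop()
    return t

n = 20000
ratio = rounds_to_min_degree_one(n) / n
print(ratio, math.log(2))  # ratio is close to ln 2 ~ 0.6931
```

The concentration is tight for $n$ of this size; the random fluctuations of the ratio are of order $n^{-1/2}$.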
It was observed in~\cite{process1} that $$ 1.21973 \le \ln 2+\ln(1+\ln2) \le {\tau_{{\tt HAM}}} \le 3. $$ We improve both upper and lower bounds for ${\tau_{{\tt HAM}}}$. For the upper bound, we need to assume some ``technical condition'' ${\mathcal P}$ that claims that some function is negative on its domain---the function is defined in Subsection~\ref{sec:function}. Unfortunately, we could not prove that ${\mathcal P}$ holds but, in Subsection~\ref{sec:numerical}, we provide strong numerical evidence for it. For the lower bound, we do not optimize the argument (as it gives a small improvement anyway) but aim for a relatively easy proof that shows that the currently existing bound is not tight. \medskip Here are our main results. Theorem~\ref{thm:upper_bound} is proved in Section~\ref{sec:upper_bound} whereas the proof of Theorem~\ref{thm:lower_bound} can be found in Section~\ref{sec:lower_bound}. \begin{thm}\label{thm:upper_bound} Suppose that property ${\mathcal P}$ holds. Then, $$ {\tau_{{\tt HAM}}} \le 2+4e^{-2}+0.07 < 2.61135. $$ \end{thm} \begin{thm}\label{thm:lower_bound} There exists a universal constant $\epsilon > 10^{-8}$ such that $$ {\tau_{{\tt HAM}}} \ge \ln 2+\ln(1+\ln2) + \epsilon. $$ \end{thm} All asymptotics in this paper refer to $n \to \infty$. We say that an event holds \textbf{asymptotically almost surely} (\textbf{a.a.s.}) if the probability that it holds tends to 1 as $n \to \infty$. In the proofs we use the standard Landau notation. Given two sequences of real numbers $a_n$ and $b_n$, we write $a_n=O(b_n)$ if there exists a constant $C>0$ such that $|a_n|\le C|b_n|$ for all $n$. We write $a_n=o(b_n)$ if $b_n>0$ for all sufficiently large $n$ and $\lim_{n\to\infty} a_n/b_n=0$. 
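For reference, the numerical values of the two constants appearing in the bounds can be confirmed directly (Python):

```python
import math

lower = math.log(2) + math.log(1 + math.log(2))   # lower bound constant
upper = 2 + 4 * math.exp(-2) + 0.07               # upper bound constant
print(lower, upper)  # about 1.219736 and 2.611341
```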
\section{Upper Bound}\label{sec:upper_bound} In order to obtain an upper bound for ${\tau_{{\tt HAM}}}$, one needs to propose a strategy for the player to build a graph during the semi-random process, and show that after a certain number of steps the resulting graph is a.a.s.\ Hamiltonian. In order to warm up, let us recall an observation made in~\cite{process1} that a.a.s.\ a Hamilton cycle can be built within $3n$ rounds, so that ${\tau_{{\tt HAM}}} \le 3$. To see it, the following simple strategy can be applied: let $v_t= (t-1) \pmod n +1 $ for all $1\le t\le 3n$. Note that this is a non-adaptive strategy, that is, function $f_t$ does not depend on the history of the process nor vertex $u_t$ chosen at time $t$. More importantly, it is easy to see that the resulting graph has the same distribution as the well-known random graph $G_{\text{3-out}}$, which is Hamiltonian a.a.s.~\cite{3-out}. In general, the \textbf{$m$-out process} is defined for any natural number $m$: each vertex $v \in [n]$ independently chooses $m$ random out-neighbours from $[n]$ to create the random digraph $D_{m\text{-out}}$. We then obtain $G_{m\text{-out}}$ by ignoring orientations. Note that $G_{m\text{-out}}$ is a multi-graph (it may have loops or multiple edges) with minimum degree $m$ and precisely $mn$ edges. In the model, we can either allow these multiple edges and loops, replace multiple edges with single edges and remove loops, or condition on them not occurring. (Since the probability that there are no multiple edges is bounded away from zero, any property that holds a.a.s.\ in the model that allows multiple edges also holds a.a.s.\ when we condition on no multiple edges.) For our application, when the strategy creates a multiple edge or a loop in the underlying undirected graph, we simply ``discard'' that edge. That is, we will not use that edge for the construction of a Hamilton cycle. This section is structured as follows. In Subsection~\ref{sec:upper_strategy}, we define a strategy for the player that creates a random graph $G^*$.
The main goal is to prove that $G^*$, together with some additional $o(n)$ semi-random edges, is Hamiltonian a.a.s. Since the argument is quite involved, an overview of the proof is provided in Subsection~\ref{sec:upper_overview}. In Subsection~\ref{sec:upper_notation}, we introduce definitions and notation that will be used throughout the entire paper. Some useful properties of the graphs involved in the argument are extracted and proved in Subsection~\ref{sec:upper_properties}. In order to achieve our goal, in particular, we need to prove that a.a.s.\ $G^*$ has a 2-matching with $o(n)$ components (the definition is provided in Subsection~\ref{sec:upper_overview}). Subsection~\ref{sec:upper_2-matching_preparation} prepares us for this task. As already mentioned earlier, this part requires property ${\mathcal P}$, which is defined at the end of Subsection~\ref{sec:function}. The proof that a.a.s.\ $G^*$ has a 2-matching with few components is finished in Subsection~\ref{sec:upper_2matching}. It is then enough to guide the semi-random process such that after an additional $o(n)$ rounds the graph contains a Hamilton cycle. This last task does not depend on the argument used to show that $G^*$ has the desired 2-matching and so, in fact, we deal with it earlier, in Subsection~\ref{sec:upper_final}. \subsection{Our Strategy}\label{sec:upper_strategy} In this subsection, we define a strategy for the player that creates a random (multi)graph $G^*$. It will be convenient to work with the directed graph $D_t$ underlying $G_t$. For each edge $u_tv_t$ that is added to $G_t$ at time $t$, we put a directed edge from $v_t$ to $u_t$ in $D_t$. As mentioned before, for the construction of a Hamilton cycle we will only use edges from a subgraph of $G_t$. For any multigraph $G$, let $\hat G$ denote the simple graph obtained from $G$ by deleting all loops and replacing all multiple edges by single edges.
Thus, the sequence of multigraphs $(G_t)$ immediately yields the corresponding sequence of simple graphs $(\hat G_t)$. Consider the following strategy that will be defined in four phases. During the first phase, for $1\le t\le 2n$, let $v_t= (t-1) \pmod n +1$. It is clear that $G_{2n}$ has the same distribution as $G_{\text{2-out}}$. Let $V_0$ and $V_1$ be the sets of vertices in $D_{2n}$ of in-degree 0 and, respectively, of in-degree 1. During the second phase, two out-edges are added from every vertex in $V_0$ and one out-edge is added from every vertex in $V_1$. During the third phase, we add $0.07n$ directed edges uniformly at random, that is, in each step, $v_t$ is chosen uniformly at random from $[n]\setminus \{u_t\}$. We call $v$ a {\em deficit vertex} if after phase 3 its degree is less than 4. Then, in the fourth and last phase, we repeatedly add a semi-random edge, coloured golden for convenience, coming out of a deficit vertex until its degree in the current underlying undirected {\em simple} graph (that is, in $\hat G_t$) becomes at least 4. Let $\tau_i$ denote the last step of phase $i$ (in particular, $\tau_1=2n$). Note that a golden semi-random edge is added out of $u$ only if a loop or a multiple edge incident with $u$ was created in the process $(G_t)$ during the first two phases. It is easy to show, by a standard first moment calculation, that in expectation $G_{\tau_3}$ has $O(1)$ loops or parallel edges, and $o(1)$ multiple edges of other types. If $v$ is incident with a loop in $G_{\tau_3}$ then $v$ may send out up to two semi-random edges in phase 4. If $v$ is incident with a parallel edge in $G_{\tau_3}$ then $v$ may send out at most one semi-random edge in the final phase. Hence, a.a.s.\ $G_{\tau_4}$ and $\hat G_{\tau_4}$ have the following property. \begin{tabbing} ({\tt E}): \hspace{0.2cm} \= There are at most $\ln\ln n$ double edges or loops in $G_{\tau_4}$ and they are all vertex disjoint.
\\ \>There are at most $\ln\ln n$ golden edges, inducing vertex-disjoint paths of length 1 or 2, \\ \> and every two deficit vertices are at distance at least $\ln n/5$ from each other in $\hat G_{\tau_4}$. \end{tabbing} Thus a.a.s.\ the total number of semi-random edges added during the last two phases is $(0.07+o(1))n$. Note that the addition of the golden edges guarantees that the minimum degree of $\hat G_{\tau_4}$ is at least 4, which will be used in the proof later. Finally, let $G^*=G_{\tau_4}$, the multigraph obtained after the last step of phase 4. As we will only use edges in $\hat G_{\tau_4}\subseteq G^*$, we will mainly focus on the process $(\hat G_t)$. \subsection{Overview of the Proof}\label{sec:upper_overview} Let us present an overview of the proof of Theorem~\ref{thm:upper_bound}. First, we will investigate how long it takes to construct graph $G^*$. \begin{lemma}\label{lem:number_of_edges} A.a.s.\ the following holds: $$ |E(G^*)| = (2+4e^{-2}+0.07+o(1))n. $$ \end{lemma} In order to state the next lemma, we need one definition. A \textbf{2-matching} in a graph $G$ is a simple subgraph $H$ of $G$ with maximum degree at most 2, that is, a collection of vertex-disjoint paths and cycles. Moreover, we assume that $V(H)=V(G)$, so some paths in $H$ may be single (isolated) vertices. \medskip Now, we are ready to state the lemma. This is the place where property ${\mathcal P}$ is needed. \begin{lemma}\label{lem:2-matching} Suppose that property ${\mathcal P}$ holds. Then, a.a.s.\ $G^*$ has a 2-matching with $o(n)$ components. \end{lemma} The final ingredient no longer requires property ${\mathcal P}$. \begin{lemma} \lab{lem:Ham} Suppose $G^*$ has a 2-matching with $o(n)$ components. Then, there exists an adaptive strategy such that a.a.s.\ the semi-random process builds a Hamiltonian graph within an additional $o(n)$ steps. \end{lemma} Theorem~\ref{thm:upper_bound} follows immediately from the above three lemmas.
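As a quick sanity check of Lemma~\ref{lem:number_of_edges}, one can simulate the first three phases of the strategy (phase 4 contributes only $o(n)$ edges) and compare the edge count with $(2+4e^{-2}+0.07)n \approx 2.61134\,n$. The Python sketch below is illustrative only; the function name and parameters are ours, not part of the formal argument.

```python
import random

def edges_after_three_phases(n, seed=0):
    """Simulate phases 1-3 of the strategy and return |E(G_{tau_3})| / n.

    Phase 1: every vertex sends two uniformly random out-edges (G_{2-out}).
    Phase 2: each vertex of in-degree 0 sends two more out-edges,
             each vertex of in-degree 1 sends one more.
    Phase 3: 0.07n further uniformly random edges.
    Phase 4 (omitted here) adds only o(n) golden edges.
    """
    rng = random.Random(seed)
    indeg = [0] * n
    for _ in range(2 * n):                                   # phase 1
        indeg[rng.randrange(n)] += 1
    phase2 = sum(2 if d == 0 else 1 if d == 1 else 0 for d in indeg)
    total = 2 * n + phase2 + int(0.07 * n)                   # phases 1 + 2 + 3
    return total / n

# For large n this concentrates around 2 + 4e^{-2} + 0.07, matching the lemma.
```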
Our strategy for constructing a Hamilton cycle in Lemma~\ref{lem:Ham} is the same as that in~\cite{3-out}, where a Hamilton cycle is found in $G_{3\text{-out}}$. We start with a 2-matching $F$ of $G^*$ which has $o(n)$ components. Then, we take an arbitrary component $C$ of $F$ and let $P$ be a path that spans all vertices of $C$. By applying Pos\'{a} rotations, we use either edges in $G^*$ or $o(n)$ additional edges added to $G^*$ to repeatedly absorb vertices in other components of $F$ into the long path we carefully construct, until finally completing the path into a Hamilton cycle. Having fewer edges available in $G^*$ than in $G_{3\text{-out}}$ requires some new treatment in the proof of Lemma~\ref{lem:Ham}. In order to prove Lemma~\ref{lem:2-matching}, as is done in~\cite{3-out}, we will apply the Tutte-Berge formula for the size of a maximum 2-matching of $\hat G_{\tau_4}\subseteq G^*$. However, as we have significantly fewer edges in $\hat G_{\tau_4}$ than in $G_{3\text{-out}}$, it becomes much more challenging to verify the Tutte-Berge conditions. Rough bounds that worked in~\cite{3-out} fail to work in our setting and, in order to achieve a tighter bound, we end up with an optimization problem involving a high dimensional objective function. That results in the technical property ${\mathcal P}$ that we only support numerically. \subsection{Definitions and Notation}\label{sec:upper_notation} In this subsection, we introduce basic definitions and notation that will be used throughout the paper. Let us start with the graph-theoretic ones. For a given subset of vertices $S \subseteq V(G)$, let $G[S]$ be the \textbf{graph induced by set $S$}, that is, $V(G[S])=S$ and $$ E(G[S]) = \{ uw \in E(G) : u,w \in S\} \subseteq E(G). $$ Let $e(S)$ denote the number of edges induced by set $S$, that is, $$ e(S) = |E(G[S])| = | \{ xy \in E(G) : x,y \in S \} |. $$ Moreover, let $$ N(S) = \{v\in V(G) \setminus S : \exists u\in S \text{ such that } uv \in E(G) \}.
$$ Finally, we say that $S$ is an \textbf{independent set} if $S$ induces no edge, that is, $e(S) = 0$. Given subsets of vertices $U,W \subseteq V(G)$, let $e(U,W)$ denote the number of edges with exactly one end in $U$ and the other end in $W$, that is, $$ e(U,W) = | \{ uw \in E(G) : u \in U, w \in W \} |. $$ For a given vertex $v \in V(G)$, let $\deg(v)$ be the \textbf{degree of $v$}, that is, the number of neighbours of $v$ in $G$. Let $\delta(G) = \min\{ \deg(v) : v \in V(G)\}$ denote the \textbf{minimum degree} of a graph $G$. \medskip For a directed graph $D$ and a given vertex $v \in V(D)$, let $\deg^-(v)$ and $\deg^+(v)$ be the in- and out-degree of $v$, that is, the number of directed edges going to $v$ and, respectively, going from $v$ in $D$. \medskip For sequences of real numbers $a_n$ and $b_n$, we say $a_n=\mathrm{poly}(n)$ if there exists a constant $C>0$ such that $n^{-C}<a_n<n^C$ for every $n$. We say $a_n=O(b_n)$ if there exists a constant $C>0$ such that $|a_n|<C|b_n|$ for all $n$. We say $a_n=o(b_n)$ if eventually $b_n>0$ and $\lim_{n\to\infty} a_n/b_n=0$. \medskip Finally, let us introduce the \textbf{binomial random graph} ${\mathcal G}(n,p)$. More precisely, ${\mathcal G}(n,p)$ is a distribution over the class of graphs with vertex set $[n]$ in which every pair $\{i,j\} \in \binom{[n]}{2}$ appears independently as an edge in $G$ with probability~$p$. Note that $p=p(n)$ may (and in our application it does) tend to zero as $n$ tends to infinity. \subsection{Some Technical Properties and Proof of Lemma~\ref{lem:number_of_edges}}\label{sec:upper_properties} Let us start with the following simple observation. \begin{observation}\lab{obs} Our process can be coupled such that the following properties hold. \begin{enumerate} \item [(a)] $G_{2n}$ has the same distribution as $G_{\text{2-out}}$ and thus $\hat G_{\tau_1}\subseteq G_{\text{2-out}}$.
\item [(b)] $\hat G_{\tau_2}$ is a subgraph of $G_{4\text{-out}}$; in particular, \begin{equation}\label{eq:prob_tildeG} {\mathbb P} \left( S \subseteq E(\hat G_{\tau_2}) \right) \le \left( \frac {8}{n} \right)^{|S|}, \text{ for any } S \subseteq {[n] \choose 2}. \end{equation} \item [(c)] $\delta(\hat G_{\tau_4}) \ge 4$ and \begin{eqnarray} {\mathbb P} \Big( S \subseteq E(\hat G_{\tau_3})\Big) &\le& \left( \frac {8.15}{n} \right)^{|S|}, \text{ for any } S \subseteq {[n] \choose 2} \label{eq:prob_Gstar}\\ {\mathbb P} \Big( S \subseteq E(\hat G_{\tau_4})\Big) &\le& \left( \frac {13}{n} \right)^{|S|}, \text{ for any } S \subseteq {[n] \choose 2}\label{eq:prob_Gfinal}. \end{eqnarray} \end{enumerate} \end{observation} \begin{proof} \emph{Part (a)}: The property follows immediately from the construction of our process. \medskip \emph{Part (b)}: Recall that $G_{\tau_2}$ is constructed from $G_{2\text{-out}}$ by adding two out-edges from every vertex in $V_0$ and one out-edge from every vertex in $V_1$. If, instead, two out-edges were added from \emph{every} vertex of $G_{2\text{-out}}$, we would get a graph with the same distribution as $G_{4\text{-out}}$. Hence, one may easily couple our process such that $G_{\tau_2}$ is a subgraph of $G_{4\text{-out}}$. In order to see that~(\ref{eq:prob_tildeG}) holds, note first that $$ {\mathbb P} \Big( e \in E(\hat G_{\tau_2}) \Big)={\mathbb P} \Big( e \in E(G_{\tau_2}) \Big) \le {\mathbb P} \Big( e \in E(G_{4\text{-out}}) \Big) = 1 - \left( 1 - \frac {1}{n} \right)^8 \le \frac {8}{n}. $$ The desired inequality holds after observing that conditioning on $S' \subseteq E(\hat G_{\tau_2})$ does not increase the probability that an edge $e \notin S'$ is also in $E(\hat G_{\tau_2})$. \medskip \emph{Part (c)}: The fact that $\delta(\hat G_{\tau_4}) \ge 4$ follows immediately by construction of $\hat G_{\tau_4}$.
For~\eqn{eq:prob_Gstar}, we note that part (b) implies that our process can be coupled such that $\hat G_{\tau_2}$ is a subgraph of $G_{4\text{-out}}$. As a result, $\hat G_{\tau_3}$ can be viewed as a subgraph of $G_{4\text{-out}} \cup {\mathcal G}(n,0.07n)$, where ${\mathcal G}(n,0.07n)$ denotes the (multi)graph formed by $0.07n$ edges chosen independently and uniformly at random in phase 3. Thus, by the union bound we get that \[ {\mathbb P}(e\in E(\hat G_{\tau_3})) \le {\mathbb P}(e\in E(\hat G_{\tau_2})) +{\mathbb P}(e\in E({\mathcal G}(n,0.07n)))\le \frac{8}{n}+\frac{0.14+o(1)}{n}<\frac{8.15}{n}. \] The assertion follows by noting that the presence of other edges does not increase the probability that $e\in E(\hat G_{\tau_3})$. In order to see that~(\ref{eq:prob_Gfinal}) holds, we apply the same argument after noting that every vertex sends out at most two golden semi-random edges. As a result, $\hat G_{\tau_4}$ can be viewed as a subgraph of $G_{6\text{-out}} \cup {\mathcal G}(n,0.07n)$. \end{proof} The next lemma collects some important properties of the graphs involved in the process that will be used in various places of this paper. In particular, part~(a) immediately implies Lemma~\ref{lem:number_of_edges}. \begin{lemma} \lab{lem:aas} A.a.s.\ the following properties hold. \begin{enumerate} \item[(a)] $D_{\tau_1}$ has asymptotically $e^{-2}n$ vertices of in-degree 0 and $2e^{-2}n$ vertices of in-degree 1. In other words, $|V_0|=(e^{-2}+o(1))n$ and $|V_1|=(2e^{-2}+o(1))n$. \item[(b)] For every $\epsilon>0$ there exists $\delta=\delta(\epsilon)>0$ such that for all $S \subseteq [n]$ with $|S|\le \delta n$, $S$ induces at most $(1+\epsilon)|S|$ edges in $\hat G_{\tau_4}$. \item[(c)] All $S\subseteq [n]$ with $|S|\le 0.005n$ induce at most $1.9|S|$ edges in $\hat G_{\tau_4}$. \end{enumerate} \end{lemma} \begin{proof} \emph{Part (a)}: Let $v \in [n]$ be any vertex of $D_{2n}=D_{\tau_1}$.
Clearly, \begin{eqnarray*} {\mathbb P} ( \deg^-(v) = 0 ) &=& \left( 1 - \frac {1}{n} \right)^{2n} = e^{-2} + o(1) \\ {\mathbb P} ( \deg^-(v) = 1 ) &=& (2n) \cdot \frac {1}{n} \cdot \left( 1 - \frac {1}{n} \right)^{2n-1} = 2e^{-2} + o(1). \end{eqnarray*} It follows that ${\mathbb E} (|V_0|) = (e^{-2}+o(1)) n$ and ${\mathbb E} (|V_1|) = (2e^{-2}+o(1)) n$. It is straightforward to show the concentration for these random variables (for example, by using the second moment method; we omit details) and so part (a) holds. \medskip \emph{Part (b)}: Let us fix $\epsilon>0$ and $s = s(n) \in {\mathbb N}$. By~(\ref{eq:prob_Gfinal}), the expected number of sets $S \subseteq [n]$ with $|S|=s$ that induce at least $(1+\epsilon)s$ edges in $\hat G_{\tau_4}$ is at most \begin{eqnarray*} g(s) &:=& {n \choose s} {{s \choose 2} \choose (1+\epsilon)s} \left( \frac {13}{n} \right)^{(1+\epsilon)s} \le \left( \frac {en}{s} \right)^s \left( \frac {es^2/2}{(1+\epsilon)s} \right)^{(1+\epsilon)s} \left( \frac {13}{n} \right)^{(1+\epsilon)s} \\ &=& \left( \frac {e^{2+\epsilon} 6.5^{1+\epsilon}}{(1+\epsilon)^{1+\epsilon}} \left( \frac {s}{n} \right)^\epsilon \right)^s \le \left( 6.5e^2 \left( \frac {6.5es}{n} \right)^\epsilon \right)^s. \end{eqnarray*} Clearly, $$ g(s) \le \left( 6.5e^2 \left( 6.5e \delta \right)^\epsilon \right)^s \le (1/2)^s, $$ provided that $s \le \delta n$ and $\delta = \delta(\epsilon) > 0$ is sufficiently small (the optimal value of $\delta$ is $(13e^2)^{-1/\epsilon}/(6.5e)$). On the other hand, if (for example) $s \le \ln n$, then $g(s) \le n^{-\epsilon s/2} \le n^{-\epsilon/2}$. It follows that the expected number of sets $S \subseteq [n]$ with $|S| \le \delta n$ that induce at least $(1+\epsilon)|S|$ edges is at most $$ \sum_{s =1}^{\delta n} g(s) \le \sum_{s=1}^{\ln n} n^{-\epsilon/2} + \sum_{s=\ln n}^{\delta n} (1/2)^s \le (\ln n) n^{-\epsilon/2} + 2 (1/2)^{\ln n} = o(1). $$ Part (b) holds by Markov's inequality. 
\medskip \emph{Part (c)}: For a given $s=s(n) \in {\mathbb N}$, let $X_s$ be the number of sets $S \subseteq [n]$ with $|S|=s$ that induce at least $1.9s$ edges in $\hat G_{\tau_4}$, and let $Y_s$ be the number of sets $S \subseteq [n]$ with $|S|=s$ that induce at least $t(s)$ edges in $\hat G_{\tau_3}$, where \[ t(s)=\left\{ \begin{array}{ll} 1.2s & \mbox{if $s\le \ln n$}\\ 1.89s & \mbox{if $s>\ln n$}. \end{array} \right. \] As a.a.s.\ $\hat G_{\tau_4}\in {\tt E}$, it follows that a.a.s.\ $X_s\le Y_s$ for all $s$, since the number of golden edges induced by $S$ is at most $\min\{2s/3, \ln\ln n\} \le \min\{0.7s,\ln\ln n\}$, given $\hat G_{\tau_4}\in {\tt E}$, and $1.9s-\ln\ln n\ge 1.89s$ when $s>\ln n$. Let $g(s)={\mathbb E}(Y_s)$. By~(\ref{eq:prob_Gstar}), we get that \begin{eqnarray*} g(s) &\le& {n \choose s} {{s \choose 2} \choose t(s)} \left( \frac {8.15}{n} \right)^{t(s)} \le \left( \frac {en}{s} \right)^s \left( \frac {es^2/2}{t(s)} \right)^{t(s)} \left( \frac {8.15}{n} \right)^{t(s)}. \end{eqnarray*} If $s \le \ln n$, then $g(s) \le n^{-0.8}$. On the other hand, if $\ln n<s \le \delta n$ with $\delta = 0.005$, then $$ g(s) \le \left( e \left( \frac {8.15e}{3.78} \right)^{1.89} \delta^{0.89} \right)^s < 0.7^s. $$ It follows that the expected number of sets $S \subseteq [n]$ with $|S| \le \delta n$ that induce at least $1.9|S|$ edges in $\hat G_{\tau_4}$ is at most $$ \sum_{s=1}^{\delta n} g(s)\le \sum_{s=1}^{\ln n} n^{-0.8} + \sum_{s=\ln n}^{\delta n} 0.7^s = o(1). $$ Part (c) holds by Markov's inequality. \end{proof} \subsection{Proof of Lemma~\ref{lem:Ham}}\label{sec:upper_final} This whole subsection is devoted to the proof of Lemma~\ref{lem:Ham}. In order to achieve it, we will use a powerful proof technique introduced by Pos\'{a} in~\cite{Posa}. Suppose that $F$ is a 2-matching (that is, a collection of vertex-disjoint paths and cycles) of $\hat G_{\tau_4}\subseteq G^*$ with $o(n)$ components.
We will use Pos\'{a} rotations to extend a path in $\hat G_{\tau_4}$ to longer and longer paths, and eventually extend a Hamilton path to a Hamilton cycle, by adding $o(n)$ extra semi-random edges. During the process of extending the paths, we will use edges in $\hat G_{\tau_4}$ whenever possible. If no edges in $\hat G_{\tau_4}$ are of help, then we will use semi-random edges, strategically choosing $v_t$ to help us with the extension of the paths. We start from a path $P=u_1u_2\ldots u_h$ in $F$. If $F$ is a collection of cycles, then we arbitrarily take a cycle and let $P$ be the path obtained by deleting an arbitrary edge in that cycle. Given a path $P$ and an edge $u_h u_j$, $1<j<h-1$, we can create another path on the same vertex set, namely, $P'=u_1u_2\ldots u_j, u_h, u_{h-1}, \ldots, u_{j+1}$, with a new endpoint $u_{j+1}$. We call this operation a \textbf{Pos\'{a} rotation}. Let ${\cal S}$ be the set of paths in $\hat G_{\tau_4}$ on the same set of vertices as $P$ obtained by fixing $u_1$ and performing any sequence of Pos\'{a} rotations on $P$. Let ${\tt End}$ denote the set of end vertices, other than $u_1$, of paths in ${\cal S}$. \medskip Let us consider the following two cases separately: \medskip \noindent {\em Case 1: there is $x\in {\tt End}$ and $y\notin V(P)$ such that $xy\in E(\hat G_{\tau_4})$.} If $y$ is in a cycle $C$ in $F$, then we can extend $P$ to a longer path on $V(P)\cup V(C)$. On the other hand, if $y$ is in a path $P'$ in $F$, then without loss of generality we may assume that $P'=v_1v_2\ldots v_{\ell} \ldots v_{h'}$ with $v_{\ell}=y$, where $\ell> h'/2$. We can now extend $P$ to a longer path on vertex set $V(P)\cup \{v_1,\ldots, v_{\ell}\}$. After that operation, the number of vertex-disjoint paths and cycles remains the same or decreases by one. \medskip \noindent {\em Case 2: for every $x\in {\tt End}$, $N(x)\subseteq V(P)$.} Colour vertices in ${\tt End}$ blue or red as follows.
If $u_i\in {\tt End}$ and none of the neighbours of $u_i$ on $P$ (two of them, or just one if $i=h$) is in ${\tt End}$, then colour $u_i$ red; otherwise, colour it blue. Let us start with the following observation about red vertices. \begin{claim} Let $U$ denote the set of red vertices in ${\tt End}$. Then $U$ induces an independent set in $\hat G_{\tau_4}$. \end{claim} \begin{proof} For a contradiction, suppose that $x,y$ are both red vertices in ${\tt End}$ and $xy$ is an edge in $\hat G_{\tau_4}$. Without loss of generality, suppose that $y$ was added to ${\tt End}$ before $x$ and let $P'$ be the path obtained via Pos\'{a} rotations with $y$ being the other end. Let $x=u_i$. Since $x$ is red, neither $u_{i-1}$ nor $u_{i+1}$ is in ${\tt End}$. Thus, the two neighbours of $x$ on $P'$ must be $u_{i-1}$ and $u_{i+1}$. But then we can get another path on $V(P)$ via a Pos\'{a} rotation on $P'$ where one of $u_{i-1}$ and $u_{i+1}$ becomes an end vertex. This contradicts the fact that $x$ is red. It follows that $U$ must be an independent set in $\hat G_{\tau_4}$. \end{proof} By the usual argument of Pos\'{a} rotations, for every $u_i\in N({\tt End})$, we must have $$ \{u_{i-1},u_{i+1}\}\cap {\tt End}\neq \emptyset. $$ In particular, it implies that $|N({\tt End})| < 2|{\tt End}|$. However, using the above claim, we get a slightly stronger bound. Let $x_1$ and $x_2$ be the number of red and, respectively, blue vertices in ${\tt End}$. Since the set of red vertices in ${\tt End}$ induces an independent set in $\hat G_{\tau_4}$, it follows that \begin{equation} |N({\tt End})|\le 2x_1+x_2-1 = |{\tt End}|+x_1-1. \lab{x12} \end{equation} The next claim is our next task and the main ingredient of the proof of the lemma. \begin{claim} \lab{claim:linear} $|{\tt End}|=\Omega(n)$. \end{claim} \begin{proof} In order to simplify the notation, let $S={\tt End}$. Let $\epsilon_0>0$ be a sufficiently small constant that will be determined soon.
We will show that $|S| \ge \epsilon_0 n$. For a contradiction, suppose that $|S|<\epsilon_0 n$, and let $$ {\cal N}_i=\{x\notin S:\ e(\{x\}, S)=i\}, \qquad \qquad n_i = |{\cal N}_i|. $$ \begin{claim} For every $0<\epsilon\le 1$, $\sum_{i\ge 1} n_i \ge (2-\epsilon) |S|$, provided $\epsilon_0=\epsilon_0(\epsilon)$ is sufficiently small. \end{claim} \noindent Indeed, by Lemma~\ref{lem:aas}(b) applied with $\epsilon'=\epsilon/2$ to the set $S \cup \bigcup_{i \ge 2} {\cal N}_i$, we get that a.a.s.\ $$ e\left( S \cup \bigcup_{i \ge 2} {\cal N}_i \right) \le (1+\epsilon') \left|S \cup \bigcup_{i \ge 2} {\cal N}_i \right|, $$ provided $\epsilon_0$ is sufficiently small. It follows that \begin{equation} e(S) +\sum_{i\ge 2} i n_i \le (1+\epsilon')\left(|S|+\sum_{i\ge 2} n_i\right).\label{e(S)} \end{equation} On the other hand, by Observation~\ref{obs}(c), $\delta(\hat G_{\tau_4}) \ge 4$ and so $$ 2e(S) + \sum_{i\ge 1} n_i \ge 4|S|. $$ Substituting $ 2e(S)\le (2+2\epsilon')|S|+\sum_{i\ge 2}(2+2\epsilon'-2i)n_i$ from~\eqn{e(S)} into the above yields \[ (2-2\epsilon')|S|\le n_1+\sum_{i\ge 2}(2+2\epsilon'-2i+1) n_i. \] By the definition of $\epsilon'$ and as $\epsilon'\le 1/2$, we get \begin{eqnarray} (2-\epsilon)|S|&=&(2-2\epsilon')|S|\le n_1+\sum_{i\ge 2}(2+\epsilon-2i+1) n_i \label{SS}\\ &\le& n_1\le \sum_{i\ge 1} n_i. \lab{S} \end{eqnarray} This finishes the proof of the claim. \medskip We apply the above claim with $\epsilon=0.05$ so we may assume that \begin{equation} |N(S)|\ge (2-\epsilon)|S|\quad \mbox{if $|S|\le \epsilon_0 n$.}\lab{N(S)} \end{equation} By~\eqn{x12} and~\eqn{N(S)}, \[ (2-\epsilon)|S|\le |S|+x_1-1, \] and hence \[ (1-\epsilon)|S| \le x_1-1. \] Let $X_1$ denote the set of red vertices and let $X_2$ be the set of blue vertices in $S$. Let $X_1'\subseteq X_1$ be the set of red vertices with at least 2 blue neighbours. \begin{claim} $|X'_1|\le 1.3\epsilon|S|.$ \end{claim} \noindent Indeed, consider the subgraph of $\hat G_{\tau_4}$ induced by $Y=X'_1\cup X_2$.
By the definition of $X'_1$, $Y$ induces at least $2|X'_1|$ edges. Hence, by Lemma~\ref{lem:aas}(b) (applied with $\epsilon=0.1$), $2|X'_1|\le 1.1 (|X'_1|+|X_2|)$. As $x_2<\epsilon|S|$, we have $|X_1'|\le (1.1/0.9)\epsilon|S|< 1.3 \epsilon |S|$, which finishes the proof of the claim. \medskip Therefore, every vertex in $X_1\setminus X_1'$ has at least 3 neighbours in $\overline{S}$. Thus, $e(S, \overline{S})\ge 3|X_1\setminus X_1'|\ge 3((1-\epsilon)|S|-1.3\epsilon |S|) \ge 3(1-2.3\epsilon)|S|$. That is, \begin{equation} \sum_{i\ge 1} in_i \ge (3-6.9\epsilon)|S|. \lab{S-Sbar} \end{equation} By~\eqn{SS} and noting that $2i-1\ge i$ for every $i\ge 2$, \[ (2-\epsilon)|S|\le n_1 + (2+\epsilon)\sum_{i\ge 2} n_i -\sum_{i\ge 2} in_i. \] Plugging the lower bound for $\sum_{i\ge 1} in_i $ from~\eqn{S-Sbar} yields \[ (2+\epsilon)\sum_{i\ge 1}n_i\ge n_1+(2+\epsilon)\sum_{i\ge 2} n_i \ge (2-\epsilon)|S| +(3-6.9\epsilon)|S| = (5-7.9\epsilon) |S|. \] Thus, \[ |N(S)|=\sum_{i\ge 1}n_i \ge \frac{5-7.9\epsilon}{2+\epsilon} |S| >2.1 |S|, \] as $\epsilon=0.05$. This contradicts $|N(S)|\le |S|+x_1-1<2|S|$. It follows then that $|S|\ge \epsilon_0 n$. \end{proof} Now, it is straightforward to finish the proof of Lemma~\ref{lem:Ham}. \begin{proof}[Proof of Lemma~\ref{lem:Ham}] We extend $P$ whenever possible, and if it is not possible, then $|V(P)|\ge \epsilon_0 n$ by Claim~\ref{claim:linear}. The vertices outside of $P$ are in a collection $\cal F$ of $o(n)$ paths and cycles. Let $v_t$ be an arbitrary vertex outside of $P$ that is either an end vertex of a path, or any vertex in a cycle. If the semi-random process selects a vertex $u_t\in {\tt End}$ then, by performing Pos\'{a} rotations, we extend $P$ to a longer path by absorbing a path or a cycle in $\cal F$ that was not in $P$. The number of components in $\cal F$ goes down by 1. Otherwise, we simply ignore $u_t$ and $v_t$, and repeat until eventually $u_t\in {\tt End}$. Since $|{\tt End}|\ge \epsilon_0 n$, the probability that $u_t\in {\tt End}$ is at least $\epsilon_0$.
Hence, it takes $O(1)$ trials on average to absorb a path or a cycle. Since there are only $o(n)$ paths or cycles to be absorbed, it follows immediately from the Chernoff bound that a.a.s.\ adding $o(n)$ extra semi-random edges to $G^*$ is enough to make it Hamiltonian. \end{proof} \subsection{Preparation for the Proof of Lemma~\ref{lem:2-matching}}\label{sec:upper_2-matching_preparation} Our aim now is to prove that $G^*$ has a 2-matching with $o(n)$ components. In order to achieve it, we will apply a consequence of the Tutte-Berge matching formula~\cite[Theorem~30.7]{Shrijver} to $\hat G_{\tau_4}$, which is a simple graph and a subgraph of $G^*$. However, we need one more definition before we can state it. Given a simple graph $G$, let $\kappa(G)$ be the number of edges in a maximum 2-matching of $G$ (that is, $\kappa(G)$ is the \textbf{size} of a maximum 2-matching). The Tutte-Berge matching theorem implies the following. \begin{thm}\label{thm:Tutte-Berge} Let $G$ be a simple graph on the vertex set $[n]$. Then, \[ \kappa(G)=\min\left\{n+ |U| - |S| +\sum_{X} \floor{\frac{e(X,S)}{2}}\right\}, \] where the minimum is taken over all pairs of disjoint subsets $U$ and $S$ of $[n]$ such that $S$ is an independent set, and $X$ ranges over the components of $G-U-S$. \end{thm} Despite the fact that the above theorem provides the exact value of $\kappa(G)$, it is not so easy to apply it in the context of random graphs. Fortunately, if $G$ belongs to some family of graphs, then we get an easier property to check. We will first define the family, then prove a weaker but more workable statement, and finally show that a.a.s.\ $G^*$ belongs to the family. \medskip Let ${\tt C}_{\tt cyclic}$ be the family of graphs on the vertex set $[n]$ which satisfy the following property: there are at most $n/\ln n$ subsets $S \subseteq [n]$ with $|S| \le \ln n/10$ such that $S$ induces a connected subgraph with at least as many edges as vertices; that is, $G[S]$ is connected and $|E(G[S])| \ge |S|$.
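Theorem~\ref{thm:Tutte-Berge} is easy to confirm by exhaustive search on very small graphs. The Python sketch below (our own illustration; the helper names and the brute-force approach are ours, not from the literature) computes $\kappa(G)$ both directly and via the displayed minimum.

```python
from itertools import combinations

def kappa_direct(n, edges):
    # kappa(G): the size of a maximum 2-matching, i.e. the largest number of
    # edges whose union has maximum degree at most 2 (exhaustive search)
    for r in range(len(edges), 0, -1):
        for sub in combinations(edges, r):
            deg = [0] * n
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if max(deg) <= 2:
                return r
    return 0

def kappa_tutte_berge(n, edges):
    # evaluate the minimum from the theorem over all valid pairs (U, S)
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def components(verts):
        seen, out = set(), []
        for v in verts:
            if v in seen:
                continue
            stack, comp = [v], set()
            while stack:
                x = stack.pop()
                if x in comp:
                    continue
                comp.add(x)
                stack.extend(y for y in adj[x] if y in verts and y not in comp)
            seen |= comp
            out.append(comp)
        return out

    subsets = [set(c) for r in range(n + 1) for c in combinations(range(n), r)]
    best = None
    for S in subsets:
        if any(v in adj[u] for u in S for v in S):
            continue                       # S must be an independent set
        for U in subsets:
            if U & S:
                continue                   # U and S must be disjoint
            val = n + len(U) - len(S)
            for X in components(set(range(n)) - S - U):
                e_X_S = sum(1 for u, v in edges
                            if (u in X and v in S) or (v in X and u in S))
                val += e_X_S // 2
            best = val if best is None else min(best, val)
    return best
```

For instance, for the path on 4 vertices both functions return 3, and for the star $K_{1,3}$ both return 2 (the minimum is attained by taking $S$ to be the three leaves and $U=\emptyset$).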
\begin{cor}\label{cor:cyclic} Suppose $G\in {\tt C}_{\tt cyclic}$ and all 2-matchings of $G$ have more than $\gamma$ components for some $\gamma\ge 0$. Then, $G$ has vertex partition $[n]=S\cup T\cup R\cup U$ such that \begin{itemize} \item[(a)] $S$ is an independent set and $G[T]$ is a forest; \item[(b)] $|S|\ge \max\{|U|, \gamma-11n/\ln n\}$; \item[(c)] $e(S\cup T)+e(S\cup T, R)\le |T|+2|S|-2|U|-2\gamma + 33 n/\ln n$; \item[(d)] $e(R,T)=0$. \end{itemize} \end{cor} \begin{proof} The proof is almost identical to that in~\cite{3-out} so we only briefly sketch the argument here. Let $F$ be a maximum 2-matching of $G$. Since $G\in {\tt C}_{\tt cyclic}$, the number of cycles in $F$ is at most $n/\ln n+n/(\ln n/10)=11n/\ln n$. Let $c(F)$ and $e(F)$ denote the numbers of components and, respectively, edges in $F$. Then, $$ \gamma \le c(F)\le n-e(F)+11n/\ln n=n+11n/\ln n-\kappa(G). $$ Thus, $\kappa(G)\le n+11n/\ln n-\gamma$. Let $S$ and $U$ be a pair of disjoint subsets of $[n]$ that minimize \begin{equation} n+ |U| - |S| +\sum_{X\in {\mathcal C}} \floor{\frac{e(X,S)}{2}},\label{TB} \end{equation} where ${\mathcal C}$ is the set of components of $G-U-S$, and $S$ is an independent set. Let $T$ be the union of components of $G-U-S$ that are trees, and $R=[n]-U-S-T$. By Theorem~\ref{thm:Tutte-Berge} and our earlier observation, we get that $\kappa(G) = n + |U| - |S| +\sum_{X\in {\mathcal C}} \floor{\frac{e(X,S)}{2}} \le n+11n/\ln n-\gamma$, and so \begin{equation} |U| - |S| +\sum_{X\in {\mathcal C}} \floor{\frac{e(X,S)}{2}} \le 11n/\ln n-\gamma. \lab{cond} \end{equation} By our construction, $[n]=S\cup T\cup R\cup U$ is a partition of the vertex set, and properties (a) and (d) hold. It remains to show that properties (b) and (c) also hold. 
It follows immediately from inequality~\eqn{cond} that $$ |S| \ge |U| + \sum_{X\in {\mathcal C}} \floor{\frac{e(X,S)}{2}} + \gamma - 11n/\ln n \ge \gamma - 11n/\ln n, $$ since $|U| \ge 0$ and $\sum_{X\in {\mathcal C}} \floor{\frac{e(X,S)}{2}}\ge 0$. On the other hand, by Theorem~\ref{thm:Tutte-Berge} and the fact that $(S,U)$ is chosen such that it minimizes~\eqn{TB}, we have $n\ge \kappa(G) \ge n+|U|-|S|$, which implies $|S|\ge |U|$. This shows that property~(b) holds. For part~(c), let $p$ and $q$ denote the numbers of components in $G[T]$ and, respectively, $G[R]$. Since $G\in{\tt C}_{\tt cyclic}$, there are at most $n/\ln n$ components in $G[R]$ of order at most $\ln n/10$, and at most $10n/\ln n$ components in $G[R]$ of order greater than $\ln n/10$. It follows that $q\le 11n/\ln n$. Then, \[ \sum_{X\in {\mathcal C}} \floor{\frac{e(X,S)}{2}}\ge \frac{(e(S,T)-p)+(e(S,R)-q)}{2}\ge \frac{e(S,T)+e(S,R)-p-11n/\ln n}{2}. \] Hence, \[ 11n/\ln n-\gamma\ge |U| - |S| +\sum_{X\in {\mathcal C}} \floor{\frac{e(X,S)}{2}} \ge |U|-|S|+\frac{e(S,T)+e(S,R)-p-11n/\ln n}{2}. \] It follows that \[ e(S,T)+e(S,R) \le 33n/\ln n-2\gamma+2|S|-2|U|+p. \] Now condition (c) follows since \[ e(S\cup T)+e(S\cup T, R)=e(S,T) +e(T) +e(S,R) \le e(S,T) +e(S,R) + |T|-p, \] as $T$ induces a forest and $e(T,R)=0$. \end{proof} \medskip Let us now show that $\hat G_{\tau_4}$ belongs to the family ${\tt C}_{\tt cyclic}$ and so Corollary~\ref{cor:cyclic} can be applied. \begin{lemma}\label{lem:gstar_cyclic} A.a.s.\ $\hat G_{\tau_4}\in {\tt C}_{\tt cyclic}$. \end{lemma} \begin{proof} Let ${\cal Z}$ be the family of sets $S$ with $|S|\le \ln n/10$ where $S$ induces a connected subgraph of $\hat G_{\tau_4}$ with at least $|S|$ edges, and let $Z=|{\cal Z}|$. We will show that ${\mathbb E} [Z]=o(n/\ln n)$, which proves the lemma, as it implies that a.a.s.\ $Z \le n/\ln n$ by Markov's inequality.
For a given $S \subseteq [n]$ with $|S| \le \ln n /10$, let $X_S$ be the indicator random variable that $S$ induces a connected subgraph of $\hat G_{\tau_3}$ with at least $|S|$ edges. Let $X = \sum_{S: 3 \le |S|\le\ln n/10} X_S$. It follows that \begin{eqnarray*} {\mathbb E} [X] &\le& \sum_{s=3}^{\ln n / 10} {n \choose s} s^{s-2} {s \choose 2} \left( \frac {8.15}{n} \right)^s \le \sum_{s=3}^{\ln n / 10} ( 8.15e)^s = O \left( (8.15e)^{\ln n/10} \right) \\ &=& O(n^{0.36}) = o(n/\ln n). \end{eqnarray*} (Indeed, there are $n \choose s$ sets of cardinality $s$, $s^{s-2}$ spanning trees of $K_s$, and $s \choose 2$ choices for an additional edge. By~(\ref{eq:prob_Gstar}), the probability that the selected edges are present in $\hat G_{\tau_3}$ is at most $(8.15/n)^s$.) Note that $X$ counts those sets $S\in {\cal Z}$ that already satisfy the desired property in the subgraph $\hat G_{\tau_3}$. We may assume that $\hat G_{\tau_4}$ has property {\tt E}. Hence, it is sufficient to further bound the number of sets $S\in {\cal Z}$ that contain exactly one deficit vertex $v$ and induce at least one golden edge incident with $v$. Let $Y_S$ be the indicator variable that $S$ is such a set. Let $Y = \sum_{S: 3 \le |S|\le\ln n/10} Y_S$ and we immediately have $Z\le X+Y$. Hence, our next task is to upper bound ${\mathbb E} [Y]$. There are $|S|$ ways to choose vertex $v$ in $S$ to be the deficit vertex. Then $v$ is incident with either a loop or a multiple edge in $G_{\tau_2}$. We will only bound ${\mathbb E}[Y1_{A}]$ where $A$ denotes the event that the deficit vertex in $S$ is incident with a loop in $G_{\tau_2}$; the other case can be dealt with analogously. Let $s=|S|$. There are $s^{s-2}\binom{s}{2}$ ways to specify a set of $s$ edges that must be induced by $S$. Given a specification of such $s$ edges, there are at most $s$ ways to specify one of them to be golden. The probability for that specific edge to be golden is at most $2/n<8.15/n$ (as $v$ sends out at most two golden edges in total).
There could be another edge among the $s$ edges that is golden, and the conditional probability for that is at most $2/n<8.15/n$. Moreover, the probability that $v$ is incident with a loop is $O(1/n)$. It now follows that \[ {\mathbb E}[Y1_{A}] \le \sum_{s=3}^{\ln n/10}\binom{n}{s}s^2\cdot s^{s-2}\binom{s}{2}\left(\frac{8.15}{n}\right)^s \cdot O(1/n) = O\left( \frac {\ln^2 n}{n}\right) \cdot \sum_{s=3}^{\ln n / 10} (8.15e)^s=o(1). \] As we already mentioned, similar calculations show that ${\mathbb E} [Y1_{B}]=o(1)$, where $B$ is the event that the deficit vertex in $S$ is incident with a multiple edge in $G_{\tau_2}$. Combining all of the above, we have ${\mathbb E} [Z] \le {\mathbb E} [X] +{\mathbb E} [Y1_A]+{\mathbb E}[Y1_{B}]=o(n/\ln n)$. The lemma follows by Markov's inequality. \end{proof} \medskip Let us fix an arbitrarily small $\epsilon>0$. After combining Lemma~\ref{lem:gstar_cyclic} and Corollary~\ref{cor:cyclic}, it remains to show that a.a.s.\ there is no vertex partition $S\cup T\cup U\cup R$ of $\hat G_{\tau_4}$ satisfying properties (a)--(d) in Corollary~\ref{cor:cyclic} with some $\gamma \ge \epsilon n$. However, the distribution of $\hat G_{\tau_4}$ is complicated. As a result, we will work on $G_{\tau_4}$ instead and use Property {\tt E}, which implies that $\hat G_{\tau_4}$ misses at most $\ln \ln n$ edges of $G_{\tau_4}$. It will be convenient to colour edges of $G_{\tau_3}$ in one of four colours: blue, green, red, and yellow. Recall that $G_{\tau_3}$ is constructed during the first three phases, and $G_{\tau_4}$ is obtained by adding up to $\ln \ln n$ golden semi-edges to $G_{\tau_3}$. During the first phase, $G_{2n}$ and the corresponding directed graph $D_{2n}$ are created; $V_0$ and $V_1$ are the sets of vertices in $D_{2n}$ of in-degree 0 and of in-degree 1, respectively.
Let us colour an edge of $G_{2n}$ green if its counterpart in $D_{2n}$ is directed into one of the vertices in $V_0 \cup V_1$, which we also colour green. The remaining edges are coloured blue. During the second phase, the graph $G_{\tau_2}$ is created; let us colour edges added during this phase red. Finally, edges added during the third phase are coloured yellow. Let us consider any partition $[n]=S\cup T\cup U\cup R$. For any $i\in \{S,T,U, R\}$, let $\alpha_i$ be the fraction of vertices that belong to set $i$ (that is, $\alpha_i=|i|/n$) and let $\gamma_i$ be the fraction of vertices of $i$ that are green (that is, $\gamma_i = |G_i|/\alpha_i n$ where $G_i$ is the set of green vertices in set $i$). Moreover, let $\beta_i$ be the fraction of vertices in $G_i$ that received no incoming edge in $D_{2n}$ (that is, $\beta_i = |G_i \cap V_0|/|G_i|$). In order to simplify the notation, we define the following vectors: ${\bm \alpha}=(\alpha_i)_{i\in \{S,T,U,R\}}$, ${\bm \beta}=(\beta_i)_{i\in \{S,T,U,R\}}$, ${\bm \gamma}=(\gamma_i)_{i\in \{S,T,U,R\}}$. It follows immediately from the above definitions that the following properties hold: \begin{equation} \sum_{i\in \{S,T,U,R\}}\alpha_i=1, \quad 0\le \gamma_i\le 1, \quad 0\le \beta_i\le 1 \quad\mbox{for all $i\in \{S,T,U,R\}$}. \lab{alpha} \end{equation} Next, for $i,j\in \{S,T,U,R\}$, let $b_{ij} \cdot (2\alpha_i n)$, $g_{ij} \cdot (2\alpha_i n)$, and $r_{ij} \cdot (2\beta_i+(1-\beta_i))\gamma_i\alpha_i n$ denote the numbers of blue, green and, respectively, red edges from set $i$ to set $j$. Vectors ${\bm b}=(b_{ij})_{i,j\in \{S,T,U,R\}}$, ${\bm g}=(g_{ij})_{i,j\in \{S,T,U,R\}}$, and ${\bm r}=(r_{ij})_{i,j\in \{S,T,U,R\}}$ describe the distribution of edges of a given colour between parts. Let $y_1\cdot 0.07n$ denote the number of yellow edges that are either incident to a vertex in $U$, or are induced by $R$. Let $y_2\cdot 0.07n$ denote the number of yellow edges that are induced by $T$.
Note that there are $(1-y_1-y_2) \cdot 0.07n$ yellow edges between $S$ and $R \cup T$. Hence, the vector $(y_1, y_2, 1-y_1-y_2)$ describes the distribution of yellow edges. Let us fix ${\bm u}=({\bm \alpha}, {\bm \beta}, {\bm \gamma}, {\bm b}, {\bm g}, {\bm r}, y_1,y_2)$. Our goal is to upper bound the probability $P(\bm u)$ that there exists a partition $[n]=S\cup T\cup U\cup R$ with $|i|=\alpha_i n$ for $i\in \{S,T,U,R\}$, and subsets $i'\subseteq i$ for $i\in \{S,T,U,R\}$ with $|i'|=\gamma_i\alpha_i n$ such that the following properties hold: \begin{itemize} \item there are exactly $b_{ij} \cdot (2\alpha_i n)$ blue directed edges from set $i$ to set $j$; \item there are exactly $g_{ij} \cdot (2\alpha_i n)$ green directed edges from set $i$ to set $j$; \item there are exactly $r_{ij} \cdot (2\beta_i+(1-\beta_i))\gamma_i\alpha_i n$ red directed edges from set $i$ to set $j$; \item there are exactly $y_1\cdot 0.07n$ yellow edges that are either incident to a vertex in $U$, or are induced by $R$; \item there are exactly $y_2\cdot 0.07n$ yellow edges that are induced by $T$; \item there are no yellow edges inside $S$, or between $R$ and $T$; \item all vertices in $i'$ received at most 1 incoming green edge; \item all vertices in $i\setminus i'$ received at least 2 incoming blue edges. \end{itemize} We will show that $P({\bm u}) \le poly(n) \exp(f(\bm u)n)$ for some explicit function $f(\bm u)$. Unfortunately, this function is quite involved, so we will define it in the next section. \subsection{Property ${\mathcal P}$}\label{sec:function} Let $\epsilon_0=2^{-32}$.
Let us start with an observation that, due to Lemma~\ref{lem:aas}, we may assume that the parameter ${\bm u}$ is of a specific form, that is, it satisfies the following constraints: \begin{align} &-\epsilon_0<\sum_{i\in \{S,T,U,R\}}\gamma_i\alpha_i - 3e^{-2}<\epsilon_0\label{eq1}\\ &-\epsilon_0<\sum_{i\in \{S,T,U,R\}}\beta_i\gamma_i\alpha_i - e^{-2}<\epsilon_0\label{eq2}\\ &-\epsilon_0<\sum_{i\in \{S,T,U,R\}} 2\alpha_i \sum_{j\in \{S,T,U,R\}} g_{ij} - 2e^{-2}<\epsilon_0.\label{eq3} \end{align} Indeed, inequalities~(\ref{eq1}) and~(\ref{eq2}) follow from the fact that a.a.s.\ $|V_0|+|V_1| = (3e^{-2}+o(1)) n$ and, respectively, $|V_0| = (e^{-2}+o(1)) n$ (Lemma~\ref{lem:aas}(a)); inequality~(\ref{eq3}) follows from the fact that the number of green edges is equal to $|V_1|$ and so a.a.s.\ it is asymptotic to $2e^{-2}n$. We also have the following set of obvious constraints: \begin{align} &\sum_{j\in \{S,T,U,R\}} (b_{ij}+g_{ij}) =1 , \quad \mbox{for all $i\in \{S,T,U,R\}$} \lab{sum1} \\ &\sum_{j\in \{S,T,U,R\}} r_{ij} =1 , \quad \mbox{for all $i\in \{S,T,U,R\}$} \\ &y_1+y_2\le 1\\ &{\bm \alpha}, {\bm \beta}, {\bm \gamma}, {\bm b}, {\bm g}, {\bm r}, y_1, y_2 \in [0,1]. \end{align} (For ease of notation, we write that a vector is in $[0,1]$ when every component of the vector is in $[0,1]$.) As we only consider partitions satisfying properties (a)--(d) stated in Corollary~\ref{cor:cyclic}, we additionally require that \begin{align} &\alpha_S\ge \alpha_U, \\ &2\alpha_Tb_{TT}+2\alpha_Tg_{TT}+\gamma_T\alpha_T(2\beta_T+(1-\beta_T))r_{TT} + 0.07y_2 \le \alpha_T+\min\{\epsilon_0, \alpha_T\}, \\ &c_{SS}=c_{RT}=c_{TR}=0,\quad \mbox{for all $c\in \{b,g,r\}$}. \end{align} The first constraint comes immediately from property~(b), and the last constraint follows from properties~(a) and~(d).
The second constraint comes from the fact that $e(T)\le |T|$ in $\hat G_{\tau_4}$ required by property~(a), which, together with Property {\tt E}, implies that $e(T)\le |T|+\epsilon_0 n$ and $e(T)\le 2|T|$ in $G_{\tau_4}$ (note that there can be at most $|T|$ loops or double edges induced by $T$). Finally, let us note that Properties~(c) and {\tt E}, and Lemma~\ref{lem:number_of_edges} imply that a.a.s.\ the number of edges incident with $U$ or induced by $R$ is at least \begin{align*} |E(G_{\tau_4})| - & e(S\cup T) - e(S \cup T,R) \\ & \ge (2+4e^{-2}+0.07+o(1))n -|T|-2|S|+2|U| + 2\gamma-33n/\ln n -\ln \ln n\\ & \ge (2+4e^{-2}+0.07)n-|T|-2|S|+2|U|\\ & =(4e^{-2}+0.07+4\alpha_U+\alpha_T+2\alpha_R)n. \end{align*} This yields the following constraint: \begin{align} 2\alpha_U +\gamma_U\alpha_U(1+\beta_U) + 2\alpha_S(b_{SU}+g_{SU}) +\gamma_S\alpha_S r_{SU}(1+\beta_S) + 2\alpha_T(b_{TU}+g_{TU}) & \nonumber\\ + \gamma_T\alpha_Tr_{TU}(1+\beta_T)+ 2\alpha_R(b_{RU}+b_{RR}+g_{RU}+g_{RR}) & \nonumber\\ + \gamma_R\alpha_R(r_{RU}+r_{RR})(1+\beta_R) +0.07 y_1&\nonumber\\ \ge 4e^{-2} +0.07 + 4\alpha_U + \alpha_T + 2\alpha_R. & \lab{constraint1} \end{align} For $i\in\{S,T,U,R\}$, the number of blue edges coming into set $i$ must be at least $2\alpha_i(1-\gamma_i) n$, as every vertex in $i\setminus (V_0\cup V_1)$ must receive at least 2 incoming blue edges. This yields the following set of constraints: \begin{equation} \sum_{j\in\{S,T,U,R\}} 2\alpha_j b_{ji} \ge 2\alpha_i (1-\gamma_i),\quad \mbox{for all $i\in\{S,T,U,R\}$.}\label{blue} \end{equation} Finally, the number of green edges coming into each set satisfies the following constraints: \begin{equation} \sum_{j\in\{S,T,U,R\}} 2\alpha_j g_{ji} = \alpha_i \gamma_i(1-\beta_i),\quad \mbox{for all $i\in\{S,T,U,R\}$.}\lab{constraint} \end{equation} \medskip Now, we are ready to show that $P({\bm u}) \le poly(n) \exp(f(\bm u)n)$ for some explicit function $f(\bm u)$.
Given a vector of non-negative real numbers ${\bm a}$ with $\sum_i a_i = 1$, let $H({\bm a})=-\sum_{i} a_i\ln a_i$. If ${\bm a}=(a_1,a_2)$, then we simply write $H(a_1)$ for $H({\bm a})$. By convention, we set $0\ln 0=0$ and $a\ln 0=-\infty$ for any $a >0$, and we treat $-\infty< x$ as holding for every real number $x$. Given ${\bm u}$, there are $\binom{n}{\alpha_S n, \alpha_T n, \alpha_U n, \alpha_R n} =poly(n) \exp(H(\alpha_S,\alpha_T,\alpha_U,\alpha_R)n)$ choices for sets $S$, $T$, $U$, and $R$. Given $S$, $T$, $U$, $R$, there are \[ \prod_{i\in\{S,T,U,R\}}\binom{\alpha_i n}{\gamma_i \alpha_i n} =poly(n)\prod_{i\in\{S,T,U,R\}} \exp(H(\gamma_i) \alpha_i n) =poly(n) \exp \left( n \sum_{i\in\{S,T,U,R\}} H(\gamma_i) \alpha_i \right) \] ways to choose $G_S$, $G_T$, $G_U$ and $G_R$. The probability that the number of blue and green edges going out of $S$ into each part of $S$, $T$, $U$, $R$ is precisely as prescribed by $\bm u$ is equal to $poly(n) \exp(f_S n)$, where \begin{align*} f_S=&2\alpha_S \Big( H(b_{SU},b_{ST},b_{SR},g_{SU},g_{ST},g_{SR}) + b_{SU}\ln((1-\gamma_U)\alpha_U) +b_{ST}\ln ((1-\gamma_T)\alpha_T)\\ &+b_{SR}\ln ((1-\gamma_R)\alpha_R) +g_{SU}\ln(\gamma_U\alpha_U)+g_{ST}\ln(\gamma_T\alpha_T) +g_{SR}\ln(\gamma_R\alpha_R) \Big). \end{align*} Indeed, there are $2 \alpha_S n$ edges going out of $S$ that are blue or green. We need to partition them into 6 classes depending on their colour and on which part they go to. This gives us the term $2\alpha_S H(b_{SU},b_{ST},b_{SR},g_{SU},g_{ST},g_{SR})$. For each $i\in\{T,U,R\}$, there are $2\alpha_S b_{Si} n$ blue edges that need to go to blue vertices of $i$ (hence terms $2\alpha_S b_{Si} \ln((1-\gamma_i)\alpha_i)$) and there are $2\alpha_S g_{Si} n$ green edges that need to go to green vertices of $i$ (hence terms $2\alpha_S g_{Si} \ln(\gamma_i \alpha_i)$).
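The entropy identity used above, $\binom{n}{\alpha_1 n,\ldots,\alpha_m n}=poly(n)\exp(H({\bm a})n)$, is a direct consequence of Stirling's formula and is easy to confirm numerically. The following Python snippet is our own sanity check (the vector of fractions below is an arbitrary illustrative choice, not a value from the paper); it verifies that the logarithm of a multinomial coefficient differs from $H({\bm \alpha})n$ only by $O(\ln n)$:

```python
import math

def H(probs):
    """Entropy in nats, with the convention 0*ln(0) = 0."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def log_multinomial(n, parts):
    """ln of the multinomial coefficient n! / (k_1! ... k_m!), via lgamma."""
    return math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in parts)

n = 10**6
alpha = (0.5, 0.2, 0.2, 0.1)          # illustrative stand-ins for alpha_S, ..., alpha_R
parts = [round(a * n) for a in alpha]
assert sum(parts) == n
# ln binom(n; alpha*n) = H(alpha)*n + O(ln n)
gap = abs(log_multinomial(n, parts) - H(alpha) * n)
assert gap < 10 * math.log(n)
```

The residual `gap` is of order $\ln n$ (it comes from the polynomial factors in Stirling's formula), which is exactly what the $poly(n)$ prefactor absorbs.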
Similarly, the probabilities that the number of blue and green edges going out of $T$, $U$, $R$ into other parts is precisely as encoded by $\bm u$ are $poly(n) \exp(f_T n)$, $poly(n) \exp(f_Un)$ and, respectively, $poly(n) \exp(f_Rn)$, where \begin{align*} f_T=&2\alpha_T \Big( H(b_{TS},b_{TT},b_{TU},g_{TS},g_{TT},g_{TU}) + b_{TS}\ln((1-\gamma_S)\alpha_S) +b_{TT}\ln ((1-\gamma_T)\alpha_T)\\ &+b_{TU}\ln ((1-\gamma_U)\alpha_U) +g_{TS}\ln(\gamma_S\alpha_S)+g_{TT}\ln(\gamma_T\alpha_T) +g_{TU}\ln(\gamma_U\alpha_U) \Big), \end{align*} \begin{align*} f_U=&2\alpha_U \Big( H(b_{US},b_{UT},b_{UU},b_{UR},g_{US},g_{UT}, g_{UU}, g_{UR}) + b_{US}\ln((1-\gamma_S)\alpha_S) \\ &+b_{UT}\ln ((1-\gamma_T)\alpha_T) +b_{UU}\ln ((1-\gamma_U)\alpha_U)+b_{UR}\ln ((1-\gamma_R)\alpha_R) +g_{US}\ln(\gamma_S\alpha_S) \\ &+g_{UT}\ln(\gamma_T\alpha_T) +g_{UU}\ln(\gamma_U\alpha_U)+g_{UR}\ln(\gamma_R\alpha_R) \Big), \end{align*} and \begin{align*} f_R=&2\alpha_R \Big( H(b_{RS},b_{RU},b_{RR},g_{RS},g_{RU},g_{RR}) + b_{RS}\ln((1-\gamma_S)\alpha_S) +b_{RU}\ln ((1-\gamma_U)\alpha_U)\\ &+b_{RR}\ln ((1-\gamma_R)\alpha_R) +g_{RS}\ln(\gamma_S\alpha_S)+g_{RU}\ln(\gamma_U\alpha_U) +g_{RR}\ln(\gamma_R\alpha_R) \Big). 
\end{align*} Given the above, the probabilities that the numbers of red edges going out of $S$, $T$, $U$, $R$ into each part are exactly as dictated by $\bm u$ equal $poly(n) \exp(g_S n)$, $poly(n) \exp(g_T n)$, $poly(n) \exp(g_U n)$ and, respectively, $poly(n) \exp(g_R n)$, where \begin{align*} g_S=&\alpha_S\gamma_S(2\beta_S+(1-\beta_S))\Big(H(r_{SU},r_{ST},r_{SR})+r_{SU}\ln \alpha_U+r_{ST}\ln \alpha_T+r_{SR}\ln \alpha_R\Big), \\ g_T=&\alpha_T\gamma_T(2\beta_T+(1-\beta_T))\Big(H(r_{TS},r_{TT},r_{TU})+r_{TS}\ln \alpha_S+r_{TT}\ln \alpha_T+r_{TU}\ln \alpha_U\Big), \\ g_U=&\alpha_U\gamma_U(2\beta_U+(1-\beta_U))\Big(H(r_{US},r_{UT},r_{UU}, r_{UR})+r_{US}\ln \alpha_S+r_{UT}\ln \alpha_T \\ &\qquad\qquad\qquad\qquad\qquad\qquad +r_{UU}\ln \alpha_U+r_{UR}\ln \alpha_R\Big), \\ g_R=&\alpha_R\gamma_R(2\beta_R+(1-\beta_R))\Big(H(r_{RS},r_{RU},r_{RR})+r_{RS}\ln \alpha_S+r_{RU}\ln \alpha_U+r_{RR}\ln \alpha_R\Big). \end{align*} \medskip In order to continue our computations, we need the following auxiliary lemma on the ``balls into bins'' model. \begin{lemma}\lab{lem:balls_in_bins} Fix $\alpha > 0$ and suppose that $\alpha n$ balls are thrown independently and uniformly at random into $n$ bins. \begin{enumerate} \item[(a)] If $\alpha>2$, then the probability that every bin receives at least two balls is asymptotic to $poly(n)\exp(t(\alpha)n)$ with $t(\alpha)=\lambda-\alpha+\alpha\ln(\alpha/\lambda)+\ln(1-e^{-\lambda}-\lambda e^{-\lambda})$, where $\lambda = \lambda(\alpha) > 0$ is the unique solution of the following equation: \[ \frac{\lambda(1-e^{-\lambda})}{1-e^{-\lambda}-\lambda e^{-\lambda}}=\alpha. \] \item[(b)] If $\alpha\le 1$, then the probability that every bin receives at most one ball is asymptotic to $poly(n)\exp(\kappa(\alpha)n)$, where $ \kappa(\alpha)=-\alpha-(1-\alpha)\ln (1-\alpha). $ \item[(c)] If $\alpha=2$, the probability that every bin receives exactly two balls is asymptotic to $poly(n)\exp((\ln 2-2)n)$.
\end{enumerate} \end{lemma} Before we prove the lemma, let us note that $$ f(\lambda) := \frac{\lambda(1-e^{-\lambda})}{1-e^{-\lambda}-\lambda e^{-\lambda}} = \frac {\lambda (1-(1-\lambda+O(\lambda^2)))}{1 - (1-\lambda+\lambda^2/2+O(\lambda^3))(1+\lambda)} = \frac {\lambda^2 + O(\lambda^3)}{\lambda^2/2+O(\lambda^3)} = 2 +O(\lambda), $$ so $\lim_{\lambda \to 0^+} f(\lambda) = 2$. It is also straightforward to see that $\lim_{\lambda \to \infty} f(\lambda) = \infty$ and $f(\lambda)$ is an increasing function of $\lambda$. Hence, indeed, $\lambda=\lambda(\alpha)$ is well defined. For convenience, we define $\lambda(2)=0$ and set \[ t(2)=\lim_{\alpha\to 2+} t(\alpha)=\ln 2-2. \] This definition of $t:[2,\infty)\to {\mathbb R}$ unifies parts (a) and (c) in the lemma above. \begin{proof} Suppose that $\alpha> 2$. Let $K$ be the truncated Poisson variable with parameter $\lambda = \lambda(\alpha)$ and truncated at 2, that is, \[ {\mathbb P}(K=j)=\frac{e^{-\lambda}\lambda^j}{j!(1-e^{-\lambda}-\lambda e^{-\lambda})}, \qquad\mbox{for every integer $j\ge 2$.} \] It follows that \begin{eqnarray*} {\mathbb E} K &=& \sum_{j\ge 2} j \cdot {\mathbb P}(K=j) = \sum_{j\ge 2} \frac{e^{-\lambda}\lambda^j}{(j-1)!(1-e^{-\lambda}-\lambda e^{-\lambda})} \\ &=& \frac{\lambda}{1-e^{-\lambda}-\lambda e^{-\lambda}} \sum_{j\ge 1} \frac{e^{-\lambda}\lambda^j}{j!} = \frac{\lambda(1-e^{-\lambda})}{1-e^{-\lambda}-\lambda e^{-\lambda}} = \alpha, \end{eqnarray*} by the definition of $\lambda$. Let $k_1,\ldots, k_n$ be $n$ independent copies of $K$.
Then, by Gnedenko's local limit theorem~\cite{lll}, \begin{eqnarray*} \Theta(n^{-1/2}) &=& {\mathbb P} \left( \sum_{i=1}^n k_i=\alpha n \right) = \sum_{\substack{j_1\ge 2, \ldots, j_n\ge 2\\ \sum_{i=1}^n j_i =\alpha n}} \prod_{i=1}^n \frac{e^{-\lambda} \lambda^{j_i}}{j_i!(1-e^{-\lambda}-\lambda e^{-\lambda})} \\ &=& \frac{e^{-\lambda n}\lambda^{\alpha n}}{(1-e^{-\lambda}-\lambda e^{-\lambda})^n}{\sum}^*, \end{eqnarray*} where \[ {\sum}^*= \sum_{\substack{j_1\ge 2, \ldots, j_n\ge 2\\ \sum_{i=1}^n j_i =\alpha n}} \prod_{i=1}^n \frac{1}{j_i!}. \] Hence, \[ {\sum}^* = poly(n) \frac{(1-e^{-\lambda}-\lambda e^{-\lambda})^n}{e^{-\lambda n}\lambda^{\alpha n}}. \] Consider now throwing $\alpha n$ balls independently and uniformly at random into $n$ bins. By Stirling's formula ($x! = poly(x) (x/e)^x$), the probability that every bin receives at least 2 balls is equal to \begin{eqnarray*} \sum_{\substack{j_1\ge 2, \ldots, j_n\ge 2\\ \sum_{i=1}^n j_i=\alpha n}} \binom{\alpha n}{j_1,\ldots, j_n} \ n^{-\alpha n} &=& \frac{(\alpha n)!}{n^{\alpha n}} {\sum}^* = poly(n) e^{-\alpha n} \alpha^{\alpha n} {\sum}^* \\ &=& poly(n) e^{-\alpha n} \alpha^{\alpha n} \frac{(1-e^{-\lambda}-\lambda e^{-\lambda})^n}{e^{-\lambda n}\lambda^{\alpha n}}=poly(n)\exp(t(\alpha) n). \end{eqnarray*} This completes the proof of part~(a). To show part~(b), suppose that $\alpha\le 1$. The probability that every bin receives at most one ball is equal to \[ \frac{(n)_{\alpha n}}{n^{\alpha n}} = \frac {n!}{(n-\alpha n)! n^{\alpha n}} =poly(n)\exp(\kappa(\alpha)n), \] where $(x)_j=\prod_{i=0}^{j-1}(x-i)$ denotes the falling factorial. To show part~(c), note that the probability that every bin receives exactly two balls is equal to \[ \frac{(2n)!/2^n}{n^{2n}} = poly(n)\exp((\ln2-2)n). \] This finishes the proof of the lemma. \end{proof} \medskip We are now back to our problem. With Lemma~\ref{lem:balls_in_bins} at hand, we will be able to prove the following claims.
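The exponents appearing in Lemma~\ref{lem:balls_in_bins} are easy to check numerically. The following Python snippet (our own sanity check, independent of the paper's computations) inverts the defining equation for $\lambda(\alpha)$ by bisection, confirms the limit $t(\alpha)\to\ln 2-2$ as $\alpha\to 2^+$, and compares the exact log-probability in part~(b) with $\kappa(\alpha)n$:

```python
import math

def f_lam(lam):
    """The ratio lam*(1-e^-lam)/(1-e^-lam-lam*e^-lam); increasing in lam,
    with limit 2 as lam -> 0+."""
    e = math.exp(-lam)
    return lam * (1 - e) / (1 - e - lam * e)

def t_exponent(alpha, tol=1e-12):
    """Solve f_lam(lam) = alpha by bisection, then return t(alpha)."""
    assert alpha > 2
    lo, hi = 1e-4, 1.0
    while f_lam(hi) < alpha:
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f_lam(mid) < alpha:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    e = math.exp(-lam)
    return lam - alpha + alpha * math.log(alpha / lam) + math.log(1 - e - lam * e)

# continuity at 2: t(alpha) -> ln(2) - 2 as alpha -> 2+, matching part (c)
assert abs(t_exponent(2.001) - (math.log(2) - 2)) < 0.05

# part (b): the exact log-probability that every bin receives at most one
# ball is sum_{i < alpha*n} ln(1 - i/n); it matches kappa(alpha)*n up to O(ln n)
n, alpha = 10**5, 0.5
exact = sum(math.log(1 - i / n) for i in range(int(alpha * n)))
kappa = -alpha - (1 - alpha) * math.log(1 - alpha)
assert abs(exact - kappa * n) < 10 * math.log(n)
```

Bisection is adequate here precisely because, as noted before the proof, $f(\lambda)$ is increasing with $\lim_{\lambda\to 0^+}f(\lambda)=2$.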
\begin{claim}\lab{claim:blue} The probability that all vertices in $[n]\setminus(G_S\cup G_T\cup G_U\cup G_R)$ receive at least two blue edges is equal to $poly(n) \exp \left( n \sum_{i\in \{S,T,U,R\}} w_i \right) $, where \[ w_i=(1-\gamma_i)\alpha_i\Big(\lambda_i-d_i+d_i\ln(d_i/\lambda_i)+\ln(1-e^{-\lambda_i}-\lambda_ie^{-\lambda_i})\Big), \] \[ d_i=\frac{\sum_{j\in\{S,T,U,R\}} 2\alpha_jb_{ji}}{(1-\gamma_i)\alpha_i}, \] and $\lambda_i=\lambda_i(d_i) > 0$ is the unique solution of the following equation: \[ \frac{\lambda_i(1-e^{-\lambda_i})}{1-e^{-\lambda_i}-\lambda_ie^{-\lambda_i}}=d_i. \] \end{claim} Before we move to the proof, let us remark that, by~\eqn{blue}, for every $i\in\{S,T,U,R\}$ we have $d_i\ge 2$ and so $\lambda_i$ is well defined. \begin{proof} Note that for each $i\in\{S,T,U,R\}$, the number of blue edges coming into $i\setminus G_i$ is equal to $\sum_{j\in \{S,T,U,R\}} 2\alpha_j n b_{ji}$. Moreover, $|i\setminus G_i|=(1-\gamma_i)\alpha_i n$. The claim follows immediately from Lemma~\ref{lem:balls_in_bins}(a) applied with $\alpha=\sum_{j\in \{S,T,U,R\}} 2\alpha_j b_{ji}/(1-\gamma_i)\alpha_i = d_i$ and the number of bins equal to $(1-\gamma_i)\alpha_i n$. \end{proof} \medskip \begin{claim}\lab{claim:green} The probability that all vertices in $G_S\cup G_T\cup G_U\cup G_R$ receive at most one green edge is equal to $poly(n) \exp \left( n \sum_{i\in \{S,T,U,R\}} \tilde w_i \right) $, where \[ \tilde w_i= \alpha_i\gamma_i \Big(-1+\beta_i-\beta_i\ln \beta_i\Big). \] \end{claim} \begin{proof} Note that for each $i\in\{S,T,U,R\}$, the number of green edges coming into $G_i$ is $(1-\beta_i)\gamma_i\alpha_i n$. Moreover, $|G_i|=\gamma_i \alpha_i n$. The claim follows immediately from Lemma~\ref{lem:balls_in_bins}(b) applied with $\alpha=(1-\beta_i)\gamma_i\alpha_i/\gamma_i\alpha_i=1-\beta_i$ and the number of bins equal to $\gamma_i\alpha_i n$.
\end{proof} \medskip \begin{claim}\lab{claim:yellow} The probability that there are exactly $0.07y_1n$ yellow edges incident with $U$ or induced by $R$, and exactly $0.07y_2n$ yellow edges induced by $T$ is equal to $poly(n) \exp(h n)$, where \begin{align*} h=0.07\ln(0.07) &+ 0.07y_1\ln\left(\frac{\alpha_U^2+2\alpha_U(1-\alpha_U)+\alpha_R^2}{0.07 y_1}\right)+ 0.07y_2\ln\left(\frac{\alpha_T^2}{0.07y_2}\right)\\ &+ 0.07(1-y_1-y_2)\ln\left(\frac{2\alpha_S(\alpha_T+\alpha_R)}{0.07(1-y_1-y_2)}\right). \end{align*} \end{claim} \begin{proof} Recall that there are $0.07n$ yellow edges in total, so the remaining $0.07(1-y_1-y_2)n$ yellow edges are between $S$ and $R \cup T$. The probability in the claim is equal to \[ \binom{\binom{\alpha_Un}{2}+\alpha_U(1-\alpha_U)n^2+\binom{\alpha_Rn}{2}}{0.07y_1n}\binom{\binom{\alpha_Tn}{2}}{0.07y_2n}\binom{\alpha_Sn(\alpha_T+\alpha_R)n}{0.07(1-y_1-y_2)n} \binom{\binom{n}{2}}{0.07n}^{-1}, \] which is equal to $poly(n) \exp(hn)$ by Stirling's formula. \end{proof} \medskip Combining everything, it follows that $P({\bm u})=poly(n)\exp(f(\bm u)n)$, where \[ f(\bm u)=H(\alpha_S,\alpha_T,\alpha_U,\alpha_R)+\sum_{i\in\{S,T,U,R\}}\big(\alpha_iH(\gamma_i)+f_i+g_i+w_i+\tilde w_i\big) +h. \] Finally, we are ready to state property ${\mathcal P}$. \begin{definition}[Property~${\mathcal P}$] Property~${\mathcal P}$ holds if there exists $\delta>0$ such that $f({\bm u})<-\delta$ for all vectors ${\bm u}$ subject to~\eqn{alpha}--\eqn{constraint} and $\alpha_R\le 0.995$. \end{definition} Let us remark that the reason to separate $\alpha_R$ from 1 in the definition of Property ${\cal P}$ is that the probability of a specified vertex partition $S\cup T\cup U\cup R$ satisfying Corollary~\ref{cor:cyclic}(a)--(d) will not be exponentially small when $S$, $T$, and $U$ are all of sub-linear size, and thus $f$ is not bounded away from 0 in the entire region~\eqn{alpha}--\eqn{constraint}.
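For reference, the a.a.s.\ values behind constraints~(\ref{eq1})--(\ref{eq3}) come from the fact that in-degrees in $D_{2n}$ are asymptotically Poisson with mean 2, so that $|V_0|\approx e^{-2}n$ and $|V_1|\approx 2e^{-2}n$. A direct simulation of the first phase's $2n$ uniformly random arrivals (our own check, with an arbitrary seed) reproduces these fractions:

```python
import math, random

rng = random.Random(2024)
n = 200_000
indeg = [0] * n
for _ in range(2 * n):              # first phase: 2n uniform random arrivals u_t
    indeg[rng.randrange(n)] += 1

v0 = sum(1 for d in indeg if d == 0) / n   # fraction in V_0, expect e^{-2}
v1 = sum(1 for d in indeg if d == 1) / n   # fraction in V_1, expect 2e^{-2}
assert abs(v0 - math.exp(-2)) < 0.01
assert abs(v1 - 2 * math.exp(-2)) < 0.01
```

The observed fractions concentrate around $e^{-2}\approx 0.135$ and $2e^{-2}\approx 0.271$, consistent with Lemma~\ref{lem:aas}(a).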
\subsection{Proof of Lemma~\ref{lem:2-matching}}\label{sec:upper_2matching} \begin{proof}[Proof of Lemma~\ref{lem:2-matching}] Suppose that property ${\mathcal P}$ holds, that is, there exists $\delta>0$ such that $f({\bm u})<-\delta$ for all vectors ${\bm u}$ subject to~\eqn{alpha}--\eqn{constraint} and $\alpha_R\le 0.995$. Our goal is to show that a.a.s.\ $\hat G_{\tau_4}$ has a 2-matching with $o(n)$ components. Fix $\epsilon>0$. As mentioned earlier, after combining Lemma~\ref{lem:gstar_cyclic} and Corollary~\ref{cor:cyclic}, it remains to show that a.a.s.\ there is no vertex partition $S\cup T\cup U\cup R$ of $\hat G_{\tau_4}$ satisfying properties (a)--(d) in Corollary~\ref{cor:cyclic} with some $\gamma \ge \epsilon n$. We first deal with partitions with $|R|\le 0.995n$. The expected number of partitions $S\cup T\cup U\cup R$ satisfying (a)--(d) with $\gamma \ge \epsilon n$ and $|R|\le 0.995n$ is at most \begin{equation} \sum_{{\bm u}} P({\bm u})= \sum_{{\bm u}} poly(n)\exp(f(\bm u)n), \lab{expec} \end{equation} where the sum is over all possible values of ${\bm u}$ satisfying constraints~\eqn{alpha}--\eqn{constraint} and $\alpha_R\le 0.995$. By property ${\mathcal P}$, we have $f({\bm u})<-\delta/2$ for all ${\bm u}$ in the range of summation of~\eqn{expec}. The number of possible values of ${\bm u}$ in the summation is clearly $poly(n)$. Hence, the expected number of partitions $S\cup T\cup U\cup R$ satisfying (a)--(d) with $\gamma\ge\epsilon n$ and $|R|\le 0.995 n$ is $$ \sum_{\bm u} poly(n)\exp(f(\bm u) n) = poly(n)\exp( -\delta n / 2) = o(1). $$ It only remains to consider partitions $S\cup T\cup U\cup R$ satisfying (a)--(d) with $|R|>0.995n$. Let \begin{align*} x_1 &\quad \mbox{denote the number of edges between $S$ and $T$};\\ x_2 &\quad \mbox{denote the number of edges between $U$ and $T$};\\ x_3 &\quad \mbox{denote the number of edges between $S$ and $U$};\\ x_4 &\quad \mbox{denote the number of edges between $S$ and $R$}.
\end{align*} Since the minimum degree of $\hat G_{\tau_4}$ is at least 4, $S$ induces an independent set, and $T$ induces a forest, we get that \[ x_1+x_2+2e(T)\ge 4|T|,\quad e(T)<|T|,\quad \text{ and } \quad x_1+x_3+x_4\ge 4|S|. \] By property~(c) and the fact that $\gamma \ge \epsilon n$, we get that $x_1+e(T)+x_4 \le |T|+2|S|-2|U|$. Hence, \begin{align} 2|T|+2|S|-2|U| &+x_1+x_2+x_3 > |T|+2|S|-2|U| +x_1+x_2+x_3+e(T)\nonumber\\ & \ge 2x_1+x_2+x_3+x_4+2e(T)\ge 4(|S|+|T|). \end{align} It follows that $x_1+x_2+x_3\ge 2(|S|+|T|+|U|) = 2|S\cup T\cup U|$, that is, $S\cup T\cup U$ induces at least $2|S\cup T\cup U|$ edges. However, by Lemma~\ref{lem:aas}(c), a.a.s.\ this does \emph{not} happen for any partition with $|S\cup T\cup U| \le 0.005n$. \end{proof} \subsection{Numerical support}\label{sec:numerical} The goal of this section is to provide numerical evidence that property $\mathcal{P}$ holds. The optimization problem was carefully investigated using code written in the Julia language~\cite{Julia} with the \texttt{JuMP.jl} package~\cite{Jump} and the \texttt{Ipopt} solver~\cite{Ipopt}. The optimization problem we need to solve is challenging for the following reasons. First of all, it is a non-convex optimization problem which potentially has many local optima (we numerically confirmed that this is the case for our problem). In order to overcome this challenge, we used a standard multi-start~\cite{Multistart} approach for solving global optimization problems. However, due to the stochastic nature of the heuristic search procedure used in this process, the results we obtained are only heuristic in nature. In other words, the numerical results strongly suggest that the desired property holds, but they do not, unfortunately, constitute a formal proof. Secondly, the objective function contains terms of the form $x\ln(x)$, whose derivative tends to $-\infty$ as $x\to 0^+$.
This creates a challenge when solving the problem using numerical methods. More importantly, the problem has some local optima at which some variables are equal to zero. In order to overcome this issue, we relaxed the original problem by replacing $x\ln(x)$ with some other function $f(x) \le x \ln(x)$ (we need this property as we deal with a maximization problem and terms of the form $x\ln(x)$ appear with a negative sign in the objective function). The function $f(x)$ should be quadratic near $0$, and its value and the values of its first and second derivatives should match those of $x\ln(x)$ at the point where the formula changes. The exact function we ended up using as a relaxation of $x\ln(x)$ is: \begin{equation*} f(x) = \begin{cases} 2^{31}x^2+\ln(2^{-32})x-2^{-33} &\text{if $0 \le x < 2^{-32}$}\\ x\ln(x) &\text{if $x \geq 2^{-32}$} \end{cases}. \end{equation*} The third challenge is that for most of the variables the domain is $[0,1]$, and $\ln(x)$ occurs in multiple places in the objective function (also other than in the terms $x\ln(x)$ handled by the relaxation described above). This poses another challenge when the solver performs a local search at points near the boundary of the admissible set. In such cases the logarithm of a negative value might be evaluated (note that the solver evaluates the objective function at points contained in some small neighbourhood of the current candidate solution before ensuring that the constraints are satisfied; as a result, if points close to $0$ are considered, such a neighbourhood could contain negative values), which leads to errors when performing the computation.
In order to overcome this problem, we apply the transformation given by the formula $$ g(x) = \frac {1}{2} \left( \sin \left( \pi \left(x- \frac 12\right) \right)+1 \right) $$ to every variable that is constrained to the interval $[0,1]$, before passing it to the evaluation of the objective function and constraints. Note that this transformation is a bijection from the interval $[0,1]$ onto itself, and it guarantees that if some decision variable is tested outside the interval $[0,1]$, it is mapped back into $[0,1]$ (such values are rejected later anyway due to the constraints, but they are evaluated during the optimization process, which now causes no errors). Also note that the transformation we use is an analytic function, which means that it does not introduce additional problems when calculating the first or second derivatives of the objective function or constraints. In order to explore the solution space thoroughly, we performed two optimization runs. In the first one, we tested the interior of the solution space, that is, all decision variables that are restricted to $[0,1]$ were in fact constrained even further to lie in the interval $[0.005, 0.995]$. In the second run, we did not impose these additional constraints and all the variables were allowed to take values from their original domains. The largest local optimum found across both runs was $-0.000722123670503$ (we report the value of the original objective function, before the relaxation). It was clearly separated from the boundary; indeed, all decision variables restricted to the interval $[0,1]$ actually lay in the interval $[0.0032,0.9586]$. This is consistent with our theoretical understanding of the problem; no issues are expected at the boundary. In both runs there were some additional local optima (two in the first and four in the second), but all of them were smaller than the one we report above.
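Both ingredients just described, the relaxation of $x\ln(x)$ and the sine-based change of variables, can be verified mechanically. The snippet below is a Python re-implementation of the two formulas (the paper's actual code is in Julia); it checks the smooth pasting at $2^{-32}$, the inequality $f(x)\le x\ln(x)$ at sample points, and the basic properties of $g$:

```python
import math

X0 = 2.0 ** -32   # breakpoint of the relaxation

def relax(x):
    """Quadratic-near-zero relaxation of x*ln(x)."""
    if x < X0:
        return 2.0**31 * x * x + math.log(X0) * x - 2.0**-33
    return x * math.log(x)

def g(x):
    """Sine squashing map applied to the [0,1]-constrained variables."""
    return 0.5 * (math.sin(math.pi * (x - 0.5)) + 1.0)

# smooth pasting at X0: values and first derivatives of the two branches agree
# (q(x) = 2^31 x^2 + ln(2^-32) x - 2^-33, q'(x) = 2^32 x + ln(2^-32), q'' = 2^32 = 1/X0)
assert abs(relax(X0) - (2.0**31 * X0**2 + math.log(X0) * X0 - 2.0**-33)) < 1e-20
assert abs((2.0**32 * X0 + math.log(X0)) - (math.log(X0) + 1)) < 1e-12
# the relaxation stays below x*ln(x), as the maximization requires
for x in (X0 / 10, X0 / 3, X0 / 1.5):
    assert relax(x) <= x * math.log(x)
# g maps all of R into [0,1], fixes the endpoints, and is increasing on [0,1]
assert abs(g(0.0)) < 1e-15 and abs(g(1.0) - 1.0) < 1e-15
assert 0.0 <= g(-0.3) <= 1.0 and 0.0 <= g(1.7) <= 1.0
xs = [i / 100 for i in range(101)]
assert all(g(a) < g(b) for a, b in zip(xs, xs[1:]))
```

The inequality $f(x)\le x\ln(x)$ on $[0,2^{-32})$ also follows analytically: the difference $x\ln(x)-f(x)$ vanishes to second order at $2^{-32}$ and is convex there, since $(x\ln x)''=1/x>2^{32}$ for $x<2^{-32}$.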
In order to make sure that our results are stable, we tested several different values of $\epsilon_0$, various relaxation functions $f$ and space transformation functions $g$, and many separation margins from the boundary. In all cases we consistently obtained that the best local optimum found was below zero. This provides strong numerical support for the conjecture that the optimal value of our optimization problem is negative, that is, that property $\mathcal{P}$ holds. We independently tested whether the third phase (where $0.07n$ semi-random edges are sprinkled) is required for $f({\bm u})$ to be negative and bounded away from zero. Denote by $\hat{\bm u}$ the best solution to our original problem that we found; it satisfies $f(\hat{\bm u})<0$. However, if $0.06n$ edges are added instead of $0.07n$, then the best solution that the solver is able to find is a point ${\bm u}^*$ with $f({\bm u}^*)>0$. This time, all $[0,1]$-constrained variables in ${\bm u}^*$ turned out to be in the interval $[0.0358, 0.8691]$. We also checked the relationship between the points $\hat{\bm u}$ and ${\bm u}^*$. The two points are very close to each other ($\|{\bm u}^*-\hat{\bm u}\|_{\infty}=0.0024$), which indicates that the results are stable. Having said that, they are clearly not identical, as changing the number of random edges added during the third phase affects the constraints of our optimization problem. In particular, the point ${\bm u}^*$ is not feasible for the process involving adding $0.07n$ random edges. \section{Lower bound}\label{sec:lower_bound} As in the argument for the upper bound, it will be convenient to work with the directed graph $D_t$ underlying $G_t$. For each edge $u_tv_t$ that is added to $G_t$ at time $t$, we put a directed edge from $v_t$ to $u_t$ in $D_t$ (recall that $u_t$ is a random vertex selected by the semi-random graph process and $v_t$ is a vertex selected by the player).
The existing lower bound for ${\tau_{{\tt HAM}}}$ that was observed in~\cite{process1} follows from the fact that, in order to construct a Hamilton cycle, the player has to create a graph with minimum degree at least 2. Meeting this trivial necessary condition alone requires $(\ln 2+\ln(1+\ln2) + o(1)) n$ steps. Indeed, in order to reach a graph with minimum degree 2 as quickly as possible, the player has to play greedily during the first part of the game by selecting vertices of $G_t$ that are of degree 0. This part of the game a.a.s.\ ends at step $(\ln 2 + o(1))n$. From that point on, she continues playing greedily by selecting vertices of degree 1, which a.a.s.\ requires an additional $(\ln(1+\ln2) + o(1)) n$ steps. In order to improve the lower bound (unfortunately, only by a hair), we will use another trivial observation. We will call a vertex $x$ in $D_t$ \textbf{problematic} if it is of in-degree at least 3 (the out-degree of $x$ is not important) with in-neighbours $y_1, y_2, y_3$ (if $x$ has in-degree larger than 3, then these are the \emph{first} three in-neighbours, sorted by the time when they were added to the graph), each of them of out-degree 1 and in-degree 1. Since the $y_i$'s are of degree 2 in the underlying graph $G_t$, all three edges $y_i x$ would have to be included in a potential Hamilton cycle; but a Hamilton cycle uses exactly two edges at $x$, so vertex $x$ indeed creates a problem. This gives us another trivial necessary condition: if $G_t$ has a Hamilton cycle, then there are no problematic vertices. Indeed, if $G_t$ has a vertex $v$ adjacent to three vertices, all of which are of degree 2, then $G_t$ cannot be Hamiltonian. One could consider various other types of ``problematic'' vertices; our definition focuses on a particular type only, for the purpose of simplifying the proof. The numerical improvement is tiny and the bound we prove is certainly not tight. Hence, we only provide sketches of the proofs. The computations presented in the paper were performed using Maple~\cite{Maple}.
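The two greedy phase lengths quoted above, $(\ln 2+o(1))n$ and $(\ln(1+\ln 2)+o(1))n$, are easy to corroborate by direct simulation. The following Python snippet (our own check, independent of the cited Maple computations) plays the degree-greedy strategy and measures both phases:

```python
import math, random

def greedy_phases(n, rng):
    """Play the semi-random process greedily: while degree-0 vertices exist,
    the player's v_t is one of them; afterwards, while degree-1 vertices
    exist, v_t is one of those.  Returns (t1, t2), the two phase lengths."""
    deg = [0] * n
    zero = set(range(n))            # vertices of current degree 0
    one = set()                     # vertices of current degree 1

    def hit(w):                     # increase deg(w) by one, update the pools
        deg[w] += 1
        if deg[w] == 1:
            zero.discard(w); one.add(w)
        elif deg[w] == 2:
            one.discard(w)

    t1 = 0
    while zero:                     # phase 1: eliminate degree-0 vertices
        u = rng.randrange(n)        # random endpoint offered by the process
        v = next(iter(zero))        # player's greedy choice
        hit(v); hit(u)
        t1 += 1
    t2 = 0
    while one:                      # phase 2: eliminate degree-1 vertices
        u = rng.randrange(n)
        v = next(iter(one))
        hit(v); hit(u)
        t2 += 1
    return t1, t2

rng = random.Random(7)
n = 200_000
t1, t2 = greedy_phases(n, rng)
assert abs(t1 / n - math.log(2)) < 0.02                  # ~ 0.693 n
assert abs(t2 / n - math.log(1 + math.log(2))) < 0.02    # ~ 0.527 n
```

The second constant arises because, at the end of phase one, a $(\ln 2+o(1))$-fraction of the vertices have degree exactly 1, and the degree-1 pool then shrinks at rate $1+w$ per step.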
The worksheets can be found at the following address~\cite{Pawel}. \medskip For convenience, we will distinguish a few phases in the semi-random graph process. The first phase lasts exactly $n \ln 2$ steps. Our first goal is to show that if the player plays greedily, then a.a.s.\ there will be linearly many problematic vertices at the end of the first phase. \begin{claim}\label{claim:1} Suppose that the player plays greedily during the first phase of the process. Then, a.a.s.\ there are $(\xi+o(1))n$ problematic vertices at the end of this phase, where $$ \xi = \frac {1}{128} \left( 4(\ln 2)^4 + 20 (\ln 2)^3 + 54 (\ln 2)^2 - 18 \ln 2 - 21\right) \approx 0.0004035. $$ \end{claim} \begin{proof} It is fairly easy to show that the number of problematic vertices is a.a.s.\ at least $\xi n$ for some positive constant $\xi$. By standard first and second moment calculations, after the first $(\ln 2/2)n$ steps there will be at least $(e^{-c} c^3/6) n$ vertices of in-degree at least 3 in $D_t$, where $c=\ln 2/2$. Then, a.a.s.\ a positive fraction of these vertices becomes problematic during the next $(\ln 2/2)n$ steps. Of course, in order to get a larger constant $\xi$ it is best to track the process and apply the differential equations method (see~\cite{DE} for more information on the DE's method). We briefly sketch the argument. For $a,b,c \in \{0, 1\}$ and $a \ge b \ge c$, we will say that a vertex $x$ in $D_t$ is of \textbf{type $(a,b,c)$} if it is of in-degree at least 3, with the first three in-neighbours $y_1, y_2$ and $y_3$ (order is not important), each of which has out-degree 1 and in-degree $a$, $b$, and $c$, respectively. In particular, a vertex of type $(1,1,1)$ is simply a problematic vertex. Similarly, vertices of in-degree 2 could be of \textbf{type $(a,b)$} and vertices of in-degree 1 could be of \textbf{type $(a)$}. The remaining vertices of in-degree at least 1 are called \textbf{neglected}.
(Note that neglected vertices can still prevent a Hamilton cycle from being constructed, but we simply neglect them.) In order to analyze the process, we need to keep track of 9 random variables associated with vertices of different types, namely $X_{abc}$, $X_{ab}$, and $X_{a}$. In particular, $X_{111}(t)$ is the number of problematic vertices (type $(1,1,1)$) at the end of step $t$. Moreover, let $Y(t)$ be the number of neglected vertices at the end of step $t$. It is straightforward to compute the conditional expectations; for example, $$ {\mathbb E} \Big( X_{111}(t+1)-X_{111}(t) ~~|~~ D_t \Big) = \frac {X_{110}(t)}{n} - 3 \ \frac{X_{111}(t)}{n}. $$ Indeed, the only chance to create a problematic vertex is when the semi-random process selects the in-neighbour of a vertex of type $(1,1,0)$ that is of in-degree 0. On the other hand, if the process selects any of the first three in-neighbours of a problematic vertex, this vertex becomes neglected. The other expectations can be computed in a similar way. This suggests the following system of differential equations that should reflect the behaviour of the corresponding random variables: \begin{eqnarray*} x_{0}'(x) &=& 1 - x_{0}(x) - x_{00}(x) - x_{000}(x) - x_{1}(x) - x_{10}(x) - x_{100}(x) - x_{11}(x) \\ &&- x_{110}(x) - x_{111}(x) - y(x) - 2x_{0}(x), \\ x_{00}'(x) &=& x_{0}(x) - 3x_{00}(x), \\ x_{000}'(x) &=& x_{00}(x) - 3x_{000}(x), \\ x_{1}'(x) &=& x_{0}(x) - 2x_{1}(x), \\ x_{10}'(x) &=& 2x_{00}(x) + x_{1}(x) - 3x_{10}(x), \\ x_{100}'(x) &=& 3x_{000}(x) + x_{10}(x) - 3x_{100}(x), \\ x_{11}'(x) &=& x_{10}(x) - 3x_{11}(x), \\ x_{110}'(x) &=& 2x_{100}(x) + x_{11}(x) - 3x_{110}(x),\\ x_{111}'(x) &=& x_{110}(x) - 3x_{111}(x), \\ y'(x) &=& x_{1}(x) + x_{10}(x) + x_{100}(x) + 2x_{11}(x) + 2x_{110}(x) + 3x_{111}(x), \end{eqnarray*} with the initial condition that all functions at $x=0$ are equal to zero. This system of equations can be explicitly solved.
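As a sanity check, the system can also be integrated numerically and compared against the closed form for $x_{111}$ given below. The following sketch (plain Python with a hand-rolled fourth-order Runge--Kutta integrator; helper names are ours) integrates the ten equations up to $x=\ln 2$:

```python
import math

# Right-hand side of the ODE system above; state order:
# (x0, x00, x000, x1, x10, x100, x11, x110, x111, y)
def rhs(s):
    x0, x00, x000, x1, x10, x100, x11, x110, x111, y = s
    return (
        1 - 3*x0 - x00 - x000 - x1 - x10 - x100 - x11 - x110 - x111 - y,
        x0 - 3*x00,
        x00 - 3*x000,
        x0 - 2*x1,
        2*x00 + x1 - 3*x10,
        3*x000 + x10 - 3*x100,
        x10 - 3*x11,
        2*x100 + x11 - 3*x110,
        x110 - 3*x111,
        x1 + x10 + x100 + 2*x11 + 2*x110 + 3*x111,
    )

def rk4(t_end, steps=5000):
    """Classical RK4 with zero initial conditions."""
    h, s = t_end / steps, [0.0] * 10
    for _ in range(steps):
        k1 = rhs(s)
        k2 = rhs([si + h/2*k for si, k in zip(s, k1)])
        k3 = rhs([si + h/2*k for si, k in zip(s, k2)])
        k4 = rhs([si + h*k for si, k in zip(s, k3)])
        s = [si + h/6*(a + 2*b + 2*c + d)
             for si, a, b, c, d in zip(s, k1, k2, k3, k4)]
    return s

x = math.log(2)
# closed form for x_111 (as solved below) and the constant xi from Claim 1
x111_closed = (math.exp(-3*x)*(x**4/4 + 5*x**3/4 + 27*x**2/8 + 39*x/8 + 39/16)
               - 3*math.exp(-2*x)*(x + 1) + 9*math.exp(-x)/16)
xi = (4*x**4 + 20*x**3 + 54*x**2 - 18*x - 21) / 128
x111_rk4 = rk4(x)[8]
```

All three quantities agree to high precision with $\xi \approx 0.0004035$.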
In particular, we get that $$ x_{111}(x) = \frac{e^{-3x}x^4}{4} + \frac {5 e^{-3x} x^3}{4} + \frac {27 e^{-3x} x^2}{8} + \frac {39 e^{-3x}x}{8} + \frac {39 e^{-3x}}{16} - 3 e^{-2x} x - 3 e^{-2x} + \frac {9 e^{-x}}{16}. $$ It follows from the DE's method that a.a.s.\ $X_{111}(t) = (1+o(1)) x_{111}(t/n) n$ for any $0 \le t \le n \ln 2$. Hence, a.a.s.\ the number of problematic vertices at the end of the first phase is equal to $(1+o(1))\, x_{111}(\ln 2)\, n$, and the claim holds. \end{proof} The above claim implies that if the player concentrates on achieving minimum degree 2 as soon as possible (that is, plays greedily until the graph has minimum degree equal to 2), then a.a.s.\ there will be $(\xi+o(1)) n$ problematic vertices at the end of the first phase. If she continues playing greedily, then a.a.s.\ some positive fraction of these problematic vertices will remain present in the graph. Getting rid of them will take linearly many steps. As a result, the player might want to adjust her strategy and not play greedily, but start paying attention to problematic vertices instead. We now argue that this will also slow her down. For a given $\delta \in [0,1]$ ($\delta=\delta(n)$ could be a function of $n$), let $\mathcal{F}_{\delta}$ be the family of strategies in which $(1-\delta) n \ln 2$ steps in the first phase are greedy (that is, the player selects some isolated vertex) but $\delta n \ln 2$ steps are non-greedy (that is, the player selects some vertex of degree at least 1). We will show that playing non-greedily carries a penalty in the form of reaching minimum degree 2 later in comparison to the minimum degree 2 process. \begin{claim}\label{claim:2} Fix any $\delta \in [0,1]$.
For any strategy from the family $\mathcal{F}_{\delta}$, a.a.s.\ it takes at least $$ (\ln 2 + \ln(1+\ln 2) + \epsilon_1(\delta) + o(1)) n $$ steps for $G_t$ to reach minimum degree 2, where $$ \epsilon_1(\delta) = \ln \left( (2^{1 + \delta} - 1) \ln(2^{1 + \delta} - 1) - 2^{1 + \delta} \delta \ln 2 + (1 + \ln 2) 2^{\delta} \right) - \delta \ln 2 - \ln(1 + \ln 2), $$ for $\delta\in[0,1/2]$ and $\epsilon_1(\delta)=\epsilon_1(1/2)$ for $\delta\in(1/2,1]$. \end{claim} Note that $\epsilon_1(\delta)$ is an increasing function of $\delta$ on $[0,1/2]$ and $\epsilon_1(0)=0$ (which corresponds to the original minimum degree 2 process). \begin{proof} It is important to notice that the objective here is only to eliminate all vertices of degree below 2, and thus the player does not need to worry about problematic vertices. First consider $\delta\in[0,1/2]$. As in the case of the unrestricted minimum degree 2 process (which corresponds to $\delta=0$), it is straightforward to see (for example, by a simple coupling argument) that it is always beneficial to play a greedy move instead of a non-greedy one\footnote{For any strategy $\bf{f}$ of $\mathcal{F}_{\delta}$ which does not prioritize greedy moves first, there exists another strategy within $\mathcal{F}_{\delta}$ which \textit{does} prioritize greedy moves first, and whose completion time is stochastically dominated by the completion time of $\bf{f}$.}. Hence, in order to achieve our goal, the best strategy from the family $\mathcal{F}_{\delta}$ is to play on vertices of degree 0 during the first $(1-\delta)n \ln 2$ steps. After that, the player should select vertices of degree 1 until the end of the first phase, that is, during the following $\delta n \ln 2$ steps.
As there are no restrictions on the game after that (in particular, no restrictions on the number of non-greedy moves), she should play greedily until the end of the game; that is, play on vertices of degree 0 until they disappear and then play on vertices of degree 1. Hence, both the first and the second phase are split into two sub-phases, depending on which type of vertex is selected. In order to analyze how long it takes to finish this process, we need to keep track of two random variables: $Y(t)$ and $Z(t)$, the number of vertices at time $t$ of degree 0 and 1, respectively. We say that a move is of \textbf{type $i$} (where $i \in \{0,1\}$) if the player plays on a vertex of degree $i$. It is not difficult to see that \begin{eqnarray*} {\mathbb E} \Big( Y(t+1) - Y(t) ~~|~~ G_t \text{ and type } i \Big) &=& - \delta_{i = 0} - \frac {Y(t)}{n}, \\ {\mathbb E} \Big( Z(t+1) - Z(t) ~~|~~ G_t \text{ and type } i \Big) &=& \delta_{i = 0} - \delta_{i=1} + \frac {Y(t)}{n} - \frac {Z(t)}{n}, \end{eqnarray*} where $\delta_A$ is the indicator function ($\delta_A=1$ if $A$ is true and $\delta_A=0$ otherwise). The corresponding system of DEs is \begin{eqnarray*} y'(x) &=& - \delta_{i = 0} - y(x), \\ z'(x) &=& \delta_{i = 0} - \delta_{i=1} + y(x) - z(x). \end{eqnarray*} The initial condition is $y(0)=1$ and $z(0)=0$. Moreover, the final values of $y(x)$ and $z(x)$ after each sub-phase are used as the initial values for the next sub-phase. The conclusion follows from the DE's method. We skip the details and refer the interested reader to the Maple worksheets available on-line. It is easy to see that if $1/2<\delta\le 1$, then any strategy from ${\cal F}_{\delta}$ a.a.s.\ takes at least $(\ln 2+\ln(1+\ln 2) +\epsilon_1(1/2)+o(1))n$ steps to build a graph with minimum degree at least 2. During the second sub-phase of phase 1, the player may select any non-isolated vertex if there are no vertices of degree 1 left.
These moves do not help with building a graph with minimum degree 2, and thus it takes even longer to complete the process. \end{proof} Our next task is to estimate the number of problematic vertices at the end of the first phase, provided that the player uses a strategy from the family $\mathcal{F}_\delta$. \begin{claim}\label{claim:3} Fix any $\delta \in [0,\xi/(2 \ln 2)]$, where $\xi$ is defined in Claim~\ref{claim:1}. For any strategy from the family $\mathcal{F}_{\delta}$, a.a.s.\ there are at least $(\xi-2\delta \ln 2+o(1))n$ problematic vertices at the end of the first phase. \end{claim} \begin{proof} It is not clear what the best strategy for minimizing the number of problematic vertices is. So, in order to keep the argument as simple as possible, we will help the player and propose the following auxiliary game, a mixture of the on-line and off-line variants of the game. We simply run the greedy algorithm by selecting an isolated vertex in each step of the process. It follows from Claim~\ref{claim:1} that a.a.s.\ there are $(\xi+o(1))n$ problematic vertices at the end of the first phase. After that, we ask the player to `rewind' the process and carefully `rewire' a $\delta$ fraction of the moves in any way she wants, keeping the remaining $1-\delta$ fraction of the moves greedy, as required. Each modified move affects at most two problematic vertices, so the number of problematic vertices decreases by at most $2 \cdot \delta n \ln 2$. Since this task is clearly much easier for the player than the original one, the lower bound follows. \end{proof} Our final task is to combine all results together. \begin{claim}\label{claim:4} Fix any $\delta \in [0,\xi/(2 \ln 2)]$, where $\xi$ is defined in Claim~\ref{claim:1}.
For any strategy from the family $\mathcal{F}_{\delta}$, a.a.s.\ it takes at least $$ (\ln 2 + \ln(1+\ln 2) + \epsilon_1(\delta) + \epsilon_2(\delta) + o(1)) n $$ steps for $G_t$ to reach minimum degree 2 and remove all problematic vertices that were created during the first phase. The function $\epsilon_1(\delta)$ is defined in Claim~\ref{claim:2} and \begin{eqnarray*} \epsilon_2(\delta) &=& \frac {\ln \big( 3 \tau(\delta) + 1 \big)}{3}, \\ \tau(\delta) &=& (\xi - 2\delta \ln 2)\ \exp(-3 \ln (1+\ln 2) - 3 \epsilon_1(\delta)). \end{eqnarray*} \end{claim} \begin{proof} As in the proof of the previous claim, it is not clear what the best strategy is. Since we aim for an easy argument without optimizing the constants, we propose that the player play the following auxiliary game. We let her play the degree-greedy algorithm from the family $\mathcal{F}_\delta$ which optimizes the time needed to achieve minimum degree 2 (without worrying about problematic vertices). At the end of the first phase we artificially `destroy' some problematic vertices (if needed), leaving only $(\xi-2\delta \ln 2+o(1)) n$ of them in the graph. Clearly, this is an easier game for the player to play. Indeed, by Claim~\ref{claim:3} any strategy from $\mathcal{F}_\delta$ creates at least that many problematic vertices, so this is certainly a sweet deal for her. The player continues the game trying to reach minimum degree at least 2 and to destroy the remaining problematic vertices. It is straightforward to see that the best strategy is to continue playing the degree-greedy algorithm, playing on the remaining isolated vertices before playing on vertices of degree 1. That part takes $(\ln(1+\ln 2)+\epsilon_1(\delta)+o(1))n$ steps by Claim~\ref{claim:2}. In the meantime, some of the vertices selected by the semi-random graph process land on the in-neighbours of problematic vertices.
The probability that a given problematic vertex is not destroyed is equal to $$ \left( 1 - \frac {3}{n} \right)^{(\ln(1+\ln 2)+\epsilon_1(\delta)+o(1))n} = \exp \Big( - 3 \big( \ln(1+\ln 2)+\epsilon_1(\delta) \big) \Big) + o(1). $$ Hence a.a.s.\ there are $(\tau(\delta)+o(1))n$ problematic vertices at this point. After that, the player has to destroy the remaining problematic vertices. Obviously, the best strategy is to choose $v_t$ to be one of the first three in-neighbours of a problematic vertex. A problematic vertex $x$ can also be destroyed if $u_t$ happens to be one of these in-neighbours. Let $Y(t)$ be the number of problematic vertices at the end of step $t$ (for simplicity counting from $t=0$). It is straightforward to see that $$ {\mathbb E} \Big( Y(t+1) - Y(t) ~~|~~ G_t \Big) = - 1 - \frac {3Y(t)}{n}. $$ The corresponding DE is $y'(x) = - 1 - 3y(x)$ with the initial condition $y(0)=\tau(\delta)$. It follows that $y(x) = -1/3 + (\tau(\delta) +1/3)e^{-3x}$, and so we get that a.a.s.\ it takes another $(\epsilon_2(\delta)+o(1))n$ steps to finish the game, and the claim holds. \end{proof} Theorem~\ref{thm:lower_bound} now follows from Claim~\ref{claim:4}. First, extend $\epsilon_2(\delta)$ to $[0,1]$ by setting $\epsilon_2(\delta)=0$ for $\delta\in(\xi/(2\ln 2),1]$. We have shown that for every $\delta\in[0,1]$, any strategy from ${\cal F}_{\delta}$ a.a.s.\ takes at least $(\ln 2+\ln(1+\ln 2)+\epsilon_1(\delta)+\epsilon_2(\delta)+o(1))n$ steps to build a Hamilton cycle. Note that $\epsilon_1(\delta)$ is an increasing function of $\delta$; the more non-greedy moves the player needs to play, the longer the game is. On the other hand, $\epsilon_2(\delta)$ is a decreasing function on $[0,\xi/(2 \ln 2)]$ with $\epsilon_2(\xi/(2 \ln 2))=0$; the non-greedy moves can be spent on destroying problematic vertices, and so their number decreases with $\delta$.
A more careful investigation shows that $\epsilon_1(\delta) + \epsilon_2(\delta)$ is a decreasing function on $[0,\xi/(2 \ln 2)]$; beyond that point it is equal to $\epsilon_1(\delta)$, which is increasing. Therefore we get that $$ \epsilon = \min_{\delta} \Big( \epsilon_1(\delta) + \epsilon_2(\delta) \Big) = \epsilon_1 \left( \frac {\xi}{2 \ln 2} \right) + \epsilon_2 \left( \frac {\xi}{2 \ln 2} \right) = \epsilon_1 \left( \frac {\xi}{2 \ln 2} \right) \approx 2.403 \cdot 10^{-8}. $$
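The final optimization is easy to reproduce numerically. The following sketch (plain Python; the grid search and tolerances are our own choices) evaluates $\xi$, $\epsilon_1$, and $\epsilon_2$ as defined above and recovers $\epsilon \approx 2.403\cdot 10^{-8}$ at $\delta = \xi/(2\ln 2)$:

```python
import math

ln2 = math.log(2)
# xi from Claim 1 and the boundary point delta* = xi / (2 ln 2)
xi = (4 * ln2**4 + 20 * ln2**3 + 54 * ln2**2 - 18 * ln2 - 21) / 128
dstar = xi / (2 * ln2)

def eps1(d):
    d = min(d, 0.5)                      # eps1 is constant on (1/2, 1]
    inner = ((2**(1 + d) - 1) * math.log(2**(1 + d) - 1)
             - 2**(1 + d) * d * ln2 + (1 + ln2) * 2**d)
    return math.log(inner) - d * ln2 - math.log(1 + ln2)

def eps2(d):
    if d >= dstar:                       # no problematic vertices remain
        return 0.0
    tau = (xi - 2 * d * ln2) * math.exp(-3 * math.log(1 + ln2) - 3 * eps1(d))
    return math.log(3 * tau + 1) / 3

# minimize eps1 + eps2 over a grid of delta values in [0, 1]
grid = [k * dstar / 1000 for k in range(1001)] + [0.01, 0.1, 0.5, 1.0]
eps = min(eps1(d) + eps2(d) for d in grid)
```

The minimum over the grid is attained at the last grid point of $[0,\delta^*]$, matching the displayed value of $\epsilon$.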
% Source: https://arxiv.org/abs/1507.02753

\title{The Density of Shifted and Affine Eisenstein Polynomials}

\begin{abstract}
In this paper we provide a complete answer to a question by Heyman and Shparlinski concerning the natural density of polynomials which are irreducible by Eisenstein's criterion after applying some shift. The main tool we use is a local to global principle for density computations over a free $\mathbb Z$-module of finite rank.
\end{abstract}
\section{Introduction} \label{sec:introduction} Let $\mathbb{Z}$ be the ring of rational integers. The Eisenstein irreducibility criterion~\cite{bib:eisenstein,bib:schoenemann} is a very convenient tool to establish that a polynomial in $\mathbb{Z}[x]$ is irreducible. It is a well understood fact that the density of irreducible polynomials of fixed degree $d$ among all the polynomials of degree $d$ is equal to one. The question which naturally arises is the following: \begin{question}\label{qu:normal_Eis} What is the density of polynomials which are irreducible by the Eisenstein criterion? \end{question} More informally, how likely is it that checking whether a random polynomial is irreducible using only the Eisenstein irreducibility criterion leads to success? In \citep{bib:PolyDub,bib:shparlinskiEisen} the authors deal with Eisenstein polynomials of fixed degree with coefficients over $\mathbb{Z}$. They provide a complete answer to the above question in the case of monic (See~\citep{bib:PolyDub}, \cite[Theorem~1]{bib:shparlinskiEisen}) and non-monic (See~\cite[Theorem~2]{bib:shparlinskiEisen}) Eisenstein polynomials. From now on we will specialize to the case of non-monic Eisenstein polynomials, since the proofs and methods can be easily adapted from one case to the other. In~\cite{bib:shparlinskiEisen}, the authors consider the set of polynomials of degree at most $d$ having integer coefficients bounded in absolute value by $B$ (the \emph{height} of a polynomial) and give a sharp estimate for the number $\rho(B)$ of polynomials which are irreducible by the Eisenstein criterion. The \emph{natural density} of Eisenstein polynomials is then the limit of the sequence $\rho(B)/(2B)^{d+1}$, which fully answers Question~\ref{qu:normal_Eis}. As is well known, a polynomial $f(x)$ is irreducible if and only if $f(x+i)$ is irreducible for all $i\in \mathbb{Z}$. 
Using this simple observation, one could check irreducibility by trying to use the Eisenstein criterion for many $i$. How likely is it that this procedure works? More formally, \begin{question}\label{qu:shiftedquestion} What is the natural density of polynomials $f(x)$ for which $f(x+i)$ is irreducible by the Eisenstein criterion for some integer shift $i$? \end{question} In~\citep{bib:heyman2014shifted}, Heyman and Shparlinski address this question, giving a lower bound on this density. Nevertheless, the question regarding the exact density remained open. In this paper, we provide a complete solution to Question~\ref{qu:shiftedquestion} using a local to global principle for densities~\citep[Lemma~20]{bib:poonenAnn}. Using similar methods, we also provide a solution to the question appearing in \cite[Section~7]{bib:heyman2014shifted} about \emph{affine} Eisenstein polynomials. Our proofs are also supported by Monte Carlo experiments which we provide in Section~\ref{sec:simulations}. \section{Notation} \label{sec:notation} In this section we fix the notation that will be used throughout the paper. \begin{definition} Let $R$ be an integral domain and $R[x]$ be the ring of polynomials with coefficients in $R$. We say that $f(x)=\sum^n_{i=0} \alpha_i x^i\in R[x]$ of degree $n$ is \emph{Eisenstein with respect to a prime ideal $p$} or \emph{$p$-Eisenstein} if \begin{itemize} \item $\alpha_n\notin p$. \item $\alpha_i\in p$ for all $i\in \{0,\dots,n-1\}$. \item $\alpha_0\notin p^2$. \end{itemize} We say that $f(x)$ is \emph{Eisenstein} if it is Eisenstein with respect to some prime ideal $p$. \end{definition} In this paper, we will only consider the ring of integers $R = \mathbb{Z}$ and the rings of $p$-adic integers $R = \mathbb{Z}_p$.
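Over $\mathbb{Z}$, the three conditions of the definition are easy to test mechanically; a minimal sketch (the function name is ours, not from the paper):

```python
def is_p_eisenstein(coeffs, p):
    """Test the three Eisenstein conditions for f = sum a_i x^i,
    given as coeffs = [a_0, a_1, ..., a_n] with integer entries."""
    a0, middle, an = coeffs[0], coeffs[1:-1], coeffs[-1]
    return (an % p != 0                            # a_n not in (p)
            and all(a % p == 0 for a in middle)    # a_1, ..., a_{n-1} in (p)
            and a0 % p == 0 and a0 % (p * p) != 0) # a_0 in (p) \ (p^2)

# x^5 + 6x + 3 is 3-Eisenstein (hence irreducible over Q)
assert is_p_eisenstein([3, 6, 0, 0, 0, 1], 3)
# x^2 + 8x - 16 is not 2-Eisenstein: 2^2 divides the constant term
assert not is_p_eisenstein([-16, 8, 1], 2)
```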
\begin{definition} For any subset $A\subseteq \mathbb{Z}^d$, we define \begin{align*} \overline{\rho}(A) &:= \limsup_{B \to \infty}\frac{\abs[\big]{A \cap [-B,B[^d}}{(2B)^d}, \\ \underline{\rho}(A) &:= \liminf_{B \to \infty}\frac{\abs[\big]{A \cap [-B,B[^d}}{(2B)^d}. \\ \intertext{If these coincide, we denote their value by $\rho(A)$ and call it the \emph{natural density} of $A$:} \rho(A) &:= \smashoperator[r]{\lim_{B \to \infty}} \frac{\abs[\big]{A \cap [-B,B[^d}}{(2B)^d}. \end{align*} \end{definition} In what follows, we will identify the module $R[x]_{\le n}$ of polynomials of degree at most $n$ with $R^{n+1}$ via the standard basis $\{1, x, \ldots, x^n\}$. \begin{definition} Let $E\subseteq \mathbb{Z}^{n+1}$ be the set of degree $n$ Eisenstein polynomials over the integers. Let $E_p$ be the set of degree $n$ Eisenstein polynomials over $\mathbb{Z}_p$. \end{definition} The reader should notice that we are computing the density of shifted (or affine) Eisenstein polynomials of degree \emph{exactly} $n$ among polynomials of degree \emph{at most} $n$. Nevertheless, it is easy to see that the density of shifted (and also affine, see Remark~\ref{rem:eislowdeg}) Eisenstein polynomials of degree less than or equal to $n$ is the same. \section{Shifted Eisenstein Polynomials} \label{sec:shifted} In this section, we determine the density of polynomials $f(x) \in \mathbb{Z}[x]$ such that $f(x+i)$ is Eisenstein for some shift $i \in \mathbb{Z}$. For this, let $\sigma$ be the linear map defined by \begin{align*} \sigma\colon \mathbb{Z}^{n+1} &\longrightarrow \mathbb{Z}^{n+1} \\ f(x) &\longmapsto f(x+1). \end{align*} It is easy to see that $\sigma$ has determinant one. Similarly, we get a determinant one map over $\mathbb{Z}_p$ for any $p$, which we will also denote by $\sigma$.
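On coefficient vectors, $\sigma$ acts through the binomial theorem; a short sketch (our own helper, not from the paper) makes this explicit and illustrates its invertibility:

```python
from math import comb

def shift(coeffs, i):
    """Coefficients of f(x + i), where coeffs = [a_0, ..., a_n]."""
    n = len(coeffs) - 1
    out = [0] * (n + 1)
    for k, a in enumerate(coeffs):           # expand a * (x + i)^k
        for j in range(k + 1):
            out[j] += a * comb(k, j) * i ** (k - j)
    return out

f = [7, 0, -3, 1]                            # f(x) = x^3 - 3x^2 + 7
assert shift(f, 1) == [5, -3, 0, 1]          # f(x+1) = x^3 - 3x + 5
# sigma is invertible (it has determinant one): shifting back recovers f
assert shift(shift(f, 1), -1) == f
```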
\begin{definition} Let $\overline E\subseteq \mathbb{Z}^{n+1}$ be the set of degree $n$ polynomials which are Eisenstein after applying some shift $i\in \mathbb{Z}$: \begin{equation*} \overline E = \{f(x)\in\mathbb{Z}^{n+1} \,:\, f(x+i)\in E \text{ for some } i \in \mathbb{Z} \}. \end{equation*} We call these polynomials \emph{shifted Eisenstein}. \end{definition} In order to compute the density of $\overline{E}$, we need to consider each prime $p$ separately. We do this by working over the $p$-adic integers. \begin{definition} Let $\overline E_p \subseteq \mathbb{Z}_p^{n+1}$ be the set of degree $n$ polynomials of $\mathbb{Z}_p[x]$ which are Eisenstein after applying some shift $i\in \mathbb{Z}_p$: \begin{equation*} \overline E_p = \{f(x)\in\mathbb{Z}_p^{n+1} \,:\, f(x+i)\in E_p \text{ for some } i \in \mathbb{Z}_p \}. \end{equation*} We also call these polynomials shifted Eisenstein, since it will always be clear from the context to which set we are referring. \end{definition} Notice that $\overline{E}_p\cap \mathbb{Z}^{n+1}$ is exactly the set of polynomials of $\mathbb{Z}[x]$ of degree $n$ which are shifted $p$-Eisenstein. \begin{lemma}\label{thm:disjoint_shifted} If $f(x) \in \mathbb{Z}_p^{n+1}$ is shifted Eisenstein, then it is so with respect to exactly one rational integer shift $i \in \{0, \ldots, p-1\}$. In other words, \begin{equation*} \overline E_p = \bigsqcup_{i = 0}^{p-1} \sigma^{-i} E_p. \end{equation*} \end{lemma} \begin{proof} We clearly have \begin{equation*} \bigcup_{i = 0}^{p-1} \sigma^{-i} E_p \subseteq \overline E_p. \end{equation*} The other inclusion is easy but not completely trivial. Let $f(x) = \sum^n_{i=0} \alpha_i x^i \in E_p$ and $k \in \mathbb{Z}_p$. We first show that $f(x + kp)$ is also Eisenstein: clearly $f(x)\equiv f(x+kp)$ in $(\mathbb{Z}_p/p\mathbb{Z}_p)[x]$, so the only condition which one has to check is that the coefficient of the term of degree zero of $f(x+kp)$ is not in $p^2\mathbb{Z}_p$.
This coefficient is in fact $f(kp) = \alpha_0 + \alpha_1 k p + \sum^{n}_{i=2} \alpha_i k^i p^i$. Modulo $p^2\mathbb{Z}_p$ we have that \begin{itemize} \item $\alpha_i k^i p^i$ is congruent to zero for $i\geq 2$, \item $\alpha_1 k p$ is congruent to zero since $\alpha_1$ is in $p\mathbb{Z}_p$, \item $\alpha_0$ is not congruent to zero since the polynomial $f(x)$ is Eisenstein, \end{itemize} from which it follows that the polynomial $f(x+kp)$ is Eisenstein in $\mathbb{Z}_p[x]$. Now let $f(x) \in \overline{E}_p$; then $f(x+u)$ is Eisenstein for some $u \in \mathbb{Z}_p$. The inclusion \begin{equation*} \overline E_p \subseteq \bigcup_{i = 0}^{p-1} \sigma^{-i} E_p \end{equation*} will follow if we show that we can select $u$ in $\{0, \ldots, p-1\}$. Write $u = kp + i$ with $i \in \{0, \ldots, p-1\}$ and $k \in \mathbb{Z}_p$. Using what we proved above, we see that $f(x + u - kp) = f(x + i)$ is Eisenstein, and the inclusion follows. We now show that the union is disjoint, i.e.\ $\sigma^{-i} E_p\cap \sigma^{-j} E_p=\emptyset$ for any $i,j\in \{0,\dots,p-1\}$ with $i\neq j$. Without loss of generality, we can assume $i>j$. Then \begin{equation*} \sigma^{-i} E_p \cap \sigma^{-j} E_p = \emptyset \:\Longleftrightarrow\: E_p \cap \sigma^{i-j} E_p = \emptyset. \end{equation*} Let $t:=i-j$ and $f(x)=\sum^{n}_{k=0} \alpha_k x^k\in E_p$; then the coefficient of the degree zero term of $f(x+t)$ is $f(t)=\alpha_n t^n+ \sum^{n-1}_{k=0} \alpha_k t^k$. Now, the reduction of $\alpha_k$ modulo $p$ is zero for any $k\le n-1$, while $\alpha_n$ and $t$ are invertible modulo $p$. Hence $f(t)\equiv \alpha_n t^n \not\equiv 0$ modulo $p\mathbb{Z}_p$, so $f(x+t)$ is not Eisenstein. \end{proof} Let $\mu_p$ be the $p$-adic measure on $\mathbb{Z}_p^{n+1}$ and $\mu_{\infty}$ the Lebesgue measure on $\mathbb{R}^{n+1}$. (For basics on the $p$-adic measure, we refer to~\cite{bib:robert2013course}.)
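Since the Eisenstein conditions only depend on the coefficients modulo $p^2$, both the disjointness just proved and the measure $\mu_p(\overline E_p)=(p-1)^2/p^{n+1}$ computed next can be verified by exhaustive counting for small $p$ and $n$. A sketch (our own code, using exact rational arithmetic):

```python
from fractions import Fraction
from itertools import product
from math import comb

def is_p_eisenstein_mod(coeffs, p):
    # the three Eisenstein conditions; they only depend on coeffs mod p^2
    return (coeffs[-1] % p != 0
            and all(a % p == 0 for a in coeffs[1:-1])
            and coeffs[0] % p == 0 and coeffs[0] % (p * p) != 0)

def shift(coeffs, i):
    # coefficients of f(x + i), via the binomial theorem
    n = len(coeffs) - 1
    out = [0] * (n + 1)
    for k, a in enumerate(coeffs):
        for j in range(k + 1):
            out[j] += a * comb(k, j) * i ** (k - j)
    return out

def shifted_density_mod_p2(p, n):
    # exact proportion of shifted p-Eisenstein polynomials among all
    # coefficient vectors modulo p^2
    m, count = p * p, 0
    for f in product(range(m), repeat=n + 1):
        good = [i for i in range(p)
                if is_p_eisenstein_mod(shift(list(f), i), p)]
        assert len(good) <= 1        # the union in the lemma is disjoint
        count += len(good)
    return Fraction(count, m ** (n + 1))

# matches (p-1)^2 / p^(n+1) from the next lemma
assert shifted_density_mod_p2(2, 3) == Fraction(1, 16)
assert shifted_density_mod_p2(3, 3) == Fraction(4, 81)
```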
\begin{lemma} In the above notation we have \[\mu_p(\overline E_p)=\frac{(p-1)^2}{p^{n+1}}.\] \end{lemma} \begin{proof} Since $\sigma^{-1}$ has determinant one, it does not change $p$-adic volumes. Therefore, by Lemma~\ref{thm:disjoint_shifted}, one has $\mu_p(\overline E_p)=p\cdot \mu_p(E_p)$. It is easy to compute the measure $\mu_p(E_p)$ by writing $E_p = (p\mathbb{Z}_p \setminus p^2\mathbb{Z}_p) \times (p\mathbb{Z}_p)^{n-1} \times (\mathbb{Z}_p \setminus p\mathbb{Z}_p)$, which gives $\mu_p(E_p)=\frac{(p-1)^2}{p^{n+2}}$. \end{proof} In order to obtain the density $\rho(\overline E)$ from the local data $\{\mu_p(\overline E_p)\}_p$, we will use the following lemma~\cite[Lemma~20]{bib:poonenAnn}. \begin{lemma}\label{thm:bjornstoll} Suppose $U_\infty\subseteq \mathbb{R}^d$ is such that $\mathbb{R}^+\cdot U_\infty=U_\infty$, $\mu_\infty(\partial U_\infty)=0$. Let $U_\infty^1=U_\infty\cap [-1,1]^d$ and $s_\infty=\mu_\infty(U_\infty^1)$. Let $U_p\subseteq \mathbb{Z}_p^d$, $\mu_p(\partial U_p)=0$ and $s_p=\mu_p(U_p)$ for each prime $p$. Let $M_\mathbb{Q}$ be the set of places of $\mathbb{Q}$. Moreover, suppose that \begin{equation}\label{eq:conditiondens} \lim_{M \to \infty} \rho(\{a \in \mathbb{Z}^d \,:\, a\in U_p \text{ for some finite prime $p$ greater than $M$}\}) = 0. \end{equation} Let $P\colon \mathbb{Z}^d \longrightarrow 2^{M_\mathbb{Q}}$ be defined as $P(a) = \{v\in M_\mathbb{Q} \,:\, a \in U_v\}$. Then we have: \begin{enumerate} \item $\sum_v s_v$ converges. \item For any $T \subseteq 2^{M_\mathbb{Q}}$, $\nu(T):=\rho(P^{-1}(T))$ exists and defines a measure on $2^{M_\mathbb{Q}}$, which is concentrated at the finite subsets of $M_\mathbb{Q}$. \item Let $S$ be a finite subset of $M_\mathbb{Q}$, then \begin{equation*} \nu(\{S\}) = \prod_{v \in S} s_v \prod_{v \notin S} (1-s_v). \end{equation*} \end{enumerate} \end{lemma} \begin{proof} See~\cite[Lemma~20]{bib:poonenAnn}.
\end{proof} After showing that condition (\ref{eq:conditiondens}) applies, we can use Lemma~\ref{thm:bjornstoll} to determine the density of shifted Eisenstein polynomials over the integers. \begin{theorem}\label{thm:shifted3} Let $n \ge 3$. The density of shifted Eisenstein polynomials of degree $n$ is \begin{equation}\label{eq:shifted3} \rho(\overline E) = 1 - \smashoperator{\prod_{p \text{ prime}}} \; \left(1-\frac{(p-1)^2}{p^{n+1}}\right). \end{equation} \end{theorem} \begin{proof} Set $U_p=\overline{E}_p$ for all $p$ and $U_\infty=\emptyset$. The conditions $\mu_p(\partial U_p) = 0$ hold since $U_p$ is both closed and open. Notice that in the notation of Lemma~\ref{thm:bjornstoll} we have that $P^{-1}(\{\emptyset\})$ equals the complement of $\overline E$. Therefore, if condition (\ref{eq:conditiondens}) is verified, we get the claim: \begin{equation*} \rho(\overline E) = 1-\smashoperator{\prod_{p \text{ prime}}} \left(1-s_p\right) = 1-\smashoperator{\prod_{p \text{ prime}}} \; \left(1-\frac{(p-1)^2}{p^{n+1}}\right). \end{equation*} Let us now show that the condition indeed holds: \begin{gather} \lim_{M \to \infty}\overline{\rho} (\{a\in \mathbb{Z}^{n+1} \,:\, a\in \overline{E}_p \text{ for some finite prime $p$ greater than $M$}\}) \nonumber \\ = \lim_{M \to \infty}\limsup_{B \to \infty} \frac{\abs[\big]{\bigcup_{p>M}\overline{E}_p\cap [-B,B[^{n+1}}}{(2B)^{n+1}}. \label{eq:fundamentallimit} \end{gather} We have $\overline E_p \cap [-B,B[^{n+1}\,=\emptyset$ for $p>CB^2$, where $C$ is a constant depending only on the degree $n$. One can see this using the following argument: let $f(x)$ be a polynomial in $[-B,B[^{n+1}$ for which $f(x+i)$ is $p$-Eisenstein; then~\cite[Lemma~1]{bib:heyman2014shifted} \begin{equation*} p^{n-1} \mid \disc(f(x+i)) = \disc(f(x)) \ne 0.
\end{equation*} Now, the discriminant of $f(x)$ is a polynomial of degree $2n-2$ in the coefficients, whence \begin{equation*} p^{n-1}\leq\abs[\big]{\disc(f(x))}\leq D B^{2n-2} \end{equation*} for some constant $D$ depending only on $n$. Therefore, for $C=D^{1/(n-1)}$, we have $p\leq C B^{2}$. Thus, we have just shown that for fixed $B$, the union in (\ref{eq:fundamentallimit}) is finite, and we can bound it by \begin{gather} \lim_{M \to \infty}\limsup_{B \to \infty} \frac{\abs[\big]{\bigcup_{CB^2>p>M}\overline{E}_p \cap [-B,B[^{n+1}}} {(2B)^{n+1}} \nonumber \\ \leq \lim_{M \to \infty}\limsup_{B \to \infty} \smashoperator[r]{\sum_{CB^2>p>M}} \frac{\abs[\big]{\overline{E}_p\cap [-B,B[^{n+1}}}{(2B)^{n+1}}. \label{eq:equationsum} \end{gather} Given the order of the limits, we can fix the following setting: $M>n$ and $B>M$. Now let us bound $\abs[\big]{\overline{E}_p\cap [-B,B[^{n+1}}$ in the following two cases: \begin{enumerate} \item $2B<p$: In this case, we can consider $[-B,B[^{n+1}$ as a subset of $\smash{\mathbb{F}_p^{n+1}}$ without losing any information. The reader should notice that modulo $p$, the elements of $\overline{E}_p$ have a root of multiplicity $n$ at some $i\in \mathbb{F}_p$. Now, the key observation is the following: The reduction modulo $p$ of the polynomials in $[-B,B[^{n+1}\;\cap\; \overline{E}_p$ is contained in the set \begin{equation*} S_p := \{a(x-i)^n \,:\, a \in [-B,B[\,\setminus \{0\} \text{ and } {-nai} \in [-B,B[\}. \end{equation*} This encodes the condition that the degree $n$ and degree $n-1$ coefficients live in $[-B,B[$: \begin{equation*} [-B,B[^{n+1} \,\cap\, \overline E_p\subseteq S_p. \end{equation*} Observe now that $\abs{S_p}=(2B-1)2B\leq (2B)^2$, since $n$ and $a$ are invertible modulo $p$ (recall $p>M>n$). We conclude that \begin{equation*} \abs[\big]{[-B,B[^{n+1} \,\cap\, \overline E_p}\leq \abs{S_p}\leq (2B)^2. \end{equation*} Notice that this bound is uniform in $p$. \item $2B\geq p$: In this case, the bound is more natural.
Consider the projection map \begin{align*} \pi\colon \mathbb{Z}^{n+1} &\longrightarrow \mathbb{F}_p^{n+1} \\ \intertext{and the shift map modulo $p$} \sigma^{-1}\colon \mathbb{F}_p^{n+1} &\longrightarrow \mathbb{F}_p^{n+1} \\ f(x) &\longmapsto f(x-1). \end{align*} Consider the sets of polynomials $L_p := \{ax^n \,:\, a\in\mathbb{F}_p^*\}$ and \begin{equation}\label{eq:Lp_union} \overline{L}_p = \bigcup^{p-1}_{i=0} \sigma^{-i}L_p. \end{equation} We have $\abs{\overline L_p}\leq p^2$. Notice that \begin{equation}\label{eq:observation} \pi([-B,B[^{n+1} \,\cap\, \overline E_p)\subseteq \overline{L}_p. \end{equation} At this step, we observe that the projection is at most $\ceil{2B/p}^{n+1}$ to one; therefore we can bound $\abs[\big]{[-B,B[^{n+1} \,\cap\, \overline E_p}$ using the projection map and condition (\ref{eq:observation}): \begin{equation*} \abs[\big]{[-B,B[^{n+1} \,\cap\, \overline E_p} \leq \abs{\overline{L}_p} \cdot \ceil{2B/p}^{n+1} \leq p^2 \left(\frac{2B}{p}+1\right)^{n+1} \leq p^2 \left(\frac{4B}{p}\right)^{n+1}, \end{equation*} where the last inequality follows from $2B\geq p$. Altogether, the bound we obtain is of the form \begin{equation*} \abs[\big]{[-B,B[^{n+1} \,\cap\, \overline E_p}\leq 4^{n+1} \frac{B^{n+1}}{p^{n-1}}. \end{equation*} \end{enumerate} Let us now come back to the sum in (\ref{eq:equationsum}), which we can split according to the two cases above: \begin{equation}\label{eq:estimate_sum} \smashoperator[r]{\sum_{CB^2>p>M}} \frac{\abs[\big]{\overline{E}_p \,\cap\, [-B,B[^{n+1}}}{(2B)^{n+1}} \leq \sum_{CB^2>p>2B}\frac{(2B)^2}{(2B)^{n+1}} + \sum_{2B \ge p>M}\frac{2^{n+1}}{p^{n-1}}. \end{equation} Using the limit in $B$, the first sum goes to zero by the prime number theorem since $n\geq 3$. As $B$ goes to infinity, the other sum becomes a converging series (again $n\geq 3$) starting at the index $M$. Letting $M$ go to infinity, this too goes to zero. 
Hence we have shown that condition~(\ref{eq:conditiondens}) holds, and the theorem follows. \end{proof} In degree $2$, the above proof does not work: Indeed, it is easily seen that $\sum_p s_p$ diverges for $n=2$, so by the first claim of Lemma~\ref{thm:bjornstoll}, the proof we gave for degrees greater than or equal to $3$ is doomed to fail in degree $2$. However, we have a much simpler application of the lemma which shows that the density of shifted Eisenstein polynomials of degree $2$ is indeed one, as Theorem~\ref{thm:shifted3} suggests. \begin{proposition}\label{thm:shifted2} The density of shifted Eisenstein polynomials of degree $n = 2$ is one. \end{proposition} \begin{proof} Again, let $U_\infty = \emptyset$. We now apply Lemma~\ref{thm:bjornstoll} to a truncated sequence of sets. For this, let $M$ be a positive integer and \begin{equation*} U_p = \begin{cases} \overline{E}_p &\text{ if } p \le M \\ \emptyset &\text{ if } p > M. \end{cases} \end{equation*} This truncated sequence now automatically satisfies condition~(\ref{eq:conditiondens}), and we get the density \begin{equation*} \underline\rho(\overline{E}) \ge \rho\Big(\smashoperator[r]{\bigcup_{p \le M}} \overline{E}_p \cap \mathbb{Z}^{3}\Big) = 1 - \smashoperator{\prod_{p \le M}}\left(1-\frac{(p-1)^2}{p^{3}}\right). \end{equation*} Letting $M$ tend to infinity gives $\rho(\overline{E}) = 1$, as the product diverges to zero. \end{proof} \begin{remark} Even though the density of shifted Eisenstein polynomials of degree $2$ is one, not all irreducible polynomials are Eisenstein for some shift (or even affine transformation): Take for example the polynomial $f(x) = x^2 + 8x - 16$, which is irreducible over $\mathbb{Z}$. Its discriminant is $2^7$, so it could only be shifted Eisenstein with respect to $2$. But neither $f(x)$ nor $f(x+1) = x^2 + 10x - 7$ is $2$-Eisenstein. 
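These assertions are easy to check mechanically. The following is a small dependency-free Python sketch (the helper names `is_eisenstein` and `shift` are ours, purely illustrative):

```python
def is_eisenstein(coeffs, p):
    """coeffs = [a0, ..., an], constant term first.  Eisenstein at p:
    p divides a0, ..., a_{n-1}; p does not divide an; p^2 does not divide a0."""
    return (all(c % p == 0 for c in coeffs[:-1])
            and coeffs[-1] % p != 0
            and coeffs[0] % (p * p) != 0)

def shift(coeffs, i):
    """Coefficient list (constant term first) of f(x + i), via Horner's scheme."""
    res = [0]
    for c in reversed(coeffs):
        nxt = [0] * (len(res) + 1)   # res <- res * (x + i) + c
        for k, r in enumerate(res):
            nxt[k + 1] += r
            nxt[k] += i * r
        nxt[0] += c
        res = nxt
    while len(res) > 1 and res[-1] == 0:
        res.pop()
    return res

f = [-16, 8, 1]                      # x^2 + 8x - 16, constant term first
disc = f[1] ** 2 - 4 * f[2] * f[0]   # b^2 - 4ac = 128 = 2^7
```

Here `shift(f, 1)` returns the coefficients of $f(x+1) = x^2 + 10x - 7$, and one checks directly that neither it nor $f$ itself satisfies the Eisenstein condition at $p=2$.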
\end{remark} \section{Affine Eisenstein Polynomials} \label{sec:affine} In~\cite[Section~7]{bib:heyman2014shifted}, the question was also raised about the density of polynomials that become Eisenstein after an arbitrary affine transformation, instead of only considering shifts. We can address this question as well, using the same methods as in Section~\ref{sec:shifted}. \begin{definition}\label{def:affine_transformation} For $f(x) \in R^{n+1}$ and $A = \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix}\in R^{2\times 2}$, we define the \emph{affine transformation of $f$ by $A$} as \begin{equation*} f * A := (cx + d)^n f\bigg(\frac{ax + b}{cx + d}\bigg). \end{equation*} It is easy to see that, when restricted to $\GL_2(R)$, this is a right group action. \end{definition} As in Section~\ref{sec:shifted}, we consider the set of polynomials with integer coefficients that become Eisenstein after some affine transformation. \begin{definition} Let $\widetilde{E} \subseteq \mathbb{Z}^{n+1}$ be the set of degree $n$ polynomials which become Eisenstein of degree $n$ after some affine transformation $A\in \mathbb{Z}^{2 \times 2}$: \begin{equation*} \widetilde{E} = \{f(x)\in\mathbb{Z}^{n+1} \,:\, f * A \in E \text{ for some } A \in \mathbb{Z}^{2 \times 2} \}. \end{equation*} We call these polynomials \emph{affine Eisenstein}. \end{definition} It is easy to see that if both $f$ and $f * A$ have degree $n$ and $f * A$ is irreducible, then so is $f$. Hence, an affine Eisenstein polynomial is irreducible. \begin{remark}\label{rem:eislowdeg} The reader should notice that also in this case, we only consider affine Eisenstein polynomials of degree \emph{exactly} $n$. Nevertheless, an observation is required: It could happen that a degree $n$ polynomial becomes Eisenstein of \emph{lower} degree after some affine transformation. Fortunately, it can be seen that a polynomial for which this happens is never irreducible. 
Likewise, a polynomial of degree less than $n$ cannot become Eisenstein of degree $n$ after an affine transformation, since any transformation that increases the degree introduces factors $cx + d$. \end{remark} We again consider each prime separately by working over the $p$-adic integers. \begin{definition} Let $\widetilde{E}_p \subseteq \mathbb{Z}_p^{n+1}$ be the set of degree $n$ polynomials of $\mathbb{Z}_p[x]$ which become Eisenstein of degree $n$ after some affine transformation $A \in \mathbb{Z}_p^{2 \times 2}$: \begin{equation*} \widetilde{E}_p = \{f(x)\in\mathbb{Z}_p^{n+1} \,:\, f * A \in E_p \text{ for some } A \in \mathbb{Z}_p^{2 \times 2} \}. \end{equation*} We also call these polynomials affine Eisenstein, since it will always be clear from the context to which set we are referring. \end{definition} In what follows we compute the measure $\mu_p(\widetilde E_p)$. For this, we need to write $\widetilde{E}_p$ as a disjoint union of transformed copies of $E_p$ as in Lemma~\ref{thm:disjoint_shifted}. The following lemma is essential for this. \begin{lemma}\label{thm:affine_equivalence} Assume $f(x) \in \mathbb{Z}_p^{n+1}$ is Eisenstein of degree $n$, and let $A = \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in \mathbb{Z}_p^{2 \times 2}$. Then, $f * A$ is Eisenstein of degree $n$ if and only if $p \mid b$, $p \nmid a$, $p \nmid d$. \end{lemma} \begin{proof} If we write $f(x) = \sum_{i=0}^{n} \alpha_i x^i$ and $f * A = \sum_{l=0}^{n} \beta_l x^l$, then a simple calculation gives \begin{equation}\label{eq:affine_coefficients} \beta_l = \sum_{j=0}^{l} \sum_{s=l}^{n} \binom{n + j - s}{j} \binom{s-j}{l-j} \alpha_{s-j} d^{n-s} b^{s-l} a^{l-j} c^j. \end{equation} Assume now that $f * A$ is Eisenstein, so $p \mid \beta_l$ for $0 \le l \le n-1$, $p^2 \nmid \beta_0$, $p \nmid \beta_n$. Consider first $\beta_0$. Reducing modulo $p$ and using that $p \mid \alpha_i$ for $i < n$, we see that \begin{equation*} \beta_0 \equiv \alpha_n b^n \pmod{p}. 
\end{equation*} Since $p \nmid \alpha_n$, we get that $p \mid b$. Knowing this, we reduce $\beta_0$ modulo $p^2$ and get \begin{equation*} \beta_0 \equiv \alpha_0 d^n + \alpha_1 d^{n-1} b \equiv \alpha_0 d^n \pmod{p^2}, \end{equation*} since $p^2 \mid \alpha_1 b$. From this, we see that $p^2 \nmid \beta_0$ if and only if $p \nmid d$. Finally, we reduce $\beta_n$ modulo $p$ and get \begin{equation*} \beta_n \equiv \alpha_n a^n \pmod{p}, \end{equation*} from which we conclude that $p \nmid a$. Conversely, if we assume that $p \mid b$, $p \nmid a$, $p \nmid d$, the same computations as above show that $p \nmid \beta_n$, $p \mid \beta_0$, $p^2 \nmid \beta_0$, and we easily see from (\ref{eq:affine_coefficients}) that $p \mid \beta_l$ for $0 < l < n$. Hence, $f * A$ is Eisenstein. \end{proof} We denote by $S = \{ \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in \mathbb{Z}_p^{2 \times 2} \,:\, p \mid b, p \nmid a, p \nmid d \}$ the set of matrices from Lemma~\ref{thm:affine_equivalence}. This is a subgroup of $\GL_2(\mathbb{Z}_p)$: the determinant of such a matrix satisfies $ad - bc \equiv ad \not\equiv 0 \pmod{p}$, hence is a unit, and the congruence conditions on the entries are easily checked to be preserved under products and inverses. We can obtain the disjoint union decomposition of $\widetilde{E}_p$ by considering the left cosets of $S$, but first, we need to deal with the noninvertible matrices. It turns out that they don't matter. \begin{lemma}\label{thm:noninvertible} Let $f(x) \in \mathbb{Z}_p^{n+1}$. If $A = \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in \mathbb{Z}_p^{2 \times 2}$ is \emph{not} invertible, then $f * A$ is not Eisenstein of degree $n$. \end{lemma} \begin{proof} Assume for contradiction that $f * A$ is Eisenstein of degree $n$. We write again $f(x) = \sum_{i=0}^{n} \alpha_i x^i$ and $f * A = \sum_{l=0}^{n} \beta_l x^l$. We reduce modulo $p$: \begin{align*} \bar{A} = \begin{pmatrix} \bar{a} & \bar{b} \\ \bar{c} & \bar{d} \end{pmatrix} \in \mathbb{F}_p^{2 \times 2}. 
\end{align*} Since $\det \bar{A} = 0$, there are two cases: Either $\bar{c} = \bar{d} = 0$, or there is a $\lambda \in \mathbb{F}_p$ such that $\bar{a} = \lambda \bar{c}$ and $\bar{b} = \lambda \bar{d}$. We consider the second case. Since $f * A$ is Eisenstein, we see that $\bar{f} * \bar{A} = \bar \beta_n x^n \in \mathbb{F}_p[x]$ with $\bar{\beta}_n \ne 0$. On the other hand, \begin{equation*} \bar{f} * \bar{A} = (\bar{c} x + \bar{d})^n \bar{f} \bigg( \frac{\lambda \bar{c} x + \lambda \bar{d}}{\bar{c} x + \bar{d}} \bigg) = (\bar{c} x + \bar{d})^n \bar{f}(\lambda). \end{equation*} From this, we see that $\bar{f}(\lambda) \ne 0$, $\bar{c} \ne 0$ and $\bar{d} = 0$. This means that $p \mid d$ and $p \mid b$, from which it follows by (\ref{eq:affine_coefficients}) that $p^2 \mid \beta_0$. This contradicts the assumption that $f * A$ is Eisenstein. The case $\bar{c} = \bar{d} = 0$ is similar. \end{proof} Hence, we only need to consider the action of $\GL_2(\mathbb{Z}_p)$ on $\mathbb{Z}_p^{n+1}$. According to Lemma~\ref{thm:affine_equivalence}, the action of elements of $S$ does not change whether a polynomial is Eisenstein. Therefore, to see if a polynomial $f(x) \in \mathbb{Z}_p^{n+1}$ is affine Eisenstein, it is enough to check one representative of each left coset of $S \subset \GL_2(\mathbb{Z}_p)$. We can list these cosets explicitly. \begin{lemma}\label{thm:equivalence_classes} The subgroup $S \subset \GL_2(\mathbb{Z}_p)$ has $p + 1$ left cosets, which are the following: \begin{itemize} \item $\begin{pmatrix} 1 & i \\ 0 & 1 \end{pmatrix}S$ for $i \in \{0, \ldots, p-1\}$ (corresponding to shifts), and \item $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}S$ (corresponding to the reciprocal). \end{itemize} \end{lemma} \begin{proof} It is easy to see that these $p+1$ left cosets are distinct. We need to show that every $A = \begin{psmallmatrix} s & t \\ u & v \end{psmallmatrix}\in \GL_2(\mathbb{Z}_p)$ lies in one of them. Consider first the case $p \mid v$. 
Then, \begin{equation*} \begin{pmatrix} s & t \\ u & v \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} u & v \\ s & t \end{pmatrix}, \end{equation*} with $\begin{psmallmatrix} u & v \\ s & t \end{psmallmatrix} \in S$. If instead $p \nmid v$, let $i \equiv t/v \pmod{p}$, $i \in \{0, \ldots, p-1\}$. Then, \begin{equation*} \begin{pmatrix} s & t \\ u & v \end{pmatrix} = \begin{pmatrix} 1 & i \\ 0 & 1 \end{pmatrix} \begin{pmatrix} s - iu & t - iv \\ u & v \end{pmatrix}, \end{equation*} with $p \mid t - iv$ by choice of $i$, and $p \nmid s - iu$ since the matrix has to be invertible. \end{proof} Together, Lemmata~\ref{thm:affine_equivalence} and~\ref{thm:equivalence_classes} say that $f(x)$ is affine Eisenstein with respect to some $A$ if and only if it is shifted Eisenstein with respect to some $i \in \{0, \ldots, p-1\}$, or if its reciprocal $x^n f(1/x)$ is Eisenstein; and these possibilities are exclusive. In other words, \begin{equation*} \widetilde E_p = \operatorname{recip}(E_p) \sqcup \bigsqcup_{i = 0}^{p-1} \sigma^{-i} E_p. \end{equation*} Since shifting and taking the reciprocal are linear maps with determinant $\pm 1$, they preserve the $p$-adic measure, and we see that \begin{equation*} \mu_p(\widetilde E_p) = (p+1) \mu_p(E_p) = \frac{(p+1)(p-1)^2}{p^{n+2}}. \end{equation*} With this, we can now show the analogue of Theorem~\ref{thm:shifted3} for affine transformations. \begin{theorem}\label{thm:affine3} Let $n \ge 3$. The density of affine Eisenstein polynomials of degree $n$ is \begin{equation*} \rho(\widetilde E) = 1 - \smashoperator{\prod_{p \text{ prime}}} \; \left(1 - \frac{(p+1)(p-1)^2}{p^{n+2}}\right). \end{equation*} \end{theorem} \begin{proof} The proof is mostly the same as for Theorem~\ref{thm:shifted3}. For the verification of condition (\ref{eq:conditiondens}), note that the case $2B < p$ is unchanged from the proof of Theorem~\ref{thm:shifted3}, since the reciprocal polynomial cannot be $p$-Eisenstein for $p > B$. 
For the case $2B \ge p$, we simply get an additional term in the union (\ref{eq:Lp_union}), and so the estimate changes to $\abs{\overline L_p} \le p(p+1)$. However, this doesn't affect the convergence of the sum in (\ref{eq:estimate_sum}). \end{proof} \begin{remark} Clearly, the density of affine Eisenstein polynomials of degree $n = 2$ is one. After all, we are considering a superset of the shifted Eisenstein polynomials of Proposition~\ref{thm:shifted2}. \end{remark} \section{Monte Carlo Simulations} \label{sec:simulations} As in~\cite[Section~6]{bib:heyman2014shifted}, we ran some Monte Carlo simulations to verify how near our results are to the actual probability of finding a shifted (or affine) Eisenstein polynomial among all the polynomials of a given height. For degrees $n=3$ and $4$, we tested $20\,000$ random polynomials of height at most $1\,000\,000$. The results are shown in Tables~\ref{tab:simulation3} and~\ref{tab:simulation4}. The first column contains the number of polynomials which were actually found by the Monte Carlo experiment, while the second column contains the expected number given by~\cite[Theorem~2]{bib:shparlinskiEisen} and Theorems~\ref{thm:shifted3} and~\ref{thm:affine3}. All the experiments seem to agree with our theoretical results. The simulations were done using the Sage computer algebra system~\cite{bib:sage}, and the code is available upon request. 
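The expected counts in the tables below follow from truncating the infinite products of Theorems~\ref{thm:shifted3} and~\ref{thm:affine3}. The following dependency-free sketch should reproduce them up to rounding for $n = 3$ (the sieve bound $10^5$ is our choice, not from the paper):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = 0
    return [p for p in range(2, n + 1) if sieve[p]]

def density(local_measure, bound=10 ** 5):
    """Truncation of 1 - prod_p (1 - mu_p) over the primes below `bound`."""
    prod = 1.0
    for p in primes_up_to(bound):
        prod *= 1.0 - local_measure(p)
    return 1.0 - prod

n = 3
shifted = density(lambda p: (p - 1) ** 2 / p ** (n + 1))
affine = density(lambda p: (p + 1) * (p - 1) ** 2 / p ** (n + 2))
expected_shifted = 20000 * shifted   # close to the tabulated 3353
expected_affine = 20000 * affine     # close to the tabulated 4328
```

The tail of each product beyond the sieve bound contributes $O(1/(P \log P))$ with $P = 10^5$, so the truncation error is far below one count out of $20\,000$.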
\begin{table}[htp] \centering \caption{Simulations for degree $n = 3$.} \label{tab:simulation3} \begin{tabular}{|l|r|r|} \hline {} & found & expected \\ \hline irreducible & $20\,000$ & $20\,000$ \\ Eisenstein & $1112$ & $1112$ \\ shifted Eisenstein & $3416$ & $3353$ \\ affine Eisenstein & $4360$ & $4328$ \\ \hline \end{tabular} \end{table} \begin{table}[htp] \centering \caption{Simulations for degree $n = 4$.} \label{tab:simulation4} \begin{tabular}{|l|r|r|} \hline {} & found & expected \\ \hline irreducible & $20\,000$ & $20\,000$ \\ Eisenstein & $432$ & $449$ \\ shifted Eisenstein & $1096$ & $1112$ \\ affine Eisenstein & $1570$ & $1547$ \\ \hline \end{tabular} \end{table} \section*{Acknowledgements} The authors were supported in part by Swiss National Science Foundation grant number 149716 and \emph{Armasuisse}. \bibliographystyle{plainnat}
https://arxiv.org/abs/1405.2331
Fixed points of local actions of nilpotent Lie groups on surfaces
Let $G$ be a connected nilpotent Lie group acting locally on a real surface $M$. Let $\varphi$ be the local flow on $M$ induced by a $1$-parameter subgroup. Assume $K$ is a compact set of fixed points of $\varphi$ and $U$ is a neighborhood of $K$ containing no other fixed points. Theorem: If the Dold fixed-point index of $\varphi_t|U$ is nonzero for sufficiently small $t>0$, then ${\rm Fix} (G) \cap K \ne \emptyset$.
\section{Introduction} \mylabel{sec:intro} {\em Notation:} $M$ is a manifold with tangent bundle $TM$ and boundary $\ensuremath{\partial} M$. The set of vector fields on $M$ is $\ensuremath{\mathfrak v} (M)$, and the set of vector fields that are $C^1$ (continuously differentiable) is $\ensuremath{\mathfrak v}^1 (M)$. The zero set of $X\in \ensuremath{\mathfrak v} (M)$ is $\Z X$. $G$ always denotes a connected Lie group with unit element $e$ and Lie algebra $\gg$. We usually treat $\gg$ as the set of one-parameter subgroups $X\co\RR\to G$, but we also exploit the linear structure on $\gg$ defined by its identification with the tangent space to $G$ at $e$. $\RR$ denotes the vector space of real numbers, $\Rp=[0,\infty)$, $\R n$ is Euclidean $n$-space. $\ZZ$ is the group of integers, $\NN:=\{0,1, \dots\}$ is the set of natural numbers, and $\emp$ is the empty set. The frontier of a subset $S$ of a topological space is denoted by $\fr S$. The fixed point set of $f\co A \to B$ is $\Fix f:= \{a\in A\co f (a)=a\}$. Maps are continuous unless the contrary is indicated. \smallskip In 1885 Poincar\'e \cite{Poincare85} published a seminal result on surface dynamics, extended to higher dimensions by Hopf \cite{Hopf25} in 1925: \begin{theorem*} [\sc Poincar\'{e}-Hopf] Every vector field on a closed manifold $M$ of nonzero Euler characteristic vanishes at some point. \end{theorem*} A smooth vector field induces a flow $\Phi^X:=\{\Phi^X_t\}_{t\in\RR}$ --- a continuous action of the group $\RR$ --- and the theorem implies $\Phi^X$ has a fixed point. The same conclusion holds for semi-flows (continuous actions of $\Rp$) on a broad class of spaces including all compact polyhedra and topological manifolds, thanks to Lefschetz's Fixed Point Theorem \cite{Lefschetz26}. In his pioneering 1964 paper, E. Lima \cite{Lima64} generalized the Poincar\'e-Hopf Theorem to actions of connected abelian Lie groups on compact surfaces, allowing nonempty boundaries. 
This was extended to nilpotent groups in 1986 by my former student J. Plante \cite{Plante86}: \begin{theorem*} [\sc Plante] \mylabel{th:plante} Every continuous action of a connected nilpotent Lie group on a compact surface of nonzero Euler characteristic has a fixed point. \end{theorem*} Our goal is a generalization of Plante's Theorem to local actions (Section \ref{sec:local}) on all surfaces. This necessitates replacement of the assumption $\chi (M)\ne 0$. The new hypothesis is based on Dold's fixed-point index $I(f)\in \ZZ$ for maps $f\co U\to M$, where $U\subset M$ is open and $\Fix f$ is compact (see Section \ref{sec:index}). Let $\Phi:=\{\Phi_t\}_{t\in\RR}$ be a local flow on $M$. A {\em block for $\Phi$} is a compact set \[\textstyle K\subset \Fix \Phi :=\bigcap_{t\in\RR}\Fix{\Phi_t} \] having an {\em isolating neighborhood} $U$: a precompact open neighborhood of $K$ such that $K=\Fix \Phi \cap \ov U$. When $\Phi:=\Phi^X$ is the local flow generated by a vector field $X$ on $M$, then $K\subset \Z X$, the {\em zero set} of $X$. For sufficiently small $t >0$, the {\em index of $\Phi$ at $K$} is well defined by the formula \[\msf i_K (\Phi):=I (\Phi_t|U).\] When $\Phi$ comes from a vector field $X$ we define \[ \msf i_K (X)=\msf i (X, U) :=\msf i_K (\Phi^X). \] In Theorem \ref{th:MAIN} and its corollaries $G$ is nilpotent, and a local action $\Phi$ of $G$ on a surface $M$ is postulated (see Section \ref{sec:local}). Each $X\in \gg$ is a one-parameter subgroup $\RR \to G$, inducing a local flow $\Phi^X$ on $M$ whose fixed point set is denoted by $\Fix X$. A block $K$ for $\Phi^X$ is an {\em $X$-block}. $K$ is {\em essential} if $\msf i_K (X)\ne 0$. \begin{theorem} \mylabel{th:MAIN} If $K$ is an essential $X$-block, then $\Fix G \cap K\ne\varnothing$. \end{theorem} As distinct $X$-blocks are disjoint, we obtain: \begin{corollary} \mylabel{th:MAINcor1} If $X$ has $n$ essential blocks, then $\Fix G$ has at least $n$ components. 
\qed \end{corollary} Let $\Z \aa$ denote the set of common zeros of a set $\aa\subset\ensuremath{\mathfrak v} (M)$. \begin{corollary} \mylabel{th:MAINcor2} Let $\aa \subset \ensuremath{\mathfrak v}^1 (M)$ be a finite-dimensional linear subspace tangent to $\ensuremath{\partial} M$ and forming a nilpotent Lie algebra under the Lie bracket operation. If $K$ is an essential block of zeros for some $X\in \aa$, then $\Z \aa \ne\varnothing$. \qed \end{corollary} The inspiration for Theorem \ref{th:MAIN} is a remarkable result of C. Bonatti \cite{Bonatti92}: \begin{theorem*}[\sc {Bonatti}] Assume $\dim M\le 4$, $\ensuremath{\partial} M=\emp$, and $X$, $Y$ are commuting analytic vector fields on $M$. If $K$ is an essential block for the local flow generated by $X$, then $\Z Y\cap K\ne\varnothing$.\footnote{ ``The demonstration of this result involves a beautiful and quite difficult local study of the set of zeros of $X$, as an analytic $Y$-invariant set. Of course, analyticity is an essential tool in this study, and the validity of this type of result in the smooth case remains an open— and apparently hard— question.'' ---P. Molino \cite{Molino93}} \end{theorem*} This is one of the few fixed-point theorems for noncompact Lie groups ($\R 2$ in this case) acting on manifolds that are not compact, or have dimensions $>2$, or have zero Euler characteristic. Another is Borel's Fixed Point Theorem, stated below. \smallskip When $\ensuremath{\partial} M=\varnothing$, our definition of the index of $X$ in $U$ extends Bonatti's definition for vector fields, which runs as follows. Let $X$ be a $C^1$ vector field on $M$ generating the local flow $\psi$. Let $U\subset M$ be an isolating neighborhood of an $X$-block $K$. Then $\msf i_K(X)$ equals the intersection number of $X(U)$ and the image of the trivial field on $U$ in the tangent bundle of $M$, which is Bonatti's definition. 
Equivalently: If a vector field $Y$ is sufficiently close to $X$ and $\Z Y\cap U$ is finite, then $\msf i_K(X)$ equals the sum of the Poincar\'e-Hopf indices of the zeros of $Y$ in $U$. This sum, the {\em Poincar\'e-Hopf index} of $Y|U$, is denoted by $\msf i_{PH} (Y|U)$. See Proposition \ref{th:fundA}. \subsection*{Discussion} Plante's theorem does not extend to Lie groups that are solvable, or even supersoluble,\footnote {Supersoluble: All eigenvalues in the adjoint representation are real. This implies solvable.} because he proved \cite{Plante86}: \begin{itemize} \item Every compact surface supports a fixed-point free $C^\infty$ action by the group $H$ of real matrices $ \left[\begin{smallmatrix} a & b\\ 0 & 1 \end{smallmatrix}\right], \ (a >0)$. \end{itemize} On the other hand, sufficiently strong assumptions imply fixed-point theorems for several natural classes of solvable group actions: \smallskip $\bullet$ \ There is a fixed point in every algebraic action of a solvable, linear, irreducible algebraic group on a complete algebraic variety over an algebraically closed field (Borel \cite{Borel56, Borel69}; see also Humphreys \cite{Humphreys75}, Onishchik \& Vinberg \cite{Onish-Vinberg90}.) This celebrated result needs no assumptions on dimensions, Euler characteristics or compactness, and is valid for nonsmooth varieties. \smallskip $\bullet$ Borel's theorem extends to holomorphic actions of connected solvable Lie groups on compact K\"ahler manifolds $M$ with $H^1 (M)=\{0\}$ (A. Sommese \cite{Sommese73}). \smallskip $\bullet$ $\Fix G\ne\varnothing$ when $G$ is supersoluble and acts analytically on a compact surface $M$ with $\chi (M) \ne 0$ (A. Weinstein and M.\ W. Hirsch \cite{HW2000}). 
But this result fails for groups that are solvable but not supersoluble: The group of real matrices \, $ \left[\begin{smallmatrix} a & -b & \ x \\ b & a & \ y \\ 0 & \ 0 & \ 1 \end{smallmatrix}\right], \ (a^2+b^2=1), \, $ acts without fixed point on the 2-sphere of oriented lines through the origin in $\R 3$. And it fails for $C^\infty$ actions, by Plante's Theorem. \smallskip $\bullet$ \ The conclusion of Bonatti's theorem holds for analytic vector fields $X, Y$ on a real or complex $2$-manifold $M$ without boundary, satisfying $[X, Y]\wedge X=0$ (Hirsch \cite{Hirsch2014}). Two applications follow: \smallskip $\bullet$ \ Let $\aa$ be a Lie algebra, perhaps infinite dimensional, of analytic vector fields on a real or complex $2$-manifold with empty boundary. If $X\in \aa$ spans a one-dimensional ideal, $\Z \aa$ meets every essential $X$-block. \smallskip $\bullet$ \ Assume the center of $G$ has positive dimension and $M$ is a complex $2$-manifold with empty boundary such that $\chi (M)\ne 0$. Then every holomorphic action of $G$ on $M$ has a fixed point. \smallskip M. Belliart \cite{Belliart97} classified the pairs $(G, M)$ where $G$ has a continuous fixed-point free action on a compact surface $M$, relying on the classification of transitive surface actions in Mostow \cite{Mostow50}. In particular: \smallskip $\bullet$ A solvable $G$ acts without fixed point on the compact surface $M$ iff $G$ maps homomorphically onto $\msf {Aff}_+(\RR)$. \smallskip For related work on the dynamics of Lie group actions see \cite{BL96, ET79, Hirsch2010, Hirsch2011, Horne65, Hounie81}. \paragraph{Open questions.} Is there an example of a connected nilpotent Lie group acting without fixed point on a compact $n$-manifold, $n>2$, having nonzero Euler characteristic? Does Bonatti's Theorem generalize to three or more vector fields, or to manifolds of dimensions $ >4$? 
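The Poincar\'e-Hopf index that runs through the discussion above can be illustrated numerically: for an isolated zero of a planar vector field it is the winding number of the field around a small circle about the zero. The following sketch is our illustration of this standard fact, not material from the paper:

```python
import math

def winding_index(X, r=1.0, steps=2000):
    """Poincare-Hopf index of an isolated zero of the planar vector field X
    at the origin, computed as the winding number of X along the circle
    of radius r (assumed to enclose no other zeros)."""
    total, prev = 0.0, None
    for k in range(steps + 1):
        t = 2.0 * math.pi * k / steps
        vx, vy = X(r * math.cos(t), r * math.sin(t))
        ang = math.atan2(vy, vx)
        if prev is not None:
            d = ang - prev
            # unwrap jumps across the atan2 branch cut at +-pi
            while d > math.pi:
                d -= 2.0 * math.pi
            while d < -math.pi:
                d += 2.0 * math.pi
            total += d
        prev = ang
    return round(total / (2.0 * math.pi))
```

A source $(x, y)$ has index $1$, a saddle $(x, -y)$ has index $-1$, and the field $(x^2 - y^2,\, 2xy)$ (the real form of $z \mapsto z^2$) has index $2$.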
\section{Local actions} \mylabel{sec:local} For a map $f\co A\to B$ we adopt the convention that notation such as $f(\xi)$ presupposes that $\xi$ is an element or subset of $A$. The domain $A$ of $f$ is denoted by $\mcal D f$ and the range of $f$ by $\mcal Rf:=f(A)$. Let $g, h$ also denote maps. Regardless of the domains and ranges, the composition $g\circ f$ is defined as the map (perhaps empty) $g\circ f: x\mapsto g (f (x))$ whose domain is $f^{-1}(\mcal D g)$. The associative law holds for these compositions: The maps $ (h\circ g)\circ f$ and $h\circ (g\circ f)$ have the same domain \[D:= \{x\in \mcal D f\co f (x)\in \mcal D g, \quad g (f(x))\in \mcal D h\}, \] and \[ x\in D\implies ((h\circ g)\circ f)(x) = (h\circ (g\circ f))(x). \] A {\em local homeomorphism} on a topological space $Q$ is a homeomorphism between open subsets of $Q$. The set of these homeomorphisms is denoted by $\msf {LH}(Q)$. \begin{definition} \mylabel{th:defloc} A {\em local action} of the connected Lie group $G$ on a manifold $M$ is a triple $(G, M, \alpha)$ where $\alpha\co G \to \msf {LH}(M)$ is a function having the following properties: \begin{itemize} \item The set $\Omega_\alpha:=\big\{(g, p)\in G\times M\co p\in \mcal D \alpha(g)\big\}$ is an open neighborhood of $\{e\} \times M$. \item The {\em evaluation map} \[ \msf{ev_\alpha}\co \Omega_\alpha \to M,\quad (g, p)\mapsto \alpha(g) (p) \] is continuous. \item $\alpha(e)$ is the identity map of $M$. \item The maps $\alpha (fg)\circ \alpha (h)$, $\alpha (f)\circ \alpha (gh)$ agree on the intersection of their domains, \ ($f,g,h\in G$). \item $\alpha (g^{-1})=\alpha (g)^{-1}$. \end{itemize} \end{definition} $\alpha$ is a {\em global action} provided $\ensuremath{\Omega}_\alpha=G\times M$. If $G$ is connected and simply connected and $M$ is compact, every local action extends to a unique global action. In the rest of this section a local action $(G, M, \alpha)$ is assumed. 
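The composition convention for partial maps can be made concrete in a short sketch (the class name `PartialMap` and the example maps are ours, purely illustrative):

```python
import math

class PartialMap:
    """A map defined on part of its source: a domain predicate plus a rule.
    Composition uses the convention above: dom(g o f) = f^{-1}(dom g)."""
    def __init__(self, dom, rule):
        self.dom, self.rule = dom, rule

    def defined_at(self, x):
        return self.dom(x)

    def __call__(self, x):
        assert self.dom(x), "point outside the domain"
        return self.rule(x)

    def after(self, f):
        """Return self o f, with domain {x in dom(f) : f(x) in dom(self)}."""
        return PartialMap(lambda x: f.dom(x) and self.dom(f.rule(x)),
                          lambda x: self.rule(f.rule(x)))

# Three partial maps on the reals (arbitrary illustrative choices):
f = PartialMap(lambda x: x >= 0, lambda x: x ** 0.5)            # square root
g = PartialMap(lambda x: x > 0, math.log)                       # logarithm
h = PartialMap(lambda x: x != 1.0, lambda x: 1.0 / (x - 1.0))   # pole at 1

# The two ways of associating h o g o f:
lhs = h.after(g).after(f)     # (h o g) o f
rhs = h.after(g.after(f))     # h o (g o f)
```

Both association orders produce the same domain predicate and the same values, matching the associative law stated above.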
We sometimes omit the notation ``$\alpha$'', writing $g$ for $\alpha (g)$, $\mcal D g$ for $\mcal D \alpha(g)$, and so forth. A homomorphism $\phi\co H \to G$ of Lie groups induces the local action $\alpha\circ\phi$ of $H$ on $M$ defined by \[\alpha\circ \phi \co h\mapsto \alpha (\phi(h)), \] called the {\em pullback} of $\alpha$ by $\phi$. When $\phi$ is an inclusion we set $\alpha|H:=\alpha\circ \phi$. In this way $\alpha$ induces local actions of all Lie subgroups. A {\em local flow} is a local action of the group $\RR$ of real numbers. If $X\co \RR\to G$ is a one-parameter subgroup, \, $\alpha \circ X$ is the local flow characterized by \begin{equation} \label{eq:pullback} (\alpha\circ X) (t)\co p \mapsto \alpha(X(t))(p), \quad ( t\in \RR, \ p\in \mcal D \alpha (X(t))). \end{equation} A Lie algebra homomorphism $X\mapsto \hat X$ from $\gg$ to $C^\infty$ vector fields on $M$ gives rise to a local action $(G, M, \alpha)$ such that the maps $t\mapsto \alpha (X(t))(p)$ are the integral curves of $\hat X$. (See Palais \cite[{Th.\ }II.11]{Palais57}, also Varadarajan \cite [{Th.\ }2.16.6]{Varadarajan76}). A set $S\subset M$ is {\em invariant} provided $g(S)$ is defined and in $S$ for all $g\in G$, equivalently: \[S\subset \mcal D g \cap \mcal R g. \] The {\em orbit} of $p\in M$ is the smallest invariant set containing $p$. The {\em fixed-point set} of $\alpha$ is \[\textstyle \Fix G:=\bigcap_{g\in G}\Fix g. \] \begin{proposition} \mylabel{th:fixg} $\Fix \gg = \Fix G$. \end{proposition} \begin{proof} Equation (\ref{eq:pullback}) implies $\Fix \gg \subset \Fix G$. A neighborhood of $e$ is covered by one-parameter subgroups, and it generates $G$ because $G$ is connected. This implies $\Fix G\subset \Fix \gg$. \end{proof} If $H\subset G$ is a connected Lie subgroup we set \[\Fix H=\Fix{\alpha|H}. \] \begin{proposition} \mylabel{th:hg} Let $G$ have a local action. 
\begin{description} \item[(i)] If $p\in \Fix h\cap\mcal Dg$, then $g(p)\in \Fix{ghg^{-1}}.$ \item[(ii)] If $H\subset G$ is a connected normal Lie subgroup, $\Fix H$ is invariant under $G$. \end{description} \end{proposition} \begin{proof} (i) is straightforward and implies (ii). \end{proof} \section{The fixed-point index} \mylabel{sec:index} The late A. Dold \cite{Dold65, Dold72} defined a fixed-point index for a large class of maps having compact fixed-point sets. We use Dold's index to define an index for blocks in local flows. Dold's index $I (f)\in\ZZ$ is defined for data $f, V, \mcal S$ where \begin{itemize} \item $V$ is an open set in a topological space $\mcal S$, \item $f\co V \to \mcal S$ is continuous with $\Fix f$ compact, \item $V$ is a {\em Euclidean neighborhood retract} (ENR): Some open set in a Euclidean space retracts onto a homeomorph of $V$.\footnote{The class of ENRs includes metrizable topological manifolds and triangulable subsets of Euclidean spaces.} \end{itemize} We will use the following properties of $I(f)$: \begin{description} \item{(D1)} \, $I(f)=I(f|V_0)$\, if $V_0\subset V$ is an open neighborhood of $\Fix f$. \item{(D2)} \, $I(f)=\begin{cases} & 0 \ \mbox{if $\Fix f =\varnothing$,}\\ & 1 \ \mbox{if $f$ is constant.} \end{cases} $ \item{(D3)} \, $I(f)=\sum_i I(f| V_i)$\, if $V$ is the union of finitely many disjoint open sets $V_i$. \item{(D4)} \, $I(f_0)=I(f_1)$\, if there is a homotopy $f_t\co V\to \mcal S,\, (0\le t \le 1)$\, such that \,$\bigcup_t\Fix{f_t}$ is compact. \end{description} These correspond to (5.5.11), (5.5.12), (5.5.13) and (5.5.15) in Chapter VII of Dold's book \cite{Dold72}. \begin{description} \item{(D5)} \, Assume $\mcal S$ is a manifold, $f$ is $C^1$, \ $\Fix f$ is a singleton $\{p\}$ with $p\in \mcal S \setminus \ensuremath{\partial} \mcal S$, and $\ensuremath{\operatorname{\mathsf{Det}}} f'(p)\ne 0$. 
Then $I(f)= (-1)^\nu$, where $\nu$ is the number of real eigenvalues $\ensuremath{\lambda}$ of $f'(p)$, counted with multiplicity, such that $\ensuremath{\lambda} >1$. (See \cite[VII.5.17, Ex.~3]{Dold72}.) \item{(D6)} \, Suppose $\mcal S$ is compact, $V=\mcal S$, and $f$ is homotopic to the identity. Then $I (f)=\chi (\mcal S)$. (See \cite[VII.6.22]{Dold72}.) \end{description} \subsection*{The index for local flows} Let $\varphi:= \{\varphi_t\}_{t\in \RR}$ be a local flow in a topological space $\mcal S$. A compact set $K\subset\Fix \varphi$ is a {\em block} for $\varphi$, or a {\em $\varphi$-block}, if it has an open, precompact ENR neighborhood $V\subset \mcal S$ such that $\ov V\cap\Fix\varphi=K$. Such a $V$ is said to be {\em isolating} for $\varphi$, and for $(\varphi, K)$. When $\varphi$ is smooth this language agrees with the terminology for $X$-blocks in the Introduction, and $\Fix \varphi = \Z X$. It turns out that for all sufficiently small $t>0$ the fixed-point index $I (\varphi_t|V)$ is independent of $t$, and depends only on $\varphi$ and $K$: \begin{proposition} \mylabel{th:tau} If $V$ is isolating for $\varphi$, there exists $\tau >0$ such that for all $t \in (0,\tau]$: \begin{description} \item[(a)] $\Fix {\varphi_t}\cap V$ is compact, \item[(b)] $I (\varphi_t|V)=I (\varphi_{\tau}|V)$. \end{description} \end{proposition} \begin{proof} If (a) fails there exist convergent sequences $\{t_k\}$ in $[0,\infty)$, and $\{p_k\}$ in $V$, such that \[ t_k\searrow 0, \quad p_k\in \Fix{\varphi_{t_k}}\cap V, \quad p_k\to q\in \fr V. \] Joint continuity of $(t,x)\mapsto \varphi_t (x)$ yields the contradiction $q\in \Fix {\varphi} \cap \fr V$. Assertion (b) is a consequence of (a) and (D4). \end{proof} \begin{definition} \mylabel{th:defeqindex} Using the notation of Proposition \ref{th:tau} we define the {\em index} of $\varphi$ in $V$, and at $K$, as: \[ \msf i(\varphi,V)=\msf i_K (\varphi):= I(\varphi_\tau|V). \] $K$ and $V$ are {\em essential} for $\varphi$ if $\msf i_K (\varphi)\ne 0$.
This implies $K\ne\varnothing$ by (D2). \end{definition} We say that $\varphi$ is {\em smooth} and {\em generates $X\in \ensuremath{\mathfrak v}^1 (M)$}, provided $\mcal S$ is a manifold $M$ and \[ \left.\pd t\right|_{t=0} \varphi_t (p) = X_p, \quad (p\in M). \] This implies $X|\ensuremath{\partial} M$ is tangent to $\ensuremath{\partial} M$, and $\Z X= \Fix{\varphi}$. Recall that $\msf i_{PH}(X)$ denotes the Poincar\'e-Hopf index for vector fields $X\in\ensuremath{\mathfrak v}(M)$ such that $\ensuremath{\partial} M=\emp$ and $\Z X$ is finite. \begin{proposition} \mylabel{th:fundA} Assume $\ensuremath{\partial} M=\emp$. Let $\varphi$ be a smooth local flow on $M$ generating $X\in\ensuremath{\mathfrak v} (M)$. Suppose $V\subset M$ is isolating for $\varphi$. Let $\{X^k\}$ be a sequence in $\ensuremath{\mathfrak v} (M)$ converging to $X$ such that each set $\Z {X^k}\cap \ov V$ is finite. Then $\msf i (\varphi, V)= \msf i_{PH} (X^k|V)$ for sufficiently large $k$. \end{proposition} \begin{proof} Choose a sequence $\{Y^k\}$ in $\ensuremath{\mathfrak v}^1 (M)$ with $\Z{Y^k}\cap\ov V$ finite, with each $Y^k$ so close to $X^k$ that $\msf i_{PH} (Y^k|V)= \msf i_{PH} (X^k|V)$ and $\lim_k Y^k= X$. Let $\psi^k$ denote the local flow of $Y^k$. There exists $\rho >0$ such that \[\mbox{$\lim_{k\to\infty} \psi^k_t (x) =\varphi_t (x)$ \ uniformly for $(t,x) \in [0,\rho]\times \ov V$.} \] The conclusion follows by applying (D4) to $f_0:=\varphi_t|V$ and $f_1:=\psi^k_t|V$ for sufficiently small $t>0$. \end{proof} This result can be used to show that the Dold index and the Bonatti index coincide in situations where both are defined. \smallskip Now assume $G$ has a local action on $M$. Every $X\in \gg$ generates a local flow $\varphi^X$ on $M$. A block $K\subset \Fix{\varphi^X}$ is called an $X$-block. When $U\subset M$ is isolating for $\alpha\circ X$ we say $U$ is isolating for $X$, and set \[ \msf i (X, U)= \msf i_K (X) := \msf i (\varphi^X, U).
\] $K$ is {\em essential for $X$} provided $\msf i_K (X)\ne 0$. \begin{proposition} \mylabel{th:stable} Assume $V\subset M$ is isolating for $X$. \begin{description} \item[(a)] The set \[\mathfrak N (X, V,\gg):= \big\{Y\in \gg \co \text{$V$ is isolating for $Y$ and \,$\msf i (Y, V) = \msf i (X, V)$}\big\} \] is an open neighborhood of $X$ in $\gg$. \item[(b)] If $\ov V$ is a compact invariant manifold, $\msf i (Y, V)=\chi (\ov V)$ \,for all $Y\in \mathfrak N (X, V,\gg)$. \end{description} \end{proposition} \begin{proof} {\em (a) } Compactness of $\fr V$ implies that the set $\mathfrak N(\gg):=\big\{Y\in \gg \co \Fix Y \cap \fr V =\emp\big\}$ is an open neighborhood of $X$, and $V$ is isolating for every $Y\in \mathfrak N(\gg)$. If $Y\in \mathfrak N(\gg)$ is sufficiently close to $X$ and $0\le s\le 1$ then $Y_s:= (1-s)X +sY$ also lies in $\mathfrak N (\gg)$, and therefore $\msf i (Y, V) = \msf i (X, V)$ by (D4). {\em (b) } Follows from (D6). \end{proof} \section{Fixed point sets, stabilizers and ideals} \mylabel{sec:ideals} As usual, $G$ denotes a connected Lie group with Lie algebra $\gg$. When $G$ is nilpotent, its exponential map $\gg\to G$ is an analytic diffeomorphism sending subalgebras onto closed subgroups, and ideals onto normal subgroups. In some situations $\gg$ is more convenient than $G$ because it has a natural linear structure. A local action of $G$ on a surface $M$ is assumed. Note that $\Fix G=\Fix \gg$ because $G$ is connected. The {\em isotropy group of $p\in M$} is the subgroup $G_p\subset G$ generated by \[ \{g\in G\co g(p)=p\}. \] The {\em stabilizer of $p\in M$} is the subalgebra $\gg_p\subset \gg$ generated by \[\big\{Y\in\gg\co p\in\Fix Y \big\}. \] The {\em stabilizer of $S\subset M$} is $\gg_S:= \bigcap_{p\in S}\gg_p$. Evidently a one-parameter subgroup $Y\co\RR\to G$ belongs to $\gg_p$ iff $Y (\RR)\subset G_p$.
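Two facts used here, the polynomial nature of the exponential map of a nilpotent group and the existence of codimension-one ideals through every element (Lemma \ref{th:covering} below), can be checked concretely in the three-dimensional Heisenberg algebra. The following sketch (ours, in Python/NumPy, with invented names; not part of the paper's argument) verifies both: the exponential series terminates because $X^3=0$, and any two-dimensional subspace containing the center is an ideal.

```python
import numpy as np

# --- the exponential map of a nilpotent group is polynomial ----------------
# Heisenberg Lie algebra: strictly upper triangular 3x3 matrices.
def heis(a, b, c):
    return np.array([[0.0, a, c],
                     [0.0, 0.0, b],
                     [0.0, 0.0, 0.0]])

def exp_nilpotent(X):
    # X^3 = 0, so the exponential series terminates after the quadratic term
    return np.eye(3) + X + X @ X / 2.0

def log_unipotent(g):
    # N = g - I is nilpotent, so log(g) = N - N^2/2 exactly: exp is invertible
    N = g - np.eye(3)
    return N - N @ N / 2.0

X = heis(1.0, -2.0, 0.5)
assert np.allclose(log_unipotent(exp_nilpotent(X)), X)

# --- a codimension-one ideal containing a given element --------------------
# Coordinates (a1, a2, a3) w.r.t. a basis e1, e2, e3 with [e1, e2] = e3:
def bracket(u, v):
    return np.array([0.0, 0.0, u[0] * v[1] - u[1] * v[0]])

def in_span(v, basis):
    # membership test: is v in the column span of `basis`?
    B = np.column_stack(basis)
    x, *_ = np.linalg.lstsq(B, v, rcond=None)
    return np.allclose(B @ x, v)

Y = np.array([2.0, -1.0, 3.0])    # an arbitrary element of the algebra
e3 = np.array([0.0, 0.0, 1.0])    # spans the center, which contains [g, g]
ideal = [Y, e3]                    # two-dimensional subspace containing Y

# ideal test: [Z, h] lies in h for every basis vector Z of the algebra
for Z in np.eye(3):
    for H in ideal:
        assert in_span(bracket(Z, H), ideal)
```

The matrix and coordinate pictures agree: $e_1=\mathrm{heis}(1,0,0)$, $e_2=\mathrm{heis}(0,1,0)$, $e_3=\mathrm{heis}(0,0,1)$ satisfy $[e_1,e_2]=e_3$ with $e_3$ central.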
\begin{lemma} \mylabel{th:covering} If $\gg$ is nilpotent and $\dim (\gg) \ge 2$, every element of $\gg$ lies in an ideal of codimension one. \end{lemma} \begin{proof} If $\dim (\gg)= 2$ the conclusion is trivial because $\gg$ is abelian. Assume inductively: $\dim (\gg)=d\ge 3$ and the lemma holds for Lie algebras of lower dimension. Let $Y\in \gg$ be arbitrary. Fix a $1$-dimensional central ideal $\ensuremath{\mathfrak j}$ and a surjective Lie algebra homomorphism \[\pi\co \gg\to \gg/\ensuremath{\mathfrak j}. \] By the inductive assumption $\pi (Y)$ belongs to a codimension-one ideal $\ensuremath{\mathfrak f}\subset \gg/\ensuremath{\mathfrak j}$, whence $Y$ belongs to the codimension-one ideal $\pi^{-1} (\ensuremath{\mathfrak f}) \subset \gg$. \end{proof} The following simple result is useful: \begin{proposition} \mylabel{th:abh} If $p\in M$ and $\ensuremath{\mathfrak u}, \ensuremath{\mathfrak w} \subset \gg_p$ are linear subspaces such that $\ensuremath{\mathfrak u} +\ensuremath{\mathfrak w}=\gg$, then $p\in \Fix \gg$. \qed \end{proposition} The set $\mcal C (\gg)$ of codimension-one ideals has a natural structure as a projective variety in the real projective space $\P{d-1}, \,d=\dim (\gg)$, and is given the corresponding metrizable topology. \begin{proposition}[\sc Plante \cite{Plante86}] \mylabel{th:V} Assume $\gg$ is nilpotent. \begin{description} \item[(i)] Every component of $\mcal C(\gg)$ has positive dimension. \item[(ii)] Every codimension-one subalgebra of $\gg$ is an ideal. \item[(iii)] If $O\subset M$ is a one-dimensional orbit of the local action, then \[ p\in O\implies \gg_p=\gg_O \in\mcal C(\gg). \] \end{description} \end{proposition} \subsection*{Minimal sets} \mylabel{sec:minimal} A {\em minimal set} (for the local action of $G$ on $M$) is a nonempty compact invariant set containing no smaller such set. Compact orbits are minimal sets; all other minimal sets are {\em exceptional}.
An orbit homeomorphic to the unit circle $\S 1$ is a {\em circle orbit}. A circle orbit is {\em isolated} if it has a neighborhood containing no other circle orbit; otherwise it is {\em nonisolated}. The following proposition is adapted from Plante \cite{Plante86}: \begin{proposition} \mylabel{th:plante2} Let $M_1\subset M$ be a compact surface. \begin{description} \item[(i)] The number of exceptional minimal sets in $M_1$ is at most half the genus of $M_1$. \item[(ii)] The union of the minimal sets in $M_1$ is compact. \item[(iii)] The union of the nonisolated circle orbits in $M_1$ is compact. \item[(iv)] If $C\subset M$ is a nonisolated circle orbit, every neighborhood of $C$ contains a compact invariant surface $Q$ such that: \begin{itemize} \item each component $P$ of $Q$ is either an annulus or a M\"obius band, \item $\ensuremath{\Lambda}\setminus P$ contains at most finitely many minimal sets. \end{itemize} \end{description} \end{proposition} \begin{proof} (i) is a generalization of \cite[Lemma 2.3]{Plante86}, with the same proof. Slight revisions of arguments on \cite[page 155]{Plante86} prove the other assertions. \end{proof} \section{Proof of Theorem \ref{th:MAIN}} \mylabel{sec:proofs} Recall that $G$ is a connected nilpotent Lie group with a local action on a surface $M$, the Lie algebra of $G$ is $\gg$, and $K\subset M$ is an essential block of fixed points for the induced local flow of a one-parameter subgroup $X\co \RR \to G$. The theorem states that $\Fix G\cap K\ne\varnothing$, which is trivial if $\dim G \le 1$. Assume inductively: $\dim G >1$ and the conclusion holds for groups of lower dimension. Every neighborhood of $K$ in $M$ contains an isolating neighborhood $U$ for $(X, K)$ such that $\ov U$ is a compact surface. It suffices to prove for all such $U$ that \begin{equation} \label{eq:MAIN*} \Fix G \cap U\ne\varnothing.
\end{equation} \begin{lemma} \mylabel{th:wm} If $U$ contains only finitely many minimal sets, Equation (\ref{eq:MAIN*}) holds. \end{lemma} \begin{proof} Since $\dim (\gg)>1$ and $\gg$ is covered by codimension-one ideals (Lemma \ref{th:covering}), $\gg$ contains a sequence $\{Y_k\}_{k\in \NN}$ converging to $X$, and a sequence $\{\ensuremath{\mathfrak h}_k\}$ of pairwise distinct, codimension-one ideals, such that: $Y_k\in\ensuremath{\mathfrak h}_k$, $U$ is isolating for $Y_k$, and $\msf i (Y_k, U)\ne 0$ (Proposition \ref{th:stable}). As the set $K_k := \Fix {\ensuremath{\mathfrak h}_k}\cap U$ is compact, nonempty by the induction hypothesis, and invariant by Proposition \ref{th:hg}(ii), there is a minimal set $L_k\subset K_k\subset U$. The hypothesis of the Lemma implies there exist indices $i, j$ such that $\ensuremath{\mathfrak h}_i\ne \ensuremath{\mathfrak h}_j$ and $L_i= L_j$. Equation (\ref{eq:MAIN*}) now follows from Proposition \ref{th:abh}. \end{proof} In verifying Equation (\ref{eq:MAIN*}) we can assume $U$ contains infinitely many minimal sets, thanks to Lemma \ref{th:wm}. Setting $M_1:=\ov U$ in Proposition \ref{th:plante2}, we see that all but finitely many of these are circle orbits, and there is a nonempty, compact, invariant surface $P\subset \ov U$ such that: \begin{equation} \label{eq:pm1a} \chi (P)=0 \ \mbox{and $\ov U \setminus P$ contains only finitely many minimal sets.} \end{equation} If $\Fix G\cap P\ne\varnothing$, Equation (\ref{eq:MAIN*}) holds and the proof is complete. Henceforth assume: \begin{equation} \label{eq:FGP} \Fix G\cap P=\emp. \end{equation} \begin{lemma} \mylabel{th:z} There exists $Z\in \gg$ such that: \begin{description} \item[(a)] $U$ is isolating for $Z$, \item[(b)] $\msf i (Z, U)=\msf i (X, U)\ne 0$, \item[(c)] $\Fix Z \cap \ensuremath{\partial} P=\emp$.
\end{description} \end{lemma} \begin{proof} Each of the finitely many components $C_i\subset \ensuremath{\partial} P$ contains no fixed point by (\ref{eq:FGP}), hence it is a circle orbit. Proposition \ref{th:V}(iii) shows that property (c) holds for all $Z$ in the dense open set $\gg\setminus \bigcup_i\gg_{C_i}$, while (a) and (b) hold if $Z$ is in the nonempty open set $\mathfrak N (X, U,\gg)$ (see Proposition \ref{th:stable}). Thus the Lemma is satisfied by all $Z$ in the nonempty set $\mathfrak N (X, U,\gg) \setminus\bigcup_i\gg_{C_i}$. \end{proof} Fix $Z$ as in Lemma \ref{th:z}. Then \begin{equation} \label{eq:zun0} \msf i (Z, U)\ne 0. \end{equation} Since $\Fix Z \cap\ensuremath{\partial} P=\emp$ by Lemma \ref{th:z}(c), the sets $U\setminus P$ and $P\setminus\ensuremath{\partial} P$ are isolating for $Z$. Therefore \begin{equation} \label{eq:Z1A} \msf i (Z, U) =\msf i (Z, U\setminus P) + \msf i (Z, P\setminus \ensuremath{\partial} P) \end{equation} by (D3) in Section \ref{sec:index}. Now $\msf i (Z, P\setminus\ensuremath{\partial} P)=0$, by Proposition \ref{th:stable}(b) with $V:=P\setminus\ensuremath{\partial} P$. Consequently \begin{equation} \label{eq:Z2} \msf i (Z, U\setminus P) \ne 0 \end{equation} by (\ref{eq:zun0}) and (\ref{eq:Z1A}). By (\ref{eq:pm1a}) the set $U\setminus P$ contains only finitely many minimal sets, and by (\ref{eq:Z2}) it is essential for $Z$. Therefore Lemma \ref{th:wm}, applied with $U\setminus P$ in place of $U$ and $Z$ in place of $X$, yields \[ \Fix G \cap (U\setminus P) \ne\varnothing, \] implying (\ref{eq:MAIN*}). This completes the proof of Theorem \ref{th:MAIN}. \qed
https://arxiv.org/abs/1405.2331
Fixed points of local actions of nilpotent Lie groups on surfaces
https://arxiv.org/abs/1701.06030
Fourth-order time-stepping for stiff PDEs on the sphere
We present in this paper algorithms for solving stiff PDEs on the unit sphere with spectral accuracy in space and fourth-order accuracy in time. These are based on a variant of the double Fourier sphere method in coefficient space with multiplication matrices that differ from the usual ones, and implicit-explicit time-stepping schemes. Operating in coefficient space with these new matrices allows one to use a sparse direct solver, avoids the coordinate singularity and maintains smoothness at the poles, while implicit-explicit schemes circumvent severe restrictions on the time-steps due to stiffness. A comparison is made against exponential integrators and it is found that implicit-explicit schemes perform best. Implementations in MATLAB and Chebfun make it possible to compute the solution of many PDEs to high accuracy in a very convenient fashion.
\section{Introduction} We are interested in computing smooth solutions of stiff PDEs on the unit sphere of the form \begin{equation} u_t = \mathcal{L}u + \mathcal{N}(u), \quad u(t=0,x,y,z)=u_0(x,y,z), \label{PDE} \end{equation} \noindent where $u(t,x,y,z)$ is a function of time $t$ and Cartesian coordinates $(x,y,z)$ with $x^2 + y^2 + z^2=1$. The function $u$ can be real or complex and \reff{PDE} can be a single equation, as well as a system of equations. In this paper, we restrict our attention to $\mathcal{L} u = \alpha\Delta u$ and to a nonlinear non-differential operator $\mathcal{N}$ with constant coefficients, but the techniques we present can be applied to more general cases. A large number of PDEs of interest in science and engineering take this form. Examples on the sphere include the (diffusive) Allen--Cahn equation $u_t = \epsilon\Delta u + u - u^3$ with $\epsilon\ll1$~\cite{du2008}, the (dispersive) focusing nonlinear Schr\"{o}dinger equation $u_t=i\Delta u + iu|u|^2$~\cite{takaoka2016}, the Gierer--Meinhardt~\cite{bhattacharya2005}, Ginzburg--Landau~\cite{rubinstein1995} and Brusselator~\cite{trinh2016} equations, and many others. There are several methods to discretize the spatial part of \reff{PDE} with spectral accuracy, including spherical harmonics~\cite{atkinson2012}, radial basis functions (RBFs)~\cite{fornberg2015a} and the double Fourier sphere (DFS) method~\cite{merilees1973, orszag1974}. The DFS method is the only one that leads to a $\mathcal{O}(N\log N)$ complexity per time-step, where $N$ is the total number of grid points in the spatial discretization of \reff{PDE}. For spherical harmonics, the cost per time-step is $\mathcal{O}(N^{3/2})$ since there are no effective ``fast'' spherical transforms,\footnote{Fast $\mathcal{O}(N\log N)$ spherical transforms have received significant attention but require so far a $\mathcal{O}(N^2)$ precomputational cost~\cite{rokhlin2006, tygert2008, tygert2010}. 
Note that in a recent manuscript~\cite{slevinsky2017} Slevinsky proposed a new fast spherical transform based on conversions between spherical harmonics and bivariate Fourier series with a lower $\mathcal{O}(N^{3/2})$ precomputational cost. For implicit-explicit schemes with the DFS method, the precomputation is $\mathcal{O}(N)$.} and for global RBFs~\cite[Ch.~6]{fornberg2015a}, the cost per time-step is $\mathcal{O}(N^2)$ since these generate dense differentiation matrices.\footnote{RBF-FD~\cite[Ch.~7]{fornberg2015a} generate sparse matrices but only achieve algebraic orders of accuracy. Moreover, the solution time for these sparse matrices is not necessarily $\mathcal{O}(N)$.} We focus in this paper on the DFS method and present a novel formulation operating in coefficient space. Once the spatial part of \reff{PDE} has been discretized by the DFS method on an $n\times m$ uniform longitude-latitude grid, it becomes a system of $nm$ ODEs, \begin{equation} \hat{u}' = \mathbf{L}\hat{u} + \mathbf{N}(\hat{u}), \quad \hat{u}(0)=\hat{u}_0, \label{ODE} \end{equation} \noindent where $\hat{u}(t)$ is a vector of $nm$ Fourier coefficients and $\Lbf$ (an $nm\times nm$ matrix) and $\Nbf$ are the discretized versions of $\mathcal{L}$ and $\mathcal{N}$ in Fourier space. Solving the system \reff{ODE} with generic explicit time-stepping schemes can be highly challenging because of \textit{stiffness}: the large eigenvalues of $\Lbf$---due to the second differentiation order in \reff{PDE} and the clustering of points near the poles---force one to use very small time-steps. Exponential integrators and implicit-explicit (IMEX) schemes are two classes of numerical methods that are aimed at treating stiffness. For exponential integrators, the linear part~$\Lbf$ is integrated exactly using the matrix exponential while a numerical scheme is applied to the nonlinear part~$\Nbf$. 
For IMEX schemes, an explicit formula is used to advance~$\Nbf$ while an implicit scheme is used to advance~$\Lbf$. We show in this paper that the DFS method combined with IMEX schemes leads to $\mathcal{O}(nm\log nm)$ per time-step algorithms for both diffusive and dispersive PDEs, that exponential integrators achieve this complexity for diffusive PDEs only, and that IMEX schemes outperform exponential integrators in both cases. For numerical comparisons, we consider two versions of the fourth-order ETDRK4 exponential integrator of Cox and Matthews~\cite{cox2002} and two fourth-order IMEX schemes, the IMEX-BDF4~\cite{hundsdorfer2007} and LIRK4~\cite{calvo2001} schemes. \begin{figure} \hspace{-1.2cm} \includegraphics[scale=.6]{glsol.eps} \vspace{-10cm} \caption{\textit{Initial condition and real part of the solution at times $t=10,20,100$ of the Ginzburg--Landau equation computed by the \textup{\texttt{spinsphere}} code. This solution is oriented in a direction at a $\pi/8$ angle from the north-south axis, so the symmetry maintained in the computations is a reflection of global accuracy.}} \label{fig:GLsol} \end{figure} By contrast, Kassam and Trefethen demonstrated in~\cite{kassam2005} that exponential integrators (ETDRK4) outperform IMEX schemes (IMEX-BDF4); this can be explained by two factors. First, they focused on problems with diagonal matrices $\Lbf$. (IMEX-BDF4 performed better than ETDRK4 for the only non-diagonal problem they considered.) For diagonal problems, exponential integrators are particularly efficient since the computation of the matrix exponential is trivial and the matrix exponential is diagonal too (hence, its action on vectors can trivially be computed in linear time). Second, IMEX-BDF4 is unstable for dispersive PDEs since it is based on the fourth-order backward differentiation formula, which is unstable for dispersive PDEs---this is why they could not make it work for the KdV equation. 
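The effect of stiffness, and the way an IMEX splitting circumvents it, can already be seen on a scalar model problem (our own sketch, using first-order schemes rather than the fourth-order ones compared in this paper). For $u'=\lambda u+u^2$ with $\lambda=-1000$ and step size $h=10^{-2}$, forward Euler has linear amplification factor $|1+h\lambda|=9$ and blows up, while an IMEX Euler scheme, implicit in $\lambda u$ and explicit in $u^2$, decays like the true solution:

```python
lam, h, nsteps = -1000.0, 0.01, 10       # stiff linear part: h*lam = -10
u_exp = u_imex = 0.5

for _ in range(nsteps):
    # forward Euler: fully explicit; the linear amplification factor is 1 + h*lam = -9
    u_exp = u_exp + h * (lam * u_exp + u_exp ** 2)
    # IMEX Euler: implicit in the stiff linear term, explicit in the nonlinearity
    u_imex = (u_imex + h * u_imex ** 2) / (1.0 - h * lam)

assert abs(u_exp) > 1e50       # the explicit iteration has blown up
assert 0 < u_imex < 1e-10      # the IMEX iterate decays, as the true solution does
```

The implicit step costs only a scalar division here; in the matrix setting of \reff{ODE} it becomes a linear solve with $\mathbf{I}-h\Lbf$, which is where the sparsity of our multiplication matrices pays off.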
Our DFS method in coefficient space leads to matrices that are not diagonal but have a sparsity structure that makes IMEX schemes particularly efficient, and we consider not only IMEX-BDF4 but also LIRK4, which is stable for dispersive PDEs. There are libraries for solving time-dependent PDEs on the sphere, including SPHEREPACK~\cite{spherepack1999} and FEniCS~\cite{fenics2013}. However, none of these is aimed at solving stiff PDEs, nor do they easily allow computing in an integrated environment. The algorithms we shall describe in this paper are aimed at solving stiff PDEs, and have been implemented in MATLAB and made available as part of Chebfun~\cite{chebfun} in the \texttt{spinsphere} code. (Note that \texttt{spin} stands for \textbf{s}tiff \textbf{P}DE \textbf{in}tegrator.) The recent extension of Chebfun to the sphere~\cite{townsend2016}, built on its extension to periodic problems~\cite{montanelli2015b}, provides a very convenient framework for working with functions on the sphere. For example, the function \begin{equation} f(\lambda, \theta) = \cos(1+ \cos\lambda\sin(2\theta)) \end{equation} \noindent can be approximated to machine precision by the following command: \vspace{.2cm} \begin{small} \begin{verbatim} f = spherefun(@(lam,th) cos(1 + cos(lam).*sin(2*th))); \end{verbatim} \end{small} \vspace{.2cm} \noindent Using \texttt{spherefun} objects as initial conditions, the \texttt{spinsphere} code allows one to solve stiff PDEs with a few lines of code, using IMEX-BDF4 for diffusive and LIRK4 for dispersive PDEs.
For example, the following MATLAB code solves the Ginzburg--Landau equation $u_t = 10^{-4}\Delta u + u -(1+1.5i)u|u|^2$ with $1024$ grid points in longitude and latitude and a time-step $h=10^{-1}$: \vspace{.2cm} \begin{small} \begin{verbatim} n = 1024; h = 1e-1; tspan = [0 100]; S = spinopsphere(tspan); S.lin = @(u) 1e-4*lap(u); S.nonlin = @(u) u-(1+1.5i)*u.*abs(u).^2; u0 = @(x,y,z) 1/3*(cos(40*x)+cos(40*y)+cos(40*z)); th = pi/8; c = cos(th); s = sin(th); S.init = spherefun(@(x,y,z)u0(c*x-s*z,y,s*x+c*z)); u = spinsphere(S, n, h); \end{verbatim} \end{small} \vspace{.2cm} \noindent The initial condition and the solution at times $t=10,20,100$ are shown in Figure~\ref{fig:GLsol}. (The Ginzburg--Landau equation in 2D goes back to the 1970s with the work of Stewartson and Stuart~\cite{stewartson1971}. Rubinstein and Sternberg studied it on the sphere~\cite{rubinstein1995}, with applications in the study of liquid crystals~\cite{pismen1992} and nonequilibrium patterns~\cite{pomeau1983}.) Figure~\ref{code:LIRK4} lists a detailed MATLAB code to solve the same problem; a more sophisticated version of this code is used inside \texttt{spinsphere}. (Note that this code might be a little bit slow; for speed, the reader might want to adjust $n$ and $h$, and change the initial condition. We also encourage the reader to type \texttt{spinsphere('gl')} or \texttt{spinsphere('nls')} to invoke an example computation.) The paper is structured as follows. In the next section, we review the DFS method (Section 2.1) and present a new Fourier spectral method in coefficient space, which, using multiplication matrices that differ from the usual ones, avoids the coordinate singularity (Section 2.2), takes advantage of sparse direct solvers (Section 2.3), and maintains smoothness at the poles (Sections 2.4 and 2.5). The time-stepping schemes are presented in Section 3 while Section 4 is dedicated to numerical comparisons on simple PDEs. 
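Before proceeding, the flavor of a fourth-order IMEX scheme can be seen on a scalar problem (a sketch of ours in Python/NumPy; the paper's solvers are the MATLAB codes above). We apply the IMEX-BDF4 (SBDF4) update, implicit in $\lambda u$ and extrapolated in the nonlinearity, to $u'=\lambda u+u^2$, for which the exact solution is available, and observe the expected fourth-order convergence. The scheme's coefficients are the standard SBDF4 ones; the start-up values are taken from the exact solution since only the scheme itself is being tested.

```python
import numpy as np

lam = -2.0                            # linear coefficient, treated implicitly
nonlin = lambda u: u ** 2             # nonlinearity, treated explicitly
exact = lambda t: 2 * np.exp(-2 * t) / (3 + np.exp(-2 * t))  # solves u' = lam*u + u^2

def sbdf4(h, nsteps):
    # start-up values u_0, ..., u_3 taken from the exact solution
    u = [exact(i * h) for i in range(4)]
    for n in range(3, nsteps):
        rhs = (4 * u[-1] - 3 * u[-2] + (4 / 3) * u[-3] - (1 / 4) * u[-4]
               + h * (4 * nonlin(u[-1]) - 6 * nonlin(u[-2])
                      + 4 * nonlin(u[-3]) - nonlin(u[-4])))
        u.append(rhs / (25 / 12 - h * lam))   # implicit solve: scalar division
    return u[-1]

T = 2.0
err1 = abs(sbdf4(0.1, 20) - exact(T))
err2 = abs(sbdf4(0.05, 40) - exact(T))
order = np.log2(err1 / err2)          # observed order of convergence, close to 4
assert 3.3 < order < 4.7
```

Halving the step size reduces the error by a factor of about $2^4=16$; for the system \reff{ODE} the scalar division becomes a sparse linear solve with $\frac{25}{12}\mathbf{I}-h\Lbf$.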
\section{A Fourier spectral method in coefficient space} We present in this section a Fourier spectral method for the spatial discretization of \reff{PDE}, based on the DFS method and novel Fourier multiplication matrices in coefficient space. The accuracy of the method is tested by solving the Poisson and heat equations. \subsection{The double Fourier sphere method} The DFS method uses the longitude-latitude coordinate transform, \begin{equation} x = \cos\lambda\sin\theta, \; y = \sin\lambda\sin\theta, \; z=\cos\theta, \label{coord} \end{equation} \noindent with $(\lambda,\theta)\in[-\pi,\pi]\times[0,\pi]$. The azimuth angle $\lambda$ corresponds to the longitude while the polar (or zenith) angle $\theta$ corresponds to the latitude.\footnote{To be precise, $\theta$ is the colatitude, defined as ``$\pi/2$ minus latitude'' with latitude in $[-\pi/2,\pi/2]$. For simplicity, we will refer to it as latitude. Note that $\theta=0$ corresponds to the north pole and $\theta=\pi$ to the south pole.} A function $u(x,y,z)$ on the sphere is written as $u(\lambda,\theta)$ using \reff{coord}, i.e., \begin{equation} u(\lambda, \theta) = u(\cos\lambda\sin\theta, \sin\lambda\sin\theta, \cos\theta), \quad (\lambda,\theta)\in[-\pi,\pi]\times[0,\pi], \label{defu} \end{equation} \noindent and \reff{PDE} with $\mathcal{L}=\alpha\Delta$ becomes \begin{equation} u_t = \alpha\Delta u + \mathcal{N}(u), \quad u(t=0,\lambda,\theta)=u_0(\lambda,\theta), \quad (\lambda,\theta)\in[-\pi,\pi]\times[0,\pi]. \label{PDE2} \end{equation} \noindent Note that the function $u(\lambda,\theta)$ in \reff{defu} is $2\pi$-periodic in $\lambda$ but not periodic in $\theta$. 
The key idea of the DFS method---developed by Merilees~\cite{merilees1973} and further studied by Orszag~\cite{orszag1974} in the 1970's, and recently revisited by Townsend et al.\ with the use of low-rank approximations~\cite{townsend2016}---is to associate a function $\tilde{u}(\lambda,\theta)$ with $u(\lambda,\theta)$, $2\pi$-periodic in both $\lambda$ and $\theta$, defined on $[-\pi,\pi]\times[-\pi,\pi]$, and constant along the lines $\theta=0$ and $\theta=\pm\pi$ corresponding to the poles. Mathematically, the function $\tilde{u}(\lambda,\theta)$ is defined as \begin{equation} \tilde{u}(\lambda, \theta) = \left\{ \begin{array}{ll} u(\lambda, \theta), & (\lambda,\theta)\in[-\pi,\pi]\times[0,\pi], \\ u(\lambda+\pi, -\theta), & (\lambda,\theta)\in[-\pi,0]\times[-\pi,0], \\ u(\lambda-\pi, -\theta), & (\lambda,\theta)\in[0,\pi]\times[-\pi,0]. \end{array} \right. \label{DFS} \end{equation} \noindent The function $u$ is ``doubled-up'' in the $\theta$-direction and flipped; see, e.g., \cite[Fig.~1]{townsend2016}. Since the function $\tilde{u}$ is $2\pi$-periodic in both $\lambda$ and $\theta$, it can be approximated by a 2D Fourier series, \begin{equation} \tilde{u}(\lambda, \theta) \approx \sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}\sum_{k=-n/2}^{n/2}{\hspace{-0.3cm}}'{\;\,}\hat{u}_{jk}e^{ij\theta}e^{ik\lambda}. \label{Fourier2D} \end{equation} \noindent The numbers $n$ and $m$ are assumed to be even (this will be the case throughout this paper) and the primes on the summation signs mean that the boundary terms $j=\pm m/2$ or $k=\pm n/2$ are halved. 
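The ``doubling-up'' \reff{DFS} is easy to experiment with (a sketch of ours in Python/NumPy; the paper's implementation is the MATLAB \texttt{spherefun}/\texttt{spinsphere} code). For the smooth sphere function $u=x=\cos\lambda\sin\theta$, the identity $\cos\lambda\sin\theta=\cos(\lambda\pm\pi)\sin(-\theta)$ shows that $\tilde{u}$ is given by the same formula on all of $[-\pi,\pi]^2$, so the gluing in \reff{DFS} introduces no discontinuity:

```python
import numpy as np

def u(lam, th):
    # the coordinate function x = cos(lambda)*sin(theta) on the sphere
    return np.cos(lam) * np.sin(th)

def u_tilde(lam, th):
    # piecewise definition of the doubled-up function of the DFS method
    if th >= 0:
        return u(lam, th)
    elif lam <= 0:
        return u(lam + np.pi, -th)
    else:
        return u(lam - np.pi, -th)

# the doubled function agrees with cos(lam)*sin(th) on all of [-pi,pi]^2,
# a smooth bi-periodic function: the gluing introduces no seam for this u
for lam in np.linspace(-np.pi, np.pi, 21):
    for th in np.linspace(-np.pi, np.pi, 21):
        assert np.isclose(u_tilde(lam, th), np.cos(lam) * np.sin(th))
```

For a general smooth function on the sphere the doubled function is still smooth and bi-periodic, but of course need not collapse to a single closed-form expression as it does here.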
The Fourier coefficients are defined by \begin{equation} \hat{u}_{jk} = \frac{1}{nm}\sum_{p=1}^{m}\sum_{q=1}^{n}\tilde{u}(\lambda_q,\theta_p)e^{-ij\theta_p}e^{-ik\lambda_q}, \quad -\frac{m}{2}\leq j \leq\frac{m}{2}-1, \quad -\frac{n}{2}\leq k \leq\frac{n}{2}-1, \label{Coeffs2D} \end{equation} \noindent with $\hat{u}_{j,n/2}=\hat{u}_{j,-n/2}$ for all $j$ and $\hat{u}_{m/2,k}=\hat{u}_{-m/2,k}$ for all $k$, and correspond to a 2D uniform grid with $n$ points in longitude and $m$ points in latitude, \begin{equation} \lambda_q = -\pi + (q-1)\frac{2\pi}{n}, \quad 1\leq q\leq n, \quad \theta_p = -\pi + (p-1)\frac{2\pi}{m}, \quad 1\leq p\leq m. \label{Grid2D} \end{equation} \noindent The $nm$ Fourier coefficients $\hat{u}_{jk}$ can be computed by sampling $\tilde{u}$ on the grid and using the 2D FFT, costing $\mathcal{O}(nm\log nm)$ operations. In practice we take $m=n$ since it leads to the same resolution in each direction around the equator where the spacing is the coarsest. As mentioned in~\cite{townsend2016}, every smooth function $u(\lambda,\theta)$ on the sphere is associated with a smooth bi-periodic function $\tilde{u}(\lambda,\theta)$ on $[-\pi,\pi]^2$ via~\reff{DFS}, but the converse is not true since smooth bi-periodic functions might not be constant along the lines $\theta=0$ and $\theta=\pm\pi$ corresponding to the poles. To be smooth on the sphere, functions of the form~\reff{DFS} have to satisfy the \textit{pole conditions}, which ensures that $\tilde{u}(\lambda,\theta)$ is single-valued at the poles despite the fact that latitude circles degenerate into a single point there. For approximations of the form~\reff{Fourier2D}--\reff{Coeffs2D}, this is given by \begin{equation} \sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}\hat{u}_{jk} = \sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}(-1)^j\hat{u}_{jk} = 0, \quad |k|\geq1. 
\label{polecondition1} \end{equation} \noindent As we will see in the numerical experiments of Sections~2.4 and 2.5, when solving a PDE involving the Laplacian operator, if the right-hand side (for Poisson's equation) or the initial condition (for the heat equation) is a smooth function on the sphere, then the solutions obtained with our DFS method are also smooth functions on the sphere. Therefore, we do not have to impose the conditions~\reff{polecondition1}. Similarly, we do not impose the ``doubled-up'' symmetry in~\reff{DFS}, as it was preserved throughout our experiments (we leave as an open problem to prove this). For brevity we only illustrate the condition~\reff{polecondition1} in our experiments. Let us finish this section with some comments about Fourier series for solving PDEs on the sphere. One way of using them is to use standard double Fourier series on a ``doubled-up'' version of $u$, i.e., the DFS method~\reff{DFS}--\reff{Coeffs2D}. This is what Merilees did and, combined with a Fourier spectral method in value space, he solved the shallow water equations~\cite{merilees1973}. Another way is to use half-range cosine or sine series~\cite{boyd1978, cheong2000a, orszag1974, shen1999, yee1980}---after all, spherical harmonics are represented as proper combinations of half-ranged cosine or sine series. For example, Orszag~\cite{orszag1974} suggested the use of approximations of the form \begin{equation} u(\lambda, \theta) \approx \sum_{k=-n/2}^{n/2}\sum_{j=0}^{n/2}\hat{u}_{jk}\sin^s\theta\cos j\theta e^{ik\lambda}, \label{Orszag} \end{equation} \noindent where $s=0$ if $k$ is even, $s=1$ if $k$ is odd. (Note that the Fourier series in~\reff{Orszag} approximates $u$ directly, as opposed to $\tilde{u}$.) When using half-range cosine or sine terms as basis functions, one must be careful that the pole conditions are satisfied. 
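Returning to \reff{polecondition1}: the condition can be verified numerically for a concrete smooth function (a sketch of ours in Python/NumPy). For $u=x=\cos\lambda\sin\theta$, whose doubled-up version is $\cos\lambda\sin\theta$ on all of $[-\pi,\pi]^2$, we evaluate \reff{Coeffs2D} directly on the grid \reff{Grid2D} with $n=m=8$; since $\hat{u}_{m/2,k}=\hat{u}_{-m/2,k}$ and $m$ is even, the primed sums in \reff{polecondition1} reduce to plain sums over $-m/2\le j\le m/2-1$:

```python
import numpy as np

n = m = 8
lam = -np.pi + 2 * np.pi * np.arange(n) / n     # longitude grid points
th = -np.pi + 2 * np.pi * np.arange(m) / m      # latitude grid (doubled-up)
U = np.sin(th)[:, None] * np.cos(lam)[None, :]  # samples of u_tilde = cos(lam)*sin(th)

j = np.arange(-m // 2, m // 2)                  # Fourier modes in theta
k = np.arange(-n // 2, n // 2)                  # Fourier modes in lambda
Et = np.exp(-1j * np.outer(j, th))              # e^{-i j theta_p}
El = np.exp(-1j * np.outer(k, lam))             # e^{-i k lambda_q}
C = Et @ U @ El.T / (n * m)                     # the coefficients u_hat_{jk}

# pole conditions: both sums over j vanish for every |k| >= 1
for kk in range(n):
    if k[kk] != 0:
        assert abs(C[:, kk].sum()) < 1e-12
        assert abs((((-1.0) ** j) * C[:, kk]).sum()) < 1e-12
```

Here the only nonzero coefficients are $\hat{u}_{\pm 1,\pm 1}=\mp i/4$ (the sign following the $j$ index), so both sums cancel exactly; for other smooth sphere functions the sums vanish up to truncation error.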
One can either select basis functions that satisfy the pole conditions or impose a constraint on the Fourier coefficients to enforce it. For solving PDEs involving the Laplacian operator (in both value and coefficient spaces), Orszag imposed certain constraints on the coefficients $\hat{u}_{jk}$ in~\reff{Orszag}, analogous to those in~\reff{polecondition1}, \begin{equation} \sum_{j=0}^{n/2} \hat{u}_{jk} = \sum_{j=0}^{n/2} (-1)^j\hat{u}_{jk} = 0, \quad |k|\geq2. \label{polecondition2} \end{equation} \noindent Boyd~\cite{boyd1978} studied Orszag's method and showed that the constraints~\reff{polecondition2} are actually not necessary for solving \textit{time-independent} PDEs in value space but mentioned that the ``absence of pole constraints is still risky'' when working with coefficients. The spectral method we present in this paper is based on Merilees' approach and is similar to the method of Townsend et al.~\cite{townsend2016}, but the Fourier multiplication matrices we use are different. It is simpler to implement than the half-range cosine/sine methods since there are no constraints to impose, and gives comparable accuracy. \subsection{Fourier multiplication matrices in coefficient space} In this section, we are interested in finding a matrix for multiplication by $\sin^2\theta$ that is nonsingular. This is crucial because some of the time-stepping methods we shall describe in Section~3 need to compute the inverse of such a matrix. For this discussion we can restrict our attention to the 1D case. Consider an even number $m$ of equispaced points $\{\theta_p\}_{p=1}^m$ on $[-\pi,\pi]$, \begin{equation} \theta_p = -\pi + (p-1)\frac{2\pi}{m}, \quad 1\leq p \leq m. \end{equation} \noindent (Note that these points include $-\pi$ but not $\pi$.) Let $u$ be a complex-valued function on $[-\pi,\pi]$ with values $\{u_p\}_{p=1}^m$ at these points. 
It is well known~\cite[Ch.~13]{henrici1986} that there exists a unique degree $m/2$ trigonometric polynomial $p(\theta)$ that interpolates $u(\theta)$ at these $m$ points, i.e., such that $p(\theta_p)=u_p$ for each $p$, of the symmetric form \begin{equation} p(\theta) = \sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}\hat{u}_j e^{ij\theta}, \label{triginterp} \end{equation} \noindent with Fourier coefficients \begin{equation} \hat{u}_j = \frac{1}{m}\sum_{p=1}^{m}u_pe^{-ij\theta_p}, \quad -\frac{m}{2}\leq j\leq\frac{m}{2}-1, \end{equation} \noindent and $\hat{u}_{m/2}=\hat{u}_{-m/2}$. The prime on the summation sign indicates that the terms $j=\pm m/2$ are halved. Hence, we shall define the vector of $m+1$ Fourier coefficients as \begin{equation} \hat{u} = \Big(\frac{\hat{u}_{-m/2}}{2}, \hat{u}_{-m/2+1},\ldots,\hat{u}_{m/2-1},\frac{\hat{u}_{m/2}}{2}=\frac{\hat{u}_{-m/2}}{2}\Big)^T. \label{vec1} \end{equation} \noindent Let us emphasize that if the trigonometric interpolant were defined as \begin{equation} p(\theta) = \sum_{j=-m/2}^{m/2-1}\hat{u}_j e^{ij\theta}, \label{triginterp2} \end{equation} \noindent the derivative of \reff{triginterp2} would have a mode $(-im/2)e^{-im\theta/2}$ leading to complex values for real data.\footnote{Consider for example $u(\theta)=\cos(\theta)$ with $m=2$. The representation \reff{triginterp2} gives $p(\theta)=e^{-i\theta}$ with correct values $-1$ and $1$ at grid points $\theta_1=-\pi$ and $\theta_2=0$ but its derivative $p'(\theta)=-ie^{-i\theta}$ is complex-valued on the grid. The representation \reff{triginterp} gives $p(\theta)=1/2(e^{-i\theta}+e^{i\theta})$, which is indeed the correct answer.} However, FFT codes only store $m$ coefficients, i.e., they assume that $p(\theta)$ is of the form~\reff{triginterp2} with \begin{equation} \hat{u} = \Big(\frac{\hat{u}_{-m/2}}{2}+\frac{\hat{u}_{m/2}}{2}=\hat{u}_{-m/2}, \hat{u}_{-m/2+1},\ldots,\hat{u}_{m/2-1}\Big)^T. 
\label{vec2} \end{equation} \noindent As a consequence, the first entry of the $m\times m$ first-order Fourier differentiation matrix $\Dbf_m$, which acts on \reff{vec2}, is zero, \begin{equation} \Dbf_m = \mathrm{diag}\Big(i(0,-m/2+1,-m/2+2,\ldots,m/2-1)\Big), \label{diffmat} \end{equation} \noindent to cancel the mode $(-im/2)e^{-im\theta/2}$. Another way of seeing this is to adopt the following point of view: to compute derivatives, we map the vector of $m$ Fourier coefficients \reff{vec2} to the representation \reff{vec1} with $m+1$ coefficients, differentiate, and then map back to $m$ coefficients. Thus, $\Dbf_m$ can be written as the product of three matrices,\footnote{Readers might find details such as \reff{Deven}--\reff{Qmat} unexciting, and we would not disagree. But what trouble it causes in computations if you do not get these details right!} \begin{equation} \Dbf_m = \Qbf\Dbf_{m+1}\Pbf \label{Deven} \end{equation} \noindent where the $(m+1)\times m$ matrix $\Pbf$ maps \reff{vec2} to \reff{vec1}, \begin{equation} \Pbf = \begin{pmatrix} \frac{1}{2} \\ & 1 \\ & & \ddots \\ & & & 1 \\ \frac{1}{2} & & & 0 \end{pmatrix}, \label{Pmat} \end{equation} \noindent $\Dbf_{m+1}$ is the $(m+1)\times(m+1)$ first-order Fourier differentiation matrix, \begin{equation} \Dbf_{m+1} = \mathrm{diag}\Big(i(-m/2, -m/2+1, \ldots, m/2)\Big), \label{Dodd} \end{equation} \noindent and $\Qbf$ is the $m\times(m+1)$ matrix that maps back to $m$ coefficients, \begin{equation} \Qbf = \begin{pmatrix} 1 & & & & 1\\ & 1 \\ & & \ddots \\ & & & 1 & 0 \end{pmatrix}. \label{Qmat} \end{equation} \noindent (Note that the first entry of the differentiation matrix \reff{Dodd} is nonzero.) 
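To make the bookkeeping in \reff{Deven}--\reff{Qmat} concrete, here is a minimal Python/NumPy sketch (an illustration only; our actual implementation is in MATLAB) that builds $\Pbf$, $\Dbf_{m+1}$ and $\Qbf$ and checks that their product equals the closed form \reff{diffmat}.

```python
import numpy as np

m = 8
# P maps the m stored coefficients (vec2) to the m+1 symmetric ones (vec1):
# the combined +-m/2 coefficient is split into two halves
P = np.zeros((m + 1, m))
P[0, 0] = P[m, 0] = 0.5
P[np.arange(1, m), np.arange(1, m)] = 1.0
# (m+1)x(m+1) differentiation matrix diag(i(-m/2,...,m/2))
D_full = np.diag(1j * np.arange(-m // 2, m // 2 + 1))
# Q maps back to m coefficients: the +-m/2 modes recombine in the first row
Q = np.zeros((m, m + 1))
Q[0, 0] = Q[0, m] = 1.0
Q[np.arange(1, m), np.arange(1, m)] = 1.0
# the product is the m x m matrix (diffmat): the +-m/2 contributions
# cancel, which is why its first entry is zero
D_m = Q @ D_full @ P
expected = np.diag(1j * np.concatenate(([0.0], np.arange(-m // 2 + 1, m // 2))))
assert np.allclose(D_m, expected)
```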
The same point of view can be adopted for multiplication matrices, with the difference that multiplying by $\sin^2\theta$ or $\cos\theta\sin\theta$ will increase the length of the representation by four since \begin{equation} \sin^2\theta = -\frac{1}{4}e^{-2i\theta} + \frac{1}{2} - \frac{1}{4}e^{2i\theta}, \quad \cos\theta\sin\theta = \frac{i}{4}e^{-2i\theta} - \frac{i}{4}e^{2i\theta}. \end{equation} \noindent Therefore, to multiply by, e.g., $\sin^2\theta$, we map \reff{vec2} to \reff{vec1}, multiply by $\sin^2\theta$ with an $(m+1+4)\times(m+1)$ matrix, and then truncate and map back to $m$ coefficients. The resulting matrix for multiplication by $\sin^2\theta$, which we denote by $\Tbf_{\sin^2}$, is given by \begin{equation} \Tbf_{\sin^2} = \Qbf\Mbf_{\sin^2}(:,3:m+3)\Pbf, \label{Tsin2_a} \end{equation} \noindent where $\Mbf_{\sin^2}$ is the $(m+1+4)\times(m+1+4)$ matrix defined by \begin{equation} \Mbf_{\sin^2} = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{4} \vphantom{\ddots} \\ 0 & \frac{1}{2} & 0 & -\frac{1}{4} \\ -\frac{1}{4} & 0 & \ddots & \ddots & \ddots \\ & -\frac{1}{4} & \ddots & \ddots & \ddots & \ddots \\ & & \ddots & \ddots & \ddots & \ddots & -\frac{1}{4} \\ & & & \ddots & \ddots & \ddots & 0 \\ & & & & -\frac{1}{4} & 0 & \frac{1}{2} \vphantom{\ddots} \end{pmatrix}, \label{Msin2} \end{equation} \noindent $\Pbf$ is defined as before and $\Qbf$ is the following $m\times(m+1+4)$ matrix, \begin{equation} \Qbf = \begin{pmatrix} 0 & 0 & 1 & & & & 1 & 0 & 0\\ & & & 1 \\ & & & & \ddots \\ & & & & & 1 & 0 & 0 & 0 \end{pmatrix}. \end{equation} \noindent We have used MATLAB notation in \reff{Tsin2_a}: $\Mbf_{\sin^2}(:,3:m+3)$ is obtained from \reff{Msin2} by removing the first and last two columns---these columns would hit zero coefficients in the padded-with-zeros version of $\hat{u}$.
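As a sanity check on the construction \reff{Tsin2_a}, the following Python/NumPy sketch (an illustration, not our actual MATLAB implementation; 0-based slicing replaces MATLAB's \texttt{(:,3:m+3)}) builds $\Tbf_{\sin^2}$ and verifies that it maps the coefficients of a low-degree trigonometric polynomial $u$ to those of $\sin^2\theta\,u$, and that it has full rank.

```python
import numpy as np

m = 8
N = m + 5
# (m+5)x(m+5) banded matrix (Msin2): 1/2 on the diagonal, -1/4 on the +-2 bands
M = 0.5 * np.eye(N) - 0.25 * (np.eye(N, k=2) + np.eye(N, k=-2))
P = np.zeros((m + 1, m))
P[0, 0] = P[m, 0] = 0.5
P[np.arange(1, m), np.arange(1, m)] = 1.0
Q = np.zeros((m, N))
Q[0, 2] = Q[0, m + 2] = 1.0
Q[np.arange(1, m), np.arange(3, m + 2)] = 1.0
T = Q @ M[:, 2:m + 3] @ P      # MATLAB's Q*M(:,3:m+3)*P in 0-based slicing

# check: T maps coefficients of u to coefficients of sin^2(theta)*u
theta = -np.pi + 2 * np.pi * np.arange(m) / m
F = np.exp(-1j * np.outer(np.arange(-m // 2, m // 2), theta)) / m  # DFT rows j=-m/2..m/2-1
u = np.cos(theta)              # degree 1 <= m/2 - 2, so no aliasing
assert np.allclose(T @ (F @ u), F @ (np.sin(theta) ** 2 * u))
assert np.linalg.matrix_rank(T) == m    # nonsingular
```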
This leads to \begin{equation} \Tbf_{\sin^2} = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{4} & & & & -\frac{1}{4} & 0 \vphantom{\ddots} \\ 0 & \frac{1}{2} & 0 & -\frac{1}{4} & & & & 0 \\ -\frac{1}{8} & 0 & \ddots & \ddots & \ddots \\ & -\frac{1}{4} & \ddots & \ddots & \ddots & \ddots \\ & & -\frac{1}{4} & \ddots & \ddots & \ddots & -\frac{1}{4} \\ & & & \ddots & \ddots & \ddots & \ddots & -\frac{1}{4} \\ -\frac{1}{8} & & & & -\frac{1}{4} & \ddots & \frac{1}{2} & 0 \\ 0 & 0 & & & & -\frac{1}{4} & 0 & \frac{1}{2} \vphantom{\ddots} \end{pmatrix}. \label{Tsin2_b} \end{equation} \noindent Using the Gershgorin circle theorem~\cite{varga2004} we see that the $m\times m$ matrix \reff{Tsin2_b} is nonsingular since it is row diagonally dominant, with strict diagonal dominance in the second row, and irreducible. Let us add some comments about \reff{Tsin2_b}. If we operated in value space, we would obtain a singular matrix, since the multiplication matrix in value space, $\Mbf_{\sin^2}^v$, a diagonal matrix with entries $\{\sin^2\theta_p\}_{p=1}^m$, has two zeros corresponding to $\theta_p=-\pi$ and $\theta_p=0$. (The standard remedy in that case is to shift the $\theta$-grid so that it does not contain the poles~\cite{merilees1973}.) From this matrix, we can obtain a multiplication matrix in coefficient space by multiplying by the DFT matrix $\textbf{F}$ and its inverse, \begin{equation} \textbf{F}\Mbf_{\sin^2}^v\textbf{F}^{-1} = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{4} & & & & -\frac{1}{4} & 0 \vphantom{\ddots} \\ 0 & \frac{1}{2} & 0 & -\frac{1}{4} & & & & -\frac{1}{4} \\ -\frac{1}{4} & 0 & \ddots & \ddots & \ddots \\ & -\frac{1}{4} & \ddots & \ddots & \ddots & \ddots \\ & & \ddots & \ddots & \ddots & \ddots & -\frac{1}{4} \\ & & & \ddots & \ddots & \ddots & \ddots & -\frac{1}{4} \\ -\frac{1}{4} & & & & -\frac{1}{4} & \ddots & \frac{1}{2} & 0 \\ 0 & -\frac{1}{4} & & & & -\frac{1}{4} & 0 & \frac{1}{2} \vphantom{\ddots} \end{pmatrix}.
\label{Msin2t} \end{equation} \noindent This matrix is indeed singular since $\textbf{F}$ defines a unitary transformation, and its null space contains the vectors $(1,1,\ldots)^T$ and $(1,-1,1,-1,\ldots)^T$, which correspond to the Fourier coefficients of the delta functions at $\theta=0$ and $\theta=-\pi$. In~\cite{townsend2016}, the authors use the $m\times m$ version of \reff{Msin2} (which is also nonsingular) as opposed to \reff{Tsin2_b}; this leads to incorrect results already for trigonometric polynomials of degree $m/2-2$. To illustrate this, let us consider $m=6$ and multiply $\sin^2(\theta)$ by a trigonometric polynomial of degree $m/2-2=1$, e.g., $\cos(\theta)$. In the representation~\reff{vec2}, the function $\cos(\theta)=1/2(e^{-i\theta}+e^{i\theta})$ has coefficients \begin{equation} c = \Big(0, 0, \frac{1}{2}, 0, \frac{1}{2}, 0\Big)^T, \end{equation} \noindent while the product $\cos(\theta)\sin^2(\theta)=-1/8(e^{-3i\theta}+e^{3i\theta}) + 1/8(e^{-i\theta}+e^{i\theta})$ has coefficients \begin{equation} d = \Big(-\frac{1}{4}, 0, \frac{1}{8}, 0, \frac{1}{8}, 0\Big)^T.
\end{equation} \noindent For $m=6$, we have \begin{equation} \Tbf_{\sin^2} = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{4} & 0 & -\frac{1}{4} & 0 \\ 0 & \frac{1}{2} & 0 & -\frac{1}{4} & 0 & 0 \\ -\frac{1}{8} & 0 & \frac{1}{2} & 0 & -\frac{1}{4} & 0 \\ 0 & -\frac{1}{4} & 0 & \frac{1}{2} & 0 & -\frac{1}{4} \\ -\frac{1}{8} & 0 & -\frac{1}{4} & 0 & \frac{1}{2} & 0 \\ 0 & 0 & 0 & -\frac{1}{4} & 0 & \frac{1}{2} \end{pmatrix}, \quad \Tbf_{\sin^2}\,c = \Big(-\frac{1}{4}, 0, \frac{1}{8}, 0, \frac{1}{8}, 0\Big)^T = d, \end{equation} \noindent while \begin{equation} \Mbf_{\sin^2} = \begin{pmatrix} \frac{1}{2} & 0 & -\frac{1}{4} & 0 & 0 & 0 \\ 0 & \frac{1}{2} & 0 & -\frac{1}{4} & 0 & 0 \\ -\frac{1}{4} & 0 & \frac{1}{2} & 0 & -\frac{1}{4} & 0 \\ 0 & -\frac{1}{4} & 0 & \frac{1}{2} & 0 & -\frac{1}{4} \\ 0 & 0 & -\frac{1}{4} & 0 & \frac{1}{2} & 0 \\ 0 & 0 & 0 & -\frac{1}{4} & 0 & \frac{1}{2} \end{pmatrix}, \quad \Mbf_{\sin^2}\,c = \Big(-\frac{1}{8}, 0, \frac{1}{8}, 0, \frac{1}{8}, 0\Big)^T \neq d. \end{equation} Similarly, the matrix for multiplication by $\cos\theta\sin\theta$ is the product of three matrices, \begin{equation} \Tbf_{\cos\sin} = \Qbf\Mbf_{\cos\sin}(:,3:m+3)\Pbf, \end{equation} \noindent with \begin{equation} \Mbf_{\cos\sin} = \begin{pmatrix} 0 & 0 & \frac{i}{4} \vphantom{\ddots} \\ 0 & 0 & 0 & \frac{i}{4} \\ -\frac{i}{4} & 0 & \ddots & \ddots & \ddots \\ & -\frac{i}{4} & \ddots & \ddots & \ddots & \ddots \\ & & \ddots & \ddots & \ddots & \ddots & \frac{i}{4} \\ & & & \ddots & \ddots & \ddots & 0 \\ & & & & -\frac{i}{4} & 0 & 0 \vphantom{\ddots} \end{pmatrix}, \end{equation} \noindent and is given by \begin{equation} \Tbf_{\cos\sin} = \begin{pmatrix} 0 & 0 & \frac{i}{4} & & & & -\frac{i}{4} & 0 \vphantom{\ddots} \\ 0 & 0 & 0 & \frac{i}{4} & & & & 0 \\ -\frac{i}{8} & 0 & \ddots & \ddots & \ddots \\ & -\frac{i}{4} & \ddots & \ddots & \ddots & \ddots \\ & & -\frac{i}{4} & \ddots & \ddots & \ddots & \frac{i}{4} \\ & & & \ddots & \ddots & \ddots & \ddots & \frac{i}{4} 
\\ \frac{i}{8} & & & & -\frac{i}{4} & \ddots & 0 & 0 \\ 0 & 0 & & & & -\frac{i}{4} & 0 & 0 \vphantom{\ddots} \end{pmatrix}. \end{equation} \subsection{Laplacian matrix and linear systems} The Laplacian operator on the sphere is \begin{equation} \Delta u = \frac{1}{\sin\theta}\big(\sin\theta\,u_\theta\big)_\theta + \frac{1}{\sin^2\theta}u_{\lambda\lambda}, \end{equation} \noindent which we write as \begin{equation} \Delta u = u_{\theta\theta} + \frac{\cos\theta\sin\theta}{\sin^2\theta}u_\theta + \frac{1}{\sin^2\theta}u_{\lambda\lambda}. \label{lap} \end{equation} \noindent We want to discretize $\Delta$ with a matrix $\Lbf$ using a Fourier spectral method in coefficient space on an $n\times m$ uniform longitude-latitude grid~\reff{Grid2D}, and we look for a solution of the form~\reff{Fourier2D}--\reff{Coeffs2D}. Using Kronecker products, we can write $\Lbf$ as \begin{equation} \Lbf = \Ibf_n\otimes(\Dbf_{m}^{(2)} + \Tbf_{\sin^2}^{-1} \Tbf_{\cos\sin} \Dbf_{m}) + \Dbf_{n}^{(2)}\otimes(\Tbf_{\sin^2}^{-1}), \label{Laplacian} \end{equation} \noindent where $\Tbf_{\sin^2}$, $\Tbf_{\cos\sin}$ and $\Dbf_m$ have been defined in the previous section, $\Ibf_n$ is the $n\times n$ identity matrix and $\Dbf_{m}^{(2)}$ is the second-order Fourier differentiation matrix, \begin{equation} \Dbf^{(2)}_m = \mathrm{diag}\Big(-(m/2)^2, -(m/2-1)^2, \ldots, -1, 0, -1, \ldots, -(m/2-1)^2\Big). \label{diffmat2} \end{equation} \noindent Note that the matrix $\Lbf$ is block diagonal with $n$ dense blocks of size $m\times m$. Let us emphasize that the $n$ blocks correspond to the $n$ longitudinal wavenumbers $-n/2\leq k\leq n/2-1$ and that the size~$m$ of each block corresponds to the $m$ latitudinal wavenumbers $-m/2\leq j\leq m/2-1$.
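As a check on \reff{Laplacian}, the $k=0$ block applied to the coefficients of $\cos\theta$, an eigenfunction of $\Delta$ with eigenvalue $-l(l+1)=-2$, should return $-2$ times those coefficients. A Python/NumPy sketch (an illustration only, not our MATLAB implementation), with the matrices built as in Sections~2.1--2.2:

```python
import numpy as np

m = 8
N = m + 5
# mapping matrices P and Q, and the first/second-order differentiation matrices
P = np.zeros((m + 1, m)); P[0, 0] = P[m, 0] = 0.5
P[np.arange(1, m), np.arange(1, m)] = 1.0
Q = np.zeros((m, N)); Q[0, 2] = Q[0, m + 2] = 1.0
Q[np.arange(1, m), np.arange(3, m + 2)] = 1.0
D1 = np.diag(1j * np.concatenate(([0.0], np.arange(-m // 2 + 1, m // 2))))
D2 = np.diag(-np.arange(-m // 2, m // 2) ** 2.0)
# multiplication matrices for sin^2(theta) and cos(theta)sin(theta)
Ms = 0.5 * np.eye(N) - 0.25 * (np.eye(N, k=2) + np.eye(N, k=-2))
Mc = 0.25j * np.eye(N, k=2) - 0.25j * np.eye(N, k=-2)
Ts = Q @ Ms[:, 2:m + 3] @ P
Tc = Q @ Mc[:, 2:m + 3] @ P
# k = 0 block of the Laplacian matrix (T_sin2 inverted via a linear solve)
L0 = D2 + np.linalg.solve(Ts, Tc @ D1)
# coefficients of cos(theta) in the representation (vec2): modes j = -1, +1
c = np.zeros(m, dtype=complex)
c[m // 2 - 1] = c[m // 2 + 1] = 0.5
assert np.allclose(L0 @ c, -2.0 * c)   # Laplacian eigenvalue -l(l+1) with l = 1
```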
\begin{figure} \hspace{-.5cm} \begin{minipage}[t]{0.32\hsize} \includegraphics[height=0.93\textwidth]{L1.eps} \end{minipage} \begin{minipage}[t]{0.32\hsize} \includegraphics[height=0.93\textwidth]{L.eps} \end{minipage} \begin{minipage}[t]{0.32\hsize} \includegraphics[height=0.93\textwidth]{U.eps} \end{minipage} \caption{\textit{Sparsity pattern of the matrix $z\Tbf_{\sin^2} + w\Tbf_{\sin^2}\Lbf_i$ (left), and its $\mrm{L}$ (middle) and $\mrm{U}$ (right) factors for $m=16$. Triangular systems involving $\mrm{L}$ and $\mrm{U}$ are solvable in $\mathcal{O}(m)$ operations.} } \label{fig:LUspyplots} \end{figure} Some of the time-stepping schemes we shall describe in Section 3 involve solving linear systems of the form $(z\Ibf_{nm} + w\Lbf)x=b$, where $\Ibf_{nm}$ denotes the $nm\times nm$ identity matrix. Fortunately, the matrix structure allows for a linear-cost direct solver. The key observation is that $\Lbf$ is block diagonal, and each $m\times m$ block $\Lbf_i$ of $\Lbf$, \begin{equation} \Lbf_i = \Dbf_m^{(2)} + \Tbf_{\sin^2}^{-1}\Tbf_{\cos\sin}\Dbf_m + \Dbf^{(2)}_n(i,i)\Tbf_{\sin^2}^{-1}, \end{equation} \noindent is dense but $(z\Ibf_m + w\Lbf_i)x = b$ can be solved in $\mathcal{O}(m)$ operations since it is equivalent to solving \begin{equation} (z\Tbf_{\sin^2} + w\Tbf_{\sin^2}\Lbf_i)x = \Tbf_{\sin^2}b, \label{eq:shiftlin} \end{equation} \noindent and $(z\Tbf_{\sin^2} + w\Tbf_{\sin^2}\Lbf_i)$ is pentadiagonal with two (near-)corner elements. Therefore, using a sparse direct solver based on the standard LU factorization without pivots~\cite{davis2006}, the L and U factors have the sparsity patterns indicated in Figure~\ref{fig:LUspyplots} (due to the diagonal dominance in most blocks, the LU factorization without pivoting completes without breaking down\footnote{For the IMEX schemes of Section~3.2, we can prove that all the blocks but one are diagonally dominant; for ETDRK4-CF (see Section~3.1.1), the diagonal dominance is violated much more frequently. 
Nonetheless, diagonal dominance is merely a sufficient condition for the LU factorization to not require pivoting, and in practice, all the methods result in linear systems for which the LU factorization causes no stability issues.}). Since L and U have at most three nonzero elements per column and row respectively, each triangular linear system is solvable in $\mathcal{O}(m)$ operations; thus once an LU factorization is computed, the linear system~\eqref{eq:shiftlin} can be solved in $\mathcal{O}(m)$ operations. Therefore, linear systems of the form \begin{equation} (z\Ibf_{nm} + w\Lbf)x = b \label{eq:shiftlinsysL} \end{equation} \noindent can be solved blockwise in $\mathcal{O}(nm)$ operations.\footnote{In practice we do not solve linear systems of the form~\reff{eq:shiftlinsysL} blockwise, i.e., by solving $n$ linear systems~\reff{eq:shiftlin}. Instead, a more efficient approach is to store the left-hand sides of~\reff{eq:shiftlin} altogether as a sparse matrix and use MATLAB's sparse linear solver.} Because of the structure of the LU factors (see Figure~\ref{fig:LUspyplots}), the coefficients one gets when solving systems of the form~\reff{eq:shiftlinsysL} have the property that the even modes in $\lambda$ correspond to even functions in $\theta$ and the odd modes to odd functions. To see this, note that every other term in the LU factors is zero, so the even and odd modes are decoupled. \subsection{Poisson's equation} To test the accuracy of \reff{Laplacian}, we first solve a \textit{time-independent} PDE, \textit{Poisson's equation}, with a zero-mean condition for uniqueness, \begin{equation} \begin{array}{l} \Delta u=f(\lambda,\theta), \quad (\lambda,\theta)\in[-\pi,\pi]\times[0,\pi], \\\\ \dsp\int_{0}^{\pi}\int_{-\pi}^{\pi} u(\lambda, \theta)\sin\theta d\lambda d\theta = 0, \end{array} \label{Poisson} \end{equation} \noindent where $f$ also has zero mean on $[-\pi,\pi]\times[0,\pi]$. 
Using the DFS method, we seek a solution $\tilde{u}$ of the ``doubled-up'' version of \reff{Poisson}, \begin{equation} \begin{array}{l} \Delta \tilde{u}=\tilde{f}(\lambda,\theta), \quad (\lambda,\theta)\in[-\pi,\pi]^2, \\\\ \dsp\int_{0}^{\pi}\int_{-\pi}^{\pi} \tilde{u}(\lambda, \theta)\sin\theta d\lambda d\theta = 0, \end{array} \label{Poisson2} \end{equation} \noindent of the form~\reff{Fourier2D}--\reff{Coeffs2D}. The true solution $u$ can be recovered by restricting $\tilde{u}$ to $(\lambda,\theta)\in[-\pi,\pi]\times[0,\pi]$. (Note that the zero-mean condition in \reff{Poisson2} is on the original domain $[-\pi,\pi]\times[0,\pi]$ since $\tilde{u}$ must coincide with $u$ on this domain.) Townsend et al.~\cite{townsend2016} showed that the zero-mean condition can be discretized as \begin{equation} \begin{array}{ll} \dsp \int_{0}^{\pi}\int_{-\pi}^{\pi} \tilde{u}(\lambda, \theta)\sin\theta d\lambda d\theta & \dsp \approx \sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}\sum_{k=-n/2}^{n/2}{\hspace{-0.3cm}}'{\;\,}\hat{u}_{jk}\int_0^\pi\sin\theta e^{ij\theta}d\theta\int_{-\pi}^\pi e^{ik\lambda}d\lambda \\\\ & \dsp = 2\pi\sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}\hat{u}_{j0}\frac{1 + e^{ij\pi}}{1-j^2}=0. \end{array} \label{zeromean} \end{equation} \noindent Poisson's equation \reff{Poisson2} is then discretized by \begin{equation} \Lbf\hat{u} = \hat{f}, \label{Poisson3} \end{equation} \noindent where $\hat{u}$ and $\hat{f}$ are the vectors of $nm$ Fourier coefficients~\reff{Coeffs2D} of $\tilde{u}$ and $\tilde{f}$, and $\Lbf$ is the Laplacian matrix \reff{Laplacian}. We impose the zero-mean condition by replacing the $(m/2+1)$st row of the $(n/2+1)$st block of~$\Lbf$ by \reff{zeromean}.\footnote{The $(n/2+1)$st block of~$\Lbf$ corresponds to longitudinal wavenumber $k=0$ while the $(m/2+1)$st row of this block corresponds to latitudinal wavenumber $j=0$. 
} Note that, given the Fourier coefficients of $\tilde{u}$ and $\tilde{f}$, \reff{Poisson3} can be solved in $\mathcal{O}(nm)$ operations since it is of the form~\reff{eq:shiftlinsysL} with $z=0$. Since the Laplacian operator on the sphere has real eigenvalues $-l(l+1)$, $l\geq 0$, with eigenfunctions the spherical harmonics $Y^m_l(\lambda,\theta)$~\cite{atkinson2012}, a simple test is to take the right-hand side $f$ to be a spherical harmonic $Y_{l}^{m}$ with exact solution $u=-1/(l(l+1))Y_{l}^{m}$. A slightly more complicated test is given in \cite{cheong2000a} and this is what we shall present here. We solve Poisson's equation for a family of right-hand sides $f_l$ defined by \begin{equation} f_l(\lambda,\theta) = l(l+1)\sin^l\theta\cos (l\lambda) + (l+1)(l+2)\cos\theta\sin^l\theta\cos (l\lambda), \quad l\geq1. \end{equation} \noindent The exact solution $u_l^{ex}(\lambda,\theta)$ is given by \begin{equation} u_l^{ex}(\lambda,\theta) = -\sin^l\theta\cos (l\lambda) - \cos\theta\sin^l\theta\cos (l\lambda), \quad l\geq1. \end{equation} \noindent We compute the solutions $u_l$ for $m=n=128$ grid points in each direction for $1\leq l\leq 64$ and, following \cite[Fig.~1]{cheong2000a}, we plot the logarithm (in base 10) of the relative $L^2$-error $E$, \begin{equation} E = \frac{||u_l(\lambda,\theta) - u_l^{ex}(\lambda,\theta)||_2}{||u_l^{ex}(\lambda,\theta)||_2}, \label{error_Poisson} \end{equation} \noindent with (continuous) $L^2$-norm on the sphere $||\cdot||_2$, against $l$. (The $L^2$-norm of a \texttt{spherefun} can be computed in Chebfun with the \texttt{norm} command.) 
We also plot the logarithm of the error $P$ in satisfying the pole condition~\reff{polecondition1}, i.e., \begin{equation} P = \max\Bigg(\max_{k\neq 0}\Bigg|\sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}\hat{u}_{jk}^l\Bigg|, \max_{k\neq 0}\Bigg|\sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}(-1)^j\hat{u}_{jk}^l\Bigg|\Bigg), \label{error_polecondition} \end{equation} \noindent where the $\hat{u}_{jk}^l$ are the computed coefficients of $\tilde{u}_l$; see Figure \ref{fig:poisson}. The accuracy is excellent; the results are similar to those shown in~\cite[Fig.~1]{cheong2000a} and to results we have obtained with the Poisson solver of~\cite{townsend2016} (although the discretization matrices are different as noted in Section 2.2, the effect on the solution is negligible when $m$ and $n$ are taken large enough). \begin{figure} \centering \includegraphics[scale=.5]{poisson.eps} \caption{\textit{Variation of the relative error $\log_{10}E$ and of the error in the pole condition $\log_{10}P$ with wavenumber $l$ for $m=n=128$. The accuracy is excellent for every wavenumber $1\leq l\leq 64$.}} \label{fig:poisson} \end{figure} \subsection{Heat equation} \begin{figure} \centering \includegraphics[scale=.4]{heat.eps} \caption{\textit{Variation of the relative error $E$ at $t=1$ and of the error in the pole condition $P$ with time-step $h$ for $m=n=128$. The error $E$ scales as $\mathcal{O}(h^4)$. 
Note that the initial condition $Y_{64}^{64}(\lambda,\theta)$ corresponds to the highest wavenumbers that can be resolved on a $128\times 128$ grid.}} \label{fig:heat} \end{figure} As we mentioned in the introduction, Orszag~\cite{orszag1974} and Boyd~\cite{boyd1978} showed that when solving a \textit{time-dependent} PDE involving the Laplacian with half-range cosine/sine series \`a la \reff{Orszag}, it is crucial to impose the pole conditions \reff{polecondition2}; otherwise, they observed that ``the numerical solution still converged as the time step was shortened---to a wildly wrong answer.'' Therefore, let us now test the accuracy of~\reff{Laplacian} with a time-dependent PDE and illustrate that the pole conditions \reff{polecondition1} are also satisfied in this case. We consider the \textit{heat equation} with a thermal diffusivity and an initial condition that lead to a particularly simple exact solution, \begin{equation} u_t = \frac{1}{l(l+1)}\Delta u, \quad u(t=0,\lambda,\theta) = Y_{l}^{m}(\lambda,\theta), \quad (\lambda,\theta)\in[-\pi,\pi]\times[0,\pi]. \label{Heat} \end{equation} \noindent The exact solution is $u^{ex}(t,\lambda,\theta) = e^{-t}Y_{l}^{m}(\lambda,\theta)$. Using the DFS method, we seek a solution $\tilde{u}$ of the ``doubled-up'' version of \reff{Heat}, \begin{equation} \tilde{u}_t = \frac{1}{l(l+1)}\Delta \tilde{u}, \quad \tilde{u}(t=0,\lambda,\theta) = Y_{l}^{m}(\lambda,\theta), \quad (\lambda,\theta)\in[-\pi,\pi]^2, \label{Heat2} \end{equation} \noindent of the form \begin{equation} \tilde{u}(t, \lambda, \theta) \approx \sum_{j=-m/2}^{m/2}{\hspace{-0.3cm}}'{\;\,}\sum_{k=-n/2}^{n/2}{\hspace{-0.3cm}}'{\;\,}\hat{u}_{jk}(t)e^{ij\theta}e^{ik\lambda}. \label{trigsol} \end{equation} \noindent We discretize the Laplacian operator with~\reff{Laplacian} and use the fourth-order backward differentiation formula to march in time. 
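The following self-contained Python sketch (an illustration on the scalar model problem $u_t=-u$, not the actual spherical solver) confirms the fourth-order convergence of BDF4 with exact starting values.

```python
import numpy as np

def bdf4_error(h):
    # BDF4 for u' = -u, u(0) = 1, marched to t = 1 with exact starting values:
    # (25/12)u_k - 4u_{k-1} + 3u_{k-2} - (4/3)u_{k-3} + (1/4)u_{k-4} = h f(u_k)
    n = round(1.0 / h)
    u = [np.exp(-k * h) for k in range(4)]
    for k in range(4, n + 1):
        u.append((4 * u[-1] - 3 * u[-2] + (4 / 3) * u[-3] - (1 / 4) * u[-4])
                 / (25 / 12 + h))
    return abs(u[n] - np.exp(-1.0))

e1, e2 = bdf4_error(0.05), bdf4_error(0.025)
ratio = e1 / e2   # fourth order: halving h divides the error by about 16
assert 10 < ratio < 25
```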
We take $l=64$, $\tilde{u}(t=0,\lambda,\theta)=Y_{64}^{64}(\lambda,\theta)$, $m=n=128$ grid points and solve~\reff{Heat2} up to $t=1$ for various time-steps $h$. We plot the relative $L^2$-error $E$ at $t=1$, \begin{equation} E = \frac{||u(t=1,\lambda,\theta) - u^{ex}(t=1,\lambda,\theta)||_2}{||u^{ex}(t=1,\lambda,\theta)||_2}, \label{error_Heat} \end{equation} \noindent and the error in the pole conditions (as defined in~\reff{error_polecondition}) against $h$; the error $E$ scales as $\mathcal{O}(h^4)$, see Figure~\ref{fig:heat}. (With $m=n=128$ grid points, the error due to the spatial discretization is small compared to the error due to the time discretization, so we are really measuring the latter.) \subsection{Bound for the eigenvalues of the Laplacian matrix} As mentioned in Section~2.4, the eigenvalues of the Laplacian operator are $-l(l+1)$, for integers $l\geq 0$. What about the eigenvalues of the Laplacian matrix~\reff{Laplacian}? We show in Appendix~A that they are all real and nonpositive, and have observed numerically that some of them are spectrally accurate approximations to the eigenvalues $-l(l+1)$, but some others, the so-called \textit{outliers}~\cite[Ch.~10]{trefethen2000}, are of order $\mathcal{O}(n^2m^2)$ as $n=m\rightarrow\infty$. We shall prove the latter fact below. These large eigenvalues are meaningless physically but of crucial importance in practice since for time-stepping algorithms applied to \reff{ODE} to be stable, we need the eigenvalues of \reff{Laplacian}, scaled by the time-step, to lie in their stability region. We examine now the largest (in magnitude) eigenvalues of~\reff{Laplacian}. 
It suffices to examine each block \begin{equation} \Lbf_i = \big(\Dbf_m^{(2)} + \Tbf_{\sin^2}^{-1}\Tbf_{\cos\sin}\Dbf_m + \Dbf^{(2)}_n(i,i)\Tbf_{\sin^2}^{-1}\big), \end{equation} \noindent whose largest eigenvalue can be bounded as \begin{equation} |\lambda_{\max}(\Lbf_i)|\leq \|\Lbf_i\|\leq \|\Dbf_m^{(2)}\| + \|\Tbf_{\sin^2}^{-1}\|\|\Tbf_{\cos\sin}\Dbf_m + \Dbf^{(2)}_n(i,i)\Ibf_m\|. \end{equation} \noindent We trivially have $\|\Dbf_m^{(2)}\|=\mathcal{O}(m^2)$ and \begin{equation} \|\Tbf_{\cos\sin}\Dbf_m + \Dbf^{(2)}_n(i,i)\Ibf_m\| \leq \|\Tbf_{\cos\sin}\Dbf_m\| + |\Dbf^{(2)}_n(i,i)| =\mathcal{O}(m+i^2). \end{equation} \noindent It remains to bound $\|\Tbf_{\sin^2}^{-1}\|$; we claim that this is $\mathcal{O}(m^2)$. To verify this, we come back to the definition of $\Tbf_{\sin^2} = \Qbf\Mbf_{\sin^2}(:,3:m+3)\Pbf$ from \reff{Tsin2_a}, and note that \begin{equation} \Qbf^T= \begin{pmatrix} \mathbf{0}_{2\times m}\\ \Pbf \\ \mathbf{0}_{2\times m} \end{pmatrix} \mrm{diag}(2,1,\ldots,1). \end{equation} \noindent We can then write \begin{equation} \Tbf_{\sin^2}=\Qbf\Mbf_{\sin^2}\Qbf^T\mbox{diag}(\frac{1}{2},1,\ldots,1)=\mbox{diag}(\sqrt{2},1,\ldots,1)\tilde\Qbf\Mbf_{\sin^2}\tilde\Qbf^T\mbox{diag}(\frac{1}{\sqrt{2}},1,\ldots,1), \end{equation} \noindent where $\tilde\Qbf := \mrm{diag}(\frac{1}{\sqrt{2}},1,\ldots,1)\Qbf$ has orthonormal rows. Hence, using the fact that $\sigma_{\min}(\mathbf{ABC})\geq \sigma_{\min}(\mathbf{A})\sigma_{\min}(\mathbf{B})\sigma_{\min}(\mathbf{C})$ (which holds when $\mathbf{A}$ and $\mathbf{C}$ are square), we obtain \begin{equation} \sigma_{\min}(\Tbf_{\sin^2}) \geq \frac{1}{\sqrt{2}}\sigma_{\min}(\tilde\Qbf\Mbf_{\sin^2}\tilde\Qbf^T). 
\end{equation} \noindent Now, since $\Mbf_{\sin^2}$ is symmetric positive definite, we have $\sigma_{\min}(\tilde\Qbf\Mbf_{\sin^2}\tilde\Qbf^T) =\lambda_{\min}(\tilde\Qbf\Mbf_{\sin^2}\tilde\Qbf^T) \geq \lambda_{\min}(\Mbf_{\sin^2}(3:m+3,3:m+3))$, where we used the nonzero structure of $\tilde\Qbf$ for the last inequality. Moreover, the eigenvalues of $\Mbf_{\sin^2}(3:m+3,3:m+3)$ are explicitly known to be \begin{equation} \lambda_j= \Big\{\frac{1}{2}\big(\cos(\pi j/(m/2+1))+1\big),\,1\leq j\leq \frac{m}{2}\Big\} \cup\Big\{\frac{1}{2}\big(\cos(\pi j/(m/2+2))+1\big),\,1\leq j\leq \frac{m}{2}+1\Big\}. \label{eq:cluster} \end{equation} \noindent Thus $1/\sigma_{\min}(\tilde\Qbf\Mbf_{\sin^2}\tilde\Qbf^T) \leq 1/\min_j\lambda_j=\mathcal{O}(m^2)$, and hence $\|\Tbf_{\sin^2}^{-1}\|=1/\sigma_{\min}(\Tbf_{\sin^2})=\mathcal{O}(m^2)$, as required. We conclude that $\|\Lbf_i\|=\mathcal{O}(m^2(i^2+m))$; since this holds for every $i$, it follows that \begin{equation} \label{eq:Leig} |\lambda_{\max}(\Lbf)|=\mathcal{O}(n^2m^2+m^3). \end{equation} \noindent This bound scales as $\mathcal{O}(n^2m^2)$ when $m=n$ (our usual choice). We illustrate \reff{eq:Leig} in Figure~\ref{fig:maxeig}, which suggests that it is sharp. The fact that the largest eigenvalue has order $n^2m^2$ makes time-dependent PDEs on the sphere particularly stiff. It is a consequence of both the second order of the Laplacian operator and the clustering of the points near the poles in~\reff{eq:cluster}. It would imply severe restrictions on the time-steps for generic explicit algorithms---this is why we use exponential integrators and IMEX schemes, which we describe next. (In the literature, the severe time-stepping restrictions due to uniform longitude-latitude grids are sometimes called the \textit{pole problem}~\cite{boyd1978, orszag1974}).
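The explicit formula \reff{eq:cluster} can be confirmed numerically; a Python/NumPy sketch (illustration only):

```python
import numpy as np

m = 16
# the (m+1)x(m+1) principal submatrix M_sin2(3:m+3, 3:m+3):
# 1/2 on the diagonal, -1/4 on the +-2 off-diagonals
A = 0.5 * np.eye(m + 1) - 0.25 * (np.eye(m + 1, k=2) + np.eye(m + 1, k=-2))
computed = np.sort(np.linalg.eigvalsh(A))
# the two families in (eq:cluster), coming from the even/odd decoupling
# of A into two tridiagonal Toeplitz matrices
lam = [0.5 * (np.cos(np.pi * j / (m / 2 + 1)) + 1) for j in range(1, m // 2 + 1)]
lam += [0.5 * (np.cos(np.pi * j / (m / 2 + 2)) + 1) for j in range(1, m // 2 + 2)]
exact = np.sort(np.array(lam))
assert np.allclose(computed, exact)
assert computed.min() > 0      # the submatrix is symmetric positive definite
```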
Note that the time-stepping restrictions resulting from the clustering near the poles can be addressed by truncating high-frequency terms in the space discretization (see, e.g.,~\cite[Sec.~2.1.6]{fornberg1998} and \cite{fornberg1997}). However, this approach does not overcome stiffness resulting from the second-order operator. \begin{figure} \centering \includegraphics[scale=.4]{maxeig.eps} \caption{\textit{Variation of $|\lambda_{\max}(\Lbf)|$ with $m=n$. The bound~\eqref{eq:Leig} is accurately reflected.}} \label{fig:maxeig} \end{figure} \section{Fourth-order time-stepping on the sphere} Using the DFS method, we seek a solution $\tilde{u}$ of the ``doubled-up'' version of \reff{PDE2}, \begin{equation} \tilde{u}_t = \alpha\Delta\tilde{u} + \mathcal{N}(\tilde{u}), \quad \tilde{u}(t=0,\lambda,\theta)=\tilde{u}_0(\lambda,\theta), \quad (\lambda,\theta)\in[-\pi,\pi]^2, \label{PDE3} \end{equation} \noindent of the form~\reff{trigsol}. Discretizing the Laplacian operator with the Laplacian matrix~\reff{Laplacian}, we obtain the system of $nm$ ODEs \reff{ODE} where $\Lbf$ is \reff{Laplacian} multiplied by $\alpha$. Time is discretized with time-step $h$ and the problem is to find the Fourier coefficients $\hat{u}^{n+1}$ of $\tilde{u}$ at $t_{n+1}=(n+1)h$ from the coefficients $\hat{u}^{n}$ at $t_{n}=nh$ and also coefficients at previous time-steps (for multistep schemes). Note that, in practice, nonlinear evaluations $\Nbf(\hat{u}^n)$ are carried out in value space. We present in this section four time-stepping algorithms for solving \reff{ODE}, and show how it is possible to achieve $\mathcal{O}(nm\log nm)$ complexity per time-step in most cases. Two of them are exponential integrators based on the ETDRK4 scheme with different strategies for computing the matrix exponential and related functions, while the two others are IMEX schemes. 
As before, we have observed numerically that these schemes combined with our spatial discretization preserve both the ``doubled-up'' symmetry \reff{DFS} and the pole conditions \reff{polecondition1}, i.e., if one starts with a smooth ``doubled-up'' initial condition, then the solution at time $t$ is also a smooth ``doubled-up'' function. \subsection{Exponential integrators} Dozens of exponential integration formulas of order four and higher have been proposed over the last 15 years~\cite{cox2002, hochbruck2005, hochbruck2010, krogstad2005, luan2014a, minchev2004, ostermann2006}. The first author recently demonstrated~\cite{montanelli2016c} that it is hard to do much better than the ETDRK4 scheme of Cox and Matthews~\cite{cox2002}. The formula for this scheme is: \begin{equation} \begin{array}{l} \hat{a}^n = \ph_0(h\Lbf/2)\hat{u}^n + (h/2)\ph_1(h\Lbf/2)\Nbf(\hat{u}^n), \\\\ \hat{b}^n = \ph_0(h\Lbf/2)\hat{u}^n + (h/2)\ph_1(h\Lbf/2)\Nbf(\hat{a}^n), \\\\ \hat{c}^n = \ph_0(h\Lbf/2)\hat{a}^n + (h/2)\ph_1(h\Lbf/2)\big[2\Nbf(\hat{b}^n) - \Nbf(\hat{u}^n)\big], \\\\ \hat{u}^{n+1} = \ph_0(h\Lbf)\hat{u}^n + hf_1(h\Lbf)\Nbf(\hat{u}^n) + hf_2(h\Lbf)\big[\Nbf(\hat{a}^n) + \Nbf(\hat{b}^n)\big] + hf_3(h\Lbf)\Nbf(\hat{c}^n), \end{array} \label{ETDRK4} \end{equation} \noindent where the $\ph$-functions are defined by \begin{equation} \begin{array}{l} \ph_0(h\Lbf) = e^{h\Lbf}, \\\\ \ph_1(h\Lbf) = h^{-1}\Lbf^{-1}(e^{h\Lbf} - \Ibf), \\\\ \ph_2(h\Lbf) = h^{-2}\Lbf^{-2}(e^{h\Lbf} - h\Lbf - \Ibf), \\\\ \ph_3(h\Lbf) = h^{-3}\Lbf^{-3}(e^{h\Lbf} - h^2\Lbf^2/2 - h\Lbf - \Ibf), \end{array} \end{equation} \noindent and the coefficients $f_1$, $f_2$ and $f_3$ are linear combinations of the $\ph$-functions, \begin{equation} \begin{array}{l} f_1(h\Lbf) = \ph_1(h\Lbf) - 3\ph_2(h\Lbf) + 4\ph_3(h\Lbf) = h^{-3}\Lbf^{-3}[-4\Ibf - h\Lbf + e^{h\Lbf}(4 - 3h\Lbf + (h\Lbf)^2)], \\\\ f_2(h\Lbf) = 2\ph_2(h\Lbf) - 4\ph_3(h\Lbf) = 2h^{-3}\Lbf^{-3}[2\Ibf + h\Lbf + e^{h\Lbf}(-2\Ibf + h\Lbf)], \\\\ f_3(h\Lbf) = 
-\ph_2(h\Lbf) + 4\ph_3(h\Lbf) = h^{-3}\Lbf^{-3}[-4\Ibf - 3h\Lbf - (h\Lbf)^2 + e^{h\Lbf}(4\Ibf - h\Lbf)]. \end{array} \end{equation} When all the eigenvalues of $\Lbf$ are real (i.e., $\alpha\in\R$, $\alpha>0$ corresponding to a diffusive PDE), matrix-vector products $\ph_l(h\Lbf)v$ can be evaluated using rational approximations computed by the Carath\'{e}odory--Fej\'{e}r (CF) method, as described in~\cite{schmelzer2007}. If $\Lbf$ has some imaginary eigenvalues---the extreme case being when all the eigenvalues are imaginary (i.e., $\alpha\in i\R$, dispersive PDE)---methods based on rational approximations necessarily become expensive, and the $\ph$-functions have to be precomputed before the time-stepping starts, e.g., using the eigenvalue decomposition of $\Lbf$ and contour integrals.\footnote{A comparison of methods for computing the $\ph$-functions can be found in~\cite{ashi2009}.} We denote by ETDRK4-CF the method with CF approximations and by ETDRK4-EIG the method with eigenvalue decomposition. We shall give details about the precomputation of the coefficients for both ETDRK4-CF and ETDRK4-EIG below. Note that Du and Zhu computed in \cite{du2005} the stability region of \reff{ETDRK4} and showed that it includes parts of both the negative real axis and the imaginary axis. \subsubsection{ETDRK4-CF} When all the eigenvalues of $\Lbf$ are real, one can compute matrix-vector products $\ph_l(h\Lbf)v$ in $\mathcal{O}(mn)$ operations using near-best rational approximations to the $\varphi$-functions on the negative real axis~\cite{schmelzer2007}. For an efficient implementation, we use the algorithm that uses common poles for approximating the exponential and other $\varphi$-functions~\cite{schmelzer2007}, as we summarize below. 
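Before doing so, we note that the closed forms for $f_1$, $f_2$ and $f_3$ above are easy to verify numerically; a scalar Python sketch (illustration only) using the recurrence $\ph_{l+1}(z)=(\ph_l(z)-\ph_l(0))/z$ with $\ph_l(0)=1/l!$:

```python
import math
import numpy as np

def phi(l, z):
    # phi_0(z) = e^z, phi_{l+1}(z) = (phi_l(z) - phi_l(0))/z, phi_l(0) = 1/l!
    return np.exp(z) if l == 0 else (phi(l - 1, z) - 1 / math.factorial(l - 1)) / z

z = 0.7
f1 = phi(1, z) - 3 * phi(2, z) + 4 * phi(3, z)
f2 = 2 * phi(2, z) - 4 * phi(3, z)
f3 = -phi(2, z) + 4 * phi(3, z)
assert np.isclose(f1, (-4 - z + np.exp(z) * (4 - 3 * z + z ** 2)) / z ** 3)
assert np.isclose(f2, 2 * (2 + z + np.exp(z) * (-2 + z)) / z ** 3)
assert np.isclose(f3, (-4 - 3 * z - z ** 2 + np.exp(z) * (4 - z)) / z ** 3)
```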
Using the CF method for the negative real line \cite{trefethen1983}, we obtain a rational approximant \begin{equation} e^z\approx r_\infty + \sum_{j=1}^p \frac{c_j}{z-z_j}, \label{eq:cfapprox} \end{equation} \noindent which has error decaying like $\approx 9.28903^{-p}$ with a type $(p,p)$ function. (We use the MATLAB \texttt{cf} code of Trefethen, Weideman and Schmelzer~\cite{trefethen2006} in our experiments.) To obtain an approximant to $\varphi_l(z)$ using the approximant~\eqref{eq:cfapprox} to $e^z=\varphi_0(z)$, we use the fact~\cite[Prop.~4.1]{schmelzer2007} that defining \begin{equation} B_z= \begin{pmatrix} z&1\\0& 0 \end{pmatrix} \end{equation} \noindent we have \setlength{\extrarowheight}{5pt} \begin{equation} \varphi_l(B_z)= \begin{pmatrix} \varphi_l(z)&\varphi_{l+1}(z)\\0& \varphi_l(0) \end{pmatrix}, \quad l\geq 0. \end{equation} \setlength{\extrarowheight}{0pt} \noindent Together with the identity \setlength{\extrarowheight}{5pt} \begin{equation} (B_z-z_jI)^{-1}= \begin{pmatrix} (z-z_j)^{-1}&(z-z_j)^{-1}z_j^{-1} \\0& -z_j^{-1} \end{pmatrix}, \end{equation} \setlength{\extrarowheight}{0pt} \noindent we obtain the approximation \begin{equation} \varphi_{l}(z)\approx \sum_{j=1}^p \frac{c_jz_j^{-l}}{z-z_j}, \quad l\geq 0. \label{eq:phiapp} \end{equation} \noindent As suggested in \cite[Prop.~4.1]{schmelzer2007}, we further incorporate a shift $1$ in~\reff{eq:cfapprox}, which with $p=12$ (the default choice; the accuracy is $\approx 10^{-8}$ with $p=10$) gives accuracy $\approx 10^{-10}$ on the negative real axis for all $\ph_l$, $0\leq l\leq 3$. 
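These two-by-two identities are easy to confirm numerically. The following Python sketch (illustrative only; it uses SciPy's \texttt{expm} and a made-up pole $z_j$) checks that $e^{B_z}=\varphi_0(B_z)$ carries $\varphi_1(z)$ in its $(1,2)$ entry and that $(B_z-z_jI)^{-1}$ has the stated form:

```python
import numpy as np
from scipy.linalg import expm

z, zj = -0.3, 2.0 + 1.5j   # sample argument and a hypothetical pole z_j

Bz = np.array([[z, 1.0], [0.0, 0.0]])
E = expm(Bz)                    # phi_0(B_z); (1,2) entry should be phi_1(z)
phi1 = np.expm1(z) / z          # phi_1(z) = (e^z - 1)/z, cancellation-free

# (B_z - z_j I)^{-1} should have (1,2) entry (z - z_j)^{-1} z_j^{-1}
Binv = np.linalg.inv(np.array([[z - zj, 1.0], [0.0, -zj]], dtype=complex))
```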
Given~\eqref{eq:cfapprox} and~\eqref{eq:phiapp}, evaluating $\ph_l(h\Lbf)b$ for a vector $b$ as in~\reff{ETDRK4} can be approximated as \begin{equation} \ph_l(h\Lbf)b\approx \sum_{j=1}^p c_jz_j^{-l} (h\Lbf-z_j\Ibf)^{-1}b, \quad 0\leq l\leq 3, \label{action} \end{equation} \noindent which reduces the evaluation to solving $p=12$ shifted linear systems of the form~\eqref{eq:shiftlinsysL}; we do this with linear cost as described in Section~2.3. In practice, we compute and store the LU factorizations of the matrices that appear in~\reff{action} before the time-stepping starts. Note that computing different products $\ph_l(h\Lbf)v$ at once with the same $v$ requires no further linear systems. Let us emphasize three aspects of this approach. First, it is not necessary to explicitly compute and store the $\ph$-functions; instead, their action on vectors is directly computed via \reff{action}. Second, the most expensive operation in~\reff{ETDRK4}--\reff{action} is the 2D FFT, which costs $\mathcal{O}(nm\log nm)$ operations; see Table~\ref{tab:costs}. Third, this method is not applicable when $\Lbf$ has some imaginary eigenvalues; this is because low-degree rational functions are unable to approximate the exponential on the imaginary axis, where it is oscillatory. In this case one has to compute and store the $\ph$-functions, and the complexity increases to $\mathcal{O}(nm^2)$ per time-step, as we describe next. \begin{figure} \centering \includegraphics[scale=.4]{condV.eps} \caption{\textit{Variation of $\mrm{cond}(\mathbf{V})$ with $m=n$. It is of order $m$ and reasonably small; therefore, an approach based on eigenvalues and eigenvectors for the computation of the $\ph$-functions is valid.}} \label{fig:conds} \end{figure} \subsubsection{ETDRK4-EIG} To compute the $\ph$-functions, one can use a method based on eigenvalues and eigenvectors.
The idea is to diagonalize $\Lbf=\mathbf{V\Lambda V^{-1}}$ and then apply the $\varphi$-functions to the eigenvalues, \begin{equation} \ph_l(h\Lbf)=\mathbf{V}\ph_l(h\mathbf{\Lambda})\mathbf{V^{-1}}, \quad 0\leq l \leq 3, \label{EIG} \end{equation} \noindent with \begin{equation} \ph_l(h\mathbf{\Lambda}) = \begin{pmatrix} \ph_l(h\lambda_1) \\ & \ph_l(h\lambda_2) \\ & & \ddots \\ & & & \ph_l(h\lambda_{nm})\\ \end{pmatrix}. \end{equation} \noindent For a general $nm \times nm$ matrix $\Lbf$, this would require $\mathcal{O}(n^3m^3)$ operations, but for~\reff{Laplacian}, this can be done blockwise in $\mathcal{O}(nm^3)$ operations. Note that this corresponds to Method 14 of \cite{moler2003}. Theoretically, this approach only works when $\mathcal{L}$ is nondefective, that is, when it has a complete set of linearly independent eigenfunctions---this is a well-known result for the Laplacian operator on the sphere~\cite{atkinson2012}. In practice, difficulties occur when the discretization $\Lbf$ is ``nearly'' defective, i.e., when $\mrm{cond}(\mathbf{V})=||\mathbf{V}||\,||\mathbf{V}^{-1}||$ is large. Fortunately, we have observed numerically that the condition number is small and of order $m$ as $m=n$ increases; see Figure~\ref{fig:conds}. Once we have computed the eigenvalue decomposition, we follow the idea of Kassam and Trefethen~\cite{kassam2005} and evaluate the $\varphi$-functions at each scaled eigenvalue $h\lambda$ using Cauchy's integral formula, \begin{equation} \ph_l(h\lambda) = \frac{1}{2\pi i}\oint_\Gamma \frac{\ph_l(z)}{z - h\lambda} dz \approx \frac{1}{M} \sum_{k=1}^M \ph_l\big(h\lambda + e^{2\pi i (k-0.5)/M}\big), \quad 0\leq l\leq 3. \label{contour} \end{equation} \noindent The constant $M$ is the number of points in the discretized contour integration; we take $M=32$ in our experiments.
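The quadrature in~\reff{contour} is straightforward to reproduce. The Python sketch below (illustrative only) evaluates $\ph_1$ at a scaled eigenvalue $h\lambda$ close to zero, where the direct formula $(e^z-1)/z$ would suffer from cancellation; the quadrature points lie on the unit circle around $h\lambda$, away from the origin, so no cancellation occurs:

```python
import numpy as np

def phi1_contour(z0, M=32):
    """phi_1(z0) via the M-point trapezoid rule on the circle |z - z0| = 1."""
    # Spectrally accurate since phi_1 is entire; the points stay away from 0,
    # so (e^w - 1)/w is evaluated without cancellation.
    w = z0 + np.exp(2j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
    return np.mean((np.exp(w) - 1.0) / w).real

z0 = -1e-7                   # a scaled eigenvalue h*lambda close to zero
val = phi1_contour(z0)
ref = np.expm1(z0) / z0      # cancellation-free reference for (e^z - 1)/z
```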
The precomputation step costs $\mathcal{O}(nm^3)$ operations, while the cost per time-step is $\mathcal{O}(nm^2)$ since one has to compute block diagonal matrix-vector products $\ph_l(h\Lbf)v$; see Table~\ref{tab:costs}. \subsection{Implicit-explicit schemes} We present in this section the two IMEX schemes we consider in this paper. The first one, IMEX-BDF4~\cite{ascher1995}, is a multistep scheme which is stable only for diffusive PDEs. The second one, LIRK4~\cite{calvo2001}, is a one-step scheme, stable for both diffusive and dispersive PDEs. \subsubsection{IMEX-BDF4} Following Kassam and Trefethen~\cite{kassam2005}, we consider a scheme known variously as SBDF4 (in \cite{ascher1995}), AB4BD4 (in \cite{cox2002}), and IMEX-BDF4 (in \cite{hundsdorfer2007}), which combines a fourth-order Adams--Bashforth formula and a fourth-order backward differentiation scheme in a nontrivial way. The method is given by: \begin{equation} \begin{array}{l} (25\Ibf_{nm} - 12h\Lbf)\hat{u}^{n+1} = 48\hat{u}^{n} - 36\hat{u}^{n-1} + 16\hat{u}^{n-2} - 3\hat{u}^{n-3} + 48h\Nbf(\hat{u}^{n}) - 72h\Nbf(\hat{u}^{n-1})\\\\ \hspace{4cm} + \; 48h\Nbf(\hat{u}^{n-2}) - 12h\Nbf(\hat{u}^{n-3}). \end{array} \label{IMEX-BDF4} \end{equation} \noindent At each time-step, one has to solve a linear system to get the Fourier coefficients $\hat{u}^{n+1}$, which we do with linear cost by multiplying each block of~\reff{IMEX-BDF4} by $\Tbf_{\sin^2}$, as explained in Section~2.3. Therefore, the dominant cost in~\reff{IMEX-BDF4} is the $\mathcal{O}(nm\log nm)$ 2D FFT for the nonlinear evaluations; see Table~\ref{tab:costs} at the end of this section. Let us add three comments about \reff{IMEX-BDF4}. First, the LU factorization of the left-hand side of \reff{IMEX-BDF4} is computed and stored before the time-stepping starts. Second, this is a multistep formula so it has to be started with a one-step scheme---in the numerical comparisons of Section~4, we initialize it with three steps of ETDRK4-CF.
Third, it is unstable for dispersive PDEs since the stability region of the fourth-order backward differentiation formula does not contain the portion of the imaginary axis near the origin. In these cases, one can use IMEX Runge--Kutta schemes~\cite{calvo2001, kennedy2003, pareschi2005} or extrapolation-based IMEX schemes~\cite{cardone2014, constantinescu2010}. We have decided to focus on the former. \begin{table} \caption{\textit{Computational costs (per time-step) of the time-stepping algorithms with $p=12$ in the CF approximation. The IMEX-BDF$\,4$ scheme is particularly cheap, while for ETDRK$\,4$-CF one needs to solve an extremely large number of linear systems. ETDRK$\,4$-EIG is the only scheme that has an $\mathcal{O}(nm^2)$ cost per time-step and an $\mathcal{O}(nm^3)$ precomputation step. Precomputations for the other schemes (LU factorizations) cost $\mathcal{O}(nm)$ operations.}} \vspace{.5cm} \centering \ra{1.3} \begin{tabular}{lccccc} \hline & \multicolumn{2}{c}{\textbf{ETDRK4}} & \phantom{a} & \multicolumn{2}{c}{\textbf{IMEX}}\\ \cmidrule{2-3} \cmidrule{5-6} & \textbf{CF} & \textbf{EIG} && \textbf{BDF4} & \textbf{LIRK4}\\ \midrule \textbf{\# $\mathcal{O}(nm\log nm)$ FFTs} & 8 & 8 && 2 & 12\\ \textbf{\# $\mathcal{O}(nm)$ linear solves} & $9p=108$ & 0 && 1 & 5\\ \textbf{\# $\mathcal{O}(nm^2)$ matrix-vector products} & 0 & 9 && 0 & 0\\ \textbf{diffusive PDEs} & \checkmark & \checkmark && \checkmark & \checkmark\\ \textbf{dispersive PDEs} & $\times$ & \checkmark && $\times$ & \checkmark\\ \bottomrule \end{tabular} \label{tab:costs} \end{table} \subsubsection{LIRK4} IMEX Runge--Kutta schemes combine explicit Runge--Kutta formulas to advance the nonlinear part and implicit Runge--Kutta formulas to advance the linear part~\cite{calvo2001, kennedy2003, pareschi2005}. In this paper, we use the fourth-order LIRK4 scheme of Calvo, de Frutos and Novo~\cite{calvo2001}.
It combines an implicit $L$-stable five-stage fourth-order Runge--Kutta method, whose Butcher tableau is given in \cite[Table~6.5]{hairer1991}, with a six-stage fourth-order explicit Runge--Kutta method, whose coefficients were derived in~\cite{calvo2001}. The formula for this scheme is: \setlength{\extrarowheight}{5pt} \begin{equation} \begin{array}{l} (\Ibf_{nm} - \frac{1}{4}h \Lbf)\hat{a}^n = \hat{u}^n + \frac{1}{4}h\Nbf(\hat{u}^n), \\\\ (\Ibf_{nm} - \frac{1}{4}h \Lbf)\hat{b}^n = \hat{u}^n + \frac{1}{2}h\Lbf\hat{a}^n - \frac{1}{4}h\Nbf(\hat{u}^n) + h\Nbf(\hat{a}^n), \\\\ (\Ibf_{nm} - \frac{1}{4}h \Lbf)\hat{c}^n = \hat{u}^n + \frac{17}{50}h\Lbf\hat{a}^n - \frac{1}{25}h\Lbf\hat{b}^n - \frac{13}{100}h\Nbf(\hat{u}^n) + \frac{43}{75}h\Nbf(\hat{a}^n) + \frac{8}{75}h\Nbf(\hat{b}^n), \\\\ (\Ibf_{nm} - \frac{1}{4}h \Lbf)\hat{d}^n = \hat{u}^n + \frac{371}{1360}h\Lbf\hat{a}^n - \frac{137}{2720}h\Lbf\hat{b}^n + \frac{15}{544}h\Lbf\hat{c}^n - \frac{6}{85}h\Nbf(\hat{u}^n) + \frac{42}{85}h\Nbf(\hat{a}^n), \\\\ \hspace{3.05cm} + \, \frac{179}{1360}h\Nbf(\hat{b}^n) - \frac{15}{272}h\Nbf(\hat{c}^n), \\\\ (\Ibf_{nm} - \frac{1}{4}h \Lbf)\hat{e}^n = \hat{u}^n + \frac{25}{24}h\Lbf\hat{a}^n - \frac{49}{48}h\Lbf\hat{b}^n + \frac{125}{16}h\Lbf\hat{c}^n - \frac{85}{12}h\Lbf\hat{d}^n + \frac{79}{24}h\Nbf(\hat{a}^n) - \frac{5}{8}h\Nbf(\hat{b}^n) \\ \hspace{3.05cm} + \, \frac{25}{2}h\Nbf(\hat{c}^n) - \frac{85}{6}h\Nbf(\hat{d}^n), \\\\ \hat{u}^{n+1} = \hat{u}^n + \frac{25}{24}h\Lbf\hat{a}^n - \frac{49}{48}h\Lbf\hat{b}^n + \frac{125}{16}h\Lbf\hat{c}^n - \frac{85}{12}h\Lbf\hat{d}^n + \frac{1}{4}h\Lbf\hat{e}^n + \frac{25}{24}h\Nbf(\hat{a}^n) - \frac{49}{48}h\Nbf(\hat{b}^n) \\\\ \hspace{1.59cm} + \, \frac{125}{16}h\Nbf(\hat{c}^n) - \frac{85}{12}h\Nbf(\hat{d}^n) + \frac{1}{4}h\Nbf(\hat{e}^n). \end{array} \label{LIRK4} \end{equation} \setlength{\extrarowheight}{0pt} The LU factorization of the left-hand sides of~\reff{LIRK4} is computed and stored before the time-stepping starts. 
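In the scalar case, each implicit stage of~\reff{LIRK4} is a single division by $1-h\lambda/4$, which makes the structure of the scheme transparent. The Python sketch below (illustrative only; the test problem $u'=-u+u^2$ is our own choice, not one of the PDEs considered in this paper) checks fourth-order convergence against the exact solution $u(t)=1/(1+e^t)$ of $u'=-u+u^2$, $u(0)=1/2$:

```python
import math

def lirk4_step(u, h, L, N):
    """One LIRK4 step for the scalar problem u' = L*u + N(u)."""
    s = 1.0 / (1.0 - 0.25 * h * L)   # each implicit solve is a division
    Nu = N(u)
    a = s * (u + 0.25 * h * Nu)
    Na = N(a)
    b = s * (u + 0.5 * h * L * a - 0.25 * h * Nu + h * Na)
    Nb = N(b)
    c = s * (u + h * L * (17/50 * a - 1/25 * b)
             + h * (-13/100 * Nu + 43/75 * Na + 8/75 * Nb))
    Nc = N(c)
    d = s * (u + h * L * (371/1360 * a - 137/2720 * b + 15/544 * c)
             + h * (-6/85 * Nu + 42/85 * Na + 179/1360 * Nb - 15/272 * Nc))
    Nd = N(d)
    e = s * (u + h * L * (25/24 * a - 49/48 * b + 125/16 * c - 85/12 * d)
             + h * (79/24 * Na - 5/8 * Nb + 25/2 * Nc - 85/6 * Nd))
    Ne = N(e)
    return (u + h * L * (25/24 * a - 49/48 * b + 125/16 * c - 85/12 * d + 0.25 * e)
            + h * (25/24 * Na - 49/48 * Nb + 125/16 * Nc - 85/12 * Nd + 0.25 * Ne))

def solve(h, T=1.0):
    L, N = -1.0, lambda u: u * u     # u' = -u + u^2, split as L*u + N(u)
    u = 0.5
    for _ in range(int(round(T / h))):
        u = lirk4_step(u, h, L, N)
    return u

exact = 1.0 / (1.0 + math.e)         # u(1) = 1/(1 + e)
err1 = abs(solve(0.05) - exact)
err2 = abs(solve(0.025) - exact)
```

Halving the time-step should reduce the error by a factor of roughly $2^4=16$.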
Note that the most expensive operations in the computation of the internal stages $\hat{a}^n$, $\hat{b}^n$, $\hat{c}^n$, $\hat{d}^n$ and $\hat{e}^n$ are the nonlinear evaluations ($\mathcal{O}(nm \log nm)$ work). The other operations, i.e., linear solves and matrix-vector products in the right-hand side (each block being multiplied by $\Tbf_{\sin^2}$), can be carried out in linear time. The computation of $\hat{u}^{n+1}$ from $\hat{u}^n$ requires matrix-vector products of the form $\Lbf v$, which reduce to $n$ dense matrix-vector products $\Lbf_iv$; each can be done in $\mathcal{O}(m)$ operations since $\Lbf_i v=\Tbf_{\sin^2}^{-1}(\Tbf_{\sin^2}\Lbf_i)v$. (The LU factorization of $\Tbf_{\sin^2}$ is also computed and stored before the time-stepping starts.) \section{Numerical comparisons} \subsection{Methodology} To compare time-stepping schemes, we follow the methodology of~\cite{kassam2005}. We solve a given PDE up to $t=T$ for various time-steps $h$ and a fixed number of grid points. We estimate the ``exact'' solution $u^{ex}(t=T,\lambda,\theta)$ by using a very small time-step (half the smallest time-step $h$) and ETDRK4-EIG (the most accurate time-stepping scheme). We then measure the relative $L^2$-error $E$ at $t=T$ between the computed solution $u(t=T,\lambda,\theta)$ and $u^{ex}(t=T,\lambda,\theta)$, \begin{equation} E = \frac{\Vert u(t=T,\lambda,\theta) - u^{ex}(t=T,\lambda,\theta)\Vert_2}{\Vert u^{ex}(t=T,\lambda,\theta)\Vert_2}. \label{error_PDE} \end{equation} \noindent For both $u$ and $u^{ex}$ we use $m=n=256$ grid points. (With these grid sizes, the error due to the spatial discretization is small compared to the error due to the time discretization.) We use $p=12$ in the CF approximation for ETDRK4-CF and $M=32$ points to compute the contour integrals~\reff{contour} for ETDRK4-EIG. 
We plot \reff{error_PDE} against relative time-steps $h/T$ and computer times on a pair of graphs.\footnote{The precomputation of the coefficients of the exponential integrators, the LU factorizations for the IMEX schemes and the starting phase of IMEX-BDF4 are not included in the computing time. Timings were done on a 2.8\,GHz Intel i7 machine with 16\,GB of RAM using MATLAB R2015b and Chebfun v5.6.0.} The former gives a measure of the accuracy of the time-stepping scheme for various time-steps or, equivalently, for various numbers of integration steps. (If the relative time-step is $10^{-3}$, it means that the scheme performed $10^3$ steps to reach $t=T$.) However, it is possible that each step is more costly, so it is the latter that ultimately matters. \subsection{Results for the diffusive case} The \textit{Allen--Cahn equation}, derived by Allen and Cahn in the 1970s, is a reaction-diffusion equation which describes the process of phase separation in iron alloys~\cite{allen1979}, studied in the ball and on the sphere in, e.g.,~\cite{du2008}. It is given by \begin{equation} u_t = \epsilon\Delta u + u - u^3, \quad \epsilon\ll1, \label{eq:AC} \end{equation} \noindent with linear diffusion $\epsilon\Delta u$ and a cubic reaction term $u-u^3$. The function $u$ is the order parameter, a correlation function related to the positions of the different components of the alloy. The Allen--Cahn equation exhibits stable equilibria at $u=\pm 1$, while $u=0$ is an unstable equilibrium. Solutions often display metastability, where wells $u\approx -1$ compete with peaks $u\approx 1$, and structures remain almost unchanged for long periods of time before changing suddenly. \noindent We take $\epsilon=10^{-2}$ and \begin{equation} u(t=0,x,y,z) = \cos(\cosh(5xz)-10y), \label{eq:ACIC} \end{equation} \noindent and solve up to $t=10$. The initial condition and the solution at times $t=1,2,10$ are shown in Figure~\ref{fig:ACsol}.
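The competition between these equilibria is already visible in the spatially homogeneous case, where~\reff{eq:AC} reduces to the ODE $u'=u-u^3$. A small Python sketch (illustrative only, not part of the paper's experiments, integrating with classical RK4):

```python
def rk4(f, u, h, nsteps):
    """Classical RK4 for the scalar autonomous ODE u' = f(u)."""
    for _ in range(nsteps):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

f = lambda u: u - u ** 3       # homogeneous Allen--Cahn reaction term
up = rk4(f, 0.1, 0.01, 1000)   # small positive perturbation of u = 0
um = rk4(f, -0.1, 0.01, 1000)  # small negative perturbation of u = 0
u0 = rk4(f, 0.0, 0.01, 1000)   # u = 0 is an (unstable) equilibrium
```

Small perturbations of the unstable equilibrium $u=0$ are driven to $\pm1$; this pointwise mechanism underlies the wells and peaks described above.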
The initial condition quickly converges to a metastable $u\approx\pm1$ solution (at around $t=10$) and eventually to the stable constant solution $u=1$ (at around $t=60$). \begin{figure} \hspace{-1.1cm} \includegraphics[scale=.59]{acsol.eps} \vspace{-9.8cm} \caption{\textit{Initial condition~$\reff{eq:ACIC}$ and solution at times $t=1,2,10$ of the Allen--Cahn equation~$\reff{eq:AC}$.}} \label{fig:ACsol} \end{figure} \begin{figure} \hspace{-.9cm} \includegraphics[scale=.45]{accomp.eps} \caption{\textit{Relative error $E$ at $t=10$ versus relative time-step and computer time for the Allen--Cahn equation~$\reff{eq:AC}$.}} \label{fig:ACcomp} \end{figure} The results are shown in Figure~\ref{fig:ACcomp}. All the schemes are stable for the time-steps we have considered, except IMEX-BDF4 for the largest time-step. The ETDRK4 schemes and LIRK4 have similar accuracy, while IMEX-BDF4 is significantly less accurate. However, IMEX-BDF4 is the most efficient scheme. This can be explained by looking at Table~\ref{tab:costs}: IMEX-BDF4 requires very few operations per time-step. Note that in this experiment, ETDRK4-CF ($\mathcal{O}(nm\log nm)$ work per time-step) is not more efficient than ETDRK4-EIG ($\mathcal{O}(nm^2)$). Again, the reason can be found in Table~\ref{tab:costs}: ETDRK4-CF requires the solution of 108 linear systems per time-step. (For $m=n$ sufficiently large, ETDRK4-CF will be indeed more efficient than ETDRK4-EIG.) \subsection{Results for the dispersive case} The \textit{nonlinear Schr\"odinger (NLS) equation}, \begin{equation} u_t = i\Delta u + i\vert u\vert^2u, \label{eq:NLS} \end{equation} \noindent models a variety of physical phenomena, including the propagation of light in optical fibres; on the sphere, a recent theoretical study of its solutions can be found in~\cite{takaoka2016}. A nonlinear variant of the Schr\"odinger equation, it couples dispersion $i\Delta u$ with a nonlinear potential $i\vert u\vert^2u$. 
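Solutions of~\reff{eq:NLS} conserve the mass $\int\vert u\vert^2$. As a simple illustration of this invariant, the following Python sketch (our own 1D periodic analogue with a Strang split-step integrator, not the sphere discretization or the time-steppers compared below) checks mass conservation numerically:

```python
import numpy as np

n, Lx = 256, 2 * np.pi                         # periodic grid on [0, Lx)
x = Lx / n * np.arange(n)
k = 2 * np.pi * np.fft.fftfreq(n, d=Lx / n)    # Fourier wavenumbers
u = (1 + 0.1 * np.cos(x)) * np.exp(1j * np.sin(x))  # smooth complex initial data
h, nsteps = 1e-3, 1000

mass0 = np.sum(np.abs(u) ** 2) * (Lx / n)
for _ in range(nsteps):
    u *= np.exp(1j * np.abs(u) ** 2 * h / 2)   # half nonlinear step (exact: |u| fixed)
    u = np.fft.ifft(np.exp(-1j * k ** 2 * h) * np.fft.fft(u))  # full linear step
    u *= np.exp(1j * np.abs(u) ** 2 * h / 2)   # half nonlinear step
mass = np.sum(np.abs(u) ** 2) * (Lx / n)
```

Both substeps are unitary (pointwise phase rotation and, by Parseval, multiplication of Fourier coefficients by unit-modulus factors), so the mass is conserved to roundoff.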
Note that the wave function $u$ is complex-valued. \noindent We take \begin{equation} u(t=0,\lambda,\theta) = A\bigg(\frac{2B^2}{2-\sqrt{2}\sqrt{2-B^2}\cos(AB\theta)}-1\bigg) + Y_3^3(\lambda,\theta) \label{eq:NLSIC} \end{equation} \noindent with $A=B=1$ and solve up to $t=1$. The initial condition and the real part of the solution at times $t=0.3,0.6,1$ are shown in Figure~\ref{fig:NLSsol}. The initial condition is the superposition of a \textit{breather}, a nonlinear wave in which energy concentrates in a localized and oscillatory fashion, and a spherical harmonic. \begin{figure} \hspace{-1.1cm} \includegraphics[scale=.57]{nlssol.eps} \vspace{-9.1cm} \caption{\textit{Initial condition~$\reff{eq:NLSIC}$ and real part of the solution at times $t=0.3,0.6,1$ of the NLS equation~$\reff{eq:NLS}$.}} \label{fig:NLSsol} \end{figure} \begin{figure} \hspace{-.9cm} \includegraphics[scale=.45]{nlscomp.eps} \caption{\textit{Relative error $E$ at $t=1$ versus relative time-step and computer time for the NLS equation.}} \label{fig:NLScomp} \end{figure} The results are shown in Figure~\ref{fig:NLScomp}. Both schemes are stable for the time-steps we have considered. LIRK4 is less accurate than ETDRK4-EIG in this case, but since it has a much lower cost per time-step, it is more efficient. Let us emphasize that timings do not include the precomputation step, which takes significantly longer for ETDRK4-EIG ($\mathcal{O}(nm^3)$ versus $\mathcal{O}(nm)$). Also, since LIRK4 and ETDRK4-EIG have different scaling in $m=n$, the results ultimately depend on the size of the discretization. For example, for $m=n=128$ instead of $256$, ETDRK4-EIG is slightly more efficient. \section{Discussion} We have presented algorithms for solving stiff PDEs on the sphere with spectral accuracy in space and fourth-order accuracy in time. For the spatial discretization, we have used a variant of the DFS method in coefficient space.
The main advantages of our method are that the pole conditions~\reff{polecondition1} do not need to be imposed numerically, and that operating in coefficient space avoids the coordinate singularity without using shifted grids and leads to matrices $\Lbf$ that can be computed and inverted efficiently. We have tested our method with the Poisson and heat equations and obtained excellent results. For solving nonlinear time-dependent PDEs, we have used IMEX schemes and exponential integrators to circumvent the time-stepping restrictions due to the large eigenvalues of the Laplacian matrix. For diagonal problems, exponential integrators are particularly efficient since the matrix exponential and its action on vectors can trivially be computed in linear time. For problems that allow for fast numerical linear algebra (fast sparse direct solver), as in this paper, we have given numerical evidence that IMEX schemes outperform exponential integrators. The IMEX-BDF4 time-stepping scheme is remarkably efficient for diffusive PDEs, but since it is unstable for dispersive PDEs and needs to be started with another algorithm, it might be preferable to use LIRK4 for both diffusive and dispersive PDEs. For problems that generate dense matrices $\Lbf$, it is not clear which one would perform best. On a grid with $N$ points, both exponential integrators (dense matrix-vector products) and IMEX schemes (triangular systems from precomputed LU factorizations) would have an $\mathcal{O}(N^2)$ cost per time-step. Note that dense matrices correspond to, e.g., the DFS method applied to \reff{PDE} with highly oscillatory variable coefficients or a Chebyshev discretization in value space of PDEs of the form \reff{PDE} in 1D/2D/3D. However, as recently shown by Aurentz in~\cite{aurentz2015}, it seems that all spectral differentiation matrices (even the dense ones!)
might allow for fast numerical linear algebra, which would render IMEX schemes more efficient for value-based Chebyshev discretizations---this is a story to be continued. We summarize these observations in Table~\ref{tab:comparisons}. \begin{table} \caption{\textit{Computational costs per time-step for a grid with $N$ points and the most efficient time-stepping scheme in each case. Diagonal problems were considered in}~\cite{kassam2005}. \textit{In this paper, we investigated non-diagonal problems that allow for fast numerical linear algebra (i.e., linear systems can be solved in linear time). In the latter case, IMEX schemes outperform exponential integrators. It is not clear which scheme would perform best in the dense numerical linear algebra case.}} \vspace{.5cm} \centering \ra{1.3} \begin{tabular}{ccc} \toprule & \textbf{diffusive PDEs} & \textbf{dispersive PDEs}\\ \midrule \textbf{diagonal problems} & $\mathcal{O}(N\log N)$ & $\mathcal{O}(N\log N)$\\ & ETDRK4 & ETDRK4\\\\ \textbf{non-diagonal problems} & $\mathcal{O}(N\log N)$ & $\mathcal{O}(N\log N)$\\ \textbf{fast sparse direct solver} & IMEX-BDF4 & LIRK4\\\\ \textbf{non-diagonal problems} & $\mathcal{O}(N^2)$ & $\mathcal{O}(N^2)$\\ \textbf{dense solver} & TBD & TBD\\ \bottomrule \end{tabular} \label{tab:comparisons} \end{table} We have not considered the ``sliders'' of Fornberg and Driscoll~\cite{driscoll2002b, fornberg1999}. While these can be an efficient alternative to IMEX schemes and exponential integrators for diagonal problems~\cite{kassam2005}, it is not clear how they can be extended to non-diagonal ones. Our method can in principle deal with nonsmooth initial conditions for diffusive problems, as long as the diffusion is strong enough to smooth out the solution. Also, as we mentioned in the introduction, our method could be applied to more general PDEs, including linear operators consisting of powers of the Laplacian operator. These would have a larger bandwidth, but could still be inverted efficiently.
Therefore, we believe that the same conclusions would hold. Analogues of the matrices $\Pbf$ and $\Qbf$ would be involved. Future directions include the application of our method to hyperbolic problems, e.g., the barotropic vorticity equation or the shallow water equations. Such problems involve nonlinear differential operators with large eigenvalues. While stiffness in the linear part (as in the Allen--Cahn and NLS equations) can be treated by using IMEX schemes or exponential integrators, it is not obvious how to deal with a stiff nonlinear operator. For the barotropic vorticity equation, \begin{equation} u_t = \mathcal{N}(u) = -\frac{(\Delta^{-1} u)_\theta}{\sin\theta}u_\lambda + \frac{(\Delta^{-1} u)_\lambda}{\sin\theta}(u_\theta - 2\Omega\sin\theta), \end{equation} \noindent a possible approach would be to use the EPIRK schemes~\cite{tokman2006}, e.g., the EPIRK2 scheme given by \begin{equation*} \hat{u}^{n+1} = \hat{u}^n + \mathbf{J}^{-1}(e^{h\mathbf{J}} - \Ibf)\Nbf(\hat{u}^n), \quad \mathbf{J} = \frac{d\Nbf}{d\hat{u}}(\hat{u}), \end{equation*} \noindent with Arnoldi iteration for the matrix-vector products involving the Jacobian matrix $\mathbf{J}$ of the discretization $\Nbf$ of $\mathcal{N}$ in Fourier space---the cost per time-step would also be $\mathcal{O}(N\log N)$ operations. \begin{appendix} \section{Eigenvalues of the Laplacian matrix} The Laplacian matrix~\reff{Laplacian} is a discretized Laplacian, so one might expect that the eigenvalues are all real and nonpositive. For example, Gottlieb and Lustman~\cite{gottlieb1983} give a nontrivial proof that the discretization of the second derivative operator in the Chebyshev collocation method on a real interval with separated boundary conditions has real and negative eigenvalues. Here we show that essentially the same holds on the sphere for~\reff{Laplacian} (a slight difference is that we have one zero eigenvalue since the Laplacian of a constant is zero). 
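Before stating the theorem, the key mechanism used in the proofs below (and in Lemma~\ref{lem:realeig}) is easy to illustrate numerically: a real tridiagonal matrix whose neighboring off-diagonal entries have positive products is diagonally similar to a symmetric matrix and therefore has real eigenvalues. A Python sketch with randomly generated data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
diag = rng.standard_normal(m)
sup = rng.uniform(0.5, 2.0, m - 1)   # superdiagonal entries
sub = rng.uniform(0.5, 2.0, m - 1)   # subdiagonal entries: sup*sub > 0 entrywise
T = np.diag(diag) + np.diag(sup, 1) + np.diag(sub, -1)

# Diagonal similarity D^{-1} T D with d_{j+1}/d_j = sqrt(sub_j/sup_j) symmetrizes T:
# both off-diagonals of the transformed matrix become sqrt(sup_j*sub_j).
d = np.concatenate(([1.0], np.cumprod(np.sqrt(sub / sup))))
S = np.diag(1 / d) @ T @ np.diag(d)

sym_defect = np.max(np.abs(S - S.T))           # should vanish to roundoff
imag_part = np.max(np.abs(np.linalg.eigvals(T).imag))  # eigenvalues are real
```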
\begin{theorem}\label{thm:eigreal} The eigenvalues of $\Lbf$ in~\eqref{Laplacian} are all real and nonpositive. \end{theorem} \begin{proof} Clearly it suffices to examine each $i$th block $\Lbf_i=\Dbf_m^{(2)} + \Tbf_{\sin^2}^{-1}\Tbf_{\cos\sin}\Dbf_m + \Dbf^{(2)}_n(i,i)\Tbf_{\sin^2}^{-1}$. We first note that the eigenvalues of $\Lbf_i$ are equal to those of the matrix pencil \begin{equation} \Abf_i-\lambda \Bbf_i :=\Tbf_{\sin^2}\Dbf_m^{(2)} + \Tbf_{\cos\sin}\Dbf_m + \Dbf^{(2)}_n(i,i)\Ibf_m -\lambda\Tbf_{\sin^2}, \label{eq:defAB} \end{equation} which corresponds to the generalized eigenvalue problem $\Abf_i x=\lambda \Bbf_i x$. We shall prove that this pencil has nonpositive real eigenvalues, regardless of $i$; we drop the subscript $i$ for simplicity. Our proof proceeds as follows: \begin{enumerate} \item The eigenvalues of $\Abf-\lambda\Bbf$ are the values of $\lambda_0$ for which the matrix $\Abf-\lambda_0\Bbf$ has a zero eigenvalue. \item For any fixed $\lambda_0\in (-\infty,0]$, all the eigenvalues of the matrix $\Abf-\lambda_0\Bbf$ are real. Therefore we can define $m$ real continuous functions $f_j(\lambda_0):=\lambda_j(\Abf-\lambda_0\Bbf)$ for $j=1,\ldots,m$. \item For every $j$, we have $f_j(0)\leq 0$, and $f_j(\lambda_0)\geq 0$ for negative $\lambda_0$ with sufficiently large $|\lambda_0|$. \item Applying the intermediate value theorem to each $f_j$, there is at least one root $f_j(\lambda_0)=0$ with $\lambda_0\in(-\infty,0]$ for each $j$. It follows that $\Abf-\lambda \Bbf$ has $m$ real nonpositive eigenvalues (counting multiplicities), hence so does $\Lbf_i$. \end{enumerate} The only nontrivial parts are the second step and the claim $f_j(0)\leq 0$. We first prove the second step, that the matrix $\Cbf(\lambda_0):=\Abf-\lambda_0\Bbf$ has only real eigenvalues.
To do this we apply the similarity transformation with the permutation matrix $\Pbf=\Ibf_m(:,[1:2:m, 2:2:m])$, which gives $\Pbf^T\Cbf(\lambda_0)\Pbf=\mbox{diag}(\Cbf_1(\lambda_0),\Cbf_2(\lambda_0))$, a block-diagonal matrix with two $\frac{m}{2}\times \frac{m}{2}:=\ell\times \ell$ blocks. Here $\Cbf_1(\lambda_0)$ is tridiagonal with extra elements in the upper-right and lower-left corners (as in~\eqref{eq:C1} below), and $\Cbf_2(\lambda_0)$ is tridiagonal.\footnote{It is possible to use this structure in the linear solvers. We did not do this in our experiments, as applying the permutation also requires $\mathcal{O}(nm)$ operations.} For $\Cbf_2(\lambda_0)$, we can verify that the products of the neighboring off-diagonals are positive, \begin{equation} \Cbf_2(\lambda_0)_{j,j+1}\Cbf_2(\lambda_0)_{j+1,j}>0, \end{equation} \noindent for all $j$ (when $\lambda_0=0$ the product can be 0; we exclude this case for the moment and assume $\lambda_0<0$). Hence, $\Cbf_2(\lambda_0)$ is diagonally similar to a symmetric matrix, thus its eigenvalues are all real. It remains to prove that the eigenvalues of $\Cbf_1(\lambda_0)$ are all real. Note that $\Cbf_1(\lambda_0)$ is of the form \begin{equation}\label{eq:C1} \Cbf_1(\lambda_0) =\begin{pmatrix} \alpha & \beta' & & & & & & \beta' \\ \beta & \alpha_1 & \beta_1 & \\ & \beta_{\ell-2} & \alpha_2 & \beta_2 & \\ & & \beta_{\ell-3} & \ddots & \ddots & \\ & & & \ddots & \ddots & \ddots & \\ & & & & \ddots & \ddots & \beta_{\ell-1} & \\ & & & & & \beta_{2} & \alpha_{2} & \beta_{\ell-2} \\ \beta & & & & & & \beta_{1} & \alpha_{1} \end{pmatrix}. 
\end{equation} Several properties of $\Cbf_1(\lambda_0)$ are worth noting: (i) the $(\ell-1)\times (\ell-1)$ submatrix obtained by removing the first row and column is symmetric about the antidiagonal, both in the diagonal and off-diagonal elements (note the double appearance of $\beta_i$), and (ii) the products of the neighboring off-diagonal entries are all positive: $\beta_j\beta_{\ell-j-1}> 0$ for all $j$, and $\beta\beta'>0$. In Lemma~\ref{lem:realeig} below we prove that any matrix~\eqref{eq:C1} with such structure has only real eigenvalues, establishing that $f_j(\lambda_0)$ is real for any $\lambda_0<0$. The claim extends to $\lambda_0=0$ by continuity of the eigenvalues, completing the second step in the proof. It remains to show $f_j(0)\leq 0$ for every $j$, that is, $\Abf$ has only nonpositive eigenvalues. Since $\Dbf_n^{(2)}(i,i)\leq 0$ for all $i$, it suffices to treat the case for which $\Dbf_n^{(2)}(i,i)=0$. We again examine $\Cbf_1(0)$ and $\Cbf_2(0)$ separately. For each of these, after deflating the zero eigenvalue (if present) we can apply a diagonal similarity transformation so that the Gershgorin disks, whose centers lie on the negative half line, do not contain the origin, implying that all the eigenvalues are nonpositive. This completes the proof of Theorem~\ref{thm:eigreal}. \end{proof} It remains to prove that the eigenvalues of matrices of the form in~\eqref{eq:C1} are all real. A key fact is that a real tridiagonal matrix with the neighboring off-diagonals having the same sign is diagonally similar to a symmetric tridiagonal matrix, and an analogous result holds for arrowhead matrices. \begin{lemma}\label{lem:realeig} For any real matrix of the form~\eqref{eq:C1}, with $\beta_i\beta_{\ell-i-1}> 0$ for all $i$ and $\beta\beta'>0$, all the eigenvalues are real. \end{lemma} \begin{proof} Denote the matrix by $\Cbf$.
We can apply a diagonal similarity transformation to the bottom-right $(\ell-1)\times(\ell-1)$ part $\Cbf_2$, to obtain a symmetric matrix $\Dbf^{-1}\Cbf_2\Dbf$. Since $\Cbf_2$ is symmetric about the antidiagonal, so is the diagonal matrix $\Dbf$; thus $\Dbf^{-1}\Cbf_2\Dbf$ is both symmetric and symmetric about the antidiagonal, and so the transformation $\widehat \Cbf= \big[ \begin{smallmatrix} 1\\ & \Dbf^{-1} \end{smallmatrix} \big] \Cbf \big[ \begin{smallmatrix} 1\\& \Dbf \end{smallmatrix} \big]$ preserves the property that the off-diagonal parts of the first row and column are parallel (when one is transposed). Now let $\Qbf$ be an orthogonal matrix of eigenvectors of $\Dbf^{-1}\Cbf_2\Dbf$ such that $\Qbf^T(\Dbf^{-1}\Cbf_2\Dbf)\Qbf$ is diagonal, and consider the matrix $\widetilde \Cbf = \big[ \begin{smallmatrix} 1\\ & \Qbf^T \end{smallmatrix} \big] \widehat \Cbf \big[ \begin{smallmatrix} 1\\& \Qbf \end{smallmatrix} \big]$. By the antidiagonal symmetry of $\Dbf^{-1}\Cbf_2\Dbf$, each eigenvector (column of $\Qbf$) has the property that it is either in the form $[v_1,v_2,\ldots,-v_2,-v_1]^T$ or $[v_1,v_2,\ldots,v_2,v_1]^T$. Therefore, $\widetilde \Cbf$ is an arrowhead matrix with the property that for every $j$, we have either $\widetilde \Cbf_{j,1}=\widetilde \Cbf_{1,j}=0$, or $\widetilde \Cbf_{j,1}\widetilde \Cbf_{1,j}>0$. It follows that there exists a diagonal similarity transformation that brings $\widetilde \Cbf$ to symmetric form; hence $\widetilde \Cbf$, and therefore $\Cbf$, has real eigenvalues.
\end{proof}

\begin{figure}
\begin{footnotesize}
\begin{verbatim}
m = 1024; n = m; h = 1e-1; T = 100;
u0 = @(x,y,z) cos(40*x)+cos(40*y)+cos(40*z);
th = pi/8; c = cos(th); s = sin(th);
u0 = 1/3*spherefun(@(x,y,z) u0(c*x-s*z,y,s*x+c*z));
v0 = reshape(coeffs2(u0, m, n), m*n, 1);
g = @(u) u - (1+1.5i)*u.*(abs(u).^2);
c2v = @(u) trigtech.coeffs2vals(u);
c2v = @(u) c2v(c2v(reshape(u,m,n)).').';
v2c = @(u) trigtech.vals2coeffs(u);
v2c = @(u) reshape(v2c(v2c(u).').',m*n,1);
N = @(u) v2c(g(c2v(u)));
Dm = spdiags(1i*[0,-m/2+1:m/2-1]', 0, m, m);
D2m = spdiags(-(-m/2:m/2-1).^2', 0, m, m);
D2n = spdiags(-(-n/2:n/2-1).^2', 0, n, n);
Im = speye(m); In = speye(n);
P = speye(m+1); P = P(:, 1:m); P(1,1) = .5; P(m+1,1) = .5;
Q = speye(m+1+4); Q = Q(3:m+2,:); Q(1,3) = 1; Q(1,m+3) = 1;
Msin2 = toeplitz([1/2, 0, -1/4, zeros(1, m+2)]);
Msin2 = sparse(Msin2(:, 3:m+3)); Tsin2 = round(Q*Msin2*P, 15);
Mcossin = toeplitz([0, 0, 1i/4, zeros(1, m+2)]);
Mcossin = sparse(Mcossin(:, 3:m+3)); Tcossin = round(Q*Mcossin*P, 15);
Lap = 1e-4*(kron(In, Tsin2*D2m + Tcossin*Dm) + kron(D2n, Im));
Tsin2 = kron(In, Tsin2);
[L, U] = lu(Tsin2);
[La, Ua] = lu(Tsin2 - 1/4*h*Lap);
itermax = round(T/h); v = v0;
for iter = 1:itermax
  Nv = N(v); w = Tsin2*v;
  wa = w + h*Tsin2*1/4*Nv;
  a = Ua\(La\wa); Na = N(a);
  wb = w + h*Lap*1/2*a + h*Tsin2*(-1/4*Nv + Na);
  b = Ua\(La\wb); Nb = N(b);
  wc = w + h*Lap*(17/50*a - 1/25*b) + h*Tsin2*(-13/100*Nv + 43/75*Na + 8/75*Nb);
  c = Ua\(La\wc); Nc = N(c);
  wd = w + h*Lap*(371/1360*a - 137/2720*b + 15/544*c) ...
     + h*Tsin2*(-6/85*Nv + 42/85*Na + 179/1360*Nb - 15/272*Nc);
  d = Ua\(La\wd); Nd = N(d);
  we = w + h*Lap*(25/24*a - 49/48*b + 125/16*c - 85/12*d) ...
     + h*Tsin2*(79/24*Na - 5/8*Nb + 25/2*Nc - 85/6*Nd);
  e = Ua\(La\we); Ne = N(e);
  v = v + h*(U\(L\(Lap*(25/24*a - 49/48*b + 125/16*c - 85/12*d + 1/4*e)))) ...
        + h*(25/24*Na - 49/48*Nb + 125/16*Nc - 85/12*Nd + 1/4*Ne);
end
vals = c2v(v); vals = vals([m/2+1:m 1], :);
u = spherefun(real(vals)); plot(u)
\end{verbatim}
\end{footnotesize}
\caption{MATLAB \textit{code to solve the Ginzburg--Landau equation on the sphere with the DFS method and the LIRK\,{\nf 4} time-stepping scheme; this code can be used for both diffusive and dispersive PDEs.}}
\label{code:LIRK4}
\end{figure}
\end{appendix}

\section*{Acknowledgements}
We thank Grady Wright for a fruitful exchange of emails about multiplication matrices and for reading an early draft of this manuscript. We also thank Alex Townsend and Heather Wilber for discussions about Fourier series on spheres, Jared Aurentz for various suggestions related to numerical linear algebra, and the referees for their helpful comments. The authors are much indebted to Nick Trefethen for his continual support and encouragement.

\bibliographystyle{siam}
https://arxiv.org/abs/2205.04211
Real Algebraic Geometry, Positivity and Convexity
Chapters 1 to 4 are the lecture notes of my course "Real Algebraic Geometry I" from the winter term 2020/2021. Chapters 5 to 8 are the lecture notes of its continuation "Real Algebraic Geometry II" from the summer term 2021. Chapters 9 and 10 are the lecture notes of its further continuation "Geometry of Linear Matrix Inequalities" from the winter term 2021/2022. These courses have been delivered at the University of Konstanz in Southern Germany. The entirety of these lecture notes is accompanied by a list of 47 long videos which is available from the following YouTube playlist: this https URL
\chapter{Introduction} The study of polynomial equations is a canonical subject in mathematics education, as is illustrated by the following examples: Quadratic equations in one variable (high school), systems of linear equations (linear algebra), polynomial equations in one variable and their symmetries (algebra, Galois theory), diophantine equations (number theory) and systems of polynomial equations (algebraic geometry, commutative algebra). \bigskip\noindent In contrast to this, the study of polynomial inequalities (in the sense of ``greater than'' or ``greater than or equal to'') is mostly neglected even though it is much more important for applications: Indeed, in applications one often searches for a real solution rather than a complex one (as in classical algebraic geometry) and this solution need not be exact but only approximate. \bigskip\noindent In a course about linear algebra there is frequently no time for linear optimization. An introductory course about algebra usually treats groups, rings and fields but disregards ordered and real closed fields as well as preorders or prime cones of rings. In a first course on algebraic geometry there is often no special attention paid to the real part of a variety and in commutative algebra quadratic modules are practically never treated. \bigskip\noindent Most algebraists do not even know the notion of a preorder although it is as important for the study of systems of polynomial inequalities as the notion of an ideal is for the study of systems of polynomial equations. People from more applied areas such as numerical analysis, mathematical optimization or functional analysis often know more about real algebraic geometry than some algebraists, but often do not even recognize that polynomials play a decisive role in what they are doing.
There are for example countless articles from functional analysis which are full of equations with binomial coefficients which turn out to be just disguised simple polynomial identities. \bigskip\noindent In the same way as the study of polynomial systems of equations leads to the study of rings and their generalizations (such as modules), the study of systems of polynomial inequalities leads to the study of rings which are endowed with something that resembles an order. This additional structure raises many new questions that have to be clarified. These questions arise already at a very basic level so that we need as prerequisites only basic linear algebra, algebra and analysis. In particular, at least the first half of this course is very well suited to students enrolled in programs for mathematics education. It includes several topics which are directly relevant for high school teaching. \bigskip\noindent To arouse the reader's curiosity, we present the following table. Its left column contains notions we assume the reader is familiar with. In the right column we name what could be seen more or less as their real counterparts, mostly introduced in this course. \begin{tabular}{r|l} Algebra&Real Algebra\\ Algebraic Geometry&Real Algebraic Geometry\\ systems of polynomial equations&systems of polynomial inequalities\\ ``$=$''&``$\ge$''\\ complex solutions&real solutions\\ $\C$&$\R$\\ algebraically closed fields&real closed fields\\ fields&ordered fields\\ ideals&preorders\\ prime ideals&prime cones\\ spectrum&real spectrum\\ Noetherian&quasi-compact\\ radical&real radical\\ fundamental theorem of algebra&fundamental theorem of algebra\\ Aachen, Aalborg, Aarhus, \dots&Dortmund, Dresden, Dublin, Innsbruck, \dots\\ \dots, Zagreb, Zürich&\qquad\quad \dots, Konstanz, Leipzig, Ljubljana, Rennes \end{tabular} \bigskip\noindent It is intended that the fundamental theorem of algebra appears on both sides of the table.
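The contrast drawn in the table between complex and real solutions can be made concrete with a small numerical experiment. The following Python sketch is our own illustration, not part of the course material; the helper name \verb|has_real_root| is made up, and the check is a floating-point approximation rather than an exact algebraic criterion.

```python
import numpy as np

def has_real_root(coeffs, tol=1e-9):
    """Return True if the polynomial with the given coefficients
    (highest degree first, as expected by numpy.roots) has a root
    whose imaginary part vanishes up to the tolerance tol."""
    roots = np.roots(coeffs)
    return bool(np.any(np.abs(roots.imag) < tol))

# X^2 + 1: solvable over C (fundamental theorem of algebra), but not over R
print(has_real_root([1, 0, 1]))    # False
# X^2 - 2: the real solutions are +/- sqrt(2)
print(has_real_root([1, 0, -2]))   # True
# X^3 + 5: every odd-degree polynomial with real coefficients has a real root
print(has_real_root([1, 0, 0, 5])) # True
```

The odd-degree case always succeeds; this analytic fact reappears in algebraic form later in these notes, where orders are extended along field extensions of odd degree.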
In its usual form, it says that each non-constant univariate complex polynomial has a complex root. In Section \ref{sec:rcf}, we will formulate it in a ``real'' way. The difficulties one has to deal with in the ``real world'' already become apparent when one asks the corresponding ``real question'': When does a univariate complex polynomial have a real root? The answer to this will be given in Section \ref{sec:hermite} and already requires quite a bit of thought. \bigskip\noindent Traditionally, Real Algebraic Geometry has many ties with fields like Model Theory, Valuation Theory, Quadratic Form Theory and Algebraic Topology. In this lecture, however, we mainly emphasize connections to fields like Optimization, Functional Analysis and Convexity that have emerged in recent years and are now fully established. \bigskip\noindent Throughout the lecture, $\N:=\{1,2,3,\dots\}$ and $\N_0:=\{0\}\cup\N$ denote the sets of positive and nonnegative integers, respectively. \mainmatter \chapter{Ordered fields} \section{Orders of fields} \begin{reminder}\label{ordered-set} Let $M$ be a set. An \emph{order} on $M$ is a relation $\le$ on $M$ such that for all $a,b,c\in M$: \[ \begin{array}{rcl} &a\le a & \text{(reflexivity)}\\ &(a\le b \et b\le c)\implies a\le c & \text{(transitivity)}\\ &(a\le b \et b\le a)\implies a=b & \text{(antisymmetry)}\\ \text{and}&a\le b\text{ or }b\le a & \text{(linearity)} \end{array} \] In this case, $(M,\le)$ (or simply $M$ if $\le$ is clear from the context) is called an \emph{ordered set}. For $a,b\in M$, one defines \begin{align*} a<b&:\iff a\le b\et a\ne b,\\ a\ge b&:\iff b\le a \end{align*} and so on. \end{reminder} \begin{df}\label{ordhom} Let $(M,\le_1)$ and $(N,\le_2)$ be ordered sets and $\ph\colon M\to N$ be a map. Then $\ph$ is called a \emph{homomorphism} (of ordered sets) or \emph{monotonic} if \[a\le_1 b\implies\ph(a)\le_2\ph(b)\] for all $a,b\in M$.
If $\ph$ is \alal{injective}{bijective} and if \[a\le_1 b\iff\ph(a)\le_2\ph(b)\] for all $a,b\in M$, then $\ph$ is called an \alal{\emph{embedding}}{\emph{isomorphism}} (of ordered sets). \end{df} \begin{pro}\label{automono} Let $(M,\le_1)$ and $(N,\le_2)$ be ordered sets and $\ph\colon M\to N$ a homomorphism. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\ph$ is an embedding \item $\ph$ is injective \item $\forall a,b\in M:(\ph(a)\le_2\ph(b)\implies a\le_1b)$ \end{enumerate} \end{pro} \begin{proof} \underline{(c)$\implies$(b)} \quad Suppose (c) holds and let $a,b\in M$ such that $\ph(a)=\ph(b)$. Then $\ph(a)\le_2\ph(b)$ and $\ph(a)\ge_2\ph(b)$. Now (c) implies $a\le_1b$ and $a\ge_1b$. Hence $a=b$. \smallskip \underline{(b)$\implies$(c)} \quad Suppose (b) holds and let $a,b\in M$ with $a\not\le_1b$. To show: $\ph(a)\not\le_2\ph(b)$. We have $a>_1b$ and it suffices to show $\ph(a)>_2\ph(b)$. From $a\ge_1b$ it follows by the monotonicity of $\ph$ that $\ph(a)\ge_2\ph(b)$. From $a\ne b$ and the injectivity of $\ph$ we get $\ph(a)\ne\ph(b)$. Together, this yields $\ph(a)>_2\ph(b)$. \smallskip From (b)$\iff$(c) and (a)$\iff$((b)$\et$(c)) [$\to$ \ref{ordhom}] the claim now follows. \end{proof} \begin{df}\label{def-ordered-field} Let $K$ be a field. An \emph{order} of $K$ is an order $\le$ on $K$ such that for all $a,b,c\in K$ we have: \[ \begin{array}{rcl} &a\le b\implies a+c\le b+c & \text{(monotonicity of addition)}\\ \text{and}&(a\le b \et c\ge0)\implies ac\le bc & \text{(monotonicity of multiplication).}\\ \end{array} \] In this case, $(K,\le)$ (or simply $K$ when $\le$ is clear from the context) is called an \emph{ordered field}. \end{df} \begin{df}\label{ordfieldhom} Let $(K,\le_1)$ and $(L,\le_2)$ be ordered fields. A field homomorphism (or equivalently, field embedding!)
$\ph\colon K\to L$ is called a \emph{homomorphism} or \emph{embedding} of ordered fields if $\ph$ is monotonic (pay attention to \ref{automono} together with the fact that field homomorphisms are injective). If $\ph$ is moreover surjective, then $\ph$ is called an \emph{isomorphism} of ordered fields. If there exists an embedding of ordered fields from $(K,\le_1)$ into $(L,\le_2)$, then $(K,\le_1)$ is called \emph{embeddable} in $(L,\le_2)$ and one denotes $(K,\le_1)\hookrightarrow(L,\le_2)$. If there is an isomorphism of ordered fields from $(K,\le_1)$ to $(L,\le_2)$, then $(K,\le_1)$ and $(L,\le_2)$ are called \emph{isomorphic}. This is denoted by $(K,\le_1)\cong(L,\le_2)$. $(K,\le_1)$ is called an \emph{ordered subfield} of $(L,\le_2)$, or equivalently $(L,\le_2)$ an \emph{ordered extension field} of $(K,\le_1)$, if $(K,\le_1)\to(L,\le_2),\ a\mapsto a$ is an embedding, that is if $K$ is a subfield of $L$ and $(\le_1)=(\le_2)\cap(K\times K)$. For every subfield of $L$ there is obviously a unique order making it into an ordered subfield of $(L,\le_2)$. This order is called the order \emph{induced} by $(L,\le_2)$. \end{df} \begin{pro}\label{squares} Let $(K,\le)$ be an ordered field. Then $a^2\ge0$ for all $a\in K$. \end{pro} \begin{proof} Let $a\in K$. When $a\ge0$ this follows immediately from the monotonicity of multiplication [$\to$ \ref{def-ordered-field}]. When $a\le0$ the monotonicity of addition [$\to$ \ref{def-ordered-field}] yields $0=a-a\le-a$, whence $-a\ge0$ and therefore $a^2=(-a)^2\ge0$. \end{proof} \begin{pro}\label{qembeds} Let $(K,\le)$ be an ordered field. Then $K$ is of characteristic $0$ and the uniquely determined field homomorphism $\Q\to K$ is an embedding of ordered fields ${(\Q,\le_\Q)}\hookrightarrow(K,\le)$. Hence $(K,\le)$ can be seen as an ordered extension field of $(\Q,\le_{\Q})$. In particular, for $K=\Q$ it follows that $(\le_\Q)=(\le)$, i.e., $\Q$ can only be ordered in the familiar way. 
\end{pro} \begin{proof} From \ref{squares} we have $0\le1^2=1$ in $(K,\le)$. Using the monotonicity of the addition, we deduce \begin{equation} \tag{$*$} 0\le1\le1+1\le1+1+1\le\dots \end{equation} If we had $\chara K\ne0$, then $(*)$ would give $0\le1\le0$ by the transitivity of $\le$ which would imply $0=1$ in $K$ by the antisymmetry of $\le$, contradicting the definition of a field. Let $\ph$ denote the field homomorphism $\Q\to K$ and let $a,b\in\Q$ with $a\le_{\Q}b$. To show: $\ph(a)\le\ph(b)$. Write $a=\frac kn$ and $b=\frac\ell n$ with $k,\ell\in\Z$ and $n\in\N$. Then \[\ph(n)=\underbrace{1+\dots+1}_{\text{$n$ times}}\underset{\chara K=0}{\overset{(*)}>}0\] and, by the monotonicity of multiplication and Proposition \ref{squares}, also \[\frac1{\ph(n)}=\left(\frac1{\ph(n)}\right)^2\ph(n)\ge0.\] Hence it suffices to show that $\ph(a)\ph(n)\le\ph(b)\ph(n)$. This reduces to $\ph(an)\le\ph(bn)$, that is $\ph(k)\le\ph(\ell)$, or equivalently $\ph(\ell-k)\ge0$. But due to $\ell-k\ge_\Q0$ this follows from $(*)$. \end{proof} \begin{pront}\label{introabssgn} Let $(K,\le)$ be an ordered field. Then for every $a\in K^\times$ there are uniquely determined $\sgn a\in\{-1,1\}$ (``sign'' of $a$) and $|a|\in K_{\ge0}:=\{x\in K\mid x\ge0\}$ (``absolute value'' of $a$) such that \[a=(\sgn a)|a|.\] One declares $\sgn0:=|0|:=0$. It follows that $|ab|=|a||b|$, $\sgn(ab)=(\sgn a)(\sgn b)$ and $|a+b|\le|a|+|b|$ for all $a,b\in K$. \end{pront} \begin{proof} The first part is very easy. Let now $a,b\in K$. Then $ab=(\sgn a)(\sgn b)|a||b|$, implying $|ab|=|a||b|$ as well as $\sgn(ab)=(\sgn a)(\sgn b)$. For the claimed triangle inequality, we can suppose $a+b\ge0$ (otherwise replace $a$ by $-a$ and $b$ by $-b$). Then $|a+b|=a+b\le a+|b|\le|a|+|b|$. \end{proof} \begin{df}\label{archetcdef} Let $(K,\le)$ be an ordered field. 
\begin{enumerate}[(a)] \item $(K,\le)$ is called \emph{Archimedean} if $\forall a\in K:\exists N\in\N:a\le N$ (or equivalently, $\forall a\in K:\exists N\in\N:-N\le a$). \item A sequence $(a_n)_{n\in\N}$ in $K$ is called \begin{itemize} \item a \emph{Cauchy sequence} if $\forall\ep\in K_{>0}:\exists N\in\N:\forall m,n\ge N:|a_m-a_n|<\ep$, \item \emph{convergent to $a\in K$} if $\forall\ep\in K_{>0}:\exists N\in\N:\forall n\ge N:|a_n-a|<\ep$ (one easily shows that $a$ is then uniquely determined and writes $\lim_{n\to\infty}a_n=a$), \item \emph{convergent} if there is some $a\in K$ such that $\lim_{n\to\infty}a_n=a$. \end{itemize} We call $(K,\le)$ \emph{Cauchy complete} if every Cauchy sequence converges in $K$ (by the way it is immediate that every convergent sequence is a Cauchy sequence). \item We call a subset $A\subseteq K$ \emph{bounded from above} if $K$ contains an upper bound for $A$ (meaning some $b\in K$ such that $\forall a\in A:a\le b$). We call $(K,\le)$ \emph{complete} if every nonempty subset of $K$ bounded from above possesses a least upper bound, i.e., a supremum. \end{enumerate} \end{df} \begin{pro}\label{qdense} Let $(K,\le)$ be an ordered field. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $(K,\le)$ is Archimedean \item $\forall a,b\in K:(a<b\implies\exists c\in\Q:a<c<b)$ \end{enumerate} \end{pro} \begin{proof} \underline{(b)$\implies$(a)} \quad Suppose (b) holds and let $a\in K$. To show: $\exists N\in \N:a\le N$. WLOG $a>0$. To show: $\exists N\in\N: \frac1N\le\frac 1a$. Choose $c\in\Q$ such that $0<c<\frac 1a$. Write $c=\frac mN$ for certain $m,N\in\N$. Then $\frac 1N\le\frac mN=c<\frac 1a$. \smallskip \underline{(a)$\implies$(b)} \quad Suppose (a) holds and let $a,b\in K$ such that $a<b$. Choose $N\in\N$ such that $\frac1{b-a}<N$. Then $\frac 1N<b-a$ and hence $a+\frac 1N<b$. Now choose the smallest $m\in\Z$ such that $a<\frac mN$. 
If we had $\frac mN\ge b$, then $a+\frac 1N<\frac mN$ and therefore $a<\frac{m-1}N$, contradicting our choice of $m$. Therefore $a<\frac mN<b$. \end{proof} \begin{lem} Let $(K,\le)$ be an Archimedean ordered field. Then \[K=\left\{\lim_{n\to\infty}a_n\mid \text{$(a_n)_{n\in\N}$ sequence in $\Q$ that converges in $K$} \right\}.\] \end{lem} \begin{proof} Let $a\in K$. We have to show that there is a sequence $(a_n)_{n\in\N}$ in $\Q$ that converges in $K$ to $a$. Choose for every $n\in\N$ according to \ref{qdense} some $a_n\in\Q$ such that $a\le a_n<a+\frac 1n$. Let $\ep\in K_{>0}$. Choose $N\in\N$ such that $\frac 1\ep<N$. For $n\ge N$ we now have $|a_n-a|=a_n-a<\frac 1n\le\frac 1N<\ep$. \end{proof} \begin{lem} Suppose $(K,\le)$ is an Archimedean ordered field and $(a_n)_{n\in\N}$ is a sequence in $\Q$. Then the following are equivalent: \begin{enumerate}[(a)] \item $(a_n)_{n\in\N}$ is a Cauchy sequence in $(\Q,\le_\Q)$ \item $(a_n)_{n\in\N}$ is a Cauchy sequence in $(K,\le)$ \end{enumerate} \end{lem} \begin{proof} This follows easily from \ref{qdense}. \end{proof} \begin{exo} Suppose $(K,\le)$ is an ordered field and $(a_n)_{n\in\N}$, $(b_n)_{n\in\N}$ are convergent sequences in $K$. Then \[\lim_{n\to\infty}(a_n+b_n)=\left(\lim_{n\to\infty}a_n\right)+\left(\lim_{n\to\infty}b_n\right)\qquad\text{and}\qquad \lim_{n\to\infty}(a_nb_n)=\left(\lim_{n\to\infty}a_n\right)\left(\lim_{n\to\infty}b_n\right).\] \end{exo} \begin{thm}\label{realschar} Let $(K,\le)$ be an ordered field. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $(K,\le)$ is Archimedean and Cauchy complete \item $(K,\le)$ is complete \end{enumerate} \end{thm} \begin{proof} \underline{(a)$\implies$(b)} \quad Suppose (a) holds and let $A\subseteq K$ be a nonempty subset bounded from above. Choose for every $n\in\N$ the smallest $k_n\in\Z$ such that $\forall a\in A:a\le\frac{k_n}n$ and set $a_n:=\frac{k_n}n\in\Q$ (use the Archimedean property!). 
Using again the Archimedean property, one can show easily that $(a_n)_{n\in\N}$ is a Cauchy sequence and therefore convergent by hypothesis. We leave it as an exercise to the reader to show that $a:=\lim_{n\to\infty}a_n$ is the least upper bound of $A$ in $(K,\le)$. \underline{(b)$\implies$(a)} \quad We prove the contrapositive. First, suppose that $(K,\le)$ is not Archimedean, i.e., the set \[A:=\{a\in K\mid\forall N\in\N:a\le-N\}\] is not empty. We claim that $A$ does not have a least upper bound: Indeed, if $a\in K$ is an upper bound of $A$, then so is $a-1<a$ since $A=\{a\in K\mid\forall N\in\Z:a\le N\}={\{a\in K\mid\forall N\in\Z:a+1\le N\}}=\{a-1\mid a\in K,\forall N\in\Z:a\le N\}=A-1$. Finally, suppose that $(K,\le)$ is not Cauchy complete, say $(a_n)_{n\in\N}$ is a Cauchy sequence in $K$ that does not converge. We claim that \[A:=\{a\in K\mid\exists N\in\N:\forall n\ge N:a\le a_n\}\] is nonempty and bounded from above but does not possess a least upper bound. We leave this as an exercise to the reader. \end{proof} \begin{lem}\label{archemb} Suppose $(K,\le)$ is an Archimedean ordered field and $(R,\le_R)$ a complete ordered field. Then there is exactly one embedding $(K,\le)\hookrightarrow(R,\le_R)$. This embedding is an isomorphism if and only if $(K,\le)$ is complete. \end{lem} \begin{proof} Exercise. \end{proof} \begin{thm}\label{introduce-the-reals} There is a complete ordered field $(\R,\le)$. It is essentially unique, for if ${(K,\le_K)}$ is another complete ordered field, then there is exactly one isomorphism from $(K,\le_K)$ to $(\R,\le)$. \end{thm} \begin{proof} The uniqueness is clear from \ref{archemb} together with \ref{realschar}. We only sketch the proof of existence and leave the details as an exercise to the reader: Show that the Cauchy sequences in $\Q$ form a subring $C$ of $\Q^\N$ and that \[I:=\left\{(a_n)_{n\in\N}\in C\mid\lim_{n\to\infty}a_n=0\right\}\] is a maximal ideal of $C$. Set $\R:=C/I$.
Show that \[a\le b:\iff\exists (a_n)_{n\in\N},(b_n)_{n\in\N}\text{ in }C:(a=\cc{(a_n)_{n\in\N}}I ~\&~b=\cc{(b_n)_{n\in\N}}I ~\&~\forall n\in\N:a_n\le b_n)\] ($a,b\in\R$) defines an order $\le$ on $\R$. It is clear that $(\R,\le)$ is Archimedean. By Theorem \ref{realschar} it suffices to show that $(\R,\le)$ is Cauchy complete. To this end, let $(a_n)_{n\in\N}$ be a Cauchy sequence in $(\R,\le)$. By \ref{qdense}, there exists a sequence $(q_n)_{n\in\N}$ in $\Q$ such that $|a_n-q_n|<\frac{1}{n}$ for $n\in\N$. Now deduce from the fact that $(a_n)_{n\in\N}$ is a Cauchy sequence in $(\R,\le)$ that $(q_n)_{n\in\N}$ is also a Cauchy sequence in $(\R,\le)$ and hence in $(\Q,\le)$. Now $(q_n)_{n\in\N}\in C$. Set $a:=\cc{(q_n)_{n\in\N}}I$. It is enough to show $\lim_{n\to\infty}a_n=a$. Finally show that this is equivalent to $\lim_{n\to\infty}q_n=a$ in $(\R,\le)$ and prove the latter. \end{proof} \begin{cor}\label{archsubfieldreals} $(\R,\le)$ is an Archimedean ordered field into which every Archimedean ordered field can be embedded. Up to isomorphism it is the only such ordered field. \end{cor} \begin{proof} The first statement is clear from \ref{realschar}, \ref{archemb} and \ref{introduce-the-reals}. Uniqueness: Let $(K,\le_K)$ be another such ordered field. Then \[(\R,\le)\overset\ph\hookrightarrow(K,\le_K)\overset\ps\hookrightarrow(\R,\le)\] and $\ps\circ\ph$ is, by \ref{archemb}, the unique embedding $(\R,\le)\hookrightarrow(\R,\le)$, i.e., $\ps\circ\ph=\id$. This implies that $\ps$ is surjective. Hence $(K,\le_K)\cong(\R,\le)$. \end{proof} \begin{nt}\label{divnot} Let $A$ be a ring.
Then we often use suggestive notation to describe certain subsets of $A$ such as the following: \begin{itemize} \item $A^2=\{a^2\mid a\in A\}$\qquad(``squares'') \item $\sum A^2=\{\sum_{i=1}^\ell a_i^2\mid\ell\in\N_0,a_i\in A\}$\qquad(``sums of squares'') \item $\sum A^2T=\left\{\sum_{i=1}^\ell a_i^2t_i\mid\ell\in\N_0,a_i\in A,t_i\in T\right\}\qquad(T\subseteq A)$\\ \hspace*{15em}(``sums of elements of $T$ weighted by squares'') \item $T+T=\{s+t\mid s,t\in T\}\qquad(T\subseteq A)$ \item $TT=\{st\mid s,t\in T\}\qquad(T\subseteq A)$ \item $-T=\{-t\mid t\in T\}\qquad(T\subseteq A)$ \item $T+aT=\{s+at\mid s,t\in T\}\qquad(T\subseteq A,a\in A)$ \end{itemize} \end{nt} \begin{pro}\label{unary-order} Let $K$ be a field. \begin{enumerate}[\normalfont(a)] \item If $\le$ is an order of $K$ \emph{[$\to$ \ref{def-ordered-field}]}, then $P:=K_{\ge0}=\{a\in K\mid a\ge0\}$ has the following properties: \begin{equation} \tag{$*$} P+P\subseteq P,\quad PP\subseteq P,\quad P\cup-P=K\quad\text{ and }\quad P\cap-P=\{0\}. \end{equation} \item If $P$ is a subset of $K$ satisfying $(*)$, then the relation $\le_P$ on $K$ defined by \[a\le_P b:\iff b-a\in P\qquad(a,b\in K)\] is an order of $K$. \item The correspondence \begin{align*} (\le)&\mapsto K_{\ge0}\\ (\le_P)&\mapsfrom P \end{align*} defines a bijection between the set of all orders on $K$ and the set of all subsets of $K$ satisfying $(*)$. \end{enumerate} \end{pro} \begin{proof} (a) We get $P+P\subseteq P$ and $PP\subseteq P$ from the monotonicity of addition and multiplication, respectively [$\to$ \ref{def-ordered-field}], $P\cup-P=K$ from the linearity [$\to$ \ref{ordered-set}] and $P\cap-P=\{0\}$ from the antisymmetry [$\to$ \ref{ordered-set}]. \smallskip (b) We get reflexivity from $0\in P$, transitivity from $P+P\subseteq P$, antisymmetry from $P\cap-P=\{0\}$, linearity from $P\cup-P=K$, monotonicity of addition from the definition of $\le_P$ and monotonicity of multiplication from $PP\subseteq P$.
\smallskip (c) Suppose first that $\le$ is an order of $K$ and set $P:=K_{\ge0}$. Then $(\le)=(\le_P)$ since $a\le b\iff b-a\ge0\iff b-a\in P\iff a\le_P b$ for all $a,b\in K$. Conversely, let $P\subseteq K$ be given such that $P$ satisfies $(*)$. We show $K_{\ge_P\,0}=P$. Indeed, \[K_{\ge_P\,0}=\{a\in K\mid 0\le_Pa\}=\{a\in K\mid a\in P\}=P.\] \end{proof} \begin{rem}\label{unaryrem} \ref{unary-order}(c) allows us to view orders of fields $K$ as subsets of $K$. We reformulate some of the preceding notions and results in this new language: \begin{enumerate}[(a)] \item Definition \ref{def-ordered-field}: Let $K$ be a field. An order of $K$ is a subset $P$ of $K$ satisfying \[P+P\subseteq P,\quad PP\subseteq P,\quad P\cup-P=K\quad\text{ and }\quad P\cap-P=\{0\}.\] \item Definition \ref{ordfieldhom}: Let $(K,P)$ and $(L,Q)$ be ordered fields. A field homomorphism $\ph\colon K\to L$ is called a homomorphism or an embedding of ordered fields if $\ph(P)\subseteq Q$. One calls $(K,P)$ an ordered subfield of $(L,Q)$ if $K$ is a subfield of $L$ and $P=Q\cap K$ (or equivalently $P\subseteq Q$). \item Proposition \ref{squares}: Let $(K,P)$ be an ordered field. Then $K^2\subseteq P$. \item Definition \ref{archetcdef}: An ordered field $(K,P)$ is called \emph{Archimedean} if \[\forall a\in K:\exists N\in\N:N+a\in P,\] ($\iff P-\N=K\iff P+\Z=K\iff P+\Q=K$). \end{enumerate} \end{rem} \section{Preorders} \begin{df}\label{defpreorder} Let $A$ be a commutative ring and $T\subseteq A$. Then $T$ is called a \emph{preorder} of $A$ if $A^2\subseteq T$, $T+T\subseteq T$ and $TT\subseteq T$. If moreover $-1\notin T$, then $T$ is called a \emph{proper} preorder of $A$. \end{df} \begin{ex}\label{sqsm} \begin{enumerate}[(a)] \item If $A$ is a commutative ring, then $\sum A^2$ is the smallest preorder of $A$. \item Every order of a field is a proper preorder. \end{enumerate} \end{ex} \begin{pro}\label{diffsquare} Let $A$ be a commutative ring with $\frac12\in A$ (i.e., $2\in A^\times$). 
Then \[a=\left(\frac{a+1}2\right)^2-\left(\frac{a-1}2\right)^2\] for all $a\in A$. In particular, $A=A^2-A^2$. \end{pro} \begin{dfpro}\label{supportideal} Let $A$ be a commutative ring with $\frac12\in A$ and $T\subseteq A$ a preorder. Then the \emph{support} $T\cap-T$ of $T$ is an ideal of $A$. \end{dfpro} \begin{proof} $T\cap-T$ is obviously a subgroup of (the additive group of) $A$ and we have \[ \begin{array}{rcl} A(T\cap-T)&\overset{\ref{diffsquare}}=&(A^2-A^2)(T\cap-T)\\ &\subseteq&(T-T)(T\cap-T)\\ &\subseteq&(T(T\cap-T))-(T(T\cap-T))\\ &\subseteq&((TT)\cap(-TT))+((-TT)\cap TT)\\ &\subseteq&(T\cap-T)+((-T)\cap T)\subseteq T\cap -T. \end{array} \] \end{proof} \begin{cor}\label{preproper} Suppose $A$ is a commutative ring with $\frac12\in A$ and $T\subseteq A$ is a preorder. Then \[\text{T is proper}\iff T\ne A.\] \end{cor} \begin{proof} ``$\Longrightarrow$'' trivial \smallskip ``$\Longleftarrow$'' Suppose $T\ne A$. Then of course also $T\cap-T\ne A$. Since $T\cap-T$ is an ideal, we have $1\notin T\cap -T$. Since $1=1^2\in T$, it follows that $1\notin-T$, i.e., $-1\notin T$. \end{proof} \begin{ex} In \ref{diffsquare}, \ref{supportideal} and \ref{preproper}, it is essential to require $\frac12\in A$. Take for example $A=\F_2(X)$. Then $A^2=\F_2(X^2)$ since $\F_2(X)\to\F_2(X),\ p\mapsto p^2$ is a homomorphism (Frobenius). Therefore $A^2-A^2 =\F_2(X^2)\ne\F_2(X)$. Moreover $T:=\F_2(X^2)=\sum A^2$ is a preorder of $A$ but $T\cap-T=\F_2(X^2)$ is not an ideal of $A$ (since $1\in T\cap-T\ne\F_2(X)$). Also $T$ is not proper although $T\ne A$. The same is true for $\F_2[X]$ instead of $\F_2(X)$ and from this one can get similar examples in the ring $\Z[X]$ (exercise). \end{ex} \begin{pro}\label{propfield} Let $K$ be a field and $T\subseteq K$ a preorder. Then \[\text{T is proper}\iff T\cap-T=\{0\}.\] \end{pro} \begin{proof} If $\chara K=2$, then $-1=1\in T\cap-T$. Therefore suppose now $\chara K\ne2$. 
Then \[-1\notin T\overset{1\in T}\iff1\notin T\cap-T\overset{\ref{supportideal}}{\underset{\substack{\text{$K$ field}\\\chara K\ne2}}\iff}T\cap-T=\{0\}.\] \end{proof} \begin{lem}\label{againpreorder} Suppose $A$ is a commutative ring, $T\subseteq A$ a preorder and $a\in A$. Then $T+aT$ is again a preorder. \end{lem} \begin{proof} $A^2\subseteq T\subseteq T+aT$, $(T+aT)+(T+aT)\subseteq(T+T)+a(T+T)\subseteq T+aT$ and $(T+aT)(T+aT)\subseteq TT+aTT+aTT+a^2TT\subseteq T+aT+aT+A^2T\subseteq T+a(T+T)+TT\subseteq T+aT+T\subseteq T+aT$. \end{proof} \begin{thm}\label{orderpreorder} Let $K$ be a field and $P\subseteq K$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $P$ is an order of $K$ \emph{[$\to$ \ref{unaryrem}]}. \item $P$ is a proper preorder of $K$ \emph{[$\to$ \ref{defpreorder}]} such that $P\cup-P=K$. \item $P$ is a maximal proper preorder of $K$. \end{enumerate} \end{thm} \begin{proof} \underline{(a)$\implies$(b)}\quad\ref{sqsm}(b) \smallskip \underline{(b)$\implies$(c)}\quad Suppose (b) holds and let $T$ be a proper preorder of $K$ with $P\subseteq T$. To show: $T\subseteq P$. To this end, let $a\in T$. If $a$ were not in $P$, then $-a\in P\subseteq T$ (since $P\cup-P=K$) and therefore $a\in T\cap-T\overset{\ref{propfield}}=\{0\}$ in contradiction to $0=0^2\in P$. \smallskip \underline{(c)$\implies$(a)}\quad Suppose (c) holds. Because of \ref{propfield}, we only have to show $P\cup-P=K$. Assume $P\cup-P\ne K$. Choose then $a\in K$ such that $a\notin P$ and $-a\notin P$. Then $P+aP$ and $P-aP$ are preorders according to Lemma \ref{againpreorder} and both contain $P$ properly (note that $0,1\in P$). Because of the maximality of $P$, neither $P+aP$ nor $P-aP$ is proper, i.e., $-1\in P+aP$ and $-1\in P-aP$. Write $-1=s+as'$ and $-1=t-at'$ for certain $s,s',t,t'\in P$. Then $-as'=1+s$ and $at'=1+t$. It follows that $-a^2s't'=1+s+t+st$ and therefore $-1=s+t+st+a^2s't'\in P+P+PP+K^2PP\subseteq P\ \lightning$.
\end{proof} \begin{thm}\label{artin-schreier} Let $K$ be a field and $T\subseteq K$ a proper preorder. Then there is an order $P$ of $K$ such that $T\subseteq P$ and we have $T=\bigcap\{P\mid\text{$P$ order of $K$},T\subseteq P\}$. \end{thm} \begin{proof} Consider the partially ordered set of all proper preorders of $K$ containing $T$. In this partially ordered set, every chain has an upper bound (the empty chain has $T$ as an upper bound and every nonempty chain possesses its union as an upper bound). By Zorn's lemma, the partially ordered set has a maximal element. Every such element is obviously a maximal proper preorder and therefore by \ref{orderpreorder} an order. Now we turn to the second statement: ``$\subseteq$'' is clear. ``$\supseteq$'' Let $a\in K\setminus T$. To show: There is an order $P$ of $K$ with $T\subseteq P$ and $a\notin P$. By \ref{againpreorder}, $T-aT$ is a preorder. It is proper, for otherwise there would be $s,t\in T$ with $-1=s-at$ and it would follow that $t\ne0$, $at=1+s$ and $a=\left(\frac 1t\right)^2t(1+s)\in K^2TT\subseteq T$. By what we have already proved, there is an order $P$ of $K$ with $T-aT\subseteq P$. If $a$ were in $P$, then $a\in P\cap-P=\{0\}$ in contradiction to $a\notin T$. \end{proof} \begin{df}\label{dfreal} A field is called \emph{real} (in older literature mostly \emph{formally real}) if it admits an order. \end{df} \begin{thm}\label{realchar} Let $K$ be a field. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $K$ is real. \item $-1\notin\sum K^2$ \item $\forall n\in\N:\forall a_1,\dots,a_n\in K:(a_1^2+\dots+a_n^2=0\implies a_1=0)$ \end{enumerate} \end{thm} \begin{proof} \underline{(a)$\implies$(b)}\quad follows from \ref{squares}. \smallskip \underline{(b)$\implies$(a)}\quad By \ref{sqsm}, $\sum K^2$ is a preorder. By (b), it is proper and hence contained in an order by \ref{artin-schreier}.
\smallskip \underline{(b)$\iff$(c)}\quad \begin{align*}-1\in\sum K^2&\iff\exists n\in\N:\exists a_2,\dots,a_n\in K:-1=a_2^2+\dots+a_n^2\\ &\iff\exists n\in\N:\exists a_2,\dots,a_n\in K:1^2+a_2^2+\dots+a_n^2=0\\ &\iff\exists n\in\N:\exists a_1\in K^\times:\exists a_2,\dots,a_n\in K:a_1^2+a_2^2+\dots+a_n^2=0\\ \end{align*} \end{proof} \begin{ex} Because of $-1=\ii^2\in\sum\C^2$, the field $\C:=\R(\ii)$ does not admit an order. \end{ex} \section{Extensions of orders} \begin{df}\label{oexf} Let $(K,P)$ be an ordered field and $L$ an extension field of $K$ (or in other words: let $L|K$ be a field extension and $P$ be an order of $K$). We call $Q$ an \emph{extension} of the order $P$ to $L$ if the following equivalent conditions are fulfilled [$\to$ \ref{unaryrem}(b)]: \begin{enumerate}[(a)] \item $(L,Q)$ is an ordered extension field of $(K,P)$. \item $Q$ is an order of $L$ such that $P\subseteq Q$. \item $Q$ is an order of $L$ such that $Q\cap K=P$. \end{enumerate} \end{df} \begin{thm}\label{extend-ordering} Let $(K,P)$ be an ordered field and $L$ an extension field of $K$. Then the order $P$ of $K$ can be extended to $L$ if and only if $-1\notin\sum L^2P$. \end{thm} \begin{proof} Since every order is a preorder [$\to$ \ref{sqsm}], an order of $L$ contains $P$ if and only if it contains the preorder generated in $L$ by $P$ (i.e., the smallest preorder of $L$ containing $P$, or in other words, the intersection of all preorders of $L$ containing $P$), namely $\sum L^2P$. If $\sum L^2P$ is not proper, then there is of course no order of $L$ containing it. On the contrary, if $\sum L^2P$ is proper, then there is such an order by Theorem \ref{artin-schreier}. \end{proof} \begin{reminder}\label{quadratic-remind} Let $L|K$ be a field extension with $\chara K\ne2$. Then \[[L:K]\le2\iff\exists d\in K:L=K(\sqrt d)\] since for $x\in L$ and $a,b,c\in K$ with $a\ne0$ and $ax^2+bx+c=0$ we have $(2ax+b)^2=4a(ax^2+bx)+b^2=b^2-4ac=:d$ and therefore $K(x)=K(2ax+b)=K(\sqrt d)$. 
\end{reminder} \begin{thm}\label{extend2} Let $(K,P)$ be an ordered field and $d\in K$. The order $P$ can be extended to $K(\sqrt d)$ if and only if $d\in P$. \end{thm} \begin{proof} If $\sqrt d\in K$, then $d=(\sqrt d)^2\in P$. Suppose now that $\sqrt d\notin K$. Set $L:=K(\sqrt d)$. Because of $P+dP\subseteq\sum L^2P\subseteq P+dP+K\sqrt d$, we have $-1\notin\sum L^2P\iff-1\notin P+dP$. Since $P$ is a maximal proper preorder by \ref{orderpreorder} and $P+dP$ is a preorder by \ref{againpreorder}, we obtain $-1\notin P+dP\iff P=P+dP\iff d\in P$. Combining, we get $-1\notin\sum L^2P\iff d\in P$ and we conclude by Theorem \ref{extend-ordering}. \end{proof} \begin{ex}\label{sqrt2} In \ref{extend2}, the extension is in general not unique: $\Q(\sqrt 2)$ admits exactly two orders, namely the ones induced by the two field embeddings $\Q(\sqrt 2)\hookrightarrow\R$ (in the one $\sqrt 2$ is positive, in the other negative). That it does not admit a third one follows from the fact that for every order $P$ of $\Q(\sqrt2)$ we have by \ref{archsubfieldreals} $(\Q(\sqrt 2),P)\hookrightarrow(\R,\R_{\ge0})$ because $P$ is Archimedean since $\Q(\sqrt 2)=\Q+\Q\sqrt2$ and \[|\sqrt2|_P-1\overset{\ref{diffsquare}}= \left(\frac{|\sqrt2|_P}2\right)^2-\left(\frac{|\sqrt2|_P-2}2\right)^2\overset{\ref{squares}}\le_P \left(\frac{|\sqrt2|_P}2\right)^2=\frac12\] [$\to$ \ref{archetcdef}(a)]. \end{ex} \begin{thm}\label{extendodd} If $L|K$ is a field extension of odd degree, then each order of $K$ can be extended to $L$. \end{thm} \begin{proof} Assume the claim does not hold. Then there are a counterexample $L|K$ with $[L:K]=2n+1$ for some $n\in\N$ and an order $P$ of $K$ that cannot be extended to $L$. We choose the counterexample so that $n$ is as small as possible. We will now produce another counterexample $L'|K$ with $[L':K]\le2n-1$, which will contradict the minimality of $n$. Due to $\chara K=0$, we have that $L|K$ is separable. By the primitive element theorem, there is some $a\in L$ with $L=K(a)=K[a]$.
The condition $-1\in\sum L^2P$, which is satisfied by \ref{extend-ordering} since $P$ does not extend to $L$, translates via the isomorphism $K[X]/(f)\to L,\ \overline g\mapsto g(a)$ into \begin{equation} \tag{$*$} 1+\sum_{i=1}^\ell a_ig_i^2=hf \end{equation} with $\ell\in\N$, $a_i\in P$, $g_i,h\in K[X]$, where $f$ denotes the minimal polynomial of $a$ over $K$ (in particular $\deg f=[K(a):K]=[L:K]=2n+1$) and the $g_i$ are chosen in such a way that $\deg g_i\le 2n$. Each of the $\ell+1$ terms in the sum on the left hand side of $(*)$ has an \emph{even} degree $\le4n$ and a leading coefficient from $PK^2\subseteq P$ (except those terms that are zero of course). Since $P$ is an order, the monomials of highest degree appearing on the left hand side of $(*)$ cannot cancel out. So the left hand side and therefore also the right hand side of $(*)$ has an \emph{even} degree $\le4n$. It follows that $h$ has an \emph{odd} degree $\le2n-1$. Choose an irreducible divisor $h_1$ of $h$ in $K[X]$ of \emph{odd} degree and a root $b$ of $h_1$ in an extension field of $K$ (e.g., in the splitting field of $h_1$ over $K$ or in the algebraic closure of $K$). Set $L':=K(b)$. Substituting $b$ into $(*)$ yields $-1=\sum_{i=1}^\ell a_ig_i(b)^2\in\sum PL'^2$. By \ref{extend-ordering}, $P$ cannot be extended to $L'$. Since $[L':K]=[K(b):K]=\deg\irr_Kb=\deg h_1\le 2n-1$ is \emph{odd}, we obtain the desired smaller counterexample. \end{proof} \begin{thm}\label{rtlfct} Let $K$ be a field. Then every order of $K$ can be extended to $K(X)$. \end{thm} \begin{proof} Let $P$ be an order of $K$. Assume that $P$ cannot be extended to $K(X)$. By \ref{extend-ordering} we then have $-1\in\sum K(X)^2P$. Because of $\#K=\infty$ [$\to$ \ref{qembeds}] we can plug in a suitable point from $K$ (``avoid finitely many poles'') and get $-1\in\sum K^2P=P\ \lightning$. \end{proof} \begin{ex}\label{ordersrx} Due to \ref{rtlfct} there is an order on $\R(X)$.
If $P$ is such an order, then by the completeness of $(\R,\le)$ [$\to$ \ref{introduce-the-reals}], the set $\R_{\le_PX}=\{a\in\R\mid a\le_PX\}$ is either empty or not bounded from above (in which case it is $\R$) or it has a supremum $t$ in $\R$ (in which case it equals $(-\infty,t)$ if $t>_PX$ or $(-\infty,t]$ if $t<_PX$). Hence \[\R_{\le_PX}=\{a\in\R\mid a\le_PX\}\in\{\emptyset\}\cup\{(-\infty,t)\mid t\in\R\}\cup\{(-\infty,t]\mid t\in\R\}\cup\{\R\} =:C.\] We claim now that the map \begin{align*} \Ph\colon\{P\mid P\text{ order of $\R(X)$}\}&\to C\\P&\mapsto\R_{\le_PX} \end{align*} is a bijection. It is easy to see that for all $I,J\in C$ there is a ring automorphism $\ph_{I,J}$ of $\R(X)$ such that for all orders $P$ of $\R(X)$, we have \[\Ph(P)=I\iff\Ph(\ph_{I,J}(P))=J:\] \begin{itemize} \item $I=\R \et J=(-\infty,0] \leadsto \ph_{I,J}\colon X\mapsto\frac1X$ \item $I=\emptyset \et J=(-\infty,0) \leadsto \ph_{I,J}\colon X\mapsto\frac1X$ \item $I=(-\infty,t) \et J=(-\infty,0) \leadsto \ph_{I,J}\colon X\mapsto X+t$ \item $I=(-\infty,t] \et J=(-\infty,0] \leadsto \ph_{I,J}\colon X\mapsto X+t$ \item $I=(-\infty,0) \et J=(-\infty,0] \leadsto \ph_{I,J}\colon X\mapsto-X$ \item other $I$ and $J$ $\leadsto$ composition of the above automorphisms \end{itemize} From this we get the \emph{surjectivity} of $\Ph$, since as already mentioned there is an order $P$ of $\R(X)$ and if we set $I:=\Ph(P)$, then $\Ph(\ph_{I,J}(P))=J$ for all $J\in C$. For the \emph{injectivity} of $\Ph$, it suffices to show that \emph{there is} some $I\in C$ having only one preimage under $\Ph$, since then \begin{align*} \#\{P\mid\Ph(P)=J\}&=\#\{P\mid\Ph(\ph_{J,I}(P))=I\}\\ &=\#\{\ph_{J,I}(P)\mid\Ph(\ph_{J,I}(P))=I\}=\#\{P\mid\Ph(P)=I\}=1 \end{align*} for all $J\in C$. We therefore fix $I:=\R\in C$ and show that there is at most (and therefore exactly) one order $P$ of $\R(X)$ such that $\Ph(P)=I$. To this end, suppose $\Ph(P)=I$.
If $f,g\in\R[X]\setminus\{0\}$, then one easily verifies that \[\frac fg\in P\overset{\R(X)^2\subseteq P}\iff fg\in P\iff\text{the leading coefficient of $fg$ is positive.}\] This uniquely determines $P$. Consequently, $\Ph$ is a bijection. We fix the following notation: \begin{align*} P_{-\infty}&:=\Ph^{-1}(\emptyset)\\ P_{t-}&:=\Ph^{-1}((-\infty,t))\text{ for $t\in\R$}\\ P_{t+}&:=\Ph^{-1}((-\infty,t])\text{ for $t\in\R$}\\ P_\infty&:=\Ph^{-1}(\R) \end{align*} Now $\{P\mid P\text{ order of }\R(X)\}=\{P_{-\infty}\}\cup\{P_{t-},P_{t+}\mid t\in\R\}\cup\{P_\infty\}$. By easy considerations one obtains, \begin{align*} P_{-\infty}&=\{r\in\R(X)\mid\exists c\in\R:\forall x\in(-\infty,c):r(x)\ge0\},\\ P_{t-}&=\{r\in\R(X)\mid\exists\ep\in\R_{>0}:\forall x\in(t-\ep,t):r(x)\ge0\}\qquad(t\in\R),\\ P_{t+}&=\{r\in\R(X)\mid\exists\ep\in\R_{>0}:\forall x\in(t,t+\ep):r(x)\ge0\}\qquad(t\in\R),\\ P_\infty&=\{r\in\R(X)\mid\exists c\in\R:\forall x\in(c,\infty):r(x)\ge0\}. \end{align*} None of these orders is Archimedean. \end{ex} \section{Real closed fields}\label{sec:rcf} \begin{pro} Let $K$ be a field. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $K$ admits exactly one order. \item $\sum K^2$ is an order of $K$. \item $(\sum K^2)\cup(-\sum K^2)=K$ and $-1\notin\sum K^2$ \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)}\quad Suppose $P$ is the unique order of $K$. By \ref{sqsm} and \ref{artin-schreier}, we then get $\sum K^2=P$. \smallskip \underline{(b)$\implies$(c)} is trivial. \smallskip \underline{(c)$\implies$(a)}\quad Suppose (c) holds. Using \ref{unaryrem}(a) and \ref{propfield}, we see that $\sum K^2$ is an order of $K$, and it is the only one by \ref{sqsm} and \ref{orderpreorder}(b). \end{proof} \begin{ex} $\Q$ and $\R$ possess exactly one order. 
\end{ex} \begin{convention}\label{convention} If $K$ is a field admitting exactly one order, then we will often understand $K$ as an ordered field, that is, we speak of $K$ but mean $(K,\sum K^2)$. \end{convention} \begin{df}\label{dfeuclid} A field $K$ is called \emph{Euclidean} if $K^2$ is an order of $K$. \end{df} \begin{rem}\label{euclidunique} If $K$ is Euclidean, then $K^2$ is the unique order of $K$. \end{rem} \begin{ex} $\R$ is Euclidean but $\Q$ is not. \end{ex} \begin{notrem}\label{notremsqrt} \begin{enumerate}[(a)] \item Let $(K,\le)$ be an ordered field. If $a,b\in K$ such that $a=b^2$, then we write $\sqrt a:=|b|\in K_{\ge0}$ [$\to$ \ref{introabssgn}] (this is obviously well-defined). If $a\in K\setminus K^2$, we continue to denote by $\sqrt a\in\overline K\setminus K$ an arbitrary but fixed square root of $a$ in the algebraic closure $\overline K$ of $K$. One shows easily that $a\le b\iff\sqrt a\le\sqrt b$ for all $a,b\in K^2$. \item If $K$ is a Euclidean field (with order $\le$ [$\to$ \ref{convention}, \ref{euclidunique}]), then in particular $\sqrt a\in K_{\ge0}$ and $(\sqrt a)^2=a$ for \emph{all} $a\in K_{\ge0}=K^2=\sum K^2$. \item We write $\ii:=\sqrt{-1}$. If $K$ is a real field, then $K(\ii)=K\oplus K\ii$ as a $K$-vector space. \end{enumerate} \end{notrem} \begin{pro}\label{cheuclid} Let $K$ be a real field. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $K$ is Euclidean. \item $K=-K^2\cup K^2$ \item $K(\ii)=K(\ii)^2$ \item Every polynomial of degree $2$ in $K(\ii)[X]$ has a root in $K(\ii)$. \end{enumerate} \end{pro} \begin{proof} \underline{(d)$\implies$(c)} is trivial. \smallskip \underline{(c)$\implies$(b)}\quad Suppose (c) holds and let $a\in K$. Write $a=(b+\ii c)^2$ for some $b,c\in K$. Then $a=b^2-c^2$ and $bc=0$ [$\to$ \ref{notremsqrt}(c)]. Therefore $a=b^2$ or $a=-c^2$. \smallskip \underline{(b)$\implies$(a)}\quad Suppose (b) holds. It suffices to show $K^2+K^2\subseteq K^2$. For this purpose, let $a,b\in K$.
To show: $a^2+b^2\in K^2$. If we had $a^2+b^2\notin K^2$, then $a^2+b^2\in-K^2$, say $a^2+b^2+c^2=0$ for some $c\in K$, and \ref{realchar}(c) would imply $c=0$ and hence $a^2+b^2=0\in K^2\ \lightning$. \smallskip \underline{(a)$\implies$(c)}\quad Suppose (a) holds and let $a,b\in K$. By \ref{notremsqrt}(c), we have to show $a+b\ii\in K(\ii)^2$. Set $r:=\sqrt{a^2+b^2}\in K_{\ge0}$ according to \ref{notremsqrt}(b). Then $r^2=a^2+b^2\ge a^2=|a|^2$ and therefore $r\ge|a|$ by \ref{notremsqrt}(a), i.e., $r\pm a\ge0$. It follows that $\sqrt{\frac{r+a}2},\sqrt{\frac{r-a}2}\in K_{\ge0}$ and the calculation \[\left(\sqrt{\frac{r+a}2}\pm\sqrt{\frac{r-a}2}\ii\right)^2=\frac{r+a}2\pm 2\sqrt{\frac{r^2-a^2}4}\ii-\frac{r-a}2=a\pm 2\left|\frac b2\right|\ii=a\pm|b|\ii\] shows $a+b\ii\in K(\ii)^2$. \smallskip \underline{(c)$\implies$(d)} follows from $X^2+bX+c=(X+\frac b2)^2+(c-\frac{b^2}4)$ for $b,c\in K(\ii)$. \end{proof} \begin{df}\label{dfrealclosed} Let $R$ be a field. Then $R$ is called \emph{real closed} if $R$ is Euclidean [$\to$~\ref{dfeuclid}, \ref{cheuclid}] and every polynomial of \emph{odd} degree from $R[X]$ has a root in $R$. \end{df} \begin{ex} $\R$ is real closed by the intermediate value theorem from calculus and by \ref{dfeuclid}. \end{ex} \begin{rem} We now generalize the fundamental theorem of algebra from $\C=\R(\ii)$ to $R(\ii)$ for any real closed field $R$. The usual Galois-theoretic proof goes through as we will see immediately. \end{rem} \begin{thm}[``generalized fundamental theorem of algebra'']\label{fund} Let $R$ be a real closed field. Then $C:=R(\ii)$ is algebraically closed. \end{thm} \begin{proof} Let $z\in\overline C$. To show: $z\in C$. Choose an intermediate field $L$ of $\overline C|C$ with $z\in L$ such that $L|R$ is a finite Galois extension (e.g., the splitting field of $(X^2+1)\irr_Rz$ over $R$). We show $L=C$. Choose a $2$-Sylow subgroup $H$ of the Galois group $G:=\Aut(L|R)$. From Galois theory we know that $[L^H:R]=[G:H]$ is odd.
Hence $L^H=R$ since every element of $L^H$ has over $R$ a minimal polynomial of odd degree, which has a root in $R$ and therefore must be linear. Galois theory then implies $G=H$. Hence $G$ is a $2$-group. Therefore the subgroup $I:=\Aut(L|C)$ of $G$ is also a $2$-group. By Galois theory, it is enough to show $I=\{1\}$. If we had $I\ne\{1\}$, then there would exist, as one knows from algebra, a subgroup $J$ of $I$ with $[I:J]=2$. From this we get by Galois theory $[L^J:C]=[L^J:L^I]=[I:J]=2$, contradicting \ref{cheuclid}(d). \end{proof} \begin{thm}\label{rcchar} Let $R$ be a field. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $R$ is real closed. \item $R\ne R(\ii)$ and $R(\ii)$ is algebraically closed. \item $R$ is real but there is no real extension field $L\ne R$ of $R$ such that $L|R$ is algebraic. \end{enumerate} \end{thm} \begin{proof} \underline{(a)$\implies$(b)} follows from \ref{fund}. \smallskip \underline{(b)$\implies$(c)}\quad Suppose (b) holds. In order to show that $R$ is real, it is enough by Theorem \ref{realchar} to show that $\sum R^2=R^2$, since $-1\notin R^2$ because of $R\ne R(\ii)$. To this end, let $a,b\in R$. To show: $a^2+b^2\in R^2$. Since $R(\ii)$ is algebraically closed, we have $a+b\ii\in R(\ii)^2$, that is, there are $c,d\in R$ such that $a+b\ii=(c+d\ii)^2$ and it follows that $a^2+b^2=(a+b\ii)(a-b\ii)=(c+d\ii)^2(c-d\ii)^2=((c+d\ii)(c-d\ii))^2=(c^2+d^2)^2\in R^2$. Now let $L|R$ be an algebraic field extension and suppose $L$ is real. To show: $L=R$. Since $L(\ii)|R(\ii)$ is again algebraic and $R(\ii)$ is algebraically closed, we obtain $L(\ii)=R(\ii)$. For this reason $L$ is a real intermediate field of $R(\ii)|R$ and it follows that $L=R$. \smallskip \underline{(c)$\implies$(a)}\quad Suppose (c) holds. Choose an order $P$ of $R$ according to Definition \ref{dfreal}. For all $d\in P$, $R(\sqrt d)$ is real by \ref{extend2} and therefore $R(\sqrt d)=R$.
It follows that $P\subseteq R^2\subseteq P$ and hence $P=R^2$, i.e., $R$ is Euclidean. According to Definition \ref{dfrealclosed} it remains to show that each polynomial $f\in R[X]$ of odd degree has a root in $R$. Let $f\in R[X]$ be of odd degree. Choose an irreducible divisor $g$ of $f$ in $R[X]$ of odd degree. Choose a root $a$ of $g$ in an extension field of $R$. Since $[R(a):R]=\deg g$ is odd, $R(a)$ is real by \ref{extendodd} and therefore $R(a)=R$. Thus $a\in R$ satisfies $g(a)=0$ and hence $f(a)=0$. \end{proof} \begin{thm}[``real version of the generalized fundamental theorem of algebra'']\label{realfund} Let $R$ be a field. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $R$ is real closed. \item $\{f\in R[X]\mid\text{$f$ is irreducible and monic}\}=\\ \{X-a\mid a\in R\}\cup\{(X-a)^2+b^2\mid a,b\in R,b\ne 0\} $ \end{enumerate} \end{thm} \begin{proof} \underline{(a)$\implies$(b)}\quad Suppose (a) holds. ``$\supseteq$'' is clear since $R$ is real. ``$\subseteq$'' Let $f\in R[X]$ be irreducible and monic of degree $\ge2$. Since $R(\ii)$ is algebraically closed by \ref{fund}, there are $a,b\in R$ such that $f(a+b\ii)=0$. Due to $R\ne R(\ii)$ we can apply the automorphism of the field extension $R(\ii)|R$ given by $\ii\mapsto-\ii$ in order to obtain $f(a-b\ii)=0$. By observing $a+b\ii\ne a-b\ii$ (since $b\ne0$ because $f\in R[X]$ is irreducible of degree $\ge2$), we get \[f=\underbrace{(X-(a+b\ii))(X-(a-b\ii))}_{(X-a)^2+b^2\in R[X]}g\] for some $g\in R(\ii)[X]$. But then $g\in R[X]$ and hence even $g=1$. \smallskip \underline{(b)$\implies$(a)}\quad Suppose (b) holds. We will show \ref{rcchar}(b), i.e., that $R\ne R(\ii)$ and $R(\ii)$ is algebraically closed. The first claim $R\ne R(\ii)$ follows from the irreducibility of $X^2+1=(X-0)^2+1^2\in R[X]$. Now suppose $f\in R(\ii)[X]$ is of degree $\ge1$. Consider the ring automorphism \[R(\ii)[X]\to R(\ii)[X],\ p\mapsto p^*\] given by $a^*=a$ for $a\in R$, $\ii^*=-\ii$ and $X^*=X$. 
Then $f^*f\in R[X]$. If $f^*f$ has a root $a\in R$, then $f(a)=0$ or $f^*(a)=0$ and then again $f(a)=0$. Suppose therefore that $f^*f$ has no root in $R$. Then there exist $a,b\in R$ with $b\ne0$ such that $(X-a)^2+b^2$ divides $f^*f$ in $R[X]$. Because of $(X-a)^2+b^2=(X-(a+b\ii))(X-(a-b\ii))$, $a+b\ii$ is a root of $f$ or of $f^*$. If $f^*(a+b\ii)=0$, then $f(a-\ii b)=f^{**}((a+\ii b)^*)=(f^*(a+\ii b))^*=0^*=0$. Therefore $a+\ii b$ or $a-\ii b$ is a root of $f$ in $R(\ii)$. \end{proof} \begin{notterm}\label{intervals} Let $(K,\le)$ be an ordered field. \begin{enumerate}[(a)] \item We extend the order $\le$ in the obvious way to the set $\{-\infty\}\cup K\cup\{\infty\}$ by declaring $-\infty<a<\infty$ for all $a\in K$. \item We adopt the usual notation for intervals \begin{align*} (a,b)&:=(a,b)_K:=\{x\in K\cup\{\pm\infty\}\mid a<x<b\}\qquad(a,b\in K\cup\{\pm\infty\})\\ &\text{(``interval from $a$ to $b$ without endpoints'')}\\ [a,b)&:=[a,b)_K:=\{x\in K\cup\{\pm\infty\}\mid a\le x<b\}\qquad(a,b\in K\cup\{\pm\infty\})\\ &\text{(``interval from $a$ to $b$ with $a$ and without $b$'')} \end{align*} and so forth. \item We use terminology like \begin{align*} \text{$f\ge0$ on $S$}&:\iff\forall x\in S:f(x)\ge0\qquad(f\in K[X_1,\dots,X_n],S\subseteq K^n)\\ &\text{(``$f$ is nonnegative on $S$'')}\\ \text{$f>0$ on $S$}&:\iff\forall x\in S:f(x)>0\qquad(f\in K[X_1,\dots,X_n],S\subseteq K^n)\\ &\text{(``$f$ is positive on $S$'').} \end{align*} \end{enumerate} \end{notterm} \begin{cor}[``intermediate value theorem for polynomials'']\label{intermediate} Let $R$ be a real closed field, $f\in R[X]$ and $a,b\in R$ such that $a\le b$ and $\sgn(f(a))\ne\sgn(f(b))$. Then there is $c\in[a,b]_R$ with $f(c)=0$. \end{cor} \begin{proof} WLOG $f$ is monic. By \ref{realfund}, all nonlinear monic irreducible polynomials in $R[X]$ are positive on $R$. Hence $f=g\prod_{i=1}^k(X-a_i)^{\al_i}$ with $k\in\N_0$, $a_i\in R$, $\al_i\in\N$, $a_1<\dots<a_k$ and some $g\in R[X]$ that is positive on $R$. 
On the sets $(-\infty,a_1)$, $(a_1,a_2)$, \dots, $(a_{k-1},a_k)$ and $(a_k,\infty)$, each $X-a_i$ and therefore $f$ has constant sign. Hence $a$ and $b$ cannot both lie in the same such set. Consequently, there is an $i\in\{1,\dots,k\}$ such that $a_i\in[a,b]$. Set $c:=a_i$. \end{proof} \begin{cor}[``Rolle's theorem for polynomials'']\label{rolle} Suppose $R$ is a real closed field, $f\in R[X]$ and $a,b\in R$ with $a<b$ and $f(a)=f(b)$. Then there exists a $c\in(a,b)_R$ such that $f'(c)=0$. \end{cor} \begin{proof} WLOG $f\ne0$, $f(a)=0=f(b)$ and $\nexists x\in(a,b):f(x)=0$. Write \[f=(X-a)^\al(X-b)^\be g\] for some $\al,\be\in\N$ and $g\in R[X]$ with $\forall x\in[a,b]:g(x)\ne0$. We find \begin{align*} f'&=(X-a)^\al\be(X-b)^{\be-1}g+\al(X-a)^{\al-1}(X-b)^\be g+(X-a)^\al(X-b)^\be g'\\ &=(X-a)^{\al-1}(X-b)^{\be-1}h \end{align*} where $h:=\be(X-a)g+\al(X-b)g+(X-a)(X-b)g'$. Hence it suffices to find $c\in(a,b)$ such that $h(c)=0$. We can apply the intermediate value theorem \ref{intermediate} because \[h(a)=\al(a-b)g(a)\quad\text{and}\quad h(b)=\be(b-a)g(b)\] and again by \ref{intermediate} we have $\sgn(g(a))=\sgn(g(b))\ne0$, so that $\sgn(h(a))\ne\sgn(h(b))$. \end{proof} \begin{cor}[``mean value theorem for polynomials'']\label{mean} Let $R$ be a real closed field, $f\in R[X]$ and $a,b\in R$ with $a<b$. Then there is some $c\in(a,b)_R$ satisfying $f'(c)=\frac{f(b)-f(a)}{b-a}$. \end{cor} \begin{proof} Setting $g:=(X-a)(f(b)-f(a))-(b-a)(f-f(a))$, we get $g(a)=0=g(b)$ and $g'=f(b)-f(a)-(b-a)f'$. Rolle's theorem \ref{rolle} yields $c\in(a,b)$ such that $g'(c)=0$. \end{proof} \begin{df}\label{monodef} \begin{enumerate}[(a)] \item Let $(M,\le_1)$ and $(N,\le_2)$ be ordered sets. A map $\ph\colon M\to N$ is called \emph{anti-monotonic} [$\to$ \ref{ordhom}] if \[a\le_1b\implies\ph(a)\ge_2\ph(b)\] for all $a,b\in M$.
\item If $(K,\le)$ is an ordered field, $f\in K[X]$ and $I\subseteq K$, then we say that $f$ is \alalal{monotonic}{injective}{anti-monotonic} on $I$ if $I\to K,\ x\mapsto f(x)$ is \alalal{monotonic}{injective}{anti-monotonic}. \end{enumerate} \end{df} \begin{cor}\label{dermono} Let $R$ be a real closed field, $f\in R[X]$ and $a,b\in R$. If $\malal{f'\ge0}{f'\le0}$ on $(a,b)$ \emph{[$\to$ \ref{intervals}(c)]}, then $f$ is \alal{}{anti-} monotonic on $[a,b]$. If $f'$ has no root on $(a,b)$, then $f$ is injective on $[a,b]$. \end{cor} \begin{proof} The statement is empty in case $a>b$, trivial in the case $a=b$ and it follows from the mean value theorem \ref{mean} in case $a<b$. \end{proof} \section{Descartes' rule of signs}\label{sec:descartes} \begin{notation}\label{degnot} Let $A$ be a commutative ring with $0\ne1$ and $d\in\R$. We denote \[A[X_1,\dots,X_n]_d:=\{f\in A[X_1,\dots,X_n]\mid\deg f\le d\}\] (where $\deg0:=-\infty$). \end{notation} \begin{pro}[``Taylor formula for polynomials''] Suppose $K$ is a field of characteristic $0$, $d\in\N_0$, $f\in K[X]_d$ and $a\in K$. Then \[f=\sum_{k=0}^d\frac{f^{(k)}(a)}{k!}(X-a)^k.\] \end{pro} \begin{proof} Since $K[X]\to K[X],\ p\mapsto p'$ commutes with the ring automorphism $K[X]\to K[X],\ p\mapsto p(X+a)$, we can WLOG suppose $a=0$. But then the claim follows from the definition of the (formal) derivative. \end{proof} \begin{lem}\label{sgnbounds} Suppose $(K,\le)$ is an ordered field, $k\in\N$, $c_1,\dots,c_k\in K^\times$, $\al_1,\dots,\al_k\in\N_0$, $\al_1<\ldots<\al_k$ and $f=\sum_{i=1}^kc_iX^{\al_i}$. 
\begin{enumerate}[(a)] \item $\sgn(f(x))=(\sgn x)^{\al_k}\sgn(c_k)$ for all $x\in K$ satisfying $|x|>\max\left\{1,\frac{|c_1|+\ldots+|c_{k-1}|}{|c_k|}\right\}$ \item $\sgn(f(x))=(\sgn x)^{\al_1}\sgn(c_1)$ for all $x\in K^\times$ satisfying $\frac1{|x|}>\max\left\{1,\frac{|c_2|+\ldots+|c_{k}|}{|c_1|}\right\}$ \end{enumerate} \end{lem} \begin{proof} \begin{enumerate}[(a)] \item For all $x\in K$ with $|x|>\max\left\{1,\frac{|c_1|+\ldots+|c_{k-1}|}{|c_k|}\right\}$, we have \[\left|\sum_{i=1}^{k-1}c_ix^{\al_i}\right|\overset{\ref{introabssgn}}\le \sum_{i=1}^{k-1}|c_i||x|^{\al_i}\overset{1\le|x|}\le \sum_{i=1}^{k-1}|c_i||x|^{\al_{k}-1}= |c_k|\left(\frac{\sum_{i=1}^{k-1}|c_i|}{|c_k|}\right)|x|^{\al_k-1}<|c_kx^{\al_k}|.\] \item For all $x\in K^\times$ with $\frac1{|x|}>\max\left\{1,\frac{|c_2|+\ldots+|c_{k}|}{|c_1|}\right\}$, we have \[\left|\sum_{i=2}^kc_ix^{\al_i}\right|\overset{\ref{introabssgn}}\le \sum_{i=2}^k|c_i||x|^{\al_i}\overset{|x|\le1}\le \sum_{i=2}^k|c_i||x|^{\al_1+1}= |c_1|\left(\frac{\sum_{i=2}^k|c_i|}{|c_1|}\right)|x|^{\al_1+1}<|c_1x^{\al_1}|.\] \end{enumerate} \end{proof} \begin{reminder}\label{multiplicity} Let $K$ be a field, $f\in K[X]$ and $a\in K$. Then \[ \mu(a,f):=\sup\{k\in\N_0\mid\text{$(X-a)^k$ divides $f$ in $K[X]$}\}\in\N_0\cup\{\infty\} \] is called the \emph{multiplicity} of $a$ in $f$. We have \[\mu(a,f)=\infty\iff f=0\] and \[\mu(a,f)\ge1\iff f(a)=0.\] We call $a$ a \emph{multiple} root of $f$ if $\mu(a,f)\ge2$ and we call it a $k$-fold root of $f$ ($k\in\N$) if $\mu(a,f)=k$. In case $\chara K=0$, one has \[\mu(a,f)=\sup\{k\in\N_0\mid f^{(0)}(a)=\ldots=f^{(k-1)}(a)=0\}\] as one can see easily. \end{reminder} \begin{df}\label{defst} Let $(K,\le)$ be an ordered field and $0\ne f\in K[X]$. 
\begin{enumerate}[(a)] \item The \emph{number of positive roots counted with multiplicity} of $f$ is \[\mu(f):=\sum_{a\in K_{>0}}\mu(a,f)\in\N_0.\] Writing $f=g\prod_{i=1}^m(X-a_i)$ with $a_1,\dots,a_m\in K_{>0}$ and $g\in K[X]$ with $g(x)\ne0$ for all $x\in K_{>0}$, we therefore have $\mu(f)=m$. \item Writing $f=\sum_{i=1}^kc_iX^{\al_i}$ with $c_1,\dots,c_k\in K^\times$ and $\al_1,\dots,\al_k\in\N_0$ such that \[\al_1<\ldots<\al_k,\] we define the \emph{number of sign changes in the coefficients} of $f$ \[\si(f):=\#\{i\in\{1,\dots,k-1\}\mid \sgn(c_i)\ne\sgn(c_{i+1})\}\in\N_0.\] \end{enumerate} \end{df} \begin{pro}\label{parity} Let $R$ be a real closed field and $f\in R[X]\setminus\{0\}$. Then $\mu(f)$ and $\si(f)$ have the same parity. \end{pro} \begin{proof} Write $f=\sum_{i=1}^kc_iX^{\al_i}=g\prod_{i=1}^m(X-a_i)$ with $c_1,\dots,c_k\in R^\times$, $\al_1,\dots,\al_k\in\N_0$, $a_1,\dots,a_m\in R_{>0}$ and $g\in R[X]$ such that $\al_1<\ldots<\al_k$ and $g(x)\ne0$ for all $x\in R_{>0}$. Since $R$ is real closed, WLOG $g(x)>0$ for all $x\in R_{>0}$ by the intermediate value theorem \ref{intermediate} (after possibly replacing $f$ by $-f$, which changes neither $\mu(f)$ nor $\si(f)$). But then by Lemma \ref{sgnbounds}, both the lowest and the highest coefficient of $g$ are positive. Now the claim follows from $\mu(f)=m$, $\sgn(c_1)=(-1)^m$ and $\sgn(c_k)=1$. \end{proof} \begin{lem}\label{loose1} Let $R$ be a real closed field and $f\in R[X]\setminus R$. Then $\mu(f)\le\mu(f')+1$ and $\si(f)\le\si(f')+1$. \end{lem} \begin{proof} The second statement is easy to prove. For the first statement, suppose $a_1,\dots,a_m\in R$ are the positive roots of $f$ and $a_1<\ldots<a_m$ (if there are none, the statement is trivial). Since $R$ is real closed, there exist roots $b_1,\dots,b_{m-1}\in R$ of $f'$ such that $a_1<b_1<a_2<\ldots<b_{m-1}<a_m$ by Rolle's Theorem \ref{rolle}. Now $\mu(f')=\sum_{a\in R_{>0}}\mu(a,f')\ge \sum_{i=1}^m\mu(a_i,f')+\sum_{i=1}^{m-1}\mu(b_i,f')\ge\sum_{i=1}^m\mu(a_i,f')+m-1= \sum_{i=1}^m(\mu(a_i,f)-1)+m-1=\sum_{i=1}^m\mu(a_i,f)-1=\mu(f)-1$.
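The quantities $\mu$ and $\si$ from \ref{defst} and the inequalities of Lemma \ref{loose1} are easy to experiment with on concrete polynomials; a small illustrative Python sketch (the polynomial $f=(X-1)(X-2)(X-3)$, with $\mu(f)=3$, is an arbitrary choice):

```python
def sign_changes(coeffs):
    """sigma(f): number of sign changes in the nonzero coefficients,
    listed from degree 0 upward."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def derivative(coeffs):
    """Coefficients of the formal derivative f'."""
    return [i * c for i, c in enumerate(coeffs)][1:]

# f = (X-1)(X-2)(X-3) = -6 + 11X - 6X^2 + X^3 has mu(f) = 3;
# f' = 11 - 12X + 3X^2 has its two roots 2 +- 1/sqrt(3) in (1,3), so mu(f') = 2.
f = [-6, 11, -6, 1]
print(sign_changes(f), sign_changes(derivative(f)))  # 3 2
```

Here both inequalities $\mu(f)\le\mu(f')+1$ and $\si(f)\le\si(f')+1$ hold with equality.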
\end{proof} \begin{rem}\label{loo} In the situation of Lemma \ref{loose1}, $\si(f')\le\si(f)$ holds trivially but $\mu(f')\le\mu(f)$ fails in general, as the example $f=(X-1)^2+1$ shows. \end{rem} \begin{thm}[Descartes' rule of signs]\label{descartes} Let $R$ be a real closed field. Then $\mu(f)\le\si(f)$ for all $f\in R[X]\setminus\{0\}$. \end{thm} \begin{proof} Induction on $d:=\deg f\in\N_0$. \smallskip \underline{$d=0$}\qquad $\mu(f)=0=\si(f)$ \smallskip \underline{$d-1\to d$\quad$(d\in\N)$}\qquad $\mu(f)\overset{\ref{loose1}}\le\mu(f')+1\overset{\text{induction}}{\underset{\text{hypothesis}}\le}\si(f')+1\overset{\ref{loo}}\le\si(f)+1$ and therefore $\mu(f)\le\si(f)$ by Proposition \ref{parity}. \end{proof} \begin{ex} Let $R$ be a real closed field and $f:=X^4-5X^3-21X^2+115X-150\in R[X]$. Then $\si(f)=3$ and therefore $\mu(f)\in\{1,3\}$ by \ref{descartes} and \ref{parity}. For $f(-X)=X^4+5X^3-21X^2-115X-150$, we have $\si(f(-X))=1$ and therefore $\mu(f(-X))=1$. One can verify that $\si((1+X)^{22}f)=1$, from which we get $\mu(f)=\mu((1+X)^{22}f)=1$. Hence $f$ has exactly two roots in $R$, namely two simple (i.e., $1$-fold [$\to$ \ref{multiplicity}]) ones, one positive and one negative. \end{ex} \begin{df}\label{defrr} Let $R$ be a real closed field. We call a polynomial $f\in R[X]$ \emph{real-rooted} if it has no root in $R(\ii)\setminus R$ [$\to$ \ref{fund}]. \end{df} \begin{pro}\label{rrr} Let $R$ be a real closed field and $f\in R[X]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f$ is real-rooted. \item There are $d\in\N_0$, $c\in R^\times$ and $a_1,\dots,a_d\in R$ such that $f=c\prod_{i=1}^d(X-a_i)$. \end{enumerate} \end{pro} \begin{proof} For (a)$\implies$(b) use the fundamental theorem \ref{fund} or \ref{realfund}. \end{proof} \begin{thm}\emph{[$\to$ \ref{loo}]}\label{derrr} Suppose $R$ is a real closed field and $f\in R[X]\setminus R$ is real-rooted. Then $f'$ is real-rooted and $\mu(f')\le\mu(f)$.
\end{thm} \begin{proof} Using \ref{rrr}, write $f=c\prod_{i=1}^n(X-a_i)^{\al_i}$ with $c,a_1,\dots,a_n\in R$ and $\al_1,\dots,\al_n\in\N$ such that $c\ne0$ and \[a_1<\ldots<a_n.\] Since $R$ is real closed, there exist roots $b_1,\dots,b_{n-1}\in R$ of $f'$ such that \[a_1<b_1<a_2<\ldots<b_{n-1}<a_n\] by Rolle's Theorem \ref{rolle}. We have $\mu(a_i,f)=\al_i$ and therefore \[\mu(a_i,f')=\al_i-1\] for all $i\in\{1,\dots,n\}$. It follows that \begin{align*} \deg(f')&\ge\sum_{i=1}^n\mu(a_i,f')+\sum_{i=1}^{n-1}\mu(b_i,f')\ge\sum_{i=1}^n\mu(a_i,f')+n-1\\ &=\sum_{i=1}^n(\al_i-1)+n-1=\deg(f)-1=\deg(f'), \end{align*} whence \[\deg(f')=\sum_{i=1}^n\mu(a_i,f')+\sum_{i=1}^{n-1}\mu(b_i,f')\] and \[\mu(b_i,f')=1\] for all $i\in\{1,\dots,n-1\}$. It follows that \[\{x\in R(\ii)\mid f'(x)=0\}\subseteq\{a_1,b_1,a_2,\dots,b_{n-1},a_n\}\subseteq R,\] in particular $f'$ is real-rooted. Choose $k\in\{1,\dots,n+1\}$ such that $a_k,\dots,a_n$ are the positive roots of $f$. Then \[\{x\in R\mid f'(x)=0,x>0\}\subseteq\begin{cases}\{b_{k-1},a_k,\dots,b_{n-1},a_n\}&\text{if }k\ge2\\\{a_1,b_1,\dots,b_{n-1},a_n\} &\text{if }k=1\end{cases}.\] If $k\ge2$, then \[\mu(f')\le\sum_{i=k}^n(\underbrace{\mu(b_{i-1},f')}_{=1}+\underbrace{\mu(a_i,f')}_{=\mu(a_i,f)-1})=\mu(f).\] If $k=1$, then one sees similarly that $\mu(f')=\mu(f)-1\le\mu(f)$. \end{proof} \begin{thm}[Descartes' rule of signs for real-rooted polynomials]\label{descartesrr} Let $R$ be a real closed field. Then $\mu(f)=\si(f)$ for all real-rooted $f\in R[X]$. \end{thm} \begin{proof} By Theorem \ref{descartes}, it is enough to show $\mu(f)\ge\si(f)$ for all real-rooted $f\in R[X]$ by induction on $d:=\deg f\in\N_0$. 
\smallskip \underline{$d=0$}\qquad $\mu(f)=0=\si(f)$ \smallskip \underline{$d-1\to d$\quad$(d\in\N)$}\qquad $\mu(f)\overset{\ref{derrr}}\ge\mu(f') \overset{ \substack{\text{induction}\\{\text{hypothesis}}}}{\underset{\ref{derrr}} \ge}\si(f')\overset{\ref{loose1}}\ge\si(f)-1$ and therefore $\mu(f)\ge\si(f)$ by Proposition \ref{parity}. \end{proof} \begin{ex}\label{symmex} \begin{multline*} \det\begin{pmatrix}1-X&0&1\\0&-2-X&1\\1&1&-X\end{pmatrix}=(1-X)(2+X)X+2+X+X-1\\ =(2+X-2X-X^2)X+2X+1=-X^3-X^2+4X+1\in\R[X] \end{multline*} is real-rooted since it is the characteristic polynomial of a symmetric matrix. By Descartes' rule \ref{descartesrr}, it has exactly one positive root. \end{ex} \section{Counting real zeros with Hermite's method}\label{sec:hermite} \begin{reminder}\label{longremi} \begin{enumerate}[(a)] \item Let $A$ be a commutative ring with $0\ne1$ and $f\in A[X_1,\dots,X_n]$. Then $f$ is called \emph{homogeneous} if $f$ is an $A$-linear combination of monomials of the same degree. Moreover, $f$ is called a \emph{$k$-form} ($k\in\N_0$) if $f$ is an $A$-linear combination of monomials of degree $k$ (i.e., if $f=0$ or $f$ is homogeneous of degree $k$). One often says \emph{linear form} instead of $1$-form and \emph{quadratic form} instead of $2$-form. \item If $K$ is a field, one can identify the $K$-vector subspace of $K[X_1,\dots,X_n]$ consisting of the \alal{linear}{quadratic} forms introduced in (a) via the isomorphism $f\mapsto(x\mapsto f(x))$ with the $K$-vector space $\malal{(K^n)^*}{Q(K^n)}$ introduced in linear algebra. Hence the notion of a linear or quadratic form introduced in (a) differs only insignificantly from the corresponding notion from linear algebra. \item Let $A$ be a set and $M=(a_{ij})_{\substack{1\le i\le m\\1\le j\le n}}\in A^{m\times n}$ a matrix. Then $M^T:=(a_{ij})_{\substack{1\le j\le n\\1\le i\le m}}\in A^{n\times m}$ is called the \emph{transpose} of $M$.
The elements of $SA^{n\times n}:=\{M\in A^{n\times n}\mid M=M^T\}$ are called \emph{symmetric} matrices. \item Let $K$ be a field. Then $(a_1,\dots,a_n)\mapsto a_1X_1+\ldots+a_nX_n$ ($a_i\in K$) defines an isomorphism between $K^{1\times n}\cong K^n$ and the $K$-vector space of the linear forms in $K[X_1,\dots,X_n]$. If $\chara K\ne2$, then $(a_{ij})_{1\le i,j\le n}\mapsto\sum_{i,j=1}^na_{ij}X_iX_j$ ($a_{ij}\in K$) defines an isomorphism between $SK^{n\times n}$ and the $K$-vector space of the quadratic forms in $K[X_1,\dots,X_n]$. If $f\in K[X_1,\dots,X_n]$ is a linear or quadratic form, then we call the preimage $M(f)$ of $f$ under the respective isomorphism the \emph{representing matrix} of $f$. This is the representing matrix of $f$ in the sense of linear algebra with respect to the canonical bases. \item Suppose $K$ is a field satisfying $\chara K\ne2$, $q\in K[X_1,\dots,X_n]$ a quadratic form, $\ell_1,\dots,\ell_m\in K[X_1,\dots,X_n]$ linear forms and $\la_1,\dots,\la_m\in K$. Then \[q=\sum_{k=1}^m\la_k\ell_k^2\iff M(q)=P^T \begin{pmatrix}~ \begin{tikzpicture}[inner sep=0] \node (a1) {$\la_1$}; \node (an) at (1.5,-1.5) [anchor=north west] {$\la_m$}; \node[scale=3.2] at (1.3,-0.4) {$0$}; \node[scale=3.2] at (0.2,-1.2) {$0$}; \draw[loosely dotted,very thick,dash phase=3pt] (a1)--(an); \end{tikzpicture} \end{pmatrix} P \] where \[P:=\begin{pmatrix}M(\ell_1)\\\vdots\\M(\ell_m)\end{pmatrix}\in K^{m\times n}.\] Here $P$ is invertible if and only if $m=n$ and $\ell_1,\dots,\ell_m$ are linearly independent. \item Let $K$ be a field satisfying $\chara K\ne2$ and $q\in K[X_1,\dots,X_n]$ a quadratic form. One can \emph{easily} calculate linearly independent linear forms $\ell_1,\dots,\ell_m\in K[X_1,\dots,X_n]$ and $\la_1,\dots,\la_m\in K$ such that $q=\sum_{k=1}^m\la_k\ell_k^2$. 
Indeed, one can write \[X_1^2+a_2X_1X_2+\dots+a_nX_1X_n\qquad(a_i\in K)\] as $\Big(\underbrace{X_1+\frac{a_2}2X_2+\dots+\frac{a_n}2X_n}_{\ell_1}\Big)^2- \underbrace{\left(\frac{a_2}2X_2+\dots+\frac{a_n}2X_n\right)^2}_{\in K[X_2,\dots,X_n]}$ and \[X_1X_2+a_3X_1X_3+\ldots+a_nX_1X_n+b_3X_2X_3+\ldots+b_nX_2X_n\] as \begin{multline*} \big(\underbrace{X_1+b_3X_3+\ldots+b_nX_n}_{h_1}\big)\big(\underbrace{X_2+a_3X_3+\ldots+a_nX_n}_{h_2}\big)\\ - \underbrace{\left(a_3X_3+\ldots+a_nX_n\right)\left(b_3X_3+\ldots+b_nX_n\right)}_{\in K[X_3,\dots,X_n]} \end{multline*} where $h_1h_2=\Big(\underbrace{\frac{h_1+h_2}2}_{\ell_1}\Big)^2-\Big(\underbrace{\frac{h_1-h_2}2}_{\ell_2}\Big)^2$ [$\to$ \ref{diffsquare}]. In this way one can in each step place one or two variables in one or two squares and the arising linear forms are obviously linearly independent. Consider $q:=2X_1X_2+2X_1X_3+2X_2X_3+2X_3X_4$ as an example: \begin{align*} q&:=2(X_1X_2+X_1X_3+X_2X_3)+2X_3X_4\\ &=2((\underbrace{X_1+X_3}_{h_1})(\underbrace{X_2+X_3}_{h_2}))-2X_3^2+2X_3X_4\\ &=\underbrace{\frac12}_{\la_1=-\la_2}((\underbrace{h_1+h_2}_{\ell_1})^2-(\underbrace{h_1-h_2}_{\ell_2})^2) \underbrace{-2}_{\la_3}\Big(\underbrace{X_3-\frac12X_4}_{\ell_3}\Big)^2+\underbrace{\frac24}_{\la_4}{\underbrace{X_4}_{\ell_4}}^2. \end{align*} Hence $q=\sum_{k=1}^4\la_k\ell_k^2=\frac12(X_1+X_2+2X_3)^2-\frac12(X_1-X_2)^2-2(X_3-\frac12X_4)^2+\frac12X_4^2$ and by (e) \[\begin{pmatrix}0&1&1&0\\1&0&1&0\\1&1&0&1\\0&0&1&0\end{pmatrix}=P^TDP\] where \[P:=\begin{pmatrix}1&1&2&0\\1&-1&0&0\\0&0&1&-\frac12\\0&0&0&1\end{pmatrix}\qquad\text{and}\qquad D:=\begin{pmatrix}\frac12&0&0&0\\0&-\frac12&0&0\\0&0&-2&0\\0&0&0&\frac12\end{pmatrix}. \] \item Translating (f) into the language of matrices, one obtains for each field $K$ with $\chara K\ne2$ and each $M\in SK^{n\times n}$ the following: One can \emph{easily} find a $P\in\GL_n(K)=(K^{n\times n})^\times$ and a diagonal matrix $D\in K^{n\times n}$ such that $M=\underline{\underline{P^T}}DP$. 
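The matrix identity obtained in the example can be verified mechanically with exact rational arithmetic; a minimal Python check (the helper functions `transpose` and `matmul` are ad-hoc sketches, not a library interface):

```python
from fractions import Fraction as F

def transpose(A):
    """Transpose of a matrix given as a list of rows."""
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    """Plain matrix product over the rationals."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# M(q) for q = 2 X1 X2 + 2 X1 X3 + 2 X2 X3 + 2 X3 X4, and the P, D found above.
M = [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
P = [[1, 1, 2, 0], [1, -1, 0, 0], [0, 0, 1, F(-1, 2)], [0, 0, 0, 1]]
D = [[F(1, 2), 0, 0, 0], [0, F(-1, 2), 0, 0], [0, 0, -2, 0], [0, 0, 0, F(1, 2)]]

print(matmul(transpose(P), matmul(D, P)) == M)  # True
```

Working with `Fraction` entries makes the comparison exact, so the check really confirms $M(q)=P^TDP$ and not merely a floating-point approximation of it.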
This is the diagonalization of $M$ as a quadratic form, which is much simpler than the diagonalization of $M$ as an endomorphism, where one wants to reach $M=\underline{\underline{P^{-1}}}DP$ (in case $K=\R$ perhaps even with $P^{-1}=P^T$). \item Let $K$ be a Euclidean field [$\to$ \ref{dfeuclid}] and $q\in K[X_1,\dots,X_n]$ a quadratic form. According to (f), one can then \emph{easily} compute \emph{linearly independent} linear forms \[\ell_1,\dots,\ell_s,\ell_{s+1},\dots,\ell_{s+t}\in K[X_1,\dots,X_n]\] satisfying $q=\sum_{i=1}^s\ell_i^2-\sum_{j=1}^t\ell_{s+j}^2$. By completing $\ell_1,\dots,\ell_{s+t}$ to a basis $\ell_1,\dots,\ell_n$ of the vector space of all linear forms in $K[X_1,\dots,X_n]$ and by writing $q=1\cdot\sum_{i=1}^s\ell_i^2+(-1)\sum_{j=1}^t\ell_{s+j}^2+0\cdot\sum_{k=s+t+1}^n\ell_k^2$, one sees for the \emph{rank} $\rk(q):=\rk M(q)$ of $q$ that $\rk(q)\overset{(e)}=s+t$. We define the \emph{signature} of $q$ as $\sg(q):=s-t$. This is well-defined by \emph{Sylvester's law of inertia}: If $\ell_1',\dots,\ell_{s'}',\ell_{s'+1}',\dots,\ell_{s'+t'}'\in K[X_1,\dots,X_n]$ are other linearly independent linear forms satisfying $q=\sum_{i=1}^{s'}\ell_i'^2-\sum_{j=1}^{t'}\ell_{s'+j}'^2$, then $s'+t'=\rk(q)=s+t$ and one sees again by completing to a basis and (e) that there are subspaces $U$, $W$, $U'$, $W'$ of $K^n$ such that $q(U)\subseteq K_{\ge0}$, $\dim U=n-t$, $q(W\setminus\{0\})\subseteq K_{<0}$, $\dim W=t$, $q(U')\subseteq K_{\ge0}$, $\dim U'=n-t'$, $q(W'\setminus\{0\})\subseteq K_{<0}$, $\dim W'=t'$. One deduces $U\cap W'=\{0\}$ and $U'\cap W=\{0\}$, whence $(n-t)+t'\le n$ and $(n-t')+t\le n$. Therefore $t=t'$ and thus $s=s'$. \item Let $K$ be a field and $f=X^d+a_{d-1}X^{d-1}+\ldots+a_0\in K[X]$ with $d\in\N_0$ and $a_i\in K$.
The \emph{companion matrix} $C_f$ of $f$ is the representing matrix of the $K$-vector space endomorphism \[K[X]/(f)\to K[X]/(f),\ \overline p\mapsto\overline{Xp}\qquad(p\in K[X])\] with respect to the basis $\overline 1,\dots,\overline{X^{d-1}}$, i.e., \[C_f= \begin{tikzpicture}[loosely dotted,thick,baseline] \matrix (m) [matrix of math nodes,nodes in empty cells,right delimiter=),left delimiter=(,column sep=1em]{ 0&0&&&&0&-a_0\\ 1&0&&&& &-a_1\\ 0&1&&&& &-a_2\\ 0&0\\ \\ & &&&&0\\ 0&0&&&0&1&-a_{d-1}\\ } ; \draw (m-1-2)-- (m-1-6); \draw (m-2-2)-- (m-6-6); \draw (m-3-2)-- (m-7-6); \draw (m-4-2)-- (m-7-5); \draw (m-1-6)-- (m-6-6); \draw (m-3-7)-- (m-7-7); \draw (m-4-1)-- (m-7-1); \draw (m-4-2)-- (m-7-2) -- (m-7-5); \end{tikzpicture}\in K^{d\times d}. \] One sees easily that $f$ is the minimal polynomial and therefore for degree reasons, up to a sign, also the characteristic polynomial of $C_f$. Now suppose furthermore that $f$ splits into linear factors, i.e., \[f=\prod_{k=1}^m(X-x_k)^{\al_k}\] for some $m\in\N_0$, $\al_1,\ldots,\al_m\in\N$ and $x_1,\dots,x_m\in K$ (here the $x_i$ do not yet have to be pairwise distinct so that one could take $\al_k=1$ but to avoid a confusing change of notation in view of the proof of Theorem \ref{hermite1}, we allow here and in Proposition \ref{herm} that $\al_k\in\N$). Then $C_f$ is similar to a triangular matrix with diagonal entries \[\underbrace{x_1,\dots,x_1}_{\al_1},\quad\underbrace{x_2,\dots,x_2}_{\al_2},\quad\dots\quad,\quad\underbrace{x_m,\dots,x_m}_{\al_m}.\] Then $C_f^i$ is for every $i\in\N_0$ similar to a triangular matrix whose diagonal entries are \[\underbrace{x_1^i,\dots,x_1^i}_{\al_1},\quad\underbrace{x_2^i,\dots,x_2^i}_{\al_2},\quad\dots\quad,\quad\underbrace{x_m^i,\dots,x_m^i}_{\al_m}.\] In particular, we have $\tr(C_f^i)=\sum_{k=1}^m\al_kx_k^i$ for all $i\in\N_0$ and consequently \[\tr(g(C_f))=\sum_{k=1}^m\al_kg(x_k)\] for all $g\in K[X]$. 
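To make (i) concrete, here is a small sanity check added in this revision (not part of the original notes): a Python sketch that builds $C_f$ for $f=(X-1)^2(X-2)=X^3-4X^2+5X-2$ and confirms $\tr(C_f^i)=2\cdot1^i+1\cdot2^i$ for the first few $i$.

```python
# f = (X-1)^2 (X-2) = X^3 - 4X^2 + 5X - 2, so (a0, a1, a2) = (-2, 5, -4)
a = [-2, 5, -4]
d = len(a)

# Companion matrix C_f: ones on the subdiagonal, last column -a_i
C = [[0] * d for _ in range(d)]
for i in range(1, d):
    C[i][i - 1] = 1
for i in range(d):
    C[i][d - 1] = -a[i]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

def trace(A):
    return sum(A[i][i] for i in range(d))

# tr(C_f^i) = sum_k alpha_k x_k^i, here alpha = (2, 1) for the roots x = (1, 2)
power = [[int(i == j) for j in range(d)] for i in range(d)]  # identity = C_f^0
for i in range(4):
    assert trace(power) == 2 * 1**i + 1 * 2**i
    power = matmul(power, C)
```

The assertions reproduce the power sums $3,4,6,10$ for $i=0,\dots,3$, matching the triangular-matrix argument above.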
\item If $K$ is a field and $x_1,\dots,x_m\in K$ are pairwise distinct, then the \emph{Vandermonde} matrix \[ \begin{pmatrix} 1&x_1&\dots&x_1^{m-1}\\ \vdots&\vdots&&\vdots\\ 1&x_m&\dots&x_m^{m-1} \end{pmatrix}\in K^{m\times m} \] is invertible since it is the representing matrix of the injective and therefore bijective linear map \[K[X]_{m-1}\to K^m,\ p\mapsto\begin{pmatrix}p(x_1)\\\vdots\\p(x_m)\end{pmatrix}\] [$\to$ \ref{degnot}] with respect to the canonical bases. \item Let $K$ be a field and let $x_1,\dots,x_m\in K$ be pairwise distinct. Furthermore, let $d\in\N_0$ with $m\le d$. Consider for $k\in\{1,\dots,m\}$ the linear forms $\ell_k:=\sum_{i=1}^dx_k^{i-1}T_i\in K[T_1,\dots,T_d]$. Then $\ell_1,\dots,\ell_m$ are linearly independent. Indeed, because of (d) this is equivalent to the linear independence of the vectors $(x_k^0,\dots,x_k^{d-1})$ ($k\in\{1,\dots,m\}$) in $K^d$. But already the truncated vectors $(x_k^0,\dots,x_k^{m-1})$ ($k\in\{1,\dots,m\}$) are linearly independent by (j). \end{enumerate} \end{reminder} \begin{df}\label{hermite} Let $K$ be a field and $f,g\in K[X]$ where $f$ is monic of degree $d$. Then the quadratic form \[H(f,g):=\sum_{i,j=1}^d\tr(g(C_f)C_f^{i+j-2})T_iT_j\in K[T_1,\dots,T_d]\] is called the \emph{Hermite form} of $f$ with respect to $g$. The quadratic form $H(f):=H(f,1)$ is simply called the Hermite form of $f$. \end{df} \begin{rem}\label{hankel} Let $K$ be a field with $\chara K\ne2$ and let $f,g\in K[X]$ where $f$ is monic of degree $d$. Then $M(H(f,g))$ [$\to$ \ref{longremi}(d)] is called the \emph{Hermite matrix of $f$ with respect to $g$}. 
This is a Hankel matrix, i.e., of the form \[\begin{pmatrix} \begin{tikzpicture}[scale=0.2,thick] { \draw(6+0.05,1+0.05)--(6-0.05,1-0.05); \draw(6+0.05,2+0.05)--(5-0.05,1-0.05); \draw(6+0.05,3+0.05)--(4-0.05,1-0.05); \draw(6+0.05,4+0.05)--(3-0.05,1-0.05); \draw(6+0.05,5+0.05)--(2-0.05,1-0.05); \draw(4+0.05,7+0.05)--(0-0.05,3-0.05); \draw(3+0.05,7+0.05)--(0-0.05,4-0.05); \draw(2+0.05,7+0.05)--(0-0.05,5-0.05); \draw(1+0.05,7+0.05)--(0-0.05,6-0.05); \draw(0+0.05,7+0.05)--(0-0.05,7-0.05); \draw[thick,dotted] (2+0.2,5-0.2)--(4-0.2,3+0.2); } \end{tikzpicture} \end{pmatrix}. \] Furthermore, $M(H(f))$ is called the \emph{Hermite matrix of $f$}. \end{rem} \begin{pro}\label{herm} Let $K$ be a field and $f,g\in K[X]$. Suppose $x_1,\dots,x_m\in K$ and $\al_1,\dots,\al_m\in\N_0$ such that $f=\prod_{k=1}^m(X-x_k)^{\al_k}$ and $d:=\deg f$. Then \[H(f,g)=\sum_{i,j=1}^d\left(\sum_{k=1}^m\al_kg(x_k)x_k^{i+j-2}\right)T_iT_j=\sum_{k=1}^m\al_kg(x_k)\left(\sum_{i=1}^dx_k^{i-1}T_i\right)^2.\] \end{pro} \begin{proof} \ref{hermite} and \ref{longremi}(i). \end{proof} \begin{thm}[Counting roots with one side condition]\label{hermite1} Let $R$ be a real closed field, $C:=R(\ii)$, $f,g\in R[X]$ and $f$ monic. Then \begin{align*} \rk H(f,g)&=\#\{x\in C\mid f(x)=0,g(x)\ne0\}\qquad\text{and}\\ \sg H(f,g)&=\#\{x\in R\mid f(x)=0,g(x)>0\}\\ &\,-\#\{x\in R\mid f(x)=0,g(x)<0\}. \end{align*} \end{thm} \begin{proof} Denote by $p\mapsto p^*$ again the ring automorphism of $C[X]$ with $x^*=x$ for all $x\in R$, $\ii^*=-\ii$ and $X^*=X$. Using the fundamental theorem of algebra \ref{realfund} and this automorphism, we can write \[f=\prod_{k=1}^m(X-x_k)^{\al_k}\prod_{t=1}^n(X-z_t)^{\be_t}\prod_{t=1}^n(X-z_t^*)^{\be_t}\] for some $m,n\in\N_0$ $\al_k,\be_t\in\N$, $x_k\in R$, $z_t\in C\setminus R$ and $x_1,\dots,x_m,z_1,\dots,z_n,z_1^*,\dots,z_n^*$ pairwise distinct. By renumbering the $z_t$, we can find $r\in\{0,\dots,n\}$ such that $g(z_1)\ne0,\dots,g(z_r)\ne0$ and $g(z_{r+1})=0,\dots,g(z_n)=0$. 
By \ref{herm}, \ref{longremi}(k) and \ref{cheuclid}(c), we obtain linear forms $\ell_1,\dots,\ell_m,g_1,\dots,g_r,h_1,\dots,h_r\in R[T_1,\dots,T_d]$, where $d:=\deg f$, such that \begin{align*} H(f,g)&=\sum_{k=1}^m\al_kg(x_k)\ell_k^2+\sum_{t=1}^r(g_t+\ii h_t)^2+\sum_{t=1}^r(g_t-\ii h_t)^2\\ &=\sum_{k=1}^m\al_kg(x_k)\ell_k^2+2\sum_{t=1}^rg_t^2-2\sum_{t=1}^rh_t^2 \end{align*} where $\ell_1,\dots,\ell_m,g_1+\ii h_1,g_1-\ii h_1,\dots,g_r+\ii h_r,g_r-\ii h_r\in C[T_1,\dots,T_d]$ are linearly independent. Due to $C(g_i+\ii h_i)+C(g_i-\ii h_i)=Cg_i+Ch_i$, the forms \[\ell_1,\dots,\ell_m,g_1,\dots,g_r,h_1,\dots,h_r\] are also linearly independent in $C[T_1,\dots,T_d]$ and therefore also in $R[T_1,\dots,T_d]$. It follows that \begin{align*} \rk H(f,g)&=\#\{k\in\{1,\dots,m\}\mid g(x_k)\ne0\}+2r\\ &=\#\{k\in\{1,\dots,m\}\mid g(x_k)\ne0\}+2\#\{t\in\{1,\dots,n\}\mid g(z_t)\ne0\}\\ &=\#\{x\in C\mid f(x)=0,g(x)\ne0\}\qquad\text{and}\\ \sg H(f,g)&=\#\{k\in\{1,\dots,m\}\mid g(x_k)>0\}-\#\{k\in\{1,\dots,m\}\mid g(x_k)<0\}+r-r\\ &=\#\{x\in R\mid f(x)=0,g(x)>0\}-\#\{x\in R\mid f(x)=0,g(x)<0\}. \end{align*} \end{proof} \begin{cor}[Counting roots without side conditions]\label{crwsc} Let $R$ be a real closed field, $C:=R(\ii)$ and suppose $f\in R[X]$ is monic. Then \begin{align*} \rk H(f)&=\#\{x\in C\mid f(x)=0\}\qquad\text{and}\\ \sg H(f)&=\#\{x\in R\mid f(x)=0\}. \end{align*} \end{cor} \begin{cor}[Counting roots with several side conditions]\label{several} Let $R$ be a real closed field, $m\in\N_0$, $f,g_1,\dots,g_m\in R[X]$ and $f$ monic.
Then \[ \frac1{2^m}\sum_{\al\in\{1,2\}^m}\sg H(f,g_1^{\al_1}\dots g_m^{\al_m})=\#\{x\in R\mid f(x)=0,g_1(x)>0,\dots,g_m(x)>0\} \] \end{cor} \begin{proof} The left hand side equals \[ \frac1{2^m}\sum_{\al\in\{1,2\}^m} \sum_{\substack{x\in R\\f(x)=0}}\sgn((g_1^{\al_1}\dots g_m^{\al_m})(x))\\ =\frac1{2^m}\sum_{\substack{x\in R\\f(x)=0}}\prod_{k=1}^m (\underbrace{\sgn(g_k(x))+(\sgn(g_k(x)))^2}_{\textstyle=\begin{cases}0&\text{if }g_k(x)\le0\\2&\text{if }g_k(x)>0\end{cases}}). \] \end{proof} \section{The real closure} \begin{df}\label{dfrealclosure} Let $(K,P)$ be an ordered field. An extension field $R$ of $K$ is called a \emph{real closure} of $(K,P)$ if $R$ is real closed, $R|K$ is algebraic and the order of $R$ [$\to$ \ref{convention}, \ref{dfeuclid}] is an extension of $P$ [$\to$ \ref{oexf}]. \end{df} \begin{pro}\label{maxordered} Let $(R,P)$ be an ordered field. Then $R$ is real closed if and only if there is no ordered extension field $(L,Q)$ of $(R,P)$ such that $L\ne R$ and $L|R$ is algebraic. \end{pro} \begin{proof} One direction follows from \ref{rcchar}(c). Conversely, suppose that every ordered extension field $(L,Q)$ of $(R,P)$ with $L|R$ algebraic satisfies $L=R$. To show: \begin{enumerate}[(a)] \item $R$ is Euclidean. \item Every polynomial of odd degree from $R[X]$ has a root in $R$. \end{enumerate} For (a), we show $P=R^2$. To this end, let $a\in P$. By \ref{extend2}, we can extend $P$ to $R(\sqrt a)$. Due to the hypothesis, this implies $R(\sqrt a)=R$ and therefore $a=(\sqrt a)^2\in R^2$. To show (b), let $f\in R[X]$ be of odd degree. Choose in $R[X]$ an irreducible divisor $g$ of $f$ of odd degree. Choose a root $x$ of $g$ in some extension field of $R$. Then $R(x)$ is an extension field of $R$ with odd $[R(x):R]$ so that $P$ can be extended to $R(x)$ by \ref{extendodd}. By hypothesis, this gives $R(x)=R$. In particular, $g$ and therefore $f$ has a root in $R$. \end{proof} \begin{thm}\label{existsrc} Every ordered field has a real closure. 
\end{thm} \begin{proof} Let $(K,P)$ be an ordered field. Consider the algebraic closure $\overline K$ of $K$ and the set \[M:=\{(L,Q)\mid L\text{ subfield of }\overline K, Q\text{ order of }L,(K,P)\text{ is an ordered subfield of }(L,Q)\}\] which is partially ordered by declaring \begin{align*} (L,Q)\preceq(L',Q')&:\iff(L,Q)\text{ is an ordered subfield of }(L',Q')\\ &\overset{\text{\ref{unaryrem}(b)}}\iff(L\subseteq L'\et Q\subseteq Q') \end{align*} for all $(L,Q),(L',Q')\in M$. Every chain in $M$ possesses an upper bound: the empty chain has $(K,P)$ as an upper bound, and a nonempty chain $C\subseteq M$ has \[\left(\bigcup\{L\mid(L,Q)\in C\},\bigcup\{Q\mid(L,Q)\in C\}\right)\in M\] as an upper bound. By Zorn's lemma, $M$ possesses a maximal element $(R,Q)$. Of course, $\overline K$ is also the algebraic closure of $R$, and therefore each algebraic extension of $R$ is (up to $R$-isomorphism) an intermediate field of $\overline K|R$. By \ref{maxordered}, the maximality of $(R,Q)$ in $M$ means precisely that $R$ is real closed. Because of $(R,Q)\in M$, the field extension $R|K$ is algebraic and the order $Q$ is an extension of $P$. \end{proof} \begin{lem}\label{sameno} Let $(K,P)$ be an \emph{ordered} subfield of the real closed fields $R$ and $R'$ [$\to$~\ref{convention}] and $f\in K[X]$. Then $f$ has the same number of roots in both $R$ and $R'$. \end{lem} \begin{proof} WLOG $f$ is monic. By \ref{crwsc}, the number in question equals the signature of $H(f)$, which can already be calculated over $(K,P)$ [$\to$ \ref{longremi}(f)(h)]. \end{proof} \begin{thm}\label{unic} Let $(K,P)$ be an ordered subfield of $(L,Q)$ such that $L|K$ is algebraic. Let $\ph$ be a homomorphism of ordered fields from $(K,P)$ into a real closed field $R$. Then there is exactly one homomorphism $\ps$ of ordered fields from $(L,Q)$ to $R$ with $\ps|_K=\ph$. \end{thm} \begin{proof} Choose a real closure $R'$ of $(L,Q)$ according to \ref{existsrc}.
Existence: Using Zorn's lemma, one reduces easily to the case where $L|K$ is finite. Since $\ph\colon K\to\ph(K)\subseteq R$ is an isomorphism of ordered fields, we can suppose WLOG that $(K,P)$ is an ordered subfield of $R$ and $\ph=\id_K$. We denote the different $K$-homo\-morphisms from $L$ to $R$ by $\ps_1,\dots,\ps_m$ ($m\in\N_0$). Assume that none of these is a homomorphism of ordered fields from $(L,Q)$ to $R$ (for example if $m=0$). Then there are $b_1,\dots,b_m\in Q$ such that $\ps_1(b_1)\notin R^2,\dots, \ps_m(b_m)\notin R^2$. By the primitive element theorem there exists \[a\in L':=L(\sqrt{b_1},\dots,\sqrt{b_m})\overset{b_i\in Q\subseteq R'^2}\subseteq R'\] such that $L'=K(a)$. The minimal polynomial of $a$ over $K$ has by \ref{sameno} the same number of roots in $R'$ and $R$ and therefore in particular a root in $R$. Hence there is a $K$-homomorphism $\ps\colon L'\to R$. Choose $i\in\{1,\dots,m\}$ with $\ps|_L=\ps_i$ (in particular $m>0$). Then $\ps_i(b_i)=\ps(b_i)=(\ps(\sqrt{b_i}))^2\in R^2$ $\lightning$. Unicity: Let $a\in L$. Choose $f\in K[X]\setminus\{0\}$ with $f(a)=0$. Choose $a_1,\dots,a_m\in R'$ with $a_1<\ldots<a_m$ such that $\{x\in R'\mid f(x)=0\}=\{a_1,\dots,a_m\}$. Again WLOG $\ph|_K=\id_K$ and hence $(K,P)$ is an ordered subfield of $R$. By \ref{sameno} there are $b_1,\dots,b_m\in R$ such that $b_1<\ldots<b_m$ and $\{x\in R\mid f(x)=0\}=\{b_1,\dots,b_m\}$. Choose now $i\in\{1,\dots,m\}$ such that $a=a_i$. We show that each homomorphism $\ps$ of ordered fields from $(L,Q)$ to $R$ with $\ps|_K=\id$ satisfies $\ps(a)=b_i$. To this end, fix such a $\ps$. By the already proved existence statement, there is a homomorphism of ordered fields $\rh\colon R'\to R$ such that $\rh|_L=\ps$. Since $\rh$ is an embedding, we have $\{\rh(a_1),\dots,\rh(a_m)\}=\{b_1,\dots,b_m\}$ and by the monotonicity we even get $\rh(a_j)=b_j$ for all $j\in\{1,\dots,m\}$. We deduce $\ps(a)=\ps(a_i)=\rh(a_i)=b_i$. 
\end{proof} \begin{cor}\label{genauein} Let $R$ and $R'$ be real closures of the ordered field $(K,P)$. Then there is exactly one $K$-isomorphism from $R$ to $R'$. \end{cor} \begin{proof} The $K$-isomorphisms from $R$ to $R'$ are obviously exactly the isomorphisms of ordered fields from $R$ to $R'$ whose restriction to $K$ is the identity. For this reason, the claim follows easily from \ref{unic} (for the surjectivity in the existence part use either \ref{rcchar}(c) or the unicity of $K$-automorphisms of $R$ and of $R'$ [$\to$ \ref{unic}]). \end{proof} \begin{notterm}\label{theclosure} Because of \ref{genauein}, we speak of \emph{the} real closure $\overline{(K,P)}$ of $(K,P)$. By \ref{unic}, it contains (up to $K$-isomorphism) every ordered field extension $(L,Q)$ of $(K,P)$ with $L|K$ algebraic. \end{notterm} \begin{thm}\label{bijext} Suppose $(K,P)$ is an ordered field, $L|K$ an algebraic extension, $R$ a real closed field and $\ph$ a homomorphism of ordered fields from $(K,P)$ to $R$. Then \begin{align*} \{\ps\mid\ps\colon L\to R\text{ homomorphism},\ps|_K=\ph\}&\to\{Q\mid\text{$Q$ is an extension of $P$ to $L$}\}\\ \ps&\mapsto\ps^{-1}(R^2) \end{align*} is a bijection. \end{thm} \begin{proof} The well-definedness is easy to see. To verify the bijectivity, let $Q$ be an extension of $P$ to $L$. We have to show that there is exactly one homomorphism $\ps\colon L\to R$ with $\ps|_K=\ph$ fulfilling the condition $\ps^{-1}(R^2)=Q$, which is equivalent to $\ps$ being a homomorphism of ordered fields from $(L,Q)$ to $R$ since \begin{align*} \ps^{-1}(R^2)=Q&\iff\ps^{-1}(R^2\cap\ps(L))=Q\overset{\ps\colon L\to\ps(L)}{\underset{\text{bijective}}\iff} R^2\cap\ps(L)=\ps(Q)\\ &\overset{R^2\cap\ps(L)}{\underset{\text{order of $\ps(L)$}}\iff}\ps(Q)\subseteq R^2\cap\ps(L)\iff\ps(Q)\subseteq R^2. \end{align*} Hence we get the unicity and existence of $\ps$ from \ref{unic}.
\end{proof} \begin{cor} Suppose $(K,P)$ is an ordered field, $R:=\overline{(K,P)}$ and $L|K$ a finite extension. Let $a\in L$ with $L=K(a)$ and $f$ be the minimal polynomial of $a$ over $K$. Then \begin{align*} \{x\in R\mid f(x)=0\}&\to\{Q\mid\text{$Q$ is an extension of $P$ to $L$}\}\\ x&\mapsto\{g(a)\mid g\in K[X],g(x)\in R^2\} \end{align*} is a bijection. \end{cor} \begin{proof} By \ref{bijext} it is enough to see that \begin{align*} \{x\in R\mid f(x)=0\}&\to\{\ps\mid\text{$\ps\colon L\to R$ is a $K$-homomorphism}\}\\ x&\mapsto(g(a)\mapsto g(x))\qquad(g\in K[X]) \end{align*} is a bijection. This is easy to see. \end{proof} \begin{ex} Let $(K,P)$ be an ordered field with $2\notin K^2$. Denote by $\sqrt 2$ one of the two square roots of $2$ in the algebraic closure $\overline K$ of $K$ [$\to$ \ref{notremsqrt}(a)]. Then there are exactly $2$ orders of $K(\sqrt2)$ that extend $P$, namely the two induced by the field embeddings $K(\sqrt2)\hookrightarrow\overline{(K,P)}$ (in one of which $\sqrt2$ is positive and in one of which it is negative). In particular, this is true if $(K,P)$ is not Archimedean [$\to$ \ref{unaryrem}(d)] and in this case we cannot argue with $\R$ instead of $\overline{(K,P)}$ as we did in \ref{sqrt2}. \end{ex} \begin{pro}\label{relcl} Let $R$ be a real closed field and $K$ a subfield of $R$ that is (relatively) algebraically closed in $R$ (i.e., no element of $R\setminus K$ is algebraic over $K$). Then $K$ is real closed. \end{pro} \begin{proof} Apply the criterion from \ref{maxordered}: Every ordered extension field $(L,Q)$ of $(K,R^2\cap K)$ such that $L|K$ is algebraic is contained in $R$ up to $K$-isomorphy [$\to$ \ref{unic}, \ref{theclosure}] and therefore equal to $K$. \end{proof} \begin{ex}\label{ralg} The field $\R_{\text{alg}}:=\{x\in\R\mid\text{$x$ algebraic over $\Q$}\}$ of \emph{real algebraic numbers} is the algebraic closure of $\Q$ in $\R$. 
By \ref{relcl}, $\R_{\text{alg}}$ is real closed and therefore the real closure of $\Q$ [$\to$ \ref{convention}]. Hence $\R_{\text{alg}}$ is uniquely embeddable in every real closed field by \ref{unic}. In this sense, $\R_{\text{alg}}$ is the smallest real closed field. \end{ex} \section{Real quantifier elimination}\label{sec:qe} \begin{rem}\label{unionsection} Let $M$, $I$ and $J_i$ for each $i\in I$ be sets and suppose $A_{ij}\subseteq M$ for all $i\in I$ and $j\in J_i$. Defining the empty intersection as $M$ (that is $\bigcap_{i\in\emptyset}\ldots:=\bigcap\emptyset:=M$), one has \begin{align*} \bigcup_{i\in I}\bigcap_{j\in J_i}A_{ij}&=\bigcap_{(j_i)_{i\in I}\in\prod_{i\in I}J_i}\bigcup_{i\in I}A_{ij_i},\\ \bigcap_{i\in I}\bigcup_{j\in J_i}A_{ij}&=\bigcup_{(j_i)_{i\in I}\in\prod_{i\in I}J_i}\bigcap_{i\in I}A_{ij_i},\\ \complement\bigcup_{i\in I}\bigcap_{j\in J_i}A_{ij}&=\bigcap_{i\in I}\bigcup_{j\in J_i}\complement A_{ij}\qquad\text{and}\\ \complement\bigcap_{i\in I}\bigcup_{j\in J_i}A_{ij}&=\bigcup_{i\in I}\bigcap_{j\in J_i}\complement A_{ij} \end{align*} where the \emph{complement} of $A\subseteq M$ is given by $\complement A:=\complement_MA:=M\setminus A$. \end{rem} \begin{dfpro}\label{booleanalgebra} Let $M$ be a set and $\pow(M)$ its power set. \begin{enumerate}[\normalfont(a)] \item We call $\mathcal S\subseteq\pow(M)$ a \emph{Boolean algebra} on $M$ if \begin{itemize} \item $\emptyset\in\mathcal S$, \item $\forall S\in\mathcal S:\complement S\in\mathcal S$, \item $\forall S_1,S_2\in\mathcal S:S_1\cap S_2\in\mathcal S$ and \item $\forall S_1,S_2\in\mathcal S:S_1\cup S_2\in\mathcal S$. \end{itemize} \item Let $\mathcal G\subseteq\pow(M)$. Then the set of all finite \alal{unions}{intersections} of finite \alal{intersections}{unions} of elements of $\mathcal G$ and their complements (with $\bigcap\emptyset:=M$) is obviously the smallest Boolean algebra $\mathcal S$ on $M$ with $\mathcal G\subseteq\mathcal S$. 
It is called the Boolean algebra \emph{generated} by $\mathcal G$ (on $M$). Its elements are called the \emph{Boolean combinations} of elements of $\mathcal G$. \end{enumerate} \end{dfpro} \begin{dfrem}\label{introsemialg} In the sequel, unless stated otherwise, we let $(K,P)$ be an ordered field, for example $(K,P)=(\Q,\Q_{\ge0})$. Moreover, we let $\mathcal R$ be a set of real closed fields containing $(K,P)$ as an ordered subfield. For $n\in\N_0$, we set \[\mathcal R_n:=\{(R,x)\mid R\in\mathcal R,x\in R^n\}.\] Here we have $R^0=\{\emptyset\}=\{0\}$, and we identify $\mathcal R_0$ with $\mathcal R$. A Boolean combination of sets of the form \[\{(R,x)\in\mathcal R_n\mid p(x)\ge0\text{ (in $R$)}\}\qquad(p\in K[X_1,\dots,X_n])\] is called a \begin{itemize} \item \emph{$K$-semialgebraic set in $R^n$} if $\mathcal R=\{R\}$, and \item an \emph{$n$-ary $(K,P)$-semialgebraic class} if $\mathcal R$ is ``potentially very big'' (in any case big enough to contain all real closed ordered extension fields of $(K,P)$ that are currently in play). \end{itemize} We identify $K$-semialgebraic sets in $R^n$ with subsets of $R^n$. Thus these are simply the subsets of $R^n$ that can be defined by combining finitely many polynomial inequalities with coefficients in $K$ by the logical connectives ``not'', ``and'' and ``or''. A \emph{semialgebraic set} in $R^n$ is an $R$-semialgebraic set in $R^n$. A \emph{semialgebraic class} is a $\Q$-semialgebraic class. \end{dfrem} \begin{rem}\label{rcfclass} \begin{enumerate}[(a)] \item On a first reading, the reader might want to think of $\mathcal R=\{R\}$ or even of $\mathcal R=\{\R\}$ in order to have good geometric intuition. Initially one can therefore think of $(K,P)$-semialgebraic classes as $K$-semialgebraic sets. \item One can conceive of $\mathcal R$ as the ``set'' of all real closed ordered extension fields of $(K,P)$.
Unfortunately, this is not a set (otherwise Zorn's lemma would yield real closed fields having no proper real closed extension field, in contradiction to \ref{rtlfct} combined with \ref{existsrc}) but a proper \emph{class}. But we do not want to get into the formal notion of a class and instead adopt a naïve point of view in which sets and classes are synonymous and ``big'' sets often tend to be called classes. \item Whoever gets dizzy from (b) has several ways out: Our resort here is that $\mathcal R$ is an honest set that is at any one time sufficiently big (often $\#\mathcal R=1$ is enough and almost always $\#\mathcal R=2$ is enough). Alternatively, one could learn the subtle non-naïve handling of sets and classes. As a third option, one could work, instead of with $(K,P)$-semialgebraic classes, with formulas of first-order logic in the language of ordered fields with additional constants for the elements of $K$. The last two options are technically very involved. \end{enumerate} \end{rem} \begin{rem}\label{nothingorall} Obviously, $\emptyset$ and $\mathcal R$ are the only $0$-ary $(K,P)$-semialgebraic classes. Note that this relies heavily on the fact that $(K,P)$ is an ordered subfield of every $R\in\mathcal R$. \end{rem} \begin{pro}\label{sanf} Every $(K,P)$-semialgebraic class is of the form \[\bigcup_{i=1}^k\left\{(R,x)\in\mathcal R_n\mid f_i(x)=0,g_{i1}(x)>0,\dots,g_{im}(x)>0\right\}\] for some $n,k,m\in\N_0$, $f_i,g_{ij}\in K[X_1,\dots,X_n]$.
\end{pro} \begin{proof} By \ref{introsemialg} and \ref{booleanalgebra}(b), such a class is a finite union of classes of the form \begin{multline*} \left\{(R,x)\in\mathcal R_n\mid h_1(x)\ge0,\dots,h_s(x)\ge0,h_{s+1}(x)<0,\dots,h_{s+t}(x)<0\right\}\\ =\bigcup_{\de\in\{0,1\}^s}\left\{(R,x)\in\mathcal R_n\mid \begin{aligned} \sgn(h_1(x))=\de_1,\dots,\sgn(h_s(x))=\de_s,\\ -h_{s+1}(x)>0,\dots,-h_{s+t}(x)>0 \end{aligned} \right\}\\ =\bigcup_{\de\in\{0,1\}^s}\left\{(R,x)\in\mathcal R_n\mid \begin{aligned} \left(\sum_{\substack{i=1\\\de_i=0}}^sh_i^2\right)(x)=0, \operator{\mathrm{\&}}_{\substack{i=1\\\de_i=1}}^sh_i(x)>0,\\ -h_{s+1}(x)>0,\dots,-h_{s+t}(x)>0 \end{aligned} \right\} \end{multline*} for some $s,t\in\N_0$ and $h_i\in K[X_1,\dots,X_n]$. \end{proof} \begin{pro}\label{sapre} Let $m,n\in\N_0$, $h_1,\dots,h_m\in K[X_1,\dots,X_n]$ and $S\subseteq\mathcal R_m$ a $(K,P)$-semialgebraic class. Then $\{(R,x)\in\mathcal R_n\mid(R,(h_1(x),\dots,h_m(x)))\in S\}$ is a $(K,P)$-semialgebraic class. \end{pro} \begin{proof} By \ref{sanf}, write $S=\bigcup_{i=1}^k\{(R,y)\in\mathcal R_m\mid f_i(y)=0,g_{i1}(y)>0,\dots,g_{i\ell}(y)>0\}$ with $k,\ell\in\N_0$ and $f_i,g_{ij}\in K[Y_1,\dots,Y_m]$. Then \begin{multline*} \{(R,x)\in\mathcal R_n\mid(R,(h_1(x),\dots,h_m(x)))\in S\}\\ =\bigcup_{i=1}^k\left\{(R,x)\in\mathcal R_n\mid (f_i(h_1,\dots,h_m))(x)=0,\right.\qquad\qquad\qquad\qquad\qquad\ \;\\ \left.(g_{i1}(h_1,\dots,h_m))(x)>0,\dots, (g_{i\ell}(h_1,\dots,h_m))(x)>0\right\}. \end{multline*} \end{proof} \begin{cor}\label{preimagesa} Let $R$ be a real closed field. Preimages of semialgebraic subsets of $R^m$ under polynomial maps $R^n\to R^m$ are again semialgebraic in $R^n$. \end{cor} \begin{lem}\label{sigsa} For every $s\in\N_0$, \[\left\{(R,x)\in\mathcal R_{d+1}\mid\si\left(\sum_{i=0}^d x_iT^i\right)=s\text{ with respect to $R[T]$}\right\}\] is a semialgebraic class.
\end{lem} \begin{proof} The class in question equals \[\bigcup_{\substack{\de\in\{-1,0,1\}^{d+1}\\\si\left(\sum_{i=0}^d\de_iT^i\right)=s\text{ with respect to $\R[T]$}}} \left\{(R,x)\in\mathcal R_{d+1}\mid\sgn_R(x_0)=\de_0,\dots,\sgn_R(x_{d})=\de_d\right\}.\] \end{proof} \begin{rem}\label{advertisetarski} We will now need the \emph{simultaneous} diagonalization of a symmetric matrix as a quadratic form and as an endomorphism [$\to$ \ref{longremi}(g)]. The reader should know this over $\R$ from linear algebra, but we will now need it more generally over an arbitrary real closed field. Later in this chapter, we will provide methods from which it becomes immediately clear that, for each fixed matrix size, the class of all fields $R\in\mathcal R$ over which the corresponding statement is true is a $0$-ary semialgebraic class. Since the statement is true over $\R$, it must then by \ref{nothingorall} also hold true over every real closed field. In a similar way, we will soon be able to carry over a great many statements from $\R$ to all real closed fields. Unfortunately, we are not that far yet, and therefore we have to check whether the proof from linear algebra goes through over an arbitrary real closed field. Some of the proofs of the diagonalization in question use, however, genuine analysis instead of just the fundamental theorem of algebra. Since the whole of analysis is built on the completeness of $\R$ [$\to$ \ref{introduce-the-reals}], those proofs do not generalize without further ado. Thus we give a compact ad hoc proof. \end{rem} \begin{thm}\label{eucliddiag} Let $R$ be a real closed field and $M\in SR^{n\times n}$. Then there is some $P\in\GL_n(R)$ satisfying $P^TP=I_n$ such that $P^TMP$ is a diagonal matrix. \end{thm} \begin{proof} Call a symmetric bilinear form $V\times V\to R,\ (v,w)\mapsto\langle v,w\rangle$ on an $R$-vector space $V$ positive definite if $\langle v,v\rangle>0$ for all $v\in V\setminus\{0\}$.
Call an $R$-vector space together with a positive definite symmetric bilinear form a Euclidean $R$-vector space. Call an endomorphism $f$ of a Euclidean $R$-vector space $V$ self-adjoint if $\langle f(v),w\rangle=\langle v,f(w)\rangle$ for all $v,w\in V$. \medskip \textbf{Claim 1:} Let $V$ be a Euclidean $R$-vector space, $f\in\End(V)$ self-adjoint and $v$ an eigenvector of $f$. Then $U:=\{u\in V\mid\langle u,v\rangle=0\}$ is a subspace of $V$ with $v\notin U$ and $f(U)\subseteq U$. \smallskip \emph{Explanation.} Choose $\la\in R$ with $f(v)=\la v$ and let $u\in U$. Then $\langle f(u),v\rangle=\langle u,f(v)\rangle=\langle u,\la v\rangle=\la\langle u,v\rangle=\la\cdot0=0$. \medskip \textbf{Claim 2:} Let $V\ne\{0\}$ be a finite-dimensional Euclidean $R$-vector space and $f\in\End(V)$ self-adjoint. Then $f$ possesses an eigenvalue in $R$. \smallskip \emph{Explanation.} Assume $f$ has no eigenvalue. By Cayley-Hamilton and the fundamental theorem \ref{realfund}, it is easy to show that there are $a,b\in R$ with $b\ne0$ such that \[(f-a\id_V)^2+b^2\id_V\] has a non-trivial kernel. Since $f$ is self-adjoint, so is $g:=f-a\id_V$. Choose $v\in V\setminus\{0\}$ with $g^2(v)=-b^2v$. Then $0\le\langle g(v),g(v)\rangle=\langle g^2(v),v\rangle=\langle-b^2v,v\rangle =-b^2\langle v,v\rangle<0$. $\lightning$ \medskip \textbf{Claim 3:} Let $V$ be a finite-dimensional Euclidean $R$-vector space and $f\in\End(V)$ self-adjoint. Then there is an eigenbasis $v_1,\dots,v_n$ for $f$ with $(\langle v_i,v_j\rangle)_{1\le i,j\le n}=I_n$. \smallskip \emph{Explanation.} Use Claim 1, Claim 2 and induction on the dimension of $V$. \smallskip\noindent By virtue of $\langle x,y\rangle:=\sum_{i=1}^nx_iy_i$ ($x,y\in R^n$), $R^n$ is a Euclidean $R$-vector space and $f\colon R^n\to R^n,\ x\mapsto Mx$ is self-adjoint. By Claim 3, there is an eigenbasis $v_1,\dots,v_n$ for $f$ such that $(\langle v_i,v_j\rangle)_{1\le i,j\le n}=I_n$. Set $P:=(v_1\ldots v_n)\in\GL_n(R)$.
Then \[P^TP=\begin{pmatrix}v_1^T\\\vdots\\v_n^T\end{pmatrix}\begin{pmatrix}v_1&\ldots&v_n\end{pmatrix}=I_n\] and $P$ is the change-of-basis matrix from $(v_1,\dots,v_n)$ to the standard basis. It follows that $P^TMP=P^{-1}MP$ is the representing matrix of $f$ with respect to $(v_1,\dots,v_n)$. \end{proof} \begin{cor}[Determination of the signature using Descartes' rule of signs]\label{combining} Let $R$ be a real closed field, $q\in R[T_1,\dots,T_d]$ a quadratic form and $h:=\det(M(q)-XI_d)\in R[X]$ the characteristic polynomial of the representing matrix \emph{[$\to$ \ref{longremi}(d)]} of $q$. Then we have: \begin{enumerate}[\normalfont(a)] \item $h$ is real-rooted \emph{[$\to$ \ref{defrr}]} \item $\sg q=\mu(h)-\mu(h(-X))$ \emph{[$\to$ \ref{longremi}(h), \ref{defst}(a)]} \item $\sg q=\si(h)-\si(h(-X))$ \emph{[$\to$ \ref{defst}(b)]} \end{enumerate} \end{cor} \begin{proof} Using \ref{eucliddiag}, choose $P\in\GL_d(R)$ such that $P^TP=I_d$ and $P^TM(q)P$ is diagonal, say \[P^TM(q)P= \begin{pmatrix}~ \begin{tikzpicture}[inner sep=0] \node (a1) {$\la_1$}; \node (an) at (1.5,-1.5) [anchor=north west] {$\la_d$}; \node[scale=3.2] at (1.3,-0.4) {$0$}; \node[scale=3.2] at (0.2,-1.2) {$0$}; \draw[loosely dotted,very thick,dash phase=3pt] (a1)--(an); \end{tikzpicture} \end{pmatrix} \] with $\la_i\in R$. We have \begin{align*} h&=h\det(P^TP)\\ &=(\det(P^T))(\det(M(q)-XI_d))(\det P)\\ &=\det(P^TM(q)P-XP^TP)=\det \begin{pmatrix}~ \begin{tikzpicture}[inner sep=0] \node (a1) {$\la_1-X$}; \node (an) at (1.5,-1.5) [anchor=north west] {$\la_d-X$}; \node[scale=3.2] at (1.8,-0.4) {$0$}; \node[scale=3.2] at (0.2,-1.2) {$0$}; \draw[loosely dotted,very thick,dash phase=3pt] (a1)--(an); \end{tikzpicture} \end{pmatrix}=\prod_{i=1}^d(\la_i-X), \end{align*} from which (a) follows immediately. 
Because of \[M(q)=(P^T)^T \begin{pmatrix}~ \begin{tikzpicture}[inner sep=0] \node (a1) {$\la_1$}; \node (an) at (1.5,-1.5) [anchor=north west] {$\la_d$}; \node[scale=3.2] at (1.3,-0.4) {$0$}; \node[scale=3.2] at (0.2,-1.2) {$0$}; \draw[loosely dotted,very thick,dash phase=3pt] (a1)--(an); \end{tikzpicture} \end{pmatrix} P^T\] and $P^T\in\GL_d(R)$, it follows from \ref{longremi}(e) that \[ \sg q=\#\{i\in\{1,\dots,d\}\mid\la_i>0\}-\#\{i\in\{1,\dots,d\}\mid\la_i<0\}=\mu(h)-\mu(h(-X)), \] which proves (b). Finally, (c) follows from (a) and (b) due to the exactness of Descartes' rule of signs for real rooted polynomials [$\to$ \ref{descartesrr}]. \end{proof} \begin{rem} Combining \ref{combining} with \ref{several}, one can reduce the count of real roots of polynomials without multiplicity with side conditions by means of the Hermite method from §\ref{sec:hermite} to the count of roots of real-rooted polynomials with multiplicity by means of Descartes' rule from §\ref{sec:descartes}. \end{rem} \begin{lem}\label{elim1} Let $m,n,d\in\N_0$ and $f,g_1,\dots,g_m\in K[X_1,\dots,X_{n+1}]$. Then \begin{align*} \{(R,x)\in\mathcal R_n\mid&\deg f(x,X_{n+1})=d\quad\et\\ &\exists x_{n+1}\in R\colon(f(x,x_{n+1})=0\et g_1(x,x_{n+1})>0\et\ldots\et g_m(x,x_{n+1})>0)\} \end{align*} is a $(K,P)$-semialgebraic class. \end{lem} \begin{proof} Write $f=\sum_{i=0}^Dh_iX_{n+1}^i$ for some $D\in\N_0$, $D\ge d$ and $h_i\in K[X_1,\dots,X_n]$. WLOG $h_d\ne0$. Then \[f_0:=\sum_{i=0}^d\frac{h_i}{h_d}X_{n+1}^i\in K(X_1,\dots,X_n)[X_{n+1}]\] is monic of degree $d$. For every $\al\in\{1,2\}^m$, we consider also $g_1^{\al_1}\dotsm g_m^{\al_m}$ as a polynomial in $X_{n+1}$ with coefficients from the field $K(X_1,\dots,X_n)$ and set \[h_\al:=\det(M(H(f_0,g_1^{\al_1}\dotsm g_m^{\al_m}))-XI_d)\in K(X_1,\dots,X_n)[X].\] By construction [$\to$ \ref{longremi}(i), \ref{hermite}, \ref{hankel}], there is some $N\in\N$ such that \[h_d^Nh_\al\in K[X_1,\dots,X_n,X]\] for all $\al\in\{1,2\}^m$. 
Now the class from the claim can be written by \ref{several} as \begin{align*} \Bigg\{(R,x)\in\mathcal R_n\mid&h_D(x)=\ldots=h_{d+1}(x)=0\ne h_d(x)\quad\et\\ &\sum_{\al\in\{1,2\}^m}\sg H(f_0(x,X_{n+1}),(g_1^{\al_1}\dotsm g_m^{\al_m})(x,X_{n+1}))>0\Bigg\}. \end{align*} But \begin{multline*} \left\{(R,x)\in\mathcal R_n\mid h_d(x)\ne0\et \sum_{\al\in\{1,2\}^m}\sg H(f_0(x,X_{n+1}),(g_1^{\al_1}\dotsm g_m^{\al_m})(x,X_{n+1}))>0\right\}\\ \overset{\ref{combining}}{\underset{\tikz[baseline=-.75ex] \node[scale=0.8,shape=regular polygon, regular polygon sides=3, inner sep=0pt, draw, thick] {\textbf{!}};}=}\left\{(R,x)\in\mathcal R_n\mid h_d(x)\ne0\et \sum_{\al\in\{1,2\}^m}(\si(h_\al(x,X))-\si(h_\al(x,-X)))>0\right\}\\ =\bigcup_{\substack{(s_{\al})_{\al\in\{1,2\}^m},(t_{\al})_{\al\in\{1,2\}^m}\in\{0,\dots,d\}^{\{1,2\}^m}\\\sum_{\al\in\{1,2\}^m}(s_\al-t_\al)>0}} \bigcap_{\al\in\{1,2\}^m}\left\{(R,x)\in\mathcal R_n\mid \begin{aligned} &h_d(x)\ne0,\\ &\si((h_d^Nh_\al)(x,X))=s_\al,\\ &\si((h_d^Nh_\al)(x,-X))=t_\al \end{aligned}\right\} \end{multline*} is $(K,P)$-semialgebraic by \ref{sigsa} and \ref{sapre}. Here the warning sign $\tikz[baseline=-.75ex] \node[scale=0.8,shape=regular polygon, regular polygon sides=3, inner sep=0pt, draw, thick] {\textbf{!}};$ indicates where an important argument flows in: \[h_\al(x,X)=\det(M(H(f_0(x,X_{n+1}),(g_1^{\al_1}\dotsm g_m^{\al_m})(x,X_{n+1})))-XI_d)\] since evaluating in $x$ commutes with building companion matrices, Hermite forms and with taking determinants [$\to$ \ref{hermite}, \ref{longremi}(i)]. \end{proof} \begin{lem}\label{adjust} Let $R$ be a real closed field, $m\in\N_0$ and $g_1,\dots,g_m\in R[X]$. Setting $g:=g_1\dotsm g_m$ and $f:=(1-g^2)g'$, we have \begin{enumerate}[(a)] \item There is an $x\in R$ satisfying $g_1(x)>0,\dots,g_m(x)>0$ if and only if there is such an $x\in R$ satisfying in addition $f(x)=0$. \item If $f=0$ and $g_1\ne0,\dots,g_m\ne0$, then $g_1,\dots,g_m\in R$. 
\end{enumerate} \end{lem} \begin{proof} (b) Suppose $f=0$. Then $g^2=1$ or $g'=0$. In both cases it follows that $g\in R$ and thus $g_1,\dots,g_m\in R$, provided that $g_1\ne0,\dots,g_m\ne0$. \smallskip (a) Let $x\in R$ be such that $g_1(x)>0,\dots,g_m(x)>0$. Denote by $a_1,\dots,a_r$, where $r\in\N_0$ and $a_1<\ldots<a_r$, the roots of $g$ in $R$. First consider the case $r=0$. By the intermediate value theorem \ref{intermediate}, each of the $g_i$ is positive on all of $R$. It therefore suffices to show that $f$ has a root in $R$. By Definition \ref{dfrealclosed}, $g$ has even degree. If $g$ has degree $0$, then $g'=0$, hence $f=0$ and we are done. So suppose now $\deg g\ge 2$. Then $g'$ has odd degree, so $g'$, and in particular $f$, has a root in $R$ by Definition \ref{dfrealclosed}. From now on suppose that $r>0$. By the intermediate value theorem \ref{intermediate}, each of the $g_i$ has constant sign on each of the intervals $(-\infty,a_1),(a_1,a_2),\dots,(a_{r-1},a_r),(a_r,\infty)$. It therefore suffices to show that $f$ has a root in each of these sets. By Rolle's theorem \ref{rolle}, $g'$, and hence $f$, has a root in each of the intervals $(a_i,a_{i+1})$ ($1\le i\le r-1$). WLOG $f\ne0$. Then $g'\ne0$ and $g$ has degree $\ge1$. Consequently, the leading term of $1-g^2$ has even degree and a negative coefficient. By Lemma \ref{sgnbounds}(a), $(1-g^2)(y)<0$ for all $y\in R$ with $|y|$ sufficiently large. On the other hand, $(1-g^2)(a_1)=1=(1-g^2)(a_r)$. By the intermediate value theorem \ref{intermediate}, $1-g^2$, and hence $f$, has a root in each of the sets $(-\infty,a_1)$ and $(a_r,\infty)$. \end{proof} \begin{lem}\label{elim2} Let $m,n\in\N_0$ and $g_1,\dots,g_m\in K[X_1,\dots,X_{n+1}]$. Then \[\{(R,x)\in\mathcal R_n\mid\exists x_{n+1}\in R:(g_1(x,x_{n+1})>0\et\dots\et g_m(x,x_{n+1})>0)\}\] is a $(K,P)$-semialgebraic class. \end{lem} \begin{proof} Set $g:=g_1\dotsm g_m$ and $f:=(1-g^2)\frac{\partial g}{\partial X_{n+1}}$. 
Denote by $D:=\deg_{X_{n+1}}f\in\{-\infty\}\cup\N_0$ the degree of $f$ considered as a polynomial in $X_{n+1}$ with coefficients from $K[X_1,\dots,X_n]$. By \ref{adjust}, the class in question equals \begin{multline*} \bigcup_{d=0}^D\left\{(R,x)\in\mathcal R_n\mid\deg f(x,X_{n+1})=d\et\exists x_{n+1}\in R:\left( \begin{aligned} f(x,x_{n+1})&=0\ \et\\ g_1(x,x_{n+1})&>0\ \et\\ &\ \ \vdots\\ g_m(x,x_{n+1})&>0 \end{aligned} \right) \right\}\\ \cup\{(R,x)\in\mathcal R_n\mid f(x,X_{n+1})=0\et g_1(x,0)>0\et\dots\et g_m(x,0)>0\} \end{multline*} and is therefore $(K,P)$-semialgebraic by \ref{elim1}. \end{proof} \begin{thm}[Real quantifier elimination]\label{elim} Suppose $n\in\N_0$ and $S$ is an $(n+1)$-ary $(K,P)$-semialgebraic class. Then $\{(R,x)\in\mathcal R_n\mid\exists x_{n+1}\in R:(R,(x,x_{n+1}))\in S\}$ and $\{(R,x)\in\mathcal R_n\mid\forall x_{n+1}\in R:(R,(x,x_{n+1}))\in S\}$ are $n$-ary $(K,P)$-semialgebraic classes. \end{thm} \begin{proof} Since the second class is the complement of \[\{(R,x)\in\mathcal R_n\mid\exists x_{n+1}\in R:(R,(x,x_{n+1}))\in\complement S\},\] it is enough to consider the first class. By means of \ref{sanf}, one can assume WLOG that $S$ is of the form \[S=\{(R,(x,x_{n+1}))\in\mathcal R_{n+1}\mid f(x,x_{n+1})=0,g_1(x,x_{n+1})>0,\dots,g_m(x,x_{n+1})>0\}\] for some $f,g_i\in K[X_1,\dots,X_{n+1}]$. Setting $D:=\deg_{X_{n+1}}f$, we obtain \begin{multline*} \{(R,x)\in\mathcal R_n\mid\exists x_{n+1}\in R:(R,(x,x_{n+1}))\in S\}\\ =\bigcup_{d=0}^D\left\{(R,x)\in\mathcal R_n\mid\deg f(x,X_{n+1})=d\et\exists x_{n+1}\in R:\left( \begin{aligned} f(x,x_{n+1})&=0\ \et\\ g_1(x,x_{n+1})&>0\ \et\\ &\ \ \vdots\\ g_m(x,x_{n+1})&>0 \end{aligned} \right)\right\}\\ \cup\left( \begin{aligned} &\{(R,x)\in\mathcal R_n\mid f(x,X_{n+1})=0\}\cap\\ &\{(R,x)\in\mathcal R_n\mid\exists x_{n+1}\in R:(g_1(x,x_{n+1})>0\et\dots\et g_m(x,x_{n+1})>0)\} \end{aligned} \right) \end{multline*} which is $(K,P)$-semialgebraic by \ref{elim1} and \ref{elim2}. 
\end{proof} \begin{thm}{}\emph{[$\to$ \ref{preimagesa}]}\label{imagesa} Let $R$ be a real closed field. Images of semialgebraic subsets of $R^n$ under polynomial maps $R^n\to R^m$ are again semialgebraic in $R^m$. \end{thm} \begin{proof} Let $S\subseteq R^n$ be semialgebraic and let $h_1,\dots,h_m\in R[X_1,\dots,X_n]$. We have to show that $\{y\in R^m\mid\exists x\in R^n:(x\in S \et y_1=h_1(x)\et\ldots\et y_m=h_m(x))\}$ is again semialgebraic. But this follows by applying the quantifier elimination \ref{elim} $n$ times. \end{proof} \begin{ex}[Tarski principle]\label{tprinciple} The real quantifier elimination \ref{elim} can be used together with \ref{nothingorall} to generalize many statements from $\R$ to other real closed fields. This has already been advertised in \ref{advertisetarski}. To give the reader a sense of the type of statements admitting such a generalization, we give several examples. \begin{enumerate}[(a)] \item(``intermediate value theorem for rational functions'') [$\to$ \ref{intermediate}] From analysis, we know for $R=\R$: If $f,g\in R[X]$, $a,b\in R$ with $a\le b$, $g(c)\ne0$ for all $c\in[a,b]$ and $\sgn\left(\frac{f(a)}{g(a)}\right)\ne\sgn\left(\frac{f(b)}{g(b)}\right)$, then there is a $c\in[a,b]$ with $f(c)=0$. We claim that this is valid even for all real closed fields $R$. To this end, it is enough to show that for each $d\in\N$ \[ S_d:= \left\{R\in\mathcal R\mid \begin{aligned} &\forall x_0,\dots,x_d,y_0,\dots,y_d,a,b\in R:\\ & \left. 
\left(\begin{aligned} &\left.\left(\begin{aligned} &\scriptstyle(a\le b~\et~\left(\forall c\in[a,b]:\sum_{i=0}^dy_ic^i\ne0\right)~\et\\ &\scriptstyle\sgn\left(\left(\sum_{i=0}^dx_ia^i\right)\left(\sum_{i=0}^dy_ib^i\right)\right)\ne\sgn\left(\left(\sum_{i=0}^dx_ib^i\right)\left(\sum_{i=0}^dy_ia^i\right)\right) \end{aligned}\right)\right\}\scriptstyle(*)\\ &\implies\left.\exists c\in[a,b]:\sum_{i=0}^dx_ic^i=0\right\}\scriptstyle(**) \end{aligned}\right) \right\}\scriptstyle(***) \end{aligned} \right\} \] is a semialgebraic class because then $\R\in S_d$ implies by \ref{nothingorall} $S_d=\mathcal R$. Fix $d\in\N$. Applying the quantifier elimination \ref{elim} $2d+4$ times, it is enough to show that the following class is semialgebraic: \begin{multline*} \{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b))\in\mathcal R^{2d+4}\mid(***)\}=\\ \complement \underbrace{\{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b))\in\mathcal R^{2d+4}\mid(*)\}}_{S'}\\ \cup\underbrace{\{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b))\in\mathcal R^{2d+4}\mid(**)\}}_{S''} \end{multline*} It is thus enough to show that $S'$ and $S''$ are semialgebraic. We accomplish this in each case by applying the quantifier elimination \ref{elim}. We explicate this only for $S'$ since it is analogous and even simpler for $S''$: \begin{multline*} S'=\{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b))\mid b-a\ge0\}\cap\\ \{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b))\mid\forall c\in R:(\overbrace{c\in[a,b]\implies\sum_{i=0}^dy_ic^i\ne0}^{(****)})\}\cap\\ \bigcup_{\substack{\de,\ep\in\{-1,0,1\}\\\de\ne\ep}} \left\{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b))\mid \begin{aligned} \scriptstyle\sgn\left(\left(\sum_{i=0}^dx_ia^i\right)\left(\sum_{i=0}^dy_ib^i\right)\right)=\de,\\ \scriptstyle\sgn\left(\left(\sum_{i=0}^dx_ib^i\right)\left(\sum_{i=0}^dy_ia^i\right)\right)=\ep \end{aligned} \right\}. \end{multline*} By quantifier elimination it is enough to show that \[\{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b))\mid({*}{*}{*}{*})\}\] is semialgebraic. 
But this class equals \begin{multline*} \{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b,c))\mid c<a\text{ or }b<c\}~\cup\\ \left\{(R,(x_0,\dots,x_d,y_0,\dots,y_d,a,b,c))\mid \sum_{i=0}^dy_ic^i\ne0\right\}. \end{multline*} \item Let $R$ be a real closed field and $f\in R[X]$ with $f\ge0$ on $R$. We claim that the sum $g:=f+f'+f''+\dots$ of all derivatives of $f$ satisfies again $g\ge0$ on $R$. We show this first for $R=\R$: In this case, we have for all $x\in\R$ \[\frac{d}{dx}\left(g(x)e^{-x}\right)=g'(x)e^{-x}-g(x)e^{-x}=(g'(x)-g(x))e^{-x}=-f(x)e^{-x}\le0,\] from which it follows that $h\colon\R\to\R,\ x\mapsto g(x)e^{-x}$ is anti-monotonic [$\to$~\ref{monodef}]. From this and the fact that $\lim_{x\to\infty}h(x)=\lim_{x\to\infty}(g(x)e^{-x})=0$, we deduce that $h(x)\ge0$ and therefore $g(x)\ge0$ for all $x\in\R$. Thus the claim is proved for $R=\R$. To show it for all real closed fields $R$, it is now enough to show that for all $d\in\N$ \begin{align*} S_d:=\Bigg\{R\in\mathcal R\mid~&\forall a_0,\dots,a_d\in R:\Bigg(\left(\forall x\in R:\sum_{i=0}^da_ix^i\ge0\right)\implies\\ &\forall x\in R:\sum_{k=0}^d\sum_{i=k}^di(i-1)\dotsm(i-k+1)a_ix^{i-k}\ge0\Bigg)\Bigg\} \end{align*} is semialgebraic since then by \ref{nothingorall} $\R\in S_d$ implies $S_d=\mathcal R$. This can be shown for each $d\in\N$ by applying the quantifier elimination $d+3$ times. 
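The claim in (b) is easy to sanity-check numerically over $\R$. The following is a minimal Python sketch (an illustration of ours, not part of the text); polynomials are ascending coefficient lists, and the sample polynomial $f=(x^2-1)^2$ is our choice:

```python
# Sanity check for (b) over the reals: if f >= 0 on R, then the sum
# g = f + f' + f'' + ... of all derivatives of f should again be >= 0.
# Polynomials are ascending coefficient lists; the sample f is our choice.

def deriv(p):
    # formal derivative of an ascending coefficient list
    return [i * c for i, c in enumerate(p)][1:]

def evalp(p, x):
    return sum(c * x**i for i, c in enumerate(p))

def sum_of_derivatives(p):
    # adds p, p', p'', ... until the derivative becomes zero
    total, q = [0.0] * len(p), list(p)
    while q:
        total = [a + b for a, b in zip(total, q + [0.0] * (len(total) - len(q)))]
        q = deriv(q)
    return total

f = [1.0, 0.0, -2.0, 0.0, 1.0]   # f = (x^2 - 1)^2 >= 0 on R
g = sum_of_derivatives(f)        # g = 21 + 20x + 10x^2 + 4x^3 + x^4
assert all(evalp(f, x / 10) >= 0 for x in range(-500, 501))
assert all(evalp(g, x / 10) >= 0 for x in range(-500, 501))
```

Running it confirms that $g=x^4+4x^3+10x^2+20x+21$ is nonnegative on the sampled grid, in line with the claim.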
\item We can reprove \ref{eucliddiag}: for $R=\R$ it is already known from linear algebra, so it suffices to show for fixed $n\in\N$ that \[ S_n:=\left\{R\in\mathcal R\mid \begin{aligned} &\forall a_{11},a_{12},\dots,a_{nn}\in R:\\ &\left( \begin{aligned} &(\forall i,j\in\{1,\dots,n\}:a_{ij}=a_{ji})\implies\\ &\left(\begin{aligned} &\exists b_{11},b_{12},\dots,b_{nn}\in R:\\ &\left(\begin{aligned} \begin{aligned} &\left( \forall i,k\in\{1,\dots,n\}: \sum_{j=1}^nb_{ji}b_{jk}=\de_{ik} \right) \et\\ &\forall i,\ell\in\{1,\dots,n\}:\left(i\ne\ell\implies\sum_{j,k=1}^nb_{ji}a_{jk}b_{k\ell}=0\right) \end{aligned} \end{aligned}\right) \end{aligned}\right) \end{aligned} \right) \end{aligned} \right\} \] is semialgebraic. This can be done by expressing the quantifications over $i,j,k,\ell$ as finite intersections of semialgebraic classes and by eliminating the quantifications over $a_{11},\dots,b_{nn}$ by applying \ref{elim} $2n^2$ times. \item By \ref{nothingorall}, $\{R\in\mathcal R\mid R\text{ archimedean}\}$ [$\to$ \ref{archetcdef}(a)] is not a semialgebraic class (if $\mathcal R$ is big enough) since it contains $\R$ but not $\overline{(\R(X),P)}$ where $P$ is an arbitrary order of $\R(X)$. \end{enumerate} \end{ex} \section{Canonical isomorphisms of Boolean algebras of semialgebraic sets and classes} In this section, we fix again an ordered field $(K,P)$ and a set $\mathcal R$ of real closed extensions of $(K,P)$ [$\to$ \ref{introsemialg}]. \begin{df}\label{bahom} Let $M_1$ and $M_2$ be sets, $\mathcal S_1$ a Boolean algebra on $M_1$ and $\mathcal S_2$ a Boolean algebra on $M_2$. Then $\Ph\colon\mathcal S_1\to\mathcal S_2$ is called a \emph{homomorphism of Boolean algebras} if $\Ph(\emptyset)=\emptyset$, $\Ph\left(\complement S\right)=\complement\Ph(S)$, $\Ph(S\cap T)=\Ph(S)\cap\Ph(T)$ and $\Ph(S\cup T)=\Ph(S)\cup\Ph(T)$ for all $S,T\in\mathcal S_1$. 
If $\Ph$ is in addition \alalal{injective}{surjective}{bijective}, then $\Ph$ is called an \alalal{embedding}{epimorphism}{isomorphism} of Boolean algebras. \end{df} \begin{lem}\label{charemb} Suppose $\mathcal S_1$ and $\mathcal S_2$ are Boolean algebras and $\Ph\colon\mathcal S_1\to\mathcal S_2$ is a homomorphism. Then the following are equivalent: \begin{enumerate}[(a)] \item $\Ph$ is an embedding. \item $\forall S\in\mathcal S_1:(\Ph(S)=\emptyset\implies S=\emptyset)$ \end{enumerate} \end{lem} \begin{proof} \underline{(a)$\implies$(b)}\quad Suppose (a) holds and consider $S\in\mathcal S_1$ such that $\Ph(S)=\emptyset$. Then $\Ph(S)=\emptyset=\Ph(\emptyset)$ and hence $S=\emptyset$ by the injectivity of $\Ph$. \smallskip \underline{(b)$\implies$(a)}\quad Suppose (b) holds and let $S,T\in\mathcal S_1$ such that $\Ph(S)=\Ph(T)$. Then $\Ph(S\setminus T)=\Ph\left(S\cap\complement T\right)=\Ph(S)\cap\complement\Ph(T)=\emptyset$ and therefore $S\setminus T=\emptyset$. Analogously, we obtain $T\setminus S=\emptyset$. Hence $S=T$. \end{proof} \begin{notation}\label{introsn} Let $n\in\N_0$. From now on, we denote by $\mathcal S_n$ the Boolean algebra of all $n$-ary $(K,P)$-semialgebraic classes. For every $R\in\mathcal R$, we let furthermore $\mathcal S_{n,R}$ denote the Boolean algebra of all $K$-semialgebraic subsets of $R^n$ (i.e., $\mathcal S_{n,R}=\mathcal S_n$ for $\mathcal R=\{R\}$). For every $R\in\mathcal R$, we call the map $\set_R\colon\mathcal S_n\to\mathcal S_{n,R},\ S\mapsto\{x\in R^n\mid(R,x)\in S\}$ the \emph{setification} to $R$. \end{notation} \begin{thmdef}\label{setification} Let $n\in\N_0$ and $R\in\mathcal R$. The setification \[\set_R\colon\mathcal S_n\to\mathcal S_{n,R}\] is an isomorphism of Boolean algebras. We call its inverse map \[\class_R:=\set_R^{-1}\colon\mathcal S_{n,R}\to\mathcal S_n\] the \emph{classification}. \end{thmdef} \begin{proof} It is clear that $\set_R$ is an epimorphism. Suppose $\emptyset\ne S\in\mathcal S_n$. 
By Lemma \ref{charemb}, it suffices to show $\set_RS\ne\emptyset$. By the quantifier elimination \ref{elim}, \[T:=\{R'\in\mathcal R\mid\exists x\in R'^n:(R',x)\in S\}\] is $(K,P)$-semialgebraic and hence by \ref{nothingorall} either empty or $\mathcal R$. From $S\ne\emptyset$, we have of course $T\ne\emptyset$. Therefore $R\in\mathcal R=T$, i.e., there is some $x\in R^n$ with $(R,x)\in S$. Then $x\in\set_RS$ and thus $\set_RS\ne\emptyset$. \end{proof} \begin{cordef}\label{transfer} Let $n\in\N_0$ and $R,R'\in\mathcal R$. Then there is exactly one isomorphism of Boolean algebras $\transfer_{R,R'}\colon\mathcal S_{n,R}\to\mathcal S_{n,R'}$ satisfying \[\transfer_{R,R'}(\{x\in R^n\mid p(x)\ge0\})=\{x\in R'^n\mid p(x)\ge0\}\] for all $p\in K[X_1,\dots,X_n]$. We call $\transfer_{R,R'}$ the \emph{transfer} from $R$ to $R'$. \end{cordef} \begin{proof} The uniqueness is clear since $\mathcal S_{n,R}$ is generated by \[\{\{x\in R^n\mid p(x)\ge0\}\mid p\in K[X_1,\dots,X_n]\}\] [$\to$ \ref{booleanalgebra}(b)]. Existence is established by setting $\transfer_{R,R'}:=\set_{R'}\circ\class_R$. Indeed, let $p\in K[X_1,\dots,X_n]$ and set $S:=\{(R'',x)\in\mathcal R_n\mid p(x)\ge0\text{ in $R''$}\}$. Then the claim is that $\transfer_{R,R'}(\set_R S)=\set_{R'}(S)$ which is clear since $\transfer_{R,R'}(\set_R S)=(\set_{R'}\circ\class_R)(\set_R S)=\set_{R'}((\underbrace{\class_R\circ\set_R}_{\id_{\mathcal S_n}})(S))$. \end{proof} \chapter{Hilbert's 17th problem} \section{Nonnegative polynomials in one variable} \begin{thm}\label{so2s} Suppose $R$ is a real closed field and $f\in R[X]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f\ge0$ on $R$ \qquad\emph{[$\to$ \ref{intervals}]} \item $f$ is a sum of two squares in $R[X]$. \item $f\in\sum R[X]^2$ \qquad\emph{[$\to$ \ref{divnot}]} \end{enumerate} \end{thm} \begin{proof}(b)$\implies$(c)$\implies$(a) is trivial. 
In order to show (a)$\implies$(b), we set $C:=R(\ii)$ and consider the ring automorphism \[C[X]\to C[X],\ p\mapsto p^*\] given by $a^*=a$ for $a\in R$, $\ii^*=-\ii$ and $X^*=X$. WLOG $f\ne0$. By the fundamental theorem of algebra \ref{realfund}, there exist $k,\ell\in\N_0$, $c\in R^\times$, $a_1,\dots,a_k\in R$, $b_1,\dots,b_k\in R^\times$, $\al_1,\dots,\al_\ell\in\N$ and pairwise different $d_1,\dots,d_\ell\in R$ such that \begin{align*} f&=c\left(\prod_{i=1}^k((X-a_i)^2+b_i^2)\right)\prod_{j=1}^\ell(X-d_j)^{\al_j}\\ &=c\left(\prod_{i=1}^k(X-(a_i+b_i\ii))\right)\left(\prod_{i=1}^k(X-(a_i-b_i\ii))\right)\prod_{j=1}^\ell(X-d_j)^{\al_j}. \end{align*} Suppose now $f\ge0$ on $R$. Then we have $0\le\sgn(f(x))=(\sgn c)\prod_{j=1}^\ell(\sgn(x-d_j))^{\al_j}$ for all $x\in R$. From this, we easily deduce that $\al_j\in2\N$ and $c\in R^2$. Setting \[g:=\sqrt c\left(\prod_{i=1}^k(X-(a_i+b_i\ii))\right)\prod_{j=1}^\ell(X-d_j)^{\frac{\al_j}2}\in C[X],\] we now have $f=g^*g$. Writing $g=p+\ii q$ with $p,q\in R[X]$, this amounts to $f=(p-\ii q)(p+\ii q)=p^2+q^2$. \end{proof} \begin{thm}[Cassels]\label{cassels} Let $(K,\le)$ be an ordered field. Suppose $\ell\in\N_0$, $f_1,\dots,f_\ell\in K[X]$, $g_1,\dots,g_\ell\in K[X]\setminus\{0\}$ and $a_1,\dots,a_\ell\in K_{\ge0}$ with $\sum_{i=1}^\ell a_i\left(\frac{f_i}{g_i}\right)^2\in K[X]$. Then there are $p_1,\dots,p_\ell\in K[X]$ such that \[\sum_{i=1}^\ell a_i\left(\frac{f_i}{g_i}\right)^2=\sum_{i=1}^\ell a_ip_i^2.\] \end{thm} \begin{proof} WLOG $a_i>0$ for all $i\in\{1,\dots,\ell\}$ and $g_1=\ldots=g_\ell$. It suffices to show: Let $h\in K[X]$ be a polynomial for which there exist some $g\in K[X]$ of degree $\ge1$ and $f_1,\dots,f_\ell\in K[X]$ satisfying $hg^2=\sum_{i=1}^\ell a_if_i^2$. Then there exist $G\in K[X]\setminus\{0\}$ of degree smaller than that of $g$ and $F_1,\dots,F_\ell\in K[X]$ satisfying $hG^2=\sum_{i=1}^\ell a_iF_i^2$. 
We prove this: Write $f_i=q_ig+r_i$ with $q_i,r_i\in K[X]$ and $\deg r_i<\deg g$ for all $i\in\{1,\dots,\ell\}$. If $r_i=0$ for all $i\in\{1,\dots,\ell\}$, then we set $G:=1$ and $F_i:=q_i$ for all $i\in\{1,\dots,\ell\}$ and have \[hG^2=h=\frac1{g^2}(hg^2)=\frac1{g^2}\sum_{i=1}^\ell a_if_i^2= \sum_{i=1}^\ell a_i\left(\frac{f_i}g\right)^2=\sum_{i=1}^\ell a_iq_i^2=\sum_{i=1}^\ell a_iF_i^2.\] In the sequel, we suppose that the set $I:=\{i\in\{1,\dots,\ell\}\mid r_i\ne0\}$ is nonempty. Now we set $s:=\sum_{i=1}^\ell a_iq_i^2-h$, $t:=\sum_{i=1}^\ell a_if_iq_i-gh$, $F_i:=sf_i-2tq_i$ for $i\in\{1,\dots,\ell\}$ and $G:=sg-2t$. Then we obtain \begin{align*} hG^2&=s^2hg^2-4stgh+4t^2h\\ &=s^2\sum_{i=1}^\ell a_if_i^2-4st(t+gh)+4t^2(s+h)\\ &=s^2\sum_{i=1}^\ell a_if_i^2-4st\sum_{i=1}^\ell a_if_iq_i+4t^2\sum_{i=1}^\ell a_iq_i^2\\ &=\sum_{i=1}^\ell a_i(sf_i-2tq_i)^2=\sum_{i=1}^\ell a_iF_i^2. \end{align*} It remains to show that $G\ne0$ and $\deg G<\deg g$. To this end, we calculate \begin{align*} G&=g\sum_{i=1}^\ell a_iq_i^2-gh-2\sum_{i=1}^\ell a_if_iq_i+2gh\\ &=\frac1g\left(g^2\sum_{i=1}^\ell a_iq_i^2+g^2h-2g\sum_{i=1}^\ell a_if_iq_i\right)\\ &=\frac1g\left(g^2\sum_{i=1}^\ell a_iq_i^2+\sum_{i=1}^\ell a_if_i^2-2g\sum_{i=1}^\ell a_if_iq_i\right)\\ &=\frac1g\sum_{i=1}^\ell a_i(g^2q_i^2-2(gq_i)f_i+f_i^2)\\ &=\frac1g\sum_{i=1}^\ell a_i(gq_i-f_i)^2=\frac1g\sum_{i=1}^\ell a_ir_i^2=\frac1g\sum_{i\in I}a_ir_i^2. \end{align*} If we had $G=0$, then this would mean $\sum_{i\in I}a_ir_i^2=0$. Since the leading coefficient of $a_ir_i^2$ is positive for all $i\in I\ne\emptyset$, this is impossible. Hence $G\ne0$. Because of $\deg r_i<\deg g$ for all $i\in I$, we have $\deg G<2\deg g-\deg g=\deg g$. \end{proof} \section{Homogenization and dehomogenization} \begin{df}\label{introhom} Let $A$ be a commutative ring with $0\ne1$. 
\begin{enumerate}[(a)] \item If $k\in\N_0$ and $f\in A[X_1,\dots,X_n]$, then the sum of all terms (i.e., monomials with their coefficients) of degree $k$ of $f$ is called the \emph{$k$-th homogeneous part} of $f$. This is a $k$-form [$\to$ \ref{longremi}(a)]. \item If $f\in A[X_1,\dots,X_n]\setminus\{0\}$ and $d:=\deg f$, then the $d$-th homogeneous part of $f$ is called the \emph{leading form} $\lf(f)$ of $f$. We set $\lf(0):=0$. \item If $f\in A[X_1,\dots,X_n]$, $d:=\deg f\in\N_0$ and $f=\sum_{k=0}^df_k$ with a $k$-form $f_k$ for all $k\in\{0,\dots,d\}$, then the \emph{homogenization} $f^*\in A[X_0,\dots,X_n]$ of $f$ (with respect to $X_0$) is given by \[f^*:=\sum_{k=0}^dX_0^{d-k}f_k\] which equals $X_0^df\left(\frac{X_1}{X_0},\dots,\frac{X_n}{X_0}\right)$ in case $A$ is a field (since then the field of rational functions $A(X_0,X_1,\ldots,X_n)$ exists). We set $0^*:=0$. \item For homogeneous $f\in A[X_0,\dots,X_n]$, we call $\widetilde f:=f(1,X_1,\dots,X_n)$ the \emph{dehomogenization} of $f$ (with respect to $X_0$). \end{enumerate} \end{df} \begin{rem}\label{homdehom} Let $A$ be a commutative ring with $0\ne1$. \begin{enumerate}[(a)] \item $\lf(f)=f^*(0,X_1,\dots,X_n)$ for all $f\in A[X_1,\dots,X_n]$. \item For $f,g\in A[X_1,\dots,X_n]$, we have \[(f+g)^*=f^*+g^*\] in case $\deg f=\deg g=\deg(f+g)$ and \[(fg)^*=f^*g^*\] if $A$ is an integral domain. \item $A[X_0,\dots,X_n]\to A[X_1,\dots,X_n],\ f\mapsto\widetilde f$ is a ring homomorphism. \item For all $f,g\in A[X_1,\dots,X_n]$, we have \[\lf(f+g)=\lf(f)+\lf(g)\] in case $\deg f=\deg g=\deg(f+g)$ and \[\lf(fg)=\lf(f)\lf(g)\] if $A$ is an integral domain. \item For all $f\in A[X_1,\dots,X_n]$, we have $\widetilde{f^*\,}=f$. \item If $f\in A[X_0,\dots,X_n]\setminus\{0\}$ is homogeneous and $m:=\max\{k\in\N_0\mid X_0^k\mid f\}$, then $X_0^m{\widetilde f\;}^*=f$. 
\end{enumerate} \end{rem} \begin{lem}\label{pol0} Suppose $K$ is a field, $n,d\in\N_0$, $f\in K[X_1,\dots,X_n]_d$ and let $I_1,\dots,I_n\subseteq K$ be sets of cardinality at least $d+1$ each such that $f(x)=0$ for all $x\in I_1\times\ldots\times I_n$. Then $f=0$. \end{lem} \begin{proof} Induction on $n$. \smallskip \underline{$n=0$}\quad\checkmark \smallskip \underline{$n-1\to n\quad(n\in\N)$}\quad Write $f=\sum_{k=0}^df_kX_n^k$ with $f_k\in K[X_1,\dots,X_{n-1}]_d$. For all \[(x_1,\dots,x_{n-1})\in I_1\times\ldots\times I_{n-1},\] the polynomial $f(x_1,\dots,x_{n-1},X_n)=\sum_{k=0}^df_k(x_1,\dots,x_{n-1})X_n^k\in K[X_n]_d$ has at least $d+1$ roots and is therefore the zero polynomial. Thus $f_k(x_1,\dots,x_{n-1})=0$ for all $k\in\{0,\dots,d\}$ and $(x_1,\dots,x_{n-1})\in I_1\times\ldots\times I_{n-1}$. By the induction hypothesis, $f_k=0$ for all $k\in\{0,\dots,d\}$. \end{proof} \begin{rem}\label{soslongrem} Let $K$ be a real field, $\ell,n\in\N_0$, $p_1,\dots,p_\ell\in K[X_1,\dots,X_n]$ and \[f:=\sum_{i=1}^\ell p_i^2.\] \begin{enumerate}[(a)] \item If $f=0$, then $p_1=\ldots=p_\ell=0$. This follows from \ref{pol0} together with \ref{realchar}(c). Instead of \ref{pol0}, one can alternatively use the fact that $K(X_1,\dots,X_n)$ is real, which is clear by applying \ref{rtlfct} $n$ times. \item If $f\ne0$, then $\deg f=2d$ with $d:=\max\{\deg(p_i)\mid i\in\{1,\dots,\ell\}\}$ since otherwise $\sum_{i=1,\deg(p_i)=d}^\ell\lf(p_i)^2=0$, contradicting (a). \item If $d\in\N_0$ and $f$ is a $2d$-form, then every $p_i$ is a $d$-form. This can be seen similarly to (b) by considering the homogeneous parts of the $p_i$ of smallest (instead of largest) degree. \item We have $f^*\in\sum K[X_0,\dots,X_n]^2$. 
More precisely, $f^*$ is a $2d$-form for some $d\in\N_0$ that is a sum of $\ell$ squares of $d$-forms since \[f^*=X_0^{2d}f\left(\frac{X_1}{X_0},\dots,\frac{X_n}{X_0}\right)=\sum_{i=1}^\ell\left(X_0^dp_i\left( \frac{X_1}{X_0},\dots,\frac{X_n}{X_0}\right)\right)^2\] and $X_0^dp_i\left(\frac{X_1}{X_0},\dots,\frac{X_n}{X_0}\right)=X_0^{d-\deg p_i}p_i^*\in K[X_0,\dots,X_n]$ for all $i\in\{1,\dots,\ell\}$ with $p_i\ne0$ (note that $\deg p_i\le d$ by (b)). \end{enumerate} \end{rem} \begin{pro}\label{lf2} Let $(K,\le)$ be an ordered field and $f\in K[X_1,\dots,X_n]$ with $f\ge0$ on $K^n$. Then $f$ has an even degree except if $f=0$, and we have $\lf(f)\ge0$ on $K^n$. \end{pro} \begin{proof} WLOG $f\ne0$. Then $g:=\lf(f)\ne0$. Set $d:=\deg g$. For all $x\in K^n$, $f_x:=f(Tx)\in K[T]$ is a polynomial in one variable with $f_x\ge0$ on $K$ whose leading coefficient is $g(x)$ in case that $g(x)\ne0$. Choose $x_0\in K^n$ with $g(x_0)\ne0$ [$\to$ \ref{pol0}]. Then $f_{x_0}$ has degree $d$ and because of $f_{x_0}\ge0$ on $K$, it follows that $d\in2\N_0$ by \ref{sgnbounds}(a). Now let $x\in K^n$ be arbitrary such that $g(x)\ne0$. Again by \ref{sgnbounds}(a), it follows from $f_x\ge0$ on $K$ that $g(x)\ge0$. \end{proof} \begin{pro}\label{psdpsdhom} Let $(K,\le)$ be an ordered field and $f\in K[X_1,\dots,X_n]$. \begin{enumerate}[\normalfont(a)] \item $f\ge 0$\text{ on }$K^n\iff f^*\ge0\text{ on }K^{n+1}$ \item $f\in\sum K[X_1,\dots,X_n]^2\iff f^*\in\sum K[X_0,\dots,X_n]^2$ \end{enumerate} \end{pro} \begin{proof} (a) ``$\Longleftarrow$ '' If $f^*$ is nonnegative on $K^{n+1}$, then also on $\{1\}\times K^n$. \smallskip ``$\Longrightarrow$'' Suppose $f\ge0$ on $K^n$. WLOG $f\ne0$. By \ref{lf2}, we can write $\deg f=2d$ with $d\in\N_0$. Due to $f^*\overset{\ref{introhom}(c)}=X_0^{2d}f\left(\frac{X_1}{X_0},\dots,\frac{X_n}{X_0}\right)$, we deduce $f^*\ge0$ on $K^\times\times K^n$. 
It remains to show $f^*\ge0$ on $\{0\}\times K^n$ which is equivalent by \ref{homdehom}(a) to $\lf(f)\ge0$ on $K^n$. The latter holds by \ref{lf2}. \smallskip (b) ``$\Longrightarrow$'' has been shown in \ref{soslongrem}(d). \smallskip ``$\Longleftarrow$ '' follows from \ref{homdehom}(c). \end{proof} \section{Nonnegative quadratic polynomials} \begin{df}\label{psdpd} Let $(K,\le)$ be an ordered field. \begin{enumerate}[(a)] \item If $f\in K[X_1,\dots,X_n]$ is homogeneous [$\to$ \ref{longremi}(a)], then $f$ is called\\ \alal{\emph{positive semidefinite (psd)}}{\emph{positive definite (pd)}} (over $K$) if $f\malal{\ge0\text{ on }K^n}{>0\text{ on }K^n\setminus\{0\}}$. \item If $M\in SK^{n\times n}$, then $M$ is called \alal{psd}{pd} (over K) if the quadratic form represented by $M$ [$\to$ \ref{longremi}(d)] is \alal{psd}{pd}, i.e., $x^TMx\malal{\ge0\text{ for all }x\in K^n}{>0\text{ for all }x\in K^n\setminus\{0\}}$. \end{enumerate} \end{df} \begin{pro}\label{sospsd2} Let $K$ be a Euclidean field and $q\in K[X_1,\dots,X_n]$ a quadratic form. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $q$ is psd \emph{[$\to$ \ref{psdpd}(a)]} \item $q\in\sum K[X_1,\dots,X_n]^2$ \emph{[$\to$ \ref{divnot}]} \item $q$ is a sum of $n$ squares of linear forms \emph{[$\to$ \ref{longremi}(a)]}. \item $\sg q=\rk q$ \emph{[$\to$ \ref{longremi}(h)]}. \end{enumerate} \end{pro} \begin{proof} (d)$\implies$(c)$\implies$(b)$\implies$(a) is trivial. Now suppose that (d) does not hold. We show that then (a) also fails. Write $q=\sum_{i=1}^s\ell_i^2-\sum_{j=1}^t\ell_{s+j}^2$ with $s,t\in\N_0$ and linearly independent linear forms $\ell_1,\dots,\ell_s,\ell_{s+1},\dots,\ell_{s+t}\in K[X_1,\dots,X_n]$. Since $s-t=\sg q\ne\rk q=s+t$, we have $t\ge1$. By linear algebra, \[\ph\colon K^n\to K^{s+t},\ x\mapsto\begin{pmatrix}\ell_1(x)\\\vdots\\\ell_{s+t}(x)\end{pmatrix}\] is surjective. Choose $x\in K^n$ with $\ph(x)=\begin{pmatrix}0\\\vdots\\0\\1\end{pmatrix}$. Then $q(x)=-1<0$. 
\end{proof} \begin{pro}\label{psdeq} Let $K$ be a Euclidean field and $M\in SK^{n\times n}$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $M$ is psd \emph{[$\to$ \ref{psdpd}(b)]}. \item $\exists s\in\N_0:\exists A\in K^{s\times n}:M=A^TA$ \item $\exists A\in K^{n\times n}:M=A^TA$ \item All eigenvalues of $M$ in the real closure $\overline{(K,K^2)}$ are nonnegative. \item All coefficients of $\det(M+XI_n)\in K[X]$ are nonnegative. \item If $M=(a_{ij})_{1\le i,j\le n}$, then for all $I\subseteq\{1,\dots,n\}$, we have $\det((a_{ij})_{(i,j)\in I\times I})\ge0$. \end{enumerate} \end{pro} \begin{proof} Using \ref{longremi}(e) and \ref{soslongrem}(c), one sees that (a), (b) and (c) are precisely the corresponding statements of \ref{sospsd2}. \smallskip \underline{(a)$\implies$(f)}\quad follows from applying (a)$\implies$(c) to the submatrices of $M$ in question. \smallskip \underline{(f)$\implies$(e)}\quad Each coefficient of $\det(M+XI_n)$ is a sum of certain of the determinants appearing in (f). \smallskip \underline{(e)$\implies$(d)} is trivial. \smallskip \underline{(d)$\implies$(a)} follows easily from \ref{eucliddiag}. \end{proof} \begin{term}{}[$\to$ \ref{degnot}, \ref{longremi}(a)]\label{quintic} Let $A$ be a commutative ring with $0\ne1$. Polynomials from $A[X_1,\dots,X_n]_d$ [$\to$ \ref{degnot}] are called \emph{constant} for $d=0$, \emph{linear} for $d=1$, \emph{quadratic} for $d=2$, \emph{cubic} for $d=3$, \emph{quartic} for $d=4$, \emph{quintic} for $d=5$, \dots \end{term} \begin{pro}\label{son1s} Let $K$ be a Euclidean field and $q\in K[X_1,\dots,X_n]_2$. The following are equivalent: \begin{enumerate}[\normalfont(a)] \item $q\ge0$ on $K^n$ \item $q\in\sum K[X_1,\dots,X_n]^2$ \item $q$ is a sum of $n+1$ squares of linear polynomials. 
\end{enumerate} \end{pro} \begin{proof} (a)$\overset{\text{\ref{psdpsdhom}(a)}}\implies q^*\ge0\text{ on $K^{n+1}$}\overset{\ref{sospsd2}}\implies$(c)$\implies$(b) $\implies$(a) \end{proof} \section{The Newton polytope} \begin{dfpro}\label{dfconv} Let $(K,\le)$ be an ordered field, $V$ a $K$-vector space and $A\subseteq V$. Then $A$ is called \emph{convex} if $\forall x,y\in A:\forall\la\in[0,1]_K:\la x+(1-\la)y\in A$. The smallest convex superset of $A$ is obviously \[\conv A:=\left\{\sum_{i=1}^m\la_ix_i\mid m\in\N,\la_i\in K_{\ge0},x_i\in A,\sum_{i=1}^m\la_i=1\right\},\] called the convex set \emph{generated} by $A$ or the \emph{convex hull} of $A$. We call finitely generated convex sets, i.e., convex hulls of finite sets, \emph{polytopes}. A polytope is thus of the form \[\conv\{x_1,\dots,x_m\}=\left\{\sum_{i=1}^m\la_ix_i\mid\la_i\in K_{\ge0},\sum_{i=1}^m\la_i=1\right\}\] for some $m\in\N_0$ and $x_1,\dots,x_m\in V$. If $A$ is a convex set, then a point $x\in A$ is called an \emph{extreme point} of $A$ if there are no $y,z\in A$ such that $y\ne z$ and $x=\frac{y+z}2$. Extreme points of polytopes are also called \emph{vertices} of the polytope. \end{dfpro} \begin{exo}\label{extremeexo} Suppose $(K,\le)$ is an ordered field, $V$ a $K$-vector space, $A\subseteq V$ convex, $x\in A$ and $\la\in(0,1)_K$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $x$ is an extreme point of $A$. \item There are no $y,z\in A$ such that $y\ne z$ and $x=\la y+(1-\la)z$. \end{enumerate} \end{exo} \begin{lem}\label{nonredisvertex} Let $(K,\le)$ be an ordered field, $V$ a $K$-vector space, $m\in\N_0$, $x_1,\dots,x_m\in V$, $P:=\conv\{x_1,\dots,x_m\}$ and suppose $P\ne\conv(\{x_1,\dots,x_m\}\setminus\{x_i\})$ for all $i\in\{1,\dots,m\}$. Then $P$ is a polytope and $x_1,\dots,x_m$ are its vertices. \end{lem} \begin{proof} To show: \begin{enumerate}[(a)] \item Every vertex of $P$ equals one of the $x_i$. \item Every $x_i$ is a vertex of $P$. 
\end{enumerate} For (a), let $x$ be a vertex of $P$. Write $x=\sum_{i=1}^m\la_ix_i$ with $\la_i\in K_{\ge0}$ and $\sum_{i=1}^m\la_i=1$. WLOG $\la_1\ne0$. Then $\la_1=1$ for otherwise $\mu:=\sum_{i=2}^m\la_i=1-\la_1>0$ and $x=\la_1x_1+\mu\Bigg(\underbrace{\sum_{i=2}^m\frac{\la_i}\mu x_i}_{\rlap{$\scriptstyle\in\conv\{x_2,\dots,x_m\}$}}\Bigg)$, contradicting \ref{extremeexo}(b). \noindent\smallskip To prove (b), we let $y,z\in P$ with $x_1=\frac{y+z}2$. To show: $y=z$. Write $y=\sum_{i=1}^m\la_ix_i$ and $z=\sum_{i=1}^m\mu_ix_i$ with $\la_i,\mu_i\in K_{\ge0}$ and $\sum_{i=1}^m\la_i=1=\sum_{i=1}^m\mu_i$. We show that $\la_1=1=\mu_1$. It is enough to show $\frac{\la_1+\mu_1}2=1$. If we had $\frac{\la_1+\mu_1}2<1$, then it would follow from $(1-\frac{\la_1+\mu_1}2)x_1=\sum_{i=2}^m\frac{\la_i+\mu_i}2x_i$ that $x_1\in\conv\{x_2,\dots,x_m\}$ and therefore $P=\conv\{x_1,\dots,x_m\}=\conv\{x_2,\dots,x_m\}\ \lightning$. \end{proof} \begin{cor}\label{polyisconvexhull} Every polytope is the convex hull of its finitely many vertices. \end{cor} \begin{dfpro}\label{minkowskisum} Suppose $(K,\le)$ is an ordered field, $V$ is a $K$-vector space and let $A$ and $B$ be subsets of $V$. Then $A+B:=\{x+y\mid x\in A,y\in B\}$ is called the \emph{Minkowski sum} of $A$ and $B$. We have $(\conv A)+(\conv B)=\conv(A+B)$. Let now $A$ and $B$ be convex. Then $A+B$ is also convex. If $z$ is an extreme point of $A+B$, then there are uniquely determined $x\in A$ and $y\in B$ such that $z=x+y$, and $x$ is an extreme point of $A$ and $y$ is one of $B$. \end{dfpro} \begin{proof} ``$\subseteq$'' Let $x_1,\dots,x_m\in A$, $y_1,\dots,y_n\in B$, $\la_1,\dots,\la_m\in K_{\ge0}$, $\mu_1,\dots,\mu_n\in K_{\ge0}$ and $\sum_{i=1}^m\la_i=1=\sum_{j=1}^n\mu_j$. 
Then $\sum_{i=1}^m\sum_{j=1}^n\la_i\mu_j=\left(\sum_{i=1}^m\la_i\right)(\sum_{j=1}^n\mu_j)=1\cdot1=1$ and \[ \sum_{i=1}^m\la_ix_i+\sum_{j=1}^n\mu_jy_j=\left(\sum_{j=1}^n\mu_j\right)\sum_{i=1}^m\la_ix_i +\left(\sum_{i=1}^m\la_i\right)\sum_{j=1}^n\mu_jy_j =\sum_{i=1}^m\sum_{j=1}^n\la_i\mu_j(x_i+y_j). \] \smallskip ``$\supseteq$'' is trivial. \smallskip Let now $A$ and $B$ be convex. Then $A+B=(\conv A)+(\conv B)=\conv(A+B)$ is convex. Finally, let $z$ be an extreme point of $A+B$ and let $x\in A$ and $y\in B$ with $z=x+y$. Then $x$ is an extreme point of $A$ since if we had $x=\frac{x_1+x_2}2$ with different $x_1,x_2\in A$, then it would follow that $z=\frac{(x_1+y)+(x_2+y)}2$ and $x_1+y\ne x_2+y\ \lightning$. In the same way, $y$ is an extreme point of $B$. Suppose now that $x'\in A$ and $y'\in B$ such that $z=x'+y'$. Then $z=\frac{x+x'}2+\frac{y+y'}2$ and $\frac{x+x'}2$ is also an extreme point of $A$ which is possible only for $x=x'$. Analogously, $y=y'$. \end{proof} \begin{notation}\label{monomnotation} Suppressing $n$ in the notation, we denote by $\x:=(X_1,\dots,X_n)$ a tuple of variables and set $A[\x]:=A[X_1,\dots,X_n]$ for every commutative ring $A$ with $0\ne1$ in $A$. For $\al\in\N_0^n$, we write $|\al|:=\al_1+\dots+\al_n$ and $\x^\al:=X_1^{\al_1}\dotsm X_n^{\al_n}$. \end{notation} \begin{df} Let $K$ be a field and $f\in K[\x]$. Write $f=\sum_{\al\in\N_0^n}a_\al\x^\al$ with $a_\al\in K$. Then the finite set $\supp(f):=\{\al\in\N_0^n\mid a_\al\ne0\}$ is called the \emph{support} of $f$ and its convex hull $N(f):=\conv(\supp(f))\subseteq\R^n$ the \emph{Newton polytope} of $f$. \end{df} \begin{df} Let $K$ be a field, $f\in K[\x]$ and $a\in K$. We say that $a$ is a \emph{vertex coefficient} of $f$ if there is a vertex $\al$ of $N(f)$ such that $a\x^\al$ is a term of $f$. 
\end{df} \begin{rem}\label{vcnon0} Since every vertex of the Newton polytope of a polynomial lies by \ref{nonredisvertex} in the support of the polynomial, vertex coefficients are always $\ne0$. \end{rem} \begin{thm}\label{newtontimes} Let $K$ be a field and $f,g\in K[\x]$. Then $N(fg)=N(f)+N(g)$ and every vertex coefficient of $fg$ is the product of a vertex coefficient of $f$ with a vertex coefficient of $g$. \end{thm} \begin{proof}``$\subseteq$'' $\supp(fg)\subseteq\supp(f)+\supp(g)\subseteq N(f)+N(g)$ and therefore $N(fg)=\conv(\supp(fg))\subseteq N(f)+N(g)$ since $N(f)+N(g)$ is convex by \ref{minkowskisum}. \smallskip ``$\supseteq$'' By \ref{minkowskisum}, $N(f)+N(g)$ is a polytope. By virtue of \ref{polyisconvexhull}, it suffices to show that its vertices lie in $N(fg)$. Consider therefore a vertex $\ga$ of $N(f)+N(g)$. We even show that $\ga\in\supp(fg)$. By \ref{minkowskisum}, there are uniquely determined $\al\in N(f)$ and $\be\in N(g)$ such that $\ga=\al+\be$, and $\al$ is a vertex of $N(f)$ and $\be$ a vertex of $N(g)$. By \ref{vcnon0}, we have $\al\in\supp(f)$ and $\be\in\supp(g)$. Because of unicity of $\al$ and $\be$, the coefficient of $\x^\ga$ in $fg$ equals the product of the respective coefficients of $\x^\al$ and $\x^\be$ in $f$ and $g$, respectively, and hence is in particular $\ne0$. Thus $N(fg)=N(f)+N(g)$ is shown. Also the extra claim follows from the above. \end{proof} \begin{pro}\label{newtoncontainment} Let $K$ be a field and $f,g\in K[\x]$. Then $N(f+g)\subseteq\conv(N(f)\cup N(g))$. \end{pro} \begin{proof} $\supp(f+g)\subseteq\supp(f)\cup\supp(g)\subseteq N(f)\cup N(g)$ implies \[N(f+g)=\conv(\supp(f+g))\subseteq\conv(N(f)\cup N(g)).\] \end{proof} \begin{thm}\label{novertexcancellation} Let $(K,\le)$ be an ordered field and $f,g\in K[\x]$ such that all vertex coefficients of $f$ and $g$ have the same sign. Then $N(f+g)=\conv(N(f)\cup N(g))$ and all vertex coefficients of $f+g$ also have this sign. 
\end{thm} \begin{proof} ``$\subseteq$'' is \ref{newtoncontainment} \smallskip ``$\supseteq$'' We have that $\conv(N(f)\cup N(g))=\conv(\supp(f)\cup\supp(g))$ is a polytope. Let $\al$ be one of its vertices. By \ref{polyisconvexhull}, it is enough to show that $\al\in N(f+g)$. We even show that $\al\in\supp(f+g)$. By \ref{nonredisvertex}, $\al$ lies in at least one of the sets $\supp(f)$ and $\supp(g)$. If $\al$ lies only in one of these two, then the claim is clear. If on the other hand $\al$ lies in both, then $\al$ is a vertex of both $\conv(\supp(f))=N(f)$ and $\conv(\supp(g))=N(g)$ and the coefficients of $\x^\al$ in $f$ and in $g$ and hence also in $f+g$ have the same sign, from which it follows again that $\al\in\supp(f+g)$. Thus $N(f+g)=\conv(N(f)\cup N(g))$ is proven. The extra claim follows from what was shown. \end{proof} \begin{lem}\label{aa2a} Let $(K,\le)$ be an ordered field, $V$ a $K$-vector space and $A$ a convex subset of $V$. Then $A+A=2A:=\{2x\mid x\in A\}$. \end{lem} \begin{proof} ``$\supseteq$'' trivial \smallskip ``$\subseteq$'' Let $x,y\in A$. Then $x+y=2\frac{x+y}2\in 2A$. \end{proof} \begin{thm}\label{vertexsquare} Let $(K,\le)$ be an ordered field and $f\in K[\x]$. Then $N(f^2)=2N(f)$ and all vertex coefficients of $f^2$ are squares of vertex coefficients of $f$ and therefore positive. \end{thm} \begin{proof} $N(f^2)=2N(f)$ follows from \ref{newtontimes} and \ref{aa2a}. Suppose $\ga$ is a vertex of $N(f^2)\overset{\ref{newtontimes}}=N(f)+N(f)$. By \ref{minkowskisum}, there are uniquely determined $\al,\be\in N(f)$ with $\ga=\al+\be$. Due to $\ga=\be+\al$, it follows that $\al=\be$. But then the coefficient of $\x^\ga$ in $f^2$ is just the coefficient belonging to $\x^\al$ in $f$ squared. \end{proof} \begin{thm}\label{sosvc} Let $(K,\le)$ be an ordered field, $\ell\in\N_0$, $p_1,\dots,p_\ell\in K[\x]$ and $f:=\sum_{i=1}^\ell p_i^2$. Then $N(f)=2\conv(N(p_1)\cup\ldots\cup N(p_\ell))$ and all vertex coefficients of $f$ are positive. 
\end{thm} \begin{proof} For each $i\in\{1,\dots,\ell\}$, we have by \ref{vertexsquare} that $N(p_i^2)=2N(p_i)$ and that all vertex coefficients of $p_i^2$ are positive. By \ref{novertexcancellation}, \begin{align*} N(f)&=\conv(N(p_1^2)\cup\ldots\cup N(p_\ell^2))=\conv(2N(p_1)\cup\ldots\cup2N(p_\ell))\\ &=2\conv(N(p_1)\cup\ldots\cup N(p_\ell)) \end{align*} and all vertex coefficients of $f$ are positive. \end{proof} \begin{ex}\label{motzkin} For the \emph{Motzkin polynomial} $f:=X^4Y^2+X^2Y^4-3X^2Y^2+1\in\R[X,Y]$, we have $f\ge0$ on $\R^2$ but $f\notin\sum\R[X,Y]^2$. At first we show $f\ge0$ on $\R^2$ in three different ways: \begin{enumerate}[(1)] \item From the inequality of arithmetic and geometric means known from analysis, it follows that $\sqrt[3]{abc}\le\frac13(a+b+c)$ for all $a,b,c\in\R_{\ge0}$. Setting here $a:=x^4y^2$, $b:=x^2y^4$ and $c:=1$ for arbitrary $x,y\in\R$, we deduce $x^2y^2\le\frac13(x^4y^2+x^2y^4+1)$. \item \begin{align*} (1+X^2)f&=X^4Y^2+X^2Y^4-3X^2Y^2+1+X^6Y^2+X^4Y^4-3X^4Y^2+X^2\\ &=1-2X^2Y^2+X^4Y^4+X^2-2X^2Y^2+X^2Y^4+X^2Y^2-2X^4Y^2+X^6Y^2\\ &=(1-X^2Y^2)^2+X^2(1-Y^2)^2+X^2Y^2(1-X^2 )^2\in\sum\R[X,Y]^2 \end{align*} \item \begin{align*} f(X^3,Y^3)&=X^{12}Y^6+X^6Y^{12}-3X^6Y^6+1\\ &=X^4Y^2-X^8Y^4-X^6Y^6+\frac14X^{12}Y^6+\frac12X^{10}Y^8+\frac14X^8Y^{10}\\ &\quad+X^2Y^4-X^6Y^6-X^4Y^8+\frac14X^{10}Y^8+\frac12X^8Y^{10}+\frac14X^6Y^{12}\\ &\quad+1-X^4Y^2-X^2Y^4+\frac14X^8Y^4+\frac12X^6Y^6+\frac14X^4Y^8\\ &\quad+\frac34X^8Y^4-\frac32X^6Y^6+\frac34X^4Y^8\\ &\quad+\frac34X^{10}Y^8-\frac32X^8Y^{10}+\frac34X^6Y^{12}\\ &\quad+\frac34X^{12}Y^6-\frac32X^{10}Y^8+\frac34X^8Y^{10}\\ &=\left(X^2Y-\frac12X^4Y^5-\frac12X^6Y^3\right)^2\\ &\quad+\left(XY^2-\frac12X^3Y^6-\frac12X^5Y^4\right)^2\\ &\quad+\left(1-\frac12X^2Y^4-\frac12X^4Y^2\right)^2\\ &\quad+\frac34\left(X^2Y^4-X^4Y^2\right)^2\\ &\quad+\frac34(X^3Y^6-X^5Y^4)^2\\ &\quad+\frac34(X^4Y^5-X^6Y^3)^2 \end{align*} Now we show $f\notin\sum\R[X,Y]^2$: \begin{align*} 
N(f)&=\conv(\supp(f))=\conv\{(4,2),(2,4),(2,2),(0,0)\}\\ &=\conv\{(4,2),(2,4),(0,0)\}. \end{align*} Assume $f=\sum_{i=1}^\ell p_i^2$ with $\ell\in\N_0$ and $p_1,\dots,p_\ell\in\R[X,Y]$. Then \[N(p_i)\subseteq\conv(N(p_1)\cup\ldots\cup N(p_\ell))=\frac12N(f)=\conv\{(2,1),(1,2),(0,0)\}\] by \ref{sosvc} and hence $\supp(p_i)\subseteq\N_0^2\cap N(p_i)\subseteq\N_0^2\cap\conv\{(2,1),(1,2),(0,0)\}= \{(0,0),(1,1),(2,1),(1,2)\}$ for all $i\in\{1,\dots,\ell\}$. The coefficient of $X^2Y^2$ in $p_i^2$ is therefore the square of the coefficient of $XY$ in $p_i$ and hence nonnegative. Then the coefficient of $X^2Y^2$ in $f$ is also nonnegative $\lightning$. This shows $f\notin\sum\R[X,Y]^2$. Thus one can neither generalize \ref{so2s}(a)$\implies$(c) to polynomials in several variables nor \ref{son1s}(a)$\implies$(b) to polynomials of arbitrary degree. Note also that exactly the same proof shows even $f+c\notin\sum\R[X,Y]^2$ for all $c\in\R$. By \ref{psdpsdhom}, the \emph{Motzkin form} $f^*:=X^4Y^2+X^2Y^4-3X^2Y^2Z^2+Z^6$ is psd [$\to$ \ref{psdpd}] but likewise not a sum of squares of polynomials. Again by \ref{psdpsdhom}, the dehomogenizations $f^*(1,Y,Z)=Y^2+Y^4-3Y^2Z^2+Z^6$ and $f^*(X,1,Z)=X^4+X^2-3X^2Z^2+Z^6$ are also polynomials that are $\ge0$ on $\R^2$ but are not sums of squares of polynomials. \end{enumerate} \end{ex} \section{Artin's solution to Hilbert's 17th problem} \begin{lem}\label{pqpos} Let $R$ be a real closed field and $f,p,q\in R[\x]$. Suppose $q\ne0$, $f=\frac pq$, $p\ge0$ on $R^n$ and $q\ge0$ on $R^n$. Then $f\ge0$ on $R^n$. \end{lem} \begin{proof} Using the Tarski principle \ref{tprinciple}, one can reduce to the case $R=\R$. But then the subset $\{x\in\R^n\mid f(x)<0\}$ of $\{x\in\R^n\mid q(x)=0\}$ is open in $\R^n$ and therefore empty since otherwise $q=0$ would follow from \ref{pol0}. \end{proof} \noindent In the year 1900, Hilbert presented his famous list of 23 seminal problems at the International Congress of Mathematicians in Paris. 
In 1927, Artin gave a positive solution to the 17th of these problems. This corresponds to the case $K=\R$ in the following theorem. \begin{thm}[Artin]\label{artin} Suppose $R$ is a real closed field and $(K,\le)$ an ordered subfield of $R$. Let $f\in K[\x]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f\ge0$ on $R^n$ \item $f\in\sum K_{\ge0}K(\x)^2$ \end{enumerate} \end{thm} \begin{proof} (b)$\implies$(a) follows from Lemma \ref{pqpos}. We show (a)$\implies$(b) by contraposition. Suppose $f\notin\sum K_{\ge0}K(\x)^2$. To show: $\exists x\in R^n:f(x)<0$. Since $\sum K_{\ge0}K(\x)^2$ is now a proper preorder of $K(\x)$ [$\to$ \ref{defpreorder}, \ref{preproper}], there is by \ref{artin-schreier} an order $P$ of $K(\x)$ with $f\notin P$. Set $R':=\overline{(K(\x),P)}$. Then there is an $x\in R'^n$ with $f(x)<0$, namely $x:=(X_1,\dots,X_n)$, since $f(x)=f<0$ in $R'$. Due to $K_{\ge0}\subseteq P\subseteq R'^2$, $(K,\le)$ is an \emph{ordered} subfield of $R'$. Since the $K$-semialgebraic set $\{x\in R'^n\mid f(x)<0\}$ is nonempty, its transfer $\{x\in R^n\mid f(x)<0\}$ to $R$ [$\to$ \ref{transfer}] is also nonempty. \end{proof} \begin{cor}{}\emph{[$\to$ \ref{cassels}]} Suppose $R$ is a real closed field and $(K,\le)$ an ordered subfield of $R$. Let $f\in K[X]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f\ge0$ on $R$ \item $f\in\sum K_{\ge0}K[X]^2$ \end{enumerate} \end{cor} \begin{proof} (b)$\implies$(a) is trivial. \smallskip (a)$\implies$(b) follows from \ref{artin} and \ref{cassels}. \end{proof} \section{The Gram matrix method} \begin{thm}\label{gram} Let $K$ be a Euclidean field, $f\in K[\x]$ and $\frac12N(f)\cap\N_0^n\subseteq\{\al_1,\dots,\al_m\}\subseteq\N_0^n$ \emph{(for instance set $\{\al_1,\dots,\al_m\}$ equal to $\frac12N(f)\cap\N_0^n$ or to $\{\al\in\N_0^n\mid 2|\al|\le\deg f\}$)}. Set $v:=\begin{pmatrix}\x^{\al_1}\\\vdots\\\x^{\al_m}\end{pmatrix}$. 
Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f\in\sum K[\x]^2$ \item There is a \emph{psd} matrix \emph{[$\to$ \ref{psdpd}(b)]} $G\in SK^{m\times m}$ (``Gram matrix'') satisfying $f=v^TGv$. \item $f$ is a sum of $m$ squares in $K[\x]$. \end{enumerate} \end{thm} \begin{proof} \underline{(a)$\implies$(b)}\quad Let $\ell\in\N_0$ and $p_1,\dots,p_\ell\in K[\x]$ with $f=\sum_{i=1}^\ell p_i^2$. By \ref{sosvc}, we have $\supp(p_i)\subseteq\frac12N(f)\cap\N_0^n\subseteq\{\al_1,\dots,\al_m\}$. Hence there is an $A\in K^{\ell\times m}$ such that \[Av=\begin{pmatrix}p_1\\\vdots\\p_\ell\end{pmatrix}.\] It follows that $f=\begin{pmatrix}p_1&\dots&p_\ell\end{pmatrix} \begin{pmatrix}p_1\\\vdots\\p_\ell\end{pmatrix}=(Av)^TAv=v^TA^TAv=v^TGv$ where $G:=A^TA\in SK^{m\times m}$. By \ref{psdeq}, $G$ is psd. \smallskip \underline{(b)$\implies$(c)}\quad Let $G\in SK^{m\times m}$ be psd with $f=v^TGv$. Choose according to \ref{psdeq} an $A\in K^{m\times m}$ satisfying $G=A^TA$. Write \[Av=\begin{pmatrix}p_1\\\vdots\\p_m\end{pmatrix}.\] Then $p_1,\dots,p_m\in K[\x]$ and \[v^TGv=v^TA^TAv=(Av)^TAv= \begin{pmatrix}p_1&\dots&p_m\end{pmatrix}\begin{pmatrix}p_1\\\vdots\\p_m\end{pmatrix} =\sum_{i=1}^mp_i^2.\] \smallskip \underline{(c)$\implies$(a)} is trivial. \end{proof} \begin{ex} Let $K$ be a Euclidean field and $f:=2X_1^4+5X_2^4-X_1^2X_2^2+2X_1^3X_2\in K[X_1,X_2]$. Then $N(f)=\conv\{(4,0),(0,4)\}$ and therefore \[\frac12N(f)\cap\N_0^2=\{(2,0),(1,1),(0,2)\}.\] Set $v:=\begin{pmatrix}X_1^2\\X_1X_2\\X_2^2\end{pmatrix}$. 
From $\{G\in SK^{3\times 3}\mid f=v^TGv\}=\left\{\begin{pmatrix}2&1&a\\1&-2a-1&0\\a&0&5\end{pmatrix}\mid a\in K\right\}$, we obtain \[f\in\sum K[X_1,X_2]^2\iff\exists a\in K:\begin{pmatrix}2&1&a\\1&-2a-1&0\\a&0&5\end{pmatrix} \text{ psd.}\] For all $a\in K$, we have \begin{align*} &\det\begin{pmatrix}2+T&1&a\\1&T-2a-1&0\\a&0&5+T\end{pmatrix} =(2+T)(T-2a-1)(5+T)-a^2(T-2a-1)-5-T\\ &=(T^2-2aT+T-4a-2)(5+T)-(1+a^2)T+2a^3+a^2-5\\ &=T^3-2aT^2+T^2-4aT-2T+5T^2-10aT+5T-20a-10-(1+a^2)T+2a^3+a^2-5\\ &=T^3+(6-2a)T^2+(2-14a-a^2)T-15-20a+a^2+2a^3 \end{align*} and by \ref{psdeq}(e), we obtain \[\begin{pmatrix}2&1&a\\1&-2a-1&0\\a&0&5\end{pmatrix}\text{ psd}\iff \begin{array}[c]{rl} 2a^3+a^2-20a-15\ge0\\ \et\ -a^2-14a+2\ge0\\ \et\ -2a+6\ge0&. \end{array} \] Set $a:=-3$. Then $2a^3+a^2-20a-15=-2\cdot27+9+60-15=-54+9+60-15=0$, $-a^2-14a+2=-9+42+2=35\ge0$ and $-2a+6=12\ge0$. For this reason $f\in\sum K[X_1,X_2]^2$. The quadratic form \[q:=\begin{pmatrix}T_1&T_2&T_3\end{pmatrix} \begin{pmatrix}2&1&a\\1&-2a-1&0\\a&0&5\end{pmatrix} \begin{pmatrix}T_1\\T_2\\T_3\end{pmatrix}\in K[T_1,T_2,T_3]\] obviously satisfies \[q(X_1^2,X_1X_2,X_2^2)=v^T\begin{pmatrix}2&1&a\\1&-2a-1&0\\a&0&5\end{pmatrix}v=f.\] Because of \[\sg q\overset{\text{\ref{sospsd2}(d)}}=\rk q=\rk\begin{pmatrix}2&1&-3\\1&5&0\\-3&0&5\end{pmatrix}=2,\] $q$ is a sum of $2$ squares of linear forms in $K[T_1,T_2,T_3]$ and thus $f$ is a sum of $2$ squares of polynomials. To compute this representation explicitly, we employ the procedure from \ref{longremi}(f): \begin{align*} q&=2T_1^2+2T_1T_2-6T_1T_3+5T_2^2+5T_3^2\\ &=2\Big(\underbrace{T_1+\frac12T_2-\frac32T_3}_{\ell_1}\Big)^2-2\Big(\frac12T_2-\frac32T_3\Big)^2+5T_2^2+5T_3^2\\ &=2\ell_1^2+\frac92T_2^2+3T_2T_3+\frac12T_3^2\\ &=2\ell_1^2+\frac92\Big(\underbrace{T_2+\frac13T_3}_{\ell_2}\Big)^2=2\ell_1^2+\frac92\ell_2^2\\ &=\frac12(2T_1+T_2-3T_3)^2+\frac12(3T_2+T_3)^2. \end{align*} Hence $f=\frac12(2X_1^2+X_1X_2-3X_2^2)^2+\frac12(3X_1X_2+X_2^2)^2$. 
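As a concluding sanity check (not part of the text's method, merely a mechanical verification with exact integer arithmetic in Python), one can confirm the identity above in the equivalent fraction-free form $2f=(2X_1^2+X_1X_2-3X_2^2)^2+(3X_1X_2+X_2^2)^2$, and also that the Gram matrix for $a=-3$ is psd; recall that a symmetric matrix is psd if and only if \emph{all} (not only the leading) principal minors are $\ge0$:

```python
from itertools import combinations

# Polynomials in X1, X2 as dicts {(deg X1, deg X2): integer coefficient}.
def pmul(p, q):
    r = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + a * b
    return {m: c for m, c in r.items() if c != 0}

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c != 0}

f  = {(4, 0): 2, (0, 4): 5, (2, 2): -1, (3, 1): 2}  # 2X1^4+5X2^4-X1^2X2^2+2X1^3X2
p1 = {(2, 0): 2, (1, 1): 1, (0, 2): -3}             # 2X1^2+X1X2-3X2^2
p2 = {(1, 1): 3, (0, 2): 1}                         # 3X1X2+X2^2
# f = (1/2)p1^2 + (1/2)p2^2, checked without fractions as 2f = p1^2 + p2^2:
assert padd(pmul(p1, p1), pmul(p2, p2)) == {m: 2 * c for m, c in f.items()}

# Gram matrix for a = -3; verify that every principal minor is >= 0.
G = [[2, 1, -3], [1, 5, 0], [-3, 0, 5]]
def minor(M, idx):  # determinant of the principal submatrix M[idx][idx]
    S = [[M[i][j] for j in idx] for i in idx]
    if len(S) == 1:
        return S[0][0]
    if len(S) == 2:
        return S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return (S[0][0] * (S[1][1] * S[2][2] - S[1][2] * S[2][1])
            - S[0][1] * (S[1][0] * S[2][2] - S[1][2] * S[2][0])
            + S[0][2] * (S[1][0] * S[2][1] - S[1][1] * S[2][0]))
assert all(minor(G, idx) >= 0
           for r in (1, 2, 3) for idx in combinations(range(3), r))
```

Note that the $3\times3$ principal minor is $\det G=2a^3+a^2-20a-15$ evaluated at $a=-3$, i.e.\ $0$; this is why $G$ is singular and $\rk q=2$.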
\end{ex} \chapter{Prime cones and real Stellensätze} \section{The real spectrum of a commutative ring} In this section, we let $A$, $B$ and $C$ always be commutative rings. \begin{reminder}\label{specfunctor} An ideal $\p$ of $A$ is called a prime ideal of $A$ if \[1\notin\p\quad\text{ and }\quad\forall a,b\in A:(ab\in\p\implies(a\in\p\text{ or }b\in\p)).\] We call $\spec A=\{\p\mid\p\text{ prime ideal of $A$}\}$ the \emph{spectrum} of $A$. If $I$ is an ideal of $A$, then \[I\in\spec A\iff A/I\text{ is an integral domain.}\] Because every integral domain extends to a field (e.g., to its quotient field) and every field to an algebraically closed field (e.g., to its algebraic closure), $\spec A$ consists exactly of the kernels of ring homomorphisms of $A$ in \alalal{integral domains}{fields}{algebraically closed fields}. Every ring homomorphism $\ph\colon A\to B$ induces a map \[ \spec\ph\colon\spec B\to\spec A, \q\mapsto\ph^{-1}(\q), \] for if $\q\in\spec B$, then $\p:=\ph^{-1}(\q)\in\spec A$ since $\ph$ induces an embedding $A/\p\hookrightarrow B/\q$ by the homomorphism theorem. If $\ph\colon A\to B$ and $\ps\colon B\to C$ are ring homomorphisms, then \[\spec(\ps\circ\ph)=(\spec\ph)\circ(\spec\ps).\] \end{reminder} \begin{notation} If $A$ is an integral domain, then \[\qf A:=(A\setminus\{0\})^{-1}A=\left\{\frac ab\mid a,b\in A,b\ne0\right\}\] denotes its quotient field. \end{notation} \begin{df}\label{introrealspectrum} We call $\sper A:=\{(\p,\le)\mid\p\in\spec A,\ \text{$\le$ order of }\qf(A/\p)\}$ the \emph{real spectrum} of $A$. \end{df} \begin{rem}\label{sperfunctor} Every ring homomorphism $\ph\colon A\to B$ induces a map \[\sper\ph\colon\sper B\to\sper A,\ (\q,\le)\mapsto(\ph^{-1}(\q),\le'),\] where $\le'$ denotes the order of $\qf(A/\p)$ with $\p:=\ph^{-1}(\q)$ which makes the canonical embedding $\qf(A/\p)\hookrightarrow\qf(B/\q)$ into an embedding $(\qf(A/\p),\le')\hookrightarrow(\qf(B/\q),\le)$ of ordered fields. 
If $\ph\colon A\to B$ and $\ps\colon B\to C$ are ring homomorphisms, then we have again \[\sper(\ps\circ\ph)=(\sper\ph)\circ(\sper\ps).\] \end{rem} \begin{ex}\label{specsperex} Since $\R[X]$ is a principal ideal domain, the fundamental theorem \ref{realfund} implies \[\spec\R[X]=\{(0)\}\cup\{(X-a)\mid a\in\R\}\cup \{(\underbrace{(X-a)^2+b^2}_{\rlap{$\scriptstyle=(X-(a+b\ii))(X-(a-b\ii))$}})\mid a,b\in\R,b\ne0\}\] where $((X-a)^2+b^2)=((X-a')^2+b'^2)\iff(a=a'\et|b|=|b'|)$ for all $a,a',b,b'\in\R$. The spectrum of $\R[X]$ can therefore be seen as consisting of \begin{itemize} \item one ``generic point'', \item the real numbers, and \item the unordered pairs of two distinct conjugate complex numbers. \end{itemize} Because of $\qf(\R[X]/(0))\cong\qf(\R[X])=\R(X)$, $\qf(\R[X]/(X-a))=\R[X]/(X-a)\cong\R$ for all $a\in\R$ and $\qf(\R[X]/((X-a)^2+b^2))\cong\R[X]/((X-a)^2+b^2)\cong\C$ for $a,b\in\R$ with $b\ne0$, we obtain in the notation of \ref{ordersrx} (and with the identification $\R[X]/(0)=\R[X]$) \begin{align*} \sper\R[X]=\{((0),P_{-\infty}),((0),P_\infty)\}&\cup\{((0),P_{a-})\mid a\in\R\}\cup\{((0),P_{a+})\mid a\in\R\}\\ &\cup\{((X-a),(\R[X]/(X-a))^2)\mid a\in\R\}. \end{align*} The \emph{real} spectrum of $\R[X]$ can thus be pictured as a collection consisting of \begin{itemize} \item the two points at infinity, \item for each real number, two infinitesimally close points (one on each side), and \item the real numbers. \end{itemize} \end{ex} \begin{df}\label{supportmap} We call $\supp\colon\sper A\to\spec A,\ (\p,\le)\mapsto\p$ the \emph{support map}. \end{df} \begin{df}{}[$\to$ \ref{unary-order}(a), \ref{specfunctor}]\label{dfprimecone} A subset $P$ of $A$ is called a \emph{prime cone} of $A$ if $P+P\subseteq P$, $PP\subseteq P$, $P\cup-P=A$, $-1\notin P$ and $\forall a,b\in A:(ab\in P\implies(a\in P\text{ or }-b\in P))$. \end{df} \begin{pro}\label{primeconeispreorder} Every prime cone of $A$ is a proper preorder of $A$ \emph{[$\to$ \ref{defpreorder}]}. 
\end{pro} \begin{proof} Suppose $P$ is a prime cone of $A$ and $a\in A$. To show: $a^2\in P$. Due to $a\in A=P\cup-P$, we have $a\in P$ or $-a\in P$. In the first case we get $a^2=aa\in PP\subseteq P$ and in the second $a^2=(-a)^2=(-a)(-a)\in PP\subseteq P$. \end{proof} \begin{pro}\label{primeconechar} Suppose $P\subseteq A$ satisfies $P+P\subseteq P$, $PP\subseteq P$ and $P\cup-P=A$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $P$ is a prime cone of $A$. \item $-1\notin P$ and $\forall a,b\in A:(ab\in P\implies(a\in P\text{ or }-b\in P))$ \item $P\cap-P$ is a prime ideal of $A$ \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\iff$(b)} is Definition \ref{dfprimecone}. \smallskip \underline{(b)$\implies$(c)}\quad Suppose (b) holds and set $\p:=P\cap-P$. Then $\p$ is obviously a subgroup of $A$ and we have $A\p=(P\cup-P)\p=P\p\cup-P\p=P(P\cap-P)\cup-P(P\cap-P) \subseteq(PP\cap-PP)\cup(-PP\cap PP)\subseteq(P\cap-P)\cup(-P\cap P)=P\cap-P=\p$, i.e., $\p$ is an ideal of $A$ (if $\frac12\in A$ this follows alternatively from \ref{primeconeispreorder} and \ref{supportideal}). From $-1\notin P$ we get $1\notin\p$. It remains to show $\forall a,b\in A\colon(ab\in\p\implies(a\in\p\text{ or }b\in\p))$. To this end, let $a,b\in A$ with $a\notin\p$ and $b\notin\p$. To show: $ab\notin\p$. WLOG $a\notin P$ and $-b\notin P$ (otherwise replace $a$ by $-a$ and/or $-b$ by $b$, taking into account $-\p=\p$). By hypothesis, we obtain then $ab\notin P$ and thus $ab\notin\p$. \smallskip \underline{(c)$\implies$(b)}\quad Suppose (c) holds. Due to $P\cup-P=A$, we have $1\in P$ or $-1\in P$. If $-1\in P$, then again $1=(-1)(-1)\in PP\subseteq P$. Hence $1\in P$. If we had $-1\in P$, then $1\in\p:=P\cap-P\in\spec A$ $\lightning$. Thus $-1\notin P$. Let now $a,b\in A$ such that $a\notin P$ and $-b\notin P$. To show: $ab\notin P$. Because of $P\cup-P=A$, we have $a\in-P$ and $b\in P$ from which $-ab=(-a)b\in PP\subseteq P$. 
If we had in addition $ab\in P$, then $ab\in\p$ and thus $a\in\p\subseteq P$ or $b\in\p\subseteq-P$ $\lightning$. Hence $ab\notin P$. \end{proof} \begin{rem}\label{primeconefield} If $K$ is a field, then \ref{primeconechar} signifies because of $\spec K=\{(0)\}$ just that the prime cones of $K$ are exactly the orders of $K$ [$\to$ \ref{unaryrem}]. \end{rem} \begin{lem}\label{primeconeinfield} Let $P$ be a prime cone of $A$ and $\p:=P\cap-P$ [$\to$ \ref{primeconechar}(c)]. Then \[P_\p:=\left\{\frac{\cc a\p}{\cc s\p}\mid a\in A,s\in A\setminus\p,as\in P\right\}\] is an order (i.e., a prime cone [$\to$ \ref{primeconefield}]) of $\qf(A/\p)$. \end{lem} \begin{proof} To show [$\to$ \ref{unaryrem}(a)]: \begin{enumerate}[(a)] \item $P_\p+P_\p\subseteq P_\p$, \item $P_\p P_\p\subseteq P_\p$, \item $P_\p\cup-P_\p=\qf(A/\p)$, and \item $P_\p\cap-P_\p=(0)$. \end{enumerate} (a) Suppose that $a,b\in A$ and $s,t\in A\setminus\p$ with $as,bt\in P$ define arbitrary elements $\frac{\overline a}{\overline s},\frac{\overline b}{\overline t}\in P_\p$. Then \[\frac{\cc a\p}{\cc s\p}+\frac{\cc b\p}{\cc t\p}=\frac{\cc{at}\p}{\cc{st}\p}+\frac{\cc{bs}\p}{\cc{st}\p}= \frac{\cc{at+bs}\p}{\cc{st}\p}\in P_\p,\] since $at+bs\in A$, $st\in A\setminus\p$ and $(at+bs)st=ast^2+bts^2\in PA^2+PA^2\subseteq PP+PP \subseteq P+P\subseteq P$. \smallskip (b) Let again $a,b\in A$ and $s,t\in A\setminus\p$ satisfy $as,bt\in P$. Then \[\frac{\cc a\p}{\cc s\p}\frac{\cc b\p}{\cc t\p}=\frac{\cc{ab}\p}{\cc{st}\p}\in P_\p\] since $ab\in A$, $st\in A\setminus\p$ and $abst=(as)(bt)\in PP\subseteq P$. \smallskip (c) Let $a\in A$ and $s\in A\setminus\p$ define an arbitrary element $\frac{\overline a}{\overline s}\in\qf(A/\p)$. Because of $P\cup-P=A$, we have $as\in P$ or $-as\in P$, i.e., $-\frac{\overline a}{\overline s}=\frac{\overline{-a}}{\overline s}\in P_\p$ or $\frac{\overline a}{\overline s}\in P_\p$. 
\smallskip (d) Suppose $a,b\in A$ and $s,t\in A\setminus\p$ with $as,bt\in P$ satisfy \[\frac{\cc a\p}{\cc s\p}=-\frac{\cc b\p}{\cc t\p}.\] Then $at+bs\in\p$ and therefore $ast^2+bts^2=st(at+bs)\in\p\subseteq-P$, i.e., $-ast^2-bts^2\in P$. From $ast^2=(as)t^2\in PA^2\subseteq P$ and $bts^2=(bt)s^2\in PA^2\subseteq P$ we deduce $-ast^2,-bts^2\in P$. Consequently, $ast^2,bts^2\in\p$ and thus $a,b\in\p$. We obtain \[\frac{\cc a\p}{\cc s\p}=0=\frac{\cc b\p}{\cc t\p}\] as desired. \end{proof} \begin{lem}{}[$\to$ \ref{unary-order}]\label{makeintoprimecone} Let $(\p,\le)\in\sper A$. Then $\{a\in A\mid\cc a\p\ge0\}$ is a prime cone of $A$. \end{lem} \begin{proof} Set $P:=\{a\in A\mid\cc a\p\ge0\}$. Then $P+P\subseteq P$, $PP\subseteq P$, $P\cup-P=A$ and $P\cap-P=\p\in\spec A$. Now $P$ is a prime cone of $A$ by \ref{primeconechar}(c). \end{proof} \begin{pro}{}\emph{[$\to$ \ref{unary-order}(c)]}\label{sperprimecones} The correspondence \begin{align*} (\p,\le)&\mapsto\{a\in A\mid\cc a\p\ge0\}\\ (P\cap-P,P_{P\cap-P})&\mapsfrom P \end{align*} defines a bijection between $\sper A$ and the set of all prime cones of $A$. \end{pro} \begin{proof} The well-definedness of both maps follows from Lemmata \ref{primeconeinfield} and \ref{makeintoprimecone}. Now first let $(\p,\le)\in\sper A$ and $P:=\{a\in A\mid\cc a\p\ge0\}$. We show $(\p,\le)=(P\cap-P,P_{P\cap-P})$. It is clear that $\p=P\cap-P$. Finally, \begin{align*} P_{P\cap-P}=P_\p &=\left\{\frac{\cc a\p}{\cc s\p}\mid a\in A,s\in A\setminus\p,as\in P\right\}\\ &=\left\{\frac{\cc a\p}{\cc s\p}\mid a\in A,s\in A\setminus\p,\cc{as}\p\ge0\right\}\\ &=\left\{\frac{\cc a\p}{\cc s\p}\mid a\in A,s\in A\setminus\p,\frac{\cc a\p}{\cc s\p}\ge0\right\}= \{x\in\qf(A/\p)\mid x\ge0\}. \end{align*} Conversely, suppose that $P$ is a prime cone of $A$ and $\p:=P\cap-P$. We show \[P=\{a\in A\mid\cc a\p\in P_\p\}.\] Here ``$\subseteq$'' is trivial. To show ``$\supseteq$'', let $a\in A$ such that $\cc a\p\in P_\p$. 
Then there are $b\in A$ and $s\in A\setminus\p$ such that $bs\in P$ and $\overline a=\frac{\overline b}{\overline s}$. It follows that $\cc{as^2}\p=\cc{bs}\p$ and thus $as^2\in bs+\p\subseteq P+\p\subseteq P+P\subseteq P$. Since $P$ is a prime cone, we deduce $a\in P$ or $-s^2\in P$. If we had $-s^2\in P$, then $s^2\in P\cap-P=\p$ (since $s^2\in A^2\subseteq P$) and therefore $s\in\p\ \lightning$. \end{proof} \begin{rem}{}[$\to$ \ref{unaryrem}]\label{newlang} As a result of \ref{sperprimecones}, we can see elements of the real spectrum as prime cones. We reformulate some of the above in this new language: \begin{enumerate}[(a)] \item Remark \ref{sperfunctor}: Let $\ph\colon A\to B$ be a ring homomorphism. Then $\ph$ induces the map $\sper\ph\colon\sper B\to\sper A,\ Q\mapsto\ph^{-1}(Q)$. Suppose namely that $Q\in\sper B$, $\q:=Q\cap-Q$, $P:=\ph^{-1}(Q)$ and $\p:=P\cap-P$. Then $\ph^{-1}(\q)=\ph^{-1}(Q)\cap-\ph^{-1}(Q)= P\cap-P=\p$ and the embedding $\qf(A/\p)\hookrightarrow\qf(B/\q)$ induced by $\ph$ is an embedding of ordered fields $(\qf(A/\p),P_\p)\hookrightarrow(\qf(B/\q),Q_\q)$ because for $a\in A$ and $s\in A\setminus\p$ with $as\in P$ we have $\ph(a)\in B$, $\ph(s)\in B\setminus\q$, $\ph(a)\ph(s)=\ph(as)\in\ph(P)\subseteq Q$. \item Definition \ref{supportmap}: The support map is $\supp\colon\sper A\to\spec A,\ P\mapsto P\cap-P$ [$\to$ \ref{sperprimecones}]. In particular, the Definitions \ref{supportmap} and \ref{supportideal} are compatible. \end{enumerate} \end{rem} \begin{df}\label{realrep} For every $(\p,\le)\in\sper A$, we call the real closed field \[R_{(\p,\le)}:=\overline{(\qf(A/\p),\le)}\] the \emph{representation field} of $(\p,\le)$ and the ring homomorphism \[\rh_{(\p,\le)}\colon A\to R_{(\p,\le)},\ a\mapsto\cc a\p\] the \emph{representation} of $(\p,\le)$. \end{df} \begin{pro}\label{kernelrep} Let $P\in\sper A$. Then $P=\rh_P^{-1}(R_P^2)$ and $\supp P=\ker\rh_P$. 
\end{pro} \begin{proof} $\rh_P^{-1}(R_P^2)=\{a\in A\mid\rh_P(a)\ge0\text{ in }R_P\}=\{a\in A\mid\cc a{\supp P}\in P_{\supp P}\} \overset{\ref{sperprimecones}}=P$ and therefore \[\supp P=P\cap-P=\rh_P^{-1}(R_P^2)\cap-\rh_P^{-1}(R_P^2) =\rh_P^{-1}(R_P^2\cap-R_P^2)=\rh_P^{-1}(\{0\})=\ker\rh_P.\] \end{proof} \begin{pro}{}\emph{[$\to$ \ref{specfunctor}]}\label{sperviahom} Let $P$ be a set. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $P\in\sper A$ \item There is an ordered field $(K,\le)$ and a ring homomorphism $\ph\colon A\to K$ such that $P=\ph^{-1}(K_{\ge0})$. \item There exists a real closed field $R$ and a ring homomorphism $\ph\colon A\to R$ such that $P=\ph^{-1}(R^2)$. \end{enumerate} \end{pro} \begin{proof} (a)$\overset{\ref{kernelrep}}\implies$(c)$\overset{\text{trivial}}\implies$(b)$\overset{\text{\ref{newlang}(a)}}\implies$(a) \end{proof} \section{Preorders and maximal prime cones} Throughout this section, let $A$ be a commutative ring. \begin{pro}\label{preorderprimecone} Let $T$ be a proper preorder of $A$ \emph{[$\to$ \ref{defpreorder}]}. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $T$ is a prime cone of $A$. \item $\forall a,b\in A:(ab\in T\implies(a\in T\text{ or }-b\in T))$ \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)} is trivial by Definition \ref{dfprimecone}. \smallskip\underline{(b)$\implies$(a)}\quad Suppose (b) holds. By Definition \ref{dfprimecone}, it suffices to show $T\cup-T=A$. But for all $a\in A$ it follows from (b) that $a\in T$ or $-a\in T$ because of $aa=a^2\in T$. \end{proof} \begin{thm}{}\emph{[$\to$ \ref{orderpreorder}]}\label{maxpreorder} Suppose $T$ is a maximal proper preorder \emph{[$\to$ \ref{defpreorder}]} of $A$. Then $T$ is a prime cone of $A$. \end{thm} \begin{proof} We show \ref{preorderprimecone}(b). For this purpose let $a,b\in A$ satisfy $a\notin T$ and $-b\notin T$. 
Then $T+aT$ and $T-bT$ are preorders of $A$ [$\to$ \ref{againpreorder}] that properly contain $T$. Due to the maximality of $T$, neither $T+aT$ nor $T-bT$ is therefore proper as a preorder, i.e., $-1\in T+aT$ and $-1\in T-bT$. Choose $s,t\in T$ such that $-as\in1+T$ and $bt\in1+T$. Then $-abst\in(1+T)(1+T)\subseteq1+T$ and thus $-1\in T+abst\subseteq T+abT$. Since $T$ is proper, we conclude that $ab\notin T$ as desired. \end{proof} \begin{cor}\label{inmaxprimecone} Every proper preorder of $A$ is contained in a maximal prime cone of $A$. \end{cor} \begin{proof} Use \ref{maxpreorder} and Zorn's lemma. \end{proof} \begin{pro}\label{primeconeinclusion} Let $P,Q\in\sper A$ such that $P\subseteq Q$ and set $\q:=\supp Q$. Then $Q=P\cup\q$. \end{pro} \begin{proof} ``$\supseteq$'' is trivial. ``$\subseteq$'' Let $a\in Q\setminus P$. To show: $a\in\q$. From $-a\in P\subseteq Q$ we get $a\in Q\cap-Q=\q$. \end{proof} \begin{proterm}\label{spear} Let $P\in\sper A$. Then ``the spear'' \[\{Q\in\sper A\mid P\subseteq Q\}\] is a chain in the partially ordered set $\sper A$ that possesses a largest element (``a spearhead''). \end{proterm} \begin{proof} Let $Q_1,Q_2\in\sper A$ with $P\subseteq Q_1$ and $P\subseteq Q_2$. Suppose $Q_1\not\subseteq Q_2$. To show: $Q_2\subseteq Q_1$. Choose $a\in Q_1\setminus Q_2$. Let $b\in Q_2$. To show: $b\in Q_1$. We have $a-b\notin Q_2$ (or else $a\in Q_2\ \lightning$) and thus $a-b\notin P$ because of $P\subseteq Q_2$. Then $b-a\in P\subseteq Q_1$ and thus $b\in Q_1$. The existence of the ``spearhead'' now follows from \ref{inmaxprimecone}. \end{proof} \section{Quotients and localization} Throughout this section, we let $A$ be a commutative ring. \begin{pro} \alal{Preimages}{Images} of preorders \emph{[$\to$ \ref{defpreorder}]} under \alal{homomorphisms}{epimorphisms} of commutative rings are again preorders. \end{pro} \begin{proof} Exercise. \end{proof} \begin{pro} Let $I$ be an ideal of $A$. 
The correspondence \begin{align*} T&\mapsto\cc TI:=\{\cc aI\mid a\in T\}\\ \{a\in A\mid\cc aI\in P\}&\mapsfrom P \end{align*} defines a bijection between the set of \alal{preorders \emph{[$\to$ \ref{defpreorder}]}} {prime cones \emph{[$\to$ \ref{dfprimecone}]}} $T$ of $A$ with $I\subseteq T$ and the set of \alal{preorders}{prime cones} of $A/I$. \end{pro} \begin{proof} Exercise. \end{proof} \begin{lem}\label{preorderlocalize} Let $S\subseteq A$ be multiplicative and $T\subseteq A$ a preorder. Let \[\io\colon A\to S^{-1}A,\ a\mapsto\frac a1\] denote the canonical homomorphism. Then the preorder generated by $\io(T)$ in $S^{-1}A$ equals $S^{-2}T=\left\{\frac a{s^2}\mid a\in T,s\in S\right\}$. This preorder is proper if and only if $T\cap -S^2=\emptyset$. \end{lem} \begin{proof} Exercise. \end{proof} \begin{pro}\label{sperlocalize} Let $S\subseteq A$ be multiplicative. The correspondence \begin{align*} P&\mapsto S^{-2}P\\ \left\{a\in A\mid\frac a1\in Q\right\}&\mapsfrom Q \end{align*} gives rise to a bijection between $\{P\in\sper A\mid(\supp P)\cap S=\emptyset\}$ and $\sper(S^{-1}A)$. \end{pro} \begin{proof} Let $P\in\sper A$ with $(\supp P)\cap S=\emptyset$. By \ref{preorderlocalize}, $S^{-2}P$ is a proper preorder of $S^{-1}A$ since $P\cap-S^2\subseteq(P\cap-A^2)\cap(-S)\subseteq(P\cap-P)\cap(-S)= (\supp P)\cap-S=-((\supp P)\cap S)=-\emptyset=\emptyset$. To show that $S^{-2}P$ is a prime cone of $S^{-1}A$, we verify the condition from \ref{preorderprimecone}(b) where we use that for any two fractions in $S^{-1}A$, one can find a common denominator from $S^2$. Let $a,b\in A$ and $s\in S$ with $\frac a{s^2}\cdot\frac b{s^2}\in S^{-2}P$. To show: $\frac a{s^2}\in S^{-2}P$ or $-\frac b{s^2}\in S^{-2}P$. Choose $c\in P$ and $u\in S$ with $\frac{ab}{s^4}=\frac c{u^2}$. Then there is $v\in S$ such that $abu^2v=cs^4v$ and therefore $(au^2)(bv^2)=abu^2v^2=cs^4v^2\in P$. Since $P$ is a prime cone, it follows that $au^2\in P$ or $-bv^2\in P$. 
Hence $\frac a{s^2}=\frac{au^2}{(su)^2}\in S^{-2}P$ or $-\frac b{s^2}=\frac{-bv^2}{(sv)^2}\in S^{-2}P$. Conversely, let $Q\in\sper(S^{-1}A)$. For $\io\colon A\to S^{-1}A,\ a\mapsto\frac a1$, we have [$\to$ \ref{newlang}(a)] \[\left\{a\in A\mid\frac a1\in Q\right\} =(\sper\io)(Q)\in\sper A.\] If we had $s\in S$ with $\frac s1\in Q\cap-Q$, then $1=\frac ss=\frac1s\cdot\frac s1\in S^{-1}A(\supp Q)\subseteq\supp Q\ \lightning$. \smallskip It remains to show that the maps are inverse to each other: \begin{enumerate}[(a)] \item If $P\in\sper A$ with $(\supp P)\cap S=\emptyset$, then $P=\left\{a\in A\mid\frac a1\in S^{-2}P\right\}$. \item If $Q\in\sper(S^{-1}A)$, then $Q=\left\{\frac a{s^2}\mid a\in A,\frac a1\in Q,s\in S\right\}$. \end{enumerate} \smallskip To show (a), let $P\in\sper A$ with $(\supp P)\cap S=\emptyset$. ``$\subseteq$'' is trivial. ``$\supseteq$'' Let $a\in A$ with $\frac a1\in S^{-2}P$. Choose $b\in P$ and $s\in S$ with $\frac a1=\frac b{s^2}$. Then there is $t\in S$ such that $as^2t=bt$ and thus $as^2t^2=bt^2\in P$. It follows that $a\in P$ or $-s^2t^2\in P$. The latter would lead to $s^2t^2\in(\supp P)\cap S \lightning$. Hence $a\in P$. \smallskip To show (b), consider an arbitrary $Q\in\sper(S^{-1}A)$. ``$\supseteq$'' is trivial. ``$\subseteq$'' Let $b\in A$ and $s\in S$ with $\frac bs\in Q$. Then for $a:=sb\in A$, we have $\frac bs=\frac{sb}{s^2}=\frac a{s^2}$ and $\frac a1=\frac{sb}1=\left(\frac s1\right)^2\frac bs\in Q$. \end{proof} \section{Abstract real Stellensätze} \begin{df} Let $A$ be a commutative ring. We call the ring homomorphism \[A\to\prod_{(\p,\le)\in\sper A}R_{(\p,\le)},\ a\mapsto\left(\widehat a\colon(\p,\le)\mapsto\cc a\p\right)\] the \emph{real representation} of $A$. For $a\in A$, we say that $\widehat a$ is the \emph{function represented} by $a$ on the real spectrum of $A$. 
\end{df} \begin{thm}[abstract real Stellensatz \cite{kri,ste,pre}]\label{abstractstellensatz} Suppose $A$ is a commutative ring, $I\subseteq A$ an ideal, $S\subseteq A$ a multiplicative set and $T\subseteq A$ a preorder. Then the following conditions are equivalent: \begin{enumerate}[\normalfont(a)] \item There does \emph{not} exist any $P\in\sper A$ satisfying \begin{align*} \forall a\in I&:\widehat a(P)=0,\\ \forall s\in S&:\widehat s(P)\ne0\quad\text{and}\\ \forall t\in T&:\widehat t(P)\ge0. \end{align*} \item There are $a\in I$, $s\in S$ and $t\in T$ such that $a+s^2+t=0$. \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(b)}\quad Replacing $T$ by the preorder $T+I$, we can suppose WLOG $I=(0)$. Suppose (b) does not hold. By \ref{preorderlocalize}, $S^{-2}T$ is then a proper preorder of $S^{-1}A$. Consequently, $S^{-2}T$ is contained in a prime cone $Q$ of $S^{-1}A$ by \ref{inmaxprimecone}. Now \ref{sperlocalize} yields $P:=\left\{a\in A\mid\frac a1\in Q\right\}\in\sper A$ and $(\supp P)\cap S=\emptyset$. For all $s\in S$, we have $\widehat s(P)=\cc s{\supp P}\ne0$ in $R_P$ [$\to$ \ref{realrep}, \ref{sperprimecones}] since $s\notin\supp P$. For all $t\in T$, we have $\widehat t(P)\ge0$ because $t\in P$. \end{proof} \begin{termnot}\label{preorderedring} \begin{enumerate}[(a)] \item We call a pair $(A,T)$ consisting of a commutative ring $A$ and a preorder $T$ of $A$ a \emph{preordered ring}. \item If $(A,T)$ is a preordered ring, then we define its \emph{real spectrum} \[\sper(A,T):=\{P\in\sper A\mid T\subseteq P\}.\] \item{}[$\to$ \ref{intervals}(c)] If $A$ is a commutative ring, $a\in A$ and $S\subseteq\sper A$, then we write \begin{align*} \widehat a\ge0\text{ on $S$} &:\iff\forall P\in S:\widehat a(P)\ge0,\\ \widehat a>0\text{ on $S$} &:\iff\forall P\in S:\widehat a(P)>0, \end{align*} and so forth. 
\end{enumerate} \end{termnot} \begin{cor}[abstract Positivstellensatz]\label{abstractpositivstellensatz} Let $(A,T)$ be a preordered ring and $a\in A$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\widehat a>0$ on $\sper(A,T)$ \item $\exists t\in T:ta\in1+T$ \item $\exists t\in T:(1+t)a\in1+T$. \end{enumerate} \end{cor} \begin{proof} \underline{(b)$\implies$(c)}\quad If $t,t'\in T$ satisfy $ta=1+t'$, then \[(1+t+t')a=ta+(1+t')a=1+t'+ta^2\in1+T.\] \smallskip\underline{(c)$\implies$(a)} is trivial. \smallskip\underline{(a)$\implies$(b)} follows by applying \ref{abstractstellensatz} to the ideal $(0)$, the multiplicative set $\{1\}$ and the preorder $T-aT$. \end{proof} \begin{cor}[abstract Nichtnegativstellensatz]\label{abstractnichtnegativstellensatz} Let $(A,T)$ be a preordered ring and $a\in A$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\widehat a\ge0$ on $\sper(A,T)$ \item $\exists t\in T:\exists k\in\N_0:ta\in a^{2k}+T$ \end{enumerate} \end{cor} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \underline{(a)$\implies$(b)} follows by applying \ref{abstractstellensatz} to the ideal $(0)$, the multiplicative set $\{1,a,a^2,\dots\}$ and the preorder $T-aT$. \end{proof} \begin{cor}[abstract real Nullstellensatz \cite{kri,du2,ris,efr}]\label{abstractrealnullstellensatz} Let $A$ be a commutative ring, $I\subseteq A$ an ideal and $a\in A$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\widehat a=0$ on $\{P\in\sper A\mid I\subseteq\supp P\}$ \item $\exists k\in\N_0:\exists s\in\sum A^2:a^{2k}+s\in I$ \end{enumerate} \end{cor} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(b)} follows by applying \ref{abstractstellensatz} to the ideal $I$, the multiplicative set $\{1,a,a^2,\dots\}$ and the preorder $\sum A^2$. \end{proof} \section{The real radical ideal} Throughout this section, we let $A$ be a commutative ring.
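Before developing the real radical, let us make the certificate in condition (b) of the abstract real Nullstellensatz \ref{abstractrealnullstellensatz} concrete on a toy example. Consider the ideal $I=(X^2+Y^4)$ in $\R[X,Y]$, whose only real zero is the origin. The identities $X^2+(Y^2)^2\in I$ and $Y^4+X^2\in I$ are certificates of the form $a^{2k}+s\in I$ with $s\in\sum A^2$, for $a=X$ (with $k=1$, $s=(Y^2)^2$) and for $a=Y$ (with $k=2$, $s=X^2$); accordingly both coordinate functions vanish on every real zero of $I$. The following small sketch checks such certificates by multivariate polynomial division; the choice of ideal and the helper \texttt{in\_ideal} are our own illustrative devices, not taken from the text:

```python
# Hedged illustration (our own example, not from the lecture notes):
# verify Nullstellensatz certificates a^(2k) + s in I by polynomial reduction.
from sympy import symbols, reduced

X, Y = symbols('X Y')
gens = [X**2 + Y**4]  # generators of the ideal I in R[X, Y]

def in_ideal(p):
    """Return True if p reduces to 0 modulo the generators of I."""
    _, remainder = reduced(p, gens, X, Y)
    return remainder == 0

# Certificate for a = X: k = 1, s = (Y^2)^2 is a square, and a^2 + s lies in I.
assert in_ideal(X**2 + (Y**2)**2)
# Certificate for a = Y: k = 2, s = X^2, and a^4 + s lies in I.
assert in_ideal(Y**4 + X**2)
# Sanity check consistent with (a): the only real zero of I is the origin,
# where both X and Y indeed vanish.
```

Certificates of exactly this shape reappear below in the description of the real radical \ref{rradchar}.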
\begin{df}{}[$\to$ \ref{realchar}(c)] $A$ is called \emph{real} (or \emph{real reduced}) if \[\forall n\in\N:\forall a_1,\dots,a_n\in A:(a_1^2+\dots+a_n^2=0\implies a_1=0).\] \end{df} \begin{rem} We have \[A\ne\{0\}\text{ real}\implies-1\notin\sum A^2\overset{\ref{inmaxprimecone}}{\underset{\text{\ref{sqsm}(a)}}\iff} \sper A\ne\emptyset.\] Here ``$\Longrightarrow$'' cannot be replaced by ``$\iff$'' (in contrast to the case where $A$ is a field [$\to$ \ref{realchar}]) as the example of $A=\R[X]/(X^2)$ shows. \end{rem} \begin{df}\label{dfrealideal} An ideal $I\subseteq A$ is called \emph{real} (or \emph{real radical ideal}) if $A/I$ is real, i.e., $\forall n\in\N:\forall a_1,\dots,a_n\in A:(a_1^2+\ldots+a_n^2\in I\implies a_1\in I)$. \end{df} \begin{pro}\label{realidealchar} Let $\p\in\spec A$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\p$ is real \emph{[$\to$ \ref{dfrealideal}]} \item $\qf(A/\p)$ is real \emph{[$\to$ \ref{dfreal}]} \item $\exists P\in\sper A:\p=\supp P$ \emph{[$\to$ \ref{newlang}(b)]} \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)}\quad Suppose (a) holds and let $n\in\N$, $a_1,\dots,a_n,s\in A/\p$ with $s\ne0$ such that $\left(\frac{a_1}s\right)^2+\ldots+\left(\frac{a_n}s\right)^2=0$. Then $a_1^2+\ldots+a_n^2=0$. Since $A/\p$ is real, it follows that $a_1=0$ and therefore $\frac{a_1}s=0$. \smallskip \underline{(b)$\implies$(c)}\quad Suppose (b) holds. Then $\qf(A/\p)$ possesses an order $\le$. According to Definition \ref{introrealspectrum}, we have $(\p,\le)\in\sper A$ and of course $\p=\supp(\p,\le)$ by Definition \ref{supportmap}. \smallskip \underline{(c)$\implies$(a)}\quad Suppose $\p=\supp P$ for some $P\in\sper A$. Let $n\in\N$ and $a_1,\dots,a_n\in A$ satisfy $a_1^2+\ldots+a_n^2\in\p$. Then $\widehat a_1(P)^2+\ldots+\widehat a_n(P)^2=0$ and thus $\widehat a_1(P)=0$, i.e., $a_1\in\p$. 
\end{proof} \begin{df} The \emph{real radical} $\rrad I$ of an ideal $I\subseteq A$ is defined by \[\rrad I:=\bigcap\left\{\p\in\rspec A\mid I\subseteq\p\right\}\] where $\rspec A:=\{\p\in\spec A\mid\p\text{ is real}\}$ and $\bigcap\emptyset:=A$. \end{df} \begin{rem}\label{intersectionreal} Since every intersection of real ideals of $A$ is obviously again a real ideal of $A$, for every ideal $I\subseteq A$, the set $\rrad I$ is a real ideal of $A$ containing $I$. \end{rem} \begin{thm}{}\emph{[$\to$ \ref{abstractrealnullstellensatz}]}\label{rradchar} For every ideal $I$ of $A$, \[\rrad I=\left\{a\in A\mid\exists k\in\N_0:\exists s\in\sum A^2:a^{2k}+s\in I\right\}.\] \end{thm} \begin{proof} \ref{realidealchar} shows that this is just a reformulation of \ref{abstractrealnullstellensatz}. \end{proof} \begin{rem} Let $I\subseteq A$ be an ideal. Then by \ref{intersectionreal}, $\rrad I$ is the smallest real ideal of $A$ containing $I$. \end{rem} \begin{df} We call $\rnil A:=\bigcap\rspec A=\rrad(0)$ the \emph{real nilradical} of $A$. \end{df} \begin{cor} We have \[ \{a\in A\mid\widehat a=0\}=\rnil A=\{a\in A\mid\exists k\in\N:\exists s\in\sum A^2:a^{2k}+s=0\}.\] \end{cor} \section{Constructible sets}\label{sec:constructible} In this section, we let $(A,T)$ always be a preordered ring [$\to$ \ref{preorderedring}(a)]. At the moment it is arbitrary, but after Proposition \ref{constructiblesetnormalform} we will specialize $(A,T)$ further. \begin{df}{}[$\to$ \ref{introsemialg}]\label{introconstructible} A Boolean combination [$\to$ \ref{booleanalgebra}(b)] of sets of the form \[\{P\in\sper(A,T)\mid a\in P\}\qquad(a\in A)\] is called a \emph{constructible subset} of the real spectrum of $(A,T)$. We denote the Boolean algebra of all constructible sets of $\sper(A,T)$ by $\mathcal C_{(A,T)}$. The analogous definition remains in force for a commutative ring instead of a preordered ring $(A,T)$.
\end{df} \begin{pro}{}\emph{[$\to$ \ref{sanf}]}\label{constructiblesetnormalform} Every constructible subset of $\sper(A,T)$ is of the form \[\bigcup_{i=1}^k \left\{P\in\sper(A,T)\mid\widehat a_i(P)=0,\widehat b_{i1}(P)>0,\ldots,\widehat b_{im}(P)>0\right\}\] for some $k,m\in\N_0$, $a_i,b_{ij}\in A$. \end{pro} \begin{proof} Completely analogous to \ref{sanf} using that $a\in P\overset{\ref{sperprimecones}}\iff\widehat a(P)\ge0$ for all $a\in A$ and $P\in\sper A$. \end{proof} For the rest of this section, we fix an ordered field $(K,\le)$, denote by $R:=\overline{(K,\le)}$ its real closure, let $n\in\N_0$ and set $A:=K[\x]$ and $T:=\sum K_{\ge0}A^2$. Then $(A,T)$ is a preordered ring and for all $P\in\sper(A,T)$ there is by \ref{unic} exactly one homomorphism from $R$ to the representation field $R_P$ of $P$ extending $\rh_P|_K$ [$\to$ \ref{realrep}]. By virtue of this homomorphism, which is of course an embedding of ordered fields, we interpret $R$ as an (ordered) subfield of $R_P$. In particular, we write $R=R_P$ if this embedding is an isomorphism. \begin{pront}\label{rnassubsetofsper} The correspondence \begin{align*} P&\mapsto x_P:=(\rh_P(X_1),\ldots,\rh_P(X_n))\\ \{f\in A\mid f(x)\ge0\}=:P_x&\mapsfrom x \end{align*} defines a bijection between $\{P\in\sper(A,T)\mid R_P=R\}$ and $R^n$. \end{pront} \begin{proof} We first show that both maps are well-defined. For every $P\in\sper(A,T)$ with $R_P=R$, we have $x_P\in R^n$ under the identification of $R_P$ and $R$. Conversely, let $x\in R^n$. Consider the ring homomorphism \[\ph\colon A\to R,\ f\mapsto f(x).\] Then $P_x=\ph^{-1}(R^2)=(\sper\ph)(R_{\ge0})\in\sper A$ [$\to$ \ref{sperfunctor}, \ref{newlang}(a)]. Obviously, $K_{\ge0}\subseteq P_x$ and therefore $P_x\in\sper(A,T)$.
In order to show $R_{P_x}=R$, we set $\p:=\supp P_x$ and consider the homomorphism of ordered fields \[(\qf(A/\p),(P_x)_\p)\to(R,R^2),\ \frac{\cc a\p}{\cc s\p}\mapsto\frac{a(x)}{s(x)}\qquad(a\in A,s\in A\setminus\p)\] induced by $\ph$ according to \ref{sperfunctor} taking into account \ref{newlang}. Since $R$ is real closed, this homomorphism extends (uniquely) to a homomorphism of (ordered) fields \[\ps\colon R_{P_x}=\overline{(\qf(A/\p),(P_x)_\p)}\to R.\] We obviously have $\ps|_K=\id$ and therefore $\ps|_R$ is a $K$-endomorphism of the real closure $R$ of $(K,\le)$ which can only be the identity by \ref{unic}. The injectivity of $\ps$ now implies $R_{P_x}=R$ as desired. For later use we note that $\ps=\id_R$ implies \begin{equation} \tag{$*$}\cc f\p=\ps^{-1}(\ps(\cc f\p))=\ps^{-1}(f(x))=f(x) \end{equation} for all $f\in A$. \smallskip It remains to show that both maps are inverse to each other. This means: \begin{enumerate}[(a)] \item $P=P_{x_P}$ for all $P\in\sper(A,T)$ with $R_P=R$ \item $x=x_{P_x}$ for all $x\in R^n$ \end{enumerate} To show (a), let $P\in\sper(A,T)$ such that $R_P=R$. Then \begin{align*} P_{x_P}&=\{f\in A\mid f(\rh_P(X_1),\dots,\rh_P(X_n))\ge0\text{ in $R$}\}\\ &=\{f\in A\mid\rh_P(f)\ge0\text{ in $R_P$}\}= \{f\in A\mid\widehat f(P)\ge0\}=P. \end{align*} To show (b), we let $x\in R^n$. Then $\cc{X_i}{\supp P_x}\overset{(*)}=x_i\in R$ for all $i\in\{1,\dots,n\}$. Consequently, $x_{P_x}=(\rh_{P_x}(X_1),\dots,\rh_{P_x}(X_n))= (\cc{X_1}{\supp P_x},\dots,\cc{X_n}{\supp P_x})=(x_1,\dots,x_n)=x$. \end{proof} \begin{thmdef}{}\emph{[$\to$ \ref{introsn}, \ref{setification}]}\label{slimfatten} Let $n\in\N_0$ and denote again by $\mathcal S_{n,R}$ the Boolean algebra of all $K$-semialgebraic subsets of $R^n$. Then \[\slim\colon\mathcal C_{(A,T)}\to\mathcal S_{n,R},\ C\mapsto\{x\in R^n\mid P_x\in C\}\] is an isomorphism of Boolean algebras. 
We call $\slim$ the \emph{despectrification} or \emph{slimming} (in German: \emph{Entspeckung}) and \[\fatten:=\slim^{-1}\] the \emph{spectrification} or \emph{fattening} (in German: \emph{Verspeckung}). For all $f\in A$, one has \[\slim(\{P\in\sper(A,T)\mid f\in P\})=\{x\in R^n\mid f(x)\ge0\}.\] \end{thmdef} \begin{proof} It is obvious that $\slim$ is a homomorphism of Boolean algebras [$\to$ \ref{bahom}] satisfying $\slim(\{P\in\sper(A,T)\mid f\in P\})=\{x\in R^n\mid f\in P_x\}=\{x\in R^n\mid f(x)\ge0\}$ for all $f\in A$. Let $\mathcal R\supseteq\{R_P\mid P\in\sper(A,T)\}$ be a set of real closed fields that are ordered extension fields of $(K,\le)$ [$\to$ \ref{rcfclass}(b)]. Let $\mathcal S_n$ again denote the Boolean algebra of all $(K,\le)$-semialgebraic classes [$\to$ \ref{introsn}] and consider \[\Ph\colon\mathcal S_n\to\mathcal C_{(A,T)},\ S\mapsto\{P\in\sper(A,T)\mid(R_P,(\rh_P(X_1),\dots, \rh_P(X_n)))\in S\}.\] It is obvious that $\Ph$ is a homomorphism of Boolean algebras satisfying \begin{multline*} \Ph(\{(R',x)\mid R'\in\mathcal R,x\in R'^n,f(x)\ge0\text{ in }R'\})\\ =\{P\in\sper(A,T)\mid f(\rh_P(X_1),\dots,\rh_P(X_n))\ge0\text{ in }R_P\}\\ =\{P\in\sper(A,T)\mid\rh_P(f)\ge0\text{ in }R_P\}\\ =\{P\in\sper(A,T)\mid\widehat f(P)\ge0\}=\{P\in\sper(A,T)\mid f\in P\} \end{multline*} for all $f\in A$. From this one sees, in the first place, that $\Ph$ is surjective and, secondly, that $\slim\circ\,\Ph=\set_R$ [$\to$ \ref{introsn}] which is an isomorphism of Boolean algebras by \ref{setification}. Along with $\set_R$, $\Ph$ is also injective. We conclude that $\Ph$ is an isomorphism and with it $\slim=(\slim\circ\,\Ph)\circ\;\Ph^{-1}$. \end{proof} \begin{ex}\label{sperrx} In \ref{specsperex}, we have already described $\sper\R[X]$. 
Now we describe $\sper\R[X]$ as a set of prime cones [$\to$ \ref{newlang}] while using \ref{ordersrx}: For $t\in\R$, we set \begin{align*} P_{t-}&:=\{f\in\R[X]\mid\exists\ep\in\R_{>0}:\forall x\in(t-\ep,t):f(x)\ge0\},\\ P_t&:=\{f\in\R[X]\mid f(t)\ge0\}\quad\text{and}\\ P_{t+}&:=\{f\in\R[X]\mid\exists\ep\in\R_{>0}:\forall x\in(t,t+\ep):f(x)\ge0\} \end{align*} Finally, we set \begin{align*} P_{-\infty}&:=\{f\in\R[X]\mid\exists c\in\R:\forall x\in(-\infty,c):f(x)\ge0\}\quad\text{and}\\ P_{\infty}&:=\{f\in\R[X]\mid\exists c\in\R:\forall x\in(c,\infty):f(x)\ge0\}. \end{align*} Then \[\sper\R[X]=\{P_{-\infty},P_\infty\}\cup\{P_{t-}\mid t\in\R\}\cup\{P_t\mid t\in\R\}\cup \{P_{t+}\mid t\in\R\}.\] The fattening of the semialgebraic set $[0,1)\subseteq\R$ is the set \begin{align*} C:=\{P_0,P_{0+}\}&\cup\{P_{t-}\mid t\in(0,1)\}\cup\{P_t\mid t\in(0,1)\}\\ &\cup\{P_{t+}\mid t\in(0,1)\}\cup\{P_{1-}\} \subseteq\sper\R[X]. \end{align*} In particular, $C$ is constructible. In contrast to this, $C':=C\setminus\{P_{1-}\}$ is not constructible for otherwise $C$ and $C'$ would have the same slimming in contradiction to \ref{slimfatten}. \end{ex} \section{Real Stellensätze}\label{sec:realstellensaetze} \begin{rem}\label{idealmultiplicativesetpreorder} Let $A$ be a commutative ring. \begin{enumerate}[(a)] \item Since every intersection of \alalal{ideals}{multiplicative sets}{preorders} of $A$ is again \alalal{an ideal}{a multiplicative set}{a preorder} of $A$, there exists for every subset $E\subseteq A$ \alalal{a smallest ideal}{a smallest multiplicative set}{a smallest preorder} of $A$ containing $E$. It is called the \alalal{ideal}{multiplicative set}{preorder} \emph{generated} by $E$. \item \alalal{An ideal}{A multiplicative set}{A preorder} of $A$ is called \emph{finitely generated} if it is generated by a finite subset of $A$. 
\item The \alalal{ideal}{multiplicative set}{preorder} generated by $a_1,\dots,a_m\in A$ (i.e., by $\{a_1,\dots,a_m\}\subseteq A$) is $\malalal{Aa_1+\ldots+Aa_m} {\{a_1^{\al_1}\dotsm a_m^{\al_m}\mid\al\in\N_0^m\}}{\sum_{\de\in\{0,1\}^m}\sum A^2a_1^{\de_1}\dotsm a_m^{\de_m}}$. \item If \alalal{an ideal}{a multiplicative set}{a preorder} of $A$ is generated by $E\subseteq A$, then it is the union over all \alalal{ideals}{multiplicative sets}{preorders} of $A$ generated by a finite subset of $E$. \item If \alalal{an ideal $I$}{a multiplicative set $S$}{a preorder $T$}$\,\subseteq A$ is generated by $E\subseteq A$ and if $P\in\sper A$, then $\malalal{\forall a\in I:\widehat a(P)=0}{\forall s\in S:\widehat s(P)\ne0}{\forall t\in T:\widehat t(P)\ge0}\iff \malalal{\forall a\in E:\widehat a(P)=0}{\forall s\in E:\widehat s(P)\ne0}{\forall t\in E:\widehat t(P)\ge0}$. \end{enumerate} \end{rem} \begin{rem}\label{idealmultiplicativesetpreorderpolynomialring} Let $(L,\le)$ be an ordered field and $K$ a subfield of $L$. If \alalal{an ideal $I$}{a multiplicative set $S$}{a preorder $T$}$\subseteq K[\x]$ is generated by $E\subseteq K[\x]$ and if $x\in L^n$, then \[\malalal{\forall g\in I:g(x)=0}{\forall h\in S:h(x)\ne0}{\forall f\in T:f(x)\ge0}\iff \malalal{\forall g\in E:g(x)=0}{\forall h\in E:h(x)\ne0}{\forall f\in E:f(x)\ge0}.\] \end{rem} \begin{remterm} \begin{enumerate}[(a)] \item ``over $B$ generated by $E$'' stands for ``generated by $B\cup E$'' \item ``over $B$ finitely generated'' stands for ``generated by $B\cup E$ for some finite set $E$'' \item If $(K,\le)$ is an ordered field and $n\in\N_0$, then the preorder generated by $p_1,\dots,p_m\in K[\x]$ over $K_{\ge0}$ equals $\sum_{\de\in\{0,1\}^m}\sum K_{\ge0}K[\x]^2p_1^{\de_1}\dotsm p_m^{\de_m}$ [$\to$ \ref{idealmultiplicativesetpreorder}(c)]. \end{enumerate} \end{remterm} \begin{pro} Let $(K,\le)$ be an ordered field, $R:=\overline{(K,\le)}$ and $n\in\N_0$ \emph{[$\to$ \ref{rnassubsetofsper}]}. 
Let $I$ be an ideal, $S$ a \emph{finitely generated} multiplicative set and $T$ a preorder of $K[\x]$ \emph{finitely generated over $K_{\ge0}$}. Then \[\{P\in\sper K[\x]\mid(\forall g\in I:\widehat g(P)=0), (\forall h\in S:\widehat h(P)\ne0),(\forall f\in T:\widehat f(P)\ge0)\}\] is a constructible subset of $\sper(K[\x],\sum K_{\ge0}K[\x]^2)$ whose slimming is the $K$-semialgebraic set \[\{x\in R^n\mid(\forall g\in I:g(x)=0), (\forall h\in S:h(x)\ne0),(\forall f\in T:f(x)\ge0)\}.\] \end{pro} \begin{proof} By Hilbert's basis theorem, $I$ is finitely generated as well. Now use \ref{idealmultiplicativesetpreorder}, \ref{slimfatten} and \ref{idealmultiplicativesetpreorderpolynomialring}. \end{proof} \begin{thm}[real Stellensatz \cite{kri,ste,pre}]\emph{[$\to$ \ref{abstractstellensatz}]}\label{stellensatz} Let $(K,\le)$ be an ordered subfield of the real closed field $R$, $n\in\N_0$, $I$ an ideal of $K[\x]$, $S$ a \emph{finitely generated} multiplicative set of $K[\x]$ and $T$ a preorder of $K[\x]$ \emph{finitely generated over $K_{\ge0}$}. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item There does \emph{not} exist any $x\in R^n$ satisfying \begin{align*} \forall g\in I&:g(x)=0,\\ \forall h\in S&:h(x)\ne0\quad\text{and}\\ \forall t\in T&:t(x)\ge0. \end{align*} \item $0\in I+S^2+T$ \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(b)}\quad WLOG $R=\overline{(K,\le)}$ [$\to$ \ref{relcl}]. Because the fattening of the empty set is empty by \ref{slimfatten} [$\to$ \ref{bahom}], (a) implies Condition \ref{abstractstellensatz}(a) from the abstract real Stellensatz applied to $A:=K[\x]$. 
\end{proof} \begin{cor}[Positivstellensatz]\emph{[$\to$ \ref{abstractpositivstellensatz}]}\label{positivstellensatz} Let $(K,\le)$ be an ordered subfield of the real closed field $R$, $n\in\N_0$, $T$ a preorder of $K[\x]$ \emph{finitely generated over $K_{\ge0}$}, \[S:=\{x\in R^n\mid\forall p\in T:p(x)\ge0\}\] and $f\in K[\x]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f>0$ on $S$ \item $\exists t\in T:tf\in1+T$ \item $\exists t\in T:(1+t)f\in1+T$ \end{enumerate} \end{cor} \begin{proof} Alternatively from \ref{stellensatz} (as \ref{abstractpositivstellensatz} from \ref{abstractstellensatz}) or from \ref{abstractpositivstellensatz} (as \ref{stellensatz} from \ref{abstractstellensatz} using \ref{slimfatten}). \end{proof} \begin{cor}[Nichtnegativstellensatz]\emph{[$\to$ \ref{abstractnichtnegativstellensatz}]}\label{nichtnegativstellensatz} Let $(K,\le)$ be an ordered subfield of the real closed field $R$, $n\in\N_0$, $T$ a preorder of $K[\x]$ \emph{finitely generated over $K_{\ge0}$}, \[S:=\{x\in R^n\mid\forall p\in T:p(x)\ge0\}\] and $f\in K[\x]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f\ge0$ on $S$ \item $\exists t\in T:\exists k\in\N_0:tf\in f^{2k}+T$ \end{enumerate} \end{cor} \begin{proof} Alternatively from \ref{stellensatz} (as \ref{abstractnichtnegativstellensatz} from \ref{abstractstellensatz}) or from \ref{abstractnichtnegativstellensatz} (as \ref{stellensatz} from \ref{abstractstellensatz} using \ref{slimfatten}). \end{proof} \begin{rem} In the special case $T=\sum K_{\ge0}K[\x]^2$, the Nichtnegativstellensatz \ref{nichtnegativstellensatz} is obviously a strengthening of Artin's solution \ref{artin} to Hilbert's 17th problem in which Condition (b) is refined. This refinement has the advantage that the proof of (b)$\implies$(a) does not require a real argument, as was necessary in \ref{artin}.
The proof of \ref{nichtnegativstellensatz} requires prime cones of rings instead of just preorders of fields and is therefore substantially more difficult than the proof of \ref{artin}. \end{rem} \begin{cor}[real Nullstellensatz \cite{kri,du2,ris,efr}]\emph{[$\to$ \ref{abstractrealnullstellensatz}]}\label{realnullstellensatz} Let $K$ be a Euclidean subfield of the real closed field $R$, $n\in\N_0$, $I$ an ideal of $K[\x]$ and \[V:=\{x\in R^n\mid\forall p\in I:p(x)=0\}.\] Then $\{f\in K[\x]\mid f=0\text{ on }V\}=\rrad I$. \end{cor} \begin{proof} Using the description of $\rrad I$ from \ref{rradchar}, this follows alternatively from \ref{stellensatz} (as \ref{abstractrealnullstellensatz} from \ref{abstractstellensatz}) or from \ref{abstractrealnullstellensatz} (as \ref{stellensatz} from \ref{abstractstellensatz} using \ref{slimfatten}). \end{proof} \begin{df}{}[$\to$ \ref{dfrealclosure}] Let $K$ be a field. An extension field $R$ of $K$ is called \emph{a} real closure of $K$ if $R$ is real closed and $R|K$ is algebraic. \end{df} \begin{rem} For two fields $K$ and $R$, the following are equivalent: \begin{enumerate}[(a)] \item $R$ is a real closure of $K$. \item There is an order $\le$ of $K$ such that $R=\overline{(K,\le)}$. \end{enumerate} \end{rem} \begin{thm}[variant of the real Stellensatz]\emph{[$\to$ \ref{stellensatz}]}\label{altstellensatz} Let $K$ be a field, $n\in\N_0$, $I$ an ideal of $K[\x]$, $S$ a \emph{finitely generated} multiplicative set of $K[\x]$ and $T$ a \emph{finitely generated} preorder of $K[\x]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item There does \emph{not} exist a real closure $R$ of $K$ and an $x\in R^n$ such that \begin{align*} \forall g\in I&:g(x)=0,\\ \forall h\in S&:h(x)\ne0\quad\text{and}\\ \forall f\in T&:f(x)\ge0. \end{align*} \item $0\in I+S^2+T$ \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \smallskip We show \underline{(a)$\implies$(b)} by contraposition.
Suppose (b) does not hold. We have to show that (a) does not hold. By the abstract real Stellensatz \ref{abstractstellensatz}, there is some $P\in\sper K[\x]$ such that $\forall g\in I:\widehat g(P)=0$, $\forall h\in S:\widehat h(P)\ne0$ and $\forall f\in T:\widehat f(P)\ge0$ all hold at the same time. Now consider the real closure $R:=\overline{(K,K\cap P)}$ of $K$ and the preordered ring $(K[\x],\sum(K\cap P)K[\x]^2)$. The set \[U:=\left\{x\in R^n\mid(\forall g\in I:g(x)=0),(\forall h\in S:h(x)\ne0), (\forall f\in T:f(x)\ge0)\right\}\] is $K$-semialgebraic by \ref{idealmultiplicativesetpreorderpolynomialring} since $I$, $S$ and $T$ are finitely generated. We will show that $U\ne\emptyset$. We have chosen $P$ to be an element of the constructible subset of the real spectrum of this preordered ring which is the fattening of $U$, i.e., $P\in\fatten(U)$ in the notation of \ref{slimfatten}. In particular, $\fatten(U)\ne\emptyset$ and thus $U\ne\emptyset$ by \ref{slimfatten}. \end{proof} \begin{rem}\label{fingenimportant} Wherever the hypothesis ``finitely generated'' appears in this section, it cannot be omitted. For instance, assume that the Positivstellensatz \ref{positivstellensatz} holds with the weaker hypothesis ``$K_{\ge0}\subseteq T$'' instead of ``$T$ finitely generated over $K_{\ge0}$''. Consider then $K:=R:=\R$, $n:=1$ and the preorder $T$ of $\R[X]$ generated by \[E:=\{X-N\mid N\in\N\}.\] Then $S:=\{x\in\R\mid\forall p\in T:p(x)\ge0\}=\emptyset$ and thus $f:=-1>0$ on $S$. It follows that $\exists t\in T:tf\in1+T$ and thus by \ref{idealmultiplicativesetpreorder}(d) even $\exists t\in T':tf\in1+T'$ for a preorder $T'\subseteq T$ generated by a finite set $E'\subseteq E$. The trivial direction of \ref{positivstellensatz} then yields $-1>0$ on $S':=\{x\in\R\mid\forall p\in T':p(x)\ge0\}\overset{\ref{idealmultiplicativesetpreorderpolynomialring}}= \{x\in\R\mid\forall p\in E':p(x)\ge0\}=[N,\infty)$ for some $N\in\N$.
$\lightning$ \end{rem} \begin{rem}{}[$\to$ \ref{artin-schreier}] Let $A$ be a commutative ring and $T\subseteq A$ a proper preorder. Exactly as in the field case, there exists some $P\in\sper A$ such that $T\subseteq P$ [$\to$ \ref{inmaxprimecone}]. In sharp contrast to the field case, however, we do not in general have $T=\bigcap\sper(A,T)$. As an example, take $A:=\R[X,Y]$, $T:=\sum\R[X,Y]^2$ and consider the Motzkin polynomial $f:=X^4Y^2+X^2Y^4-3X^2Y^2+1$. By \ref{motzkin}, we have $f\notin T$ and $S:=\{(x,y)\in\R^2\mid f(x,y)\ge0\}=\R^2$. By \ref{slimfatten}, the fattening \[C:=\{P\in\sper A\mid f\in P\}\subseteq\sper A=\sper(A,T)\] of $S$ equals the whole of $\sper A$, i.e., $f\in\bigcap\sper(A,T)$. \end{rem} \chapter{Schmüdgen's Positivstellensatz} \section{The abstract Archimedean Positivstellensatz} \begin{df}{}[$\to$ \ref{unaryrem}(d)] A preordered ring $(A,T)$ is called \emph{Archimedean} if \[\forall a\in A:\exists N\in\N:N+a\in T,\] which is equivalent to $T-\N=A$ and also to $T+\Z=A$. \end{df} \begin{df}\label{dfarch} Let $A$ be a commutative ring. \begin{enumerate}[(a)] \item A preorder $T$ of $A$ is called Archimedean if $(A,T)$ is Archimedean. \item $A$ is called Archimedean if $(A,\sum A^2)$ is Archimedean. \end{enumerate} \end{df} \begin{thm}[abstract Archimedean Positivstellensatz \cite{sto,kad,kri,du1}] \emph{[$\to$ \ref{abstractpositivstellensatz}]}\label{abstractarchimedeanpositivstellensatz} Let $(A,T)$ be an Archimedean preordered ring and $a\in A$.
Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\widehat a>0$ on $\sper(A,T)$ \item $\exists N\in\N:Na\in1+T$ \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \smallskip\underline{(a)$\implies$(b)}\quad For the multiplicative set $S:=\N\cdot1\subseteq A$, $(S^{-1}A,S^{-2}T)$ is again an Archimedean preordered ring [$\to$ \ref{preorderlocalize}] and we have [$\to$ \ref{sperlocalize}] \[\widehat a>0\text{ on }\sper(A,T)\iff\widehat{\left(\frac a1\right)}>0\text{ on }\sper(S^{-1}A,S^{-2}T). \] We can therefore suppose $\N\cdot1\subseteq A^\times$ and thus have a homomorphism \[\Q=\N^{-1}\Z\to A,\ \frac pq\mapsto\frac pq\qquad(p\in\Z,q\in\N).\] Suppose now that (a) holds. By the abstract Positivstellensatz \ref{abstractpositivstellensatz}, there is some $t\in T$ such that $ta\in1+T$. Since $T$ is Archimedean, there are $N\in\N$ with $N-t\in T$ and $r\in\N$ with $a+r\in T$. Now one can decrease $r\in\frac1N\N_0$ finitely many times by $\frac1N$ until it becomes negative since \[a+\left(r-\frac1N\right)=\frac N{N^2}((\underbrace{N-t}_{\in T})(\underbrace{a+r}_{\in T})+ (\underbrace{ta-1}_{\in T})+\underbrace{rt}_{\in T})\in T\] as long as $r\ge0$. It follows that $a-\frac1N\in T$ and thus $Na\in1+T$. \end{proof} \begin{cor}\label{archmax} Let $A$ be a commutative ring and $P\in\sper A$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $P$ is Archimedean and a maximal prime cone. \emph{[$\to$ \ref{dfarch}(a)]} \item $(\qf(A/\p),P_\p)$ is Archimedean where $\p:=\supp P$. \emph{[$\to$ \ref{primeconeinfield}]} \item $R_P$ is Archimedean. \emph{[$\to$ \ref{realrep}]} \item There exists a homomorphism $\ph\colon A\to\R$ such that $P=(\sper\ph)(\R_{\ge0})$. \end{enumerate} \end{cor} \begin{proof} \underline{(a)$\implies$(b)}\quad Suppose (a) holds and let $a\in A$ and $s\in A\setminus\p$. To show: $\exists N\in\N:\frac{\cc a\p}{\cc s\p}+N\in P_\p$. WLOG $s\in P$.
Since $P$ is maximal, we have $\sper(A,P)=\{P\}$ and thus $\widehat s>0$ on $\sper(A,P)$. By \ref{abstractarchimedeanpositivstellensatz}, there is $N'\in\N$ such that $N's\in1+P$. Choose $N''\in\N$ such that $a+N''\in P$ and set $N:=N'N''$. Then $a+Ns=a+N'N''s\in a+N''+P\subseteq P+P\subseteq P$ and thus $(a+Ns)s\in PP\subseteq P$. It follows that $\frac{\cc a\p}{\cc s\p}+N=\frac{\cc{a+Ns}\p}{\cc s\p}\in P_\p$. \smallskip\underline{(b)$\implies$(c)}\quad If (b) holds, then $(\qf(A/\p),P_\p)\hookrightarrow(\R,\R_{\ge0})$ by \ref{archsubfieldreals} [$\to$ \ref{ordfieldhom}] and \[R_P=\overline{(\qf(A/\p),P_\p)}\hookrightarrow(\R,\R_{\ge0})\] by \ref{unic}. \smallskip\underline{(c)$\implies$(d)}\quad Suppose that (c) holds. Choose an embedding $\io\colon R_P\hookrightarrow\R$ according to \ref{archsubfieldreals}. We have $\io^{-1}(\R_{\ge0})=(R_P)_{\ge0}$ because $\io$ is an embedding of ordered fields. Now set $\ph:=\io\circ\rh_P$. Then $\ph^{-1}(\R_{\ge0})=\rh_P^{-1}(\io^{-1}(\R_{\ge0}))=\rh_P^{-1}((R_P)_{\ge0})\overset{\ref{kernelrep}}=P$. \smallskip\underline{(d)$\implies$(a)}\quad Suppose $\ph\colon A\to\R$ is a homomorphism with $P=\ph^{-1}(\R_{\ge0})$. Then $P$ is Archimedean for if $a\in A$, then one can choose $N\in\N$ with $\ph(a)+N\ge0$ and it follows that $a+N\in\ph^{-1}(\R_{\ge0})=P$. In order to show that $P$ is maximal, let $Q\in\sper A$ with $P\subseteq Q$. To show: $P=Q$. If we had $a\in Q\setminus P$, then $\ph(a)<0$ and thus $\ph(Na)\le-1$ for some $N\in\N$ from which it would follow that $\ph(-1-Na)\ge0$ and thus $-1-Na\in P\subseteq Q$ and $-1=(-1-Na)+Na\in Q+Q\subseteq Q$ $\lightning$. \end{proof} \section{The Archimedean Positivstellensatz [$\to$ §\ref{sec:realstellensaetze}]} \begin{lem}\label{evashom} Suppose $(K,\le)$ is an ordered subfield of $\R$, $n\in\N_0$ and $K_{\ge0}\subseteq T\subseteq K[\x]$. 
Then the correspondence \begin{align*} x&\mapsto\ev_x\colon K[\x]\to\R,\ p\mapsto p(x)\\ (\ph(X_1),\dots,\ph(X_n))&\mapsfrom\ph \end{align*} defines a bijection between $S:=\{x\in\R^n\mid\forall p\in T:p(x)\ge0\}$ and the set of all ring homomorphisms $\ph\colon K[\x]\to\R$ satisfying $\ph(T)\subseteq\R_{\ge0}$. \end{lem} \begin{proof} It is obviously enough to show that every ring homomorphism $\ph\colon K[\x]\to\R$ with $\ph(T)\subseteq\R_{\ge0}$ is the identity on $K$. But this is clear by \ref{archemb} since the identity is the \emph{only} embedding of ordered fields $(K,\le)\hookrightarrow(\R,\R_{\ge0})$. \end{proof} \begin{thm}[Archimedean Positivstellensatz]\emph{[$\to$ \ref{abstractarchimedeanpositivstellensatz}, \ref{positivstellensatz}]}\label{archimedeanpositivstellensatz} Suppose $(K,\le)$ is an ordered subfield of $\R$, $n\in\N_0$, $T\subseteq K[\x]$ is an \emph{Archimedean} preorder containing $K_{\ge0}$, $S:=\{x\in\R^n\mid\forall p\in T:p(x)\ge0\}$ and $f\in K[\x]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f>0$ on $S$ \item $\exists N\in\N:f\in\frac1N+T$ \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \smallskip\underline{(a)$\implies$(b)}\quad Suppose that (a) holds. It is enough to show that $\widehat f>0$ on $\sper(K[\x],T)$ due to the abstract Archimedean Positivstellensatz \ref{abstractarchimedeanpositivstellensatz} using $\frac1N=N\left(\frac1N\right)^2$. To this end, let $P\in\sper(K[\x],T)$. We show that $\widehat f(P)>0$, i.e., $\widehat f(P)\not\le0$, which is equivalent to $f\notin-P$. Choose a maximal prime cone $Q$ of $K[\x]$ such that $P\subseteq Q$ by \ref{inmaxprimecone}. We even show that $f\notin-Q$. By \ref{archmax}(d) and \ref{evashom}, there is some $x\in S$ satisfying $Q=\ev_x^{-1}(\R_{\ge0})=\{p\in K[\x]\mid p(x)\ge0\}$. From $f(x)>0$, we now deduce $f\notin-Q$ as desired.
\end{proof} \begin{rem} If $T$ is finitely generated over $K_{\ge0}$ in the situation of \ref{archimedeanpositivstellensatz}, then one can alternatively reduce \ref{archimedeanpositivstellensatz} by fattening to \ref{abstractarchimedeanpositivstellensatz}. This, however, ultimately makes unnecessary use of the heavy artillery of real quantifier elimination \ref{elim} and is not applicable if $T$ is not finitely generated over $K_{\ge0}$. The principal reason why real quantifier elimination is not needed here is \ref{archsubfieldreals}. \end{rem} \section{Schmüdgen's characterization of Archimedean preorders of the polynomial ring} \begin{dfpro}\label{arithmbounded} Let $(A,T)$ be a preordered ring. Then \[B_{(A,T)}:=\{a\in A\mid\exists N\in\N:N\pm a\in T\}\] is a subring of $A$, which we call the ring of elements of $A$ that are \emph{arithmetically bounded} with respect to $T$. \end{dfpro} \begin{proof} One sees immediately that $B_{(A,T)}$ is a subgroup of the additive group of $A$. It is clear that $1\in B_{(A,T)}$. Finally, we have $B_{(A,T)}B_{(A,T)}\subseteq B_{(A,T)}$ as one sees immediately from the identity \[3N^2\pm ab=(N+a)(N\pm b)+N(N-a)+N(N\mp b)\qquad(N\in\N,a,b\in A).\] \end{proof} \begin{lem}\label{squarerootsarithmeticallybounded} Let $(A,T)$ be a preordered ring such that $\frac12\in A$. Then \[a^2\in B_{(A,T)}\implies a\in B_{(A,T)}\] for all $a\in A$. \end{lem} \begin{proof} Choose $N\in\N$ with $(N-1)-a^2\in T$. Then \[N\pm a=(N-1)-a^2+\left(\frac12\pm a\right)^2+3\left(\frac12\right)^2\in T.\] \end{proof} \begin{rem}\label{archb} If $(A,T)$ is a preordered ring, then $T$ is Archimedean if and only if $B_{(A,T)}=A$. \end{rem} \begin{lem}\label{archchar} Suppose $(K,\le)$ is an ordered subfield of $\R$, $n\in\N_0$ and $T\subseteq K[\x]$ is a preorder containing $K_{\ge0}$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $T$ is Archimedean.
\item $\exists N\in\N:N-\sum_{i=1}^nX_i^2\in T$ \item $\exists N\in\N:\forall i\in\{1,\dots,n\}:N\pm X_i\in T$ \end{enumerate} \end{lem} \begin{proof} \underline{(a)$\implies$(b)} is trivial. \smallskip\underline{(b)$\implies$(c)}\quad If (b) holds, then $N-X_i^2=\left(N-\sum_{j=1}^nX_j^2\right)+\sum_{j\ne i}X_j^2\in T$ and thus $X_i^2\in B_{(K[\x],T)}$ for all $i\in\{1,\dots,n\}$. Now apply \ref{squarerootsarithmeticallybounded}. \smallskip\underline{(c)$\implies$(a)}\quad Since $(K,\le)$ is Archimedean and $K_{\ge0}\subseteq T$, we have $K\subseteq B_{(K[\x],T)}$. If moreover (c) holds, then $K[\x]=B_{(K[\x],T)}$. \end{proof} \begin{thm}[Schmüdgen's Theorem \cite{sch,bw}]\label{schmuedgen} Suppose $(K,\le)$ is an ordered subfield of $\R$, $n\in\N_0$ and $T$ a preorder of $K[\x]$ which is finitely generated \emph{over $K_{\ge0}$}. Write \[S:=\{x\in\R^n\mid\forall p\in T:p(x)\ge0\}.\] Then \[\text{$T$ Archimedean}\iff\text{$S$ compact}.\] \end{thm} \begin{proof} \cite{bw} ``$\Longrightarrow$'' Let $T$ be Archimedean. By \ref{archchar}(b), there is some $N\in\N$ with $N-\sum_{i=1}^nX_i^2\in T$. Then $S$ is contained in the ball of radius $\sqrt N$ centered at the origin and thus bounded. In any case, $S$ is closed. Consequently, $S$ is compact. \smallskip ``$\Longleftarrow$'' Let $S$ be compact. Write $r:=\sum_{i=1}^nX_i^2$ and choose $N\in\N$ such that $N-r>0$ on $S$. By the Positivstellensatz \ref{positivstellensatz}, we find $t\in T$ such that $(1+t)(N-r)\in1+T\subseteq T$. One checks easily that $T':=T+(N-r)T$ is a preorder of $K[\x]$, and it is Archimedean by \ref{archchar}. We have $(1+t)T'\subseteq T$ and $N-r+Nt=(1+t)(N-r)+tr\in T+T\subseteq T$. Choose $N'\in\N$ with $N'-t\in T'$. Then \[(1+N')(N'-t)=(1+t)(N'-t)+(N'-t)^2\in(1+t)T'+T\subseteq T+T\subseteq T\] from which $N'-t\in T$ follows because of $\frac1{1+N'}=(1+N')\left(\frac1{1+N'}\right)^2\in T$. We conclude that \[N(N'+1)-r=NN'+N-r=(N-r+tN)+N(N'-t)\in T+T\subseteq T.\] Now \ref{archchar} implies that $T$ is Archimedean.
\end{proof} \begin{cor}[Schmüdgen's Positivstellensatz]\emph{[$\to$ \ref{archimedeanpositivstellensatz}]} \label{schmuedgenpositivstellensatz} Suppose $(K,\le)$ is an ordered subfield of $\R$, $n\in\N_0$, $T$ a preorder of $K[\x]$ which is finitely generated \emph{over $K_{\ge0}$}. Suppose $S:=\{x\in\R^n\mid\forall p\in T:p(x)\ge0\}$ is \emph{compact} and $f\in K[\x]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f>0$ on $S$ \item $\exists N\in\N:f\in\frac1N+T$ \end{enumerate} \end{cor} \begin{proof} By Schmüdgen's Theorem \ref{schmuedgen}, $T$ is Archimedean. But then the Archimedean Positivstellensatz \ref{archimedeanpositivstellensatz} proves the equivalence of (a) and (b). \end{proof} \begin{rem}\label{needep} \begin{enumerate}[(a)] \item Exactly as in \ref{fingenimportant}, one sees that the hypothesis ``$T$ finitely generated over $K_{\ge0}$'' cannot be replaced by the weaker hypothesis ``$K_{\ge0}\subseteq T$''. \item If one drops the requirement that $S$ is compact, then \ref{schmuedgenpositivstellensatz} becomes false, as the example $K:=\R$, $n:=1$, $T:=\sum\R[X]^2+\sum\R[X]^2X^3$ and $f:=X+1$ shows: We have $f>0$ on $S=[0,\infty)$ but $f\notin T$ for degree reasons, as one sees from \ref{soslongrem}(b). \item In the situation of \ref{schmuedgenpositivstellensatz}, we unfortunately do not have in general \[f\ge0\text{ on }S\iff f\in T.\] For this, consider $K:=\R$, $n:=1$, $T:=\sum\R[X]^2+\sum\R[X]^2X^3(1-X)$ and $f:=X$. Then $f\ge0$ on $S=[0,1]$. Assume $f\in T$. Write $f=\sum_ip_i^2+\sum_jq_j^2X^3(1-X)$ for some $p_i,q_j\in\R[X]$. Evaluating at $0$ yields $0=\sum_ip_i(0)^2$ and thus $p_i(0)=0$ for all $i$. Write $p_i=Xp_i'$ for some $p_i'\in\R[X]$. Then $X=f=X^2\left(\sum_ip_i'^2+\sum_jq_j^2X(1-X)\right)\ \lightning$.
\end{enumerate} \end{rem} \chapter{The real spectrum as a topological space} \section{Tikhonov's theorem} \begin{rem} Any finite intersection of unions of certain sets is a union of finite intersections of such sets [$\to$ \ref{unionsection}]. \end{rem} \begin{reminder}\mbox{}[$\to$ \ref{booleanalgebra}]\label{topreminder} Let $M$ be a set. \begin{enumerate}[(a)] \item A set $\O \subseteq\mathcal P(M)$ is called a \emph{topology} on $M$ if \begin{itemize} \item $M\in\O $, \item $\forall A_1,A_2\in\O :A_1\cap A_2\in\O $ and \item $\forall\mathcal A\subseteq\O :\bigcup\mathcal A\in\O $. \end{itemize} In this case, $(M,\O )$ is called a \emph{topological space} and the elements of $\O $ are called its \emph{open sets}. \item Let $\mathcal G\subseteq\mathcal P(M)$. Then the set of all unions of finite intersections of elements of $\mathcal G$ (where $\bigcap\emptyset:=M$) is obviously the smallest topology $\O $ on $M$ such that $\mathcal G\subseteq\O $. It is called the topology \emph{generated by $\mathcal G$} (on $M$). \item If $\O $ and $\O '$ are topologies on $M$, then $\O $ is called \emph{\alal{coarser}{finer}} than $\O '$ if $\malal{\O \subseteq\O '} {\O \supseteq\O '}$. \item The finest topology on $M$ is the \emph{discrete topology} $\O :=\mathcal P(M)$. \item The coarsest topology on $M$ is the \emph{trivial topology} (in German: \emph{Klumpentopologie}) $\O :=\{\emptyset,M\}$. \end{enumerate} \end{reminder} \begin{reminder}\label{conti} Let $(M,\O )$ and $(N,\mathcal P)$ be topological spaces and $f\colon M\to N$ be a map. Then $f$ is called \emph{continuous} if $f^{-1}(B)\in\O $ for all $B\in\mathcal P$. If $\mathcal P$ is generated by $\mathcal G$, then $f$ is continuous if and only if $f^{-1}(B)\in\mathcal\O$ for all $B\in\mathcal G$. \end{reminder} \begin{reminder}\label{initialtop} Let $M$ be a set, $(N_i,\mathcal P_i)_{i\in I}$ a family of topological spaces and $(f_i)_{i\in I}$ a family of maps $f_i\colon M\to N_i$ ($i\in I$). 
Then there exists a coarsest topology $\O $ on $M$ making all $f_i$ ($i\in I$) continuous. One calls $\O $ the \emph{initial topology} (or \emph{weak topology}) with respect to $(f_i)_{i\in I}$. If $I=\{1,\dots,n\}$, then $\O $ is also called the initial topology with respect to $f_1,\dots,f_n$. This topology $\O $ is generated by $\{f_i^{-1}(B)\mid i\in I,B\in\mathcal P_i\}$. More generally, the following holds: If $\mathcal P_i$ is generated by $\mathcal G_i$ for $i\in I$, then $\O $ is generated by $\{f_i^{-1}(B)\mid i\in I,B\in\mathcal G_i\}$. Moreover, $\O $ is the unique topology on $M$ having the following property: For every topological space $(M',\O ')$ and every $g\colon M'\to M$, the map $g$ is continuous if and only if all the maps $f_i\circ g$ with $i\in I$ are continuous. \end{reminder} \begin{ex}\label{subspaceproductspace} \begin{enumerate}[(a)] \item Let $(N,\mathcal P)$ be a topological space and $M\subseteq N$. Then one endows $M$ with the initial topology $\O $ with respect to $M\to N,\ x\mapsto x$. One calls $\O $ the topology \emph{induced} by $\mathcal P$ on $M$ and $(M,\O )$ a \emph{subspace} of $(N,\mathcal P)$. We have \[\O =\{M\cap B\mid B\in\mathcal P\}.\] \item Let $(N_i,\mathcal P_i)_{i\in I}$ be a family of topological spaces. Then there exists a coarsest topology $\O $ on $N:=\prod_{i\in I}N_i$ making all projections $\pi_i\colon N\to N_i,\ (x_j)_{j\in I}\mapsto x_i$ ($i\in I$) continuous. One calls $\O $ the \emph{product topology} of the $\mathcal P_i$ ($i\in I$) on $N$ and $(N,\O )$ the \emph{product space} of the $(N_i,\mathcal P_i)$ ($i\in I$). The elements of $\O $ are exactly the unions of sets of the form $\prod_{i\in I}B_i$ where $B_i\in\mathcal P_i$ for $i\in I$ and $\#\{i\in I\mid B_i\ne N_i\}<\infty$.
\end{enumerate} \end{ex} \begin{rem}\label{inducedproductcommute} The constructions (a) and (b) in Example \ref{subspaceproductspace} commute in the following sense: Let $(N_i,\mathcal P_i)_{i\in I}$ be a family of topological spaces and $(N,\O )$ its product. Furthermore, let $(M_i)_{i\in I}$ be a family of sets such that $M_i\subseteq N_i$ for each $i\in I$ and set $M:=\prod_{i\in I}M_i$. Then $\O $ induces on $M$ the product topology of the topologies induced on the $M_i$ by the $\mathcal P_i$. \end{rem} \begin{df}\label{dffilter} Let $M$ be a set and $\mathcal S$ a Boolean algebra on $M$ [$\to$ \ref{booleanalgebra}] (for instance $\mathcal S=\mathcal P(M)$). A set $\mathcal F\subseteq\mathcal S$ is called a \emph{filter} in $\mathcal S$ (or filter on $M$ in case $\mathcal S=\mathcal P(M)$) if \begin{itemize} \item $\emptyset\notin\mathcal F,M\in\mathcal F$, \item $\forall U,V\in\mathcal F:U\cap V\in\mathcal F$ and \item $\forall U\in\mathcal F:\forall V\in\mathcal S:(U\subseteq V\implies V\in\mathcal F)$. \end{itemize} If in addition $\forall U\in\mathcal S:(U\in\mathcal F\text{ or }\complement U\in\mathcal F)$, then $\mathcal F$ is called an \emph{ultrafilter}. \end{df} \begin{pro} Let $\mathcal S$ be a Boolean algebra on the set $M$ and $\mathcal F$ a filter in $\mathcal S$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\mathcal F$ is an ultrafilter. \item $\forall U,V\in\mathcal S:(U\cup V\in\mathcal F\implies(U\in\mathcal F\text{ or }V\in\mathcal F))$ \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)}\quad Suppose that (a) holds and let $U,V\in\mathcal S$ such that $U\cup V\in\mathcal F$ and $U\notin\mathcal F$. To show: $V\in\mathcal F$. Since $\mathcal F$ is an ultrafilter, we have $\complement U\in\mathcal F$ and thus $(U\cup V)\cap\complement U\in\mathcal F$. Because of $(U\cup V)\cap\complement U\subseteq V$ it then also holds that $V\in\mathcal F$. \smallskip\underline{(b)$\implies$(a)} is trivial. 
\end{proof} \begin{ex}\label{neighbor} Let $(M,\O )$ be a topological space and $x\in M$. Then \[\mathcal U_x:=\{U\in\mathcal P(M)\mid\exists A\in\O :x\in A\subseteq U\}\] is a filter on $M$. One calls $\mathcal U_x$ the \emph{neighborhood filter} of $x$ and its elements the \emph{neighborhoods} of $x$. In general, $\mathcal U_x$ is not an ultrafilter: for instance, $[-1,1]=[-1,0]\cup[0,1]$ is a neighborhood of $0$ in $\R$, whereas neither $[-1,0]$ nor $[0,1]$ is. \end{ex} \begin{df} Let $(M,\O )$ be a topological space, $\mathcal F$ a filter on $M$ and $x\in M$. One says that $\mathcal F$ \emph{converges} to $x$ and writes $\mathcal F\to x$ if \[\mathcal U_x\subseteq\mathcal F.\] If $\mathcal F$ converges to exactly one point $x$, one calls this the \emph{limit} of $\mathcal F$ and writes \[x=\lim\mathcal F.\] \end{df} \begin{df} Let $(M,\O )$ be a topological space, $(a_n)_{n\in\N}$ a sequence in $M$ and $x\in M$. We call \[\mathcal F:=\{U\in\mathcal P(M)\mid\exists N\in\N:\{a_n\mid n\ge N\}\subseteq U\}\] the filter associated to $(a_n)_{n\in\N}$. It is clearly a filter on $M$. One says that $(a_n)_{n\in\N}$ \emph{converges} to $x$ and writes \[a_n\overset{n\to\infty}\longrightarrow x\] if $\mathcal F\to x$. If $\mathcal F$ converges to exactly one point $x$, one calls this the \emph{limit} of $(a_n)_{n\in\N}$ and writes \[x=\lim_{n\to\infty}a_n.\] \end{df} \begin{dflem}\label{ultraimage} Suppose $f\colon M\to N$ is a map and $\mathcal F$ a filter on $M$. Then the \emph{image filter} \[f(\mathcal F):=\{V\in\mathcal P(N)\mid\exists U\in\mathcal F:f(U)\subseteq V\}\] is a filter on $N$. If $\mathcal F$ is an ultrafilter, then so is $f(\mathcal F)$. \end{dflem} \begin{proof} One sees immediately that $f(\mathcal F)$ is a filter. Now let $\mathcal F$ be an ultrafilter. Suppose $V\subseteq N$ and $V\notin f(\mathcal F)$. To show: $\complement V\in f(\mathcal F)$. For $U:=f^{-1}(V)$, one has $f(U)\subseteq V$ and thus $U\notin\mathcal F$.
But then $f^{-1}(\complement V)=\complement U\in\mathcal F$. From $f(\complement U)\subseteq\complement V$, we obtain thus $\complement V\in f(\mathcal F)$. \end{proof} \begin{lem}\label{ultraprodconv} Let $M$ be a set endowed with the initial topology with respect to a family $(f_i)_{i\in I}$ of maps $f_i\colon M\to N_i$ into topological spaces $N_i$ ($i\in I$). Let $\mathcal F$ be a filter on $M$ and $x\in M$. Then \[\mathcal F\to x\iff\forall i\in I:f_i(\mathcal F)\to f_i(x).\] \end{lem} \begin{proof} ``$\Longrightarrow$'' Suppose $\mathcal F\to x$ and let $i\in I$. To show: $f_i(\mathcal F)\to f_i(x)$. Let $V\in\mathcal U_{f_i(x)}$. To show: $V\in f_i(\mathcal F)$. Since $f_i$ is continuous, we have $U:=f_i^{-1}(V)\in\mathcal U_x$ and thus $U\in\mathcal F$. From $f_i(U)\subseteq V$, we get $V\in f_i(\mathcal F)$. \smallskip ``$\Longleftarrow$'' Suppose $f_i(\mathcal F)\to f_i(x)$ for all $i\in I$. Let $U\in\mathcal U_x$. To show: $U\in\mathcal F$. Choose $n\in\N_0$, $i_1,\dots,i_n\in I$ and $V_k$ open in $N_{i_k}$ ($k\in\{1,\dots,n\}$) such that \[x\in f_{i_1}^{-1}(V_1)\cap\ldots\cap f_{i_n}^{-1}(V_n)\subseteq U.\] Since $\mathcal F$ is a filter, it is enough to show that $f_{i_k}^{-1}(V_k)\in\mathcal F$ for all $k\in\{1,\dots,n\}$. Fix therefore $k\in\{1,\dots,n\}$. Since $V_k$ is an (open) neighborhood of $f_{i_k}(x)$ in $N_{i_k}$, the hypothesis yields $V_k\in f_{i_k}(\mathcal F)$. Hence there is $U_0\in\mathcal F$ such that $f_{i_k}(U_0)\subseteq V_k$. Now everything follows from $U_0\subseteq f_{i_k}^{-1}(f_{i_k}(U_0))\subseteq f_{i_k}^{-1}(V_k)$. \end{proof} \begin{df}\label{dfcomp} Let $(M,\O )$ be a topological space. 
Then $(M,\O )$ is called a \emph{Hausdorff} space if every two distinct points of $M$ can be separated by disjoint neighborhoods, i.e., \[\forall x,y\in M:(x\ne y\implies\exists U\in\mathcal U_x:\exists V\in\mathcal U_y:U\cap V=\emptyset).\] We call $(M,\O )$ \emph{quasicompact} if every open cover of $M$ possesses a finite subcover, i.e., \[\forall\mathcal A\subseteq\O :\left(M=\bigcup\mathcal A\implies \exists\mathcal B\subseteq\mathcal A:\left(\#\mathcal B<\infty~\et~M=\bigcup\mathcal B\right)\right).\] Furthermore, we call a quasicompact Hausdorff space \emph{compact}. \end{df} \begin{pro}\label{maxultra} Suppose $M$ is a set, $\mathcal S$ a Boolean algebra on $M$ and $\mathcal U$ a filter in $\mathcal S$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\mathcal U$ is an ultrafilter in $\mathcal S$. \item $\mathcal U$ is a maximal filter in $\mathcal S$. \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)}\quad Suppose that (a) holds and let $\mathcal F$ be a filter in $\mathcal S$ such that $\mathcal U\subseteq\mathcal F$. In order to show $\mathcal F\subseteq\mathcal U$, we fix $U\in\mathcal F$. If we had $U\notin\mathcal U$, we would get $\complement U\in\mathcal U\subseteq\mathcal F$ and thus $\emptyset=U\cap\complement U\in\mathcal F$ $\lightning$. \smallskip\underline{(b)$\implies$(a)}\quad Suppose that (b) holds and let $U\in\mathcal S$ satisfy $U\notin\mathcal U$. To show: $\complement U\in\mathcal U$. It is enough to show that $\mathcal F:=\{V\in\mathcal S\mid\exists A\in\mathcal U:A\cap\complement U\subseteq V\}$ is a filter in $\mathcal S$ because then it follows from $\mathcal U\subseteq\mathcal F$ that $\complement U\in\mathcal F=\mathcal U$. For this, it suffices to show $\emptyset\notin\mathcal F$. If we had $\emptyset\in\mathcal F$, then there would be an $A\in\mathcal U$ satisfying $A\cap\complement U=\emptyset$ and from $A\subseteq U$ it would follow that $U\in\mathcal U$ $\lightning$. 
\end{proof} \begin{thm}\label{makeultra} Let $M$ be a set, $\mathcal S$ a Boolean algebra on $M$ and $\mathcal F$ a filter in $\mathcal S$. Then there is an ultrafilter $\mathcal U$ in $\mathcal S$ such that $\mathcal F\subseteq\mathcal U$. \end{thm} \begin{proof} By \ref{maxultra}, it suffices to show that the set $\{\mathcal F'\mid\mathcal F'\text{ filter in }\mathcal S,\mathcal F\subseteq\mathcal F'\}$ partially ordered by inclusion has a maximal element. This follows from Zorn's lemma since the union of a nonempty chain of filters in $\mathcal S$ is again a filter in $\mathcal S$. \end{proof} \begin{thm}\label{ultracompact} A topological space $M$ is quasicompact if and only if each ultrafilter on the set $M$ converges in $M$. \end{thm} \begin{proof} Let $M$ be a topological space. We show the equivalence of the following statements: \begin{enumerate}[(a)] \item $M$ is not quasicompact. \item There is an ultrafilter on $M$ that does not converge. \end{enumerate} \underline{(a)$\implies$(b)}\quad Suppose that (a) holds. Then for each $x\in M$, we can obviously choose an open set $A_x\subseteq M$ with $x\in A_x$ in such a way that $\bigcup_{x\in M}A_x=M$ and $A_{x_1}\cup\ldots\cup A_{x_n}\ne M$ for all $n\in\N$ and $x_1,\dots,x_n\in M$. Then \[\mathcal F:=\left\{U\in\mathcal P(M)\mid\exists n\in\N:\exists x_1,\ldots,x_n\in M: \left(\complement A_{x_1}\right)\cap \ldots\cap\left(\complement A_{x_n}\right)\subseteq U\right\}\] is a filter on $M$ that can be extended by \ref{makeultra} to an ultrafilter $\mathcal U$ on $M$. Let $x\in M$. We show that $\mathcal U$ does not converge to $x$. If we had $\mathcal U\to x$, then $A_x\in\mathcal U$ in contradiction to $\complement A_x\in\mathcal U$. \smallskip\underline{(b)$\implies$(a)}\quad Suppose that (b) holds. Choose an ultrafilter $\mathcal U$ on $M$ that does not converge. Then for every $x\in M$ there is a $U_x\in\mathcal U_x$ such that $U_x\notin\mathcal U$. WLOG $U_x$ is open for every $x\in M$.
Of course $M=\bigcup_{x\in M}U_x$. If $n\in\N$ and $x_1,\dots,x_n\in M$, then $\complement(U_{x_1}\cup\ldots\cup U_{x_n})=\left(\complement U_{x_1}\right)\cap\ldots\cap \left(\complement U_{x_n}\right)\in\mathcal U$ and thus $\emptyset\ne\complement(U_{x_1}\cup\ldots\cup U_{x_n})$, i.e., $M\ne U_{x_1}\cup\ldots\cup U_{x_n}$. \end{proof} \begin{thm}[Tikhonov]\label{tikhonov} Products of quasicompact topological spaces are quasicompact. \end{thm} \begin{proof} Let $(N_i)_{i\in I}$ be a family of quasicompact topological spaces and $M:=\prod_{i\in I}N_i$ the product space [$\to$ \ref{subspaceproductspace}(b)]. Consider for each $i\in I$ the canonical projection $\pi_i\colon M\to N_i$. According to \ref{ultracompact} it suffices to show that every ultrafilter on $M$ converges. For this purpose, let $\mathcal U$ be an ultrafilter on $M$. By \ref{ultraimage}, the image filters $\pi_i(\mathcal U)$ ($i\in I$) are again ultrafilters and therefore converge. Accordingly, we choose $(x_i)_{i\in I}$ satisfying $\pi_i(\mathcal U)\to x_i$ for each $i\in I$. From \ref{ultraprodconv}, we now get $\mathcal U\to(x_i)_{i\in I}$. \end{proof} \begin{cor}\label{tikhonovcor} Products of compact spaces are compact. \end{cor} \begin{rem} Let $M$ be a topological space. \alal{In \ref{ultracompact}, we have shown}{Using \ref{makeultra}, one shows as an exercise} that $M$ is \alal{quasicompact}{a Hausdorff space} if and only if every ultrafilter on $M$ converges to \alal{at least}{at most} one point of $M$. Consequently, $M$ is compact if and only if each ultrafilter on $M$ converges to exactly one point of $M$. \end{rem} \begin{reminder}\label{compactsubspace} Let $(M,\O )$ be a topological space and $A\subseteq M$. Then $A$ is called \emph{closed} in $M$ if $\complement A$ is open in $M$. We call $A$ \alal{quasicompact}{compact} if $A$ furnished with the subspace topology [$\to$ \ref{subspaceproductspace}(a)] is a \alal{quasicompact}{compact} topological space.
Consequently, $A$ is quasicompact if and only if each open cover of $A$ in $M$ possesses a finite subcover, i.e., \[\forall\mathcal A\subseteq\O :\left(A\subseteq\bigcup\mathcal A\implies \exists\mathcal B\subseteq\mathcal A:\left(\#\mathcal B<\infty~\et~A\subseteq\bigcup\mathcal B\right)\right).\] It follows immediately that closed subsets of \alal{quasicompact}{compact} topological spaces are again \alal{quasicompact}{compact}. \end{reminder} \section{Topologies on the real spectrum} \begin{df}\label{spectraltop} Let $A$ be a commutative ring. We call the topology generated by \[\{\{P\in\sper A\mid\widehat a(P)>0\}\mid a\in A\}\] on $\sper A$ the \emph{spectral topology} (or \emph{Harrison-topology}) on $\sper A$. Moreover, we call the topology generated by $\mathcal C_A$ [$\to$ \ref{introconstructible}] or, equivalently [$\to$ \ref{constructiblesetnormalform}], by \[\{\{P\in\sper A\mid\widehat a(P)=0\}\mid a\in A\}\cup\{\{P\in\sper A\mid\widehat a(P)>0\}\mid a\in A\},\] the \emph{constructible topology} on $\sper A$. Unless otherwise indicated, we endow $\sper A$ always with the spectral topology. It is coarser than the constructible topology. \end{df} \begin{reminder}\label{dfhomeo} Let $M$ and $N$ be topological spaces. A bijection $f\colon M\to N$ is called a \emph{homeomorphism} if both $f$ and $f^{-1}$ are continuous. One calls $M$ and $N$ \emph{homeomorphic} if there exists a homeomorphism from $M$ to $N$. \end{reminder} \begin{thm}\label{constrcompact} Let $A$ be a commutative ring. Then $\sper A$ is compact with respect to the constructible topology. \end{thm} \begin{proof} We endow the two-element set $\{0,1\}$ with the discrete topology [$\to$ \ref{topreminder}(d)]. Then $\{0,1\}$ is compact and so is $\{0,1\}^A=\prod_{i\in A}\{0,1\}$ with respect to the product topology by Tikhonov's Theorem \ref{tikhonov}. 
For every $B\subseteq A$, we denote by \[1_B\colon A\to\{0,1\},\ a\mapsto\begin{cases}0&\text{if }a\notin B\\1&\text{if }a\in B\end{cases}\] the corresponding characteristic function. Consider $S:=\{1_P\mid P\in\sper A\}\subseteq\{0,1\}^A$ endowed with the subspace topology of the product topology. Obviously, \[\sper A\to S,\ P\mapsto 1_P\] is a homeomorphism. Since $\{0,1\}^A$ is compact by \ref{tikhonovcor}, it suffices to show that $S$ is closed in $\{0,1\}^A$ since then $S$ and consequently $\sper A$ is compact [$\to$ \ref{compactsubspace}]. Encoding \ref{dfprimecone} in characteristic functions, we obtain \begin{align*} S=&\bigcap_{a,b\in A}\left\{\ch\in\{0,1\}^A\mid\ch(a)=0\text{ or }\ch(b)=0\text{ or }\ch(a+b)=1\right\}\cap\\ &\bigcap_{a,b\in A}\left\{\ch\in\{0,1\}^A\mid\ch(a)=0\text{ or }\ch(b)=0\text{ or }\ch(ab)=1\right\}\cap\\ &\bigcap_{a\in A}\left\{\ch\in\{0,1\}^A\mid\ch(a)=1\text{ or }\ch(-a)=1\right\}\cap\\ &\left\{\ch\in\{0,1\}^A\mid\ch(-1)=0\right\}\cap\\ &\bigcap_{a,b\in A}\left\{\ch\in\{0,1\}^A\mid\ch(ab)=0\text{ or }\ch(a)=1\text{ or }\ch(-b)=1\right\}. \end{align*} Being thus an intersection of closed sets, $S$ is itself closed. \end{proof} \begin{cor} Let $A$ be a commutative ring. Then $\sper A$ is quasicompact. \end{cor} \begin{proof} Every open cover of $\sper A$ is in particular an open cover with respect to the finer constructible topology. By \ref{constrcompact}, it possesses therefore a finite subcover. \end{proof} \begin{reminder}\label{interiorclosure} Let $M$ be a topological space and $A\subseteq M$. \alal{The \emph{interior} $A^\circ$}{The \emph{closure} $\overline A$} of $A$ (in $M$) is the \alal{union}{intersection} over all \alal{open subsets}{closed supersets} of $A$ in $M$, i.e., the \alal{largest open subset}{smallest closed superset} of $A$ in $M$. 
One shows immediately \begin{align*} A^\circ&=\{x\in M\mid\exists U\in\mathcal U_x:U\subseteq A\}\quad\text{and}\\ \overline A&=\{x\in M\mid\forall U\in\mathcal U_x:U\cap A\ne\emptyset\}. \end{align*} Therefore one calls the elements of $\malal{A^\circ}{\overline A}$ also \emph{\alal{interior}{adherent} points} of $A$. One says that $A$ is \emph{dense} in $M$ if $\overline A=M$ or, equivalently, if every nonempty open subset of $M$ contains an element of $A$. \end{reminder} \begin{rem} Let $A$ be a commutative ring and let $P,Q\in\sper A$. Then \begin{align*} P\subseteq Q&\iff\forall a\in A:(\widehat a(P)\ge0\implies\widehat a(Q)\ge0)\\ &\iff\forall a\in A:(\widehat a(Q)<0\implies\widehat a(P)<0)\\ &\iff\forall a\in A:(\widehat{-a}(Q)<0\implies\widehat{-a}(P)<0)\\ &\iff\forall a\in A:(\widehat a(Q)>0\implies\widehat a(P)>0)\\ &\iff\forall U\in\mathcal U_Q:P\in U\\ &\iff\forall U\in\mathcal U_Q:U\cap\{P\}\ne\emptyset\\ &\iff Q\in\overline{\{P\}}. \end{align*} Thus if there are $P,Q\in\sper A$ with $P\subset Q$, then $\sper A$ is not a Hausdorff space. For example, $\sper\R[X]$ is not a Hausdorff space [$\to$ \ref{sperrx}]. \end{rem} \begin{rem}\label{spercontinuity} Suppose $A$ and $B$ are commutative rings and $\ph\colon A\to B$ is a ring homomorphism.
Then \[\sper\ph\colon\sper B\to\sper A,\ Q\mapsto\ph^{-1}(Q)\] is continuous with respect to the spectral topologies on both sides as well as with respect to the constructible topologies on both sides because for $a\in A$, we have \begin{align*} (\sper\ph)^{-1}(\{P\in\sper A\mid\widehat a(P)>0\})&= \{Q\in\sper B\mid\widehat a((\sper\ph)(Q))>0\}\\ &=\{Q\in\sper B\mid\widehat a(\ph^{-1}(Q))>0\}\\ &=\{Q\in\sper B\mid a\in\ph^{-1}(Q)\setminus-\ph^{-1}(Q)\}\\ &=\{Q\in\sper B\mid a\in\ph^{-1}(Q\setminus-Q)\}\\ &=\{Q\in\sper B\mid \ph(a)\in Q\setminus-Q\}\\ &=\{Q\in\sper B\mid\widehat{\ph(a)}(Q)>0\} \end{align*} and analogously \[(\sper\ph)^{-1}(\{P\in\sper A\mid\widehat a(P)\ge0\})=\{Q\in\sper B\mid\widehat{\ph(a)}(Q)\ge0\}.\] \end{rem} \begin{rem}\label{sperpreordcomp} Let $(A,T)$ be a preordered ring [$\to$ \ref{preorderedring}]. Then \[\sper(A,T)=\bigcap_{t\in T}\left\{P\in\sper A\mid\widehat t(P)\ge0\right\},\] as an intersection of closed sets, is again closed in $\sper A$, namely with respect to the spectral but also with respect to the constructible topology on $\sper A$. By \ref{compactsubspace}, $\sper(A,T)$ is thus quasicompact with respect to the spectral and compact with respect to the constructible topology. \end{rem} \section{The real spectrum of polynomial rings} As in §\ref{sec:constructible}, we fix in this section an ordered field $(K,\le)$, we denote by $R:=\overline{(K,\le)}$ its real closure, we let $n\in\N_0$, $A:=K[\x]=K[X_1,\dots,X_n]$ and $T:=\sum K_{\ge0}A^2$. Moreover, we denote by $\mathcal S:=\mathcal S_{n,R}$ the Boolean algebra of all $K$-semialgebraic subsets of $R^n$ [$\to$~\ref{introsemialg}, \ref{introsn}] and by $\mathcal C:=\mathcal C_{(A,T)}$ the Boolean algebra of all constructible subsets of $\sper(A,T)$ [$\to$ \ref{introconstructible}]. Consider again the isomorphisms of Boolean algebras \[\slim\colon\mathcal C\to\mathcal S, C\mapsto\{x\in R^n\mid P_x\in C\}\] and $\fatten:=\slim^{-1}$ [$\to$ \ref{slimfatten}]. 
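The filter notions from \ref{dffilter} that drive the rest of this section can be sanity-checked computationally in the simplest possible setting. The following Python sketch (hypothetical helper names, not part of the text) enumerates all families of subsets of a three-element set and confirms that the ultrafilters are exactly the \emph{principal} ones, i.e., the families of all sets containing one fixed point. This is a finite shadow of the point/prime-cone correspondence developed below; infinite sets, by contrast, carry non-principal ultrafilters (apply \ref{makeultra} to the filter of cofinite sets).

```python
from itertools import combinations

# Toy check: on M = {0, 1, 2}, enumerate all 2^8 families of subsets and
# keep those that are ultrafilters in the sense of the definition above.
M = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(M) + 1)
           for c in combinations(sorted(M), r)]

def is_filter(F):
    return (frozenset() not in F and M in F
            and all(u & v in F for u in F for v in F)          # closed under intersection
            and all(v in F for u in F for v in subsets if u <= v))  # upward closed

def is_ultrafilter(F):
    # ultrafilter: a filter containing each subset or its complement
    return is_filter(F) and all(u in F or M - u in F for u in subsets)

families = (frozenset(fam) for r in range(len(subsets) + 1)
            for fam in combinations(subsets, r))
ultra = [F for F in families if is_ultrafilter(F)]

# Every ultrafilter found is principal: the supersets of one fixed point.
principal = [frozenset(S for S in subsets if x in S) for x in sorted(M)]
print(len(ultra), set(ultra) == set(principal))  # prints: 3 True
```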
\begin{thm}\label{fatclo} Let $S\in\mathcal S$. Then $\fatten(S)$ is the closure of $\{P_x\mid x\in S\}$ in $\sper(A,T)$ (or equivalently in $\sper A$ \emph{[$\to$ \ref{sperpreordcomp}]}) with respect to the constructible topology. \end{thm} \begin{proof}[Proof (simplified by Jakob Everling)] For the duration of this proof, we endow $\sper(A,T)$ with the constructible topology. Since $\complement\fatten(S)\in\mathcal C$ is open, $\fatten(S)$ is closed. Because of \[S=\slim(\fatten(S))\overset{\ref{slimfatten}}=\{x\in R^n\mid P_x\in\fatten(S)\},\] we have $\{P_x\mid x\in S\}\subseteq\fatten(S)$ and thus $\overline{\{P_x\mid x\in S\}}\subseteq\fatten(S)$. In order to show $\fatten(S)\subseteq \overline{\{P_x\mid x\in S\}}$, we let $P\in\fatten(S)$ and $U\in\mathcal U_P$. To show: $U\cap\{P_x\mid x\in S\}\ne\emptyset$. WLOG $U$ is open. WLOG $U\subseteq\fatten(S)$ (because $\fatten(S)$ is open and $P\in\fatten(S)$, one can otherwise replace $U$ by $U\cap\fatten(S)\in\mathcal U_P$). Since $\slim$ is an isomorphism of Boolean algebras by \ref{slimfatten}, it follows from $\emptyset\ne U\subseteq\fatten(S)$ that $\emptyset\ne\slim(U)\subseteq\slim(\fatten(S))=S$. But since $\slim(U)=\{x\in R^n\mid P_x\in U\}$, this means that there is an $x\in S$ such that $P_x\in U$. \end{proof} \begin{cor} $\{P_x\mid x\in R^n\}$ is dense in $\sper(A,T)$ with respect to the constructible topology and thus also with respect to the spectral topology. \end{cor} \begin{lem}\label{sectfat} Let $\mathcal F$ be \alal{a filter}{an ultrafilter} in $\mathcal S$. Then $\{\fatten(S)\mid S\in\mathcal F\}$ is \alal{a filter}{an ultrafilter} in $\mathcal C$ and $\bigcap\{\fatten(S)\mid S\in\mathcal F\}$ is \alal{nonempty}{a singleton}. \end{lem} \begin{proof} The first part follows immediately from the fact that $\fatten$ is, according to \ref{slimfatten}, an isomorphism of Boolean algebras, combined with the definition of \alal{a filter}{an ultrafilter} \ref{dffilter}.
Since $\fatten(S)$ is closed with respect to the constructible topology for each $S\in\mathcal F$, it would follow from $\bigcap\{\fatten(S)\mid S\in\mathcal F\}=\emptyset$ together with the compactness of $\sper(A,T)$ with respect to the constructible topology [$\to$ \ref{sperpreordcomp}] that there would be $n\in\N$ and $S_1,\dots,S_n\in \mathcal F$ such that $\fatten(S_1)\cap\ldots\cap\fatten(S_n)=\emptyset$, which would imply $\fatten(S_1\cap\ldots\cap S_n)=\emptyset$ and thus $\emptyset=S_1\cap\ldots\cap S_n\in\mathcal F\ \lightning$. Hence $\bigcap\{\fatten(S)\mid S\in\mathcal F\}\ne\emptyset$. Finally, let $\mathcal F$ and thus $\{\fatten(S)\mid S\in\mathcal F\}$ be an ultrafilter and fix $P,Q\in\bigcap\{\fatten(S)\mid S\in\mathcal F\}$. Assume $P\ne Q$. Since $\sper(A,T)$ is a Hausdorff space with respect to the constructible topology, there is some $C\in\mathcal C$ such that $P\in C$ but $Q\notin C$. Since $\{\fatten(S)\mid S\in\mathcal F\}$ is an ultrafilter in $\mathcal C$, we obtain $C=\fatten(S)$ or $\complement C=\fatten(S)$ for some $S\in\mathcal F$. In the first case, it follows that $Q\notin\fatten(S)\ \lightning$, in the second that $P\notin\fatten(S)\ \lightning$.
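The principal case of the construction in \ref{ultra2cone} can be made completely explicit: for the principal ultrafilter $\mathcal U:=\{S\in\mathcal S\mid a\in S\}$ at a point $a\in R^n$, the prime cone $P_{\mathcal U}$ is just the evaluation cone $P_a=\{f\in A\mid f(a)\ge0\}$. The following minimal Python sketch (hypothetical names; a numerical illustration only, with $n=1$ and $a=\frac12$) checks membership in $P_{\mathcal U}$ for a few sample polynomials:

```python
from fractions import Fraction

a = Fraction(1, 2)            # sample point; principal ultrafilter U = {S : a in S}

def in_U(S):                  # S given as a predicate x -> bool; S is in U iff a in S
    return S(a)

def in_P_U(f):                # f in P_U  iff  {x : f(x) >= 0} in U  iff  f(a) >= 0
    return in_U(lambda x: f(x) >= 0)

membership = {name: in_P_U(f) for name, f in {
    "x":       lambda x: x,            # f(a) = 1/2  >= 0
    "1 - 2x":  lambda x: 1 - 2 * x,    # f(a) = 0    >= 0
    "x^2 - 1": lambda x: x * x - 1,    # f(a) = -3/4 <  0
}.items()}
print(membership)
```

The non-principal ultrafilters in $\mathcal S$ account precisely for the prime cones of $\sper(A,T)$ that are not of the form $P_x$.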
\end{proof} \begin{lem}\label{cone2ultra} Let $P\in\sper(A,T)$. Then \begin{align*} \mathcal U_P:=\{S\in\mathcal S\mid~&\exists f\in\supp P:\exists m\in\N: \exists g_1,\dots,g_m\in P\setminus-P:\\ &\{x\in R^n\mid f(x)=0,g_1(x)>0,\dots,g_m(x)>0\}\subseteq S\} \end{align*} is an ultrafilter in $\mathcal S$ and we have $\{S\in\mathcal S\mid P\in\fatten(S)\}=\mathcal U_P$. \end{lem} \begin{proof} Since $\{C\in\mathcal C\mid P\in C\}$ is an ultrafilter in $\mathcal C$ and $\slim\colon\mathcal C\to\mathcal S$ is an isomorphism of Boolean algebras, $\{S\in\mathcal S\mid P\in\fatten(S)\}$ is an ultrafilter in $\mathcal S$. From the description of $K$-semialgebraic subsets of $R^n$ implied by Theorem \ref{sanf}, one gets that this ultrafilter equals \begin{align*} \{S\in\mathcal S\mid&\exists S'\subseteq S:\exists f,g_1,\dots,g_m\in A:\\ &S'=\{x\in R^n\mid f(x)=0,g_1(x)>0,\dots,g_m(x)>0\}\et P\in\fatten(S')\}=\mathcal U_P \end{align*} since $\fatten$ is an isomorphism of Boolean algebras. \end{proof} \begin{thm}[Bröcker's ultrafilter theorem \cite{bro}]\label{broecker} The correspondence \[ \begin{array}{rcll} \mathcal U&\mapsto&P_{\mathcal U}&[\to \ref{ultra2cone}]\\ \mathcal U_P&\mapsfrom&P&[\to \ref{cone2ultra}] \end{array} \] defines a bijection between the set of ultrafilters in $\mathcal S$ and $\sper(A,T)$. \end{thm} \begin{proof} To show: (a) If $\mathcal U$ is an ultrafilter in $\mathcal S$, then $\mathcal U=\mathcal U_{P_{\mathcal U}}$. (b) If $P\in\sper(A,T)$, then $P=P_{{\mathcal U}_P}$. \medskip In order to show (a), we let $\mathcal U$ be an ultrafilter in $\mathcal S$. By \ref{cone2ultra}, we have to show that $\{S\in\mathcal S\mid P_{\mathcal U}\in\fatten(S)\}=\mathcal U$. Since $\fatten$ is an isomorphism of Boolean algebras by \ref{slimfatten}, $\{S\in\mathcal S\mid P_{\mathcal U}\in \fatten(S)\}$ is a filter in $\mathcal S$. 
Since $\mathcal U$ is a maximal filter in $\mathcal S$ [$\to$ \ref{maxultra}], it suffices to show that $\mathcal U\subseteq\{S\in\mathcal S\mid P_{\mathcal U}\in\fatten(S)\}$. To this end, let $S\in\mathcal U$. Then $\{P_{\mathcal U}\}\subseteq \fatten(S)$ by \ref{ultra2cone} and thus $P_{\mathcal U}\in\fatten(S)$. \medskip For (b), we let $P\in\sper(A,T)$. By \ref{ultra2cone}, $\bigcap\{\fatten(S)\mid S\in\mathcal U_P\}$ consists of exactly one element, namely $P_{\mathcal U_P}$. Therefore it is enough to show $P\in\bigcap \{\fatten(S)\mid S\in\mathcal U_P\}$. Thus fix $S\in\mathcal U_P$. By \ref{cone2ultra}, we then obtain $P\in\fatten(S)$. \end{proof} \begin{pro}\label{semialgk} Every semialgebraic subset of $R^n$ \emph{[$\to$ \ref{introsemialg}]} is even $K$-semialgebraic. \end{pro} \begin{proof} To begin with, we show that all one-element subsets of $R$ are $K$-semialgebraic. For this, let $a\in R$. To show: $\{a\}$ is $K$-semialgebraic. Since $R|K$ is algebraic, there is $f\in K[X]\setminus\{0\}$ with $f(a)=0$. Set $k:=\#\{x\in R\mid f(x)=0\}$ and choose $j\in\{1,\dots,k\}$ such that $a$ is the $j$-th root of $f$ when the roots of $f$ in $R$ are arranged in increasing order with respect to the order $\le_R$ of $R$. By applying the real quantifier elimination \ref{elim} $k$ times, we obtain that \[\{a\}=\{y\in R\mid\exists x_1,\dots,x_k\in R: (x_1<_R\ldots<_Rx_k\et f(x_1)=\ldots=f(x_k)=0\et x_j=y)\}\] is $K$-semialgebraic. Now consider an arbitrary $p\in R[\x]$. It suffices to show that $\{x\in R^n\mid p(x)\ge0\}$ is $K$-semialgebraic. Write $p=\sum_{\substack{\al\in\N^n\\|\al|\le d}}a_\al\x^\al$ [$\to$ \ref{monomnotation}] with $d:=\deg p$ and $a_\al\in R$. 
Since all $\{a_\al\}$ are $K$-semialgebraic by what has already been shown, real quantifier elimination yields that \begin{align*} \{x\in R^n\mid p(x)\ge0\}=\Bigg\{x\in R^n\mid&\exists\text{ family } (y_\al)_{|\al|\le d}\text{ in $R$}:\\ &\left(\biget_{|\al|\le d}y_\al\in\{a_\al\}\et\sum_{|\al|\le d}y_\al x_1^{\al_1}\dotsm x_n^{\al_n}\ge0\right)\Bigg\} \end{align*} is $K$-semialgebraic. \end{proof} \begin{thm}\label{rcsper} $\sper R[\x]\to\sper(A,T),\ P\mapsto P\cap A$ is bijective. \end{thm} \begin{proof} Because of \ref{semialgk}, we obtain from applying the ultrafilter theorem of Bröcker twice (once in the special case $K=R$) that \begin{align*} \sper R[\x]&\to\sper(A,T)\\ \{f\in R[\x]\mid\{x\in R^n\mid f(x)\ge0\}\in\mathcal U\}&\mapsto \{f\in A\mid\{x\in R^n\mid f(x)\ge0\}\in\mathcal U\}\\ &\text{($\mathcal U$ an ultrafilter in $\mathcal S$)} \end{align*} is a bijection. \end{proof} \begin{cor}\label{rcspercor} $\sper R(\x)\to\sper(K(\x),\sum K_{\ge0}K(\x)^2),\ P\mapsto P\cap K(\x)$ is bijective. \end{cor} \begin{proof} In the commutative diagram \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=5em,column sep=8em] {\sper R(\x) & \sper(K(\x),\sum K_{\ge0}K(\x)^2)\\ \{P\in\sper R[\x]\mid\supp P=(0)\} &\{P\in\sper(A,T)\mid\supp P=(0)\}\\}; \path[->] (m-1-1) edge node [left] {$P\mapsto P\cap R[\x]$} (m-2-1) edge node [above] {$P\mapsto P\cap K(\x)$} (m-1-2) (m-2-1) edge node [below] {$P\mapsto P\cap A$} (m-2-2) (m-1-2) edge node [right] {$P\mapsto P\cap A$} (m-2-2); \end{tikzpicture} \end{center} both vertical arrows represent bijections by Proposition \ref{sperlocalize} or by the very definition of the real spectrum \ref{introrealspectrum}. It therefore suffices to show that the lower horizontal arrow represents a bijection. Because of the bijection from \ref{rcsper}, it therefore suffices to show that every $P\in\sper R[\x]$ with $\supp P\ne(0)$ satisfies even $\supp(P\cap A)\ne(0)$. 
Thus fix $P\in\sper R[\x]$ and $f\in\p:=\supp P$ with $f\ne0$. Since $K$ has characteristic $0$, there exists an extension field $L$ of $K$ containing all coefficients of $f$ such that $L|K$ is a finite Galois extension. If $C$ denotes the algebraic closure of $R$ (and therefore of $K$), then we can of course suppose that $L$ is a subfield of $C$. By extending the action of the Galois group $\Aut(L|K)$ from $L$ to $L[\x]$, we obtain $h:=\prod_{g\in\Aut(L|K)}gf\in A\setminus\{0\}$. Clearly, $f$ divides $h$ in $L[\x]$ and therefore in $C[\x]$. Translating this divisibility into a system of affine linear equations (whose variables correspond to the coefficients of the corresponding multiplier polynomial), we see by linear algebra that the same system must have a solution over the field $L\cap R$. This means that $f$ divides $h$ in $(L\cap R)[\x]$ and therefore in $R[\x]$. Since $f\in\p$ and $\p$ is an ideal in $R[\x]$, we now get $h\in\p\cap A=\supp(P\cap A)$. \end{proof} \begin{thm}\label{ultrasurjective} Let $(L,\le')$ be an ordered extension field of $(K,\le)$. Then \[\sper\left(L[\x],\sum L_{\ge'0}L[\x]^2\right)\to\sper(A,T),\ P\mapsto P\cap A\] is surjective. \end{thm} \begin{proof} Let $\mathcal S''$ denote the Boolean algebra of all $L$-semialgebraic subsets of $R'^n$ where $R':=\overline{(L,L_{\ge'0})}$. The Boolean algebra $\mathcal S'\subseteq\mathcal S''$ of all $K$-semialgebraic subsets of $R'^n$ is isomorphic to $\mathcal S$ in virtue of the map $\transfer_{R,R'}\colon\mathcal S\to\mathcal S'$ [$\to$ \ref{transfer}]. Now let $Q\in\sper(A,T)$ be given. We show that there is $P\in\sper(L[\x],\sum L_{\ge'0}L[\x]^2)$ with $Q=P\cap A$. By \ref{cone2ultra}, $\mathcal U_Q$ is an ultrafilter in $\mathcal S$. Since $\mathcal U_Q$ is a filter in $\mathcal S$, \[\mathcal F:=\{S''\in\mathcal S''\mid\exists S\in\mathcal U_Q: \transfer_{R,R'}(S)\subseteq S''\}\] is a filter in $\mathcal S''$. 
Choose by \ref{makeultra} an ultrafilter $\mathcal U$ in $\mathcal S''$ such that $\mathcal F\subseteq\mathcal U$. By Bröcker's ultrafilter theorem \ref{broecker}, there is $P\in\sper(L[\x],\sum L_{\ge'0}L[\x]^2)$ such that $\mathcal U=\mathcal U_P$. We have \begin{align*} Q\overset{\ref{broecker}}=P_{\mathcal U_Q}&= \{f\in A\mid\{x\in R^n\mid f(x)\ge0\}\in\mathcal U_Q\}\\ &=\{f\in A\mid\transfer_{R,R'}(\{x\in R^n\mid f(x)\ge0\})\in \{\transfer_{R,R'}(S)\mid S\in\mathcal U_Q\}\}\\ &\overset!=\{f\in A\mid\{x\in R'^n\mid f(x)\ge''0\}\in\mathcal U\}=P_\mathcal U\cap A =P_{\mathcal U_P}\cap A\overset{\ref{broecker}}=P\cap A \end{align*} where $\le''$ denotes the unique order on $R'$ and the equality flagged with an exclamation mark follows from the claim \[\mathcal U\cap\mathcal S'=\{\transfer_{R,R'}(S)\mid S\in\mathcal U_Q\}.\] The inclusion ``$\supseteq$'' in this claim is trivial. The other inclusion ``$\subseteq$'' follows from the fact that $\{\transfer_{R,R'}(S)\mid S\in\mathcal U_Q\}$ is an ultrafilter and thus a maximal filter in $\mathcal S'$ and that $\mathcal U\cap\mathcal S'$ is a filter in $\mathcal S'$ containing it. \end{proof} \section{The finiteness theorem for semialgebraic classes} In this section, we fix a real closed field $R_0$ (in the applications, one mostly has $R_0=\R$ or $R_0=\R_{\text{alg}}$ [$\to$ \ref{ralg}]). Moreover, we let $\mathcal R$ denote the class of all real closed extension fields of $R_0$ [$\to$ \ref{rcfclass}(b)] (that is, the class of all real closed fields in case $R_0=\R_{\text{alg}}$). Whoever gets dizzy from this [$\to$ \ref{rcfclass}(c)] can take for $\mathcal R$ a set of real closed extension fields of $R_0$ that is sufficiently large to contain all representation fields $R_P$ of prime cones $P\in\sper R_0[\x]$ [$\to$ \ref{realrep}] (which we perceive as extension fields of $R_0$ in virtue of the representation $\rh_P$ of $P$; compare the discussion before \ref{rnassubsetofsper}). 
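Before stating the finiteness theorem, we record a toy illustration of the notation just recalled (a sketch only; the easy verification is left to the reader).

\begin{ex} Let $n:=1$ and \[S:=\{(R,a)\in\mathcal R_1\mid 0\le a\et a^2\le 2\}.\] Then $S$ is an $R_0$-semialgebraic class [$\to$ \ref{introsn}] since it is described by polynomial inequalities with integer coefficients, and its setification is \[\set_{R_0}(S)=\{a\in R_0\mid 0\le a\et a^2\le2\}=[0,\sqrt2]_{R_0},\] where $\sqrt2\in R_0$ exists since $R_0$ is real closed. \end{ex}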
\begin{thm}[Finiteness theorem for semialgebraic classes] \label{finiteness} Let $n\in\N_0$ and $\mathcal E$ a set of $n$-ary $R_0$-semialgebraic classes. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\bigcup\mathcal E=\mathcal R_n$ \item $\exists k\in\N:\exists S_1,\dots,S_k\in\mathcal E: S_1\cup\ldots\cup S_k=\mathcal R_n$. \item $\exists k\in\N:\exists S_1,\dots,S_k\in\mathcal E: \set_{R_0}(S_1)\cup\ldots\cup\set_{R_0}(S_k)=R_0^n$ \quad\emph{[$\to$ \ref{introsn}]}. \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\iff$(c)} is clear because the setification $\set_{R_0}\colon\mathcal S_n\to\mathcal S_{n,R_0}$ [$\to$ \ref{introsn}] and thus also the classification $\class_{R_0}=\set_{R_0}^{-1}\colon\mathcal S_{n,R_0}\to\mathcal S_n$ is an isomorphism of Boolean algebras [$\to$ \ref{setification}]. \smallskip\underline{(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(b)}\quad Suppose that (a) holds. In the proof of \ref{slimfatten}, we have shown that \[\Ph\colon\mathcal S_n\to\mathcal C_{R_0[\x]},\ S\mapsto\{P\in\sper R_0[\x]\mid(R_P,(\rh_P(X_1),\dots, \rh_P(X_n)))\in S\}\] is an isomorphism of Boolean algebras. Moreover, we have \[\bigcup\{\Ph(S)\mid S\in\mathcal E\}=\sper R_0[\x]\] by the definition of $\Ph$. From \ref{constrcompact}, we get the existence of $k\in\N$ and $S_1,\dots,S_k\in\mathcal E$ satisfying $\Ph(S_1)\cup\ldots\cup\Ph(S_k)=\sper R_0[\x]$. Since $\Ph$ is an isomorphism, we deduce $S_1\cup\ldots\cup S_k=\mathcal R_n$. 
\end{proof} \begin{cor}\label{finitenesscor} Let $n\in\N_0$ and $\mathcal E$ a set of $n$-ary $R_0$-semialgebraic classes satisfying \[\forall S_1,S_2\in\mathcal E: \exists S_3\in\mathcal E:S_1\cup S_2\subseteq S_3.\] Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\bigcup\mathcal E=\mathcal R_n$ \item $S=\mathcal R_n$ for some $S\in\mathcal E$ \item $\set_{R_0}(S)=R_0^n$ for some $S\in\mathcal E$ \end{enumerate} \end{cor} \begin{rem} In practice, \ref{finitenesscor} is mostly applied in the following context: One has a certain true statement about real numbers (for example that $\R$ is Archimedean [$\to$ \ref{archetcdef}(a)]). Now one is interested in one of the following questions: \begin{enumerate}[(a)] \item Does the statement hold for all real closed extension fields of $\R$? (In our example: Is every real closed field extension of $\R$ Archimedean?) \item Does the statement hold in a strengthened form (with certain quantitative additional information, so-called ``bounds'') for every real closed extension of $\R$? (In our example: Is there an $N\in\N$ such that $|a|\le N$ for all real closed field extensions $R$ of $\R$ and all $a\in R$?) \item Does the statement hold in the strengthened form (that is, ``with bounds'') for the real numbers? (In our example: Is there some $N\in\N$ such that for all $a\in\R$ one has $|a|\le N$?) \end{enumerate} Under certain circumstances, \ref{finitenesscor} establishes a connection between these three questions. To this end, one tries to express the statement in such a way that it says that for any $n$ numbers a certain ``semialgebraic event'' occurs, where the events encode the existence of a bound. The set of events is $\mathcal E$. 
\end{rem} \begin{ex}\label{boundex} For $n:=1$, $R_0:=\R$ and $\mathcal E:=\{\{(R,a)\in\mathcal R_1\mid-N\le a\le N\}\mid N\in\N\}$, \ref{finitenesscor} says that the following are equivalent: \begin{enumerate}[(a)] \item For every real closed extension field $R$ of $\R$ and every $a\in R$, there is some $N\in\N$ with $|a|\le N$, i.e., every real closed extension field $R$ of $\R$ is Archimedean. \item There is some $N\in\N$ such that for every real closed extension field $R$ of $\R$ and every $a\in R$ we have $|a|\le N$. \item There is some $N\in\N$ such that for every $a\in\R$ we have $|a|\le N$. \end{enumerate} Since (c) obviously fails, we see that (a) also fails. Thus we see (once more) that there are non-Archimedean real closed (extension) fields (of $\R$). \end{ex} \begin{thm}[Existence of degree bounds for Hilbert's 17th problem] \label{h17bound} For all $n,d\in\N_0$, there is some $D\in\N$ such that for every real closed field $R$ and every $f\in R[\x]_d$ \emph{[$\to$ \ref{degnot}]} with $f\ge0$ on $R^n$, there are $p_1,\dots,p_D\in R[\x]_D$ and $q\in R[\x]\setminus\{0\}$ with $f=\sum_{i=1}^D\left(\frac{p_i}q\right)^2$. \end{thm} \begin{proof} Let $n,d\in\N_0$. Set $N:=\dim\R[\x]_d$ and write $\{\al\in\N_0^n\mid|\al|\le d\}=\{\al_1,\dots,\al_N\}$. Set $R_0:=\R_{\text{alg}}$ and \[S_D:=\left\{(R,(a_1,\dots,a_N))\in\mathcal R_N~\middle|~ \begin{aligned} &\left(\forall x\in R^n:\sum_{i=1}^Na_ix_1^{\al_{i1}}\dotsm x_n^{\al_{in}}\ge0\right)\implies\\ &\text{There are families $(b_{i\al})_{\substack{1\le i\le D\\|\al|\le D}}$ and $(c_\al)_{|\al|\le D}\ne0$ in $R$}\\ &\text{such that}\\ &\left(\sum_{i=1}^Na_i\x^{\al_i}\right)\left(\sum_{|\al|\le D}c_\al\x^\al\right)^2 =\sum_{i=1}^D\left(\sum_{|\al|\le D}b_{i\al}\x^\al\right)^2 \end{aligned} \right\} \] for each $D\in\N$. 
Obviously, $S_D$ is for each $D\in\N$ an $R_0$-semialgebraic class: the polynomial identity in the last part of its specification can be expressed by finitely many polynomial equations in the $a_i$, $b_{i\al}$ and $c_\al$, and the required existence of the two finite families as well as the quantification ``$\forall x\in R^n$'' are allowed because of the real quantifier elimination \ref{elim}. Set $\mathcal E:=\{S_D\mid D\in\N\}$ and observe that $\forall D_1,D_2\in\N:\exists D_3\in\N:S_{D_1}\cup S_{D_2}\subseteq S_{D_3}$ (take $D_3:=\max\{D_1,D_2\}$). By Artin's solution to Hilbert's 17th problem \ref{artin}, we have $\bigcup\mathcal E=\mathcal R_N$. Now \ref{finitenesscor} yields $S_D=\mathcal R_N$ for some $D\in\N$. \end{proof} \begin{rem} Recently, Lombardi, Perrucci and Roy \cite{lpr} managed to prove that in \ref{h17bound} one can choose \[D:=2^{2^{2^{d^{4^n}}}}.\] We will neither use nor prove this in this lecture. \end{rem} \begin{dfpro}\label{ordval} Let $(K,\le)$ be an ordered extension field of $\R$. Then \[\O_{(K,\le)}:=B_{(K,K_{\ge0})}=\{a\in K\mid\exists N\in\N:|a|\le N\}\] is a subring of $K$ \emph{[$\to$ \ref{arithmbounded}]} with a single maximal ideal \[\m_{(K,\le)}:=\left\{a\in K\mid\forall N\in\N:|a|\le\frac 1N\right\}\] and group of units \[\O_{(K,\le)}^\times=\O_{(K,\le)}\setminus\m_{(K,\le)} =\left\{a\in K\mid\exists N\in\N:\frac 1N\le|a|\le N\right\}.\] We call the elements of $\malalal{\O_{(K,\le)}}{\m_{(K,\le)}}{K\setminus\O_{(K,\le)}}$ the \emph{\alalal{finite}{infinitesimal}{infinite}} elements of $(K,\le)$. For every $a\in\O_{(K,\le)}$, there is exactly one $\st(a)\in\R$, called the \emph{standard part} of $a$, such that \[a-\st(a)\in\m_{(K,\le)}.\] The map $\O_{(K,\le)}\to\R,\ a\mapsto\st(a)$ is a ring homomorphism with kernel $\m_{(K,\le)}$. If $a,b\in\O_{(K,\le)}$ satisfy $\st(a)<\st(b)$, then $a<b$. 
The standard part $\st(p)$ of a polynomial $p\in\O_{(K,\le)}[\x]$ arises by replacing each coefficient of $p$ by its standard part. Also $\O_{(K,\le)}[\x]\to\R[\x],\ p\mapsto\st(p)$ is a ring homomorphism. \end{dfpro} \begin{proof} The existence of the standard part follows easily from the completeness of $\R$ [$\to$~\ref{introduce-the-reals}] and its uniqueness is trivial. The rest is also easy. By way of example, we show: \begin{enumerate}[(a)] \item $\st(ab)=(\st(a))(\st(b))$ for all $a,b\in\O_{(K,\le)}$ \item $\st(a)<\st(b)\implies a<b$ for all $a,b\in\O_{(K,\le)}$ \end{enumerate} To show (a), let $a,b\in\O_{(K,\le)}$. Because of $a-\st(a),b-\st(b)\in\m_{(K,\le)}$, we have \begin{align*} ab-(\st(a))(\st(b))&= (ab-(\st(a))b)+((\st(a))b-(\st(a))(\st(b)))\\ &=(a-\st(a))b+(\st(a))(b-\st(b))\in\m_{(K,\le)}+\m_{(K,\le)}\subseteq\m_{(K,\le)} \end{align*} For (b), we fix again $a,b\in\O_{(K,\le)}$ with $\st(a)<\st(b)$. Choose $N\in\N$ with \[\st(b)-\st(a)>\frac1N.\] Then $|a-\st(a)|\le\frac1{2N}$ and $|b-\st(b)|\le\frac1{2N}$ and thus \begin{align*} a&=a-\st(a)+\st(a)\le|a-\st(a)|+\st(a)\le\frac1{2N}+\st(a)-\st(b)+\st(b)\\ &<\frac1{2N}-\frac1N+\st(b)=-\frac1{2N}+\st(b)-b+b\\ &\le-\frac1{2N}+|b-\st(b)|+b\le-\frac1{2N}+\frac1{2N}+b=b \end{align*} \end{proof} \begin{ex}[Nonexistence of degree bounds for Schmüdgen's Positivstellensatz {[$\to$ \ref{schmuedgenpositivstellensatz}]}]\label{nonexdegbounds} For every $\ep\in\R_{>0}$, we have $X+\ep>0$ on $[0,1]$, so that Schmüdgen's Positivstellensatz \ref{schmuedgenpositivstellensatz} together with \ref{so2s}(b) yields $p_1,p_2,q_1,q_2\in\R[X]$ such that \[(*)\qquad X+\ep=p_1^2+p_2^2+(q_1^2+q_2^2)X^3(1-X). \] One can ask whether, in analogy to \ref{h17bound}, there is a $D\in\N$ such that for all $\ep\in\R_{>0}$ there are $p_1,p_2,q_1,q_2\in\R[X]_D$ satisfying $(*)$. 
To this end, consider for each $D\in\N$ \[S_D:=\left\{(R,\ep)\in\mathcal R_1~\middle|~ \begin{aligned} &\ep>0\implies \exists b_0,\dots,b_D,b_0',\dots,b_D',c_0,\dots,c_D,c_0',\dots,c_D'\in R: \\ &X+\ep= \left(\sum_{i=0}^Db_iX^i\right)^2+\left(\sum_{i=0}^Db_i'X^i\right)^2+\\ &\qquad\qquad\qquad\qquad\left(\left(\sum_{i=0}^Dc_iX^i\right)^2+\left(\sum_{i=0}^Dc_i'X^i\right)^2\right) X^3(1-X) \end{aligned} \right\} \] As in the proof of \ref{h17bound}, one shows that $S_D$ is for each $D\in\N$ an $\R$-semialgebraic class. Set $\mathcal E:=\{S_D\mid D\in\N\}$. We claim that the answer to the above question is no. Assume it were yes. Then $\set_\R(S_D)=\R$ for some $D\in\N$ and thus $\bigcup\mathcal E=\mathcal R_1$ by \ref{finitenesscor}. Choose a non-Archimedean real closed extension field $R$ of $\R$ and an $\ep>0$ which is infinitesimal in $R$. Then there are $p_1,p_2,q_1,q_2\in R[X]$ satisfying $(*)$. It suffices to show that all coefficients of these four polynomials are finite in $R$ [$\to$ \ref{ordval}], since then $X=\st(X+\ep)=(\st(p_1))^2+(\st(p_2))^2+((\st(q_1))^2+(\st(q_2))^2)X^3(1-X)$ in contradiction to \ref{needep}. It therefore suffices to show that the coefficient $c$ of largest absolute value among all coefficients of the four polynomials is finite. Assume it were infinite. Then $\frac 1c$ would be infinitesimal and \begin{align*} 0&=\st\left(\frac{X+\ep}{c^2}\right)=\st\left(\left(\frac{p_1}c\right)^2+ \left(\frac{p_2}c\right)^2+\left(\left(\frac{q_1}c\right)^2+\left(\frac{q_2}c\right)^2 \right)X^3(1-X)\right)\\ &=\Big(\underbrace{\st\left(\frac{p_1}c\right)}_{\widetilde p_1}\Big)^2+ \Big(\underbrace{\st\left(\frac{p_2}c\right)}_{\widetilde p_2}\Big)^2+ \Big(\Big(\underbrace{\st\left(\frac{q_1}c\right)}_{\widetilde q_1}\Big)^2+ \Big(\underbrace{\st\left(\frac{q_2}c\right)}_{\widetilde q_2}\Big)^2\Big)X^3(1-X). 
\end{align*} It follows that $\widetilde p_1=\widetilde p_2=\widetilde q_1=\widetilde q_2=0$ on $(0,1)$ and thus $\widetilde p_1=\widetilde p_2=\widetilde q_1= \widetilde q_2=0$, contradicting the choice of $c$ $\lightning$. \end{ex} \begin{rem} Completely analogously to \ref{h17bound}, one can prove the existence of degree bounds for the real Stellensätze \ref{stellensatz}, \ref{positivstellensatz} and \ref{nichtnegativstellensatz} \emph{in the case $K=R$}. \end{rem} \chapter{Semialgebraic geometry} Throughout this chapter, we let $R$ be a real closed field and $K$ a subfield of $R$. Moreover, $\mathcal S_n$ denotes for each $n\in\N_0$ the Boolean algebra of all $K$-semialgebraic subsets of $R^n$ [$\to$ \ref{introsn}, \ref{introsemialg}]. \section{Semialgebraic sets and functions} \begin{reminder}{}[$\to$ \ref{sanf}, \ref{rcfclass}(a)]\label{sasanf} Every $K$-semialgebraic subset of $R^n$ is of the form \[\bigcup_{i=1}^k\left\{x\in R^n\mid f_i(x)=0,g_{i1}(x)>0,\dots,g_{im}(x)>0\right\}\] for some $k,m\in\N_0$, $f_i,g_{ij}\in K[X_1,\dots,X_n]$. \end{reminder} \begin{reminder}{}[$\to$ \ref{elim}]\label{saproj} For all $n\in\N_0$ and $S\in\mathcal S_{n+1}$, \[\{x\in R^n\mid\exists y\in R:(x,y)\in S\},\{x\in R^n\mid\forall y\in R:(x,y)\in S\}\in \mathcal S_n.\] \end{reminder} \begin{df} Let $m,n\in\N_0$ and $A\subseteq R^m$. A map $f\colon A\to R^n$ is called \emph{$K$-semialgebraic} if its graph \[\Ga_f:=\{(x,y)\in A\times R^n\mid y=f(x)\}\subseteq R^{m+n}\] is $K$-semialgebraic. We say ``semialgebraic'' for ``$R$-semialgebraic''. \end{df} \begin{rem}\label{domainsa} The domains of $K$-semialgebraic functions are $K$-semialgebraic. Indeed, if $A\subseteq R^m$ and $f\colon A\to R^n$ is $K$-semialgebraic, then by \ref{saproj} also \[\{x\in R^m\mid\exists y\in R^n:(x,y)\in\Ga_f\}=A\] is $K$-semialgebraic. 
\end{rem} \begin{df}\label{ordertop} We equip $R$ with the \emph{order topology} which is generated [$\to$ \ref{topreminder}(b)] by the intervals $(a,b)_R$ with $a,b\in R$ [$\to$ \ref{intervals}(b)]. Moreover, we endow $R^n$ with the corresponding product topology [$\to$ \ref{subspaceproductspace}(b)] which is generated according to \ref{initialtop} by the sets $\prod_{i=1}^n(a_i,b_i)_R$ with $a_i,b_i\in R$. \end{df} \begin{rem} For $R=\R$, the topology introduced in \ref{initialtop} on $R^n=\R^n$ is obviously the usual Euclidean topology on $\R^n$. \end{rem} \begin{exo}\label{continfnorm} Let $m,n\in\N_0$, $A\subseteq R^m$ and $f\colon A\to R^n$ a map. Then $f$ is continuous [$\to$ \ref{conti}, \ref{subspaceproductspace}(a)] if and only if \[\forall x\in A:\forall\ep\in R_{>0}:\exists\de\in R_{>0}:\forall y\in A: (\|x-y\|_\infty<\de\implies\|f(x)-f(y)\|_{\infty}<\ep)\] where \[\|x\|_\infty:= \begin{cases} 0&\text{if $k=0$}\\ \max\{|x_1|,\dots,|x_k|\}&\text{if $k>0$} \end{cases} \] for $x\in R^k$. \end{exo} \begin{pro}\label{fieldopsa} The maps \begin{align*} R^2\to R,\ &(a,b)\mapsto a+b,\\ R^2\to R,\ &(a,b)\mapsto ab,\\ R\setminus\{0\}\to R,\ &a\mapsto a^{-1},\\ R\to R,\ &a\mapsto|a|\qquad\emph{\text{[$\to$ \ref{introabssgn}]}},\\ R_{\ge0}\to R,\ &a\mapsto\sqrt a\qquad\emph{\text{[$\to$ \ref{notremsqrt}]}} \end{align*} are $\Q$-semialgebraic and continuous. \end{pro} \begin{proof} It is clear that these maps are $\Q$-semialgebraic. Because of the real quantifier elimination \ref{elim}, the class of all real closed fields for which the claim holds is semialgebraic [$\to$ \ref{introsemialg}]. Since the claim is known to hold for $R=\R$, it holds also for all real closed fields [$\to$ \ref{nothingorall}]. \end{proof} \begin{cor}\label{polycont} Polynomial maps $R^m\to R^n$ are continuous. \end{cor} \begin{cor}\label{normcont} $R^n\to R,\ x\mapsto\|x\|:=\|x\|_2:=\sqrt{x_1^2+\ldots+x_n^2}$ is continuous. 
\end{cor} \begin{rem}\label{inf22} Because of \ref{normcont} and \ref{continfnorm}, for every $\ep\in R_{>0}$ there is some $\de\in R_{>0}$ such that $\forall x\in R^n:(\|x\|_\infty<\de\implies\|x\|<\ep)$. On the other hand, $\|x\|_\infty\le\|x\|$ for all $x\in R^n$. It follows that the topology on $R^n$ is also generated by the open balls $\{x\in R^n\mid\|x-y\|<\ep\}$ ($y\in R^n,\ep>0$) and that \ref{continfnorm} holds also with $\|.\|$ instead of $\|.\|_\infty$. \end{rem} \begin{rem}\label{toprn} \begin{enumerate}[(a)] \item By \ref{polycont}, $R^n$ is obviously endowed with the initial topology with respect to all maps $R^n\to R,\ x\mapsto p(x)$ ($p\in R[\x]$) [$\to$ \ref{initialtop}]. \item Because of (a), the topology on $R^n$ is obviously generated by the sets \[\{x\in R^n\mid p(x)>0\}\qquad(p\in R[\x]).\] \item Viewing $R^n$ in virtue of the injective map \[R^n\to\sper R[\x],\ x\mapsto P_x=\{f\in R[\x]\mid f(x)\ge0\}\] as a subset of $\sper R[\x]$, the topology on $R^n$ is, due to (b), induced by the spectral topology [$\to$ \ref{spectraltop}] on $\sper R[\x]$. \end{enumerate} \end{rem} \begin{thm}\label{saclo} \begin{enumerate}[\normalfont(a)] \item If $A\subseteq R^m$ and $f\colon A\to R^n$ is $K$-semialgebraic, then $f(B)\in\mathcal S_n$ for all $B\in\mathcal S_m$ with $B\subseteq A$ and $f^{-1}(C)\in\mathcal S_m$ for all $C\in\mathcal S_n$. \item If $A\subseteq R^\ell$, $B\subseteq R^m$, $f\colon A\to B$ and $g\colon B\to R^n$ are $K$-semialgebraic, then $g\circ f\colon A\to R^n$ is again $K$-semialgebraic. \item If $A\in\mathcal S_n$, then the $K$-semialgebraic functions $A\to R$ form a subring of the ring $R^A$ of all functions $A\to R$. \end{enumerate} \end{thm} \begin{proof} \begin{enumerate}[(a)] \item Let $A\subseteq R^m$ and $f\colon A\to R^n$ be $K$-semialgebraic. 
By \ref{saproj}, since $\Ga_f$ is $K$-semialgebraic, the set $f(B)=\{y\in R^n\mid\exists x\in R^m:(x\in B\et (x,y)\in\Ga_f)\}$ is $K$-semialgebraic for all $B\in\mathcal S_m$ with $B\subseteq A$, and the set $f^{-1}(C)=\{x\in R^m\mid\exists y\in R^n:(y\in C\et (x,y)\in\Ga_f)\}$ is $K$-semialgebraic for all $C\in\mathcal S_n$. \item Suppose $A\subseteq R^\ell$, $B\subseteq R^m$ and $f\colon A\to B$ as well as $g\colon B\to R^n$ are $K$-semialgebraic. Then $\Ga_f\in\mathcal S_{\ell+m}$ and $\Ga_g\in\mathcal S_{m+n}$ and thus \[\Ga_{g\circ f}=\{(x,z)\in A\times R^n\mid\exists y\in R^m:((x,y)\in\Ga_f\et(y,z)\in \Ga_g)\}\in\mathcal S_{\ell+n}.\] Hence $g\circ f$ is $K$-semialgebraic. \item If $A\in\mathcal S_n$ and $f_1,f_2\colon A\to R$ are $K$-semialgebraic, then also \[A\to R^2,\ x\mapsto(f_1(x),f_2(x))\] is $K$-semialgebraic. Now apply \ref{fieldopsa} and (b). \end{enumerate} \end{proof} \begin{ex} If $R$ is a non-Archimedean (real closed) extension of $\R$, then $[0,1]_R$ is not compact [$\to$ \ref{dfcomp}]. Indeed, if $\ep\in\m_R$ [$\to$ \ref{ordval}] with $\ep>0$, then \[[0,1]_R\subseteq\bigcup_{a\in[0,1]_R}(a-\ep,a+\ep)_R,\] but there are no $N\in\N$ and $a_1,\dots,a_N\in[0,1]_R$ with $[0,1]_R\subseteq\bigcup_{k=1}^N(a_k-\ep,a_k+\ep)_R$ (for otherwise $[0,1]_\R=\st([0,1]_R)\subseteq\{\st(a_1),\dots,\st(a_N)\}\ \lightning$). \end{ex} \begin{df}\label{dfsacompact} Let $A\subseteq R^n$. We call $A$ \emph{bounded} if there is $b\in R$ with $\|x\|\le b$ for all $x\in A$ [$\to$ \ref{normcont}]. Moreover, $A$ is called \emph{$K$-semialgebraically compact} if $A\in\mathcal S_n$ and $A$ is bounded and closed. We simply say ``semialgebraically compact'' instead of ``$R$-semialgebraically compact''. \end{df} \begin{rem}\label{sacompreals} From analysis, one knows for $R=\R$: A $K$-semialgebraic set $A\subseteq\R^n$ is compact if and only if it is $K$-semialgebraically compact. \end{rem} \begin{pro}\label{ksemalgcomp} Let $A\in\mathcal S_n$. 
Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $A$ is bounded. \item $\exists b\in R:\forall x\in A:\|x\|\le b$ \item $\exists b\in R:\forall x\in A:\|x\|_\infty\le b$ \item $\exists b\in K:\forall x\in A:\|x\|\le b$ \item $\exists b\in K:\forall x\in A:\|x\|_\infty\le b$ \end{enumerate} \end{pro} \begin{proof} WLOG $A\ne\emptyset$. We have (a)$\overset{\ref{dfsacompact}}\iff$(b)$\overset{\ref{inf22}}\iff$(c) $\Longleftarrow$ (e) $\Longleftarrow$ (d). It remains to show (b)$\implies$(d). Suppose therefore that (b) holds. The set \[S:=\{\|x\|\mid x\in A\}\subseteq R_{\ge0}\] is $K$-semialgebraic [$\to$ \ref{saproj}]. Hence $S$ can be defined by finitely many polynomials [$\to$ \ref{sanf}] with coefficients in $K$ and by Lemma \ref{sgnbounds}(a) we find some $b\in K_{>1}$ such that each of these polynomials has constant sign on the interval $(b,\infty)_R$. Then either $(b,\infty)_R\cap S=\emptyset$ or $(b,\infty)_R\subseteq S$. But the latter is impossible due to (b). Hence $\forall x\in A:\|x\|\le b$. \end{proof} \begin{thm}\label{compactimage} Let $A\subseteq R^m$ and suppose $f\colon A\to R^n$ is $K$-semialgebraic and continuous. Then for every $K$-semialgebraically compact set $B\subseteq A$, the set $f(B)$ is also $K$-semialgebraically compact. \end{thm} \begin{proof} If $B\in\mathcal S_m$ with $B\subseteq A$, then $f(B)\in\mathcal S_n$ by \ref{saclo}(a) since $f$ is $K$-semialgebraic. For the rest of the claim we can suppose that $K=R$. We fix a ``complexity bound'' $N\in\N$ and fix $m,n\in\N_0$ but no longer fix $A$ and $f$. 
By \ref{sasanf}, it suffices to show the following: $(*)$ For all $f_1,\dots,f_N,g_{11},g_{12},\dots,g_{NN}\in R[X_1,\dots,X_m,Y_1,\dots,Y_n]_N$ and\\ $\widetilde f_1,\dots,\widetilde f_N,\widetilde g_{11},\widetilde g_{12},\dots, \widetilde g_{NN}\in R[X_1,\dots,X_m]_N$, if we set \begin{align*} \Ga&:=\bigcup_{i=1}^N\{(x,y)\in R^m\times R^n\mid f_i(x,y)=0,g_{i1}(x,y)>0,\dots, g_{iN}(x,y)>0\},\\ A&:=\{x\in R^m\mid\exists y\in R^n:(x,y)\in\Ga\}\text{ and}\\ B&:=\bigcup_{i=1}^N\{x\in R^m\mid\widetilde f_i(x)=0,\widetilde g_{i1}(x)>0, \ldots,\widetilde g_{iN}(x)>0\}, \end{align*} then \begin{itemize} \item $\Ga$ is not the graph of a continuous function from $A$ to $R^n$ or \item $B$ is not a subset of $A$ or \item $B$ is not closed in $R^m$ or \item $B$ is not bounded in $R^m$ or \item $\{y\in R^n\mid\exists x\in R^m:(x\in B\et(x,y)\in\Ga)\}$ is closed and bounded in $R^n$. \end{itemize} In addition, we now no longer fix $R$. One easily checks that the class of all real closed fields $R$ for which $(*)$ holds is semialgebraic. To this end, one applies the real quantifier elimination \ref{elim} many times, for example to introduce the finitely many coefficients of the $f_i,g_{ij},\widetilde f_i, \widetilde g_{ij}$ by universal quantifiers. By \ref{nothingorall}, $(*)$ now holds either for all or for no real closed field $R$. Therefore it is enough to show $(*)$ for $R=\R$. But we know this from analysis due to \ref{sacompreals}. \end{proof} \begin{exo}\label{dim1} \begin{enumerate}[(a)] \item The \alal{open}{closed} semialgebraic subsets of $R$ are exactly the finite unions of pairwise disjoint sets of the form $\malal{(-\infty,\infty)_R,(-\infty,a)_R,(a,\infty)_R\text{ and }(a,b)_R} {(-\infty,\infty)_R,(-\infty,a]_R,[a,\infty)_R\text{ and }[a,b]_R}$ with $a,b\in R$. \item The semialgebraically compact subsets of $R$ are exactly the finite unions of pairwise disjoint sets of the form $[a,b]_R$ with $a,b\in R$. 
\end{enumerate} \end{exo} \section{The \L ojasiewicz inequality} \begin{pro}\label{polgrowth} Let $a\in K$ and suppose $h\colon(a,\infty)_R\to R$ is $K$-semialgebraic. Then there are $b\in K\cap[a,\infty)_R$ and $N\in\N$ such that $|h(x)|\le x^N$ for all $x\in(b,\infty)_R$. \end{pro} \begin{proof} Using \ref{sasanf}, we write \[\Ga_h=\bigcup_{i=1}^k\{(x,y)\in R^2\mid f_i(x,y)=0,g_{i1}(x,y)>0,\ldots, g_{im}(x,y)>0\}\] with $k,m\in\N_0$ and $f_i,g_{ij}\in K[X,Y]$ where we suppose each of the $k$ sets contributing to this union to be nonempty. We must have $k>0$ and $\deg_Yf_i>0$ for all $i\in\{1,\dots,k\}$ (for otherwise there would be $x,c,d\in R$ with $c<d$ and $\{x\}\times (c,d)_R\subseteq\Ga_h$, which is impossible since $\Ga_h$ is the graph of a function). Write $\prod_{i=1}^kf_i=\sum_{i=0}^dp_iY^i$ with $d>0$, $p_0,\ldots,p_d\in K[X]$ and $p_d\ne0$. By rescaling one of the $f_i$ if necessary, we can suppose that the leading coefficient of $p_d$ is greater than $1$. Choose $c\in K\cap[a,\infty)_R$ such that $p_d>1$ on $(c,\infty)_R$ [$\to$ \ref{sgnbounds}(a)]. Because of $\sum_{i=0}^dp_i(x)h(x)^i=0$ and $p_d(x)\ne0$ for all $x\in(c,\infty)_R$, we have \[|h(x)|\le\max\left\{1,\frac{|p_0(x)|+\ldots+|p_{d-1}(x)|}{|p_d(x)|} \right\}\le1+|p_0(x)|+\ldots+|p_{d-1}(x)| \] for all $x\in(c,\infty)_R$ [$\to$ \ref{sgnbounds}(a)]. Now the existence of $b$ and $N$ is easy to see. \end{proof} \begin{thm}[\L ojasiewicz inequality]\label{lojasiewicz} Let $n\in\N_0$ and suppose $A\subseteq R^n$ is $K$-semialgebraically compact and $f,g\colon A\to R$ are continuous $K$-semialgebraic functions satisfying \[\forall x\in A:(f(x)=0\implies g(x)=0).\] Then there are $N\in\N$ and $C\in K_{\ge0}$ such that \[\forall x\in A:|g(x)|^N\le C|f(x)|.\] \end{thm} \begin{proof} Since $A$ is $K$-semialgebraically compact, so is $A_t:=\{x\in A\mid|g(x)|=\frac1t\}$ for each $t\in R_{>0}$. Set $I:=\{t\in R_{>0}\mid A_t\ne\emptyset\}$. 
For each $t\in I$, \[f_t:=\min\{|f(x)|\mid x\in A_t\}\] exists by \ref{compactimage} and \ref{dim1}(b). Evidently, we have to show that there exist $N\in\N$ and $C\in K_{\ge0}$ such that $\forall t\in I:\left(\frac1t\right)^N\le Cf_t$. By hypothesis, we have $f_t>0$ for all $t\in I$. Furthermore, \[R_{>0}\to R,\ t\mapsto\begin{cases}0&\text{if $t\notin I$}\\ \frac1{f_t}&\text{if $t\in I$} \end{cases}\] is $K$-semialgebraic. Thus, by \ref{polgrowth} there are $b\in K_{>0}$ and $N\in\N$ such that \[(*)\qquad\frac1{f_t}\le t^N\] for all $t\in I\cap(b,\infty)_R$. Since \[B:=\left\{x\in A\mid|g(x)|\ge\frac1b\right\}=\bigcup_{t\in I\cap(0,b]_R}A_t\] is $K$-semialgebraically compact, we can choose according to \ref{compactimage} and \ref{ksemalgcomp} some $C\in K_{\ge1}$ satisfying \[\frac{|g(x)|^N}{|f(x)|}\le C\] for all $x\in B$ (note that $f(x)\ne0$ for all $x\in B$). We deduce \[(**)\qquad\frac1{f_t}\le Ct^N\] for all $t\in I\cap(0,b]_R$. Together with $(*)$, we obtain $(**)$ even for all $t\in I$ as desired. \end{proof} \begin{lem}(``shrinking map'', in German: ``Schränkungstransformation'') \label{shrink} Let $n\in\N_0$, $B:=\{x\in R^n\mid\|x\|<1\}$ and $S:=\{x\in R^n\mid\|x\|=1\}$. The maps \begin{align*} \ph\colon R^n\to B,\ &x\mapsto\frac x{\sqrt{1+\|x\|^2}}\qquad\text{and}\\ \ps\colon B\to R^n,\ &y\mapsto\frac y{\sqrt{1-\|y\|^2}} \end{align*} are $\Q$-semialgebraic, continuous and inverse to each other. For all $A\in\mathcal S_n$, we have \[\text{$A$ closed}\iff\text{$\ph(A)\cup S$ is $K$-semialgebraically compact.}\] \end{lem} \begin{proof} From \ref{fieldopsa}, the $\Q$-semialgebraicity and the continuity are clear. 
For all $x\in R^n$, we have \[\ps(\ph(x))=\frac{\frac x{\sqrt{1+\|x\|^2}}}{\sqrt{1-\frac{\|x\|^2}{1+\|x\|^2}}}= \frac{\frac{x}{\sqrt{1+\|x\|^2}}}{\frac1{\sqrt{1+\|x\|^2}}}=x.\] For all $y\in B$, we have \[\ph(\ps(y))=\frac{\frac y{\sqrt{1-\|y\|^2}}}{\sqrt{1+\frac{\|y\|^2}{1-\|y\|^2}}}= \frac{\frac y{\sqrt{1-\|y\|^2}}}{\sqrt{\frac1{1-\|y\|^2}}}=y.\] Now let $A\in\mathcal S_n$. Since $\ph(A)\cup S\subseteq B\cup S$ is a bounded $K$-semialgebraic set, it is $K$-semialgebraically compact if and only if it is closed. To show: $A$ closed $\iff$ $\ph(A)\cup S$ closed. \smallskip ``$\Longleftarrow$'' Suppose $\ph(A)\cup S$ is closed. Then $\ph(A)=(\ph(A)\cup S)\cap B$ is closed in $B$ (with respect to the topology induced from $R^n$) and thus also $A=\ph^{-1}(\ph(A))$ in $R^n$. \smallskip ``$\Longrightarrow$'' Let $A$ be closed. Then $\ph(A)=\ps^{-1}(A)$ is closed in $B$ and hence $\ph(A)=C\cap B$ for some closed set $C\subseteq R^n$. WLOG $C\subseteq B\cup S$ (otherwise replace $C$ by $C\cap(B\cup S)$). WLOG $S\subseteq C$ (otherwise replace $C$ by $C\cup S$). Now $\ph(A)\cup S\subseteq C\subseteq(C\cap B)\cup(C\cap S)= \ph(A)\cup S$. Hence $\ph(A)\cup S=C$ is closed. \end{proof} \begin{cor}\label{lojasiewiczcor} Let $n\in\N_0$ and suppose that $A\subseteq R^n$ is closed and $f,g\colon A\to R$ are continuous $K$-semialgebraic functions satisfying \[\forall x\in A:(f(x)=0\implies g(x)=0).\] Then there are $N,k\in\N$ and $C\in K_{\ge0}$ such that \[\forall x\in A:|g(x)|^N\le C(1+\|x\|^2)^k|f(x)|.\] \end{cor} \begin{proof} By \ref{domainsa}, $A$ is $K$-semialgebraic. If $A$ is bounded, then $A$ is $K$-semialgebraically compact and the claim follows (with $k:=1$) from the \L ojasiewicz inequality \ref{lojasiewicz}. Now suppose that $A$ is unbounded. Since $\{\|x\|\mid x\in A\}\subseteq R$ is $K$-semialgebraic, there is then some $a\in K$ such that $(a,\infty)_R\subseteq\{\|x\|\mid x\in A\}$.
The functions \begin{align*} \mathring f\colon(a,\infty)_R\to R,\ &t\mapsto\max\{|f(x)|\mid x\in A,\|x\|=t\} \qquad\text{and}\\ \mathring g\colon(a,\infty)_R\to R,\ &t\mapsto\max\{|g(x)|\mid x\in A,\|x\|=t\} \end{align*} are semialgebraic. By \ref{polgrowth}, there are $b\in K\cap[a,\infty)_R$ with $b\ge1$ and $\ell\in\N$ such that $\mathring f(t)\le(1+t^2)^\ell$ and $\mathring g(t)\le(1+t^2)^\ell$ for all $t\in(b,\infty)_R \subseteq R_{\ge1}$. Now consider the continuous $K$-semialgebraic functions \[f_0\colon A\to R,\ x\mapsto\frac{f(x)}{(1+\|x\|^2)^{\ell+1}}\qquad\text{and}\qquad g_0\colon A\to R,\ x\mapsto\frac{g(x)}{(1+\|x\|^2)^{\ell+1}}.\] We have $\forall x\in A:(f_0(x)=0\implies g_0(x)=0)$ and obviously it is enough to show that there are $N\in\N$ and $C\in K_{\ge0}$ such that $\forall x\in A:|g_0(x)|^N\le C|f_0(x)|$ (set then $k:=\max\{1,(N-1)(\ell+1)\}$). The advantage of $f_0$ and $g_0$ over $f$ and $g$ is that there is for all $\ep\in R_{>0}$ a semialgebraically compact set $B\subseteq A$ such that $|f_0(x)|<\ep$ and $|g_0(x)|<\ep$ for all $x\in A\setminus B$. With the notation of Lemma \ref{shrink}, the $K$-semialgebraic functions \begin{align*} \widetilde f\colon\ph(A)\cup S\to R,\ &y\mapsto \begin{cases} 0&\text{if $y\in S$}\\ f_0(\ps(y))&\text{if $y\in\ph(A)$} \end{cases} \qquad\text{and}\\ \widetilde g\colon\ph(A)\cup S\to R,\ &y\mapsto \begin{cases} 0&\text{if $y\in S$}\\ g_0(\ps(y))&\text{if $y\in\ph(A)$} \end{cases} \end{align*} are continuous. For example, for $\widetilde f$ one sees this as follows: Since $f_0\circ\ps|_{\ph(A)}$ is continuous and $\ph(A)=(\ph(A)\cup S)\cap B$ is open in $\ph(A)\cup S$, it suffices to show by \ref{continfnorm} and \ref{inf22} that \[\forall y_0\in S:\forall\ep\in R_{>0}:\exists\de\in R_{>0}:\forall y\in\ph(A): (\|y_0-y\|<\de\implies|f_0(\ps(y))|<\ep).\] To this end, let $y_0\in S$ and $\ep\in R_{>0}$. Choose a semialgebraically compact set $B\subseteq A$ with $|f_0(x)|<\ep$ for all $x\in A\setminus B$. 
Then $\ph(B)$ is semialgebraically compact by \ref{compactimage} and consequently $S\cup\ph(A\setminus B)=(S\cup\ph(A))\setminus\ph(B)$ is open in $\ph(A)\cup S$. Thus there is $\de\in R_{>0}$ with $\{y\in\ph(A)\cup S\mid\|y_0-y\|<\de\}\subseteq S\cup\ph(A\setminus B)$, i.e., \[\{y\in\ph(A)\mid\|y_0-y\|<\de\}\subseteq\ph(A\setminus B).\] Now let $y\in\ph(A)$ with $\|y_0-y\|<\de$. Then $y\in\ph(A\setminus B)$ and thus $\ps(y)\in A\setminus B$. Hence $|f_0(\ps(y))|<\ep$. This shows the continuity of $\widetilde f$. For all $y\in\ph(A)$, we obviously have \[\widetilde f(y)=0\implies f_0(\ps(y))=0\implies g_0(\ps(y))=0\implies\widetilde g(y)=0.\] Altogether, $\forall y\in\ph(A)\cup S:(\widetilde f(y)=0\implies\widetilde g(y)=0)$. Since $\ph(A)\cup S$ is $K$-semialgebraically compact by \ref{shrink}, we get from the \L ojasiewicz inequality \ref{lojasiewicz} $N\in\N$ and $C\in K_{\ge0}$ with $\forall y\in\ph(A)\cup S:|\widetilde g(y)|^N\le C|\widetilde f(y)|$. In particular, we obtain $\forall y\in\ph(A):|g_0(\ps(y))|^N\le C|f_0(\ps(y))|$, which means $\forall x\in A:|g_0(x)|^N\le C|f_0(x)|$ as desired. \end{proof} \section{The finiteness theorem for semialgebraic sets} \begin{df}\label{dfbasicopenclosed} Let $n\in\N_0$. A subset $S$ of $R^n$ is called \emph{$K$-basic \alal{open}{closed}} if there are $m\in\N_0$ and $g_1,\dots,g_m\in K[\x]$ satisfying $S=\{x\in R^n\mid g_1(x)\malal>\ge0,\dots,g_m(x)\malal>\ge0\}$. \end{df} \begin{rem}\label{basicclosedisclosed} Every $K$-basic \alal{open}{closed} subset of $R^n$ is $K$-semialgebraic and \alal{open}{closed} in $R^n$. \end{rem} \begin{thm}[Finiteness theorem for semialgebraic sets] \label{finthm} Let $n\in\N_0$ and $S\in\mathcal S_n$ \alal{open}{closed}. Then $S$ is a finite union of $K$-basic \alal{open}{closed} subsets of $R^n$.
\end{thm} \begin{proof} \begin{align*} &\text{$S$ is a finite union of $K$-basic open subsets of $R^n$}\\ \iff&\text{$S$ is a finite union of finite intersections of sets of the form $\{x\in R^n\mid g(x)>0\}$}\\ &\text{\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad($g\in K[\x]$)}\\ \iff&\text{$\complement S$ is a finite intersection of finite unions of sets of the form $\{x\in R^n\mid g(x)\ge0\}$}\\ &\text{\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad($g\in K[\x]$)}\\ \overset{\ref{unionsection}}\iff& \text{$\complement S$ is a finite union of finite intersections of sets of the form $\{x\in R^n\mid g(x)\ge0\}$}\\ &\text{\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad($g\in K[\x]$)}\\ \iff&\text{$\complement S$ is a finite union of $K$-basic closed subsets of $R^n$.} \end{align*} It is thus enough to show the claim for open $S$. Write \[S=\bigcup_{i=1}^\ell\{x\in R^n\mid f_i(x)=0,g_{i1}(x)>0,\ldots,g_{im}(x)>0\}\] according to \ref{sasanf} with $\ell,m\in\N_0$, $f_i,g_{ij}\in K[\x]$. Fix $i\in\{1,\dots,\ell\}$. It is enough to find a $K$-basic open set $U\subseteq R^n$ such that \[\{x\in R^n\mid f_i(x)=0,g_{i1}(x)>0,\ldots,g_{im}(x)>0\}\subseteq U \subseteq S.\] Consider the closed set $A:=R^n\setminus S\in\mathcal S_n$ and the continuous $K$-semialgebraic functions \begin{align*} f\colon A\to R,&\ x\mapsto(f_i(x))^2\qquad\text{and}\\ g\colon A\to R,&\ x\mapsto\prod_{j=1}^m(|g_{ij}(x)|+g_{ij}(x)). \end{align*} We have $\forall x\in A:(f(x)=0\implies g(x)=0)$. By \ref{lojasiewiczcor}, there thus exist $N,k\in\N$ and $C\in K_{\ge0}$ such that $\forall x\in A:|g(x)|^N\le C(1+\|x\|^2)^kf(x)$. For all $x\in A$ satisfying $g_{i1}(x)>0,\ldots,g_{im}(x)>0$, we thus have $(2^m\prod_{j=1}^mg_{ij}(x))^N\le C(1+\sum_{j=1}^nx_j^2)^kf_i(x)^2$. 
Set \[U:=\left\{x\in R^n\mid C\left(1+\sum_{j=1}^nx_j^2\right)^kf_i(x)^2< \left(2^m\prod_{j=1}^mg_{ij}(x)\right)^N,g_{i1}(x)>0,\dots,g_{im}(x)>0\right\}.\] Then $U\cap A=\emptyset$ and $\{x\in R^n\mid f_i(x)=0,g_{i1}(x)>0,\ldots, g_{im}(x)>0\}\subseteq U\subseteq S$. \end{proof} \begin{ex} The ``slashed square'' $S:=(-1,1)_R^2\setminus([0,1]_R\times\{0\})$ is $K$-semialgebraic and open. By \ref{finthm}, it is thus a finite union of $K$-basic open subsets of $R^2$. Indeed, \begin{align*} S=&\{(x,y)\in R^2\mid-1<x<1,-(y+1)y^2(y-1)>0\}\cup\\ &\left\{(x,y)\in R^2\mid\left(x+\frac12\right)^2+y^2<\left(\frac12\right)^2\right\} \end{align*} is a union of two $K$-basic open sets. However, $S$ is not $K$-basic open. To show this, we assume \[S=\{(x,y)\in R^2\mid g_1(x,y)>0,\ldots,g_m(x,y)>0\}\] with $m\in\N_0$, $g_i\in K[X,Y]$. For continuity reasons, we have $g_i(x,0)\ge0$ for all $x\in[0,1]_R$ and $i\in\{1,\ldots,m\}$. Because of $([0,1]_R\times\{0\})\cap S=\emptyset$, we thus have $[0,1]_R=\bigcup_{i=1}^m\{x\in[0,1]_R\mid g_i(x,0)=0\}$. WLOG $\#\{x\in[0,1]_R\mid g_1(x,0)=0\}=\infty$. Then $g_1(X,0)=0$ and consequently $(R\times\{0\})\cap S=\emptyset$ in contradiction to $(-1,0)_R\times\{0\}\subseteq S$. \end{ex} \begin{thm}[Abstract version of the finiteness theorem for semialgebraic sets] \label{finthmabstract} Let $R|K$ be algebraic, i.e., $R$ be the real closure of $(K,K\cap R^2)$. Let $n\in\N_0$ and write $A:=K[\x]$ and $T:=\sum K_{\ge0}A^2$ so that we are in the setting described before \ref{rnassubsetofsper}. Denote by \[\fatten\colon\mathcal S_n\to\mathcal C:=\mathcal C_{(A,T)}\] again the fattening \emph{[$\to$ \ref{slimfatten}, \ref{fatclo}]}. Let $S\in\mathcal S_n$. Then \[ S \malal{\text{open}}{\text{closed}} \text{in $R^n$} \iff \fatten(S)\malal{\text{open}}{\text{closed}} \text{in $\sper(A,T)$}. \] \end{thm} \begin{proof} It is enough to show: $S$ open $\iff$ $\fatten(S)$ open (the case of closed $S$ then follows by passing to complements, since $\fatten(\complement S)=\complement\fatten(S)$).
\smallskip ``$\Longleftarrow$'' By definition of the spectral topology [$\to$ \ref{spectraltop}], $\fatten(S)$ is a union of sets of the form $\{P\in\sper(A,T)\mid\widehat g_1(P)>0,\ldots,\widehat g_m(P)>0\}$ ($m\in\N_0,g_1,\dots,g_m\in A$). By \ref{constrcompact} and \ref{compactsubspace}, $\fatten(S)$ is quasicompact [$\to$ \ref{dfcomp}] with respect to the constructible topology [$\to$ \ref{spectraltop}]. Hence $\fatten(S)$ is a finite union of sets of the described form, i.e., \[(**)\qquad \fatten(S)=\bigcup_{i=1}^k\{P\in\sper(A,T)\mid\widehat g_{i1}(P)>0,\ldots, \widehat g_{im}(P)>0\}\] with $k,m\in\N_0$, $g_{ij}\in A$. It follows by \ref{slimfatten} that \[(*)\qquad S=\bigcup_{i=1}^k\{x\in R^n\mid g_{i1}(x)>0,\ldots,g_{im}(x)>0\}.\] In particular, $S$ is open. \smallskip ``$\Longrightarrow$'' By the finiteness theorem for semialgebraic sets \ref{finthm}, we can find $k,m\in\N_0$ and $g_{ij}\in A$ such that $(*)$ holds. It follows that $(**)$ holds. In particular, $\fatten(S)$ is open. \end{proof} \begin{rem} Suppose we are in the situation of Theorem \ref{finthmabstract}. \begin{enumerate}[(a)] \item From the very definition of the slimming $\slim$ \ref{slimfatten}, one sees immediately that it is (unlike in general the fattening!) compatible even with arbitrary unions instead of just finite ones: If $(C_i)_{i\in I}$ is a family of constructible subsets of $\sper(A,T)$ whose union $\bigcup_{i\in I}C_i$ is again constructible, then \[\slim\left(\bigcup_{i\in I}C_i\right)=\bigcup_{i\in I}\slim\left(C_i\right).\] \item From (a) it is clear that the proof of the easy direction ``$\Longleftarrow$'' of Theorem \ref{finthmabstract} could be considerably simplified by ignoring the quasicompactness of $\fatten(S)$ and instead replacing $(*)$ and $(**)$ by similar conditions with a possibly infinite union instead of a finite one. 
\item In the special case $R=K$, the easy direction ``$\Longleftarrow$'' of Theorem \ref{finthmabstract} follows also simply from \ref{toprn}(c), which says that \[S=\slim(\fatten(S))=\{x\in R^n\mid P_x\in\fatten(S)\}\] is open. \item If one had already \ref{toprn2}(c) available, the easy direction ``$\Longleftarrow$'' of Theorem \ref{finthmabstract} would follow exactly as in the preceding item (c) also in the general case. However, our proof of \ref{toprn2}(c) will use Corollary \ref{finthmabstractcor} and therefore Theorem \ref{finthmabstract}. \end{enumerate} \end{rem} \begin{rem}\label{finthmabstractmotivates} The description of \ref{finthmabstract} as an abstract version of \ref{finthm} is motivated by the fact that one can easily retrieve the latter from the former: Note first that in \ref{finthm} one can reduce to the case where $R|K$ is algebraic by using the transfer between $R$ and $\overline{(K,K\cap R_{\ge0})}$ [$\to$ \ref{transfer}]. For this, one has to argue that this transfer preserves openness, which can be accomplished by real quantifier elimination \ref{elim}. Thus let now $R|K$ be algebraic, $n\in\N_0$ and $S\in\mathcal S_n$ open (by the first part of the proof of Theorem \ref{finthm}, it suffices to treat the case of open sets). We have to show that $S$ is a finite union of $K$-basic open subsets of $R^n$. As seen in the easy part ``$\Longleftarrow$'' of the proof of \ref{finthmabstract}, it suffices for this purpose to show that $\fatten(S)$ is open. This follows from the difficult part ``$\Longrightarrow$'' of \ref{finthmabstract}. \end{rem} \begin{cor}[Strengthening of \ref{rcsper}]\label{finthmabstractcor} Let $R|K$ be algebraic, i.e., $R$ be the real closure of $(K,K\cap R_{\ge0})$. Let $n\in\N_0$ and write $A:=K[\x]$ and $T:=\sum K_{\ge0}A^2$. Then \[\sper R[\x]\to\sper(A,T),\ P\mapsto P\cap A \] is a homeomorphism with respect to both the spectral and the constructible topology on both sides.
\end{cor} \begin{proof} The map is continuous with respect to both topologies by \ref{spercontinuity} and bijective by \ref{rcsper}. According to the definition of a homeomorphism \ref{dfhomeo} and the definition of both topologies in \ref{spectraltop}, it suffices to show that for all $C\in\mathcal C_{R[\x]}$ we have $\{P\cap A\mid P\in C\}\in\mathcal C_{(A,T)}$ and that this latter set is open in $\sper(A,T)$ whenever $C$ is open in $\sper R[\x]$. For this purpose, let $C\in\mathcal C_{R[\x]}$. The slimming $\{x\in R^n\mid P_x\in C\}$ [$\to$ \ref{slimfatten}] of $C$ is then a semialgebraic subset of $R^n$ and thus even $K$-semialgebraic by \ref{semialgk} since $R|K$ is algebraic. By \ref{sasanf}, we thus find $k,m\in\N_0$ and $f_i,g_{ij}\in K[\x]$ such that \[\{x\in R^n\mid P_x\in C\}=\bigcup_{i=1}^k \{x\in R^n\mid f_i(x)=0,g_{i1}(x)>0,\ldots, g_{im}(x)>0\},\] where one can even choose $f_1=\ldots=f_k=0$ by the finiteness theorem for semialgebraic sets \ref{finthm} in the case where $C$ is open. Fattening this, we obtain \[C=\bigcup_{i=1}^k\{P\in\sper R[\x]\mid\widehat f_i(P)=0,\widehat g_{i1}(P)>0, \ldots,\widehat g_{im}(P)>0\}\] and therefore [$\to$ \ref{spercontinuity}] \begin{multline*} \{P\cap A\mid P\in C\}=\bigcup_{i=1}^k\{P\in\sper(A,T)\mid\widehat f_i(P)=0, \widehat g_{i1}(P)>0,\ldots,\widehat g_{im}(P)>0\}\\ \in\mathcal C_{(A,T)}. \end{multline*} If $C$ is open, then so is $\{P\cap A\mid P\in C\}$ because of the choice of $f_i=0$. \end{proof} \begin{rem}\label{toprn2} In the situation of \ref{finthmabstract}, one can now generalize \ref{toprn} as follows: \begin{enumerate}[(a)] \item $R^n$ is equipped with the initial topology with respect to all maps $R^n\to R,\ x\mapsto p(x)$ $(p\in A$). \item The topology on $R^n$ is generated by the sets $\{x\in R^n\mid p(x)>0\}$ ($p\in A$). 
\item Viewing $R^n$ by virtue of the injective map [$\to$ \ref{rnassubsetofsper}] \[R^n\to\sper A,\ x\mapsto P_x=\{f\in A\mid f(x)\ge0\}\] as a subset of $\sper A$, the topology on $R^n$ is induced by the spectral topology on $\sper A$ [$\to$ \ref{finthmabstractcor}]. \end{enumerate} Indeed, (a) is again obvious. Unlike in \ref{toprn}, (b) does not immediately follow from (a) anymore. Instead one first proves (c) by using the corresponding item from \ref{toprn} together with Corollary \ref{finthmabstractcor}. Finally, one easily deduces (b) from (c). \end{rem} \chapter{Convex sets in vector spaces} In this chapter, unless otherwise specified, $K$ always denotes a subfield of $\R$ equipped with the order and the subspace topology [$\to$ \ref{subspaceproductspace}(a)] induced by $\R$. \section{The isolation theorem for cones} \begin{df}\label{defcone} Let $V$ be a $K$-vector space. A subset $C\subseteq V$ is called a \emph{(convex) cone} (in $V$) if $0\in C$, $C+C\subseteq C$ and $K_{\ge0}C\subseteq C$ [$\to$ \ref{divnot}]. A cone $C\subseteq V$ is called \emph{proper} if $C\ne V$. \end{df} \begin{ex}\label{poiscone} Let $T$ be a preorder [$\to$ \ref{defpreorder}] of $K[\x]$ with $K_{\ge0}\subseteq T$. Then $T$ is a cone. Moreover, $T$ is proper as a preorder [$\to$ \ref{preproper}] if and only if $T$ is proper as a cone. \end{ex} \begin{pro} Let $V$ be a $K$-vector space and $C\subseteq V$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $C$ is a cone. \item $C$ is convex \emph{[$\to$ \ref{dfconv}]}, $C\ne\emptyset$ and $K_{\ge0}C\subseteq C$. \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)} is trivial. \smallskip\underline{(b)$\implies$(a)}\quad Suppose that (b) holds. From $C\ne\emptyset$ and $0C\subseteq C$, we get $0\in C$. To show: $C+C\subseteq C$. Let $x,y\in C$. Then $\frac x2+\frac y2\in C$ and thus $x+y=2\left(\frac x2+\frac y2\right)\in C$.
\end{proof} \begin{df}\label{defunit} Let $C$ be a cone in the $K$-vector space $V$ and $u\in V$. Then $u$ is called a \emph{unit} for $C$ (in $V$) if for every $x\in V$ there is some $N\in\N$ with $Nu+x\in C$. \end{df} \begin{ex}{}[$\to$ \ref{poiscone}] Let $T$ be a preorder of $K[\x]$ with $K_{\ge0}\subseteq T$. Then $T$ is Archimedean [$\to$ \ref{dfarch}(a)] if and only if $1$ is a unit for $T$. \end{ex} \begin{pro}\label{unitchar} Let $C$ be a cone in the $K$-vector space $V$ and $u\in V$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $u$ is a unit for $C$. \item $V=C-\N u$. \item $V=C-K_{\ge0}u$. \item $u\in C$ and $V=C+\Z u$. \item $u\in C$ and $V=C+Ku$. \item $\forall x\in V:\exists\ep\in K_{>0}:u+\ep x\in C$. \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)$\implies$(c)} is clear. \smallskip\underline{(c)$\implies$(d)}\quad Suppose that (c) holds. Then $u\in C-K_{\ge0}u$ and thus $(1+\la)u\in C$ for some $\la\in K_{\ge0}$ and so $u\in C$. Fix now $x\in V$. To show: $x\in C+\Z u$. Choose $\la\in K_{\ge0}$ with $x\in C-\la u$. Choose $N\in\N$ with $\la\le N$. Then $(N-\la)u\in C$ and hence \begin{multline*} x=(x-(N-\la)u)+(N-\la)u\in(C-\la u-(N-\la)u)+C\\ \subseteq C-Nu\subseteq C-\N u\subseteq C+\Z u. \end{multline*} \smallskip\underline{(d)$\implies$(e)} is trivial. \smallskip\underline{(e)$\implies$(f)}\quad Suppose that (e) holds and let $x\in V$. Choose $\la\in K$ such that $x\in C-\la u$. If $\la\le0$, then $x\in C$ and consequently $u+\ep x=u+x\in C+C\subseteq C$ with $\ep:=1$. If $\la>0$, then set $\ep:=\frac1\la>0$. Then $u+\ep x\in\ep C\subseteq C$. \smallskip\underline{(f)$\implies$(a)}\quad Suppose that (f) holds and let $x\in V$. To show: $\exists N\in\N:Nu+x\in C$. Choose $\ep\in K_{>0}$ with $u+\ep x\in C$. Choose $N\in\N$ with $\frac1\ep\le N$. From (f), it follows also that $u\in C$ and hence $(N-\frac1\ep)u\in C$.
Now $Nu+x=(N-\frac1\ep)u+\frac1\ep u+x\in C+\frac1\ep(u+\ep x)\subseteq C+\frac1\ep C\subseteq C+C\subseteq C$. \end{proof} \begin{cor} Let $u$ be a unit for the cone $C$ in the $K$-vector space $V$. Then $u\in C$ and $V=C-C$. \end{cor} \begin{rem}\label{unitinteriorpoint} The units for a cone in $K^n$ are exactly its interior points [$\to$ \ref{interiorclosure}, \ref{unitchar}(f)]. \end{rem} \begin{df}\label{defstate} Let $V$ be a $K$-vector space, $C\subseteq V$ and $u\in V$. A \emph{state} of $(V,C,u)$ is a $K$-linear function $\ph\colon V\to\R$ satisfying $\ph(C)\subseteq\R_{\ge0}$ and $\ph(u)=1$. We refer to the set $S(V,C,u)\subseteq\R^V$ of all states of $(V,C,u)$ as the \emph{state space} of $(V,C,u)$. \end{df} \begin{ex}\label{badexample} Set $K:=\R$, $V:=\R[X]$, $C:=P_\infty\in\sper\R[X]$. Then the cone $C$ does not possess a unit in $V$ and we have $S(V,C,u)=\emptyset$ for all $u\in V$. Indeed, let $u\in V$. Choose $d\in\N$ with $d>\deg u$. Then $u-\ep X^d\notin C$ for all $\ep>0$. By \ref{unitchar}(f), $u$ is thus not a unit for $C$. Assume $\ph\in S(V,C,u)$. Then $\ep\ph(X^d)-1=\ph(\ep X^d-u)\in \ph(C)\subseteq\R_{\ge0}$ for all $\ep>0$ $\lightning$. \end{ex} \begin{ex} Set $K:=\Q$, $V:=\Q^2$, $C:=\{(x,y)\in\Q^2\mid y\ge\sqrt2x\}$. All elements of $C$ except $0$ are units for $C$ [$\to$ \ref{unitinteriorpoint}]. There is no $\ph\in V^*\setminus\{0\}$ satisfying $\ph(C)\subseteq\Q_{\ge0}$ but for each $u\in C\setminus\{0\}$, we have $\#S(V,C,u)=1$. \end{ex} \begin{lem}\label{sublinear} Let $u$ be a unit for a proper cone $C$ in the $K$-vector space $V$. Then \[\rh\colon V\to\R,\ x\mapsto\sup\{\la\in K\mid x-\la u\in C\}\] is well-defined and we have $\rh(x)+\rh(y)\le\rh(x+y)$ as well as $\rh(\la x)=\la\rh(x)$ for all $x,y\in V$ and $\la\in K_{\ge0}$. \end{lem} \begin{proof} Let $x,y\in V$ and $\la\in K_{\ge0}$. 
For the well-definedness of $\rh$, we have to show that $I:=\{\la\in K\mid x-\la u\in C\}$ is nonempty and bounded from above [$\to$ \ref{archetcdef}, \ref{introduce-the-reals}]. Since $u$ is a unit for $C$, we have $I\ne\emptyset$ and furthermore there is $N\in\N$ such that $-x+Nu\in C$. Then $\la<N+1$ for all $\la\in I$ since otherwise, if $\la\in I$ satisfied $\la\ge N+1$, then \begin{align*} -u&=Nu-(N+1)u=(-x+Nu)+x-(N+1)u\\ &\in C+x-\la u+(\la-(N+1))u\\ &\subseteq C+C+K_{\ge0}u\subseteq C. \end{align*} But now $-u\notin C$ for otherwise $C\overset{\text{\ref{unitchar}(b)}}=V$. Now choose sequences $(\la_n)_{n\in\N}$ and $(\mu_n)_{n\in\N}$ in $K$ such that $x-\la_nu,y-\mu_nu\in C$ for all $n\in\N$ and $\lim_{n\to\infty}\la_n=\rh(x)$ as well as $\lim_{n\to\infty}\mu_n=\rh(y)$. Then we have $(x+y)-(\la_n+\mu_n)u\in C+C\subseteq C$ and thus $\la_n+\mu_n\le\rh(x+y)$ for all $n\in\N$. It follows that \[\rh(x)+\rh(y)=\left(\lim_{n\to\infty}\la_n\right)+\left(\lim_{n\to\infty}\mu_n\right) =\lim_{n\to\infty}(\la_n+\mu_n)\le\rh(x+y).\] Moreover, $\la x-\la\la_n u\in\la C\subseteq C$ and thus $\la\la_n\le\rh(\la x)$ for all $n\in\N$. It follows that $\la\rh(x)=\la\lim_{n\to\infty}\la_n=\lim_{n\to\infty}\la\la_n \le\rh(\la x)$ and analogously $\frac1\la\rh(\la x)\le\rh\left(\frac1\la(\la x)\right)$ if $\la\ne0$, i.e., $\la\rh(x)=\rh(\la x)$. \end{proof} \begin{thm}[Isolation theorem for cones]\label{isolation} Let $u$ be a unit for the proper cone $C$ in the $K$-vector space $V$. Then $S(V,C,u)\ne\emptyset$. \end{thm} \begin{proof} Since the union of a nonempty chain of cones in $V$ is again a cone in $V$, we can use Zorn's lemma to enlarge $C$ to a cone of $V$ that is maximal with respect to the property of not containing $-u$. WLOG suppose that $C$ has already this maximality property. \medskip \textbf{Claim 1:} $C\cup-C=V$ \smallskip \emph{Explanation.} Let $x\in V$ with $x\notin-C$. To show: $x\in C$. 
Due to the maximality of $C$ it is enough to show that the cone $C+K_{\ge0}x$ does not contain $-u$. But if we had $-u=y+\la x$ for some $y\in C$ and $\la\in K_{\ge0}$, then $\la>0$ and $x=\frac1\la(-u-y)\in-C$ $\lightning$. \medskip\noindent Consider for each $x\in V$, the sets \[I_x:=\{\la\in K\mid x-\la u\in C\}\text{ and }J_x:=\{\la\in K\mid x-\la u\in-C\}.\] \medskip \textbf{Claim 2:} $\forall x\in V:\forall\la\in I_x:\forall\mu\in J_x:\la\le\mu$ \smallskip \emph{Explanation.} Let $x\in V$, $\la\in I_x$ and $\mu\in J_x$. Then $x-\la u\in C$ and $\mu u-x\in C$. Thus, $(\mu-\la)u=(\mu u-x)+(x-\la u)\in C+C\subseteq C$. If we had $\mu<\la$, then we had $-u\in C\ \lightning$. \medskip\noindent Consider now $\ph\colon V\to\R,\ x\mapsto\sup I_x$ [$\to$ \ref{sublinear}]. \medskip \textbf{Claim 3:} $-\ph(x)=\sup\{\la\in K\mid x-\la(-u)\in-C\}$ for all $x\in V$ \smallskip \emph{Explanation.} Let $x\in V$. From $I_x\cup J_x\overset{\text{Claim 1}} =K$ and Claim 2, we get \[\ph(x)=\sup I_x=\inf J_x\] and hence \[-\ph(x)=-\inf J_x=\sup\{-\la\mid\la\in K,x-\la u\in -C\}= \sup\{\la\in K\mid x+\la u\in -C\}.\] \medskip\noindent From \ref{sublinear}, we obtain $\ph(x)+\ph(y)\le\ph(x+y)$ and $\ph(\la x)=\la\ph(x)$ for all $x,y\in V$ and $\la\in K_{\ge0}$. Since $-u$ is a unit for the proper cone $-C$, \ref{sublinear} and Claim 3 yield also $-\ph(x)-\ph(y)\le-\ph(x+y)$ for all $x,y\in V$. It follows that \[\ph(x)+\ph(y)\le\ph(x+y)\le\ph(x)+\ph(y)\] and therefore $\ph(x)+\ph(y)=\ph(x+y)$ for all $x,y\in V$. In particular, $\ph(x)+\ph(-x)=\ph(0)=0$ and hence $\ph(-x)=-\ph(x)$ for all $x\in V$ from which we deduce \[\ph((-\la)x)=\ph(-\la x)=-\ph(\la x)=-\la\ph(x)=(-\la)\ph(x)\] for all $x\in V$ and $\la\in K_{\ge0}$. Altogether, $\ph(\la x)=\la\ph(x)$ for all $x\in V$ and $\la\in K_{\ge0}\cup K_{\le0}=K$, i.e., $\ph$ is $K$-linear. Obviously, $\ph(C)\subseteq\R_{\ge0}$ and $\ph(u)=1$. Therefore $\ph\in S(V,C,u)$. 
\end{proof} \begin{lem}\label{xelc} Let $C$ be a cone in the $K$-vector space $V$ and $x\in V$. Then \[x\in C\iff x\in C-K_{\ge0}x.\] \end{lem} \begin{proof} ``$\Longrightarrow$'' is trivial. \smallskip ``$\Longleftarrow$'' Let $x\in C-K_{\ge0}x$, say $x=y-\la x$ with $y\in C$ and $\la\in K_{\ge0}$. Then \[x=\frac1{1+\la}y\in C.\] \end{proof} \begin{cor}\label{conemembership} Suppose $u$ is a unit for the cone $C$ in the $K$-vector space $V$ and $x\in V$. If $\ph(x)>0$ for all $\ph\in S(V,C,u)$, then $x\in C$. \end{cor} \begin{proof} Suppose $x\notin C$. To show: $\exists\ph\in S(V,C,u):\ph(x)\le0$. By \ref{xelc}, the cone $C-K_{\ge0}x$ is proper. Since $u$ is a unit for $C$, it is of course also a unit for $C-K_{\ge0}x$. By the isolation theorem \ref{isolation}, there is $\ph\in S(V,C-K_{\ge0}x,u)$. We have $\ph\in S(V,C,u)$ and $\ph(x)\le0$. \end{proof} \begin{exo}{}[$\to$ \ref{defstate}]\label{statespacetop} Let $V$ be a $K$-vector space, $C\subseteq V$ and $u\in V$. We equip the $\R$-vector space $\R^V$ of all functions from $V$ to $\R$ with the product topology [$\to$ \ref{subspaceproductspace}(b)]. Then $S(V,C,u)$ is a closed convex subset of $\R^V$; we equip it with the subspace topology [$\to$ \ref{subspaceproductspace}(a)]. Using \ref{initialtop}, one shows that this topology is at the same time also the initial topology with respect to the functions \[S(V,C,u)\to\R,\ \ph\mapsto\ph(x)\qquad(x\in V).\] \end{exo} \begin{thm}\label{statespacecompact} Let $u$ be a unit for the cone $C$ in the $K$-vector space $V$. Then the state space $S(V,C,u)$ is compact \emph{[$\to$ \ref{dfcomp}]}. \end{thm} \begin{proof} Choose for each $x\in V$ an $N_x\in\N$ such that $\pm x+N_xu\in C$. Then we have for all $\ph\in S(V,C,u)$ and $x\in V$ that $\pm\ph(x)+N_x=\ph(\pm x+N_xu)\ge0$ and thus \[\ph(x)\in[-N_x,N_x].\] Thus $S(V,C,u)\subseteq\prod_{x\in V}[-N_x,N_x]$. From analysis (cf.
\ref{sacompreals}) and Tikhonov's theorem \ref{tikhonov}, $\prod_{x\in V}[-N_x,N_x]$ is compact with respect to the product topology. But the product topology on $\prod_{x\in V}[-N_x,N_x]$ is induced by the topology of $\R^V$ [$\to$ \ref{inducedproductcommute}]. By \ref{statespacetop}, $S(V,C,u)$ is thus closed in the compact space $\prod_{x\in V}[-N_x,N_x]$ and hence is compact itself [$\to$ \ref{compactsubspace}]. \end{proof} \begin{exo}\label{quasicompactimage} Let $M$ and $N$ be topological spaces and $f\colon M\to N$ be continuous. If $M$ is quasicompact [$\to$ \ref{dfcomp}], then so is $f(M)$ [$\to$ \ref{compactsubspace}]. \end{exo} \begin{cor}\label{takeson} Let $M$ be a nonempty quasicompact topological space and $f\colon M\to\R$ be continuous. Then $f$ attains a minimum and a maximum, i.e., there are $x,y\in M$ with \[f(x)\le f(z)\le f(y)\] for all $z\in M$. \end{cor} \begin{proof} $f(M)$ is compact by \ref{quasicompactimage}. Hence $f(M)$ is nonempty, bounded and closed. From the first two properties, it follows that $\inf f(M), \sup f(M)\in\R$ exist [$\to$ \ref{archetcdef}(c), \ref{introduce-the-reals}]. The last property yields $\inf f(M)=\min f(M)$ and $\sup f(M)=\max f(M)$. \end{proof} \begin{thm}[Strengthening of \ref{conemembership}]\emph{[$\to$ \ref{archimedeanpositivstellensatz}]}\label{conemembershipunit} Let $u$ be a unit for the cone $C$ in the $K$-vector space $V$ and $x\in V$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\forall\ph\in S(V,C,u):\ph(x)>0$. \item $\exists N\in\N:x\in\frac1Nu+C$. \item $x$ is a unit for $C$. \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(b)}\quad Suppose that (a) holds. If $S(V,C,u)=\emptyset$, then $C=V$ by \ref{isolation} and we can choose $N\in\N$ arbitrarily. Suppose therefore that $S(V,C,u)\ne\emptyset$.
Then, by \ref{statespacecompact} and \ref{takeson}, the continuous function $S(V,C,u)\to\R,\ \ph\mapsto\ph(x)$ attains a minimum $\mu$, and $\mu>0$ by (a). Choose $N\in\N$ such that $\frac1N<\mu$. Then $\ph\left(x-\frac1Nu\right)=\ph(x)-\frac1N\ge\mu-\frac1N>0$ for all $\ph\in S(V,C,u)$. Now \ref{conemembership} yields that $x-\frac 1Nu\in C$. \smallskip \underline{(b)$\implies$(c)}\quad Suppose that (b) holds and let $y\in V$. To show: $\exists N\in\N:Nx+y\in C$. Choose $N',N''\in\N$ with $x\in\frac1{N'}u+C$ and $N''u+y\in C$. Setting $N:=N'N''$, we obtain $Nx+y\in N''N'\left(\frac1{N'}u+C\right)+y\subseteq N''(u+C)+y\subseteq N''u+y+C\subseteq C+C\subseteq C$. \smallskip \underline{(c)$\implies$(a)}\quad Suppose that (c) holds and let $\ph\in S(V,C,u)$. To show: $\ph(x)>0$. Choose $N\in\N$ with $Nx-u\in C$. Then $N\ph(x)-1=\ph(Nx-u)\ge0$ and thus $\ph(x)\ge\frac1N>0$. \end{proof} \section{Separating convex sets in topological vector spaces} \begin{df}\label{deftopvs} A $K$-vector space $V$ together with a topology on $V$ [$\to$ \ref{topreminder}(a)] is called a \emph{topological $K$-vector space} if $V\times V\to V, (x,y)\mapsto x+y$ and $K\times V\to V,\ (\la,x)\mapsto\la x$ are continuous and $\{0\}$ is closed in $V$. \end{df} \begin{ex}\label{topvsex} \begin{enumerate}[(a)] \item If $I$ is a set, then $K^I$ (endowed with the product topology [$\to$ \ref{subspaceproductspace}(b)]) is a topological $K$-vector space. \item A $K$-vector space $V$ together with the discrete topology on $V$ is a topological $K$-vector space if and only if $V=\{0\}$. Indeed, if $y\in V\setminus\{0\}$, then \[\{(\la,x)\in K\times V\mid \la x= y\}=\{(\la,\la^{-1}y)\mid\la\in K^\times\}\] is not open in $K\times V$. \item From analysis, one knows that every normed $\R$-vector space, in particular every $\R$-vector space with a scalar product, is a topological $\R$-vector space.
\end{enumerate} \end{ex} \begin{lem}\label{convgencone} Let $V$ be a $K$-vector space and $A\subseteq V$ be convex. If $0\notin A\ne\emptyset$, then $A$ generates a proper convex cone, i.e., $\sum_{x\in A}K_{\ge0}x\ne V$. \end{lem} \begin{proof} Suppose that $A\ne\emptyset$ and $\sum_{x\in A}K_{\ge0}x=V$. We show $0\in A$. Choose $y\in A$ and write $-y=\sum_{i=1}^m\la_ix_i$ with $\la_1,\dots, \la_m\in K_{\ge0}$ and $x_1,\dots,x_m\in A$. Setting $\mu:=1+\sum_{i=1}^m \la_i>0$, we have then $0=\frac1\mu y+\sum_{i=1}^m\frac{\la_i}\mu x_i\in A$ since $\frac1\mu+\sum_{i=1}^m\frac{\la_i}\mu=\frac\mu\mu=1$. \end{proof} \begin{lem}{}[$\to$ \ref{unitinteriorpoint}]\label{interiorisunit} Let $V$ be a topological $K$-vector space, $C\subseteq V$ a convex cone and $u\in C^\circ$ [$\to$ \ref{interiorclosure}]. Then $u$ is a unit for $C$ [$\to$ \ref{defunit}]. \end{lem} \begin{proof} We show $\forall x\in V:\exists\ep\in K_{>0}:u+\ep x\in C$ [$\to$ \ref{unitchar}(f)]. For this aim, fix $x\in V$. From Definition \ref{deftopvs}, it follows that $K\to V, \la\mapsto u+\la x$ is continuous. Choose an open set $A\subseteq V$ such that $u\in A\subseteq C$. Then $\{\la\in K\mid u+\la x\in A\}$ is open and contains $0$. In particular, there is $\ep\in K_{>0}$ such that $u+\ep x\in A \subseteq C$. \end{proof} \begin{ex}\label{zigzag} Consider the $\R$-vector space $V:=C([0,1],\R)$ of all continuous real valued functions on the interval $[0,1]\subseteq\R$ together with the scalar product defined by \[\langle f,g\rangle:=\int_0^1f(x)g(x)dx\qquad(f,g\in V).\] By \ref{topvsex}(c), this is a topological vector space. The constant function $u\colon[0,1]\to\R,\ x\mapsto 1$ is a unit for the cone $C:=C([0,1],\R_{\ge0})$ of all functions nonnegative on $[0,1]$ by \ref{takeson} (since $[0,1]$ is compact by \ref{sacompreals}). But $u$ does not lie in $C^\circ$ since for every $\ep>0$ there is some $f\in V$ with $\|u-f\|=\sqrt{\int_0^1(u(x)-f(x))^2dx}<\ep$ and $f\notin C$. 
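For instance, given $\ep\in\R_{>0}$, one may take the piecewise linear function \[f_\de\colon[0,1]\to\R,\ x\mapsto\min\left\{1,\frac x\de-1\right\}\qquad\left(\de\in\left(0,\frac12\right)\right).\] Then $f_\de\in V$ and $f_\de(0)=-1<0$, so $f_\de\notin C$, while $f_\de=u$ on $[2\de,1]$ and $|u-f_\de|\le2$ on $[0,2\de]$, whence \[\|u-f_\de\|=\sqrt{\int_0^1(u(x)-f_\de(x))^2dx}\le\sqrt{8\de}<\ep\] for $\de<\frac{\ep^2}8$.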
\end{ex} \begin{rem}\label{transhomeo} From Definition \ref{deftopvs}, it follows that for every topological $K$-vector space $V$ the maps $V\to V,\ x\mapsto\la x+y$ ($\la\in K^\times$, $y\in V$) are homeomorphisms [$\to$ \ref{dfhomeo}]. \end{rem} \begin{lem}\label{conthalfspace} Suppose $V$ is a topological $K$-vector space and $\ph\colon V\to\R$ is $K$-linear. Then the following are equivalent: \begin{enumerate}[(a)] \item $\ph$ is continuous. \item $\ph^{-1}(\R_{>0})$ is open. \item $\ph^{-1}(\R_{\ge0})$ is closed. \end{enumerate} \end{lem} \begin{proof} \underline{(b)$\iff$(c)} follows from $\ph^{-1}(\R_{\ge0})=-\ph^{-1}(\R_{\le0})=-(V\setminus\ph^{-1}(\R_{>0}))$ since $V\to V,\ x\mapsto -x$ is a homeomorphism by \ref{transhomeo}. \smallskip \underline{(a)$\implies$(b)} is trivial. \smallskip \underline{(b)$\implies$(a)} \quad Suppose that (b) holds. WLOG $\ph\ne0$. Choose $u\in V$ with $\ph(u)=1$ (WLOG this is possible: otherwise rescale $\ph$ by a positive factor, which affects none of (a), (b), (c)). Then the set $\ph^{-1}(\R_{>a})=au+\ph^{-1}(\R_{>0})$ is open and hence $\ph^{-1}(\R_{<-a})=-\ph^{-1}(\R_{>a})$ is open for all $a\in K$ [$\to$ \ref{transhomeo}]. So the set $\ph^{-1}((a,b)_\R)=\ph^{-1}(\R_{>a})\cap\ph^{-1}(\R_{<b})$ is open for all $a,b\in K$. Since every open subset of $\R$ is a union of intervals $(a,b)_\R$ with $a,b\in K$, the continuity of $\ph$ follows. \end{proof} \begin{lem}\label{contiffinterior} Let $V$ be a topological $K$-vector space and $\ph\colon V\to\R$ be a $K$-linear map. Then $\ph$ is continuous if and only if $\ph^{-1}(\R_{\ge0})$ has an interior point. \end{lem} \begin{proof} WLOG $\ph\ne0$. If $\ph$ is continuous, then $\ph^{-1}(\R_{>0})$ is open and because of $\ph\ne0$ nonempty; each of its points is then an interior point of $\ph^{-1}(\R_{\ge0})$. Conversely, let $u$ be an interior point of $\ph^{-1}(\R_{\ge0})$. By \ref{conthalfspace}, it is enough to show that $\ph^{-1}(\R_{>0})$ is open. For this, consider $x\in\ph^{-1}(\R_{>0})$. We have to show that there is an open set $A\subseteq V$ such that $x\in A\subseteq\ph^{-1}(\R_{>0})$. 
Choose an open set $B\subseteq V$ with $u\in B\subseteq\ph^{-1}(\R_{\ge0})$. Choose $\la\in K_{>0}$ such that $\la\ph(u)<\ph(x)$. Then $A:=x+\la(B-u)$ is open by \ref{transhomeo}, and we have $x=x+\la(u-u)\in A$ and \[\ph(A)=\ph(x)+\la(\ph(B)-\ph(u))\subseteq\ph(x)+\R_{\ge0}-\la\ph(u) \subseteq\R_{>0}.\] \end{proof} \begin{ex} Let $V:=C([0,1],\R)$ be the topological $K$-vector space from \ref{zigzag} and $x\in[0,1]$. Then $V\to\R,\ f\mapsto f(x)$ is not continuous. \end{ex} \begin{thm}[Separation theorem for topological vector spaces] \label{septopvs} Let $A$ and $B$ be convex sets in the topological $K$-vector space $V$ with $A^\circ\ne\emptyset\ne B$ and $A\cap B=\emptyset$. Then there is a continuous $K$-linear function $\ph\colon V\to\R$ with $\ph\ne0$ and $\ph(x)\le\ph(y)$ for all $x\in A$ and $y\in B$. \end{thm} \begin{proof} Since $A$ is convex, $-A$ is also convex and thus the Minkowski sum $B-A=B+(-A)$ [$\to$ \ref{minkowskisum}] is also convex. By hypothesis, we have $0\notin B-A\ne\emptyset$, so \ref{convgencone} yields a proper cone $C\subseteq V$ with $B-A\subseteq C$. Due to $A^\circ\ne\emptyset$ and $B\ne\emptyset$, \ref{transhomeo} yields $(B-A)^\circ\ne\emptyset$ and thus $C^\circ\ne\emptyset$. Choose $u\in C^\circ$. By \ref{interiorisunit}, $u$ is a unit for $C$. By the isolation theorem \ref{isolation}, there exists a state $\ph$ of $(V,C,u)$. Because of $\ph(u)=1$, we have $\ph\ne0$, and because of $\ph(B-A)\subseteq\R_{\ge0}$, we have $\ph(x)\le\ph(y)$ for all $x\in A$ and $y\in B$. Finally, $\ph$ is continuous by \ref{contiffinterior} since $u$ is an interior point of $C$ and a fortiori of $\ph^{-1}(\R_{\ge0})$. \end{proof} \begin{cor}\label{septopvscor} Let $A$ and $B$ be convex sets in the topological $K$-vector space $V$ satisfying $A\cap B=\emptyset$. Suppose $A$ is open. 
Then there is a continuous $K$-linear function $\ph\colon V\to\R$ and an $r\in\R$ such that $\ph(x)<r\le\ph(y)$ for all $x\in A$ and $y\in B$. \end{cor} \begin{proof} If $A=\emptyset$ or $B=\emptyset$, then we can set $\ph:=0\in V^*$ and choose $r\in\R$ arbitrarily since the statement $\forall x\in A:\forall y\in B:\ph(x)<r\le\ph(y)$ holds vacuously. WLOG $A\ne\emptyset$ and $B\ne\emptyset$. Choose by \ref{septopvs} a continuous $K$-linear function $\ph\colon V\to\R$ with $\ph\ne0$ and $\ph(x)\le\ph(y)$ for all $x\in A$ and $y\in B$. The set $\{\ph(x)\mid x\in A\}\subseteq\R$ is nonempty because of $A\ne\emptyset$ and bounded from above because of $B\ne\emptyset$. It thus possesses a supremum $r\in\R$. We have $\ph(x)\le r\le\ph(y)$ for all $x\in A$ and $y\in B$. Let $x\in A$. It remains to show that $\ph(x)<r$. For this purpose, choose $z\in V$ such that $\ph(z)>0$ (possible since $\ph\ne0$). The function $K\to V,\ \la\mapsto x+\la z$ is continuous, and since $A$ is open and contains $x$, the preimage of $A$ under this function is a neighborhood of $0$. In particular, there is an $\ep\in K_{>0}$ such that $x+\ep z\in A$. Then $\ph(x)<\ph(x)+\ep\ph(z)=\ph(x+\ep z)\le r$. \end{proof} \begin{lem}\label{stayinside} Let $V$ be a topological $K$-vector space, $A\subseteq V$ be convex, $x\in A^\circ$, $y\in A$ and $\la\in K$ with $0<\la\le1$. Then $\la x+(1-\la)y\in A^\circ$. \end{lem} \begin{proof} Choose an open neighborhood $B$ of $x$ with $B\subseteq A$. Setting $z:=\la x+(1-\la)y$, the set $C:=z+\la(B-x)$ is by \ref{transhomeo} an open neighborhood of $z$. It is enough to show $C\subseteq A$. To this end, let $c\in C$. Because of $B=x+\frac1\la(C-z)$, we then have $b:=x+\frac1\la(c-z)\in B\subseteq A$. Consequently, $c=\la(b-x)+z=\la b-\la x+\la x+(1-\la)y=\la b+(1-\la)y\in A$. \end{proof} \begin{pro}\label{intcloconvex} Suppose $V$ is a topological $K$-vector space and $A\subseteq V$ is convex. Then both $A^\circ$ and $\overline A$ are convex. 
\end{pro} \begin{proof} It follows immediately from Lemma \ref{stayinside} that $A^\circ$ is convex. In order to show that $\overline A$ is convex, fix $x,y\in\overline A$ and $\la\in[0,1]_K$. To show: $z:=\la x+(1-\la)y\in\overline A$. Let $B$ be a neighborhood of $z$ in $V$. To show: $B\cap A\ne\emptyset$. Since \[V\times V\to V,\ (x',y')\mapsto \la x'+(1-\la)y'\] is continuous, there are neighborhoods $C$ of $x$ and $D$ of $y$ in $V$ such that \[\la C+(1-\la)D\subseteq B.\] Due to $x,y\in\overline A$, we find $x_0\in C\cap A$ and $y_0\in D\cap A$. Then \[z_0:=\la x_0+(1-\la)y_0\in B\cap A.\] \end{proof} \begin{df}\label{defbalanced} Let $V$ be a $K$-vector space and $A\subseteq V$ a set. Then $A$ is called \emph{balanced} if $\la x\in A$ for all $x\in A$ and $\la\in K$ with $|\la|\le1$. \end{df} \begin{pro}\label{balanced} Let $V$ be a topological $K$-vector space and let $B$ be a neighborhood of $0$ in $V$. Then there is a balanced open neighborhood $A$ of $0$ in $V$ with $A\subseteq B$. \end{pro} \begin{proof} WLOG $B$ is open [$\to$ \ref{neighbor}]. Since the scalar multiplication is continuous by \ref{deftopvs}, there is an $\ep\in K_{>0}$ and an open neighborhood $C$ of $0$ in $V$ such that \[\forall\la\in(-\ep,\ep)_K:\forall x\in C:\la x\in B.\] By \ref{transhomeo}, each $\la C$ with $\la\in K^\times$ is open. Thus $A:=\bigcup_{\la\in(-\ep,\ep)_K\setminus\{0\}}\la C\subseteq B$ is also open. Moreover, we have $0\in A$ and $A$ is obviously balanced. \end{proof} \begin{exo}\label{comclo} In a Hausdorff space [$\to$ \ref{dfcomp}], every compact subset [$\to$ \ref{compactsubspace}] is closed. \end{exo} \begin{df}\label{defvstop} Let $V$ be a $K$-vector space. We call a topology on $V$ making $V$ into a topological vector space [$\to$ \ref{deftopvs}] a \emph{vector space topology} on $V$. \end{df} \begin{rem} Up to now, the condition $\overline{\{0\}}=\{0\}$ from Definition \ref{deftopvs} has not been used anywhere. From now on, however, we will need it. 
We will show that each finite-dimensional $\R$-vector space carries exactly one vector space topology. This would be false without the condition $\overline{\{0\}}=\{0\}$, since otherwise the trivial topology [$\to$ \ref{topreminder}(e)] would also be a vector space topology. \end{rem} \begin{pro}\label{topvshausdorff} Every topological $K$-vector space is a Hausdorff space. \end{pro} \begin{proof} Let $V$ be a topological $K$-vector space [$\to$ \ref{deftopvs}] and let $x,y\in V$ with $x\ne y$. Set $z:=x-y\ne0$. By Definition \ref{deftopvs}, $\{0\}$ and thus by \ref{transhomeo} also $\{z\}$ is closed. Hence $V\setminus\{z\}$ is an open neighborhood of $0$. Since $V\times V\to V,\ (v,w)\mapsto v-w$ is continuous by \ref{deftopvs}, there is a neighborhood $U$ of $0$ such that $U-U\subseteq V\setminus\{z\}$. Then $(x+U)\cap(y+U)=\emptyset$, for otherwise there would be $u,v\in U$ with $x+u=y+v$, from which it would follow that $z=x-y=v-u\in U-U$ $\lightning$. \end{proof} \begin{pro}\label{onetop} Let $V$ be a finite-dimensional $\R$-vector space. Then there is exactly one vector space topology \emph{[$\to$ \ref{defvstop}]} on $V$. \end{pro} \begin{proof} Choose a basis $v_1,\dots,v_n$ of $V$. Then $f\colon\R^n\to V,\ x\mapsto\sum_{i=1}^nx_iv_i$ is a vector space isomorphism. Since $\R^n$ carries a vector space topology [$\to$ \ref{topvsex}], $V$ therefore possesses one as well. This shows existence. For uniqueness, endow now $V$ with any vector space topology. We show that $f$ is a homeomorphism. By \ref{deftopvs}, $f$ is certainly continuous. It is enough to show that images of open sets under $f$ are again open. For this purpose, it suffices to show that for all open balls in $\R^n$ the image of their center is an interior point of their image, because if $A\subseteq\R^n$ is open then every point in $f(A)$ is the image of the center of an open ball contained in $A$. Due to \ref{transhomeo}, it suffices to consider the ball $B:=\{x\in\R^n\mid\|x\|<1\}$ around the origin of radius $1$. 
In order to show that $0\in(f(B))^\circ$, we take the sphere $S:=\{x\in\R^n\mid\|x\|=1\}$. By \ref{sacompreals}, $S$ is compact and hence, by \ref{quasicompactimage} and \ref{topvshausdorff}, so is $f(S)$. According to \ref{comclo}, $f(S)$ is thus closed in $V$. Hence $V\setminus f(S)$ is a neighborhood of $0$ in $V$. By \ref{balanced}, there is a balanced open neighborhood $A$ of $0$ in $V$ with $A\subseteq V\setminus f(S)$, i.e., $A\cap f(S)=\emptyset$. Since $f$ is continuous, $f^{-1}(A)$ is an open neighborhood of $0$ in $\R^n$. Due to the linearity of $f$, the set $f^{-1}(A)$ is balanced along with $A$ according to Definition \ref{defbalanced}. Since $f^{-1}(A)$ is balanced and disjoint from $S$, it follows that $f^{-1}(A)\subseteq B$: if some $x\in f^{-1}(A)$ satisfied $\|x\|\ge1$, then $\frac1{\|x\|}x\in f^{-1}(A)\cap S$ by balancedness $\lightning$. Thus $A\subseteq f(B)$ and hence $0\in(f(B))^\circ$ as desired. \end{proof} \section{Convex sets in locally convex vector spaces} \begin{df}\label{defloccon} A \emph{locally convex $K$-vector space} is a topological $K$-vector space $V$ [$\to$ \ref{deftopvs}] in which for every $x\in V$ each neighborhood of $x$ contains a convex neighborhood of $x$. \end{df} \begin{rem} Because of \ref{transhomeo}, one can restrict oneself in \ref{defloccon} to $x=0$. \end{rem} \begin{ex}{}[$\to$ \ref{topvsex}] \begin{enumerate}[(a)] \item If $I$ is a set, then $K^I$ is a locally convex $K$-vector space. \item If a $K$-vector space $V$ is endowed with the initial topology [$\to$ \ref{initialtop}] with respect to a family $(f_i)_{i\in I}$ of $K$-linear functions $f_i\colon V\to\R$ in such a way that to each $x\in V\setminus\{0\}$ there is some $i\in I$ with $f_i(x)\ne0$, then $V$ is a locally convex $K$-vector space. \item Every normed $\R$-vector space $V$, in particular every $\R$-vector space with scalar product, is a locally convex $\R$-vector space since \[\|\la x+(1-\la)y\|\le\la\|x\|+(1-\la)\|y\|<\la\ep+(1-\la)\ep=\ep\] for all $\ep>0$, $x,y\in V$ satisfying $\|x\|,\|y\|<\ep$ (``balls are convex'') and $\la\in[0,1]_\R$. 
\end{enumerate} \end{ex} \begin{lem}\label{closedminkowskisum} Suppose $V$ is a topological $K$-vector space, $A\subseteq V$ is closed and $C\subseteq V$ is compact. Then $A+C$ is closed. \end{lem} \begin{proof} Let $x\in V\setminus(A+C)$. We have to show that there is a neighborhood $U$ of the origin satisfying $(x+U)\cap(A+C)=\emptyset$. \medskip \textbf{Claim:} For each $y\in C$, there exists a neighborhood $U_y$ of the origin such that \[(x+U_y)\cap(y+U_y+A)=\emptyset.\] \smallskip \emph{Explanation.} Let $y\in C$. Then $V\times V\to V,\ (x',y')\mapsto x-y+x'-y'$ is continuous and $(0,0)$ lies in the preimage of the open set $V\setminus A$ since $x-y\notin A$ (otherwise we would have $x\in A+y\subseteq A+C$). Hence there is a neighborhood $U_y$ of the origin with \[x-y+U_y-U_y\subseteq V\setminus A,\] i.e., $(x+U_y-y-U_y)\cap A=\emptyset$. \smallskip\noindent By compactness of $C$, there is a finite subset $D\subseteq C$ such that $C\subseteq\bigcup_{y\in D}(y+U_y)$. Now $U:=\bigcap_{y\in D}U_y$ is a neighborhood of the origin. In order to show that \[(x+U)\cap(A+C)=\emptyset,\] it is enough to prove that $(x+U)\cap(A+y+U_y)=\emptyset$ for all $y\in D$. For this purpose, it suffices to show that $(x+U_y)\cap(y+U_y+A)=\emptyset$ for all $y\in D$. But this holds even for all $y\in C$ by the above claim. \end{proof} \begin{thm}[Separation theorem for locally convex vector spaces]{} \emph{[$\to$ \ref{septopvs}, \ref{septopvscor}]} \label{seplocconvs} Let $A$ and $C$ be convex sets in the locally convex $K$-vector space $V$ with $A\cap C=\emptyset$. Let $A$ be closed and $C$ be compact. Then there is a continuous $K$-linear function $\ph\colon V\to\R$ and $r,s\in\R$ with $\ph(x)\le r<s\le\ph(y)$ for all $x\in A$ and $y\in C$. \end{thm} \begin{proof} If $A=\emptyset$ or $C=\emptyset$, we can take $\ph:=0\in V^*$ and choose $r,s\in\R$ arbitrarily since the statement $\forall x\in A:\forall y\in C:\ph(x)\le r<s\le\ph(y)$ holds vacuously. WLOG $A\ne\emptyset$ and $C\ne\emptyset$. 
By \ref{closedminkowskisum}, the set $B:=C-A$ is closed, and by hypothesis we have $0\notin B$. Since $V$ is locally convex, there is in view of \ref{intcloconvex} a convex open set $D\subseteq V$ with $0\in D$ and $D\cap B=\emptyset$. Since $B$ is also convex, there is by Corollary \ref{septopvscor} a continuous $K$-linear function $\ph\colon V\to\R$ and an $\ep\in\R$ such that $\ph(x)<\ep\le\ph(y)$ for all $x\in D$ and $y\in B$. In particular, $\ep>\ph(0)=0$ and $\ph(x)+\ep\le\ph(y)$ for all $x\in A$ and $y\in C$ (since $y-x\in B$). Because of $A\ne\emptyset\ne C$, $r:=\sup\{\ph(x)\mid x\in A\}\in\R$ and $s:=\inf\{\ph(y)\mid y\in C\}\in\R$ exist. Moreover, we have $r+\ep\le s$, i.e., $r<s$. \end{proof} \begin{df}\label{dfface} Let $V$ be a $K$-vector space and $A\subseteq V$ be convex. Then a convex set $F\subseteq A$ is called a \emph{face} of $A$ if for all $x,y\in A$ with $\frac{x+y}2\in F$, we also have $x,y\in F$. \end{df} \begin{pro}\label{extremepointface} Suppose $V$ is a $K$-vector space, $A\subseteq V$ is convex and $x\in A$. Then $x$ is an extreme point of $A$ \emph{[$\to$ \ref{dfconv}]} if and only if $\{x\}$ is a face of $A$. \end{pro} \begin{proof} \begin{align*} &x\text{ is an extreme point of }A\\ \overset{\ref{dfconv}}\iff&\nexists y,z\in A:\left(y\ne z\et x=\frac{y+z}2\right)\\ \iff&\forall y,z\in A:\left(x=\frac{y+z}2\implies y=z\right)\\ \iff&\forall y,z\in A:\left(x=\frac{y+z}2\implies y=z=x\right)\\ \iff&\forall y,z\in A:\left(\frac{y+z}2\in\{x\}\implies y,z\in\{x\}\right) \end{align*} \end{proof} \begin{pro}{}\emph{[$\to$ \ref{extremeexo}]}\label{ratiootherthan2} Suppose $V$ is a $K$-vector space, $A\subseteq V$ is convex, $F\subseteq A$ is convex and $\la\in(0,1)_K$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $F$ is a face of $A$ \item $\forall x,y\in A:(\la x+(1-\la)y\in F\implies x,y\in F)$ \end{enumerate} \end{pro} \begin{proof} \underline{(b)$\implies$(a)} is an easy exercise. 
\smallskip \underline{(a)$\implies$(b)} \quad Assume that $F$ is a face of $A$ but there are $x,y\in A$ such that \[\la x+(1-\la)y\in F\] and WLOG (otherwise permute $x$ and $y$ and replace $\la$ by $1-\la$) $x\notin F$. If $\la<\frac12$, one can then replace $(x,\la)$ by $(x',\la')$ where $x':=\frac{x+y}2$ and $\la':=2\la\in(0,1)_K$ because we then have $x'\in A\setminus F$ (since $A$ is convex and $F$ is a face of $A$), $\la'\in(0,1)_K$ and \[\la'x'+(1-\la')y=2\la\frac{x+y}2+(1-2\la)y=\la x+(1-\la)y\in F.\] By iterating this finitely many times if need be, one can suppose $\la\ge\frac12$. Then \[z:=x+2((\la x+(1-\la)y)-x)=(2\la-1)x+2(1-\la)y\in A\] since $2\la-1\ge0$, $2(1-\la)\ge0$ and $(2\la-1)+2(1-\la)=1$. Now \[\frac{x+z}2=x+(\la x+(1-\la)y)-x=\la x+(1-\la)y\in F\] and thus $x,z\in F$ since $F$ is a face of $A$ $\lightning$. \end{proof} \begin{ex}\label{exforfaces} \begin{enumerate}[(a)] \item If $V$ is a $K$-vector space and $A\subseteq V$ is convex, then both $\emptyset$ and $A$ are faces of $A$. We call these the \emph{trivial} faces of $A$. \item The faces of $[0,1]^2\subseteq\R^2$ are $\emptyset$, $\{(0,0)\}$, $\{(0,1)\}$, $\{(1,0)\}$, $\{(1,1)\}$, $\{0\}\times[0,1]$, $\{1\}\times[0,1]$, $[0,1]\times\{0\}$, $[0,1]\times\{1\}$, $[0,1]^2$. \item The faces of $B:=\{x\in\R^2\mid\|x\|\le1\}$ are $\emptyset$, $\{x\}$ ($x\in B\setminus B^\circ$) and $B$. \item $\{x\in\R^2\mid\|x\|<1\}$ has only the trivial faces. \end{enumerate} \end{ex} \begin{dfpro}\label{defexposed} Let $V$ be a $K$-vector space and suppose $A\subseteq V$ is convex. We call a set $F\subseteq A$ an \emph{exposed face} of $A$ if there is a $K$-linear function $\ph\colon V\to\R$ such that \[F=\{x\in A\mid\forall y\in A:\ph(x)\le\ph(y)\}.\] Every exposed face of $A$ is a face of $A$. \end{dfpro} \begin{proof} Let $F$ be an exposed face of $A$. To show: $F$ is a face of $A$. It is easy to show that $F$ is convex. 
Choose a $K$-linear $\ph\colon V\to\R$ such that $F=\{x\in A\mid\forall y\in A:\ph(x)\le\ph(y)\}$. Let $x,y\in A$ such that $\frac{x+y}2\in F$. To show: $x,y\in F$. It is obviously enough to show that $\ph(x)=\ph\left(\frac{x+y}2\right)=\ph(y)$. We have that \[\ph(x)+\ph(y)=\ph\left(\frac{x+y}2\right)+\ph\left(\frac{x+y}2\right) \overset{x,y\in A}{\underset{\frac{x+y}2\in F}\le}\ph(x)+\ph(y) \] where the inequality would be strict if one of $\ph(x)$ and $\ph(y)$ were different from $\ph\left(\frac{x+y}2\right)$. \end{proof} \begin{ex}{}[$\to$ \ref{exforfaces}] \begin{enumerate}[(a)] \item If $V$ is a $K$-vector space and $A\subseteq V$ is convex, then $A$ is an exposed face of $A$ while $\emptyset$ might be exposed [$\to$ \ref{exforfaces}(d)] or non-exposed [$\to$ \ref{exforfaces}(c)]. \item All faces of $[0,1]^2\subseteq\R^2$ are exposed except $\emptyset$. \item All faces of $\{x\in\R^2\mid\|x\|\le1\}$ are exposed except $\emptyset$. \item All faces of $\{x\in\R^2\mid\|x\|<1\}$ are exposed. \item $((-\infty,0]\times[0,\infty))\cup\{(x,y)\in\R_{\ge0}^2\mid y\ge x^2\}$ has exactly one nonexposed face, namely $\{0\}$. \end{enumerate} \end{ex} \begin{pro}\label{faceofface} Suppose $V$ is a $K$-vector space, $A\subseteq V$ is convex, $F$ is a face of $A$ and $G\subseteq F$. Then the following holds: \[\text{$G$ is a face of $F$}\iff\text{$G$ is a face of $A$}.\] \end{pro} \begin{proof} ``$\Longrightarrow$'' Let $G$ be a face of $F$ and let $x,y\in A$ with $\frac{x+y}2\in G$. To show: $x,y\in G$. Because of $\frac{x+y}2\in G\subseteq F$, we have $x,y\in F$. Since $G$ is a face of $F$, it follows that $x,y\in G$. \smallskip ``$\Longleftarrow$'' Let $G$ be a face of $A$ and let $x,y\in F$ with $\frac{x+y}2\in G$. Because of $x,y\in F\subseteq A$, we then have $x,y\in G$. \end{proof} \begin{rem} Every intersection of faces (except possibly $\bigcap\emptyset:=V$) of a convex set in a $K$-vector space $V$ is obviously again a face of this convex set. 
\end{rem} \begin{lem}\label{hasextreme} Let $C\ne\emptyset$ be a compact convex subset of a locally convex $K$-vector space $V$. Then $C$ possesses an extreme point. \end{lem} \begin{proof} Every intersection of a nonempty chain of closed nonempty faces of $C$ is again a closed nonempty face of $C$. Indeed, if the intersection were empty, then a finite subintersection would be empty by the compactness of $C$ [$\to$ \ref{dfcomp}], which is impossible since we are dealing with a chain of nonempty sets. By Zorn's lemma, there is thus a minimal closed nonempty face $F$ of $C$. Being a closed subset of a compact set, $F$ is compact itself [$\to$ \ref{compactsubspace}]. By \ref{extremepointface}, it suffices to show that $\#F=1$. Due to $F\ne\emptyset$, it suffices to exclude $\#F\ge2$. Assume there are $x,y\in F$ with $x\ne y$. By \ref{seplocconvs}, there is a continuous $K$-linear function $\ph\colon V\to\R$ such that $\ph(x)<\ph(y)$. Then \[G:=\{v\in F\mid\forall w\in F:\ph(v)\le\ph(w)\}\] is nonempty by \ref{takeson} because $F$ is compact and nonempty and $\ph$ is continuous. According to \ref{defexposed}, $G$ is an (exposed) face of $F$. Hence $G$ is a face of $C$ by \ref{faceofface}. From the continuity of $\ph$, we deduce that \[G=F\cap\bigcap_{w\in F}\ph^{-1}((-\infty,\ph(w)])\] is closed. Moreover, $y\notin G$ since $\ph(y)\not\le\ph(x)$. Therefore $G$ is a closed nonempty face of $C$ that is properly contained in $F$, contradicting the minimality of $F$. \end{proof} \begin{notation} Let $A$ be a convex set in a $K$-vector space $V$. Then we write \[\extr A\] for the set of extreme points of $A$. \end{notation} \begin{thm}{}\emph{[$\to$ \ref{takeson}]}\label{takesonextr} Suppose $C$ is a nonempty compact convex subset of a locally convex $K$-vector space $V$ and $\ph\colon V\to\R$ is a continuous $K$-linear function. Then $\ph$ attains on $C$ a minimum and a maximum in an extreme point of $C$. In other words, there are $x,y\in\extr C$ such that \[\ph(x)\le\ph(z)\le\ph(y)\] for all $z\in C$. 
\end{thm} \begin{proof} Since one could replace $\ph$ by $-\ph$, we show only the existence of $x\in\extr C$ such that $\ph(x)\le\ph(z)$ for all $z\in C$. By \ref{takeson}, there is $y\in C$ such that $\ph(y)\le\ph(z)$ for all $z\in C$, i.e., the exposed face [$\to$ \ref{defexposed}] \[F:=\{y\in C\mid\forall z\in C:\ph(y)\le\ph(z)\}\] of $C$ is nonempty. Since $\ph$ is continuous, \[F=C\cap\bigcap_{z\in C}\ph^{-1}((-\infty,\ph(z)]_\R)\] is a closed subset of the compact set $C$ and hence compact itself. By Lemma \ref{hasextreme}, $F$ possesses an extreme point $x$ which is by \ref{faceofface} and \ref{extremepointface} also an extreme point of $C$. \end{proof} \begin{cor}[Krein–Milman theorem]\label{kreinmilman} Suppose $C$ is a compact convex subset of a locally convex $K$-vector space $V$. Then $C$ is the closure of the convex hull of the set of its extreme points, i.e., \[C=\overline{\conv(\extr C)}.\] \end{cor} \begin{proof} ``$\supseteq$'' From $\extr C\subseteq C$ and the convexity of $C$, we get $\conv(\extr C)\subseteq C$. Being a compact subset of a Hausdorff space [$\to$ \ref{topvshausdorff}], $C$ is closed [$\to$ \ref{comclo}] which entails even $\overline{\conv(\extr C)}\subseteq C$. ``$\subseteq$'' WLOG $C\ne\emptyset$. $A:=\overline{\conv(\extr C)}$ is closed, nonempty by Lemma \ref{hasextreme} and convex by \ref{intcloconvex}. We show $V\setminus A\subseteq V\setminus C$. Let $x\in V\setminus A$. By the separation theorem for locally convex vector spaces \ref{seplocconvs}, there is a continuous $K$-linear function $\ph\colon V\to\R$ such that $\ph(y)<\ph(x)$ for all $y\in A$. By \ref{takesonextr}, there is $y\in\extr C\subseteq A$ satisfying $\ph(z)\le\ph(y)$ for all $z\in C$. It follows that $\ph(z)\le\ph(y)<\ph(x)$ for all $z\in C$. Therefore $x\notin C$. \end{proof} \begin{df} Let $V$ be a $K$-vector space, $C\subseteq V$ and $u\in V$. 
We call an extreme point [$\to$ \ref{dfconv}] of the state space $S(V,C,u)$ [$\to$ \ref{defstate}, \ref{statespacetop}] a \emph{pure state} of $(V,C,u)$. \end{df} \begin{thm}[Strengthening of \ref{conemembershipunit}] \label{conemembershipunitextreme} Suppose $u$ is a unit for the cone $C$ in the $K$-vector space $V$ and let $x\in V$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\forall\ph\in\extr S(V,C,u):\ph(x)>0$ \item $\forall\ph\in S(V,C,u):\ph(x)>0$ \item $\exists N\in\N:x\in\frac1Nu+C$ \item $x$ is a unit for $C$. \end{enumerate} \end{thm} \begin{proof} \underline{(b)$\iff$(c)$\iff$(d)} is \ref{conemembershipunit}. \smallskip \underline{(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(b)} \quad WLOG $S(V,C,u)\ne\emptyset$. It suffices to show that the function \[S(V,C,u)\to\R,\ \ph\mapsto\ph(x)\] attains a minimum in an extreme point of $S(V,C,u)$. But this follows from \ref{takesonextr} because $S(V,C,u)$ is a nonempty compact [$\to$ \ref{statespacecompact}] convex [$\to$ \ref{statespacetop}] subset of the locally convex [$\to$ \ref{topvsex}(a)] $\R$-vector space $\R^V$ and \[\R^V\to\R,\ \ph\mapsto\ph(x)\] is continuous [$\to$ \ref{subspaceproductspace}(b)]. \end{proof} \begin{cor}[Strengthening of \ref{conemembership}]\label{conemembershipextr} Suppose $u$ is a unit for the cone $C$ in the $K$-vector space $V$ and let $x\in V$. If $\ph(x)>0$ for all pure states $\ph$ of $(V,C,u)$, then $x\in C$. \end{cor} \section{Convex sets in finite-dimensional vector spaces} \begin{lem}\label{findimunit} Let $C$ be a cone in a finite-dimensional $K$-vector space $V$. Then $U:=C-C$ is a subspace of $V$ and $C$ possesses \emph{in $U$} a unit [$\to$ \ref{defunit}]. \end{lem} \begin{proof} On the basis of Definition \ref{defcone}, it is easy to see that $U$ is a subspace of $V$. Choose a basis $u_1,\dots,u_m$ of $U$ and write $u_i=v_i-w_i$ with $v_i,w_i\in C$ for $i\in\{1,\dots,m\}$. 
We show that $u:=\sum_{i=1}^mv_i+\sum_{i=1}^mw_i\in C$ is a unit for $C$ \emph{in $U$}. For this purpose, fix $v\in U$. To show: $\exists N\in\N:Nu+v\in C$. Write $v=\sum_{i=1}^m\la_iu_i$ with $\la_i\in K$ for $i\in\{1,\dots,m\}$. Choose $N\in\N$ with $N\ge|\la_i|$ for $i\in\{1,\dots,m\}$. Then \[Nu+v=\sum_{i=1}^m\underbrace{(N+\la_i)}_{\ge0}v_i+\sum_{i=1}^m \underbrace{(N-\la_i)}_{\ge0}w_i\in C.\] \end{proof} \begin{thm}[Finite-dimensional isolation theorem]\emph{[$\to$ \ref{isolation}]}\label{findimisolation} Let $C$ be a proper cone in the finite-dimensional $K$-vector space $V$. Then there is a $K$-linear function $\ph\colon V\to\R$ with $\ph\ne0$ and $\ph(C)\subseteq\R_{\ge0}$. \end{thm} \begin{proof} $U:=C-C$ is by \ref{findimunit} a subspace of $V$. \medskip \textbf{Case 1:} $C=U$ \smallskip Then $U$ is a proper subspace of $V$ and by linear algebra it is easy to see that there is some $\ph\in V^*\setminus\{0\}$ such that $\ph(U)=\{0\}$. \medskip \textbf{Case 2:} $C\ne U$ \smallskip By \ref{findimunit}, there exists a unit $u$ for $C$ in $U$. The isolation theorem \ref{isolation} provides us with some $\ph_0\in S(U,C,u)$. Extend $\ph_0$ by linear algebra to a $K$-linear function $\ph\colon V\to\R$. \end{proof} \begin{rem} Example \ref{badexample} shows that one cannot omit the hypothesis $\dim V<\infty$ in \ref{findimunit} and \ref{findimisolation}. \end{rem} \begin{thm}[Separation theorem for finite-dimensional vector spaces] \emph{[$\to$ \ref{septopvs}]}\label{sepfindimvs} Let $A$ and $B$ be convex sets in the finite-dimensional $K$-vector space $V$ such that $A\ne\emptyset\ne B$ and $A\cap B=\emptyset$. Then there is a $K$-linear function $\ph\colon V\to\R$ such that $\ph\ne0$ and $\ph(x)\le\ph(y)$ for all $x\in A$ and $y\in B$. \end{thm} \begin{proof} Completely analogous to the proof of \ref{septopvs}. \end{proof} \begin{df}{}[$\to$ \ref{dfconv}]\label{affinehull} Let $V$ be a $K$-vector space and $A\subseteq V$. 
Then $A$ is called an \emph{affine subspace of $V$} if $\forall x,y\in A:\forall\la\in K:\la x+(1-\la)y\in A$. The smallest affine subspace of $V$ containing $A$ is obviously \[\aff A:=\left\{\sum_{i=1}^m\la_ix_i\mid m\in\N,\la_i\in K,x_i\in A,\sum_{i=1}^m\la_i=1\right\}, \] called the affine subspace generated by $A$ or the \emph{affine hull} of $A$. \end{df} \begin{dfpro}\label{dfdimaff} Let $V$ be a $K$-vector space. Then for each $A\subseteq V$, the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $A$ is a nonempty affine subspace of $V$. \item There is an $x\in V$ and a subspace $U$ of $V$ such that $A=x+U$. \end{enumerate} If these conditions are met, then $U$ as in \emph{(b)} is uniquely determined and is called the \emph{direction} of $A$. Then $\dim A:=\dim U\in\N_0\cup\{\infty\}$ is the \emph{dimension} of $A$. We set $\dim\emptyset:=-1$. \end{dfpro} \begin{proof} \underline{(b)$\implies$(a)} is easy. \smallskip \underline{(a)$\implies$(b)}\quad Suppose that (a) holds. Choose $x\in A$. Set $U:=A-x$. To show: $U+U\subseteq U$ and $KU\subseteq U$. Let $u,v\in U$ and $\la\in K$. To show: $u+v\in U$ and $\la u\in U$. Choose $a,b\in A$ such that $u=a-x$ and $v=b-x$. Then $u+v=(1a+1b+(-1)x)-x\in(\aff A)-x=A-x=U$ and $\la u=(\la a+(1-\la)x)-x\in(\aff A)-x=A-x=U$. \smallskip \underline{Uniqueness claim}\quad Whenever $x,y\in V$ and $U$ and $W$ are subspaces of $V$ satisfying $x+U=y+W$, then $x-y\in W$ and thus $U=(y-x)+W=W$. \end{proof} \begin{df}\label{dfdimconv} Let $V$ be a $K$-vector space and $A\subseteq V$ be convex. Then \[\dim A:=\dim\aff A\in\{-1\}\cup\N_0\cup\{\infty\}\] is the \emph{dimension} of $A$. \end{df} \begin{pro}{}\emph{[$\to$ \ref{ratiootherthan2}]}\label{face234} Suppose that $V$ is a $K$-vector space, $A\subseteq V$ is convex and $F$ is a face of $A$. Let $m\in\N$, $x_1,\dots,x_m\in A$ and $\la_1,\dots,\la_m\in K_{>0}$ such that $\sum_{i=1}^m\la_i=1$ and $\sum_{i=1}^m\la_ix_i\in F$. Then $x_1,\dots,x_m\in F$. 
\end{pro} \begin{proof} WLOG $m\ge2$. Let $i\in\{1,\dots,m\}$. To show: $x_i\in F$. WLOG $i=1$. We have $0<\la_1<1$ and $y:=\sum_{i=2}^m\frac{\la_i}{1-\la_1}x_i \in A$ since $\sum_{i=2}^m\frac{\la_i}{1-\la_1}=\frac{1-\la_1}{1-\la_1}=1$. From $\sum_{i=1}^m\la_ix_i=\la_1x_1+(1-\la_1)y$, it thus follows by \ref{ratiootherthan2} that $x_1,y\in F$. \end{proof} \begin{pro}\label{faceaff} Suppose $V$ is a $K$-vector space, $A\subseteq V$ is convex and $F$ is a face of $A$. Then $F=(\aff F)\cap A$. \end{pro} \begin{proof} ``$\subseteq$'' is trivial. ``$\supseteq$'' Let $x\in(\aff F)\cap A$. To show: $x\in F$. Write $x=\sum_{i=1}^m\la_iy_i-\sum_{j=1}^n\mu_jz_j$ with $m,n\in\N_0$, $\la_i,\mu_j\in K_{>0}$, $y_i,z_j\in F$ and $\sum_{i=1}^m\la_i-\sum_{j=1}^n\mu_j=1$. Setting $\la:=\sum_{i=1}^m\la_i$ and $\mu:=\sum_{j=1}^n\mu_j$, so that $\la=1+\mu$, it follows that $\frac1{1+\mu}x+\sum_{j=1}^n\frac{\mu_j}{1+\mu}z_j= \sum_{i=1}^m\frac{\la_i}\la y_i\in F$ and thus $x\in F$ by \ref{face234}. \end{proof} \begin{pro}\label{dimensiondrop} Let $V$ be a finite-dimensional $K$-vector space. \begin{enumerate}[\normalfont(a)] \item If $A$ and $B$ are affine subspaces of $V$ with $A\subseteq B$, then \[A=B\iff\dim A=\dim B.\] \item If $F$ and $G$ are faces of the convex set $A\subseteq V$ with $F\subseteq G$, then \[F=G\iff\dim F=\dim G.\] \end{enumerate} \end{pro} \begin{proof} (a) follows from \ref{dfdimaff} by linear algebra and (b) follows from this by \ref{dfdimconv} and \ref{faceaff}. \end{proof} \begin{rem}\label{topsvs} Suppose $V$ is a topological $K$-vector space, $K'$ is a subfield of $K$ and $V'$ is a $K'$-vector subspace of the $K'$-vector space $V$. Then $V$ induces on $V'$ a vector space topology. This is easy to see since $V\times V$ induces on $V'\times V'$ the product topology of the induced topologies and $K\times V$ induces on $K'\times V'$ the product topology of the induced topologies. \end{rem} \begin{dfpro}\label{dfrelint} Let $A$ be a convex set in the topological $K$-vector space $V$. 
The interior of $A$ in the topological space $\aff A$ (endowed with the topology induced by $V$) is called the \emph{relative interior} of $A$, denoted by $\relint A$. This is a convex set. \end{dfpro} \begin{proof} WLOG $A\ne\emptyset$. Write $\aff A=x+U$ for some $x\in V$ and some subspace $U$ of $V$ [$\to$ \ref{dfdimaff}]. WLOG $x=0$ [$\to$ \ref{transhomeo}]. WLOG $U=V$ [$\to$ \ref{topsvs}]. Then $\relint A=A^\circ$ is convex by \ref{intcloconvex}. \end{proof} \begin{rem}\label{topfindimrem} Let $V$ be a finite-dimensional $K$-vector space. Choose a basis $v_1,\dots,v_n$ of $V$. Then $f\colon K^n\to V,\ x\mapsto\sum_{i=1}^nx_iv_i$ is a vector space isomorphism that is continuous with respect to every vector space topology of $V$ [$\to$ \ref{defvstop}] and that is a homeomorphism with respect to exactly one vector space topology of $V$ [$\to$ \ref{topvsex}(a)]. Consequently, there is a finest [$\to$ \ref{topreminder}(c)] vector space topology on $V$ (cf. also \ref{onetop}). With respect to this topology on $V$, we have for all $A\subseteq V$ that \[\text{$A$ open in $V$}\iff\text{$f^{-1}(A)$ open in $K^n$},\] independently of the choice of the basis $v_1,\dots,v_n$. It is easy to see that $K^n$ carries the initial topology with respect to all \alal{linear forms on $K^n$}{$K$-linear functions $K^n\to\R$}. The finest vector space topology on $V$ is therefore also the initial topology [$\to$ \ref{initialtop}] with respect to all \alal{linear forms on $V$}{$K$-linear functions $V\to\R$}. If $U$ is a subspace of $V$, then the finest vector space topology of $V$ induces on $U$ again the finest vector space topology because one can extend every linear form on $U$ to one on $V$. \end{rem} \begin{ex} Since $\R$ is a topological $\R$-vector space and thus also a topological $\Q$-vector space, also $\Q+\sqrt 2\Q\subseteq\R$ is a topological $\Q$-vector space with respect to the induced topology [$\to$ \ref{topsvs}]. 
One sees easily that \[\Q+\sqrt2\Q\to\Q,\ x+\sqrt2y\mapsto x\qquad(x,y\in\Q)\] is not continuous. \end{ex} \begin{lem}\label{reddim} Let $A\subseteq K^n$ be convex. Then $A^\circ=\emptyset\implies\aff A\ne K^n$. \end{lem} \begin{proof} Suppose that $\aff A=K^n$. We show that $A^\circ\ne\emptyset$. Denote by $e_1,\dots,e_n$ the standard basis of $K^n$ and set $e_0:=0\in K^n$. Write $e_i=\sum_{j=1}^m\la_{ij}x_{ij}$ with $m\in\N$, $\la_{ij}\in K$, $x_{ij}\in A$ and $\sum_{j=1}^m\la_{ij}=1$ for $i\in\{0,\dots,n\}$. We show that \[x:=\sum_{i=0}^n\sum_{j=1}^m\frac1{m(n+1)}x_{ij}\in A^\circ.\] Since $A$ is convex, we have $x\in A$ and it suffices to show that for each $i\in\{1,\dots,n\}$, there is an $\ep\in K_{>0}$ such that $x\pm\ep e_i\in A$ (cf. also \ref{unitinteriorpoint}). For this purpose, fix $i\in\{1,\dots,n\}$. From $e_i=e_i-0=e_i-e_0= \sum_{j=1}^m\la_{ij}x_{ij}+\sum_{j=1}^m(-\la_{0j})x_{0j}$ and $\sum_{j=1}^m\la_{ij}-\sum_{j=1}^m\la_{0j}=1-1=0$, the existence of such an $\ep>0$ easily follows. \end{proof} \begin{thm}\label{relintclosure} Suppose $V$ is a finite-dimensional topological $K$-vector space that is equipped with the finest vector space topology \emph{[$\to$ \ref{topfindimrem}]} and $A\subseteq V$ is convex. Then $A\subseteq\overline{\relint A}$. \end{thm} \begin{proof} WLOG $A\ne\emptyset$. Write $\aff A=x+U$ for some $x\in V$ and some subspace $U$ of $V$. Obviously, $\aff(A-x)\overset{\ref{affinehull}}= (\aff A)-x=U$, $\relint(A-x)\overset{\ref{transhomeo}}= (\relint A)-x$ and $\overline{\relint(A-x)}= \overline{\relint A}-x$. Replacing $A$ by $A-x$, we can thus suppose that $\aff A=U$. Using the last remark from \ref{topfindimrem}, we can therefore suppose that $\aff A=V$ (otherwise replace $V$ by $U$). According to \ref{topfindimrem}, we can reduce to the case where $V=K^n$ (with the product topology). We have to show $A\subseteq \overline{A^\circ}$. By Lemma \ref{reddim}, $A^\circ\ne\emptyset$; choose $y\in A^\circ$. Let $x\in A$. To show: $x\in\overline{A^\circ}$.
By \ref{stayinside}, we have $(1-\la)x+\la y\in A^\circ$ for all $\la\in(0,1]_K$. Obviously, we have $x\in\overline{\{(1-\la)x+\la y\mid\la\in(0,1]_K\}}\subseteq\overline{A^\circ}$. \end{proof} \begin{thm}\label{hasaface} Let $V$ be a finite-dimensional $K$-vector space that is equipped with the finest vector space topology \emph{[$\to$ \ref{topfindimrem}]}. Let $A\subseteq V$ be convex and $x\in A\setminus\relint A$. Then there is an exposed face $F$ of $A$ satisfying $\dim F<\dim A$ and $x\in F$. \end{thm} \begin{proof} Similarly to the proof of \ref{relintclosure}, we reduce again to the case $\aff A=V$. Note that $A^\circ$ is convex [$\to$ \ref{dfrelint}] and nonempty [$\to$~\ref{relintclosure}]. The separation theorem \ref{sepfindimvs} yields a $K$-linear function $\ph\colon V\to\R$ with $\ph\ne0$ and $\ph(x)\le\ph(y)$ for all $y\in A^\circ$. Since $\ph$ is continuous [$\to$ \ref{topfindimrem}], the set $\ph^{-1}([\ph(x),\infty)_\R)$ is closed; since it contains $A^\circ$, it also contains $\overline{A^\circ}$ and hence by \ref{relintclosure} also $A$, i.e., $\ph(x)\le\ph(y)$ for all $y\in A$. In other words, $x$ is an element of the exposed face [$\to$~\ref{defexposed}] $F:=\{z\in A\mid\forall y\in A:\ph(z)\le\ph(y)\}$ of $A$. By \ref{dimensiondrop}, it is enough to show $F\ne A$. If we had $F=A$, then $\ph$ would be constant (equal to $\ph(x)$) on $A$ and hence also on $\aff A=V$; but a linear function that is constant on $V$ is zero, i.e., $\ph=0$ $\lightning$. \end{proof} \begin{rem}\label{tacitly} If we use topological notions in a finite-dimensional $\R$-vector space $V$, then we tacitly furnish $V$ with its unique vector space topology [$\to$ \ref{onetop}] which is the initial topology with respect to the family of all linear forms on $V$ [$\to$ \ref{topfindimrem}]. \end{rem} \begin{thm}[Minkowski's theorem] \emph{[$\to$ \ref{polyisconvexhull}, \ref{kreinmilman}]}\label{minkowski} Let $V$ be a finite-dimensional $\R$-vector space. Let $A\subseteq V$ be convex and compact.
Then \[A=\conv(\extr A).\] \end{thm} \begin{proof} Since $A$ is closed [$\to$ \ref{comclo}], all faces of $A$ are also closed [$\to$ \ref{faceaff}, \ref{dfdimaff}, \ref{transhomeo}] and therefore compact [$\to$ \ref{compactsubspace}]. By induction, we can thus assume that $F=\conv(\extr F)$ for all faces $F$ of $A$ that satisfy $\dim F<\dim A$. ``$\supseteq$'' is trivial. ``$\subseteq$'' Let $x\in A$. To show: $x\in\conv(\extr A)$. WLOG $x\notin\extr A$. Choose $y,z\in A$ with $y\ne z$ and $x\in\conv\{y,z\}$. Since $A$ is compact, the line through $y$ and $z$ meets $A$ in a segment; replacing $y$ and $z$ by the endpoints of this segment, we may suppose WLOG that $y,z\in A\setminus\relint A$ (an endpoint lying in $\relint A$ could be pushed further out within $A$). By \ref{hasaface}, there are (exposed) faces $F$ and $G$ of $A$ such that $\dim F<\dim A$, $\dim G<\dim A$, $y\in F$ and $z\in G$. From \ref{extremepointface} and \ref{faceofface}, we get $\extr F\subseteq\extr A$ and $\extr G\subseteq\extr A$. Consequently, $y\in F=\conv(\extr F)\subseteq \conv(\extr A)$ and $z\in G=\conv(\extr G)\subseteq \conv(\extr A)$ where the equalities follow from the induction hypothesis. Finally, $x\in\conv\{y,z\}\subseteq\conv(\extr A)$. \end{proof} \begin{thm}\label{fundthmlin} Let $(K,\le)$ be an arbitrary ordered field, let $V$ be a $K$-vector space with $n:=\dim V<\infty$. Suppose that $E\subseteq V$ is a finite set that generates $V$ and $x\in V$. Then exactly one of the following conditions occurs: \begin{enumerate}[\normalfont(a)] \item $x$ is a nonnegative linear combination of elements of $E$ that form a basis of $V$. \item There is some $\ell\in V^*$ with $\ell(E)\subseteq K_{\ge0}$ and $\ell(x)<0$ and a linearly independent set $F\subseteq E\cap\ker\ell$ with $\#F=n-1$. \end{enumerate} \end{thm} \begin{proof} It is easy to see that (a) and (b) cannot both occur at the same time. Indeed, from (a) it follows that $x\in\sum_{v\in E}K_{\ge0}v$ which is not compatible with (b) because if $\ell\in V^*$ with $\ell(E)\subseteq K_{\ge0}$, then $\ell(x)\in\ell\left(\sum_{v\in E}K_{\ge0}v\right)\subseteq K_{\ge0}$.
\medskip We choose an order $\le$ on $E$ [$\to$ \ref{ordered-set}] and a basis $B\subseteq E$ of $V$. We show that the following algorithm always terminates: \begin{enumerate}[(1)] \item Write $x=\sum_{v\in B}\la_vv$ with $\la_v\in K$ for all $v\in B$. \item If $\la_v\ge0$ for all $v\in B$, then stop since (a) occurs. \item $u:=\min\{v\in B\mid\la_v<0\}$ \item Define $\ell\in V^*$ by $\ell(u)=1$ and $\ell(v)=0$ for all $v\in B\setminus\{u\}$ (so that $\ell(x)=\la_u<0$). \item If $\ell(E)\subseteq K_{\ge0}$, then stop since (b) occurs. \item $w:=\min\{v\in E\mid\ell(v)<0\}$ \item Replace $B$ by the new basis $(B\setminus\{u\})\cup\{w\}$ and go to (1). \end{enumerate} Observe first of all that in Step (7) the set $(B\setminus\{u\})\cup\{w\}$ is again a basis since $B$ is one. Indeed, $w$ does not lie in the subspace generated by $B\setminus\{u\}$ since $\ell$, by its choice in (4), vanishes on this subspace while it does not vanish on $w$ by the choice of $w$ in (6). \medskip To show that this algorithm terminates, we assume that it does not. Denote then by $(B_k,u_k,w_k,\ell_k)$ the value of $(B,u,w,\ell)$ after Step (6) in the $k$-th iteration of the loop. We first argue that the existence of $s,t\in\N$ with \[(*)\qquad u_t\le u_s=w_t\text{ and } \{v\in B_s\mid v>u_s\}=\{v\in B_t\mid v>u_s\}\] causes a contradiction. For this purpose, let $x=\sum_{v\in B_s}\la_vv$ with $\la_v\in K$ for all $v\in B_s$ be the representation of $x$ from the $s$-th iteration of the loop. We will apply $\ell_t$ to this representation of $x$. To this end, observe the following: \begin{itemize} \item For all $v\in B_s$ with $v<u_s=w_t$, we have by the assignment to $u_s$ in (3) that $\la_v\ge0$. \item For all $v\in E\supseteq B_s$ with $v<u_s=w_t$, we have by the assignment to $w_t$ in (6) that $\ell_t(v)\ge0$.
\item $\la_{u_s}<0$ according to (3) \item $\ell_t(u_s)=\ell_t(w_t)<0$ according to (6) \item For all $v\in B_s$ with $v>u_s=w_t$, we have $\ell_t(v)=0$ since for these $v$ we have by $(*)$ that $v\in B_t\setminus\{u_t\}$ (using that $u_t\le u_s$) and thus $\ell_t(v)=0$ by (4). \end{itemize} It thus follows that \[ 0\overset{(4)}>\ell_t(x)= \underbrace{ \sum_{\substack{v\in B_s\\v<u_s}}\underbrace{\la_v}_{\ge0}\underbrace{\ell_t(v)}_{\ge0} }_{\ge0}+ \underbrace{ \underbrace{\la_{u_s}}_{<0}\underbrace{\ell_t(u_s)}_{<0} }_{>0} +\underbrace{ \sum_{\substack{v\in B_s\\v>u_s}}\la_v\underbrace{\ell_t(v)}_{=0} }_{=0} >0 \] which is the desired contradiction. \medskip Finally, we show the existence of $s,t\in\N$ with $(*)$. For clarity, we first generalize the algorithm by looking at the following more abstract version of it: \medskip Suppose $E$ is a finite set, $\le$ an order on $E$ and $B$ a subset of $E$. \begin{enumerate}[(1')] \item Choose $u\in B$. \item Choose $w\in E\setminus B$. \item Replace $B$ by $(B\setminus\{u\})\cup\{w\}$ and go to (1'). \end{enumerate} Denote by $(B_k,u_k,w_k)$ the value of $(B,u,w)$ after Step (2') in the $k$-th iteration of the algorithm. We show the existence of $s,t\in\N$ satisfying $(*)$. Since $E$ is finite, the power set of $E$ is also finite. Consequently, there are $p,q\in\N$ such that $p<q$ and $B_p=B_q$. Because of (3'), it then obviously holds that $\{u_s\mid p\le s<q\} =\{w_t\mid p\le t<q\}$. Set $v_0:=\max\{u_s\mid p\le s<q\}= \max\{w_t\mid p\le t<q\}$. Then \[\{v\in B_s\mid v>v_0\}=\{v\in B_t\mid v>v_0\}\] for all $s,t\in\{p,\dots,q-1\}$. Choose $s,t\in\{p,\dots,q-1\}$ with $u_s=v_0=w_t$ (note that $s<t$ or $t<s$ but certainly not $s=t$). Now $(*)$ holds. \end{proof} \begin{cor}\emph{[$\to$ \ref{sospsd2}]}\label{linhomnichtnegativstellensatz} Let $(K,\le)$ be an arbitrary ordered field. 
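The loop (1)--(7) in the preceding proof is exactly linear-programming pivoting with a smallest-index (Bland-type) anti-cycling rule, and the termination argument just given is the classical one. As an illustration only (the function names and the restriction to $K=\Q$ with exact rational arithmetic are my choices, not part of the text), a minimal executable sketch:

```python
from fractions import Fraction

def solve(cols, b):
    """Solve sum_j t_j * cols[j] = b over Q by Gaussian elimination."""
    n = len(b)
    M = [[Fraction(cols[j][i]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [a / M[c][c] for a in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                fac = M[r][c]
                M[r] = [a - fac * d for a, d in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cone_dichotomy(E, basis, x):
    """Steps (1)-(7): E is a list of generators of Q^n (ordered by index),
    `basis` a list of indices into E forming a basis, x the input vector.
    Returns ('a', coeffs) when x is a nonnegative combination of a basis
    contained in E, or ('b', ell) with ell >= 0 on E and ell(x) < 0.
    Termination is what the smallest-index choices in (3) and (6) buy us."""
    n = len(x)
    while True:
        lam = solve([E[j] for j in basis], x)                  # step (1)
        if all(l >= 0 for l in lam):                           # step (2)
            return ('a', dict(zip(basis, lam)))
        u = min(j for j, l in zip(basis, lam) if l < 0)        # step (3)
        # step (4): ell is 1 on E[u], 0 on the other basis vectors;
        # its coordinate vector solves the transposed linear system
        tgt = [Fraction(1 if j == u else 0) for j in basis]
        ell = solve([[E[j][i] for j in basis] for i in range(n)], tgt)
        if all(dot(ell, v) >= 0 for v in E):                   # step (5)
            return ('b', ell)
        w = min(j for j, v in enumerate(E) if dot(ell, v) < 0)  # step (6)
        basis = sorted([k for k in basis if k != u] + [w])     # step (7)
```

For instance, on $E=\{(1,0),(0,1),(-1,-1)\}$ with starting basis $\{(1,0),(-1,-1)\}$ and $x=(1,1)$, the loop pivots once and then certifies case (a).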
Let $m,n\in\N_0$, $f,\ell_1,\dots,\ell_m\in K[X_1,\dots,X_n]$ be linear forms \emph{[$\to$ \ref{longremi}(a)]} and set \[S:=\{x\in K^n\mid\ell_1(x)\ge0,\ldots,\ell_m(x)\ge0\}.\] Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f\ge0$ on $S$ \item $f\in K_{\ge0}\ell_1+\ldots+K_{\ge0}\ell_m$ \item There are $i_1,\dots,i_s\in\{1,\dots,m\}$ such that $\ell_{i_1},\dots, \ell_{i_s}$ are linearly independent and \[f\in K_{\ge0}\ell_{i_1}+\ldots+K_{\ge0}\ell_{i_s}.\] \end{enumerate} \end{cor} \begin{proof} \underline{(c)$\implies$(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(c)}\quad Suppose that (a) holds. \medskip \textbf{Claim:} $f\in V:=K\ell_1+\ldots+K\ell_m$ \smallskip \emph{Explanation.} Assume $f\notin V$. Then there is some $\ph\in(KX_1+\ldots+KX_n)^*$ such that $\ph(\ell_1)=\ldots=\ph(\ell_m)=0$ and $\ph(f)=-1$. Set $x:=(\ph(X_1),\ldots,\ph(X_n))\in K^n$. Then $\ell_i(x)=\ph(\ell_i)=0$ for all $i\in\{1,\dots,m\}$. Hence $x\in S$ and $f(x)=\ph(f)=-1<0$. $\lightning$ \smallskip\noindent Now apply \ref{fundthmlin} to $V$ and $E:=\{\ell_1,\dots,\ell_m\}$ (taking account of the claim). Then it suffices to show that for all $\ph\in V^*$ with $\ph(E)\subseteq K_{\ge0}$ also $\ph(f)\ge0$ holds. Thus let $\ph\in V^*$ with $\ph(E)\subseteq K_{\ge0}$. Choose $\ps\in(KX_1+\ldots+KX_n)^*$ with $\ps|_V=\ph$. Set $x:=(\ps(X_1),\ldots,\ps(X_n))\in K^n$. Then $\ell_i(x)=\ps(\ell_i)=\ph(\ell_i)\ge0$ for all $i\in\{1,\dots,m\}$ and thus $x\in S$ and $\ph(f)=\ps(f)=f(x)\ge0$. \end{proof} \begin{cor}[Linear Nichtnegativstellensatz] \emph{[$\to$ \ref{son1s}, \ref{nichtnegativstellensatz}]} \label{linnichtnegativstellensatz} Let $(K,\le)$ be an arbitrary ordered field. 
Let $m,n\in\N_0$, $f,\ell_1,\dots,\ell_m\in K[X_1,\dots,X_n]_1$ \emph{[$\to$ \ref{degnot}]} and suppose \[S:=\{x\in K^n\mid\ell_1(x)\ge0,\ldots,\ell_m(x)\ge0\}\ne\emptyset.\] Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f\ge0$ on $S$ \item $f\in K_{\ge0}+K_{\ge0}\ell_1+\ldots+K_{\ge0}\ell_m$ \item There are $i_1,\dots,i_s\in\{0,\dots,m\}$ such that $\ell_{i_1},\ldots,\ell_{i_s}$ are linearly independent and \[f\in K_{\ge0}\ell_{i_1}+\ldots+K_{\ge0}\ell_{i_s}\] where $\ell_0:=1$. \end{enumerate} \end{cor} \begin{proof} \underline{(c)$\implies$(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(c)}\quad Suppose that (a) holds. Due to \ref{linhomnichtnegativstellensatz}, it suffices to show that $f^*\ge0$ on $S^*:=\{x=(x_0,\dots,x_n)\in K^{n+1}\mid x_0\ge0,\ell_1^*(x)\ge0,\ldots, \ell_m^*(x)\ge0\}$ [$\to$ \ref{introhom}(c)(d), \ref{homdehom}(e)]. To this end, let $(x_0,\dots,x_n)\in S^*$. \medskip \textbf{Case 1:} $x_0>0$ \smallskip Then $\left(1,\frac{x_1}{x_0},\dots,\frac{x_n}{x_0}\right)= \frac1{x_0}(x_0,\dots,x_n)\in S^*$ and hence $\left(\frac{x_1}{x_0},\ldots,\frac{x_n}{x_0}\right)\in S$. From (a), it follows that $f^*\left(1,\frac{x_1}{x_0},\dots,\frac{x_n}{x_0}\right)= f\left(\frac{x_1}{x_0},\dots,\frac{x_n}{x_0}\right)\ge0$ and hence also $f^*(x_0,\ldots,x_n)=x_0f^*\left(1,\frac{x_1}{x_0},\dots,\frac{x_n}{x_0}\right)\ge0$ \medskip \textbf{Case 2:} $x_0=0$ \smallskip Then $(\lf(\ell_i))(x_1,\dots,x_n)\overset{\text{\ref{homdehom}(a)}}= \ell_i^*(x_0,\dots,x_n)\ge0$ and therefore \[(\lf(\ell_i))(\la x_1,\dots,\la x_n)\ge0\] for all $i\in\{1,\dots,m\}$ and $\la\in K_{\ge0}$. Because of $S\ne\emptyset$, we can choose $(y_1,\dots,y_n)\in S$. Then $\ell_i(y_1+\la x_1,\dots,y_n+\la x_n)\ge0$ for all $\la\in K_{\ge0}$ and $i\in\{1,\dots,m\}$. Due to (a), we have thus $f(y_1+\la x_1,\dots,y_n+\la x_n)\ge0$ for all $\la\in K_{\ge0}$. It follows that $(\lf(f))(x_1,\dots,x_n)\ge0$. 
Hence \[f^*(x_0,\dots,x_n)\overset{\text{\ref{homdehom}(a)}}=(\lf(f))(x_1,\dots,x_n)\ge0.\] \end{proof} \begin{df}\label{defray} Let $V$ be a $K$-vector space and $C\subseteq V$ a cone [$\to$ \ref{defcone}]. \begin{enumerate}[(a)] \item The sets of the form $K_{\ge0}x$ with $x\in C\setminus\{0\}$ are called the \emph{rays} of $C$. \item Rays of $C$ that are at the same time faces [$\to$ \ref{dfface}] of $C$ are called \emph{extreme rays} of $C$. \item A set $B\subseteq C\setminus\{0\}$ is called a \emph{base} of $C$, if for each $x\in C\setminus\{0\}$ there is exactly one $\la\in K_{>0}$ such that $\la x\in B$, i.e., if every ray of $C$ hits the set $B$ in exactly one point. \end{enumerate} \end{df} \begin{pro}\label{extrbase} Suppose $V$ is a $K$-vector space and $C\subseteq V$ is a cone with convex base $B$. Then for all $x\in V$, \[\text{$K_{\ge0}x$ is an extreme ray of $C$}\iff\exists\la\in K_{>0}:\la x\in\extr B.\] \end{pro} \begin{proof} Let $x\in V$. \smallskip ``$\Longrightarrow$'' Let $K_{\ge0}x$ be an extreme ray of $C$. Then it follows that $x\in C\setminus\{0\}$. Hence there is exactly one $\la\in K_{>0}$ such that $\la x\in B$. We claim $\la x\in\extr B$. For this purpose, consider $y,z\in B$ with $\frac{y+z}2=\la x$. To show: $y=z=\la x$. From $y,z\in C$ and $\frac{y+z}2\in K_{\ge0}x$, we deduce $y,z\in K_{\ge0}x$. Due to $y,z\in B$ and $0\notin B$, we get $y,z\in K_{>0}x$. Again from $y,z\in B$ and the uniqueness of $\la$, we get $y=\la x=z$. \smallskip ``$\Longleftarrow$'' WLOG let $x\in\extr B$. To show: $K_{\ge0}x$ is an extreme ray of $C$. Since $x\in B\subseteq C\setminus\{0\}$, $K_{\ge0}x$ is a ray of $C$. Let $y,z\in C$ with $\frac{y+z}2\in K_{\ge0}x$. To show: $y,z\in K_{\ge0}x$. WLOG $y\ne0$ and $z\ne0$. If we had $y+z=0$, then one could easily show $0\in B\ \lightning$. WLOG $y+z=x$. Choose $\mu,\nu\in K_{>0}$ such that $\mu y,\nu z\in B$. 
Then \[ x=y+z=(\mu^{-1}+\nu^{-1}) \underbrace{\left(\frac{\mu^{-1}}{\mu^{-1}+\nu^{-1}}(\mu y)+\frac{\nu^{-1}}{\mu^{-1}+\nu^{-1}} (\nu z)\right)}_{\in B} \] and thus $\mu^{-1}+\nu^{-1}=1$. Since $x=\mu^{-1}(\mu y)+\nu^{-1}(\nu z)$, $\mu y,\nu z\in B$ and $x\in\extr B$, we have $\mu y=x=\nu z$, hence $y=\mu^{-1}x\in K_{\ge0}x$ and $z=\nu^{-1}x\in K_{\ge0}x$. \end{proof} \begin{thm}{}\emph{[$\to$ \ref{minkowski}]}\label{conicminkowski} Every convex cone with compact \emph{[$\to$ \ref{tacitly}]} convex base in a finite-dimensional $\R$-vector space is the sum of its extreme rays. \end{thm} \begin{proof} Suppose $V$ is a finite-dimensional $\R$-vector space and $C\subseteq V$ is a convex cone with compact convex base $B$. Let $x\in C$. To show: $x$ is a sum of elements of extreme rays of $C$. WLOG $x\in B$. By Minkowski's theorem \ref{minkowski}, we have $x\in\conv(\extr B)$, say $x=\sum_{i=1}^m\la_ix_i$ with $m\in\N$, $\la_1,\dots,\la_m\in\R_{\ge0}$, $\la_1+\ldots+\la_m=1$ and $x_i\in\extr B$. According to \ref{extrbase}, $\R_{\ge0}x_i$ is an extreme ray of $C$ for every $i\in\{1,\dots,m\}$. \end{proof} \begin{pro}\label{compactbaseclosed} Every convex cone with compact \emph{[$\to$ \ref{tacitly}]} base in a finite-dimensional $\R$-vector space is closed. \end{pro} \begin{proof} Suppose $V$ is a finite-dimensional $\R$-vector space and $C\subseteq V$ is a convex cone with compact base $B$. By Tikhonov's theorem \ref{tikhonov}, also $[0,1]_\R\times B$ is compact. From \ref{quasicompactimage} together with the continuity of the scalar multiplication, we obtain that \[A:=\{\la x\mid\la\in[0,1]_\R,x\in B\}\] is again compact. WLOG $V=\R^n$ by \ref{onetop}. WLOG $B\ne\emptyset$. Set \[d:=\min\{\|y\|\mid y\in B\}>0.\] In order to show that $C$ is closed, we now let $x\in V\setminus C$. WLOG $\|x\|<\frac d2$ [$\to$ \ref{transhomeo}]. Since $A$ is closed by \ref{comclo}, there is an $\ep>0$ such that $\{y\in V\mid\|x-y\|<\ep\}\cap A=\emptyset$. From $0\in A$, we get $\ep\le\|x\|<\frac d2$.
Then $\{y\in V\mid\|x-y\|<\ep\}\cap C=\emptyset$: if $y\in C\setminus A$, then there is $\la\in\R$ with $0<\la<1$ and $\la y\in B$, and it follows that $\|y\|=\frac1\la\|\la y\|\ge\frac1\la d>d$, which is incompatible with $\|x-y\|<\frac d2$ (the latter would imply $\|y\|\le\|y-x\|+\|x\|<\frac d2+\frac d2=d$). This shows that $C$ is closed. \end{proof} \section{Application to ternary quartics} A \emph{ternary quartic} is a $4$-form (also called quartic form [$\to$ \ref{quintic}]) in $3$ variables. \begin{lem}\label{densarg} Let $(K,\le)$ be an ordered field and $G\in SK^{m\times m}$. Then $G$ is psd [$\to$ \ref{psdpd}] if and only if $x^TGx\ge0$ for all $x\in(K^\times)^m$. \end{lem} \begin{proof} Suppose $x^TGx\ge0$ for all $x\in(K^\times)^m$. Let $z\in K^m$. We have to show that \[z^TGz\ge0.\] Choose an arbitrary $y\in(K^\times)^m$. Then $z+\la y\in(K^\times)^m$ and therefore \[z^TGz+2\la y^TGz+\la^2y^TGy=(z+\la y)^TG(z+\la y)\ge0\] for all but finitely many $\la\in K$. Hence, by \ref{sgnbounds}(b) applied to the polynomial \[z^TGz+2y^TGzT+y^TGyT^2\in K[T],\] it follows that $z^TGz\ge0$. \end{proof} \begin{lem}\label{h1888a} Let $K$ be a Euclidean field and $f\in K[X,Y,Z]$ a $4$-form. Suppose that there are linearly independent $v_1,v_2,v_3\in K^3$ such that $f(v_1)=f(v_2)=f(v_3)=0$. Then the following are equivalent: \begin{enumerate}[(a)] \item $f$ is psd [$\to$ \ref{psdpd}(a)] \item $f\in\sum K[X,Y,Z]^2$ \item $f$ is a sum of $3$ squares of quadratic forms in $K[X,Y,Z]$. \end{enumerate} \end{lem} \begin{proof} Denote by $e_1,e_2,e_3$ the standard basis of $K^3$. Set $A:=\begin{pmatrix}v_1&v_2&v_3\end{pmatrix} \in\GL_3(K)$ and $g:=f\left(A\left(\begin{smallmatrix}X\\Y\\Z \end{smallmatrix}\right)\right)\in K[X,Y,Z]$. Then $g$ is a $4$-form satisfying $g(e_1)=g(e_2)=g(e_3)=0$.
Since $A$ defines a permutation (even a vector space isomorphism) \[K^3\to K^3,\ \left(\begin{smallmatrix}x\\y\\z \end{smallmatrix}\right)\mapsto A \left(\begin{smallmatrix}x\\y\\z \end{smallmatrix}\right)\] on $K^3$, we have that \[\text{$f$ is psd}\iff\text{$g$ is psd}.\] Since, on the other hand, $A$ induces a ring automorphism \[K[X,Y,Z]\to K[X,Y,Z],\ h\mapsto h\left(A \left(\begin{smallmatrix}X\\Y\\Z \end{smallmatrix}\right)\right),\] we obtain \[f\in\sum K[X,Y,Z]^2\iff g\in\sum K[X,Y,Z]^2.\] Since this ring automorphism permutes the quadratic forms in $K[X,Y,Z]$, we have that \[\text{(c)}\iff\text{$g$ is a sum of $3$ squares of quadratic forms}.\] Replacing $f$ by $g$, we can henceforth suppose that $v_1=e_1$, $v_2=e_2$ and $v_3=e_3$. \smallskip \underline{(c)$\implies$(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(c)} \quad It is easy to see that each polynomial $h\in K[T]$ with $h\ge0$ on $K$ and $h(0)=0$ lies in the ideal $(T^2)$ [$\to$ \ref{sgnbounds}(b)]. Suppose now that (a) holds. The vanishing at $0$ and the nonnegativity of the polynomials \[f(1,T,0),\ f(1,0,T),\ f(T,1,0),\ f(0,1,T),\ f(0,T,1),\ f(T,0,1)\in K[T]\] therefore force the coefficients of \[X^4,\quad X^3Y,\quad X^3Z,\quad Y^4,\quad Y^3X,\quad Y^3Z, \quad Z^4,\quad Z^3X,\quad Z^3Y\] in $f$ to vanish. For example, the first polynomial forces the coefficients of $X^4$ and $X^3Y$ to vanish, and the second one the coefficients of again $X^4$ and of $X^3Z$. It follows that \begin{align*} N(f)&\subseteq\conv\{(2,2,0),\xcancel{(2,1,1)},(2,0,2),(0,2,2), \xcancel{(1,2,1)},\xcancel{(1,1,2)}\}\text{, i.e.,}\\ \frac12N(f)&\subseteq\conv\{(1,1,0),(1,0,1),(0,1,1)\}\quad\text{and thus}\\ \frac12N(f)\cap\N_0^3&\subseteq\{(1,1,0),(1,0,1),(0,1,1)\}.
\end{align*} By the Gram matrix method \ref{gram}, we have to show that there is a \emph{psd} matrix $G\in SK^{3\times3}$ satisfying \[(*)\qquad f=\begin{pmatrix}XY&XZ&YZ\end{pmatrix}G \begin{pmatrix}XY\\XZ\\YZ\end{pmatrix}.\] Since every monomial occurring in $f$ is a product of two entries of $\begin{pmatrix}XY&XZ&YZ\end{pmatrix}$, there is certainly a $G\in SK^{3\times3}$ satisfying $(*)$ (actually one sees easily that there is a \emph{unique} such $G$ which does however not play an immediate role). But from $(*)$ it follows \emph{automatically} that $G$ is \emph{psd} since $f$ is psd. In order to see this, let $v\in K^3$. We have to show that $v^TGv\ge0$. Using \ref{densarg}, one reduces to the case $v\in(K^\times)^3$. Then set $\la:=v_1 v_2v_3$ and $x:=\frac1{v_3}$, $y:=\frac1{v_2}$ and $z:=\frac1{v_1}$. Now $v=\la\begin{pmatrix}xy\\xz\\yz\end{pmatrix}$ and therefore \[v^TGv=\la^2\begin{pmatrix}xy&xz&yz\end{pmatrix}G \begin{pmatrix}xy\\xz\\yz\end{pmatrix}\overset{(*)}=\la^2f(x,y,z)\ge0.\] \end{proof} \begin{lem}\label{h1888b} Let $K$ be a Euclidean field and $f\in K[X,Y,Z]$ a $4$-form. Suppose there are linearly independent $v_1,v_2,v_3\in K^3$ satisfying $f(v_1+Tv_2)\in(T^3)$ and $f(v_3)=0$. Then the following are equivalent: \begin{enumerate}[(a)] \item $f$ is psd \item $f\in\sum K[X,Y,Z]^2$ \item $f$ is a sum of $3$ squares of quadratic forms in $K[X,Y,Z]$. \end{enumerate} \end{lem} \begin{proof} Almost exactly as in the proof of \ref{h1888a}, one sees that one can suppose WLOG $v_1=e_1$, $v_2=e_2$ and $v_3=e_3$. \smallskip \underline{(c)$\implies$(b)$\implies$(a)} is again trivial. \smallskip \underline{(a)$\implies$(c)} \quad One sees easily that a polynomial $g\in K[T]$ with $g\ge0$ on $K$ and $g\in(T^{2k-1})$ lies in $(T^{2k})$ for $k\in\N$. Suppose now that (a) holds. 
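The heart of the preceding psd-ness argument for $G$ — reused in this proof with $\la:=v_2^2v_3$ — is a substitution writing $v$ as $\la$ times the monomial vector evaluated at suitable $x,y,z$. Both identities can be checked mechanically over $\Q$ (a sketch; the function names are mine, not from the text):

```python
from fractions import Fraction

def factor_xy_xz_yz(v):
    """Substitution for the basis (XY, XZ, YZ): for v in (Q^x)^3 write
    v = lam*(x*y, x*z, y*z) with lam = v1*v2*v3, x = 1/v3, y = 1/v2,
    z = 1/v1.  Returns lam and the reassembled vector."""
    v1, v2, v3 = map(Fraction, v)
    lam, x, y, z = v1 * v2 * v3, 1 / v3, 1 / v2, 1 / v1
    return lam, (lam * x * y, lam * x * z, lam * y * z)

def factor_xz_yz_y2(v):
    """Substitution for the basis (XZ, YZ, Y^2): for v in Q x (Q^x)^2 write
    v = lam*(x*z, y*z, y*y) with lam = v2^2*v3, x = v1/v2^2, y = 1/v2,
    z = 1/v3.  Returns lam and the reassembled vector."""
    v1, v2, v3 = map(Fraction, v)
    lam, x, y, z = v2**2 * v3, v1 / v2**2, 1 / v2, 1 / v3
    return lam, (lam * x * z, lam * y * z, lam * y * y)
```

In both cases $v^TGv=\la^2f(x,y,z)$ with $\la\ne0$, so psd-ness of $f$ forces $v^TGv\ge0$ for these $v$, and \ref{densarg} upgrades this to all of $K^3$.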
By considering the polynomials \[f(1,T,0),\ f(1,0,T),\ f(0,T,1),\ f(T,0,1)\in K[T],\] one sees easily that the coefficients of \[X^4,\quad X^3Y,\quad X^2Y^2,\quad XY^3,\quad X^3Z,\quad Z^4, \quad Z^3Y,\quad Z^3X\] in $f$ must vanish. More precisely, the first polynomial is responsible for the first four of these coefficients, the second for the coefficients of $X^4$ (again) and $X^3Z$, the third for the coefficients of $Z^4$ and $Z^3Y$, and the last for the coefficients of $Z^4$ (again) and $Z^3X$. It follows that \begin{align*} N(f)&\subseteq\conv\{(2,0,2),(2,1,1),\xcancel{(1,1,2)},\xcancel{(1,2,1)}, (0,2,2),\xcancel{(0,3,1)},(0,4,0)\}\text{, i.e.,}\\ \frac12N(f)&\subseteq\conv\left\{(1,0,1),\left(1,\frac12,\frac12\right),(0,1,1),(0,2,0)\right\}\quad\text{and thus}\\ \frac12N(f)\cap\N_0^3&\subseteq\{(1,0,1),(0,1,1),(0,2,0)\}. \end{align*} By the Gram matrix method \ref{gram}, we have to show that there is a \emph{psd} matrix $G\in SK^{3\times3}$ satisfying \[(*)\qquad f=\begin{pmatrix}XZ&YZ&Y^2\end{pmatrix}G \begin{pmatrix}XZ\\YZ\\Y^2\end{pmatrix}.\] If the monomial $X^2YZ$ actually appeared in $f$, we would now run into a big problem that we did not have in the proof of \ref{h1888a} because this monomial is not a product of two entries of $\begin{pmatrix}XZ&YZ&Y^2\end{pmatrix}$. But this coefficient vanishes as one easily shows since for all $y\in K$, the leading coefficient of $f(X,y,1)\in K[X]$ is nonnegative since this polynomial is nonnegative on $K$. As in the proof of \ref{h1888a}, one sees again that there exists $G\in SK^{3\times3}$ satisfying $(*)$ (one could again see easily that $G$ is unique). From $(*)$ it follows \emph{automatically} that $G$ is psd since $f$ is psd. To see this, let $v\in K^3$. To show: $v^TGv\ge0$. Using \ref{densarg}, one reduces to the case $v\in K\times(K^\times)^2$. Then set $\la:=v_2^2v_3$ and $x:=\frac{v_1}{v_2^2}$, $y:=\frac1{v_2}$, $z:=\frac1{v_3}$. 
Now $v=\la\begin{pmatrix}xz\\yz\\y^2\end{pmatrix}$ and therefore \[v^TGv=\la^2\begin{pmatrix}xz&yz&y^2\end{pmatrix}G \begin{pmatrix}xz\\yz\\y^2\end{pmatrix}\overset{(*)}=\la^2f(x,y,z)\ge0.\] \end{proof} \begin{lem}\label{h1888c} Let $K$ be a Euclidean field and $f\in K[X,Y,Z]$ a $4$-form. Suppose there are linearly independent $v_1,v_2\in K^3$ satisfying $f(v_1+Tv_2)\in(T^3)$ and $f(v_2)=0$. Then the following are equivalent: \begin{enumerate}[(a)] \item $f$ is psd \item $f\in\sum K[X,Y,Z]^2$ \item $f$ is a sum of $3$ squares of quadratic forms in $K[X,Y,Z]$. \end{enumerate} \end{lem} \begin{proof} One can again suppose WLOG $v_1=e_1$ and $v_2=e_2$. \smallskip \underline{(c)$\implies$(b)$\implies$(a)} is again trivial. \smallskip \underline{(a)$\implies$(c)} \quad One uses again that a polynomial $g\in K[T]$ with $g\ge0$ on $K$ and $g\in(T^{2k-1})$ lies in $(T^{2k})$ for $k\in\N$. Suppose now that (a) holds. By considering the polynomials \[f(1,T,0),\ f(1,0,T),\ f(T,1,0),\ f(0,1,T)\in K[T],\] one sees easily that the coefficients of \[X^4,\quad X^3Y,\quad X^2Y^2,\quad XY^3,\quad X^3Z,\quad Y^4, \quad Y^3Z\] in $f$ must vanish. More precisely, the first polynomial is responsible for the first four of these coefficients, the second for the coefficients of $X^4$ (again) and $X^3Z$, the third for the coefficients of $Y^4$ and $XY^3$ (again), and the last for the coefficients of $Y^4$ (again) and $Y^3Z$. It follows that \begin{align*} N(f)&\subseteq\conv\{(2,0,2),(2,1,1),(1,2,1),(0,2,2),\xcancel{(0,1,3)},(0,0,4), \xcancel{(1,0,3)},\xcancel{(1,1,2)}\}\text{, i.e.,}\\ \frac12N(f)&\subseteq\conv\left\{(1,0,1),\left(1,\frac12,\frac12\right),\left(\frac12,1,\frac12\right),(0,1,1),(0,0,2)\right\}\quad\text{and thus}\\ &\frac12N(f)\cap\N_0^3\subseteq\{(1,0,1),(0,1,1),(0,0,2)\}. 
\end{align*} By the Gram matrix method \ref{gram}, we have to show that there is a \emph{psd} matrix $G\in SK^{3\times3}$ satisfying \[(*)\qquad f=\begin{pmatrix}XZ&YZ&Z^2\end{pmatrix}G \begin{pmatrix}XZ\\YZ\\Z^2\end{pmatrix}.\] If one of the monomials $X^2YZ$ and $XY^2Z$ actually appeared in $f$, we would have trouble since these monomials are not products of two entries of $\begin{pmatrix}XZ&YZ&Z^2\end{pmatrix}$. But these coefficients vanish as one easily shows since for all $x,y\in K$, the leading coefficients of $f(X,y,1)\in K[X]$ and $f(x,Y,1)\in K[Y]$ are nonnegative since these polynomials are nonnegative on $K$. One sees again that there exists $G\in SK^{3\times3}$ satisfying $(*)$ (one could again see easily that $G$ is unique). From $(*)$ it follows \emph{automatically} that $G$ is psd since $f$ is psd. To see this, let $v\in K^3$. To show: $v^TGv\ge0$. Using \ref{densarg}, one reduces to the case $v\in K\times(K^\times)^2$. Then set $\la:=v_2^2v_3$ and $x:=\frac{v_1}{v_2v_3}$, $y:=\frac1{v_3}$, $z:=\frac1{v_2}$. Now $v=\la\begin{pmatrix}xz\\yz\\z^2\end{pmatrix}$ and therefore \[v^TGv=\la^2\begin{pmatrix}xz&yz&z^2\end{pmatrix}G \begin{pmatrix}xz\\yz\\z^2\end{pmatrix}\overset{(*)}=\la^2f(x,y,z)\ge0.\] \end{proof} \begin{lem}\label{subtractlin4} Let $f\in\R[X,Y,Z]$ be a psd $4$-form that is not a sum of $3$ squares of quadratic forms in $\R[X,Y,Z]$ and that has two linearly independent zeros in $\R^3$. Then there is a linear form $\ell\in\R[X,Y,Z]\setminus\{0\}$ such that $f-\ell^4$ is psd. \end{lem} \begin{proof} By Lemma \ref{h1888a}, the zeros of $f$ span a two-dimensional subspace of $\R^3$. By a change of coordinates, we can thus achieve that $f(e_2)=f(e_3)=0$ and \[f>0\text{ on }\R^\times\times\R^2.\] We now claim that there is some $\ep\in\R_{>0}$ such that $f-\ep X^4$ is psd.
By homogeneity, it suffices to find $\ep>0$ such that $f-\ep X^4\ge0$ holds on the compact set \[[-1,1]_\R^3\setminus(-1,1)_\R^3.\] For this purpose, it is enough to show that for each two-dimensional face $F$ of the polytope $[-1,1]^3$ (i.e., for each side of the cube $[-1,1]^3$) there is some $\ep>0$ such that $f-\ep X^4\ge0$ on $F$. On the two sides $\{-1\}\times[-1,1]^2$ and $\{1\}\times[-1,1]^2$, $f$ is positive so that the existence of such an $\ep$ for them follows from \ref{takeson}. After a further change of coordinates, it suffices to consider, among the remaining four sides, just $[-1,1]^2\times\{1\}$. Consider therefore $\widetilde f:=f(X,Y,1)\in\R[X,Y]$ [$\to$ \ref{introhom}(d)]. From Lemma \ref{h1888c}, we deduce \[\frac{\partial^2\widetilde f}{\partial Y^2}(0,y)>0\] for all $y\in\R$ that satisfy $\widetilde f(0,y)=0$ (apply \ref{h1888c} to $f$, $v_1:=(0,y,1)$ and $v_2:=(0,1,0)$, taking into account that $\frac{\partial\widetilde f}{\partial Y}(0,y)=0$ due to $\widetilde f\ge0$ on $\R^2$). In the same way, Lemma \ref{h1888b} implies that for each $y\in\R$ satisfying $\widetilde f(0,y)=0$ all other second directional derivatives of $\widetilde f$ at $(0,y)$ are also positive. Altogether, $\widetilde f$ thus has only zeros in $\R^2$ at which the second derivative (i.e., the Hessian) is \emph{pd} (recall that all zeros of $\widetilde f$ lie on the $y$-axis). From analysis we know that each zero of the nonnegative polynomial $\widetilde f$ (in $\R^2$, or equivalently $\{0\}\times\R$ since all zeros lie on the $y$-axis) is then an isolated \emph{global} minimizer. Therefore \[\{(x,y)\in\R^2\mid\widetilde f(x,y)=0\}=\{(0,y_1),\dots,(0,y_m)\}\] for some $m\in\N$ and $y_1,\dots,y_m\in\R$ (one of the $y_i$ is $0$).
Since $-X^4$ as well as its first and second derivative vanishes on the $y$-axis (since $\frac{\partial X^4}{\partial X}=4X^3$, $\frac{\partial X^4}{\partial Y}=0$, $\frac{\partial^2X^4}{\partial X^2}=12X^2$, $\frac{\partial^2X^4}{\partial X\partial Y}=0$ and $\frac{\partial^2X^4}{\partial Y^2}=0$), every $(0,y_i)$ is a zero and an isolated \emph{local} minimizer of $\widetilde f-X^4$. Choose for each $i\in\{1,\dots,m\}$ an open neighborhood $U_i$ of $(0,y_i)$ such that $\widetilde f-X^4>0$ on $U_i\setminus\{(0,y_i)\}$. Then of course also $\widetilde f-\ep X^4>0$ on $U_i\setminus\{(0,y_i)\}$ for all $\ep\le1$ and $i\in\{1,\dots,m\}$. Since $\widetilde f$ is positive on the compact set $[-1,1]^2\setminus(U_1\cup\dots\cup U_m)$, there is an $\ep\in(0,1)_\R$ such that $\widetilde f-\ep X^4>0$ on $[-1,1]^2\setminus(U_1\cup\dots\cup U_m)$. Altogether, $\widetilde f-\ep X^4>0$ on $[-1,1]^2\setminus\{(0,y_1),\dots,(0,y_m)\}$ and $\widetilde f-\ep X^4=0$ on $\{(0,y_1),\dots,(0,y_m)\}$. \end{proof} \begin{lem}\label{extremehas2zeros} Suppose $f$ lies on an extreme ray [$\to$ \ref{defray}(b)] of the cone $P$ of the psd $4$-forms in $\R[X,Y,Z]$. Then there are linearly independent $v_1,v_2\in\R^3$ such that $f(v_1)=f(v_2)=0$. \end{lem} \begin{proof} If $f$ were pd, then the forms $f\pm\ep X^4$ would be psd for some $\ep>0$ (choose $\ep$ for instance as the minimum of $f$ on the compact unit sphere of $\R^3$) and because of $f=\frac12(f-\ep X^4)+\frac12(f+\ep X^4)$ it would follow that $f+\ep X^4\in\R_{\ge0}f$ and thus $f\in\R X^4$ $\lightning$. Hence $f$ has at least one zero $v_1\in\R^3\setminus\{0\}$. After a change of coordinates, we can without loss of generality achieve $v_1=e_1$. Since $(0,0)$ is a local (even a global) minimizer of $f(1,Y,Z)\in\R[Y,Z]$, we know from analysis that $\frac{\partial f}{\partial Y}(1,0,0)=\frac{\partial f}{\partial Z}(1,0,0)=0$. 
It follows that there are a quadratic form $a\in\R[Y,Z]$, a cubic form $b\in\R[Y,Z]$ and a quartic form $c\in\R[Y,Z]$ such that \[f=aX^2+bX+c.\] The quadratic form $a$ is positive semidefinite since either $a(y,z)=0$ or $a(y,z)$ is the leading coefficient of $f(X,y,z)\in\R[X]$ which is nonnegative on $\R$. We have to show that there exists $v_2\in\R\times(\R^2\setminus\{0\})$ such that $f(v_2)=0$. We make a case distinction by $\rk(a)$ [$\to$ \ref{longremi}(h)]. \medskip \textbf{Case 1:} $\rk(a)=0$ \smallskip Then $a=0$ and thus $b(y,z)=0$ for all $(y,z)\in\R^2$ from which $b=0$ follows by \ref{pol0}. If $f=c\in\R[Y,Z]$ were pd, then $c\pm\ep Y^4\in\R[Y,Z]$ would be psd for some $\ep>0$ and it would follow that $c+\ep Y^4\in\R_{\ge0}c$ and thus $c\in\R Y^4$ $\lightning$. Now choose $(y,z)\in\R^2\setminus\{0\}$ such that $c(y,z)=0$ and set $v_2:=(0,y,z)$. Then $f(v_2)=c(y,z)=0$ and $v_1$ and $v_2$ are linearly independent. \medskip \textbf{Case 2:} $\rk(a)=1$ \smallskip By a coordinate change in the $y$-$z$-plane WLOG $a=Y^2$. Then $b(0,z)=0$ for all $z\in\R$ and hence $b(0,Z)=0$, i.e., $b=Yb'$ for some $b'\in\R[Y,Z]$. It follows that $f=X^2Y^2+b'XY+c=\left(XY+\frac{b'}2\right)^2+\left(c-\frac{b'^2}4\right)$. For all $(y,z)\in\R^\times\times\R$, we find some $x\in\R$ satisfying $xy+\frac{b'(y,z)}2=0$ from which $c(y,z)-\frac{b'(y,z)^2}4=f(x,y,z)\ge0$ follows. Hence $c-\frac{b'^2}4\in P$. Aside from that, we have of course $(XY+\frac{b'}2)^2\in P$. Since $f$ lies on an extreme ray of $P$, it follows that $(XY+\frac{b'}2)^2\in\R_{\ge0}f$ (and $c-\frac{b'^2}4\in\R_{\ge0}f$). Now choose $(y,z)\in\R^\times\times\R$ arbitrary and with it $x\in\R$ such that $xy+\frac{b'(y,z)}2=0$. Then $f(x,y,z)=0$. \medskip \textbf{Case 3:} $\rk(a)=2$ \smallskip By a coordinate change in the $y$-$z$-plane WLOG $a=Y^2+Z^2$. Since $f$ is psd, also the $6$-form $4ac-b^2\in\R[Y,Z]$ is psd.
We have to show that there is $(y,z)\in\R^2\setminus\{0\}$ such that there exists $x\in\R$ satisfying $a(y,z)x^2+b(y,z)x+c(y,z)=0$. Because of $a(y,z)\ne0$ for all $(y,z)\in\R^2\setminus\{0\}$, this is equivalent to the existence of $(y,z)\in\R^2\setminus\{0\}$ with $(b^2-4ac)(y,z)\ge0$, i.e., $(4ac-b^2)(y,z)=0$ (since $4ac-b^2$ is psd). We thus have to show that $4ac-b^2$ is not pd. Aiming for a contradiction, assume that $4ac-b^2$ is pd. Then also the $6$-forms $4a(c\pm\ep Y^4)-b^2$ are psd for some $\ep>0$ (choose for example $4\ep$ as the minimum of $4ac-b^2$ on the compact unit sphere of $\R^2$ and take into account that $a=Y^2+Z^2$). It follows that $f\pm\ep Y^4\in P$. From $f=\frac12(f+\ep Y^4)+\frac12(f-\ep Y^4)$, we obtain $f+\ep Y^4\in\R_{\ge0}f$ and thus $f\in\R Y^4$ $\lightning$. \end{proof} \begin{lem}\label{base} Let $d,n\in\N_0$ and let $V$ be the $\R$-vector space of all $2d$-forms in $\R[\x]= \R[X_1,\dots,X_n]$ and $P\subseteq V$ be the cone of all psd forms in $V$. Then $P$ is a closed cone with compact convex base [$\to$ \ref{defray}(c)]. \end{lem} \begin{proof} As an intersection of closed sets, $P=\bigcap_{x\in\R^n}\{p\in V\mid p(x)\ge0\}$ is closed. By \ref{pol0}, \[\|p\|:=\sum_{x_1=-d}^d\ldots\sum_{x_n=-d}^d|p(x_1,\dots,x_n)|\qquad(p\in V)\] defines a norm on $V$. Then \[B:=\{p\in P\mid\|p\|=1\}=\left\{p\in P\mid\sum_{x_1=-d}^d\ldots \sum_{x_n=-d}^dp(x_1,\dots,x_n)=1\right\}\] is a compact convex base of $P$ (note that for $p\in P$ the absolute values in $\|p\|$ can be omitted). \end{proof} \begin{lem}\label{extremequadratic2} Let $V$ denote the $\R$-vector space of all $4$-forms in $\R[X,Y,Z]$ and $P\subseteq V$ the cone of all psd forms in $V$. Suppose that $f$ lies on an extreme ray of $P$. Then $f$ is a square of a quadratic form.
\end{lem} \begin{proof} It is enough to show that $f$ is a \emph{sum of} squares of quadratic forms for if $f=\sum_{i=1}^mq_i^2\ne0$ with $2$-forms $q_i\in\R[X,Y,Z]$, then \[f=\frac12\underbrace{2q_1^2}_{\in P}+\frac12 \underbrace{2\sum_{i=2}^mq_i^2}_{\in P}\] and thus $q_1^2\in\R_{\ge0}f$; numbering the $q_i$ such that $q_1\ne0$, this gives $f\in\R_{>0}q_1^2$, so $f$ is itself the square of a quadratic form. If there is a linear form $\ell\in\R[X,Y,Z]\setminus\{0\}$ such that $f-\ell^4$ is psd, then $\ell^4\in\R_{\ge0}f$ and $f=(c\ell^2)^2$ for some $c\in\R^\times$ so that we are done. From now on therefore suppose that such a linear form does not exist. From the Lemmata \ref{subtractlin4} and \ref{extremehas2zeros}, it follows now that $f$ is a sum of $3$ squares of $2$-forms in $\R[X,Y,Z]$. \end{proof} \begin{thm}\label{h1888} Let $R$ be a real closed field and $f\in R[X,Y,Z]$ a $4$-form. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f$ is psd. \item $f\in\sum R[X,Y,Z]^2$ \item $f$ is a sum of squares of quadratic forms in $R[X,Y,Z]$. \end{enumerate} \end{thm} \begin{proof} \underline{(c)$\implies$(b)$\implies$(a)} is trivial. \smallskip \underline{(a)$\implies$(c)} follows for $R=\R$ from \ref{extremequadratic2} together with the conic version \ref{conicminkowski} of Minkowski's theorem and \ref{base}. Using the Gram matrix method \ref{gram} (or \ref{fundthmlin}), one sees that the class of all real closed fields $R$ for which (a)$\implies$(c) holds for all $4$-forms $f\in R[X,Y,Z]$ is semialgebraic. By \ref{nothingorall}, every real closed field belongs to this class. In short, the statement thus follows from the case $R=\R$ by the Tarski principle \ref{tprinciple}. \end{proof} \begin{cor}[dehomogenized version of \ref{h1888}]\label{dh1888} Let $R$ be a real closed field and $f\in R[X,Y]_4$. Then \[\text{$f\ge0$ on $R^2$}\iff f\in\sum R[X,Y]^2.\] \end{cor} \begin{proof} ``$\Longleftarrow$'' is trivial. \smallskip ``$\Longrightarrow$'' Suppose $f\ge0$ on $R^2$. WLOG $f\notin R$. Then $\deg f=2$ or $\deg f=4$ by \ref{soslongrem}(b).
For $\deg f=2$, the claim follows from \ref{son1s}. Suppose therefore $\deg f=4$. Then $f^*:=Z^4f\left(\frac XZ,\frac YZ\right)\in R[X,Y,Z]$ is the homogenization of $f$ with respect to $Z$ [$\to$~\ref{introhom}(c)] and $f^*$ is psd by \ref{psdpsdhom}(a). Now \ref{h1888} yields $f^*\in\sum R[X,Y,Z]^2$. By dehomogenization [$\to$ \ref{introhom}(d), \ref{homdehom}], it follows that $f\in\sum R[X,Y]^2$. \end{proof} \begin{rem} A posteriori, we see now that in the situation of Lemma \ref{extremehas2zeros}, there actually exist even infinitely many pairwise linearly independent zeros of $f$. This follows from \ref{extremequadratic2}. Indeed, if $f=q^2$ with a $2$-form $q\in\R[X,Y,Z]$, then WLOG $\sg q\ge0$ (otherwise replace $q$ by $-q$) and thus $\sg q\in\{0,1,2,3\}$. If $\sg q=3$, then $q$ and thus $f$ is positive definite which is of course impossible by \ref{extremehas2zeros}. If $\sg q=2$, then after a linear change of coordinates we have WLOG $q=X^2+Y^2$ which again contradicts \ref{extremehas2zeros} since any zero of $q$ and hence of $f$ in $\R^3$ lies in $\{(0,0)\}\times\R$. If $\sg q=1$, then WLOG $q\in\{X^2,X^2+Y^2-Z^2\}$. If $q=X^2$, then for example the $(0,y,1)$ where $y\in\R$ are pairwise linearly independent zeros of $q$ and therefore also of $f$. If $q=X^2+Y^2-Z^2$, then the $(x,1,\sqrt{x^2+1})$ where $x\in\R$ are pairwise linearly independent zeros of $q$ and therefore of $f$. Indeed even the projections of these vectors onto their first two components are already pairwise linearly independent, as we have already seen. If $\sg q=0$, then WLOG $q\in\{0,X^2-Y^2\}$. The case $q=0$ is trivial. In the case $q=X^2-Y^2$ for example the $(x,x,1)$ where $x\in\R$ are pairwise linearly independent zeros of $q$ and therefore also of $f$. Again even the projections of these vectors onto their second and third components are already pairwise linearly independent.
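The zero families exhibited here are easy to spot-check numerically. The following Python snippet (an illustrative aside with helper names of our own choosing, not part of the formal text) verifies the families for $q=X^2+Y^2-Z^2$ and $q=X^2-Y^2$ together with the pairwise linear independence of the stated projections:

```python
import math

# q = X^2 + Y^2 - Z^2 (the case sg q = 1) and q = X^2 - Y^2 (the case sg q = 0)
def q_sig1(x, y, z):
    return x * x + y * y - z * z

def q_sig0(x, y, z):
    return x * x - y * y

# the zero families (x, 1, sqrt(x^2+1)) and (x, x, 1) from the remark
zeros1 = [(float(x), 1.0, math.sqrt(x * x + 1)) for x in range(-3, 4)]
zeros0 = [(float(x), float(x), 1.0) for x in range(-3, 4)]

assert all(abs(q_sig1(*v)) < 1e-12 for v in zeros1)
assert all(abs(q_sig0(*v)) < 1e-12 for v in zeros0)

def pairwise_independent(vectors, i, j):
    # pairwise linear independence detected via a nonzero 2x2 minor
    # of the projections onto the components i and j
    return all(v[i] * w[j] - v[j] * w[i] != 0
               for k, v in enumerate(vectors) for w in vectors[k + 1:])

assert pairwise_independent(zeros1, 0, 1)  # projections onto first two components
assert pairwise_independent(zeros0, 1, 2)  # projections onto last two components
```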
\end{rem} \begin{rem} We will neither use nor prove the following: \begin{enumerate}[(a)] \item In 1888, Hilbert showed a strengthening of \ref{h1888} (``sum of \emph{three} squares'' instead of ``sum of squares'', cf. also \ref{h1888a}, \ref{h1888b}, \ref{h1888c} and \ref{subtractlin4}) \cite{hil}. A very long and tedious elementary proof for this has been given by Scheiderer and Pfister in 2012 \cite{ps}. \item Scheiderer showed in 2016 that \[X^4+XY^3+Y^4-3X^2YZ-4XY^2Z+2X^2Z^2+XZ^3+YZ^3+Z^4\] is psd but does not belong to $\sum\Q[X,Y,Z]^2$ \cite{s2}. In the same year, Henrion, Naldi and Safey El Din gave an elementary proof for this \cite{hns}. \end{enumerate} \end{rem} \chapter{Nonnegative polynomials with zeros} Throughout this chapter, $K$ again always denotes a subfield of $\R$ with the induced order. Moreover, we let $A$ always be a commutative ring (e.g., $A=K[X_1,\dots,X_n]$). \section{Modules over semirings} \begin{df}\label{deftmodule} Let $T\subseteq A$. Then we call $T$ a \emph{semiring} of $A$ if $\{0,1\}\subseteq T$, $T+T\subseteq T$ and $TT\subseteq T$ [$\to$ \ref{defpreorder}]. If $T$ is a semiring of $A$, then $M\subseteq A$ is called a \emph{$T$-module} of $A$ if $0\in M$, $M+M\subseteq M$ and $TM\subseteq M$. \end{df} \begin{rem}\label{semiringrem} \begin{enumerate}[(a)] \item $\text{$T$ is a preorder of $A$}\iff(\text{$T$ is a semiring of $A$} \et A^2\subseteq T)$ \item If $T$ is a semiring of $A$, then $T-T$ is a subring of $A$. \item If $T$ is a semiring of $A$ and $M$ a $T$-module of $A$, then $M-M$ is a $(T-T)$-module of $A$. \item If $T$ is a semiring of $A$, then $T$ is a $T$-module of $A$. \end{enumerate} \end{rem} \begin{df}\label{defarchsemiring} Let $T$ be a semiring of $A$ and $M$ a $T$-module of $A$. Then $M$ is called \emph{Archimedean} (in $A$) if $\forall a\in A:\exists N\in\N:N+a\in M$ [$\to$ \ref{dfarch}(a)].
\end{df} \begin{rem} Due to \ref{semiringrem}(d), the notion of an Archimedean semiring is also defined by \ref{defarchsemiring}. Because of \ref{semiringrem}(a), this generalizes the notion of an Archimedean preorder of $A$ [$\to$ \ref{dfarch}(a)]. \end{rem} \begin{df}{}[$\to$ \ref{arithmbounded}]\label{bamu} Let $T$ be a semiring of $A$, $M$ a $T$-module of $A$ and $u\in A$. Then we call \[B_{(A,M,u)}:=\{a\in A\mid\exists N\in\N:Nu\pm a\in M\}\] the set of elements of $A$ that are \emph{arithmetically bounded} by $u$ with respect to $M$. If $u=1$, then we write $B_{(A,M)}:=B_{(A,M,u)}$ and omit the specification ``by $u$''. \end{df} \begin{pro}\label{mboundedtimesmbounded} Suppose $T$ is a semiring of $A$, $M_1$ and $M_2$ are $T$-modules of $A$, $u_1\in M_1$ and $u_2\in M_2$. Then $\sum M_1M_2$ is also a $T$-module of $A$ and we have \[B_{(A,M_1,u_1)}B_{(A,M_2,u_2)}\subseteq B_{(A,\sum M_1M_2,u_1u_2)}.\] \end{pro} \begin{proof} Let $a_i\in B_{(A,M_i,u_i)}$, say $Nu_i\pm a_i\in M_i$ for $i\in\{1,2\}$ with $N\in\N$. Then (cf. the proof of \ref{arithmbounded}) \[3N^2u_1u_2\pm a_1a_2=(Nu_1+a_1)(Nu_2\pm a_2)+Nu_2(Nu_1-a_1)+ Nu_1(Nu_2\mp a_2).\] \end{proof} \begin{cor} Let $T$ be a semiring of $A$, $M$ a $T$-module of $A$, $u\in T$ and $v\in M$. Then $B_{(A,T,u)}B_{(A,M,v)}\subseteq B_{(A,M,uv)}$. \end{cor} \begin{proof} Apply \ref{mboundedtimesmbounded} to $M_1:=T$, $M_2:=M$, $u_1:=u$, $u_2:=v$ and observe $\sum M_1M_2=\sum TM=M$. \end{proof} \begin{cor}{}\emph{[$\to$ \ref{arithmbounded}]}\label{bmodule} Let $T$ be a semiring of $A$. Then $B_{(A,T)}$ is a subring of $A$. Moreover, if $M$ is a $T$-module of $A$ and $u\in M$, then $B_{(A,M,u)}$ is a $B_{(A,T)}$-module of $A$. \end{cor} \begin{rem}{}[$\to$ \ref{defarchsemiring}, \ref{archb}]\label{archmoduleb} If $T\subseteq A$ is a semiring and $M\subseteq A$ a $T$-module with $1\in M$, then $M$ is Archimedean if and only if $B_{(A,M)}=A$.
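The algebraic identity from the proof of \ref{mboundedtimesmbounded} can be checked mechanically. Here is a small Python sketch (illustrative only; the function name is ours) that tests it on random integer inputs for both sign choices:

```python
import random

# 3N^2*u1*u2 + s*a1*a2
#   = (N*u1 + a1)(N*u2 + s*a2) + N*u2*(N*u1 - a1) + N*u1*(N*u2 - s*a2),
# where s = +1 or s = -1 encodes the two sign choices of the identity
def identity_holds(N, u1, u2, a1, a2):
    return all(
        3 * N * N * u1 * u2 + s * a1 * a2
        == (N * u1 + a1) * (N * u2 + s * a2)
        + N * u2 * (N * u1 - a1)
        + N * u1 * (N * u2 - s * a2)
        for s in (1, -1)
    )

random.seed(0)
assert all(identity_holds(*(random.randint(-50, 50) for _ in range(5)))
           for _ in range(1000))
```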
\end{rem} \begin{thm}{}\emph{[$\to$ \ref{archchar}]}\label{archsemiringchar} Let $n\in\N_0$ and $T\subseteq K[\x]$ a semiring with $K_{\ge0}\subseteq T$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $T$ is Archimedean. \item $\exists N\in\N:\forall i\in\{1,\dots,n\}:N\pm X_i\in T$ \item $\exists m\in\N:\exists\ell_1,\dots,\ell_m\in T\cap K[\x]_1:\exists N\in\N:\\ \emptyset\ne\{x\in K^n\mid\ell_1(x)\ge0,\dots,\ell_m(x)\ge0\}\subseteq[-N,N]_K^n$ \end{enumerate} \end{thm} \begin{proof} Write $A:=K[\x]$. From $K_{\ge0}\subseteq T$, it follows that $K\subseteq B_{(A,T)}$. Hence we have $B_{(A,T)}=A\iff X_1,\dots,X_n\in B_{(A,T)}$ which shows (a)$\iff$(b). The implication (b)$\implies$(c) is trivial and (c)$\implies$(b) is an easy consequence of the linear Nichtnegativstellensatz \ref{linnichtnegativstellensatz}. \end{proof} \begin{lem}{}[$\to$ \ref{squarerootsarithmeticallybounded}]\label{rootqmb} Suppose that $\frac12\in A$ (i.e., $2\in A^\times$), let $M\subseteq A$ be a $(\sum A^2)$-module with $1\in M$ and let $a\in A$. Then \[a^2\in B_{(A,M)}\iff a\in B_{(A,M)}.\] \end{lem} \begin{proof} ``$\Longrightarrow$'' If $N\in\N$ with $(N-1)-a^2\in M$, then \[N\pm a=(N-1)-a^2+\left(\frac12\pm a\right)^2+3\left(\frac12\right)^2\in M\] (exactly as in the proof of \ref{squarerootsarithmeticallybounded}). ``$\Longleftarrow$'' If $N\in\N$ with $(2N-1)\pm a\in M$, then \[N^2(2N-1)-a^2=2\left(\frac12\right)^2((N-a)^2(2N-1+a)+(N+a)^2(2N-1-a))\in M.\] \end{proof} \begin{pro}\label{bammodule} Suppose $\frac12\in A$, $T\subseteq A$ is a preorder and $M\subseteq A$ is a $T$-module with $1\in M$. Then $B_{(A,M)}$ is a subring of $A$ and $B_{(A,M,u)}$ a $B_{(A,M)}$-module of $A$ for each $u\in T$. \end{pro} \begin{proof} It obviously suffices to show $B_{(A,M)}B_{(A,M,u)}\subseteq B_{(A,M,u)}$ for all $u\in T$ (since this means $B_{(A,M)}B_{(A,M)}\subseteq B_{(A,M)}$ for $u=1$). 
If $a\in B_{(A,M)}$, then we have \[a=\left(\frac12\right)^2((a+1)^2-(a-1)^2)\] and because of $1\in M$ also $a+1,a-1\in B_{(A,M)}$. Therefore it is enough to show $a^2B_{(A,M,u)}\subseteq B_{(A,M,u)}$ for all $a\in B_{(A,M)}$ and $u\in T$. For this purpose, fix $a\in B_{(A,M)}$, $u\in T$ and $b\in B_{(A,M,u)}$. To show: $a^2b\in B_{(A,M,u)}$. From \ref{rootqmb}, we get $a^2\in B_{(A,M)}$. Choose $N\in\N$ such that $N-a^2,Nu\pm b\in M$. Due to $a^2,u\in T$, we get now $Nu-ua^2,Nua^2\pm a^2b\in M$. Consequently, \[N^2u\pm a^2b=(N^2u-Nua^2)+(Nua^2\pm a^2b)\in M+M\subseteq M.\] \end{proof} \begin{thm}{}\emph{[$\to$ \ref{archchar}, \ref{archsemiringchar}]} \label{archmodulechar} Suppose $n\in\N_0$ and $M\subseteq K[\x]$ is a $(\sum K_{\ge0}K[\x]^2)$-module with $1\in M$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $M$ is Archimedean. \item $\exists N\in\N:N-\sum_{i=1}^nX_i^2\in M$ \item $\exists N\in\N:\forall i\in\{1,\dots,n\}:N\pm X_i\in M$ \item $\exists m\in\N:\exists\ell_1,\dots,\ell_m\in M\cap K[\x]_1:\exists N\in\N:\\ \emptyset\ne\{x\in K^n\mid\ell_1(x)\ge0,\dots,\ell_m(x)\ge0\}\subseteq[-N,N]_K^n$ \end{enumerate} \end{thm} \begin{proof} \underline{(a)$\implies$(b)} is trivial. \smallskip \underline{(b)$\implies$(c)}\quad If (b) holds, then $N-X_i^2\in M$ and thus $X_i^2\in B_{(K[\x],M)}$ for all $i\in\{1,\dots,n\}$. Apply now \ref{rootqmb}. \smallskip \underline{(c)$\implies$(d)} is trivial and \underline{(d)$\implies$(c)} follows again from the linear Nichtnegativstellensatz \ref{linnichtnegativstellensatz}. \smallskip \underline{(c)$\implies$(a)} follows from \ref{bammodule}. \end{proof} \section{Pure states on rings and ideals} In this section, we always suppose that the field $K$ is a subring of $A$. In particular, $\Q\subseteq A$ and $A$ is a $K$-vector space. 
\begin{rem}\label{abstractarchimedeanpositivstellensatz2} Under the mild hypothesis $\Q\subseteq A$ just made, one can reformulate the abstract Archimedean Positivstellensatz \ref{abstractarchimedeanpositivstellensatz} as follows: \begin{center} \fbox{ \begin{minipage}{35em} For arbitrary $A$ and $K$ as above, let $T$ be an Archimedean preorder of $A$ such that $K_{\ge0}\subseteq T$ and $a\in A$. Then the following are equivalent: \begin{enumerate}[(a)] \item $\ph(a)>0$ for all $K$-linear ring homomorphisms $\ph\colon A\to\R$ with $\ph(T)\subseteq\R_{\ge0}$. \item $\exists N\in\N:a\in\frac1N+T$ \end{enumerate} \end{minipage} } \end{center} To see this, first note that in (a), one can omit the $K$-linearity of $\ph$ since it just means that $\ph|_K=\id_K$ which follows from \ref{archemb} by $K_{\ge0}\subseteq T$ since the identity is the \emph{only} embedding of ordered fields from $K$ to $\R$ (cf. the proof of \ref{evashom}). But then the theorem becomes strongest for $K=\Q$ and we can thus assume $K=\Q$, which makes the hypothesis $K_{\ge0}\subseteq T$ redundant since for all $m,n\in\N$, we have $\frac mn=mn\left(\frac1n\right)^2\in\sum A^2\subseteq T$. This last fact also shows (b)$\iff$(b'), where we denote by (a') and (b') the corresponding conditions from \ref{abstractarchimedeanpositivstellensatz}, namely: \begin{enumerate}[(a')] \item $\widehat a>0$ on $\sper(A,T)$ \item $\exists N\in\N:Na\in1+T$ \end{enumerate} It remains to show that (a)$\iff$(a'). To this end, it suffices by \ref{archmax}(d) to show that (a') is equivalent to \begin{enumerate}[(a'')] \item $\widehat a(Q)>0$ for all maximal elements $Q$ of $\sper(A,T)$. \end{enumerate} It is clear that (a')$\implies$(a''). To show (a'')$\implies$(a'), suppose that (a'') holds and let $P\in\sper(A,T)$. To show: $\widehat a(P)>0$. Using \ref{inmaxprimecone} or \ref{spear}, we find a maximal element $Q$ of $\sper(A,T)$ such that $P\subseteq Q$. By \ref{primeconeinclusion}, we have $Q=P\cup\supp(Q)$.
Due to (a''), we have $a\in Q\setminus-Q$, i.e., $a\in Q\setminus\supp(Q)\subseteq P$, and because of $a\notin-P$ (for otherwise $a\in-Q$) it follows that $a\in P\setminus-P$, i.e., $\widehat a(P)>0$. This shows (a')$\iff$(a''). These arguments were implicitly present already in the proof of \ref{archimedeanpositivstellensatz}. \end{rem} \begin{rem}\label{semiringu1} Suppose $T$ is a semiring of $A$ with $K_{\ge0}\subseteq T$ and $M$ a $T$-module of $A$. Then $M$ is a cone in the $K$-vector space $A$ and we have: \[\text{$M$ is Archimedean [$\to$ \ref{defarchsemiring}]}\iff \text{$1$ is a unit for $M$ [$\to$ \ref{defunit}]}\] \end{rem} \begin{motivation}\label{puremotivation} If $T$ is an Archimedean preorder of $A$ with $K_{\ge0}\subseteq T$, then the Archimedean Positivstellensatz \ref{abstractarchimedeanpositivstellensatz} in the version of \ref{abstractarchimedeanpositivstellensatz2} amounts to the equivalence of \[\exists N\in\N:a\in\frac1N+T\] with \[(*)\qquad\ph(a)>0\text{ for all ($K$-linear) ring homomorphisms }\ph\colon A\to\R \text{ with $\ph(T)\subseteq\R_{\ge0}$} \] while \ref{conemembershipunitextreme}, paying attention to \ref{semiringu1}, tells that the same condition is equivalent to \[(**)\qquad\ph(a)>0\text{ for all pure states }\ph\text{ of }(A,T,1). \] The following imprecise questions arise: \begin{enumerate}[(a)] \item What do pure states ``on rings'' have to do with ring homomorphisms? \item Can the Archimedean Positivstellensatz be generalized from preorders to semirings or even to modules over semirings? \item If $(*)$ holds only with ``$\ge$'' instead of ``$>$'', then $\exists N\in\N:a\in\frac1N+T$ can of course not hold anymore but one would still want to prove that $a\in T$. 
In this case, is it possible to find an ideal $I\subseteq A$ (e.g., the kernel of a ring homomorphism $\ph$ from $(*)$ with $\ph(a)=0$) such that $I\cap T$ possesses in the $K$-vector space $I$ a unit $u$ in such a way that $a\in I$ and $(**)$ holds for $(I,I\cap T,u)$ instead of $(A,T,1)$? Then one could apply \ref{conemembershipunitextreme} or \ref{conemembershipextr} in order to finally still show that $a\in T$ (even $a\in\frac1Nu+(I\cap T)$). \item What can one say about pure states ``on ideals''? This question generalizes (a) and is motivated by (c). \end{enumerate} \end{motivation} \begin{reminder}\label{binomialseries} For $z\in\C$ and $k\in\N_0$, the binomial coefficient \[\binom zk:=\prod_{i=1}^k\frac{z-i+1}i\] is defined. From analysis, one knows that \[\sqrt{1+t}=(1+t)^{\frac12}=\sum_{i=0}^\infty\binom{\frac12}it^i\] for all $t\in\R$ with $|t|<1$. \end{reminder} \begin{lem}\label{binomialmiracle} For all $k\in\N$, the coefficients of \[p_k:=\left(\sum_{i=0}^k\binom{\frac12}i(-T)^i\right)^2-(1-T)\in\Q[T]\] are nonnegative. \end{lem} \begin{proof} In the ring $\Q[[T]]$ of formal power series, we have because of \ref{binomialseries} and the identity theorem for power series from analysis that \[\left(\sum_{i=0}^\infty\binom{\frac12}i(-T)^i\right)^2=1-T.\] Now let $k\in\N$ be fixed. For $i\in\N_0$ with $i\le k$, the coefficient of $T^i$ in $p_k$ obviously equals the coefficient of $T^i$ in \[\left(\sum_{i=0}^\infty\binom{\frac12}i(-T)^i\right)^2-(1-T)\] which is zero. The binomial coefficient $\binom{\frac12}i$ is positive for $i\in\{0,1,3,5,\dots\}$ and negative for $i\in\{2,4,6,\dots\}$. The only positive coefficient of \[\sum_{i=0}^k\binom{\frac12}i(-T)^i\] is thus the constant term. Hence, for $i\in\N_0$ with $i>k$, the coefficient of $T^i$ in $p_k$ is a sum of products of two nonpositive reals (the constant term cannot occur as a factor since the complementary index would exceed $k$) and therefore nonnegative.
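This reasoning can be replayed computationally in exact rational arithmetic. The following Python sketch (an illustrative aside; the helper names are ours) recomputes the coefficients of $p_k$ for small $k$ and confirms that the first $k+1$ of them vanish and that all of them are nonnegative:

```python
from fractions import Fraction

def binom_half(i):
    # the binomial coefficient (1/2 choose i) as an exact rational
    r = Fraction(1)
    for j in range(1, i + 1):
        r *= (Fraction(1, 2) - (j - 1)) / j
    return r

def p_coeffs(k):
    # coefficient list of p_k = (sum_{i<=k} (1/2 choose i)(-T)^i)^2 - (1 - T)
    s = [binom_half(i) * (-1) ** i for i in range(k + 1)]
    p = [Fraction(0)] * (2 * k + 1)
    for i, si in enumerate(s):          # square the truncated series by convolution
        for j, sj in enumerate(s):
            p[i + j] += si * sj
    p[0] -= 1                           # subtract (1 - T)
    p[1] += 1
    return p

for k in range(1, 12):
    c = p_coeffs(k)
    assert all(ci == 0 for ci in c[:k + 1])  # p_k vanishes to order k
    assert all(ci >= 0 for ci in c)          # all coefficients nonnegative
```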
\end{proof} \begin{lem}\label{magiclemma} Suppose $I$ is an ideal of $A$, $T$ is a preorder of $A$ with $K_{\ge0}\subseteq T$, $M\subseteq I$ is a $T$-module of $A$, $u$ is a unit for $M$ in $I$, $a\in T$ and $(1-2a)u\in M$. Then [$\to$ \ref{defstate}] \[S(I,M,u)\subseteq S(I,(1-a)M,u).\] \end{lem} \begin{proof} Let $\ph\in S(I,M,u)$. To show: $\ph((1-a)M)\subseteq\R_{\ge0}$. Let $b\in M$. To show: \[\ph((1-a)b)\ge0.\] WLOG $u-b\in M$ (otherwise choose $N\in\N$ with $Nu-b\in M$ and replace $b$ by $\frac1Nb\in M$). We show $\ph((1-a)b)>-\ep$ for all $\ep>0$. To this end, let $\ep>0$. It is enough to show that there is a $k\in\N$ satisfying \[\ph((1-a)b)>\ph\left(\left(\sum_{i=0}^k\binom{\frac12}i(-a)^i\right)^2b\right)-\ep\] since $A^2M\subseteq TM\subseteq M\subseteq\ph^{-1}(\R_{\ge0})$. Because of $a\in T$, we have \[(1-(2a)^i)u=\sum_{j=0}^{i-1}((2a)^j-(2a)^{j+1})u=\sum_{j=0}^{i-1}(2a)^j(1-2a)u\in M\] for all $i\in\N_0$, i.e., \[(\square)\qquad\left(\frac1{2^i}-a^i\right)u\in M\] for all $i\in\N_0$. By \ref{binomialseries}, we can choose $k\in\N$ such that \[\left(\sum_{i=0}^k\binom{\frac12}i\left(-\frac12\right)^i\right)^2<\left(1-\frac12\right)+\ep,\] i.e., $p_k\left(\frac12\right)<\ep$ with $p_k$ as in Lemma \ref{binomialmiracle}. We show that $\ph(p_k(a)b)<\ep$ which is exactly our claim. Since $p_k(a)\in T$ holds by Lemma \ref{binomialmiracle}, it is enough to show that $\ph(p_k(a)u)<\ep$ since $\ph(p_k(a)b)\le\ph(p_k(a)u)$ holds due to $p_k(a)(u-b)\in M$. But we have \[\ph(p_k(a)u)\le\ph(p_k\left(\frac12\right)u) =p_k\left(\frac12\right)\ph(u)=p_k\left(\frac12\right)<\ep\] due to $(p_k\left(\frac12\right)-p_k(a))u\in M$ (use \ref{binomialmiracle} and $(\square)$). 
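The telescoping identity behind $(\square)$ is also easy to spot-check in exact arithmetic. A short Python sketch (illustrative only; the factor $u$ is omitted since the identity is already one of polynomials in $a$):

```python
from fractions import Fraction

# 1 - (2a)^i = sum_{j=0}^{i-1} (2a)^j * (1 - 2a), checked at rational points a
def telescope_ok(a, i):
    lhs = 1 - (2 * a) ** i
    rhs = sum(((2 * a) ** j) * (1 - 2 * a) for j in range(i))
    return lhs == rhs

assert all(telescope_ok(Fraction(num, 7), i)
           for num in range(-10, 11) for i in range(8))
```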
\end{proof} \begin{thm}[Burgdorf, Scheiderer, Schweighofer \cite{bss}]\emph{[$\to$ \ref{puremotivation}(d)]} \label{puremult} Suppose that $I$ is an ideal of $A$, $T$ is a preorder or an Archimedean semiring of $A$, $K_{\ge0}\subseteq T$, $M\subseteq I$ is a $T$-module of $A$, $u$ is a unit for $M$ in $I$ and $\ph$ is a pure state of $(I,M,u)$. Then \[(*)\qquad\ph(ab)=\ph(au)\ph(b)\] for all $a\in A$ and $b\in I$. \end{thm} \begin{proof} Due to $T-T=A$ [$\to$ \ref{diffsquare}, \ref{defarchsemiring}] it suffices to show $(*)$ for all $a\in T$ and $b\in I$. If $T$ is \alal{an Archimedean semiring}{a preorder}, then one can here suppose by scaling $a$ that $\malal{1-a\in T}{u-2au\in M}$ and thus because of \alal{$TM\subseteq M$}{\text{Lemma \ref{magiclemma}}} that \[S(I,M,u)\subseteq S(I,(1-a)M,u).\] Moreover, we can suppose that $\ph(au)<1$. Fix therefore $a\in T$ with $S(I,M,u)\subseteq S(I,(1-a)M,u)$ and $\ph(au)<1$. We have to show $(*)$ for all $b\in I$. \medskip \textbf{Case 1:} $\ph(au)=0$ \smallskip Then we have to show that $\ph(ab)=0$ for all $b\in I$. For this purpose, fix $b\in I$. Choose $N\in\N$ such that $Nu\pm b\in M$. Then $Nau\pm ab\in TM\subseteq M$ and therefore $|\ph(ab)|\le N\ph(au)=0$. Hence $\ph(ab)=0$. \medskip \textbf{Case 2:} $\ph(au)\ne0$ \smallskip Then $\ph(au)>0$ because of $au\in TM\subseteq M$. Furthermore, we have $\ph((1-a)u)>0$ since $\ph(au)<1=\ph(u)$. For each $c\in A$ with $\ph(cu)>0$ and $\ph\in S(I,cM,u)$, \[\ph_c\colon I\to\R,\ b\mapsto\frac{\ph(cb)}{\ph(cu)}\] is a state of $(I,M,u)$. In particular, $\ph_a,\ph_{1-a}\in S(I,M,u)$. Because of \[\ph=\ph(au)\ph_a+\ph((1-a)u)\ph_{1-a},\] $\ph(au)>0$, $\ph((1-a)u)>0$ and $\ph(au)+\ph((1-a)u)=\ph(u)=1$, we have by \ref{extremeexo} or \ref{ratiootherthan2} that $\ph=\ph_a$ (and $\ph=\ph_{1-a}$). 
\end{proof} \begin{cor}{}\emph{[$\to$ \ref{puremotivation}(a)]}\label{pureringhom1} Let $T$ be an Archimedean semiring of $A$ such that $K_{\ge0}\subseteq T$ and $M$ a $T$-module of $A$ with $1\in M$. Then every pure state of $(A,M,1)$ is a ring homomorphism. \end{cor} \begin{cor}{}\emph{[$\to$ \ref{puremotivation}(a)]}\label{pureringhom2} Let $M$ be an Archimedean $\left(\sum K_{\ge0}A^2\right)$-module of $A$. Then every pure state of $(A,M,1)$ is a ring homomorphism. \end{cor} \begin{cor}[Becker, Schwartz \cite{bs}, first generalization of the abstract Archimedean Positivstellensatz \ref{abstractarchimedeanpositivstellensatz} in the version of \ref{abstractarchimedeanpositivstellensatz2}] \emph{[$\to$ \ref{puremotivation}(b)]} \label{beckerschwartzarchimedean} Let $T$ be an Archimedean semiring of $A$ with $K_{\ge0}\subseteq T$, $M$ a $T$-module of $A$ with $1\in M$ and $a\in A$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\ph(a)>0$ for all ($K$-linear) ring homomorphisms $\ph\colon A\to\R$ with $\ph(M)\subseteq\R_{\ge0}$. \item $\exists N\in\N:a\in\frac1N+M$ \end{enumerate} \end{cor} \begin{proof} \ref{conemembershipunitextreme}, \ref{semiringu1}, \ref{pureringhom1} \end{proof} \begin{cor}[Jacobi \cite{jac}, second generalization of the abstract Archimedean Positivstellensatz \ref{abstractarchimedeanpositivstellensatz} in the version of \ref{abstractarchimedeanpositivstellensatz2}] \emph{[$\to$ \ref{puremotivation}(b)]} \label{jacobiarchimedean} Let $M$ be an Archimedean $\left(\sum K_{\ge0}A^2\right)$-module of $A$. Then (a) and (b) from \ref{beckerschwartzarchimedean} are equivalent. \end{cor} \begin{rem} Using Lemma \ref{evashom}, one gets for the polynomial ring $K[\x]$ concrete geometric versions of \ref{beckerschwartzarchimedean} and \ref{jacobiarchimedean} which are completely analogous to \ref{archimedeanpositivstellensatz} (first and second generalization of the Archimedean Positivstellensatz). 
Instead of stating them, we immediately give concrete examples. \end{rem} \begin{ex}{}[$\to$ \ref{beckerschwartzarchimedean}]\label{hman} Let $\ell_1,\dots,\ell_m\in\R[\x]_1$ such that \[\{x\in\R^n\mid\ell_1(x)\ge0,\dots,\ell_m(x)\ge0\}\] is nonempty and compact. Moreover, let $g_1,\dots,g_\ell\in\R[\x]$ and set \[S:=\{x\in\R^n\mid\ell_1(x)\ge0,\dots,\ell_m(x)\ge0,g_1(x)\ge0,\dots,g_\ell(x)\ge0\}.\] Then for each $f\in\R[\x]$ with $f>0$ on $S$, we have \[f\in\sum_{i=0}^\ell\sum_{\al\in\N_0^m}\R_{\ge0}\ell_1^{\al_1}\dotsm\ell_m^{\al_m}g_i=:M\] where $g_0:=1$. This is because $M$ is a $T$-module with $1\in M$ for the semiring \[T:=\sum_{\al\in\N_0^m}\R_{\ge0}\ell_1^{\al_1}\dotsm\ell_m^{\al_m}\] which is Archimedean by \ref{archsemiringchar}(c). \end{ex} \begin{ex}[Putinar][$\to$ \ref{jacobiarchimedean}]\label{putinar} Let $R\in\R_{\ge0}$ and let $g_1,\dots,g_m\in\R[\x]$. Set \[S:=\{x\in\R^n\mid g_1(x)\ge0,\dots,g_m(x)\ge0,\|x\|\le R\}.\] Then for every $f\in\R[\x]$ with $f>0$ on $S$, we have \[f\in\sum_{i=0}^{m+1}\sum\R[\x]^2g_i\] with $g_0:=1$ and $g_{m+1}:=R^2-\sum_{i=1}^nX_i^2$ [$\to$ \ref{archmodulechar}(b)]. \end{ex} \begin{ex}[P\'olya \cite{pol}][$\to$ \ref{beckerschwartzarchimedean}] Let $k\in\N_0$ and suppose that $f\in\R[\x]$ is a $k$-form such that $f(x)>0$ for all $x\in\R_{\ge0}^n\setminus\{0\}$. Then there is some $N\in\N$ such that \[(X_1+\dots+X_n)^Nf\in\sum_{\substack{\al\in\N_0^n\\|\al|=N+k}} \R_{>0}\x^\al.\] This can be shown as follows: We have $f>0$ on $\De:=\{x\in\R_{\ge0}^n\mid x_1+\ldots+x_n=1\}$. By \ref{beckerschwartzarchimedean}, we obtain analogously to \ref{hman} that, for some $\ep>0$, \[f-\ep\in\sum_{\al\in\N_0^{n+2}}\R_{\ge0}X_1^{\al_1}\dotsm X_n^{\al_n} (1-(X_1+\dots+X_n))^{\al_{n+1}} (X_1+\dots+X_n-1)^{\al_{n+2}}. \] By substituting $X_i\mapsto\frac{X_i}{X_1+\dots+X_n}$ and clearing denominators, one gets the claim due to homogeneity of $f$.
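For a concrete illustration of P\'olya's theorem (the example form is our own choice): the form $f=X^2-XY+Y^2$ is positive on $\R_{\ge0}^2\setminus\{0\}$, the product $(X+Y)f=X^3+Y^3$ still has vanishing coefficients, but $(X+Y)^3f=X^5+2X^4Y+X^3Y^2+X^2Y^3+2XY^4+Y^5$ has positive coefficients only, so $N=3$ works here. The computation can be replayed with a few lines of Python (a sketch, with polynomials encoded as dictionaries mapping exponent pairs to coefficients):

```python
# f = X^2 - XY + Y^2 and the linear form X + Y, as exponent->coefficient dicts
f = {(2, 0): 1, (1, 1): -1, (0, 2): 1}
lin = {(1, 0): 1, (0, 1): 1}

def mul(p, q):
    # polynomial multiplication; zero coefficients are dropped
    r = {}
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            r[(i + k, j + l)] = r.get((i + k, j + l), 0) + c * d
    return {m: c for m, c in r.items() if c != 0}

def all_coeffs_positive(p, deg):
    # every monomial of total degree deg must carry a positive coefficient
    return all(p.get((i, deg - i), 0) > 0 for i in range(deg + 1))

g1 = mul(lin, f)             # (X+Y)*f = X^3 + Y^3
g3 = mul(lin, mul(lin, g1))  # (X+Y)^3*f

assert g1 == {(3, 0): 1, (0, 3): 1}
assert not all_coeffs_positive(g1, 3)   # N = 1 does not suffice
assert all_coeffs_positive(g3, 5)       # N = 3 does
```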
\end{ex} \section{Dichotomy of pure states on ideals} In this section, we let $K$ again be a subring of $A$ so that we consider $A$ also as a $K$-vector space. \begin{pro}\label{assringhom} Let $I$ be an ideal of $A$ and $u\in I$. Let $\ph\in S(I,\emptyset,u)$ \emph{[$\to$ \ref{defstate}]}. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $\forall a\in A:\forall b\in I:\ph(ab)=\ph(au)\ph(b)$ \emph{[$\to$ \ref{puremult}$(*)$]} \item There is a ring homomorphism $\Ph\colon A\to\R$ such that \[(**)\qquad\forall a\in A:\forall b\in I:\ph(ab)=\Ph(a)\ph(b).\] \end{enumerate} In Condition (b), $\Ph$ is uniquely determined since $(**)$ implies $\Ph(a)=\ph(au)$ for all $a\in A$, and we call $\Ph$ the ring homomorphism \emph{belonging to} or \emph{associated to} $\ph$ (on $A$). Note that $\Ph$ does not depend on $u$: if $v\in I$ with $\ph(v)=1$, then $(**)$ of course also implies $\Ph(a)=\ph(av)$. Exactly one of the following alternatives occurs: \begin{enumerate}[\normalfont(1)] \item $\Ph(u)\ne0$ and $\forall b\in I:\ph(b)=\frac{\Ph(b)}{\Ph(u)}$ \item $\Ph|_I=0$ \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)}\quad If (a) holds, then $\Ph\colon A\to\R,\ a\mapsto\ph(au)$ is a ring homomorphism since $\Ph(a)\Ph(b)=\ph(au)\ph(bu)\overset{(*)}=\ph(abu)=\Ph(ab)$ holds for all $a,b\in A$. \smallskip \underline{(b)$\implies$(a)} is clear. \medskip\noindent Because of $u\in I$ it is clear that (1) and (2) exclude each other. If $\ph(u^2)\ne0$, then (1) occurs since $(*)$ implies $\ph(bu)=\ph(u^2)\ph(b)$ for all $b\in I$. If $\ph(u^2)=0$, then $\ph(bu)=\ph(u^2)\ph(b)=0\cdot\ph(b)=0$ for all $b\in I$. \end{proof} \begin{thm}[Dichotomy]\label{dichotomy} Under the hypotheses of \ref{puremult}, exactly one of the following cases occurs: \begin{enumerate}[\normalfont(1)] \item $\ph$ is the restriction of a scaled ring homomorphism: There is a ring homomorphism $\Ph\colon A\to\R$ such that $\Ph(u)\ne0$ and $\ph=\frac1{\Ph(u)}\Ph|_I$.
\item There is a ring homomorphism $\Ph\colon A\to\R$ with $\Ph|_I=0$ such that $(**)$ from \ref{assringhom}(b) holds. \end{enumerate} We have $\text{\normalfont(1)}\iff\ph(u^2)\ne0$ and $\text{\normalfont(2)}\iff\ph(u^2)=0$. In both {\normalfont(1)} and {\normalfont(2)}, $\Ph$ is uniquely determined, namely it is the ring homomorphism that according to \ref{assringhom} belongs to $\ph$. We have $\Ph(T)\subseteq\R_{\ge0}$. If $u\in T$, then additionally $\Ph(M)\subseteq\R_{\ge0}$. \end{thm} \begin{proof} Easy with \ref{puremult} and \ref{assringhom}. \end{proof} \begin{cor} Let $M$ be a $\left(\sum K_{\ge0}A^2\right)$-module of $A$ with $1\in M$. If $M$ has a unit in $A$, then $M$ is Archimedean. \end{cor} \begin{proof} Let $u$ be a unit for $M$ in $A$. By \ref{conemembershipunitextreme}, it is enough to show that $\ph(1)>0$ for all $\ph\in\extr S(A,M,u)$. Now let $\ph$ be a pure state of $(A,M,u)$ with the associated ring homomorphism $\Ph\colon A\to\R$. Due to $\Ph(1)=1\ne0$, in the Dichotomy \ref{dichotomy} only case (1) can occur, i.e., $\Ph(u)\ne0$ and $\ph=\frac1{\Ph(u)}\Ph$. Because of $\Ph(u)=\ph(u^2)=\ph(u^2\cdot1)\in\ph(M)\subseteq\R_{\ge0}$, we have $\Ph(u)>0$. It follows that $\ph(1)>0$. \end{proof} \begin{ex}\label{triangle1} Consider the semiring $T:=\sum_{\al,\be,\ga\in\N_0}K_{\ge0}X^\al Y^\be(1-X-Y)^\ga$ of $K[X,Y]$ and \[S:=\{(x,y)\in\R^2\mid\forall p\in T:p(x,y)\ge0\}= \{(x,y)\in\R^2\mid x\ge0,y\ge0,x+y\le1\}.\] Since $S$ is bounded and $X,Y,1-X-Y$ are linear, $T$ is Archimedean by \ref{archsemiringchar}(c). Consider the ideal $I:=(X,Y)$ and the $T$-module $M:=T\cap I$ of $K[X,Y]$. Then $u:=X+Y$ is a unit for $M$ in $I$ because $B_{(K[X,Y],T)}\overset{\ref{archmoduleb}}=K[X,Y]$ and thus by \ref{bmodule} $B_{(K[X,Y],M,u)}$ is an ideal of $K[X,Y]$ that contains $X$, $Y$ and thus $I$ since $u\pm X,u\pm Y\in M$. 
The ring homomorphisms \[\Ph\colon K[X,Y]\to\R\] satisfying $\Ph(T)\subseteq\R_{\ge0}$ are obviously exactly the evaluations $\ev_x$ at points $x\in S$ (compare Lemma \ref{evashom}). Now let $\ph$ be a pure state of $(I,M,u)$. By the Dichotomy \ref{dichotomy}, exactly one of the following cases occurs: \begin{enumerate}[(1)] \item There is some $(x,y)\in S\setminus\{(0,0)\}$ with $\ph(p)=\frac{p(x,y)}{x+y}$ for all $p\in I$. \item $\ph(pX+qY)=\ph(pX)+\ph(qY)=p(0,0)\ph(X)+q(0,0)\ph(Y)$ for all $p,q\in K[X,Y]$. \end{enumerate} In Case (2), one can set $\la_1:=\ph(X)\ge0$ and $\la_2:=\ph(Y)\ge0$ and one obtains $\la_1+\la_2=\ph(X+Y)=\ph(u)=1$ as well as $\ph=\la_1\ph_1+\la_2\ph_2$ with $\ph_1\colon I\to\R,\ p\mapsto\frac{\partial p}{\partial X}(0,0)$ and $\ph_2\colon I\to\R,\ p\mapsto\frac{\partial p}{\partial Y}(0,0)$. Since every polynomial in $M$ vanishes at the origin and is nonnegative on $S$, we obtain $\ph_1,\ph_2\in S(I,M,u)$. Because of $\ph\in\extr S(I,M,u)$, in Case~(2) we have $\ph=\ph_1$ or $\ph=\ph_2$. Using \ref{conemembershipunitextreme}, we now obtain: If $f\in K[X,Y]$ with $f>0$ on $S\setminus\{0\}$, $f(0)=0$, $\frac{\partial f}{\partial X}(0)>0$ and $\frac{\partial f}{\partial Y}(0)>0$, then $f\in T$. \end{ex} \begin{ex}\label{triangle2} Let $T$ and $S$ be as in Example \ref{triangle1}. Consider the ideal $I:=(X)$ and the $T$-module $M:=T\cap I$ of $K[X,Y]$. Then $u:=X$ is a unit for $M$ in $I$ since $B_{(K[X,Y],M,u)}$ is an ideal of $K[X,Y]$ by \ref{bmodule} that contains $X$ and thus $I$ because $u\pm X\in M$. Let $\ph$ be a pure state of $(I,M,u)$. By the Dichotomy \ref{dichotomy}, exactly one of the following cases occurs: \begin{enumerate}[(1)] \item There is some $(x,y)\in S\setminus(\{0\}\times\R)$ with $\ph(p)=\frac{p(x,y)}x$ for all $p\in I$. \item There is some $y\in[0,1]$ such that $\ph(pX)=p(0,y)\ph(X)=p(0,y)\ph(u)=p(0,y)$ for all $p\in K[X,Y]$.
\end{enumerate} In Case (2), there is obviously a $y\in[0,1]$ such that $\ph(p)=\frac{\partial p}{\partial X}(0,y)$ for all $p\in I$. Observe that each $f\in K[X,Y]$ with $f=0$ on $S\cap(\{0\}\times\R)$ satisfies $f(0,Y)=0$ and thus $f\in I$. Now \ref{conemembershipunitextreme} yields: If $f\in K[X,Y]$ with $f>0$ on $S\setminus(\{0\}\times\R)$, $f=0$ on $S\cap(\{0\}\times\R)$ and $\frac{\partial f}{\partial X}(0,y)>0$ for all $y\in[0,1]$, then $f\in T$. At first glance, it might seem puzzling that one has to check here that $\frac{\partial f}{\partial X}(0,1)>0$. However, note that for $y=1$ and in fact for every $y\in\R$, $\frac{\partial f}{\partial X}(0,y)$ is the derivative of $f$ in \emph{every} direction $(1,z)$ with $z\in\R$ since $\frac{\partial f}{\partial Y}(0,y)=0$. \end{ex} \begin{ex} Let $T$ and $S$ again be as in \ref{triangle1} and \ref{triangle2}. Consider the ideal $I:=(X^2,XY)$ and the $T$-module $M:=T\cap I$ of $K[X,Y]$. Then $u:=X^2+XY$ is a unit for $M$ in $I$ since $u\pm X^2,u\pm XY\in M$. Let $\ph$ be a pure state of $(I,M,u)$. By the Dichotomy \ref{dichotomy}, exactly one of the following cases occurs: \begin{enumerate}[(1)] \item There is some $(x,y)\in S\setminus(\{0\}\times\R)$ with $\ph(p)=\frac{p(x,y)}{x(x+y)}$ for all $p\in I$. \item There is some $y\in[0,1]$ such that $\ph(pX^2+qXY)=p(0,y)\ph(X^2)+q(0,y)\ph(XY)$ for all $p,q\in K[X,Y]$. \end{enumerate} Suppose now that (2) holds and fix $y\in[0,1]$ accordingly. Consider $\la_1:=\ph(X^2)\ge0$, $\la_2:=\ph(XY)\ge0$. Then $\la_1+\la_2=\ph(u)=1$. \medskip Consider first the case $y>0$. From $0=\ph(YX^2-X(XY))=\la_1y-\la_2\cdot0=\la_1y$ we get $\la_1=0$. Then $\frac1y\frac{\partial(pX^2+qXY)}{\partial X}(0,y)=\frac1yq(0,y)y=q(0,y)=\la_1p(0,y)+\la_2q(0,y) =\ph(pX^2+qXY)$ for all $p,q\in K[X,Y]$. Hence $\ph=\ph_y$ with \[\ph_y\colon I\to\R,\ p\mapsto\frac1y\frac{\partial p}{\partial X}(0,y).\] \medskip Consider now the case $y=0$.
Then $ \frac12\frac{\partial^2(pX^2+qXY)}{\partial X^2}(0,0)=p(0,0)=p(0,y)$ and $\frac{\partial^2(pX^2+qXY)}{\partial X\partial Y}(0,0)=q(0,0)=q(0,y)$ for all $p,q\in K[X,Y]$. Hence $\ph=\la_1\ps_1+\la_2\ps_2$ with \[\ps_1\colon I\to\R,\ p\mapsto\frac12\frac{\partial^2p}{\partial X^2}(0,0)\qquad\text{and}\qquad \ps_2\colon I\to\R,\ p\mapsto\frac{\partial^2p}{\partial X\partial Y}(0,0).\] \medskip\noindent Before we give a summary, we observe that \[I=\left\{f\in K[X,Y]\mid f=0\text{ on }S\cap(\{0\}\times\R),\frac{\partial f}{\partial X}(0)=0\right\}\] where ``$\subseteq$'' is clear since the right-hand side obviously forms an ideal and ``$\supseteq$'' can be seen as follows: If $f\in K[X,Y]$ with $f=0$ on $S\cap(\{0\}\times\R)$, then $f(0,Y)=0$ and thus $f\in(X)$. If $f=Xg\in K[X,Y]$ with $\frac{\partial f}{\partial X}(0)=0$, then $g(0)=0$, hence $g\in(X,Y)$ and consequently $f\in(X^2,XY)$. Taking into account that each polynomial in $M$ is nonnegative on $S$, one obtains $\ph_y\in S(I,M,u)$ for all $y\in(0,1]_\R$ and $\ps_1,\ps_2\in S(I,M,u)$. The above considerations therefore yield \[\extr S(I,M,u)\subseteq \{\ph_y\mid y\in(0,1]_\R\}\cup\{\ps_1,\ps_2\}\] from which one obtains with \ref{conemembershipunitextreme}: If $f\in K[X,Y]$ with \begin{itemize} \item $f>0$ on $S\setminus(\{0\}\times\R)$, \item $f=0$ on $S\cap(\{0\}\times\R)$, \item $\frac{\partial f}{\partial X}(0,y)>0$ for $y\in(0,1]_\R$, \item $\frac{\partial f}{\partial X}(0,0)=0$, \item $\frac{\partial^2f}{\partial X^2}(0,0)>0$ and \item $\frac{\partial^2f}{\partial X\partial Y}(0,0)>0$, \end{itemize} then $f\in T$. \end{ex} \section{A local-global principle} \begin{pro}\label{uaaa} Let $T$ be a semiring of $A$ with $K_{\ge0}\subseteq T$, $M$ a $T$-module of $A$, $n\in\N_0$ and $a_1,\dots,a_n\in A$. Set $I:=(a_1,\dots,a_n)$. Moreover, let $u$ be a unit for $\malal TM$ in $A$ and suppose $a_1,\dots,a_n\in\malal MT$. Then $u(a_1+\ldots+a_n)$ is a unit for $M\cap I$ in $I$.
\end{pro} \begin{proof} Let $b\in I$ and set $v:=u(a_1+\dots+a_n)$. To show: $\exists N\in\N:Nv+b\in M\cap I$. Write $b=\sum_{i=1}^nc_ia_i$ with $c_1,\dots,c_n\in A$. Choose $N\in\N$ such that $Nu\pm c_i\in\malal TM$ for $i\in\{1,\dots,n\}$. Then $Nv\pm b=\sum_{i=1}^n(Nua_i\pm c_ia_i) =\sum_{i=1}^n(Nu\pm c_i)a_i\in M$. \end{proof} \begin{thm}[Burgdorf, Scheiderer, Schweighofer \cite{bss}] Let $T$ be an Archimedean semiring of $A$ with $K_{\ge0}\subseteq T$ and $M$ a $T$-module of $A$. Let $a\in A$ such that there is for each maximal ideal $\m$ of $A$ some $t\in T\setminus\m$ with $ta\in M$. Then $a\in M$. \end{thm} \begin{proof}[Proof (simplified by Leonhard Nenno)] The ideal \[I:=(\{t\in T\mid ta\in M\})\] is not contained in any maximal ideal $\m$ of $A$ for otherwise we find by our hypothesis some $s\in T\setminus\m$ with $sa\in M$ which entails the contradiction $s\in I\subseteq\m$. It follows that $1\in I$, i.e., we can choose $m\in\N$ and $t_1,\dots,t_m\in T$ and $d_1,\dots,d_m\in A$ with $t_1a,\dots,t_ma\in M$ such that \[1=d_1t_1+\ldots+d_mt_m.\] Multiplying with $a$, it follows that \[a\in(at_1,\ldots,at_m)=:J.\] By the (first version of) \ref{uaaa}, $u:=at_1+\ldots+at_m=at$ with $t:=t_1+\ldots+t_m\in T$ is a unit for $M\cap J$ in $J$. To show that $a\in M$, we will now apply \ref{conemembershipextr}. So let $\ph$ be a pure state of $(J,J\cap M,u)$. To show: $\ph(a)>0$. Denote by $\Ph$ the ring homomorphism that belongs to $\ph$ according to \ref{puremult} and \ref{assringhom}. We have $\Ph(T)\subseteq\R_{\ge0}$ [$\to$ \ref{dichotomy}]. Now \[1=\ph(u)=\ph(at)=\ph(ta)=\underbrace{\Ph(t)}_{\ge0}\ph(a).\] Thus $\ph(a)>0$. 
\end{proof} \chapter{Pure states and nonnegative polynomials over real closed fields} \section{Pure states and polynomials over real closed fields} Throughout this section, we let $R$ be a real closed extension field of $\R$, we set $\O:=\O_R$, $\m:=\m_R$ and we make extensive use of the standard part maps $\O\to\R,\ a\mapsto\st(a)$, $\O[\x]\to\R[\x],\ p\mapsto\st(p)$ [$\to$ \ref{ordval}] and $\O^n\to\R^n,\ x\mapsto\st(x):=(\st(x_1),\ldots,\st(x_n))$ which are surjective ring homomorphisms. \begin{df}{}[$\to$ \ref{defpreorder}, \ref{dfarch}(a)]\label{defqm} Let $A$ be a commutative ring and $M\subseteq A$. Then $M$ is called a \emph{quadratic module} of $A$ if $M$ is a $\sum A^2$-module of $A$ containing $1$ [$\to$ \ref{deftmodule}], or in other words, if $\{0,1\}\subseteq M$, $M+M\subseteq M$ and $A^2M\subseteq M$. We call a quadratic module $M$ of $A$ \emph{Archimedean} if $B_{(A,M)}=A$ [$\to$ \ref{arithmbounded}, \ref{archb}, \ref{bamu}]. \end{df} \begin{pro}{}\emph{[$\to$ \ref{archmodulechar}]}\label{archmodulecharrcf} Suppose $n\in\N_0$ and $M$ is a quadratic module of $\O[\x]$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $M$ is Archimedean. \item $\exists N\in\N:N-\sum_{i=1}^nX_i^2\in M$ \item $\exists N\in\N:\forall i\in\{1,\dots,n\}:N\pm X_i\in M$ \end{enumerate} \end{pro} \begin{proof} \underline{(a)$\implies$(b)} is trivial. \smallskip \underline{(b)$\implies$(c)}\quad If (b) holds, then $N-X_i^2\in M$ and thus $X_i^2\in B_{(\O[\x],M)}$ for all $i\in\{1,\dots,n\}$. Apply now \ref{rootqmb}. \smallskip \underline{(c)$\implies$(a)} follows from \ref{bammodule} since $\O\subseteq B_{(\O[\x],M)}$. \end{proof} \begin{rem} In contrast to \ref{archmodulechar}(d), one cannot add \begin{multline*} \exists m\in\N:\exists\ell_1,\dots,\ell_m\in M\cap\O[\x]_1:\exists N\in\N:\\ \emptyset\ne\{x\in R^n\mid\ell_1(x)\ge0,\dots,\ell_m(x)\ge0\}\subseteq[-N,N]_R^n \end{multline*} as another equivalent condition in \ref{archmodulecharrcf}. 
Indeed, choose $R$ non-Archimedean [$\to$ \ref{boundex}] and $\ep\in\m \setminus\{0\}$. Then $\emptyset\ne\{0\}=\{x\in R\mid\ep x\ge0,-\ep x\ge0\}\subseteq[-1,1]_R$ but the quadratic module \[\sum\O[X]^2+\sum\O[X]^2\ep X+\sum\O[X]^2(-\ep X) \overset{\ref{diffsquare}}=\sum\O[X]^2+\O[X]\ep X\] generated by $\ep X$ and $-\ep X$ in $\O[X]$ is not Archimedean for if we had $N\in\N$ with \[N-X^2\in\sum\O[X]^2+\O[X]\ep X,\] then taking standard parts would yield $N-X^2\in\sum\R[X]^2$ which contradicts \ref{soslongrem}(b). \end{rem} \begin{df}{}[$\to$ \ref{evashom}]\label{adamshom} For every $x\in\O^n$, we define the ring homomorphism \[\ev_x\colon\O[\x]\to\O,\ p\mapsto p(x)\] and set $I_x:=\ker\ev_x$. \end{df} \begin{pro} Let $x\in\O^n$. Then $I_x=(X_1-x_1,\dots,X_n-x_n)$. \end{pro} \begin{proof} It is trivial that $J:=(X_1-x_1,\dots,X_n-x_n)\subseteq I_x$. Conversely, $p\equiv_Jp(x)=0$ for all $p\in I_x$, which shows the converse inclusion $I_x\subseteq J$. \end{proof} \begin{notation} Suppose $A$ is a commutative ring and $I$ is an ideal of $A$. As is customary in commutative algebra, we will in the following often denote by $I^2$ the product of the ideal $I$ with itself which in our suggestive notation [$\to$ \ref{divnot}] would be written $\sum II$. From the context, the reader should be able to avoid misinterpreting $I^2$ as what it would mean in this suggestive notation, namely $\{a^2\mid a\in I\}$. The same applies to $I^3$ and so on. Another source of confusion could be that we will often use the notation $\m^n$ to denote the Cartesian power \[\underbrace{\m\times\ldots\times\m}_{\text{$n$ times}}.\] \end{notation} \begin{lem}\label{coprime} Suppose $x,y\in\O^n$ with $\st(x)\ne\st(y)$. Then $I_x$ and $I_y$ are coprime, i.e., $1\in I_x+I_y$. \end{lem} \begin{proof} WLOG $x_1-y_1\notin\m$.
Then $x_1-y_1\in\O^\times$ and \[1=\frac{x_1-X_1}{x_1-y_1}+\frac{X_1-y_1}{x_1-y_1}\in I_x+I_y.\] \end{proof} \begin{lem}\label{ixunit} Let $M$ be an Archimedean quadratic module of $\O[\x]$ and $x\in\O^n$. Then \[u_x:=(X_1-x_1)^2+\ldots+(X_n-x_n)^2\] is a unit for $M\cap I_x^2$ in the real vector space $I_x^2$ [$\to$ \ref{defunit}]. \end{lem} \begin{proof} Using the ring automorphism \[\O[\x]\to\O[\x],\ p\mapsto p(X_1-x_1,\ldots,X_n-x_n),\] which is also an isomorphism of real vector spaces, we can reduce to the case $x=0$. Since $u_x\in I_0^2$, it suffices to show that $I_0^2\subseteq B_{(\O[\x],M,u_x)}$. Since $M$ is Archimedean, \ref{bammodule} yields that $B_{(\O[\x],M,u_x)}$ is an $\O[\x]$-module of $\O[\x]$ [$\to$~\ref{deftmodule}], i.e., an ideal of $\O[\x]$. Because of \[I_0^2=(X_iX_j\mid i,j\in\{1,\dots,n\}),\] it suffices therefore to show that $X_iX_j\in B_{(\O[\x],M,u_x)}$ for all $i,j\in\{1,\dots,n\}$. Thus fix $i,j\in\{1,\dots,n\}$. Then $\frac12(X_i^2+X_j^2)\pm X_iX_j=\frac12(X_i\pm X_j)^2\in M$ and thus $\frac12u_x\pm X_iX_j\in M$. Since $u_x\in M$, this implies $u_x\pm X_iX_j\in M$. \end{proof} \begin{nt} We use the symbols $\nabla$ and $\hess$ to denote the gradient and the Hessian of a real-valued function of $n$ real variables, respectively. For a \emph{polynomial} $p\in\R[\x]$, we understand its gradient $\nabla p$ as a column vector from $\R[\x]^n$, i.e., as a vector of polynomials. Similarly, its Hessian $\hess p$ is a symmetric matrix polynomial of size $n$, i.e., a symmetric matrix from $\R[\x]^{n\times n}$. Using formal partial derivatives, we more generally define $\nabla p\in R[\x]^n$ and $\hess p\in R[\x]^{n\times n}$ even for $p\in R[\x]$. \end{nt} \begin{lem}\label{secondtypelemma} Let $x\in\O^n$ and $\ph\in S(I_x^2,\sum\O[\x]^2\cap I_x^2,u_x)$ [$\to$ \ref{defstate}] such that $\ph|_{I_x^3}=0$. 
Then there exist $v_1,\dots,v_n\in\R^n$ such that $\sum_{i=1}^nv_i^Tv_i=1$ and \[\ph(p)=\frac12\st\left(\sum_{i=1}^nv_i^T(\hess p)(x)v_i\right)\] for all $p\in I_x^2$. \end{lem} \begin{proof} As in the proof of Lemma \ref{ixunit}, one easily reduces to the case $x=0$. \medskip \textbf{Claim 1:} $\ph(au_x)=0$ for all $a\in\m$. \smallskip \emph{Explanation.} Let $a\in\m$. WLOG $a\ge0$. Then $a\in\O\cap R_{\ge0}=\O^2$ and thus $au_x\in\sum\O[\x]^2\cap I_0^2$. This shows $\ph(au_x)\ge0$. It remains to show that $\ph(au_x)\le\frac1N$ for all $N\in\N$. For this purpose, fix $N\in\N$. Then $\frac1N-a\in\O\cap R_{\ge0}=\O^2$ and thus $\left(\frac1N-a\right)u_x\in\sum\O[\x]^2\cap I_0^2$. It follows that $\ph\left(\left(\frac1N-a\right)u_x\right)\ge0$, i.e., $\ph(au_x)\le\frac1N$. \medskip \textbf{Claim 2:} $\ph(aX_i^2)=0$ for all $a\in\m$ and $i\in\{1,\dots,n\}$. \smallskip \emph{Explanation.} Let $a\in\m$. WLOG $a\ge0$ and thus $a\in\O^2$. Then \[\sum_{i=1}^n\underbrace{\ph(\overbrace{aX_i^2}^{\rlap{$\scriptstyle\in\O[\x]^2\cap I_0^2$}})}_{\ge0}=\ph(au_x)\overset{\text{Claim 1}}=0.\] \medskip \textbf{Claim 3:} $\ph(aX_iX_j)=0$ for all $a\in\m$ and $i,j\in\{1,\dots,n\}$. \smallskip \emph{Explanation.} Fix $i,j\in\{1,\dots,n\}$ and $a\in\m$. If $i=j$, then we are done by Claim 2. So suppose $i\ne j$. WLOG $a\ge0$ and thus $a\in\O^2$. Then \[a(X_i^2+X_j^2\pm 2X_iX_j)=a(X_i\pm X_j)^2\in\O[\x]^2\cap I_0^2\] and thus $\pm 2\ph(aX_iX_j)\underset{\text{Claim 2}}=\ph(aX_i^2)+\ph(aX_j^2)\pm 2\ph(aX_iX_j)\ge 0$. \medskip \textbf{Claim 4:} $\ph(p)=\frac12\st\left(\tr\left((\hess p)(0)A\right)\right)$ for all $p\in I_0^2$ where \[A:=\begin{pmatrix}\ph(X_1X_1)&\dots&\ph(X_1X_n)\\ \vdots&\ddots&\vdots\\ \ph(X_nX_1)&\dots&\ph(X_nX_n) \end{pmatrix}. \] \smallskip \emph{Explanation.} Let $p\in I_0^2$. Since $\ph|_{I_0^3}=0$, we can reduce to the case $p=aX_iX_j$ with $i,j\in\{1,\dots,n\}$ and $a\in\O$. Using Claim 3, we can assume $a=1$. Comparing both sides yields the result.
\medskip \textbf{Claim 5:} $A$ is psd [$\to$ \ref{psdpd}(b)]. \smallskip \emph{Explanation.} If $w\in\R^n$, then $w^TAw=\ph((w_1X_1+\ldots+w_nX_n)^2)\ge0$ since \[(w_1X_1+\ldots+w_nX_n)^2\in\R[\x]^2\cap I_0^2\subseteq\sum\O[\x]^2\cap I_0^2.\] \medskip\noindent By Claim 5 and \ref{psdeq}(c), we can choose $B\in\R^{n\times n}$ such that $A=B^TB$. Denote by $v_i$ the $i$-th row of $B$ for $i\in\{1,\dots,n\}$. Then by Claim 4, we get \begin{multline*} \ph(p)=\frac12\st(\tr((\hess p)(0)A)) =\frac12\st(\tr((\hess p)(0)B^TB))\\ =\frac12\st(\tr(B(\hess p)(0)B^T))=\frac12\st\left(\sum_{i=1}^nv_i^T(\hess p)(0)v_i\right) \end{multline*} for all $p\in I_0^2$. In particular, we obtain $1=\ph(u_0)=\sum_{i=1}^nv_i^Tv_i$. \end{proof} \begin{lem}\label{stpointev} Let $\Ph\colon\O[\x]\to\R$ be a ring homomorphism. Then there is some $x\in\R^n$ such that $\Ph(p)=\st(p(x))$ for all $p\in\O[\x]$. \end{lem} \begin{proof} By \ref{archemb}, we have $\Ph|_\R=\id_\R$. It is easy to see that $\Ph|_\m=0$. Indeed, for each $N\in\N$ and $a\in\m$, we have $\frac1N\pm a\in R_{\ge0}\cap\O=\O^2$ and therefore $\frac1N\pm\Ph(a)\in\R_{\ge0}$. Finally set \[x:=(\Ph(X_1),\dots,\Ph(X_n))\in\R^n\] and use that $\Ph|_\R=\id_\R$, $\Ph|_\m=0$ and that $\Ph$ is a ring homomorphism. \end{proof} \begin{thm}{}\emph{[$\to$ \ref{dichotomy}]}\label{dicho} Let $M$ be an Archimedean quadratic module of $\O[\x]$ and set \[S:=\{x\in\R^n\mid\forall p\in M:\st(p(x))\ge0\}.\] Moreover, suppose $k\in\N_0$ and let $x_1,\ldots,x_k\in\O^n$ satisfy $\st(x_i)\ne\st(x_j)$ for $i,j\in\{1,\dots,k\}$ with $i\ne j$. Then $u:=u_{x_1}\dotsm u_{x_k}$ is a unit for the cone $M\cap I$ in the real vector space \[I:=I_{x_1}^2\dotsm I_{x_k}^2=I_{x_1}^2\cap\ldots\cap I_{x_k}^2\] and for all pure states $\ph$ of $(I,M\cap I,u)$ exactly one of the following cases occurs: \begin{enumerate}[\normalfont(1)] \item There is an $x\in S\setminus\{\st(x_1),\dots,\st(x_k)\}$ such that \[\ph(p)=\st\left(\frac{p(x)}{u(x)}\right)\] for all $p\in I$. 
\item There is an $i\in\{1,\dots,k\}$ and $v_1,\dots,v_n\in\R^n$ such that $\sum_{\ell=1}^nv_\ell^Tv_\ell=1$ and \[\ph(p)=\st\left(\frac{\sum_{\ell=1}^nv_\ell^T(\hess p)(x_i)v_\ell}{2\prod_{\substack{j=1\\j\ne i}}^ku_{x_j}(x_i)}\right)\] for all $p\in I$. \end{enumerate} \end{thm} \begin{proof} The Chinese remainder theorem from commutative algebra shows that \[I=I_{x_1}^2\dotsm I_{x_k}^2=I_{x_1}^2\cap\ldots\cap I_{x_k}^2\] since $I_{x_i}$ and $I_{x_j}$ and thus also $I_{x_i}^2$ and $I_{x_j}^2$ are coprime for all $i,j\in\{1,\dots,k\}$ with $i\ne j$. By \ref{ixunit}, $u_{x_i}$ is a unit for $M\cap I_{x_i}^2$ in $I_{x_i}^2$ for each $i\in\{1,\dots,k\}$. To show that $u$ is a unit for the cone $M\cap I$ in the real vector space $I$, it suffices to find for all $a_1,b_1\in I_{x_1},\dots,a_k,b_k\in I_{x_k}$ an $N\in\N$ such that $Nu+ab\in M$ where we set $a:=a_1\dotsm a_k$ and $b:=b_1\dotsm b_k$. Because of $Nu+ab=(Nu-\frac12a^2-\frac12b^2)+\frac12(a+b)^2$, it is enough to find $N\in\N$ with $Nu-a^2\in M$ and $Nu-b^2\in M$. By symmetry, it suffices to find $N\in\N$ with $Nu-a^2\in M$. Choose $N_i\in\N$ with $N_iu_{x_i}-a_i^2\in M$ for $i\in\{1,\ldots,k\}$. We now claim that $N:=N_1\dotsm N_k$ does the job. Indeed, the reader can easily show by induction that in fact \[N_1\dotsm N_iu_{x_1}\dotsm u_{x_i}-a_1^2\dotsm a_i^2\in M\] for $i\in\{1,\dots,k\}$. Now let $\ph$ be a pure state of $(I,M\cap I,u)$. Denote by $\Ph\colon\O[\x]\to\R$ the ring homomorphism belonging to $\ph$, i.e., \[(*)\qquad\ph(pq)=\Ph(p)\ph(q)\] for all $p\in\O[\x]$ and $q\in I$ [$\to$ \ref{assringhom}, \ref{dichotomy}]. By Lemma \ref{stpointev}, we can choose $x\in\R^n$ such that \[\Ph(p)=\st(p(x))\] for all $p\in\O[\x]$. Since $u\in I\cap\sum\O[\x]^2$, we have \[\st(p(x))=\Ph(p)=\Ph(p)\ph(u)\overset{(*)}=\ph(pu)=\ph(up)\overset{up\in M}\in\ph(M)\subseteq\R_{\ge0}\] for all $p\in M$, which means $x\in S$. \smallskip Now first suppose that Case (1) in the Dichotomy \ref{dichotomy} occurs.
We show that $x$ satisfies (1). Note that $\Ph(u)\ne0$ by \ref{dichotomy}. This means $\st(u_{x_i}(x))\ne0$ and therefore $\st(x)\ne\st(x_i)$ for all $i\in\{1,\dots,k\}$. The rest follows from \ref{dichotomy}. \smallskip Now suppose that Case (2) in the Dichotomy \ref{dichotomy} occurs. We show that then (2) holds. First note that $\prod_{i=1}^k\Ph(u_{x_i})= \Ph(u)=0$ because $u\in I$ and $\Ph|_I=0$. Choose $i\in\{1,\dots,k\}$ such that $\st(u_{x_i}(x))=\Ph(u_{x_i})=0$. Then $x=\st(x_i)$. Define \[\ps\colon I_{x_i}^2\to\R,\ p\mapsto\ph\left(p\prod_{\substack{j=1\\j\ne i}}^ku_{x_j}\right). \] Since $u_{x_j}\in\sum\O[\x]^2\cap I_{x_j}^2$ for all $j\in\{1,\dots,k\}$, it follows that $\ps\in S(I_{x_i}^2,M\cap I_{x_i}^2,u_{x_i})$. If $p\in I_{x_i}$ and $q\in I_{x_i}^2$, then \[\ps(pq)=\ph\left(pq\prod_{\substack{j=1\\j\ne i}}^ku_{x_j}\right) \overset{(*)}= \Ph(p)\ph\left(q\prod_{\substack{j=1\\j\ne i}}^ku_{x_j}\right)=0\] since $\Ph(p)=\st(p(x))=(\st(p))(x)=(\st(p))(\st(x_i))=\st(p(x_i))=\st(0)=0$. It follows that $\ps|_{I_{x_i}^3}=0$. We can thus apply Lemma \ref{secondtypelemma} to $\ps$ and obtain $v_1,\dots,v_n\in\R^n$ such that $\sum_{\ell=1}^nv_\ell^Tv_\ell=1$ and \[\ps(p)=\frac12\st\left(\sum_{\ell=1}^nv_\ell^T(\hess p)(x_i)v_\ell\right)\] for all $p\in I_{x_i}^2$. Because of $\st(x_i)\ne\st(x_j)$ for $j\in\{1,\dots,k\}\setminus\{i\}$, we have \[c:=\Ph\left(\prod_{\substack{j=1\\j\ne i}}^k u_{x_j}\right)=\prod_{\substack{j=1\\j\ne i}}^k\Ph(u_{x_j})= \prod_{\substack{j=1\\j\ne i}}^k(\st(u_{x_j}))(\st(x_i))\ne0.\] Hence we obtain \[c\ph(p)\overset{(*)}=\ps(p)\] for all $p\in I$. Since $c\ne0$, this yields the formula in (2). \smallskip It only remains to show that (1) and (2) cannot both occur at the same time. If (1) holds, then obviously $\ph(u^2)\ne0$. If (2) holds, then $\ph(u^2)=0$ since $\hess(u^2)(x_i)=0$ for all $i\in\{1,\dots,k\}$ as one easily shows.
\end{proof} \begin{lem}\label{membershipix} For all $x\in\O^n$, we have \[I_x^2=\left\{p\in\O[\x]\mid p(x)=0,\nabla p(x)=0\right\}.\] \end{lem} \begin{proof} For $x=0$ it is easy. One reduces the general case to the case $x=0$ as in the proof of \ref{ixunit}. \end{proof} \begin{thm}\label{mainrep} Let $M$ be an Archimedean quadratic module of $\O[\x]$ and set \[S:=\{x\in\R^n\mid\forall p\in M:\st(p(x))\ge0\}.\] Moreover, suppose $k\in\N_0$ and let $x_1,\ldots,x_k\in\O^n$ have pairwise distinct standard parts. Let \[f\in\bigcap_{i=1}^kI_{x_i}^2\] such that \[\st(f(x))>0\] for all $x\in S\setminus\{\st(x_1),\ldots,\st(x_k)\}$ and \[\st(v^T(\hess f)(x_i)v)>0\] for all $i\in\{1,\dots,k\}$ and $v\in\R^n\setminus\{0\}$. Then $f\in M$. \end{thm} \begin{proof} Define $I$ and $u$ as in Theorem \ref{dicho} so that $f\in I$. We will apply \ref{conemembershipextr} to the real vector space $I$, the cone $M\cap I$ in $I$ and the unit $u$ for $M\cap I$. From Theorem \ref{dicho}, we see indeed easily that $\ph(f)>0$ for all $\ph\in\extr S(I,M\cap I,u)$. \end{proof} \begin{cor}\label{mainrep2} Let $M$ be an Archimedean quadratic module of $\O[\x]$ and set \[S:=\{x\in\R^n\mid\forall p\in M:\st(p(x))\ge0\}.\] Moreover, let $k\in\N_0$ and $x_1,\ldots,x_k\in\O^n$ such that their standard parts are pairwise distinct and lie in the interior of $S$. Let \[f\in\bigcap_{i=1}^kI_{x_i}^2.\] Set again $u:=u_{x_1}\dotsm u_{x_k}\in\O[\x]$. Suppose there is $\ep\in\R_{>0}$ such that \[f\ge\ep u\text{ on }S.\] Then $f\in M$. \end{cor} \begin{proof} By \ref{mainrep}, we have to show: \begin{enumerate}[(a)] \item $\forall x\in S\setminus\{\st(x_1),\ldots,\st(x_k)\}:\st(f(x))>0$ \item $\forall i\in\{1,\dots,k\}:\forall v\in\R^n\setminus\{0\}:\st(v^T(\hess f)(x_i)v)>0$ \end{enumerate} It is easy to show (a). To show (b), fix $i\in\{1,\ldots,k\}$. Because of $f-\ep u\ge0$ on $S$ and \[(f-\ep u)(x_i)=f(x_i)-\ep u(x_i)=0-0=0,\] $\st(x_i)$ is a local minimum of $\st(f-\ep u)\in\R[\x]$ on $\R^n$. 
From elementary analysis, we know therefore that $(\hess\st(f-\ep u))(\st(x_i))$ is psd. Because of $u_{x_i}(x_i)=0$ and $\nabla u_{x_i}(x_i)=0$, we get \[(\hess u)(x_i)= \left(\prod_{\substack{j=1\\j\ne i}}^ku_{x_j}(x_i)\right)(\hess u_{x_i})(x_i)= 2\left(\prod_{\substack{j=1\\j\ne i}}^ku_{x_j}(x_i)\right)I_n. \] Therefore \[\st(v^T(\hess f)(x_i)v)\ge\ep\st(v^T(\hess u)(x_i)v)=2\ep v^Tv\st\left(\prod_{\substack{j=1\\j\ne i}}^ku_{x_j}(x_i)\right)>0\] for all $v\in\R^n\setminus\{0\}$. \end{proof} \begin{cor}\label{mainrep3} Let $n,m\in\N_0$ and suppose $g_1,\ldots,g_m\in\R[\x]$ generate an Archimedean quadratic module in $\R[\x]$ \emph{[$\to$ \ref{archmodulechar}]}. Set \[S:=\{x\in R^n\mid g_1(x)\ge0,\ldots,g_m(x)\ge0\}.\] Moreover, let $k\in\N_0$ and $x_1,\ldots,x_k\in\O^n$ and $\ep\in\R_{>0}$ such that the sets $x_1+\ep B,\ldots,x_k+\ep B$ are pairwise disjoint and all contained in $S$ where \emph{[$\to$ \ref{normcont}]} \[B:=\{x\in R^n\mid\|x\|_2<1\}\subseteq\O^n.\] Set once more $u:=u_{x_1}\dotsm u_{x_k}\in\O[\x]$. Let $f\in\O[\x]$ such that $f\ge\ep u$ on $S$ and \[f(x_1)=\ldots=f(x_k)=0.\] Then $f$ lies in the quadratic module generated by $g_1,\ldots,g_m$ in $\O[\x]$. \end{cor} \begin{proof} The quadratic module $M$ generated by $g_1,\ldots,g_m$ in $\O[\x]$ is clearly also Archimedean [$\to$ 9.1.2]. Moreover, it is easy to see that $\{x\in\R^n\mid \forall p\in M:\st(p(x))\ge0\}=S\cap\R^n$. Hence $f\in M$ follows from \ref{mainrep2} once we show that \[\nabla f(x_1)=\ldots=\nabla f(x_k)=0.\] Choose $d\in\N_0$ with $f\in\R[\x]_d$. Since $f\ge\ep u\ge0$ on $S$ and thus $f\ge0$ on $x_i+\ep B$ for all $i\in\{1,\ldots,k\}$, it suffices to prove the following for $R'=R$: If $p\in R'[\x]_d$, $x\in R'^n$, $\de\in R'_{>0}$ such that $p\ge0$ on $x+\de B'$ where $B':=\{x\in R'^n\mid\|x\|_2<1\}$ and $p(x)=0$, then $\nabla p(x)=0$. 
To see this, we employ the Tarski principle [$\to$ \ref{tprinciple}]: The class of all $R'\in\mathcal R$ [$\to$ \ref{introsemialg}] such that this holds true for all $p\in R'[\x]_d$ is obviously a $0$-ary semialgebraic class by real quantifier elimination. By elementary analysis, $\R$ is an element of this class. We conclude thus by \ref{nothingorall}. \end{proof} \section{Degree bounds and quadratic modules} \begin{df}\label{dftrunc} Let $d,m\in\N_0$, $g_1,\dots,g_m\in\R[\x]$ and set $g_0:=1\in\R[\x]$. For $i\in\{0,\dots,m\}$, set $r_i:=\frac{d-\deg g_i}2$ if $g_i\ne0$ and $r_i:=-\infty$ if $g_i=0$. Then we denote by $M(g_1,\dots,g_m)$ the quadratic module generated by $g_1,\dots,g_m$ in $\R[\x]$. Moreover, we define the \emph{$d$-truncated quadratic module} $M_d(g_1,\dots,g_m)$ associated to $g_1,\dots,g_m$ by \[ M_d(g_1,\dots,g_m):= \left\{\sum_{i=0}^m\sum_jp_{ij}^2g_i\mid p_{ij}\in\R[\x]_{r_i}\right\}\subseteq M(g_1,\dots,g_m)\cap\R[\x]_d. \] \end{df} \begin{rem} Let $m\in\N_0$ and $g_1,\dots,g_m\in\R[\x]$. Set again $g_0:=1\in\R[\x]$. \begin{enumerate}[(a)] \item $M(g_1,\dots,g_m)=\bigcup_{d\in\N_0}M_d(g_1,\dots,g_m)$ \item For all $d\in\N_0$, \[M_d(g_1,\dots,g_m)=\sum_{i=0}^m\left(\left(\sum\R[\x]^2g_i\right)\cap\R[\x]_d\right)\] by \ref{soslongrem}(b). \item In general, the inclusion $M_d(g_1,\dots,g_m)\subseteq M(g_1,\dots,g_m)\cap\R[\x]_d$ is proper as \ref{nonexdegbounds} shows. In fact, the validity of Schmüdgen's and Putinar's Positivstellensätze \ref{schmuedgen} and \ref{putinar} strongly relies on this. \end{enumerate} \end{rem} \begin{thm}[Putinar's Positivstellensatz with zeros and degree bounds] \label{putinarzerosdegreebound} Let $n,m\in\N_0$ and $g_1,\ldots,g_m\in\R[\x]$ such that $M(g_1,\ldots,g_m)$ is Archimedean. Set \begin{align*} B&:=\{x\in\R^n\mid\|x\|<1\}\qquad\text{and}\\ S&:=\{x\in\R^n\mid g_1(x)\ge0,\ldots,g_m(x)\ge0\}. \end{align*} Moreover, let $k\in\N_0$, $N\in\N$ and $\ep\in\R_{>0}$. 
Then there exists \[d\in\N_0\] such that for all $f\in\R[\x]_N$ with all coefficients in $[-N,N]_\R$ and $\#\{x\in S\mid f(x)=0\}=k$, we have: Denoting by $x_1,\dots,x_k$ the distinct zeros of $f$ on $S$, if the sets $x_1+\ep B,\ldots,x_k+\ep B$ are pairwise disjoint and contained in $S$ and if we have $f\ge\ep u$ on $S$ where $u:=u_{x_1}\dotsm u_{x_k}\in\R[\x]$ \emph{[$\to$ \ref{ixunit}]}, then \[f\in M_d(g_1,\dots,g_m).\] \end{thm} \begin{proof}{}(cf. the proof of Theorem \ref{h17bound}) Set $\nu:=\dim\R[\x]_N$. For each $d\in\N_0$, the class $S_d$ of all pairs $(R,a)$ where $R$ is a real closed extension field of $\R$ and $a\in R^\nu$ such that the following holds is obviously a $\nu$-ary $\R$-semialgebraic class [$\to$ \ref{introsemialg}]: If $a\in[-N,N]_R^\nu$ and if $a$ is the vector of coefficients (in a certain fixed order) of a polynomial $f\in R[\x]_N$ with exactly $k$ zeros $x_1,\dots,x_k$ on $S':=\{x\in R^n\mid g_1(x)\ge0,\dots,g_m(x)\ge0\}$, then at least one of the following conditions (a), (b) and (c) is fulfilled: \begin{enumerate}[(a)] \item The sets $x_1+\ep B',\ldots,x_k+\ep B'$ are not pairwise disjoint or not all contained in $S'$ where $B':=\{x\in R^n\mid\|x\|_2<1\}$. \item $f\ge\ep u$ on $S'$ is violated where $u:=u_{x_1}\dotsm u_{x_k}\in R[\x]$. \item $f$ is a sum of $d$ elements from $R[\x]$ where each term in the sum is of degree at most $d$ and is of the form $p^2g_i$ with $p\in R[\x]$ and $i\in\{0,\dots,m\}$ where $g_0:=1\in R[\x]$. \end{enumerate} Set $\mathcal E:=\{S_d\mid d\in\N_0\}$ and observe that $\forall d_1,d_2\in\N_0:\exists d_3\in\N_0:S_{d_1}\cup S_{d_2}\subseteq S_{d_3}$ (take $d_3:=\max\{d_1,d_2\}$). By \ref{mainrep3}, we have $\bigcup\mathcal E=\mathcal R_\nu$. Now \ref{finitenesscor} yields $S_d=\mathcal R_\nu$ for some $d\in\N_0$. 
\end{proof} \begin{cor}[Putinar's Positivstellensatz with degree bounds \cite{pre,ns,kri'}]\emph{[$\to$~\ref{putinar}]} Let $n,m\in\N_0$ and $g_1,\ldots,g_m\in\R[\x]$ such that $M(g_1,\ldots,g_m)$ is Archimedean. Set \[S:=\{x\in\R^n\mid g_1(x)\ge0,\ldots,g_m(x)\ge0\}.\] Moreover, let $N\in\N$ and $\ep\in\R_{>0}$. Then there exists \[d\in\N_0\] such that for all $f\in\R[\x]_N$ with all coefficients in $[-N,N]_\R$ and with $f\ge\ep$ on $S$, we have \[f\in M_d(g_1,\dots,g_m).\] \end{cor} \begin{pro}\label{squeeze} Suppose $S\subseteq\R^n$ is compact, $x_1,\dots,x_k\in S^\circ$ are pairwise distinct, $u:=u_{x_1}\dotsm u_{x_k}\in\R[\x]$ \emph{[$\to$ \ref{ixunit}]} and $f\in\R[\x]$ with $f(x_1)=\ldots=f(x_k)=0$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $f>0$ on $S\setminus\{x_1,\ldots,x_k\}$ and $\hess f(x_1),\dots,\hess f(x_k)$ are pd. \item There is some $\ep\in\R_{>0}$ such that $f\ge\ep u$ on $S$. \end{enumerate} \end{pro} \begin{proof} \underline{(b)$\implies$(a)} is easy to show (cf. the proof of \ref{mainrep2}). \smallskip \underline{(a)$\implies$(b)} \quad It is easy to show that one can WLOG assume that $S=\bigdotcup_{i=1}^k(x_i+\ep B)$ for some $\ep>0$ where $B$ is the closed unit ball in $\R^n$. Then one finds easily an Archimedean quadratic module $M$ of $\R[\x]$ such that \[S=\{x\in\R^n\mid\forall p\in M:p(x)\ge0\}.\] A strengthened version of Theorem \ref{mainrep} now yields $f-\ep u\in M$ for some $\ep\in\R_{>0}$ and thus $f-\ep u\ge0$ on $S$. One gets this strengthened version of Theorem \ref{mainrep} by applying (a)$\implies$(c) from \ref{conemembershipunitextreme} instead of \ref{conemembershipextr} in its proof. Alternatively, we leave it as an exercise to the reader to give a direct proof using only basic multivariate analysis. \end{proof} \begin{cor}[Putinar's Positivstellensatz with zeros \cite{s1}]\label{putinarzeros} Let $g_1,\ldots,g_m\in\R[\x]$ such that $M(g_1,\ldots,g_m)$ is Archimedean. 
Set \[S:=\{x\in\R^n\mid g_1(x)\ge0,\ldots,g_m(x)\ge0\}.\] Moreover, suppose $k\in\N_0$ and $x_1,\ldots,x_k\in S^\circ$ are pairwise distinct. Let $f\in\R[\x]$ such that $f(x_1)=\ldots=f(x_k)=0$, $f>0$ on $S\setminus\{x_1,\ldots,x_k\}$ and $\hess f(x_1),\dots,\hess f(x_k)$ are pd. Then \[f\in M(g_1,\ldots,g_m).\] \end{cor} \begin{proof} This follows from \ref{putinarzerosdegreebound} by Proposition \ref{squeeze}. \end{proof} \begin{rem} Because of Proposition \ref{squeeze}, Theorem \ref{putinarzerosdegreebound} is really a quantitative version of Corollary \ref{putinarzeros}. \end{rem} \begin{rem}\label{commentonbounds} \begin{enumerate}[(a)] \item In Condition (c) from the proof of Theorem \ref{putinarzerosdegreebound}, we speak of ``a sum of $d$ elements'' instead of ``a sum of elements'' (which would in general be strictly weaker). Our motivation to do this was that this is the easiest way to make sure that we can formulate (c) in a ``semialgebraic way''. A second motivation could have been to formulate Theorem \ref{putinarzerosdegreebound} in a stronger way, namely by letting $d$ be a bound not only on the degree of the quadratic module representation but also on the number of terms in it. However, this second motivation is not compelling because the Gram matrix method \ref{gram} also yields a bound on this number of terms (a priori bigger than $d$, but after readjusting $d$ we can again assume it to be $d$). We could have used the Gram matrix method already to see that ``a sum of elements'' (instead of ``a sum of $d$ elements'') can also be expressed semialgebraically. \item We could strengthen condition (c) from the proof of Theorem \ref{putinarzerosdegreebound} by writing ``with $p\in R[\x]$ all of whose coefficients lie in $[-d,d]_R$'' instead of just ``with $p\in R[\x]$''.
Then $\bigcup\mathcal E=\mathcal R_\nu$ would still hold since Corollary \ref{mainrep3} states that $f$ lies in the quadratic module generated by $g_1,\ldots,g_m$ even in $\O[\x]$, not just in $\R[\x]$. This would lead to a genuine strengthening of Theorem \ref{putinarzerosdegreebound}, namely we could ensure that $d$ is a bound not only on the degree of the quadratic module representation but also on the size of the coefficients in it. However, we currently do not know of any application of this and have therefore refrained from carrying it out. \end{enumerate} \end{rem} \chapter{Linearizing systems of polynomial inequalities} \section{The Lasserre hierarchy for a system of polynomial inequalities} By a \emph{system of polynomial inequalities}, we understand a (finite) system of (real non-strict) polynomial inequalities in several variables, i.e., a condition of the form \[g_1(x)\ge0,\ldots,g_m(x)\ge0\qquad(x\in\R^n)\] where $m,n\in\N_0$ and $\g=(g_1,\ldots,g_m)\in\R[\x]^m$. The solution sets of such systems are exactly the basic closed semialgebraic subsets of $\R^n$ [$\to$ \ref{dfbasicopenclosed}]. We now introduce notation for these. \begin{df} For $\g=(g_1,\ldots,g_m)\in\R[\x]^m$, we consider the basic closed semialgebraic set [$\to$ \ref{dfbasicopenclosed}] \[S(\g):=\{x\in\R^n\mid g_1(x)\ge0,\ldots,g_m(x)\ge0\}.\] \end{df} \noindent One can easily imagine that many ``horribly complicated'' computational problems can readily be translated into the problem of solving systems of polynomial inequalities. It is outside of the scope of these lecture notes to make this statement more formal. Consequently, it would be no surprise if it were in general very hard to deal with polynomial inequalities algorithmically.
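For concreteness, checking whether a given point lies in $S(\g)$ amounts to finitely many polynomial sign tests. The following minimal sketch (the helper name `in_S` and the sample system are our own illustration; the system cuts out a triangle like the one in the examples of the previous chapter) makes this explicit:

```python
# Membership test for a basic closed semialgebraic set
# S(g) = {x in R^n : g_1(x) >= 0, ..., g_m(x) >= 0}.

def in_S(g, x):
    """Return True iff every polynomial g_i is nonnegative at the point x."""
    return all(gi(x) >= 0 for gi in g)

# Sample system: the triangle with vertices (0,0), (1,0), (0,1),
# i.e. S(g) for g = (x1, x2, 1 - x1 - x2).
g = (
    lambda x: x[0],             # x1 >= 0
    lambda x: x[1],             # x2 >= 0
    lambda x: 1 - x[0] - x[1],  # 1 - x1 - x2 >= 0
)

print(in_S(g, (0.2, 0.3)))  # True: inside the triangle
print(in_S(g, (1.0, 1.0)))  # False: outside
```

The hard part is of course not evaluating the inequalities at a given point but deciding solvability, which is what the rest of this chapter is about.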
The proof of real quantifier elimination [$\to$ Theorem \ref{elim}, §\ref{sec:qe}] (by means of which one can for example decide whether a given system of polynomial inequalities is solvable) can be read as an algorithm of horrible complexity which is basically useless for practical purposes. We try to identify cases where efficient algorithms for dealing with systems of polynomial inequalities might exist. Our approach will be to relate systems of polynomial inequalities to \emph{linear matrix inequalities}. Before we introduce the latter, we need more notation. \begin{df}{}[$\to$ \ref{psdpd}(b)] Let $A\in S\R^{k\times k}$. We write $A\malal\succeq\succ0$ to express that $A$ is \alal{psd}{pd}, i.e., $A$ is symmetric and $x^TAx\malal\ge>0$ for all $x\in\malal{\R^k}{\R^k\setminus\{0\}}$. If $B\in\R^{k\times k}$ is another matrix, we write $A\malal\succeq\succ B$ or $B\malal\preceq\prec A$ to express that $A-B\malal\succeq\succ0$. We say that $A$ is \emph{\alal{negative semidefinite (nsd)}{negative definite (nd)}} if $A\malal\preceq\prec0$. \end{df} \noindent By a \emph{linear matrix inequality (LMI)}, we understand a condition of the form \[A_0+x_1A_1+\ldots+x_nA_n\succeq0\qquad(x\in\R^n)\] where $n\in\N_0$, $k\in\N_0$ and $A_0,\ldots,A_n\in S\R^{k\times k}$. An equivalent way of writing this is to say that it is of the form \[\begin{pmatrix}\ell_{11}(x)&\ldots&\ell_{1k}(x)\\\vdots&&\vdots\\\ell_{k1}(x)&\ldots&\ell_{kk}(x)\end{pmatrix}\succeq0\qquad(x\in\R^n) \] where each $\ell_{ij}\in\R[\x]_1$ is a linear polynomial [$\to$ \ref{quintic}] and $\ell_{ij}=\ell_{ji}$ for all $i,j\in\{1,\ldots,k\}$. We speak of a \emph{diagonal} LMI if, in the above, each $A_i$ is diagonal or, equivalently, $\ell_{ij}=0$ for all $i\ne j$. A diagonal LMI obviously just corresponds to a finite system of (non-strict) linear inequalities. Solution sets of LMIs are called \emph{spectrahedra}. They generalize the solution sets of diagonal LMIs, which are called \emph{polyhedra}.
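Deciding whether a point lies in a spectrahedron is numerically a single eigenvalue computation. Here is a minimal sketch (the function name and tolerance are our own choices; the pencil is the first LMI of the example that follows, whose spectrahedron is the closed unit disk):

```python
import numpy as np

def in_spectrahedron(A, x, tol=1e-9):
    """Check that A[0] + x[0]*A[1] + ... + x[n-1]*A[n] is psd, up to a numerical tolerance."""
    pencil = A[0] + sum(xi * Ai for xi, Ai in zip(x, A[1:]))
    return bool(np.linalg.eigvalsh(pencil).min() >= -tol)

# The pencil of ((1+x1, x2), (x2, 1-x1)): its spectrahedron is the closed unit disk.
A = [np.eye(2),
     np.array([[1.0, 0.0], [0.0, -1.0]]),  # coefficient matrix of x1
     np.array([[0.0, 1.0], [1.0, 0.0]])]   # coefficient matrix of x2

print(in_spectrahedron(A, (0.6, 0.8)))  # True: on the boundary of the disk
print(in_spectrahedron(A, (1.0, 1.0)))  # False: outside the disk
```

Optimizing a linear functional over such a set, rather than merely testing membership, is the semidefinite programming problem discussed below.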
\begin{ex} Consider the LMIs \[\begin{pmatrix}1+x_1&x_2\\x_2&1-x_1\end{pmatrix}\succeq0\qquad(x_1,x_2\in\R)\] and \[\begin{pmatrix}1&x_1&x_2\\x_1&1&0\\x_2&0&1\end{pmatrix}\succeq0\qquad(x_1,x_2\in\R).\] \noindent By \ref{psdeq}(f), we have that the first LMI is equivalent to \[1\ge0\et1+x_1\ge0\et1-x_1\ge0\et(1+x_1)(1-x_1)-x_2^2\ge0\qquad(x_1,x_2\in\R)\] and the second one to \[1\ge0\et1-x_1^2\ge0\et1-x_2^2\ge0\et1-x_1^2-x_2^2\ge0\qquad(x_1,x_2\in\R).\] Hence both are actually equivalent to \[x_1^2+x_2^2\le1\qquad(x_1,x_2\in\R)\] so that they define the closed unit disk in the plane. \end{ex} There are very efficient algorithms to deal with finite systems of linear inequalities (for example, to decide whether such a system is solvable). We already caught a glimpse of this in the algorithmic proof of \ref{fundthmlin}. Finite systems of linear inequalities correspond to diagonal LMIs. As a matter of fact that goes beyond our scope, there are still very efficient algorithms to deal with arbitrary LMIs. In fact, solving a system of linear inequalities in such a way that a given linear objective function is maximized or minimized is the problem of \emph{linear programming} (LP). LP is ubiquitous in science and engineering and there are extremely efficient LP solvers. The more general problem of solving an LMI in such a way that a given linear objective function is maximized or minimized is the problem of \emph{semidefinite programming} (SDP). SDP is still in its infancy but is becoming more and more appreciated within mathematics and its applications. For SDP there are still very efficient solvers although they cannot yet compete with LP solvers. \begin{ex}\label{x4y4} Consider the following toy system of polynomial inequalities: \[(*)\qquad\quad 1-x_1+x_2\ge0,\quad 1-x_1^4-x_2^4\ge0\qquad(x_1,x_2\in\R).\] The terms $x_1^4$ and $x_2^4$ prevent it from being a system of linear inequalities.
A first idea, which is in general way too naive, is to simply replace them by new unknowns $y_1$ and $y_2$ and consider \[(**)\qquad1-x_1+x_2\ge0,\quad1-y_1-y_2\ge0\qquad(x_1,x_2,y_1,y_2\in\R).\] Every solution of $(*)$ clearly gives rise to a solution of $(**)$ by setting $y_1:=x_1^4$ and $y_2:=x_2^4$. This implies that the projection \[\{(x_1,x_2)\in\R^2\mid\exists y_1,y_2:(x_1,x_2,y_1,y_2)\text { solves }(**)\}\] of the set of solutions of $(**)$ onto $x$-space contains the set of solutions of $(*)$. The converse, however, is obviously not true, and this comes as no surprise since too much information about $(*)$ has been lost. The idea is now to add a whole bunch of redundant inequalities to $(*)$ before replacing all nonlinear terms by new variables. For example, we can simply add for each $a,b,c,d,e,f\in\R$ the inequality \[(***)\qquad(a+bx_1+cx_2+dx_1^2+ex_1x_2+fx_2^2)^2\ge0.\] We could now expand these inequalities into $a^2+2abx_1+\ldots+f^2x_2^4\ge0$ and replace not only the terms $x_1^4$ and $x_2^4$ by $y_1$ and $y_2$ as before but also all other nonlinear terms by new variables $y_3$, $y_4$ and so on. This would lead to an infinite family of linear inequalities parametrized by $a,b,c,\ldots\in\R$. The hope is that inequalities that were redundant before the linearization could become valuable after the linearization. One of the problems is that one in general does not know how to deal algorithmically with \emph{infinite} systems of linear inequalities. Fortunately, there is a way of turning the infinite system parametrized by $a,b,c,\dots$ into one single LMI.
In order to get rid of the parameters, one first separates the parameters $a,b,c,\ldots$ from the monomial expressions $1,x_1,x_2,x_1^2,\ldots$ by the following trick: Rewrite $(***)$ as a (matrix) product of row and column vectors as follows: \[ \begin{pmatrix}a&b&c&d&e&f\end{pmatrix} \begin{pmatrix}1\\x_1\\x_2\\x_1^2\\x_1x_2\\x_2^2\end{pmatrix} \begin{pmatrix}1&x_1&x_2&x_1^2&x_1x_2&x_2^2\end{pmatrix} \begin{pmatrix}a\\b\\c\\d\\e\\f\end{pmatrix}\ge0. \] Multiplying the two interior vectors gives a symmetric matrix of monomials of a certain structure. More precisely, this transforms our condition into \[\begin{pmatrix}a&b&c&d&e&f\end{pmatrix} \begin{pmatrix}1&x_1&x_2&x_1^2&x_1x_2&x_2^2\\ x_1&x_1^2&x_1x_2&x_1^3&x_1^2x_2&x_1x_2^2\\ x_2&x_1x_2&x_2^2&x_1^2x_2&x_1x_2^2&x_2^3\\ x_1^2&x_1^3&x_1^2x_2&x_1^4&x_1^3x_2&x_1^2x_2^2\\ x_1x_2&x_1^2x_2&x_1x_2^2&x_1^3x_2&x_1^2x_2^2&x_1x_2^3\\ x_2^2&x_1x_2^2&x_2^3&x_1^2x_2^2&x_1x_2^3&x_2^4\\ \end{pmatrix} \begin{pmatrix}a\\b\\c\\d\\e\\f\end{pmatrix}\ge0. \] If we now linearize again, we again get an infinite family of linear inequalities, namely the same one that we would have obtained before by simply expanding and linearizing $(***)$: \[\begin{pmatrix}a&b&c&d&e&f\end{pmatrix} \begin{pmatrix}1&x_1&x_2&y_3&y_4&y_5\\ x_1&y_3&y_4&y_6&y_7&y_8\\ x_2&y_4&y_5&y_7&y_8&y_9\\ y_3&y_6&y_7&y_1&y_{11}&y_{12}\\ y_4&y_7&y_8&y_{11}&y_{12}&y_{10}\\ y_5&y_8&y_9&y_{12}&y_{10}&y_2\\ \end{pmatrix} \begin{pmatrix}a\\b\\c\\d\\e\\f\end{pmatrix}\ge0. \] The big advantage, however, is that we can now get rid of the parameters $a,b,c,\ldots$ by passing over to a linear matrix inequality. Namely, the above is by \ref{psdpd}(b) valid for all $a,b,c,\ldots\in\R$ if and only if the LMI \[\begin{pmatrix}1&x_1&x_2&y_3&y_4&y_5\\ x_1&y_3&y_4&y_6&y_7&y_8\\ x_2&y_4&y_5&y_7&y_8&y_9\\ y_3&y_6&y_7&y_1&y_{11}&y_{12}\\ y_4&y_7&y_8&y_{11}&y_{12}&y_{10}\\ y_5&y_8&y_9&y_{12}&y_{10}&y_2\\ \end{pmatrix}\succeq0\qquad(x_1,x_2,y_1,y_2,\ldots\in\R) \] holds.
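Before the linearization, the matrix in the middle is just the outer product $vv^T$ of the monomial vector $v$, so the displayed quadratic form is automatically nonnegative: $a^T(vv^T)a=(a^Tv)^2$. A small numeric Python check of this identity (illustration only; the sample point and coefficient vector are arbitrary):

```python
# Minimal sketch (illustration only): for a concrete point (x1, x2), the
# 6x6 matrix of monomials above is the outer product v v^T of the vector
# v = (1, x1, x2, x1^2, x1*x2, x2^2), so a^T (v v^T) a = (a^T v)^2 >= 0
# for every parameter vector a = (a, b, c, d, e, f).

def moment_matrix(x1, x2):
    v = [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]
    return v, [[vi * vj for vj in v] for vi in v]

def quad_form(a, M):
    n = len(a)
    return sum(a[i] * M[i][j] * a[j] for i in range(n) for j in range(n))

v, M = moment_matrix(0.5, -0.25)       # arbitrary sample point
a = [1.0, -2.0, 3.0, 0.5, -1.0, 2.0]   # arbitrary parameter vector
lhs = quad_form(a, M)
rhs = sum(ai * vi for ai, vi in zip(a, v)) ** 2
print(abs(lhs - rhs) < 1e-9)  # True: a^T (v v^T) a equals (a^T v)^2
```

After linearization the rank-one structure is lost, and positive semidefiniteness of the matrix in the $y_i$ becomes a genuine constraint, namely the LMI above.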
But there are other redundant inequalities that one can add before the linearization. For example, one could multiply the inequality $1-x_1+x_2\ge0$ from the original system $(*)$ by the square of a general polynomial of some degree. Because we do not want to introduce too many new variables $y_i$ in the end (and intuitively we think that each variable $y_i$ should ideally appear many times in the final LMIs so that there is not too much freedom), we decide this time to multiply just by a general linear polynomial. So we add for each $a,b,c\in\R$ the inequality \[(***\,*)\qquad(a+bx_1+cx_2)^2(1-x_1+x_2)\ge0.\] We could again expand this and linearize (of course re-using some of the $y_i$ from before) to get another infinite family of linear inequalities. But we again apply our trick that will lead to an LMI, proceeding in the following equivalent way: Rewrite $(***\,*)$ as \[ \begin{pmatrix}a&b&c\end{pmatrix} (1-x_1+x_2) \begin{pmatrix}1\\x_1\\x_2\end{pmatrix} \begin{pmatrix}1&x_1&x_2\end{pmatrix} \begin{pmatrix}a\\b\\c\end{pmatrix}\ge0 \] where the second of the four (invisible) multiplication signs is a scalar multiplication and the others are matrix products of row and column vectors. Linearizing \begin{multline*} (1-x_1+x_2) \begin{pmatrix}1\\x_1\\x_2\end{pmatrix} \begin{pmatrix}1&x_1&x_2\end{pmatrix}=(1-x_1+x_2)\begin{pmatrix}1&x_1&x_2\\x_1&x_1^2&x_1x_2\\x_2&x_1x_2&x_2^2\end{pmatrix} \\ =\begin{pmatrix}1-x_1+x_2&x_1-x_1^2+x_1x_2&x_2-x_1x_2+x_2^2\\x_1-x_1^2+x_1x_2&x_1^2-x_1^3+x_1^2x_2&x_1x_2-x_1^2x_2+x_1x_2^2\\x_2-x_1x_2+x_2^2&x_1x_2-x_1^2x_2+x_1x_2^2&x_2^2-x_1x_2^2+x_2^3\end{pmatrix} \end{multline*} we obtain the LMI \[ \begin{pmatrix} -x_1+x_2+1&x_1-y_3+y_4&x_2-y_4+y_5\\ x_1-y_3+y_4&y_3-y_6+y_7&y_4-y_7+y_8\\ x_2-y_4+y_5&y_4-y_7+y_8&y_5-y_8+y_9 \end{pmatrix}\succeq0 \qquad (x_1,x_2,y_1,y_2,\ldots\in\R). \] Finally, we could also add redundant inequalities stemming from the inequality $1-x_1^4-x_2^4\ge0$.
So we could multiply it by the square of a general polynomial of some degree. Again, to economize on additional variables $y_i$, we decide to take degree zero here, which amounts to linearizing the inequality $1-x_1^4-x_2^4\ge0$ directly, without further ado. This leads to the linear inequality \[1-y_1-y_2\ge0\qquad(y_1,y_2\in\R)\] that we had already in $(**)$ and which can be seen as an LMI of size $1$. All in all, we now consider the following system of three LMIs of sizes $6$, $3$ and $1$ (which could easily be written as a single LMI of size $10$ by forming a block diagonal matrix) \[(\Box)\qquad \begin{aligned} \begin{pmatrix}1&x_1&x_2&y_3&y_4&y_5\\ x_1&y_3&y_4&y_6&y_7&y_8\\ x_2&y_4&y_5&y_7&y_8&y_9\\ y_3&y_6&y_7&y_1&y_{11}&y_{12}\\ y_4&y_7&y_8&y_{11}&y_{12}&y_{10}\\ y_5&y_8&y_9&y_{12}&y_{10}&y_2\\ \end{pmatrix}&\succeq0 \\ \begin{pmatrix} -x_1+x_2+1&x_1-y_3+y_4&x_2-y_4+y_5\\ x_1-y_3+y_4&y_3-y_6+y_7&y_4-y_7+y_8\\ x_2-y_4+y_5&y_4-y_7+y_8&y_5-y_8+y_9 \end{pmatrix}&\succeq0 \\ 1-y_1-y_2&\ge0. \end{aligned} \qquad(x_1,x_2,y_1,y_2,\ldots\in\R). \] It is clear that the projection \[\{(x_1,x_2)\in\R^2\mid\exists y_1,\ldots,y_{12}:(x_1,x_2,y_1,\ldots,y_{12})\text { solves }(\Box)\}\] of the set of solutions of $(\Box)$ onto $x$-space contains the set of solutions of the original system $(*)$. Each solution of $(*)$ can be made into a solution of $(\Box)$ by assigning to each $y_i$ the value of the nonlinear monomial expression that has been replaced by $y_i$. In this sense $(\Box)$ is a so-called \emph{relaxation} of $(*)$. Conversely, it is not clear to what extent a solution of $(\Box)$ might give rise to a solution of $(*)$. A first step in investigating this is to find a more schematic way of describing the solution set of $(\Box)$. This time we will hide the parameters $a,b,c,\ldots$ not in an LMI but in a truncated quadratic module [$\to$ \ref{dftrunc}].
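That every solution of $(*)$ lifts to a solution of $(\Box)$ can also be checked numerically: substituting the monomial values for the $y_i$ turns the two matrix blocks into $vv^T$ and $g_1(x)\,ww^T$, which are psd. A Python sketch (illustration only; the sample point and the crude randomized psd test are ad hoc):

```python
import random

# Minimal sketch (illustration only): lifting a solution (x1, x2) of (*)
# to the relaxation by substituting for each y_i the monomial it replaced
# (y1 = x1^4, y2 = x2^4, y3 = x1^2, y4 = x1*x2, y5 = x2^2, ...). The two
# matrix blocks then become v v^T and g1(x) * w w^T, hence psd.

def lift(x1, x2):
    v = [1.0, x1, x2, x1 ** 2, x1 * x2, x2 ** 2]  # monomials of degree <= 2
    w = [1.0, x1, x2]                             # monomials of degree <= 1
    g1 = 1 - x1 + x2
    g2 = 1 - x1 ** 4 - x2 ** 4
    M6 = [[vi * vj for vj in v] for vi in v]       # first block of the system
    M3 = [[g1 * wi * wj for wj in w] for wi in w]  # second block
    return M6, M3, g2                              # g2 is the slack 1 - y1 - y2

def is_psd(M, trials=200):
    """Crude randomized psd test; sufficient as a sanity check here."""
    random.seed(0)
    n = len(M)
    for _ in range(trials):
        a = [random.uniform(-1, 1) for _ in range(n)]
        if sum(a[i] * M[i][j] * a[j] for i in range(n) for j in range(n)) < -1e-9:
            return False
    return True

M6, M3, slack = lift(0.3, -0.2)  # (0.3, -0.2) solves the original system (*)
print(is_psd(M6), is_psd(M3), slack >= 0)  # True True True
```

The converse direction, going from an arbitrary solution of $(\Box)$ back to one of $(*)$, is exactly what the rest of this section investigates.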
Note that, before we came up with the idea of writing it as an LMI, we had simply produced certain infinite systems of linear inequalities in the $x_1,x_2,y_1,y_2,\ldots$. If, this time truly redundantly, we also take sums of these linear inequalities, we observe that we have effectively added the linearizations of the inequalities \[p(x)\ge0\qquad(x\in\R^2)\] for each $p\in M_4(g_1,g_2)\subseteq\R[X_1,X_2]_4$ where $g_1:=1-X_1+X_2\in\R[X_1,X_2]$ and $g_2:=1-X_1^4-X_2^4\in\R[X_1,X_2]$. An elegant way of expressing this is to replace the variables of the linearized system $x_1,x_2,y_1,y_2,\ldots$ by the unknown values \[\ph(X_1),\ph(X_2),\ph(X_1^4),\ph(X_2^4),\ldots\] of some linear function $\ph\colon\R[X_1,X_2]_4\to\R$ that satisfies $\ph(1)=1$. Knowing that a linear function $\ph\colon\R[X_1,X_2]_4\to\R$ can be uniquely determined by arbitrarily prescribing its values on the monomials, we easily see that the solution set of $(\Box)$ can be identified with the state space [$\to$ \ref{defstate}] \[S(\R[X_1,X_2]_4,M_4(g_1,g_2),1)\subseteq\R[X_1,X_2]^*.\] \end{ex} \begin{df}\label{lasserredef} Let $d\in\N$, $m\in\N_0$ and $\g\in\R[\x]^m$. Then we call \[L_d(\g):=S(\R[\x]_d,M_d(\g),1)\subseteq\R[\x]_d^*\] the \emph{degree $d$ state space} associated to $\g$ and \[S_d(\g):=\{(\ph(X_1),\ldots,\ph(X_n))\mid\ph\in L_d(\g)\}\subseteq\R^n\] the \emph{degree $d$ Lasserre relaxation} associated to $\g$. \end{df} \begin{rem}\label{hierarchy} Let $m\in\N_0$ and $\g\in\R[\x]^m$. \begin{enumerate}[(a)] \item For each $d\in\N$, $L_d(\g)$ is obviously a convex subset of the real vector space $\R[\x]_d^*$. \item For each $d\in\N$, $S_d(\g)$ is a convex subset of $\R^n$ due to (a). \item We have \[S(\g)\subseteq S_d(\g)\] for all $d\in\N$ since for each $x\in S(\g)$ the evaluation at $x$ \[\R[\x]_d\to\R,\ p\mapsto p(x)\] is an element $\ph\in L_d(\g)$ with $(\ph(X_1),\ldots,\ph(X_n))=x$. \item We even have \[\conv S(\g)\subseteq S_d(\g)\] for all $d\in\N$ by (b) and (c).
\item We have \[S_{d+1}(\g)\subseteq S_d(\g)\] for all $d\in\N$, for if we restrict a degree $d+1$ state associated to $\g$ to $\R[\x]_d$, we get a degree $d$ state associated to $\g$. \item If there is some $i\in\{1,\ldots,m\}$ such that $g_i=0$ or $\deg g_i>d$, then $M_d(\g)$ and therefore $L_d(\g)$ and $S_d(\g)$ do not change if we remove $g_i$ from $\g$ [$\to$ \ref{dftrunc}]. In this sense, $g_i$ is simply ignored by $M_d(\g)$, $L_d(\g)$ and $S_d(\g)$. Therefore one often considers only those $d\in\N$ that are greater than or equal to the degree of each $g_i$. \item Let $d\in\N$ such that $g_1,\ldots,g_m\in\R[\x]_d\setminus\{0\}$ and set $g_0:=1\in\R[\x]_d$. For $i\in\{0,\dots,m\}$, set $r_i:=\lfloor\frac{d-\deg g_i}2\rfloor$ [$\to$ \ref{dftrunc}]. Let $1,u_1,\ldots,u_N$ be the monomial basis of $\R[\x]_d$. Then $\{(\ph(u_1),\ldots,\ph(u_N))\mid\ph\in L_d(\g)\}$ can be defined by $m+1$ simultaneous LMIs of sizes $\dim(\R[\x]_{r_0}),\ldots, \dim(\R[\x]_{r_m})$ in $N$ variables. This can be seen analogously to Example \ref{x4y4}. \end{enumerate} \end{rem} \begin{lem}\label{modclo} Let $d,m\in\N_0$ and $\g\in\R[\x]^m$. Suppose $S_d(\g)\ne\emptyset$. Then \[\overline{M_d(\g)}=\{f\in\R[\x]_d\mid\forall\ph\in L_d(\g):\ph(f)\ge0\}.\] \end{lem} \begin{proof} The right hand side of the claimed equation equals \[\bigcap_{\ph\in L_d(\g)}\ph^{-1}(\R_{\ge0})\] and is therefore closed since every $\ph\in L_d(\g)$ is linear and hence continuous [$\to$~\ref{tacitly}]. To prove the inclusion from left to right, it is therefore enough to show that $M_d(\g)$ is contained in the right hand side, which is trivial. For the converse inclusion, we let $f\in\R[\x]_d\setminus\overline{M_d(\g)}$ and we want to find $\ph\in L_d(\g)$ with $\ph(f)<0$. By \ref{seplocconvs}, there is a linear function $\ph_0\colon\R[\x]_d\to\R$ with $\ph_0(g)<\ph_0(f)$ for all $g\in\overline{M_d(\g)}$. Because $M_d(\g)$ is a cone, it follows easily by a scaling argument that $\ph_0(M_d(\g))\subseteq\R_{\le0}$.
Since $0\in M_d(\g)$, we moreover have $0<\ph_0(f)$. Setting $\ph_1:=-\ph_0$, we have $\ph_1(M_d(\g))\subseteq\R_{\ge0}$ and $\ph_1(f)<0$. If $\ph_1(1)>0$, then we set $\ph:=\frac1{\ph_1(1)}\ph_1\in L_d(\g)$ and we are done. Suppose therefore $\ph_1(1)=0$. From $S_d(\g)\ne\emptyset$ it follows that $L_d(\g)\ne\emptyset$. Choose $\ps\in L_d(\g)$. Now $\ps+N\ph_1\in L_d(\g)$ for all $N\in\N$. Choose $N\in\N$ large enough such that $\ps(f)+N\ph_1(f)<0$ and set $\ph:=\ps+N\ph_1$. \end{proof} \begin{rem}\label{usedallover} The following trivial fact will be crucial and we will use it tacitly: For all $f\in\R[\x]_1$ and all linear $\ph\colon\R[\x]_1\to\R$ with $\ph(1)=1$, we have \[f(\ph(X_1),\ldots,\ph(X_n))=\ph(f).\] \end{rem} \begin{lem}\label{exact1} Let $d\in\N$, $m\in\N_0$ and $\g\in\R[\x]^m$ such that $S_d(\g)\ne\emptyset$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $S_d(\g)\subseteq\overline{\conv S(\g)}$ \item $\forall f\in\R[\x]_1:(f\ge0\text{ on }S(\g)\implies f\in\overline{M_d(\g)})$ \end{enumerate} \end{lem} \begin{proof} Suppose first that (a) holds. In order to show (b), let $f\in\R[\x]_1$ with $f\ge0$ on $S(\g)$. Since $f$ is linear, it follows that $f\ge0$ on $\conv S(\g)$ and by continuity even on its closure. Then (a) implies that in particular $f\ge0$ on $S_d(\g)$. But then $\ph(f)=f(\ph(X_1),\ldots,\ph(X_n))\ge0$ for all $\ph\in L_d(\g)$. By Lemma \ref{modclo}, this implies $f\in\overline{M_d(\g)}$ as desired. Conversely, suppose now that (b) holds and let $x\in\R^n\setminus\overline{\conv S(\g)}$. We show that $x\notin S_d(\g)$. By \ref{seplocconvs}, there are a linear form $f_0\in\R[\x]_1$ and some $r\in\R$ with $f_0(y)\le r<f_0(x)$ for all $y\in\overline{\conv S(\g)}$. Setting $f:=r-f_0\in\R[\x]_1$, it follows that $f(y)\ge0>f(x)$ for all $y\in\overline{\conv S(\g)}$. In particular $f\ge0$ on $S(\g)$ and therefore $f\in\overline{M_d(\g)}$ by (b).
By Lemma \ref{modclo}, we now have $f(\ph(X_1),\ldots,\ph(X_n))=\ph(f)\ge0$ for all $\ph\in L_d(\g)$. Hence $f\ge0$ on $S_d(\g)$. From $f(x)<0$ we now deduce that $x\notin S_d(\g)$ as desired. \end{proof} \begin{dfpro}\emph{[$\to$ \ref{dfconv}, \ref{defcone}]}\quad Let $V$ be a real vector space and $A\subseteq V$. The smallest cone containing $A$ is obviously \[\cone A:=\left\{\sum_{i=1}^m\la_ix_i\mid m\in\N_0,\la_i\in\R_{\ge0},x_i\in A\right\}.\] We call it the cone \emph{generated} by $A$ or the \emph{conic hull} of $A$. \end{dfpro} \begin{rem}\label{convcone} Let $V$ be a real vector space and $A\subseteq V$. Consider the subset $A\times\{1\}$ of the real vector space $V\times\R$. One easily sees that \[\conv A=\{x\mid(x,1)\in\cone(A\times\{1\})\}.\] \end{rem} \begin{lem}[Carathéodory]\label{cara} Let $n\in\N_0$, $V$ an $n$-dimensional real vector space and $A\subseteq V$. \begin{enumerate}[(a)] \item $\forall x\in\cone A:\exists B:(B\subseteq A\et~\#B\le n\et x\in\cone B)$ \item $\forall x\in\conv A:\exists B:(B\subseteq A\et~\#B\le n+1\et x\in\conv B)$ \end{enumerate} \end{lem} \begin{proof} To prove (a), we may suppose that $A$ generates $V$ as a vector space since otherwise we can pass over from $V$ to the subspace generated by $A$. Let $x\in\cone A$. Choose a finite set $E\subseteq A$ such that $E$ generates $V$ and $x\in\cone E$. By \ref{fundthmlin} there is a basis $B\subseteq E$ of $V$ such that $x\in\cone B$ or there is some $\ell\in V^*$ with $\ell(E)\subseteq\R_{\ge0}$ and $\ell(x)<0$. But the latter cannot occur since $\ell(E)\subseteq\R_{\ge0}$ would imply $\ell(x)\in\ell(\cone E)\subseteq\R_{\ge0}$. Clearly $\#B=n$ since $B$ is a basis. To prove (b), let $x\in\conv A$. By Remark \ref{convcone}, we have $(x,1)\in\cone(A\times\{1\})$. Since any subset of $A\times\{1\}$ is of the form $B\times\{1\}$ for some $B\subseteq A$, (a) yields that there is some $B\subseteq A$ with $\#B\le\dim(V\times\R)=n+1$ and $(x,1)\in\cone(B\times\{1\})$.
Again by Remark \ref{convcone}, we get $x\in\conv B$. \end{proof} \begin{lem}\label{convhullcompact} If $S\subseteq\R^n$ is compact, then $\conv S$ is compact. \end{lem} \begin{proof} Set $\De:=\{\la\in\R_{\ge0}^{n+1}\mid\la_1+\ldots+\la_{n+1}=1\}$. The set $\De\times S^{n+1}\subseteq\R^{n+1+n(n+1)}$ is bounded and closed and therefore compact. By Carathéodory \ref{cara}(b), we have that $\conv S$ is the image of the map \[\De\times S^{n+1}\to\R^n,\ (\la_1,\ldots,\la_{n+1},x_1,\ldots,x_{n+1})\mapsto\sum_{i=1}^{n+1}\la_ix_i.\] But since this map is continuous, its image is compact [$\to$ \ref{quasicompactimage}]. \end{proof} \begin{lem}\label{homogeneousclosed} Let $V$ and $W$ be finite-dimensional real vector spaces [$\to$ \ref{tacitly}] and let \[f\colon V\to W\] be continuous with $f^{-1}(\{0\})=\{0\}$. Suppose there exists $r\in\R_{>0}$ such that \[\forall x\in V:\forall\la\in\R_{\ge0}:f(\la x)=\la^rf(x).\] Then $f(V)$ is closed in $W$. \end{lem} \begin{proof} Make both $V$ and $W$ into normed vector spaces by fixing arbitrary norms. Let $(x_k)_{k\in\N}$ be a sequence in $V$ such that $\lim_{k\to\infty}f(x_k)=:y$ exists. We show that $y\in f(V)$. Since $f(0)=0$, we suppose WLOG $y\ne0$. Then WLOG $f(x_k)\ne0$ and in particular $x_k\ne0$ for all $k\in\N$. Due to $f^{-1}(\{0\})=\{0\}$, we have for the continuous function \[h\colon S:=\{x\in V\mid\|x\|=1\}\to\R,\ x\mapsto\|f(x)\|\] that $h>0$ on $S$. Because $S$ is compact, there is some $\ep\in\R_{>0}$ such that $h\ge\ep$ on $S$. Now \[\|f(x_k)\|=\|x_k\|^r\left\|f\left(\frac{x_k}{\|x_k\|}\right)\right\|\ge\|x_k\|^r\ep.\] The convergent sequence $(f(x_k))_{k\in\N}$ is of course bounded and thus $(x_k)_{k\in\N}$ is also bounded. By the Bolzano–Weierstrass theorem, we may WLOG suppose that $(x_k)_{k\in\N}$ is convergent with $x:=\lim_{k\to\infty}x_k\in V$.
Now \[y=\lim_{k\to\infty}f(x_k)\overset{f\text{ continuous}}=f\left(\lim_{k\to\infty}x_k\right)=f(x)\in f(V).\] \end{proof} \begin{lem}\label{truncclosed} Let $d,m\in\N_0$ and $\g\in\R[\x]^m$. Suppose that $S(\g)$ has nonempty interior. Then $M_d(\g)$ is closed. \end{lem} \begin{proof} Obviously $M_d(\g)$ does not change if we discard those $g_i$ which are either the zero polynomial or whose degree exceeds $d$ [$\to$ \ref{hierarchy}(f)]. Moreover, discarding some of the $g_i$ certainly does not take away any points from $S(\g)$. Therefore we have WLOG $0\le\deg(g_i)\le d$ for all $i\in\{1,\dots,m\}$. Set $g_0:=1\in\R[\x]$. For $i\in\{0,\dots,m\}$, set $r_i:=\left\lfloor\frac{d-\deg g_i}2\right\rfloor$ and $C_i:=\cone(\{p^2\mid p\in\R[\x]_{r_i}\})\subseteq\R[\x]_{2r_i}$. By \ref{dftrunc}, we easily get \[ M_d(\g)= \left\{\sum_{i=0}^ms_ig_i\mid s_0\in C_0,\ldots,s_m\in C_m\right\}\subseteq\R[\x]_d. \] Setting $k_i:=\dim(\R[\x]_{2r_i})$ for $i\in\{0,\dots,m\}$, we easily get from this by Carathéodory \ref{cara}(a) that \[ M_d(\g)= \left\{\sum_{i=0}^m\left(\sum_{j=1}^{k_i}p_{ij}^2\right)g_i~\middle|~p_{ij}\in\R[\x]_{r_i}\right\}\subseteq\R[\x]_d. \] In other words, $M_d(\g)$ is the image of the map \[ f\colon\prod_{i=0}^m\R[\x]_{r_i}^{k_i}\to\R[\x]_d,\ ((p_{ij})_{j\in\{1,\ldots,k_i\}})_{i\in\{0,\ldots,m\}}\mapsto \sum_{i=0}^m\sum_{j=1}^{k_i}p_{ij}^2g_i. \] This map is quadratically homogeneous, that is, $f(\la p)=\la^2f(p)$ for all $p$ in its domain and all $\la\in\R$. If we can prove that $f(p)=0$ implies $p=0$ for all $p$ in the domain of $f$, then we will be done by Lemma \ref{homogeneousclosed}. Consider the polynomial $h:=g_1\cdots g_m\ne0$. Since $S(\g)$ has nonempty interior, Lemma \ref{pol0} implies that $h$ does not vanish on the whole of $S(\g)$. Hence \[S':=\{x\in S(\g)\mid h(x)\ne0\}=\{x\in\R^n\mid g_1(x)>0,\ldots,g_m(x)>0\}\] is nonempty.
Now if \[ p:=((p_{ij})_{j\in\{1,\ldots,k_i\}})_{i\in\{0,\ldots,m\}}\in\prod_{i=0}^m\R[\x]_{r_i}^{k_i} \] such that $f(p)=0$, then each $p_{ij}$ vanishes on $S'$ so that $p=0$ as desired because $S'$ is open. \end{proof} \begin{thm}\label{exact2}\emph{[$\to$ \ref{exact1}]}\quad Let $d\in\N$, $m\in\N_0$ and $\g\in\R[\x]^m$ such that $S(\g)$ is compact with nonempty interior. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $S_d(\g)=\conv S(\g)$ \item $\forall f\in\R[\x]_1:(f\ge0\text{ on }S(\g)\implies f\in M_d(\g))$ \end{enumerate} \end{thm} \begin{proof} We have $\emptyset\ne S(\g)^\circ\subseteq S(\g)\subseteq S_d(\g)$ and therefore $S_d(\g)\ne\emptyset$. By the Lemmata \ref{convhullcompact} and \ref{truncclosed}, we have that $\conv S(\g)$ and $M_d(\g)$ are closed. Now the theorem follows from Lemma \ref{exact1}. \end{proof} \begin{cor}\label{exact3} Let $d\in\N$, $m\in\N_0$ and $\g\in\R[\x]^m$ such that $S(\g)$ is compact with nonempty interior. Suppose that we are given $F\subseteq\R[\x]_1$ such that \[\conv S(\g)=\{x\in\R^n\mid\forall f\in F:f(x)\ge0\}.\] Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $S_d(\g)=\conv S(\g)$ \item $F\subseteq M_d(\g)$ \end{enumerate} \end{cor} \begin{proof}(a)$\implies$(b) follows directly from the corresponding implication in Theorem \ref{exact2}. For the other implication, suppose that (b) holds and let $x\in S_d(\g)$. We claim that $x\in\conv S(\g)$. Choose $\ph\in L_d(\g)$ such that $x=(\ph(X_1),\ldots,\ph(X_n))$. By our hypothesis on $F$, it is enough to show that $f(x)\ge0$ for all $f\in F$. But if $f\in F$, then $f\in M_d(\g)$ by (b) and hence $\ph(f)\ge0$. Since $f$ is linear, we have $f(x)=f(\ph(X_1),\ldots,\ph(X_n))=\ph(f)\ge0$ as desired. 
\end{proof} \begin{ex}\label{x4y4rev} Taking up our Example \ref{x4y4}, we consider again the same system of polynomial inequalities \[(*)\qquad\quad 1-x_1+x_2\ge0,\quad 1-x_1^4-x_2^4\ge0\qquad(x_1,x_2\in\R).\] This has given rise to a system $(\Box)$ of three LMIs of sizes $6$, $3$ and $1$ in $14$ unknowns $x_1,x_2,y_1,\ldots,y_{12}$. We had asked the question whether \[\{(x_1,x_2)\in\R^2\mid\exists y_1,\ldots,y_{12}:(x_1,x_2,y_1,\ldots,y_{12})\text { solves }(\Box)\}\] equals the set of solutions of $(*)$. Translating this into our setting, we declare \[g_1:=1-X_1+X_2\in\R[X_1,X_2],\qquad g_2:=1-X_1^4-X_2^4\in\R[X_1,X_2]\] and $\g:=(g_1,g_2)$. Then $S(\g)$ is the set of solutions of $(*)$ and the degree four state space associated to $\g$ can be identified with the set of solutions of $(\Box)$. Moreover, the question just mentioned asks exactly whether $S(\g)=S_4(\g)$. We leave it as an exercise to the reader to show that $S(\g)$ is convex and compact with nonempty interior and that \begin{align*} F:=\{f\in\R[\x]_1\mid\exists x\in\R^2:(&f(x)=0=g_2(x)~\et\\ &\nabla f(x)=\nabla g_2(x))\}\cup\{g_1\}\subseteq\R[\x]_1 \end{align*} satisfies $S(\g)=\{x\in\R^2\mid\forall f\in F:f(x)\ge0\}$. By Corollary \ref{exact3}, the question whether $S(\g)=S_4(\g)$ is now equivalent to $F\subseteq M_4(\g)$. Clearly $g_1\in M_4(\g)$. Now suppose $f\in F\setminus\{g_1\}$. We will then show that $f\in M_4(g_2)\subseteq M_4(\g)$. By \ref{dh1888} and \ref{soslongrem}(b), it suffices to find some $\la\in\R_{\ge0}$ such that $f-\la g_2\ge0$ on $\R^2$. We claim that we can take $\la:=1$, i.e., $f-g_2\ge0$ on $\R^2$. For this purpose, write $f=a_1X_1+a_2X_2+b$ with $a_1,a_2,b\in\R$. Choose $x\in\R^2$ with $f(x)=0=g_2(x)$ and $\nabla f(x)=\nabla g_2(x)$. We have \begin{align*} &a_1x_1+a_2x_2+b=0=1-x_1^4-x_2^4\qquad\text{and}\\ &\begin{pmatrix}a_1\\a_2\end{pmatrix}=\nabla f(x)=\nabla g_2(x)=-4\begin{pmatrix}x_1^3\\x_2^3\end{pmatrix}.
\end{align*} It follows that $-4x_1^4-4x_2^4+b=0$ and hence $b=4(x_1^4+x_2^4)=4$. Therefore we have to show that \begin{align*} h&:=f-g_2=-4x_1^3X_1-4x_2^3X_2+4-1+X_1^4+X_2^4\\ &=X_1^4+X_2^4-4x_1^3X_1-4x_2^3X_2+3 \end{align*} is nonnegative on the whole of $\R^2$. Since the leading form $X_1^4+X_2^4$ of $h$ [$\to$ \ref{introhom}] is positive definite [$\to$ \ref{introhom}], it is easy to show that $h$ attains a minimum on $\R^2$. Choose $y\in\R^2$ such that $h(y)\le h(z)$ for all $z\in\R^2$. Then \[4\begin{pmatrix}y_1^3\\y_2^3\end{pmatrix}-4\begin{pmatrix}x_1^3\\x_2^3\end{pmatrix}=\nabla h(y)=0.\] Using that $\R\to\R,\ t\mapsto t^3$ is injective, we deduce $x=y$. Hence \[0=f(x)-g_2(x)=h(x)=h(y)\le h(z)\] for all $z\in\R^2$ as desired. \end{ex} \section{Strict quasiconcavity} \begin{df}\label{dfqc} Let $g\in\R[\x]$. If $x\in\R^n$, then we call $g$ \emph{strictly concave} at $x$ if $(\hess g)(x)\prec0$ and \emph{strictly quasiconcave} at $x$ if \[((\nabla g)(x))^Tv=0\implies v^T(\hess g)(x)v<0\] for all $v\in\R^n\setminus\{0\}$. If $S\subseteq\R^n$, we call $g$ \emph{strictly (quasi-)concave} on $S$ if $g$ is strictly (quasi-)concave at every point of $S$. \end{df} \begin{rem} Let $g\in\R[\x]$ and $x\in\R^n$ such that $\nabla g(x)=0$. \begin{enumerate}[(a)] \item $g$ is strictly quasiconcave at $x$ if and only if $\hess g(x)\prec0$. \item If $g$ is strictly quasiconcave at $x$ and $g(x)=0$, then there is a neighborhood $U$ of $x$ such that $U\cap S(g)=\{x\}$. \end{enumerate} \end{rem} \begin{lem}\label{prepqcc} Let $n\in\N$, $g\in\R[\x]$ and $x\in\R^n$ such that $g(x)=0$ and $\nabla g(x)\ne0$. Suppose $v_1,\dots,v_n$ form a basis of $\R^n$, $U$ is an open neighborhood of $0$ in $\R^{n-1}$, $\ph\colon U\to\R$ is smooth and satisfies $\ph(0)=0$ as well as \[(*)\qquad g(x+\xi_1v_1+\ldots+\xi_{n-1}v_{n-1}+\ph(\xi)v_n)=0\] for all $\xi=(\xi_1,\dots,\xi_{n-1})\in U$.
Then the following hold: \begin{enumerate}[(a)] \item $(\nabla g(x))^Tv_1=\ldots=(\nabla g(x))^Tv_{n-1}=0\iff \nabla\ph(0)=0$ \item If $\nabla\ph(0)=0$ and $(\nabla g(x))^Tv_n>0$, then \[\text{$g$ is strictly quasiconcave at $x$}\iff\hess\ph(0)\succ0. \] \end{enumerate} \end{lem} \begin{proof} Taking the derivative of $(*)$ with respect to $\xi_i$, we get \[(**)\qquad (\nabla g(x+\xi_1v_1+\ldots+\xi_{n-1}v_{n-1}+\ph(\xi)v_n))^T\left(v_i+\frac{\partial\ph(\xi)}{\partial\xi_i}v_n\right)=0\] for all $i\in\{1,\dots,n-1\}$ and $\xi\in U$. Setting here $\xi$ to $0$, we get \[(\nabla g(x))^T\left(v_i+\frac{\partial\ph(\xi)}{\partial\xi_i}\middle|_{\xi=0}v_n\right)=0\] for each $i\in\{1,\dots,n-1\}$. From this, (a) follows easily (for ``${\implies}$'' use that $(\nabla g(x))^Tv_n\ne0$ since $v_1,\dots,v_n$ is a basis). Taking the derivative of $(**)$ with respect to $\xi_j$, we get \begin{multline*} \left(v_j+\frac{\partial\ph(\xi)}{\partial\xi_j}v_n\right)^T(\hess g(x+\xi_1v_1+\ldots+\xi_{n-1}v_{n-1}+\ph(\xi)v_n))\left(v_i+\frac{\partial\ph(\xi)}{\partial\xi_i}v_n\right)\\ +(\nabla g(x+\xi_1v_1+\ldots+\xi_{n-1}v_{n-1}+\ph(\xi)v_n))^T \left(\frac{\partial^2\ph(\xi)}{\partial\xi_i\partial\xi_j}v_n\right)=0 \end{multline*} for all $i,j\in\{1,\dots,n-1\}$ and $\xi\in U$. To prove (b), suppose now that $\nabla\ph(0)=0$ and $(\nabla g(x))^Tv_n>0$. Then the preceding equation implies \[\hess\ph(0)=-\frac1{(\nabla g(x))^Tv_n}(v_i^T(\hess g(x))v_j)_{i,j\in\{1,\dots,n-1\}}.\] Since $v_1,\ldots,v_{n-1}$ now form a basis of the orthogonal complement of $\nabla g(x)$ by (a), the matrix $(v_i^T(\hess g(x))v_j)_{i,j\in\{1,\dots,n-1\}}$ is negative definite if and only if $g$ is strictly quasiconcave at $x$ (see Definition \ref{dfqc}). \end{proof} \noindent The following proposition is important for understanding the notion of quasiconcavity. 
It is trivial that strict quasiconcavity of a polynomial $g$ at $x$ depends only on $x$ and the function $V\to\R,\ y\mapsto g(y)$, where $V$ is an arbitrarily small neighborhood of $x$. But if $g(x)=0$ and $\nabla g(x)\ne0$, then it actually depends only on $x$ and the function \[V\to\{-1,0,1\},\ y\mapsto\sgn(g(y))\] as the equivalence of Conditions (a) and (b) of the following proposition shows. \begin{notation} For $g\in\R[\x]$, we call \[Z(g):=\{x\in\R^n\mid g(x)=0\}\] the \emph{real zero set} of $g$. \end{notation} \begin{pro}\label{qcc} Let $n\in\N$, $g\in\R[\x]$ and $x\in\R^n$ such that \[g(x)=0\text{ and }\nabla g(x)\ne0.\] Suppose that $V$ is a neighborhood of $x$. Then the following are equivalent: \begin{enumerate}[\normalfont(a)] \item $g$ is strictly quasiconcave at $x$. \item There are a basis $v_1,\dots,v_n$ of $\R^n$, an open neighborhood $U$ of $0$ in $\R^{n-1}$ and a smooth function $\ph\colon U\to\R$ such that $\ph(0)=0$, $\nabla\ph(0)=0$, $\hess\ph(0)\succ0$, \[(*)\qquad x+\xi_1v_1+\ldots+\xi_{n-1}v_{n-1}+\ph(\xi)v_n\in Z(g)\cap V\] for all $\xi\in U$ and \[(**)\qquad x+\la v_n\in S(g)\cap V\] for all small enough $\la\in\R_{>0}$. \item Condition (b) holds with ``basis'' replaced by ``orthogonal basis''. \end{enumerate} For any basis $v_1,\dots,v_n$ of $\R^n$ as in (b), one has \[(***)\qquad(\nabla g(x))^Tv_1=\ldots=(\nabla g(x))^Tv_{n-1}=0\qquad\text{and}\qquad(\nabla g(x))^Tv_n>0. \] \end{pro} \begin{proof} Using Lemma \ref{prepqcc}(a), it is easy to show that any $v_1,\dots,v_n$ as in (b) satisfy $(***)$; here one uses that $(\nabla g(x))^Tv_n=0$ would contradict the hypothesis $\nabla g(x)\ne0$ since $v_1,\dots,v_n$ is a basis. Now Part (b) of the same lemma shows that (b) implies (a). Since it is trivial that (c) implies (b), it only remains to show that (a) implies (c). To this end, let (a) be satisfied. In order to show (c), choose an orthogonal basis $v_1,\dots,v_n$ of $\R^n$ satisfying $(***)$.
The implicit function theorem yields an open neighborhood $U$ of the origin in $\R^{n-1}$ such that for each $\xi=(\xi_1,\dots,\xi_{n-1})\in U$ there is a unique $\ph(\xi)\in\R$ satisfying $(*)$, in particular $\ph(0)=0$. Moreover, one can choose $U$ such that the resulting function $\ph\colon U\to\R$ is smooth. From $(\nabla g(x))^Tv_n>0$, we get $(**)$. From Part (a) of Lemma \ref{prepqcc}, we get $\nabla\ph(0)=0$. From Part (b) of the same lemma and from (a), we obtain $\hess\ph(0)\succ0$. \end{proof} \begin{rem}\label{quasiconcavelift} For $g\in\R[\x]$ and $x\in\R^n$, the following are obviously equivalent: \begin{enumerate}[(a)] \item $g$ is strictly quasiconcave at $x$. \item $\exists\la\in\R:\la(\nabla g(x))(\nabla g(x))^T\succ(\hess g)(x)$ \end{enumerate} \end{rem} \begin{pro}\label{psdinterior} $\R^{n\times n}_{\succeq0}:=\{A\in\R^{n\times n}\mid A\succeq0\}$ is a cone in the vector space $S\R^{n\times n}$ whose interior \emph{[$\to$ \ref{tacitly}]} is $\R^{n\times n}_{\succ0}:=\{A\in\R^{n\times n}\mid A\succ0\}$. \end{pro} \begin{proof} Equip $S\R^{n\times n}$ with the norm defined by \[\|A\|:=\max_{\substack{x\in\R^n\\\|x\|\le1}}|x^TAx|\] for $A\in S\R^{n\times n}$. By \ref{topvsex}(c), this norm (as any other norm) induces the unique vector space topology on $S\R^{n\times n}$. If $A$ is an interior point of $\R^{n\times n}_{\succeq0}$, then there exists $\ep\in\R_{>0}$ such that $A-\ep I_n\succeq0$ and thus $A\succeq\ep I_n\succ0$. Conversely, let $A\in\R^{n\times n}$ satisfy $A\succ0$. We show that $A$ is an interior point of $\R^{n\times n}_{\succeq0}$. By \ref{psdeq}, the lowest eigenvalue $\ep$ of $A$ is nonnegative since $A\succeq0$. Actually, we have even $\ep>0$ since $A$ has trivial kernel due to $A\succ0$. Now $A-\ep I_n$ has only nonnegative eigenvalues and thus $A-\ep I_n\succeq0$ by \ref{psdeq}. It suffices to show that a ball around $A$ with radius $\ep$ in $S\R^{n\times n}$ is contained in $\R^{n\times n}_{\succeq0}$. 
For this purpose, let $B\in S\R^{n\times n}$ with $\|B-A\|\le\ep$ and fix $x\in\R^n$ with $\|x\|=1$. We have to show that $x^TBx\ge0$. But we have $x^TBx=x^TAx+x^T(B-A)x\ge x^TAx-\|B-A\|\ge\ep x^TI_nx-\ep=0$. \end{proof} \begin{lem}\label{strictly2neighbor} Let $g\in\R[\x]$. If $g$ is strictly (quasi-)concave at $x\in\R^n$, then there is a neighborhood $U$ of $x$ such that $g$ is strictly (quasi-)concave on $U$. \end{lem} \begin{proof} The first statement follows from the openness of $\R^{n\times n}_{\succ0}$ [$\to$ \ref{psdinterior}] by the continuity of $\R^n\to S\R^{n\times n},\ x\mapsto(\hess g)(x)$. The second statement follows similarly by using the equivalence of (a) and (b) in Remark \ref{quasiconcavelift}. \end{proof} \begin{rem}\label{derexo} Let $k\in\N_0$, $g\in\R[\x]$ and $x\in\R^n$ with $g(x)=0$. Then \begin{align*} (\nabla(g(1-g)^k))(x)&=(\nabla g)(x)\qquad\text{and}\\ (\hess(g(1-g)^k))(x)&=(\hess g-2k(\nabla g)(\nabla g)^T)(x). \end{align*} \end{rem} \begin{lem}\label{quasi2concave} Suppose $g\in\R[\x]$, $u\in\R^n$ and $g(u)=0$. Then the following are equivalent: \begin{enumerate}[(a)] \item $g$ is strictly quasiconcave at $u$. \item There exists $k\in\N$ such that $g(1-g)^k$ is strictly concave at $u$. \item There exists $k\in\N$ such that for all $\ell\in\N$ with $\ell\ge k$, we have that $g(1-g)^\ell$ is strictly concave at $u$. \end{enumerate} \end{lem} \begin{proof} Combine Remarks \ref{derexo} and \ref{quasiconcavelift}. \end{proof} \section{Lagrange multipliers from real closed fields} \begin{rem}\label{gradientpositive} If $u,x\in\R^n$ with $v:=x-u\ne0$, $g\in\R[\x]$, $g$ is strictly quasiconcave at $u$, $g(u)=0$ and $g\ge0$ on $\conv\{u,x\}$, then obviously $(\nabla g(u))^Tv>0$. \end{rem} \begin{lem}[Existence of real Lagrange multipliers]\label{lagrange} Suppose $u\in\R^n$, $f\in\R[\x]$, $m\in\N_0$, $\g\in\R[\x]^m$. Let $U$ be a neighborhood of $u$ in $\R^n$ such that $U\cap S(\g)$ is convex and not a singleton. 
Moreover, suppose $g_1,\ldots,g_m$ are strictly quasiconcave at $u$, $f\ge0$ on $U\cap S(\g)$ and \[f(u)=g_1(u)=\ldots=g_m(u)=0.\] Then there are $\la_1,\ldots,\la_m\in\R_{\ge0}$ such that \[\nabla f(u)=\sum_{i=1}^m\la_i\nabla g_i(u).\] \end{lem} \begin{proof} Choose $x\in U\cap S(\g)$ with $v:=x-u\ne0$. By Remark \ref{gradientpositive}, we have \[(\nabla g_i(u))^Tv>0\] for $i\in\{1,\ldots,m\}$ since $U\cap S(\g)$ is convex. Assume the required Lagrange multipliers do not exist. Then \ref{fundthmlin} yields $w\in\R^n$ such that $((\nabla f)(u))^Tw<0$ and $((\nabla g_i)(u))^Tw\ge0$ for all $i\in\{1,\ldots,m\}$. Replacing $w$ by $w+\ep v$ for some small $\ep>0$, we get even $((\nabla g_i)(u))^Tw>0$ for all $i\in\{1,\ldots,m\}$ while retaining $((\nabla f)(u))^Tw<0$. Then for all sufficiently small $\de\in\R_{>0}$, we have $u+\de w\in U\cap S(\g)$ but $f(u+\de w)<0$ $\lightning$. \end{proof} \begin{df}{}[$\to$ \ref{interiorclosure}]\label{dfboundary} Let $M$ be a topological space and $A\subseteq M$. We call \[\partial A:=\overline A\setminus A^\circ=\overline A\cap\overline{M\setminus A}\] the \emph{boundary} of $A$. \end{df} \begin{df}\label{nearconvexboundary} Let $S\subseteq\R^n$. We call \[\convbd S:=S\cap\partial\conv S\] the \emph{convex boundary} of $S$. Obviously, \[\convbd S=\{x\in S\mid\forall U\in\mathcal U_x:U\not\subseteq\conv S\}.\] We say that $S$ \emph{has nonempty interior near its convex boundary} if $\convbd S\subseteq\overline{S^\circ}$. \end{df} \begin{pro}\label{convbdchar} Let $S\subseteq\R^n$. Then \[\convbd S=\{u\in S\mid\exists\ph\in(\R^n)^*\setminus\{0\}:\forall x\in S:\ph(u)\le\ph(x)\}.\] \end{pro} \begin{proof} ``$\supseteq$'' Let $u\in S$ and $\ph\in(\R^n)^*\setminus\{0\}$ such that $\forall x\in S:\ph(u)\le\ph(x)$. Then even $\forall x\in\conv S:\ph(u)\le\ph(x)$. Choose $v\in\R^n$ such that $\ph(v)>0$. Then $\ph(u-\ep v)<\ph(u)$ and hence $u-\ep v\notin\conv S$ for each $\ep\in\R_{>0}$. It follows that no neighborhood of $u$ is contained in the convex hull of $S$.
Hence $u\in\convbd S$. \smallskip ``$\subseteq$'' If $\dim\conv S<n$ [$\to$ \ref{dfdimconv}], we have $\partial\conv S=\overline{\conv S}$ and hence $\convbd S=S$ and one easily finds $\ph\in(\R^n)^*\setminus\{0\}$ that is constant on $\conv S$. So now suppose that $\dim\conv S=n$. Let $u\in\convbd S$. By Theorem \ref{hasaface}, we get an exposed face $F$ of $\conv S$ with $\dim F<n$ and $u\in F$. Choose $\ph\colon\R^n\to\R$ linear such that \[F=\{y\in\conv S\mid\forall x\in\conv S:\ph(y)\le\ph(x)\}.\] Since $\dim F<n$, we have obviously $\ph\ne0$. \end{proof} \begin{lem}\label{prep1} Let $B\subseteq\R^n$ be a closed ball in $\R^n$, $m\in\N_0$ and $\g\in\R[\x]^m$. Suppose that $g_1,\ldots,g_m\in\R[\x]$ are strictly quasiconcave on $B$. Then the following hold: \begin{enumerate}[(a)] \item $S:=B\cap S(\g)$ is convex. \item Every linear form from $\R[\x]\setminus\{0\}$ [$\to$ \ref{longremi}(a)] has at most one minimizer on $S$. \item Let $u$ be a minimizer of the linear form $f\in\R[\x]\setminus\{0\}$ on $S$ and set \[I:=\{i\in\{1,\ldots,m\}\mid g_i(u)=0\}.\] Then $u$ is also a minimizer of $f$ on $S':=\{x\in B\mid\forall i\in I:g_i(x)\ge0\}\supseteq S$. \end{enumerate} \end{lem} \begin{proof} (a) Let $x,y\in S$ with $x\ne y$ and $i\in\{1,\ldots,m\}$. The polynomial \[f:=g_i(Tx+(1-T)y)\in\R[T]\] attains a minimum $a$ on $[0,1]_\R$ [$\to$ \ref{takeson}]. We have to show $a\ge0$. Because of $f(0)=g_i(y)\ge0$ and $f(1)=g_i(x)\ge0$, it is enough to show that this minimum is not attained in a point $t\in(0,1)_\R$. Assume it is. Then $f'(t)=0$, i.e., $((\nabla g_i)(z))^Tv=0$ for $z:=tx+(1-t)y$ and $v:=x-y\ne0$. Since $z\in B$ and hence $g_i$ is strictly quasiconcave at $z$, it follows that $v^T((\hess g_i)(z))v<0$, i.e., $f''(t)<0$. Then $f<a$ on a punctured neighborhood of $t$ [$\to$~\ref{sgnbounds}(b)] $\lightning$. \medskip (b) Suppose $x$ and $y$ are minimizers of the linear form $f\in\R[\x]\setminus\{0\}$ on $S$. Then $x,y\in\convbd S$ by \ref{convbdchar}.
Since $f$ is linear, it is constant on $\aff\{x,y\}$. Hence even \[\conv\{x,y\}\overset{\text{(a)}}\subseteq\aff\{x,y\}\cap S\overset{\ref{convbdchar}}\subseteq\convbd S\overset{\text{(a)}}=S\cap\partial S =S\cap(\overline S\setminus S^\circ)\overset{\text{$S$ closed}}=S\setminus S^\circ=\partial S.\] Since $\conv\{x,y\}\setminus\{x,y\}\subseteq B^\circ$, we have then that $\conv\{x,y\}\setminus\{x,y\}\subseteq Z(g_1\dotsm g_m)$. Assume now for a contradiction that $x\ne y$. Then at least one of the $g_i$ vanishes at infinitely many points of $\conv\{x,y\}$ and hence on all of $\aff\{x,y\}$. Fix a corresponding $i$. Setting $v:=y-x$, we have then $((\nabla g_i)(x))^Tv=0$ and $v^T((\hess g_i)(x))v=0$. Since $g_i$ is strictly quasiconcave at $x$, this implies $v=0$, i.e., $x=y$ $\lightning$. \medskip (c) By definition of $I$, the sets $S$ and $S'$ coincide on a neighborhood of $u$ in $\R^n$. Hence $u$ is a \emph{local} minimizer of $f$ on $S'$. Since $S'$ is convex by (a) and $f$ is linear, $u$ is also a (\emph{global}) minimizer of $f$ on $S'$. \end{proof} \begin{lem}\label{lagrange2} Suppose $B$ is a closed ball in $\R^n$, $m\in\N_0$ and $\g\in\R[\x]^m$. Suppose that $g_1,\ldots,g_m\in\R[\x]$ are strictly quasiconcave on $B$ and that $S:=B\cap S(\g)$ has nonempty interior. Then the following hold: \begin{enumerate}[(a)] \item For every real closed extension field $R$ of $\R$ and all linear forms $f\in R[\x]\setminus\{0\}$, $f$ has a unique minimizer on $\transfer_{\R,R}(S)$. \item For every real closed extension field $R$ of $\R$, all linear forms $f\in R[\x]$ with \[\|\nabla f\|_2=1\] [$\to$~\ref{normcont}] (note that $\nabla f\in R^n$ as $f$ is linear) and every $u\in\transfer_{\R,R}(B^\circ)$ which minimizes $f$ on $\transfer_{\R,R}(S)$, there are $\la_1,\ldots,\la_m\in\O_R\cap R_{\ge0}$ with \[\la_1+\ldots+\la_m\notin\m_R\] such that both $f-f(u)-\sum_{i=1}^m\la_ig_i$ and its gradient vanish at $u$.
\end{enumerate} \end{lem} \begin{proof} (a) Consider the class of all real closed extension fields $R$ of $\R$ such that all linear forms from $R[\x]\setminus\{0\}$ have a unique minimizer on $\transfer_{\R,R}(S)$. By real quantifier elimination [$\to$ \ref{elim}], this is easily seen to be a $0$-ary $\R$-semialgebraic class [$\to$ \ref{introsemialg}]. By \ref{nothingorall}, this class is either empty or consists of all real closed extension fields of $\R$. Hence it suffices to prove the statement in the case $R=\R$ [$\to$ \ref{tprinciple}]. But then the unicity part follows from Lemma \ref{prep1}(b) and the existence part from \ref{takeson}. \medskip (b) Now let $R$ be a real closed field extension of $\R$, $f\in R[\x]$ a linear form with $\|\nabla f\|_2=1$ and $u$ a minimizer of $f$ on $\transfer_{\R,R}(S\cap B^\circ)$. Set \[I:=\{i\in\{1,\ldots,m\}\mid g_i(u)=0\}\] and define the set \[S':=\{x\in B\mid\forall i\in I:g_i(x)\ge0\}\supseteq S\] which is convex by \ref{prep1}(a). Using the Tarski principle [$\to$ \ref{tprinciple}], one shows easily that $u$ is a minimizer of $f$ on $\transfer_{\R,R}(S')$ by Lemma \ref{prep1}(c). Note also that of course $u\in\O_R^n$ and $\st(u)\in S$.
Now Lemma \ref{lagrange} says in particular that for all linear forms $\widetilde f\in\R[\x]$ and minimizers $\widetilde u$ of $\widetilde f$ on $S'\cap B^\circ$ with $\forall i\in I:g_i(\widetilde u)=0$, there is a family $(\la_i)_{i\in I}$ in $\R_{\ge0}$ such that \[\nabla\widetilde f=\sum_{i\in I}\la_i\nabla g_i(\widetilde u).\] Using the Tarski principle [$\to$ \ref{tprinciple}], we see that actually for all real closed extension fields $\widetilde R$ of $\R$, all linear forms $\widetilde f\in\widetilde R[\x]$ and all minimizers $\widetilde u$ of $\widetilde f$ on $\transfer_{\R,\widetilde R}(S'\cap B^\circ)$ with $\forall i\in I:g_i(\widetilde u)=0$, there is a family $(\la_i)_{i\in I}$ in $\widetilde R_{\ge0}$ such that \[\nabla\widetilde f=\sum_{i\in I}\la_i\nabla g_i(\widetilde u).\] We apply this to $\widetilde R:=R$, $\widetilde u:=u$, $\widetilde f:=f$ and thus obtain a family $(\la_i)_{i\in I}$ in $R_{\ge0}$ such that \[(*)\qquad\nabla f=\sum_{i\in I}\la_i\nabla g_i(u).\] In order to show that $\la_i\in\O_R$ for all $i\in I$, we choose a point $x\in S'^\circ\ne\emptyset$ with $\prod_{i\in I}g_i(x)\ne0$ and thus $g_i(x)>0$ for all $i\in I$. Setting $v:=x-u\in\O_R^n$, we get from $(*)$ that $(\nabla f)^Tv=\sum_{i\in I}\la_i(\nabla g_i(u))^Tv$. Since $\st(u)\in S'$ and $S'$ is convex, Remark~\ref{gradientpositive} yields $\st((\nabla g_i(u))^Tv)=(\nabla g_i(\st(u)))^T\st(v)>0$ for all $i\in I$ (use that $\st(u)\ne x$ since $g_i(\st(u))=0$ while $g_i(x)>0$). Together with $\la_i\ge0$ for all $i\in I$, this shows $\la_i\in\O_R$ for all $i\in I$ as desired. It now suffices to show that $\sum_{i\in I}\la_i\notin\m_R$. But this is clear since $(*)$ yields in particular \[1=\|\nabla f\|_2\le\sum_{i\in I}\la_i\|(\nabla g_i)(u)\|_2\le\left(\sum_{i\in I}\la_i\right)\max_{i\in I}\|(\nabla g_i)(u)\|_2\] (note that $I\ne\emptyset$ by the first inequality) which readily implies $\sum_{i\in I}\la_i\notin\m_R$.
\end{proof} \begin{lem}\label{finitecontact} Let $m\in\N_0$ and $\g\in\R[\x]^m$ such that $S:=S(\g)$ is compact and has nonempty interior near its convex boundary. Suppose that $g_i$ is strictly quasiconcave on \[(\convbd S)\cap Z(g_i)\] for each $i\in\{1,\dots,m\}$. Let $R$ be a real closed extension field of $\R$ and let $f\in R[\x]$ be a linear form with $\|\nabla f\|_2=1$. Then the following hold: \begin{enumerate}[(a)] \item $F:=\{u\in S\mid\forall x\in S:\st(f(u))\le\st(f(x))\}$ is a finite subset of $\convbd S$. \item $S':=\transfer_{\R,R}(S)\subseteq\O_R^n$ and $f$ has a unique minimizer $x_u$ on \[\{x\in S'\mid \st(x)=u\}\] for each $u\in F$. \item For every $u\in F$, there are $\la_{u1},\ldots,\la_{um}\in\O_R\cap R_{\ge0}$ with \[\la_{u1}+\ldots+\la_{um}\notin\m_R\] such that both $f-f(x_u)-\sum_{i=1}^m\la_{ui}g_i$ and its gradient vanish at $x_u$. \end{enumerate} \end{lem} \begin{proof} (a) Obviously $\st(f)\ne0$ and hence \[F=\{u\in S\mid\forall x\in S:(\st(f))(u)\le(\st(f))(x)\}\subseteq \convbd S\] by Proposition \ref{convbdchar}. We now prove that $F$ is finite. WLOG $S\ne\emptyset$. Set [$\to$ \ref{takeson}] \[a:=\min\{(\st(f))(x)\mid x\in S\}\] so that \[F=\{u\in S\mid(\st(f))(u)=a\}.\] By compactness of $S$, it is enough to show that every $x\in S$ possesses a neighborhood $U$ in $S$ such that $U\cap F\subseteq\{x\}$. This is trivial for the points of $S\setminus F$. So consider an arbitrary point $x\in F$. Since $x\in\convbd S$, each $g_i$ is positive or strictly quasiconcave at $x$. According to \ref{strictly2neighbor}, we can choose a closed ball $B$ of positive radius around $x$ in $\R^n$ such that each $g_i$ is positive or strictly quasiconcave even on $B$. By Lemma \ref{prep1}(b), $\st(f)$ has at most one minimizer on $U:=S\cap B$, namely $x$, i.e., $U\cap F\subseteq\{x\}$.
\medskip (b) First observe that $S':=\transfer_{\R,R}(S)\subseteq\O_R^n$ since the transfer from $\R$ to $R$ is an isomorphism of Boolean algebras [$\to$ \ref{transfer}]: Choosing $N\in\N$ with $S\subseteq[-N,N]_\R^n$, we have $S'\subseteq \transfer_{\R,R}([-N,N]^n_\R)=[-N,N]^n_R\subseteq\O_R^n$. Now we fix $u\in F$ and we show that $f$ has a unique minimizer on \[A:=\{x\in S'\mid\st(x)=u\}.\] Choose $\ep\in\R_{>0}$ such that each $g_i$ is strictly quasiconcave or positive on the ball \[B:=\{v\in\R^n\mid\|v-u\|\le\ep\}.\] Since $u\in\convbd S\subseteq\overline{S^\circ}$, Lemma \ref{lagrange2}(a) says that $f$ has a unique minimizer $x$ on $\transfer_{\R,R}(S\cap B)$. Because of $A\subseteq\transfer_{\R,R}(S\cap B)$, it is thus enough to show $x\in A$. Note that $u\in F\cap B\subseteq S\cap B\subseteq\transfer_{\R,R}(S\cap B)$ and thus $f(x)\le f(u)$. This implies $\st(f(\st(x)))=\st(f(x))\le\st(f(u))$ which yields together with $\st(x)\in S$ that $\st(x)\in F$ (and $\st(f(\st(x)))=\st(f(u))$). Again by Lemma \ref{lagrange2}(a), $\st(f)$ has a unique minimizer on $S\cap B$. But $u$ and $\st(x)$ are both minimizers of $\st(f)$ on $S\cap B$ (note that $\st(x)\in S\cap B$). Hence $u=\st(x)$ and thus $x\in A$ as desired. \medskip (c) Fix $u\in F$. Choose again $\ep\in\R_{>0}$ such that each $g_i$ is strictly quasiconcave or positive on the ball $B:=\{v\in\R^n\mid\|v-u\|\le\ep\}$ and such that $B\cap F=\{u\}$. Since $x_u\in\transfer_{\R,R}(B^\circ)$ obviously minimizes $f$ on $\transfer_{\R,R}(S\cap B)$, we get the necessary Lagrange multipliers by Lemma \ref{lagrange2}(b). \end{proof} \section{Linear polynomials and truncated quadratic modules} \begin{exo}\label{calculusexo} For all $k\in\N$ and $x\in[0,1]_\R$, we have $x(1-x)^k\le\frac1k$.
\end{exo} The main geometric idea in the proof of the following theorem is as follows: Consider a hyperplane that isolates a basic closed semialgebraic subset of $\R^n$ and that is defined over a real closed extension field of $\R$. Because we want to apply Theorem $\ref{mainrep}$ to get a sum-of-squares ``isolation certificate'', the points where the hyperplane gets infinitesimally close to the set pose problems unless the hyperplane \emph{exactly} touches the set in the respective point. The idea is to find a nonlinear infinitesimal deformation of the hyperplane so that all ``infinitesimally near points'' become ``touching points''. This would be easier (although still not obvious) if there were at most one ``infinitesimally near point'', but since we deal in this article with \emph{not necessarily convex} basic closed semialgebraic sets, it is crucial to cope with several such points. \begin{thm}\label{linearstability} Let $m\in\N_0$ and $\g\in\R[\x]^m$ such that $M(\g)$ is Archimedean and suppose that $S:=S(\g)$ has nonempty interior near its convex boundary. Suppose that $g_i$ is strictly quasiconcave on $(\convbd S)\cap Z(g_i)$ for each $i\in\{1,\dots,m\}$. Let $R$ be a real closed extension field of $\R$ and $\ell\in\O_R[\x]_1$ such that $\ell\ge0$ on $\transfer_{\R,R}(S)$. Then $\ell$ lies in the quadratic module generated by $g_1,\ldots,g_m$ in $\O_R[\x]$. \end{thm} \begin{proof} We will apply Theorem $\ref{mainrep}$. Since $S$ is compact, we can rescale the $g_i$ and suppose WLOG that \[g_i\le1\text{ on }S\] for $i\in\{1,\ldots,m\}$. Let $M$ denote the quadratic module generated by $g_1,\ldots,g_m$ in $\O_R[\x]$. Since $M(\g)$ is Archimedean, also $M$ is Archimedean by \ref{archmodulechar}(b) and \ref{archmodulecharrcf}(b). Moreover, $S$ could now alternatively be defined from $M$ as in Theorem $\ref{mainrep}$. Write \[\ell=f-c\] with a linear form $f\in\O_R[\x]$ and $c\in\O_R$.
By a rescaling argument, we can suppose that at least one of the coefficients of $\ell$ lies in $\O_R^\times$ [$\to$ \ref{ordval}]. If $\st(\ell(x))>0$ for all $x\in S$, then Theorem $\ref{mainrep}$ applied to $\ell$ with $k=0$ yields $\ell\in M$ and we are done. Hence we can from now on suppose that there is some $u\in S$ with $\st(\ell(u))=0$. For such a $u$, we have $\st(c)=\st(f(u))$ so that at least one coefficient of $f$ must lie in $\O_R^\times$. By another rescaling, we can now suppose WLOG that $\|\nabla f\|_2=1$. Now we are in the situation of Lemma~\ref{finitecontact} and we define \[F,\quad (x_u)_{u\in F}\quad\text{and}\quad (\la_{ui})_{(u,i)\in F\times\{1,\ldots,m\}}\] accordingly. Note that \[F=\{u\in S\mid\st(\ell(u))=0\}\ne\emptyset\] since $\st(\ell(x))\ge0$ for all $x\in S$ and $\st(\ell(u))=0$ for some $u\in S$ as noted above. We have $f(x_u)-c=\ell(x_u)\ge0$ and \[\st(f(x_u)-c)=\st(\ell(u))=0\] for all $u\in F$. Hence $f(x_u)-c\in\m_R\cap R_{\ge0}$ for all $u\in F$. We thus have \[\ell-\underbrace{(f(x_u)-c)}_{=:\la_{u0}\in\m_R\cap R_{\ge0}}-\sum_{i=1}^m\underbrace{\la_{ui}}_{\in\O_R\cap R_{\ge0}}g_i\in I_{x_u}^2\] for all $u\in F$ by \ref{finitecontact}(c) and \ref{membershipix}. Evaluating this in $x_u$ (and using $g_i(x_u)\ge0$) yields \begin{gather} \tag{$*$}g_i(x_u)\ne0\implies\la_{ui}=0\qquad\text{and thus}\\ \tag{$**$}\la_{ui}g_i\equiv_{I_{x_u}^2}\la_{ui}g_i(1-g_i)^k \end{gather} for all $u\in F$, $i\in\{1,\ldots,m\}$ and $k\in\N$. By the Chinese remainder theorem, we find polynomials $s_0,\ldots,s_m\in\O_R[\x]$ such that $s_i\equiv_{I_{x_u}^3}\sqrt{\la_{ui}}\in\O_R$ for all $u\in F$ and $i\in\{0,\ldots,m\}$ because the ideals $I_{x_u}^3$ ($u\in F$) are pairwise coprime [$\to$ \ref{coprime}] (use that $\st(x_u)=u\ne v=\st(x_v)$ for all $u,v\in F$ with $u\ne v$). By an easy scaling argument, we can even guarantee that the coefficients of $s_0$ lie in $\m_R$ since $\sqrt{\la_{u0}}\in\m_R$.
Then we have \begin{equation} \tag{$***$}s_i^2\equiv_{I_{x_u}^3}\la_{ui} \end{equation} which means in other words \[s_i^2(x_u)=\la_{ui},\qquad(\nabla(s_i^2))(x_u)=0\qquad\text{and}\qquad(\hess(s_i^2))(x_u)=0\] for all $u\in F$ and $i\in\{0,\ldots,m\}$. It suffices to show that there is $k\in\N$ such that the polynomial \[\ell-s_0^2-\sum_{i=1}^ms_i^2(1-g_i)^{2k}g_i\overset{(***)}{\underset{(**)}\in}\bigcap_{u\in F}I_{x_u}^2\] lies in $M$ since this implies immediately $\ell\in M$. By Theorem $\ref{mainrep}$, this task reduces to finding $k\in\N$ such that $f_k>0$ on $S\setminus F$ and $(\hess(f_k))(u)\succ0$ for all $u\in F$ where \[f_k:=\st(\ell)-\sum_{i=1}^m\st(s_i^2)(1-g_i)^{2k}g_i\in\R[\x] \] is the standard part of this polynomial. Note for later use that $f_k$ and $\nabla f_k$ vanish on $F$ for all $k\in\N$. In order to find such a $k$, we calculate \begin{align*} (\hess f_k)(u)&\overset{(***)}=-\sum_{i=1}^m\st(\la_{ui})\hess((1-g_i)^{2k}g_i)(u)\\ &\overset{\ref{derexo}}{\underset{(*)}=}\sum_{i=1}^m\st(\la_{ui})(4k(\nabla g_i)(\nabla g_i)^T-\hess g_i)(u) \end{align*} for $u\in F$ and $k\in\N$. By Lemma \ref{quasi2concave} we can choose $k\in\N$ such that $g_i(1-g_i)^{2k}$ is strictly concave on $\{x\in F\mid g_i(x)=0\}$ for $i\in\{1,\ldots,m\}$. Since $\st(\la_{u1})+\ldots+\st(\la_{um})>0$ for each $u\in F$ [$\to$ \ref{finitecontact}(c)], we get together with $(*)$ and \ref{derexo} that for all sufficiently large $k$, we have $(\hess f_k)(u)\succ0$ for all $u\in F$. In particular, we can choose $k_0\in\N$ such that $\hess(f_{k_0})(u)\succ0$ for all $u\in F$. Since $f_{k_0}$ and $\nabla f_{k_0}$ vanish on $F$, we have by elementary analysis that there is an open subset $U$ of $\R^n$ containing $F$ such that $f_{k_0}>0$ on $U\setminus F$. Now $S\setminus U$ is compact so that we can choose $N\in\N$ with $\st(\ell)\ge\frac1N$ and $\st(s_i^2)\le N$ on $S\setminus U$.
Then $f_k\ge\frac1N-m\frac N{2k}$ on $S\setminus U$ by Exercise \ref{calculusexo} since $0\le g_i\le1$ on $S$ for all $i\in\{1,\ldots,m\}$. For all sufficiently large $k\in\N$ with $k\ge k_0$, we now have $f_k>0$ on $S\setminus U$ and because of $f_k\ge f_{k_0}>0$ on $(S\cap U)\setminus F$ (use again that $0\le g_i\le1$ on $S$) even $f_k>0$ on $S\setminus F$. \end{proof} \begin{cor}\label{linearstabilitycor} Let $m\in\N_0$ and $\g\in\R[\x]^m$ such that $M(\g)$ is Archimedean and suppose that $S:=S(\g)$ has nonempty interior near its convex boundary. Suppose that $g_i$ is strictly quasiconcave on $(\convbd S)\cap Z(g_i)$ for each $i\in\{1,\dots,m\}$. Let $R$ be a real closed extension field of $\R$ and $\ell\in R[\x]_1$ such that $\ell\ge0$ on $\transfer_{\R,R}(S)$. Then $\ell$ lies in the quadratic module generated by $g_1,\ldots,g_m$ in $R[\x]$. \end{cor} \begin{cor}\label{strqc} Let $m\in\N_0$ and $\g\in\R[\x]^m$ such that $M(\g)$ is Archimedean and suppose that $S(\g)$ has nonempty interior near its convex boundary. Suppose that $g_i$ is strictly quasiconcave on $(\convbd S(\g))\cap Z(g_i)$ for each $i\in\{1,\dots,m\}$. Then there exists \[d\in\N\] such that for all $\ell\in\R[\x]_1$ with $\ell\ge0$ on $S(\g)$, we have \[\ell\in M_d(\g).\] \end{cor} \begin{proof}{}(cf. the proofs of Theorems \ref{h17bound} and \ref{putinarzerosdegreebound}) For each $d\in\N$, consider the class $S_d$ of all pairs $(R,a_0,a_1,\ldots,a_n)$ where $R$ is a real closed extension field of $\R$ and $a_0,a_1,\ldots,a_n\in R$ such that whenever \[\forall x\in\transfer_{\R,R}(S(\g)):a_1x_1+\ldots+a_nx_n+a_0\ge0\] holds, the polynomial $a_1X_1+\ldots+a_nX_n+a_0$ is a sum of $d$ elements from $R[\x]$ where each term in the sum is of degree at most $d$ and is of the form $p^2g_i$ with $p\in R[\x]$ and $i\in\{0,\dots,m\}$ where $g_0:=1\in R[\x]$ [$\to$ \ref{commentonbounds}(a)]. By real quantifier elimination \ref{elim}, it is easy to see that this is an $(n+1)$-ary $\R$-semialgebraic class.
Set $\mathcal E:=\{S_d\mid d\in\N\}$ and observe that $\forall d_1,d_2\in\N:\exists d_3\in\N:S_{d_1}\cup S_{d_2}\subseteq S_{d_3}$ (take $d_3:=\max\{d_1,d_2\}$). By \ref{linearstabilitycor}, we have $\bigcup\mathcal E=\mathcal R_{n+1}$. Now \ref{finitenesscor} yields $\set_\R(S_d)=\R^{n+1}$ for some $d\in\N$. \end{proof} Our lecture notes culminate in the following result, which is a contribution to the theory of solving systems of polynomial inequalities. \begin{cor}[Kriel, Schweighofer \cite{ks'}] Let $m\in\N_0$ and $\g\in\R[\x]^m$ such that $M(\g)$ is Archimedean and suppose that $S(\g)$ has nonempty interior near its convex boundary. Suppose that $g_i$ is strictly quasiconcave on $(\convbd S(\g))\cap Z(g_i)$ for each $i\in\{1,\dots,m\}$. Then \[S_d(\g)=\conv S(\g)\] for all sufficiently large $d\in\N$. \end{cor} \begin{proof} First consider the special case $S(\g)=\emptyset$. In this case, $-1\in M(\g)$ for example by \ref{putinar} or by \ref{strqc}. By Definition \ref{lasserredef}, this entails $L_d(\g)=\emptyset$ and thus $S_d(\g)=\emptyset=S(\g)$ for all sufficiently large $d\in\N$. Now suppose that $S(\g)\ne\emptyset$. Since $M(\g)$ is Archimedean, $S(\g)$ is then compact. By \ref{convbdchar} and \ref{takeson}, it follows that $\convbd S(\g)\ne\emptyset$. In particular, $S(\g)^\circ\ne\emptyset$ by Definition \ref{nearconvexboundary}. Hence the conditions of Theorem \ref{exact2} are met and what we have to show is therefore exactly that there is $d\in\N$ such that \[\forall f\in\R[\x]_1:(f\ge0\text{ on }S(\g)\implies f\in M_d(\g)).\] But this is exactly what Corollary \ref{strqc} says. \end{proof} \backmatter
https://arxiv.org/abs/2205.04211
Real Algebraic Geometry, Positivity and Convexity
Chapters 1 to 4 are the lecture notes of my course "Real Algebraic Geometry I" from the winter term 2020/2021. Chapters 5 to 8 are the lecture notes of its continuation "Real Algebraic Geometry II" from the summer term 2021. Chapters 9 and 10 are the lecture notes of its further continuation "Geometry of Linear Matrix Inequalities" from the winter term 2021/2022. These courses have been delivered at the University of Konstanz in Southern Germany. The entirety of these lecture notes is accompanied by a list of 47 long videos which is available from the following YouTube playlist: this https URL
https://arxiv.org/abs/1706.09093
On the imaginary parts of chromatic roots
While much attention has been directed to the maximum modulus and maximum real part of chromatic roots of graphs of order $n$ (that is, with $n$ vertices), relatively little is known about the maximum imaginary part of their chromatic roots. We prove that the maximum imaginary part can grow linearly in the order of the graph. We also show that for any fixed $p \in (0,1)$, almost every random graph $G$ in the Erdős-Rényi model has a non-real chromatic root.
\section{Introduction} A (vertex) $k$-colouring of a (finite, undirected, simple) graph $G$ is a function $f:V(G) \rightarrow \{1,\ldots,k\}$ such that no two adjacent vertices receive the same colour, that is, if $uv$ is an edge of $G$, then $f(u) \neq f(v)$. The function $\pi(G,k)$ that counts, for each nonnegative integer $k$, the number of $k$-colourings of $G$ is well known to be a polynomial function of $k$, and its extension to all complex numbers $x$ is called the {\em chromatic polynomial} of $G$. There is likely no better studied graph polynomial than the chromatic polynomial, with interest initiated by Birkhoff in his work on the famous Four Colour Conjecture -- whether every planar graph can be coloured with four colours. The research literature on the topic is vast -- see \cite{read}, \cite{readtutte} and \cite{dongbook} for surveys. The Four Colour Theorem \cite{fct1,fct2} is equivalent to stating that $4$ is never a root of the chromatic polynomial of a planar graph, and the nature and location of roots of chromatic polynomials ({\em chromatic roots}) has been of great interest. There are no negative real roots (as the coefficients of a chromatic polynomial alternate in sign). While it is known \cite{jackson,thomassen} that the closure of the set of real chromatic roots is the set $\{0,1\} \cup [32/27,\infty)$, the closure of the set of complex chromatic roots is in fact the whole complex plane \cite{sokaldense}. Various results are known about the maximum modulus of chromatic roots of a graph, with regard to the order and size (that is, the number of edges). For example, the chromatic roots of graphs of order $n$ and size $m$ are known to be in the disks $|z| < 8 \Delta < 8n$ \cite{sokalbound} and $|z-1| \leq m-n+1$ \cite{brownmonomial}, where $\Delta$ is the maximum degree of a vertex of $G$. On the other hand, there are chromatic roots of modulus at least $\displaystyle{\frac{m-1}{n-2}}$ \cite{brownbound}.
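As a small concrete illustration (ours, not from the sources cited above), even the $4$-cycle has non-real chromatic roots; the sketch below uses only the classical formula $\pi(C_n,k)=(k-1)^n+(-1)^n(k-1)$ for cycles:

```python
import numpy as np

# Chromatic polynomial of the n-cycle: pi(C_n, k) = (k-1)^n + (-1)^n (k-1).
n = 4
km1 = np.poly1d([1, -1])                    # the polynomial k - 1
chrom = km1**n + (-1)**n * km1              # chromatic polynomial of C_4
roots = chrom.roots                         # 0, 1 and 3/2 +/- i*sqrt(3)/2
nonreal = roots[np.abs(roots.imag) > 1e-9]
print(nonreal)
```

Here the non-real pair is $3/2 \pm i\sqrt{3}/2$, so already for order $4$ the maximum imaginary part of a chromatic root is $\sqrt{3}/2 \approx 0.866$.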
As every complete graph of order $n$ has a chromatic root at $n-1$, the rate of growth of \[ \mathrm{maxmod}(n) = \mathrm{max} \{ |z| : z \mbox{ is a chromatic root of a graph of order } n \}\] is linear, that is, there are positive constants $C_{1}$ and $C_{2}$ such that $C_{1}n \leq \mathrm{maxmod}(n) \leq C_{2}n$. The same result holds for the maximum {\em real part} of a chromatic root of a graph of order $n$, \[ \mathrm{maxreal}(n) = \mathrm{max} \{ \Re(z) : z \mbox{ is a chromatic root of a graph of order } n \}\] -- the function also grows linearly, for the same reasons. However, what can be said about the growth rate of \[ \mathrm{maximaginary}(n) = \mathrm{max} \{ \Im(z) : z \mbox{ is a chromatic root of a graph of order } n \},\] the maximum {\em imaginary part} of a chromatic root of a graph of order $n$? Very little is known. In \cite{brownbound} it was shown that the complete bipartite graph $K_{\lfloor n/2 \rfloor,\lceil n/2 \rceil}$ of order $n$ has a chromatic root with imaginary part $\Omega(\sqrt{n})$. Of course, since the maximum modulus of a chromatic root of a graph of order $n$ is at most $8 \Delta < 8n$, the growth rate of the maximum imaginary part of a chromatic root is no more than linear. Alan Sokal (private communication) has suggested that, via computations, the maximum imaginary part of a chromatic root of the complete bipartite graph $K_{n,n}$ seems to be about $0.7239685$ times the order of the graph. However, a rigorous argument is elusive. What is the true rate of growth of the maximum imaginary part of a chromatic root of a graph of order $n$? Another question concerns chromatic roots for random graphs. Only some computational results are to be found \cite{bussel}. Many graphs, including forests and chordal graphs (including complete graphs), have only real chromatic roots, while others (such as complete bipartite graphs) do not.
We ask: do almost all graphs (in the Erd\H{o}s-R\'{e}nyi model, for fixed edge probability $p \in (0,1)$) have only real chromatic roots? We show that for any fixed $p \in (0,1)$, almost all graphs, in fact, have a non-real chromatic root. Our approach to both problems involves the well known \textit{Gauss-Lucas Theorem}: \begin{theorem}[Gauss-Lucas] \label{GL} Let $f(z)\in\mathbb{C}[z]$ be a nonconstant polynomial with complex coefficients, and let $f'(z) = \mathrm{d} f(z)/\mathrm{d} z$ be the derivative of $f(z)$. Then all the roots of $f'(z)$ lie in the convex hull of the set of roots of $f(z)$ in $\mathbb{C}$. \end{theorem} A simple but important consequence is the following: \begin{cor} If some nonzero iterated derivative of a polynomial $f(x)$ with complex coefficients has a root with imaginary part $b > 0$, then $f(x)$ has a non-real root as well, with imaginary part at least $b$. \end{cor} A proof of Theorem \ref{GL} can be found in \cite{prasolov}. Informally, the proof is very simple. Let $f(z)\in\mathbb{C}[z]$ be a polynomial of degree $d\geq 1$, and let $\xi_1,...,\xi_d$ be the roots of $f(z)$ (which are not necessarily distinct). Then $f'(z)/f(z) = \sum_{j=1}^d (z-\xi_j)^{-1}$ as rational functions in $\mathbb{C}(z)$. Let $K$ be the convex hull of $\{\xi_1,...,\xi_d\}$, and consider any $w\in\mathbb{C}\smallsetminus K$. Note that $f(w)\neq 0$. There is a line $L$ in the complex plane with $w$ on one side of $L$ and $K$ on the other side of $L$. Let $\theta\in(-\pi,\pi]$ be either of the angles such that the line $\mathrm{e}^{\mathrm{i}\theta}\mathbb{R}$ is perpendicular to $L$. Then all of the real numbers $\Re(\mathrm{e}^{-\mathrm{i}\theta}(w-\xi_j))$ for $1\leq j\leq d$ are nonzero and have the same sign. It follows that $f'(w)/f(w)\neq 0$, so that $f'(w)\neq 0$. We use the Gauss-Lucas Theorem \ref{GL} to investigate non-real roots of chromatic polynomials of graphs.
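The corollary is easy to check numerically; this sketch (ours, not part of the paper) builds a random real polynomial and confirms that the largest absolute imaginary part among the roots never increases under repeated differentiation:

```python
import numpy as np

rng = np.random.default_rng(0)
# A real polynomial whose non-real roots come in conjugate pairs.
r = rng.normal(size=6) + 1j * rng.normal(size=6)
f = np.poly1d(np.real(np.poly(np.concatenate([r, r.conj()]))))

# Gauss-Lucas: the roots of f' lie in the convex hull of the roots of f,
# so the maximal |imaginary part| cannot grow under differentiation.
max_imag = []
g = f
while g.order >= 1:
    max_imag.append(np.abs(g.roots.imag).max())
    g = g.deriv()
monotone = all(max_imag[i + 1] <= max_imag[i] + 1e-6
               for i in range(len(max_imag) - 1))
print(monotone)
```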
The idea is to differentiate a polynomial $f(z)\in\mathbb{R}[z]$ repeatedly until only a polynomial $g(z)$ of degree at most four remains. Then discriminant conditions determine whether $g(z)$ has a non-real root. By the Gauss-Lucas Theorem \ref{GL}, if $g(z)$ has a non-real root then so does $f(z)$. When $g(z)$ is quadratic we can solve for its roots easily, and obtain a lower bound for the largest imaginary part of a root of $f(z)$. For quartic polynomials $g(z)$ we use the following criterion. \begin{prop}\cite{rees} \label{disc4} Let $g(z) = az^4 + b z^3 + c z^2 + d z + f$ be a quartic polynomial in $\mathbb{R}[z]$. Then $g(z)$ has a non-real root if \begin{eqnarray*} \mathrm{Disc}(g(z)) &=& 256a^3f^3-192a^2bdf^2-128a^2c^2f^2+144a^2cd^2f-27a^2d^4+144ab^2cf^2-6ab^2d^2f \\ & & -80abc^2df+18abcd^3+16ac^4f-4ac^3d^2-27b^4f^2+18b^3cdf-4b^3d^3-4b^2c^3f+b^2c^2d^2\\ &<& 0. \end{eqnarray*} \end{prop} The applicability of this strategy depends on the fact that the first few coefficients (of highest degree) of chromatic polynomials have relatively straightforward combinatorial meaning. Similarly, this reasoning can be applied to any class of polynomials for which some of the highest order terms can be determined. The somewhat surprising fact is that, at least in the case of chromatic polynomials, this rather weak information seems to work quite well. \section{Linear growth of the maximum imaginary part of a chromatic root} While the moduli and real parts of chromatic roots grow linearly in the order of a graph, the same was not known for imaginary parts. In this section we shall prove that indeed this is the case. Given positive integers $a,b,c,d$, let $C_{4}(a,b,c,d)$ be the graph formed from a cycle of length $4$ by replacing the vertices in cyclic order by cliques of order $a, b, c$ and $d$ respectively (all edges are present between vertices of a clique and the two `adjacent' cliques in cyclic order). Figure~\ref{cycle4cliques} shows one such graph.
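Proposition \ref{disc4} can be sanity-checked numerically; in the sketch below (ours), the helper \texttt{quartic\_disc} just transcribes the displayed discriminant, and its sign is compared against the roots that numpy computes for random integer quartics:

```python
import numpy as np

def quartic_disc(a, b, c, d, f):
    # Discriminant of a*z^4 + b*z^3 + c*z^2 + d*z + f (real coefficients),
    # transcribed term by term from the proposition.
    return (256*a**3*f**3 - 192*a**2*b*d*f**2 - 128*a**2*c**2*f**2
            + 144*a**2*c*d**2*f - 27*a**2*d**4 + 144*a*b**2*c*f**2
            - 6*a*b**2*d**2*f - 80*a*b*c**2*d*f + 18*a*b*c*d**3
            + 16*a*c**4*f - 4*a*c**3*d**2 - 27*b**4*f**2 + 18*b**3*c*d*f
            - 4*b**3*d**3 - 4*b**2*c**3*f + b**2*c**2*d**2)

rng = np.random.default_rng(1)
checked = failures = 0
for _ in range(300):
    a, b, c, d, f = (int(v) for v in rng.integers(-5, 6, size=5))
    if a == 0:
        continue
    if quartic_disc(a, b, c, d, f) < 0:
        checked += 1
        roots = np.roots([a, b, c, d, f])
        if not np.any(np.abs(roots.imag) > 1e-9):
            failures += 1  # negative discriminant should force non-real roots
print(checked, failures)
```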
\begin{figure}[htp] \begin{center} \includegraphics[scale=0.75]{cycle4cliques.pdf} \caption{The graph $C_{4}(3,2,4,3)$.} \label{cycle4cliques} \end{center} \end{figure} The chromatic polynomials and roots of such graphs were investigated in \cite{ringfourcliques}, where it was shown that, surprisingly, the non-real roots all have real part equal to $(a+b+c+d-1)/2$ (so that these roots line up vertically in the complex plane when the order $n = a+b+c+d$ is fixed -- see Figure~\ref{c4cliqueexample2}). \begin{figure}[htp] \begin{center} \includegraphics[scale=0.35]{c4cliqueexample2.pdf} \caption{The chromatic roots of $C_{4}(a,b,c,d)$ for $1 \leq a,b,c,d \leq 6$. Note how the roots line up in vertical stacks.} \label{c4cliqueexample2} \end{center} \end{figure} That the non-real chromatic roots share the same real part is interesting in itself, but here we are concerned with their imaginary parts. To study these, we first pass through a sequence of polynomials related to the chromatic polynomial of $C_{4}(a,b,c,d)$. In \cite{ringfourcliques} it was observed that one can write \[ \pi(C_{4}(a,b,c,d),x) = \frac{(x)_{b+c}(x)_{c+d}}{(x)_{a+c}}Q_{a,b,c,d}(x),\] where $(x)_{k} = x(x-1) \cdots (x-k+1)$ is the {\em $k$th falling factorial of $x$}, and $Q_{a,b,c,d}(x)$ is a polynomial in $x$. Moreover, if we set \[ p = (b+c-a-d+1)/2,~q = (c+d-a-b+1)/2, \mbox{ and } k = (b+d-a-c+1)/2,\] then we can express \[ Q_{a,b,c,d}(z + (n-1)/2) = F_{a,p,q,k}(z)\] for another polynomial $F = F_{a,p,q,k}$.
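These identities can be spot-checked in the smallest case $C_{4}(1,1,1,1)$, which is just the $4$-cycle: here $n=4$, the prefactor is $(x)_{b+c}(x)_{c+d}/(x)_{a+c} = (x)_{2}$, and one can verify that $Q_{1,1,1,1}(z+3/2)$ agrees with the even polynomial $z^{2}+3/4$ (a sketch; the colour counts are computed by exhaustive enumeration):

```python
from itertools import product

def count_colorings(x):
    """Number of proper x-colourings of the 4-cycle 0-1-2-3-0,
    i.e. pi(C4(1,1,1,1), x) evaluated at the integer x."""
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    return sum(1 for col in product(range(x), repeat=4)
               if all(col[u] != col[v] for u, v in edges))

def Q(x):
    # pi(C4(1,1,1,1), x) = (x)_2 * Q(x), with (x)_2 = x(x - 1).
    return count_colorings(x) / (x * (x - 1))

# With n = 4, Q(z + 3/2) should be the even polynomial z^2 + 3/4.
for x in (2, 3):
    z = x - 1.5
    print(x, Q(x), z * z + 0.75)
```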
Moreover, it turns out that $F$ is an even polynomial, which can be expressed as \[ F_{a,p,q,k}(z) = W_{a,p,q,k}(z^{2})\] for a polynomial $W_{a,p,q,k}(z)$; in fact, $W_{a,p,q,k}(z)$ satisfies \begin{eqnarray} W_{0,p,q,k}(z) & = & 1,\label{w0}\\ W_{1,p,q,k}(z) & = & z+ pq + pk + qk\label{w1} \end{eqnarray} and for $a \geq 2$, \begin{eqnarray} W_{a,p,q,k}(z) & = & (z+(a-1)(2p+2q+2k+2a-3)+pq+pk+qk)W_{a-1,p,q,k}(z) \nonumber \\ & & -(a-1)(p+q+a-2)(q+k+a-2)(p+k+a-2)W_{a-2,p,q,k}(z).\label{wa} \end{eqnarray} The roots of $W$ were then shown to be real and nonpositive, so that for every negative root $r$ of $W$, both $-\sqrt{-r}\, i$ and $\sqrt{-r}\, i$ are roots of $F$; hence $Q$, and thus $\pi(C_{4}(a,b,c,d),x)$, has roots at $(n-1)/2 \pm \sqrt{-r}\, i$. With all of this out of the way, our plan is to show that when $a = b = c = d$, we can find a root $r$ of $W$ so that $-r = \Omega(n^{2})$; this will imply that the graphs $C_{4}(a,a,a,a)$ have a chromatic root with imaginary part at least $Cn$ for some positive constant $C$. We can then extend the result to all $n$: if $n = 4a+l$ for some $1 \leq l \leq 3$, then, since taking the disjoint union of a graph with isolated vertices does not change the set of chromatic roots, some graph of order $n$ has a chromatic root with imaginary part at least $C(n-3)$, and hence at least $C^{\prime}n$ for a slightly smaller constant $C^{\prime}$ (and sufficiently large $n$). So the question is, how large in absolute value are the roots of $W$ guaranteed to be?
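The recursion (\ref{w0})--(\ref{wa}) is easy to implement with exact rational arithmetic; the sketch below stores a polynomial in $z$ as its list of coefficients (highest degree first) and can be used to spot-check small cases:

```python
from fractions import Fraction as Fr

def W(a, p, q, k):
    """Coefficient list (highest degree first) of W_{a,p,q,k},
    computed from the recursion (w0)-(wa) in exact arithmetic."""
    p, q, k = Fr(p), Fr(q), Fr(k)
    prev, cur = [Fr(1)], [Fr(1), p*q + p*k + q*k]   # W_0 and W_1
    if a == 0:
        return prev
    for j in range(2, a + 1):
        s = (j - 1)*(2*p + 2*q + 2*k + 2*j - 3) + p*q + p*k + q*k
        t = (j - 1)*(p + q + j - 2)*(q + k + j - 2)*(p + k + j - 2)
        nxt = cur + [Fr(0)]               # z * W_{j-1}
        for i, c in enumerate(cur):       # + s * W_{j-1}
            nxt[i + 1] += s * c
        for i, c in enumerate(prev):      # - t * W_{j-2}
            nxt[i + 2] -= t * c
        prev, cur = cur, nxt
    return cur

print(W(2, 1, 1, 1))   # (z + 10)(z + 3) - 8 = z^2 + 13z + 22
```

For instance, $W_{2,1,1,1}(z)=(z+10)(z+3)-8=z^{2}+13z+22$, and $W_{2,1/2,1/2,1/2}(z)=z^{2}+\frac{11}{2}z+\frac{41}{16}$.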
When $a = b = c = d$, we find from the formulas that $p = q = k = 1/2$, and that \begin{eqnarray} W_{0} = W_{0,1/2,1/2,1/2}(z) & = & 1,\label{w0prime}\\ W_{1} = W_{1,1/2,1/2,1/2}(z) & = & z+ 3/4\label{w1prime} \end{eqnarray} and for $a \geq 2$, \begin{eqnarray} W_{a} = W_{a,1/2,1/2,1/2}(z) & = & (z+2(a-1)a+3/4)W_{a-1}(z) - (a-1)^{4}W_{a-2}(z).\label{waprime} \end{eqnarray} Moreover, from this recursion, we can calculate that \begin{eqnarray} W_{a} & = & z^{a} + \left( \frac{2}{3} a^{3} + \frac{1}{12} a \right) z^{a-1} + \nonumber \\ & & \left( \frac{2}{9}a^{6}-\frac{3}{5}a^{5}+\frac{5}{9}a^{4}-\frac{1}{6}a^{3}+\frac{1}{288}a^{2}-\frac{7}{480}a \right) z^{a-2} + \cdots . \label{wexpansion} \end{eqnarray} We are interested in the leftmost (that is, the `most negative') root of $W_{a}$. Figure~\ref{warootsovernsquared} plots the leftmost root divided by $n^2$, and the plot suggests limiting behaviour. \begin{figure}[htp] \begin{center} \includegraphics[scale=0.3]{warootsovernsquared.pdf} \caption{The leftmost roots of $W_{a}$ for $a \leq 40$, divided by $n^2 = 16 a^{2}$.} \label{warootsovernsquared} \end{center} \end{figure} To get a bound on the roots of $W_{a}$, we differentiate it $a-2$ times, until only a quadratic remains: \[ W_{a}^{(a-2)} = \frac{a!}{2}z^{2} + (a-1)!\left( \frac{2}{3} a^{3} + \frac{1}{12} a \right) z + (a-2)!\left( \frac{2}{9}a^{6}-\frac{3}{5}a^{5}+\frac{5}{9}a^{4}-\frac{1}{6}a^{3}+\frac{1}{288}a^{2}-\frac{7}{480}a \right).\] We factor out $(a-2)!$, and consider the quadratic \[ f_{a}(z) = \frac{a(a-1)}{2}z^{2} + (a-1)\left( \frac{2}{3} a^{3} + \frac{1}{12} a \right) z + \left( \frac{2}{9}a^{6}-\frac{3}{5}a^{5}+\frac{5}{9}a^{4}-\frac{1}{6}a^{3}+\frac{1}{288}a^{2}-\frac{7}{480}a \right).\] By the quadratic formula (and Maple) we find that the roots of $f_{a}$ are \[ -\frac{2}{3}a^{2}-\frac{1}{12} \pm \frac{1}{15}\sqrt{170a^{3}-55a^{2}-5a-5}.\] If $r$ is either of these roots (as we are interested in the limiting behaviour of the roots,
either root will do), we find by the Gauss-Lucas theorem that $W_{a}$'s leftmost root $R_{a}$ lies to the left of $r$. A straightforward calculation shows that \[ \lim _{n\rightarrow \infty} \frac{r}{n^{2}} = -\frac{1}{24}\,.\] Thus for any fixed $\varepsilon > 0 $, and sufficiently large $n$, \[ R_{a} \leq \left( -\frac{1}{24} + \varepsilon \right) n^{2}.\] Finally, as the imaginary parts of the non-real chromatic roots of $C_{4}(a,a,a,a)$ are the square roots of the absolute values of the negative roots of $W_{a}$, we find that $C_{4}(a,a,a,a)$, for all sufficiently large $a$, has a chromatic root with imaginary part at least \[ \sqrt{\frac{1}{24} - \varepsilon}\; n.\] Putting all the pieces together, we have shown: \begin{theorem} The growth rate of the maximum imaginary part of a chromatic root is linear, that is, there are positive constants $C_{1}$ and $C_{2}$ such that for all sufficiently large $n$, \[ C_{1}n \leq \mathrm{maximaginary}(n) \leq C_{2}n. \] \qed \end{theorem} In fact the proof shows that any positive constant slightly less than $1/\sqrt{24} \approx 0.2041$ will do for $C_{1}$. \section{Non-real chromatic roots of almost all graphs} We now turn to random graphs, and ask: is it more likely that all the chromatic roots are real or not? Our model is the usual Erd\H{o}s-R\'{e}nyi model $G \in {\mathcal G}_{n,p}$, where each edge appears independently with fixed probability $p$. Of the $833$ (isomorphism classes of) connected graphs with seven vertices, $273$ have chromatic polynomials with only real roots. (For eight vertices the proportion is $1627/11117$.) In this section we prove that for any fixed $p \in (0,1)$, as $n \rightarrow \infty$ almost all random graphs $G \in {\mathcal G}_{n,p}$ have a non-real chromatic root. It is well known that the chromatic polynomial of a graph $G$ of order $n$ and size $m$ is monic, of degree $n$, with integer coefficients of alternating sign. The top coefficients are known (see, for example, \cite[p.
31]{dongbook}): \[ \pi(G,x) = x^{n} - mx^{n-1} + \left( {{m} \choose {2}} - t \right)x^{n-2} - \cdots,\] where $t$ is the number of triangles (i.e. $K_{3}$'s) in $G$. The expected numbers of edges and triangles in a random graph $G \in {\mathcal G}_{n,p}$ are, respectively, \[ E(M) = p{{n} \choose {2}} \mbox{ and } E(T) = p^{3}{{n} \choose {3}}.\] Chebyshev's inequality for a random variable $X$ states that for any $\lambda > 0 $, \[ \mbox{Prob}(|X - E(X)| \geq \lambda) \leq \frac{\mbox{Var}(X)}{\lambda^{2}},\] and it follows that, for any $\varepsilon > 0$, \[ \mbox{Prob}(|X - E(X)| \leq \varepsilon |E(X)|) \geq 1 - \frac{\mbox{Var}(X)}{\varepsilon^{2}(E(X))^{2}}.\] Standard techniques show that for both of the random variables $X = M$, the number of edges, and $X = T$, the number of triangles, of a graph in ${\mathcal G}_{n,p}$, \[\mbox{Var}(X) = o((E(X))^{2}).\] (For example, writing $T = \sum T_{S}$, where the sum is taken over all subsets of cardinality $3$ of the vertex set $\{1,\ldots,n\}$ and $T_{S}$ is an indicator random variable for whether $S$ induces a triangle, then \begin{eqnarray*} \mbox{Var}(T) & = & \sum_{S} \mbox{Var}(T_{S}) + \sum_{S^{\prime} \neq S} \mbox{Cov}(T_{S},T_{S^\prime})\\ & \leq & E(T) + \sum_{S^{\prime} \neq S} \mbox{Cov}(T_{S},T_{S^\prime})\\ & \leq & E(T) + \sum_{|S \cap S^{\prime}| = 2} E(T_{S}T_{S^\prime})\\ & = & E(T) + 12{{n} \choose {4}}p^{5}\\ & = & o((E(T))^2), \end{eqnarray*} where we have partitioned the pairs of subsets $(S,S^{\prime})$ according to the cardinality of their intersection -- the covariance is $0$ if their intersection has size $0$ or $1$ -- and used the fact that $\mbox{Cov}(X,Y) \leq E(XY)$.)
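This second-moment computation can be confirmed exactly for tiny $n$ by enumerating all graphs. For $n=4$ there are $2^{6}$ graphs and $\binom{4}{3}=4$ triples of vertices; each ordered pair of distinct triples shares exactly two vertices, so it contributes $E(T_{S}T_{S'})=p^{5}$ to $E(T^{2})$, and there are $12$ such pairs. A sketch with exact rationals at $p=1/2$:

```python
from fractions import Fraction as Fr
from itertools import combinations, product

n, p = 4, Fr(1, 2)
pairs = list(combinations(range(n), 2))      # the 6 potential edges
triples = list(combinations(range(n), 3))    # the 4 vertex triples

ET = Fr(0)    # E(T)
ET2 = Fr(0)   # E(T^2)
for present in product((0, 1), repeat=len(pairs)):
    m = sum(present)
    w = p**m * (1 - p)**(len(pairs) - m)     # probability of this graph
    edgeset = {e for e, b in zip(pairs, present) if b}
    T = sum(1 for t in triples
            if all(e in edgeset for e in combinations(t, 2)))
    ET += w * T
    ET2 += w * T * T

print(ET, ET2 - ET**2)   # E(T) = 1/2, Var(T) = 5/8
```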
It follows from Chebyshev's inequality that for any fixed $\varepsilon > 0 $, and for almost all graphs $G \in {\mathcal G}_{n,p}$, \begin{eqnarray*} (1-\varepsilon)p{{n} \choose {2}} \leq M \leq (1+\varepsilon)p{{n} \choose {2}}, \end{eqnarray*} and \begin{eqnarray*} (1-\varepsilon)p^{3}{{n} \choose {3}} \leq T \leq (1+\varepsilon)p^{3}{{n} \choose {3}}. \end{eqnarray*} Let $G \in {\mathcal G}_{n,p}$. With probability tending to $1$, the values of $M$ and $T$ are \begin{eqnarray} M & = & (1 + \varepsilon_{M})p{{n} \choose {2}}, \mbox{ and}\label{quadm}\\ T & = & (1 + \varepsilon_{T})p^{3}{{n} \choose {3}}\label{quadt}, \end{eqnarray} where $\varepsilon_{M}$ and $\varepsilon_{T}$ are both bounded in absolute value by some fixed but very small $\varepsilon > 0 $, dependent on $p$, that we shall choose shortly. We now apply the Gauss-Lucas Theorem to the $(n-2)$-th derivative of $\pi(G,x)$: \[ f_{n-2} = (\pi(G,x))^{(n-2)} = (n-2)! \left( \frac{n(n-1)}{2}x^{2} - (n-1)mx + {{m} \choose {2}} - t \right) .\] By the quadratic formula, $f_{n-2}$ has a non-real root if and only if its discriminant is negative. Substituting in (\ref{quadm}) and (\ref{quadt}), we find that the discriminant of $f_{n-2}/(n-2)!$ is \[ p^{2} \left( \frac{1}{3}(\varepsilon_{T}+1)p - \frac{1}{4}(\varepsilon_{M}+1)^2 \right) n^{5} + O(n^{4}) .\] For any $p < 3/4$ we can choose $\varepsilon$ positive but sufficiently close to $0$ to force the coefficient of $n^{5}$ to be negative, so that the discriminant is negative for all sufficiently large $n$; hence for all $p \in (0,3/4)$, $f_{n-2} = (\pi(G,x))^{(n-2)}$ has a nonreal root. The Gauss-Lucas theorem implies the same is true for $\pi(G,x)$. Now what about $p \geq 3/4$? The argument provided fails, as then in general $f_{n-2}$ has two real roots. We shall need to be more subtle in our argument, and jump from using a quadratic to using a quartic (the use of a cubic provides no assistance here). To do so, we consider the expansion of the chromatic polynomial for the first five terms from the top (again, see \cite[p.
31-32]{dongbook}): \begin{eqnarray*} &&x^{n} - Mx^{n-1} + \left( {{M} \choose {2}} - T \right)x^{n-2} - \left( {{M} \choose {3}} - (M-2)T - IC_{4} + 2nIK_{4} \right) x^{n-3} + \\ &&\biggl( {{M} \choose {4}} - {{M-2} \choose {2}}T + {{T} \choose {2}} -(M-3)IC_{4} -(2M-9)IK_{4} -\\ && IC_{5} + IK_{2,3} + 2IH +3IW_{5} -6IK_{5}\biggr) x^{n-4} - \cdots, \end{eqnarray*} where $IK_{4}$ and $IK_{5}$ are the number of $K_{4}$'s and $K_{5}$'s in $G$, respectively, and $IC_{4}$, $IC_{5}$, $IK_{2,3}$, $IH$ and $IW_{5}$ are the number of {\em induced} $C_{4}$'s, $C_{5}$'s, $K_{2,3}$'s, $H$'s (see Figure~\ref{quarticgraphs}) and $W_{5}$'s (i.e. a wheel of order $5$) in $G$, respectively. The expected numbers of edges, triangles, $K_{4}$'s, induced $C_{4}$'s, induced $C_{5}$'s, induced $K_{2,3}$'s, induced $H$'s, induced $W_{5}$'s and $K_{5}$'s in a random graph $G \in {\mathcal G}_{n,p}$ are, respectively: \begin{equation*} \begin{aligned}[t] M & = p{{n} \choose {2}}\\ T & = p^{3}{{n} \choose {3}}\\ IK_{4} & = p^{6}{{n} \choose {4}}\\ IC_{4} & = 3p^{4}(1-p)^{2}{{n} \choose {4}}\\ IC_{5} & = 12p^{5}(1-p)^{5}{{n} \choose {5}} ~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{aligned} \begin{aligned}[t] IK_{2,3} & = 10p^{6}(1-p)^{4}{{n} \choose {5}}\\ IH & = 60p^{7}(1-p)^{3}{{n} \choose {5}}\\ IW_{5} & = 15p^{8}(1-p)^{2}{{n} \choose {5}}\\ IK_{5} & = p^{10}{{n} \choose {5}}\\ ~ & ~~~~~~~~~~~~~ \end{aligned} \end{equation*} \begin{figure}[htp] \begin{center} \includegraphics[scale=0.6]{quarticgraphs.pdf} \caption{Graphs whose counts appear in some of the coefficients in the chromatic polynomial.} \label{quarticgraphs} \end{center} \end{figure} Using techniques similar to those presented earlier for counting triangles, one can show that for all of these random variables $X$ of a graph in ${\mathcal G}_{n,p}$, $\mbox{Var}(X) = o((E(X))^{2})$, and so it follows from Chebyshev's inequality that, for any fixed $\varepsilon > 0 $, and for almost all graphs $G \in {\mathcal G}_{n,p}$, \begin{eqnarray*} (1-\varepsilon)10p^{6}(1-p)^{4}{{n}
\choose {5}} & \leq IK_{2,3} \leq & (1+\varepsilon)10p^{6}(1-p)^{4}{{n} \choose {5}}, \end{eqnarray*} for example. Similar inequalities hold in all other cases. For $G \in {\mathcal G}_{n,p}$, with probability tending to $1$, the values of the salient graph parameters are \begin{eqnarray} M & = & (1 + \varepsilon_{M})p{{n} \choose {2}},\label{quarm}\\ T & = & (1 + \varepsilon_{T})p^{3}{{n} \choose {3}},\label{quart}\\ IK_{4} & = & (1 + \varepsilon_{IK_{4}})p^{6}{{n} \choose {4}},\label{quark4}\\ IC_{4} & = & (1 + \varepsilon_{IC_{4}})3p^{4}(1-p)^{2}{{n} \choose {4}},\label{quaric4}\\ IC_{5} & = & (1 + \varepsilon_{IC_{5}})12p^{5}(1-p)^{5}{{n} \choose {5}},\label{quaric5}\\ IK_{2,3} & = & (1 + \varepsilon_{IK_{2,3}})10p^{6}(1-p)^{4}{{n} \choose {5}},\label{quarik23}\\ IH & = & (1 + \varepsilon_{IH})60p^{7}(1-p)^{3}{{n} \choose {5}},\label{quarih}\\ IW_{5} & = & (1 + \varepsilon_{IW_{5}})15p^{8}(1-p)^{2}{{n} \choose {5}}, \mbox{ and}\label{quariw5}\\ IK_{5} & = & (1 + \varepsilon_{IK_{5}})p^{10}{{n} \choose {5}},\label{quark5} \end{eqnarray} where $\varepsilon_{M},\varepsilon_{T},\varepsilon_{IK_{4}},\varepsilon_{IC_{4}},\varepsilon_{IC_{5}},\varepsilon_{IK_{2,3}},\varepsilon_{IH},\varepsilon_{IW_{5}}$ and $\varepsilon_{IK_{5}}$ are all bounded in absolute value by some fixed but very small $\varepsilon > 0 $, to be chosen to satisfy certain inequalities. We now apply the Gauss-Lucas Theorem to the $(n-4)$-th derivative of $\pi(G,x)$, which is $(n-4)!$ times \begin{eqnarray*} & & \frac{n(n-1)(n-2)(n-3)}{24}x^{4} - \frac{(n-1)(n-2)(n-3)}{6}mx^{3} + \frac{(n-2)(n-3)}{2}\left( {{m} \choose {2}} - t \right) x^{2} \\ & - & (n-3)\left( {{m} \choose {3}} - (m-2)t - ic_{4} + 2nk_{4} \right) x \\ & + & \left( {{m} \choose {4}} - {{m-2} \choose {2}}t + {{t} \choose {2}} -(m-3)ic_{4} -(2m-9)k_{4} - ic_{5} + ik_{2,3} + 2ih +3iw_{5} -6k_{5}\right). \end{eqnarray*} Determining when a quartic has all real roots is more involved than for a quadratic (or cubic).
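Before analysing the quartic, we note as an aside that the multiplicities $12$, $15$ and $1$ in the expected counts of induced $C_{5}$'s, $W_{5}$'s and $K_{5}$'s above can be confirmed by enumerating all $2^{10}$ graphs on five labelled vertices; on five vertices each of these three graphs is determined by its degree sequence (a sketch):

```python
from itertools import combinations, product

pairs = list(combinations(range(5), 2))   # the 10 potential edges

counts = {"C5": 0, "W5": 0, "K5": 0}
for present in product((0, 1), repeat=10):
    deg = [0] * 5
    for (u, v), b in zip(pairs, present):
        if b:
            deg[u] += 1
            deg[v] += 1
    d = sorted(deg)
    # On 5 vertices these degree sequences force the graph:
    # 2-regular => C5; (3,3,3,3,4) => the wheel W5; 4-regular => K5.
    if d == [2, 2, 2, 2, 2]:
        counts["C5"] += 1
    elif d == [3, 3, 3, 3, 4]:
        counts["W5"] += 1
    elif d == [4, 4, 4, 4, 4]:
        counts["K5"] += 1

print(counts)   # {'C5': 12, 'W5': 15, 'K5': 1}
```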
The {\em discriminant} of a quartic \[ g = ax^{4}+bx^{3}+cx^{2}+dx+e\] is given by \begin{eqnarray*} \mbox{Disc}(g) & = & 256a^3e^3-192a^2bde^2-128a^2c^2e^2+144a^2cd^2e-27a^2d^4+144ab^2ce^2-6ab^2d^2e \\ & & -80abc^2de+18abcd^3+16ac^4e-4ac^3d^2-27b^4e^2+18b^3cde-4b^3d^3-4b^2c^3e+b^2c^2d^2. \end{eqnarray*} If a real quartic's discriminant is negative, then the quartic has two distinct real roots and two non-real roots \cite{rees}. Substituting (\ref{quarm})--(\ref{quark5}) into the discriminant of $(\pi(G,x))^{(n-4)}/(n-4)!$ (as given in Proposition~\ref{disc4}), we get a polynomial of degree $30$ in $n$, whose leading coefficient we denote by $lc$. If we set all the various $\varepsilon$'s equal to $0$ in $lc$, we get \begin{eqnarray*} & & -(1/93312)p^{21}-(1/186624)p^{20}+(1/124416)p^{19}-(227/80621568)p^{18}-(1/1119744)p^{17}\\ & & +(5/2985984)p^{16}-(5/2985984)p^{15}+(5/5308416)p^{14}-(1/3538944)p^{13}+(1/28311552)p^{12}. \end{eqnarray*} The largest real root of this polynomial (in $p$) is approximately $0.31564$. As the leading coefficient is negative, it follows that for $p > 0.32$, this polynomial is negative. The roots of a polynomial depend continuously on its coefficients, so it follows that we can choose $\varepsilon > 0$ so small that if the absolute values of all of $\varepsilon_{M},\varepsilon_{T},\varepsilon_{IK_{4}},\varepsilon_{IC_{4}},\varepsilon_{IC_{5}},\varepsilon_{IK_{2,3}},\varepsilon_{IH},\varepsilon_{IW_{5}}$ and $\varepsilon_{IK_{5}}$ are at most $\varepsilon$, then $lc$ will be negative, provided that $p > 0.32$ (we ensure that the largest real root is less than $0.32$ and that the leading coefficient of $lc$, viewed as a polynomial in $p$, remains negative). As $lc$ is the leading coefficient of the discriminant of the quartic $(\pi(G,x))^{(n-4)}/(n-4)!$, that discriminant is negative for all sufficiently large $n$, and it follows that $(\pi(G,x))^{(n-4)}$, and (by Gauss-Lucas) $\pi(G,x)$ itself, has a non-real root.
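The sign behaviour of the displayed polynomial ($lc$ with the $\varepsilon$'s set to $0$; below the common factor $p^{12}$ is removed) can be double-checked by direct evaluation, using exact rational arithmetic to avoid floating-point cancellation (a sketch):

```python
from fractions import Fraction as Fr

# lc with all epsilons set to 0, after removing the common factor p^12;
# coefficients listed from degree 9 down to degree 0 in p.
coeffs = [
    -Fr(1, 93312), -Fr(1, 186624), Fr(1, 124416), -Fr(227, 80621568),
    -Fr(1, 1119744), Fr(5, 2985984), -Fr(5, 2985984), Fr(5, 5308416),
    -Fr(1, 3538944), Fr(1, 28311552),
]

def lc0(p):
    acc = Fr(0)
    for c in coeffs:          # Horner evaluation in exact arithmetic
        acc = acc * p + c
    return acc

# Positive for small p, negative beyond the largest real root (~0.31564).
print(lc0(Fr(1, 10)) > 0, lc0(Fr(1, 2)) < 0, lc0(Fr(9, 10)) < 0)
```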
Since the cases $p < 0.75$ and $p > 0.32$ together cover all of $(0,1)$, and in each case almost every graph has a non-real chromatic root, we have completed our proof of the following. \begin{theorem} Let $p \in (0,1)$ be fixed. Then with probability tending to $1$ as $n \rightarrow \infty$, a graph $G \in {\mathcal G}_{n,p}$ has a non-real chromatic root. \qed \end{theorem} \section{Concluding Remarks} We end our discussion with a few questions. \begin{ques} Does $\displaystyle{\lim _{n \rightarrow \infty} \frac{\mathrm{maximaginary}(n)}{n}}$ exist? If so, what is its value? \end{ques} We have shown that if it does exist, it must be at least $1/(2\sqrt{6}) = 1/\sqrt{24} \approx 0.204$. However, even for the ring graphs of order $n$ we considered, the largest imaginary parts seem to approach approximately $0.45n$. And calculations show that the largest imaginary parts of chromatic roots of complete bipartite graphs $K_{n/2,n/2}$ are roughly $0.72n$, which raises an extremal problem. \begin{ques} Which graphs of order $n$ have a chromatic root of largest imaginary part? Is it the complete bipartite graph with (nearly) equal parts? \end{ques} We have verified that this is indeed the case for order at most $8$. Finally, while we have shown that almost all graphs ${\mathcal G}_{n,p}$ have a non-real chromatic root, what can be said about the maximum imaginary part? \begin{ques} For fixed $p \in (0,1)$, is the maximum imaginary part of a chromatic root of almost all graphs $\Omega(n)$? \end{ques} \vskip0.4in \noindent {\bf \large Acknowledgments:} This research was supported in part by NSERC grants RGPIN 170450-2013 (J.I. Brown) and OGP0105392 (D.G. Wagner). \bibliographystyle{elsarticle-num}
https://arxiv.org/abs/1706.09093
On the imaginary parts of chromatic roots
While much attention has been directed to the maximum modulus and maximum real part of chromatic roots of graphs of order $n$ (that is, with $n$ vertices), relatively little is known about the maximum imaginary part of such graphs. We prove that the maximum imaginary part can grow linearly in the order of the graph. We also show that for any fixed $p \in (0,1)$, almost every random graph $G$ in the Erdős-Rényi model has a non-real root.
https://arxiv.org/abs/1312.7538
A technique for determining the signs of sensitivities of steady states in chemical reaction networks
We present a computational procedure to characterize the signs of sensitivities of steady states to parameter perturbations in chemical reaction networks.
\section{Introduction} An important question in the mathematical analysis of chemical reaction networks is the characterization of sensitivities of steady states to perturbations in parameters. An example of a parameter is the total concentration of an enzyme in its various activity states. Its value might be manipulated experimentally in various ways: through expression knock-downs via RNA interference, up-regulation, titration of inducers, pharmacological interventions with small-molecule inhibitors, or other modifications. Often, one wants to predict the effect of such perturbations, in a manner that depends only on the structure of the network of reactions and not on the actual values of other parameters, such as kinetic constants, which are typically very imperfectly known. Let us start with a very simple example. Suppose that we study the following reversible bimolecular reaction: \[ A+B \; \arrowschem{k_2}{k_1} \; C \,. \] Let us write lower case letters $a,b,c$ for the concentrations of $A$, $B$, and $C$ respectively. Modeling with deterministic mass-action kinetics, we obtain the steady states of the associated ordinary differential equations by solving \be{eq:binary_binding} k_1 a b \,-\, k_2c \;=\; 0 \end{equation} subject to two conservation laws: \[ a+c=A_T \quad \mbox{and} \quad b+c=B_T\,, \] where $A_T$ and $B_T$ are two positive constants denoting the total (bound and unbound) forms of $A$ and $B$ respectively. For the associated set of ordinary differential equations, all solutions converge to a unique positive steady state determined by~\rref{eq:binary_binding} and the conservation laws. Suppose that we now perform the following experiment. First, the system is allowed to relax to steady state, starting from the concentrations $a(0)=A_T$, $b(0)=B_T$, and $c(0)=0$. The final concentrations $a_f$, $b_f$, and $c_f$ are measured.
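This thought experiment is easy to carry out numerically: the steady state is determined by~\rref{eq:binary_binding} together with the conservation laws, which reduce to a quadratic equation in $c$. The sketch below uses hypothetical values $k_1=k_2=1$ and $B_T=1$, and compares the totals $A_T=1$ and $A_T=1.1$:

```python
import math

def steady_state(AT, BT, k1=1.0, k2=1.0):
    """Steady state of A + B <-> C: the root of
    k1*(AT - c)*(BT - c) - k2*c = 0 with 0 < c < min(AT, BT)."""
    s = k1 * (AT + BT) + k2
    c = (s - math.sqrt(s * s - 4.0 * k1 * k1 * AT * BT)) / (2.0 * k1)
    return AT - c, BT - c, c          # (a_f, b_f, c_f)

a1, b1, c1 = steady_state(1.0, 1.0)
a2, b2, c2 = steady_state(1.1, 1.0)   # repeat with slightly larger A_T
print(a2 - a1, b2 - b1, c2 - c1)      # observed signs: +, -, +
```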
Next, the experiment is repeated, but the total amount $A_T$ is now set to a slightly larger value, while $B_T$ is kept constant. Let us denote the final concentrations obtained in this new experiment, with larger $A_T$, by $a_f'$, $b_f'$, and $c_f'$. What can we say about the signs of the differences $\Delta a = a_f'-a_f$, $\Delta b = b_f'-b_f$, and $\Delta c = c_f'-c_f$? One approach to answering this question is to substitute the conservation laws into the steady state equation~\rref{eq:binary_binding}, for instance eliminating $a$ and $b$ so that $c=c_f$ can be found by solving the quadratic equation: \[ k_1 (A_T-c)(B_T-c) - k_2c = 0 \] for the unique root that is between $0$ and $\min\{A_T,B_T\}$: \[ \frac{k_1(A_T+B_T)+k_2 - \sqrt{(k_1(A_T+B_T)+k_2)^2 - 4 k_1^2 A_TB_T}}{2k_1} \] and then $a_f$ and $b_f$ are obtained from $a_f=A_T-c_f$ and $b_f=B_T-c_f$. A similar solution can be obtained for the larger value of $A_T$, and the differences $\Delta a$, $\Delta b$, and $\Delta c$ can be computed. Obviously, this is not a practical, or even possible, approach for large networks. On the other hand, a more conceptual and generalizable approach to this problem is as follows. Suppose that we view the vector of steady states $x=(a_f,b_f,c_f)$ as a curve which is parametrized by $A_T$, which we write as an abstract parameter $\lambda $. Thus, for all values of this parameter $\lambda $, we have that the following three equations must hold: \begin{eqnarray*} k_1 a(\lambda ) b(\lambda ) \,-\, k_2c(\lambda ) &=& 0\\ a(\lambda )+c(\lambda )&=&\lambda \\ b(\lambda )+c(\lambda )&=&B_T. \end{eqnarray*} Taking derivatives with respect to $\lambda $, we have: \begin{eqnarray*} k_1 a'(\lambda ) b(\lambda ) + k_1 a(\lambda ) b'(\lambda ) \,-\, k_2c'(\lambda ) &=& 0\\ a'(\lambda )+c'(\lambda )&=&1\\ b'(\lambda )+c'(\lambda )&=&0.
\end{eqnarray*} Substituting $b'(\lambda ) = -c'(\lambda )$ and $a'(\lambda ) =1 -c'(\lambda )$ in the first equation, we have that: \[ k_1 (1-c'(\lambda )) b(\lambda ) - k_1 a(\lambda ) c'(\lambda ) \,-\, k_2c'(\lambda ) \;=\; 0, \] which may be re-arranged as: \[ k_1b(\lambda ) \;=\; M c'(\lambda ),\quad \mbox{where} \;\; M = k_1 a(\lambda ) + k_2 + k_1 b(\lambda ) \,. \] Since $M>0$ and $k_1b(\lambda )>0$, we conclude that $c'(\lambda )>0$. In other words, $\Delta c>0$ for an increase in $\lambda =A_T$. Since $b'(\lambda ) = -c'(\lambda )$, we also know that $\Delta b<0$. What about $\Delta a$? If we only substitute $b'(\lambda ) = -c'(\lambda )$ in the first equation, we have that: \[ k_1 a'(\lambda ) b(\lambda ) = (k_1a(\lambda ) + k_2)c'(\lambda ) \] and so, using $k_1b(\lambda )>0$ and $k_1a(\lambda ) + k_2>0$, we conclude that $a'(\lambda )$ has the same sign as $c'(\lambda )$. Finally, since we also know that $a'(\lambda )+c'(\lambda )=1>0$, this implies that $a'(\lambda )>0$, so $\Delta a>0$. The rest of this paper shows how to extend this conceptual argument to more arbitrary networks. \section{Preliminaries} We start with arbitrary systems of ordinary differential equations (ODE's) \be{eq:gensys} \dot x(t) = f(x(t))\,. \end{equation} The vectors $x$ are assumed to lie in the positive orthant $\R^{n_{\mbox{\tiny \sc s}}}_+$ of $\R^{n_{\mbox{\tiny \sc s}}}$, that is, $x=(x_1,\ldots ,x_{{n_{\mbox{\tiny \sc s}}}})^T$ with each $x_i>0$, and $f$ is a differentiable vector field, mapping $\R^{n_{\mbox{\tiny \sc s}}}_+$ into $\R^{n_{\mbox{\tiny \sc s}}}$. We later specialize to ODE's that describe chemical reaction networks (CRN's), for which the abstract procedure to be described next can be made computationally explicit. In the latter context, we think of the coordinates $x_i(t)$ of $x$ as describing the concentrations of various chemical species $S_i$, $i=1,\ldots ,{n_{\mbox{\tiny \sc s}}}$. 
Suppose that $x^{\lambda }$ describes a $\lambda $-parametrized smooth curve of steady states for the system~\rref{eq:gensys}, where $\lambda $ is a scalar parameter ranging over some open interval $\Lambda $. The steady state condition amounts to asking that \be{eq:ss} f(x^{\lambda }) = 0 \end{equation} for all values of the parameter $\lambda \in \Lambda $. In addition to~(\ref{eq:ss}), we also assume that the steady states of interest are constrained by a set of algebraic equations \be{eq:gss} g_1(x^{\lambda }) = 0,\; g_2(x^{\lambda }) = 0,\; \ldots , \ g_{{n_{\mbox{\tiny \sc c}}}}(x^{\lambda }) = 0 \end{equation} where ${n_{\mbox{\tiny \sc c}}}$ is some nonnegative integer (which we take to be zero when there are no additional constraints). We write simply $g(x^{\lambda })=0$, where $g:\R^{n_{\mbox{\tiny \sc s}}}_+\rightarrow \R^{n_{\mbox{\tiny \sc c}}}$ is a differentiable mapping whose components are the $g_i$'s. Some or all $g_i$ might be linear functions, representing moieties or stoichiometric constraints, but nonlinear constraints will be useful when treating certain examples, as will be discussed later. Let us denote by \[ \sx^{\lambda } \,:=\; \frac{\partial \xl}{\partial \lambda } \in \R^{{n_{\mbox{\tiny \sc s}}}\times 1} \] the derivative of the vector function $x^{\lambda }$ with respect to $\lambda $, viewed as a function $\Lambda \rightarrow \R^{{n_{\mbox{\tiny \sc s}}}\times 1}$. We are interested in answering the following question: \begin{center} what are the signs of the entries of $\sx^{\lambda }$? \end{center} Obviously, the answer to this question will, generally speaking, depend on the chosen $\lambda $. The computation of the steady state $x^{\lambda }$ as a function of $\lambda $ generally involves the approximate numerical solution of nonlinear algebraic equations, and has to be repeated for each individual parameter $\lambda $.
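For the bimolecular example of the Introduction this derivative can be computed concretely: differentiating the steady-state and conservation equations with respect to $\lambda = A_T$ yields a linear system for $\sx^{\lambda}$. A sketch, again with hypothetical constants $k_1=k_2=1$, $B_T=1$, evaluated at $\lambda=1$:

```python
import math
import numpy as np

# Bimolecular example: f(a,b,c) = k1*a*b - k2*c, with constraints
# g1 = a + c - lambda and g2 = b + c - B_T.  Hypothetical constants:
k1 = k2 = 1.0
lam, BT = 1.0, 1.0

# Steady state (the root of the quadratic from the Introduction).
s = k1 * (lam + BT) + k2
c = (s - math.sqrt(s * s - 4.0 * k1 * k1 * lam * BT)) / (2.0 * k1)
a, b = lam - c, BT - c

# Differentiating f = 0, g1 = 0, g2 = 0 in lambda gives J xi = (0, 1, 0)^T,
# where the rows of J are the gradients of f, g1, g2 at the steady state.
J = np.array([[k1 * b, k1 * a, -k2],
              [1.0,    0.0,    1.0],
              [0.0,    1.0,    1.0]])
xi = np.linalg.solve(J, np.array([0.0, 1.0, 0.0]))
print(np.sign(xi))   # expected signs: +, -, +
```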
Our aim is, instead, to provide conditions that allow one to find these signs independently of the specific $\lambda $, and even independently of other parameters that might appear in the specification of $f$ and of $g$, such as kinetic constants, and to do so using only linear algebraic and logical operations, with no recourse to numerical approximations. Proceeding in complete generality, we take the derivative with respect to $\lambda $ in~(\ref{eq:ss}), so that, by the chain rule, we have that $f'(x^{\lambda })\sx^{\lambda } = 0$, where $f'(x)$ denotes the Jacobian matrix of $f$ evaluated at a state $x$. In other words, \be{eq:null_f} \sx^{\lambda } \in {\cal N}(f'(x^{\lambda }))\,, \end{equation} where ${\cal N}(f'(x))$ denotes the nullspace of the matrix $f'(x)$. Similarly, we have that \be{eq:null_g} \sx^{\lambda } \in {\cal N}(g'(x^{\lambda }))\,. \end{equation} The reason for introducing $f$ and $g$ separately will become apparent later: we will be asking that each of the ${n_{\mbox{\tiny \sc c}}}\times {n_{\mbox{\tiny \sc s}}}$ entries of the Jacobian matrix of $g$ should not change sign over the state space (which happens, in particular, when $g$ is linear, as is the case with stoichiometric constraints). No similar requirement will be made of $f$, but instead, we will study the special case in which $f$ represents the dynamics of a CRN. \subsubsection*{Notations for signs of vectors and of subspaces} For any (row or column) vector $u$ with real entries, we introduce the vector of signs of entries of $u$, denoted $\mbox{sign}\, u$, as the (row or column) vector with entries in the set $\{-1,0,1\}$ whose $i$th coordinate satisfies: \[ (\mbox{sign}\, u)_i = \threeif% {-1}{\mbox{if $u_i<0$}}% {1}{\mbox{if $u_i>0$}}% {0}{\mbox{if $u_i=0$.}} \] (The function $\mbox{sign}\,$ is sometimes called the ``signature function'' when viewed as a map $\R^n\rightarrow \{-1,0,1\}^n$.)
More generally, for any subspace ${\cal W}$ of vectors with real entries, we define \[ \mbox{sign}\, {\cal W} = \{\mbox{sign}\, v \, | \, v \in {\cal W}\}\,. \] Computing $\mbox{sign}\, {\cal W}$ amounts to the combinatorial problem of determining which orthants are intersected by ${\cal W}$.% \footnote{We do not need to use this fact, but it is worth noting that, given a basis of ${\cal W}$, the signs of ${\cal W}$ represent the ``oriented matroid'' associated to a matrix that lists the basis as its columns, which is the set of ``covectors'' of this basis. This topic is central to the theory of oriented matroids.} We also introduce the positive and negative parts of a vector $u$, denoted by $u^+$ and $u^-$ respectively, as follows: \[ (u^+)_i = \twoif {u_i} {\mbox{if $u_i>0$}}% {0} {\mbox{if $u_i\leq 0$}}% \, \quad\quad (u^-)_i = \twoif {-u_i} {\mbox{if $u_i<0$}}% {0} {\mbox{if $u_i\geq 0$}\,.}% \] Note that $u = u^+ - u^-$, $\mbox{sign}\, u = \mbox{sign}\, u^+ - \mbox{sign}\, u^-$, and: \be{eq:signsign} (\mbox{sign}\, u)^+ = \mbox{sign}(u^+)\,,\quad (\mbox{sign}\, u)^- = \mbox{sign}(u^-)\,. \end{equation} Suppose that $u\in \R^{1\times n}$ and $v\in \R^{n\times 1}$, for some positive integer $n$. The equality: \be{eq:productsign} \mbox{sign}(u v) \;=\; \mbox{sign}\left(\mbox{sign}(u) \, \mbox{sign}(v)\right)\,. \end{equation} need not hold for arbitrary vectors: for example, if $u=(1,-1/4,-1/4,-1/4)$ and $v=(1,1,1,1)^T$ then $\mbox{sign}(uv)=\mbox{sign}(1/4)=1$, but \[ \mbox{sign}\left(\mbox{sign}(u)\mbox{sign}(v)\right)= \mbox{sign}\left((1,-1,-1,-1)(1,1,1,1)^T\right) = \mbox{sign}(-2) = -1\,. \] However, equality~\rref{eq:productsign} is true provided that we assume that (a) $u^-=0$ or $u^+=0$ (that is, either $u_i\geq 0$ for all $i$, or $u_i\leq 0$ for all $i$, respectively), and also that (b) $v^-=0$ or $v^+=0$. This is proved as follows. Take first the case $u^-=0$ and $v^-=0$. Each term in the sum $uv=\sum_{i=1}^nu_iv_i$ is non-negative. 
Thus, $uv>0$, that is, $\mbox{sign}(uv)=1$, if and only if $u_i>0$ and $v_i>0$ for some common index $i$, and $uv = \mbox{sign}(uv)=0$ otherwise. Similarly, as $\mbox{sign}(u) \mbox{sign}(v)=\sum_{i=1}^n\mbox{sign}(u_i)\mbox{sign}(v_i)$, we know that $\mbox{sign}(u) \mbox{sign}(v)>0$, i.e. $\mbox{sign}\,(\mbox{sign}(u) \mbox{sign}(v))=1$, if and only if $\mbox{sign}(u_i)=\mbox{sign}(v_i)=1$ for some $i$, and $\mbox{sign}(u) \mbox{sign}(v)=0$ otherwise. But $\mbox{sign}(u_i)=\mbox{sign}(v_i)=1$ is the same as $u_i>0$ and $v_i>0$. Thus~(\ref{eq:productsign}) is true. The case $u^+=0$ and $v^-=0$ can be reduced to $u^-=0$ and $v^-=0$ by considering $-u$ instead of $u$: $\mbox{sign}(u v) = -\mbox{sign}((-u) v) = -\mbox{sign}(\mbox{sign}(-u)\mbox{sign}(v)) = \mbox{sign}(\mbox{sign}(u)\mbox{sign}(v))$. Similarly for the remaining two cases. \subsubsection*{A parameter-dependent constraint set} Denoting \[ {\cal W}(x^{\lambda })={\cal N}(f'(x^{\lambda }))\bigcap {\cal N}(g'(x^{\lambda })) \] we have that~\rref{eq:null_f} and~\rref{eq:null_g} can be summarized as follows, in terms of the sign notations just introduced: \[ \pi ^{\lambda }\,:= \; \mbox{sign}\, \sx^{\lambda } \,\in \, \mbox{sign}\, {\cal W}(x^{\lambda })\,. \] Therefore, one could in principle determine the possible values of $\pi ^{\lambda }$ once ${\cal W}(x^{\lambda })$ is known. However, in applications one typically does not know explicitly the curve $x^{\lambda }$, which makes the problem difficult because the subspace ${\cal W}(x^{\lambda })$ depends on $\lambda $, and even computing the steady states $x^{\lambda }$ is a hard problem. As discussed below, for the special case of ODE systems arising from CRN's, a more systematic procedure is possible. Before turning to CRN's, however, we discuss general facts true for all systems.
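The failure of~\rref{eq:productsign} for mixed-sign vectors, and its validity in the one-signed cases just proved, can be exercised numerically (a sketch; `sgn` applies the signature to a scalar inner product):

```python
import numpy as np

def sgn(t):
    """Signature of a real scalar: -1, 0, or 1."""
    return int(np.sign(t))

def lhs(u, v):
    return sgn(float(np.dot(u, v)))                     # sign(u v)

def rhs(u, v):
    return sgn(float(np.dot(np.sign(u), np.sign(v))))   # sign(sign(u) sign(v))

u = np.array([1.0, -0.25, -0.25, -0.25])   # the counterexample from the text
v = np.ones(4)
print(lhs(u, v), rhs(u, v))                # 1 -1: equality fails

u2 = np.array([0.0, 2.0, 0.0, 5.0])        # u2^- = 0 and v^- = 0
print(lhs(u2, v), rhs(u2, v))              # 1 1: equality holds
```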
For every positive concentration vector $x$ define: \be{eq:sigma} \Sigma (x) \;:= \; \left\{\mbox{sign} \left(\nu f'(x)\right) \, | \, \nu \in \R^{1\times {n_{\mbox{\tiny \sc s}}}} \right\} \bigcup \left\{\mbox{sign} \left(e_i^Tg'(x)\right) \, | \, i\in \{1,\ldots ,{n_{\mbox{\tiny \sc c}}}\} \right\} \;\subseteq \; \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc s}}}} \,. \end{equation} Here $e_i^T$ denotes the canonical row vector $(0,\ldots ,0,1,0,\ldots ,0)$ with a ``$1$'' in the $i$th position and zeroes elsewhere. The row vectors $\nu $ are used in order to generate an arbitrary linear combination of the rows of the Jacobian matrix of $f$, a set rich enough to, ideally, permit the unique determination of the sign of $\sx^{\lambda }$. As we will use $g$ to introduce constraints of constant sign, and the constant sign property is not preserved under arbitrary linear combinations of rows, we only allow $\nu =e_i^T$ for $g$, that is to say, we simply look at the signs of the rows of $g'(x)$. Since at a steady state $x=x^{\lambda }$, $f'(x^{\lambda })\sx^{\lambda }=0$ and $g'(x^{\lambda })\sx^{\lambda }=0$, we also have that: \be{eq:v} v \,\sx^{\lambda } \;=\; 0 \end{equation} for each linear combination $v = \nu f'(x^{\lambda })$ and each row $v = e_i^Tg'(x^{\lambda })$. An easy yet key observation is that the sign vectors in the set $\Sigma (x^{\lambda })$ strongly constrain the possible signs $\pi ^{\lambda }=\mbox{sign}\,\sx^{\lambda } = \mbox{sign}\, \frac{\partial \xl}{\partial \lambda }$. For simplicity of notation, we drop $\lambda $ in $\pi ^{\lambda }$ and in $\sx^{\lambda }$ when $\lambda $ is clear from the context, and write simply $\pi $ or $\xi $, with coordinates $\pi _i$ and $\xi _i$ respectively. \bl{lem:mainlogical} Pick any $\lambda \in \Lambda $. 
For every $\sigma \in \Sigma (x^{\lambda })$, and $\pi =\pi ^{\lambda }$, it must hold that either: \be{eq:allzero} \forall\, i\, \sigma _i\pi _i = 0 \end{equation} or: \be{eq:oppsigns} \left( \exists i\, \sigma _i\pi _i >0\right) \;\; \mbox{and} \;\; \left( \exists j\, \sigma _j\pi _j <0\right) \end{equation} (where $i$ and $j$ range over $\{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\}$ in all quantifiers). In other words, either all the coordinates of the vector \[ \left(\sigma _1\pi _1,\sigma _2\pi _2,\ldots ,\sigma _{{n_{\mbox{\tiny \sc s}}}}\pi _{{n_{\mbox{\tiny \sc s}}}}\right) \] are zero, or the vector must have both positive and negative entries. \end{lemma} \begin{proof} Pick $\sigma =\mbox{sign}\, v\in \Sigma (x^{\lambda })$, $\pi =\pi ^{\lambda }$, $\xi =\sx^{\lambda }$. Suppose that~(\ref{eq:allzero}) is false. Then, either there is some $i$ such that $\sigma _i\pi _i >0$ or there is some $j$ such that $\sigma _j\pi _j <0$. If $\sigma _i\pi _i >0$ for some $i$, then also $v_i\xi _i>0$. As~(\ref{eq:v}) holds, $\sum_{i=1}^{{n_{\mbox{\tiny \sc s}}}} v_i\xi _i = 0$, so that there must exist some other index $j$ for which $v_j\xi _j <0$, which means that $\sigma _j\pi _j<0$. Similarly, if there is some $j$ such that $\sigma _j\pi _j <0$, necessarily there is some $i$ such that $\sigma _i\pi _i >0$, by the same argument. \end{proof} We may express the conclusion of Lemma~\ref{lem:mainlogical} in formal logic terms as follows. Let $p_{\sigma ,\pi }$ and $q_{\sigma ,\pi }$ be the following logical disjunctions: \begin{eqnarray*} p_{\sigma ,\pi } &=& \exists i\, \sigma _i\pi _i >0\\ q_{\sigma ,\pi } &=& \exists j\, \sigma _j\pi _j <0 \end{eqnarray*} and observe that condition~(\ref{eq:allzero}) is equivalent to asking that both $p_{\sigma ,\pi }$ and $q_{\sigma ,\pi }$ are false. 
Thus, Lemma~\ref{lem:mainlogical} says that, for each $\sigma \in \Sigma $, either both $p_{\sigma ,\pi }$ and $q_{\sigma ,\pi }$ are false or both $p_{\sigma ,\pi }$ and $q_{\sigma ,\pi }$ are true. The ``XNOR($p$,$q$)'' binary function has value ``true'' if and only if $p$ and $q$ are simultaneously true or false. Thus, Lemma~\ref{lem:mainlogical} asserts that this logical statement is true, for $\pi =\pi ^{\lambda }$: \be{eq:xnor} \mbox{XNOR}(p_{\sigma ,\pi },q_{\sigma ,\pi }) \quad \forall\, \sigma \in \Sigma \,. \end{equation} Given any two sign vectors $\sigma $, $\pi $, testing this property is simple in any programming language. For example, in MATLAB{\textregistered} syntax, one may write: \begin{eqnarray*} \zeta &=& \sigma .*\pi \\ p &=& \mbox{sign\,}(\mbox{sum\,} (\zeta >0))\\ q &=& \mbox{sign\,}(\mbox{sum\,} (\zeta <0))\\ \mbox{XNOR} &=& \mbox{sign\,} (p*q + (1-p)*(1-q)) \end{eqnarray*} and the variable XNOR will have value $1$ if $\mbox{XNOR}(p_{\sigma ,\pi },q_{\sigma ,\pi })$ is true, and value $0$ otherwise. The basis of our approach will be as follows. We will show how to obtain a state-independent set $\Sigma _0$ which is a subset of $\Sigma (x)$ for all states $x$. In particular, for all steady states $x^{\lambda }$, we will have: \be{eq:sigma0_in_all_sigma} \Sigma _0 \; \subseteq \; \bigcap_{\lambda \in \Lambda } \Sigma (x^{\lambda }) \,. \end{equation} Compared to the individual sets $\Sigma (x^{\lambda })$, which depend on the particular steady state $x^{\lambda }$, the elements of this subset are obtained using only linear algebraic operations; the computation of $\Sigma _0$ does not entail solving nonlinear equations or simulating differential equations. Once this set $\Sigma _0$ (or even just some large subset of it, which is easier to compute) has been obtained, we may ask, for each potential sign vector $\pi $, if~\rref{eq:xnor} is true or not. 
Thus, for each $\pi $, we need to test if the conjunction of the clauses in~\rref{eq:xnor}: \be{eq:conjunction} \bigwedge_{\sigma \in \Sigma _0} \mbox{XNOR}(p_{\sigma ,\pi },q_{\sigma ,\pi }) \end{equation} (or the conjunction only over a more easily computed subset) is true or false. In other words, we are interested in computing the subset of sign vectors $\pi $ for which~\rref{eq:conjunction} is valid. This question is one of propositional logic (there are only $3^{{n_{\mbox{\tiny \sc s}}}}$ possible sign vectors), and as such is decidable algorithmically, although it has large computational complexity. We prefer to carry out a sieve procedure for restricting the possible sign vectors, by testing each $\pi $ one at a time. For moderate numbers of species, this is easy and fast to perform computationally. So we test for each $\pi $ if~\rref{eq:conjunction} is valid. If false, then the sign vector $\pi $ is ruled out as a possible sign and eliminated from the list. The surviving $\pi $'s are the possible sign vectors. Of course, since~\rref{eq:xnor} is only a necessary, and not a sufficient, condition, we are not guaranteed to find a minimal set of signs. However, we find for many examples that the procedure indeed leads to a unique, or close to unique, solution, after deleting the zero solution (since $\pi =0$ is always a solution) and also deleting one element in the pair $\{\pi ,-\pi \}$ for each $\pi $ (since $\nu \xi =0$ implies $\nu (-\xi )=0$, solutions always appear in pairs). Testing~\rref{eq:conjunction}, for a fixed $\pi $, is itself a hard computational problem (NP-hard in the number of species) and hence infeasible for large-scale networks. Good heuristics, such as the Davis-Putnam-Logemann-Loveland (DPLL) algorithm for clauses in conjunctive normal form, are extensively discussed in the rich literature on satisfiability. 
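The sieve just described can be sketched in a few lines of Python; the set $\Sigma _0$ below is a made-up illustration with three species, not one computed from an actual network:

```python
from itertools import product

def xnor_test(sigma, pi):
    """XNOR of p = (exists i: sigma_i*pi_i > 0) and q = (exists j: sigma_j*pi_j < 0)."""
    zeta = [s * p for s, p in zip(sigma, pi)]
    p = any(z > 0 for z in zeta)
    q = any(z < 0 for z in zeta)
    return p == q

def sieve(Sigma0, ns):
    """Keep the sign vectors pi satisfying every XNOR clause, as in (eq:conjunction)."""
    return [pi for pi in product((-1, 0, 1), repeat=ns)
            if all(xnor_test(sigma, pi) for sigma in Sigma0)]

# Hypothetical constraint set for ns = 3 species.
Sigma0 = [(1, -1, 0), (0, 1, 1)]
survivors = sieve(Sigma0, 3)
# As noted in the text, survivors include pi = 0 and come in pairs {pi, -pi}.
```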
However, we have found that a straightforward exhaustive testing of all possibilities is quite useful, as long as the number of species is reasonably small. The key issue, then, is to find a way to explicitly generate a state-independent subset $\Sigma _0$ of $\Sigma (x^{\lambda })$, and we turn to that problem next. 
\section{CRN terminology and notations} We consider a collection of chemical reactions that involves a set of ${n_{\mbox{\tiny \sc s}}}$ ``species'': \[ S_i , \; i \in \{1,2, \ldots {n_{\mbox{\tiny \sc s}}} \} \,. \] The ``species'' might be ions, atoms, or large molecules, depending on the context. A \emph{chemical reaction network} (``CRN'' for short) involving these species is a set of chemical reactions $\mathcal{R}_k$, $k\in \{1,2, \ldots , {n_{\mbox{\tiny \sc r}}} \}$, represented symbolically as: \be{reactions} \mathcal{R}_k: \quad \sum_{i =1}^{{n_{\mbox{\tiny \sc s}}}} \al_{ik} S_i \;\;\rightarrow \;\; \sum_{i =1}^{{n_{\mbox{\tiny \sc s}}}} \bb_{ik} S_i \,, \end{equation} where the $\al_{ik}$ and $\bb_{ik}$ are some non-negative integers that quantify the number of units of species $S_i$ consumed, respectively produced, by reaction ${\cal R}_k$. Thus, in reaction 1, $\al_{11}$ units of species $S_1$ combine with $\al_{21}$ units of species $S_2$, etc., to produce $\bb_{11}$ units of species $S_1$, $\bb_{21}$ units of species $S_2$, etc., and similarly for each of the other ${n_{\mbox{\tiny \sc r}}}-1$ reactions. We will assume the following ``non autocatalysis'' condition: no species $S_i$ can appear on both sides of the same reaction. With this assumption, either $\al_{ik}=0$ or $\bb_{ik}=0$ for each species $S_i$ and each reaction ${\cal R}_k$ (both are zero if the species in question is neither consumed nor produced). Note that we are not excluding autocatalysis which occurs through one or more intermediate steps, such as the autocatalysis of $S_1$ in $S_1+S_2\rightarrow S_3\rightarrow 2S_1+S_4$, so this assumption is not as restrictive as it might at first appear. Suppose that $\al_{ik}>0$ for some $(i,k)$; then we say that species $S_i$ is a \emph{reactant} of reaction ${\cal R}_k$, and by the non autocatalysis assumption, $\bb_{ik}=0$ for this pair $(i,k)$. 
If instead $\bb_{ik}>0$, then we say that species $S_i$ is a \emph{product} of reaction ${\cal R}_k$, and again by the non autocatalysis assumption, $\al_{ik}=0$ for this pair $(i,k)$. It is convenient to arrange the $\al_{ik}$'s and $\bb_{ik}$'s into two ${n_{\mbox{\tiny \sc s}}}\times {n_{\mbox{\tiny \sc r}}}$ matrices $A$, $B$ respectively, and introduce the \emph{stoichiometry matrix} $\Gamma = B-A$. In other words, \[ \Gamma = \left(\gamma _{ij}\right)_{ij} \in \R^{{n_{\mbox{\tiny \sc s}}}\times {n_{\mbox{\tiny \sc r}}}} \] is defined by: \begin{equation} \label{stocmatrix} \gamma _{ij} \;=\; \bb_{ij}-\al_{ij}\,,\quad i=1,\ldots ,{n_{\mbox{\tiny \sc s}}}\,, \quad j=1,\ldots ,{n_{\mbox{\tiny \sc r}}}\,. \end{equation} The matrix $\Gamma $ has as many columns as there are reactions. Its $k$th column lists, for each species (ordered by the index $i$), the net amount ``produced minus consumed'' by reaction ${\cal R}_k$. The symbolic information given by the reactions~(\ref{reactions}) is summarized by the matrix $\Gamma $. Observe that $\gamma _{ik}=-\al_{ik}<0$ if $S_i$ is a reactant of reaction ${\cal R}_k$, and $\gamma _{ik}=\bb_{ik}>0$ if $S_i$ is a product of reaction ${\cal R}_k$. To describe how the state of the network evolves over time, one must provide in addition to $\Gamma $ a rule for the evolution of the vector: \[ \mypmatrix{[S_1(t)] \cr [S_2(t)] \cr \vdots \cr [S_{{n_{\mbox{\tiny \sc s}}}}(t)]}\,, \] where the notation $[S_i(t)]$ means the concentration of the species $S_i$ at time $t$. We will denote the concentration of $S_i$ simply as $x_i(t) = [S_i(t)]$ and let $x=(x_1,\ldots ,x_{{n_{\mbox{\tiny \sc s}}}})^T$. Observe that only non-negative concentrations make physical sense. A zero concentration means that a species is not present at all; we will be interested in \emph{positive vectors} $x$ of concentrations, those for which $x_i>0$ for all $i$, meaning that all species are present. 
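As a concrete illustration, the following Python sketch builds $\Gamma =B-A$ for the two-reaction network $X_1+X_2\rightarrow X_4$, $X_2+X_3\rightarrow X_1$ that serves as a counterexample later in the text (rows index species, columns index reactions):

```python
# alpha_{ik}: units of species i consumed by reaction k.
A = [[1, 0],
     [1, 1],
     [0, 1],
     [0, 0]]
# beta_{ik}: units of species i produced by reaction k.
B = [[0, 1],
     [0, 0],
     [0, 0],
     [1, 0]]

# Stoichiometry matrix Gamma = B - A, computed entrywise.
Gamma = [[b - a for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

# Non autocatalysis: no species appears on both sides of one reaction,
# i.e. alpha_{ik} * beta_{ik} = 0 for every pair (i, k).
non_autocatalytic = all(a * b == 0 for rowA, rowB in zip(A, B)
                        for a, b in zip(rowA, rowB))
```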
Another ingredient that we require is a formula for the actual rate at which the individual reactions take place. We denote by $R_k(x)$ the algebraic form of the rate of the $k$th reaction. We postulate the following two axioms that the reaction rates $R_k(x)$, $k=1,\ldots ,{n_{\mbox{\tiny \sc r}}}$ must satisfy: \begin{itemize} \item for each $(i,k)$ such that species $S_i$ is a reactant of ${\cal R}_k$, $\frac{\partial R_k}{\partial x_i}(x)>0$ for all (positive) concentration vectors $x$; \item for each $(i,k)$ such that species $S_i$ is not a reactant of ${\cal R}_k$, $\frac{\partial R_k}{\partial x_i}(x)=0$ for all (positive) concentration vectors $x$. \end{itemize} These axioms are natural, and are satisfied by every reasonable model, and specifically by mass-action kinetics, in which the reaction rate is proportional to the product of the concentrations of all the reactants: \[ R_k(x) = \kappa _k \prod_{i=1}^{{n_{\mbox{\tiny \sc s}}}} x_i^{\al_{ik} } \mbox{ for all } k=1,\ldots ,{n_{\mbox{\tiny \sc r}}} \] (the positive coefficients $\kappa _k$ are the reaction, or kinetic, constants; $x_i^{\al_{ik}}=1$ when $\al_{ik}=0$). Recall that $\al_{ik}>0$ and $\bb_{ik}=0$ if and only if $S_i$ is a reactant of ${\cal R}_k$. Therefore the above axioms state that, for every positive $x$, \be{RiffG} \frac{\partial R_k}{\partial x_i}(x)>0 \;\Longleftrightarrow\; \al_{ik}>0 \end{equation} and also \be{RiffG0} \frac{\partial R_k}{\partial x_i}(x)=0 \;\Longleftrightarrow\; \al_{ik}=0 \end{equation} because the expressions on both sides are either zero or positive. We arrange reactions into a column vector function $R(x)\in \R^{{n_{\mbox{\tiny \sc r}}}}$: \[ R(x):= \mypmatrix{R_1(x) \cr R_2(x) \cr \vdots \cr R_{{n_{\mbox{\tiny \sc r}}}}(x)} \,. \] With these conventions, the system of differential equations associated to the CRN is given as follows: \be{chemreactionnetwork} \frac{dx}{dt} \;=\; f(x) \;=\; \Gamma \, R(x) \,. 
\end{equation} Observe that $f'(x) = \Gamma R'(x)$, where $R'(x)$ is the Jacobian matrix of $R$, which is the matrix whose $(k,j)$th entry is $\frac{\partial R_k}{\partial x_j}(x)$. We will assume from now on that there is also specified a differentiable mapping \[ g \,:\; \R^{n_{\mbox{\tiny \sc s}}}_+ \rightarrow \R^{n_{\mbox{\tiny \sc c}}} \,, \] where ${n_{\mbox{\tiny \sc c}}}$ is some non-negative integer (with ${n_{\mbox{\tiny \sc c}}}=0$ indicating the case where there are no additional constraints), and $g$ has the property that \be{eq:assume_constant_signs_g} \mbox{all ${n_{\mbox{\tiny \sc c}}}\times {n_{\mbox{\tiny \sc s}}}$ entries of the Jacobian matrix $g'(x)$ have constant sign.} \end{equation} This happens in the special case when $g$ is linear, as is the case for stoichiometric constraints. It is perfectly fine to add linear combinations of those rows of $g$ that are linear, since that will not change the constant sign assumption on $g'$. We assume in the theoretical discussion that $g$ has been extended by possibly adding one or more such combinations. Observe that a nonlinear $g$ may also have the constant sign property. For example, suppose that ${n_{\mbox{\tiny \sc s}}}=5$, ${n_{\mbox{\tiny \sc c}}}=1$, and \[ g(x) = a x_1x_3- b x_2^2 \] where $a$ and $b$ are positive constants. Then the Jacobian matrix (gradient, since ${n_{\mbox{\tiny \sc c}}}=1$) is: \[ g'(x) = \nabla g(x) = (a x_3 \,,\, -2b x_2 \,,\, a x_1 \,,\, 0 \,,\, 0) \] which has constant sign $(1,-1,1,0,0)$. For chemical reaction networks, it is not necessary for the entries of $f'(x)$, and much less the entries of the products $\nu f'(x)$ for vectors $\nu $, to have constant sign. Our next task will be to introduce algebraic conditions that allow one to check if the sign is constant, for any given vector $\nu $. Before proceeding, however, we give an example of non-constant sign. 
Take the following CRN, with ${n_{\mbox{\tiny \sc s}}}=4$ and ${n_{\mbox{\tiny \sc r}}}=2$: \be{counterexample:R1R2} \mathcal{R}_1: \; X_1+X_2 \rightarrow X_4\,,\quad\quad \mathcal{R}_2: \; X_2+X_3 \rightarrow X_1 \end{equation} which is formally specified, assuming mass-action kinetics, as follows: \[ A = \mypmatrix{% 1 & 0 \cr 1 & 1 \cr 0 & 1 \cr 0 & 0}\,,\quad B = \mypmatrix{% 0 & 1 \cr 0 & 0 \cr 0 & 0 \cr 1 & 0}\,,\quad \Gamma = \mypmatrix{% -1 & 1 \cr -1 & -1 \cr 0 & -1 \cr 1 & 0}\,,\quad R(x) = (k_1x_1x_2,k_2x_2x_3)^T \,. \] Thus the ODE set $\dot x=f(x)=\Gamma R(x)$ corresponding to this CRN has: \begin{eqnarray*} f(x) \;=\; \mypmatrix{% -k_1x_1x_2 + k_2x_2x_3 \cr -k_1x_1x_2 - k_2x_2x_3 \cr - k_2x_2x_3 \cr k_1x_1x_2} \,. \end{eqnarray*} Let $\nu =e_1^T$. Observe that $\nu f'(x) = (-k_1x_2,-k_1x_1+k_2x_3,k_2x_2,0)$ does not have constant sign, because its second entry, which is the same as the $(1,2)$ entry of $f'(x)$, is the function $-k_1x_1+k_2x_3$, which changes sign depending on whether $x_1>k_2x_3/k_1$ or $x_1<k_2x_3/k_1$. Ruling out vectors $\nu $ that lead to such ambiguous signs is the purpose of our algorithm to be described next. \section{Sensitivities for CRN's} Introduce the following space: \[ {\mathbf V} \;:=\;\mbox{row span of }\Gamma \;=\;\left\{\nu \Gamma \, | \, \nu \in \R^{1\times {n_{\mbox{\tiny \sc s}}}} \right\} \; \subseteq \; \R^{1\times {n_{\mbox{\tiny \sc r}}}} \,. \] Since $f'(x) = \Gamma R'(x)$, the definition~\rref{eq:sigma} of $\Sigma $ becomes: \[ \Sigma (x) \;:=\; \left\{\mbox{sign}\left(vR'(x)\right) \, | \, v\in {\mathbf V} \right\} \bigcup \left\{\mbox{sign}\, \left(e_i^Tg'(x)\right) \, | \, i\in \{1,\ldots ,{n_{\mbox{\tiny \sc c}}}\} \right\} \;\subseteq \; \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc s}}}} \] when specialized to CRN. As we assumed Property~\rref{eq:assume_constant_signs_g}, the expressions $\mbox{sign}\, (e_i^Tg'(x))$ are actually independent of $x$. 
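Returning to the counterexample~\rref{counterexample:R1R2}, the sign change of the $(1,2)$ entry of $f'(x)$ can be confirmed numerically; a minimal Python sketch, with kinetic constants set to $k_1=k_2=1$ (an arbitrary choice for illustration):

```python
k1, k2 = 1.0, 1.0  # kinetic constants, chosen arbitrarily for illustration

def first_jacobian_row(x):
    """e_1^T f'(x) = (-k1*x2, -k1*x1 + k2*x3, k2*x2, 0) for this network."""
    x1, x2, x3, x4 = x
    return [-k1 * x2, -k1 * x1 + k2 * x3, k2 * x2, 0.0]

def sgn(t):
    return (t > 0) - (t < 0)

# The (1,2) entry is positive when x1 < k2*x3/k1 and negative when x1 > k2*x3/k1.
s_low  = sgn(first_jacobian_row([1.0, 1.0, 2.0, 1.0])[1])  # -1*1 + 1*2 = 1
s_high = sgn(first_jacobian_row([2.0, 1.0, 1.0, 1.0])[1])  # -1*2 + 1*1 = -1
```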
On the other hand, the sign vectors $\sigma =\mbox{sign}\, v R'(x)$ generally depend on the particular $x$. The following Lemma shows that, for vectors $\rho $ with non-negative entries, the sign of the vector $\rho R'(x)$ is the same, no matter what the state $x$ is, and moreover, this sign can be explicitly computed using only stoichiometry information. We denote by \[ A_j=(\al_{j1},\ldots ,\al_{j{n_{\mbox{\tiny \sc r}}}})^T \;\in \; \R^{{n_{\mbox{\tiny \sc r}}}\times 1} \] the $j$th column of the transpose $A^T$, i.e., the transpose of the $j$th row of $A$. \bl{lem:rhoA} For any positive concentration vector $x$, any non-negative row vector $\rho $ of size ${n_{\mbox{\tiny \sc r}}}$, and any species index $j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\}$: \be{rhoA0} \rho A_j = 0 \;\Longleftrightarrow\; \rho \frac{\partial R}{\partial x_j}(x) = 0\,. \end{equation} Thus, also \be{rhoA} \rho A_j > 0 \;\Longleftrightarrow\; \rho \frac{\partial R}{\partial x_j}(x) > 0 \,, \end{equation} since the expressions on each side of~\rref{rhoA0} can only be zero or positive. \end{lemma} \begin{proof} We have that \[ \rho A_j = \sum_{k\in K_\rho } \rho _k \al_{jk} \] where $K_\rho := \{k | \rho _k>0\}$. Since every $\al_{jk}\geq 0$, the equality $\rho A_j=0$ holds if and only if $\al_{jk}=0$ for all $k\in K_\rho $. Similarly, from \[ \rho \frac{\partial R}{\partial x_j}(x) = \sum_{k\in K_\rho } \rho _k \frac{\partial R_k}{\partial x_j}(x) \] and $\frac{\partial R_k}{\partial x_j}(x)\geq 0$ we have that $\rho \frac{\partial R}{\partial x_j}(x) = 0$ if and only if $\frac{\partial R_k}{\partial x_j}(x)=0$ for all $k\in K_\rho $. From~(\ref{RiffG0}), we conclude~(\ref{rhoA0}). \end{proof} Lemma~\ref{lem:rhoA} is valid for all non-negative $\rho $. When applied to a non-negative $v=\nu \Gamma \in {\mathbf V}$, with $\sigma = \mbox{sign}\, vR'(x)$, it says that $\sigma $ does not depend on $x$. 
However, elements of the form $v=\nu \Gamma \in {\mathbf V}$ will generally not be non-negative (nor non-positive), so the lemma cannot be applied to them. Instead, we will apply Lemma~\ref{lem:rhoA} to the positive and negative parts of such a vector, but only when such positive and negative parts satisfy a certain ``orthogonality'' property, as defined by the subset of ${\mathbf V}$ introduced below. \subsubsection*{A state-independent subset of $\Sigma $} For any $v\in {\mathbf V}$, consider the sign vector $\widetilde {\mu }_v := \mbox{sign}\, v A^T \in \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc s}}}}$, whose $j$th entry is $\mbox{sign}(vA_j) = \mbox{sign}(\nu \Gamma A_j)$ if $v=\nu \Gamma $ with $\nu \in \R^{1\times {n_{\mbox{\tiny \sc s}}}}$, as well as the positive and negative parts of $v$, $v^+$ and $v^-$. Define the following set of vectors (``$G$'' for ``good''): \[ \VV_{G}\;:=\; \left\{ v\in {\mathbf V} \,|\, \mbox{for each }j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\} \mbox{ either } v^+ A_j = 0 \mbox{ or } v^- A_j = 0 \right\}\,. \] Observe that, if $v\in \VV_{G}$, then \be{cases_signs} vA_j = (v^+ - v^-)A_j = v^+A_j - v^-A_j = \threeif {v^+A_j}{\mbox{if } v^-A_j = 0}% {-v^-A_j}{\mbox{if } v^+A_j = 0}% {0}{\mbox{if } v^+A_j = v^-A_j =0 \,.}% \end{equation} Consider the following set of sign vectors $\widetilde {\mu }_v$ parametrized by elements of $\VV_{G}$: \be{eq:sigma0} {\widetilde \Sigma }_0 \; := \; \left\{\widetilde {\mu }_v = \mbox{sign}(vA^T) \, | \, v \in \VV_{G}\right\} \; \subseteq \; \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc s}}}}\,. \end{equation} The key fact is that this is a subset of $\Sigma (x)$ for all $x$, as shown next. \bl{lem:uniquesign} For every positive concentration vector $x$, \[ {\widetilde \Sigma }_0 \subseteq \Sigma (x). \] \end{lemma} \begin{proof} Pick any $\widetilde {\mu }_v\in {\widetilde \Sigma }_0$, where $v\in \VV_{G}\subseteq {\mathbf V}$, and fix any positive concentration vector $x$. We must prove that $\widetilde {\mu }_{v}\in \Sigma (x)$. 
As $\Sigma (x)$ includes all expressions of the form $\mbox{sign}(vR'(x))$, for $v\in {\mathbf V}$, it will suffice to show that, for this same vector $v$, \be{eq:uniquesign} \mbox{sign}\left(v\frac{\partial R}{\partial x_j}(x)\right) = \mbox{sign} \left(vA_j\right) \end{equation} for each species index $j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\}$. For each $j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\}$, we will show the following three statements: \be{eq:uniquesign1} v^- A_j>0 \mbox{ (and so } v^+ A_j = 0\mbox{)} \;\;\Longrightarrow\;\; v\frac{\partial R}{\partial x_j}(x) = -v^-\frac{\partial R}{\partial x_j}(x) < 0 \,, \end{equation} \be{eq:uniquesign2} v^+ A_j>0 \mbox{ (and so } v^- A_j = 0\mbox{)} \;\;\Longrightarrow\;\; v\frac{\partial R}{\partial x_j}(x) = v^+\frac{\partial R}{\partial x_j}(x) > 0 \,, \end{equation} and \be{eq:uniquesign3} v^- A_j = v^+ A_j = 0 \;\;\Longrightarrow\;\; v\frac{\partial R}{\partial x_j}(x) = 0 \,. \end{equation} Suppose first that $v^- A_j>0$. Applying~(\ref{rhoA0}) with $\rho =v^+$, we have that $v^+\frac{\partial R}{\partial x_j}(x) = 0$. Applying~(\ref{rhoA}) with $\rho =v^-$, we have that $v^-\frac{\partial R}{\partial x_j}(x) > 0$. Therefore \[ v\frac{\partial R}{\partial x_j}(x) = (v^+ - v^-)\frac{\partial R}{\partial x_j}(x) = v^+\frac{\partial R}{\partial x_j}(x) - v^-\frac{\partial R}{\partial x_j}(x) = - v^-\frac{\partial R}{\partial x_j}(x) < 0\,, \] thus proving~(\ref{eq:uniquesign1}). If, instead, $v^- A_j =0$ and $v^+ A_j > 0$, a similar argument shows that~(\ref{eq:uniquesign2}) holds. Finally, suppose that $v^+ A_j = v^- A_j =0$. Then, again by~(\ref{rhoA0}), applied to $\rho =v^+$ and $\rho =v^-$, \[ v\frac{\partial R}{\partial x_j}(x) = (v^+ - v^-)\frac{\partial R}{\partial x_j}(x) = 0\,, \] and so~(\ref{eq:uniquesign3}) holds. The desired equality~\rref{eq:uniquesign} follows from~\rref{eq:uniquesign1}-\rref{eq:uniquesign3}. 
Indeed, we consider three cases: (a) $v A_j < 0$, (b) $v A_j > 0$, and (c) $v A_j = 0$. In case (a), \rref{cases_signs} shows that $v A_j = -v^-A_j$ (because the first and third cases would give a non-negative value), and therefore $-v^-A_j<0$, that is, $v^-A_j>0$, so~\rref{eq:uniquesign1} gives that $v\frac{\partial R}{\partial x_j}(x)$ is also negative. In case (b), similarly $v^+ A_j=v A_j>0$, and so~\rref{eq:uniquesign2} shows~\rref{eq:uniquesign}. Finally, consider case (c), $v A_j = 0$. If it were the case that $v^+ A_j$ is nonzero, then, since $v\in \VV_{G}$, $v^- A_j=0$, and therefore~\rref{cases_signs} gives that $v A_j=v^+A_j>0$, a contradiction; similarly, $v^- A_j$ must also be zero. So,~\rref{eq:uniquesign3} gives that $v\frac{\partial R}{\partial x_j}(x)=0$ as well. \end{proof} \br{rem:interpretM} To interpret the set $\VV_{G}$, it is helpful to study the special case in which $v$ is simply a row of $\Gamma $, that is, $v=\nu \Gamma $ and $\nu =e_i^T$, the canonical row vector $(0,\ldots 0,,1,0,\ldots 0)$ with a ``$1$'' in the $i$th position and zeroes elsewhere. Since \[ e_i^TB-e_i^TA = e_i^T(B-A) = e_i^T\Gamma = v^+ - v^- \,, \] and the vectors $e_i^TB$ and $e_i^TA$ have non-overlapping positive entries (by the non autocatalysis assumption), we have that $v^+=e_i^TB$ and $v^-=e_i^TA$. Since $e_i^TBA_j =\sum_k \bb_{ik}\al_{jk}$, asking that this number be positive amounts to asking that \be{eq:ij1} \mbox{$i$ is a product of some reaction ${\cal R}_{k}$ which has $j$ as a reactant.} \end{equation} Since $e_i^TAA_j =\sum_k \al_{ik}\al_{jk}$, asking that this number is positive amounts to asking that \be{eq:ij2} \mbox{$i$ and $j$ are both reactants in some reaction ${\cal R}_{k'}$.} \end{equation} Thus, if the network in question has the property that~(\ref{eq:ij1}) and~(\ref{eq:ij2}) cannot both hold simultaneously for any pair of species $i,j$, then we cannot have that both $e_i^TBA_j >0$ and $e_i^TAA_j >0$ hold. 
In other words, $e_i^T\Gamma \in \VV_{G}$ for all $i$. As an illustration, take the CRN $\mathcal{R}_1: X_1+X_2 \rightarrow X_4$ and $\mathcal{R}_2: X_2+X_3 \rightarrow X_1$ treated in~\rref{counterexample:R1R2}. We claim that $e_1^T\Gamma \not\in \VV_{G}$, which reflects the fact that $e_1^Tf'(x)$ does not have constant sign. Indeed, in this case we have that, with $i=1$ and $j=2$, $X_1$ and $X_2$ are reactants in $\mathcal{R}_1$ but $X_1$ is also a product of reaction ${\cal R}_{2}$, which has $X_2$ as a reactant. Algebraically, $e_1^T\Gamma = (-1,1) = (0,1)-(1,0) = v^+ - v^-$ and $A_2=(1,1)^T$, so $v^+ A_2=1$ and $v^- A_2 = 1$. This means that $v=e_1^T\Gamma \not\in \VV_{G}$, since the property defining $\VV_{G}$ would require that at least one of $v^+ A_2$ or $v^- A_2$ should vanish. We have re-derived, in a purely algebraic manner, the fact that $-k_1x_1+k_2x_3$ changes sign. \mybox\end{remark} Testing whether a given vector $v\in {\mathbf V}$, $v=\nu \Gamma $ with $\nu \in \R^{1\times {n_{\mbox{\tiny \sc s}}}}$, belongs to $\VV_{G}$ is easy to do. For example, in MATLAB\textregistered-like syntax, one may write: \begin{eqnarray*} v &=& \nu * \Gamma \\ v^+ &=& (v>0).*v\\ v^- &=& -(v<0).*v\\ \vvp_A &=& \mbox{sign}(v^+ * A')\\ \vvn_A &=& \mbox{sign}(v^- * A') \end{eqnarray*} and we need to verify that the vectors $\vvp_A$ and $\vvn_A$ have disjoint supports, which can be done with the command \[ \mbox{sum}(\vvp_A.*\vvn_A)==0 \] which returns $1$ (true) if and only if $v\in \VV_{G}$, in which case we accept $v$ and we may use $\sigma = \mbox{sign}\left(vA^T\right)$ to test the conditions in Lemma~\ref{lem:mainlogical}. \subsubsection*{Explicit generation of elements of ${\widetilde \Sigma }_0$} The set ${\widetilde \Sigma }_0$ defined in~(\ref{eq:sigma0}) is constructed in such a way as to be independent of states $x$, which makes it more useful than the sets $\Sigma (x)$ from a computational standpoint. 
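The same membership test can be sketched in Python for the network of~\rref{counterexample:R1R2}; this is an illustrative pure-Python stand-in, with the matrices hard-coded from that example:

```python
# Data for the network R1: X1+X2 -> X4, R2: X2+X3 -> X1.
A = [[1, 0], [1, 1], [0, 1], [0, 0]]          # alpha_{ik}
Gamma = [[-1, 1], [-1, -1], [0, -1], [1, 0]]  # B - A

def row_times(nu, M):
    """Row vector nu (length = number of rows of M) times matrix M."""
    return [sum(nu[i] * M[i][j] for i in range(len(M))) for j in range(len(M[0]))]

def in_VG(nu):
    """Is v = nu*Gamma in V_G, i.e. does v^+ A_j = 0 or v^- A_j = 0 hold for all j?"""
    v = row_times(nu, Gamma)
    vplus = [max(c, 0) for c in v]
    vminus = [max(-c, 0) for c in v]
    for A_j in A:  # the j-th row of A, read as the column A_j of A^T
        if sum(p * a for p, a in zip(vplus, A_j)) > 0 and \
           sum(m * a for m, a in zip(vminus, A_j)) > 0:
            return False
    return True

nu1_ok = in_VG([1, 0, 0, 0])  # e_1^T Gamma = (-1, 1): fails, as derived above
nu4_ok = in_VG([0, 0, 0, 1])  # e_4^T Gamma = (1, 0): non-negative, so it passes
```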
Yet, in principle, computing this set potentially involves testing the conditions ``$v^+ A_j=0$ or $v^- A_j=0$'' that define the set $\VV_{G}$, for every $v=\nu \Gamma $, that is, for every possible real-valued vector $\nu \in \R^{1\times {n_{\mbox{\tiny \sc s}}}}$ (and each $j$). We describe next a more combinatorial way to generate the elements of ${\widetilde \Sigma }_0$. We introduce the set of signs associated to the row span ${\mathbf V}$ of $\Gamma $: \be{def:signV} {\mathbf S}\;:=\; \mbox{sign}\, {\mathbf V} \; \subseteq \; \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc r}}}} \,. \end{equation} Denote: \[ \alpha \;:=\; \mbox{sign}\, A^T \in \{0,1\}^{{n_{\mbox{\tiny \sc r}}}\times {n_{\mbox{\tiny \sc s}}}} \] so that the $j$th column of $\alpha $ is $\alpha _j =\mbox{sign}\, A_j\in \{0,1\}^{{n_{\mbox{\tiny \sc r}}}\times 1}$. \bl{lem:signscombinatorial} Pick any $s\in {\mathbf S}$, $s=\mbox{sign}\, v$, where $v\in {\mathbf V}$. Then, for each $j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\}$: \[ \mbox{sign}(v^+ A_j) \;=\; \mbox{sign}(s^+\alpha _j)\,, \quad \quad \mbox{sign}(v^- A_j) \;=\; \mbox{sign}(s^-\alpha _j) \,. \] \end{lemma} \begin{proof} By (\ref{eq:productsign}), applied with $u=v^+$ and $v=A_j$, $\mbox{sign}(v^+ A_j)= \mbox{sign}(\mbox{sign}(v^+)\alpha _j)$. By (\ref{eq:productsign}) applied with $u=v^-$ and $v=A_j$, $\mbox{sign}(v^- A_j)=\mbox{sign}(\mbox{sign}(v^-)\alpha _j)$. Since, by~(\ref{eq:signsign}) applied with $u=v$, $s^+ = \mbox{sign}(v^+)$ and $s^- = \mbox{sign}(v^-)$, the conclusion follows. \end{proof} In analogy to the definition of the set $\VV_{G}$, we define (``$G$'' for ``good''): \[ \SV_{G}\;:=\; \left\{s \in {\mathbf S} \,|\, \mbox{for each }j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\} \mbox{ either } s^+ \alpha _j = 0 \mbox{ or } s^- \alpha _j = 0 \right\}\,. 
\] Observe that, if $s\in \SV_{G}$, then \be{cases_signs_0} s \alpha _j = (s^+ - s^-)\alpha _j = s^+\alpha _j - s^-\alpha _j = \threeif {s^+\alpha _j}{\mbox{if } s^-\alpha _j = 0}% {-s^-\alpha _j}{\mbox{if } s^+\alpha _j = 0}% {0}{\mbox{if } s^+\alpha _j = s^-\alpha _j =0 \,.}% \end{equation} Consider the following set of sign vectors parametrized by elements of $\SV_{G}$: \be{eq:sigma01} \Sigma _0 \; := \; \left\{\mu _s = \mbox{sign}(s \alpha ) \, | \, s \in \SV_{G}\right\} \; \subseteq \; \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc s}}}}\,. \end{equation} \bp{cor:combinatorialSo_elements} Pick any $s\in {\mathbf S}$, $s=\mbox{sign}\, v$, where $v\in {\mathbf V}$. Then \[ s\in \SV_{G} \;\;\mbox{if and only if} \;\; v\in \VV_{G} \] and for such $s$ and $v$, \be{equality_signs} \mbox{sign}(vA^T) \;=\; \mbox{sign}(s\alpha ) \,. \end{equation} \end{proposition} \begin{proof} Let $s=\mbox{sign}\, v$, $v\in {\mathbf V}$, and pick any $j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\}$. We claim that $s^{\pm}\alpha _j=0$ if and only if $v^{\pm}A_j=0$. Since $j$ is arbitrary, this shows that $s\in \SV_{G}$ if and only if $v\in \VV_{G}$. Indeed, suppose that $s^+\alpha _j=0$. By Lemma~\ref{lem:signscombinatorial}, $\mbox{sign}(v^+ A_j)=\mbox{sign}(s^+\alpha _j)=0$, so $v^+ A_j=0$. Conversely, if $v^+ A_j=0$ then $s^+\alpha _j=0$, for the same reason. Similarly, $s^-\alpha _j=0$ is equivalent to $v^- A_j=0$. Suppose now that $s\in \SV_{G}$ and $v\in \VV_{G}$, and pick any $j\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\}$. Assume that $s^+\alpha _j=0$. Since, by~\rref{cases_signs_0} and~\rref{cases_signs}, $s\alpha _j=-s^-\alpha _j$ and $vA_j=-v^-A_j$, we have, again by Lemma~\ref{lem:signscombinatorial}, that \[ \mbox{sign}(s\alpha _j) = -\mbox{sign}(s^-\alpha _j) = -\mbox{sign}(v^- A_j) = \mbox{sign}(vA_j)\,. \] If, instead, $s^-\alpha _j=0$ (and thus $v^- A_j=0$), \[ \mbox{sign}(s\alpha _j) = \mbox{sign}(s^+\alpha _j) = \mbox{sign}(v^+ A_j) = \mbox{sign}(vA_j)\,. 
\] As $j$ was arbitrary, and the $j$th coordinates of the two vectors in~\rref{equality_signs} coincide, the vectors must be the same. \end{proof} \bc{lem:combinatorialSo} ${\widetilde \Sigma }_0=\Sigma _0$. \end{corollary} \begin{proof} Pick any element of ${\widetilde \Sigma }_0$, $\widetilde {\mu }_v = \mbox{sign}(vA^T)$, $v\in \VV_{G}$. By Proposition~\ref{cor:combinatorialSo_elements}, $s=\mbox{sign}\, v \in \SV_{G}$. Moreover, also by Proposition~\ref{cor:combinatorialSo_elements}, $\widetilde {\mu }_v = \mbox{sign}(s\alpha )$, so we know that $\widetilde {\mu }_v\in \Sigma _0$. Conversely, take an element $\mu _s\in \Sigma _0$. This means that $\mu _s = \mbox{sign}(s \alpha )$ for some $s \in \SV_{G}\subseteq {\mathbf S}=\mbox{sign}\,{\mathbf V}$. Let $v\in {\mathbf V}$ be such that $s=\mbox{sign}\, v$. By Proposition~\ref{cor:combinatorialSo_elements}, $v\in \VV_{G}$, and also $\mu _s=\mbox{sign}(vA^T)$. By definition of ${\widetilde \Sigma }_0$, this means that $\mu _s\in {\widetilde \Sigma }_0$. \end{proof} We can simplify the definition of $\Sigma _0$ a bit further, by noticing that the finite subset ${\mathbf S}$ can in fact be generated using only \emph{integer} vectors. The definition in~\rref{def:signV} says that: \[ {\mathbf S} = \left\{\mbox{sign}\,(\nu \Gamma ) \, | \, \nu \in \R^{1\times {n_{\mbox{\tiny \sc s}}}} \right\} \; \subseteq \; \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc s}}}}\,. \] \bl{lemma:integerS} \[ {\mathbf S} = \left\{\mbox{sign}\,(\nu \Gamma ) \, | \, \nu \in \Z^{1\times {n_{\mbox{\tiny \sc s}}}} \right\} \; \subseteq \; \{-1,0,1\}^{1\times {n_{\mbox{\tiny \sc s}}}}\,. \] \end{lemma} \begin{proof} Pick any $s\in {\mathbf S}$. Thus $s=\mbox{sign}\, v$, where $v=\nu \Gamma $ for some $\nu \in \R^{1\times {n_{\mbox{\tiny \sc s}}}}$. Consider the set of indices of the coordinates of $v$ that vanish (equivalently, $s_i=0$), $I=\{i\in \{1,\ldots ,{n_{\mbox{\tiny \sc s}}}\} \, | \, v_i=0\}$.
Suppose that $I = \{i_1,\ldots ,i_p\}$. Let $e_i$ denote the canonical column vector $(0,\ldots ,0,1,0,\ldots ,0)^T$ with a ``$1$'' in the $i$th position and zeroes elsewhere, and introduce the ${n_{\mbox{\tiny \sc s}}}\times p$ matrix $E_I = (e_{i_1},e_{i_2},\ldots ,e_{i_p})$. The definition of $I$ means that $\nu \Gamma E_I = vE_I=0$ and $\nu \Gamma e_j=ve_j=v_j\not= 0$ for all $j\not\in I$. The matrix $D = \Gamma E_I$ has integer, and in particular rational, entries. Thus, the left nullspace of $D$ has a rational basis, that is, there is a set of rational vectors $\{u_1,\ldots ,u_q\}$, where $q$ is the dimension of this nullspace, such that $u_iD=0$ and $uD=0$ if and only if $u$ is a linear combination of the $u_i$'s. In particular, since $\nu D=0$, there are real numbers $r_1,\ldots ,r_q$ such that $\nu =\sum_i r_iu_i$. Now pick sequences of rational numbers $r_i^{(k)} \rightarrow r_i$ as $k\rightarrow \infty $ and define $\nu ^{(k)}:=\sum_i r_i^{(k)}u_i$. This sequence converges to $\nu $, and, being combinations of the $u_i$'s, $\nu ^{(k)}D=0$ for all $k$. Let $v^{(k)} := \nu ^{(k)}\Gamma $, so we have that $v^{(k)}\rightarrow v$ as $k\rightarrow \infty $, and $v^{(k)}E_I=0$ for all $k$. On the other hand, for each $j\not\in I$, as $v e_j\not= 0$, for all large enough $k$, $(v^{(k)})_j$, the $j$th coordinate of $v^{(k)}$, has the same sign as $v_j$. In conclusion, for large enough $k$, $\mbox{sign}\, v^{(k)} = \mbox{sign}\, v = s$. Multiplying the rational vector $\nu ^{(k)}$ by the least common denominator of its coordinates does not change its sign, but now we have an integer vector with the same sign. \end{proof} \section{Summary and implementations} Our procedure for finding signs $\pi ^{\lambda }$ of derivatives $\sx^{\lambda }$ consists of the following steps: \begin{enumerate} \item Construct a subset ${\cal S}\subseteq {\mathbf S}$. \item For each element $s\in {\cal S}$, test the property $(s^+ \alpha _j)\cdot (s^- \alpha _j) = 0$, which defines $\SV_{G}$.
The $s$'s that pass this test are collected into a set ${\cal S}_G$, which is known to be a subset of $\SV_{G}$. \item Take the set of elements of the form $\mu _s = \mbox{sign}(s \alpha )$, for $s$ in ${\cal S}_G$, and add to these the signs of the rows of the Jacobian $g'$ of $g$ (by assumption, these sign vectors are independent of $x$). Let us call this set ${\cal T}$. \item Now apply the sieve procedure, testing~\rref{eq:conjunction} over elements of ${\cal T}$ (which is a subset of $\Sigma _0$). The elements $\pi $ that pass this test are reported as possible signs of derivatives of steady states with respect to the parameter $\lambda $, in the sense that they have not been eliminated when checking~\rref{eq:conjunction} over elements of ${\cal T}$. \item If a unique solution remains (after eliminating $0$ as well as one element of each pair $\{\pi ,-\pi \}$), we stop. If there is more than one sign that passed all tests, and if ${\cal S}$ was a proper subset of ${\mathbf S}$, we generate a larger set ${\cal S}$, and hence a potentially larger ${\cal T}$, and repeat the subsequent steps for the larger subset. \item If multiple solutions exist, we may also add additional linear combinations of those coordinates of $g$ that are linear functions, and enlarge $g$ in that manner. (Without loss of generality, arguing in the same manner as for ${\mathbf S}$, we only need to add integer combinations.) \end{enumerate} The first step, constructing ${\mathbf S}$, or a large subset ${\cal S}$ of it, can be done in various ways. Since, by Lemma~\ref{lemma:integerS}, we can generate ${\mathbf S}$ using integer vectors, the elements of ${\mathbf S}$ have the form $\mbox{sign}\, v$ where we may assume, without loss of generality, that each entry of $v=\nu \Gamma $ is either zero, $\geq 1$, or $\leq -1$.
Thus, testing whether a sign vector $s$ belongs to ${\mathbf S}$ amounts to testing the feasibility of a linear program (LP): we need that $\nu \Gamma e_i=0$ for those indices $i$ for which $s_i=0$, that $\nu \Gamma e_i\leq -1$ for those indices $i$ for which $s_i=-1$, and that $\nu \Gamma e_i\geq 1$ for those indices $i$ for which $s_i=1$. (These are closed, not strict, conditions, as needed for an LP formulation.) This means that one can check each of the $3^{{n_{\mbox{\tiny \sc s}}}}$ possible sign vectors efficiently. One can combine the testing of LP feasibility with the search over the $3^{{n_{\mbox{\tiny \sc s}}}}$ possible sign vectors into a Mixed Integer Linear Programming (MILP) formulation, by means of what is known in the MILP field as a ``big~$M$'' approximation. This is a routine reduction: one first fixes a large positive number $M$, and then formulates the following inequalities: \[ \nu \Gamma e_i - M L_i + U_i \leq 0,\quad -\nu \Gamma e_i - M U_i + L_i \leq 0,\quad L_i + U_i \leq 1, \] where the vector $\nu $ is required to be real and the variables $L_i$, $U_i$ binary ($\{0,1\}$). Given any solution, we have that $-M \leq \nu \Gamma e_i\leq -1$ (so $s_i=-1$) for those $i$ for which $(L_i,U_i)=(0,1)$, $1 \leq \nu \Gamma e_i\leq M$ (so $s_i=1$) for indices for which $(L_i,U_i)=(1,0)$, and $\nu \Gamma e_i=0$ (i.e., $s_i=0$) when $(L_i,U_i)=(0,0)$. (This trick will miss any solutions for which $\nu \Gamma e_i\leq -1$ but $M$ was not taken large enough that $-M \leq \nu \Gamma e_i$, or $\nu \Gamma e_i\geq 1$ but $M$ was not taken large enough that $\nu \Gamma e_i\leq M$.) The resulting MILP can be solved using relaxation-based cutting plane methods, branch and bound approaches, or heuristics such as simulated annealing. Often, however, simply testing sparse integer vectors in the integer-generating form of Lemma~\ref{lemma:integerS} works well.
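To make the LP feasibility test concrete, here is a short illustrative Python sketch (ours, not code from the paper; the function name \texttt{sign\_vector\_feasible} and the toy matrix are our own) using \texttt{scipy.optimize.linprog} with the closed thresholds $\pm 1$ described above:

```python
import numpy as np
from scipy.optimize import linprog

def sign_vector_feasible(s, Gamma):
    """Test whether s = sign(nu @ Gamma) for some real row vector nu,
    using closed thresholds: (nu Gamma)_i = 0 when s_i = 0,
    <= -1 when s_i = -1, and >= 1 when s_i = 1."""
    m, n = Gamma.shape
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for i in range(n):
        col = Gamma[:, i]
        if s[i] == 0:
            A_eq.append(col); b_eq.append(0.0)
        elif s[i] == -1:
            A_ub.append(col); b_ub.append(-1.0)    # (nu Gamma)_i <= -1
        else:
            A_ub.append(-col); b_ub.append(-1.0)   # (nu Gamma)_i >= 1
    res = linprog(np.zeros(m),                     # pure feasibility test
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=np.array(A_eq) if A_eq else None,
                  b_eq=np.array(b_eq) if b_eq else None,
                  bounds=[(None, None)] * m)       # nu is a free variable
    return res.status == 0                         # status 0 = feasible

# Toy example (ours): the row space of Gamma is {(t, -t)}.
Gamma = np.array([[1.0, -1.0]])
print(sign_vector_feasible([1, -1], Gamma))   # feasible
print(sign_vector_feasible([1, 1], Gamma))    # infeasible
```

Enumerating all $3^n$ candidate sign vectors through this test is exactly the exhaustive search described in the text; the MILP formulation merges the enumeration and the feasibility check into one optimization problem.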
In practice, we find that starting with $\nu =\pm e_i^T$ (canonical basis vectors and their negatives) and sums of pairs of such vectors, in addition to using the appropriate conservation laws, is typically enough to uniquely determine the sign vector $\pi $ (up to all signs being reversed, and except for the trivial solution $\pi =0$), provided that steady states are uniquely determined from conservation laws. \section{Example} \bex{example:ambiguity} We consider the following reaction network: \[ \begin{array}{ccccc} E_0 & \arrowschem{}{} & E&\\ E+S & \arrowschem{}{} & C & \arrowchem{} & E+P\\ F+P & \arrowschem{}{} & D & \arrowchem{} & F+S\,. \end{array} \] Here $E$ is a kinase that is constitutively activated and inactivated. Its active form drives a phosphorylation reaction in which a substrate $S$ is converted to an active form $P$, which can be dephosphorylated back into inactive form by a constitutively active phosphatase $F$. There are two intermediate enzyme-substrate complexes as well. Consider the following three conservation laws: \be{eq:ss1} e_0+e+c=e_T \end{equation} \be{eq:ss2} f+d=f_T \end{equation} and \be{eq:ss3} s+c+p+d = s_T\,. \end{equation} We may think of $e_T$ as the total amount of enzyme, $f_T$ as the total amount of phosphatase, and $s_T$ as the total amount of substrate. We will study what happens when each of these total amounts is varied while keeping the other two fixed. We are also interested in the total concentration of active kinase, free or bound, $x = e+c$, and the total concentration of product, free or bound, $y = p+d$. In order to obtain this information, we introduce these as additional variables and impose the ``virtual'' stoichiometric constraints $p+d-y=0$ and $e_0+x=e_T$ (from~\rref{eq:ss1}).
The program returns this output: \begin{verbatim}
-1 -1  1 -1 -1  1 -1 -1 -1
e0  e  s  c  d  f  p  x  y
\end{verbatim} when perturbing only $e_T$, \begin{verbatim}
-1 -1  1  1  1  1 -1  1 -1
e0  e  s  c  d  f  p  x  y
\end{verbatim} when perturbing only $f_T$, and \begin{verbatim}
-1 -1  1  1  1 -1  1  1  1
e0  e  s  c  d  f  p  x  y
\end{verbatim} when perturbing only $s_T$. \mybox\end{example} \bex{example:sequential} Consider the two reversible reactions \begin{eqnarray*} X_0+Y_1 &\arrowschem{k_{-1}}{k_{1}}& X_1+Y_0\\ X_1+Y_1 &\arrowschem{k_{-2}}{k_{2}} &X_2+Y_0 \,. \end{eqnarray*} (Equivalently, this network is modeled in the CRN formalism through the four unidirectional reactions $\mathcal{R}_1: X_0+Y_1 \rightarrow X_1+Y_0$, $\mathcal{R}_2: X_1+Y_0 \rightarrow X_0+Y_1$, $\mathcal{R}_3: X_1+Y_1 \rightarrow X_2+Y_0$, and $\mathcal{R}_4: X_2+Y_0 \rightarrow X_1+Y_1$.) This network can be thought of as describing a kinase $Y$ which, when in active form $Y_1$, transfers a phosphate group to $X_0$ (and hence becomes inactivated, denoted by $Y_0$, while $X_0$ becomes $X_1$), and which when active can also transfer a second phosphate group to $X_1$ (and hence becomes inactivated, while $X_1$ becomes $X_2$). We write coordinates of states as $x = (x_0,x_1,x_2,y_0,y_1)$. Two conservation laws are as follows: \begin{eqnarray*} x_0 + x_1 + x_2 &=& X_T\\ x_1 + 2 x_2 + y_1 &=& P_T \end{eqnarray*} representing the conservation of total $X$ and of the total number of phosphate groups. In addition to these two conservation laws, we also added: \[ x_0 - x_2 - y_1 = X_T - P_T \,, \] which is obtained by subtracting the second from the first. Finally, we add a fourth, nonlinear constraint $g$ whose Jacobian has constant sign, namely $g(x) = a x_0x_2- b x_1^2$ (as in an example mentioned earlier with variables labeled before as $x_i$, $i=1,\ldots ,5$). This additional constraint is needed in order to generate enough sign vectors to test against, and arises as follows.
Along a curve $x^{\lambda }$ of steady states, we have \begin{eqnarray*} f_1(x^{\lambda }) &=& k_1 x_0(\lambda ) y_1(\lambda ) - k_{-1} x_1(\lambda ) y_0(\lambda ) \;=\; 0\\ f_2(x^{\lambda }) &=& k_2 x_1(\lambda ) y_1(\lambda ) - k_{-2} x_2(\lambda ) y_0(\lambda ) \;=\; 0 \end{eqnarray*} and therefore also \[ g(x^{\lambda }) \,:=\; \frac{x_2(\lambda )}{y_1(\lambda )} f_1(x^{\lambda }) \,-\, \frac{k_{-1}x_1(\lambda )}{k_{-2}y_1(\lambda )} f_2(x^{\lambda }) \;=\; 0 \,. \] The function $g$ simplifies to: \[ g(x) = k_1 x_0x_2 - \frac{k_{-1}k_2}{k_{-2}} x_1^2 \] and so its Jacobian (gradient) is: \[ g'(x) = \nabla g(x) = \left(k_1 x_2 \,,\, -2 \frac{k_{-1}k_2}{k_{-2}} x_1 \,,\, k_1 x_0 \,,\, 0\,,\,0\right)\,, \] which has constant sign $(1,-1,1,0,0)$. We want to know what happens when the total amount of kinase, $y_0+y_1=Y_T$, is allowed to vary. The program returns this output: \begin{verbatim}
-1 -1  1 -1 -1
-1  0  1 -1 -1
-1  1  1 -1 -1
x0 x1 x2 y0 y1
\end{verbatim} (all signs could be reversed and that would also be a solution). This means that $x_0$, $y_0$, and $y_1$ change in the same direction, but $x_2$ in the opposite direction, and $x_1$ is undetermined. Since $y_0$ and $y_1$ change in the same direction and $y_0+y_1=Y_T$, an increase in $Y_T$ means that both $y_0$ and $y_1$ increase; thus we conclude that $x_0$ increases and $x_2$ decreases when the kinase amount is up-regulated. \mybox\end{example} \end{document}
https://arxiv.org/abs/1312.7538
A technique for determining the signs of sensitivities of steady states in chemical reaction networks
We present a computational procedure to characterize the signs of sensitivities of steady states to parameter perturbations in chemical reaction networks.
https://arxiv.org/abs/0903.3456
A Groupoid Approach to Discrete Inverse Semigroup Algebras
Let $K$ be a commutative ring with unit and $S$ an inverse semigroup. We show that the semigroup algebra $KS$ can be described as a convolution algebra of functions on the universal étale groupoid associated to $S$ by Paterson. This result is a simultaneous generalization of the author's earlier work on finite inverse semigroups and Paterson's theorem for the universal $C^*$-algebra. It provides a convenient topological framework for understanding the structure of $KS$, including the center and when it has a unit. In this theory, the role of Gelfand duality is replaced by Stone duality. Using this approach we are able to construct the finite dimensional irreducible representations of an inverse semigroup over an arbitrary field as induced representations from associated groups, generalizing the well-studied case of an inverse semigroup with finitely many idempotents. More generally, we describe the irreducible representations of an inverse semigroup $S$ that can be induced from associated groups as precisely those satisfying a certain "finiteness condition". This "finiteness condition" is satisfied, for instance, by all representations of an inverse semigroup whose image contains a primitive idempotent.
\section{Introduction} It is by now well established in the $C^*$-algebra community that there is a close relationship between inverse semigroup $C^*$-algebras and \'etale groupoid $C^*$-algebras~\cite{Paterson,Exel,Renault,graphinverse,ultragraph,higherrank,resendeetale,strongmorita}. More precisely, Paterson assigned to each inverse semigroup $S$ an \'etale (in fact, ample) groupoid $\mathscr G(S)$, called its universal groupoid, and showed that the universal and reduced $C^*$-algebras of $S$ and $\mathscr G(S)$ coincide~\cite{Paterson}. On the other hand, if $\mathscr G$ is a discrete groupoid and $K$ is a unital commutative ring, then there is an obvious way to define a groupoid algebra $K\mathscr G$. The author showed that if $S$ is an inverse semigroup with finitely many idempotents, then $KS\cong K\mathscr G$ for the so-called underlying groupoid $\mathscr G$ of $S$~\cite{mobius1,mobius2}; this latter groupoid coincides with the universal groupoid $\mathscr G(S)$ when $S$ has finitely many idempotents. It therefore seems natural to conjecture that, for any inverse semigroup $S$, one has that $KS\cong K\mathscr G(S)$ for an appropriate definition of $K\mathscr G(S)$. This is what we achieve in this paper. We then proceed to use groupoids to establish a number of new results about inverse semigroup algebras, including a description of all the finite dimensional irreducible representations over a field as induced representations from groups, as well as of all simple modules over an arbitrary commutative ring satisfying a certain additional condition. The idea behind our approach is to view $K$ as a topological ring by endowing it with the discrete topology. One can then imitate the usual definition of continuous functions with compact support on an \'etale groupoid (note that we do not assume that groupoids are Hausdorff, so one must take the usual care with this). 
It turns out that \'etale groupoids are too general to deal with in the discrete context because they do not have enough continuous functions with compact support. However, the category of ample groupoids from~\cite{Paterson} is just fine for the task. These groupoids have a basis of compact open sets and so they have many continuous maps to discrete spaces. A key idea is that the role of Gelfand duality for commutative $C^*$-algebras can now be played by Stone duality for boolean rings. Paterson's universal groupoid $\mathscr G(S)$ is an ample groupoid, and so fits nicely into this context. The paper proceeds as follows. First we develop the basic theory of the convolution algebra $K\mathscr G$ of an ample groupoid $\mathscr G$ over a commutative ring with unit $K$, including a description of its center. We then discuss ample actions of inverse semigroups and the groupoid of germs construction. The algebra of the groupoid of germs is shown to be a crossed product (in an appropriate sense) of a commutative $K$-algebra with the inverse semigroup in the Hausdorff case. The universal groupoid is constructed as the groupoid of germs of the spectral action of $S$, which is shown to be the terminal object in the category of boolean actions via Stone duality. It is proved that the universal groupoid is Hausdorff if and only if the intersection of principal downsets in $S$ is finitely generated as a downset, yielding the converse to a result of Paterson~\cite{Paterson}. The isomorphism of $KS$ with $K\mathscr G(S)$ is then established via the M\"obius inversion trick from~\cite{mobius1,mobius2}; Paterson's result for the universal $C^*$-algebra is obtained as a consequence of the case $K=\mathbb C$ via the Stone-Weierstrass theorem. We believe the proof to be easier than Paterson's since we avoid the arduous detour through a certain auxiliary inverse semigroup that Paterson follows in order to use the theory of localizations~\cite{Paterson}.
Using the isomorphism of $KS$ with $K\mathscr G(S)$ we give a topological proof of a result of Crabb and Munn describing the center of the algebra of a free inverse monoid~\cite{Munncentre}. To study simple $K\mathscr G$-modules, we associate to each unit $x$ of $\mathscr G$ induction and restriction functors between the categories of $KG_x$-modules and $K\mathscr G$-modules, where $G_x$ is the isotropy group of $x$. It turns out that induction preserves simplicity, whereas restriction takes simple modules to either $0$ or simple modules. This allows us to obtain an explicit parameterization of the finite dimensional simple $K\mathscr G$-modules as induced representations from isotropy groups in the case that $K$ is a field. More generally, we can construct all simple $K\mathscr G$-modules satisfying an additional finiteness condition as induced representations for $K$ an arbitrary commutative ring with unit. The methods are reminiscent of the theory developed by Munn and Ponizovsky for finite semigroups~\cite{CP,oknisemigroupalgebra}, as interpreted through~\cite{myirreps}. The final section of the paper applies the results to inverse semigroups via their universal groupoids. In particular, we parameterize all the finite dimensional simple $KS$-modules for an inverse semigroup $S$ in terms of group representations. Munn gave a different construction of the finite dimensional irreducible representations of an arbitrary inverse semigroup~\cite{Munnarb} via cutting to ideals and reducing to the case of $0$-simple inverse semigroups; it takes him a bit of argument to deduce the classical statements for semigroups with finitely many idempotents from this approach. We state our result in a groupoid-free way, although the proof relies on groupoids. As a corollary we give necessary and sufficient conditions for the finite dimensional irreducible representations to separate points of $S$.
Our techniques also construct all the simple $KS$-modules for inverse semigroups $S$ satisfying the descending chain condition on idempotents, or whose idempotents are central or form a descending chain isomorphic to $(\mathbb N,\geq)$. \section{\'Etale and ample groupoids} By a \emph{groupoid} $\mathscr G$, we mean a small category in which every arrow is an isomorphism. Objects will be identified with the corresponding units and the space of units will be denoted $\mathscr G^0$. Then, for $g\in \mathscr G$, the domain and range maps are given by $d(g)=g^{-1} g$ and $r(g)=gg^{-1}$, respectively. A \emph{topological groupoid} is a groupoid whose underlying set is equipped with a topology making the product and inversion continuous (where the set of composable pairs is given the induced topology from the product topology). In this paper, we follow the usage of Bourbaki and reserve the term compact to mean a Hausdorff space with the Heine-Borel property. Notice that a locally compact space need not be Hausdorff. By a \emph{locally compact groupoid}, we mean a topological groupoid $\mathscr G$ that is locally compact and whose unit space $\mathscr G^0$ is locally compact Hausdorff in the induced topology. A locally compact groupoid $\mathscr G$ is said to be \emph{\'etale} if the domain map $d\colon \mathscr G\rightarrow \mathscr G^0$ is \'etale, that is, a local homeomorphism. We do not assume that $\mathscr G$ is Hausdorff. For basic properties of \'etale groupoids (also called $r$-discrete groupoids), we refer to the treatises~\cite{Exel,Paterson,Renault}. We principally follow~\cite{Exel} in terminology. Fix an \'etale groupoid $\mathscr G$ for this section. A basic property of \'etale groupoids is that their unit space is open~\cite[Proposition 3.2]{Exel}. \begin{Prop}\label{unitsareopen} The subspace $\mathscr G^0$ is open in $\mathscr G$. \end{Prop} Of critical importance is the notion of a slice (or $\mathscr G$-set, or local bisection).
\begin{Def}[Slice] A \emph{slice} $U$ is an open subset of $\mathscr G$ such that $d|_U$ and $r|_U$ are injective (and hence homeomorphisms onto their images, since $d$ and $r$ are open). The set of all slices of $\mathscr G$ is denoted $\mathscr G^{op}$. \end{Def} One can view a slice as the graph of a partial homeomorphism between $d(U)$ and $r(U)$ via the topological embedding $U\hookrightarrow d(U)\times r(U)$ sending $u\in U$ to $(d(u),r(u))$. Notice that any slice is locally compact Hausdorff in the induced topology, being homeomorphic to a subspace of $\mathscr G^0$. An \emph{inverse semigroup} is a semigroup $S$ so that, for all $s\in S$, there exists a unique $s^*\in S$ so that $ss^*s=s$ and $s^*ss^*=s^*$. The set $E(S)$ of idempotents of $S$ is a commutative subsemigroup; it is ordered by $e\leq f$ if and only if $ef=e$. With this ordering $E(S)$ is a meet semilattice with the meet given by the product. Hence, it is often referred to as the \emph{semilattice of idempotents} of $S$. The order on $E(S)$ extends to $S$ as the so-called \emph{natural partial order} by putting $s\leq t$ if $s=et$ for some idempotent $e$ (or equivalently $s=tf$ for some idempotent $f$). This is equivalent to $s=ts^*s$ or $s=ss^*t$. If $e\in E(S)$, then the set $G_e=\{s\in S\mid ss^*=e=s^*s\}$ is a group, called the \emph{maximal subgroup} of $S$ at $e$. Idempotents $e,f$ are said to be $\mathscr D$-equivalent, written $e\D f$, if there exists $s\in S$ so that $e=s^*s$ and $f=ss^*$; this is the analogue of Murray-von Neumann equivalence. See the book of Lawson~\cite{Lawson} for details. \begin{Prop} The slices form a basis for the topology of $\mathscr G$. The set $\mathscr G^{op}$ is an inverse monoid under setwise multiplication. The inversion is also setwise and the natural partial order is via inclusion. The semilattice of idempotents is the topology of $\mathscr G^0$. \end{Prop} \begin{proof} See~\cite[Propositions 3.5 and 3.8]{Exel}.
\end{proof} A particularly important class of \'etale groupoids is that of ample groupoids~\cite{Paterson}. \begin{Def}[Ample groupoid] An \'etale groupoid is called \emph{ample} if the compact slices form a basis for its topology. \end{Def} One can show that the compact slices also form an inverse semigroup~\cite{Paterson}. The inverse semigroup of compact slices is denoted $\mathscr G^{a}$. The idempotent set of $\mathscr G^a$ is the semilattice of compact open subsets of $\mathscr G^0$. Notice that if $U\in \mathscr G^a$, then any clopen subset $V$ of $U$ also belongs to $\mathscr G^a$. Since we shall be interested in continuous functions with compact support into discrete rings, we shall restrict our attention to ample groupoids in order to ensure that we have ``enough'' continuous functions with compact support. So from now on $\mathscr G$ is an ample groupoid. To study ample groupoids it is convenient to discuss generalized boolean algebras and Stone duality. \begin{Def}[Generalized boolean algebra] A \emph{generalized boolean algebra} is a poset $P$ admitting finite (including empty) joins and non-empty finite meets so that the meet distributes over the join and if $a\leq b$, then there exists $x\in P$ so that $a\wedge x=0$ and $a\vee x=b$, where $0$ is the bottom of $P$. Then, given $a,b\in P$ one can define the relative complement $a\setminus b$ of $b$ in $a$ to be the unique element $x\in P$ so that $(a\wedge b)\vee x=a$ and $a\wedge b\wedge x=0$. Morphisms of generalized boolean algebras are required to preserve finite joins and finite non-empty meets. A generalized boolean algebra with a maximum (i.e., empty meet) is called a \emph{boolean algebra}. \end{Def} It is well known that a generalized boolean algebra is the same thing as a boolean ring. A \emph{boolean ring} is a ring $R$ with idempotent multiplication. Such rings are automatically commutative of characteristic $2$.
The multiplicative semigroup of $R$ is then a semilattice, which is in fact a generalized boolean algebra. The join is given by $a\vee b = a+b-ab$ and the relative complement by $a\setminus b = a-ab$. Conversely, if $B$ is a generalized boolean algebra, we can place a boolean ring structure on it by using the meet as multiplication and the symmetric difference $a+b=(a\setminus b)\vee (b\setminus a)$ as the addition. Boolean algebras correspond in this way to unital boolean rings. For example, $\{0,1\}$ is a boolean algebra with respect to its usual ordering. The corresponding boolean ring is the two-element field $\mathbb F_2$. See~\cite{halmosnew} for details. \begin{Def}[Locally compact boolean space] A Hausdorff space $X$ is called a \emph{locally compact boolean space} if it has a basis of compact open sets~\cite{halmosnew}. \end{Def} It is easy to see that the set $B(X)$ of compact open subspaces of any Hausdorff space $X$ is a generalized boolean algebra (and is a boolean algebra if and only if $X$ is compact). Restriction to the case of locally compact boolean spaces gives all generalized boolean algebras. In detail, if $A$ is a generalized boolean algebra and $\mathrm{Spec}(A)$ is the set of non-zero morphisms $A\rightarrow \{0,1\}$ endowed with the subspace topology from $\{0,1\}^A$, then $\mathrm{Spec}(A)$ is a locally compact boolean space with $B(\mathrm{Spec}(A))\cong A$. Dually, if $X$ is a locally compact boolean space, then $X\cong \mathrm{Spec}(B(X))$. In fact, $B$ and $\mathrm{Spec}$ give a duality between the categories of locally compact boolean spaces with proper continuous maps and generalized boolean algebras: this is the famous Stone duality. If $\psi\colon X\rightarrow Y$ is a proper continuous map of locally compact boolean spaces, then $\psi^{-1}\colon B(Y)\rightarrow B(X)$ is a homomorphism of generalized boolean algebras; recall that a continuous map is \emph{proper} if the preimage of each compact set is compact. 
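As a concrete illustration of the boolean algebra/boolean ring translation described above (our own sketch, not from the paper), finite sets form a generalized boolean algebra, and the induced ring operations are intersection (the meet) and symmetric difference (the addition); since $-1=1$ in characteristic $2$, the identities $a\vee b=a+b-ab$ and $a\setminus b=a-ab$ can be verified directly:

```python
# Boolean ring structure on the generalized boolean algebra of finite
# sets: multiplication is the meet (intersection), addition is the
# symmetric difference.  Illustrative sketch only.
def mul(a, b):              # a * b  (the meet)
    return a & b

def add(a, b):              # a + b, characteristic 2
    return a ^ b

def join(a, b):             # a v b = a + b - ab = a + b + ab
    return add(add(a, b), mul(a, b))

def rel_complement(a, b):   # a \ b = a - ab = a + ab
    return add(a, mul(a, b))

a, b = {1, 2, 3}, {3, 4}
assert join(a, b) == a | b             # the join is the union
assert rel_complement(a, b) == a - b   # set-theoretic difference
assert add(a, a) == set()              # a + a = 0: characteristic 2
assert mul(a, a) == a                  # idempotent multiplication
```

The same dictionary applies verbatim to $B(X)$ for a Hausdorff space $X$, with compact open sets in place of finite sets.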
If $\varphi\colon A\rightarrow A'$ is a morphism of generalized boolean algebras, then $\widehat{\varphi}\colon \mathrm{Spec}(A')\rightarrow \mathrm{Spec}(A)$ is given by $\psi\mapsto \psi\varphi$. The homeomorphism $X\rightarrow \mathrm{Spec}(B(X))$ is given by $x\mapsto \varphi_x$ where $\varphi_x(U) = \chi_U(x)$. The isomorphism $A\rightarrow B(\mathrm{Spec}(A))$ sends $a$ to $D(a)=\{\varphi\mid \varphi(a)=1\}$. The reader is referred to~\cite{halmosnew,stonespace2} for further details. A key example for us arises from the consideration of ample groupoids. If $\mathscr G$ is an ample groupoid, then $\mathscr G^0$ is a locally compact boolean space and $B(\mathscr G^0)=E(\mathscr G^a)$. In fact, one has the following description of ample groupoids. \begin{Prop} An \'etale groupoid $\mathscr G$ is ample if and only if $\mathscr G^0$ is a locally compact boolean space. \end{Prop} \begin{proof} If $\mathscr G$ is ample, we already observed that $\mathscr G^0$ is a locally compact boolean space. For the converse, since $\mathscr G^{op}$ is a basis for the topology, it suffices to show that each $U\in \mathscr G^{op}$ is a union of compact slices. But $U$ is homeomorphic to $d(U)$ via $d|_U$. Since $\mathscr G^0$ is a locally compact boolean space, we can write $d(U)$ as a union of compact open subsets of $\mathscr G^0$ and hence we can write $U$ as a union of compact open slices by applying $d|_U^{-1}$. \end{proof} In any poset $P$, it will be convenient to use, for $p\in P$, the notation \begin{align*} p^{\uparrow}&=\{q \in P\mid q\geq p\}\\ p^{\downarrow}&=\{q\in P\mid q\leq p\}. \end{align*} \begin{Def}[Semi-boolean algebra] A poset $P$ is called a \emph{semi-boolean algebra} if each principal downset $p^{\downarrow}$ with $p\in P$ is a boolean algebra. \end{Def} It is immediate that every generalized boolean algebra is a semi-boolean algebra. A key example for us is the inverse semigroup $\mathscr G^a$ for an ample groupoid $\mathscr G$.
\begin{Prop}\label{semibooleanalgebra} Let $\mathscr G$ be an ample groupoid. Then $\mathscr G^a$ is a semi-boolean algebra. Moreover, the following are equivalent: \begin{enumerate} \item $\mathscr G$ is Hausdorff; \item $\mathscr G^a$ is closed under pairwise intersections; \item $\mathscr G^a$ is closed under relative complements. \end{enumerate} \end{Prop} \begin{proof} Let $U\in \mathscr G^a$. Then the map $d\colon U\rightarrow d(U)$ gives an isomorphism between the posets $U^{\downarrow}$ and $B(d(U))$. Since $B(d(U))$ is a boolean algebra, this proves the first statement. Suppose that $\mathscr G$ is Hausdorff and $U,V\in \mathscr G^a$. Then $U\cap V$ is a clopen subset of $U$ and hence belongs to $\mathscr G^a$. If $\mathscr G^a$ is closed under pairwise intersections and $U,V\in \mathscr G^a$, then $U\cap V$ is compact open and so $U\cap V$ is clopen in $U$. Then $U\setminus V = U\setminus (U\cap V)$ is a clopen subset of $U$ and hence belongs to $\mathscr G^a$. Finally, suppose that $\mathscr G^a$ is closed under relative complements and let $g,h\in \mathscr G$. As $\mathscr G^a$ is a basis for the topology on $\mathscr G$, we can find slices $U,V\in \mathscr G^a$ with $g\in U$ and $h\in V$. If $g,h\in U$ or $g,h\in V$, then we can clearly separate them by disjoint open sets since $U$ and $V$ are Hausdorff. Otherwise, $g\in U\setminus V$, $h\in V\setminus U$ and these are disjoint open sets as $\mathscr G^a$ is closed under relative complements. This completes the proof. \end{proof} \section{The algebra of an ample groupoid} Fix for this section an ample groupoid $\mathscr G$. Following the idea of Connes~\cite{Connes}, we now define the space of continuous $K$-valued functions with compact support on $\mathscr G$ where $K$ is a commutative ring with unit. 
\begin{Def}[$K\mathscr G$]\label{definecompactsupport} If $\mathscr G$ is an ample groupoid and $K$ is a commutative ring with unit equipped with the discrete topology, then $K\mathscr G$ is the $K$-module of $K$-valued functions on $\mathscr G$ spanned by the functions $f\colon \mathscr G\rightarrow K$ such that: \begin{enumerate} \item There is an open Hausdorff subspace $V$ in $\mathscr G$ so that $f$ vanishes outside $V$; \item $f|_V$ is continuous with compact support. \end{enumerate} We call $K\mathscr G$ the algebra of continuous $K$-valued functions on $\mathscr G$ with compact support (but the reader is cautioned that if $\mathscr G$ is not Hausdorff, then $K\mathscr G$ will contain discontinuous functions). \end{Def} For example, if $\mathscr G$ has the discrete topology, then one can identify $K\mathscr G$ with the $K$-module of all functions of finite support on $\mathscr G$. A basis then consists of the functions $\delta_g$ with $g\in \mathscr G$. In general, as $K$ is discrete, the support of a function $f$ as in (1) will be, in fact, compact open and so one may take $V$ to be the support. \begin{Prop}\label{supportisopen} With the notation of Definition~\ref{definecompactsupport}, one can always choose $V$ to be compact open so that $\mathrm{supp}(f)=V=f^{-1}(K\setminus \{0\})$. \end{Prop} \begin{proof} Let $f$ and $V$ be as in (1) and (2). Since $K$ is discrete and $f(\mathrm{supp}(f|_V))$ is compact, it must in fact be finite. Hence $f^{-1}(f(V)\setminus \{0\})$ is a clopen subset of $V$ contained in $\mathrm{supp}(f)$ and so is compact open. It follows that $\mathrm{supp}(f|_V)=f^{-1}(K\setminus \{0\})$ is compact open and may be used in place of $V$ in (1) of Definition~\ref{definecompactsupport}. \end{proof} Notice that if $\mathscr G$ is not Hausdorff, it will have compact open subsets that are not closed (cf.~Proposition~\ref{semibooleanalgebra}).
The corresponding characteristic function of such a compact open will be discontinuous, but belong to $K\mathscr G$. It turns out that the algebraic structure of $K\mathscr G$ is controlled by $\mathscr G^a$. We start at the level of $K$-modules. \begin{Prop}\label{characteristicbasis} The space $K\mathscr G$ is spanned by the characteristic functions of elements of $\mathscr G^a$. \end{Prop} \begin{proof} Evidently, if $U\in \mathscr G^a$, then $\chi_U\in K\mathscr G$. Let $A$ be the subspace spanned by such characteristic functions. By Proposition~\ref{supportisopen}, it suffices to show that if $f\colon \mathscr G\rightarrow K$ is a function so that $V=f^{-1}(K\setminus \{0\})$ is compact open and $f|_V$ is continuous, then $f\in A$. Since $K$ is discrete and $V$ is compact, we have $f(V)\setminus \{0\}=\{c_1,\ldots,c_r\}$ for certain $c_i\in K\setminus \{0\}$ and the $V_i=f^{-1}(c_i)$, for $i=1,\ldots, r$, are disjoint compact open subsets of $V$. Then $f=c_1\chi_{V_1}+\cdots+c_r\chi_{V_r}$ and so it suffices to show that if $U$ is a compact open subset of $\mathscr G$, then $\chi_U\in A$. Since $\mathscr G^a$ is a basis for the topology of $\mathscr G$ and $U$ is compact open, it follows that $U=U_1\cup \cdots \cup U_n$ with the $U_i\in \mathscr G^a$. Since $U_i\subseteq U$, for $i=1,\ldots,n$, and $U$ is Hausdorff, it follows that any finite intersection of elements of the set $\{U_1,\ldots, U_n\}$ belongs to $\mathscr G^a$. The principle of inclusion-exclusion yields: \begin{equation}\label{charfuncs} \chi_U=\chi_{U_1\cup\cdots\cup U_n} = \sum_{k=1}^n (-1)^{k-1}\sum_{\substack{I\subseteq \{1,\ldots,n\}\\ |I|=k}}\chi_{\bigcap_{i\in I}U_i} \end{equation} Hence $\chi_U\in A$, as required. \end{proof} We now define the convolution product on $K\mathscr G$ in order to make it a $K$-algebra. \begin{Def}[Convolution] Let $f,g\in K\mathscr G$.
Then their \emph{convolution} $f\ast g$ is defined, for $x\in \mathscr G$, by \[f\ast g(x) = \sum_{y\in d^{-1}d(x)}f(xy^{-1})g(y).\] \end{Def} Of course, one must show that this sum has only finitely many non-zero terms and that $f\ast g$ belongs to $K\mathscr G$, which is the content of the following proposition. \begin{Prop}\label{convolutionwelldefined} Let $f,g\in K\mathscr G$. Then: \begin{enumerate} \item $f\ast g\in K\mathscr G$; \item If $f,g$ are continuous with compact support on $U,V\in \mathscr G^a$, respectively, then $f\ast g$ is continuous with compact support on $UV$; \item If $U,V\in \mathscr G^a$, then $\chi_U\ast \chi_V=\chi_{UV}$; \item If $U\in \mathscr G^a$, then $\chi_{U^{-1}}(x) = \chi_U(x^{-1})$. \end{enumerate} \end{Prop} \begin{proof} Since the characteristic functions of elements of $\mathscr G^a$ span $K\mathscr G$ by Proposition~\ref{characteristicbasis}, it is easy to see that (1) and (2) are consequences of (3). We proceed to the task at hand: establishing (3). Indeed, we have \begin{equation}\label{convolutionofslice} \chi_U\ast \chi_V(x) = \sum_{y\in d^{-1} d(x)}\chi_U(xy^{-1})\chi_V(y). \end{equation} Suppose first $x\in UV$. Then we can find $a\in U$ and $b\in V$ so that $x=ab$. Therefore, $a=xb^{-1}$, $d(x)=d(b)$ and $\chi_U(xb^{-1})\chi_V(b)=1$. Moreover, since $U$ and $V$ are slices, $b$ is the unique element of $V$ with $d(x)=d(b)$. Thus the right hand side of \eqref{convolutionofslice} is $1$. Conversely, suppose $x\notin UV$ and let $y\in d^{-1} d(x)$. If $y\notin V$, then $\chi_V(y)=0$. On the other hand, if $y\in V$, then $xy^{-1}\notin U$, for otherwise we would have $x=xy^{-1}\cdot y\in UV$. Thus $\chi_U(xy^{-1})=0$. Therefore, each term of the right hand side of \eqref{convolutionofslice} is zero and so $\chi_U\ast \chi_V = \chi_{UV}$, as required. Statement (4) is trivial. \end{proof} The associativity of convolution is a straightforward but tedious exercise~\cite{Exel,Paterson}.
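For a discrete groupoid the convolution can be computed directly. The Python sketch below (illustrative code of our own) realizes the pair groupoid on three points, where convolution reduces to matrix multiplication, and checks both $\chi_U\ast\chi_V=\chi_{UV}$ for a pair of slices and associativity on random functions:

```python
import random

# Pair groupoid on n points: arrows (i, j) with d(i, j) = j, r(i, j) = i,
# (i, j)(j, k) = (i, k) and (i, j)^{-1} = (j, i).  Convolution in K(G)
# is then matrix multiplication, so K(G) is the n x n matrix algebra.
n = 3
G = [(i, j) for i in range(n) for j in range(n)]
d = lambda x: x[1]
inv = lambda x: (x[1], x[0])
mul = lambda x, y: (x[0], y[1])        # only used when d(x) = r(y)

def conv(f, g):
    # (f * g)(x) = sum over y with d(y) = d(x) of f(x y^{-1}) g(y)
    return {x: sum(f[mul(x, inv(y))] * g[y] for y in G if d(y) == d(x))
            for x in G}

chi = lambda S: {x: int(x in S) for x in G}
def prod_set(U, V):                    # UV = {uv | u in U, v in V, d(u) = r(v)}
    return {mul(u, v) for u in U for v in V if u[1] == v[0]}

# two slices (graphs of partial bijections) and their product
U = {(0, 1), (1, 2)}
V = {(1, 0), (2, 2)}
assert conv(chi(U), chi(V)) == chi(prod_set(U, V))

# associativity on random K-valued functions (K = Z here)
random.seed(0)
f, g, h = ({x: random.randint(-3, 3) for x in G} for _ in range(3))
assert conv(conv(f, g), h) == conv(f, conv(g, h))
```

Identifying a function $f$ with the matrix $(f(i,j))$ turns `conv` into the usual matrix product, an instance of the fact that the algebra of the pair groupoid on $n$ points is $M_n(K)$.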
\begin{Prop} Let $K$ be a commutative ring with unit and $\mathscr G$ an ample groupoid. Then $K\mathscr G$ equipped with convolution is a $K$-algebra. \end{Prop} If $K=\mathbb C$, we make $\mathbb C \mathscr G$ into a $\ast$-algebra by defining $f^*(x) = \ov{f(x^{-1})}$. \begin{Cor} The map $\varphi\colon \mathscr G^a\rightarrow K\mathscr G$ given by $\varphi(U) = \chi_U$ is a homomorphism. \end{Cor} \begin{Rmk}[Groups] If $\mathscr G^0$ is a singleton, so that $\mathscr G$ is a discrete group, then $K\mathscr G$ is the usual group algebra. \end{Rmk} \begin{Rmk}[Locally compact boolean spaces] In the case $\mathscr G=\mathscr G^0$, one has that $K\mathscr G$ is the subalgebra of $K^{\mathscr G}$ spanned by the characteristic functions of compact open subsets of $\mathscr G$ equipped with the pointwise product. If $K=\mathbb F_2$, then $K\mathscr G\cong B(\mathscr G^0)$ viewed as a boolean ring. \end{Rmk} \begin{Rmk}[Discrete groupoids]\label{discretegroupoid} Notice that if $\mathscr G$ is a discrete groupoid and $g\in \mathscr G$, then $\{g\}\in \mathscr G^a$ and $\delta_g=\chi_{\{g\}}$. It follows easily that \[\delta_g\ast \delta_h= \begin{cases} \delta_{gh} & d(g)=r(h)\\ 0 &\text{else}.\end{cases}\] Thus $K\mathscr G$ can be identified with the $K$-algebra having basis $\mathscr G$ and whose product extends that of $\mathscr G$ where we interpret undefined products as $0$. This is exactly the groupoid algebra considered, for example, in~\cite{mobius1,mobius2}. \end{Rmk} Propositions~\ref{characteristicbasis} and~\ref{convolutionwelldefined} imply that $K\mathscr G$ is a quotient of the semigroup algebra $K\mathscr G^a$. Clearly $K\mathscr G$ satisfies the relations $\chi_{U\cup V}=\chi_U+\chi_V$ whenever $U,V\in B(\mathscr G^0)$ with $U\cap V=\emptyset$. We show that these relations define $K\mathscr G$ as a quotient of $K\mathscr G^a$ in the case that $\mathscr G$ is Hausdorff. 
This result should hold in general (in analogy with the analytic setting~\cite{Paterson}), but so far we have been unsuccessful in proving it. First we need a definition. If $U,V\in \mathscr G^a$, we say $U$ is \emph{orthogonal} to $V$, written $U\perp V$, if $UV^{-1} =\emptyset=U^{-1} V$. In this case $U\cup V$ is a disjoint union and belongs to $\mathscr G^a$. Indeed, $U^{-1} UV^{-1} V=\emptyset=UU^{-1} VV^{-1}$ and so $U,V$ have disjoint images under both $d$ and $r$. Thus $d,r$ restrict to homeomorphisms of $U\cup V$ with $U^{-1} U\cup V^{-1} V$ and $UU^{-1}\cup VV^{-1}$, respectively. Consequently, $U\cup V$ is a compact open slice. More generally, if $U_1,\ldots, U_n\in \mathscr G^a$ are pairwise orthogonal, then the union $U_1\cup\cdots \cup U_n$ is disjoint and a compact open slice. See~\cite{Paterson} for details. \begin{Thm}\label{presentation} Let $\mathscr G$ be a Hausdorff ample groupoid. Then $K\mathscr G=K\mathscr G^a/I$ where $I$ is the ideal generated by all elements $U+V-(U\cup V)$ where $U,V$ are disjoint elements of $B(\mathscr G^0)$. \end{Thm} \begin{proof} First of all, Propositions~\ref{characteristicbasis} and~\ref{convolutionwelldefined} yield a surjective homomorphism $\lambda\colon K\mathscr G^a\rightarrow K\mathscr G$ given by $U\mapsto \chi_U$ and evidently $I\subseteq \ker \lambda$. We establish the converse via several intermediate steps. First note that $\emptyset\in I$ since $\emptyset = \emptyset+\emptyset -(\emptyset\cup \emptyset)\in I$. \begin{Step}\label{additive} Suppose that $U\perp V$ with $U,V\in \mathscr G^a$. Then $U+V-(U\cup V)\in I$. \end{Step} \begin{proof} Note $U\cup V = (U\cup V)(U^{-1} U\cup V^{-1} V)$ and $U^{-1} U\cap V^{-1} V=\emptyset$. Thus \[U+V-(U\cup V) = (U\cup V)[U^{-1} U+V^{-1} V-(U^{-1} U\cup V^{-1} V)]\in I\] as required. \end{proof} The next step uses that $\mathscr G^a$ is closed under pairwise intersection and relative complement in the Hausdorff setting (Proposition~\ref{semibooleanalgebra}).
\begin{Step}\label{writeasunion} If $U_1,\ldots, U_n\in \mathscr G^a$, then we can find $V_1,\ldots, V_m\in \mathscr G^a$ so that: \begin{enumerate} \item $V_i\cap V_j=\emptyset$ all $i\neq j$; \item $V_1\cup\cdots\cup V_m = U_1\cup\cdots\cup U_n$; \item For all $1\leq i\leq m$ and $1\leq j\leq n$, either $V_i\subseteq U_j$ or $V_i\cap U_j=\emptyset$. \end{enumerate} \end{Step} \begin{proof} We induct on $n$, the case $n=1$ being trivial as we can take $m=1$ and $V_1=U_1$. Assume the statement for $n-1$ and find pairwise disjoint elements $W_1,\ldots, W_m\in \mathscr G^a$ so that $W_1\cup\cdots \cup W_m=U_1\cup\cdots\cup U_{n-1}$ and, for all $1\leq i\leq m$ and $1\leq j\leq n-1$, either $W_i\subseteq U_j$ or $W_i\cap U_j=\emptyset$. Set $V_i=W_i\cap U_n$, $V_i'=W_i\setminus U_n$ (for $i=1,\ldots,m$) and put $V_{m+1} = (U_n\setminus W_1)\cap \cdots \cap (U_n\setminus W_m)= U_n\setminus (W_1\cup\cdots\cup W_m)$. All these sets are elements of $\mathscr G^a$, some of which may be empty. It is clear from the construction that the $V_i$ ($1\leq i\leq m+1$) and $V_i'$ ($1\leq i\leq m$) form a collection of pairwise disjoint subsets. Moreover, \begin{align*} V_1\cup V_1'\cup\cdots\cup V_m\cup V'_m\cup V_{m+1} &= (W_1\cup\cdots\cup W_m)\cup (U_n\setminus (W_1\cup\cdots\cup W_m))\\ &= U_1\cup\cdots \cup U_n \end{align*} and so the second condition holds. For the final condition, first note that $V_{m+1}\subseteq U_n$ and intersects no other $U_i$. On the other hand, for $1\leq i\leq m$, we have $V_i\subseteq U_n$ and $V_i'\cap U_n=\emptyset$. For any $1\leq j\leq n-1$ and $1\leq i\leq m$, as $V_i,V_i'\subseteq W_i$, we have that $V_i\cap U_j\neq \emptyset$ implies $V_i\subseteq W_i\subseteq U_j$ and similarly $V_i'\cap U_j\neq \emptyset$ implies $V_i'\subseteq W_i\subseteq U_j$. Discarding empty sets and relabeling establishes Step~\ref{writeasunion}. \end{proof} Our next step is an easy observation.
\begin{Step}\label{disjointgivesperp} Suppose that $U,V\in \mathscr G^a$ are disjoint and have a common upper bound $W\in \mathscr G^a$. Then $U\perp V$. \end{Step} \begin{proof} Suppose that $UV^{-1} \neq \emptyset$. Then there exist $u\in U$ and $v\in V$ with $d(u)=d(v)$. But $u,v\in W$ then implies $u=v$. This contradicts that $U\cap V=\emptyset$. The proof that $U^{-1} V=\emptyset$ is dual. \end{proof} We may now complete the proof. Suppose $0=\sum_{i=1}^n c_i\chi_{U_i}$ with $c_i\in K$ and $U_i\in \mathscr G^a$, for $1\leq i\leq n$. Choose $V_1,\ldots, V_m\in \mathscr G^a$, as per Step~\ref{writeasunion}. Then, for each $j$, we can write $U_j = V_{i_1}\cup \cdots\cup V_{i_{k_j}}$ for certain indices $i_1,\ldots,i_{k_j}$. Since the $V_{i_r}$ are pairwise disjoint subsets of the slice $U_j$, they are in fact mutually orthogonal by Step~\ref{disjointgivesperp}. Repeated application of Step~\ref{additive} now yields that $U_j +I= V_{i_1}+\cdots+V_{i_{k_j}}+I$. On the other hand, since the union is disjoint, clearly $\chi_{U_j}= \chi_{V_{i_1}}+\cdots+\chi_{V_{i_{k_j}}}$. We conclude that there exist $d_j\in K$, for $j=1,\ldots, m$, so that $\sum_{i=1}^n c_iU_i+I = \sum_{j=1}^m d_jV_j+I$ and $0=\sum_{i=1}^n c_i\chi_{U_i}=\sum_{j=1}^m d_j\chi_{V_j}$. But since the $V_j$ are disjoint, this immediately yields $d_1=\cdots=d_m=0$ and so $\sum_{i=1}^n c_iU_i\in I$, as required. \end{proof} Our next goal is to show that $K\mathscr G$ is unital if and only if $\mathscr G^0$ is compact. \begin{Prop}\label{unital} The $K$-algebra $K\mathscr G$ is unital if and only if $\mathscr G^0$ is compact. \end{Prop} \begin{proof} Suppose first $\mathscr G^0$ is compact. Since it is open in the relative topology by Proposition~\ref{unitsareopen}, it follows that $u=\chi_{\mathscr G^0}\in K\mathscr G$. Now if $f\in K\mathscr G$, then we compute \[f\ast u(x) = \sum_{y\in d^{-1} d(x)}f(xy^{-1})u(y) = f(x)\] since $d(x)$ is the unique element of $\mathscr G^0$ in $d^{-1} d(x)$.
Similarly, \[u\ast f(x) = \sum_{y\in d^{-1} d(x)}u(xy^{-1})f(y) = f(x)\] since $xy^{-1} \in\mathscr G^0$ implies $x=y$. Thus $u$ is the multiplicative identity of $K\mathscr G$. Conversely, suppose $u$ is the multiplicative identity. We first claim that $u=\chi_{\mathscr G^0}$. Let $x\in \mathscr G$. Choose a compact open set $U\subseteq \mathscr G^0$ with $d(x)\in U$. Suppose first $x\notin \mathscr G^0$. Then \[0=\chi_U(x) = u\ast \chi_U(x) = \sum_{y\in d^{-1} d(x)}u(xy^{-1})\chi_U(y) = u(x)\] since $\{d(x)\} = U\cap d^{-1} d(x)$. Similarly, if $x\in \mathscr G^0$, then we have \[1=\chi_U(x) = u\ast \chi_U(x) = \sum_{y\in d^{-1} d(x)}u(xy^{-1})\chi_U(y) = u(x).\] So we must show that $\chi_{\mathscr G^0}\in K\mathscr G$ implies that $\mathscr G^0$ is compact. By Proposition~\ref{characteristicbasis}, there exist $U_1,\ldots, U_k\in \mathscr G^a$ and $c_1,\ldots, c_k\in K$ so that $\chi_{\mathscr G^0} = c_1\chi_{U_1}+\cdots+c_k\chi_{U_k}$. Thus $\mathscr G^0\subseteq U_1\cup \cdots\cup U_k$ and hence $\mathscr G^0= d(U_1)\cup \cdots\cup d(U_k)$, as $x=d(x)\in d(U_i)$ whenever $x\in \mathscr G^0\cap U_i$. Each $d(U_i)$ is compact, being homeomorphic to $U_i$, so $\mathscr G^0$ is compact, as required. \end{proof} The center of $K\mathscr G$ can be described by functions that are constant on conjugacy classes, analogously to the case of groups. \begin{Def}[Class function] Define $f\in K\mathscr G$ to be a \emph{class function} if: \begin{enumerate} \item $f(x)\neq 0$ implies $d(x)=r(x)$; \item $d(x)=r(x)=d(z)$ implies $f(zxz^{-1})=f(x)$. \end{enumerate} \end{Def} \begin{Prop} The center of $K\mathscr G$ is the set of class functions. \end{Prop} \begin{proof} Suppose first that $f$ is a class function and $g\in K\mathscr G$. Then \begin{equation}\label{findcenter} f\ast g(x) = \sum_{y\in d^{-1}d(x)}f(xy^{-1})g(y) = \sum_{y\in d^{-1}d(x)\cap r^{-1}r(x)}f(xy^{-1})g(y) \end{equation} since $f(xy^{-1})=0$ if $r(x)=r(xy^{-1})\neq d(xy^{-1}) = r(y)$.
But $f(xy^{-1}) = f(y(y^{-1} x)y^{-1})=f(y^{-1} x)$ since $f$ is a class function and $d(y^{-1} x)=d(x)=d(y)=r(y^{-1} x)$. Performing the change of variables $z=y^{-1} x$, we obtain that the right hand side of \eqref{findcenter} is equal to \[\sum_{z\in d^{-1}d(x)\cap r^{-1}d(x)} g(xz^{-1})f(z)= \sum_{z\in d^{-1}d(x)} g(xz^{-1})f(z) = g\ast f(x)\] where the first equality uses that $f(z)=0$ if $d(z)\neq r(z)$. Thus $f\in Z(K\mathscr G)$. Conversely, suppose $f\in Z(K\mathscr G)$. First we consider the case $x\in \mathscr G$ and $d(x)\neq r(x)$. Choose a compact open set $U\subseteq \mathscr G^0$ so that $d(x)\in U$ and $r(x)\notin U$. Then \[\chi_U\ast f(x) = \sum_{y\in d^{-1} d(x)} \chi_U(xy^{-1})f(y) = 0\] since $xy^{-1}\in U$ forces it to be a unit, but then $y=x$ and $xx^{-1} =r(x)\notin U$. On the other hand, \[f\ast \chi_U(x) = \sum_{y\in d^{-1} d(x)} f(xy^{-1})\chi_U(y) = f(x)\] since $d(x)$ is the unique element of $d^{-1} d(x)$ in $U$. Thus $f(x)=0$. The remaining case is that $d(x)=r(x)$ and we have $d(z)=d(x)$. Then $zx^{-1}$ is defined. Choose $U\in \mathscr G^a$ so that $zx^{-1} \in U$. Then \[f\ast \chi_U(z) = \sum_{y\in d^{-1} d(z)}f(zy^{-1})\chi_U(y) = f(zxz^{-1})\] since $y\in U\cap d^{-1} d(z)$ implies $y=zx^{-1}$. On the other hand, \[\chi_U\ast f(z) = \sum_{y\in d^{-1} d(z)} \chi_U(zy^{-1})f(y)=f(x)\] since $r(zy^{-1}) = r(zx^{-1})$ and so $zy^{-1} \in U$ implies $zy^{-1} =zx^{-1}$, whence $y=x$. This shows that $f(x)=f(zxz^{-1})$, completing the proof of the proposition. \end{proof} Our next proposition provides a sufficient condition for the characteristic functions of an inverse subsemigroup of $\mathscr G^a$ to span $K\mathscr G$.
\setcounter{Step}{0} \begin{Prop}\label{smgroupgen} Let $S\subseteq \mathscr G^a$ be an inverse subsemigroup such that: \begin{enumerate} \item $E(S)$ generates the generalized boolean algebra $B(\mathscr G^0)$; \item $D=\{U\in \mathscr G^a\mid U\subseteq V\ \text{for some}\ V\in S\}$ is a basis for the topology on $\mathscr G$. \end{enumerate} Then $K\mathscr G$ is spanned by the characteristic functions of elements of $S$. \end{Prop} \begin{proof} Let $A$ be the span of the $\chi_V$ with $V\in S$. Then $A$ is a $K$-subalgebra by Proposition~\ref{convolutionwelldefined}. We break the proof up into several steps. \begin{Step}\label{step1} The collection $B$ of compact open subsets of $\mathscr G^0$ so that $\chi_U\in A$ is a generalized boolean algebra. \end{Step} \begin{proof} Since $A$ is a subalgebra, this is immediate from the formulas, for $U,V\in B(\mathscr G^0)$, \begin{align*} \chi_U\ast \chi_V &= \chi_{UV} = \chi_{U\cap V}\\ \chi_{U\setminus V} & = \chi_U-\chi_{U\cap V}\\ \chi_{U\cup V} &= \chi_{U\setminus V}+\chi_{V\setminus U}+\chi_{U\cap V}. \end{align*} \end{proof} We may now conclude by (1) that $A$ contains $\chi_U$ for every $U\in B(\mathscr G^0)$. \begin{Step}\label{step2} The characteristic function of each element of $D$ belongs to $A$. \end{Step} \begin{proof} If $U\subseteq V$ with $V\in S$, then $VU^{-1} U=U$ and so $\chi_{U}=\chi_V\ast \chi_{U^{-1} U}\in A$ by Step~\ref{step1} since $U^{-1} U\in E(\mathscr G^a)=B(\mathscr G^0)$. \end{proof} \begin{Step}\label{step3} Each $\chi_U$ with $U\in \mathscr G^a$ belongs to $A$. \end{Step} \begin{proof} Since $D$ is a basis for $\mathscr G$ by hypothesis and $U$ is compact open, we may write $U=U_1\cup\cdots\cup U_n$ with the $U_i\in D$, for $i=1,\ldots, n$. Since the $U_i$ are all contained in $U$, any finite intersection of the $U_i$ is clopen in $U$ and hence belongs to $\mathscr G^a$. As $D$ is a downset, in fact, any finite intersection of the $U_i$ belongs to $D$.
Therefore, $\chi_U\in A$ by \eqref{charfuncs}. \end{proof} Proposition~\ref{characteristicbasis} now yields the result. \end{proof} \section{Actions of inverse semigroups and groupoids of germs} As inverse semigroups are models of partial symmetry~\cite{Lawson}, it is natural to study them via their actions on spaces. From such ``dynamical systems'' we can form a groupoid of germs and hence, in the ample setting, a $K$-algebra. This $K$-algebra will be shown to behave like a crossed product. \subsection{The category of actions} Let $X$ be a locally compact Hausdorff space and denote by $I_X$ the inverse semigroup of all homeomorphisms between open subsets of $X$. \begin{Def}[Action] An \emph{action} of an inverse semigroup $S$ on $X$ is a homomorphism $\varphi\colon S\rightarrow I_X$ written $s\mapsto \varphi_s$. As usual, if $s\in S$ and $x\in \mathrm{dom}(\varphi_s)$, then we put $sx=\varphi_s(x)$. Let us set $X_e$ to be the domain of $\varphi_e$ for $e\in E(S)$. The action is said to be \emph{non-degenerate} if $X=\bigcup_{e\in E(S)}X_e$. If $\psi\colon S\rightarrow I_Y$ is another action, then a morphism from $\varphi$ to $\psi$ is a continuous map $\alpha\colon X\rightarrow Y$ so that, for all $x\in X$, one has that $sx$ is defined if and only if $s\alpha(x)$ is defined, in which case $\alpha(sx) = s\alpha(x)$. \end{Def} We will be most interested in what we term ``ample'' actions. These actions will give rise to ample groupoids via the groupoid of germs construction. \begin{Def}[Ample action] A non-degenerate action $\varphi\colon S\rightarrow I_X$ of an inverse semigroup $S$ on a space $X$ is said to be \emph{ample} if: \begin{enumerate} \item $X$ is a locally compact boolean space; \item $X_e\in B(X)$, for all $e\in E(S)$. \end{enumerate} If, in addition, the collection $\{X_e\mid e\in E(S)\}$ generates $B(X)$ as a generalized boolean algebra, we say the action is \emph{boolean}.
\end{Def} If $\mathscr G$ is an ample groupoid, there is a natural boolean action of $\mathscr G^a$ on $\mathscr G^0$. Namely, if $U\in \mathscr G^a$, then the domain of its action is $U^{-1} U$ and the range is $UU^{-1}$. If $x\in U^{-1} U$, then there is a unique element $g\in U$ with $d(g)=x$. Define $Ux=r(g)$. This is exactly the partial homeomorphism whose ``graph'' is $U$. See~\cite{Exel,Paterson} for details. \begin{Prop}\label{propermap} Suppose that $S$ has ample actions on $X$ and $Y$ and let $\alpha\colon X\rightarrow Y$ be a morphism of the actions. Then: \begin{enumerate} \item For each $e\in E(S)$, one has $\alpha^{-1}(Y_e)=X_e$; \item $\alpha$ is proper; \item $\alpha$ is closed. \end{enumerate} \end{Prop} \begin{proof} From the definition of a morphism, $\alpha(x)\in Y_e$ if and only if $e\alpha(x)$ is defined, if and only if $ex$ is defined, if and only if $x\in X_e$. This proves (1). For (2), let $C\subseteq Y$ be compact. Since the action on $Y$ is non-degenerate, we have that $C\subseteq \bigcup_{e\in E(S)} Y_e$ and so by compactness of $C$ it follows that $C\subseteq Y_{e_1}\cup\cdots\cup Y_{e_n}$ for some idempotents $e_1,\ldots,e_n$. Thus $\alpha^{-1} (C)$ is a closed subspace of $\alpha^{-1} (Y_{e_1}\cup\cdots\cup Y_{e_n}) = X_{e_1}\cup\cdots \cup X_{e_n}$ and hence is compact since the $X_{e_i}$ are compact. Finally, it is well known that a proper map between locally compact Hausdorff spaces is closed. \end{proof} For us the main example of a boolean action is the action of an inverse semigroup $S$ on the spectrum of its semilattice of idempotents. For a semilattice $E$, we denote by $\widehat E$ the space of non-zero semilattice homomorphisms $\varphi\colon E\rightarrow \{0,1\}$ topologized as a subspace of $\{0,1\}^E$. 
Such a homomorphism extends uniquely to a boolean ring homomorphism $\mathbb F_2E\rightarrow \mathbb F_2$ and hence $\widehat E\cong \mathrm{Spec}(\mathbb F_2E)$; in particular, $\widehat E$ is a locally compact boolean space with $B(\widehat E)\cong \mathbb F_2E$ (viewed as a generalized boolean algebra). For $e\in E$, define $D(e) = \{\varphi\in \widehat E\mid \varphi(e)=1\}$; this is the compact open set corresponding to $e$ under the above isomorphism. The semilattice of subsets of the form $D(e)$ generates $B(\widehat E)$ as a generalized boolean algebra because $E$ generates $\mathbb F_2E$ as a boolean ring. In fact, the map $e\mapsto D(e)$ is the universal semilattice homomorphism of $E$ into a generalized boolean algebra (corresponding to the universal property of the inclusion $E\rightarrow \mathbb F_2E$). Elements of $\widehat E$ are often referred to as \emph{characters}. \begin{Def}[Spectral action] Suppose that $S$ is an inverse semigroup with semilattice of idempotents $E(S)$. To each $s\in S$, there is an associated homeomorphism $\beta_s\colon D(s^*s)\rightarrow D(ss^*)$ given by $\beta_s(\varphi)(e) = \varphi(s^*es)$. The map $s\mapsto \beta_s$ provides a boolean action $\beta\colon S\rightarrow I_{\widehat{E(S)}}$~\cite{Exel,Paterson}, which we call the \emph{spectral action}. \end{Def} The spectral action enjoys the following universal property. \begin{Prop}\label{universal} Let $\mathscr C$ be the category of boolean actions of $S$. \begin{enumerate} \item Suppose $S$ has a boolean action on $X$ and an ample action on $Y$ and let $\psi\colon X\rightarrow Y$ be a morphism. Then $\psi$ is a topological embedding of $X$ as a closed subspace of $Y$. \item Each homset of $\mathscr C$ contains at most one element. \item The spectral action $\beta\colon S\rightarrow I_{\widehat{E(S)}}$ is the terminal object in $\mathscr C$.
\end{enumerate} \end{Prop} \begin{proof} By Proposition~\ref{propermap}, the map $\psi$ is proper and $\psi^{-1}\colon B(Y)\rightarrow B(X)$ takes $Y_e$ to $X_e$ for $e\in E(S)$. Since the $X_e$ with $e\in E(S)$ generate $B(X)$ as a generalized boolean algebra, it follows that $\psi^{-1} \colon B(Y)\rightarrow B(X)$ is surjective. By Stone duality, we conclude $\psi$ is injective. Also $\psi$ is closed, being proper. This establishes (1). Again by Proposition~\ref{propermap}, if $\psi\colon X\rightarrow Y$ is a morphism of boolean actions, then $\psi$ is proper and $\psi^{-1}\colon B(Y)\rightarrow B(X)$ sends $Y_e$ to $X_e$. Since the $Y_e$ with $e\in E(S)$ generate $B(Y)$, it follows $\psi^{-1}$ is uniquely determined by the actions of $S$ on $X$ and $Y$. But $\psi^{-1}$ determines $\psi$ by Stone duality, yielding (2). Let $\alpha\colon S\rightarrow I_Y$ be a boolean action. By definition the map $e\mapsto Y_e$ yields a homomorphism $E(S)\rightarrow B(Y)$ extending to a surjective homomorphism $\mathbb F_2E(S)\rightarrow B(Y)$. Recalling $\mathbb F_2E(S)\cong B(\widehat {E(S)})$, Stone duality yields a proper continuous injective map $\psi\colon Y\rightarrow \widehat{E(S)}$ that sends $y\in Y$ to $\varphi_y\colon E(S)\rightarrow \{0,1\}$ given by $\varphi_y(e) = \chi_{Y_e}(y)$. It remains to show that the map $y\mapsto \varphi_y$ is a morphism. Let $y\in Y$ and $s\in S$. Then $sy$ is defined if and only if $y\in Y_{s^*s}$, if and only if $\varphi_y(s^*s)=1$, if and only if $\varphi_y\in D(s^*s)$. If $y\in Y_{s^*s}$, then $(s\varphi_y)(e) = \varphi_y(s^*es) = \chi_{Y_{s^*es}}(y)$. But $Y_{s^*es}$ is the domain of $\alpha_{es}$. Since $sy$ is defined, $y\in Y_{s^*es}$ if and only if $sy\in Y_e$. Thus $\chi_{Y_{s^*es}}(y) = \chi_{Y_e}(sy) = \varphi_{sy}(e)$. We conclude that $\varphi_{sy} = s\varphi_y$. This completes the proof of the proposition.
\end{proof} Since the restriction of a boolean action to a closed invariant subspace is evidently boolean, we obtain the following description of boolean actions, which was proved by Paterson in a slightly different language~\cite{Paterson}. \begin{Cor} There is an equivalence between the category of boolean actions of $S$ and the poset of $S$-invariant closed subspaces of $\widehat{E(S)}$. \end{Cor} \subsubsection{Filters} Recall that a \emph{filter} $\mathscr F$ on a semilattice $E$ is a non-empty subset that is closed under pairwise meets and is an upset in the ordering. For example, if $\varphi\colon E\rightarrow \{0,1\}$ is a character, then $\varphi^{-1}(1)$ is a filter. Conversely, if $\mathscr F$ is a filter, then its characteristic function $\chi_{\mathscr F}$ is a non-zero homomorphism. A filter is called \emph{principal} if it has a minimum element, i.e., is of the form $e^{\uparrow}$. A character of the form $\chi_{e^{\uparrow}}$, with $e\in E$, is called a \emph{principal character}. Notice that every filter on a finite semilattice $E$ is principal and in this case $\widehat E$ is homeomorphic to $E$ with the discrete topology. In general, the set of principal characters is dense in $\widehat{E}$ since $\chi_{e^{\uparrow}}\in D(e)$ and the generalized boolean algebra generated by the open subsets of the form $D(e)$ is a basis for the topology on $\widehat{E}$. Thus if $\widehat{E}$ is discrete, then necessarily each filter on $E$ is principal and $\widehat{E}$ is in bijection with $E$. However, the converse is false, as we shall see in a moment. The following, presumably well-known, proposition captures when every filter is principal, when the topology on $\widehat E$ is discrete and when the principal characters are discrete in $\widehat E$. \begin{Prop}\label{descendingchain} Let $E$ be a semilattice.
Then: \begin{enumerate} \item Each filter on $E$ is principal if and only if $E$ satisfies the descending chain condition; \item The topology on $\widehat E$ is discrete if and only if each principal downset of $E$ is finite; \item The set of principal characters is discrete in $\widehat E$ if and only if, for all $e\in E$, the downset $e^\downharpoonleft = \{f\in E\mid f<e\}$ is finitely generated. \end{enumerate} \end{Prop} \begin{proof} Suppose first $E$ satisfies the descending chain condition and let $\mathscr F$ be a filter. Then each element $e\in \mathscr F$ is above a minimal element of $\mathscr F$, else we could construct an infinite strictly descending chain. But if $e,f\in \mathscr F$ are minimal, then $ef\in \mathscr F$ implies that $e=ef=f$. Thus $\mathscr F$ is principal. Conversely, suppose each filter in $E$ is principal and that $e_1\geq e_2\geq \cdots$ is a descending chain. Let $\mathscr F = \bigcup_{i=1}^{\infty} e_i^{\uparrow}$. Then $\mathscr F$ is a filter. By assumption, we have $\mathscr F=e^{\uparrow}$ for some $e\in E$. Because $e\in \mathscr F$, we must have $e\geq e_i$ for some $i$. On the other hand $e_j\geq e$ for all $j$ since $\mathscr F=e^{\uparrow}$. It follows that $e=e_i=e_{i+1}=\cdots$ and so $E$ satisfies the descending chain condition. To prove (2), suppose first that $\widehat E$ is discrete. Then since $D(e)$ is compact open, it must be finite. Moreover, each filter on $E$ is principal and so $D(e) = \{\chi_{f^{\uparrow}}\mid f\leq e\}$. It follows that $e^{\downarrow}$ is finite. Conversely, if each principal downset is finite, then every filter on $E$ is principal by (1). Suppose $e^{\downarrow}\setminus \{e\} = \{f_1,\ldots,f_n\}$. Then $\{\chi_{e^{\uparrow}}\} = D(e)\setminus (D(f_1)\cup\cdots\cup D(f_n))$ is open. Thus $\widehat E$ is discrete. For (3), suppose first $e^\downharpoonleft$ is generated by $f_1,\ldots,f_n$.
Then $\chi_{e^{\uparrow}}$ is the only principal character in the open set $D(e)\setminus (D(f_1)\cup \cdots \cup D(f_n))$. This establishes sufficiency of the given condition for discreteness. Conversely, suppose that the principal characters form a discrete set. Then there is a basic neighborhood of $\chi_{e^{\uparrow}}$ of the form $U=D(e')\setminus (D(f_1)\cup \cdots \cup D(f_n))$ for certain $e',f_1,\ldots, f_n\in E$ (cf.~\cite{Paterson}) containing no other principal character. Then $e\leq e'$ and $e\nleq f_i$, for $i=1,\ldots, n$. In particular, $ef_1,\ldots,ef_n\in e^\downharpoonleft$. We claim they generate it. Indeed, if $f<e\leq e'$, then since $\chi_{f^\uparrow}\notin U$, we must have $f\leq f_i$ for some $i=1,\ldots, n$ and hence $f=ef\leq ef_i$, for some $i=1,\ldots, n$. This completes the proof. \end{proof} For instance, consider the semilattice $E$ with underlying set $\mathbb N\cup \{\infty\}$ and with order given by $0<i<\infty$ for all $i\geq 1$, all other pairs of distinct elements being incomparable. Then $E$ satisfies the descending chain condition but $\infty^{\downarrow}$ is infinite. If we identify $\widehat E$ with $E$ as sets, then the topology is that of the one-point compactification of the discrete space $\mathbb N$, with $\infty$ the point at infinity. The condition in (3) is called \emph{pseudofiniteness} in~\cite{Munnsemiprimitive}. \begin{Def}[Ultrafilter] A filter $\mathscr F$ on a semilattice $E$ is called an \emph{ultrafilter} if it is a maximal proper filter. \end{Def} Recall that an idempotent $e$ of an inverse semigroup is called \emph{primitive} if it is minimal amongst the non-zero idempotents. The connection between ultrafilters and morphisms of generalized boolean algebras is well known~\cite{halmosnew}. \begin{Prop}\label{ultrafilterchar} Let $E$ be a semilattice with zero. \begin{enumerate} \item A principal filter $e^{\uparrow}$ is an ultrafilter if and only if $e$ is primitive.
\item Moreover, if $E$ is a generalized boolean algebra, then a filter $\mathscr F$ on $E$ is an ultrafilter if and only if $\chi_{\mathscr F}\colon E\rightarrow \{0,1\}$ is a morphism of generalized boolean algebras. \end{enumerate} \end{Prop} \begin{proof} Evidently, if $e$ is not a minimal non-zero idempotent, then $e^{\uparrow}$ is not an ultrafilter since it is properly contained in some proper principal filter. Suppose that $e$ is primitive and $f\notin e^{\uparrow}$. Then $ef< e$ and so $ef=0$. Thus no proper filter properly contains $e^{\uparrow}$. This proves (1). To prove (2), suppose first that $\mathscr F$ is an ultrafilter. We must verify that $e_1\vee e_2\in \mathscr F$ implies $e_i\in \mathscr F$ for some $i=1,2$. Suppose neither belongs to $\mathscr F$. For $i=1,2$, put $\mathscr F_i = \{e\in E\mid \exists f\in \mathscr F\ \text{such that}\ e\geq e_if\}$. Then $\mathscr F_i$, for $i=1,2$, are filters properly containing $\mathscr F$. By maximality of $\mathscr F$, neither $\mathscr F_i$ is proper, and so $0\in \mathscr F_1\cap\mathscr F_2$. Hence we can find $f_1,f_2\in \mathscr F$ with $e_1f_1=0=e_2f_2$. Then $f_1f_2\in \mathscr F$ and $0=e_1f_1f_2\vee e_2f_1f_2=(e_1\vee e_2)f_1f_2\in \mathscr F$, a contradiction. Thus $e_i\in \mathscr F$ for some $i=1,2$. Conversely, suppose that $\chi_{\mathscr F}$ is a morphism of generalized boolean algebras. Then $0\notin \mathscr F$ and so $\mathscr F$ is a proper filter. Suppose that $\mathscr F'\supsetneq \mathscr F$ is a filter. Let $e\in \mathscr F'\setminus \mathscr F$ and let $f\in \mathscr F$. We cannot have $fe\in \mathscr F$, as $fe\leq e$ and $e\notin \mathscr F$. Because $fe\vee (f\setminus e)=f\in \mathscr F$ and $\chi_{\mathscr F}$ is a morphism of generalized boolean algebras, it follows that $f\setminus e\in \mathscr F\subseteq \mathscr F'$. Thus $0=e(f\setminus e)\in \mathscr F'$. We conclude $\mathscr F$ is an ultrafilter. \end{proof} It follows from the proposition that if $E$ is a generalized boolean algebra, then the points of $\mathrm{Spec}(E)$ can be identified with ultrafilters on $E$.
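To illustrate Proposition~\ref{ultrafilterchar} concretely, the following small computational sketch (in Python, purely illustrative and not part of the formal development) enumerates all proper filters on the boolean algebra of subsets of a three-element set; the maximal ones turn out to be exactly the three principal filters at the singletons, which are the primitive elements.

```python
from itertools import combinations

# The boolean algebra of subsets of {0,1,2}, a semilattice under intersection.
X = (0, 1, 2)
elems = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_filter(F):
    """Non-empty, meet-closed, upward-closed subset of the semilattice."""
    F = set(F)
    if not F:
        return False
    meet_closed = all(a & b in F for a in F for b in F)
    up_closed = all(e in F for a in F for e in elems if a <= e)
    return meet_closed and up_closed

# All proper filters (those not containing the zero, i.e. the empty set).
filters = [frozenset(F)
           for r in range(1, len(elems) + 1)
           for F in combinations(elems, r)
           if frozenset() not in F and is_filter(F)]

# Ultrafilters = maximal proper filters.
ultra = [F for F in filters if not any(F < G for G in filters)]

# The primitive elements are the singletons, so there are exactly three
# ultrafilters, each consisting of all subsets containing a fixed point.
assert len(ultra) == 3
for x in X:
    assert frozenset(e for e in elems if x in e) in ultra
```

Since the semilattice here is finite, every filter is principal (Proposition~\ref{descendingchain}), so the brute-force enumeration over subsets terminates quickly; the three ultrafilters found are the principal filters at $\{0\}$, $\{1\}$ and $\{2\}$.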
If $X$ is a locally compact boolean space and $x\in X$, then the corresponding ultrafilter on $B(X)$ is the set of all compact open neighborhoods of $x$. It is not hard to see that $\mathrm{Spec}(E)$ is a closed subspace of $\widehat E$ for a generalized boolean algebra $E$. For semilattices in general, the space of ultrafilters is not closed in $\widehat E$, which led Exel to consider the closure of the space of ultrafilters, which he terms the space of tight filters~\cite{Exel}. \subsection{Groupoids of germs} There is a well-known construction assigning to each non-degenerate action $\varphi\colon S\rightarrow I_X$ of an inverse semigroup $S$ on a locally compact Hausdorff space $X$ an \'etale groupoid, which we denote $S\ltimes_\varphi X$, known as the \emph{groupoid of germs} of the action~\cite{Paterson,Exel,resendeetale}. Usually, $\varphi$ is dropped from the notation if it is understood. The groupoid of germs construction is functorial. It goes as follows. As a set, $S\ltimes_\varphi X$ is the quotient of the set $\{(s,x)\in S\times X\mid x\in X_{s^*s}\}$ by the equivalence relation that identifies $(s,x)$ and $(t,y)$ if and only if $x=y$ and there exists $u\leq s,t$ with $x\in X_{u^*u}$. One writes $[s,x]$ for the equivalence class of $(s,x)$ and calls it the \emph{germ of $s$ at $x$}. The associated topology is the so-called germ topology. A basis consists of all sets of the form $(s,U)$ where $U\subseteq X_{s^*s}$ is open and $(s,U) = \{[s,x]\mid x\in U\}$. The multiplication is given by declaring the product $[s,x]\cdot [t,y]$ to be defined if and only if $ty=x$, in which case it is $[st,y]$. The units are the elements $[e,x]$ with $e\in E(S)$ and $x\in X_e$. The projection $[e,x]\mapsto x$ gives a homeomorphism of the unit space of $S\ltimes X$ with $X$, and so from now on we identify the unit space with $X$. One then has $d([s,x]) =x$ and $r([s,x])=sx$. The inversion is given by $[s,x]^{-1} = [s^*,sx]$.
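As a concrete illustration of the germ relation (a purely illustrative Python sketch, not part of the formal development), take $S=I_X$ to be the symmetric inverse monoid acting on a finite set $X$ by its partial bijections: any restriction of $s$ lies below $s$ in the natural partial order, so the restriction $s|_{\{x\}}$ serves as a common lower bound of $s$ and $t$ defined at $x$ whenever $s(x)=t(x)$, and germ equality reduces to agreement at the point.

```python
# Partial bijections on a finite set, encoded as dicts.
def germ_equal(s, t, x):
    # In I_X the restriction s|_{x} lies below both s and t whenever
    # s(x) == t(x), so germs at x agree exactly when the values agree.
    return x in s and x in t and s[x] == t[x]

def germ_mult(s, x, t, y):
    # [s,x].[t,y] is defined iff t(y) = x, with product [s.t, y],
    # where s.t is the composite partial bijection.
    if t.get(y) != x:
        return None
    st = {z: s[t[z]] for z in t if t[z] in s}
    return (st, y)

s = {0: 1, 1: 2}   # partial bijection 0 -> 1, 1 -> 2
t = {0: 1, 2: 0}   # agrees with s at 0 only

assert germ_equal(s, t, 0)       # witnessed by the restriction s|_{0}
assert not germ_equal(s, t, 2)   # s is undefined at 2
assert germ_mult(s, 1, t, 0) == ({0: 2, 2: 1}, 0)
```

Here the product germ $[s,1]\cdot[t,0]$ is defined because $t(0)=1$, and its value is the germ of the composite $st$ at $0$, matching the multiplication rule above.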
The groupoid $S\ltimes X$ is an \'etale groupoid~\cite[Proposition 4.17]{Exel}. The reader should consult~\cite{Exel,Paterson} for details. Observe that if $\mathscr B$ is a basis for the topology of $X$, then a basis for $S\ltimes X$ consists of those sets of the form $(s,U)$ with $U\in \mathscr B$, as is easily verified. Let us turn to some enlightening examples. \begin{Example}[Transformation groupoids] In the case that $S$ is a discrete group, the equivalence relation on $S\times X$ is trivial and the topology is the product topology. The resulting \'etale groupoid is consequently Hausdorff and is known in the literature as the \emph{transformation groupoid} associated to the transformation group $(S,X)$~\cite{Renault,Paterson}. \end{Example} \begin{Example}[Maximal group image] An easy example is the case when $X$ is a one-point set on which $S$ acts trivially. It is then straightforward to see that $S\ltimes X$ is the maximal group image of $S$. Indeed, elements of the groupoid are equivalence classes of elements of $S$ where two elements are considered equivalent if they have a common lower bound. \end{Example} \begin{Example}[Underlying groupoid] Another example is the case $X=E(S)$ with the discrete topology. The action is the so-called Munn representation $\mu\colon S\rightarrow I_{E(S)}$ given by putting $X_e = e^{\downarrow}$ and defining \mbox{$\mu_s\colon X_{s^*s}\rightarrow X_{ss^*}$} by $\mu_s(e) = ses^*$~\cite{Lawson}. The spectral action is the dual of the (right) Munn representation. We observe that each equivalence class $[s,e]$ contains a unique element of the form $(t,e)$ with $t^*t=e$, namely $t=se$. Then $e$ is determined by $t$. Thus arrows of $S\ltimes X$ are in bijection with elements of $S$ via the map $s\mapsto [s,s^*s]$. One has $d(s)=s^*s$, $r(s)=ss^*$ and if $s,t$ are composable, their composition is $st$. The inverse of $s$ is $s^*$. This is the so-called \emph{underlying groupoid} of $S$~\cite{Lawson}. 
The slice $(s,\{s^*s\})$ contains just the arrow $s$, so the topology is discrete. \end{Example} The next proposition establishes the functoriality of the germ groupoid. \begin{Prop}\label{functoriality} Let $S$ act on $X$ and $Y$ and suppose $\psi\colon X\rightarrow Y$ is a morphism. Then there is a continuous functor $\Psi\colon S\ltimes X\rightarrow S\ltimes Y$ given by $[s,x]\mapsto [s,\psi(x)]$. If the actions are boolean, then $\Psi$ is an embedding of groupoids and the image consists precisely of those arrows of $S\ltimes Y$ between elements of $\psi(X)$. \end{Prop} \begin{proof} We verify that $\Psi$ is well defined. First note that $x\in X_{s^*s}$ if and only if $\psi(x)\in Y_{s^*s}$ for any $s\in S$ by the definition of a morphism. Suppose that $(s,x)$ is equivalent to $(t,x)$. Then we can find $u\leq s,t$ with $x\in X_{u^*u}$. It then follows that $\psi(x)\in Y_{u^*u}$ and so $(s,\psi(x))$ is equivalent to $(t,\psi(x))$. Thus $\Psi$ is well defined. The verification that $\Psi$ is a continuous functor is routine and left to the reader. In the case that the actions are boolean, the fact that $\Psi$ is an embedding follows easily from Proposition~\ref{universal}. The description of the image follows because if $[s,y]$ satisfies $y, sy\in \psi(X)$ and $y=\psi(x)$, then $[s,y]=\Psi([s,x])$. \end{proof} In~\cite{Exel,Paterson}, the term \emph{reduction} is used to describe a groupoid obtained by restricting the units to a closed subspace and taking the full subgroupoid (in the sense of~\cite{Mac-CWM}) of all arrows between these objects. The following is~\cite[Proposition 4.18]{Exel}. \begin{Prop} The basic open set $(s,U)$ with $U\subseteq X_{s^*s}$ is a slice of $S\ltimes X$ homeomorphic to $U$.
\end{Prop} From the proposition, it easily follows that if $\varphi\colon S\rightarrow I_X$ is an ample action and $\mathscr G=S\ltimes_{\varphi} X$, then each open set of the form $(s,U)$ with $U\in B(X)$ belongs to $\mathscr G^a$ and the collection of such open sets is a basis for the topology on $\mathscr G$. Thus $S\ltimes_{\varphi} X$ is an ample groupoid. Moreover, the map $s\mapsto (s,X_{s^*s})$ in this setting is a homomorphism $\theta\colon S\rightarrow \mathscr G^a$, as the following lemma shows in the general case. \begin{Lemma}\label{slicehom} Let $S$ have a non-degenerate action on $X$. If $(s,U)$ and $(t,V)$ are basic neighborhoods of $\mathscr G=S\ltimes X$, then \[(s,U)(t,V) = (st, t^*(U\cap tV))\] and $(s,U)^{-1} = (s^*,sU)$. Consequently, the map $s\mapsto (s,X_{s^*s})$ is a homomorphism from $S$ to $\mathscr G^{op}$ and furthermore if the action is ample, then it is a homomorphism to $\mathscr G^a$. \end{Lemma} \begin{proof} First observe that $t^*(X_{s^*s}\cap tX_{t^*t}) = t^*(X_{s^*s}\cap X_{tt^*}) = t^*(X_{s^*stt^*}) = t^*s^*stt^*(X_{s^*stt^*}) = t^*s^*s(X_{s^*stt^*}) = X_{t^*s^*st} = X_{(st)^*(st)}$. The final statement now follows from the first. By definition $U\subseteq X_{s^*s}$ and $V\subseteq X_{t^*t}$. Therefore, we have $t^*(U\cap tV)\subseteq t^*(X_{s^*s}\cap tX_{t^*t})= X_{(st)^*st}$ and so $(st, t^*(U\cap tV))$ is well defined. Suppose $x\in U$ and $y\in V$ with $ty=x$. Then $x\in U\cap tV$ and so $y\in t^*(U\cap tV)$. Moreover, $[s,x]\cdot [t,y] = [st,y]\in (st,t^*(U\cap tV))$. This shows $(s,U)(t,V)\subseteq (st,t^*(U\cap tV))$. For the converse, assume $[st,y]\in (st,t^*(U\cap tV))$. Then $y\in t^*tV=V$ and if we set $x=ty$, then $x\in tt^*U\subseteq U$. We conclude $[st,y]=[s,x]\cdot[t,y]\in (s,U)(t,V)$. The equality $(s,U)^{-1} = (s^*,sU)$ is trivial. \end{proof} Notice that the action of the slice $(s,X_{s^*s})$ on $\mathscr G^0=X$ is exactly the map $x\mapsto sx$. 
Indeed, the domain of the action of $(s,X_{s^*s})$ is $X_{s^*s}$ and if $x\in X_{s^*s}$, then $[s,x]$ is the unique element of the slice with domain $x$. But $r([s,x]) = sx$. In the case of the trivial action on a one-point space, the map from Lemma~\ref{slicehom} is the maximal group image homomorphism. In the case of the Munn representation, the map is $s\mapsto \{t\in S\mid t\leq s\}=s^{\downarrow}$. It is straightforward to verify that in this case the homomorphism is injective. The reader should compare with~\cite{mobius1,mobius2}. Summarizing, we have the following proposition. \begin{Prop}\label{amplestuff} Let $\varphi\colon S\rightarrow I_X$ be an ample action and put $\mathscr G=S\ltimes_{\varphi} X$. Then: \begin{enumerate} \item $\mathscr G$ is an ample groupoid; \item There is a homomorphism $\theta\colon S\rightarrow \mathscr G^a$ given by $\theta(s) = (s,X_{s^*s})$; \item $\bigcup \theta(S) = \mathscr G$; \item $\{U\in \mathscr G^a\mid U\subseteq \theta(s)\ \text{some}\ s\in S\}$ is a basis for the topology on $\mathscr G$; \item There is a homomorphism $\Theta\colon S\rightarrow K\mathscr G$ given by \[\Theta(s) = \chi_{\theta(s)} = \chi_{(s,X_{s^*s})},\] which is a $\ast$-homomorphism when $K=\mathbb C$. \end{enumerate} Moreover, if $\varphi$ is a boolean action, then: \begin{enumerate} \item [(6)] $E(\theta(S))$ generates $B(\mathscr G^0)\cong B(X)$ as a generalized boolean algebra; \item [(7)] $\Theta(S)$ spans $K\mathscr G$. \end{enumerate} \end{Prop} \begin{proof} Item (4) is a consequence of the fact that if $U\subseteq X_{s^*s}$, then the basic set $(s,U)$ is contained in $\theta(s)=(s,X_{s^*s})$. Item (5) follows from Proposition~\ref{convolutionwelldefined} and Lemma~\ref{slicehom}. Item (7) is a consequence of (4), (6) and Proposition~\ref{smgroupgen}. The remaining statements are clear. 
\end{proof} The universal groupoid of an inverse semigroup was introduced by Paterson~\cite{Paterson} and has since been studied by several authors~\cite{Exel,lenz,resendeetale,strongmorita}. \begin{Def}[Universal groupoid] Let $S$ be an inverse semigroup. The groupoid of germs $\mathscr G(S)=S\ltimes_\beta \widehat{E(S)}$ is called the \emph{universal groupoid} of $S$~\cite{Paterson,Exel}. It is an ample groupoid. Notice that if $E(S)$ is finite (or more generally if each principal downset of $E(S)$ is finite), then the underlying groupoid of $S$ is the universal groupoid. \end{Def} Propositions~\ref{universal} and~\ref{functoriality} immediately imply the following universal property of $\mathscr G(S)$, due to Paterson~\cite{Paterson}. \begin{Prop} Let $\varphi\colon S\rightarrow I_X$ be a boolean action. Then there is a unique continuous functor $\Phi\colon S\ltimes X\rightarrow \mathscr G(S)$ so that $\Phi((s,X_{s^*s})) = (s,D(s^*s))$. Moreover, $\Phi$ is an embedding of topological groupoids with image a reduction of $\mathscr G(S)$ to a closed $S$-invariant subspace of $\widehat{E(S)}$. \end{Prop} Examples of universal groupoids of inverse semigroups can be found in~\cite[Chapter 4]{Paterson}. It is convenient at times to use the following algebraic embedding of the underlying groupoid into the universal groupoid. \begin{Lemma}\label{embedunderlying} Let $s\in S$. Then $[s,\chi_{(s^*s)^{\uparrow}}]=[t,\chi_{(s^*s)^{\uparrow}}]$ if and only if $s\leq t$. Consequently, $[s,\chi_{(s^*s)^{\uparrow}}]\in (t,D(t^*t))$ if and only if $s\leq t$. Moreover, the map $s\mapsto [s,\chi_{(s^*s)^{\uparrow}}]$ is an injective functor from the underlying groupoid of $S$ onto a dense subgroupoid of $\mathscr G(S)$. \end{Lemma} \begin{proof} We verify the first two statements. The final statement is straightforward from the previous ones and can be found in~\cite[Proposition~4.4.6]{Paterson}. If $s\leq t$, the germs of $s$ and $t$ at $\chi_{(s^*s)^{\uparrow}}$ clearly coincide. 
Assume, conversely, that the germs are the same. By definition, there exists $u\leq s,t$ so that $\chi_{(s^*s)^{\uparrow}}\in D(u^*u)$, i.e., $u^*u\geq s^*s$. But then $u=su^*u= ss^*su^*u=ss^*s=s$. Thus $s\leq t$. The second statement follows since $[s,\chi_{(s^*s)^{\uparrow}}]\in (t,D(t^*t))$ if and only if $[s,\chi_{(s^*s)^{\uparrow}}]=[t,\chi_{(s^*s)^{\uparrow}}]$. \end{proof} We end this section with a sufficient condition for the groupoid of germs of an action to be Hausdorff, improving on~\cite[Proposition 6.2]{Exel} (see also~\cite{Paterson,lenz}). For the universal groupoid, the condition is also shown to be necessary and is the converse to~\cite[Corollary 4.3.1]{Paterson}. As a consequence we obtain a much easier proof that the universal groupoid of a certain commutative inverse semigroup considered in~\cite[Appendix C]{Paterson} is not Hausdorff. Observe that a poset $P$ is a semilattice if and only if the intersection of any two principal downsets is again a principal downset. We say that a poset is a \emph{weak semilattice} if the intersection of any two principal downsets is finitely generated as a downset. The empty downset is considered to be generated by the empty set. \begin{Thm}\label{Hausdorff} Let $S$ be an inverse semigroup. Then the following are equivalent: \begin{enumerate} \item $S$ is a weak semilattice; \item The groupoid of germs of any non-degenerate action \mbox{$\theta\colon S\rightarrow I_X$} such that $X_e$ is clopen for all $e\in E(S)$ is Hausdorff; \item $\mathscr G(S)$ is Hausdorff. \end{enumerate} In particular, every groupoid of germs for an ample action of $S$ is Hausdorff if and only if $S$ is a weak semilattice. \end{Thm} \begin{proof} First we show that (1) implies (2). Suppose $[s,x]\neq [t,y]$ are elements of $\mathscr G$. If $x\neq y$, then choose disjoint neighborhoods $U,V$ of $x$ and $y$ in $X$, respectively. Clearly, $(s,U\cap X_{s^*s})$ and $(t,V\cap X_{t^*t})$ are disjoint neighborhoods of $[s,x]$ and $[t,y]$, respectively.
Next assume $x=y$. If $s^{\downarrow}\cap t^{\downarrow}=\emptyset$, then $(s,X_{s^*s})$ and $(t,X_{t^*t})$ are disjoint neighborhoods of $[s,x]$ and $[t,x]$, respectively, since if $[s,z]=[t,z]$ then there exists $u\leq s,t$. So we are left with the case $s^{\downarrow}\cap t^{\downarrow}\neq \emptyset$. Since $S$ is a weak semilattice, we can find elements $u_1,\ldots, u_n\in S$ so that $u\leq s,t$ if and only if $u\leq u_i$ for some $i=1,\ldots, n$. Let $V=X\setminus \left(\bigcup_{i=1}^n X_{u_i^*u_i}\right)$; it is an open set by hypothesis. If $x\in X_{u_i^*u_i}$ for some $i$, then since $u_i\leq s,t$, it follows $[s,x]=[t,x]$, a contradiction. Thus $x\in V$. Define $W=V\cap X_{s^*s}\cap X_{t^*t}$. We claim $(s,W)$ and $(t,W)$ are disjoint neighborhoods of $[s,x]$ and $[t,x]$, respectively. Indeed, if $[r,z]\in (s,W)\cap (t,W)$, then $[s,z]=[r,z]=[t,z]$ and hence there exists $u\leq s,t$ with $z\in X_{u^*u}$. But then $u\leq u_i$ for some $i=1,\ldots,n$ and so $z\in X_{u_i^*u_i}$, contradicting that $z\in W\subseteq V$. Trivially (2) implies (3). For (3) implies (1), let $s,t\in S$ and suppose $s^{\downarrow}\cap t^{\downarrow}\neq \emptyset$. Proposition~\ref{semibooleanalgebra} implies that $(s,D(s^*s))\cap (t,D(t^*t))$ is compact. But clearly \[(s,D(s^*s))\cap (t,D(t^*t))=\bigcup_{u\in s^{\downarrow}\cap t^{\downarrow}} (u,D(u^*u))\] since $[s,x]=[t,x]$ if and only if there exists $u\leq s,t$ with $x\in D(u^*u)$. By compactness, we may find $u_1,\ldots,u_n\in s^{\downarrow}\cap t^{\downarrow}$ so that \[(s,D(s^*s))\cap (t,D(t^*t))= (u_1,D(u_1^*u_1))\cup \cdots \cup (u_n,D(u_n^*u_n)).\] We claim that $u_1,\ldots,u_n$ generate the downset $s^{\downarrow}\cap t^{\downarrow}$. Indeed, if $u\leq s,t$ then $[u,\chi_{(u^*u)^{\uparrow}}]\in (s,D(s^*s))\cap (t,D(t^*t))$ and so $[u,\chi_{(u^*u)^{\uparrow}}]\in(u_i,D(u_i^*u_i))$ for some $i$. But then $u\leq u_i$ by Lemma~\ref{embedunderlying}. This completes the proof. 
\end{proof} Examples of semigroups that are weak semilattices include $E$-unitary and $0$-$E$-unitary inverse semigroups~\cite{Lawson,Exel,Paterson,lenz}. Notice that if $s\in S$, then the map $x\mapsto x^*x$ provides an order isomorphism between $s^{\downarrow}$ and $(s^*s)^{\downarrow}$; the inverse takes $e$ to $se$. Recall that a poset is called \emph{Noetherian} if it satisfies the ascending chain condition on downsets or, equivalently, if every downset is finitely generated. If each principal downset of $E(S)$ is Noetherian, it then follows from the above discussion that $S$ is a weak semilattice. This occurs, of course, if $E(S)$ is finite, or if each principal downset of $E(S)$ is finite. More generally, if each principal downset of $E(S)$ contains no infinite ascending chains and no infinite anti-chains, then $S$ is a weak semilattice. \begin{Example}[A non-Hausdorff groupoid] The following example can be found in~\cite[Appendix C]{Paterson}. Define a commutative inverse semigroup $S=\mathbb N\cup \{\infty,z\}$ by inflating $\infty$ to a cyclic group $\{\infty,z\}$ of order $2$ in the example just after Proposition~\ref{descendingchain}. Here $\mathbb N\cup \{\infty\}$ is the semilattice considered there, with $0<i<\infty$ for $i\geq 1$ and all other elements incomparable. Thus $\{\infty,z\}$ is a cyclic group of order $2$ with non-trivial element $z$, and $z$ acts as the identity on $\mathbb N$. Then $\infty^{\downarrow}\cap z^{\downarrow} = \mathbb N$ is not finitely generated as a downset; in fact, the positive naturals form an infinite anti-chain of maximal elements. It follows that $S$ is not a weak semilattice and so $\mathscr G(S)$ is not Hausdorff. Moreover, the compact open slice $(z,D(\infty))$ is not closed and hence $\chi_{(z,D(\infty))}$ is a discontinuous element of $K\mathscr G(S)$.
\end{Example} \subsection{Covariant representations} The purpose of this subsection is to indicate that if $S$ is an inverse semigroup with a non-degenerate action on a locally compact boolean space $X$, then $K(S\ltimes X)$ can be thought of as a crossed product $KX\rtimes S$. Let us make this precise. \begin{Def}[Covariant representation]\label{covariant} Let $\theta\colon S\rightarrow I_X$ be an ample action of an inverse semigroup $S$. A \emph{covariant representation} of the dynamical system $(\theta,S,X)$ on a $K$-algebra $A$ is a pair $(\pi,\sigma)$ where $\pi\colon KX\rightarrow A$ is a $K$-algebra homomorphism and $\sigma\colon S\rightarrow A$ is a homomorphism such that: \begin{enumerate} \item $\pi(f\theta_{s^*}) = \sigma(s)\pi(f)\sigma(s^*)$ for $f\in KX_{s^*s}$; \item $\pi(\chi_{X_e}) = \sigma(e)$ for $e\in E(S)$. \end{enumerate} \end{Def} See~\cite{Exel,Paterson} for more on covariant representations in the analytic context. It turns out that covariant representations are in bijection with $K$-algebra homomorphisms in the Hausdorff context and so $K(S\ltimes X)$ has the universal property of a crossed product $KX\rtimes S$. This probably remains true in the non-Hausdorff case as well. \begin{Prop}\label{newamplestuff} Let $\theta\colon S\rightarrow I_X$ be an ample action and put $\mathscr G=S\ltimes X$. Then $\Gamma = \{(s,U)\mid U\in B(X_{s^*s})\}$ is an inverse subsemigroup of $\mathscr G^a$ which is a basis for the topology of $\mathscr G$ and such that $E(\Gamma)$ generates $B(X)$ as a generalized boolean algebra. Consequently, $K\mathscr G$ is spanned by all characteristic functions $\chi_{(s,U)}$ with $U\in B(X_{s^*s})$. \end{Prop} \begin{proof} Lemma~\ref{slicehom} implies that $\Gamma$ is an inverse subsemigroup of $\mathscr G^a$. We already know it is a basis for the topology of $\mathscr G$.
If $U\in B(X)$, then since $\bigcup_{e\in E(S)} X_e=X$, compactness of $U$ implies $U\subseteq X_{e_1}\cup\cdots \cup X_{e_n}$ for some $e_1,\ldots, e_n\in E(S)$. Then $X_{e_i}\cap U\in B(X_{e_i})$, for $i=1,\ldots,n$ and $U=(X_{e_1}\cap U)\cup \cdots \cup (X_{e_n}\cap U)$. Since $X_{e_i}\cap U\in E(\Gamma)$ for all $i$, we conclude that $E(\Gamma)$ generates $B(X)$ as a generalized boolean algebra. The final statement now follows from Proposition~\ref{smgroupgen}. \end{proof} We are now almost ready to establish the equivalence between covariant representations and $K$-algebra homomorphisms for the case of a Hausdorff groupoid of germs. First we recall some measure-theoretic definitions. \begin{Def}[Semiring] A collection $S$ of subsets of a set $X$ is called a \emph{semiring of subsets} if: \begin{enumerate} \item $\emptyset\in S$; \item $S$ is closed under pairwise intersections; \item If $A,B\in S$, then $A\setminus B$ is a finite disjoint union of elements of $S$. \end{enumerate} \end{Def} In the context of measure theory, a generalized boolean algebra of subsets is usually called a ring of subsets. In this language a standard measure theoretic result is that the generalized boolean algebra generated by a semiring $S$ consists precisely of the finite disjoint unions of elements of $S$. \begin{Def}[Additive function] Let $S$ be a collection of subsets of $X$ and $A$ an abelian group. Then a function $\mu\colon S\rightarrow A$ is said to be \emph{additive} if whenever $A,B\in S$ are disjoint and $A\cup B\in S$, then $\mu(A\cup B) = \mu(A)+\mu(B)$. \end{Def} The following measure-theoretic lemma goes back to von Neumann. \begin{Lemma}[Extension principle]\label{extension} Let $S$ be a semiring of subsets of $X$ and suppose $\mu\colon S\rightarrow A$ is an additive function to an abelian group $A$. 
Then there is a unique additive function $\mu'\colon R(S)\rightarrow A$ extending $\mu$ where $R(S)$ is the generalized Boolean algebra (or ring of subsets) generated by $S$. \end{Lemma} We are now ready for the main result of this subsection. \begin{Thm} Let $\theta\colon S\rightarrow I_X$ be an ample action such that $S\ltimes X$ is Hausdorff and let $A$ be a $K$-algebra. Then there is a bijection between $K$-algebra homomorphisms $\varphi\colon K(S\ltimes X)\rightarrow A$ and covariant representations $(\pi,\sigma)$ of $(\theta,S,X)$ on $A$. \end{Thm} \begin{proof} Set $\mathscr G=S\ltimes X$. First suppose that $\varphi\colon K\mathscr G\rightarrow A$ is a $K$-algebra homomorphism. Notice that since $X=\mathscr G^0$ is an open subspace of $\mathscr G$, it follows that $KX$ is a subspace of $K\mathscr G$. In fact, it is a subalgebra with pointwise product (say by Proposition~\ref{convolutionwelldefined}). So define $\pi=\varphi|_{KX}$. For $s\in S$, put $\widehat s=\chi_{(s,X_{s^*s})}$ and define $\sigma(s)=\varphi(\widehat s)$. Then $\sigma$ is clearly a homomorphism. Let us verify the two axioms for covariant representations. The second axiom is immediate from the definitions since if $e$ is an idempotent of $S$, then $(e,X_e)$ is the open subset of $\mathscr G^0$ corresponding to $X_e$ under our identification of $\mathscr G^0$ with $X$. Suppose $f\in KX_{s^*s}$. Then we compute \begin{equation}\label{aconjugationformula} \widehat s\ast f\ast \widehat {s^*}(x) = \sum_{y\in d^{-1} d(x)}\widehat s(xy^{-1})\sum_{z\in d^{-1} d(y)} f(yz^{-1})\widehat {s^*}(z). \end{equation} Since $f$ has support in $X_{s^*s}$, it follows that to get a non-zero value we must have $y=z$ and $r(y)\in X_{s^*s}$. Moreover, $d(x)=d(y)=d(z)\in X_{ss^*}$ and $y=z=[s^*,d(x)]$. Also to obtain a non-zero value we need $xy^{-1} = x[s,s^*d(x)]=[s,s^*d(x)]$. Thus $x=d(x)\in X_{ss^*}$ and $yz^{-1} = r(y) = r(z)=s^*x$. 
So \eqref{aconjugationformula} is $0$ if $x\notin X_{ss^*}$ and otherwise is $f(s^*x)$. This implies $\widehat s\ast f\ast \widehat{s^*} = f\theta_{s^*}$ and so \[\sigma(s)\pi(f)\sigma(s^*)=\varphi(\widehat s\ast f\ast \widehat{s^*}) = \pi(f\theta_{s^*}),\] establishing (1) of Definition~\ref{covariant}. Conversely, suppose $(\pi,\sigma)$ is a covariant representation on $A$. Fix $s\in S$. Observe that $B(X_{s^*s})\cong B((s,X_{s^*s}))$ via the map $U\mapsto (s,U)$. Define a map $\mu_s\colon B((s,X_{s^*s}))\rightarrow A$ by $\mu_s((s,U)) = \sigma(s)\pi(\chi_U)$. We claim that $\mu_s$ is additive. Indeed, if $U,V\in B(X_{s^*s})$ are disjoint, then $\chi_{U\cup V} = \chi_U+\chi_V$ so \begin{align*} \mu_s((s,U)\cup (s,V)) &= \sigma(s)\pi(\chi_{U\cup V})= \sigma(s)\pi(\chi_U)+\sigma(s)\pi(\chi_V)\\ &=\mu_s((s,U))+\mu_s((s,V)). \end{align*} Next we claim that if $U\in B((s,X_{s^*s}))\cap B((t,X_{t^*t}))$, then $\mu_s(U)=\mu_t(U)$. Put $V=d(U)$. Then $U=\{[s,x]\mid x\in V\}=\{[t,x]\mid x\in V\}$. For each $x\in V$, we can find an element $u_x\in S$ so that $u_x\leq s,t$ and $x\in X_{u_x^*u_x}$. By compactness of $V$, we conclude that there exist $v_1,\ldots, v_n\leq s,t$ so that if $V_i = X_{v_i^*v_i}\cap V$, then $V=V_1\cup\cdots\cup V_n$. Since $B(V)$ is a boolean algebra, we can refine this to a disjoint union $V=U_1\cup\cdots\cup U_m$ such that there are elements $u_1,\ldots, u_m\leq s,t$ (not necessarily all distinct) so that $U_i\subseteq X_{u_i^*u_i}\cap V$ and $U_i\in B(X)$, for $i=1,\ldots, m$ (cf.\ Step 2 of Theorem~\ref{presentation}). Then $U = (u_1,U_1)\cup\cdots\cup (u_m,U_m)$ as a disjoint union. By additivity of $\mu_s$ and $\mu_t$, it therefore suffices to show that if $w\leq s,t$ and $W\subseteq X_{w^*w}$, then $\mu_s((w,W))=\mu_t((w,W))$. Equivalently, we must show $\sigma(s)\pi(\chi_W)=\sigma(t)\pi(\chi_W)$.
Now we compute \[\sigma(s)\pi(\chi_W) = \sigma(s)\pi(\chi_{X_{w^*w}}\chi_W) = \sigma(sw^*w)\pi(\chi_W) = \sigma(w)\pi(\chi_W)\] and similarly $\sigma(t)\pi(\chi_W)=\sigma(w)\pi(\chi_W)$. This concludes the proof that $\mu_s(U)=\mu_t(U)$. Let $\Gamma$ be as in Proposition~\ref{newamplestuff}; notice that $\Gamma=\bigcup_{s\in S}B((s,X_{s^*s}))$. Then there is a well-defined function $\mu\colon \Gamma\rightarrow A$ given by $\mu((s,U)) =\mu_s((s,U))$. Moreover, $\mu$ is additive since the disjoint union of two elements of $\Gamma$ belongs to $\Gamma$ if and only if they both belong to $B(s,X_{s^*s})$ for some $s$ and then one applies the additivity of $\mu_s$. Since $\mathscr G$ is Hausdorff, Proposition~\ref{semibooleanalgebra} shows that $\mathscr G^a$ is closed under intersection and relative complement. Now clearly, $\Gamma$ is a semiring of subsets of $\mathscr G$ since it is a downset in $\mathscr G^a$ and hence closed under finite intersections and relative complements. On the other hand, since $\Gamma$ is a basis for the topology of $\mathscr G$ and each element of $\mathscr G^a$ is compact, it follows that $\mathscr G^a$ is contained in the generalized boolean algebra generated by $\Gamma$. Lemma~\ref{extension} now provides a well-defined additive function $\mu'\colon \mathscr G^a\rightarrow A$ extending $\mu$. To show that $\mu'$ is a semigroup homomorphism, it suffices to show that its restriction $\mu$ to the subsemigroup $\Gamma$ is a homomorphism since $\mu'$ is additive and the product distributes over those disjoint unions that exist in $\mathscr G^a$. So suppose $(s,U),(t,V)\in \Gamma$. Then we compute \begin{align*} \mu((s,U))\mu((t,V)) &= \sigma(s)\pi(\chi_U)\sigma(t)\pi(\chi_V)\\ &= \sigma(s)\pi(\chi_U)\sigma(tt^*)\sigma(t)\pi(\chi_V) \\ &= \sigma(s)\sigma(t)\sigma(t^*)\pi(\chi_{U\cap X_{tt^*}})\sigma(t)\pi(\chi_V) \\ &= \sigma(st)\pi(\chi_{U\cap X_{tt^*}}\theta_t\cdot \chi_V). 
\end{align*} But $[\chi_{U\cap X_{tt^*}}\theta_t(x)]\chi_V(x)$ is $1$ if and only if $x\in V$ and $tx\in U$, which occurs if and only if $x\in t^*(U\cap tV)$. Indeed, if $x\in V$ and $tx\in U$, then $tx\in tV$ and so $x=t^*tx\in t^*(U\cap tV)$. Conversely, $x\in t^*(U\cap tV)$ implies there exists $v\in V$ so that $tv\in U$ and $x=t^*tv=v$. Thus $x\in V$ and $tx=tv\in U$. We conclude $\mu((s,U))\mu((t,V)) = \sigma(st)\pi(\chi_{t^*(U\cap tV)})$. On the other hand, $(s,U)(t,V) = (st,t^*(U\cap tV))$ by Lemma~\ref{slicehom} and so $\mu((s,U)(t,V))= \sigma(st)\pi(\chi_{t^*(U\cap tV)})$. This completes the proof that $\mu$, and hence $\mu'$, is a homomorphism. Since $\mu'\colon \mathscr G^a\rightarrow A$ is an additive semigroup homomorphism, Theorem~\ref{presentation} provides a homomorphism $\varphi\colon K\mathscr G\rightarrow A$ satisfying $\varphi(\chi_{(s,U)}) = \sigma(s)\pi(\chi_{U})$. In particular, $\varphi(\chi_{(s,X_{s^*s})}) = \sigma(s)\pi(\chi_{X_{s^*s}}) = \sigma(s)\sigma(s^*s)=\sigma(s)$ for $s\in S$. On the other hand, if $U\in B(X)$, we can write $U=U_1\cup\cdots \cup U_n$ as a disjoint union, where $U_i\subseteq X_{e_i}$ for some idempotent $e_i$, using compactness of $U$ and the $X_e$, as well as the non-degeneracy of the action. Then \begin{align*} \varphi(\chi_U) &= \varphi(\chi_{U_1})+\cdots+\varphi(\chi_{U_n})= \sigma(e_1)\pi(\chi_{U_1})+\cdots+\sigma(e_n)\pi(\chi_{U_n}) \\&= \pi(\chi_{U_1})+\cdots+\pi(\chi_{U_n}) = \pi(\chi_U) \end{align*} since $\sigma(e_i)\pi(\chi_{U_i}) = \pi(\chi_{X_{e_i}\cap U_i}) = \pi(\chi_{U_i})$. This proves that the two constructions in this proof are inverse to each other, establishing the desired bijection. \end{proof} \section{The isomorphism of algebras} The main theorem of this section says that if $K$ is any unital commutative ring endowed with the discrete topology and $S$ is an inverse semigroup, then $KS\cong K\mathscr G(S)$.
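As a finite sanity check of the isomorphism $KS\cong K\mathscr G(S)$ (a purely illustrative Python sketch, not part of the argument): when $E(S)$ is finite the isomorphism ultimately rests on the invertibility of the change of basis $s\mapsto \sum_{t\leq s} t$, and invertibility over any commutative ring with unit is witnessed by listing the elements in a linear extension of the natural partial order, which makes the zeta matrix of the order unitriangular.

```python
from itertools import combinations

# A finite semilattice: subsets of {0,1} under intersection, listed in a
# linear extension of the order by inclusion.
P = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]
n = len(P)

# Zeta matrix Z[i][j] = 1 iff P[j] <= P[i], so row i records the expansion
# of P[i] as the sum of all elements below it.
Z = [[1 if P[j] <= P[i] else 0 for j in range(n)] for i in range(n)]

# Because P is listed in a linear extension, Z is lower unitriangular and
# hence invertible over any commutative ring with unit.
assert all(Z[i][i] == 1 for i in range(n))
assert all(Z[i][j] == 0 for i in range(n) for j in range(n) if j > i)
```

The same unitriangularity argument appears below in the discussion following Lemma~\ref{easymobius}; this snippet merely exhibits it on the smallest non-trivial boolean algebra.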
The idea is to combine Paterson's proof for $C^*$-algebras~\cite{Paterson} with the author's proof for inverse semigroups with finitely many idempotents~\cite{mobius1,mobius2}. Recall that the semigroup algebra $KS$ is the free $K$-module with basis $S$ equipped with the usual convolution product \[\sum_{s\in S}c_ss\cdot \sum_{t\in S} d_tt = \sum_{s,t\in S}c_sd_tst.\] In the case that $K=\mathbb C$, we make $\mathbb CS$ into a $\ast$-algebra by taking \[\left(\sum_{s\in S} c_ss\right)^* = \sum_{s\in S} \ov{c_s}s^*.\] We begin with a lemma that is an easy consequence of Rota's theory of M\"obius inversion~\cite{Stanley,Burnsidealgebra}. \begin{Lemma}\label{easymobius} Let $P$ be a finite poset. Then the set $\{\chi_{p^{\downarrow}}\mid p\in P\}$ is a basis for $K^P$. \end{Lemma} \begin{proof} The functions $\{\delta_p\mid p\in P\}$ form the standard basis for $K^P$. With respect to this basis \[\chi_{p^{\downarrow}} = \sum_{q\leq p} \delta_q.\] Thus by M\"obius inversion, \[\delta_p = \sum_{q\leq p}\chi_{q^{\downarrow}}\mu(q,p)\] where $\mu$ is the M\"obius function of $P$. This proves the lemma. \end{proof} Alternatively, one can order $P=\{p_1,\ldots, p_n\}$ so that $p_i\leq p_j$ implies $i\leq j$. The linear transformation $p_i\mapsto \sum_{p_j\leq p_i} p_j$ is given by a unitriangular integer matrix and hence is invertible over any commutative ring with unit. As a corollary, we obtain the following infinitary version. \begin{Cor}\label{easymobius2} Let $P$ be a poset. Then the set $\{\chi_{p^{\downarrow}}\mid p\in P\}$ in $K^P$ is linearly independent. \end{Cor} \begin{proof} It suffices to show that, for any finite subset $F\subseteq P$, the set $F'=\{\chi_{p^{\downarrow}}\mid p\in F\}$ is linearly independent. Consider the projection $\pi\colon K^P\rightarrow K^F$ given by restriction. Lemma~\ref{easymobius} implies that $\pi(F')$ is a basis for $K^F$. We conclude that $F'$ is linearly independent. 
\end{proof} We are now ready for one of our main theorems, which generalizes the results of~\cite{mobius1,mobius2} for the case of an inverse semigroup with finitely many idempotents. \begin{Thm}\label{mainthm} Let $K$ be a commutative ring with unit and $S$ an inverse semigroup. Then the homomorphism $\Theta\colon S\rightarrow K\mathscr G(S)$ given by $\Theta(s) = \chi_{(s,D(s^*s))}$ extends to an isomorphism of $KS$ with $K\mathscr G(S)$. Moreover, when $K=\mathbb C$ the map $\Theta$ extends to a $\ast$-isomorphism. \end{Thm} \begin{proof} Proposition~\ref{amplestuff} establishes everything except the injectivity of the induced homomorphism $\Theta\colon KS\rightarrow K\mathscr G(S)$. This amounts to showing that the set of elements $\{\Theta(s)\mid s\in S\}$ is linearly independent. The key idea is to exploit the dense embedding of the underlying groupoid of $S$ as a subgroupoid of $\mathscr G(S)$ from Lemma~\ref{embedunderlying}. More precisely, Lemma~\ref{embedunderlying} provides an injective mapping $S\rightarrow \mathscr G(S)$ given by $s\mapsto [s,\chi_{(s^*s)^{\uparrow}}]=\widehat{s}$. Define a $K$-linear map $\psi\colon K\mathscr G\rightarrow K^S$ by $\psi(f)(s) = f(\widehat s)$. Then, if $t\in S$, one has that $\psi(\Theta(t)) = \chi_{t^{\downarrow}}$ by Lemma~\ref{embedunderlying}. Corollary~\ref{easymobius2} now implies that $\psi(\Theta(S))$ is linearly independent and hence $\Theta(S)$ is linearly independent, completing the proof. \end{proof} In the case that $E(S)$ is finite, one has that $\mathscr G(S)$ is the underlying groupoid and so we recover the following result of the author~\cite{mobius1,mobius2} (a proof of the final statement can be found in these references). \begin{Cor} Let $S$ be an inverse semigroup so that $E(S)$ is finite and suppose $K$ is a commutative ring with unit. Let $\ov S= \{\ov s\mid s\in S\}$ be a disjoint copy of $S$.
Endow $K\ov S$ with a multiplicative structure by putting \[\ov s\cdot \ov t= \begin{cases} \ov{st} & s^*s=tt^*\\ 0 & \text{else.}\end{cases}\] Then there is an isomorphism from $KS$ to $K\ov S$ sending $s$ to $\sum_{t\leq s}\ov t$. Hence $KS$ is isomorphic to a finite direct product of finite dimensional matrix algebras over the group algebras of maximal subgroups of $S$. \end{Cor} The special case where $S=E(S)$ was first proved by Solomon~\cite{Burnsidealgebra}. As a consequence of Theorem~\ref{mainthm} and Proposition~\ref{unital}, we obtain the following topological criterion for an inverse semigroup algebra to have a unit as well as a characterization of the center of $KS$. \begin{Cor} Let $K$ be a commutative ring with unit and $S$ an inverse semigroup. Then $KS$ has a unit if and only if $\widehat{E(S)}$ is compact. The center of $KS$ is the space of class functions on $\mathscr G(S)$. \end{Cor} Let $FIM(X)$ be the free inverse monoid on a set $X$ with $|X|\geq 1$. Crabb and Munn described the center of $KFIM(X)$ in~\cite{Munncentre}. We give a topological proof of their result using that $KFIM(X)\cong K\mathscr G(FIM(X))$ by describing explicitly the class functions on $\mathscr G(FIM(X))$. The reader should consult~\cite{Lawson} for the description of elements of $FIM(X)$ as Munn trees. \begin{Thm} Let $X$ be a non-empty set. Then if $|X|=\infty$, the center of $KFIM(X)$ consists of the scalar multiples of the identity. Otherwise, $Z(KFIM(X))$ is a subalgebra of $KE(FIM(X))$ isomorphic to the algebra of functions $f\colon FIM(X)/{\mathscr D}\rightarrow K$ spanned by the finitely supported functions and the constant map to $1$. \end{Thm} \begin{proof} The structure of $\mathscr G(FIM(X))$ is well known cf.~\cite[Chapter 4]{Paterson} or~\cite{strongmorita}. Let $F(X)$ be the free group on $X$ and denote its Cayley graph by $\Gamma$. Let $\mathscr T$ be the space of all subtrees of $\Gamma$ containing $1$. 
Viewing a subtree as a map $V(\Gamma)\cup E(\Gamma)\rightarrow \{0,1\}$ (its characteristic function), we give $\mathscr T$ the topology of pointwise convergence. It is easy to see that $\mathscr T$ is a closed subspace of $\{0,1\}^{V(\Gamma)\cup E(\Gamma)}$. The space $\mathscr T$ is homeomorphic to the character space of $E(FIM(X))$. The groupoid $\mathscr G=\mathscr G(FIM(X))$ consists of all pairs $(w,T)\in F(X)\times \mathscr T$ so that $w\in V(T)$. The topology is the product topology. In particular, the pairs $(w,T)$ with $T$ finite are dense in $\mathscr G$. One has $d(w,T) = w^{-1} T$, $r(w,T)= T$ and the product is defined by $(w,T)(w',T') = (ww',T)$ if $w^{-1} T=T'$. The inverse is given by $(w,T)^{-1} = (w^{-1} ,w^{-1} T)$. The groupoid $\mathscr G$ is Hausdorff and so $K\mathscr G$ consists of continuous functions with compact support in the usual sense. Let $f$ be a class function. We claim that the support of $f$ is contained in $\mathscr G^0=\mathscr T$. Since $f$ is continuous with compact support and $K$ is discrete, it follows that $f^{-1}(K\setminus\{0\})$ is compact open and hence is the support of $f$. Thus $f^{-1}(K\setminus\{0\})$ is of the form $(\{w_1\}\times C_1)\cup\cdots \cup (\{w_m\}\times C_m)$ where the $C_i$ are compact open subsets of $\mathscr T$. Suppose that $(w,T)\in \{w_i\}\times C_i$ with $w\neq 1$, and so in particular $w=w_i$. As $\{w_i\}\times C_i$ is open and the finite trees are dense, there is a finite tree $T'$ containing $1$ and $w$ so that $(w,T')$ belongs to $\{w_i\}\times C_i$. But no finite subtree of $\Gamma$ is invariant under a non-trivial element of $F(X)$, so $d(w,T')=w^{-1} T'\neq T'=r(w,T')$ and hence $f(w,T')=0$ as $f$ is a class function. This contradiction shows that the support of $f$ is contained in $\mathscr G^0$. Thus we may from now on view $f$ as a continuous function with compact support on $\mathscr T$. Next observe that if $f(T)=c$ for a tree $T$ and $u\in V(T)$, then $f(u^{-1} T)=c$.
Indeed, $d(u,T) = u^{-1} T$ and $r(u,T) = T$. Thus $f(u^{-1} T) = f((u,T)^{-1} (1,T)(u,T)) = f(T)=c$ as $f$ is a class function. Let $f$ be a class function. Since $K$ is discrete, $f=c_1\chi_{U_1}+\cdots+c_k\chi_{U_k}$ where $U_1,\ldots, U_k$ are non-empty disjoint compact open subsets of $\mathscr T$ and $c_1,\ldots, c_k$ are distinct non-zero elements of $K$. It is easy to see that $\chi_{U_1},\ldots, \chi_{U_k}$ must then be class functions. In other words, the class functions are spanned by the characteristic functions $\chi_U$ of compact open subsets $U$ of $\mathscr T$ so that $T\in U$ implies $u^{-1} T\in U$ for all vertices $u$ of $T$. Suppose first that $X$ is infinite. We claim that no proper non-empty compact open subset $U$ of $\mathscr T$ has the above property. Suppose this is not the case. Then there is a subtree $T_0$ that does not belong to $U$. Since $X$ is infinite and $U$ is determined by a boolean formula which is a finite disjunction of allowing/disallowing finitely many vertices and edges of $\Gamma$, there is a letter $x\in X$ so that no vertex or edge in the boolean formula determining $U$ involves the letter $x$. Let $T\in U$. Then $T\cup xT_0\in U$ since the edges and vertices appearing in $xT_0$ are irrelevant in the definition of $U$ and $T\in U$. Thus $x^{-1}(T\cup xT_0) =x^{-1} T\cup T_0\in U$. But since the edges and vertices appearing in $x^{-1} T$ again are irrelevant to the boolean formula defining $U$, we must have $T_0\in U$, a contradiction. Next suppose that $X$ is finite. First note that the finite trees form a discrete subspace of $\mathscr T$. Indeed, if $T$ is a finite subtree of $\Gamma$ containing $1$, then since $X$ is finite there are only finitely many edges of $\Gamma$ incident on $T$ that do not belong to it. Then the neighborhood of $T$ consisting of all subtrees containing $T$ but none of these finitely many edges incident on $T$ is just $\{T\}$.
So if $T$ is a finite tree, then $U_T=\{v^{-1} T\mid v\in T\}$ is a finite open subset of $\mathscr T$ and hence its characteristic function belongs to the space of class functions. We claim that the space of class functions has basis the functions of the form $\chi_{U_T}$, with $T$ a finite subtree of $\Gamma$ containing $1$, together with the identity $\chi_{\mathscr G^0}$. This will prove the theorem since the sets of the form $U_T$ are in bijection with the $\D$-classes of $S$. So let $U$ be a compact open set so that $T\in U$ and $u\in T$ implies $u^{-1} T\in U$. Suppose that $U$ contains only finite trees. Since the finite trees are discrete in $\mathscr T$ by the above argument, it follows that $U$ is finite. The desired claim now follows from the above case. So we may assume that $U$ contains an infinite tree $T$. Since $X$ is finite, it is easy to see that there exists $N>0$ so that $U$ consists of those subtrees of $\Gamma$ whose closed ball of radius $N$ around $1$ belongs to a certain subset $F$ of the finite set of possible closed balls of radius $N$ of an element of $\mathscr T$. We claim that $U$ contains all infinite trees. Then applying the previous case to the complement of $U$ proves the theorem. Suppose first $|X|=1$. Then some translate of the infinite subtree $T$ has, as its closed ball of radius $N$ around $1$, a path of length $2N$ centered at $1$, and so this closed ball belongs to $F$. However, all infinite subtrees of $\Gamma$ have this closed ball as the ball of radius $N$ around $1$ for some translate. Thus $U$ contains all the infinite subtrees. Next suppose $|X|\geq 2$. Let $T'$ be an infinite tree with closed ball $B$ of radius $N$ around $1$ and let $v$ be a leaf of $B$ at distance $N$ from $1$ (such exists since $T'$ is infinite and $X$ is finite). Then there is a unique edge of $B$ with terminal vertex $v$; let us assume it is labeled by $x^{\epsilon}$ with $x\in X$ and $\epsilon =\pm 1$.
Since $T$ is infinite, we can find a vertex $u$ of $T$ at distance $N$ from $1$. Let $T_0$ be the closed ball of radius $N$ in $T$ around $1$. Then in $T_0$, the vertex $u$ is the endpoint of a unique edge of $\Gamma$. If this edge is not labeled by $x^{\epsilon}$, then put $T_1 = T_0\cup uv^{-1} B$. Otherwise, choose $y\in X$ with $y\neq x$ and put $T_1 = T_0\cup \{u\xrightarrow{y} uy\}\cup uyv^{-1} B$. In either case, the closed balls of radius $N$ around $1$ in $T_1$ and $T$ coincide, and so $T_1\in U$. But there is a translate $T_2$ of $T_1$ so that the closed ball of radius $N$ about $1$ in $T_2$ is exactly $B$. Thus $B\in F$ and so $T'\in U$. This completes the proof of the theorem. \end{proof} Let $\mathscr G$ be an ample groupoid and $C_c(\mathscr G)$ be the usual algebra of continuous (complex-valued) functions with compact support on $\mathscr G$~\cite{Exel,Paterson}. Notice that $\mathbb C\mathscr G$ is a subalgebra of $C_c(\mathscr G)$ since any continuous function to $\mathbb C$ with respect to the discrete topology is continuous in the usual topology. Let $\|\cdot \|$ be the $C^*$-norm on $C_c(\mathscr G)$~\cite{Exel,Paterson}. The following is essentially~\cite[Proposition 2.2.7]{Paterson}. \begin{Prop} Let $\mathscr G$ be an ample groupoid with $\mathscr G^0$ countably based. Then $C^*(\mathscr G)$ is the completion of $\mathbb C\mathscr G$ with respect to its own universal $C^*$-norm. \end{Prop} \begin{proof} To prove the proposition, it suffices to verify that any non-degenerate $\ast$-representation $\pi\colon \mathbb C\mathscr G\rightarrow \mathscr B(H)$ with $H$ a (separable) Hilbert space extends uniquely to $C_c(\mathscr G)$. Indeed, this will show that $\mathbb C\mathscr G$ is dense in the $C^*$-norm on $C_c(\mathscr G)$ and that the restriction of the $C^*$-norm on $C_c(\mathscr G)$ to $\mathbb C\mathscr G$ is its own $C^*$-norm. Suppose $V$ is an open neighborhood in $\mathscr G$ and $f\in C_c(V)$.
Then since $\mathscr G$ has a basis of compact open subsets, we can cover the compact support of $f$ by a compact open subset $U$. Thus it suffices to define the extension of $\pi$ for any $f\in C(U)$ where $U$ is a compact open subset of $\mathscr G$. Since $U$ has a basis of compact open subsets, the continuous functions on $U$ with respect to the discrete topology separate points. The Stone-Weierstrass Theorem implies that we can find a sequence $f_n\in \mathbb C\mathscr G$ so that $\|f_n-f\|_{\infty}\rightarrow 0$. Now the argument of~\cite[Proposition 3.14]{Exel} shows that if $g\in C(U)\cap \mathbb C\mathscr G$, then $\|\pi(g)\|\leq \|g\|_{\infty}$. It follows that $\pi(f_n)$ is a Cauchy sequence and so has a limit that we define to be $\pi(f)$. It is easy to check that $\pi(f)$ does not depend on the Cauchy sequence by a simple interleaving argument. The reader should verify that $\pi$ is a representation of $C_c(\mathscr G)$, cf.~\cite{Paterson}. It is the unique extension since if $\pi'$ is an extension, then~\cite[Proposition 3.14]{Exel} implies $\|\pi'(g)\|\leq \|g\|_{\infty}$ for $g\in C(U)$. Thus $\|\pi(f_n)-\pi'(f)\|\rightarrow 0$ and so $\pi'(f)=\pi(f)$. \end{proof} We now recover an important result of Paterson~\cite{Paterson}. \begin{Cor}[Paterson] Let $S$ be an inverse semigroup. Then there is an isomorphism $C^*(S)\cong C^*(\mathscr G(S))$ of universal $C^*$-algebras. \end{Cor} Paterson also established an isomorphism of reduced $C^*$-algebras~\cite{Paterson}. \section{Irreducible representations} Our aim is to construct the finite dimensional irreducible representations of an arbitrary inverse semigroup over a field $K$ and determine when there are enough such representations to separate points. Our method can be viewed as a generalization of the groupoid approach of the author~\cite{mobius1,mobius2} to the classical theory of Munn and Ponizovsky~\cite{CP} for inverse semigroups with finitely many idempotents. See also~\cite{myirreps}.
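To fix ideas, here is the classical picture that the groupoid approach recovers (a standard illustration, not part of the original development): for the symmetric inverse monoid $I_n$ and $K=\mathbb C$, the idempotents of rank $r$ form a single $\mathscr D$-class with maximal subgroup $S_r$, and the Munn--Ponizovsky theory yields
\[\mathbb C I_n\cong \bigoplus_{r=0}^n M_{\binom{n}{r}}(\mathbb C S_r),\]
so the irreducible representations of $I_n$ are parameterized by a rank $0\leq r\leq n$ together with an irreducible representation of $S_r$.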
We begin by describing all the finite dimensional irreducible representations of an ample groupoid. The desired result for inverse semigroups is deduced as a special case via the universal groupoid. In fact, much of what we do works over an arbitrary commutative ring with unit $K$, which shall remain fixed for the section. Let $A$ be a $K$-algebra. We say that an $A$-module $M$ is \emph{non-degenerate} if $AM=M$. We consider here only the category of non-degenerate $A$-modules. So when we write the words ``simple module,'' this should be understood as meaning non-degenerate simple module. Note that if $A$ is unital, then an $A$-module is non-degenerate if and only if the identity of $A$ acts as the identity endomorphism. A representation of $A$ is said to be \emph{non-degenerate} if the corresponding module is non-degenerate. As usual, there is a bijection between isomorphism classes of (finite dimensional) simple $A$-modules and equivalence classes of non-degenerate (finite dimensional) irreducible representations of $A$ by $K$-module endomorphisms. \subsection{Irreducible representations of ample groupoids} Fix an ample groupoid $\mathscr G$. Then one can speak about the orbit of an element of $\mathscr G^0$ and its isotropy group. \begin{Def}[Orbit] Define an equivalence relation on $\mathscr G^0$ by setting $x\sim y$ if there is an arrow $g\in \mathscr G$ such that $d(g)=x$ and $r(g)=y$. An equivalence class will be called an \emph{orbit}. If $x\in \mathscr G^0$, then \[G_x=\{g\in \mathscr G\mid d(g)=x=r(g)\}\] is called the \emph{isotropy group} of $\mathscr G$ at $x$. It is well known and easy to verify that up to conjugation in $\mathscr G$ (and hence isomorphism) the isotropy group of $x$ depends only on the orbit of $x$. Thus we may speak abusively of the isotropy group of the orbit. 
\end{Def} To motivate the terminology, if $G$ is a group acting on a space $X$, then the orbit of $x\in X$ in the groupoid $G\ltimes X$ is exactly the orbit of $x$ in the usual sense. Moreover, the isotropy group of $x$ in $G\ltimes X$ is isomorphic to the stabilizer in $G$ of $x$ (i.e., the usual isotropy group). \begin{Rmk}[Underlying groupoid] If $S$ is an inverse semigroup and $\mathscr G$ is its underlying groupoid, then the orbit of $e\in E(S)$ is precisely the set of idempotents of $S$ that are $\mathscr D$-equivalent to $e$ and the isotropy group $G_e$ is the maximal subgroup at $e$~\cite{Lawson}. \end{Rmk} In an ample groupoid, the orbit of a unit is its orbit under the action of $\mathscr G^a$ described earlier. Indeed, given $d(g)=x$ and $r(g)=y$, choose a slice $U\in \mathscr G^a$ containing $g$. Clearly, we have $Ux=y$. Conversely, if $U$ is a slice with $Ux=y$, then we can find $g\in U$ with $d(g)=x$ and $r(g)=y$. The following lemma seems worth noting, given the importance of finite orbits in what follows. One could give a topological proof along the lines of~\cite{inversetop} since this is essentially the same argument used in computing the fundamental group of a cell complex. \begin{Lemma}\label{fundamentalgroup} Let $S$ be an inverse semigroup with generating set $A$ acting non-degenerately on a space $X$ and let $\mathscr O$ be the orbit of $x\in X$. Fix, for each $y\in \mathscr O$, an element $p_y\in S$ so that $p_yx=y$ where we choose $p_x$ to be an idempotent. For each pair $(a,y)\in A\times \mathscr O$ such that $ay\in \mathscr O$, define $g_{a,y} = [p_{ay}^*ap_y,x]$. Then the isotropy group $G_x$ is generated by the set of elements $\{g_{a,y}\mid a\in A, y,ay\in \mathscr O\}$. In particular, if $A$ and $\mathscr O$ are finite, then $G_x$ is finitely generated. \end{Lemma} \begin{proof} First note that if $ay\in \mathscr O$, then $p_{ay}^*ap_yx = p_{ay}^*ay =x$ and so $g_{a,y}\in G_x$. 
Let us define, for $a\in A$ and $y\in \mathscr O$ with $a^*y\in \mathscr O$, the element $g_{a^*,y} = [p_{a^*y}^*a^*p_y,x]\in G_x$. Notice that $g_{a^*,y} = g_{a,a^*y}^{-1}$. Suppose that $[s,x]\in G_x$ and write $s=a_n\cdots a_1$ with the $a_i\in A\cup A^*$. Define $x_0=x$ and $x_i = a_i\cdots a_1x$ for $i=1,\ldots, n$ (so $x_n=x$) and consider the element $t=(p_{x_n}^*a_np_{x_{n-1}})\cdots (p_{x_2}^*a_2p_{x_1})(p_{x_1}^*a_1p_{x_0})$ of $S$. Then $t\leq s$ and $tx=sx=x$. Hence $[s,x]=[t,x] = g_{a_n,x_{n-1}}\cdots g_{a_1,x_0}$, as required. \end{proof} Applying this to the Munn representation and the underlying groupoid, we obtain the following folklore result (a simple topological proof can be found in~\cite{inversetop}). \begin{Cor}\label{fgmaxsubgroup} Let $S$ be a finitely generated inverse semigroup and $e$ an idempotent whose $\D$-class contains only finitely many idempotents. Then the maximal subgroup $G_e$ of $S$ at $e$ is finitely generated. \end{Cor} We remark that if we consider the spectral action of $S$ on $\widehat{E(S)}$ and $e\in E(S)$, then the orbit of $\chi_{e^{\uparrow}}$ is $\{\chi_{f^\uparrow}\mid f\D e\}$ and $G_{\chi_{e^\uparrow}}=G_e$. Fix $x\in \mathscr G^0$. Define $L_x=d^{-1}(x)$ (inverse semigroup theorists should think of this as the $\mathscr L$-class of $x$). The isotropy group $G_x$ acts on the right of $L_x$ and $L_x/G_x$ is in bijection with the orbit of $x$ via the map $tG_x\mapsto r(t)$. Indeed, if $s,t\in L_x$, then $r(s)=r(t)$ implies $t^{-1} s\in G_x$ and of course $s=t(t^{-1} s)$. Conversely, every element of $tG_x$ evidently has range $r(t)$. There is also a natural action of $\mathscr G^a$ on the left of $L_x$ that we shall call, in analogy with the case of inverse semigroups~\cite{CP}, the \emph{Sch\"utzenberger representation} of $\mathscr G^a$ on $L_x$.
If $U\in \mathscr G^a$, then we define a map \[U\cdot\colon L_x\cap r^{-1}(U^{-1} U)\rightarrow L_x\cap r^{-1}(UU^{-1})\] by putting $Ut=st$ where $s$ is the unique element of $U$ with $d(s)=r(t)$ (or equivalently, $Ut=y$ where $yt^{-1}\in U$). We leave the reader to verify that this is indeed an action of $\mathscr G^a$ on $L_x$ by partial bijections. There is an alternative construction of $L_x$ and $G_x$ which will be quite useful in what follows. Let $\til L_x = \{U\in \mathscr G^a\mid x\in U^{-1} U\}$ and put $\mathscr G^a_x = \{U\in \mathscr G^a\mid Ux=x\}$. Notice that $\til L_x = \{U\in \mathscr G^a\mid U\cap L_x\neq \emptyset\}$ and $\mathscr G^a_x=\{U\in \mathscr G^a\mid U\cap G_x\neq \emptyset\}$. It is immediate that $\mathscr G^a_x$ is an inverse subsemigroup of $\mathscr G^a$ acting on the right of $\til L_x$. An element of $\til L_x$ intersects $L_x$ in exactly one element by the definition of a slice. \begin{Lemma}\label{Lxasgerms} Define a map $\nu\colon \til L_x\rightarrow L_x$ by $U\cap L_x = \{\nu(U)\}$. Then: \begin{enumerate} \item $\nu$ is surjective; \item $\nu(U)=\nu(V)$ if and only if $U$ and $V$ have a common lower bound in $\til L_x$; \item $\nu\colon \mathscr G^a_x\rightarrow G_x$ is the maximal group image homomorphism. \end{enumerate} \end{Lemma} \begin{proof} If $t\in L_x$ and $U\in \mathscr G^a$ with $t\in U$, then $U\in \til L_x$ with $\nu(U)=t$. This proves (1). For (2), trivially, if $W\subseteq U,V$ is a common lower bound in $\til L_x$, then $\nu(U)=\nu(W)=\nu(V)$. Conversely, suppose that $\nu(U)=\nu(V)=t$. Then $U\cap V$ is an open neighborhood of $t$. Since $\mathscr G^a$ is a basis for the topology on $\mathscr G$, we can find $W\in \mathscr G^a$ with $t\in W\subseteq U\cap V$. As $W\in \til L_x$, this yields (2). Evidently, $\nu$ restricted to $\mathscr G^a_x$ is a group homomorphism. By (2), it is the maximal group image since any common lower bound in $\til L_x$ of elements of $\mathscr G^a_x$ belongs to $\mathscr G^a_x$.
This proves (3). \end{proof} \begin{Rmk} In fact, $\nu$ gives a morphism from the right action of $\mathscr G^a_x$ on $\til L_x$ to the right action of $G_x$ on $L_x$. \end{Rmk} Consider the free $K$-module $KL_x$ with basis $L_x$. The right action of $G_x$ on $L_x$ induces a right $KG_x$-module structure on $KL_x$. Let $T$ be a transversal for $L_x/G_x$. We assume $x\in T$. \begin{Prop}\label{free} The isotropy group $G_x$ acts freely on the right of $L_x$ and hence $KL_x$ is a free right $KG_x$-module with basis $T$. \end{Prop} \begin{proof} It suffices to show that $G_x$ acts freely on $L_x$. But this is clear since if $t\in L_x$ and $g\in G_x$, then $tg=t$ implies $g=t^{-1} t=x$. \end{proof} We now endow $KL_x$ with the structure of a left $K\mathscr G$-module by linearly extending the Sch\"utzenberger representation. Formally, suppose $f\in K\mathscr G$ and $t\in L_x$. Define \begin{equation}\label{moduleaction} ft = \sum_{y\in L_x}f(yt^{-1})y. \end{equation} \begin{Prop}\label{bimodule} If $U\in \mathscr G^a$ and $t\in L_x$, then \begin{equation}\label{schutzrep} \chi_Ut = \begin{cases} Ut & r(t)\in U^{-1} U\\ 0 & \text{else.}\end{cases} \end{equation} Consequently, $KL_x$ is a well-defined $K\mathscr G$-$KG_x$ bimodule. \end{Prop} \begin{proof} Since $K\mathscr G$ is spanned by characteristic functions of elements of $\mathscr G^a$ (Proposition~\ref{characteristicbasis}), in order to show that \eqref{moduleaction} is a finite sum, it suffices to verify \eqref{schutzrep}. If $r(t)\notin U^{-1} U$, then $\chi_U(yt^{-1})=0$ for all $y\in L_x$. On the other hand, suppose $r(t)\in U^{-1} U$ and say $r(t)=d(s)$ with $s\in U$. Then $Ut=st$ and $y=st$ is the unique element of $L_x$ with $yt^{-1}\in U$. Hence $\chi_Ut=y=st=Ut$. Since the Sch\"utzenberger representation is an action, it follows that \eqref{moduleaction} gives a well-defined left module structure to $KL_x$.
To see that $KL_x$ is a bimodule, it suffices to verify that if $f\in K\mathscr G$, $g\in G_x$ and $t\in L_x$, then $(ft)g=f(tg)$. This is shown by the following computation: \begin{align*} (ft)g &= \left(\sum_{y\in L_x}f(yt^{-1})y\right)g = \sum_{y\in L_x}f(yt^{-1})yg\\ f(tg) & = \sum_{z\in L_x}f(zg^{-1} t^{-1})z = \sum_{y\in L_x}f(yt^{-1})yg \end{align*} where the final equality of the second equation is a consequence of the change of variables $y=zg^{-1}$. \end{proof} We are now prepared to define induced modules. \begin{Def}[Induction] For $x\in \mathscr G^0$ and $V$ a $KG_x$-module, we define the corresponding \emph{induced} $K\mathscr G$-module to be $\mathrm{Ind}_x(V)=KL_x\otimes_{KG_x}V$. \end{Def} The induced modules coming from elements of the same orbit coincide. More precisely, if $y$ is in the orbit of $x$ with, say, $d(s)=x$ and $r(s)=y$ and if $V$ is a $KG_x$-module, then $V$ can be made into a $KG_y$-module by putting $gv=s^{-1} gsv$ for $g\in G_y$ and $v\in V$. Then $\mathrm{Ind}_x(V)\cong \mathrm{Ind}_y(V)$ via the map $t\otimes v\mapsto ts^{-1} \otimes v$ for $t\in L_x$ and $v\in V$. The following result, and its corollary, will be essential to studying induced modules. \begin{Prop}\label{createprojections} Let $t,u,s_1,\ldots, s_n\in L_x$ with $s_1,\ldots, s_n\notin tG_x$. Then there exists $U\in \mathscr G^a$ so that $\chi_Ut = u$ and $\chi_Us_i=0$ for $i=1,\ldots,n$. \end{Prop} \begin{proof} The set $B(\mathscr G^0)$ is a basis for the topology of $\mathscr G^0$. Hence we can find $U_0\in B(\mathscr G^0)$ so that $U_0\cap \{r(t),r(s_1),\ldots,r(s_n)\}= \{r(t)\}$. Choose $U\in \mathscr G^a$ so that $ut^{-1} \in U$. Replacing $U$ by $UU_0$ if necessary, we may assume that $r(s_i)\notin U^{-1} U$ for $i=1,\ldots,n$. Then $Ut=ut^{-1} t=u$ and so $\chi_Ut=u$ by Proposition~\ref{bimodule}. On the other hand, Proposition~\ref{bimodule} provides $\chi_Us_i =0$, for $i=1,\ldots,n$. This completes the proof.
\end{proof} An immediate corollary of the proposition is the following. \begin{Cor}\label{cyclicmodule} The module $KL_x$ is cyclic, namely $KL_x=K\mathscr G\cdot x$. Consequently, if $V$ is a $KG_x$-module, then $KL_x\otimes_{KG_x} V = K\mathscr G\cdot (x\otimes V)$. \end{Cor} It is easy to see that $\mathrm{Ind}_x$ is a functor from the category of $KG_x$-modules to the category of $K\mathscr G$-modules. We now consider the restriction functor from $K\mathscr G$-modules to $KG_x$-modules, which is right adjoint to the induction functor. \begin{Def}[Restriction] For $x\in \mathscr G^0$, let $\mathscr N_x=\{U\in B(\mathscr G^0)\mid x\in U\}$. If $V$ is a $K\mathscr G$-module, then define $\mathrm{Res}_x(V) = \bigcap_{U\in \mathscr N_x} UV$ where we view $V$ as a $K\mathscr G^a$-module via $Uv = \chi_Uv$ for $U\in \mathscr G^a$ and $v\in V$. \end{Def} In order to endow $\mathrm{Res}_x(V)$ with the structure of a $KG_x$-module, we need the following lemma. \begin{Lemma}\label{welldefineactionofL_x} Let $V$ be a $K\mathscr G$-module and put $W=\mathrm{Res}_x(V)$. Then: \begin{enumerate} \item $K\mathscr G^a_x\cdot W=W$; \item If $U\notin \til L_x$, then $UW=\{0\}$; \item Let $U,U'\in \til L_x$ be such that $\nu(U)=\nu(U')$. Then $Uw=U'w$ for all $w\in W$. \end{enumerate} \end{Lemma} \begin{proof} To prove (1), first observe that $\mathscr N_x\subseteq \mathscr G^a_x$ so $W\subseteq K\mathscr G^a_x\cdot W$. For the converse, suppose that $U\in \mathscr G^a_x$ and $w\in W$. Let $U_0\in \mathscr N_x$. Then $U_0Uw = U(U^{-1} U_0U)w=Uw$ since $x\in U^{-1} U_0U$ and $w\in W$. Since $U_0$ was arbitrary, we conclude that $Uw\in W$. Turning to (2), suppose that $w\in W$ and $Uw\neq 0$. Then $U^{-1} Uw\neq 0$. Suppose $U_0\in \mathscr N_x$. Then $U_0U^{-1} Uw=U^{-1} UU_0w = U^{-1} Uw$. Hence the stabilizer in $B(\mathscr G^0)$ of $U^{-1} Uw$ is a proper filter containing the ultrafilter $\mathscr N_x$ and the element $U^{-1} U$. 
We conclude that $U^{-1} U\in \mathscr N_x$ and so $U\in \til L_x$. Next, we establish (3). If $\nu(U)=\nu(U')$, then $U$ and $U'$ have a common lower bound $U_0\in \til L_x$ by Lemma~\ref{Lxasgerms}. Hence, for any $w\in W$, we have $Uw=UU_0^{-1} U_0w= U_0w = U'U_0^{-1} U_0w = U'w$ as $U_0^{-1} U_0\in \mathscr N_x$. This completes the proof. \end{proof} As a consequence of Lemmas~\ref{Lxasgerms} and~\ref{welldefineactionofL_x}, for $t\in L_x$ and $w\in \mathrm{Res}_x(V)$, there is a well-defined element $tw$ obtained by putting $tw=Uw$ where $U\in \mathscr G^a$ contains $t$. Trivially, the map $w\mapsto tw$ is linear. Moreover, if $g\in G_x$ and $g\in U\in \mathscr G^a$, then $U\in \mathscr G^a_x$. Hence this definition gives $W$ the structure of a $KG_x$-module since the action of $\mathscr G^a_x$ on $W$ factors through its maximal group image $G_x$ by the aforementioned lemmas. In particular, $xw=w$ for $w\in W$. Let us now prove that if $V$ is a simple $K\mathscr G$-module and $\mathrm{Res}_x(V)\neq 0$, then $\mathrm{Res}_x(V)$ is a simple $KG_x$-module. \begin{Lemma}\label{issimple} Let $V$ be a simple $K\mathscr G$-module. Then $\mathrm{Res}_x(V)$ is either zero or a simple $KG_x$-module. \end{Lemma} \begin{proof} Set $W=\mathrm{Res}_x(V)$ and suppose that $0\neq w\in W$. We need to show that $KG_x\cdot w=W$. Let $w'\in W$. Viewing $V$ as a $K\mathscr G^a$-module, we have $K\mathscr G^a\cdot w=V$ by simplicity of $V$. Hence $w'=(c_1U_1+\cdots +c_nU_n)w$ with $U_1,\ldots,U_n\in \mathscr G^a$. Moreover, by Lemma~\ref{welldefineactionofL_x} we may assume $U_1,\ldots,U_n\in \til L_x$. Let $t_i = \nu(U_i)$. Choose $U\in \mathscr N_x$ so that $r(t_i)\in U$ implies $r(t_i)=x$. Then $w' = Uw' = (c_1UU_1+\cdots +c_nUU_n)w$. But $UU_i\in \til L_x$ if and only if $r(t_i) =x$, in which case $UU_i\in \mathscr G^a_x$. Thus $w'\in K\mathscr G^a_x\cdot w = KG_x\cdot w$. It follows that $W$ is simple.
\end{proof} Next we establish the adjunction between $\mathrm{Ind}_x$ and $\mathrm{Res}_x$. Since the functor $KL_x\otimes_{KG_x} (-)$ is left adjoint to $\mathrm{Hom}_{K\mathscr G}(KL_x,-)$, it suffices to show the latter is isomorphic to $\mathrm{Res}_x$. \begin{Prop}\label{leftrightadjoint} The functors $\mathrm{Res}_x$ and $\mathrm{Hom}_{K\mathscr G}(KL_x,-)$ are naturally isomorphic. Thus $\mathrm{Ind}_x$ is the left adjoint of $\mathrm{Res}_x$. \end{Prop} \begin{proof} Let $V$ be a $K\mathscr G$-module and put $W=\mathrm{Res}_x(V)$. Define a homomorphism $\psi\colon \mathrm{Hom}_{K\mathscr G}(KL_x,V)\rightarrow W$ by $\psi(f) = f(x)$. First note that if $U\in \mathscr N_x$, then $Uf(x)=f(Ux)=f(x)$ and so $f(x)\in W$. Clearly $\psi$ is $K$-linear. To see that it is $G_x$-equivariant, let $g\in G_x$ and choose $U\in \mathscr G^a_x$ with $g\in U$. Then observe, using Proposition~\ref{bimodule}, that $\psi(gf)= (gf)(x) = f(xg) =f(g)= f(Ux) = Uf(x) = gf(x) = g\psi(f)$. This shows that $\psi$ is a $KG_x$-morphism. If $\psi(f)=0$, then $f(x)=0$ and so $f(KL_x)=0$ by Corollary~\ref{cyclicmodule}. Thus $\psi$ is injective. To see that $\psi$ is surjective, let $w\in W$ and define $f\colon KL_x\rightarrow V$ by $f(t) = tw$, for $t\in L_x$, where $tw$ is as defined after Lemma~\ref{welldefineactionofL_x}. Then $f(x)=xw=w$. It thus remains to show that $f$ is a $K\mathscr G$-morphism. To achieve this, it suffices to show that $f(Ut)=Utw$ for $U\in \mathscr G^a$. Choose $U'\in \til L_x$ with $t\in U'$; so $Utw = UU'w$ by definition. If $r(t)\notin U^{-1} U$, then $Ut=0$ (by Proposition~\ref{bimodule}) and $UU'\notin \til L_x$. Thus $f(Ut)=0$, whereas $Utw=UU'w =0$ by Lemma~\ref{welldefineactionofL_x}. On the other hand, if $r(t)\in U^{-1} U$ and say $d(s)=r(t)$ with $s\in U$, then $Ut=st$ and $st\in UU'\in \til L_x$. Thus $f(Ut) = f(st)=(st)w$, whereas $Utw=UU'w = (st)w$. This completes the proof that $f$ is a $K\mathscr G$-morphism and hence $\psi$ is onto. 
It is clear that $\psi$ is natural. \end{proof} It turns out that $\mathrm{Res}_x\mathrm{Ind}_x$ is naturally isomorphic to the identity functor. \begin{Prop}\label{composetoidentity} Let $V$ be a $KG_x$-module. Then $\mathrm{Res}_x\mathrm{Ind}_x(V)=x\otimes V$ is naturally isomorphic to $V$ as a $KG_x$-module. \end{Prop} \begin{proof} Let $T$ be a transversal for $L_x/G_x$ with $x\in T$. Because $T$ is a $KG_x$-basis for $KL_x$, it follows that $KL_x\otimes_{KG_x}V = \bigoplus_{t\in T}(t\otimes V)$. We claim that $\mathrm{Res}_x\mathrm{Ind}_x(V)=x\otimes V$. Indeed, if $U\in \mathscr N_x$, then $U(x\otimes v) = Ux\otimes v =x\otimes v$. Conversely, suppose $w=t_1\otimes v_1+\cdots+t_n\otimes v_n$ belongs to $\mathrm{Res}_x\mathrm{Ind}_x(V)$. Choose $U\in B(\mathscr G^0)$ so that $x\in U$ and $U\cap \{r(t_1),\ldots,r(t_n)\} \subseteq \{x\}$. Then, by Proposition~\ref{bimodule}, we have $w=Uw\in x\otimes V$, establishing the desired equality. Now $x\otimes V$ is naturally isomorphic to $V$ as a $KG_x$-module via the map $x\otimes v\mapsto v$ since if $g\in G_x$ and $U\in \mathscr G^a_x$ with $g\in U$, then $g(x\otimes v)=U(x\otimes v) = Ux\otimes v = g\otimes v = x\otimes gv$. \end{proof} A useful fact is that the induction functor is exact. In general, $\mathrm{Res}_x$ is left exact but it need not be right exact. \begin{Prop} The functor $\mathrm{Ind}_x$ is exact, whereas $\mathrm{Res}_x$ is left exact. \end{Prop} \begin{proof} Since $KL_x$ is a free $KG_x$-module, it is flat and hence $\mathrm{Ind}_x$ is exact. Clearly $\mathrm{Res}_x = \mathrm{Hom}_{K\mathscr G}(KL_x,-)$ is left exact. \end{proof} Our next goal is to show that if $V$ is a simple $KG_x$-module, then the $K\mathscr G$-module $\mathrm{Ind}_x(V)$ is simple with a certain ``finiteness'' property, namely it is not annihilated by $\mathrm{Res}_x$.
Afterwards, we shall prove that all simple $K\mathscr G$-modules with this ``finiteness'' property are induced modules; this class of simple $K\mathscr G$-modules contains all the finite dimensional ones when $K$ is a field. This is exactly what is done for inverse semigroups with finitely many idempotents in~\cite{mobius1,mobius2}. Here the proof becomes more technical because the algebra need not be unital. Topology is also used in place of finiteness arguments in the proof. The main idea is in essence that of~\cite{myirreps}: to exploit the adjunction between induction and restriction. The following definition will play a key role in constructing the finite dimensional irreducible representations of an inverse semigroup. \begin{Def}[Finite index] Let us say that an object $x\in \mathscr G^0$ has \emph{finite index} if its orbit is finite. \end{Def} \begin{Prop}\label{constructirred} Let $x\in \mathscr G^0$ and suppose that $V$ is a simple $KG_x$-module. Then $\mathrm{Ind}_x(V)$ is a simple $K\mathscr G$-module. Moreover, if $K$ is a field, then $\mathrm{Ind}_x(V)$ is finite dimensional if and only if $x$ has finite index and $V$ is finite dimensional. Finally, if $V$ and $W$ are non-isomorphic $KG_x$-modules, then $\mathrm{Ind}_x(V)\ncong \mathrm{Ind}_x(W)$. \end{Prop} \begin{proof} We retain the notation above. Let $T$ be a transversal for $L_x/G_x$ with $x\in T$. Since $L_x/G_x$ is in bijection with the orbit $\mathscr O$ of $x$, the set $T$ is finite if and only if $x$ has finite index. Because $T$ is a $KG_x$-basis for $KL_x$, it follows that $KL_x\otimes_{KG_x}V = \bigoplus_{t\in T}(t\otimes V)$. In particular, $\mathrm{Ind}_x(V)$ is finite dimensional when $K$ is a field if and only if $T$ is finite and $V$ is finite dimensional. This establishes the second statement. We turn now to the proof of simplicity. Suppose that $0\neq W$ is a $K\mathscr G$-submodule. 
Then $\mathrm{Res}_x(W)$ is a $KG_x$-submodule of $\mathrm{Res}_x\mathrm{Ind}_x(V)\cong V$. We claim that it is non-zero. Let $0\neq w\in W$. Then $w=t_1\otimes v_1+\cdots +t_n\otimes v_n$ for some $v_1,\ldots, v_n\in V$ and $t_1,\ldots, t_n\in T$. Moreover, $v_j\neq 0$ for some $j$. By Proposition~\ref{createprojections}, we can find $U\in \mathscr G^a$ so that $\chi_Ut_j=x$ and $\chi_Ut_i=0$ for $i\neq j$. Then $\chi_Uw=x\otimes v_j\neq 0$ belongs to $\mathrm{Res}_x(W)$. Simplicity of $V$ now yields $\mathrm{Res}_x\mathrm{Ind}_x(V)=\mathrm{Res}_x(W)\subseteq W$. Corollary~\ref{cyclicmodule} then yields \[\mathrm{Ind}_x(V)=K\mathscr G\cdot (x\otimes V) = K\mathscr G\cdot \mathrm{Res}_x\mathrm{Ind}_x(V)\subseteq K\mathscr G\cdot W\subseteq W,\] establishing the simplicity of $\mathrm{Ind}_x(V)$. The final statement follows because $\mathrm{Res}_x\mathrm{Ind}_x$ is naturally equivalent to the identity functor. \end{proof} Next we wish to show that modules of the above sort obtained from distinct orbits are non-isomorphic. \begin{Prop}\label{orbitunique} Suppose that $x,y$ are elements in distinct orbits. Then induced modules of the form $\mathrm{Ind}_x(V)$ and $\mathrm{Ind}_y(W)$ are not isomorphic. \end{Prop} \begin{proof} Put $M=\mathrm{Ind}_x(V)$ and $N=\mathrm{Ind}_y(W)$. Proposition~\ref{composetoidentity} yields $\mathrm{Res}_x(M)\cong V\neq 0$. On the other hand, if $w=t_1\otimes v_1+\cdots+t_n\otimes v_n\in M$ is non-zero, then since $y\notin \{r(t_1),\ldots, r(t_n)\}$, we can find $U\in B(\mathscr G^0)$ so that $y\in U$ and $r(t_i)\notin U$, for $i=1,\ldots, n$. Then $Uw=0$ by Proposition~\ref{bimodule}. Thus we have $\mathrm{Res}_y(M)= 0$. Applying a symmetric argument to $N$ shows that $M\ncong N$. \end{proof} To obtain the converse of Proposition~\ref{constructirred}, we shall use Stone duality. Also the inverse semigroup $\mathscr G^a$ will play a starring role since each $K\mathscr G$-module gives a representation of $\mathscr G^a$. 
What we are essentially doing is imitating the theory for finite inverse semigroups~\cite{CP}, as interpreted through~\cite{mobius1,mobius2}, for ample groupoids; see also~\cite{myirreps}. The simple modules that can be described as induced modules are what we shall term spectral modules. \begin{Def}[Spectral module] Let $V$ be a non-zero $K\mathscr G$-module. We say that $V$ is a \emph{spectral module} if there is a point $x\in \mathscr G^0$ so that $\mathrm{Res}_x(V)\neq 0$. \end{Def} \begin{Rmk} It is easy to verify that $\mathrm{Res}_x(K\mathscr G)\neq 0$ if and only if $x$ is an isolated point. On the other hand, the cyclic module $KL_x$ satisfies $\mathrm{Res}_x(KL_x)=Kx$. This shows that in general $\mathrm{Res}_x$ is not exact. However, if $x$ is an isolated point of $\mathscr G^0$, then $\mathrm{Res}_x(V) = \delta_xV$ and so the restriction functor is exact in this case. \end{Rmk} Every module induced from a non-zero module is spectral, thanks to the isomorphism $\mathrm{Res}_x\mathrm{Ind}_x(V)\cong V$. Let us show that the spectral assumption is not too strong a condition. In particular, we will establish that all finite dimensional modules over a field are spectral. Recall that if $A$ is a commutative $K$-algebra, then the idempotent set $E(A)$ of $A$ is a generalized boolean algebra with respect to the natural partial order. The join of $e,f$ is given by $e\vee f=e+f-ef$ and the relative complement by $e\setminus f=e-ef$. \begin{Prop} Let $V$ be a $K\mathscr G$-module with associated representation $\varphi\colon K\mathscr G\rightarrow \mathrm{End}_K(V)$. Let $\alpha\colon \mathscr G^a\rightarrow \mathrm{End}_K(V)$ be the representation given by $U\mapsto \varphi(\chi_U)$. Assume that $B=\alpha(B(\mathscr G^0))$ contains a primitive idempotent. Then $V$ is spectral. This occurs in particular if $B$ is finite or more generally satisfies the descending chain condition. 
\end{Prop} \begin{proof} Let $A$ be the subalgebra spanned by $\alpha(B(\mathscr G^0))$; so $A=\varphi(K\mathscr G^0)$. Then $\alpha\colon B(\mathscr G^0)\rightarrow E(A)$ is a morphism of generalized boolean algebras. Indeed, we compute $\alpha(U\cup V) = \varphi(\chi_{U\cup V}) = \varphi(\chi_U)+\varphi(\chi_V)-\varphi(\chi_{U\cap V}) = \alpha(U)+\alpha(V)-\alpha(U)\alpha(V)=\alpha(U)\vee \alpha(V)$. Thus $B$ is a generalized boolean algebra. Stone duality provides a proper continuous map $\widehat\alpha\colon \mathrm{Spec}(B)\rightarrow \mathscr G^0$. So now let $e$ be a primitive idempotent of $B$. Then $e^{\uparrow}$ is an ultrafilter on $B$ and $\chi_{e^{\uparrow}}\in \mathrm{Spec}(B)$ by Proposition~\ref{ultrafilterchar}. Let $x=\widehat{\alpha}(\chi_{e^{\uparrow}})$. Then \[\mathrm{Res}_x(V) = \bigcap_{f\in e^{\uparrow}} fV= eV\neq 0\] completing the proof. \end{proof} The above proposition will be used to show that every finite dimensional representation over a field is spectral. Denote by $M_n(K)$ the algebra of $n\times n$-matrices over $K$. The following lemma is classical linear algebra. \begin{Lemma} Let $K$ be a field and $F\leq M_n(K)$ a semilattice. Then we have $|F|\leq 2^n$. \end{Lemma} \begin{proof} We just sketch the argument. If $e\in F$, then $e^2=e$ and so the minimal polynomial of $e$ divides $x(x-1)$. Thus $e$ is diagonalizable. But commutative semigroups of diagonalizable matrices are easily seen to be simultaneously diagonalizable, so $F\leq K^n$. But the idempotent set of $K$ is $\{0,1\}$, so $F\leq \{0,1\}^n$ and hence $|F|\leq 2^n$. \end{proof} \begin{Cor}\label{finiteidempotents} Let $\varphi\colon K\mathscr G\rightarrow M_n(K)$ be a finite dimensional representation over a field $K$. Then $\alpha(B(\mathscr G^0))$ is finite where $\alpha\colon B(\mathscr G^0)\rightarrow M_n(K)$ is given by $\alpha(U) = \varphi(\chi_U)$. Consequently, every finite dimensional (non-zero) $K\mathscr G$-module is spectral. 
\end{Cor} Now we establish the main theorem of this section. \begin{Thm}\label{describeirreps} Let $\mathscr G$ be an ample groupoid and fix $D\subseteq \mathscr G^0$ containing exactly one element from each orbit. Then there is a bijection between spectral simple $K\mathscr G$-modules and pairs $(x,V)$ where $x\in D$ and $V$ is a simple $KG_x$-module (taken up to isomorphism). The corresponding simple $K\mathscr G$-module is $\mathrm{Ind}_x(V)$. When $K$ is a field, the finite dimensional simple $K\mathscr G$-modules correspond to those pairs $(x,V)$ where $x$ is of finite index and $V$ is a finite dimensional simple $KG_x$-module. \end{Thm} \begin{proof} Proposition~\ref{constructirred} and Proposition~\ref{orbitunique} yield that the modules described in the theorem statement form a set of pairwise non-isomorphic spectral simple $K\mathscr G$-modules. It remains to show that all spectral simple $K\mathscr G$-modules are of this form. So let $V$ be a spectral simple $K\mathscr G$-module and suppose $\mathrm{Res}_x(V)\neq 0$. Then $\mathrm{Res}_x(V)$ is a simple $KG_x$-module by Lemma~\ref{issimple}. By the adjunction between induction and restriction, the identity map on $\mathrm{Res}_x(V)$ gives rise to a non-zero $K\mathscr G$-morphism $\psi\colon \mathrm{Ind}_x\mathrm{Res}_x(V)\rightarrow V$. Since $\mathrm{Ind}_x\mathrm{Res}_x(V)$ is simple by Proposition~\ref{constructirred} and $V$ is simple by hypothesis, it follows that $\psi$ is an isomorphism by Schur's Lemma. This completes the proof of the first statement since the induced modules depend only on the orbit up to isomorphism. The statement about finite dimensional simple modules is a consequence of Proposition~\ref{constructirred} and Corollary~\ref{finiteidempotents}. \end{proof} \subsection{Irreducible representations of inverse semigroups} Fix now an inverse semigroup $S$ and let $\mathscr G(S)$ be the universal groupoid of $S$. 
If $\varphi\in \widehat {E(S)}$ has finite index in $\mathscr G(S)$, then we shall call $\varphi$ a \emph{finite index character} of $E(S)$ in $S$. This notion of index really depends on $S$. Notice that the orbit of $\varphi$ in $\mathscr G(S)$ is precisely the orbit of $\varphi$ under the spectral action of $S$ on $\widehat {E(S)}$. If $E(S)$ is finite, then of course $\widehat {E(S)}=E(S)$ and all characters have finite index. If $\varphi\in \widehat{E(S)}$, then $S_{\varphi} = \{s\in S\mid s\varphi =\varphi\}$ is an inverse subsemigroup of $S$ and one easily checks that the isotropy group $G_{\varphi}$ of $\varphi$ in $\mathscr G(S)$ is precisely the maximal group image of $S_{\varphi}$ since if $s,s'\in S_{\varphi}$ and $t\leq s,s'$ with $\varphi\in D(t^*t)$ and $t\in S$, then $t\in S_{\varphi}$. This allows us to describe the finite dimensional irreducible representations of an inverse semigroup without any explicit reference to $\mathscr G(S)$. So without further ado, we state the classification theorem for finite dimensional irreducible representations of inverse semigroups, thereby generalizing the classical results for inverse semigroups with finitely many idempotents~\cite{CP,mobius1,mobius2,oknisemigroupalgebra}. \begin{Thm}\label{inversedescribereps} Let $S$ be an inverse semigroup and $K$ a field. Fix a set $D\subseteq \widehat{E(S)}$ containing exactly one finite index character from each orbit of finite index characters under the spectral action of $S$ on $\widehat{E(S)}$. Let $S_{\varphi}$ be the stabilizer of $\varphi$ and set $G_{\varphi}$ equal to the maximal group image of $S_{\varphi}$. Then there is a bijection between finite dimensional simple $KS$-modules and pairs $(\varphi,V)$ where $\varphi\in D$ and $V$ is a finite dimensional simple $KG_{\varphi}$-module (considered up to isomorphism). \end{Thm} \begin{proof} This is immediate from Theorem~\ref{describeirreps} and the above discussion. 
\end{proof} \begin{Rmk} That there should be a theorem of this flavor was first suggested in unpublished joint work of S.~Haatja, S.~W.~Margolis and the author from 2002. \end{Rmk} Let us draw some consequences. First we give necessary and sufficient conditions for an inverse semigroup to have enough finite dimensional irreducible representations to separate points. Then we provide examples showing that the statement cannot really be simplified. \begin{Cor}\label{seppoints} An inverse semigroup $S$ has enough finite dimensional irreducible representations over $K$ to separate points if and only if: \begin{enumerate} \item The characters of $E(S)$ of finite index in $S$ separate points of $E(S)$; \item For each $e\in E(S)$ and each $e\neq s\in S$ so that $s^*s=e=ss^*$, there is a character $\varphi$ of finite index in $S$ so that $\varphi(e)=1$ and either: \begin{enumerate} \item $s\varphi\neq \varphi$; or \item $s\varphi =\varphi$ and there is a finite dimensional irreducible representation $\psi$ of $G_{\varphi}$ so that $\psi([s,\varphi])\neq 1$. \end{enumerate} \end{enumerate} \end{Cor} \begin{proof} Suppose first that $S$ has enough finite dimensional irreducible representations to separate points and that $e\neq f$ are idempotents of $S$. Choose a finite dimensional simple $KS$-module $W=KL_{\varphi}\otimes_{KG_{\varphi}} V$ with $\varphi$ a finite index character and such that $e$ and $f$ act differently on $W$. Recalling that $x\mapsto \chi_{D(x)}$ for $x\in E(S)$ under the isomorphism $KS\rightarrow K\mathscr G$, it follows from Proposition~\ref{bimodule} that, for $t\in L_{\varphi}$, \begin{equation}\label{idempotentaction} xt = \begin{cases} t & r(t)(x)=1\\ 0 & r(t)(x)=0.\end{cases} \end{equation} Therefore, in order for $e$ and $f$ to act differently on $W$, there must exist $t\in L_{\varphi}$ with $r(t)=\rho$ a finite index character such that $\rho(e)\neq \rho(f)$. Next suppose that $e\neq s$ and $s^*s=e=ss^*$. 
By assumption there is a finite dimensional simple $KS$-module $W=KL_{\varphi}\otimes_{KG_{\varphi}} V$, with $\varphi$ a finite index character, where $s$ and $e$ act differently. By \eqref{idempotentaction} there must exist $t\in L_{\varphi}$ and $v\in V$ so that $\rho=r(t)$ satisfies $\rho\in D(e)$ and $s(t\otimes v)\neq e(t\otimes v) = t\otimes v$. Since $\rho$ has finite index, if $s\rho\neq \rho$ then we are done. So assume $s\rho=\rho$. Recall that under the isomorphism of algebras $KS\rightarrow K\mathscr G(S)$, we have that $s\mapsto \chi_{(s,D(e))}$. Since $et\neq 0$ implies $st\neq 0$ (as $s^*s=e$), there must exist $y\in L_{\varphi}$ so that $yt^{-1}\in (s,D(e))$ and moreover $st = y$ in $KL_{\varphi}$ by Proposition~\ref{bimodule}. We must then have $yt^{-1} = [s,\rho]\in G_{\rho}$ as $s\rho=\rho$. Now $st = y=t(t^{-1} y) =t(t^{-1} [s,\rho]t)$ and so $t\otimes v\neq s(t\otimes v) = t\otimes t^{-1} [s,\rho]tv$. Thus if we make $V$ a (simple) $KG_{\rho}$-module via $gv = (t^{-1} gt)v$, then $[s,\rho]$ does not act as the identity on this module. This completes the proof of necessity. Let us now proceed with sufficiency. First we make an observation. Let $\varphi$ be a character of finite index with associated finite orbit $\mathscr O$. Let $V$ be the trivial $KG_{\varphi}$-module. It is routine to verify using Proposition~\ref{bimodule} that $KL_{\varphi}\otimes _{KG_{\varphi}} V$ has a basis in bijection with $\mathscr O$ and $S$ acts on the basis by restricting the action of $S$ on $\widehat{E(S)}$ to $\mathscr O$. We call this the \emph{trivial representation associated to $\mathscr O$}. Suppose $s,t\in S$ with $s\neq t$. Assume first that $s^*s\neq t^*t$ and let $\varphi$ be a finite index character with $\varphi(s^*s)\neq \varphi(t^*t)$. Then in the trivial representation associated to the orbit of $\varphi$, exactly one of $s$ and $t$ is defined on $\varphi$ and so this finite dimensional irreducible representation separates $s$ and $t$. 
A dual argument works if $ss^*\neq tt^*$. So let us now assume that $s^*s=t^*t$ and $ss^*=tt^*$. Then it suffices to separate $s^*s$ from $t^*s$ in order to separate $s$ and $t$. So we are left with the case that $s^*s=e=ss^*$ and $s\neq e$. We have two cases. Suppose first we can find a finite index character $\varphi$ with $s\varphi\neq \varphi$. Again, the trivial representation associated to the orbit of $\varphi$ separates $s$ and $e$. Suppose now that there is a finite index character $\varphi$ with $\varphi(e)=1$ and $s\varphi=\varphi$ and a finite dimensional simple $KG_{\varphi}$-module $V$ so that $[s,\varphi]$ acts non-trivially on $V$. It is then easy to see using Proposition~\ref{bimodule} that $s(\varphi\otimes v) = [s,\varphi]\otimes v=\varphi\otimes [s,\varphi]v$ since $[s,\varphi]\in (s,D(e))$. Thus $s$ acts non-trivially on $KL_{\varphi}\otimes _{KG_{\varphi}}V$, completing the proof. \end{proof} An immediate consequence of this corollary is the following folklore result. \begin{Cor} Let $S$ be an inverse semigroup with finitely many idempotents and $K$ a field. Then there are enough finite dimensional irreducible representations of $S$ over $K$ to separate points if and only if each maximal subgroup of $S$ has enough finite dimensional irreducible representations to separate points. \end{Cor} As a first example, consider the bicyclic inverse monoid, presented by $B=\langle x\mid x^*x=1\rangle$. Any non-degenerate finite dimensional representation of $B$ must be by invertible matrices since left invertibility implies right invertibility for matrices. Hence one cannot separate the idempotents of $B$ by finite dimensional irreducible representations of $B$ over any field. To see this from the point of view of Corollary~\ref{seppoints}, we observe that $\widehat{E(B)}$ is the one-point compactification of the natural numbers. 
Namely, if $F$ is a filter on $E(B)$, then either it has a minimum element $x^n(x^*)^n$, and hence is a principal filter, or it contains all the idempotents (this latter filter corresponds to the point at infinity). All the principal filters are in a single (infinite) orbit. The remaining filter is in a singleton orbit with isotropy group $\mathbb Z$. The representations associated to this orbit obviously separate no idempotents. Let us next give an example to show that there can be enough finite dimensional irreducible representations of an inverse semigroup to separate points, and yet there can be a finite index character $\varphi$ so that the isotropy group $G_{\varphi}$ does not have enough irreducible representations to separate points. Let $K=\mathbb C$. Then any finite inverse semigroup has enough finite dimensional irreducible representations to separate points, for instance by the above corollary. Hence any residually finite inverse semigroup has enough finite dimensional irreducible representations over $\mathbb C$ to separate points. On the other hand, the maximal group image $G$ of an inverse semigroup $S$ is the isotropy group of the trivial character that sends all idempotents to $1$, which is a singleton orbit of $\widehat{E(S)}$. Let us construct a residually finite inverse semigroup whose maximal group image does not have any non-trivial finite dimensional representations. A well-known result of Mal'cev says that a finitely generated group $G$ with a faithful finite dimensional representation over $\mathbb C$ is residually finite. Since any non-trivial representation of a simple group is faithful and an infinite simple group is trivially not residually finite, it follows that finitely generated infinite simple groups have no non-trivial finite dimensional representations over $\mathbb C$. An example of such a group is the famous Thompson's group $V$, which is a finitely presented infinite simple group~\cite{Thompsongroup}. 
In summary, if we can find a residually finite inverse semigroup whose maximal group image is a finitely generated infinite simple group, then we will have found the example we are seeking. To construct our example, we make use of the Birget-Rhodes expansion~\cite{BR--exp}. Let $G$ be any group and let $E$ be the semilattice of finite subsets of $G$ ordered by reverse inclusion (so the meet is union). Let $G$ act on $E$ by left translation, so $gX=\{gx\mid x\in X\}$, and form the semidirect product $E\rtimes G$. Let $S$ be the inverse submonoid of $E\rtimes G$ consisting of all pairs $(X,g)$ so that $1,g\in X$. This is an $E$-unitary (in fact $F$-inverse) monoid with maximal group image $G$ and identity $(\{1\},1)$. It is also residually finite. To see this, we use the well-known fact that an inverse semigroup all of whose $\R$-classes are finite is residually finite (the right Sch\"utzenberger representations on the $\R$-classes separate points). Hence it suffices to observe that $(X,g)(X,g)^* = (X,1)$ and so the $\R$-class of $(X,g)$ consists of all elements of the form $(X,h)$ with $h\in X$, which is a finite set. Let us observe that Mal'cev's result immediately implies that a finitely generated group has enough finite dimensional irreducible representations over $\mathbb C$ to separate points if and only if it is residually finite. One direction here is trivial. For the non-trivial direction, suppose $G$ has enough finite dimensional irreducible representations over $\mathbb C$ to separate points and suppose $g\neq 1$. Then $G$ has a finite dimensional irreducible representation $\varphi\colon G\rightarrow GL_n(\mathbb C)$ so that $\varphi(g)\neq 1$. But $\varphi(G)$ is a finitely generated linear group and so residually finite by Mal'cev's theorem. Thus we can find a homomorphism $\psi\colon \varphi(G)\rightarrow H$ with $H$ a finite group and $\psi(\varphi(g))\neq 1$. Mal'cev's theorem immediately extends to inverse semigroups. 
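The expansion just described is small enough to verify mechanically for a concrete group. The following Python sketch (an illustration of ours, not part of the paper; the additive encoding of $G=\mathbb Z/3$ and all function names are our assumptions) builds the inverse monoid $S\subseteq E\rtimes G$ of pairs $(X,g)$ with $1,g\in X$ and checks closure under the operations, the identity $(X,g)(X,g)^*=(X,1)$, and the finiteness of the $\R$-classes.

```python
from itertools import combinations

N = 3  # G = Z/3, written additively; 0 plays the role of the identity

# E: the (finite) subsets of G, with meet given by union
subsets = [frozenset(c) for r in range(N + 1)
           for c in combinations(range(N), r)]

# S: pairs (X, g) with both the identity 0 and g lying in X
S = [(X, g) for X in subsets for g in sorted(X) if 0 in X]

def mult(a, b):
    # semidirect product: (X, g)(Y, h) = (X union gY, gh)
    (X, g), (Y, h) = a, b
    return (X | frozenset((g + y) % N for y in Y), (g + h) % N)

def inv(a):
    # (X, g)^* = (g^{-1}X, g^{-1})
    X, g = a
    return (frozenset((x - g) % N for x in X), (-g) % N)

# S is closed under product and inverse, so it is an inverse submonoid
assert all(inv(a) in S for a in S)
assert all(mult(a, b) in S for a in S for b in S)

# (X, g)(X, g)^* = (X, 0), so the R-class of (X, g) is {(X, h) : h in X}
for a in S:
    X, g = a
    assert mult(a, inv(a)) == (X, 0)
    r_class = {b for b in S if mult(b, inv(b)) == mult(a, inv(a))}
    assert r_class == {(X, h) for h in X}  # finite, as claimed
```

Every $\R$-class here has at most $|X|\leq |G|$ elements; this is exactly the finiteness used above to deduce residual finiteness from the right Sch\"utzenberger representations.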
\begin{Prop} Let $S$ be a finitely generated inverse subsemigroup of $M_n(\mathbb C)$. Then $S$ is residually finite. \end{Prop} \begin{proof} Set $V=\mathbb C^n$. We know that $E(S)$ is finite, since a semilattice in $M_n(\mathbb C)$ has at most $2^n$ elements by the lemma above, and hence each maximal subgroup is finitely generated by Corollary~\ref{fgmaxsubgroup}. It follows that each maximal subgroup is residually finite by Mal'cev's theorem since the maximal subgroup $G_e$ is a faithful group of linear automorphisms of $eV$ for $e\in E(S)$. But it is well known and easy to prove that if $S$ is an inverse semigroup with finitely many idempotents, then $S$ is residually finite if and only if all its maximal subgroups are residually finite. Indeed, for the non-trivial direction observe that the right Sch\"utzenberger representations of $S$ separate points into partial transformation wreath products of the form $G\wr T$ with $T$ a transitive faithful inverse semigroup of partial permutations of a finite set and $G$ a maximal subgroup of $S$. But such a wreath product is trivially residually finite when $G$ is residually finite. \end{proof} Now the exact same proof as in the group case establishes the following result. \begin{Prop} Let $S$ be a finitely generated inverse semigroup. Then $S$ has enough finite dimensional irreducible representations over $\mathbb C$ to separate points if and only if $S$ is residually finite. \end{Prop} For the remainder of the section we take $K$ to be a commutative ring with unit. We now characterize the spectral $KS$-modules in terms of $S$. In particular, we shall see that if $E(S)$ satisfies the descending chain condition, then all non-zero $KS$-modules are spectral and so we have a complete parameterization of all simple $KS$-modules. \begin{Prop}\label{whenisspectral} Let $S$ be an inverse semigroup and let $V$ be a non-zero $KS$-module. 
Then $V$ is a spectral $K\mathscr G(S)$-module if and only if there exists $v\in V$ so that $fv= v$ for some idempotent $f\in E(S)$ and, for all $e\in E(S)$, one has $ev\neq 0$ if and only if $ev=v$. In particular, if $\varphi\colon S\rightarrow \mathrm{End}_K(V)$ is the corresponding representation and $\varphi(E(S))$ contains a primitive idempotent (for instance, if it satisfies the descending chain condition), then $V$ is spectral. \end{Prop} \begin{proof} Recall that $e\mapsto \chi_{D(e)}$ under the isomorphism of $KS$ with $K\mathscr G(S)$. Suppose first $V$ is spectral and let $\theta\in \widehat{E(S)}$ so that $\mathrm{Res}_{\theta}(V)\neq 0$. Fix $0\neq v\in \mathrm{Res}_{\theta}(V)$. If $\theta(f)\neq 0$, then $D(f)\in \mathscr N_{\theta}$ and so $fv=v$. Suppose that $e\in E(S)$ with $ev\neq 0$. Then $\{U\in B(\widehat {E(S)})\mid Uev=ev\}$ is a proper filter containing $\mathscr N_{\theta}$ and $D(e)$. Since $\mathscr N_{\theta}$ is an ultrafilter, we conclude $D(e)\in \mathscr N_{\theta}$ and so $ev=D(e)v=v$. Conversely, suppose there is an element $v\in V$ so that $fv=v$ for some $f\in E(S)$ and $ev\neq 0$ if and only if $ev=v$ for all $e\in E(S)$; in particular $v\neq 0$. Let $A=\varphi(KE(S))$ where $\varphi\colon KS\rightarrow \mathrm{End}_K(V)$ is the associated representation. We claim that the set $B$ of elements $e\in E(A)$ so that $ev\neq 0$ implies $ev=v$ is a generalized boolean algebra containing $\varphi(E(S))$. It clearly contains $0$. Suppose $e,f\in B$ and $efv\neq 0$. Then $ev\neq 0\neq fv$ so $efv=v$. On the other hand, assume $(e+f-ef)v = (e\vee f)v\neq 0$. Then at least one of $ev$ or $fv$ is non-zero. If $ev\neq 0$ and $fv=0$, then we obtain $(e\vee f)v = ev+fv-efv=v$. A symmetric argument applies if $ev=0$ and $fv\neq 0$. Finally, if $ev\neq 0\neq fv$, then $(e\vee f)v = ev+fv-efv = v$. To deal with relative complements, suppose $e,f\in B$ and $(e-ef)v=(e\setminus f)v \neq 0$. Then $ev\neq 0$ and so $ev=v$. Therefore, $(e-ef)v=v-fv$. 
If $fv\neq 0$, then $fv=v$ and so $(e-ef)v=0$, a contradiction. Thus $fv=0$ and $(e\setminus f)v=v$. Since $E(S)$ generates $B(\widehat {E(S)})$ as a generalized boolean algebra via the map $e\mapsto D(e)$, it follows that if $U\in B(\widehat {E(S)})$ and $Uv\neq 0$, then $Uv=v$. Let $\mathscr F=\{U\in B(\widehat {E(S)})\mid Uv=v\}$. Clearly, $\mathscr F$ is a proper filter. We claim that it is an ultrafilter. Indeed, suppose that $U'\notin \mathscr F$. Then $U'v=0$ and so, for any $U\in \mathscr F$, $(U\setminus U')v = Uv-UU'v=v$. Thus $U\setminus U'\in \mathscr F$ and so $\emptyset = U'\cap (U\setminus U')$ shows that the filter generated by $U'$ and $\mathscr F$ is not proper. Thus $\mathscr F$ is an ultrafilter on $B(\widehat {E(S)})$ and hence is of the form $\mathscr N_{\theta}$ for a unique element $\theta\in \widehat{E(S)}$. It follows that $v\in \mathrm{Res}_{\theta}(V)$. For the final statement, suppose that $\varphi(f)\in \varphi(E(S))$ is primitive and $0\neq v\in fV$. Then, for all $e\in E(S)$, $efv=ev\neq 0$ implies $\varphi(ef)\neq 0$ and so $\varphi(ef)=\varphi(f)$ by primitivity. Thus $ev = efv=fv=v$. \end{proof} It turns out that if every idempotent of an inverse semigroup is central, then every simple $KS$-module is spectral. \begin{Prop} Let $S$ be an inverse semigroup with central idempotents. Then every simple $KS$-module is spectral. \end{Prop} \begin{proof} Let $V$ be a simple $KS$-module and suppose $e\in E(S)$. Since $e$ is central, it follows that $eV$ is $KS$-invariant and hence $eV=V$, and so $e$ acts as the identity, or $eV=0$, whence $e$ acts as $0$. Thus $V$ is spectral by Proposition~\ref{whenisspectral}. \end{proof} There are other classes of inverse semigroups all of whose modules are spectral (and hence for which we have a complete list of all simple modules). \begin{Prop} Let $S$ be an inverse semigroup such that $E(S)$ is isomorphic to $(\mathbb N,\geq)$. Then every non-zero $KS$-module is spectral. 
\end{Prop} \begin{proof} Suppose $E(S)=\{e_i\mid i\in \mathbb N\}$ with $e_ie_j = e_{\max\{i,j\}}$. Let $V$ be a $KS$-module. If $eV=V$ for all $e\in E(S)$, then trivially $V$ is spectral. Otherwise, we can find $n>0$ minimum so that $e_nV\neq V$. Then $V=e_nV\oplus (1-e_n)V$ and $(1-e_n)V\neq 0$. Choose a non-zero vector $v$ from $(1-e_n)V$. We claim \[e_iv=\begin{cases} v & i<n\\ 0 & i\geq n.\end{cases}\] It will then follow that $V$ is a spectral $KS$-module by Proposition~\ref{whenisspectral}. Indeed, if $i<n$, then $e_i$ acts as the identity on $V$ by choice of $n$. On the other hand, if $i\geq n$, then $e_i(1-e_n) = e_i-e_ie_n = e_i-e_i=0$. This completes the proof. \end{proof} Putting it all together, we obtain the following theorem. \begin{Thm}\label{inversedescribereps2} Let $S$ be an inverse semigroup and $K$ a commutative ring with unit. Fix a set $D\subseteq \widehat{E(S)}$ containing exactly one character from each orbit of the spectral action of $S$ on $\widehat{E(S)}$. Let $S_{\varphi}$ be the stabilizer of $\varphi$ and set $G_{\varphi}$ equal to the maximal group image of $S_{\varphi}$. Then there is a bijection between simple $KS$-modules $V$ so that there exists $v\in V$ with \[\emptyset\neq \{e\in E(S)\mid ev=v\} = \{e\in E(S)\mid ev\neq 0\}\] and pairs $(\varphi,W)$ where $\varphi\in D$ and $W$ is a simple $KG_{\varphi}$-module (considered up to isomorphism). This, in particular, describes all simple $KS$-modules if the idempotents of $S$ are central or form a descending chain isomorphic to $(\mathbb N,\geq)$. \end{Thm} For example, if $B$ is the bicyclic monoid, then the simple $KB$-modules are the simple $K\mathbb Z$-modules and the representation of $B$ on the polynomial ring $K[x]$ by the unilateral shift. At the moment we do not have an example of an inverse semigroup $S$ and a simple $KS$-module that is not spectral. 
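The shift representation of the bicyclic monoid mentioned above can be made completely explicit. In the sketch below (our illustration; the coefficient-list model of $K[x]$ and the names \texttt{up}, \texttt{down}, \texttt{idem} are assumptions of the example), the generator $x$ acts by the unilateral shift and $x^*$ by its one-sided inverse, exhibiting the defining relation $x^*x=1$ together with $xx^*\neq 1$.

```python
def up(p):
    # action of the generator x: multiply by the indeterminate (shift up)
    return [0] + p

def down(p):
    # action of x*: delete the constant term and shift down
    return p[1:]

p = [5, 1, 2]  # the polynomial 5 + t + 2t^2 as a coefficient list

# x*x acts as the identity on K[x] ...
assert down(up(p)) == p

# ... but xx* annihilates the constant term, so x does not act invertibly
assert up(down(p)) == [0, 1, 2] != p

# the idempotent x^n (x*)^n acts as the projection killing degrees < n
def idem(n, p):
    q = p
    for _ in range(n):
        q = down(q)
    for _ in range(n):
        q = up(q)
    return q

assert idem(2, p) == [0, 0, 2]
```

The projections realized by \texttt{idem} are pairwise distinct, in line with the earlier observation that the idempotents $x^n(x^*)^n$ of $B$ cannot be separated by finite dimensional representations, where $x$ would be forced to act invertibly.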
By specializing to inverse semigroups with the descending chain condition on idempotents, we obtain the following generalization of Munn's results~\cite{CP,oknisemigroupalgebra}. \begin{Cor}\label{inversedescribereps3} Let $S$ be an inverse semigroup satisfying the descending chain condition on idempotents and let $K$ be a commutative ring with unit. Fix a set $D\subseteq E(S)$ containing exactly one idempotent from each $\D$-class. Then there is a bijection between simple $KS$-modules and pairs $(e,V)$ where $e\in D$ and $V$ is a simple $KG_e$-module (considered up to isomorphism). The corresponding $KS$-module is finite dimensional if and only if the $\D$-class of $e$ contains finitely many idempotents and $V$ is finite dimensional. \end{Cor} \bibliographystyle{abbrv}
https://arxiv.org/abs/0905.0318
Mod-Poisson convergence in probability and number theory
Building on earlier work introducing the notion of "mod-Gaussian" convergence of sequences of random variables, which arises naturally in Random Matrix Theory and number theory, we discuss the analogue notion of "mod-Poisson" convergence. We show in particular how it occurs naturally in analytic number theory in the classical Erdős-Kác Theorem. In fact, this case reveals deep connections and analogies with conjectures concerning the distribution of L-functions on the critical line, which belong to the mod-Gaussian framework, and with analogues over finite fields, where it can be seen as a zero-dimensional version of the Katz-Sarnak philosophy in the large conductor limit.
\section{Introduction} \label{section:Intro} In our earlier paper~\cite{jkn} with J. Jacod,\footnote{\ Although this new paper is largely self-contained, it is likely to be most useful for readers who have at least looked at the introduction and the examples in~\cite{jkn}, especially Section 4.} motivated by results from Random Matrix Theory and probability, we have introduced the notion of mod-Gaussian convergence of a sequence of random variables $(Z_N)$. This occurs when the sequence does not (typically) converge in distribution, so the sequence of characteristic functions does not converge pointwise to a limit characteristic function, but nevertheless, the characteristic functions decay precisely like a suitable Gaussian, i.e., the limits \begin{equation}\label{eq-mod-gaussian} \lim_{N\rightarrow +\infty}{\exp(-iu\beta_N+u^2\gamma_N/2)\text{\boldmath$E$}(e^{iuZ_N})} \end{equation} exist, locally uniformly for $u\in\RR$, for some parameters $(\beta_N,\gamma_N)\in \RR\times [0,+\infty[$. \par Besides giving natural and fairly general instances of such behavior in probability theory, we investigated arithmetic instances of it. In that respect, we noticed that the limits~(\ref{eq-mod-gaussian}) cannot exist if the random variables $Z_N$ are integer-valued, since the characteristic functions $\text{\boldmath$E$}(e^{iuZ_N})$ are then $2\pi$-periodic, and we discussed briefly the possibility of introducing ``mod-Poisson convergence'', which may be applicable to such situations. Indeed, we noticed that this can be seen to occur in number theory in one approach to the famous Erd\H{o}s-Kac Theorem. \par In the present paper, we look more deeply at mod-Poisson convergence. We first recall the definition and give basic facts about mod-Poisson convergence in Sections~\ref{sec-mod-poisson} and~\ref{sec-2}. Sections~\ref{sec-nt} and~\ref{sec-finite-fields} consider number-theoretic situations related to the Erd\H{o}s-Kac Theorem. 
We show that the nature of the mod-Poisson convergence parallels closely the structure of conjectures for the moments of zeta functions on the critical line. This becomes especially clear over finite fields, leading to very precise analogies with the Katz-Sarnak philosophy and conjectures. In fact, in Section~\ref{sec-main}, we prove a version of the mod-Poisson convergence for the number of irreducible factors of a polynomial in $\mathbf{F}_q[X]$, as the degree increases, which is a zero-dimensional case of the large conductor limit for $L$-functions (see Remark~\ref{rm-scheme} and Theorem~\ref{th-katz-sarnak}). Our proof convincingly explains the probabilistic features of the limiting function, involving both local models of primes and large random permutations. \par \smallskip \par \textbf{Notation.} In number-theoretic contexts, $p$ always refers to a prime number, and sums and products over $p$ (with extra conditions) are over primes satisfying those conditions. \par For any integer $d\geq 1$, we denote by $\mathfrak{S}_d$ the symmetric group on $d$ letters and by $\mathfrak{S}_d^{\sharp}$ the set of its conjugacy classes. Recall these can be identified with partitions of $d$, where the partition $$ d=1\cdot r_1+\cdots+ d\cdot r_d,\quad\quad r_i\geq 0, $$ corresponds to permutations with $r_1$ fixed points, $r_2$ disjoint $2$-cycles, \ldots, $r_d$ disjoint $d$-cycles. For $\sigma\in\mathfrak{S}_d$, we write $\sigma^{\sharp}$ for its conjugacy class. We denote by $\varpi(\sigma)$ the number of disjoint cycles occurring in $\sigma$. \par By $f\ll g$ for $x\in X$, or $f=O(g)$ for $x\in X$, where $X$ is an arbitrary set on which $f$ is defined, we mean synonymously that there exists a constant $C\geq 0$ such that $|f(x)|\leq Cg(x)$ for all $x\in X$. The ``implied constant'' refers to any value of $C$ for which this holds. It may depend on the set $X$, which is usually specified explicitly, or clearly determined by the context. 
\par \smallskip \par \textbf{Acknowledgments}. We thank P-O. Dehaye and A.D. Barbour for interesting discussions related to this paper, and P. Bourgade for pointing out a computational mistake in an earlier draft. Thanks also to the referee for a careful reading of the manuscript. \par The second author was partially supported by SNF Schweizerischer Nationalfonds Projekte Nr. 200021 119970/1. \section{General properties of mod-Poisson convergence}\label{sec-mod-poisson} Recall that a Poisson random variable $P_{\lambda}$ with parameter $\lambda>0$ is one taking (almost surely) integer values $k\geq 0$ with $$ \Prob(P_{\lambda}=k)=\frac{\lambda^k}{k!} e^{-\lambda}. $$ \par Its characteristic function is then given by $$ \E(e^{iuP_{\lambda}})=\exp(\lambda(e^{iu}-1)). $$ \begin{defn} We say that a sequence of random variables $(Z_N)$ converges in the mod-Poisson sense with parameters $\lambda_N$ if the following limits $$ \lim_{N\rightarrow+\infty}{\text{\boldmath$E$}(e^{iuP_{\lambda_N}})^{-1} \text{\boldmath$E$}(e^{iuZ_N})}= \lim_{N\rightarrow +\infty} \exp(\lambda_N(1-e^{iu}))\E(e^{iuZ_N})=\Phi(u) $$ exist for every $u\in{\mathbf R}$, and the convergence is locally uniform. The \emph{limiting function} $\Phi$ is then continuous and $\Phi(0)=1$. \end{defn} \begin{example} (1) The simplest case of mod-Poisson convergence (which justifies partly the name) is given by \begin{equation}\label{eq-regular} Z_N=P_{\lambda_N}+Z \end{equation} where $P_{\lambda_N}$ is a Poisson variable with parameter $\lambda_N$, while $Z$ is an arbitrary random variable independent of all $P_{\lambda_N}$. In that case, the limiting function is the characteristic function $\text{\boldmath$E$}(e^{iuZ})$ of $Z$. 
\par (2) Often, and in particular in the cases of interest in the arithmetic part of this paper, $Z_N$ is (almost surely) integer-valued; in that case, its characteristic function is $2\pi$-periodic, and it follows that if the convergence is locally uniform, then it is in fact uniform for $u\in\RR$. However, this is not always the case, as shown by examples like~(\ref{eq-regular}) if the fixed random variable $Z$ is not itself integer-valued. \par (3) A.D. Barbour pointed out to us the paper~\cite{hwang} of H-K. Hwang. Hwang introduces an analytic assumption~\cite[(1), p. 451]{hwang} on the \emph{probability generating functions} of integer-valued random variables $(X_N)$, i.e., on the power series $$ \sum_{n\geq 1}{\Prob(X_N=n)z^n}=\text{\boldmath$E$}(z^{X_N}), $$ which is very closely related to mod-Poisson convergence. This assumption is used as a basis to deduce results on Poisson approximation of the sequence (see Proposition~\ref{th-ks-distance} below for a simple example). Hwang also gives many additional examples where his assumption holds. \end{example} If we have mod-Poisson convergence with parameters $(\lambda_N)$ which converge, then $(Z_N)$ converges in law. Such a situation arises for instance in the so-called Poisson convergence (see, e.g.,\cite[p. 188]{breiman}), which we recall: \begin{prop} Let $(X_k^{(n)})$ be an array of independent random variables, identically distributed in each row, according to a Bernoulli distribution with parameter $x_n$: $$ \Prob(X_i^{(n)}=1)=x_n \text{ and } \Prob(X_i^{(n)}=0)=1-x_n\quad\text{ for } 1\leq i\leq n. $$ \par Set $S_n=X_1^{(n)}+\ldots+X_n^{(n)}$. Then, $S_n$ converges in distribution if and only if $nx_n\to\lambda>0$, when $n\to\infty$. The limit random variable $S$ is a Poisson random variable with parameter $\lambda$. \end{prop} We will state an analogue of Poisson convergence in the mod-Poisson setting in the next section, but first we discuss some basic consequences. 
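Example (1) above can be made concrete in a few lines of Python. This is a small illustrative sketch, not part of the paper: the choice $Z\sim\mathrm{Bernoulli}(1/2)$ is arbitrary, and the point is that for $Z_N=P_{\lambda_N}+Z$ the mod-Poisson normalization $\exp(\lambda_N(1-e^{iu}))$ cancels the Poisson factor exactly, for every value of the parameter, leaving the characteristic function of $Z$ as the limiting function.

```python
import cmath

def cf_poisson(lam, u):
    """Characteristic function E(e^{iu P_lam}) of a Poisson(lam) variable."""
    return cmath.exp(lam * (cmath.exp(1j * u) - 1))

def cf_Z(u):
    """Characteristic function of an illustrative choice Z ~ Bernoulli(1/2)."""
    return 0.5 + 0.5 * cmath.exp(1j * u)

# Z_N = P_{lam_N} + Z with P_{lam_N} and Z independent, so characteristic
# functions multiply.  The normalization exp(lam_N (1 - e^{iu})) removes
# the Poisson factor exactly, whatever lam_N is.
for lam in (1.0, 10.0, 100.0):
    for u in (0.3, 1.0, 2.5):
        cf_ZN = cf_poisson(lam, u) * cf_Z(u)
        normalized = cmath.exp(lam * (1 - cmath.exp(1j * u))) * cf_ZN
        assert abs(normalized - cf_Z(u)) < 1e-9
```

Here the limit is degenerate (the normalized quantity is constant in $\lambda_N$); the arithmetic examples below are interesting precisely because the cancellation is only asymptotic.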
The link with mod-Gaussian convergence in the last part of the next result is quite intriguing. \begin{prop}\label{prop-cor-mod-poisson} Let $(Z_N)$ be a sequence of random variables which converges in the mod-Poisson sense, with parameters $\lambda_N$, such that $$ \lim_{N\to\infty}\lambda_N=\infty. $$ \par Then the following hold: \par \emph{(1)} The re-scaled variables $Z_N/\lambda_N$ converge in probability to $1$, that is, for any $\varepsilon>0$, $$ \lim_{N\to\infty}\Prob\Bigl( \Bigl|\frac{Z_N}{\lambda_N}-1\Bigr|>\varepsilon\Bigr)=0. $$ \par \emph{(2)} We have the normal convergence \begin{equation}\label{eq-clt} \dfrac{Z_N-\lambda_N}{\sqrt{\lambda_N}}\overset{\mbox{\rm \scriptsize law}}{\Rightarrow}\mathcal{N}(0,1), \end{equation} where $\mathcal{N}$ is a standard Gaussian random variable. \par \emph{(3)} The random variables $$ Y_N=\frac{Z_N-\lambda_N}{\lambda_N^{1/3}} $$ converge in the mod-Gaussian sense~\emph{(\ref{eq-mod-gaussian})} with parameters $(0,\lambda_N^{1/3})$ and universal limiting function $$ \Phi_u(t)=\exp(-it^3/6). $$ \end{prop} \begin{proof} This is a very standard probabilistic argument, but we give details for completeness. \par (1) For $u\in{\mathbf R}$, we write $$ s=\frac{u}{\lambda_N} $$ (note that $s$ depends on $N$ and $s\rightarrow 0$ when $N\rightarrow +\infty$). By the definition of mod-Poisson convergence (in particular the uniform convergence with respect to $u$), we have $$ \lim_{N\rightarrow +\infty}{\exp(\lambda_N(1-e^{is}))\E(e^{is Z_N})}=\Phi(0)=1. $$ \par The fact that $$ \exp(\lambda_N(e^{is}-1))=\exp((is+O(s^2))\lambda_N), $$ yields $$ \lim_{N\rightarrow +\infty}{\E(e^{i u Z_N/\lambda_N})}=e^{i u}. $$ Consequently, $(Z_N/\lambda_N)$ converges in distribution to $1$ and hence converges in probability since the limiting random variable is constant. \par (2) For $u\in{\mathbf R}$, we now write $$ t=\frac{u}{\sqrt{\lambda_N}} $$ (note that $t$ depends on $N$ and $t\rightarrow 0$ when $N\rightarrow +\infty$). 
\par Again, by the definition of mod-Poisson convergence (in particular the uniform convergence with respect to $u$), we have \begin{equation}\label{eq-1} \lim_{N\rightarrow +\infty}{\exp(\lambda_N(1-e^{it}))\E(e^{it Z_N})}=\Phi(0)=1. \end{equation} \par Moreover, we have \begin{align} \exp(\lambda_N(e^{it}-1)) &=\exp((it-t^2/2+O(t^3))\lambda_N)\nonumber\\ &=\exp\Bigl(iu\sqrt{\lambda_N}-\frac{u^2}{2}+ O\Bigl( \frac{u^3}{\sqrt{\lambda_N}} \Bigr)\Bigr). \label{eq-2} \end{align} \par Let $$ Y_N=\frac{Z_N-\lambda_N}{\sqrt{\lambda_N}}. $$ We have then \begin{equation}\label{eq-charfn} \E(e^{iuY_N})=\exp(-iu\sqrt{\lambda_N})\E(e^{it Z_N}). \end{equation} \par Writing~(\ref{eq-charfn}) as $$ \exp(-iu\sqrt{\lambda_N}) \times \exp((e^{it}-1)\lambda_N)\times \exp((1-e^{it})\lambda_N) \E(e^{it Z_N}) , $$ we see from~(\ref{eq-1}) and~(\ref{eq-2}) that this is $$ \exp\Bigl(-\frac{u^2}{2}+O\Bigl(\frac{u^3}{\sqrt{\lambda_N}}\Bigr) \Bigr) (1+o(1))\rightarrow \exp\Bigl(-\frac{u^2}{2}\Bigr),\quad \text{ as }\quad N\rightarrow +\infty, $$ and by L\'evy's criterion, this concludes the proof. \par Part (3) is a similar straightforward computation, which we leave as an enlightening exercise. \end{proof} In stating the renormalized convergence to a Gaussian variable, there is a loss of information, since the ``Poisson nature'' of the sequence is lost. This is illustrated further by the following result which goes some way towards clarifying the probabilistic nature of mod-Poisson convergence. We recall that the Kolmogorov-Smirnov distance between real-valued random variables $X$ and $Y$ is defined by $$ d_{KS}(X,Y)=\sup_{x\in\RR}{|\Prob(X\leq x)-\Prob(Y\leq x)|}. $$ \begin{prop}\label{th-ks-distance} Let $(Z_N)$ be a sequence of random variables which are a.s. supported on positive integers, and which converges in the mod-Poisson sense for some parameters $(\lambda_N)$, such that $\lambda_N\to\infty$ when $N\to\infty$. 
Assume further that the characteristic functions $\text{\boldmath$E$}(e^{iuZ_N})$ are of class $C^1$ and the convergence holds in the $C^1$ topology. \par Then we have $$ \lim_{N\rightarrow +\infty}{d_{KS}(Z_N,P_{\lambda_N})}=0, $$ where $P_{\lambda_N}$ is a Poisson random variable with parameter $\lambda_N$, and in fact $$ d_{KS}(Z_N,P_{\lambda_N})\leq \|\Phi'\|_{\infty}\lambda_N^{-1/2}, $$ for $N\geq 1$. \end{prop} \begin{proof} We recall the following well-known inequality, which is the appropriate tool here (see, e.g.~\cite[p. 186, 5.10.2]{Petrov95}): if $X$ and $Y$ are integer-valued random variables, then $$ d_{KS}(X,Y)\leq\dfrac{1}{4} \int_{-\pi}^\pi\left|\dfrac{\text{\boldmath$E$}(e^{iuX})-\text{\boldmath$E$}(e^{iuY})}{u}\right| du. $$ \par Let $$ \psi_N(u)=\text{\boldmath$E$}(e^{iu P_{\lambda_N}}),\quad \Phi_N(u)=\psi_N(u)^{-1}\text{\boldmath$E$}(e^{iuZ_N}). $$ \par From the inequality, we obtain \begin{align*} d_{KS}(Z_N,P_{\lambda_N})&\leq \dfrac{1}{4}\int_{-\pi}^\pi {\Bigl|\frac{\text{\boldmath$E$}(e^{iuZ_N})-\psi_N(u)}{u}\Bigr|du}\\ &= \frac{1}{4} \int_{-\pi}^{\pi}{\Bigl|\psi_N(u)\frac{\Phi_N(u)-1}{u}\Bigr|du}. \end{align*} \par From our stronger assumption of mod-Poisson convergence with $C^1$ convergence, we have a uniform bound $$ \Bigl|\frac{\Phi_N(u)-1}{u}\Bigr|\leq \|\Phi'_N\|_{\infty}, $$ for $N\geq 1$, hence since $|\psi_N(u)|=\exp(\lambda_N(\cos u-1))$, we have $$ d_{KS}(Z_N,P_{\lambda_N})\leq \frac{\|\Phi'\|_{\infty}}{4} \int_{-\pi}^{\pi}{e^{\lambda_N(\cos u-1)}du}. $$ \par It is well-known that the precise asymptotic of such an integral gives order of magnitude $\lambda_N^{-1/2}$ for $\lambda_N\rightarrow +\infty$. To see this quickly, note for instance that $\cos u-1\leq -u^2/5$ on $[-\pi,\pi]$, hence $$ \int_{-\pi}^{\pi}{e^{\lambda_N(\cos u-1)}du}\leq \int_{-\pi}^{\pi}{e^{-\lambda_N u^2/5}du}\leq \int_{\RR}{e^{-\lambda_N u^2/5}du}=\sqrt{\frac{5\pi}{\lambda_N}}, $$ which gives the result since $\sqrt{5\pi}/4\leq 1$. \end{proof} \begin{rem} (1) Hwang~\cite[Th. 
1]{hwang} gives this and many other variants for other measures of approximation, under the assumption of his version of mod-Poisson convergence. In another work with A. Barbour, we consider various refinements and applications of this type of statement, including with approximation involving more general families of discrete random variables (see~\cite{bkn}). \par (2) As a reference for number theorists, note that the existence of renormalized convergence as in~(\ref{eq-clt}) for an arbitrary sequence of integer-valued random variables $(Z_N)$, with $\text{\boldmath$E$}(Z_N)=\lambda_N$, does not imply that the Kolmogorov distance $d_{KS}(Z_N,P_{\lambda_N})$ converges to $0$: indeed, consider $$ Z_N=B_1+\cdots+B_N $$ where the $B_i$ are Bernoulli random variables with $\Prob(B_i=1)=\Prob(B_i=0)={\textstyle{\frac{1}{2}}}$. Then $\lambda_N=\tfrac{N}{2}$, and the normalized convergence in law~(\ref{eq-clt}) is the Central Limit Theorem. However, it is known that, for some constant $c>0$, we have $$ d_{KS}(Z_N,P_{\lambda_N})\geq c>0 $$ for all $N$ (see, e.g.,~\cite[Th. 2]{barbour-hall} for the analogue in total variation distance, which in that case is comparable to the Kolmogorov distance~\cite[Prop. 1]{roos}). \end{rem} \section{Limit theorems with mod-Poisson behavior} \label{sec-2} Now we give an analogue of the Poisson convergence in the mod-Poisson framework. \begin{prop}\label{prop-poisson-convergence} Let $(x_n)$ be a sequence of positive real numbers with \begin{equation}\label{eq-mod-poisson-cond} \sum_{n\geq 1}{x_n}=+\infty,\quad\quad \sum_{n\geq 1}{x_n^2}<+\infty, \end{equation} and let $(B_n)$ be a sequence of independent Bernoulli random variables with $$ \Prob(B_n=0)=1-x_n,\quad\quad \Prob(B_n=1)=x_n. $$ Then $$ Z_N=B_1+\cdots+B_N $$ has mod-Poisson convergence with parameters $$ \lambda_N=x_1+\cdots +x_N $$ and with limiting function given by $$ \Phi(u)=\prod_{n\geq 1}{(1+x_n(e^{iu}-1))\exp(x_n(1-e^{iu}))}, $$ a uniformly convergent infinite product. 
\end{prop} \begin{proof} This is again a quite simple computation. Indeed, by independence of the variables $B_n$, we have $$ \exp(\lambda_N(1-e^{iu}))\E(e^{iuZ_N}) =\prod_{n=1}^{N}\exp(x_n(1-e^{iu}))(1+x_n(e^{iu}-1)), $$ and since $$ \exp(x_n(1-e^{iu}))(1+x_n(e^{iu}-1))=1+O(x_n^2) $$ for $u\in\RR$ and $n\geq 1$ (recall $x_n\rightarrow 0$), it follows from~(\ref{eq-mod-poisson-cond}) that this product converges locally uniformly to $\Phi(u)$, which completes the proof. \end{proof} \begin{rem} More generally, assume that $(X_k^{(n)})$ is a triangular array of independent random variables taking values in $\{0,a_1,\ldots, a_r\}$, such that $$\Prob[X_k^{(n)}=a_i]=x_n^{(i)},\quad\quad i=1,\ldots,r.$$ Assume that for any $i$, $\sum_{n\geq1}x_n^{(i)}=\infty$ and $\sum_{n\geq1}(x_n^{(i)})^2<\infty$. Then $S_n=X_1^{(n)}+\ldots+X_n^{(n)}$ converges in the mod-Poisson sense with parameters $\lambda_n=a_1x_n^{(1)}+\ldots+a_rx_n^{(r)}$. \end{rem} \section{Mod-Poisson convergence and the Erd\H{o}s-Kac Theorem: a first analogy}\label{sec-nt} In~\cite[\S 4.3]{jkn}, we gave the first example of mod-Poisson convergence as explaining (through the Central Limit Theorem of Proposition~\ref{prop-cor-mod-poisson}) the classical result of Erd\H{o}s and Kac concerning the statistical behavior of the arithmetic function $\omega(n)$, the number of (distinct) prime divisors of a positive integer $n\geq 1$: \begin{equation}\label{eq-erdos-kac} \lim_{N\rightarrow +\infty}{ \frac{1}{N} |\{ n\leq N\,\mid\, a<\frac{\omega(n)-\log\log N}{\sqrt{\log\log N}}<b \}|} = \frac{1}{\sqrt{2\pi}}\int_{a}^b{ e^{-t^2/2}dt} \end{equation} for any real numbers $a<b$. 
\par More precisely, with $$ \omega'(n)=\omega(n)-1,\quad \text{ for } n\geq 2, $$ we showed by a simple application of the Delange-Selberg method (see, e.g.,~\cite[II.5, Theorem 3]{tenenbaum}) that for any $u\in {\mathbf R}$, we have $$ \lim_{N\rightarrow +\infty}{ \frac{ (\log N)^{(1-e^{iu})}}{N}\sum_{2\leq n\leq N}{e^{iu\omega'(n)}}}=\Phi(u), $$ and the convergence is uniform, with \begin{equation}\label{eq-phi-omega} \Phi(u)=\frac{1}{\Gamma(e^{iu}+1)}\prod_p{ \Bigl(1-\frac{1}{p}\Bigr)^{e^{iu}} \Bigl(1+\frac{e^{iu}}{p-1}\Bigr) }, \end{equation} where the Euler product is absolutely and uniformly convergent: this means mod-Poisson convergence with parameters $\lambda_N=\log\log N$. By Proposition~\ref{prop-cor-mod-poisson}, (2), this implies~(\ref{eq-erdos-kac}).\footnote{\ As we observed, this gives essentially the proof of the Erd\H{o}s--Kac theorem due to R\'enyi and Tur\'an~\cite{renyi-turan}. For another recent simple proof, see~\cite{granville-sound}.} To illustrate what extra information is contained in mod-Poisson convergence we make two remarks: first, by putting $u=\pi$, for instance, we get $$ \sum_{1\leq n\leq N}{(-1)^{\omega(n)}}=o\Bigl(\frac{N}{(\log N)^2}\Bigr), $$ as $N\rightarrow +\infty$ (since $1/\Gamma(1+e^{i\pi})=0$), which is a statement well-known to be equivalent to the Prime Number Theorem. Secondly, more generally, we can apply results like Proposition~\ref{th-ks-distance} (which is easily checked to be applicable here) to derive Poisson-approximation results for $\omega(n)$ which are much more precise than the renormalized Gaussian behavior (see also~\cite[\S 4]{hwang} and~\cite[\S 6.1]{tenenbaum} for the discussion of the classical work of Sath\'e and Selberg). 
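The $u=\pi$ consequence above is easy to probe numerically. The following sketch (illustrative only, and not from the paper; the threshold in the final assertion is a deliberately weak stand-in for the $o(N/(\log N)^2)$ bound) sieves $\omega(n)$ for $n\leq N$ and checks that $\sum_{n\leq N}(-1)^{\omega(n)}$ is already far smaller than $N$.

```python
import math

def omega_table(N):
    """omega[n] = number of distinct prime divisors of n, for 0 <= n <= N."""
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if omega[p] == 0:            # p untouched by smaller primes, so prime
            for m in range(p, N + 1, p):
                omega[m] += 1
    return omega

N = 10**5
omega = omega_table(N)
signed_sum = sum((-1) ** omega[n] for n in range(1, N + 1))
# The main term vanishes because 1/Gamma(1 + e^{i pi}) = 0; the true bound
# is o(N/(log N)^2).  We only assert a weak form of the cancellation.
assert abs(signed_sum) < N / math.log(N)
```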
\par \medskip \par We wish here to bring to light the very interesting, and very complete, analogy between the probabilistic structure of this mod-Poisson version of the Erd\H{o}s-Kac Theorem and the mod-Gaussian conjecture for the distribution of the values of $L$-functions, taking as basic example the conjecture for the distribution of $\log |\zeta(1/2+it)|$, which follows from the Keating-Snaith moment conjectures for the Riemann zeta function (see~\cite{jkn},~\cite{keating-snaith}). \par We start with the observation, following from~(\ref{eq-phi-omega}), that the limiting function $\Phi(u)$ in the Erd\H{o}s-Kac Theorem takes the form of a product $\Phi(u)=\Phi_1(u)\Phi_2(u)$ with $$ \Phi_1(u)=\frac{1}{\Gamma(e^{iu}+1)},\quad\quad \Phi_2(u)= \prod_{p}{ \Bigl(1-\frac{1}{p}\Bigr)^{e^{iu}} \Bigl(1+\frac{e^{iu}}{p-1}\Bigr) }. $$ \par We compare this with the Moment Conjecture in the mod-Gaussian form, namely, if $U$ is uniformly distributed on $[0,T]$, it is expected that \begin{equation}\label{eq-moments-conj} \lim_{T\rightarrow +\infty}{e^{u^2 \log\log T}\text{\boldmath$E$}(e^{iu\log |\zeta(1/2+iU)|^2})}= \Psi_1(u)\Psi_2(u), \end{equation} for all $u\in\RR$ (locally uniformly) where \begin{align} \Psi_1(u)&= \frac{G(1+iu)^2}{G(1+2iu)},\\ \intertext{($G(z)$ is the Barnes double-gamma function, see e.g.~\cite[Ch. XII, Misc. Ex. 48]{ww}), and} \Psi_2(u)&=\prod_{p}{ \Bigl(1-\frac{1}{p}\Bigr)^{-u^2}\Bigl\{ \sum_{m\geq 0}{\Bigl(\frac{\Gamma(m+iu)}{m!\Gamma(iu)}\Bigr)^2 p^{-m}}\Bigr\}}. \end{align} \par Here also, the limiting function splits as a product of two terms, and each appears individually as the limit in a distinct mod-Gaussian convergence. Indeed, we first have $$ \Psi_1(u)=\lim_{N\rightarrow +\infty}{e^{u^2(\log N)} \text{\boldmath$E$}(e^{iu\log |\det(1-X_N)|^2})}, $$ where $X_N$ is a Haar-distributed $U(N)$-valued random variable. 
Secondly (see~\cite[4.1]{jkn}), we have $$ \Psi_2(u)=\lim_{N\rightarrow+\infty}{ e^{u^2(\log (e^{\gamma}\log N))}\text{\boldmath$E$}(e^{iuL_N})} $$ where $$ L_N=\sum_{p\leq N}{\log \Bigl|1-\frac{e^{i\theta_p}}{\sqrt{p}}\Bigr|^2}, $$ for any sequence $(\theta_p)_{p\leq N}$ of independent random variables, uniformly distributed on $[0,2\pi]$. \par \begin{rem} Note in passing that for fixed $p$, the $p$-th component of the Euler product of $\zeta(1/2+iU)$, for $U$ uniformly distributed on $[0,T]$, converges in law to $(1-e^{i\theta_p}p^{-1/2})^{-1}$ as $T\rightarrow +\infty$. \end{rem} \par We now prove that the Euler product $\Phi_2$ (like $\Psi_2$) corresponds to mod-Poisson convergence for a natural asymptotic probabilistic model of primes, and that $\Phi_1$ (like $\Psi_1$) comes from a model of group-theoretic origin.\footnote{\ Since a product of two limiting functions for mod-Poisson convergence is clearly another such limiting function, we also recover without arithmetic the fact that the limiting function $\Phi(u)$ arises from mod-Poisson convergence.} \par We start with the Euler product, where the computation was already described in~\cite[\S 4.3]{jkn}: we have $$ \Phi_2(u)=\lim_{y\rightarrow +\infty}{\prod_{p\leq y}{ \Bigl(1-\frac{1}{p}\Bigr)^{e^{iu}-1} \Bigl(1-\frac{1}{p}\Bigr) \Bigl(1+\frac{e^{iu}}{p-1}\Bigr) }}, $$ and by isolating the first term, it follows that \begin{align*} \Phi_2(u)&=\lim_{y\rightarrow +\infty}{ \exp((1-e^{iu})\lambda_y) \prod_{p\leq y}{\Bigl(1-\frac{1}{p}+\frac{1}{p}e^{iu}\Bigr) }}\\ &=\lim_{y\rightarrow +\infty}{\E(e^{iuP_{\lambda_y}})^{-1}} \E(e^{iuZ'_y}) \end{align*} where $$ \lambda_y=\sum_{p\leq y}{\log \Bigl(\frac{1}{1-p^{-1}}\Bigr)}= \sum_{\stacksum{p\leq y}{k\geq 1}}{ \frac{1}{kp^k}}=\log\log y+\kappa+o(1), $$ as $y\rightarrow +\infty$, for some real constant $\kappa$ (see, e.g.,~\cite[\S 22.8]{hardy-wright}), and \begin{equation}\label{eq-zy} Z'_y=\sum_{p\leq y}{B'_{p}} \end{equation} is a sum of independent Bernoulli random 
variables with parameter $1/p$: $$ \Prob(B'_{p}=1)=\frac{1}{p},\quad\quad \Prob(B'_{p}=0)=1-\frac{1}{p}. $$ \par We note that this is a particular case of Proposition~\ref{prop-poisson-convergence}, and that (as expected) the parameters of these Bernoulli laws correspond exactly to the ``intuitive'' probability that an integer $n$ be divisible by $p$, or equivalently, the Bernoulli variable $B'_p$ is the limit in law as $N\rightarrow +\infty$ of the random variables defined as the indicator of a uniformly chosen integer $n\leq N$ being divisible by $p$; the independence of the $B'_p$ corresponds for instance to the formal (algebraic) independence of the divisibility by distinct primes given, e.g., by the Chinese Remainder Theorem. \par As in the case of the Riemann zeta function, we also note that the independent model fails to capture the truth about the distribution of $\omega(n)$, the extent of this failure being measured, in some sense, by the factor $\Phi_1(u)$. Because $$ \frac{Z'_y-\log \log y}{\sqrt{\log\log y}}\overset{\mbox{\rm \scriptsize law}}{\Rightarrow} \mathcal{N}(0,1), $$ this discrepancy between the independent model and the arithmetic truth is invisible at the level of the normalized convergence in distribution (as it is for $\log|\zeta(1/2+it)|$, by Selberg's Central Limit Theorem, hiding the Random Matrix Model). \par Now we consider the first factor $\Phi_1(u)=\Gamma(e^{iu}+1)^{-1}$. 
Again, in~\cite[\S 4.3]{jkn}, we appealed to the formula $$ \frac{1}{\Gamma(e^{iu}+1)}=\prod_{k\geq 1}{ \Bigl(1+\frac{e^{iu}}{k}\Bigr) \Bigl(1+\frac{1}{k}\Bigr)^{-e^{iu}} } $$ for $u\in\RR$ (see~\cite[12.11]{ww}) to compute \begin{align*} \Phi_1(u)&=\lim_{N\rightarrow +\infty}{ \prod_{k\leq N}{ \Bigl(1+\frac{1}{k}\Bigr)^{1-e^{iu}} \Bigl(1+\frac{1}{k}\Bigr)^{-1} \Bigl(1+\frac{e^{iu}}{k}\Bigr) } }\\ &= \lim_{N\rightarrow +\infty}\exp(\lambda_N(1-e^{iu}))\prod_{k\leq N}{ \Bigl(1+\frac{1}{k}\Bigr)^{-1} \Bigl(1+\frac{e^{iu}}{k}\Bigr) }\\ &=\lim_{N\rightarrow +\infty}\exp(\lambda_N(1-e^{iu})) \text{\boldmath$E$}(e^{iu Z_N}), \end{align*} where $$ \lambda_N=\sum_{1\leq k\leq N}{\log (1+k^{-1})}=\log (N+1), $$ and $Z_N$ is the sum $$ Z_N=B_{1}+B_{2}+\cdots +B_{N}, $$ with $B_{k}$ denoting independent Bernoulli random variables with distribution $$ \Prob(B_{k}=1)=1-\frac{1}{1+\frac{1}{k}}=\frac{1}{k+1},\quad\quad \Prob(B_{k}=0)=\frac{1}{1+\frac{1}{k}}=\frac{k}{k+1}. $$ \par The group-theoretic interpretation of this distribution is very suggestive: indeed, it is the distribution of the random variable $\varpi(\sigma_{N+1})-1$, where $\sigma_{N+1}\in \mathfrak{S}_{N+1}$ is distributed according to the uniform measure on the symmetric group, and we recall that $\varpi(\sigma)$ is the number of cycles of a permutation. In other words, we have \begin{equation}\label{eq-char-permut} \text{\boldmath$E$}(e^{iu\varpi(\sigma_N)})=\prod_{1\leq j\leq N}{ \Bigl(1-\frac{1}{j}+\frac{e^{iu}}{j}\Bigr)}, \end{equation} as proved, e.g., in~\cite[\S 4.6]{abt}; note that this is not obvious, and the decomposition as a sum of independent random variables is due to Feller, and is explained in~\cite[p. 16]{abt}. 
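Formula~(\ref{eq-char-permut}) is easy to verify by brute force for small $N$. The following sketch (illustrative only, with $N=6$ and an arbitrary test value of $u$) enumerates $\mathfrak{S}_N$ directly and compares the empirical characteristic function of $\varpi$ with Feller's product.

```python
import cmath
import math
from itertools import permutations

def varpi(perm):
    """Number of disjoint cycles of a permutation given in one-line notation."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

N, u = 6, 0.7
# Empirical characteristic function of varpi(sigma_N), sigma_N uniform on S_N.
lhs = sum(cmath.exp(1j * u * varpi(p)) for p in permutations(range(N)))
lhs /= math.factorial(N)
# The product over 1 <= j <= N of (1 - 1/j + e^{iu}/j).
rhs = 1 + 0j
for j in range(1, N + 1):
    rhs *= 1 - 1 / j + cmath.exp(1j * u) / j
assert abs(lhs - rhs) < 1e-10
```

Both sides equal $\frac{1}{N!}\sum_k c(N,k)e^{iuk}$, where $c(N,k)$ are the unsigned Stirling numbers of the first kind, which is the generating-function identity behind Feller's decomposition.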
\par So we see -- and this gives another example of natural mod-Poisson convergence -- that these random variables have mod-Poisson convergence with parameters $\log N$, and limiting function $1/\Gamma(e^{iu})$: \begin{equation}\label{eq-mod-poisson-permut} \lim_{N\rightarrow +\infty}\exp((\log N)(1-e^{iu}))\text{\boldmath$E$}(e^{iu\varpi(\sigma_N)})= \frac{1}{\Gamma(e^{iu})}. \end{equation} \par For further reference, we state a more precise version, which follows from~(\ref{eq-char-permut}): \begin{equation}\label{eq-explicit-permut} \text{\boldmath$E$}(e^{iu\varpi(\sigma_N)})= \frac{1}{\Gamma(e^{iu})}\exp((\log N)(e^{iu}-1))\Bigl(1+O\Bigl(\frac{1}{N}\Bigr)\Bigr), \end{equation} locally uniformly for $u\in\RR$. Note that this includes the special case $u=(2k+1)\pi$ where $$ \text{\boldmath$E$}(e^{iu\varpi(\sigma_N)})= \frac{1}{\Gamma(e^{iu})}=0. $$ \par This explanation of the ``transcendental'' factor $1/\Gamma(e^{iu}+1)$ is particularly convincing because of well-known and well-studied analogies between the cycle structure of random permutations and the factorization of integers (see, e.g., the discussion in~\cite[\S 1.2]{abt} or the entertaining survey~\cite{granville}). Its origin in~\cite[4.3]{jkn} is, however, not very enlightening: the Gamma function appears universally in the Delange-Selberg method in a way which may seem to be coincidental and unrelated to any group-theoretic structure (see, e.g.,~\cite[\S 5.2]{tenenbaum} where it originates in a representation of $1/\Gamma(z)$ as a contour integral of Hankel type). \section{The analogy deepens}\label{sec-finite-fields} The discussion of the previous section is already interesting, but it becomes (to our mind) even more intriguing after one notes how the analogy can be extended by including consideration of function field situations, as in the work of Katz-Sarnak~\cite{katzsarnak}. \par Let $\mathbf{F}_q$ be a finite field with $q=p^n$ elements, with $n\geq 1$ and $p$ prime. 
For a polynomial $f\in \mathbf{F}_q[X]$, let $$ \omega(f)=\omega_q(f)=|\{\pi\in \mathbf{F}_q[X]\,\mid\, \pi\text{ is irreducible monic and divides } f\}| $$ be the analogue of the number of prime factors of an integer (we will usually drop the subscript $q$). \par We consider the statistical behavior of this function under two types of limits: (i) either $q$ is replaced by $q^m$, $m\rightarrow +\infty$, and $f$ is assumed to range over monic polynomials of fixed degree $d\geq 1$ in $\mathbf{F}_{q^m}[X]$; or (ii) $q$ is fixed, and $f$ is assumed to range over monic polynomials of degree $d\rightarrow +\infty$ in $\mathbf{F}_q[X]$. \par The first limit, of fixed degree and increasing base field, is similar to the one considered by Katz and Sarnak for the distribution of zeros of families of $L$-functions over finite fields~\cite{katzsarnak}. And the parallel is quite precise as far as the group-theoretic situation goes. Indeed, recall that the crucial ingredient in their work is that the Frobenius automorphism provides in a natural way a ``random matrix'' for a given $L$-function, the characteristic polynomial of which provides a spectral interpretation of the zeros (see, e.g.,~\cite[\S 4.2]{jkn} for a partial, down-to-earth, summary). \par In our case, let us assume first that $f\in \mathbf{F}_{q}[X]$ is squarefree. Let $K_f$ denote the splitting field of $f$, i.e., the extension field of $\mathbf{F}_{q}$ generated by the $d$ roots of $f$, and let $F_f$ denote the Frobenius automorphism $x\mapsto x^q$ of $K_f$. This automorphism permutes the roots of $f$, which all lie in $K_f$, and after enumerating them, leads to a permutation in $\mathfrak{S}_d$, still denoted $F_f$. This depends on the enumeration of the roots, but the conjugacy class $F_f^{\sharp}\in\mathfrak{S}_d^{\sharp}$ is well-defined. 
\par Now, by the very definition, we have \begin{equation}\label{eq-ell-om} \omega(f)=\varpi(F_f^{\sharp}), \end{equation} which can be seen as the (very simple) analogue of the spectral interpretation of an $L$-function as the characteristic polynomial of the Frobenius endomorphism. \begin{rem} \label{rm-scheme} We can come even closer to the Katz-Sarnak setting of families of $L$-functions. Consider, in scheme-theoretic language,\footnote{\ Readers unfamiliar with this language can skip this remark, which will not be used, except to state Theorem~\ref{th-katz-sarnak} below.} the (very simple!) family of zeta functions of the zero-dimensional schemes $X_f=\spec(\mathbf{F}_q[X]/(f))$, i.e., the varieties over $\mathbf{F}_q$ with equation $f(x)=0$. These zeta functions are defined by either of the following two formulas: $$ Z(X_f)=\prod_{x\in |X_f|}{(1-T^{\deg(x)})^{-1}}=\exp\Bigl( \sum_{m\geq 1}{\frac{|X_f(\mathbf{F}_{q^m})|T^m}{m}} \Bigr), $$ where $|X_f|$ is the set of closed points of $X_f$. Since these correspond naturally to irreducible factors of $f$ (without multiplicity), it follows that $$ Z(X_f)=\prod_{\pi\mid f}{(1-T^{\deg(\pi)})^{-1}}, $$ and hence, if $f$ is squarefree, a higher-level version of~(\ref{eq-ell-om}) is the ``spectral interpretation'' \begin{equation}\label{eq-zeta-spectral} Z(X_f)=\det(1-F_fT|H^0_c(\bar{X}_f,\mathbf{Q}_{\ell}))^{-1}= \det(1-\rho(F_f)T)^{-1} \end{equation} where $F_f$ is still the Frobenius automorphism, $H^0_c(\bar{X}_f,\mathbf{Q}_{\ell})$ is simply isomorphic with $\QQ_{\ell}^{\deg(f)}$ (the variety over the algebraic closure has $\deg(f)$ connected components, which are points), and $\rho$ is the natural faithful representation of $\mathfrak{S}_{\deg(f)}$ in $U(\deg(f),\CC)$ by permutation matrices, since this is quite clearly how $F_f$ acts on the \'etale cohomology space. \par Looking at the order of the pole of $Z(X_f)$ at $T=1$, we recover~(\ref{eq-ell-om}). 
In particular, the generalizations of the Erd\H{o}s-Kac Theorem that we will prove in the next section can be interpreted as describing the limiting statistical behavior, in the mod-Poisson sense, of the order of the pole of those zeta functions as the degree $\deg(f)$ tends to infinity (see Theorem~\ref{th-katz-sarnak}). It is truly a zero-dimensional version of the Katz-Sarnak problematic for growing conductor. (Note that this interpretation also suggests looking at other distribution statistics of these zeta functions, and we hope to come back to this). \end{rem} \par \medskip \par The relation~(\ref{eq-ell-om}) (or~(\ref{eq-zeta-spectral})) explains the existence of a link between the number of irreducible factors of polynomials and the number of cycles of permutations. Indeed, the other essential number-theoretic ingredient for Katz and Sarnak is Deligne's Equidistribution Theorem, which shows that the matrices given by the Frobenius, \emph{in the limit under consideration} where $q$ is replaced by $q^m$, $m\rightarrow +\infty$, become equidistributed in a certain monodromy group. Here we have, exactly similarly, the following well-known statement: \par \medskip \par \textbf{Fact.} In the limit of fixed $d$ and $m\rightarrow +\infty$, for $f$ uniformly chosen among monic squarefree polynomials of degree $d$ in $\mathbf{F}_{q^m}[X]$, the conjugacy classes $F_f^{\sharp}$ become uniformly distributed in $\mathfrak{S}_d^{\sharp}$ for the natural (Haar) measure. 
\par \medskip This fact is easily proved from the well-known Gauss-Dedekind formula $$ \Pi_q(d)= \sum_{\deg(\pi)=d}{1}= \frac{1}{d}\sum_{\delta\mid d}{\mu(\delta)q^{d/\delta}}= \frac{q^{d}}{d}+O(q^{d/2}) $$ for the number of irreducible monic polynomials of degree $d$ with coefficients in $\mathbf{F}_q$, and it is a ``baby'' analogue of Deligne's Equidistribution Theorem.\footnote{\ Indeed, it could be proved using the Chebotarev density theorem, which is a special case of Deligne's theorem.} Hence, we obtain $$ \omega(f)\overset{\mbox{\rm \scriptsize law}}{\Rightarrow} \varpi(\sigma_{d}), $$ as $m\rightarrow +\infty$, where $f$ is distributed uniformly among monic squarefree polynomials of degree $d$ in $\mathbf{F}_{q^m}[X]$, and $\sigma_d$ is distributed uniformly among $\mathfrak{S}_d$. \par The second limit, where the base field $\mathbf{F}_q$ is fixed and the degree $d$ grows, is the analogue of the problematic of families of curves of increasing genus over a fixed finite field (see the discussion in~\cite[p. 12]{katzsarnak}), and -- for our purposes -- of the distribution of the number of prime divisors of integers, which we discussed in the previous section. In the next section, we prove a mod-Poisson form of the Erd\H{o}s-Kac theorem in $\mathbf{F}_q[X]$ (the Central Limit version being a standard result, essentially due to M. Car, and apparently stated first by Flajolet and Soria~\cite[\S 3, Cor. 1]{flajolet-soria}; see also the recent quick derivation by R. Rhoades~\cite{rhoades}). \begin{rem} One may extend the conjugacy class $F_f^{\sharp}\in \mathfrak{S}_d^{\sharp}$ to all $f\in\mathbf{F}_q[X]$ of degree $d$, in the following directly combinatorial way (which hides the Frobenius aspect): $F_f^{\sharp}$ is the conjugacy class of permutations with as many disjoint $j$-cycles, $1\leq j\leq d$, as there are irreducible factors of $f$ of degree $j$, counted with multiplicity.
However, the relation $\omega(f)=\varpi(F_f^{\sharp})$ does \emph{not} extend to this case, since multiple factors are not counted by $\omega$. On the other hand, we have $\Omega(f)=\varpi(F_f^{\sharp})$, where $\Omega(f)$ is the number of irreducible factors counted with multiplicity. \end{rem} \section{Mod-Poisson convergence for the number of irreducible factors of a polynomial}\label{sec-main} In this section, we state and prove the mod-Poisson form of the analogue of the Erd\H{o}s-Kac Theorem for polynomials over finite fields, trying to bring to the fore the probabilistic structure suggested in the previous section. \begin{thm}\label{th-main} Let $q\not=1$ be a power of a prime $p$, and let $\omega(f)$ denote as before the number of monic irreducible polynomials dividing $f\in \mathbf{F}_q[X]$. Write $|g|=q^{\deg(g)}=|\mathbf{F}_q[X]/(g)|$ for any non-zero $g\in \mathbf{F}_q[X]$. \par For any $u\in{\mathbf R}$, we have \begin{equation}\label{eq-main} \lim_{d\rightarrow +\infty}\frac{\exp((1-e^{iu})\log d)}{q^d} \sum_{\deg(f)=d} {e^{iu (\omega(f)-1)}}= \tilde{\Phi}_1(u)\tilde{\Phi}_2(u), \end{equation} where \begin{equation}\label{eq-phi1} \tilde{\Phi}_1(u)=\frac{1}{\Gamma(e^{iu}+1)} \end{equation} and \begin{equation}\label{eq-phi2} \tilde{\Phi}_2(u)=\prod_{\pi}{ \Bigl(1-\frac{1}{|\irred|}\Bigr)^{e^{iu}} \Bigl(1+\frac{e^{iu}}{|\irred|-1}\Bigr) }, \end{equation} the product running over all monic irreducible polynomials $\pi\in \mathbf{F}_q[X]$ and the sum over all monic polynomials $f\in \mathbf{F}_q[X]$ with degree $\deg(f)=d$. Moreover, the convergence is uniform in $u\in{\mathbf R}$. \end{thm} \begin{rem} Note the similarity of the shape of the limiting function with that in~(\ref{eq-phi-omega}) and the conjecture for $\zeta(1/2+it)$, in particular the fact that the group-theoretic term is the same as for $\omega(n)$, while the Euler product is a direct transcription in $\mathbf{F}_q[X]$ of the earlier $\Phi_2$.
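The Euler product $\tilde{\Phi}_2$ runs over the monic irreducible polynomials, whose number in each degree is given by the Gauss-Dedekind formula recalled in the previous section. The following Python sketch double-checks that count by exhaustive trial division; the choices $q=2$ and the degree range are arbitrary, made only to keep the search small.

```python
from itertools import product
from collections import Counter

q = 2       # ground field F_q; any prime works, 2 keeps the search small
DMAX = 8    # check the count of irreducibles in degrees 1..DMAX

def remainder(f, g):
    # remainder of f modulo the monic g (coefficient lists, lowest degree first)
    f = list(f)
    for shift in range(len(f) - len(g), -1, -1):
        c = f[shift + len(g) - 1] % q
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gi) % q
    return f

def monic(d):
    for low in product(range(q), repeat=d):
        yield list(low) + [1]

# f of degree d is irreducible iff no irreducible of degree <= d/2 divides it,
# so the list can be built degree by degree using trial division
irr = []
for d in range(1, DMAX + 1):
    for f in monic(d):
        if all(any(remainder(f, g)) for g in irr if 2 * (len(g) - 1) <= d):
            irr.append(f)

def mobius(n):
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

counts = Counter(len(f) - 1 for f in irr)
for d in range(1, DMAX + 1):
    predicted = sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d
    assert counts[d] == predicted
print(dict(sorted(counts.items())))  # → {1: 2, 2: 1, 3: 2, 4: 3, 5: 6, 6: 9, 7: 18, 8: 30}
```

The brute-force counts agree with $\Pi_q(d)=\frac{1}{d}\sum_{\delta\mid d}\mu(\delta)q^{d/\delta}$ in every degree tested.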
\end{rem} \begin{rem} This can be rephrased, according to Remark~\ref{rm-scheme}, in the following manner which illustrates the analogy with the Katz-Sarnak philosophy: \begin{thm}\label{th-katz-sarnak} Let $q\not=1$ be a power of a prime. For any $f\in\mathbf{F}_q[X]$, monic of degree $\geq 1$, let $X_f$ be the zero-dimensional scheme $\spec(\mathbf{F}_q[X]/(f))$, let $Z(X_f)\in \QQ(T)$ denote its zeta function and let $r(X_f)\geq 0$ denote the order of the pole of $Z(X_f)$ at $T=1$. Then for any $u\in{\mathbf R}$, we have $$ \lim_{d\rightarrow +\infty}\frac{\exp((1-e^{iu})\log d)}{q^d} \sum_{\deg(f)=d} {e^{iu r(f)}}=e^{iu}\,\tilde{\Phi}_1(u)\tilde{\Phi}_2(u), $$ with notation as before. \end{thm} The only thing to note here is that if $f$ is not squarefree, the scheme $X_f$ is not reduced; the induced reduced scheme is $X_{f^{\flat}}$, where $f^{\flat}$ is the (squarefree) product of the distinct monic irreducible factors dividing $f$. Then $Z(X_f)=Z(X_{f^{\flat}})$, and we have $$ r(f)=-\ord_{T=1}Z(X_f)=-\ord_{T=1}Z(X_{f^{\flat}})=r(f^{\flat})= \omega(f^{\flat})=\omega(f), $$ so the two theorems are indeed equivalent. \end{rem} \begin{rem} One can also prove by the same method the following two variants, where we restrict attention to squarefree polynomials, or we consider irreducible factors with multiplicity. First, we have $$ \frac{e^{(1-e^{iu})\log d}}{q^d} \mathop{\sum \Bigl.^{\flat}}\limits_{\deg(f)=d} {e^{iu (\omega(f)-1)}}\rightarrow \frac{1}{\Gamma(1+e^{iu})} \prod_{\pi}{ \Bigl(1-\frac{1}{|\irred|}\Bigr)^{e^{iu}} \Bigl(1+\frac{e^{iu}}{|\irred|}\Bigr) }, $$ where the sum $\mathop{\sum \Bigl.^{\flat}}\limits$ runs over all squarefree monic polynomials $f\in \mathbf{F}_q[X]$ with degree $\deg(f)=d$. Next, summing this time over all monic polynomials of degree $d$, we have $$ \frac{e^{(1-e^{iu})\log d}}{q^d} \sum_{\deg(f)=d} {e^{iu (\Omega(f)-1)}}\rightarrow \frac{1}{\Gamma(1+e^{iu})} \prod_{\pi}{ \frac{(1-|\irred|^{-1})^{e^{iu}}} {1-e^{iu}/|\irred|}}.
$$ \end{rem} We now come to the proof. The idea we want to highlight -- the source of the splitting of the limiting function into two parts of distinct probabilistic origin -- is to first separate the irreducible factors of ``small'' degree and those of ``large'' degree (which is fairly classical), and then observe that an equidistribution theorem allows us to perform a transfer of the contribution of large factors to the corresponding average over random permutations, conditioned on having no small cycle lengths. This will explain the factor $\tilde{\Phi}_1$ corresponding to the cycle count of random permutations. Note that shorter arguments are definitely available, using analogues of the Delange-Selberg method used in~\cite{jkn} (see~\cite[\S 2, Th. 1]{flajolet-soria}), but this hides again the mixture of probabilistic models involved. \par Interestingly, the small and large irreducible factors are \emph{not} exactly independent. But the dependency is (essentially) perfectly compensated by the effect of the conditioning at the level of random permutations. Why this is so may be the last little mystery in the computation, which is otherwise very enlightening. \par We set up some notation first: for $f\in \mathbf{F}_q[X]$, we let $\degp{f}$ (resp. $\degm{f}$) denote the largest (resp. smallest) degree of an irreducible factor $\pi\mid f$; correspondingly, for a permutation $\sigma\in\mathfrak{S}_d$, we denote by $\ellp{\sigma}$ (resp. $\ellm{\sigma}$) the largest (resp. smallest) length of a cycle occurring in the decomposition of $\sigma$. \par Henceforth, by convention, any sum involving polynomials $f$, $g$, $h$, etc., is assumed to restrict to monic polynomials, and any sum or product involving $\pi$ is restricted to monic irreducible polynomials. \par The next lemma summarizes some simple properties, and the important equidistribution property we need.
\begin{lem}\label{lm-distrib} With notation as above, we have: \par \emph{(1)} For all $d\geq 1$, we have $$ \frac{1}{q^d}\sum_{\deg(\pi)=d}{1}=\frac{1}{d}+O(q^{-d/2}). $$ \par \emph{(2)} For all $d\geq 1$, we have \begin{gather}\label{eq-upper-mertens} \prod_{\deg(\pi)\leq d}{\Bigl(1+\frac{1}{|\irred|-1}\Bigr)}\ll d, \\ \label{eq-mertens} \prod_{\deg(\pi)\leq d}{\Bigl(1-\frac{1}{|\irred|}\Bigr)}= \exp\Bigl(-\sum_{1\leq j\leq d}{\frac{1}{j}}\Bigr) \erreurm{\frac{1}{d}}. \end{gather} \par \emph{(3)} For any $d\geq 1$ and any fixed permutation $\sigma\in\mathfrak{S}_d$, we have \begin{equation}\label{eq-quant-equid} \frac{1}{q^d}\mathop{\sum \Bigl.^{\flat}}\limits_{\stacksum{\deg(f)=d}{F_f^{\sharp}=\sigma^{\sharp}}}{1} =\Prob(\sigma_d=\sigma)\erreurm{ \frac{d}{q^{\ellm{\sigma}/2}}}, \end{equation} where the conjugacy class $F_f^{\sharp}\in\mathfrak{S}_d^{\sharp}$ is defined in the previous section, $\sigma_d$ is a uniformly chosen random permutation in $\mathfrak{S}_d$ and $\mathop{\sum \Bigl.^{\flat}}\limits$ restricts the sum to squarefree polynomials. \par In all of these estimates, the implied constants are absolute (for the last one, under the assumption $q^{\ellm{\sigma}/2}\geq d$), except that in~\emph{(\ref{eq-upper-mertens})} the implied constant may depend on $q$. \end{lem} \begin{proof} The first statement has already been recalled. For~(\ref{eq-upper-mertens}), we have \begin{align*} \prod_{\deg(\pi)\leq d}{\Bigl(1+\frac{1}{|\irred|-1}\Bigr)} &\leq \exp\Bigl(\sum_{1\leq j\leq d} {\frac{\Pi_q(j)}{q^j-1}}\Bigr)\\ &=\exp\Bigl(\sum_{2\leq j\leq d}{\frac{1}{j}} +O\Bigl(\sum_{1\leq j\leq d}{\frac{q^{j/2}}{q^j-1}}\Bigr)\Bigr) \ll d, \end{align*} for $d\geq 1$, with an implied constant depending on $q$.
\par For~(\ref{eq-mertens}), which is the analogue for $\mathbf{F}_q[X]$ of the classical Mertens estimate, we refer, e.g., to~\cite{rosen}, where it is proved in the form $$ \prod_{\deg(\pi)\leq d}{\Bigl(1-\frac{1}{|\irred|}\Bigr)}= \frac{e^{-\gamma}}{d}\Bigl(1+O\Bigl(\frac{1}{d}\Bigr)\Bigr) $$ for $d\geq 1$, $\gamma$ being the Euler constant; since $$ \sum_{1\leq j\leq d}{\frac{1}{j}}=\log d+\gamma+ O\Bigl(\frac{1}{d}\Bigr), $$ we get the stated result. We emphasize the fact that the asymptotic of the product in~(\ref{eq-mertens}) is independent of $q$ (and is the same as for the usual Mertens formula for prime numbers), since this may seem surprising at first sight. This is explained by the relation with random permutations, and in fact, in Remark~\ref{rm-mertens} below, we explain how our argument leads to a much sharper estimate~(\ref{eq-mertens2}) for the error term in~(\ref{eq-mertens}). \par Finally, for the third statement, if $\sigma$ is a product of $r_j$ disjoint $j$-cycles for $1\leq j\leq d$, we first recall the standard formula that \begin{equation}\label{eq-card-conj} \Prob(\sigma_d=\sigma)=\prod_{1\leq j\leq d}{\frac{1}{j^{r_j}r_j!}}, \end{equation} (by the usual abuse of notation, this denotes the probability that $\sigma_d$ lies in the conjugacy class $\sigma^{\sharp}$), and we observe that the product can be made to range over $\ellm{\sigma}\leq j\leq d$, since the terms $j<\ellm{\sigma}$ have $r_j=0$ by definition. Using this observation, we have by simple counting $$ \mathop{\sum \Bigl.^{\flat}}\limits_{\stacksum{\deg(f)=d}{F_f^{\sharp}=\sigma^{\sharp}}}{1} =\prod_{\ellm{\sigma}\leq j\leq d}{ \binom{\Pi_q(j)}{r_j}}. $$ \par Furthermore, for any $j,r\geq 1$ such that $r<q^{j/2}$ and $\max(j,r)\leq d$, we have \begin{align*} \binom{\Pi_q(j)}{r}&=\frac{1}{r!}\Pi_q(j)(\Pi_q(j)-1)\cdots (\Pi_q(j)-r+1) \\ &=\frac{1}{r!}\Bigl(\frac{q^j}{j}+O(q^{j/2})\Bigr)^r =\frac{q^{jr}}{j^{r}r!}(1+O(dq^{-j/2}))^{r}, \end{align*} by the first part of the lemma.
Combining the two formulas, we get \begin{align*} \frac{1}{q^d}\mathop{\sum \Bigl.^{\flat}}\limits_{\stacksum{\deg(f)=d}{F_f^{\sharp}=\sigma^{\sharp}}}{1} &=q^{-d} \prod_{\ellm{\sigma}\leq j\leq d} \frac{q^{jr_j}}{j^{r_j}r_j!}(1+O(dq^{-j/2}))^{r_j}\\ &= \prod_{\ellm{\sigma}\leq j\leq d}{\frac{1}{r_j!j^{r_j}} \Bigl(1+O(dq^{-j/2})\Bigr)^{r_j}}\\ &=\Prob(\sigma_d=\sigma)\prod_{\ellm{\sigma}\leq j\leq d} {\Bigl(1+O(dq^{-j/2})\Bigr)^{r_j}} \end{align*} and this immediately gives the conclusion since the implied constant in the formula for $\Pi_q(j)$ is at most $1$. \end{proof} Part (3) of this lemma means that, as long as we consider permutations $\sigma\in\mathfrak{S}_d$ with no short cycle, so that $$ d=o(q^{\ellm{\sigma}/2}), $$ there is strong quantitative equidistribution of the conjugacy class $F_f^{\sharp}$ among all conjugacy classes in $\mathfrak{S}_d$. \par Thus, to compare the distribution of polynomials and that of permutations, it is natural to introduce a parameter $b$, $0\leq b\leq d$, to be specified later, and to first write any monic polynomial $f$ of degree $d$ as $f=gh$, where the monic polynomials $g$ and $h$ are uniquely determined by \begin{equation}\label{eq-split-poly} \degp{g}\leq b,\quad\quad \degm{h}>b \end{equation} (i.e., $g$ contains the small factors, and $h$ the large ones; they correspond to ``friable'' and ``sifted'' integers in classical analytic number theory). One can expect, by the above, that if $b$ is such that $q^{b/2}$ is large enough compared with $d$, the distribution of $h$ will reflect that of permutations without cycles of length $\leq b$. And the contribution of small factors should (and will) be comparable with the independent model for divisibility of polynomials by irreducible ones. 
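The equidistribution in~(\ref{eq-quant-equid}) is easy to observe experimentally: for each cycle type, the proportion of monic squarefree $f$ with that factorization type is close to the probability that a uniform random permutation has that type. The following Python sketch does this by exhaustive enumeration over $\mathbf{F}_7$ in degree $4$ (arbitrary small choices), checking the multiplicative error bound with implied constant $2$, an empirical choice.

```python
from itertools import product
from collections import Counter
from math import factorial

q, d = 7, 4  # illustrative choices; any small prime q and small d work similarly

def divide(f, g):
    # return f // g if the monic g divides f exactly, else None (lists, lowest degree first)
    f = list(f)
    quo = [0] * (len(f) - len(g) + 1)
    for shift in range(len(f) - len(g), -1, -1):
        c = f[shift + len(g) - 1] % q
        quo[shift] = c
        for i, gi in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gi) % q
    return None if any(f) else quo

def monic(k):
    for low in product(range(q), repeat=k):
        yield list(low) + [1]

# monic irreducibles of degree <= d/2, found by trial division
irr_small = []
for k in range(1, d // 2 + 1):
    for f in monic(k):
        if all(divide(f, g) is None for g in irr_small if 2 * (len(g) - 1) <= k):
            irr_small.append(f)

def factor_degrees(f):
    # degrees of the irreducible factors of f (with multiplicity) plus a squarefree flag;
    # for d = 4, the cofactor left after removing all factors of degree <= 2 is irreducible
    degs, squarefree = [], True
    for g in irr_small:
        mult = 0
        while len(f) >= len(g):
            h = divide(f, g)
            if h is None:
                break
            f, mult = h, mult + 1
        if mult:
            degs += [len(g) - 1] * mult
            squarefree = squarefree and mult == 1
    if len(f) > 1:
        degs.append(len(f) - 1)
    return tuple(sorted(degs)), squarefree

sqfree_counts = Counter()
for f in monic(d):
    t, squarefree = factor_degrees(f)
    if squarefree:
        sqfree_counts[t] += 1

# the number of squarefree monic polynomials of degree d >= 2 is q^d - q^{d-1}
assert sum(sqfree_counts.values()) == q ** d - q ** (d - 1)

for t, c in sqfree_counts.items():
    freq = c / q ** d
    perm_prob = 1.0
    for j, rj in Counter(t).items():
        perm_prob /= j ** rj * factorial(rj)
    # multiplicative error of size O(d q^{-lmin/2}), with implied constant 2 here
    assert abs(freq / perm_prob - 1) <= 2 * d * q ** (-min(t) / 2)
```

As expected, the types with small minimal part (such as four linear factors) show the largest deviation from the permutation model, consistent with the error term $d\,q^{-\ellm{\sigma}/2}$.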
\par We now start the proof of Theorem~\ref{th-main} along these lines, trying to evaluate $$ \frac{1}{q^d} \sum_{\deg(f)=d} {e^{iu \omega(f)}}. $$ \par Writing $f=gh$, where $g$ and $h$ satisfy~(\ref{eq-split-poly}) as above, we have $\omega(f)=\omega(g)+\omega(h)$ since $g$ and $h$ are coprime, and hence $$ \frac{1}{q^d} \sum_{\deg(f)=d} {e^{iu \omega(f)}}= \sum_{\stacksum{\deg(g)\leq d}{\degp{g}\leq b}} {\frac{e^{iu\omega(g)}}{|g|} T(d-\deg(g),b)}, $$ where we define $$ T(d,b)=\frac{1}{q^d}\sum_{\stacksum{\deg(f)=d}{\degm{f}>b}}{e^{iu\omega(f)}}. $$ \par Denote further $$ R(d,b)=\sum_{\stacksum{\deg(g)>d}{\degp{g}\leq b}} {\frac{1}{|g|}},\quad\quad S(d,b)=\sum_{\stacksum{\deg(g)\leq d}{\degp{g}\leq b}} {\frac{1}{|g|}}. $$ \par Noting that $|T(d,b)|\leq 1$ for all $b$ and $d$, and splitting the sum over $g$ according to whether $\deg(g)\leq \sqrt{d}$ or $\deg(g)>\sqrt{d}$, we get \begin{align} \frac{1}{q^d} \sum_{\deg(f)=d} {e^{iu \omega(f)}}&= \sum_{\stacksum{\deg(g)\leq \sqrt{d}}{\degp{g}\leq b}} {\frac{e^{iu\omega(g)}}{|g|} T(d-\deg(g),b)}+O(R(\sqrt{d},b))\nonumber\\ &=S_1+O(R(\sqrt{d},b)),\text{ say}.\label{eq-first} \end{align} \par The next step, which is where random permutations will come into play, will be to evaluate $T(d,b)$ asymptotically in suitable ranges. \begin{prop}\label{prop-td} With notation as before, we have \begin{multline} T(d,b)=\exp\Bigl(-e^{iu}\sum_{j=1}^b{\frac{1}{j}}\Bigr) \text{\boldmath$E$}(e^{iu \varpi(\sigma_d)})+ \\ O\Bigl(|\text{\boldmath$E$}(e^{iu \varpi(\sigma_d)})|b^2d^{-1}+dq^{-b/2} +b^3(\log d)^{1/2}d^{-2}\Bigr), \end{multline} with an absolute implied constant, in the range \begin{equation}\label{eq-valid} q^{b/2}\geq d,\quad b\leq d.
\end{equation} \end{prop} \begin{proof} Before introducing permutations, we separate the contribution of squarefree and non-squarefree polynomials in $T(d,b)$ (the intuition being that non-squarefree polynomials should be much sparser here than among all polynomials, because any repeated factor must have large degree): $$ T(d,b)=T^{\flat}(d,b)+T^{\sharp}(d,b) $$ where $$ T^{\flat}(d,b)=\frac{1}{q^d} \mathop{\sum \Bigl.^{\flat}}\limits_{\stacksum{\deg(f)=d}{\degm{f}>b}}{e^{iu\omega(f)}}, $$ and $T^{\sharp}(d,b)$ is the complementary term. We then estimate the latter by \begin{align*} |T^{\sharp}(d,b)|& \leq \sum_{b\leq \deg(g)\leq d/2}{ \frac{1}{q^d} \sum_{\stacksum{\deg(f)=d}{g^2\mid f}}{1}} \\ &= \sum_{b\leq \deg(g)\leq d/2}{ \frac{1}{q^d} \sum_{\deg(f)=d-2\deg(g)}{1} }\\ &\leq\sum_{\deg(g)\geq b}{\frac{1}{q^{2\deg(g)}}} \ll \frac{1}{q^b}. \end{align*} \par We can now introduce permutations through the association $f\mapsto F_f^{\sharp}$ sending a squarefree polynomial to its associated cycle type. Using~(\ref{eq-ell-om}), we obtain $$ T^{\flat}(d,b)=\sum_{\stacksum{\sigma^{\sharp}\in\mathfrak{S}_d^{\sharp}}{\ellm{\sigma}>b}}{ e^{iu\varpi(\sigma)} \frac{1}{q^d}\mathop{\sum \Bigl.^{\flat}}\limits_{\stacksum{\deg(f)=d}{F_f^{\sharp}=\sigma^{\sharp}}}{1} }, $$ which is now a sum over conjugacy classes of permutations without small cycles. Using the third statement of Lemma~\ref{lm-distrib}, we derive \begin{align*} T^{\flat}(d,b)&=\sum_{\stacksum{\sigma^{\sharp}\in\mathfrak{S}_d^{\sharp}} {\ellm{\sigma}>b}}{ e^{iu\varpi(\sigma)}\Prob(\sigma_d=\sigma) \erreurm{\frac{d}{q^{b/2}}}}\\ &=\text{\boldmath$E$}(e^{iu\varpi(\sigma_d)}\mathbf{1\!\!\!1}_{\ellm{\sigma_d}>b}) +O\Bigl(\Prob(\ellm{\sigma_d}>b)dq^{-b/2}\Bigr), \end{align*} with an absolute implied constant if $q^{b/2}\geq d$. \par Thus the problem is reduced to one about random permutations. Using Proposition~\ref{pr-permut} below with $\varepsilon=1$, the proof is finished.
\end{proof} Now recall that the characteristic function $\text{\boldmath$E$}(e^{iu\varpi(\sigma_d)})$ is explicitly known from~(\ref{eq-char-permut}). This formula, or~(\ref{eq-explicit-permut}), implies in particular that we have \begin{equation}\label{eq-unif-char} \text{\boldmath$E$}(e^{iu\varpi(\sigma_{d-j})}) =\text{\boldmath$E$}(e^{iu \varpi(\sigma_d)})\erreurm{\frac{j}{d}}. \end{equation} \par Then, inserting the formula of Proposition~\ref{prop-td} in the first term $S_1$ of~(\ref{eq-first}), and using this formula, we obtain in the range of validity~(\ref{eq-valid}) that $$ S_1=\exp\Bigl(-e^{iu} \sum_{j=1}^b{\frac{1}{j}}\Bigr)\text{\boldmath$E$}(e^{iu \varpi(\sigma_{d})}) \sum_{\stacksum{\deg(g)\leq \sqrt{d}}{\degp{g}\leq b}} {\frac{e^{iu\omega(g)}}{|g|}}+R $$ where, after some computations, we find that $$ R\ll (|\text{\boldmath$E$}(e^{iu\varpi(\sigma_d)})|b^2d^{-1} +dq^{-b/2}+b^3(\log d)^{1/2}d^{-2})S(\sqrt{d},b), $$ with an absolute implied constant. \par Extending the sum in the main term, we get $$ S_1=M+R_1, $$ where \begin{gather*} M= \text{\boldmath$E$}(e^{iu \varpi(\sigma_{d})}) \exp\Bigl(-e^{iu}\sum_{j=1}^b{\frac{1}{j}}\Bigr) \sum_{\degp{g}\leq b}{\frac{e^{iu\omega(g)}}{|g|}}, \\ R_1\ll bR(\sqrt{d},b)+ \Bigl(|\text{\boldmath$E$}(e^{iu\varpi(\sigma_d)})|\frac{b^2}{d} +\frac{d}{q^{b/2}}+ \frac{b^3(\log d)^{1/2}}{d^{2}}\Bigr)S(\sqrt{d},b). \end{gather*} \par Now, we can finally apply~(\ref{eq-mertens}) and multiplicativity in the sum over $g$ in $M$, to see that \begin{align*} M=\text{\boldmath$E$}(e^{iu \varpi(\sigma_{d})})\prod_{\deg(\pi)\leq b}{ \Bigl(1-\frac{1}{|\irred|}\Bigr)^{e^{iu}} \Bigl(1+\frac{e^{iu}}{|\irred|-1}\Bigr) } \Bigl(1+O\Bigl(\frac{1}{b}\Bigr)\Bigr) \end{align*} and hence, by the mod-Poisson convergence of $\varpi(\sigma_d)$ and the absolute convergence of the Euler product extended to infinity, we have $$ \lim_{d,b\rightarrow +\infty}\exp((\log d)(1-e^{iu}))M =\tilde{\Phi}_1(u)\tilde{\Phi}_2(u), $$ uniformly for $u\in \RR$. 
\par It remains to consider the error terms in order to conclude the proof of Theorem~\ref{th-main}. We select $b=(\log d)^2\rightarrow +\infty$; then~(\ref{eq-valid}) holds for all $d\geq d_0(q)$, and hence the previous estimates are valid and we must now show that $$ \exp((\log d)(1-e^{iu}))R(\sqrt{d},b)\rightarrow 0, \quad \exp((\log d)(1-e^{iu}))R_1\rightarrow 0 $$ (the first desideratum coming from~(\ref{eq-first})). \par Note that $|\exp((\log d)(1-e^{iu}))|\leq d^2$. Now we claim that \begin{align}\label{eq-rankin} R(d,b)&\ll b^Ce^{-d/b}\\ S(d,b)& \ll b,\label{eq-sdb} \end{align} for $1\leq b\leq d$ and some absolute constant $C>0$, with absolute implied constants for the first, and an implied constant depending only on $q$ for the second. \par Granting this, we have $$ d^2R(\sqrt{d},(\log d)^2)\ll \exp\Bigl(2\log d+2C\log\log d- \frac{\sqrt{d}}{(\log d)^2}\Bigr)\fleche{} 0, $$ and all terms in $R_1$ are similarly trivially estimated, except for $$ \exp((\log d)(1-e^{iu}))|\text{\boldmath$E$}(e^{iu\varpi(\sigma_d)})|b^2d^{-1} S(\sqrt{d},b)\ll b^3d^{-1}\fleche{}0, $$ using again the mod-Poisson convergence of $\varpi(\sigma_d)$.
\par We now justify~(\ref{eq-sdb}) and~(\ref{eq-rankin}): for the former, by~(\ref{eq-upper-mertens}), we have $$ |S(\sqrt{d},b)|\leq \prod_{\deg(\pi)\leq b}{ \Bigl(1+\frac{1}{|\irred|-1}\Bigr) }\ll b, $$ and for the latter, we need only a simple application of the well-known Rankin trick: for any $\sigma\geq 0$, $d\geq 1$ and $g\in \mathbf{F}_q[X]$, we have $$ \mathbf{1\!\!\!1}_{\deg(g)>d}\leq q^{\sigma(\deg(g)-d)}, $$ and hence, by multiplicativity, we get $$ R(d,b)\leq q^{-\sigma d}\sum_{\degp{g}\leq b}{q^{(\sigma-1)\deg(g)}} =q^{-\sigma d}\prod_{\deg(\pi)\leq b}{(1-|\irred|^{\sigma-1})^{-1}}, $$ which we estimate further, for $0<\sigma<1$, using \begin{align*} \prod_{\deg(\pi)\leq b}{(1-|\irred|^{\sigma-1})^{-1}} &=\exp\Bigl(\sum_{\deg(\pi)\leq b}{\sum_{k\geq 1} {\frac{|\irred|^{k(\sigma-1)}}{k}}}\Bigr) \\ &\leq \exp\Bigl(C\sum_{j=1}^b{\frac{q^{j\sigma}}{j}}\Bigr)\leq \exp(C'q^{\sigma b}\log b) \end{align*} for some absolute constants $C, C'>0$. Taking $\sigma=1/(b\log q)$ leads immediately to~(\ref{eq-rankin}). \par Finally, here is the computation of the characteristic function of the cycle count of permutations without small parts that we used in the proof of Proposition~\ref{prop-td}. \begin{prop}\label{pr-permut} For all $d\geq 2$ and $b\geq 0$ such that $b\leq d$, we have \begin{multline} \text{\boldmath$E$}(e^{iu\varpi(\sigma_d)}\mathbf{1\!\!\!1}_{\ellm{\sigma_d}>b}) =\exp\Bigl(-e^{iu}\sum_{j=1}^b{\frac{1}{j}}\Bigr) \text{\boldmath$E$}(e^{iu \varpi(\sigma_d)})\\ +O(|\text{\boldmath$E$}(e^{iu \varpi(\sigma_d)})|b^{1+\varepsilon}d^{-1}+b^3(\log d)^{1/2} d^{-2}),\label{eq-goal} \end{multline} for any $\varepsilon>0$, where the implied constant depends only on $\varepsilon$. \end{prop} \begin{proof} This is essentially a sieve (or inclusion-exclusion) argument, which may well be already known (although we didn't find it explicitly in our survey of the literature).
To simplify the notation, we will prove the statement by induction on $b$, although this may not be necessary; taking care of the error terms is then slightly more complicated, and readers should probably first disregard them to see the main flow of the argument. \par We denote $$ \Phi_{d,b}(u)=\text{\boldmath$E$}(e^{iu\varpi(\sigma_d)}\mathbf{1\!\!\!1}_{\ellm{\sigma_d}>b}), \quad \Phi_d(u)=\Phi_{d,0}(u),\quad \harm{b}=\sum_{j=1}^b{\frac{1}{j}}. $$ \par We will write \begin{equation}\label{eq-induction} \Phi_{d,b} =\exp(-e^{iu}\harm{b})\Phi_d+|\Phi_d|E_{d,b}+F_{d,b}, \end{equation} where $E_{d,b}$, $F_{d,b}\geq 0$; such an expression holds for $b=0$, with $E_{d,0}=F_{d,0}=0$, and we will proceed inductively to obtain an expression for $\Phi_{d,b}$ from that of $\Phi_{d',b-1}$, $d'\leq d$, from which we will derive estimates for $E_{d,b}$ and $F_{d,b}$ in general. Note that we can assume that $d$ is large enough (i.e., larger than any fixed constant), since smaller values of $d$ (and $b$) are automatically incorporated by making the right-most implied constant large enough in~(\ref{eq-goal}). Also, we can always write such a formula with $|F_{d,b}|\ll b$, for some absolute constant, since the characteristic functions $\Phi_{d,b}$ are bounded by $1$ and $\exp(-e^{iu}\harm{b})\ll b$. \par Now, with these preliminaries settled, let $I$ be the set of $b$-cycles in $\mathfrak{S}_d$; we write $\tau\mid \sigma$ (resp. $\tau\nmid \sigma$) to indicate that $\tau\in I$ occurs (resp. does not occur) in the decomposition of $\sigma$ in cycles. Then we have \begin{align*} \Phi_{d,b}(u)&= \text{\boldmath$E$}(e^{iu\varpi(\sigma_d)}\mathbf{1\!\!\!1}_{\ellm{\sigma_d}>b})\\ &=\frac{1}{d!}\sum_{\stacksum{\ellm{\sigma}>b-1} {\tau\in I\Rightarrow \tau\nmid \sigma}}{e^{iu\varpi(\sigma)}} =\frac{1}{d!}\sum_{\ellm{\sigma}>b-1} {e^{iu\varpi(\sigma)}\prod_{\tau\in I}{(1-\mathbf{1\!\!\!1}_{\tau\mid \sigma})}}. 
\end{align*} \par We expand the product as a sum over subsets $J\subset I$, and exchange the two sums, getting $$ \Phi_{d,b}(u)= \frac{1}{d!}\sum_{J\subset I}{(-1)^{|J|} \sum_{\stacksum{\ellm{\sigma}>b-1} {\tau\in J\Rightarrow \tau\mid \sigma}} {e^{iu\varpi(\sigma)}}}. $$ \par Now fix a $J\subset I$ such that the inner sum is \emph{not empty}. This implies of course that the supports of the cycles in $J$ are disjoint, in particular that those cycles contribute $|J|$ to $\varpi(\sigma)$. Moreover, if we call $A$ the complement of the union of the supports of the cycles in $J$, we have $|A|=d-|J|b$, and any $\sigma$ in the inner sum maps $A$ to itself. Thus, by enumerating the elements of $A$, we can map injectively those $\sigma$ to permutations in $\mathfrak{S}_{d-|J|b}$, and the image of this map is exactly the set of those $\sigma_1\in\mathfrak{S}_{d-|J|b}$ for which $\ellm{\sigma_1}>b-1$. Moreover, if $\sigma$ maps to $\sigma_1$, we have $$ \varpi(\sigma)=|J|+\varpi(\sigma_1), $$ and thus we get $$ \sum_{\stacksum{\ellm{\sigma}>b-1} {\tau\in J\Rightarrow \tau\mid \sigma}} {e^{iu\varpi(\sigma)}}= e^{iu|J|} \sum_{\stacksum{\sigma\in \mathfrak{S}_{d-|J|b}} {\ellm{\sigma}>b-1}} {e^{iu\varpi(\sigma)}}, $$ and then \begin{align*} \Phi_{d,b}(u)&= \sum_{J\subset I}{\frac{(d-|J|b)!}{d!}(-e^{iu})^{|J|} \text{\boldmath$E$}(e^{iu\varpi(\sigma_{d-|J|b})} \mathbf{1\!\!\!1}_{\ellm{\sigma_{d-|J|b}}>b-1})},\\ &=\sum_{J\subset I}{\frac{(d-|J|b)!}{d!}(-e^{iu})^{|J|}\Phi_{d-|J|b,b-1}(u)}, \end{align*} the sum over $J$ being implicitly restricted to those subsets of $I$ for which there is at least one permutation in $\mathfrak{S}_d$ where all cycles in $J$ occur.
\par In particular, we have $|J|\leq d/b$ (so there is enough room to find that many disjoint $b$-cycles), and if we denote by $N(k,b)$ the number of possible such subsets of $I$ with $|J|=k$, we can write $$ \Phi_{d,b}(u) =\sum_{k=0}^{d/b}{N(k,b)\frac{(d-kb)!}{d!}(-e^{iu})^k \Phi_{d-kb,b-1}(u)}. $$ \par Now we claim that $$ N(k,b)=\binom{d}{d-kb}\times \frac{(kb)!}{b^kk!} =\frac{d!}{(d-kb)!b^kk!}. $$ \par Indeed, to construct the subsets $J$ with $|J|=k$, we can first select arbitrarily a subset $A$ of size $d-kb$ in $\{1,\ldots, d\}$, and then select, independently, an arbitrary set of $k$ disjoint $b$-cycles supported outside $A$. The choice of $A$ corresponds to the binomial factor above, and the second factor is clearly equal to the number of permutations $\sigma\in \mathfrak{S}_{kb}$ which are a product of $k$ disjoint $b$-cycles. Those are all conjugate in $\mathfrak{S}_{kb}$, and their cardinality is given by~(\ref{eq-card-conj}), applied with $d$ replaced by $kb$ and all $r_j=0$ except for $r_b=k$. \par Consequently, we obtain the basic induction relation $$ \Phi_{d,b}(u) =\sum_{k=0}^{d/b}{\Bigl(\frac{-e^{iu}}{b}\Bigr)^k\frac{1}{k!} \Phi_{d-kb,b-1}(u)}. $$ \par Before applying the induction assumption~(\ref{eq-induction}), we shorten the sum over $k$ so that $\Phi_{d-kb,b-1}$ will remain close to $\Phi_{d,b-1}$. For this, we use the inequality $$ \Bigl|\sum_{k=0}^m{\frac{z^k}{k!}}-e^z\Bigr|\leq \frac{1}{m!}, $$ for $|z|\leq 1$, $m\geq 0$, as well as $|\Phi_{d-kb,b-1}(u)|\leq 1$, and deduce that \begin{equation}\label{eq-cont} \Phi_{d,b}(u) =\sum_{k=0}^{m}{\Bigl(\frac{-e^{iu}}{b}\Bigr)^k\frac{1}{k!} \Phi_{d-kb,b-1}(u)} +O\Bigl(\frac{1}{m!}\Bigr), \end{equation} for some $m$ to be specified later, subject for the moment only to the condition $m<d/(2b)$, and an implied constant which is at most $1$. \par By~(\ref{eq-induction}), we have $$ \Phi_{d-kb,b-1}(u)=\exp(-e^{iu}\harm{b-1})\Phi_{d-kb}(u) +|\Phi_{d-kb}(u)|E_{d-kb,b-1}+F_{d-kb,b-1}.
$$ \par Moreover, by~(\ref{eq-unif-char}), we also know that for $k\leq m$, we have \begin{equation}\label{eq-shift-char} \Phi_{d-kb}(u)= \text{\boldmath$E$}(e^{iu\varpi(\sigma_{d})})\erreurm{\frac{kb}{d}}= \Phi_d(u)\erreurm{\frac{kb}{d}}, \end{equation} with an absolute implied constant. Hence, we obtain $$ \Phi_{d,b}(u)=\exp(-e^{iu}\harm{b-1})\Phi_d(u)M+R+S $$ where \begin{align*} M&=\sum_{k=0}^{m}{\Bigl(\frac{-e^{iu}}{b}\Bigr)^k\frac{1}{k!} \erreurm{\frac{bk}{d}}}\\ |R|&\leq \sum_{k=0}^m{\frac{1}{b^kk!}E_{d-kb,b-1}|\Phi_{d-kb}(u)|}\\ &=|\Phi_d(u)|\sum_{k=0}^m{\frac{1}{b^kk!}E_{d-kb,b-1}\Bigl(1+O\Bigl( \frac{kb}{d} \Bigr)\Bigr)} \\ |S|&\leq \sum_{k=0}^m{\frac{1}{b^kk!}F_{d-kb,b-1}}+ \frac{1}{m!}. \end{align*} \par We next write $$ M=\exp\Bigl(-\frac{e^{iu}}{b}\Bigr)+ O\Bigl(\frac{1}{d}\sum_{k=1}^{m}{\frac{1}{b^{k-1}(k-1)!}}\Bigr) +O\Bigl(\frac{1}{m!}\Bigr), $$ where the implied constants are absolute, and deduce that $$ \Phi_{d,b}(u)=\exp(-e^{iu}\harm{b})\Phi_d(u)+|\Phi_d(u)|M_1+R+S, $$ with $$ |M_1|\ll \frac{1}{m!}+d^{-1}e^{1/b}, $$ where the implied constant is absolute. The desired shape of the main term is now visible, and it remains to verify that (for a suitable $m$) the other terms are bounded as stated in the proposition. \par First, comparing with~(\ref{eq-induction}), with the terms in $M_1$ and $R$ contributing to $E_{d,b}$, while those in $S$ contribute to $F_{d,b}$, we see that we have $$ F_{d,b}\leq \sum_{k=0}^m{\frac{1}{b^kk!}F_{d-kb,b-1}}+ \frac{1}{m!}. $$ \par We now select $m=\lfloor \log d\rfloor$. Then, together with $F_{d,0}=0$, we claim that this inductive inequality implies \begin{equation}\label{eq-borne-f} F_{d,b}\leq Cb^{3}(\log d)^{1/2}d^{-2}, \end{equation} for $b\leq d$ and some absolute constant $C\geq 1$. For a large enough value of $C$, note that this is already true for all $d\leq d_0$, where $d_0$ can be any fixed integer.
We select $d_0$ so that $$ \frac{1}{m!}\leq \frac{1}{d^2}, $$ for $d\geq d_0$, and we can thus assume that $d>d_0$ from now on. \par The desired bound holds, of course, for $b=0$. It is also trivial if $b(\log d)\geq d/24$ (say), because we have observed at the beginning that (\ref{eq-induction}) can be obtained with $F_{d,b}\ll b$. If it is assumed to be true for all $d$ and for $b-1$, we have for $b(\log d)<d/24$ that \begin{align*} F_{d,b}&\leq\frac{1}{m!}+ \sum_{k=0}^m{\frac{1}{b^kk!}F_{d-kb,b-1}} \\ &\leq \frac{1}{m!}+\frac{C(\log d)^{1/2}(b-1)^{3}}{d^2}\sum_{k=0}^m{ \frac{1}{b^kk!}\Bigl(1-\frac{kb}{d}\Bigr)^{-2}}. \end{align*} \par We note the following simple inequalities \begin{gather*} (1-x)^{-1}\leq e^{2x},\quad \exp(x)\leq 1+\frac{3x}{2},\quad\text{ for }0\leq x\leq 1/2, \\ (x-1)e^{1/x}\leq x,\quad\quad\text{ for } x\geq 1, \end{gather*} and from them we deduce that if $b(\log d)<d/24$ (so that $kb/d\leq 1/2$ for the values involved), we have $$ \sum_{k=0}^m{\frac{1}{b^kk!}\Bigl(1-\frac{kb}{d}\Bigr)^{-2}} \leq \sum_{k=0}^m{ \frac{1}{k!}\Bigl( \frac{\exp(4b/d)}{b} \Bigr)^k} \leq \exp\Bigl(\frac{1}{b}+\frac{6}{d}\Bigr) $$ and, hence (from the same simple inequalities) we get \begin{multline*} (b-1)^3\sum_{k=0}^m{\frac{1}{b^kk!}\Bigl(1-\frac{kb}{d}\Bigr)^{-2}} \leq (b-1)^{3/2}\times (b-1)\exp\Bigl(\frac{1}{b}\Bigr)\\ \times \Bigl((b-1)\exp\Bigl(\frac{12}{d}\Bigr)\Bigr)^{1/2} \leq (b-1)^{3/2}b^{3/2}. \end{multline*} \par By the choice of $d_0$, we deduce for $b\geq 1$ and $d>d_0$ that we have $$ F_{d,b}\leq d^{-2}(\log d)^{1/2}(1+Cb^{3/2}(b-1)^{3/2}) \leq Cd^{-2}(\log d)^{1/2}b^3, $$ (assuming again $C$ large enough), completing the verification of~(\ref{eq-borne-f}) by induction. \par Finally, from~(\ref{eq-induction}) and the foregoing, we deduce similarly that $$ E_{d,b}\leq D\Bigl(\frac{1}{m!}+d^{-1}e^{1/b}\Bigr)+ \sum_{k=0}^m{\frac{1}{b^kk!}E_{d-kb,b-1}\Bigl(1+O\Bigl( \frac{kb}{d} \Bigr)\Bigr)}, $$ for some absolute constant $D\geq 0$.
Fix $\varepsilon>0$, and consider the bound $$ E_{d,b}\leq Cb^{1+\varepsilon}d^{-1}; $$ then if $C\geq 1$, assuming it for $b-1$, we obtain the inductive bound (where $\beta\geq 0$ denotes the implied constant in the term $O(kb/d)$ above) \begin{align*} E_{d,b}&\leq d^{-1}\Bigl\{ D(1+e^{1/b})+C(b-1)^{1+\varepsilon}\sum_{k=0}^m{\frac{1}{b^kk!}\Bigl(1+\frac{\beta kb}{d}\Bigr)\Bigl(1-\frac{kb}{d}\Bigr)^{-1}} \Bigr\}\\ &\leq d^{-1}\Bigl\{D(1+e^{1/b})+C(b-1)^{1+\varepsilon} \exp\Bigl(\frac{1}{b}+\frac{3(\beta+2)}{2d}\Bigr)\Bigr\} \end{align*} (using again the elementary inequalities above). Then for $d\geq d_1(\varepsilon)$, provided $C$ is large enough in terms of $D$ and $\varepsilon$, we obtain $$ E_{d,b}\leq Cb^{1+\varepsilon}d^{-1}, $$ confirming the validity of this estimate. \end{proof} \begin{rem} Proposition~\ref{pr-permut} can itself be seen as an instance of mod-Poisson convergence, for the cycle count of uniformly chosen random permutations in $\mathfrak{S}_d$ without small cycles. \par Precisely, let $\mathfrak{S}_d^{(b)}$ denote the set of $\sigma\in \mathfrak{S}_d$ with $\ellm{\sigma}>b$. We then find first (by putting $u=0$ in Proposition~\ref{pr-permut}) that $$ \frac{|\mathfrak{S}_d^{(b)}|}{|\mathfrak{S}_d|} \sim_{d,b\rightarrow +\infty} \exp\Bigl(-\sum_{j=1}^b{\frac{1}{j}}\Bigr) \sim \frac{e^{-\gamma}}{b}, $$ provided $b$ is restricted by $b\ll d^{1/2-\varepsilon}$ with $\varepsilon>0$ arbitrarily small. Then, for arbitrary $u\in\RR$ and $b$ similarly restricted, we find that $$ \frac{1}{|\mathfrak{S}_d^{(b)}|} \sum_{\sigma \in \mathfrak{S}_d^{(b)}}{ e^{iu\varpi(\sigma)} }\sim_{d,b\rightarrow +\infty} \exp\Bigl((1-e^{iu})\sum_{j=1}^b{\frac{1}{j}}\Bigr) \text{\boldmath$E$}(e^{iu\varpi(\sigma_d)}), $$ locally uniformly. Thus the mod-Poisson convergence~(\ref{eq-mod-poisson-permut}) for $\varpi(\sigma_d)$ implies mod-Poisson convergence for the cycle count restricted to $\mathfrak{S}_d^{(b)}$ as long as $b\ll d^{1/2-\varepsilon}$, with limiting function $1/\Gamma(e^{iu})$ and parameters $$ \log d-\sum_{j=1}^b{\frac{1}{j}}\sim \log\frac{d}{b}.
$$ \par It may be that the restriction of $b$ with respect to $d$ could be relaxed. However, in the opposite direction, note that for $b=d-1$, the number of $d$-cycles in $\mathfrak{S}_d$, i.e., $|\mathfrak{S}_d^{(d-1)}|$, is $(d-1)!$, so the ratio is $1/d$, which is obviously not asymptotic to $e^{-\gamma}/(d-1)$. \end{rem} \begin{rem}\label{rm-mertens} We come back to the asymptotic formula~(\ref{eq-mertens}), to explain how it follows from Theorem~\ref{th-main} in the sharper form \begin{equation}\label{eq-mertens2} \prod_{\deg(\pi)\leq d}{\Bigl(1-\frac{1}{|\irred|}\Bigr)}= \exp\Bigl(-\sum_{1\leq j\leq d}{\frac{1}{j}}\Bigr) \erreurm{\frac{1}{q^{d/2}}}. \end{equation} \par Namely, it is very easy to derive this asymptotic up to some constant: $$ \prod_{\deg(\pi)\leq d}{\Bigl(1-\frac{1}{|\irred|}\Bigr)}= \exp\Bigl(\gamma_q-\sum_{1\leq j\leq d}{\frac{1}{j}}\Bigr) \erreurm{\frac{1}{q^{d/2}}}, $$ where $\gamma_q$ is given by the awkward, yet absolutely convergent, expression \begin{equation}\label{eq-gamma-const} \gamma_q= \sum_{\pi}{\Bigl(\log\Bigl(1-\frac{1}{|\irred|}\Bigr) +\frac{1}{|\irred|}\Bigr)}+ \sum_{j\geq 1}{\Bigl(\frac{\Pi_q(j)}{q^j}-\frac{1}{j}\Bigr)}. \end{equation} \par From this, the flow of the proof leads to the mod-Poisson limit~(\ref{eq-main}), with an additional factor $\exp(-\gamma_qe^{iu})$ in the limit. But for $u=0$, both sides of~(\ref{eq-main}) are equal to $1$, so we must have $\exp(\gamma_q)=1$ for all $q$. (This is another interesting example of the information coming from mod-Poisson convergence, which is invisible at the level of the normal limit; note in particular that this is really a manifestation of the random permutations.) \end{rem} \section{Final comments and questions} Many natural questions arise out of this paper.
The most obvious ones concern the general notion of mod-Poisson convergence, and its probabilistic significance and relation with other types of convergence and measures of approximation (and similarly for mod-Gaussian behavior). Already from~\cite{hwang}, it is clear that mod-Poisson convergence should be a very general fact in the setting of ``logarithmic combinatorial structures'', as discussed in~\cite{abt}. \par In the direction suggested by the Erd\H{o}s-Kac Theorem, there is a very abundant literature concerning generalizations to additive functions and beyond (see, e.g., the discussion at the end of~\cite{granville-sound}), and again it would be interesting to know which of those Central Limit Theorems extend to mod-Poisson convergence, and maybe even more so, to know which \emph{don't}. \par In the direction of pursuing the analogy with distribution of $L$-functions, the first thing to do might be to construct a proof of the mod-Poisson Erd\H{o}s-Kac Theorem for integers which parallels the one of the previous section. This does not seem out of the question, but our current attempts suffer from the fact that the associations of permutations in ``$\mathfrak{S}_{\log N}$'' to integers $n\leq N$ that we have considered are ad hoc (though potentially useful), and do not carry the flavor of a generalization of the Frobenius. It is then difficult to envision a further natural analogue of a unitary matrix associated, say, with $\zeta(1/2+it)$. One can suggest a ``made up'' matrix $U_t$ obtained by taking the zeros of $\zeta(s)$ close to $t$, and wrapping them around the unit circle after proper rescaling, but this also lacks a good a priori definition -- though this was studied by Coram and Diaconis~\cite{coram-diaconis}, who obtained extremely good numerical agreement; this is also close to the ``hybrid'' model for the Riemann zeta function of Gonek, Hughes and Keating~\cite{ghk}.
\par One may hope for more success in the case of finite fields in trying to understand (for instance) families of $L$-functions of algebraic curves in the limit of large genus, since the definition of a random matrix from Frobenius does not cause problems there (though recall it is really a \emph{conjugacy class}). However, although we have Deligne's Equidistribution Theorem in the ``vertical'' direction $q\rightarrow +\infty$, and its proof is highly effective, it is not clear what a suitable analogue of the quantitative ``diagonal'' equidistribution~(\ref{eq-quant-equid}) in Lemma~\ref{lm-distrib} should be. More precisely, what condition should replace the restriction to polynomials without small irreducible factors? We do not have clear answers at the moment, but we hope to make progress in later work. \par Finally, it should be clear that analogues of mod-Gaussian and mod-Poisson convergence exist, involving other families of probability distributions. Some cases related to discrete variables are discussed in~\cite[\S 5]{bkn}, and one may also define ``mod-stable'' convergence in an obvious way (though we do not have interesting examples of these to suggest at the moment). It may be interesting to investigate links between these various definitions; the last part of Proposition~\ref{prop-cor-mod-poisson} suggests that there should exist interesting relations.
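To close, a small computational check (Python; an added illustration, not part of the paper) of two facts behind the remarks of the previous section: the classical identity $\sum_{\sigma\in\mathfrak{S}_d} z^{\varpi(\sigma)} = z(z+1)\cdots(z+d-1)$ for the cycle count $\varpi$, and the count $|\mathfrak{S}_d^{(d-1)}|=(d-1)!$ of $d$-cycles.

```python
import math
from fractions import Fraction
from itertools import permutations

def cycle_count(perm):
    """Number of cycles of a permutation of {0, ..., n-1} in one-line notation."""
    seen, count = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            count += 1
            i = start
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return count

d = 5
# Classical identity: the sum over S_d of z^{#cycles} equals z (z+1) ... (z+d-1).
z = Fraction(7, 3)
lhs = sum(z ** cycle_count(p) for p in permutations(range(d)))
rhs = Fraction(1)
for j in range(d):
    rhs *= z + j
assert lhs == rhs

# For b = d-1, permutations with all cycles of length > b are exactly the
# d-cycles, so |S_d^{(d-1)}| = (d-1)!  (the count used in the remark above).
assert sum(1 for p in permutations(range(d)) if cycle_count(p) == 1) == math.factorial(d - 1)
```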
https://arxiv.org/abs/1903.00800
On the correspondence of external rays under renormalization
Let $P$ be a monic polynomial of degree $D \geq 3$ whose filled Julia set $K_P$ has a non-degenerate periodic component $K$ of period $k \geq 1$ and renormalization degree $2 \leq d<D$. Let $I=I_K$ denote the set of angles $\theta$ on the circle ${\mathbb T}={\mathbb R}/{\mathbb Z}$ for which the (smooth or broken) external ray $R^P_\theta$ for $P$ accumulates on $\partial K$. We prove the following:
$\bullet$ $I$ is a compact set of Hausdorff dimension $<1$ and there is an essentially unique degree $1$ monotone map $\Pi: I \to {\mathbb T}$ which semiconjugates $\theta \mapsto D^k \theta$ (mod 1) on $I$ to $\theta \mapsto d \theta$ (mod 1) on $\mathbb T$.
$\bullet$ Any hybrid conjugacy $\varphi$ between a renormalization of $P^{\circ k}$ on a neighborhood of $K$ and a monic degree $d$ polynomial $Q$ induces a semiconjugacy $\Pi: I \to {\mathbb T}$ with the property that for every $\theta \in I$ the external ray $R^P_\theta$ has the same accumulation set as the curve $\varphi^{-1}(R^Q_{\Pi(\theta)})$. In particular, $R^P_\theta$ lands at $z \in \partial K$ if and only if $R^Q_{\Pi(\theta)}$ lands at $\varphi(z) \in \partial K_Q$.
$\bullet$ The ray correspondence established by the above result is finite-to-one. In fact, the cardinality of each fiber of $\Pi$ is $\leq D-d+2$, and the inequality is strict when the component $K$ has period $k=1$.
Using a new type of quasiconformal surgery we construct a class of examples with $k=1$ for which the upper bound $D-d+1$ is realized and the set $I$ has isolated points.
\section{Introduction}\label{sec:intro} This paper provides an understanding of the external rays that accumulate (and in particular land) on a non-degenerate periodic component of a disconnected polynomial Julia set. It can be viewed as a complement to the well-studied case of connected Julia sets initiated by Douady and Hubbard in the late 1980's. \vspace{6pt} Here is a brief outline of the results; we will give the precise definitions in \S \ref{sec:prelim} and the proofs in \S \ref{sec:pfsa}-\S \ref{sec:ex}. Let $P: \mathbb{C} \to \mathbb{C}$ be a monic polynomial of degree $D \geq 3$ whose filled Julia set $K_P$ is disconnected. Let $K$ be a periodic component of $K_P$ of period $k \geq 1$ which is non-degenerate in the sense that it is not a single point. There are topological disks $U_1,U_0$ containing $K$ such that the restriction $P^{\circ k}|_{U_1}: U_1 \to U_0$ is a polynomial-like map of some degree $2 \leq d<D$ with connected filled Julia set $K$. This restriction is hybrid equivalent to a polynomial $Q$ of degree $d$, that is, there is a quasiconformal map $\varphi : U_0 \to \varphi(U_0)$ which satisfies $\varphi \circ P^{\circ k} = Q \circ \varphi$ in $U_1$ and has the property that $\bar{\partial} \varphi=0$ a.e. on $K$. It follows that $\varphi(K)$ is the filled Julia set $K_Q$. The main objective of this paper is to relate the external rays of $Q$ to the external rays of $P$ which accumulate on $\partial K$. Since $K_Q$ is connected, every external ray of $Q$ is a smooth curve. For $P$, however, we need to allow all external rays, including the ones that crash into the escaping precritical points of $P$. These generalized rays can be defined in terms of the gradient flow of the Green's function $G$ of $P$ in $\mathbb{C} \smallsetminus K_P$. They consist of smooth field lines of $\nabla G$ that descend from $\infty$ and approach $K_P$, as well as their limits which are broken rays that abruptly turn when they crash into a critical point of $G$. 
For each $\theta \in {\mathbb{T}} :={\mathbb{R}}/{\mathbb{Z}}$ we have either a smooth ray $R_\theta$ or a pair $R_\theta^\pm$ of broken rays which descend from $\infty$ at the angle $\theta$. Here $R_\theta^+$ (resp. $R_\theta^-$) makes a right (resp. left) turn at each critical point it crashes into (see \S \ref{sec:prelim} for details). Denoting by {\mapfromto {{\widehat{D}}, {\widehat{d}}} {\mathbb{T}} {\mathbb{T}}} the maps $$ {\widehat{D}}(\theta) = D \, \theta, \quad {\widehat{d}}(\theta) = d \, \theta \ \ (\operatorname{mod} 1), $$ we have $P(R_\theta)=R_{{\widehat{D}}(\theta)}$ and $P(R_\theta^\pm)=R_{{\widehat{D}}(\theta)}^\pm$ or $R_{{\widehat{D}}(\theta)}$. \begin{thmx}[External angles associated with $K$] \label{A} The set $I=I_K \subset {\mathbb{T}}$ of angles $\theta$ for which the smooth ray $R_\theta$ or one of the broken rays $R_\theta^\pm$ accumulates on $\partial K$ is compact, invariant under ${\widehat{D}}^{\circ k}$ and of Hausdorff dimension $\leq \log d \, / (k \log D)$. Moreover, there is a continuous degree $1$ monotone surjection {\mapfromto \Pi I {\mathbb{T}}} which makes the following diagram commute: \begin{equation}\label{scj} \begin{tikzcd}[column sep=small] I \arrow[d,swap,"\Pi"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & I \arrow[d,"\Pi"] \\ {\mathbb{T}} \arrow[rr,"{\widehat{d}}"] & & {\mathbb{T}} \end{tikzcd} \end{equation} The semiconjugacy $\Pi$ is unique up to postcomposition with a rotation of the form $\tau \mapsto \tau + j/(d-1) \ (\operatorname{mod} 1)$. \end{thmx} Observe that the existence of the semiconjugacy $\Pi$ implies that $I$ is uncountable and in fact contains a Cantor set. It would be tempting to speculate that $I$ itself is a Cantor set, but in \S \ref{sec:ex} we construct examples for which $I$ has isolated points (compare \thmref{D} below). 
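For concreteness, note that when $D=3$, $d=2$ and $k=1$ -- the setting of the cubic example discussed after \thmref{B} below -- the dimension bound of \thmref{A} reads $$ \dim_H I \, \leq \, \frac{\log 2}{\log 3} \approx 0.631, $$ so $I$ has Hausdorff dimension strictly less than $1$; in particular, $I$ has zero Lebesgue measure and empty interior.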
\vspace{6pt} Let us denote the accumulation set of a (smooth or broken) ray $R$ by ${\rm Acc}(R)$: $$ {\rm Acc}(R) := \overline{R} \smallsetminus R $$ \begin{thmx}[Ray correspondence] \label{B} For any hybrid conjugacy $\varphi : U_0 \to \varphi(U_0)$ between the restriction $P^{\circ k}|_{U_1}: U_1 \to U_0$ and a degree $d$ monic polynomial $Q$, there is a choice of the semiconjugacy {\mapfromto \Pi I {\mathbb{T}}} of \thmref{A} such that $$ \varphi({\rm Acc}(R^P_\theta)) = {\rm Acc}(R^Q_{\Pi(\theta)}) \qquad \text{whenever} \ \theta \in I. $$ In particular, $R^P_\theta$ lands at $z \in \partial K$ if and only if $R^Q_{\Pi(\theta)}$ lands at $\varphi(z) \in \partial K_Q$. \end{thmx} Here $R^P_\theta$ and $R^Q_\theta$ denote the external rays at angle $\theta$ for $P$ and $Q$, respectively (in the case of a broken ray for $P$, precisely one of the two possible rays at angle $\theta \in I$ accumulates on $\partial K$ and $R^P_\theta$ denotes that choice). \vspace{6pt} Note that for each $\tau \in {\mathbb{T}}$ the preimage $\varphi^{-1}(R_\tau^Q)$ is an arc in $\mathbb{C} \smallsetminus K$ that accumulates on $\partial K$ but possibly meets infinitely many components of $K_P$ along the way. The main ingredient of the proof of \thmref{B} is to show that for every $\theta \in \Pi^{-1}(\tau)$, the ray segment $R_\theta^P \cap U_1$ and the arc $\varphi^{-1}(R_\tau^Q) \cap U_1$ stay at a bounded distance in the hyperbolic metric of $U_0 \smallsetminus K$. \vspace{6pt} To illustrate the content of \thmref{B}, consider a cubic polynomial of the form $P(z)=\ensuremath{{\operatorname{e}}}^{2\pi i \theta} z + b z^2 +z^3$, where $b \in \mathbb{C}$ and the rotation number $\theta$ has a continued fraction $[a_1,a_2,a_3,\ldots]$ that satisfies $\log a_n=O(\sqrt{n})$ as $n \to \infty$. For large enough $|b|$ the filled Julia set $K_P$ is disconnected but has a quadratic-like restriction hybrid equivalent to $Q(z)=\ensuremath{{\operatorname{e}}}^{2\pi i \theta} z + z^2$.
It follows that the component $K$ of $K_P$ containing the fixed point $0$ is quasiconformally homeomorphic to $K_Q$. According to \cite{PZ1}, the filled Julia set $K_Q$ is locally connected. Moreover, every $w \in \partial K_Q$ is the landing point of one or two rays according as the forward orbit of $w$ misses or hits the critical point $-\ensuremath{{\operatorname{e}}}^{2\pi i \theta}/2$ of $Q$. The semiconjugacy $\Pi$ of \thmref{A} is at most $2$-to-$1$ in this case (see \thmref{C} below), so \thmref{B} implies that every point of $\partial K$ is the landing point of at least one and at most four rays for $P$. \figref{zoo} shows neighborhoods of the respective critical points in $K$ and $K_Q$ for the golden mean case $\theta=(\sqrt{5}-1)/2=[ 1,1,1,\ldots ]$. Looking at the figure on the right, it is far from obvious that the critical point at the center is accessible through an arc that avoids the uncountably many components of $K_P$. \vspace{6pt} \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{test7.pdf} \hspace{1cm} \includegraphics[width=0.4\textwidth]{disccub1.pdf} \caption{\sl{Left: The filled Julia set of the quadratic polynomial $Q(z)= \ensuremath{{\operatorname{e}}}^{2\pi i \theta} z + z^2$, with $\theta=(\sqrt{5}-1)/2$. Right: The disconnected filled Julia set of some cubic polynomial $P(z)= \ensuremath{{\operatorname{e}}}^{2\pi i \theta} z + b z^2 +z^3$ with a quadratic-like restriction hybrid equivalent to $Q$. Both pictures are magnified near the critical point at the center.}} \label{zoo} \end{figure} The following corollary of \thmref{B} is worth mentioning: \begin{corx} If a non-degenerate component $K$ of a polynomial filled Julia set is locally connected, every point of its boundary $\partial K$ is the landing point of at least one ray, and therefore is accessible through $\mathbb{C} \smallsetminus K$. 
\end{corx} Here the assumption of $K$ being periodic is not needed since every non-degenerate component of a polynomial filled Julia set is known to be eventually periodic \cite{QY}. \vspace{6pt} The analog of the above corollary in the connected case is well known and in fact non-dynamical: Every boundary point of a locally connected full continuum is accessible, hence by the theorem of Lindel\"{o}f \cite{Po} it is the landing point of at least one hyperbolic geodesic in the complement descending from $\infty$. But the disconnected case asserted by the above corollary is certainly dynamical, as it fails for general compact sets (think of the union of the closed unit disk together with the semicircles $\{ (1+1/n)\ensuremath{{\operatorname{e}}}^{2\pi it}: |t|\leq 1/4 \}$ for $n \geq 1$, where every point of the right half of $\partial \mathbb{D}$ is inaccessible from the complement of this union). \vspace{6pt} In \S \ref{sec:val} we prove \begin{thmx}[Valence of $\Pi$] \label{C} The semiconjugacy $\Pi: I \to {\mathbb{T}}$ of \thmref{A} satisfies $$ \sup_{\tau \in {\mathbb{T}}} \ \# \Pi^{-1}(\tau) \leq D-d+2. $$ The inequality is strict if the component $K$ has period $k=1$. \end{thmx} The proof has two (somewhat related) ingredients: One is the dynamics of the ``gaps'' of $I=I_K$ and its maximal Cantor subset as degree $d$ invariant sets for ${\widehat{D}}^{\circ k}$. The other is a bound on the cardinality of the fibers of $\Pi$ whose angles are eventually periodic under ${\widehat{D}}^{\circ k}$. This is related to the problem of bounding the number of cycles of smooth rays that land on a periodic point, studied in degree $2$ by Milnor \cite{M2} and extended to higher degrees by Kiwi \cite{K} in their work on ``orbit portraits'' of polynomial maps. The argument in our case is a bit more subtle, as we have to deal with periodic rays that are infinitely broken, or smooth periodic rays that pull back to pairs of broken rays. 
\vspace{6pt} In \S \ref{sec:ex} we present a general method for constructing examples where $K$ has period $1$ and $\Pi$ has the top valence $D-d+1$ predicted by \thmref{C}. Consider the $D-1$ fixed points $$ \theta_i := \frac{i}{D-1} \ (\operatorname{mod} 1) $$ of the map ${\widehat{D}}$, taking the subscript $i$ modulo $D-1$. Using the technique of quasiconformal surgery we prove the following \begin{thmx}[Top valence and isolated rays]\label{D} Given integers $2 \leq d<D$, a degree $d$ polynomial $Q$ with connected filled Julia set and a fixed point $\theta_j$ of ${\widehat{D}}$, there is a polynomial $P$ of degree $D$ whose filled Julia set has a component $K=P(K)$ such that \vspace{6pt} \begin{enumerate} \item[(i)] $P$ restricted to a neighborhood of $K$ is hybrid equivalent to $Q$. \vspace{6pt} \item[(ii)] The $D-d+1$ consecutive fixed points $\theta_j, \ldots, \theta_{j+D-d}$ belong to the same fiber of the semiconjugacy $\Pi: I_K \to {\mathbb{T}}$. \vspace{6pt} \end{enumerate} The corresponding rays $R^P_{\theta_j}, \ldots, R^P_{\theta_{j+D-d}}$ co-land at a fixed point of $P$ on $\partial K$. \end{thmx} Compare Figs. \ref{seh} and \ref{chah}. When $D>d+1 \geq 3$, it follows that the $D-d-1$ angles $\theta_{j+1}, \ldots, \theta_{j+D-d-1}$ are isolated points of $I=I_K$. The set $I$ can have isolated points even when $D=3$ but that would require a higher period $k$ by \thmref{C} (see Example \ref{cubicex} and compare Figures \ref{ghost1} and \ref{ghost2}). \section{Preliminaries}\label{sec:prelim} We assume the reader is familiar with the basic notions of complex dynamics, as in \cite{M1}. For convenience, and to establish our notations, we quickly recall some definitions. \vspace{6pt} \noindent {\it Convention.} For distinct points $a,b \in {\mathbb{T}}$ we use the notation $]a,b[$ for the open interval in ${\mathbb{T}}$ traversed counterclockwise from $a$ to $b$. We define $[a,b[, ]a,b], [a,b]$ by adding the suitable endpoints to $]a,b[$.
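As a quick sanity check on the fixed points $\theta_i = i/(D-1)$ of ${\widehat{D}}$ introduced above: $D\theta \equiv \theta \ (\operatorname{mod} 1)$ holds exactly when $(D-1)\theta \in \mathbb{Z}$. The following sketch (Python; illustrative only, with a hypothetical degree $D=5$) verifies this with exact rational arithmetic:

```python
from fractions import Fraction

D = 5                                     # hypothetical degree
theta = [Fraction(i, D - 1) for i in range(D - 1)]

# D*t = t (mod 1)  <=>  (D - 1)*t is an integer, so these D - 1 angles
# are precisely the fixed points of the map t -> D*t (mod 1).
for t in theta:
    assert (D * t) % 1 == t

# Cross-check against all angles with denominator (D - 1) * 7:
q = (D - 1) * 7
fixed = [Fraction(n, q) for n in range(q) if (D * Fraction(n, q)) % 1 == Fraction(n, q)]
assert fixed == theta
```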
\subsection{Green and B\"{o}ttcher}\label{g&b} Let $P:\mathbb{C} \to \mathbb{C}$ be a monic polynomial map of degree $D \geq 2$. The {\it \bfseries filled Julia set} $K_P$ is the union of all bounded orbits of $P$: $$ K_P= \{ z \in \mathbb{C} : \{ P^{\circ n}(z) \}_{n \geq 0} \ \text{is bounded} \}. $$ It is a compact non-empty subset of the plane with connected complement $\mathbb{C} \smallsetminus K_P$. This complement can be described as the {\it \bfseries basin of infinity} of $P$, that is, the set of all points which escape to $\infty$ under the iterations of $P$. The {\it \bfseries Green's function}\index{Green's function} of $P$ is the continuous subharmonic function $G=G_P: \mathbb{C} \to [0,+\infty[$ defined by $$ G(z)=\lim_{n \to \infty} \frac{1}{D^n} \log^+ |P^{\circ n}(z)| $$ which describes the escape rate of $z$ to $\infty$ under the iterations of $P$. Here $\log^+ t = \max \{ \log t, 0 \}$. It satisfies the relation $$ G(P(z))=D \, G(z) \qquad \text{for all} \ z \in \mathbb{C}, $$ with $G(z)=0$ if and only if $z \in K_P$. We often refer to $G(z)$ as the {\it \bfseries potential} of $z$. The Green's function is harmonic in $\mathbb{C} \smallsetminus K_P$ and has critical points precisely at the escaping precritical points of $P$, that is, $\nabla G(z)=0$ for some $z \in \mathbb{C} \smallsetminus K_P$ if and only if $P^{\circ n}(z)$ is a critical point of $P$ for some $n \geq 0$. It is easy to see that for every $s>0$ there are at most finitely many critical points of $G$ at potentials higher than $s$. The open set $G^{-1}([0,s[)$ has finitely many connected components, all being Jordan domains with piecewise analytic boundaries. 
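The escape-rate formula for $G$ lends itself to direct numerical approximation. The sketch below (Python; an added illustration, not from the paper; the polynomial $z^2-6$, the escape radius, and the iteration cap are all hypothetical choices) checks the functional equation $G(P(z))=D\,G(z)$ and the vanishing of $G$ on $K_P$:

```python
import math

def green(P, D, z, iters=60, R=1e12):
    """Truncated escape-rate approximation of G(z) = lim D^{-n} log^+ |P^n(z)|.

    Once |z| > R, log|P(z)| is essentially D log|z|, so the limit has
    already stabilized to high accuracy.
    """
    for n in range(iters):
        if abs(z) > R:
            return math.log(abs(z)) / D ** n
        z = P(z)
    return 0.0  # orbit stayed bounded: z is (numerically) in K_P

P = lambda z: z * z - 6   # hypothetical example: the critical orbit escapes,
D = 2                     # so K_P is disconnected (a Cantor set)

w = 0.5 + 0.1j                                   # an escaping starting point
assert abs(green(P, D, P(w)) - D * green(P, D, w)) < 1e-9
assert green(P, D, 3.0) == 0.0                   # z = 3 is a fixed point of P
```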
\vspace{6pt} There is a unique conformal isomorphism $\frak{B}=\frak{B}_P$, defined in some neighborhood of $\infty$, which is tangent to the identity at $\infty$ (in the sense that $\lim_{z \to \infty} \frak{B}(z)/z = 1$) and conjugates $P$ to the power map $w \mapsto w^D$: \begin{equation}\label{bfe} \frak{B}(P(z))=(\frak{B}(z))^D \qquad \text{for large} \ |z|. \end{equation} We call $\frak{B}$ the {\it \bfseries B\"{o}ttcher coordinate} of $P$ near $\infty$. The modulus of $\frak{B}$ is related to the Green's function by the relation $$ \log |\frak{B}(z)|=G(z) \qquad \text{for large} \ |z|. $$ Set \begin{align*} s_{\max} & := \max \big\{ G(c): c \ \text{is a critical point of} \ P \big\}, \\ W_0 & :=G^{-1}(]s_{\max},+\infty[). \end{align*} It is not hard to see that $\frak{B}$ extends to a conformal isomorphism $W_0 \to \{ w : |w| > \ensuremath{{\operatorname{e}}}^{s_{\max}} \}$ which still satisfies the conjugacy relation \eqref{bfe} for $z \in W_0$. If $K_P$ is connected, every critical point of $P$ belongs to $K_P$, so $s_{\max}=0$. In this case $W_0=\mathbb{C} \smallsetminus K_P$ and $\frak{B}$ is a conformal isomorphism $\mathbb{C} \smallsetminus K_P \to \mathbb{C} \smallsetminus {\overline{\mathbb{D}}}$. If $K_P$ is disconnected, there is at least one critical point of $P$ in $\mathbb{C} \smallsetminus K_P$, so $s_{\max}>0$. In this case $W_0$ is a domain bounded by the piecewise analytic equipotential curve $G=s_{\max}$ containing the fastest escaping critical point(s) of $P$. \subsection{Generalized rays}\label{gr} For $\theta \in {\mathbb{T}}$, we denote by $R_\theta$ the maximally extended smooth field line of $\nabla G$ such that $\frak{B}(W_0 \cap R_\theta)$ is the radial line $\{ \ensuremath{{\operatorname{e}}}^{s+2\pi i \theta}: s>s_{\max} \}$. 
We can parametrize $R_\theta$ by the potential, so for each $\theta$ there is an $s_\theta \geq 0$ such that $G(R_\theta(s)) = s$ for all $s > s_\theta$.\footnote{This amounts to viewing $R_\theta$ as a trajectory of the vector field $\nabla G / \| \nabla G \|^2$.} The field line $R_\theta$ either extends all the way to the Julia set $\partial K_P$ in which case $s_\theta=0$, or it crashes into a critical point $\omega$ of $G$ at potential $s_\theta>0$ in the sense that $\lim_{s \to s_{\theta}^+} R_\theta(s)=\omega$. \vspace{6pt} A critical point $\omega$ of $G$ of order $n$ is the starting point of $n$ ascending and $n$ descending field lines for $\nabla G$ that alternate around $\omega$ (see \figref{cp} for the case $n=3$). Thus, at most $n$ field lines of the form $R_\theta$ can crash into $\omega$. \begin{lemma}\label{sts} \mbox{} \begin{enumerate} \item[(i)] The function $\theta \mapsto s_\theta$ is upper semicontinuous on ${\mathbb{T}}$. \vspace{6pt} \item[(ii)] For every $s>0$, the set $\{ \theta \in {\mathbb{T}}: s_\theta>s \}$ is finite. \vspace{6pt} \item[(iii)] For every $\theta \in {\mathbb{T}}$, $$ s_{{\widehat{D}}(\theta)} \leq D s_\theta. $$ Equality holds if $R_\theta$ does not crash into a critical point of $P$. \end{enumerate} \end{lemma} \begin{proof} (i) is the statement that if $s_{\theta_0}<s$ for some $\theta_0$, then $s_\theta<s$ for all $\theta$ close to $\theta_0$. This is a simple consequence of the fact that the trajectories of smooth vector fields depend continuously on their initial point. \vspace{6pt} (ii) follows from the fact that for every $s>0$ there are finitely many critical points of $G$ at potentials higher than $s$, and each of them can have only finitely many field lines crashed into it. \vspace{6pt} For (iii), first note that the image $P(R_\theta)$ is a smooth field line which by \eqref{bfe} maps under $\frak{B}$ to the radial line at angle ${\widehat{D}}(\theta)$. 
Thus \begin{equation}\label{kuku} P(R_\theta(s))=R_{{\widehat{D}}(\theta)}(Ds) \qquad \text{for all} \ s>s_\theta. \end{equation} This proves $s_{{\widehat{D}}(\theta)} \leq D s_\theta$. Now suppose $R_\theta$ does not crash into a critical point of $P$. Then there are two possibilities: (a) $R_\theta$ does not crash at all, so $s_\theta=0$. In this case \eqref{kuku} shows that $R_{{\widehat{D}}(\theta)}$ does not crash either and $s_{{\widehat{D}}(\theta)}=0$; (b) $R_\theta$ crashes into a strictly precritical point $\omega$ of $P$. In this case \eqref{kuku} shows that $R_{{\widehat{D}}(\theta)}$ crashes into $P(\omega)$ which is a critical point of $G$ at potential $D s_\theta$, proving once again $s_{{\widehat{D}}(\theta)}=D s_\theta$. \end{proof} \begin{figure}[t] \centering \begin{overpic}[width=0.5\textwidth]{cp.pdf} \put (55.5,46.5) {$\omega$} \put (64.5,93) {\small $R^-_{\theta}$} \put (51,93.5) {\small $R^+_{\theta}$} \end{overpic} \caption{\sl Field lines and equipotentials of the Green's function near a critical point $\omega$ of order $3$. Each of the three incoming field lines can be extended past $\omega$ by turning to the immediate right or left and continuing along the corresponding outgoing field line.} \label{cp} \end{figure} It follows from the upper semicontinuity of $\theta \mapsto s_\theta$ that the set $$ \Sigma : = \bigcup_{\theta \in {\mathbb{T}}} \bigcup_{s \in ]s_\theta,+\infty[} \{ \ensuremath{{\operatorname{e}}}^{s+2\pi i \theta} \} $$ is open. It is not hard to see that the extension of the B\"{o}ttcher coordinate defined by $\frak{B}(R_\theta(s)):=\ensuremath{{\operatorname{e}}}^{s+2\pi i \theta}$ gives a conformal isomorphism $\frak{B}: W \to \Sigma$, where \begin{equation}\label{omgdef} W : = \bigcup_{\theta \in {\mathbb{T}}} \bigcup_{s \in ]s_\theta,+\infty[} \{ R_\theta(s) \}. \end{equation} Observe that $W$, being homeomorphic to the star-shaped domain $\Sigma$, is simply connected.
\begin{corollary} The set ${\mathcal N} \subset {\mathbb{T}}$ of angles $\theta \in {\mathbb{T}}$ for which $s_\theta>0$ is countable, dense and backward-invariant under ${\widehat{D}}$. \end{corollary} This follows from the fact that there are countably many critical points of $G$ in $\mathbb{C} \smallsetminus K_P$, and that $s_\theta>0$ whenever $s_{{\widehat{D}}(\theta)}>0$ by \lemref{sts}. \vspace{6pt} Here is a closely related description of the set $\mathcal N$: \begin{corollary}\label{N0N} Let ${\mathcal N}_0$ be the finite set of angles $\theta \in {\mathbb{T}}$ for which the field line $R_\theta$ crashes into a critical point of $P$ in $\mathbb{C} \smallsetminus K_P$. Then $$ {\mathcal N} = \bigcup_{n \geq 0} {\widehat{D}}^{-n}({\mathcal N}_0). $$ \end{corollary} \begin{proof} The union is a subset of $\mathcal N$ since ${\mathcal N}_0 \subset {\mathcal N}$ and ${\mathcal N}$ is backward-invariant under ${\widehat{D}}$. Conversely, suppose $\theta \in {\mathcal N}$ so $R_\theta$ crashes into a critical point $\omega$ of $G$ in $\mathbb{C} \smallsetminus K_P$. Let $n \geq 0$ be the smallest integer for which $c:=P^{\circ n}(\omega)$ is a critical point of $P$. An easy induction using \eqref{kuku} and the inequality $s_{{\widehat{D}}(\theta)} \leq D s_\theta$ gives $$ P^{\circ n}(R_\theta(s))=R_{{\widehat{D}}^{\circ n}(\theta)}(D^n s) \qquad \text{for} \ s>s_\theta. $$ This implies that the field line $R_{{\widehat{D}}^{\circ n}(\theta)}$ crashes into $c$, which shows ${\widehat{D}}^{\circ n}(\theta) \in {\mathcal N}_0$. \end{proof} When $\theta \notin {\mathcal N}$, the field line $R_\theta$ is called the {\it \bfseries smooth ray} at angle $\theta$. When $\theta \in {\mathcal N}$, the field line $R_\theta$ is defined only for $s > s_\theta$ and there is more than one way to extend it to a curve consisting of field lines and singularities of $\nabla G$ on which $G$ defines a homeomorphism onto $]0,+\infty[$. 
But there are always two special choices: $R_\theta^+$ which turns immediately to the right and $R_\theta^-$ which turns immediately to the left at each critical point met during the descent (compare \figref{cp}). We call these extensions the {\it \bfseries right} and {\it \bfseries left broken rays} at angle $\theta$, respectively. Note that these broken rays are also parametrized by the potential, so $R^{\pm}_{\theta}(s)$ make sense for all $s>0$, and $$ R^+_{\theta}(s)=R^-_{\theta}(s)=R_\theta(s) \qquad \text{for} \ s > s_\theta. $$ On the other hand, an easy exercise shows that for each potential $s<s_\theta$ the points $R^{\pm}_\theta(s)$ belong to different connected components of $G^{-1}([0,s_\theta[)$, so the restrictions of the curves $s \mapsto R^\pm_\theta(s)$ to $]0,s_\theta[$ are disjoint. \vspace{6pt} Each compact piece of a broken ray is the one-sided uniform limit of the corresponding piece of the nearby smooth rays in the following sense: Let $\theta_0 \in {\mathcal N}$ and fix $0<c<1$ such that $1/c>s_{\theta_0}>c>0$. By \lemref{sts} we have $s_\theta \leq c$ for all $\theta$ in a deleted neighborhood of $\theta_0$. Then, \begin{equation}\label{limit} \lim_{\theta \nearrow \theta_0} R_{\theta}(s) = R_{\theta_0}^-(s) \quad \text{and} \quad \lim_{\theta \searrow \theta_0} R_{\theta}(s) = R_{\theta_0}^+(s) \end{equation} uniformly on $s \in [c,1/c]$. \vspace{6pt} In what follows by a {\it \bfseries ray} we always mean a smooth or broken ray. It easily follows from \eqref{kuku} that \begin{align*} P(R_\theta) & = R_{{\widehat{D}}(\theta)} & & \text{if} \ \theta \notin {\mathcal N} \\ P(R_\theta^\pm) & = R_{{\widehat{D}}(\theta)}^\pm & & \text{if} \ \theta \in {\mathcal N} \ \text{and} \ {\widehat{D}}(\theta) \in {\mathcal N} \\ P(R_\theta^\pm) & = R_{{\widehat{D}}(\theta)} & & \text{if} \ \theta \in {\mathcal N} \ \text{and} \ {\widehat{D}}(\theta) \notin {\mathcal N}. \end{align*} A critical point of $G$ of order $n$ belongs to $n$ left and $n$ right broken rays.
On the other hand, a non-critical point in $\mathbb{C} \smallsetminus K_P$ belongs either to a unique smooth ray or to a pair of left and right broken rays. \vspace{6pt} Finally, let us discuss the number of critical points of $G$ on a broken ray at angle $\theta \in {\mathcal N}$. We consider three cases: \vspace{6pt} $\bullet$ {\it Case 1.} $\theta$ has infinite forward orbit under ${\widehat{D}}$. Then by \corref{N0N} there is a smallest integer $n \geq 1$ such that ${\widehat{D}}^{\circ n}(\theta) \notin {\mathcal N}$, so we have the orbit of distinct rays $$ R^\pm_{\theta} \stackrel{P}{\longrightarrow} R^\pm_{{\widehat{D}}(\theta)} \stackrel{P}{\longrightarrow} \cdots \stackrel{P}{\longrightarrow} R^\pm_{{\widehat{D}}^{\circ n-1}(\theta)} \stackrel{P}{\longrightarrow} R_{{\widehat{D}}^{\circ n}(\theta)}, $$ with the last ray being smooth. It follows that $R^\pm_{\theta}$ can contain only finitely many critical points of $G$. These critical points must eventually map (in at most $n-1$ iterations) to distinct critical points of $P$. In particular, $R^\pm_{\theta}$ can contain at most $D-2$ critical points of $G$. \vspace{6pt} $\bullet$ {\it Case 2.} $\theta$ is periodic of period $q \geq 1$ under ${\widehat{D}}$. Then $R^{\pm}_\theta$ are mapped onto themselves under $P^{\circ q}$: $$ P^{\circ q}(R^{\pm}_\theta(s))=R^{\pm}_\theta(D^q s) \qquad \text{for all} \ s>0. $$ Since $R^{\pm}_\theta$ first crash into a critical point of $G$ at potential $s_\theta$, they contain at least the critical points $R^{\pm}_\theta(s_\theta/D^{nq})$ for every $n \geq 0$. \vspace{6pt} $\bullet$ {\it Case 3.} $\theta$ is not periodic but there is a smallest integer $n \geq 1$ such that ${\widehat{D}}^{\circ n}(\theta)$ is periodic of period $q \geq 1$ under ${\widehat{D}}$. 
Combining the previous two cases, we see that $R_\theta^\pm$ contain infinitely many critical points if ${\widehat{D}}^{\circ n}(\theta) \in {\mathcal N}$ and only finitely many critical points if ${\widehat{D}}^{\circ n}(\theta) \notin {\mathcal N}$. \vspace{6pt} It follows from the above analysis that a periodic ray is either smooth or infinitely broken. In particular, {\it distinct periodic rays are always disjoint}. \subsection{Polynomial-like maps}\label{plm} A holomorphic map $f: U_1 \to U_0$ between Jordan domains is {\it \bfseries polynomial-like} if $\overline{U_1} \subset U_0$ and if $f$ is proper. In this case $f$ has a well-defined mapping degree $d \geq 1$. In analogy with the polynomial case, the filled Julia set of $f$ is defined as the non-empty compact set $K_f = \{ z \in U_1 : f^{\circ n}(z) \in U_1 \ \text{for all} \ n \geq 0 \}$. According to Douady and Hubbard \cite{DH}, there is a polynomial $Q$ of degree $d$ and a quasiconformal homeomorphism $\varphi : U_0 \to \varphi(U_0)$ which satisfies $$ \varphi \circ f = Q \circ \varphi \qquad \text{in} \ U_1, $$ with $\overline{\partial} \varphi=0$ almost everywhere on $K_f$. The relation $\varphi(K_f)=K_Q$ easily follows. We say that $f$ and $Q$ are {\it \bfseries hybrid equivalent} and that $\varphi$ is a {\it \bfseries hybrid conjugacy} between $f$ and $Q$. When $K_f$ is connected, the polynomial $Q$ is uniquely determined up to affine conjugacy. \section{Basic properties of the set $I_K$}\label{sec:bpi} For the rest of the paper and unless otherwise stated, we fix a monic polynomial $P$ of degree $D \geq 3$. Our standing assumption is that the filled Julia set $K_P$ is disconnected and has a non-degenerate connected component $K$ which is periodic with period $k \geq 1$. This section is devoted to the construction of the set $I=I_K$ of the angles of external rays of $P$ that accumulate on $\partial K$, and to establishing its basic properties.
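The escape-time characterization of filled Julia sets used here (for $K_P$, and for $K_f$ in \S \ref{plm}) can be sketched numerically. The toy model below is an illustrative assumption, not the setup of this paper: the quadratic $f(z)=z^2+c$ restricted to a round disk stands in for a polynomial-like restriction, and the disk radius, iteration cap, and sample points are ad hoc choices.

```python
# Heuristic membership test for the filled Julia set
#   K_f = { z in U_1 : f^n(z) in U_1 for all n >= 0 }
# of a polynomial-like restriction.  Toy model (illustrative only):
# f(z) = z^2 + c on the disk U_1 = {|z| < r}.  Surviving max_iter
# steps inside U_1 gives only tentative evidence of membership.
def in_filled_julia(c, z, r=1.6, max_iter=200):
    for _ in range(max_iter):
        if abs(z) >= r:
            return False   # orbit left U_1, so z is not in K_f
        z = z * z + c
    return True            # no escape detected within max_iter steps

assert in_filled_julia(0.0, 0.0)      # superattracting fixed point
assert in_filled_julia(-1.0, 0.0)     # superattracting cycle 0 -> -1 -> 0
assert not in_filled_julia(0.0, 1.5)  # 1.5 -> 2.25 leaves the disk
```

A `True` answer is only a finite-time certificate; genuine membership in $K_f$ requires the orbit to stay in $U_1$ forever.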
We would like to point out that much of the material related to the proof of \thmref{A} in this section and the next can be presented in the language of ``external classes'' (see \cite{DH}). However, this would require roughly the same amount of effort as the more concrete approach taken here. \subsection{Construction of the set $I$}\label{conI} For $s>0$, consider the Jordan domain $$ V_s := \text{the connected component of} \ G^{-1}([0,s[) \ \text{containing} \ K. $$ Then the $V_s$ are nested and $K= \bigcap_{s>0} V_s$. For sufficiently small $s>0$ the restriction {\mapfromto {P^{\circ k}|_{V_s}} {V_s} {V_{D^k s}}} is a polynomial-like map of some degree $2 \leq d < D$ (independent of $s$) with connected filled Julia set $K$. Let us denote by $s^\ast>0$ the largest $s$ for which this is true. Equivalently, $s^\ast$ can be characterized as the largest $s$ for which all critical points of $P^{\circ k}$ in $V_s$ belong to $K$.\vspace{6pt} By \lemref{sts} there are at most finitely many $\theta \in {\mathcal N}$ for which the field line $R_\theta$ crashes at a potential higher than $s$. It follows that the set $$ E_s := \{ \theta \in {\mathbb{T}} : R_\theta \cap V_s \neq \emptyset \} $$ is a union of finitely many disjoint open intervals with endpoints belonging to ${\mathcal N}$. If $E_s \not= {\mathbb{T}}$, the complement ${\mathbb{T}} \smallsetminus E_s$ consists of equally many closed non-degenerate intervals. The closure $$ I_s := \overline{E}_s $$ is thus a finite union of disjoint closed non-degenerate intervals with endpoints in ${\mathcal N}$. Evidently $I_s$ is the set of angles $\theta$ such that the field line $R_\theta$ or precisely one of the broken rays $R^{\pm}_\theta$ enters $V_s$. \vspace{6pt} It will be convenient to call each complementary component of a compact subset of ${\mathbb{T}}$ a {\it \bfseries gap} of that set.
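The interval combinatorics of sets such as $I_s$ and their gaps are easy to experiment with. The following sketch (with made-up arcs, not computed from any polynomial) lists the gaps of a finite union of disjoint closed arcs of ${\mathbb{T}} = {\mathbb{R}}/{\mathbb{Z}}$ and checks that arcs and gaps together tile the circle.

```python
# Schematic illustration: the "gaps" of a finite union of disjoint
# closed arcs on the circle T = R/Z, each arc given as a pair (a, b)
# with 0 <= a < b < 1.  The input arcs here are made up for the example.
def gaps(arcs):
    """Return the complementary open arcs, in cyclic order.

    Each gap is a pair (start, end); the last gap may have end > 1,
    meaning it wraps around 0.  Its length is (end - start) mod 1.
    """
    arcs = sorted(arcs)
    out = []
    n = len(arcs)
    for i in range(n):
        b = arcs[i][1]
        a_next = arcs[(i + 1) % n][0] + (1 if i == n - 1 else 0)
        if a_next > b:            # non-degenerate gap
            out.append((b, a_next))
    return out

# A set with two components and two gaps, schematically as in the figure.
I_s = [(0.05, 0.20), (0.40, 0.70)]
G = gaps(I_s)                     # two gaps: (0.20, 0.40) and (0.70, 1.05)
total = sum(b - a for a, b in I_s) + sum((e - s) % 1.0 for s, e in G)
assert abs(total - 1.0) < 1e-9    # arcs and gaps tile the circle
```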
Each gap of $I_s$ is an open interval of the form $]\theta_1, \theta_2[$ where the broken rays $R^-_{\theta_1}$ and $R^+_{\theta_2}$ crash into a critical point of $G$ at some potential $\geq s$ and enter $V_s$ along a common field line (see \figref{Es}). We call such $R^-_{\theta_1}, R^+_{\theta_2}$ a {\it \bfseries ray pair} in $I_s$. It is not hard to see that every ray pair in $I_s$ arises from a gap in this fashion, that is, if $\theta_1, \theta_2 \in I_s$ and if $R^-_{\theta_1}$ and $R^+_{\theta_2}$ crash into a critical point at a potential $\geq s$, then $]\theta_1, \theta_2[$ is a gap of $I_s$. \begin{figure}[t] \centering \begin{overpic}[width=0.85\textwidth]{Es.pdf} \put (91,36) {\small $\theta_1$} \put (77.5,35.5) {\small $\theta_2$} \put (72,30) {\small $\theta_3$} \put (75,11) {\small $\theta_4$} \put (40,16) {\small $V_s$} \put (60,13) {\small $\Gamma_s$} \put (85,22) {\small $I_s$} \put (65,48) {\small $R^-_{\theta_1}$} \put (29,49) {\small $R^+_{\theta_2}$} \put (16,39) {\small $R^-_{\theta_3}$} \put (10,6) {\small $R^+_{\theta_4}$} \put (47.5,30) {\small $z_1$} \put (14.5,22.5) {\small $z_2$} \end{overpic} \caption{\sl The set $I_s$ is the closure of the set of angles of the field lines from $\infty$ that enter the topological disk $V_s$. Here $I_s$ has two gaps $]\theta_1, \theta_2[$ and $]\theta_3, \theta_4[$ corresponding to the ray pairs $R^-_{\theta_1}, R^+_{\theta_2}$ and $R^-_{\theta_3}, R^+_{\theta_4}$, and the topological boundary $\Gamma_s=\partial V_s$ has two root points $z_1,z_2$ (see \S \ref{asep}).} \label{Es} \end{figure} \begin{lemma}\label{isk} ${\widehat{D}}^{\circ k}(I_s)=I_{D^k s}$ whenever $0<s<s^\ast$. \end{lemma} \begin{proof} If $\theta \in E_s$, the field line $R_\theta$ enters $V_s$. Since $P^{\circ k}(V_s)=V_{D^k s}$, it follows that $R_{{\widehat{D}}^{\circ k}(\theta)} \supset P^{\circ k}(R_\theta)$ enters $V_{D^k s}$, so ${\widehat{D}}^{\circ k}(\theta) \in E_{D^k s}$. 
This proves ${\widehat{D}}^{\circ k}(E_s) \subset E_{D^k s}$ and the inclusion ${\widehat{D}}^{\circ k}(I_s) \subset I_{D^k s}$ follows. \vspace{6pt} Now take any $\theta' \in E_{D^k s}$, let $\zeta$ be the intersection point of $R_{\theta'}$ with the boundary of $V_{D^k s}$ and find $z$ on the boundary of $V_s$ such that $P^{\circ k}(z)=\zeta$. If $z$ belongs to $R_\theta$ for some $\theta$, then $\theta \in E_s$ and $R_{{\widehat{D}}^{\circ k}(\theta)} \supset P^{\circ k}(R_\theta)$ passes through $\zeta$, so $\theta' = {\widehat{D}}^{\circ k}(\theta) \in {\widehat{D}}^{\circ k}(E_s)$. Otherwise, $z$ belongs to some broken ray $R^+_\theta$ which enters $V_s$. Then $\theta \in I_s$ and $P^{\circ k}(R^+_\theta) = R_{{\widehat{D}}^{\circ k}(\theta)}$ or $R^+_{{\widehat{D}}^{\circ k}(\theta)}$ passes through $\zeta$. Since $s_{\theta'}<D^k s$, we conclude again that $\theta' = {\widehat{D}}^{\circ k}(\theta) \in {\widehat{D}}^{\circ k}(I_s)$. This proves $E_{D^k s} \subset {\widehat{D}}^{\circ k}(I_s)$ and the reverse inclusion $I_{D^k s} \subset {\widehat{D}}^{\circ k}(I_s)$ follows. \end{proof} \begin{remark}\label{excep} Since ${\widehat{D}}$ is an open map, it follows from the above lemma that ${\widehat{D}}^{\circ k}(E_s) \subset E_{D^k s}$. On the other hand, a point on the boundary $\partial I_s$ may well map to an interior point in $E_{D^k s}$, although this should be thought of as a rare occurrence. In fact, if $\theta \in \partial I_s$ and ${\widehat{D}}^{\circ k}(\theta) \in E_{D^k s}$ for some $0<s<s^{\ast}$, then $\theta$ must belong to the finite set $\bigcup_{n=0}^{k-1} {\widehat{D}}^{-n}({\mathcal N}_0)$ (recall from \S \ref{gr} that ${\mathcal N}_0$ is the set of angles of rays which first crash into a critical point of $P$ in $\mathbb{C} \smallsetminus K_P$). 
To see this, simply note that if $\theta, {\widehat{D}}(\theta), \ldots, {\widehat{D}}^{\circ k-1}(\theta)$ were all outside ${\mathcal N}_0$, repeated application of \lemref{sts} would give $s_{{\widehat{D}}^{\circ k}(\theta)}=D^k s_\theta \geq D^k s$, which would contradict ${\widehat{D}}^{\circ k}(\theta) \in E_{D^k s}$. \end{remark} The sets $E_s$ and $I_s$ form nested families since if $s<s'$ and $R_\theta$ enters $V_s$, then it also enters $V_{s'}$. The intersection $$ I := \bigcap_{s>0} I_s $$ is thus a non-empty compact subset of ${\mathbb{T}}$ which by \lemref{isk} satisfies ${\widehat{D}}^{\circ k}(I)=I$. \begin{lemma} $I$ is the set of angles $\theta \in {\mathbb{T}}$ for which the smooth ray $R_\theta$ accumulates on $\partial K$ if $\theta \notin {\mathcal N}$, or (precisely) one of the broken rays $R^{\pm}_\theta$ accumulates on $\partial K$ if $\theta \in {\mathcal N}$. \end{lemma} \begin{proof} If $\theta \notin {\mathcal N}$ and $R_\theta$ accumulates on $\partial K$, then $R_\theta$ enters $V_s$ for every $s>0$, so $\theta \in \bigcap_{s>0} E_s \subset I$. If $\theta \in {\mathcal N}$ and, say, $R^+_{\theta}$ accumulates on $\partial K$ (the case of $R^-_{\theta}$ is similar), then by \eqref{limit} for every $s>0$ there is an $\varepsilon>0$ such that $R_{\theta'}$ enters $V_s$ if $\theta<\theta'<\theta+\varepsilon$. Any such $\theta'$ belongs to $E_s$, so $\theta \in I_s$. Since this holds for all $s>0$, it follows that $\theta \in I$. \vspace{6pt} Conversely, suppose $\theta \in I$. If $\theta \notin {\mathcal N}$, then $\theta \in E_s$ for every $s>0$. It follows that the smooth ray $R_\theta$ enters every $V_s$, so it accumulates on $\partial K$. If $\theta \in {\mathcal N}$ and $0<s<s_\theta$, then $\theta$ is a boundary point of $I_s$, so precisely one of the broken rays $R^{\pm}_\theta$ enters $V_s$. Since this holds for every $0<s<s_\theta$, it follows that this broken ray accumulates on $\partial K$. 
\end{proof} \subsection{The affine structure of equipotential curves}\label{asep} Recall that for $s>0$ the topological disk $V_s$ is the connected component of $G^{-1}([0,s[)$ containing $K$. Let us revisit the set $E_s$ of angles of the field lines $R_\theta$ that enter $V_s$, and the closure $I_s= \overline{E_s}$. For each gap $]\theta_1, \theta_2[$ in $I_s$, the broken rays $R^-_{\theta_1}$ and $R^+_{\theta_2}$ crash at some potential $\geq s$ and enter $V_s$ along a common field line. The intersection of this field line with the topological boundary $$ \Gamma_s := \partial V_s $$ is called a {\it \bfseries root} point of $\Gamma_s$ (see \figref{Es}). \vspace{6pt} The extended B\"{o}ttcher coordinate $\frak{B}$ defined in the simply connected domain $W$ of \eqref{omgdef} can be used to define a canonical affine structure on the equipotential curves $\Gamma_s$. Recall that $\frak{B}: W \to \Sigma$ is a conformal isomorphism which satisfies $\frak{B} \circ P=\frak{B}^D$ in $W$. The function $$ \Theta := \frac{1}{2\pi}\arg(\frak{B}) $$ is harmonic and well-defined up to an additive integer.\footnote{It is easy to check that $2\pi \Theta$ is a harmonic conjugate of $G$ in $W$.} We will think of $\Theta$ as a map $W \to {\mathbb{T}}$. \vspace{6pt} The restriction of $\Theta$ gives a homeomorphism $\Gamma_s \smallsetminus \{ \text{root points} \} \to E_s$. At a root point of $\Gamma_s$ this map has a jump discontinuity. In fact, if $z_0$ is a root point corresponding to the gap $]\theta_1, \theta_2[$ in $I_s$, then in the positive orientation of $\Gamma_s$ we have $\lim_{z \nearrow z_0} \Theta(z)= \theta_1$ and $\lim_{z \searrow z_0} \Theta(z)= \theta_2$. The inverse map $E_s \to \Gamma_s \smallsetminus \{ \text{root points} \}$ thus extends continuously to a surjective map $h_s : I_s \to \Gamma_s$ which is homeomorphic on each connected component of $I_s$ but identifies pairs of gap endpoints by sending them to the corresponding root. 
Similarly, consider a piecewise affine surjection $\Pi_s : I_s \to {\mathbb{T}}$ that has constant slope $1/|I_s|$ on each connected component of $I_s$ and identifies pairs of gap endpoints. Thus, there is an orientation-preserving homeomorphism $\psi_s: \Gamma_s \to {\mathbb{T}}$ which makes the following diagram commute: \begin{equation}\label{hhh} \begin{tikzcd}[column sep=small] I_s \arrow[drr,swap,"\Pi_s"] \arrow[rr,"h_s"] & & \Gamma_s \arrow[d,"\psi_s"] \\ & & {\mathbb{T}} \end{tikzcd} \end{equation} Evidently $\Pi_s$, and therefore $\psi_s$, is unique up to a rigid rotation of the circle ${\mathbb{T}}$. Thus there is a unique affine structure on $\Gamma_s$ with respect to which any choice of $\psi_s$ is an affine homeomorphism. This structure allows us to talk about affine maps between various $\Gamma_s$'s. It also equips $\Gamma_s$ with a well-defined metric coming from the Euclidean metric on ${\mathbb{T}}$. In particular, the angular length of $\Gamma_s$ is $$ |\Gamma_s| = |I_s|. $$ In what follows we always measure lengths and slopes on $\Gamma_s$ with respect to this affine structure. \begin{lemma}\label{slp} For $0<s<s^\ast$ the following diagram is commutative: \begin{equation}\label{glue} \begin{tikzcd}[column sep=small] I_s \arrow[d,swap,"h_s"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & I_{D^k s} \arrow[d,"h_{D^k s}"] \\ \Gamma_s \arrow[rr,"P^{\circ k}"] & & \Gamma_{D^k s} \end{tikzcd} \end{equation} In particular, the iterate $P^{\circ k}: \Gamma_s \to \Gamma_{D^k s}$ is an affine covering map of degree $d$ and slope $D^k$. \end{lemma} \begin{proof} For simplicity set $s':=D^k s$. Both sets $\Gamma_s \smallsetminus \{ \text{root points} \}$ and $\Gamma_{s'} \smallsetminus \{ \text{root points} \}$ are contained in $W$ and $P^{\circ k}$ maps the former to the latter. 
Since $\frak{B} \circ P^{\circ k} = \frak{B}^{D^k}$ in $W$, we obtain $$ \Theta \circ P^{\circ k} = {\widehat{D}}^{\circ k} \circ \Theta \qquad \text{on} \ \Gamma_s \smallsetminus \{ \text{root points} \}. $$ The result follows since $h_s$ and $h_{s'}$ are the inverses of the restrictions of $\Theta$ to $\Gamma_s \smallsetminus \{ \text{root points} \}$ and $\Gamma_{s'} \smallsetminus \{ \text{root points} \}$, respectively. \end{proof} Consider the continuous retraction {\mapfromto {\rho_s} {\mathbb{C} \smallsetminus V_s} {\Gamma_s}} defined by projecting along rays. More precisely, take $\zeta \in \mathbb{C} \smallsetminus V_s$ and consider two cases. If $\zeta \in R_\theta$ for some $\theta \in E_s$, let $\rho_s(\zeta)$ be the unique point in $\Gamma_s \cap R_\theta$. Otherwise, $\zeta$ belongs to a ray whose angle is in the closure $[\theta_1,\theta_2]$ of a gap of $I_s$. In this case, let $\rho_s(\zeta)$ be the root point of $\Gamma_s$ determined by $R_{\theta_1}^- , R_{\theta_2}^+$ (see \figref{Es}). Evidently, for any $t>s$ the restriction $\rho_s: \Gamma_t \to \Gamma_s$ is piecewise affine of degree $1$, with slope $1$ on the open arcs corresponding to $E_s$ and slope $0$ on the arcs corresponding to $E_t \smallsetminus I_s$. \vspace{6pt} Now let $0<s<s^\ast$ so the restriction {\mapfromto {P^{\circ k}} {\Gamma_s}{\Gamma_{D^k s}}} is a covering map of degree $d$. Consider an affine homeomorphism {\mapfromto {\psi_s} {\Gamma_s} {\mathbb{T}}} with slope $1/|\Gamma_s|$, as in \eqref{hhh}. By \lemref{slp} the composition $$ g_s := \psi_s \circ \rho_s \circ P^{\circ k} \circ \psi_s^{-1} : {\mathbb{T}} \to {\mathbb{T}} $$ has degree $d$ and is piecewise affine with slopes $D^k$ and $0$. Let us call a fixed point $p=g_s(p)$ near which $g_s$ is not constant {\it \bfseries semi-repelling}. The terminology is justified by the fact that the derivative or a one-sided derivative of $g_s$ at such $p$ is $D^k>1$. 
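To make the circle maps just described more concrete, here is a small numerical sketch of a model map that is piecewise affine with slopes $D^k$ and $0$; the parameters $D^k=8$, $d=2$ and the single expanding piece are illustrative choices, not derived from any particular polynomial. On the expanding piece a fixed point satisfies $D^k t = t + j$ for an integer $j$, i.e.\ $t = j/(D^k-1)$, while fixed points on the constant pieces are not semi-repelling.

```python
# Model (illustrative) of a degree-d circle map that is piecewise affine
# with slopes Dk and 0, as in the definition of g_s.  Here Dk = 8, d = 2:
# the lift rises with slope Dk on [0, d/Dk] and is constant on [d/Dk, 1].
Dk, d = 8, 2

def lift(t):
    """Lift of the model map; satisfies lift(t + 1) = lift(t) + d."""
    n, t0 = divmod(t, 1.0)
    v = Dk * t0 if t0 <= d / Dk else float(d)
    return v + n * d

def semi_repelling_fixed_points():
    """Fixed points lying on the expanding (slope-Dk) piece."""
    pts = []
    for j in range(Dk - 1):
        t = j / (Dk - 1)          # solves Dk*t = t + j
        if t < d / Dk:            # lands on the expanding piece
            pts.append(t)
    return pts

fps = semi_repelling_fixed_points()
assert len(fps) >= d - 1          # at least d-1 semi-repelling fixed points
for t in fps:                     # each is genuinely fixed on the circle
    diff = lift(t) - t
    assert abs(diff - round(diff)) < 1e-9
```

In this model there are two such points, $t=0$ and $t=1/7$, so the count is $2 \geq d-1$; compare the intermediate value argument in the proof of \lemref{fpcount} below.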
\begin{lemma}\label{fpcount} The map $g_s$ has at least $d-1$ semi-repelling fixed points on the circle. \end{lemma} (Compare \cite{PZ2} for a closely related result.) \begin{proof} Take any lift $\tilde{g}_s: {\mathbb{R}} \to {\mathbb{R}}$ of $g_s$ and consider the function $T(t)=\tilde{g}_s(t)-t$ which is piecewise affine with slopes $D^k-1>0$ and $-1$ and satisfies $T(t+1)=T(t)+d-1$ for all $t$. The fixed points of $g_s$ correspond to the points in $[0,1[$ at which $T$ takes an integer value, and being semi-repelling means $T$ does not have slope $-1$ there. By the intermediate value theorem, for each of the $d-1$ integers $j$ satisfying $T(0) \leq j < T(1)=T(0)+d-1$, the equation $T(t)=j$ has at least one solution in $[0,1[$. Evidently $T$ cannot have slope $-1$ at all such solutions, so at least one of them must correspond to a semi-repelling fixed point of $g_s$. \end{proof} By construction, each semi-repelling fixed point of $g_s$ corresponds to a fixed point of ${\widehat{D}}^{\circ k}$ in $I_s$. Since the $I_s$ are nested with $I = \bigcap_{s>0} I_s$, and since ${\widehat{D}}^{\circ k}$ has only finitely many fixed points on the circle, we conclude from \lemref{fpcount} the following \begin{corollary} \label{fps} There exist at least $d-1$ points $\theta \in I$ such that ${\widehat{D}}^{\circ k}(\theta) = \theta$. \end{corollary} The number of fixed points of ${\widehat{D}}^{\circ k}$ in $I$ could be greater than $d-1$; see the examples in \S \ref{sec:ex}. \begin{remark} The periodic points of ${\widehat{D}}$ in the above corollary are all of minimal period $k$. In fact, if ${\widehat{D}}^{\circ m}(\theta)=\theta$ for some $\theta \in I$ and $m>0$, then $P^{\circ m}(R_\theta)=R_\theta$ or $P^{\circ m}(R^\pm_\theta)=R^\pm_\theta$ accumulates on $K$, which implies $P^{\circ m}(K) \cap K \neq \emptyset$. As the component $K$ has minimal period $k$ under $P$, we conclude that $m$ must be a multiple of $k$.
\end{remark} \section{Proof of Theorem \ref{A}}\label{sec:pfsa} We now have all the ingredients for the proof of \thmref{A}. The existence of the semiconjugacy and the Hausdorff dimension bound will be proved in \S \ref{consp}. The uniqueness of $\Pi$ will be addressed in \S \ref{uniqq}. \subsection{Construction of the semiconjugacy $\Pi$}\label{consp} Let $\theta_0$ be a fixed point of ${\widehat{D}}^{\circ k}$ in $I$ whose existence is guaranteed by \corref{fps}. Fix any $s_0$ with $0<s_0<s^\ast$ and consider the potentials $$ s_n := D^{-nk} s_0 \qquad \text{for} \ n \geq 0. $$ For simplicity, we write $I_n$ for $I_{s_n}$, $\Gamma_n$ for $\Gamma_{s_n}$, and so on. Let {\mapfromto {\Pi_n} {I_n} {\mathbb{T}}} denote the unique piecewise affine surjection with slope $1/|I_n|$, normalized so that $\Pi_n(\theta_0) =0$, and consider the induced orientation-preserving affine homeomorphism {\mapfromto {\psi_n} {\Gamma_n} {\mathbb{T}}} which by \eqref{hhh} satisfies \begin{equation}\label{php} \psi_n \circ h_n = \Pi_n. \end{equation} For $n>0$, the composition $\psi_{n-1} \circ P^{\circ k} \circ \psi_n^{-1} : {\mathbb{T}} \to {\mathbb{T}}$ is a degree $d$ covering map that fixes $0$ and has constant slope, so it must be the map ${\widehat{d}}$. Thus, $$ \psi_{n-1} \circ P^{\circ k} = {\widehat{d}} \circ \psi_n \qquad \text{on} \ \Gamma_n. 
$$ Using \eqref{glue} and \eqref{php}, we obtain the relation $$ \Pi_{n-1} \circ {\widehat{D}}^{\circ k} = {\widehat{d}} \circ \Pi_n \qquad \text{on} \ I_n, $$ which can be visualized as the infinite commutative diagram \begin{equation}\label{projn} \begin{tikzcd}[column sep=small] \cdots \arrow[rr,"{\widehat{D}}^{\circ k}"] & & I_{n+1} \arrow[d,"\Pi_{n+1}"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & I_n \arrow[d,"\Pi_n"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & \arrow[rr,"{\widehat{D}}^{\circ k}"] \cdots & & I_1 \arrow[d,"\Pi_1"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & I_0 \arrow[d,"\Pi_0"] \\ \cdots \arrow[rr,"{\widehat{d}}"] & & {\mathbb{T}} \arrow[rr,"{\widehat{d}}"] & & {\mathbb{T}} \arrow[rr,"{\widehat{d}}"] & & \arrow[rr,"{\widehat{d}}"] \cdots & & {\mathbb{T}} \arrow[rr,"{\widehat{d}}"] & & {\mathbb{T}} \end{tikzcd} \end{equation} Note that this implies $$ \frac{|I_{n+1}|}{|I_n|}=\frac{d}{D^k} \qquad \text{for all} \ n \geq 0, $$ which in particular shows $|I_n| \to 0$ and therefore $|I|=0$. More precisely, we have the following \begin{theorem} The Hausdorff dimension of the set $I=I_K$ is at most $\log d /(k \log D)$. \end{theorem} \begin{proof} Let $B_n$ be the image of the set of boundary points of $I_n$ under the surjection $\Pi_n$. By \eqref{projn}, ${\widehat{d}}^{-1}(B_n) \subset B_{n+1}$. Moreover, by \remref{excep} any point in $B_{n+1} \smallsetminus {\widehat{d}}^{-1}(B_n)$ must belong to the finite set $\bigcup_{n=0}^{k-1} {\widehat{D}}^{-n}({\mathcal N}_0)$. This shows that the cardinality of $B_n$, i.e., the number of connected components of $I_n$, grows asymptotically as $d^n$. Evidently the connected components of $I_n$ have length $\leq \text{const.} \, D^{-nk}$. Thus the lower box dimension of $I$ is at most $$ \lim_{n \to \infty} \frac{\text{const.} + n \log d}{\text{const.} + nk \log D} = \frac{\log d}{k \log D}. $$ The result follows since the Hausdorff dimension is bounded above by the lower box dimension. 
\end{proof} \begin{lemma}\label{dto1} For every $\theta \in I$ there are $d$ distinct angles $\theta_1, \ldots, \theta_d$ in ${\widehat{D}}^{-k}(\theta) \cap I$ such that $\Pi_n$ is injective on $\{ \theta_1, \ldots, \theta_d \}$ for every $n$. \end{lemma} \begin{proof} Take any $n \geq 0$. First suppose $\theta$ is an interior point of $I_n$. Then by \eqref{projn}, $$ {\widehat{D}}^{-k}(\theta) \cap I_{n+1} = \Pi_{n+1}^{-1} ({\widehat{d}}^{-1}(\Pi_n(\theta))). $$ Under $\Pi_{n+1}$, every point in the $d$-element set ${\widehat{d}}^{-1}(\Pi_n(\theta))$ has either one preimage in the interior of $I_{n+1}$, or two preimages which are endpoints of a gap of $I_{n+1}$. By removing one of these endpoints in every such pair, we obtain a $d$-element set in ${\widehat{D}}^{-k}(\theta) \cap I_{n+1}$ on which $\Pi_{n+1}$ is injective. \vspace{6pt} Now suppose $\theta$ is a right endpoint of a gap of $I_n$ (the case of a left endpoint is similar). Let $\theta'$ be the left endpoint of the same gap so $\Pi_n(\theta)=\Pi_n(\theta')$. By \lemref{isk}, all elements of ${\widehat{D}}^{-k}(\theta) \cap I_{n+1}$ are right endpoints of gaps in $I_{n+1}$ and all elements of ${\widehat{D}}^{-k}(\theta') \cap I_{n+1}$ are left endpoints of gaps in $I_{n+1}$. By \eqref{projn}, $$ ({\widehat{D}}^{-k}(\theta) \cup {\widehat{D}}^{-k}(\theta')) \cap I_{n+1} = \Pi_{n+1}^{-1} ({\widehat{d}}^{-1}(\Pi_n(\theta))). $$ This shows that under $\Pi_{n+1}$, every point in the $d$-element set ${\widehat{d}}^{-1}(\Pi_n(\theta))$ has two preimages, one in ${\widehat{D}}^{-k}(\theta) \cap I_{n+1}$ and the other in ${\widehat{D}}^{-k}(\theta') \cap I_{n+1}$. In particular, ${\widehat{D}}^{-k}(\theta) \cap I_{n+1}$ consists of exactly $d$ elements and $\Pi_{n+1}$ is injective on it. \vspace{6pt} The proof of the lemma is now straightforward. By what we just showed, for each $n$ there is a $d$-element set $X_n \subset {\widehat{D}}^{-k}(\theta) \cap I_n$ on which $\Pi_n$ is injective. 
Since there are only finitely many $d$-element subsets of ${\widehat{D}}^{-k}(\theta)$ on the circle, some set $X$ must occur infinitely often in the sequence $\{ X_n \}$. Evidently $X$ is contained in ${\widehat{D}}^{-k}(\theta) \cap I$ and $\Pi_n$ is injective on it for infinitely many $n$. To see that every $\Pi_n$ is injective on $X$, it suffices to observe that injectivity of $\Pi_{n+1}|_X$ implies injectivity of $\Pi_n|_X$. In fact, if $\theta_1, \theta_2$ are distinct points in $X$ with $\Pi_n(\theta_1)=\Pi_n(\theta_2)$, then $\theta_1, \theta_2$ are the endpoints of a gap of $I_n$, and since they both belong to $I$, they must be the endpoints of the same gap of $I_{n+1}$, which shows $\Pi_{n+1}(\theta_1)=\Pi_{n+1}(\theta_2)$. \end{proof} For $n \geq 0$ consider the set $Z_n:={\widehat{d}}^{-n}(0)$ which consists of $d^n$ equally spaced rational points of the form $i/d^n \ (\operatorname{mod} 1)$. Clearly, $Z_0=\{ 0 \} \subset Z_1 \subset Z_2 \subset \cdots$. \begin{lemma}\label{Cn} There is an increasing sequence of finite sets $$ C_0=\{ \theta_0 \} \subset C_1 \subset C_2 \subset \cdots \subset I $$ such that for $n \geq 1$, \vspace{6pt} \begin{enumerate} \item[(i)] $\Pi_j$ is injective on $C_n$ for every $j$; \vspace{6pt} \item[(ii)] ${\widehat{D}}^{\circ k}(C_n)=C_{n-1}$; \vspace{6pt} \item[(iii)] $\Pi_n(C_n)=Z_n$. \end{enumerate} \end{lemma} Note that by (iii), $C_n$ has $d^n$ elements and $\Pi_n: C_n \to Z_n$ is an order-preserving bijection. \begin{proof} We construct $\{ C_n \}$ inductively as follows. Set $C_0:= \{ \theta_0 \}$ and apply \lemref{dto1} to find a $d$-element set $C_1 \subset {\widehat{D}}^{-k}(\theta_0) \cap I$ containing $\theta_0$ on which every $\Pi_j$ is injective. Note that $\Pi_1(C_1) \subset Z_1$ by \eqref{projn}, hence $\Pi_1(C_1)=Z_1$ as both sets have $d$ elements. Suppose now that we have constructed the sets $C_0 \subset \cdots \subset C_m$ in $I$ which satisfy the conditions (i)-(iii) for all $1 \leq n \leq m$. 
For each $\theta \in C_m \smallsetminus C_{m-1}$ apply \lemref{dto1} to obtain a $d$-element set $X_\theta \subset {\widehat{D}}^{-k}(\theta) \cap I$ on which every $\Pi_j$ is injective. Define $$ C_{m+1} := C_m \cup \bigcup_{\theta \in C_m \smallsetminus C_{m-1}} X_\theta. $$ Evidently ${\widehat{D}}^{\circ k}(C_{m+1})=C_m \subset C_{m+1}$. As the sets in the above union are disjoint, we see that $C_{m+1}$ has $d^{m+1}$ elements. By \eqref{projn} and the induction hypothesis, every $\Pi_j$ is injective on $C_{m+1}$, and $\Pi_{m+1}(C_{m+1}) \subset Z_{m+1}$. It follows that $\Pi_{m+1}(C_{m+1})=Z_{m+1}$, as both sets have $d^{m+1}$ elements. This completes the induction step. \end{proof} As a consequence of the above lemma, we obtain the infinite commutative diagram $$ \begin{tikzcd}[column sep=small] \cdots \arrow[rr,"{\widehat{D}}^{\circ k}"] & & C_{n+1} \arrow[d,"\Pi_{n+1}"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & C_n \arrow[d,"\Pi_n"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & \arrow[rr,"{\widehat{D}}^{\circ k}"] \cdots & & C_1 \arrow[d,"\Pi_1"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & C_0 \arrow[d,"\Pi_0"] \\ \cdots \arrow[rr,"{\widehat{d}}"] & & Z_{n+1} \arrow[rr,"{\widehat{d}}"] & & Z_n \arrow[rr,"{\widehat{d}}"] & & \arrow[rr,"{\widehat{d}}"] \cdots & & Z_1 \arrow[rr,"{\widehat{d}}"] & & Z_0 \end{tikzcd} $$ in which every vertical arrow is an order-preserving bijection. Chasing around this commutative diagram shows that for all integers $j, \ell \geq 0$, $$ \Pi_{j+\ell}(C_j)=Z_j. $$ The proof is a straightforward induction on $j$ for each fixed $\ell$. It follows that for each $j \geq 0$, \begin{equation}\label{stable} \Pi_n(\theta) = \Pi_j(\theta) \qquad \text{whenever} \ \theta \in C_j \ \text{and} \ n \geq j. \end{equation} It is now easy to construct the semiconjugacy $\Pi$ of \thmref{A}. Given $\theta \in I$, find adjacent points $x,y \in C_j$ such that $\theta \in [x,y]$. 
Then \eqref{stable} and the monotonicity of the projections show that if $n>m \geq j$, $$ \Pi_n(\theta)-\Pi_m(\theta) \leq \Pi_n(y) - \Pi_m(x) = \Pi_j(y)-\Pi_j(x) = \frac{1}{d^j} $$ and $$ \Pi_n(\theta)-\Pi_m(\theta) \geq \Pi_n(x) - \Pi_m(y) = \Pi_j(x)-\Pi_j(y) = -\frac{1}{d^j}. $$ Since $1/d^j \to 0$ as $j \to \infty$, we conclude that the sequence $\{ \Pi_n \}$ converges uniformly on the compact set $I$ to a degree $1$ monotone surjection {\mapfromto \Pi I {\mathbb{T}}} which by \eqref{projn} semiconjugates {\mapfromto {{\widehat{D}}^{\circ k}} I I} to {\mapfromto {\widehat{d}} {\mathbb{T}} {\mathbb{T}}}. \subsection{Uniqueness of $\Pi$}\label{uniqq} To finish the proof of \thmref{A}, it remains to show that the semiconjugacy $\Pi$ constructed above is unique up to postcomposition with a rotation $\tau \mapsto \tau + j/(d-1) \ (\operatorname{mod} 1)$. We will prove this by first showing that the degree $1$ monotone extension of $\Pi$ semiconjugates a degree $d$ monotone extension of ${\widehat{D}}^{\circ k}|_I$ to ${\widehat{d}}$ (\lemref{sem}), and then invoking the well-known fact that such global semiconjugacies are unique up to a rotation (\corref{juju}). \vspace{6pt} Let us call a gap $J$ of the compact set $I$ {\it \bfseries minor} if $|J| < 1/D^k$ and {\it \bfseries major} if $|J| \geq 1/D^k$. The distinction depends on whether or not ${\widehat{D}}^{\circ k}$ acts homeomorphically on the closure of $J$. The {\it \bfseries multiplicity} of $J$ is the integer part of $D^k |J|$, that is, the number of times ${\widehat{D}}^{\circ k}$ wraps $J$ fully around the circle. A gap $J$ is {\it \bfseries taut} if $D^k |J|$ is an integer and {\it \bfseries loose} otherwise. Thus, minor gaps are always loose and have multiplicity $0$. \begin{lemma}\label{gapmap} Suppose $]a,b[$ is a gap of $I$. Then either ${\widehat{D}}^{\circ k}(a)={\widehat{D}}^{\circ k}(b)$ or $]{\widehat{D}}^{\circ k}(a),{\widehat{D}}^{\circ k}(b)[$ is a gap of $I$. 
\end{lemma} \begin{proof} We first prove a version of the claim for positive potentials: Suppose $]\theta_1, \theta_2[$ is a gap of $I_s$ for some $0<s<s^{\ast}$. Set $s':=D^k s$ and consider $\theta'_i:={\widehat{D}}^{\circ k}(\theta_i) \in I_{s'}$ for $i=1,2$. The ray pair $R^-_{\theta_1}, R^+_{\theta_2}$ crash into a critical point $\omega$ with potential $G(\omega) \geq s$, so the image rays $P^{\circ k}(R^-_{\theta_1}), P^{\circ k}(R^+_{\theta_2})$ have a common point $\omega':=P^{\circ k}(\omega)$. These rays are necessarily broken if $\theta'_1 \neq \theta'_2$, since two distinct smooth rays, or a smooth and a broken ray, can never meet. Thus $P^{\circ k}(R^-_{\theta_1})=R^-_{\theta'_1}$ and $P^{\circ k}(R^+_{\theta_2})=R^+_{\theta'_2}$ must crash into a critical point with potential $\geq G(\omega') = D^k G(\omega) \geq s'$. This shows that $R^-_{\theta'_1}, R^+_{\theta'_2}$ is a ray pair in $I_{s'}$, or equivalently $]\theta'_1, \theta'_2[$ is a gap of $I_{s'}$. \vspace{6pt} Now consider a gap $]a,b[$ of $I$ such that $a':={\widehat{D}}^{\circ k}(a)$ and $b':={\widehat{D}}^{\circ k}(b)$ are distinct. Working with the sequence of potentials $s_n=D^{-nk}s_0$ as before, there is an integer $n_1 \geq 0$ and an increasing sequence $ \{ ]a_n,b_n[ \}_{n \geq n_1}$ such that $]a_n,b_n[$ is a gap of $I_n$ and $\bigcup_{n \geq n_1} ]a_n,b_n[ = ]a,b[$. We may assume that $a'_n:={\widehat{D}}^{\circ k}(a_n)$ and $b'_n:={\widehat{D}}^{\circ k}(b_n)$ are distinct. Then, by the positive potential case treated above, $]a'_n,b'_n[$ is a gap of $I_{n-1}$. Since there is an integer $n_2 \geq n_1$ such that the sequence $\{ ]a'_n,b'_n[ \}_{n \geq n_2}$ is increasing and $\bigcup_{n \geq n_2} ]a'_n,b'_n[ = ]a',b'[$, we conclude that $]a',b'[ \, \cap I = \emptyset$. Now $a',b' \in I$ implies that $]a',b'[$ is a gap of $I$. 
\end{proof} We can extend the restriction ${\widehat{D}}^{\circ k}|_I$ to a continuous monotone map $f: {\mathbb{T}} \to {\mathbb{T}}$ of {\it minimal} degree by sending each gap $]a,b[$ of $I$ homeomorphically onto the gap $]{\widehat{D}}^{\circ k}(a),{\widehat{D}}^{\circ k}(b)[$ if ${\widehat{D}}^{\circ k}(a) \neq {\widehat{D}}^{\circ k}(b)$, and to the point ${\widehat{D}}^{\circ k}(a)={\widehat{D}}^{\circ k}(b)$ otherwise. Thus, the image $f(J)$ is a gap or a single point in $I$ according as $J$ is a loose or taut gap. \begin{lemma}\label{sem} Let $\xi: I \to {\mathbb{T}}$ be any semiconjugacy between ${\widehat{D}}^{\circ k}|_I$ and ${\widehat{d}}$. Then the degree $1$ monotone extension $\xi: {\mathbb{T}} \to {\mathbb{T}}$ is a semiconjugacy between $f$ and ${\widehat{d}}$: $$ \begin{tikzcd}[column sep=small] {\mathbb{T}} \arrow[d,swap,"\xi"] \arrow[rr,"f"] & & {\mathbb{T}} \arrow[d,"\xi"] \\ {\mathbb{T}} \arrow[rr,"{\widehat{d}}"] & & {\mathbb{T}} \end{tikzcd} $$ In particular, $f$ is a monotone map of degree $d$. \end{lemma} \begin{proof} Evidently $\xi$ is constant on each gap of $I$. If $J=]a,b[$ is a taut gap, then $f={\widehat{D}}^{\circ k}(a)$ in $J$, so $\xi \circ f = \xi({\widehat{D}}^{\circ k}(a)) = {\widehat{d}}(\xi(a))= {\widehat{d}} \circ \xi$ in $J$. If $J$ is a loose gap, then $f(J)=]{\widehat{D}}^{\circ k}(a),{\widehat{D}}^{\circ k}(b)[$ is a gap by \lemref{gapmap}, so $\xi$ takes the constant value $\xi({\widehat{D}}^{\circ k}(a))$ on it, and the relation $\xi \circ f = {\widehat{d}} \circ \xi$ in $J$ follows similarly. \end{proof} \begin{corollary}\label{gapcount} There are precisely $D^k-d$ major gaps in $I$ counting multiplicities. \end{corollary} Compare \cite{BBM} and \cite{Z} for the similar case of ``rotation sets'' where $d=1$. \begin{proof} Let $\{ J_i \}$ denote the countable collection of gaps of $I$ of multiplicities $\{ m_i \}$. 
We have $\sum_i |J_i| = 1$ since $I$ has measure zero, hence $\sum_i |f(J_i)| =d$ since $f$ has degree $d$. The definition of multiplicity shows that $|f(J_i)|=D^k |J_i| - m_i$ for each $i$. It follows that $D^k \sum_i |J_i| - \sum_i m_i =d$, or $\sum_i m_i = D^k-d$, as required. \end{proof} \begin{corollary}\label{juju} If $\xi_1, \xi_2 : I \to {\mathbb{T}}$ are semiconjugacies between ${\widehat{D}}^{\circ k}|_I$ and ${\widehat{d}}$, there is an integer $j$ such that $\xi_1 = \xi_2 + j/(d-1) \ (\operatorname{mod} 1)$. \end{corollary} \begin{proof} Consider a fixed point $\theta_0= {\widehat{D}}^{\circ k}(\theta_0) \in I$ as in \S \ref{consp}. The images $\xi_i(\theta_0)$ belong to the set $$ \Big\{ \frac{0}{d-1}, \frac{1}{d-1}, \ldots, \frac{d-2}{d-1} \Big\} \ (\operatorname{mod} 1) $$ of fixed points of ${\widehat{d}}$. Since the rotation $\tau \mapsto \tau + 1/(d-1) \ (\operatorname{mod} 1)$ commutes with ${\widehat{d}}$, it suffices to show that if $\xi_1(\theta_0)=\xi_2(\theta_0)=0$, then $\xi_1=\xi_2$ everywhere. By \lemref{sem}, the degree $1$ monotone extensions $\xi_i: {\mathbb{T}} \to {\mathbb{T}}$ are semiconjugacies between $f$ and ${\widehat{d}}$. This will easily imply $\xi_1=\xi_2$ as follows (compare \cite{KH}). Let $x_0 \in {\mathbb{R}}$ be a representative of $\theta_0 \in {\mathbb{T}}$ and $F: {\mathbb{R}} \to {\mathbb{R}}$ be the unique lift of $f$ such that $F(x_0)=x_0$. If $\Xi_i: {\mathbb{R}} \to {\mathbb{R}}$ is the unique lift of $\xi_i$ with $\Xi_i(x_0)=0$, then $\Xi_i \circ F = d\, \Xi_i$. This means the $\Xi_i$ are the fixed points of the map $\Xi \mapsto (\Xi \circ F)/d$ acting on the complete metric space of continuous functions ${\mathbb{R}} \to {\mathbb{R}}$ that commute with $x \mapsto x+1$, equipped with the uniform metric $\d(\Xi,\Xi')=\sup_{x \in {\mathbb{R}}} |\Xi(x)-\Xi'(x)|$. This map is clearly contracting by a factor $1/d<1$, hence it has a unique fixed point. We conclude that $\Xi_1=\Xi_2$ or $\xi_1=\xi_2$. 
\end{proof} \section{Proof of Theorem \ref{B}}\label{sec:pfsb} Throughout this section we will adopt the following notations: \vspace{6pt} \begin{enumerate}[leftmargin=*] \item[$\bullet$] For $\theta \in I$, we denote by $R^P_\theta$ the unique external ray of $P$ at angle $\theta$ that accumulates on $\partial K$. Thus, $R^P_\theta$ is the smooth ray $R_\theta$ if $\theta \in I \smallsetminus {\mathcal N}$, and it is one of the broken rays $R^{\pm}_{\theta}$ if $\theta \in I \cap {\mathcal N}$. \vspace{6pt} \item[$\bullet$] $L_\theta$ is the radial line in $\mathbb{C} \smallsetminus \overline{\mathbb{D}}$ at angle $\theta \in {\mathbb{T}}$: $$ L_\theta : = \{ r \ensuremath{{\operatorname{e}}}^{2 \pi i \theta} : r>1 \}. $$ \item[$\bullet$] $Q_d: \mathbb{C} \to \mathbb{C}$ is the $d$-th power map $z \mapsto z^d$. \vspace{6pt} \item[$\bullet$] $\d_X$ is the distance in the hyperbolic metric of a domain $X \subset \widehat{\mathbb{C}}$ whose complement has at least three points. \vspace{6pt} \end{enumerate} \subsection{A reduction} We begin by reducing \thmref{B} to a statement on the hyperbolic geometry of rays. Suppose $\varphi: U_0 \to \varphi(U_0)$ is a hybrid conjugacy between $P^{\circ k}:U_1 \to U_0$ and a degree $d$ polynomial $Q$ which we may assume to be monic. In order to prove \thmref{B}, it suffices to show that there is a choice of the semiconjugacy $\Pi$ such that for every $\theta \in I$ the arc $\varphi(R^P_{\theta} \cap U_1)$ and the ray segment $R^Q_{\Pi(\theta)} \cap \varphi(U_1)$ have finite Hausdorff distance in the hyperbolic metric of $\varphi(U_0) \smallsetminus K_Q$.\footnote{Recall that the Hausdorff distance between two closed sets in a metric space is the infimum of the set of $\delta>0$ such that each set is contained in the $\delta$-neighborhood of the other. 
If there is no such $\delta$ the Hausdorff distance is defined to be $+\infty$.} This is because a ball of fixed radius in this metric has shrinking Euclidean diameter as its center tends to $\partial K_Q$. Use the B\"{o}ttcher coordinate $\frak{B}_Q: \mathbb{C} \smallsetminus K_Q \to \mathbb{C} \smallsetminus \overline{\mathbb{D}}$ to form the composition $\Phi:=\frak{B}_Q \circ \varphi : U_0 \smallsetminus K \to \Phi(U_0 \smallsetminus K)$ which is a quasiconformal conjugacy between $P^{\circ k}: U_1 \smallsetminus K \to U_0 \smallsetminus K$ and $Q_d: \Phi(U_1 \smallsetminus K) \to \Phi(U_0 \smallsetminus K)$. Then the above condition is equivalent to $\Phi(R^P_{\theta} \cap U_1)$ and the radial segment $L_{\Pi(\theta)} \cap \Phi(U_1 \smallsetminus K)$ having finite Hausdorff distance in the hyperbolic metric of $\Phi(U_0 \smallsetminus K)$. Thus, \thmref{B} will follow from the following \begin{theorem}\label{B'} Let $U_0, U_1$ (resp. $U'_0, U'_1$) be Jordan domains containing $K$ (resp. $\overline{\mathbb{D}}$) such that $\overline{U_1} \subset U_0$ (resp. $\overline{U'_1} \subset U'_0$). Set $\Omega_i:=U_i \smallsetminus K$ and $\Omega'_i:=U'_i \smallsetminus \overline{\mathbb{D}}$ for $i=0,1$. Suppose $\Phi: \Omega_0 \to \Omega'_0$ is a quasiconformal conjugacy between $P^{\circ k} : \Omega_1 \to \Omega_0$ and the $d$-th power map $Q_d: \Omega'_1 \to \Omega'_0$. Then there is a choice of the semiconjugacy $\Pi: I \to {\mathbb{T}}$ of \thmref{A} such that for each $\theta \in I$ the arc $\Phi(R^P_{\theta} \cap \Omega_1)$ and the radial segment $L_{\Pi(\theta)} \cap \Omega'_1$ have finite Hausdorff distance in the hyperbolic metric of $\Omega'_0$. \end{theorem} In particular, the arc $\Phi(R^P_{\theta} \cap \Omega_1)$ lands at the point $\exp(2 \pi i \Pi(\theta))$ on the unit circle. 
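The reduction above rests on the fact that a hyperbolic ball of fixed radius has shrinking Euclidean diameter as its center approaches the boundary. This can be made concrete in the model case of the exterior of the closed unit disk, where the hyperbolic density is $1/(|z| \log |z|)$ (pull back the punctured-disk metric under $z \mapsto 1/z$). The sketch below is only an illustration of this model computation; the function names and the sample radii are ours, not part of the construction.

```python
import math

def hyp_dist_radial(r1, r2):
    """Hyperbolic distance between the points r1 and r2 (> 1) along a radius
    of C minus the closed unit disk, using the density 1/(|z| log|z|):
    the integral of dt/(t log t) is log log r2 - log log r1."""
    return abs(math.log(math.log(r2)) - math.log(math.log(r1)))

def radial_ball_endpoints(r, delta):
    """Radial endpoints of the hyperbolic ball of radius delta centered at r:
    solving |log log r' - log log r| = delta gives r' = r ** exp(+/- delta)."""
    lo = math.exp(math.log(r) * math.exp(-delta))
    hi = math.exp(math.log(r) * math.exp(delta))
    return lo, hi

# The Euclidean extent of a ball of fixed hyperbolic radius shrinks
# as its center r tends to the unit circle:
delta = 1.0
for r in [2.0, 1.1, 1.01, 1.001]:
    lo, hi = radial_ball_endpoints(r, delta)
    print(r, hi - lo)
```

Writing $r = 1 + \epsilon$, the radial ball spans $[r^{e^{-\delta}}, r^{e^{\delta}}]$, of Euclidean length about $(e^{\delta} - e^{-\delta})\,\epsilon$; this is the quantitative content behind passing from bounded hyperbolic Hausdorff distance to landing.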
\vspace{6pt} The following lemma shows that it suffices to prove \thmref{B'} for {\it some} quasiconformal conjugacy: \begin{lemma}\label{immat} If \thmref{B'} holds for one quasiconformal conjugacy between the restriction of $P^{\circ k}$ and $Q_d$, then it holds for any other. \end{lemma} \begin{proof} Suppose \thmref{B'} holds for $\Phi_1: \Omega_0 \to \Omega'_0$ and a corresponding semiconjugacy $\Pi_1: I \to {\mathbb{T}}$. Let $\Phi_2$ be another such quasiconformal conjugacy which we may assume has the same domain $\Omega_0$. The composition $$ \Psi := \Phi_2 \circ \Phi_1^{-1}: \Omega'_0 \to \Omega''_0:=\Phi_2(\Omega_0) $$ is a quasiconformal homeomorphism with $|\Psi(z)| \to 1$ as $|z| \to 1$, and therefore it extends to a homeomorphism of the unit circle. Moreover, since $\Psi$ commutes with $Q_d$, its extension to the unit circle is a rational rotation of the form $S: z \mapsto \ensuremath{{\operatorname{e}}}^{2\pi i j/(d-1)} z$. First consider the case where $S$ is the identity map. If $A$ is an annulus of the form $\{ z: 1<|z|<r \}$ with $r$ sufficiently close to $1$, it follows that \begin{equation}\label{disM} \d_{\Omega''_0}(\Psi(z),z) \leq M \qquad \text{for all} \ z \in A, \end{equation} where $M>0$ depends only on the maximal dilatation of $\Psi$. By the assumption, for every $\theta \in I$ the arcs $\Phi_1(R^P_{\theta}) \cap A$ and $L_{\Pi_1(\theta)} \cap A$ have finite Hausdorff distance in $\Omega'_0$. It follows from \eqref{disM} that $\Phi_2(R^P_{\theta}) \cap A$ and $L_{\Pi_1(\theta)} \cap A$ have finite Hausdorff distance in $\Omega''_0$. Thus, \thmref{B'} holds for $\Phi_2$ and the choice $\Pi_1$. \vspace{6pt} If $S$ is not the identity map, we can run the above argument for $S^{-1} \circ \Phi_2$ to conclude that \thmref{B'} holds for $\Phi_2$ and the choice $\Pi_2=\Pi_1+j/(d-1)$. \end{proof} \subsection{\thmref{B'} and its corollaries} The proof of \thmref{B'} begins as follows. 
Take the polynomial-like restriction $P^{\circ k}: U_1 \to U_0$, where $U_1=V_{D^{-k}s_0}$ and $U_0=V_{s_0}$ for a sufficiently small $s_0>0$ in the notation of \S \ref{conI}. We may choose the quasiconformal conjugacy $\Phi$ such that the images $\Omega'_i=\Phi(\Omega_i)$ for $i=0,1$ are round annuli of the form $\Omega'_0=\{ z: 0< \log |z|< r_0 \}$ and $\Omega'_1=\{ z: 0< \log |z|< d^{-1}r_0 \}$ for some $r_0>0$. Set \begin{align*} \Omega_n & := V_{D^{-nk}s_0} \smallsetminus K \\ \Omega'_n & := \Phi(\Omega_n)=Q_d^{-n}(\Omega'_0)=\{ z: 0< \log |z| < d^{-n}r_0 \}. \end{align*} The topological annuli $\{ \Omega_n \}$ and the round annuli $\{ \Omega'_n \}$ are nested and the difference sets $$ A_n:= \overline{\Omega}_n \smallsetminus \Omega_{n+1} \qquad \text{and} \qquad A'_n:= \overline{\Omega'}_n \smallsetminus \Omega'_{n+1} $$ are closed annuli. The maps $P^{\circ k}: A_{n+1} \to A_n$ and $Q_d: A'_{n+1} \to A'_n$ are degree $d$ regular coverings for every $n \geq 0$. \vspace{6pt} If $\theta_0 \in I$ is a fixed point of ${\widehat{D}}^{\circ k}$ given by \corref{fps}, we may choose $\Phi$ such that $\Phi(R_{\theta_0}^P \cap A_0) = L_0 \cap A'_0$. Since $\Phi$ conjugates $P^{\circ k}: \Omega_1 \to \Omega_0$ to $Q_d: \Omega'_1 \to \Omega'_0$, it follows inductively that $\Phi (R_{\theta_0}^P \cap A_n) = L_0 \cap A'_n$ for all $n \geq 0$ and therefore \begin{equation}\label{r00} \Phi(R_{\theta_0}^P \cap \Omega_0) = L_0 \cap \Omega'_0. \end{equation} By \thmref{A} there is a unique semiconjugacy $\Pi: I \to {\mathbb{T}}$ between ${\widehat{D}}^{\circ k}|_I$ and ${\widehat{d}}$ normalized so that $\Pi(\theta_0)=0$. Let $\{ C_n \}$ be the sequence of iterated preimages of $\theta_0$ in $I$ constructed in \lemref{Cn}. \begin{lemma}\label{d-adic} For every $n \geq 0$ and every $\theta \in C_n$, $$ \Phi(R_\theta^P \cap \Omega_n) = L_{\Pi(\theta)} \cap \Omega'_n. 
$$ \end{lemma} \begin{proof} Since ${\widehat{D}}^{\circ nk}(\theta)=\theta_0$, the ray segment $R_\theta^P \cap \Omega_n$ maps under $P^{\circ nk}$ to $R_{\theta_0}^P \cap \Omega_0$. It follows from \eqref{r00} that the arc $\Phi(R_\theta^P \cap \Omega_n)$ maps under $Q_d^{\circ n}$ to $L_0 \cap \Omega'_0$. Thus, $\Phi(R_\theta^P \cap \Omega_n)=L_{\tau} \cap \Omega'_n$ for some $\tau \in Z_n = {\widehat{d}}^{-n}(0)$. Evidently the assignment $\theta \mapsto \tau$ is an order-preserving bijection $C_n \to Z_n$ which by \eqref{r00} sends $\theta_0$ to $0$. Thus, by \lemref{Cn}, $\tau=\Pi_n(\theta)$. Finally, since $\theta \in C_n$, \eqref{stable} shows that $\Pi_n(\theta)=\Pi(\theta)$. \end{proof} Now consider the open topological disks $$ \Delta := \mathring{A}_0 \smallsetminus R_{\theta_0}^P \qquad \text{and} \qquad \Delta' := \Phi(\Delta) = \mathring{A}'_0 \smallsetminus L_0. $$ Observe that there are $d^n$ connected components of $Q_d^{-n}(\Delta')$ in $A'_n$ which are separated by the radial lines $L_\tau$ for $\tau \in Z_n$. Similarly, there are $d^n$ connected components of $P^{-nk}(\Delta)$ in $A_n$ which, in view of \lemref{d-adic}, are separated by the rays $R_\theta^P$ for $\theta \in C_n$. \begin{lemma}\label{samepiece} For every $\theta \in I$ and every $n \geq 0$ there is a connected component of $Q_d^{-n}(\Delta')$ whose closure contains both arcs $$ \Phi(R_\theta^P \cap A_n) \qquad \textrm{and} \qquad L_{\Pi(\theta)} \cap A'_n. $$ \end{lemma} \begin{proof} Take adjacent angles $x,y \in C_n$ such that $\theta \in [x,y[$; then $\Pi(\theta) \in [\Pi(x),\Pi(y)]$ by monotonicity. Let $\Delta_n$ be the unique connected component of $P^{-nk}(\Delta)$ whose closure contains both $R_x^P \cap A_n$ and $R_y^P \cap A_n$, and therefore $R_\theta^P \cap A_n$. It follows that the image $\Delta'_n=\Phi(\Delta_n)$ is a connected component of $Q_d^{-n}(\Delta')$ whose closure contains $\Phi(R_{\theta}^P \cap A_n)$. 
By \lemref{d-adic}, the closure of $\Delta'_n$ contains both $L_{\Pi(x)} \cap A'_n$ and $L_{\Pi(y)} \cap A'_n$, and therefore $L_{\Pi(\theta)} \cap A'_n$. \end{proof} It is now easy to finish the proof of \thmref{B'}. For $n \geq 0$, let $\rho_n$ denote the hyperbolic metric of the annulus $\Omega'_n$, so $\rho_0 < \rho_n$ in $\Omega'_n$ by the Schwarz lemma. Choose $\delta>0$ so that the $d$ connected components of $Q_d^{-1}(\Delta')$ have diameter $< \delta$ in the metric $\rho_0$. Then any component of $Q_d^{-n}(\Delta')$ for $n \geq 2$ also has diameter $<\delta$ in the metric $\rho_0$ since the regular covering $Q_d^{\circ n}: (\Omega'_n, \rho_n) \to (\Omega'_0, \rho_0)$ is a local isometry and therefore $Q_d^{\circ n}: (\Omega'_n, \rho_0) \to (\Omega'_0, \rho_0)$ is expanding. It follows from \lemref{samepiece} that for each $\theta \in I$ the arcs $\Phi(R_\theta^P \cap \Omega_1)$ and $L_{\Pi(\theta)} \cap \Omega'_1$ are contained in the $\delta$-neighborhood of each other in the metric $\rho_0$, and therefore have Hausdorff distance $\leq \delta$ in $\Omega'_0$. \qed \begin{corollary}\label{samefiber} The following conditions on $\theta, \theta' \in I$ are equivalent: \vspace{6pt} \begin{enumerate} \item[(i)] $\Pi(\theta)=\Pi(\theta')$. \vspace{6pt} \item[(ii)] $\d_{\mathbb{C} \smallsetminus K}(R^P_\theta(s), R^P_{\theta'}(s))$ stays bounded as the Green's potential $s$ tends to $0$. \vspace{6pt} \item[(iii)] Under the (unique up to rotation) conformal isomorphism $\zeta: \mathbb{C} \smallsetminus K \to \mathbb{C} \smallsetminus \overline{\mathbb{D}}$, the arcs $\zeta(R^P_\theta)$ and $\zeta(R^P_{\theta'})$ land at the same point of the unit circle. \vspace{6pt} \end{enumerate} If these conditions are satisfied and $R^P_\theta$ lands at $z \in \partial K$, then $R^P_{\theta'}$ also lands at $z$. Moreover, one of the two connected components of $\mathbb{C} \smallsetminus (R^P_{\theta} \cup R^P_{\theta'} \cup \{ z \})$ will be disjoint from $K$. 
\end{corollary} \begin{proof} We will use a quasiconformal conjugacy $\Phi$ and the associated objects, using the notations in the above proof of \thmref{B'}. \vspace{6pt} (i) $\Longrightarrow$ (ii): By \lemref{samepiece}, for every $n \geq 0$ there is a connected component of $Q_d^{-n}(\Delta')$ whose closure contains both $\Phi(R^P_\theta \cap A_n)$ and $\Phi(R^P_{\theta'} \cap A_n)$. It follows that there is a connected component of $P^{-kn}(\Delta)$ in $A_n$ whose closure contains $R^P_\theta \cap A_n$ and $R^P_{\theta'} \cap A_n$. By a similar application of the Schwarz lemma as above, for $n \geq 1$ these components have uniformly bounded diameters in the hyperbolic metric of $\Omega_0$. Thus, there is a $\delta>0$ such that for all $n \geq 1$, $$ \d_{\Omega_0}(R^P_\theta(s),R^P_{\theta'}(s)) < \delta \quad \text{if} \quad D^{-(n+1)k}s_0 \leq s \leq D^{-nk}s_0. $$ This proves $\d_{\Omega_0}(R^P_\theta(s),R^P_{\theta'}(s)) < \delta$ for all $0<s \leq D^{-k}s_0$, and (ii) follows since the hyperbolic metrics of $\mathbb{C} \smallsetminus K$ and $\Omega_0$ are comparable in $\Omega_1$. \vspace{6pt} (ii) $\Longrightarrow$ (iii): Since the conformal isomorphism $\zeta$ is a hyperbolic isometry, $\d_{\mathbb{C} \smallsetminus \overline{\mathbb{D}}}(\zeta(R^P_\theta(s)),\zeta(R^P_{\theta'}(s)))$ stays bounded as $s \to 0$. In particular, $\zeta(R^P_\theta)$ and $\zeta(R^P_{\theta'})$ have the same accumulation sets on $\partial \mathbb{D}$. Thus, we need only check that the arc $\zeta(R^P_\theta)$ lands. To see this, note that the map $$ \xi := \zeta \circ \Phi^{-1} : \Phi(\Omega_0) \to \zeta(\Omega_0) $$ is quasiconformal with $|\xi(z)| \to 1$ as $|z| \to 1$, so it extends to a homeomorphism of the unit circle. Since $\Phi(R^P_\theta \cap \Omega_0)$ lands at $\exp(2 \pi i \Pi(\theta))$ by \thmref{B'}, the arc $\zeta(R^P_\theta \cap \Omega_0)$ lands at $\xi(\exp(2 \pi i \Pi(\theta)))$. 
\vspace{6pt} (iii) $\Longrightarrow$ (i): By the above paragraph, the assumption that $\zeta(R^P_\theta)$ and $\zeta(R^P_{\theta'})$ land at the same point implies $$ \xi(\exp(2 \pi i \Pi(\theta)))=\xi(\exp(2 \pi i \Pi(\theta'))). $$ Since $\xi$ is a homeomorphism of the unit circle, we conclude that $\Pi(\theta)=\Pi(\theta')$. \vspace{6pt} The last two assertions are straightforward consequences of (i) and (iii), respectively. \end{proof} \section{Proof of \thmref{C}}\label{sec:val} Let $\Pi: {\mathbb{T}} \to {\mathbb{T}}$ be the degree $1$ monotone extension of the semiconjugacy between ${\widehat{D}}^{\circ k}|_I$ and ${\widehat{d}}$ constructed in \S \ref{consp}, and $f: {\mathbb{T}} \to {\mathbb{T}}$ be a degree $d$ monotone extension of ${\widehat{D}}^{\circ k}|_I$ as in \S \ref{uniqq}. We construct a (maximal) Cantor set $C \subset I$ whose gaps are precisely the interiors of the non-degenerate fibers of $\Pi$. We will prove \thmref{C} by bounding the number of points of $I$ that lie in a given gap of $C$. \subsection{A Cantor subset of $I$.} For $x \in {\mathbb{T}}$, let $H_x$ denote the fiber $\Pi^{-1}(x)$. The semiconjugacy relation $\Pi \circ f = {\widehat{d}} \circ \Pi$ (\lemref{sem}) and monotonicity of $f$ show that $$ f(H_y) = H_x \qquad \text{if} \ x={\widehat{d}}(y) $$ and \begin{equation}\label{aayy} f^{-1}(H_x) = \bigcup_{y \in {\widehat{d}}^{-1}(x)} H_y. \end{equation} Define $$ C := {\mathbb{T}} \smallsetminus \bigcup_{x \in {\mathbb{T}}} \mathring{H}_x. $$ Here each $\mathring{H}_x$, the interior of the fiber $H_x$, is an open interval (possibly empty). Evidently $C$ is a non-empty compact proper subset of the circle. \begin{lemma}\label{maxC} $C$ is a Cantor subset of $I$ with $f(C)=C$. \end{lemma} \begin{proof} Every gap of $I$ is contained in a unique $\mathring{H}_x$ since $\Pi$ is constant on it. This shows $C \subset I$. 
Moreover, $C$ is totally disconnected since $I$ is, and it has no isolated points since distinct $\mathring{H}_x$'s have disjoint closures. This proves that $C$ is a Cantor set. \vspace{6pt} To see the $f$-invariance property of $C$, first suppose $f(\theta) \in \mathring{H}_x$ for some $x$. Then by \eqref{aayy} there is a $y \in {\widehat{d}}^{-1}(x)$ such that $\theta \in \mathring{H}_y$. This proves $f(C) \subset C$. To verify the reverse inclusion, suppose $\theta \in C$ and take any $\theta'$ with $f(\theta')=\theta$. If $\theta' \in C$, then $\theta \in f(C)$ and we are done. Otherwise $\theta' \in \mathring{H}_y$ for some $y$. Setting $x:={\widehat{d}}(y)$, it follows from $f(H_y)=H_x$ that either $H_x = \{ \theta \}$, or $H_x$ is a non-degenerate closed interval having $\theta$ as a boundary point. In either case, monotonicity of $f$ implies that some endpoint of $H_y$ maps to $\theta$, implying $\theta \in f(C)$. This proves $C \subset f(C)$. \end{proof} \begin{remark} It is easy to see that $C$ is the maximal Cantor subset of $I$. In fact, if $X$ is any Cantor subset of $I$ not contained in $C$, then some gap of $C$ would contain uncountably many points of $X$. All such points would have the same image under $\Pi$, which would contradict finiteness of the fibers of $\Pi$ in $I$ (\thmref{C}). Alternatively, $C$ can be characterized as the set $I^{\ast}$ of all ``condensation points'' of $I$, that is, the set of all $\theta \in I$ such that every neighborhood of $\theta$ contains uncountably many points of $I$. This follows from the theorem of Cantor-Bendixson according to which $I^{\ast}$ is perfect and $I \smallsetminus I^{\ast}$ is at most countable. 
\end{remark} The above invariance shows that the commutative diagram \eqref{scj} restricts to the Cantor set $C$: $$ \begin{tikzcd}[column sep=small] C \arrow[d,swap,"\Pi"] \arrow[rr,"{\widehat{D}}^{\circ k}"] & & C \arrow[d,"\Pi"] \\ {\mathbb{T}} \arrow[rr,"{\widehat{d}}"] & & {\mathbb{T}} \end{tikzcd} $$ Since the closures of gaps of $C$ are precisely the non-degenerate fibers of $\Pi$, it follows that the analog of \lemref{gapmap} holds for $C$, that is, for each gap $]a,b[$ of $C$, either ${\widehat{D}}^{\circ k}(a)={\widehat{D}}^{\circ k}(b)$ or $]{\widehat{D}}^{\circ k}(a),{\widehat{D}}^{\circ k}(b)[$ is a gap of $C$. \vspace{6pt} The degree $d$ extension $f$ of ${\widehat{D}}^{\circ k}|_I$ also serves as an extension of ${\widehat{D}}^{\circ k}|_C$. The above diagram shows that for any gap $J_0$ of $C$, the image $f(\overline{J_0})$ is a single point in $C$ if $J_0$ is taut, and it is the closure of a gap $J_1$ of $C$ if $J_0$ is loose. Note that in the latter case $f$ maps the closed interval $\overline{J_0}$ onto the closed interval $\overline{J_1}$ monotonically, but it may fail to map $J_0$ onto $J_1$ homeomorphically (for example a whole subinterval of $J_0$ may map to an endpoint of $J_1$). In practice, it is convenient to ignore this issue and abuse the language slightly by saying that $J_0$ ``maps to'' $J_1$, or that $J_1$ is the ``image'' of $J_0$. \vspace{6pt} Now the same argument as in \corref{gapcount} shows that $C$ has $D^k-d$ major gaps counting multiplicities. Evidently each gap of $I$ is contained in a gap of $C$. For each gap $J_i$ of $C$ of multiplicity $n_i$, let $m_i$ be the number of gaps of $I$ contained in $J_i$ counting multiplicities. Since $\sum_i n_i= \sum_i m_i = D^k-d$ and $0 \leq m_i \leq n_i$, we must have $m_i=n_i$ for all $i$. 
In other words, {\it every major gap of $C$ of multiplicity $n$ contains precisely $n$ major gaps of $I$ counting multiplicities.} \vspace{6pt} Finally, let us observe that every minor gap of $C$ eventually maps to a major gap $J$. If $J$ is taut, the next image is a single point and the gap-orbit terminates. If $J$ is loose, the next image is a new gap (minor or major) and the gap-orbit continues. Since there are only finitely many major gaps, we conclude that {\it every gap of $C$ eventually maps to a taut gap or a periodic gap}. \subsection{Taut gaps of $C$} \begin{lemma}\label{partners} Suppose $a,b \in I$ satisfy $\Pi(a)=\Pi(b)$ and ${\widehat{D}}^{\circ k}(a)={\widehat{D}}^{\circ k}(b)$. Then either $]a,b[$ or $]b,a[$ is a taut gap of $I$. Assuming the former, $R^P_a=R^-_a$ and $R^P_b=R^+_b$ crash into an escaping critical point $\omega$ of $P^{\circ k}$ and their image $P^{\circ k}(R^-_a) = R_{{\widehat{D}}^{\circ k}(a)}=R_{{\widehat{D}}^{\circ k}(b)}=P^{\circ k}(R^+_b)$ is a smooth ray. \end{lemma} Note that ray pairs in $I$ such as the above $R^-_a, R^+_b$ are ``permanent partners'' in the sense that they stay together once they join: $R^-_a(s)=R^+_b(s)$ for all potentials $s \leq G(\omega)$. \begin{proof} Assume by way of contradiction that $R^P_a, R^P_b$ are disjoint. By \corref{samefiber} the distinct points $R^P_a(s),R^P_b(s)$ have bounded hyperbolic distance in $\mathbb{C} \smallsetminus K$ and therefore their Euclidean distance tends to $0$ as $s \to 0$. Let $z \in \partial K$ be any accumulation point of $R^P_a$ and take a sequence of potentials $s_j \to 0$ such that $R^P_a(s_j) \to z$. Then $R^P_b(s_j) \to z$ also. The assumption ${\widehat{D}}^{\circ k}(a)={\widehat{D}}^{\circ k}(b)$ implies that the points $R^P_a(s_j),R^P_b(s_j)$ have the same image under $P^{\circ k}$, so $P^{\circ k}$ is not locally injective near $z$. This shows that the common accumulation set of $R^P_a, R^P_b$ consists only of critical points of $P^{\circ k}$. 
Since the accumulation set of a ray is connected and $P^{\circ k}$ has finitely many critical points, it follows that $R^P_a, R^P_b$ co-land at a critical point $c$ of $P^{\circ k}$ on $\partial K$. This implies that $K$ meets both components of $\mathbb{C} \smallsetminus (R^P_a \cup R^P_b \cup \{ c \})$, contradicting \corref{samefiber}. \vspace{6pt} The preceding paragraph shows that $R^P_a, R^P_b$ crash into a critical point $\omega$ of the Green's function. Assuming $R^P_a = R^-_a$ and $R^P_b=R^+_b$, this implies that $]a,b[$ is a gap of $I$. Moreover, the image rays $P^{\circ k}(R^-_a)$ and $P^{\circ k}(R^+_b)$ both accumulate on $\partial K$ and have the same angle ${\widehat{D}}^{\circ k}(a)={\widehat{D}}^{\circ k}(b)$. Hence the image rays coincide and are smooth, and $\omega$ is in fact a critical point of $P^{\circ k}$. \end{proof} \begin{lemma}\label{tCtI} Taut gaps of $C$ and all their preimages are gaps of $I$. \end{lemma} \begin{proof} Let $J = ]a,b[$ be a gap of $C$ which after $n \geq 1$ iterates maps to a taut gap of $C$. For $i \geq 0$ set $a_i:={\widehat{D}}^{\circ ik}(a), b_i:={\widehat{D}}^{\circ ik}(b)$ and $J_i:=]a_i, b_i[$, so we have the gap-orbit $J_0 \to J_1 \to \cdots \to J_n$ under $f$, with $J_n$ taut. We will show that $J_i \cap I=\emptyset$ for all $0 \leq i \leq n$. \vspace{6pt} By \lemref{partners}, $R^-_{a_n}, R^+_{b_n}$ form a ray pair in $I$, so $J_n \cap I = \emptyset$. Assume by way of contradiction that there is a largest $0 \leq i \leq n-1$ for which $J_i \cap I \neq \emptyset$. Take $\theta \in J_i \cap I$. Then $f(\theta)={\widehat{D}}^{\circ k}(\theta) \in \overline{J_{i+1}} \cap I$, so by the choice of $i$, $f(\theta)$ must be one of the endpoints of $J_{i+1}$, say $a_{i+1}$ (the case $f(\theta)=b_{i+1}$ is similar). 
Since ${\widehat{D}}^{\circ k}(\theta) = {\widehat{D}}^{\circ k}(a_i)=a_{i+1}$ and $\Pi(\theta)=\Pi(a_i)$, \lemref{partners} shows that $R^-_{a_i}, R^+_{\theta}$ form a ray pair in $I$ and their image $R^P_{a_{i+1}}$ is smooth. Applying the iterate $P^{\circ (n-i-1)k}$, it follows that $R^P_{a_n}$ is smooth. This contradicts the fact that $R^P_{a_n}=R^-_{a_n}$ is left broken. \end{proof} \subsection{Periodic gaps of $C$.}\label{pergp} Let us begin by recalling a known relationship between periodic angles in $I$ and periodic points on the boundary of the component $K$. For $z \in \partial K$, it will be convenient to use the notation $$ \La(z):= \{ \theta \in I: R^P_\theta \ \text{lands at} \ z \}. $$ \begin{theorem}\label{perr} \mbox{} \begin{enumerate} \item[(i)] If $\theta \in I$ is periodic under ${\widehat{D}}^{\circ k}$, then $\theta \in \La(z_0)$ for some $z_0 \in \partial K$ which is a repelling or parabolic periodic point under $P^{\circ k}$. \vspace{6pt} \item[(ii)] Conversely, if $z_0 \in \partial K$ is a repelling or parabolic periodic point under $P^{\circ k}$, then $\La(z_0)$ is non-empty and finite. Moreover, all angles in $\La(z_0)$ are periodic with the same period under ${\widehat{D}}^{\circ k}$. \end{enumerate} \end{theorem} Observe that in (ii), periodicity of the rays landing at $z_0$ implies that they are either smooth or infinitely broken (in particular, these rays are pairwise disjoint). Thus, each cycle of rays landing at $z_0$ consists either entirely of smooth rays, or entirely of infinitely broken rays. \vspace{6pt} This result is not new: Part (i), the landing of periodic rays on repelling or parabolic points, follows from a classical application of hyperbolic metrics introduced by Sullivan, Douady and Hubbard. For proofs in the case of connected filled Julia sets see for example \cite[Theorem 18.10]{M1} or \cite[Proposition 2.1 and its Complement]{P}. 
Part (ii) appears in \cite{LP} in a more general setting which also covers the degenerate case $K = \{ z_0 \}$. Here we give an alternative proof based on \thmref{B} and the corresponding statement in the connected case (see also \cite[Corollary B.3]{P}). \begin{proof}[Proof of part (ii)] Let $\varphi$ be a hybrid equivalence between the restriction of $P^{\circ k}$ to a neighborhood of $K$ and a monic degree $d$ polynomial $Q$, and choose the semiconjugacy $\Pi: I \to {\mathbb{T}}$ such that \thmref{B} holds. Let $\ell$ be the period of $z_0$ under $P^{\circ k}$. Then $w_0:=\varphi(z_0) \in \partial K_Q$ has period $\ell$ under $Q$ and is repelling or parabolic since this property is preserved under topological conjugacies. By \cite[Corollary B.1]{P}, the set $\La(w_0):=\{ \tau \in {\mathbb{T}}: R^Q_\tau \ \text{lands at} \ w_0 \}$ is non-empty and finite. By \thmref{B}, $\La(z_0)=\Pi^{-1}(\La(w_0))$ and the following diagram commutes: \begin{equation}\label{cdlala} \begin{tikzcd}[column sep=small] \La(z_0) \arrow[d,swap,"\Pi"] \arrow[rr,"{\widehat{D}}^{\circ k\ell}"] & & \La(z_0) \arrow[d,"\Pi"] \\ \La(w_0) \arrow[rr,"{\widehat{d}}^{\circ \ell}"] & & \La(w_0) \end{tikzcd} \end{equation} Since $\La(z_0)$ is compact and invariant under the expanding map ${\widehat{D}}^{\circ k\ell}$, we conclude that $\La(z_0)$ must also be finite (\cite[Lemma 18.8]{M1}). There is a neighborhood of $z_0$ in which $P^{\circ k\ell}$ is a conformal isomorphism since $(P^{\circ k\ell})'(z_0) \neq 0$. It follows that $P^{\circ k\ell}$ is injective on the set of ``ends'' of the rays that land at $z_0$. Since there are finitely many such ends, they must be permuted by $P^{\circ k\ell}$ and therefore they are all periodic. This of course implies periodicity of the whole rays that land at $z_0$, and proves that each angle in $\La(z_0)$ is periodic under ${\widehat{D}}^{\circ k\ell}$. 
The fact that $P^{\circ k\ell}$ is a conformal isomorphism near $z_0$ implies that it preserves the cyclic order of ray ends landing at $z_0$, so ${\widehat{D}}^{\circ k\ell}$ preserves the cyclic order of angles in $\La(z_0)$. A standard exercise then shows that all angles in $\La(z_0)$ must have the same period. \end{proof} We can say a bit more about the correspondence between rays landing at $z_0$ and those landing at $w_0$. The external rays of $Q$ landing at $w_0$ fall into some number $N \geq 1$ of cycles under $Q^{\circ \ell}$ which have the same length $q \geq 1$ and the same {\it \bfseries combinatorial rotation number} $p/q$ (compare \cite[Corollary B.1]{P}). This means that if we label the angles in $\La(w_0)$ as $0 \leq \tau_1 < \tau_2 < \cdots < \tau_{Nq} <1$, then ${\widehat{d}}^{\circ \ell}$ acts on $\La(w_0)$ as $\tau_j \mapsto \tau_{j+Np}$, taking the subscripts modulo $Nq$. Thus, the action of $Q^{\circ \ell}$ on the rays landing at $w_0$ combinatorially mimics that of the rational rotation $z \mapsto e^{2\pi i p/q} z$. \vspace{6pt} We claim that the cycles of ${\widehat{D}}^{\circ k \ell}$ in $\Lambda(z_0)$ also have the common length $q$ and combinatorial rotation number $p/q$. To see this, let $\tau \in \Lambda(w_0)$ and set $\tau':={\widehat{d}}^{\circ \ell}(\tau)$, so that ${\widehat{D}}^{\circ k\ell}$ maps $\Pi^{-1}(\tau)$ bijectively onto $\Pi^{-1}(\tau')$. Label the elements of these fibers as $$ \Pi^{-1}(\tau)=\{ \theta_1, \ldots, \theta_{\nu} \} \qquad \text{and} \qquad \Pi^{-1}(\tau')= \{ \theta'_1, \ldots, \theta'_{\nu} \} $$ in counterclockwise order. 
Taking the conformal isomorphism $\zeta: \mathbb{C} \smallsetminus K \to \mathbb{C} \smallsetminus \overline{\mathbb{D}}$, it follows from \corref{samefiber} that the arcs $\zeta(R^P_{\theta_1}), \ldots, \zeta(R^P_{\theta_{\nu}})$ co-land at some $u \in \partial \mathbb{D}$, and similarly the arcs $\zeta(R^P_{\theta'_1}), \ldots, \zeta(R^P_{\theta'_{\nu}})$ co-land at some $u' \in \partial \mathbb{D}$. The conjugate map $f:= \zeta \circ P^{\circ k \ell} \circ \zeta^{-1}$ extends by reflection to a conformal isomorphism $f:\Omega \to \Omega'$ between annular neighborhoods of $\partial \mathbb{D}$ which sends $u$ to $u'$, preserves $\partial \mathbb{D}$ and maps each arc $\zeta(R^P_{\theta_i}) \cap \Omega$ to some arc $\zeta(R^P_{\theta'_j}) \cap \Omega'$. The fact that $f$ is orientation-preserving then implies that $f(\zeta(R^P_{\theta_i}) \cap \Omega)=\zeta(R^P_{\theta'_i}) \cap \Omega'$, or $P^{\circ k\ell}(R^P_{\theta_i})=R^P_{\theta'_i}$, or ${\widehat{D}}^{\circ k\ell}(\theta_i)=\theta'_i$ for every $1 \leq i \leq \nu$. Applying this $q$ times, we conclude that the iterate ${\widehat{D}}^{\circ k \ell q}$ must act as the identity on the fiber $\Pi^{-1}(\tau)$. It easily follows that each cycle of ${\widehat{D}}^{\circ k \ell}$ in $\Lambda(z_0)$ has length $q$ and combinatorial rotation number $p/q$. \vspace{6pt} We summarize the above observations in the following \begin{corollary} Let $z_0 \in \partial K$ be a repelling or parabolic point of period $\ell$ under $P^{\circ k}$ and $w_0=\varphi(z_0) \in \partial K_Q$ be the corresponding periodic point of $Q$. Let $N \geq 1$ be the number of cycles of ${\widehat{d}}^{\circ \ell}$ in $\La(w_0)$ with the common length $q \geq 1$ and combinatorial rotation number $p/q$. Take representative angles $\tau_1, \ldots, \tau_N$ for these cycles and let $\nu_j$ be the number of elements in $\Pi^{-1}(\tau_j)$. 
Then, there are precisely $\sum_{j=1}^N \nu_j$ cycles of ${\widehat{D}}^{\circ k\ell}$ in $\La(z_0)$ and they all have the same length $q$ and combinatorial rotation number $p/q$. \end{corollary} We now return to our discussion of the periodic gaps of the maximal Cantor set $C$. Suppose $]a,b[$ is a periodic gap of $C$. Then $a,b$ are periodic under ${\widehat{D}}^{\circ k}$ so by \thmref{perr}(i) and \thmref{B} the rays $R^P_a, R^P_b$ co-land at a periodic point $z_0 \in \partial K$. Another application of \thmref{B} then shows that for every $\theta \in ]a,b[ \, \cap I$ the ray $R^P_\theta$ lands at the same $z_0$. Using the notations in the above corollary, it follows that $[a,b] \cap I$ is a fiber of $\Pi$ in $\Lambda(z_0)$, so $\# ([a,b] \cap I) = \nu_j$ for some $1 \leq j \leq N$. Thus, to bound the cardinality of $[a,b] \cap I$, we need to find an upper bound for the $\nu_j$. \vspace{6pt} It is known that the number of distinct cycles of external rays that land on a periodic orbit of a polynomial is at most one more than the number of critical values (see \cite{M1} for the quadratic and \cite{K} for the higher degree case\footnote{They prove this bound for smooth rays, but their argument works almost verbatim for the infinitely broken periodic rays since they are all pairwise disjoint.}). This gives the estimate $\sum_{j=1}^N \nu_j \leq D$, which implies $\nu_j \leq D$ for all $1 \leq j \leq N$. However, we need a sharper form of this bound for the proof of \thmref{C}. \vspace{6pt} Each of the iterated images $K^i:=P^{\circ i}(K)$ is a period $k$ component of the filled Julia set $K_P$. \thmref{A} applied to each $K^i$ gives a compact set $I^i \subset {\mathbb{T}}$ consisting of angles $\theta$ such that $R_\theta$ or one of $R^{\pm}_{\theta}$ accumulates on $\partial K^i$, and a surjection $\Pi^i: I^i \to {\mathbb{T}}$ that semiconjugates ${\widehat{D}}^{\circ k}|_{I^i}$ to ${\widehat{d}}$. 
For $\theta \in I^i$, we denote by $R^P_\theta$ the unique external ray of $P$ at angle $\theta$ that accumulates on $\partial K^i$. With the periodic point $z_0 \in \partial K^0$ as above, consider its orbit ${\mathcal O}:= \{ z_0, z_1, \ldots, z_{k \ell -1} \}$ under $P$, so $z_i \in K^i$. For each $0 \leq i \leq k \ell-1$ the $q \sum_{j=1}^N \nu_j$ external rays landing at $z_i$ separate the plane into the same number of open topological disks called the {\it \bfseries sectors} based at $z_i$. Two distinct sectors are either disjoint or nested or they contain each other's complements. This is a simple consequence of the fact that distinct rays landing on $\mathcal O$, being smooth or infinitely broken, cannot intersect. Moreover, if a sector contains $z_i$, it contains all but one of the sectors based at $z_i$. The collection of all sectors based at the points of $\mathcal O$ will be denoted by ${\mathcal S}_{\mathcal O}$. \vspace{6pt} Let $S(z_i,a,b) \in {\mathcal S}_{\mathcal O}$ denote the sector based at $z_i$ bounded by the rays $R_a^P, R_b^P$, labeled counterclockwise so $S(z_i,a,b)$ contains all field lines $R_\theta$ with $\theta \in ]a,b[$. The map ${\widehat{D}}$ acting on the union $\Lambda(z_0) \cup \cdots \cup \Lambda(z_{k \ell-1})$ induces a {\it \bfseries sector map} $\sigma: {\mathcal S}_{\mathcal O} \to {\mathcal S}_{\mathcal O}$ defined by $\sigma(S(z_i, a,b)):= S(z_{i+1},{\widehat{D}}(a),{\widehat{D}}(b))$. Locally near the base points, $\sigma$ is compatible with $P$: There are neighborhoods $U$ of $z_i$ and $V$ of $z_{i+1}$ such that $P:U \to V$ is a conformal isomorphism with $P(S \cap U)=\sigma(S) \cap V$ for every sector $S$ based at $z_i$. Globally $P(S)$ is often different from $\sigma(S)$, but by the monodromy theorem if $\sigma(S)$ does not contain any critical value of $P$, then $S$ does not contain any critical point of $P$, and $P|_S: S \to \sigma(S)$ is a conformal isomorphism. 
\vspace{6pt} The external rays that bound a sector can contain critical points of $P$ when they are infinitely broken. The following convention clarifies when such boundary points should be thought of as belonging to the sector. We say that a critical point $\omega$ of $P$ is {\it \bfseries attached} to the sector $S(z_i,a,b)$ if one of the following happens: (i) $\omega \in S(z_i,a,b)$, or (ii) $\omega \in R_a^P$ and $R_a^P=R_a^-$, or (iii) $\omega \in R_b^P$ and $R_b^P = R_b^+$. Similarly, we say that a critical value $v$ of $P$ is attached to $S(z_i,a,b)$ if either (i) $v \in S(z_i,a,b)$, or (ii) $v=P(\omega)$ for some critical point $\omega$ attached to the sector $\sigma^{-1}(S(z_i,a,b))$. Thus, a critical value attached to a sector $S$ is either an interior critical value which may or may not have come from an interior critical point of $\sigma^{-1}(S)$, or a boundary critical value coming from a boundary critical point attached to $\sigma^{-1}(S)$. \vspace{6pt} We define the {\it \bfseries length} of $S=S(z_i,a,b)$ by $|S|:=b-a$ and its {\it \bfseries weight} $w(S)$ as the integer part of $D \, |S|$. An application of the argument principle (see \cite{GM} as well as \cite[Theorem 5.10]{Z}) shows that \begin{align*} w(S) = & \ \text{number of critical points of} \ P \ \text{attached to} \ S \ \\ & \ \text{counting multiplicities}. \end{align*} The relation $$ |\sigma(S)| = D \, |S| - w(S) $$ shows that if $|\sigma(S)| \leq |S|$, then there must be a critical point of $P$ attached to $S$ and therefore a critical value of $P$ attached to $\sigma(S)$. In particular, {\it the shortest sector in each cycle of $\sigma$ has a critical value of $P$ attached to it.} \vspace{6pt} We call $S(z_i,a,b)$ an {\it \bfseries essential sector} if $\Pi^i(a) \neq \Pi^i(b)$ and a {\it \bfseries ghost sector} if $\Pi^i(a)=\Pi^i(b)$. 
By \corref{samefiber}, $S(z_i,a,b) \cap K^i \neq \emptyset$ if $S(z_i,a,b)$ is an essential sector, while $S(z_i,a,b) \cap K^i= \emptyset$ and $]a,b[$ is a gap of $I^i$ if $S(z_i,a,b)$ is a ghost sector. It follows that $\sigma$ preserves the type of a sector, i.e., $\sigma(S)$ is a ghost sector if and only if $S$ is. \vspace{6pt} The ghost sectors based at $z_i$ do not meet $K^i$ but they may well contain a component $K^j$ for some $j \not \equiv i \ (\operatorname{mod} k)$. We call a ghost sector {\it \bfseries minimal} if it does not meet the union $K^0 \cup \cdots \cup K^{k-1}$. Equivalently, if it does not contain any point of the orbit $\mathcal O$. \begin{figure}[t!] \centering \begin{overpic}[width=0.8\textwidth]{8.pdf} \put (90,85) {\footnotesize {\color{white} $R_{2/26}$}} \put (76,96) {\footnotesize {\color{white} $R_{4/26}$}} \put (63,96) {\footnotesize {\color{white} $R^+_{5/26}$}} \put (41,96) {\footnotesize {\color{white} $R_{6/26}$}} \put (1,91) {\footnotesize {\color{white} $R_{10/26}$}} \put (1,59) {\footnotesize {\color{white} $R_{12/26}$}} \put (1,24) {\footnotesize {\color{white} $R^+_{15/26}$}} \put (15,2) {\footnotesize {\color{white} $R_{18/26}$}} \put (40,2) {\footnotesize {\color{white} $R^+_{19/26}$}} \put (48,49) {\small {\color{white} $K$}} \end{overpic} \caption{\sl{The nine external rays and sectors associated with the period $3$ orbit of the cubic polynomial described in Example \ref{cubicex}. 
The escaping critical point is shown as a red dot.}} \label{ghost1} \end{figure} \begin{example}\label{cubicex} There is a cubic polynomial with a period $3$ critical point in a component $K$ of the filled Julia set and a period $3$ repelling point $z_0 \in \partial K$ with $$ \Lambda(z_0)= \Big\{ \frac{2}{26}, \frac{10}{26}, \frac{19}{26} \Big\}, \quad \Lambda(z_1)= \Big\{ \frac{4}{26}, \frac{5}{26}, \frac{6}{26} \Big\}, \quad \Lambda(z_2)= \Big\{ \frac{12}{26}, \frac{15}{26}, \frac{18}{26} \Big\} $$ (compare Figures \ref{ghost1} and \ref{ghost2}). Of the nine external rays landing on the orbit ${\mathcal O}=\{ z_0, z_1, z_2 \}$, the three rays $R^+_{19/26}$ (crashing into the escaping critical point), $R^+_{5/26}, R^+_{15/26}$ are infinitely broken and the remaining six are smooth. Of the nine sectors in ${\mathcal S}_{\mathcal O}$, the three sectors $$ S\Big(z_0, \frac{19}{26}, \frac{2}{26}\Big), \quad S\Big(z_1, \frac{5}{26}, \frac{6}{26}\Big), \quad S\Big(z_2, \frac{15}{26}, \frac{18}{26}\Big) $$ are essential. The remaining six are ghost sectors, with $$ S\Big(z_1, \frac{4}{26}, \frac{5}{26}\Big) \quad \text{and} \quad S\Big(z_2, \frac{12}{26}, \frac{15}{26}\Big) $$ being minimal. \end{example} \begin{figure}[t!] \centering \begin{overpic}[width=0.8\textwidth]{9.pdf} \put (89,66) {\small {\color{white} $R_{2/26}$}} \put (33,96) {\small {\color{white} $R_{10/26}$}} \put (10,4) {\small {\color{white} $R^+_{19/26}$}} \put (69,30) {\color{white} $z_0$} \put (80,15) {\color{white} $K$} \end{overpic} \caption{\sl{Details of \figref{ghost1} near the period $3$ point $z_0 \in \partial K$.}} \label{ghost2} \end{figure} There is a one-to-one correspondence between cycles of $\sigma$ in ${\mathcal S}_{\mathcal O}$ and cycles of ${\widehat{D}}$ in the union $\Lambda(z_0) \cup \cdots \cup \Lambda(z_{k\ell-1})$ (for example, assign to the $\sigma$-cycle of $S(z_i,a,b)$ the ${\widehat{D}}$-cycle of $a$).
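The cycle structure in Example \ref{cubicex}, and the length--weight relation $|\sigma(S)| = D\,|S| - w(S)$ from the discussion above, can be checked by direct computation. The following Python sketch (the helper names are ours, not from the text; exact arithmetic via the standard \texttt{fractions} module) verifies that the nine angles fall into three cycles of length $3$ under tripling mod $1$, and follows the sector lengths around the $\sigma$-cycle of ghost sectors containing $S(z_1, 4/26, 5/26)$:

```python
from fractions import Fraction
from math import floor

D = 3  # degree of the polynomial in Example cubicex

def orbit(theta):
    """Cycle of an angle under multiplication by D (mod 1)."""
    cyc, t = [theta], (D * theta) % 1
    while t != theta:
        cyc.append(t)
        t = (D * t) % 1
    return cyc

# The three cycles through the angles of Lambda(z_0):
# 2/26 -> 6/26 -> 18/26, 10/26 -> 4/26 -> 12/26, 19/26 -> 5/26 -> 15/26.
for a in (Fraction(2, 26), Fraction(10, 26), Fraction(19, 26)):
    assert len(orbit(a)) == 3

def image_length(L):
    # |sigma(S)| = D|S| - w(S), with weight w(S) = floor(D|S|),
    # i.e. the fractional part of D|S|.
    return D * L - floor(D * L)

# sigma maps S(z_1, 4/26, 5/26) to S(z_2, 12/26, 15/26), then to
# S(z_0, 10/26, 19/26), and back: lengths 1/26 -> 3/26 -> 9/26 -> 1/26.
L = Fraction(1, 26)
lengths = [L]
for _ in range(3):
    L = image_length(L)
    lengths.append(L)
assert lengths == [Fraction(1, 26), Fraction(3, 26),
                   Fraction(9, 26), Fraction(1, 26)]
```

The weight of $S(z_0, 10/26, 19/26)$ equals $\lfloor 27/26 \rfloor = 1$, reflecting the escaping critical point attached to it along $R^+_{19/26}$; accordingly the shortest sector of this cycle, $S(z_1, 4/26, 5/26)$, has a critical value of $P$ attached to it, in line with the italicized assertion above.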
It follows from our earlier discussion that there are $N$ cycles of essential sectors and $\sum_{j=1}^N \nu_j -N = \sum_{j=1}^N (\nu_j-1)$ cycles of ghost sectors in ${\mathcal S}_{\mathcal O}$. Let \begin{align*} N_1 := & \ \text{number of critical values of} \ P \ \text{attached to some minimal} \\ & \ \text{ghost sector in} \ {\mathcal S}_{\mathcal O}, \\ N_2 := & \ \text{number of escaping critical values of} \ P \ \text{not attached to} \\ & \ \text{any minimal ghost sector in} \ {\mathcal S}_{\mathcal O}. \end{align*} Note that each critical value that contributes to the count $N_1$ is attached to a {\it unique} minimal ghost sector. Furthermore, the critical values that contribute to the count $N_1$ or $N_2$ can only come from the $D-d$ critical points of $P$ outside $K^0 \cup \cdots \cup K^{k-1}$. Hence, \begin{equation}\label{n1n2} N_1+N_2 \leq D-d. \end{equation} \begin{lemma}\label{NaN} The number of cycles of ghost sectors in ${\mathcal S}_{\mathcal O}$ is $\leq N_1+1$. The inequality is strict if $K$ has period $k=1$. \end{lemma} \begin{proof} The following argument is inspired by \cite[Theorem 5.2]{K}. Let $M$ denote the number of cycles of ghost sectors in ${\mathcal S}_{\mathcal O}$. There is nothing to prove if $M=1$, so let us assume $M \geq 2$. The shortest sector in each of these $M$ cycles has a critical value attached to it. Let $S_1, \ldots, S_M$ denote these shortest sectors labeled so that $|S_1| \leq \cdots \leq |S_M|$. We prove that $S_1, \ldots, S_{M-1}$ are minimal. This gives the bound $M-1 \leq N_1$, as required. \vspace{6pt} If $S_i$ is not minimal for some $1 \leq i \leq M-1$, it contains a point $z \in {\mathcal O}$ and therefore it contains all but one of the sectors based at $z$. Among these sectors there must be at least one representative from each of the $M-1$ cycles of ghost sectors other than the cycle represented by $S_i$ itself. This implies that there are $M-1$ ghost sectors shorter than $S_i$, a contradiction. 
\vspace{6pt} When $K$ has period $1$, all ghost sectors are automatically minimal. Thus the shortest sectors $S_1, \ldots, S_M$ are minimal, giving the improved bound $M \leq N_1$. \end{proof} \begin{corollary}\label{cardper} If $]a,b[$ is a periodic gap of $C$ with $a,b \in \Lambda(z_0)$, then $\# ([a,b] \cap I) \leq N_1+2$. The inequality is strict if $K$ has period $k=1$. \end{corollary} \begin{proof} By what we have seen, $\#([a,b] \cap I) = \nu_i$ for some $1 \leq i \leq N$. By \lemref{NaN}, $\sum_{j=1}^N (\nu_j-1) \leq N_1+1$ so $\nu_i \leq N_1+2$. If $K$ has period $1$, the above sum is bounded by $N_1$ so $\nu_i \leq N_1+1$. \end{proof} \subsection{Preperiodic gaps of $C$} Let $]a,b[$ be a strictly preperiodic gap of $C$. For $i \geq 0$ set $a_i:={\widehat{D}}^{\circ ik}(a), b_i:={\widehat{D}}^{\circ ik}(b)$, so each $]a_i,b_i[$ is a gap of $C$. Let $n \geq 1$ be the smallest integer for which $]a_n,b_n[$ is periodic. Then $R^P_{a_n}, R^P_{b_n}$ co-land at a periodic point $z_0 \in \partial K$. As in \S \ref{pergp}, let $\mathcal O$ denote the orbit of $z_0$ under $P$ and let ${\mathcal S}_{\mathcal O}$ be the collection of sectors based at the points of $\mathcal O$. Recall that $N_2 \geq 0$ is the number of escaping critical values of $P$ that are not attached to any minimal ghost sector in ${\mathcal S}_{\mathcal O}$. \begin{lemma}\label{prepcount} $\# ([a,b] \cap I) \leq \# ([a_n,b_n] \cap I) + N_2$. \end{lemma} \begin{proof} Under the iterate ${\widehat{D}}^{\circ nk}$ every angle in $[a,b] \cap I$ maps to $[a_n,b_n] \cap I$. Moreover, by \lemref{partners}, for each $\theta_n \in [a_n,b_n] \cap I$ there can be at most two angles in $[a,b] \cap I$ which map to $\theta_n$ under ${\widehat{D}}^{\circ nk}$. Let us assume $a \leq \theta<\theta' \leq b$ are such angles and set $\theta_i:= {\widehat{D}}^{\circ ik}(\theta), \theta'_i:={\widehat{D}}^{\circ ik}(\theta')$, so $\theta_n=\theta'_n$. 
Let $0 \leq i \leq n-1$ be the largest integer for which $\theta_i \neq \theta'_i$. By \lemref{partners}, the rays $R^-_{\theta_i}, R^+_{\theta'_i}$ crash into a critical point $\omega$ of $P^{\circ k}$ and the common image $P^{\circ k}(R^-_{\theta_i})=R_{\theta_{i+1}}=R_{\theta'_{i+1}}=P^{\circ k}(R^+_{\theta'_i})$ is smooth. Hence there is an integer $1 \leq j \leq k$ for which $v:=P^{\circ j}(\omega)$ is an escaping critical value of $P$. If $v$ were attached to a minimal ghost sector $S \in {\mathcal S}_{\mathcal O}$, it would necessarily be an interior critical value of $S$ since it has the smooth or finitely broken ray $R:=P^{\circ j}(R^-_{\theta_i})$ passing through it. This would imply that the entire ray $R$ is contained in $S$, and therefore the landing point $\zeta$ of $R$ belongs to $\overline{S} \cap K^j$. Since $S$ does not meet the component $K^j$, the point $\zeta$ would have to be the base point of $S$, and $R$ would be one of the rays bounding $S$, which is a contradiction. \vspace{6pt} We have assigned to each such pair $\theta, \theta'$ at least one escaping critical value of $P$ not attached to any minimal ghost sector in ${\mathcal S}_{\mathcal O}$. Evidently different pairs have different critical values assigned to them, so the number of such pairs is at most $N_2$. The lemma follows immediately. \end{proof} We are now ready to finish the proof of \thmref{C}: \begin{proof}[Proof of \thmref{C}] Let $]a,b[$ be a gap of the Cantor set $C$. We need to show that the closed interval $[a,b]$ contains at most $D-d+2$ points of $I$. Set $a_i:={\widehat{D}}^{\circ ik}(a), b_i:= {\widehat{D}}^{\circ ik}(b)$ for $i \geq 0$. We consider two cases: \vspace{6pt} $\bullet$ {\it Case 1.} There is a smallest $n \geq 1$ such that $a_n=b_n$. Then $]a_{n-1}, b_{n-1}[$ is a taut gap of $C$, so $]a,b[$ is a gap of $I$ by \lemref{tCtI}. In this case $[a,b]$ contains exactly two points of $I$, i.e. the endpoints $a,b$.
\vspace{6pt} $\bullet$ {\it Case 2.} $a_i \neq b_i$ and therefore $]a_i, b_i[$ is a gap of $C$ for all $i \geq 0$. Then there is a smallest $n \geq 0$ such that $]a_n,b_n[$ is periodic. By \corref{cardper}, $[a_n,b_n]$ contains at most $N_1+2$ points of $I$. It follows from \lemref{prepcount} and the inequality \eqref{n1n2} that the cardinality of $[a,b] \cap I$ is at most $N_1+N_2+2 \leq D-d+2$, as required. Again by \corref{cardper} this inequality is strict if $K$ has period $1$. \end{proof} \section{Proof of \thmref{D}}\label{sec:ex} In this section we give a descriptive proof of \thmref{D} by constructing examples of polynomials $P$ of degree $D \geq 3$ having a filled Julia set component $K$ of period $k=1$ and polynomial-like degree $d \geq 2$ for which the semiconjugacy $\Pi: I \to {\mathbb{T}}$ of \thmref{A} has the top valence $D-d+1$ asserted by \thmref{C}. If $D-d+1 \geq 3$, it follows that $I$ has isolated points. The common feature of these examples is that $D-d+1$ consecutive fixed rays of $P$ co-land at a fixed point on $\partial K$. Our construction is flexible in that we can designate the hybrid class of the restriction of $P$ to a neighborhood of $K$ as well as the fixed rays that co-land on $\partial K$. \subsection{General description of the examples}\label{descex} The fixed rays of a monic degree $D$ polynomial have angles $\theta_i := i/(D-1) \ (\operatorname{mod} 1)$, taking the subscript $i$ modulo $D-1$. Fix an integer $j$ and choose a collection of $D-d$ open intervals of the form $$ J_i= \Big] \theta_i, \theta_i + \frac{1}{D} \Big[ \quad \text{or} \quad \Big] \theta_i-\frac{1}{D}, \theta_i \Big[ $$ subject to the conditions that (i) each $J_i$ is contained in the interval $]\theta_j, \theta_{j+D-d}[$, and (ii) the $J_i$ have pairwise disjoint closures. (The reader can verify that there are $D-d+1$ choices for such collections). 
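The count of $D-d+1$ can be confirmed by a brute-force enumeration. Encoding the choice in each interval $]\theta_i, \theta_{i+1}[$ as `L' (subinterval anchored at the left endpoint) or `R' (anchored at the right endpoint), the disjoint-closure condition fails exactly when an `R' choice is immediately followed by an `L' choice, since both closures then contain the shared endpoint $\theta_{i+1}$. A small sketch of this enumeration (the function name is ours, for illustration only):

```python
from itertools import product

def num_collections(m):
    """Count choices of m subintervals J_i, one per interval
    ]theta_i, theta_{i+1}[, each anchored at the left ('L') or
    right ('R') endpoint of its interval, with pairwise disjoint
    closures.  A pattern is invalid iff some 'R' is immediately
    followed by an 'L' (both closures contain the shared endpoint)."""
    valid = 0
    for pattern in product("LR", repeat=m):
        if all(not (pattern[i] == "R" and pattern[i + 1] == "L")
               for i in range(m - 1)):
            valid += 1
    return valid

# With m = D - d intervals the valid patterns are exactly
# L...LR...R, so there are m + 1 = D - d + 1 collections.
for m in range(1, 8):
    assert num_collections(m) == m + 1
```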
Each of the $D-d$ intervals $]\theta_i, \theta_{i+1}[$ for $j \leq i \leq j+D-d-1$ contains exactly one element of $\{ J_i \}$, namely $J_i = ]\theta_i,\theta_i+1/D[$ or $J_{i+1}=]\theta_{i+1}-1/D,\theta_{i+1}[$. For simplicity let $\theta'_i:=\theta_i \pm 1/D$ denote the endpoint of $J_i$ other than $\theta_i$. Note that ${\widehat{D}}(\theta_i)={\widehat{D}}(\theta'_i)=\theta_i$. \vspace{6pt} Take a polynomial $Q$ of degree $d$ with $K_Q$ connected. Let $P$ be a monic polynomial of degree $D$ with the following properties: \vspace{6pt} \begin{enumerate} \item[(i)] There is a component $K=P(K)$ of $K_P$ and neighborhoods $U_0, U_1$ of $K$ such that $P|_{U_1}: U_1 \to U_0$ is a polynomial-like map of degree $d$ hybrid equivalent to $Q$. \vspace{6pt} \item[(ii)] The $D-d$ critical points of $P$ that do not belong to $K$ are distinct and escape to $\infty$. \vspace{6pt} \item[(iii)] For each $J_i$ in the chosen collection, the field lines $R_{\theta_i}$ and $R_{\theta'_i}$ crash into an escaping critical point $\omega_i$. \vspace{6pt} \end{enumerate} We claim that these properties imply that the angles $\theta_j, \ldots, \theta_{j+D-d}$ are in $I=I_K$ and belong to the same fiber of the semiconjugacy $\Pi:I \to {\mathbb{T}}$. By \thmref{B} and \thmref{perr} the corresponding rays $R^P_{\theta_j}, \ldots, R^P_{\theta_{j+D-d}}$ would co-land at a repelling or parabolic fixed point on $\partial K$. This will reduce the proof of \thmref{D} to the construction of a polynomial $P$ satisfying (i)-(iii). \vspace{6pt} By (iii) the union $R_{\theta_i} \cup R_{\theta'_i} \cup \{ \omega_i \}$ bounds a ``wake'' $W_i$ containing all field lines $R_\theta$ with $\theta \in J_i$. Note that $P$ maps $W_i$ univalently onto the domain $\mathbb{C} \smallsetminus R_{\theta_i}([Ds_i,+\infty[)$ which properly contains $W_i$. Here $s_i>0$ is the Green's potential of $\omega_i$. It follows from the Schwarz lemma that $W_i$ contains a unique fixed point $p_i$ which is necessarily repelling. 
In particular, this shows that $K$ is not contained in $W_i$, so $K \cap W_i = \emptyset$. It is easy to see that one of the two broken rays $R_{\theta_i}^\pm$ (more specifically, $R_{\theta_i}^+$ if $J_i= ]\theta_i , \theta'_i [$ or $R_{\theta_i}^-$ if $J_i= ]\theta'_i , \theta_i [$) lands at $p_i$, whereas the other lands at a fixed point $z_i$ that lies outside of the union $\bigcup W_j$. Of the $D$ fixed points of $P$ counting multiplicities, $D-d$ are $\{ p_i \}$ that fall in $\bigcup W_j$. The remaining $d$ fixed points must be in $K$ since $K \subset \mathbb{C} \smallsetminus \bigcup W_j$. This shows that every $z_i$ belongs to $K$ and therefore $\theta_i \in I$. To prove our claim, we show that $]\theta_i, \theta_{i+1}[ \, \cap I = \emptyset$ for all $j \leq i \leq j+D-d-1$ (it will of course follow that the $z_i$ are one and the same point). \vspace{6pt} \begin{figure}[t] \centering \begin{overpic}[width=0.95\textwidth]{wakes.pdf} \put (43,39) {\small $\alpha_0$} \put (17,42) {\color{red}{\small $\alpha_1$}} \put (12,42) {\color{red}{\small $\alpha_2$}} \put (8,42) {\color{red}{\small $\alpha_3$}} \put (-5,40) {\small $\alpha_0+\tfrac{1}{D-1}$} \put (27,34) {\footnotesize $D^{-1}$} \put (12,35.7) {\tiny $D^{-2}$} \put (7.5,34) {\tiny $D^{-3}$} \put (20.5,22) {\small $c_1=\omega_i$} \put (11,13.5) {\small $c_2$} \put (7.2,8.3) {\small $c_3$} \put (99,39) {\small $\alpha_0$} \put (72,42) {\color{red}{\small $\alpha_1$}} \put (67,42) {\color{red}{\small $\alpha_2$}} \put (63,42) {\color{red}{\small $\alpha_3$}} \put (49,40) {\small $\alpha_0+\tfrac{1}{D-1}$} \put (81,34) {\footnotesize $D^{-1}$} \put (66,35.7) {\tiny $D^{-2}$} \put (61.8,34) {\tiny $D^{-3}$} \put (74.5,22) {\small $c_1=\omega_i$} \put (65.2,13.5) {\small $c_2$} \put (61.5,8.3) {\small $c_3$} \end{overpic} \caption{\sl The preimages of the interval $]\alpha_0,\alpha_1[$ exhaust the interval $]\alpha_0,\alpha_0+1/(D-1)[$ between two consecutive fixed points of ${\widehat{D}}$. 
The fixed ray at angle $\alpha_0+1/(D-1)$ can be either smooth (left) or infinitely broken (right).} \label{wakes} \end{figure} Suppose $]\theta_i, \theta_{i+1}[$ contains $J_i = ]\theta_i, \theta'_i[ $ (the case where it contains $J_{i+1} = ]\theta'_{i+1}, \theta_{i+1}[$ is similar). Set $\alpha_0:=\theta_i, \alpha_1:=\theta'_i=\alpha_0+1/D$. By (iii) the rays $R^-_{\alpha_0}, R^+_{\alpha_1}$ first crash into the critical point $\omega_i=R^-_{\alpha_0}(s_{\alpha_0})$. Since ${\widehat{D}}(\alpha_0)=\alpha_0$, the ray $R^-_{\alpha_0}$ is infinitely broken at the preimages $c_n := R^-_{\alpha_0}(s_{\alpha_0}/D^{n-1})$ of $\omega_i$ (compare \S \ref{gr}). Thus, for each $n \geq 1$ there is an angle $\alpha_n \in ]\alpha_0, \alpha_0+1/(D-1)[$ such that the rays $R^-_{\alpha_0}, R^+_{\alpha_n}$ crash into $c_n$ (see \figref{wakes}). The relation $P(c_n)=c_{n-1}$ gives ${\widehat{D}}(\alpha_n)=\alpha_{n-1}$ which shows $$ \alpha_n=\alpha_0+\frac{1}{D}+\frac{1}{D^2}+\cdots+\frac{1}{D^n}. $$ Thus, $\alpha_n \to \alpha_0+1/(D-1)$ as $n \to \infty$. Since $]\alpha_0,\alpha_n[ \, \cap I = \emptyset$, we conclude that $]\alpha_0,\alpha_0+1/(D-1)[ \, \cap I = \emptyset$, as required. \vspace{6pt} \begin{figure}[t!] \centering \begin{overpic}[width=0.8\textwidth]{3.pdf} \put (94,51) {\small {\color{white} $R_0$}} \put (1,47) {\small {\color{white} $R^+_{1/2}$}} \put (54,47) {\small {\color{white} $\beta$}} \put (44.5,67) {\small {\color{white} $\omega_1$}} \put (48,36) {{\color{white} $K$}} \end{overpic} \caption{\sl{Filled Julia set of a degree $D=3$ polynomial $P$ with a quadratic-like restriction hybrid equivalent to $Q(z)=z^2$. The fixed rays $R_0, R^+_{1/2}$ co-land at the fixed point $\beta$ on the boundary of the component $K$ of $K_P$. Here $P(z)=a z^2+z^3$ with $a \approx 0.31629-i 1.92522$.}} \label{seh} \end{figure} \begin{figure}[t!]
\centering \begin{overpic}[width=0.8\textwidth]{4.pdf} \put (94,52) {\small {\color{white} $R_0$}} \put (29,96) {\small {\color{white} $R^+_{1/3}$}} \put (28,3) {\small {\color{white} $R^-_{2/3}$}} \put (64.3,49) {\small {\color{white} $\beta$}} \put (68.2,72) {\small {\color{white} $\omega_1$}} \put (68.2,27) {\small {\color{white} $\omega_2=\overline{\omega_1}$}} \put (51,49) {{\color{white} $K$}} \end{overpic} \caption{\sl{Filled Julia set of a degree $D=4$ polynomial $P$ with a quadratic-like restriction hybrid equivalent to $Q(z)=z^2$. The fixed rays $R_0, R^+_{1/3}, R^-_{2/3}$ co-land at the fixed point $\beta$ on the boundary of the component $K$ of $K_P$. Here $P(z)=\sqrt[3]{10} \, z^2+a z^3+z^4$ with $a \approx -1.64846$.}} \label{chah} \end{figure} Figures \ref{seh} and \ref{chah} illustrate examples of cubic and quartic polynomials of this type with super-attracting fixed points at $0$, so their quadratic-like restrictions are hybrid equivalent to $z \mapsto z^2$. \figref{seh} shows the case $D=3$ with the choice $$ J_1= \, ] \theta'_1, \theta_1 [ \, = \Big] \frac{1}{6}, \frac{1}{2} \Big[, $$ where the fixed rays $R_0, R^+_{1/2}$ co-land at a repelling fixed point $\beta \in \partial K$. \figref{chah} shows the case $D=4$ with the choice $$ J_1 = \, ] \theta'_1, \theta_1 [ \, = \Big] \frac{1}{12}, \frac{1}{3} \Big[ \quad \text{and} \quad J_2 = \, ] \theta_2, \theta'_2 [ \, = \Big] \frac{2}{3}, \frac{11}{12} \Big[, $$ where the fixed rays $R_0, R^+_{1/3}, R^-_{2/3}$ co-land at a repelling fixed point $\beta \in \partial K$. In this example $\theta_0=0$ is an isolated point of $I$. \subsection{Construction of examples by surgery} We now give the details of the construction of polynomials that have the properties (i)-(iii) above. The idea is to use a cut-and-paste surgery to construct a synthetic model for such a polynomial, and then apply the measurable Riemann mapping theorem to realize it as an actual polynomial map. 
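As a quick sanity check of the angle data in Figures \ref{seh} and \ref{chah}, one can verify with exact rational arithmetic that the stated angles $\theta_i$ and their companions $\theta'_i$ are all fixed or mapped as claimed under multiplication by $D$ modulo $1$ (the helper name is ours):

```python
from fractions import Fraction

def Dhat(D, theta):
    """Multiplication by D on angles, taken mod 1."""
    return (D * theta) % 1

# Degree 3 example (Figure seh): theta_1 = 1/2, theta'_1 = 1/2 - 1/3 = 1/6.
assert Dhat(3, Fraction(1, 6)) == Fraction(1, 2) == Dhat(3, Fraction(1, 2))

# Degree 4 example (Figure chah): fixed angles i/3 with companions
# theta'_1 = 1/3 - 1/4 = 1/12 and theta'_2 = 2/3 + 1/4 = 11/12.
assert Dhat(4, Fraction(1, 12)) == Fraction(1, 3) == Dhat(4, Fraction(1, 3))
assert Dhat(4, Fraction(11, 12)) == Fraction(2, 3) == Dhat(4, Fraction(2, 3))
```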
To simplify our exposition, we illustrate the construction of a degree $D=6$ polynomial with a degree $d=2$ polynomial-like restriction and $D-d+1=5$ fixed rays (one smooth, four broken) co-landing at a fixed point on $\partial K$. The general case is a straightforward modification of this example. \vspace{6pt} For $0 \leq i \leq 4$ let $\theta_i := i/5 \ (\operatorname{mod} 1)$, so each $\theta_i$ is fixed under multiplication by $6 \ (\operatorname{mod} 1)$. Set \begin{align*} & \theta'_1 := \theta_1-\frac{1}{6}=\frac{1}{30} & & \theta'_2 := \theta_2-\frac{1}{6} = \frac{7}{30} \\[5pt] & \theta'_3 := \theta_3+\frac{1}{6}=\frac{23}{30} & & \theta'_4 := \theta_4+\frac{1}{6} = \frac{29}{30}. \end{align*} Fix a radius $R>1$. Define a Riemann surface ${\mathcal{X}}$ conformally equivalent to the Riemann sphere $\widehat{\mathbb{C}}$ as follows. Cut eight slits in $\widehat{\mathbb{C}}$ along each of the closed straight line segments $[0, R\ensuremath{{\operatorname{e}}}^{2\pi i \theta}]$ where $\theta \in \{ \theta_1, \ldots , \theta_4, \theta'_1, \ldots , \theta'_4 \}$. We can think of the interior of each slit as having two sides which we denote by $\delta_\theta^-$ and $\delta_\theta^+$ for $\theta$ in the above set. In other words, for $0<r<R$, \begin{align*} \delta^+_\theta(r) & := \lim_{\tau \searrow \theta} \ r \ensuremath{{\operatorname{e}}}^{2\pi i \tau} \\ \delta^-_\theta(r) & := \lim_{\tau \nearrow \theta} \ r \ensuremath{{\operatorname{e}}}^{2\pi i \tau}. \end{align*} The union ${\mathcal{Y}}$ of the slit sphere together with the sixteen arcs $\delta_\theta^\pm$ is a Riemann surface with real-analytic boundary arcs. Define a Riemann surface ${\mathcal{X}}^*$ by making the identifications $$ \delta_{\theta_i'}^+(r) \longleftrightarrow \delta_{\theta_i}^-(r) \quad \text{and} \quad \delta_{\theta_i'}^-(r) \longleftrightarrow \delta_{\theta_i}^+(r) \quad (0<r<R) $$ on the boundary arcs of ${\mathcal{Y}}$ for $1 \leq i \leq 4$. 
It is not hard to check that ${\mathcal{X}}^*$ is homeomorphic to a $2$-sphere with nine points removed, and that these missing points are analytically punctures. These punctures correspond to the four points $$ c_i := \lim_{r \to R} \delta_{\theta_i'}^\pm(r)= \lim_{r \to R} \delta_{\theta_i}^\mp(r) \qquad (1 \leq i \leq 4) $$ together with the four points \begin{align*} z_1 & := \lim_{r \to 0} \delta_{\theta_1'}^+(r)= \lim_{r \to 0} \delta_{\theta_1}^-(r) & & & z_2 & := \lim_{r \to 0} \delta_{\theta_2'}^+(r)= \lim_{r \to 0} \delta_{\theta_2}^-(r) \\ z_3 & := \lim_{r \to 0} \delta_{\theta_3'}^-(r)= \lim_{r \to 0} \delta_{\theta_3}^+(r) & & & z_4 & := \lim_{r \to 0} \delta_{\theta_4'}^-(r)= \lim_{r \to 0} \delta_{\theta_4}^+(r) \end{align*} together with the single point \begin{align*} z_0 & := \lim_{r \to 0} \delta_{\theta_1'}^-(r)= \lim_{r \to 0} \delta_{\theta_1}^+(r) = \lim_{r \to 0} \delta_{\theta_2'}^-(r)= \lim_{r \to 0} \delta_{\theta_2}^+(r) \\ & = \lim_{r \to 0} \delta_{\theta_3'}^+(r)= \lim_{r \to 0} \delta_{\theta_3}^-(r) = \lim_{r \to 0} \delta_{\theta_4'}^+(r)= \lim_{r \to 0} \delta_{\theta_4}^-(r) \end{align*} (compare \figref{pants}). If we add these nine punctures to ${\mathcal{X}}^\ast$, we obtain a compact Riemann surface ${\mathcal{X}}$ homeomorphic to a $2$-sphere and therefore biholomorphic to the Riemann sphere $\widehat{\mathbb{C}}$ by the uniformization theorem. The function $\log|z|$ on ${\mathcal{Y}}$ induces a well-defined subharmonic function $G$ on ${\mathcal{X}}$ which tends to $-\infty$ at $z_0, \ldots, z_4$ and takes the value $s_0 := \log R$ at $c_1, \ldots, c_4$ having simple critical points there. For $s \in {\mathbb{R}}$ define $$ \Omega(s) : = \{ z \in {\mathcal{X}} : G(z) > s \}. $$ Evidently the identity map gives rise to a biholomorphism $$ \zeta : \{ z \in {\mathcal{Y}}: |z|>R \} \to \Omega(s_0). 
$$ \vspace{6pt} The map $z \mapsto z^6$ on ${\mathcal{Y}}$ induces a degree $6$ holomorphic branched covering $f : \Omega(s_0/6) \to \Omega(s_0)$ with nine critical points counting multiplicity: four simple critical points at $c_1, \ldots, c_4$ and a critical point of multiplicity $5$ at $\infty$. However, it is not possible to extend $f$ holomorphically to $\Omega(s)$ for any $s < s_0/6$, because each of the pre-images of the $c_i$ would become a point of discontinuity. To circumvent this problem, we use quasiconformal surgery to extend the restriction $f|_{\Omega(s_0/3)}$ to a degree $6$ smooth branched covering of ${\mathcal{X}}$ with an invariant conformal structure of bounded dilatation. \vspace{6pt} \begin{figure}[t!] \centering \begin{overpic}[width=0.98\textwidth]{pants.pdf} \put (37,95) {\small ${\mathcal{Y}}$} \put (37,50) {\small ${\mathcal{X}}$} \put (16,26.5) {\tiny {$z_0$}} \put (33,39) {\tiny {$z_1$}} \put (8,43.7) {\tiny {$z_2$}} \put (8.2,9) {\tiny {$z_3$}} \put (32.8,13.5) {\tiny{$z_4$}} \put (29.5,33.4) {\footnotesize {$c_1$}} \put (13.7,39.3) {\footnotesize {$c_2$}} \put (13.7,13.5) {\footnotesize {$c_3$}} \put (29.5,19.4) {\footnotesize {$c_4$}} \put (39.5,84.5) {\tiny {$\delta_{\theta'_1}^-$}} \put (39.5,72) {\tiny {$\delta_{\theta'_4}^+$}} \put (35.5,86.5) {\tiny {$\delta_{\theta'_1}^+$}} \put (35.5,71) {\tiny {$\delta_{\theta'_4}^-$}} \put (30,92.5) {\tiny {$\delta_{\theta_1}^-$}} \put (30,64.5) {\tiny {$\delta_{\theta_4}^+$}} \put (28.5,96) {\tiny {$\delta_{\theta_1}^+$}} \put (28.5,60.5) {\tiny {$\delta_{\theta_4}^-$}} \put (18.5,98) {\tiny {$\delta_{\theta'_2}^-$}} \put (18.5,59) {\tiny {$\delta_{\theta'_3}^+$}} \put (15.5,94) {\tiny {$\delta_{\theta'_2}^+$}} \put (16,63) {\tiny {$\delta_{\theta'_3}^-$}} \put (9,92.5) {\tiny {$\delta_{\theta_2}^-$}} \put (9,64.7) {\tiny {$\delta_{\theta_3}^+$}} \put (4,87.5) {\tiny {$\delta_{\theta_2}^+$}} \put (4,70) {\tiny {$\delta_{\theta_3}^-$}} \end{overpic} \caption{\sl{The cut-and-paste construction of the Riemann surface
${\mathcal{X}} \cong \widehat{\mathbb{C}}$. The solid white curves are the level sets of the function $\log |z|$ on ${\mathcal{Y}}$ and the induced subharmonic function $G$ on ${\mathcal{X}}$. The dashed white curves are the radial lines at angles $\theta_i, \theta'_i$ in ${\mathcal{Y}}$ and their corresponding images in ${\mathcal{X}}$.}} \label{pants} \end{figure} For $0 \leq i \leq 4$ and $s \leq s_0$ denote by $D_i(s)$ the disk neighborhood of $z_i$ consisting of points $z$ with $-\infty \leq G(z)<s$. For $0 \leq i \leq 4$ and $s < s_0 < s'$ denote by $A_i(s, s')$ the closed topological annulus ${\mathcal{X}} \smallsetminus (\Omega(s') \cup D_i(s))$. Take any quadratic polynomial $Q$ with connected filled Julia set. For $s>0$ let $V(s)$ denote the topological disk $\{ z \in \mathbb{C} : G_Q(z) < s \}$, where $G_Q$ is the Green's function of $Q$. Fix some $\rho>0$ and let $\phi : V(2\rho) \to D_0(s_0/3)$ be any conformal isomorphism which sends $0$ to $z_0$. Set $D'_0:= \phi(V(\rho))$. The conjugate map $$ f_0 := \phi \circ Q \circ \phi^{-1}: D'_0 \to D_0(s_0/3) $$ is a quadratic-like map conformally conjugate to $Q$ which extends to a smooth (in fact real-analytic) degree $2$ covering map between the boundary curves, that is, between the inner boundaries of the closed annuli $\overline{D_0}(s_0/3) \smallsetminus D'_0$ and $A_0(s_0/3,2s_0)$. Since $f$ restricts to a smooth degree $2$ covering map between the outer boundaries of the same annuli, we can interpolate between $f_0$ and $f$ to obtain a smooth degree $2$ covering map $\overline{D_0}(s_0/3) \smallsetminus D'_0 \to A_0(s_0/3,2s_0)$. This gives a smooth extension of $f$ to $D_0(s_0/3)$ which is holomorphic in $D'_0$. \vspace{6pt} We can define similar degree $1$ extensions of $f$ to the four remaining components of $\{ z \in {\mathcal{X}} : -\infty \leq G(z) < s_0/3 \}$, namely the topological disks $D_i(s_0/3)$ for $1 \leq i \leq 4$. 
That is, we can find a disk $D'_i$ compactly contained in $D_i(s_0/3)$ and a smooth extension of $f$ to $D_i(s_0/3)$ which maps $D'_i$ conformally onto $D_i(s_0/3)$ and the closed annulus $\overline{D_i}(s_0/3) \smallsetminus D'_i$ diffeomorphically onto $A_i(s_0/3,2s_0)$. The details are straightforward and will be left to the reader. \vspace{6pt} Let $\tilde{f}: {\mathcal{X}} \to {\mathcal{X}}$ denote the extension of $f|_{\Omega(s_0/3)}$ constructed this way. Then $\tilde{f}$ is a degree $6$ branched covering of ${\mathcal{X}}$ which is smooth and therefore quasiregular. Define a conformal structure $\mu$ on ${\mathcal{X}}$ by setting $\mu=\mu_0$ (the standard conformal structure) on $\Omega(s_0/3)$ and extending it by pulling back under the iterates of $\tilde{f}$. In other words, for each $n \geq 1$, set $\mu = (\tilde{f}^{\circ n})^{\ast}(\mu_0)$ in $\tilde{f}^{-n}(\Omega(s_0/3))$, and define $\mu=\mu_0$ on ${\mathcal{X}} \smallsetminus \bigcup_{n \geq 0} \tilde{f}^{-n}(\Omega(s_0/3))$. Evidently $\mu$ is $\tilde{f}$-invariant and has bounded dilatation since each backward orbit starting in $\Omega(s_0/3)$ passes through each non-holomorphic region $\overline{D_i}(s_0/3) \smallsetminus D'_i$ of $\tilde{f}$ once (or twice if it hits the boundary). By the measurable Riemann mapping theorem, there exists a quasiconformal homeomorphism $\psi: {\mathcal{X}} \to \widehat{\mathbb{C}}$ which pulls back the standard conformal structure of $\widehat{\mathbb{C}}$ to $\mu$. We can normalize $\psi$ such that the conformal map $\psi \circ \zeta: \{ z \in {\mathcal{Y}} : |z|>R \} \to \psi(\Omega(s_0))$ is tangent to the identity at $\infty$. The conjugate map $P := \psi \circ \tilde{f} \circ \psi^{-1}$ is then a degree $6$ monic polynomial. 
Moreover, the conformal map $\frak{B}:=\zeta^{-1} \circ \psi^{-1}: \psi(\Omega(s_0)) \to \{ z \in {\mathcal{Y}} : |z|>R \}$ conjugates $P$ to $z \mapsto z^6$ and is tangent to the identity at $\infty$, so $\frak{B}$ must be the B\"{o}ttcher coordinate for $P$. In particular, each radial line $\{ r \ensuremath{{\operatorname{e}}}^{2\pi i \theta} : r>R \}$ in ${\mathcal{Y}}$ pulls back under $\frak{B}$ to the field line $R_\theta$ for $P$. \vspace{6pt} Setting $U_1:=\psi(D'_0)$ and $U_0:=\psi(D_0(s_0/3))$, we see that the restriction $P|_{U_1}: U_1 \to U_0$ is a quadratic-like map hybrid equivalent to $Q$ and therefore its filled Julia set $K$ is connected. The boundary of $U_0$ is contained in the basin of $\infty$ and so is the boundary of the topological disk $(P|_{U_1})^{-n}(U_0)$ for every $n \geq 1$. It follows that $K=\bigcap_{n \geq 0} (P|_{U_1})^{-n}(U_0)$ is a connected component of the filled Julia set $K_P$. Moreover, by the construction the four critical points $\omega_i := \psi(c_i)$ of $P$ are escaping at the common potential $s_0$, and the field lines $R_{\theta_i}$ and $ R_{\theta'_i}$ crash into $\omega_i$ for $1 \leq i \leq 4$. Thus $P$ satisfies the conditions (i)-(iii) of \S \ref{descex}. We conclude that the angles $\{ \theta_0, \ldots, \theta_4 \}$ belong to the same fiber of $\Pi: I_K \to {\mathbb{T}}$ and the five rays $R_0, R^+_{\theta_1}, R^+_{\theta_2}, R^-_{\theta_3}, R^-_{\theta_4}$ co-land at a fixed point on $\partial K$.
https://arxiv.org/abs/1209.2655
Positivity and Transportation
We prove in this paper that the weighted volume of the set of integral transportation matrices between two integral histograms r and c of equal sum is a positive definite kernel of r and c when the set of considered weights forms a positive definite matrix. The computation of this quantity, despite being the subject of a significant research effort in algebraic statistics, remains an intractable challenge for histograms of even modest dimensions. We propose an alternative kernel which, rather than considering all matrices of the transportation polytope, only focuses on a sub-sample of its vertices known as its Northwestern corner solutions. The resulting kernel is positive definite and can be computed with a number of operations O(R^2 d) that grows linearly in the dimension d, where R^2, the total number of sampled vertices, is a parameter that controls the complexity of the kernel.
\section{Introduction} Suppose that among $30$ students in a classroom, $7$ and $23$ have light and dark colored eyes respectively. You are also told that $12$ of them have light hair while $18$ have dark hair. What are all the possible populations of the four subgroups of students with light/light, dark/dark, light/dark and dark/light eye and hair color respectively?
Such quantities can be arranged in a $2\times 2$ matrix whose row sum vector must be equal to $[7,23]^T$ and column sum vector must be equal to $[12,18]$, $\smallmat{3&4\\9&14}$ for instance, and more generally \emph{any} integer values in the dots below that satisfy these constraints: $$\bordermatrix[{[]}]{ & 12 & 18 \cr 7 & \bullet & \bullet\cr 23 & \bullet & \bullet \cr }$$ Alternatively, suppose that two bakeries in a small village produce $7$ and $23$ loaves of bread daily, while two restaurants in the same area need $12$ and $18$ loaves respectively to serve their customers every day. What are all the possible morning delivery plans of bread loaves that the two bakeries and restaurants can agree upon? These seemingly trivial sets of matrices coincide, and are known in the statistics and optimization literature as the sets of \emph{contingency tables} and \emph{transportation plans} respectively. In statistics, the problem of enumerating all such tables arises naturally in hypothesis testing. Suppose that upon entering the aforementioned classroom you observe that the actual repartition of these groups is $\smallmat{5&2\\7&16}$. Such an observation intuitively suggests that eye and hair color are related, but how confident should you be in this statement? In the $2\times 2$ case presented above, the Fisher exact test~\citep{yates1934contingency} answers that question by computing the probabilities of \emph{all} possible table outcomes if one assumes that they have been generated as the product of independent Bernoulli variables with law $p_1=7/30$ and $p_2=12/30$. By comparing all these probabilities with that of the observed table, we can assess how reliable an independence hypothesis would be. In optimization, given a $2\times 2$ cost matrix which describes the cost (in gas, calories or time) of bringing a loaf from each bakery to each restaurant, finding the delivery plan with minimal cost is known as a transportation problem.
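To make the introductory example concrete, here is a minimal Python sketch (the function and variable names are ours, not from the paper) that enumerates every $2\times 2$ contingency table with the classroom margins and computes the hypergeometric probability that the Fisher exact test assigns to each of them:

```python
from math import comb

def tables_2x2(r, c):
    """Enumerate all 2x2 contingency tables with row sums r and column sums c.
    A 2x2 table is entirely determined by its top-left entry x11."""
    assert sum(r) == sum(c)
    lo = max(0, r[0] - c[1])          # x12 = r[0] - x11 must not exceed c[1]
    hi = min(r[0], c[0])
    return [[[x, r[0] - x], [c[0] - x, c[1] - r[0] + x]] for x in range(lo, hi + 1)]

# classroom example: eye-color margins r=[7,23], hair-color margins c=[12,18]
T = tables_2x2([7, 23], [12, 18])

# probability of each table under the independence (hypergeometric) model,
# as used by the Fisher exact test
probs = [comb(12, t[0][0]) * comb(18, t[0][1]) / comb(30, 7) for t in T]
```

On these margins the enumeration yields eight admissible tables whose probabilities sum to one; comparing the probability of the observed table $\smallmat{5&2\\7&16}$ to the seven others is exactly the comparison described above.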
Transportation problems are an extremely general class of linear programs which are known to encompass all instances of network flows~\cite[p.274]{bertsimas1997introduction}. Optimal transportation distances~\citep{rachev1998mass,villani09} are distances between probability densities which combine both perspectives outlined above, where the probabilistic view on contingency tables is matched with the goal of computing an optimal transportation plan between two marginal probabilities given a metric on the probability space of interest. Such distances have been widely used in computer vision following the seminal work of~\citet{rubner1997earth}, who used them to compare histograms of image features. When used in information retrieval tasks, transportation distances usually fare better in practice than other classical distances for histograms~\citep{Pele-iccv2009}. Transportation distances have however two notable drawbacks. First, from a geometric point of view, transportation distances are deficient in the sense that they are neither negative definite nor Hilbertian. Negative definiteness carries many favorable properties, among which the possibility to create Euclidean embeddings from which the metric can be accurately recovered, as well as the possibility to turn the distance into a positive definite kernel by simple exponentiation, as a radial basis function. Because of this deficiency, there is no known positive definite counterpart to transportation distances that can leverage the complexity of the set of contingency tables. Second, from a computational point of view, the cost of computing transportation distances grows in most cases of interest at least quadratically in the dimension $d$ of the histograms, which can be prohibitive for many applications. We try to address both issues in this work.
The main contribution of this paper is theoretical: after providing some background material and motivation in Section~\ref{sec:back}, we prove in Section~\ref{sec:trans} that the generating function of the set of all contingency tables between two integral histograms is a positive definite kernel. Our second contribution is practical: we propose in Section~\ref{sec:nwc} a positive definite kernel that leverages these ideas while still being computationally tractable. \section{Background}\label{sec:back} \subsection{The Transportation Polytope and the Set of Contingency Tables} We review in this section a few definitions, notations and results needed to prove our main result. In the following, we write $\dotprod{\,\cdot\,}{\cdot}$ for both the Frobenius dot-product of matrices and the usual dot-product of vectors. Given a dimension $d$ fixed throughout this paper, for two vectors $r,c\in \mathbb{R}^d$, let $U(r,c)$ be the transportation polytope of $r$ and $c$, namely the subset of nonnegative matrices in $\mathbb{R}^{d\times d}$ defined as: $$U(r,c)\defeq \{X\in\mathbb{R}_+^{d\times d}\; |\; X\mathbf 1_d=r, X^T\mathbf 1_d=c\},$$ where $\mathbf 1_d$ is the $d$-dimensional vector of ones. $U(r,c)$ contains all nonnegative $d\times d$ matrices with row and column sums $r$ and $c$ respectively. It is easy to check that $U(r,c)$ is non-empty if and only if all coordinates of $r$ and $c$ are non-negative and the total masses of $r$ and $c$ are the same, that is $r^T\mathbf 1_d=c^T\mathbf 1_d$. We will consider in most of this work \emph{integral} vectors $r$ and $c$ taken in the set $\Sigma_d^N$ of $d$-dimensional integral histograms with total mass $N\in\mathds{N}$, $$ \Sigma_d^N \defeq \{r \in\mathds{N}^{d} \;|\; r_1+\cdots+r_d = N\}.
$$ We will also focus accordingly on the subset $\mathbb{U}(r,c)$ of $U(r,c)$ that contains all integral transportation matrices, alternatively known as \emph{contingency tables}~\citep{lauritzen1982lectures,everitt1992analysis}: $$\mathbb{U}(r,c)\defeq U(r,c) \cap \mathds{N}^{d\times d}.$$ \subsection{Weighted Volumes of Contingency Tables and Particular Cases of Positivity} Ranging from early work by~\citet{yates1934contingency,good1976} to~\citet{diaconisefron,cryan2003polynomial,chen2005sequential}, the computation of elementary statistics about $\mathbb{U}(r,c)$ has attracted considerable attention. Many of the ideas of this paper build upon recent work by~\citeauthor{barvinok2008enumerating}, most notably on his study of the generating function of $\mathbb{U}(r,c)$, defined for $M\in \mathbb{R}^{d\times d}$ as $$V(r,c\,;M)\defeq \sum_{X\in \mathbb{U}(r,c)} e^{-\dotprod{X}{M}}.$$ The generating function can be related to the \emph{weighted} volume~\citep[p.2]{barvinok2008enumerating} of $\mathbb{U}(r,c)$, defined for any nonnegative $d\times d$ matrix $K\in\mathbb{R}_+^{d\times d}$ as: $$T(r,c\,;K) \defeq \sum_{X\in \mathbb{U}(r,c)} \prod_{ij}^d k_{ij}^{x_{ij}}.$$ Both definitions are equivalent since if we agree that $k_{ij}=e^{-m_{ij}}$ then $T(r,c\,;K)=V(r,c\,;M)$. Because all of our results rely on $K$'s properties, we will mostly use the weighted volume formulation in this paper. Some sections in this paper, notably \S\ref{subsec:rel} below and \S\ref{sec:nwc}, are better understood with the generating function formulation. \citet[Prop.2]{cuturi07permanents} proved that the cardinality of the set $\mathbb{U}(r,c)$ is a positive definite kernel of $r$ and $c$ using the Robinson-Schensted-Knuth bijection~\citep{Knuth70} that maps each contingency table to a pair of Young tableaux with contents $r$ and $c$ and the same pattern.
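For intuition, the definitions above can be checked by brute force in small dimensions. The sketch below is our own illustration (the helper names `tables` and `weighted_volume` are not from the paper): it enumerates $\mathbb{U}(r,c)$ row by row and evaluates $T(r,c\,;K)$ directly, so that taking $K=\mathbf 1_{d\times d}$ recovers the cardinality of $\mathbb{U}(r,c)$:

```python
import itertools
import math

def tables(r, c):
    """Enumerate U(r,c): nonnegative integer matrices with row sums r
    and column sums c, built one row at a time."""
    if len(r) == 1:
        if min(c) >= 0 and sum(c) == r[0]:
            yield (tuple(c),)
        return
    ranges = [range(min(r[0], cj) + 1) for cj in c]
    for row in itertools.product(*ranges):
        if sum(row) == r[0]:
            rest = [cj - xj for cj, xj in zip(c, row)]
            yield from ((tuple(row),) + tail for tail in tables(r[1:], rest))

def weighted_volume(r, c, K):
    """T(r,c;K): sum over all X in U(r,c) of the product of K[i][j]**X[i][j]."""
    return sum(
        math.prod(K[i][j] ** X[i][j]
                  for i in range(len(r)) for j in range(len(c)))
        for X in tables(r, c))
```

For instance, `weighted_volume([7, 23], [12, 18], [[1, 1], [1, 1]])` counts the tables of the classroom example from the introduction. This enumeration is of course exponential in $d$ and only meant as a sanity check of the definitions.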
It is easy to see that the cardinality of $\mathbb{U}(r,c)$ is equal to $T(r,c;\mathbf 1_{d\times d})$ or $V(r,c\,;\mathbf{0}_{d\times d})$. \citet[Prop.1]{cuturi07permanents} also proved that $T(r,c\,;K)$ is a positive definite kernel of $r$ and $c$ if both are \emph{binary} histograms and $K$ is a nonnegative $d\times d$ positive definite matrix. Since the computation of $T$ entails in that case the computation of the permanent of a Gram matrix, \citet{cuturi07permanents} called this kernel the permanent kernel. The main contribution of our paper is to prove in Theorem~\ref{theo:genfpsd} that the map $(r,c)\in\Sigma_d^N \mapsto T(r,c\,;K)$ is positive definite whenever $K$ is a $d\times d$ positive definite matrix. \begin{figure} \pstexinput{.75}{2dpoly3-bis.eps_tex} \caption{Schematic representation of the set $\mathbb{U}(r,c)$ of contingency tables seen as the intersection of the lattice of integral matrices $\mathds{N}^{d\times d}$ with the transportation polytope $U(r,c)$. Each red dot stands for an integral plan $X\in\mathbb{U}(r,c)$. The inner color of each red dot stands for the value of $\dotprod{X}{M}$, which can be seen to go gradually from $\dotprod{X^\star}{M}$ to $\dotprod{X^\circ}{M}$, that is from the minimum to the maximum of $\dotprod{\cdot}{M}$ over $U(r,c)$, or equivalently $\mathbb{U}(r,c)$. The generating function $V(r,c;M)$ of $\mathbb{U}(r,c)$ considers the contributions of \emph{all} contingency tables.}\label{fig:mainfig} \end{figure} \subsection{Relationships with the Optimal Transportation Distance}\label{subsec:rel} Given a $d\times d$ cost matrix $M$, one can quantify the cost of mapping $r$ to $c$ using a transportation matrix $X$ as $\dotprod{X}{M}$. The minimum of this cost is called the optimal transportation cost, defined as: $$ d_M(r,c) \defeq \min_{X\in U(r,c)} \dotprod{X}{M}. $$ A classical result of optimization in network flows~\citep[Theo.
7.5]{bertsimas1997introduction} guarantees the existence of a contingency table $X^\star\in\mathbb{U}(r,c)$ which achieves this minimum, as schematically represented in Figure~\ref{fig:mainfig}. Such an optimal table $X^\star$ can be obtained algorithmically in polynomial time~\cite[\S9]{ahuja1993network}. The minimal cost $d_M(r,c)$ turns out to be a distance~\cite[\S6.1]{villani09} whenever the matrix $M$ is itself a metric. This distance is also known as the Wasserstein, Monge-Kantorovich, Mallows or Earth Mover's distance~\citep{rubner1997earth} in the computer vision literature. The transportation distance is not negative definite in the general case, as shown by counterexamples~\citep{naor-2005} and embedding distortion results~\citep{indyk2009}. Although some metrics $M$ can yield a negative definite distance\footnote{Setting $M=\mathbf 1_{d\times d}-I_d$ yields the total variation distance between discrete probabilities, which is half the Manhattan or $l_1$ distance between $r$ and $c$. All these distances are known to be negative definite.}, characterizing the negative definiteness of $d_M$ remains an open question. Despite this fact, transportation distances have been used in practice to derive a \emph{pseudo}-positive definite kernel: both~\citet[\S4.C]{emd2004} and~\citet[\S2.3]{emd2006} introduce the exponential of (minus) the minimum of $\dotprod{X}{M}$, \begin{equation}\label{eq:km}k_M (r,c) = e^{-d_M(r,c)} = \exp\left(-\min_{X\in U(r,c)}\dotprod{X}{M}\right),\end{equation} to form an indefinite kernel which can be used to compare histograms in practice. We prove that, although the value $\exp(- \langle X^\star, M\rangle)$ in itself is not a positive definite kernel, the sum of the terms $\exp(- \langle X, M\rangle)$ over \emph{all} possible contingency tables in $\mathbb{U}(r,c)$ is positive definite when $M$ has suitable properties.
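As a small numerical illustration (entirely our own; the cost matrix `M` below is the discrete metric on two points, our choice), the following sketch computes the optimal cost $d_M$ for the classroom margins of the introduction by enumerating the eight admissible tables, and checks that $e^{-d_M(r,c)}$ is always dominated by the sum $\sum_X e^{-\dotprod{X}{M}}$, so that the quantity $-\log\sum_X e^{-\dotprod{X}{M}}$ behaves as a "soft" minimum lying below the true minimum:

```python
import math

M = [[0.0, 1.0], [1.0, 0.0]]           # discrete metric on two points (our choice)
r, c = [7, 23], [12, 18]

# all 2x2 tables with these margins, parameterized by their top-left entry x11
tables = [[[x, 7 - x], [12 - x, 11 + x]] for x in range(8)]

def cost(X):
    return sum(M[i][j] * X[i][j] for i in range(2) for j in range(2))

d_M = min(cost(X) for X in tables)              # optimal transportation cost
V = sum(math.exp(-cost(X)) for X in tables)     # generating-function value
softmin = -math.log(V)                          # "soft" minimum, below d_M
```

Since every summand of `V` is positive, `math.exp(-d_M) < V` always holds, which is the elementary reason why the soft minimum lies below the minimum.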
The generating function $V(r,c\,;M)$ can be interpreted as the exponential of (minus) the soft-minimum of $\dotprod{X}{M}$ over all contingency tables, $$ V(r,c\,;M) = \exp\left(-\,\underset{X\in \mathbb{U}(r,c)}{\text{softmin}}\,\dotprod{X}{M}\right)= e^{\log\sum_{X\in \mathbb{U}(r,c)} e^{-\dotprod{X}{M}}}= \sum_{X\in \mathbb{U}(r,c)} e^{-\dotprod{X}{M}}, $$ where the soft-minimum of a finite family of scalars $(u_i)$ is $$\,\underset{i}{\text{softmin}}\,u_i\,\defeq -\log\sum_{i} e^{-u_i}.$$ This expression relates our results to previous applications of soft minimums to derive positive definite kernels from combinatorial distances for strings~\citep{VerSaiAku04}, time series~\citep{cuturi07kernelSHORT} and trees~\citep{shin2011mapping}. These ideas are summarized in Figure~\ref{fig:mainfig}. \subsection{Generalized Permutations}\label{subsec:genperm} We close this section by providing some tools needed to prove our main result. We write $S_N$ for the group of permutations of the set $\{1,\cdots,N\}$. For any vector $\alpha$ of size $N$ and permutation $\pi\in S_N$, we write $\alpha_\pi$ for the permuted vector with coordinates $\alpha_\pi=[\alpha_{\pi(1)}\,\alpha_{\pi(2)}\,\cdots\,\alpha_{\pi(N)}]$ and $\alpha_{p\cdot\cdot q}$ for the subvector $[\alpha_{p}\,\cdots\,\alpha_{q}]$ when $1\leq p \leq q \leq N$. For two vectors $\rho,\gamma$ of $\{1,\cdots,d\}^N$, the $2\times N$ array $$ (\rho\,;\gamma) \defeq \begin{bmatrix} \rho_1 & \rho_2 &\cdots & \rho_N \\ \gamma_1 & \gamma_2 &\cdots & \gamma_N \\ \end{bmatrix} $$ is called a generalized permutation~\citep{Knuth70}.
To any generalized permutation $(\rho\,;\gamma)$ corresponds a $d\times d$ integral matrix $\chi(\rho\,;\gamma)$ defined as~\cite[p.41]{fulton1997young}: \begin{equation}\label{eq:fulton}[\chi(\rho\,;\gamma)]_{ij} \defeq \sum_{t=1}^N\mathbf 1_{\rho_t=i}\cdot\mathbf 1_{\gamma_t=j},\quad 1\leq i,j\leq d.\end{equation} Consider the following example where $d=3,N=8$ and $$\rho=\begin{bmatrix}1\,2\,2\,2\,1\,3\,1\,3\;\end{bmatrix}, \gamma=\begin{bmatrix}1\,1\,2\,1\,3\,3\,3\,3\;\end{bmatrix}, (\rho\,;\gamma) = \begin{bmatrix}1\,2\,2\,2\,1\,3\,1\,3 \\1\,1\,2\,1\,3\,3\,3\,3\end{bmatrix}, \chi(\rho\,;\gamma) =\begin{bmatrix} 1 & 0 & 2\\ 2&1 &0 \\ 0 & 0 & 2 \end{bmatrix}. $$ If we consider now the permutation $\pi=[3\,6\,8\,5\,2\,1\,4\,7]$ we have that $$\rho=\begin{bmatrix}1\,2\,2\,2\,1\,3\,1\,3\;\end{bmatrix}, \gamma_\pi=\begin{bmatrix}2\,3\,3\,3\,1\,1\,1\,3\;\end{bmatrix}, (\rho\,;\gamma_\pi) = \begin{bmatrix}1\,2\,2\,2\,1\,3\,1\,3 \\2\,3\,3\,3\,1\,1\,1\,3\end{bmatrix}, \chi(\rho\,;\gamma_\pi) =\begin{bmatrix} 2 & 1 & 0\\ 0&0 &3 \\ 1 & 0 & 1 \end{bmatrix}. $$ Note that if $\rho$ and $\gamma$ have respectively $r_i$ and $c_i$ elements equal to $i$ among their $N$ coefficients for all $1\leq i\leq d$, then $\chi(\rho\,;\gamma)\in \mathbb{U}(r,c)$. One can see above that the corresponding histograms are $r=[3,3,2]$ and $c=[3,1,4]$ and that both $\chi(\rho\,;\gamma)$ and $\chi(\rho\,;\gamma_\pi)$ have row and column sums $r$ and $c$. \section{The Weighted Volume as a Positive Definite Kernel}\label{sec:trans} \begin{theorem}\label{theo:genfpsd} Let $K\in\mathbb{R}_{+}^{d\times d}$. The map $(r,c)\mapsto T(r,c\,;K)$ is positive definite if $K$ is positive definite. \end{theorem} The proof relies on the following observation:~\citet{barvinok2008enumerating} showed that the weighted volume of $\mathbb{U}(r,c)$ for two integral histograms $r$ and $c$ of total mass $N$ can be formulated as the expectation of the permanent of a random $N\times N$ matrix.
To do so,~\citeauthor{barvinok2008enumerating} shows that the weighted volume -- a sum indexed over all \emph{contingency tables} $X\in\mathbb{U}(r,c)$ -- can be rewritten as a sum indexed over all \emph{permutations} $\pi$ in $S_N$, up to a correcting term known as the Fisher-Yates statistic (Equation~\eqref{eq:fy} in the Appendix). The crux of~\citeauthor{barvinok2008enumerating}'s proof lies in a randomization scheme -- using draws from the exponential law -- to cancel out the Fisher-Yates statistic. We adopt a similar route to prove the positivity of $T$, by proving that the inverse of the Fisher-Yates statistic -- defined as $\mathbf{k}_2$ below -- is itself positive definite. \begin{proof} Suppose that $K\in\mathbb{R}_+^{d\times d}$ is positive definite and consider two integral histograms $r,c$ in $\Sigma_d^N$. We represent $r$ as an $N$-dimensional vector $\rho\in\{1,\cdots,d\}^N$, $$\rho\defeq [\,\underbrace{1,\cdots,1}_{r_1 \text{ times }},\underbrace{2,\cdots,2}_{r_2 \text{ times }},\cdots,\underbrace{d,\cdots,d}_{r_d \text{ times }}\,],$$ and consider the analogous representation $\gamma$ for $c$. Let $\mathbf{k}_1$ and $\mathbf{k}_2$ be the following kernels on $(\rho,\gamma)$: $$ \begin{aligned} \mathbf{k}_1(\rho,\gamma)&= \prod_{t=1}^N k(\rho_t,\gamma_t)\;, \text{ where } k(i,j) =k_{ij} \text{ for } 1\leq i,j\leq d,\\ \mathbf{k}_2(\rho,\gamma)&= \frac{1}{r_1!\cdots r_d!} \cdot \frac{1}{c_1!\cdots c_d!} \prod_{ij}^d x_{ij}!\;, \text{ where } X=\chi(\rho\,;\gamma). \quad (\text{see \S\ref{subsec:genperm}, Eq.~\eqref{eq:fulton}}) \end{aligned} $$ The kernel $\mathbf{k}_2$ is the inverse of the Fisher-Yates statistic (Equation~\eqref{eq:fy} in the Appendix) associated to an integral transportation table $X$ and its marginals $r$ and $c$. $\mathbf{k}_1$ is trivially positive definite. The first group of terms of $\mathbf{k}_2$ is trivially positive definite as a product $f(r)f(c)$ where $f(r)=\frac{1}{r_1!\cdots r_d!}$.
We prove that the other term, the product of factorials of the $x_{ij}$, is positive definite in Lemma~\ref{lem:fact}, using the proof strategy of a related result provided in Lemma~\ref{lem:fac}. Lemma~\ref{lem:perm} proves that when a kernel $\kappa$ on two vectors is symmetric (the definition is provided in the lemma), the sum $\sum_{\pi\in S_N}\kappa(\rho,\gamma_\pi)$ is itself positive definite. We use this result on the product $\kappa(\rho,\gamma)=\mathbf{k}_1(\rho,\gamma) \,\mathbf{k}_2(\rho,\gamma)$ which is trivially symmetric as the product of two symmetric kernels. We then prove in Lemma~\ref{lem:decomp} that $$ \sum_{\pi\in S_N} \kappa(\rho,\gamma_\pi) = T(r,c\,;K). $$ Since the summation over all permutations in the left hand side is positive definite by Lemma~\ref{lem:perm}, we conclude that $T(r,c\,;K)$ is itself a positive definite kernel as the product of two positive definite kernels. \end{proof} \section{Northwestern Kernel}\label{sec:nwc} The weighted volume $T(r,c\,;K)$ cannot be computed in practice, even for histograms of modest dimension $d$, and approximations~\citep{barvinok2008enumerating} are currently both too expensive and too loose to be of practical interest in a machine learning context. We adopt in this section an alternative approach, in which we propose to restrict the sum of elementary contributions $\exp(-\dotprod{X}{M})$ to a subset of extreme points of $U(r,c)$ and obtain a kernel whose computational complexity grows linearly in both the dimension $d$ and the size of the sample of extreme points. The main tool for this approach is provided by the Northwestern corner rule to generate a vertex of $U(r,c)$, which we recall in Section~\ref{subsec:nwc}. We define the Northwestern kernel in Section~\ref{subsec:sam} and prove that it is positive definite. For any matrix $M\in\mathbb{R}^{d\times d}$, we write $M_{\sigma\sigma'}$ for the row and column permuted matrix whose $(i,j)$ element is $m_{\sigma(i)\sigma'(j)}$.
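Both the decomposition $\sum_{\pi\in S_N} \kappa(\rho,\gamma_\pi) = T(r,c\,;K)$ used in the proof above and the Northwestern corner rule recalled in the next subsection can be checked numerically. The sketch below is entirely our own illustration (all helper names are ours; permutations are 0-based): the first part verifies the permutation-sum identity on the tiny instance $d=2$, $r=[2,1]$, $c=[1,2]$, for which $\mathbb{U}(r,c)$ contains exactly the two tables $\smallmat{0&2\\1&0}$ and $\smallmat{1&1\\0&1}$; the second part reproduces the Northwestern corner tables displayed in the examples of the next subsection.

```python
import itertools
import math

# --- check 1: the permutation-sum identity behind Theorem 1 -----------------
K = [[1.0, 0.5], [0.5, 1.0]]               # a positive definite weight matrix
r, c = [2, 1], [1, 2]
rho, gamma = [1, 1, 2], [1, 2, 2]          # expanded representations of r and c

def chi(rho, gamma, d=2):
    # the matrix chi(rho; gamma) of Section 2.4 (1-based values, 0-based matrix)
    X = [[0] * d for _ in range(d)]
    for i, j in zip(rho, gamma):
        X[i - 1][j - 1] += 1
    return X

def k1(rho, gamma):
    return math.prod(K[i - 1][j - 1] for i, j in zip(rho, gamma))

def k2(rho, gamma):
    # inverse Fisher-Yates statistic of the table chi(rho; gamma)
    X = chi(rho, gamma)
    num = math.prod(math.factorial(x) for row in X for x in row)
    den = (math.prod(math.factorial(ri) for ri in r)
           * math.prod(math.factorial(cj) for cj in c))
    return num / den

lhs = sum(k1(rho, [gamma[t] for t in pi]) * k2(rho, [gamma[t] for t in pi])
          for pi in itertools.permutations(range(3)))
U = [((0, 2), (1, 0)), ((1, 1), (0, 1))]   # the two tables of U([2,1],[1,2])
rhs = sum(math.prod(K[i][j] ** X[i][j] for i in range(2) for j in range(2))
          for X in U)

# --- check 2: the Northwestern corner rule --------------------------------
def northwest(r, c):
    # greedily saturate entries from the top-left corner: move east when a
    # column fills, south when a row fills
    r, c = list(r), list(c)
    d = len(r)
    X = [[0] * d for _ in range(d)]
    i = j = 0
    while i < d and j < d:
        m = min(r[i], c[j])
        X[i][j] = m
        r[i] -= m
        c[j] -= m
        if c[j] == 0 and j < d - 1:
            j += 1
        else:
            i += 1
    return X

def nw_permuted(r, c, sigma, sigma_p):
    # NW_{sigma^{-1} sigma'^{-1}}(r_sigma, c_sigma'): permute the margins,
    # apply the rule, then undo the permutations on rows and columns
    d = len(r)
    N = northwest([r[s] for s in sigma], [c[s] for s in sigma_p])
    inv = {s: i for i, s in enumerate(sigma)}
    inv_p = {s: j for j, s in enumerate(sigma_p)}
    return [[N[inv[i]][inv_p[j]] for j in range(d)] for i in range(d)]
```

With $r=[2,5,3]$, $c=[5,1,4]$ and the 0-based permutations `sigma=[2, 0, 1]`, `sigma_p=[2, 1, 0]` (i.e. $\sigma=(3,1,2)$ and $\sigma'=(3,2,1)$ in the 1-based notation of the text), `northwest` and `nw_permuted` return exactly the two tables worked out in the next subsection.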
\subsection{The Northwestern Corner Rule to Generate Vertices of $U(r,c)$}\label{subsec:nwc} The Northwestern corner rule is a heuristic that produces a vertex of the polytope $U(r,c)$ in up to $2d$ operations. The rule starts by giving the highest possible value to $x_{11}$; each time the highest possible value has been given to an entry $x_{ij}$, the rule moves on to $x_{i,j+1}$ if $x_{ij}$ filled column $j$, or to $x_{i+1,j}$ if $x_{ij}$ filled row $i$. The rule proceeds until $x_{dd}$ has received a value. Here is an example of this sequence assuming $r=[2,5,3]$ and $c=[5,1,4]$: $$\begin{bmatrix} \bullet & 0 & 0 \\ 0 & 0 & 0 \\ 0& 0 & 0\end{bmatrix} \rightarrow \begin{bmatrix} 2 & 0 & 0 \\ \bullet & 0 & 0 \\ 0& 0 & 0\end{bmatrix} \rightarrow \begin{bmatrix} 2 & 0 & 0 \\ 3 & \bullet & 0 \\ 0& 0 & 0\end{bmatrix} \rightarrow \begin{bmatrix} 2 & 0 & 0 \\ 3 &1 &\bullet \\ 0& 0 & 0\end{bmatrix} \rightarrow \begin{bmatrix} 2 & 0 & 0 \\ 3 &1 &1 \\ 0& 0 & \bullet\end{bmatrix} \rightarrow \begin{bmatrix} 2 & 0 & 0 \\ 3 &1 &1 \\ 0& 0 & 3\end{bmatrix}$$ We write $\mathbf{NW}(r,c)$ for the unique Northwestern corner solution that can be obtained through this heuristic. There is, however, a much larger number of Northwestern corner solutions that can be obtained by permuting arbitrarily the orders of $r$ and $c$ separately, computing the corresponding Northwestern corner table, and recovering a table of $\mathbb{U}(r,c)$ by inverting again the order of rows and columns. Setting $\sigma=(3,1,2),\sigma'=(3,2,1)$ we have that $r_\sigma=[3,2,5], c_{\sigma'}=[4,1,5]$ and $\sigma^{-1}=(2,3,1),\sigma'^{-1}=(3,2,1)$. Observe that: $$ \mathbf{NW}(r_\sigma,c_{\sigma'}) = \begin{bmatrix} 3 & 0 & 0 \\ 1 & 1 & 0 \\ 0& 0 & 5\end{bmatrix} \in \mathbb{U}(r_\sigma,c_{\sigma'}),\;\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'})= \begin{bmatrix} 0 & 1 & 1 \\ 5 & 0 & 0 \\ 0& 0 & 3\end{bmatrix}\in \mathbb{U}(r,c).
$$ Let $\mathcal{N}(r,c)$ be the set of all Northwestern corner solutions that can be produced this way: $$\mathcal{N}(r,c)\defeq\{ \mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'}),\; \sigma,\sigma'\in S_d\}.$$ Note that all Northwestern corner solutions have by construction at most $2d-1$ nonzero elements. The Northwestern corner rule produces a table which is by construction unique for a given ordered pair $(r,c)$, but there is an exponential number of pairs of row/column permutations $(\sigma,\sigma')$ that may share the same table~\citep[p.2]{stougie2002polynomial}. $\mathcal{N}(r,c)$ is a subset of the set of extreme points of $U(r,c)$~\citep[Corollary 8.1.4]{brualdi2006combinatorial}. $\mathbf{NW}(r,c)$ is an optimal transportation plan between $r$ and $c$ if the cost matrix $M$ is a Monge matrix~\citep{hoffman1961simple}, that is a matrix $M$ that satisfies the inequalities $$\forall 1 \leq i,j,k,l\leq d, \quad m_{ij}+m_{kl}\leq m_{il}+m_{kj}.$$ Note however that a distance matrix cannot be a Monge matrix, since the inequality above applied to $k=j$ and $l=i$ would imply that $0<2m_{ij}\leq m_{ii}+m_{jj}=0$ for any $i\neq j$. \subsection{Random Sampling of Northwestern Corner Solutions}\label{subsec:sam} We propose in this section a kernel which uses arbitrary row/column permutations of $r$ and $c$ to recover extreme points of $\mathbb{U}(r,c)$ and sums their individual contributions: \begin{theorem}Let $R$ be an arbitrary subset of permutations in $S_d$. The Northwestern kernel sampled on $R$ and parameterized by a matrix $M$, defined as $$ N(r,c\,;K,R) \defeq \sum_{\sigma,\sigma'\in R} \exp\left(-\dotprod{M}{\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'})}\right), $$ is a positive definite kernel if $K$, the element-wise exponential of $-M$, is positive definite.
\end{theorem} \begin{proof} As in the proof of Theorem~\ref{theo:genfpsd}, consider the representation of an integral histogram $r\in\Sigma_d^N$ as an $N$-dimensional vector $\rho$ that replicates $r_i$ times the index $i$ for all $i$ from $1$ to $d$. We also define, for any permutation $\sigma$ of $S_d$, the vector $\rho_\sigma$ as $$\rho_\sigma \defeq [\,\underbrace{\sigma(1),\cdots,\sigma(1)}_{r_{\sigma(1)} \text{ times }},\underbrace{\sigma(2),\cdots,\sigma(2)}_{r_{\sigma(2)} \text{ times }},\cdots,\underbrace{\sigma(d),\cdots,\sigma(d)}_{r_{\sigma(d)} \text{ times }}\,].$$ $\rho_\sigma$ for $\sigma\in S_d$ should not be confused with $\rho_\pi$ for $\pi\in S_N$ (\S\ref{subsec:genperm}): for any permutation $\sigma\in S_d$ there exists at least one permutation $\pi\in S_N$ such that $\rho_\sigma=\rho_\pi$, but the converse is not usually true. We show in Lemma~\ref{lem:nwc} that for $\sigma,\sigma'\in S_d$, $\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'})=\chi(\rho_\sigma\,;\gamma_{\sigma'})$, and thus, $$N(r,c\,;K,R) = \sum_{\sigma,\sigma'\in R} e^{-\dotprod{M}{\chi(\rho_\sigma\,;\gamma_{\sigma'})}} = \sum_{\sigma,\sigma'\in R} \mathbf{k_1}(\rho_\sigma,\gamma_{\sigma'}),$$ where $\mathbf{k_1}$ is defined as in the proof of Theorem~\ref{theo:genfpsd}. $N(r,c\,;K,R)$ is positive definite as a convolution kernel. \end{proof} \begin{lemma}\label{lem:nwc} Let $\sigma$ and $\sigma'$ be two permutations of $S_d$. Then $$\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'})=\chi(\rho_\sigma\,;\gamma_{\sigma'}).$$ \end{lemma} \begin{proof}We write $E_{ij}$ for the $d\times d$ matrix of zeros except for the $(i,j)$ element set to $1$. We prove the result by induction on the total mass $N$. For $N=1$ the result is trivial since the only transportation matrix in $U(r,c)$ in that case is $E_{\sigma(i_1)\sigma'(i_2)}$, where $i_1$ and $i_2$ are such that $r_{\sigma(i_1)}=c_{\sigma'(i_2)}=1$.
Suppose now that the result is true for all histograms of mass $N$ and consider the case where $r^T\mathbf 1_d=c^T\mathbf 1_d=N+1$. Let $i_1$ and $i_2$ be the smallest indices such that $r_{\sigma(i_1)}>0$ and $c_{\sigma'(i_2)}>0$ respectively. As a consequence, the first elements of $\rho_\sigma$ and $\gamma_{\sigma'}$ are $\sigma(i_1)$ and $\sigma'(i_2)$ respectively. Consider the two vectors $\rho_*$ and $\gamma_*$ of length $N$ equal to $\rho_\sigma$ and $\gamma_{\sigma'}$ \emph{without} these two first elements. Setting $\tilde{r}$ and $\tilde{c}$ equal to $r$ and $c$ except for the fact that $\tilde{r}_{\sigma(i_1)}=r_{\sigma(i_1)}-1$ and $\tilde{c}_{\sigma'(i_2)}=c_{\sigma'(i_2)}-1$, we have by induction that $\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(\tilde{r}_\sigma,\tilde{c}_{\sigma'})=\chi(\rho_*\,;\gamma_*),$ since the two histograms have total mass $N$ and their representations are respectively $\rho_*$ and $\gamma_*$. By definition of the Northwestern corner rule, adding a unit of mass to the $i_1$-th and $i_2$-th components of $\tilde{r}_\sigma$ and $\tilde{c}_{\sigma'}$ only changes the very first iteration of the rule, since all coordinates of $\tilde{r}_\sigma$ and $\tilde{c}_{\sigma'}$ up to but not including $i_1$ and $i_2$ respectively are zero by construction. Applying the rule yields a transportation table with an added unit in location $(i_1,i_2)$, providing thus the identity $$\mathbf{NW}(r_\sigma,c_{\sigma'}) = \mathbf{NW}(\tilde{r}_\sigma,\tilde{c}_{\sigma'}) + E_{i_1i_2},$$ which implies that \begin{equation}\label{eq:nw}\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'})= \mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(\tilde{r}_\sigma,\tilde{c}_{\sigma'}) + E_{\sigma(i_1)\sigma'(i_2)}.
\end{equation} By definition of $\chi$ we have that \begin{equation}\label{eq:chi} \chi(\rho_\sigma,\gamma_{\sigma'})= \chi(\rho_*,\gamma_*) + E_{\sigma(i_1)\sigma'(i_2)}. \end{equation} Combining Equations~\eqref{eq:chi} and~\eqref{eq:nw} above with the induction hypothesis, we obtain that $\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'})=\chi(\rho_\sigma,\gamma_{\sigma'})$. \end{proof} \begin{remark}The evaluation of $N(r,c\,;K,R)$ requires $O(d\abs{R}^2)$ steps, since computing each of the $\abs{R}^2$ contributions $\exp(-\dotprod{M}{\mathbf{NW}_{\sigma^{-1}\sigma'^{-1}}(r_\sigma,c_{\sigma'})})$ for a couple $\sigma,\sigma'$ requires up to $2d$ products. The size of $R\subset S_d$ can be controlled, from a few permutations up to an exhaustive enumeration, which would entail an overall complexity of the order of $O(d\,d!^2)$.\end{remark} \section{Conclusion and Future Work} We have proved in this paper that the fundamental ingredient of transportation distances, the polytope of contingency tables, can be used to define a positive definite kernel between two histograms. While the cost matrix of a transportation problem between two histograms $r$ and $c$ needs to be a distance matrix for the optimum to be itself a distance between $r$ and $c$, we have proved that the generating function of the same polytope is positive definite whenever the cost matrix is itself positive definite. This quantity is computationally intractable, and we have resorted to a summation that only considers a subset of extreme points of the polytope to define the north-western kernel. Future research includes the proposal of suitable subsets $R$ of permutations of $S_d$ tuned to data, as well as other approximation schemes. \section*{Appendix: Intermediate Results for the Proof of Theorem~\ref{theo:genfpsd}} \begin{lemma}\label{lem:fac}Let $a,b\in\{0,1\}^N$ be two binary vectors. The kernel $(a,b)\mapsto \dotprod{a}{b}!$ is positive definite.
\end{lemma} \begin{proof} For $N=1$ the kernel is always equal to $1$ and is thus trivially positive definite. For $N>1$, the recursion $\dotprod{a}{b}!=\dotprod{a_{1\cdot\cdot N-1}}{b_{1\cdot\cdot N-1}}!\,(a_{N}b_{N}\dotprod{a_{1\cdot\cdot N-1}}{b_{1\cdot\cdot N-1}}+1)$ provides the expression $$\dotprod{a}{b}! = \prod_{t=1}^{N-1} \left(a_{t+1}b_{t+1}\dotprod{a_{1\cdot\cdot t}}{b_{1\cdot\cdot t}}+1\right),$$ which shows that $\dotprod{a}{b}!$ is the product of $N-1$ positive definite kernels on different features of $a$ and $b$.\end{proof} \begin{remark}Rather than the lemma itself, we will use the identity above in the proof of Lemma~\ref{lem:fact}. We conjecture that this result can be extended to integral vectors. Numerical counterexamples show that this result cannot be generalized to vectors of $\mathbb{R}^N$ through Euler's or Hadamard's $\Gamma$ function.\end{remark} \begin{lemma}\label{lem:fact}Let $\rho,\gamma\in\{1,\cdots,d\}^N$. The kernel $(\rho,\gamma)\mapsto \prod_{ij} x_{ij}!$, where $X=\chi(\rho\,;\gamma)$, is positive definite. \end{lemma} \begin{proof} An integral vector $\rho \in \{1,\cdots,d\}^N$ with $N$ components can be represented as a family of $d$ binary row vectors $\rho^1,\cdots,\rho^d$ of length $N$, where for $n\leq N$, $\rho^i_n\defeq \mathbf 1_{\rho_n=i}$. For instance, $$\text{if }\rho=\begin{bmatrix}1\,1\,2\,2\,2\,1\,3\,1\,3\,3\end{bmatrix},\text{ then } \begin{bmatrix}\rho^1\\\rho^2\\\rho^3\end{bmatrix}=\begin{bmatrix} 1&1&0&0&0&1&0&1&0&0\\ 0&0&1&1&1&0&0&0&0&0\\ 0&0&0&0&0&0&1&0&1&1\\ \end{bmatrix}.$$ These $d$ binary vector representations can be used to obtain the matrix $\chi(\rho\,;\gamma)$. Indeed, it is easy to check that if $X=\chi(\rho\,;\gamma)$ then $x_{ij}=\dotprod{\rho^i}{\gamma^j}$. As a consequence, we have that for all indices $i,j$ the coefficient $x_{ij}!=\dotprod{\rho^i}{\gamma^j}!$. We obtain that the product of factorials $$ \prod_{ij}^d x_{ij}!
= \prod_{i,j}^d\dotprod{\rho^i}{\gamma^j}!, $$ is thus a product of kernels evaluated on all possible pairs among the $d\times d$ representations for $\rho$ and $\gamma$. Although one might be tempted to interpret this product as a convolution kernel~\citep{haussler99convolution} or a mapping kernel~\citep{shin2008generalization}, one should recall that such results only apply to \emph{sums} of local kernels and not to \emph{products}. Such products of kernels on parts are not, as simple counterexamples can show, positive definite in the general case. Using the decomposition introduced in the proof of Lemma~\ref{lem:fac}, we have however that $$ \begin{aligned}\prod_{ij}^d x_{ij}! &= \prod_{i,j}^d\dotprod{\rho^i}{\gamma^j}! = \prod_{i,j}^d \prod_{t=1}^{N-1} \left(\rho^i_{t+1}\gamma^j_{t+1}\dotprod{\rho^i_{1\cdot\cdot t}}{\gamma^j_{1\cdot\cdot t}}+1\right),\\ &= \prod_{t=1}^{N-1} \prod_{i,j}^d \left(\rho^i_{t+1}\gamma^j_{t+1}\dotprod{\rho^i_{1\cdot\cdot t}}{\gamma^j_{1\cdot\cdot t}}+1\right) = \prod_{t=1}^{N-1} \left(1+\sum_{i,j}^d \rho^i_{t+1}\gamma^j_{t+1}\dotprod{\rho^i_{1\cdot\cdot t}}{\gamma^j_{1\cdot\cdot t}}\right), \end{aligned} $$ where we have used in the last step the fact that exactly one of the $d^2$ products $(\rho^i_{t+1}\gamma^j_{t+1})_{ij}$ is nonzero, since $$ \rho^i_{t+1}\gamma^j_{t+1}=\begin{cases} 1, \text{ if } \rho_{t+1}=i \text{ and } \gamma_{t+1}=j, \\ 0, \text{ otherwise.}\end{cases} $$ The product of factorials is thus a product of $N-1$ positive definite kernels indexed by $t$ and defined on $\rho$ and $\gamma$, where each of these $N-1$ kernels is $1$ plus a convolution kernel operating on the $d$ decompositions of $\rho_{1\cdot\cdot t}$ and $\gamma_{1\cdot\cdot t}$ as $d$ binary feature vectors, that is $$ \prod_{ij}^d x_{ij}!
= \prod_{t=1}^{N-1} \left(1+k_t(\rho,\gamma)\right), $$ where $$k_{t}(\rho,\gamma)=\sum_{i,j}^d h_t(\rho^i,\gamma^j) \text{ and } h_t(a,b) = a_{t+1}b_{t+1}\dotprod{a_{1\cdot\cdot t}}{b_{1\cdot\cdot t}}.$$ \end{proof} \begin{lemma}\label{lem:perm} Let $\alpha=(\alpha_1,\cdots,\alpha_N)$ and $\beta=(\beta_1,\cdots,\beta_N)$ be two lists of $N$ elements in a set $\mathcal{X}$. Let $k$ be a symmetric kernel on $\mathcal{X}^N$, that is, a kernel invariant under a simultaneous permutation of the order of both $\alpha$ and $\beta$: $\forall \pi\in S_N,\; k(\alpha,\beta)=k(\alpha_\pi,\beta_\pi).$ Then $(\alpha,\beta)\mapsto \sum_{\pi\in S_N} k(\alpha,\beta_{\pi})$ is positive definite. \end{lemma} \begin{proof} The function $g$ defined below is, by~\citeauthor{haussler99convolution}'s (\citeyear{haussler99convolution}) convolution kernels framework, a positive definite kernel of $\alpha$ and $\beta$: $$ g(\alpha,\beta)=\sum_{\pi'\in S_N} \sum_{\pi\in S_N} k(\alpha_{\pi'},\beta_{\pi}). $$ Using the symmetry property of $k$, we have that $$ g(\alpha,\beta)=\sum_{\pi'\in S_N} \sum_{\pi\in S_N} k(\alpha,\beta_{{\pi'}^{-1}\circ\pi}) = N!\sum_{\pi\in S_N} k(\alpha,\beta_{\pi}), $$ which proves the result. \end{proof} \begin{lemma}\label{lem:decomp} $\sum_{\pi\in S_N} \kappa(\rho,\gamma_\pi) = r_1!\cdots r_d! \cdot c_1!\cdots c_d! \,T(r,c\,;K)$ \end{lemma} \begin{proof}For any couple of vectors $\rho,\gamma$ we have that both $\mathbf{k}_1$ and $\mathbf{k}_2$ only depend on $X=\chi(\rho\,;\gamma)$. This is implicitly the case in the definition of $\mathbf{k}_2$, and one can check that $$ \mathbf{k}_1(\rho,\gamma)= \prod_{t=1}^N k(\rho_t,\gamma_t) = \prod_{ij}^d k_{ij}^{x_{ij}}, \text{ where } X=\chi(\rho\,;\gamma). $$ With every permutation $\pi$ of $S_N$ we associate a transportation table $\chi(\rho\,;\gamma_\pi)$, which we call the pattern of $\pi$.
Following~\citet[\S2, p.7]{barvinok2008enumerating}, we know that the number of permutations $\pi$ that share the same pattern $X\in \mathbb{U}(r,c)$ depends on $X$, $r$ and $c$ only through a formula known as the Fisher--Yates statistic $n(X)$ of $X$, \begin{equation}\label{eq:fy} n(X)\defeq \card\{\pi\in S_N \,|\, \chi(\rho\,;\gamma_\pi) = X\} = \frac{r_1!\cdots r_d! \cdot c_1!\cdots c_d!}{\prod_{ij}x_{ij}!}. \end{equation} We thus have that $$ \begin{aligned} \sum_{\pi\in S_N} \kappa(\rho,\gamma_\pi) &= \sum_{X\in \mathbb{U}(r,c)} n(X)\, \mathbf{k}_1(\rho,\gamma_\pi) \mathbf{k}_2(\rho,\gamma_\pi) \\ & =\sum_{X\in \mathbb{U}(r,c)} \frac{r_1!\cdots r_d! \cdot c_1!\cdots c_d!}{\prod_{ij}^d x_{ij}!} \prod_{ij}^d k_{ij}^{x_{ij}} \frac{\prod_{ij}^d x_{ij}!}{r_1!\cdots r_d! \cdot c_1!\cdots c_d!}= \,T(r,c\,;K),\end{aligned}$$ where, in each summand, $\pi$ denotes any permutation whose pattern is $X$. \end{proof}
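The evaluation of $N(r,c\,;K,R)$ described above reduces to running the north-western corner rule on permuted histograms. The following Python sketch (the function names and the representation of permutations as $0$-indexed index arrays are our own; the paper itself gives no code) illustrates the computation:

```python
import math

def northwest_corner(r, c):
    """Greedy north-western corner rule: fill the transportation table
    from the top-left corner, exhausting row and column masses in order."""
    r, c = list(r), list(c)
    d = len(r)
    X = [[0] * d for _ in range(d)]
    i = j = 0
    while i < d and j < d:
        m = min(r[i], c[j])
        X[i][j] = m
        r[i] -= m
        c[j] -= m
        if r[i] == 0:
            i += 1   # row i is exhausted, move down
        else:
            j += 1   # column j is exhausted, move right
    return X

def nw_kernel(r, c, M, perms):
    """Sum exp(-<M, NW table>) over pairs of permutations in perms.
    The table NW_{sigma^-1 sigma'^-1}(r_sigma, c_sigma') is obtained by
    permuting the histograms, running the rule, and reading entry (i, j)
    of the result at position (sigma(i), sigma'(j)) of M."""
    d = len(r)
    total = 0.0
    for s in perms:
        for t in perms:
            X = northwest_corner([r[s[i]] for i in range(d)],
                                 [c[t[j]] for j in range(d)])
            cost = sum(M[s[i]][t[j]] * X[i][j]
                       for i in range(d) for j in range(d))
            total += math.exp(-cost)
    return total
```

Each inner iteration costs $O(d)$, in line with the $O(d\abs{R}^2)$ complexity stated in the remark above.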
https://arxiv.org/abs/1812.01086
Centroaffine Duality for Spatial Polygons
In this paper, we discuss centroaffine geometry of polygons in $3$-space. For a polygon $X$ that is locally convex with respect to an origin together with a transversal vector field $U$, we define the centroaffine dual pair $(Y,V)$ similarly to [6]. We prove that vertices of $(X,U)$ correspond to flattening points for $(Y,V)$ and also that constant curvature polygons are dual to planar polygons. As an application, we give a new proof of a known $4$ flattening points theorem for spatial polygons.
\section{Introduction} Affine differential geometry of curves in $3$-space studies differential concepts that are invariant under the affine group. When a distinguished origin is fixed, the study of such curves becomes part of centroaffine differential geometry. In this paper we consider centroaffine concepts of spatial polygons that come from discretizations of differential concepts, and so our results can be classified as discrete differential centroaffine geometry. We shall consider spatial polygons $X(i)$ that are locally convex with respect to an origin $O\in\mathbb{R}^3$, which means that the determinant \begin{equation*} \left[ X(i-1)-O,X(i)-O,X(i+1)-O \right] \end{equation*} does not change sign, together with transversal vector fields $U(i)$, which means that \begin{equation*} \left[ X(i)-O,X(i+1)-O, U(i) \right] \end{equation*} also does not change sign. The {\it (centroaffine) normal plane} is the plane generated by $X$ and $U$, while the {\it (centroaffine) focal set} is the envelope of normal planes (\cite{Craizer-Pesco}). We show that the focal set reduces to a line if and only if the pair $(X,U)$ has constant {\it curvature}. We define the dual pair $(Y,V)$ of $(X,U)$, which is also a locally convex spatial polygon $Y$ together with a transversal vector field $V$. This definition is a discrete counterpart of the centroaffine duality introduced in \cite{Nomizu2} for general smooth codimension $2$ centroaffine immersions (see also \cite{Craizer-Garcia}, where the case of smooth spatial curves is made more explicit). We prove many properties of this duality, among them a correspondence between {\it vertices} of $(X,U)$ and {\it flattening points} of the dual pair $(Y,V)$. We show also that $(X,U)$ has constant curvature if and only if $(Y,V)$ is planar. As a consequence, we describe explicitly the constant curvature pairs.
Equal-volume polygons are the discrete counterpart of curves parameterized by centroaffine arc-length (\cite{Craizer-Pesco}). Such polygons admit natural transversal vector fields that are parallel and unimodular. By using the duality tools, we give a characterization of the constant curvature equal-volume polygons. As an application of this centroaffine duality theory, we give another proof of a known $4$ flattening points theorem for polygons in $3$-space, by assuming that some radial projection is convex. This is a discrete counterpart of a theorem of Arnold for spatial smooth curves (\cite{Arnold}). The paper is organized as follows: In Section 2 we give the basic definitions and properties of centroaffine geometry of spatial polygons. In Section 3 we introduce the duality and prove its main properties. In Section 4 we characterize the constant curvature polygons, while in Section 5 we prove the above-mentioned $4$ flattening points theorem. \section{Centroaffine Geometry of Polygons} \subsection{Notation and terminology} In this paper, we consider polygons in $3$-space. In order to avoid misunderstandings, we use the terms ``edge'' and ``node'' for the elements of a polygon, leaving the terms ``vertex'' and ``flattening point'' for special edges or nodes. We denote by $\mathbb{Z}$ the set of integers and by $\mathbb{Z}^*$ the set $\mathbb{Z}+\frac{1}{2}$. A discrete function is a map from $\mathbb{Z}$ or $\mathbb{Z}^*$ to $\mathbb{R}$, while a discrete vector field is a map from $\mathbb{Z}$ or $\mathbb{Z}^*$ to $\mathbb{R}^3$. For any discrete function $g(i)$, we shall use the notation $$ g'(i+\tfrac{1}{2})=g(i+1)-g(i),\ \ g''(i)=g'(i+\tfrac{1}{2})-g'(i-\tfrac{1}{2}), $$ and so on. For periodic functions of period $n$, we shall use the periodic notation $X(i+kn)=X(i)$, $k\in\mathbb{Z}$. We shall denote by $\left[ \cdot, \cdot,\cdot\right]$ the standard volume in $\mathbb{R}^3$.
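To make the staggered-grid notation concrete, here is a small Python sketch (a minimal illustration under our own naming conventions; the paper contains no code) of the discrete derivative, the volume bracket $[\cdot,\cdot,\cdot]$ as a $3\times 3$ determinant, and the quantity $\alpha$ used in the next subsection to test local convexity:

```python
def det3(u, v, w):
    """Standard volume [u, v, w]: the 3x3 determinant of the vectors u, v, w."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def diff(g):
    """g'(i+1/2) = g(i+1) - g(i) for a periodic discrete function;
    entry k of the result stands for the half-integer index k+1/2."""
    n = len(g)
    return [g[(k + 1) % n] - g[k] for k in range(n)]

def alpha(X):
    """alpha(i) = [X(i-1), X(i), X(i+1)], taking O as the origin."""
    n = len(X)
    return [det3(X[(i - 1) % n], X[i], X[(i + 1) % n]) for i in range(n)]

def is_locally_convex(X):
    """X is locally convex with respect to the origin iff alpha(i) > 0 for all i."""
    return all(a > 0 for a in alpha(X))
```

For instance, the lifted square $X = \{(1,0,1), (0,1,1), (-1,0,1), (0,-1,1)\}$ has $\alpha(i)=2$ for all $i$, hence is locally convex with respect to the origin.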
\subsection{Locally convex polygons and transversal parallel vector fields} Consider a closed spatial polygon $X$ with nodes $X(i)$, $1\leq i\leq n$, and $O\in\mathbb{R}^3$. Let $\alpha=\alpha(X)$ be defined by \begin{equation*}\label{eq:alpha} \alpha(i)=\left[ X(i-1)-O,X(i)-O,X(i+1)-O \right]. \end{equation*} We say that $X$ is {\it locally convex} with respect to $O$ if $\alpha(i)>0$, for any $i\in\mathbb{Z}$. Consider a vector field $U$ along $X$ given by $U(i)\in\mathbb{R}^3$, $1\leq i\leq n$. Define $\beta=\beta(X,U)$ by \begin{equation*}\label{eq:beta} \beta(i+\tfrac{1}{2})=\left[ X(i)-O,X(i+1)-O, U(i) \right]. \end{equation*} We say that $U$ is {\it transversal} to $X$ with respect to $O$ if $\beta(i+\tfrac{1}{2})>0$, for any $i\in\mathbb{Z}$. Throughout this paper, we shall take $O$ to be the origin. A transversal vector field $U$ to $X$ is called {\it parallel} if \begin{equation}\label{eq:Defineb} U'(i+\tfrac{1}{2})= - b(i+\tfrac{1}{2}) X'(i+\tfrac{1}{2}), \end{equation} for a certain scalar function $b=b(X,U)$. We call $b(i+\tfrac{1}{2})$ the {\it curvature} of the edge $(i+\tfrac{1}{2})$. \subsection{Affine focal sets and vertices}\label{sec:NormalLines} For a locally convex spatial polygon $X$ with a transversal vector field $U$, the line \begin{equation*}\label{eq:NormalLine} X(i)+tU(i),\ \ t\in\mathbb{R}, \end{equation*} is called the {\it normal line} at $i$. The plane generated by $X$ and $U$ is called the {\it (centroaffine) normal plane}. \begin{lemma}\label{lemma:Ubar} Consider a transversal vector field $\bar{U}$. Then $\bar{U}$ is a parallel vector field contained in the normal plane of the pair $(X,U)$ if and only if we can write \begin{equation}\label{eq:UNormalPlane} \bar{U}=cX+dU, \end{equation} for some constants $c$ and $d$, $d\neq 0$. \end{lemma} \begin{proof} If $\bar{U}$ is of the form \eqref{eq:UNormalPlane}, then clearly $\bar{U}$ is parallel and belongs to the normal plane.
Conversely, if $\bar{U}$ belongs to the normal plane, then we can write $\bar{U}=cX+dU$. Since $\bar{U}$ is also parallel, $c$ and $d$ must be constants, thus proving the lemma. \end{proof} The normal lines at $i$ and $i+1$ meet at \begin{equation}\label{eq:NormalMeet} E(i+\tfrac{1}{2})=X(i)+b^{-1}(i+\tfrac{1}{2})U(i)=X(i+1)+b^{-1}(i+\tfrac{1}{2})U(i+1). \end{equation} The {\it (centroaffine) focal set} is the envelope of the normal planes, i.e., the conical polyhedron with center $O$ over the polygon whose nodes are $E(i+\tfrac{1}{2})$, $i\in\mathbb{Z}$ (\cite{Craizer-Pesco}). \begin{Proposition} \label{prop:ConstantCurvature} The following statements are equivalent: \begin{enumerate} \item The focal set of $(X,U)$ reduces to a line. \item The curvature $b$ of $(X,U)$ is constant. \item There exists a constant vector field $E$ contained in all normal planes of $(X,U)$. \item There exists a constant vector field $E$ such that $E=cX+dU$, for some constants $c$ and $d$. \end{enumerate} \end{Proposition} \begin{proof} It is easy to verify that three consecutive normal lines at $i-1$, $i$ and $i+1$ of $(X,U)$ are concurrent if and only if $b(i+\tfrac{1}{2})=b(i-\tfrac{1}{2})$. So $E$ defined by Equation \eqref{eq:NormalMeet} is constant if and only if $b$ is constant. Moreover, the affine focal set reduces to a line if and only if $E$ is constant. Thus $(1)\Leftrightarrow(2)$ and the implication $(1)\Rightarrow(3)$ also holds. If we assume that (3) holds, the affine focal set is the line passing through $O$ and $E$ and so (1) also holds. The equivalence between $(3)$ and $(4)$ is given by Lemma \ref{lemma:Ubar}. \end{proof} We say that an edge $(i+\tfrac{1}{2})$ is a {\it vertex} of $(X,U)$ if \begin{equation}\label{eq:DefineVertex} b'(i)\cdot b'(i+1)<0. 
\end{equation} \subsection{Flattening points} Let $\Delta=\Delta(X)$ be defined by \begin{equation*} \Delta(i+\tfrac{1}{2})=\left[X(i+2)-X(i+1), X(i+1)-X(i),X(i)-X(i-1)\right]. \end{equation*} A node $i$ of the polygon $X$ is said to be a {\it flattening point} if \begin{equation*} \Delta(i-\tfrac{1}{2})\cdot\Delta(i+\tfrac{1}{2})< 0. \end{equation*} Geometrically, the flattening point condition means that $X(i-2)$ and $X(i+2)$ are on the same side of the plane defined by $X(i-1),X(i),X(i+1)$. We remark that, in \cite[p.218]{Pak}, a flattening point is called a {\it support point}. For each $i$, take $\lambda(i)=\lambda(X,U)(i)$ such that $-\lambda(i)X(i)+U(i)$ belongs to the osculating plane, i.e., \begin{equation}\label{eq:Lambda} \left[ X'(i+\tfrac{1}{2}), X''(i), -\lambda(i)X(i)+U(i) \right]=0. \end{equation} \begin{lemma} We can write \begin{equation}\label{eq:Lambda2} -\lambda(i)X(i)+U(i)=AX'(i-\tfrac{1}{2})+BX'(i+\tfrac{1}{2}), \end{equation} where $\alpha(i) A=-\beta(i+\tfrac{1}{2})$. \end{lemma} \begin{proof} It follows from Equation \eqref{eq:Lambda} that Equation \eqref{eq:Lambda2} holds and $$ A\left[ X'(i-\tfrac{1}{2}), X'(i+\tfrac{1}{2}), X(i) \right]= \left[ -\lambda(i)X(i)+U(i), X'(i+\tfrac{1}{2}), X(i) \right], $$ thus proving the lemma. \end{proof} \begin{lemma}\label{lemma:Delta} We have that $$ \beta(i+\tfrac{1}{2})\Delta(i+\tfrac{1}{2})=\lambda'(i+\tfrac{1}{2})\alpha(i)\alpha(i+1). $$ \end{lemma} \begin{proof} Take the difference of Equation \eqref{eq:Lambda} at $i$ and $i+1$ to obtain, after some manipulations, the following relation: $$ \left[ X'(i+\tfrac{1}{2}), -X'''(i+\tfrac{1}{2}), -\lambda(i)X(i)+U(i) \right]=-\lambda'(i+\tfrac{1}{2})\left[ X'(i+\tfrac{1}{2}),X''(i+1),X(i)\right]. $$ Now from Equation \eqref{eq:Lambda2} we obtain $$ \frac{\beta(i+\tfrac{1}{2})}{ \alpha(i) }\ \Delta(i+\tfrac{1}{2})= \lambda'(i+\tfrac{1}{2})\alpha(i+1), $$ thus proving the lemma.
\end{proof} From the above lemma, we conclude the following proposition: \begin{Proposition}\label{prop:CharFlat} The node $i$ is a flattening point for $X$ if and only if $$ \lambda'(i-\frac{1}{2})\cdot \lambda'(i+\frac{1}{2})<0, $$ where $\lambda=\lambda(X,U)$ and $U$ is any transversal parallel vector field along $X$. \end{Proposition} \subsection{Equal-volume polygons} We say that the polygon $X$ is {\it equal-volume} with respect to the origin $O$ if $\alpha(i)=1$, for any $i\in\mathbb{Z}$ (see \cite{Craizer-Pesco}). We say that a transversal vector field $U$ is {\it unimodular} with respect to $O$ if $\beta(i+\tfrac{1}{2})=1$, for any $i\in\mathbb{Z}$. For equal-volume polygons, there is a natural choice of a transversal parallel vector field, namely \begin{equation}\label{eq:Uequalvolume} U(i)= X''(i)+\lambda(i)X(i), \ \ i\in\mathbb{Z}. \end{equation} It is easy to see that $U$ is unimodular. \begin{lemma} The vector field $U$ defined by Equation \eqref{eq:Uequalvolume} is parallel. \end{lemma} \begin{proof} We use the results of \cite{Craizer-Pesco}. Write \begin{equation*} \left\{ \begin{array}{c} X'''(i+\tfrac{1}{2})=-\rho_2(i)X'(i+\tfrac{1}{2})+\tau(i+\tfrac{1}{2})X(i+1)\\ X'''(i+\tfrac{1}{2})=-\rho_1(i+1)X'(i+\tfrac{1}{2})+\tau(i+\tfrac{1}{2})X(i), \end{array} \right. \end{equation*} for some scalar functions $\rho_1$, $\rho_2$ and $\tau$ satisfying the compatibility equation $$ \tau(i+\tfrac{1}{2})=\rho_2(i)-\rho_1(i+1). $$ Defining $\bar{\lambda}$ by $-\bar{\lambda}'(i+\tfrac{1}{2})=\tau(i+\tfrac{1}{2})$, we have that $$ U(i)=X''(i)+\bar{\lambda}(i)X(i) $$ is parallel. Now condition \eqref{eq:Lambda} says that $$ \lambda(i)-\bar{\lambda}(i)=0, $$ thus implying that $\bar\lambda=\lambda$. \end{proof} \begin{Proposition}\label{prop:ParallelUnimodularEV} Let $\bar{U}$ be a transversal vector field that is parallel and unimodular.
Then \begin{equation*} \bar{U}= U+cX, \end{equation*} where $U$ is defined by Equation \eqref{eq:Uequalvolume} and $c$ is some constant. \end{Proposition} \begin{proof} Since $\bar{U}$ is parallel, $\bar{U}(i)=cX(i)+dU(i)$, for certain constants $c$ and $d$. Since $\beta(i+\tfrac{1}{2})=1$, we conclude that $d=1$, thus proving the proposition. \end{proof} \section{Duality} \subsection{Definition and properties} The general notion of duality for codimension $2$ centroaffine immersions can be found in \cite[N9]{Nomizu} and \cite{Nomizu2}. For an explicit description of centroaffine duality of smooth spatial curves, see \cite{Craizer-Garcia}. We describe here a version of this duality for polygons in $3$-space. Since $\left[X(i), X(i+1), U(i) \right]>0$, one can define a polygon $Y$ and a vector field $V$ along $Y$ uniquely by the following conditions: \begin{equation*}\label{eq:Dual1} Y(i+\tfrac{1}{2})\cdot X'(i+\tfrac{1}{2})=0;\ Y(i+\tfrac{1}{2})\cdot U(i)=1;\ Y(i+\tfrac{1}{2})\cdot X(i)=0, \end{equation*} and \begin{equation*}\label{eq:Dual2} V(i+\tfrac{1}{2})\cdot X'(i+\tfrac{1}{2})=0;\ V(i+\tfrac{1}{2})\cdot U(i)=0;\ V(i+\tfrac{1}{2})\cdot X(i)=1. \end{equation*} We say that $(Y,V)$ is the {\it dual} pair of $(X,U)$. The following lemma is straightforward: \begin{lemma}\label{lemma:Dual} Consider a locally convex polygon $X$ with a parallel transversal vector field $U$ and denote by $(Y,V)$ its dual pair. Then \begin{equation*}\label{eq:Dual3} Y(i+\tfrac{1}{2})\cdot U(i+1)=1;\ Y(i+\tfrac{1}{2})\cdot X(i+1)=0, \end{equation*} and \begin{equation*}\label{eq:Dual4} V(i+\tfrac{1}{2})\cdot U(i+1)=0;\ V(i+\tfrac{1}{2})\cdot X(i+1)=1. \end{equation*} Moreover, $(X,U)$ is the dual pair of $(Y,V)$. \end{lemma} Recall that $$ \alpha(Y)(i+\tfrac{1}{2})=\left[Y(i-\tfrac{1}{2}), Y(i+\tfrac{1}{2}), Y(i+\tfrac{3}{2})\right] $$ and $$ \beta(Y,V)(i)=\left[Y(i-\tfrac{1}{2}), Y(i+\tfrac{1}{2}), V(i+\tfrac{1}{2})\right]. 
$$ \begin{lemma}\label{lemma:BetaDual} We have that $$ \beta(Y,V)(i)=\frac{\alpha(i)}{\beta(i-\tfrac{1}{2}) \beta(i+\tfrac{1}{2}) }, \ \ \alpha(Y)(i+\tfrac{1}{2})=\frac{\alpha(i)\alpha(i+1)}{\beta(i-\tfrac{1}{2})\beta(i+\tfrac{1}{2})\beta(i+\tfrac{3}{2})}, $$ where $\alpha=\alpha(X)$ and $\beta=\beta(X,U)$. \end{lemma} \begin{proof} Write \begin{equation*} \beta(i+\tfrac{1}{2}) Y(i+\tfrac{1}{2})=X(i)\times X(i+1),\ \ \beta(i-\tfrac{1}{2}) Y(i-\tfrac{1}{2})=X(i-1)\times X(i), \end{equation*} Taking the vector product of both equations we obtain $$ \alpha(i)X(i)=\beta(i-\tfrac{1}{2})\beta(i+\tfrac{1}{2})Y(i-\tfrac{1}{2})\times Y(i+\tfrac{1}{2}). $$ Now take the dot product with $V(i+\tfrac{1}{2})$ to obtain the first formula. By duality, we can write $$ \alpha(Y)(i+\tfrac{1}{2})=\beta(Y,V)(i)\beta(Y,V)(i+1)\beta(i+\tfrac{1}{2}). $$ Now use the first formula to obtain the second one. \end{proof} From the above lemma, we conclude that $Y$ is a locally convex polygon and that $V$ is a transversal vector field. \begin{lemma}\label{lemma:Wparallel} The transversal vector field $V$ is parallel and $$ V'(i)=\lambda(i)Y'(i), $$ where $\lambda=\lambda(X,U)$. We conclude that $b(Y,V)=-\lambda(X,U)$. \end{lemma} \begin{proof} Observe that $V'(i)$ is orthogonal to $U(i)$ and to $X(i)$ and the same occurs with $Y'(i)$. Thus we conclude that $V'(i)$ is parallel to $Y'(i)$, and we write $V'(i)=c(i)Y'(i)$. We claim that $c=\lambda(X,U)$. In fact, substituting $$ \beta(i+\tfrac{1}{2}) Y(i+\tfrac{1}{2})=X(i)\times X(i+1),\ \ \beta(i+\tfrac{1}{2}) V(i+\tfrac{1}{2})=X'(i+\tfrac{1}{2})\times U(i), $$ in Equation \eqref{eq:Lambda} we obtain $$ \lambda(i)Y(i+\tfrac{1}{2})\cdot X''(i)-V(i+\tfrac{1}{2})\cdot X''(i)=0. $$ Since $Y(i+\tfrac{1}{2})\cdot X'(i+\tfrac{1}{2})=0$, we conclude that $$ Y(i+\tfrac{1}{2})\cdot X''(i)+Y'(i)\cdot X'(i-\tfrac{1}{2})=0, $$ and the same holds for $V$. 
Thus $$ \lambda(i)Y'(i)\cdot X'(i-\tfrac{1}{2})-V'(i)\cdot X'(i-\tfrac{1}{2})=0, $$ which implies that $(c(i)-\lambda(i))Y'(i)\cdot X'(i-\tfrac{1}{2})=0$. Since $$ Y'(i)\cdot X'(i-\tfrac{1}{2})=Y(i+\tfrac{1}{2})\cdot X'(i-\tfrac{1}{2})=-\frac{\alpha(i)}{\beta(i+\tfrac{1}{2})}\neq 0, $$ the lemma is proved. \end{proof} The next lemma shows that duality preserves the centroaffine normal plane. \begin{lemma} Let $(Y,V)$ be the dual of $(X,U)$ and consider another transversal vector field $\bar{U}$ given by Equation \eqref{eq:UNormalPlane}. Then the dual pair of $(X,\bar{U})$ is given by \begin{equation*}\label{eq:DualNormalPlane} \bar{Y}=d^{-1}Y,\ \ \bar{V}=-cd^{-1}Y+V. \end{equation*} \end{lemma} \begin{proof} Straightforward verifications. \end{proof} \subsection{The equal-volume case} \begin{lemma}\label{lemma:DualUnimodular} Assume $X$ is equal-volume and $U$ is a parallel and unimodular transversal vector field. Denoting by $(Y,V)$ the dual pair, we have that $Y$ is also equal-volume and $V$ is a parallel and unimodular transversal vector field. \end{lemma} \begin{proof} This lemma is a direct consequence of Lemma \ref{lemma:BetaDual}. \end{proof} \subsection{Coplanarity and concurrency} \begin{Proposition}\label{prop:Coplanarity} Four consecutive nodes of $X$ are coplanar if and only if the corresponding three normal lines of $(Y,V)$ meet at a point. \end{Proposition} \begin{proof} From Section \ref{sec:NormalLines}, we have that three consecutive normal lines of $(Y,V)$, at $(i-\tfrac{1}{2})$, $(i+\tfrac{1}{2})$ and $(i+\tfrac{3}{2})$, are concurrent if and only if $b(Y,V)(i)=b(Y,V)(i+1)$. By Lemma \ref{lemma:Wparallel}, this is equivalent to $\lambda(X,U)(i)=\lambda(X,U)(i+1)$. From Lemma \ref{lemma:Delta}, this is equivalent to $\Delta(X)(i+\tfrac{1}{2})=0$, thus proving the proposition. \end{proof} We say that the polygon $X$ is {\it generic} if no four consecutive nodes are coplanar.
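Genericity and flattening points are pure sign conditions on the determinants $\Delta$, so they are simple to test numerically. A Python sketch (helper names are ours; the paper contains no code), assuming the polygon is given as a periodic list of coordinate triples:

```python
def det3(u, v, w):
    """3x3 determinant [u, v, w]."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def delta(X):
    """Delta(k+1/2) = [X(k+2)-X(k+1), X(k+1)-X(k), X(k)-X(k-1)];
    entry k of the returned list stands for the half-integer k+1/2."""
    n = len(X)
    def edge(k):  # X'(k+1/2) = X(k+1) - X(k), periodic indices
        return tuple(a - b for a, b in zip(X[(k + 1) % n], X[k % n]))
    return [det3(edge(k + 1), edge(k), edge(k - 1)) for k in range(n)]

def is_generic(X):
    """No four consecutive nodes are coplanar: Delta never vanishes."""
    return all(d != 0 for d in delta(X))

def flattening_points(X):
    """Nodes i where Delta(i-1/2) * Delta(i+1/2) < 0."""
    D = delta(X)
    n = len(X)
    return [i for i in range(n) if D[(i - 1) % n] * D[i] < 0]
```

For example, for a hexagon with alternating vertex heights over a planar convex hexagon, $\Delta$ alternates in sign, so the polygon is generic and every node is a flattening point.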
\begin{corollary} The polygon $X$ is generic if and only if $b(Y,V)'(i)\neq 0$, for any $i\in\mathbb{Z}$. \end{corollary} \subsection{Vertex and flattening points} The main result of the section is the following: \begin{Proposition}\label{prop:DualFlatVertex} Assume that $X$ is a generic polygon. Then the node $i$ is a flattening point of $(X,U)$ if and only if the edge $i$ is a vertex of $(Y,V)$. \end{Proposition} \begin{proof} We have that the edge $i$ of $(Y,V)$ is a vertex if and only if $$ b(Y,V)'(i-\frac{1}{2})\cdot b(Y,V)'(i+\frac{1}{2})<0. $$ From Lemma \ref{lemma:Wparallel}, this is equivalent to $$ \lambda(X,U)'(i-\frac{1}{2})\cdot \lambda(X,U)'(i+\frac{1}{2})<0. $$ By Proposition \ref{prop:CharFlat}, this condition is equivalent to the node $i$ of $(X,U)$ being a flattening point. \end{proof} \section{Planar and Constant Curvature Polygons} \label{sec:AffineCylindricalPedal} \subsection{Affine cylindrical pedal} Consider a locally convex planar polygon $x(i)$ and let $u(i)\in\mathbb{R}^2$ be a transversal planar vector field that is {\it parallel}, i.e., we can write \begin{equation*} u'(i+\tfrac{1}{2})=-b(i+\tfrac{1}{2})x'(i+\tfrac{1}{2}), \ \ i\in\mathbb{Z}, \end{equation*} for some scalar function $b=b(x,u)$. The lines $x+tu$, $t\in\mathbb{R}$, are called the {\it normal lines} of the pair $(x,u)$. The {\it lifting} of $(x,u)$ is the pair $(X,U)$ given by $$ X(i)=(x(i),1),\ \ U(i)=(u(i),0). $$ Observe that $U$ is parallel along $X$ and that $b(X,U)=b(x,u)$. The {\it affine cylindrical pedal} of $(x,u)$ is defined by $$ Y(i+\tfrac{1}{2})=\left(y(i+\tfrac{1}{2}), -y(i+\tfrac{1}{2})\cdot x(i) \right), $$ where $y$ denotes the co-normal vector field of $(x,u)$, i.e., $$ y(i+\tfrac{1}{2})\cdot x'(i+\tfrac{1}{2})=0,\ \ y(i+\tfrac{1}{2})\cdot u(i)=1. $$ It is easy to verify that the constant vector field $E=(0,0,1)$ is transversal to $Y$ and $(Y,E)$ is dual to $(X,U)$. By Proposition \ref{prop:ConstantCurvature}, $(Y,E)$ is a constant curvature pair. 
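In coordinates, the dual pair can be computed directly from the cross-product identities used in the duality proofs, namely $\beta(i+\tfrac{1}{2})\,Y(i+\tfrac{1}{2})=X(i)\times X(i+1)$ and $\beta(i+\tfrac{1}{2})\,V(i+\tfrac{1}{2})=X'(i+\tfrac{1}{2})\times U(i)$. A minimal Python sketch (helper names are ours; the paper contains no code):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dual_pair(X, U):
    """Dual pair (Y, V) of (X, U), with O at the origin:
    beta * Y(i+1/2) = X(i) x X(i+1),
    beta * V(i+1/2) = X'(i+1/2) x U(i),
    where beta(i+1/2) = [X(i), X(i+1), U(i)] > 0."""
    n = len(X)
    Y, V = [], []
    for i in range(n):
        Xi, Xj, Ui = X[i], X[(i + 1) % n], U[i]
        Xp = tuple(a - b for a, b in zip(Xj, Xi))  # X'(i+1/2)
        beta = dot(cross(Xi, Xj), Ui)              # [X(i), X(i+1), U(i)]
        Y.append(tuple(a / beta for a in cross(Xi, Xj)))
        V.append(tuple(a / beta for a in cross(Xp, Ui)))
    return Y, V
```

On any sample pair one can verify the six defining relations $Y\cdot X'=0$, $Y\cdot U=1$, $Y\cdot X=0$ and $V\cdot X'=0$, $V\cdot U=0$, $V\cdot X=1$; in particular, with $U$ the constant field $E=(0,0,1)$, the output $Y$ is the affine cylindrical pedal construction described above.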
The following proposition says that if, conversely, we start with a spatial polygon transversal to a constant vector field $E$, then it is necessarily the affine cylindrical pedal of some planar pair $(x,u)$. \begin{Proposition}\label{prop:AffineCylindricalPedal} Assume that $Y(i+\tfrac{1}{2})=(y(i+\tfrac{1}{2}),z(i+\tfrac{1}{2}))$ is a locally convex spatial polygon transversal to a constant vector field $E$. Then $Y$ is the affine cylindrical pedal of some planar parallel pair $(x,u)$. \end{Proposition} \begin{proof} Denote by $(X,U)$ the dual of $(Y,E)$. Since $(Y,E)$ is a constant curvature pair, Proposition \ref{prop:Coplanarity} implies that $X$ is planar. Moreover, since $U$ is orthogonal to $E$, it must belong to the same plane. Thus $(X,U)$ is the lifting of some planar pair $(x,u)$. \end{proof} \begin{remark} Consider any locally convex polygon $Y$. Then it is locally transversal to a constant vector field $E$ that we may assume, by an affine change of coordinates, to be $(0,0,1)$. Then Proposition \ref{prop:AffineCylindricalPedal} implies $Y$ is locally an affine cylindrical pedal. \end{remark} \begin{corollary} Consider a polygon $Y$ in $3$-space and a transversal parallel vector field $V$ such that the pair $(Y,V)$ has constant curvature. Then $Y$ is the affine cylindrical pedal of a planar polygon $x$ with a transversal parallel vector field $u$. \end{corollary} \begin{proof} It follows from Proposition \ref{prop:ConstantCurvature} that $(Y,V)$ has constant curvature if and only if there exists a constant transversal vector field $E$ satisfying $V=cY+E$, for some constant $c$. By Proposition \ref{prop:AffineCylindricalPedal}, $Y$ is the affine cylindrical pedal of some planar pair $(x,u)$. 
\end{proof} \subsection{Constant curvature equal-volume polygons} A planar polygon $x$ is called {\it equal-area} if \begin{equation*} \left[ x(i)-x(i-1), x(i+1)-x(i) \right]=1, \end{equation*} where $[\cdot,\cdot]$ denotes the determinant (signed area) of two planar vectors (see \cite{Craizer-Teixeira-Alvim}). We say that a transversal vector field $u$ is {\it unimodular} if \begin{equation*} \left[ x(i+1)-x(i), u(i) \right]=1. \end{equation*} For equal-area polygons, it is easy to verify that \begin{equation*} u(i)=x''(i) \end{equation*} is the only transversal vector field that is parallel and unimodular. Observe that, in this case, the lifting $X=(x,1)$ of $x$ is equal-volume and the transversal vector field $U=(u,0)$ is parallel and unimodular. The affine cylindrical pedal of $(x,u)$ is a pair $(Y,E)$, where $Y$ is a locally convex polygon and $E=(0,0,1)$. Moreover, by Lemma \ref{lemma:DualUnimodular}, the pair $(Y,E)$ is also equal-volume and unimodular. We remark that it is not always true that a locally convex polygon $Y$ is the affine cylindrical pedal of a pair $(x,u)$ with $x$ equal-area and $u$ unimodular. In fact, we have the following proposition: \begin{Proposition}\label{prop:AffineCylindricalEV} Consider an equal-volume polygon $Y$ in $3$-space. Then it is the cylindrical pedal of an equal-area polygon $x$ with $u$ unimodular if and only if there exists a unimodular constant vector field $E$ transversal to $Y$. \end{Proposition} \begin{proof} If $(Y,E)$ is unimodular, then its dual $(X,U)$ is also unimodular. By Proposition \ref{prop:AffineCylindricalPedal}, $(X,U)$ is the lifting of a planar pair $(x,u)$, with $x$ equal-area and $u$ unimodular. \end{proof} The next proposition gives a characterization of equal-volume polygons of constant curvature: \begin{Proposition} Consider an equal-volume polygon $Y$ in $3$-space. Then it has constant curvature if and only if it is the affine cylindrical pedal of some equal-area planar polygon $x$.
\end{Proposition} \begin{proof} Given an equal-volume polygon $Y$, denote by $V$ a parallel and unimodular transversal vector field. Then Proposition \ref{prop:ConstantCurvature} says that the pair $(Y,V)$ has constant curvature if and only if there exists a constant vector field $E$ that is also transversal and unimodular, which by Proposition \ref{prop:ParallelUnimodularEV} must be of the form $V+cY$, for some constant $c$. By Proposition \ref{prop:AffineCylindricalEV}, this is equivalent to $Y$ being the affine cylindrical pedal of some planar equal-area polygon $x$. \end{proof}

\section{Application: A $4$ Flattening Points Theorem}

As an application of the centroaffine duality, we shall give a new proof of a $4$ flattening points theorem for convex polygons in $3$-space. The proof is based on a $4$-vertex theorem for planar polygons described in \cite{Tabach}.

\subsection{Statement of the theorem}

A polygon $X$ is said to be {\it weakly convex} if it lies on the surface of its convex hull (\cite[p.201]{Pak}). We shall consider a stronger convexity condition, namely, that some radial projection of the spatial polygon is a planar convex polygon (see Figure \ref{fig:Fig1}). The following theorem is proved in \cite{Pak} under the hypothesis of weak convexity. \begin{thm}\label{thm:DiscreteArnold} Let $X$ be a generic closed polygon in $3$-space such that, for some center $O$, the radial projection of $X$ in a plane is a convex planar polygon. Then $X$ admits at least $4$ flattening points. \end{thm} \begin{figure}[htb] \centering \includegraphics[width=0.40\linewidth]{Fig1.png} \caption{ A spatial polygon whose radial projection is convex and its flattening points. } \label{fig:Fig1} \end{figure} This theorem is a polygonal version of the following well-known $4$-flattening points theorem of Arnold for smooth spatial curves (\cite{Arnold}).
Recall that a flattening point of a smooth curve $c:[a,b]\to\mathbb{R}^3$ is a point $t\in[a,b]$ such that $c'''(t)$ belongs to the osculating plane of $c$ at $t$. \begin{thm}\label{thm:Arnold} Let $c:[a,b]\to\mathbb{R}^3$ be a closed smooth curve such that, for some center $O\in\mathbb{R}^3$, the radial projection of $c$ in a plane is a convex planar curve. Then $c$ admits at least $4$ flattening points. \end{thm} For a discussion of different types of convexity of spatial curves and other smooth $4$ flattening points theorems, see \cite{Uribe-Vargas}. \subsection{Dual of a convex affine cylindrical pedal} Assume that $X(i)=(x(i),z(i))$ is a locally convex spatial polygon transversal to $E=(0,0,1)$. By Proposition \ref{prop:AffineCylindricalPedal}, $X$ is the affine cylindrical pedal of a planar polygon $y(i+\tfrac{1}{2})$. \begin{Proposition}\label{prop:ConvexAffineCylindricalPedal} If $X(i)=\lambda(i)\left(\gamma(i),1\right)$, with $\lambda(i)>0$ and $\gamma(i)$ convex containing $(0,0)$ in its interior, then $y(i+\tfrac{1}{2})$ is convex (see Figure \ref{fig:Fig2}). \end{Proposition} \begin{proof} Recall that a locally convex planar polygon $y$ is convex if and only if its index is $1$. One can think of the index of a planar locally convex polygon as the sum of its external angles divided by $2\pi$. If the index of $y$ were greater than $1$, then the index of its co-normal vector field $x$ would also be greater than $1$. On the other hand, by the convexity of $\gamma$, the polygon $x$ intersects each ray from $(0,0)$ at most once, which is a contradiction. \end{proof} \begin{figure}[htb] \centering \includegraphics[width=0.60\linewidth]{Fig2.png} \caption{ The polygon $X(i)=(x(i),z(i))=\lambda(i)(\gamma(i),1)$ and its dual pair $(y,v)(i+\tfrac{1}{2})$. 
} \label{fig:Fig2} \end{figure}

\subsection{Proof of Theorem \ref{thm:DiscreteArnold}}

Consider a locally convex polygon $X$ in $3$-space whose projection in a plane with respect to a center $O$ is a convex planar polygon. We may assume that $O=(0,0,0)\in\mathbb{R}^3$ and that the plane of projection is $z=1$. Thus there exist $\lambda(i)>0$ such that $$ X(i)=\lambda(i)(\gamma(i),1), $$ where $\gamma(i)$ is a convex planar polygon. We can also assume, w.l.o.g., that $(0,0)$ is contained in the interior of $\gamma$. This implies that $E=(0,0,1)$ is a transversal vector field along $X$. Denoting by $(Y,V)$ the dual pair of $(X,E)$, Proposition \ref{prop:ConvexAffineCylindricalPedal} says that we can write $Y=(y,1)$ and $V=(v,0)$, for some planar convex polygon $y$ and parallel planar vector field $v$ along $y$. Moreover, $X=(x,z)$ is the affine pedal of $(y,v)$, i.e., $x$ is the co-normal of $(y,v)$ and $$ z(i)=x(i)\cdot y(i+\tfrac{1}{2}). $$

We claim that vertices of $(Y,V)$ correspond to vertices of $(y,v)$ in the sense of \cite{Tabach}. In fact, we can write \begin{equation}\label{eq:b2} v'(i)= -b(i)y'(i), \end{equation} where $b=b(Y,V)$ is defined in Equation \eqref{eq:Defineb}. This equation implies that $v$ is {\it exact} with respect to $y$ and that the edge $(i)$ is a vertex of $(y,v)$ if and only if $b'(i-\tfrac{1}{2})\cdot b'(i+\tfrac{1}{2})<0$ (\cite{Tabach}). By Equation \eqref{eq:DefineVertex}, this is equivalent to the edge $(i)$ being a vertex of $(Y,V)$.

The vector field $v$ is called generic for $y$ in \cite{Tabach} if no $3$ consecutive normal lines $y(i+\tfrac{1}{2})+tv(i+\tfrac{1}{2})$ intersect at a point. This is equivalent to saying that no $3$ consecutive lines $Y(i+\tfrac{1}{2})+tV(i+\tfrac{1}{2})$ intersect at a point. By Proposition \ref{prop:Coplanarity}, this is equivalent to the condition that no $4$ consecutive points of $X$ are coplanar, which in fact means that $X$ is generic.
We are now in a position to use the following theorem, proved in \cite{Tabach}: \begin{thm} Assume that $y$ is a planar convex polygon and that the generic transversal vector field $v$ is exact. Then the pair $(y,v)$ admits at least $4$ vertices. \end{thm} From this theorem, we conclude that $(y,v)$, and hence $(Y,V)$, admits at least $4$ vertices. By Proposition \ref{prop:DualFlatVertex}, this implies that $X$ admits at least $4$ flattening points, thus completing the proof of Theorem \ref{thm:DiscreteArnold}.
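The equal-area constructions used above can be illustrated numerically. The following Python sketch (the construction and all names are ours, not from the paper) builds a regular $n$-gon rescaled so that consecutive edge vectors span unit area, hence an equal-area polygon, and checks that $u = x''$ is unimodular and parallel, where "parallel" is taken in the discrete sense that $u(i+1)-u(i)$ is proportional to the edge $x(i+1)-x(i)$:

```python
import numpy as np

# Illustrative equal-area polygon: a regular n-gon scaled so that
# [x(i)-x(i-1), x(i+1)-x(i)] = 1 for all i (scaling derived by hand).
n = 7
theta = 2 * np.pi / n
R = 1.0 / (2 * np.sin(theta / 2) * np.sqrt(np.sin(theta)))
k = np.arange(n)
x = R * np.column_stack([np.cos(k * theta), np.sin(k * theta)])

def bracket(a, b):
    # planar area form [a, b] applied row-wise
    return a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]

dx = np.roll(x, -1, axis=0) - x                              # x(i+1) - x(i)
u = np.roll(x, -1, axis=0) - 2 * x + np.roll(x, 1, axis=0)   # u(i) = x''(i)

assert np.allclose(bracket(np.roll(dx, 1, axis=0), dx), 1.0)  # equal-area
assert np.allclose(bracket(dx, u), 1.0)                       # u unimodular
# u parallel: u(i+1) - u(i) is proportional to the edge x(i+1) - x(i)
assert np.allclose(bracket(np.roll(u, -1, axis=0) - u, dx), 0.0)
```

For this symmetric example one can even check by hand that $u(i) = 2(\cos\theta - 1)\,x(i)$, so the parallel condition holds trivially; the assertions confirm the general identities numerically.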
https://arxiv.org/abs/2203.06829
Stabilized exponential-SAV schemes preserving energy dissipation law and maximum bound principle for the Allen-Cahn type equations
It is well-known that the Allen-Cahn equation not only satisfies the energy dissipation law but also possesses the maximum bound principle (MBP) in the sense that the absolute value of its solution is pointwise bounded for all time by some specific constant under appropriate initial/boundary conditions. In recent years, the scalar auxiliary variable (SAV) method and many of its variants have attracted much attention in numerical solution for gradient flow problems due to their inherent advantage of preserving certain discrete analogues of the energy dissipation law. However, existing SAV schemes usually fail to preserve the MBP when applied to the Allen-Cahn equation. In this paper, we develop and analyze new first- and second-order stabilized exponential-SAV schemes for a class of Allen-Cahn type equations, which are shown to simultaneously preserve the energy dissipation law and MBP in discrete settings. In addition, optimal error estimates for the numerical solutions are rigorously obtained for both schemes. Extensive numerical tests and comparisons are also conducted to demonstrate the performance of the proposed schemes.
\section{Introduction}

Let us consider a class of reaction-diffusion equations taking the following form \begin{equation} \label{AllenCahn} u_t = \varepsilon^2\Delta u + f(u), \quad t > 0, \ \bm{x} \in \Omega, \end{equation} where $\Omega\subset\mathbb{R}^d$ is a spatial domain, $u=u(t,\bm{x}):[0,\infty)\times\overline{\Omega}\to\mathbb{R}$ is the unknown function, $\varepsilon>0$ denotes an interfacial parameter, and $f(u)$ is a nonlinear reaction term with $f$ being continuously differentiable. We also impose the initial condition \[ u(0,\cdot)=u_{\text{\rm init}} \quad \text{on } \overline{\Omega} \] and the periodic or homogeneous Neumann boundary conditions. The equation \eqref{AllenCahn} can usually be regarded as the $L^2$ gradient flow with respect to the energy functional \begin{equation} \label{energy} E(u) = \int_\Omega \Big( \frac{\varepsilon^2}{2}|\nabla u(\bm{x})|^2 + F(u(\bm{x})) \Big) \, \d\bm{x}, \end{equation} where $F$ is a smooth potential function satisfying $F'=-f$, and thus the solution to the equation \eqref{AllenCahn} decreases the energy \eqref{energy} in time, i.e., $\daoshu{}{t}E(u(t))\le0$, which is often called the {\em energy dissipation law}. In addition, we assume that \begin{equation} \label{assump} \text{there exists a constant $\beta>0$ such that $f(\beta) \le 0\le f(-\beta)$}. \end{equation} It has been proved in \cite{DuJuLiQi21} that the equation \eqref{AllenCahn} satisfies the {\em maximum bound principle} (MBP) in the sense that if the absolute value of the initial data is pointwise bounded by $\beta$, then the absolute value of the solution is also pointwise bounded by $\beta$ for all time, i.e., \begin{equation} \label{mbp} \max_{\bm{x}\in\overline{\Omega}} |u_{\text{\rm init}}(\bm{x})| \le \beta \quad \Longrightarrow \quad \max_{\bm{x}\in\overline{\Omega}} |u(t,\bm{x})| \le \beta, \quad \forall \, t > 0.
\end{equation}

An important and special case of \eqref{AllenCahn} is the Allen--Cahn equation with $f(u)=u-u^3$, which was originally introduced in \cite{AlCa79} to model the motion of anti-phase boundaries in crystalline solids. The solution represents the difference between the concentrations of the two components of the alloy and thus should take values between $-1$ and $1$, which is guaranteed by the MBP. With the corresponding double-well potential $F(u)=\frac{1}{4}(u^2-1)^2$, the associated energy functional \eqref{energy} decays in time, which reflects the energy dissipation of the phase transition process. Both the MBP and the energy stability are also satisfied by some variants of \eqref{AllenCahn}, such as the nonlocal Allen--Cahn equation for phase separations with long-range interactions \cite{Bates06,DuJuLiQi19} and the fractional Allen--Cahn equation used to describe some anomalous diffusion processes \cite{DuYaZh20,GuiZh15}. To obtain stable numerical simulations and avoid nonphysical solutions for these models, it is highly desirable to design numerical schemes that effectively preserve these two basic physical properties, the MBP and the energy dissipation law, in time-discrete settings.

In the past decades, there has been a large amount of research devoted to energy-stable numerical schemes for time discretization of gradient flow equations, such as convex splitting schemes \cite{GuWaWi14,ShWaWaWi12,WiWaLo09}, stabilized semi-implicit schemes \cite{FeTaYa13,ShYa10b,XuTa06}, and exponential time differencing (ETD) schemes \cite{DuJuLiQi19,JuLiQiZh18,JuZhDu15}. More recently, invariant energy quadratization (IEQ) schemes \cite{XuYaZhXi19,Yang16,YangZh20} and scalar auxiliary variable (SAV) schemes \cite{ShXu18,ShXuYa18,ShXuYa19} were proposed to naturally provide energy-stable and linear algorithms with second-order temporal accuracy.
While the main idea of both methods is to reformulate and split the energy functional \eqref{energy} in quadratic form by introducing extra variables, the SAV approach is usually more efficient in terms of computation. Many variants of SAV schemes were developed later; see \cite{AkrivisLiLi19,ChenYa19,ChengLiSh20,ChengLiSh21,HouAzXu19,HuangShYa20,LiuLi20} and the references therein. In practice, a suitable stabilization term is also introduced in such splittings in order to maintain numerical stability for highly stiff problems. On the other hand, existing SAV-type schemes usually fail to preserve the MBP; one special case is the auxiliary variable proposed in \cite{HuangShYa20}, which is shown to be positivity-preserving.

MBP preservation has also recently attracted increasing attention in the field of numerical methods for the Allen--Cahn type equations of the form \eqref{AllenCahn}. Semi-implicit schemes were extensively studied in, e.g., \cite{HoLe20,HoTaYa17,LiaoTaZh20,ShTaYa16,TaYa16,XiFeYu17}, for the classic, fractional, or surface Allen--Cahn equations. The first- and second-order stabilized ETD schemes were shown to preserve the MBP unconditionally for the nonlocal Allen--Cahn equation \cite{DuJuLiQi19} and the conservative Allen--Cahn equation \cite{LiJuCaFe21}. An abstract framework on MBP preservation of the ETD schemes for a class of semilinear parabolic equations was established in \cite{DuJuLiQi21}, where sufficient conditions on the linear and nonlinear operators are presented in order to guarantee the MBP. A family of stabilized integrating factor Runge--Kutta (IFRK) schemes, up to third order, was developed in \cite{LiLiJuFe21}, which can unconditionally preserve the MBP. In addition, a fourth-order (conditionally) MBP-preserving IFRK scheme was presented in \cite{JuLiQiYa21}. So far, however, although the SAV method is one of the most popular approaches, there is still not much systematic study of MBP-preserving SAV schemes.
The main goal of this paper is to develop first- and second-order energy dissipative and MBP-preserving SAV-type schemes for the Allen--Cahn type equation \eqref{AllenCahn} by using an appropriate stabilization technique. Specifically, we propose new stabilized exponential-SAV (ESAV) schemes by introducing an artificial stabilization term, rather than relying on the splitting of the energy functional suggested in \cite{ShXuYa19}. With the effect of such stabilization, we show that the proposed first-order scheme preserves the MBP unconditionally with an appropriate stabilizing parameter and the second-order one does so under a time step size constraint. A main difficulty in the numerical analysis of the two schemes lies in the fact that the coefficients of the nonlinear term and the stabilization term are varying rather than constant due to the use of the SAV approach. With the help of the energy dissipation and the MBP, we are able to show that such variable coefficients are bounded from above and below by certain positive constants, and consequently, optimal error estimates are successfully obtained for the proposed schemes. To the best of our knowledge, this is the first work in the direction of designing such SAV-type methods. More importantly, the proposed stabilizing approaches can be easily generalized to deal with many other types of gradient flow problems where the SAV methods apply.

The rest of this paper is organized as follows. Section \ref{sect_spacedis} is devoted to the spatial discretization of the equation \eqref{AllenCahn} and a brief summary of the classic SAV and ESAV schemes for time integration. Then, our first- and second-order stabilized ESAV schemes are presented in Section \ref{sect_sESAVsch}, together with the energy dissipation law, MBP preservation, and convergence analysis of the resulting fully discrete systems. In Section \ref{sect_experiment}, extensive numerical tests and comparisons are carried out to demonstrate the performance of the proposed schemes.
Some concluding remarks are finally given in Section \ref{sect_conclusion}.

\section{Spatial discretization and SAV schemes for time integration} \label{sect_spacedis}

For simplicity, throughout this paper we consider the two-dimensional square domain $\Omega=(0,L)\times(0,L)$ for the equation \eqref{AllenCahn} equipped with periodic boundary conditions. Note that the extensions to three-dimensional problems and homogeneous Neumann boundary conditions are straightforward. For other feasible spatial discretizations, we refer to \cite{DuJuLiQi21} for more details. In this section, we first present some notations related to the spatial discretization by central finite differences, then briefly review the classic SAV schemes for time integration.

\subsection{Spatial discretization and the space-discrete problem}

Given a positive integer $M$, we set $h=L/M$ to be the size of the uniform mesh partitioning $\overline{\Omega}$. Denote by $\Omega_h$ the set of mesh points $(x_i,y_j)=(ih,jh)$, $1\le i,j\le M$. For a grid function $v$ defined on $\Omega_h$, we write $v_{ij}=v(x_i,y_j)$ for simplicity. Let $\mathcal{M}_h$ be the set of all periodic grid functions on $\Omega_h$, i.e., \[ \mathcal{M}_h = \{v:\Omega_h\to\mathbb{R} \,|\, v_{i+kM,j+lM}=v_{ij}, \ k,l\in\mathbb{Z}, \ 1 \le i,j\le M\}. \] The discrete inner product $\<\cdot,\cdot\>$, discrete $L^2$ norm $\|\cdot\|$, and discrete $L^\infty$ norm $\|\cdot\|_\infty$ are defined as usual, namely, \[ \<v,w\> = h^2 \sum_{i,j=1}^M v_{ij} w_{ij}, \qquad \|v\| = \sqrt{\<v,v\>}, \qquad \|v\|_\infty = \max_{1\le i,j\le M} |v_{ij}| \] for any $v,w\in\mathcal{M}_h$, and \[ \<\bm{v},\bm{w}\> = \<v^1,w^1\> + \<v^2,w^2\>, \qquad \|\bm{v}\| = \sqrt{\<\bm{v},\bm{v}\>} \] for any $\bm{v}=(v^1,v^2)^T, \bm{w}=(w^1,w^2)^T\in\mathcal{M}_h\times\mathcal{M}_h$. We apply second-order central finite differences to approximate the spatial differentiation operators.
For any $v\in\mathcal{M}_h$, the discrete Laplace operator $\Delta_h$ is defined by \[ \Delta_h v_{ij} = \frac{1}{h^2} (v_{i+1,j}+v_{i-1,j}+v_{i,j+1}+v_{i,j-1}-4v_{ij}), \quad 1 \le i,j \le M, \] and the discrete gradient operator $\nabla_h$ is defined by \[ \nabla_h v_{ij} = \Big( \frac{v_{i+1,j}-v_{ij}}{h}, \frac{v_{i,j+1}-v_{ij}}{h}\Big)^T, \quad 1 \le i,j \le M. \] By the periodic boundary conditions, the summation-by-parts formula is easy to verify: \[ \<v,\Delta_hw\> = - \<\nabla_hv, \nabla_hw\> = \<\Delta_hv,w\>, \quad \forall\,v,w\in\mathcal{M}_h. \] Obviously, $\Delta_h$ is self-adjoint and negative semi-definite. For any function $\varphi:\overline{\Omega}\to\mathbb{R}$, we denote by $I_h$ the operator projecting $\varphi$ onto the mesh as $(I_h\varphi)_{ij}=\varphi(x_i,y_j)$ for $1\le i,j\le M$. In particular, we have \[ \max_{1\le i,j\le M} |\Delta_h(I_h\varphi)_{ij} - \Delta\varphi(x_i,y_j)| \le C_\varphi h^2, \quad \forall \, \varphi \in C_\text{\rm per}^4(\overline{\Omega}). \] For simplicity, we may directly omit the notation $I_h$ when there is no ambiguity.

Since $\mathcal{M}_h$ is a finite-dimensional linear space, any grid function $v\in\mathcal{M}_h$ and any linear operator $Q:\mathcal{M}_h\to\mathcal{M}_h$ can be regarded as a vector in $\mathbb{R}^{M^2}$ and a matrix in $\mathbb{R}^{M^2\times M^2}$, respectively. We still use the notations $\|\cdot\|$ and $\|\cdot\|_\infty$ to denote the matrix norms induced by the vector norms $\|\cdot\|$ and $\|\cdot\|_\infty$ defined before, respectively. By regarding $\Delta_h$ as a linear operator, we know that $\Delta_h$ is the generator of a contraction semigroup on $\mathcal{M}_h$ \cite{DuJuLiQi21}. Viewed instead as a matrix, $\Delta_h$ is weakly diagonally dominant with all diagonal entries negative. Moreover, we have the following useful estimate, whose proof can be found in \cite{TaYa16}.
\begin{lemma} \label{lem_lapdiff} For any $a>0$, we have $\|(aI-\Delta_h)^{-1}\|_\infty \le a^{-1}$, where $I$ represents the identity matrix. \end{lemma}

We have assumed that $f$ is continuously differentiable, so $\|f'\|_{C[-\beta,\beta]}$ is always finite, and the following result is valid \cite{DuJuLiQi21}. \begin{lemma} \label{lem_nonlinear} Under the assumption \eqref{assump}, if $\kappa\ge \|f'\|_{C[-\beta,\beta]}$ holds for some positive constant $\kappa$, then we have $|f(\xi)+\kappa\xi|\le\kappa\beta$ for any $\xi\in[-\beta,\beta]$. \end{lemma}

Next, let us introduce the space-discrete version of \eqref{AllenCahn}. The space-discrete problem is to find a function $u_h:[0,\infty)\to\mathcal{M}_h$ that satisfies \begin{equation} \label{semidis} \daoshu{u_h}{t} = \varepsilon^2\Delta_h u_h + f(u_h) \end{equation} with $u_h(0) = u_{\text{\rm init}}$. It is easy to verify the energy dissipation law for \eqref{semidis} in the sense that \begin{equation*} \daoshu{}{t} E_h(u_h(t)) \le 0, \end{equation*} where $E_h$ is the spatially-discretized energy functional defined as \begin{equation}\label{egydis} E_h(v) := \frac{\varepsilon^2}{2} \|\nabla_h v\|^2 + \<F(v),1\>, \quad \forall \, v \in \mathcal{M}_h. \end{equation} According to \cite{DuJuLiQi21}, the MBP also holds for $u_h$, i.e., $\|u_h(t)\|_\infty\le\beta$ for any $t>0$ if $\|u_{\text{\rm init}}\|_\infty\le\beta$.

Let us partition the time interval into $\{t_n=n{\tau}\}_{n\ge0}$ with ${\tau}>0$ being a uniform time step size. In the remaining part of the paper, we will study time integration schemes for the space-discrete system \eqref{semidis}. For simplicity of representation, we denote by $u^n$ the fully discrete approximate value of $u_e(t_n)$ or $u_{h,e}(t_n)$, where $u_e$ and $u_{h,e}$ denote the exact solutions to the original continuous problem \eqref{AllenCahn} and the space-discrete problem \eqref{semidis}, respectively.
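Before turning to time integration, the operator properties and the two lemmas above can be checked numerically. The following Python sketch (grid size and the test values of $a$ are illustrative assumptions) builds the periodic $\Delta_h$ as a matrix and verifies self-adjointness, negative semi-definiteness, summation by parts, the resolvent bound of Lemma \ref{lem_lapdiff}, and Lemma \ref{lem_nonlinear} for $f(u)=u-u^3$ with $\beta=1$, $\kappa=2$:

```python
import numpy as np

# Small periodic M x M grid; Delta_h assembled via Kronecker products.
L_dom, M = 1.0, 8
h = L_dom / M
S = np.roll(np.eye(M), -1, axis=0)              # periodic forward shift
D1 = (S + S.T - 2.0 * np.eye(M)) / h**2         # 1D periodic second difference
I = np.eye(M)
Dh = np.kron(I, D1) + np.kron(D1, I)            # Delta_h on M^2 unknowns

# self-adjoint and negative semi-definite
assert np.allclose(Dh, Dh.T)
assert np.linalg.eigvalsh(Dh).max() < 1e-10

# summation by parts: <v, Delta_h w> = -<grad_h v, grad_h w>
Gx, Gy = np.kron(I, (S - I) / h), np.kron((S - I) / h, I)
rng = np.random.default_rng(0)
v, w = rng.standard_normal(M * M), rng.standard_normal(M * M)
lhs = h**2 * (v @ (Dh @ w))
rhs = -h**2 * ((Gx @ v) @ (Gx @ w) + (Gy @ v) @ (Gy @ w))
assert np.isclose(lhs, rhs)

# Lemma (resolvent bound): ||(aI - Delta_h)^{-1}||_inf <= 1/a
for a in (0.5, 1.0, 10.0):
    inv = np.linalg.inv(a * np.eye(M * M) - Dh)
    assert np.abs(inv).sum(axis=1).max() <= 1.0 / a + 1e-9

# Lemma (nonlinear bound): |f(xi) + kappa*xi| <= kappa*beta for f(u) = u - u^3,
# beta = 1 and kappa = ||f'||_C[-1,1] = 2
xi = np.linspace(-1.0, 1.0, 1001)
assert np.abs((xi - xi**3) + 2.0 * xi).max() <= 2.0 + 1e-12
```

The resolvent bound is sharp here: $aI-\Delta_h$ has nonnegative inverse with row sums exactly $a^{-1}$, in line with the weak diagonal dominance noted above.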
In general, for a sequence $\{v^n\}$, we define the following notations: \[ \delta_t v^{n+1} = \frac{v^{n+1}-v^n}{{\tau}}, \qquad v^{n+\frac{1}{2}} = \frac{v^{n+1}+v^n}{2}. \] \subsection{Classic SAV schemes and stabilization} \label{sect_classicSAV} Here we give a brief summary of the classic SAV schemes. The main idea of SAV is to reformulate the energy functional \eqref{energy} in the quadratic form by introducing an appropriate SAV. The framework of the classic SAV schemes is based on a linear splitting of the energy functional and, as shown in \cite{ShXuYa19}, a suitable stabilization term is usually also introduced in such splitting so that the numerical simulations can provide satisfactory results for highly stiff problems in practice. Denoting by $\kappa\ge0$ the stabilizing constant, the energy functional \eqref{energy} can be rewritten with a stabilization term as \begin{align} \label{sav_splitting} E(u) &= \int_\Omega \Big( \frac{\varepsilon^2}{2}|\nabla u|^2 + \frac{\kappa}{2}u^2 + F(u) - \frac{\kappa}{2}u^2 \Big) \, \d\bm{x}\nonumber\\ &= \frac{\varepsilon^2}{2}\|\nabla u\|_{L^2}^2 + \frac{\kappa}{2}\|u\|_{L^2}^2 + \int_\Omega \Big( F(u) - \frac{\kappa}{2}u^2 \Big) \, \d\bm{x}. \end{align} Suppose the last term in \eqref{sav_splitting} is bounded from below, that is, \begin{equation} \label{sav_e2} E_2(u) := \int_\Omega \Big( F(u) - \frac{\kappa}{2}u^2 \Big) \, \d\bm{x} \ge -C_0 \end{equation} for some constant $C_0\geq 0$. Choosing $\delta>C_0$, let us define an auxiliary variable $r(t)=\sqrt{E_2(u(t))+\delta}$, and reformulate the original problem \eqref{AllenCahn} to the following equivalent system: \begin{subequations} \begin{align*} u_t & = \varepsilon^2\Delta u - \kappa u + \frac{r}{\sqrt{E_2(u)+\delta}} (f(u)+\kappa u), \\ r_t & = - \frac{1}{2\sqrt{E_2(u)+\delta}} (f(u)+\kappa u,u_t). 
\end{align*} \end{subequations} Then the first-order SAV scheme (SAV1) is given by \cite{ShXuYa19} \begin{subequations} \label{eq_sav1shen} \begin{align} \delta_t u^{n+1} & = \varepsilon^2\Delta_h u^{n+1} - \kappa u^{n+1} + \frac{r^{n+1}}{\sqrt{E_{2h}(u^n)+\delta}}(f(u^n)+\kappa u^n), \label{eq_sav1shena} \\ \delta_t r^{n+1} & = - \frac{1}{2\sqrt{E_{2h}(u^n)+\delta}} \<f(u^n)+\kappa u^n,\delta_t u^{n+1}\>, \end{align} \end{subequations} and the Crank--Nicolson type second-order SAV scheme (SAV2) reads as \cite{ShXuYa19} \begin{subequations} \label{eq_sav2shen} \begin{align} \delta_t u^{n+1} & = \varepsilon^2\Delta_h u^{n+\frac{1}{2}} - \kappa u^{n+\frac{1}{2}} + \frac{r^{n+1}+r^n}{2\sqrt{E_{2h}(\widehat{u}^{n+\frac{1}{2}})+\delta}} (f(\widehat{u}^{n+\frac{1}{2}})+\kappa \widehat{u}^{n+\frac{1}{2}}), \\ \delta_t r^{n+1} & = - \frac{1}{2\sqrt{E_{2h}(\widehat{u}^{n+\frac{1}{2}})+\delta}} \<f(\widehat{u}^{n+\frac{1}{2}})+\kappa \widehat{u}^{n+\frac{1}{2}},\delta_t u^{n+1}\>, \end{align} \end{subequations} where $\widehat{u}^{n+\frac{1}{2}}$ is generated by solving the system \[ \frac{\widehat{u}^{n+\frac{1}{2}}-u^n}{{\tau}/2} = \varepsilon^2\Delta_h \widehat{u}^{n+\frac{1}{2}} + f(u^n) - \kappa (\widehat{u}^{n+\frac{1}{2}}-u^n). \] Both \eqref{eq_sav1shen} and \eqref{eq_sav2shen} are linear schemes and energy dissipative in the sense that $\overline{E}_h(u^{n+1},r^{n+1})\le \overline{E}_h(u^n,r^n)$ with respect to the following modified energy \[ \overline{E}_h(u^n,r^n) := \frac{\varepsilon^2}{2}\|\nabla_h u^n\|^2 + \frac{\kappa}{2}\|u^n\|^2 + (r^n)^2 - \delta. \] Note that in the discrete settings, $\overline{E}_h(u^n,r^n)$ is only an approximation of the original discrete energy $E_h(u^n)$ defined in \eqref{egydis} and they are not equal in general since $r^n\not=\sqrt{E_{2h}(u^n)+\delta}$ for $n\ge 1$. 
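As an illustration of this modified energy dissipation, the following Python sketch implements the SAV1 scheme \eqref{eq_sav1shen} for a one-dimensional periodic analogue of the Allen--Cahn equation (the paper's setting is two-dimensional; all parameter values, including $\delta=20$, are our illustrative assumptions) and checks that $\overline{E}_h$ is non-increasing:

```python
import numpy as np

# 1D periodic sketch of SAV1 for f(u) = u - u^3, F(u) = (u^2-1)^2/4.
M, Ldom = 32, 2 * np.pi
h = Ldom / M
eps, kap, tau, delta = 0.5, 2.0, 0.1, 20.0   # delta > C_0 for this potential
xg = h * np.arange(M)
Dh = (np.roll(np.eye(M), 1, 0) + np.roll(np.eye(M), -1, 0) - 2 * np.eye(M)) / h**2
A = (1.0 / tau + kap) * np.eye(M) - eps**2 * Dh   # constant SPD operator

f = lambda u: u - u**3
F = lambda u: 0.25 * (u**2 - 1.0) ** 2
ip = lambda v, w: h * np.dot(v, w)                # discrete inner product
E2h = lambda u: h * np.sum(F(u) - 0.5 * kap * u**2)
gsq = lambda u: h * np.sum(((np.roll(u, -1) - u) / h) ** 2)
Ebar = lambda u, r: 0.5 * eps**2 * gsq(u) + 0.5 * kap * ip(u, u) + r**2 - delta

u = 0.9 * np.sin(xg)
r = np.sqrt(E2h(u) + delta)
energies = [Ebar(u, r)]
for _ in range(30):
    b = (f(u) + kap * u) / np.sqrt(E2h(u) + delta)
    # eliminate the coupling: u^{n+1} = p + r^{n+1} q
    p = np.linalg.solve(A, u / tau)
    q = np.linalg.solve(A, b)
    r = (r - 0.5 * ip(b, p - u)) / (1.0 + 0.5 * ip(b, q))
    u = p + r * q
    energies.append(Ebar(u, r))

assert np.all(np.diff(energies) <= 1e-10)    # modified energy decay
```

The elimination step mirrors the standard SAV solution procedure: since $\<b, A^{-1}b\> \ge 0$ for the SPD operator $A$, the scalar equation for $r^{n+1}$ is always solvable.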
However, the MBP cannot be theoretically preserved by the above classic SAV schemes \eqref{eq_sav1shen} and \eqref{eq_sav2shen} (see the discussion in Remark \ref{rmk_sav1_nonmbp}).

\subsection{Exponential-SAV schemes} \label{sect_ESAV}

A variant of the classic SAV approach, called the exponential-SAV (ESAV) scheme, was studied in \cite{LiuLi20}. Below we summarize the ESAV method, also with a stabilization term, based on the energy splitting \eqref{sav_splitting}. Define the auxiliary variable by $r(t)=\expp{E_2(u(t))}$ and reformulate \eqref{AllenCahn} as \begin{subequations} \begin{align*} & u_t = \varepsilon^2\Delta u - \kappa u + \frac{r}{\expp{E_2(u)}} (f(u)+\kappa u), \\ & (\ln r)_t = - \frac{r}{\expp{E_2(u)}} (f(u)+\kappa u,u_t). \end{align*} \end{subequations} Then the first-order ESAV scheme (ESAV1) reads as \begin{subequations} \label{eq_esav1} \begin{align} & \delta_t u^{n+1} = \varepsilon^2\Delta_h u^{n+1} - \kappa u^{n+1} + \frac{r^n}{\expp{E_{2h}(u^n)}}(f(u^n)+\kappa u^n), \label{eq_esav1a} \\ & \frac{\ln r^{n+1} - \ln r^n}{{\tau}} = - \frac{r^n}{\expp{E_{2h}(u^n)}} \<f(u^n)+\kappa u^n,\delta_t u^{n+1}\>. \end{align} \end{subequations} Setting $\kappa=0$, the scheme \eqref{eq_esav1} reduces exactly to the original ESAV scheme (without stabilization) presented in \cite{LiuLi20}.
The Crank--Nicolson type ESAV scheme (ESAV2) is given by \begin{subequations} \label{eq_esav2} \begin{align} & \delta_t u^{n+1} = \varepsilon^2\Delta_h u^{n+\frac{1}{2}} - \kappa u^{n+\frac{1}{2}} + \frac{\widehat{r}^{n+\frac{1}{2}}}{\expp{E_{2h}(\widehat{u}^{n+\frac{1}{2}})}}(f(\widehat{u}^{n+\frac{1}{2}})+\kappa \widehat{u}^{n+\frac{1}{2}}), \label{eq_esav2a} \\ & \frac{\ln r^{n+1} - \ln r^n}{{\tau}} = - \frac{\widehat{r}^{n+\frac{1}{2}}}{\expp{E_{2h}(\widehat{u}^{n+\frac{1}{2}})}} \<f(\widehat{u}^{n+\frac{1}{2}})+\kappa \widehat{u}^{n+\frac{1}{2}},\delta_t u^{n+1}\>, \end{align} \end{subequations} where the value $(\widehat{u}^{n+\frac{1}{2}},\widehat{r}^{n+\frac{1}{2}})$ can be generated by an extrapolation as suggested in \cite{LiuLi20} or predicted by the first-order scheme \eqref{eq_esav1} with half of the time step size: \begin{subequations} \label{eq_esav21} \begin{align} & \frac{\widehat{u}^{n+\frac{1}{2}}-u^n}{{\tau}/2} = \varepsilon^2\Delta_h \widehat{u}^{n+\frac{1}{2}} - \kappa \widehat{u}^{n+\frac{1}{2}} + \frac{r^n}{\expp{E_{2h}(u^n)}}(f(u^n)+\kappa u^n), \\ & \ln \widehat{r}^{n+\frac{1}{2}} - \ln r^n = - \frac{r^n}{\expp{E_{2h}(u^n)}} \<f(u^n)+\kappa u^n,\widehat{u}^{n+\frac{1}{2}}-u^n\>. \end{align} \end{subequations} We will adopt \eqref{eq_esav21} in the numerical experiments for comparison.

Both \eqref{eq_esav1} and \eqref{eq_esav2} are energy dissipative in the sense that $\widetilde{E}_h(u^{n+1},r^{n+1})\le \widetilde{E}_h(u^n,r^n)$ with respect to the following modified energy \[ \widetilde{E}_h(u^n,r^n) := \frac{\varepsilon^2}{2}\|\nabla_h u^n\|^2 + \frac{\kappa}{2}\|u^n\|^2 + \ln r^n. \] Similar to the classic SAV schemes, the above ESAV schemes \eqref{eq_esav1} and \eqref{eq_esav2} also cannot preserve the MBP (see the discussion in Remark \ref{rmk_sav1_nonmbp}).
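For a quick numerical illustration of the ESAV1 scheme \eqref{eq_esav1} and its modified energy $\widetilde{E}_h$, here is a one-dimensional periodic Python sketch (all parameters are illustrative assumptions). Note that the auxiliary variable is advanced through $s^n=\ln r^n$, which keeps the ratio $r^n/\exp(E_{2h}(u^n))$ numerically stable:

```python
import numpy as np

# 1D periodic sketch of ESAV1; s^n = ln r^n is tracked instead of r^n.
M, Ldom = 32, 2 * np.pi
h = Ldom / M
eps, kap, tau = 0.5, 2.0, 0.1
x = h * np.arange(M)
Dh = (np.roll(np.eye(M), 1, 0) + np.roll(np.eye(M), -1, 0) - 2 * np.eye(M)) / h**2
A = (1.0 / tau + kap) * np.eye(M) - eps**2 * Dh   # constant linear operator

f = lambda u: u - u**3
ip = lambda v, w: h * np.dot(v, w)
E2h = lambda u: h * np.sum(0.25 * (u**2 - 1.0) ** 2 - 0.5 * kap * u**2)
gsq = lambda u: h * np.sum(((np.roll(u, -1) - u) / h) ** 2)
Etil = lambda u, s: 0.5 * eps**2 * gsq(u) + 0.5 * kap * ip(u, u) + s

u = 0.9 * np.sin(x)
s = E2h(u)                       # r^0 = exp(E_2h(u^0)), i.e. s^0 = E_2h(u^0)
hist = [Etil(u, s)]
for _ in range(30):
    g = np.exp(s - E2h(u))       # r^n / exp(E_2h(u^n))
    u_new = np.linalg.solve(A, u / tau + g * (f(u) + kap * u))
    s = s - g * ip(f(u) + kap * u, u_new - u)   # update of ln r^{n+1}
    u = u_new
    hist.append(Etil(u, s))

assert np.all(np.diff(hist) <= 1e-10)   # modified energy is non-increasing
```

In contrast to SAV1, the auxiliary variable here is fully explicit: $u^{n+1}$ is obtained from one linear solve with the known coefficient $r^n/\exp(E_{2h}(u^n))$, and $\ln r^{n+1}$ then follows directly.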
\section{New stabilized exponential-SAV schemes} \label{sect_sESAVsch}

From now on, we always assume the initial value $u_{\text{\rm init}}$ has enough regularity as needed. For the spatial discretization, there is a constant $h_0>0$, depending on $u_{\text{\rm init}}$, $F$, and $\varepsilon$, such that \begin{equation} \label{uinit_approx} E_h(u_{\text{\rm init}}) \le E(u_{\text{\rm init}}) + 1, \quad \|\Delta_h u_{\text{\rm init}}\| \le \|\Delta u_{\text{\rm init}}\|_{L^2} + 1, \qquad \forall \, h \in (0,h_0]. \end{equation} The continuity of $F$ implies that $F$ is bounded from below on $[-\beta,\beta]$. Therefore, according to the MBP \eqref{mbp}, it holds that $$E_1(u):=\int_\Omega F(u) \,\d\bm{x}\ge -C_*$$ for some constant $C_*\geq 0$. Introducing $s(t) = E_1(u(t))$, we then have the following energy, which is equivalent to $E(u)$: \[ \mathcal{E}(u,s) = \frac{\varepsilon^2}{2} \|\nabla u\|_{L^2}^2 + s. \] Partially inspired by the idea of the ESAV method \cite{LiuLi20}, we rewrite the equation \eqref{AllenCahn} as the following equivalent system: \begin{subequations} \begin{align*} u_t & = \varepsilon^2\Delta u + \frac{\expp{s}}{\expp{E_1(u)}} f(u), \\ s_t & = - \frac{\expp{s}}{\expp{E_1(u)}} (f(u),u_t). \end{align*} \end{subequations} The corresponding space-discrete problem is to find $u_h(t)\in\mathcal{M}_h$ and $s_h(t)$ for $t>0$ satisfying \begin{subequations} \label{eq_semidis} \begin{align} \daoshu{u_h}{t} & = \varepsilon^2\Delta_h u_h + g(u_h,s_h) f(u_h), \label{eq_semidisa} \\ \daoshu{s_h}{t} & = - g(u_h,s_h) \Big\<f(u_h),\daoshu{u_h}{t}\Big\>, \label{eq_semidisb} \end{align} \end{subequations} where \begin{equation} \label{gg} g(u_h,s_h) := \frac{\expp{s_h}}{\expp{E_{1h}(u_h)}} > 0, \end{equation} and $E_{1h}$ denotes the space-discrete version of $E_1$, i.e., $E_{1h}(v) := \<F(v),1\>$ for any $v \in \mathcal{M}_h$. Based on such an equivalent form, we will give the stabilized ESAV schemes in fully discrete form.
This section is devoted to the first-order scheme; the second-order one will be discussed in the next section. Recall that we use $u^n$ to represent the fully discrete approximate value of $u_e(t_n)$, the exact solution to the problem \eqref{AllenCahn}.

\subsection{First-order sESAV scheme} \label{sect_sav1}

The first-order stabilized ESAV fully-discrete scheme (sESAV1) is given by \begin{subequations} \label{eq_sav1stab} \begin{align} \delta_t u^{n+1} & = \varepsilon^2\Delta_h u^{n+1} + g(u^n,s^n)f(u^n) - \kappa g(u^n,s^n)(u^{n+1}-u^n), \label{eq_sav1staba} \\ \delta_t s^{n+1} & = - g(u^n,s^n)\<f(u^n),\delta_t u^{n+1}\>, \label{eq_sav1stabb} \end{align} \end{subequations} where $\kappa\ge0$ is a stabilizing constant and $g(u^n,s^n)>0$. The scheme \eqref{eq_sav1stab} is started with $u^0=u_{\text{\rm init}}$ and $s^0=E_{1h}(u^0)$. We can rewrite \eqref{eq_sav1stab} equivalently as follows: \begin{subequations} \label{eq_sav1var} \begin{align} \Big[\Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big)I-\varepsilon^2\Delta_h\Big] u^{n+1} & = \frac{u^n}{{\tau}} +g(u^n,s^n)f(u^n) + \kappa g(u^n,s^n)u^n, \label{lem_sav1_pf1} \\ s^{n+1} & = s^n - g(u^n,s^n)\<f(u^n),u^{n+1}-u^n\>. \label{lem_sav1_pf2} \end{align} \end{subequations} Obviously, \eqref{eq_sav1var} is uniquely solvable for any ${\tau}>0$ since $(\frac{1}{{\tau}}+\kappa g(u^n,s^n))I-\varepsilon^2\Delta_h$ is self-adjoint and positive definite, so that $u^{n+1}$ is determined by solving the linear system \eqref{lem_sav1_pf1} and $s^{n+1}$ is then computed explicitly from \eqref{lem_sav1_pf2}. If we take $\kappa=0$ and $r^n=\expp{s^n}$, i.e., $s^n=\ln r^n$, it is easy to verify that the scheme \eqref{eq_sav1stab} gives exactly the ESAV scheme \eqref{eq_esav1} with $\kappa=0$. However, the two schemes differ when $\kappa>0$.
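A one-dimensional periodic Python sketch of the sESAV1 scheme (parameter values are our illustrative assumptions; the paper works in 2D) exhibits the behavior proved below: the modified energy decreases monotonically and, with $\kappa=2\ge\|f'\|_{C[-1,1]}$ for $f(u)=u-u^3$, the iterates stay bounded by $\beta=1$:

```python
import numpy as np

# 1D periodic sketch of sESAV1: one linear solve per step, then an explicit
# scalar update of s^n; the operator changes with n through g(u^n, s^n).
M, Ldom = 32, 2 * np.pi
h = Ldom / M
eps, kap, tau = 0.5, 2.0, 0.1     # kappa = 2 >= ||f'||_C[-1,1] for f = u - u^3
x = h * np.arange(M)
Dh = (np.roll(np.eye(M), 1, 0) + np.roll(np.eye(M), -1, 0) - 2 * np.eye(M)) / h**2

f = lambda u: u - u**3
ip = lambda v, w: h * np.dot(v, w)
E1h = lambda u: h * np.sum(0.25 * (u**2 - 1.0) ** 2)   # <F(u), 1>
gsq = lambda u: h * np.sum(((np.roll(u, -1) - u) / h) ** 2)

u = 0.9 * np.sin(x)               # ||u^0||_inf = 0.9 <= beta = 1
s = E1h(u)
hist = [0.5 * eps**2 * gsq(u) + s]
for _ in range(30):
    g = np.exp(s - E1h(u))        # g(u^n, s^n) > 0, computed stably
    A = (1.0 / tau + kap * g) * np.eye(M) - eps**2 * Dh
    u_new = np.linalg.solve(A, u / tau + g * (f(u) + kap * u))
    s = s - g * ip(f(u), u_new - u)
    u = u_new
    hist.append(0.5 * eps**2 * gsq(u) + s)
    assert np.abs(u).max() <= 1.0 + 1e-9      # MBP

assert np.all(np.diff(hist) <= 1e-10)         # energy dissipation
```

Unlike the classic SAV1, the linear operator here depends on $n$ through $\kappa g(u^n,s^n)$, but each step still reduces to one symmetric positive definite solve.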
\subsubsection{Energy dissipation and MBP} Now let us define a discrete energy as follows: \begin{equation} \label{modified_energy} \mathcal{E}_h(u^n,s^n) := \frac{\varepsilon^2}{2} \|\nabla_h u^n\|^2 + s^n, \end{equation} which is clearly again an approximation of the original discrete energy $E_h(u^n)$. We first show that the sESAV1 scheme \eqref{eq_sav1stab} preserves the energy dissipation law and the MBP unconditionally. Then, as an application of both properties, we also prove the uniform boundedness of the variable coefficient $ g(u^n,s^n)$. \begin{theorem}[Energy dissipation of sESAV1] \label{thm_sav1_es} For any $\kappa\ge0$ and ${\tau}>0$, the sESAV1 scheme \eqref{eq_sav1stab} is energy dissipative in the sense that $\mathcal{E}_h(u^{n+1},s^{n+1})\le \mathcal{E}_h(u^n,s^n)$. \end{theorem} \begin{proof} Taking the inner product of \eqref{eq_sav1staba} with $u^{n+1}-u^n$ yields \begin{equation} \label{sav1_es_pf1} \Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big)\|u^{n+1}-u^n\|^2 = \varepsilon^2\<\Delta_h u^{n+1},u^{n+1}-u^n\> + g(u^n,s^n)\<f(u^n),u^{n+1}-u^n\>. \end{equation} Combining \eqref{sav1_es_pf1}, \eqref{eq_sav1stabb}, and the identity \[ \<\Delta_hu^{n+1},u^{n+1}-u^n\> =-\frac{1}{2}\|\nabla_hu^{n+1}\|^2+\frac{1}{2}\|\nabla_hu^n\|^2-\frac{1}{2}\|\nabla_hu^{n+1}-\nabla_hu^n\|^2, \] we obtain \[ \mathcal{E}_h(u^{n+1},s^{n+1}) - \mathcal{E}_h(u^n,s^n) = -\Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big)\|u^{n+1}-u^n\|^2-\frac{\varepsilon^2}{2}\|\nabla_hu^{n+1}-\nabla_hu^n\|^2, \] which completes the proof. \end{proof} Theorem \ref{thm_sav1_es} implies that the scheme \eqref{eq_sav1stab} is energy dissipative with respect to the modified energy $\mathcal{E}_h(u^n,s^n)$ rather than the original energy $E_h(u^n)$. Note that $s^n\not=E_{1h}(u^n)$ for $n\ge 1$ in general, and thus $\mathcal{E}_h(u^n,s^n)\not=E_h(u^n)$. \begin{corollary} \label{cor_sav1_es} For any $\kappa\ge0$ and ${\tau}>0$, it holds that $s^n\le E_h(u_{\text{\rm init}})$ for all $n$. 
\end{corollary} \begin{proof} By Theorem \ref{thm_sav1_es}, since $s^0=E_{1h}(u_{\text{\rm init}})$, we have \[ \frac{\varepsilon^2}{2} \|\nabla_h u^n\|^2 + s^n = \mathcal{E}_h(u^n,s^n) \le \mathcal{E}_h(u^{n-1},s^{n-1}) \le \cdots \le \mathcal{E}_h(u^0,s^0)= E_h(u_{\text{\rm init}}). \] Dropping the nonnegative term leads to the expected result. \end{proof} \begin{theorem}[MBP of sESAV1] \label{thm_sav1_mbp} If $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, the sESAV1 scheme \eqref{eq_sav1stab} preserves the MBP for $\{u^n\}$, i.e., the discrete version of \eqref{mbp} is valid as follows: \begin{equation} \label{dismbp} \|u_{\text{\rm init}}\|_\infty \le \beta \quad \Longrightarrow \quad \|u^n\|_\infty \le \beta, \quad \forall \, n. \end{equation} \end{theorem} \begin{proof} Suppose $(u^n,s^n)$ is given and $\|u^n\|_\infty\le\beta$ for some $n$. From \eqref{lem_sav1_pf1}, we have \[ u^{n+1} = \Big[\Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big)I-\varepsilon^2\Delta_h\Big]^{-1} \Big[\frac{1}{{\tau}}u^n + g(u^n,s^n)(f(u^n) + \kappa u^n) \Big]. \] Since $ g(u^n,s^n)>0$, by Lemma \ref{lem_lapdiff}, we have \[ \Big\|\Big[\Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big)I-\varepsilon^2\Delta_h\Big]^{-1}\Big\|_\infty \le \Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big)^{-1}. \] Since $\kappa\ge \|f'\|_{C[-\beta,\beta]}$ and $\|u^n\|_\infty\le\beta$, according to Lemma \ref{lem_nonlinear}, it holds \begin{equation} \label{thm_sav1_mbp_pf} \Big\|\frac{1}{{\tau}}u^n + g(u^n,s^n)(f(u^n) + \kappa u^n)\Big\|_\infty \le \Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big) \beta. \end{equation} Therefore, we obtain \[ \|u^{n+1}\|_\infty \le \Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big)^{-1} \Big(\frac{1}{{\tau}}+\kappa g(u^n,s^n)\Big) \beta = \beta. \] By induction, we have $\|u^n\|_\infty\le\beta$ for all $n$. \end{proof} \begin{remark} \label{rmk_sav1_mbp_pf} The inequality \eqref{thm_sav1_mbp_pf} is valid if $({\tau} g(u^n,s^n))^{-1}+\kappa\ge \|f'\|_{C[-\beta,\beta]}$. 
In other words, when $\kappa=0$ (no stabilization), the MBP still holds for the sESAV1 scheme if the time step size satisfies ${\tau} \le ( g(u^n,s^n)\|f'\|_{C[-\beta,\beta]})^{-1}$ for all $n$. \end{remark} \begin{remark} \label{rmk_sav1_mbp} For the sESAV1 scheme \eqref{eq_sav1stab}, we know that the extra term $-\kappa g(u^n,s^n)(u^{n+1}-u^n)$ stabilizes the time stepping and $\kappa g(u^n,s^n)$ is indeed the stabilizing constant, which is an $n$-dependent quantity. In the proof of Theorem \ref{thm_sav1_mbp}, the key ingredients for preserving the MBP for $\{u^n\}$ involve two aspects: the positivity of $\kappa g(u^n,s^n)$ and the relation between $u^{n+1}$ and $u^n$. The former ensures that the extra term acts as a genuine stabilization term, and the latter guarantees the balance between the linear and nonlinear parts, so that the stabilized linear operator dominates the nonlinear term and the MBP is preserved. \end{remark} \begin{remark} \label{rmk_sav1_nonmbp} For the classic SAV1 scheme \eqref{eq_sav1shen}, the stabilization term in \eqref{eq_sav1shena} actually takes the form \[ - \kappa u^{n+1} + \frac{r^{n+1}}{\sqrt{E_{2h}(u^n)+\delta}}\kappa u^n. \] The sign of $r^{n+1}$, and thus the sign of $\frac{r^{n+1}}{\sqrt{E_{2h}(u^n)+\delta}}\kappa$, is uncertain, which violates the positivity of the stabilizing constant. Even though $r^{n+1}$ may be positive in practical computations in some specific cases, such a stabilization term leads to an imbalance between the linear and nonlinear parts since $r^{n+1}\not=\sqrt{E_{2h}(u^n)+\delta}$ for $n\ge0$ in general, so the scheme \eqref{eq_sav1shen} cannot preserve the MBP theoretically, which will also be observed later in our numerical experiments. 
Similarly, the stabilization term in the ESAV scheme \eqref{eq_esav1a} reads as \[ - \kappa u^{n+1} + \frac{r^n}{\expp{E_{2h}(u^n)}}\kappa u^n, \] and the imbalance also exists between the linear and nonlinear parts since $r^n\not=\expp{E_{2h}(u^n)}$ for $n\ge1$ in general, and thus the ESAV scheme \eqref{eq_esav1} also does not preserve the MBP theoretically. Nevertheless, $r^n>0$ always holds due to the definition of the auxiliary variable, and this is the reason why we consider the ESAV approach rather than the classic one in this work. \end{remark} Note that the coefficient $ g(u^n,s^n)$ may vary from step to step, in contrast to the continuous case, where $g(u,s)\equiv 1$ holds exactly. Fortunately, the variation of $ g(u^n,s^n)$ is controllable in the sense that it can be bounded by constants, as shown below. \begin{corollary} \label{cor_g_upbound} If $h\le h_0$, $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, and $\|u_{\text{\rm init}}\|_\infty\le\beta$, then there exists a constant $G^*=G^*(u_{\text{\rm init}},C_*)$ such that $0< g(u^n,s^n)\le G^*$ for all $n$. \end{corollary} \begin{proof} The positivity of $g(u^n,s^n)$ comes from its definition. We know from Corollary \ref{cor_sav1_es} and Theorem \ref{thm_sav1_mbp} that $ g(u^n,s^n)\le\expp{E_h(u_{\text{\rm init}})+C_*}$ for all $n$. The $h$-dependence of the upper bound can be removed by \eqref{uinit_approx}, which completes the proof. \end{proof} In fact, $ g(u^n,s^n)$ also has a positive lower bound uniform in $n$ for any fixed terminal time $T>0$. To show this, we first prove an estimate on the discrete $H^2$ semi-norm of the numerical solution. \begin{lemma} \label{lem_sav1_h2bound} Given a fixed time $T>0$. 
If $h\le h_0$, $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, and $\|u_{\text{\rm init}}\|_\infty\le\beta$, there exists a constant $M>0$ depending on $C_*$, $|\Omega|$, $T$, $u_{\text{\rm init}}$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$, such that \[ \|\delta_t u^{n+1}\| + \|\Delta_h u^{n+1}\|\le M, \quad 0 \le n \le \lfloor T/{\tau}\rfloor-1. \] \end{lemma} \begin{proof} Taking the discrete inner product of \eqref{eq_sav1staba} with $2{\tau}\Delta_h^2u^{n+1}$, we obtain \begin{align*} & (1+\kappa g(u^n,s^n){\tau}) \<\Delta_h u^{n+1} - \Delta_h u^n, 2\Delta_hu^{n+1}\> + 2\varepsilon^2{\tau} \|\nabla_h\Delta_h u^{n+1}\|^2 \\ & \qquad\quad = - 2 g(u^n,s^n) {\tau}\<\nabla_h f(u^n),\nabla_h\Delta_h u^{n+1}\>. \end{align*} Using the facts that \begin{align*} \<\Delta_h u^{n+1} - \Delta_h u^n, 2\Delta_hu^{n+1}\> & = \|\Delta_h u^{n+1}\|^2 - \|\Delta_h u^n\|^2 + \|\Delta_h u^{n+1} - \Delta_h u^n\|^2, \\ - 2 g(u^n,s^n){\tau} \<\nabla_h f(u^n),\nabla_h\Delta_h u^{n+1}\> & \le \frac{( g(u^n,s^n))^2}{2\varepsilon^2}{\tau} \|\nabla_h f(u^n)\|^2 + 2\varepsilon^2{\tau}\|\nabla_h\Delta_h u^{n+1}\|^2, \end{align*} we obtain \begin{equation} \label{111} (1+\kappa g(u^n,s^n){\tau}) (\|\Delta_h u^{n+1}\|^2 - \|\Delta_h u^n\|^2) \le \frac{( g(u^n,s^n))^2}{2\varepsilon^2}{\tau} \|\nabla_h f(u^n)\|^2. \end{equation} By Theorem \ref{thm_sav1_mbp}, we have $\|u^n\|_\infty\le\beta$, and thus, \begin{equation} \label{222} \|\nabla_h f(u^n)\| \le \|f'\|_{C[-\beta,\beta]} \|\nabla_h u^n\| \le \|f'\|_{C[-\beta,\beta]} C_\Omega \|\Delta_h u^n\|, \end{equation} where the second step comes from the discrete Poincar\'e's inequality with $C_\Omega$ being a constant depending only on $|\Omega|$ (since $\nabla_h u^n$ has a zero mean due to the periodic boundary condition). 
Then, by Corollary \ref{cor_g_upbound}, \eqref{111} and \eqref{222}, we obtain \begin{align} \label{lem_sav1_h2bound_pf} \|\Delta_h u^{n+1}\|^2 &\le (1+\kappa g(u^n,s^n){\tau}) \|\Delta_h u^{n+1}\|^2\nonumber\\ &\le \Big[1+\Big(\kappa G^*+\frac{(G^* \|f'\|_{C[-\beta,\beta]} C_\Omega)^2}{2\varepsilon^2}\Big){\tau}\Big] \|\Delta_h u^n\|^2. \end{align} By recursion, we obtain \begin{align*} \|\Delta_h u^{n+1}\|^2 &\le \Big[1+\Big(\kappa G^*+\frac{(G^* \|f'\|_{C[-\beta,\beta]} C_\Omega)^2}{2\varepsilon^2}\Big){\tau}\Big]^{n+1} \|\Delta_h u^0\|^2\\ &\le \mathrm{e}^{\big(\kappa G^*+\frac{(G^* \|f'\|_{C[-\beta,\beta]} C_\Omega)^2}{2\varepsilon^2}\big)T} \|\Delta_h u_{\text{\rm init}}\|^2. \end{align*} Then, using Corollary \ref{cor_g_upbound} again, we derive directly from \eqref{eq_sav1staba} that \begin{align*} \|\delta_t u^{n+1}\| &\le (1+\kappa g(u^n,s^n){\tau})\|\delta_t u^{n+1}\|\\ &\le \varepsilon^2 \|\Delta_h u^{n+1}\| + g(u^n,s^n)\|f(u^n)\| \le \varepsilon^2 \|\Delta_h u^{n+1}\| + G^* F_0 |\Omega|^{\frac{1}{2}}, \end{align*} where $F_0:=\|f\|_{C[-\beta,\beta]}$. This completes the proof. \end{proof} \begin{corollary} \label{cor_sav1_g_bound} Given a fixed time $T>0$. If $h\le h_0$, $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, and $\|u_{\text{\rm init}}\|_\infty\le\beta$, there exists a constant $G_*>0$ such that $ g(u^n,s^n)\ge G_*$ for $0\le n\le \lfloor T/{\tau} \rfloor$, where $G_*$ depends on $C_*$, $|\Omega|$, $T$, $u_{\text{\rm init}}$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$. \end{corollary} \begin{proof} According to the definition of $g(u^n,s^n)$ in \eqref{gg} and the MBP for $\{u^n\}$, it suffices to show the existence of the lower bound of $\{s^n\}$. Using Lemma \ref{lem_sav1_h2bound}, we have \[ \<f(u^n), u^{n+1}-u^n\> \le {\tau} \|f(u^n)\| \|\delta_t u^{n+1}\| \le F_0 |\Omega|^{\frac{1}{2}} M {\tau}, \] where $M$ is the constant defined in Lemma \ref{lem_sav1_h2bound}. 
Then, from \eqref{lem_sav1_pf2}, we have \[ s^{n+1} \ge s^n - G^* F_0 |\Omega|^{\frac{1}{2}} M {\tau}. \] By recursion, noting that $s^0=E_{1h}(u_{\text{\rm init}})\ge-C_*$, we obtain \begin{equation*} s^{n} \ge s^0 - G^* F_0 |\Omega|^{\frac{1}{2}} M n{\tau} \ge - C_* - G^* F_0 |\Omega|^{\frac{1}{2}} MT, \end{equation*} which completes the proof. \end{proof} The combination of Corollaries \ref{cor_g_upbound} and \ref{cor_sav1_g_bound} implies that $0<G_*\le g(u^n,s^n) \le G^*$ for any fixed terminal time $T>0$, which will play an important role in the error estimates of the sESAV1 scheme \eqref{eq_sav1stab} in the next subsection. \subsubsection{Error estimates} In the following error analysis, as well as in that for the second-order scheme presented later, we will use various generic constants; for simplicity of notation, constants with the same dependence but possibly different values may be denoted by the same symbol. If the exact solution $u_e$ to \eqref{AllenCahn} is sufficiently smooth, letting $s_e(t)=E_1(u_e(t))$, we have \begin{subequations} \label{sav1trun} \begin{align} \frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}} & = \varepsilon^2 \Delta_h u_e(t_{n+1}) + g(u_e(t_n),s_e(t_n)) f(u_e(t_n)) \nonumber \\ & \quad - \kappa g(u_e(t_n),s_e(t_n)) (u_e(t_{n+1})-u_e(t_n)) + R_{1u}^n, \label{sav1truna} \\ \frac{s_e(t_{n+1})-s_e(t_n)}{{\tau}} & = - g(u_e(t_n),s_e(t_n)) \Big\<f(u_e(t_n)),\frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}}\Big\> + R_{1s}^n, \label{sav1trunb} \end{align} \end{subequations} where the truncation errors $R_{1u}^n$ and $R_{1s}^n$ satisfy \begin{equation} \label{sav1trunerr} \|R_{1u}^n\|\le C_e({\tau}+h^2), \qquad |R_{1s}^n|\le C_e({\tau}+h^2) \end{equation} with $C_e>0$ depending only on $u_e$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$. Define the error functions as \begin{equation} \label{errorfuns} e_u^n = u^n - u_e(t_n), \qquad e_s^n = s^n - s_e(t_n). \end{equation} We first show a lemma on the error estimate for the nonlinear term. 
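The first-order temporal accuracy reflected in the truncation errors \eqref{sav1trunerr} can be checked empirically: with $h$ fixed, comparing against a reference computed with a much smaller time step on the same grid isolates the $\mathcal{O}({\tau})$ part of the error. A minimal sketch (1D periodic grid, $f(u)=u-u^3$, illustrative parameters not taken from the paper):

```python
import numpy as np

def sesav1_solve(u0, tau, nsteps, eps, kappa, h):
    """Run sESAV1 (f(u) = u - u^3, 1D periodic grid) for nsteps steps."""
    N = u0.size
    lap = (-2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / h**2
    lap[0, -1] += 1.0 / h**2
    lap[-1, 0] += 1.0 / h**2
    u = u0.copy()
    s = np.sum(0.25 * (1.0 - u**2) ** 2) * h      # s^0 = E_{1h}(u^0)
    for _ in range(nsteps):
        g = np.exp(s - np.sum(0.25 * (1.0 - u**2) ** 2) * h)
        f = u - u**3
        A = (1.0 / tau + kappa * g) * np.eye(N) - eps**2 * lap
        u_new = np.linalg.solve(A, u / tau + g * (f + kappa * u))
        s = s - g * np.sum(f * (u_new - u)) * h
        u = u_new
    return u

h, eps, kappa, T = 1.0 / 32, 0.1, 2.0, 0.5
x = h * np.arange(32)
u0 = 0.5 * np.sin(2 * np.pi * x)
ref = sesav1_solve(u0, T / 1024, 1024, eps, kappa, h)  # fine-step reference
errs = [np.max(np.abs(sesav1_solve(u0, T / n, n, eps, kappa, h) - ref))
        for n in (8, 16, 32)]
orders = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
print(orders)  # observed temporal convergence orders
```

The observed orders should approach one as ${\tau}$ decreases, consistent with the estimate proved below.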
\begin{lemma} \label{lem_sav1_nonlinear} If $h\le h_0$ and $\|u^n\|_\infty\le\beta$, we have \begin{equation} \label{lem_sav1_nonlinear1} | g(u^n,s^n) - g(u_e(t_n),s_e(t_n)) | \le C_{g} (\|e_u^n\| + |e_s^n|), \end{equation} and \begin{equation} \label{lem_sav1_nonlinear2} \| g(u^n,s^n) f(u^n) - g(u_e(t_n),s_e(t_n)) f(u_e(t_n)) \| \le C_{g} (\|e_u^n\| + |e_s^n|), \end{equation} where the constant $ C_{g} >0$ depends on $C_*$, $|\Omega|$, $u_{\text{\rm init}}$, and $\|f\|_{C^1[-\beta,\beta]}$. \end{lemma} \begin{proof} For the exact solutions $u_e(t_n)$ and $s_e(t_n)$, we have $\|u_e(t_n)\|_\infty\le\beta$ by the MBP and $s_e(t_n)\le E(u_{\text{\rm init}})$ by the energy dissipation law. Some careful calculations yield \begin{align*} | g(u^n,s^n) - g(u^n,s_e(t_n)) | & = \frac{1}{\exp \{E_{1h}(u^n)\}} |\exp \{s^n\} - \exp \{s_e(t_n)\}| \\ & \le \frac{\exp \{\xi^n\}}{\exp \{E_{1h}(u^n)\}} |s^n - s_e(t_n)| \\ & \le G^* |s^n - s_e(t_n)| \end{align*} with $\xi^n$ being a number between $s^n$ and $s_e(t_n)$, and \begin{align*} & | g(u^n,s_e(t_n)) - g(u_e(t_n),s_e(t_n)) | \\ & \qquad\quad = \exp \{s_e(t_n)\} \Big|\frac{1}{\exp\{ E_{1h}(u^n)\}} - \frac{1}{\exp \{E_{1h}(u_e(t_n))\}} \Big| \nonumber \\ & \qquad\quad \le (\expp{E(u_{\text{\rm init}})+C_*}) |E_{1h}(u^n) - E_{1h}(u_e(t_n))| \nonumber \\ & \qquad\quad \le |\Omega|^{\frac{1}{2}} (\expp{E(u_{\text{\rm init}})+C_*}) \|F(u^n) - F(u_e(t_n))\| \nonumber \\ & \qquad\quad \le F_0 |\Omega|^{\frac{1}{2}} (\expp{E(u_{\text{\rm init}})+C_*}) \|u^n-u_e(t_n)\|. \end{align*} Combining the two inequalities above, we obtain \eqref{lem_sav1_nonlinear1}. In addition, we have \begin{align*} & \| g(u^n,s^n) f(u_e(t_n)) - g(u_e(t_n),s_e(t_n)) f(u_e(t_n))\| \\ & \qquad\quad \le \|f(u_e(t_n))\| |g(u^n,s^n) - g(u_e(t_n),s_e(t_n))| \nonumber \\ & \qquad\quad \le F_0 |\Omega|^{\frac{1}{2}} C_{g} (\|e_u^n\| + |e_s^n|). 
\end{align*} According to Corollary \ref{cor_g_upbound}, it holds \begin{align*} \| g(u^n,s^n) f(u^n) - g(u^n,s^n) f(u_e(t_n)) \| & \le G^* \|f(u^n) - f(u_e(t_n))\| \\ & \le G^* \|f'\|_{C[-\beta,\beta]} \|u^n - u_e(t_n)\|. \end{align*} Then, we obtain \eqref{lem_sav1_nonlinear2} by applying the triangle inequality to the above two inequalities. \end{proof} \begin{theorem}[Error estimate of sESAV1] \label{thm_sav1_error} Let $T>0$ be a fixed time and suppose the exact solution $u_e$ is sufficiently smooth on $[0,T]\times\overline{\Omega}$. Assume that $\kappa\ge \|f'\|_{C[-\beta,\beta]}$ and $\|u_{\text{\rm init}}\|_\infty\le\beta$. If ${\tau}$ and $h$ are sufficiently small, then we have the following error estimate for the sESAV1 scheme \eqref{eq_sav1stab}: \begin{equation*} \|e_u^n\| + \|\nabla_h e_u^n\| + |e_s^n| \le C({\tau}+h^2),\qquad 0 \le n \le \lfloor T/{\tau}\rfloor, \end{equation*} where the constant $C>0$ depends on $C_*$, $|\Omega|$, $T$, $u_e$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$ but is independent of ${\tau}$ and $h$. \end{theorem} \begin{proof} The difference between \eqref{eq_sav1stab} and \eqref{sav1trun} leads to \begin{subequations} \label{sav1err} \begin{align} \delta_te_u^{n+1} & = \varepsilon^2 \Delta_h e_u^{n+1} + g(u^n,s^n) f(u^n) - g(u_e(t_n),s_e(t_n)) f(u_e(t_n)) - \kappa g(u^n,s^n) (e_u^{n+1}-e_u^n) \nonumber \\ & \quad + \kappa (g(u_e(t_n),s_e(t_n)) - g(u^n,s^n)) (u_e(t_{n+1})-u_e(t_n)) - R_{1u}^n, \label{sav1erra} \\ \delta_te_s^{n+1} & = \Big\< g(u_e(t_n),s_e(t_n)) f(u_e(t_n)) - g(u^n,s^n) f(u^n), \frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}} \Big\> \nonumber \\ & \quad - g(u^n,s^n) \<f(u^n),\delta_te_u^{n+1}\> - R_{1s}^n. 
\label{sav1errb} \end{align} \end{subequations} Taking the discrete inner product of \eqref{sav1erra} with $2{\tau}\delta_te_u^{n+1}$ and rearranging the terms give us \begin{align*} & 2{\tau}\|\delta_te_u^{n+1}\|^2 + 2\kappa g(u^n,s^n) \|e_u^{n+1}-e_u^n\|^2 - 2\varepsilon^2 \<\Delta_h e_u^{n+1},e_u^{n+1}-e_u^n\> \\ & \qquad = 2{\tau} \<g(u^n,s^n) f(u^n) - g(u_e(t_n),s_e(t_n)) f(u_e(t_n)), \delta_te_u^{n+1}\> \nonumber \\ & \qquad\quad + 2 \kappa{\tau} (g(u_e(t_n),s_e(t_n)) - g(u^n,s^n)) \<u_e(t_{n+1})-u_e(t_n), \delta_te_u^{n+1}\> - 2{\tau}\<R_{1u}^n,\delta_te_u^{n+1}\>. \end{align*} Since $g(u^n,s^n)\ge G_*>0$ by Corollary \ref{cor_sav1_g_bound}, using the identities \begin{align} \<\Delta_h e_u^{n+1},e_u^{n+1}-e_u^n\> & = -\frac{1}{2}\|\nabla_h e_u^{n+1}\|^2 + \frac{1}{2}\|\nabla_h e_u^n\|^2 - \frac{1}{2}{\tau}^2\|\nabla_h \delta_te_u^{n+1}\|^2, \nonumber \\ \|e_u^{n+1}-e_u^n\|^2 & = \|e_u^{n+1}\|^2 - \|e_u^n\|^2 - 2{\tau} \<e_u^n, \delta_te_u^{n+1}\>, \label{sav1err_pf0} \end{align} we obtain \begin{align} & 2G_*\kappa\|e_u^{n+1}\|^2 - 2G_*\kappa\|e_u^n\|^2 + \varepsilon^2 \|\nabla_h e_u^{n+1}\|^2 - \varepsilon^2 \|\nabla_h e_u^n\|^2 + 2{\tau} \|\delta_te_u^{n+1}\|^2 \nonumber \\ & \qquad\quad \le 2{\tau} \<g(u^n,s^n) f(u^n) - g(u_e(t_n),s_e(t_n)) f(u_e(t_n)), \delta_te_u^{n+1}\> \nonumber \\ & \qquad\qquad + 2\kappa{\tau} (g(u_e(t_n),s_e(t_n)) - g(u^n,s^n)) \<u_e(t_{n+1})-u_e(t_n), \delta_te_u^{n+1}\> \nonumber \\ & \qquad\qquad + 4G_*\kappa{\tau}\<e_u^n, \delta_te_u^{n+1}\> - 2{\tau} \<R_{1u}^n, \delta_te_u^{n+1}\>. 
\label{sav1err_pf3} \end{align} For the first term in the right-hand side of \eqref{sav1err_pf3}, by Lemma \ref{lem_sav1_nonlinear} we have \begin{align} & 2{\tau} \<g(u^n,s^n) f(u^n) - g(u_e(t_n),s_e(t_n)) f(u_e(t_n)), \delta_te_u^{n+1}\> \nonumber \\ & \qquad\quad \le 2 C_{g} {\tau} (\|e_u^n\| + |e_s^n|) \|\delta_te_u^{n+1}\| \le 4 C_{g} ^2 {\tau} (\|e_u^n\|^2 + |e_s^n|^2) + \frac{{\tau}}{2} \|\delta_te_u^{n+1}\|^2, \label{sav1err_pf4a} \end{align} where $ C_{g} >0$ is the constant in Lemma \ref{lem_sav1_nonlinear}. For the second term in the right-hand side of \eqref{sav1err_pf3}, we have \begin{align} & 2\kappa{\tau} (g(u_e(t_n),s_e(t_n)) - g(u^n,s^n)) \<u_e(t_{n+1})-u_e(t_n), \delta_te_u^{n+1}\> \nonumber \\ & \qquad\quad \le 2\kappa{\tau} |g(u_e(t_n),s_e(t_n)) - g(u^n,s^n)| \|u_e(t_{n+1})-u_e(t_n)\| \|\delta_te_u^{n+1}\| \nonumber \\ & \qquad\quad \le 2 C_{g} \kappa{\tau} (\|u_e(t_{n+1})\|+\|u_e(t_n)\|) (\|e_u^n\| + |e_s^n|) \|\delta_te_u^{n+1}\|\nonumber \\ & \qquad\quad \le C_1\kappa^2{\tau} (\|e_u^n\|^2 + |e_s^n|^2) + \frac{{\tau}}{2} \|\delta_te_u^{n+1}\|^2, \label{sav1err_pf4b0} \end{align} where $C_1>0$ depends on $C_*$, $|\Omega|$, $u_e$, and $\|f\|_{C^1[-\beta,\beta]}$. Using Young's inequality, the third and fourth terms in the right-hand side of \eqref{sav1err_pf3} can be bounded respectively as \begin{eqnarray} & 4G_*\kappa{\tau}\<e_u^n, \delta_te_u^{n+1}\> \le 4G_*\kappa{\tau} \|e_u^n\| \|\delta_te_u^{n+1}\| \le 8G_*^2\kappa^2{\tau} \|e_u^n\|^2 + \dfrac{{\tau}}{2}\|\delta_te_u^{n+1}\|^2, \qquad \label{sav1err_pf4b} \\ & -2{\tau} \<R_{1u}^n, \delta_te_u^{n+1}\> \le 2{\tau} \|R_{1u}^n\| \|\delta_te_u^{n+1}\| \le 4{\tau} \|R_{1u}^n\|^2 + \dfrac{{\tau}}{4}\|\delta_te_u^{n+1}\|^2. 
\quad \label{sav1err_pf4c} \end{eqnarray} Then, substituting \eqref{sav1err_pf4a}--\eqref{sav1err_pf4c} into \eqref{sav1err_pf3} leads to \begin{align} & 2G_*\kappa\|e_u^{n+1}\|^2 - 2G_*\kappa\|e_u^n\|^2 + \varepsilon^2 \|\nabla_h e_u^{n+1}\|^2 - \varepsilon^2 \|\nabla_h e_u^n\|^2 + \frac{{\tau}}{4} \|\delta_te_u^{n+1}\|^2 \nonumber \\ & \qquad \le (4 C_{g} ^2+C_1\kappa^2+8G_*^2\kappa^2) {\tau} \|e_u^n\|^2 + (4 C_{g} ^2+C_1\kappa^2) {\tau}|e_s^n|^2 + 4{\tau} \|R_{1u}^n\|^2. \label{sav1err_pf5} \end{align} Multiplying \eqref{sav1errb} by $2{\tau} e_s^{n+1}$ yields \begin{align} & |e_s^{n+1}|^2 - |e_s^n|^2 + |e_s^{n+1}-e_s^n|^2 \nonumber \\ & \qquad\quad = 2e_s^{n+1} \< g(u_e(t_n),s_e(t_n)) f(u_e(t_n)) - g(u^n,s^n) f(u^n), u_e(t_{n+1})-u_e(t_n) \> \nonumber \\ & \qquad\qquad - 2{\tau} e_s^{n+1} g(u^n,s^n) \<f(u^n),\delta_te_u^{n+1}\> - 2{\tau} R_{1s}^n e_s^{n+1}. \label{sav1err_pf6} \end{align} For the first term in the right-hand side of \eqref{sav1err_pf6}, by Lemma \ref{lem_sav1_nonlinear} we have \begin{align} & 2e_s^{n+1} \< g(u_e(t_n),s_e(t_n)) f(u_e(t_n)) - g(u^n,s^n) f(u^n), u_e(t_{n+1})-u_e(t_n) \> \nonumber \\ & \qquad\quad \le 2|e_s^{n+1}| \|g(u_e(t_n),s_e(t_n)) f(u_e(t_n)) - g(u^n,s^n) f(u^n)\| \|u_e(t_{n+1})-u_e(t_n)\| \nonumber \\ & \qquad\quad \le 2 C_{g} {\tau} |e_s^{n+1}| (\|e_u^n\| + |e_s^n|) \|(u_e)_t(\theta_n)\| \qquad (\mbox{for some } t_n<\theta_n<t_{n+1}) \nonumber \\ & \qquad\quad \le C_2 {\tau} (\|e_u^n\|^2 + |e_s^n|^2 + |e_s^{n+1}|^2), \label{sav1err_pf7a} \end{align} where $C_2>0$ depends on $C_*$, $|\Omega|$, $u_e$, and $\|f\|_{C^1[-\beta,\beta]}$. 
For the second term in the right-hand side of \eqref{sav1err_pf6}, using Corollary \ref{cor_g_upbound}, we obtain \begin{align} \label{sav1err_pf7b} - 2{\tau} e_s^{n+1} g(u^n,s^n) \<f(u^n),\delta_te_u^{n+1}\> & \le 2G^*{\tau} \|f(u^n)\| |e_s^{n+1}| \|\delta_te_u^{n+1}\| \nonumber \\ & \le C_3{\tau} |e_s^{n+1}|^2 + \frac{{\tau}}{4}\|\delta_te_u^{n+1}\|^2, \end{align} where $C_3>0$ depends on $C_*$, $|\Omega|$, $u_{\text{\rm init}}$, and $\|f\|_{C[-\beta,\beta]}$. For the third term in the right-hand side of \eqref{sav1err_pf6}, we have \begin{equation} \label{sav1err_pf7c} - 2{\tau} R_{1s}^n e_s^{n+1} \le {\tau} |R_{1s}^n|^2 + {\tau} |e_s^{n+1}|^2. \end{equation} Then, substituting \eqref{sav1err_pf7a}--\eqref{sav1err_pf7c} into \eqref{sav1err_pf6} leads to \begin{equation} \label{sav1err_pf8} |e_s^{n+1}|^2 - |e_s^n|^2 \le C_2 {\tau} \|e_u^n\|^2 + C_2 {\tau} |e_s^n|^2 + (1 + C_2 + C_3) {\tau} |e_s^{n+1}|^2 + \frac{{\tau}}{4}\|\delta_te_u^{n+1}\|^2 + {\tau} |R_{1s}^n|^2. \end{equation} Adding \eqref{sav1err_pf5} and \eqref{sav1err_pf8}, we obtain \begin{align*} & 2G_*\kappa (\|e_u^{n+1}\|^2 - \|e_u^n\|^2) + \varepsilon^2 (\|\nabla_h e_u^{n+1}\|^2 - \|\nabla_h e_u^n\|^2) + (|e_s^{n+1}|^2 - |e_s^n|^2) \\ & \qquad\quad \le (4 C_{g} ^2+C_1\kappa^2+8G_*^2\kappa^2+C_2) {\tau} \|e_u^n\|^2 + (4 C_{g} ^2+C_1\kappa^2+C_2) {\tau}|e_s^n|^2 \\ & \qquad\qquad + (1 + C_2 + C_3) {\tau} |e_s^{n+1}|^2 + 4{\tau} \|R_{1u}^n\|^2 + {\tau} |R_{1s}^n|^2. \end{align*} Then, using \eqref{sav1trunerr}, we reach \begin{align*} & \quad~ 2G_*\kappa (\|e_u^{n+1}\|^2 - \|e_u^n\|^2) + \varepsilon^2 (\|\nabla_h e_u^{n+1}\|^2 - \|\nabla_h e_u^n\|^2) + (|e_s^{n+1}|^2 - |e_s^n|^2) \\ &\qquad\qquad \le C_4 {\tau} (\|e_u^n\|^2 + |e_s^n|^2 + |e_s^{n+1}|^2) + 5C_e^2{\tau}({\tau}+h^2)^2, \end{align*} where the constant $C_4$ depends on $C_*$, $|\Omega|$, $T$, $u_e$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$. 
Letting $W^n := 2G_*\kappa \|e_u^n\|^2 + \varepsilon^2 \|\nabla_h e_u^n\|^2 + |e_s^n|^2$, we have \[ W^{n+1} - W^n \le \widetilde{C}_4{\tau} (W^n + W^{n+1}) + 5C_e^2{\tau}({\tau}+h^2)^2, \] where $ \widetilde{C}_4$ depends on $C_4$ and $\kappa$. When ${\tau}\le\frac{1}{2 \widetilde{C}_4}$, noting that $\frac{1+ \widetilde{C}_4{\tau}}{1- \widetilde{C}_4{\tau}}\le 1 + 4 \widetilde{C}_4{\tau}$, we obtain \[ W^{n+1} \le (1+4 \widetilde{C}_4{\tau}) W^n + 10C_e^2{\tau} ({\tau}+h^2)^2. \] Noting that $e_u^0=0$ and thus $W^0=|e_s^0|^2=|E_{1h}(u_{\text{\rm init}})-E_1(u_{\text{\rm init}})|^2\le Ch^4$, and using the discrete Gronwall's inequality, we obtain \[ 2G_*\kappa\|e_u^n\|^2 + \varepsilon^2 \|\nabla_h e_u^n\|^2 + |e_s^n|^2 = W^n \le \mathrm{e}^{4 \widetilde{C}_4T} \big(W^0 + 10C_e^2T({\tau}+h^2)^2\big) \le C \mathrm{e}^{4 \widetilde{C}_4T} ({\tau}+h^2)^2, \] which completes the proof. \end{proof} \begin{remark} For any fixed $h>0$, let us recall the space-discrete problem \eqref{eq_semidis} and denote by $u_{h,e}(t)$ the exact solution. By similar analysis as Theorem \ref{thm_sav1_error}, one can obtain the error estimates for sufficiently small ${\tau}$ as follows: \begin{equation*} \|u^n-u_{h,e}(t_n)\| + \|\nabla_h u^n - \nabla_h u_{h,e}(t_n)\| + |s^n-E_{1h}(u_{h,e}(t_n))| \le C_h{\tau}, \end{equation*} where the constant $C_h>0$ depends on $C_*$, $|\Omega|$, $T$, $u_{h,e}$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$ but is independent of ${\tau}$. 
\end{remark} \subsection{Second-order sESAV scheme} \label{sect_sav2} For the space-discrete system \eqref{eq_semidis}, the second-order stabilized ESAV scheme (sESAV2) is given by \begin{subequations} \label{eq_sav2stab} \begin{align} \delta_t u^{n+1} & = \varepsilon^2 \Delta_h u^{n+\frac{1}{2}} + g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}) - \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) (u^{n+\frac{1}{2}}-\widehat{u}^{n+\frac{1}{2}}), \label{eq_sav2staba} \\ \delta_t s^{n+1} & = - g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<f(\widehat{u}^{n+\frac{1}{2}}),\delta_t u^{n+1}\> + \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<u^{n+\frac{1}{2}}-\widehat{u}^{n+\frac{1}{2}}, \delta_t u^{n+1}\>, \label{eq_sav2stabb} \end{align} \end{subequations} where $ g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) > 0$ with $(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})$ being generated by the first-order scheme \eqref{eq_sav1stab} with the time step size ${\tau}/2$, i.e., \begin{subequations} \label{eq_sav2stab0} \begin{align} \frac{\widehat{u}^{n+\frac{1}{2}}-u^n}{{\tau}/2} & = \varepsilon^2 \Delta_h \widehat{u}^{n+\frac{1}{2}} + g(u^n,s^n)f(u^n) - \kappa g(u^n,s^n)(\widehat{u}^{n+\frac{1}{2}}-u^n), \label{eq_sav2stab0a}\\ \widehat{s}^{n+\frac{1}{2}}-s^n & = - g(u^n,s^n)\<f(u^n),\widehat{u}^{n+\frac{1}{2}}-u^n\>. \label{eq_sav2stab0b} \end{align} \end{subequations} The scheme \eqref{eq_sav2stab} is started by $u^0=u_{\text{\rm init}}$ and $s^0=E_{1h}(u^0)$. By the definition of $u^{n+\frac{1}{2}}$, the last term in \eqref{eq_sav2staba} is actually $- \frac{1}{2} \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) (u^{n+1}-2\widehat{u}^{n+\frac{1}{2}}+u^n)$, which provides a second-order truncation error in time. 
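In implementation terms, one sESAV2 step thus consists of two linear solves: the sESAV1 predictor \eqref{eq_sav2stab0} with step ${\tau}/2$, followed by the Crank--Nicolson-type corrector. A minimal sketch in a 1D periodic setting with $f(u)=u-u^3$ (all parameter values are illustrative assumptions; the time step is chosen of order $h^2/\varepsilon^2$, matching the restriction under which the MBP of this scheme is proved below):

```python
import numpy as np

def sesav2_step(u, s, tau, eps, kappa, h):
    """One sESAV2 step for f(u) = u - u^3 on a 1D periodic grid:
    an sESAV1 half-step produces (u_hat, s_hat), then a
    Crank--Nicolson-type correction gives (u^{n+1}, s^{n+1})."""
    N = u.size
    lap = (-2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / h**2
    lap[0, -1] += 1.0 / h**2
    lap[-1, 0] += 1.0 / h**2
    I = np.eye(N)
    f = lambda v: v - v**3
    g = lambda v, sv: np.exp(sv - np.sum(0.25 * (1.0 - v**2) ** 2) * h)

    # predictor: sESAV1 with time step tau/2
    g0 = g(u, s)
    A = (2.0 / tau + kappa * g0) * I - eps**2 * lap
    u_hat = np.linalg.solve(A, 2.0 * u / tau + g0 * (f(u) + kappa * u))
    s_hat = s - g0 * np.sum(f(u) * (u_hat - u)) * h

    # corrector: Crank--Nicolson-type update with the predicted pair
    g1 = g(u_hat, s_hat)
    A = (2.0 / tau + kappa * g1) * I - eps**2 * lap
    rhs = ((2.0 / tau - kappa * g1) * I + eps**2 * lap) @ u \
          + 2.0 * g1 * (f(u_hat) + kappa * u_hat)
    u_new = np.linalg.solve(A, rhs)
    u_mid = 0.5 * (u_new + u)
    s_new = s - g1 * np.sum((f(u_hat) - kappa * (u_mid - u_hat))
                            * (u_new - u)) * h
    return u_new, s_new

# illustrative setup; tau is well below (kappa/2 + eps^2/h^2)^{-1}
h, eps, kappa = 1.0 / 32, 0.05, 2.0
tau = 0.5 / (kappa / 2.0 + eps**2 / h**2)
x = h * np.arange(32)
u = 0.5 * np.sin(2 * np.pi * x)
s = np.sum(0.25 * (1.0 - u**2) ** 2) * h
for _ in range(10):
    u, s = sesav2_step(u, s, tau, eps, kappa, h)
print(np.max(np.abs(u)))  # remains bounded by beta = 1 under the step restriction
```

The modified energy decreases at every step regardless of ${\tau}$, while keeping the iterates in $[-1,1]$ requires the time-step restriction, in line with the theory established below.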
We can rewrite \eqref{eq_sav2stab} in the following form: \begin{subequations} \label{eq_sav2var} \begin{align} & \Big[\Big(\frac{2}{{\tau}}+\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\Big)I-\varepsilon^2\Delta_h\Big] u^{n+1} \nonumber\\ &\qquad = \Big[\Big(\frac{2}{{\tau}}-\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\Big)I+\varepsilon^2\Delta_h\Big] u^n + 2g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})[f(\widehat{u}^{n+\frac{1}{2}}) + \kappa\widehat{u}^{n+\frac{1}{2}}], \label{lem_sav2_pf} \\ &s^{n+1} = s^n - g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<f(\widehat{u}^{n+\frac{1}{2}}) - \kappa (u^{n+\frac{1}{2}}-\widehat{u}^{n+\frac{1}{2}}),u^{n+1}-u^n\>. \end{align} \end{subequations} It is then easy to see that the system \eqref{eq_sav2var} is linear and uniquely solvable for any ${\tau}>0$ since $(\frac{2}{{\tau}}+\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}))I-\varepsilon^2\Delta_h$ is self-adjoint and positive definite. \subsubsection{Energy dissipation and MBP} The energy dissipation law and the MBP preservation of the sESAV2 scheme \eqref{eq_sav2stab} are stated below. \begin{theorem}[Energy dissipation of sESAV2] \label{thm_sav2_es} For any $\kappa\ge0$ and ${\tau}>0$, the sESAV2 scheme \eqref{eq_sav2stab} is energy dissipative in the sense that $\mathcal{E}_h(u^{n+1},s^{n+1})\le \mathcal{E}_h(u^n,s^n)$, where $\mathcal{E}_h(u^n,s^n)$ is given by \eqref{modified_energy}. Moreover, it holds that $s^n\le E_h(u_{\text{\rm init}})$ and $\widehat{s}^{n+\frac{1}{2}}\le E_h(u_{\text{\rm init}})$ for all $n$. 
\end{theorem} \begin{proof} Taking the inner product of \eqref{eq_sav2staba} with $u^{n+1}-u^n$ yields \begin{align} \label{sav2_es_pf1} \frac{1}{{\tau}}\|u^{n+1}-u^n\|^2 & = -\frac{\varepsilon^2}{2}\|\nabla_hu^{n+1}\|^2 + \frac{\varepsilon^2}{2}\|\nabla_hu^n\|^2 + g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<f(\widehat{u}^{n+\frac{1}{2}}),u^{n+1}-u^n\>\nonumber\\ &\quad - \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<u^{n+\frac{1}{2}}-\widehat{u}^{n+\frac{1}{2}},u^{n+1}-u^n\>. \end{align} Combining \eqref{sav2_es_pf1} and \eqref{eq_sav2stabb}, we obtain \[ \mathcal{E}_h(u^{n+1},s^{n+1}) - \mathcal{E}_h(u^n,s^n) = -\frac{1}{{\tau}} \|u^{n+1}-u^n\|^2 \le 0. \] Similar to the proof of Corollary \ref{cor_sav1_es}, the uniform boundedness of $\{s^n\}$ is a direct result of the energy stability. Since $\widehat{s}^{n+\frac{1}{2}}$ is generated by the sESAV1 scheme \eqref{eq_sav2stab0}, according to Theorem \ref{thm_sav1_es}, we have $\widehat{s}^{n+\frac{1}{2}} \le \mathcal{E}_h(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \le \mathcal{E}_h(u^n,s^n)$, and thus $\widehat{s}^{n+\frac{1}{2}}\le E_h(u_{\text{\rm init}})$. \end{proof} \begin{theorem}[MBP of sESAV2] \label{thm_sav2_mbp} If $h\le h_0$, $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, and \begin{equation} \label{sav2_mbp_dt} {\tau} \le \Big( \frac{\kappa G^*}{2} + \frac{\varepsilon^2}{h^2} \Big)^{-1}, \end{equation} where $G^*$ is the positive constant defined in Corollary \ref{cor_g_upbound}, then the sESAV2 scheme \eqref{eq_sav2stab} preserves the MBP for $\{u^n\}$, i.e., \eqref{dismbp} is valid. \end{theorem} \begin{proof} Suppose $(u^n,s^n)$ is given and $\|u^n\|_\infty\le\beta$ for some $n$. By Theorems \ref{thm_sav1_mbp} and \ref{thm_sav2_es}, we have $\|\widehat{u}^{n+\frac{1}{2}}\|_\infty\le\beta$ and $\widehat{s}^{n+\frac{1}{2}}\le E_h(u_{\text{\rm init}})$. 
Then, we know that $0< g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\le G^*$ by an analysis similar to that of Corollary \ref{cor_g_upbound}. The condition \eqref{sav2_mbp_dt} implies \begin{equation*} \frac{2}{{\tau}}-\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\ge\frac{2\varepsilon^2}{h^2}. \end{equation*} According to the definition of the matrix $\infty$-norm, we have \[ \Big\|\Big(\frac{2}{{\tau}}-\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\Big)I+\varepsilon^2\Delta_h\Big\|_\infty = \frac{2}{{\tau}}-\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}). \] Since $\kappa\ge\|f'\|_{C[-\beta,\beta]}$ and $\|\widehat{u}^{n+\frac{1}{2}}\|_\infty\le\beta$, according to Lemma \ref{lem_nonlinear}, we have \[ \|f(\widehat{u}^{n+\frac{1}{2}}) + \kappa\widehat{u}^{n+\frac{1}{2}}\|_\infty \le \kappa\beta. \] Therefore, using Lemma \ref{lem_lapdiff}, we obtain from \eqref{lem_sav2_pf} that \[ \|u^{n+1}\|_\infty \le \Big(\frac{2}{{\tau}}+\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\Big)^{-1} \Big[\Big(\frac{2}{{\tau}}-\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\Big) \beta + 2\kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\beta \Big] = \beta. \] By induction, we have $\|u^n\|_\infty\le\beta$ for all $n$. \end{proof} \begin{remark} \label{rmk_sav2_gbound} Theorem \ref{thm_sav2_mbp} implies that $0< g(u^n,s^n)\le G^*$ and $0< g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\le G^*$ hold for all $n$. \end{remark} \begin{remark} \label{rmk_sav2_mbp} The condition \eqref{sav2_mbp_dt} on the time step size implies ${\tau}=\mathcal{O}(h^2/\varepsilon^2)$, which is the same as the restrictions enforced in \cite{HoLe20,HoTaYa17}. 
This restriction comes essentially from the explicit term $\Delta_hu^n$ due to the use of the Crank--Nicolson approximation, which also means that the second-order scheme \eqref{eq_sav2stab} cannot preserve the MBP unconditionally even though we introduce the stabilization term. In practical computations, $ g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\approx1$ so that the requirement for the time step size can be set to be ${\tau} \le ( \frac{\kappa}{2} + \frac{\varepsilon^2}{h^2} )^{-1}$ in order to preserve the MBP, which is later used in our numerical experiments. \end{remark} Similar to the analysis for the sESAV1 scheme, we can show that both $\{ g(u^n,s^n)\}$ and $\{ g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\}$ have uniform positive lower bounds. \begin{lemma} \label{lem_sav2_h2bound} Given a fixed time $T>0$. If $h\le h_0$, $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, $\|u_{\text{\rm init}}\|_\infty\le\beta$, and ${\tau}\le 1$ satisfying \eqref{sav2_mbp_dt}, there exists a constant $M>0$ depending on $C_*$, $|\Omega|$, $T$, $u_{\text{\rm init}}$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$ such that \begin{align*} {\tau}^{-1}\|\widehat{u}^{n+\frac{1}{2}}-u^n\| + \|\Delta_h\widehat{u}^{n+\frac{1}{2}}\| \le M, \\ {\tau}^{-1}\|u^{n+1}-u^n\| + \|\Delta_h u^{n+1}\|\le M, \end{align*} for $0 \le n \le \lfloor T/{\tau}\rfloor-1$. \end{lemma} \begin{proof} Since $\widehat{u}^{n+\frac{1}{2}}$ is the solution to the sESAV1 substep \eqref{eq_sav2stab0}, according to \eqref{lem_sav1_h2bound_pf}, we have \begin{equation} \label{lem_sav2_h2bound_pf} \|\Delta_h\widehat{u}^{n+\frac{1}{2}}\|^2 \le \Big(1+\frac{G^*\kappa}{2}+\frac{(G^* \|f'\|_{C[-\beta,\beta]} C_\Omega)^2}{4\varepsilon^2}\Big) \|\Delta_h u^n\|^2, \end{equation} where we used ${\tau}\le1$. 
Taking the discrete inner product of \eqref{eq_sav2staba} with $2{\tau}\Delta_h^2u^{n+\frac{1}{2}}$, using the fact $0< g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\le G^*$, and conducting an analysis similar to the proof of Lemma \ref{lem_sav1_h2bound}, we can obtain \begin{align*} \|\Delta_h u^{n+1}\|^2 \le \|\Delta_h u^n\|^2 + \Big(G^*\kappa + \frac{(G^* \|f'\|_{C[-\beta,\beta]} C_\Omega)^2}{2\varepsilon^2}\Big) {\tau} \|\Delta_h\widehat{u}^{n+\frac{1}{2}}\|^2. \end{align*} Substituting \eqref{lem_sav2_h2bound_pf} into the above inequality, we have \begin{align*} \|\Delta_h u^{n+1}\|^2 & \le \Big[1 + \Big(G^*\kappa + \frac{(G^* \|f'\|_{C[-\beta,\beta]} C_\Omega)^2}{2\varepsilon^2}\Big) \\ & \qquad \cdot \Big(1+\frac{G^*\kappa}{2}+\frac{(C_\Omega \|f'\|_{C[-\beta,\beta]} G^*)^2}{4\varepsilon^2}\Big) {\tau} \Big] \|\Delta_h u^n\|^2. \end{align*} By recursion, we can obtain a uniform upper bound for $\|\Delta_h u^{n+1}\|$. Then, by \eqref{lem_sav2_h2bound_pf} we can also get the upper bound for $\|\Delta_h\widehat{u}^{n+\frac{1}{2}}\|$. Finally, as a consequence of the above analysis, using \eqref{eq_sav2stab0a} and \eqref{eq_sav2staba}, we also get the boundedness of ${\tau}^{-1}\|\widehat{u}^{n+\frac{1}{2}}-u^n\|$ and ${\tau}^{-1}\|u^{n+1}-u^n\|$, and the proof is completed. \end{proof} \begin{corollary} \label{cor_sav2_g_bound} Let $T>0$ be a fixed time. If $h\le h_0$, $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, $\|u_{\text{\rm init}}\|_\infty\le\beta$, and ${\tau}\le 1$ satisfies \eqref{sav2_mbp_dt}, then there exists a constant ${\widetilde G}_{*}>0$ such that $ g(u^n,s^n)\ge {\widetilde G}_{*}$ and $ g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\ge {\widetilde G}_{*}$, where ${\widetilde G}_{*}$ depends on $C_*$, $|\Omega|$, $T$, $u_{\text{\rm init}}$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$. \end{corollary} \begin{proof} It suffices to show the existence of the lower bounds of $\{s^n\}$ and $\{\widehat{s}^{n+\frac{1}{2}}\}$. 
By Lemma \ref{lem_sav2_h2bound}, we have $\|u^{n+1}-u^n\|\le M{\tau}$. From \eqref{eq_sav2stabb}, an analysis similar to that of Corollary \ref{cor_sav1_g_bound} yields the lower bound for $\{s^n\}$. Then, since $\|\widehat{u}^{n+\frac{1}{2}}-u^n\|\le M{\tau}\le M$, we can derive from \eqref{eq_sav2stab0b} that \[ \widehat{s}^{n+\frac{1}{2}} \ge s^n - G^* F_0 |\Omega|^{\frac{1}{2}} M, \] which completes the proof. \end{proof} \subsubsection{Error estimates} It is easy to check that the exact solution $u_e$ to \eqref{AllenCahn} with $s_e(t)=E_1(u_e(t))$ satisfies \begin{subequations} \label{sav2trun} \begin{align} &\frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}} = \frac{\varepsilon^2}{2} \Delta_h (u_e(t_{n+1})+u_e(t_n)) + g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}})) \nonumber \\ & \qquad - \kappa g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) \Big(\frac{u_e(t_{n+1})+u_e(t_n)}{2}-u_e(t_{n+\frac{1}{2}})\Big) + R_{2u}^n, \label{sav2truna} \\ &\frac{s_e(t_{n+1})-s_e(t_n)}{{\tau}} = - g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) \Big\<f(u_e(t_{n+\frac{1}{2}})),\frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}}\Big\> \nonumber \\ & \qquad + \kappa g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) \Big\<\frac{u_e(t_{n+1})+u_e(t_n)}{2}-u_e(t_{n+\frac{1}{2}}),\frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}}\Big\> + R_{2s}^n, \label{sav2trunb} \end{align} \end{subequations} where the truncation errors $R_{2u}^n$ and $R_{2s}^n$ satisfy \begin{equation} \label{sav2trunerr} \|R_{2u}^n\|\le C_e({\tau}^2+h^2), \qquad |R_{2s}^n|\le C_e({\tau}^2+h^2). \end{equation} Apart from the numerical error functions $e_u^n$ and $e_s^n$ defined by \eqref{errorfuns}, let us also define \[ \widehat{e}_u^{n+\frac{1}{2}} = \widehat{u}^{n+\frac{1}{2}} - u_e(t_{n+\frac{1}{2}}), \qquad \widehat{e}_s^{n+\frac{1}{2}} = \widehat{s}^{n+\frac{1}{2}} - s_e(t_{n+\frac{1}{2}}). 
\] We first present an estimate for $\widehat{e}_u^{n+\frac{1}{2}}$ and $\widehat{e}_s^{n+\frac{1}{2}}$, which will be used in the proof of the error estimate for the sESAV2 scheme \eqref{eq_sav2stab}. Recalling the proof of Theorem \ref{thm_sav1_error} for the sESAV1 scheme, the error equations with respect to $\widehat{e}_u^{n+\frac{1}{2}}$ and $\widehat{e}_s^{n+\frac{1}{2}}$ read as \begin{subequations} \label{sav2err1} \begin{align} \frac{\widehat{e}_u^{n+\frac{1}{2}} - e_u^n}{{\tau}/2} & = \varepsilon^2 \Delta_h \widehat{e}_u^{n+\frac{1}{2}} + g(u^n,s^n)f(u^n) - g(u_e(t_n),s_e(t_n))f(u_e(t_n)) - \kappa g(u^n,s^n) (\widehat{e}_u^{n+\frac{1}{2}}-e_u^n) \nonumber \\ & \quad + \kappa (g(u_e(t_n),s_e(t_n))-g(u^n,s^n)) (u_e(t_{n+\frac{1}{2}})-u_e(t_n)) - \widehat{R}_{1u}^n, \label{sav2err1a} \\ \widehat{e}_s^{n+\frac{1}{2}} - e_s^n & = \< g(u_e(t_n),s_e(t_n))f(u_e(t_n)) - g(u^n,s^n)f(u^n), u_e(t_{n+\frac{1}{2}})-u_e(t_n) \> \nonumber \\ & \quad - g(u^n,s^n) \<f(u^n),\widehat{e}_u^{n+\frac{1}{2}} - e_u^n\> - \frac{{\tau}}{2}\widehat{R}_{1s}^n, \label{sav2err1b} \end{align} \end{subequations} where \begin{equation} \label{sav2trunerr1} \|\widehat{R}_{1u}^n\|\le C_e({\tau}+h^2), \qquad |\widehat{R}_{1s}^n|\le C_e({\tau}+h^2). \end{equation} \begin{lemma} Suppose that $h\le h_0$ and $\|u^n\|_\infty\le\beta$. If ${\tau}$ is sufficiently small, we have \begin{equation} \label{sav2err_pf11} \|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + |\widehat{e}_s^{n+\frac{1}{2}}|^2 \le \widehat{C} (\|e_u^n\|^2 + |e_s^n|^2) + \widehat{C} C_e^2({\tau}^2+{\tau} h^2)^2, \end{equation} where the constant $\widehat{C}>0$ depends on $C_*$, $|\Omega|$, $u_e$, $\kappa$, and $\|f\|_{C^1[-\beta,\beta]}$. 
\end{lemma} \begin{proof} Taking the discrete inner product of \eqref{sav2err1a} with ${\tau}\widehat{e}_u^{n+\frac{1}{2}}$ and rearranging the terms, we have \begin{align*} & \Big(1+\frac{\kappa g(u^n,s^n)}{2}{\tau}\Big) \big(\|\widehat{e}_u^{n+\frac{1}{2}}\|^2 - \|e_u^n\|^2 + \|\widehat{e}_u^{n+\frac{1}{2}} - e_u^n\|^2\big) + \varepsilon^2 {\tau} \|\nabla_h \widehat{e}_u^{n+\frac{1}{2}}\|^2 \\ & \qquad = {\tau} \<g(u^n,s^n)f(u^n) - g(u_e(t_n),s_e(t_n))f(u_e(t_n)), \widehat{e}_u^{n+\frac{1}{2}}\> \\ & \qquad\quad + \kappa{\tau} \<(g(u_e(t_n),s_e(t_n))-g(u^n,s^n)) (u_e(t_{n+\frac{1}{2}})-u_e(t_n)), \widehat{e}_u^{n+\frac{1}{2}}\> - {\tau} \<\widehat{R}_{1u}^n, \widehat{e}_u^{n+\frac{1}{2}}\>. \end{align*} Using Young's inequality, we then get \begin{equation*} - {\tau} \<\widehat{R}_{1u}^n, \widehat{e}_u^{n+\frac{1}{2}}\> \le {\tau} \|\widehat{R}_{1u}^n\| \|\widehat{e}_u^{n+\frac{1}{2}}\| \le {\tau}^2 \|\widehat{R}_{1u}^n\|^2 + \frac{1}{4}\|\widehat{e}_u^{n+\frac{1}{2}}\|^2. \end{equation*} Similar to the deductions of \eqref{sav1err_pf4a} and \eqref{sav1err_pf4b0}, applying Lemma \ref{lem_sav1_nonlinear} leads to \begin{align*} & {\tau} \<g(u^n,s^n)f(u^n) - g(u_e(t_n),s_e(t_n))f(u_e(t_n)), \widehat{e}_u^{n+\frac{1}{2}}\> \le 2 C_{g} ^2{\tau}^2 (\|e_u^n\|^2+|e_s^n|^2) + \frac{1}{4}\|\widehat{e}_u^{n+\frac{1}{2}}\|^2, \\ & \kappa{\tau} \<(g(u_e(t_n),s_e(t_n))-g(u^n,s^n)) (u_e(t_{n+\frac{1}{2}})-u_e(t_n)), \widehat{e}_u^{n+\frac{1}{2}}\> \le C_1\kappa^2{\tau}^2 (\|e_u^n\|^2+|e_s^n|^2) + \frac{1}{4} \|\widehat{e}_u^{n+\frac{1}{2}}\|^2. \end{align*} Then, we have \begin{align*} & \Big(\frac{1}{4}+\frac{\kappa g(u^n,s^n)}{2}{\tau}\Big) \|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + \Big(1+\frac{\kappa g(u^n,s^n)}{2}{\tau}\Big) \|\widehat{e}_u^{n+\frac{1}{2}} - e_u^n\|^2 \nonumber\\ & \qquad\quad \le \Big(1+\frac{\kappa g(u^n,s^n)}{2}{\tau}\Big) \|e_u^n\|^2 + (2 C_{g} ^2 + C_1\kappa^2) {\tau}^2 (\|e_u^n\|^2+|e_s^n|^2) + {\tau}^2 \|\widehat{R}_{1u}^n\|^2. 
\end{align*} By Remark \ref{rmk_sav2_gbound}, we can then simplify the above inequality to get \begin{align*} \|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + 4\|\widehat{e}_u^{n+\frac{1}{2}} - e_u^n\|^2 & \le (4+2G^*\kappa{\tau}) \|e_u^n\|^2 + (8 C_{g} ^2 + 4C_1\kappa^2){\tau}^2 (\|e_u^n\|^2+|e_s^n|^2) + 4{\tau}^2 \|\widehat{R}_{1u}^n\|^2. \end{align*} When ${\tau}\le1$, using \eqref{sav2trunerr1}, we obtain \begin{align} \label{sav2err_pf11a} \|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + 4\|\widehat{e}_u^{n+\frac{1}{2}} - e_u^n\|^2 & \le (4+8 C_{g} ^2+2G^*\kappa+4C_1\kappa^2) \|e_u^n\|^2 \nonumber \\ & \quad + (8 C_{g} ^2 + 4C_1\kappa^2) |e_s^n|^2 + 4C_e^2 {\tau}^2 ({\tau}+h^2)^2. \end{align} Multiplying \eqref{sav2err1b} by $2\widehat{e}_s^{n+\frac{1}{2}}$ yields \begin{align*} & |\widehat{e}_s^{n+\frac{1}{2}}|^2 - |e_s^n|^2 + |\widehat{e}_s^{n+\frac{1}{2}}-e_s^n|^2 \\ & \qquad\quad = 2\widehat{e}_s^{n+\frac{1}{2}} \< g(u_e(t_n),s_e(t_n))f(u_e(t_n)) - g(u^n,s^n)f(u^n), u_e(t_{n+\frac{1}{2}})-u_e(t_n) \> \nonumber \\ & \qquad\qquad - 2\widehat{e}_s^{n+\frac{1}{2}} g(u^n,s^n) \<f(u^n),\widehat{e}_u^{n+\frac{1}{2}}-e_u^n\> - {\tau} \widehat{R}_{1s}^n \widehat{e}_s^{n+\frac{1}{2}}. \end{align*} We can bound the third and second terms in the right-hand side of the above equation respectively as \begin{eqnarray*} & \displaystyle - {\tau} \widehat{R}_{1s}^n \widehat{e}_s^{n+\frac{1}{2}} \le {\tau}^2 |\widehat{R}_{1s}^n|^2 + \frac{1}{4} |\widehat{e}_s^{n+\frac{1}{2}}|^2, \\ & \displaystyle - 2\widehat{e}_s^{n+\frac{1}{2}} g(u^n,s^n) \<f(u^n),\widehat{e}_u^{n+\frac{1}{2}}-e_u^n\> \le \frac{1}{4} |\widehat{e}_s^{n+\frac{1}{2}}|^2 + C_5 \|\widehat{e}_u^{n+\frac{1}{2}}-e_u^n\|^2, \end{eqnarray*} where $C_5>0$ depends on $C_*$, $|\Omega|$, $u_{\text{\rm init}}$, and $\|f\|_{C[-\beta,\beta]}$. 
Estimating the first term in a way similar to \eqref{sav1err_pf7a}, we obtain \begin{align*} & |\widehat{e}_s^{n+\frac{1}{2}}|^2 - |e_s^n|^2 + |\widehat{e}_s^{n+\frac{1}{2}}-e_s^n|^2 \\ & \qquad\quad \le C_2 {\tau} (\|e_u^n\|^2 + |e_s^n|^2 + |\widehat{e}_s^{n+\frac{1}{2}}|^2) + C_5 \|\widehat{e}_u^{n+\frac{1}{2}}-e_u^n\|^2 + \frac{1}{2} |\widehat{e}_s^{n+\frac{1}{2}}|^2 + {\tau}^2 |\widehat{R}_{1s}^n|^2, \end{align*} and thus, \[ (1-2C_2{\tau}) |\widehat{e}_s^{n+\frac{1}{2}}|^2 \le 2|e_s^n|^2 + 2C_2 {\tau} (\|e_u^n\|^2 + |e_s^n|^2) + 2C_5 \|\widehat{e}_u^{n+\frac{1}{2}}-e_u^n\|^2 + 2{\tau}^2 |\widehat{R}_{1s}^n|^2. \] When ${\tau}\le \frac{1}{4C_2}$, using \eqref{sav2trunerr1}, we can get \begin{equation} \label{sav2err_pf11b} |\widehat{e}_s^{n+\frac{1}{2}}|^2 \le \|e_u^n\|^2 + 5|e_s^n|^2 + 4C_5 \|\widehat{e}_u^{n+\frac{1}{2}}-e_u^n\|^2 + 4C_e^2 {\tau}^2 ({\tau}+h^2)^2. \end{equation} The sum of \eqref{sav2err_pf11a} multiplied by $C_5$ and \eqref{sav2err_pf11b} leads to \eqref{sav2err_pf11}. \end{proof} \begin{theorem}[Error estimate of sESAV2] \label{thm_sav2_error} Let $T > 0$ be a fixed time and suppose that the exact solution $u_e$ is sufficiently smooth on $[0,T]\times\overline{\Omega}$. Assume that $\kappa\ge \|f'\|_{C[-\beta,\beta]}$ and $\|u_{\text{\rm init}}\|_\infty\le\beta$. If ${\tau}$ and $h$ are sufficiently small and satisfy \eqref{sav2_mbp_dt}, then we have the error estimate for the sESAV2 scheme \eqref{eq_sav2stab} as follows: \begin{equation*} \|e_u^n\| + \|\nabla_h e_u^n\| + |e_s^n| \le C({\tau}^2+h^2),\qquad 0 \le n \le \lfloor T/{\tau}\rfloor, \end{equation*} where the constant $C>0$ depends on $C_*$, $|\Omega|$, $T$, $u_e$, $\kappa$, $\varepsilon$, and $\|f\|_{C^1[-\beta,\beta]}$ but is independent of ${\tau}$ and $h$. 
\end{theorem} \begin{proof} The difference between \eqref{eq_sav2stab} and \eqref{sav2trun} leads to \begin{subequations} \label{sav2err} \begin{align} &\delta_te_u^{n+1} = \varepsilon^2 \Delta_h e_u^{n+\frac{1}{2}} + g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}) - g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}})) \nonumber \\ & \qquad\qquad + \kappa \big(g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}}))-g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\big) \Big(\frac{u_e(t_{n+1})+u_e(t_n)}{2}-u_e(t_{n+\frac{1}{2}})\Big) \nonumber \\ & \qquad\qquad - \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) (e_u^{n+\frac{1}{2}}-\widehat{e}_u^{n+\frac{1}{2}}) - R_{2u}^n, \label{sav2erra} \\ &\delta_te_s^{n+1} = \Big\< g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}})) - g(\widehat{u}^{n+\frac{1}{2}}, \widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}), \frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}} \Big\> \nonumber \\ & \qquad\qquad + \kappa \big(g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})-g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}}))\big) \Big\<\frac{u_e(t_{n+1})+u_e(t_n)}{2}-u_e(t_{n+\frac{1}{2}}),\frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}}\Big\> \nonumber \\ & \qquad\qquad - g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<f(\widehat{u}^{n+\frac{1}{2}}),\delta_te_u^{n+1}\> + \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<u^{n+\frac{1}{2}}-\widehat{u}^{n+\frac{1}{2}}, \delta_te_u^{n+1}\> \nonumber \\ & \qquad\qquad + \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \Big\<e_u^{n+\frac{1}{2}}-\widehat{e}_u^{n+\frac{1}{2}}, \frac{u_e(t_{n+1})-u_e(t_n)}{{\tau}} \Big\> - R_{2s}^n. 
\label{sav2errb} \end{align} \end{subequations} Taking the discrete inner product of \eqref{sav2erra} with $2{\tau}\delta_te_u^{n+1}$ and rearranging the terms yields \begin{align} & \varepsilon^2\|\nabla_h e_u^{n+1}\|^2 - \varepsilon^2\|\nabla_h e_u^n\|^2 + 2{\tau}\|\delta_te_u^{n+1}\|^2 \nonumber \\ & \qquad = 2{\tau} \<g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}) - g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}})),\delta_te_u^{n+1}\> \nonumber \\ & \qquad\quad + 2\kappa{\tau} \big(g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}}))-g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\big) \Big\<\frac{u_e(t_{n+1})+u_e(t_n)}{2}-u_e(t_{n+\frac{1}{2}}),\delta_te_u^{n+1}\Big\> \nonumber \\ & \qquad\quad - 2\kappa {\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<e_u^{n+\frac{1}{2}}-\widehat{e}_u^{n+\frac{1}{2}},\delta_te_u^{n+1}\> - 2{\tau}\<R_{2u}^n,\delta_te_u^{n+1}\>. \label{sav2err_pf0} \end{align} Since $ g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\ge {\widetilde G}_{*}>0$, we get \begin{align*} & 2\kappa {\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<e_u^{n+\frac{1}{2}}-\widehat{e}_u^{n+\frac{1}{2}},\delta_te_u^{n+1}\>\nonumber\\ &\qquad = 2\kappa {\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \Big\<\frac{e_u^{n+1}-e_u^n}{2}+e_u^n-\widehat{e}_u^{n+\frac{1}{2}},\delta_te_u^{n+1}\Big\> \\ & \qquad= \kappa g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \|e_u^{n+1}-e_u^n\|^2 + 2\kappa {\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<e_u^n-\widehat{e}_u^{n+\frac{1}{2}},\delta_te_u^{n+1}\> \\ &\qquad \ge \kappa {\widetilde G}_{*} \|e_u^{n+1}-e_u^n\|^2 + 2\kappa {\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<e_u^n-\widehat{e}_u^{n+\frac{1}{2}},\delta_te_u^{n+1}\> \\ &\qquad = \kappa {\widetilde G}_{*} \|e_u^{n+1}\|^2 - \kappa {\widetilde G}_{*} \|e_u^n\|^2 - 2\kappa 
{\widetilde G}_{*}{\tau} \<e_u^n, \delta_te_u^{n+1}\> + 2\kappa {\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<e_u^n-\widehat{e}_u^{n+\frac{1}{2}},\delta_te_u^{n+1}\>, \end{align*} where we have used \eqref{sav1err_pf0} in the last step. Then, we obtain from \eqref{sav2err_pf0} that \begin{align} & {\widetilde G}_{*}\kappa \|e_u^{n+1}\|^2 - {\widetilde G}_{*}\kappa \|e_u^n\|^2 + \varepsilon^2\|\nabla_h e_u^{n+1}\|^2 - \varepsilon^2\|\nabla_h e_u^n\|^2 + 2{\tau} \|\delta_te_u^{n+1}\|^2 \nonumber \\ & \qquad = 2{\tau} \<g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}) - g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}})), \delta_te_u^{n+1}\> \nonumber \\ & \qquad\quad + 2\kappa{\tau} \big(g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}}))-g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\big) \Big\<\frac{u_e(t_{n+1})+u_e(t_n)}{2}-u_e(t_{n+\frac{1}{2}}),\delta_te_u^{n+1}\Big\> \nonumber \\ & \qquad\quad + 2{\widetilde G}_{*}\kappa{\tau} \<e_u^n, \delta_te_u^{n+1}\> + 2\kappa{\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<\widehat{e}_u^{n+\frac{1}{2}}-e_u^n,\delta_te_u^{n+1}\> - 2 {\tau} \<R_{2u}^n, \delta_te_u^{n+1}\>. 
\label{sav2err_pf1} \end{align} For the last three terms in the right-hand side of \eqref{sav2err_pf1}, we have respectively \begin{align} 2\kappa{\tau} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<\widehat{e}_u^{n+\frac{1}{2}}-e_u^n,\delta_te_u^{n+1}\> & \le 2G^*\kappa{\tau} (\|\widehat{e}_u^{n+\frac{1}{2}}\|+\|e_u^n\|) \|\delta_te_u^{n+1}\| \nonumber \\ & \le 6{G^*}^2\kappa^2{\tau} (\|\widehat{e}_u^{n+\frac{1}{2}}\|^2+\|e_u^n\|^2) + \frac{{\tau}}{3}\|\delta_te_u^{n+1}\|^2, \label{sav2err_pf2a} \\ 2{\widetilde G}_{*}\kappa{\tau} \<e_u^n, \delta_te_u^{n+1}\> & \le 3{\widetilde G}_{*}^2\kappa^2 {\tau}\|e_u^n\|^2 + \frac{{\tau}}{3} \|\delta_te_u^{n+1}\|^2, \label{sav2err_pf2b} \\ - 2{\tau} \<R_{2u}^n, \delta_te_u^{n+1}\> & \le 3{\tau} \|R_{2u}^n\|^2 + \frac{{\tau}}{3}\|\delta_te_u^{n+1}\|^2. \label{sav2err_pf2c} \end{align} By the energy dissipation and MBP of the sESAV1 substep \eqref{eq_sav2stab0}, we know that $\widehat{u}^{n+\frac{1}{2}}$ and $\widehat{s}^{n+\frac{1}{2}}$ are bounded uniformly. By conducting deductions similar to the proof of Lemma \ref{lem_sav1_nonlinear}, we can obtain \begin{subequations} \label{sav1_nonlinearhat} \begin{align} & |g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) - g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}}))| \le C_{g} (\|\widehat{e}_u^{n+\frac{1}{2}}\|+|\widehat{e}_s^{n+\frac{1}{2}}|), \\ & \|g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}) - g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}}))\| \le C_{g} (\|\widehat{e}_u^{n+\frac{1}{2}}\|+|\widehat{e}_s^{n+\frac{1}{2}}|), \end{align} \end{subequations} where $ C_{g} >0$ is the same constant defined in Lemma \ref{lem_sav1_nonlinear}. 
Then, the first and second terms in the right-hand side of \eqref{sav2err_pf1} can be bounded respectively as \begin{align} & 2{\tau} \<g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}) - g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}})), \delta_te_u^{n+1}\> \nonumber \\ & \qquad \le 2 C_{g} {\tau} (\|\widehat{e}_u^{n+\frac{1}{2}}\|+|\widehat{e}_s^{n+\frac{1}{2}}|) \|\delta_te_u^{n+1}\|\nonumber\\ &\qquad\le 6 C_{g} ^2 {\tau} (\|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + |\widehat{e}_s^{n+\frac{1}{2}}|^2) + \frac{{\tau}}{3} \|\delta_te_u^{n+1}\|^2, \label{sav2err_pf2d} \end{align} and \begin{align} & 2\kappa{\tau} \big(g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}}))-g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})\big) \Big\<\frac{u_e(t_{n+1})+u_e(t_n)}{2}-u_e(t_{n+\frac{1}{2}}),\delta_te_u^{n+1}\Big\> \nonumber \\ & \qquad \le C_{g} \kappa{\tau} (\|\widehat{e}_u^{n+\frac{1}{2}}\|+|\widehat{e}_s^{n+\frac{1}{2}}|) (\|u_e(t_{n+1})\|+\|u_e(t_n)\|+2\|u_e(t_{n+\frac{1}{2}})\|) \|\delta_te_u^{n+1}\| \nonumber \\ & \qquad \le C_1 \kappa^2 {\tau}(\|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + |\widehat{e}_s^{n+\frac{1}{2}}|^2) + \frac{{\tau}}{3} \|\delta_te_u^{n+1}\|^2, \label{sav2err_pf2e} \end{align} where $C_1>0$ has the same dependence as the constant $C_1$ used in \eqref{sav1err_pf4b0} but may have a different value. Substituting \eqref{sav2err_pf2a}--\eqref{sav2err_pf2e} into \eqref{sav2err_pf1} leads to \begin{align} & {\widetilde G}_{*}\kappa \|e_u^{n+1}\|^2 - {\widetilde G}_{*}\kappa \|e_u^n\|^2 + \varepsilon^2\|\nabla_h e_u^{n+1}\|^2 - \varepsilon^2\|\nabla_h e_u^n\|^2 + \frac{{\tau}}{3} \|\delta_te_u^{n+1}\|^2 \nonumber \\ & \qquad \le (6{G^*}^2\kappa^2 + 6 C_{g} ^2 + C_1\kappa^2) {\tau} \|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + (6 C_{g} ^2 + C_1\kappa^2) {\tau} |\widehat{e}_s^{n+\frac{1}{2}}|^2 \nonumber \\ & \qquad\quad + (3{\widetilde G}_{*}^2+6{G^*}^2)\kappa^2{\tau} \|e_u^n\|^2 + 3{\tau} \|R_{2u}^n\|^2. 
\label{sav2err_pf3} \end{align} Multiplying \eqref{sav2errb} by $2{\tau} e_s^{n+1}$ yields \begin{align} & |e_s^{n+1}|^2 - |e_s^n|^2 + |e_s^{n+1}-e_s^n|^2 \nonumber \\ & \quad = 2e_s^{n+1} \< g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}})) f(u_e(t_{n+\frac{1}{2}})) - g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) f(\widehat{u}^{n+\frac{1}{2}}), u_e(t_{n+1})-u_e(t_n) \> \nonumber \\ & \quad\quad + \kappa e_s^{n+1} \big(g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})-g(u_e(t_{n+\frac{1}{2}}),s_e(t_{n+\frac{1}{2}}))\big) \<u_e(t_{n+1})+u_e(t_n)-2u_e(t_{n+\frac{1}{2}}),u_e(t_{n+1})-u_e(t_n)\> \nonumber \\ & \quad\quad - 2{\tau} e_s^{n+1} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<f(\widehat{u}^{n+\frac{1}{2}}),\delta_te_u^{n+1}\> + 2\kappa{\tau} e_s^{n+1} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<u^{n+\frac{1}{2}}-\widehat{u}^{n+\frac{1}{2}}, \delta_te_u^{n+1}\> \nonumber \\ & \quad\quad + 2\kappa e_s^{n+1} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<e_u^{n+\frac{1}{2}}-\widehat{e}_u^{n+\frac{1}{2}}, u_e(t_{n+1})-u_e(t_n)\> - 2{\tau} R_{2s}^n e_s^{n+1}. \label{sav2err_pf4} \end{align} The last term in the right-hand side of \eqref{sav2err_pf4} can be estimated by \begin{equation} \label{sav2err_pf5a} - 2{\tau} R_{2s}^n e_s^{n+1} \le {\tau} |e_s^{n+1}|^2 + {\tau} |R_{2s}^n|^2. 
\end{equation} By the boundedness of $u^{n+\frac{1}{2}}$, $\widehat{u}^{n+\frac{1}{2}}$, and $\widehat{s}^{n+\frac{1}{2}}$, the sum of the third and fourth terms in the right-hand side of \eqref{sav2err_pf4} can be estimated similarly to \eqref{sav1err_pf7b} as follows: \begin{align} & - 2{\tau} e_s^{n+1} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<f(\widehat{u}^{n+\frac{1}{2}}),\delta_te_u^{n+1}\> + 2\kappa{\tau} e_s^{n+1} g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}}) \<u^{n+\frac{1}{2}}-\widehat{u}^{n+\frac{1}{2}}, \delta_te_u^{n+1}\> \nonumber \\ & \qquad\le 2G^*\kappa {\tau} (\|u^{n+\frac{1}{2}}\|+\|\widehat{u}^{n+\frac{1}{2}}\|) |e_s^{n+1}| \|\delta_te_u^{n+1}\| + 2G^*{\tau} \|f(\widehat{u}^{n+\frac{1}{2}})\| |e_s^{n+1}| \|\delta_te_u^{n+1}\| \nonumber \\ & \qquad\le C_6 {\tau} |e_s^{n+1}|^2 + \frac{{\tau}}{3} \|\delta_te_u^{n+1}\|^2, \end{align} where $C_6>0$ depends on $C_*$, $|\Omega|$, $u_{\text{\rm init}}$, $\kappa$, and $\|f\|_{C[-\beta,\beta]}$. Then, using the facts that $\|u_e(t_{n+1})-u_e(t_n)\|\le C{\tau}$, $\|u_e(t_{n+1})+u_e(t_n)-2u_e(t_{n+\frac{1}{2}})\|\le C\tau^2$ (where $C>0$ is a constant due to smoothness of $u_e$), and the inequalities \eqref{sav1_nonlinearhat}, in a spirit similar to the derivation of \eqref{sav1err_pf7a}, the sum of the first, second, and fifth terms in the right-hand side of \eqref{sav2err_pf4} can be bounded above by \begin{equation} \label{sav2err_pf5c} {\tau} \big(\|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + |\widehat{e}_s^{n+\frac{1}{2}}|^2 + \|e_u^n\|^2 + \|e_u^{n+1}\|^2 + |e_s^{n+1}|^2\big) \end{equation} multiplied by a positive constant depending on $C_*$, $|\Omega|$, $u_e$, $\kappa$, and $\|f\|_{C^1[-\beta,\beta]}$. 
Combining \eqref{sav2err_pf4} with \eqref{sav2err_pf5a}--\eqref{sav2err_pf5c}, we obtain \begin{align} |e_s^{n+1}|^2 - |e_s^n|^2 & \le C_7 {\tau} \big(\|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + |\widehat{e}_s^{n+\frac{1}{2}}|^2 + \|e_u^n\|^2 + \|e_u^{n+1}\|^2 + |e_s^{n+1}|^2\big) \nonumber\\ & \quad + \frac{{\tau}}{3} \|\delta_te_u^{n+1}\|^2 + {\tau} |R_{2s}^n|^2 \label{sav2err_pf6} \end{align} with $C_7$ depending on $C_*$, $|\Omega|$, $u_e$, $\kappa$, and $\|f\|_{C^1[-\beta,\beta]}$. Adding \eqref{sav2err_pf3} and \eqref{sav2err_pf6}, we obtain \begin{align} & {\widetilde G}_{*}\kappa (\|e_u^{n+1}\|^2 - \|e_u^n\|^2) + \varepsilon^2 (\|\nabla_h e_u^{n+1}\|^2 - \|\nabla_h e_u^n\|^2) + (|e_s^{n+1}|^2 - |e_s^n|^2) \nonumber \\ & \qquad \le C_8 {\tau} \big(\|\widehat{e}_u^{n+\frac{1}{2}}\|^2 + |\widehat{e}_s^{n+\frac{1}{2}}|^2 + \|e_u^n\|^2 + \|e_u^{n+1}\|^2 + |e_s^{n+1}|^2 \big) + 3{\tau} \|R_{2u}^n\|^2 + {\tau} |R_{2s}^n|^2, \label{sav2err_pf7} \end{align} where $C_8>0$ depends on $C_*$, $|\Omega|$, $u_e$, $\kappa$, and $\|f\|_{C^1[-\beta,\beta]}$. Substituting \eqref{sav2err_pf11} into \eqref{sav2err_pf7} and using the estimate \eqref{sav2trunerr}, when ${\tau}\le1$, we have \begin{align*} & {\widetilde G}_{*}\kappa (\|e_u^{n+1}\|^2 - \|e_u^n\|^2) + \varepsilon^2 (\|\nabla_h e_u^{n+1}\|^2 - \|\nabla_h e_u^n\|^2) + (|e_s^{n+1}|^2 - |e_s^n|^2) \nonumber \\ & \qquad \le C_8(\widehat{C}+1) {\tau} (\|e_u^n\|^2 + \|e_u^{n+1}\|^2 + |e_s^{n+1}|^2) + (C_8 \widehat{C}+4) C_e^2 {\tau}({\tau}^2+h^2)^2. \end{align*} When ${\tau}$ is sufficiently small, similar to the last paragraph in the proof of Theorem \ref{thm_sav1_error}, applying the discrete Gronwall's inequality yields \[ {\widetilde G}_{*}\kappa \|e_u^n\|^2 + \varepsilon^2 \|\nabla_h e_u^n\|^2 + |e_s^n|^2 \le C({\tau}^2+h^2)^2, \] which completes the proof. 
\end{proof} \section{Numerical experiments} \label{sect_experiment} This section is devoted to numerical tests and comparisons between the proposed sESAV schemes and existing SAV schemes listed in Sections \ref{sect_classicSAV} and \ref{sect_ESAV}. We consider the Allen--Cahn equation \eqref{AllenCahn} on the two-dimensional spatial domain $\Omega=(0,1)\times(0,1)$ equipped with periodic boundary conditions, so that the schemes can be solved efficiently by the fast Fourier transform. We take two types of commonly-used nonlinear functions $f(u)$. One is given by \begin{equation} \label{f_dw} f(u)=-F'(u)=u-u^3 \end{equation} with $F$ being the {\em double-well potential} $$F(u)=\frac{1}{4}(u^2-1)^2.$$ In this case, one has $\beta=1$ and $\|f'\|_{C[-1,1]}=2$. The constant $C_0$ in \eqref{sav_e2} is $C_0=\frac{1}{4}(\kappa^2+2\kappa)$. The other one is determined by the {\em Flory--Huggins potential} \[ F(u) = \frac{\theta}{2}[(1+u)\ln(1+u) + (1-u)\ln(1-u)] - \frac{\theta_c}{2}u^2, \] which gives \begin{equation} \label{f_fh} f(u) = -F'(u) = \frac{\theta}{2}\ln\frac{1-u}{1+u} + \theta_c u, \end{equation} where $\theta_c>\theta>0$. In the following experiments, we set $\theta=0.8$ and $\theta_c=1.6$; then the positive root of $f(\rho)=0$ gives $\beta\approx 0.9575$, and $\|f'\|_{C[-\beta,\beta]}\approx8.02$. The constant $C_0$ in \eqref{sav_e2} is then determined by $C_0=-F(\alpha)+\frac{\kappa}{2}\alpha^2$, where $\alpha>0$ solves $f(\alpha)+\kappa\alpha=0$. \subsection{Convergence in time} We first verify the convergence order in time for the proposed sESAV schemes. Let us set $\varepsilon=0.01$ in \eqref{AllenCahn} and take a smooth initial value \[ u_{\text{\rm init}}(x,y) = 0.1 \sin(2\pi x) \sin(2\pi y). \] The temporal convergence tests are conducted by fixing the spatial mesh size $h=1/512$. 
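The constants quoted above for the Flory--Huggins case can be reproduced numerically. The following is a minimal Python sketch (plain bisection, no external libraries; the helper names are ours, not part of any scheme) that recovers $\beta\approx 0.9575$ as the positive root of $f$ and $\|f'\|_{C[-\beta,\beta]}\approx 8.02$:

```python
import math

theta, theta_c = 0.8, 1.6

def f(u):
    # f(u) = -F'(u) for the Flory-Huggins potential
    return 0.5 * theta * math.log((1.0 - u) / (1.0 + u)) + theta_c * u

def fprime(u):
    # f'(u) = theta_c - theta / (1 - u^2)
    return theta_c - theta / (1.0 - u * u)

def bisect(g, a, b, tol=1e-12):
    # standard bisection; assumes g(a) and g(b) have opposite signs
    fa = g(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = g(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

# beta is the positive root of f; f(0.5) > 0 and f(0.99) < 0
beta = bisect(f, 0.5, 0.99)
# f' decreases from f'(0) = 0.8 as |u| grows, so |f'| on [-beta, beta]
# is maximized either at u = 0 or at the endpoints; here the endpoints win
fprime_norm = max(abs(fprime(0.0)), abs(fprime(beta)))
print(round(beta, 4), round(fprime_norm, 2))  # ~0.9575 and ~8.02
```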
As required by the stabilization condition $\kappa\ge \|f'\|_{C[-\beta,\beta]}$, we set $\kappa=2$ for the double-well potential case (i.e., $f(u)$ given by \eqref{f_dw}) and $\kappa=8.02$ for the Flory--Huggins potential case (i.e., $f(u)$ given by \eqref{f_fh}). We compute the numerical solutions at $t=2$ using the sESAV1 and sESAV2 schemes with various time step sizes ${\tau}=2^{-k}$, $k=4,5,\dots,12$. To compute the numerical errors, we treat the sESAV2 solution obtained with ${\tau}=0.1\times2^{-12}$ as the benchmark solution. Figure \ref{fig_conv} shows the relation between the $L^2$-norm error and the time step size, where the left picture corresponds to the double-well potential case and the right one to the Flory--Huggins potential case. The expected first-order temporal accuracy of sESAV1 and second-order accuracy of sESAV2 are observed in both cases. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{figs/conv_poly.eps} \includegraphics[width=0.45\textwidth]{figs/conv_log.eps} \caption{The $L^2$-norm errors vs. the time step size produced by the proposed sESAV1 and sESAV2 schemes for the double-well potential case \eqref{f_dw} (left) and the Flory--Huggins potential case \eqref{f_fh} (right).} \label{fig_conv} \end{figure} \subsection{Comparisons with existing SAV schemes} In the following numerical experiments, we compare the proposed sESAV schemes with classic SAV and ESAV schemes by focusing on the MBP and energy dissipation law. While various modified energies are introduced as approximations of the original energy in discrete settings in order to facilitate the proof of the energy dissipation law, the original one possesses the most accurate physical meaning for the model problem. Therefore, we focus on the behavior of the original (discrete) energy $E_h(u)$ defined in \eqref{egydis}, which reflects the phase transition process. 
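The temporal orders reported in Figure \ref{fig_conv} are measured in the standard way, as the logarithm of the ratio of errors at successive time step sizes. A small self-contained sketch (the error values below are synthetic stand-ins mimicking a second-order scheme, not our measured $L^2$ errors):

```python
import math

def observed_orders(taus, errors):
    """Estimate convergence orders from errors at successive step sizes:
    order_k = log(e_k / e_{k+1}) / log(tau_k / tau_{k+1})."""
    return [math.log(errors[k] / errors[k + 1]) / math.log(taus[k] / taus[k + 1])
            for k in range(len(errors) - 1)]

# Synthetic data with e = C * tau^2, as expected for a second-order scheme
taus = [2.0**-k for k in range(4, 9)]
errors = [3.0 * t**2 for t in taus]
orders = observed_orders(taus, errors)
print([round(p, 2) for p in orders])  # each entry close to 2.0
```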
The dynamic process considered usually needs a long-time evolution to reach the steady state; here we conduct simulations in a short time interval for the comparison among these schemes. Let us still consider the problem \eqref{AllenCahn} with $\varepsilon=0.01$. We adopt the uniform spatial mesh with $h=1/512$ and generate the initial value with random numbers between $-0.8$ and $0.8$ at each mesh point. We then set the time step size ${\tau}=0.01$, and compute the numerical solutions by using the sESAV schemes, the classic SAV schemes (SAV1 and SAV2), and the ESAV schemes (ESAV1 and ESAV2). Note that we set $\delta=C_0+0.01$ for the classic SAV schemes \eqref{eq_sav1shen} and \eqref{eq_sav2shen}. For all comparison experiments, we will consider two settings for the stabilizing parameter: $\kappa=\|f'\|_{C[-\beta,\beta]}$ and $\kappa=\frac{1}{2}\|f'\|_{C[-\beta,\beta]}$, where the former satisfies the requirement for MBP preservation by the sESAV schemes and the latter was adopted in \cite{ShXuYa19} for the classic SAV schemes. In addition, we take the numerical results obtained by the IFRK4 scheme \cite{JuLiQiYa21} with the small time step size $10^{-4}$ as the benchmark solution. First, we test the double-well potential case \eqref{f_dw}, and correspondingly, set the stabilizing parameter $\kappa=1$ and $\kappa=2$ respectively to carry out the experiments. Figure \ref{fig_comp_poly1} shows the evolutions of the supremum norms and the energies of simulated solutions computed by the sESAV1, SAV1, and ESAV1 schemes. For either $\kappa=1$ or $\kappa=2$, the sESAV1 scheme preserves the MBP, while the supremum norms of the SAV1 and ESAV1 solutions obviously evolve beyond $1$, which means that the MBP is violated. Energy dissipation is observed for all three schemes, with the sESAV1 scheme providing the most accurate result. In addition, a larger $\kappa$ leads to larger errors in the results, especially for the ESAV1 scheme. 
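Each time step of all the schemes compared here reduces to a linear system of the form $(aI-\varepsilon^2\Delta_h)u=b$ with some $a>0$, which diagonalizes under the discrete Fourier transform on the periodic mesh; this is why all the schemes can be advanced efficiently with the FFT, as mentioned earlier. The following is a minimal NumPy sketch of this linear solve alone (not any full scheme; the function names are ours):

```python
import numpy as np

def solve_periodic(a, eps, h, b):
    """Solve (a*I - eps^2 * Delta_h) u = b on a periodic N x N grid,
    where Delta_h is the standard 5-point Laplacian, via FFT."""
    N = b.shape[0]
    k = np.arange(N)
    # eigenvalues (Fourier symbol) of the 1D periodic second difference
    lam1d = (2.0 * np.cos(2.0 * np.pi * k / N) - 2.0) / h**2
    lam = lam1d[:, None] + lam1d[None, :]  # 2D symbol of Delta_h
    # lam <= 0, so the divisor a - eps^2 * lam >= a > 0 is never zero
    return np.real(np.fft.ifft2(np.fft.fft2(b) / (a - eps**2 * lam)))

def apply_op(a, eps, h, u):
    """Apply (a*I - eps^2 * Delta_h) directly with np.roll (periodic)."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
    return a * u - eps**2 * lap

rng = np.random.default_rng(0)
N = 64; h = 1.0 / N
b = rng.standard_normal((N, N))
u = solve_periodic(2.0 / 0.01, 0.01, h, b)  # e.g. a = 2/tau with tau = 0.01
residual = np.max(np.abs(apply_op(2.0 / 0.01, 0.01, h, u) - b))
print(residual < 1e-9)  # the FFT solve inverts the operator up to rounding
```

Since the Fourier symbol of the operator is bounded below by $a>0$, the pointwise division is always well defined, and no iterative solver is needed.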
Figure \ref{fig_comp_poly2} plots the corresponding results computed by the second-order schemes. Again, only the sESAV2 scheme preserves the MBP and the energy dissipation perfectly. The SAV2 and ESAV2 solutions evolve beyond $1$ but stay closer to $1$ than their first-order counterparts due to the higher-order temporal accuracy. \begin{figure}[!ht] \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_poly1_S1_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_poly1_S1_energy.eps}} \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_poly1_S2_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_poly1_S2_energy.eps}} \caption{Evolutions of the supremum norms and the energies of simulated solutions computed by the sESAV1, SAV1, and ESAV1 schemes with ${\tau}=0.01$ and $\kappa=1$ (top row) or $\kappa=2$ (bottom row) for the double-well potential case.} \label{fig_comp_poly1} \end{figure} \begin{figure}[!ht] \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_poly2_S1_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_poly2_S1_energy.eps}} \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_poly2_S2_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_poly2_S2_energy.eps}} \caption{Evolutions of the supremum norms and the energies of simulated solutions computed by the sESAV2, SAV2, and ESAV2 schemes with ${\tau}=0.01$ and $\kappa=1$ (top row) or $\kappa=2$ (bottom row) for the double-well potential case.} \label{fig_comp_poly2} \end{figure} Next, we test the Flory--Huggins potential case \eqref{f_fh} and correspondingly set $\kappa=4.01$ and $\kappa=8.02$. Figures \ref{fig_comp_log1} and \ref{fig_comp_log2} present the evolutions of the supremum norms and the energies of simulated solutions obtained by the first- and second-order schemes, respectively. Similar to the double-well potential case, only the sESAV schemes preserve the MBP and the energy dissipation law as expected. 
The SAV1, ESAV1, and ESAV2 schemes, whose supremum norms exceed the theoretical bound $0.9575$, lead to inaccurate dynamic processes. In particular, the ESAV1 solution with $\kappa=8.02$ evolves beyond $1$, which yields complex numbers due to the logarithmic term and gives completely wrong dynamics. For the SAV2 solutions, the dynamic processes look moderately correct according to the energy evolutions. Moreover, it is interesting that the supremum norm exceeds the desired bound for $\kappa=8.02$ but not for $\kappa=4.01$, although both results remain slightly away from the expected value $0.9575$. \begin{figure}[!ht] \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_log1_S4_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_log1_S4_energy.eps}} \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_log1_S8_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_log1_S8_energy.eps}} \caption{Evolutions of the supremum norms and the energies of simulated solutions computed by the sESAV1, SAV1, and ESAV1 schemes with ${\tau}=0.01$ and $\kappa=4.01$ (top row) or $\kappa=8.02$ (bottom row) for the Flory--Huggins potential case.} \label{fig_comp_log1} \end{figure} \begin{figure}[!ht] \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_log2_S4_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_log2_S4_energy.eps}} \centerline{ \includegraphics[width=0.43\textwidth]{figs/comp_log2_S8_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/comp_log2_S8_energy.eps}} \caption{Evolutions of the supremum norms and the energies of simulated solutions computed by the sESAV2, SAV2, and ESAV2 schemes with ${\tau}=0.01$ and $\kappa=4.01$ (top row) or $\kappa=8.02$ (bottom row) for the Flory--Huggins potential case.} \label{fig_comp_log2} \end{figure} \subsection{Long-time coarsening dynamics simulations} Now we study the coarsening dynamics driven by the Allen--Cahn equation \eqref{AllenCahn} with
$\varepsilon=0.01$. The spatial mesh size is $h=1/512$ and the initial state is given by random numbers between $-0.8$ and $0.8$. We adopt the sESAV2 scheme with ${\tau}=0.01$ to simulate the long-time coarsening process. By the comparisons shown above, we know that ${\tau}=0.01$ is sufficient to provide accurate numerical results. The steady state of the coarsening dynamics is a constant state $u\equiv\beta$ or $u\equiv-\beta$. When the absolute difference between the energies at two consecutive moments falls below the tolerance $10^{-8}$, we regard the dynamics as having reached its steady state. For the double-well potential case, we set $\kappa=2$; the phase structures captured at several moments are presented in Figure \ref{fig_coarsen_poly1}, and the constant steady state $u\equiv-1$ is reached at around $t=604$. The left picture in Figure \ref{fig_coarsen_poly2} shows that the MBP is preserved during the whole phase transition process. The energy evolution, plotted in the right graph of Figure \ref{fig_coarsen_poly2}, demonstrates the energy dissipation of the process. For the Flory--Huggins potential case, we set $\kappa=8.02$ and the simulated results are shown in Figures \ref{fig_coarsen_log1} and \ref{fig_coarsen_log2}. We observe that the steady state is reached at around $t=602$ and the whole process of phase separation is similar to that of the double-well potential case. These results are almost identical to those produced by the IFRK4 scheme in \cite{JuLiQiYa21}.
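The steady-state criterion just described can be sketched as a simple scan over the recorded energies (an illustrative helper of ours, under the assumption that the energy is recorded once per time step):

```python
def steady_state_index(energies, tol=1e-8):
    """Return the first index n at which |E_n - E_{n-1}| < tol,
    i.e., where we declare the dynamics to have reached its steady
    state; return None if the criterion is never met."""
    for n in range(1, len(energies)):
        if abs(energies[n] - energies[n - 1]) < tol:
            return n
    return None
```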
\begin{figure}[!ht] \centerline{ \includegraphics[width=0.34\textwidth]{figs/coarsen_poly_t4.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_poly_t6.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_poly_t10.eps}}\vspace{-0.2cm} \centerline{ \includegraphics[width=0.34\textwidth]{figs/coarsen_poly_t30.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_poly_t100.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_poly_t300.eps}}\vspace{-0.2cm} \caption{Simulated phase structures at $t=4$, $6$, $10$, $30$, $100$, and $300$, respectively (left to right and top to bottom) by the sESAV2 scheme with ${\tau}=0.01$ and $\kappa=2$ for the coarsening dynamics of the double-well potential case.} \label{fig_coarsen_poly1} \end{figure} \begin{figure}[!ht] \centerline{ \includegraphics[width=0.43\textwidth]{figs/coarsen_poly_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/coarsen_poly_energy.eps}} \caption{Evolutions of the supremum norm (left) and the energy (right) for the coarsening dynamics of the double-well potential case.} \label{fig_coarsen_poly2} \end{figure} \begin{figure}[!ht] \centerline{ \includegraphics[width=0.34\textwidth]{figs/coarsen_log_t4.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_log_t6.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_log_t10.eps}}\vspace{-0.2cm} \centerline{ \includegraphics[width=0.34\textwidth]{figs/coarsen_log_t30.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_log_t100.eps}\hspace{-0.45cm} \includegraphics[width=0.34\textwidth]{figs/coarsen_log_t300.eps}}\vspace{-0.2cm} \caption{Simulated phase structures at $t=4$, $6$, $10$, $30$, $100$, and $300$, respectively (left to right and top to bottom) by the sESAV2 scheme with ${\tau}=0.01$ and $\kappa=8.02$ for the coarsening dynamics of the Flory--Huggins potential case.} \label{fig_coarsen_log1} \end{figure} 
\begin{figure}[!ht] \centerline{ \includegraphics[width=0.43\textwidth]{figs/coarsen_log_mbp.eps} \includegraphics[width=0.43\textwidth]{figs/coarsen_log_energy.eps}} \caption{Evolutions of the supremum norm (left) and the energy (right) for the coarsening dynamics of the Flory--Huggins potential case.} \label{fig_coarsen_log2} \end{figure} \section{Conclusion} \label{sect_conclusion} In this paper, we study MBP-preserving and energy dissipative schemes for the Allen--Cahn type equations by combining the ESAV approach with the stabilization technique. We present first- and second-order sESAV schemes and prove their MBP preservation, energy dissipation, and error estimates. The main results and observations include two aspects. First, we choose the ESAV approach rather than the classic SAV approach, since the coefficient ($g(u^n,s^n)$ or $ g(\widehat{u}^{n+\frac{1}{2}},\widehat{s}^{n+\frac{1}{2}})$) of the nonlinear term is automatically positive in the former, while the sign of the corresponding coefficient is uncertain in the latter. Second, to guarantee the MBP-preserving property, we add the stabilization term as an extra artificial term; that is, we add and subtract a linear term in the scheme instead of a quadratic term in the energy functional. The two are equivalent for the classic stabilization or convex splitting methods, but not for the SAV approach. Moreover, we find that the MBP preservation and the energy dissipation of the sESAV schemes can be established in parallel and independently, unlike the purely stabilized semi-implicit scheme discussed in \cite{TaYa16}, where the MBP is needed first to bound the nonlinear term in the proof of stability with respect to the original energy. Since the schemes we studied are all one-step methods, adaptive time-stepping strategies (such as \cite{QiaoZhTa11}) can be readily adopted to accelerate the computation.
Some generalizations can be carried out by replacing the Laplace operator in \eqref{AllenCahn} with some analogues, for instance, the nonlocal diffusion operator \cite{DuGuLeZh12} and the fractional Laplace operator \cite{SmakoKiMa93}, which also satisfy the semigroup property and whose discretizations satisfy the analogue of Lemma \ref{lem_lapdiff}. Furthermore, the stabilizing approaches proposed in this paper can be naturally extended to many other types of gradient flow problems that can be handled by existing SAV schemes. For example, the fourth-order Cahn--Hilliard equation is the $H^{-1}$ gradient flow of the energy functional \eqref{energy} and satisfies the same energy dissipation law as the Allen--Cahn equation. The MBP is no longer valid, but the solution is still $L^\infty$ stable. In a similar spirit to this paper, it would be interesting to develop sESAV schemes for the Cahn--Hilliard equation; the discrete $L^\infty$ stability of the sESAV solution could be established by combining high-order consistency analysis with stability estimates, as done in \cite{GuWaWi14,LiQiWa21}. This will be one of our future works.
https://arxiv.org/abs/1003.2821
Unitary equivalence to a complex symmetric matrix: a modulus criterion
We develop a procedure for determining whether a square complex matrix is unitarily equivalent to a complex symmetric (i.e., self-transpose) matrix. Our approach has several advantages over existing methods. We discuss these differences and present a number of examples.
\section{Introduction} Following \cite{Tener}, we say that a matrix $T \in M_n(\mathbb{C})$ is \emph{UECSM} if it is unitarily equivalent to a complex symmetric (i.e., self-transpose) matrix. Our primary motivation for studying this concept stems from the emerging theory of complex symmetric operators on Hilbert space \cite{Chalendar, Chevrot, CRW, CCO,CSOA,CSO2, SNCSO,Gilbreath, Sarason,Wang,Z}. A bounded operator $T$ on a separable complex Hilbert space $\mathcal{H}$ is called a \emph{complex symmetric operator} if $T = CT^*C$ for some conjugation $C$ (a conjugate-linear, isometric involution) on $\mathcal{H}$. The terminology stems from the fact that the preceding condition is equivalent to insisting that $T$ has a complex symmetric matrix representation with respect to some orthonormal basis \cite[Sect.~2.4-2.5]{CCO}. Thus the problem of determining whether a given matrix is UECSM is equivalent to determining whether that matrix represents a complex symmetric operator with respect to some orthonormal basis. Equivalently, $T$ is UECSM if and only if $T$ belongs to the unitary orbit of the complex symmetric matrices in $M_n(\mathbb{C})$. Since every $n \times n$ complex matrix is \emph{similar} to a complex symmetric matrix \cite[Thm.~4.4.9]{HJ} (see also \cite[Ex.~4]{CSOA} and \cite[Thm.~2.3]{CCO}), it is often difficult to tell whether or not a given matrix is UECSM. For instance, one of the following matrices is UECSM, but it is impossible to determine which one based upon existing methods in the literature: \begin{equation}\label{eq-Puzzle} \begin{pmatrix} 5 & 1 & 1 & 3 \\ 1 & 1 & 1 & -1 \\ 1 & -3 & 5 & -1 \\ -1 & -1 & -1 & 1 \end{pmatrix} ,\qquad \begin{pmatrix} 5 & -1 & 3 & 3 \\ 1 & 3 & -1 & -1 \\ 1 & -1 & 3 & -1 \\ -1 & 1 & -3 & 1 \end{pmatrix}. \end{equation} On the other hand, the method which we introduce here easily dispatches this particular problem (see Section \ref{SectionComparison}). 
Although ad-hoc methods sometimes suffice for specific examples (e.g., \cite[Ex.~5]{CSOA} \cite[Ex.~1, Thm.~4]{SNCSO}), the first general approach was due to J.~Tener \cite{Tener}, who developed a procedure (\texttt{UECSMTest}), based upon the diagonalization of the selfadjoint components $A$ and $B$ in the Cartesian decomposition $T = A + iB$, by which a given matrix could be tested. More recently, L.~Balayan and the first author developed another procedure (\texttt{StrongAngleTest}), based upon a careful analysis of the eigenstructure of $T$ itself \cite{Balayan}. In this note, we pursue a different approach, based upon the diagonalization of $T^*T$ and $TT^*$. It turns out that this method has several advantages over its counterparts (see Section \ref{SectionComparison}). Before discussing our main result, we require a few preliminary definitions. Recall that the singular values of a matrix $T \in M_n(\mathbb{C})$ are defined to be the eigenvalues of the positive matrix $|T| = \sqrt{T^*T}$, the so-called \emph{modulus} of $T$. We also remark that $T^*T$ and $TT^*$ share the same eigenvalues \cite[Pr.~101]{Halmos}. \begin{Theorem}\label{TheoremMain} If $T \in M_n(\mathbb{C})$ has distinct singular values, \begin{enumerate}\addtolength{\itemsep}{0.25\baselineskip} \item $u_1,u_2,\ldots,u_n$ are unit eigenvectors of $T^*T$ corresponding to the eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_n$, respectively, \item $v_1,v_2,\ldots,v_n$ are unit eigenvectors of $TT^*$ corresponding to the eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_n$, respectively, \end{enumerate} then $T$ is UECSM if and only if \begin{align} |\inner{u_i,v_j}| &= |\inner{u_j,v_i}| , \label{eq-Magnitude} \\ \inner{u_i,v_j}\inner{u_j,v_k}\inner{u_k,v_i} &= \inner{u_i,v_k} \inner{u_k,v_j} \inner{u_j,v_i} ,\label{eq-Cocycle} \end{align} hold for $1 \leq i<j \leq n$ and $1 \leq i<j<k \leq n$, respectively. \end{Theorem} The procedure suggested by the preceding theorem can easily be implemented in \texttt{Mathematica}.
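Since conditions \eqref{eq-Magnitude} and \eqref{eq-Cocycle} are invariant under the choice of eigenvector phases, the test can also be carried out numerically from any eigendecompositions of $T^*T$ and $TT^*$. A minimal sketch in Python with NumPy (an illustration of ours, not the \texttt{Mathematica} implementation):

```python
import numpy as np

def modulus_test(T, tol=1e-10):
    """Check conditions (eq-Magnitude) and (eq-Cocycle) numerically.

    Assumes T has distinct singular values; the eigenvector phases do
    not matter because both conditions are phase-invariant."""
    n = T.shape[0]
    # np.linalg.eigh returns eigenvalues in ascending order, so the
    # columns of U and V are automatically paired by shared eigenvalue.
    _, U = np.linalg.eigh(T.conj().T @ T)   # u_1, ..., u_n
    _, V = np.linalg.eigh(T @ T.conj().T)   # v_1, ..., v_n
    G = V.conj().T @ U                      # G[j, i] = <u_i, v_j>
    for i in range(n):
        for j in range(i + 1, n):
            if abs(abs(G[j, i]) - abs(G[i, j])) > tol:
                return False                # magnitude condition fails
            for k in range(j + 1, n):
                lhs = G[j, i] * G[k, j] * G[i, k]
                rhs = G[k, i] * G[j, k] * G[i, j]
                if abs(lhs - rhs) > tol:
                    return False            # cocycle condition fails
    return True
```

For instance, the matrix of Example \ref{ExampleNasty} passes the test, while the weighted nilpotent shift of Example \ref{ExampleAB} (with $|a| \neq |b|$, $ab \neq 0$) fails it.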
We refer to this procedure as \texttt{ModulusTest}. The structure of this paper is as follows. The proof of Theorem \ref{TheoremMain} is the subject of Section \ref{SectionProof}. Section \ref{SectionExamples} contains a number of instructive examples. In Section \ref{SectionComparison} we compare \texttt{ModulusTest} to the procedures \texttt{UECSMTest} \cite{Tener} and \texttt{StrongAngleTest} \cite{Balayan}. We highlight several advantages of our approach over these other methods. In Section \ref{SectionVolterra} we discuss applications of our results to compact operators. As an illustration, we reveal a ``hidden symmetry'' of the Volterra integration operator. \begin{Example}\label{ExampleOMG} Before we proceed, we list several matrices which are UECSM and their corresponding complex symmetric matrices. These matrices were tested by \texttt{ModulusTest} and the unitary equivalences exhibited using the procedures outlined in Section \ref{SectionExamples}. In particular, we have selected relatively simple matrices which enjoy no apparent ``symmetry'' whatsoever. The symbol $\cong$ denotes unitary equivalence. 
\begin{align*} \begin{pmatrix} 4 & 1 & 1 \\ 4 & 5 & 7 \\ 4 & 7 & 5 \end{pmatrix} &\cong \micro{6} \begin{pmatrix} 8-\sqrt{\frac{57}{2}} & -i \sqrt{\frac{1539}{481}-\frac{36 \sqrt{114}}{481}} & -3 i \sqrt{\frac{1}{962} \left(139+8 \sqrt{114}\right)} \\ -i \sqrt{\frac{1539}{481}-\frac{36 \sqrt{114}}{481}} & \frac{7}{37} \left(22+\sqrt{114}\right) & \frac{1}{37} \sqrt{41553+3616 \sqrt{114}} \\ -3 i \sqrt{\frac{1}{962} \left(139+8 \sqrt{114}\right)} & \frac{1}{37} \sqrt{41553+3616 \sqrt{114}} & \frac{1}{74} \left(136+23 \sqrt{114}\right) \end{pmatrix} \\ \begin{pmatrix} 5 & 2 & 2 \\ 7 & 0 & 0 \\ 7 & 0 & 0 \end{pmatrix} &\cong\micro{5} \begin{pmatrix} \frac{1}{2} \left(5-\sqrt{187}\right) & -5 i \sqrt{\frac{561+5 \sqrt{187}}{1658}} & -i \sqrt{\frac{3350}{829}-\frac{125 \sqrt{187}}{1658}} \\ -5 i \sqrt{\frac{561+5 \sqrt{187}}{1658}} & \frac{1}{829} \left(1870+293 \sqrt{187}\right) & \frac{9}{829} \sqrt{\frac{1}{2} \left(173723+7075 \sqrt{187}\right)} \\ -i \sqrt{\frac{3350}{829}-\frac{125 \sqrt{187}}{1658}} & \frac{9}{829} \sqrt{\frac{1}{2} \left(173723+7075 \sqrt{187}\right)} & \frac{81}{-5+3 \sqrt{187}} \end{pmatrix} \\ \begin{pmatrix} 9 & 8 & 9 \\ 0 & 7 & 0 \\ 0 & 0 & 7 \end{pmatrix} &\cong\micro{8} \begin{pmatrix} 8-\frac{\sqrt{149}}{2} & \frac{9}{2} i \sqrt{\frac{16837+64 \sqrt{149}}{13093}} & i \sqrt{\frac{133672}{13093}-\frac{1296 \sqrt{149}}{13093}} \\ \frac{9}{2} i \sqrt{\frac{16837+64 \sqrt{149}}{13093}} & \frac{207440+9477 \sqrt{149}}{26186} & \frac{18 \sqrt{3978002+82324 \sqrt{149}}}{13093} \\ i \sqrt{\frac{133672}{13093}-\frac{1296 \sqrt{149}}{13093}} & \frac{18 \sqrt{3978002+82324 \sqrt{149}}}{13093} & \frac{92675+1808 \sqrt{149}}{13093} \end{pmatrix} \end{align*} \end{Example} \section{Proof of Theorem \ref{TheoremMain}}\label{SectionProof} \subsection{Preliminary lemmas} Recall that a conjugation $C$ on $\mathbb{C}^n$ is a conjugate-linear involution (i.e., $C^2 = I$) which is also isometric (i.e., $\inner{Cx,Cy} = \inner{y,x}$ for all $x,y \in 
\mathbb{C}^n$). It is easy to see that each conjugation $C$ on $\mathbb{C}^n$ is of the form $C = SJ$ where $S$ is a complex symmetric unitary matrix and $J$ is the canonical conjugation \begin{equation}\label{eq-Canonical} J(z_1,z_2,\ldots,z_n) = (\overline{z_1}, \overline{z_2}, \ldots, \overline{z_n}) \end{equation} on $\mathbb{C}^n$. The relevance of conjugations to our endeavor lies in the following lemma. \begin{Lemma}\label{LemmaC} $T \in M_n(\mathbb{C})$ is UECSM if and only if there exists a conjugation $C$ on $\mathbb{C}^n$ such that $T = CT^*C$. \end{Lemma} \begin{proof} Suppose that $T = CT^*C$ for some conjugation $C$ on $\mathbb{C}^n$. By \cite[Lem.~1]{CSOA} there exists an orthonormal basis $e_1,e_2,\ldots,e_n$ such that $Ce_i = e_i$ for $i = 1,2,\ldots,n$. Let $Q = (e_1 | e_2| \cdots |e_n)$ be the unitary matrix whose columns are these basis vectors. The matrix $S = Q^*TQ$ is complex symmetric since the $ij$th entry $[S]_{ij}$ of $S$ satisfies $[S]_{ij} = \inner{T e_j,e_i} = \inner{CT^*Ce_j,e_i} = \inner{e_i, T^*e_j} = \inner{Te_i,e_j} = [S]_{ji}.$ \end{proof} Our next result shows that, under the hypotheses of Theorem \ref{TheoremMain}, $T$ is UECSM if and only if there is a conjugation intertwining $T^*T$ and $TT^*$. \begin{Lemma}\label{LemmaCTTC} If $C$ is a conjugation on $\mathbb{C}^n$ and $T\in M_n(\mathbb{C})$ has distinct singular values, then \begin{equation}\label{eq-CTTC} T = CT^*C \quad \Leftrightarrow \quad T^*T = C(TT^*)C. \end{equation} \end{Lemma} \begin{proof} The $(\Rightarrow)$ implication of \eqref{eq-CTTC} follows immediately, regardless of any hypotheses on the singular values of $T$. The implication $(\Leftarrow)$ is considerably more involved. Suppose that $T^*T = CTT^*C$. Write $T = U(T^*T)^{\frac{1}{2}}$ where $U$ is unitary and observe that $TT^* = UT^*TU^*$ whence $UT^*T = TT^*U$. It follows that $UT^*T = CT^*TCU$ which implies that \begin{equation}\label{eq-CUCommutes} CU(T^*T) = (T^*T)CU. 
\end{equation} Let $e_1,e_2,\ldots,e_n$ denote unit eigenvectors of $T^*T$ corresponding to the (necessarily non-negative) eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_n$ of $T^*T$. In light of \eqref{eq-CUCommutes}, we see that $T^*T e_i = \lambda_i e_i$ if and only if $(T^*T)(CUe_i) = \lambda_i (CUe_i)$. In other words, the conjugate-linear operator $CU$ maps each eigenspace of $T^*T$ into itself. Since $CU$ is isometric and since the eigenspaces of $T^*T$ are one-dimensional, it follows that $CUe_i = \zeta_i^2 e_i$ for some unimodular constants $\zeta_1,\zeta_2,\ldots,\zeta_n$. Using the fact that $C$ is conjugate-linear we find that the unit vectors $w_i = \zeta_i e_i$ satisfy $CUw_i = w_i$ and $T^*Tw_i = \lambda_i w_i$. We claim that the conjugate-linear operator $K = CU$ is a conjugation on $\mathbb{C}^n$. Indeed, since $U$ is unitary and $C$ is a conjugation it is clear that $K$ is isometric. Moreover, since $K^2w_i = CUCUw_i = CUw_i = w_i$ for $i = 1,2,\ldots,n$ it follows that $K^2 = I$ whence $K$ is a conjugation. By \eqref{eq-CUCommutes} it follows that $K(T^*T)K = T^*T$ whence $K|T|K = |T|$ (since $|T| = p(T^*T)$ for some polynomial $p(x) \in \mathbb{R}[x]$). Putting this all together, we find that $T = CK|T|$ where $K$ is a conjugation that commutes with $|T|$. In particular, the unitary matrix $U$ factors as $U = CK$ and satisfies $U^* = KC$. We therefore conclude that $T = CK|T| = C|T|K = C(|T|KC)C = C(|T|U^*)C = CT^*C$. \end{proof} We remark that the implication $(\Leftarrow)$ of Lemma \ref{LemmaCTTC} is false if one drops the hypothesis that the singular values of $T$ are distinct. For instance, let $T$ be a unitary matrix which is not complex symmetric (i.e., $T \neq JT^*J$ where $J$ denotes the canonical conjugation \eqref{eq-Canonical} on $\mathbb{C}^n$). In this case, $T^*T = I = TT^*$ (i.e., all of the singular values of $T$ are $1$) and hence the condition on the right-hand side of \eqref{eq-CTTC} obviously holds.
On the other hand, $T \neq JT^*J$ by hypothesis. From here on, we maintain the notation and conventions of Theorem \ref{TheoremMain}, namely that $u_1,u_2,\ldots,u_n$ are unit eigenvectors of $T^*T$ and $v_1,v_2,\ldots,v_n$ are unit eigenvectors of $TT^*$ corresponding to the eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_n$, respectively. \begin{Lemma}\label{LemmaUnimodular} If $C$ is a conjugation on $\mathbb{C}^n$ and $T\in M_n(\mathbb{C})$ has distinct singular values, then $T^*T = CTT^*C$ if and only if $Cu_i = \alpha_i v_i$ for some unimodular constants $\alpha_1,\alpha_2,\ldots, \alpha_n$. \end{Lemma} \begin{proof} For the forward implication, observe that $\lambda_i u_i = T^*Tu_i = CTT^*Cu_i$ whence $TT^*(Cu_i) = \lambda_i (Cu_i)$. Since the eigenspaces of $TT^*$ are one-dimensional and $C$ is isometric, it follows that $Cu_i = \alpha_i v_i$ for some unimodular constants $\alpha_1,\alpha_2,\ldots, \alpha_n$. On the other hand, suppose that there exist unimodular constants $\alpha_1,\alpha_2,\ldots, \alpha_n$ such that $Cu_i = \alpha_i v_i$ for $i = 1,2,\ldots,n$. Since $C$ is a conjugation, it follows that $Cv_i = \alpha_i u_i$ for $i = 1,2,\ldots,n$. It follows that $CTT^*Cu_i = CTT^*\alpha_i v_i = \overline{\alpha_i}CTT^* v_i = \overline{\alpha_i}\lambda_i Cv_i = \overline{\alpha_i}\alpha_i\lambda_i u_i = \lambda_i u_i$ for $i = 1,2,\ldots,n$. Since the linear operators $CTT^*C$ and $T^*T$ agree on the orthonormal basis $u_1,u_2,\ldots,u_n$, we conclude that $T^*T = CTT^*C$. \end{proof} \begin{Lemma}\label{LemmaAlpha} There exists a conjugation $C$ and unimodular constants $\alpha_1,\alpha_2,\ldots, \alpha_n$ such that $Cu_i = \alpha_i v_i$ for $i=1,2,\ldots,n$ if and only if \begin{equation}\label{eq-AlphaCondition} \inner{u_i,v_j } = \alpha_j \overline{\alpha_i} \inner{u_j, v_i} \end{equation} holds for $1 \leq i,j \leq n$. 
\end{Lemma} \begin{proof} For the forward implication, simply note that if $Cu_i = \alpha_i v_i$ for $i=1,2,\ldots,n$, then \eqref{eq-AlphaCondition} follows immediately from the fact that $C$ is isometric and conjugate-linear. Conversely, suppose that \eqref{eq-AlphaCondition} holds for $1 \leq i,j \leq n$. We claim that the definition $Cu_i = \alpha_i v_i$ for $1 \leq i \leq n$ extends by conjugate-linearity to a conjugation on all of $\mathbb{C}^n$. Since $u_1,u_2,\ldots,u_n$ and $v_1,v_2,\ldots,v_n$ are orthonormal bases of $\mathbb{C}^n$ and since the constants $\alpha_1,\alpha_2,\ldots, \alpha_n$ are unimodular, it follows that $C$ is isometric. It therefore suffices to prove that $C^2 = I$. To this end, we need only show that $Cv_i = \alpha_i u_i$ for $1 \leq i \leq n$. This follows from a straightforward computation: \begin{align*} Cv_i &= C\left( \sum_{j=1}^n \inner{v_i,u_j}u_j \right) = \sum_{j=1}^n \inner{u_j,v_i}Cu_j = \sum_{j=1}^n \inner{u_j,v_i}\alpha_j v_j \\ &= \sum_{j=1}^n \alpha_i \overline{\alpha_j} \inner{u_i,v_j}\alpha_j v_j = \alpha_i \sum_{j=1}^n \inner{u_i,v_j} v_j = \alpha_i u_i. \end{align*} Thus $C$ is a conjugation on $\mathbb{C}^n$, as desired. \end{proof} We can interpret the condition \eqref{eq-AlphaCondition} in terms of matrices. Let $U = (u_1 | u_2 | \cdots | u_n)$ and $V = (v_1 | v_2 | \cdots |v_n)$ denote the $n \times n$ unitary matrices whose columns are the orthonormal bases $u_1,u_2,\ldots,u_n$ and $v_1,v_2,\ldots,v_n$, respectively. Now observe that \eqref{eq-AlphaCondition} is equivalent to asserting that \begin{equation}\label{eq-UVA} (V^*U)^t = A^*(V^*U)A \end{equation} holds where $A = \operatorname{diag}(\alpha_1,\alpha_2,\ldots,\alpha_n)$ denotes the diagonal unitary matrix having the unimodular constants $\alpha_1,\alpha_2,\ldots,\alpha_n$ along the main diagonal. Putting Lemmas \ref{LemmaCTTC}, \ref{LemmaUnimodular}, and \ref{LemmaAlpha} together, we obtain the following important lemma. 
\begin{Lemma}\label{LemmaMain} There exist unimodular constants $\alpha_1,\alpha_2,\ldots, \alpha_n$ such that \eqref{eq-AlphaCondition} holds if and only if $T$ is UECSM. \end{Lemma} With these preliminaries in hand, we are now ready to complete the proof of Theorem \ref{TheoremMain}. \subsection{Proof of the implication $(\Rightarrow)$} Suppose that $T$ is UECSM. By Lemma \ref{LemmaMain}, there exist unimodular constants $\alpha_1,\alpha_2,\ldots,\alpha_n$ so that \eqref{eq-AlphaCondition} holds for $1 \leq i,j \leq n$. The desired conditions \eqref{eq-Magnitude} and \eqref{eq-Cocycle} from the statement of Theorem \ref{TheoremMain} then follow immediately. \subsection{Proof of the implication $(\Leftarrow)$} The proof that conditions \eqref{eq-Magnitude} and \eqref{eq-Cocycle} are sufficient for $T$ to be UECSM is somewhat more complicated. Fortunately, the proof of \cite[Thm.~2]{Balayan} goes through, \emph{mutatis mutandis}, and we refer the reader there for the details. We sketch the main idea below. Suppose that $\inner{u_j,v_i} \neq 0$ for $1 \leq i,j \leq n$ (the proof of \cite[Thm.~2]{Balayan} explains how to get around this restriction) and observe that \eqref{eq-Magnitude} ensures that the constants \begin{equation*} \beta_{ij} = \frac{ \inner{u_i,v_j} }{ \inner{ u_j, v_i} } \end{equation*} are unimodular. The condition \eqref{eq-Cocycle} then implies that $\beta_{ij} \beta_{jk} =\beta_{ik}$, from which it follows that the unimodular constants $\alpha_i = \beta_{1i}$ satisfy \eqref{eq-AlphaCondition}. We therefore conclude that $T$ is UECSM by Lemma \ref{LemmaMain}. \qed \section{Examples and computations}\label{SectionExamples} Before considering several examples, let us first remark that Theorem \ref{TheoremMain} is constructive. Maintaining the notation and conventions established in the proof of Theorem \ref{TheoremMain}, define the unitary matrices $U$, $V$, and $A$ as in \eqref{eq-UVA}. 
Let $s_1,s_2,\ldots,s_n$ denote the standard basis of $\mathbb{C}^n$ and let $J$ denote the canonical conjugation \eqref{eq-Canonical} on $\mathbb{C}^n$. In particular, observe that $Js_i = s_i$ for $i = 1,2,\ldots, n$. The proof of Theorem \ref{TheoremMain} tells us that if $T$ satisfies \eqref{eq-Magnitude} and \eqref{eq-Cocycle} (e.g., ``$T$ passes \texttt{ModulusTest}''), then there exist a conjugation $C$ and unimodular constants $\alpha_1,\alpha_2,\ldots,\alpha_n$ such that $Cu_i = \alpha_i v_i$ for $i=1,2,\ldots,n$. Letting $A = \operatorname{diag}(\alpha_1, \alpha_2, \ldots, \alpha_n)$ we see that \begin{align*} VAU^t J u_i &= VAJU^*u_i = VAJs_i \\ &= VAs_i = \alpha_i Vs_i \\ &= \alpha_i v_i. \end{align*} Thus the conjugate-linear operators $C$ and $(VAU^t)J$ agree on the orthonormal basis $u_1,u_2,\ldots,u_n$ whence they agree on all of $\mathbb{C}^n$. Although it is not immediately obvious, the unitary matrix $S = VAU^t$ is complex symmetric. Indeed, the condition $S = S^t$ is equivalent to \eqref{eq-UVA}. Once the conjugation $C = SJ$ has been obtained, it is a simple matter of finding an orthonormal basis with respect to which $T$ has a complex symmetric matrix representation (see Lemma \ref{LemmaC}). To find such a basis, observe that since $S = CJ$ is a $C$-symmetric unitary operator, each of its eigenspaces is fixed by $C$ \cite[Lem.~8.3]{CCO}. Some of the following examples illustrate this construction. \begin{Example}\label{Example2x2} Although at this point many different proofs of the fact that every $2 \times 2$ matrix is UECSM exist (see \cite[Cor.~3]{Balayan}, \cite[Cor.~3.3]{Chevrot}, \cite[Ex.~6]{CSOA}, \cite{OPHUEMT}, \cite[Cor.~1]{SNCSO}, \cite[Cor.~3]{Tener}), for the sake of illustration we give yet another. By Schur's Theorem on unitary triangularization, we need only consider upper triangular $2 \times 2$ matrices.
If $T$ is such a matrix and has repeated eigenvalues, then upon subtracting a multiple of the identity we may assume that \begin{equation*} T = \minimatrix{0}{a}{0}{0}. \end{equation*} A routine computation now shows that $T = UAU^*$ where \begin{equation*} A = \minimatrix{ \frac{a}{2} }{ \frac{ia}{2} }{ \frac{ia}{2} }{ -\frac{a}{2} }, \qquad U = \minimatrix{ \frac{1}{\sqrt{2}} }{ \frac{-i}{\sqrt{2}} }{ \frac{1}{\sqrt{2}} }{ \frac{i}{\sqrt{2}} }. \end{equation*} Thus it suffices to consider the case where $T$ has distinct eigenvalues. Upon subtracting a multiple of the identity and then scaling, we may assume that \begin{equation*} T = \minimatrix{1}{a}{0}{0}. \end{equation*} Moreover, we may also assume that $a \geq 0$ since this may be obtained by conjugating $T$ by an appropriate diagonal unitary matrix. Thus we have \begin{equation*} T^*T = \minimatrix{1}{a}{a}{a^2}, \quad TT^* = \minimatrix{1+a^2}{0}{0}{0}. \end{equation*} The eigenvalues of $T^*T$ and $TT^*$ are $\lambda_1 = 1+a^2$ and $\lambda_2 = 0$ and corresponding unit eigenvectors are \begin{equation*} u_1 = \twovector{ \frac{1}{ \sqrt{1+a^2}} }{ \frac{a}{\sqrt{1+a^2}} }, \quad u_2 = \twovector{ \frac{-a}{\sqrt{1+a^2}} }{ \frac{1}{\sqrt{1+a^2}}}, \quad v_1 = \twovector{1}{0}, \quad v_2 = \twovector{0}{1}. \end{equation*} Let us first consider the condition \eqref{eq-Magnitude} of the procedure \texttt{ModulusTest}. For $i = j$ it holds trivially and for $i \neq j$ we have \begin{equation*} |\inner{u_1,v_2}| = \frac{a}{\sqrt{1+a^2}} = | \inner{ u_2,v_1} |. \end{equation*} Now let us consider the second condition \eqref{eq-Cocycle}. Since $n = 2$, at least two of $i,j,k$ must be equal whence \eqref{eq-Cocycle} holds trivially. By Theorem \ref{TheoremMain}, it follows that $T$ is UECSM. Let us now explicitly construct a complex symmetric matrix which $T$ is unitarily equivalent to. 
Since the equation \begin{equation*} \frac{a}{ \sqrt{1+a^2}} = \inner{u_1,v_2} = \overline{\alpha_1} \alpha_2 \inner{u_2,v_1} = \overline{\alpha_1} \alpha_2 \frac{-a}{ \sqrt{1+a^2}} \end{equation*} is satisfied by $\alpha_1 = 1$ and $\alpha_2 = -1$, we let \begin{equation*} S= \underbrace{\left( \begin{array}{c|c} 1&0\\ 0&1 \end{array} \right)}_{V} \underbrace{\left( \begin{array}{cc} 1&0\\ 0&-1 \end{array} \right) }_{A} \underbrace{ \left( \begin{array}{cc} \frac{1}{ \sqrt{1+a^2}}&\frac{a}{\sqrt{1+a^2}}\\ \hline \frac{-a}{\sqrt{1+a^2}} &\frac{1}{\sqrt{1+a^2}} \end{array} \right) }_{U^t} = \minimatrix{\frac{1}{\sqrt{1+a^2}}}{\frac{a}{\sqrt{1+a^2}}}{\frac{a}{\sqrt{1+a^2}}}{-\frac{1}{\sqrt{1+a^2}}} \end{equation*} and note that the conjugation $C = SJ$ satisfies $T = CT^*C$. An orthonormal basis $e_1,e_2$ of $\mathbb{C}^2$ whose elements are fixed by $C$ is given by \begin{equation*} e_1 = \twovector{ \frac{i(1 - \sqrt{1+a^2})}{\sqrt{2 + 2a^2 - 2 \sqrt{1+a^2}}} }{ \frac{ia}{\sqrt{2 + 2a^2 - 2 \sqrt{1+a^2}}} }\qquad e_2 = \twovector{ \frac{a}{\sqrt{2 + 2a^2 - 2 \sqrt{1+a^2}}} }{ \frac{\sqrt{1+a^2}-1}{\sqrt{2 + 2a^2 - 2 \sqrt{1+a^2}}} }. \end{equation*} Note that these are certain normalized eigenvectors of $S$, corresponding to the eigenvalues $-1$ and $1$, respectively, whose phases are selected so that $Ce_1 = SJe_1 = S(-e_1) = -Se_1 = e_1$ and $Ce_2 = SJe_2 = Se_2 = e_2$. Letting $Q= (e_1|e_2)$ denote the unitary matrix whose columns are $e_1$ and $e_2$, we find that \begin{equation*} Q^*TQ = \minimatrix{ \frac{1}{2}(1 - \sqrt{1+a^2}) }{ \frac{ia}{2} }{\frac{ia}{2}}{ \frac{1}{2}(1 + \sqrt{1+a^2})}. \end{equation*} As predicted by Lemma \ref{LemmaC}, this matrix is complex symmetric. \end{Example} The following simple example was first considered, using ad-hoc methods, in \cite[Ex.~1]{SNCSO}. Note that the procedure \texttt{StrongAngleTest} of \cite{Balayan} cannot be applied to this matrix due to the repeated eigenvalue $0$.
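The conclusion of Example \ref{Example2x2} can be double-checked numerically. The following sketch (Python/NumPy, an illustration of ours) builds $Q$ from unit eigenvectors of $S$ for a sample value of $a$ and confirms that $Q^*TQ$ is complex symmetric and matches the displayed matrix; the overall phase of each basis vector does not affect $Q^*TQ$.

```python
import numpy as np

a = 2.0                                   # any fixed a > 0 works
s = np.sqrt(1 + a**2)
N = np.sqrt(2 + 2*a**2 - 2*s)
# Unit eigenvectors of S with phases chosen so that C e_j = e_j.
e1 = np.array([1j*(1 - s), 1j*a]) / N
e2 = np.array([a, s - 1]) / N
Q = np.column_stack([e1, e2])
T = np.array([[1.0, a], [0.0, 0.0]])
M = Q.conj().T @ T @ Q                    # should be complex symmetric
expected = np.array([[(1 - s)/2, 1j*a/2],
                     [1j*a/2, (1 + s)/2]])
```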
\begin{Example}\label{ExampleAB} Suppose that $ab \neq 0$ and $|a| \neq |b|$. In this case, the singular values of \begin{equation*} T = \megamatrix{0}{a}{0}{0}{0}{b}{0}{0}{0} \end{equation*} are distinct. Normalized eigenvectors $u_1,u_2,u_3$ of $T^*T$ and $v_1,v_2,v_3$ of $TT^*$ corresponding to the eigenvalues $0,|a|^2,|b|^2$, respectively, are given by \begin{equation*} u_1 = v_2 = \threevector{1}{0}{0},\qquad u_2 = v_3 = \threevector{0}{1}{0},\qquad u_3 = v_1 = \threevector{0}{0}{1}. \end{equation*} Since $|\inner{u_1,v_2}| = 1$ and $|\inner{u_2,v_1}| = 0$, condition \eqref{eq-Magnitude} fails, from which we conclude that $T$ is not UECSM. On the other hand, if either $a=0$ or $b=0$, then $T$ is the direct sum of a $1\times 1$ matrix with a $2 \times 2$ matrix, whence $T$ is UECSM by Example \ref{Example2x2}. Moreover, if $|a| = |b|$, then $T$ is unitarily equivalent to a Toeplitz matrix and thus UECSM by \cite[Sect.~2.2]{CCO}. \end{Example} \begin{Example}\label{ExampleNasty} We claim that the lower-triangular matrix \begin{equation*} T = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 0 & 2 \end{pmatrix} \end{equation*} is UECSM. Normalized eigenvectors $u_1,u_2,u_3$ of $T^*T$ and $v_1,v_2,v_3$ of $TT^*$ corresponding to the eigenvalues $\lambda_1 = 6$, $\lambda_2 = 4$, and $\lambda_3 = 0$ are given by \begin{equation*} u_1 = \threevector{ \frac{1}{\sqrt{3}} }{ \frac{1}{\sqrt{3}} }{ \frac{1}{\sqrt{3}} }, \qquad u_2 = \threevector{0}{- \frac{1}{\sqrt{2}}}{ \frac{1}{\sqrt{2}}}, \qquad u_3 = \threevector{ - \frac{2}{\sqrt{6} } }{ \frac{1}{\sqrt{6} } }{ \frac{1}{\sqrt{6}}}, \end{equation*} and \begin{equation*} v_1 = \threevector{0}{ \frac{1}{\sqrt{2} } }{ \frac{1}{\sqrt{2} } }, \qquad v_2 = \threevector{0}{ -\frac{1}{\sqrt{2} } }{ \frac{1}{\sqrt{2} } }, \qquad v_3 = \threevector{1}{0}{0}, \end{equation*} respectively.
Since \begin{align*} \inner{u_1,v_2} & = \inner{u_2,v_1} = 0, \\ \inner{u_2,v_3} &=\inner{u_3,v_2} =0,\\ \inner{u_3,v_1} &= \inner{u_1,v_3} = \tfrac{1}{\sqrt{3}}, \end{align*} conditions \eqref{eq-Magnitude} and \eqref{eq-Cocycle} are obviously satisfied. By Theorem \ref{TheoremMain}, we conclude that $T$ is UECSM. Let us now construct a complex symmetric matrix to which $T$ is unitarily equivalent. By inspection, we find that $\alpha_1 = \alpha_2 = \alpha_3 = 1$ is a solution to \eqref{eq-AlphaCondition}. Maintaining the notation established at the beginning of this section, we observe that the matrix \begin{align*} S &= \underbrace{ \left( \begin{array}{c|c|c} 0 & 0 & 1 \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \end{array} \right) }_V \underbrace{\megamatrix{1}{0}{0}{0}{1}{0}{0}{0}{1}}_A \underbrace{ \left( \begin{array}{ccc} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\[3pt] \hline 0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\[3pt] \hline -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{array} \right) }_{U^t} \\ &= \begin{pmatrix} -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{6}} & \frac{1}{2}+\frac{1}{\sqrt{6}} & -\frac{1}{2}+\frac{1}{\sqrt{6}} \\ \frac{1}{\sqrt{6}} & -\frac{1}{2}+\frac{1}{\sqrt{6}} & \frac{1}{2}+\frac{1}{\sqrt{6}} \end{pmatrix} \end{align*} is symmetric and unitary. We then find an orthonormal basis $e_1,e_2,e_3$ whose elements are fixed by the conjugation $C = SJ$.
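The existence of such a symmetric unitary $S$ can also be confirmed numerically. The sketch below (not from the paper; it assumes NumPy) rebuilds $S = VAU^t$ from a computed SVD of $T$, searching over the sign choices $\alpha_i = \pm 1$, because a numerical SVD need not reproduce the sign conventions used above:

```python
import itertools
import numpy as np

T = np.array([[0., 0., 0.],
              [1., 2., 0.],
              [1., 0., 2.]])

# Columns of V are eigenvectors of TT*; rows of Ut are eigenvectors of T*T.
V, sigma, Ut = np.linalg.svd(T)

# Search for signs alpha_i = +/-1 making S = V A U^t symmetric.
for alpha in itertools.product([1, -1], repeat=3):
    S = V @ np.diag(alpha) @ Ut
    if np.allclose(S, S.T):
        break

assert np.allclose(S, S.T)              # S is symmetric,
assert np.allclose(S @ S.T, np.eye(3))  # S is unitary (here real orthogonal),
assert np.allclose(S @ T.T @ S, T)      # and C = SJ satisfies T = CT*C.
```

Since $T$ is real, the condition $T = CT^*C$ with $C = SJ$ reduces to $T = ST^tS$, which is what the final assertion checks.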
Following Lemma \ref{LemmaC}, we encode one such example as the columns of the unitary matrix \begin{equation*}\small Q = \left( \begin{array}{c|c|c} -i \sqrt{\frac{1}{2}+\frac{1}{\sqrt{6}}} & \frac{1}{5} \sqrt{11-4 \sqrt{6}} & \frac{1}{\sqrt{2 \left(9+\sqrt{6}\right)}} \\[5pt] \frac{i}{2 \sqrt{3+\sqrt{6}}} & 0 & \frac{1}{2} \sqrt{3+\sqrt{\frac{2}{3}}} \\ \frac{i}{2 \sqrt{3+\sqrt{6}}} & \frac{1}{5} \left(\sqrt{2}+2 \sqrt{3}\right) & -\frac{1}{10} \sqrt{19-23 \sqrt{\frac{2}{3}}} \end{array} \right) \end{equation*} and note that $Q^*TQ$ is complex symmetric: \begin{equation*}\small \begin{pmatrix} 1-\sqrt{\frac{3}{2}} & -\frac{1}{5} i \sqrt{9-\sqrt{6}} & -\frac{1}{5} i \sqrt{\frac{7}{2}+\sqrt{6}} \\[5pt] -\frac{1}{5} i \sqrt{9-\sqrt{6}} & \frac{1}{25} \left(26+11 \sqrt{6}\right) & \frac{1}{25} \sqrt{123-47 \sqrt{6}} \\[5pt] -\frac{1}{5} i \sqrt{\frac{7}{2}+\sqrt{6}} & \frac{1}{25} \sqrt{123-47 \sqrt{6}} & \frac{1}{50} \left(98+3 \sqrt{6}\right) \end{pmatrix}. \end{equation*} Independent confirmation that $T$ is UECSM is obtained by noting that $T - 2I$ has rank one (every rank-one matrix is UECSM by \cite[Cor.~5]{SNCSO}). \end{Example} \section{Comparison with other methods}\label{SectionComparison} With the addition of \texttt{ModulusTest} there are now three general procedures for determining whether a matrix $T$ is UECSM. Each has its own restrictions: \begin{enumerate}\addtolength{\itemsep}{0.25\baselineskip} \item \texttt{ModulusTest} (this article) requires that $T$ has distinct singular values, \item \texttt{StrongAngleTest} \cite{Balayan} requires that $T$ has distinct eigenvalues, \item \texttt{UECSMTest} \cite{Tener} requires that the selfadjoint matrices $A,B$ in the Cartesian decomposition $T = A + iB$ (where $A=A^*$, $B = B^*$) both have distinct eigenvalues. However, this restriction can be removed in the $3 \times 3$ case. 
\end{enumerate} In this section, we compare \texttt{ModulusTest} to these other methods and point out several advantages of our procedure. Table \ref{TableA} provides a number of examples indicating that \texttt{ModulusTest} is not subsumed by the other two procedures mentioned above. At this point we should also remark that the two matrices \eqref{eq-Puzzle} from the introduction are unitarily equivalent to constant multiples of the corresponding matrices in Table \ref{TableA}. In particular, the first matrix in \eqref{eq-Puzzle} is unitarily equivalent to \begin{equation*}\micro{8} \begin{pmatrix} \frac{2}{17} \left(23+16 \sqrt{2}\right) & \frac{4}{17} \sqrt{50-31 \sqrt{2}} & -2 i \sqrt{\frac{1}{17} \left(5+2 \sqrt{2}\right)} & -i \sqrt{\frac{48}{17}-\frac{8 \sqrt{2}}{17}} \\ \frac{4}{17} \sqrt{50-31 \sqrt{2}} & \frac{2}{17} \left(45+\sqrt{2}\right) & -i \sqrt{\frac{48}{17}-\frac{8 \sqrt{2}}{17}} & 2 i \sqrt{\frac{1}{17} \left(5+2 \sqrt{2}\right)} \\ -2 i \sqrt{\frac{1}{17} \left(5+2 \sqrt{2}\right)} & -i \sqrt{\frac{48}{17}-\frac{8 \sqrt{2}}{17}} & 2 & 0 \\ -i \sqrt{\frac{48}{17}-\frac{8 \sqrt{2}}{17}} & 2 i \sqrt{\frac{1}{17} \left(5+2 \sqrt{2}\right)} & 0 & 2-2 \sqrt{2} \end{pmatrix}. 
\end{equation*} \begin{table} \begin{equation*}\small \begin{array}{|c|c|c|c|c|c|} \hline T & \sigma(T^*T) & \sigma(T) & \sigma(A) & \sigma(B) & \text{UECSM?} \\ \hline \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 &1 & 0 & 0 \\ \end{pmatrix} & 0,2,\frac{3 \pm \sqrt{5}}{2} & 0,1,1,1 & \frac{1}{2}, \frac{3}{2}, \frac{1\pm\sqrt{2}}{2} & -\frac{1}{2}, -\frac{1}{2}, \frac{1}{2}, \frac{1}{2} & \textsc{yes}\\ \hline \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 &0 & 1 & 0 \\ \end{pmatrix} & 0,1, 2 \pm \sqrt{2} & 0,1,1,1 & \text{distinct} &0,0,\pm\frac{\sqrt{2}}{2} & \textsc{no} \\ \hline \end{array} \end{equation*} \caption{\footnotesize Matrices which satisfy the hypotheses of \texttt{ModulusTest} but not those of \texttt{UECSMTest} or \texttt{StrongAngleTest} (the notation $\sigma( \cdot )$ denotes the spectrum of a matrix). Whether or not these matrices are UECSM can be determined by \texttt{ModulusTest}. In the second row, the eigenvalues of $A$ are distinct but cannot be displayed exactly in the confines of the table.} \label{TableA} \end{table} One major advantage that \texttt{ModulusTest} has over its competitors is due to the nonlinear nature of the map $X \mapsto X^*X$ on $M_n(\mathbb{C})$. First note that the property of being UECSM is invariant under translation $X \mapsto X+cI$ for $c \in \mathbb{C}$. Next observe that if $T$ does not satisfy the hypotheses of \texttt{UECSMTest} or \texttt{StrongAngleTest}, then neither does $T+ cI$ for any value of $c$. On the other hand, $T+cI$ will often satisfy the hypotheses of \texttt{ModulusTest} even if $T$ itself does not.
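To illustrate, the following sketch (not from the paper; it assumes NumPy) takes a matrix with a repeated singular value, so that \texttt{ModulusTest} does not apply directly, and checks that the translate $T + I$ has distinct singular values:

```python
import numpy as np

T = np.array([[1., 0., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 2.],
              [0., 0., 0., 0.]])

sv  = np.linalg.svd(T, compute_uv=False)              # singular values of T
svc = np.linalg.svd(T + np.eye(4), compute_uv=False)  # singular values of T + I

# T has a repeated singular value (2 occurs twice) ...
assert np.isclose(np.min(np.abs(np.diff(np.sort(sv)))), 0)
# ... but T + I has distinct singular values, so ModulusTest applies to it.
assert np.min(np.abs(np.diff(np.sort(svc)))) > 1e-8
```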
\begin{table} \begin{equation*}\footnotesize \begin{array}{|c|c|c|c|c|c|} \hline T & \sigma(T^*T) & \sigma(T) & \sigma(A) & \sigma(B) & \text{UECSM?} \\ \hline \begin{pmatrix} 1 &0 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \\ 0 &0 & 0 & 0 \\ \end{pmatrix} & 0,1,4,4 &0,0,0,1 &0,1,\pm\sqrt{2} &0,0,\pm \sqrt{2} & \textsc{yes} \\ \hline \begin{pmatrix} 1 &0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \\ 0 &0 & 0 & 0 \\ \end{pmatrix} &0,1,1,4 &0,0,0,1 &0,1,\pm \frac{\sqrt{5}}{2} & 0,0, \pm \frac{\sqrt{5}}{2} & \textsc{no} \\ \hline \end{array} \end{equation*} \caption{\footnotesize Matrices which cannot be tested by \texttt{UECSMTest}, \texttt{StrongAngleTest}, or \texttt{ModulusTest}. However, \texttt{ModulusTest} \emph{does} apply to $T + I$ and hence \texttt{ModulusTest} can be used indirectly to test the original matrix $T$.} \label{TableB} \end{table} Table \ref{TableB} displays two matrices which do not satisfy the hypotheses of any of the three tests that we have available. Nevertheless, the translation trick described above renders these matrices indirectly susceptible to \texttt{ModulusTest}. For instance, the first matrix in Table \ref{TableB} is unitarily equivalent to \begin{equation*}\small \begin{pmatrix} 1& 0 & 0 & 0 \\ 0& 0 & 0 & i \sqrt{2} \\ 0& 0 & 0 & \sqrt{2} \\ 0& i \sqrt{2} & \sqrt{2} & 0 \end{pmatrix}. \end{equation*} Rather than grind through the computational details, we can use simple ad-hoc means to independently confirm the results listed in Table \ref{TableB}. The first matrix in Table \ref{TableB} is the direct sum of a $1 \times 1$ matrix and a Toeplitz matrix and is therefore UECSM by \cite[Sect.~2.2]{CCO}. On the other hand, the second matrix in Table \ref{TableB} is not UECSM. To see this requires a little additional work. First note that the lower right $3 \times 3$ block is not UECSM (see Example \ref{ExampleAB} or \cite[Ex.~1]{SNCSO}). 
We next use the fact that a matrix $T$ is UECSM if and only if the external direct sum $0 \oplus T$ is UECSM \cite[Lem.~1]{CSPI}. \section{Testing compact operators}\label{SectionVolterra} Our final example indicates that the natural infinite-dimensional generalization of \texttt{ModulusTest} can sometimes be used to detect hidden symmetries in Hilbert space operators. For instance, if $T$ is compact, then $T^*T$ and $TT^*$ are diagonalizable selfadjoint operators having the same spectrum \cite[Pr.~76]{HalmosHilbert} and hence the proofs of our results go through \emph{mutatis mutandis}. \begin{Example} We claim that the \emph{Volterra integration operator} $T:L^2[0,1]\to L^2[0,1]$, defined by \begin{equation*} [Tf](x) = \int_0^x f(y)\, dy, \end{equation*} is unitarily equivalent to a complex symmetric matrix acting on $l^2(\mathbb{Z})$. Before explicitly demonstrating this with \texttt{ModulusTest}, let us note that neither of the other procedures previously available (\texttt{StrongAngleTest} \cite{Balayan}, \texttt{UECSMTest} \cite{Tener}) is capable of showing this. \begin{enumerate}\addtolength{\itemsep}{0.25\baselineskip} \item The Volterra operator has no eigenvalues at all (indeed, it is quasinilpotent) and hence no straightforward generalization of \texttt{StrongAngleTest} can possibly apply. \item Since $[T^*f](x) = \int_x^1 f(y)\,dy$, we find that $A = \frac{1}{2}(T+T^*)$ equals $\frac{1}{2}$ times the orthogonal projection onto the one-dimensional subspace of $L^2[0,1]$ spanned by the constant function $1$. In particular, the operator $A$ has the eigenvalue $0$ with infinite multiplicity whence no direct generalization of Tener's \texttt{UECSMTest} can possibly apply. \end{enumerate} On the other hand, the singular values of the Volterra operator are distinct and thus \texttt{ModulusTest} applies.
In fact, the eigenvalues of $T^*T$ and $TT^*$ are \begin{equation*} \lambda_n = \frac{4}{(2n+1)^2\pi^2}, \end{equation*} for $n = 0,1,2,\ldots$ and corresponding normalized eigenvectors are \begin{equation*} u_{n} = \sqrt{2}\cos[(n+\tfrac{1}{2})\pi x], \quad v_{n} = \sqrt{2}\sin[(n+\tfrac{1}{2})\pi x]. \end{equation*} These computations are well-known \cite[Pr.~188]{HalmosHilbert} and left to the reader (a different derivation of these facts can be found in \cite[Ex.~6]{CSO2}). An elementary computation now reveals that \begin{equation*} \inner{u_i,v_j} = \begin{cases} \dfrac{ (-1)^{i+j} (2i+1) - (2j+1) }{ \pi( i-j + i^2 -j^2) } & \text{if $i \neq j$}, \\[10pt] \dfrac{2}{\pi(1+2i)} & \text{if $i = j$}, \end{cases} \end{equation*} from which it is clear that \begin{equation}\label{eq-VolterraAlpha} \inner{u_i,v_j} = (-1)^{i+j} \inner{u_j,v_i}. \end{equation} Taking absolute values of the preceding, we see that \eqref{eq-Magnitude} is satisfied. Moreover, \begin{align*} \inner{u_i,v_j} \inner{u_j,v_k} \inner{u_k,v_i} &= (-1)^{2(i+j+k)} \inner{u_i,v_k}\inner{u_k,v_j}\inner{u_j,v_i}\\ &= \inner{u_i,v_k}\inner{u_k,v_j}\inner{u_j,v_i}, \end{align*} whence \eqref{eq-Cocycle} is satisfied. By Theorem \ref{TheoremMain}, it follows that the Volterra operator $T$ has a complex symmetric matrix representation with respect to some orthonormal basis of $L^2[0,1]$. Let us exhibit this explicitly. Looking at \eqref{eq-VolterraAlpha} we define $\alpha_n = (-1)^n$ and note that \eqref{eq-AlphaCondition} is satisfied for all $i$ and $j$. We now wish to concretely identify the conjugation $C$ on $L^2[0,1]$ which satisfies \begin{equation*} C( \underbrace{ \cos[ (n+ \tfrac{1}{2})\pi x] }_{u_n} ) = \underbrace{ (-1)^n }_{\alpha_n} \underbrace{ \sin[ (n+ \tfrac{1}{2})\pi x] }_{v_n} \end{equation*} for $n=0,1,2,\ldots$.
Basic trigonometry tells us that \begin{align*} u_n(1-x) &= \cos[ (n+ \tfrac{1}{2})\pi (1-x)] \\ &= \cos (n+ \tfrac{1}{2})\pi \cos (n+ \tfrac{1}{2})\pi x + \sin (n+ \tfrac{1}{2})\pi \sin (n+ \tfrac{1}{2})\pi x \\ &= (-1)^n \sin[ (n+ \tfrac{1}{2})\pi x] = \alpha_n v_n(x) \\ &= [Cu_n](x) \end{align*} whence $[Cf](x) = \overline{ f(1-x)}$ for $f \in L^2[0,1]$. In particular, it is readily verified that $T = CT^*C$ (see also \cite[Lem.~4.3]{CCO}, \cite[Sect.~4.3]{CSOA}). Now observe that $C$ fixes each element of the orthonormal basis \begin{equation}\tag{$n \in \mathbb{Z}$} e_n = \exp[2 \pi i n (x - \tfrac{1}{2})], \end{equation} of $L^2[0,1]$ and that the matrix for $T$ with respect to this basis is \begin{equation*} \left( \begin{array}{cccc|c|cccc} & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \\ \cdots & \frac{i}{6 \pi } & 0 & 0 & \frac{i}{6 \pi } & 0 & 0 & 0 & \cdots \\[3pt] \cdots & 0 & \frac{i}{4 \pi } & 0 & -\frac{i}{4 \pi } & 0 & 0 & 0 & \cdots \\[3pt] \cdots & 0 & 0 & \frac{i}{2 \pi } & \frac{i}{2 \pi } & 0 & 0 & 0 & \cdots \\[3pt] \hline \cdots & \frac{i}{6 \pi } & -\frac{i}{4 \pi } & \frac{i}{2 \pi } & \frac{1}{2} & -\frac{i}{2 \pi } & \frac{i}{4 \pi } & -\frac{i}{6 \pi }& \cdots \\[3pt] \hline \cdots & 0 & 0 & 0 & -\frac{i}{2 \pi } & -\frac{i}{2 \pi } & 0 & 0 & \cdots \\[3pt] \cdots & 0 & 0 & 0 & \frac{i}{4 \pi } & 0 & -\frac{i}{4 \pi } & 0 & \cdots \\[3pt] \cdots & 0 & 0 & 0 & -\frac{i}{6 \pi } & 0 & 0 & -\frac{i}{6 \pi } & \cdots \\[3pt] & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \\ \end{array} \right). \end{equation*} In particular, the Cartesian components $A$ and $B$ of the Volterra operator are clearly visible in the preceding matrix representation. \end{Example}
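The closed form for $\inner{u_i,v_j}$ and the sign symmetry \eqref{eq-VolterraAlpha} are easy to spot-check numerically. The following sketch (not from the paper; it assumes NumPy) compares the formula against trapezoidal quadrature:

```python
import numpy as np

# Fine grid on [0,1] for trapezoidal quadrature.
x = np.linspace(0.0, 1.0, 20001)

def u(n):
    return np.sqrt(2) * np.cos((n + 0.5) * np.pi * x)

def v(n):
    return np.sqrt(2) * np.sin((n + 0.5) * np.pi * x)

def quad_inner(i, j):
    # Trapezoid rule for the real inner product <u_i, v_j> in L^2[0,1].
    f = u(i) * v(j)
    return float(np.sum((f[1:] + f[:-1]) / 2) * (x[1] - x[0]))

def inner_formula(i, j):
    if i == j:
        return 2 / (np.pi * (1 + 2 * i))
    return ((-1) ** (i + j) * (2 * i + 1) - (2 * j + 1)) / (np.pi * (i - j + i**2 - j**2))

for i in range(4):
    for j in range(4):
        assert abs(quad_inner(i, j) - inner_formula(i, j)) < 1e-6
        # the sign symmetry behind the choice alpha_n = (-1)^n:
        assert np.isclose(inner_formula(i, j), (-1) ** (i + j) * inner_formula(j, i))
```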
https://arxiv.org/abs/1409.4510
Minimum Weight Resolving Sets of Grid Graphs
For a simple graph $G=(V,E)$ and for a pair of vertices $u,v \in V$, we say that a vertex $w \in V$ resolves $u$ and $v$ if the shortest path from $w$ to $u$ is of a different length than the shortest path from $w$ to $v$. A set of vertices ${R \subseteq V}$ is a resolving set if for every pair of vertices $u$ and $v$ in $G$, there exists a vertex $w \in R$ that resolves $u$ and $v$. The minimum weight resolving set problem is to find a resolving set $M$ for a weighted graph $G$ such that$\sum_{v \in M} w(v)$ is minimum, where $w(v)$ is the weight of vertex $v$. In this paper, we explore the possible solutions of this problem for grid graphs $P_n \square P_m$ where $3\leq n \leq m$. We give a complete characterisation of solutions whose cardinalities are 2 or 3, and show that the maximum cardinality of a solution is $2n-2$. We also provide a characterisation of a class of minimals whose cardinalities range from $4$ to $2n-2$.
\section{Introduction} Let $G=(V,E)$ be a simple graph, and for each pair of vertices $u,v \in V$, let $d(u,v)$ denote the length of the shortest path from $u$ to $v$, where ${d(u,u)=0}$ ${\forall u \in V}$ and $d(u,v) = \infty$ if $u$ and $v$ are disconnected. For two distinct vertices $u,v \in V$, a vertex $w$ is said to \textit{resolve} $u$ and $v$ if $d(w,u) \neq d(w,v)$. A set of vertices ${R \subseteq V}$ is said to be a \textit{resolving set} if for every pair of vertices $u$ and $v$ in $G$, there exists a vertex $w \in R$ that resolves $u$ and $v$. The elements of a resolving set are often called \textit{landmarks}. For a graph $G$, a \textit{metric basis} is a resolving set of minimum cardinality, and the cardinality of a metric basis is the \textit{metric dimension} of $G$. Applications of metric bases and resolving sets arise in various settings such as network optimisation \cite{networkdiscovery}, chemistry and drug discovery \cite{chartrand2000}, robot navigation \cite{khuller}, digitisation of images \cite{melter}, and solutions to the Mastermind game \cite{chvatal}.\\ The problem of finding the metric dimension of a graph was introduced independently by Harary and Melter \cite{harary}, and Slater \cite{slater} and has been widely investigated in the combinatorics literature. Khuller, Raghavachari, and Rosenfeld \cite{khuller} showed that the problem of finding the metric dimension is NP-hard for general graphs and developed a $(2 \ln(n) + O(1))$ approximation algorithm. They also showed that the metric dimension of a graph is 1 iff the graph is a path, and they showed that the problem is polynomial-time solvable for the case of trees. Beerliova et al. \cite{networkdiscovery} showed that no $o(\log(n))$ approximation algorithm exists if $P \neq NP$. Chartrand et al. \cite{chartrand2000} proved that the only graph whose metric dimension is $|V|-1$ is $K_{|V|}$ and characterised the graphs whose metric dimension is $|V|-2$.
Melter and Tomescu \cite{melter} proved that the metric dimension of grid graphs $P_n \square P_m$ is 2 and that metric bases correspond to two endpoints of a boundary edge of the grid. For more results on the metric dimensions of graphs, we refer the reader to \cite{families} and \cite{survey}.\\ We now consider a generalisation of the metric dimension problem that was first introduced by Epstein, Levin and Woeginger \cite{weightedmd} where we have a given assignment of positive weights $w(v)$ to each vertex $v \in V$. The problem is to find a \textit{minimum weight resolving set} $M \subseteq V$ such that the sum of the weights of the vertices in $M$, $\sum_{v \in M} w(v)$, is minimum. We refer to this problem as the minimum weight resolving set problem. Epstein, Levin and Woeginger showed that this problem is NP-hard for general graphs and found that the only possible solutions to the minimum weight resolving set problem correspond to \textit{minimal resolving sets} that are minimal with respect to inclusion, i.e. resolving sets $R$ where $\nexists v \in R$ such that $R-\{v\}$ is resolving. The same authors developed polynomial time algorithms for paths, trees, cycles, wheels and $k$-augmented trees (trees with an additional $k$ edges) by exhaustively enumerating the minimal resolving sets for these graphs and choosing the one with the minimum weight. As far as we are aware, these are the only graphs for which the minimum weight resolving set problem has previously been explored in the literature.\\ \textbf{Our Results}. Following the work of Epstein, Levin and Woeginger, we explore the minimum weight resolving set problem for grid graphs, $P_n \square P_m$, where $3 \leq n \leq m$. We completely characterise the minimal resolving sets of cardinality 2 and 3 for these graphs and find that for all minimal resolving sets $M$ for the grid, $2\leq |M| \leq 2n-2$. 
We also give a characterisation of a class of minimals whose cardinalities range between $3$ and $2n-2$ and provide a weak characterisation of a resolving set for the grid. \section{Terminology} Given the graph $P_n \square P_m$ where $m,n \geq 3$, if we label the vertices of $P_n$ by $u_0,u_1,\dots ,u_{n-1}$ and the vertices of $P_m$ by $v_0,v_1,\dots ,v_{m-1}$, then we have the natural labelling of the vertices of $P_n \square P_m$ where each vertex is labelled with $(u_i,v_j)$, $i \in \{0,1,\dots ,n-1\}$, $j \in \{0,1,\dots ,m-1\}$. This labelling has an obvious connection to the coordinates on the Cartesian plane, hence without loss of generality, we will refer to the first coordinate of the label as the coordinate of the vertex in the horizontal direction, and the second coordinate as the coordinate of the vertex in the vertical direction. For simplicity, we will refer to the vertex labelled by $(u_i,v_j)$ as vertex $(i,j)$. It is clear that the shortest distance between two vertices $(i,j)$ and $(k,l)$ is the Manhattan distance between the coordinates, or $|i-k| + |j-l|$. \\ All vertices in grid graphs have a degree of 2, 3 or 4. We give terms for each of these vertex types: \begin{itemize} \item Vertices of degree 2 are \textit{corner vertices}. \item Vertices of degree 3 are \textit{side vertices}. \item Vertices of degree 4 are \textit{interior vertices}. \end{itemize} The vertices of degree 2 and 3 are also known as \textit{boundary vertices} and the set of all boundary vertices is referred to as the boundary of the grid.\\ A \textit{line} in the grid is a path on $n$ or $m$ vertices in which either every vertex of the path has the same vertical coordinate (a horizontal line), or every vertex of the path has the same horizontal coordinate (a vertical line).\\ A \textit{side} of the grid is a line which contains only side vertices, except for the two endpoints of the line which are corner vertices.
Two sides are adjacent if they share an endpoint and are opposite otherwise.\\ From this point onwards, we will refer to a minimal resolving set as a \textit{minimal} and we refer to a minimal of cardinality $k$ as a \textit{$k$-minimal}. \section{Results} We start with the characterisation of the metric bases of the grid, i.e., the $2$-minimals, given by Melter and Tomescu \cite{melter}. \begin{theorem}[\cite{melter}] A set $M$ of cardinality 2 is a $2$-minimal if and only if it contains two corners that share a side. \end{theorem} \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{2minimal.pdf} \caption{An example of a metric basis where the basis elements are in black.} \end{figure} We now attempt to characterise all the $3$-minimals. The previous theorem gives us an important property of all minimals whose cardinality is greater than 2. \begin{proposition} \label{cornerprop} All $k$-minimals, where $k \geq 3$, contain at most one corner vertex. \end{proposition} \begin{proof} Suppose we have a minimal $M$ such that $|M| \geq 3$. If $M$ contains two corner vertices on the same side, then a metric basis $B$ is a proper subset of $M$. Since no minimal is the proper subset of another minimal and $B$ is a 2-minimal, this leads to a contradiction.\\ Now suppose $M$ contains two corners $u,v$ that are not on the same side (opposite corners). Since $v$ is the only vertex that has distance $(n-1)+(m-1)$ from $u$, $v$ is resolved by $u$. Furthermore, since $d(u,x)+d(v,x) = (n-1)+(m-1)$ for every vertex $x$, the distances from $v$ are determined by the distances from $u$, so every pair of vertices that is not resolved by $u$ is not resolved by $v$ either. Hence if $M$ is a resolving set, then $M-\{v\}$ is also a resolving set, which implies that $M$ is not a minimal. \end{proof} This condition is necessary but not sufficient for a $3$-minimal.
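Both the characterisation of the $2$-minimals above and the insufficiency just noted can be confirmed by brute force on small grids. The following sketch (not from the paper; the grid $P_4 \square P_5$ is an arbitrary choice) uses the Manhattan shortest-path distance:

```python
import itertools

# Brute-force resolving-set checks on the grid P_4 [] P_5.
n, m = 4, 5
vertices = [(i, j) for i in range(n) for j in range(m)]

def d(u, v):
    # Shortest-path (Manhattan) distance in the grid.
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def is_resolving(R):
    # R resolves the grid iff the distance vectors to R are pairwise distinct.
    signatures = {tuple(d(w, u) for w in R) for u in vertices}
    return len(signatures) == len(vertices)

# Two corners sharing a side form a metric basis ...
assert is_resolving([(0, 0), (n - 1, 0)])
# ... but opposite corners do not resolve the grid,
assert not is_resolving([(0, 0), (n - 1, m - 1)])
# and a corner together with its two neighbours is not resolving either,
# even though it contains only one corner.
assert not is_resolving([(0, 0), (1, 0), (0, 1)])
```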
In order to see this, consider the following lemma and its corollary: \begin{lemma} If two vertices $u,v$ are not on the same line, then there exist two shortest paths from $u$ to $v$ of the form $u,\dots ,w_1,v$ and $u,\dots ,w_2,v$ where $w_1~\neq~w_2$. \end{lemma} \begin{proof} If $u$ and $v$ are not on the same line, then they differ in both horizontal and vertical position.\\ In one possible shortest path, we traverse the horizontal line from $u$ to a vertex $q_1$ which has the same horizontal position as $v$, and then traverse the vertical line from $q_1$ to $v$. In this case, the second last vertex of this path, $w_1$, will have the same horizontal coordinate as $v$ but a different vertical coordinate.\\ In another possible shortest path, we traverse the vertical line from $u$ to a vertex $q_2$ which has the same vertical position as $v$, and then traverse the horizontal line from $q_2$ to $v$. In this case, the second last vertex of this path, $w_2$, will have the same vertical coordinate as $v$ but a different horizontal coordinate.\\ It is therefore clear that $w_1 \neq w_2$. \end{proof} \begin{corollary} \label{c_1} If two vertices $u,v$ are not on the same line, then there are two neighbours of $v$ that are not resolved by $u$. \end{corollary} If we consider a set $S$ of vertices that contains a corner and its two neighbours, then this will satisfy the conditions of Proposition~\ref{cornerprop}; however, $S$ is clearly not resolving since the opposite corner of the one in $S$ is not on the same line as any of the vertices in $S$. We can use Corollary~\ref{c_1} to get another property of the $k$-minimals where $k \geq 3$. \begin{proposition} \label{opprop} All minimals must contain two boundary vertices on opposite sides. \end{proposition} \begin{proof} Suppose we have a set $M \subset V$ such that $M$ does not contain any boundary vertices. This implies that $M$ can only contain interior vertices.
Since no interior vertices are on the same line as a corner vertex, by Corollary \ref{c_1}, for all interior vertices $u$ and a particular corner $c$, there exist two neighbours of $c$, say $w_1$ and $w_2$, that are not resolved by $u$. Since $c$ only has two neighbours, the pair \{$w_1$,$w_2$\} is the same for all $u$, hence this pair of vertices is not resolved by any interior vertex and thus $M$ is not a resolving set.\\ If we add a single boundary vertex $b$ to $M$, then there is at least one corner that is not on the same line as $b$. Hence $M \cup \{b\}$ is not a resolving set.\\ If we add two boundary vertices $b_1,b_2$ to $M$ where $b_1$ and $b_2$ are not on opposite sides, then there are three possibilities: \begin{enumerate}[(i)] \item $b_1$ and $b_2$ are two side vertices on the same side. \item $b_1$ and $b_2$ are two side vertices on adjacent sides. \item One of $b_1$ and $b_2$ is a side vertex and the other is a corner vertex on the same side. \end{enumerate} In all three possibilities, there is still at least one corner that is not on the same line as either $b_1$ or $b_2$. Hence $M \cup \{b_1,b_2\}$ is not a resolving set.\\ Therefore, all resolving sets for the grid must contain two boundary vertices on opposite sides. \end{proof} There is one last property of $3$-minimals that we need in order to get a characterisation. In order to arrive at this property we need the following lemmas. 
\begin{lemma} \label{quadlem} Suppose we have a set of vertices $\{ (x_1,y_1),(x_2,y_2),\dots ,(x_k,y_k) \}$ and another vertex $(p,q)$ such that: \begin{center} $p<x_i \quad \forall i \in \{1,2, \dots ,k\} \quad \text{or} \quad p>x_i \quad \forall i \in \{1,2, \dots ,k\}$\\ $\quad \text{and} \quad$\\ $q<y_i \quad \forall i \in \{1,2, \dots ,k\} \quad \text{or} \quad q>y_i \quad \forall i \in \{1,2, \dots ,k\}.$ \end{center} Then there exist two neighbours of $(p,q)$, denoted by $(p^*,q)$ and $(p,q^*)$, that are not resolved by any of the vertices $(p,q),(x_1,y_1),(x_2,y_2),\dots ,(x_k,y_k)$. \end{lemma} \begin{proof} Since no vertex in $\{ (x_1,y_1),(x_2,y_2),\dots ,(x_k,y_k) \}$ is on the same line as $(p,q)$, then by Corollary~\ref{c_1}, for each $(x_i,y_i) \in \{ (x_1,y_1),(x_2,y_2),\dots ,(x_k,y_k) \}$ there are two neighbours of $(p,q)$, a horizontal neighbour $(p^*_i,q)$ and a vertical neighbour $(p,q^*_i)$, that are not resolved by $(x_i,y_i)$.\\ However since either $p~<~x_i$, $\forall~i~\in~\{1,2, \dots ,k\}$, or $p~>~x_i$, $\forall~i~\in~\{1,2, \dots ,k\}$, \begin{center} $(p^*_1,q)~=~(p^*_2,q)~=~\dots~=~(p^*_k,q)~=~(p^*,q)$.\\ \end{center} And since either $q~<~y_i$, $\forall~i~\in~\{1,2, \dots ,k\}$, or $q~>~y_i$, $\forall~i~\in~\{1,2, \dots ,k\}$, \begin{center} $(p,q^*_1)~=~(p,q^*_2)~=~\dots~=~(p,q^*_k)~=~(p,q^*)$.\\ \end{center} Therefore, $(p^*,q)$ and $(p,q^*)$ are not resolved by any $(x_i,y_i) \in \{ (x_1,y_1), (x_2,y_2),\\ \dots , (x_k,y_k) \}$. And since $(p,q)$ does not resolve any of its neighbours, $(p^*,q)$ and $(p,q^*)$ are not resolved by $(p,q)$. 
\end{proof} The statement in Lemma~\ref{quadlem} is equivalent to saying that if we make $(p,q)$ the origin of coordinate axes, with the $x$-axis being the line $y = q$ and the $y$-axis being the line $x = p$, and all the vertices $(x_1,y_1),(x_2,y_2),\dots ,(x_k,y_k)$ lie strictly inside the same quadrant, then the set\\ $\{ (x_1,y_1),(x_2,y_2),\dots ,(x_k,y_k),(p,q) \}$ is not resolving.\\ \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{origin1.pdf} \caption{The situation described in Lemma~\ref{quadlem} where the white vertices are not resolved by any of the black vertices.} \end{figure} We can use this lemma to achieve the following result that is specific to $3$-minimals. \begin{lemma}\label{linelem} A 3-minimal must have at least two vertices on the same line. \end{lemma} \begin{proof} Suppose we have a set $M = \{(x_1,y_1),(x_2,y_2),(x_3,y_3)\}$ where no two vertices are on the same line, i.e., $x_1 \neq x_2 \neq x_3$ and $y_1 \neq y_2 \neq y_3$. Without loss of generality, we let $x_1 < x_2 < x_3$. Now we have $y_i < y_j < y_k$, where $i,j,k \in \{1,2,3\}$ and $i\neq j \neq k$.\\ If we let $j=1$, then we have either $y_3<y_1<y_2$ or $y_2<y_1<y_3$. In either case, by Lemma~\ref{quadlem}, there are two neighbours of $(x_3,y_3)$ that are not resolved by any vertex in $M$. Therefore, $M$ is not a resolving set.\\ If we let $j=2$, then we have either $y_1<y_2<y_3$ or $y_3<y_2<y_1$. In either case, by Lemma~\ref{quadlem}, there are two neighbours of $(x_1,y_1)$ and two neighbours of $(x_3,y_3)$ that are not resolved by any vertex in $M$. Therefore, $M$ is not a resolving set.\\ Finally, if we let $j=3$, then we have either $y_1<y_3<y_2$ or $y_2<y_3<y_1$. In either case, by Lemma~\ref{quadlem}, there are two neighbours of $(x_1,y_1)$ that are not resolved by any vertex in $M$. Therefore, $M$ is not a resolving set if it contains any three vertices that are not on the same line.
\end{proof} We can now get the final property of $3$-minimals. \begin{proposition}\label{lineprop} A 3-minimal has either: \begin{enumerate}[(i)] \item Two vertices on the same line, $(i,j)$ and $(k,j)$, where $i \neq k$, and a third vertex $(p,q)$, where $i\leq p \leq k$ and $q \neq j$. \item Two vertices on the same line, $(i,j)$ and $(i,k)$, where $j \neq k$, and a third vertex $(p,q)$, where $j\leq q \leq k$ and $p \neq i$. \end{enumerate} \end{proposition} \begin{proof} Suppose we have a 3-minimal $M$. We know from Lemma~\ref{linelem} that two vertices in $M$ must be on the same line. Without loss of generality, we can let these vertices be $(i,j)$ and $(k,j)$ on a horizontal line since horizontal lines are equivalent to vertical lines in a rotated grid. We can also assume that $i<k$. Let the third vertex in $M$ be $(p,q)$.\\ If $q = j$ then $p \neq i,k$ since we must have 3 distinct vertices in a 3-minimal. However, this implies that there are two vertical neighbours of $(p,q)$, denoted $(p,q^+)$ and $(p,q^-)$, that are not on the same line as $(i,j)$ and $(k,j)$. Therefore, by Corollary~\ref{c_1}, $(p,q^+)$ and $(p,q^-)$ are not resolved by any vertex in $M$, which is a contradiction.\\ Now suppose $q \neq j$ and $p<i$. This implies that $p<k$. Hence by Lemma~\ref{quadlem}, since $i,k > p$ and either $q<j$ or $q>j$, $M$ is not a resolving set. A similar argument holds for $p>k$.\\ Therefore $i\leq p \leq k$. \end{proof} We now have enough results to give a complete characterisation of the 3-minimals: \begin{theorem} A set $M$ is a 3-minimal if and only if: \begin{enumerate}[(i)] \item $M$ has no more than one corner vertex. \item $M$ contains two boundary vertices on opposite sides.
\item $M$ either has two vertices on the same line, $(i,j)$ and $(k,j)$ where $i \neq k$, and a third vertex $(p,q)$, where $i\leq p \leq k$ and $q \neq j$, \\ or two vertices on the same line, $(i,j)$ and $(i,k)$, where $j \neq k$, and a third vertex $(p,q)$, where $j\leq q \leq k$ and $p \neq i$. \end{enumerate} \end{theorem} \begin{proof} It has already been shown from Propositions \ref{cornerprop}, \ref{opprop}, and \ref{lineprop} that if any of the above conditions are not satisfied, then $M$ is not a 3-minimal. Hence, in order to prove the above statement, we need only show that if $M$ satisfies all three of the above conditions, then $M$ is a 3-minimal.\\ Suppose we have a set $M$ that satisfies the above conditions. By condition (ii), $M$ contains two boundary vertices, $u$ and $v$, on opposite sides. There are two possible cases: either $u$ and $v$ are on the same line, or $u$ and $v$ are on different lines.\\ Suppose $u$ and $v$ are on the same line. Clearly, neither $u$ nor $v$ can be a corner without breaking condition (i). The third vertex, $w$, can be any vertex that is not on the line between $u$ and $v$ to satisfy condition (iii). The line between $u$ and $v$ divides the grid into two subgrids, $A$ and $B$, where the line is a side in each subgrid and $u$ and $v$ are corners of that side. Since two corners on the same side form a metric basis for a grid, $u$ and $v$ will resolve every pair of vertices in $A$ and every pair of vertices in $B$. Let $a \in A$ and $b \in B$ be a pair of distinct vertices that are unresolved by $u$ and $v$. This implies that $a$ and $b$ are equidistant from the line between $u$ and $v$ and that $a$ and $b$ are on a line that is perpendicular to the line between $u$ and $v$. However, since $w$ is not on the line between $u$ and $v$, it must lie exclusively in $A$ or exclusively in $B$. Hence, if $w \in A$ then $d(w,a) < d(w,b)$, and if $w \in B$ then $d(w,b) < d(w,a)$.
Thus $a$ and $b$ are resolved by $w$, and therefore every pair of vertices in the grid is resolved by $u$, $v$ and $w$.\\ Now suppose $u$ and $v$ are not on the same line. This implies that the third vertex $w$ is on the same line as $u$ or $v$, so without loss of generality, we let $w$ be on the same line as $u$. The vertex $w$ will either be on the same side as $u$ or on the line perpendicular to the side containing $u$. If the latter situation were the case, then $w$ would need to be on the same side as $v$, since condition (iii) requires a vertex of the line segment between $u$ and $w$ to be on the same line as $v$. However, this would give the same situation as in the case described above where we have opposite boundary vertices on the same line, hence $u$, $v$ and $w$ would resolve the grid. We therefore assume $w$ is on the same side as $u$. Without loss of generality (since the grid can be rotated), let $u$ be labelled by $(0,i)$ and $w$ be labelled by $(0,j)$, where $i<j$. Hence $v$ will be labelled $(p,q)$, where $p = m-1$ or $n-1$, and $i < q < j$.\\ Consider the subgrid that has $u$ and $w$ as corners and has boundary vertices as the other two corners. Every pair of vertices in this subgrid will be resolved by $u$ and $w$ since they are two corners that share a side of the subgrid. We refer to this subgrid as the middle subgrid, and we refer to the subgrid of all vertices whose horizontal coordinates are less than or equal to $i$ as the left subgrid, and the subgrid whose horizontal coordinates are greater than or equal to $j$ as the right subgrid. Every pair of vertices in the left subgrid is resolved by $u$ and $v$ since $u$ is a corner of this subgrid, and for every vertex $l$ in the left subgrid, there is a shortest path from $v$ to $l$ that goes through the other corner of the left subgrid that is on the same vertical line as $u$ (hence $v$ resolves the same vertices in the left subgrid that this corner would).
Similarly, every pair of vertices in the right subgrid is resolved by $w$ and $v$.\\ Now suppose we have a pair of vertices $a$ and $b$, where $a$ is in the left subgrid and $b$ is in the middle subgrid, and suppose $a$ and $b$ are not resolved by $u$. If $a$ and $b$ were on the same horizontal line, then they would be equidistant from the vertical line containing $u$, which is a boundary of the left and middle subgrids. Therefore, since $w$ is in the middle subgrid but not in the left subgrid, $w$ will resolve $a$ and $b$ (as would $v$). If $a$ and $b$ were on different horizontal lines, then either $a$ would have a larger vertical coordinate than $b$, or $b$ would have a larger vertical coordinate than $a$. If $a$ had the larger vertical coordinate, then $a$ and $b$ would be resolved by $w$ since $b$ and $w$ are closer together than $a$ and $w$ in both the horizontal coordinate and vertical coordinate. If $b$ had the larger vertical coordinate, then $v$ would resolve $a$ and $b$ since $b$ and $v$ are closer together than $a$ and $v$ in both the horizontal coordinate and vertical coordinate. Hence $a$ and $b$ are always resolved. Similarly, if $a$ were in the right subgrid with $b$ still in the middle subgrid, then $a$ and $b$ would be resolved.\\ The last case is when $a$ is in the left subgrid and $b$ is in the right subgrid. Suppose $a$ and $b$ are equidistant from $u$. If $a$ and $b$ are on the same horizontal line, then $a$ and $b$ are resolved by $w$ since $b$ and $w$ will be closer together than $a$ and $w$. If $a$ and $b$ were on different horizontal lines with $a$ having a larger vertical coordinate than $b$, then as before, $a$ and $b$ would be resolved by $w$ since $b$ and $w$ are closer together than $a$ and $w$. And similarly, if $b$ had a larger vertical coordinate than $a$, then $a$ and $b$ would be resolved by $v$.\\ Hence any pair of vertices $a$ and $b$ is resolved by $u$, $v$ and $w$ if the three conditions hold.
\end{proof} \begin{figure}[H] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{3minimal1.pdf} \end{subfigure}% \qquad \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{3minimal2.pdf} \end{subfigure} \caption{An example of the two types of 3-minimals: one with opposite boundary vertices on the same line, and one with opposite boundary vertices on different lines.} \end{figure} We now wish to find the $k$-minimals for $k>3$. In order to do this, we will need a more powerful version of Lemma~\ref{quadlem} which will also use a vertex as an origin and consider the quadrants with respect to this origin. First, we define the boundary of a quadrant to be the points on the two halves of the axes that define a quadrant, not including the origin, e.g., if the origin were $(0,0)$ then the points $(x,0)$ for $x>0$ and $(0,y)$ for $y>0$ would be on the boundary of the first quadrant. We say that two quadrants are opposite if they have no boundary points in common, e.g., the first and third quadrants are opposite; otherwise we say they are adjacent. Also, the quadrant boundaries are not considered to be within any quadrant and neither is the origin. Now we have the following lemma: \begin{lemma} \label{quadlem2} Suppose we have an interior vertex $(p,q)$. Let $(p,q)$ be the origin of the coordinate axes $y=q$ and $x=p$.
If the vertices $(x_1^+,y_1^+),(x_2^+,y_2^+),\dots ,(x_{k_1}^+,y_{k_1}^+)$ are in the same quadrant with respect to the origin $(p,q)$, if the vertices $(x_1^-,y_1^-),\\(x_2^-,y_2^-),\dots ,(x_{k_2}^-,y_{k_2}^-)$ are in the opposite quadrant, and if the vertices $(p,q^+_1),\\(p,q^+_2),\dots ,(p,q^+_{k_3}),(p^+_1,q),(p^+_2,q),\dots (p^+_{k_4},q)$ are the boundary points of one of these quadrants, then there exist two neighbours of $(p,q)$, denoted by $(p^*,q)$ and $(p,q^*)$, that are not resolved by any of the vertices $(p,q),(x_1^+,y_1^+),(x_2^+,y_2^+),\dots ,\\(x_{k_1}^+,y_{k_1}^+),(x_1^-,y_1^-),(x_2^-,y_2^-),\dots ,(x_{k_2}^-,y_{k_2}^-),(p,q^+_1), (p,q^+_2),\dots ,(p,q^+_{k_3}),(p^+_1,q),\\(p^+_2,q),\dots (p^+_{k_4},q)$. \end{lemma} \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{origin2.pdf} \caption{The situation described in Lemma~\ref{quadlem2}, where the white vertices are not resolved by any of the black vertices.} \end{figure} \begin{proof} Assume without loss of generality that the vertices $(p,q^+_1), (p,q^+_2), \dots ,\\(p,q^+_{k_3}),(p^+_1,q), (p^+_2,q), \dots (p^+_{k_4},q)$ are the boundary points of the quadrant containing $(x_1^+,y_1^+),(x_2^+,y_2^+),\dots ,(x_{k_1}^+,y_{k_1}^+)$.\\ By Lemma~\ref{quadlem}, since the vertices $(x_1^-,y_1^-),(x_2^-,y_2^-),\dots ,(x_{k_2}^-,y_{k_2}^-)$ are in the same quadrant with respect to $(p,q)$, there exist two vertices, which we will denote by $(p^-,q)$ and $(p,q^-)$, that are not resolved by any of the vertices $(p,q),(x_1^-,y_1^-),(x_2^-,y_2^-),\dots ,(x_{k_2}^-,y_{k_2}^-)$.\\ Consider the shared neighbour of $(p^-,q)$ and $(p,q^-)$, which resides in the quadrant containing $(x_1^-,y_1^-),(x_2^-,y_2^-),\dots ,(x_{k_2}^-,y_{k_2}^-)$. We denote this vertex by $(p^-,q^-)$.
Now the vertices $(x_1^+,y_1^+),(x_2^+,y_2^+),\dots ,(x_{k_1}^+,y_{k_1}^+),(p,q^+_1),(p,q^+_2),\dots ,\\(p,q^+_{k_3}), (p^+_1,q),(p^+_2,q),\dots (p^+_{k_4},q)$ are all in the same quadrant with respect to $(p^-,q^-)$ as the origin, so by Lemma~\ref{quadlem}, there are two neighbours of $(p^-,q^-)$ that are unresolved by any of the vertices $(p^-,q^-),(x_1^+,y_1^+),(x_2^+,y_2^+),\dots ,(x_{k_1}^+,y_{k_1}^+),\\ (p,q^+_1),(p,q^+_2),\dots ,(p,q^+_{k_3}),(p^+_1,q),(p^+_2,q),\dots (p^+_{k_4},q)$. These neighbours must be $(p^-,q)$ and $(p,q^-)$ since they lie on the boundary of the quadrant containing $(p^-,q^-)$, hence the vertices $(p^-,q) = (p^*,q)$ and $(p,q^-) = (p,q^*)$ are not resolved by any of the vertices $(p,q),(x_1^+,y_1^+),(x_2^+,y_2^+),\dots ,(x_{k_1}^+,y_{k_1}^+), (x_1^-,y_1^-),\\ (x_2^-,y_2^-),\dots ,(x_{k_2}^-,y_{k_2}^-), (p,q^+_1),(p,q^+_2),\dots ,(p,q^+_{k_3}),(p^+_1,q),(p^+_2,q),\dots (p^+_{k_4},q)$. \end{proof} Lemmas~\ref{quadlem} and~\ref{quadlem2} can be used to show that a set of vertices does not resolve the grid by finding a vertex that has a pair of neighbours that are unresolved. If a vertex does not have a pair of neighbours that are unresolved, then we say the vertex has a \textit{locally resolved neighbourhood}. For general graphs, if every vertex has a locally resolved neighbourhood, we cannot say that the graph is resolved. However, it turns out that for grid graphs, we are allowed to make this conclusion. \begin{theorem}\label{localthm} If $G=(V,E)$ is a grid and $R \subseteq V$ is a set of vertices such that every vertex in $G$ has a locally resolved neighbourhood with respect to $R$, then $R$ is a resolving set for $G$. \end{theorem} \begin{proof} The proof of this theorem is by induction. We start with the graph $G = P_3 \square P_3$ and attempt to construct a set $R$ that gives every vertex in the grid a locally resolved neighbourhood. 
The proof of Proposition~\ref{opprop} shows that the corners of a grid do not have locally resolved neighbourhoods if we do not have two boundary vertices on opposite sides as landmarks. Therefore $R$ must contain two boundary vertices on opposite sides. If these vertices are two corners on the same side, then $G$ would be resolved and so we are done. The proof of Proposition~\ref{cornerprop} shows that these two vertices will not locally resolve the grid if they are corners on opposite sides, so without loss of generality, since reflections and rotations do not change the grid, we have two cases, as shown in Fig.~\ref{localfig}. \begin{figure}[H] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{3grid1.pdf} \end{subfigure}% \qquad \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{3grid2.pdf} \end{subfigure} \caption{The black vertices are the current elements of $R$. The vertex labels are the shortest distances from the black vertices.} \label{localfig} \end{figure} In both of these cases, the pairs of vertices that are unresolved are pairwise disjoint and for each of these pairs, there is a vertex that has both members of the pair as neighbours. Hence if we added vertices to $R$ to give every vertex in $G$ a locally resolved neighbourhood, then $G$ would be resolved, and so the theorem is true for $G = P_3 \square P_3$.\\ Now we suppose that the theorem is true for some $G=A$, where $A$ is a grid graph. We extend this grid by adding an extra row/column of vertices which we will denote by the set $B$. Without loss of generality, we let $B$ be a new row placed at the bottom of $A$. We denote this extended graph by $G^+$. Let $R$ be a set of landmark vertices in $G^+$ that gives every vertex in $G^+$ a locally resolved neighbourhood. Any pair of vertices in $A$ will be resolved by the induction hypothesis.
Note that this remains true even if $R$ contained vertices in $B$ since having a landmark $b \in B$ would be equivalent to having the vertex above $b$ as a landmark in $A$ when considering the resolvability of the $A$ subgrid. Suppose we have a pair of vertices in $B$ that is not resolved by any landmark. Let this pair be $(b_1,b_2)$. If $(b_1,b_2)$ is not resolved by any landmark in $B$, then this implies that there is a pair of vertices in $A$, denoted by $(a_1,a_2)$, which is not resolved by any landmark in $B$, where $a_1$ is the vertex above $b_1$ and $a_2$ is the vertex above $b_2$. This is because there is always a shortest path from a landmark in $B$ to $a_1$ that passes through $b_1$ (and similarly for $a_2$ and $b_2$). However, $(a_1,a_2)$ would also not be resolved by any landmark in $A$ since for every landmark in $A$, a shortest path to $b_1$ will have $a_1$ as the second last vertex (and similarly for $b_2$ and $a_2$). Hence if $(b_1,b_2)$ is not resolved by any landmark in $G^+$ then $(a_1,a_2)$ is not resolved, which contradicts the induction hypothesis. Thus every pair of vertices in $B$ is resolved.\\ Now we need only consider the pairs of vertices $(a,b)$ where $a \in A$ and $b \in B$. Let $b$ be a vertex that is between two landmarks in $B$. Let $b^-$ be the landmark to the left of $b$ and let $b^+$ be the landmark to the right of $b$. Suppose the pair $(a,b)$ was not resolved by $b^+$. Since $a$ is at least one row above $b^+$, it must be at least one column to the right of $b$ since $b$ is to the left of $b^+$ on the same horizontal line and $d(b^+,b)=d(b^+,a)$. However, this implies that there is a shortest path from $b^-$ to $a$ that goes through $b$ since $b^-$ and $b$ are on the same horizontal line, $b$ is to the right of $b^-$, and $a$ is to the right of $b$. Hence $d(b^-,b) \neq d(b^-,a)$ so $(a,b)$ is resolved by $b^-$.
Equivalently, if $(a,b)$ were not resolved by $b^-$, then the pair would be resolved by $b^+$.\\ Now suppose that $b$ is not between two landmarks in $B$. This means that $b$ is to the left of the leftmost landmark in $B$, to the right of the rightmost landmark in $B$, or $B$ contains no landmarks. If $B$ contains a landmark, then without loss of generality, let $b$ be to the left of the leftmost landmark in $B$. We will denote this landmark by $b^*$. If $B$ does not contain any landmarks, then we let $b$ be any vertex in $B$ and, without loss of generality, we let $b^*$ be the right neighbour of $b$. The vertex $b^*$ has a locally resolved neighbourhood; it follows that the pair of vertices consisting of the neighbour to the left of $b^*$ and the neighbour above $b^*$ must be resolved. There are no landmarks in $B$ that will resolve this pair since the only possible landmarks in $B$ are $b^*$ and vertices to the right of $b^*$, which implies that there always exist shortest paths from any landmark in $B$ to each of these two neighbours of $b^*$ that contain $b^*$ as the second last vertex in the path. Furthermore, Lemma~\ref{quadlem} implies that no vertex in $A$ that is to the left of $b^*$ will resolve this pair. Hence there must be a landmark in $A$, which we will denote by $a^*$, that is either directly above $b^*$ on the same vertical line or to the right of $b^*$. Suppose $(a,b)$ is not resolved by $b^*$. This means that $d(b^*,b)=d(b^*,a)$. Since $a^*$ is directly above or to the right of $b^*$ and $b^*$ is to the right of $b$ on the same horizontal line, $d(a^*,b)=d(a^*,b^*) + d(b^*,b) = d(a^*,b^*) + d(b^*,a)$. If $(a,b)$ was not resolved by $a^*$, then $d(a^*,a) = d(a^*,b) \Rightarrow d(a^*,a)=d(a^*,b^*) + d(b^*,a)$, which is a contradiction since $b^*$ is below both $a$ and $a^*$, so no shortest path from $a^*$ to $a$ would contain $b^*$.
Hence $(a,b)$ is resolved by $a^*$ and thus any pair $(a,b)$ is resolved by landmarks in $R$.\\ Therefore $G^+$ is resolved by the landmarks in $R$. \end{proof} If we let a vertex in the grid be the origin of a set of axes and consider the landmarks with respect to this origin as we did in Lemmas~\ref{quadlem} and~\ref{quadlem2}, then there are only a few different types of situations where the vertex is locally resolved and so we can use this to give a weak characterisation of an arbitrary resolving set of a grid graph. Satisfying Proposition~\ref{opprop} will guarantee that the corner vertices have locally resolved neighbourhoods, so we need only consider the situations for side and interior vertices.\\ For side vertices, we consider the situations that differ from the one described in Lemma~\ref{quadlem} since we know that this situation will not locally resolve a side vertex. This leaves us with two cases: \begin{itemize} \item \textbf{Side Case (1)}: There are landmarks in two different quadrants. \end{itemize} The other situation that differs from the one described in Lemma~\ref{quadlem} is when we have a landmark on a quadrant boundary. Denote this quadrant boundary by $q$. This alone will not give a locally resolved neighbourhood, so we must include an additional landmark vertex somewhere other than $q$. We cannot put the additional landmark in a quadrant that does not have $q$ as a boundary as this will leave the same pair that was unresolved by the first landmark unresolved. Thus we get the following case: \begin{itemize} \item \textbf{Side Case (2)}: There is a landmark on a quadrant boundary $q$ and an additional landmark that is not in $q$ or in the quadrant that does not have $q$ as a boundary. 
\end{itemize} \begin{figure}[H] \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{sidecase1.pdf} \caption{Side Case (1)} \end{subfigure}% \qquad \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{sidecase2a.pdf} \caption{Side Case (2)} \end{subfigure} \end{figure} For interior vertices, we consider the situations that differ from the one described in Lemma~\ref{quadlem2}. This leaves us with three cases: \begin{itemize} \item \textbf{Interior Case (1)}: There are landmarks in two adjacent quadrants. \item \textbf{Interior Case (2)}: There are landmarks on both boundaries of the same quadrant and another landmark that is in an adjacent quadrant. \end{itemize} The only other situation that differs from the one described in Lemma~\ref{quadlem2} is when we have landmarks in two different quadrant boundaries that do not share a quadrant. Denote these quadrant boundaries by $p$ and $q$. This alone will not give a locally resolved neighbourhood, but putting an additional landmark anywhere except $p$ and $q$ will. Therefore, we get the following case. \begin{itemize} \item \textbf{Interior Case (3)}: There are landmarks in two different quadrant boundaries $p$ and $q$ that do not share a quadrant and an additional landmark that is not in $p$ or $q$. \end{itemize} \begin{figure}[H] \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{interiorcase1.pdf} \caption{Interior Case (1)} \end{subfigure}% \qquad \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{interiorcase2.pdf} \caption{Interior Case (2)} \end{subfigure} \qquad \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{interiorcase3a.pdf} \caption{Interior Case (3)} \end{subfigure} \end{figure} It can easily be verified that Side Cases (1) and (2) and Interior Cases (1), (2) and (3) give the origin a locally resolved neighbourhood.
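Theorem~\ref{localthm} and the case analysis above can also be checked mechanically on small grids. The following Python sketch is our own illustration (the function names and the example landmark set are ours, not taken from the text); it uses the fact that shortest-path distance in the grid is the Manhattan distance to brute-force both the local condition and global resolvability.

```python
from itertools import combinations

# Brute-force checks for resolving sets on the grid P_n x P_m.
# Illustrative sketch only: shortest-path distance in the grid
# equals the Manhattan distance between coordinate pairs.

def dist(u, v):
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def resolves(landmarks, n, m):
    """True iff every pair of grid vertices gets distinct distance vectors."""
    vertices = [(x, y) for x in range(n) for y in range(m)]
    return all(any(dist(r, a) != dist(r, b) for r in landmarks)
               for a, b in combinations(vertices, 2))

def locally_resolved(landmarks, v, n, m):
    """True iff no pair of neighbours of v is left unresolved by the landmarks."""
    x, y = v
    nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < n and 0 <= y + dy < m]
    return all(any(dist(r, a) != dist(r, b) for r in landmarks)
               for a, b in combinations(nbrs, 2))

n = m = 5
R = [(0, 1), (0, 3), (4, 2)]  # a set satisfying conditions (i)-(iii) above
all_local = all(locally_resolved(R, (x, y), n, m)
                for x in range(n) for y in range(m))
print(all_local, resolves(R, n, m))      # -> True True
print(resolves([(0, 0), (4, 4)], n, m))  # opposite corners fail -> False
```

For small grids this brute-force check is a convenient sanity test; larger examples, such as the 4-minimal shown later, can instead be verified with an integer programming formulation.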
Hence our weak characterisation of an arbitrary resolving set of a grid graph is a set of vertices that contains two boundary vertices on opposite sides, and in which every side vertex is in a situation described by Side Case (1) or (2) and every interior vertex is in a situation described by Interior Case (1), (2) or (3) with respect to the vertices in the set.\\ It is possible to use our weak characterisation to find classes of $k$-minimals. We have characterised one such class. In order to describe the characterisation of this class of $k$-minimals, we first need some additional definitions.\\ A \textit{line segment} between two vertices $a$ and $b$ that are on the same line is the unique shortest path between $a$ and $b$. A line segment can either be a horizontal line segment or a vertical line segment. Now suppose we have a set of vertices $R$ on the grid. We define a \textit{horizontal line segment path} between two vertices $u$ and $v$ with respect to $R$, where $u,v \in R$, to be a shortest path from $u$ to $v$ that only uses horizontal line segments between the vertices in $R$ and the vertical lines that intersect these line segments (if such a path exists). If there are no vertices in $R$ on the same horizontal line as a vertex $w \in R$, then $w$ is a horizontal line segment of length one. A similar definition exists for the vertical line segment path that instead uses vertical line segments and the horizontal lines that intersect them. We say that the horizontal (vertical) line segment path is minimal if the horizontal (vertical) line segment path between $u$ and $v$ with respect to $R$ exists, but no horizontal (vertical) line segment path exists between $u$ and $v$ with respect to any set $R-\{w\}$, where $w \in R$.\\ Let $X_1,X_2,\dots,X_l$ be the sets of horizontal coordinates of the vertices in each of the horizontal line segments between vertices in the set $R$.
If the horizontal coordinate of $u$ is $p$ and the horizontal coordinate of $v$ is $q$, where $u,v \in R$ and $p<q$, then there is no horizontal line segment path from $u$ to $v$ with respect to $R$ if $\{p,p+1, \dots,q-1, q\}\nsubseteq \bigcup_{i=1}^l X_i$. Below we have given some necessary conditions that must be satisfied in order for this horizontal line segment path to be minimal: \begin{enumerate}[(1)] \item There are no more than two vertices in the same row, since we would only need the pair of vertices with the largest horizontal distance between them. \item $X_i \cap X_j \cap X_k = \emptyset$ for any three horizontal line segments, otherwise we could achieve the same result using only two of the horizontal line segments. \item $X_i \nsubseteq X_j$ for any two horizontal line segments, otherwise we could remove the line segment with $X_i$ as its horizontal coordinates and we would still achieve the same result. \item The largest horizontal coordinate in any $X_i$ is greater than or equal to $p$. \item The smallest horizontal coordinate in any $X_i$ is less than or equal to $q$. \item If $u$ is above $v$, then if the horizontal line segment with horizontal coordinates $X_i$ is above the horizontal line segment with coordinates $X_j$, where neither of these horizontal line segments contains $u$ or $v$, then the largest horizontal coordinate in $X_j$ must be strictly greater than the largest horizontal coordinate in $X_i$. If we had this situation and the line segment with coordinates $X_i$ were necessary for the horizontal line segment path, then there would exist such a path that would not use the line segment with coordinates $X_j$. \end{enumerate} We now use the above definitions to give the characterisation of a class of $k$-minimals: \begin{theorem} \label{segementthm} A set $M$ of cardinality $k>3$ is a $k$-minimal if: \begin{enumerate}[(i)] \item $M$ has no more than one corner vertex.
\item $M$ contains two boundary vertices, $u$ and $v$, on opposite sides that are not on the same line. Furthermore, $M$ does not contain any other pair of vertices on opposite sides. \item There is a minimal horizontal line segment path between $u$ and $v$ with respect to $M$ if $u$ and $v$ are on horizontal sides, otherwise there is a minimal vertical line segment path between $u$ and $v$ with respect to $M$. \end{enumerate} \end{theorem} \begin{proof} Let $M$ be a set of vertices that satisfies the above conditions. We will assume without loss of generality that the vertices $u$ and $v$ are on horizontal sides of the grid, the side containing $u$ is above the side containing $v$, and $u$ is to the left of $v$. Hence there is a horizontal line segment path between $u$ and $v$ with respect to $M$. We will refer to such paths as $M$-paths. We will prove the resolvability of $M$ by showing that every vertex has a locally resolved neighbourhood with respect to the elements of $M$. Since condition (ii) gives the corners locally resolved neighbourhoods, we need only consider the side and interior vertices.\\ If a side vertex $w$ is not on the same side as $u$ or $v$, then $w$ is in the situation described by Side Case (1) where $u$ and $v$ are in different quadrants. If the side vertex $w$ is on the same line as $u$ (but not $u$ itself), then $u$ is in a quadrant boundary with respect to $w$ as the origin and there is either a vertex below $u$ on the same vertical line, or there are vertices below and at either side of $u$ in order for $u$ to be above a horizontal line segment. In either case, $w$ is in the situation described by Side Case (2) (and similarly if the side vertex is on the same side as $v$). Finally, if the side vertex $w$ is either $u$ or $v$, then it is either in the situation described by Side Case (2) if there is a vertex on the same vertical line as $w$, or it is in Side Case (1) if $w$ is above a horizontal line segment.
Hence all the side vertices have locally resolved neighbourhoods.\\ Now consider the interior vertices. If $w$ is an interior vertex that is not in an $M$-path, then it is in the situation described by Interior Case (1) since the horizontal line segment path from $u$ to $v$ would cross the horizontal axis with $w$ as the origin. If $w$ is an interior vertex in an $M$-path but $w \notin M$, then there are three cases: \begin{enumerate}[(1)] \item $w$ is between two vertices $m_1,m_2 \in M$ on the same horizontal line. \item $w$ is between a vertex $m \in M$ and a vertex $c \notin M$ that is in a horizontal line segment, where $c$ and $m$ are on the same vertical line. \item $w$ is between vertices $c_1,c_2 \notin M$ that are in two different horizontal line segments, where $c_1$ and $c_2$ are on the same vertical line. \end{enumerate} In Cases (2) and (3), $w$ is in the situation of Interior Case (1) due to $w$ being on a vertical line that intersects a horizontal line segment. In Case (1), $w$ is in the situation described by Interior Case (3).\\ Finally, if $w$ is an interior vertex in an $M$-path and $w \in M$, then there must be another vertex in $M$ on the same horizontal line as $w$. If the vertical line containing $w$ contains another vertex in $M$, then $w$ is in the situation described by Interior Case (2). Otherwise, $w$ is in the situation described by Interior Case (1). Hence, all interior vertices have locally resolved neighbourhoods with respect to $M$. Therefore, all the vertices in the grid have locally resolved neighbourhoods with respect to the vertices of $M$ and so by Theorem~\ref{localthm}, $M$ is a resolving set.\\ Now we will show that $M$ is a minimal. Suppose we removed a vertex $w$ from $M$. If $w$ was either of the vertices $u$ or $v$, then condition (ii) implies that $M-\{w\}$ would not contain a pair of boundary vertices on opposite sides of the grid, so by Proposition~\ref{opprop}, $M-\{w\}$ is not resolving.
Now assume that $w$ is neither $u$ nor $v$. We know that there is an $m \in M$ on the same line as $w$. We also know that no vertical line between $w$ and $m$ intersects with two or more other horizontal line segments and that one of the vertical lines intersects with a horizontal line segment that is above $m$ and $w$ and another of these vertical lines intersects with a horizontal line segment that is below $m$ and $w$. Furthermore, we know that one of the vertical lines between $m$ and $w$ that intersects with the line segment above $m$ and $w$ will intersect with the line segment at a vertex in $M$ due to conditions (3) and (6) of a minimal horizontal line segment path. Similarly, we know that one of the vertical lines between $m$ and $w$ that intersects with the line segment below $m$ and $w$ will intersect with the line segment at a vertex in $M$. If $w$ is to the left of $m$, let $w_1$ be the first vertex on the horizontal line between $w$ and $m$ for which there is a vertex in $M$ that is on the same vertical line and below $w_1$. If $w$ is to the right of $m$, let $w_1$ be the first vertex on the horizontal line between $m$ and $w$ for which there is a vertex in $M$ that is on the same vertical line and above $w_1$. In either case, if $w_1$ is an interior vertex, then it is in the situation described by Lemma~\ref{quadlem2} as there is not a pair of vertices in $M$ on the same vertical line as $w_1$ that are above and below $w_1$, nor is there a pair of vertices in $M$ on the same horizontal line as $w_1$ that are to the left and right of $w_1$. Conditions (3) and (6) imply that if there is a vertex $m_1 \in M$ that is on the same vertical line as any of the vertices on the horizontal line segment between $w$ and $m$, then if $m_1$ is above $w$ and $m$, it must be the rightmost vertex on a horizontal line segment, otherwise it must be the leftmost vertex in a horizontal line segment.
This implies that there are no vertices in the north-east or south-west quadrants with respect to $w_1$ as the origin, hence by Lemma~\ref{quadlem2}, $w_1$ does not have a locally resolved neighbourhood. If $w_1$ were a side vertex, then $w_1 \in M$ and it would either be on the same side and below $u$, where $u$ is a corner vertex, or it would be on the same side and above $v$, where $v$ is a corner vertex. In the case that $w_1$ is on the same side as $u$, there are no other vertices in $M$ on the same horizontal line as $w_1$ and $u$ is the only vertex on the same vertical line as $w_1$. Condition (6) implies that there are no vertices in the north-east quadrant with respect to $w_1$ as the origin and hence $w_1$ is not in the situation described by either Side Case (1) or Side Case (2) and thus does not have a locally resolved neighbourhood. In the case that $w_1$ is on the same side as $v$, there are no other vertices in $M$ on the same horizontal line as $w_1$ and $v$ is the only vertex on the same vertical line as $w_1$. Condition (6) implies that there are no vertices in the south-west quadrant with respect to $w_1$ and so as in the previous case, $w_1$ does not have a locally resolved neighbourhood.\\ Therefore $M-\{w\}$ is not a resolving set for the grid. \end{proof} \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{kminimal.pdf} \caption{An example of a minimal described by Theorem~\ref{segementthm}.} \end{figure} It is important to note that the class of minimals described by Theorem~\ref{segementthm} does not include every $k$-minimal where $k>3$. Below we have an example of a 4-minimal that is clearly not in this class of minimals as no element in the minimal is on the same line as any other element in the minimal. The resolvability and minimality of this set of landmarks were verified by computer using an integer programming formulation of the problem.
See appendix~\ref{sec:integer} for details.\\ \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{4minimal.pdf} \caption{A 4-minimal for the grid $P_5 \square P_5$.} \end{figure} We would now like to determine the largest cardinality of any minimal for the grid $P_n \square P_m$, where $3 \leq n \leq m$. The following result can be used to give an upper bound for this number. \begin{lemma} \label{3line} No minimal for the grid contains three vertices that are on the same line. \end{lemma} \begin{proof} Suppose we have three distinct vertices $u$, $v$ and $w$ on the same line in the grid, where $w$ is between $u$ and $v$. We will assume without loss of generality that $u$ and $v$ form a vertical line segment and that $u$ is above $v$. Now suppose there are two vertices $p$ and $q$ that are not resolved by $u$ or $v$. There are three possible cases for the location of $p$ and $q$: \begin{enumerate}[(1)] \item $p$ and $q$ are on the same horizontal line that intersects the vertical line segment between $u$ and $v$. \item Both $p$ and $q$ are above $u$. \item Both $p$ and $q$ are below $v$. \end{enumerate} For the first case, let $r$ be the vertex in the vertical line segment between $u$ and $v$ that is on the same horizontal line as $p$ and $q$. Since $r$ does not resolve $p$ and $q$, and since, if $w \neq r$, there is a shortest path from $w$ to $p$ that goes through $r$ and a shortest path from $w$ to $q$ that goes through $r$, $w$ does not resolve $p$ and $q$. For the second case, there is a shortest path from $w$ to $p$ that goes through $u$ and a shortest path from $w$ to $q$ that goes through $u$, and so $w$ does not resolve $p$ and $q$. Finally, for the third case, there is a shortest path from $w$ to $p$ that goes through $v$ and a shortest path from $w$ to $q$ that goes through $v$, and thus any pair of vertices $p$ and $q$ that are not resolved by $u$ or $v$ will not be resolved by $w$.
Thus, if $u$, $v$ and $w$ were in a resolving set $R$, then $R-\{w\}$ would still be resolving. \end{proof} The corollary of this result is that no minimal will have more than $2n$ vertices. It turns out that we can provide an even better upper bound. \begin{theorem} For the grid $P_n \square P_m$, where $3 \leq n \leq m$, the cardinality of the largest minimal is $2n-2$. \end{theorem} \begin{proof} Suppose we have a grid $P_n \square P_m$, where $3 \leq n \leq m$, and a set $R$ that contains $2n$ vertices, where $R$ does not have three vertices that are on the same line. If we rotate the grid so that the north and south sides are of length $m$, then we have two vertices in every row. This implies that we have two vertices, $u_1$ and $u_2$, on one horizontal side of the grid, and two vertices, $v_1$ and $v_2$, on the opposite side. If either $u_1$ or $u_2$ is on the same vertical line as any vertex in the horizontal line segment between $v_1$ and $v_2$, or if $v_1$ or $v_2$ is on the same vertical line as any vertex in the horizontal line segment between $u_1$ and $u_2$, then $R$ is a superset of a 3-minimal and we could remove $2n-3$ vertices from $R$ and still have a resolving set. Assume then that neither $u_1$ nor $u_2$ is on the same vertical line as a vertex in the line segment between $v_1$ and $v_2$, and that neither $v_1$ nor $v_2$ is on the same vertical line as a vertex in the line segment between $u_1$ and $u_2$. Hence we may assume without loss of generality that if the horizontal coordinates of $u_1,u_2,v_1$ and $v_2$ are $p_1,p_2,q_1,q_2$, then $p_1<p_2<q_1<q_2$. The vertices $u_1$ and $v_1$ form opposite corners of a subgrid. Let the vertices in this subgrid be in the set $A$, and let all other vertices in the grid be in the set $B$.
We will now show that the two vertices $u_2$ and $v_2$ do not affect the resolvability of $R$.\\ Clearly all of the corners of this grid are locally resolved by $u_1$ and $v_1$ since they are on opposite sides of the grid. If a vertex is a side vertex that is not on a side containing $u_1$ or $v_1$, then it is in the situation of Side Case (1), where $u_1$ and $v_1$ are in adjacent quadrants. Any interior vertex in $B$ is in the situation of Interior Case (1), where $u_1$ and $v_1$ are again in adjacent quadrants. A side vertex in $B$ that is on the same side as $u_1$ is in the situation of Side Case (2), where $u_1$ is in a quadrant boundary and $v_1$ is in a quadrant that has the quadrant boundary that contains $u_1$. Similarly, a side vertex in $B$ that is on the same side as $v_1$ is in the situation of Side Case (2), as it is locally resolved by $u_1$ and $v_1$. A side vertex in $A$ that is on the same side as $u_1$ may not be locally resolved, as the quadrant boundary that contains $u_1$ and $u_2$ (or just $u_2$ if $u_1$ is the origin) is not a boundary of the quadrant that contains $v_1$ and $v_2$. If we had an element of $R$ in the quadrant adjacent to the quadrant containing $v_1$, then the side vertex would be locally resolved, with or without $u_2$ and $v_2$. Similarly, the resolvability of the local neighbourhood of a side vertex in $A$ that is on the same side as $v_1$ does not depend on $u_2$ or $v_2$. Finally, an interior vertex in $A$ has the pair $u_1$ and $u_2$ in one quadrant and the pair $v_1$ and $v_2$ in an opposite quadrant with respect to the interior vertex as the origin. It is clear from Lemma~\ref{quadlem2} that the resolvability of the local neighbourhood of this vertex does not depend on $u_2$ or $v_2$.
Thus, if $R$ were a resolving set, then $R-\{u_2,v_2\}$ would be a resolving set, and therefore no minimal has a cardinality greater than $2n-2$.\\ We show the existence of a minimal of cardinality $2n-2$ by the following construction: Given the grid $P_n \square P_m$, where $3 \leq n \leq m$, rotate the grid so that the north and south sides are of length $n$. Let $u$ be the north-west corner of the grid and let $v$ be the left neighbour of the south-east corner of the grid. Start from $u=p_1$ and form a path $P= p_1,p_2,p_3,\dots,p_{2n-3}$, where $p_{i+1}$ is the neighbour below $p_i$ if $i$ is odd, else $p_{i+1}$ is the neighbour to the right of $p_i$. Note that the path moves right $n-2$ times, hence $p_{2n-3}$ is on the same vertical line as $v$. Let $M=\{P\} \cup \{v\}$, where $\{ P \}$ denotes the set of vertices in $P$. Then by Theorem~\ref{segementthm}, $M$ is a minimal, since the horizontal line segment path from $u$ to $v$ is unique and traverses every vertex in $M$. Since $|M| = 2n-2$, $M$ is a $(2n-2)$-minimal. \end{proof} \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{8minimal.pdf} \caption{An 8-minimal for the grid $P_5 \square P_5$.} \end{figure} The $(2n-2)$-minimal in the above proof was formed by taking a shortest path between two boundary vertices, $u$ and $v$, on opposite sides of the grid and letting the minimal contain $u$, $v$ and the corners of the shortest path, where a corner of the path $v_1,v_2,\dots ,v_k$ is any vertex $v_i$ such that $v_{i-1}$ and $v_{i+1}$ are on different lines of the grid. By choosing an appropriate $u$ and $v$ and an appropriate shortest path between them, it is possible to produce a $k$-minimal in this fashion for any $k$ such that $3 \leq k \leq 2n-2$. \section{Conclusion} In this paper, we have provided a complete characterisation of 2-minimals and 3-minimals and have shown that $k$-minimals exist if and only if $2 \leq k \leq 2n-2$.
We have also provided a characterisation of a class of $k$-minimals and a weak characterisation of resolving sets of the grid, which may be used in the future to find more classes of $k$-minimals. As future work, we wish to give a complete characterisation and enumeration of all the minimals of the grid. The results thus far suggest that even with a complete characterisation in hand, enumerating all the minimals would be difficult, so should no polynomial-time algorithm for the minimum weight resolving set problem on the grid be found, we would like to develop a suitable heuristic algorithm for this problem on grid graphs. We would also eventually like to expand the scope of our investigation to include other grid-like graphs, namely cylinders ($P_n \square C_m$) and tori ($C_n \square C_m$). In the case of the torus, we conjecture that all the minimals of a vertex-transitive graph have the same cardinality, which, if true, would imply that no investigation into the torus is needed, since it is vertex-transitive and its metric dimension is known. Proving this conjecture is also possible future work and, due to the vast number of graphs for which the metric dimension is known, this would be a very powerful result. \bibliographystyle{apalike}
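As a companion to the computer verification mentioned above, a naive brute-force check of the resolving property for $P_n \square P_m$ can be sketched as follows (shortest-path distances in the grid are $\ell_1$ distances; the function names are ours, and this is not the integer programming formulation of the appendix):

```python
from itertools import product

def grid_distance(u, v):
    # Shortest-path distance in P_n x P_m is the l1 (Manhattan) distance.
    return abs(u[0] - v[0]) + abs(u[1] - v[1])

def is_resolving(R, n, m):
    # R is resolving iff the distance vectors to the landmarks in R
    # are pairwise distinct over all n*m vertices of the grid.
    seen = set()
    for v in product(range(n), range(m)):
        vec = tuple(grid_distance(v, r) for r in R)
        if vec in seen:
            return False
        seen.add(vec)
    return True
```

For example, two corners on the same side of $P_3 \square P_3$ form a resolving set, while a single vertex, or two diagonally opposite corners, do not.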
https://arxiv.org/abs/1409.4510
Minimum Weight Resolving Sets of Grid Graphs
For a simple graph $G=(V,E)$ and for a pair of vertices $u,v \in V$, we say that a vertex $w \in V$ resolves $u$ and $v$ if the shortest path from $w$ to $u$ is of a different length than the shortest path from $w$ to $v$. A set of vertices $R \subseteq V$ is a resolving set if for every pair of vertices $u$ and $v$ in $G$, there exists a vertex $w \in R$ that resolves $u$ and $v$. The minimum weight resolving set problem is to find a resolving set $M$ for a weighted graph $G$ such that $\sum_{v \in M} w(v)$ is minimum, where $w(v)$ is the weight of vertex $v$. In this paper, we explore the possible solutions of this problem for grid graphs $P_n \square P_m$ where $3\leq n \leq m$. We give a complete characterisation of solutions whose cardinalities are 2 or 3, and show that the maximum cardinality of a solution is $2n-2$. We also provide a characterisation of a class of minimals whose cardinalities range from $4$ to $2n-2$.
https://arxiv.org/abs/0712.1680
Diamond-$\alpha$ Jensen's Inequality on Time Scales
The theory and applications of dynamic derivatives on time scales have recently received considerable attention. The primary purpose of this paper is to give basic properties of diamond-$\alpha$ derivatives, which are a linear combination of the delta and nabla dynamic derivatives on time scales. We prove a generalized version of Jensen's inequality on time scales via the diamond-$\alpha$ integral and present some corollaries, including Hölder's and Minkowski's diamond-$\alpha$ integral inequalities.
\section{Introduction} Jensen's inequality is of great interest in the theory of differential and difference equations, as well as in other areas of mathematics. The original Jensen inequality can be stated as follows: \begin{theorem}\emph{\cite{mit}} \label{thm1} If $g \in C([a, b], (c, d))$ and $f \in C((c, d), \mathbb{R} )$ is convex, then $$ f \left( \frac{\int_{a}^{b}g(s) ds}{b-a}\right ) \leq \frac{\int_{a}^{b}f(g(s)) ds}{b-a}. $$ \end{theorem} Jensen's inequality on time scales via the $\Delta$-integral was recently obtained by Agarwal, Bohner and Peterson. \begin{theorem}\emph{\cite{abp}} \label{thm2} If $ g \in C_{rd}([a, b], (c, d))$ and $f \in C((c, d), \mathbb{R}) $ is convex, then $$ f \left( \frac{\int_{a}^{b}g(s) \Delta s}{b-a}\right ) \leq \frac{\int_{a}^{b}f(g(s)) \Delta s}{b-a}. $$ \end{theorem} Under similar hypotheses, we may replace the $\Delta$-integral by the $\nabla$-integral and get a completely analogous result \cite{rev1:r}. The aim of this paper is to extend Jensen's inequality to an arbitrary time scale via the diamond-$\alpha$ integral \cite{sfhd}. There have been recent developments in the theory and applications of dynamic derivatives on time scales. From the theoretical point of view, the study provides a unification and extension of traditional differential and difference equations. Moreover, it is a crucial tool in many computational and numerical applications. Based on the well-known $\Delta$ (delta) and $\nabla$ (nabla) dynamic derivatives, a combined dynamic derivative, the so-called $\diamondsuit_{\alpha}$ (diamond-$\alpha$) dynamic derivative, was introduced as a linear combination of the $\Delta$ and $\nabla$ dynamic derivatives on time scales \cite{sfhd}. The diamond-$\alpha$ derivative reduces to the $\Delta$ derivative for $\alpha =1$ and to the $\nabla$ derivative for $\alpha =0$. On the other hand, it represents a ``weighted dynamic derivative'' on any uniformly discrete time scale when $\alpha =\frac{1}{2}$.
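To see this in action on the simplest uniformly discrete time scale, $\mathbb{T}=\mathbb{Z}$: there the delta and nabla derivatives are the forward and backward differences, and the diamond-$\alpha$ derivative is their convex combination. The short Python sketch below (our own illustration, with our own function names) checks that for $\alpha=\frac{1}{2}$ one obtains the central difference, which recovers the exact derivative $2t$ of $f(t)=t^2$.

```python
def delta(f, t):
    # Delta (forward-difference) derivative on T = Z, where sigma(t) = t + 1.
    return f(t + 1) - f(t)

def nabla(f, t):
    # Nabla (backward-difference) derivative on T = Z, where rho(t) = t - 1.
    return f(t) - f(t - 1)

def diamond(f, t, alpha):
    # Diamond-alpha derivative: convex combination of delta and nabla.
    return alpha * delta(f, t) + (1 - alpha) * nabla(f, t)

f = lambda t: t ** 2
# For alpha = 1/2 the diamond derivative is the central difference
# (f(t+1) - f(t-1)) / 2, which is exact for quadratics.
```

At $t=3$ this gives $7$ for $\alpha=1$, $5$ for $\alpha=0$, and $6=2\cdot 3$ for $\alpha=\frac{1}{2}$.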
We refer the reader to \cite{Stef,sQ,sfhd} for an account of the calculus associated with the diamond-$\alpha$ dynamic derivatives. The paper is organized as follows. In Section~\ref{sec:pre} we briefly give the basic definitions and theorems of time scales as introduced in Hilger's thesis \cite{h1} (see also \cite{h2,h3}). In Section~\ref{sec:mr} we present our main results, which are generalizations of Jensen's inequality on time scales. Some examples and applications are given in Section~\ref{sec:app}. \section{Preliminaries} \label{sec:pre} A time scale $\mathbb{T}$ is an arbitrary nonempty closed subset of the real numbers. The calculus of time scales was initiated by S. Hilger in his Ph.D. thesis \cite{h1} in order to unify discrete and continuous analysis. Let $\mathbb{T}$ be a time scale. $\mathbb{T}$ has the topology that it inherits from the real numbers with the standard topology. For $t \in \mathbb{T}$, we define the forward jump operator $\sigma: \mathbb{T} \rightarrow \mathbb{T}$ by $\sigma(t)= \inf \left \{s \in \mathbb{T}: s >t \right\}$, and the backward jump operator $\rho: \mathbb{T} \rightarrow \mathbb{T}$ by $\rho(t)= \sup \left \{s \in \mathbb{T}: s < t \right \}$. If $\sigma(t) > t$, we say that $t$ is right-scattered, while if $\rho(t) < t$, we say that $t$ is left-scattered. Points that are simultaneously right-scattered and left-scattered are called isolated. If $\sigma(t)=t$, then $t$ is called right-dense, and if $\rho(t)=t$, then $t$ is called left-dense. Points that are simultaneously right-dense and left-dense are called dense. For $t \in \mathbb{T}$, two mappings $\mu, \nu: \mathbb{T} \rightarrow [0, +\infty)$ are defined as follows: $\mu(t):=\sigma(t)-t$, $\nu(t):=t-\rho(t)$. We introduce the sets $\mathbb{T}^{k}$, $\mathbb{T}_{k}$, and $\mathbb{T}^{k}_{k}$, which are derived from the time scale $\mathbb{T}$, as follows.
If $\mathbb{T}$ has a left-scattered maximum $t_{1}$, then $\mathbb{T}^{k}= \mathbb{T}-\{t_{1} \}$, otherwise $\mathbb{T}^{k}= \mathbb{T}$. If $\mathbb{T}$ has a right-scattered minimum $t_{2}$, then $\mathbb{T}_{k}= \mathbb{T}-\{t_{2} \}$, otherwise $\mathbb{T}_{k}= \mathbb{T}$. Finally, we define $\mathbb{T}_{k}^{k}= \mathbb{T}^{k} \cap \mathbb{T}_{k}$. Throughout the text we will denote a time scale interval by $$ [a,b]_{\mathbb{T}}=\{t\in\mathbb{T}:a\leq t \leq b\}, \quad \mbox{with}\ a,b\in\mathbb{T}. $$ Let $f: \mathbb{T}\rightarrow \mathbb{R}$ be a real function on a time scale $\mathbb{T}$. Then, for $t \in \mathbb{T}^{k}$ we define $f^{\Delta}(t)$ to be the number, if one exists, such that for all $\epsilon >0$ there is a neighborhood $U$ of $t$ such that for all $s \in U$, $$ \left|f(\sigma(t))- f(s)-f^{\Delta}(t)\left(\sigma(t)-s\right)\right| \leq \epsilon |\sigma(t)-s|. $$ We say that $f$ is delta differentiable on $\mathbb{T}^{k}$, provided $f^{\Delta}(t)$ exists for all $t \in \mathbb{T}^{k}$. Similarly, for $t \in \mathbb{T}_{k}$, we define $f^{\nabla}(t)$ to be the number, if one exists, such that for all $\epsilon >0$ there is a neighborhood $V$ of $t$ such that for all $s \in V$, $$ \left|f(\rho(t))- f(s)-f^{\nabla}(t)\left(\rho(t)-s\right)\right| \leq \epsilon |\rho(t)-s|. $$ We say that $f$ is nabla differentiable on $\mathbb{T}_{k}$, provided that $f^{\nabla}(t)$ exists for all $t \in \mathbb{T}_{k}$. Given a function $f:\mathbb{T}\rightarrow \mathbb{R}$, we define $f^{\sigma}: \mathbb{T}\rightarrow \mathbb{R}$ by $f^{\sigma}(t)=f(\sigma(t))$ for all $t \in \mathbb{T}$, \textrm{i.e.} $f^{\sigma}= f\circ \sigma$; and we define $f^{\rho}: \mathbb{T}\rightarrow \mathbb{R}$ by $f^{\rho}(t)=f(\rho(t))$ for all $t \in \mathbb{T}$, \textrm{i.e.} $f^{\rho}= f\circ \rho$. The following properties hold for all $t \in \mathbb{T}^{k}$: \begin{itemize} \item[(i)] If $f$ is delta differentiable at $t$, then $f$ is continuous at $t$.
\item[(ii)] If $f$ is continuous at $t$ and $t$ is right-scattered, then $f$ is delta differentiable at $t$ with $f^{\Delta}(t)= \frac{f^{\sigma}(t)-f(t)}{\mu(t)}$. \item[(iii)] If $t$ is right-dense, then $f$ is delta differentiable at $t$ if and only if the limit $\lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}$ exists as a finite number. In this case, $f^{\Delta}(t)= \lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}$. \item[(iv)] If $f$ is delta differentiable at $t$, then $f^{\sigma}(t)= f(t)+ \mu(t)f^{\Delta}(t)$. \end{itemize} Similarly, given a function $f: \mathbb{T} \rightarrow \mathbb{R}$, the following is true for all $t \in \mathbb{T}_{k}$: \begin{itemize} \item[(a)] If $f$ is nabla differentiable at $t$, then $f$ is continuous at $t$. \item[(b)] If $f$ is continuous at $t$ and $t$ is left-scattered, then $f$ is nabla differentiable at $t$ with $f^{\nabla}(t)= \frac{f(t)-f^{\rho}(t)}{\nu(t)}$. \item[(c)] If $t$ is left-dense, then $f$ is nabla differentiable at $t$ if and only if the limit $\lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}$ exists as a finite number. In this case, $f^{\nabla}(t)= \lim_{s\rightarrow t}\frac{f(t)-f(s)}{t-s}.$ \item[(d)] If $f$ is nabla differentiable at $t$, then $f^{\rho}(t)= f(t) - \nu(t)f^{\nabla}(t)$. \end{itemize} A function $f: \mathbb{T} \rightarrow \mathbb{R}$ is called rd-continuous, provided it is continuous at all right-dense points in $\mathbb{T}$ and its left-sided limits exist at all left-dense points in $\mathbb{T}$. A function $f: \mathbb{T} \rightarrow \mathbb{R}$ is called ld-continuous, provided it is continuous at all left-dense points in $\mathbb{T}$ and its right-sided limits exist and are finite at all right-dense points in $\mathbb{T}$. A function $F: \mathbb{T} \rightarrow \mathbb{R} $ is called a delta antiderivative of $f: \mathbb{T} \rightarrow \mathbb{R}$, provided $F^{\Delta}(t)=f(t)$ holds for all $t \in \mathbb{T}^{k}$. Then, the delta integral of $f$ is defined by $\int^b_a f(t)\Delta t=F(b)-F(a)$.
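For instance, on $\mathbb{T}=\mathbb{Z}$ a delta antiderivative is a discrete antidifference, and the delta integral reduces to a left-hand sum, $\int_a^b f(t)\,\Delta t=\sum_{t=a}^{b-1}f(t)$. A small Python sketch of this standard fact (function names are ours):

```python
def delta_integral(f, a, b):
    # On T = Z the delta integral is the left-hand sum over {a, ..., b-1}.
    return sum(f(t) for t in range(a, b))

# F(t) = t(t-1)/2 is a delta antiderivative of f(t) = t on T = Z,
# since F^Delta(t) = F(t+1) - F(t) = t; so the integral is F(b) - F(a).
F = lambda t: t * (t - 1) // 2
f = lambda t: t
```

For example, $\int_2^6 t\,\Delta t = 2+3+4+5 = 14 = F(6)-F(2)$.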
A function $G: \mathbb{T} \rightarrow \mathbb{R} $ is called a nabla antiderivative of $g: \mathbb{T} \rightarrow \mathbb{R}$, provided $G^{\nabla}(t)=g(t)$ holds for all $t \in \mathbb{T}_{k}$. Then, the nabla integral of $g$ is defined by $\int^b_a g(t)\nabla t=G(b)-G(a)$. For more details on time scales we refer the reader to \cite{a1,abra,a2,a3,a4,b1,b2}. Now, we briefly introduce the diamond-$\alpha$ dynamic derivative and the diamond-$\alpha$ integral \cite{sfhd,Rogers}. Let $\mathbb{T}$ be a time scale, and $t$, $s \in \mathbb{T}$. Following \cite{Rogers}, we define $\mu_{t s} = \sigma(t)-s$, $\eta_{t s} = \rho(t)-s$, and $f^{\diamondsuit_{\alpha}}(t)$ to be the value, if one exists, such that for all $\epsilon >0$ there is a neighborhood $U$ of $t$ such that for all $s \in U$ \begin{equation*} \left| \alpha \left[f^\sigma(t) - f(s)\right] \eta_{t s} + (1-\alpha) \left[f^\rho(t) - f(s) \right] \mu_{t s} - f^{\diamondsuit_{\alpha}}(t) \mu_{t s} \eta_{t s} \right| < \epsilon \left|\mu_{t s} \eta_{t s}\right| \, . \end{equation*} A function $f$ is said to be diamond-$\alpha$ differentiable on $\mathbb{T}^k_k$ provided $f^{\diamondsuit_{\alpha}}(t)$ exists for all $t \in \mathbb{T}^k_k$. Let $0 \leq \alpha \leq 1$. If $f$ is differentiable at $t \in \mathbb{T}^k_k$ in both the delta and nabla senses, then $f$ is diamond-$\alpha$ differentiable at $t$ and the dynamic derivative $f^{\diamondsuit_{\alpha}}(t)$ is given by \begin{equation} \label{eq:defSmp} f^{\diamondsuit_{\alpha}}(t)= \alpha f^{\Delta}(t)+(1-\alpha)f^{\nabla}(t) \end{equation} (see \cite[Theorem~3.2]{Rogers}). Equality \eqref{eq:defSmp} is the definition of $f^{\diamondsuit_{\alpha}}(t)$ found in \cite{sfhd}. The diamond-$\alpha$ derivative reduces to the standard $\Delta$ derivative for $\alpha =1$, and to the standard $\nabla$ derivative for $\alpha =0$. On the other hand, it represents a ``weighted dynamic derivative'' for $\alpha \in (0,1)$.
Furthermore, the combined dynamic derivative offers a centralized derivative formula on any uniformly discrete time scale $\mathbb{T}$ when $\alpha=\frac{1}{2}$. Let $f, g: \mathbb{T} \rightarrow \mathbb{R}$ be diamond-$\alpha$ differentiable at $t \in \mathbb{T}^k_k$. Then (\textrm{cf.} \cite[Theorem~2.3]{sfhd}), \begin{itemize} \item[(i)] $f+g: \mathbb{T} \rightarrow \mathbb{R}$ is diamond-$\alpha$ differentiable at $t \in \mathbb{T}^k_k$ with $$ (f+g)^{\diamondsuit_{\alpha}}(t)= (f)^{\diamondsuit_{\alpha}}(t)+(g)^{\diamondsuit_{\alpha}}(t); $$ \item[(ii)] For any constant $c$, $cf: \mathbb{T} \rightarrow \mathbb{R}$ is diamond-$\alpha$ differentiable at $t \in \mathbb{T}^k_k$ with $$ (cf)^{\diamondsuit_{\alpha}}(t)= c(f)^{\diamondsuit_{\alpha}}(t); $$ \item[(iii)] $fg: \mathbb{T} \rightarrow \mathbb{R}$ is diamond-$\alpha$ differentiable at $t \in \mathbb{T}^k_k$ with $$ (fg)^{\diamondsuit_{\alpha}}(t)= (f)^{\diamondsuit_{\alpha}}(t)g(t)+ \alpha f^{\sigma}(t)(g)^{\Delta}(t) +(1-\alpha) f^{\rho}(t)(g)^{\nabla}(t). $$ \end{itemize} Let $a, t \in \mathbb{T}$, and $h: \mathbb{T} \rightarrow \mathbb{R}$. Then, the diamond-$\alpha$ integral of $h$ from $a$ to $t$ is defined by $$ \int_{a}^{t}h(\tau) \diamondsuit_{\alpha} \tau = \alpha \int_{a}^{t}h(\tau) \Delta \tau +(1- \alpha) \int_{a}^{t}h(\tau) \nabla \tau, \quad 0 \leq \alpha \leq 1, $$ provided that the delta and nabla integrals of $h$ on $\mathbb{T}$ exist. It is clear that the diamond-$\alpha$ integral of $h$ exists when $h$ is a continuous function. We note that the $\diamondsuit_{\alpha}$ combined derivative is not a dynamic derivative, due to the absence of an anti-derivative \cite[Sec.~4]{Rogers}. Moreover, in general we do not have \begin{equation} \label{eq:tfci:nh} \left ( \int_{a}^{t}h(\tau) \diamondsuit_{\alpha} \tau \right)^{\diamondsuit_{\alpha}} = h(t) \, , \quad t \in \mathbb{T} \, .
\end{equation} \begin{example} \label{ex:2.1} Let $\mathbb{T} = \{0,1,2\}$, $a = 0$, and $h(\tau) = \tau^2$, $\tau \in \mathbb{T}$. It is a simple exercise to see that $$ \left.\left ( \int_{0}^{t} h(\tau) \diamondsuit_{\alpha} \tau \right)^{\diamondsuit_{\alpha}}\right|_{t=1} = h(1) + 2 \alpha (1-\alpha) \, , $$ so that equality \eqref{eq:tfci:nh} holds only when $\diamondsuit_{\alpha} = \nabla$ or $\diamondsuit_{\alpha} = \Delta$. \end{example} Let $a$, $b$, $t \in \mathbb{T}$, $c \in \mathbb{R}$. Then (\textrm{cf.} \cite[Theorem~3.7]{sfhd}), \begin{itemize} \item[(a)]$ \int_{a}^{t}\left( f(\tau)+g(\tau) \right) \diamondsuit_{\alpha} \tau = \int_{a}^{t} f(\tau) \diamondsuit_{\alpha} \tau + \int_{a}^{t} g(\tau) \diamondsuit_{\alpha} \tau$; \item[(b)] $\int_{a}^{t} c f(\tau) \diamondsuit_{\alpha} \tau = c \int_{a}^{t} f(\tau) \diamondsuit_{\alpha} \tau$; \item[(c)] $\int_{a}^{t} f(\tau) \diamondsuit_{\alpha} \tau = \int_{a}^{b} f(\tau) \diamondsuit_{\alpha} \tau + \int_{b}^{t} f(\tau) \diamondsuit_{\alpha} \tau$. \end{itemize} The next lemma provides some straightforward but useful results for what follows. \begin{lemma}\label{lem1} Assume that $f$ and $g$ are continuous functions on $[a,b]_{\mathbb{T}}$. \begin{enumerate} \item If $f(t)\geq 0$ for all $t\in[a,b]_{\mathbb{T}}$, then $\int_a^b f(t)\Diamond_\alpha t\geq 0$. \item If $f(t)\leq g(t)$ for all $t\in[a,b]_{\mathbb{T}}$, then $\int_a^b f(t)\Diamond_\alpha t\leq\int_a^b g(t)\Diamond_\alpha t$. \item If $f(t)\geq 0$ for all $t\in[a,b]_{\mathbb{T}}$, then $f(t)=0$ for all $t\in[a,b]_{\mathbb{T}}$ if and only if $\int_a^b f(t)\Diamond_\alpha t=0$. \end{enumerate} \end{lemma} \begin{proof} Let $f(t)$ and $g(t)$ be continuous functions on $[a,b]_{\mathbb{T}}$. \begin{enumerate} \item Since $f(t)\geq 0$ for all $t\in[a,b]_{\mathbb{T}}$, we know (see \cite{b1,b2}) that $\int_a^b f(t)\Delta t\geq 0$ and $\int_a^b f(t)\nabla t\geq 0$. Since $\alpha\in[0,1]$, the result follows. \item Let $h(t)=g(t)-f(t)$.
Then, $\int_a^b h(t)\Diamond_\alpha t\geq 0$ and the result follows from properties (a) and (b) above. \item If $f(t)=0$ for all $t\in[a,b]_{\mathbb{T}}$, the result is immediate. Suppose now that there exists $t_0\in[a,b]_{\mathbb{T}}$ such that $f(t_0)>0$. It is easy to see that at least one of the integrals $\int_a^b f(t)\Delta t$ or $\int_a^b f(t)\nabla t$ is strictly positive. Then, we have the contradiction $\int_a^b f(t)\Diamond_\alpha t>0$. \end{enumerate} \end{proof} \section{Main Results} \label{sec:mr} We now prove Jensen's diamond-$\alpha$ integral inequalities. \begin{theorem}[Jensen's inequality] \label{thm3} Let $\mathbb{T}$ be a time scale, $a$, $b \in \mathbb{T}$ with $a < b$, and $c$, $d \in \mathbb{R}$. If $ g \in C([a, b]_{\mathbb{T}}, (c, d))$ and $f \in C((c, d), \mathbb{R} ) $ is convex, then \begin{equation} \label{eq:JI:mr} f \left( \frac{\int_{a}^{b}g(s) \diamondsuit_{\alpha} s}{b-a}\right ) \leq \frac{\int_{a}^{b}f(g(s)) \diamondsuit_{\alpha} s}{b-a}. \end{equation} \end{theorem} \begin{remark} In the particular case $\alpha=1$, inequality \eqref{eq:JI:mr} reduces to that of Theorem~\ref{thm2}. If $\mathbb{T}=\mathbb{R}$, then Theorem~\ref{thm3} gives the classical Jensen inequality, \textrm{i.e.} Theorem~\ref{thm1}. However, if $\mathbb{T}=\mathbb{Z}$ and $f(x)=-\ln(x)$, then one gets the well-known arithmetic-mean geometric-mean inequality (\ref{eq:am:gm:i}). \end{remark} \begin{proof} Since $f$ is convex we have \begin{equation*} \begin{split} f \left( \frac{\int_{a}^{b}g(s) \diamondsuit_{\alpha} s}{b-a}\right )& = f \left(\frac{\alpha}{b-a} \int_{a}^{b} g(s) \Delta s+ \frac{1-\alpha}{b-a} \int_{a}^{b} g(s) \nabla s \right)\\ & \leq \alpha f \left(\frac{1}{b-a} \int_{a}^{b} g(s) \Delta s \right)+ (1-\alpha)f\left( \frac{1}{b-a} \int_{a}^{b} g(s) \nabla s \right). 
\end{split} \end{equation*} Using now Jensen's inequality on time scales (Theorem~\ref{thm2}) and its nabla analogue \cite{rev1:r}, we get \begin{equation*} \begin{split} f \left( \frac{\int_{a}^{b}g(s) \diamondsuit_{\alpha} s}{b-a}\right )& \leq \frac{\alpha}{b-a} \int_{a}^{b} f(g(s)) \Delta s + \frac{1-\alpha}{b-a} \int_{a}^{b} f(g(s)) \nabla s \\ & = \frac{1}{b-a} \left( \alpha \int_{a}^{b} f(g(s)) \Delta s + (1-\alpha) \int_{a}^{b} f(g(s)) \nabla s \right) \\ & = \frac{1}{b-a} \int_{a}^{b}f(g(s)) \diamondsuit_{\alpha} s. \end{split} \end{equation*} \end{proof} Now we give an extended Jensen inequality on time scales via the diamond-$\alpha$ integral. \begin{theorem}[Generalized Jensen's inequality] \label{thm4} Let $\mathbb{T}$ be a time scale, $a$, $b \in \mathbb{T}$ with $a < b$, $c$, $d \in \mathbb{R}$, $g \in C([a, b]_{\mathbb{T}}, (c, d))$ and $h\in C([a, b]_{\mathbb{T}}, \mathbb{R} )$ with $$ \int_{a}^{b} |h(s)| \diamondsuit_{\alpha} s > 0 \, . $$ If $f \in C((c, d), \mathbb{R}) $ is convex, then \begin{equation} \label{eq:GJI} f \left( \frac{\int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}\right ) \leq \frac{\int_{a}^{b} |h(s)|f(g(s)) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}. \end{equation} \end{theorem} \begin{remark} Theorem~\ref{thm4} is the same as \cite[Theorem~3.17]{rev1:r}. However, we prove Theorem~\ref{thm4} using a different approach than that proposed in \cite{rev1:r}: in \cite{rev1:r} it is stated that the result follows from the analogous nabla-inequality. As we have seen, diamond-$\alpha$ integrals have different properties than those of delta or nabla integrals (\textrm{cf.} Example~\ref{ex:2.1}). On the other hand, there is an inconsistency in \cite{rev1:r}: a very simple example showing this fact is given below in Remark~\ref{rem:ex:er}. \end{remark} \begin{remark} In the particular case $h=1$, Theorem~\ref{thm4} reduces to Theorem~\ref{thm3}.
\end{remark} \begin{remark} If $f$ is strictly convex, the inequality sign ``$\leq$'' in (\ref{eq:GJI}) can be replaced by ``$<$''. A similar result to Theorem~\ref{thm4} holds if one changes the condition ``$f$ is convex'' to ``$f$ is concave'', by replacing the inequality sign ``$\leq$'' in (\ref{eq:GJI}) by ``$\geq$''. \end{remark} \begin{proof} Since $f$ is convex, it follows, for example from \cite[Exercise 3.42C]{fol}, that for $t \in (c, d)$ there exists $a_{t} \in \mathbb{R} $ such that \begin{equation}\label{eq1} a_{t}(x-t) \leq f(x)-f(t) \mbox{ for all } x \in (c, d). \end{equation} Setting $$ t= \frac{\int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s} \, , $$ then using \eqref{eq1} and item 2 of Lemma~\ref{lem1}, we get \begin{equation*} \begin{split} \int_{a}^{b} & |h(s)|f(g(s)) \diamondsuit_{\alpha} s - \left ( \int_{a}^{b} |h(s)| \diamondsuit_{\alpha} s \right) f \left( \frac{\int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}\right ) \\ = & \int_{a}^{b} |h(s)|f(g(s)) \diamondsuit_{\alpha} s - \left ( \int_{a}^{b} |h(s)| \diamondsuit_{\alpha} s \right) f(t) = \int_{a}^{b} |h(s)| \left (f(g(s))-f(t) \right) \diamondsuit_{\alpha} s \\ \geq & a_{t} \int_{a}^{b} |h(s)| \left(g(s)-t \right) \diamondsuit_{\alpha} s = a_{t} \left( \int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s - t \int_{a}^{b} |h(s)| \diamondsuit_{\alpha} s \right)\\ = & a_{t} \left( \int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s- \int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s \right) = 0 \, . \end{split} \end{equation*} This leads to the desired inequality. \end{proof} \begin{remark} The proof of Theorem~\ref{thm4} follows closely the proof of the classical Jensen inequality (see \textrm{e.g.} \cite[Problem 3.42]{fol}) and the proof of Jensen's inequality on time scales \cite{abp}. \end{remark} We have the following corollaries.
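Theorem~\ref{thm4} can also be spot-checked numerically. On $\mathbb{T}=\mathbb{Z}$ the diamond-$\alpha$ integral is a convex combination of a left-hand sum (the $\Delta$-integral) and a right-hand sum (the $\nabla$-integral), so both sides of \eqref{eq:GJI} are finite sums. The following Python sketch (our own helper names, not from any of the cited works) returns the gap between the two sides, which is nonnegative for convex $f$:

```python
def diamond_integral(f, a, b, alpha):
    # On T = Z: the delta integral is the left sum over {a, ..., b-1}
    # and the nabla integral is the right sum over {a+1, ..., b}.
    left = sum(f(t) for t in range(a, b))
    right = sum(f(t) for t in range(a + 1, b + 1))
    return alpha * left + (1 - alpha) * right

def jensen_gap(f, g, h, a, b, alpha):
    # Right-hand side minus left-hand side of the generalized Jensen
    # inequality; it should be nonnegative whenever f is convex.
    w = diamond_integral(lambda s: abs(h(s)), a, b, alpha)
    lhs = f(diamond_integral(lambda s: abs(h(s)) * g(s), a, b, alpha) / w)
    rhs = diamond_integral(lambda s: abs(h(s)) * f(g(s)), a, b, alpha) / w
    return rhs - lhs
```

Running this with a convex $f$ and an $h$ of mixed signs, for several values of $\alpha\in[0,1]$, always produces a nonnegative gap, as the theorem asserts.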
\begin{corollary}$(\mathbb{T}=\mathbb{R})$ Let $g, h: [a, b]\rightarrow \mathbb{R}$ be continuous functions with $g([a, b]) \subseteq (c, d)$ and $\int_{a}^{b}|h(x)| dx >0$. If $f \in C((c, d), \mathbb{R})$ is convex, then $$ f \left( \frac{\int_{a}^{b} |h(x)|g(x) dx}{\int_{a}^{b} |h(x)| dx}\right ) \leq \frac{\int_{a}^{b} |h(x)|f(g(x)) dx}{\int_{a}^{b} |h(x)| dx}. $$ \end{corollary} \begin{corollary}$(\mathbb{T}=\mathbb{Z})$ \label{cor:E:SMC} Given a convex function $f$, we have for any $x_{1}, \ldots ,x_{n} \in \mathbb{R}$ and $c_{1}, \ldots ,c_{n} \in \mathbb{R}$ with $\sum_{k=1}^{n}|c_{k}| >0$: \begin{equation} \label{eq:des:pr} f \left (\frac{\sum_{k=1}^{n}|c_{k}|x_{k}}{\sum_{k=1}^{n}|c_{k}|}\right) \leq \frac{\sum_{k=1}^{n}|c_{k}|f(x_{k})}{\sum_{k=1}^{n}|c_{k}|}. \end{equation} \end{corollary} \begin{remark} \label{rem:ex:er} Corollary~\ref{cor:E:SMC} coincides with \cite[Corollary~2.4]{fcw} and \cite[Corollary~3.12]{rev1:r} if one substitutes all the $|c_{k}|$'s in Corollary~\ref{cor:E:SMC} by $c_k$ and restricts to integer values of $x_i$ and $c_i$, $i = 1,\ldots,n$. Let $\mathbb{T}=\mathbb{Z}$, $a = 1$ and $b = 3$, so that $[a,b]_{\mathbb{T}}$ denotes the set $\{1,2,3\}$ and $n=3$. For the data $f(x)=x^2$, $c_1=1$, $c_2=5$, $c_3=-3$, $x_1=1$, $x_2=1$, and $x_3=2$ one has $A=\sum_{k=1}^3 c_k = 3 > 0$ and $B=\sum_{k=1}^3 c_k x_k = 0$. Thus, $D=f(B/A)=f(0)=0$. On the other hand, $f(x_1)=1$, $f(x_2)=1$, and $f(x_3)=4$. Therefore, $C=\sum_{k=1}^3 c_k f(x_k)= -6$. We have $E=C/A=-2$ and $D>E$, \textrm{i.e.} $f \left (\frac{\sum_{k=1}^{n} c_{k} x_{k}}{\sum_{k=1}^{n} c_{k}}\right) > \frac{\sum_{k=1}^{n} c_{k} f(x_{k})}{\sum_{k=1}^{n} c_{k}}$. Inequality \eqref{eq:des:pr}, in contrast, gives the truism $\frac{16}{9} \le 2$. \end{remark} \section{Examples and Applications} \label{sec:app} \begin{itemize} \item[(i)] Let $g(t) > 0$ on $[a, b]_{\mathbb{T}}$ and $f(t)= t^{\beta}$ on $(0, +\infty)$.
One can see that $f$ is convex on $(0, +\infty)$ for $\beta < 0$ or $\beta >1$, and $f$ is concave on $(0, +\infty)$ for $\beta \in (0, 1)$. Then, $$ \left( \frac{\int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}\right )^{\beta} \leq \frac{\int_{a}^{b} |h(s)|g^{\beta}(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}, \mbox{ if } \beta < 0 \mbox{ or } \beta >1; $$ $$ \left( \frac{\int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}\right )^{\beta} \geq \frac{\int_{a}^{b} |h(s)|g^{\beta}(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}, \mbox{ if } \beta \in (0, 1). $$ \item[(ii)] Let $g(t) > 0$ on $[a, b]_{\mathbb{T}}$ and $f(t)= \ln(t)$ on $(0, +\infty)$. One can also see that $f$ is concave on $(0, +\infty)$. It follows that $$ \ln \left( \frac{\int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}\right ) \geq \frac{\int_{a}^{b} |h(s)|\ln (g(s)) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}. $$ \item[(iii)]Let $h=1$, then $$ \ln \left(\frac{\int_{a}^{b} g(s) \diamondsuit_{\alpha} s}{b - a}\right) \geq \frac{\int_{a}^{b} \ln (g(s)) \diamondsuit_{\alpha} s}{b - a} . $$ \item[(iv)] Let $\mathbb{T}=\mathbb{R}$, $g: [0, 1]\rightarrow (0, \infty)$ and $h(t)=1$. Applying Theorem~\ref{thm4} with the convex and continuous function $f=-\ln$ on $(0, \infty)$, $a=0$ and $b=1$, we get: $$ \ln \int_{0}^{1} g(s)ds \geq \int_{0}^{1}\ln( g(s))ds. $$ Then, $$ \int_{0}^{1} g(s)ds \geq \exp \left(\int_{0}^{1}\ln( g(s))ds \right). $$ \item[(v)] Let $\mathbb{T}=\mathbb{Z}$ and $n\in\mathbb{N}$. Fix $a=1$, $b=n+1$ and consider a function $g:\{1,\ldots,n+1\}\rightarrow(0,\infty)$. 
Obviously, $f=-\ln$ is convex and continuous on $(0,\infty)$, so we may apply Jensen's inequality to obtain \end{itemize} \begin{equation*} \begin{split} \ln\Biggl[ & \frac{1}{n}\left(\alpha\sum_{t=1}^n g(t)+(1-\alpha)\sum_{t=2}^{n+1}g(t)\right)\Biggr] = \ln\left[\frac{1}{n}\int_1^{n+1}g(t)\Diamond_\alpha t\right]\\ &\geq\frac{1}{n}\int_1^{n+1}\ln(g(t))\Diamond_\alpha t\\ &=\frac{1}{n}\left[\alpha\sum_{t=1}^n \ln(g(t))+(1-\alpha)\sum_{t=2}^{n+1}\ln(g(t))\right]\\ &=\ln\left\{\prod_{t=1}^n g(t)\right\}^{\frac{\alpha}{n}}+\ln\left\{\prod_{t=2}^{n+1} g(t)\right\}^{\frac{1-\alpha}{n}} \, , \end{split} \end{equation*} and hence $$\frac{1}{n}\left(\alpha\sum_{t=1}^n g(t)+(1-\alpha)\sum_{t=2}^{n+1}g(t)\right)\geq\left\{\prod_{t=1}^n g(t)\right\}^{\frac{\alpha}{n}}\left\{\prod_{t=2}^{n+1} g(t)\right\}^{\frac{1-\alpha}{n}}.$$ When $\alpha=1$, we obtain the well-known arithmetic-mean geometric-mean inequality: \begin{equation} \label{eq:am:gm:i} \frac{1}{n}\sum_{t=1}^n g(t)\geq\left\{\prod_{t=1}^n g(t)\right\}^{\frac{1}{n}}. \end{equation} When $\alpha=0$, we also have $$\frac{1}{n}\sum_{t=2}^{n+1} g(t)\geq\left\{\prod_{t=2}^{n+1} g(t)\right\}^{\frac{1}{n}}.$$ \begin{itemize} \item[(vi)] Let $\mathbb{T}= 2^{\mathbb{N}_{0}}$ and $N \in \mathbb{N}$. We can apply Theorem~\ref{thm4} with $a=1, b=2^{N}$ and $g: \{ 2^{k}: 0 \leq k \leq N \} \rightarrow (0, \infty)$. 
Then, we get: \end{itemize} \begin{equation*} \begin{split} \ln \left \{ \frac{\int_{1}^{2^{N}}g(t)\diamondsuit_{\alpha}t}{2^{N}-1} \right \}&= \ln \left \{ \alpha \frac{\int_{1}^{2^{N}}g(t)\Delta t}{2^{N}-1}+(1-\alpha) \frac{\int_{1}^{2^{N}}g(t)\nabla t}{2^{N}-1} \right \}\\
&= \ln \left \{ \frac{\alpha \sum_{k=0}^{N-1}2^{k}g(2^{k})}{2^{N}-1} + \frac{(1-\alpha) \sum_{k=1}^{N}2^{k}g(2^{k})}{2^{N}-1} \right \}\\
& \geq \frac{\int_{1}^{2^{N}} \ln (g(t))\diamondsuit_{\alpha}t}{2^{N}-1} \end{split} \end{equation*} \begin{equation*} \begin{split} &= \alpha \frac{\int_{1}^{2^{N}} \ln (g(t))\Delta t}{2^{N}-1}+ (1 - \alpha) \frac{\int_{1}^{2^{N}} \ln (g(t))\nabla t}{2^{N}-1}\\
&= \frac{\alpha \sum_{k=0}^{N-1}2^{k}\ln(g(2^{k})) }{2^{N}-1} + \frac{(1-\alpha) \sum_{k=1}^{N}2^{k}\ln(g(2^{k})) }{2^{N}-1}\\
&= \frac{ \sum_{k=0}^{N-1}\ln\left((g(2^{k}))^{\alpha 2^{k}}\right) }{2^{N}-1} + \frac{ \sum_{k=1}^{N}\ln\left((g(2^{k}))^{(1-\alpha)2^{k}}\right) }{2^{N}-1}\\
&=\frac{ \ln \prod_{k=0}^{N-1}(g(2^{k}))^{\alpha 2^{k}} }{2^{N}-1} + \frac{ \ln \prod_{k=1}^{N}(g(2^{k}))^{(1-\alpha)2^{k}} }{2^{N}-1}\\
&= \ln \left \{\prod_{k=0}^{N-1}(g(2^{k}))^{\alpha 2^{k}} \right \}^{\frac{1}{2^{N}-1}} + \ln \left \{\prod_{k=1}^{N}(g(2^{k}))^{(1-\alpha) 2^{k}} \right \}^{\frac{1}{2^{N}-1}}\\
&= \ln \left ( \left \{\prod_{k=0}^{N-1}(g(2^{k}))^{\alpha 2^{k}} \right \}^{\frac{1}{2^{N}-1}} \left \{\prod_{k=1}^{N}(g(2^{k}))^{(1-\alpha) 2^{k}} \right \}^{\frac{1}{2^{N}-1}} \right) \, . \end{split} \end{equation*} We conclude that \begin{multline*} \ln \left \{ \frac{\alpha \sum_{k=0}^{N-1}2^{k}g(2^{k})+(1-\alpha) \sum_{k=1}^{N}2^{k}g(2^{k})}{2^{N}-1} \right \}\\ \geq \ln \left ( \left \{\prod_{k=0}^{N-1}(g(2^{k}))^{\alpha 2^{k}} \right \}^{\frac{1}{2^{N}-1}} \left \{\prod_{k=1}^{N}(g(2^{k}))^{(1-\alpha) 2^{k}} \right \}^{\frac{1}{2^{N}-1}} \right). \end{multline*} On the other hand, $$ \alpha \sum_{k=0}^{N-1}2^{k}g(2^{k})+(1-\alpha) \sum_{k=1}^{N}2^{k}g(2^{k})= \sum_{k=1}^{N-1}2^{k}g(2^{k})+\alpha g(1)+(1-\alpha) 2^{N}g(2^{N}).
$$ It follows that \begin{multline*} \frac{\sum_{k=1}^{N-1}2^{k}g(2^{k})+\alpha g(1)+(1-\alpha) 2^{N}g(2^{N})}{2^{N}-1} \\ \geq \left\{\prod_{k=0}^{N-1}(g(2^{k}))^{\alpha 2^{k}} \right \}^{\frac{1}{2^{N}-1}} \left \{\prod_{k=1}^{N}(g(2^{k}))^{(1-\alpha) 2^{k}} \right \}^{\frac{1}{2^{N}-1}}. \end{multline*} In the particular case when $\alpha =1$ we have $$ \frac{\sum_{k=0}^{N-1}2^{k}g(2^{k})}{2^{N}-1} \geq \left \{\prod_{k=0}^{N-1}(g(2^{k}))^{ 2^{k}} \right \}^{\frac{1}{2^{N}-1}}, $$ and when $\alpha =0$ we get the inequality $$ \frac{\sum_{k=1}^{N}2^{k}g(2^{k})}{2^{N}-1} \geq \left \{\prod_{k=1}^{N}(g(2^{k}))^{ 2^{k}} \right \}^{\frac{1}{2^{N}-1}}\, . $$ \section{Related Diamond-$\alpha$ Integral Inequalities} \label{sec:app} The usual proof of H\"{o}lder's inequality uses the basic Young inequality $x^{\frac{1}{p}}y^{\frac{1}{q}} \leq \frac{x}{p}+ \frac{y}{q}$ for nonnegative $x$ and $y$. Here we present a proof based on the application of Jensen's inequality (Theorem~\ref{thm4}). \begin{theorem}[H\"{o}lder's inequality] \label{app:th:hi} Let $\mathbb{T}$ be a time scale, $a$, $b \in \mathbb{T}$ with $a < b$, and $f$, $g$, $h \in C([a, b]_{\mathbb{T}}, [0, \infty))$ with $\int_{a}^{b}h(x)g^{q}(x)\diamondsuit_{\alpha} x >0$, where $q$ is the H\"{o}lder conjugate number of $p$, \textrm{i.e.} $\frac{1}{p}+\frac{1}{q}=1$ with $p>1$. Then, we have: \begin{equation} \label{app:eq:hi} \int_{a}^{b}h(x)f(x)g(x)\diamondsuit_{\alpha} x \leq \left(\int_{a}^{b}h(x)f^{p}(x)\diamondsuit_{\alpha} x\right)^{\frac{1}{p}} \left(\int_{a}^{b}h(x)g^{q}(x)\diamondsuit_{\alpha} x\right)^{\frac{1}{q}} \, .
\end{equation} \end{theorem} \begin{proof} Choosing $f(x)=x^{p}$ in Theorem~\ref{thm4}, which for $p>1$ is obviously a convex function on $[0, \infty)$, we have \begin{equation} \label{eq:R} \left( \frac{\int_{a}^{b} |h(s)|g(s) \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}\right )^p \leq \frac{\int_{a}^{b} |h(s)|(g(s))^p \diamondsuit_{\alpha} s}{\int_{a}^{b} |h(s)| \diamondsuit_{\alpha}s}. \end{equation} Inequality \eqref{app:eq:hi} is trivially true in the case when $g$ is identically zero. We consider two cases: (i) $g(x) > 0$ for all $x \in [a, b]_{\mathbb{T}}$; (ii) there exists at least one $x \in [a, b]_{\mathbb{T}}$ such that $g(x) = 0$. We begin with situation (i). Replacing $g$ by $fg^{\frac{-q}{p}}$ and $|h(x)|$ by $hg^{q}$ in inequality (\ref{eq:R}), we get: $$ \left( \frac{\int_{a}^{b}h(x)g^{q}(x)f(x)g^{\frac{-q}{p}}(x)\diamondsuit_{\alpha} x}{\int_{a}^{b}h(x)g^{q}(x)\diamondsuit_{\alpha}x} \right)^{p} \leq \frac{\int_{a}^{b}h(x)g^{q}(x)(f(x)g^{\frac{-q}{p}}(x))^{p}\diamondsuit_{\alpha} x}{\int_{a}^{b}h(x)g^{q}(x)\diamondsuit_{\alpha}x}. $$ Using the fact that $\frac{1}{p}+\frac{1}{q}=1$, we obtain that \begin{equation} \label{eq:parti} \int_{a}^{b}h(x)f(x)g(x)\diamondsuit_{\alpha} x \leq \left(\int_{a}^{b}h(x)f^{p}(x)\diamondsuit_{\alpha} x\right)^{\frac{1}{p}} \left(\int_{a}^{b}h(x)g^{q}(x)\diamondsuit_{\alpha} x\right)^{\frac{1}{q}} \, . \end{equation} We now consider situation (ii). Let $G= \left\{x \in [a, b]_{\mathbb{T}} \, | \, g(x) =0 \right\}$. Then, \begin{gather*} \int_{a}^{b}h(x) f(x) g(x) \diamondsuit_{\alpha} x = \int_{[a, b]_{\mathbb{T}}-G} h(x) f(x) g(x) \diamondsuit_{\alpha} x + \int_{G} h(x) f(x) g(x) \diamondsuit_{\alpha} x\\ = \int_{[a, b]_{\mathbb{T}}-G} h(x) f(x) g(x) \diamondsuit_{\alpha} x \end{gather*} because $\int_{G} h(x) f(x) g(x) \diamondsuit_{\alpha} x =0$. 
For the set $[a, b]_{\mathbb{T}}-G$ we are in case (i), \textrm{i.e.} $g(x) > 0$, and it follows from \eqref{eq:parti} that \begin{equation*} \begin{split} \int_{a}^{b}h(x) f(x) g(x) \diamondsuit_{\alpha} x &= \int_{[a, b]_{\mathbb{T}}-G}h(x) f(x) g(x) \diamondsuit_{\alpha} x \\ &\leq \left(\int_{[a, b]_{\mathbb{T}}-G} h(x) f^{p}(x) \diamondsuit_{\alpha} x \right )^{\frac{1}{p}} \quad \left (\int_{[a, b]_{\mathbb{T}}-G} h(x) g^{q}(x) \diamondsuit_{\alpha} x \right )^{\frac{1}{q}}\\ &\leq \left (\int_a^b h(x) f^{p}(x) \diamondsuit_{\alpha} x \right )^{\frac{1}{p}} \quad \left (\int_a^b h(x) g^{q}(x) \diamondsuit_{\alpha} x \right )^{\frac{1}{q}} \, . \end{split} \end{equation*} \end{proof} \begin{remark} In the particular case $h=1$, Theorem~\ref{app:th:hi} gives the diamond-$\alpha$ version of the classical H\"{o}lder inequality: $$ \int_{a}^{b}|f(x)g(x)|\diamondsuit_{\alpha} x \leq \left(\int_{a}^{b}|f|^{p}(x)\diamondsuit_{\alpha} x\right)^{\frac{1}{p}} \left(\int_{a}^{b}|g|^{q}(x)\diamondsuit_{\alpha} x\right)^{\frac{1}{q}}, $$ where $p>1$ and $q=\frac{p}{p-1}$. \end{remark} \begin{remark} In the special case $p=q=2$, (\ref{app:eq:hi}) reduces to the following diamond-$\alpha$ Cauchy-Schwarz integral inequality on time scales: $$ \int_{a}^{b}|f(x)g(x)|\diamondsuit_{\alpha} x \leq \sqrt{ \left(\int_{a}^{b}f^{2}(x)\diamondsuit_{\alpha} x\right) \left(\int_{a}^{b}g^{2}(x)\diamondsuit_{\alpha} x\right)} \, . $$ \end{remark} We are now in a position to prove a Minkowski inequality using our H\"{o}lder inequality (\ref{app:eq:hi}). \begin{theorem}[Minkowski's inequality] Let $\mathbb{T}$ be a time scale, $a$, $b \in \mathbb{T}$ with $a < b$, and $p>1$. For continuous functions $f, g: [a, b]_{\mathbb{T}} \rightarrow \mathbb{R}$ we have $$ \left (\int_{a}^{b}|(f+g)(x)|^{p}\diamondsuit_{\alpha} x \right)^{\frac{1}{p}} \leq \left(\int_{a}^{b}|f(x)|^{p}\diamondsuit_{\alpha} x\right)^{\frac{1}{p}} + \left(\int_{a}^{b}|g(x)|^{p}\diamondsuit_{\alpha} x\right)^{\frac{1}{p}}.
$$ \end{theorem} \begin{proof} We have, by the triangle inequality, that \begin{multline} \label{mink1} \int_a^b |f(x) + g(x)|^p\diamondsuit_{\alpha} x =\int_a^b |f(x)+g(x)|^{p-1}|f(x)+g(x)|\diamondsuit_\alpha x\\ \leq \int_a^b|f(x)||f(x)+g(x)|^{p-1}\diamondsuit_\alpha x+\int_a^b|g(x)||f(x)+g(x)|^{p-1}\diamondsuit_\alpha x. \end{multline} Applying now H\"{o}lder's inequality with $q=p/(p-1)$ to \eqref{mink1}, we obtain: \begin{multline*} \int_a^b |f(x)+g(x)|^p\diamondsuit_{\alpha} x \leq\left[\int_a^b|f(x)|^p\diamondsuit_\alpha x\right]^{\frac{1}{p}}\left[\int_a^b |f(x)+g(x)|^{(p-1)q}\diamondsuit_\alpha x\right]^{\frac{1}{q}}\\ +\left[\int_a^b|g(x)|^p\diamondsuit_\alpha x\right]^{\frac{1}{p}}\left[\int_a^b |f(x)+g(x)|^{(p-1)q}\diamondsuit_\alpha x\right]^{\frac{1}{q}}\\ =\left\{\left[\int_a^b|f(x)|^p\diamondsuit_\alpha x\right]^{\frac{1}{p}}+\left[\int_a^b|g(x)|^p\diamondsuit_\alpha x\right]^{\frac{1}{p}}\right\}\left[\int_a^b|f(x)+g(x)|^p\diamondsuit_\alpha x\right]^{\frac{1}{q}}. \end{multline*} Dividing both sides of the last inequality by $$\left[\int_a^b|f(x)+g(x)|^p\diamondsuit_\alpha x\right]^{\frac{1}{q}},$$ which we may assume to be nonzero (otherwise the claimed inequality holds trivially), we get the desired conclusion. \end{proof} As another application of Theorem~\ref{thm4}, we have: \begin{theorem} \label{thm5} Let $\mathbb{T}$ be a time scale, $a$, $b \in \mathbb{T}$ with $a < b$, and $f$, $g$, $h \in C([a, b]_{\mathbb{T}}, [0, \infty))$. \begin{itemize} \item[(i)] If $p>1$, then $$ \left\{\left(\int_{a}^{b} hf \diamondsuit_{\alpha}x\right)^{p} +\left(\int_{a}^{b} hg \diamondsuit_{\alpha}x\right)^{p} \right\}^{\frac{1}{p}} \leq \int_{a}^{b} h(f^{p}+g^{p})^{\frac{1}{p}}\diamondsuit_{\alpha}x \, . $$ \item[(ii)] If $\ 0 < p < 1$, then $$ \left\{\left(\int_{a}^{b} hf \diamondsuit_{\alpha}x\right)^{p} +\left(\int_{a}^{b} hg \diamondsuit_{\alpha}x\right)^{p} \right\}^{\frac{1}{p}} \geq \int_{a}^{b} h(f^{p}+g^{p})^{\frac{1}{p}}\diamondsuit_{\alpha}x \, . $$ \end{itemize} \end{theorem} \begin{proof} We prove only (i).
The proof of (ii) is similar. Inequality (i) is trivially true when $f$ is zero: both the left and right hand sides reduce to $\int_a^b h g \diamondsuit_{\alpha}x$. Otherwise, applying Theorem~\ref{thm4} with $f(x)=(1+x^{p})^{\frac{1}{p}}$, which is clearly convex on $(0, \infty)$, we obtain $$ \left(1+\frac{(\int_{a}^{b}hf\diamondsuit_{\alpha}x)^{p}} {(\int_{a}^{b}h\diamondsuit_{\alpha}x)^{p}}\right)^{\frac{1}{p}} \leq \frac{\int_{a}^{b}h(1+f^{p})^{\frac{1}{p}}\diamondsuit_{\alpha}x} {\int_{a}^{b}h\diamondsuit_{\alpha}x}. $$ In other words, $$ \left[ \left(\int_{a}^{b}h\diamondsuit_{\alpha}x\right)^{p}+ \left(\int_{a}^{b}hf\diamondsuit_{\alpha}x\right)^{p} \right]^{\frac{1}{p}} \leq \int_{a}^{b}h(1+f^{p})^{\frac{1}{p}}\diamondsuit_{\alpha}x. $$ Changing $h$ and $f$ by $\frac{hf}{\int_{a}^{b}hf \diamondsuit_{\alpha}x}$ and $\frac{g}{f}$ in the last inequality, respectively, we obtain directly the inequality (i) of Theorem~\ref{thm5}. \end{proof} \section*{Acknowledgements} The authors were supported by the \emph{Portuguese Foundation for Science and Technology} (FCT), through the \emph{Centre for Research on Optimization and Control} (CEOC) of the University of Aveiro, cofinanced by the European Community Fund FEDER/POCI 2010 (all the three authors); the postdoc fellowship SFRH/BPD/20934/2004 (Sidi Ammi); the PhD fellowship SFRH/BD/39816/2007 (Ferreira); and the research project PTDC/MAT/72840/2006 (Torres). The authors are grateful to three referees, and the editor assigned to handle the review process, for several helpful comments and a careful reading of the manuscript.
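Although not part of the original text, the discrete consequences derived above lend themselves to a quick numerical sanity check. The following Python sketch (function names are ours) verifies, on random data with $\mathbb{T}=\mathbb{Z}$, the inequality of example (v) and the discrete case of H\"{o}lder's inequality with $h=1$:

```python
# Numerical sanity check (ours, not from the paper) of two discrete
# consequences for the time scale T = Z:
#  - example (v):  (1/n)(alpha*sum_{t=1}^n g(t) + (1-alpha)*sum_{t=2}^{n+1} g(t))
#      >= (prod_{t=1}^n g(t))^(alpha/n) * (prod_{t=2}^{n+1} g(t))^((1-alpha)/n)
#  - the discrete Holder inequality with h = 1.
import math
import random

def diamond_jensen_gap(g, alpha):
    """Left side minus right side of the example (v) inequality;
    g is a list of n+1 positive values g(1), ..., g(n+1)."""
    n = len(g) - 1
    lhs = (alpha * sum(g[:n]) + (1 - alpha) * sum(g[1:])) / n
    rhs = (math.prod(g[:n]) ** (alpha / n)) * (math.prod(g[1:]) ** ((1 - alpha) / n))
    return lhs - rhs

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 8)
    g = [random.uniform(0.1, 10.0) for _ in range(n + 1)]
    alpha = random.random()
    # The gap is nonnegative up to floating-point rounding.
    assert diamond_jensen_gap(g, alpha) >= -1e-12

# Discrete Holder (T = Z, h = 1): sum f*g <= (sum f^p)^(1/p) * (sum g^q)^(1/q).
for _ in range(1000):
    n = random.randint(1, 8)
    f = [random.uniform(0.0, 5.0) for _ in range(n)]
    g = [random.uniform(0.0, 5.0) for _ in range(n)]
    p = random.uniform(1.1, 5.0)
    q = p / (p - 1)
    lhs = sum(a * b for a, b in zip(f, g))
    rhs = sum(a ** p for a in f) ** (1 / p) * sum(b ** q for b in g) ** (1 / q)
    assert lhs <= rhs + 1e-9

print("all checks passed")
```

Every sampled instance satisfies both inequalities, as Jensen's inequality predicts.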
https://arxiv.org/abs/2105.02722
Mutual Visibility in Graphs
Let $G=(V,E)$ be a graph and $P\subseteq V$ a set of points. Two points are mutually visible if there is a shortest path between them without further points. $P$ is a mutual-visibility set if its points are pairwise mutually visible. The mutual-visibility number of $G$ is the size of any largest mutual-visibility set. In this paper we start the study about this new invariant and the mutual-visibility sets in undirected graphs. We introduce the mutual-visibility problem which asks to find a mutual-visibility set with a size larger than a given number. We show that this problem is NP-complete, whereas, to check whether a given set of points is a mutual-visibility set is solvable in polynomial time. Then we study mutual-visibility sets and mutual-visibility numbers on special classes of graphs, such as block graphs, trees, grids, tori, complete bipartite graphs, cographs. We also provide some relations of the mutual-visibility number of a graph with other invariants.
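Before the formal treatment, the definitions above can be made concrete with a small brute-force sketch in Python (ours, not from the paper): two points are mutually visible exactly when their distance in $G$ equals their distance after the remaining points are removed, and on small graphs the mutual-visibility number $\mu(G)$ can then be found by exhaustive search.

```python
# Brute-force sketch (hypothetical helper names) for the mutual-visibility
# number of a small graph given as an adjacency dict.
from itertools import combinations
from collections import deque

def dist(adj, src, dst, blocked=frozenset()):
    """BFS distance from src to dst avoiding 'blocked' vertices
    (dst itself is never treated as blocked). Returns None if unreachable."""
    if src == dst:
        return 0
    seen = {src}
    q = deque([(src, 0)])
    while q:
        u, d = q.popleft()
        for w in adj[u]:
            if w == dst:
                return d + 1
            if w not in seen and w not in blocked:
                seen.add(w)
                q.append((w, d + 1))
    return None

def is_mv_set(adj, P):
    """P is a mutual-visibility set iff every pair of its points has the
    same distance with and without the other points of P as obstacles."""
    P = set(P)
    for u, v in combinations(P, 2):
        d_full = dist(adj, u, v)
        d_free = dist(adj, u, v, blocked=P - {u, v})
        if d_full is None or d_free != d_full:
            return False
    return True

def mu(adj):
    """Largest size of a mutual-visibility set (exhaustive search)."""
    V = list(adj)
    for k in range(len(V), 0, -1):
        if any(is_mv_set(adj, S) for S in combinations(V, k)):
            return k
    return 0

path5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(mu(path5), mu(cycle5))
```

For the path $P_5$ and the cycle $C_5$ this prints 2 and 3, the values the paper establishes for paths and cycles (Lemma~\ref{lem:PnCn}).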
\section{Computational complexity}\label{sec:complexity} To study the computational complexity of finding a maximum mutual-visibility set\xspace in a graph, we introduce the following decision problem. \begin{definition} {\sc Mutual-Visibility}\xspace problem: \\ {\sc Instance}: A graph $G=(V,E)$, a positive integer $K\leq |V|$. \\ {\sc Question}: Is there a mutual-visibility set\xspace $P$ of $G$ such that $|P|\geq K$? \end{definition} The problem is hard to solve, as shown by the next theorem. \begin{figure}[t] \graphicspath{{fig/}} \centering \def\columnwidth{\columnwidth} \large\scalebox{0.8}{\input{fig/NP.pdf_tex}} \caption{The graph used in Theorem~\ref{theo:NP}. Red vertices are points. The most visible vertices and edges represent the main part of the graph. The rest is added to ensure the mutual visibility among points. Top left: The true-setting gadget used to represent a variable $x_i$, with two maximum {mutual-visibility set\xspace}s representing the two possible truth assignments: $x_i$ is false if and only if $u_i$ is a point.} \label{fig:NP} \end{figure} \begin{theorem}\label{theo:NP} {\sc Mutual-Visibility}\xspace is NP-complete. \end{theorem} \begin{proof} Given a set of points $P\subseteq V$ of $G$, we can test in polynomial time whether $P$ is a mutual-visibility set\xspace or not (see also Algorithm {\sc MV}\xspace). Consequently, the problem is in NP. We will now prove that the {\sc 3SAT}\xspace problem, shown to be NP-complete in~\cite{Karp72}, polynomially reduces to {\sc Mutual-Visibility}\xspace. \begin{quote} A {\sc 3SAT}\xspace instance $\Phi$ is defined as a set $X=\{x_1,x_2,\ldots,x_p\}$ of $p$ boolean variables and a set $C$ of $q$ clauses, each defined as a set of three literals: every variable $x_i$ corresponds to two literals $x_i$ (the positive form) and $\bar {x_i}$ (the negative form).
To simplify notation, we will denote by $\{\ell_1, \ell_2, \ell_3\}$ the clause with literals $\ell_i, i=1,2,3$, without distinction between the orders in which they are listed. A truth assignment assigns a Boolean value ($True$ or $False$) to each variable, corresponding to a truth assignment of opposite values for the two literals $x_i$ and $\bar {x_i}$: $\bar {x_i}$ is $True$ if and only if $x_i$ is $False$. For a literal $\ell\in\{x_i, \bar {x_i}\}$ we denote by $\bar \ell$ its negation: $\bar \ell=\bar {x_i}$ if $\ell= x_i$ and $\bar \ell= x_i$ if $\ell = \bar {x_i}$. A clause is satisfied if at least one of its literals is satisfied. The {\sc 3SAT}\xspace problem asks whether there is a truth assignment satisfying all clauses. \end{quote} In what follows, we assume there are at least three clauses such that their (pairwise) intersection is empty. Any instance $\Phi$ that does not satisfy this constraint can be transformed into an instance $\Phi'$ with such three clauses by adding five new variables $a,b,c,d,e$ and the required three clauses $\{a, \bar a, b\}$, $\{\bar b, c, \bar c\}$, $\{d,\bar d, e\}$ that are always satisfied for each truth assignment. Then the {\sc 3SAT}\xspace instance $\Phi$ has a Yes answer if and only if $\Phi'$ has a Yes answer. We transform {\sc 3SAT}\xspace to {\sc Mutual-Visibility}\xspace. Let $X = \{x_1,x_2,\ldots,x_p\}$ and $C = \{c_1,c_2,\ldots,c_q\}$ be any instance of {\sc 3SAT}\xspace. We must construct a graph $G = (V,E)$ and a positive integer $K \leq |V|$ such that $G$ has a mutual-visibility set\xspace of size $K$ or more if and only if $C$ is satisfiable. For each variable $x_i\in X$, there is a true-setting convex subgraph $T_i=(V_i,E_i)$ of $G$, with $V_i=\{u_i,\bar {u_i}, s_i, t_i\}$ and $E_i=\{u_i\bar {u_i},\bar{u_i}s_i,\bar{u_i}t_i, s_{i}t_i\}$. See the top left part of Figure~\ref{fig:NP} for a drawing of $T_i$ and the two possible maximum {mutual-visibility set\xspace}s.
Note that each of the two maximum {mutual-visibility set\xspace}s of $T_i$ contains either $u_i$ or $\bar {u_i}$. For each clause $c_j \in C$, there is a vertex $v_j$ and, for each literal $x_i$ (or $\bar{x_i}$) in $c_j$, there is an edge $v_ju_i$ (or an edge $v_j \bar{u_i}$, respectively). Moreover, there is a vertex $w$ and edges ${v_j}w$ for each $j=1,2,\ldots, q$. There are four more vertices in $V$, that is $y,y',z,z'$. For each $i\in\{1,2,\ldots, p\}$ there are edges ${u_i}y$, $\bar{u_i}y$, ${s_i}z$, ${t_i}z$, ${s_i}w$, ${t_i}w$. Finally, $E$ contains edges $yz$, $yy'$ and $zz'$. A representation of $G$ is given in Figure~\ref{fig:NP}. The construction of our instance of {\sc Mutual-Visibility}\xspace is completed by setting $K= 3p+q+2$. It is easy to see how the construction can be accomplished in polynomial time. All that remains to be shown is that $C$ is satisfiable if and only if $G$ has a mutual-visibility set\xspace of size $K$ or more. First, suppose that $t: X\rightarrow \{True,False\}$ is a satisfying truth assignment for $C$. The corresponding set of points $P$ includes vertices $u_i$ if $t(x_i)$ is $False$, and $\bar{u_i}$ otherwise, for each $i\in\{1,2,\ldots, p\}$. Moreover $y'$, $z'$, $v_j$, $s_i$, $t_i$ are in $P$, for each possible value of $i$ and $j$. No further vertex is in $P$. Then $|P| = 3p+q+2=K$. It remains to show that $P$ is a mutual-visibility set\xspace. Clearly, $y'$ is in mutual visibility with $z'$. Let $ST=\left\{s_i, t_i~|~i\in\{1,2,\ldots, p\}\right\}$, $U=\left\{u_i, \bar{u_i}~|~i\in\{1,2,\ldots, p\}\right\}$, $D=\left\{v_j~|~j\in\{1,2,\ldots, q\}\right\}$. Each vertex in $ST$ is in mutual visibility with all the points in its true-setting subgraph and with all the other points in $P$ thanks to shortest paths passing through vertices $w,y$, and $z$ that are not in $P$ (e.g., for $t_i\not \in V_1$, the paths $(t_i,z,s_1), (t_i,z,t_1), (t_i,z,z'), (t_i,z,y,y'), (t_i,z,y,u_1), (t_i,z,y,\bar{u_1}), (t_i,w,v_1)$).
All the points in $D$ are in mutual visibility through shortest paths of length two via vertex $w$. More interesting is to show that each point $v\in D$ is in mutual visibility with $y'$ (and with $z'$). Point $v$ corresponds to a clause $c\in C$ and, since $C$ is satisfiable, there is a vertex $u$ in $N_G(v)\cap U$ that is not in $P$, corresponding to a $True$ literal in $c$. Then the shortest paths $(v,u,y,y')$ and $(v,u,y,z,z')$ show that $v$ is in mutual visibility with $y'$ and $z'$. Finally, each point in $U$ is in mutual visibility with all the other points in $U$, because of shortest paths passing through $y$. Regarding the mutual visibility of points in $U$ with points in $D$, let $v_j$ be a point in $D$ corresponding to a clause $c_j$ and let $x_i$ (or $\bar{x_i})$ be a literal in $c_j$ corresponding to vertex $u_i$ (or $\bar{u_i}$). If $t(x_i)$ is True then either the point $\bar{u_i}$ is connected to $v_j$ with the path $(\bar{u_i},u_i,v_j)$ or $\bar{u_i}$ is adjacent to $v_j$. Otherwise, if $t(x_i)$ is False then either the point $u_i$ is adjacent to $v_j$ or connected to $v_j$ via $(u_i,\bar{u_i},v_j)$. Similarly for all the literals in $c_j$. If $x_i$ is not in $c_j$ then point $u_i$ (or $\bar{u_i}$) is in mutual visibility with $v_j\in D$ thanks to a shortest path $(u_i,y,u',v_j)$ (or $(\bar{u_i},y,u',v_j)$), where the vertex $u'\in U$ is not in $P$ and is in correspondence with a $True$ literal in $c_j$. This concludes the first part of the proof. Conversely, let us suppose that there is a set $P\subseteq V$ of points such that $|P|\geq K=3p+q+2$. In $C$ there are three clauses that do not share any variable. Assume, without loss of generality, that these three clauses are $c_1$, $c_2$, $c_3$. Then the star subgraph $H$ of $G$ induced by vertices $v_1$, $v_2$, $v_3$ and $w$ is a convex subgraph of $G$. Further convex subgraphs of $G$ are the subgraphs $T_i$, the path graph $H'=(y',y,z,z')$ and each subgraph $L_j\sim K_1$ consisting of a single vertex $v_j$, $j=4,\ldots, q$.
The union of the vertices of these convex subgraphs is $V$; then, by applying Lemma~\ref{lem:mu_bound}, we have: $$\mu(G) \leq \mu(H)+\mu(H')+\sum_{i=1}^p \mu(T_i)+\sum_{j=4}^q \mu(L_j) = 3+2+3p+(q-3)=3p+q+2.$$ The above inequality holds since it is not difficult to see that $\mu(H)=3$ (see also Corollary~\ref{cor:tree}), $\mu(H')=2$ by Lemma~\ref{lem:PnCn}, and, by enumeration, that $\mu(T_i)=3$. The mutual-visibility number\xspace $\mu(G)$ is the size of a largest mutual-visibility set\xspace in $G$; then $|P|= K=3p+q+2$. Since $y$ and $z$ are articulation vertices we can assume, by Lemma~\ref{lem:art}, that they are not in $P$. Moreover, at least one vertex for each $T_i$ is not in $P$: call $Q$ the set of such vertices. Then the points in $P$ are a subset of $V'=V\setminus (Q \cup\{y,z\})$. Since $|V|=4p+q+5$ and $|Q|\geq p$, then $|V'|\leq 3p + q+3$. Hence at most one vertex in $V'$ is not in $P$. Consequently, at least two vertices among $v_1$, $v_2$, $v_3$ of $H$ are in $P$ (say $v_1$, $v_2$), and since $H$ is a convex subgraph of $G$, the only shortest path between $v_1$ and $v_2$ is $(v_1,w,v_2)$. This implies that $w$ is not in $P$, otherwise $v_1$ and $v_2$ would not be in mutual visibility. In conclusion, all the vertices in $D$ are in $P$, $y'$ and $z'$ are in $P$, and three vertices for each $T_i$ are in $P$; in particular, exactly one vertex among $u_i$ and $\bar{u_i}$ is in $P$. Now, consider a point $v_j$ in $D$ and its corresponding clause $c_j$. Since $v_j$ and $y'$ are mutually visible, at least one vertex in $N_G(v_j) \cap U$ is not in $P$ and then the corresponding literal is $True$ and $c_j$ is satisfied. By the generality of $c_j$, all the clauses are satisfied. \end{proof} Theorem~\ref{theo:NP} shows that {\sc Mutual-Visibility}\xspace is hard; however, the following problem, which asks to test if a given set of points is a mutual-visibility set\xspace, can be solved in polynomial time.
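The value $\mu(T_i)=3$, justified above "by enumeration", can be reproduced mechanically. The following Python sketch is ours, not the paper's; in it we read the duplicated edge $\bar{u_i}s_i$ in the definition of $E_i$ as $\bar{u_i}t_i$, consistently with Figure~\ref{fig:NP}. It enumerates all vertex subsets of the gadget and confirms that there are exactly two maximum mutual-visibility sets, each containing exactly one of $u_i$ and $\bar{u_i}$:

```python
# Brute-force check (a sketch; names are ours) that the true-setting gadget
# T_i -- vertices u, ubar, s, t with edges u-ubar, ubar-s, ubar-t, s-t, as
# reconstructed from the proof -- has mu(T_i) = 3.
from itertools import combinations
from collections import deque

T = {"u": ["ubar"], "ubar": ["u", "s", "t"],
     "s": ["ubar", "t"], "t": ["ubar", "s"]}

def dist(adj, src, dst, blocked=frozenset()):
    """BFS distance from src to dst avoiding blocked vertices
    (dst itself is never blocked)."""
    seen, q = {src}, deque([(src, 0)])
    while q:
        x, d = q.popleft()
        if x == dst:
            return d
        for w in adj[x]:
            if w not in seen and (w == dst or w not in blocked):
                seen.add(w)
                q.append((w, d + 1))
    return None

def is_mv_set(adj, P):
    # Each pair must keep its distance once the other points are removed.
    P = set(P)
    return all(dist(adj, a, b) == dist(adj, a, b, P - {a, b})
               for a, b in combinations(P, 2))

max_sets = [set(S) for S in combinations(T, 3) if is_mv_set(T, S)]
# No set of all four vertices works: ubar blocks the u-s and u-t paths.
assert not any(is_mv_set(T, S) for S in combinations(T, 4))
print(max_sets)
```

The two sets found are $\{u,s,t\}$ and $\{\bar u,s,t\}$, matching the two truth assignments the gadget encodes in the reduction.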
\begin{definition} {\sc Mutual-Visibility Test}\xspace: \\ {\sc Instance}: A graph $G=(V,E)$ and $P\subseteq V$. \\ {\sc Question}: Is $P$ a mutual-visibility set\xspace of $G$? \end{definition} The solution is provided by means of Algorithm {\sc MV}\xspace that in turn uses Procedure {\sc BFS\_MV}\xspace as a sub-routine. Procedure {\sc BFS\_MV}\xspace and Algorithm {\sc MV}\xspace are shown in Figures~\ref{alg:bfs} and~\ref{alg:algoMV}, respectively. \begin{algorithm}[ht] \SetKwInput{Proc}{Procedure} \Proc{{\sc BFS\_MV}\xspace} \SetKwInOut{Input}{Input} \Input{A connected graph $G=(V,E)$, a set of points $P$, $v\in V$, a boolean $t$ } \SetKwInOut{Output}{Output} \Output{The distance vector of $v$ from any vertex $u$ in $P$ calculated in $G$, if $t$ is True, otherwise calculated in $G- P\setminus\{u,v\}$. } \BlankLine \BlankLine $D[u]:=\infty ~\forall u\in V$\;\label{line:startin} $DP[p]:=\infty ~\forall p\in P$\; $D[v]:=0$\;\label{line:endin} \lIf{$v\in P$}{$DP[v]:=0$} Let $Q$ be a queue\; $Q.enqueue(v)$\; \While{$Q$ is not empty and $\exists p\in P, D[p]=\infty$ \label{line:ciclo}} { $u := Q.dequeue()$\; \label{line:deq} \For {each $w$ in $N_G(u)$}{ \If{$D[w]=\infty$}{ $D[w]:=D[u]+1$\;\label{line:up} \lIf{ $w \in P$}{ $DP[w]:=D[w]$} \lIf {$t$ or $w \not \in P$}{ $Q.enqueue(w)$\label{line:enq}} } } } \Return DP \caption{Procedure {\sc BFS\_MV}\xspace } \label{alg:bfs} \end{algorithm} \begin{algorithm}[t] \SetKwInput{Proc}{Algorithm} \Proc{{\sc MV}\xspace} \SetKwInOut{Input}{Input} \Input{ A graph $G=(V,E)$ and a set of points $P\subseteq V$} \SetKwInOut{Output}{Output} \Output{ True if $P$ is a mutual-visibility set\xspace, False otherwise } \BlankLine \If {points in $P$ are in different connected components of $G$\label{line:comp}} {\Return False} Let $H$ be the connected component of $G$ with points\; \For{\mbox{each} $p \in P$\label{line:loop}}{ \If {{\sc BFS\_MV}\xspace(H,P,p,False) $\not =$ {\sc BFS\_MV}\xspace(H,P,p,True)\label{line:call}} {\Return 
False}\label{line:exit} } \Return True\label{line:end} \caption{Algorithm {\sc MV}\xspace } \label{alg:algoMV} \end{algorithm} \begin{theorem}\label{theo:P} Algorithm {\sc MV}\xspace solves {\sc Mutual-Visibility Test}\xspace in $O(|P|(|V|+|E|))$ time. \end{theorem} \begin{proof} When $G$ is connected, Algorithm {\sc MV}\xspace (see Figure~\ref{alg:algoMV}) calculates the distances between any pair of points $u,v\in P$ both in $G$ and in $G- P\setminus\{u,v\}$, the graph obtained by removing all the points in $P$ except $u$ and $v$ (loop at Line~\ref{line:loop}). To this end, {\sc MV}\xspace uses Procedure {\sc BFS\_MV}\xspace (see Figure~\ref{alg:bfs}). If the distances are equal (and Line~\ref{line:end} is reached) then there exists a shortest $(u,v)$-path without further points, that is, $u$ and $v$ are in mutual visibility. Otherwise, $P$ is not a mutual-visibility set\xspace (Line~\ref{line:exit}). Procedure {\sc BFS\_MV}\xspace is a variant of the breadth-first search algorithm that updates two distance vectors: the distance vector $D$, for the distances of $v$ with each vertex in the graph, and the distance vector $DP$, for the distances of $v$ with the points in $P$. These distance vectors are initialized at Lines~\ref{line:startin}--\ref{line:endin}. To track all the visited vertices, a queue $Q$ is initialized with the vertex $v$. Then, at Line~\ref{line:ciclo}, a loop starts, ending when all the points are visited or there are no more vertices to visit. Within the loop, the first vertex $u$ in $Q$ is dequeued at Line~\ref{line:deq}. For each non-visited neighbor $w$ of $u$, its distance $D[w]$ from $v$ is correctly updated to $D[u]+1$ (see Line~\ref{line:up}). If $w$ is a point, this distance is recorded in $DP$. Finally, at Line~\ref{line:enq}, $w$ is enqueued in $Q$ if it is not a point or if the distances must be calculated in $G$ (that is, if $t$ is True).
Note that if $t$ is False and $w$ is a point, $w$ is not enqueued in $Q$ because any shortest path between $v$ and a point $p\in P$, $p\not = w$, useful to calculate $d_G(v,p)$ and to test the mutual visibility of $v$ and $p$, cannot pass through $w$. Procedure {\sc BFS\_MV}\xspace ends by returning the distance vector $DP$. Algorithm {\sc MV}\xspace first checks if the points in $P$ are in different connected components of $G$ at Line~\ref{line:comp}. In this case $P$ is not a mutual-visibility set\xspace and the algorithm correctly returns False. If all the points are in the same connected component $H=(V_H,E_H)$ (and hence in $G$, if it is connected), for each point $p$ in $P$ it calculates the distances of $p$ from each other point $u\in P$, both in $H$ and in $H-P\setminus \{p,u\}$ (see Line~\ref{line:call}). If at least one of these distances is different in the two graphs, then Algorithm {\sc MV}\xspace correctly returns False, otherwise True. Procedure {\sc BFS\_MV}\xspace works in $O( | V | + | E | ) $ time, since every vertex and every edge will be explored in the worst case. Algorithm {\sc MV}\xspace calls Procedure {\sc BFS\_MV}\xspace at most twice for each $p\in P$, so the overall time is $O(|P|( | V | + | E |) ) $. \end{proof} \section{Conclusions}\label{sec:concl} This paper is a first study on the concepts of mutual-visibility set\xspace and mutual-visibility number\xspace. It would be interesting to study the same concepts for weighted graphs and directed graphs. The latter case is very different from the studied one since, given a set of points $P$, the relation of visibility between two points is not symmetric. From a computational point of view, the {\sc Mutual-Visibility}\xspace problem could be analyzed with respect to approximability and parameterized complexity. Given a graph, we have shown some relations between the mutual-visibility number\xspace and both the clique number and the maximum degree of the graph.
It would be interesting to study relations with other invariants, such as the treewidth or the clique-width of a graph. Finally, different kinds of visibility can be investigated. For example, the \emph{single point visibility}: find the vertex in the graph seen by the largest set of points. \section*{Acknowledgments} Figure~\ref{fig:block} was adapted from a drawing by David Eppstein, whose remarkable contributions to Wikipedia have greatly facilitated the writing of this article. \section{Mutual-visibility set\xspace for special graph classes}\label{sec:graphs} In this section we study the mutual-visibility number\xspace for specific graph classes and provide some results useful to calculate maximum {mutual-visibility set\xspace}s in polynomial time in these graphs. \subsection{Graph characterization by mutual-visibility number\xspace} The following lemma characterizes some graph classes in terms of their mutual-visibility number\xspace. \begin{lemma}\label{lem:char} Let $G=(V,E)$ be a graph such that $|V|=n$. Then \begin{enumerate} \item $\mu(G)=1 \iff G \sim \overline{K_n}$ (if $G$ is connected: $\mu(G)=1 \iff G \sim K_1$); \item $\mu(G)=2\iff$ $n>1$ and $G\sim P_n$ or $G$ is the disjoint union of at most $n-1$ path graphs; \item $\mu(G)=|V|\iff G \sim K_n$; \item if $G$ is connected and $|V|>2$: $\mu(G)=|E|\iff G \sim K_{1,n-1}$ or $G$ is a triangle. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item $(\Rightarrow)$ Since $\mu(G)=1$, $E$ must be empty, otherwise there exists an edge $xy\in E$, and $x$ and $y$ are mutually visible, so $\mu(G)\geq 2$. Hence $G$ is a graph with $n$ vertices and no edges, that is, $G\sim\overline{K_n}$.\\ $(\Leftarrow)$ Since $E$ is empty there are no paths between distinct vertices, so any mutual-visibility set\xspace cannot have more than one point. Hence $\mu(G)=1$. \item $(\Rightarrow)$ Since $\mu(G)=2$, then $n>1$. Assume now by contradiction that $G$ is not isomorphic to $P_n$ and $G$ is not the disjoint union of at most $n-1$ path graphs.
Since at least one connected component of $G$ has at least two vertices, $\Delta(G)$ cannot be zero. If $\Delta(G)=1$, $G$ would be the disjoint union of $K_1$ and $P_2$ graphs, hence a disjoint union of at most $n-1$ path graphs, contradicting the assumption. For the same reason, if $\Delta(G)=2$ at least one connected component must be a cycle graph $C_k$, for a certain $k$, impossible since $\mu(C_k)=3$ by Lemma~\ref{lem:PnCn}. Then $\Delta(G)\geq 3$, but by Lemma~\ref{lem:omegaDelta}, also in this case $\mu(G)\geq 3$.\\ $(\Leftarrow)$ If $n=2$ or all the connected components of $G$ have at most two vertices, then $\mu(G)=2$. Otherwise, some connected component of $G$ has more than two vertices, and $\mu(G)=2$ by Lemma~\ref{lem:PnCn} applied to that component. \item $(\Rightarrow)$ Since all vertices are in mutual visibility, $G$ is connected. Moreover, each $u,v\in V$ must be adjacent, otherwise in any shortest path connecting them there is at least one vertex, that is a point since the mutual-visibility set\xspace is $V$.\\ $(\Leftarrow)$ Obvious, since all pairs of vertices are adjacent and then mutually visible. \item $(\Rightarrow)$ Since $G$ is connected, $|E|\geq |V|-1$. Moreover $|V|\geq \mu(G)= |E|$. Then $|E|\leq |V|\leq |E|+1$. If $|V|=|E|=\mu(G)$ then, by point 3), $G$ is a complete graph and hence a triangle (the only case where $|E|=\frac{n(n-1)} 2 = n = |V|$). If $|V|=|E|+1$, $G$ is a tree with $|V|>2$ vertices and $\mu(G)=|V|-1$ points. Then only one vertex is not a point. This vertex can be a leaf only if $|V|=3$ (and then $G\sim K_{1,2}$), otherwise there is a path with at least three points in mutual visibility, a contradiction by Lemma~\ref{lem:PnCn}. For the same reason, when $G\not \sim K_{1,2}$, the vertex that is not a point must be the only vertex that is not a leaf. Then the graph $G$ is a star with $n$ vertices, that is $G\sim K_{1,n-1}$.\\ $(\Leftarrow)$ Obvious if $G$ is a triangle; otherwise, by Lemma~\ref{lem:omegaDelta}, $\mu(G)\geq\Delta(G)=deg(v)=|V|-1=|E|$, where $v$ is the center of $G$.
However, $\mu(G)$ cannot be larger than $|E|=|V|-1$, otherwise $v$ would be a point, preventing the mutual visibility among the pendant vertices. \end{enumerate} \end{proof} \subsection{Block graphs and Trees}\label{sec:block} \begin{figure}[t] \graphicspath{{fig/}} \centering \def\svgwidth{\columnwidth} \large\scalebox{0.5}{\input{fig/block.pdf_tex}} \caption{A block graph. Red vertices are points of the maximum mutual-visibility set\xspace.} \label{fig:block} \end{figure} A \emph{block graph} is a graph in which every maximal biconnected subgraph (called \emph{block}) is a clique (see Figure~\ref{fig:block}). The next theorem characterizes the largest {mutual-visibility set\xspace}s (and hence the mutual-visibility number\xspace) for block graphs. \begin{theorem}\label{theo:block} Let $G=(V,E)$ be a connected block graph and $X$ the set of its articulation vertices. $V\setminus X$ is a mutual-visibility set\xspace of $G$ and $\mu(G)=|V\setminus X|$. \end{theorem} \begin{proof} By Lemma~\ref{lem:art} there exists a maximum mutual-visibility set\xspace $P\in M(G)$ without vertices in $X$. To show that $P$ includes all the vertices of $G$ in $V\setminus X$, consider two of them, $u$ and $v$, and the shortest $(u,v)$-path (note that in block graphs the shortest path between two vertices is unique). If $u$ and $v$ belong to the same block then they are adjacent, since a block is a clique by definition. Otherwise, the shortest $(u,v)$-path passes only through articulation vertices of $G$, since shortest paths in block graphs are induced paths. Then $u$ and $v$ are mutually visible. By the generality of $u$ and $v$, the theorem holds. \end{proof} An immediate consequence of Theorem~\ref{theo:block} is the following corollary for trees. \begin{corollary}\label{cor:tree} Let $T=(V,E)$ be a tree and $L$ the set of its leaves. Then $L$ is a mutual-visibility set\xspace and $\mu(T)=|L|$.
\end{corollary} \begin{proof} A tree is a block graph where the blocks are the edges ($K_2$ subgraphs) in $E$, and each vertex in $V$ that is not a leaf is an articulation vertex. Then $L$ is a maximum mutual-visibility set\xspace by Theorem~\ref{theo:block}. \end{proof} Figure~\ref{fig:noconvex} shows the maximum mutual-visibility set\xspace $P$ of a tree. However, the maximum mutual-visibility set\xspace of trees and block graphs is not necessarily unique. This is the case when the removal of an articulation vertex creates a new component that is a path $P_n$, $n\geq 2$: several {mutual-visibility set\xspace}s of maximum size can then be obtained, each by choosing a single vertex of $P_n$. \subsection{Grids, Tori}\label{sec:grid} \begin{figure}[t] \graphicspath{{fig/}} \centering \def\svgwidth{\columnwidth} \large\scalebox{0.8}{\input{fig/drawing.pdf_tex}} \caption{On the left: mutual-visibility sets for grid graphs $\Gamma_{1,1}$, $\Gamma_{1,2}$, $\Gamma_{2,2}$, and $\Gamma_{2,3}$. On the right: two non-isomorphic mutual-visibility sets for $\Gamma_{3,3}$. The size of each mutual-visibility set\xspace determines the mutual-visibility number\xspace of the corresponding graph.} \label{fig:smallgrids} \end{figure} \begin{figure}[t] \graphicspath{{fig/}} \centering \def\svgwidth{\columnwidth} \large\scalebox{0.20}{{\huge \input{fig/grid4x4.pdf_tex}}}~a)~~~~~~~~~~~~ \large\scalebox{0.50}{{\huge \input{fig/grid.pdf_tex}}}~b) \caption{a) The unique maximum mutual-visibility set for $\Gamma_{4,4}$. b) A maximum mutual-visibility set for $\Gamma_{6,6}$.
An extension of this set for $\Gamma_{7,7}$, when $\Gamma_{6,6}$ is seen as one of its subgraphs, is obtained by removing points $u$ and $v$ and by adding points $w$, $x$, $y$, and $z$.} \label{fig:grids} \end{figure} For grid graphs $\Gamma_{m,n}$, Figure~\ref{fig:smallgrids} represents the {mutual-visibility set\xspace}s of maximum size for small values of $m$ and $n$, and Theorem~\ref{theo:grid} gives the values of $\mu(\Gamma_{m,n})$ for $m>3$ and $n>3$. These values are based on the maximum {mutual-visibility set\xspace}s shown in Figure~\ref{fig:grids}. Furthermore, Table~\ref{tab:grid} shows the values of $\mu(\Gamma_{m,n})$ for all the possible settings of $m$ and $n$, $m\leq n$. \begin{table}[t] \begin{center} \begin{tabular}{c|c|c|c|c} $m$&$n$&Graph $G$&$\mu(G)$&Reference\\\hline 1 &1 & $K_1$& 1&Lemma~\ref{lem:char}\\\hline 1 &$n>1$& $P_n$&2&Lemma~\ref{lem:PnCn}\\\hline 2 & 2 & $C_4$ & 3&Lemma~\ref{lem:PnCn}\\\hline 2 & $n>2$& $\Gamma_{2,n}$&4&Lemmas~\ref{lem:mu_bound} and~\ref{lem:PnCn}\\\hline 3 & 3 & $\Gamma_{3,3}$&5&Figure~\ref{fig:smallgrids}\\\hline 3 & $n>3$ & $\Gamma_{3,n}$&6&Lemmas~\ref{lem:mu_bound} and~\ref{lem:PnCn}\\\hline 4 & 4 & $\Gamma_{4,4}$&8&Figure~\ref{fig:grids} \\\hline $m> 3$&$n> 3$& $\Gamma_{m,n}$&$2m$&Theorem~\ref{theo:grid}\\ \end{tabular} \end{center} \caption{Values of $\mu(G)$ when $G\sim\Gamma_{m,n}$, for all the possible values of $m$ and $n$ such that $m\leq n$.}\label{tab:grid} \end{table} \begin{theorem}\label{theo:grid} Let $\Gamma_{m,n}=P_m\times P_n$ be a grid graph with $m>3$ and $n>3$. Then $$\mu(\Gamma_{m,n})=2 \cdot \min(m,n).$$ \end{theorem} \begin{proof} Let $P_m =(u_0, u_1,\ldots, u_{m-1})$ and $P_n =(v_0, v_1,\ldots, v_{n-1})$. In each subgraph $((u_0, v_i), (u_1,v_i),\ldots, (u_{m-1},v_i))$, representing the $i$-th row of $\Gamma_{m,n}$, there are at most two points, as an immediate consequence of Lemmas~\ref{lem:mu_bound} and~\ref{lem:PnCn}, since a row is a convex subgraph of $\Gamma_{m,n}$ and is a path.
The same holds for each subgraph $((u_j, v_0), (u_j,v_1),\ldots, (u_j,v_{n-1}))$, representing the $j$-th column of the grid. Then $\mu(\Gamma_{m,n})\leq 2 \cdot \min(m,n)$. To show that the equality holds, let $k=\min(m,n)$ and consider a subgraph $\Gamma_{k,k}$ of $\Gamma_{m,n}$; being an axis-aligned subgrid, it is a convex subgraph, so Lemma~\ref{lem:mu_conv} applies. If $k=4$, the unique maximum mutual-visibility set\xspace of $\Gamma_{4,4}$ is given by $(u_1,v_0), (u_2,v_0), (u_0,v_1), (u_3,v_1),(u_0, v_{2}), (u_3, v_2), (u_1,v_3), (u_2,v_3)$ and is represented in Figure~\ref{fig:grids}a. Then $\mu(\Gamma_{k,k})=8$ and, by Lemma~\ref{lem:mu_conv}, $\mu(\Gamma_{m,n})\geq 8$; together with the upper bound above, $\mu(\Gamma_{m,n})=8=2 \cdot \min(m,n)$. For $k\geq 5$, consider again a grid subgraph $\Gamma_{k,k}$ of $\Gamma_{m,n}$ and the set of points: $(u_1,v_0), (u_2,v_0), (u_0,v_1), (u_3,v_1),$ $ (u_{j-2},v_j),(u_{j+2},v_j)$, for each $j = 2, \ldots, k-3$, and $ (u_{k-4}, v_{k-2}), (u_{k-1}, v_{k-2}), (u_{k-3},v_{k-1}), (u_{k-2},v_{k-1})$. This set generalizes the solution given for $k=4$ and is represented in Figure~\ref{fig:grids}b. Since there are two points in each row and each column and all the points are in mutual visibility, $\mu(\Gamma_{k,k})\geq 2k$ and hence, again by Lemma~\ref{lem:mu_conv} and the upper bound above, $\mu(\Gamma_{m,n})=2 \cdot \min(m,n)$. \end{proof} \begin{figure}[t] \graphicspath{{fig/}} \centering \def\svgwidth{\columnwidth} \large\scalebox{0.25}{{\huge \input{fig/torus.pdf_tex}}}~a)~~~~~~~~~~~~~~~ \large\scalebox{1}{{\huge \input{fig/donut.pdf_tex}}}~b)\\~\\ \large\scalebox{0.35}{{\huge \input{fig/torus12x12.pdf_tex}}}~c)~~~~~~~~~~~~~~ \large\scalebox{0.28}{{\huge \input{fig/torus15x15.pdf_tex}}}~d) \caption{a) A torus $C_5 \times C_5$. Vertices in red form a maximum mutual-visibility set\xspace. b) The same graph represented as a three-dimensional torus. The dotted vertex corresponds to the dotted vertex in a). c) A solution for a torus $T_{12,12}$ such that $\mu(T_{12,12})=3\cdot12$. d) A solution for a torus $T_{15,15}$ such that $\mu(T_{15,15})=3\cdot15$.
}\label{fig:tori} \end{figure} For tori $T_{m,n}=C_m\times C_n$, notice that each copy of $C_m$ and $C_n$ is a convex subgraph of $T_{m,n}$. Then, by Lemmas~\ref{lem:mu_bound} and~\ref{lem:PnCn}, we derive: \begin{corollary}\label{cor:tori} Let $T_{m,n}=C_m\times C_n$ be a torus with $m\geq3$ and $n\geq3$. Then $$\mu(T_{m,n})\leq 3 \cdot \min(m,n).$$ \end{corollary} However, the problem of finding $m$ and $n$ such that the mutual-visibility number\xspace of $T_{m,n}$ is equal to $3 \cdot \min(m,n)$ is still open. In general, solutions for tori are quite irregular, like the one shown in Figure~\ref{fig:tori}a for a torus $T_{5,5}$, where the upper bound is not reached. It would be interesting to find the values of $m$ such that $\mu(T_{m,m})$ reaches the upper bound of Corollary~\ref{cor:tori}. There are no {mutual-visibility set\xspace}s for tori $T_{m,m}$ such that $\mu(T_{m,m})=3\cdot m$, for $m\leq 11$ (a result obtained with a backtracking algorithm that explored a space of $\binom{m}{3}^m$ possible solutions using Algorithm {\sc MV}\xspace as a subprocedure). As shown in Figures~\ref{fig:tori}c and~\ref{fig:tori}d, for $m=12$ and $m=15$ there are tori such that $\mu(T_{12,12})= 3 \cdot 12$ and $\mu(T_{15,15})= 3 \cdot 15$. These solutions were found without the help of a computer, but a scalable solution, such as the one provided in Theorem~\ref{theo:grid} and shown in Figure~\ref{fig:grids} for grids, is not available. Compared to grids, the main difficulty is that, for $m'>m$ and $n'>n$, $T_{m,n}$ is not a subgraph of $T_{m',n'}$, whereas $\Gamma_{m,n}$ is a subgraph of $\Gamma_{m',n'}$. \subsection{Complete bipartite graphs, cographs and more general graphs} Let us start with a preliminary result about graphs such that almost all the vertices can be part of a mutual-visibility set\xspace. \begin{lemma}\label{lem:V-1} Let $G=(V,E)$ be a graph.
Then $\mu(G)\geq |V|-1$ if and only if there exists $v\in V$ adjacent to each vertex $u\in G-v$ such that $deg_{G-v}(u)<|V|-2$. \end{lemma} \begin{proof} $(\Rightarrow)$ If $\mu(G)=|V|$ then, by Lemma~\ref{lem:char}, $G$ is a clique graph and the statement is trivially true. If $\mu(G)=|V|-1$, then there exists a unique vertex $v$ of $G$ such that $v\not \in P$, where $P\in M(G)$. Let $u \in G-v$. If $deg_{G-v}(u)=|V|-2$, then $u$ is adjacent to every other point in $P$, and hence in mutual visibility with each of them. If $deg_{G-v}(u)<|V|-2$, then there exists at least one vertex $w \in P$ not adjacent to $u$. Since $u$ and $w$ are mutually visible, the path $(u,v,w)$ must exist, hence $v$ is adjacent to $u$. $(\Leftarrow)$ Let $P=V\setminus \{v\}$ be a set of points. Let us show that it is a mutual-visibility set\xspace, and hence that $\mu(G)\geq |V|-1$. Let $u\in P$. If $deg_{G-v}(u)=|V|-2$ then, as noted above, $u$ is adjacent to every other vertex in $P$. Otherwise $deg_{G-v}(u)<|V|-2$ and $uv\in E$ by hypothesis. In this case, let $Q=P\setminus N_{G-v}[u]$ be the set of points in $P$ not adjacent to $u$. Then for each $w \in Q$, $deg_{G-v}(w)<|V|-2$. Hence $w$ is adjacent to $v$, and then $u$ and $w$ are in mutual visibility through the shortest path $(u,v,w)$. By the generality of $u$ and $w$, we have that $P$ is a mutual-visibility set\xspace of $G$. \end{proof} For complete bipartite graphs $K_{m,n}$, Table~\ref{tab:bip} reports the values of $\mu(K_{m,n})$ for small values of $m$ and $n$. Note that for $K_{2,n}\sim\overline{K_2}+\overline{K_n}$, Lemma~\ref{lem:V-1} applies if the vertex $v$ is taken in the partition $\overline{K_2}$. A general result is the following.
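The condition of Lemma~\ref{lem:V-1} can be tested directly in polynomial time. The sketch below (Python; function and variable names are ours, not taken from the paper) returns a witness vertex $v$ when one exists:

```python
def lemma_V1_witness(adj):
    """Search for a vertex v adjacent to every vertex u whose degree
    in G - v is smaller than |V| - 2 (the condition of Lemma lem:V-1).
    adj maps each vertex to the set of its neighbors; returns a witness
    vertex, or None when no vertex satisfies the condition."""
    n = len(adj)
    for v in adj:
        witness = True
        for u in adj:
            if u == v:
                continue
            deg_without_v = len(adj[u] - {v})
            if deg_without_v < n - 2 and v not in adj[u]:
                witness = False
                break
        if witness:
            return v
    return None

def complete_bipartite(m, n):
    """K_{m,n} as an adjacency dictionary, vertices tagged by side."""
    left = [("a", i) for i in range(m)]
    right = [("b", j) for j in range(n)]
    adj = {x: set(right) for x in left}
    adj.update({y: set(left) for y in right})
    return adj
```

Consistently with the discussion above, a witness exists for $K_{2,n}$ (any vertex of the partition $\overline{K_2}$) but not for $K_{3,3}$.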
\begin{table}[t] \begin{center} \begin{tabular}{c|c|c|c|c} $m$&$n$&Graph $G$&$\mu(G)$&Reference\\\hline 1 &1 & $P_2$& 2&Lemma~\ref{lem:PnCn}\\\hline 1 &$n>1$& $K_{1,n}$&$n$&Corollary~\ref{cor:tree}\\\hline 2 & 2 & $C_4$ & 3&Lemma~\ref{lem:PnCn}\\\hline 2 & $n>2$& $K_{2,n}$&$n+1$&Lemma~\ref{lem:V-1}\\\hline $m\geq 3$ & $n\geq 3$ & $K_{m,n}$&$n+m-2$&Theorem~\ref{theo:Kmn} \end{tabular} \end{center} \caption{Values of $\mu(G)$ for $G\sim K_{m,n}$ for all the possible values of $m$ and $n$ such that $m\leq n$. }\label{tab:bip} \end{table} \begin{theorem}\label{theo:Kmn} Let $G$ be a complete bipartite graph $K_{m,n}$ such that $m\geq 3$ and $n\geq 3$. Then $\mu(G)=m+n-2$. \end{theorem} \begin{proof} First we notice that $\mu(G)\leq m+n-2$: indeed, $\mu(G)=m+n$ would imply that $G$ is a clique graph, and $\mu(G)=m+n-1$ is not possible because the condition of Lemma~\ref{lem:V-1} cannot be satisfied when $m,n\geq 3$. To show that $\mu(G)=m+n-2$, it is sufficient to exhibit two vertices such that all the remaining ones form a mutual-visibility set\xspace. Since $G\sim \overline{K_m} + \overline{K_n}$, we take a vertex $v$ from $\overline{K_m}$ and a vertex $u$ from $\overline{K_n}$. Each point $w$ in $\overline{K_m}-v$ is in mutual visibility with the other points in $\overline{K_m}-v$ because of vertex $u$. Furthermore, $w$ is adjacent to the points in $\overline{K_n}-u$. Symmetrically, the points in $\overline{K_n}-u$ are in mutual visibility because of vertex $v$ and are adjacent to all the other points. \end{proof} We can generalize the results of Lemma~\ref{lem:V-1} and Theorem~\ref{theo:Kmn} to more general graphs resulting from a join operation. \begin{corollary}\label{cor:join} Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ be two graphs and $J= G_1 + G_2=(V,E)$ their join.
Then one of the following three cases holds: \begin{enumerate} \item $\mu(J)=|V| \iff G_1$ and $G_2$ are clique graphs \item $\mu(J)=|V|-1 \iff \mu(J)\not=|V|$ and $\mu(G_1)\geq |V_1|-1$ or $\mu(G_2)\geq |V_2|-1$ \item $\mu(J)=|V|-2 \iff \mu(G_1)< |V_1|-1$ and $\mu(G_2)< |V_2|-1$. \end{enumerate} \end{corollary} \begin{proof} \begin{enumerate} \item Obvious by Lemma~\ref{lem:char}. \item $(\Rightarrow)$ If $\mu(J)=|V|-1$ then there is a vertex $v$ that is not a point. Without loss of generality, let $v\in V_1$. Then each pair $x,y$ of non-adjacent points in $G_1-v$ must be connected to $v$ to be in mutual visibility. Hence, by Lemma~\ref{lem:V-1}, $\mu(G_1)\geq|V_1|-1$.\\ $(\Leftarrow)$ Without loss of generality, assume $\mu(G_1)\geq |V_1|-1$. If $\mu(G_1)= |V_1|-1$, let $v\in V_1$ be the only vertex of $G_1$ that is not a point; otherwise, if $G_1$ is a clique graph, let $v$ be any vertex of $V_1$. Given $\mu(G_1)\geq |V_1|-1$, all the points in $V_1\setminus \{v\}$ are in mutual visibility. Any pair of points in $V_2$ are in mutual visibility since either adjacent or connected through $v$. Since any point in $V_2$ is adjacent to any point in $V_1$, we conclude that all the points are in mutual visibility and then $\mu(J)=|V|-1$. \item Let $v_1\in V_1$ and $v_2 \in V_2$, and let $P=V\setminus\{v_1,v_2\}$ be the set of points. Then $P$ is a mutual-visibility set\xspace, since any point in $V_1\setminus \{v_1\}$ ($V_2\setminus \{v_2\}$, resp.) is adjacent to any point in $V_2$ ($V_1$, resp.) and is in mutual visibility with any non-adjacent point of $V_1$ ($V_2$, resp.) through a shortest path of length two passing through $v_2$ ($v_1$, resp.). Hence $\mu(J)\geq |V|-2$ and, since the hypotheses exclude the two larger values by points 1) and 2), $\mu(J)=|V|-2$. \end{enumerate} \end{proof} Cographs are well studied in the literature and were independently rediscovered many times, since they represent the class of graphs that can be generated from $K_1$ by complementation and disjoint union (see Theorem 11.3.3 in~\cite{graph_classes_survey} for equivalent definitions).
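The case analysis of Corollary~\ref{cor:join} yields $\mu(J)$ in constant time once the relevant parameters of the two operands are known. A direct transcription (a sketch with our own naming; `complete` stands for ``clique graph'', and the $\mu(G_i)$ values are assumed to be already computed):

```python
def mu_of_join(n1, mu1, complete1, n2, mu2, complete2):
    """mu(G1 + G2) by the case analysis of Corollary cor:join,
    where n_i = |V_i|, mu_i = mu(G_i), and complete_i tells
    whether G_i is a clique graph."""
    n = n1 + n2
    if complete1 and complete2:          # case 1: the join is a clique graph
        return n
    if mu1 >= n1 - 1 or mu2 >= n2 - 1:   # case 2
        return n - 1
    return n - 2                         # case 3
```

For instance, $K_{2,n}\sim\overline{K_2}+\overline{K_n}$ with $\mu(\overline{K_2})=1\geq |V_1|-1$ falls in case 2, giving $n+1$, while $K_{3,3}$ falls in case 3, giving $m+n-2=4$ as in Theorem~\ref{theo:Kmn}.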
As reported in Section~\ref{sec:notation}, a connected cograph can be obtained starting from $K_1$ by a sequence of splittings, that is, by adding a sequence of twin vertices. In~\cite{GDS12} the notion of \emph{twin-free subgraph} was introduced. \begin{definition}\emph{\hspace{-0.17cm}\cite{GDS12}} Let $G=(V,E)$ be a graph. The \emph{twin-free subgraph} $\tf{G}$ of $G$ is the subgraph $G[V']$ induced by the largest set of vertices $V'\subseteq V$ such that $G[V']$ has no twins. \end{definition} Since any induced subgraph of a connected cograph $G$ is a cograph, $\tf{G}\sim K_1$, and it can be obtained by the polynomial-time {\sc Pruning} algorithm presented in the same paper. This algorithm removes any vertex $v$ of $G$ that has a twin, and applies the same procedure to $G-v$ until a graph without twin vertices is reached. It thus provides a sequence of vertex removals that corresponds, in reverse, to a sequence of splitting operations rebuilding the whole $G$ starting from $K_1$, in such a way that $G$ results as the join of two of its subgraphs. Based on this observation we can provide the following result. \begin{theorem} Let $G=(V,E)$ be a connected cograph. Then $\mu(G)$ is at least $|V|-2$ and a maximum mutual-visibility set\xspace can be computed in polynomial time. \end{theorem} \begin{proof} Let us show that the vertices of any connected cograph $G=(V,E)$ can be partitioned into two subsets $V_1$ and $V_2$ such that $G= G[V_1] +G[V_2]$. Let $v_1$ be the only vertex of $\tf{G}$ and let $V_1=\{v_1\}$. Since $G$ is connected, the first splitting operation to rebuild $G$ from $v_1$ produces a true twin $v_2$ of $v_1$, and the resulting graph is a $K_2$. Let $V_2=\{v_2\}$. Now add any vertex $v_1'$ ($v_2'$, resp.) produced by a splitting operation on a vertex of $V_1$ ($V_2$, resp.) to $V_1$ ($V_2$, resp.). Eventually, each vertex in $V_1$ is connected to all the vertices in $V_2$ and vice versa. Hence $G= G[V_1] +G[V_2]$.
By applying Corollary~\ref{cor:join}, $\mu(G)\geq |V|-2$. If all the splitting operations generate true twins, then $G$ is a clique graph and $\mu(G)=|V|$. By Algorithm~{\sc MV}\xspace, we can test whether $V\setminus \{v\}$ is a mutual-visibility set\xspace of $G$ for some vertex $v\in V$ and, if this is the case, $\mu(G)=|V|-1$. Otherwise, $\mu(G)=|V|-2$ and $V\setminus \{v_1, v_2\}$ is a mutual-visibility set\xspace of $G$. \end{proof} \section{Introduction} The points of a set in Euclidean space are in mutual visibility if and only if no three of them are collinear; that is, two points $p$ and $q$ are mutually visible when no further point lies on the line segment $pq$. A line segment in Euclidean space represents the shortest path between two points, but in more general topologies this type of path (called a geodesic) may not be unique. Then, in general, two points are mutually visible when there exists at least one shortest path between them without further points. In this paper, we investigate the mutual visibility of a set of points in topologies represented by graphs (e.g., see Figure~\ref{fig:tori}b). In particular, a fundamental problem is finding the maximum number of points in mutual visibility that a given graph can have. To this aim, we consider the following new invariant: the \emph{mutual-visibility number\xspace} of a graph is the size of any largest \emph{mutual-visibility set\xspace}, that is, a subset of the vertices (\emph{points}) that are in mutual visibility. To study this invariant from a computational point of view, we introduce the {\sc Mutual-Visibility}\xspace problem: find a mutual-visibility set\xspace of size larger than a given number. We prove that this problem is NP-complete, whereas checking whether the points of a given set are in mutual visibility is solvable in polynomial time.
Given this situation, our work proceeds by investigating the {mutual-visibility number\xspace} for special classes of graphs, showing how the {\sc Mutual-Visibility}\xspace problem can be solved in polynomial time on them. We also provide some relations of the mutual-visibility number\xspace of a graph with other invariants. While these new concepts are interesting in themselves, their study is motivated by the fundamental role that mutual visibility plays in problems arising in the context of mobile entities, as shown below. Furthermore, points of a graph in mutual visibility may represent entities on some nodes of a computer/social network that want to communicate in an efficient and ``confidential'' way, that is, in such a way that the exchanged messages do not pass through other entities. \paragraph{Related works} Questions about sets of points and their mutual visibility in the Euclidean plane have been investigated since the end of the 19th century. Perhaps the most famous problem was posed by Sylvester~\cite{sylvester1893}, who conjectured that it is not possible to arrange a finite set of points ``so that a right line through every two of them shall pass through a third, unless they all lie in the same right line''. A correct proof was given by Gallai~\cite{gallai44} some 40 years later, with a theorem now known as the Sylvester--Gallai theorem. In~\cite{hardy08}, Chapter III, it is shown how to place a set of points with positive integer coordinates $(i,j)$, $j\leq i$, in such a way that each point is in mutual visibility with the origin $(0,0)$, while also maximizing the number of points with the same abscissa. This arrangement shows interesting relations with the Farey series and Euler's totient function $\phi$: the number of points with abscissa $n$ is exactly $\phi(n)$.
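The count recalled above is easy to reproduce numerically: a lattice point $(n,j)$ is in mutual visibility with the origin exactly when $\gcd(n,j)=1$, since any intermediate lattice point on the segment would force a common factor. A small check (a sketch; the helper names are ours, not from~\cite{hardy08}):

```python
from math import gcd

def phi(n):
    """Euler's totient, via the standard product formula."""
    result, m = n, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def visible_from_origin(n):
    """Lattice points (n, j), 1 <= j <= n, in mutual visibility with
    (0, 0): no third lattice point is on the segment iff gcd(n, j) == 1."""
    return [(n, j) for j in range(1, n + 1) if gcd(n, j) == 1]
```

For example, for abscissa $n=12$ the visible ordinates are $1, 5, 7, 11$, and indeed $\phi(12)=4$.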
More recently, mutual visibility has been studied in the context of mobile entities modeled as points in the Euclidean plane, whose visibility can be obstructed by the presence of other mobile entities. The problem investigated in~\cite{DiLuna17} is perhaps the most basic: starting from arbitrary distinct positions in the plane, within finite time the mobile entities must reach a configuration in which they are in distinct locations and can all see each other. Since then, many papers have addressed the same subject (e.g., see~\cite{Aljohani18a,Bhagat20,Poudel2021,Sharma18}), and similar visibility problems were considered in different contexts where the entities are ``fat robots'' modeled as disks in the plane (e.g., see~\cite{Poudel19}) or are points on a grid-based terrain whose movements are restricted to grid lines (e.g., see~\cite{Adhikary18}). Visibility problems were also studied on graphs. Wu and Rosenfeld~\cite{Rosenfeld94} considered mutual visibility in pebbled graphs. They assumed that the visibility may be obstructed by ``pebbles'' placed on some vertices of the graph. Two unpebbled vertices $u,v$ of a pebbled graph $G$ are mutually visible if and only if there exists a shortest path $p$ in $G$ between $u$ and $v$ such that no vertex of $p$ is pebbled. In~\cite{Wu98} they considered edge pebblings and vertex pebblings, showing that the visibility relations defined by edge and vertex pebblings are incomparable. Again in the context of mobile entities, in~\cite{Aljohani18b} the {\sc Complete Visitability} problem is studied: reposition a given number of robots on the vertices of a graph so that each robot has a path to all the others without visiting an intermediate vertex occupied by any other robot. Here, the required paths are not shortest paths, and the studied graphs are restricted to the infinite square grid and the infinite hexagonal grid, both embedded in the Euclidean plane.
\paragraph{Contribution} In Section~\ref{sec:notation}, formal definitions of mutual-visibility set\xspace and mutual-visibility number\xspace are provided, along with basic notation and some preliminary results. Algorithmic results about the {\sc Mutual-Visibility}\xspace problem are shown in Section~\ref{sec:complexity}. In Section~\ref{sec:graphs} we study the {mutual-visibility set\xspace}s and {mutual-visibility number\xspace}s for special classes of graphs. Concluding remarks and notes about further studies on the subject are provided in Section~\ref{sec:concl}. \section{Notation and preliminaries}\label{sec:notation} In this work we consider finite, simple, loopless, undirected and unweighted graphs $(V,E)$ with vertex set $V$ and edge set $E$. We use standard terminology from~\cite{graph_classes_survey,graph_theory}, some of which is briefly reviewed here. \paragraph{Basic notation.} Let $G=(V,E)$ be a graph. A {\em subgraph} of $G$ is a graph having all its vertices and edges in $G$. Given a subset $S$ of $V$, the {\em induced subgraph} $G[S]$ of $G$ is the maximal subgraph of $G$ with vertex set $S$. The subgraph of $G$ induced by $V\setminus S$ is denoted by $G-S$, and $G-x$ stands for $G-\{x\}$. If $v$ is a vertex of $G$, by $N_G(v)$ we denote the {\em neighbors} of $v$, that is, the set of vertices that are adjacent to $v$, and by $N_G[v]$ we denote the {\em closed neighbors} of $v$, that is, $N_G(v)\cup \{v\}$. The number of edges incident to a vertex $v$ of a graph $G$ is the \emph{degree} of that vertex and is denoted $deg_G(v)$. Then $deg_G(v)=|N_G(v)|$ and the maximum degree is denoted $\Delta(G)$. If $|N_G(v)|=1$, $v$ is called a \emph{pendant} vertex. Two vertices $u,v$ are \emph{true twins} if $uv \in E$ and $N_G[u]=N_G[v]$, and are \emph{false twins} if $uv \not \in E$ and $N_G(u)=N_G(v)$. The operation of extending a graph by adding a new vertex which has a twin in the obtained graph is called \emph{splitting}~\cite{bandelt/mulder:86}.
A sequence of pairwise distinct vertices $(x_0, x_1,\ldots, x_n)$ is a {\em path} in $G$ if $x_{i}x_{i+1} \in E$ for $0\leq i < n$, and is an {\em induced path} if $G[\{x_0, \ldots, x_n\}]$ has $n$ edges. The \emph{length} of a path is the number of its edges. A {\em cycle} in $G$ is a path $(x_0, \ldots, x_{n-1})$, $n\geq 3$, where also $x_{0}x_{n-1}\in E$. A \emph{$(x,y)$-path} is a path from $x$ to $y$. A graph $G$ is {\em connected} if for each pair of vertices $x$ and $y$ of $G$ there is a $(x,y)$-path in $G$. In a connected graph $G$, the length of a shortest $(x,y)$-path is called the {\em distance} between $x$ and $y$ and is denoted by $d_G(x,y)$. A \emph{connected component} of $G$ is a maximal connected subgraph of $G$. A vertex $x$ is an \emph{articulation vertex} if $G-x$ has more connected components than $G$. A graph $G=(V,E)$ is \emph{biconnected} if $G-x$ is connected for each $x\in V$. A subgraph $H$ of $G$ is said to be \emph{convex} if all shortest paths in $G$ between vertices of $H$ actually belong to $H$. The \emph{convex hull} of a subset $V'$ of vertices, denoted \conv{V'}, is defined as the smallest convex subgraph containing $V'$. \paragraph{Operations on graphs} If $G$ is a graph, $\overline{G}$ denotes its \emph{complement}, that is, the graph on the same vertices such that two distinct vertices of $\overline{G}$ are adjacent if and only if they are not adjacent in $G$. Given two graphs $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ such that $V_1\cap V_2=\emptyset$, the \emph{disjoint union} $G_1\cup G_2$ denotes the graph $(V_1\cup V_2, E_1\cup E_2)$; the \emph{join} $G_1 + G_2$ denotes the graph consisting of $G_1\cup G_2$ and all edges joining $V_1$ with $V_2$, that is, $(V_1\cup V_2, E_1\cup E_2\cup\{xy~|~x\in V_1, y \in V_2\})$. To define the \emph{Cartesian product} $G_1 \times G_2=(V,E)$, consider any two vertices $u=(u_1, u_2)$ and $v = (v_1, v_2)$ in $ V = V_1 \times V_2$.
Then $uv\in E$ whenever either $u_1 = v_1$ and $u_2v_2\in E_2$, or $u_2 = v_2$ and $u_1v_1 \in E_1$. We call $G_1$ and $G_2$ \emph{isomorphic}, and write $G_1\sim G_2$, if there exists a bijection $\varphi :V_1 \rightarrow V_2$ with $xy \in E_1 \iff \varphi(x)\varphi(y) \in E_2$ for all $x,y \in V_1$. \paragraph{Special graphs} In this paper we use some special graphs. $K_n$ denotes the \emph{complete graph} (or \emph{clique}) with $n$ vertices and $n(n-1)/2$ edges. The \emph{clique number} $\omega(G)$ of a graph $G$ is the number of vertices in a maximum clique of $G$. $P_n$ denotes the \emph{path graph} with $n$ vertices and $n-1$ edges. $C_n$ denotes the \emph{cycle graph} with $n$ vertices and $n$ edges. Finally, $K_{m,n}=\overline{K_m}+\overline{K_n}$ denotes the \emph{complete bipartite graph}. A \emph{tree} is a connected graph without cycles, and its pendant vertices are called \emph{leaves}. The tree $K_{1,n}$ is called a \emph{star} and can be obtained by adding $n$ pendant vertices to a single vertex, called the \emph{center} of the star. The graph $C_3$ is also called a \emph{triangle}. A \emph{grid graph} $\Gamma_{m,n}=P_m\times P_n$ is the Cartesian product of two paths $P_m$ and $P_n$. For $m\geq 3$ and $n\geq 3$, a graph $T_{m,n} =C_m \times C_n$ obtained as the Cartesian product of two cycle graphs is called a \emph{torus}. A connected graph obtained from $K_1$ by a sequence of splittings is called a \emph{cograph}. \begin{figure}[t] \graphicspath{{fig/}} \begin{center} {\scalebox{1.0}{\input{fig/noconvex.pdf_tex}}} \end{center} \caption{ A grid graph $\Gamma_{2,7}$ with $\mu(\Gamma_{2,7})=4$ and an induced subgraph $H$ of $\Gamma_{2,7}$ that is a tree with five leaves. For each graph, the vertices in red are points of a maximum {mutual-visibility set\xspace}; hence $\mu(H)=5>\mu(\Gamma_{2,7})$. } \label{fig:noconvex} \end{figure} \paragraph{Preliminaries} Let $G=(V,E)$ be a graph and $P\subseteq V$ a set of \emph{points}.
Two points are \emph{mutually visible} if there is a shortest path between them with no further point. $P$ is a \emph{mutual-visibility set\xspace} if its points are pairwise mutually visible. The \emph{mutual-visibility number\xspace} of $G$ is the size of any largest mutual-visibility set\xspace of $G$ and is denoted $\mu(G)$. By $M(G)$ we denote the set containing all the largest {mutual-visibility set\xspace}s of $G$. Formally: $$M(G)=\{P~|~P\subseteq V \mbox{ is a mutual-visibility set\xspace and } |P|=\mu(G)\}$$ Notice that, given a graph $G$ and a set of points $P$, the mutual visibility relation between two points in $P$ is reflexive and symmetric, but not transitive. Hence it is different from the visibility relations studied in~\cite{Wu98}, which are all transitive. Let $H=(V_H,E_H)$ be an induced subgraph of a graph $G$. If $P$ is a mutual-visibility set\xspace in $G$ then $P\cap V_H$ is not necessarily a mutual-visibility set\xspace of $H$. For example, consider a cycle graph $C_n$, $n\geq 4$: it is easy to find a maximum mutual-visibility set\xspace $P$ of size three. Now consider an induced subgraph $C_n -v$, where $v\not \in P$: it is a path graph. All the points in $P$ are in $C_n -v$, but they are not mutually visible, since one of them lies between the other two. However, the following lemma holds for convex subgraphs of a given graph. \begin{lemma}\label{lem:mv_subset} Let $H=(V_H,E_H)$ be a convex subgraph of $G=(V,E)$. Let $P\subseteq V$ be a mutual-visibility set\xspace of $G$. Then $P\cap V_H$ is a mutual-visibility set\xspace of $H$. \end{lemma} \begin{proof} Let $u,v$ be two distinct vertices in $P'=P\cap V_H$. By definition of convex subgraph, all the shortest paths between $u$ and $v$ in $G$ are in $H$, and one of them contains no further point of $P$, and hence none of $P'$. Hence $u$ and $v$ are mutually visible in $H$. By the generality of $u$ and $v$, $P'$ is a mutual-visibility set\xspace of $H$.
\end{proof} Given a graph $G$ and a positive integer $k$, the property $\mu(G)\leq k$ is not a hereditary property for induced subgraphs; i.e., it is possible for an induced subgraph $H$ of $G$ that $\mu(H)>k\geq \mu(G)$. Consider the grid graph $\Gamma_{2,7}\sim P_2 \times P_7$ in Figure~\ref{fig:noconvex}, where $P_2=(u_0,u_1)$ and $P_7=(v_0,v_1,\ldots,v_6)$. Then, as we will prove in Section~\ref{sec:grid}, $\mu(\Gamma_{2,7})=4$. The induced subgraph $H$ obtained by removing vertices $(u_0,v_0), (u_0,v_2), (u_0,v_4)$, and $(u_0,v_6)$ from $\Gamma_{2,7}$ is a tree with five leaves; then, as shown in Figure~\ref{fig:noconvex} and proved in Section~\ref{sec:block}, $\mu(H)=5$. So $\mu(H)>4=\mu(\Gamma_{2,7})$. However, if we consider convex subgraphs of $G$ the property holds, as stated by the following lemma. \begin{lemma}\label{lem:mu_conv} Let $H$ be a convex subgraph of a graph $G$. Then $\mu(H)\leq\mu(G)$. \end{lemma} \begin{proof} Any mutual-visibility set\xspace $P$ of $H$ is also a mutual-visibility set\xspace of $G$, since all the shortest paths between points in $P$ are both in $G$ and in $H$. Then the statement follows. \end{proof} The next lemma sets an upper bound on the mutual-visibility number\xspace of a graph based on the {mutual-visibility number\xspace}s of certain convex subgraphs. \begin{lemma}\label{lem:mu_bound} Let $G=(V,E)$ be a graph and let $V_1, V_2, \ldots V_k$ be subsets of $V$ such that $\bigcup_{i=1}^k V_i = V$. Then $\mu(G)\leq\sum_{i=1}^k\mu(\conv{V_i})$. \end{lemma} \begin{proof} Assume $\mu(G) > \sum_{i=1}^k\mu(\conv{V_i})$ and let $P\subseteq V$ be a mutual-visibility set\xspace such that $|P|=\mu(G)$. Since $\bigcup_{i=1}^k V_i = V$, any point of $P$ is in at least one convex hull $\conv{V_i}$. Let $P_i$ be the set of vertices that are in $P$ and in $\conv{V_i}$, for each $i=1,2,\ldots,k$. Then $\sum_{i=1}^k |P_i|\geq |P|=\mu(G)> \sum_{i=1}^k\mu(\conv{V_i})$.
Hence there exists at least one set $P_j$ such that $|P_j|> \mu(\conv{V_j})$, for some $j$ in $\{1,2,\ldots,k\}$. This is a contradiction since, by Lemma~\ref{lem:mv_subset}, $P_j$ is a mutual-visibility set\xspace of $\conv{V_j}$ and its size cannot be larger than $\mu(\conv{V_j})$. \end{proof} It is worth noticing that for each graph $G$ there exists a mutual-visibility set\xspace $P$ such that $|P|=\mu(G)$ and no articulation vertex is in $P$, as shown below. \begin{lemma}\label{lem:art} Let $G=(V,E)$ be a graph and let $X$ be the set of its articulation vertices. There exists a maximum mutual-visibility set\xspace $P\in M(G)$ such that $X\cap P=\emptyset$. \end{lemma} \begin{proof} Let $P$ be any mutual-visibility set\xspace in $M(G)$ and suppose, by contradiction, that there exists a point $x_P\in X\cap P$. Let $(V_1,E_1),(V_2,E_2),\ldots, (V_k,E_k)$, $k\geq 2$, be the connected components of $G-x_P$ created by removing $x_P$. Note that $P\setminus \{x_P\}\subseteq \bigcup_{\ell=1}^k V_\ell$. However, there is only one index $i\in\{1,\ldots,k\}$ such that $P\cap V_i\not = \emptyset$, otherwise there would be two points $u,v$ belonging to two different connected components of $G-x_P$ that are in mutual visibility in $G$. This is impossible since any shortest $(u,v)$-path passes through the point $x_P$. Then $P'=(P\setminus \{x_P\})\cup \{x'\}$, where $x'\in V_j$, $j\not=i$, is such that $P'\in M(G)$. \end{proof} Before calculating {mutual-visibility number\xspace}s and maximum {mutual-visibility set\xspace}s for some graph classes, let us show a first result that compares the mutual-visibility number\xspace of a graph $G$ with two well-studied invariants of $G$. \begin{lemma}\label{lem:omegaDelta} Let $G$ be a graph with clique number $\omega(G)$ and maximum degree $\Delta(G)$. Then $\mu(G)\geq \omega(G)$ and $\mu(G)\geq \Delta(G)$. \end{lemma} \begin{proof} All vertices of a largest clique $K$ of $G$ form a mutual-visibility set\xspace.
Then $\omega(G)=|K|=\mu(K)$ and, by Lemma~\ref{lem:mu_conv}, $\mu(K)\leq \mu(G)$ because $K$ is a convex subgraph of $G$. Hence, $\omega(G) \leq \mu(G)$. Let $v$ be a vertex of $G$ with degree $\Delta(G)$ and consider the set $P=N_G(v)$. Any two vertices $x,y\in N_G(v)$ are either adjacent or at distance two along the path $(x,v,y)$. Since $v$ is not in $P$, in both cases $x$ and $y$ are in mutual visibility, so $P$ is a mutual-visibility set\xspace and $\mu(G)\geq |P|=\Delta(G)$. \end{proof} While Lemma~\ref{lem:mu_bound} provides an upper bound on the mutual-visibility number\xspace of a graph, Lemma~\ref{lem:omegaDelta} provides two lower bounds on this value. The following lemma gives a first taste of the mutual-visibility number\xspace in two basic graph classes, which will be useful to derive further results. \begin{lemma}\label{lem:PnCn} The mutual-visibility number\xspace of a path graph $P_n$, $n\geq2$, is $\mu(P_n)=2$, and the mutual-visibility number\xspace of a cycle graph $C_n$, $n\geq 3$, is $\mu(C_n)=3$. \end{lemma} \begin{proof} $\mu(P_n)$ cannot be larger than two since, as already noted, a point of any mutual-visibility set\xspace with three or more points would lie on the unique shortest path between two other points. Moreover, since $n\geq 2$ there is an edge $e$ in $P_n$ and the two endpoints of $e$ are mutually visible, so $\mu(P_n)=2$. Regarding the cycle graph $C_n=(x_0, x_1,\ldots, x_{n-1})$, $\mu(C_n)\geq 3$ since it is always possible to choose $x_0$, $x_{\lceil \frac n 2\rceil-1}$ and $x_{\lceil \frac n 2\rceil}$ as three points in mutual visibility. A mutual-visibility set\xspace cannot have four or more points: otherwise, there would exist four points $x_{i_1}$, $x_{i_2}$, $x_{i_3}$ and $x_{i_4}$, with $i_1<i_2<i_3< i_4$, such that the two $(x_{i_1},x_{i_3})$-paths along the cycle contain $x_{i_2}$ and $x_{i_4}$, respectively, and then $x_{i_1}$ and $x_{i_3}$ are not in mutual visibility. \end{proof}
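The pairwise check used throughout these proofs can be carried out in polynomial time, as the paper's abstract notes. Below is a minimal sketch of such a test (our own illustration, not code from the paper): two points $u,v$ of $P$ are mutually visible exactly when deleting the remaining points of $P$ does not increase the $(u,v)$-distance.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src, blocked=frozenset()):
    """BFS distances from src, never stepping on a blocked vertex."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist and w not in blocked:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def is_mutual_visibility_set(adj, P):
    """True iff every pair u, v of P has a shortest path avoiding the
    other points of P, i.e. removing P \\ {u, v} keeps dist(u, v)."""
    P = set(P)
    for u, v in combinations(P, 2):
        d_free = bfs_dist(adj, u).get(v)
        d_avoiding = bfs_dist(adj, u, blocked=frozenset(P - {u, v})).get(v)
        if d_free is None or d_avoiding != d_free:
            return False
    return True

# Cycle C_6 with the three points x_0, x_{ceil(n/2)-1}, x_{ceil(n/2)}.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(is_mutual_visibility_set(adj, {0, 2, 3}))     # True
print(is_mutual_visibility_set(adj, {0, 1, 3, 4}))  # False: four points on a cycle
```

The check costs one BFS per pair of points, hence it is polynomial in the size of the graph, in line with the complexity claim in the abstract.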
https://arxiv.org/abs/2105.02722
Mutual Visibility in Graphs
Let $G=(V,E)$ be a graph and $P\subseteq V$ a set of points. Two points are mutually visible if there is a shortest path between them without further points. $P$ is a mutual-visibility set if its points are pairwise mutually visible. The mutual-visibility number of $G$ is the size of any largest mutual-visibility set. In this paper we start the study of this new invariant and of mutual-visibility sets in undirected graphs. We introduce the mutual-visibility problem, which asks to find a mutual-visibility set larger than a given size. We show that this problem is NP-complete, whereas checking whether a given set of points is a mutual-visibility set is solvable in polynomial time. We then study mutual-visibility sets and mutual-visibility numbers on special classes of graphs, such as block graphs, trees, grids, tori, complete bipartite graphs and cographs. We also provide some relations of the mutual-visibility number of a graph with other invariants.
https://arxiv.org/abs/2205.09096
Extremal arrangements of points on the sphere for weighted cone-volume functionals
Weighted cone-volume functionals are introduced for the convex polytopes in $\mathbb{R}^n$. For these functionals, geometric inequalities are proved and the equality conditions are characterized. A variety of corollaries are derived, including extremal properties of the regular polytopes involving the $L_p$ surface area. Some applications to crystallography and quantum theory are also presented.
\section{Introduction} The regular polytopes are a cornerstone of convex and discrete geometry, often arising as the solutions to geometric extremal problems. For example, among all polytopes inscribed in the Euclidean ball which have a fixed number of vertices and are combinatorially equivalent to a regular polytope, the regular polytope has the largest volume and the largest surface area. In this article, we prove an analogous result: Among all such polytopes, the regular polytopes maximize (respectively, minimize) various Orlicz-type cone-volume functionals, which are defined in terms of a concave (convex) weight function applied to the facet heights or facet volumes. In particular, the regular simplex maximizes the $L_p$ surface area (for $p\in[0,1]$) among all simplices inscribed in the sphere. The Brunn-Minkowski theory was extended to the $L_p$ Brunn-Minkowski theory in the groundbreaking work \cite{Lutwak93} of Lutwak, where the classical surface area measure on convex bodies in $\R^n$ was extended to the $L_p$ surface area measure for $p>1$, and the $L_p$ Minkowski problem was introduced and solved for even data. Since then, the $L_p$ Minkowski problem has been studied extensively in convex geometry; see \cite{BBCY2019, Chou2006TheLP, JLW2015, LYZ2003} and the references therein for some examples. When $Q$ is a polytope, the so-called \emph{discrete $L_p$ Minkowski problem} has been studied and solved in many cases; see \cite{HLYZ2005, Lutwak93, Zhu2015, Zhu2017} and the references therein. In the past decade or so, the Orlicz-Brunn-Minkowski theory has emerged as a generalization of the $L_p$ theory. This area has been studied extensively and developed rapidly. Notably, many of the results from the $L_p$ theory have been successfully translated into the Orlicz setting; see \cite{HLYZ2010,JianLu2019, LudwigReitzner, LYZ2010-2, LYZ2010-1, ZX2014-2} and the references therein for some examples. 
In particular, an Orlicz extension of the $L_p$ surface area has been sought \cite{HP2014}, and Zou and Xiong \cite{ZX2014} gave the following definition. For a convex body $K$ in $\R^n$ that contains the origin in its interior and an increasing concave function $\phi:[0,\infty)\to[0,\infty)$ with $\phi(0)=0$, the \emph{Orlicz surface area} $S_\phi(K)$ of $K$ was defined in \cite{ZX2014} by \begin{equation}\label{OrliczSA} S_\phi(K)=\int_{\partial K}\phi\left(\frac{1}{\langle x,\nu_K(x)\rangle}\right)\langle x,\nu_K(x)\rangle\,{\rm d}S_K(x), \end{equation} where $\partial K$ is the boundary of $K$, $\nu_K$ is the Gauss map of $K$ and $S_K$ is the classical surface measure of $K$. If $Q$ is a polytope in $\R^n$ that contains the origin $o$ in its interior, then \eqref{OrliczSA} becomes \[ S_\phi(Q) = \sum_{F\in\mathcal{F}_{n-1}(Q)} \phi(\dist(o,F)^{-1}) \dist(o,F) \vol_{n-1}(F), \] where the summation ranges over the set of facets $\mathcal{F}_{n-1}(Q)$ of $Q$. Furthermore, when $\phi_p(t)=t^p$ is chosen for some $p\in\R$, one recovers the aforementioned \emph{$L_p$ surface area} $S_p(Q)$ of $Q$, \[ S_p(Q)=\sum_{F\in\mathcal{F}_{n-1}(Q)}\dist(o,F)^{1-p}\vol_{n-1}(F). \] In particular, $S_{\phi_1}(Q)=S_1(Q)=\vol_{n-1}(\partial Q)$ is the classical surface area of $Q$ and $n^{-1}S_{\phi_0}(Q)=n^{-1}S_0(Q)=\vol_n(Q)$ is the $n$-dimensional volume of $Q$. In this article, we introduce similar (but distinct) Orlicz-type surface area functionals on the polytopes in $\R^n$, and we prove sharp geometric inequalities for them. The main results yield several corollaries on the polytopes inscribed in a sphere which optimize $L_p$ surface area type functionals. The highlight of the paper is a new technique which involves using such functionals to determine the maximum surface area (or volume) polytopes whose vertices lie on the sphere. 
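As a quick numerical illustration of these special cases (an illustrative sketch under our own naming, not code from the paper), one can evaluate $S_p$ directly from the facet data of a polytope; for the regular octahedron $\mathrm{conv}\{\pm e_1,\pm e_2,\pm e_3\}$, the choice $p=1$ returns the surface area $4\sqrt{3}$ and $p=0$ returns $n\vol_n(Q)=4$.

```python
import math

def S_p(facets, p):
    """L_p surface area of a polytope containing the origin o, given
    for each facet F the pair (dist(o, F), vol_{n-1}(F))."""
    return sum(h ** (1 - p) * area for h, area in facets)

# Regular octahedron conv{+-e_1, +-e_2, +-e_3}: eight triangular facets,
# each of area sqrt(3)/2, at distance 1/sqrt(3) from the origin.
octa = [(1 / math.sqrt(3), math.sqrt(3) / 2)] * 8

print(S_p(octa, 1))  # surface area: 4*sqrt(3) ≈ 6.9282
print(S_p(octa, 0))  # n * vol_n(Q) = 3 * (4/3) = 4
```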
As an application, we use this approach to settle an old problem from discrete geometry, which asks: Among all polytopes with seven vertices inscribed in the unit sphere $\mathbb{S}^2$, which one has the greatest surface area? To the best of our knowledge, a rigorous proof of the solution was missing until now. With the ``$L_p$ approach'', we show that (up to rotations) the maximizer is a bipyramid with five vertices forming a regular pentagon in the equator and two more at the north and south poles. \section{Definitions and Notation} To formulate our main results, we first state some definitions and notation. Define the following classes of weight functions: \begin{align*} {\rm Conc}_{(0,\infty)}&:=\left\{\varphi: (0,\infty)\to(0,\infty) \,\big| \, \varphi\text{ is concave}\right\}\\ {\rm Conv}_{(0,\infty)}&:=\left\{\psi: (0,\infty)\to(0,\infty) \,\big| \, \psi\text{ is convex}\right\}. \end{align*} We will use the corresponding arrow to denote the subset of increasing (or decreasing) functions. For example, $\varphi\in{\rm Conc}_{(0,\infty)}^{\uparrow}$ means that $\varphi\in{\rm Conc}_{(0,\infty)}$ and $\varphi$ is increasing, while $\psi\in{\rm Conv}_{(0,\infty)}^{\downarrow}$ means that $\psi\in{\rm Conv}_{(0,\infty)}$ and $\psi$ is decreasing. Let $\mathcal{I}_n$ denote the set of polytopes in $\mathbb{R}^n$ which are inscribable in the unit sphere $\mathbb{S}^{n-1}$ and contain the origin $o$ in their interiors. For $Q\in\mathcal{I}_n$, we denote the facets ($(n-1)$-dimensional faces) of $Q$ by $F_1,\ldots,F_{N_Q}$, where $N_Q:=|\mathcal{F}_{n-1}(Q)|$. We say that $Q$ is \emph{equiareal} if $\vol_{n-1}(F_j)$ is constant for all $1\leq j\leq N_Q$. 
We will make use of the following auxiliary quantities associated to $Q\in\mathcal{I}_n$: \begin{align*} \overline{F_Q} &:= \dfrac{\vol_{n-1}(\partial Q)}{N_Q}, \\ \overline{h_Q} &:= \sum_{j=1}^{N_Q} \dist(o,F_j) \dfrac{\vol_{n-1}(F_j)}{\vol_{n-1}(\partial Q)} \qquad\text{ and}\\ H_Q&:=\sum_{j=1}^{N_Q}\dist(o,F_j) . \end{align*} We let $\mathcal{P}_n$ be the set of polytopes in $\mathcal{I}_n$ that admit an insphere, i.e., an inscribed ball touching all of their facets. We use $i_Q$ and $r_Q$ to denote the incenter and inradius of $Q\in\mathcal{P}_n$, respectively. We say that $Q\in\mathcal{P}_n$ is \emph{concentric} if $i_Q=o$. Our main results are concerned with the Orlicz-type surface area functionals $S_\varphi: \mathcal{I}_n\to\mathbb{R}$ and $S^*_{\varphi}:\mathcal{I}_n\to\mathbb{R}$ defined by \begin{align*} S_\varphi(Q)&:=\sum_{j=1}^{N_Q} \varphi(\dist(o,F_j))\vol_{n-1}(F_j) \qquad\text{ and }\\ S_{\varphi}^{*}(Q)&:=\sum_{j=1}^{N_Q} \dist(o,F_j)\,\varphi\left(\vol_{n-1}(F_j)\right), \end{align*} where $\varphi\in{\rm Conc}_{(0,\infty)}$. Both of these functionals may be viewed as weighted versions of the familiar \emph{cone-volume formula} applied to the polytope $Q\in\mathcal{I}_n$: \[ n\vol_{n}(Q) = \sum_{j=1}^{N_Q} \dist(o,F_j)\vol_{n-1}(F_j). \] Similarly, for a function $\psi\in{\rm Conv}_{(0,\infty)}$ we define the functionals $S_\psi$ and $S_\psi^*$ in the same way. Choosing $\varphi_p(t):=t^{1-p}$ with $p\in[0,1]$ (resp.\ $\psi_p(t)=t^{1-p}$ with $p\geq 1$ or $p<0$) yields \[ S_{\varphi_p}(Q)=S_p(Q)=\sum_{j=1}^{N_Q}\dist(o,F_j)^{1-p}\vol_{n-1}(F_j), \] the corresponding \emph{$L_p$ surface area} of $Q$. In particular, $S_{\varphi_1}(Q)=S_1(Q)=\vol_{n-1}(\partial Q)$ is the usual surface area of $Q$, and $S_{\varphi_0}(Q)=S_{\varphi_0}^*(Q)=S_0(Q)=n\vol_n(Q)$. \section{Main results} Our main result is the following \begin{theorem}\label{mainThm} Let $\varphi\in{\rm Conc}_{(0,\infty)}$. 
\begin{itemize} \item[(i)] If $Q\in\mathcal{I}_n$, then $S_\varphi(Q) \leq \varphi\left(\overline{h_Q}\right) \vol_{n-1}\left(\partial Q\right)$. Equality holds if and only if $\varphi$ is affine or $Q\in\mathcal{P}_n$ and $Q$ is concentric. \item[(ii)] If $Q\in\mathcal{P}_n$, then $S_{\varphi}^{*}(Q)\leq \varphi\left(\overline{F_Q}\right)H_Q$, where $\overline{F_Q}$ is the average of $\vol_{n-1}(F_j)$. Equality holds if and only if $\varphi$ is affine or $Q$ is equiareal. \end{itemize} If $\varphi$ is replaced by $\psi\in{\rm Conv}_{(0,\infty)}$, then the reverse inequalities hold with the same equality conditions. \end{theorem} \begin{comment} \begin{theorem}\label{mainThm} Let $Q\in\mathcal{P}_n$ be a simplex with inradius $r_Q$ and incenter $i_Q$. \begin{itemize} \item[(i)] For any $\varphi\in{\rm Conc}_{(0,\infty)}$ we have $S_\varphi(Q) \leq \vol_{n-1}(\partial Q)\varphi(r_Q)$. Equality holds if and only if $i_Q=o$ or $\varphi$ is affine. \item[(ii)] For any $\psi\in{\rm Conv}_{(0,\infty)}$ we have $S_\psi(Q) \geq \vol_{n-1}(\partial Q)\psi(r_Q)$. Equality holds if and only if $i_Q=o$ or $\psi$ is affine. \end{itemize} \end{theorem} Note that by the cone volume formula, for any simplex $Q$ in $\R^n$ the equality $n\vol_n(Q)=r_Q\vol_{n-1}(\partial Q)$ always holds, which corresponds to the linear function $\varphi(t)=t$. \vspace{2mm} \end{comment} For $n\geq 5$ there are exactly three types of regular polytopes in $\mathbb{R}^n$, for which we may deduce the following \begin{corollary}\label{mainThm2} Let $\varphi\in{\rm Conc}_{(0,\infty)}^{\uparrow}$ and $Q\in\mathcal{I}_n$. \begin{itemize} \item[(i)] If $Q$ is a simplex, then \begin{align}\label{simplexineq} S_\varphi(Q)& \leq \dfrac{(n+1)^{\frac{n+1}{2}}}{n^{\frac{n}{2}-1}(n-1)!}\cdot\varphi\left(\frac{1}{n}\right) \qquad\text{ and } \\ \nonumber S^{*}_\varphi(Q) &\leq \dfrac{n+1}{n} \cdot\varphi\left(\dfrac{(n+1)^{\frac{n-1}{2}}}{n^{\frac{n}{2}-1}(n-1)!}\right). 
\end{align} \item[(ii)] If $Q$ is combinatorially equivalent to a hypercube, then \begin{align}\label{cubeineq} S_\varphi(Q)& \leq \dfrac{2^n}{n^{\frac{n-3}{2}}}\cdot\varphi\left(\frac{1}{\sqrt{n}}\right) \qquad\text{ and }\\ \nonumber S^{*}_\varphi(Q)&\leq 2\sqrt{n}\cdot \varphi\left( \left( \dfrac{4}{n}\right)^{\frac{n-1}{2}} \right). \end{align} \item[(iii)] If $Q$ is combinatorially equivalent to a cross polytope, then \begin{align}\label{octineq} S_\varphi(Q)&\leq \dfrac{2^n\sqrt{n}}{(n-1)!}\cdot \varphi\left(\frac{1}{\sqrt{n}}\right) \qquad\text{ and }\\ \nonumber S^{*}_\varphi(Q)&\leq \dfrac{2^n}{\sqrt{n}}\cdot\varphi\left(\dfrac{\sqrt{n}}{(n-1)!}\right). \end{align} \end{itemize} The equalities in each case hold if and only if $Q$ is regular or $\varphi$ is affine. \end{corollary} Inequalities concerning polytopes combinatorially equivalent to the other regular polytopes in $\mathbb{R}^3$ and $\mathbb{R}^4$, analogous to those given above, also hold. \begin{comment} \begin{corollary}\label{mainThm2} Let $Q\in\mathcal{P}_n$ be a simplex, and suppose that $\varphi\in{\rm Conc}_{(0,\infty)}$ is increasing and $\psi\in{\rm Conv}_{(0,\infty)}$ is decreasing. Then \begin{equation}\label{simplexineq} S_\varphi(Q)\leq \dfrac{(n+1)^{\frac{n+1}{2}}}{n^{\frac{n}{2}-1}(n-1)!}\cdot\varphi(1/n), \end{equation} and the inequality is reversed if $\varphi$ is replaced by $\psi$. Equality holds if and only if $Q$ is regular. \end{corollary} \end{comment} \begin{remark}\label{volume remark} By letting $\varphi_p(t)=t^{1-p}$ with $p\in[0,1]$, the inequality for $S_{\varphi_p}$ in Corollary \ref{mainThm2} (i) represents an \emph{$L_p$ interpolation} of the following classical results. Taking $p=0$, we recover the fact that the regular simplex has the greatest volume among all inscribed simplices. Choosing $p=1$, we recover the fact that the regular simplex has the greatest surface area among all inscribed simplices. 
\end{remark} \begin{remark} The functionals $S_\varphi$ and $S_\psi$ are related to the \emph{$T$-functional} of a polytope, introduced by Wieacker \cite{WieackerThesis} in stochastic geometry. For a polytope $Q$ in $\R^n$, it is defined as \[T_{a,b}^{n,k}(Q)=\sum_{F\in\mathcal{F}_k(Q)} \dist(o,F)^a \vol_k(F)^b,\] where $a,b\geq 0$ are parameters and $\mathcal{F}_k(Q)$ is the set of all $k$-dimensional faces of $Q$ for $0\leq k\leq n$. The $T$-functional has been studied for various models of random polytopes; for some recent examples, see \cite{HLRT-2022, KMTT-2019, KabluchkoEtAl2019}. Note that in particular, $T_{1-p,1}^{n,n-1}(Q)=S_p(Q)$. \end{remark} \begin{comment} The arguments in the proofs of Theorem \ref{mainThm} and Corollary \ref{mainThm2} can be used to show the following, mutatis mutandis. Let $\varphi\in{\rm Conc}_{(0,\infty)}$ be increasing and let $\psi\in{\rm Conv}_{(0,\infty)}$ be decreasing. If $Q$ is combinatorially equivalent to a hypercube, then \begin{equation}\label{cubeineq} S_\varphi(Q)\leq \dfrac{2^n(n-1)^{\frac{n-1}{2}}}{n^{\frac{n-3}{2}}}\cdot\varphi(1/\sqrt{n}). \end{equation} If $Q$ is combinatorially equivalent to a cross polytope, then \begin{equation}\label{octineq} S_\varphi(Q)\leq \dfrac{2^n\sqrt{n}}{n!}\cdot\varphi(1/\sqrt{n}). \end{equation} These inequalities are reversed if $\varphi$ is replaced by $\psi$. In \eqref{cubeineq} and \eqref{octineq}, equality holds if and only if $Q$ is regular or $\varphi$ is affine (respectively, $\psi$ is affine). Similar results can be stated for the other regular polytopes in $\R^3$ and $\R^4$ \end{comment} \subsection{The $K$ vertex problem for the unit sphere} Consider the following general \begin{problem}[$K$ Vertex Problem for $G$]\label{KvertexProblem} Let $K\geq n+1$ and distribute the points $\{p_1,\dots,p_K\}$ on the unit sphere $\mathbb{S}^{n-1}$. 
Find the configuration which maximizes some positive functional $G$ of the convex hull of $\{p_1,\dots,p_K\}$ provided that the convex hull has nontrivial volume. \end{problem} The results of Corollary \ref{mainThm2} are a partial solution to this problem for $G=S_\varphi$ and $G=S_{\varphi}^*$. For background on the $K$ vertex problem for volume or surface area maximization, we refer the reader to, e.g., \cite{BermanHanes1970, DHL, HL-2014} and the references therein. The maximum surface area polytope with $K$ vertices inscribed in $\mathbb{S}^2$ has been determined for $K=4,5,6$ and 12. The cases $K=4,6,12$ follow, for example, from a result of L. Fejes T\'oth \cite{Toth-RegularFigures} (see also \cite[Thm. 2, p. 279]{Toth1950}). We also remark that these cases follow directly from Weitzenb\"ock's inequality. A proof of the $K=5$ case can be found in \cite{DHL}. To our knowledge, the case $K=7$ is open. As an application of our technique, we settle this old problem in the following \begin{theorem}\label{7vertices} Let $Q$ be the convex hull of seven points chosen from the unit sphere $\mathbb{S}^2$. Then \[ \vol_2(\partial Q) \leq \frac{5}{4}\sqrt{50-6\sqrt{5}}=7.560546\ldots \] with equality if and only if $Q$ is a pentagonal bipyramid with two vertices at the poles $\pm e_3$ and the other five forming an equilateral pentagon in the equator $\mathbb{S}^2\cap (\spann(e_3))^\perp$. 
\end{theorem} \begin{center} \tdplotsetmaincoords{80}{90} \begin{tikzpicture}[scale=2.2,line join=bevel, tdplot_main_coords] \coordinate (O) at (0,0,0); \coordinate (A) at (1,0,0); \coordinate (B) at ({(-1+sqrt(5))/4},{sqrt((5+sqrt(5))/8)},0); \coordinate (C) at ({(-1-sqrt(5))/4},{sqrt((5-sqrt(5))/8)},0); \coordinate (D) at (0,0,1); \coordinate (E) at (0,0,{-1}); \coordinate (F) at ({(-1-sqrt(5))/4},{-sqrt((5-sqrt(5))/8)},0); \coordinate (G) at ({(-1+sqrt(5))/4},-{sqrt((5+sqrt(5))/8)},0); \begin{scope}[thick] \draw (A) -- (D)--(B); \draw (A) -- (B)--(E); \draw (G)--(A)--(E); \draw (D)--(G); \draw (G)--(E); \end{scope} \draw[thick,fill=green, opacity=0.2] (A) -- (D)--(B); \draw[thick,fill=green, opacity=0.2] (A) -- (D) -- (G); \draw[thick,fill=green,opacity=0.2](A) -- (G) -- (E); \draw[thick,fill=green,opacity=0.2] (A)--(E)--(B); \begin{scope}[dashed] \draw (C) -- (B); \draw (D)--(C); \draw (D)--(F); \draw (C)--(F); \draw (F)--(G); \draw (E)--(F); \draw (E)--(C); \end{scope} \begin{scope}[opacity=0.8] \draw[tdplot_screen_coords] (0,0,0) circle (1); \tdplotCsDrawLatCircle{1}{0} \end{scope} \filldraw[black] (0,0,0) circle (0.25pt) node[anchor=east] {$o$}; \filldraw[black] (0,0,1) circle (0.25pt) node[anchor=south] {$e_3$}; \filldraw[black] (0,0,-1) circle (0.25pt) node[anchor=north] {$-e_3$}; \filldraw[black] (1,0,0) circle (0.25pt) node[anchor=north west] {}; \filldraw[black] (B) circle (0.25pt) node[anchor=north] {}; \filldraw[black] (C) circle (0.25pt) node[anchor=south] {}; \filldraw[black] (F) circle (0.25pt) node[anchor=south] {}; \filldraw[black] (G) circle (0.25pt) node[anchor=north] {}; \end{tikzpicture} \end{center} {\flushleft\footnotesize {\bf Figure 1}: The maximum surface area polytope with 7 vertices inscribed in $\mathbb{S}^2$ is the convex hull of the north and south poles $\pm e_3$ and an equilateral pentagon inscribed in the equator $\mathbb{S}^2\cap (\spann(e_3))^\perp$.} \section{Proofs of Theorem \ref{mainThm} and Corollary 
\ref{mainThm2}} We will use the following formulation of Jensen's inequality. \begin{lemma} If $f$ is a concave function and $\lambda_1,\ldots,\lambda_k\geq 0$ satisfy $\sum_{i=1}^k \lambda_i=1$, then $\sum_{i=1}^k\lambda_i f(x_i) \leq f(\sum_{i=1}^k\lambda_i x_i)$. The inequality is reversed if $f$ is convex. Equality holds if and only if $x_1=\ldots=x_k$ or $f$ is affine. \end{lemma} \subsection{Proof of Theorem \ref{mainThm}} Let $Q\in\mathcal{I}_n$. Denote the facets of $Q$ by $F_1,\ldots, F_{N_Q}$ and set $h_j:=\dist(o,F_j)$. Since $\vol_{n-1}(\partial Q) = \sum_{j=1}^{N_Q}\vol_{n-1}(F_j)$, we may express $\overline{h_Q}$ as the following convex combination of the $h_j$: \[ \overline{h_Q} = \sum_{j=1}^{N_Q} \dfrac{\vol_{n-1}(F_j)}{\vol_{n-1}(\partial Q)} h_j. \] Thus Jensen's inequality yields \[ S_\varphi(Q) \leq \varphi(\overline{h_Q})\vol_{n-1}(\partial Q), \] with equality, when $\varphi$ is not affine, precisely when the $h_j$ are all equal. In that case every facet of $Q$ is tangent to the ball of radius $h_1$ centered at $o$, so $Q\in\mathcal{P}_n$, and the cone-volume formula provides \[ r_Q\vol_{n-1}(\partial Q)=n\vol_n(Q)=\sum_{j=1}^{N_Q} h_j\vol_{n-1}(F_j) = \overline{h_Q}\vol_{n-1}(\partial Q), \] so that $\dist(o,F_j) = h_j=\overline{h_Q} = r_Q$ for every $j$, which implies that $Q$ is concentric. The inequality for $S_{\varphi}^{*}$ is handled similarly. For $Q\in\mathcal{I}_n$, we have \begin{align*} \dfrac{S^{*}_{\varphi}(Q)}{H_Q} &= \sum_{j=1}^{N_Q}\dfrac{\dist(o,F_j)}{H_Q}\varphi(\vol_{n-1}(F_j)) \leq \varphi \left( \overline{F_Q} \right), \end{align*} with equality if and only if $\vol_{n-1}(F_j) = \overline{F_Q}$ for all $1\leq j\leq N_Q$ (that is, $Q$ is equiareal). If $\varphi$ is replaced by $\psi\in{\rm Conv}_{(0,\infty)}$ in either of the above proofs, then in each case the direction of Jensen's inequality reverses and the same equality conditions hold. \qed \vspace{4mm} To prove Corollary \ref{mainThm2} we need an extremal property of the regular polytopes due to L. Fejes T\'oth (see \cite[p. 314]{Toth-RegularFigures} and \cite[Thm. 
2]{Toth-1956}). For a polytope $Q$ in $\R^n$ with inradius $r_Q$ and circumradius $R_Q$, the ratio $r_Q/R_Q$ is called the \emph{spherical shell} of $Q$. \begin{lemma}\label{ratiolemma} Among those polytopes $Q$ in $\R^n$ which are combinatorially equivalent to a regular polytope, the regular one maximizes the spherical shell. In particular: \begin{itemize} \item[(i)] If $Q$ is a simplex in $\R^n$, then $r_Q/R_Q\leq 1/n$ with equality if and only if $Q$ is regular. \item[(ii)] If $Q$ is combinatorially equivalent to a hypercube or cross polytope in $\R^n$, then $r_Q/R_Q\leq 1/\sqrt{n}$ with equality if and only if $Q$ is regular. \end{itemize} \end{lemma} Parts (i) and (ii) can be found in \cite[p. 316--317]{Toth-RegularFigures}. L. Fejes T\'oth \cite{FejesToth} gives credit to I. \'Ad\'am for the proof of (i), which is called \emph{Euler's inequality}; short proofs of this result have also been given by Klamkin and Tsintsifas \cite{KT1979} and Vince \cite{Vince2008}. \subsection{Proof of Corollary \ref{mainThm2}} Let $Q\in\mathcal{P}_n$ be any inscribed simplex, and let $\triangle_n\in\mathcal{P}_n$ denote the regular inscribed simplex. The circumradius of $Q$ is 1, so by Euler's inequality $r_Q\leq r_{\triangle_n}= 1/n$ with equality if and only if $Q$ is regular. Thus, since $\varphi$ is increasing we have $\varphi(r_Q)\leq\varphi(1/n)$ with equality if and only if $r_Q=1/n$ (or equivalently, if and only if $Q$ is regular). The result now follows from Theorem \ref{mainThm} and the formula \[ \vol_{n-1}(\partial\triangle_n)=\frac{(n+1)^{\frac{n+1}{2}}}{n^{\frac{n}{2}-1}(n-1)!}. \] The other parts are handled in a similar fashion. \qed \section{Corollaries and applications}\label{Applications} First we highlight some applications of Theorem \ref{mainThm} when there are bounds on the spherical shell similar to Corollary \ref{mainThm2}. 
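The closed-form value of $\vol_{n-1}(\partial\triangle_n)$ invoked in the proof above can be double-checked numerically. The sketch below (helper names are ours, not the paper's) realizes the inscribed regular $n$-simplex by centering and normalizing $e_1,\ldots,e_{n+1}$ in $\R^{n+1}$ and sums the facet volumes via Gram determinants.

```python
import math
from itertools import combinations

import numpy as np

def simplex_volume(vertices):
    """k-dimensional volume of the simplex spanned by k+1 points in R^d
    (d >= k), via the Gram determinant of its edge vectors."""
    v0 = vertices[0]
    edges = np.array([v - v0 for v in vertices[1:]])
    gram = edges @ edges.T
    k = len(vertices) - 1
    return math.sqrt(max(float(np.linalg.det(gram)), 0.0)) / math.factorial(k)

def regular_simplex_surface(n):
    """Surface area of the regular n-simplex inscribed in the unit sphere,
    realized by centering and normalizing e_1, ..., e_{n+1} in R^{n+1}."""
    basis = np.eye(n + 1)
    c = basis.mean(axis=0)
    verts = [(e - c) / np.linalg.norm(e - c) for e in basis]
    return sum(simplex_volume(list(facet)) for facet in combinations(verts, n))

n = 4
closed_form = (n + 1) ** ((n + 1) / 2) / (n ** (n / 2 - 1) * math.factorial(n - 1))
print(regular_simplex_surface(n), closed_form)  # the two values agree
```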
\subsection{A planar result} We begin in the plane with the following \begin{corollary} Let $Q\in\mathcal{P}_2$ be a planar convex polygon with $N\geq 3$ vertices inscribed in the unit circle $\mathbb{S}^1$. If $\varphi\in{\rm Conc}^{\uparrow}_{(0,\infty)}$, then \begin{equation}\label{planar-up} S_\varphi(Q)\leq 2N\sin\tfrac{\pi}{N}\cdot\varphi(\cos\tfrac{\pi}{N}). \end{equation} If $\psi\in{\rm Conv}^{\downarrow}_{(0,\infty)}$, then \begin{equation} S_\psi(Q)\geq 2N\sin\tfrac{\pi}{N}\cdot\psi(\cos\tfrac{\pi}{N}). \end{equation} Equality holds in each case if and only if $Q$ is regular. \end{corollary} \begin{proof} By a result of L. Fejes T\'oth \cite{Toth-1948-article2, Toth-1948-article}, the spherical shell of any convex polygon $Q$ with $N$ vertices satisfies $r_Q/R_Q \leq \cos(\pi/N)$ with equality if and only if $Q$ is regular. Thus if $Q\in\mathcal{P}_2$, then $R_Q=1$ and hence $r_Q\leq \cos(\pi/N)$ with equality if and only if $Q$ is regular. The conclusion now follows from arguments similar to those in the proofs of Theorem \ref{mainThm} and Corollary \ref{mainThm2}. \end{proof} \begin{remark} For $\varphi_p(t)=t^{1-p}$ and $p\in[0,1]$, inequality \eqref{planar-up} may be regarded as an \emph{$L_p$ interpolation} of the classical inequalities which state that among all polygons inscribed in the unit circle, the regular polygon maximizes area ($p=0$) and maximizes perimeter ($p=1$). \end{remark} \begin{comment} In light of Remark \ref{volume remark}, we may provide a lower bound on the spherical shell $r_Q/R_Q$. \begin{corollary} Suppose $Q\in\mathcal{P}_n$, then \[ r_Q \geq \dfrac{n\emph{\text{vol}}_{n}(Q)}{\emph{\text{vol}}_{n-1}(\partial Q)}, \] where equality holds when $i_Q=o$. \end{corollary} \end{comment} \subsection{Inequalities for the Platonic solids} For a polytope $Q$ in $\mathbb{R}^3$ with $K$ vertices, we have the following bound on the spherical shell due to L. 
Fejes T\'oth \cite{Toth-1948-article}: \[ \frac{r_Q}{R_Q} \leq 3^{-1/2}\cot\left(\frac{K}{K-2}\frac{\pi}{6}\right). \] Equality holds only in the case that $K=4, 6$ or $12$ for the regular tetrahedron, octahedron or icosahedron, respectively. This bound leads to the following \begin{comment} L. Fejes T\'oth \cite{Toth-1948-article} proved that if $Q$ is a polytope in $\R^3$ with $N$ vertices, then $\frac{r_Q}{R_Q} \leq 3^{-1/2}\cot\left(\frac{N}{N-2}\frac{\pi}{6}\right)$, and equality only holds for the regular tetrahedron, octahedron and icosahedron. This leads to the following \end{comment} \begin{corollary}\label{R3cor} Suppose $Q\in\mathcal{P}_3$ has at most $K$ vertices and let $\varphi\in {\rm Conc}^{\uparrow}_{(0,\infty)}$. Then \[ S_\varphi(Q) \leq \vol_{2}(\partial Q)\varphi\left(3^{-1/2}\cot\left(\frac{K}{K-2}\frac{\pi}{6}\right)\right), \] with equality holding only for the regular tetrahedron, octahedron or icosahedron. \end{corollary} The $K=4$ case is the subject of Corollary \ref{mainThm2} (with $n=3$); the concrete bounds are listed below for $K=6$ and $K=12$, respectively: \begin{align*} S_\varphi(Q)&\leq 4\sqrt{3}\varphi\left(\frac{1}{\sqrt{3}}\right),\\ S_\varphi(Q)&\leq \sqrt{3}\,(10-2\sqrt{5})\varphi\left(\frac{\sqrt{25+10\sqrt{5}}}{5\sqrt{3}}\right). \end{align*} \begin{comment} Thus if $Q$ has $N=4,6$ or $12$ vertices and $\varphi$ is increasing, then $S_\varphi(Q) \leq \vol_{n-1}(\partial Q)\varphi\left(3^{-1/2}\cot\left(\frac{V}{V-2}\frac{\pi}{6}\right)\right)$ with equality only when $Q$ is a regular tetrahedron, octahedron or icosahedron. Moreover, if $Q$ is any polytope with $V=4,6$ or $12$ vertices we have $\vol_{n-1}(\partial Q)\leq ****$ with equality if and only if $Q$ is regular. ****state as a theorem** We also get the reverse for decreasing $\psi\in{\rm Conv}_{(0,\infty)}$. \end{comment} \begin{comment} While equality in Corollary \ref{R3cor} only holds in three cases, the optimizer must satisfy the condition $i_Q=o$. 
This idea can be exploited, for instance, to simplify the proof of the following result, which was recently shown in \cite{DHL}. \begin{theorem}\label{5verticesThm} Let $Q$ be the convex hull of five points chosen from the unit sphere $\mathbb{S}^2$. Then \[ \vol_2(\partial Q)\leq \frac{3\sqrt{15}}{2}=5.809475\ldots \] with equality if and only if $Q$ is a triangular bipyramid with vertices $\pm e_3, e_1, (-1/2,\sqrt{3}/2,0)$ and $(-1/2,-\sqrt{3}/2,0)$. \end{theorem} \begin{proof} Let $Q$ be a polytope generated by 5 points on the sphere. The cone volume formula provides \[ \vol_3(Q) = \sum_{j=1}^{N_Q}\vol_{3}(C_j) = \dfrac{1}{3}\sum_{j=1}^{N_Q}h_j \vol_2(F_j) , \] where $F_1,\dots, F_{N_Q}$ are the facets of $Q$ and $C_j$ is the face-cone corresponding to $F_j$ with apex $o$. For $1\leq j\leq N_Q$ we have \[ \vol_2(F_j) = \dfrac{3}{h_j}\vol_{3}(C_j), \] so Jensen's inequality provides the estimate \[ \dfrac{\vol_2(\partial Q)}{\vol_3(Q)} \leq \dfrac{3}{r_{Q}}, \] where equality holds when $i_Q=o$. Since there are only two combinatorial equivalence classes of polytopes in $\R^3$ with exactly five vertices and both admit inspheres, {\color{red} How do we know the global maximizer for $K=5$ has an insphere?} we can compute directly. Fixing the incenters to be $o$, denote the triangular bipyramid by $Q_1$, and the square pyramid by $Q_2$. Then we have $r_{Q_1}=1/\sqrt{5}$ and $r_{Q_2}=\sqrt{2}-1$. Calculating the volume yields the result: \[ \vol_2(\partial Q) \leq 3\max\left( \frac{\vol_3(Q_1)}{r_{Q_1}} , \frac{\vol_3(Q_2)}{r_{Q_2}}\right) =3\max\left(\dfrac{\sqrt{15}}{2},\dfrac{4\sqrt{2}}{9}\right) = \dfrac{3\sqrt{15}}{2}. \] \end{proof} \end{comment} L. Fejes T\'oth \cite[Ch. 
IX]{Toth-RegularFigures} also proved that if $Q$ is a polytope in $\R^3$ with $v$ vertices, $e$ edges, $f$ facets and inradius $r_Q$, then \begin{align} \vol_3(Q) &\geq \frac{e}{3}\sin\frac{\pi f}{e}\left(\tan^2\frac{\pi f}{2e}\tan^2\frac{\pi v}{2e}-1\right)r_Q^3 \label{volvef}\\ \vol_2(\partial Q) &\geq e\sin\frac{\pi f}{e}\left(\tan^2\frac{\pi f}{2e}\tan^2\frac{\pi v}{2e}-1\right)r_Q^2\label{SAvef} \end{align} with equality only for the Platonic solids. These inequalities highlight a fundamental connection between the volumetric and combinatorial properties of the Platonic solids. By Theorem \ref{mainThm} (i) and \eqref{SAvef}, we derive the following generalization. \begin{corollary}\label{veflower} Let $\psi\in{\rm Conv}_{(0,\infty)}$. If $Q\in\mathcal{P}_3$ has inradius $r_Q$, $v$ vertices, $e$ edges and $f$ facets, then \begin{align*} S_\psi(Q) \geq e\sin\frac{\pi f}{e}\left(\tan^2\frac{\pi f}{2e}\tan^2\frac{\pi v}{2e}-1\right)\psi(r_Q)r_Q^2. \end{align*} Equality holds only for the Platonic solids. \end{corollary} \begin{comment}If additionally $i_Q=o$, and $G:\R\to[0,\infty)$ is any function that contains $r_Q$ in its domain, then $S_G(Q)=G(r_Q)\vol_2(\partial Q)$. Hence \begin{align*} S_G(Q) \geq e\sin\frac{\pi f}{e}\left(\tan^2\frac{\pi f}{2e}\tan^2\frac{\pi v}{2e}-1\right)r_Q^2 \,G(r_Q) \end{align*} where $S_G(Q):=\sum_{F\in\mathcal{F}_{2}(Q)}G(\dist(o,F))\vol_2(F)$. Equality holds only for the Platonic solids. \end{comment} It was also shown in \cite[Ch. IX]{Toth-RegularFigures} that if $Q$ is a polytope in $\R^3$ with circumradius $R_Q$, then \begin{equation}\label{volvef2} \vol_3(Q) \leq \frac{2e}{3}\cos^2\frac{\pi f}{2e}\cot\frac{\pi v}{2e}\left(1-\cot^2\frac{\pi f}{2e}\cot^2\frac{\pi v}{2e}\right)R_Q^3 \end{equation} and equality holds only for the regular polytopes. A similar upper bound for the surface area of $Q$ was established in \cite[Ch. 
IX]{Toth-RegularFigures} under the additional assumption that $Q$ satisfies a \emph{foot condition}, where the foot of the perpendicular from the circumcenter of $Q$ to each facet-plane and each edge-line lies in the corresponding facet or edge. (Linhart \cite{Linhart} later showed the condition on the edges was superfluous.) If $Q$ satisfies the foot condition, then \begin{equation}\label{SAvef2} \vol_2(\partial Q) \leq e\sin\frac{\pi f}{2e}\left(1-\cot^2\frac{\pi f}{2e}\cot^2\frac{\pi v}{2e}\right)R_Q^2. \end{equation} \noindent Together with Theorem \ref{mainThm} (i), this leads immediately to \begin{corollary}\label{vefupper1} Let $\varphi\in{\rm Conc}_{(0,\infty)}$. Suppose that $Q\in\mathcal{P}_3$ has $v$ vertices, $e$ edges, $f$ facets, inradius $r_Q$ and circumradius $R_Q$. If $Q$ satisfies the foot condition, then \begin{align*} S_\varphi(Q) &\leq e\sin\frac{\pi f}{2e}\left(1-\cot^2\frac{\pi f}{2e}\cot^2\frac{\pi v}{2e}\right)\varphi(r_Q)R_Q^2. \end{align*} \end{corollary} \subsection{A Littlewood-type inequality for polytopes in $\R^n$} For fixed $K\geq n+1$, define \[ \mathcal{I}_{n,K}:=\{Q\in\mathcal{I}_n: Q\text{ has }K\text{ vertices}\}. \] We have a Littlewood-type inequality for the $L_p$ weighted functionals $S_p$ and $S_p^{*}$ defined in Remark \ref{volume remark}. \begin{lemma}\label{lw} Suppose that $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{n,K}}S_{p_0}(Q)$ and $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{n,K}}S_{p_1}(Q)$ for some $p_0,p_1\in(0,1)$ with $p_0<p_1$. Then $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{n,K}}S_{p}(Q)$ for all $p\in[p_0,p_1]$. The same is true if we replace $S_p$ by $S_p^*$. \end{lemma} \begin{proof} The arguments for $S_p$ and $S_p^*$ are virtually identical, so we show the result only for $S_p$. Suppose that $p_0,p_1\in[0,1]$ with $p_0<p_1$, and for $t\in[0,1]$ let $p_t$ denote the convex combination $p_{t} = (1-t)p_0+t p_1$. 
We use H\"{o}lder's inequality to obtain, for any $Q\in\mathcal{I}_n$, \begin{align*} S_{p_t}(Q) &= \sum_{j=1}^{N_Q}\left(\dist(o,F_j) \right)^{(1-t)(1-p_0)+t(1-p_1)}\vol_{n-1}(F_j)\\ &= \sum_{j=1}^{N_Q}\left(\dist(o,F_j) \right)^{(1-t)(1-p_0)}\vol_{n-1}(F_j)^{(1-t)}\left(\dist(o,F_j) \right)^{t(1-p_1)}\vol_{n-1}(F_j)^{t}\\ &\leq \left( \sum_{j=1}^{N_Q} \dist(o,F_j)^{1-p_0}\vol_{n-1}(F_j)\right)^{1-t}\left(\sum_{j=1}^{N_Q} \dist(o,F_j) ^{1-p_1}\vol_{n-1}(F_j)\right)^{t}\\ &=\left(S_{p_0}(Q)\right)^{1-t}\left(S_{p_1}(Q)\right)^{t}. \end{align*} Now suppose that $\widehat{Q}$ maximizes both $S_{p_0}$ and $S_{p_1}$, where $p_0,p_1 \in(0,1)$ with $p_0<p_1$. Theorem \ref{mainThm} gives us that $\widehat{Q} \in\mathcal{P}_n$, and for all $Q\in\mathcal{I}_n$ we have \begin{align*} S_{p_t}(Q) &\leq \left(S_{p_0}(\widehat{Q})\right)^{1-t}\left(S_{p_1}(\widehat{Q})\right)^{t}\\ &=\left(\sum_{j=1}^{N_{\widehat{Q}}} r_{\widehat{Q}}^{1-p_0}\vol_{n-1}(\widehat{F}_j) \right)^{1-t}\left(\sum_{j=1}^{N_{\widehat{Q}}} r_{\widehat{Q}}^{1-p_1}\vol_{n-1}(\widehat{F}_j)\right)^{t}\\ &=r_{\widehat{Q}}^{1-p_t}\vol_{n-1}(\partial \widehat{Q})\\ &=S_{p_t}(\widehat{Q}). \end{align*} \end{proof} \subsection{An Orlicz-type edge curvature functional for polytopes in $\R^3$} The mean width $W(C)$ of a convex body $C$ in $\R^n$ is defined by \[ W(C) = 2\int_{\mathbb{S}^{n-1}}h_C(u)\,d\sigma(u), \] where $h_C(u)=\max_{x\in C}\langle x,u\rangle$ is the support function of $C$ in the direction $u\in\mathbb{S}^{n-1}$ and $\sigma$ is the uniform probability measure on $\mathbb{S}^{n-1}$. In the special case $C=Q$ is a polytope in $\mathbb{R}^3$ with edge set $\mathcal{F}_1(Q)$, the \emph{edge curvature} $M(Q)$ of $Q$ is defined as \cite[p. 
278]{Toth-RegularFigures} \begin{equation} M(Q) = \frac{1}{2}\sum_{E\in\mathcal{F}_1(Q)}\ell_E\theta_E \end{equation} where $\ell_E$ is the length of edge $E$ and $\theta_E$ is the exterior angle at $E$, that is, the angle between the outer unit normals of the two facets that meet at $E$ (in other words, $\theta_E$ is $\pi$ minus the dihedral angle at $E$). Note that $M(Q)=2\pi W(Q)$. Let $\Lambda_Q:=\sum_{E\in\mathcal{F}_1(Q)}\ell_E$ denote the total edge length of $Q$ and set $\overline{\theta_Q}:=\sum_{E\in\mathcal{F}_1(Q)}\frac{\ell_E}{\Lambda_Q}\cdot \theta_E$. For $\varphi\in{\rm Conc}_{(0,\infty)}$ and $\psi\in{\rm Conv}_{(0,\infty)}$, we define the \emph{weighted edge curvature} functionals \begin{align*} M_\varphi(Q) :=\frac{1}{2}\sum_{E\in\mathcal{F}_1(Q)}\ell_E\varphi(\theta_E)\qquad \text{and}\qquad M_\psi(Q) :=\frac{1}{2}\sum_{E\in\mathcal{F}_1(Q)}\ell_E\psi(\theta_E). \end{align*} One may also define $M_\varphi^*(Q)$ and $M_\psi^*(Q)$ in a similar fashion. Rather than list all of the analogues of the previous results for these functionals, we simply remark that such results are possible and omit their statements, with the following exception. \begin{theorem}\label{MWthm} Let $\varphi\in{\rm Conc}_{(0,\infty)}$. If $Q\in\mathcal{I}_3$, then $M_\varphi(Q) \leq \frac{1}{2}\varphi(\overline{\theta_Q})\Lambda_Q$ with equality if and only if $\varphi$ is affine or all of the exterior angles of $Q$ are equal. If $\psi\in{\rm Conv}_{(0,\infty)}$, then the reverse inequality holds with the same equality conditions. \end{theorem} \noindent The proof follows along the same lines as that of Theorem \ref{mainThm}. We leave the details to the interested reader. Analogues of Corollaries \ref{mainThm2}, \ref{veflower} and \ref{vefupper1} may be derived similarly. \section{A variation on the theme and the proof of Theorem \ref{7vertices}} We would like to investigate the $K$ vertex \emph{surface area} problem on the unit sphere $\mathbb{S}^{n-1}$.
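Before turning to this problem, the edge curvature defined above admits a quick numerical sanity check. The following Python sketch computes $M(Q)$ for the unit cube directly from the definition; it assumes that the exterior angle at an edge is the angle between the outer unit normals of the two facets meeting there.

```python
import math

# Sanity check of the edge curvature M(Q) = (1/2) * sum_E (edge length * theta_E)
# for the unit cube, taking theta_E to be the angle between the outer unit
# normals of the two facets meeting at edge E (pi minus the dihedral angle).
def edge_curvature(edge_lengths, exterior_angles):
    return 0.5 * sum(l * t for l, t in zip(edge_lengths, exterior_angles))

# Unit cube: 12 edges of length 1; adjacent facet normals are orthogonal,
# so theta_E = pi/2 at every edge.
M_cube = edge_curvature([1.0] * 12, [math.pi / 2] * 12)
assert abs(M_cube - 3 * math.pi) < 1e-12  # classical value of the integral of mean curvature

# Consistency with the mean width: W(unit cube) = 3/2, and M_cube = 2*pi*(3/2).
assert abs(M_cube - 2 * math.pi * 1.5) < 1e-12
```

The cube's value $M=3\pi$ together with its mean width $W=3/2$ illustrates the normalization used here.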
To this end, we introduce a functional that is closely related to this problem. For $p\in(0,1)$ and $Q\in\mathcal{I}_n$ we define the \emph{$p$-weighted area functional} $A_{p}$ by \[ A_{p}(Q) := \sum_{j=1}^{N_Q} \left( \vol_{n-1}(F_j) \right)^p. \] Observe that $A_p$ is invariant under rigid motions. The geometric conditions satisfied by the maximizers of the cone-volume functionals $S_\varphi$ and $S_\varphi^*$ suggest that the maximizers of $A_p$ enjoy similar properties. For convenience, we denote the set of equiareal polytopes in $\mathcal{I}_n$ by $\mathcal{A}_n$. \begin{lemma}\label{NCs} Let $p\in(0,1)$. If $\widehat{Q} = \displaystyle\argmax_{Q\in\mathcal{I}_{n,K}} A_{p}(Q)$, then $\widehat{Q}\in\mathcal{A}_n$. Moreover, $\widehat{Q}$ is simplicial. \end{lemma} \begin{proof} That $\widehat{Q}\in\mathcal{A}_n$ follows from Jensen's inequality; the details are similar to those found in the proof of Theorem \ref{mainThm} and are omitted. To see that $\widehat{Q}$ is simplicial, we argue by contradiction. Suppose that some facet of $\widehat{Q}$, say $F_1$, is not a simplex. Then we may split $F_1$ into two facets to produce $N_{\widehat{Q}}+1$ facets. Since the equiareal condition must still hold, we would have $\vol_{n-1}(\partial \widehat{Q})/N_{\widehat{Q}} = \vol_{n-1}(\partial \widehat{Q})/(N_{\widehat{Q}}+1)$, which is impossible. \end{proof} The next result links the surface area maximizers and the $A_p$ maximizers for $Q\in\mathcal{I}_{n,K}$. \begin{lemma}\label{Ap-SA} Suppose that there exists $q_0\in(0,1)$ such that $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{n,K}} A_p(Q)$ for all $p\in[q_0,1)$. Then $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{n,K}} \vol_{n-1}(\partial Q)$. Conversely, if $\displaystyle\widehat{P}=\argmax_{Q\in\mathcal{I}_{n,K}}\vol_{n-1}(\partial Q)$ then $\displaystyle\widehat{P}=\argmax_{Q\in\mathcal{I}_{n,K}}A_p(Q)$ for all $p\in(0,1)$.
In particular, $\widehat{P}$ is simplicial. \end{lemma} \begin{proof} Note that for $p\in(0,1)$, the function $f(x)=x^p$ is increasing and concave when $x>0$. For any $Q\in\mathcal{I}_{n, K}$, \begin{align*} \vol_{n-1}(\partial Q) &= \sum_{j=1}^{N_Q} \vol_{n-1}(F_j)^p \vol_{n-1}(F_j)^{1-p} \leq \vol_{n-1}(B_{n-1})^{1-p} A_p(Q), \end{align*} and Jensen's inequality provides \begin{align*} A_p(Q) \leq N_Q^{1-p} \vol_{n-1}(\partial Q)^p. \end{align*} Hence, \[ \vol_{n-1}(B_{n-1})^{p-1}\vol_{n-1}(\partial Q) \leq A_p(Q) \leq f(n,K)^{1-p} \vol_{n-1}(\partial Q)^p \] where $f(n,K)$ is the positive constant from the Upper Bound Theorem. Therefore, \[ \lim_{p\to 1}A_p(Q) = \vol_{n-1}(\partial Q). \] Now suppose that $\widehat{P}=\argmax_{Q\in\mathcal{I}_{n,K}} \vol_{n-1}(\partial Q)$ and $\widehat{Q}=\argmax_{Q\in\mathcal{I}_{n,K}} A_p(Q)$ for all $p\in[q_0,1)$. The inequalities above imply that for all $p\in[q_0,1)$, \[ \vol_{n-1}(\partial \widehat{P})\leq \vol_{n-1}(B_{n-1})^{1-p}A_p(\widehat{Q})\leq (f(n,K)\vol_{n-1}(B_{n-1}))^{1-p}\vol_{n-1}(\partial \widehat{P})^p. \] Now the squeeze theorem provides the result: \[ \vol_{n-1}(\partial \widehat{P})\leq \vol_{n-1}(\partial \widehat{Q}) \leq \vol_{n-1}(\partial \widehat{P}). \] This shows that the surface area maximizer must also be simplicial. Conversely, we show that if $\widehat{P}=\argmax_{Q\in\mathcal{I}_{n,K}} \vol_{n-1}(\partial Q)$, then $\widehat{P}$ is also the maximizer of $A_p(Q)$ for all $p\in(0,1)$. Since $\widehat{P}$ is simplicial, the equality in Jensen's inequality provides \[ A_p(Q)\leq f(n,K)^{1-p}\vol_{n-1}(\partial \widehat{P})^p = A_p(\widehat{P}). \] \end{proof} \begin{comment} The next result links the surface area maximizers and the $A_p$ maximizers for $Q\in\mathcal{I}_{3,K}$. 
{\color{red} we could state this for general $\mathcal{I}_{n,K}$ where $\pi$ is replaced by $\vol_{n-1}(B_{n-1})$ and $N_Q\leq 2K-4$ bound is from the upperbound theorem $c_{K,n}$}{\color{blue} done below; if you agree with it, then we can remove the $n=3$ case} \begin{lemma}\label{Ap-SA} Let $K\geq 4$. Suppose that for some $q_0\in(0,1)$, we have for all $p\geq q_0$, $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{3,K}} A_p(Q)$. Then $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{3,K}} \vol_2(\partial Q)$. Conversely, if $\displaystyle\widehat{P}=\argmax_{Q\in\mathcal{I}_{3,K}}\vol_2(\partial Q)$ then $\displaystyle\widehat{P}=\argmax_{Q\in\mathcal{I}_{3,K}}A_p(Q)$ for $p\in(0,1)$. \end{lemma} \begin{proof} We note that for $0<p<1$, the function $f(x)=x^p$ is increasing and concave when $x>0$. For any $Q\in\mathcal{I}_{3, K}$, \begin{align*} \vol_{2}(\partial Q) &= \sum_{j=1}^{N_Q} \vol_2(F_j)^p \vol_2(F_j)^{1-p} \leq \pi^{1-p} A_p(Q), \end{align*} and Jensen's inequality provides \begin{align*} A_p(Q) \leq N_Q^{1-p} \left( \vol_2(\partial Q) \right)^p. \end{align*} Hence, we have \[ \pi^{p-1}\vol_2(\partial Q) \leq A_p(Q) \leq (2K-4)^{1-p} \vol_2(\partial Q)^p \] and thus \[ \lim_{p\to 1}A_p(Q) = \vol_2(\partial Q). \] Now suppose that $Q_1=\argmax_{Q\in\mathcal{I}_{3,K}} \vol_2(\partial Q)$ and $Q_2=\argmax_{Q\in\mathcal{I}_{3,K}} A_p(Q)$ for all $p\in[q_0,1)$. Using the inequalities above, we have for all $p\in[q_0,1)$, \[ \vol_2(\partial Q_1)\leq \pi^{1-p}A_p(Q_2)\leq ((2K-4)\pi)^{1-p}\vol_2(\partial Q_1)^p. \] Now the squeeze theorem provides the result: \[ \vol_2(\partial Q_1)\leq \vol_2(\partial Q_2) \leq \vol_2(\partial Q_1). \] This shows that the optimizer of surface area must also be simplicial. Now let's show if $Q_1=\argmax \vol_2(\partial Q)$, then it is also the optimizer of $A_p(Q)$ for some $p\in(0,1)$. Since $Q_1$ is simplicial, the equality in Jensen's inequality provides \[ A_p(Q)\leq (2K-4)^{1-p}(\vol_2(\partial Q_1))^p = A_p(Q_1). 
\] \end{proof} \end{comment} For a simplicial polytope $Q$ in $\R^3$, we introduce another version of the area operator for $0<p<1$: \[ A^*_p(Q):=\sum_{j=1}^{N_Q}\frac{\lambda_j}{2}r_j^p, \] where $\lambda_j$ is the perimeter of the facet $F_j$ and $r_j$ is the inradius of $F_j$. Similar to the above calculation, from the spherical shell inequality applied to each triangular facet, we deduce \[ \vol_2\left(\partial Q \right) \leq \frac{1}{2^{1-p}}A_p^*(Q). \] In the other direction, Jensen's inequality yields \[ A_p^*(Q) \leq \Lambda_{Q}^{1-p} \left(\vol_2(\partial Q) \right)^{p}, \] where $\Lambda_Q$ is the sum of the edge lengths of $Q$ and equality holds only when all of the facet inradii $r_j$ are the same. Combining these inequalities shows that for a fixed simplicial $Q$, $A_p^*(Q)\to\vol_2(\partial Q)$ as $p\to 1$. Furthermore, the surface area maximizer must have equal facet inradii. The area of a triangle is half the product of its inradius and perimeter. Therefore, combining the equiareal condition with the uniformity of the inradii shows that the surface area maximizer in $\mathcal{I}_{3,K}$ must have uniform facet perimeter too. We refer to a polytope in $\mathbb{R}^3$ that satisfies the latter condition as \emph{isoperimetric}. These geometric results are summarized in the following \begin{comment} \begin{lemma} If $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{3,K}} \vol_2(\partial Q) $, then $\widehat{Q}$ is simplicial, equiareal and isoperimetric. \end{lemma} \end{comment} \begin{lemma} If $\displaystyle\widehat{Q}=\argmax_{Q\in\mathcal{I}_{n,K}} \vol_{n-1}(\partial Q) $, then $\widehat{Q}$ is simplicial and equiareal. Furthermore, if $n=3$ then $\widehat{Q}$ is also isoperimetric. \end{lemma} \begin{comment} Let $\widehat{Q}_7$ denote the pentagonal bipyramid in the statement of the theorem. 
We will prove that if $p\in(0,1)$ then $\argmax_{Q\in\mathcal{I}_{3,7}}S_p^*(Q)=\widehat{Q}_7$, for then the conclusion will follow from Lemma \ref{argmaxlem2} (iii). Since $\varphi_p(t)=t^{1-p}$ is strictly concave for $p\in(0,1)$, by Propositions \ref{hatQprop} (iii) and \ref{congruent}, for any $p\in(0,1)$ the polytope $\argmax_{Q\in\mathcal{I}_{3,7}}S_p^*(Q)$ must be simplicial and have congruent facets. Hence we may restrict our attention to such polytopes. \end{comment} A complete enumeration of the 34 equivalence classes of convex polytopes in $\R^3$ with 7 vertices was given in \cite[p. 365]{BrittonDunitz1973}. Among these classes, there are precisely five which are simplicial. Four of the latter classes consist of polytopes that have a degree 3 vertex. This feature imposes an additional local geometric constraint on the surface area maximizer, which is a variation of \cite[Lemma 3]{DHL}. In what follows, let $B_3$ denote the Euclidean unit ball in $\mathbb{R}^3$ centered at the origin. \begin{lemma}\label{Property-L} Let $T$ be a tetrahedron contained in a cap of $B_3$ such that the base vertices of $T$ lie in the base of the cap. Denote the apex of $T$ by $v$, and suppose that the orthogonal projection $\pi_v$ of $v$ lies in the base triangle. Then the lateral surface area of $T$ is maximized only when $\pi_v$ coincides with the incenter of the base triangle. \end{lemma} \begin{comment} For the reader's convenience, we provide figures of each of these classes below. Each vertex is labeled with its degree. 
\vspace{1mm} \begin{center} \begin{tikzpicture}[scale=1.75] \coordinate (v1) at (1,0); \coordinate (v2) at (1/2,{sqrt(3)/2}); \coordinate (v3) at (-1/2,{sqrt(3)/2}); \coordinate (v4) at (-1,0); \coordinate (v5) at (-1/2,{-sqrt(3)/2}); \coordinate (v6) at (1/2,{-sqrt(3)/2}); \coordinate (v7) at (0,0.25); \begin{scope}[thick] \draw (v1)--(v2); \draw (v2)--(v3); \draw (v3)--(v4); \draw (v4)--(v5); \draw (v5)--(v6); \draw (v6)--(v1); \draw (v7)--(v1); \draw (v7)--(v2); \draw (v7)--(v3); \draw (v7)--(v4); \draw (v7)--(v5); \draw (v7)--(v6); \end{scope} \begin{scope}[dashed] \draw (v1)--(v3); \draw (v3)--(v5); \draw (v5)--(v1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (c1) at (v1){5}; \node[circle,draw,fill=white,scale=0.8] (c2) at (v2){3}; \node[circle,draw,fill=white,scale=0.8] (c3) at (v3){5}; \node[circle,draw,fill=white,scale=0.8] (c4) at (v4){3}; \node[circle,draw,fill=white,scale=0.8] (c5) at (v5){5}; \node[circle,draw,fill=white,scale=0.8] (c6) at (v6){3}; \node[circle,draw,fill=white,scale=0.8] (c7) at (v7){6}; \coordinate (L1) at (0,{-sqrt(3)/2}); \node[yshift=-7mm] at (L1) {Class I}; \end{tikzpicture} \begin{tikzpicture}[scale=1.75] \begin{scope}[thick] \draw (v1)--(v2); \draw (v2)--(v3); \draw (v3)--(v4); \draw (v4)--(v5); \draw (v5)--(v6); \draw (v6)--(v1); \draw (v7)--(v1); \draw (v7)--(v2); \draw (v7)--(v3); \draw (v7)--(v4); \draw (v7)--(v5); \draw (v7)--(v6); \end{scope} \begin{scope}[dashed] \draw (v1)--(v3); \draw (v4)--(v1); \draw (v5)--(v1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (c1) at (v1){6}; \node[circle,draw,fill=white,scale=0.8] (c2) at (v2){3}; \node[circle,draw,fill=white,scale=0.8] (c3) at (v3){4}; \node[circle,draw,fill=white,scale=0.8] (c4) at (v4){4}; \node[circle,draw,fill=white,scale=0.8] (c5) at (v5){4}; \node[circle,draw,fill=white,scale=0.8] (c6) at (v6){3}; \node[circle,draw,fill=white,scale=0.8] (c7) at (v7){6}; \node[yshift=-7mm] at (L1) {Class II}; \end{tikzpicture} \begin{tikzpicture}[scale=1.75] 
\begin{scope}[thick] \draw (v1)--(v2); \draw (v2)--(v3); \draw (v3)--(v4); \draw (v4)--(v5); \draw (v5)--(v6); \draw (v6)--(v1); \draw (v7)--(v1); \draw (v7)--(v2); \draw (v7)--(v3); \draw (v7)--(v4); \draw (v7)--(v5); \draw (v7)--(v6); \end{scope} \begin{scope}[dashed] \draw (v4)--(v2); \draw (v4)--(v1); \draw (v5)--(v1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (c1) at (v1){5}; \node[circle,draw,fill=white,scale=0.8] (c2) at (v2){4}; \node[circle,draw,fill=white,scale=0.8] (c3) at (v3){3}; \node[circle,draw,fill=white,scale=0.8] (c4) at (v4){5}; \node[circle,draw,fill=white,scale=0.8] (c5) at (v5){4}; \node[circle,draw,fill=white,scale=0.8] (c6) at (v6){3}; \node[circle,draw,fill=white,scale=0.8] (c7) at (v7){6}; \node[yshift=-7mm] at (L1) {Class III}; \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture}[scale=1.75] \coordinate (u1) at (1,0); \coordinate (u2) at ({cos(360/7)},{sin(360/7)}); \coordinate (u3) at ({cos(2*360/7)},{sin(2*360/7)}); \coordinate (u4) at ({cos(3*360/7)},{sin(3*360/7)}); \coordinate (u5) at ({cos(4*360/7)},{sin(4*360/7)}); \coordinate (u6) at ({cos(5*360/7)},{sin(5*360/7)}); \coordinate (u7) at ({cos(6*360/7)},{sin(6*360/7)}); \begin{scope}[thick] \draw (u1)--(u2); \draw (u2)--(u3); \draw (u3)--(u4); \draw (u4)--(u5); \draw (u5)--(u6); \draw (u6)--(u7); \draw (u7)--(u1); \end{scope} \begin{scope}[thick] \draw (u2)--(u7); \draw (u2)--(u6); \draw (u3)--(u6); \draw (u3)--(u5); \end{scope} \begin{scope}[dashed] \draw (u2)--(u4); \draw (u4)--(u1); \draw (u5)--(u1); \draw (u6)--(u1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (d1) at (u1){5}; \node[circle,draw,fill=white,scale=0.8] (d2) at (u2){5}; \node[circle,draw,fill=white,scale=0.8] (d3) at (u3){4}; \node[circle,draw,fill=white,scale=0.8] (d4) at (u4){4}; \node[circle,draw,fill=white,scale=0.8] (d5) at (u5){4}; \node[circle,draw,fill=white,scale=0.8] (d6) at (u6){5}; \node[circle,draw,fill=white,scale=0.8] (d7) at (u7){3}; \node[yshift=-9mm] at (L1) {Class 
IV}; \end{tikzpicture} \begin{tikzpicture}[scale=1.75] \begin{scope}[thick] \draw (u1)--(u2); \draw (u2)--(u3); \draw (u3)--(u4); \draw (u4)--(u5); \draw (u5)--(u6); \draw (u6)--(u7); \draw (u7)--(u1); \end{scope} \begin{scope}[thick] \draw (u2)--(u4); \draw (u4)--(u1); \draw (u5)--(u1); \draw (u5)--(u7); \end{scope} \begin{scope}[dashed] \draw (u2)--(u7); \draw (u2)--(u6); \draw (u6)--(u3); \draw (u5)--(u3); \end{scope} \node[circle,draw,fill=white,scale=0.8] (d1) at (u1){4}; \node[circle,draw,fill=white,scale=0.8] (d2) at (u2){5}; \node[circle,draw,fill=white,scale=0.8] (d3) at (u3){4}; \node[circle,draw,fill=white,scale=0.8] (d4) at (u4){4}; \node[circle,draw,fill=white,scale=0.8] (d5) at (u5){5}; \node[circle,draw,fill=white,scale=0.8] (d6) at (u6){4}; \node[circle,draw,fill=white,scale=0.8] (d7) at (u7){4}; \node[yshift=-9mm] at (L1) {Class V}; \end{tikzpicture} \end{center} \subsection{A local optimality criterion for surface area maximizers} Additionally we will use the following necessary geometric condition regarding vertices of degree 3, which is a variation of \cite[Lemma 3]{DHL}. \begin{lemma}\label{Property-L} Let $T$ be a tetrahedron contained in a cap of $B_3$ such that the base vertices of $T$ lie in the base of the cap. Denote the apex of $T$ by $v$, and suppose that the orthogonal projection $\pi_v$ of $v$ lies in the base triangle. Then the lateral surface area of $T$ is maximized only when $\pi_v$ coincides with the incenter of the base triangle. \end{lemma} \end{comment} Suppose that $Q_3\in\mathcal{I}_{3,K}$ has at least one degree 3 vertex. We say that $Q$ satisfies \emph{Property $\mathscr{L}$} if for every degree 3 vertex $v$ of $Q_3$, the orthogonal projection of $v$ into the base triangle lies at the triangle's incenter. By Lemma \ref{Property-L}, the global maximizer $\widehat{Q}_K:=\argmax_{Q\in\mathcal{I}_{3,K}}\vol_2(\partial Q)$ must satisfy Property $\mathscr{L}$. 
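Lemma \ref{Property-L} can be sanity-checked numerically in the symmetric special case of an equilateral base triangle, where the incenter coincides with the circumcenter. The following Python sketch (the cap height $h=0.4$ and the sample radii are arbitrary choices, not taken from the text) verifies that an apex on the sphere above the incenter yields a larger lateral surface area than apexes above nearby off-center points.

```python
import math

# Equilateral base triangle inscribed in the circle of radius rho = sqrt(1 - h^2)
# in the plane z = h; apex on the unit sphere above a point (x, y) of the base.
# Lateral area = (1/2) * sum over base edges of s_i * sqrt(d_i^2 + (z - h)^2),
# where d_i is the in-plane distance from (x, y) to the line of the i-th edge.
h = 0.4  # arbitrary cap height
rho = math.sqrt(1 - h * h)
verts = [(rho * math.cos(2 * math.pi * k / 3),
          rho * math.sin(2 * math.pi * k / 3)) for k in range(3)]

def lateral_area(x, y):
    z = math.sqrt(max(0.0, 1 - x * x - y * y))  # apex on the unit sphere
    total = 0.0
    for k in range(3):
        (x1, y1), (x2, y2) = verts[k], verts[(k + 1) % 3]
        s = math.hypot(x2 - x1, y2 - y1)  # base edge length
        # distance from (x, y) to the line through the two base vertices
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / s
        total += 0.5 * s * math.sqrt(d * d + (z - h) ** 2)
    return total

# The incenter (= circumcenter = origin in this symmetric case) beats
# nearby off-center projection points; all samples lie inside the base triangle.
best = lateral_area(0.0, 0.0)
for r in (0.05, 0.1, 0.2, 0.3):
    for j in range(12):
        ang = 2 * math.pi * j / 12
        assert lateral_area(r * math.cos(ang), r * math.sin(ang)) < best
```

This only probes the symmetric case, of course; the content of the lemma is that the incenter remains optimal for a general base triangle.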
We use Property $\mathscr{L}$ as a tool in eliminating the combinatorial classes that admit a degree 3 vertex. A similar strategy was employed by Berman and Hanes \cite{BermanHanes1970} to determine the volume maximizers for $K=7,8$. \vspace{2mm} To formulate the next result, let $[x_1,\ldots,x_m]$ denote the convex hull of a finite set of points $x_1,\ldots,x_m$ in $\R^3$. \begin{corollary}\label{based} Suppose that $Q\in\mathcal{I}_{3,K}$ is simplicial, equiareal, satisfies Property $\mathscr{L}$ and has a degree 3 vertex $v$ with incident vertices $v_1, v_2$ and $v_3$. Then the triangular base of the tetrahedron $T_v:=[v,v_1,v_2,v_3]$ is equilateral. \end{corollary} \begin{proof} Let $t_v:=\dist(v,[v_1,v_2,v_3])$ denote the height of the tetrahedron $T_v$, and let $s_1, s_2$ and $s_3$ denote the side lengths of the base. The area of the $i$th lateral facet $F_i$ of $T_v$ is $\frac{1}{2}s_i\sqrt{p_i^2+t_v^2}$, where $p_i:=\dist(\pi_v,[v_i,v_{i+1}])$ (mod 3). Since $Q$ satisfies Property $\mathscr{L}$, the $p_i$ are all equal to the inradius $r_v$ of the base. (Here we have used that the base of $T_v$ is a triangle and every triangle has an incenter.) Since $Q$ is equiareal, the numbers \[ \vol_2(F_i) = \frac{1}{2}s_i\sqrt{r_v^2+t_v^2}, \quad i=1,2,3 \] are all equal. Therefore, $s_1=s_2=s_3$. \end{proof} Suppose that $Q$ has a degree 3 vertex $v$, satisfies Property $\mathscr{L}$ and is equiareal. Then Corollary \ref{based} provides that all facets containing $v$ are isosceles. If we label the base $a$ and the other two sides $b$, the lengths of the base $a$ and sides $b$ are given by $a(h)=\sqrt{3}\sqrt{1-h^2}$ and $b(h)=\sqrt{2-2h},$ where $h\in(0,1)$ is the distance from the origin to the plane spanned by the vertices incident to $v$. \begin{comment}Additionally, the area of each of these facets is given by \begin{equation}\label{A1h} A_1(h):=\dfrac{\sqrt{3}(1-h)\sqrt{(5-3h)(1+h)}}{4}. 
\end{equation} \end{comment} \begin{comment} We will make use of the following formula. Suppose that $Q$ has a degree 3 vertex $v$, satisfies Property $\mathscr{L}$, and is equiareal. Then the area of each facet containing $v$ equals \begin{equation}\label{A1h} A_1(h):=\dfrac{\sqrt{3}(1-h)\sqrt{(5-3h)(1+h)}}{4} \end{equation} where $h\in(0,1)$ is the distance from the origin to the plane spanned by the vertices incident to $v$. Additionally, \end{comment} \begin{lemma}\label{technical} Let $K\geq 4$. Suppose that $Q\in\mathcal{I}_{3,K}$ is simplicial, equiareal, isoperimetric, satisfies Property $\mathscr{L}$ and has a degree 3 vertex $v$ with incident vertices $v_1, v_2$ and $v_3$. Then every facet that contains an edge of the form $[v_j,v_{j+1}] \, (\mathrm{mod}\, 3)$ is congruent to the lateral facets of the tetrahedron $[v,v_1,v_2,v_3]$. \end{lemma} \begin{proof} By Corollary \ref{based}, the base of the tetrahedron, triangle $[v_1,v_2,v_3]$, is equilateral with side length $a=a(h)$. The lateral facets of this tetrahedron are isosceles triangles by Lemma \ref{Property-L}, whose equal side lengths we call $b=b(h)$. Consider a facet which shares an edge with the base, and let $a,\ell_1,\ell_2$ denote its edge lengths. The isoperimetric condition implies $\ell_1+\ell_2=2b$, and we can use Heron's formula to equate the areas: \begin{align*} 4b^4-(2b^2-a^2)^2&=4\ell_1^2\ell_2^2 - (\ell_1^2+\ell_2^2 - a^2)^2\\ &=4(2b-\ell_2)^2\ell_2^2 - ((2b-\ell_2)^2+\ell_2^2 - a^2)^2\\ &=4(4b^2-4b\ell_2+\ell_2^2)\ell_2^2 - ( 4b^2-4b\ell_2+2\ell_2^2 -a^2)^2\\ &=\ell_2^2(4a^2-16b^2)+\ell_2(32b^3-8a^2b)-a^4+8a^2 b^2-16b^4. \end{align*} Thus, \begin{align*} 0&=4(a^2-4b^2)\ell_2^2-8b(a^2-4b^2)\ell_2+4b^2(a^2-4b^2) \\ &=4(a-2b)(a+2b)(\ell_2-b)^2. \end{align*} This yields just two possibilities: $a=2b$ or $\ell_2=b$. The first is prohibited by the triangle inequality on the lateral facet of the tetrahedron. Hence $b=\ell_1=\ell_2$, so the facets are congruent.
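As an independent check, the polynomial identity underlying the last two displays can be tested numerically; the following Python sketch verifies that substituting $\ell_1=2b-\ell_2$ into Heron's expression and subtracting the lateral-facet expression yields exactly $4(a^2-4b^2)(\ell_2-b)^2$.

```python
import random

# Check the identity used above: with l1 = 2b - l2,
#   [4*l1^2*l2^2 - (l1^2 + l2^2 - a^2)^2] - [4b^4 - (2b^2 - a^2)^2]
#     = 4*(a^2 - 4b^2)*(l2 - b)^2
# at randomly sampled parameter values.
random.seed(0)
for _ in range(1000):
    a, b, l2 = (random.uniform(0.1, 3.0) for _ in range(3))
    heron_side = (4 * (2 * b - l2) ** 2 * l2 ** 2
                  - ((2 * b - l2) ** 2 + l2 ** 2 - a ** 2) ** 2)
    lateral_side = 4 * b ** 4 - (2 * b ** 2 - a ** 2) ** 2
    factored = 4 * (a ** 2 - 4 * b ** 2) * (l2 - b) ** 2
    assert abs((heron_side - lateral_side) - factored) < 1e-8
```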
\end{proof} \subsection{Proof of Theorem \ref{7vertices}} In light of Lemma \ref{Ap-SA} and the subsequent discussion, we need only consider simplicial $Q\in\mathcal{I}_{3,7}$, which leaves five combinatorial classes. The following lemma eliminates four of them. \begin{lemma}\label{final-lem} Suppose that $Q\in\mathcal{I}_{3,7}$ is simplicial and $p\in(0,1)$. If $Q$ has a degree 3 vertex, then $Q$ cannot be the maximizer for both $A_p$ and $A_p^*$ simultaneously. \end{lemma} \begin{proof} From the preceding discussion, there are only four classes to consider. The general strategy is to label the degree 3 vertices using Lemma \ref{based}, where the isosceles lateral facets of the tetrahedron have side lengths $a$, $b$, and $b$, then use Lemma \ref{technical} to label the other edges. We include a diagram for each class for the reader's convenience, and then describe the conclusions in each class. \begin{center} \begin{tikzpicture}[scale=1.5] \coordinate (v1) at (1,0); \coordinate (v2) at (1/2,{sqrt(3)/2}); \coordinate (v3) at (-1/2,{sqrt(3)/2}); \coordinate (v4) at (-1,0); \coordinate (v5) at (-1/2,{-sqrt(3)/2}); \coordinate (v6) at (1/2,{-sqrt(3)/2}); \coordinate (v7) at (0,0.25); \begin{scope}[thick] \draw (v1)--(v2); \draw (v2)--(v3); \draw (v3)--(v4); \draw (v4)--(v5); \draw (v5)--(v6); \draw (v6)--(v1); \draw (v7)--(v1); \draw (v7)--(v2); \draw (v7)--(v3); \draw (v7)--(v4); \draw (v7)--(v5); \draw (v7)--(v6); \end{scope} \begin{scope}[dashed] \draw (v1)--(v3); \draw (v3)--(v5); \draw (v5)--(v1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (c1) at (v1){5}; \node[circle,draw,fill=white,scale=0.8] (c2) at (v2){3}; \node[circle,draw,fill=white,scale=0.8] (c3) at (v3){5}; \node[circle,draw,fill=white,scale=0.8] (c4) at (v4){3}; \node[circle,draw,fill=white,scale=0.8] (c5) at (v5){5}; \node[circle,draw,fill=white,scale=0.8] (c6) at (v6){3}; \node[circle,draw,fill=white,scale=0.8] (c7) at (v7){6}; \coordinate (L1) at (0,{-sqrt(3)/2}); \node[yshift=-7mm] at 
(L1) {Class I}; \end{tikzpicture} \begin{tikzpicture}[scale=1.5] \begin{scope}[thick] \draw (v1)--(v2); \draw (v2)--(v3); \draw (v3)--(v4); \draw (v4)--(v5); \draw (v5)--(v6); \draw (v6)--(v1); \draw (v7)--(v1); \draw (v7)--(v2); \draw (v7)--(v3); \draw (v7)--(v4); \draw (v7)--(v5); \draw (v7)--(v6); \end{scope} \begin{scope}[dashed] \draw (v1)--(v3); \draw (v4)--(v1); \draw (v5)--(v1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (c1) at (v1){6}; \node[circle,draw,fill=white,scale=0.8] (c2) at (v2){3}; \node[circle,draw,fill=white,scale=0.8] (c3) at (v3){4}; \node[circle,draw,fill=white,scale=0.8] (c4) at (v4){4}; \node[circle,draw,fill=white,scale=0.8] (c5) at (v5){4}; \node[circle,draw,fill=white,scale=0.8] (c6) at (v6){3}; \node[circle,draw,fill=white,scale=0.8] (c7) at (v7){6}; \node[yshift=-7mm] at (L1) {Class II}; \end{tikzpicture} \begin{tikzpicture}[scale=1.5] \begin{scope}[thick] \draw (v1)--(v2); \draw (v2)--(v3); \draw (v3)--(v4); \draw (v4)--(v5); \draw (v5)--(v6); \draw (v6)--(v1); \draw (v7)--(v1); \draw (v7)--(v2); \draw (v7)--(v3); \draw (v7)--(v4); \draw (v7)--(v5); \draw (v7)--(v6); \end{scope} \begin{scope}[dashed] \draw (v4)--(v2); \draw (v4)--(v1); \draw (v5)--(v1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (c1) at (v1){5}; \node[circle,draw,fill=white,scale=0.8] (c2) at (v2){4}; \node[circle,draw,fill=white,scale=0.8] (c3) at (v3){3}; \node[circle,draw,fill=white,scale=0.8] (c4) at (v4){5}; \node[circle,draw,fill=white,scale=0.8] (c5) at (v5){4}; \node[circle,draw,fill=white,scale=0.8] (c6) at (v6){3}; \node[circle,draw,fill=white,scale=0.8] (c7) at (v7){6}; \node[yshift=-7mm] at (L1) {Class III}; \end{tikzpicture} \begin{tikzpicture}[scale=1.5] \coordinate (u1) at (1,0); \coordinate (u2) at ({cos(360/7)},{sin(360/7)}); \coordinate (u3) at ({cos(2*360/7)},{sin(2*360/7)}); \coordinate (u4) at ({cos(3*360/7)},{sin(3*360/7)}); \coordinate (u5) at ({cos(4*360/7)},{sin(4*360/7)}); \coordinate (u6) at 
({cos(5*360/7)},{sin(5*360/7)}); \coordinate (u7) at ({cos(6*360/7)},{sin(6*360/7)}); \begin{scope}[thick] \draw (u1)--(u2); \draw (u2)--(u3); \draw (u3)--(u4); \draw (u4)--(u5); \draw (u5)--(u6); \draw (u6)--(u7); \draw (u7)--(u1); \end{scope} \begin{scope}[thick] \draw (u2)--(u7); \draw (u2)--(u6); \draw (u3)--(u6); \draw (u3)--(u5); \end{scope} \begin{scope}[dashed] \draw (u2)--(u4); \draw (u4)--(u1); \draw (u5)--(u1); \draw (u6)--(u1); \end{scope} \node[circle,draw,fill=white,scale=0.8] (d1) at (u1){5}; \node[circle,draw,fill=white,scale=0.8] (d2) at (u2){5}; \node[circle,draw,fill=white,scale=0.8] (d3) at (u3){4}; \node[circle,draw,fill=white,scale=0.8] (d4) at (u4){4}; \node[circle,draw,fill=white,scale=0.8] (d5) at (u5){4}; \node[circle,draw,fill=white,scale=0.8] (d6) at (u6){5}; \node[circle,draw,fill=white,scale=0.8] (d7) at (u7){3}; \node[yshift=-9mm] at (L1) {Class IV}; \end{tikzpicture} \end{center} The first class is ruled out since this edge labeling forces one of the facets (in this case the dashed one) to be equilateral with side length $a$, which in turn forces $a=b$. This leads to a contradiction since the angle defect at the degree 6 vertex would be 0. The second class may be labeled using Lemma \ref{technical} to produce 3 isosceles triangles whose base length is $a$ along the edge $e$ connecting the degree 6 vertices. The plane orthogonal to this edge passing through its midpoint cuts the sphere in a circle whose radius is the height of these isosceles triangles. This forces the edge $e$ to contain the origin, which means that $a=2$, which is impossible. The third class may also be labeled using Lemma \ref{technical} which leads to two facets with edge lengths $a,a$ and $b$ (the two facets whose vertex degrees are 4, 5 and 6). Hence the isoperimetric property provides $a=b$. Just as in Class I, this forces the angle defect at the degree 6 vertex to be 0, which is impossible. 
Labeling a polytope in the fourth class yields 9 isosceles triangles and one equilateral facet (whose vertex degrees are all 4) with side length $a$. The isoperimetric property yields $a=b$. Now we can compute the areas of the facets. Writing $A_1$ and $A_2$ for the areas of the isosceles and equilateral facets, respectively, straightforward computations yield \begin{align*} A_1(h)& =\dfrac{\sqrt{3}(1-h)\sqrt{(5-3h)(1+h)}}{4} \quad \text{ and }\\ A_2(h)& =\dfrac{3\sqrt{3}}{4}(1-h^2). \end{align*} Equating these yields three solutions $h=-1,-1/3,$ or $1$. Since $h\in (0,1)$, the only polytopes in this class that satisfy the equiareal condition are degenerate. \end{proof} Lemma \ref{final-lem} shows that the surface area maximizer must be in Class V. We consider simplicial polytopes, which in the case $K=7$ all have 10 facets; by the equiareal condition, $A_p(Q)=10\left(\vol_2(\partial Q)/10\right)^p$. Since $x\mapsto x^p$ is increasing for $p\in(0,1)$, $A_p$ is maximized precisely when the surface area is maximized. The following lemma finishes the proof of Theorem \ref{7vertices}. \begin{lemma}\cite[Cor. 2]{DHL}\label{BP-lemma} Let $Q$ be a bipyramid with $K$ vertices inscribed in $\mathbb{S}^2$. Then \[ \vol_2(\partial Q) \leq 2(K-2)\left(1+\cos^2\tfrac{\pi}{K-2}\right)^{1/2}\sin\tfrac{\pi}{K-2} \] with equality if and only if $Q$ is a rotation of the convex hull of the north and south poles $\pm e_3$ and the regular $(K-2)$-gon inscribed in the equator $\mathbb{S}^2\cap(\spann(e_3))^\perp$. \end{lemma} In particular, this implies that the surface area maximizer in Class V is the pentagonal bipyramid with two vertices at the north and south poles and five more forming a regular pentagon in the equator. It has surface area $\frac{5}{4}\sqrt{50-6\sqrt{5}}\approx 7.56$. This concludes the proof of Theorem \ref{7vertices}.
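The claimed surface area can be confirmed directly from coordinates; the following Python sketch builds the pentagonal bipyramid inscribed in the unit sphere and sums the facet areas.

```python
import math

# Pentagonal bipyramid inscribed in the unit sphere: apexes at the poles and
# a regular pentagon in the equator.
poles = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
penta = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5), 0.0)
         for k in range(5)]

def tri_area(p, q, r):
    # half the norm of the cross product of two edge vectors
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

area = sum(tri_area(pole, penta[k], penta[(k + 1) % 5])
           for pole in poles for k in range(5))
# Agrees with the closed form (5/4) * sqrt(50 - 6*sqrt(5)) = 7.5605...
assert abs(area - 1.25 * math.sqrt(50 - 6 * math.sqrt(5))) < 1e-9
```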
\qed \subsection{Further remarks} In connection with the $K$ vertex problem for volume maximization, for $p\in(0,1)$ and $Q\in\mathcal{I}_n$ one may similarly define the \emph{$p$-weighted volume} $V_{p}(Q)$ by \[ V_{p}(Q) :=\sum_{j=1}^{N_Q} \left( \vol_{n}(C_j) \right)^p, \] where $C_j$ is the facet-cone with facet $F_j$ as the base and apex $o$. As before, $V_p$ is invariant under rigid motions. Consider the set of polytopes \[ \mathcal{V}_n :=\left\{Q\in\mathcal{I}_n: \vol_{n}(C_j) = \vol_{n}(Q)/N_{Q} \text{ for all } 1\leq j\leq N_Q \right\} \] whose facet-cones all have equal volume. Although we do not take it up here, analogues of Lemmas \ref{NCs} and \ref{Ap-SA} may be proven in nearly identical fashion for $V_p$, which allows one to approach the volume maximization problem in a similar way to the surface area problem. We mention only that for $n=3$, the optimal solutions known so far ($K\leq 8$, due to Berman and Hanes \cite{BermanHanes1970}) are in $\mathcal{V}_3$. We conclude by mentioning that the surface area and volume maximizers need not coincide. For $K=8$, the volume maximizer from \cite{BermanHanes1970} is not equiareal, and hence it cannot be the surface area maximizer. In fact, one can check with the aid of mathematical software (such as Mathematica) that adjusting the value of the angle parameter given in \cite{BermanHanes1970} from $\varphi=\arccos\left(\sqrt{(15+\sqrt{145})/40}\right)$ to $\varphi=0.62$ increases the surface area. In the context of quantum theory, it was conjectured by Kazakov \cite[Conj. 4.2]{kazakov-thesis} that the volume and surface area maximizers coincide. The counterexample above shows that this is not the case. It also shows that the volume and surface area discrepancies from crystallography are, in fact, distinct measures of distortion for the coordination polyhedra of ligand atoms on the unit sphere \cite{DHL, Makovicky}. \bibliographystyle{plain}
https://arxiv.org/abs/1601.02177
Optimal-order bounds on the rate of convergence to normality for maximum likelihood estimators
It is well known that under general regularity conditions the distribution of the maximum likelihood estimator (MLE) is asymptotically normal. Very recently, bounds of the optimal order $O(1/\sqrt n)$ on the closeness of the distribution of the MLE to normality in the so-called bounded Wasserstein distance were obtained, where $n$ is the sample size. However, the corresponding bounds on the Kolmogorov distance were only of the order $O(1/n^{1/4})$. In this note, bounds of the optimal order $O(1/\sqrt n)$ on the closeness of the distribution of the MLE to normality in the Kolmogorov distance are given, as well as their nonuniform counterparts, which work better for large deviations of the MLE. These results are based on previously obtained general optimal-order bounds on the rate of convergence to normality in the multivariate delta method. The crucial observation is that, under natural conditions, the MLE can be tightly enough bracketed between two smooth enough functions of the sum of independent random vectors, which makes the delta method applicable.
\section{Introduction} \label{intro} Let us begin with the following quote from Kiefer \cite{kiefer68} of 1968: \begin{quote} a second area of what seem to me important problems to work on has to do with the fact that we do have, in many settings, quite a good large sample theory, but we don't know how large the sample sizes have to be for that theory to take hold. Now, I'm sure most of you are familiar with the error estimate one can give for the classical central-limit theorem, which goes by the name of the Berry-Esseen estimate, and which tells you that under certain assumptions one can actually give an explicit bound on the departure from the normal distribution of the sample mean for a given sample size, the error term being of order $1/\sqrt n$. For most other statistical problems, in fact for almost anything other than the use of the sample mean, we have nothing. The most obvious example of this (and this is not original with me; many people have been concerned with this), is the maximum likelihood estimator in the case of regular estimation. We all know what the asymptotic distribution is. Can you give explicitly some useful bound on the departure from the asymptotic normal distribution as a function of the sample size $n$? It seems to be a terrifically difficult problem. \end{quote} Since then, there has been some significant progress in this direction, especially rather recently. For instance, Berry--Esseen-type bounds of order $1/\sqrt n$ were obtained for $U$-statistics -- see e.g.\ \cite{kor94}; for the Student statistic \cite{bent96,bbg96}; and, even more recently, for rather broad classes of other statistics that depend on the observations in a nonlinear fashion \cite{chen07,nonlinear-publ}. As Kiefer pointed out, it is well known that, under general regularity conditions, the distribution of the maximum likelihood estimator (MLE) is asymptotically normal. In this paper, we shall consider Berry--Esseen-type bounds of order $1/\sqrt n$ for the MLE. 
First such bounds were apparently obtained in the paper \cite{michel-pfanzagl71}, followed by \cite{pfanzagl71,pfanzagl73}. Very recently, bounds on the closeness of the distribution of the MLE to normality in the so-called bounded Wasserstein distance, $d_{\bW}$, were obtained in \cite{anast-rein_publ}. In the rather common special case when the MLE $\hat\th$ is expressible as a smooth enough function of a linear statistic of independent identically distributed (i.i.d.) observations, the bounds obtained in \cite{anast-rein_publ} were sharpened and simplified in \cite{anast-ley} by using a version of the delta method. More specifically, it was assumed in \cite{anast-ley} that \begin{equation}\label{eq:q(th)} q(\hat\th)=\frac1n\,\sum_{i=1}^n g(X_i), \end{equation} where $q\colon\Th\to\R$ is a twice continuously differentiable one-to-one mapping, $g\colon\R\to\R$ is a Borel-measurable function, and the $X_i$'s are i.i.d.\ real-valued r.v.'s. It was noted in \cite[Proposition~2.1]{anast-rein_publ} that for any r.v.\ $Y$ and a standard normal r.v.\ $Z$ one has $d_\Ko(Y,Z)\le2\sqrt{d_{\bW}(Y,Z)}$, where $d_\Ko$ denotes the Kolmogorov distance. This bound on $d_\Ko$ in terms of $d_{\bW}$ is the best possible one, up to a constant factor, as shown in \cite{nonlinear-publ}. Therefore, even though the bounds on the bounded Wasserstein distance $d_{\bW}$ obtained in \cite{anast-rein_publ,anast-ley} are of the optimal order $O(1/\sqrt n)$, the resulting bounds on the Kolmogorov distance are only of the order $O(1/n^{1/4})$. \big(That the order $O(1/\sqrt n)$ is optimal for MLEs is well known; for instance, see the example of the Bernoulli family of distributions given in \cite{michel-pfanzagl71}.\big) In \cite{nonlinear-publ}, optimal-order bounds of the form $O(1/\sqrt n)$ on the rate of convergence to normality in the general multivariate delta method were given. 
Those results are applicable when the statistic of interest can be expressed as a smooth enough function of the sum of independent random vectors. Accordingly, various kinds of applications were presented in \cite{nonlinear-publ}. In particular, uniform and nonuniform bounds of the optimal order on the closeness of the distribution of the MLE to normality were obtained in \cite{nonlinear-publ} under conditions similar to the mentioned conditions assumed in \cite{anast-ley}. In this paper we present a way to extend those results in \cite{nonlinear-publ} to the general case, without an assumption of the form \eqref{eq:q(th)}, made in \cite{anast-ley,nonlinear-publ}. Of course, in general the MLE cannot be represented as a function of the sum of independent random vectors (see Appendix~\ref{append} for details). However, the crucial observation here is that, under natural conditions, the MLE can be tightly enough bracketed between two such smooth enough functions, which makes the delta method applicable. Thus, the present paper is methodologically different from the preceding work on Berry--Esseen-type bounds for the MLE, in that it relies on the general result developed in \cite{nonlinear-publ}, rather than on methods specially designed to deal with the MLE. Perhaps more importantly, the new method yields not only uniform bounds (that is, in the Kolmogorov metric) of the optimal order $O(1/\sqrt n)$ on the closeness of the distribution of the MLE to normality but also their so-called nonuniform counterparts, which work much better for large deviations, that is, in tail zones of the distribution of the MLE -- which are usually of foremost interest in statistical tests. Such nonuniform bounds for MLEs in general appear to have no precedents in the existing literature (except that, as stated above, a special case of nonuniform bounds for MLEs was recently treated in \cite{nonlinear-publ}). The paper is organized as follows. 
The general setting of the problem is described in Section~\ref{setting}. The key step of tight enough bracketing of the MLE between two functions of the sum of independent random vectors is made in Section~\ref{bracketing}. General uniform and nonuniform optimal-order bounds from \cite{nonlinear-publ} on the convergence rate in the multivariate delta method are presented in Section~\ref{f(bar V)}. In Section~\ref{appl}, we make the bracketing work by applying the general bounds in the multivariate delta method. Yet, this leaves out the problem of bounding a remainder, which is a probability of large deviations of the MLE from the true value of the parameter. It is shown in Section~\ref{remainder} that under natural conditions this remainder is exponentially fast decreasing (in $n$) and thus asymptotically negligible as compared to the main term on the order of $1/\sqrt n$. All these findings are summarized in Section~\ref{concl}, where the main result of this paper is presented, along with corresponding discussion. In Appendix~\ref{append}, it is shown that, under general regularity conditions, \eqref{eq:q(th)} (or even a relaxed version of it) implies that the family of densities is a one-parameter exponential one; in particular, this allows one to give any number of examples where the main result of the present paper is applicable, whereas the corresponding result in \cite{nonlinear-publ} is not. \section{General setting} \label{setting} Let $X,X_1,X_2,\dots$ be random variables (r.v.'s) mapping a measurable space $(\Om,\A)$ to another measurable space $(\XXX,\B)$ and let $(\P_\th)_{\th\in\Th}$ be a parametric family of probability measures on $(\Om,\A)$ such that the r.v.'s $X,X_1,X_2,\dots$ are i.i.d.\ with respect to each of the probability measures $\P_\th$ with $\th\in\Th$; here the parameter space $\Th$ is assumed to be a subset of the real line $\R$. As usual, let $\E_\th$ denote the expectation with respect to the probability measure $\P_\th$. 
Suppose that for each $\th\in\Th$ the distribution $\P_\th X^{-1}$ of $X$ has a density $p_\th$ with respect to a measure $\mu$ on $\B$. Because the extended real line $[-\infty,\infty]$ is compact, for each $n\in\N$ and each point $\xx=\xx_n=(x_1,\dots,x_n)\in\XXX^n$ the likelihood function $\Th\ni\th\mapsto L_\xx(\th):=\prod_{i=1}^n p_\th(x_i)$ has at least one generalized maximizer $\hat\th_n(\xx)$ in the closure of the set $\Th$ in $[-\infty,\infty]$, in the sense that $\sup_{\th\in\Th}L_\xx(\th)=\limsup_{\th\to\hat\th_n(\xx)}L_\xx(\th)$. Picking, for each $\xx=(x_1,\dots,x_n)\in\XXX^n$, any one of such generalized maximizers $\hat\th_n(\xx)$, one obtains a map $\Om\ni\om\mapsto\hat\th_n(\X(\om))$, where $\X:=\X_n:=(X_1,\dots,X_n)$; any such map will be denoted here by $\hat\th_n(\X)$ (or simply by $\hat\th_n$ or $\hat\th$) and referred to as a maximum likelihood estimator (MLE) of $\th$. This is a somewhat more general definition of the MLE than usual, and in general an MLE $\hat\th$ will not have to be a r.v.; that is, it can be non-measurable with respect to the sigma-algebra $\A$. However, to simplify the presentation, we shall still refer to sets of the form $\{\hat\th\in J\}:=\{\om\in\Om\colon\hat\th_n(\X(\om))\in J\}$ for Borel sets $J\subseteq\Th$ as events and write $\P_\th(\hat\th\in J)$ implying that the latter expression may and should be understood as either one of the expressions $(\P_\th)^*(\hat\th\in J)$ or $(\P_\th)_*(\hat\th\in J)$, where ${}^*$ and ${}_*$ stand for the corresponding outer and inner measures. Of course, when the map $\hat\th$ is measurable, then one can use the bona fide expressions of the mentioned form $\P_\th(\hat\th\in J)$. Let $\th_0\in\Th$ be the ``true'' value of the unknown parameter $\th$, such that \begin{equation}\label{eq:in Th} [\th_0-\de,\th_0+\de]\subseteq\Th^\circ \end{equation} for some real $\de>0$, where $\Th^\circ$ denotes the interior of the subset $\Th$ of $\R$. 
For brevity, let \begin{equation*} \P:=\P_{\th_0}\quad\text{and}\quad\E:=\E_{\th_0}. \end{equation*} For $x\in\XXX$ and $\th\in\Th$, consider the log-likelihood \begin{equation*} \ell_x(\th):=\ln p_\th(x) \end{equation*} and assume the following: \begin{enumerate}[(I)] \item \label{diff} The set $\XXX_{>0}:=\{x\in\XXX\colon p_\th(x)>0\}$ is the same for all $\th\in[\th_0-\de,\th_0+\de]$, and for each $x\in\XXX_{>0}$ the density $p_\th(x)$ and hence the log-likelihood $\ell_x(\th)$ are thrice differentiable in $\th$ at each point $\th\in[\th_0-\de,\th_0+\de]$. \item \label{fisher} Standard regularity conditions hold so that $\E \ell'_X(\th_0)=0$ and $\E \ell'_X(\th_0)^2=-\E \ell''_X(\th_0)=I(\th_0)\in(0,\infty)$, where $I(\th)$ is the Fisher information at $\th$. \item \label{M_2} $\E |\ell'_X(\th_0)|^3+\E |\ell''_X(\th_0)|^3<\infty$. \item \label{M_3} $\E \sup\limits_{\th\in[\th_0-\de,\th_0+\de]}|\ell'''_X(\th)|^3<\infty$. \end{enumerate} \begin{remark}\label{rem:fisher} The expectation $\E \ell'_X(\th_0)$, mentioned in condition~\eqref{fisher}, may be understood as $\int_{\XXX_{>0}}p'_x(\th_0)\mu(\dd x)$, where $p_x(\th):=p_\th(x)$; similarly, for the other expectations mentioned in conditions~\eqref{fisher}--\eqref{M_3}. Of course, all the derivatives here are with respect to $\th$. Concerning the ``standard regularity conditions'' mentioned in condition~\eqref{fisher}, it will be enough to assume that $\P(\frac\partial{\partial\th}p_\th(X)\ne0)>0$ and for some measurable function $g\colon\XXX_{>0}\to[0,\infty)$ such that $\int_{\XXX_{>0}}g\dd\mu<\infty$ and all $\th\in[\th_0-\de,\th_0+\de]$ and $x\in\XXX_{>0}$ we have $|\frac\partial{\partial\th}p_\th(x)|+|\frac{\partial^2}{\partial\th^2} p_\th(x)|\le g(x)$; see e.g.\ \cite[Lemma~5.3, page~116]{lehmann-estim} and \cite[Lemma~2.4]{rosenthal_AOP} (more general conditions can be given using \cite[Lemma~2.3]{rosenthal_AOP}). Then $I(\th)$ will also be continuous in $\th\in[\th_0-\de,\th_0+\de]$. 
Conditions \eqref{diff}--\eqref{M_3} are rather similar to regularity conditions used in related literature; see Remark~\ref{rem:compare} on page~\pageref{rem:compare} for details. It appears that these conditions will be generally satisfied provided that $\ell_x(\th)$ is smooth enough in $\th$. For instance, let us briefly consider the case when the family of densities $(p_\th)$ is a location family, so that $\ell_x(\th)=\la(x-\th)$ for all $(x,\th)\in\XXX\times\Th=\R^2$, where $\la$ is a smooth enough function. If the densities $p_\th$ have power-like tails, then for some positive real constants $c_+$ and $c_-$ one has $\la(x)\sim-c_{\pm}\ln|x|$ as $x\to\pm\infty$, in which case typically $|\la^{(k)}(x)|\sim c_\pm(k-1)!\,|x|^{-k}$ for $k=1,2,\dots$ as $x\to{\pm}\infty$. So, conditions \eqref{M_2} and \eqref{M_3} will hold, since $|\ell_x^{(k)}(\th)|=|\la^{(k)}(x-\th)|$. If the tails of the densities $p_\th$ are lighter than power-like, so that (say) $\la(x)\sim-c_{\pm}|x|^\al$ for some real $\al>0$ as $x\to\pm\infty$, then typically $|\la^{(k)}(x)|\sim c_\pm\,|\al(\al-1)\cdots(\al-k+1)|\,|x|^{\al-k}$ for $k=1,2,\dots$ as $x\to{\pm}\infty$, so that conditions \eqref{M_2} and \eqref{M_3} will again hold. The case of a scale family is quite similar to that of a location family. Alternatively, the ``scale'' case can be reduced to the ``location'' one by logarithmic rescaling in both $x$ and $\th$. At this point, consider also the case when the family of densities $(p_\th)$ is an exponential family, so that $\ell_x(\th)=w(\th)T(x)+d(\th)$ for some functions $w$, $T$, and $d$ and for all $(x,\th)\in\XXX\times\Th=\R^2$, where the functions $w$ and $d$ are smooth enough, with $w'(\th_0)\ne0$. Then $\ell_x^{(k)}(\th)=w^{(k)}(\th)T(x)+d^{(k)}(\th)$. 
So, conditions \eqref{M_2} and \eqref{M_3} will hold in this case as well, since $\E|T(X)|^\al=\int_\XXX|T(x)|^\al\exp\{w(\th_0)T(x)+d(\th_0)\}\mu(\dd x)$ for $\al>0$, $|T(x)|^\al=O(e^{hT(x)}+e^{-hT(x)})$ for any given real $\al>0$ and any given nonzero real $h$, and the conditions $\th_0\in\Th^\circ$ and $w'(\th_0)\ne0$ imply that $\int_\XXX\exp\{[w(\th_0)+h]T(x)+d(\th_0)\}\mu(\dd x)<\infty$ for all real $h$ close enough to $0$. \qed \end{remark} Let \begin{equation}\label{eq:ell} \ell_\X(\th):=\sum_{i=1}^n\ell_{X_i}(\th) \end{equation} for $\th\in\Th$, the log-likelihood of the sample $\X=(X_1,\dots,X_n)$. \section{Tight bracketing of the MLE between two functions of the sum of independent random vectors} \label{bracketing} Without loss of generality (w.l.o.g.), $\XXX_{>0}=\XXX$. Then on the event \begin{equation}\label{eq:G} G:=\{\hat\th\in[\th_0-\de,\th_0+\de]\} \end{equation} ($G$ for ``\underline{g}ood event'') one must have \begin{align} 0=\ell'_\X(\hat\th) =&\ell'_\X(\th_0) +(\hat\th-\th_0)\,\ell''_\X(\th_0) +\frac{(\hat\th-\th_0)^2}2\,\ell'''_\X(\th_0+\xi(\hat\th-\th_0)) \label{eq:0=} \\ =&n\Big(\bar Z-(\hat\th-\th_0)\,\bar U +\frac{(\hat\th-\th_0)^2}2\,\bar R\Big) \label{eq:=n()} \end{align} for some $\xi\in(0,1)$, depending on the values of the $X_i$'s, where $\bar Z:=\frac1n\sum_{i=1}^n Z_i$, $\bar U:=\frac1n\sum_{i=1}^n U_i$, $\bar R:=\frac1n\sum_{i=1}^n R_i$, $\bar{R^*}:=\frac1n\sum_{i=1}^n R_i^*$, \begin{equation}\label{eq:Z,R} \begin{gathered} Z_i:=\ell'_{X_i}(\th_0),\quad U_i:=-\ell''_{X_i}(\th_0), \\ R_i:=\ell'''_{X_i}(\th_0+\xi(\hat\th-\th_0))\in[-R_i^*,R_i^*],\quad R_i^*:=\sup\limits_{\th\in[\th_0-\de,\th_0+\de]}|\ell'''_{X_i}(\th)|. \end{gathered} \end{equation} Note that the $Z_i$'s are i.i.d.\ r.v.'s, and so are the $U_i$'s and the $R_i^*$'s (but not necessarily the $R_i$'s). Equalities \eqref{eq:0=} and \eqref{eq:=n()} provide a quadratic equation for $\hat\th$. 
So, on the event $G$ one has \begin{equation}\label{eq:cases} \begin{alignedat}{2} \hat\th-\th_0&=\frac{\bar Z}{\bar U} &&\ \text{ if }\ \bar R=0\ \&\ \bar U\ne0, \\ \hat\th-\th_0&\in\{d_+,d_-\} &&\ \text{ if }\ \bar R\ne0, \end{alignedat} \end{equation} where \begin{equation*} d_\pm:=\frac{\bar U\pm\sqrt{\bar U^2-2\bar Z\,\bar R}}{\bar R}. \end{equation*} Letting \begin{multline}\label{eq:B:=} B:=B_1\cup B_2,\quad\text{where} \\ B_1:=\{\bar R\ne0,\;\hat\th-\th_0=d_+\}\cup\{\bar U\le0\} \quad\text{and}\quad B_2:=\{\bar U^2\le 2|\bar Z|\,\bar{R^*}\} \end{multline} ($B$ for ``\underline{b}ad event''), on the event $B_1\cap\{\bar U>0\}$ one has $|\hat\th-\th_0|=|d_+|\ge\bar U/|\bar R|\ge\bar U/\bar{R^*}$, whence, by \eqref{eq:G}, \begin{equation}\label{eq:G cap B} \P (G\cap B_1)\le\P \Big(\bar U\le0\text{ or }\frac{\bar U}{\bar{R^*}}\le\de\Big) =\P\Big(\frac{\bar U}{\bar{R^*}}\le\de\Big) =\P\Big(\sum_{i=1}^n(U_i-\de R_i^*)\le0\Big). \end{equation} By definitions \eqref{eq:Z,R} and conditions \eqref{fisher}, \eqref{M_2}, and \eqref{M_3}, \begin{equation}\label{eq:^{3/2}<infty} \E U_1>0,\quad\E |Z_1|^3<\infty,\quad \E |U_1|^3<\infty,\quad \E (R_1^*)^3<\infty, \end{equation} and hence $\E R_1^*<\infty$. So, w.l.o.g.\ one may choose $\de>0$ to be small enough so that \begin{equation*} \de_1:=\E (U_i-\de R_i^*)>0. \end{equation*} Then, letting $Y_i:=(U_i-\de R_i^*)-\E(U_i-\de R_i^*)$ and using \eqref{eq:G cap B}, Markov's inequality, and a Rosenthal-type inequality (see e.g.\ \cite[Theorem~1.5]{rosenthal_AOP}), we have \begin{multline}\label{eq:P(G and B_1)} \P (G\cap B_1)\le\P \Big(\sum_{i=1}^n Y_i\le-n\de_1\Big) \le\frac1{(n\de_1)^3}\,\E \Big|\sum_{i=1}^n Y_i\Big|^3 \\ \le\frac{n\E |Y_1|^3+\sqrt{8/\pi}\,(n\E Y_1^2)^{3/2}}{(n\de_1)^3}\le\frac\CC{n^{3/2}}, \end{multline} where $\CC:=\big(\E |Y_1|^3+\sqrt{8/\pi}\,(\E Y_1^2)^{3/2}\big)/\de_1^3$, which depends on $\de_1>0$, $\E Y_1^2<\infty$, and $\E |Y_1|^3<\infty$ -- but not on $n$. 
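The chain of bounds in \eqref{eq:P(G and B_1)} -- Markov's inequality applied to the third absolute moment, followed by a Rosenthal-type bound on $\E|\sum_i Y_i|^3$ -- can be sanity-checked numerically; the centered uniform $Y$ and the value of $\de_1$ below are illustrative choices of ours, not the specific $Y_i$ of the proof:

```python
import numpy as np

# Sanity check of the tail bound used for P(G \cap B_1):
#   P(sum Y_i <= -n*delta1) <= (n*E|Y|^3 + sqrt(8/pi)*(n*E Y^2)^{3/2}) / (n*delta1)^3,
# which is O(n^{-3/2}).  Here Y is an illustrative centered r.v., uniform
# on [-1, 1], so E Y^2 = 1/3 and E|Y|^3 = 1/4.
rng = np.random.default_rng(0)
delta1 = 0.5
EY2, EabsY3 = 1.0 / 3.0, 1.0 / 4.0

def rosenthal_bound(n):
    return (n * EabsY3 + np.sqrt(8 / np.pi) * (n * EY2) ** 1.5) / (n * delta1) ** 3

def mc_tail_prob(n, reps=100_000):
    Y = rng.uniform(-1.0, 1.0, size=(reps, n))
    return np.mean(Y.sum(axis=1) <= -n * delta1)

n = 50
assert mc_tail_prob(n) <= rosenthal_bound(n)             # bound dominates the tail
assert rosenthal_bound(4 * n) <= rosenthal_bound(n) / 7  # ~ n^{-3/2} decay
```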
Next, the occurrence of $B_2$ implies that of at least one of the following events: $B_{21}:=\{\bar U\le\frac12\,\E U_1\}$, $B_{22}:=\{\bar{R^*}\ge1+\E R_1^*\}$, or $B_{23}:=\{|\bar Z|\ge\frac18\,(\E U_1)^2/(1+\E R_1^*)\}$. So, \begin{equation}\label{eq:B_2j} \P(B_2)\le\P(B_{21})+\P(B_{22})+\P(B_{23}). \end{equation} In view of \eqref{eq:^{3/2}<infty}, the bounding of each of the probabilities $\P(B_{21})$, $\P(B_{22})$, $\P(B_{23})$ is quite similar to the bounding of $\P (G\cap B_1)$ in \eqref{eq:P(G and B_1)} -- because \break $\P(B_{21})=\P(\sum_{i=1}^n Y_{i,21}\le-n\de_{21})$, $\P(B_{22})=\P(\sum_{i=1}^n Y_{i,22}\ge n\de_{22})$, and $\P(B_{23})=\P\big(\big|\sum_{i=1}^n Y_{i,23}\big|\ge n\de_{23}\big)$, where $Y_{i,21}:=U_i-\E U_1$, $\de_{21}:=\frac12\,\E U_1>0$, $Y_{i,22}:=R_i^*-\E R_1^*$, $\de_{22}:=1>0$, $Y_{i,23}:=Z_i-\E Z_1=Z_i$, $\de_{23}:=\frac18\,(\E U_1)^2/(1+\E R_1^*)>0$. Thus, by \eqref{eq:B:=}, \eqref{eq:P(G and B_1)}, and \eqref{eq:B_2j}, \begin{equation}\label{eq:P(G and B)} \P (G\cap B)\le\P(G\cap B_1)+\P(B_2)\le\frac\CC{n^{3/2}} . \end{equation} On the other hand, if $\bar R\ne0$ and $\bar U>0$, then $d_-=\frac{2\bar Z}{\bar U+\sqrt{\bar U^2-2\bar Z\,\bar R}}$; here, the condition $\bar U>0$ was used only to ensure that the denominator of the latter ratio is nonzero. Hence, on the event $G\setminus B$ one has \begin{equation}\label{eq:hath-th0=} \bar U>0 \quad\text{and}\quad \hat\th-\th_0=\frac{2\bar Z}{\bar U+\sqrt{\bar U^2-2\bar Z\,\bar R}}\in[T_-,T_+], \end{equation} where \begin{equation}\label{eq:T_pm} T_\pm:=\frac{2\bar Z}{\bar U+\sqrt{\bar U^2\mp2|\bar Z|\,\bar{R^*}}}; \end{equation} note that, when $\bar R=0$ and $\bar U>0$, the expression of $\hat\th-\th_0$ in \eqref{eq:hath-th0=} is in agreement with the corresponding expression in \eqref{eq:cases}. 
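The algebra behind \eqref{eq:hath-th0=} and \eqref{eq:T_pm} admits a quick numeric check (a sketch with illustrative values of $\bar Z$, $\bar U$, $\bar{R^*}$, not from the paper): the rationalized form of $d_-$ matches the quadratic-root form, and as $\bar R$ sweeps over $[-\bar{R^*},\bar{R^*}]$ the value of $d_-$ stays between $T_-$ and $T_+$ (for $\bar Z<0$ the two endpoints trade places, so we compare against their min and max).

```python
import numpy as np

# Check that d_- = (U - sqrt(U^2 - 2 Z R)) / R equals its rationalized form
# 2 Z / (U + sqrt(U^2 - 2 Z R)), and that sweeping R over [-Rstar, Rstar]
# keeps d_- inside the envelope given by T_- and T_+.
def d_minus(Z, U, R):
    s = np.sqrt(U * U - 2 * Z * R)
    return (U - s) / R if R != 0 else Z / U   # R = 0: the linear-case root

def d_minus_rationalized(Z, U, R):
    return 2 * Z / (U + np.sqrt(U * U - 2 * Z * R))

def T_pm(Z, U, Rstar):
    # s = +1 gives T_-, s = -1 gives T_+ (the \mp sign in eq:T_pm)
    return tuple(2 * Z / (U + np.sqrt(U * U + s * 2 * abs(Z) * Rstar))
                 for s in (+1.0, -1.0))

Z, U, Rstar = 0.05, 1.0, 3.0                  # satisfies U^2 > 2 |Z| Rstar
Rs = np.linspace(-Rstar, Rstar, 41)
for R in Rs:
    assert np.isclose(d_minus(Z, U, R), d_minus_rationalized(Z, U, R))
Tm, Tp = T_pm(Z, U, Rstar)
vals = [d_minus_rationalized(Z, U, R) for R in Rs]
assert min(Tm, Tp) <= min(vals) and max(vals) <= max(Tm, Tp)
```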
Now that the desired bracketing of $\hat\th-\th_0$ between $T_-$ and $T_+$ is obtained in \eqref{eq:hath-th0=}, we are ready to apply some of the mentioned general results of \cite{nonlinear-publ}, presented in the next section. \section{General uniform and nonuniform bounds from \texorpdfstring{\cite{nonlinear-publ}}{} on the rate of convergence to normality for smooth nonlinear functions of sums of independent random vectors} \label{f(bar V)} The standard normal distribution function (d.f.) will be denoted by $\Phi$. For any $\R^d$-valued random vector $\zeta$, we use the norm notation \begin{equation*} \|\zeta\|_p:=\big(\E\|\zeta\|^p\big)^{1/p}\text{ for any real $p\ge1$}, \end{equation*} where $\|\cdot\|$ denotes the Euclidean norm on $\R^d$. Take any Borel-measurable functional $f\colon\XX\to\R$, where $\XX:=\R^d$, satisfying the following smoothness condition: there exist $\ep\in(0,\infty)$, $\Mf\in(0,\infty)$, and a continuous linear functional $L\colon\XX\to\R$ such that \begin{align}\label{eq:smooth} |f(\x)-L(\x)|\le\frac \Mf2\,\|\x\|^2\text{ for all $\x\in\XX$ with }\|\x\|\le\ep. \end{align} Thus, $f(\0)=0$ and $L$ necessarily coincides with the first Fr\'echet derivative, $f'(\0)$, of the function $f$ at $\0$. Moreover, for the smoothness condition \eqref{eq:smooth} to hold, it is enough that \begin{equation}\label{eq:M^*} \Mf\ge\Mf^*:=\sup\bigg\{\frac1{\|\x\|^2}\,\bigg|\frac{\dd^2}{\dd t^2}\,f(\x+t\x)\Big|_{t=0}\bigg|\colon \x\in\XX,\,0<\|\x\|\le\ep\bigg\}; \end{equation} it is not necessary that $f$ be twice differentiable at $\0$. E.g., if $d=1$ and $f(x)=\frac{x}{1+|x|}$ for $x\in\R$, then $f(0)=0$, $f'(0)=1$, and $f''(x)=-\frac{2\sign x}{(1+|x|)^3}$ for real $x\ne0$; so, \eqref{eq:smooth} holds for any real $\ep>0$ with $L(x)\equiv x$ and $\Mf=2$, whereas $f''(0)$ does not exist. \bigskip Let $V,V_1,\dotsc,V_n$ be i.i.d.\ random vectors in $\XX$, with $\E V=\0$ and \begin{equation*} \bar V:=\frac{1}{n}\sum_{i=1}^nV_i. 
\end{equation*} Further let \begin{equation}\label{eq:tsi,v_p,vsi_p} \tsi:=\|L(V)\|_2,\quad v_3:=\|V\|_3,\quad\text{and}\quad\vsi_3:=\frac{\|L(V)\|_3}{\tsi}. \end{equation} \begin{theorem}\label{th:nonlin} \ \kern-9pt\emph{\cite{nonlinear-publ}} Suppose that \eqref{eq:smooth} holds, and that $\tsi>0$ and $v_3<\infty$. Then for all $z\in\R$ \begin{equation}\label{eq:f(S).iid} \Big|\P\Big(\frac{f(\bar V)}{\tsi /\sqrt n}\le z\Big)-\Phi(z)\Big|\le\frac{\CC}{\sqrt{n}}, \end{equation} where $\CC$ is a finite positive expression that depends only on the function $f$ (through \eqref{eq:smooth}) and the moments $\tsi$, $\vsi_3$, and $v_3$. Moreover, for any $\om\in(0,\infty)$ and for all \begin{equation}\label{eq:z,iid} z\in\bigl(0,\om\,\sqrt{n}\,\bigr] \end{equation} one has \begin{align} \label{eq:f(S).iid.power} \Big|\P\Big(\frac{f(\bar V)}{\tsi /\sqrt n}\le z\Big)-\Phi(z)\Big| &\le\frac{\CC_\om}{z^3\,\sqrt{n}}, \end{align} where $\CC_\om$ is a finite positive expression that depends only on the function $f$ (through \eqref{eq:smooth}), the moments $\tsi$, $\vsi_3$, and $v_3$, and also on $\om$. \end{theorem} The restriction \eqref{eq:z,iid} cannot be relaxed in general; see \cite{nonlinear-publ}. To simplify the presentation, in what follows let $\CC$ stand for various finite positive expressions whose values do not depend on $n$ or $z$; that is, $\CC$ will denote various positive real constants -- with respect to $n$ and $z$. However, $\CC$ may depend on other attributes of the setting, including the model $(\P_\th)_{\th\in\Th}$ under consideration, the $\P_{\th_0}$-distribution of $X_1$, and the values of parameters freely chosen in a given range (such as $\om$ in \eqref{eq:z,iid} and $\ep$ in \eqref{eq:smooth}). 
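Theorem~\ref{th:nonlin} can be illustrated by a small Monte Carlo experiment with $d=1$, $f(x)=x/(1+|x|)$ (the example following \eqref{eq:M^*}), and $V$ uniform on $[-1,1]$, so that $L(x)=x$ and $\tsi=1/\sqrt3$; the tolerance $0.1$ below is a loose illustrative threshold of ours, not the constant $\CC$ of the theorem.

```python
import numpy as np
from math import erf, sqrt

# Estimate the Kolmogorov distance between the law of f(V_bar)/(tau/sqrt(n))
# and Phi, for f(x) = x/(1+|x|) and V ~ Uniform[-1, 1] (so tau = 1/sqrt(3)).
rng = np.random.default_rng(1)

def kolmogorov_distance_to_normal(sample):
    x = np.sort(sample)
    Phi = 0.5 * (1.0 + np.vectorize(erf)(x / sqrt(2.0)))
    i = np.arange(1, len(x) + 1)
    return max(np.max(i / len(x) - Phi), np.max(Phi - (i - 1) / len(x)))

def normalized_statistic(n, reps=20_000):
    Vbar = rng.uniform(-1.0, 1.0, size=(reps, n)).mean(axis=1)
    tau = 1.0 / sqrt(3.0)
    return np.sqrt(n) * (Vbar / (1.0 + np.abs(Vbar))) / tau

dist = kolmogorov_distance_to_normal(normalized_statistic(200))
assert dist < 0.1   # consistent with a C/sqrt(n) bound for moderate C
```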
\section{Making the bracketing work: Applying the general bounds of \texorpdfstring{\cite{nonlinear-publ}}{} } \label{appl} Now let $d=3$ and then let \begin{equation*} \D:=\{\x=(x_1,x_2,x_3)\in\XX=\R^3\colon x_2+\E U_1>0,\ (x_2+\E U_1)^2>2|x_1|\,|x_3+\E R_1^*|\}. \end{equation*} By \eqref{eq:Z,R} and conditions \eqref{fisher} and \eqref{M_3}, $\E U_1=I(\th_0)\in(0,\infty)$ and $\E R_1^*\in[0,\infty)$. So, for some real $\ep>0$, the set $\D$ contains the $\ep$-neighborhood of the origin $\0$ of $\R^3$. Define functions $f_\pm\colon\R^3\to\R$ by the formula \begin{equation}\label{eq:f_pm} f_\pm(\x)=f_\pm(x_1,x_2,x_3)=\frac{2x_1}{x_2+\E U_1+\sqrt{(x_2+\E U_1)^2\mp2|x_1|\,|x_3+\E R_1^*|}} \end{equation} for $\x=(x_1,x_2,x_3)\in\D$, and let $f_\pm(\x):=0$ if $\x\in\R^3\setminus\D$. Clearly, $f_\pm(\0)=0$, \begin{equation}\label{eq:L=} L_\pm(\x):=f'_\pm(\0)(\x)=\frac{x_1}{\E U_1}=\frac{x_1}{I(\th_0)} \end{equation} for $\x=(x_1,x_2,x_3)\in\R^3$, and, in accordance with \eqref{eq:M^*}, the smoothness condition \eqref{eq:smooth} holds for some $\ep$ and $\Mf$ in $(0,\infty)$ -- because, as was noted above, $\E U_1=I(\th_0)\in(0,\infty)$ and $\E R_1^*\in[0,\infty)$, and hence the denominator of the ratio in \eqref{eq:f_pm} is bounded away from $0$ for $\x=(x_1,x_2,x_3)$ in a neighborhood of $\0$. Next, let \begin{equation}\label{eq:V_i=} V_i:=(Z_i,U_i-\E U_i,R_i^*-\E R_i^*) \end{equation} for $i=1,\dots,n$, with $Z_i,U_i,R_i^*$ as defined in \eqref{eq:Z,R}. Then, by \eqref{eq:tsi,v_p,vsi_p}, \eqref{eq:L=}, and condition~\eqref{fisher}, for $f=f_\pm$, \begin{equation}\label{eq:tsi=} \tsi=\sqrt{\frac{\E Z_1^2}{I(\th_0)^2}}=\frac1{\sqrt{I(\th_0)}}>0 \end{equation} and $v_3^3=\E\|V\|^3<\infty$ by conditions \eqref{M_2} and \eqref{M_3}. So, all the conditions of Theorem~\ref{th:nonlin} are satisfied for $f=f_\pm$. Moreover, by \eqref{eq:T_pm}, \eqref{eq:f_pm}, and \eqref{eq:V_i=}, \begin{equation*} T_\pm=f_\pm(\bar V) \end{equation*} on the event $G\setminus B$. 
So, by the inclusion relation in \eqref{eq:hath-th0=} \big(which holds on the event $G\setminus B=(G^\cc\cup B)^\cc$, where ${}^\cc$ denotes the complement\big) and \eqref{eq:tsi=}, inequality~\eqref{eq:f(S).iid} in Theorem~\ref{th:nonlin} implies \begin{equation* \begin{aligned} \P\Big(\sqrt{nI(\th_0)}\,(\hat\th-\th_0)\le z\Big) &\le\P\Big(\sqrt{nI(\th_0)}\,f_-(\bar V)\le z\Big)+\P(G^\cc\cup B) \\ &\le\Phi(z)+\frac{\CC}{\sqrt{n}}+\P(G^\cc\cup B) \end{aligned} \end{equation*} and, quite similarly, \begin{equation*}\label{eq:lower} \begin{aligned} \P\Big(\sqrt{nI(\th_0)}\,(\hat\th-\th_0)\le z\Big) &\ge\P\Big(\sqrt{nI(\th_0)}\,f_+(\bar V)\le z\Big)-\P(G^\cc\cup B) \\ &\ge\Phi(z)-\frac{\CC}{\sqrt{n}}-\P(G^\cc\cup B), \end{aligned} \end{equation*} for all real $z$. Note that $\P(G^\cc\cup B)=\P(G^\cc)+\P(G\cap B)$. It follows now by \eqref{eq:G} and \eqref{eq:P(G and B)} that \begin{equation}\label{eq:ub} \Big|\P\Big(\sqrt{nI(\th_0)}\,(\hat\th-\th_0)\le z\Big)-\Phi(z)\Big|\le\frac{\CC}{\sqrt{n}}+\P(|\hat\th-\th_0|>\de) \end{equation} for all real $z$. Quite similarly, but using \eqref{eq:f(S).iid.power} instead of \eqref{eq:f(S).iid}, one has \begin{equation}\label{eq:nub} \Big|\P\Big(\sqrt{nI(\th_0)}\,(\hat\th-\th_0)\le z\Big)-\Phi(z)\Big|\le\frac{\CC}{z^3\,\sqrt{n}}+\P(|\hat\th-\th_0|>\de) \end{equation} for $z$ as in \eqref{eq:z,iid}. Typically, given rather standard regularity conditions, the remainder term $\P(|\hat\th-\th_0|>\de)$ decreases exponentially fast in $n$ and thus is negligible as compared with the ``error'' term $\frac{\CC}{\sqrt{n}}$, and even with the ``error'' term $\frac{\CC}{z^3\,\sqrt{n}}$ -- under condition \eqref{eq:z,iid}. Some details on this can be found in the following section. 
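The whole scheme can be exercised end to end in a concrete model; the exponential family $p_\th(x)=\th e^{-\th x}$, $x>0$, below is our own illustrative choice, not one used in the paper. There $\ell_x(\th)=\ln\th-\th x$, so $Z_i=1/\th_0-X_i$, $U_i=1/\th_0^2$, $R_i^*=2/(\th_0-\de)^3$, $I(\th_0)=1/\th_0^2$, and the MLE is $\hat\th=1/\bar X$; the simulation checks both the bracketing of $\hat\th-\th_0$ between $T_\mp$ and the approximate normality of $\sqrt{nI(\th_0)}\,(\hat\th-\th_0)$ (the tolerances are loose illustrative choices).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
theta0, delta, n, reps = 1.0, 0.2, 1000, 4000

X = rng.exponential(1.0 / theta0, size=(reps, n))
Zbar = 1.0 / theta0 - X.mean(axis=1)
Ubar = 1.0 / theta0 ** 2                    # U_i is constant in this model
Rstar = 2.0 / (theta0 - delta) ** 3         # so is R_i^*
theta_hat = 1.0 / X.mean(axis=1)
dev = theta_hat - theta0

# Keep only replications where the "good" event G holds and B_2 fails.
ok = (np.abs(dev) <= delta) & (Ubar ** 2 > 2 * np.abs(Zbar) * Rstar)
disc = np.maximum(Ubar ** 2 - 2 * np.abs(Zbar) * Rstar, 0.0)  # clip for sqrt
T_minus = 2 * Zbar / (Ubar + np.sqrt(Ubar ** 2 + 2 * np.abs(Zbar) * Rstar))
T_plus = 2 * Zbar / (Ubar + np.sqrt(disc))
lo, hi = np.minimum(T_minus, T_plus), np.maximum(T_minus, T_plus)
assert ok.mean() > 0.99                                      # bad events are rare
assert np.all((lo[ok] <= dev[ok]) & (dev[ok] <= hi[ok]))     # bracketing holds

# Kolmogorov distance of the normalized MLE from the standard normal law.
W = np.sort(sqrt(n / theta0 ** 2) * dev)    # sqrt(n I(theta0)) * (MLE - theta0)
Phi = np.vectorize(lambda v: 0.5 * (1 + erf(v / sqrt(2))))(W)
i = np.arange(1, reps + 1)
dist = max(np.max(i / reps - Phi), np.max(Phi - (i - 1) / reps))
assert dist < 0.1
```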
\section{Exponentially small bounds on the remainder term \texorpdfstring{$\P(|\hat\th-\th_0|>\de)$}{} } \label{remainder} \subsection{Bounding the remainder: Log-concave case} \label{log-conc} In this subsection, suppose that the log-likelihood $\ell_x(\th)$ is concave in $\th\in\Th$, for each $x\in\XXX$. By condition \eqref{fisher}, $\E \ell''_X(\th_0)\ne0$. Hence, $\P\big(p_{\th_0+h}(X)\ne p_{\th_0}(X)\big) =\P\big(\ell_X(\th_0+h)\ne\ell_X(\th_0)\big)>0$ for some $h\in(0,\de)$. The concavity of $\ell_x(\th)$ in $\th$ implies that of $\ell_\X(\th)$. So, if $\hat\th>\th_0+\de$, then $\ell_\X(\th_0+h)\ge\ell_\X(\th_0)$. Therefore, \begin{multline*} \P(\hat\th>\th_0+\de)\le\P\big(\ell_\X(\th_0+h)\ge\ell_\X(\th_0)\big) =\P\Big(\prod_{i=1}^n\sqrt{\frac{p_{\th_0+h}(X_i)}{p_{\th_0}(X_i)}}\ge1\Big) \\ \le\E\prod_{i=1}^n\sqrt{\frac{p_{\th_0+h}(X_i)}{p_{\th_0}(X_i)}}=\la_+^n, \end{multline*} where \begin{equation*} \la_+:=\E\sqrt{\frac{p_{\th_0+h}(X)}{p_{\th_0}(X)}}<\sqrt{\E\frac{p_{\th_0+h}(X)}{p_{\th_0}(X)}}= \sqrt{\E_{\th_0}\frac{p_{\th_0+h}(X)}{p_{\th_0}(X)}}=1; \end{equation*} the inequality here is an instance of a strict version of the Cauchy--Schwarz inequality, which holds because, as was noted, $\P\big(p_{\th_0+h}(X)\ne p_{\th_0}(X)\big)>0$. Quite similarly, $\P(\hat\th<\th_0-\de)\le\la_-^n$ for some $\la_-\in[0,1)$, and so, \begin{equation}\label{eq:log-conc} \P(|\hat\th-\th_0|>\de)\le2\la^n \end{equation} for $\la:=\max(\la_+,\la_-)\in[0,1)$. In particular, the condition of the concavity of the log-likelihood $\ell_x(\th)=\ln p_\th(x)$ in $\th$ is fulfilled in the important case when the densities $p_\th$ form an exponential family with $\th$ as the natural parameter, so that \begin{equation* p_\th(x)=e^{\th g(x)-\psi(\th)} \end{equation*} for some function $\psi\colon\Th\to\R$ and all $\th\in\Th$ and $x\in\XXX$. Here, $g\colon\XXX\to\R$ is a measurable function. 
Then necessarily $\psi(\th)=\ln\int_\XXX e^{\th g(x)}\mu(\dd x)$, which is convex in $\th$ -- because any mixture of log-convex functions is log-convex, as is well known -- see e.g.\ \cite[page~66, Theorem~5.4C]{keilson79}. So, $\ell_x(\th)=\ln p_\th(x)=\th g(x)-\psi(\th)$ is indeed concave in $\th$. In the case of multivariate exponential families, an exponentially decreasing bound of a form more complicated than that of the bound in \eqref{eq:log-conc} was given in \cite{kour84}. \subsection{Bounding the remainder: General case} \label{remainder,general} Upper bounds on the large-deviation probability $\P(|\hat\th-\th_0|>\de)$ that are exponentially decreasing in $n$ without the assumption of the concavity of the log-likelihood function were presented e.g.\ in \cite{radav81,radav83,mogul88,borovkov_MS,miao10}. However, the parameter space $\Th$ was assumed in \cite{radav81,radav83,borovkov_MS} to be bounded, whereas in \cite{miao10} the distributions $\P_\th$ were assumed to be subgaussian (cf.\ Theorems~2.1, 2.2, and 3.3 in \cite{miao10}). Conditions in \cite{mogul88} appear to be difficult to verify, including the strict positivity of the infimum of the rate function, needed for an actual exponential decrease. Related is the work \cite{ibr-radav}, containing a result on so-called moderate deviation probabilities for MLEs, which decrease slower than exponentially but still faster than any power. So, such a result would be enough for our conclusions in Theorem~\ref{th:main} in the next section (cf.\ Remark~\ref{rem:slower} there), if it were not assumed in \cite{ibr-radav} (as in \cite{radav81,radav83,borovkov_MS}) that $\Th$ is bounded. Here we modify the method of \cite{borovkov_MS} to get rid of the condition that $\Th$ is bounded. Consider the (squared) Hellinger distance \begin{equation}\label{eq:H:=} H(\th,\th_0):=\int_\XXX\big(\sqrt{p_\th}-\sqrt{p_{\th_0}}\big)^2\dd\mu \end{equation} between the probability measures $\P_\th$ and $\P_{\th_0}$. 
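Before imposing conditions on $H$, it may help to see \eqref{eq:H:=} in a concrete model; the following sketch numerically confirms the closed form $H(\th,\th_0)=2-2\exp\{-(\th-\th_0)^2/(8\si^2)\}$ for the normal location family $\mathrm{N}(\th,\si^2)$ listed among the examples below (the values of $\si$ and $\th_0$ are arbitrary choices of ours).

```python
import numpy as np

# Compare a direct numerical evaluation of the squared Hellinger distance
#   H(theta, theta0) = int (sqrt(p_theta) - sqrt(p_theta0))^2 dx
# for N(theta, sigma^2) against the closed form 2 - 2*exp(-(theta-theta0)^2/(8 sigma^2)).
sigma, theta0 = 1.3, 0.4

def hellinger_numeric(theta, grid=np.linspace(-30.0, 30.0, 600_001)):
    p = np.exp(-(grid - theta) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    p0 = np.exp(-(grid - theta0) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    h = (np.sqrt(p) - np.sqrt(p0)) ** 2
    return np.sum(h) * (grid[1] - grid[0])      # simple Riemann sum

def hellinger_closed_form(theta):
    return 2.0 - 2.0 * np.exp(-(theta - theta0) ** 2 / (8 * sigma ** 2))

for theta in (0.4, 1.0, 3.0, -2.0):
    assert abs(hellinger_numeric(theta) - hellinger_closed_form(theta)) < 1e-6
```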
Assume now the following conditions: \begin{enumerate} \item[(B)] The set $\Th$ is a (possibly infinite) interval, and the Fisher information $I(\th)$ is well defined and satisfies the boundedness condition \begin{equation}\label{eq:I<} I(\th)\le c_1+c_2|\th-\th_0|^\al \end{equation} for some positive real constants $c_1,c_2,\al$ and all $\th\in\Th$. \big(If a point $\th$ in $\Th$ is an endpoint of the interval $\Th$, then $I(\th)$ is naturally understood in terms of the corresponding one-sided derivative of $p_\th(x)$ in $\th$.\big) \item[$(\text{D}_0)$] \label{H1} For each bounded neighborhood $U$ of $\th_0$, \begin{equation}\label{eq:H,bor} H(\th,\th_0)\OG(\th-\th_0)^2 \end{equation} over all $\th\in U$. \item[$(\text{D}_1)$] \label{H2} For some real constant $\ga>0$ and some bounded neighborhood $V$ of $\th_0$, \begin{equation}\label{eq:large th} J(\th,\th_0):=1-\tfrac12\,H(\th,\th_0)=\int_\XXX\sqrt{p_\th}\sqrt{p_{\th_0}}\dd\mu\O|\th-\th_0|^{-\ga} \end{equation} over all $\th\in\Th\setminus V$. \end{enumerate} Here and in the sequel, for any two expressions $E_1>0$ and $E_2\ge0$ whose values depend on some variables, the relation $E_1\OG E_2$ and its equivalent $E_2\O E_1$ mean that $\sup(E_2/E_1)<\infty$, where the supremum is taken over the corresponding specified range of values of the variables. Conditions $(\text{D}_0)$ and $(\text{D}_1)$ may be referred to as distinguishability conditions: $(\text{D}_0)$ means that the probability measures $\P_\th$ and $\P_{\th_0}$ are not too close to each other for $\th$ in a punctured neighborhood of $\th_0$, whereas $(\text{D}_1)$ implies that for $\th$ far away from $\th_0$, the probability measures $\P_\th$ and $\P_{\th_0}$ are almost mutually singular, and thus, easily distinguishable, at least in principle. \begin{remark}\label{rem:compact} In the particular case when the parameter space $\Th$ is compact (or just bounded), condition $(\text{D}_1)$ trivially holds. 
Moreover, as shown in \cite[Section~31]{borovkov_MS}, if $\Th$ is compact and the Fisher information $I(\th)$ is continuous in $\th\in\Th$ and strictly positive for $\th\in\Th$, then \eqref{eq:H,bor} holds over all $\th\in\Th$. So, condition $(\text{D}_0)$ holds (whether the set $\Th$ is bounded or not) whenever the Fisher information $I(\cdot)$ is continuous and strictly positive on $\Th$. \end{remark} However, since $H(\th,\th_0)$ is always bounded from above by $2$, it is clear that condition \eqref{eq:H,bor} cannot possibly hold over all $\th\in\Th$ if the parameter space $\Th$ is unbounded. In such a case, we need to complement condition $(\text{D}_0)$ by condition $(\text{D}_1)$, which appears natural and is indeed commonly satisfied. In particular, conditions $(\text{D}_0)$ and $(\text{D}_1)$ \big(as well as regularity conditions \eqref{diff}--\eqref{M_3}\big) hold if $p_\th$ is the density belonging to any one of the following families of probability distributions: \begin{enumerate}[(a)] \item \label{normal} $\mathrm{N}(\th,\si^2)$ -- with $\si>0$ known, $\Th=\R$, $H(\th,\th_0)=2-2\exp\big\{-\frac{(\th-\th_0)^2}{8\si^2}\big\}$; \item $\mathrm{N}(\mu,\th^2)$ -- with $\mu>0$ known, $\Th=(0,\infty)$, $H(\th,\th_0)=2-2\sqrt{\frac{2\th\th_0}{\th^2+\th_0^2}}$; \item $\mathrm{Exp}(\th)$ -- with $\Th=(0,\infty)$, $H(\th,\th_0)=2-\frac{4\sqrt{\th\th_0}}{\th+\th_0}$; \item more generally, Weibull distributions $\mathrm{W}(k,\th)$ -- with \\ $p_\th(x)\equiv \frac k\th(\frac x\th)^{k-1}e^{-(x/\th)^k}I\{x>0\}$, $k>0$ known, $\Th=(0,\infty)$, $H(\th,\th_0)=2-\frac{4(\th\th_0)^{k/2}}{\th^k+\th_0^k}$; \item $\mathrm{Gamma}(\th,\be)$ -- with scale parameter $\be>0$ known, $\Th=(0,\infty)$, $H(\th,\th_0)=2-\frac{2\Ga((\th+\th_0)/2)}{\sqrt{\Ga(\th)\Ga(\th_0)}}$; \item $\mathrm{Gamma}(\al,\th)$ -- with shape parameter $\al>0$ known, $\Th=(0,\infty)$, $H(\th,\th_0)=2-\frac{2^{1+\al}(\th\th_0)^{\al/2}}{(\th+\th_0)^\al}$; \item $\mathrm{Poisson}(\th)$ -- with 
$\Th=(0,\infty)$, $H(\th,\th_0)=2-2e^{-(\sqrt\th-\sqrt\th_0)^2/2}$; \item $\mathrm{Beta}(s\th,s(1-\th))$ -- with $s>0$ known, $\Th=(0,1)$, $H(\th,\th_0) = \\ 2-\frac{2 \BB\left(\frac{1}{2} s \left(\theta +\theta _0\right),\frac{1}{2} s \left(2-\theta -\theta _0\right)\right)}{\sqrt{\BB(s \theta ,s-s \theta ) \BB\left(s \theta _0,s-s \theta _0\right)}}$, where $\BB(\cdot,\cdot)$ is the Beta function; \item $\mathrm{Beta}(\al\th,\be\th)$ -- with $\al,\be>0$ known, $\Th=(0,\infty)$, $H(\th,\th_0)= \\ 2-\frac{2 \BB\left(\frac{1}{2} \alpha \left(\theta +\theta _0\right),\frac{1}{2} \beta \left(\theta +\theta _0\right)\right)}{\sqrt{\BB(\alpha \theta ,\beta \theta ) \BB\left(\alpha \theta _0,\beta \theta _0\right)}}$ \big(in this case, by Stirling's formula, condition $(\text{D}_1)$ holds with $\ga=1/4$\big). \end{enumerate} \qed \medskip Item \eqref{normal} above, concerning the normal location family, can be quite broadly generalized: \begin{proposition}\label{prop:shift} Suppose that $(p_\th)_{\th\in\Th}$ is a location family over $\R$, so that $p_\th(x)=p(x-\th)$ for all $x\in\R$ and $\th\in\Th$, where $p$ is a pdf (with respect to the Lebesgue measure over $\R$). Suppose also that \begin{equation}\label{eq:p<} p(u)\O(1+|u|)^{-\al} \end{equation} for some real $\al>1$ and all real $u$. Then condition \emph{$(\text{D}_1)$} holds. \end{proposition} Note that the restriction $\al>1$, together with \eqref{eq:p<}, implies the integrability of the nonnegative function $p$. \begin{proof}[Proof of Proposition~\ref{prop:shift}] Without loss of generality, $\th_0=-\th$ and $\th>0$, so that $\th-\th_0=2\th>0$ and \begin{equation} J(\th,\th_0)=\int_\R\sqrt{p(x+\th)}\sqrt{p(x-\th)}\dd x=\int_{|x|\ge2\th}\cdots\;+\int_{|x|<2\th}\cdots. \end{equation} Since $|x|\ge2\th$ implies $|x\pm\th|\OG|x|$, condition \eqref{eq:p<} yields \begin{equation} \int_{|x|\ge2\th}\cdots\O\int_{|x|\ge2\th}|x|^{-\al}\dd x\O\th^{1-\al}. 
\end{equation} Since $0\le x<2\th$ implies $x+\th\ge\th$ and $-\th\le x-\th<\th$, condition \eqref{eq:p<} yields \begin{equation} \int_0^{2\th}\ldots\O\th^{-\al/2}\int_{-\th}^\th(1+|u|)^{-\al/2}\dd u\O\th^{-\al/2}\,\th^{0\vee(1-\al/2)} \ln\th \end{equation} for (say) $\th\ge2$; the factor $\ln\th$ is actually needed here only in the case when $\al=2$. The integral $\int_{-2\th}^0\cdots$ can be bounded quite similarly. So, \begin{equation} \int_{|x|<2\th}\ldots\O\th^{-\al/2}\,\th^{0\vee(1-\al/2)} \ln\th \end{equation} for $\th\ge2$. Thus, $(\text{D}_1)$ holds for any $\ga\in(0,\ga_\al)$, where $\ga_\al:=\frac\al2-\big(0\vee(1-\frac\al2)\big)=\frac\al2\wedge(\al-1)>0$. \end{proof} The problem concerning the possibility of a non-compact parameter space $\Th$ may be illustrated by the following simple example: \begin{example}\label{ex:non-compact} For $\th\in\Th=(-1,\infty)$, let $p_\th$ be the density (with respect to the Lebesgue measure on $\R$) of the normal distribution with mean $\mu(\th):=\frac\th{1+\th^2}$ and variance $\si^2(\th):=\frac{(1+\th)^3-\th}{1+\th^3}$, and let $\th_0=0$, so that $\th_0\in\Th^\circ=\Th$. Then for any two distinct $\th$ and $\tau$ in $\Th$ the equality $\mu(\tau)=\mu(\th)$ implies $\th\notin\{0,1\}$ and $\tau=1/\th>0$, whence $\si^2(\tau)\ne\si^2(\th)$. So, $p_\tau\ne p_\th$ for any two distinct $\th$ and $\tau$ in $\Th$. However, $\mu(\th)\underset{\th\to\infty}\longrightarrow0=\mu(0)$ and $\si^2(\th)\underset{\th\to\infty}\longrightarrow1=\si^2(0)$, so that $p_0$ is almost indistinguishable from $p_\th$ for large $\th$. More specifically, it is not hard to check that here \begin{equation*} J(\th,\th_0)=\int_\R\sqrt{p_\th(x)}\sqrt{p_0(x)}\dd x =\sqrt{\frac{2\si(\th)}{\si^2(\th)+1}} \exp\Big(-\frac{\mu(\th) ^2}{4 \si^2(\th)+4}\Big) \underset{\th\to\infty}\longrightarrow1, \end{equation*} so that this situation is excluded by condition \eqref{eq:large th}. 
\end{example} Now we are well prepared to state the main result of this subsection: \begin{proposition}\label{prop:la^n} Under conditions \emph{(B)}, \emph{$(\text{D}_0)$}, and \emph{$(\text{D}_1)$}, \begin{equation}\label{eq:bor} \P(|\hat\th-\th_0|>\de)\le c\,\la^n \end{equation} for some real constants $c>0$ and $\la\in[0,1)$ (depending on $\ga,c_0,\al,c_1,c_2$) and all natural $n$; cf.\ \eqref{eq:log-conc}. \end{proposition} Inequality \eqref{eq:bor} is similar to inequality (6) in \cite[Section~33.2, Theorem~3]{borovkov_MS}, with the following main differences. \begin{enumerate}[(i)] \item It is assumed in \cite{borovkov_MS} that $\Th$ is compact, in addition to the assumption that $I(\th)$ is continuous in $\th\in\Th$ and strictly positive for $\th\in\Th$. Under these assumptions, condition $(\text{D}_0)$ is not assumed but derived in \cite{borovkov_MS}. As noted above, if the parameter space $\Th$ is compact, then condition $(\text{D}_1)$ is trivial. \item As we do not assume that $\Th$ is compact (or even bounded), we need to control the behavior of the log-likelihood $\ell_\X(\th)$ for $\th$ far from $\th_0$. This is done using condition $(\text{D}_1)$. \item In \cite{borovkov_MS}, instead of condition (B) above, it is assumed that the Fisher information $I(\th)$ is just bounded over all $\th\in\Th$. However, mainly following the lines of proof in \cite{borovkov_MS}, one can see that the more general condition (B) suffices, given conditions $(\text{D}_0)$ and $(\text{D}_1)$. 
\end{enumerate} For the readers' convenience here is \begin{proof}[Proof of Proposition~\ref{prop:la^n}] Let \begin{equation} Z(u):=\frac{p_{\th_0+u}(\X)}{p_{\th_0}(\X)}=\prod_{i=1}^n \frac{p_{\th_0+u}(X_i)}{p_{\th_0}(X_i)}=\exp\{\ell_\X(\th_0+u)-\ell_\X(\th_0)\}, \end{equation} where $p_\th(\X):=\prod_{i=1}^n p_\th(X_i)=\exp\ell_\X(\th)$ and $\ell_\X$ is the log-likelihood function, as defined in \eqref{eq:ell}; here and subsequently in this proof, $u$ is a real number such that $\th_0+u\in\Th$. By conditions $(\text{D}_1)$ and $(\text{D}_0)$, there exist real $C_1>0$, \begin{equation}\label{eq:u_*>} u_*>C_1^{1/\ga}\vee\de, \end{equation} and $C_0>0$ such that \begin{equation}\label{eq:Z^{1/2},1} \E Z(u)^{1/2}=\E_{\th_0} Z(u)^{1/2}=J(\th_0,\th_0+u)^n\le C_1^n u^{-n\ga}\quad\text{if }|u|>u_* \end{equation} and \begin{equation}\label{eq:Z^{1/2},0} \E Z(u)^{1/2}=\big(1-\tfrac12\,H(\th_0,\th_0+u)\big)^n \le (1-u^2/C_0)^n \le e^{-nu^2/C_0} \quad\text{if }|u|\le u_*. \end{equation} Note also that $\E Z(u)=1$. So, introducing \begin{equation} P(u):=Z(u)^{3/4}, \end{equation} by the Cauchy--Schwarz inequality one has \begin{equation}\label{eq:EP} \E P(u)\le\sqrt{\E Z(u)\,\E Z(u)^{1/2}}=\sqrt{\E Z(u)^{1/2}}. \end{equation} Further, $P'(u)=\frac34\,\ell'_\X(\th_0+u)Z(u)^{3/4}$, whence, again by the Cauchy--Schwarz inequality, \begin{equation}\label{eq:EP'} \begin{aligned} \E|P'(u)|&\le\tfrac34\,\sqrt{\E\ell'_\X(\th_0+u)^2 Z(u)\,\E Z(u)^{1/2}} \\ &=\tfrac34\,\sqrt{\E_{\th_0+u}\ell'_\X(\th_0+u)^2 \,\E Z(u)^{1/2}} \\ &=\tfrac34\,\sqrt{nI(\th_0+u) \,\E Z(u)^{1/2}}. \end{aligned} \end{equation} For $u>\de$, one has $P(u)\le P(\de)+\int_{\Th\cap(\de,\infty)}|P'(t)|\,dt$. 
So, by \eqref{eq:EP}, \eqref{eq:Z^{1/2},0}, \eqref{eq:EP'}, \eqref{eq:I<}, \eqref{eq:u_*>}, and \eqref{eq:Z^{1/2},1}, \begin{equation*} \E\sup_{u>\de}P(u)\le e^{-n\de^2/(2C_0)}+I_0+I_1=\la_*^n+I_0+I_1, \end{equation*} where $\la_*:=e^{-\de^2/(2C_0)}\in(0,1)$, \begin{equation*} I_0:=\int_\de^{u_*}\sqrt{n(c_1+c_2u^\al)}\,e^{-nu^2/(2C_0)}\,du \O\int_\de^\infty \sqrt{nu^\al}\,e^{-nu^2/(2C_0)}\,du \O \la_0^n \end{equation*} for any fixed $\la_0\in(\la_*,1)$, and \begin{equation*} I_1:=\int_{u_*}^\infty \sqrt{n(c_1+c_2u^\al)C_1^n u^{-n\ga}}\,du\O\la_1^{n/2} \end{equation*} for any fixed $\la_1\in(C_1/u_*^\ga,1)$ -- note that the latter interval is nonempty, in view of \eqref{eq:u_*>}. Thus, $\E\sup_{u>\de}P(u)\O\la^n$ for $\la:=\la_0\vee\sqrt{\la_1}\in(0,1)$. Quite similarly, $\E\sup_{u<-\de}P(u)\O\la^n$ and hence $\E\sup_{|u|>\de}P(u)\O\la^n$. So, \begin{equation*} \P(|\hat\th-\th_0|>\de)\le\P(\sup_{|u|>\de}Z(u)\ge Z(0))=\P(\sup_{|u|>\de}P(u)\ge1)\le\E\sup_{|u|>\de}P(u)\O\la^n, \end{equation*} which completes the proof of Proposition~\ref{prop:la^n}. \end{proof} \section{Conclusion}\label{concl} Inequalities \eqref{eq:ub} and \eqref{eq:nub} together with \eqref{eq:log-conc} and Proposition~\ref{prop:la^n} yield \begin{theorem}\label{th:main} Suppose that conditions \eqref{fisher}, \eqref{M_2}, and \eqref{M_3} hold. Suppose also that either (i) the log-likelihood $\ell_x(\th)$ is concave in $\th\in\Th$, for each $x\in\XXX$, or (ii) conditions \emph{(B)}, \emph{$(\text{D}_0)$}, and \emph{$(\text{D}_1)$} hold. Then \begin{equation}\label{eq:ub,mle} \Big|\P\Big(\sqrt{nI(\th_0)}\,(\hat\th-\th_0)\le z\Big)-\Phi(z)\Big|\le\frac{\CC}{\sqrt{n}} \end{equation} for all real $z$, and \begin{equation}\label{eq:nub,mle} \Big|\P\Big(\sqrt{nI(\th_0)}\,(\hat\th-\th_0)\le z\Big)-\Phi(z)\Big|\le\frac{\CC}{z^3\,\sqrt{n}} \end{equation} for $z$ as in \eqref{eq:z,iid}. 
Here, as before, each of the two instances of the symbol $\CC$ stands for a finite positive expression whose values do not depend on $n$ or $z$, in accordance with the last paragraph of Section~\ref{f(bar V)}. \end{theorem} \begin{remark}\label{rem:slower} It should be clear that the conditions assumed in the second sentence of Theorem~\ref{th:main} can be replaced by any other conditions that imply \eqref{eq:bor} for some real constants $c>0$ and $\la\in[0,1)$ not depending on $n$. Actually, a much weaker bound, of the form $c/n^2$, instead of the exponentially fast decreasing upper bound $c\la^n$ in \eqref{eq:bor}, will already suffice. \end{remark} Theorem~\ref{th:main} can be extended to the more general case of $M$-estimators. Indeed, the condition that $p_\th$ is a pdf for $\th\ne\th_0$ is used in our proofs only in order to state that $\E_\th\ell'_X(\th)=0$ and $\E_\th \ell'_X(\th)^2=-\E_\th \ell''_X(\th)=I(\th)\in(0,\infty)$. In the case of $M$-estimators, the corresponding conditions will have to be just assumed, with some other expressions in place of the Fisher information $I(\th)$, as is done e.g.\ in \cite{pfanzagl71,pfanzagl73}, where uniform bounds of optimal order $O(1/\sqrt n)$ for $M$-estimators were obtained; $M$-estimators were referred to as minimum contrast estimates in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73}. We have chosen to restrict the consideration here to MLEs in order not to obscure the novelty elements in our result. The most significant novelty in our Theorem~\ref{th:main}, as compared with the results of \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73}, is that, in addition to the uniform bound in \eqref{eq:ub,mle}, inequality \eqref{eq:nub,mle} in Theorem~\ref{th:main} also provides a nonuniform Berry--Esseen-type bound for MLEs in general, which appears to be the first such result in the literature -- except for the already mentioned special case considered recently in \cite{nonlinear-publ}. 
On the other hand, paper \cite{pfanzagl73} treats the case of a multidimensional parameter $\th$. The uniform bound in \cite{michel-pfanzagl71} was of the form $O(\sqrt{\ln n}/\sqrt n)$, rather than of the optimal order $O(1/\sqrt n)$. Another notable distinction is that condition \cite[(1)]{michel-pfanzagl71} (the same as the corresponding conditions on page~73 in \cite{pfanzagl71} and on page~173 in \cite{pfanzagl73}) effectively reduces the consideration to the case when the parameter space $\Th$ is compact in $[-\infty,\infty]$. This obviates the need for a condition such as $(\text{D}_1)$, which is there to control the behavior of the likelihood $\ell_\X(\th)$ for large $|\th|$. However, as pointed out in \cite[page~75]{michel-pfanzagl71} concerning the main result there, the nonconstructive compactification condition used in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73} ``gives no method for determining [the] value [of the constant in the Berry--Esseen-type bound] for a given family of probability measures.'' The problem of controlling the likelihood over far-away zones of a non-compact parameter space $\Th$ was illustrated in Example~\ref{ex:non-compact}, where the ``bad'' situation was excluded by condition \eqref{eq:large th}. That same situation was also excluded by the mentioned compactification condition in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73} -- with $f_\th=-\ln p_\th$ for $\th\in\overline\Th=[-1,\infty]$, $\mu(\infty):=\lim_{\th\to\infty}\mu(\th)=0=\mu(0)$, and $\si^2(\infty):=\lim_{\th\to\infty}\si^2(\th)=1=\si^2(0)$. As was pointed out, the method of the present paper is based on the general Berry--Esseen bounds for the multivariate delta method obtained in \cite{nonlinear-publ}, which were applied here via the bracketing argument delineated in Section~\ref{bracketing}. As such, this method is quite different from the methods in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73}, specialized to deal with MLEs. 
Partly because of this difference in the methods, there are many differences between the conditions in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73} and those in the present paper. Most of these differences -- apart from the ones discussed above -- are rather minor. Since the result of \cite{pfanzagl71} is apparently the closest to ours in the literature, let us further discuss the regularity conditions in \cite{pfanzagl71}, in comparison with ours, in some detail: \begin{remark}\label{rem:compare} Condition \eqref{diff} in the present paper can be replaced by the condition that $p_\th>0$ everywhere on $\XXX$. The latter condition is necessary in order for $\ell_x(\th)=\ln p_\th(x)$ to be defined for all $x\in\XXX$; cf.\ the first paragraph on page~83 in \cite{pfanzagl71}. Our condition \eqref{fisher} follows, by Remark~\ref{rem:fisher}, from regularity conditions (iv), (v)(a), (vi) on pages~83--84 in \cite{pfanzagl71} -- for $f_\th:=-\ell_\th$. Next, condition \eqref{M_2} follows from \cite[(vi)]{pfanzagl71}. Here and in the rest of this remark, the lower-case Roman numerals and letters in parentheses refer to the regularity conditions on pages~83--84 in \cite{pfanzagl71} -- again for $f_\th:=-\ell_\th$. Next, condition \eqref{M_3} is, in the main, a bit stronger than \cite[(viii)]{pfanzagl71}. Of course, condition \eqref{M_3} can be relaxed, at the price of making it more complicated. By Remark~\ref{rem:compact}, our condition $(\text{D}_0)$ will hold if the Fisher information $I(\cdot)$ is continuous and strictly positive on $\Th$, for which conditions (ix) and (v)(a), respectively, in \cite{pfanzagl71} will be more than enough. Next, our condition $(\text{D}_1)$, to control the behavior of the likelihood $\ell_\X(\th)$ for large $|\th|$, was already discussed at length, versus the compactification condition used in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73}. 
In the case when $\Th$ is compact, for our condition (B) to hold, either one of regularity conditions (vi)(a) or (vi)(b) in \cite{pfanzagl71} will be more than enough. More generally, condition (B) together with condition $(\text{D}_1)$ replace the just mentioned compactification condition in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73}. In this paper, no explicit analogues of regularity conditions (i), (ii), (iii), (vii) of \cite{pfanzagl71} are imposed. \end{remark} So, quite predictably, neither our conditions imply those in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73}, nor vice versa. However, our conditions appear to be a bit simpler and more explicit overall than those in \cite{michel-pfanzagl71,pfanzagl71,pfanzagl73}. It should also be mentioned that in \cite{michel-pfanzagl71,pfanzagl71} both the relevant conditions and the corresponding results are stated uniformly over compact subsets of $\Th$. Of course, a similar modification of our conditions and results can be done.
https://arxiv.org/abs/1504.07406
On Maximal Unbordered Factors
Given a string $S$ of length $n$, its maximal unbordered factor is the longest factor which does not have a border. In this work we investigate the relationship between $n$ and the length of the maximal unbordered factor of $S$. We prove that for the alphabet of size $\sigma \ge 5$ the expected length of the maximal unbordered factor of a string of length~$n$ is at least $0.99 n$ (for sufficiently large values of $n$). As an application of this result, we propose a new algorithm for computing the maximal unbordered factor of a string.
\section{Introduction} If a proper prefix of a string is simultaneously its suffix, then it is called a border of the string. Given a string $S$ of length $n$, its maximal unbordered factor is the longest factor which does not have a border. The relationship between $n$ and the length of the maximal unbordered factor of $S$ has been a subject of interest in the literature for a long time, starting from the 1979 paper of Ehrenfeucht and Silberger~\cite{ESproblem}. Let $b(S)$ be the length of the maximal unbordered factor of $S$ and $\pi(S)$ be the minimal period of $S$. Ehrenfeucht and Silberger showed that if the minimal period of $S$ is smaller than $\frac{1}{2} n$, then $b(S) = \pi(S)$. Following this, they raised a natural question: how small must $b(S)$ be to guarantee $b(S) = \pi(S)$? Their conjecture was that $b(S)$ must be smaller than $\frac{1}{2} n$. However, this conjecture was proven false two years later by Assous and Pouzet~\cite{CounterExample}. As a counterexample they gave a string $$S = a^m b a^{m+1} b a^m b a^{m+2} b a^m b a^{m+1} b a^m$$ of length $n = 7m + 10$. The length of the maximal unbordered factor of this string is $b(S) = 3m + 6 \le \frac{3}{7} n + 2 < \frac{1}{2} n$ (with $ba^{m+1} ba^m ba^{m+2}$ and $a^{m+2} ba^m ba^{m+1} b$ being unbordered), and the minimal period $\pi (S) = 4m + 7 \neq b (S)$. The next attempt to answer the question was undertaken by Duval~\cite{Duval}: he improved the bound to $\frac{1}{4} n + \frac{3}{2}$. But the final answer to the question of Ehrenfeucht and Silberger was given just recently by Holub and Nowotka~\cite{EhrenfeuchtSilberger-2}. They showed that $b(S) \le \frac{3}{7} n$ implies $b(S) = \pi(S)$, and, as follows from the example of Assous and Pouzet, this bound is tight. Therefore, when either $b(S)$ or $\pi(S)$ is small, $b(S) = \pi(S)$. Exploiting this fact, one can even compute the maximal unbordered factor itself in linear time. 
The key idea is that in this case the maximal unbordered factor is an unbordered conjugate of the minimal period of $S$, and both the minimal period and its unbordered conjugate can be found in linear time~\cite{BorderArray,UnborderedConjugate}. The interesting cases are those where $b(S)$ (and, consequently, $\pi(S)$) is big. Yet, it is generally believed that they are the most common ones. This is supported by experimental results shown in Fig.~\ref{fig:max_unbordered_length} that plots the average difference between the length~$n$ of a string and the length of its maximal unbordered factor. Guided by the experimental results, we state the following conjecture: \begin{conjecture} The expected length of the maximal unbordered factor of a string of length $n$ is $n - \mathcal{O}(1)$. \end{conjecture} \begin{figure} \vspace*{-5pt} \centering \includegraphics[scale=0.5]{n_minus_max_borderless} \caption{Average difference between the length $n$ of a string and the length of its maximal unbordered factor for $1 \le n \le 100$ and alphabets of size $2 \le \sigma \le 5$.} \label{fig:max_unbordered_length} \vspace*{-5pt} \end{figure} To the best of our knowledge, there have been no attempts to prove the conjecture or any lower bound at all in the literature. In Section~\ref{sec:lower_bound} we address this gap and make the very first step towards proving the conjecture. We show that the expected length of the maximal unbordered factor of a string of length $n$ over the alphabet $A$ of size $\sigma \ge 2$ is at least $n (1 - \xi(\sigma) \cdot \sigma^{-4}) + \mathcal{O}(1)$, where $\xi(\sigma)$ is a function that converges to $2$ quickly with the growth of $\sigma$. In particular, this theorem implies that for alphabets of size $\sigma \ge 5$ the expected length of the maximal unbordered factor of a string is at least $0.99 n$ (for sufficiently large values of $n$). 
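The quantities discussed above are easy to check by brute force for small inputs. The following Python sketch (ours; a naive exhaustive check, not the algorithm of Section~\ref{sec:algo}) computes $b(S)$ directly and confirms $|S| = 7m+10$ and $b(S) = 3m+6$ for the counterexample of Assous and Pouzet:

```python
def is_unbordered(w):
    # a string is unbordered iff no proper nonempty prefix equals a suffix
    return all(w[:k] != w[-k:] for k in range(1, len(w)))

def muf_length(s):
    # length b(s) of the maximal unbordered factor, by exhaustive search
    n = len(s)
    for length in range(n, 0, -1):
        if any(is_unbordered(s[i:i + length]) for i in range(n - length + 1)):
            return length
    return 0

def assous_pouzet(m):
    # S = a^m b a^{m+1} b a^m b a^{m+2} b a^m b a^{m+1} b a^m
    blocks = [m, m + 1, m, m + 2, m, m + 1, m]
    return "b".join("a" * t for t in blocks)
```

For instance, `assous_pouzet(1)` is the string $abaababaaababaaba$ of length $17$, whose maximal unbordered factor $ba^2bab a^3$ has length $9 = 3\cdot1+6$.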
To prove the theorem we developed a method of generating strings with large unbordered factors which we find to be interesting on its own (see Section~\ref{sec:generate}). It follows that the algorithm for computing maximal unbordered factors we sketched earlier cannot be used in a majority of cases. Instead, one can consider the following algorithm. A border array of a string is an array containing the maximal length of a border of each prefix of this string. Note that a prefix of a string is unbordered exactly when the corresponding entry in the border array is zero. Therefore, to compute the maximal unbordered factor of a string $S$ it suffices to build border arrays of all suffixes of a string. It is well-known that a single border array can be constructed in linear time, which gives a quadratic time bound for the algorithm. In Section~\ref{sec:algo} we show how to modify this algorithm to make use of the fact that the expected length of the maximal unbordered factor is big. We give an $\mathcal{O}(\frac{n^2}{\sigma^4})$ time bound for the modified algorithm, as well as confirm its efficiency experimentally. \paragraph{Related work.} Apart from the aforementioned results, we consider our work to be related to three areas of research. As we have already mentioned, the maximal unbordered factor can be found by locating the rightmost zeros in the border arrays of suffixes of a string, and a better understanding of the structure of border arrays would give more efficient algorithms for the problem. The structure of border arrays has been studied in~\cite{ValidatingKMP,ValidatingKMP-1,ValidatingKMP-2,ValidatingKMP-3,ValidatingKMP-4,NumberOfBorderArrays}. In contrast to the problem we consider in this work, one can be interested in the problem of preprocessing a string to answer online factor queries related to its borders. This problem has been considered by Kociumaka et al.~\cite{InternalPM,InternalPM-old}. 
They proposed a series of data structures which, in particular, can be used to determine if a factor is unbordered in logarithmic time. Finally, repeating fragments in a string (borders of factors are one example of such fragments) were studied in connection with the \emph{Longest Common Extension} problem which asks, given a pair of positions $i,j$ in a string, to return the longest fragment that occurs both at $i$ and $j$. This problem has many solutions, yet recently Ilie et al.~\cite{LCE} showed that the simplest solution, i.e. simply scanning the string and comparing pairs of letters starting at positions $i$ and~$j$, is the fastest on average. The authors also proved that the longest common extension has expected length smaller than $\frac{1}{\sigma-1}$, where $\sigma$ is the size of the alphabet. \section{Preliminaries} \label{sec:prelim} We start by introducing some standard notation and definitions. \paragraph{Power sums.} We will need the following identities. \begin{fact} \label{lm:power_sum} $S(x) = \sum_{i=1}^{k} i \; x^{i-1} = \frac{k \; x^{k+1} - (k+1) \; x^{k} + 1}{(x-1)^2}$ for all $x \neq 1$. \end{fact} \begin{proof} $$S(x) = \bigl( \sum_{i=1}^{k} x^i \bigr) ' = \bigl( \frac{x^{k+1} - x}{x-1} \bigr) ' = \frac{((k+1) x^{k} - 1)(x-1) - (x^{k+1} - x)}{(x-1)^2}$$ Simplifying, we obtain $$S(x) = \sum_{i=1}^{k} i \; x^{i-1} = \frac{k \; x^{k+1} - (k+1) \; x^{k} + 1}{(x-1)^2}$$ \qed \end{proof} \begin{corollary} \label{cor:power_sum} $S(x) = \sum_{i=1}^{k} i \; x^{i-1} = \frac{k \; x^k}{x-1} + \mathcal{O}(x^{k-2})$ for $x \ge 1.5$. \end{corollary} \paragraph{Strings.} The alphabet $A$ is a finite set of size $\sigma$. We refer to the elements of~$A$ as \emph{letters}. A \emph{string} over $A$ is a finite ordered sequence of letters (possibly empty). Letters in a string are numbered starting from~1, that is, a string $S$ of \emph{length} $n$ consists of letters $S[1], S[2], \ldots, S[n]$. The length~$n$ of $S$ is denoted by~$|S|$. 
The set of all strings of length $n$ is denoted by $A^n$. For $1 \le i \le j \le n$, $S[i..j]$ is a \emph{factor} of $S$ with endpoints $i$ and $j$. The factor $S[1..j]$ is called a \emph{prefix} of $S$, and the factor $S[i..n]$ is called a \emph{suffix} of $S$. A prefix (or a suffix) different from $S$ and the empty string is called \emph{proper}. If a proper prefix of a string is simultaneously its suffix, then it is called a \emph{border}. For example, borders of a string $ababa$ are $a$ and $aba$. The \emph{maximal border} of a string is its longest border. For $S$ we define its \emph{border array} $B$ (also known as the \emph{failure function}) to contain the lengths of the maximal borders of all its prefixes, i.e. $B[i]$ is the length of the maximal border of $S[1..i]$, $i = 1..n$. The last entry in the border array, $B[n]$, contains the length of the maximal border of $S$. It is well-known that the border array and therefore the maximal border of $S$ can be found in $\mathcal{O}(n)$ time and space~\cite{BorderArray}. A period of $S$ is an integer~$\pi$ such that for all $i$, $1 \le i \le n-\pi$, $S[i] = S[i +\pi]$. The minimal period of a string has length $n-B[n]$, and hence can be computed in linear time as well. \paragraph{Unbordered strings.} A string is called \emph{unbordered} if it has no border. Let $b(i, \sigma)$ be the number of unbordered strings in $A^i$. Nielsen~\cite{Bifixnote} showed that unbordered strings can be constructed in a recursive manner, starting from unbordered strings of length $2$ and inserting new letters in the ``middle''. The following theorem is a corollary of the proposed construction method: \addtocounter{theorem}{-1} \begin{theorem}[\cite{Bifixnote}] \label{th:unbordered} The sequence $\Big\{ \frac{b(i, \sigma)}{\sigma^i} \Big\}_{i = 1}^{\infty}$ is monotonically nonincreasing and it converges to a constant $\alpha$, which satisfies $\alpha \ge 1 - \sigma^{-1} - \sigma^{-2}$. 
\end{theorem} \begin{corollary}[\cite{Bifixnote}] \label{cor:unbordered} $b (i, \sigma) \ge \sigma^i - \sigma^{i-1} - \sigma^{i-2}$ for all $i$. \end{corollary} This corollary immediately implies that the expected length of the maximal unbordered factor of a string of length $n$ is at least $n (1 - \sigma^{-1}-\sigma^{-2})$. We improve this lower bound in the subsequent sections. We will make use of a lower bound on the number $b_j(i, \sigma)$ of unbordered strings whose first letter differs from each of the subsequent $j$ letters. An example of such a string for $j = 2$ is $a b c a c b b$. \begin{lemma} \label{lm:unbordered+} $b_j (i, \sigma) \ge (\sigma-1)^{j+1}\sigma^{i-j-1} - \sigma^{i-2}$ for all $i \ge j+1$. \end{lemma} \begin{proof} The number of such strings is equal to $b(i, \sigma)$ minus the number $b_j^- (i, \sigma)$ of unbordered strings of length $i$ that do not have the property. We estimate the latter from above by the number of such strings in the set of all strings with their first letter not equal to the last letter. Hence, $b_j^- (i, \sigma) \le (\sigma - 1) \sigma^{i-1} - (\sigma-1)^{j+1}\sigma^{i-j-1}$. Recall that $b(i, \sigma) \ge \sigma^i - \sigma^{i-1} - \sigma^{i-2}$ by Corollary~\ref{cor:unbordered}. The claim follows. \qed \end{proof} \begin{myremark} The right-hand side of the inequality of Lemma~\ref{lm:unbordered+} is often negative for $\sigma = 2$. We will not use it for this case. \end{myremark} The \emph{maximal unbordered factor} of a string (MUF) is naturally defined to be the longest factor of the string which is unbordered. \section{Generating strings with large MUF} \label{sec:generate} In this section we explain how to generate strings of some fixed length $n$ with large maximal unbordered factors. To show the lower bounds we announced, we will need many such strings. The idea is to generate them from unbordered strings. Let $S$ be an unbordered string of length $i \ge \lceil\frac{n}{2}\rceil$. 
Consider a string $S P_1 \ldots P_k$ of length $n$, where $P_1, \ldots, P_k$ are prefixes of $S$. It is not difficult to see that the maximal unbordered factor of any string of this form has length at least~$i$, because $S$ itself is one of its unbordered factors. The number of such strings that can be generated from $S$ is $2^{n-i-1}$, because each of them corresponds to a composition of $n-i$, i.e. a representation of $n-i$ as an ordered sum of strictly positive integers. But some of these strings can be equal. Consider, for example, an unbordered string $S = aaabab$. Then the two strings $aaababaaa$ ($S$ appended with its prefix $aaa$) and $aaababaaa$ ($S$ appended with its prefixes $a$ and $aa$) are equal. However, we can show the following lemma. \begin{lemma} \label{lm:distinct} Let $S_1 \neq S_2$ be two unbordered strings. Any two strings of the form above generated from $S_1$ and $S_2$ are distinct. \end{lemma} \begin{proof} Suppose that the produced strings are equal. If $|S_1| = |S_2|$, we immediately obtain $S_1 = S_2$, a contradiction. Otherwise, w.l.o.g. assume $|S_1| < |S_2|$. Then $S_2$ is equal to a concatenation of $S_1$ and some of its prefixes. The last of these prefixes is simultaneously a proper prefix and a suffix of $S_2$, i.e. $S_2$ is not unbordered. A contradiction. \qed \end{proof} Our idea is to produce as many strings of the form $S P_1 \ldots P_k$ as possible, taking extra care to ensure that all strings produced from a fixed string $S$ are distinct. From unbordered strings of length $i = n$ and $i = n-1$ we produce just one string of length $n$. (For $i = n$ it is the string itself and for $i = n-1$ it is the string appended with its first letter.) For unbordered strings of length $i \le n-2$ we propose a different method based on the lemma below.
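The construction above is easy to experiment with. A brute-force Python sketch (exponential enumeration, intended only for small inputs) that also reproduces the duplicate arising from $S = aaabab$:

```python
def is_unbordered(s):
    # A string is unbordered iff no proper nonempty prefix equals
    # the suffix of the same length.
    return not any(s[:k] == s[-k:] for k in range(1, len(s)))

def compositions(m):
    # All representations of m as ordered sums of strictly positive
    # integers; there are 2^(m-1) of them for m >= 1.
    if m == 0:
        yield ()
        return
    for first in range(1, m + 1):
        for rest in compositions(m - first):
            yield (first,) + rest

def generated_strings(s, n):
    # All distinct strings S P_1 ... P_k of length n obtained by appending
    # prefixes of s (valid when len(s) >= n - len(s), as in the text).
    return {s + ''.join(s[:p] for p in parts) for parts in compositions(n - len(s))}
```

For $S = aaabab$ and $n = 9$ there are $2^{3-1} = 4$ compositions of $3$, but all four concatenations coincide with $aaababaaa$, so only one distinct string survives; this is exactly why the lemma below restricts the length of the last appended prefix.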
\begin{lemma} Each unbordered string $S$ of length $i$ whose first letter differs from each of the subsequent $j$ letters, where $\lceil n/2 \rceil \le i < n-j$, gives at least $2^j$ distinct strings of the form $S P_1 \ldots P_k$. \end{lemma} \begin{proof} We choose the last prefix $P_k$ to be a prefix of $S$ of length at least $n-i-j$. We place no restrictions on the first $k-1$ prefixes. Let us start by showing that all generated strings are distinct. Suppose there are two equal strings $S P_1 \ldots P_\ell$ and $S P'_1 \ldots P'_{\ell'}$. Let $P_d, P'_d$ be the first pair of prefixes that have different lengths. W.l.o.g. assume that $|P_d| < |P'_d|$. Then $d \neq \ell$ and hence $|P_d| \le j = n - i - (n-i-j)$. It follows that $P'_d$ (which is a prefix of $S$) contains at least two occurrences of $S[1]$, one at the position $1$ and one at the position $|P_d|+1 \le j+1$. In other words, we have $S[1] = S[|P_d|+1]$ and $|P_d|+1 \le j+1$, which contradicts our choice of $S$. If the length of the last prefix is fixed to some integer $m \ge n-i-j$, then each of the generated strings $S P_1 \ldots P_k$ is defined by the lengths of the first $k-1$ of the appended prefixes. In other words, there is a one-to-one correspondence between the generated strings and compositions of $n-i-m$. (Here we use $i \ge \lceil n/2 \rceil$ to ensure that every composition corresponds to a sequence of prefixes of~$S$.) The number of compositions of $n-i-m$ is $1$ when $m = n- i$ and $2^{n-i-m-1}$ otherwise. Summing up for all $m$ from $n-i-j$ to $n-i$ we obtain that the number of the generated strings is $2^j$. \qed \end{proof} Let us estimate the total number of strings produced by this method. We produce one string from each unbordered string of length $i$. Then, from each unbordered string of length $i$ whose first letter differs from its second letter, we produce $1 = 2 - 1$ more string.
If the first letter differs both from the second and the third letters, we produce $2 = 2^2 - 1 - 1$ more strings. And finally, if the first letter differs from the subsequent $j$ letters, we produce $2^{j-1} = 2^j - \bigl(1 + 1 + 2 + \ldots + 2^{j-2}\bigr)$ more strings. It follows that the number of strings we can produce from unbordered strings of length $i \le n-2$ is \begin{equation*} \label{eq:MUF=i} b (i, \sigma) + \sum_{j=1}^{n-i-1} 2^{j-1} \cdot b_j (i, \sigma) \end{equation*} Recall that the maximal unbordered factor of each of the generated strings has length at least $i$ and that none of them can be equal to a string generated from an unbordered string of different length. \section{Expected length of MUF} \label{sec:lower_bound} In this section we prove the main result of this paper. \begin{theorem} \label{th:difference} The expected length of the maximal unbordered factor of a string of length $n$ over an alphabet $A$ of size $\sigma \ge 2$ is at least \begin{equation} n\cdot (1 - \xi(\sigma) \cdot \sigma^{-4}) + \mathcal{O}(1) \end{equation} where $\xi(2) = 8$ and $\xi(\sigma) = \frac{2\sigma^3 - 2\sigma^2}{(\sigma-2)(\sigma^2 - 2\sigma +2)}$ for $\sigma > 2$. \end{theorem} Before we give a proof of the theorem, let us say a few words about $\xi(\sigma)$. This function is monotonically decreasing for $\sigma \ge 2$ and quickly converges to $2$. We give the first four values of $\xi(\sigma)$ (rounded up to 3 s.f.) and $1 - \xi(\sigma) \cdot \sigma^{-4}$ (rounded down to 3 s.f.) in the table below.
\begin{table} \centering \begin{tabularx}{0.8\textwidth}{X|X|X|X|X} & $\sigma = 2$ & $\sigma = 3$ & $\sigma = 4$ & $\sigma = 5$ \\ \hline $\xi(\sigma)$ & 8.000 & 7.200 & 4.800 & 3.922\\ \hline $1 - \xi(\sigma) \cdot \sigma^{-4}$ & 0.500 & 0.911 & 0.981 & 0.993 \end{tabularx} \end{table} \begin{corollary} The expected length of the maximal unbordered factor of a string of length $n$ over the alphabet $A$ of size $\sigma \ge 5$ is at least $0.99 n$ (for sufficiently large values of $n$). \end{corollary} \subsubsection*{Proof of Theorem~\ref{th:difference}.} Let $\beta^n_i(\sigma)$ be the number of strings in $A^n$ such that the length of their maximal unbordered factor is $i$. The expected length of the maximal unbordered factor is then equal to \begin{equation*} \frac{1}{\sigma^n} \sum_{i = 1}^{n} i \cdot \beta^n_i(\sigma) \end{equation*} For the sake of simplicity, we temporarily omit $\frac{1}{\sigma^n}$ and add it back at the very end. Recall that in the previous section we showed how to generate a set of distinct strings of length $n$ with maximal unbordered factors of length at least $i$ which contains \begin{equation*} b (i, \sigma) + \sum_{j=1}^{n-i-1} 2^{j-1} \cdot b_j (i, \sigma) \end{equation*} strings for all $\lceil \frac{n}{2}\rceil \le i \le n-2$ and $b(i, \sigma)$ strings for $i \in \{n-1, n\}$. Then \begin{equation} \label{eq:start} \sum_{i = 1}^{n} i \cdot \beta^n_i(\sigma) \ge \underbrace{\sum_{i=\lceil n/2 \rceil}^{n} i \cdot b (i, \sigma)}_{(S_1)} + \underbrace{\sum_{i=\lceil n/2 \rceil}^{n-2} \sum_{j=1}^{n-i-1} 2^{j-1} \cdot i \cdot b_j (i, \sigma)}_{(S_2)} \end{equation} We start by computing $(S_1)$.
By the monotonicity in Theorem~\ref{th:unbordered}, we may replace $b(i, \sigma)$ with $\frac{b (n, \sigma)}{\sigma^{n-i}}$ in $(S_1)$ to obtain: \begin{equation*} (S_1) \ge \sum_{i = \lceil\frac{n}{2}\rceil}^{n} i \; \frac{b (n, \sigma)}{\sigma^{n-i}} = \frac{b (n, \sigma)}{\sigma^{n-1}} \bigl( \sum_{i = \lceil\frac{n}{2}\rceil}^{n} i \; \sigma^{i-1} \bigr) \end{equation*} Note that the lower limit in the inner sum of $(S_1)$ can be replaced by one because the correction term is small: \begin{equation*} \frac{b (n, \sigma)}{\sigma^{n-1}} \sum_{i=1}^{\lceil n/2 \rceil-1} i \sigma^{i-1} \le \frac{n^2 \cdot b(n, \sigma)}{4\sigma^{n/2}} = \mathcal{O}(\sigma^n) \end{equation*} We finally use Corollary~\ref{cor:power_sum} for $x = \sigma$ and $k=n$ to compute the right-hand side of the inequality: \begin{equation} \label{eq:unbordered_as_period_fin} (S_1) \ge \frac{n\sigma}{\sigma-1} \cdot b (n, \sigma) + \mathcal{O}(\sigma^n) \end{equation} We note that for $\sigma = 2$ the right-hand side is at least $2n \cdot (2^n - 2^{n-1}-2^{n-2})+\mathcal{O}(2^n) = n \cdot 2^{n-1} + \mathcal{O}(2^n)$ by Corollary~\ref{cor:unbordered} and $(S_2) \ge 0$. Hence, $\sum_{i = 1}^{n} i \cdot \beta^n_i(2) \ge n \cdot 2^{n-1} + \mathcal{O}(2^n)$. Dividing both sides by $2^n$, we obtain the theorem. Below we assume $\sigma > 2$ and for these values of $\sigma$ give a better lower bound on $(S_2)$. Recall that $b_j(i, \sigma) \ge (\sigma-1)^{j+1}\sigma^{i-j-1} - \sigma^{i-2}$ (see Lemma~\ref{lm:unbordered+}).
It follows that \begin{equation*} (S_2) \ge \sum_{i=\lceil n/2 \rceil}^{n-2} \sum_{j=1}^{n-i-1} 2^{j-1} \cdot i \cdot \bigl( (\sigma-1)^{j+1}\sigma^{i-j-1} - \sigma^{i-2} \bigr) \end{equation*} Let us change the order of summation: \begin{equation*} (S_2) \ge \sum_{j=1}^{\lfloor n/2 \rfloor-1} 2^{j-1} \cdot \bigl( (\sigma-1)^{j+1}\sigma^{-j} - \sigma^{-1} \bigr) \sum_{i=\lceil n/2 \rceil}^{n-j-1} i \cdot \sigma^{i-1} \end{equation*} We can replace the lower limit in the inner sum of $(S_2)$ by one as it will only change the sum by $\mathcal{O}(\sigma^n)$. After replacing the lower limit, we apply Corollary~\ref{cor:power_sum} to compute the inner sum: \begin{equation*} (S_2) \ge \sum_{j=1}^{\lfloor n/2 \rfloor-1} 2^{j-1} \cdot \bigl( (\sigma-1)^{j+1} \sigma^{-j} - \sigma^{-1} \bigr) \cdot (n-j-1) \frac{\sigma^{n-j-1}}{\sigma-1} + \mathcal{O}(\sigma^n) \end{equation*} We divide the sum above into positive and negative parts: \begin{equation*} \underbrace{\sum_{j=1}^{\lfloor n/2 \rfloor-1} (n-j-1) \; 2^{j-1} (\sigma-1)^{j} \sigma^{n-2j-1}}_{(P)} - \underbrace{\sum_{j=1}^{\lfloor n/2 \rfloor-1} (n-j-1) 2^{j-1} \frac{\sigma^{n-j-2}}{\sigma-1}}_{(N)} \end{equation*} We start by computing $(N)$. We again apply the trick with the lower limit and Fact~\ref{lm:power_sum}, substituting $k = n-j-1$: \begin{equation*} (N) = \frac{2^{n-3}}{\sigma-1} \sum_{k = \lceil\frac{n}{2}\rceil}^{n-2} k \bigl(\frac{\sigma}{2}\bigr)^{k-1} = \frac{(n-2)\sigma^{n-2}}{(\sigma-1)(\sigma-2)} + \mathcal{O}(\sigma^n) \end{equation*} Computing $(P)$ is a bit more involved.
We divide it into two parts: \begin{equation*} (P) = \underbrace{\frac{(n-1) \sigma^{n-1}}{2} \cdot \sum_{j=1}^{\lfloor n/2 \rfloor-1} \bigl( \frac{2 (\sigma-1)}{\sigma^2} \bigr)^j}_{R_1} - \underbrace{\sigma^{n-1} \sum_{j=1}^{\lfloor n/2 \rfloor-1} j \; 2^{j-1} (\sigma-1)^j \sigma^{-2j}}_{R_2} \end{equation*} $(R_1)$ is the sum of a geometric progression and it is equal to \begin{equation*} \frac{(n-1) \sigma^{n-1}}{2} \cdot \frac{\bigl( \frac{2 (\sigma-1)}{\sigma^2} \bigr)^{\lfloor n/2 \rfloor} - \frac{2 (\sigma-1)}{\sigma^2}}{\frac{2 (\sigma-1)}{\sigma^2} - 1} = \frac{(n-1) \sigma^{n-1}}{2} \cdot \frac{2(\sigma-1)}{\sigma^2 - 2\sigma + 2} + \mathcal{O}(\sigma^n) \end{equation*} \begin{lemma} \label{lm:R_2_is_small} $(R_2) = \mathcal{O}(\sigma^n)$. \end{lemma} \begin{proof} We start our proof by rewriting $(R_2)$: \begin{equation*} (R_2) = \sigma^{n-3}(\sigma-1) \cdot \sum_{j=1}^{\lfloor n/2 \rfloor-1} j \; \bigl( \frac{2 (\sigma-1)}{\sigma^2}\bigr)^{j-1} \end{equation*} We apply Fact~\ref{lm:power_sum} for $x = \frac{2 (\sigma-1)}{\sigma^2}$ and $k = \lfloor n/2 \rfloor-1$ to compute the inner sum: \begin{equation*} (R_2) = \sigma^{n-3}(\sigma-1) \cdot \frac{(\lfloor n/2 \rfloor-1) \cdot \bigl(\frac{2 (\sigma-1)}{\sigma^2}\bigr)^{\lfloor n/2 \rfloor} - \lfloor n/2 \rfloor \cdot \bigl(\frac{2 (\sigma-1)}{\sigma^2}\bigr)^{\lfloor n/2 \rfloor-1} + 1}{\bigl(\frac{2 (\sigma-1)}{\sigma^2} - 1\bigr)^2} \end{equation*} The claim follows. \qed \end{proof} We now summarize our findings.
From the equations for $(P)$, $(N)$, $(R_1)$, and $(R_2)$ we obtain (after simplification): \begin{equation} \label{eq:unbordered+prefixes} (S_2) \ge (P) - (N) = n \cdot \bigl( \frac{\sigma^n - \sigma^{n-1}}{\sigma^2 - 2\sigma + 2} - \frac{\sigma^{n-2}}{(\sigma-1)(\sigma-2)}\bigr) + \mathcal{O}(\sigma^n) \end{equation} We now return to Equation~\eqref{eq:start} and use our lower bounds for $(S_1)$ and $(S_2)$ together with Corollary~\ref{cor:unbordered} for $b(n, \sigma)$: \begin{equation*} \sum_{i = 1}^{n} i \cdot \beta^n_i(\sigma) \ge n \cdot \bigl( \frac{\sigma^{n+1} - \sigma^n - \sigma^{n-1}}{\sigma - 1} + \frac{\sigma^n - \sigma^{n-1}}{\sigma^2 - 2\sigma + 2} - \frac{\sigma^{n-2}}{(\sigma-1)(\sigma-2)} \bigr) + \mathcal{O}(\sigma^n) \end{equation*} We now simplify the expression above and reinstate the factor $\frac{1}{\sigma^n}$, as promised at the very beginning of the proof, to obtain: \begin{equation} \frac{1}{\sigma^n} \sum_{i=1}^n i \cdot \beta^n_i (\sigma) \ge n\cdot (1 - \xi(\sigma) \cdot \sigma^{-4}) + \mathcal{O}(1) \end{equation} where $\xi(\sigma) = \frac{2\sigma^3 - 2\sigma^2}{(\sigma-2)(\sigma^2 - 2\sigma +2)}$. This completes the proof of Theorem~\ref{th:difference}. \qed \begin{myremark} Theorem~\ref{th:difference} actually provides a lower bound on the expected length of the maximal unbordered prefix (rather than that of the maximal unbordered factor), which suggests that this bound could be improved. \end{myremark} \section{Computing MUF} \label{sec:algo} Based on our findings we propose an algorithm for computing the maximal unbordered factor of a string $S$ of length $n$ and give an upper bound on its expected running time. A basic algorithm would be to compute the border arrays (see Section~\ref{sec:prelim} for the definition) of all suffixes of $S$. The border arrays contain the lengths of the maximal borders of all prefixes of all suffixes of $S$, i.e., of all factors of $S$.
It remains to scan the border arrays and to select the longest factor such that the length of its maximal border is zero. Since a border array can be computed in linear time, the running time of this algorithm is $\mathcal{O}(n^2)$. The algorithm we propose is a minor modification of the basic algorithm. We build border arrays for suffixes of $S$ starting from the longest one. After building an array $B_i$ for $S[i..n]$ we scan it and locate the longest factor $S[i..j]$ such that the length of its maximal border stored in $B_i [j]$ is zero. We then compare $S[i..j]$ and the current maximal unbordered factor (initialized with an empty string). If $S[i..j]$ is longer, we update the maximal unbordered factor and proceed. Once we reach a suffix shorter than the current maximal unbordered factor, we stop. \begin{theorem} \label{th:avg_time_naive} The maximal unbordered factor of a string of length $n$ over an alphabet $A$ of size $\sigma$ can be found in $\mathcal{O}(\frac{n^2}{\sigma^4})$ expected time. \end{theorem} \begin{proof} Let $b(S)$ be the length of the maximal unbordered factor of $S$. Then the running time of the algorithm is $\mathcal{O}((n-b(S))\cdot n)$, because the maximal unbordered factor is a prefix of one of the first $n-b(S)+1$ suffixes of $S$ (starting from the longest one). Averaging this bound over all strings of length $n$, we obtain that the expected running time is $$\mathcal{O}(\frac{1}{\sigma^n} \sum_{S\in A^n} (n-b(S)) \cdot n ) = \mathcal{O}(n \cdot ( \frac{1}{\sigma^n} \sum_{S\in A^n} (n-b(S))))$$ and $\frac{1}{\sigma^n} \sum_{S\in A^n} (n-b(S)) = \mathcal{O}(\frac{n}{\sigma^4})$, as follows from Theorem~\ref{th:difference} and the properties of $\xi(\sigma)$. \qed \end{proof} We performed a series of experiments to confirm that the expected running time of the proposed algorithm is much smaller than that of the basic algorithm.
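A minimal Python sketch of the modified algorithm (the border-array routine is the classical linear-time failure-function computation); this is an illustration of the method, not the benchmarked implementation from the experiments below:

```python
def border_array(s):
    # b[i] = length of the maximal border of the prefix s[0..i].
    n = len(s)
    b = [0] * n
    k = 0  # length of the current border candidate
    for i in range(1, n):
        while k > 0 and s[i] != s[k]:
            k = b[k - 1]
        if s[i] == s[k]:
            k += 1
        b[i] = k
    return b

def maximal_unbordered_factor_length(s):
    # Length of the longest unbordered factor of s.
    n = len(s)
    best = 0
    for i in range(n):  # suffixes s[i:], longest first
        if n - i <= best:
            break  # remaining suffixes cannot beat the current best
        b = border_array(s[i:])
        # Unbordered prefixes of s[i:] are exactly those with border length 0;
        # take the longest one.
        for j in range(len(b) - 1, -1, -1):
            if b[j] == 0:
                best = max(best, j + 1)
                break
    return best
```

On $ababa$ this returns $2$ (the factors $ab$ and $ba$ are the longest unbordered ones), and on the unbordered string $aaabab$ it returns the full length $6$.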
We compared the time required by the algorithms for strings of length $1 \le n \le 100$ over alphabets of size $\sigma \in \{2, 3, 4, 5, 10\}$. The time required by the algorithms was computed as the average time on a set of size $N = 10^6$ of randomly generated strings of a given length. The experiments were performed on a PC equipped with one 2.6 GHz Intel Core i5 processor. As can be seen in Fig.~\ref{fig:running_time}, the minor modification we proposed decreases the expected running time dramatically. The obtained results were similar for all considered alphabet sizes. All source files, results, and plots can be found in the repository \texttt{\url{http://github.com/avlonger/unbordered}}. \begin{figure} \centering \includegraphics[scale=0.4]{time} \caption{Average running times of the proposed algorithm (dashed line) and the basic algorithm (solid line) for strings over the alphabet of size $\sigma = 2$.} \label{fig:running_time} \end{figure} We note that the data structures~\cite{InternalPM,InternalPM-old} can be used to compute the maximal unbordered factor in a straightforward way by querying all factors in order of decreasing length. This idea seems to be very promising since these data structures need to be built just once, for the string $S$ itself. However, the data structures are rather complex, and both the theoretical bound for the expected running time, which is $\mathcal{O}(\frac{n^2}{\sigma^4} \log n)$, and our experiments show that this solution is slower than the one described above. \section{Conclusion} We consider the contributions of this work to be three-fold. We started with an explicit method of generating strings with large unbordered factors. We then used it to show that the expected lengths of the maximal unbordered factor and of the minimal period of a string of length~$n$ are $\Omega(n)$, leaving the question raised in Conjecture 1 open.
As an immediate application of our result, we gave a new algorithm for computing maximal unbordered factors and proved its efficiency both theoretically and experimentally. \subsection*{Acknowledgements} The authors would like to thank the anonymous reviewers whose suggestions greatly improved the quality of this work. \bibliographystyle{plain}
https://arxiv.org/abs/0909.1482
On positive Matrices which have a Positive Smith Normal Form
It is known that any symmetric matrix $M$ with entries in ${\mathbb R}[x]$ which is positive semi-definite for any substitution of $x\in{\mathbb R}$ has a Smith normal form whose diagonal coefficients are constant sign polynomials in ${\mathbb R}[x]$. We generalize this result by considering a symmetric matrix $M$ with entries in a formally real principal domain $A$; we assume that $M$ is positive semi-definite for any ordering on $A$ and, under one additional hypothesis concerning non-real primes, we show that the Smith normal form of $M$ is positive, up to association. Counterexamples are given when this last hypothesis is not satisfied. We also give a partial extension of our results to the case of Dedekind domains.
\section{Introduction} There is a seminal result, arising in various areas of Mathematics (we refer to \cite{Dj}), which says that any $n\times n$-symmetric matrix $M$ with entries in ${\mathbb R}[x]$ which is positive for any substitution of $x\in{\mathbb R}$ is a matricial sum of squares (it can be written $M=\sum_{i=1}^2N_iN_i^T$ where $N_i$ is an $n\times n$-matrix with entries in ${\mathbb R}[x]$). The proof of this result as given in \cite{Dj} uses, as a prerequisite, that $M$ has a Smith normal form whose diagonal coefficients are polynomials in ${\mathbb R}[x]$ of constant sign. In this article we are concerned with this last property. \par Since any matrix with entries in a Principal Ideal Domain (PID in short) admits a Smith Normal Form, we consider the following: \begin{question}\label{question} Let $M$ be a symmetric square matrix with entries in a principal ring $A$. Assume that $M$ is positive semi-definite. Are all the diagonal elements of its Smith Normal Form positive semi-definite up to association? \end{question} Before giving a precise meaning to this question in an abstract setting, we may note that the answer to this question should clearly be positive whenever the matrix $M$ is diagonal, i.e. when $M$ is already given in its Smith Normal Form.\air If $A={\mathbb R}[x]$ is the ring of all polynomials in one variable over the reals, then the positivity of a matrix $M$ in $A^{n\times n}$ can be understood as the positivity when evaluated at any point $x\in{\mathbb R}$, i.e. $\phi_y(M)$ is positive semi-definite (psd in short) for any evaluation ring-homomorphism $\phi_{y}:{\mathbb R}[x]\rightarrow{\mathbb R}$ which maps $p(x)$ to $p(y)$. The natural extension is to consider what happens if we replace $A={\mathbb R}[x]$ by $A=k[x]$, where $k$ is any field.
And, more generally, when $A$ is an abstract principal ring?\air It appears to be quite natural to introduce the real spectrum of the ring $A$, which is the set of all couples $({\mathfrak p},\leq)$ where ${\mathfrak p}$ is a prime ideal of $A$ and $\leq$ is an ordering on the field of fractions of $A/{\mathfrak p}$. The real spectrum of $A$ can be described equivalently as the set of all ring-morphisms of $A$ into a real closed field. See Section \ref{Spr} for precise definitions and properties. Then, saying that the matrix $M$ is positive semi-definite will mean that it is positive semi-definite with respect to any point $\alpha$ of ${\rm Spec_r} A$, the real spectrum of $A$, i.e. the matrix $\phi(M)$ is positive semi-definite for any ring-morphism $\phi:A\rightarrow R$ where $R$ is a real closed field. This notion obviously coincides with the common notion of positivity in the case $A={\mathbb R}[x]$. \air Following the proof of \cite{Dj}, we answer Question \ref{question} in the affirmative for all principal rings $A$ such that any non-real irreducible can be associated to a positive non-real irreducible, a condition called ${\bf (PNRI)\;}$ in the following. For instance, ${\bf (PNRI)\;}$ is satisfied when the real spectrum ${\rm Spec_r} A$ is a connected topological space.\par The first examples of principal domains are rings of integers of number fields: they are treated in Section \ref{number_fields}. The other wide class of rings with interesting arithmetic properties consists of coordinate rings of irreducible non-singular affine curves. Although only a few of them are principal, they are all Dedekind domains, so in Section \ref{Dedekind} we give some partial extensions of our framework to Dedekind domains. \section{Preliminaries} The basic facts of this section are taken from \cite{BCR}, and for some others we will refer to \cite{ABR}.
\subsection{The real spectrum of a ring}\label{Spr} The ring $A$ admits an ordering if and only if $-1$ is not a sum of squares in $A$; we say then that $A$ is {\it formally real}. A prime ideal ${\mathfrak p}$ of $A$ will be called {\it real} if the quotient ring $A/{\mathfrak p}$ is formally real. For example, in a Unique Factorization Domain (UFD in short), an irreducible $p$ will be called real if it generates a real prime ideal. \par The real spectrum ${\rm Spec_r} A$ of a ring $A$ is defined to be the set of all couples $\alpha=({\mathfrak p},\leq_\alpha)$ where ${\mathfrak p}$ is a real prime ideal of $A$ and $\leq_\alpha$ is an ordering on $A/{\mathfrak p}$. We say that ${\mathfrak p}$ is the support of $\alpha$ and denote it by ${\mathfrak p}={\rm supp}(\alpha)$. Equivalently, an element $\alpha\in{\rm Spec_r} A$ is given by a morphism $\phi:A\rightarrow R$ where $R$ is a real closed field. Given such data, $\phi^{-1}(0)={\mathfrak p}$ is a real prime ideal and the unique ordering on $R$ induces an ordering $\leq_\alpha$ on $A/{\mathfrak p}$. \par It is then clear that ${\rm Spec_r} K$ can be seen as a subset of ${\rm Spec_r} A$, where $K$ stands for the fraction field of a domain $A$. \air For a given $a\in A$, we say that $a> 0$ (resp. $a\geq 0$) if for all $\alpha\in{\rm Spec_r} A$, $a>_\alpha0$ (resp. $a\geq_\alpha 0$). Moreover, we write ${\mbox{\rm sgn}}[a](\alpha)=+1$ (respectively ${\mbox{\rm sgn}}[a](\alpha)=-1$, ${\mbox{\rm sgn}}[a](\alpha)=0$) if $a>_\alpha 0$ (respectively $a<_\alpha 0$, $a\in{\rm supp}(\alpha)$).\par Now, if $M\in A^{n\times n}$ is a symmetric matrix with entries in $A$, we say that $M$ is {\it positive semi-definite} (psd in short) if for any morphism $\phi: A\rightarrow R$ with $R$ a real closed field, the matrix $\phi(M)$ is psd. \air The real spectrum of $A$ has a natural topology admitting as a basis of open subsets all the sets $(\{\alpha\in {\rm Spec_r} A\mid a>_\alpha0\})_{a\in A}$.
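For $A={\mathbb R}[x]$, the points of ${\rm Spec_r} A$ with support $(x-y)$ correspond to the evaluation morphisms $\phi_y$, so psd-ness of $M\in A^{n\times n}$ at these points is just ordinary psd-ness of the real matrix $M(y)$. A small Python sanity check on an illustrative polynomial matrix (not taken from the text), $M(x)=\bigl(\begin{smallmatrix}x^2+1&x\\x&1\end{smallmatrix}\bigr)$, using exact rational arithmetic:

```python
from fractions import Fraction

def eval_poly(coeffs, x):
    # Evaluate a polynomial given by its coefficient list [c0, c1, ...] at x (Horner).
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def is_psd_2x2(a, b, d):
    # A symmetric 2x2 real matrix [[a, b], [b, d]] is psd iff
    # a >= 0, d >= 0 and its determinant a*d - b^2 is >= 0.
    return a >= 0 and d >= 0 and a * d - b * b >= 0

# M(x) = [[x^2 + 1, x], [x, 1]]: psd at every evaluation point phi_y,
# since det M(y) = (y^2 + 1) - y^2 = 1 > 0 and the diagonal is nonnegative.
a, b, d = [1, 0, 1], [0, 1], [1]
for y in (Fraction(k, 7) for k in range(-21, 22)):
    assert is_psd_2x2(eval_poly(a, y), eval_poly(b, y), eval_poly(d, y))
```

Since this matrix has unit determinant over ${\mathbb R}[x]$, its Smith Normal Form is the identity, in line with Question \ref{question}.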
\subsection{Generizations} We say that $\beta$ is a generization of $\alpha$, and we denote it by $\beta\rightarrow \alpha$, if $\alpha$ belongs to the closure of $\beta$. It is equivalent to saying that for all $a\in A$, if $a(\beta)\geq 0$, then $a(\alpha)\geq 0$. \par We begin with an easy observation that will be used several times in the sequel. \begin{lem} Let $A$ be a UFD. Let also $a=p^sa'$ where $s$ is an odd integer, $p$ is a real irreducible and $p\nmid a'$ (which means that $p$ does not divide $a'$). Let $\alpha\in{\rm Spec_r} A$ and assume that there are two generizations $\alpha_+$ and $\alpha_-$ of $\alpha$ such that $p>_{\alpha_+}0$ and $p<_{\alpha_-}0$. Then, $${\mbox{\rm sgn}}[a](\alpha_+)\cdot{\mbox{\rm sgn}}[a](\alpha_-)=-1.$$ \end{lem} \begin{proof} Indeed, by assumption ${\mbox{\rm sgn}}[p](\alpha_+)\cdot{\mbox{\rm sgn}}[p](\alpha_-)=-1$ and, since $p\nmid a'$, we have ${\mbox{\rm sgn}}[a'](\alpha_+)={\mbox{\rm sgn}}[a'](\alpha_-)={\mbox{\rm sgn}}[a'](\alpha).$ Note also that $(p)$ is prime since $A$ is a UFD, and so $ a'\notin{\rm supp}(\alpha)=(p)$. \end{proof} We will also need the following: \begin{lem}\label{regular_generization} Let $p$ be an irreducible of a formally real domain $A$ such that $(p)$ is a real prime ideal of $A$. Assume that $A/(p)$ is regular. Then, for any $\alpha\in{\rm Spec_r} A$ whose support is $(p)$ there are two generizations $\alpha_+,\alpha_-\in{\rm Spec_r}(K)$, where $K$ is the fraction field of $A$. Moreover, we may take $\alpha_+,\alpha_-$ such that $p>_{\alpha_+}0$ and $p<_{\alpha_-}0$. \end{lem} \begin{proof} The ring $A_{(p)}$ is a discrete valuation ring of rank $1$. Its fraction field is $K$ and its residue field is $k$, the fraction field of the ring $A/(p)$. According to \cite[II.Proposition 3.3]{ABR}, any ordering $\alpha\in{\rm Spec_r} k$ admits at least two generizations in ${\rm Spec_r} K$, as wanted.
\end{proof} \section{Unicity of the Smith Normal Form} Let $A$ be a domain, and consider the usual equivalence relation on the set of all matrices in $A^{n\times n}$: $M\sim N$ if there are two matrices $P,Q\in A^{n\times n}$ invertible over $A$ (${\mbox{\rm det}\,} P$ and ${\mbox{\rm det}\,} Q$ are units in $A$) such that $M=PNQ$.\par Let ${\rm diag}(a_1,\ldots,a_n)$ be the diagonal matrix in $A^{n\times n}$ whose diagonal coefficients are $(a_1,\ldots,a_n)$.\par Concerning the equivalence class of diagonal matrices, recall the well-known result over a PID: \begin{thm} Let $A$ be a PID. Then, any matrix $M\in A^{n\times n}$ is equivalent to a diagonal matrix $D={\rm diag}(d_1,\ldots,d_r,0,\ldots,0)$ with $d_k\mid d_{k+1}$ for all $k=1\ldots r-1$. Moreover the $d_k$'s are unique up to association. \end{thm} We say then that $D$ is the {\it Smith Normal Form} of the matrix $M$.\air In fact, in this result the PID hypothesis is essential for the existence of the matrix $D$. However, unicity can be obtained over any domain: \begin{prop}\label{Unicity_Smith} Let $A$ be a domain. Assume that $D\sim D'$ where $D,D'\in A^{n\times n}$ are diagonal matrices: $D={\rm diag}(d_1,\ldots,d_r,0,\ldots,0)$ with $d_k\mid d_{k+1}$ for all $k=1\ldots r-1$ and $D'={\rm diag}(d'_1,\ldots,d'_s,0,\ldots,0)$ with $d'_k\mid d'_{k+1}$ for all $k=1\ldots s-1$. Then, we have $r=s$ and $(d_k)=(d'_k)$ for all $k$. \end{prop} We include the proof for the convenience of the reader: \begin{proof} Let $K$ be the field of fractions of the domain $A$. Looking at the rank of the matrices $D$ and $D'$ viewed in $K^{n\times n}$, we get $r={\mbox{\rm rk}}_K(D)={\mbox{\rm rk}}_K(D')=s$. \par For any matrix $M\in A^{n\times n}$, let us introduce ${\mathfrak d}_k(M)$, the ideal of $A$ generated by all minors of order $k$ of $M$. \par \begin{lem}\label{div_minors} Let $M=NP$ where $M,N,P\in A^{n\times n}$. Then, ${\mathfrak d}_k(M)\subset{\mathfrak d}_k(N)$.
\end{lem} \begin{proof} Let $\Delta$ be a $k\times k$ minor of $M=NP$, say the minor of the first $k$ rows and columns (to fix an example). Let $C_i^k$ be the columns of $N$ truncated to their first $k$ entries. Then $$\Delta={\mbox{\rm det}\,}(p_{1,1}C_1^k+\ldots+p_{n,1}C_n^k,\ldots,p_{1,k}C_1^k+\ldots+p_{n,k}C_n^k)$$ where $P=(p_{i,j})$. Here $\Delta$ appears as a linear combination of minors of order $k$ extracted from the first $k$ rows of $N$. This implies that $\Delta$ is an element of ${\mathfrak d}_k(N)$. \end{proof} Writing $D=PD'Q$ with $P$ and $Q$ invertible, Lemma \ref{div_minors} gives ${\mathfrak d}_k(D)={\mathfrak d}_k(D')$ for all $k$. Now, since the matrices $D$ and $D'$ are diagonal, it is easy to see that these last two ideals are in fact principal, and more precisely: $${\mathfrak d}_k(D)=(d_1\ldots d_k)\quad {\rm and}\quad {\mathfrak d}_k(D')=(d'_1\ldots d'_k)$$ Hence, we get $(d_k)=(d'_k)$ for all $k$. \end{proof} \section{The main results} After setting an abstract background, we will be able to settle our result, following the main steps of the proof given by Djokovic in \cite{Dj}.\par In a given ring $A$, that we may think of as a UFD, let us introduce two conditions. \air The first one concerns the Positivity of Non-Real Irreducibles: \air {\bf (PNRI)\;}\quad Any non-real irreducible $q$ in $A$ can be associated to a non-real irreducible which is strictly positive on all ${\rm Spec_r} A$. \air Next, we come to the second condition, relative to the Generization of a given Real Irreducible $p$ such that $(p)$ is prime: \air {\bf (GRI)}\quad There is $\alpha\in{\rm Spec_r} A$ whose support is ${\rm supp}(\alpha)=(p)$ and $\beta\in{\rm Spec_r} A$ with support $(0)$, such that $\beta$ is a generization of $\alpha$. \air Note that the condition {\bf (GRI)} is true with respect to any real prime $(p)$ whenever $A/(p)$ is regular (cf. the proof of Lemma \ref{regular_generization} for this fact).
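For the archetypal PID $A={\mathbb Z}$, condition ${\bf (PNRI)\;}$ holds trivially (every irreducible is associate to a positive one), and the ideals ${\mathfrak d}_k$ above are generated by the gcd $g_k$ of all $k\times k$ minors, so the diagonal of the Smith Normal Form is $d_k=g_k/g_{k-1}$. A brute-force Python sketch of this minor-gcd computation (for tiny matrices only; not an efficient Smith algorithm):

```python
from itertools import combinations
from math import gcd

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(M):
    # d_k = g_k / g_{k-1}, where g_k is the gcd of all k x k minors
    # (the generator of the ideal of k-minors over the PID Z).
    n, m = len(M), len(M[0])
    g_prev, factors = 1, []
    for k in range(1, min(n, m) + 1):
        g = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                g = gcd(g, det([[M[r][c] for c in cols] for r in rows]))
        if g == 0:
            break  # rank reached: the remaining diagonal entries are 0
        factors.append(g // g_prev)
        g_prev = g
    return factors
```

For the positive semi-definite matrix $\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$ this yields the invariant factors $(1,3)$, both positive, consistently with Question \ref{question}.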
For instance, this condition will be automatically satisfied if $A$ is a PID, since $A/(p)$ is a field in this case.\air Now, if $A$ is a UFD, then for any irreducible $p\in A$ we may define, as usual, the $p$-valuation $\nu_p(a)$ of an element $a\in A$ to be the maximal integer $k$ such that $p^k$ divides $a$. \par Here is the main result: \begin{thm}\label{main_pss_smith} Let $A$ be a regular UFD and let $M$ be a symmetric matrix in $A^{n\times n}$ which is positive semi-definite on ${\rm Spec_r} A$. Assume that $M$ admits a Smith Normal form, i.e. there are $d_1\vert\ldots\vert d_r$ in $A$ such that $M\sim D={\rm diag}(d_1,\ldots,d_r,0\ldots,0)$. \par Assume furthermore that the ring $A$ satisfies the condition ${\bf (PNRI)\;}$ and the condition ${\bf (GRI)}$ with respect to any real irreducible $p$ dividing some $d_k$ and which changes sign on ${\rm Spec_r} A$.\par Then, for all $k=1\ldots r$, the element $d_k\in A$ can be associated to an element $d'_k\in A$ such that $d'_k>0$ everywhere on ${\rm Spec_r} A$. \end{thm} \begin{proof} We proceed by several reductions: \air $\bullet$ We may assume that $A$ is formally real (${\rm Spec_r} A\not=\emptyset$), otherwise there is nothing to do. \air $\bullet$ Because of Property {\bf (PNRI)\;}, we may assume that for all $k$, $d_k=e_kf_k$ where $e_k$ is a product of real irreducibles and $f_k>0$ on all ${\rm Spec_r} A$. \air $\bullet$ If $M=PDQ$ where $P,Q$ are invertible, we may reduce to the case where $P$ is the identity. Indeed, let $M'=DQ'$ where $Q'=Q(P^{-1})^T$ is invertible in $A^{n\times n}$ and $M'=P^{-1}M(P^{-1})^T$ remains symmetric and psd on ${\rm Spec_r} A$. Of course, $M$ and $M'$ have the same Smith Normal Form. \air$\bullet$ We may reduce to the case where $r=n$.
Indeed, let $M=DQ$ where $Q$ is invertible, $D$ diagonal, and write $M=\left(\begin{array}{cc}M_1&M_2\\ M_3&M_4\end{array}\right)$, $Q=\left(\begin{array}{cc}Q_1&Q_2\\ Q_3&Q_4\end{array}\right)$, $D=\left(\begin{array}{cc}D_1&0\\ 0&0\end{array}\right)$ with $M_1,Q_1,D_1={\rm diag}(d_1,\ldots,d_r)\in A^{r\times r}$ ; $M_2,Q_2\in A^{r\times (n-r)}$ ; $M_3,Q_3\in A^{(n-r)\times r}$ ; $M_4,Q_4\in A^{(n-r)\times (n-r)}$. We then get $$M=\left(\begin{array}{cc}D_1Q_1&D_1Q_2\\ 0&0\end{array}\right),$$ hence $M_3=M_4=0$ and by symmetry $M_2=0$. So we are reduced to $M_1=D_1Q_1$. Next, remark that $Q_1$ is necessarily invertible. Indeed, by the proof of Proposition \ref{Unicity_Smith}, we have $$({\mbox{\rm det}\,}(M_1))={\mathfrak d}_r(M)={\mathfrak d}_r(D)={\mathfrak d}_r(D_1)=(d_1\ldots d_r),$$ which shows that ${\mbox{\rm det}\,}(Q_1)$ is invertible. \air $\bullet$ Assume that the Theorem is not true. Then there is an integer $m$ such that $d_m$ is not associated to an element which is positive on all ${\rm Spec_r} A$. Hence, there is a real irreducible $p$ which changes sign on ${\rm Spec_r} A$ and such that $\nu_p(d_m)$ is odd. We will assume moreover that $\nu_p(d_i)$ is even for all $i<m$ (choosing $m$ minimal with respect to this property), and note that $\nu_p(d_1)\leq\ldots\leq \nu_p(d_m) \leq\ldots\leq \nu_p(d_n)$. \par We claim now that, for all $1\leq i\leq m$ and $m\leq j\leq n$, the entry $q_{i,j}$ of the matrix $Q$ is divisible by $p$.\air \begin{enumerate} \item[a)] For $i=j=m$ this follows from the fact that $d_mq_{m,m}\geq 0$ on all ${\rm Spec_r} A$ ($d_mq_{m,m}$ is a $1\times 1$ minor of the positive semi-definite matrix $DQ$). By condition {\bf (GRI)}\, relative to $p$, there is $\alpha\in{\rm Spec_r} A$ with support $(p)$ and $\alpha_+,\alpha_-\in{\rm Spec_r} A$ two generizations of $\alpha$ with support $(0)$, such that $p>_{\alpha_+}0$ and $p<_{\alpha_-}0$. Thus, $\nu_p(q_{m,m})$ must be odd in order to have $d_mq_{m,m}\geq_{\alpha_-} 0$.
\item[b)] For $i=m$ and $j>m$, we check that $p\vert q_{m,j}$ is a consequence of the positive semi-definiteness of the following symmetric $2\times 2$ submatrix of $DQ$ : $$\left(\begin{array}{cc} d_mq_{m,m}&d_mq_{m,j}\\ d_jq_{j,m}&d_jq_{j,j} \end{array}\right)$$ Indeed, we have the following inequality on all ${\rm Spec_r} A$ : \begin{equation}\label{eq1} (d_md_j)(q_{m,m}q_{j,j})-(d_mq_{m,j})^2\geq 0 \end{equation} At this point, we use the following result : \begin{lem}\label{valuation_real} Let $A$ be a formally real UFD which is also a regular domain, and let $a,b\in A$. Assume that $a-b^2\geq_\alpha 0$ for all $\alpha\in{\rm Spec_r} A$. Then, for all real irreducible $p$, we have $$\nu_p(a)\leq \nu_p(b^2)$$ \end{lem} \begin{proof} Assume that there exists a real irreducible $p\in A$ such that $$2r+s=\nu_p(a)> \nu_p(b^2)=2r$$ with $r,s\in{ \mathbb N}$, $s\geq 1$. We write $a-b^2=p^{2r}(p^sa'-b'^2)$ with $a',b'\in A$ such that $p\nmid a'$ and $p\nmid b'$. By assumption, $p^{2r}(p^sa'-b'^2)\geq_\alpha 0$ for all $\alpha\in{\rm Spec_r} A$.\par Take $\alpha\in{\rm Spec_r} A$ such that ${\rm supp}(\alpha)=(p)$. By Lemma \ref{regular_generization}, there is a generization $\beta$ of $\alpha$ in ${\rm Spec_r} A$ such that ${\rm supp}(\beta)=(0)$.\par By assumption, $p^{2r}(p^sa'-b'^2)\geq_\beta 0$, which yields $p^sa'-b'^2\geq_\beta 0$ since $p\notin {\rm supp}(\beta)=(0)$. By specialization, we get $p^sa'-b'^2\geq_\alpha 0$, and hence $-b'^2\geq_\alpha 0$. Necessarily, $b'\in{\rm supp}(\alpha)$, namely $p\mid b'$ : a contradiction. \end{proof} Using Lemma \ref{valuation_real}, from Equation (\ref{eq1}) we get $$ \nu_p(d_{m})+2\nu_p(q_{m,j})\geq \nu_p(d_j)+\nu_p(q_{m,m})+\nu_p(q_{j,j}) $$\air Since $\nu_p(d_{j})\geq \nu_p(d_{m})$ for $j>m$, this shows that $$ 2\nu_p(q_{m,j})\geq \nu_p(q_{j,j})+\nu_p(q_{m,m}) $$\air Since $\nu_p(q_{m,m})\geq 1$, we obtain $\nu_p(q_{m,j})\geq 1$, namely $p\vert q_{m,j}$.
\air \item[c)] For $i<m$ and $j\geq m$, we use the equality $d_iq_{i,j}=d_jq_{j,i}$ (coming from the symmetry of $DQ$) and the fact that $\nu_p(d_j)\geq \nu_p(d_m)>\nu_p(d_i)$ to conclude that $p\vert q_{i,j}$ too. \air \end{enumerate} To conclude, we use the following elementary result : \begin{lem}\label{p_divides_matrix} Let $N=(n_{i,j})\in A^{n\times n}$ be a matrix with entries in a domain $A$. Assume that there is an irreducible $p$ such that $p$ divides $n_{i,j}$ for all $1\leq i \leq r$ and $r\leq j \leq n$, with $r\in{ \mathbb N}$.\par Then, $p$ divides ${\mbox{\rm det}\,}(N)$. \end{lem} \begin{proof} We proceed by induction on $r$. If $r=1$, then the result is obvious. \par Next, if $r>1$, we expand along the last row and we find that ${\mbox{\rm det}\,}(N)$ is a linear combination of determinants which are all divisible by $p$ by the induction hypothesis. \end{proof} By Lemma \ref{p_divides_matrix}, the irreducible $p$ divides ${\mbox{\rm det}\,}(Q)$, although $Q$ is supposed to be invertible in $A^{n\times n}$ : a contradiction which concludes the proof. \end{proof} Of course, when $A$ is a Principal Ideal Domain, then $A$ is a UFD and moreover satisfies condition {\bf (GRI)}. Furthermore, the Smith Normal Form of a matrix always exists in this case, so we are able to present a shorter version of Theorem \ref{main_pss_smith} under the assumption that the ring is principal. \begin{thm}\label{main_pss_PID} Let $A$ be a PID and $M$ be a symmetric matrix in $A^{n\times n}$ which is positive semi-definite on ${\rm Spec_r} A$. Let $M\sim D={\rm diag}(d_1,\ldots,d_r,0\ldots,0)$ with $d_1\vert\ldots\vert d_r$ in $A$ be the Smith Normal Form of $M$. We assume furthermore that the ring $A$ satisfies the condition ${\bf (PNRI)\;}$. \par Then, up to association, all the $d_k$'s are positive on ${\rm Spec_r} A$. \end{thm} \remark\label{vs_ex}{According to the proof of Theorem \ref{main_pss_smith}, if we search for a counterexample to Question \ref{question}, we may focus on the case $n=2$.
Namely, take $$M=\left(\begin{array}{cc} ad_1&bd_1e_1\\ bd_1e_1&cd_1e_1 \end{array}\right)=\left(\begin{array}{cc} d_1&0\\ 0&d_1e_1 \end{array}\right)\left(\begin{array}{cc} a&be_1\\ b&c \end{array}\right)$$ where $$ac-b^2e_1=\epsilon,\quad \epsilon\in A^*$$ The symmetric matrix $M$ will be positive semi-definite if and only if we have on all ${\rm Spec_r} A$ : $$\left\{\begin{array}{cc} ad_1&\geq 0\\ cd_1e_1&\geq 0\\ d_1^2e_1&\geq 0\\ \end{array}\right.$$ So, to get a counterexample, we will search for an element $d_1\in A$ which changes sign on ${\rm Spec_r} A$ and is compatible with all the previous conditions. } \endremark \air \section{On the conditions ${\bf (GRI)}$ and ${\bf (PNRI)\;}$} \subsection{Condition ${\bf (GRI)}$} We will not discuss this rather technical condition at length, because it is automatically satisfied for the class of rings we are mainly interested in. Indeed, if $A$ is principal, then for any irreducible $p$, the ring $A/(p)$ is regular. The analogous observation is also valid when $A$ is a Dedekind domain (cf. Section \ref{Dedekind}). \subsection{Condition ${\bf (PNRI)\;}$} We may first note that if the ring $A$ is not formally real then the condition is obviously satisfied, but Theorem \ref{main_pss_smith} is then of no interest!\par The next class of rings for which the condition is easily seen to be true is given by the following : \begin{prop} Condition ${\bf (PNRI)\;}$ is satisfied whenever the invertibles of $A$ separate the clopen subsets of ${\rm Spec_r} A$. \end{prop} \begin{proof} Let ${\rm Spec_r} A=\cup_{i\in I}W_i$ be the decomposition of ${\rm Spec_r} A$ into its connected components. Let $q$ be a non-real irreducible whose sign is ${\mbox{\rm sgn}}[q](W_j)=\epsilon_j=+1$ for $j\in J$ and $\epsilon_j=-1$ for $j\in I\setminus J$. Set $P=\cup_{i\in J}W_i$ and $N=\cup_{i\in I\setminus J}W_i$ ; then ${\rm Spec_r} A=N\cup P$ is a partition of ${\rm Spec_r} A$ into two clopen subsets.
Thus, by assumption there is an invertible $u\in A$ such that $u>0$ on $P$ and $u<0$ on $N$, hence $uq>0$ on all ${\rm Spec_r} A$. \end{proof} As an easy corollary, ${\bf (PNRI)\;}$ holds for any ring whose real spectrum ${\rm Spec_r} A$ is connected, since in this case a non-real irreducible does not change sign. \air Note moreover that \begin{prop}\label{localization} The condition ${\bf (PNRI)\;}$ is stable under localization. \end{prop} \begin{proof} It suffices to see that the non-real irreducibles of $S^{-1}A$ are in one-to-one correspondence with the products of non-real irreducibles of $A$ by elements of $S$, and that ${\rm Spec_r} (S^{-1}A)$ is in one-to-one correspondence with the set of $\alpha\in{\rm Spec_r} A$ such that ${\rm supp}(\alpha)\cap S=\emptyset$. \end{proof} For instance, the coordinate ring of the real hyperbola ${\mathbb R}[x,y]/(xy-1)$ satisfies ${\bf (PNRI)\;}$\!. \remark{ Let $A$ be a principal ring. It follows from Proposition \ref{localization} that if $A$ satisfies condition {\bf (PNRI)\;}, then $A_{\mathfrak p}$ satisfies condition ${\bf (PNRI)\;}$ for every prime ${\mathfrak p}$ in $A$. Beware that the converse is false : for instance, \ref{vsex1} below gives a counterexample. } \remark{The condition ${\bf (PNRI)\;}$ is closely related to the so-called {\it change of sign criterion} (see for instance \cite[Th\'eor\`eme 4.5.1]{BCR}), which says the following : \par Let $R$ be a real closed field and $f$ an irreducible polynomial in $R[x_1,\ldots,x_n]$. Then, the ideal $(f)$ is real if and only if the polynomial $f$ changes sign on $R^n$ : $(\exists x,y\in R^n\quad f(x)f(y)<0)$.\par Indeed, the easy implication of the equivalence gives condition ${\bf (PNRI)\;}$ for the ring $R[x_1,\ldots,x_n]$ : if $f$ is a non-real irreducible, then $f$ does not change sign (here the invertibles are the elements of $R^*$, which have constant sign).
} \endremark We may naturally extend this last property to rings of polynomials over a not necessarily real closed field. \begin{prop} Let $A=k[x_1,\ldots,x_n]$ where $k$ is a formally real field. Then, the ring $A$ satisfies condition ${\bf (PNRI)\;}$. \end{prop} \begin{proof} Start with the case of a single variable : $A=k[x]$, where $k$ is a formally real field. Let $p(x)$ be an irreducible polynomial in $k[x]$ which is non-real. Up to association, we may assume that $p$ is monic. Let $\phi:A\rightarrow R$ be a ring morphism into a real closed field $R$. Since $R$ is real closed, $p$ cannot change sign on $R$ : otherwise, by the intermediate value property for polynomials, it would vanish on $R$, contradicting the fact that $p$ is non-real. Since $p$ is monic, $\lim_{x\to +\infty}\phi(p(x))=+\infty$, and we get, for every morphism $\phi:A\rightarrow R$ into a real closed field $R$ and all $x\in R$, $\phi(p(x))>0$. In other words, $p>0$ on all ${\rm Spec_r} A$. \par To generalize the argument to $A=k[x_1,\ldots,x_n]$, let us order all the monomials with respect to the lexicographic ordering. Let $m(x)=\lambda x_1^{\alpha_1}\ldots x_n^{\alpha_n}$ be the highest monomial appearing in the polynomial $p(x)$. Up to association, we may assume that $\lambda=1$. Then, we look at the element $\phi(p(x))$ for a ring morphism $A\rightarrow R$ with $R$ real closed. If we let all the $x_i$'s tend to $+\infty$ in such a way that all the successive quotients $\frac{x_i}{x_{i+1}}$ also tend to $+\infty$ (i.e. $x_1\gg x_2\gg \ldots \gg x_n$), then we get $$\phi(p(x_1,\ldots,x_n))\sim \phi(m(x_1,\ldots,x_n)).$$ Then, we conclude as previously (using the fact that $p$ cannot change sign) that $\phi(p(x_1,\ldots,x_n))>0$ for any substitution $(x_1,\ldots,x_n)\in R^n$. In other words, $p(x)>0$ on all ${\rm Spec_r} A$. \end{proof} For instance, the property ${\bf (PNRI)\;}$ is satisfied in ${ \mathbb Q}[x_1,\ldots,x_n]$ although the invertibles do not separate the clopen subsets of ${\rm Spec_r} A$. \remark{ We should also mention the link between this section and the content of \cite{Ma}.
Roughly speaking, Marshall generalizes a separation result due to Schwartz in the geometric case, introducing a condition involving local $4$-element fans. This last condition is vacuous in the one-dimensional geometric case, namely when $A={\mathbb R}[V]$ is the coordinate ring of a real affine plane curve. So it is possible to separate the connected components of ${\rm Spec_r} A$ (or equivalently those of $V$ as a variety) by polynomials. \par But, for our purpose, it does not say whether the polynomials can be taken to be invertible. } \endremark In the next section, we study for which rings of integers of number fields the condition ${\bf (PNRI)\;}$ is satisfied.\air \section{Rings of integers of number fields}\label{number_fields} Let $K$ be a finite extension of ${ \mathbb Q}$ of degree $n$. Write $K={ \mathbb Q}[x]/(m(x))$ where $m(x)$ is an irreducible polynomial of degree $n$ over ${ \mathbb Q}$. Denote by $a_1,\ldots,a_n$ all the roots of $m(x)$ in ${\mathbb C}$. We say that $K$ is {\it totally real} if all the roots of $m(x)$ are real ; equivalently, $K$ is totally real if and only if every embedding of $K$ into ${\mathbb C}$ has its image contained in ${\mathbb R}$. \air Let $A$ be the ring of integers of $K$ over ${ \mathbb Z}$. We define the norm $N(a)$ of an element $a\in A$ to be the integer $N(a)=\prod_{\phi}\phi(a)$, where $\phi$ runs over the set of all ring homomorphisms $\phi:K\rightarrow {\mathbb C}$. \begin{prop}\label{SPR_number_field} Let $A$ be the ring of integers of a degree $n$ number field $K={ \mathbb Q}[x]/(m(x))$. Then, ${\rm Spec_r} A={\rm Spec_r} K$ and, as a set, it consists of $r$ points, where $r$ is the number of real roots of $m(x)$. \end{prop} \begin{proof} A point of ${\rm Spec_r} A$ is given by a morphism $\phi:A\rightarrow R$ into a real closed field $R$.
In order to describe ${\rm Spec_r} A$, we need as a prerequisite the classical description of the prime ideals of ${ \mathbb Z}[x]$ : \begin{lem}\label{ideal} Any non-zero prime ideal ${\mathfrak p}$ of ${ \mathbb Z}[x]$ either is principal, generated by an irreducible polynomial, or contains a prime number $p\in{ \mathbb Z}$ ; in the latter case, ${\mathfrak p}=(p)$ or ${\mathfrak p}=(p,f(x))$ where $f(x)$ is a polynomial in ${ \mathbb Z}[x]$ whose reduction modulo $p$ is irreducible in ${ \mathbb Z}/p{ \mathbb Z}[x]$. \end{lem} Now let ${\mathfrak p}$ be a prime ideal in $A$, viewed as an ideal of ${ \mathbb Z}[x]$ containing $m(x)$. If $p\in{\mathfrak p}$ for a prime number $p\in{ \mathbb Z}$, then $-1=p-1$ in $A/{\mathfrak p}$ is a sum of squares ($p-1$ copies of $1$) ; in other words, $A/{\mathfrak p}$ is not formally real. As a consequence, any $\alpha\in{\rm Spec_r} A$ has support ${\rm supp}(\alpha)=(0)$ : otherwise, by Lemma \ref{ideal}, the support would be generated by an irreducible polynomial in ${ \mathbb Z}[x]$ which would have to divide $m(x)$, hence be associated to $m(x)$ ; but $(m(x))$ is the zero ideal of $A$. Hence ${\rm Spec_r} A={\rm Spec_r} K$. \par Moreover, an element of ${\rm Spec_r} K$ is determined by a morphism $\phi : K\rightarrow R$ where $R$ is a real closed field, hence can be identified with one of the real roots of $m(x)$. \end{proof} We shall note that if $K$ is Galois of degree $n$ over ${ \mathbb Q}$, then ${\rm Spec_r} A$ consists of $n$ points in case $K$ is totally real, and otherwise ${\rm Spec_r} A=\emptyset$.\par Start with the simplest examples of number fields : \subsection{Quadratic number fields} A quadratic number field has the form $K={ \mathbb Q}(\sqrt{d})$, where $d$ is square-free in ${ \mathbb Z}$. Recall that if $d\not\equiv 1{\;\mbox{\rm mod}}\; 4$, then the ring of integers of $K$ is $A={ \mathbb Z}[\sqrt{d}]={ \mathbb Z}[x]/(x^2-d)$, whereas if $d\equiv 1{\;\mbox{\rm mod}}\; 4$, then $A={ \mathbb Z}\left[\frac{1+\sqrt{d}}{2}\right]$. \air As an application of Proposition \ref{SPR_number_field}, ${\rm Spec_r} A\not=\emptyset$ if and only if $d>0$, and we say in this case that $K$ is a real quadratic number field.
In summary, the real spectrum of $A={ \mathbb Z}[x]/(x^2-d)$ consists of two distinct points, which can be seen as the two possible embeddings of $A$ into ${\mathbb R}$ : the first one given by sending $x$ to $\sqrt{d}$ and the second one by sending $x$ to $-\sqrt{d}$.\air About the units of a real quadratic number field, it is well known (see for instance \cite{Co2}) that the group of units $A^*$ is isomorphic to ${ \mathbb Z}/2{ \mathbb Z}\times{ \mathbb Z}$. We call $u$ a {\it fundamental unit} in $A$ if its image under such an isomorphism is of the form $(\pm 1,\pm 1)$. \begin{prop}\label{PI_quadratic} Let $A$ be the ring of integers of a real quadratic number field. We assume that $A$ is principal. Then, $A$ satisfies condition ${\bf (PNRI)\;}$ if and only if $N(u)=-1$, where $u$ is a fundamental unit in $A$. \end{prop} \begin{proof} Assume that $d\not\equiv 1{\;\mbox{\rm mod}}\; 4$. We have $A\simeq{ \mathbb Z}[x]/(x^2-d)$, and ${\rm Spec_r} A$ can be described by $\phi:x\mapsto \sqrt{d}$ and $\overline{\phi}:x\mapsto -\sqrt{d}$. Assume that $N(u)=\phi(u)\cdot\overline{\phi}(u)=-1$. Then, the unit $u$ changes sign on ${\rm Spec_r} A$, and hence separates the two points of ${\rm Spec_r} A$. Otherwise, every unit has norm $+1$, hence takes the same sign at the two points and separates nothing.\par If $d\equiv 1{\;\mbox{\rm mod}}\; 4$, then $A\simeq{ \mathbb Z}[x]/(4x^2-4x+1-d)$. We repeat the same argument as in the previous case, ${\rm Spec_r} A$ being this time described by $\phi:x\mapsto \frac{1+\sqrt{d}}{2}$ and $\overline{\phi}:x\mapsto \frac{1-\sqrt{d}}{2}$. \end{proof} As examples, mention that $N(u)=+1$ for $d=3,6,7,11,23,\ldots$ whereas $N(u)=-1$ for $d=2,10,26,\ldots$\air In view of applying Theorem \ref{main_pss_PID}, we recall that the rings ${ \mathbb Z}[\sqrt{2}]$, ${ \mathbb Z}[\sqrt{3}]$ and ${ \mathbb Z}[\sqrt{7}]$ are principal. Moreover, it is conjectured that there are infinitely many real quadratic number fields whose ring of integers is principal.
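\air For instance (a routine verification of these norms) : for $d=2$, a fundamental unit is $u=1+\sqrt{2}$ and $$N(u)=(1+\sqrt{2})(1-\sqrt{2})=1-2=-1,$$ so that $\phi(u)=1+\sqrt{2}>0$ whereas $\overline{\phi}(u)=1-\sqrt{2}<0$ : the unit $u$ separates the two points of ${\rm Spec_r} A$. On the contrary, for $d=7$, a fundamental unit is $u=8+3\sqrt{7}$ and $$N(u)=(8+3\sqrt{7})(8-3\sqrt{7})=64-63=+1 ;$$ both conjugates $8\pm 3\sqrt{7}$ are positive, and no unit changes sign on ${\rm Spec_r} A$.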
\par Since condition {\bf (PNRI)\;} is not satisfied by $A={ \mathbb Z}[\sqrt{3}]$ (according to Proposition \ref{PI_quadratic}), the first counterexample we give to Question \ref{question} is the following one : \begin{vsex}\label{vsex1} In the ring $A={ \mathbb Z}[\sqrt{3}]$, $u=2+\sqrt{3}$ is a fundamental unit which satisfies $N(u)=+1$, hence $u$ remains positive everywhere on ${\rm Spec_r} A$. Consider the element $q=1+\sqrt{3}$, which obviously changes sign on ${\rm Spec_r} A$. The equality $$-2=N(q)=(1+\sqrt{3})(1-\sqrt{3})$$ shows that $q$ is irreducible, and moreover that it is non-real since we have $$-1\equiv 1^2{\;\mbox{\rm mod}}\; (1+\sqrt{3}).$$ We have furthermore the identity : $$(1+\sqrt{3})^2=\frac{1}{2}+\left(r+\frac{\sqrt{3}}{r}\right)^2+\frac{7}{2}-\left(\frac{r^4+3}{r^2}\right)$$ where $r$ is a rational number chosen ``close enough'' to $\sqrt[4]{3}$ (i.e. such that $\frac{7}{2}-\left(\frac{r^4+3}{r^2}\right)$ is a positive rational number, and hence a sum of at most $4$ squares of rational numbers). This last identity furnishes a counterexample to Question \ref{question}.\air Indeed, following Remark \ref{vs_ex}, it suffices to set $b=1$, $d_1=a=c=q=1+\sqrt{3}$ and $e_1=\left(r+\frac{\sqrt{3}}{r}\right)^2+\frac{7}{2}-\left(\frac{r^4+3}{r^2}\right)$. So we have $e_1\geq 0$ on all ${\rm Spec_r} A$ (it is even a sum of squares) and $q^2=\epsilon+e_1$ with $\epsilon=\frac{1}{2}$ invertible in $A$.\par We get the matrix equality : $$M= \left(\begin{array}{cc} q&0\\ 0&qe_1 \end{array}\right)\left(\begin{array}{cc} q&e_1\\ 1&q \end{array}\right)$$ And $q$ is a non-real irreducible all of whose associates change sign on ${\rm Spec_r} A$. \end{vsex} We may generalize the whole of this section to arbitrary totally real number fields. \subsection{Totally real number fields} Recall that $N(u)=\prod_\sigma\sigma(u)$, where $\sigma$ runs over the set of all (real) embeddings $A\rightarrow{\mathbb R}$.
We also recall a well-known result about the units of rings of integers of number fields (see for instance \cite{Co2}) : \begin{thm} Let $K$ be a totally real number field of degree $n$ over ${ \mathbb Q}$. Denote by $A$ the ring of integers of $K$ and by $A^*$ the group of all units in $A$. Then, \begin{enumerate} \item[a)] The element $x$ is in $A^*$ if and only if $N(x)=\pm 1$, where $N(x)$ is the norm of $x$. \item[b)] The torsion subgroup $A^*_{{\rm tors}}$ is isomorphic to ${ \mathbb Z}/2{ \mathbb Z}$ (it is the group of all roots of unity in $K$). \item[c)] The group $A^*/A^*_{{\rm tors}}$ is free of rank $n-1$. \end{enumerate} As a consequence, $A^*\simeq { \mathbb Z}/2{ \mathbb Z}\times { \mathbb Z}^{n-1}$. \end{thm} Recall that ${\rm Spec_r} A$ consists of $n$ distinct points which we denote by $\alpha_1,\ldots,\alpha_{n}$. \par Let ${\mathcal A}({\rm Spec_r} A,\{-1,+1\})$ be the set of all maps from ${\rm Spec_r} A$ into $\{-1,+1\}$ and, in order to identify the maps $f$ and $-f$, introduce the quotient ${\mathcal A}={\mathcal A}({\rm Spec_r} A,\{-1,+1\})/\{-1,+1\}$. Consider the map : $$\begin{array}{cccc}{\rm Sgn}:&A^*&\rightarrow& {\mathcal A}\\ &u&\mapsto& (\alpha\in{\rm Spec_r} A \mapsto{\mbox{\rm sgn}}[u]({\alpha})) \end{array}$$ An equivalent point of view would be to take for ${\mathcal A}$ the set of all maps $f$ satisfying $f(\alpha_n)=+1$. Then, in place of the previous application ${\rm Sgn}$, we would consider the following : $$\begin{array}{cccc}{\rm Sgn}:&A^*&\rightarrow& {\mathcal A}({\rm Spec_r} A\setminus\{\alpha_n\},\{-1,+1\})\\ &u&\mapsto& (\alpha\in{\rm Spec_r} A\setminus\{\alpha_n\} \mapsto{\mbox{\rm sgn}}[u]({\alpha})) \end{array}$$ Here is the generalization of Proposition \ref{PI_quadratic} : \begin{prop} Let $A$ be the ring of integers of a totally real number field of degree $n$ over ${ \mathbb Q}$. We assume that $A$ is principal.
Then, the ring $A$ satisfies the condition ${\bf (PNRI)\;}$ if and only if the application ${\rm Sgn}$ is surjective, i.e. if and only if we may choose units $u_1,\ldots,u_{n-1}$ in $A^*$ such that ${\mbox{\rm sgn}}[u_j]({\alpha_i})=-1$ if and only if $i=j$. \end{prop} \begin{proof} The ring $A$ satisfies the condition {\bf (PNRI)\;} if and only if for every subset $S\subset {\rm Spec_r} A=\{\alpha_1,\ldots,\alpha_n\}$ there exists an invertible $u$ such that $u>0$ on $S$ and $u<0$ on ${\rm Spec_r} A\setminus S$. This is exactly saying that the application ${\rm Sgn}$ is surjective. \end{proof} \air In Theorem \ref{main_pss_PID}, we used the assumption that $A$ is principal. This hypothesis seems to be too restrictive : indeed, not all rings of integers of number fields are principal, nor are all coordinate rings of real affine irreducible non-singular curves. But these two classes of rings are Dedekind domains. This gives a motivation to search for an extension of Theorem \ref{main_pss_smith} to the class of Dedekind domains. \section{Dedekind domains}\label{Dedekind} \begin{defn}A domain $A$ is called {\it Dedekind} if it is noetherian, integrally closed, and if any non-zero prime ideal is maximal. \end{defn} Roughly speaking, we may find in a Dedekind domain the counterpart of all the arithmetic properties (for instance the existence of gcds) that we have in a PID ; we just have to replace products of elements with products of ideals. For instance, the decomposition of an element into a product of irreducible elements is replaced, in a Dedekind domain, by the decomposition of an ideal into a product of prime ideals. \par Note that any Dedekind domain $A$ satisfies condition ${\bf (GRI)}$ since $A/(p)$ is a field, and hence regular, for any irreducible $p$.
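\air For instance (a classical illustration of this replacement of elements by ideals) : the ring $A={ \mathbb Z}[\sqrt{10}]$, the ring of integers of the real quadratic field ${ \mathbb Q}(\sqrt{10})$, is a Dedekind domain which is not a UFD. The element $6$ admits two essentially different factorizations into irreducibles, $$6=2\cdot 3=(4+\sqrt{10})(4-\sqrt{10}),$$ but the factorization into prime ideals is unique : with ${\mathfrak p}=(2,\sqrt{10})$ and ${\mathfrak q}_{\pm}=(3,\sqrt{10}\pm 1)$, one checks that $(2)={\mathfrak p}^2$, $(3)={\mathfrak q}_+{\mathfrak q}_-$ and $(4\pm\sqrt{10})={\mathfrak p}{\mathfrak q}_{\pm}$, so that both factorizations of $6$ refine to $(6)={\mathfrak p}^2{\mathfrak q}_+{\mathfrak q}_-$.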
\par Since the ideals in a Dedekind domain $A$ are not all necessarily principal, we shall give a counterpart of the definition of condition {\bf (PNRI)\;} :\air {\bf (PNRI)\;}\quad Let $I=(f)$ be a principal ideal which is non-real in $A$ (i.e. each associated minimal prime ideal is non-real). Then, $f$ is associated in $A$ to an element which is positive everywhere on ${\rm Spec_r} A$. \air We may also note that the notion of Smith Normal Form still exists in a Dedekind domain, although in general this form is not as simple as the one we have in the case of a principal ring ; for instance, we may have to change the format of the matrix (see for instance \cite{Co1}). For our purpose, we will limit ourselves to matrices which admit a diagonal Smith Normal Form. If we denote by ${\mathfrak d}_k(M)$ the ideal in $A$ generated by the $k\times k$ minors of the matrix $M$, the following result (which can be deduced from \cite{CR} for instance) gives a criterion for a matrix to have a diagonal Smith Normal Form : \begin{thm}\label{dd_smith_form} Let $A$ be a Dedekind domain. Let $M$ and $N$ be two matrices in $A^{n\times n}$ such that ${\mbox{\rm det}\,}(M)\not =0$ and ${\mbox{\rm det}\,}(N)\not =0$. Then, there are $P,Q\in GL_n(A)$ such that $M=PNQ$ if and only if ${\mathfrak d}_k(M)={\mathfrak d}_k(N)$ for all $k=1\ldots n$. \end{thm} If we drop the assumption ${\mbox{\rm det}\,}(M)\cdot{\mbox{\rm det}\,}(N)\not=0$, then the result is still valid with the additional condition ${\mathfrak C}(M)={\mathfrak C}(N)$ (where ${\mathfrak C}(\cdot)$ denotes the {\it column ideal class} of a matrix). As a consequence, \begin{cor} A matrix $M$ in $A^{n\times n}$ such that ${\mbox{\rm det}\,}(M)\not =0$ admits a Smith Normal Form $M\sim{\rm diag}(d_1,\ldots,d_n)$ with $d_1\vert\ldots\vert d_n$, $d_n\not =0$, if and only if all the ideals ${\mathfrak d}_1(M),\ldots,{\mathfrak d}_n(M)$ are principal.
\end{cor} \air We are now able to formulate the counterpart of Question \ref{question} for Dedekind domains : \begin{question}\label{question_dd} Let $M$ be a symmetric square matrix with entries in a formally real Dedekind domain $A$. Assume that $M$ is positive semi-definite and admits a diagonal Smith Normal Form. Are all the diagonal elements of the Smith Normal Form positive on ${\rm Spec_r} A$, up to association ? \end{question} Here is a possible extension of Theorem \ref{main_pss_PID}, which can be seen as an answer to Question \ref{question_dd}, despite the unsatisfactory hypothesis about the decomposition into principal prime ideals : \begin{thm}\label{main_dedekind_1} Let $A$ be a Dedekind domain which satisfies the property ${\bf (PNRI)\;}$. Let $M$ be a symmetric matrix in $A^{n\times n}$ which we suppose to be positive semi-definite on ${\rm Spec_r} A$. Suppose that for all $k=1\ldots n$, the ideal ${\mathfrak d}_k(M)$ is principal, namely ${\mathfrak d}_k(M)=(d_k)$ with $d_k\in A$. Suppose moreover that all the primes appearing in the decomposition of ${\mathfrak d}_k(M)$ are principal.\par Then, for all $k=1\ldots r$, the element $d_k\in A$ is associated to an element $d'_k\in A$ such that $d'_k$ is positive everywhere on ${\rm Spec_r} A$. \end{thm} \begin{proof} Note first that the reduction to the case $r=n$ enables us to apply Theorem \ref{dd_smith_form}.\par We follow the proof of Theorem \ref{main_pss_smith}, replacing irreducibles by prime ideals. The decomposition of $(d_k)$ into the product of its associated prime ideals (which are all principal) looks very much like the decomposition in a UFD. \par So $d_k=e_kf_k$, where $e_k$ is a product of generators of real prime ideals and $f_k$ a product of generators of non-real prime ideals. Thanks to property {\bf (PNRI)\;}, we may assume that $f_k>0$ on all ${\rm Spec_r} A$.
\par Valuations relative to irreducibles in a UFD are replaced with valuations relative to prime ideals in a Dedekind domain. \par Note also that we have a version of Lemma \ref{regular_generization} in the Dedekind domain $A$ : if ${\mathfrak p}$ is a real prime ideal different from $(0)$, then ${\mathfrak p}$ is maximal and $A/{\mathfrak p}$ is regular. The rest of the proof follows. \end{proof} Another solution, if we want to get rid of the unsatisfactory assumption of the previous Theorem, is to restrict the conclusion by localization : \begin{thm}\label{main_dedekind_2} Let $A$ be a Dedekind domain which satisfies the property ${\bf (PNRI)\;}$. Let $M$ be a symmetric matrix in $A^{n\times n}$ which we suppose to be positive semi-definite on ${\rm Spec_r} A$. Suppose that for all $k=1\ldots n$, the ideal ${\mathfrak d}_k(M)$ is principal, namely ${\mathfrak d}_k(M)=(d_k)$ with $d_k\in A$.\par Then, for every prime ideal ${\mathfrak p}$ in $A$ and all $k=1\ldots r$, the element $d_k$ is associated in $A_{\mathfrak p}$ to an element $d'_k\in A_{\mathfrak p}$ such that $d'_k$ is positive everywhere on ${\rm Spec_r} A$. \end{thm} \begin{proof} Since the ring $A$ satisfies condition ${\bf (PNRI)\;}$, by localization (Proposition \ref{localization}) all the rings $A_{\mathfrak p}$ satisfy the condition ${\bf (PNRI)\;}$ too. Moreover, $A_{\mathfrak p}$ is a PID (cf. \cite[Paragraphe 2, Th\'eor\`eme 1]{Bo}), so we may directly use Theorem \ref{main_pss_PID} to get $d'_k>0$ on all ${\rm Spec_r} A_{\mathfrak p}$. By specialization, we also get $d'_k>0$ on all ${\rm Spec_r} A$. \end{proof} For instance, Theorems \ref{main_dedekind_1} and \ref{main_dedekind_2} apply to the ring $A={\mathbb R}[x,y]/(x^2+y^2-1)$. \remark{ By \cite[Paragraphe 3, Exemple 1)]{Bo}, if a Dedekind domain is a UFD, then it is principal.} \endremark \subsection{Other counterexamples} We state some counterexamples to Question \ref{question_dd}, all coming from the class of hyperelliptic curves.
So we need to make precise what the units look like in these rings : \begin{lem}\label{hyperellipt_inver} Let $A$ be the coordinate ring of the real affine hyperelliptic plane curve of equation $y^2-p(x)=0$, where $p(x)\in{\mathbb R}[x]$ has only simple real roots. If ${\mbox{\rm deg}\,} p$ is odd or if the leading coefficient of $p(x)$ is negative, then the set of units in $A$ is ${\mathbb R}^*$. \end{lem} \begin{proof} Any element $f$ of $A$ admits a unique representation of the form $a(x)y+b(x)$ where $a,b\in{\mathbb R}[x]$. Then, $f$ is invertible in $A$ if and only if there is $g=c(x)y+d(x)$ such that, in ${\mathbb R}[x]$, we have $$\left\{\begin{array}{rcc} b(x)d(x)+p(x)a(x)c(x)&=&1\\ a(x)d(x)+b(x)c(x)&=&0\\ \end{array}\right.$$ The first equation shows that $a$ and $b$ are coprime, so by the second we deduce that $b\vert d$ and $a\vert c$. Likewise, we get the reverse divisibility properties, so $c=\alpha a$ and $d=\beta b$ with $\alpha,\beta\in{\mathbb R}^*$.\par The previous system becomes $$\left\{\begin{array}{rcc} \beta b(x)^2+\alpha p(x)a(x)^2&=&1\\ (\beta+\alpha) a(x)b(x)&=&0\\ \end{array}\right.$$ The case $b(x)=0$ is impossible because of the first equality, whereas the case $a(x)=0$ yields $b(x)\in{\mathbb R}^*$, as wanted.\par It remains to treat the case $a(x)b(x)\not =0$. Then, $\alpha=-\beta$, the first equality becomes $\beta(b(x)^2-p(x)a(x)^2)=1$, and we just note that the polynomial $b(x)^2-p(x)a(x)^2$ cannot be a constant if ${\mbox{\rm deg}\,} p(x)$ is odd or if the leading coefficient of $p(x)$ is negative. \end{proof} Note that if none of the conditions of Lemma \ref{hyperellipt_inver} is satisfied, then $A$ may contain other invertibles than ${\mathbb R}^*$, as is the case when $A={\mathbb R}[x,y]/(y^2-(x^2-1)(x^2-2))$. For instance, the element $y+\left(x^2-\frac{3}{2}\right)$ is invertible with inverse $-4\left(y-\left(x^2-\frac{3}{2}\right)\right)$. \begin{vsex} Consider the cubic with coordinate ring $A={\mathbb R}[x,y]/(y^2-x(x^2-1))$.
It has two connected components, which can be separated by the polynomial $q=x-\frac{1}{2}$. Note that $(q)$ is a non-real prime ideal and that $q^2=\frac{1}{4}+x^2-x$. As in Remark \ref{vs_ex}, to produce a counterexample, it suffices to take $\epsilon=\frac{1}{4}$, $b=1$, $d_1=a=c=q$ and $e_1=x^2-x$, which is such that $e_1\geq 0$ on ${\rm Spec_r} A$. \air As another example, we may also consider the ring $B={\mathbb R}[x,y]/(y^2+(x^2-1)(x^2-2))$, where the prime ideal $(x)$ in $B$ separates the two connected components of the variety. We then produce a counterexample based upon the identity $$x^2=\frac{1}{3}\left(2+x^4+y^2\right)$$ Hence, we have $e_1=\frac{1}{3}\left(x^4+y^2\right)$, which is not only non-negative but even a sum of squares in $B$. \end{vsex} This last argument can be repeated for any affine irreducible non-singular real plane curve which is compact and has several connected components. \begin{prop} Let $A={\mathbb R}[V]$ be the coordinate ring of an affine non-singular irreducible and compact curve $V$. We assume moreover that the only units of $A$ are the constants.\par Then, Question \ref{question_dd} admits a negative answer for the ring $A$ if $V({\mathbb R})$ has at least two connected components. \end{prop} \begin{proof} Assume that ${\rm Spec_r} A$ has at least two connected components, say $C_1$ and $C_2$. According to \cite{Ma}, we may find $a\in A$ which separates $C_1$ and $C_2$. Necessarily, $(a)$ is non-real since $a$ does not vanish on ${\rm Spec_r} A$. Since $V({\mathbb R})$ is compact, there is a rational number $r>0$ such that $a^2-r> 0$ on $V({\mathbb R})$. By Schm\"udgen's Positivstellensatz \cite{Sd}, we get that $a^2-r$ is a sum of squares in $A$. Thus, as in Remark \ref{vs_ex}, we are able to produce a counterexample to Question \ref{question_dd}. \end{proof}
https://arxiv.org/abs/2301.10902
Efficient Hyperdimensional Computing
Hyperdimensional computing (HDC) is a method to perform classification that uses binary vectors with high dimensions and the majority rule. This approach has the potential to be energy-efficient and hence deemed suitable for resource-limited platforms due to its simplicity and massive parallelism. However, in order to achieve high accuracy, HDC sometimes uses hypervectors with tens of thousands of dimensions. This potentially negates its efficiency advantage. In this paper, we examine the necessity of such high dimensions and conduct a detailed theoretical analysis of the relationship between hypervector dimensions and accuracy. Our results demonstrate that as the dimension of the hypervectors increases, the worst-case/average-case HDC prediction accuracy with the majority rule decreases. Building on this insight, we develop HDC models that use binary hypervectors with dimensions orders of magnitude lower than those of state-of-the-art HDC models while maintaining equivalent or even improved accuracy and efficiency. For instance, on the MNIST dataset, we achieve 91.12% HDC accuracy in image classification with a dimension of only 64. Our methods perform operations that are only 0.35% of other HDC models with dimensions of 10,000. Furthermore, we evaluate our methods on ISOLET, UCI-HAR, and Fashion-MNIST datasets and investigate the limits of HDC computing.
\section{Introduction} {\em Hyperdimensional computing} (HDC) is an emerging learning paradigm inspired by an abstract representation of neuron activity in the human brain using high-dimensional binary vectors. Compared with other well-known training methods such as artificial neural networks (ANNs), HDC has the advantages of high parallelism and low energy consumption (low latency). This makes HDC well suited to resource-constrained applications such as electroencephalogram detection, robotics, language recognition and federated learning~\citep{hsieh2021fl,asgarinejad2020detection,neubert2019introduction,rahimi2016robust}. HDC is also easy to implement in hardware~\citep{schmuck2019hardware,salamat2019f5}. Unfortunately, the practical deployment of HDC suffers from low model accuracy and is usually restricted to small and simple datasets. To address this, one commonly used technique is to increase the hypervector dimension~\citep{neubert2019introduction,schlegel2022comparison,yu2022understanding}. For example, on the MNIST dataset, hypervector dimensions of 10,000 are often used. \citet{duan2022lehdc} and \citet{yu2022understanding} achieved state-of-the-art accuracies of 94.74\% and 95.4\%, respectively, this way. In these and other state-of-the-art HDC works, hypervectors are randomly drawn from the hyperspace $\{-1,+1\}^d$, where the dimension $d$ is very high. This ensures high orthogonality, making the hypervectors more independent and easier to distinguish from each other~\citep{thomas2020theoretical}. As a result, accuracy is improved and more complex application scenarios can be targeted. However, the price paid for the higher dimension is higher energy consumption, possibly negating the advantage of HDC altogether~\citep{neubert2019introduction}. This paper addresses this tradeoff. In this paper, we will analyze the relationship between hypervector dimension and accuracy, as well as between dimension and orthogonality.
In our analysis, we found that strict orthogonality can be obtained for small $d$. We will show that a dimension $d$ of only $2^{\lceil \log_2 n \rceil}$ is sufficient to yield $n$ vectors in $\{-1, 1\}^d$ with strict orthogonality. Dimensions higher than that are not necessary. If we relax orthogonality to $\varepsilon$-{\em quasi-orthogonality}~\citep{kainen2020quasiorthogonal}, we will show that it is even easier to construct the hypervectors. Further, while it is intuitively true that high dimensions lead to high orthogonality~\citep{thomas2020theoretical}, contrary to popular belief we found that as the dimension of the hypervectors $d$ increases, the upper bound for inference accuracy actually decreases (Statement~\ref{prop1} and Statement~\ref{prop2}). In particular, if the hypervector dimension $d$ is sufficient to represent a vector with $K$ classes ($d > \log_2 K$), then \textbf{the lower the dimension, the higher the accuracy.} The key insight of our work is this: {\em in HDC, higher dimension is not the determinant of accuracy, and the required orthogonality for a given problem can be achieved at lower hypervector dimensions using our proposed techniques}. Based on this analysis, we propose a combination of a novel trainable binary kernel-based encoder with the majority rule (shown in Figure~\ref{fig:workflow}) to reduce the hypervector dimension significantly while maintaining state-of-the-art accuracies. Running on the MNIST dataset, HDC accuracies of 96.88\%/97.23\% were achieved with hypervector dimensions of only 32/64. The total number of calculation operations of our method is a mere $7\%$ of that of previous state-of-the-art related works, where hypervector dimensions of 10,000 or more were needed. We further explored our methods on CIFAR-10, where an HDC accuracy of 46.18\% was achieved. Both our analysis and experiments show that dimensions of 5,000 or even 10,000, as used by the state-of-the-art in HDC, are not necessary. The contributions of this paper are as follows: \begin{itemize} \item We give a comprehensive analysis of the relationship between hypervector dimension and the accuracy of HDC. Both the worst-case and average-case accuracy are studied. Mathematically, we explain why relatively lower dimensions can yield higher model accuracies. This contradicts the standard assumption in HDC. Furthermore, the relationship between orthogonality and hypervector dimension is also discussed. Based on the analysis, we can reduce the dimension by nearly three orders of magnitude. \item We introduce a kernel-based binary encoder and two HDC retraining algorithms. With these techniques, we can achieve higher detection accuracies using much smaller hypervector dimensions (latency) and better orthogonality compared to the state-of-the-art.
\end{itemize} \paragraph{Organisation} This paper is organized as follows. First, the basic workflow and background of HDC are introduced. Then, we describe our main dimension-accuracy and dimension-orthogonality analysis in Section~\ref{sec:analysis}. In Section~\ref{sec:approach}, we present a trainable binary encoder and two HDC retraining approaches that improve accuracy while at the same time reducing energy consumption. We then show our experimental results and comparison with state-of-the-art HDCs in Section~\ref{sec:exp}, followed by a discussion and conclusion. \section{Background} Hyperdimensional computing uses binary hypervectors with typical dimensions of 5,000 to 10,000 to represent the data. Using the MNIST dataset as an example, HDC encodes one float32-type image $f = \{f_0,f_1,\dots,f_{783}\}$ into a hypervector by binding and adding value hypervectors $\bm{v}$ and position hypervectors $\bm{p}$. Both kinds of hypervectors are independently drawn at random from the hyperspace $\{-1,+1\}^d$. Mathematically, we can construct the representation $r$ for each image as follows: \begin{equation*} r = \textrm{sgn} \left (v_{f_0}\bigotimes p_{0} + v_{f_1}\bigotimes p_{1}+ \dots+ v_{f_{783}}\bigotimes p_{783} \right ), \end{equation*} where $\textrm{sgn}(\cdot)$ is the sign function that binarizes the sum of hypervectors and returns -1 or 1; $\textrm{sgn}(0)$ is randomly assigned to 1 or -1. $\bigotimes$ is the {\em binding operation} that performs coordinate-wise (element-wise) multiplication. For example, $[-1,1,1,-1]\bigotimes[1,1,1,-1] = [-1,1,1,1]$. For training, all hypervectors among $r_1,\dots,r_{60,000}$ that belong to the same digit are added together. The {\em majority rule} is then used to generate the representation $R_c$ for class $c$: \begin{equation} \label{r_c} R_c = \textrm{sgn} \left (\sum_{i \in c}r_i \right ).
\end{equation} For inference, the encoded test image is compared with the representation $R_c$ of each class, and the most similar one is selected. Cosine similarity, L2 distance, and Hamming distance are commonly used similarity measures in previous works. According to~\citet{frady2021computing}, the inner product is equivalent to the Hamming distance for binary hypervectors with values of -1 and 1; we use the inner product in this work. The workflow is shown in Appendix~\ref{wfhdc}. \section{High dimensions are not necessary} \label{sec:analysis} Compared to traditional ANNs, the use of binary vectors and simple, point-wise computation in HDC holds the promise of low energy consumption while achieving competitive accuracies.
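The baseline HDC pipeline described in the background (random $\pm 1$ hypervectors, binding, majority rule, inner-product inference) can be sketched in a few lines. This is an illustrative sketch only; the dimension, array shapes, and function names are our own assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024          # hypervector dimension (illustrative; baseline HDC often uses 10,000)
n_pixels = 784    # a flattened 28x28 MNIST image
n_levels = 256    # possible pixel values

# Value and position hypervectors drawn at random from {-1, +1}^d.
value_hv = rng.choice([-1, 1], size=(n_levels, d))
pos_hv = rng.choice([-1, 1], size=(n_pixels, d))

def sign(v):
    # sgn(.) with sgn(0) broken randomly, as in the text.
    s = np.sign(v).astype(int)
    zeros = (s == 0)
    s[zeros] = rng.choice([-1, 1], size=zeros.sum())
    return s

def encode(image):
    # Bind (element-wise multiply) each pixel's value hypervector with its
    # position hypervector, sum over pixels, and binarize.
    return sign((value_hv[image] * pos_hv).sum(axis=0))

def train(images, labels, n_classes=10):
    # Majority rule: accumulate the encodings of each class, then binarize.
    acc = np.zeros((n_classes, d), dtype=int)
    for img, lab in zip(images, labels):
        acc[lab] += encode(img)
    return sign(acc)

def predict(image, reps):
    # Inner-product similarity (equivalent to Hamming distance for +/-1 vectors).
    return int(np.argmax(reps @ encode(image)))
```

Toy usage: `reps = train(images, labels)` with `images` an integer array of shape `(N, 784)` and entries in `[0, 256)`, then `predict(images[0], reps)`.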
The Achilles heel is the high dimensions needed, which potentially negate these gains. In this section, we will study the need for high hypervector dimensions in terms of both accuracy and orthogonality. Through an analysis of the relationship between dimension and accuracy, we will show that a higher hypervector dimension does not necessarily lead to higher accuracy. We will show that for a classification task with only two classes, a higher hypervector dimension results in both lower worst-case and lower average-case accuracy. We then study the relationship between dimension and orthogonality and show that good orthogonality does not require high dimensions. This opens the door to performing HDC with significantly lower-dimensional hypervectors. \subsection{Dimension-accuracy analysis} To simplify the analysis of HDC, we make the following assumptions about the hypervectors, without loss of generality. We assume that the hypervectors are uniformly distributed over a $d$-dimensional unit ball: \begin{equation*} \mathcal{X} = \{x \in \mathbb{R}^d \big | \|x\|_2 \leq 1\}. \end{equation*} Moreover, we assume that the hypervectors $x$ are {\em linearly separable} and each class with label $i$ can be represented by $C_i$: \begin{equation*} C_i = \{x \in \mathcal{X} | \theta_i \cdot x > \theta_j \cdot x, j \neq i\}, \quad 1 \leq i \leq K, \end{equation*} where $\theta_i \in [0, 1]^d $ are support hypervectors used to distinguish class $i$ from the other classes. This is a reasonable assumption as long as we select $d$ sufficiently large so that there exists a mapping (encoder) embedding the raw data into a $d$-dimensional unit ball. Similarly, we define the prediction class $\hat{C}_i$ determined by $\hat{\theta}_i$ as follows: \begin{equation*} \hat{C}_i = \{x \in \mathcal{X} | \hat{\theta}_i \cdot x > \hat{\theta}_j \cdot x, j \neq i\}, \quad 1 \leq i \leq K.
\end{equation*} When we apply the majority rule to separate the above hypervectors $x$, we approximate $\theta_i$ by $\hat{\theta}_i$ in the sense of maximizing the prediction accuracy. Here each $\hat{\theta}_i \in \{0, 1\}^d$ is a binary vector. Therefore, we define the worst-case $K$-class prediction accuracy over the hypervector distribution $\mathcal{X}$ by the following expression: \begin{equation*} Acc^w_{K, d} := \inf_{\theta_1, \theta_2, \dots, \theta_K} \sup_{\hat{\theta}_1, \hat{\theta}_2, \dots, \hat{\theta}_K} \mathbb{E}_x \bigg [ \sum_{i=1}^K \prod_{j \neq i} \mathbf{1}_{\{\theta_i \cdot x > \theta_j \cdot x\}} \mathbf{1}_{\{\hat{\theta}_i \cdot x > \hat{\theta}_j \cdot x\}} \bigg ]. \end{equation*} \begin{statement} \label{prop1} Assume $K = 2$; as the dimension of the hypervectors $d$ increases, the worst-case prediction accuracy decreases at the following rate: \begin{align*} Acc^w_{2, d} & = 2 \inf_{\theta_1, \theta_2} \sup_{\hat{\theta}_1, \hat{\theta}_2} \mathbb{E}_x \bigg [ \mathbf{1}_{\{\theta_1 \cdot x > \theta_2 \cdot x\}} \mathbf{1}_{\{\hat{\theta}_1 \cdot x > \hat{\theta}_2 \cdot x\}} \bigg ] \\ & = \inf_{\theta_1, \theta_2} \sup_{\hat{\theta}_1, \hat{\theta}_2} \bigg [ 1 - \frac{\arccos (\frac{(\theta_1 - \theta_2) \cdot (\hat{\theta}_1 - \hat{\theta}_2)}{ \|\theta_1-\theta_2\|_2 \|\hat{\theta}_1-\hat{\theta}_2\|_2}) }{ \pi } \bigg ] \\ & = 1 - \frac{\arccos (\frac{1}{\sqrt{\sum_{j=1}^d (\sqrt{j} - \sqrt{j-1})^2}}) }{\pi} \to \frac{1}{2}, \qquad d \to \infty. \end{align*} The first equality is by the symmetry of the distribution $\mathcal{X}$. The second equality is the evaluation of the expectation over $\mathcal{X}$; the details are given in Lemma~\ref{lemma:equality_1_for_mr}. For the third equality, the proof is given in Lemma~\ref{lemma:inequality_1_for_mr} and Lemma~\ref{lemma:inequality_2_for_mr}. \end{statement} In the next statement, we further consider the average case.
Assume the prior distribution $\mathcal{P}$ for $\theta_1, \dots, \theta_K$ is $\mathcal{U}[0, 1]^d$. We can then define the average accuracy by the following expression: \begin{equation*} \overline{Acc}_{K, d} := \mathbb{E}_{\theta_1, \theta_2, \dots, \theta_K \sim \mathcal{P}} \sup_{\hat{\theta}_1, \hat{\theta}_2, \dots, \hat{\theta}_K} \mathbb{E}_x \bigg [ \sum_{i=1}^K \prod_{j \neq i} \mathbf{1}_{\{\theta_i \cdot x > \theta_j \cdot x\}} \mathbf{1}_{\{\hat{\theta}_i \cdot x > \hat{\theta}_j \cdot x\}} \bigg ]. \end{equation*} \begin{statement} \label{prop2} Assume $K = 2$; as the dimension of the hypervectors $d$ increases, the average-case prediction accuracy decreases: \begin{align*} \overline{Acc}_{K, d} & = \mathbb{E}_{\theta_1, \theta_2 \sim U[0, 1]^d} \sup_{\hat{\theta}_1, \hat{\theta}_2} \mathbb{E}_x \bigg [ \mathbf{1}_{\{\theta_1 \cdot x > \theta_2 \cdot x\}} \mathbf{1}_{\{\hat{\theta}_1 \cdot x > \hat{\theta}_2 \cdot x\}} \bigg ] \\ & = \mathbb{E}_{\theta_1, \theta_2 \sim U[0, 1]^d} \sup_{\hat{\theta}_1, \hat{\theta}_2} \bigg [ 1 - \frac{\arccos (\frac{(\theta_1 - \theta_2) \cdot (\hat{\theta}_1 - \hat{\theta}_2)}{ \|\theta_1-\theta_2\|_2 \|\hat{\theta}_1-\hat{\theta}_2\|_2}) }{ \pi } \bigg ] \\ & = \mathbb{E}_{\theta_1, \theta_2 \sim U[0, 1]^d} \bigg [ 1 - \frac{\arccos \big( \sup_{j=1}^d \frac{\sum_{i=1}^j |\theta_1 - \theta_2|_{(i)}}{\sqrt{j} \|\theta_1 - \theta_2\|} \big )}{\pi} \bigg ]. \end{align*} Here $|\theta_1 - \theta_2|_{(i)}$ denotes the $i$-th largest coordinate of the vector $|\theta_1 - \theta_2|$. \end{statement} As the exact expression for the average-case accuracy is harder to evaluate, we perform a Monte Carlo simulation, sampling $\theta_1$ and $\theta_2$ 1000 times, to evaluate the expectation. We then show the curves of $Acc^w_{K, d}$ and $\overline{Acc}_{K, d}$ for dimensions from 1 to 1000 in Figures~\ref{worst} and~\ref{average}.
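The closed form in Statement~\ref{prop1} is easy to evaluate numerically; the following sketch (ours, not the authors' code) reproduces the monotone decrease toward $1/2$:

```python
import math

def worst_case_acc(d):
    # Closed form from Statement 1: Acc^w_{2,d} = 1 - arccos(1/sqrt(S_d))/pi,
    # where S_d = sum_{j=1}^d (sqrt(j) - sqrt(j-1))^2.
    s = sum((math.sqrt(j) - math.sqrt(j - 1)) ** 2 for j in range(1, d + 1))
    return 1 - math.acos(1 / math.sqrt(s)) / math.pi

accs = [worst_case_acc(d) for d in (1, 2, 4, 16, 64, 256, 1024)]
print([round(a, 4) for a in accs])

# S_d grows like a harmonic sum, so the accuracy decreases strictly toward 1/2.
assert all(a > b for a, b in zip(accs, accs[1:]))
assert accs[-1] > 0.5
```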
It is easy to see that a high dimension is not necessary for HDC in either the worst case or the average case; the upper bound on accuracy drops slowly as the dimension increases. \begin{figure*}[t] \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width = 0.99\linewidth]{Figures/worst-case.png} \caption{Worst-case Accuracy $Acc_{2, d}^w$} \label{worst} \end{minipage} \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width = 0.99\linewidth]{Figures/worst-average.png} \caption{Average-case Accuracy $\overline{Acc}_{2, d}$} \label{average} \end{minipage} \end{figure*} According to~\citet{tax2002using}, we can approximate the multi-class case $K \geq 3$ by one-against-one binary classification. Therefore, we define the quasi-accuracy of $K$-class classification as follows: \begin{equation*} Quasi\textrm{-}Acc_{K, d} = \frac{\sum_{i \neq j} Acc^{ij}_{2, d}}{K(K-1)}, \end{equation*} where $Acc^{ij}_{2, d}$ can be either the average-case or the worst-case accuracy of distinguishing classes $i$ and $j$. Since the accuracy $Acc^{ij}_{2, d}$ for binary classification decreases as the dimension increases, the quasi-accuracy follows the same trend. \subsection{Dimension-orthogonality analysis} For strict orthogonality, we first construct a Hadamard matrix sequence $\{H_k\}$~\citep{horadam2012hadamard} as follows: \begin{align*} H_0 & = [1]; \\ H_1 & = \begin{bmatrix} H_{0} & H_{0}\\ H_{0} & -H_{0} \end{bmatrix}; \\ & \vdots \\ H_k & = \begin{bmatrix} H_{k-1} & H_{k-1}\\ H_{k-1} & -H_{k-1} \end{bmatrix}. \end{align*} By the definition of a Hadamard matrix, taking any $n$ rows of $H_{\lceil \log_2 n \rceil}$ yields $n$ hypervectors with strict orthogonality in a $2^{\lceil \log_2 n \rceil}$-dimensional space. We can therefore state the following: \begin{statement} \label{dimension-orth} A dimension $d$ of only $2^{\lceil \log_2 n \rceil}$ is needed to find $n$ strictly orthogonal hypervectors, which shows that high dimensions are unnecessary. \end{statement} Further, if the Hadamard conjecture~\citep{horadam2012hadamard} holds (for each positive integer $k$, there exists a Hadamard matrix of order $4k$), the number $d$ can be bounded above by $n + 3$. We next consider quasi-orthogonality, since strict orthogonality is not enforced in HDC practice. \begin{definition}[$\varepsilon$-quasiorthogonality] Two unit vectors $x$ and $y$ are called quasi-orthogonal if \begin{equation*} |x^T y| \leq \varepsilon.
\end{equation*} \end{definition} Based on this definition and recent progress on quasi-orthogonality~\citep{kainen2020quasiorthogonal} (shown in~\ref{poof_t1}), we can draw the following conclusion: \begin{statement} \label{dimension-quorth} If orthogonality is relaxed to $\varepsilon$-quasiorthogonality for $\varepsilon \in (0, 1)$, the number of $d$-dimensional vectors with $\varepsilon$-quasiorthogonality is exponential with respect to the dimension: \begin{equation*} n = O(e^{c(\varepsilon) d}). \end{equation*} Here $c(\varepsilon)$ is a constant depending on $\varepsilon$. \end{statement} Statements~\ref{dimension-orth} and~\ref{dimension-quorth} both indicate that even in the low-dimension case, it is still feasible to find hypervectors with high (quasi-)orthogonality. \section{Methods} \label{sec:approach} As shown in Figure~\ref{fig:workflow}, we first combine a {\em kernel-based binary encoder} with a fully-connected layer and train the whole network with the cross-entropy loss function. Since the whole structure is binary, the {\em straight-through estimator} (STE)~\citep{bengio2013estimating} is used for back-propagation. Next, the fully-connected layer is replaced with the majority rule to save power. Weight sharing means that we use the same weights before and after replacing the FC layer with the majority rule. Then, the representation $R_c$ of each class can be obtained with Equation~\ref{r_c}. To improve $R_c$, we train the combination of the binary encoder and majority rule with STE (Algorithm~\ref{alg_tt}, Step 1) and recompute the final representations $R_c$ of each class $c$. Finally, all hypervectors are trained with Algorithm~\ref{alg_tt}, Step 2 for higher detection accuracy. \begin{figure}[htb] \centering \includegraphics[width = 1.03\linewidth]{Figures/whole_process_.pdf} \caption{Workflow of Our HDC.} \label{fig:workflow} \end{figure} \subsection{Binary Kernel-based Encoder} The binary kernel-based encoder is composed of $k$ {\em binary neural network} (BNN) style layers. Unlike standard BNNs, whose inputs are floating-point numbers, both the inputs and activation values in our structure are quantized to 0 and 1. Since the information transmitted among layers is binary, we can replace the multiplication operations with addition operations. In particular, we only need to sum up the weights whose corresponding inputs are 1; weights whose corresponding inputs are 0 can be ignored. For each neuron $i$ at layer $l$, if the sum is higher than 1, a `1' is output. Otherwise, we output 0.
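The layer rule just described can be sketched as follows (an illustrative sketch; the shapes and names are our own assumptions, not the paper's code):

```python
import numpy as np

def binary_layer(x_prev, W, b):
    """Forward pass of one binary layer: x_prev is a 0/1 input vector,
    W a (n_out, n_in) weight matrix, b a bias vector. Since the inputs
    are 0 or 1, the matrix product reduces to summing the weights whose
    corresponding input is 1; the output is 1 where the pre-activation
    exceeds 1, else 0."""
    pre = W @ x_prev + b   # equals summing W[i, j] over j with x_prev[j] == 1
    return (pre > 1).astype(int)

x = np.array([1, 0, 1])
W = np.array([[1, 1, 1],
              [0, 1, 0]])
b = np.array([0, 0])
print(binary_layer(x, W, b))  # [1 0]: pre-activations are 2 and 0
```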
Mathematically: \begin{equation} \label{eq:cnn} x^{l}_i= \begin{cases} 0, & (\sum_{j=0}^n w^l_{i,j,x^{l-1}_{j}=1} + b^l_i) \leq 1\\ 1,& (\sum_{j=0}^n w^l_{i,j,x^{l-1}_{j}=1} + b^l_i) > 1, \\ \end{cases} \end{equation} where $x^{l}_i$ denotes the output of layer $l$ at neuron $i$; $w^l_{i,j}$ and $b^l_i$ denote the weight and bias at layer $l$ ($j$ indexes the neurons of layer $l-1$ and $i$ indexes the neurons of layer $l$); and $w^l_{i,j,x^{l-1}_j=1}$ denotes the weights whose corresponding inputs $x^{l-1}_j$ are 1. Let $G(x)$ be the gradient used for backpropagation. Because the whole function is discontinuous and not differentiable at the turning points, we use the straight-through estimator to approximate the gradient; thus $G(x)$ is set to 1 to make the whole network trainable: $G(x) \approx 1.$ After the training, we remove the fully connected layer and run the encoder to generate the binary representation $\bm{R^b_c}$ of each class. The majority rule is used here, as shown in Algorithm~\ref{alg_mr}. \begin{minipage}{0.46\textwidth} \begin{algorithm}[H] \centering \caption{Representation Generation} \label{alg_mr} \begin{algorithmic}[1] \REQUIRE $N$ number of training data $\bm{x}$; \ENSURE Trained binary encoder $E$; Representation $R_c$ for class $c$ with dimension $d$; Binary representation $R^b_c$; Outputs of encoder $\bm{y}$; Pre-defined threshold $\theta$; \STATE $\bm{y} = E(\bm{x})$; $R_c = 0$ \FOR{$i=1$ to $N$} \STATE $R_c[i]+=y_c$ \ENDFOR \FOR{$i=1$ to $d$} \IF {$R_c[i]>\theta$} \STATE $R^b_c[i]=1$ \ELSE \STATE $R^b_c[i]=0$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \hfill \begin{minipage}{0.46\textwidth} \begin{algorithm}[H] \centering \caption{HDC Retraining} \label{alg_tt} \begin{algorithmic}[1] \REQUIRE Training data $\bm{x}$ with label $\bm{R_c}$; Trained Encoder $E$; $N$ training epochs.
\STATE \textbf{Step1:} \FOR{epoch$ =1$ to $N$} \STATE y = E($\bm{x}$) \STATE $L$ = mse(y, $R^b_c$) //Bp: STE \ENDFOR ~ \STATE \textbf{Step2:} \STATE y = E($\bm{x}$) \IF {$y!=R^b_c$} \STATE $R_{c_{correct}}+=lr*y$ \STATE $R_{c_{wrong}}-=lr*y$ \ENDIF \STATE Generate $R^b_c$ (Algorithm 1, lines 5-9) \end{algorithmic} \end{algorithm} \end{minipage} \subsection{Retraining} Here, we introduce a two-step retraining method. As shown in Algorithm~\ref{alg_tt}, training data are first sent to the encoder in batches. The mean squared error is used as the loss function to update the weights in the encoder. Then, we freeze the encoder and update the representation of each class. If the output $y$ is wrongly detected as class $c_{wrong}$ when it should belong to class $c_{correct}$, we decrease the representation of the wrong class $R_{c_{wrong}}$ by the learning rate times $y$; meanwhile, we increase the representation of the correct class $R_{c_{correct}}$ by the learning rate times $y$. Then, the modified $R_c$ are sent to Algorithm~\ref{alg_mr} to generate the binary representation $R^b_c$. \subsection{Inference} As we have already computed the representation of each class, we can simply compare the similarity between the resulting hypervector (computed by sending the test data through the same encoder) and the representation of each class, and output the class with the highest similarity. We map the value 0 in $R^b_c$ to $-1$ and use the inner product for the similarity check. The orthogonality of the resulting representations $\bm{R_c}$ can be evaluated with Equation~\ref{eq:orth}; the closer $\bar{O}$ is to 0, the better the orthogonality. \begin{equation} \label{eq:orth} \bar{O} = \frac{1}{K(K-1)} \sum_{c_1 \neq c_2} \frac{|\mathbf{R}^b_{c_1}*{\mathbf{R}^b_{c_2}}^T|}{d} \end{equation} \section{Results} \label{sec:exp} We have implemented our schemes in CUDA-accelerated (CUDA 11.7) PyTorch version 1.13.0.
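The orthogonality measure $\bar{O}$ of Equation~\ref{eq:orth} is straightforward to compute; a small sketch (ours, using a Sylvester-constructed Hadamard matrix as a perfectly orthogonal example):

```python
import numpy as np

def mean_orthogonality(reps):
    # O-bar: mean absolute normalized inner product over distinct class pairs;
    # reps is a (K, d) array with entries in {-1, +1}. Closer to 0 is better.
    K, d = reps.shape
    total = sum(abs(int(reps[c1] @ reps[c2])) / d
                for c1 in range(K) for c2 in range(K) if c1 != c2)
    return total / (K * (K - 1))

# Rows of a Sylvester-constructed Hadamard matrix are mutually orthogonal,
# so O-bar is exactly 0; identical rows give the worst possible value, 1.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)
print(mean_orthogonality(H4))               # 0.0
print(mean_orthogonality(np.ones((3, 4))))  # 1.0
```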
The experiments were performed on an Intel Xeon E5-2680 server with two NVIDIA A100 Tensor Core GPUs and one GeForce RTX 3090 GPU, running 64-bit Linux 5.15. The MNIST dataset\footnote{http://yann.lecun.com/exdb/mnist/} and CIFAR-10\footnote{https://www.cs.toronto.edu/~kriz/cifar.html} are used in our experiments. \subsection{A case study of our techniques} Here, we describe step by step how our approaches improve the digit-recognition task. \subsubsection{Training the Encoder} We build a three-layer binary kernel-based encoder to enhance the HDC model. For each layer, we set the output channel number to 32, the kernel size to 6, and the stride to 2. We first discuss the relationship between the pre-defined threshold mentioned in Algorithm~\ref{alg_mr} and accuracy. As shown in Figure~\ref{d_thre}, running on the MNIST dataset and taking the hypervector dimension of 16 as an example (full ablation studies are shown in the Appendix), we find that the model is robust to the choice of threshold. The detection accuracy remains almost the same when the threshold varies from 500 to 5,000 (the maximum entry of $R_c$ after the encoder, before the majority rule, is around 6,500), and good orthogonality is also achieved. \begin{figure*}[t] \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width = 0.99\linewidth]{Figures/d_t.png} \caption{Threshold Study. The orthogonality is measured using Equation \ref{eq:orth}.} \label{d_thre} \end{minipage} \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width = 0.99\linewidth]{Figures/bnn_1.png} \caption{Dimension Study} \label{d_acc} \end{minipage} \end{figure*} We further consider the relationship between the dimension and inference accuracy at the most suitable threshold. As shown in Figure~\ref{d_acc}, we can achieve HDC accuracies of 96.82\%/97.23\% with dimensions of only 32 and 64. Also, the accuracy drops when the dimension is higher than 128, which is consistent with Statement~\ref{prop1}.
\subsubsection{HDC Retraining} Thus far, we have shown how to achieve state-of-the-art HDC accuracy with the smallest hypervector dimension. We can in fact improve the results further using the retraining techniques described above. For example, with a dimension of 32, we can push the accuracy to 96.88\% with our two-step retraining (0.05\% and 0.01\% accuracy improvements from steps 1 and 2, respectively). However, as shown in Figure~\ref{d_acc}, there is almost no accuracy drop after replacing the fully-connected layer with the majority rule, which indicates that the accuracy improvement from retraining may not be significant for the MNIST dataset. Therefore, we also explore our retraining methods on the CIFAR-10 dataset. Using the same hypervector dimension, a baseline accuracy (trained encoder + majority rule) of 38.42\% was achieved. After retraining steps 1 and 2, the accuracy improved by 0.21\% and 0.42\%, respectively, and the final accuracy reached 39.05\% in a matter of minutes. \subsection{Experimental Results} Our full set of experimental results is shown in Table~\ref{tab:comparison}, where we compare our accuracy, dimension, and number of operations with other state-of-the-art HDC models. On the MNIST dataset, we achieved HDC accuracies of 96.88\% and 97.23\% with $d = 32$ and $d = 64$, respectively. We also applied our techniques to a larger dataset to test whether they work in more complex situations: for the CIFAR-10 dataset, an HDC accuracy of 39.05\% with $d = 32$ was achieved using 1.17M operations, and increasing the dimension to 128 yields an HDC accuracy of 46.18\% with 4.34M operations. A number of state-of-the-art HDC works were chosen for comparison. TD-HDC, proposed by \citet{chuang2020dynamic}, is a threshold-based framework that dynamically chooses an execution path; it improves the accuracy--energy-efficiency trade-off and achieves an HDC accuracy of 88.92\% on MNIST with a purely binary HD model.
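The step-2 update applied to misclassified samples can be sketched as follows. This is an illustrative NumPy sketch, not the released implementation; the `binarize` and `predict` helpers and their names are ours:

```python
import numpy as np

def binarize(R, threshold):
    # Majority rule: keep a dimension only if its accumulated count clears the threshold.
    return (R >= threshold).astype(np.int64)

def predict(y, Rb):
    # Map {0,1} to {-1,+1} and pick the class with the largest inner product.
    return int(np.argmax((2 * Rb - 1) @ (2 * y - 1)))

def retrain_step2(encode, X, labels, R, lr, threshold):
    # Step 2 of the two-step retraining: the encoder is frozen; only the
    # real-valued class representations R are nudged on misclassified samples.
    for x, c in zip(X, labels):
        y = encode(x)
        pred = predict(y, binarize(R, threshold))
        if pred != c:
            R[c] += lr * y       # strengthen the correct class
            R[pred] -= lr * y    # weaken the wrongly predicted class
    return binarize(R, threshold)
```

After the loop, the updated $R_c$ are re-binarized by the same majority rule before inference.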
\citet{hassan2021hyper} used a basic HDC model on the MNIST dataset in their case study. They encoded the pixels based on their black/white value and used the majority-sum operation in the training stage to combine similar samples, achieving an HDC accuracy of 86\% on MNIST. HDC has also been used in federated and secure learning. FL-HDC by \citet{hsieh2021fl} focused on the combination of HDC and federated learning: it introduced the polarized model into the federated-learning setting to reduce communication costs and managed to control the accuracy drop through retraining, achieving 88\% accuracy on MNIST. SecureHD~\citep{imani2019framework} adopted a novel encoding and decoding method to perform learning tasks securely with the ideas of HDC; its accuracy on MNIST was 95\% for federated training. More recently, LeHDC~\citep{duan2022lehdc}, which transfers the HDC classifier into a binary neural network, used learning-based HDC to achieve 94.74\% on MNIST and 46.10\% on CIFAR-10. QuantHD~\citep{imani2019quanthd} and SearcHD~\citep{imani2019searchd} are two methods that introduce multi-model training and retraining into the HDC field; the LeHDC paper reports CIFAR-10 accuracies of 22.66\% and 28.42\% for QuantHD and SearcHD, which we use as baselines. Compared with HDC, binary neural networks always require additional multiplication operations, at least in the first layer, because of the floating-point input; this is much more expensive and was not considered in our comparison. For inference, cosine similarity and Hamming distance are used in most state-of-the-art works. Since cosine similarity requires additional multiplication and division operations, which are quite expensive, we chose Hamming distance instead.
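For reference, the operation counts quoted in Table~\ref{tab:comparison} can be reproduced from the input and hypervector sizes. In this sketch we assume the CIFAR-10 encoder cost is counted per $32\times 32$ plane, which matches the 10.24M figure; this counting convention is our inference, not stated explicitly in the cited works:

```python
# Reproduce the baseline operation counts in the comparison table from
# input size x hypervector dimension (counting conventions are assumptions).
d_baseline = 10_000
mnist_encode_ops = 28 * 28 * d_baseline   # MNIST: 784 pixels -> 7.84M additions
cifar_encode_ops = 32 * 32 * d_baseline   # CIFAR-10 per 32x32 plane -> 10.24M
hamming_ratio = 32 / d_baseline           # similarity-check cost is linear in d

print(mnist_encode_ops, cifar_encode_ops, f"{hamming_ratio:.2%}")
```

The last line shows where the 0.32\% figure for the similarity check comes from: $32/10{,}000 = 0.32\%$.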
The number of operations in the Hamming-distance check is linearly proportional to the dimension of the hypervectors, which means that our method needs only 0.32\% of the similarity-check operations of other HDC models with a dimension of 10,000. \begin{table*}[ht] \centering \caption{Comparison with related works.} \label{tab:comparison} \begin{tabular}{l|cccc} \hline & \multirow{2}{*}{Accuracy} & \multirow{2}{*}{Dimension} & \multicolumn{2}{c}{Inference}\\ \cline{4-5} & & & Encoder addition/Boolean op count & Similarity \\ \hline\hline \multicolumn{5}{c}{MNIST} \\ \hline SearcHD & 84.43\% & 10,000 & 7.84M/7.84M & Hamming \\ \hline FL-HDC & 88\% & 10,000 & 7.84M/7.84M & Cosine \\ \hline TD-HDC & 88.92\% & 5,000 & 3.92M/3.92M & Hamming \\ \hline QuantHD & 89.28\% & 10,000 & 7.84M/7.84M & Hamming \\ \hline LeHDC & 94.74\% & 10,000 & 7.84M/7.84M & Hamming \\ \hline SecureHD & 95\% & 10,000 & 7.84M/7.84M & Cosine \\ \hline \textbf{This work} & \textbf{96.88\%} & \textbf{32} & \textbf{1.15M/0} & Hamming \\ \hline \textbf{This work} & \textbf{97.23\%} & \textbf{64} & \textbf{1.19M/0} & Hamming \\ \hline\hline \multicolumn{5}{c}{CIFAR-10} \\ \hline SearcHD & 22.66\% & 10,000 & 10.24M/10.24M & Hamming \\ \hline QuantHD & 28.42\% & 10,000 & 10.24M/10.24M & Hamming \\ \hline LeHDC & 46.10\% & 10,000 & 10.24M/10.24M & Hamming \\ \hline \textbf{This work} & 39.05\% & \textbf{32} & \textbf{1.17M/0} & Hamming \\ \hline \textbf{This work} & 46.18\% & \textbf{128} & \textbf{4.34M/0} & Hamming \\ \end{tabular} \end{table*} \section{Discussion} Our analysis of the relationship between orthogonality and the number of classes, an aspect largely ignored in other HDC works, also informs these results. Taking the MNIST dataset as an example, state-of-the-art works use hypervectors of dimensions 5,000 to 10,000 to `distinguish' pixel values that range from 0 to 255.
However, if quantization is applied to the input data and a proper encoder is used to extract the information from the original picture, a much smaller dimension is needed in theory (on the order of $K=10$, since the MNIST dataset has 10 labels). This also explains why our method and other HDC models cannot work on more complex datasets like ImageNet, where the number of classes is large. \section{Conclusion} In this paper, we examined the dimension of the hypervectors used in HDC. We presented a detailed analysis of the relationship between dimension and accuracy, as well as between dimension and orthogonality, to demonstrate that high dimensions are not necessary for good performance in HDC: it is orthogonality that affects accuracy. Previous works have used high dimensions to achieve better orthogonality because they relied on randomly drawn hypervectors. We showed that the orthogonality required to solve the problem can be achieved without resorting to high dimensions. As a result, we can reduce the dimension from the tens of thousands used by the state of the art to merely tens, while achieving the same level of accuracy, and the number of computing operations during inference is reduced to roughly a tenth of that of traditional HDC models. Running on the MNIST dataset, we achieved an HDC accuracy of $96.88\%$ using a dimension of only 32. All our results are reproducible using the code we have made public.
https://arxiv.org/abs/1804.09191
On the uniqueness of polynomial embeddings of the real 1-sphere in the plane
This paper considers real forms of closed algebraic $\mathbb{C}^*$-embeddings in $\mathbb{C}^2$. The classification of such embeddings was recently completed by Cassou-Nogues, Koras, Palka and Russell. Based on their classification, this paper shows that, up to an algebraic change of coordinates, there is only one polynomial embedding of the real 1-sphere $\mathbb{S}^1$ in the affine plane $\mathbb{R}^2$.
\section{Introduction} Let $\field{S}^n$ denote the real $n$-sphere as an algebraic variety over $\field{R}$. Daigle asked whether every polynomial embedding of $\field{S}^1$ in $\field{R}^2$ is equivalent to the standard embedding.\footnote{D. Daigle, University of Ottawa, private communication, 2013} Our main result, {\it Theorem\,\ref{main}}, gives an affirmative answer to this question. This result mirrors the Epimorphism Theorem of Abhyankar and Moh, and Suzuki: Over a field $k$ of characteristic zero, any polynomial embedding of the affine line $\field{A}^1_k$ in the affine plane $\field{A}^2_k$ is equivalent to the standard embedding \cite{Abhyankar.Moh.75, Suzuki.74}. The complexification of $\field{S}^1$ is the complex algebraic torus $\field{C}^*$, and in contrast to its real counterpart, there are infinitely many equivalence classes of polynomial embeddings of $\field{C}^*$ in $\field{C}^2$. The proof of {\it Theorem\,\ref{main}} relies on the recent classification of closed $\field{C}^*$-embeddings in $\field{C}^2$ due to Cassou-Nogues, Koras, Palka and Russell found in \cite{Cassou-Nogues.Koras.Russell.09, Koras.Palka.Russell.16, Koras.Palka.ppt}; see also \cite{Kaliman.96, Borodzik.Zoladek.10, Sathaye.11}. The proof of {\it Theorem\,\ref{main}} also uses the polar group of the real plane $\field{R}^2$. The polar group of a real form of a complex affine variety is introduced in \cite{Freudenburg.ppt18a}. In their classification, Cassou-Nogues, Koras, Palka and Russell show that each equivalence class of closed embeddings of $\field{C}^*$ in $\field{C}^2$ is represented by a polynomial with rational coefficients. Therefore, every polynomial embedding of $\field{C}^*$ in $\field{C}^2$ admits a real form as an embedding. The proof of {\it Theorem\,\ref{main}} shows that, if a closed embedding of $\field{C}^*$ in $\field{C}^2$ admits two distinct real forms, then this embedding is equivalent to the standard embedding, given by $xy=1$. 
One is thus led to ask about polynomial embeddings of $\field{S}^n$ in $\field{R}^{n+1}$. To the author's knowledge, there are no known examples of such embeddings which are not equivalent. Similarly, we ask if there exists {\it any} polynomial embedding of the torus $\field{S}^1\times\field{S}^1$ in $\field{R}^3$. The usual rendition of a topological torus as a surface of revolution in $\field{R}^3$ does indeed give an algebraic surface $T$ which is diffeomorphic to $\field{S}^1\times\field{S}^1$. However, it turns out that $T$ is a nontrivial algebraic $\field{S}^1$-bundle over $\field{S}^1$. This is shown in {\it Section\,\ref{torus}}. Note that $\field{S}^1\times\field{S}^1$ is a real form of the complex torus $\field{C}^*\times\field{C}^*$, which has polynomial embeddings in $\field{C}^3$, for example, $xyz=1$. \medskip \noindent {\bf Notation and Terminology.} Let $R$ be a ring. The group of units of the ring $R$ is denoted $R^*$. The multiplicative monoid $R\setminus\{ 0\}$ is denoted $R'$. The polynomial ring in $n$ variables over the ring $R$ is denoted $R^{[n]}$. $\field{R}^n$ denotes affine $n$-space over $\field{R}$. The real $n$-sphere $\field{S}^n$ is the algebraic variety in $\field{R}^{n+1}$ defined by the polynomial equation $x_0^2+\cdots +x_n^2=1$. A {\bf polynomial embedding} of $\field{S}^n$ in $\field{R}^{N+1}$ is of the form $F=0$ for some $F\in \field{R} [x_0,...,x_N]$, $N\ge n$. The {\bf standard embedding} is given by $x_0^2+\cdots +x_n^2=1$. Two embeddings are {\bf equivalent} if they differ by an algebraic automorphism of $\field{R}^{N+1}$. Let $X$ be an affine $\field{R}$-variety with coordinate ring $\field{R} [X]$, and $Y$ an affine $\field{C}$-variety with coordinate ring $\field{C} [Y]$. Then $X$ is a {\bf real form} of $Y$ if $\field{C}\otimes_{\field{R}}\field{R}[X]=\field{C} [Y]$. 
\medskip \noindent{\bf Acknowledgment.} The author wishes to acknowledge that many ideas in this paper were influenced by discussions with Daniel Daigle (University of Ottawa), Shulim Kaliman (University of Miami), Lucy Moser-Jauslin (Universite de Bourgogne), Peter Russell (McGill University) and Karol Palka (Warsaw University). \section{Preliminary Results} Throughout this section, $A$ is an affine integral domain over $\field{R}$, and $B=\field{C}\otimes_{\field{R}}A=A[i]=A\oplus iA$. Assume that $B$ is also an integral domain. Given $f\in B$, write $f=f_1+if_2$ for $f_1,f_2\in A$. The {\bf conjugate} of $f$ is $\bar{f}=f_1-if_2$. \subsection{Polar Groups} Some facts about polar groups are required, as laid out in \cite{Freudenburg.ppt18a}. The element $f\in B'$ has {\bf no real divisor} if $f=rg$ for $r\in A$ and $g\in B$ implies $r\in A^*$. The set of $f\in B'$ with no real divisor is denoted $\Delta (B)$, and the set of irreducible elements of $\Delta (B)$ is denoted by $\Delta (B)_1$. \begin{theorem}\label{UFDUFD} {\rm (\cite{Freudenburg.ppt18a},Thm.\,5.1,Thm.\,5.5)} Assume that $A$ and $B$ are UFDs. Given $f\in B'$, $f\in\Delta (B)$ if and only if $\gcd (f,\bar{f})=1$. \end{theorem} Let $K={\rm frac}(A)$ and $L={\rm frac}(B)$. The {\bf polar group} of $A$ is the quotient group $L^*/B^*K^*$, which is denoted $\Pi (A)$. This group is an invariant of $A$ which encodes information about the residual divisors in $B$ over $A$. Given $f\in B$, let $[f]$ denote its image in $\Pi (A)$. A key feature of this group is that $[f]^{-1}=[\bar{f}]$. \subsection{Units and Gradings} \begin{lemma}\label{trivial-units} Suppose that $A^*=\field{R}^*$. If $f\in B^*$, then $f^{-1}=\lambda\bar{f}$ for some $\lambda\in\field{R}^*$. \end{lemma} \begin{proof} We have: \[ ff^{-1}=1 \implies \bar{f}(\overline{f^{-1}})=1 \implies \bar{f}\in B^* \] Therefore, $f\bar{f}\in B^*\cap A=A^*=\field{R}^*$. If $f\bar{f}=\rho$, then $\bar{f}=\rho f^{-1}$. 
\end{proof} \begin{lemma}\label{deg} Suppose that $B$ has a $\field{Z}$-grading, and let $\deg$ be the induced degree function on $B$. If $A$ is a graded subring, then $\deg f=\deg\bar{f}$ for all $f\in B$. \end{lemma} \begin{proof} Given $g\in B'$, let $\eta (g)$ denote the highest degree homogeneous summand of $g$. Note that, since $A$ is a graded subring, $\eta (g)\in A$ if $g\in A$. Suppose that $f\in B'$, and write $f=f_1+if_2$ for $f_1,f_2\in A$. If $\deg f<\max\{\deg f_1,\deg f_2\}$, then $\deg f_1=\deg f_2$, which implies $\eta (f_1)+i\eta (f_2)=0$. But then $\eta (f_1)=\eta (f_2)=0$ implies $\eta (f)=0$, a contradiction. Therefore: \[ \deg f=\max\{ \deg f_1,\deg f_2\} = \deg \bar{f} \] \end{proof} Recall that an $\field{N}$-grading of $B$ is a $\field{Z}$-grading $\bigoplus_nB_n$ in which $B_n=\{0\}$ for $n\in\field{Z}\setminus\field{N}$. \begin{lemma}\label{unit-degree} Suppose that $B$ has an $\field{N}$-grading and $A$ is a graded subring, and let $\deg$ be the induced degree function on $B$. Suppose that $P\in A'$ is prime in $B$ and $(A/PA)^*=\field{R}^*$. Given $f\in B$, if the image of $f$ in $B/PB$ is a unit, then either $f\in B^*$ or $\deg P\le 2\deg f$. \end{lemma} \begin{proof} First note that $A/PA$ is a real form of $B/PB$. Assume that $f\not\in B^*$. Let $\pi :B\to B/PB$ be the standard surjection. By hypothesis, there exists $h\in B'$ such that $\pi (f)\pi (h)=1$. Therefore, there exists $Q\in B$ with $fh=1+PQ$. In addition, by {\it Lemma\,\ref{trivial-units}}, there exists $\lambda\in\field{R}^*$ with $\pi (h)=\lambda \overline{\pi (f)}=\lambda\pi (\bar{f})$. Therefore, there exists $R\in B$ with $h=\lambda\bar{f}+PR$. So altogether we can write $\lambda f\bar{f}=1+PS$ for some $S\in B$. Note that $S\ne 0$, since $f\not\in B^*$. By {\it Lemma\,\ref{deg}}, we have $\deg f=\deg\bar{f}$. Therefore, $2\deg f=\deg P+\deg S\ge\deg P$. 
\end{proof} \subsection{Polynomial Rings} In this section, assume that: \[ A=\field{R} [x,y]\cong\field{R}^{[2]} \quad {\rm and}\quad B=\field{C}\otimes_{\field{R}}A=\field{C} [x,y]\cong\field{C}^{[2]} \] We consider the standard $\field{N}$-grading of $A$ and $B$, wherein $x$ and $y$ are homogeneous of degree one. \begin{lemma}\label{AMS} Let $\alpha\in A$ be such that $B=\field{C} [\alpha ,u]$ for some $u\in B$. Then there exists $\beta\in A$ such that $A=\field{R} [\alpha ,\beta]$. If $u\in A$, then we may take $u=\beta$. \end{lemma} \begin{proof} If $u\in A$, then $A=\field{R} [\alpha ,u]$ by Cor.\,3.28 of \cite{Freudenburg.17}. So assume $u\not\in A$. $A/\alpha A$ is a real form of $B/\alpha B\cong\field{C}^{[1]}$, and it is known that the only real form of $\field{C}^{[1]}$ is $\field{R}^{[1]}$ (see \cite{Russell.81}). Therefore, $A/\alpha A\cong\field{R}^{[1]}$. By the Abhyankar-Moh-Suzuki Theorem \cite{Abhyankar.Moh.75, Suzuki.74}, there exists $\beta\in A$ with $A=\field{R} [\alpha ,\beta ]$. \end{proof} \begin{lemma} Let $Q=x^2+y^2-1\in A$. If $P\in A$ is such that $A/PA\cong_{\field{R}}A/QA$, then: \[ B/PB\cong_{\field{C}}B/QB =\field{C}[t,t^{-1}] \] \end{lemma} \begin{proof} Let $A_1=A/QA$ and $A_2=A/PA$, and let $\alpha :A_1\to A_2$ be an isomorphism of $\field{R}$-algebras. Let $B_1=B/QB=A_1[z]$, where $z^2+1=0$, and $B_2=B/PB=A_2[w]$, where $w^2+1=0$. Extend $\alpha$ to $\beta :B_1\to B_2$ by setting $\beta (z)=w$. Then $\beta$ is an $\field{R}$-algebra isomorphism, and since $\beta (\field{R}[z])=\field{R}[w]$, we can view $\beta$ as an isomorphism of $\field{C}$-algebras. \end{proof} \begin{lemma}\label{P-form} Suppose that $u,v\in B$ satisfy $B=\field{C} [u,v]$ and $[v]\ne 1$ in $\Pi (A)$. Let $P\in B'$ have the form $P=v^mf+1$ for $f\in B'$ and $m\ge 1$. Assume that: \begin{enumerate} \item $P\in A$ \item $P$ is irreducible in $B$ \item $(A/PA)^*=\field{R}^*$ \end{enumerate} Then $m=1$ and $fB=\bar{v}B$.
\end{lemma} \begin{proof} We have: \[ P\in A \implies v^mf\in A \implies v^mf=\bar{v}^m\bar{f} \] Since $v$ is irreducible and $[v]\ne 1$, we see that $v\in\Delta (B)_1$. By {\it Thm.\,\ref{UFDUFD}}, $\gcd (v,\bar{v})=1$. Since $v$ and $\bar{v}$ are prime, it follows that $f\in\bar{v}^mB$. Write $f=\bar{v}^mg$ for $g\in B'$. Then $v^m\bar{v}^mg=\bar{v}^mv^m\bar{g}$ implies $g=\bar{g}$ and $g\in A$. Let $\pi :A\to A/PA$ be the standard surjection. Since $P=(v\bar{v})^mg+1$, we see that $\pi (g)$ is a unit of $A/PA$. By hypothesis, there exists $\lambda\in\field{R}^*$ and $T\in A$ with $g=\lambda +PT$. If $T\ne 0$, then $\deg P=m\deg (v\bar{v}) + \deg P +\deg T$, which is not possible, since $\deg (v\bar{v})>0$. Therefore, $T=0$ and $g=\lambda\in A^*$, so $fB=\bar{v}^mB$. Let $\zeta =\lambda^{1/m}\in\field{C}^*$. Then $P=(\zeta v\bar{v})^m+1$. Since $P$ is irreducible in $B$, $m=1$. \end{proof} \begin{lemma}\label{cusp} Let $\tilde{B}=\field{C} [t,t^{-1}]$. Suppose that $f^a=g^b$ for $f,g\in\tilde{B}$ and $a,b\in\field{N}$ relatively prime. Then there exists $h\in\tilde{B}$ such that $f=h^b$ and $g=h^a$. \end{lemma} \begin{proof} If $a=1$ or $b=1$, this is clear, so we may assume that $b>a>1$. We first show that, for some $h\in\tilde{B}$: \begin{equation}\label{equation1} f^a\tilde{B}=g^b\tilde{B} \implies f\tilde{B}=h^b\tilde{B} \quad {\rm and}\quad g\tilde{B}=h^a\tilde{B} \end{equation} Let $f=dF$ and $g=dG$, where $d,F,G\in\tilde{B}$ and $\gcd (F,G)=1$. Then $F^a\tilde{B}=d^{b-a}G^b\tilde{B}$, and since $\gcd (F,G)=1$, we must have $G\in\tilde{B}^*$. Therefore, $F^a\tilde{B}=d^{b-a}\tilde{B}$. By induction, we conclude that $F\tilde{B}=h^{b-a}\tilde{B}$ and $d\tilde{B}=h^a\tilde{B}$ for some $h\in\tilde{B}$. It follows that $f\tilde{B}=(d\tilde{B})(F\tilde{B})=(h^a\tilde{B})(h^{b-a}\tilde{B})=h^b\tilde{B}$. So the implication (\ref{equation1}) is proved. Let $\omega ,\zeta\in\field{C}^*$ and $m,n\in\field{Z}$ be such that $f=\omega t^mh^b$ and $g=\zeta t^nh^a$. 
Then: \[ f^a=g^b \implies am=bn \quad {\rm and}\quad \omega^a=\zeta^b \implies a\,\vert\, n \quad {\rm and}\quad b\,\vert\, m \] Define $k=m/b=n/a$, and let $\lambda\in \field{C}^*$ be such that $\omega =\lambda^b$ and $\zeta=\lambda^a$. If $H=\lambda t^kh$, then $f=H^b$ and $g=H^a$. \end{proof} \section{Main Result} Let $A=\field{R} [x,y]\cong\field{R}^{[2]}$ and $B=\field{C}\otimes_{\field{R}}A=\field{C} [x,y]\cong\field{C}^{[2]}$. The goal of this section is to prove the following. \begin{theorem}\label{main} Define $Q\in A$ by $Q=x^2+y^2-1$. Given $P\in A$, if $A/PA\cong_{\field{R}}A/QA$, then there exist $f,g\in A$ such that $A=\field{R} [f,g]$ and $P=f^2+g^2-1$. \end{theorem} The proof of this theorem is based on the classification of closed $\field{C}^*$-embeddings in $\field{C}^2$ found in \cite{Cassou-Nogues.Koras.Russell.09, Koras.Palka.Russell.16,Koras.Palka.ppt}. We show that, for almost every $\field{C}^*$-embedding in their classification, the induced real form of the representative polynomial embedding is an embedding of $\field{R}^*$ in $\field{R}^2$. There is only one exceptional case, and in this case, the induced embedding of $\field{S}^1$ in $\field{R}^2$ is equivalent to the standard embedding. An important distinction between $\field{R}^*$ and $\field{S}^1$ is that the coordinate ring of $\field{R}^*$ has nontrivial units, whereas the units of the coordinate ring of $\field{S}^1$ are trivial. The authors of \cite{Cassou-Nogues.Koras.Russell.09, Koras.Palka.Russell.16,Koras.Palka.ppt} distinguish three types of closed $\field{C}^*$-embeddings in $\field{C}^2$: Those with a very good asymptote, those with a good asymptote, and the sporadic embeddings. These three cases are dealt with in {\it Prop.\,\ref{prop1}}, {\it Prop.\,\ref{prop2}} and {\it Prop.\,\ref{prop3}}, respectively. \subsection{One Very Good Asymptote} See \cite{Cassou-Nogues.Koras.Russell.09}, Thm.\,8.2\,(i). 
\begin{proposition}\label{prop1} Suppose that $B=\field{C} [u,v]$ and that $P\in B'$ is of one of the following two forms. \begin{itemize} \item [(i.1)] $P(u,v)=v^a-(uv^k+g(v))^b$, with $a,b\ge1$ and $\gcd(a,b)=1$, $k\ge 1$, $g(0)=1$ and $g$ otherwise arbitrary of degree at most $k-1$. \medskip \item [(i.2)] $P(u,v)=1-v^{b-a}(uv^{k-1}+g(v))^b$, with $b>a\ge 1$, $\gcd(a,b)=1$, $k\ge 1$, $g$ arbitrary of degree at most $k-2$. \end{itemize} \medskip If $P\in A$ and $(A/PA)^*=\field{R}^*$, then $P$ is of form {\rm (i.1)} with $b=k=1$, and $P$ defines the standard embedding of $\field{S}^1$ in $\field{R}^2$. \end{proposition} \begin{proof} First consider the case $[v]=1$ in the polar group $\Pi (A)$. In this case, $v=\omega\alpha$ for $\omega\in\field{C}^*$ and $\alpha\in A'$. By {\it Lemma\,\ref{AMS}}, there exists $\beta\in A$ such that $A=\field{R} [\alpha ,\beta]$. So we may assume that $x=\beta$ and $y=\alpha$. Since $B=\field{C} [u,y]=\field{C} [x,y]$, it follows that $u=\lambda x + \mu (y)$ for $\lambda\in\field{C}^*$ and $\mu (y)\in\field{C} [y]$. Therefore, form (i.1) becomes \[ P(u,v)=P(\lambda x+\mu (y), \omega y) = ry^a-(sxy^k+h(y))^b \quad (r,s\in\field{R}^*\, ,\,\, h\in\field{R}[y]) \] and form (i.2) becomes: \[ P(u,v)=P(\lambda x+\mu (y), \omega y)=1-ry^{b-a}(sxy^{k-1}+h(y))^b \quad (r,s\in\field{R}^*\, ,\,\, h\in\field{R}[y]) \] Since $P\in A$, we see that, in each case, the image of $y$ in $A/PA$ is a non-constant invertible function, meaning that $(A/PA)^*\ne\field{R}^*$. Therefore, $[v]\ne 1$. Consider form (i.2). By {\it Lemma\,\ref{P-form}}, we must have $b-a=1$ and $b=1$, which gives a contradiction. Therefore, $P(u,v)$ cannot be of form (i.2). Consider form (i.1). Write $P=vF-1$ for $F\in B'$. By {\it Lemma\,\ref{P-form}}, $FB=\bar{v}B$.
If $F=\lambda\bar{v}$ for $\lambda\in\field{C}^*$, and if $v=v_1+iv_2$ for $v_1,v_2\in A$, then: \[ P=\lambda v\bar{v}-1=\lambda (v_1^2+v_2^2)-1\in A \implies \lambda\in\field{R}^* \] By {\it Lemma\,\ref{deg}}, it follows that: \[ 2\deg v=\deg P=\max\{ a\deg v,\deg u+bk\deg v\} \implies 2\deg v>bk\deg v \implies b=k=1 \] We thus have: \[ P=v^a-uv-1=v(v^{a-1}-u)-1 \implies F=v^{a-1}-u \implies (u-v^{a-1})B=\bar{v}B \] Therefore: \[ B=\field{C} [u,v]=\field{C} [u-v^{a-1},v]=\field{C} [\bar{v},v] = \field{C} [v_1,v_2] \] By {\it Lemma\,\ref{AMS}}, $A=\field{R} [v_1,v_2]$. \end{proof} \subsection{One Good Asymptote} See \cite{Cassou-Nogues.Koras.Russell.09}, Thm.\,8.2\,(ii). \begin{proposition}\label{prop2} Suppose that $B=\field{C} [u,v]$ and that $P\in B'$ is of one of the following five forms. \begin{itemize} \item [(ii.1)] $v^kP=(v+F^s)^p-F^{sp+1}$ and $F=uv^k+g(v)$, where $s,p,k\ge 1$; $g$ is a polynomial of degree at most $k-1$ uniquely determined by the condition that $P$ is a polynomial and $g(0)=1$. \medskip \item [(ii.2)] $v^kP=(v+F^s)^p-F^{sp-1}$ and $F=uv^k+g(v)$, where $s,p,k\ge 1$, $sp\ge 2$; $g$ is a polynomial of degree at most $k-1$ uniquely determined by the condition that $P$ is a polynomial and $g(0)=1$. \medskip \item [(ii.3)] $v^kP=v-16v^2+4vF-8vF^2+F^3-F^4$ for $F=uv^k+g(v)$, where $k\ge 1$, and $g$ is a polynomial of degree at most $k-1$ uniquely determined by the condition that $P$ is a polynomial and $g(0)=1$. \medskip \item [(ii.4)] $v^{k-1}P=(1+vF^{s+1})^pF-1$ for $F=uv^{k-1}+g(v)$, where $s,p,k\ge 1$, and $g$ is a polynomial of degree at most $k-2$ uniquely determined by the condition that $P$ is a polynomial. \medskip \item [(ii.5)] $v^{k-1}P=(1+vF^{s+1})^p-F$ for $F=uv^{k-1}+g(v)$, where $s,p,k\ge 1$, and $g$ is a polynomial of degree at most $k-2$ uniquely determined by the condition that $P$ is a polynomial. \end{itemize} \medskip If $P\in A$, then $(A/PA)^*\ne\field{R}^*$.
\end{proposition} \begin{proof} Assume, to the contrary, that $(A/PA)^*=\field{R}^*$. Let $G\in B$ be such that $F=vG+1$. \medskip \noindent {\it Form (ii.1).} By {\it Lemma\,\ref{cusp}}, there exists $h\in B/PB$ such that $F\equiv h^p$ and $v+F^s\equiv h^{sp+1}$ modulo $P$. Therefore: \[ v\equiv h^{sp}(h-1) \implies h^p\equiv vG+1\equiv h^{sp}(h-1)G+1 \implies h^p(1-h^{(s-1)p}(h-1)G)\equiv 1 \] Consequently, $h$ is a unit modulo $P$, which implies that $F$ is a unit modulo $P$. Since $F\not\in B^*=\field{C}^*$, {\it Lemma\,\ref{unit-degree}} implies: \[ 2\deg F \ge \deg P=sp\deg F + \deg u > sp\deg F \implies s=p=1 \] We thus have: \[ v^kP=v+F-F^2=v-vFG \implies v^{k-1}P=1-FG \] By {\it Lemma\,\ref{trivial-units}}, $\lambda\bar{F}=G+PL$ for some $L\in B$. However, the equalities $\deg P=\deg F+\deg u$, $\deg \bar{F}=\deg F$ and $\deg G=\deg F-\deg v$ imply $\deg P>\deg \bar{F}>\deg G$, thus precluding the existence of the equation $\lambda\bar{F}=G+PL$. Therefore, $P$ cannot be of form (ii.1). \medskip \noindent {\it Form (ii.2).} Reasoning as in the case of form (ii.1), we find that $2\deg F\ge \deg P$. In this case, $\deg P=(sp-1)\deg F+\deg u$, and it follows that $sp=2$. Assume that $s=2$ and $p=1$. Then $v^kP=v+F(F-1)$ implies that $v^{k-1}P=1+FG$. By {\it Lemma\,\ref{trivial-units}}, $\lambda\bar{F}=G+PN$ for some $N\in B$. However, the equalities $\deg P=\deg F+\deg u$, $\deg \bar{F}=\deg F$ and $\deg G=\deg F-\deg v$ imply $\deg P>\deg \bar{F}>\deg G$, thus precluding the existence of the equation $\lambda\bar{F}=G+PN$. Therefore, $s=1$ and $p=2$. If $k\ge 2$, then $G=vH-2$ for some $H\in B$, and: \[ v^kP=(v+F)^2-F \implies v^{k-1}P=v+2F+F(vH-2) \implies v^{k-2}P=1+FH \] By {\it Lemma\,\ref{trivial-units}}, there exist $\lambda\in\field{R}^*$ and $M\in B$ with $\lambda\bar{F}=H+PM$. 
However, the equalities $\deg P=\deg F+\deg u$, $\deg \bar{F}=\deg F$ and $\deg H=\deg F-2\deg v$ imply $\deg P>\deg \bar{F}>\deg H$, thus precluding the existence of the equation $\lambda\bar{F}=H+PM$. Therefore, $k=1$. We have $vP=(v+F)^2-F$ and $F=uv+1$, which means that $P=v+2F+uF$. Define $h\in B$ by $h=v+F$. Then $P=h+(1+u)F$. Modulo $P$, we have: \[ h(1+u)\equiv h+uh\equiv h+uv+uF\equiv h+(F-1)-v-2F\equiv h-v-F-1\equiv -1 \] Therefore, $1+u$ is a unit modulo $P$. By {\it Lemma\,\ref{unit-degree}}, \[ 2\deg u+\deg v=\deg P\le 2\deg (1+u)=2\deg u \] a contradiction. Therefore, $P$ cannot be of form (ii.2). \medskip \noindent{\it Form (ii.3).} If $H=v^{-1}(F^2+4v-F)=uF+4$, then \[ vH(F^2+4v)=(F^2+4v-F)(F^2+4v)=F^4-F^3+8vF^2-4vF+16v^2=v-v^kP \] which implies: \[ H(F^2+4v)=1-v^{k-1}P \] Therefore, $H$ is a unit modulo $P$. Since $H\not\in B^*=\field{C}^*$, {\it Lemma\,\ref{unit-degree}} implies: \[ 4\deg u+3k\deg v=\deg P\le 2\deg H=4\deg u+2k\deg v \] a contradiction. Therefore, $P$ cannot be of form (ii.3). \medskip \noindent {\it Form (ii.4).} By definition, $F$ is a unit modulo $P$, but $F\not\in B^*=\field{C}^*$. Therefore, by {\it Lemma\,\ref{unit-degree}}, $\deg P\le 2\deg F$. However, \[ \deg P=p\deg (vF^{s+1})+\deg F-(k-1)\deg v = p\deg (vF^{s+1})+\deg u>2\deg F \] which gives a contradiction. Therefore, $P$ cannot be of form (ii.4). \medskip \noindent {\it Form (ii.5).} Write $v^{k-1}P=FQ+1$ for $Q\in B$. By {\it Lemma\,\ref{trivial-units}}, there exists $H\in B$ and $\lambda\in\field{R}^*$ such that $\lambda\bar{F}=Q+PH$. However, the equalities \[ \deg P=\deg (vF^{s+1})^p-\deg F+\deg u\,\, ,\,\, \deg Q=\deg (vF^{s+1})^p-\deg F\,\, ,\,\, \deg\bar{F}=\deg F \] imply $\deg P>\deg Q>\deg \bar{F}$, thus precluding the existence of the equation $\lambda\bar{F}=Q+PH$. Therefore, $P$ cannot be of form (ii.5). In conclusion, $(A/PA)^*\ne\field{R}^*$ whenever $P\in A$ and $P$ has one of the forms (ii.1)-(ii.5).
\end{proof} \subsection{Sporadic Embeddings} See \cite{Koras.Palka.Russell.16}. Two known families of sporadic embeddings of $\field{C}^*$ in $\field{C}^2$ are parametrized as follows, where the second family has only one member. \begin{enumerate} \item $X=t^{2n}(t^2+t+\frac{1}{2})$ and $Y=t^{-2n-4}(t^2-t+\frac{1}{2})$, where $n\in\field{Z}$, $n\ge 1$. \medskip \item $X=t^4(t^2+t+\frac{2}{3})$ and $Y=t^{-8}(t^2-t+\frac{1}{3})$. \end{enumerate} In the first case, note that, if $F=4(XY-1)$, then $F=t^{-4}$. Therefore: \[ XF^{n+1}=t^{-2n-4}(t^2+t+{\textstyle\frac{1}{2}})=Y+2t^{-2n-3} \implies {\textstyle\frac{1}{2}}(XF^{n+1}-Y)=t^{-2n-3} \] We thus obtain the relation: \[ (Y-XF^{n+1})^4=16F^{2n+3} \] Since $\gcd (4,2n+3)=1$ and $\gcd (Y-XF^{n+1},F)=1$ as polynomials in $B$, it follows that this is a prime relation. In the second case, let $F=-3(XY-1)$, $G=XF+\frac{4}{9}$ and $H=4Y-3F^2$. Then $H=t^{-6}$ and $3(X-G^2)H-3(1-FG)=t^{-2}$. This gives the relation: \[ H=27((X-G^2)H+FG-1)^3 \] Consider polynomials $f,g,h,p\in B=\field{C} [x,y]=\field{C}^{[2]}$ defined by: \[ f=-3(xy-1)\,\, ,\,\, g=xf+{\textstyle\frac{4}{9}}\,\, ,\,\, h=4y-3f^2 \,\, ,\,\, p=h-27((x-g^2)h+fg-1)^3 \] By direct calculation (using {\it Maple} for example), we find that $p$ is irreducible, and that the highest-degree homogeneous summand of $p$ is $x^6y^4$. \begin{proposition}\label{prop3} Suppose that $B=\field{C} [u,v]$ and that $P\in B'$ is of one of the following two forms. \begin{itemize} \item [(iii.1)] $P=(v-uF^{n+1})^4-16F^{2n+3}$ and $F=4(uv-1)$, where $n\ge 1$. \medskip \item [(iii.2)] $P=4v-3F^2-27((u-G^2)H+FG-1)^3$, where $F=-3(uv-1)$, $G=uF+\frac{4}{9}$ and $H=4v-3F^2$. \end{itemize} \medskip If $P\in A$, then $(A/PA)^*\ne\field{R}^*$. \end{proposition} \begin{proof} Assume, to the contrary, that $(A/PA)^*=\field{R}^*$. \medskip \noindent {\it Form (iii.1).} By the foregoing discussion, $F$ is a unit modulo $P$.
Since $F\not\in B^*=\field{C}^*$, {\it Lemma\,\ref{unit-degree}} implies that, if $P\in A$, then $\deg P\le 2\deg F$. But this is impossible, since $\deg P=4\deg (uF^{n+1})$. \medskip \noindent {\it Form (iii.2).} Let $H=4v-3F^2$. By the foregoing discussion, $H$ is a unit modulo $P$. Since $H\not\in B^*=\field{C}^*$, {\it Lemma\,\ref{unit-degree}} implies that, if $P\in A$, then $\deg P\le 2\deg H$. But this is impossible, since $\deg P=6\deg u+4\deg v$, while $\deg H=2\deg u+2\deg v$. Therefore, $(A/PA)^*\ne\field{R}^*$ whenever $P\in A$ and $P$ has one of the forms (iii.1)-(iii.2). \end{proof} This completes the proof of {\it Thm.\,\ref{main}}. \section{A Remark on $\field{S}^1$-Bundles over $\field{S}^1$}\label{torus} Let $R=\field{R} [\field{S}^1]$ be the coordinate ring of $\field{S}^1$, and write $R=\field{R} [a,b]$, where $a^2+b^2=1$. Define the affine surface $T$ over $\field{R}$ with coordinate ring: \[ A=R[x,y]/(x^2+y^2-(a+2)^2) \] Let $\pi :T\to \field{S}^1$ be the surjective morphism induced by the inclusion $\field{R} [a,b]\subset A$. The fiber over $(r,s)\in\field{S}^1$ is defined by the quotient ring: \[ A/(a-r,b-s)= \field{R} [x,y]/(x^2+y^2-(r+2)^2)\cong_{\field{R}}\field{R} [\field{S}^1] \] So every fiber of $\pi$ is isomorphic to $\field{S}^1$, and $T$ is an $\field{S}^1$-bundle over $\field{S}^1$. In $A$, we have $4a=x^2+y^2-a^2-4=x^2+y^2+b^2-5$. Therefore, $A=\field{R} [b,x,y]$, meaning that $T$ admits a polynomial embedding in $\field{R}^3$. The defining polynomial relation is: \[ \textstyle \left(\frac{1}{4}(x^2+y^2+b^2-5)\right)^2+b^2=1 \implies (x^2+y^2+b^2+3)^2=16(x^2+y^2) \] We thus recognize $T$ as the surface obtained by revolving the circle $(y-2)^2+b^2=1$ about the $b$-axis. \begin{proposition} $T\not\cong_{\field{R}}\field{S}^1\times\field{S}^1$ \end{proposition} \begin{proof} (due to Daigle) Consider the integral domain $B=\field{C}\otimes_{\field{R}}A=\field{C} [a,b,u,v]/(uv-(a+2)^2)$, where $u=x+iy$ and $v=x-iy$. 
Then $B$ has a $\field{Z}$-grading $B=\bigoplus_{n\in\field{Z}}B_n$ over $\field{C} [a,b]$ in which $u$ and $v$ are homogeneous, $\deg u=1$ and $\deg v=-1$. In addition: \[ B_0=\field{C} [a,b,uv]=\field{C} [a,b]\,\, ,\,\, B_n=u^nB_0 \,\, {\rm for}\,\, n\ge 1\,\, ,\,\, B_n=v^{-n}B_0 \,\, {\rm for}\,\, n\le -1 \] Any unit of $B$ is homogeneous. Given $w\in B^*$, assume that $\deg w=n\ge 0$ and write $w=u^nc$ for $c\in B_0$. If $n\ne 0$, then $u\in B^*$, which is absurd since $\dim_{\field{C}}(B/uB)\ge 1$. Therefore, $n=0$. If instead $\deg w\le 0$, then in the same way we get $\deg w=0$. So $\deg w=0$ in any case. Therefore: \[ B^*\subset B_0 \implies \field{C} [B^*]=B_0=\field{C} [a,b] \] Consider $\field{C}\otimes (\field{S}^1\times\field{S}^1)=\field{C}^*\times\field{C}^*$, which has coordinate ring $W=\field{C} [s,s^{-1},t,t^{-1}]$. Since $W=\field{C} [W^*]$, it follows that $B\not\cong_{\field{C}}W$. Therefore, $A\not\cong_{\field{R}}\field{R} [\field{S}^1\times\field{S}^1]$. \end{proof} \noindent {\bf Question.} Does the real algebraic torus $\field{S}^1\times\field{S}^1$ admit a polynomial embedding in $\field{R}^3$? \medskip \bibliographystyle{amsplain}
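Both the prime relation of the first sporadic family and the revolved-torus relation lend themselves to verification with exact rational arithmetic. The following is a minimal sketch in Python (standard library only; the script and its layout are our own). The family-1 identities are Laurent-polynomial identities in $t$ and are only spot-checked at sample points here; the torus identity is a polynomial identity of degree $4$ in $a$, so agreement at five exact points does prove it.

```python
from fractions import Fraction as Fr

# First sporadic family: X = t^{2n}(t^2 + t + 1/2), Y = t^{-2n-4}(t^2 - t + 1/2).
# Spot-check F := 4(XY - 1) = t^{-4} and the prime relation
# (Y - X F^{n+1})^4 = 16 F^{2n+3} at exact rational values of t.
for n in (1, 2, 3):
    for t in (Fr(1, 2), Fr(3, 5), Fr(2), Fr(7)):
        X = t**(2 * n) * (t**2 + t + Fr(1, 2))
        Y = t**(-2 * n - 4) * (t**2 - t + Fr(1, 2))
        F = 4 * (X * Y - 1)
        assert F == t**(-4)
        assert (Y - X * F**(n + 1))**4 == 16 * F**(2 * n + 3)

# Torus T: x^2 + y^2 = (a+2)^2 and a^2 + b^2 = 1 give a = (x^2+y^2+b^2-5)/4.
# Eliminating a yields, as we compute it, (x^2+y^2+b^2+3)^2 = 16(x^2+y^2).
# Both sides are polynomials of degree 4 in a, so agreement at five distinct
# points proves the identity.
for a in map(Fr, (-2, -1, 0, 1, 2)):
    s, b2 = (a + 2)**2, 1 - a**2      # x^2 + y^2 and b^2 along the circle
    assert ((s + b2 - 5) / 4)**2 + b2 == 1
    assert (s + b2 + 3)**2 == 16 * s
```

Using `fractions.Fraction` keeps every comparison exact, so a passing assertion is a genuine algebraic check at that sample point rather than a floating-point approximation.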
(Source: "On the uniqueness of polynomial embeddings of the real 1-sphere in the plane", https://arxiv.org/abs/1804.09191)
https://arxiv.org/abs/1007.5101
A relative isoperimetric inequality for certain warped product spaces
Given a warped product space $\mathbb{R} \times_{f} N$ with logarithmically convex warping function $f$, we prove a relative isoperimetric inequality for regions bounded between a subset of a vertical fiber and its image under an almost everywhere differentiable mapping in the horizontal direction. In particular, given a $k$--dimensional region $F \subset \{b\} \times N$, and the horizontal graph $C \subset \mathbb{R} \times_{f} N$ of an almost everywhere differentiable map over $F$, we prove that the $k$--volume of $C$ is always at least the $k$--volume of the smooth constant height graph over $F$ that traps the same $(1+k)$--volume above $F$ as $C$. We use this to solve a Dido problem for graphs over vertical fibers, and show that, if the warping function is unbounded on the set of horizontal values above a vertical fiber, the volume trapped above that fiber by a graph $C$ is no greater than the $k$--volume of $C$ times a constant that depends only on the warping function.
\section{Introduction}\label{S:Intro} This paper proves a relative isoperimetric inequality for warped product spaces with log convex warping function. The original relative isoperimetric inequality is the solution to the classical Dido problem in the plane, which asks for the greatest amount of area that can be enclosed by a given straight line and a connected curve segment (of a prescribed length) whose endpoints lie on the line. The general version of this isoperimetric problem in an $n$--dimensional complete and simply connected Riemannian manifold of constant curvature has as its solution an $(n-1)$--dimensional hemisphere (e.g., \cite[Theorem 18.1.3]{BurZal}). The connection between log convex functions and isoperimetric inequalities appears most recently in explorations of Euclidean space with density (e.g., \cite[Corollary 4.11, Theorem 4.13]{RosalesEtAl08}, \cite{Bobkov97-1}, \cite{Bobkov97-2}). The result contained herein is most directly comparable to recent results of Kolesnikov and Zhdanov \cite{KolesnikovZhdanov10}, and generalizes a particular relative isoperimetric inequality for hyperbolic 3--space \cite[Section 4]{Rafalski10}. We completely solve an analogue of the Dido problem for these warped product spaces, where the given fixed set is a suitable region inside a vertical fiber of the warped product and the free set is its image in the horizontal direction under a (possibly discontinuous) almost everywhere differentiable function (see Section \ref{S:Conclusion}). Section \ref{S:Defs} provides definitions and Section \ref{S:MainTheorem} contains the statement and proof of the main theorem. \begin{acknowledgments} The author thanks Ian Agol for asking the question that motivates these results, as well as for helpful feedback. Very special thanks to Frank Morgan for helping with the alternative proof and for invaluable preliminary draft discussions. 
\end{acknowledgments} \section{Definitions}\label{S:Defs} Warped product spaces were introduced by Bishop and O'Neill \cite{BishOneill69}. Let $M$ and $N$ be Riemannian manifolds of respective dimensions $m$ and $n$, and let $f \colon\thinspace M \to \mathbb{R}_{>0}$ be a smooth function. Let $\pi \colon\thinspace M \times N \to M$ and $\eta \colon\thinspace M \times N \to N$ denote the projections of the product $M \times N$ onto the factors. Then the warped product $W = M \times_{f} N$ is the product manifold $M \times N$ endowed with the Riemannian metric satisfying $$||v||_{W}^{2} = ||\pi_{*}(v)||_{M}^{2} + f(\pi(p))^{2} ||\eta_{*}(v)||_{N}^{2}$$ for all tangent vectors $v \in W_{p}.$ Subsets of $W$ of the form $\pi^{-1}(b) = \{b\} \times N$ and $\eta^{-1}(q) = M \times \{q\}$ are called \emph{vertical fibers} and \emph{horizontal leaves}, respectively. We note that a warped product is a complete Riemannian manifold if and only if both its factors are complete, and also that the horizontal leaves are totally geodesic. In particular, the Hopf-Rinow Theorem implies that if $W$ is a complete warped product, then there is always a minimal geodesic between any two points in a horizontal leaf, and this geodesic is contained in the leaf. We assume that $W= M \times_{f} N$ is a complete warped product. Let $b \in M$ and let $F \subseteq \{b\} \times N$ be a subset of a vertical fiber of $W$. For any function $g \colon\thinspace \eta(F) \to M$, we can define the \emph{graph of $g$ over $F$} as the subset of $W$ consisting of all pairs $(g(q),q)$, where $(b,q)$ ranges over $F$. For our purposes, we will define the graph as the same point set, but we formulate the definition as follows. For $q \in \eta(F)$, let $\varphi_{q}\colon\thinspace [0,\ell(q)] \to \eta^{-1}(q)$ be the product of the (unit speed) parametrization of the minimal geodesic in $M$ from $b$ to $g(q)$ with the $N$ coordinate $\{q\}$. 
Because horizontal leaves are totally geodesic, this is the minimal geodesic from $(b,q)$ to $(g(q),q)$ in $W$. \begin{definition}\label{D:CeilingsRooms} With the notation above, the \textbf{graph of g over F} is defined to be $$C := \left\{\varphi_{q}(\ell(q)) \in M \times_{f} N \, | \, (b,q) \in F \right\}.$$ The \textbf{room R enclosed by C} is defined to be $$R := \left\{\varphi_{q}(t) \in M \times_{f} N \, | \, (b,q) \in F, \, t \in [0,\ell(q)] \right\}.$$ \end{definition} We will sometimes call $C$ and $F$ the \emph{ceiling} and \emph{floor} of the room $R$, respectively. We observe that if $M$ has points with nonempty cut locus, then the room enclosed by the ceiling $C$ is not necessarily well-defined. When $M=\mathbb{R}$ (as in Theorem \ref{T:IsopIneq}), this is not an issue. If $F$ is $k$--dimensional, then the ceiling and room associated to the graph of $g$ over $F$ have $k$--dimensional volume and $(1+k)$--dimensional volume, respectively, in the warped product. We use $\mbox{\rm{Vol}}_{k}(\cdot)$ to denote $k$--dimensional volume. \section{The Isoperimetric Inequality for Rooms}\label{S:MainTheorem} \begin{theorem}\label{T:IsopIneq} Let $W=\mathbb{R} \times_{f} N$ be a complete $(1+n)$--dimensional warped product with logarithmically convex warping function $f$. Let $b \in \mathbb{R}$, $F \subseteq \{b\} \times N \subset W$ a piecewise smooth $k$--dimensional subset of finite $k$--volume, $g \colon\thinspace \eta(F) \to [b,\infty)$ a (possibly discontinuous) map that is differentiable almost everywhere, $C$ the graph of $g$ over $F$ and $R$ the room enclosed by $C$ over $F$. Let $S$ be the constant height graph over $F$ (on the same side of $F$ as $C$) whose associated room has the same $(1+k)$--dimensional volume as $R$. Then $\mbox{\rm{Vol}}_{k}(S) \leq \mbox{\rm{Vol}}_{k}(C)$, with equality if and only if $g$ is a piecewise constant map and either $C=S$ (up to measure zero) or $f$ is not strictly log convex on a set of positive measure. 
\end{theorem} \begin{remarks}\label{R:ForCH} \end{remarks} \begin{enumerate} \item Uniqueness fails in the case of equality, for example, when $W$ is hyperbolic $n$--space $\mathbb{R} \times_{e^{t}} \mathbb{R}^{n-1}$, in which the vertical fibers are horospheres. Another example is Euclidean $n$--space $\mathbb{R} \times \mathbb{R}^{n-1}$. \item The proof of this result shows that \emph{constant height minimizes the vertical component of area}, which is a somewhat stronger result than what is given in the statement of the theorem. Similar proof techniques, also involving average values, have appeared in investigations of the plane with density (e.g., \cite[Proposition 4.3]{CarrollETAL2008}). \item If $f$ is increasing on $[b,\infty)$, then the result can be easily seen to hold for the more general cases in which either $C$ consists of two disjoint a.e. differentiable graphs over $F$ (with the room $R$ equal to the region trapped between these graphs), or $C$ consists of three disjoint a.e. differentiable graphs over $F$ (with $R$ the union of the region bounded above $F$ by the lowest graph and the region bounded between the other two graphs). Using induction on the number $d$ of disjoint graphs over $F$ (where the corresponding number of components of $R$ equals $(d+1)/2$ or $d/2$---and $R$ does or does not intersect $F$---depending on whether or not $d$ is odd), the result can therefore be shown to hold when $C$ consists of any number of disjoint a.e. differentiable graphs over $F$. We can use this to show that Theorem \ref{T:IsopIneq} holds more generally, under the increasing assumption on $f$. Let $k$ equal the dimension of $F$, and suppose $C$ is any embedded (not necessarily connected) $k$--dimensional a.e. differentiable subset that bounds some union of regions $R$ over $F$. 
Here are some examples to keep in mind: \begin{enumerate} \item The warped product $\mathbb{H}^{3}=\mathbb{R} \times_{\cosh t} \mathbb{H}^{2}$, with $F=\{0\} \times D$ a disc in $\{0\} \times \mathbb{H}^{2}$, $C \subset [0,\infty) \times D$ an embedded surface of any genus with circular boundary contained in $[0,\infty) \times \partial D$ and $R \subset [0,\infty) \times D$ the region bounded between $F$ and $C$. \item The warped product of the line and the $n$--sphere $(0,\infty) \times_{e^{v(r)}} \mathbf{S}^{n}$, where $v(r)$ is increasing and convex, with $F = \{b\} \times \mathbf{S}^{n}$, $C$ any embedded, closed, connected a.e. differentiable $n$--dimensional hypersurface that intersects the ray $[b,\infty) \times \{p\}$ for every $p \in \mathbf{S}^{n}$ and $R$ the complement of $(0,b) \times \mathbf{S}^{n}$ in the region bounded by $C$ that contains $(0,b) \times \mathbf{S}^{n}$. \item The same warped product as in (b), with $C$ any embedded, closed a.e. differentiable $n$--dimensional hypersurface contained in $[b,\infty) \times \mathbf{S}^{n}$ whose image under the vertical projection is \emph{not} all of $\{b\} \times \mathbf{S}^{n}$, $F$ the image of the vertical projection of $C$ on $\{b\} \times \mathbf{S}^{n}$ and $R$ the region bounded by $C$ and disjoint from $(0,b) \times \mathbf{S}^{n}$. \item The same warped product and $F$ as in (b), with $C$ any disjoint union of hypersurfaces as in (b) and (c). \end{enumerate} We may partition $F$ so that $C$ is a disjoint union of graphs over each component of the partition, apply Theorem \ref{T:IsopIneq} over each piece of the partition (by the observations of the previous paragraph) and then apply the theorem again to the resulting piecewise constant height graph over $F$. We have therefore shown that the result holds for this more general notion of ceiling. 
It is important to note here that Theorem \ref{T:IsopIneq} compares the area of $C$ (this includes any area from $F$ obtained when $C$ is at height zero) with the area \emph{only} of the appropriate constant height ceiling over $F$, and \emph{not} also the area of $F$. \item Our result is similar to some recent results of Kolesnikov and Zhdanov, who consider log convex densities on Euclidean space, rather than warping factors of warped product spaces \cite[Section 6]{KolesnikovZhdanov10}. One fact implied by their arguments is that in $\mathbb{R}^{n+1} = [0,\infty) \times \mathbf{S}^{n}$ with the metric $dr^{2} + (e^{u(r)})^{2} d\Theta^{2}$ and radial density $e^{v(r)}$, balls about the origin are perimeter minimizers among all balls containing the origin, if $nu''(r) + v''(r) \geq 0$. This fact is also implied by Theorem \ref{T:IsopIneq} as follows. Take $f(r)=e^{v/n+u}$ to be the warping function on $\mathbb{R}^{n+1} = [0,\infty) \times \mathbf{S}^{n}$. Any ball containing the origin also contains a smaller ball centered at the origin. Use the boundary of this smaller ball as the floor of a room and the boundary of the original ball as the ceiling. The assumption on the second derivatives of $u$ and $v$ implies that $f$ is log convex. The resulting constant height ceiling is the boundary of a ball centered at the origin with the same volume as the original but with no greater perimeter. The metric calculations are the same in both the warped product and density settings. \end{enumerate} \medskip \emph{Proof of \ref{T:IsopIneq}.} The volume of the room $R$ enclosed by $C$ is given by $$\mbox{\rm{Vol}}_{k+1}(R) = \int\limits_{(b,q) \in F} \left(\int_{\varphi_{q}(0)}^{\varphi_{q}(\ell(q))} f(h)^{k} \, dh \right) \, dV_{N_{k}},$$ where $dV_{N_{k}}$ is the $k$--volume measure on $N$, and $dh$ is the line element for $\mathbb{R}$. 
Evaluating the inner integral, we can rewrite this expression as $$\mbox{\rm{Vol}}_{k+1}(R) = \int\limits_{(b,q) \in F} \left(\int_{0}^{\ell(q)} f(\pi(\varphi_{q}(t)))^{k} \, dt \right) \, dV_{N_{k}}.$$ The number $f(\pi(\varphi_{q}(t)))$ is independent of $q$, since it is simply $f$ evaluated at a point in $\mathbb{R}$ at distance $t$ from $b$. We will use the expression $\mu(t)$ for $f(\pi(\varphi_{q}(t)))^{k}$, and note that $\mu$ is log convex. Now we define $H$---the height of the constant height ceiling $S$---implicitly by \begin{align}\label{E:HDefined} \mbox{\rm{Vol}}_{k+1}(R) &= \int\limits_{(b,q) \in F} \left(\int_{0}^{H} \mu(t) \, dt \right) \, dV_{N_{k}} \\ &= \mbox{\rm{Vol}}_{k}(F) \int_{0}^{H} \mu(t) \, dt, \notag \end{align} and use the integral in the above line to define the function $I \colon\thinspace \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ $$I(h) = \int_{0}^{h} \mu(t) \, dt.$$ Because $I$ is strictly increasing, it has an inverse. It is easily verified, using the log convexity of $\mu$, that \begin{equation}\label{E:LogConvex} \frac{d^{2}}{dx^{2}} \left(\mu \circ I^{-1} \right) (x) = \frac{\mu(I^{-1}(x))\mu''(I^{-1}(x)) - (\mu'(I^{-1}(x)))^{2}}{(\mu(I^{-1}(x)))^{3}} \geq 0, \end{equation} and, consequently, that the function $x \mapsto \mu \circ I^{-1}(x)$ is convex. We now take a step function (with values $h_{1}, h_{2},...,h_{r}$) for the map $q \mapsto \ell(q)$, and we partition the floor $F$ into measurable pieces $F_{1}, F_{2},..., F_{r}$ so that the total volume of the rooms $R_{i}$ over the floors $F_{i}$ with constant height $h_{i}$ ceilings $C_{i}$ is equal to $\mbox{\rm{Vol}}_{k+1}(R)$. Referring to \ref{E:HDefined}, we have $$\mbox{\rm{Vol}}_{k}(F) \int_{0}^{H} \mu(t) \, dt = \sum_{i=1}^{r} \mbox{\rm{Vol}}_{k}(F_{i}) \int_{0}^{h_{i}} \mu(t) \, dt,$$ or equivalently, \begin{equation}\label{E:PartitionEqn2} H = I^{-1}\left( \sum_{i=1}^{r} \frac{\mbox{\rm{Vol}}_{k}(F_{i})}{\mbox{\rm{Vol}}_{k}(F)} I(h_{i}) \right). 
\end{equation} Applying the convexity result of \ref{E:LogConvex} and Jensen's inequality to the convex linear combination in the argument of $I^{-1}$ in \ref{E:PartitionEqn2}, we have \begin{align*} \mu(H) &= \mu \circ I^{-1}\left( \sum_{i=1}^{r} \frac{\mbox{\rm{Vol}}_{k}(F_{i})}{\mbox{\rm{Vol}}_{k}(F)} I(h_{i}) \right) \\ &\leq \sum_{i=1}^{r} \frac{\mbox{\rm{Vol}}_{k}(F_{i})}{\mbox{\rm{Vol}}_{k}(F)} \mu \circ I^{-1} (I(h_{i}))\\ &= \sum_{i=1}^{r} \frac{\mbox{\rm{Vol}}_{k}(F_{i})}{\mbox{\rm{Vol}}_{k}(F)} \mu(h_{i}).\\ \end{align*} The outermost ends of the above inequality can be rewritten as in the second line below: \begin{align}\label{E:AreaIneq} \mbox{\rm{Vol}}_{k}(S) &= \int\limits_{q \in \eta(F)} \mu(H) \,\, dV_{N_{k}} \\ \notag &= \mbox{\rm{Vol}}_{k}(F)\mu(H) \leq \sum_{i=1}^{r} \mbox{\rm{Vol}}_{k}(F_{i}) \mu(h_{i})\\ \notag &= \sum_{i=1}^{r} \int\limits_{q \in \eta(F_{i})} \mu(h_{i}) \,\, dV_{N_{k}} = \sum_{i=1}^{r} \mbox{\rm{Vol}}_{k}(C_{i}). \end{align} By our assumption on $g$, the function $q \mapsto \ell(q)$ is differentiable almost everywhere, and therefore, as the number of values $r$ for the step function approximating $q \mapsto \ell(q)$ tends to infinity, each term $\mbox{\rm{Vol}}_{k}(C_{i})$ at the far right above becomes a lower bound for the area of the portion of $C$ that lies above $F_{i}$ (this is because $\mu(h_{i})=f(\pi(\varphi_{q}(h_{i})))^{k}$ is always a lower bound for the Jacobian of the graph map $q \mapsto \varphi_{q}(\ell(q))$ at the sample point for the step function, since the map that determines the graph $C$ preserves the $N$--coordinate of the floor $F$). In the case of equality, we have immediately that $q \mapsto d(b,g(q))$ is locally a constant map, where $d$ denotes the metric in $\mathbb{R}$, because the Jacobian of the graph map must be equal to $\mu(\ell(q))$ almost everywhere (in order for the last line of \ref{E:AreaIneq} to limit to equality as the number of step values increases). Therefore, $g$ is locally constant. 
Also, we have that the middle line of \ref{E:AreaIneq} is an equality. This implies either that $\mu$ is not strictly convex, or that $h_{i}=H$ for every step function approximating $q \mapsto \ell(q)$. In the latter case, it is clear that $C=S$ (up to measure zero). \hfill \fbox{\ref{T:IsopIneq}}\\ We remark that with Frank Morgan we found an alternative calibration proof of this result, which holds for \emph{any current} over a given floor. Let $R$ denote the room over $F$ with ceiling $C$, and $B$ denote the room over $F$ with the same volume as $R$ but with constant height ceiling $S$. Let $X$ be the unit vector field on $W=\mathbb{R} \times_{f} N$ which flows parallel to the horizontal leaves in the positive direction from $F$, and let $\nu$ denote the outward unit normal vector for a given domain in the warped product. Finally, let $dV$ and $dA$ represent the volume form and the codimension 1 volume form on $W$, respectively. Then we have \begin{align}\label{E:Frank} \mbox{\rm{Vol}}_{k}(S) - \mbox{\rm{Vol}}_{k}(F) &= \int\limits_{\partial B} \langle X, \nu \rangle \, dA = \int\limits_{B} \text{div} X \, dV \tag*{} \\ \notag &\leq \int\limits_{R} \text{div} X \, dV = \int\limits_{\partial R} \langle X, \nu \rangle \, dA \leq \mbox{\rm{Vol}}_{k}(C) - \mbox{\rm{Vol}}_{k}(F), \notag \end{align} where the first and last relations follow by evaluation, the second and penultimate relations by the Divergence Theorem, and the middle relation by the logarithmic convexity of the warping function $f$. This final fact is proved as follows. The part of $B$ that lies above $C$ has volume equal to that of the part of $R$ that lies above $S$. Call these pieces $A_{B}$ and $A_{R}$, respectively. The remaining parts of $B$ and $R$ are equal, and so it is only necessary to show that the integrand $\text{div}\, X$ is greater at any point of $A_{R}$ than at any point of $A_{B}$. 
But we note that $\text{div}\, X$ is equal to the derivative of $\log f$ \cite[Lemma 7.3 (2)]{BishOneill69}, and so the log convexity of $f$ implies that $\text{div}\, X$ is increasing in the parameter for $\mathbb{R}$. Since every point of $A_{R}$ is at a height that is at least the height of any point of $A_{B}$, the claim is proved. \section{Critical Points for Isoperimetric Functions}\label{S:Conclusion} We conclude with a discussion of the Dido problem in the warped products we have been considering. We consider warped products of the form $\mathbb{R} \times_{f} N$ with $\log f$ (not necessarily strictly) convex, and choose an $n$--dimensional floor $F \subseteq \{0\} \times N$, denoting the constant height $h$ graph over $F$ and the associated room by $S(h)$ and $B(h)$, respectively. We assume $f(0) = 1$. We have already solved the problem of maximizing the $(1+n)$--volume of the room with floor $F$ and with ceiling $n$--volume $A$. Namely, we solve $\mbox{\rm{Vol}}_{n}(S(h))= \mbox{\rm{Vol}}_{n}(F)f(h)^{n} = A$ for the value $h$. Assuming that $f$ is unbounded on $[0,\infty)$ will ensure that this equation has a solution, provided that $A$ is sufficiently large. In this case, because $f$ is convex, there will be either one or two solutions to this equation, and we take the largest resulting value of $\mbox{\rm{Vol}}_{1+n}(B(h))$. This is the maximal $(1+n)$--volume room because any other ceiling over $F$ with the given $n$--volume will have an associated constant height ceiling with $n$--volume no greater than that of $S(h)$, and therefore will enclose a room of $(1+n)$--volume no greater than that of $B(h)$. A related question is whether or not we can, given an arbitrary ceiling $C$ over $F$, use $\mbox{\rm{Vol}}_{n}(C)$ to provide an upper volume bound for the associated room $R$. \begin{theorem}\label{T:CritPts} Let $W= \mathbb{R} \times_{f} N$ be a warped product space of dimension $1+n$ with $f$ logarithmically convex. 
Choose a vertical fiber $\{0\} \times N$ and assume $f(0)=1$. Suppose $R$ is a room over a finite $n$--volume floor $F$ in this fiber and that the $\mathbb{R}$--coordinate of every point of its ceiling $C$ is nonnegative. Then the $(1+n)$--volume of $R$ can be bounded from above by a function of the $n$--volume of $C$ if and only if $f$ is unbounded on $[0,\infty)$. In this case, we have $\mbox{\rm{Vol}}_{1+n}(R) \leq \mbox{\rm{Vol}}_{n}(C)/\omega$, where $\omega>0$ is equal to either the critical value corresponding to the smallest positive critical point of the function $$\mathcal{I}(h) = \frac{d}{dh} \log \left( \int_{0}^{h} f(t)^{n} \, dt \right),$$ or to $\lim_{h \to \infty} (n f'/f)$ if $\mathcal{I}$ has no positive critical points. \end{theorem} \emph{Proof of \ref{T:CritPts}.} We observe that $\mathcal{I}(h)$ is the ratio of the $n$--volume of a constant height $h$ ceiling $S(h)$ to the $(1+n)$--volume of its room $B(h)$, for any finite $n$--volume floor $F \subset \{0\} \times N$: $$\frac{\mbox{\rm{Vol}}_{n}(S(h))}{\mbox{\rm{Vol}}_{1+n}(B(h))} = \frac{\mbox{\rm{Vol}}_{n}(F)f(h)^{n}}{\mbox{\rm{Vol}}_{n}(F)\int_{0}^{h} f(t)^{n} \, dt} = \frac{f(h)^{n}}{\int_{0}^{h} f(t)^{n} \, dt} = \mathcal{I}(h). $$ Now if $B(h)$ is the constant height $h$ room over $F$ with the same volume as $R$, then Theorem \ref{T:IsopIneq} implies $$\mathcal{I}(h) = \frac{\mbox{\rm{Vol}}_{n}(S(h))}{\mbox{\rm{Vol}}_{1+n}(B(h))} \leq \frac{\mbox{\rm{Vol}}_{n}(C)}{\mbox{\rm{Vol}}_{1+n}(R)}.$$ We therefore need to show that $\mathcal{I}$ is bounded from below by a positive constant if and only if $f$ is unbounded on $[0,\infty)$. Necessity follows from the definition of $\mathcal{I}$. For sufficiency, suppose that $f$ is unbounded on $[0,\infty)$. 
Then by l'H\^{o}pital's rule we have $$\lim_{h\to \infty} \mathcal{I} = \lim_{h\to \infty} nf'/f,$$ and since the logarithmic derivative of $f^{n}$ is nondecreasing (because $f$ is log convex and nonconstant), we have that $\inf \mathcal{I} > 0$. To identify the constant $\omega$: since $\mathcal{I}(h)$ tends to infinity as $h$ tends to zero, the observations above imply that $\mathcal{I}$ is always at least $\lim_{h \to \infty} nf'/f$ if it has no critical points. If $\mathcal{I}$ does have critical points, then the following equation holds at any such point: $$\mathcal{I}= nf'/f.$$ It follows from the fact that $nf'/f$ is nondecreasing that the corresponding critical values are nondecreasing. Any relative minimum value of $\mathcal{I}$ is therefore unique and must occur at the first critical point. In addition, the two preceding equations imply that $\lim_{h \to \infty} nf'/f$ must be no less than the critical value of this critical point. This proves the theorem. \hfill \fbox{\ref{T:CritPts}}\\ Finally, we give some notable examples illustrating the above theorem. \begin{enumerate} \item For $\mathbb{R} \times_{f} \mathbb{R}$, with $f(t) = e^{t^{2}-2\sin t}$, $\mathcal{I}$ tends to infinity and has infinitely many critical values with one global minimum. \item For $\mathbb{H}^{3} = \mathbb{R} \times_{f} \mathbb{H}^{2}$, with $f(t) = \cosh t$, $\mathcal{I}$ achieves a global minimum but then tends to a positive constant. \item For $\mathbb{H}^{2} = \mathbb{R} \times_{f} \mathbb{H}^{1}$, with $f(t) = \cosh t$, $\mathcal{I}$ decreases monotonically to a positive constant. \item For both of the warped products $\mathbb{R}^{2} = \mathbb{R} \times_{1} \mathbb{R}$ and $\mathbb{H}^{2} = \mathbb{R} \times_{e^{-t}} \mathbb{R}$, $\mathcal{I}$ decreases monotonically to zero. \end{enumerate} \bibliographystyle{hyperamsplain}
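The pivotal step of the main proof --- log convexity of $\mu$ makes $x \mapsto \mu(I^{-1}(x))$ convex, so by Jensen's inequality the constant height ceiling has the smallest vertical area density --- can be probed numerically. Below is a minimal sketch in Python (standard library only); the choice $f=\cosh$ with $k=2$, the grid sizes, and the helper names are all our own assumptions, not part of the paper.

```python
import math
import random

K = 2                                   # fiber dimension k

def mu(t):                              # mu(t) = f(t)^k with f = cosh (log convex)
    return math.cosh(t) ** K

def I(h, steps=2000):                   # I(h) = integral of mu over [0, h] (midpoint rule)
    dt = h / steps
    return sum(mu((i + 0.5) * dt) for i in range(steps)) * dt

def I_inv(x):                           # invert the strictly increasing I by bisection
    lo, hi = 0.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if I(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
for _ in range(20):
    hs = [random.uniform(0.0, 3.0) for _ in range(4)]      # step heights h_i
    ws = [random.random() for _ in range(4)]
    total = sum(ws)
    ws = [w / total for w in ws]                           # weights Vol(F_i)/Vol(F)
    H = I_inv(sum(w * I(h) for w, h in zip(ws, hs)))       # constant height H
    # mu(H) <= sum_i w_i mu(h_i): vertical area density of S vs. the step ceiling
    assert mu(H) <= sum(w * mu(h) for w, h in zip(ws, hs)) + 1e-3
```

The tolerance absorbs quadrature and bisection error; the analytic inequality itself is exact, with equality only when all the $h_i$ coincide or $\mu$ fails to be strictly log convex.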
https://arxiv.org/abs/2105.01575
A systematic and complete proof of the existence and uniqueness of self-descriptive numbers
All the already known results on self-descriptive numbers, together with a demonstration of uniqueness for bases greater than 6, are obtained here through a systematic scheme of proof rather than by trial and error. The proof is also complete, since all the possible cases have been taken into account.
\section{Introduction} The subject of \textit{self-descriptive numbers}, as it will be defined and discussed in the following, has received some interest from mathematicians \cite{libro1}, \cite{libro2},\cite{articolo1},\cite{articolo2}. A list of these numbers expressed in the base $b=10$ is found on the OEIS \cite{link lista}.\\ \indent Part of the mathematical interest in the subject is that although the existence and numerousness of these numbers are very well known for bases $b\leq 6$, and the existence of at least one self-descriptive number has been recognized in each base $b\geq 7$, the possible uniqueness in these greater bases has remained unproven.\\ \indent The acknowledged results, however, seem to have been obtained mostly through direct checks and trial-and-error procedures, which are easy for smaller bases. On the internet, several amateur algorithms can be found that check for the possible existence of multiple self-descriptive numbers in greater bases.\\ \indent In this work instead, the already known results, together with the demonstration of the hitherto unproven uniqueness, are all obtained through a systematic scheme of proof that applies the idea of restricted partition of an integer. \newpage \subsection{Self-descriptive numbers} \begin{definition} A number $n$ of $b$ digits in some base $b\geq 2$, $n=\displaystyle{\sum_{i=0}^{b-1}}\,j_i\,b^{(b-1-i)},\, 0\leq j_i\leq b-1$; represented by the ordered list of its digits $j_i$ at the $b$ positions $p_i,\, i=0,1,\dots,b-1$;\\ is self-descriptive \emph{iff} the digit $j_i$ counts how many times the digit $i$ occurs in $n$. \end{definition} Examples \begin{itemize} \item[*] In the base $b=4$ the number $2020$ is self-descriptive, for there are in it: \begin{itemize} \item[-] $2$ instances of the digit “$0$”; \item[-] $0$ instances of the digit “$1$”; \item[-] and so on \end{itemize} \item[*] In the base $b=10$ the number $6210001000$ is self-descriptive and unique. 
\end{itemize} The following trivially holds \begin{lemma} In any base $b$, the sum of all the digits of a self-descriptive number (if one exists) is $b$. \end{lemma} \textit{Autobiographical numbers} in a base $b$, which are related to self-descriptive numbers, are also considered in the literature (\cite{articolo1}). These are just all the self-descriptive numbers in bases up to and including $b$, expressed in the base $b$. \subsection{State of the art} It is known that \begin{itemize} \item there are no self-descriptive numbers in the bases $b=2,3,6$; \item in $b=4$ the numbers $2020$ and $1210$ are self-descriptive and unique; \item in $b=5$ the number $21200$ is self-descriptive and unique; \item in any $b\geq 7$, the number whose entries are analogous to those of $6210001000$ is self-descriptive; it is the number \begin{equation} \label{general} (b-4)b^{b-1}+(2)b^{b-2}+(1)b^{b-3}+(1)b^3; \end{equation} \end{itemize} Leaving aside direct checks (proving uniqueness) for the smaller bases $\geq 7$, it has hitherto been unknown whether the numbers of the kind of (\ref{general}) are the unique self-descriptive numbers in \emph{every} $b>7$. \subsection{Results here proven} All the former results are proven here in an original, systematic way, that is, not through trial-and-error procedures. Through the same approach, it is also proven here that\\ \emph{A number of the kind of (\ref{general}) is the unique self-descriptive number in each base $b\geq 7$}. 
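The state of the art, and the uniqueness claim for the first few bases, can be confirmed by brute force; here is a minimal sketch in Python (the function names are ours, and the search uses the lemma above --- the digits of a self-descriptive number must sum to $b$ --- to prune cheaply):

```python
from itertools import product

def is_self_descriptive(digits):
    # digits[i] must equal the number of occurrences of the digit i
    return all(d == digits.count(i) for i, d in enumerate(digits))

def all_self_descriptive(b):
    # Brute force over b-digit strings in base b with nonzero leading digit;
    # the digit-sum lemma discards almost all candidates before full checking.
    return [d for d in product(range(b), repeat=b)
            if d[0] != 0 and sum(d) == b and is_self_descriptive(d)]

def general(b):
    # Digit list of (b-4)b^{b-1} + 2b^{b-2} + b^{b-3} + b^3, valid for b >= 7
    d = [0] * b
    d[0], d[1], d[2], d[b - 4] = b - 4, 2, 1, 1
    return tuple(d)

assert all_self_descriptive(2) == [] and all_self_descriptive(3) == []
assert all_self_descriptive(4) == [(1, 2, 1, 0), (2, 0, 2, 0)]
assert all_self_descriptive(5) == [(2, 1, 2, 0, 0)]
assert all_self_descriptive(6) == []
assert all_self_descriptive(7) == [general(7)]        # unique in base 7
assert all(is_self_descriptive(general(b)) for b in range(7, 16))
```

For bases $8$ and beyond the plain brute force grows as $b^b$ and quickly becomes impractical, which is precisely why a systematic proof covering all $b\geq 7$ at once is of interest.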
\section{Theorems and proofs} Given some first entry $J \equiv j_0$, that is, the number of instances of “$0$” in $n$, \[J= b-m,\quad 1\leq m < b \quad (J=0 \text{ is inconsistent, hence } m \neq b),\] there are then $(b-J-1)=(m-1)$ empty positions left in the list of the digits, that is, $(m-1)$ parts over which a restricted partition of the integer $m$ must be distributed.\\ It is then immediate that $m\neq1$, for otherwise there would be no empty position left.\\ \indent This already implies \begin{theorem} \label{teorema due} There are no self-descriptive numbers in the base $b=2$. \end{theorem} There is then only one kind of restricted partition of $m\geq 2$ into $m-1$ parts, namely \begin{equation}\label{part} m=2+\underbrace{1+\dots+1}_{m-2\ \mathrm{times}}\quad (\text{the unit parts occurring iff } m\geq 3).\end{equation} Thus, apart from “$J$” at the position $p_0$ and the $J$ instances of “$0$”,\\ all the other digits in a self-descriptive number must be: \begin{itemize} \item[] “$2$” in exactly $1$ instance; and \item[] “$1$” in possibly several instances, yet in any case not more than $2$. \end{itemize} Thus $ j_1 = 0,1,2 $, and these three cases, and only these, will now be discussed. \subsection{Case 1: $j_1=0$} There are no instances of ``1'' in the number, thus because of formula (\ref{part}) \begin{itemize} \item[] $m=2$, and “$2$” is the only nonzero entry at the positions $p_{(i>0)}$. \end{itemize} Consequently $j_J=2$, as “$J$” must have at least $1$ instance. This means that “$J$” occurs twice and, as the only nonzero entries are “$J$” and “$2$”, $J=2$. Thus $b=4$, and \begin{theorem}\label{teorema quattro primo} The number $2020$ is self-descriptive in the base $b=4$.\end{theorem} \subsection{Case 2: $j_1=1$} Again $j_J=2$ and hence $J=2$, as in Case 1.\\ This implies \begin{theorem}\label{teorema cinque} The number $21200$ is self-descriptive in the base $b=5$.\end{theorem} \subsection{Case 3: $j_1=2$}
Then $j_2=1$, for $p_2$ has a nonzero entry, and it can only be the digit “$1$”; thus \paragraph{Subcase 3.1: $J=1$.}\\ \indent Then $p_J\equiv p_1$ and, as all the nonzero entries have already been accounted for, $b=4$ and \begin{theorem}\label{teorema quattro secondo} The number $1210$ is self-descriptive in the base $b=4$. \end{theorem} \paragraph{Subcase 3.2: $J\neq 1$.} \\ \indent Then $p_J\not\equiv p_1$, and $J \neq 2$ since $j_2=1$ and “$2$” already occurs at the position $p_1$;\\ thus $J=(b-4)$, for $j_1=2,\;j_2=1,\;j_{J}=1$ (with $J\neq 1,2$) give $j_1+j_2+j_J=m=4$,\\ and $b > 4$, yet $b\neq5$ (since $J\neq 1$) and $b\neq 6$ (since $J\neq 2$).\\ Together with the previously deduced results, this implies \begin{theorem}\label{teorema tre e sei} There are no self-descriptive numbers in the bases $b=3,\,6$. \end{theorem} And \begin{theorem}\label{teorema del generale} The number $(b-4)b^{b-1}+(2)b^{b-2}+(1)b^{b-3}+(1)b^3$ is self-descriptive in each base $b \geq 7$. \end{theorem} \subsection{Concluding uniqueness} Finally, as $j_1$ cannot assume any value other than $0,1,2$ and all the cases have already been considered, it is thus proven that \begin{theorem}\label{teorema dell'unicità del generale} The number $$(b-4)b^{b-1}+(2)b^{b-2}+(1)b^{b-3}+(1)b^3$$ is the unique self-descriptive number in each base $b\geq7$. \end{theorem} \vfill \section{Conclusions} A systematic scheme of proof making simple use of the idea of a restricted partition of an integer has been applied to give a complete proof of the existence and uniqueness of self-descriptive numbers.\\ \indent The results are listed here, with references to the theorems in which they are proven. \begin{itemize} \item[(\ref{teorema due},\ref{teorema tre e sei})] There are no self-descriptive numbers in the bases $b=2,3,6$. \item[(\ref{teorema quattro primo}, \ref{teorema quattro secondo})] In the base $b=4$ the only two self-descriptive numbers are $2020$ and $1210$.
\item[(\ref{teorema cinque})] In the base $b=5$ the unique self-descriptive number is $21200$. \item[ (\ref{teorema del generale},\ref{teorema dell'unicità del generale})] In each base $b\geq 7$ the number \[ (b-4)b^{b-1}+(2)b^{b-2}+(1)b^{b-3}+(1)b^3\] is self-descriptive and unique. \end{itemize} \newpage
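As an independent cross-check of the case analysis, the lemma makes an exhaustive search feasible: only the compositions of $b$ into $b$ nonnegative parts need to be tested. A brute-force sketch (helper names are ours; the search is practical only for moderate $b$):

```python
from itertools import combinations

def compositions(total, parts):
    """All lists of `parts` nonnegative integers summing to `total`
    (stars and bars)."""
    for bars in combinations(range(total + parts - 1), parts - 1):
        prev, out = -1, []
        for bar in bars:
            out.append(bar - prev - 1)
            prev = bar
        out.append(total + parts - 2 - prev)
        yield out

def self_descriptive_in_base(b):
    """By the lemma the digits sum to b, so it suffices to scan the
    compositions of b into b parts (each part must be a valid digit)."""
    return [d for d in compositions(b, b)
            if all(x < b for x in d) and
            all(d[i] == d.count(i) for i in range(b))]

for b in range(2, 8):
    print(b, self_descriptive_in_base(b))
```

The search reproduces the lists above: none for $b=2,3,6$, two solutions for $b=4$, and a single solution for $b=5$ and $b=7$.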
https://arxiv.org/abs/1608.08665
A-posteriori snapshot location for POD in optimal control of linear parabolic equations
In this paper we study the approximation of an optimal control problem for linear para\-bolic PDEs with model order reduction based on Proper Orthogonal Decomposition (POD-MOR). POD-MOR is a Galerkin approach where the basis functions are obtained upon information contained in time snapshots of the parabolic PDE related to given input data. In the present work we show that for POD-MOR in optimal control of parabolic equations it is important to have knowledge about the controlled system at the right time instances. We propose to determine the time instances (snapshot locations) by an a-posteriori error control concept. This method is based on a reformulation of the optimality system of the underlying optimal control problem as a second order in time and fourth order in space elliptic system which is approximated by a space-time finite element method. Finally, we present numerical tests to illustrate our approach and to show the effectiveness of the method in comparison to existing approaches.
\section{Introduction} Optimization with PDE constraints is nowadays a well-studied topic motivated by its relevance in industrial applications. We are interested in the numerical approximation of such optimization problems in an efficient and reliable way using surrogate models obtained with POD-MOR. The surrogate models are built upon {\em snapshots} of the system to provide information about the underlying problem. This stage is usually called the offline stage. For the snapshot POD approach we refer the reader to \cite{Sir87}.\\ \noindent Several works focus their attention on the choice of the snapshots, in order to approximate either dynamical systems or optimal control problems by suitable surrogate models. In \cite{KV10}, it is proposed to optimize the choice of the time instances such that the error between the POD approximation and the trajectory of the dynamical system is minimized. A recent approach proposes to choose the snapshots by an a-posteriori error estimator in order to equidistribute the state error on the time grid related to the snapshot locations (see \cite{HL14}). We also mention an adaptive method, proposed in \cite{OKAC15}, where the aim is to reduce expensive offline costs by selecting the snapshots according to an adaptive time-stepping algorithm using time error-control. For further references we refer the interested reader to \cite{OKAC15}.\\ In optimal control problems the reduced model is usually built upon a forecast of the control. This approach does not guarantee a proper construction of the surrogate model, since we do not know how far away the optimal solution is from the reference control. More sophisticated approaches improve the snapshot selection by solving an optimization problem tailored to the desired controlled dynamics. For this purpose the optimality system for POD (OS-POD) approach is introduced in \cite{KV08}.
In OS-POD, the computation of the basis functions is performed by means of the solution of an enlarged optimal control problem which involves the full problem, the reduced equation and the eigenvalue problem for the POD modes.\\ The reduction of optimal control problems with particular focus on adaptive adjustments of the surrogate models can be found in \cite{AH01,AFS02}. We should also mention another adaptive method for feedback control problems by means of the Hamilton-Jacobi-Bellman equation, introduced in \cite{AF12}.\\ Recently, an a-posteriori error estimator was introduced in \cite{TV09,KTV13} for optimal control problems. In these works the error between the unknown optimal and the computed POD suboptimal control is estimated for linear and nonlinear problems, and it is shown that increasing the number of basis functions leads to the desired convergence. OS-POD and a-posteriori error estimation are combined in \cite{V11}.\\ All these works have in common that they compute basis functions for optimal control problems. In our paper we address the question of an efficient and suitable selection of snapshot locations by means of an a-posteriori error control approach proposed in \cite{GHZ12}. We rewrite the optimality conditions as a second order in time and fourth order in space elliptic equation for the adjoint variable and we generalize this approach to control constraints. In particular, a time adaptive concept is used to build the snapshot grid on which the POD surrogate model for the approximate solution of the optimal control problem is constructed. Here the novelty for the reduced control problem is twofold: we directly obtain snapshots related to an approximation of the optimal control and, at the same time, we get information about the time grid. We have proposed a similar approach based on a reformulation of the optimality system with respect to the state variable in \cite{AGH15}.
Now, we focus our approach on the adjoint variable and generalize the idea presented in \cite{GHZ12} to time dependent control intensities with control shape functions including control constraints. Furthermore, we certify our approach by means of several error bounds for the state, adjoint state and control variable.\\ The outline of this paper is as follows. In Section 2 we present the optimal control problem together with the optimality conditions. In Section 3 we recall the main results of \cite{GHZ12}. Proper Orthogonal Decomposition and its application to optimal control problems are presented in Section 4. The focus of Section 5 lies in investigating our snapshot location strategy. Finally, numerical tests are discussed in Section 6 and conclusions are drawn in Section 7. \section{Optimal Control Problem} In this section we describe the optimal control problem. The governing equation is given by a linear parabolic PDE: \begin{equation}\label{heat} \left. \begin{array}{rcll} y_t-\Delta y & = & f+\mathcal{B}u &\text{ in } \Omega_T,\\ y(\cdot,0) & = & y_0 & \text{ in } \Omega,\\ y &= & 0 &\text{ on } \Sigma_T, \end{array} \right\} \end{equation} \noindent where $\Omega\subset\mathbb R^q, q \in \{1,2,3\}$ is an open bounded domain with smooth boundary, $T>0$, $\Omega_T:=\Omega\times (0,T]$ is the space-time cylinder, $\Sigma_T:=\partial\Omega\times (0,T]$, and the state is denoted by $ y:\Omega_T\rightarrow\mathbb R$. As control space we use $\left(L^2(0,T;\mathbb{R}^m), \langle \cdot,\cdot \rangle_U\right)$, where $\langle u,v\rangle_U:=\sum_{i=1}^m \langle u_i,v_i\rangle_{L^2(0,T)}$, and define the control operator as $\mathcal{B}:U \rightarrow L^2(0,T;H^{-1}(\Omega))$, $(\mathcal{B}u)(t) = \sum_{i=1}^m u_i(t) \chi_i$, where $\chi_i \in H^{-1}(\Omega)$ $(1 \leq i \leq m)$ denote specified control actions. Thus $\mathcal{B}$ is linear and bounded.
For the control variable we require $$u\in U_{ad} := \{u\in U \; | \; u_a(t) \leq u(t) \leq u_b(t) \text{ in } \mathbb{R}^m \text{ a.e. in } [0,T] \} \subset L^\infty(0,T;\mathbb{R}^m)$$ with $u_a, u_b \in L^\infty(0,T;\mathbb{R}^m), u_a(t) \leq u_b(t)$ almost everywhere in $(0,T)$. It is well-known (see \cite{L71}, for example) that for a given initial condition $y_0\in L^2(\Omega)$ and a forcing term $f\in L^2(0,T;H^{-1}(\Omega))$ the equation \eqref{heat} admits a unique solution $y=y(u)\in W(0,T)$, where $$W(0,T):=\left\{v\in L^2\left(0,T;H^1_0(\Omega)\right), \dfrac{\partial v}{\partial t}\in L^2\left(0,T;H^{-1}(\Omega)\right)\right\}.$$ If $y_0 \in H_0^1(\Omega)$, higher regularity results can be derived according to \cite{E10}. We also note that the unconstrained case is related to $u_a\equiv-\infty, u_b\equiv+\infty$. \noindent The weak formulation of \eqref{heat} is given by: find $y \in W(0,T)$ with $y(0)=y_0$ and \begin{equation}\label{weak:heat} \int_\Omega y_t(t)v dx + \int_\Omega \nabla y(t) \cdot \nabla v dx =\int_\Omega (f+\mathcal{B}u)(t) v dx \quad \forall v \in H_0^1(\Omega). \end{equation} \noindent The cost functional we want to minimize is given by \begin{equation} J(y,u):=\dfrac{1}{2} \|y-y_d\|^2_{L^2(\Omega_T)}+\dfrac{\alpha}{2} \|u\|^2_U, \end{equation} where $y_d\in L^2(\Omega_T)$ is the desired state and the regularization parameter $\alpha$ is a real positive constant. The optimal control problem then reads \begin{equation}\label{ocp} \min_{u\in U_{ad}} \hat{J}(u):=J(y(u),u) \mbox{, where } y(u) \mbox{ satisfies } \eqref{heat}. \end{equation} Note that $U_{ad}$ is a non-empty, bounded, convex and closed subset of $L^\infty(0,T;\mathbb{R}^m)$. Hence, it is easy to argue that \eqref{ocp} admits a unique solution $u \in U$ with associated state $y(u)\in W(0,T)$, see e.g. \cite{L71}. 
\noindent The first order optimality system of the optimal control problem \eqref{ocp} is given by the state equation \eqref{heat}, together with the adjoint equation \begin{equation}\label{adj} \left. \begin{array}{rcll} -p_t-\Delta p & = & y-y_d &\text{ in } \Omega_T,\\ p(\cdot,T) & = & 0 &\text{ in } \Omega,\\ p & = & 0 &\text{ on } \Sigma_T, \end{array} \right\} \end{equation} and the variational inequality \begin{equation}\label{opt_con} \langle\alpha u + \mathcal{B}^*p,v-u \rangle_U \geq 0\quad \mbox{ for all }v \in U_{ad}, \end{equation} \noindent where $\mathcal{B}^* : L^2(0,T;H^{-1}(\Omega))^* \to U^*$ is the dual operator of $\mathcal{B}$. In \eqref{opt_con} we have identified $L^2(0,T;H^{-1}(\Omega))^*$ with $L^2(0,T; H^1_0(\Omega))$ and $U^*$ with $U$, where we use that Hilbert spaces are reflexive. The variational inequality \eqref{opt_con} is equivalent to the projection formula \begin{equation}\label{Proj} u(t)=\mathcal{P}_{U_{ad}}\left\lbrace -\dfrac{1}{\alpha}(\mathcal{B}^*p)(t) \right\rbrace \mbox{ for almost all } t\in[0,T], \end{equation} where $\mathcal{P}_{U_{ad}}:U\rightarrow U_{ad}$ denotes the orthogonal projection onto $U_{ad}$. It follows from the reflexivity of the involved spaces that the action of the adjoint operator $\mathcal{B}^*$ is given as $$(\mathcal{B}^*v)(t)=\left(\langle \chi_1,v\rangle_{H^{-1},H^1_0},\ldots, \langle \chi_m,v\rangle_{H^{-1},H^1_0}\right)$$ and \begin{equation*} \mathcal{P}_{U_{ad}}\left\lbrace-\frac{1}{\alpha} \mathcal{B}^*p\right\rbrace_i = \max \left\{u_a , \min \{ u_b , -\frac{1}{\alpha} \langle \chi_i , p \rangle_{H^{-1},H_0^1}\} \right\}. 
\end{equation*} Since our domain is smooth, the regularity of the optimal state, the optimal control and the associated adjoint state is limited by the regularity of the initial state $y_0$, the right hand side $f$, the control $\mathcal{B}u$ and the desired state $y_d$.\\ \noindent The numerical approximation of the optimality system \eqref{heat}-\eqref{adj}-\eqref{opt_con} with a standard Finite Element Method (FEM) in the spatial variable leads to a high-dimensional system of ordinary differential equations: \begin{equation}\label{opt_disc} \left. \begin{array}{rclrcl} M\dot{y}^N+ A y^N & = & f^N+ \mathcal{B}^Nu, & \quad y^N(0) & = & y_0^N,\\ -M\dot{p}^N+A p^N & = & y^N-y^N_d, & \quad p^N(T) & = & 0,\\ \langle \alpha u + (\mathcal{B}^*)^Np^N, v-u \rangle_\mathcal{U} & \geq & 0. & & & \end{array} \right\} \end{equation} \noindent Here $y^N, p^N:[0,T]\rightarrow\mathbb R^N$ are the semi-discrete state and adjoint, respectively, $\dot{y}^N, \dot{p}^N$ are the time derivatives, $M\in\mathbb R^{N\times N}$ denotes the mass matrix and $A\in\mathbb R^{N\times N}$ the stiffness matrix. Note that the dimension $N$ of each equation in the semi-discrete system \eqref{opt_disc} is related to the number of element nodes chosen in the FEM approach.\\ \section{Space-Time approximation} In this section, we consider the reformulation of the optimality system \eqref{heat}-\eqref{adj}-\eqref{opt_con} as an elliptic equation of fourth order in space and second order in time for the adjoint variable $p$. This is carried out for the unconstrained control problem in \cite{GHZ12} and generalized to control constrained optimal control problems in \cite{NPS11}. Following these works, we include control constraints.
Here, we aim to derive an a-posteriori error estimate for the time discretization as suggested in \cite{GHZ12}, which then turns out to be the basis for our model reduction approach to solve \eqref{ocp}.\\ \noindent We define $$H^{2,1}_0(\Omega_T):=\left\{v\in H^{2,1}(\Omega_T): v(T)=0 \mbox{ in }\Omega\right\},$$ where $$H^{2,1}(\Omega_T)=L^2\left(0,T;H^2(\Omega)\cap H^1_0(\Omega)\right)\cap H^1\left(0,T; L^2(\Omega)\right)$$ is equipped with the norm $$\|w\|_{H^{2,1}(\Omega_T)}^2:=\left(\|w\|^2_{L^2(0,T;H^2(\Omega))}+\|w\|^2_{H^1(0,T;L^2(\Omega))}\right).$$ \noindent Under the assumptions $y_0 \in H_0^1(\Omega)$, $\chi_i \in L^2(\Omega)$ for $i=1, \dotsc, m$ and $y_d \in H^{2,1}(\Omega_T)$, the regularity of $y,p \in H^{2,1}(\Omega_T)$ is ensured, see \cite{E10} for the details. Then, the first order optimality conditions \eqref{heat}-\eqref{adj}-\eqref{opt_con} can be transformed into an initial boundary value problem for $p$ in space-time: \begin{equation}\label{2ordp} \left. \begin{array}{rcll} -p_{tt}+\Delta^2 p-\mathcal{B}\mathcal{P}_{U_{ad}}\left(-\dfrac{1}{\alpha}\mathcal{B}^*p\right) & = &-(y_d)_t+\Delta y_d & \text{ in } \Omega_T,\\ p(\cdot, T) & = & 0 &\text{ in } \Omega,\\ p & = & 0 &\text{ on } \Sigma_T,\\ \Delta p & = & y_d &\text{ on } \Sigma_T,\\ \left(p_t+\Delta p\right)(0) & = & y_d(0)-y_0 &\text{ in }\Omega, \end{array} \right\} \end{equation} \noindent where, without loss of generality, we have set $f\equiv0$. We note that the quantity $$\mathcal{B}\mathcal{P}_{U_{ad}}\left(-\dfrac{1}{\alpha}\mathcal{B}^*p\right)$$ is nondifferentiable and nonlinear in $p$ and thus \eqref{2ordp} becomes a semilinear second order in time and fourth order in space elliptic problem with a monotone nonlinearity.
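Numerically, this nonlinearity is cheap to evaluate: by the projection formula \eqref{Proj} it amounts to a componentwise clipping of $-\frac{1}{\alpha}\langle\chi_i,p\rangle$ to the box $[u_a,u_b]$. A minimal NumPy sketch with hypothetical values of the pairings $\langle\chi_i,p\rangle$ (all names and data here are illustrative, not from the paper's implementation):

```python
import numpy as np

def project_control(Bstar_p, alpha, u_a, u_b):
    """Pointwise evaluation of P_{U_ad}{-(1/alpha) B* p}: clip the
    components -(1/alpha)<chi_i, p> to the box [u_a, u_b]."""
    return np.clip(-np.asarray(Bstar_p) / alpha, u_a, u_b)

# hypothetical pairings <chi_i, p(t)> for m = 3 control shape functions
u = project_control([2.0, -0.5, 10.0], alpha=0.1, u_a=-5.0, u_b=5.0)
print(u)
```

The unconstrained case $u_a\equiv-\infty$, $u_b\equiv+\infty$ simply drops the clipping.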
Existence of a unique weak solution for \eqref{2ordp} can be proved analogously to \cite{NPS11} and follows from the fact that the optimal control problem \eqref{ocp} in the case of control constraints with closed and convex $U_{ad} \subset U$ admits a unique solution.\\ In order to provide the weak formulation of \eqref{2ordp}, we define the operator $A_0$ and the linear form $L_0$ as $$A_0:H^{2,1}_0(\Omega_T)\times H^{2,1}_0(\Omega_T)\rightarrow\mathbb R,\qquad L_0:H^{2,1}_0(\Omega_T)\rightarrow\mathbb R,$$ $$A_0(v,w):=\int_{\Omega_T} \left( v_tw_t-\mathcal{B}\mathcal{P}_{U_{ad}} \left(-\dfrac{1}{\alpha}\mathcal{B}^* v \right)w\right)+\int_{\Omega_T}\Delta v\Delta w +\int_\Omega \nabla v(0) \nabla w(0),$$ $$ L_0(v):=\int_{\Omega_T} \langle -\dfrac{\partial y_d}{\partial t}+\Delta y_d,v\rangle_{H^{-1}(\Omega)\times H^1_0(\Omega)}-\int_\Omega (y_d(0)-y_0)v(0)+\int_{\Sigma_T} y_d \nabla v \cdot \hat{n},$$ where $\hat{n}$ denotes the outer normal to the boundary $\partial\Omega$. The weak formulation of equation \eqref{2ordp} for given $y_d\in H^{2,1}(\Omega_T),\, y_0\in H^1_0(\Omega),$ reads: \begin{equation}\label{weak_2ord} \mbox{ find } p\in H_0^{2,1}(\Omega_T) \mbox{ with }A_0(p,v)=L_0(v)\quad \forall v\in H^{2,1}_0(\Omega_T). \end{equation} \noindent It follows from the monotonicity of the orthogonal projection that \eqref{weak_2ord} admits a unique solution $p$, compare e.g. \cite[Th. 1.25]{HPUU09}. We now turn our attention to the semi-discrete approximation of \eqref{2ordp} and investigate a-priori and a-posteriori error estimates for the time discrete problem, where the space is kept continuous. Let us consider the time discretization $0=t_0<t_1<\ldots<t_n=T$ with $\Delta t_j=t_j-t_{j-1}$ and $\Delta t:=\max_j \Delta t_j$. Let $I_j:=[t_{j-1},t_j]$.
We define the time discrete space $$V_t^k:=\left\{v\in H^{2,1}(\Omega_T): \; v(\cdot)|_{I_j} \in P_1(I_j)\right\}, \qquad \bar{V}_t^k:=V_t^k\cap H^{2,1}_0(\Omega_T),$$ where the notation $P_1(I_j)$ stands for the polynomials of first order on the interval $I_j$. Then, we consider the semi-discrete problem: \begin{equation}\label{weak_dis} \mbox{find }p_k\in \bar{V}_t^k \mbox{ with } A_0(p_k,v_k)=L_0(v_k),\quad \forall v_k\in \bar{V}_t^k. \end{equation} Using the arguments of e.g. \cite[Th. 1.25]{HPUU09} one can show that problem \eqref{weak_dis} admits a unique solution $p_k \in \bar{V}_t^k$.\\ We note that with \eqref{weak_2ord} and \eqref{weak_dis} we have the Galerkin orthogonality \begin{equation}\label{GO} A_0(p,v_k) - A_0(p_k,v_k) = 0 \quad \forall v_k \in \bar{V}_t^k. \end{equation} Thus, for $v \in H_0^{2,1}(\Omega_T)$ it holds true \begin{equation*} \begin{array}{r c l} A_0(p,v) - A_0(p_k,v) & = & A_0(p,v-v_k) - A_0(p_k,v-v_k) \quad \forall v_k \in \bar{V}_t^k.\\ \end{array} \end{equation*} The following Theorem states a temporal residual type a-posteriori error estimate for $p$, which transfers the estimation of \cite[Theorem 3.5]{GHZ12} to the control constrained optimal control problem \eqref{ocp}: \begin{teo}\label{thm:apost} Let $p\in H_0^{2,1}(\Omega_T)$ and $p_k\in \bar{V}^k_t$ denote the solutions to \eqref{weak_2ord} and \eqref{weak_dis}, respectively. Then we obtain \begin{equation}\label{est-thm31} \|p-p_k\|_{H^{2,1}(\Omega_T)}^2\leq C_1\eta^2, \end{equation} where $C_1>0$ and $$\eta^2=\sum_j \Delta t_j^2 \int_{I_j} \left\| -\dfrac{\partial y_d}{\partial t}+\Delta y_d+ \dfrac{\partial^2 p_k}{\partial t^2} +\mathcal{B}\mathcal{P}_{U_{ad}}\left(-\dfrac{1}{\alpha}\mathcal{B}^*p_k\right)-\Delta^2 p_k \right\|^2_{L^2(\Omega)} + \sum_j \int_{I_j} \|y_d-\Delta p_k\|_{L^2(\partial \Omega)}^2.$$ \end{teo} \textit{Proof.} We start the proof showing a consequence of the monotonicity of the projector operator $-\mathcal{P}_{U_{ad}}\{-\mathcal{B}^*p\}$. 
We find that $$\left\langle - \mathcal{P}_{U_{ad}}\left\{-\frac{1}{\alpha} \mathcal{B}^* p_1 \right\} + \mathcal{P}_{U_{ad}}\left\{-\frac{1}{\alpha} \mathcal{B}^* p_2 \right\}, \mathcal{B}^*p_1 - \mathcal{B}^*p_2 \right\rangle_U \geq 0,\quad \forall p_1,p_2\in H^{2,1}_0(\Omega_T),$$ and hence \begin{equation}\label{monotonicity} \int_{\Omega_T} \left( - \mathcal{B} \mathcal{P}_{U_{ad}} \left\{ -\frac{1}{\alpha} \mathcal{B}^* p_1 \right\} + \mathcal{B}\mathcal{P}_{U_{ad}} \left\{ -\frac{1}{\alpha} \mathcal{B}^* p_2 \right\} \right)(p_1 - p_2) \geq 0 . \end{equation} \noindent For easier notation, we set $N(p):= -\mathcal{B}\mathcal{P}_{U_{ad}} \left\{ -\frac{1}{\alpha} \mathcal{B}^*p \right\}$.\\ Let $e^p:= p - p_k$ and let $\pi_k e^p$ denote the standard Lagrange type temporal interpolation of $e^p$. Using the inequality $$ \| v \|_{H^{2,1}(\Omega_T)}^2 \leq C \left(\| \frac{\partial v}{\partial t} \|^2_{L^2(\Omega_T)} + \| \Delta v \|_{L^2(\Omega_T)}^2\right)$$ for $v \in H_0^{2,1}(\Omega_T)$ and $C > 0$ from \cite[Lemma 2.5]{GHZ12}, the monotonicity \eqref{monotonicity} and the Galerkin orthogonality \eqref{GO}, we can estimate: \begin{equation*} \begin{array}{r c l} & & c \| p - p_k \|_{H^{2,1}(\Omega_T)}^2 \\[2ex] & \leq & \left\| \displaystyle\frac{\partial (p-p_k)}{\partial t} \right\|_{L^2(\Omega_T)}^2 + \| \Delta (p-p_k) \|_{L^2(\Omega_T)}^2 \\ & \leq & \left\| \displaystyle\frac{\partial (p-p_k)}{\partial t} \right\|_{L^2(\Omega_T)}^2 + \| \Delta (p-p_k) \|_{L^2(\Omega_T)}^2 + \displaystyle\int_{\Omega_T} (N(p)-N(p_k))(p-p_k) \\[2ex] & = & \displaystyle\int_{\Omega_T} \displaystyle\frac{\partial (p-p_k)}{\partial t} \frac{\partial e^p}{\partial t} + \int_{\Omega_T} \Delta (p-p_k) \Delta e^p + \int_{\Omega_T}(N(p) - N(p_k))e^p \\[2ex] & = & \displaystyle\int_{\Omega_T} \displaystyle\frac{\partial (p-p_k)}{\partial t} \frac{\partial (e^p-\pi_k e^p)}{\partial t} + \int_{\Omega_T} \Delta (p-p_k) \Delta (e^p-\pi_k e^p) + \int_{\Omega_T}(N(p) - N(p_k))(e^p - 
\pi_k e^p) \\[2ex] & = & \displaystyle\int_{\Omega_T} (-\frac{\partial y_d}{\partial t} + \Delta y_d)(e^p - \pi_k e^p) + \int_{\Sigma_T} y_d \nabla(e^p - \pi_k e^p) \cdot \hat{n} - \int_{\Omega_T} \frac{\partial p_k}{\partial t} \frac{\partial (e^p - \pi_k e^p)}{\partial t}\\[2ex] & & - \displaystyle\int_{\Omega_T} \Delta p_k \Delta (e^p - \pi_k e^p)- \int_{\Omega_T}N(p_k)(e^p - \pi_k e^p) \end{array} \end{equation*} Integration by parts on each time interval and Green's formula lead to \begin{equation*} \begin{array}{r c l} & & c \| p - p_k \|_{H^{2,1}(\Omega_T)}^2 \\[2ex] & \leq & \displaystyle\sum_j \int_{I_j} \int_\Omega (- \frac{\partial y_d}{\partial t} + \Delta y_d + \frac{\partial^2 p_k}{\partial t^2} - \Delta^2 p_k - N(p_k))(e^p - \pi_k e^p) + \sum_j \int_{I_j} \int_{\partial \Omega} (y_d - \Delta p_k) \nabla (e^p - \pi_k e^p) \cdot \hat{n}. \end{array} \end{equation*} Utilizing error estimates of the Lagrange interpolation $\pi_k$, the trace inequality and Young's inequality, we find \begin{equation*} \begin{array}{r c l} \hspace*{1cm} & & \| p - p_k \|_{H^{2,1}(\Omega_T)}^2 \\[2ex] & \leq & C_1 \displaystyle\sum_j \Delta t_j^2 \int_{I_j} \left\| - \frac{\partial y_d}{\partial t} + \Delta y_d + \frac{\partial^2 p_k}{\partial t^2} - \Delta^2 p_k + \mathcal{B} \mathcal{P}_{U_{ad}} \left\{-\frac{1}{\alpha} \mathcal{B}^* p_k \right\} \right\|_{L^2(\Omega)}^2 \\[2ex] & & + C_1 \displaystyle\sum_j \int_{I_j} \| y_d - \Delta p_k \|_{L^2(\partial \Omega)}^2. \hspace{9.5cm} \square \end{array} \end{equation*} Theorem \ref{thm:apost} provides a tool to refine the time grid by means of the residual of the system \eqref{2ordp}. Due to \eqref{Proj}, the time instances of this grid may be regarded as ideal snapshot locations for POD-MOR applied to problem \eqref{ocp}. \section{POD for optimal control problems} In this section, we recall the POD method which we use in order to replace the original problem \eqref{ocp} by a surrogate model. 
The main interest when applying the POD method is to reduce computation times and storage capacity while retaining a satisfying approximation quality. This is possible due to the key fact that POD basis functions (unlike typical finite element ansatz functions) contain information about the underlying model, since the POD modes are derived from snapshots of a solution data set. For this reason it is important to use rich snapshot ensembles reflecting the dynamics of the modeled system. Usually, we are able to improve the accuracy of a POD suboptimal solution by enlarging the number of utilized POD basis functions or enriching the snapshot ensemble, for instance. The snapshot form of POD proposed by Sirovich in \cite{Sir87} works in the continuous version as follows.\\ \noindent Let us suppose that the continuous solution $y(t)$ of \eqref{heat} and $p(t)$ of \eqref{adj} belongs to a real separable Hilbert space $V$, where $V=H_0^1(\Omega)$ or $L^2(\Omega)$, equipped with its inner product $\langle\cdot,\cdot\rangle$ and associated norm $\|\cdot\|^2=\langle\cdot,\cdot\rangle$. We set \mbox{$\mathcal{V}:=\mbox{span}\{z^k(t) \; | \; t \in [0,T] \text{ and } 1 \leq k \leq 3 \} \subseteq V$}, where $z^1(t) := y(t)$, $z^2(t) := p(t)$, $z^3(t) := \dot{p}(t)$. Note that the initial condition $y(0) = y_0$ is included in $\mathcal{V}$. The aim is to determine a POD basis $\{\psi_1,\ldots,\psi_\ell\} \subset V$ of rank $\ell \in \{1, ..., d\}$ with $d= \text{dim}(\mathcal{V}) \leq \infty$, by solving the following constrained minimization problem: \begin{eqnarray}\label{prb:pod} \min_{\psi_1,\ldots,\psi_\ell} \sum_{k=1}^3\int_0^T \left\|z^k(t)-\sum_{i=1}^\ell \langle z^k(t),\psi_i\rangle \; \psi_i\right\|^2 dt \quad \mbox{ s.t. } \langle\psi_j,\psi_i\rangle=\delta_{ij}\quad\mbox{for }1\leq i,j\leq \ell, \end{eqnarray} where $\delta_{ij}$ denotes the Kronecker symbol, i.e. $\delta_{ij}=0$ for $i \neq j$ and $\delta_{ii} = 1$. 
\\ \noindent It is well-known (see \cite{GV13}) that a solution to problem (\ref{prb:pod}) is given by the first $\ell$ eigenvectors $\{\psi_1, \ldots, \psi_\ell\}$ corresponding to the $\ell$ largest eigenvalues $\lambda_i > 0$ of the self-adjoint linear operator $\mathcal R:V \rightarrow V,$ i.e. \mbox{$\mathcal{R}\psi_i=\lambda_i\psi_i$,} $i=1, \dotsc, \ell$, where $\mathcal R$ is defined as follows: $$ \mathcal{R}\psi=\sum_{k=1}^3 \int_0^T \langle z^k(t),\psi\rangle \; z^k(t) dt \quad \mbox{for } \psi\in V.$$ \noindent Moreover, we can quantify the POD approximation error by the neglected eigenvalues (more details in \cite{GV13}) as follows: \begin{equation}\label{err-POD} \sum_{k=1}^3 \int_0^T \left\|z^k(t)-\sum_{i=1}^\ell \langle z^k(t),\psi_i\rangle \; \psi_i\right\|^2 dt =\sum_{i=\ell+1}^d \lambda_i. \end{equation} \noindent Let us assume that we have computed POD basis functions $\{ \psi_i\}_{i=1}^\ell$. Then, we define the POD Galerkin ansatz of order $\ell$ for the state $y$ as: \begin{equation}\label{pod_ans} y^\ell(t)=\sum_{i=1}^\ell w_i(t)\psi_i, \end{equation} \noindent where $y^\ell\in V^\ell:= \text{span}\{\psi_1,\ldots,\psi_\ell\}$ and the unknown coefficients are denoted by $\{w_i\}_{i=1}^\ell$. 
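In the discrete setting, $\mathcal{R}$ becomes the snapshot correlation matrix and the POD modes can be computed from an SVD of the snapshot matrix; the error identity \eqref{err-POD} then holds with sums in place of integrals. A sketch with synthetic random snapshots and the Euclidean inner product (all data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 12))   # columns: 12 synthetic snapshots in R^100

# POD modes = left singular vectors = eigenvectors of R = Z Z^T
U, s, _ = np.linalg.svd(Z, full_matrices=False)
lam = s**2                            # eigenvalues lambda_i of R
ell = 5
Psi = U[:, :ell]                      # POD basis of rank ell

# squared projection error equals the sum of the neglected eigenvalues
err = np.sum((Z - Psi @ (Psi.T @ Z))**2)
print(err, np.sum(lam[ell:]))
```

For a non-Euclidean inner product (e.g. induced by a mass matrix) one would work with weighted snapshots, but the structure of the computation is the same.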
If we plug this ansatz into the weak formulation of the state equation \eqref{weak:heat} and use $V^\ell$ as the test space, we get the following reduced order model for \eqref{weak:heat} of low dimension: \begin{equation}\label{weakpod:heat} \begin{array}{r c l} \displaystyle\int_\Omega y_t^\ell(t) \psi dx + \int_\Omega \nabla y^\ell(t) \cdot \nabla \psi dx & = & \displaystyle\int_\Omega (f + \mathcal{B} u)(t) \psi dx \quad \forall \psi \in V^\ell \text{ and } t \in (0,T] \text{ a.e.},\\[2ex] \displaystyle\int_\Omega y^\ell(0) \psi dx & = & \displaystyle\int_\Omega y_0 \psi dx\\[2ex] \end{array} \end{equation} \noindent Choosing $\psi = \psi_i$ for $i=1, \dotsc, \ell$ and utilizing \eqref{pod_ans}, we infer from \eqref{weakpod:heat} that the coefficients \mbox{$(w_1(t), \dotsc, w_\ell(t)) =: w(t)$} satisfy $$M^\ell \dot{w}(t) + A^\ell w(t) = F^\ell(t) \quad \text{a.e. in } (0,T], \quad M^\ell w(0) = y_0^\ell, $$ where $(M^\ell)_{ij} = \int_\Omega \psi_j \psi_i dx$, $(A^\ell)_{ij} = \int_\Omega \nabla \psi_j \cdot \nabla \psi_i dx$, $(F^\ell (t))_j = \int_\Omega (f + \mathcal{B}u)(t) \psi_j dx$ and $(y_0^\ell)_j = \int_\Omega y_0 \psi_j dx$. Note that $M^\ell$ is the identity matrix if we choose as inner product $\langle \cdot , \cdot \rangle := \langle \cdot , \cdot \rangle_{L^2(\Omega)}$. \noindent The reduced order model surrogate (ROM) for the optimal control problem is given by \begin{equation}\label{ocp_pod} \min_{u \in U_{ad}} \hat{J}^\ell(u) \mbox{ s.t. } y^\ell(u) \mbox{ satisfies } \eqref{weakpod:heat}, \end{equation} where $\hat{J}^\ell$ is the reduced cost functional, i.e. $\hat{J}^\ell (u):= J (y^\ell(u),u)$.
We recall that the discretization of the optimal solution $\bar{u}^\ell$ to \eqref{ocp_pod} is determined by the relation between the adjoint state and control and refer to \cite{H05} for more details about the variational discretization concept.\\ \noindent In order to solve the reduced optimal control problem \eqref{ocp_pod}, we consider the well-known first order optimality condition given by the variational inequality $$\langle \nabla \hat{J}^\ell(\bar{u}^\ell), u-\bar{u}^\ell \rangle_U \geq 0 \quad \forall u \in U_{ad},$$ which is sufficient since the underlying problem is convex.\\ The first order optimality conditions of \eqref{ocp_pod} also deliver that the adjoint POD scheme for the approximation of $p$ is given by: find $p^\ell(t) \in V^\ell$ with $p^\ell(T) = 0$ satisfying \begin{equation}\label{weakpod:adjoint} -\displaystyle\int_\Omega p_t^\ell(t) \psi dx + \int_\Omega \nabla p^\ell(t) \cdot \nabla \psi dx = \displaystyle\int_\Omega (y^\ell - y_d)(t) \psi dx \quad \forall \psi \in V^\ell \text{ and } t \in (0,T) \text{ a.e.} \end{equation} \section{The snapshot location strategy} In Section 4, the POD method in the continuous framework is recalled, where the POD basis functions are computed in such a way that the error between the trajectories $y(t)$ of \eqref{heat} and $p(t)$ of \eqref{adj} and their POD Galerkin approximations is minimized in \eqref{prb:pod}. In practice, we do not have the whole solution trajectories $\{z^k(t)\}_{t\in[0,T]}, 1 \leq k \leq 3$, at hand. But we have snapshots available, which are the solutions $\{y(t_j)\}_{j=0}^n$ to \eqref{heat} and the solutions $\{p(t_j)\}_{j=0}^n$ to \eqref{adj} at times $\{t_j\}_{j=0}^n$. This motivates replacing the time integration in \eqref{prb:pod} by an appropriate quadrature rule based on $t_0,\ldots,t_n$, i.e.
$\int_0^T g(t)dt\approx \sum_{j=0}^n \beta_j g(t_j)$ for $g\in C^0([0,T])$ with quadrature weights $\beta_0,\ldots,\beta_n\in\mathbb R.$ We later choose the weights of the trapezoidal rule, compare \eqref{post_p}. In the present work, we neglect the error introduced by the quadrature rule. The minimization problem related to \eqref{prb:pod} then becomes $$\min \; \sum_{k=1}^3\sum_{j=0}^n \beta_j \left\| z^k(t_j) - \sum_{i=1}^\ell \langle z^k(t_j),\psi_i \rangle \psi_i \right\|^2, \text{ s.t. } \langle \psi_j , \psi_i \rangle = \delta_{ij} \quad \text{ for } 1 \leq i,j \leq \ell$$ \noindent and clearly exhibits the strong dependence of the POD basis functions on the chosen snapshot locations $t_0,\ldots, t_n$. The snapshots should capture the main features of the dynamics of the truth solution as well as possible. Hence, it is important to select suitable time instances at which characteristic dynamical properties of the optimal state are located. A natural question is: \medskip \begin{center} {\em How to pick time instances that represent good locations for snapshots in POD-MOR for \eqref{ocp_pod}?}\\ \end{center} \medskip Moreover, we face some difficulties since the reduction of optimal control problems is usually initialized with snapshots computed from a given input control $u_\circ \in U_{ad}$. This problem is usually addressed in the offline stage for POD, which is the phase needed for snapshot generation, POD basis computation and building the reduced order model. Mostly, we do not have any information about the optimal control, so that in POD-MOR the input control $u_\circ$ is often chosen as $u_\circ \equiv 0$. This circumstance raises the question about the quality of the POD basis and the quality of the POD suboptimal solution. 
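For concreteness, the weighted minimization above can be solved by the method of snapshots: the POD modes follow from the eigenpairs of the weighted correlation matrix $K_{ij} = \sqrt{\beta_i\beta_j}\,\langle z(t_i), z(t_j)\rangle$, with $\psi = \frac{1}{\sqrt{\lambda}}\sum_j \sqrt{\beta_j}\, v_j\, z(t_j)$ for an eigenpair $(\lambda, v)$ of $K$. The pure-Python sketch below is illustrative only: the Euclidean inner product stands in for $\langle\cdot,\cdot\rangle$, a single trajectory is used, and the dominant eigenpair is found by power iteration.

```python
# Sketch: method of snapshots with quadrature weights beta_j (illustrative,
# not the authors' code; Euclidean inner product as stand-in for <.,.>).
import math

def pod_first_mode(snapshots, beta, iters=200):
    """Dominant POD eigenvalue and mode from weighted snapshots."""
    n = len(snapshots)
    dot = lambda a, b: sum(x*y for x, y in zip(a, b))
    K = [[math.sqrt(beta[i]*beta[j]) * dot(snapshots[i], snapshots[j])
          for j in range(n)] for i in range(n)]
    v = [1.0] * n                      # power iteration for (lambda_1, v_1)
    for _ in range(iters):
        w = [sum(K[i][j]*v[j] for j in range(n)) for i in range(n)]
        nrm = math.sqrt(dot(w, w))
        v = [x/nrm for x in w]
    lam = dot(v, [sum(K[i][j]*v[j] for j in range(n)) for i in range(n)])
    # psi_1 = (1/sqrt(lam)) * sum_j sqrt(beta_j) v_j z(t_j), normalized by construction
    m = len(snapshots[0])
    psi = [sum(math.sqrt(beta[j])*v[j]*snapshots[j][k] for j in range(n)) / math.sqrt(lam)
           for k in range(m)]
    return lam, psi
```

For rank-one snapshot data $z(t_j) = c_j e$ one recovers $\lambda_1 = \sum_j \beta_j c_j^2$ and $\psi_1 = \pm e$, which makes the weighting by $\beta_j$ directly visible.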
The a-posteriori error estimator \eqref{est-thm31} in Section 3 motivates a suitable location of time instances for the POD adjoint state, and at the same time we obtain an approximation of the optimal control which can be used as an input control $u_\circ$ in order to generate the snapshots. \noindent Using a time adaptive mesh refinement process in the offline stage allows us to avoid both the choice of an input control $u_\circ$ and the choice of the snapshot locations by solving equation \eqref{2ordp}. Then, we take advantage of the a-posteriori error estimation presented in Theorem \ref{thm:apost}. Equation \eqref{2ordp} provides the optimal adjoint state associated with \eqref{ocp}, which does not require the explicit knowledge of a control input $u_\circ$. We note that the ellipticity of equation \eqref{2ordp} plays a crucial role in this approach. The same approach would not work if one solved the optimality conditions directly. The numerical approximation of $p$ provides important information about the control input. In fact, thanks to the variational inequality \eqref{opt_con} we are first able to build an approximate control $u$ and then compute the associated state $y(u)$. In this way our snapshot set will contain information about the state corresponding to an approximation of the optimal control. Thanks to this numerical approximation of the optimal control problem we can build the snapshot matrix and compute the POD basis functions, where the number $\ell$ is chosen such that $\sum_{i=\ell+1}^d \lambda_i\approx0$.\\ The approximation of equation \eqref{2ordp} is very useful in model order reduction since it removes the need to choose an initial input control to generate the snapshot set. Moreover, we also gain information about a temporal grid, which allows us to better resolve $p$ with respect to time. 
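The control reconstruction just described, $u = \mathcal{P}_{U_{ad}}(-\frac{1}{\alpha}\mathcal{B}^*p)$, reduces to a pointwise clip in the case of box constraints. A minimal sketch, assuming the values of $\mathcal{B}^*p$ at the time nodes are already available as a vector (a hypothetical input, since assembling $\mathcal{B}^*p$ depends on the discretization):

```python
# Sketch: pointwise projection u = P_[ua,ub]( -(1/alpha) * B^* p ) at the time nodes.
# Bstar_p is a hypothetical precomputed list of values of B^* p.
def project_control(Bstar_p, alpha, ua=float("-inf"), ub=float("inf")):
    """Clip -(1/alpha)*B^*p componentwise into the admissible box [ua, ub]."""
    return [min(max(-(1.0/alpha)*v, ua), ub) for v in Bstar_p]
```

With the default infinite bounds this recovers the unconstrained relation $u = -\frac{1}{\alpha}\mathcal{B}^*p$.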
\noindent The a-posteriori error estimation \eqref{est-thm31} guarantees that the error of the finite element approximation of \eqref{2ordp} in the time variable is below a certain tolerance. Therefore, the reduced optimal control problem \eqref{ocp_pod} is set up and solved on the resulting adaptive time grid. Now the question is: \medskip \begin{center} \textit{How good is the quality of the computed time grid in terms of the error between\\ the optimal solution and the POD surrogate solution? } \\ \end{center} \medskip \subsection{Error Analysis for the adjoint variable} Let us motivate our approach by analyzing the error $\|p(u)-p_k^\ell(u_k^\ell)\|_{L^2(0,T;V)}$ between the optimal adjoint solution $p(u)$ of \eqref{adj} associated with the optimal control $u$ for \eqref{ocp}, i.e. $u=\mathcal{P}_{U_{ad}}(-\dfrac{1}{\alpha}\mathcal{B}^*p)$, and the POD reduced approximation $p_k^\ell(u_k^\ell)$, which is the time discrete solution to the POD-ROM for \eqref{adj} associated with the time discrete optimal control $u_k^\ell$ for \eqref{ocp_pod}, i.e. $y=y(u_k^\ell)$ in \eqref{adj}. We denote by $V$ the space $V=H_0^1(\Omega)$ and by $H$ the space $L^2(\Omega)$. By the triangle inequality we get the following estimate for the $L^2(0,T;V)$-norm: \begin{align}\label{err:est_new} \|p(u)-p_k^\ell(u_k^\ell)\| &\leq \underbrace{\|p(u)-p_k(u_k)\|}_{(\ref{err:est_new}.1)}+\underbrace{\|p_k(u_k)- \mathcal{P}^\ell p_k(u_k)\|}_{(\ref{err:est_new}.2)}+ \underbrace{\| \mathcal{P}^\ell p_k(u_k) - \mathcal{P}^\ell p_k(u_k^\ell) \|}_{(\ref{err:est_new}.3)} + \underbrace{\| \mathcal{P}^\ell p_k(u_k^\ell) - p_k^\ell(u_k^\ell) \|}_{(\ref{err:est_new}.4)} \end{align} where $p_k(u_k)$ is the time discrete adjoint solution of \eqref{weak_dis} associated with the control $u_k=\mathcal{P}_{U_{ad}}(-\dfrac{1}{\alpha}\mathcal{B}^*p_k)$ and $p_k(u_k^\ell)$ is the time discrete adjoint solution to \eqref{adj} with respect to the suboptimal control $u_k^\ell$, i.e. $y=y(u_k^\ell)$ in \eqref{adj}. 
By $\mathcal{P}^\ell:V\rightarrow V^\ell$ we denote the orthogonal POD projection operator as follows: $$\mathcal{P}^\ell y:=\sum_{i=1}^\ell \langle y,\psi_i\rangle_V \psi_i\quad \mbox{ for } y\in V.$$ The term (\ref{err:est_new}.1) can be estimated by \eqref{est-thm31} and concerns the snapshot generation. Thus, we can decide on a certain tolerance in order to have a prescribed error. The second term (\ref{err:est_new}.2) in \eqref{err:est_new} is the POD projection error and can be estimated by the sum of the neglected eigenvalues. Then, we note that the third term (\ref{err:est_new}.3) can be estimated as follows: \begin{equation}\label{err:est_third} \| \mathcal{P}^\ell p_k(u_k) - \mathcal{P}^\ell p_k(u_k^\ell) \| \leq \| \mathcal{P}^\ell \| \, \|p_k(u_k) - p_k(u_k^\ell) \| \leq C_2 \|u_k-u_k^\ell \|_U, \end{equation} where $\| \mathcal{P}^\ell \| \leq 1$ and $C_2>0$ is the constant referring to the Lipschitz continuity of $p_k$ independent of $k$ as in \cite{NV11}. In order to control the quantity $\| u_k-u_k^\ell \|_U \leq \| u_k - u \|_U + \| u - u_k^\ell \|_U$ we make use of the a-posteriori error estimation of \cite{TV09}, which provides an upper bound for the error between the (unknown) optimal control and any arbitrary control $u_p$ (here $u_p = u_k$ and $u_p = u_k^\ell$) by $$ \| u - u_p \|_U \leq \frac{1}{\alpha} \| \zeta_p \|_U, $$ where $\alpha$ is the regularization parameter in the cost functional and $\zeta_p \in L^2(0,T;\mathbb{R}^m)$ is chosen such that $$ \langle \alpha u_p - \mathcal{B}^* p(u_p) + \zeta_p, u - u_p \rangle_U \geq 0 \quad \forall u \in U_{ad} $$ is satisfied. 
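For box constraints $u_a \le u \le u_b$, the perturbation $\zeta_p$ can be chosen pointwise from the defect $d = \alpha u_p - \mathcal{B}^*p(u_p)$, in the spirit of the construction in \cite{TV09}. The following sketch is an illustration, not the authors' code; the nodal vectors `u` and `d` and the scalar bounds are hypothetical inputs:

```python
# Sketch: pointwise perturbation zeta_p for the a-posteriori bound
#   ||u - u_p||_U <= (1/alpha) ||zeta_p||_U,
# assuming box constraints ua <= u <= ub and d_j = alpha*u_j - (B^* p(u))_j.
def zeta(u, d, ua, ub, tol=1e-12):
    z = []
    for uj, dj in zip(u, d):
        if uj <= ua + tol:        # active lower bound: need d + zeta >= 0
            z.append(-min(0.0, dj))
        elif uj >= ub - tol:      # active upper bound: need d + zeta <= 0
            z.append(-max(0.0, dj))
        else:                     # inactive: need alpha*u - B^*p + zeta = 0
            z.append(-dj)
    return z
```

Each branch chooses the smallest $\zeta$ (in modulus) that makes the perturbed variational inequality hold, so the resulting bound $\frac{1}{\alpha}\|\zeta_p\|_U$ is as tight as this construction allows.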
Finally, the last term (\ref{err:est_new}.4) can be estimated according to \cite{HV08} and involves the sum of the neglected eigenvalues, the first derivative of the time discrete adjoint variable and the difference between the state and the POD state: \begin{equation}\label{err:est_fourth} \| \mathcal{P}^\ell p_k(u_k^\ell) - p_k^\ell(u_k^\ell) \|^2 \leq C_3 \left(\sum_{i=\ell+1}^d \lambda_i^k + \| \dot{p}_k(u_k^\ell)-\mathcal{P}^\ell \dot{p}_k(u_k^\ell) \|_{L^2(0,T;V')}^2 + \|y_k(u_k^\ell) - y_k^\ell(u_k^\ell) \|_{L^2(0,T;H)}^2\right), \end{equation} for a constant $C_3>0$. We note that the sum of the neglected eigenvalues is sufficiently small provided that $\ell$ is large enough. Furthermore, error estimation \eqref{err:est_fourth} depends on the time derivative $\dot{p}_k$. To avoid this dependence, we include time derivative information concerning the adjoint variable into the snapshot set, see \cite{KV02}. To summarize, the error estimation reads: \begin{equation}\label{err:summarized} \| p(u) - p_k^\ell(u_k^\ell) \|_{L^2(0,T;V)} \leq \sqrt{C_1}\, \eta + \frac{C_2}{\alpha}( \|\zeta_k \|_U + \|\zeta_k^\ell \|_U ) + \sqrt{ C_3 \left( \sum_{i=\ell+1}^d \lambda_i^k + \|y_k - y_k^\ell \|_{L^2(0,T;H)}^2 \right)}. \end{equation} Finally, we note that estimation \eqref{err:est_fourth} involves the state variable, which is estimated in Section 5.2 below. \subsection{Error Analysis for the state variable} In this section we address the certification of the quality of the POD approximation of the state variable. It may happen that the time grid selected for the adjoint $p$ is not accurate enough for the state variable $y$. Therefore a further refinement of the time grid might be useful in order to reduce the error between the POD state and the true state below a given threshold. This is not guaranteed if we use the time grid which results from the use of the estimate \eqref{est-thm31}. 
Here, we consider the error between the full solution $y(u_k^\ell)$ corresponding to the suboptimal control $u_k^\ell$ and the time discrete POD solution $y_k^\ell(u_k^\ell)$, where we assume that the same temporal grid is used for the snapshots and for the solution of our POD reduced order problem. In this situation, the following estimate is proved in \cite{KV02}: \begin{subequations}\label{post_p} \begin{align} \displaystyle\sum_{j=0}^n \beta_j \|y(t_j;u_k^\ell)-y_j^\ell(u_k^\ell) \|_H^2 &\leq \quad \displaystyle\sum_{j=1}^n \left( \Delta t_j^2 C_y ((1+c_p^2)\|y_{tt}(u_k^\ell)\|^2_{L^2(I_j;H)}+\|y_t(u_k^\ell)\|^2_{L^2(I_j;V)})\right) \label{post_p_a}\\[0.4cm] & \hspace{1cm} +\displaystyle\sum_{j=1}^n C_y\left(\sum_{i=\ell+1}^d\left(|\langle \psi_i,y_0\rangle_V|^2+\lambda_i\right)\right) \label{post_p_b}\\[0.4cm] & \hspace{1cm} +\displaystyle\sum_{j=1}^n \sum_{i=\ell+1}^d C_y \dfrac{\lambda_i}{\Delta{t_j^2}} \label{post_p_c} \end{align} \end{subequations} \noindent where $C_y>0$ is a constant depending on $T$, but independent of the time grid $\{t_j\}_{j=0}^n$. We note that $y(t_j;u_k^\ell)$ is the continuous solution of \eqref{heat} at the given time instances related to the suboptimal control $u_k^\ell$. The temporal step size in the subinterval $[t_{j-1},t_j]$ is denoted by $\Delta t_j$. The positive weights $\beta_j$ are given by $$ \beta_0 = \frac{\Delta t_1}{2}, \quad \beta_j = \frac{\Delta t_j + \Delta t_{j+1}}{2} \text{ for } j = 1, \dotsc, n-1, \quad \text{and } \beta_n = \frac{\Delta t_n}{2}. $$ The constant $c_p$ is an upper bound for the norm of the projection operator. A similar estimate can be carried out for the $V$-norm. We refer the interested reader to \cite{KV02}. Estimate \eqref{post_p} now provides a recipe for further refinement of the time grid in order to approximate the state $y$ within a prescribed tolerance. 
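The weights $\beta_j$ above are the trapezoidal rule on a possibly non-equidistant grid; a direct transcription as a small helper, assuming a strictly increasing list of time instances:

```python
# Trapezoidal quadrature weights beta_j on a (possibly non-equidistant) grid
# t_0 < ... < t_n, matching the formulas above:
#   beta_0 = dt_1/2,  beta_j = (dt_j + dt_{j+1})/2,  beta_n = dt_n/2.
def trapezoid_weights(t):
    n = len(t) - 1
    dt = [t[j] - t[j-1] for j in range(1, n + 1)]   # dt[j-1] = Delta t_j
    beta = [dt[0]/2.0]
    beta += [(dt[j-1] + dt[j])/2.0 for j in range(1, n)]
    beta.append(dt[-1]/2.0)
    return beta
```

By construction the weights sum to $t_n - t_0 = T$, which is a convenient sanity check on adaptive grids.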
One option here consists in equidistributing the error contributions of the term \eqref{post_p_a}, while the number of modes has to be adapted to the time grid size according to the term \eqref{post_p_c}. Finally, the number $\ell$ of modes should be chosen such that the term in \eqref{post_p_b} remains within the prescribed tolerance. \subsection{The algorithm} The a-posteriori error control concept for \eqref{2ordp} now offers the possibility to select snapshot locations by a time adaptive procedure. For this purpose, \eqref{2ordp} is solved adaptively in time, where the spatial resolution ($\Delta x$ in Algorithm \ref{Alg:OPTPOD}) is chosen to be very coarse in order to keep the computational costs low. This is possible due to the fact that spatial and temporal discretization decouple when using the solution technique of \cite{GHZ12}, as we will see in Section 6, compare Figure \ref{fig3:mesh}. The resulting time grid points now serve as snapshot locations, on which our POD reduced order model for the optimization is based. The snapshots are now obtained from a simulation of \eqref{heat} with high spatial resolution $h$; the resulting state is used in \eqref{adj} to obtain highly resolved snapshots of $p$, which are complemented by finite differences in time of those adjoint snapshots. The right-hand side $u$ in the simulation of \eqref{heat} is obtained from \eqref{opt_con} with $p$ from \eqref{adj} computed with spatially coarse resolution $\Delta x$. The certification of the state variable is then performed according to \eqref{post_p} as a post-processing procedure. This strategy might not deliver the optimal time instances, but it is a practical and efficient strategy which turns out to deliver good approximation results (compare Section 6) at low cost. The algorithm is summarized below in Algorithm \ref{Alg:OPTPOD}. 
\begin{algorithm}[htbp] \caption{Adaptive snapshot selection for optimal control problems.} \label{Alg:OPTPOD} \begin{algorithmic}[1] \REQUIRE coarse spatial grid size $\Delta x$, fine spatial grid size $h$, maximal number of degrees of freedom (dof)\\ for the adaptive time discretization, $T>0$. \STATE Solve \eqref{2ordp} adaptively w.r.t. time with spatial resolution $\Delta x$ and obtain the time grid $\mathcal{T}$ with solution $p_{\Delta x}$. \STATE Set $u_{\Delta x}=\mathcal{P}_{U_{ad}}\left(-\dfrac{1}{\alpha}\mathcal{B}^*p_{\Delta x}\right).$ \STATE Solve \eqref{heat} on $\mathcal{T}$ with spatial resolution $\Delta x$ corresponding to the control $u_{\Delta x}$. \STATE Refine the time grid $\mathcal{T}$ according to \eqref{post_p} and construct the time grid $\mathcal{T}_{new}$. \STATE Generate state and adjoint snapshots by solving \eqref{heat} with r.h.s. $u_{\Delta x}$ and \eqref{adj}, respectively, on $\mathcal{T}_{new}$ with spatial resolution $h$. Generate time derivative adjoint snapshots by finite differences in time of those adjoint snapshots. \STATE Compute a POD basis of order $\ell$ and build the POD reduced order model \eqref{ocp_pod} based on the state, \\adjoint state and time derivative adjoint state snapshots. \STATE Solve \eqref{ocp_pod} on the time grid $\mathcal{T}_{new}$. \end{algorithmic} \end{algorithm} \section{Numerical Tests} In our numerical computations we use a one-dimensional spatial domain and a finite element discretization in space by means of conforming piecewise linear elements. We use the implicit Euler method for time integration. The solution of the optimal control problem \eqref{ocp_pod} is computed by a gradient method with the stopping criterion $\|\hat{J}'(u^k)\|_U \leq \tau_r \|\hat{J}'(u^0)\|_U +\tau_a$ and an Armijo line search. 
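A minimal sketch of such a gradient method with Armijo line search, applied to a hypothetical scalar quadratic stand-in for $\hat{J}^\ell$ (the PDE solves are abstracted away; the relative/absolute stopping test mirrors the one used here):

```python
# Sketch: gradient descent with Armijo backtracking on a toy model of Jhat^ell.
# Stopping test: |J'(u_k)| <= tau_r * |J'(u_0)| + tau_a.
def armijo_gradient(J, dJ, u, tau_r=1e-3, tau_a=1e-8, c=1e-4, max_it=100):
    g0 = abs(dJ(u))
    for _ in range(max_it):
        g = dJ(u)
        if abs(g) <= tau_r*g0 + tau_a:
            break
        s = 1.0
        # Armijo condition: sufficient decrease J(u - s*g) <= J(u) - c*s*g^2
        while J(u - s*g) > J(u) - c*s*g*g:
            s *= 0.5
        u -= s*g
    return u

# Hypothetical quadratic stand-in J(u) = 0.5*(u - 2)^2, minimizer u = 2.
u_opt = armijo_gradient(lambda u: 0.5*(u - 2.0)**2, lambda u: u - 2.0, 0.0)
```

On this quadratic the full step $s = 1$ already satisfies the Armijo condition, so the method converges in one iteration; for the reduced problem each gradient evaluation would involve one state and one adjoint solve of the ROM.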
In the following numerical examples, we apply Algorithm \ref{Alg:OPTPOD} in order to validate this strategy by numerical results.\\ The numerical tests illustrate that utilizing a time adaptive grid for snapshot location and for solving the POD reduced order optimal control problem delivers more accurate approximation results than utilizing a uniform time grid. We show three different numerical tests. The first example presents a steep gradient at the end of the time interval in the adjoint variable. In the second example the adjoint state develops an interior layer in the middle of the time interval, and finally we introduce control constraints in the third example. Moreover, we also show the benefits of the post-processing for the state variable (step 4 in Algorithm \ref{Alg:OPTPOD}) to achieve more accurate approximation results for both state and adjoint state. \\ All coding is done in \textsc{Matlab} R2015a and the computations are performed on a 2.50GHz computer.\\ \subsection{Test 1: Solution with steep gradient towards final time} The data for this test example is inspired by Example 5.3 in \cite{GHZ12}, with the following choices: $\Omega = (0,1)$ and $[0,T] = [0,1]$. We set $U_{ad} = L^\infty(0,T; \mathbb{R}^m)$. The example is built in such a way that the exact optimal solution $(\bar{y},\bar{u})$ of problem \eqref{ocp} with associated optimal adjoint state $\bar{p}$ is known: $$\bar{y}(x,t) = \sin (\pi x) \sin(\pi t), \quad \bar{p}(x,t) = x(x-1)\left(t-\frac{e^{(t-1)/\varepsilon}- e^{-1/\varepsilon}}{1-e^{-1/\varepsilon}}\right), \quad \bar{u}(t) = -\frac{1}{\alpha} (\mathcal{B}^*\bar{p})(t) = -t+\frac{e^{(t-1)/\varepsilon}-e^{-1/\varepsilon}}{1-e^{-1/\varepsilon}}$$ with $m=1$ and the control shape function $\chi(x) = x(x-1)$ for the operator $\mathcal{B}$. 
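As a consistency check of this manufactured solution: assuming $\mathcal{B}^*p(t) = \int_\Omega \chi(x)\,p(x,t)\,dx$, we have $\mathcal{B}^*\bar{p}(t) = g(t)\int_0^1 x^2(x-1)^2\,dx = g(t)/30$ with $g(t) = t - \frac{e^{(t-1)/\varepsilon}-e^{-1/\varepsilon}}{1-e^{-1/\varepsilon}}$, so the regularization parameter $\alpha = 1/30$ chosen below yields exactly $\bar{u} = -g$. The integral can be verified numerically with a composite Simpson rule:

```python
# Verify int_0^1 x^2 (x-1)^2 dx = 1/30, the constant that cancels alpha = 1/30.
def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with an even number n of subintervals."""
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if i % 2 else 2)*f(a + i*h) for i in range(1, n))
    return s*h/3.0

I = simpson(lambda x: (x*(x - 1.0))**2, 0.0, 1.0)
```

Such checks are cheap insurance when building examples with a prescribed exact solution, since a wrong constant in $f$ or $y_d$ silently shifts the computed optimum.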
This leads to the right hand side $$f(x,t) = \pi \sin(\pi x)(\cos(\pi t) + \pi \sin(\pi t)) + x(x-1)\left(t- \frac{e^{(t-1)/\varepsilon}-e^{-1/\varepsilon}}{1-e^{-1/\varepsilon}}\right),$$ \noindent the desired state $$y_d (x,t) = \sin(\pi x)\sin(\pi t) + x(x-1)\left(1-\frac{e^{(t-1)/\varepsilon}\cdot 1/\varepsilon}{1-e^{-1/\varepsilon}}\right) + 2\left(t-\frac{e^{(t-1)/\varepsilon}-e^{-1/\varepsilon}}{1-e^{-1/\varepsilon}}\right)$$ and the initial condition $y_0 (x) = 0$. We choose the regularization parameter to be $\alpha = 1/30$. For small values of $\varepsilon$ (we use $\varepsilon = 10^{-4}$), the adjoint state $\bar{p}$ develops a layer towards $t = 1$, which can be seen in the left plots of Figure \ref{fig1:sol} and Figure \ref{fig2:con}. \begin{figure}[htbp] \includegraphics[scale=0.33]{p_true_surf.pdf} \includegraphics[scale=0.33]{pPOD_uniform_surf_new.pdf} \includegraphics[scale=0.33]{pPOD_adapt_surf_dof21_new.pdf} \caption{Test 1: Analytical optimal adjoint state $\bar{p}$ (left), POD adjoint solution $p^\ell$ utilizing an equidistant time grid with $ \Delta t = 1/20$ (middle), POD adjoint solution $p^\ell$ utilizing an adaptive time grid with dof=21 (right).} \label{fig1:sol} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.33]{p_true_contour_new.pdf} \includegraphics[scale=0.33]{pPOD_uniform_contour_new.pdf} \includegraphics[scale=0.33]{pPOD_adapt_contour_dof21_new.pdf} \caption{Test 1: Contour lines of the analytical optimal adjoint state $\bar{p}$ (left), POD adjoint solution $p^\ell$ utilizing an equidistant time grid with $ \Delta t = 1/20$ (middle), POD adjoint solution $p^\ell$ utilizing an adaptive time grid with dof=21 (right).} \label{fig2:con} \end{figure} In this test run we focus on the influence of the time grid on the approximation quality of the POD solution. 
Therefore, we compare the use of two different types of time grids: an equidistant time grid characterized by the time increment $\Delta t = 1/n$ and a non-equidistant (adaptive) time grid characterized by $n+1$ degrees of freedom (dof). We build the POD-ROM from the uncontrolled problem; we create the snapshot ensemble by determining the associated state $y(u_\circ)$ and adjoint state $p(u_\circ)$ corresponding to the control function $u_\circ \equiv 0$, and we also include the initial condition $y_0$ and the time derivatives of the adjoint $p_t(u_\circ)$ into our snapshot set, the latter computed by finite differences in time of the adjoint snapshots. We use $\ell = 1$ POD basis function. Although we could also use suboptimal snapshots corresponding to an approximation $u_{\Delta x}$ of the optimal control, here we want to emphasize the importance of the time grid. Nevertheless, in this example the quality of the POD solution does not differ significantly whether we consider suboptimal or uncontrolled snapshots. First, we leave out the post-processing step 4 of Algorithm \ref{Alg:OPTPOD} and discuss the inclusion of this part later.\\ Figure \ref{fig3:mesh} visualizes the space-time mesh of the numerical solution of (\ref{2ordp}) utilizing the temporal residual type a-posteriori error estimate (\ref{est-thm31}). The first grid in Figure \ref{fig3:mesh} corresponds to the choice of dof=21 and $\Delta x = 1/100$, whereas the grid in the middle refers to using dof = 21 and $\Delta x = 1/5$. Both choices for the spatial discretization lead to exactly the same time grid, which displays fine time steps towards the end of the time horizon (where the layer in the optimal adjoint state is located), whereas at the beginning and in the middle of the time interval the time steps are larger. This clearly indicates that the resulting time adaptive grid is very insensitive to changes in the spatial resolution. 
For the sake of completeness, the equidistant grid with the same number of degrees of freedom is shown in the right plot of Figure \ref{fig3:mesh}.\\ Since the generation of the time adaptive grid as well as the approximation of the optimal solution is done in the offline computation part of POD-MOR, this process should be performed quickly, which is why we pick $\Delta x = 1/5$ for step 1 in Algorithm \ref{Alg:OPTPOD}.\\ \begin{figure}[htbp] \includegraphics[scale=0.33]{grid_dx100.pdf} \includegraphics[scale=0.33]{grid_dx5.pdf} \includegraphics[scale=0.33]{grid_equi.pdf} \caption{Test 1: Adaptive space-time grids with dof $= 21$ according to the strategy in \cite{GHZ12} and $\Delta x = 1/100$ (left) and $\Delta x = 1/5$ (middle), respectively, and the equidistant grid with $\Delta t = 1/20$ (right)} \label{fig3:mesh} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.33]{true_control.pdf} \includegraphics[scale=0.33]{uGHZ21.pdf}\\ \includegraphics[scale=0.33]{uPOD_uniform20_new.pdf} \includegraphics[scale=0.33]{uPOD_adaptive21_new.pdf} \caption{Test 1: Analytical optimal control $\bar{u}$ (top left), approximation $u_{\Delta x}$ of the optimal control gained by step 1 of Algorithm \ref{Alg:OPTPOD} (top right); POD control utilizing a uniform time grid with $\Delta t = 1/20$ (bottom left), POD control utilizing an adaptive time grid with dof=21 (bottom right)} \label{fig2b:control} \end{figure} Figures \ref{fig1:sol} and \ref{fig2:con} (middle and right plots) show the surface and contour lines of the POD adjoint state utilizing an equidistant time grid and utilizing the time adaptive grid, respectively. 
The analytical control intensity $\bar{u}(t)$, the approximation $u_{\Delta x}$ of the optimal control computed in step 1 of Algorithm \ref{Alg:OPTPOD} as well as the POD controls utilizing a uniform and a time adaptive grid, respectively, are shown in Figure \ref{fig2b:control}.\\ Table \ref{tab:1} summarizes the approximation quality of the POD solution depending on different time discretizations. The fineness of the time discretization (characterized by $\Delta t$ and dof, respectively) is chosen in such a way that the results of uniform and adaptive temporal discretization are comparable. The absolute errors between the analytical optimal state $\bar{y}$ and the POD solution $y^\ell$, defined by $\varepsilon_{\text{abs}}^y := \|\bar{y}-y^\ell \|_{L^2(\Omega_T)}$, are listed in columns 2 and 6; the same applies to the errors in the control $\varepsilon_{\text{abs}}^u := \|\bar{u}-u^\ell \|_{\mathcal{U}}$ (columns 3 and 7) and in the adjoint state $\varepsilon_{\text{abs}}^p := \|\bar{p}-p^\ell \|_{L^2(\Omega_T)}$ (columns 4 and 8). Comparing the results, we note that we gain one order of accuracy for the adjoint and control variable with the time adaptive grid. In order to achieve an accuracy in the control variable of order $10^{-2}$ utilizing an equidistant time grid, we need about $n = 10000$ time steps (not listed in Table \ref{tab:1}). This emphasizes that using an appropriate (non-equidistant) time grid for the adjoint variable is of particular importance in order to efficiently achieve POD controls of good quality. 
\begin{table}[htbp] \centering \begin{tabular}{ c | c | c | c || c | c | c | c} \toprule $\Delta t$ & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ & dof & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ \\ \hline 1/20 & $1.5120 \cdot 10^{-02}$ & $1.9837 \cdot 10^{-01}$ & $3.6247 \cdot 10^{-02}$ & 21 & $5.1874 \cdot 10^{-02}$ & $5.3428 \cdot 10^{-02}$ & $9.6343 \cdot 10^{-03}$ \\ 1/42 & $1.1186 \cdot 10^{-02}$ & $2.1071 \cdot 10^{-01}$ & $3.8490 \cdot 10^{-02}$ & 43 & $5.1634 \cdot 10^{-02}$ & $2.4868 \cdot 10^{-02}$ & $4.3611 \cdot 10^{-03}$ \\ 1/61 & $1.0774 \cdot 10^{-02}$ & $2.1447 \cdot 10^{-01}$ & $3.9173 \cdot 10^{-02}$ & 62 & $5.1599 \cdot 10^{-02}$ & $2.3275 \cdot 10^{-02}$ & $4.0691 \cdot 10^{-03}$ \\ 1/114 & $1.1157 \cdot 10^{-02}$ & $2.1846 \cdot 10^{-01}$ & $3.9893 \cdot 10^{-02}$ & 115 & $5.1568 \cdot 10^{-02}$ & $2.3027 \cdot 10^{-02}$ & $4.0340 \cdot 10^{-03}$\\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 1: Absolute errors between the analytical optimal solution and the POD solution depending on the time discretization (equidistant: columns 1-4, adaptive: columns 5-8)} \label{tab:1} \end{table} Table \ref{tab:2} contains the evaluations of each term in \eqref{err:summarized}. The value $\eta_p^i$ ($\eta_p^b$) refers to the first (second) part in (\ref{est-thm31}). For this test example, we note that the term $\eta_p^i$ influences the estimation. However, we observe that the better the semi-discrete adjoint state $p_{\Delta x}$ from step 1 of Algorithm \ref{Alg:OPTPOD} is, the better will be the POD adjoint solution. Since all summands of (\ref{err:summarized}) can be estimated, Table \ref{tab:2} allows us to control the approximation of the POD adjoint state. The estimation \eqref{post_p} concerning the state variable will be investigated later on. 
\begin{table}[htbp] \centering \begin{tabular}{ c | c | c | c | c | c } \toprule dof & $\varepsilon_{\text{abs}}^p$ & $\eta_p^i$ & $\eta_p^b$ & $\| \zeta_k \|_U + \| \zeta_k^\ell \|_U $ & $\sum_{i=\ell+1}^d \lambda_i$ \\ \hline 21 & $9.6343 \cdot 10^{-03}$ & $4.9518 \cdot 10^{+00}$ & $4.8031 \cdot 10^{-04}$ & $1.6033 \cdot 10^{-02}$ & $3.3938 \cdot 10^{-04}$ \\ 43 & $4.3611 \cdot 10^{-03}$ & $1.1976 \cdot 10^{+00}$ & $5.0087 \cdot 10^{-05}$ & $1.9200 \cdot 10^{-02}$ & $2.9454 \cdot 10^{-04}$ \\ 62 & $4.0691 \cdot 10^{-03}$ & $7.2852 \cdot 10^{-01}$ & $2.9835 \cdot 10^{-05}$ & $1.9707 \cdot 10^{-02}$ & $2.9212 \cdot 10^{-04}$ \\ 115 & $4.0340 \cdot 10^{-03}$ & $3.4966 \cdot 10^{-01}$ & $1.4845 \cdot 10^{-05}$ & $2.0191 \cdot 10^{-02}$ & $2.9090 \cdot 10^{-04}$ \\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 1: Evaluation of each summand of the error estimation \eqref{err:summarized}} \label{tab:2} \end{table} Moreover, a comparison of the values of the cost functional is given in Table \ref{tab:3}. The aim of the optimization problem (\ref{ocp}) is to minimize the quantity of interest $J(y,u)$. The analytical value of the cost functional at the optimal solution is $J(\bar{y},\bar{u}) \approx 8.3988 \cdot 10^{+01}$. Table \ref{tab:3} clearly points out that the use of a time adaptive grid is fundamental for solving the optimal control problem (\ref{ocp}). The huge differences in the values of the cost functional are due to the strong increase of the desired state $y_d$ at the end of the time interval (see Figure \ref{fig4:states}). Small time steps at the end of the time interval, as is the case for the time adaptive grid, lead to much more accurate results. 
\begin{table}[htbp] \centering \begin{tabular}{ c | c || c | c } \toprule $\Delta t$ & $J(y^\ell,u)$ & dof & $J(y^\ell,u)$ \\ \hline 1/20 & $4.1652 \cdot 10^{+04}$ & 21 & $8.7960 \cdot 10^{+01}$ \\ 1/42 & $1.9834 \cdot 10^{+04}$ & 43 & $8.4252 \cdot 10^{+01}$ \\ 1/61 & $1.3656 \cdot 10^{+04}$ & 62 & $8.4102 \cdot 10^{+01}$ \\ 1/114 & $7.3078 \cdot 10^{+03}$ & 115 & $8.4034 \cdot 10^{+01}$ \\ 1/40000 & $8.5692 \cdot 10^{+01}$ & - & - \\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 1: Value of the cost functional at the POD solution utilizing uniform and adaptive time discretization, respectively, analytical value: $J \approx 8.3988 \cdot 10^{+01} $} \label{tab:3} \end{table} \begin{figure}[htbp] \includegraphics[scale=0.33]{true_state.pdf} \includegraphics[scale=0.33]{desired_state.pdf}\\ \includegraphics[scale=0.33]{yPOD_uniform20_new.pdf} \includegraphics[scale=0.33]{yPOD_adaptive21_new.pdf} \caption{Test 1: Analytical optimal state $\bar{y}$ (top left), desired state $y_d$ (top right); POD state $y^\ell$ utilizing a uniform time grid with $\Delta t = 1/20$ (bottom left), POD state $y^\ell$ utilizing an adaptive time grid with dof = 21 (bottom right) } \label{fig4:states} \end{figure} Now, let us discuss the inclusion of step 4 in Algorithm \ref{Alg:OPTPOD}. Since we went for an adaptive time grid regarding the adjoint variable, we cannot in general expect that the resulting time grid is a good time grid for the state variable. Table \ref{tab:1} confirms that utilizing a uniform time grid leads to better approximation results in the state variable than using the time adaptive grid. In order to improve also the approximation quality in the state variable, we incorporate the error estimation (\ref{post_p}) from \cite{KV02} in a post-processing step after producing the time grid with the strategy of \cite{GHZ12} and before starting the POD solution process. 
Define\\ \begin{center} $\eta_{\text{POD}_j} := \Delta t_j^2 \left( \int\limits_{I_j} (\| y_{tt}^k \|_H^2 + \|y_t^k\|_V^2) \, dt \right) $ \end{center} where $y_t^k \approx y_t(t_k)$ and $y_{tt}^k \approx y_{tt}(t_k)$ are computed via finite difference approximations. We perform bisection on the time interval $I_j$ on which the quantity $\eta_{\text{POD}_j}$ attains its maximum value, and repeat this $N_{\text{refine}}$ times. This results in the time grid $\mathcal{T}_{\text{new}}$. The improvement of the approximation quality in the state variable can be seen in Table \ref{tab:4}. The more additional time instances we include according to \eqref{post_p}, the better the approximation results get with respect to the state. Moreover, the approximation quality in the control and the adjoint state is also improved. \begin{table}[htbp] \centering \begin{tabular}{ c | c | c | c } \toprule $N_{\text{refine}}$ & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ \\ \hline 0 & $5.1874 \cdot 10^{-02}$ & $5.3428 \cdot 10^{-02}$ & $9.6343 \cdot 10^{-03}$\\ 5 & $ 4.0058 \cdot 10^{-02}$ & $2.1145 \cdot 10^{-02}$ & $3.6378 \cdot 10^{-03}$ \\ 10 & $3.0909 \cdot 10^{-02}$ & $1.8396 \cdot 10^{-02}$ & $3.0895 \cdot 10^{-03}$ \\ 20 & $2.4759 \cdot 10^{-02}$ & $1.7104 \cdot 10^{-02}$ & $2.8210 \cdot 10^{-03}$\\ 30 & $2.3028 \cdot 10^{-02}$ & $1.6971 \cdot 10^{-02}$ & $2.7906 \cdot 10^{-03}$ \\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 1: Improvement of approximation quality concerning the state variable. The initial time grid $\mathcal{T}$ is computed with dof=43} \label{tab:4} \end{table} We note that the sum of the neglected eigenvalues $\sum_{i=2}^d \lambda_i$ is approximately zero and the second largest eigenvalue of the correlation matrix is of order $10^{-10}$, which makes the use of additional POD basis functions redundant. 
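The refinement loop described above (bisect the interval carrying the largest indicator, $N_{\text{refine}}$ times) can be sketched as follows; `eta` is a hypothetical callable returning the indicator $\eta_{\text{POD}_j}$ of an interval:

```python
# Sketch: repeatedly bisect the time interval with the largest error indicator.
# eta(a, b) is a hypothetical user-supplied indicator for the interval [a, b].
def refine(t, eta, n_refine):
    t = list(t)
    for _ in range(n_refine):
        # index j of the interval [t[j-1], t[j]] with the largest indicator
        j = max(range(1, len(t)), key=lambda j: eta(t[j-1], t[j]))
        t.insert(j, 0.5*(t[j-1] + t[j]))   # bisect that interval
    return t
```

With the interval length as a stand-in indicator this reproduces uniform bisection of the largest interval; in the algorithm the indicator couples the step size with the finite difference approximations of $y_t$ and $y_{tt}$.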
Likewise, in this particular example the choice of richer snapshots (even the optimal snapshots) does not bring significant improvements in the approximation quality of the POD solutions. So, this example shows that the use of an appropriate adaptive time mesh alone efficiently improves the accuracy of the POD solution.\\ \subsection{Test 2: Solution with steep gradient in the middle of the time interval} Let $\Omega = (0,1)$ be the spatial domain and $[0,T]=[0,1]$ be the time interval. We choose $\varepsilon = 10^{-4}$ and $\alpha = 1$. To begin with, we consider an unconstrained optimal control problem and investigate the inclusion of control constraints separately in Test 3. We build the example in such a way that the analytical solution $(\bar{y},\bar{u})$ of \eqref{ocp} is given by: $$ \bar{y} (x,t) = x^3 (x-1)t, \quad \bar{p}(x,t) = \sin (\pi x) \text{atan}\left(\frac{t-0.5}{\varepsilon}\right)(t-1), $$\\ $$ \bar{u}_1(t) = \bar{u}_2(t) = -\text{atan} \left(\frac{t-0.5}{\varepsilon}\right)(t-1)\left(\frac{32}{\pi^3} - \frac{8}{\pi^2}\right), $$\\ $$\chi_1(x) = \max(0,1-16(x-0.25)^2), \quad \chi_2(x) = \max(0,1-16(x-0.75)^2).$$\\ \noindent The desired state and the forcing term are chosen accordingly. Due to the arctangent term and the small value of $\varepsilon$, the adjoint state exhibits an interior layer with a steep gradient at $t=0.5$, which can be seen in the left panels of Figures \ref{fig:ex2_p_surf} and \ref{fig:ex2_p_contour}. The shape functions $\chi_1$ and $\chi_2$ are shown in Figure \ref{fig:shape_functions_ev} on the left side. As in Test 1, we study the use of two different time grids: an equidistant time discretization and the time adaptive grid computed in step 1 of Algorithm \ref{Alg:OPTPOD} (see Figure \ref{fig:grids_ex2}). 
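As a consistency check of the stated control amplitude: assuming $(\mathcal{B}^*p)_i(t) = \int_\Omega \chi_i(x)\,p(x,t)\,dx$ and $\alpha = 1$, the amplitude should equal $\int_0^{1/2} \bigl(1-16(x-\tfrac14)^2\bigr)\sin(\pi x)\,dx = \frac{32}{\pi^3}-\frac{8}{\pi^2}$ (the integrand of $\chi_1\sin(\pi x)$ is supported on $[0,\tfrac12]$), which a composite Simpson rule confirms:

```python
# Verify the control amplitude 32/pi^3 - 8/pi^2 of Test 2 numerically.
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b] with an even number n of subintervals."""
    h = (b - a)/n
    s = f(a) + f(b) + sum((4 if i % 2 else 2)*f(a + i*h) for i in range(1, n))
    return s*h/3.0

# Restrict to [0, 1/2], where chi_1 is the smooth bump 1 - 16(x - 1/4)^2.
amp = simpson(lambda x: (1.0 - 16.0*(x - 0.25)**2)*math.sin(math.pi*x), 0.0, 0.5)
```

By symmetry of $\chi_2$ and $\sin(\pi x)$ about $x = \tfrac12$, the second component yields the same value, consistent with $\bar{u}_1 = \bar{u}_2$.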
Once again, we note that the spatial and temporal discretizations decouple when computing the time adaptive grid utilizing the a-posteriori estimate \eqref{est-thm31}, which enables us to use a coarse spatial mesh (i.e., a large $\Delta x$) for solving the elliptic system and thus to keep the offline costs low. \begin{figure}[htbp] \includegraphics[scale=0.33]{p2_true_surf.pdf} \includegraphics[scale=0.33]{pPOD2_surf_uniform_dt40_new.pdf} \includegraphics[scale=0.33]{pPOD2_adapt_surf_dof41_new.pdf} \caption{Test 2: Analytical optimal adjoint state $\bar{p}$ (left), POD adjoint solution $p^\ell$ with $\ell = 4$ utilizing an equidistant time grid with $ \Delta t = 1/40$ (middle), POD adjoint solution $p^\ell$ with $\ell = 4$ utilizing an adaptive time grid with dof=41 (right)} \label{fig:ex2_p_surf} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.33]{p2_true_contour_new.pdf} \includegraphics[scale=0.33]{pPOD2_contour_uniform_dt40_new.pdf} \includegraphics[scale=0.33]{pPOD2_adapt_contour_dof41_new.pdf} \caption{Test 2: Contour lines of the analytical optimal adjoint state $\bar{p}$ (left), POD adjoint solution $p^\ell$ with $\ell = 4$ utilizing an equidistant time grid with $ \Delta t = 1/40$ (middle), POD adjoint solution $p^\ell$ with $\ell = 4$ utilizing an adaptive time grid with dof=41 (right)} \label{fig:ex2_p_contour} \end{figure} As snapshots, we choose state and adjoint snapshots as well as time derivative adjoint snapshots corresponding to $u_\circ=0$, and we also include the initial condition $y_0$ in our snapshot set. The middle and right plots of Figures \ref{fig:ex2_p_surf} and \ref{fig:ex2_p_contour} show the surface and contour lines of the POD adjoint solution utilizing an equidistant time grid (with $\Delta t = 1/40$) and the adaptive time grid (with dof = 41), respectively.
Clearly, the equidistant time grid fails to capture the interior layer at $t=1/2$ satisfactorily, whereas the POD adjoint state utilizing the adaptive time grid approximates the interior layer well. \begin{figure}[htbp] \includegraphics[scale=0.33]{shape_fct.pdf} \includegraphics[scale=0.33]{decay_ev_new.pdf} \includegraphics[scale=0.33]{psi1_ex2_new.pdf} \caption{Test 2: Shape functions $\chi_1 (x)$ and $\chi_2(x)$ (left), decay of the eigenvalues on semilog scale (middle) and first POD basis function $\psi_1$ (right) utilizing uniform time grid with $\Delta t = 1/40$ } \label{fig:shape_functions_ev} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.33]{grid2_dx100.pdf} \includegraphics[scale=0.33]{grid2_dx5.pdf} \includegraphics[scale=0.33]{grid_equi.pdf} \caption{Test 2: Adaptive space-time grids with dof $= 41$ according to the strategy in \cite{GHZ12} and $\Delta x = 1/100$ (left) and $\Delta x = 1/5$ (middle), respectively, and the equidistant grid with $\Delta t = 1/40$ (right)} \label{fig:grids_ex2} \end{figure} Unlike in Test 1, the adaptive time grid is also a suitable time grid for the state variable in this numerical test example. This can be seen visually when comparing the POD state utilizing the uniform discretization and the adaptive time grid with the analytical optimal state, see Figures \ref{fig:ex_2_state_surface} and \ref{fig:ex_2_state_contour}.
\begin{figure}[htbp] \includegraphics[scale=0.33]{y2_true_surf.pdf} \includegraphics[scale=0.33]{yPOD2_uniform_dof40_surf_new.pdf} \includegraphics[scale=0.33]{yPOD2_adapt_surf_dof41_new.pdf} \caption{Test 2: Analytical optimal state $\bar{y}$ (left), POD solution $y^\ell$ with $\ell = 4$ utilizing an equidistant time grid with $ \Delta t = 1/40$ (middle), POD solution $y^\ell$ with $\ell = 4$ utilizing an adaptive time grid with dof=41 (right)} \label{fig:ex_2_state_surface} \end{figure} \begin{figure}[htbp] \includegraphics[scale=0.33]{y2_true_contour_new.pdf} \includegraphics[scale=0.33]{yPOD2_uniform_dt40_contour_new.pdf} \includegraphics[scale=0.33]{yPOD2_adapt_contour_dof41_new.pdf} \caption{Test 2: Contour lines of the analytical optimal state $\bar{y}$ (left), POD solution $y^\ell$ with $\ell = 4$ utilizing an equidistant time grid with $ \Delta t = 1/40$ (middle), POD solution $y^\ell$ with $\ell = 4$ utilizing an adaptive time grid with dof=41 (right)} \label{fig:ex_2_state_contour} \end{figure} Table \ref{tab:ex2_absolute_err_l5} summarizes the absolute errors between the analytical optimal solution and the POD solution for the state, control and adjoint state for all test runs with an equidistant and an adaptive time grid, respectively. Comparing the results of the numerical approximation, we note that the use of an adaptive time grid substantially improves the quality of the POD solution compared with an equidistant grid. In fact, we obtain an improvement of up to four orders of magnitude.
\begin{table}[htbp] \centering \begin{tabular}{ c | c | c | c || c | c | c | c} \toprule $\Delta t$ & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ & dof & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ \\ \hline 1/20 & $5.0767 \cdot 10^{-01}$ & $7.8419 \cdot 10^{+00}$ & $3.5413 \cdot 10^{+01}$ & 21 & $4.0346 \cdot 10^{-02}$ & $5.4053 \cdot 10^{-01}$ & $2.4409 \cdot 10^{+00}$ \\ 1/40 & $2.6242 \cdot 10^{-01}$ & $4.1058 \cdot 10^{+00}$ & $1.8542 \cdot 10^{+01}$ & 41 & $2.2178 \cdot 10^{-04}$ & $5.3471 \cdot 10^{-03}$ & $1.3186 \cdot 10^{-02}$ \\ 1/68 & $1.5603 \cdot 10^{-01}$ & $2.4503 \cdot 10^{+00}$ & $1.1065 \cdot 10^{+01}$ & 69 & $9.7031 \cdot 10^{-05}$ & $4.5702 \cdot 10^{-03}$ & $4.2670 \cdot 10^{-03}$ \\ 1/134 & $7.8741 \cdot 10^{-02}$ & $1.2386 \cdot 10^{+00}$ & $5.5938 \cdot 10^{+00}$ & 135 & $8.5577 \cdot 10^{-05}$ & $4.4901 \cdot 10^{-03}$ & $2.3507 \cdot 10^{-03}$\\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 2: Absolute errors between the analytical optimal solution and the POD solution with $\ell = 4$ depending on the time discretization (equidistant: columns 1-4, adaptive: columns 5-8)} \label{tab:ex2_absolute_err_l5} \end{table} The exact optimal control intensities $\bar{u}_1(t)$ and $\bar{u}_2(t)$ as well as the POD solutions utilizing uniform and adaptive temporal discretization are illustrated in Figure \ref{fig:ex_2_controls}.\\ Another point of comparison is the evaluation of the cost functional. Since the exact optimal solution is known analytically, we can compute the exact value of the cost functional, which is $J(\bar{y},\bar{u})= 1.0085 \cdot 10^{+03}$. As expected, utilizing an adaptive time grid enables us to approximate this value of the cost functional quite well when using dof=135, see Table \ref{tab:ex_2_cost_value}.
In contrast, even the use of a very fine temporal discretization with $\Delta t = 1 / 10000$ still yields worse results than the adaptive time grid with dof $\geq 41$. Again, this emphasizes the importance of a suitable time grid. \\ \begin{figure}[htbp] \includegraphics[scale=0.28]{true_control_u1.pdf} \includegraphics[scale=0.28]{POD_u1_uniform_new.pdf} \includegraphics[scale=0.28]{POD_u1_adaptive_new.pdf}\\ \includegraphics[scale=0.28]{true_control_u2.pdf} \includegraphics[scale=0.28]{POD_u2_uniform_new.pdf} \includegraphics[scale=0.28]{POD_u2_adaptive_new.pdf} \caption{Test 2: Analytical control intensities $\bar{u}_1(t)$ (top left) and $\bar{u}_2(t)$ (bottom left), POD control utilizing an equidistant time grid with $ \Delta t = 1/40$ (middle) and $\ell = 4$, POD control utilizing an adaptive time grid with dof=41 (right) and $\ell = 4$} \label{fig:ex_2_controls} \end{figure} \begin{table}[htbp] \centering \begin{tabular}{ c | c || c | c } \toprule $\Delta t$ & $J(y^\ell, u)$ & dof & $J(y^\ell, u)$ \\ \hline 1/20 & $3.1225 \cdot 10^{+05}$ & 21 & $1.9553 \cdot 10^{+04}$ \\ 1/40 & $1.5619 \cdot 10^{+05}$ & 41 & $1.0274 \cdot 10^{+03}$ \\ 1/68 & $9.1901 \cdot 10^{+04}$ & 69 & $1.0065 \cdot 10^{+03}$ \\ 1/134 & $4.6655 \cdot 10^{+04}$ & 135 & $1.0082 \cdot 10^{+03}$ \\ 1/10000 & $1.0350 \cdot 10^{+03}$ & -- & -- \\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 2: Value of the cost functional with $\ell = 4$, true value $J \approx 1.0085 \cdot 10^{+03}$} \label{tab:ex_2_cost_value} \end{table} Next, we investigate the influence of the number $\ell$ of utilized POD basis functions on the approximation quality of the POD solution. First, we have a look at the decay of the eigenvalues, which is displayed in the middle plot of Figure \ref{fig:shape_functions_ev}. The eigenvalues stagnate near the order of machine precision, which is why the use of more than $\ell = 4$ POD basis functions will not lead to better POD approximation results.
The first POD basis function $\psi_1$ can be seen in the right plot of Figure \ref{fig:shape_functions_ev}. For the use of only $\ell = 1$ POD basis function, the absolute errors between the analytical solution and the POD solution in the state, control and adjoint state for uniform as well as for adaptive time discretization are summarized in Table \ref{tab:ex2_absolute_err_l1}. Let us compare the results in Table \ref{tab:ex2_absolute_err_l1}, where $\ell = 1$ POD basis function is used, with those in Table \ref{tab:ex2_absolute_err_l5}, where $\ell = 4$ POD basis functions are used. We note that in the case of the uniform temporal discretization, the use of $\ell = 1$ POD basis function leads to approximation results similar to those obtained with $\ell = 4$ POD modes. On the contrary, in the case of the adaptive time discretization, the use of $\ell = 4$ POD basis functions leads to better approximation results with respect to the state variable than the use of a single POD basis function. The approximation results concerning the control and adjoint state differ only slightly when the number of utilized POD basis functions is increased. Nevertheless, even with only $\ell = 1$ POD mode, the use of the time adaptive grid improves the absolute errors by several orders of magnitude in comparison to a uniform time grid.
\begin{table}[htbp] \centering \begin{tabular}{ c | c | c | c || c | c | c | c} \toprule $\Delta t$ & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ & dof & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ \\ \hline 1/20 & $5.0631 \cdot 10^{-01}$ & $7.8420 \cdot 10^{+00}$ & $3.5413 \cdot 10^{+01}$ & 21 & $4.5255 \cdot 10^{-02}$ & $5.4054 \cdot 10^{-01}$ & $2.4409 \cdot 10^{+00}$ \\ 1/40 & $2.6230 \cdot 10^{-01}$ & $4.1059 \cdot 10^{+00}$ & $1.8542 \cdot 10^{+01}$ & 41 & $2.0721 \cdot 10^{-02}$ & $5.3475 \cdot 10^{-03}$ & $1.3186 \cdot 10^{-02}$ \\ 1/68 & $1.5684 \cdot 10^{-01}$ & $2.4503 \cdot 10^{+00}$ & $1.1065 \cdot 10^{+01}$ & 69 & $2.0713 \cdot 10^{-02}$ & $4.5706 \cdot 10^{-03}$ & $4.2670 \cdot 10^{-03}$ \\ 1/134 & $8.1129 \cdot 10^{-02}$ & $1.2386 \cdot 10^{+00}$ & $5.5938 \cdot 10^{+00}$ & 135 & $2.0664 \cdot 10^{-02}$ & $4.4905 \cdot 10^{-03}$ & $2.3507 \cdot 10^{-03}$\\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 2: Absolute errors between the analytical optimal solution and the POD solution with $\ell = 1$ depending on the time discretization (equidistant: columns 1-4, adaptive: columns 5-8)} \label{tab:ex2_absolute_err_l1} \end{table} \subsection{Test 3: Control constrained problem} In this test we add control constraints to the previous example. We set $u_{1,a}(t) \leq u_1(t) \leq u_{1,b}(t)$ and $u_{2,a}(t) \leq u_2(t) \leq u_{2,b}(t)$ for the time dependent control intensities $u_1(t)$ and $u_2(t)$. The analytical value range for both controls is $u_1(t), u_2(t) \in [-0.3479, 0.1700]$ for $t \in [0,1]$. For each control intensity we choose different upper and lower bounds: we set $u_{1,a}(t) = -100$ (i.e., no restriction), $u_{1,b}(t) = 0.1$ and $u_{2,a}(t) = -0.2, \; u_{2,b}(t) = 0$. For the solution of problem \eqref{ocp_pod} we use a projected gradient method.
The solution of the nonlinear, nonsmooth equation \eqref{2ordp} can be computed by a semi-smooth Newton method or by a Newton method utilizing a regularization of the projection formula, see \cite{NPS11}. In our numerical tests we compute the approximate solution to \eqref{ocp_pod} with a fixed point iteration and initialize the method with the adjoint state corresponding to the control unconstrained optimal control problem. In this way, only two iterations are needed for convergence. Convergence of the fixed point iteration can be shown for sufficiently large values of $\alpha$, see \cite{HV12}.\\ The analytical optimal solutions $\bar{u}_1$ and $\bar{u}_2$ are shown in the left plots in Figure \ref{fig:ex_2_controls_box}. For the POD basis computation, we use state, adjoint and time derivative adjoint snapshots corresponding to the reference control $u_\circ = 0$ and we also include the initial condition $y_0$ in our snapshot set. The plots in the middle and on the right in Figure \ref{fig:ex_2_controls_box} refer to the POD controls using a uniform and an adaptive temporal discretization, respectively. Once again, we note that utilizing an adaptive time grid leads to far better results than using a uniform temporal grid. The numerical results in Table \ref{tab:ex_2_errors_box} confirm this observation. We observe that the inclusion of box constraints on the control functions leads in general to better approximation results; compare Table \ref{tab:ex2_absolute_err_l5} with Table \ref{tab:ex_2_errors_box}. This is due to the fact that on the active sets the error between the analytical optimal controls and the POD solutions vanishes.
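The fixed point iteration described above can be sketched as follows. This is a generic illustration, not the paper's implementation: the map \texttt{g}, which in the actual problem involves a state solve followed by an adjoint solve, is abstracted as a callable, and the projection is the pointwise clipping onto the box $[u_a, u_b]$; all names are ours.

```python
def fixed_point_box(g, u0, alpha, bounds, tol=1e-10, max_iter=50):
    """Fixed point iteration u_{k+1} = P_[ua,ub]( -g(u_k)/alpha ), where
    P_[ua,ub] is the pointwise projection onto the box constraints and g
    abstracts the adjoint-based gradient map (state solve + adjoint solve).

    Returns the final iterate and the number of iterations performed."""
    ua, ub = bounds
    u = list(u0)
    for k in range(max_iter):
        # project the unconstrained update onto the admissible box
        u_new = [min(max(-gi / alpha, ua), ub) for gi in g(u)]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            return u_new, k + 1
        u = u_new
    return u, max_iter
```

For an affine map $g$ whose Lipschitz constant is small relative to $\alpha$ the iteration is a contraction; if the starting point already activates the binding bound, convergence occurs in two iterations, as observed in the experiments.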
\begin{figure}[htbp] \includegraphics[scale=0.33]{true_control_u1_box} \includegraphics[scale=0.33]{control_u1_uni_box_new.pdf} \includegraphics[scale=0.33]{control_u2_adapt_box_new.pdf}\\ \includegraphics[scale=0.33]{true_control_u2_box.pdf} \includegraphics[scale=0.33]{control_u2_uni_box_new.pdf} \includegraphics[scale=0.33]{control_u1_adapt_box_new.pdf} \caption{Test 3: Inclusion of box constraints for the control intensities: Analytical control intensities $\bar{u}_1(t)$ (top left) and $\bar{u}_2(t)$ (bottom left), POD control utilizing an equidistant time grid with $ \Delta t = 1/40$ (middle) and $\ell = 4$, POD control utilizing an adaptive time grid with dof=41 (right) and $\ell = 4$} \label{fig:ex_2_controls_box} \end{figure} \begin{table}[htbp] \centering \begin{tabular}{ c | c | c | c || c | c | c | c} \toprule $\Delta t$ & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ & dof & $\varepsilon_{\text{abs}}^y$ & $\varepsilon_{\text{abs}}^u$ & $\varepsilon_{\text{abs}}^p$ \\ \hline 1/20 & $ 2.8601 \cdot 10^{-01}$ & $5.7201 \cdot 10^{+00}$ & $3.5430 \cdot 10^{+01}$ & 21 & $2.2714 \cdot 10^{-02}$ & $3.9586 \cdot 10^{-01}$ & $2.4423 \cdot 10^{+00}$ \\ 1/40 & $ 1.4802 \cdot 10^{-01}$ & $2.9955 \cdot 10^{+00}$ & $1.8551 \cdot 10^{+01}$ & 41 & $2.9482 \cdot 10^{-04}$ & $4.4969 \cdot 10^{-03}$ & $1.3183 \cdot 10^{-02}$ \\ 1/68 & $8.8124 \cdot 10^{-02}$ & $1.7882 \cdot 10^{+00}$ & $1.1071 \cdot 10^{+01}$ & 69 & $2.1247 \cdot 10^{-04}$ & $3.2811 \cdot 10^{-03}$ & $4.2629 \cdot 10^{-03}$ \\ 1/134 & $4.4570 \cdot 10^{-02}$ & $9.0470 \cdot 10^{-01}$ & $5.5965 \cdot 10^{+00}$ & 135 & $2.1330 \cdot 10^{-04}$ & $3.1321 \cdot 10^{-03}$ & $2.3474 \cdot 10^{-03}$\\ \bottomrule \end{tabular} \vspace{0.4cm} \caption{Test 3: Inclusion of box constraints for the control intensities: Absolute errors between the analytical optimal solution and the POD solution with $\ell = 4$ depending on the time discretization (equidistant: columns 1-4, 
adaptive: columns 5-8)} \label{tab:ex_2_errors_box} \end{table} \section{Conclusion} In this paper we investigated the problem of snapshot location in optimal control problems. We showed that the numerical POD solution is much more accurate if an adaptive time grid is used, especially when the solution of the problem exhibits steep gradients. The time grid was computed by means of an a-posteriori error estimation strategy for the space-time approximation of an elliptic equation that is of second order in time and fourth order in space; this equation describes the optimal control problem and has the advantage of being independent of an input control function. Furthermore, a coarse spatial approximation of the latter equation provides information on the snapshots one can use to build the surrogate model. Finally, we provided a certification of our surrogate model by means of an a-posteriori estimate for the error between the optimal solution and the POD solution.\\ For future work, we are interested in transferring our approach to optimal control problems subject to nonlinear parabolic equations.
\bigskip \noindent {\it A-posteriori snapshot location for POD in optimal control of linear parabolic equations}, arXiv:1608.08665 (math.OC, September 2016), https://arxiv.org/abs/1608.08665.
\bigskip \noindent {\bf Krein-type theorems and ordered structure for Cauchy--de Branges spaces}\\ arXiv:1802.03385, https://arxiv.org/abs/1802.03385 \medskip \\ {\bf Abstract.} We extend some results of M.G. Krein to the class of entire functions which can be represented as ratios of discrete Cauchy transforms in the plane. As an application we obtain new versions of de Branges' Ordering Theorem for nearly invariant subspaces in a class of Hilbert spaces of entire functions. Examples illustrating sharpness of the obtained results are given.
\section{Introduction} \label{int} \subsection{Krein's theorem.} M.G. Krein's theorem on functions of Cartwright class plays a seminal role in entire function theory and its applications to spectral theory of linear operators. Recall that an entire function $F$ is said to be of {\it Cartwright class} if it is of finite exponential type and $$ \int_{\mathbb{R}} \frac{\log^+|F(x)|}{1+x^2}dx <\infty. $$ Krein's theorem can be stated as follows (for the necessary background see Section \ref{prel}): \medskip \\ {\bf Theorem (M.G. Krein).} {\it Let $F$ be an entire function. If $F$ is a function of bounded type both in the upper half-plane $\mathbb{C}^+$ and the lower half-plane $\mathbb{C}^-$, then $F$ is a function of Cartwright class. In particular, $F$ is of finite exponential type and its type is equal to $\max ({\rm mt}_+(F), {\rm mt}_-(F))$, where ${\rm mt}_+(F)$ and ${\rm mt}_-(F)$ denote the mean type of $F$ in $\mathbb{C^+}$ and in $\mathbb{C^-}$, respectively. } \medskip For different approaches to this result see \cite[Part II, Chapter 1]{hj} or \cite[Lecture 16]{lev}; its applications to the spectral theory of non-dissipative operators can be found, e.g., in~\cite[Section IV.8]{Gohb_Krein}. A typical situation when Krein's theorem is applicable is when $F$ can be represented as a ratio of two Cauchy transforms of some discrete measures on $\mathbb{R}$. Namely, for $T = \{t_n\}_{n=1}^\infty \subset \mathbb{R}$ and $a= \{a_n\}_{n=1}^\infty \in \ell^1$ consider the discrete {\it Cauchy transform} $$ \mathcal{C}_a(z) = \sum_n \frac{a_n}{z- t_n}. $$ The condition $a\in \ell^1$ can be relaxed. Assume that $t_n \ne 0$ and, for some $k\in \mathbb{N}$, $\sum_n |t_n|^{-k-1}|a_n| <\infty$.
Then we define the {\it regularized Cauchy transform} as \begin{equation} \label{reg} \mathcal{C}_{a, k} (z) = \sum_n a_n\bigg(\frac{1}{z- t_n} +\frac{1}{t_n} + \dots + \frac{z^{k-1}}{t_n^k}\bigg) = z^k \sum_n \frac{a_n}{t_n^k (z-t_n)} \end{equation} (we do not need to regularize the Cauchy kernel at infinity when $k=0$). The functions of the form $\mathcal{C}_{a, k}$ are of bounded type in $\mathbb{C^+}$ and in $\mathbb{C^-}$. Therefore, if an entire function $F$ can be represented as $\mathcal{C}_{a, k}/\mathcal{C}_{b, m}$, then $F$ is of finite exponential type. A special case of the above statement is the following theorem, also due to Krein (see \cite[Theorem 4]{krein47} or \cite[Lecture 16]{lev}): {\it Assume that $F$ is an entire function, which is real on $\mathbb{R}$, with simple real zeros $t_n \ne 0$ and such that, for some integer $k\ge 0$, we have $$ \sum_n \frac{1}{|t_n|^{k+1} |F'(t_n)|}<\infty $$ and \begin{equation} \label{krein} \frac{1}{F(z)} = R(z) + \sum_n \frac{1}{F'(t_n)} \cdot \bigg(\frac{1}{z-t_n} +\frac{1}{t_n}+ \cdots + \frac{z^{k-1}}{t_n^k} \bigg), \end{equation} where $R$ is some polynomial. Then $F$ is of Cartwright class.} Krein \cite[Theorem 5]{krein47} also showed that the condition $t_n\in\mathbb{R}$ can be relaxed to the Blaschke condition $\sum_n |t_n|^{-2} |{\rm Im}\, t_n| <\infty$. Some further refinements of this result are due to A.G.~Bakan and V.B.~Sherstyukov (see, e.g., \cite{shers} and references therein). \medskip One of the goals of this paper is to extend the above results to the case of Cauchy transforms of measures which are supported by some discrete set $\{t_n\}$ in $\mathbb {C}$ where $t_n$ {\it are no longer real}. In what follows we assume that $T=\{t_n\}\subset \mathbb {C}$, $t_n$ are pairwise distinct, and $|t_n|\to \infty$ as $n\to\infty$. To simplify the formulas we assume, as above, that $0\notin T$ (if $0\in T$ then an obvious modification of the formulas is required, but all results remain true).
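The two expressions for $\mathcal{C}_{a,k}$ in \eqref{reg} agree termwise because of the finite geometric sum $\sum_{j=1}^{k} \frac{z^{j-1}}{t_n^{j}} = \frac{1}{t_n}\sum_{j=0}^{k-1}\big(\frac{z}{t_n}\big)^{j} = \frac{(z/t_n)^k - 1}{z-t_n}$, whence
$$ \frac{1}{z-t_n} +\frac{1}{t_n} + \dots + \frac{z^{k-1}}{t_n^k} = \frac{1}{z-t_n}\Big(1 + \Big(\frac{z}{t_n}\Big)^{k} - 1\Big) = \frac{z^k}{t_n^k(z-t_n)}. $$
A classical example illustrating \eqref{krein} is $F(z) = \sin \pi z$ with zeros $t_n = n$ and $F'(n) = \pi(-1)^n$: the Mittag-Leffler expansion
$$ \frac{1}{\sin \pi z} = \frac{1}{\pi z} + \sum_{n\ne 0} \frac{(-1)^n}{\pi}\Big(\frac{1}{z-n} + \frac{1}{n}\Big) $$
is of the form \eqref{krein} with $k=1$ and $R=0$ (the term at $t_0 = 0$ needs no regularization), the condition $\sum_{n\ne 0} |n|^{-2}|F'(n)|^{-1} = \frac{2}{\pi}\sum_{n\ge 1} n^{-2}<\infty$ holds, and indeed $\sin \pi z$ is of Cartwright class and of exponential type $\pi$.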
\medskip \\ {\bf Question.} {\it Let $F$ be an entire function such that $F = \mathcal{C}_{a, k}/\mathcal{C}_{b, m}$ for some $a,b, k, m$. Under what conditions on $T$ can we conclude that $F$ is a function of finite exponential type?} \medskip We find several conditions on $T$ ensuring that this (and even more) is true. We also provide some examples which show the sharpness of these conditions. \subsection{The spaces of Cauchy transforms} One motivation for the study of the above question is related to spectral theory of rank one perturbations of compact normal operators. Recently, D.V.~Yakubovich and the second author \cite{by} applied a functional model to the study of rank one perturbations of compact selfadjoint operators. Using this model, a number of results was obtained about completeness and spectral synthesis for such perturbations. The functional model in question acts in the so-called de Branges spaces of entire functions which can be identified with the spaces of Cauchy transforms of discrete measures supported by $\mathbb{R}$. To extend the results of \cite{by} to the case of rank one perturbations of {\it normal} (non-selfadjoint) compact operators one needs to have analogues of Krein's theorems for the case of non-real $t_n$. A similar functional model for perturbations of normal operators was constructed in \cite{by1}; it also acts in some space of discrete Cauchy transforms, which we introduce now. Let $T = \{t_n\}_{n=1}^\infty$ be a set as above and let $\mu=\sum_n\mu_n\delta_{t_n}$ be a positive measure such that $\sum_n \frac{\mu_n}{|t_n|^2 +1}<\infty$. Also let $A$ be an entire function which has only simple zeros and whose zero set $\mathcal{Z}_A$ coincides with $T$. 
With any such $T$, $A$ and $\mu$ we associate the space $\mathcal{H}(T,A,\mu)$ of entire functions, $$ \mathcal{H}(T,A,\mu):=\biggl\{f:f(z)= A(z)\sum_n\frac{a_n\mu^{1/2}_n}{z-t_n}, \quad a = \{a_n\}\in\ell^2\biggr\} $$ equipped with the norm $\|f\|_{\mathcal{H}(T,A,\mu)}:=\|a\|_{\ell^2}$. In what follows, the spaces ${\mathcal{H}(T,A,\mu)}$ will be called {\it Cauchy--de Branges spaces}. The spaces $\mathcal{H}(T,A,\mu)$ were introduced in full generality by Yu. Belov, T. Mengestie and K. Seip \cite{bms}. Essentially, they are spaces of Cauchy transforms: we multiply them by the entire function $A$ to get rid of poles and make the elements entire, but the space does not depend substantially on the choice of $A$. The spaces with the same $T$, $\mu$ and different choices of $A$ are isomorphic. In what follows we will always assume that $T$ has a finite convergence exponent (i.e., $\sum_n |t_n|^{-K} <\infty$ for some $K>0$) and $A$ is some canonical product of the corresponding order. We call the pair $(T, \mu)$ the {\it spectral data} for $\mathcal{H}(T,A,\mu)$. It is noted in \cite{bms} that each space $\mathcal{H}(T,A,\mu)$ is a reproducing kernel Hilbert space and, moreover, if $\mathcal{H}$ is a reproducing kernel Hilbert space of entire functions such that \medskip \begin{enumerate} \item [(i)] $\mathcal{H}$ has the {\it division property}, that is, $\frac{f(z)}{z-w} \in \mathcal{H}$ whenever $f\in\mathcal{H}$ and $f(w) = 0$, \medskip \item [(ii)] there exists a Riesz basis of reproducing kernels in $\mathcal{H}$, \end{enumerate} \medskip then $\mathcal{H} = \mathcal{H}(T,A,\mu)$ (as sets with equivalence of norms) for some choice of the parameters. Note that the functions $\overline{A'(t_n)} \mu_n \cdot \frac{A(z)}{z-t_n}$ form an orthogonal basis in $\mathcal{H}(T,A,\mu)$ and are the reproducing kernels at the points $t_n$. In the case when $T\subset \mathbb{R}$ and $A$ is real on $\mathbb{R}$, the space $\mathcal{H}(T,A,\mu)$ is a de Branges space.
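The reproducing property of these kernels can be verified by a direct computation. If $f(z)= A(z)\sum_m \frac{a_m\mu_m^{1/2}}{z-t_m}$, then letting $z\to t_n$ and using $A(t_n)=0$ leaves only the $m=n$ term, so that $f(t_n) = A'(t_n)\, a_n \mu_n^{1/2}$. On the other hand, the function $\overline{A'(t_n)}\mu_n\frac{A(z)}{z-t_n}$ corresponds to the coefficient sequence $b_m = \delta_{mn}\overline{A'(t_n)}\,\mu_n^{1/2}$, and hence
$$ \Big\langle f,\ \overline{A'(t_n)}\mu_n \frac{A(\cdot)}{\cdot - t_n}\Big\rangle_{\mathcal{H}(T,A,\mu)} = \sum_m a_m \overline{b_m} = a_n A'(t_n)\mu_n^{1/2} = f(t_n). $$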
The theory of de Branges spaces is a deep and important field with numerous applications to operator theory, spectral theory of differential operators, and even number theory. For the basics of de Branges theory we refer to de Branges' monograph \cite{br} and to \cite{rom}; some further results and applications can be found in \cite{abb, by, lag, mp, os, rem}. Since the spaces $\mathcal{H}(T,A,\mu)$ are defined in terms of Cauchy transforms and also are a generalization of de Branges spaces, the term {\it a Cauchy--de Branges space} seems to be appropriate. We believe it is a noteworthy problem to extend certain aspects of de Branges theory to the more general setting of Cauchy--de Branges spaces. One of the most striking features of de Branges spaces is the ordered structure of their subspaces which are themselves de Branges spaces: {\it If $\mathcal{H}_1$ and $\mathcal{H}_2$ are two de Branges subspaces of a de Branges space $\mathcal{H}$, then either $\mathcal{H}_1 \subset \mathcal{H}_2$ or $\mathcal{H}_2 \subset \mathcal{H}_1$} \cite[Theorem 35]{br}. Here we study this problem for Cauchy--de Branges spaces and, as an application of the Krein-type theorems obtained in the first part of the paper, establish the ordering property for a class of these spaces. \subsection{Main results} We will develop the Krein-type theory for measures with non-real supports in the following three cases: \medskip \begin{enumerate} \item [(i)] $\mathbf{Z:}$ $T$ is the zero set of some entire function of zero exponential type; \smallskip \item [(ii)] $\mathbf{\Pi:}$ $T$ lies in some strip and has finite convergence exponent; \smallskip \item [(iii)] $\mathbf{A_\gamma}:$ $T$ lies in some angle of size $\pi\gamma$, $0<\gamma<1$, and the convergence exponent of $T$ is strictly less than $\gamma^{-1}$. \end{enumerate} \begin{theorem} \label{main1} Let $T$ satisfy one of the conditions $\mathbf{Z}$, $\mathbf{\Pi}$ or $\mathbf{A_\gamma}$.
Assume that $F$ is an entire function such that $F = \mathcal{C}_{a, k}/\mathcal{C}_{b, m}$, where $\mathcal{C}_{a, k}$ and $\mathcal{C}_{b, m}$ are regularized Cauchy transforms with poles in $T$ defined in \eqref{reg}. Then $F$ is a function of finite exponential type. In cases $\mathbf{Z}$ and $\mathbf{A_\gamma}$, the function $F$ is of zero exponential type. In cases $\mathbf{\Pi}$ and $\mathbf{A_\gamma}$, $F$ is of Cartwright class with respect to some line. \end{theorem} As a corollary of Theorem \ref{main1} we see that if a finite order function $F$ with zeros in a strip or a function of order less than $\gamma^{-1}$ with zeros in the angle of size $\pi\gamma$ admits the representation \eqref{krein}, then $F$ is a function of exponential type. The first of these observations was proved in \cite{shers} where Krein-type theorems for functions with zeros in a strip were studied. The relation between the size of the angle and the order in the case $\mathbf{A_\gamma}$ is optimal. \begin{theorem} \label{count_a} For any $\gamma \in (0,1)$ there exists an entire function $F$ of order precisely $\gamma^{-1}$ such that all its zeros $t_n$ are simple, lie in an angle of size $\pi\gamma$ and $$ \sum_n \frac{1}{|F'(t_n)|} <\infty, \qquad \frac{1}{F(z)} = \sum_n \frac{1}{F'(t_n) (z-t_n)}. $$ \end{theorem} \medskip We use Theorem \ref{main1} to establish Ordering Theorems for the Cauchy--de Branges spaces $\mathcal{H}(T,A,\mu)$. In fact, we consider a more general and, in a sense, more natural class of subspaces: these are {\it nearly invariant} (or {\it division-invariant}) subspaces. A closed subspace $\mathcal{H}_0$ of a Cauchy--de Branges space $\mathcal{H}$ is said to be {\it nearly invariant} if there is $w_0\in \mathbb{C}$ such that $\frac{f(z)}{z-w_0} \in \mathcal{H}_0$ whenever $f\in \mathcal{H}_0$ and $f(w_0) =0$. 
It is known that this property is equivalent to a stronger {\it division invariance property}: for any $w\in \mathbb{C}$ such that there exists $f\in \mathcal{H}_0$ with $f(w)\ne 0$ ($w$ is not a {\it common zero for $\mathcal{H}_0$}), $$ f\in \mathcal{H}_0, \ \ f(w) = 0 \ \Longrightarrow \ \frac{f(z)}{z-w}\in \mathcal{H}_0. $$ In the context of Hardy spaces in general domains the equivalence of nearly invariance and division invariance is shown in \cite[Proposition 5.1]{ar}; a similar argument works for general spaces of analytic functions (see Proposition \ref{nea} below). While the de Branges theory guarantees a rich structure of de Branges subspaces in a de Branges space, it is not clear whether there always exist many subspaces of $\mathcal{H}(T,A,\mu)$ which have a Riesz basis of reproducing kernels (i.e., are Cauchy--de Branges spaces themselves). However, there exist many nearly invariant subspaces. A natural construction of a nearly invariant subspace is as follows. Given a function $G\in \mathcal{H}(T,A,\mu)$, consider the subspace of $\mathcal{H}(T,A,\mu)$ defined as \begin{equation} \label{rs} \mathcal{H}_G = \overline{\rm Span}\, \Big\{\frac{G}{z-\lambda}: G(\lambda) = 0 \Big\}. \end{equation} We can also define $\mathcal{H}_G$ in a slightly more general situation when $G$ possibly is not in $\mathcal{H}(T,A,\mu)$, but $\frac{G}{z-\lambda} \in \mathcal{H}(T,A,\mu)$ whenever $G(\lambda) = 0$. It is easy to see that if $G$ has simple zeros then $\mathcal{H}_G$ is nearly invariant (and, thus, division-invariant). Clearly, any subspace $\mathcal{H}$ which is itself a Cauchy--de~Branges space is of the form \eqref{rs} (indeed, if $\mathcal{H} = \mathcal{H}(T_1, A_1, \mu_1)$, then $\mathcal{H} = \mathcal{H}_{A_1}$). We do not know at present whether any division-invariant subspace of $\mathcal{H}(T,A,\mu)$ is of the form $\mathcal{H}_G$. 
We will consider mostly nearly invariant subspaces $\mathcal{H}_0$ {\it without common zeros}, that is, such that $\mathcal{Z}(\mathcal{H}_0) =\emptyset$, where $\mathcal{Z}(\mathcal{H}_0)= \{w\in \mathbb{C}:\, f(w) = 0\ \text{for any}\ f\in \mathcal{H}_0\}$. Note that subspaces of the form $\mathcal{H}_G$ have no common zeros when zeros of $G$ are simple. Now we formulate two theorems which extend de Branges' Ordering Theorem to Cauchy--de Branges spaces. \begin{theorem} \label{main2} Let $T$ satisfy one of the conditions $\mathbf{Z}$ or $\mathbf{A_\gamma}$ and let $\mathcal{H}_1$ and $\mathcal{H}_2$ be two nearly invariant subspaces of $\mathcal{H}(T,A,\mu)$ without common zeros. Then either $\mathcal{H}_1 \subset \mathcal{H}_2$ or $\mathcal{H}_2 \subset \mathcal{H}_1$. \end{theorem} To state a similar result for the strip case $\mathbf{\Pi}$ we need to impose some conditions. Otherwise the statement is no longer true even in the case of real zeros. \begin{theorem} \label{main3} Let $T \subset \{-h \le {\rm Im}\, z\le h\}$, $h>0$, and let $\mathcal{H}_1$ and $\mathcal{H}_2$ be two nearly invariant subspaces of $\mathcal{H}(T,A,\mu)$ without common zeros. Assume, moreover, that $\mathcal{H}_1$ and $\mathcal{H}_2$ are closed under the $*$-transform $f\mapsto f^*$, where $f^*(z) = \overline{f(\overline z)}$. Then either $\mathcal{H}_1 \subset \mathcal{H}_2$ or $\mathcal{H}_2 \subset \mathcal{H}_1$. \end{theorem} In other words, in the above cases the set of all nearly invariant subspaces without common zeros (and, in particular, the set of all Cauchy--de Branges subspaces) of $\mathcal{H}(T,A,\mu)$ is totally ordered by inclusion. However, without any restrictions on the growth or spectrum localization the ordered structure for nearly invariant subspaces fails. 
This is illustrated by the following \begin{theorem} \label{main4} There exists a space $\mathcal{H}(T,A,\mu)$ of order $2$ and two nearly invariant and $*$-closed subspaces $\mathcal{H}_1$ and $\mathcal{H}_2$ without common zeros such that neither $\mathcal{H}_1 \subset \mathcal{H}_2$ nor $\mathcal{H}_2 \subset \mathcal{H}_1$. Moreover, these subspaces can be chosen to be of the form \eqref{rs}. \end{theorem} The paper is organized as follows. In Section \ref{prel} we discuss our main tools from function theory. Theorem \ref{main1} is proved in Section \ref{proofth1}, while in Section \ref{count0} counterexamples are given illustrating its sharpness. Ordering Theorems \ref{main2} and \ref{main3} are proved in Section \ref{proofth23}. The construction of two nearly invariant subspaces which do not contain each other is presented in Section \ref{count}. \bigskip \section{Preliminaries} \label{prel} In what follows we write $U(x)\lesssim V(x)$ if there is a constant $C$ such that $U(x)\leq CV(x)$ holds for all $x$ in the set in question. We write $U(x)\asymp V(x)$ if both $U(x)\lesssim V(x)$ and $V(x)\lesssim U(x)$. The standard Landau notations $O$ and $o$ will also be used. For the basic notions (such as order and type) of entire function theory see, e.g., \cite{lev1, lev}. The order of an entire function $f$ will be denoted by $\rho(f)$ and its zero set by $\mathcal{Z}_f$. We denote by $D(z,R)$ the disc with center $z$ of radius $R$. The symbol $m_2$ will denote the area Lebesgue measure in $\mathbb {C}$, while, for a measurable set $E\subset \mathbb{R}$, we denote its one-dimensional Lebesgue measure by $|E|$. \subsection{Functions of bounded type.} In this subsection we recall some definitions and basic facts about functions of bounded type. Denote by $H^p= H^p(\mathbb{C^+})$, $1\le p\le \infty$, the standard Hardy spaces in the upper half-plane. For the inner-outer factorization and other basic properties of the Hardy spaces see, e.g., \cite{ko}.
Recall that if $m$ is a non-negative function on $\mathbb{R}$ such that $\log m \in L^1\big(\frac{dt}{t^2+1}\big)$, then we can define the {\it outer function} $O_m$ as $$ O_m(z) = \exp\bigg(\frac{1}{2\pi i} \int_\mathbb{R} \bigg( \frac{1}{t-z}- \frac{t}{t^2+1}\bigg)\log m(t)\, dt \bigg). $$ We will use the following well-known estimates for outer functions. By a very rough estimate $\frac{y}{(t-x)^2+y^2} \lesssim \big(y+ \frac{x^2+1}{y}\big)\frac{1}{t^2+1}$ we have for $z =x+iy = r e^{i\theta}\in \mathbb {C}^+$, $r\ge 1$, \begin{equation} \label{out} \big| \log|O_m(z)| \big| \le \frac{y}{\pi}\int_\mathbb{R} \frac{|\log m(t)|}{(t-x)^2+y^2}\, dt \lesssim y+ \frac{x^2+1}{y}\lesssim \frac{r}{\sin\theta}. \end{equation} In particular, for any $\delta>0$, \begin{equation} \label{out1} \big|\log|O_m(z)|\big| \lesssim |z|, \qquad \delta<\arg z<\pi-\delta, \ \ |z|\ge 1. \end{equation} A function $f$ analytic in $\mathbb {C}^+$ is said to be of {\it bounded type}, if $f=g/h$ for some functions $g$, $h\in H^\infty$. If, moreover, $h$ can be taken to be outer, we say that $f$ is in the \textit{Smirnov class $\mathcal{N}_+ = \mathcal{N}_+(\mathbb {C}^+)$}. Analogously, we can define functions of bounded type and Smirnov class functions in any given half-plane. If $f$ is a function of bounded type in $\mathbb {C}^+$, it has the canonical factorization $f = OB S_1/S_2$, where $O$ is the outer factor for $f$, $B$ is a Blaschke product and $S_1, S_2$ are some (mutually prime) singular inner functions. We define the mean type of $f$ as $$ {\rm mt}(f) = \limsup\limits_{y\to \infty}\frac{\log|f(iy)|}{y}. $$ The mean type is equal to $a$ if and only if there is a factor $e^{-iaz}$ in the canonical factorization of $f$. If we assume additionally that $f$ is continuous up to $\mathbb{R}$ then the singular inner functions cannot have singularities on $\mathbb{R}$ and so $S_1/S_2= e^{iaz}$ for some $a\in\mathbb{R}$.
Thus, in this case $f \in \mathcal{N}_+(\mathbb {C}^+)$ if and only if ${\rm mt}(f) \le 0$. Estimate \eqref{out1}, a similar estimate for the singular factor and the Hayman theorem \cite[Lecture 15]{lev}, which gives an estimate from below for the Blaschke product outside a union of angles of arbitrarily small total size, imply the following estimates: \begin{lemma} \label{boun0} If $f$ is a function of bounded type in $\mathbb {C}^+$ and ${\rm mt}(f) = a$, then, for any $\varepsilon, \delta>0$, there exists $R>0$ such that $$ \log |f(z)| \le (a \sin \delta + \varepsilon )|z|, \qquad \delta<\arg z<\pi-\delta, \ \ |z|\ge R. $$ More generally, if $f = O\frac{B_1 S_1}{B_2 S_2}e^{iaz}$ where $O$ is the outer factor, $B_1$ and $B_2$ Blaschke products, $S_1, S_2$ singular inner functions and $a = {\rm mt}(f)$, then for any $\varepsilon, \delta>0$, there exist $R$ and a set $E\subset [\delta, \pi-\delta]$ which is a union of intervals of total length less than $\varepsilon$ such that $$ (a \sin \delta - \varepsilon )|z| \le \log |f(z)| \le (a \sin \delta + \varepsilon ) |z|, \qquad \arg z \notin E, \ \ |z|\ge R. $$ \end{lemma} For the theory of the Cartwright class we refer to~\cite{hj, ko1, lev}. The following lemma will often be useful. \begin{lemma} \label{boun} Let $\mathbb{H}_+$ and $\mathbb{H}_-$ be two complementary half-planes and assume that $T\subset \mathbb{H}_-$. Then any regularized Cauchy transform $\mathcal{C}_{a, k}$ given by \eqref{reg} is a function from the Smirnov class in $\mathbb{H}_+$. \end{lemma} \begin{proof} Without loss of generality, let $\mathbb{H}_+ = \mathbb{C^+}$. It is well known that if $f$ is analytic in $\mathbb {C}^+$ and ${\rm Im}\, f>0$, then $f$ is in the Smirnov class~\cite[Part 2, Ch. 1, Sect. 5]{hj}. Thus, if $u_n>0$ and $\sum_n u_n<\infty$, then the function $\sum_n \frac{u_n}{t_n-z}$ is in the Smirnov class $\mathcal{N}_+$. Consequently, $\sum_n \frac{v_n}{t_n-z} \in \mathcal{N}_+$ for any $\{v_n\}\in \ell^1$.
Finally, $f(z) = z$ also is in $\mathcal{N}_+$ and the result follows immediately from formula \eqref{reg}. \end{proof} \subsection{Estimates of Cauchy transform in the complex plane.} The following two results from \cite{bbb-fock} about the asymptotic behaviour of Cauchy transforms of measures in the plane will be useful. We say that $\Omega\subset \mathbb {C}$ is a {\it set of zero area density} if $$ \lim_{R\to\infty} \frac{m_2(\Omega \cap D(0, R))}{R^2} = 0. $$ \begin{lemma} \cite[Proof of Lemma 4.3]{bbb-fock} \label{verd} Let $\nu$ be a finite complex Borel measure in $\mathbb {C}$. Then, for any $\varepsilon>0$, there exists a set $\Omega$ of zero area density such that \begin{equation} \label{meas} \bigg|\int_\mathbb{C}\frac{d\nu(\xi)}{z-\xi} - \frac{\nu(\mathbb{C})}{z} \bigg| < \frac{\varepsilon}{|z|}, \qquad z\in\mathbb {C}\setminus\Omega. \end{equation} \end{lemma} The following result from \cite{bbb-fock}, which is due to A. Borichev, can be considered as an extension of the classical Liouville theorem. \begin{theorem} \cite[Lemma 4.2]{bbb-fock} \label{dens} If an entire function $f$ of finite order is bounded on $\mathbb {C}\setminus \Omega$ for some set $\Omega$ of zero area density, then $f$ is a constant. \end{theorem} Next we discuss growth properties of functions in the spaces $\mathcal{H}(T,A,\mu)$. \begin{lemma} \label{gr1} Let $A$ be an entire function with the zero set $T$ and let $A$ be of order $\rho$. Then for any $\varepsilon>0$, there exists a set $E\subset (0,\infty)$ of zero linear density \textup(i.e., $|E \cap (0, R)| = o(R)$, $R\to\infty$\textup) such that for any entire function $f$ of the form $f = A \mathcal{C}_{a, k}$, \begin{equation} \label{cart} |f(z)| \lesssim |z|^{\rho+k+1+\varepsilon} |A(z)|, \qquad |z|\notin E. \end{equation} In particular, if $A$ is of order $\rho$ and type $\sigma$, then any element of $\mathcal{H}(T,A,\mu)$ is of order at most $\rho$ and of type at most $\sigma$ with respect to this order. 
\end{lemma} \begin{proof} Let $a=(a_n)$ be such that $\sum_n |t_n|^{-k-1} |a_n|<\infty$. In view of the representation $$ A(z)\mathcal{C}_{a, k} (z) = A(z)P(z) + A(z) z^{k+1} \sum_n \frac{a_n}{t_n^{k+1}(z-t_n)}, $$ where $P$ is a polynomial of degree at most $k$, it suffices to prove the statement for $\mathcal{C}_{a}$ with $a\in \ell^1$. For a fixed sufficiently large $n\in \mathbb{N}$ let $\mathcal{R} = \{z: 2^{n}\le |z| \le 2^{n+1}\}$ and $\mathcal{R}' = \{z: 2^{n-1}\le |z| \le 2^{n+2}\}$. Let $\{t_{n_1}, t_{n_2}, \dots, t_{n_p} \} = T\cap \mathcal{R}'$. Since $A$ is of order $\rho$, we have $p\lesssim 2^{(\rho+\varepsilon)n}$. Let $f= A\sum_{n}\frac{a_n}{z-t_n}$. Then, for $z\in \mathcal{R}$, $$ \bigg|\sum_{t_n \notin \mathcal{R}'} \frac{a_n}{z-t_n}\bigg| \lesssim \sum_{t_n \notin \mathcal{R}'} \frac{|a_n|}{|t_n|} \lesssim 1. $$ By the classical Cartan's lemma \cite[Chapter 1, \S 7]{lev1}, there exist discs $D_j$, $j=1, \dots p$, of radii $r_j$ such that $\sum_{j=1}^p r_j <2$ and $$ \min_{z\in \mathcal{R} \setminus \cup_j D_j} {\rm dist}\, (z, T\cap\mathcal{R}') \ge \frac{1}{p}. $$ Hence, for $z\in \mathcal{R} \setminus \cup_j D_j$, we have $$ \bigg|\sum_{t_n \in \mathcal{R}'} \frac{a_n}{z-t_n}\bigg| \lesssim p \sum_{t_n \in \mathcal{R}'} |a_n| \lesssim p \lesssim |z|^{\rho+\varepsilon}. $$ Now we repeat this procedure for any $n\in \mathbb{N}$ and define $E$ as the set of $r$ such that $\{|z|=r\} \cap (\cup D_j) \ne \emptyset$. Then \eqref{cart} holds for any $z$ with $|z|\notin E$. \end{proof} Note that we have the following criterion for the inclusion of $f$ into $\mathcal{H}(T,A,\mu)$. \begin{theorem} \label{inc} Let $\mathcal{H}(T,A,\mu)$ be a Cauchy--de Branges space and let $A$ be of finite order. 
Then an entire function $f$ is in $\mathcal{H}(T,A,\mu)$ if and only if the following three conditions hold: \begin{enumerate} \item [(i)] $\sum_n\dfrac{|f(t_n)|^2}{|A'(t_n)|^2 \mu_n} <\infty$\textup; \smallskip \item [(ii)] there exists a set $E\subset (0,\infty)$ of zero linear density and $N>0$ such that $|f(z)| \le |z|^N |A(z)|$, $|z| \notin E$\textup; \smallskip \item [(iii)] there exists a set $\Omega$ of positive area density such that $|f(z)| = o(|A(z)|)$, $|z|\to \infty$, $z\in \Omega$. \end{enumerate} \end{theorem} \begin{proof} The necessity of (i) is obvious since for $f= A\sum_{n}\frac{c_n\mu_n^{1/2}}{z-t_n} \in \mathcal{H}(T,A,\mu)$ we have $f(t_n) = A'(t_n)c_n \mu_n^{1/2}$ and $\{c_n\}\in \ell^2$. The necessity of (ii) is proved in Lemma \ref{gr1}. Finally, the representation $$ \frac{f(z)}{A(z)} = z \sum_{n}\frac{c_n\mu_n^{1/2}}{t_n(z-t_n)} - \sum_{n}\frac{c_n\mu_n^{1/2}}{t_n} $$ and Lemma \ref{verd} imply that $f(z)/A(z) = o(1)$ as $|z|\to\infty$ outside a set of zero density. To prove the sufficiency consider the function $$ H(z) = \frac{f(z)}{A(z)} - \sum_{n} \frac{f(t_n)}{A'(t_n)(z-t_n)} $$ which is well defined by (i) and entire. Condition (ii) and Lemma \ref{gr1} imply that $H$ is a polynomial. Finally, note that, by the same argument as above, $\sum_{n} \frac{f(t_n)}{A'(t_n)(z-t_n)}$ tends to zero as $|z|\to\infty$ on some set $\Omega_1$ whose complement has zero density. Hence, by (iii), $|H(z)|\to 0$, $|z|\to\infty$, $z\in \Omega\cap\Omega_1$. Since the set $\Omega\cap\Omega_1$ is obviously unbounded, we conclude that $H\equiv 0$. Thus, $f$ has the required representation with $c_n = f(t_n)/(A'(t_n)\mu_n^{1/2})$. \end{proof} In many cases one can relax the conditions (ii)--(iii) and require the estimates on a smaller set. Note that, for $f\in \mathcal{H}(T,A,\mu)$, $$ \|f\|^2_{\mathcal{H}(T,A,\mu)} = \|\{c_n\}\|_{\ell^2}^2 = \sum_n \frac{|f(t_n)|^2}{|A'(t_n)|^2 \mu_n}.
$$ Thus, the space $\mathcal{H}(T,A,\mu)$ is isometrically embedded into the space $L^2(\nu)$, where \begin{equation} \label{nu} \nu = \sum_n |A'(t_n)|^{-2} \mu_n^{-1} \delta_{t_n}. \end{equation} \bigskip \section{Proof of Theorem \ref{main1}} \label{proofth1} In this section we prove Theorem \ref{main1}. In what follows we assume that $F = \mathcal{C}_{a, k}/\mathcal{C}_{b, m}$ and $T$ satisfies one of the conditions $\mathbf{Z}$, $\mathbf{\Pi}$, or $\mathbf{A_\gamma}$. \medskip In the {\bf Case} $\mathbf{Z}$ the result follows directly from Lemma~\ref{gr1}. Let $A$ be a function of zero exponential type with zero set $T$. Then $H = A \mathcal{C}_{a, k}$ is an entire function and by Lemma~\ref{gr1} we have $|H(z)|\lesssim |z|^N |A(z)|$ for $|z|$ outside some small exceptional set. We conclude that $H$ is of zero exponential type. Hence, if $F = \mathcal{C}_{a, k}/\mathcal{C}_{b, m}$, then we can write $F=H_1/H_2$ for two functions $H_1, H_2$ of minimal type. By the standard estimates of the minimum of modulus for entire functions \cite[Chapter 1, \S 8]{lev}, $F$ is of minimal type. \medskip {\bf Case $\mathbf{\Pi}$.} Without loss of generality, let $T \subset \{-h \le {\rm Im}\, z\le h\}$. Then, by Lemma \ref{boun}, $F$ is of bounded type in the half-planes $\mathbb {C}^+ +i h$ and $\mathbb {C}^- -ih$. Since $T$ has a finite convergence exponent, there exists an entire function $A$ of finite order such that $\mathcal{Z}_A = T$. Then, as above, we can write $F=H_1/H_2$ where $H_1 = A \mathcal{C}_{a, k}$, $H_2 = A \mathcal{C}_{b, m}$. By Lemma \ref{gr1}, $\rho(H_j)\le \rho(A)$ whence $\rho(F) \le \rho(A)$. Choose $\varepsilon\in (0,1)$ such that $\rho(F) < \pi/(2\varepsilon)$. By Lemma \ref{boun0}, there exists $R>0$ such that $$ \log|F(z)| \lesssim |z|, \qquad \arg z\in [\varepsilon, \pi-\varepsilon] \cup [\pi+\varepsilon, 2\pi -\varepsilon], \ |z|>R. 
$$ Since $\rho(F) < \pi/(2\varepsilon)$, we can apply the standard Phragm\'en--Lindel\"of principle to the angles $-\varepsilon <\arg z <\varepsilon$ and $\pi-\varepsilon < \arg z< \pi+\varepsilon$ to conclude that $F$ is of exponential type. Since $F$ is of bounded type in $\mathbb {C}^+ + ih$, we have $\log|F(t+ih)| \in L^1\big(\frac{dt}{t^2+1}\big)$. Therefore $F(z+ih)$ is of Cartwright class and, finally, $F$ is of Cartwright class. \medskip {\bf Case $\mathbf{A_\gamma}$.} Here we follow essentially the method of de Branges \cite[Theorem 11]{br}. Put $A(\gamma_1, \gamma_2) = \{z: \gamma_1 < \arg z <\gamma_2\}$. Choose $\gamma' \in (\gamma, 1 )$ such that $\rho(A) \le 1/\gamma'$. Without loss of generality we can assume that $T \subset A(\delta, \pi\gamma +\delta)$ where $\delta$ is so small that $\pi\gamma +\delta <\pi \gamma'$. By Lemma \ref{boun}, $F$ is of bounded type in the half-planes $\{-\pi +\delta <\arg z<\delta\}$ and $\{ \pi\gamma +\delta <\arg z<\pi\gamma +\delta +\pi\}$. Then, by Lemma \ref{boun0}, we have $$ \log|F(z)| \lesssim |z|, \qquad \pi\gamma' \le \arg z \le 2\pi +\delta/2. $$ It remains to estimate $|F|$ in the angle $A(\delta/2, \pi\gamma')$. By Lemma \ref{boun}, $F$ is of bounded type in $\mathbb{C}^-$. Then $\log|F| \in L^1\big(\frac{dt}{t^2+1}\big)$. Let $G$ be an outer function in $\mathbb {C}^+$ such that $|G| = |F|^{-1}$ on $\mathbb{R}$. Since $F$ is of bounded type in the half-plane $\{\pi\gamma+\delta <\arg z < \pi\gamma+\delta +\pi\}$, we have $$ \limsup_{r\to\infty} \frac{\log|F(r e^{i\pi\gamma'})|}{r} <\infty. $$ Choose sufficiently large $h>0$ so that $\widetilde{F} = FGe^{ihz}$ is bounded on the ray $\{\arg z = \pi\gamma'\}$. Also, $\widetilde{F}$ is bounded on $\mathbb{R}$. Let us show that $\widetilde{F}$ is bounded in $A(0, \pi\gamma')$. By Lemma \ref{gr1}, $\rho(F) \le \rho(A)$. Choose $\varepsilon>0$ such that $\rho(F) +\varepsilon < \frac{1}{\gamma'}$.
Then we have $$ \log|\widetilde{F}(z)| \lesssim |z| + |z|^{\rho(F) +\varepsilon} +\log|G(z)|, \qquad z\in A(0, \pi\gamma'), \ \ |z|\ge 1. $$ Consider the function $F_1(z) = \widetilde{F} (z^{\gamma'})$. Then $F_1$ is analytic in $\mathbb {C}^+$, continuous up to $\mathbb{R}$, and bounded on $\mathbb{R}$. Using the estimate \eqref{out} we get $$ \log|F_1(z)| \lesssim r^{\gamma'} + r^{\gamma'(\rho(F) +\varepsilon)} + \frac{r^{\gamma'}}{\sin \gamma'\theta}, \qquad z=re^{i\theta} \in \mathbb {C}^+, \ \ r\ge 1. $$ We conclude that $$ \lim_{r\to\infty} \frac{1}{r} \int_0^\pi \log^+|F_1(re^{i\theta})| \sin \theta\, d\theta =0. $$ By the de Branges version of the Phragm\'en--Lindel\"of principle \cite[Theorem 1]{br}, $F_1$ is bounded in $\mathbb {C}^+$. Thus, $\widetilde{F}$ is bounded in $A(0, \pi\gamma')$. Using the fact that $|\log|G(z)|| \lesssim |z|$, $z\in A(\delta/2, \pi-\delta/2)$, we conclude that $\log|F(z)| \lesssim |z|$ for $z\in A(\delta/2, \pi\gamma')$, $|z|\ge 1$. Thus, $F$ is of finite exponential type. It remains to show that $F$ is of zero type. Since $\log|F| \in L^1\big(\frac{dt}{t^2+1}\big)$, $F$ is of Cartwright class and so we have $\liminf_{|x| \to \infty} \frac{\log|F(x)|}{|x|} \le 0$. Hence, by Lemma \ref{boun0}, $F$ is of non-positive mean type in the half-planes $\{\pi\gamma+\delta < \arg z < \pi\gamma +\delta + \pi\}$ and $\{\pi+\delta < \arg z< 2\pi +\delta\}$. Therefore, for any $\varepsilon>0$, $\log|F(z)| \le \varepsilon |z|$ when $\pi\gamma' < \arg z < 2\pi +\delta/2$ and $|z|$ is sufficiently large. The standard Phragm\'en--Lindel\"of principle now implies that $F$ is of zero exponential type. \qed \medskip Given $T$, denote by $\mathcal{C}$ the class of all regularized Cauchy transforms $\mathcal{C}_{a, k}$ with poles on $T$, by $\mathcal{C}/\mathcal{C}$ the class of functions of the form $\mathcal{C}_{a, k}/\mathcal{C}_{b, m}$, etc.
Then, by the same arguments as above one easily obtains the following result that we will use in what follows: \begin{corollary} \label{wer} Let $T$ satisfy one of the conditions $\mathbf{Z}$, $\mathbf{\Pi}$ or $\mathbf{A_\gamma}$. If $F\in \mathcal{C} + \mathcal{C} \cdot \mathcal{C}/\mathcal{C}$, then the conclusions of Theorem \ref{main1} hold. \end{corollary} \begin{remark} {\rm We do not know whether in the case $\mathbf{\Pi}$ the condition that $T$ has finite convergence exponent can be omitted. } \end{remark} \bigskip \section{Counterexamples to a Krein-type theorem} \label{count0} In this section we prove Theorem \ref{count_a}. However, we start with a simpler example for the special case of the half-plane. Namely, we show that there exists a function $F$ with zeros in the lower half-plane which admits the representation \eqref{krein} and is of order 1, but of {\it maximal} (i.e., infinite) type with respect to this order. This shows the sharpness of our results in the case $\mathbf{A_\gamma}$ in the limit case $\gamma = 1$. This example will play an important role in the construction in Theorem \ref{main4}. \begin{example} \label{ex1} There exists an entire function $F$ of order 1 with simple zeros $t_n$ in $\mathbb {C}^-$ such that $\sum_n |F'(t_n)|^{-1} <\infty$ and \begin{equation} \label{kre} \frac{1}{F(z)} = \sum_n \frac{1}{F'(t_n)(z-t_n)}, \end{equation} but $F$ is of maximal type with respect to order 1. {\rm Let $n_k$ be an increasing sequence such that $n_{k+1}-n_k \ge 1$. Put $$ G(z) = \prod_{k=1}^\infty \bigg(1-\frac{e^{2\pi iz}}{e^{2\pi n_k}}\bigg). $$ Then $G$ is an entire function with zeros $z_{m,k} = m-in_k$, $m\in\mathbb{Z}$, $k\in \mathbb{N}$. By simple estimates of lacunary canonical products, for any $N>0$ there exists $C>0$ such that $$ \prod_{k=1}^\infty \bigg| 1 -\frac{w}{e^{2\pi n_k}}\bigg| \ge C|w|^N {\rm dist}\, (w, \{e^{2\pi n_k}\}).
$$ Hence, $|G(z)| \asymp 1$, ${\rm Im}\, z\ge 0$, $|G(z)| \gtrsim {\rm dist}\,(z, \{ m-in_k \}_{m,k})$ and so $|G'(m-in_k)|\gtrsim 1$. Put $F=PG$ where $P$ is a polynomial of degree at least 3 whose zeros are not in the set $\{m-in_k\}$. Then $\sum_{t_n \in \mathcal{Z}_F} |F'(t_n)|^{-1} <\infty$ since $\sum_{m, k} |m-in_k|^{-3} <\infty$. Let us show that the entire function $$ H(z) = \frac{1}{F(z)} - \sum_n \frac{1}{F'(t_n)(z-t_n)} $$ is identically zero. From the estimates on $G$ it follows that $|H(z)|\lesssim 1$ when ${\rm dist}\, (z,\{t_n\}) \ge 1/2$, and $H(iy)\to 0$, $y\to\infty$. Hence, $H\equiv 0$. If $n_k = k$, the function $F$ is of order 2. However, taking $n_k$ to be sufficiently sparse (e.g., $n_k = 2^k$) we obtain an example of a function of order 1 and maximal type which has expansion \eqref{kre}.} \end{example} \medskip Now we pass to the proof of Theorem \ref{count_a}. In what follows let $D = A(0, \pi\gamma)$ be the angle of size $\pi\gamma$ and $\Gamma = \partial D$ be its boundary (oriented from $e^{i\pi\gamma} \infty$ to $+\infty$). \begin{lemma} \label{bhh} Let $g$ be a function analytic in a slightly larger angle $A(-\varepsilon, \pi\gamma +\varepsilon)$ for some $\varepsilon>0$ and assume that, for some $C>0$, we have \begin{equation} \label{kre1} |g(z)| + |g'(z)| \le \frac{C}{1+|z|^4}, \qquad {\rm dist}\, (z, \Gamma) \le (|z|+1)^{-1}. \end{equation} Then for the Cauchy integral of $g$ over $\Gamma$ we have $$ \int_\Gamma \frac{g(w)}{z-w}dw = \frac{1}{z} \int_\Gamma g(w) dw +o\bigg(\frac{1}{z}\bigg), \qquad |z|\to\infty. $$ \end{lemma} \begin{proof} We split the integral into three parts: $$ \begin{aligned} \int_\Gamma \frac{g(w)}{z-w}dw & - \frac{1}{z} \int_\Gamma g(w) dw \\ & = \int_{\{|w|<|z|/2\}} \frac{wg(w)}{z(z-w)}dw + \int_{\{|w -z|<(|z|+1)^{-1}\}} \frac{wg(w)}{z(z-w)}dw \\ & + \int_{\{|w| \ge |z|/2, |w-z|\ge (|z|+1)^{-1} \}} \frac{wg(w)}{z(z-w)}dw = I_1 +I_2 +I_3. 
\end{aligned} $$ Clearly, $|I_1| \lesssim |z|^{-2}$, $|z|\ge 2$, and $$ |I_3| \lesssim \int_{ \{|w| \ge |z|/2\} } |wg(w)|\,|dw| \lesssim |z|^{-2}. $$ Finally, to estimate the integral $I_2$ for $z$ close to $\Gamma$ note that $$ \begin{aligned} \bigg| \int_{\{|w -z|<(|z|+1)^{-1}\}} \frac{g(w)}{w-z} dw \bigg| & \le \bigg| \int_{\{|w -z|<(|z|+1)^{-1}\}} \frac{g(z)}{w-z} dw \bigg| \\ & +\bigg| \int_{\{|w -z|<(|z|+1)^{-1}\}} \frac{g(w)-g(z)}{w-z} dw \bigg| \\ & \lesssim |g(z)| +\max_{|\zeta - z|\le (|z|+1)^{-1}}|g'(\zeta)| \lesssim |z|^{-4}. \end{aligned} $$ Combining these inequalities we obtain the estimate of the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{count_a}] Let $D$ and $\Gamma$ be as above. Let $f(z) = z^{-4}\sin^4 z$. Then $f$ is an entire function of finite exponential type. Put $g(z) = f(z^{1/\gamma})$, $z\in D$, and define the function $F$ for $z\in \mathbb{C} \setminus \overline{D}$ by the contour integral $$ F(z) = \frac{1}{2\pi i}\int_\Gamma \frac{g(w)}{z-w} dw, \qquad z\in \mathbb{C} \setminus \overline{D} $$ (note that $|g(w)|\lesssim |w|^{-4/\gamma}$ and so there is no problem with convergence). Now we use a well-known trick to show that $F$ admits a continuation to an entire function. Let $R>0$ and $\widetilde{\Gamma}$ be the contour $\{t e^{i\pi\gamma}: t\ge R\} \cup \{r e^{i\theta}: 0\le\theta\le \pi\gamma \} \cup \{t\ge R\}$ (also oriented from $e^{i\pi\gamma} \infty$ to $+\infty$) and let $\widetilde{D}\subset D$ be the domain such that $\partial \widetilde{D} = \widetilde{\Gamma}$. Put $$ \widetilde{F} (z) = \frac{1}{2\pi i} \int_{\widetilde{\Gamma}} \frac{g(w)}{z-w} dw, \qquad z\in \mathbb{C} \setminus \overline{\widetilde{D}}. 
$$ Then, for $z \in \mathbb{C} \setminus \overline{D}$, $$ F(z) - \widetilde{F} (z) = \frac{1}{2\pi i} \int_{\Gamma_0} \frac{g(w)}{z-w} dw = 0 $$ where $\Gamma_0$ is the counterclockwise oriented boundary of the sector $S = \{r e^{i\theta}: 0< r< R, 0<\theta<\pi\gamma\}$ and the integral is zero since $g$ is analytic inside $S$ and continuous up to the boundary. Thus, $\widetilde{F}$ is a continuation of $F$ to a larger domain $\mathbb{C} \setminus \overline{\widetilde{D}}$. Since $R$ is arbitrary, we conclude that $F$ has an entire extension. Moreover, by the same argument we have a representation for $F$ inside $D$: $$ F(z) = \frac{1}{2\pi i}\int_\Gamma \frac{g(w)}{z-w} dw +g(z), \qquad z\in D. $$ Note that $g$ satisfies the hypotheses of Lemma \ref{bhh}. Indeed, $|f(z)|+|f'(z)| \lesssim (|z|+1)^{-4}$ for ${\rm dist}\,(z, \mathbb{R})\le 1$ which implies estimate \eqref{kre1}. Also, since $f$ is even and non-negative, it follows that $\alpha := (2\pi i)^{-1} \int_\Gamma g(w)dw \ne 0$. Thus, \begin{equation} \label{f1} F(z) = \alpha z^{-1} +o(z^{-1}), \qquad z\in \mathbb{C} \setminus D, \end{equation} and we conclude that $F$ has at most finitely many zeros in $\mathbb{C} \setminus D$. Let us analyse the zeros of $F$ inside $D$. We have \begin{equation} \label{f2} F(z) = g(z)+\frac{\alpha}{z} + o\Big(\frac{1}{z}\Big), \qquad z\in D, \ |z|\to\infty. \end{equation} Equivalently, this means that for $G(z) = F(z^\gamma)$ we have $$ G(z) =\frac{\sin^4 z}{z^4}+\frac{\alpha}{z^\gamma} + o\Big(\frac{1}{|z|^\gamma}\Big), \qquad z\in \mathbb{C}^+. $$ The unperturbed equation $$ \frac{\sin^4 z}{z^4}+\frac{\alpha}{z^\gamma} = 0 $$ has zeros in $\mathbb{C}^+$ whose asymptotics can be easily computed.
Namely, if we write $-16\alpha = re^{i\beta}$ (note that $(e^{-iz} - e^{iz})^4 = 16\sin^4 z$), then the solutions $z_k=x_k+iy_k$ of the equation $(e^{-iz} - e^{iz})^4 = re^{i\beta}z^{4-\gamma}$ in $\mathbb{C}^+$ will have the asymptotics $$ \begin{cases} x_k = \frac{\pi k}{2} +\frac{\beta}{4} +o(1),\\ y_k = \big(1-\frac{\gamma}{4}\big)\ln k +\big(1-\frac{\gamma}{4}\big) \ln\frac{\pi}{2} +o(1), \end{cases} $$ as $k\to\infty$. It is easy to see that $$ \bigg|\frac{\sin^4 z}{z^4}+\frac{\alpha}{z^\gamma}\bigg| \gtrsim \frac{1}{|z|^\gamma} $$ when $z\in \mathbb{C}^+$, ${\rm dist}\,(z,\{z_k\})\ge \frac{1}{10}$ and $|z|$ is sufficiently large. By the Rouch\'e theorem, for sufficiently large $k$ the disc $D(z_k, 1/10)$ contains exactly one zero (say, $s_k$) of $G$ and these are all zeros of $G$ except finitely many. Moreover, $$ |G'(s_k)|\asymp \frac{|\sin^3 s_k\cos s_k|}{|s_k|^4} \asymp \frac{1}{|s_k|^\gamma}. $$ It follows from formulas \eqref{f1} and \eqref{f2} that $F$ is an entire function of order $\gamma^{-1}$ and of finite type. The zeros of $F$ are given by $t_k = s_k^\gamma$. Dividing and multiplying by a polynomial we can assume without loss of generality that all zeros of $F$ are simple and lie in $D$. Since $|F'(t_k)|\asymp|s_k|^{1-2\gamma}\gtrsim |t_k|^{-1} \asymp |k|^{-\gamma}$, we can multiply $F$ by a polynomial $P$ of sufficiently large degree (with zeros in $D$) to achieve $$ \sum_n \frac{1}{|F'(t_n)P(t_n)|} <\infty. $$ Slightly abusing the notation we now denote by $\{t_n\}$ the zero set of $FP$. It remains to show that $FP$ has the required simple fraction expansion. We have $$ \frac{1}{F(z)P(z)} = \sum_n \frac{1}{(FP)'(t_n) (z-t_n)} +H(z) $$ for some entire function $H$. Since $FP$ is of order $\gamma^{-1}$, we conclude that $H$ is of order at most $\gamma^{-1}$. However, $|F(z)P(z)|\gtrsim 1$, $z\in \mathbb{C}\setminus D$ (since $|F(z)|\asymp |z|^{-1}$ there).
Also, for any $\varepsilon>0$ we have $|F(z)P(z)|\asymp |g(z)| \to\infty$ when $|z|\to \infty$ and $z\in A(\varepsilon, \pi\gamma-\varepsilon)$. From this we conclude that $H\equiv 0$. \end{proof} \bigskip \section{Proof of Theorems \ref{main2} and \ref{main3}} \label{proofth23} We first state a simple proposition which shows that nearly invariance implies division-invariance. The proof is similar to \cite[Proposition 5.1]{ar} and we omit it. Let $\mathcal{H}$ be a reproducing kernel Hilbert space which consists of analytic functions in some domain $D$ and has the division property. Recall that, for a closed subspace $\mathcal{H}_0$ of $\mathcal{H}$, we denote by $\mathcal{Z}(\mathcal{H}_0)$ the set of its common zeros. \begin{proposition} \label{nea} Assume that there exists $w_0\in D$ such that $\frac{f(z)}{z-w_0} \in \mathcal{H}_0$ whenever $f\in \mathcal{H}_0$ and $f(w_0)= 0$. Then, for any $w\in D\setminus \mathcal{Z}(\mathcal{H}_0)$ and any $f\in \mathcal{H}_0$ such that $f(w) =0$, we have $\frac{f(z)}{z-w} \in \mathcal{H}_0$. \end{proposition} \medskip We pass to the proofs of Theorems \ref{main2} and \ref{main3}. The key idea of the proof is due to L.~de~Branges \cite[Theorem 35]{br}. Assume that neither $\mathcal{H}_1 \subset \mathcal{H}_2$ nor $\mathcal{H}_2 \subset \mathcal{H}_1$ and choose nonzero functions $F_1, F_2 \in \mathcal{H}(T,A,\mu)$ such that $F_1 \perp \mathcal{H}_2$ but $F_1$ is not orthogonal to $\mathcal{H}_1$, while $F_2 \perp \mathcal{H}_1$ but $F_2$ is not orthogonal to $\mathcal{H}_2$. Let $F\in \mathcal{H}_1$ and $G\in \mathcal{H}_2$.
Define two functions $$ \begin{aligned} f(w) & = \bigg\langle \frac{F - \frac{F(w)}{G(w)}G}{z-w}, F_1\bigg\rangle_{\mathcal{H}(T,A,\mu)} = \int\frac{F(z) - \frac{F(w)}{G(w)}G(z)}{z-w} \overline{F_1(z)} d\nu(z), \\ g(w) & = \bigg\langle \frac{G - \frac{G(w)}{F(w)}F}{z-w}, F_2\bigg\rangle_{\mathcal{H}(T,A,\mu)} = \int\frac{G(z) - \frac{G(w)}{F(w)}F(z)}{z-w} \overline{F_2(z)} d\nu(z), \end{aligned} $$ where $\nu$ is the measure defined by \eqref{nu} such that the embedding $\mathcal{H}(T,A,\mu) \subset L^2(\nu)$ is isometric. The functions $f$ and $g$ are well-defined and analytic on the sets $\{w: G(w)\ne 0\}$ and $\{w: F(w)\ne 0\}$, respectively. \medskip \\ {\bf Step 1:} {\it $f$ and $g$ are entire functions, $f$ does not depend on the choice of $G$ and $g$ does not depend on the choice of $F$}. Let $f_1$ be a function associated in a similar way to $G_1 \in \mathcal{H}_1$, $$ f_1(w) = \int\frac{F(z) - \frac{F(w)}{G_1(w)}G_1(z)}{z-w} \overline{F_1(z)} d\nu(z). $$ Then, for $G(w) \ne 0$ and $G_1(w) \ne 0$, we have $$ f_1(w) - f(w) = \frac{F(w)}{G(w)G_1(w)} \int \frac{G_1(w)G(z) - G(w)G_1(z)}{z-w} \overline{F_1(z)} d\nu(z) = 0, $$ since $\frac{G_1(w)G - G(w)G_1}{z-w} \in \mathcal{H}_2$. Now choosing $G$ such that $G(w) \ne 0$ we can extend $f$ analytically to a neighborhood of the point $w$. Thus, $f$ and $g$ are entire functions. \medskip \\ {\bf Step 2:} {\it $f$ and $g$ are of zero exponential type.} Recall that we denote by $\mathcal{C}$ the class of all regularized Cauchy transforms with poles in $T$. Since $F$ and $G$ are in $\mathcal{H}(T,A,\mu)$, we have $F/A, G/A \in \mathcal{C}$ and so $F/G, G/F \in \mathcal{C}/\mathcal{C}$. Hence, $$ f, g \in \mathcal{C} +\mathcal{C} \cdot \frac{\mathcal{C}}{\mathcal{C}}. $$ By Corollary \ref{wer} $f$ and $g$ are of zero exponential type. 
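The memberships used in Step 2 can be written out explicitly; as a routine verification using \eqref{nu} and the norm identity above, the first integral in the definition of $f$ is itself a Cauchy transform with summable coefficients (the remaining integrals are treated in the same way):

```latex
$$
\int \frac{F(z)\overline{F_1(z)}}{z-w}\, d\nu(z)
\;=\; -\sum_n \frac{a_n}{w-t_n},
\qquad
a_n = \frac{F(t_n)\overline{F_1(t_n)}}{|A'(t_n)|^2\mu_n},
$$
% with \{a_n\} \in \ell^1 by the Cauchy--Schwarz inequality, since both
% \{F(t_n)/(A'(t_n)\mu_n^{1/2})\} and \{F_1(t_n)/(A'(t_n)\mu_n^{1/2})\}
% lie in \ell^2.
```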
\medskip \\ {\bf Step 3:} {\it Either $f$ or $g$ is identically zero.} Given $w$ such that $F(w)\ne 0$, $G(w) \ne 0$, we have \begin{equation} \label{ots} \begin{aligned} |f(w)| & \le \bigg|\int \frac{F(z)\overline{F_1(z)}}{z-w} d\nu(z) \bigg| + \bigg|\frac{F(w)}{G(w)}\bigg|\cdot \bigg|\int \frac{G(z)\overline{F_1(z)}}{z-w} d\nu(z) \bigg|,\\ |g(w)| & \le \bigg|\int \frac{G(z)\overline{F_2(z)}}{z-w} d\nu(z) \bigg| + \bigg|\frac{G(w)}{F(w)}\bigg|\cdot \bigg|\int \frac{F(z)\overline{F_2(z)}}{z-w} d\nu(z)\bigg|. \end{aligned} \end{equation} By Lemma \ref{gr1}, there exist $M>0$ and a set $E\subset (0, \infty)$ of zero linear density such that $$ |f(w)| \lesssim |w|^M\bigg(1+ \bigg|\frac{F(w)}{G(w)}\bigg|\bigg), \qquad |g(w)| \lesssim |w|^M\bigg(1+ \bigg|\frac{G(w)}{F(w)}\bigg|\bigg), \qquad |w|\notin E. $$ We conclude that $$ \min\big(|f(w)|, |g(w)|\big) \lesssim |w|^M, \qquad |w|\notin E. $$ Since $E$ has zero linear density, we can choose a sequence $R_j\to \infty$ such that $R_j\notin E$ and $R_{j+1}/ R_j \le 2$. Applying the maximum principle to the annuli $R_j\le |z|\le R_{j+1}$, we conclude that $$ \min\big(|f(w)|, |g(w)|\big) \lesssim |w|^M, \qquad |w|\ge 1. $$ Since both $f$ and $g$ are of zero exponential type, a small variation of a well-known deep result by de Branges \cite[Lemma 8]{br} gives that either $f$ or $g$ is a polynomial. Assume that $f$ is a nonzero polynomial. By Lemma \ref{verd}, there exists a set $\Omega$ of zero area density such that $$ \bigg|\int\frac{F(z)\overline{F_1(z)}}{z-w} d\nu(z) \bigg| + \bigg|\int\frac{G(z)\overline{F_1(z)}}{z-w} d\nu(z) \bigg| =O\Big(\frac{1}{|w|}\Big), \qquad w\notin \Omega. 
$$ Hence, $|F(w)/G(w)| \to\infty$ as $|w|\to\infty$, $w\notin\Omega$, and so $$ |g(w)| \le \bigg|\int\frac{G(z)\overline{F_2(z)}}{z-w} d\nu(z) \bigg| + \bigg|\frac{G(w)}{F(w)}\bigg|\cdot \bigg|\int \frac{F(z)\overline{F_2(z)}}{z-w} d\nu(z)\bigg| = O\Big(\frac{1}{|w|}\Big), \qquad w\notin \Omega\cup \widetilde{\Omega}, $$ where $\widetilde{\Omega}$ is another set of zero area density (here we again applied Lemma \ref{verd}). Thus, $g$ tends to zero outside a set of zero density and so $g\equiv 0$ by Theorem \ref{dens}. \medskip \\ {\bf Step 4:} {\it End of the proof.} Without loss of generality, let $f\equiv 0$. Then $$ \frac{F(w)}{G(w)}\int\frac{G(z)\overline{F_1(z)}}{z-w} d\nu(z) =\int\frac{F(z)\overline{F_1(z)}}{z-w} d\nu(z) $$ for any $F\in \mathcal{H}_1$, $G\in \mathcal{H}_2$. Recall that $F_1$ is not orthogonal to $\mathcal{H}_1$ and so we can choose $F \in \mathcal{H}_1$ such that $\langle F, F_1\rangle = \int F\overline{F}_1 d\nu \ne 0$. Then, by Lemma \ref{verd}, $$ \bigg|\int\frac{F(z)\overline{F_1(z)}}{z-w} d\nu(z)\bigg| \gtrsim \frac{1}{|w|}, \qquad w\notin \Omega, $$ for some set $\Omega$ of zero density. Since $G\perp F_1$ for any $G\in \mathcal{H}_2$, we have (again by Lemma \ref{verd}) $$ \bigg|\int\frac{G(z)\overline{F_1(z)}}{z-w} d\nu(z)\bigg| = o\Big(\frac{1}{|w|}\Big), \qquad |w|\to\infty, \ w\notin \widetilde{\Omega}, $$ where $\widetilde{\Omega}$ is another set of zero density. We conclude that $|F(w)/G(w)| \to \infty$ when $|w|\to \infty$ outside the set of zero density $\Omega \cup \widetilde{\Omega}$ (for any $G\in\mathcal{H}_2$). Applying this fact and Lemma \ref{verd} to $g$ we conclude that $|g(w)|\to 0$ outside a set of zero density and so $g\equiv 0$ by Theorem \ref{dens}. Thus, we have \begin{equation} \label{ghj} \frac{G(w)}{F(w)}\int\frac{F(z)\overline{F_2(z)}}{z-w} d\nu(z) =\int\frac{G(z)\overline{F_2(z)}}{z-w} d\nu(z) \end{equation} and we may repeat the above argument. 
Choose $G \in \mathcal{H}_2$ such that $\langle G, F_2\rangle = \int G\overline{F}_2 d\nu \ne 0$. Then, by Lemma \ref{verd}, the modulus of the right-hand side in \eqref{ghj} is $\gtrsim |w|^{-1}$, while the left-hand side is $o(|w|^{-1})$ when $|w|\to\infty$ outside a set of zero density. This contradiction proves Theorem \ref{main2}. \qed \medskip \begin{proof}[Proof of Theorem \ref{main3}] The proof essentially coincides with the proof of Theorem \ref{main2}. Let $f$ and $g$ be defined as above. By Step 1, $f$ and $g$ are entire functions. Since $f, g\in \mathcal{C} +\mathcal{C} \cdot \mathcal{C} / \mathcal{C}$, we conclude by Corollary \ref{wer} that $f$ and $g$ are of finite exponential type. Now we need to show that $f$ and $g$ are of zero type. For this we can use the property that $\mathcal{H}_1$ and $\mathcal{H}_2$ are closed under the $*$-transform. Therefore, we can also take $F^*$ and $G^*$ in place of $F$ and $G$ in the definition of $f$ and $g$. Since $T\subset \{ -h \le {\rm Im}\, z\le h\}$, we have $$ \int\frac{\alpha(z)}{z-w} d\nu(z) \lesssim |{\rm Im}\, w|^{-1}, \qquad |{\rm Im}\, w| \ge 2h, $$ for any $\alpha\in L^1(\nu)$. Thus, $$ |f(w)| \lesssim \bigg( 1+ \min\bigg\{\bigg|\frac{F(w)}{G(w)}\bigg|, \bigg|\frac{F(\overline w)}{G(\overline w)}\bigg| \bigg\} \bigg) \frac{1}{|{\rm Im}\, w|}, \qquad |{\rm Im}\, w| \ge 2h. $$ Note that $F/G \in \mathcal{C}/\mathcal{C}$. Therefore, $F/G$ is a function of bounded type in $\mathbb {C}^+ + ih$ and in $\mathbb {C}^- -ih$. 
Then we can write for $w\in \mathbb {C}^+$, $$ \frac{F(w+ih)}{G(w+ih)} = O\frac{B_1 S_1}{B_2 S_2}e^{iaw}, \qquad \frac{\overline{ F(\overline w -ih)}}{\overline{ G(\overline w - ih)}} = \tilde O\frac{\tilde B_1 \tilde S_1}{\tilde B_2 \tilde S_2}e^{ibw}, $$ where $O, \tilde O$ are the corresponding outer factors, $B_1, B_2, \tilde B_1, \tilde B_2$ are Blaschke products in $\mathbb {C}^+$, $S_1, S_2, \tilde S_1, \tilde S_2$ are singular inner functions in $\mathbb {C}^+$ (without factors of the form $e^{icz}$) and $a, b\in \mathbb{R}$. If at least one of the numbers $a$ or $b$ is non-negative we have, by Lemma \ref{boun0}, $$ \log |f(w)| =o(|w|), \qquad |{\rm Im}\, w|\ge 2h, \ \arg w\notin E, $$ where $E\subset [0, 2\pi]$ is a union of interval of arbitrarily small total length. Since $f$ is an entire function of exponential type, the classical Phragm\'en--Lindel\"of principle implies that $f$ is of zero type. Assume that both $a<0$ and $b<0$. Then for $g$ we have a similar estimate $$ |g(w)| \lesssim \bigg( 1+ \min\bigg\{\bigg|\frac{G(w)}{F(w)}\bigg|, \bigg|\frac{G(\overline w)}{F(\overline w)}\bigg| \bigg\} \bigg) \frac{1}{|{\rm Im}\, w|}, \qquad |{\rm Im}\, w| \ge 2h. $$ We conclude from factorizations of $G/F$ in $\mathbb {C}^+ + ih$ and in $\mathbb {C}^- -ih$ that $g$ tends to zero outside the union of angles of arbitrarily small total size whence $g\equiv 0$. Thus, we have seen that \begin{enumerate} \item [(i)] either both $f$ and $g$ are of zero type, and we can proceed to Step 3 as in the proof of Theorem \ref{main2}; \item [(ii)] or one of the functions $f$ or $g$ is identically zero, and we can go directly to Step 4. \end{enumerate} The end of the proof is the same as in Theorem \ref{main2}. \end{proof} \begin{remark} {\rm Let us mention that Lemma \ref{verd} and Theorem \ref{dens} are essential in the case $\mathbf{Z}$. 
In the cases $\mathbf{\Pi}$ and $\mathbf{A_\gamma}$ it is sufficient to consider the asymptotics of the Cauchy transforms along the rays lying outside the strip or the angle in question. } \end{remark} \begin{remark} {\rm Theorems \ref{main2} and \ref{main3} can be extended with essentially the same proofs to the case of nearly invariant subspaces having the same sets of common zeros (counting with multiplicities).} \end{remark} \bigskip \section{Counterexample to the Ordering Theorem} \label{count} In this section we prove Theorem \ref{main4}. Our construction is similar to Example \ref{ex1}. To have the symmetry with respect to $\mathbb{R}$ we rotate the function $G$ from Example \ref{ex1}. \begin{proof}[Proof of Theorem \ref{main4}] {\bf Step 1.} Define the entire functions $A_1$ and $A_2$ by $$ A_1(z) = \prod_{k=1}^\infty \bigg(1-\frac{e^{2\pi z}}{e^{2\pi k}}\bigg), \qquad A_2(z) = \prod_{k=1}^\infty \bigg(1-\frac{e^{-2\pi z}}{e^{2\pi k}}\bigg), $$ and put $A=A_1A_2$. Then the zero set of $A$ is given by $T = \{t_n\} = \{k+im: k\in \mathbb{Z}\setminus\{0\}, m\in \mathbb{Z} \}$. Put $\mu_n = |t_n|^{-3}$. Now we construct the functions $G_1$ and $G_2$ as follows. Let $P_1$ be some polynomial of degree 4 such that $\mathcal{Z}_{P_1} \subset \mathcal{Z}_{A_2}$ and let $G_1$ be an entire function with simple zeros $\tilde t_n = t_n+\delta_n$, $t_n\in \mathcal{Z}_{A_2} \setminus \mathcal{Z}_{P_1}$, $|\delta_n|<1/100$, such that: \begin{enumerate} \item [(i)] $\mathcal{Z}_{G_1}$ is so close to the set $\mathcal{Z}_{A_2} \setminus \mathcal{Z}_{P_1}$ that \begin{equation} \label{tre} |G_1(z)P_1(z)| \asymp |A_2(z)|, \qquad {\rm dist}\, (z, \mathcal{Z}_{A_2})>1/10; \end{equation} \item [(ii)] $\mathcal{Z}_{G_1} \cap T =\emptyset$. 
\end{enumerate} Note that condition \eqref{tre} implies (by the maximum modulus principle applied to the function $(z-t_n)G_1P_1/A_2$ in $\{|z-t_n|\le 1/10\}$) that \begin{equation} \label{tre1} |G_1(t_n)P_1(t_n)| \lesssim |A_2'(t_n)|, \qquad t_n \in \mathcal{Z}_{A_2}. \end{equation} Analogously, we define a polynomial $P_2$ and the function $G_2$ such that $|G_2(z)P_2(z)| \asymp |A_1(z)|$ when ${\rm dist}\, (z, \mathcal{Z}_{A_1})>1/10$. \medskip \\ {\bf Step 2.} Let us show that $G_1, G_2$ belong to the corresponding space $\mathcal{H}(T,A,\mu)$. Similarly to Example \ref{ex1} we have $$ \begin{aligned} |A_1(z)| & \asymp 1, \qquad {\rm Re}\, z \le 0, \\ |A_1'(t_n)| & \gtrsim 1, \qquad t_n \in \mathcal{Z}_{A_1}, \\ |A_1(z)| & \gtrsim 1, \qquad {\rm dist}\, (z, \mathcal{Z}_{A_1})>1/10. \end{aligned} $$ Hence, $$ \bigg|\frac{G_1(z)}{A(z)}\bigg| \lesssim \bigg|\frac{1}{P_1(z)A_1(z)}\bigg| \lesssim \frac{1}{|P_1(z)|}, \qquad {\rm dist}\, (z, T)>1/10. $$ Also, by \eqref{tre} and \eqref{tre1}, $$ \begin{aligned} \sum_{t_n\in T\setminus \mathcal{Z}_{P_1}} \frac{|G_1(t_n)|^2}{|A'(t_n)|^2\mu_n} & \asymp \sum_{t_n \in \mathcal{Z}_{A_1}} \frac{|A_2(t_n)|^2}{|P_1(t_n)|^2 |A_2(t_n)|^2|A_1'(t_n)|^2\mu_n} + \sum_{t_n \in \mathcal{Z}_{A_2} \setminus \mathcal{Z}_{P_1}} \frac{|G_1(t_n)|^2}{|A_1(t_n)|^2|A_2'(t_n)|^2\mu_n} \\ & \lesssim \sum_{t_n \in \mathcal{Z}_{A_1}} \frac{1}{|P_1(t_n)|^2 |A_1'(t_n)|^2\mu_n} + \sum_{t_n \in \mathcal{Z}_{A_2} \setminus \mathcal{Z}_{P_1}} \frac{1}{|P_1(t_n)|^2|A_1(t_n)|^2\mu_n}. \end{aligned} $$ Since $\mu_n = |t_n|^{-3}$ and $|P_1(t_n)| \asymp |t_n|^4$, $t_n\in T\setminus \mathcal{Z}_{P_1}$, we conclude that $\sum_{t_n\in T} |G_1(t_n)|^2/(|A'(t_n)|^2\mu_n) < \infty$. By Theorem \ref{inc}, $G_1 \in \mathcal{H}(T,A,\mu)$. Clearly, we can construct $G_1$ and $G_2$ so that $G_1^* = G_1$, $G_2^* = G_2$, whence the spaces $\mathcal{H}_{G_1}$ and $\mathcal{H}_{G_2}$ are $*$-closed. 
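As an aside, the two basic features of $A_1$ used in Step 2 (that $|A_1(z)|\asymp 1$ for ${\rm Re}\, z\le 0$, and that $A_1$ vanishes at the lattice points $k+im$ with $k\ge 1$) can be illustrated numerically with a truncated product. The snippet below is only a sanity check under the truncation assumption, not part of the proof.

```python
import cmath

def A1(z, K=40):
    # Truncation of A_1(z) = prod_{k>=1} (1 - e^{2 pi z} / e^{2 pi k}).
    # The dropped tail factors differ from 1 by at most e^{-2 pi (K + 1 - Re z)},
    # which is negligible for moderate Re z.
    prod = 1.0 + 0.0j
    for k in range(1, K + 1):
        prod *= 1 - cmath.exp(2 * cmath.pi * (z - k))
    return prod

# |A_1(z)| is comparable to 1 in the left half-plane Re z <= 0
assert 0.5 < abs(A1(-3 + 2j)) < 2

# A_1 vanishes at the lattice points k + i*m with k >= 1 (here z = 2 + 3i)
assert abs(A1(2 + 3j)) < 1e-8
```

The truncation level $K=40$ is ample here, since the omitted factors are exponentially close to $1$.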
\medskip \\ {\bf Step 3.} Now we construct a function $f \in \mathcal{H}(T,A,\mu)$ such that $f\perp \mathcal{H}_{G_1}$, but $\langle\frac{G_2}{z-\lambda}, f\rangle \ne 0$ for some $\lambda\in \mathcal{Z}_{G_2}$. Thus, $\mathcal{H}_{G_2}$ is not contained in $\mathcal{H}_{G_1}$. By the symmetry of the construction, also $\mathcal{H}_{G_1}$ is not contained in $\mathcal{H}_{G_2}$. Since $|A_1(z)| \asymp 1$, ${\rm Re}\, z \le 0$, we have $\sum_n |A_1(t_n)|^2 |t_n|^{-3} <\infty$. Put $$ f(z) = A (z)\sum_n \frac{\overline{A_1(t_n)}}{|t_n|^3 (z-t_n)} = A(z) \sum_n \frac{\overline{A_1(t_n)}\mu_n^{1/2}}{|t_n|^{3/2} (z-t_n)}. $$ Then $f\in \mathcal{H}(T,A,\mu)$. For $\lambda\in \mathcal{Z}_{G_1}$ we have $$ \bigg\langle \frac{G_1}{z-\lambda}, f \bigg\rangle = \sum_n \frac{G_1(t_n)}{(t_n-\lambda)A'(t_n)\mu_n^{1/2}}\cdot \frac{A_1(t_n)}{|t_n|^{3/2}} = \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_1(t_n)}{A_2'(t_n)(t_n-\lambda)}. $$ To show that the last expression is 0, we prove the interpolation formula $$ \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_1(t_n)}{A_2'(t_n)(z- t_n)} = \frac{G_1(z)}{A_2(z)}. $$ Once it is proved, taking $z=\lambda \in \mathcal{Z}_{G_1}$ we obtain that $\langle \frac{G_1}{z-\lambda}, f \rangle = 0$. As usual, consider the entire function $$ H(z) = \frac{G_1(z)}{A_2(z)} - \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_1(t_n)}{A_2'(t_n)(z- t_n)}. $$ By \eqref{tre1}, $|G_1(t_n)/A_2'(t_n)| \lesssim |P_1(t_n)|^{-1}$ for $t_n \in \mathcal{Z}_{A_2} \setminus \mathcal{Z}_{P_1}$, and it is easy to see that $$ \bigg|\sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_1(t_n)}{A_2'(t_n)(z- t_n)} \bigg| \to 0, \qquad |z|\to\infty, \ {\rm dist}\, (z, \mathcal{Z}_{A_2})>1/10. $$ Since, by \eqref{tre}, $|G_1(z)/A_2(z)| \asymp |P_1(z)|^{-1}$ when ${\rm dist}\, (z, \mathcal{Z}_{A_2})>1/10$, we conclude that $|H(z)| \to 0$, when $|z|\to\infty$ and ${\rm dist}\, (z, \mathcal{Z}_{A_2})>1/10$. Hence, $H\equiv 0$.
\medskip \\ {\bf Step 4.} It remains to show that $\langle\frac{G_2}{z-\lambda}, f\rangle \ne 0$ for some $\lambda\in \mathcal{Z}_{G_2}$. As above we have $$ \bigg\langle \frac{G_2}{z-\lambda}, f \bigg\rangle = \sum_n \frac{G_2(t_n)}{(t_n-\lambda)A'(t_n)\mu_n^{1/2}}\cdot \frac{A_1(t_n)}{|t_n|^{3/2}} = \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_2(t_n)}{A_2'(t_n)(t_n-\lambda)}. $$ Assume that $\langle\frac{G_2}{z-\lambda}, f\rangle = 0$ for any $\lambda\in \mathcal{Z}_{G_2}$. Then the entire function $A_2(z) \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_2(t_n)}{A_2'(t_n)(z-t_n)}$ vanishes on $\mathcal{Z}_{G_2}$ and we can write $$ A_2(z) \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_2(t_n)}{A_2'(t_n)(z-t_n)} = G_2(z)U(z) $$ for some entire function $U$. Since $G_2\ne 0$ on $T$, comparing the values at $t_n\in \mathcal{Z}_{A_2}$, we obtain $U(t_n) = 1$, $t_n \in \mathcal{Z}_{A_2}$. Hence, we can write $U = 1+ A_2V$ for some entire function $V$. Dividing by $G_2A_2$ we get $$ V(z) = \frac{1}{G_2(z)} \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_2(t_n)}{A_2'(t_n)(z-t_n)} - \frac{1}{A_2(z)}. $$ We know that $|A_2(z)| \gtrsim 1$ and $|G_2(z)| \gtrsim |P_2(z)|^{-1}$ when ${\rm dist}\, (z, T)>1/10$. Since $|G_2(t_n)| \asymp |A_1(t_n)|/|P_2(t_n)| \asymp 1/|P_2(t_n)|$, $t_n \in \mathcal{Z}_{A_2}$, we also see that the Cauchy transform in the above formula tends to zero when $|z|\to\infty$, $ {\rm dist}\, (z, T)>1/10$. We conclude that $V$ is at most a polynomial. However, we also have $\limsup_{x\to+\infty}|G_2(x)|=\infty$, whence $V$ is at most a constant. Since $\lim_{x\to+\infty} A_2(x) = 1$, we conclude that this constant is $-1$. We have arrived to the following representation: \begin{equation} \label{wrt} \sum_{t_n \in \mathcal{Z}_{A_2}} \frac{G_2(t_n)}{A_2'(t_n)(z-t_n)} = \frac{G_2(z)}{A_2(z)} - G_2(z) \end{equation} and we need to show that it is impossible. Note that \eqref{wrt} implies that $$ |1-A_2(x)|\lesssim |G_2(x)|^{-1}, \qquad x >0. 
$$ However, $$ 1-A_2(x) = 1- \prod_{k=1}^\infty \big( 1- e^{-2\pi x -2\pi k}\big) > 1- \big( 1- e^{-2\pi x -2\pi }\big) = e^{-2\pi x -2\pi }. $$ On the other hand, it is clear that $$ \frac{\log|G_2(x)|}{x} \to\infty, \qquad x\to+\infty,\ \ \dist(x, \mathcal{Z}_{G_2}) >1/10. $$ Indeed, for the lacunary product $L(z) = \prod_{k=1}^\infty \big( 1- e^{-2\pi k}z\big)$ and for any $N, \delta>0$, we have $|L(z)| \gtrsim |z|^N$, ${\rm dist}\, (z,\{e^{2\pi k}\})>\delta$. By the construction, $|G_2(x)|\asymp |L(e^{x})|/|P_2(x)| \gtrsim e^{Nx}/|P_2(x)|$ when $\dist(x, \mathcal{Z}_{G_2}) >1/10$. This contradiction proves Theorem \ref{main4}. \end{proof}
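The contradiction at the end of Step 4 rests on the elementary bound $1-A_2(x)>e^{-2\pi x-2\pi}$ for $x>0$, obtained above by keeping only the $k=1$ factor of the product. A truncated-product evaluation (an illustrative check only, not part of the argument) confirms it at a few sample points:

```python
import math

def A2(x, K=60):
    # Truncation of A_2(x) = prod_{k>=1} (1 - e^{-2 pi x - 2 pi k}) for real x > 0;
    # the dropped factors are within e^{-2 pi K} of 1.
    prod = 1.0
    for k in range(1, K + 1):
        prod *= 1 - math.exp(-2 * math.pi * (x + k))
    return prod

# 1 - A_2(x) exceeds the single k = 1 term e^{-2 pi x - 2 pi}
for x in (0.1, 1.0, 2.5):
    assert 1 - A2(x) > math.exp(-2 * math.pi * (x + 1))
```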
https://arxiv.org/abs/1802.03385
Krein-type theorems and ordered structure for Cauchy-de Branges spaces
We extend some results of M.G. Krein to the class of entire functions which can be represented as ratios of discrete Cauchy transforms in the plane. As an application we obtain new versions of de Branges' Ordering Theorem for nearly invariant subspaces in a class of Hilbert spaces of entire functions. Examples illustrating sharpness of the obtained results are given.
https://arxiv.org/abs/0802.4377
Unitary super perfect numbers
We shall show that 9, 165 are all of the odd unitary super perfect numbers.
\section{Introduction}\label{intro} We denote by $\sigma(N)$ the sum of divisors of $N$. $N$ is called perfect if $\sigma(N)=2N$. It is a well-known unsolved problem whether or not an odd perfect number exists. Interest in this problem has produced many analogous notions. D. Suryanarayana \cite{Sur} called $N$ super perfect if $\sigma(\sigma(N))=2N$. That paper asked whether odd super perfect numbers exist, a question which remains unsolved. A special class of divisors is the class of unitary divisors defined by Cohen \cite{Coh}. A divisor $d$ of $n$ is called a unitary divisor if $(d, n/d)=1$. Then we write $d\mid\mid n$. We denote by $\sigma^*(N)$ the sum of unitary divisors of $N$. Replacing $\sigma$ by $\sigma^*$, Subbarao and Warren \cite{SW} introduced the notion of a unitary perfect number. $N$ is called unitary perfect if $\sigma^*(N)=2N$. They proved that there are no odd unitary perfect numbers. Moreover, Subbarao \cite{Sub} conjectured that there are only finitely many unitary perfect numbers. Combining these two notions, Sitaramaiah and Subbarao \cite{SS} studied unitary super perfect (USP) numbers, integers $N$ satisfying $\sigma^*(\sigma^*(N))=2N$. They found all unitary super perfect numbers below $10^8$. The first ones are $2, 9, 165, 238$. Thus there are both even and odd USPs. They proved that another odd USP must have at least four distinct prime factors and conjectured that there are only finitely many odd USPs. The purpose of this paper is to prove this conjecture. Indeed, we show that the two known odd USPs are the only ones. \begin{thm}\label{th11} If $N$ is an odd USP, then $N=9$ or $N=165$. \end{thm} Our proof is completely elementary. The key point of our proof is the fact that if $N$ is an odd USP, then $\sigma^*(N)$ must be of the form $2^{f_1} q^{f_2}$, where $q$ is an odd prime. This yields that if $p^e$ is a unitary divisor of $N$, then $p^e+1$ must be of the form $2^a q^b$.
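Before going further, the defining condition $\sigma^*(\sigma^*(N))=2N$ can be verified numerically for the four USPs below $10^8$ listed above. The helper `unitary_sigma` below is our own illustrative sketch, not taken from \cite{SS}.

```python
def unitary_sigma(n):
    """Sum of unitary divisors of n, i.e. the product of p^e + 1
    over the prime powers p^e exactly dividing n."""
    total, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            pe = 1
            while m % p == 0:
                m //= p
                pe *= p
            total *= pe + 1
        p += 1
    if m > 1:          # leftover prime factor
        total *= m + 1
    return total

# the unitary super perfect numbers below 10^8 found by Sitaramaiah and Subbarao
for n in (2, 9, 165, 238):
    assert unitary_sigma(unitary_sigma(n)) == 2 * n
```

For example, $\sigma^*(165)=4\cdot 6\cdot 12=288$ and $\sigma^*(288)=(2^5+1)(3^2+1)=330=2\cdot 165$.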
Moreover, elementary theory of cyclotomic polynomials and quadratic residues gives that $a\leq 2$ or $b=0$. Hence $p^e$ belongs to a very thin set. Using this fact, we deduce that $q$ must be small. For each small prime $q$, we show that $\sigma^*(\sigma^*(N))/N<2$, with the aid of the fact that $f_1, f_2$ must be fairly large, and therefore $N$ cannot be a USP unless $N=9, 165$. We sometimes use facts already stated in \cite{SS}, but we shall present proofs of these facts when the proofs are omitted in \cite{SS}. Our method does not seem to work to find all odd super perfect numbers, since $\sigma(\sigma(N))=2N$ does not seem to imply that $\omega(\sigma(N))\leq 2$. Even assuming that $\omega(\sigma(N))\leq 2$, the property of $\sigma$ that $\sigma(p^e)/p^e>1+1/p$ prevents us from showing that $\sigma(\sigma(N))/N<2$. Nevertheless, with the aid of the theory of exponential diophantine equations, we can show that for any given $k$, there are only finitely many odd super perfect numbers $N$ with $\omega(\sigma(N))\leq k$. \section{Preliminary Lemmas}\label{lemmas} Let us denote by $v_p(n)$ the solution $e$ of $p^e||n$. For distinct primes $p$ and $q$, we denote by $o_q(p)$ the exponent of $p \mod q$ and we define $a_q(p)=v_q(p^d-1)$, where $d=o_q(p)$. Clearly $o_q(p)$ divides $q-1$ and $a_q(p)$ is a positive integer. Now we quote some elementary properties of $v_q(\sigma(p^x))$. Lemma \ref{lm21} is well-known; it was proved by Zsigmondy \cite{Zsi} and rediscovered by many authors such as Dickson \cite{Dic} and Kanold \cite{Kan}. See also Theorem 6.4A.1 in \cite{Sha}. \begin{lem}\label{lm21} If $a>b\ge 1$ are coprime integers, then $a^n-b^n$ has a prime factor which does not divide $a^m-b^m$ for any $m<n$, unless $(a, b, n)=(2, 1, 6)$ or $a-b=n=1$, or $n=2$ and $a+b$ is a power of $2$. \end{lem} By Lemma \ref{lm21}, we obtain the following lemmas. \begin{lem}\label{lm22} Let $p$, $q$ be odd primes and $e$ be a positive integer.
If $p^e+1=2^aq^b$ for some integers $a$ and $b$, then one of the following holds: a) $e=1$; b) $e$ is even and $q\equiv 1\pmod{2e}$; c) $p$ is a Mersenne prime and $q\equiv 1\pmod{2e}$. \end{lem} \begin{proof} We first show that if a) does not hold, then either b) or c) must hold. Since $(p, e)\neq (2, 3)$ and $e\neq 1$, it follows from Lemma \ref{lm21} that $p^{2e}-1$ has a prime factor $r$ which does not divide $p^m-1$ for any $m<2e$. Since the order of $p\pmod{r}$ is $2e$, $r\equiv 1\pmod{2e}$. Since $r$ is odd and does not divide $p^e-1$, we see that $r$ divides $p^e+1$ and therefore $q=r$. If $e$ is even, then b) holds. Assume that $e$ is odd. If $p+1$ has an odd prime factor, then this factor cannot be equal to $q$ (since $q=r$ does not divide $p^2-1$) and must be a prime factor of $p^e+1=2^aq^b$, which is a contradiction. Thus $p$ is a Mersenne prime and c) follows. \end{proof} \begin{lem}\label{lm23} Let $p$ be an odd prime and $e$ be a positive integer. If $p^e+1=2^a3^b$ for some integers $a$ and $b$, then $e=1$. \end{lem} \begin{proof} By Lemma \ref{lm22}, $e=1$ or $3\equiv 1\pmod{2e}$. The latter is equivalent to $e=1$. \end{proof} \begin{lem}\label{lm24} Let $p$ be an odd prime and $e, x$ be positive integers. If $p^e+1=2^x$, then $e=1$. \end{lem} \begin{proof} If $e>1$, then by Lemma \ref{lm21}, $p^{2e}-1$ has a prime factor which does not divide $p^m-1$ for any $m<2e$. This prime factor must be odd and divide $p^e+1$, which violates the condition $p^e+1=2^x$. \end{proof} \begin{lem}\label{lm25} Let $e, x$ be positive integers. If $2^x+1=3^e$, then $(e, x)=(1, 1)$ or $(2, 3)$. \end{lem} \begin{proof} We apply Lemma \ref{lm21} with $(a, b, n)=(3, 1, e)$. If $e>2$, then $3^e-1$ has a prime factor which does not divide $3-1=2$, contradicting $3^e-1=2^x$. \end{proof} \begin{lem}\label{lm26} If a prime $p$ divides $2^a+1$ for some integer $a$, then $p$ is congruent to $1, 3$ or $5\pmod{8}$. \end{lem} \begin{proof} If $a$ is even, then it is well known that $p\equiv 1\pmod{4}$.
If $a$ is odd, then $p$ divides $2x^2+1$ with $x=2^{(a-1)/2}$. We have $(-2/p)=1$ and therefore $p\equiv 1$ or $3\pmod{8}$. \end{proof} \begin{lem}\label{lm27} Let $p$ and $q$ be odd primes and $b$ be a positive integer. If $p$ divides $q^b+1$ and $4$ does not divide $q^b+1$, then $4q$ does not divide $p+1$. \end{lem} \begin{proof} If $b$ is even, then $p\equiv 1\pmod{4}$ and clearly $4q$ does not divide $p+1$. If $b$ is odd, then we have $(-q/p)=1$ and $q\equiv 1\pmod{4}$. Assume that $q$ divides $p+1$. Since $q\equiv 1\pmod{4}$, we have, by the reciprocity law, $(-q/p)=(-1/p)(q/p)=(-1/p)(p/q)=(-1/p)(-1/q)=(-1/p)$. Thus $(-1/p)=1$ and $p\equiv 1\pmod{4}$ and therefore $4$ does not divide $p+1$. \end{proof} \section{Basic properties of odd USPs} In this section, we shall show some basic properties of odd USPs. We write $N=p_1^{e_1} p_2^{e_2}\ldots p_k^{e_k}$, where $p_1, p_2, \ldots, p_k$ are distinct primes. Moreover, we denote by $C$ the constant \begin{equation} \prod_{p, 2^p-1\mathrm{\ is\ prime}}\frac{2^p}{2^p-1}<1.6131008. \end{equation} This upper bound follows from the following estimate: \begin{equation}\label{eq31} \begin{split} \prod_{p, 2^p-1\mathrm{\ is\ prime}}\frac{2^p}{2^p-1}&<\frac{4}{3}\cdot\left(\prod_{n\ge 3, n\mathrm{\ is\ odd}}\frac{2^n}{2^n-1}\right)\\ &<\frac{4}{3}\cdot\exp\left(\sum_{n\ge 3, n\mathrm{\ is\ odd}}\frac{1}{2^n-1}\right)\\ &<\frac{4}{3}\cdot\exp\left(\frac{1}{7}\sum_{n\ge 0}\frac{1}{4^n}\right)\\ &=\frac{4}{3}\cdot\exp\left(\frac{4}{21}\right)=1.6131006\cdots.\\ \end{split} \end{equation} \begin{lem}\label{lm31} If $N$ is an odd USP, then $\sigma^*(N)=2^{f_1}q^{f_2}$ for some odd prime $q$ and positive integers $f_1, f_2$. Moreover, $q^{f_2}+1$ is not divisible by $4$. \end{lem} \begin{proof} Since $N$ is odd, $\sigma^*(N)$ must be even. Moreover, since $\sigma^*(\sigma^*(N))=2N$ with $N$ odd, $\sigma^*(N)$ has exactly one odd prime factor.
Hence $\sigma^*(N)=2^{f_1}q^{f_2}$ for some odd prime $q$ and positive integers $f_1, f_2$. Since $\sigma^*(q^{f_2})=q^{f_2}+1$ divides $\sigma^*(\sigma^*(N))=2N$, $4$ does not divide $q^{f_2}+1$. \end{proof} Henceforth, we let $N\neq 9, 165$ be an odd USP and write $\sigma^*(N)=2^{f_1}q^{f_2}$ as allowed by Lemma \ref{lm31}. \begin{lem}\label{lm32} Unless $p_i$ is a Mersenne prime and $e_i$ is odd, we have $p_i^{e_i}=2^{a_i}q^{b_i}-1$ for some positive integers $a_i$ and $b_i$ with $a_i\leq 2$. Moreover, $f_1=\sum_{i=1}^{k} a_i$ and $f_2=\sum_{i=1}^{k} b_i$. \end{lem} \begin{proof} Since $\sigma^*(p_i^{e_i})=p_i^{e_i}+1$ divides $\sigma^*(N)=2^{f_1}q^{f_2}$, we can write $p_i^{e_i}+1=2^{a_i}q^{b_i}$ with some nonnegative integers $a_i$ and $b_i$. Since $p_i$ is odd and non-Mersenne, $a_i$ and $b_i$ are positive by Lemma \ref{lm24}. If $e_i$ is even, then $p_i^{e_i}+1\equiv 2\pmod{4}$. Hence $a_i=1$. Assume that $p_i$ is not a Mersenne prime and $e_i$ is odd. By Lemma \ref{lm22}, we have $e_i=1$ and therefore $p_i=p_i^{e_i}=2^{a_i}q^{b_i}-1$. By Lemmas \ref{lm26} and \ref{lm27}, we have $a_i\leq 2$ since $q^{f_2}+1$ is not divisible by $4$. The latter part of the lemma immediately follows from $2^{f_1}q^{f_2}=\sigma^*(N)=\prod (p_i^{e_i}+1)$. \end{proof} \begin{lem}\label{lm34} $\omega(N)\geq 3$. \end{lem} \begin{proof} First we assume that $N=p_1^{e_1}$. Since we have $\sigma^*(N)/N=1+1/N$ and $\sigma^*(\sigma^*(N))/\sigma^*(N)\leq (1+1/2)(1+2/N)$ by Lemma \ref{lm31}, we have $N\leq 9$. We can easily confirm that $N=9$ is the sole odd USP with $N\leq 9$. Next we assume that $N=p_1^{e_1}p_2^{e_2}$. Since we have $\sigma^*(N)/N\leq (1+1/3)(1+3/N)$ and $\sigma^*(\sigma^*(N))/\sigma^*(N)\leq (1+1/4)(1+4/N)$, we have $N<37$. We can easily confirm that there is no odd USP $N$ with $N<37$ and $\omega(N)=2$.
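The finite verifications invoked here ("we can easily confirm\dots") amount to a direct search. A brute-force sketch (our own helper, not from \cite{SS}), run over all odd $N$ up to $10^4$ (comfortably covering the bound $N<37$), finds no odd USP other than $9$ and $165$:

```python
def unitary_sigma(n):
    # product of p^e + 1 over the prime powers p^e exactly dividing n
    total, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            pe = 1
            while m % p == 0:
                m //= p
                pe *= p
            total *= pe + 1
        p += 1
    if m > 1:
        total *= m + 1
    return total

odd_usps = [n for n in range(1, 10**4, 2)
            if unitary_sigma(unitary_sigma(n)) == 2 * n]
print(odd_usps)  # -> [9, 165]
```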
Another proof of the impossibility of $\omega(N)=1$ unless $N=2, 9$ (whether $N$ is even or odd) can be found in \cite[Theorem 3.2]{SS}, and the impossibility of $\omega(N)=2$ (again, $N$ may be even) is stated in \cite[Theorem 3.3]{SS} with their proof presented only in the case $N$ is even. \end{proof} \section{$q$ cannot be $3$} In this section, we show that $q\neq 3$. There are two cases: the case $3\mid N$ and the case $3\nmid N$. \begin{prop}\label{pr41} If $3\nmid N$ and $3\mid \sigma^*(N)$, then $f_1$ and $f_2$ are even, and every $p_i$ has the form $2\cdot 3^{b_i}-1$ with a positive integer $b_i$. \end{prop} \begin{proof} We have $e_i=1$ by Lemma \ref{lm23}. Thus any $p_i$ must be of the form $2^{a_i}\cdot 3^{b_i}-1$ with nonnegative integers $a_i, b_i$. Since $3^{f_2}+1$ is not divisible by $4$, $f_2$ must be even. Since $3$ does not divide $2^{f_1}+1$, $f_1$ must also be even. By Lemma \ref{lm26}, any prime factor of $N$ is congruent to $1\pmod{4}$ and therefore $a_i$ must be odd. By Lemma \ref{lm32}, we have $a_i=1$. \end{proof} Hence we have $p_i\in\{5, 17, 53, 4373, \ldots\}$. \begin{lem}\label{lm42} If $3\mid \sigma^*(N)$, then $3\mid N$. \end{lem} \begin{proof} Suppose $3\mid \sigma^*(N)$ and $3\nmid N$. By Proposition \ref{pr41}, we have \begin{equation} \frac{\sigma^*(N)}{N}\leq\frac{6}{5}\cdot\frac{18}{17}\cdot\frac{54}{53}\cdot\left(\prod_{i=7}^{\infty}\frac{2\cdot 3^i}{2\cdot 3^i-1}\right). \end{equation} Since \begin{equation} \prod_{i=7}^{\infty}\frac{2\cdot 3^i}{2\cdot 3^i-1}\leq\exp\sum_{i=7}^{\infty}\frac{1}{2\cdot 3^i-1}\leq\linebreak[0]\exp\left(\frac{1}{2\cdot 3^7-1}\sum_{i=0}^{\infty}3^{-i}\right), \end{equation} we have \begin{equation}\label{eq41} \frac{\sigma^*(N)}{N}<\frac{6}{5}\cdot\frac{18}{17}\cdot\frac{54}{53}\cdot\exp\left(\frac{3}{8746}\right). \end{equation} Since $k\geq 3$ by Lemma \ref{lm34}, we have $f_1=k\geq 3$ and $f_2\geq 3+2+1=6$.
Thus we obtain \begin{equation}\label{eq42} \frac{\sigma^*(\sigma^*(N))}{\sigma^*(N)}\leq\frac{9}{8}\cdot\frac{730}{729}. \end{equation} Multiplying (\ref{eq41}) and (\ref{eq42}), we obtain \begin{equation} 2=\frac{\sigma^*(\sigma^*(N))}{N}<\frac{9}{8}\cdot\frac{730}{729}\cdot \frac{6}{5}\cdot\frac{18}{17}\cdot\frac{54}{53}\cdot\exp\left(\frac{3}{8746}\right)=1.4588\cdots<2, \end{equation} which is a contradiction. \end{proof} \begin{lem}\label{lm43} It is impossible that $3\mid N$ and $3\mid \sigma^*(N)$. \end{lem} \begin{proof} Suppose $3\mid N$ and $3\mid \sigma^*(N)$. We have $e_i=1$ by Lemma \ref{lm23}. By Lemma \ref{lm26}, $2^{f_1}+1$ is divisible by no Mersenne prime other than 3. Since $3^{f_2}+1$ cannot be divisible by 4, $f_2$ must be even and therefore $3^{f_2}+1$ is divisible by no Mersenne prime. Hence it follows from Lemma \ref{lm32} that any $p_i$ must be of the form $2^{a_i}\cdot 3^{b_i}-1$, where $a_i\leq 2$ and $b_i$ are positive integers. Hence $p_i\in\{5, 11, 17, 53, 107, 971, 4373, \ldots\}$. Thus we obtain \begin{equation} \frac{\sigma^*(N)}{N}\leq\frac{4}{3}\cdot\frac{6}{5}\cdot\frac{12}{11}\cdot\frac{18}{17}\cdot\frac{54}{53}\cdot\frac{108}{107}\cdot\left(\prod_{i=7}^{\infty}\frac{2\cdot 3^i}{2\cdot 3^i-1}\right) \cdot\left(\prod_{i=5}^{\infty}\frac{4\cdot 3^i}{4\cdot 3^i-1}\right). \end{equation} As in the proof of the previous lemma, substituting the inequality \begin{equation} \prod_{i=5}^{\infty}\frac{4\cdot 3^i}{4\cdot 3^i-1}\leq\exp\left(\frac{1}{4\cdot 3^5-1}\sum_{i=0}^{\infty}3^{-i}\right) \end{equation} we have \begin{equation}\label{eq43} \frac{\sigma^*(N)}{N}\leq\frac{4}{3}\cdot\frac{6}{5}\cdot\frac{12}{11}\cdot\frac{18}{17}\cdot\frac{54}{53}\cdot\frac{108}{107}\cdot\exp\left(\frac{3}{8746}+\frac{3}{1942}\right). \end{equation} Since $k\geq 46$ by \cite[Theorem 3.4]{SS}, we have \begin{equation}\label{eq44} \frac{\sigma^*(\sigma^*(N))}{\sigma^*(N)}\leq\frac{2^{46}+1}{2^{46}}\cdot\frac{3^{45}+1}{3^{45}}.
\end{equation} Multiplying (\ref{eq43}) and (\ref{eq44}), we obtain \begin{equation} \begin{split} 2&=\frac{\sigma^*(\sigma^*(N))}{N}\\&\leq\frac{2^{46}+1}{2^{46}}\cdot\frac{3^{45}+1}{3^{45}}\cdot\frac{4}{3}\cdot\frac{6}{5}\cdot\frac{12}{11}\cdot\frac{18}{17}\cdot\frac{54}{53}\cdot\frac{108}{107}\cdot\exp\left(\frac{3}{8746}+\frac{3}{1942}\right)\\&\leq 1.9041\cdots<2, \end{split} \end{equation} which is a contradiction. \end{proof} It immediately follows from these two lemmas that $q\neq 3$. \section{The remaining part} The remaining case is $3\nmid \sigma^*(N)$, i.e., $q\neq 3$. \begin{lem}\label{lm51} Suppose $p_i$ is not a Mersenne prime. Then $p_i^{e_i}$ has the form $2^{a_i}\cdot q^{b_i}-1$ with positive integers $a_i\leq 2$ and $b_i$. Moreover, for any integer $b$, at most one of the pairs $(1, b)$ and $(2, b)$ appears among the pairs $(a_i, b_i)$. \end{lem} \begin{proof} The former part follows from Lemma \ref{lm32}. Since $q\neq 3$, $3$ divides at least one of $2\cdot q^{b}-1$ and $4\cdot q^{b}-1$. If both pairs $(a_i, b_i)=(1, b)$ and $(a_j, b_j)=(2, b)$ appear, then at least one of $p_i^{e_i}$ and $p_j^{e_j}$ must be a power of three, which violates the condition that $p_i$ and $p_j$ are not Mersenne. \end{proof} \begin{lem}\label{lm52} $q\leq 13$. Furthermore, provided $f_2\geq 2$, we have $q=5$ or $q=7$. \end{lem} \begin{proof} By Lemma \ref{lm51}, we have \begin{equation} \frac{\sigma^*(N)}{N}\leq C\cdot\left(\prod_{a=1}^{\infty}\frac{2\cdot q^a}{2\cdot q^a-1}\right). \end{equation} Since $\prod_{a=1}^{\infty}2\cdot q^a/(2\cdot q^a-1)\leq\exp\left(q/\left\{(q-1)(2q-1)\right\}\right)$, we have \begin{equation} \frac{\sigma^*(N)}{N}\leq C\cdot\exp\left(\frac{q}{(q-1)(2q-1)}\right). \end{equation} By Lemma \ref{lm34}, we have \begin{equation} \frac{\sigma^*(\sigma^*(N))}{\sigma^*(N)}\leq\frac{2^{f_1}+1}{2^{f_1}}\cdot\frac{q^{f_2}+1}{q^{f_2}}\leq\frac{2^3+1}{2^3}\cdot\frac{q^{f_2}+1}{q^{f_2}}.
\end{equation} Combining these inequalities, we obtain \begin{equation} 2\leq\frac{\sigma^*(\sigma^*(N))}{N}\leq\frac{2^3+1}{2^3}\cdot C\cdot\frac{q^{f_2}+1}{q^{f_2}}\cdot\exp\left(\frac{q}{(q-1)(2q-1)}\right). \end{equation} Hence \begin{equation} \frac{q^{f_2}+1}{q^{f_2}}\cdot\exp\left(\frac{q}{(q-1)(2q-1)}\right)\geq\frac{16}{9C}\geq 1.102087. \end{equation} This yields $q\leq 13$. If $f_2\geq 2$, then this inequality yields $q\leq 7$. \end{proof} \begin{thm}\label{th53} $q\neq 5$. \end{thm} \begin{proof} Suppose that $q=5$. Then we have $p_i^{e_i}=2\cdot 5^{b_i}-1$ or $p_i^{e_i}=4\cdot 5^{b_i}-1$ or $p_i$ is Mersenne. Hence $p_i^{e_i}\in\{ 19, 499, 7812499, \ldots, 9, 49, 1249, \ldots,\linebreak[0] 3, 7, 31, 127, 8191, \ldots\}$. We note that $9=3^2$ and $49=7^2$. Let us assume that $19\mid N$. Then $f_1\equiv 9\pmod{18}$ and hence $3^3\mid N$. By (\ref{eq31}), we have \begin{equation} \frac{\sigma^*(N)}{N}\leq \frac{3}{4}\cdot\frac{28}{27}\cdot C\cdot\exp\left(\frac{5}{36}\right). \end{equation} Since $f_1\geq 9$, we have \begin{equation} \frac{\sigma^*(\sigma^*(N))}{N}\leq\frac{2^9+1}{2^9}\cdot\frac{6}{5}\cdot\frac{7}{9}\cdot C\cdot\exp\left(\frac{5}{36}\right)=1.7332\cdots <2, \end{equation} which is a contradiction. Thus $19$ cannot divide $N$. From this we deduce that if $p_i^{e_i}=2\cdot 5^{b_i}-1$ or $p_i^{e_i}=4\cdot 5^{b_i}-1$, then $b_i\geq 3$. It is impossible that $7\mid N$ since $7$ does not divide $2^x+1$ or $5^x+1$ for any integer $x$. Hence, by Lemma \ref{lm34} we have \begin{equation} \frac{\sigma^*(\sigma^*(N))}{N}\leq \frac{7}{8}\cdot C\cdot\exp\left(\frac{5}{4}\cdot\frac{1}{249}\right)\cdot\frac{6}{5}\cdot\frac{9}{8}=1.9150\cdots <2. \end{equation} Therefore, we cannot have $q=5$. \end{proof} \begin{thm}\label{th54} $q\neq 7, 11, 13$. \end{thm} \begin{proof} Suppose $q=7$. Observing that $4\cdot 7^b-1$ is divisible by $3$, we deduce from Lemma \ref{lm32} that, for any $i$, $p_i$ is a Mersenne prime or $p_i^{e_i}=2\cdot 7^{b_i}-1$.
By Lemma \ref{lm26}, $(2^{f_1}+1)(7^{f_2}+1)$ is not divisible by $7$. Hence \begin{equation}\label{eq541} \begin{split} \frac{\sigma^*(N)}{N}&\leq \frac{4}{3}\cdot\left(\prod_{i=2}^{\infty}\frac{2^{2i+1}}{2^{2i+1}-1}\right) \cdot\left(\prod_{i=1}^{\infty}\frac{2\cdot 7^i}{2\cdot 7^i-1}\right)\\ &\leq\frac{4}{3}\cdot\exp\left(\frac{1}{31}\cdot\frac{4}{3}+\frac{1}{13}\cdot\frac{8}{7}\right). \end{split} \end{equation} By Lemma \ref{lm34}, we have $k\geq 3$. We deduce from Lemma \ref{lm32} that we can take an integer $s$ with $1\leq s\leq 3$ for which the following statement holds: there are at least $3-s$ indices $i$ such that $p_i$ is a Mersenne prime and $e_i$ is odd, and there are at least $s$ indices $i$ such that $p_i^{e_i}=2\cdot 7^{b_i}-1$. If $s=1$, then $f_1\geq 6$ and $f_2\geq 1$. If $s=2$, then $f_1\geq 4$ and $f_2\geq 3$. If $s=3$, then $f_1\geq 3$ and $f_2\geq 6$. Hence \begin{equation}\label{eq542} \begin{split} \frac{\sigma^*(\sigma^*(N))}{\sigma^*(N)}&\leq\max\left\{\frac{2^6+1}{2^6}\cdot\frac{8}{7},\ \frac{2^4+1}{2^4}\cdot\frac{7^3+1}{7^3},\ \frac{2^3+1}{2^3}\cdot\frac{7^6+1}{7^6}\right\}\\ &\leq\frac{65}{56}. \end{split} \end{equation} Combining the two inequalities (\ref{eq541}) and (\ref{eq542}), we have \begin{equation} \frac{\sigma^*(\sigma^*(N))}{N}\leq\frac{65}{56}\cdot\frac{4}{3}\cdot\exp\left(\frac{1}{31}\cdot\frac{4}{3}+\frac{1}{13}\cdot\frac{8}{7}\right)=1.7604\cdots<2, \end{equation} which is a contradiction. Suppose $q=11$. Observing that $2\cdot 11^{2b+1}-1$ and $4\cdot 11^{2b}-1$ are divisible by $3$, we deduce from Lemma \ref{lm32} that, for any $i$, $p_i$ is a Mersenne prime or $p_i^{e_i}=2^{a_i}\cdot 11^{b_i}-1$ with $a_i+b_i$ odd. Hence \begin{equation} \begin{split} \frac{\sigma^*(N)}{N}&\leq\frac{4}{3}\cdot\left(\prod_{i=2}^{\infty}\frac{2^{2i+1}}{2^{2i+1}-1}\right) \cdot\left(\prod_{i=1}^{\infty}\frac{2\cdot 11^i}{2\cdot 11^i-1}\right)\\ &\leq\frac{4}{3}\cdot\exp\left(\frac{1}{31}\cdot\frac{4}{3}+\frac{1}{21}\cdot\frac{12}{11}\right).
\end{split} \end{equation} In a way similar to the derivation of (\ref{eq542}), we obtain \begin{equation} \frac{\sigma^*(\sigma^*(N))}{\sigma^*(N)}\leq\frac{2^3+1}{2^3}\cdot\frac{11^6+1}{11^6}. \end{equation} Combining these inequalities, we have \begin{equation} \begin{split} 2=\frac{\sigma^*(\sigma^*(N))}{N}&\leq\frac{2^3+1}{2^3}\cdot\frac{11^6+1}{11^6}\cdot\frac{4}{3} \cdot\frac{8}{7}\cdot\exp\left(\frac{1}{31}\cdot\frac{4}{3}+\frac{1}{21}\cdot\frac{12}{11}\right)\\ &\leq 1.8850\cdots<2, \end{split} \end{equation} which is a contradiction. Suppose $q=13$. Then $3\mid N$ and $3\nmid (q^{f_2}+1)$, since $q=13\equiv 1\pmod{3}$. Hence $f_1$ must be odd. Moreover, $f_2=1$ by Lemma \ref{lm52}. Hence $\sigma^*(N)=2^{f_1}\cdot 13$ and $N=7(2^{f_1}+1)$. There is exactly one index $j$ such that $p_j^{e_j}$ is of the form $2^{a}13^{b}-1$ for some positive integers $a, b$. By Lemma \ref{lm32}, we have $a\leq 2$. Moreover, we have $b=1$ since $b\leq f_2=1$. Hence $p_j^{e_j}=25=5^2$ (note that $4\cdot 13-1=51=3\cdot 17$ is not a prime power). Since $13^{f_2}+1=2\cdot 7$, $2^{f_1}+1$ must be divisible by $5$. But this is impossible since $f_1$ is odd. \end{proof} Now Theorem \ref{th11} is clear. By Lemma \ref{lm52}, $q$ must be one of $3, 5, 7, 11, 13$. In the previous section, it is shown that $q\neq 3$. Theorem \ref{th53} shows that $q\neq 5$. Theorem \ref{th54} eliminates the remaining possibilities.
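As a closing numerical remark, the lists of admissible primes used in Section 4, $\{5, 17, 53, 4373, \ldots\}$ and $\{5, 11, 17, 53, 107, 971, 4373, \ldots\}$, can be reproduced by testing which numbers $2\cdot 3^b-1$ and $4\cdot 3^b-1$ are prime. The helper below is an illustrative sketch of our own.

```python
def is_prime(n):
    # naive trial division, sufficient for the small values considered here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

form2 = [2 * 3**b - 1 for b in range(1, 8) if is_prime(2 * 3**b - 1)]
form4 = [4 * 3**b - 1 for b in range(1, 8) if is_prime(4 * 3**b - 1)]
print(form2)  # primes of the form 2*3^b - 1:  [5, 17, 53, 4373]
print(form4)  # primes of the form 4*3^b - 1:  [11, 107, 971, 8747]
```

Merging the two lists (up to $4373$) recovers $\{5, 11, 17, 53, 107, 971, 4373\}$, as in the proof of Lemma \ref{lm43}.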
https://arxiv.org/abs/2212.06766
Homomorphism Conjugacy versus Centralizer Actions in the Symmetric Group
We explore when generator-conjugate homomorphisms are conjugate and when element-conjugate homomorphisms are conjugate from abelian or dihedral groups to the symmetric group. We completely determine when such homomorphisms are conjugate in the case where the source group has two generators by studying centralizer actions in the target group.
\section{Introduction} Element-conjugate linear representations of finite degree are conjugate if the underlying field is algebraically closed and of characteristic zero. We are motivated by exploring the relationship between element-conjugacy and homomorphism conjugacy for other target groups. M. Larsen initiated the discussion of the relationship between element-conjugate homomorphisms and conjugate homomorphisms for Lie groups in his papers from 1994 \cite{larsen94} and 1996 \cite{larsen96}. In these papers, a group $G$ is called acceptable if every homomorphism $\phi$ from a group $\Gamma$ to $G$ is determined up to conjugacy by the conjugacy classes of the elements $\phi(\gamma)$. Larsen made progress toward the classification of acceptable subgroups of the general linear group, where all algebraic groups are assumed to be connected even if their associated Lie groups are not. Weidner later extended this work by using pseudocharacters to answer more questions about acceptable algebraic groups \cite{weidner20}. We focus on a discrete variation of this problem by determining under which conditions generator-conjugate or element-conjugate homomorphisms to the symmetric group are conjugate. \subsection{Notation and Important Definitions} \subsubsection{Notation} Let $G$ be a group and $x$ an element of $G$; then $\mathrm{Cent}_G(x)$ denotes the centralizer of $x$ in $G$. We write $S_n$ for the symmetric group of the set $\{1,2,...,n\}$ and $S_X$ for the symmetric group of a set $X$. For $\sigma \in S_n$, $\mathbf{Fix}(\sigma)$ denotes the set of fixed points of $\sigma$ in $\{1,2,...,n\}$. We set $X_{\sigma} = \{1,2,...,n\} \setminus \mathbf{Fix}(\sigma)$. For $\sigma_1,\sigma_2 \in S_n$, we set $X_{\sigma_1,\sigma_2}=X_{\sigma_1}\cup X_{\sigma_2}=\{1,2,...,n\}\setminus \left(\mathbf{Fix}(\sigma_1) \cap \mathbf{Fix}(\sigma_2)\right).$ \subsubsection{Definitions} Let $G$ and $H$ be groups and let $\varphi,\psi$ be homomorphisms from $G$ to $H$.
\begin{definition}[\textbf{Generator-conjugate}] Assume $G$ is generated by the elements $g_1,g_2,...,g_n \in G$. If for each $g_i$ there exists some $h \in H$ (possibly depending on $i$) such that \begin{equation} h\varphi(g_i)h^{-1} = \psi(g_i), \end{equation} then $\varphi, \psi$ are generator-conjugate. \end{definition} \begin{definition}[\textbf{Element-conjugate}] If for every $g \in G$ there exists some $h \in H$ (possibly depending on $g$) such that \begin{equation} h\varphi(g)h^{-1} = \psi(g), \end{equation} we say $\varphi, \psi$ are element-conjugate. \end{definition} \begin{definition} If there exists a single $h \in H$ so that for all $g \in G$: \begin{equation} h\varphi(g)h^{-1} = \psi(g), \end{equation} then $\varphi, \psi$ are \textbf{conjugate}. \end{definition} \subsection{Summary} We begin by studying actions induced by centralizers in the symmetric group. Assume $\sigma \in S_n$. The first action is due to the invariance of cycle type under conjugation in the symmetric group and gives a direct product decomposition of the centralizer $\mathrm{Cent}_{S_n}(\sigma)$ (Lemma \ref{directproduct}). Let $\sigma^\prime \in S_n$ be the product $\tau_1\tau_2\ldots\tau_k$ of $k$ disjoint cycles of length $d$. The centralizer of $\sigma^\prime$ acts transitively by conjugation on the set $\{\tau_1,\tau_2,\ldots,\tau_k\}$ (Lemma \ref{actioncycles}). These two actions to a large extent determine whether two elements $\pi,\pi^\prime\in S_n$ are conjugate by some element $\rho$ in the centralizer of $\sigma$. Let $\varphi,\psi$ be generator-conjugate homomorphisms from a group $G$ to the symmetric group. We assume that both homomorphisms send some generator $g$ to the same point $\sigma$ in $S_n$. If $\varphi, \psi$ are conjugate as homomorphisms, there must exist some $\rho \in \mathrm{Cent}_{S_n}(\sigma)$ so that $\rho\varphi\rho^{-1}=\psi$. We are concerned with determining when such an element $\rho$ exists.
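The three notions can be contrasted computationally. The following minimal sketch (helper names are mine) exhibits two homomorphisms from the Klein four-group $\langle a,b\rangle$ into $S_4$, specified by their generator images, that are generator-conjugate but not conjugate:

```python
# Sketch (0-indexed points, helper names mine): phi and psi below agree
# on a, and their images of b have the same cycle type, so the maps are
# generator-conjugate; but no single h in S_4 conjugates both generator
# images simultaneously, so they are not conjugate as homomorphisms.
from itertools import permutations

def conj(h, g):
    # (h g h^-1)(i) = h(g(h^-1(i))); permutations are tuples on {0,...,n-1}
    n = len(g)
    hinv = [0] * n
    for i, x in enumerate(h):
        hinv[x] = i
    return tuple(h[g[hinv[i]]] for i in range(n))

S4 = list(permutations(range(4)))
sigma = (1, 0, 3, 2)                     # (01)(23)
phi = {'a': sigma, 'b': (2, 3, 0, 1)}    # b -> (02)(13)
psi = {'a': sigma, 'b': sigma}           # b -> (01)(23)

# generator-conjugate: each generator image is conjugate separately
gen_conj = all(any(conj(h, phi[g]) == psi[g] for h in S4) for g in 'ab')
# conjugate: one h works for both generators simultaneously
hom_conj = any(all(conj(h, phi[g]) == psi[g] for g in 'ab') for h in S4)
```

Here `gen_conj` is true while `hom_conj` is false: any $h$ conjugating $\varphi(a)$ to $\psi(a)$ lies in the centralizer of $\sigma$, and the orbit of $(02)(13)$ under that centralizer is $\{(02)(13),(03)(12)\}$, which never reaches $(01)(23)$.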
Exploring centralizer actions in relation to homomorphism conjugacy in this way, we obtain our main results. \subsubsection{Main results of the paper} Assume $\varphi,\psi$ are homomorphisms from an abelian group with two generators or a dihedral group, to the symmetric group $S_n$. In these cases we: \begin{itemize} \item Describe under which conditions generator-conjugate homomorphisms are conjugate (Theorem \ref{abeliantheorem} and Theorem \ref{dihedraltheorem}). \item Detail the relationship between element-conjugate homomorphisms and conjugate homomorphisms (Corollary \ref{localglobalabelian} and Corollary \ref{localglobaldihedral}). \end{itemize} \subsection{Possible Future Directions} The dihedral case describes the relationship between generator-conjugacy, element-conjugacy, and conjugacy for homomorphisms from a finite reflection group with two generators to the symmetric group. A natural next step could be attempting to generalize some of these results to other finite reflection groups as source groups. In Lemma \ref{actioncycles}, we prove that the centralizer of an element $\sigma=\tau_1\tau_2\cdots\tau_k$, the product of $k$ cycles of length $d$, acts transitively on the set $\{\tau_1,\tau_2,...,\tau_k \}$. However, we later note that there is also a larger subgroup $H_{\sigma}$ that acts transitively on the set $\{ \{\tau_1\},...,\{\tau_k \}\}$, where $\{\tau_i\}$ is the set of points in the cycle $\tau_i$ (Lemma \ref{actiondihedral}): \begin{equation} H_{\sigma} = \{ x \in S_n \mid x\sigma x^{-1} = \sigma^z \; \mathrm{for} \; \mathrm{some} \; z \in \mathbb{Z} \} \end{equation} Note that any finite reflection group $S$ can be represented in the form: \begin{equation} S = \langle s_1,s_2,...,s_n \;| \; s_i^2=1, (s_is_j)^{m_{i,j}}=1 , \; m_{i,j} \in \mathbb{Z} \rangle, \end{equation} and that $s_i,s_j \in H_{s_is_j}$: \begin{equation} s_i(s_is_j)s_i = s_j(s_is_j)s_j = s_js_i = (s_is_j)^{-1}.
\end{equation} A way to work out the relation between element-conjugacy and conjugacy for homomorphisms to the symmetric group from an arbitrary finite reflection group might be through this connection to actions, in a similar way to the abelian and dihedral cases we have already covered. Further, we could also let the target group itself be any finite reflection group and try to further generalize the results to this setting. Another potential route is to first note that $S_n$ is the Weyl group of $\mathrm{GL}(n,\mathbb{C})$. It could then be interesting to explore the connection between element-conjugacy and conjugacy for homomorphisms to each group. This could be expanded into attempting to explain this relation for other connected, reductive groups $G$ and their corresponding Weyl groups. \section{Centralizer Actions in the Symmetric Group} Let $\sigma$ be an arbitrary permutation in $S_n$. We begin by defining the notion of a non-trivial centralizer. \begin{definition} [Non-trivial Centralizer] We call the centralizer of $\sigma$ in the symmetric group $S_{X_{\sigma}}$, where $X_{\sigma}=\{1,2,...,n\}\setminus \mathbf{Fix}(\sigma)$, the non-trivial centralizer of $\sigma$ and denote it $\mathrm{Cent}_0(\sigma)$. \end{definition} We may rewrite $\sigma$ as a product of disjoint cycles and group the cycles of the same length together such that \begin{equation} \sigma = \sigma_1\sigma_2...\sigma_l, \end{equation} where each $\sigma_i$ is the product of $k_i$ disjoint cycles of length $d_i$ and all the integers $d_1,...,d_l$ are distinct. The first important centralizer action we observe is induced by the invariance of cycle type under conjugation in $S_n$.
\begin{lemma}[Direct Product Decomposition of Centralizer] \label{directproduct} The centralizer of $\sigma=\sigma_1\sigma_2...\sigma_l$ has a direct product decomposition in $S_n$: \begin{equation} \mathrm{Cent}_{S_n}(\sigma)=S_{\mathbf{Fix}(\sigma)}\times\mathrm{Cent}_0(\sigma_1)\times\mathrm{Cent}_0(\sigma_2)\times...\times\mathrm{Cent}_0(\sigma_l) \end{equation} \end{lemma} \begin{proof} Assume $\pi \in \mathrm{Cent}_{S_n}(\sigma)$. Then \begin{equation} \pi\sigma_1\pi^{-1}\pi\sigma_2\pi^{-1}...\pi\sigma_l\pi^{-1}=\sigma_1\sigma_2...\sigma_l. \end{equation} By invariance of cycle type under conjugation, and since the lengths $d_1,...,d_l$ are distinct, it must hold that \begin{equation} \pi\sigma_i\pi^{-1}=\sigma_i, \; \; 1\leq i \leq l. \end{equation} Every point in a cycle of length $d_i$ in $\sigma_i$ must be sent to some cycle of length $d_i$, which also lies in $\sigma_i$ by construction. \end{proof} This direct product decomposition of the centralizer motivates restricting our further study of centralizer actions in $S_n$ to the case where $\sigma$ is the product of $k$ disjoint cycles of length $d$. We assume $\sigma$ satisfies this condition for the remainder of this entire section. The second important action for our results is presented in the following lemma. \begin{lemma} \label{actioncycles} $\mathrm{Cent}_0(\sigma)$ acts transitively by conjugation on the set $\{\tau_1,\tau_2,...,\tau_k\}$. \end{lemma} \begin{proof} The identity element acts trivially on all elements in the set. We show that the orbit of $\tau_1$ is the entire set. We may set $\tau_1=(12\cdots d)$ and $\tau_i=(i_1i_2...i_d)$ and define a permutation $\pi$ by: \begin{equation} \pi(j)=i_j, \; \pi(i_j) =j, \; \; 1 \leq j \leq d, \qquad \mathbf{Fix}(\pi) = X_{\sigma} \setminus \left(\{1,2,...,d\}\cup\{i_1,i_2,...,i_d\}\right) \end{equation} Clearly, $\pi \in \mathrm{Cent}_0(\sigma)$ and $\pi\tau_1\pi^{-1}= \tau_i$. Next, assume there exists some element $\rho \in \mathrm{Cent}_0(\sigma)$ so that $\rho\tau_1\rho^{-1} \notin \{\tau_1,\tau_2,...,\tau_k\}$.
Since we assume that the cycles $\tau_1,\tau_2,...,\tau_k$ are disjoint, this implies that $\rho\tau_1...\tau_k\rho^{-1} \ne \tau_1...\tau_k$, which is a contradiction. \end{proof} \begin{corollary} \label{kisnormal} The subgroup $K=\langle \tau_1, \tau_2, \tau_3,...,\tau_k \rangle$ is normal in $\mathrm{Cent}_0(\sigma)$. \end{corollary} Note that the associated permutation representation of the action of $\mathrm{Cent}_0(\sigma)$ on $\{ \tau_1,...,\tau_k \}$ is precisely given by the homomorphism whose kernel is $K$: \begin{equation} \mathrm{Cent}_0(\sigma) \longrightarrow \mathrm{Cent}_0(\sigma)/K \cong S_k \end{equation} Each element $\pi \in \mathrm{Cent}_0(\sigma)$ may therefore, via its image under this homomorphism, be associated with some element in $S_k$ describing its action on the cycles $\tau_1,...,\tau_k$. \begin{remark} Let $\pi \in \mathrm{Cent_0}(\sigma)$. The cycle type of $\pi$ under the homomorphism $\mathrm{Cent}_0(\sigma)/K$ is invariant under conjugation in $\mathrm{Cent}_0(\sigma)$. \end{remark} This is just the result that cycle type is invariant under conjugation in the symmetric group. However, this remark is significant for our discussion about homomorphism conjugacy, since two elements $\pi,\pi' \in \mathrm{Cent}_0(\sigma)$ may be conjugate in $S_n$ but have distinct cycle types under $\mathrm{Cent}_0(\sigma)/K$. \begin{ex} Assume $\sigma = (12)(34)(56)(78)$ and set $\pi=(1324)(5768)$ and $\pi'=(1357)(2468)$. The image of $\pi$ under $\mathrm{Cent}_0(\sigma)/K$ is $(12)(34)$, while the image of $\pi'$ under $\mathrm{Cent}_0(\sigma)/K$ is $(1234)$. \end{ex} We now split our study of centralizer actions in the symmetric group into two separate directions, based on the two source groups (abelian or dihedral) for which we are interested in exploring homomorphism conjugacy.
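The example above can be reproduced computationally. The sketch below (0-indexed points, helper names are mine) checks that $\pi,\pi'$ centralize $\sigma$, are conjugate in $S_8$, and yet have different cycle types on the cycle indices:

```python
# Sketch of the example above, 0-indexed: sigma = (01)(23)(45)(67) with
# cycles tau_j = {2j, 2j+1}; the induced map records which cycle each
# cycle is sent to, i.e. the image under Cent_0(sigma)/K.
def perm_from_cycles(n, cycles):
    p = list(range(n))
    for c in cycles:
        for i, x in enumerate(c):
            p[x] = c[(i + 1) % len(c)]
    return tuple(p)

n = 8
sigma = perm_from_cycles(n, [(0, 1), (2, 3), (4, 5), (6, 7)])
pi  = perm_from_cycles(n, [(0, 2, 1, 3), (4, 6, 5, 7)])   # (1324)(5768)
pi2 = perm_from_cycles(n, [(0, 2, 4, 6), (1, 3, 5, 7)])   # (1357)(2468)

def commutes(a, b):
    return tuple(a[b[i]] for i in range(len(a))) == tuple(b[a[i]] for i in range(len(a)))

def induced_on_cycles(p):
    # image under Cent_0(sigma)/K: where each 2-cycle {2j, 2j+1} is sent
    return tuple(p[2 * j] // 2 for j in range(len(p) // 2))

def cycle_type(p):
    seen, ct = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, l = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; l += 1
            ct.append(l)
    return sorted(ct)
```

Both $\pi$ and $\pi'$ have cycle type $[4,4]$ in $S_8$ (so they are conjugate there), while their induced permutations on the four cycle indices have cycle types $[2,2]$ and $[4]$ respectively.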
In the abelian case, we would like to determine when, for two elements $\pi,\pi' \in \mathrm{Cent}_0(\sigma)$, there exists some $\rho \in \mathrm{Cent}_0(\sigma)$ so that \begin{equation} \rho\pi'\rho^{-1}=\pi. \end{equation} In the dihedral case, we are interested in elements $\pi,\pi'$, conjugate in $S_n$, where \begin{equation} \pi^2=(\pi')^2=1 \; \; \mathrm{and} \;\; \pi\sigma\pi^{-1}=(\pi')\sigma(\pi')^{-1}=\sigma^{-1}. \end{equation} In particular, we wish to determine when there exists some $\rho \in \mathrm{Cent_0}(\sigma)$ such that \begin{equation} \rho\pi'\rho^{-1} = \pi. \end{equation} \subsection{Centralizer Actions for Abelian Source} Assume as before that $\sigma \in S_n$ is the product of $k$ disjoint cycles of length $d$: \begin{equation} \tau_1\tau_2...\tau_k. \end{equation} From now on, we also assume that $\mathbf{Fix}(\sigma)= \emptyset$. The primary goal of this subsection is to completely determine under which conditions two elements $\pi,\pi' \in \mathrm{Cent}_0(\sigma)$ are conjugate in $\mathrm{Cent}_0(\sigma)$. It is sufficient to explore this problem for the case where the image of $\pi$ is a single cycle of length $m$ under $\mathrm{Cent}_0(\sigma)/K$: \begin{equation} \mathrm{Cent}_0(\sigma)/K: \pi \mapsto (12\cdots m). \end{equation} \begin{remark} \label{frstremark} If $\pi,\pi'$ are conjugate in $\mathrm{Cent}_0(\sigma)$, they have the same cycle type under $\mathrm{Cent}_0(\sigma)/K$. \end{remark} This follows immediately from the invariance of cycle type under conjugation in the symmetric group. In conjunction with the fact that $K = \langle \tau_1,\tau_2,...,\tau_k \rangle$ is normal in $\mathrm{Cent}_0(\sigma)$ by Corollary \ref{kisnormal}, this means we may write any element of the non-trivial centralizer as the product of an element of $K$ and an element that permutes the cycles without rotating any cycle internally.
\begin{remark} \label{decomp} We can rewrite the elements $\pi,\pi'$ such that \begin{equation} \pi = \mu\pi_0, \; \; \pi'=\mu'\pi_0', \end{equation} where $\mu, \mu' \in K$ and $\pi_0,\pi_0'$ rotate no cycle internally; i.e., either all points in a cycle $\tau_i$ are fixed or they are sent to the points in some distinct cycle $\tau_j$. \end{remark} This in conjunction with the action on the set $\{\tau_1,...,\tau_k\}$ (Lemma \ref{actioncycles}) results in an additional condition for conjugacy in $\mathrm{Cent}_0(\sigma)$: \begin{remark} \label{sndremark} There are integers $z_1,z_2,...,z_k$ and $n_1,n_2,...,n_k$ such that \begin{equation} \mu = \tau_1^{z_1}\tau_2^{z_2}...\tau_k^{z_k} \; \; \mathrm{and} \; \; \mu' = \tau_1^{n_1}\tau_2^{n_2}...\tau_k^{n_k}. \end{equation} If $\pi,\pi'$ are conjugate in $\mathrm{Cent}_0(\sigma)$, then the lists of integers $z_1,z_2,...,z_k$ and $n_1,n_2,...,n_k$ are identical up to rearrangement. \end{remark} We now make an important observation about $\pi_0$ in the decomposition $\pi = \mu\pi_0$ as in Remark \ref{decomp}. \begin{lemma} We have that \begin{equation} \pi_0^m = \tau_1^z\tau_2^{z}...\tau_m^{z}, \end{equation} for some integer $z$. \end{lemma} \begin{proof} It is clear that $\pi_0^m \in \ker(\mathrm{Cent}_0(\sigma)/K)$. Then $\pi_0^m = \tau_1^{z_1}\tau_2^{z_2}...\tau_m^{z_m}$ for some integers $z_1,z_2,...,z_m$. Assume there exists a pair of integers $i<j$ such that $z_i \ne z_j$. Since conjugation by $\pi_0^{j-i}$ replaces the exponent of $\tau_j$ by $z_i$, this implies that $\pi_0^{j-i}$ does not commute with $\pi_0^m$, which is a contradiction. \end{proof} This lemma makes it possible to define the last necessary condition for conjugacy in $\mathrm{Cent}_0(\sigma)$. \begin{remark} \label{trdremark} We rewrite the elements $\pi,\pi'$ such that \begin{equation} \pi = \mu\pi_0, \; \; \pi'=\mu'\pi_0', \end{equation} as in Remark \ref{decomp}. If $\pi,\pi'$ are conjugate in $\mathrm{Cent}_0(\sigma)$, then $\pi_0^m=\tau_1^{z}\tau_2^{z}...\tau_m^{z}$ and $(\pi_0')^m=\tau_{i_1}^z\tau_{i_2}^z...\tau_{i_m}^z$ for an integer $z$.
\end{remark} \begin{proof} Since $K$ is normal in $\mathrm{Cent}_0(\sigma)$, we must have that $\pi_0,\pi_0'$ are conjugate in $\mathrm{Cent}_0(\sigma)$. Assume that $\pi_0^m=\tau_1^z...$ and $(\pi_0')^m=\tau_{i_1}^{z'}...$ for some distinct integers $z, z'$. Then $\tau_1^z, \tau_{i_1}^{z'}$ are conjugate in $\mathrm{Cent}_0(\sigma)$. This implies that the orbit of $\tau_1$ under the action of $\mathrm{Cent}_0(\sigma)$ is not the set $\{\tau_1,\tau_2,...,\tau_k\}$, which is a contradiction by Lemma \ref{actioncycles}. \end{proof} We are ready to define precisely when $\pi, \pi'$ are conjugate in $\mathrm{Cent}_0(\sigma)$. \begin{lemma} \label{importantabelianlemma} Let $\pi,\pi'$ be two elements in the non-trivial centralizer of $\sigma=\tau_1...\tau_k$ where \begin{equation} \mathrm{Cent}_0(\sigma)/K : \pi \mapsto (12\cdots m), \end{equation} recalling that $K= \langle \tau_1,\tau_2,...,\tau_k \rangle$. We may rewrite $\pi, \pi'$ by Remark \ref{decomp} so that $\pi=\mu\pi_0$ and $\pi'=\mu'\pi_0'$ where $\mu, \mu' \in K$ and $\pi_0,\pi_0' \notin K$. Then $\mu=\tau_1^{z_1}...\tau_k^{z_k}$ for some integers $z_1,z_2,...,z_k$ and $\mu'=\tau_1^{n_1}...\tau_k^{n_k}$ for some integers $n_1,n_2,...,n_k$. The elements $\pi, \pi'$ are conjugate in $\mathrm{Cent}_0(\sigma)$ if and only if: \begin{enumerate}[(i)] \item The image of $\pi'$ under $\mathrm{Cent}_0(\sigma)/K$ is also an $m$-cycle: \begin{equation} \mathrm{Cent}_0(\sigma)/K: \pi' \mapsto (i_1i_2\cdots i_m). \end{equation} \item The lists $z_1,...,z_k$ and $n_1,...,n_k$ are identical up to rearrangement. \item There exists an integer $z$ so that $\pi_0^m=\tau_1^z...\tau_m^z$ and $(\pi_0')^m=\tau_{i_1}^z...\tau_{i_m}^z$. \end{enumerate} \end{lemma} \begin{proof} It follows from Remarks \ref{frstremark}, \ref{sndremark}, and \ref{trdremark} that each of the conditions is necessary. It remains to prove that they are sufficient. Consider the list of integers $n_1,...,n_k$.
It is clear that \begin{equation} n_{i_1}=n_{i_2}=...=n_{i_m}=0, \end{equation} by the image of $\pi'$ under $\mathrm{Cent}_0(\sigma)/K$. By the transitive action on the cycles $\{\tau_1,\tau_2,...,\tau_k\}$ and the fact that the lists $z_1,...,z_k$ and $n_1,...,n_k$ are identical up to rearrangement, there exists some $\rho \in \mathrm{Cent}_0(\sigma)$ such that \begin{equation} \rho\mu'\rho^{-1}=\mu \; \; \mathrm{and} \; \; \mathrm{Cent_0(\sigma)}/K: \rho\pi'\rho^{-1} \mapsto (12\cdots m). \end{equation} We are done if we can prove that there exist integers $j_1,j_2,...,j_m$ so that \begin{equation} (\tau_{1}^{j_1}\tau_{2}^{j_2}...\tau_{m}^{j_m})(\rho\pi_0'\rho^{-1})(\tau_{1}^{j_1}\tau_{2}^{j_2}...\tau_{m}^{j_m})^{-1} = \pi_0. \end{equation} Set $\rho\pi_0'\rho^{-1}=\pi_0''$. By condition (iii), there exists some integer $z$ so that \begin{equation} (\pi_0)^m=(\pi_0'')^m=\tau_1^z\tau_2^z...\tau_m^z. \end{equation} This implies that $\pi_0, \pi_0''$ have the same cycle type. Let $d'$ denote the length of the cycles in $\tau_1^z$. Then $\pi_0, \pi_0''$ have cycle type $\frac{d}{d'}$ disjoint cycles of length $md'$. We observe that the elements $\pi_0, \pi_0''$ are completely determined by one of their cycles. To see this, let $c_0$ denote a cycle in $\pi_0$ and $c_0''$ denote a cycle in $\pi_0''$. Then \begin{equation} \begin{split} c_0& = (x_{\tau_1}\cdots x_{\tau_m}x_{\tau_1+z}\cdots x_{\tau_m+z}\cdots x_{\tau_1+d'z}\cdots x_{\tau_m+d'z}) \\ c_0''& = (y_{\tau_1}\cdots y_{\tau_m}y_{\tau_1+z}\cdots y_{\tau_m+z}\cdots y_{\tau_1+d'z}\cdots y_{\tau_m+d'z}), \end{split} \end{equation} where $x_{\tau_i}, y_{\tau_i}$ are points in the cycles $\tau_i$, $\tau_i^{hz}(x_{\tau_i})=x_{\tau_i+hz}$, and $\tau_i^{hz}(y_{\tau_i})=y_{\tau_i+hz}$. These relations completely determine how the remaining points in the cycles $\tau_1,...,\tau_m$ are sent.
We see that $\pi_0, \pi_0''$ are not only determined by the cycles $c_0, c_0''$ but, further, completely determined by the first $m$ points in these cycles. Naturally, there exist integers $h_1,...,h_m$ so that \begin{equation} \tau_i^{h_i}(x_{\tau_i})=y_{\tau_i}, \; \; 1 \leq i \leq m, \end{equation} which concludes our argument. \end{proof} \begin{corollary} \label{relprimecor} Assume that the order $d=|\tau_1...\tau_k|$ of $\sigma$ and the order of $\pi$ are relatively prime. Then $\pi$ and $\pi'$ are conjugate in $\mathrm{Cent}_0(\sigma)$ if $\pi, \pi'$ are conjugate in $S_n$. \end{corollary} \begin{proof} Since $\pi, \pi'$ have order relatively prime to $d$, we must have that $\mu = \mu' =1$. Also $\pi^m=1$, and since $\pi'$ is conjugate to $\pi$ and of order relatively prime to $d$, the image of $\pi'$ is also an $m$-cycle and $(\pi')^m=1$. The result follows from Lemma \ref{importantabelianlemma}. \end{proof} \begin{corollary} \label{transpocase} Assume that the order of $\sigma$ is $d=2$. The elements $\pi,\pi'$ are conjugate in $\mathrm{Cent}_0(\sigma)$ if $\pi,\pi'$ are conjugate in $S_n$ and have the same cycle type under $\mathrm{Cent}_0(\sigma)/K$. \end{corollary} \begin{proof} If $\pi, \pi'$ have odd order, the result follows from the corollary above. Assume $\pi, \pi'$ have even order. $\pi', \pi_0'$ act non-trivially on $md$ points. It is clear that $\pi, \pi'$ have the same number of fixed points, and so we can assume that there exists $\rho \in \mathrm{Cent}_0(\sigma)$ so that $\rho\mu'\rho^{-1}=\mu$ and $\rho\pi_0'\rho^{-1}, \pi_0$ have the same image under $\mathrm{Cent}_0(\sigma)/K$. The conjugacy in $S_n$ further forces that $(\pi_0')^m=(\pi_0)^m=1$ or $(\pi_0')^m=(\pi_0)^m=\tau_1...\tau_m$. The result now follows from Lemma \ref{importantabelianlemma}. \end{proof} Note that Corollaries \ref{relprimecor} and \ref{transpocase} can easily be generalized to the case where $\pi$ has arbitrary cycle type under $\mathrm{Cent}_0(\sigma)/K$.
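Corollary \ref{relprimecor} can be checked by brute force in a small case. The sketch below (helper names are mine) verifies, for $\sigma=(12)(34)(56)$ in $S_6$, that centralizer elements of odd order that are conjugate in $S_6$ are already conjugate inside $\mathrm{Cent}_0(\sigma)$:

```python
# Brute-force sanity check of Corollary relprimecor for sigma of order
# d = 2 in S_6 (0-indexed; since Fix(sigma) is empty, the full centralizer
# equals Cent_0(sigma)). Odd order is exactly "order coprime to d".
from itertools import permutations

def mul(a, b):  # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    r = [0] * len(a)
    for i, x in enumerate(a):
        r[x] = i
    return tuple(r)

def order(a):
    e, p, k = tuple(range(len(a))), a, 1
    while p != e:
        p, k = mul(p, a), k + 1
    return k

def cycle_type(p):
    seen, ct = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, l = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; l += 1
            ct.append(l)
    return tuple(sorted(ct))

sigma = (1, 0, 3, 2, 5, 4)   # (01)(23)(45)
cent = [p for p in permutations(range(6)) if mul(p, sigma) == mul(sigma, p)]
odd = [p for p in cent if order(p) % 2 == 1]
# same cycle type <=> conjugate in S_6; check conjugacy inside the centralizer
ok = all(
    cycle_type(a) != cycle_type(b)
    or any(mul(mul(r, a), inv(r)) == b for r in cent)
    for a in odd for b in odd
)
```

The centralizer here has order $|{\bf Fix}(\sigma)|!\cdot k!\,d^k = 1\cdot 3!\cdot 2^3 = 48$, matching Lemma \ref{directproduct}.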
\subsection{Centralizer Actions for Dihedral Source} Assume as before that $\sigma \in S_n$ is the product of $k$ disjoint cycles of length $d$, $\tau_1\tau_2...\tau_k$, and that $\mathbf{Fix}(\sigma)= \emptyset$. We are interested in exploring elements $\pi, \pi' \in S_n$ of the type \begin{equation} \pi^2 = (\pi')^2 = 1 \; \; \mathrm{and} \; \; \pi\sigma\pi^{-1} = (\pi')\sigma(\pi')^{-1} = \sigma^{-1}. \end{equation} We start by noting that $\pi, \pi'$ are elements of a subgroup $H_{\sigma}$ larger than $\mathrm{Cent}_0(\sigma)$, on which we also have a transitive action on the cycles $\{\tau_1,\tau_2,...,\tau_k\}$: \begin{lemma} \label{actiondihedral} The subgroup $H_{\sigma} = \{ x \in S_n \mid x\sigma x^{-1} = \sigma^{z} \; \mathrm{for} \; \mathrm{some} \; z\in \mathbb{Z} \}$ acts transitively on the cycles $\{\tau_1,\tau_2,...,\tau_k\}$. The kernel $K_{H_\sigma}$ of this action is \begin{equation} K_{H_\sigma} = \{ x \in H_\sigma \mid x\tau_ix^{-1} = \tau_i^z \; \mathrm{for} \; \mathrm{some} \; z \in \mathbb{Z}, \; \mathrm{for} \; \mathrm{all} \; 1 \leq i \leq k \}. \end{equation} \end{lemma} \begin{proof} This is proved in the same way as Lemma \ref{actioncycles}. \end{proof} It is obvious that $\mathrm{Cent}_0(\sigma) \leq H_\sigma$, and in fact $\mathrm{Cent}_0(\sigma) \unlhd H_\sigma.$ As for the non-trivial centralizer, we get a permutation representation, this time from $H_\sigma$ to $S_k$: \begin{equation} H_{\sigma} \to H_{\sigma}/K_{H_\sigma} \cong S_k. \end{equation} The main goal of this subsection is to prove that if $\pi, \pi'$ are conjugate in $S_n$ and if the images of the elements under $H_{\sigma}/K_{H_\sigma}$ are also conjugate in $S_k$, then there exists some $\rho \in \mathrm{Cent}_0(\sigma)$ so that \begin{equation} \rho (\pi') \rho^{-1} = \pi. \end{equation} We first look at elements $s_1, s_2 \in S_{X_{\tau_1}}$ that send a single cycle $\tau_1$ to its own inverse, and at elements $s_1,s_2 \in S_{X_{\tau_1,\tau_2}}$ that send the cycle $\tau_1$ to the inverse of a distinct cycle $\tau_2$.
\begin{lemma} \label{dihed1} Assume $\tau_1 = (12\cdots d)$ and $\tau_2 = (j_1j_2 \cdots j_d)$. Let $s_1, s_2 \in S_{X_{\tau_1,\tau_2}}$ both be products of disjoint transpositions such that \begin{equation} s_1\tau_1s_1^{-1} = s_2\tau_1s_2^{-1}=\tau_2^{-1}. \end{equation} There exists an integer $z$ such that \begin{equation} \tau_2^{z}s_1\tau_2^{-z} = s_2. \end{equation} \end{lemma} \begin{proof} There are precisely $d$ ways to send the cycle $\tau_1$ to $\tau_2^{-1}$ through transpositions. Once we send the point $1$ in $\tau_1$ to some point in $\tau_2$, the element is determined. Assume $s_1 = (1j_1)...$ and $s_2 = (1j_i)...$. Then \begin{equation} \tau_2^{i-1}s_1\tau_2^{-(i-1)}=s_2. \end{equation} \end{proof} \begin{lemma} \label{dihed2} Let $\tau_1=(12\cdots d)$ and let $s_1,s_2 \in S_{X_{\tau_1}}$ be products of disjoint transpositions such that \begin{equation} s_1\tau_1s_1^{-1}=s_2\tau_1s_2^{-1}=\tau_1^{-1}. \end{equation} If $s_1,s_2$ are conjugate in $S_n$, there exists some integer $z$ so that: \begin{equation} \tau_1^z s_1\tau_1^{-z}=s_2. \end{equation} \end{lemma} \begin{proof} The elements $s_1,s_2$ may precisely be associated with reflections along the lines of symmetry of a regular polygon with $d$ vertices. If $d$ is odd, $s_1,s_2$ will have cycle type $\frac{d-1}{2}$ transpositions. A line of symmetry of a regular polygon with an odd number of vertices is completely determined by the vertex it intersects. Assume $s_1$ fixes vertex $1$ and $s_2$ fixes vertex $i$. Then $\tau_1^{i-1}s_1\tau_1^{-(i-1)}=s_2$. If $d$ is even, a line of symmetry either intersects two vertices or an edge. In the former case $s_1,s_2$ have cycle type $\frac{d-2}{2}$ transpositions, and the elements are again completely determined by a fixed vertex, so that if $s_1$ fixes vertex $1$ and $s_2$ fixes vertex $i$, then $\tau_1^{i-1}s_1\tau_1^{-(i-1)}=s_2$.
In the latter case, $s_1,s_2$ have cycle type $\frac{d}{2}$ transpositions and the elements are determined by an edge that the line of symmetry intersects. Let $s_1 = (12)...$ and $s_2=(i \; i+1)...$; then $\tau_1^{i-1}s_1\tau_1^{-(i-1)}=s_2$ and we are done. \end{proof} We are ready to state the main lemma: \begin{lemma} \label{importantdihedlemma} Let $\pi,\pi'$ be elements conjugate in $S_n$ and assume that their images under $H_{\sigma}/K_{H_{\sigma}}$ have the same cycle type. Then, there exists some $\rho \in \mathrm{Cent}_0(\sigma)$ so that $\rho\pi\rho^{-1}=\pi'$. \end{lemma} \begin{proof} We may write $\pi,\pi'$ in the form $\pi=\mu\pi_0$, $\pi'=\mu'\pi_0'$ where $\mu, \mu' \in K_{H_{\sigma}}$ and $\pi_0, \pi_0'$ either fix all points in a cycle $\tau_i$ or send all the points to a distinct cycle $\tau_j$. Assume $\mu$ acts non-trivially on the points in the cycles $\tau_1,...,\tau_m$. Then $\pi_0$ acts non-trivially on the cycles $\tau_{m+1},...,\tau_k$. Since $\pi, \pi'$ have the same cycle type under $H_{\sigma}/K_{H_\sigma}$, this implies that $\mu, \mu'$ are conjugate in $S_n$. By the transitive action on the cycles $\tau_1,...,\tau_k$ in $\mathrm{Cent}_0(\sigma)$, this in turn implies that there exists some $\rho \in \mathrm{Cent}_0(\sigma)$ so that $\rho \mu' \rho^{-1}$ acts non-trivially on the cycles $\tau_1,...,\tau_m$ and $\rho\pi_0'\rho^{-1}, \pi_0$ have the same image under $H_\sigma/K_{H_{\sigma}}$. The result now follows from Lemmas \ref{dihed1} and \ref{dihed2}. \end{proof} \begin{corollary} \label{odddihedral} If the order $d=|\tau_1...\tau_k|$ of $\sigma$ is odd and $\pi, \pi'$ are conjugate in $S_n$, there exists some $\rho \in \mathrm{Cent}_0(\sigma)$ so that $\rho \pi \rho^{-1} = \pi'$. \end{corollary} \begin{proof} If $d$ is odd and $\pi, \pi'$ are conjugate in $S_n$, this forces $\pi, \pi'$ to have the same cycle type under $H_{\sigma}/K_{H_{\sigma}}$. The result follows from Lemma \ref{importantdihedlemma}.
\end{proof} \section{Homomorphism Conjugacy in the Symmetric Group} Let $\varphi, \psi$ be two generator-conjugate homomorphisms from a group $G$ to $S_n$ with respect to the generators $g_1,...,g_n$. We first wish to determine when $\varphi, \psi$ are conjugate as homomorphisms. We may assume that \begin{equation} \varphi(g_1)=\psi(g_1), \end{equation} so that if $\varphi, \psi$ are conjugate, there exists some $\rho \in \mathrm{Cent}_{S_n}(\varphi(g_1))$ so that \begin{equation} \rho\varphi\rho^{-1}=\psi. \end{equation} This observation, in conjunction with the results derived about centralizer actions from the previous section, gives the first part of our main result: determining when generator-conjugate homomorphisms to $S_n$ are conjugate for an abelian or dihedral source group with two generators. The second part of the main result, describing the relationship between element-conjugate and conjugate homomorphisms for such source groups, is derived by first studying the relationship between generator-conjugate and element-conjugate homomorphisms through centralizer actions in $S_n$. \subsection{Homomorphism Conjugacy for Abelian Source} Let $\varphi,\psi$ be two homomorphisms from an abelian group $A$, generated by two elements $a,b$, to the symmetric group $S_n$. Assume $\varphi,\psi$ are generator-conjugate with respect to $a,b$ and that \begin{equation} \varphi(a)=\psi(a). \end{equation} We may set $\varphi(a)=\sigma$ and rewrite the element so that $\sigma=\sigma_1...\sigma_l$ where $\sigma_i$ is a product of $k_i$ disjoint cycles of length $d_i$, $\tau_{(1,i)}\tau_{(2,i)}...\tau_{(k_i,i)}$, and the integers $d_1,...,d_l$ are all distinct.
By the direct product decomposition of centralizers in the symmetric group (Lemma \ref{directproduct}), we may also rewrite $\varphi(b), \psi(b)$: \begin{equation} \begin{split} \varphi(b) &= \mu_0\pi_1\pi_2...\pi_l \\ \psi(b) &= \mu_0'\pi_1'\pi_2'...\pi_l', \end{split} \end{equation} where $\mu_0,\mu_0' \in S_{\mathbf{Fix}(\sigma)}$ and $\pi_i,\pi_i' \in \mathrm{Cent}_0(\sigma_i)$. By Remark \ref{decomp}, each $\pi_i,\pi_i'$ may be decomposed as a product: $\pi_i = \mu_i\pi_{0,i}$ and $\pi_i' = \mu_i'\pi_{0,i}'$, where $\mu_i,\mu_i' \in K_i = \langle \tau_{(1,i)},...,\tau_{(k_i,i)}\rangle$ and $\pi_{0,i}, \pi_{0,i}'$ either fix all points in a cycle $\tau_{(j,i)}$ or send all points to some distinct cycle $\tau_{(j',i)}$. Note that we have \begin{equation} \begin{split} \mu_i &= \tau_{(1,i)}^{z_{(1,i)}}\tau_{(2,i)}^{z_{(2,i)}}...\tau_{(k_i,i)}^{z_{(k_i,i)}} \\ \mu_i' &= \tau_{(1,i)}^{z_{(1,i)}'}\tau_{(2,i)}^{z_{(2,i)}'}...\tau_{(k_i,i)}^{z_{(k_i,i)}'}, \end{split} \end{equation} for some integers $z_{(1,i)},...,z_{(k_i,i)}$ and $z_{(1,i)}',...,z_{(k_i,i)}'$. The action in $\mathrm{Cent}_0(\sigma_i)$ gives a further decomposition of the elements $\pi_{0,i}$ and $\pi_{0,i}'$ as products of elements in the non-trivial centralizer: \begin{equation} \begin{split} \pi_{0,i} &= x_{(1,i)}...x_{(h,i)} \\ \pi_{0,i}' &= y_{(1,i)}...y_{(h',i)}, \end{split} \end{equation} where the image of every $x_{(j,i)},y_{(j,i)}$ is a single cycle under $\mathrm{Cent}_0(\sigma_i)/K_i$. \begin{theorem} \label{abeliantheorem} The homomorphisms $\varphi, \psi$ are conjugate if and only if: \begin{enumerate}[(i)] \item $\mu_0, \mu_0'$ are conjugate in $S_n$. \item The lists $z_{(1,i)},...,z_{(k_i,i)}$ and $z_{(1,i)}',...,z_{(k_i,i)}'$ are identical up to rearrangement for $1\leq i \leq l$.
\item For every $x_{(j,i)}$ whose image is an $m$-cycle under $\mathrm{Cent}_0(\sigma_i)/K_i$ there exists some $y_{(j',i)}$ whose image is also an $m$-cycle under $\mathrm{Cent}_0(\sigma_i)/K_i$, where $x_{(j,i)}^m = \tau_{t,i}^{z_{j,i}}...$ and $y_{(j',i)}^m=\tau_{t',i}^{z_{j,i}}...$ for some integer $z_{j,i}$. \end{enumerate} \end{theorem} \begin{proof} By the direct product decomposition of the centralizer of $\varphi(a)=\psi(a)=\sigma$, we must assume that there exists some $\rho_0 \in S_{\mathbf{Fix}(\sigma)}$ so that $\rho_0\mu_0'\rho_0^{-1}=\mu_0$ and some $\rho_i \in \mathrm{Cent}_0(\sigma_i)$ so that $\rho_i\pi_i'\rho_i^{-1}=\pi_i$ for $1\leq i \leq l$. Condition (i) follows from the fact that cycle type determines conjugacy in symmetric groups. Condition (ii) follows directly from Lemma \ref{importantabelianlemma}. Condition (iii) follows from the decomposition of $\pi_{0,i},\pi_{0,i}'$ into disjoint elements whose image under $\mathrm{Cent}_0(\sigma_i)/K_i$ is a single cycle and Lemma \ref{importantabelianlemma}. \end{proof} The following two corollaries follow quite easily from Theorem \ref{abeliantheorem} by generalizing the results in Corollaries \ref{relprimecor} and \ref{transpocase}. \begin{corollary} \label{relprime} If the orders of $a$ and $b$ are relatively prime, then $\varphi, \psi$ are conjugate if and only if $\mu_0,\mu_0'$ are conjugate in $S_n$ and $\pi_i, \pi_i'$ are conjugate in $S_n$ for $1\leq i \leq l$. \end{corollary} \begin{corollary} If $|a|=2$, then $\varphi, \psi$ are conjugate if and only if $\mu_0,\mu_0'$ are conjugate in $S_n$, $\pi_i, \pi_i'$ are conjugate in $S_n$, and the elements also have the same cycle type under $\mathrm{Cent}_0(\sigma_i)/K_i$ for $1\leq i \leq l$. \end{corollary} Note that although the source group is cyclic in Corollary \ref{relprime}, the result is not trivial. We can easily define generator-conjugate homomorphisms from cyclic source groups that are not conjugate.
\begin{ex} Consider the homomorphisms $\varphi, \psi: A \cong \mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \to S_{12}:$ \begin{equation} \begin{split} &\varphi, \psi: a \mapsto (123)(456); \\ &\varphi: b \mapsto (14)(25)(36), \; \psi: b \mapsto (78)(9\;10)(11\;12). \end{split} \end{equation} \end{ex} The homomorphisms are clearly generator-conjugate, but one can check that $\varphi(ab), \psi(ab)$ have different cycle types in $S_n$. Further, the conditions set out in Theorem \ref{abeliantheorem} make it easy to define distinct generator-conjugate homomorphisms from a non-cyclic abelian group to $S_n$. \begin{ex} Consider the homomorphisms $\varphi, \psi: A \cong \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \to S_{11}:$ \begin{equation} \begin{split} &\varphi, \psi: a \mapsto (1234)(5678); \\ &\varphi: b \mapsto (15)(26)(37)(48)(9\;10), \; \psi: b \mapsto (16)(27)(38)(45)(10\;11). \end{split} \end{equation} \end{ex} These homomorphisms are conjugate by Theorem \ref{abeliantheorem}, and it is straightforward to check that they are distinct. We conclude this subsection by describing the relationship between element-conjugate and conjugate homomorphisms: \begin{corollary} \label{localglobalabelian} The homomorphisms $\varphi, \psi$ are conjugate if and only if they are element-conjugate on restriction to $S_{\mathrm{Fix}(\sigma)}$ and $\mathrm{Cent}_0(\sigma_i)$, for $1 \leq i \leq l$. \end{corollary} If $\varphi, \psi$ are conjugate as homomorphisms, it is clear from the direct product decomposition of the centralizer that they are also element-conjugate when restricted to $S_{\mathbf{Fix}(\sigma)}$ and each subgroup $\mathrm{Cent}_0(\sigma_i)$. It therefore remains to investigate when generator-conjugate homomorphisms are element-conjugate in the case where $\varphi(a)=\psi(a)$ is the product $\tau_1...\tau_k$ of $k$ disjoint cycles of length $d$ and this element has no fixed points in $S_n$.
Recall that in this case, we can decompose $\varphi(b), \psi(b)$ such that: \begin{equation} \begin{split} \varphi(b) &= \mu x_{1}...x_{h} \\ \psi(b) &= \mu'y_{1}...y_{h'}, \end{split} \end{equation} where $\mu, \mu' \in K = \langle\tau_1,...,\tau_k\rangle$, the images of all $x_j,y_j$ under $\mathrm{Cent}_0(\sigma)/K$ are single cycles, and $x_j,y_j$ do not non-trivially send points of a cycle to the same cycle. We also have that $\mu = \tau_1^{z_1}...\tau_k^{z_k}$ and $\mu' = \tau_1^{n_1}...\tau_k^{n_k}$ for some integers $z_1,...,z_k$ and $n_1,...,n_k$. We present an important lemma that relates generator-conjugacy to element-conjugacy: \begin{lemma} Assume $\psi(a)=\varphi(a)= \tau_1...\tau_k$ and also that $\mathbf{Fix}(\tau_1...\tau_k)=\emptyset$. If $\varphi,\psi$ are generator-conjugate, they are element-conjugate if and only if: \begin{enumerate}[(i)] \item The lists $z_{1},...,z_{k}$ and $n_{1},...,n_{k}$ are identical up to rearrangement. \item For every $x_{j}$ whose image is an $m$-cycle under $\mathrm{Cent}_0(\tau_1...\tau_k)/K$ there exists some $y_{j'}$ whose image is also an $m$-cycle under $\mathrm{Cent}_0(\tau_1...\tau_k)/K$ and $x_{j}^m = \tau_{i}^{z}...$, $y_{j'}^m=\tau_{i'}^{z}...$ for some integer $z$. \end{enumerate} \end{lemma} \begin{proof} If $\varphi,\psi$ are element-conjugate, $\varphi(a^{h_1}b^{h_2})$ and $\psi(a^{h_1}b^{h_2})$ have the same number of fixed points for all integers $h_1,h_2$. This immediately implies that the lists $z_{1},...,z_{k}$ and $n_{1},...,n_{k}$ are identical up to rearrangement. Ordering the elements $x_1,...,x_h$ by their cycle lengths under $\mathrm{Cent}_0(\sigma)$, $m_1\leq m_2 \leq ...\leq m_h$, the rest of the proof follows from comparing fixed points and a simple induction argument. \end{proof} Note that the conditions of this lemma are precisely those required for homomorphism conjugacy in this case, and Corollary \ref{localglobalabelian} follows.
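The two examples above lend themselves to a quick computational check. The sketch below, in plain Python with ad hoc permutation helpers (none of these names come from the paper), confirms that in the first example $\varphi(ab)$ and $\psi(ab)$ have different cycle types, while in the second example they agree, consistent with element-conjugacy.

```python
def from_cycles(n, *cycles):
    """Build a permutation of {1,...,n}, stored as a dict, from disjoint cycles."""
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

def mul(p, q):
    """Product applying q first; cycle type of pq equals that of qp anyway."""
    return {x: p[q[x]] for x in p}

def cycle_type(p):
    """Sorted list of cycle lengths, fixed points included as 1s."""
    seen, lengths = set(), []
    for x in p:
        if x not in seen:
            y, c = x, 0
            while y not in seen:
                seen.add(y)
                y = p[y]
                c += 1
            lengths.append(c)
    return sorted(lengths)

# First example (S_12): generator-conjugate but not conjugate.
a1 = from_cycles(12, (1, 2, 3), (4, 5, 6))
phi_b1 = from_cycles(12, (1, 4), (2, 5), (3, 6))
psi_b1 = from_cycles(12, (7, 8), (9, 10), (11, 12))
print(cycle_type(mul(a1, phi_b1)))  # [1, 1, 1, 1, 1, 1, 6]
print(cycle_type(mul(a1, psi_b1)))  # [2, 2, 2, 3, 3]

# Second example (S_11): conjugate homomorphisms; cycle types of ab agree.
a2 = from_cycles(11, (1, 2, 3, 4), (5, 6, 7, 8))
phi_b2 = from_cycles(11, (1, 5), (2, 6), (3, 7), (4, 8), (9, 10))
psi_b2 = from_cycles(11, (1, 6), (2, 7), (3, 8), (4, 5), (10, 11))
print(cycle_type(mul(a2, phi_b2)))  # [1, 2, 4, 4]
print(cycle_type(mul(a2, psi_b2)))  # [1, 2, 4, 4]
```

Of course, agreement of cycle types on the single word $ab$ is only a consistency check, not a proof of element-conjugacy; that is what the lemma above supplies.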
\subsection{Homomorphism Conjugacy for Dihedral Source} Let $\varphi,\psi$ be homomorphisms from the dihedral group $D_{2m}$ to $S_n$ and assume the homomorphisms are generator-conjugate with respect to the generators $r,s$: \begin{equation} D_{2m} = \langle r,s \; | \; r^m=s^2 = 1, \; srs = r^{-1} \rangle. \end{equation} We assume $\varphi(r)=\psi(r)=\sigma$ and rewrite $\sigma$ in the form $\sigma=\sigma_1...\sigma_l$, where $\sigma_i$ is the product of $k_i$ disjoint cycles of length $d_i$ and the integers $d_1,...,d_l$ are distinct. By the invariance of cycle type under conjugation, the elements $\varphi(s),\psi(s)$ can be written as products: \begin{equation} \begin{split} \varphi(s) &= \mu \pi_{1}...\pi_{l} \\ \psi(s) &= \mu'\pi_{1}'...\pi_{l}', \end{split} \end{equation} where $\mu, \mu' \in S_{\mathbf{Fix}(\sigma)}$ and $\pi_i, \pi_i' \in H_{\sigma_i}$, recalling that \begin{equation} H_{\sigma_i} = \{ x \in S_{X_{\sigma_i}} \; | \; x\sigma_ix^{-1}=\sigma_i^z, \; z \in \mathbb{Z} \}. \end{equation} By previous results, the subgroup $H_{\sigma_i}$ acts on the cycles in $\sigma_i$ through the permutation representation $H_{\sigma_i}/K_{\sigma_i}$ (Lemma \ref{actiondihedral}). \begin{theorem} \label{dihedraltheorem} The homomorphisms $\varphi, \psi$ are conjugate if and only if: \begin{enumerate}[(i)] \item $\mu, \mu'$ are conjugate in $S_n$. \item $\pi_i,\pi_i'$ are conjugate in $S_n$ and the elements also have the same cycle type under $H_{\sigma_i}/K_{\sigma_i}$, $1\leq i \leq l$. \end{enumerate} \end{theorem} \begin{proof} This follows from the direct product decomposition of $\varphi(s),\psi(s)$ due to the invariance of cycle type, together with Lemma \ref{importantdihedlemma}.
\end{proof} We finally note that the relationship between element-conjugacy and homomorphism conjugacy in the dihedral case is similar to the abelian case with two generators: \begin{corollary} \label{localglobaldihedral} The homomorphisms $\varphi, \psi$ are conjugate if and only if they are element-conjugate on restriction to $S_{\mathrm{Fix}(\sigma)}$ and $H_{\sigma_i}$, for $1 \leq i \leq l$. \end{corollary} \begin{proof} If the order of $r$ is odd, the result follows from Corollary \ref{odddihedral}. If it is even, we proceed by contradiction to show that if $\pi_i, \pi_i'$ have distinct cycle types under $H_{\sigma_i}/K_{\sigma_i}$, then on restriction to $H_{\sigma_i}$ the elements $\varphi(rs), \psi(rs)$ are not conjugate in $S_n$. \end{proof}
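As a concrete illustration of the subgroup $H_{\sigma_i}$: an assignment $r \mapsto \sigma$, $s \mapsto x$ extends to a homomorphism $D_{2m} \to S_n$ exactly when the defining relations hold, i.e.\ when $x$ is an involution inverting $\sigma$ by conjugation. The sketch below (plain Python; the helpers and the particular choice $\sigma = (123456)$, $x = (26)(35)$ are ours, not the paper's) verifies this for a single $6$-cycle.

```python
def from_cycles(n, *cycles):
    """Permutation of {1,...,n} as a dict, built from disjoint cycles."""
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

def mul(p, q):  # apply q first, then p
    return {x: p[q[x]] for x in p}

def inv(p):
    return {v: k for k, v in p.items()}

n = 6
sigma = from_cycles(n, (1, 2, 3, 4, 5, 6))  # candidate image of r
s_img = from_cycles(n, (2, 6), (3, 5))      # candidate image of s

# Check the defining relations of D_12 = <r, s | r^6 = s^2 = 1, s r s = r^-1>.
identity = {i: i for i in range(1, n + 1)}
r6 = identity
for _ in range(6):
    r6 = mul(r6, sigma)
assert r6 == identity                                # r^6 = 1
assert mul(s_img, s_img) == identity                 # s^2 = 1
assert mul(s_img, mul(sigma, s_img)) == inv(sigma)   # s r s = r^-1
print("dihedral relations hold; s_img lies in H_sigma")
```

Since $s_{\mathrm{img}}\,\sigma\, s_{\mathrm{img}}^{-1} = \sigma^{-1}$, the element $s_{\mathrm{img}}$ lies in $H_{\sigma}$ with $z = -1$.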
https://arxiv.org/abs/2212.06766
Homomorphism Conjugacy versus Centralizer Actions in the Symmetric Group
https://arxiv.org/abs/1403.2125
Two-orbit convex polytopes and tilings
We classify the convex polytopes whose symmetry groups have two orbits on the flags. These exist only in two or three dimensions, and the only ones whose combinatorial automorphism group is also two-orbit are the cuboctahedron, the icosidodecahedron, and their duals. The combinatorially regular two-orbit convex polytopes are certain 2n-gons for each n > 1. We also classify the face-to-face tilings of Euclidean space by convex polytopes whose symmetry groups have two flag orbits. There are finitely many families, tiling one, two, or three dimensions. The only such tilings which are also combinatorially two-orbit are the trihexagonal plane tiling, the rhombille plane tiling, the tetrahedral-octahedral honeycomb, and the rhombic dodecahedral honeycomb.
\section{Introduction} Here we will classify all convex polytopes, and face-to-face tilings of Euclidean space by convex polytopes, whose flags have two orbits under the action of the symmetry group. First we briefly define these terms. A \emph{convex polytope} is the convex hull of a finite set of points in $d$-dimensional Euclidean space $\mathbb{E}^d$ \cite{grunbaum1967convex}. In this paper we use ``$d$-polytope'' to mean ``$d$-dimensional convex polytope,'' ``polygon'' to mean ``2-polytope'' and ``polyhedron'' to mean ``3-polytope.'' A \emph{face} of a convex polytope $P$ is the intersection of $P$ with a \emph{supporting hyperplane} of $P$, i.e.\ a hyperplane $H$ such that $P$ is contained in one closed half-space determined by $H$, and such that $H$ and $P$ have non-empty intersection. We also admit the empty set and $P$ itself as ``improper'' faces. A face is called $j$-dimensional, or a \emph{$j$-face}, if its affine hull is $j$-dimensional; the empty face is $(-1)$-dimensional. The 0-faces are also called \emph{vertices}; 1-faces are also called \emph{edges}; $(d-2)$-faces may be called \emph{ridges} and $(d-1)$-faces are called \emph{facets}. The faces of $P$, ordered by containment, form a lattice $\mathcal{L}(P)$, the \emph{face lattice} of $P$. The \emph{symmetry group} of $P$, denoted $G(P)$, is the set of Euclidean isometries which carry $P$ to itself. The \emph{automorphism group} of $P$, denoted $\Gamma(P)$, is the set of lattice isomorphisms from $\mathcal{L}(P)$ to itself. Since each transformation in $G(P)$ acts as an automorphism of $\mathcal{L}(P)$, we can consider $G(P)$ as a subgroup of $\Gamma(P)$. A maximal chain in $\mathcal{L}(P)$ (i.e.\ a maximal linearly ordered set of faces) is called a \emph{flag} (due to the way a vertex, followed by an edge incident to that vertex, followed by a 2-face incident to the edge, resembles the construction of a flagpole.) The set of all flags of $P$ is $\mathcal{F}(P)$.
Transformations in $G(P)$ (or automorphisms in $\Gamma(P)$) induce an action on $\mathcal{F}(P)$ in an obvious way. The orbits of flags under the action of $G(P)$ are called \emph{flag orbits}, and a polytope with $n$ distinct flag orbits is called an \emph{$n$-orbit polytope}. Similarly, orbits of flags under the action of $\Gamma(P)$ are called \emph{combinatorial flag orbits}, and a polytope with $n$ such orbits is called \emph{combinatorially $n$-orbit}; in the context of abstract polytopes, this is the only definition possible and the adjectives may be dropped. In \cite[273]{conway2008symmetries}, Conway et al. introduce the term \emph{flag rank} for the number of flag orbits. A $k$-orbit polytope is said to have a flag rank of $k$, and they also suggest that such a polytope be called $\frac{1}{k}$-regular. Thus, in this paper we determine all the half-regular convex polytopes. One-orbit polytopes are the regular polytopes. It is well known \cite{coxeter1973regular} that there are infinitely many regular polygons, namely the regular $n$-gon for each $n \geq 3$; there are five regular polyhedra, the Platonic solids; there are six regular 4-polytopes; and there are three regular $d$-polytopes for all $d > 4$. As far as flags are concerned, two-orbit polytopes are as close to regular as possible while not being regular. Two-orbit convex polytopes can either be combinatorially two-orbit, if $G(P) = \Gamma(P)$, or combinatorially regular, in which case $G(P)$ is a subgroup of index 2 in $\Gamma(P)$. In the more general case of abstract polytopes, combinatorially two-orbit polyhedra were examined by \textcite{hubard2010two}. The \emph{chiral polytopes} are notable examples of two-orbit abstract polytopes \cite{chiral}. However, convex polytopes cannot be chiral \cite[496]{chiral}. 
\begin{figure}[h] \input{polygons.tex} \caption{The first few two-orbit convex polygons, in pairs of duals}\label{fig:polygons} \end{figure} As we shall show, two-orbit convex polytopes turn out to be even scarcer than one-orbit convex polytopes, and exist only in two or three dimensions. There are infinitely many in two dimensions. For each $n \geq 2$, a two-orbit $2n$-gon may be constructed by alternating edges of two distinct lengths, with the same angle at each vertex (namely the interior angle of a regular $2n$-gon, $\frac{n-2}{n}\pi$). Dual to each of these is another type of two-orbit $2n$-gon, with uniform edge lengths, but alternating angle measures. In three dimensions, there are just four: The cuboctahedron, its dual the rhombic dodecahedron, the icosidodecahedron, and its dual the rhombic triacontahedron. We summarize the results in Theorems \ref{thm:topes} and \ref{thm:tilings}. \begin{thm}\label{thm:topes} There are no two-orbit $d$-polytopes if $d \geq 4$ (or if $d \leq 1$). There are exactly four, if $d = 3$: the cuboctahedron, icosidodecahedron, rhombic dodecahedron, and rhombic triacontahedron. If $d = 2$, there are two infinite series of $2n$-gons, for each $n \geq 2$. Polygons of one series alternate between two distinct edge lengths. Polygons of the other alternate between two distinct angle measures. \end{thm} \begin{figure}[h] \input{polyhedra.tex} \caption{The two-orbit convex polyhedra}\label{fig:polyhedra} \end{figure} In Section~\ref{sec:tilings} we classify all two-orbit tilings by convex polytopes. We consider only face-to-face, locally finite tilings. See Section~\ref{sec:tilings} for a description of each of the named tilings. \begin{thm}\label{thm:tilings} There are no two-orbit tilings of $\mathbb{E}^d$ if $d \geq 4$ (or if $d = 0$). If $d = 1$, there is one family: an apeirogon alternating between two distinct edge lengths. 
If $d = 2$, there are four: the trihexagonal tiling (3.6.3.6); its dual, the rhombille tiling; a family of tilings by translations of a rhombus; and a family of tilings by rectangles. If $d = 3$, there are two: the tetrahedral-octahedral honeycomb and its dual, the rhombic dodecahedral honeycomb. \end{thm} In the above two theorems, all those examples which vary by a real parameter greater than one (both types of $2n$-gons, the apeirogon, and the tilings by rhombi and rectangles) are combinatorially regular; in each case, allowing the parameter to become one yields a regular polygon or tiling, to which all other members of the family are isomorphic. The other examples, namely the four polyhedra, the trihexagonal tiling, the rhombille tiling, the tetrahedral-octahedral honeycomb, and the rhombic dodecahedral honeycomb, are all unique (up to similarity), and are all combinatorially two-orbit. \section{Preliminary Facts} Let $P$ be a $d$-polytope. Recall that \emph{flags} are maximal chains of faces of~$P$. Two flags are said to be \emph{adjacent} if they differ in exactly one face; if they differ in the $j$-face, they are said to be $j$-adjacent. The face lattice $\mathcal{L}(P)$ satisfies the following four properties, which are in fact taken to be the definition of an abstract polytope of rank $d$ \cite[22]{ARP}: \begin{enumerate}[label=(P\arabic*)] \item There is a least face $F_{-1}$, the \emph{empty face}, and a greatest face $F_d$, which is $P$ itself. \item Every flag contains $d+2$ faces. \item\label{connectivity} (Strong flag-connectivity:) For any two flags $\Phi$ and $\Psi$ of $P$, there exists a sequence of flags $\Phi \eqqcolon \Phi_0, \Phi_1, \dotsc, \Phi_k \coloneqq \Psi$, such that each flag is adjacent to its neighbors and $\Phi \cap \Psi \subseteq \Phi_i$ for each $i$. 
\item (The diamond condition:) For any $j$, $1 \leq j \leq d$, any $j$-face $G$ of $P$, and any $(j-2)$-face $F$ contained in $G$, there are exactly two faces $H$ such that $F < H < G$. \end{enumerate} A $d$-polytope $Q$ is said to be \emph{dual} to $P$ if the face lattice $\mathcal{L}(Q)$ is anti-isomorphic to the lattice $\mathcal{L}(P)$, that is, identical to $\mathcal{L}(P)$ with the order reversed. A bijective, order-reversing function $h \colon \mathcal{L}(P) \to \mathcal{L}(Q)$ is called a \emph{duality}. A dual polytope to $P$ is often denoted $P^*$. Clearly, any two duals of $P$ are combinatorially isomorphic. A dual $P^*$ to any convex polytope $P$ may be constructed by the process of \emph{polar reciprocation}: After translating $P$, if necessary, so that the origin is contained in its interior, let $P^* = \bigcap_{y \in P}\set{x}{\scprd{x}{y} \leq 1}$, where $\scprd{x}{y}$ is the scalar product. Then $(P^*)^* = P$ and $G(P^*) = G(P)$. Thus, when necessary, we may assume that a polytope and its dual have the same symmetry group. For any two faces $F$ and $G$ of $P$ with $F \leq G$, $G/F$ denotes the \emph{section} of $\mathcal{L}(P)$ whose face lattice is $\set{H \in \mathcal{L}(P)}{F \leq H \leq G}$. This section may be realized as a convex polytope by taking the dual polytope $G^*$ to $G$, say with a duality $h \colon G \to G^*$; then the dual $h(F)^*$ to the face $h(F)$ of $G^*$ is the desired polytope. A subgroup of $G(P)$ acts on the section $G/F$; namely, those symmetries which fix all faces of $P$ which contain $G$ and all faces of $P$ which are faces of $F$. These form a subgroup which acts faithfully on $G/F$ in a well-defined way. As symmetries of $G/F$, this group is a subgroup of the symmetry group of $G/F$. We call it the \emph{restricted subgroup}, denoted $G_P(G/F)$ (this is not standard notation.) Note that the symmetry group of a two-orbit $d$-polytope $P$ can have at most two orbits on its $j$-faces, for any $j < d$. 
\begin{clm}\label{trans} Suppose $P$ is a two-orbit $d$-polytope. If the symmetry group $G(P)$ is not transitive on $j$-faces for some $j$, then $G(P)$ is transitive on $i$-faces for all $i \neq j$, where $0 \leq i, j \leq d-1$. \end{clm} \begin{proof} Otherwise, we have two orbit classes of $i$-faces, say class I and II, and two classes of $j$-faces, say A and B. Without loss of generality, suppose $j < i$. Let us say that a flag of $P$ whose $j$-face is in class A and whose $i$-face is in class I is an A-I flag, and similarly for other cases. Then we have more than two flag types, A-I, A-II, B-I, and B-II, unless the $j$-faces in class A occur only in one class of $i$-faces, say I, and $j$-faces in class B occur only in $i$-faces in class II. But, as we will show, this violates the connectivity property \ref{connectivity}. Let $\Phi$ be an A-I flag and $\Psi$ be a B-II flag. By flag-connectedness there is a sequence of adjacent flags, $\Phi = \Phi_0, \Phi_1, \dotsc, \Phi_k = \Psi$. Let $\ell$ be the least index such that $\Phi_\ell$ contains a $j$-face in class B or an $i$-face in class II, or both. Then $\Phi_{\ell-1}$ is an A-I flag, and since $\Phi_\ell$ is adjacent to $\Phi_{\ell-1}$, only one face is different, so $\Phi_\ell$ is either an A-II flag or a B-I flag. Therefore $P$ has at least three flag orbits. \end{proof} Polytopes which are transitive on $j$-faces for all $1 \leq j \leq d-1$ are called \emph{fully transitive}. It is a theorem of McMullen's thesis \cite{McMThesis} that fully transitive convex polytopes are regular. \begin{thm}[McMullen {\cite[\nopp 4C6]{McMThesis}}]\label{fullytransreg} A $d$-polytope $P$ is regular if and only if for each $j = 0, \dotsc, d-1,$ its symmetry group $G(P)$ is transitive on the $j$-faces of $P$. \end{thm} Therefore, for a two-orbit $d$-polytope $P$ there is a $j$, $0 \leq j \leq d-1$, so that $G(P)$ is not transitive on the $j$-faces but is transitive on the faces of every other rank. 
We shall call such a polytope $j$-intransitive. In the language of \textcite{hubard2010two}, a 0-intransitive two-orbit polyhedron is of class $2_{1,2}$, a 1-intransitive two-orbit polyhedron is of class $2_{0,2}$, and a 2-intransitive two-orbit polyhedron is of class $2_{0,1}$. Claim~\ref{trans} and the above comments were proved in \cite{hubard2010two}. They are consequences of Theorem~5 therein, which we may paraphrase to say that an (abstract) two-orbit $d$-polytope $P$ is either fully transitive, or there exists a $j$ ($1 \leq j \leq d$) such that $P$ is $i$-transitive for every $i \neq j$, but not for $i = j$. In using any results about abstract two-orbit polytopes, however, we must be careful to remember that convex two-orbit polytopes may be combinatorially regular and not combinatorially two-orbit. \begin{clm}\label{freeflags} For any convex polytope $P$, the order of the symmetry group $G(P)$ divides the number of flags of $P$. Each flag orbit has the same size, namely $\abs{G(P)}$, and so $P$ is a two-orbit polytope if and only if the number of flags is twice the order of $G(P)$. \end{clm} \begin{proof} This all follows from the fact that $G(P)$ acts freely on the set of flags of $P$. Let $\Phi$ be any flag of $P$. Any $\gamma \in G(P)$ acts on the $j$-adjacent flag $\Phi^j$ to $\Phi$ as $\gamma(\Phi^j) = \gamma(\Phi)^j$, since $\gamma$ is an automorphism of the face lattice. Therefore, if $\gamma \in G(P)$ is such that $\gamma(\Phi) = \Phi$, then $\gamma$ will also fix each flag adjacent to $\Phi$, and thus all flags of $P$ by flag-connectedness, so $\gamma$ is the identity. \end{proof} It follows that the dual to a two-orbit polytope is two-orbit; the dual to a $j$-intransitive $d$-polytope is $(d-j-1)$-intransitive. \begin{clm}\label{adjflags} If $P$ is a two-orbit $j$-intransitive $d$-polytope, and $\Phi$ is any flag, then for any $i \neq j$ the $i$-adjacent flag $\Phi^i$ is in the same orbit as $\Phi$. 
That is, there exists a symmetry $\rho \in G(P)$ such that $\rho(\Phi) = \Phi^i$. \end{clm} \begin{proof} Since there are only two flag orbits, and two classes of $j$-faces, the orbit of a given flag is determined entirely by its $j$-face. For $i \neq j$, $\Phi$ and $\Phi^i$ share their $j$-face, hence are in the same flag orbit. \end{proof} \begin{crl}\label{inadjflags} If $P$ is a two-orbit $j$-intransitive $d$-polytope, and $\Phi$ is any flag, then the $j$-adjacent flag $\Phi^j$ is not in the same flag orbit as $\Phi$. \end{crl} \begin{proof} If $\Phi^j$ were in the same orbit as $\Phi$, then by Claim~\ref{adjflags}, for each $i = 0, \dotsc, d - 1$ there exists an isometry $\rho_i$ of $P$ such that $\rho_i(\Phi) = \Phi^i$. But if a flag is in the same orbit as all of its adjacent flags, it follows from flag-connectedness that $P$ is regular (see Proposition 2B4 of \cite{ARP} or Theorem 4B1 of \cite{McMThesis}.) \end{proof} The next corollary is immediate from Corollary~\ref{inadjflags}. \begin{crl}\label{alternate} If $P$ is a two-orbit $j$-intransitive $d$-polytope, then for any $(j+1)$-face $F_{j+1}$ of $P$ and any $(j-1)$-face $F_{j-1}$ contained in $F_{j+1}$, the two $j$-faces $H$ with $F_{j-1} < H < F_{j+1}$ are in different $j$-face orbits. \end{crl} In the following, by ``chain of cotype $\{j\}$'' we mean a chain of faces in $\mathcal{L}(P)$ including a face of each rank except $j$. \begin{clm}\label{cochain-trans} If $P$ is a two-orbit $j$-intransitive $d$-polytope, then $G(P)$ acts transitively on chains of cotype $\{j\}$. \end{clm} \begin{proof} Let $\Psi$ and $\Omega$ be two chains of cotype $\{j\}$. By Corollary~\ref{alternate}, the two $j$-faces which are incident to the $(j-1)$-face and $(j+1)$-face of $\Psi$ are in different $j$-face orbits. Recall that the orbit of a given flag is determined entirely by its $j$-face. So we may extend $\Psi$ to a flag in either flag orbit. Similarly, we may extend $\Omega$ to a flag in either orbit. 
Thus, we extend $\Psi$ to a flag $\Psi'$ and $\Omega$ to a flag $\Omega'$ such that both are in the same orbit; then there is a symmetry $\gamma \in G(P)$ so $\gamma(\Psi') = \Omega'$, and thus $\gamma(\Psi) = \Omega$. \end{proof} \begin{clm}\label{vertexorfacet} If $P$ is a two-orbit $j$-intransitive $d$-polytope, then $j = 0$ or $j = d-1$. \end{clm} \begin{proof} Suppose $1 \leq j \leq d - 2$. Then there is a $(j-2)$-face $F_{j-2}$ contained in some $(j+2)$-face $F_{j+2}$ in $P$. The section $Q = F_{j+2}/F_{j-2}$ is a polyhedron. By Claim~\ref{cochain-trans}, isometries in the restricted group $G_P(Q)$ act transitively on the vertices and facets of $Q$ (corresponding to $(j-1)$-faces and $(j+1)$-faces of $P$, respectively.) By vertex transitivity, every vertex is in the same number $q$ of edges. By Corollary~\ref{alternate}, the edge orbits alternate across each facet, so $q$ is even. By facet transitivity, each facet is a $p$-gon for some $p$, and again by Corollary~\ref{alternate} the edge orbits alternate at each vertex, so $p$ is even. However, this contradicts Euler's theorem. In fact, each polyhedron without triangular facets has at least one 3-valent vertex \cite[237]{grunbaum1967convex}. \end{proof} \begin{clm}\label{regfaces} If $P$ is a two-orbit $j$-intransitive $d$-polytope, then all $i$-faces, for $i \leq j$, are regular. More generally, any section $G/F$, where $G$ is a $k$-face and $F$ is an $l$-face, is regular if $j \leq l$ or $k \leq j$. If $l < j < k$, then $G/F$ has two flag orbits under the restricted subgroup $G_P(G/F)$. \end{clm} \begin{proof} Since there are only two flag orbits, and two classes of $j$-faces, the orbit of a given flag is determined entirely by its $j$-face. Suppose $G/F$ is a section as described and we do not have $l < j < k$. Choose a base flag $\Phi$ of $G/F$ and extend it to a flag $\Phi'$ of $P$. 
Now any flag $\Psi$ of $G/F$ may be extended to a flag $\Psi'$ of $P$ which agrees with $\Phi'$ for all $i$-faces with $i \leq l$ or $i \geq k$. In particular, $\Phi'$ and $\Psi'$ share the same $j$-face, so there is an isometry $\gamma \in G(P)$ such that $\gamma(\Phi') = \Psi'$. Then $\gamma$ restricts to a symmetry of $G/F$ carrying $\Phi$ to $\Psi$. Hence $G/F$ is regular. On the other hand, if $l < j < k$, then $G/F$ contains a $(j-1)$-face $F_{j-1}$ of $P$ and a $(j+1)$-face $F_{j+1}$ of $P$ which contains $F_{j-1}$. By Corollary~\ref{alternate}, the two $j$-faces $H$ of $P$ with $F_{j-1} < H < F_{j+1}$ are in different orbits. Thus $G/F$ has at least two flag orbits under those isometries in $G(P)$ which restrict to $G/F$. On the other hand, for any two flags $\Phi$ and $\Psi$ of $G/F$ which contain the same kind of $j$-face of $P$, we may extend these to flags $\Phi'$ and $\Psi'$ of $P$ which agree on all $i$-faces with $i \leq l$ or $i \geq k$. Then an isometry $\gamma \in G(P)$ exists with $\gamma(\Phi') = \Psi'$, and this $\gamma$ restricts to $G/F$, where it takes $\Phi$ to $\Psi$. Hence $G/F$ has two flag orbits under those transformations in $G(P)$ which restrict to $G/F$. \end{proof} Note that those sections in Claim~\ref{regfaces} with two flag orbits under the restricted subgroup are either two-orbit polytopes or regular. Their full group of symmetries includes the restricted subgroup, but may be bigger. If the section is in fact two-orbit, then its symmetry group agrees with the restricted subgroup. In particular, if a face $F$ of a two-orbit $j$-intransitive polytope is two-orbit, then $F$ is also $j$-intransitive; note that then $j = 0$, by Claim~\ref{vertexorfacet}. \section{Two Dimensions} Suppose $P$ is a two-orbit polygon. If $P$ does not have all edges of the same length, then it has two distinct edge lengths; if it had three or more, then there would be three or more flag orbits.
In this case, $P$ is not edge-transitive, so it must be vertex-transitive. Then no two edges of the same length may be adjacent, since in that case, by vertex-transitivity, all edges would be the same length. So $P$ must alternate edges of two distinct lengths, and by vertex-transitivity all angles are the same. On the other hand, suppose $P$ does have all edges the same length. If the angle at each vertex were the same, then $P$ would be regular. Therefore, $P$ has at least two distinct angles; it has at most two, since there are at most two vertex orbits. Then $P$ is not vertex-transitive, so it must be edge-transitive, which implies that $P$ alternates between two distinct angles. We have shown that every two-orbit convex polygon must be of one of the two types described above. It is not hard to see that, moreover, such $2n$-gons exist for each $n \geq 2$. The existence of non-regular rectangles is well known. For each $n \geq 3$, a polygon of the first type may be constructed from a regular $n$-gon by truncation, i.e.\ chopping off a corner at each vertex. In the top row of Figure~\ref{fig:polygons}, one may see how the hexagon is a truncated equilateral triangle, and the octagon is a truncated square. The existence of each $2n$-gon of the second type is then clear, since they are the duals of the polygons of the first type; i.e.\ they may be constructed by taking the convex hull of vertices placed at the midpoint of each edge of a polygon of the first type. It is also clear that such polygons are, indeed, two-orbit. We verify this for a polygon $P$ of the first type; it then follows for the second type by duality. Since $P$ is not edge-transitive, it has at least two flag orbits. Since $P$ is a truncated regular $n$-gon, it has (at least) all the symmetries of the regular $n$-gon, whose symmetry group has order $2n$. But $P$ has $4n$ flags ($2n$ vertices, each in 2 edges), so $P$ has at most $4n/2n = 2$ flag orbits. Therefore $P$ is a two-orbit polygon.
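The truncation construction can be checked numerically. The sketch below (Python; the helper names are ad hoc, not from the paper) builds a hexagon by truncating an equilateral triangle at fraction $t$ of each edge, and confirms that for $t \neq 1/3$ the edge lengths alternate between two values while every interior angle is $120^\circ$, exactly as a two-orbit hexagon of the first type requires.

```python
import math

def truncated_triangle(t):
    """Vertices of an equilateral triangle truncated at fraction t of each edge."""
    tri = [(math.cos(a), math.sin(a))
           for a in (math.pi / 2,
                     math.pi / 2 + 2 * math.pi / 3,
                     math.pi / 2 + 4 * math.pi / 3)]
    hexagon = []
    for i in range(3):
        (x0, y0), (x1, y1) = tri[i], tri[(i + 1) % 3]
        hexagon.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        hexagon.append((x0 + (1 - t) * (x1 - x0), y0 + (1 - t) * (y1 - y0)))
    return hexagon

def edge_lengths(poly):
    return [math.dist(p, poly[(i + 1) % len(poly)]) for i, p in enumerate(poly)]

def interior_angles(poly):
    angles = []
    m = len(poly)
    for i in range(m):
        (ax, ay), (bx, by), (cx, cy) = poly[i - 1], poly[i], poly[(i + 1) % m]
        u, v = (ax - bx, ay - by), (cx - bx, cy - by)
        cosang = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
        angles.append(math.degrees(math.acos(cosang)))
    return angles

hexa = truncated_triangle(0.25)   # t != 1/3 gives a non-regular hexagon
lengths = edge_lengths(hexa)
angles = interior_angles(hexa)
print([round(l, 4) for l in lengths])  # alternates between two values
print([round(a, 1) for a in angles])   # all equal to 120.0
```

Setting $t = 1/3$ recovers the regular hexagon, matching the remark that each family degenerates to a regular polygon at the boundary of its parameter range.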
\section{Three Dimensions} A \emph{quasiregular} polyhedron is vertex-transitive and has exactly two kinds of facets, which are regular and alternate around each vertex. By Claims~\ref{trans} and \ref{regfaces}, any 2-intransitive two-orbit polyhedron is vertex-transitive, edge-transitive, and has regular facets in two orbits. The two types of facet must alternate around each vertex, i.e.\ each edge must be incident to one facet of each type, by edge-transitivity. Thus any 2-intransitive two-orbit polyhedron is quasiregular. But there are only two quasiregular polyhedra: the cuboctahedron and the icosidodecahedron, two of the Archimedean solids \cite[18]{coxeter1973regular}. We may verify that these are two-orbit polyhedra. The cuboctahedron has at least two flag orbits, since it is not regular, having both square and triangular faces. It has 12 vertices, each incident to 4 edges, and each edge is in 2 faces, so it has $12 \cdot 4 \cdot 2 = 96$ flags. The cuboctahedron may be formed by truncating each vertex of the 3-cube at the midpoints of the edges, so it retains all the symmetries of the cube, a group of order 48. Hence the cuboctahedron has at most $96/48 = 2$ orbits, and thus is a two-orbit polyhedron (and also combinatorially two-orbit.) The icosidodecahedron has at least two flag orbits, since it is not regular, having both triangular and pentagonal faces. It has 30 vertices, each in 4 edges, and each edge is in 2 faces, so it has $30 \cdot 4 \cdot 2 = 240$ flags. The icosidodecahedron may be formed by truncating each vertex of the dodecahedron at the midpoints of the edges, so it retains all the symmetries of the dodecahedron, a group of order 120. Hence the icosidodecahedron has at most $240/120 = 2$ orbits, and thus is a two-orbit polyhedron (and also combinatorially two-orbit.) 
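By Claim~\ref{freeflags}, both verifications reduce to a small arithmetic check: the number of flags is (vertices) $\times$ (edges per vertex) $\times$ 2, and dividing by the group order bounds the number of flag orbits. The sketch below (Python; the tuples merely transcribe the counts quoted above) recomputes them.

```python
# (name, number of vertices, edges per vertex, |G(P)|), as quoted in the text
data = [
    ("cuboctahedron", 12, 4, 48),       # inherits the symmetry group of the cube
    ("icosidodecahedron", 30, 4, 120),  # inherits the symmetry group of the dodecahedron
]
for name, v, q, g in data:
    flags = v * q * 2  # each vertex lies in q edges, and each edge lies in 2 faces
    assert flags % g == 0, "the group acts freely, so |G| divides the flag count"
    print(name, "flags:", flags, "flag orbits:", flags // g)
```

Both solids give exactly two flag orbits, confirming that they are two-orbit polyhedra.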
Any two-orbit polyhedron which is 0-intransitive must be dual to one of these two, so we have the rhombic dodecahedron, dual to the cuboctahedron, and the rhombic triacontahedron, dual to the icosidodecahedron. As duals to Archimedean solids, these are Catalan solids. Rather than using the list of quasiregular polyhedra, it is possible to arrive at candidates for 0-intransitive or 2-intransitive two-orbit polyhedra by considering all the edge-transitive polyhedra. It turns out there are only nine: the five platonic solids, the cuboctahedron, the icosidodecahedron, the rhombic dodecahedron, and the rhombic triacontahedron \cite{graver1997locally,grunbaum1987edge}. By Claim~\ref{vertexorfacet}, there are no 1-intransitive two-orbit polyhedra. In fact, polyhedra which are vertex-transitive and facet-transitive have a name, the \emph{noble} polyhedra, and the only non-regular ones (i.e. the 1-intransitive polyhedra) are disphenoid tetrahedra, which are tetrahedra with non-equilateral triangular faces \cite[26]{bruckner1906}. It is not hard to see that, if not regular, a tetrahedron has at least three flag orbits. Hence the cuboctahedron, icosidodecahedron, rhombic dodecahedron, and rhombic triacontahedron are the only two-orbit polyhedra. The same result is found in \textcite[427]{orbanic2010map} as a consequence of Theorem 6.1 therein, stating that every 2-orbit map on the sphere is either the medial of a regular map on the sphere, or dual to one. \section{Higher dimensions} Suppose $P$ is a $j$-intransitive two-orbit $d$-polytope with $d \geq 4$; by Claim~\ref{vertexorfacet} $j$ is either $0$ or $d-1$. Any two-orbit 0-intransitive polytope is dual to a two-orbit $(d-1)$-intransitive polytope, so we shall restrict our attention to the latter case. Such a polytope is vertex-transitive, and by Claim~\ref{regfaces} has regular facets. This is the definition used by \textcite{gosset1900regular} for \emph{semiregular} polytopes. 
In his 1900 paper he gives a complete list of all the semiregular polytopes. The list was proved to be complete in \textcite{Blind1991semireg}. There are only seven semiregular convex polytopes in dimensions greater than three. There are three 4-polytopes: the rectified 4-simplex, the snub 24-cell, and the rectified 600-cell. The rectified 4-simplex, which Gosset called ``tetroctahedric,'' is the convex hull of the midpoints of the edges of the 4-simplex. The facets are tetrahedra and octahedra. It has 360 flags, with 10 vertices, each in 6 edges, each edge in 3 ridges, and each ridge in 2 facets. It has the same symmetry group as the 4-simplex, of order 120; hence it has three flag orbits. The rectified 600-cell, which Gosset called ``octicosahedric,'' is the convex hull of the midpoints of the edges of the 600-cell. The facets are octahedra and icosahedra. It has 43,200 flags, with 720 vertices, each in 10 edges, each edge in 3 ridges and each ridge in 2 facets. It has the same symmetry group as the 600-cell, of order 14,400; hence it has three flag orbits. The snub 24-cell, which Gosset called ``tetricosahedric,'' has icosahedra and tetrahedra for facets. It has 96 vertices, each in 9 edges; 6 of these edges are in 3 ridges, and the other 3 edges are in 4 ridges. (This already makes it clear that there are at least two orbit classes of edges, as well as at least two orbit classes of facets, so it cannot be two-orbit.) Each ridge is in 2 facets. Hence there are 5,760 flags. It has half the symmetries of the 24-cell, leaving 576. So it has ten flag orbits. The remaining examples form Coxeter's $k_{21}$ family \cites[\S 11.8]{coxeter1973regular}{coxeter1988regular}, with one each in dimensions 5 through 8. They are the 5-demicube, or $1_{21}$, Gosset's ``5-ic Semi-regular''; $2_{21}$ or ``6-ic Semi-regular''; $3_{21}$ or ``7-ic Semi-regular''; and $4_{21}$ or ``8-ic Semi-regular''. 
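The flag counts for these 4-polytopes follow the same pattern one rank higher: a flag is an incident vertex-edge-ridge-facet chain. The following snippet (an editorial check, using only the incidence numbers quoted above) confirms the orbit counts:

```python
# Rectified 4-simplex: 10 vertices, each in 6 edges,
# each edge in 3 ridges, each ridge in 2 facets; symmetry order 120.
rect_simplex = 10 * 6 * 3 * 2
assert rect_simplex == 360
assert rect_simplex // 120 == 3        # three flag orbits

# Rectified 600-cell: 720 vertices, each in 10 edges,
# each edge in 3 ridges, each ridge in 2 facets; symmetry order 14,400.
rect_600 = 720 * 10 * 3 * 2
assert rect_600 == 43_200
assert rect_600 // 14_400 == 3         # three flag orbits

# Snub 24-cell: 96 vertices, each in 9 edges; 6 of those edges lie in
# 3 ridges each and the other 3 lie in 4 ridges each; each ridge is in
# 2 facets; symmetry order 576.
flags_per_vertex = 6 * 3 * 2 + 3 * 4 * 2
snub_24 = 96 * flags_per_vertex
assert snub_24 == 5_760
assert snub_24 // 576 == 10            # ten flag orbits
```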
Each of these has the preceding one for its vertex figure, starting with the rectified 4-simplex (which may also be called $0_{21}$) as the vertex figure of the 5-demicube. Of course, by Claim~\ref{regfaces}, if any member of this family were two-orbit, then the previous member (being a section) would either be two-orbit or regular. So by induction, none of these polytopes are two-orbit. In fact, each has three flag orbits. Thus, no two-orbit convex polytopes exist in more than three dimensions. In \cite[409--411]{conway2008symmetries}, Conway et al. say that the $n$-dimensional demicube, i.e.\ the convex hull of alternate vertices of the $n$-cube (which they call a hemicube), has $n-2$ flag orbits. So the 4-demicube should be two-orbit. The 4-demicube is described specifically as a 4-crosspolytope ``but with only half its symmetry.'' This apparently contradicts our result! However, if the 4-cube has for its vertices the 16 points in $\mathbb{E}^4$ with all coordinates 0 or 1, then the vertices of the 4-demicube are $(0,0,0,0)$, $(1,1,1,1)$, and all vectors with two 0's and two 1's. Hence if $x$ is a vertex, so is $\mathbf{1} - x$, where $\mathbf{1} = (1,1,1,1)$. Grouping the 8 vertices in pairs $(x, \mathbf{1} - x)$, we find four axes which are mutually perpendicular. Thus we have four antipodal pairs of vertices of a regular 4-crosspolytope. Hence the ``two-orbit'' 4-demicube is actually a regular 4-crosspolytope with artificially restricted symmetries, essentially by coloring the facets depending on whether they were formed inside a facet, or at a missing vertex, of the 4-cube. \section{Tilings}\label{sec:tilings} A \emph{tiling} of $d$-dimensional Euclidean space $\mathbb{E}^d$, also called a tessellation or a honeycomb, is a countable collection of subsets (called \emph{tiles}) of $\mathbb{E}^d$ which cover $\mathbb{E}^d$ without gaps or overlaps; that is, the union of the tiles is $\mathbb{E}^d$, and the interiors of the tiles are pairwise disjoint.
Here, we consider only locally finite face-to-face tilings by convex polytopes, meaning that all the tiles must be convex polytopes, every compact subset of $\mathbb{E}^d$ meets only finitely many tiles, and the intersection of any two tiles is a face of both (possibly the empty face). The face lattice of a tiling of $d$-dimensional space meets all the criteria defining an abstract polytope of rank $d+1$, and we call it a rank $(d+1)$ tiling. The $d$-dimensional tiles are the facets. A rank 3 tiling is called a \emph{plane tiling}, and a rank 2 tiling is called an \emph{apeirogon}. The latter necessarily consists of infinitely many edges (line segments) covering the line, and has been described as the limit of a sequence of $n$-gons as $n \to \infty$. A \emph{normal} tiling has \begin{itemize} \item tiles which are homeomorphic to closed balls, \item two positive radii $r$ and $R$ such that every tile contains a ball of radius $r$ and is contained in a ball of radius $R$, and \item the property that the intersection of any two tiles is empty or connected. \end{itemize} A two-orbit tiling has at most two congruence classes of tiles, so that the tiles are uniformly bounded (above and below) by balls of two given radii; together with convex polytopes as tiles, this is sufficient to establish that the tiling is normal. This rules out certain pathological possibilities for tilings. Claim~\ref{trans} still applies: if a two-orbit tiling is not fully transitive, then it is not transitive on the faces of exactly one dimension, say $j$, and we call it $j$-intransitive. However, Theorem~\ref{fullytransreg} does not apply; the proof depends on the fact that the vertices of a vertex-transitive polytope lie on a sphere, which is not the case for a tiling. So fully transitive two-orbit tilings are a possibility (and some exist.) 
Claim~\ref{freeflags} no longer makes sense, since the symmetry group and the set of flags are both infinite, but Claim~\ref{adjflags} and its corollaries still hold for any $j$-intransitive two-orbit tilings. Finally, Claim~\ref{regfaces} applies: the faces and sections of a two-orbit tiling have at most two orbits. Following \cite{grunbaum1986tilings}, we say two tilings are equal if one can be mapped onto the other by a uniform scale transformation followed by an isometry. \subsection{Apeirogons} There is one two-orbit tiling of the line, which varies by a single real parameter greater than one: an apeirogon alternating between two distinct edge lengths. Note that the construction of well-behaved duals does not work, in general, for tilings, as it does for polytopes. For example, if one constructs a ``dual'' to this two-orbit apeirogon by taking edge midpoints for vertices, one obtains a regular apeirogon, which is then self-dual! This tiling is combinatorially regular. \subsection{Plane tilings} We consider four cases of plane tilings, based on their transitivity properties. \subsubsection{Fully transitive} \textcite{grunbaum1986tilings} contains the full list of isohedral (i.e.\ tile-transitive) plane tilings (Table 6.1), isotoxal (i.e.\ edge-transitive) plane tilings (Table 6.4), and isogonal (vertex-transitive) plane tilings (Table 6.3). There are only four plane tilings realizable by convex tiles which have all three properties: the three regular plane tilings and a tiling by translations of a rhombus, labeled IH74 as an isohedral tiling, IG74 as an isogonal tiling, and IT20 as an isotoxal tiling. On \cite[311]{grunbaum1986tilings} it is confirmed that this rhombus tiling is the only non-regular fully transitive tiling realizable by convex tiles. Figure~\ref{fig:rhombus} shows a portion of this tiling, with flags of one orbit shaded. 
For a given flag $\Phi$, both the 0-adjacent flag $\Phi^0$ and the 2-adjacent flag $\Phi^2$ are in the other orbit, whereas the 1-adjacent flag $\Phi^1$ remains in the same orbit; thus with the notation of \textcite{hubard2010two} this tiling is in class $2_{1}$. \begin{figure}[h] \begin{tikzpicture}[xslant=.5,yscale=0.894] \foreach \x in {1,2,3,4} { \foreach \y in {1,2,3} { \fill[gray!50] (\x,\y) -- (\x,\y+.5) -- (\x+.5,\y+.5) -- cycle; \fill[gray!50] (\x,\y) -- (\x+.5,\y) -- (\x+.5,\y+.5) -- cycle; \fill[gray!50] (\x+1,\y+1) -- (\x+.5,\y+1) -- (\x+.5,\y+.5) -- cycle; \fill[gray!50] (\x+1,\y+1) -- (\x+1,\y+.5) -- (\x+.5,\y+.5) -- cycle; \draw[white](\x,\y) -- (\x+1,\y+1); } } \draw[very thick] (.5,.5) grid (5.5,4.5); \end{tikzpicture} \caption{The fully-transitive rhombus tiling}\label{fig:rhombus} \end{figure} A family of unequal versions of this tiling may be obtained by varying a single real parameter greater than one (the ratio of the diagonals of the rhombus.) The tiling is self-dual when taking tile midpoints for vertices. It is combinatorially regular. \subsubsection{2-intransitive} The facets of a 2-intransitive two-orbit tiling must be regular, by Claim~\ref{regfaces}. By edge-transitivity, the two facets bordering each edge are from different orbits; hence they alternate around each vertex. By vertex-transitivity, each vertex appears in the same kinds of tiles, which appear in the same order around each vertex; a common notation for such a situation is $(p.q.r\ldots)$ to indicate that each vertex $v$ is in a $p$-gon adjacent to a $q$-gon (containing $v$) adjacent to an $r$-gon, etc. An exponent may be used to indicate repetition; for instance, the regular tiling by equilateral triangles, $(3.3.3.3.3.3)$, is denoted $(3^6)$. If six facets appear at each vertex, then they must all be triangles, since replacing any triangle by a regular $n$-gon with $n \geq 4$ will not fit in the plane. 
The only tiling with six equilateral triangles at every vertex is the regular tiling $(3^6)$. Hence there must be exactly four facets at each vertex. If none of the facets are triangles, then each has at least four sides. Four squares fit exactly around a vertex, but if any square is replaced by a regular $n$-gon with $n \geq 5$, the angles will not fit in the plane. The only tiling with four squares at every vertex is the regular tiling $(4^4)$. Hence there must be at least some triangles. If all four faces at each vertex are equilateral triangles, there is too much angular deficiency to tile the plane; indeed, the only such figure is the regular octahedron, $(3^4)$. If triangles alternate with squares, the resulting figure is the cuboctahedron, $(3.4.3.4)$. If triangles alternate with pentagons, the resulting figure is the icosidodecahedron, $(3.5.3.5)$. (This is, in brief, the proof that these are the only quasiregular polyhedra.) If triangles alternate with hexagons, we do obtain a plane tiling, denoted $(3.6.3.6)$. This is one of the 11 \emph{uniform} plane tilings, also called \emph{Archimedean} tilings.
This tiling, seen in Figure~\ref{fig:trihex}, is sometimes called ``trihexagonal'' or ``hexadeltille.'' \begin{figure}[h] \begin{tikzpicture}[scale=0.8] \foreach \x in {0,2,4} { \foreach \y in {0,3.464} { \draw (\x,\y) -- (\x+1,\y) -- (\x+1.5,\y+.866) -- (\x+1,\y+1.732) -- (\x,\y+1.732) -- (\x-.5,\y+.866) -- cycle; } \foreach \y in {1.732} { \filldraw[fill=gray!50] (\x,\y) -- (\x+1,\y) -- (\x+.5,\y+.866) -- cycle; \filldraw[fill=gray!50] (\x,\y+1.732) -- (\x+1,\y+1.732) -- (\x+.5,\y+.866) -- cycle; \draw[white] (\x,\y) -- +(30:.866) (\x+1,\y) -- +(150:.866) (\x+.5,\y) -- (\x+.5,\y+1.732) (\x,\y+1.732) -- +(-30:.866) (\x+1,\y+1.732) -- +(210:.866); } } \foreach \x in {1,3} { \foreach \y in {1.732} { \draw (\x,\y) -- (\x+1,\y) -- (\x+1.5,\y+.866) -- (\x+1,\y+1.732) -- (\x,\y+1.732) -- (\x-.5,\y+.866) -- cycle; } \foreach \y in {0,3.464} { \filldraw[fill=gray!50] (\x,\y) -- (\x+1,\y) -- (\x+.5,\y+.866) -- cycle; \filldraw[fill=gray!50] (\x,\y+1.732) -- (\x+1,\y+1.732) -- (\x+.5,\y+.866) -- cycle; \draw[white] (\x,\y) -- +(30:.866) (\x+1,\y) -- +(150:.866) (\x+.5,\y) -- (\x+.5,\y+1.732) (\x,\y+1.732) -- +(-30:.866) (\x+1,\y+1.732) -- +(210:.866); } } \draw (1,5.196) -- (2,5.196) (3,5.196) -- (4,5.196) (1,0) -- (2,0) (3,0) -- (4,0) (0,3.464) -- (.5,2.598) -- (0,1.732) (5,3.464) -- (4.5,2.598) -- (5,1.732); \end{tikzpicture} \caption{The trihexagonal tiling}\label{fig:trihex} \end{figure} If we replace the hexagons by regular $n$-gons with $n \geq 7$, the total angles are excessive to fit in the plane. Hence $(3.6.3.6)$ is the unique two-orbit 2-intransitive plane tiling. \textcite[60]{coxeter1973regular} calls it by the extended Schl\"afli symbol $\vsch{3}{6}$, which is suggestive of the construction by taking the midpoints of the edges of the regular tiling $\{3,6\}$, or equivalently of its dual, the regular tiling $\{6,3\}$. He describes it as a quasiregular tessellation. It is combinatorially two-orbit. 
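The angle bookkeeping behind this case analysis is elementary: the interior angle of a regular $q$-gon is $(q-2)\pi/q$, and a vertex configuration $(3.q.3.q)$ is spherical, flat, or impossible according to whether the angle sum at a vertex is less than, equal to, or greater than $2\pi$. An illustrative check (editorial, working with angles as exact multiples of $\pi$):

```python
from fractions import Fraction

def interior(q):
    """Interior angle of a regular q-gon, as a multiple of pi."""
    return Fraction(q - 2, q)

def angle_sum(q):
    """Angle sum at a vertex of configuration (3.q.3.q), as a multiple of pi."""
    return 2 * interior(3) + 2 * interior(q)

assert angle_sum(4) < 2    # (3.4.3.4): angle deficit -> cuboctahedron
assert angle_sum(5) < 2    # (3.5.3.5): angle deficit -> icosidodecahedron
assert angle_sum(6) == 2   # (3.6.3.6): flat -> the trihexagonal tiling
assert angle_sum(7) > 2    # q >= 7: angle excess, does not fit in the plane
```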
Taking the dual by using tile midpoints for vertices works well and results in the rhombille tiling detailed below. \subsubsection{1-intransitive} By facet-transitivity, each facet has the same number of sides, say $p$, and by vertex-transitivity, each vertex is incident to the same number of edges, say $q$. Thus a 1-intransitive plane tiling has a Schl\"afli symbol $\{p,q\}$. Since edges of the two orbits alternate both around each tile and around each vertex, $p$ and $q$ are both even; the only possible symbol is $\{4,4\}$. The tiles must be regular or two-orbit. The only tiling by squares is regular; so the tiles must be two-orbit 4-gons, i.e.\ rectangles or rhombi. It follows from vertex-transitivity, or from adding angle defects, that rhombi must be arranged with two acute angles and two obtuse angles at each vertex. In the case that the two angle types alternate, we obtain the tiling in Figure~\ref{fig:rhombus}, which we know to be fully transitive. In the case that the obtuse angles are adjacent to each other, and the acute angles are adjacent to each other, we do obtain a 1-intransitive plane tiling. The rhombi are arranged in strips which alternate direction. However, this tiling actually has four orbits. Indeed, in a 1-intransitive two-orbit tiling, the orbit of a flag is determined entirely by the edge it contains; if any face is also two-orbit, so that its symmetry group is the same as the restricted subgroup, then its flag orbits must also be determined by edges, and not vertices as in the case of a rhombus. This leaves only the tiling by copies of a rectangle. This is the unique two-orbit 1-intransitive family of plane tilings, and varies by a single real parameter greater than one. It is self-dual and combinatorially regular, being isomorphic to the square tiling $(4^4)$. Figure~\ref{fig:rect} shows a patch of this tiling, with flags of one orbit shaded.
\begin{figure}[h] \begin{tikzpicture}[xscale=1.62] \foreach \x in {1,2,3,4} { \foreach \y in {1,2,3} { \fill[gray!50] (\x,\y) -- (\x,\y+.5) -- (\x+.5,\y+.5) -- cycle; \fill[gray!50] (\x,\y+1) -- (\x,\y+.5) -- (\x+.5,\y+.5) -- cycle; \fill[gray!50] (\x+1,\y) -- (\x+1,\y+.5) -- (\x+.5,\y+.5) -- cycle; \fill[gray!50] (\x+1,\y+1) -- (\x+1,\y+.5) -- (\x+.5,\y+.5) -- cycle; \draw[white](\x,\y+.5) -- (\x+1,\y+.5); } } \draw[very thick] (1,1) grid (5,4); \end{tikzpicture} \caption{The 1-intransitive rectangle tiling}\label{fig:rect} \end{figure} \subsubsection{0-intransitive} It is tempting to say that any 0-intransitive tiling must be dual to a 2-intransitive one. However, \textcite{grunbaum1986tilings} admonish us that for tilings, no duality theorem exists which would allow us to make such statements! Nonetheless, it turns out that the only 0-intransitive two-orbit tiling is indeed dual to the uniform tiling $(3.6.3.6)$. We can confirm this by again turning to the tables of isohedral and isotoxal tilings in \cite{grunbaum1986tilings}; the only additional tiling realizable by convex tiles with both properties is denoted IH37 as an isohedral tiling and IT11 as an isotoxal tiling. This is a tiling by copies of a rhombus, which can be viewed as dividing the hexagons of the regular tiling $(6^3)$ into three rhombi each. It is called ``rhombille'' or ``tumbling blocks,'' and is familiar as the visual illusion of a stair-case of blocks which can be seen in two ways. It is combinatorially two-orbit. 
\begin{figure}[h] \begin{tikzpicture} \foreach \x in {0,2,4} { \fill[gray!50] (\x+.5,1.443) -- (\x+.5,.866) -- (\x+1,1.155) -- (\x+1,1.732) -- cycle; \fill[gray!50] (\x+1,1.155) -- (\x+1.5,.866) -- (\x+1.5,1.443) -- (\x+1,1.732) -- cycle; \fill[gray!50] (\x,0) -- (\x,.577) -- (\x+.5,0.866) -- (\x+.5,.289) -- cycle; \fill[gray!50] (\x,0) -- (\x+.5,-.289) -- (\x+1,0) -- (\x+.5,.289) -- cycle; \fill[gray!50] (\x+2,0) -- (\x+1.5,.289) -- (\x+1,0) -- (\x+1.5,-.289) -- cycle; \fill[gray!50] (\x+2,0) -- (\x+1.5,.289) -- (\x+1.5,.866) -- (\x+2,.577) -- cycle; \draw[white] (\x,0) -- (\x+1,1.732) -- (\x+2,0) -- cycle; \draw (\x,0) -- (\x+1,-.577) -- (\x+2,0) -- (\x+2,1.155) -- (\x+1,1.732) -- (\x,1.155) -- cycle; \draw (\x+1,1.732) -- (\x+1,.577) -- (\x,0) (\x+1,.577) -- (\x+2,0); } \foreach \x in {1,3} { \foreach \y in {-1.732,1.732} { \fill[gray!50] (\x+.5,\y+1.443) -- (\x+.5,\y+.866) -- (\x+1,\y+1.155) -- (\x+1,\y+1.732) -- cycle; \fill[gray!50] (\x+1,\y+1.155) -- (\x+1.5,\y+.866) -- (\x+1.5,\y+1.443) -- (\x+1,\y+1.732) -- cycle; \fill[gray!50] (\x,\y) -- (\x,\y+.577) -- (\x+.5,\y+0.866) -- (\x+.5,\y+.289) -- cycle; \fill[gray!50] (\x,\y) -- (\x+.5,\y-.289) -- (\x+1,\y) -- (\x+.5,\y+.289) -- cycle; \fill[gray!50] (\x+2,\y) -- (\x+1.5,\y+.289) -- (\x+1,\y) -- (\x+1.5,\y-.289) -- cycle; \fill[gray!50] (\x+2,\y) -- (\x+1.5,\y+.289) -- (\x+1.5,\y+.866) -- (\x+2,\y+.577) -- cycle; \draw[white] (\x,\y) -- (\x+1,\y+1.732) -- (\x+2,\y) -- cycle; \draw (\x,\y) -- (\x+1,\y-.577) -- (\x+2,\y) -- (\x+2,\y+1.155) -- (\x+1,\y+1.732) -- (\x,\y+1.155) -- cycle; \draw (\x+1,\y+1.732) -- (\x+1,\y+.577) -- (\x,\y) (\x+1,\y+.577) -- (\x+2,\y); } } \end{tikzpicture} \caption{The rhombille tiling}\label{fig:rhombille} \end{figure} \subsection{Tilings of three-space} A tiling in $\mathbb{E}^d$ is said to be \emph{uniform} if it is vertex-transitive and has uniform $d$-polytopes as tiles \cite{coxeter1940regular}. 
Recall that uniform polytopes may be defined inductively, declaring uniform polygons to be regular and uniform polytopes of rank 3 or higher to be vertex-transitive with uniform facets. A 3-intransitive two-orbit tiling of 3-space has regular polyhedral tiles and is vertex-transitive, which means that it is a uniform tiling. \textcite{grunbaum1994uniform} listed all 28 uniform tilings of 3-space. Of these, only one is two-orbit: the tetrahedral-octahedral honeycomb, \#1 on Gr{\"u}nbaum's list, also called ``alternated cubic,'' ``Tetroctahedrille,'' or ``octatetrahedral.'' This is 3-intransitive. Coxeter describes it as the unique quasiregular honeycomb \cite[69]{coxeter1973regular} and assigns it the modified Schl{\"a}fli symbol $\{3, \stak{3}{4}\}$ and an abbreviated symbol $h \delta_4$ \cite[402]{coxeter1940regular}. Being semiregular (with regular tiles and a vertex-transitive group), it also appears in Gosset's list \cite{gosset1900regular} as the ``simple tetroctahedric check.'' \textcite{monson2012semiregular} describe this tiling at length. It has 6 octahedra and 8 tetrahedra meeting at each vertex; the vertex figure is a cuboctahedron. The corresponding ``net,'' the 1-skeleton of the tiling, is named \textbf{fcu} by crystallographers in \cite{delgado2002three}, where this tiling is conjectured to be the unique one with transitivity 1112, i.e.\ whose symmetry group has one orbit on vertices, edges, and 2-faces, and two orbits on tiles. A 2-intransitive tiling of 3-space has regular polygon 2-faces and is vertex-transitive. Moreover, the facets are regular or 2-intransitive two-orbit, hence vertex-transitive. So such a tiling is uniform; but we already found the only two-orbit uniform tiling and this was 3-intransitive. A 1-intransitive tiling of 3-space has two kinds of edge, which must alternate around a 2-face, so each 2-face has evenly many sides. 
The facets are regular or 1-intransitive two-orbit, and the only such polyhedron with even-sided 2-faces is the cube. The only face-to-face tiling by cubes is the regular one. So no such tilings exist. A 0-intransitive tiling of 3-space has two kinds of vertex, and every edge must be incident to one of each (by edge-transitivity), so each 2-face has evenly many sides. The facets are regular or 0-intransitive two-orbit; the only possibilities are the cube, the rhombic dodecahedron, or the rhombic triacontahedron. As we already mentioned, the only face-to-face tiling by cubes is regular. The rhombic triacontahedron has a dihedral angle of $4 \pi / 5$, so it is impossible to fit an integral number of them around an edge in 3-space. However, the rhombic dodecahedron, with a dihedral angle of $2 \pi / 3$, does form a two-orbit tiling of 3-space in a unique way. This tiling (called the rhombic dodecahedral honeycomb) is dual to the tetrahedral-octahedral honeycomb above. The corresponding net is named \textbf{flu} in \cite{delgado2002three}, and described as the structure of fluorite ($\text{CaF}_2$.) It is conjectured there to be the unique tiling with transitivity 2111. Suppose $\mathcal{T}$ is a fully transitive two-orbit tiling; then the facets are regular or two-orbit, and all of one type. Since $\mathcal{T}$ is vertex-transitive, if the facets were regular, $\mathcal{T}$ would be uniform, and we have already checked all the uniform tilings. Thus the facets must be two-orbit. Since $\mathcal{T}$ is 2-face-transitive, every 2-face is the same, which rules out the cuboctahedron or icosidodecahedron as facets. The remaining possibilities are the rhombic dodecahedron, which only appears in the 0-intransitive tiling already listed, and the rhombic triacontahedron, which as mentioned does not tile 3-space. 
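The dihedral-angle argument above reduces to exact rational arithmetic: a convex polyhedron can tile 3-space face-to-face only if an integer number of copies fits around each edge, i.e.\ only if its dihedral angle divides $2\pi$ evenly. A brief editorial check, with angles expressed as multiples of $\pi$:

```python
from fractions import Fraction

# Dihedral angles as multiples of pi, as quoted above.
rhombic_dodecahedron = Fraction(2, 3)     # dihedral angle 2*pi/3
rhombic_triacontahedron = Fraction(4, 5)  # dihedral angle 4*pi/5

def copies_around_edge(angle):
    """Number of copies needed to fill the full angle 2*pi around an edge."""
    return Fraction(2) / angle

assert copies_around_edge(rhombic_dodecahedron) == 3             # exactly 3 fit
assert copies_around_edge(rhombic_triacontahedron) == Fraction(5, 2)  # 2.5: no tiling
```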
\subsection{Higher dimensions} For a rank $(d+1)$ tiling $\mathcal{T}$ with $d \geq 4$, the facets and vertex figures are $d$-dimensional polytopes with at most two orbits. Since no two-orbit convex polytopes exist in $d \geq 4$ dimensions, the facets and vertex figures must, in fact, be regular; but then $\mathcal{T}$ itself is regular \cite[129]{coxeter1973regular}. \section{Conclusion} The number of half-regular convex polytopes and tilings (to use Conway's pleasant term) is perhaps surprisingly small. Those which are also combinatorially two-orbit are simply the cuboctahedron and the icosidodecahedron, the only two quasiregular polyhedra, and their duals; the trihexagonal tiling, the only quasiregular plane tiling, and its dual; and the tetrahedral-octahedral honeycomb, the only quasiregular honeycomb, and its dual. It is notable, perhaps, that although duality is not generally well-defined for tilings, it always works well for two-orbit tilings which are combinatorially two-orbit, just as it always works well for regular tilings and uniform plane tilings. However, it does not generally work out for two-orbit tilings which are combinatorially regular! The above suggests that ``quasiregular,'' which has previously had rather ad-hoc definitions, could be taken to mean ``facet-intransitive two-orbit.'' \textcite[18]{coxeter1973regular} defines a ``quasi-regular polyhedron'' as ``having regular faces, while its vertex figures, though not regular, are cyclic and equi-angular (i.e., inscriptible in circles and alternate-sided).'' The definition of a quasiregular plane tiling does not seem to be clearly stated, but the implication (in \cite[\S 4.2]{coxeter1973regular}) is that a quasiregular plane tiling is one formed, as the quasiregular polyhedra can be, by truncating the vertices of a regular tiling to the midpoints of the edges.
In \cite[\S 4.7]{coxeter1973regular}, a tiling of 3-space (or honeycomb) ``is said to be quasi-regular if its cells are regular while its vertex figures are quasi-regular.'' This suggests the beginning of an inductive definition for ``quasiregular'' in higher dimensions, which would perhaps agree with ours: Facet-intransitive two-orbit polytopes have regular facets and the vertex figures are again facet-intransitive and two-orbit. It would be good to establish that having regular facets and facet-intransitive two-orbit vertex figures implies that the polytope is two-orbit. This is vacuously true for convex polytopes, since \textcite{blind1979konvexe} classified all regular-faced $d$-polytopes with $d \geq 4$, and none have two-orbit vertex figures. However, the corresponding result for abstract polytopes would clarify the agreement of the definitions. The word ``quasiregular'' is also applied to some star polytopes, such as the dodecadodecahedron $\vsch{5}{5/2}$ and the great icosidodecahedron $\vsch{3}{5/2}$ in \cite[100--101]{coxeter1973regular}; three ditrigonal forms: the ditrigonal dodecadodecahedron, small ditrigonal icosidodecahedron, and great ditrigonal icosidodecahedron (also called ``triambic'' instead of ``ditrigonal''); and nine hemihedra: the tetrahemihexahedron, octahemioctahedron, cubohemioctahedron, small icosihemidodecahedron, small dodecahemidodecahedron, great dodecahemicosahedron, small dodecahemicosahedron, great dodecahemidodecahedron, and great icosihemidodecahedron (using names from \cite{wenninger1974polyhedron}). All of these are two-orbit facet-intransitive. It would be good to establish that these are the only two-orbit facet-intransitive star polytopes. Remaining questions include the classification of two-orbit tilings of hyperbolic space, two-orbit star polytopes, and other non-convex two-orbit polytopes in Euclidean space. 
The general abstract two-orbit polyhedra have been addressed in \cite{hubard2010two}, with extension to higher dimensions in preparation \cite{hubardschultetwo}. An overview is in \cite[\S 1.3]{helfand2013constructions}. The important special case of chiral polytopes has been studied extensively, but many open questions remain; a recent survey is \cite{pellicer2012developments}. It also remains to classify convex polytopes of three or more orbits. Results in this direction, mostly for abstract polytopes, are found in \cite{cunningham2012orbit}, \cite{helfand2013constructions}, and \cite{orbanic2010map}. \section{Acknowledgments} The author would like to thank his adviser, Egon Schulte, for his guidance and assistance and for suggesting the original problem, and Peter McMullen for suggesting improvements. \nocite{coxeter1985regular,johnson2000uniform} \printbibliography \end{document}
{ "timestamp": "2014-03-11T01:09:27", "yymm": "1403", "arxiv_id": "1403.2125", "language": "en", "url": "https://arxiv.org/abs/1403.2125", "abstract": "We classify the convex polytopes whose symmetry groups have two orbits on the flags. These exist only in two or three dimensions, and the only ones whose combinatorial automorphism group is also two-orbit are the cuboctahedron, the icosidodecahedron, and their duals. The combinatorially regular two-orbit convex polytopes are certain 2n-gons for each n > 1. We also classify the face-to-face tilings of Euclidean space by convex polytopes whose symmetry groups have two flag orbits. There are finitely many families, tiling one, two, or three dimensions. The only such tilings which are also combinatorially two-orbit are the trihexagonal plane tiling, the rhombille plane tiling, the tetrahedral-octahedral honeycomb, and the rhombic dodecahedral honeycomb.", "subjects": "Metric Geometry (math.MG); Combinatorics (math.CO)", "title": "Two-orbit convex polytopes and tilings" }
https://arxiv.org/abs/2111.12199
Jacobi identity in polyhedral products
We show that a relation among minimal non-faces of a fillable complex $K$ yields an identity of iterated (higher) Whitehead products in a polyhedral product over $K$. In particular, for the $(n-1)$-skeleton of a simplicial $n$-sphere, we always have such an identity, and for the $(n-1)$-skeleton of a $(n+1)$-simplex, the identity is the Jacobi identity of Whitehead products ($n=1$) and Hardie's identity of higher Whitehead products ($n\ge 2$).
\section{Introduction}\label{Introduction} Let $K$ be a simplicial complex with vertex set $[m]=\{1,2,\ldots,m\}$, and let $(\underline{X},\underline{A})=\{(X_i,A_i)\}_{i=1}^m$ be a collection of pairs of spaces indexed by vertices of $K$. If all $(X_i,A_i)$ are a common pair $(X,A)$, then we abbreviate $(\underline{X},\underline{A})$ by $(X,A)$. The \emph{polyhedral product} of $(\underline{X},\underline{A})$ over $K$ is defined by \[ Z_K(\underline{X},\underline{A})=\bigcup_{\sigma\in K}(\underline{X},\underline{A})^\sigma \] where $(\underline{X},\underline{A})^\sigma=Y_1\times\cdots\times Y_m$ such that $Y_i=X_i$ for $i\in\sigma$ and $Y_i=A_i$ for $i\not\in\sigma$. The polyhedral product was introduced by Bahri, Bendersky, Cohen and Gitler \cite{BBCG} as a generalization of the moment-angle complex $Z_K=Z_K(D^2,S^1)$ and the Davis-Januszkiewicz space $DJ_K=Z_K(\mathbb{C} P^\infty,*)$, which are fundamental objects in toric topology \cite{DJ}. It is connected to broad areas in mathematics and has been studied in many contexts. See the comprehensive survey \cite{BBC} for details. In this paper, we study relations among iterated (higher) Whitehead products of the inclusions $\Sigma X_i\to Z_K(\Sigma\underline{X},*)$, where $(\Sigma\underline{X},*)=\{(\Sigma X_i,*)\}_{i=1}^m$. We explain the motivation of our study. There is a natural action of a torus $T$ of rank $m$ on $Z_K$, which is of particular importance because quasitoric manifolds, a topological counterpart of toric varieties, are the quotient of $Z_K$ by a certain subtorus of $T$. Then the Borel construction $Z_K\times_TET$ and the inclusion $Z_K\to Z_K\times_TET$ are fundamental in toric topology. It is well known that $DJ_K\simeq Z_K\times_TET$ and the above inclusion is identified with the map \begin{equation} \label{w} Z_K\to DJ_K \end{equation} induced from the composition of the pinch map $(D^2,S^1)\to(S^2,*)$ and the bottom cell inclusion $(S^2,*)\to(\mathbb{C} P^\infty,*)$.
If $K$ is the boundary of an $n$-simplex, then $Z_K=S^{2n-1}$ and the map \eqref{w} is the Whitehead product of the bottom cell inclusions $S^2\to DJ_K$ for $n=2$ and the higher Whitehead product for $n\ge 3$ in the sense of Porter \cite{P}. Then Buchstaber and Panov posed the following problem \cite[Problem 8.2.5]{BP}. \begin{problem} \label{problem 1} When $Z_K$ is a wedge of spheres, is the map \eqref{w} a wedge of iterated (higher) Whitehead products of the bottom cell inclusions $S^2\to DJ_K$? \end{problem} Remark that there are several classes of simplicial complexes whose moment-angle complexes decompose into a wedge of spheres, for some of which polyhedral products also decompose \cite{GPTW,GT1,GT2,GW,IK1,IK2,IK3,IK4,IK6,IK7}. Here are results on Problem \ref{problem 1}. Grbi\'{c}, Panov, Theriault and Wu \cite{GPTW} obtained an affirmative solution for flag complexes, Grbi\'{c} and Theriault \cite{GT3} for MF-complexes, Abramyan and Panov \cite{AP} for the substitution of simplicial complexes, and Iriye and the first author \cite{IK4} for totally fillable complexes. As long as we are concerned with (higher) Whitehead products of the bottom inclusions $S^2\to DJ_K$, we only need to consider the map $Z_K=Z_K(D^2,S^1)\to Z_K(S^2,*)$ induced from a pinch map $(D^2,S^1)\to(S^2,*)$, which is generalized to a map \begin{equation} \label{w 2} \widetilde{w}\colon Z_K(C\underline{X},\underline{X})\to Z_K(\Sigma\underline{X},*) \end{equation} induced from the pinch maps $(CX_i,X_i)\to(\Sigma X_i,*)$, where $(C\underline{X},\underline{X})=\{(CX_i,X_i)\}_{i=1}^m$. The result of Iriye and the first author \cite{IK5} mentioned above is actually on the generalized map \eqref{w 2}. As for Whitehead products, the Jacobi identity is obviously fundamental. Then in this paper, we consider: \begin{problem} \label{problem 2} Is there a relation among iterated (higher) Whitehead products of the inclusions $\Sigma X_i\to Z_K(\Sigma\underline{X},*)$?
\end{problem} The main theorem (Theorem \ref{main}) shows that a certain property of a \emph{fillable complex}, introduced in \cite{IK4}, yields an identity among iterated (higher) Whitehead products defined by minimal non-faces of the complex. So instead of the main theorem, we present its corollary here, which requires fewer notions to state, though it is more restrictive than the main theorem. \begin{theorem} [Corollary \ref{main sphere later}] \label{main sphere} Let $K$ be the $(n-1)$-skeleton of a simplicial $n$-sphere $S$ with $n$-simplices $\sigma_1,\ldots,\sigma_r$, each of which is given a contraction ordering. If each $X_i$ is a suspension, then there is an identity of iterated (higher) Whitehead products \[ w_{\sigma_r}=\epsilon_1w_{\sigma_1}+\cdots+\epsilon_{r-1}w_{\sigma_{r-1}}, \] where $\epsilon_1,\ldots,\epsilon_{r-1}=\pm 1$ such that $\partial\sigma_r=\partial(\epsilon_1\sigma_1+\cdots+\epsilon_{r-1}\sigma_{r-1})$ as a simplicial chain. \end{theorem} Remarks on Theorem \ref{main sphere} are in order, where details will be given in Sections \ref{Fillable complex} and \ref{Result}. First, every $\sigma_i$ has at least one contraction ordering. Second, each $w_{\sigma_i}$ is a restriction of the map \eqref{w 2} and is an iterated (higher) Whitehead product of inclusions $X_i\to Z_K(\underline{X},*)$ determined by $\sigma_i$ and its contraction ordering. We will give example computations (Examples \ref{Jacobi Hardie}, \ref{cross-polytope} and \ref{RP^2}) to derive explicit identities among iterated (higher) Whitehead products by using our results, and will see that the classical Jacobi identity of Whitehead products and Hardie's Jacobi identity of higher Whitehead products \cite{H} are recovered from Theorem \ref{main sphere} for $S$ being the boundary of a simplex. \subsection*{Acknowledgement} The authors were partially supported by JSPS KAKENHI Grant Numbers 17K05248 and 19K03473 (Kishimoto), and 19K14536 (Matsushita).
\section{Fillable complex}\label{Fillable complex} This section recalls the fillable complexes introduced in \cite{IK4} and establishes the properties that we are going to use. For the rest of this paper, let $K$ denote a simplicial complex with vertex set $[m]=\{1,2,\ldots,m\}$. We set notation and terminology for simplicial complexes. Let $|K|$ denote the geometric realization of $K$. For a non-empty subset $I\subset[m]$, let \[ K_I=\{\sigma\in K\mid\sigma\subset I\} \] which is called the \emph{full subcomplex} of $K$ over $I$. A subset $M\subset [m]$ with $|M|\ge 2$ is called a \emph{minimal non-face} of $K$ if $M$ is not a simplex of $K$ and $M-v$ is a simplex of $K$ for each $v\in M$. So $M\subset [m]$ with $|M|\ge 2$ is a minimal non-face of $K$ if and only if $K_M$ is the boundary of a simplex with vertex set $M$. Note that if $M_1,\ldots,M_r$ are minimal non-faces of $K$, then $K\cup M_1\cup\cdots\cup M_r$ is a simplicial complex containing $K$ as a subcomplex. Let $\overline{K}$ denote the simplicial complex obtained by adding all minimal non-faces to $K$. For a finite set $I$, let $\Delta(I)$ denote the simplex with vertex set $I$. We define a fillable complex. \begin{definition} A simplicial complex $K$ is called \emph{fillable} if there are minimal non-faces $M_1,\ldots,M_r$ of $K$ such that $|K\cup M_1\cup\cdots\cup M_r|$ is contractible. The set of minimal non-faces $\{M_1,\ldots,M_r\}$ is called a \emph{filling} of $K$. \end{definition} We give examples of fillable complexes. \begin{example} \label{simplex} The boundary of a simplex is a fillable complex with a filling consisting of the single minimal non-face, which is the whole vertex set. Moreover, every skeleton of a simplex is a fillable complex, which possibly possesses several fillings. More generally, it is proved in \cite{IK4} that the Alexander dual of a shellable complex is fillable, where each skeleton of a simplex is the Alexander dual of a shellable complex. 
\end{example} \begin{example} \label{sphere} The $(n-1)$-skeleton $K$ of a simplicial $n$-sphere $S$ with $n$-simplices $\sigma_1,\ldots,\sigma_r$ is a fillable complex with filling $\{\sigma_1,\ldots,\sigma_{r-1}\}$. Indeed, $|K\cup\sigma_1\cup\cdots\cup\sigma_{r-1}|$ is homotopy equivalent to $|S|$ minus the center of $\sigma_r$, which is contractible. \end{example} We refine \cite[Proposition 2.12]{IK5}. For a minimal non-face $M\subset[m]$, let \[ g_M\colon |K_M|\to|K| \] denote the inclusion. Since $K_M=\partial\Delta(M)$ and $|\partial\Delta(M)|\cong S^{|M|-2}$, $g_M$ is a map from a sphere $S^{|M|-2}$ into $|K|$. \begin{lemma} \label{suspension K} If $K$ is a fillable complex with filling $\{M_1,\ldots,M_r\}$, then the map \[ \Sigma g_{M_1}\vee\cdots\vee\Sigma g_{M_r}\colon S^{|M_1|-1}\vee\cdots\vee S^{|M_r|-1}\to|\Sigma K| \] is a homotopy equivalence. \end{lemma} \begin{proof} Since $|K\cup M_1\cup\cdots\cup M_r|$ is contractible, there is a homotopy equivalence \[ |\Delta(M_1)|/|\partial\Delta(M_1)|\vee\cdots\vee|\Delta(M_r)|/|\partial\Delta(M_r)|=|K\cup M_1\cup\cdots\cup M_r|/|K|\to|\Sigma K| \] whose restriction to $|\Delta(M_i)|/|\partial\Delta(M_i)|=S^{|M_i|-1}$ is the suspension of $g_{M_i}$ for each $i=1,\ldots,r$. Then the proof is done. \end{proof} The following corollary is immediate from Lemma \ref{suspension K}, and will be used later without further mention. \begin{corollary} If a fillable complex has two fillings $\{M_1,\ldots,M_r\}$ and $\{N_1,\ldots,N_s\}$, then $r=s$. \end{corollary} We will need to find identities among the maps $\Sigma^2g_M$, for which the following purity will be quite useful. We say that a filling $\{M_1,\ldots,M_r\}$ of a fillable complex $K$ is \emph{pure} if $|M_1|=\cdots=|M_r|$. Example \ref{sphere} implies that the $(n-1)$-skeleton of a simplicial $n$-sphere admits several pure fillings. We prove a homological condition that guarantees a relation among the maps $g_M$. 
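Before that, note that minimal non-faces, and hence candidate fillings, can be enumerated mechanically for small complexes. A short Python sketch (the function names are ours, not from the paper; the complex is given by its facets):

```python
from itertools import combinations

def faces(facets):
    """All simplices of the complex generated by the given facets
    (including the empty face and the vertices)."""
    fs = set()
    for f in facets:
        for k in range(len(f) + 1):
            fs.update(combinations(sorted(f), k))
    return fs

def minimal_nonfaces(facets, vertices):
    """Subsets M with |M| >= 2 that are not faces, while every M - v is a face."""
    fs = faces(facets)
    out = []
    for k in range(2, len(vertices) + 1):
        for M in combinations(sorted(vertices), k):
            if M not in fs and all(tuple(v for v in M if v != w) in fs for w in M):
                out.append(M)
    return out

# Boundary of the 2-simplex on {1,2,3}: its only minimal non-face is {1,2,3}.
print(minimal_nonfaces([(1, 2), (1, 3), (2, 3)], [1, 2, 3]))        # [(1, 2, 3)]
# The 4-cycle (boundary of a square): the minimal non-faces are the diagonals.
print(minimal_nonfaces([(1, 2), (2, 3), (3, 4), (1, 4)], [1, 2, 3, 4]))  # [(1, 3), (2, 4)]
```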
\begin{lemma} \label{pure filling} Let $K$ be a fillable complex with two pure fillings $\{M_1,\ldots,M_r\}$ and $\{N_1,\ldots,N_r\}$. Then there is an identity \[ \partial N_i=a_1\partial M_1+\cdots+a_r\partial M_r \] in the simplicial chain complex of $\overline{K}$ for each $i=1,\ldots,r$. Moreover, we also have \[ \Sigma^2 g_{N_i}=a_1\Sigma^2 g_{M_{1}}+\cdots+a_r\Sigma^2 g_{M_{r}}. \] \end{lemma} \begin{proof} The first claim is proved by Lemma \ref{suspension K}. By construction, the Hurewicz image of each $\Sigma g_{M_i}\in\pi_*(|\Sigma K|)$ is the suspension of the homology class of $K$ represented by the cycle $\partial M_i$. Of course, the same is true for $\Sigma g_{N_i}$. Then by the Hurewicz theorem, the second claim is proved. \end{proof} We recall from \cite{IK5} a contraction ordering for a minimal non-face. The following proposition is proved in \cite[Proposition 2.13]{IK5} by assuming $K$ is fillable. However, we can see that such an assumption is not necessary. \begin{proposition} \label{contraction ordering} For any minimal non-face $M$ of $K$, there are trees $T_1,\ldots,T_k$ satisfying the following conditions: \begin{enumerate} \item $\Delta(M)\cup T_1\cup\cdots\cup T_k$ is a subcomplex of $\overline{K}$ with vertex set $[m]$; \item $T_i\cap\Delta(M)$ is a vertex for each $i=1,\ldots,k$; \item $T_i \cap T_j = \emptyset$ if $i \ne j$. \end{enumerate} \end{proposition} \begin{proof} Since $\Delta(M)$ is connected, it includes a maximal tree $T$ whose vertex set is $M$. It is easy to see that $T$ can be extended to a maximal tree $T'$ of $\overline{K}$ such that if we remove all edges of $T$ from $T'$, then we get a forest $T_1\sqcup\cdots\sqcup T_k$ where $T_i\cap\Delta(M)$ is a vertex for each $i=1,\ldots,k$. Since $\overline{K}$ is connected, the vertex set of $T'$ is $[m]$, so that $\Delta(M)\cup T_1\cup\cdots\cup T_k$ is a subcomplex of $\overline{K}$ with vertex set $[m]$. Thus the proof is done. 
\end{proof} Let $T$ be a tree with a distinguished root. We say that an edge of $T$ is free if one of its vertices is a leaf. By contracting free edges inductively, we can contract $T$ onto its root, and such a contraction is identified with an ordering of non-root vertices of $T$. We call such an ordering a \emph{contraction ordering}. Let $M,T_1,\ldots,T_k$ be as in Proposition \ref{contraction ordering}. Then by assuming that $T_i$ is given the root $T_i\cap\Delta(M)$, we get a contraction ordering of $T_i$. Joining such contraction orderings of $T_1,\ldots,T_k$ yields an ordering on $[m]-M$, which we call a contraction ordering of $M$. Then by Proposition \ref{contraction ordering}, we get: \begin{corollary} \label{contraction ordering 2} Every minimal non-face of a simplicial complex admits a contraction ordering. \end{corollary} \section{Results}\label{Result} First, we recall from \cite{IK4} the fat-wedge filtration of a polyhedral product and necessary results on it. Let $\underline{X}=\{X_i\}_{i=1}^m$ be a collection of pointed spaces. For $i=0,1,\ldots,m$, let \[ Z_K^i(C\underline{X},\underline{X})=\{(x_1,\ldots,x_m)\in Z_K(C\underline{X},\underline{X})\mid \text{at least }m-i\text{ of }x_k\text{ are basepoints}\}. \] Then we get a filtration \[ *=Z_K^0(C\underline{X},\underline{X})\subset Z_K^1(C\underline{X},\underline{X})\subset\cdots\subset Z_K^{m-1}(C\underline{X},\underline{X})\subset Z_K^m(C\underline{X},\underline{X})=Z_K(C\underline{X},\underline{X}) \] which is the \emph{fat-wedge filtration} of $Z_K(C\underline{X},\underline{X})$. It is proved in \cite{IK4} that \[ Z_K^i(C\underline{X},\underline{X})/Z_K^{i-1}(C\underline{X},\underline{X})=\bigvee_{\substack{I\subset[m]\\|I|=i}}|\Sigma K_I|\wedge\widehat{X}^I \] where $\widehat{X}^I=X_{i_1}\wedge\cdots\wedge X_{i_k}$ for $I=\{i_1<\cdots<i_k\}$. In \cite{IK4}, several criteria for splitting the fat-wedge filtration are given in terms of $K$. In particular, we have the following decomposition. 
See \cite{BBC,IK2,IK3,IK6,IK7} for applications of the decomposition. \begin{theorem} \label{decomposition} If $K$ is a fillable complex, then there is a homotopy equivalence \[ Z_K(C\underline{X},\underline{X})\simeq Z_K^{m-1}(C\underline{X},\underline{X})\vee(|\Sigma K|\wedge\widehat{X}) \] where $\widehat{X}=X_1\wedge\cdots\wedge X_m$. \end{theorem} Next, we recall the result of \cite{IK5} which describes the map \eqref{w 2} in terms of iterated (higher) Whitehead products by using the homotopy decomposition in Theorem \ref{decomposition}. Let $e_i\colon\Sigma X_i\to Z_K(\Sigma\underline{X},*)$ denote the inclusion. For a minimal non-face $M\subset[m]$ of $K$, we can define the (higher) Whitehead product $w(M)$ of the $e_i$ with $i\in M$. Namely, $w(M)$ is the composite \[ \Sigma^{|M|-1}\widehat{X}^M\simeq Z_{\partial\Delta(M)}(C\underline{X}_M,\underline{X}_M)\xrightarrow{\widetilde{w}}Z_{\partial\Delta(M)}(\Sigma\underline{X}_M,*)\xrightarrow{\rm incl}Z_K(\Sigma\underline{X},*) \] where $\underline{X}_M=\{X_i\}_{i\in M}$. Moreover, given a contraction ordering $i_1,\ldots,i_k$ of $[m]-M$, we can also define an iterated (higher) Whitehead product \[ w_M=[\ldots[[w(M),e_{i_1}],e_{i_2}],\ldots,e_{i_k}]\circ\rho\colon\Sigma^{|M|-1}\widehat{X}\to Z_K(\Sigma\underline{X},*) \] where $\rho$ is the permutation $(1,2,\ldots,m)\mapsto(j_1,\ldots,j_l,i_1,\ldots,i_k)$ for $M=\{j_1<\cdots<j_l\}$. We now state the main result of \cite{IK5}. \begin{theorem} \label{Whitehead product} Let $K$ be a fillable complex with filling $\{M_1,\ldots,M_r\}$ such that each $M_i$ is equipped with a contraction ordering. Suppose that each $X_i$ is a suspension. 
Then the composite \begin{multline*} \Sigma^{|M_i|-1}\widehat{X}\xrightarrow{\Sigma g_{M_i}\wedge 1}|\Sigma K|\wedge\widehat{X}\xrightarrow{\rm incl}Z_K^{m-1}(C\underline{X},\underline{X})\vee(|\Sigma K|\wedge\widehat{X})\\ \simeq Z_K(C\underline{X},\underline{X})\xrightarrow{\widetilde{w}}Z_K(\Sigma\underline{X},*) \end{multline*} is the iterated (higher) Whitehead product $w_{M_i}$. \end{theorem} Now we can state the main theorem of this paper. \begin{theorem} \label{main} Let $K$ be a fillable complex with two fillings $\{M_1,\ldots,M_r\}$ and $\{N_1,\ldots,N_r\}$ such that each of $M_i$ and $N_i$ is equipped with a contraction ordering. Suppose that each $X_i$ is a suspension. If \[ \Sigma^2 g_{N_i}=a_1\Sigma^2 g_{M_1}+\cdots+a_r\Sigma^2 g_{M_r}, \] then there is an identity among iterated (higher) Whitehead products \[ w_{N_i}=a_1w_{M_{1}}+\cdots+a_rw_{M_{r}}. \] \end{theorem} \begin{proof} Since $\Sigma^2 g_{N_i}=a_1\Sigma^2 g_{M_1}+\cdots+a_r\Sigma^2 g_{M_r}$ and each $X_i$ is a suspension, we have \[ \Sigma g_{N_i}\wedge 1_{\widehat{X}}=a_1\Sigma g_{M_{1}}\wedge 1_{\widehat{X}}+\cdots+a_r\Sigma g_{M_{r}}\wedge 1_{\widehat{X}}. \] Thus the identity in the statement is obtained by Theorem \ref{Whitehead product}. \end{proof} By Lemma \ref{pure filling} and Theorem \ref{main}, we get: \begin{corollary} \label{main pure} Let $K$ be a fillable complex with pure fillings $\{M_1,\ldots,M_r\}$ and $\{N_1,\ldots,N_r\}$ such that each of $M_i$ and $N_i$ is equipped with a contraction ordering. If each $X_i$ is a suspension, then for each $i=1,\ldots,r$, \[ w_{N_i}=a_1w_{M_1}+\cdots+a_rw_{M_r}, \] where $\partial N_i=\partial(a_1M_1+\cdots+a_rM_r)$ in the simplicial chain complex of $\overline{K}$. \end{corollary} By Example \ref{sphere}, Corollary \ref{main pure} specializes to: \begin{corollary} \label{main sphere later} Let $K$ be the $(n-1)$-skeleton of a simplicial $n$-sphere $S$ with $n$-simplices $\sigma_1,\ldots,\sigma_r$, each of which is given a contraction ordering. If each $X_i$ is a suspension, then \[ w_{\sigma_r}=\epsilon_1w_{\sigma_1}+\cdots+\epsilon_{r-1}w_{\sigma_{r-1}}, \] where $\epsilon_1,\ldots,\epsilon_{r-1}=\pm 1$ such that $\partial\sigma_r=\partial(\epsilon_1\sigma_1+\cdots+\epsilon_{r-1}\sigma_{r-1})$ in the simplicial chain complex of $S$. \end{corollary} \begin{proof} Clearly, there is an identity $\partial\sigma_r=\partial(\epsilon_1\sigma_1+\cdots+\epsilon_{r-1}\sigma_{r-1})$ as a simplicial chain for some $\epsilon_1,\ldots,\epsilon_{r-1}=\pm 1$. This relation readily implies $\Sigma^2 g_{\sigma_r}=\epsilon_1\Sigma^2 g_{\sigma_1}+\cdots+\epsilon_{r-1}\Sigma^2 g_{\sigma_{r-1}}$. Then the proof is finished by Theorem \ref{main}. \end{proof} We give example computations of the above results. \begin{example} \label{Jacobi Hardie} Let $K$ be the $(m-3)$-skeleton of $\partial\Delta([m])$. Then $K$ has pure fillings \[ \{[m]-i\mid i=1,2,\ldots,m-1\}\quad\text{and}\quad\{[m]-i\mid i=2,3,\ldots,m\}. \] Let $\sigma_i=[m]-i$ for $i=1,2,\ldots,m$. Then there is an identity \[ \partial\sigma_m=\partial((-1)^{m+2}\sigma_1+\cdots+(-1)^{2m}\sigma_{m-1}) \] in the simplicial chain complex of $\partial\Delta([m])$. Therefore by Corollary \ref{main sphere later}, there is an identity \begin{equation} \label{Hardie} w_{\sigma_m}=(-1)^{m+2}w_{\sigma_1}+\cdots+(-1)^{2m}w_{\sigma_{m-1}}. \end{equation} Suppose that $m=3$ and $X_i=S^{p_i}$ for $i=1,2,3$. Then it is easy to see that the identity \eqref{Hardie} is exactly the same as the Jacobi identity of Whitehead products \[ (-1)^{p_1p_3}[[e_1,e_2],e_3]+(-1)^{p_1p_2}[[e_2,e_3],e_1]+(-1)^{p_2p_3}[[e_3,e_1],e_2]=0. \] For $m>3$, if all $X_i$ are spheres, then \eqref{Hardie} coincides with Hardie's identity of higher Whitehead products \cite[Theorem 2.2]{H}. Thus our identity is a combinatorial generalization of these two identities. 
\end{example} \begin{example} \label{cross-polytope} Recall that the $n$-dimensional cross-polytope is defined as the convex hull of the $2n$ points \[ (\pm 1,0,0,\ldots,0),\,(0,\pm 1,0,\ldots,0),\ldots,(0,0,\ldots,0,\pm 1) \] in $\mathbb{R}^n$. Then the $n$-dimensional cross-polytope is the dual polytope of the $n$-dimensional hypercube. Let $S$ be the boundary of the $(n+2)$-dimensional cross-polytope. We may set the vertex set of $S$ to be $[2n+4]$ such that $\sigma \subset [2n+4]$ is a face of $S$ if and only if there is no $i \in \{1, 2, \ldots, n+2\}$ such that $\{ 2i-1, 2i \} \subset \sigma$. Thus a facet of $S$ is given by \[ \sigma(\alpha_1, \ldots, \alpha_{n+2}) = \{ 2 - \alpha_1, 4- \alpha_2, \ldots, 2n+4 - \alpha_{n+2}\}, \] where $\alpha_1, \ldots, \alpha_{n+2}$ are either $0$ or $1$. We consider identities among iterated (higher) Whitehead products in a polyhedral product over the $n$-skeleton $K$ of $S$. Then the collections of facets of $S$ \[ \{ \sigma(\alpha_1, \ldots, \alpha_{n+2}) \; | \; (\alpha_1, \ldots, \alpha_{n+2}) \ne (1, \ldots, 1)\} \] and \[ \{ \sigma(\alpha_1, \ldots, \alpha_{n+2}) \; | \; (\alpha_1, \ldots, \alpha_{n+2}) \ne (0, \ldots, 0)\} \] are pure fillings of $K$. For these pure fillings, we have \begin{align*} &\sum_{\alpha_1, \ldots, \alpha_{n+2}} (-1)^{\alpha_1 + \cdots + \alpha_{n+2}} \partial \sigma (\alpha_1, \ldots, \alpha_{n+2}) \\ &=\sum_{\alpha_1, \ldots, \alpha_{n+2}} (-1)^{\alpha_1 + \cdots + \alpha_{n+2}} \bigg( \sum_{i=1}^{n+2} (-1)^{i-1} \{ 2-\alpha_1, \ldots, \widehat{2i-\alpha_i}, \ldots, 2n+4 - \alpha_{n+2}\}\bigg) \\ &=\sum_{i=1}^{n+2} \sum_{\alpha_1, \ldots, \alpha_{n+2}} (-1)^{\alpha_1 + \cdots + \alpha_{n+2} + i - 1} \{ 2-\alpha_1, \ldots, \widehat{2i-\alpha_i}, \ldots, 2n+4-\alpha_{n+2}\}\\ &=0 \end{align*} in the simplicial chain complex of $S$, where the last equality holds because each summand is independent of $\alpha_i$ while its sign changes with $\alpha_i$, so that the terms cancel in pairs. 
Then by Corollary \ref{main sphere later}, we get an identity \[ w_{\sigma(1, \ldots, 1)} = \sum_{(\alpha_1, \ldots, \alpha_{n+2}) \ne (1,\ldots, 1)} (-1)^{\alpha_1 + \cdots + \alpha_{n+2} + n + 1} w_{\sigma(\alpha_1, \ldots, \alpha_{n+2})}. \] \end{example} \begin{example} \label{RP^2} This example considers a simplicial complex which is not a skeleton of a simplicial sphere. Let $K$ be the following graph, where we identify vertices and edges with the same names. \begin{center} \begin{tikzpicture}[x=0.7cm, y=0.7cm, thick] \draw(0,0.1)--(-2,1.1)--(-2,3.5)--(0,4.5)--(2,3.5)--(2,1.1)--(0,0.1); \draw(-2,1.1)--(2,1.1)--(0,4.5)--(-2,1.1); \draw(-2,3.5)--(-1,2.8); \draw(2,3.5)--(1,2.8); \draw(-1,2.8)--(1,2.8)--(0,1.1)--(-1,2.8); \draw(0,0)--(0,1.1); \fill[black](-1,2.8)circle(2pt)node[below=1.5pt, left=2pt]{$4$}; \fill[black](1,2.8)circle(2pt)node[below=1.5pt, right=2pt]{$6$}; \fill[black](0,1.1)circle(2pt)node[above=3.5pt]{$5$}; \fill[black](0,0.1)circle(2pt)node[below=2pt]{$3$}; \fill[black](-2,1.1)circle(2pt)node[left=2pt]{$2$}; \fill[black](2,1.1)circle(2pt)node[right=2pt]{$1$}; \fill[black](-2,3.5)circle(2pt)node[left=2pt]{$1$}; \fill[black](2,3.5)circle(2pt)node[right=2pt]{$2$}; \fill[black](0,4.5)circle(2pt)node[above=2pt]{$3$}; \end{tikzpicture} \end{center} Then $K$ is the 1-skeleton of a six-vertex triangulation of $\mathbb{R} P^2$. We abbreviate $\{i,j,k\}$ by $ijk$ for $i,j,k=1,2,\ldots,6$. Let \[ \mathcal{F}=\{124,\,126,\,134,\,135,\,156,\,235,\,236,\,245,\,346,\,456\}. \] Since $(\mathcal{F}-\sigma)\cup\{123\}$ is a filling of $K$ for each $\sigma\in\mathcal{F}$, $K$ is fillable. In the simplicial chain complex of $\overline{K}$, we have \begin{align*} \partial (456) &= 2 \partial (123) - \partial (124) - \partial (126) + \partial(134) + \partial (135)\\ &\quad+ \partial (156) - \partial (235) - \partial (236) + \partial (245) - \partial (346). 
\end{align*} Then by Corollary \ref{main pure}, we obtain \[ w_{456}=2 w_{123} - w_{124} - w_{126} + w_{134} + w_{135} + w_{156} - w_{235} - w_{236} + w_{245} - w_{346}. \] This shows that the coefficients in the identity are not $\pm 1$ in general, whereas they are $\pm 1$ when $K$ is the $(n-1)$-skeleton of a simplicial $n$-sphere as in Corollary \ref{main sphere later}. \end{example}
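The chain identities in Examples \ref{Jacobi Hardie} and \ref{RP^2} can be checked directly. A small Python sketch (our own helper names; oriented simplices are encoded as sorted vertex tuples):

```python
def boundary(simplex):
    """Boundary of an oriented simplex as a dict: face -> coefficient."""
    ch = {}
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i+1:]
        ch[face] = ch.get(face, 0) + (-1) ** i
    return ch

def chain(terms):
    """Linear combination sum of coeff * boundary(simplex), zero terms dropped."""
    ch = {}
    for coeff, s in terms:
        for face, c in boundary(s).items():
            ch[face] = ch.get(face, 0) + coeff * c
    return {f: c for f, c in ch.items() if c != 0}

# Jacobi identity case m=3: d(sigma_3) = -d(sigma_1) + d(sigma_2),
# with sigma_i = [3] - i, i.e. sigma_1 = (2,3), sigma_2 = (1,3), sigma_3 = (1,2).
print(chain([(1, (1, 2))]) == chain([(-1, (2, 3)), (1, (1, 3))]))  # True

# Example RP^2: d(456) = 2 d(123) - d(124) - d(126) + d(134) + d(135)
#               + d(156) - d(235) - d(236) + d(245) - d(346).
rhs = chain([(2, (1, 2, 3)), (-1, (1, 2, 4)), (-1, (1, 2, 6)), (1, (1, 3, 4)),
             (1, (1, 3, 5)), (1, (1, 5, 6)), (-1, (2, 3, 5)), (-1, (2, 3, 6)),
             (1, (2, 4, 5)), (-1, (3, 4, 6))])
print(chain([(1, (4, 5, 6))]) == rhs)  # True
```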
{ "timestamp": "2021-11-25T02:04:41", "yymm": "2111", "arxiv_id": "2111.12199", "language": "en", "url": "https://arxiv.org/abs/2111.12199", "abstract": "We show that a relation among minimal non-faces of a fillable complex $K$ yields an identity of iterated (higher) Whitehead products in a polyhedral product over $K$. In particular, for the $(n-1)$-skeleton of a simplicial $n$-sphere, we always have such an identity, and for the $(n-1)$-skeleton of a $(n+1)$-simplex, the identity is the Jacobi identity of Whitehead products ($n=1$) and Hardie's identity of higher Whitehead products ($n\\ge 2$).", "subjects": "Algebraic Topology (math.AT); Combinatorics (math.CO)", "title": "Jacobi identity in polyhedral products" }
https://arxiv.org/abs/2007.09765
On the illumination of centrally symmetric cap bodies in small dimensions
The illumination number $I(K)$ of a convex body $K$ in Euclidean space $\mathbb{E}^d$ is the smallest number of directions that completely illuminate the boundary of a convex body. A cap body $K_c$ of a ball is the convex hull of a Euclidean ball and a countable set of points outside the ball under the condition that each segment connecting two of these points intersects the ball. The main results of this paper are the sharp estimates $I(K_c)\leq6$ for centrally symmetric cap bodies of a ball in $\mathbb{E}^3$, and $I(K_c)\leq 8$ for unconditionally symmetric cap bodies of a ball in $\mathbb{E}^4$.
\section{Introduction}\label{section-introduction} \subsection{On the status of the Illumination Conjecture} Let $\mathbb{E}^d$ denote a $d$-dimensional Euclidean space. The origin is denoted by $\mathbf{o}$, and $\mathbb{S}^{d-1}$ is the origin-centered $(d-1)$-sphere of unit radius. We say that $K \subset \mathbb{E}^d$ is a \textit{convex body} if it is a compact, convex subset of $\mathbb{E}^d$ with non-empty interior. Unless specified otherwise, it is implied that the convex body contains the origin $\mathbf{o}$ in its interior. Consider a convex body $K$, a point $\boldsymbol{p}$ on its boundary, and a point $\boldsymbol{u}$ on $\mathbb{S}^{d-1}$. We say $\boldsymbol{u}$ \textit{illuminates} $\boldsymbol{p}$ if and only if there exists a point in the interior of $K$, $\boldsymbol{q} \in \inter K$, such that $\boldsymbol{q} = \boldsymbol{p} + \lambda \boldsymbol{u}$ for some $\lambda \in \mathbb{R}^+$. In other words, the ray with direction $\boldsymbol{u}$ that starts at $\boldsymbol{p}$ has to pierce the interior of $K$. $K$ is \textit{completely illuminated} by a set of directions $U = \left\{\boldsymbol{u}_1, \boldsymbol{u}_2, \dots, \boldsymbol{u}_k \right\} \subset \mathbb{S}^{d-1}$ if every point of $\bd K$ is illuminated by at least one direction from $U$. The illumination number $I(K)$ of $K$ is the smallest number of directions that completely illuminate $K$. \begin{conjecture}[\textbf{Illumination Conjecture}]\label{illconj} The illumination number $I(K)$ of any $d$-dimensional convex body $K \subset \mathbb{E}^d$ does not exceed $2^d$. Moreover, $I(K) = 2^d$ if and only if $K$ is an affine image of a $d$-cube. \end{conjecture} The illumination conjecture has several alternative statements. It was first posed in 1955 as a covering problem in two dimensions: Levi proved that any 2-dimensional convex body can be covered by 4 translates of its interior \cite{leviUeberdeckungEibereichesDurch1955}. 
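The illumination definition above is easy to test numerically: by convexity, if the ray $\boldsymbol{p}+\lambda\boldsymbol{u}$ meets $\inter K$ for some $\lambda>0$, then it does so for every sufficiently small $\lambda>0$. A minimal Python sketch for the square $[-1,1]^2$ (the membership test and sample points are our own illustration, not from the paper):

```python
def illuminates(u, p, interior, lam=1e-3):
    """Direction u illuminates boundary point p of a convex body iff
    p + lam*u lies in the interior for some (equivalently, for any
    sufficiently small) lam > 0."""
    q = tuple(pi + lam * ui for pi, ui in zip(p, u))
    return interior(q)

# The square [-1,1]^2: its interior is the open max-norm unit ball.
square = lambda q: max(abs(q[0]), abs(q[1])) < 1

print(illuminates((-1, -1), (1, 1), square))  # True: the diagonal ray enters the square
print(illuminates((-1, 0), (1, 1), square))   # False: the ray slides along the top edge
```

Testing a single small $\lambda$ is valid because the segment from a boundary point to an interior point lies in the interior except for its endpoint.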
Later, Hadwiger \cite{hadwigerUngelosteProblemeNr1957} conjectured that any $d$-dimensional convex body can be covered by $2^d$ translates of its interior. Independently, Markus and Gohberg \cite{gohbergCertainProblemCovering1960} posed a similar conjecture about covering a $d$-dimensional convex body $K$ with translates of its smaller homothetic copies $\boldsymbol{u}+\lambda K,$ where $\boldsymbol{u} \in \mathbb{E}^d$ and $\lambda \in (0,1)$. The question of illumination by directions was first stated by Boltyanskii \cite{boltyanskiProblemIlluminatingBoundary1960}. Hadwiger \cite{hadwigerUngelosteProblemeNr1960} offered a point source interpretation of the problem: instead of illuminating a region of $\bd K$ by all the rays with the same direction, it is illuminated by all the rays starting at a point outside $K$. For a convex body $K \subset \mathbb{E}^d$ two covering numbers can be defined: $C_{hom}(K)$, the smallest number of smaller homothets of $K$ required to cover it, and $C_i(K)$, the smallest number of interior translates required to cover $K$. There are also two illumination numbers: $I_p(K)$, the smallest number of point sources that completely illuminate $K$, and, finally, the direction illumination number $I(K)$ defined above. For any convex body $K \subset \mathbb{E}^d$ all these numbers are equal (for details see \cite{boltyanskiExcursionsCombinatorialGeometry1996}): \begin{equation} C_i(K) = C_{hom}(K) = I_p(K) = I(K). \end{equation} So far, the illumination conjecture has been completely proven only in $\mathbb{E}^2$. The best general estimates for the illumination numbers of convex bodies in $\mathbb{E}^3, \mathbb{E}^4, \mathbb{E}^5, \mathbb{E}^6$ are, respectively, $I(K)\leq 16$ \cite{papadoperakisEstimateProblemIllumination1999}, $I(K) \leq 96$, $I(K) \leq 1091$, and $I(K) \leq 15373$ \cite{prymak_illumination_2020}. 
In $\mathbb{E}^3$ the illumination conjecture is proven for convex bodies with central symmetry \cite{lassakSolutionHadwigerCovering1984}, bodies with symmetry about a plane \cite{deksterEachConvexBody2000}, and polytopes with affine symmetry \cite{bez_affine}. For a long time, the best general estimate was due to Rogers' work \cite{rogersNoteCoverings1957} and his collaborations with Shephard \cite{rogersDifferenceBodyConvex1957} and Erd\H{o}s \cite{erdosCoveringSpaceConvex1962}: \begin{equation} I(K) \leq \frac{\vol_{d} (K - K)}{\vol_{d}(K)} d(\ln d+\ln\ln d+5)\leq {\binom{2d}{d}}d(\ln d+\ln\ln d+5)=O(4^{d}\sqrt{d}\ln d). \end{equation} For centrally symmetric convex bodies this turns into: \begin{equation} I(K) \leq \frac{\vol_{d} (K - K)}{\vol_{d}(K)}d(\ln d+\ln\ln d+5)=2^{d}d (\ln d+\ln\ln d+5)= O(2^{d}d\ln d). \end{equation} Recently, the general Rogers estimate was improved in \cite{huangImprovedBoundsHadwiger2018} to $I(K) \leq c_1 4^d e^{-c_2 \sqrt{d}}$ for some universal constants $c_1$ and $c_2$. For a more detailed overview of the illumination conjecture see \cite{bezdekGeometryHomotheticCovering2018}. \begin{definition} A \textbf{cap body of a ball} is the convex hull of the closed ball $B^d[\mathbf{o},r] \subset \mathbb{E}^d$ and a countable set of points outside the ball $\left\{\boldsymbol{v}_i \in \mathbb{E}^d \setminus B^d[\mathbf{o},r] \mid i \in I \right\}$ such that for any pair of distinct points $\boldsymbol{v}_i, \boldsymbol{v}_j$ with $i,j \in I$, the line segment $\overline{\boldsymbol{v}_i\boldsymbol{v}_j}$ intersects the closed ball. \end{definition} Since the illumination number is invariant with respect to affine transformations, we will only consider cap bodies based on the origin-centered ball of unit radius. Hereafter, unless specified otherwise, we will use ``cap body'' to refer to the cap body of an origin-centered unit ball. See Fig. \ref{fig:2dcap} for an example of a 2-dimensional cap body. 
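The defining condition of a cap body is straightforward to verify for a finite vertex set: each vertex must lie outside the unit ball, and each segment between two vertices must come within distance 1 of the origin. A small Python sketch (the helper names and the sample vertex sets are our own illustration):

```python
import itertools
import math

def seg_dist_to_origin(p, q):
    """Distance from the closed segment [p, q] to the origin."""
    d = [qi - pi for pi, qi in zip(p, q)]
    dd = sum(x * x for x in d)
    # Parameter of the closest point, clamped to the segment.
    t = 0.0 if dd == 0 else max(0.0, min(1.0, -sum(pi * di for pi, di in zip(p, d)) / dd))
    closest = [pi + t * di for pi, di in zip(p, d)]
    return math.hypot(*closest)

def is_cap_body(vertices, r=1.0):
    """Check the cap-body condition: all vertices outside the ball of radius r,
    and every connecting segment intersects the closed ball."""
    if any(math.hypot(*v) <= r for v in vertices):
        return False
    return all(seg_dist_to_origin(p, q) <= r
               for p, q in itertools.combinations(vertices, 2))

print(is_cap_body([(1.2, 0.0), (-1.2, 0.0), (0.0, 1.2), (0.0, -1.2)]))  # True
print(is_cap_body([(2.0, 0.0), (0.0, 2.0)]))  # False: the segment misses the ball
```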
\begin{figure}[h] \centering \includegraphics[scale=0.4]{picture1} \caption{A cap body in $\mathbb{E}^2$ with three vertices} \label{fig:2dcap} \end{figure} Cap bodies of a ball were first introduced by Minkowski in 1903 \cite{minkowskiVolumenUndOberflache1903}. Minkowski conjectured that, given a fixed surface area, only the cap bodies maximize the product of volume and mean width. This conjecture was proven later \cite{bolBeweisVermutungMinkowski1943}. For details and recent results on the extremals of Minkowski's quadratic inequality see \cite{shenfeldExtremalsMinkowskiQuadratic2019}. Cap bodies were used by Nasz\'odi in \cite{nasspiky}, under the name of a ``spiky ball'', in the illumination context. Nasz\'odi used the construction to demonstrate that for any positive $\varepsilon$ there is a $d$-dimensional cap body in an $\varepsilon$-region of a Euclidean ball with an illumination number exponentially large with respect to $d$. \subsection{New results} Let $\left\{ \boldsymbol{e}_i \mid i \in \{1, \dots, d\} \right\}$ be the standard orthonormal basis. For a point $\boldsymbol{u} \in \mathbb{E}^d$ and $\alpha \in \mathbb{R}$ we will denote by $H_{\boldsymbol{u},\alpha}$ the hyperplane $\left\{ \boldsymbol{x} \in \mathbb{E}^d \mid \langle \boldsymbol{x}, \boldsymbol{u} \rangle = \alpha \right\}$. For that hyperplane, $H^+_{\boldsymbol{u}, \alpha} = \left\{ \boldsymbol{x} \in \mathbb{E}^d \mid \langle \boldsymbol{x}, \boldsymbol{u} \rangle \geq \alpha \right\}$ is its positive halfspace and $H^-_{\boldsymbol{u}, \alpha}=\left\{ \boldsymbol{x} \in \mathbb{E}^d \mid \langle \boldsymbol{x}, \boldsymbol{u} \rangle \leq \alpha \right\}$ is its negative halfspace. We will call the hyperplanes $H_{\boldsymbol{e}_i, 0}$ \textit{coordinate hyperplanes}. Respectively, the $(d-2)$-greatspheres $G_i = \mathbb{S}^{d-1} \cap H_{\boldsymbol{e}_i, 0}$ are \textit{coordinate greatspheres}. 
If $\boldsymbol{u}$ is a unit vector, $\Hem_{\boldsymbol{u}}$ stands for the open hemisphere of $\mathbb{S}^{d-1}$ with centre $\boldsymbol{u}$. There is a correspondence between the vertices $\boldsymbol{v}_i$ of a cap body $K_c = \conv \left( \mathbb{S}^{d-1} \cup \left\{ \boldsymbol{v}_i \mid i \in I \right\} \right)$ and the spherical caps $C_i = \mathbb{S}^{d-1} \cap H^+_{\boldsymbol{v}_i,1}$, $i \in I$, on $\mathbb{S}^{d-1}$. These caps form a packing on $\mathbb{S}^{d-1}$: their interiors are pairwise disjoint. We will prove that the problem of illuminating a cap body by a set of directions is equivalent to the problem of covering spherical caps with open hemispheres. If the spherical cap $C$ is a subset of an open hemisphere $\Hem_{\boldsymbol{u}}$ for some $\boldsymbol{u} \in \mathbb{S}^{d-1}$, we say that $\Hem_{\boldsymbol{u}}$ \textit{separates} $C$. Similarly, a hyperplane $H$ passing through the origin separates the cap $C$ if one of the open hemispheres $\mathbb{S}^{d-1} \setminus H^{\pm}$ separates the cap. In this case we also say that the $(d-2)$-greatsphere $H \cap \mathbb{S}^{d-1}$ separates the cap. \begin{theorem}[\textbf{Cap Body Illumination Criterion}]\label{thm:cbcriterion} A cap body $K_c = \conv \left( \mathbb{S}^{d-1} \cup \left\{\boldsymbol{v}_i \mid i \in I\right\} \right)$ is illuminated by directions $\boldsymbol{u}_1, \dots, \boldsymbol{u}_k \in \mathbb{S}^{d-1}$ if and only if each closed spherical cap $\mathbb{S}^{d-1} \cap H^+_{\boldsymbol{v}_i,1}$, $i \in I$, is separated by some open hemisphere among $\Hem_{-\boldsymbol{u}_1}, \dots, \Hem_{-\boldsymbol{u}_k}$, and this set of hemispheres completely covers $\mathbb{S}^{d-1}$. \end{theorem} \begin{definition} An \textbf{unconditionally symmetric cap body} in $\mathbb{E}^d$ is a cap body which is symmetric about every coordinate hyperplane $H_{\boldsymbol{e}_i,0}$ for $i \in \{1,2, \dots, d\}$. 
\end{definition} We study the illumination of centrally symmetric cap bodies in $\mathbb{E}^3$ and unconditionally symmetric cap bodies in $\mathbb{E}^4$. The main results of this paper are Theorems \ref{thm:S2} and \ref{thm:S3}: \begin{theorem}\label{thm:S2} The illumination number of a centrally symmetric cap body of a ball in $\mathbb{E}^3$ is at most 6, and this estimate is sharp. \end{theorem} \begin{theorem}\label{thm:S3} The illumination number of an unconditionally symmetric cap body of a ball in $\mathbb{E}^4$ is at most 8, and this estimate is sharp. \end{theorem} \begin{rmk} The illumination conjecture has already been proven for centrally symmetric convex bodies in $\mathbb{E}^3$ \cite{lassakSolutionHadwigerCovering1984}. We sharpen the illumination number estimate for centrally symmetric cap bodies to 6, compared to Lassak's general estimate of 8. Similarly, in $\mathbb{E}^4$ we show that the illumination number of an unconditionally symmetric cap body is at most 8, compared to $2^4$ that features in the general illumination conjecture statement. \end{rmk} In Section \ref{sec:s2proof} we show that for centrally symmetric cap bodies in $\mathbb{E}^3$, there always exist three pairwise orthogonal great circles that separate every cap on the sphere. We pick a cap $C_{max}$ of largest radius and position the cap body so that $\boldsymbol{e}_3$ is the centre of this cap. The caps that are not separated by the coordinate great circle $G_3$ are the ones that intersect it. We then show that the great circles $G_1, G_2$ can be rotated around $\boldsymbol{e}_3$ so that all these caps are separated. For the cap bodies in $\mathbb{E}^4$ we consider only the cap bodies with caps that are not separated by the 4 coordinate greatspheres. 
We show in Section \ref{sec:ktan} that due to the unconditional symmetry, the caps that fail to be separated by this configuration have to be tangent to $k$ coordinate greatspheres ($1 < k \leq d$), and have their centers on the remaining $d-k$ coordinate greatspheres. We call such caps $k$-tangent. Since the caps are packed on the sphere, only four distinct configurations of $k$-tangent caps are allowed (up to orthogonal transformation). We consider all the possible configurations of $k$-tangent caps; for each configuration we pick four pairwise orthogonal 2-greatspheres that separate all the $k$-tangent caps, and then we show that these greatspheres would also separate any other cap that forms a packing with the $k$-tangent caps. \begin{rmk} The illumination number of a convex body is invariant with respect to affine transformations. Therefore, our results are also applicable to cap bodies of ellipsoids, the affine images of cap bodies of balls. \end{rmk} \section{Proof of Theorem \ref{thm:cbcriterion}} Let $K_c = \conv \left( \mathbb{S}^{d-1} \cup \left\{\boldsymbol{v}_i \mid i \in I \right\} \right)$ denote a cap body in $\mathbb{E}^d$. \begin{lemma}\label{illum_innp} A vertex $\boldsymbol{v}_i$ of a cap body $K_c$ is illuminated by the direction $\boldsymbol{u} \in \mathbb{S}^{d-1}$ if and only if $\langle \boldsymbol{v}_i, \boldsymbol{u} \rangle < -\sqrt{\norm{\boldsymbol{v}_i}^2-1}$. \end{lemma} \begin{proof} Suppose the direction $\boldsymbol{u}$ illuminates the vertex $\boldsymbol{v}_i$. This takes place if and only if the ray starting at the point $\boldsymbol{v}_i$ with the direction $\boldsymbol{u}$ intersects the plane $H_{\boldsymbol{v}_i,1}$ in a point inside $\mathbb{S}^{d-1}$. 
In other words, there is a positive $\lambda$ such that $\langle \boldsymbol{v}_i, \boldsymbol{v}_i+\lambda \boldsymbol{u} \rangle = 1 \left( \mbox{which is equivalent to }\lambda = \frac{1-\norm{\boldsymbol{v}_i}^2}{\langle \boldsymbol{u}, \boldsymbol{v}_i \rangle}\right)$ and $\norm{\boldsymbol{v}_i + \lambda \boldsymbol{u}} < 1$. Combining these two conditions concludes the proof of the lemma. \end{proof} \begin{lemma} A boundary point of a cap body $\boldsymbol{p} \in \bd K_c$ that also lies on the sphere, $\boldsymbol{p} \in \mathbb{S}^{d-1}$, is illuminated by the direction $\boldsymbol{u} \in \mathbb{S}^{d-1}$ if and only if $\langle \boldsymbol{p},\boldsymbol{u} \rangle <0$. \end{lemma} \begin{proof} The point $\boldsymbol{p}$ is illuminated by $\boldsymbol{u}$ if and only if there is a positive $\lambda$ such that $\norm{\boldsymbol{p} + \lambda \boldsymbol{u}} < 1$, or, equivalently, $\norm{\boldsymbol{p} + \lambda \boldsymbol{u}}^2 < 1$. Using the fact that $\norm{\boldsymbol{p}} = \norm{\boldsymbol{u}} = 1$ yields $2 \lambda \langle \boldsymbol{p},\boldsymbol{u} \rangle + \lambda^2 < 0$. This holds for some $\lambda > 0$ if and only if $\langle \boldsymbol{p},\boldsymbol{u} \rangle<0$. \end{proof} \begin{definition} A \textbf{spike} of a vertex $\boldsymbol{v}_i, i \in I$ of a cap body $K_c$ is the set $S_i = \bd \conv ( \mathbb{S}^{d-1} \cup \boldsymbol{v}_i) \setminus \mathbb{S}^{d-1}$. \end{definition} Note that any point on the boundary of a cap body either lies on $\mathbb{S}^{d-1}$ or on a spike $S_i$ of some vertex $\boldsymbol{v}_i$. \begin{lemma} Let $\boldsymbol{v}_i \in \mathbb{E}^d \setminus \mathbb{S}^{d-1}$ be a vertex of a cap body $K_c$. Then every point on the spike $S_i$ is illuminated by the direction $\boldsymbol{u} \in \mathbb{S}^{d-1}$ if and only if the vertex $\boldsymbol{v}_i$ is illuminated by the direction $\boldsymbol{u}$. \end{lemma} \begin{proof} The ``only if'' part follows from the fact that $\boldsymbol{v}_i \in S_i$.
Suppose that the direction $\boldsymbol{u} \in \mathbb{S}^{d-1}$ illuminates the vertex $\boldsymbol{v}_i$. We need to show that an arbitrary point $\boldsymbol{p} \in S_i$ is also illuminated by $\boldsymbol{u}$. Let $\boldsymbol{q}$ be the point where the line through $\boldsymbol{v}_i$ and $\boldsymbol{p}$ meets $\mathbb{S}^{d-1}$. Then ``shrink'' the spike: let $S'_i$ be the homothet of $S_i$ with centre $\boldsymbol{q}$ and homothety coefficient $\frac{\norm{\boldsymbol{p} - \boldsymbol{q}}}{\norm{\boldsymbol{v}_i - \boldsymbol{q}}}$, so that $\boldsymbol{p}$ is the image of $\boldsymbol{v}_i$ under this transformation. Since for some $\varepsilon\in \mathbb{R}^+$ there is a point $\boldsymbol{v}_i + \varepsilon \boldsymbol{u} \in \inter \conv S_i$, there is also a point $\boldsymbol{p} + \frac{\norm{\boldsymbol{p} - \boldsymbol{q}}}{\norm{\boldsymbol{v}_i - \boldsymbol{q}}} \varepsilon \boldsymbol{u} \in \inter \conv S'_i \subset \inter \conv S_i$, and hence $\boldsymbol{p}$ is illuminated by $\boldsymbol{u}$. \end{proof} \begin{lemma}\label{VertCap} Let $\boldsymbol{v}_i \in \mathbb{E}^d \setminus \mathbb{S}^{d-1}$ be a vertex of a cap body $K_c$. Then $\boldsymbol{v}_i$ is illuminated by the direction $\boldsymbol{u} \in \mathbb{S}^{d-1}$ if and only if the closed spherical cap $C_i = \left\{ \boldsymbol{p} \in \mathbb{S}^{d-1} \mid \langle \boldsymbol{p},\boldsymbol{v}_i \rangle \geq 1 \ \right\}$ is separated by $Hem_{-\boldsymbol{u}}$. \end{lemma} \begin{proof} The cap $C_i$ lies in the open hemisphere $Hem_{-\boldsymbol{u}}$ if and only if the angle between $\boldsymbol{u}$ and the centre of $C_i$ (which is equal to the angle between $\boldsymbol{u}$ and $\boldsymbol{v}_i$) is greater than $\pi/2 + r_i$, where $r_i$ is the spherical radius of the cap $C_i$. This condition is equivalent to $\frac{\langle \boldsymbol{u},\boldsymbol{v}_i \rangle}{\norm{\boldsymbol{v}_i}} < \cos \left( \pi/2 + r_i \right)$.
This can be transformed into $\langle \boldsymbol{v}_i, \boldsymbol{u} \rangle < -\sqrt{\norm{\boldsymbol{v}_i}^2-1}$ using the fact that $\cos r_i = 1/\norm{\boldsymbol{v}_i}$. Together with Lemma \ref{illum_innp} this concludes the proof. \end{proof} We have shown that every spike of a cap body is illuminated by a set of directions if and only if every corresponding spherical cap of the cap body is separated by the corresponding set of hemispheres. That concludes the proof of Theorem \ref{thm:cbcriterion}. \section{Proof of Theorem \ref{thm:S2}}\label{sec:s2proof} Let $K_c \subset \mathbb{E}^3$ be a cap body symmetric with respect to the origin, with vertices $\left\{\boldsymbol{v}_i \mid i \in I\right\}$. We want to prove that there exist six illumination directions $\boldsymbol{u}_1, \dots, \boldsymbol{u}_6$ such that every closed cap $C_i = \mathbb{S}^2 \cap H^+_{\boldsymbol{v}_i,1}$, along with every other point on $\mathbb{S}^2$, belongs to at least one open hemisphere $Hem_{-\boldsymbol{u}_j}$. \begin{definition} A \textbf{view angle} of a spherical cap $C \subset \mathbb{S}^2$ from the point $\boldsymbol{p} \in \mathbb{S}^2, \boldsymbol{p} \notin C$ is the angle between the two great circles that both pass through $\boldsymbol{p}$ and are tangent to $C$. \end{definition} First, we pick a cap $C_j$, $j \in I$, with the largest spherical radius. There are at least two such caps (a cap and its antipode); any one will do. We denote it by $C_{max}$, its centre by $\boldsymbol{c}_{max}$, and its radius by $r_{max}$. Then we rotate the cap body so that $\boldsymbol{c}_{max} = \boldsymbol{e}_3$. Now consider the coordinate great circles $G_1, G_2, G_3$. If all the caps are separated by these circles, then the 6 directions $\left\{ \pm \boldsymbol{e}_i \right\}$ will illuminate the cap body. Suppose there is a cap $C_p$ that is not separated by $G_1, G_2, G_3$ and, hence, intersects all of them.
In particular, since $G_1$ and $G_2$ pass through $\boldsymbol{e}_3$, the view angle $2\alpha$ of $C_p$ from $\boldsymbol{e}_3$ is at least $\pi/2$ (see Fig. \ref{fig:symprobcap}). Let $\boldsymbol{c}_p$ be the centre of the cap $C_p$, and $r_p$ the spherical radius of the cap. We will show that such a cap exists only if $r_{max} = r_p = \pi/4$, and will demonstrate that such a cap configuration can still be separated by three great circles. Denote the spherical distance between $\boldsymbol{c}_{max}$ and $\boldsymbol{c}_p$ by $d_p$. We assume $d_p \leq \pi/2$; if this is not the case, we switch to the antipodal cap of $C_p$. \begin{figure}[h] \centering \includegraphics[scale=0.45]{2_symprobcap.png} \caption{A cap with centre $\boldsymbol{c}_p$ has a view angle $2\alpha \geq \pi/2$ from the point $\boldsymbol{c}_{max}$} \label{fig:symprobcap} \end{figure} Consider a great circle $F$ that is tangent to $C_p$ at a point $\boldsymbol{h}$ and passes through $\boldsymbol{c}_{max}$. Using the spherical law of sines for the right spherical triangle $\boldsymbol{c}_{max}\boldsymbol{c}_p\boldsymbol{h}$ and the fact that $d_p \geq r_{max} + r_p$, we get the following inequality: \begin{equation} \label{eq:3dtrig} \sin \alpha = \frac{\sin r_p}{\sin d_p} \leq \frac{\sin r_p}{\sin (r_{max} + r_p)} = \frac{1}{\sin r_{max} \cot r_p + \cos r_{max}} \end{equation} \begin{itemize} \item Case 1: $r_{max} > \pi/4$ The spherical distance between $\boldsymbol{e}_3$ and $-\boldsymbol{e}_3$ is $\pi \geq 2r_{max} + 2r_p$. Hence $r_p \leq \pi/2 - r_{max}$, which, in this case, leads to $r_p < \pi/4$ and $\cot r_p \geq \cot (\pi/2 - r_{max}) = \tan r_{max}$. Together with (\ref{eq:3dtrig}) we get: \begin{equation} \sin \alpha \leq \frac{1}{\sin r_{max} \cot r_p + \cos r_{max}} \leq \frac{1}{\sin r_{max} \tan r_{max} + \cos r_{max}} = \cos r_{max} < \frac{1}{\sqrt2}, \end{equation} which shows that $2\alpha < \pi/2$. Hence the view angle of any other cap from $\boldsymbol{e}_3$ is strictly less than $\pi/2$.
\item Case 2: $r_{max} \leq \pi/4$ Using the inequality (\ref{eq:3dtrig}) and the fact that $r_p \leq r_{max}$ we obtain: \begin{equation} \label{eq:s2_pifour} \sin \alpha \leq \frac{1}{\sin r_{max} \cot r_p + \cos r_{max}} \leq \frac{1}{\sin r_{max} \cot r_{max} + \cos r_{max}} = \frac{1}{2 \cos r_{max}} \leq \frac{1}{\sqrt2} \end{equation} Equality is only attained if $\cot r_p = \cot r_{max}$ and $\cos r_{max} = \frac{1}{\sqrt2}$, which is equivalent to $r_{max} = r_p = \pi/4$. Moreover, equality in (\ref{eq:3dtrig}) forces $d_p = r_{max} + r_p = \pi/2$, so for a centrally symmetric cap body this is only possible if $\boldsymbol{c}_p$, the centre of $C_p$, lies on the great circle $G_3$. To separate all the caps, we pick the great circles $G_1,G_2$ so that $G_2$ passes through $\boldsymbol{c}_p$ and $G_1$ is orthogonal to both $G_2, G_3$. This configuration separates all the other caps on the sphere. If there are no more caps with a view angle of $\pi/2$ from $\boldsymbol{e}_3$, then no other cap can intersect $G_1$ and $G_2$ simultaneously. If there is some other cap $C_q$ with centre $\boldsymbol{c}_q$, spherical radius $r_q$, and view angle $\pi/2$, then, as shown in (\ref{eq:s2_pifour}), $r_q = \pi/4$ and $\boldsymbol{c}_q \in G_3$. Now on the circle $G_3$ there are four centres (the caps $C_p, C_q$ together with their antipodes), and no two centres can be closer than $\pi/2$ to each other. That means those four centres are uniformly distributed, with distances between adjacent centres being exactly $\pi/2$, as seen in Fig. \ref{fig:symoct}. \begin{figure}[h!] \centering \includegraphics[scale=0.45]{4_symoct.png} \caption{Six caps with radii $\frac{\pi}{4}$} \label{fig:symoct} \end{figure} Then the cap $C_p$ and its antipode belong to the hemispheres corresponding to the circle $G_1$, the circle $G_2$ takes care of the cap $C_q$ and its antipode, and there are no more caps that intersect all three great circles, as there is no more room on $G_3$.
\end{itemize} We have shown that any centrally symmetric packing of spherical caps on $\mathbb{S}^2$ can be separated by three great circles. That concludes the proof of Theorem \ref{thm:S2}. \begin{rmk} This proof is based on solving spherical triangles. In higher dimensions this technique is not as helpful, and our attempts to use the proof for centrally symmetric cap bodies in $\mathbb{S}^3$ have not been successful so far. \end{rmk} \section{Unconditional Cap Bodies and $k$-tangent Caps}\label{sec:ktan} In this section we explore the possible configurations of caps on unconditionally symmetric cap bodies. We do not specify the dimension of a cap body here; everything in this section holds in the general $\mathbb{E}^d$ case. Suppose a coordinate $(d-2)$-greatsphere $G_i$ cuts the cap $C$ off-centre, i.e. the centre of $C$ does not lie on $G_i$, but some other interior point $p$ of $C$ does. Since our cap body is symmetric about the hyperplane $H_{\boldsymbol{e}_i,0}$, there is a cap $C' \neq C$ such that $C$ and $C'$ are symmetric about the hyperplane $H_{\boldsymbol{e}_i,0}$. Then $p$ lies both in $\inter C$ and $\inter C'$, violating the condition that the caps must form a packing (see Fig. \ref{fig:capnopack}). So if the cap $C$ intersects $G_i$, it is either tangent to $G_i$ (see Fig. \ref{fig:captan}), or its centre lies on $G_i$ (see Fig. \ref{fig:capcen}).
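The reflection argument above is easy to confirm numerically: if the centre of a cap of spherical radius $r$ lies at angular distance $\delta \in (0, r)$ from the reflecting hyperplane, the mirror-image cap has its centre only $2\delta < 2r$ away, so the two interiors overlap. The following is a minimal sketch in Python; the particular values of $\delta$ and $r$ are arbitrary test data, not taken from the paper.

```python
import math

def reflected_center_distance(delta):
    """Angular distance between a cap centre at angle delta from the
    hyperplane x_i = 0 and its mirror image in that hyperplane."""
    # place the centre in the 2-plane spanned by e_i and its projection:
    # coordinates (sin(delta), cos(delta)); reflection flips the first one
    c = (math.sin(delta), math.cos(delta))
    c_ref = (-c[0], c[1])
    dot = c[0] * c_ref[0] + c[1] * c_ref[1]
    return math.acos(max(-1.0, min(1.0, dot)))

r = 0.5                          # spherical radius of the cap (test value)
for delta in (0.1, 0.3, 0.49):   # off-centre, non-tangent positions
    assert reflected_center_distance(delta) < 2 * r  # interiors overlap
# tangency (delta == r) is the borderline case: centres exactly 2r apart
assert abs(reflected_center_distance(r) - 2 * r) < 1e-9
```

The distance comes out as exactly $2\delta$, matching the dichotomy: a packing only permits $\delta = r$ (tangent) or $\delta = 0$ (centred on $G_i$).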
\begin{figure}[h] \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{Fig45_centered.png} \caption{Centered} \label{fig:capcen} \end{subfigure} \hfill % \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{Fig45_tangent.png} \caption{Tangent} \label{fig:captan} \end{subfigure} \hfill % \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{Fig45_nopacking.png} \caption{Prohibited, interiors intersect} \label{fig:capnopack} \end{subfigure} \caption{Intersection of a cap and a coordinate greatsphere} \end{figure} \indent Again we start by trying to separate all the spherical caps by the coordinate $(d-2)$-greatspheres $G_i$, $i \in \{1, \dots, d\}$. If these greatspheres separate all the caps, then, by Theorem \ref{thm:cbcriterion}, the cap body can be illuminated by $2d$ directions. If, however, there is a spherical cap that is not separated by this system, it must intersect every coordinate greatsphere. For every $G_i$, this cap is either tangent to $G_i$ or has its centre on $G_i$. \begin{definition} A \textbf{$k$-tangent cap}, where $k \in \{2, \dots, d\}$, is a spherical cap $C \subset \mathbb{S}^{d-1}$ such that the centre of $C$ lies on the intersection of $d-k$ coordinate greatspheres, and $C$ is tangent to each of the remaining $k$ coordinate greatspheres. \end{definition} Note that $k \geq 2$, since a 1-tangent cap is a hemisphere, and the respective cap body is an unbounded cylinder that does not satisfy our definition of a convex body. Suppose a $k$-tangent cap $C$ has spherical radius $r_c$ and its centre lies at a point $\boldsymbol{u}_c$ with coordinates $(x_1, \dots, x_d)$. Since our cap body is unconditional, there are also caps congruent to the cap $C$ with centre coordinates $(\pm x_1, \dots, \pm x_d)$. From this set of caps we can pick the cap with non-negative centre coordinates, so we will assume $x_i \geq 0$. If the cap's centre is on the greatsphere $G_i$, then $x_i=0$.
If, however, the cap is tangent to a greatsphere $G_i$, its spherical radius $r_c$ is equal to $\pi/2 - \angle(\boldsymbol{u}_c, \boldsymbol{e}_i)$, the angle between $\boldsymbol{u}_c$ and the hyperplane $H_{\boldsymbol{e}_i,0}$. Hence, $\sin r_c = \cos (\angle(\boldsymbol{u}_c, \boldsymbol{e}_i)) = x_i$. Without loss of generality suppose that the cap in question is tangent to the greatspheres $G_1, \dots, G_k$, and its centre lies on the greatspheres $G_{k+1}, \dots, G_d$. Hence, the coordinates of the cap centre $\boldsymbol{u}_c$ are $(\underbrace{\sin r_c, \dots, \sin r_c}_k , \underbrace{0, \dots, 0 }_{d-k})$. Since $\sum_{i=1}^{d} x_i^2 = 1$, it follows that $r_c = \arcsin \frac{1}{\sqrt{k}}$. Suppose there are at least two distinct sets of $k$-tangent caps on the sphere: a set of $k_1$-tangent caps with spherical radii $r_1 = \arcsin \frac{1}{\sqrt{k_1}}$ and a set of $k_2$-tangent caps with radii $r_2 = \arcsin \frac{1}{\sqrt{k_2}}$, where $k_1, k_2 \in \{2,3,\dots, d\}$. Let $C_1, C_2$ be representatives of the sets of, respectively, $k_1$-tangent and $k_2$-tangent caps. We pick them so that their centres $\boldsymbol{c}_{1}=(x_1, \dots, x_d)$ and $\boldsymbol{c}_2 = (y_1, \dots, y_d)$ have non-negative coordinates. For the two sets of caps to form a packing, the interiors of $C_1$ and $C_2$ must not intersect. \begin{lemma}\label{lem:2setscomp} If a set of $k_1$-tangent caps forms a packing with a set of $k_2$-tangent caps on a unit sphere $\mathbb{S}^{d-1} \subset \mathbb{E}^d$, the following inequality holds: \begin{equation} k_1+k_2-d \leq \sqrt{(k_1-1)(k_2-1)} - 1 \end{equation} \end{lemma} \begin{proof}If the interiors of $C_1$ and $C_2$ do not intersect, then the spherical distance between $\boldsymbol{c}_{1}$ and $\boldsymbol{c}_{2}$ is at least the sum of the caps' spherical radii $r_1 + r_2$.
This condition is equivalent to $\cos \angle (\boldsymbol{c}_1, \boldsymbol{c}_2) \leq \cos(r_1 + r_2)$, as both $\angle (\boldsymbol{c}_1, \boldsymbol{c}_2)$ and $r_1 + r_2$ are less than $\pi$. Now, $\cos \angle (\boldsymbol{c}_1, \boldsymbol{c}_2) = \sum_{i=1}^{d} x_i y_i = m \frac{1}{\sqrt{k_1k_2}}$, where $m$ is the number of indices $i \in \{1, \dots, d\}$ with both $x_i \neq 0$ and $y_i \neq 0$. Since there are exactly $k_1$ non-zero $x$'s and $k_2$ non-zero $y$'s, it follows that $m \geq k_1+k_2-d $. Hence, \begin{equation} \begin{split} \frac{k_1+k_2-d}{\sqrt{k_1k_2}} \leq m \frac{1}{\sqrt{k_1k_2}} \leq \cos(r_1 + r_2) \\ = \cos \left( \arcsin\left( \frac{1}{\sqrt{k_1}}\right) + \arcsin\left( \frac{1}{\sqrt{k_2}}\right) \right) = \frac{\sqrt{k_1-1}\sqrt{k_2-1} - 1}{\sqrt{k_1k_2}} \end{split} \end{equation} \end{proof} This condition is based on the optimal case of $m=k_1+k_2-d$, so it is necessary, but not sufficient. \section{Proof of Theorem \ref{thm:S3}} Using Lemma \ref{lem:2setscomp}, we can classify the unconditional cap bodies in $\mathbb{E}^4$ that are not illuminated by the eight directions $\pm \boldsymbol{e}_i, i \in \{1, \dots, 4\}$, based on the $k$-tangent cap configuration. For each possible configuration we will demonstrate a system of four 2-greatspheres that separates the $k$-tangent caps and every other cap that forms a packing with the $k$-tangent caps. Consider an arbitrary greatsphere $G = \mathbb{S}^{d-1} \cap H_{\boldsymbol{u}, 0}$ and two spherical caps $C_1, C_2$ with centres $\boldsymbol{o}_1$, $\boldsymbol{o}_2$ and spherical radii $r_1, r_2$, such that $\boldsymbol{o}_1, \boldsymbol{o}_2, \boldsymbol{u} \in \mathbb{S}^{d-1}$ and $r_1, r_2 \in (0, \pi/2)$. The cap $C_1$ is not separated by the greatsphere $G$ if and only if the spherical distance between $\boldsymbol{u}$ and $\boldsymbol{o}_1$ lies in the range $\left[ \pi/2 - r_1, \pi/2 + r_1 \right]$.
This condition can be rewritten as \begin{equation} \label{eq:capgsp} \langle \boldsymbol{u}, \boldsymbol{o}_1 \rangle \in \left[-\sin r_1, \sin r_1 \right] \end{equation} Caps $C_1, C_2$ form a packing if and only if the spherical distance between their centres $\boldsymbol{o}_1, \boldsymbol{o}_2$ is no less than the sum of the radii $r_1 + r_2$. This condition is equivalent to \begin{equation}\label{eq:capcap} \langle \boldsymbol{o}_1, \boldsymbol{o}_2 \rangle \leq \cos(r_1 + r_2) \end{equation} Consider two sets of $k_1$-tangent and $k_2$-tangent caps forming a packing on $\mathbb{S}^3 \subset \mathbb{E}^4$. Using Lemma \ref{lem:2setscomp} and cross-checking all 6 pairs of $k_1, k_2 \in \left\{ 2,3,4 \right\}$ shows that there is only one case with multiple $k$-tangent cap sets to consider: two sets of 2-tangent caps such that each coordinate greatsphere is tangent to caps from only one set. Thus we have a total of 4 cases to consider. \subsection{Eight 2-tangent caps} Without loss of generality let our two sets of 2-tangent caps have centre coordinates $\left(\pm \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2}, 0, 0 \right)$ and $\left(0,0,\pm \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2} \right)$. We will show that an arbitrary spherical cap packing that contains these 2-tangent caps can be separated by the four greatspheres with normal vectors $\left(\frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2}, 0, 0 \right)$ and $\left(0,0, \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2} \right)$. These greatspheres are pairwise orthogonal, so any cap that is not separated by them has to have a radius no less than $\arcsin (1/\sqrt4) = \pi/6$, the inradius of a spherical orthant on $\mathbb{S}^3$. We are looking for ``stranded'' points, the points with maximum spherical distance to the nearest $k$-tangent cap. If this distance is less than $\pi/6$, then we cannot fit a cap of radius $\pi/6$ on the sphere around the initial $k$-tangent cap construction, and hence no other cap can intersect all four greatspheres.
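The cross-check of Lemma \ref{lem:2setscomp} over all six unordered pairs $k_1 \leq k_2$ in $\{2,3,4\}$, mentioned above, is straightforward to reproduce numerically; a sketch in Python:

```python
import math
from itertools import combinations_with_replacement

d = 4
# pairs (k1, k2) passing the necessary condition of the lemma:
#   k1 + k2 - d <= sqrt((k1 - 1)(k2 - 1)) - 1
compatible = [
    (k1, k2)
    for k1, k2 in combinations_with_replacement(range(2, d + 1), 2)
    if k1 + k2 - d <= math.sqrt((k1 - 1) * (k2 - 1)) - 1 + 1e-9
]
assert compatible == [(2, 2)]
```

Only the pair $(2,2)$ survives (with equality, forcing $m = 0$, i.e. disjoint supporting coordinate sets), matching the single mixed case listed above.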
This technique is helpful in the first two cases. Consider a countable packing of caps $\left\{C_i \mid i \in I \right \}$ on the sphere $\mathbb{S}^{d-1}$. For every point $\boldsymbol{p} \in \mathbb{S}^{d-1}$ and each cap $C_j, j \in I$ there is a non-negative spherical distance $d(\boldsymbol{p}, C_j)$ between the point and the cap. For each cap $C_j$ we can construct its spherical Voronoi cell $V_j$ (see Fig. \ref{voronoi}), a closed set of all the points on the sphere for which $C_j$ is the nearest cap, or one of the nearest caps: $$V_j = \left\{ \boldsymbol{x} \in \mathbb{S}^{d-1} \mid d(\boldsymbol{x}, C_j) \leq d(\boldsymbol{x}, C_i) \mbox{ for any } i \in I \right\} $$ \begin{figure}[h] \centering \includegraphics[width=5cm, keepaspectratio]{CapVoronoi} \caption{Voronoi cells of three caps on $\mathbb{S}^2$} \label{voronoi} \end{figure} All eight 2-tangent caps are images of one cap under the symmetry group that consists of the reflections about the coordinate hyperplanes and the mirror symmetry that maps $(x_1, x_2, x_3, x_4)$ to $(x_3, x_4, x_1, x_2)$, so all the Voronoi cells are congruent and coincide with the Voronoi cells of the cap centres. Any ``stranded'' point has to lie on the boundary of a cell; otherwise its distance to the nearest cap could be increased by moving the point slightly towards the boundary. Since the Voronoi cells of all eight 2-tangent caps are congruent, we can consider an arbitrary cap, say the one with centre $\left(\frac{1}{\sqrt2}, \frac{1}{\sqrt2}, 0, 0 \right)$. A point $\boldsymbol{p} = (x_1, x_2, x_3, x_4)$ in its Voronoi cell is at least as close to $\left(\frac{1}{\sqrt2}, \frac{1}{\sqrt2}, 0, 0 \right)$ as to any of the points $\left(\pm \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2}, 0, 0 \right)$ or any of the points $\left(0, 0, \pm \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2} \right)$. It also, obviously, lies on $\mathbb{S}^3$.
Writing down these conditions via inner products yields the following system: \begin{equation} \begin{cases} \label{2t_x8_voron} x_1 + x_2 \geq \pm x_3 \pm x_4 \\ x_1 + x_2 \geq \pm x_1 \pm x_2 \\ \sum_{j=1}^{4} x_j^2 = 1 \end{cases} \end{equation} Keeping these conditions in mind, we want to maximize the angle between $\left(\frac{1}{\sqrt2}, \frac{1}{\sqrt2}, 0, 0 \right)$ and $(x_1, x_2, x_3, x_4)$, i.e., minimize the sum $x_1 + x_2$. The inequality $x_1 + x_2 \geq \pm x_1 \pm x_2$ yields $x_1, x_2 \geq 0$. Suppose $x_1+x_2 = A$. Then $2A^2 \geq (x_1+x_2)^2 + (|x_3| + |x_4|)^2 = 1 + 2x_1x_2 + 2 |x_3x_4| \geq 1$, so $A \geq \frac{1}{\sqrt2}$, with equality attained, for example, at the point $\left(\frac{1}{\sqrt2}, 0, \frac{1}{\sqrt2}, 0 \right)$. The spherical distance between this point and the 2-tangent cap with centre $\left(\frac{1}{\sqrt2}, \frac{1}{\sqrt2}, 0, 0 \right)$ equals $\pi/12$, which is less than $\pi/6$. So no other cap can intersect all four greatspheres. \subsection{Sixteen 4-tangent caps} In this case, the 4-tangent caps have centre coordinates $(\pm \frac12, \pm \frac12, \pm \frac12, \pm \frac12)$ and spherical radii $r = \arcsin (1/\sqrt4) = \pi/6$. We will show that this configuration can be separated by the following four 2-greatspheres: $$F_{1,2} = \left\{\boldsymbol{x} \in \mathbb{S}^3 \mid \left\langle \boldsymbol{x},\left( \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2}, 0, 0 \right) \right\rangle = 0 \right\} $$ $$F_{3,4} = \left\{\boldsymbol{x} \in \mathbb{S}^3 \mid \left\langle \boldsymbol{x},\left( 0,0, \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2} \right) \right\rangle = 0 \right\} $$ In this case, and in the further cases, one can verify that the greatspheres separate the 4-tangent caps using equation (\ref{eq:capgsp}). Next, we need to investigate all the possible caps of radius at least $\pi/6$ that would form a packing on the sphere.
Once again, we can pick any cap, say the cap $C_1$ with centre at $O_1 = (\frac12, \frac12, \frac12, \frac12)$; the other fifteen 4-tangent caps are images of this cap under the group generated by the reflections about the coordinate hyperplanes. Hence the Voronoi cells of all the 4-tangent caps are congruent and coincide with the Voronoi cells of the cap centres. Here are the equations that characterize the Voronoi cell of $C_1$: \begin{equation} \begin{cases} x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1 \\ x_1 + x_2 + x_3 + x_4 \geq \pm x_1 \pm x_2 \pm x_3 \pm x_4 \end{cases} \end{equation} These conditions describe the part of $\mathbb{S}^3$ lying in the orthant with non-negative coordinates. Within this closed orthant we need to find the point that is farthest away from $(\frac12, \frac12, \frac12, \frac12)$, i.e. a point with minimum $x_1 + x_2 +x_3 +x_4$. \begin{equation} (x_1 + x_2 + x_3 + x_4)^2 = 1 + 2\sum_{i>j}x_ix_j \geq 1 \end{equation} So $x_1 + x_2 + x_3 + x_4 \geq 1$, with equality achieved at the points $\boldsymbol{e}_i$, $i \in \{1, \dots, 4\}$. These points are each $\pi/3$ away from $O_1$, so the distance between $C_1$ and $\boldsymbol{e}_i$ is $\pi/6$. No other point in the orthant attains equality, since all the other points have at least two positive coordinates. The radius of any cap with its centre not at $\pm \boldsymbol{e}_i$ would be strictly less than $\pi/6$, and it would not intersect all four greatspheres $F_1, \dots, F_4$. Caps with centres $(\pm 1, 0,0,0)$ and $(0, \pm 1,0,0)$ are separated by $F_1$, and caps with centres $(0,0,\pm 1,0)$ and $(0, 0,0, \pm 1)$ are separated by $F_3$. Hence we have a system of four greatspheres with mutually orthogonal normal vectors that separate all the possible caps of radius at least $\pi/6$, and there is no larger cap. All the caps with radii less than $\pi/6$ cannot intersect all four mutually orthogonal greatspheres simultaneously, and will also be separated.
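Both claims of this case, the separation of the sixteen 4-tangent caps by $F_1, \dots, F_4$ and the exact $\pi/6$ distance from the stranded points $\pm\boldsymbol{e}_i$ to the nearest cap, can be confirmed with a short numerical sketch in Python:

```python
import math
from itertools import product

s = 1 / math.sqrt(2)
normals = [(s, s, 0, 0), (s, -s, 0, 0), (0, 0, s, s), (0, 0, s, -s)]
r = math.pi / 6                     # spherical radius of a 4-tangent cap
centers = [tuple(sig / 2 for sig in signs)
           for signs in product((1, -1), repeat=4)]   # 16 cap centres

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# every 4-tangent cap is strictly separated by at least one greatsphere F_j,
# i.e. |<n, c>| > sin(r) for some normal n (the criterion of eq. capgsp)
assert all(any(abs(dot(n, c)) > math.sin(r) for n in normals) for c in centers)

# the stranded point e_1 is exactly pi/6 away from the nearest cap
e1 = (1, 0, 0, 0)
nearest = min(math.acos(dot(e1, c)) for c in centers) - r
assert abs(nearest - math.pi / 6) < 1e-9
```

Every centre has first two coordinates $\pm\frac12$, so already $F_1$ or $F_2$ separates it with $|\langle \boldsymbol{n}, \boldsymbol{c}\rangle| = 1/\sqrt2 > \sin(\pi/6)$.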
\subsection{Four 2-tangent caps} Here our cap configuration has four 2-tangent caps $C_1, \dots, C_4$ with radii $\pi/4$. Without loss of generality, the cap centre coordinates are $\left( \pm \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2}, 0, 0 \right)$. To separate these caps we will use the greatspheres $F_{1,2} = \left\{\boldsymbol{x} \in \mathbb{S}^3 \mid \left\langle \boldsymbol{x},\left( \frac{1}{\sqrt2}, \pm \frac{1}{\sqrt2}, 0, 0 \right) \right\rangle = 0 \right\} $ and the coordinate greatspheres $G_3, G_4$. The greatspheres $F_1$ and $F_2$ separate the 2-tangent caps. Now we just have to make sure that all the other caps that would still form a packing with these caps are separated by the greatspheres $\left\{F_1, F_2, G_3, G_4 \right\}$. Suppose some cap $C_0$ has nonempty intersection with every greatsphere $F_1, F_2, G_3, G_4$. Let this cap have spherical radius $r_0$ and centre coordinates $(x_1, x_2, x_3, x_4)$. We choose $C_0$ so that its centre coordinates are non-negative. Cap $C_0$ is not $k$-tangent, yet it has nonempty intersection with $G_3$ and $G_4$. So it has to have empty intersection with either $G_1$ or $G_2$; without loss of generality, suppose it is $G_1$. Equation (\ref{eq:capgsp}) then yields $x_1 > \sin r_0$. We will also use the fact that $C_0$ forms a packing with the 2-tangent caps $C_1, \dots, C_4$. Writing down equation (\ref{eq:capcap}) leads to \begin{equation} \label{eq:xx} \pm \frac{x_1}{\sqrt2} \pm \frac{x_2}{\sqrt2} \leq \cos \left(r_0 + \frac{\pi}{4} \right) \end{equation} Taking the non-negative signs at $x_1, x_2$ and simplifying the inequality (\ref{eq:xx}), we get $x_1 + x_2 \leq \cos r_0 - \sin r_0$. Together with the fact that $x_1 > \sin r_0$, we can state that \begin{equation} 0 \leq x_2 < \cos r_0 - 2 \sin r_0 \end{equation} Since $\cos r_0 - 2 \sin r_0$ decreases monotonically on the allowed range of $r_0$, it follows that $r_0 < \theta$, where $\cos \theta - 2 \sin \theta = 0$.
Then $r_0 < \theta = \arcsin (1/\sqrt5) < \pi/6$, and a cap of radius $r_0$ cannot intersect all four mutually orthogonal greatspheres. So it has to be separated by at least one of $F_1, F_2, G_3, G_4$. \subsection{Eight 3-tangent caps} For this case, take a 3-tangent cap system $C_1, \dots, C_8$ with centre coordinates $\left( \pm \frac{1}{\sqrt3}, \pm \frac{1}{\sqrt3}, \pm \frac{1}{\sqrt3}, 0 \right)$ and all radii equal to $r_p = \arcsin \frac{1}{\sqrt3}$. To separate these caps, we will use the same 2-greatspheres we used in the previous case: $F_1, F_2, G_3, G_4$. Repeated use of equation (\ref{eq:capgsp}) shows that this system does separate the 3-tangent caps $C_1, \dots, C_8$. Suppose there is a cap $C_0$ with radius $r_0$ and centre coordinates $(x_1, x_2, x_3, x_4)$ that is not separated by our 2-greatsphere system. Again, we pick it so that $x_i \geq 0$. Here are the conditions such a cap has to satisfy: \begin{itemize} \item It has to form a packing with the 3-tangent caps. This condition is equivalent to \begin{equation} \pm \frac{x_1}{\sqrt3} \pm \frac{x_2}{\sqrt3}\pm \frac{x_3}{\sqrt3} \leq \cos \left(r_0 + r_p \right) \end{equation} Simplifying the right-hand side and taking all the signs at $x_i$ to be positive for the strongest statement, we get \begin{equation}\label{eq:wasone} x_1 + x_2 + x_3 \leq \sqrt2 \cos r_0 - \sin r_0 \end{equation} \item The cap $C_0$ is not 3-tangent, so at least one of the intersections $C_0 \cap G_1$, $C_0 \cap G_2$ is empty. Without loss of generality, suppose the cap does not intersect $G_1$.
Hence \begin{equation}\label{eq:wastwo} x_1 > \sin r_0,~x_2 = 0 \mbox{ or } x_2 \geq \sin r_0 \end{equation} \item From (\ref{eq:wasone}) and (\ref{eq:wastwo}) we get $0 \leq x_2 + x_3 < \sqrt2 \cos r_0 - 2\sin r_0$, which yields the following estimate on $r_0$: \begin{equation}\label{eq:was3} \sqrt2 \cos r_0 - 2\sin r_0 > 0 \Rightarrow r_0 < \arcsin (1/\sqrt3) \end{equation} \item $C_0$ intersects $G_3$ and $G_4$, so \begin{equation}\label{eq:was4} x_3 = 0 \mbox{ or } x_3 = \sin r_0,~ x_4 = 0 \mbox{ or } x_4 = \sin r_0 \end{equation} \item Cap $C_0$ intersects $F_1$ and $F_2$. Writing down the corresponding equation (\ref{eq:capgsp}) results in \begin{equation}\label{eq:was5} x_1 + x_2 \leq \sqrt2 \sin r_0 \end{equation} \end{itemize} Now we will show that a cap cannot satisfy all these conditions simultaneously. Suppose $x_2 \neq 0$. Then $x_2 \geq \sin r_0$, which, together with (\ref{eq:was5}), yields $x_1 \leq (\sqrt2 -1) \sin r_0 < \sin r_0$, contradicting (\ref{eq:wastwo}). So $x_2 = 0$. Suppose $x_3 \neq 0$. Then $x_3 = \sin r_0$, and from (\ref{eq:wasone}) and (\ref{eq:wastwo}) we get $\sin r_0 < x_1 \leq \sqrt2 \cos r_0 - 2 \sin r_0$. It follows that $\sqrt2 \cos r_0 - 3 \sin r_0 >0$. Then $r_0 < \arcsin (\sqrt2 / \sqrt{11}) < \pi/6$, and the cap $C_0$ cannot intersect four pairwise orthogonal 2-greatspheres. So $x_3 = 0$. Since the cap centre lies on $\mathbb{S}^3$, we get $x_1^2 + x_4^2 = 1$, and from (\ref{eq:was3}) and (\ref{eq:was5}) we get $x_1 < \sqrt2/\sqrt3$. Hence $x_4=\sin r_0$: the only other option is $x_4 = 0$, which would mean $x_1 = 1$, contradicting $x_1 < \sqrt2/\sqrt3$. But then $x_4^2 = 1 - x_1^2 > 1/3$, which is incompatible with $r_0 < \arcsin (1/\sqrt3)$. Hence there cannot be a cap $C_0$ that is distinct from the system of 3-tangent caps, forms a packing with those caps, and is not separated by the 2-greatspheres $F_1, F_2, G_3, G_4$.
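The numerical constants used in this elimination argument, as well as the separation of the eight 3-tangent caps by $F_1$ and $F_2$, can be double-checked with a short sketch in Python:

```python
import math
from itertools import product

s2, s3 = 1 / math.sqrt(2), 1 / math.sqrt(3)
F1, F2 = (s2, s2, 0, 0), (s2, -s2, 0, 0)   # normals of the greatspheres
r_p = math.asin(s3)                        # radius of a 3-tangent cap

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

centers = [(a * s3, b * s3, c * s3, 0) for a, b, c in product((1, -1), repeat=3)]
# each 3-tangent cap is strictly separated by F_1 or F_2:
# same-signed first two coordinates give |<F1, c>| = 2/sqrt(6) > sin(r_p),
# opposite-signed ones give the same value against F_2
assert all(max(abs(dot(F1, c)), abs(dot(F2, c))) > math.sin(r_p) for c in centers)

# constants from the elimination: sqrt(2)cos t = 2 sin t  <=>  sin t = 1/sqrt(3)
assert math.isclose(math.asin(1 / math.sqrt(3)), math.atan(math.sqrt(2) / 2))
# and the bound from sqrt(2)cos t = 3 sin t lies below pi/6
assert math.asin(math.sqrt(2) / math.sqrt(11)) < math.pi / 6
```

The checks are purely numerical confirmations of the closed-form values; they are not part of the proof.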
We have now shown that for any possible unconditional packing of spherical caps on $\mathbb{S}^3$ there are four pairwise orthogonal 2-greatspheres that separate every cap in the packing. That concludes the proof of Theorem \ref{thm:S3}. \section{Concluding Remarks} So far we have not found a cap body in $\mathbb{E}^3$ with an illumination number higher than 6. We conjecture that 6 is indeed the upper bound for the illumination number of three-dimensional cap bodies. The illumination estimates we have obtained for cap bodies with symmetry are sharp. However, we have not completely characterized the centrally symmetric cap bodies that require precisely 6 illumination directions in $\mathbb{E}^3$, nor the unconditional cap bodies with illumination number 8 in $\mathbb{E}^4$. \bigskip On behalf of all authors, the corresponding author states that there is no conflict of interest. \bibliographystyle{plain}
https://arxiv.org/abs/1406.7403
Tautological classes on the moduli space of hyperelliptic curves with rational tails
We study tautological classes on the moduli space of stable $n$-pointed hyperelliptic curves of genus $g$ with rational tails. Our result gives a complete description of tautological relations. The method is based on the approach of Yin in comparing tautological classes on the moduli of curves and the universal Jacobian. It is proven that all relations come from the Jacobian side. The intersection pairings are shown to be perfect in all degrees. We show that the tautological algebra coincides with its image in cohomology via the cycle class map. The latter is identified with monodromy invariant classes in cohomology. The connection with recent conjectures by Pixton is also discussed.
\section*{Introduction} In this article we study tautological classes on the moduli space $\mathcal{H}_{g,n}^{rt}$ of stable $n$-pointed hyperelliptic curves of genus $g$ with rational tails. Tautological classes are natural algebraic cycles reflecting the nature of the generic object parameterized by the moduli space. The set of generators consists of an explicit collection of cycles. In particular, tautological groups are finite dimensional vector spaces. This remarkably distinguishes the tautological ring from the intractable space of all algebraic cycles. A basic question regarding tautological algebras is to give a meaningful class of tautological relations. Our strategy in studying tautological classes on $\mathcal{H}_{g,n}^{rt}$ is to look at the fibers of the projection $\pi:\mathcal{H}_{g,n}^{rt} \rightarrow \mathcal{H}_g$. The reduced fiber of $\pi$ over a moduli point $[X] \in \mathcal{H}_g$ corresponding to a smooth hyperelliptic curve $X$ is the Fulton-MacPherson compactification $X[n]$ of the configuration space of $n$ points on $X$. There is a natural way to view the tautological ring of $X[n]$ as an algebra over the tautological ring $R^*(X^n)$ of the Cartesian product $X^n$ of $X$. Basic relations among tautological classes on $X^n$ are obtained. These relations are divided into three parts: \begin{itemize} \item The vanishing of the Faber-Pandharipande cycle, \item The vanishing of the Gross-Schoen cycle, \item A relation of degree $g+1$ involving $2g+2$ points. \end{itemize} We show that all these relations can be obtained from relations on the universal Jacobian $\mathcal{J}_g$ over the space $\mathcal{H}_g$. Our argument in proving the first two vanishings depends strongly on the fact that we are working with hyperelliptic curves. The last relation has a different nature and holds over any family of smooth curves of genus $g$. We will give two independent arguments to prove this relation.
Yin has pointed out that the degree $g+1$ relation can be obtained from the vanishing of a certain Chow motive as well. From our results we get a complete description of tautological relations on the moduli space $\mathcal{H}_{g,n}^{rt}$. We will see that the structure of the tautological algebra is determined by studying tautological classes on the fiber $X[n]$: \begin{thm}\label{X} Let $X$ be a fixed hyperelliptic curve of genus $g$. The tautological ring of the moduli space $\mathcal{H}_{g,n}^{rt}$ is naturally isomorphic to the tautological ring of the fiber $X[n]$. In particular, the intersection pairings are perfect in all degrees. \end{thm} Everything mentioned above concerns tautological classes in Chow. The Gorenstein property of $R^*(\mathcal{H}_{g,n}^{rt})$ implies the same results in cohomology. This shows that there is no difference between Chow and cohomology as long as we restrict to tautological classes. Using a result of Petersen and Tommasi, which was our motivation for this project, we prove the following: \begin{cor}\label{M} The cycle class map induces an isomorphism between the tautological ring of the moduli space $\mathcal{H}_{g,n}^{rt}$ in Chow and monodromy invariant classes in cohomology. \end{cor} At the end we discuss the connection between the relations on the space $\mathcal{H}_{g,n}^{rt}$ and Pixton's relations on $\overline{M}_{g,n}$. \begin{con} We consider algebraic cycles modulo rational equivalence. All Chow groups are taken with $\mathbb{Q}$-coefficients. \end{con} \vspace{+10pt} \noindent{\bf Acknowledgments.} I am grateful to all my colleagues who made suggestions and corrections on the preliminary version of this note. I would like to thank Carel Faber, Gerard van der Geer, Richard Hain, Robin de Jong, Nicola Pagani, Aaron Pixton and Orsola Tommasi for the valuable discussions and their comments. 
Thanks to Felix Janda for sending me the notes on Pixton's relations on products of the universal curve, answering many questions in that direction, and for his comments. Special thanks are due to Qizheng Yin for explaining several aspects of the theory developed in his thesis, and for his comments and corrections. Several parts of this research were carried out during my stay at the Max-Planck-Institut f\"ur Mathematik in Bonn in 2013. This research was completed in the group of Sergey Shadrin at the KdV Instituut voor Wiskunde at the University of Amsterdam. Thanks to Sergey Shadrin for his interest in this project and his support. I would like to thank both institutes for their support. \section{Tautological classes on the space of hyperelliptic curves} Let $\mathcal{H}_{g,n}^{rt}$ be the space of stable $n$-pointed hyperelliptic curves of genus $g$ with rational tails. It is a quasi-projective variety of dimension $2g-1+n$. It parameterizes objects of the form $(C;x_1 , \dots , x_n)$, where $C$ is a stable hyperelliptic curve with $n$ distinct smooth points $x_i$ for $i=1, \dots , n$. We assume that $C$ is a reduced nodal curve of arithmetic genus $g$ with exactly one component of genus $g$. For each rational component of the curve the markings and nodes are called special points. By the stability condition we require that all rational components have at least 3 special points. As a result each object of the corresponding moduli problem has finitely many automorphisms and the resulting stack is of Deligne-Mumford type. Tautological classes on $\mathcal{H}_{g,n}^{rt}$ are defined as natural algebraic cycles on the moduli space. Let $\pi:\mathcal{C} \rightarrow \mathcal{H}_g$ be the universal hyperelliptic curve of genus $g$ and denote by $\omega$ its relative dualizing sheaf. Its class in the Picard group of $\mathcal{C}$ is denoted by $K$. Denote by $\mathcal{C}^n$ the $n$-fold fiber product of $\mathcal{C}$ over $\mathcal{H}_g$.
We define $K_i:=\pi_i^*(K) \in A^1(\mathcal{C}^n)$, where $\pi_i:\mathcal{C}^n \rightarrow \mathcal{C}$ is the projection onto the $i^{th}$ factor for every $1 \leq i \leq n$. Its pull-back via the contraction map $\mathcal{H}_{g,n}^{rt} \rightarrow \mathcal{C}^n$ is denoted by the same letter. Tautological classes on $\mathcal{H}_{g,n}^{rt}$ come from the classes $K_i$ and those supported on the boundary of the partial compactification $\mathcal{H}_{g,n}^{rt}$. Recall that for each subset $I$ of the marking set $\{1, \dots , n\}$ having at least two elements there is a boundary divisor class $D_I$ in Pic$(\mathcal{H}_{g,n}^{rt})$. It corresponds to those nodal curves having two components. According to our definition one component is of genus $g$ and the other component is rational. The index set $I$ refers to the markings on the rational component. We now define the tautological ring of the moduli space: \begin{dfn} The tautological ring of $\mathcal{H}_{g,n}^{rt}$ is the $\mathbb{Q}$-subalgebra of the rational Chow ring $A^*(\mathcal{H}_{g,n}^{rt})$ of $\mathcal{H}_{g,n}^{rt}$ generated by the divisor classes $K_i$ for $i=1, \dots, n$ and the boundary divisors $D_I$ for subsets $I \subseteq \{1, \dots, n\}$ with at least two elements. \end{dfn} \begin{rem} There is another, equivalent, way to define tautological classes. We consider the system of Chow rings $A^*(\mathcal{H}_{g,n}^{rt})$ for all stable $(g,n)$. The system of tautological rings $R^*(\mathcal{H}_{g,n}^{rt})$ is defined as the smallest collection of $\mathbb{Q}$-subalgebras of the Chow rings $A^*(\mathcal{H}_{g,n}^{rt})$ having the identity element and stable under all natural maps among these spaces. It is straightforward to see that all classes we defined above belong to the tautological ring with this definition, and that they are generators.
\end{rem} \begin{rem} Recall that the line bundle $\mathbb{L}_i$, whose fiber over the moduli point $(C;x_1, \dots, x_n)$ is the cotangent space of $C$ at $x_i$, gives tautological classes. Its first Chern class is called the $\psi_i$-class. It is related to the classes considered before via the following equality: $$\psi_i=K_i+\sum_{I \ni i} D_I.$$ Other natural classes on the moduli space $\mathcal{H}_g$ are the kappa classes. Recall that the kappa class $\kappa_i$ is defined as $\pi_*(K^{i+1})$. Notice that the kappa class $\kappa_i$ vanishes when $i>0$ since $\mathcal{H}_g$ has trivial Chow groups. \end{rem} We study the connection between tautological classes on moduli of curves and the universal Jacobian. There are many natural maps from a curve into its Jacobian. We find it more convenient to use a Weierstra{\ss} point on the curve to define such a map. We work with the moduli space $\mathcal{W}_g$ of Weierstra{\ss} pointed hyperelliptic curves of genus $g$. This space parameterizes objects of the form $(C,p)$, where $C$ is a hyperelliptic curve of genus $g$ and $p$ is a Weierstra\ss \ point on $C$. The universal family $\pi:\mathcal{C} \rightarrow \mathcal{W}_g$ admits a section $s: \mathcal{W}_g \rightarrow \mathcal{C}$. It associates the Weierstra{\ss} point $p \in C$ to the pair $(C,p)$. The space $\mathcal{W}_g$ is a finite cover of $\mathcal{H}_g$ of degree $2g+2$. In a similar way we consider a pointed version of this moduli space and define the space $\mathcal{W}_{g,n}^{rt}$. In an analogous manner we define tautological rings of the moduli spaces parameterizing Weierstra\ss \ pointed curves. In each case there is a map to a moduli space of curves by ignoring the Weierstra\ss \ point. Tautological classes are defined as pull-backs of tautological classes on the moduli of curves via these maps. \section{Tautological classes on the universal Jacobian} In this section we review basic notions about tautological classes on the universal Jacobian.
Relations among these classes give interesting results on moduli of curves. The tautological ring of a fixed Jacobian variety is defined by Beauville. In \cite{B} he studies tautological classes on the Jacobian of a fixed curve under algebraic equivalence. The idea is to consider the class of a curve of genus $g$ inside its Jacobian and apply to it all natural operators induced by the group structure on the Jacobian and the intersection product in the Chow ring. He shows that the resulting algebra becomes stable under the Fourier transform. In fact, if one applies the Fourier transform to the class of the curve, all components in different degrees belong to the tautological algebra. Beauville shows that these components yield a set of generators with $g-1$ elements. When the curve admits a degree $d$ map into the projective line the tautological ring is generated by $d-1$ elements. Let $\pi: \mathcal{C} \rightarrow S$ be a family of smooth curves of genus $g > 0$ which admits a section $s: S \rightarrow \mathcal{C}$. Denote by $\mathcal{J}_g:=\text{Pic}^0(\mathcal{C}/S)$ the relative Picard scheme of divisors of degree zero. It is an abelian scheme over the base $S$ of relative dimension $g$. The section $s$ induces an injection $\iota: \mathcal{C} \rightarrow \mathcal{J}_g$ from $\mathcal{C}$ into the universal Jacobian $\mathcal{J}_g$. The geometric point $x$ on a curve $C$ is sent to the line bundle $\mathcal{O}_C(x-s)$ via the morphism $\iota$. The abelian scheme $\mathcal{J}_g$ is equipped with the Beauville decomposition defined in \cite{B}. Components of this decomposition are eigenspaces of the natural maps corresponding to multiplication by integers. More precisely, for an integer $k$ consider the associated endomorphism on $\mathcal{J}_g$. The subgroup $A^i_{(j)}(\mathcal{J}_g)$ is defined as all degree $i$ classes on which the morphism $k^*$ acts via multiplication by $k^{2i-j}$.
Equivalently, the action of the morphism $k_*$ on $A^i_{(j)}(\mathcal{J}_g)$ is multiplication by $k^{2g-2i+j}$. The Beauville decomposition has the following form: $$A^*(\mathcal{J}_g)=\oplus_{i,j} A_{(i,j)}(\mathcal{J}_g),$$ where $A_{(i,j)}(\mathcal{J}_g):=A_{(j)}^{\frac{i+j}{2}}(\mathcal{J}_g)$. Besides the intersection product on Chow groups there is another multiplication on $A^*(\mathcal{J}_g)$. The Pontryagin product $*$ is defined in terms of the addition $$\mu: \mathcal{J}_g \times_S \mathcal{J}_g \rightarrow \mathcal{J}_g$$ on $\mathcal{J}_g$. Let $\pi_1,\pi_2: \mathcal{J}_g \times_S \mathcal{J}_g \rightarrow \mathcal{J}_g$ be the natural projections and $x,y$ be elements of the Chow ring of $\mathcal{J}_g$. The Pontryagin product $x*y$ of $x$ and $y$ is defined as $\mu_*(\pi_1^* x \cdot \pi_2^* y)$. The universal theta divisor $\theta$ trivialized along the zero section is defined in the rational Picard group of $\mathcal{J}_g$. It defines a principal polarization on $\mathcal{J}_g$. The first Chern class $l$ of the Poincar{\'e} bundle $\mathcal{P}$ is defined as $\pi_1^* \theta+\pi_2^* \theta-\mu^* \theta$. The Fourier-Mukai transform $\mathcal{F}$ gives an isomorphism between $(A^* (\mathcal{J}_g),.)$ and $(A^*(\mathcal{J}_g),*)$. It is defined as follows: $$\mathcal{F}(x)=\pi_{2,*}(\pi_1^* x \cdot \exp(l)).$$ We now recall the definition of the tautological ring of $\mathcal{J}_g$ from \cite{Y1}. It is defined as the smallest $\mathbb{Q}$-subalgebra of the Chow ring $A^*(\mathcal{J}_g)$ which contains the class of $\mathcal{C}$ and is stable under the Fourier transform and all maps $k^*$ for integers $k$. It follows that for an integer $k$ it becomes stable under $k_*$ as well. From this definition we get infinitely many tautological classes. But one can see that the tautological algebra is finitely generated. In particular, it is finite dimensional in each degree.
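As an aside, the assertion that $\mathcal{F}$ interchanges the two products is part of the classical package of identities for the Fourier transform, due to Beauville for abelian varieties and extended to abelian schemes in the work of Deninger and Murre; we record a sketch of them here for later reference:
$$\mathcal{F}(x*y)=\mathcal{F}(x)\cdot \mathcal{F}(y), \qquad \mathcal{F}(x\cdot y)=(-1)^g\, \mathcal{F}(x)*\mathcal{F}(y), \qquad \mathcal{F}\circ \mathcal{F}=(-1)^g\,(-1)^*.$$
Moreover, $\mathcal{F}$ maps $A^i_{(j)}(\mathcal{J}_g)$ to $A^{g-i+j}_{(j)}(\mathcal{J}_g)$, so it preserves the second index of the Beauville decomposition.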
The generators are expressed in terms of the components of the curve class in the Beauville decomposition. Define the following classes: $$p_{i,j}:=\mathcal{F} \left(\theta^{\frac{j-i+2}{2}} \cdot [\mathcal{C}]_{(j)} \right) \in A_{(i,j)}(\mathcal{J}_g).$$ We have $p_{2,0}=-\theta$ and $p_{0,0}=g[\mathcal{J}_g]$. The class $p_{i,j}$ vanishes for $i<0$ or $j<0$ or $j>2g-2$. The tautological class $\Psi$ comes from the class $K$ of the relative dualizing sheaf $\omega_{\pi}$ of $\pi:\mathcal{C} \rightarrow S$. It is defined as $$\Psi:=s^*(K).$$ It is proven in \cite{Y1} that the tautological ring of $\mathcal{J}_g$ is generated by the classes $p_{i,j}$ and $\Psi$. A crucial feature of the tautological ring of the universal Jacobian is the Lefschetz decomposition. In the classical case the $\mathfrak{sl}_2$ action on Chow groups of an abelian variety was studied by K{\"u}nnemann \cite{KU}. Polishchuk \cite{PO1} has studied the $\mathfrak{sl}_2$ action for abelian schemes. We follow the standard convention that $\mathfrak{sl}_2$ is generated by elements $e,f,h$ satisfying: $$[e,f]=h, \qquad [h,e]=2e, \qquad [h,f]=-2f.$$ In this notation the action of $\mathfrak{sl}_2$ on Chow groups of $\mathcal{J}_g$ is defined as $$e: A_{(j)}^i(\mathcal{J}_g) \rightarrow A_{(j)}^{i+1}(\mathcal{J}_g) \qquad x \mapsto -\theta \cdot x,$$ $$f: A_{(j)}^i(\mathcal{J}_g) \rightarrow A_{(j)}^{i-1}(\mathcal{J}_g) \qquad x \mapsto -\frac{\theta^{g-1}}{(g-1)!} * x,$$ $$h: A_{(j)}^i(\mathcal{J}_g) \rightarrow A_{(j)}^i(\mathcal{J}_g) \qquad x \mapsto -(2i-j-g) x.$$ The operators $e,h$ have simple forms. The operator $f$ is given by the following differential operator: $$\mathcal{D}=\frac{1}{2} \sum_{i,j,k,l} \left( \Psi p_{i-1,j-1}p_{k-1,l-1}- \binom{i+k-2}{i-1}p_{i+k-2,j+l} \right) \partial p_{i,j} \partial p_{k,l}+\sum_{i,j} p_{i-2,j}\partial p_{i,j}.$$ The differential operator $\mathcal{D}$ is a powerful tool for producing tautological relations.
The idea is to start from obvious relations and apply the operator $\mathcal{D}$ to them several times. This procedure yields a large class of tautological relations. A surprising fact is that one can get highly non-trivial relations from this method. Yin \cite{Y1} studies tautological classes on the universal Jacobian in his recent thesis. There are basic relations among tautological classes coming from the $\mathfrak{sl}_2$ action on its Chow ring. Interpreting these relations on the Jacobian side yields an interesting class of relations on moduli of curves. All tautological relations on the universal curve $\mathcal{C}_g$ for $g \leq 19$ and on $M_g$ for $g \leq 23$ are recovered using this method. As another application we will see that all components of the curve class in positive degrees vanish for families of hyperelliptic curves. These vanishings will be used in this article to find all tautological relations on $\mathcal{H}_{g,n}^{rt}$. Let $\pi:\mathcal{C} \rightarrow \mathcal{W}_g$ be the universal curve over the space $\mathcal{W}_g$ of Weierstra{\ss} pointed hyperelliptic curves of genus $g$. Geometric points of $\mathcal{C}$ correspond to objects of the form $(C,p,x)$, where $C$ is a smooth hyperelliptic curve of genus $g$, $p$ is a Weierstra{\ss} point of $C$ and $x \in C$ is arbitrary. The degree zero divisor $x-p$ belongs to the Jacobian of $C$. This association defines the map $$\phi: \mathcal{C} \rightarrow \mathcal{J}_g.$$ The $i^{th}$ component of the Beauville decomposition of the image of $\mathcal{C}$ under $\phi$ is denoted by $\mathcal{C}_{(i)}$ as usual. \begin{prop}\label{J} Let $\mathcal{C}$ and $\phi: \mathcal{C} \rightarrow \mathcal{J}_g$ be as above. The component $\mathcal{C}_{(i)}$ vanishes for all $i>0$. \end{prop} \begin{proof} The proof is known to experts. For example, in \cite{B} Beauville proves a similar statement when $C$ is a fixed hyperelliptic curve.
He shows that the component $C_{(i)}$ is algebraically equivalent to zero when $i>0$. Here we want to prove the same vanishings for families of hyperelliptic curves under rational equivalence. Notice that all components $\mathcal{C}_{(i)}$ vanish when $i$ is odd. To see this consider the Ceresa cycle $[\mathcal{C}]-(-1)^*[\mathcal{C}]$. This class is zero according to our definition of the morphism $\phi$: the hyperelliptic involution maps $\mathcal{C}$ onto itself, fixes the Weierstra{\ss} point and induces $(-1)$ on $\mathcal{J}_g$. This shows the vanishing of $\mathcal{C}_{(i)}$ for odd $i$. We now use this fact to show the vanishing of the other components with positive indices. Equivalently, we show that the class $p_{i+2,i}=\mathcal{F}(\mathcal{C}_{(i)})$ is zero when $i \geq 1$. The vanishing of $p_{3,1}$ is immediate since we know that $\mathcal{C}_{(1)}=0$; for the same reason the class $p_{1,1}=\mathcal{F}(\theta \cdot \mathcal{C}_{(1)})$ vanishes as well. Consider the following equation: $$0=\mathcal{D}(p_{3,1}p_{i+1,i-1})=\Psi p_{2,0} p_{i,i-2}-\binom{i+2}{2}p_{i+2,i}+p_{i+1,i-1}p_{1,1}+p_{3,1}p_{i-1,i-1}.$$ The class $\Psi$ vanishes since the Picard group of $\mathcal{W}_g$ is trivial, and we have just seen that $p_{1,1}=p_{3,1}=0$; hence the right-hand side reduces to $-\binom{i+2}{2}p_{i+2,i}$. This proves the claim since the coefficient of $p_{i+2,i}$ is not zero. \end{proof} \begin{rem} In \cite{B} Beauville shows that the tautological ring of the Jacobian of a fixed hyperelliptic curve of genus $g$ under algebraic equivalence is isomorphic to $\mathbb{Q}[\theta]/(\theta^{g+1})$. The vanishing proved above gives the same presentation for the tautological ring of $\mathcal{J}_g$ over $\mathcal{H}_g$ under rational equivalence. \end{rem} \begin{cor}\label{K} Let $\pi:\mathcal{C} \rightarrow \mathcal{W}_g$ and $s: \mathcal{W}_g \rightarrow \mathcal{C}$ be as before. We have the relation $K=(2g-2)s$ in Pic$(\mathcal{C})$. \end{cor} \begin{proof} Consider the map $\phi: \mathcal{C} \rightarrow \mathcal{J}_g$ defined before. We will show that the desired relation follows from the vanishing of the divisor $p_{1,1}$. We need to calculate the pull-back of $p_{1,1}$ to $\mathcal{C}$ via $\phi$.
The method for calculating the pull-back of tautological classes on the universal Jacobian to families of curves is explained in the thesis of Yin. We briefly recall the procedure from \cite{Y1}. Let $\pi_1,\pi_2$ be the projections onto the factors of $\mathcal{C}^2:=\mathcal{C} \times_{\mathcal{W}_g} \mathcal{C}$. The section $s: \mathcal{W}_g \rightarrow \mathcal{C}$ induces two sections $s_1,s_2: \mathcal{W}_g \rightarrow \mathcal{C}^2$. Denote by $d_{1,2}$ the class of the diagonal inside $\mathcal{C}^2$. According to the definition the class $\phi^* (p_{1,1})$ is equal to the degree one component of the following expression: $$\phi^* \left(\mathcal{F}(\theta \cdot [\phi_* \mathcal{C}]) \right).$$ An argument based on chasing through Cartesian squares shows that the expression above is the same as: $$\pi_{2,*} \left( \pi_1^* \left( \left(s+\frac{K}{2}+ \frac{\Psi}{2} \right) \cdot \exp(-2 s) \right) \cdot \exp \left(d_{1,2} \right) \right) \cdot \exp\left(-s\right).$$ We therefore have the following: $$\phi^*(p_{1,1})=\frac{1}{2}K-(g-1)s-(g-\frac{1}{2})\Psi.$$ The result follows since the classes $p_{1,1}$ and $\Psi$ vanish. \end{proof} \section{The Faber-Pandharipande cycle} There is a natural way to define the tautological ring $R^*(C^n) \subset A^*(C^n)$ of products of a fixed smooth curve $C$ for positive integers $n$. It is the $\mathbb{Q}$-algebra generated by canonical classes and diagonals. More precisely, denote by $K$ the canonical class on the curve $C$ and consider the natural projections $\pi_i: C^n \rightarrow C$ and $\pi_{i,j}:C^n \rightarrow C^2$ for $1 \leq i < j \leq n$. From this collection of maps one gets the divisor classes $K_i:=\pi_i^*(K)$ and $d_{i,j}:=\pi_{i,j}^*(D)$, where $D \subset C^2$ is the diagonal class. The tautological ring $R^*(C^n)$ is defined to be the $\mathbb{Q}$-subalgebra of $A^*(C^n)$ generated by $K_i,d_{i,j}$.
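These generators satisfy standard identities coming from the geometry of the diagonal, which we recall here as an aside: since the normal bundle of the diagonal $D \subset C^2$ is the tangent bundle of $C$, the self-intersection formula gives
$$d_{1,2}^2=-K_1d_{1,2}=-K_2d_{1,2},$$
while pushing forward along the first projection yields $\pi_{1,*}(d_{1,2})=[C]$ and $\pi_{1,*}(d_{1,2}^2)=-K$. Such identities are used repeatedly when manipulating monomials in the classes $K_i$ and $d_{i,j}$ below.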
One could simply restrict tautological cycles on the product $\mathcal{C}_g^n$ of the universal curve $\mathcal{C}_g$ of genus $g$ to the fiber $C^n$ and recover the same set of generators. Faber and Pandharipande, in an unpublished work, have studied this ring in cohomology. From their analysis one gets a complete description of this ring. In particular, there is an explicit presentation of all relations. The resulting algebra becomes Gorenstein. It is natural to ask whether $R^*(C^n) \subset A^*(C^n)$ has the Gorenstein property as well. The situation becomes difficult already when we consider the surface $C^2$. There are four cycles $K_1K_2, K_1d_{1,2},K_2d_{1,2},d_{1,2}^2$ in degree 2 with the following relations: $$d_{1,2}^2=-K_1 d_{1,2}=-K_2d_{1,2}.$$ The proportionality $K_1K_2=(2g-2)K_1d_{1,2}$ holds in cohomology. One wonders whether this relation is true in Chow as well. This was shown by Faber and Pandharipande for $g \leq 3$. Green and Griffiths \cite{GG} study the zero cycle $K_1K_2-(2g-2)K_1d_{1,2}$ for generic curves defined over the complex numbers. Their Hodge theoretic analysis is based on an infinitesimal invariant. In particular, they show that the Faber-Pandharipande cycle does not vanish when $C$ is a generic curve of genus $g \geq 4$. In \cite{Y2} Yin shows that the same statement is true in arbitrary characteristic. The idea of the proof is to write the Faber-Pandharipande cycle as the pull-back of a tautological class from the Jacobian of the curve. He observes that the corresponding tautological class on the Jacobian does not vanish for a generic curve of genus $\geq 4$. Yin proves that the same is true for its pull-back. This shows the non-triviality of this cycle for such curves. We will prove the vanishing of the Faber-Pandharipande cycle on the locus of hyperelliptic curves with the same idea: \begin{prop} \label{FP} Let $\pi:\mathcal{C} \rightarrow \mathcal{H}_g$ be the universal hyperelliptic curve of genus $g$.
The cycle $K_1K_2-(2g-2)K_1d_{1,2}$ vanishes on $\mathcal{C}^2$. \end{prop} \begin{proof} Let $\pi:\mathcal{C} \rightarrow \mathcal{W}_g$ be the universal curve over the space $\mathcal{W}_g$ together with the section $s: \mathcal{W}_g \rightarrow \mathcal{C}$. Denote by $s_1,s_2$ the induced sections from $\mathcal{W}_g$ to the space $\mathcal{C} \times_{\mathcal{W}_g} \mathcal{C}$. We have the relations $$K_i=(2g-2)s_i, \qquad \text{for} \ i=1,2$$ from the statement proven in Corollary \ref{K}. This gives the desired vanishing since $K_1K_2=(2g-2)^2s_1s_2$ and $K_1d_{1,2}=(2g-2)s_1s_2$. There is another way to see this: Consider the following map $$\phi_2: \mathcal{C} \times_{\mathcal{W}_g} \mathcal{C} \rightarrow \mathcal{J}_g.$$ Notice that a geometric point on the space $\mathcal{C} \times_{\mathcal{W}_g} \mathcal{C}$ has the form $(C,p,x,y)$, where $C$ is a hyperelliptic curve with a Weierstra{\ss} point $p$ and $x,y \in C$ are arbitrary. The image of this point under $\phi_2$ is the divisor $x+y-2p$ on the Jacobian of $C$. After calculating the pull-back of the classes $p_{1,3},p_{2,2}$ to $\mathcal{C} \times_{\mathcal{W}_g} \mathcal{C}$ we obtain the following relation: $$\phi_2^*\left(4g p_{2,2}+(4g+6)p_{1,3}\right)= -\left(\frac{g}{g-1} \right)^2 \left(K_1K_2-(2g-2)K_1d_{1,2}\right).$$ The result follows from the vanishing of $p_{1,3},p_{2,2}$ on the universal Jacobian proved in Proposition \ref{J}. \end{proof} The vanishing of the Faber-Pandharipande cycle can be used to show another vanishing on the universal hyperelliptic curve: \begin{cor} \label{K2} The cycle $K_1^2$ vanishes on the universal curve $\pi:\mathcal{C} \rightarrow \mathcal{H}_g$. \end{cor} \begin{proof} We have the relations $$K_1K_2=(2g-2)K_1d_{1,2}=(2g-2)K_2d_{1,2}$$ on the space $\mathcal{C} \times_{\mathcal{H}_g} \mathcal{C}$. 
Intersect these relations with the divisor classes $K_1,K_2$ and compute their push-forwards to $\mathcal{C}$ under the projection $\mathcal{C} \times_{\mathcal{H}_g} \mathcal{C} \rightarrow \mathcal{C}$ onto the first factor. Concretely, multiplying the relation $K_1K_2=(2g-2)K_2d_{1,2}$ with $K_2$ and pushing forward gives $\kappa_1 K_1=(2g-2)K_1^2$, and the left-hand side is zero since $\kappa_1=0$. The vanishing of $K_1^2$ follows. \end{proof} \begin{rem} In \cite{Y2} the Faber-Pandharipande cycle is shown to be the pull-back of the class $W:=2p_{1,1}^2-(4g-4)p_{2,2}$, for a fixed curve of genus $g$. It is possible to prove Proposition \ref{FP} and Corollary \ref{K2} using the class $W$ with a similar method. The only difference is that the pull-back of $W$ has extra terms involving $K_1^2$ and $K_2^2$, which both vanish after all. \end{rem} \section{The Gross-Schoen cycle} In \cite{GS} Gross and Schoen considered a smooth and projective curve $X$ defined over a field $k$ together with a $k$-rational point $p$. The codimension 2 cycle $\Delta_p$ on the product $X^3$ is defined in terms of the diagonal classes and the point $p$. The authors call this class \emph{the modified diagonal cycle} and study some of its properties. The basic fact about $\Delta_p$ is that it vanishes in cohomology. It is proven that the class $\Delta_p$ in the second Griffiths group $Gr^2(X^3):= A^2_{hom}(X^3)/A^2_{alg}(X^3)$, measuring homologically trivial cycles modulo algebraically trivial cycles, is independent of the choice of the point $p$. When $X$ is a rational curve, an elliptic curve or a hyperelliptic curve, the cycle $\Delta_p$ is shown to be zero in Chow. In the first two cases the point $p$ can be arbitrary, but for hyperelliptic curves it has to be a Weierstra{\ss} point. In this article we want to show the same result in the relative setting. Let us recall the definition of the Gross-Schoen cycle in the classical case. Let $X$ be a smooth curve with a point $p$ as above.
Consider the following subvarieties: $$\Delta_1=\{(x,p,p): x \in X \}, \qquad \Delta_2=\{(p,x,p): x \in X \},$$ $$\Delta_3=\{(p,p,x): x \in X \}, \qquad \Delta_{1,2}=\{(x,x,p): x \in X \},$$ $$\Delta_{1,3}=\{(x,p,x): x \in X \}, \qquad \Delta_{2,3}=\{(p,x,x): x \in X \},$$ $$\Delta_{1,2,3}=\{(x,x,x): x \in X \}.$$ The codimension 2 cycle $\Delta_p$ is defined on the product $X^3$ as follows: $$\Delta_p=\Delta_{1,2,3}-\Delta_{1,2}-\Delta_{1,3}-\Delta_{2,3}+\Delta_1+ \Delta_2+\Delta_3.$$ There is another version of this cycle with respect to the canonical class. It has the following form: $$\Delta_K=\Delta_{1,2,3}-\frac{1}{2g-2}\left(K_1d_{2,3}+K_2d_{1,3}+K_3d_{1,2}\right)+\frac{1}{(2g-2)^2}\left(K_1K_2+K_1K_3+K_2K_3\right),$$ where $d_{i,j}$ is the diagonal class $x_i=x_j$ as usual. Recall that a curve $C$ has a subcanonical point $p$ if the equality $K=(2g-2)p$ holds. In this situation the classes $\Delta_K$ and $\Delta_p$ defined above coincide. Notice that the cycle $\Delta_K$ is symmetric. \begin{prop} The cycle $\Delta_K$ vanishes on the locus of hyperelliptic curves. \end{prop} \begin{proof} Let $\pi: \mathcal{C} \rightarrow \mathcal{W}_g$ be the family of Weierstra{\ss} pointed hyperelliptic curves of genus $g$ as before. We define a map $$\phi_3: \mathcal{C} \times_{\mathcal{W}_g} \mathcal{C} \times_{\mathcal{W}_g} \mathcal{C} \rightarrow \mathcal{J}_g.$$ For a pointed curve $(C,p)$ and points $x_1,x_2,x_3 \in C$ the associated point of the Jacobian of $C$ is the degree zero divisor $x_1+x_2+x_3-3p$. The cycle $\Delta_K$ is the pull-back of the following class via $\phi_3$: \begin{equation}\tag{1}\label{GS} p_{3,1}-\frac{1}{g-1}p_{2,0}p_{1,1}+\frac{2g}{g-1}p_{2,2}- \frac{2g-3}{2(g-1)^2}p_{1,1}^2. \end{equation} This was proven by Yin in \cite{Y1} for a fixed curve. A computation similar to that in Corollary \ref{K} shows that this formula stays valid over the base $\mathcal{W}_g$ as well. The vanishing of the cycle $\Delta_K$ follows from Proposition \ref{J}.
\end{proof} \begin{rem} In \cite{Z} Zhang studies the connection between the triviality of the Gross-Schoen cycle and the Ceresa cycle in the Chow ring of $X^3$ for a fixed curve $X$. From his result one can see their equivalence assuming the triviality of the Faber-Pandharipande cycle. The vanishing of the Ceresa cycle for families of Weierstra{\ss} pointed hyperelliptic curves is obvious from its definition. It would be interesting to see whether the result proven above can be obtained from this vanishing by the approach of Zhang. That might give insight into the following natural question about the torsion cycles found in this article: \begin{Q} What are the orders of the Faber-Pandharipande and Gross-Schoen cycles in the integral Chow ring of the moduli space of pointed hyperelliptic curves? \end{Q} \end{rem} Modified diagonals can be defined in a more general setting. The following formulation is due to O'Grady. Let $X$ be any $d$-dimensional algebraic variety over a field $k$ and $p \in X(k)$ be a $k$-rational point. The modified cycle $\Gamma^n(X,p)$ can be defined on the product $X^n$. For any subset $I$ of $\{1, \dots, n\}$ let $$\Delta_I^n(X,p):=\{(x_1,\dots,x_n): x_i=x_j \ \text{if} \ i,j \in I \ \text{and} \ x_i=p \ \text{if} \ i \notin I\}.$$ The $n^{th}$ modified cycle associated to the point $p$ is the $d$-cycle on $X^n$ given by $$\Gamma^n(X,p):=\sum_{\emptyset \neq I \subset \{1, \dots, n\}} (-1)^{n-|I|} \Delta_I^n(X,p).$$ In \cite{O} it is conjectured that for a hyperk{\"a}hler variety of dimension $d=2n$ there exists a point $p \in X$ such that the cycle $\Gamma^{2n+1}(X,p)$ vanishes. It is also conjectured that the modified diagonal $\Gamma^{2g+1}(A,p)$ vanishes for a point $p$ on an abelian variety of dimension $g$. A recent result of Moonen and Yin \cite{MY} establishes the second conjecture. In \cite{MY2} the same authors among other things give a motivic description of modified diagonals.
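As a consistency check, for $n=3$ and $X$ a curve this general construction recovers the modified diagonal cycle of Gross and Schoen: unwinding the signs $(-1)^{n-|I|}$ over the nonempty subsets $I \subset \{1,2,3\}$ gives
$$\Gamma^3(X,p)=\Delta_{1,2,3}-\Delta_{1,2}-\Delta_{1,3}-\Delta_{2,3}+\Delta_1+\Delta_2+\Delta_3=\Delta_p,$$
in the notation introduced at the beginning of this section.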
\section{The degree $g+1$ relation}\label{g+1} We have proved that the Faber-Pandharipande cycle and the Gross-Schoen cycle vanish on families of hyperelliptic curves. In this section we obtain a degree $g+1$ relation which plays an essential role in our study. We will see that there are two different ways to get this relation. The first source of this relation is again the universal Jacobian. We use the formula given by Grushevsky and Zakharov for the pull-back of the theta divisor to the moduli of curves. The second method is based on studying linear systems for generic curves of genus $g$. This method produces relations in more general settings. Restricting to the locus of hyperelliptic curves, we obtain a tautological relation of degree $g+1$ involving $2g+2$ points. \subsection{Relations coming from the theta divisor} The geometry of the theta divisor on the universal Jacobian gives a very simple way to prove our relation. The pull-back of the theta divisor to the space of curves has been investigated by several authors. Hain \cite{H} gives a formula for this class in terms of standard boundary cycles in cohomology. Hain uses this formula to compute the pull-back of the zero section of the universal Jacobian to the space of curves of compact type, answering a question of Eliashberg. Analogous results are proven by M{\"u}ller \cite{MUL}. Grushevsky and Zakharov \cite{GZ1}, \cite{GZ2} give a formula for the pull-back of the theta divisor to the space $M_{g,n}^{ct}$ classifying pointed curves of compact type and the space $\overline{M}_{g,n}$ of stable pointed curves. Here we follow the notation in \cite{GZ1}. Let $\mathcal{J}_g$ be the universal Jacobian of degree zero divisors over $M_g^{ct}$. We denote its pull-back under the natural projection $M_{g,n}^{ct} \rightarrow M_g^{ct}$ by the same letter. Consider a collection $\mathbf{d}=(d_1, \dots, d_n) \in \mathbb{Z}^n$ of integers satisfying $\sum_{i=1}^nd_i=0$.
For any moduli point $$(C;x_1 , \dots, x_n) \in M_{g,n}^{ct}$$ one gets a degree zero divisor $\sum_{i=1}^n d_ix_i$ on $C$, and hence a point in the Jacobian of $C$. This association defines a map $$s_{\mathbf{d}}:M_{g,n}^{ct} \rightarrow \mathcal{J}_g.$$ Let $\theta$ be the universal symmetric theta divisor trivialized along the zero section as before. In \cite{GZ1} Grushevsky and Zakharov compute the pull-back $s_{\mathbf{d}}^*(\theta)$ in terms of standard divisor classes on $M_{g,n}^{ct}$. We recall the definition of the divisor class $\Delta_{h,I}$ for $0 \leq h \leq g$ and a subset $I$ of the marking set $\{1, \dots, n\}$. The generic point of this divisor corresponds to a singular curve with two irreducible components, one of which has genus $h$ and carries exactly the markings in $I$. For any such subset $I$ the number $d_I$ is defined as the sum $\sum_{i \in I}d_i$. \begin{thm} For $\deg \mathbf{d}=0$, the class $s_{\mathbf{d}}^*(\theta) \in \text{Pic}_{\mathbb{Q}}(M_{g,n}^{ct})$ of the pull-back of the universal symmetric theta divisor trivialized along the zero section is equal to $$s_{\mathbf{d}}^*(\theta)=\frac{1}{2}\sum_{i=1}^n d_i^2K_i-\frac{1}{2}\sum_{I \subseteq \{1, \dots , n\}} (d_I^2-\sum_{i \in I} d_i^2) \Delta_{0,I}- \frac{1}{2} \sum_{h>0, I \subset \{1, \dots, n\}} d_I^2\Delta_{h,I}.$$ \end{thm} \begin{rem} In \cite{GZ1} a similar formula is proven when $\deg \mathbf{d}=g-1$. For hyperelliptic curves these relations give equivalent results. \end{rem} The vanishing of the class $\theta^{g+1}$ in the Chow ring of the universal Jacobian $\mathcal{J}_g$ gives a relation among tautological classes on $M_{g,n}^{ct}$. We restrict this relation to the locus of hyperelliptic curves and get a relation on $\mathcal{H}_{g,n}^{rt}$. One can show that the relations coming from the vanishing of the class $\theta^{g+1}$ follow from the relations found on $\mathcal{H}_{g,n}^{rt}$ for $n \leq 3$ as long as $n <2g+2$. When $n=2g+2$ one gets \emph{one} new relation.
More precisely, for all choices of parameters $d_i$, $i=1, \dots, 2g+2$, the resulting relations on $\mathcal{H}_{g,2g+2}^{rt}$ are multiples of each other up to a linear combination of the relations involving $n \leq 3$ points. As we will see in Section \ref{Product} there will be no new relations afterwards. \subsection{Relations from higher jets of differentials} The method is similar to the one introduced by Faber in \cite{F2}, with slight changes. It gives tautological relations on products $\mathcal{C}_g^n$ of the universal curve over $M_g$. The resulting relations hold for a general family of curves. Let $\pi: \mathcal{C}_g \rightarrow M_g$ be the universal curve of genus $g$ with the relative dualizing sheaf $\omega$. We also make the usual convention $K:=c_1(\omega)$. The $n$-fold fibered product of the curve $\mathcal{C}_g$ over $M_g$ is denoted by $\mathcal{C}_g^n$. We consider two natural locally free sheaves on this space. Let $\pi:\mathcal{C}_g^{n+1} \rightarrow \mathcal{C}_g^n$ be the projection onto the first $n$ factors. Its relative dualizing sheaf is denoted by $\omega_{n+1}$. The sum of the diagonal classes $d_{i,n+1}$ on $\mathcal{C}_g^{n+1}$ defines the divisor class $\Delta_{n+1}$: $$\Delta_{n+1}=\sum_{i=1}^n d_{i,n+1}.$$ The locally free sheaf $\mathbb{E}_m$ is defined for every $m \geq 0$ as follows: $$\mathbb{E}_m:=\pi_*(\omega^{\otimes m}).$$ This is the usual Hodge bundle of rank $g$ when $m=1$. The fiber of $\mathbb{E}_m$ at a point $(C;x_1 , \dots, x_n)$ is the vector space $H^0(C,\omega_C^{\otimes m})$, where $\omega_C$ is the dualizing sheaf of the curve $C$. For $m>1$ it is of rank $(2m-1)(g-1)$. Another natural bundle is obtained from evaluating differential forms on divisors.
We define the following locally free sheaf of rank $n$ on $\mathcal{C}_g^n$: $$\mathbb{F}_{m,n}:=\pi_*\left(\mathcal{O}_{\Delta_{n+1}} \otimes \omega_{n+1}^{\otimes m}\right).$$ The fiber of the sheaf $\mathbb{F}_{m,n}$ at a point $(C;x_1,\dots, x_n)$ is $$H^0\left(C,\frac{\omega_C^{\otimes m}}{\omega_C^{\otimes m}\left(-\sum_{i=1}^n x_i\right)} \right).$$ Consider the natural evaluation map: $$\phi_{m,n}:\mathbb{E}_m \rightarrow \mathbb{F}_{m,n}.$$ For general $n$ the morphism $\phi_{m,n}$ does not behave well. Its kernel has the fiber $$H^0\left(C,\omega_C^{\otimes m}\left(-\sum_{i=1}^n x_i\right)\right),$$ whose dimension depends on the curve $C$ and the points $x_1, \dots, x_n$. However, the situation becomes simpler when $n$ is large enough. More precisely, when $n > 2m(g-1)$ the morphism $\phi_{m,n}$ is injective. The quotient bundle $\mathbb{F}_{m,n}/\mathbb{E}_m$ then has rank $n-(2m-1)(g-1)=(g-1)+r$, where $r:=n-2m(g-1)$. Since $c_{g+r}$ lies one degree above this rank, we get the following vanishing for all $r>0$: \begin{prop}\label{R} The class $c_{g+r}(\mathbb{F}_{m,n}-\mathbb{E}_m)$ vanishes for all $r >0$. \end{prop} The Chern classes of the bundle $\mathbb{F}_{m,n}$ can be calculated with the same method as in \cite{F2} using Grothendieck-Riemann-Roch. We have the following formula for its total Chern class: $$c(\mathbb{F}_{m,n})=(1+mK_1)(1+mK_2-\Delta_2) \dots (1+mK_n-\Delta_n).$$ We recover Faber's relations \cite{F2} when $m=1$. As we will see, there are tautological relations on $\mathcal{C}^n_g$ which do not come from Faber's relations. As an example let $g=2$ and take $m=3$. In this case we get the relation $c_3(\mathbb{F}_{3,7}-\mathbb{E}_3)=0$. After multiplying this relation by the divisor class $K_7$ on $\mathcal{C}_2^7$ and pushing it forward to $\mathcal{C}_2^6$ via the projection $\mathcal{C}_2^7 \rightarrow \mathcal{C}_2^6$ we get a degree 3 relation involving 6 points. This relation was found in \cite{T2} and was used to study the tautological ring of $M_{2,n}^{rt}$.
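The numerology behind this vanishing can be checked mechanically. The following sketch (function name ours) verifies that in the injectivity range the class $c_{g+r}$ sits exactly one degree above the rank of the quotient bundle $\mathbb{F}_{m,n}/\mathbb{E}_m$, which is what forces it to vanish:

```python
# Rank bookkeeping for the vanishing c_{g+r}(F_{m,n} - E_m) = 0: for
# m > 1 the bundle E_m has rank (2m-1)(g-1), F_{m,n} has rank n, and
# phi_{m,n} is injective for n > 2m(g-1); the quotient bundle then has
# rank n - (2m-1)(g-1) = (g-1) + r.
def quotient_rank(g, m, n):
    assert m > 1 and n > 2 * m * (g - 1)   # injectivity range
    return n - (2 * m - 1) * (g - 1)

for g in range(2, 8):
    for m in range(2, 5):
        for r in range(1, 6):
            n = 2 * m * (g - 1) + r        # so that r = n - 2m(g-1) > 0
            assert quotient_rank(g, m, n) == (g - 1) + r
            # hence c_{g+r} lies one degree above the quotient's rank:
            assert g + r == quotient_rank(g, m, n) + 1
```

For instance $g=2$, $m=3$, $n=7$ gives a quotient of rank 2, so $c_3$ vanishes, matching the example above.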
As another example take $m=2$ for $g >2$. From Proposition \ref{R} we get the following relation involving $n=4g-3$ points: $$c_{g+1}(\mathbb{F}_{2,4g-3}-\mathbb{E}_2)=0.$$ Notice that $n \geq 2g+2$ by our assumption. From this relation we get a degree $g+1$ relation involving $2g+2$ points. There are many ways to do this, and all give equivalent results for hyperelliptic curves. For instance, multiply the relation above by the monomial $K_{2g+3} \dots K_{4g-3}$ and push it forward to $\mathcal{C}_g^{2g+2}$. The resulting relation is symmetric with respect to the $2g+2$ markings. \section{Products of the universal curve over $\mathcal{H}_g$} \label{Product} In this section we give a description of the tautological rings of the products $\mathcal{C}^n$ of the universal hyperelliptic curve $\pi:\mathcal{C} \rightarrow \mathcal{H}_g$. We will see that the relations found in previous sections can be used to find all tautological relations. This is based on explicit computations of the intersection pairings. To simplify the computations we work with a different set of generators. \begin{dfn} Let $n \geq 1$ be an integer. For every $1 \leq i \leq n$ and every pair $1 \leq i < j \leq n$ define the following classes: $$a_i:=\frac{1}{2g-2}K_i, \qquad b_{i,j}:=d_{i,j}-a_i-a_j.$$ \end{dfn} It is straightforward to see that the elements $a_i,b_{i,j}$ generate the tautological algebra of $\mathcal{C}^n$. The relations we found in previous sections become simpler in terms of these variables. The relation $K_i^2=0$ translates into $a_i^2=0$. The relations $$K_1K_2-(2g-2)K_1d_{1,2}=K_1K_2-(2g-2)K_2d_{1,2}=0$$ are equivalent to the vanishings $a_ib_{i,j}=b_{i,j}^2+2ga_ia_j=0$. The vanishing of the Gross-Schoen cycle is equivalent to $b_{i,j}b_{i,k}-a_ib_{j,k}=0$. The degree $g+1$ relation comes from the vanishing of the following symmetric expression: \begin{equation}\tag{2}\label{Re} \alpha_g:=\sum_{\mathcal{I}} b_{i_1,i_2} \dots b_{i_{2g+1},i_{2g+2}}.
\end{equation} Each term of the sum corresponds to a partition $\mathcal{I}$ of $\{1, \dots, 2g+2\}$ into $g+1$ subsets with 2 elements. \begin{ex} Although we consider only hyperelliptic curves, our presentation works for elliptic curves as well. In this case the origin of the elliptic curve plays the role of a Weierstra{\ss} point. Let $\pi: \mathcal{C} \rightarrow M_{1,1}$ be the universal elliptic curve over $M_{1,1}$. Geometric points of $M_{1,1}$ are elliptic curves $(C,p)$, where $C$ is a smooth curve of genus one and $p \in C$ denotes its origin. The morphism $\pi$ admits a natural section $a: M_{1,1} \rightarrow \mathcal{C}$, which associates $p \in C$ to the moduli point $(C,p)$. The image of the section $a$ is denoted by the same letter. Consider the $n$-fold fiber product $\mathcal{C}^n:=\mathcal{C} \times_{M_{1,1}} \dots \times_{M_{1,1}} \mathcal{C}$. Notice that $\mathcal{C}^n$ is birational to the moduli space $M_{1,n+1}$. The divisor class $a$ defines the divisor $a_i$ in the Picard group of $\mathcal{C}^n$ for $1 \leq i \leq n$. We also have the diagonal class $d_{i,j}$ for $1 \leq i < j \leq n$. The class $b_{i,j}$ is defined as $b_{i,j}:=d_{i,j}-a_i-a_j$. In \cite{T1} the vanishing of the Faber-Pandharipande cycle on $\mathcal{C}^2$ is obtained from a tautological relation on $M_{1,3}^{ct}$. The vanishing of the Gross-Schoen cycle gives $b_{1,2}b_{1,3}-a_1b_{2,3}=0$. The connection between this relation and Getzler's relation on $\overline{M}_{1,4}$ is explained in \cite{T1}. The next case deals with $n=4$. There are 10 generators in each of the degrees 1 and 3, and 21 generators in degree 2. The intersection matrix of the pairing $R^1(\mathcal{C}^4) \times R^3(\mathcal{C}^4)$ is invertible. This shows that the tautological groups in degrees 1 and 3 are of dimension 10.
The resulting intersection matrix for $R^2(\mathcal{C}^4) \times R^2(\mathcal{C}^4)$ has the form $$\begin{pmatrix} I_6 & & & & \\ & -2I_{12} & & & \\ & & 4 & -2 & -2 \\ & & -2 & 4 & -2 \\ & & -2 & -2 & 4 \\ \end{pmatrix}.$$ This matrix has rank 20. The kernel is one dimensional and corresponds to the relation $$b_{1,2}b_{3,4}+b_{1,3}b_{2,4}+b_{1,4}b_{2,3}=0.$$ This relation can be obtained from Getzler's relation via a pull-back. It is proven in \cite{T1} that every relation in the tautological ring of $\mathcal{C}^n$ follows from these relations. This was used to find all tautological relations for the moduli space $M_{1,n}^{ct}$. \end{ex} \begin{ex} Let $g=2$ and consider $n=2$. The vanishing of the Faber-Pandharipande cycle gives the relation $b_{1,2}^2+4a_1a_2=0$. This is the restriction of a tautological relation found on $\overline{M}_{2,2}$ by Getzler \cite{G2}. When $n=3$ the vanishing of the Gross-Schoen cycle corresponds to the relation $b_{1,2}b_{1,3}-a_1b_{2,3}=0$. This is the restriction of the relation on $\overline{M}_{2,3}$ found by Belorousski and Pandharipande \cite{BP}. The last relation contains 15 terms and has the following form: $$\sum b_{i_1,i_2}b_{i_3,i_4}b_{i_5,i_6}=0.$$ In \cite{T3} we proved that these relations determine the structure of the tautological ring $R^*(\mathcal{C}_2^n)$ for every $n$. \end{ex} The relations involving $n \leq 3$ points can be used to generate the tautological group of a given degree by elements of the form \begin{equation}\label{STC}\tag{3} v:=\prod_{i \in A(v)} a_i \cdot \prod_{j,k \in B(v)} b_{j,k}, \qquad A(v) \cap B(v)=\emptyset. \end{equation} Such an element $v$ is called a \emph{standard monomial}. We define $a(v):=\prod_{i \in A(v)} a_i$ and $b(v):=\prod_{j,k \in B(v)} b_{j,k}$. The relations mentioned above imply that the tautological groups of $\mathcal{C}^n$ are generated by standard monomials. Intersection pairings have a simple form if one works with standard monomials.
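The term counts appearing above can be verified directly: the terms of $\alpha_g$ are indexed by the perfect matchings of $\{1,\dots,2g+2\}$, of which there are $(2g+1)!!$ — giving the 3-term relation for genus one and the 15-term relation for $g=2$. A short sketch (helper names ours):

```python
# Terms of alpha_g = partitions of {1, ..., 2g+2} into g+1 unordered
# pairs; their number is the double factorial (2g+1)!!.
def matchings(elts):
    """All partitions of the list elts into unordered pairs."""
    if not elts:
        return [[]]
    first, rest = elts[0], elts[1:]
    result = []
    for partner in rest:
        remaining = [e for e in rest if e != partner]
        for m in matchings(remaining):
            result.append([(first, partner)] + m)
    return result

def double_factorial(k):
    return 1 if k <= 1 else k * double_factorial(k - 2)

for g in [1, 2, 3]:
    terms = matchings(list(range(1, 2 * g + 3)))
    assert len(terms) == double_factorial(2 * g + 1)

# 3 terms: b12*b34 + b13*b24 + b14*b23; 15 terms for g = 2.
assert double_factorial(3) == 3 and double_factorial(5) == 15
```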
There is a natural way to associate a standard monomial $w \in R^{n-k}(\mathcal{C}^n)$ to every standard monomial $v \in R^k(\mathcal{C}^n)$. It is simply defined as the following product $$w:=\prod_{i \in \{1, \dots, n\} \setminus (A(v) \cup B(v))} a_i \cdot b(v).$$ We say that $v$ and $w$ are dual to each other and write $w=v^*$. An elementary argument shows that for standard monomials $v_1,v_2$ of the same degree the product $v_1^* \cdot v_2$ vanishes unless they have the same $B$-parts. This means that the interesting blocks of the intersection pairings come from matrices of the following form: Let $m \geq 1$ be an integer and consider all standard monomials of the form $b_{i_1,i_2} \dots b_{i_{2m-1},i_{2m}}$. These belong to the tautological group $R^m(\mathcal{C}^{2m})$. Denote by $R_{2m}$ the $\mathbb{Q}$-vector space generated by these elements. The permutation group $S_{2m}$ acts on $R_{2m}$ via its natural action on indices. This makes $R_{2m}$ into a representation of the symmetric group $S_{2m}$. The decomposition of $R_{2m}$ into irreducible components has the following form: $$R_{2m}=\oplus_{\lambda} V_{\lambda},$$ where $V_{\lambda}$ is the representation associated to the partition $\lambda$ and in this sum $\lambda$ varies over all partitions having only even components. Notice that all representations appear with multiplicity one. In \cite{HW} similar matrices and their eigenvalues are studied by Hanlon and Wales. It follows from their result that every representation $V_{\lambda}$ corresponds to an eigenspace of our intersection matrix. In particular, $V_{\lambda}$ is in the kernel if and only if the partition $\lambda$ has a part of length at least $2g+2$. An elementary argument shows that such a representation $V_{\lambda}$ is generated by an element which is a linear combination of expressions of the form $\alpha_g$ given in \eqref{Re}.
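The multiplicity-one decomposition is consistent with a dimension count: summing the dimensions $f^\lambda$ of the irreducibles, computed by the hook length formula, over the partitions $\lambda$ of $2m$ with only even parts must give $\dim R_{2m}=(2m-1)!!$, the number of monomials $b_{i_1,i_2} \cdots b_{i_{2m-1},i_{2m}}$. A sketch of this check (helper names ours):

```python
# Check: the sum of dim V_lambda over partitions lambda of 2m with all
# parts even equals dim R_{2m} = (2m-1)!!, consistent with the
# multiplicity-one decomposition R_{2m} = (+)_lambda V_lambda.
from math import factorial, prod

def partitions(n, maxpart=None):
    """All partitions of n into parts of size at most maxpart."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        yield []
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def dim_irrep(lam):
    """Dimension of the S_n irreducible V_lambda (hook length formula)."""
    n = sum(lam)
    hooks = prod(
        lam[i] - j + sum(1 for k in range(i + 1, len(lam)) if lam[k] > j)
        for i in range(len(lam)) for j in range(lam[i])
    )
    return factorial(n) // hooks

def double_factorial(k):
    return 1 if k <= 1 else k * double_factorial(k - 2)

for m in range(1, 6):
    even = [lam for lam in partitions(2 * m) if all(p % 2 == 0 for p in lam)]
    assert sum(dim_irrep(lam) for lam in even) == double_factorial(2 * m - 1)
```

For $m=2$, for instance, the even partitions of 4 are $[4]$ and $[2,2]$, with $f^{[4]}=1$ and $f^{[2,2]}=2$, adding up to $3=3!!$.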
This shows that the relations found for $n \leq 3$ points together with the degree $g+1$ relation on $\mathcal{C}^{2g+2}$ generate all tautological relations. This completes the description of $R^*(\mathcal{C}^n)$ for every $n$. It follows that the intersection pairings are perfect in all degrees. Everything we have said about the tautological ring of the moduli space $\mathcal{C}^n$ can be restricted to a fixed hyperelliptic curve. \begin{cor} Let $X$ be any hyperelliptic curve of genus $g$ and let $n$ be a positive integer. The intersection pairings between tautological classes on $X^n$ are perfect in all degrees. \end{cor} \section{The Fulton-MacPherson compactification} In previous sections we found tautological relations on products of the universal hyperelliptic curve $\mathcal{C}$ over $\mathcal{H}_g$. Those relations suffice to determine the structure of the tautological ring of $\mathcal{C}^n$ for every $n$. We now want to include singular curves as well and give a description of $R^*(\mathcal{H}_{g,n}^{rt})$. To this end we notice that the space $\mathcal{H}_{g,n}^{rt}$ can be seen as the relative Fulton-MacPherson compactification of $\mathcal{C}^n$ over the base $\mathcal{H}_g$. More precisely, consider the natural map $$\pi: \mathcal{H}_{g,n}^{rt} \rightarrow \mathcal{H}_g,$$ which forgets all markings on the curve and contracts all rational components. The reduced fiber of $\pi$ over a moduli point $[X] \in \mathcal{H}_g$ is the Fulton-MacPherson compactification $X[n]$ of the configuration space of $n$ points on $X$. From this point of view all boundary divisors appearing in the partial compactification $\mathcal{H}_{g,n}^{rt}$ can be seen as exceptional divisors of blow-ups introduced in the process of separating points. This identifies the tautological ring of $\mathcal{H}_{g,n}^{rt}$ with an algebra over $R^*(\mathcal{C}^n)$. Basic relations among exceptional divisors are interpreted as trivial relations on the moduli space.
This gives a simple description of the tautological ring of $\mathcal{H}_{g,n}^{rt}$ in terms of the results proven in previous sections. We first recall the definition of the Fulton-MacPherson compactification of a configuration space in the classical setting. Let $X$ be a nonsingular algebraic variety defined over an algebraically closed field. For an integer $n \geq 1$ consider the product $X^n$. The configuration space $F(X,n)$ is the open subset of $X^n$ consisting of all $n$-tuples of distinct points. In \cite{FM} Fulton and MacPherson introduce a compactification of $F(X,n)$ inside the following product $$X^n \times \prod_{|S| \geq 2} \text{Bl}_{\Delta} X^S,$$ where for each subset $S$ of the set $\{1, \dots , n\}$ with at least two elements Bl$_{\Delta} X^S$ is the blow-up of the corresponding product $X^S$ along its small diagonal $\Delta$. The resulting space is denoted by $X[n]$. This construction is symmetric with respect to the action of the symmetric group permuting the $n$ points. The space $X[n]$ is an irreducible algebraic variety and the natural map from it to $X^n$ is proper. Furthermore, the variety $X[n]$ contains $F(X,n)$ as an open subset and the complement is a divisor with normal crossings. This compactification has several advantages over the naive candidate $X^n$. When several points of the configuration approach the same point $x \in X$, the limit in $X^n$ records only the point $x$. The compactification remembers in addition how these points approach each other. The original construction of Fulton and MacPherson is inductive. The starting point is $X[1]:=X$. Assuming that $X[n]$ is already constructed we consider the product $X[n] \times X$. There is a sequence of blow-ups along a collection of disjoint codimension two subvarieties which yields $X[n+1]$. Another equivalent construction of the compactification $X[n]$ is given in \cite{L}. It consists of a symmetric sequence of blow-ups along diagonals of $X^n$.
We consider a diagonal $X_I$ for each subset $I$ of $\{1, \dots , n\}$ having at least 2 elements. It is defined as the locus of points in $X^n$ whose coordinates indexed by $I$ are all equal. In other words it is the inverse image of the small diagonal of $X^{|I|}$ under the natural projection from $X^n$ onto $X^{|I|}$. We first blow up the small diagonal inside the product $X^n$. In the next step we blow up all diagonals associated with subsets with $n-1$ elements. This process continues and at each step we increase the dimension of the blow-up centers by one. Notice that in our case $X$ is a curve, so it suffices to deal with subsets $I$ having at least 3 elements, since for $|I|=2$ the diagonal $X_I$ is already a divisor. The class of the exceptional divisor of the blow-up along $X_I$, as well as its proper transforms under the later blow-ups, is denoted by $D_I$. \section{The tautological ring of $X[n]$} In \cite{FM} the authors show that the Chow ring of $X[n]$ can be described as an algebra over the intersection ring $A^*(X^n)$ of the Cartesian product $X^n$. They also give a description of the ideal of relations. Notice that we chose a different sequence of blow-ups from the one in the original construction given by Fulton and MacPherson. As a result our presentation of the Chow ring has a slightly different form. The formula of Keel \cite{K} plays an essential role in our computations. To apply this formula we consider a codimension $d$ closed subvariety $Z$ of an algebraic variety $Y$. We assume that the restriction map $$A^*(Y) \rightarrow A^*(Z)$$ is surjective. Let $J_{Z/Y}$ be its kernel, so that $$A^*(Z)=\frac{A^*(Y)}{J_{Z/Y}}.$$ Notice that this property holds for all subvarieties occurring in the process of the Fulton-MacPherson compactification. By our assumption the Chern class $c_i(N_{Z/Y})$ of the normal bundle of $Z$ is the restriction of an algebraic cycle $a_i \in A^i(Y)$.
We define a \emph{Chern polynomial} for $Z$ to be $$P_{Z/Y}(t)=t^d+a_1t^{d-1}+ \dots + a_{d-1}t+a_d \in A^*(Y)[t].$$ We require that $a_d=[Z]$. The other coefficients $a_i$ are well-defined only modulo the ideal $J_{Z/Y}$. Denote by $\widetilde{Y}$ the blow-up of $Y$ along $Z$ and let $E$ be the class of the exceptional divisor. In this situation we get a simple description of the Chow ring of $\widetilde{Y}$ in terms of the Chow ring of $Y$ as follows: \begin{thm}(Keel) The Chow ring of $\widetilde{Y}$ is given by $$A^*(\widetilde{Y})=\frac{A^*(Y)[E]}{\left(J_{Z/Y} \cdot E, P_{Z/Y}(-E)\right)}.$$ \end{thm} This theorem enables us to find a complete description of the Chow ring of $X[n]$ if we know $A^*(X^n)$. We need to know a Chern polynomial $P_{Z/Y}$ and the ideal $J_{Z/Y}$ for all subvarieties $Z$ appearing in the construction of the space $X[n]$. In our case we are only interested in tautological classes. There is a natural way to define the tautological ring of $X[n]$. \begin{dfn} Let $X$ be a curve and $n$ be a positive integer. The tautological ring of the Fulton-MacPherson compactification $X[n]$ of the configuration space $F(X,n)$ is defined to be the $\mathbb{Q}$-subalgebra of the rational Chow ring of $X[n]$ generated by tautological classes on $X^n$ together with the classes of exceptional divisors. \end{dfn} In \cite{T2} we found the connection between tautological relations on $X[n]$ and the product $X^n$. These relations are divided into five classes. In the following we assume that $Y$ is $X^n$ or the total space of one of its blow-ups in the process of the construction of $X[n]$: \begin{enumerate}\label{relations} \item The most obvious class of relations consists of those coming from the original space $X^n$. Notice that the Chow ring of $X^n$ naturally becomes a subring of the intersection ring of the space $X[n]$. We use the same letters for cycles in $R^*(X^n)$ and their pull-backs to $X[n]$.
\item Another class of trivial relations comes from the vanishing of products of certain exceptional divisors. We have seen that exceptional divisors correspond to subsets $I$ of $\{1, \dots, n\}$ having at least 3 elements. Let $D_I$ and $D_J$ be the exceptional divisors associated to the subvarieties $X_I$ and $X_J$ of $X^n$. Assume that $I \not \subseteq J$, $J \not \subseteq I$ and $I \cap J \neq \emptyset$. Under these assumptions the proper transforms of the subvarieties $X_I$ and $X_J$ become disjoint in the process of blow-ups. This happens when we blow up the subvarieties corresponding to subsets having $\min(|I|,|J|)+1$ elements. This implies that the product $D_I \cdot D_J$ is zero for such subsets. \item For a subvariety $Z$ of $Y$ consider the inclusion map $i:Z \rightarrow Y$. Let $x$ be an element in the kernel of the restriction map $i^*:A^*(Y) \rightarrow A^*(Z)$. Denote by $E_Z$ the class of the exceptional divisor of the blow-up Bl$_Z Y$ of $Y$ along $Z$. We get the relation $x \cdot E_Z=0$ in the Chow ring of Bl$_Z Y$. \item Let $V_1,\dots,V_k,Z$ be a collection of blow-up centers. Assume that the subvarieties $V_1, \dots, V_k$ intersect transversally. Furthermore, suppose we can write $Z$ as the transversal intersection $V_1 \cap \dots \cap V_k \cap W$ for some $W$. Denote by $E_{\widetilde{V_i}}$ the class of the exceptional divisor of the blow-up along $V_i$. In this situation we get the relation $P_{W/Y}(-E_Z) \cdot E_{\widetilde{V_1}} \dots E_{\widetilde{V_k}}=0$. \item Let $Z$ be a blow-up center of $Y$ with a Chern polynomial $P_{Z/Y}$. The class $E_Z$ satisfies the equation $P_{Z/Y}(-E_Z)=0$. \end{enumerate} \subsection{Standard monomials and intersection pairings} The relations described above can be used to obtain a smaller set of generators for the tautological groups. These generators will be called \emph{standard monomials}.
We will see that there is an involution on the tautological ring of $X[n]$ which gives a one-to-one correspondence between standard monomials in degrees $d$ and $n-d$. The intersection pairing $$R^d(X[n]) \times R^{n-d}(X[n]) \rightarrow \mathbb{Q}, \qquad 0 \leq d \leq n$$ has a simple form with respect to this choice of basis for the tautological groups. A natural filtration on the tautological algebra shows that the resulting intersection matrix is triangular. It consists of square blocks along the main diagonal. These blocks are intersection matrices of the pairings for $X^m$ with $m \leq n$. This also shows that to study tautological classes on $X[n]$ it is enough to restrict to the spaces $X^m$, $m \leq n$. Every monomial in the tautological ring is a product of tautological classes on $X^n$ with powers of exceptional divisors. It can be written as follows: \begin{equation}\label{V}\tag{4} v:=a(v) \cdot b(v) \cdot \prod_{r=1}^m D_{I_r}^{i_r}, \end{equation} where $a(v)$ and $b(v)$ belong to $R^*(X^n)$ as defined before and $D_I$ is the exceptional divisor associated with the subvariety $X_I$. To define standard monomials and the involution on the tautological ring we need to associate a graph to each monomial in $R^*(X[n])$. Let $v$ be a monomial as given in \eqref{V}. We associate a directed graph $\mathcal{G}:=(V_{\mathcal{G}},E_{\mathcal{G}})$ to $v$ as follows: The vertex set $V_{\mathcal{G}}$ of $\mathcal{G}$ is identified with the set $\{1, \dots, m\}$. There is an edge $(r,s)$ in $E_{\mathcal{G}}$ from the vertex $r$ to $s$ if the set $I_s$ is a maximal element of the set $$\{I_i: I_i \subset I_r\}.$$ The degree $\deg(i)$ of a vertex $i$ is the number of outgoing edges $(i,j)$ with starting point $i$. The closure $\overline{i}$ of a vertex $i \in V_{\mathcal{G}}$ consists of all vertices $r$ such that the inclusion $I_r \subseteq I_i$ holds.
In other words, the vertex $r$ belongs to the closure of $i$ when there is a path in the graph $\mathcal{G}$ from $i$ to $r$. Notice that each vertex $i$ of $\mathcal{G}$ corresponds to a subset $I_i$ of $\{1, \dots, n\}$. A vertex $i$ is called a \emph{root} of $\mathcal{G}$ if the set $I_i$ is a maximal element of the collection $\{I_1, \dots, I_m\}$ with respect to inclusion of sets. A vertex $i$ corresponding to a minimal subset $I_i$ is called an \emph{external} vertex. All vertices which are not external are called \emph{internal} vertices. Let $v$ be a monomial in the tautological ring of $X[n]$ and denote by $\mathcal{G}$ the associated graph. The roots of $\mathcal{G}$ correspond to a collection $\{J_1, \dots, J_s\}$ of subsets of $\{1, \dots, n\}$. For each $1 \leq r \leq s$ let $\alpha_r \in J_r$ be the smallest element. Consider the following subset $S$ of the set $\{1, \dots, n\}$: \begin{equation} \label{S} \tag{5} S:=\{\alpha_1, \dots, \alpha_s\} \cup ( \cap_{r=1}^m I_r^c). \end{equation} In this situation we say that the monomial $v$ is standard if $$a(v)b(v) \in R^*(X^{|S|})$$ is a standard monomial according to our former convention \eqref{STC} in Section 3.3. We also require the following restriction for the powers of the exceptional divisors: $$i_r \leq \min(|I_r|-2,|I_r|-|\cup_{I_s \subset I_r}I_s|+\deg(I_r)-2).$$ The following proposition shows that in studying tautological classes on $X[n]$ we can restrict to standard monomials: \begin{prop}\label{standard} Standard monomials form an additive basis for the tautological groups in all degrees. \end{prop} \begin{proof} Let $v$ be a monomial as in \eqref{V}. For any subset $I$ of the set $\{1, \dots, n\}$ and elements $i,j \in I$ the relations $$a_i \cdot D_I=a_j \cdot D_I, \qquad b_{i,j} \cdot D_I=-2g a_i \cdot D_I$$ hold. It follows that for every $k \in \{1, \dots , n\}$ we have $b_{i,k} \cdot D_I=b_{j,k} \cdot D_I$.
Using these relations we may assume that the monomial $a(v)b(v)$ belongs to $R^*(X^{|S|})$ for the index set $S$ given in \eqref{S}. We may also assume that the power $i_r$ satisfies the inequality $i_r \leq |I_r|-2$. This follows from the last class of relations in \ref{relations}. To deal with the last inequality let $\{J_1, \dots, J_s\}$ be the set of maximal elements of the set $$\{I_i: I_i \subset I_r, \ 1 \leq i \leq m\}.$$ From the relations of type (4) in \ref{relations} the monomial $D_{I_r}^j \cdot \prod_{i=1}^s D_{J_i}$ can be written as a sum of terms which are strictly smaller than it, where $$j=|I_r|-|\cup_{i=1}^s J_i|+s-1=|I_r|-|\cup_{I_s \subset I_r} I_s|+\deg(I_r)-1.$$ This shows that for any $r$ we may assume that $$i_r \leq |I_r|-|\cup_{I_s \subset I_r} I_s|+\deg(I_r)-2.$$ \end{proof} We are now able to define an involution on $R^*(X[n])$ which associates a dual element $v^* \in R^{n-d}(X[n])$ to every standard monomial $v \in R^d(X[n])$. \begin{dfn} Let $v$ be a standard monomial as given in \eqref{V} and $\mathcal{G}$ be its associated graph. Define the subsets $J_1, \dots, J_s$ and $S$ of $\{1, \dots, n\}$ as before. The set $T$ is defined as $$T:=S \setminus (A(v) \cup B(v)).$$ The integers $j_r$ for $1 \leq r \leq m$ are defined as follows: $$j_r:= \left\{ \begin{array}{ll} |I_r|-|\cup_{I_s \subset I_r}I_s|+\deg(I_r)-1-i_r & \qquad I_r \ \mathrm{is \ an \ internal \ vertex \ of} \ \mathcal{G} \\ \\ |I_r|-1-i_r & \qquad I_r \ \mathrm{is \ an \ external \ vertex \ of} \ \mathcal{G}. \\ \end{array} \right. $$ We define $$v^*:=a(v^*) \cdot b(v^*) \cdot \prod_{r=1}^m D_{I_r}^{j_r},$$ where $a(v^*)=\prod_{i \in T}a_i$ and $b(v^*)=b(v)$. \end{dfn} It follows from our definition that the dual of every standard monomial is again standard. The basic property $v^{**}=v$ shows that this association defines an involution on the tautological algebra.
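As a concrete bookkeeping check of this definition, the following sketch (helper names ours) computes the sets $S$ and $T$, the powers $j_r$, and the degrees of $v$ and $v^*$ for the monomial $v=b_{1,2} \cdot D_{I_1} \dots D_{I_7}$ on $X[20]$ treated in one of the examples below; it reproduces the factors $a_3a_{14}$ and the power $j_4=3$ of the stated dual, and confirms that $\deg v + \deg v^* = n$:

```python
# Combinatorial bookkeeping of the duality v -> v* for the monomial
# v = b_{1,2} * D_{I_1} ... D_{I_7} on X[20], all powers i_r = 1.
n = 20
I = {1: {3, 4, 5}, 2: {6, 7, 8}, 3: {9, 10, 11}, 4: set(range(3, 14)),
     5: {14, 15, 16}, 6: {14, 15, 16, 17, 18}, 7: set(range(14, 21))}
i_pow = {r: 1 for r in I}
A, B = set(), [{1, 2}]                    # a-part empty, b-part is b_{1,2}

# children[r]: maximal proper subsets of I_r among the I_s (edges of G)
children = {r: [s for s in I if I[s] < I[r] and
                not any(I[s] < I[t] < I[r] for t in I)] for r in I}
roots = [r for r in I if not any(I[r] < I[t] for t in I)]

union = set().union(*I.values())
S = {min(I[r]) for r in roots} | (set(range(1, n + 1)) - union)
T = S - A - set().union(*B)               # indices of the a-factors of v*

def j_pow(r):
    if not children[r]:                   # external vertex
        return len(I[r]) - 1 - i_pow[r]
    cover = set().union(*(I[s] for s in children[r]))
    return len(I[r]) - len(cover) + len(children[r]) - 1 - i_pow[r]

deg_v = len(A) + len(B) + sum(i_pow.values())
deg_dual = len(T) + len(B) + sum(j_pow(r) for r in I)

assert T == {3, 14}                       # so a(v*) = a_3 * a_14
assert j_pow(4) == 3 and all(j_pow(r) == 1 for r in I if r != 4)
assert deg_v == 8 and deg_dual == 12 and deg_v + deg_dual == n
```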
\begin{ex} Let $I$ be a subset of $\{1, \dots, n\}$ with at least 3 elements and denote by $i$ its smallest element. The divisor $v:=D_I$ is standard. Its dual is $v^*=a_iD_I^{n-2}$. The intersection product $v \cdot v^*$ is equal to $(-1)^{n-1}$. \end{ex} \begin{ex} Let $I_1, \dots, I_4$ be 4 disjoint subsets of $\{1, \dots ,n\}$, each with at least 3 elements, whose union is $\{1, \dots ,n\}$. Let $i_j \in I_j$ be the smallest element for $1 \leq j \leq 4$. The degree 6 class $v:=b_{i_1,i_2}b_{i_3,i_4} \prod_{i=1}^4D_{I_i}$ is standard. We have that $v^*=b_{i_1,i_2}b_{i_3,i_4} \prod_{i=1}^4D_{I_i}^{|I_i|-2}$ and $v \cdot v^*=4(-1)^{1+\sum_{i=1}^4 |I_i|}g^2$. \end{ex} \begin{ex} Let $n=20$ and consider the monomial $v=b_{1,2} \cdot \prod_{i=1}^7 D_{I_i}$ in $R^8(X[20])$, where $$I_1=\{3,4,5\}, \qquad I_2=\{6,7,8\}, \qquad I_3=\{9,10,11\}, \qquad I_4=\{3,4, \dots, 13\},$$ $$I_5=\{14,15,16\}, \qquad I_6=\{14,15,16,17,18\}, \qquad I_7=\{14,15, \dots, 20\}.$$ It follows from the definition that $v$ is a standard monomial. The graph $\mathcal{G}$ associated to $v$ is pictured below. It has 7 vertices and 5 edges. The vertex $I_4$ has degree 3, the vertices $I_6$ and $I_7$ have degree 1, and the other vertices have degree zero. The graph $\mathcal{G}$ has two roots $I_4$ and $I_7$. There are 4 external vertices $I_1,I_2,I_3$ and $I_5$ and 3 internal vertices $I_4,I_6,I_7$. The dual $v^* \in R^{12}(X[20])$ of $v$ is given by $$v^*=b_{1,2}a_3a_{14} \cdot \prod_{r=1}^7 D_{I_r}^{j_r},$$ where $j_4=3$ and all other powers are 1. The intersection product $v \cdot v^*$ is equal to $-2g$.
\begin{figure}[htp] \begin{tikzpicture} [inner sep=0pt,thick, dot/.style={fill=black,circle,minimum size=7pt}] [auto=left] \begin{scope}[very thick, every node/.style={sloped,allow upside down}] \node (m1) at (2,7.5) {$I_4$}; \node[dot] (n1) at (2,8) {}; \node (m2) at (1,10.5) {$I_1$}; \node[dot] (n2) at (1,10) {}; \node (m3) at (2,10.5) {$I_2$}; \node[dot] (n3) at (2,10) {}; \node (m4) at (3,10.5) {$I_3$}; \node[dot] (n4) at (3,10) {}; \draw (n1)-- node {\tikz \draw[-triangle 45] (0,1) -- +(.5,0);} (n2); \draw (n1)-- node {\tikz \draw[-triangle 45] (0,1) -- +(.5,0);} (n3); \draw (n1)-- node {\tikz \draw[-triangle 45] (0,1) -- +(.5,0);} (n4); \node[dot] (n5) at (6,8) {}; \node (m5) at (6,7.5) {$I_7$}; \node[dot] (n6) at (6,10) {}; \node (m6) at (5.5,10) {$I_6$}; \node[dot] (n7) at (6,12) {}; \node (m7) at (6,12.5) {$I_5$}; \draw (n5)-- node {\tikz \draw[-triangle 45] (0,1) -- +(.5,0);} (n6); \draw (n6)-- node {\tikz \draw[-triangle 45] (0,1) -- +(.5,0);} (n7); \end{scope} \end{tikzpicture} \caption{The graph $\mathcal{G}$} \end{figure} \end{ex} \subsection{The filtration of the tautological algebra} To compute the intersection matrix of the pairing we consider all standard monomials of degree $d$ and their duals defined before. The resulting matrix has a triangular structure which simplifies our analysis of the pairing. The triangular property is shown to be connected to the vanishing of certain intersection numbers. These vanishings are better understood via a natural filtration of the tautological ring. The existence of this filtration was predicted by Looijenga in the previous work \cite{T1} on the tautological ring of $M_{1,n}^{ct}$. This filtration is defined in terms of a natural ordering of monomials. We first order the exceptional divisors. \begin{dfn}\label{<} Let $I,J$ be subsets of the set $\{1, \dots, n\}$. 
We say that $I < J$ if \begin{itemize} \item $|I| < |J|$; \item or if $|I|=|J|$ and the smallest element of $I \setminus (I \cap J)$ is smaller than the smallest element of $J \setminus (I \cap J)$. \end{itemize} This induces an ordering on monomials in $R^*(X[n])$. Consider two monomials $v_1,v_2$ in the tautological ring of $X[n]$ as follows: $$v_1=a(v_1)b(v_1) \cdot \prod_{r=1}^{r_0} D_{I_r}^{i_r} \cdot D, \qquad v_2=a(v_2)b(v_2) \cdot \prod_{r=1}^{r_0} D_{I_r}^{j_r} \cdot D,$$ where $D=\prod_{r=r_0+1}^m D_{I_r}^{i_r}$ and $I_m < \dots < I_1$. We say that $v_1<v_2$ if $i_{r_0}<j_{r_0}$, or if $r_0=0$ and $a(v_1)b(v_1)<a(v_2)b(v_2)$. Furthermore, we say that $v_1 \ll v_2$ if for any factor $D_I$ of $v_2$ we have that $v_1<D_I$. Notice that $v_1 \ll v_2$ implies that $v_1<v_2$. \end{dfn} This ordering of monomials induces a filtration of the tautological ring. \begin{dfn}\label{fil} Let $v$ be a standard monomial as given in \eqref{V} and let $J_1, \dots, J_s$ be the roots of its associated graph as before. The integer $p(v)$ is defined to be the degree of the element $$a(v)b(v) \cap_{r=1}^s X_{J_r} \in A^*(X^n),$$ which is the same as $$\deg a(v)b(v) +\sum_{r=1}^s |J_r|-s.$$ The subspace $F^pR^*(X[n])$ of the tautological ring is defined to be the $\mathbb{Q}$-vector space generated by standard monomials $v$ satisfying $p(v) \geq p$. \end{dfn} It follows immediately from our definition that $$F^{p+1}R^*(X[n]) \subseteq F^pR^*(X[n]).$$ This shows that the subspaces $F^pR^*(X[n])$ indeed define a filtration for $R^*(X[n])$. The following vanishing result will be crucial in the analysis of intersection pairings: \begin{prop} \label{3} \begin{enumerate} \item Let $v \in F^pR^*(X[n])$ and $w \in R^d(X[n])$ be such that $w \ll v$. If $p+d>n$ then the intersection product $v \cdot w$ is zero. In particular, we have that $F^{n+1}R^*(X[n])$ is zero. \item Let $v_1,v_2$ be standard monomials in $R^d(X[n])$ satisfying $D(v_1) < D(v_2)$. Then $v_1 \cdot v_2^*=0$. 
\end{enumerate} \end{prop} \begin{proof} Let $J_1, \dots ,J_s$ be as in Definition \ref{fil}. Denote by $Y$ the space before blowing up the subvarieties $X_{J_1}, \dots , X_{J_s}$ and by $\widetilde{Y}$ the resulting blow-up space. The corresponding morphism is denoted by $$\pi: \widetilde{Y} \rightarrow Y.$$ The product $v \cdot w$ vanishes since it is the pull-back of zero via $\pi^*$. This proves the first claim. For the second part it is enough to write $v_1 \cdot v_2^*$ as a product $v \cdot w$, for $v,w \in R^*(X[n])$ satisfying the properties stated in the first part. To find $v$ and $w$, let $v_1,v_2$ be given as in Definition \ref{<}, and denote by $\{J_1, \dots ,J_s\}$ the set of roots of the graph associated to the monomial $$D=\prod_{r=r_0+1}^{m} D_{I_r}^{i_r}.$$ By relabelling the roots we may assume that there is an $s_0 \geq 0$ such that $J_r \subset I_{r_0}$ for $1 \leq r \leq s_0$, and the equality $I_{r_0} \cap J_r=\emptyset$ holds for $s_0 < r \leq s$. Let $w$ be the product of all monomials $D_I$ in $v_1 \cdot v_2^*$ which are strictly less than $D_{I_{r_0}}$ and $v$ be the product of the other factors, so that the equality $v_1 \cdot v_2^*=v \cdot w$ holds. Notice that $w \ll v$, by the definition of $v$ and $w$. We need to calculate the degree of $w$. This is done by calculating the degree of $v$ and using the equation $$\deg(w)=n-\deg(v).$$ The roots of the graph associated to $v$ consist of $s-s_0+1$ vertices associated to the subsets $I_{r_0}, J_{s_0+1}, \dots, J_s$. The degree $\deg(v)$ of $v$ is therefore equal to the following sum: $$|I_{r_0}|-1+\sum_{r=s_0+1}^s (|J_r|-1).$$ It follows that $$\deg(w)=n+1+j_{r_0}-i_{r_0}+s-s_0-|I_{r_0}|-\sum_{r=s_0+1}^s |J_r| > n+1+s-s_0-|I_{r_0}|-\sum_{r=s_0+1}^s |J_r|.$$ On the other hand from Definition \ref{fil} we have that $$p(v) \geq |I_{r_0}|+\sum_{r=s_0+1}^s |J_r|-s+s_0-1.$$ From the inequality $\deg(w)+p(v)> n$ we see that the intersection product $v \cdot w$ is zero. 
\end{proof} \begin{thm}\label{X[n]} Let $X$ be a hyperelliptic curve of genus $g$ and $n$ be an integer. Denote by $R^*(X[n])$ its tautological ring. The intersection pairings $$R^d(X[n]) \times R^{n-d}(X[n]) \rightarrow \mathbb{Q}$$ are perfect for all $0 \leq d \leq n$. The space of relations is generated by the vanishing of the following cycles: \begin{itemize} \item The Faber-Pandharipande cycle, \item The Gross-Schoen cycle, \item All products of the form $D_I \cdot D_J$ for subsets $I,J$ of $\{1, \dots, n\}$ whenever $I \not \subseteq J$, $J \not \subseteq I$ and $I \cap J \neq \emptyset$. \item All products of the form $x \cdot D_I$, where $I$ is a subset of $\{1, \dots , n\}$ with at least 3 elements and $x$ has the form $d_{i,j}+K_j$ or $d_{i,k}-d_{j,k}$ for $i,j \in I$ and $k \in \{1, \dots , n\} \setminus I$. \end{itemize} \end{thm} \begin{proof} We have proved in Proposition \ref{standard} that the tautological groups are generated by standard monomials. Consider a basis for the vector space $R^d(X[n])$ consisting of standard monomials of degree $d$. It follows from Proposition \ref{3} that the intersection matrix of the pairing between tautological classes has a triangular structure with respect to our ordering of generators. To finish the study of the pairing we need to analyze the blocks along the main diagonal. We will see that these are intersection matrices of the pairings of $R^*(X^{|S|})$ for various subsets $S$ of $\{1, \dots, n\}$. In other words we need to understand the block corresponding to standard monomials having the same $D$-part. Let $D$ be any such monomial. It is a product of the divisor classes $D_I$. Consider the graph $\mathcal{G}$ associated to the monomial $D$. The subset $S$ of the set $\{1, \dots, n\}$ is defined as in \ref{S}. Now suppose that $v_1,v_2$ are standard monomials satisfying $D(v_1)=D(v_2)$. 
It is easy to see that the intersection number $$v_1 \cdot v_2^* \in R^n(X[n]) \cong \mathbb{Q}$$ and the number $$a(v_1)b(v_1) \cdot a(v_2^*)b(v_2^*) \in R^{|S|}(X^{|S|}) \cong \mathbb{Q}$$ differ by $(-1)^{\epsilon}$, where $$\epsilon=|\cup_{r=1}^m I_r|+\sum_{i \in V(\mathcal{G})} \deg(i).$$ This shows that square blocks on the diagonal of the intersection matrix are related to the pairings of $R^*(X^{|S|})$ for various $S$ in a simple way. In particular, we see the direct connection between tautological relations on $X[n]$ and the products $X^{|S|}$. We have studied the pairings for $R^*(X^n)$ for all $n$ and we have found all tautological relations. This shows the same result for $X[n]$ as well. We need to prove that the relations stated above form a set of generators. It is not a priori clear that all relations described in \ref{relations} follow from these relations; the argument in the next section shows that they do. \end{proof} \section{The tautological ring of $\mathcal{H}_{g,n}^{rt}$} We will show that the tautological ring of the moduli space $\mathcal{H}_{g,n}^{rt}$ has the same description as $R^*(X[n])$ studied in the previous section. Let $X$ be a smooth hyperelliptic curve of genus $g$ as before. Consider the natural map $$F:X[n] \rightarrow \mathcal{H}_{g,n}^{rt}.$$ Tautological classes on both sides are connected to each other via the following relations: $$F^*(K_i)=K_i \ \text{for} \ 1 \leq i \leq n,$$ $$F^*(D_{i,j})=d_{i,j}-\sum_{i,j \in I}D_I, \qquad F^*(D_I)=D_I \ \text{when} \ |I| \geq 3.$$ As a result we get a ring homomorphism between tautological algebras: $$F^*: R^*(\mathcal{H}_{g,n}^{rt}) \rightarrow R^*(X[n]).$$ We want to prove that this homomorphism identifies $R^*(\mathcal{H}_{g,n}^{rt})$ with the tautological ring of the fiber $X[n]$. It is clear from the definition that $F^*$ is surjective. We also need to show the injectivity of this map. Notice that both rings are generated by divisors and there is a natural bijection between these generators. 
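The pull-back formulas for $F^*$ above act on divisor generators in a purely combinatorial way, so they can be modelled by a short script. The encoding below (symbol tuples `('K', i)`, `('d', i, j)`, `('D', I)`, `('Dij', i, j)` and the name `pullback_divisor`) is a hypothetical illustration, not notation from the paper.

```python
from fractions import Fraction
from itertools import combinations

def pullback_divisor(sym, n):
    """Toy model of F^* on divisor generators (hypothetical encoding).

    A class is a dict {basis symbol: rational coefficient}.  Symbols:
    ('K', i) and ('D', I) pull back to themselves; ('Dij', i, j) models
    the moduli-side divisor D_{i,j}, which pulls back to
    d_{i,j} - sum over subsets I containing i, j with |I| >= 3 of D_I.
    """
    kind = sym[0]
    if kind == 'K':                        # F^*(K_i) = K_i
        return {sym: Fraction(1)}
    if kind == 'D' and len(sym[1]) >= 3:   # F^*(D_I) = D_I for |I| >= 3
        return {sym: Fraction(1)}
    if kind == 'Dij':                      # F^*(D_{i,j}) = d_{i,j} - sum D_I
        i, j = sym[1], sym[2]
        result = {('d', i, j): Fraction(1)}
        others = [k for k in range(1, n + 1) if k not in (i, j)]
        # every superset I of {i, j} with at least one extra element has |I| >= 3
        for r in range(1, len(others) + 1):
            for extra in combinations(others, r):
                result[('D', frozenset((i, j) + extra))] = Fraction(-1)
        return result
    raise ValueError(sym)
```

For instance, with $n=3$ the only admissible subset containing $1,2$ is $\{1,2,3\}$, so the model returns $d_{1,2}-D_{\{1,2,3\}}$.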
To prove the injectivity it is enough to verify that for every relation among divisors on the fiber $X[n]$ the corresponding divisors on $\mathcal{H}_{g,n}^{rt}$ satisfy the same relation. \begin{enumerate} \item Our arguments in sections 2 and 3 show that all tautological relations on the fiber $X^n$ hold on the moduli space $\mathcal{C}^n$. Their pull-back to $\mathcal{H}_{g,n}^{rt}$ via the contraction map $$\mathcal{H}_{g,n}^{rt} \rightarrow \mathcal{C}^n$$ establishes the first class of relations. \item We now consider the first class of relations among the divisor classes $D_I$. The product $D_I \cdot D_J \in R^2(\mathcal{H}_{g,n}^{rt})$ is zero unless $$I \subseteq J, \mathrm{or} \qquad J \subseteq I, \mathrm{or} \qquad I \cap J = \emptyset.$$ Notice that this holds even in $R^*(\overline{\mathcal{H}}_{g,n})$ for all subsets $I,J$ with at least two elements. In (2) we only consider subsets with at least three elements. \item Let $I \subset \{1,\dots,n\}$ be a subset with $|I| \geq 3$ and $i_I:X_I \rightarrow X^n$ denote the inclusion. The kernel $\ker(i_I^*)$ is generated by the divisor classes $d_{i,j}+K_i$ and $d_{i,k}-d_{j,k}$ for distinct elements $i,j \in I$ and $k$ in the complement of $I$. We will prove that $$( \psi_i + \sum_{i \in J, j \notin J} D_J) \cdot D_I \qquad \text{and} \ (\sum_{i,j \in J} D_J - \sum_{i,k \in J} D_J) \cdot D_I$$ are zero in $R^2(\mathcal{H}_{g,n}^{rt})$. The first vanishing follows from the well-known formula for the $\psi$ classes in genus zero, which we recall: Let $i \in \{1, \dots , n\}$ be an element and assume that $j,k \in \{1, \dots, n\} \backslash \{i\}$ are arbitrary distinct elements. Then one has the following equality in $A^1(\overline{M}_{0,n})$: $$\psi_i=\sum_{\substack{i \in I \\j,k \notin I}}D_I.$$ The second relation is an easy implication of the relations proved in the previous part. \item Let $V=V_1 \cap \dots \cap V_k,W$ and $Z$ be subvarieties of $X^n$ as in part (4) in \ref{relations}, so that $V \cap W=Z$. 
After possibly relabeling the indices, we can assume that $$Z=X_{I_0}, \qquad V_i=X_{I_i}, \ \mathrm{for} \ 1 \leq i \leq k, \qquad W=\prod_{i=2}^{r_1} d_{1,i} \cdot \prod_{j=1}^k d_{1,r_j+1} ,$$ where $1 \leq r_1 < \dots < r_{k+1} \leq n, I_0=\{1, \dots ,r_{k+1}\}$, and $I_i=\{r_i+1, \dots , r_{i+1}\}$ for $1 \leq i \leq k$. From these data we get the relation $$P_{W/X^n}(-\sum_{I_0 \subseteq I} D_I) \cdot \prod_{i=1}^k D_{I_i}=0$$ on the space $X[n]$, for $$P_{W/X^n}(t)=\prod_{i=2}^{r_1} (t+d_{1,i}) \cdot \prod_{j=1}^k (t+d_{1,r_j+1}).$$ We want to prove a similar identity on the moduli space $\mathcal{H}_{g,n}^{rt}$. This is proven by showing that any monomial in the expansion of this expression is zero. Consider any such monomial. It has the form $$\prod_{i=2}^{r_1} D_{J_i} \cdot \prod_{j=1}^{k} D_{J_{r_j+1}} \cdot D_{I_j},$$ where $$1,i \in J_i \ \text{for} \ 2 \leq i \leq r_1, \qquad 1,r_j+1 \in J_{r_j+1} \ \text{for} \ 1 \leq j \leq k.$$ Assume that the product above is not zero. Notice that no two of the subsets $J_i$ are disjoint, since 1 lies in all of them. This means that for any two indices $i$ and $j$ one of the inclusions $J_i \subseteq J_j$ or $J_j \subseteq J_i$ holds. When $1 \leq j \leq k$ the non-vanishing of the product $D_{J_{r_j+1}} \cdot D_{I_j}$ implies that $I_j \subseteq J_{r_j+1}$. This means that $\cup_{j=1}^{k}I_j \subseteq \cup_{j=1}^k J_{r_j+1}$. Since $1,i \in J_i$ for $2 \leq i \leq r_1$ we see that the union of all $J_i$'s contains $I_0$. But this union coincides with one of the $J_i$'s. This means that $I_0 \subseteq J_i$ for some $i$. But the term $D_{J_i}$ for such a set $J_i$ is excluded from the expression above. This contradiction shows the desired vanishing. \item Let $I \subset \{1,\dots,n\}$ be a subset with $|I| \geq 3$ corresponding to the subvariety $Z=X_I$ of $X^n$. The relation $$P_{Z/X^n}(-\sum_{I \subseteq J}D_J)=0$$ on the space $X[n]$ holds. 
To prove the corresponding relation on the moduli space we will show that the product $$\prod_{j \in I \setminus \{i\}} (d_{i,j}- \sum_{I \subseteq J} D_J)$$ is zero, where $i \in I$ is an arbitrary fixed element. This is verified as in the previous case by showing that all monomials occurring in the expansion of the expression above are zero. In the proof we only need to use relations of type (2). \end{enumerate} Notice that our proof shows that the seemingly complicated relations of type (4) and (5) are formal consequences of the trivial relations of type (2). This completes the proof of Theorem \ref{X[n]} as well. The argument above proves that $F^*$ is indeed an isomorphism. This shows that the restriction map induces an isomorphism between the tautological rings involved and finishes the proof of Theorem \ref{X}. \section{Tautological cohomology via monodromy} We have seen a complete description of relations among tautological classes in the Chow ring of $\mathcal{H}_{g,n}^{rt}$. By the Gorenstein property of the tautological ring, there are no further relations between tautological cycles in cohomology. More precisely, consider the cycle class map $$\text{cl}: A^*(\mathcal{H}_{g,n}^{rt}) \rightarrow H^*(\mathcal{H}_{g,n}^{rt}).$$ The image of $R^*(\mathcal{H}_{g,n}^{rt})$ is called the tautological cohomology of $\mathcal{H}_{g,n}^{rt}$ and is denoted by $RH^*(\mathcal{H}_{g,n}^{rt})$. From Theorem \ref{X} we get the following isomorphism: $$\text{cl}: R^*(\mathcal{H}_{g,n}^{rt}) \stackrel{\sim}{\rightarrow} RH^*(\mathcal{H}_{g,n}^{rt}).$$ This shows that the tautological cohomology ring $RH^*(\mathcal{H}_{g,n}^{rt})$ has Poincar\'e duality as well. Now let $X$ be a smooth hyperelliptic curve of genus $g$. The tautological cohomology $RH^*(X[n]) \subset H^*(X[n])$ is defined analogously. The cycle class map identifies the tautological algebras of $X[n]$ in Chow and cohomology. 
We therefore get the following diagram of isomorphisms from Theorems \ref{X} and \ref{X[n]}: \begin{equation}\label{square}\tag{6} \begin{CD} R^*(\mathcal{H}_{g,n}^{rt}) @>F^*>> R^*(X[n]) \\ @VV\text{cl}V @VV\text{cl}V\\ RH^*(\mathcal{H}_{g,n}^{rt}) @>F^*>> RH^*(X[n]) \end{CD} \end{equation} We want to identify these isomorphic algebras with monodromy invariant classes in the rational cohomology of $X[n]$. This follows from a result of Petersen and Tommasi proved for arbitrary curves, which we recall below. Let $X$ be a smooth curve of genus $g$. Recall that there is a natural action of the symplectic group $\mathfrak{Sp}(2g,\mathbb{Q})$ on the rational cohomology ring of $X$. This induces an action of this group on the cohomology of $X[n]$ as well. As was observed in \cite{PT}, the tautological cohomology of $X[n]$ can be identified with the monodromy invariant classes in cohomology: \begin{prop} Let $X$ be a smooth projective curve of genus $g$. The subalgebra $H^*(X[n])^{\mathfrak{Sp}(2g,\mathbb{Q})}$ of monodromy invariant classes coincides with the tautological cohomology ring $RH^*(X[n])$. \end{prop} In particular, $RH^*(X[n])$ is a Gorenstein algebra for every curve $X$ and every natural number $n$. This is very different from what we know in Chow, as $R^*(X^n)$ and $R^*(X[n])$ don't have Poincar\'e duality for generic curves already when $n=2$. However for hyperelliptic curves we proved the isomorphism between tautological rings in Chow and cohomology. This shows that all isomorphic algebras in the diagram \ref{square} can be naturally identified with monodromy invariant classes. This finishes the proof of Corollary \ref{M}. \section{A remark on Pixton's conjecture} Our results in the previous sections deal with relations among tautological classes on the space $\mathcal{H}_{g,n}^{rt}$. In this section we want to use the relations found there to obtain relations on the space $\overline{M}_{g,n}$. 
This reformulation uses the following ingredients: the formula for the class of the hyperelliptic locus and the natural evaluation map on the space of curves with rational tails. Then we raise questions about the connection between the resulting relations and Pixton's recent conjecture. The class of the hyperelliptic locus inside $M_g$ was calculated by Mumford in the seminal paper \cite{M}. It is a class of degree $g-2$ and the formula is a polynomial in terms of lambda and kappa classes. Recall that the lambda class $\lambda_i$ for $0 \leq i \leq g$ is defined as the Chern class $\lambda_i:=c_i(\mathbb{E})$ of the Hodge bundle $\mathbb{E}$. On the other hand we know that the tautological group $R^{g-2}(M_g)$ has dimension one. Indeed the lambda class $\lambda_{g-2}$ is a generator. Faber \cite{F1} calculated the explicit multiplicity: \begin{equation}\label{H}\tag{7} [\mathcal{H}_g]=\frac{2^{2g}-1}{g(g+1)(4g^2-1)|B_{2g-2}|}\lambda_{g-2} \in R^*(M_g), \end{equation} where $B_{2g-2}$ is a Bernoulli number. Let us explain how we get relations on $\overline{M}_{g,n}$ from those on hyperelliptic curves. We have seen that the vanishing of the Faber-Pandharipande cycle on $\mathcal{H}_{g,2}^{rt}$ is equivalent to the relation $$K_1K_2=(2g-2)K_1d_{1,2}$$ where $d_{1,2}$ is the diagonal class. We now use the formula for the class of the hyperelliptic locus. If we rewrite this relation in terms of standard tautological classes on $M_{g,2}^{rt}$ we get that $$(\psi_1\psi_2-(2g-1)\psi_1^2) \cdot \lambda_{g-2}=0 \in R^g(M_{g,2}^{rt}).$$ To obtain a relation in $R^*(\overline{M}_{g,2})$ we use the evaluation $$A^*(M_{g,n}^{rt}) \rightarrow \mathbb{Q}, \qquad \epsilon \mapsto \int_{\overline{M}_{g,n}} \epsilon \cdot \lambda_{g-1} \lambda_g.$$ Recall that this gives a well-defined evaluation on $A^*(M_{g,n}^{rt})$. 
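Formula \eqref{H} can be evaluated with exact rational arithmetic; the following script is an illustrative computation, with helper names not taken from the paper. For $g=2$ the coefficient is $1$, consistent with $[\mathcal{H}_2]=\lambda_0=1$ in $R^0(M_2)$, since every genus two curve is hyperelliptic.

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli number B_m via the standard recurrence
    sum_{k=0}^{m} C(m+1, k) B_k = 0, with B_0 = 1 (so B_1 = -1/2)."""
    B = [Fraction(1)]
    for j in range(1, m + 1):
        B.append(-sum(comb(j + 1, k) * B[k] for k in range(j)) / (j + 1))
    return B[m]

def faber_coefficient(g):
    """Coefficient of lambda_{g-2} in formula (7) for [H_g], as an exact
    rational number."""
    return Fraction(2 ** (2 * g) - 1,
                    g * (g + 1) * (4 * g ** 2 - 1)) / abs(bernoulli(2 * g - 2))
```

For example, `faber_coefficient(2)` returns $1$, while larger $g$ produce the explicit rational multiplicities predicted by the formula.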
The resulting equality becomes $$\int_{\overline{M}_{g,2}} (\psi_1\psi_2-(2g-1)\psi_1^2) \lambda_{g-2}\lambda_{g-1}\lambda_g=0.$$ This is a well-known identity among Hodge integrals of the simplest type. It can be easily shown using the string equation. However notice that this last identity does \emph{not} prove the vanishing of the Faber-Pandharipande cycle on $\mathcal{H}_{g,2}^{rt}$. We can apply the same method to other relations. The vanishing of the Gross-Schoen cycle gives a relation of degree $3g-1$ on $\overline{M}_{g,3}$. The resulting relation is slightly more complicated than the previous one. On the space $\mathcal{H}_{g,3}^{rt}$ it has the following form: $$d_{1,2}d_{1,3}-\frac{1}{2g-2}(K_1d_{2,3}+K_2d_{1,3}+K_3d_{1,2})+\frac{1}{(2g-2)^2}(K_1K_2+K_1K_3+K_2K_3)=0,$$ where $$d_{i,j}=D_{i,j}+D_{1,2,3}, \qquad K_i=\psi_i-\sum_{i \in I}D_I.$$ The divisor $D_I$ corresponds to curves with the marking set $I$ on the rational component. To obtain a relation on $\overline{M}_{g,3}$ we proceed as before. The resulting relation seems less obvious to prove with known methods. In the same way we get a tautological relation of degree $4g-2$ on $\overline{M}_{g,2g+2}$. The resulting relation is symmetric with respect to the $2g+2$ points. It has the following form: \begin{thm} The following relation holds in $R^{4g-2}(\overline{M}_{g,2g+2})$: $$\left (\sum_{\mathcal{I}} b_{i_1,i_2} \dots b_{i_{2g+1},i_{2g+2}} \right) \cdot \lambda_{g-2}\lambda_{g-1}\lambda_g=0.$$ Each term of the sum corresponds to a partition $\mathcal{I}$ of $\{1, \dots, 2g+2\}$ into $g+1$ subsets with 2 elements. \end{thm} In \cite{Pi} Pixton proposed a class of conjectural tautological relations on the Deligne-Mumford space $\overline{M}_{g,n}$. His relations are defined as a weighted sum over all stable graphs on the boundary $\partial M_{g,n}=\overline{M}_{g,n} \setminus M_{g,n}$. The weights on the graphs are tautological classes. Pixton also conjectures that all tautological relations have this form. 
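The index set of the sum in the theorem above is the set of perfect matchings of $\{1,\dots,2g+2\}$ into $g+1$ pairs, of which there are $(2g+1)!!=(2g+2)!/\bigl(2^{g+1}(g+1)!\bigr)$. A short enumeration (the helper names are illustrative) makes the count concrete.

```python
from math import factorial

def perfect_matchings(elements):
    """Yield all partitions of the tuple `elements` (of even length)
    into unordered pairs."""
    if not elements:
        yield ()
        return
    first, rest = elements[0], elements[1:]
    for k in range(len(rest)):
        pair = (first, rest[k])
        for m in perfect_matchings(rest[:k] + rest[k + 1:]):
            yield (pair,) + m

def matching_count(g):
    """Number of terms in the sum: matchings of {1, ..., 2g+2} into pairs."""
    return sum(1 for _ in perfect_matchings(tuple(range(1, 2 * g + 3))))
```

For $g=1$ this gives $3$ terms and for $g=2$ it gives $15$, matching the double-factorial count.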
A recent result \cite{PPZ} of Pandharipande, Pixton and Zvonkine shows that Pixton's relations are connected with Witten's class on the space of 3-spin curves. Their analysis shows that Pixton's relations hold in cohomology. Janda \cite{J} derives Pixton's relations by applying the virtual localization formula to the moduli space of stable quotients introduced in \cite{MOP}. This establishes Pixton's relations in Chow. Several known relations such as Keel's relation \cite{K} on $\overline{M}_{0,4}$, Getzler's relation \cite{G1} on $\overline{M}_{1,4}$ and the Belorousski-Pandharipande relation \cite{BP} on $\overline{M}_{2,3}$ follow from Pixton's relations. While this method produces a large class of relations, there is little evidence for Pixton's conjecture at this point. The mysterious structure of the tautological algebras and the complicated nature of the relations make his conjecture more delicate. The question whether all relations in the tautological ring come from Pixton's relations seems interesting and challenging. The relations we described in this section give an example of an infinite collection of tautological relations. The connection between these relations and Pixton's relations is not clear to us. We can show that the relation on $M_{2,6}^{rt}$ already comes from a stable quotient relation on $\overline{M}_{2,6}$. Our computation is based on the notes \cite{JSQ} by Janda on stable quotient relations on products of the universal curve. We don't know if it is possible to derive our relations from Pixton's relations in general. Understanding their connection needs new ideas. \section{Concluding remarks and further directions} A conjecture of Faber and Pandharipande \cite{FP2} predicts that every relation in $R^*(M_{g,n}^{rt})$ and $R^*(M_{g,n}^{ct})$ can be extended to a tautological relation in $R^*(\overline{M}_{g,n})$. 
It is natural to ask whether tautological relations on $\mathcal{H}_{g,n}^{rt}$ can be extended to tautological relations on $\mathcal{H}_{g,n}^{ct}$ and $\overline{\mathcal{H}}_{g,n}$. As we have seen in Section \ref{g+1}, the degree $g+1$ relation on $\mathcal{H}_{g,2g+2}^{rt}$ can be naturally extended to the space of curves of compact type. The reason is that the Abel-Jacobi map extends to curves of compact type and the vanishing of $\theta^{g+1}$ holds over the universal Jacobian over $\mathcal{H}_g^{ct}$. The result of Grushevsky and Zakharov gives an explicit formula for the boundary terms arising in the extended relation. Working with semi-abelian varieties and using the result of \cite{GZ2} should prove the existence of an extension of this relation to $\overline{\mathcal{H}}_{g,n}$ as well. Extending the Faber-Pandharipande and Gross-Schoen relations to the compactifications is more subtle. A naive guess would be to obtain these vanishings from the degree $g+1$ relation. One could start from this relation and push it forward to spaces with smaller numbers of points. At each step we can multiply the resulting relation with a tautological class. In genus two both of these relations follow from the vanishing of $\theta^3$ after suitable push-forwards. But in genus 3 one can only obtain the Faber-Pandharipande relation with this method. In genus $g \geq 4$ none of these relations follow from this approach. In fact, if this method gave the vanishing of these cycles, the same argument would prove the same statement for \emph{every family} of curves with trivial kappa classes in positive degrees. In particular, it would show their triviality for any fixed curve. But according to \cite{Y1} the Faber-Pandharipande cycle does not vanish for a generic curve of genus $g \geq 4$. We also know the non-triviality of the Gross-Schoen cycle for generic curves of genus $g \geq 3$. 
This is shown by Ceresa \cite{CE} in characteristic zero and by Fakhruddin \cite{FA} in positive characteristic. Finding a complete description of tautological relations on $\mathcal{H}_{g,n}^{ct}$ and $\overline{\mathcal{H}}_{g,n}$ seems to be an interesting question. In \cite{PT} Petersen and Tommasi show the existence of a counterexample to the Gorenstein conjectures for $\overline{M}_{2,n}$ for some $n \leq 20$. A more recent result \cite{P2} of Petersen shows that the tautological ring of $M_{2,n}^{ct}$ is not Gorenstein when $n \geq 8$. These results suggest that the structure of the tautological rings of the larger compactifications becomes more complicated already in genus two. It is not clear whether for a fixed genus all relations can be obtained from finitely many relations, independently of $n$. Another natural question concerns other loci of curves with special linear systems. The next case is the space of trigonal curves. In \cite{T3} we study the connection between the \emph{modified Gross-Schoen cycles} and Faber's relations in \cite{F2} on certain spaces of trigonal curves. \bibliographystyle{amsplain}
https://arxiv.org/abs/1406.7403
Tautological classes on the moduli space of hyperelliptic curves with rational tails
We study tautological classes on the moduli space of stable $n$-pointed hyperelliptic curves of genus $g$ with rational tails. Our result gives a complete description of tautological relations. The method is based on the approach of Yin in comparing tautological classes on the moduli of curves and the universal Jacobian. It is proven that all relations come from the Jacobian side. The intersection pairings are shown to be perfect in all degrees. We show that the tautological algebra coincides with its image in cohomology via the cycle class map. The latter is identified with monodromy invariant classes in cohomology. The connection with recent conjectures by Pixton is also discussed.
https://arxiv.org/abs/math/0701928
2-torus manifolds, cobordism and small covers
Let ${\frak M}_n$ be the set of equivariant unoriented cobordism classes of all $n$-dimensional 2-torus manifolds, where an $n$-dimensional 2-torus manifold $M$ is a smooth closed manifold of dimension $n$ with effective smooth action of a rank $n$ 2-torus group $({\Bbb Z}_2)^n$. Then ${\frak M}_n$ forms an abelian group with respect to disjoint union. This paper determines the group structure of ${\frak M}_n$ and shows that each class of ${\frak M}_n$ contains a small cover as its representative in the case $n=3$.
\section{Introduction} An $n$-dimensional 2-torus manifold $M$ is a smooth closed manifold of dimension $n$ with an effective smooth action of a rank $n$ 2-torus group $({\Bbb Z}_2)^n$. Since the action is effective, the fixed point set of the action is 0-dimensional (i.e., it is formed by finitely many isolated points) if $M$ has a fixed point. In this paper, we shall study this class of geometrical objects from the viewpoint of cobordism. \vskip .2cm Let ${\frak M}_n$ denote the set of equivariant unoriented cobordism classes of all $n$-dimensional 2-torus manifolds. Then ${\frak M}_n$ forms an abelian group with respect to disjoint union, and in particular, ${\frak M}_n$ also forms a vector space over ${\Bbb Z}_2$. The zero element of ${\frak M}_n$ is given by a canonical 2-torus manifold, which is the $n$-dimensional standard sphere $S^n$ with the standard $({\Bbb Z}_2)^n$-action defined by $$(x_0,x_1,...,x_n)\longmapsto (x_0, g_1x_1,...,g_nx_n),$$ fixing two isolated points with the same $({\Bbb Z}_2)^n$-representation, where $(x_0,x_1,...,x_n)\in S^n$ and $(g_1,...,g_n)\in ({\Bbb Z}_2)^n$. When $n=1, 2$, it is known from the work of Conner and Floyd \cite{cf} that ${\frak M}_1$ is trivial and ${\frak M}_2$ is generated by the standard $({\Bbb Z}_2)^2$-action on ${\Bbb R}P^2$. As for $n\geq 3$, as far as the author knows, the structure of ${\frak M}_n$ is still far from well understood. One of the main objectives of this paper is the following problem. \vskip .2cm {\bf Problem:} {\em To determine the group structure of ${\frak M}_n$ when $n\geq 3$.} \vskip .2cm In 1991, Davis and Januszkiewicz introduced and studied a special kind of 2-torus manifolds---small covers, each of which is locally isomorphic to a faithful representation of $({\Bbb Z}_2)^n$ on ${\Bbb R}^n$, and its orbit space is a simple convex polytope. This establishes a direct link between equivariant topology and combinatorics. 
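The standard action on $S^n$ above fixes exactly the two points $\pm e_0=(\pm 1,0,\dots,0)$. A quick sanity check on a finite set of points, the vertices $\pm e_0,\dots,\pm e_n$ of the cross-polytope (a test set chosen purely for illustration; it does not prove that the full fixed point set is $\{\pm e_0\}$), confirms this:

```python
from itertools import product

def fixed_vertices(n):
    """Vertices of the cross-polytope in S^n fixed by the standard
    (Z_2)^n-action.  (Hypothetical helper for illustration only.)"""
    verts = []
    for i in range(n + 1):
        for s in (1, -1):
            v = [0] * (n + 1)
            v[i] = s
            verts.append(tuple(v))

    def act(g, x):
        # g = (g_1, ..., g_n) with g_i in {1, -1}; the action fixes x_0 and
        # multiplies each x_i by g_i, as in the displayed formula.
        return (x[0],) + tuple(gi * xi for gi, xi in zip(g, x[1:]))

    return [v for v in verts
            if all(act(g, v) == v for g in product((1, -1), repeat=n))]
```

Running `fixed_vertices(3)` returns only $(\pm 1,0,0,0)$, the two isolated fixed points of the zero element of ${\frak M}_3$.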
A typical example of an equivariant nonbounding small cover is a real projective space ${\Bbb R} P^n$ with a standard action of $({\Bbb Z}_2)^n$. Its orbit space is an $n$-simplex. Another typical example of a bounding small cover is a product of $n$ copies of a circle $S^1$ with reflection, and its orbit space is an $n$-cube. Thus, we see that when $n=2$, the two typical examples above can be used as representatives of the two classes in ${\frak M}_2$, respectively. This leads us to another objective of this paper, i.e., the following conjecture. \vskip .2cm {\bf Conjecture:} {\em Each class of ${\frak M}_n$ contains a small cover as its representative. }\vskip .2cm Note that in the non-equivariant case, the above conjecture has been shown to be true by Bukhshtaber and Ray in \cite{br}. \vskip .2cm In this paper we settle the above problem and conjecture in the 3-dimensional case, see Theorems~\ref{dim} and \ref{small}. \vskip .2cm The paper is organized as follows. In Section 2, we formulate the complete equivariant cobordism invariant (i.e., the prime tangent representation set $\mathcal{N}_\beta$) of 2-torus manifolds from the Stong homomorphism, and then we study some properties of the complete equivariant cobordism invariant $\mathcal{N}_\beta$. In Section 3 we introduce the notion of an essential generator of ${\frak M}_n$, and show that any element of ${\frak M}_n$ is a linear combination of essential generators. In Section 4, we review the work of Davis and Januszkiewicz \cite{dj} and give two kinds of 3-dimensional small covers, which play a key role in the study of ${\frak M}_3$. In Section 5 we introduce the moment graphs induced by 2-torus manifolds. The group structure of ${\frak M}_3$ is determined completely in Section 6, and the above conjecture is settled in the 3-dimensional case in Section 7. \vskip .2cm The author expresses his gratitude to Professor M. 
Masuda for his valuable suggestions and comments, and especially for a helpful conversation on the argument of Proposition~\ref{bound}. The author also expresses his gratitude to Professor R.E. Stong for his valuable suggestions and comments. \section{$G$-representations and Stong homomorphism} Let $G=({\Bbb Z}_2)^n$, and let $\text{Hom}(G,{\Bbb Z}_2)$ be the set of all homomorphisms $\rho: G\longrightarrow {\Bbb Z}_2$, which consists of $2^n$ distinct homomorphisms. We agree to let $\rho_0$ denote the trivial element in $\text{Hom}(G,{\Bbb Z}_2)$, i.e., $\rho_0(g)=1$ for all $g\in G$. The irreducible real $G$-representations are all one-dimensional and correspond to the elements of $\text{Hom}(G,{\Bbb Z}_2)$. Thus, every irreducible real representation of $G$ has the form $\lambda_\rho: G\times{\Bbb R}\longrightarrow{\Bbb R}$ with $\lambda_\rho(g,x)=\rho(g)\cdot x$ for $\rho\in\text{Hom}(G,{\Bbb Z}_2)$. \vskip .2cm Given an element $\beta$ of ${\frak M}_n$, let $(M, \phi)$ be a representative of $\beta$ such that $M$ has a fixed point. Taking an isolated point $p$ in $M^G$, the $G$-representation at $p$ can be written as $$\tau_pM=\bigoplus_{\rho\not= \rho_0}\lambda_\rho^{q_\rho}$$ with $\sum_{\rho\not=\rho_0}q_\rho=n$. By the Borel Theorem (see \cite{ap}) and the effectiveness of the action, if $q_\rho\not=0$, then $q_\rho$ must be one. Thus, $\tau_pM$ is the direct sum of $n$ irreducible real $G$-representations (which are linearly independent). The collection $\mathcal{N}_M=\{[\tau_pM] \big\vert\ p\in M^G\}$ is called the {\em tangent representation set of $(M, \phi)$}, where $[\tau_pM]$ denotes the isomorphism class of $\tau_pM$. \vskip .2cm Let $R_n(G)$ denote the vector space over ${\Bbb Z}_2$ generated by the representation classes of dimension $n$. Then $R_*(G)=\sum_{n\geq 0}R_n(G)$ is a graded commutative algebra over ${\Bbb Z}_2$ with unit. The multiplication in $R_*(G)$ is given by $[V_1]\cdot[V_2]=[V_1\oplus V_2]$. 
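The structure just described is easy to model concretely: a character is a vector in $({\Bbb Z}_2)^n$, the sum of two characters is componentwise addition mod 2 (the tensor product of the corresponding representations), and an isomorphism class of an $n$-dimensional representation is an unordered list of characters. The following sketch records these conventions; the names are illustrative, not from the paper.

```python
def char_sum(rho, mu):
    """Tensor product of one-dimensional representations: componentwise
    addition of the corresponding GF(2)-vectors."""
    return tuple((a + b) % 2 for a, b in zip(rho, mu))

def rep_class(*chars):
    """Isomorphism class of a direct sum of one-dimensional representations;
    the order of the summands is irrelevant, so we sort."""
    return tuple(sorted(chars))

def rep_product(V, W):
    """Multiplication in R_*(G): [V] . [W] = [V (+) W] (direct sum)."""
    return rep_class(*(V + W))
```

In this model every character is its own inverse, reflecting the fact that each nontrivial representation of $({\Bbb Z}_2)^n$ squares to the trivial one.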
We can identify $R_*(G)$ with the graded polynomial algebra over ${\Bbb Z}_2$ generated by $\text{Hom}(G,{\Bbb Z}_2)$, where the addition in $\text{Hom}(G,{\Bbb Z}_2)$ is given by the tensor product of representations $(\rho+\mu)(g)=\rho(g)\cdot\mu(g)$, and the multiplication is given by the direct sum of representations. The homomorphisms $\rho_i:(g_1,...,g_n)\longmapsto g_i$ give a standard basis of $\text{Hom}(G,{\Bbb Z}_2)$. Then $R_*(G)$ is isomorphic to the graded polynomial algebra ${\Bbb Z}_2[\rho_1,...,\rho_n]$. Obviously, each $[\tau_pM]$ of $\mathcal{N}_M$ uniquely corresponds to a monomial of degree $n$ in ${\Bbb Z}_2[\rho_1,...,\rho_n]$ such that the $n$ factors of the monomial form a basis of $\text{Hom}(G,{\Bbb Z}_2)$. \vskip .2cm There is a natural homomorphism $\delta_n:{\frak M}_n\longrightarrow R_n(G)$ defined by $$\delta_n([M,\phi])=\sum_{p\in M^G}[\tau_p M].$$ The following result is essentially due to Stong \cite{s}. \begin{thm}[Stong] \label{s} $\delta_n$ is a monomorphism. \end{thm} Theorem~\ref{s} implies that for each $\beta$ in ${\frak M}_n$, there must be a representative $(M,\phi)$ of $\beta$ such that $\mathcal{N}_M$ is {\em prime} (i.e., either all elements of $\mathcal{N}_{M}$ are distinct or $\mathcal{N}_{M}$ is empty), and $\mathcal{N}_{M}$ is uniquely determined by $\beta$. Define $$\mathcal{N}_{\beta}:=\mathcal{N}_{M},$$ which is called the prime tangent representation set of $\beta$. Then \begin{cor} \label{ns} Let $\beta_1,\beta_2\in {\frak M}_n$. 
Then $$\beta_1=\beta_2\Longleftrightarrow \mathcal{N}_{\beta_1}=\mathcal{N}_{\beta_2}.$$ \end{cor} \begin{rem} Since $\text{Hom}(G,{\Bbb Z}_2)$ is isomorphic to $G$, each $[\tau_pM]$ of $\mathcal{N}_M$ actually corresponds to a unique element (denoted by $[\Delta_p]$) in the quotient $\text{GL}(n,{\Bbb Z}_2)/\text{\bf S}_n$, where $\text{\bf S}_n$ is the subgroup generated by all matrices of the form $E_{ij}$ (i.e., the matrix obtained from the identity matrix $E$ by exchanging its $i$-th and $j$-th columns); this subgroup is isomorphic to the symmetric group of degree $n$. Thus, for any two $\sigma_1,\sigma_2$ in $[\Delta_p]$, there exists a matrix $\theta$ in $\text{\bf S}_n$ such that $\sigma_1=\sigma_2\theta$. This also means that there is a one-to-one correspondence between all bases of $({\Bbb Z}_2)^n$ and $\text{GL}(n,{\Bbb Z}_2)/\text{\bf S}_n$. Here we call $[\Delta_p]$ the {\em tangent matrix} at $p$. With this understood, we often regard each element $[\tau_pM]$ of $\mathcal{N}_M$ as being $[\Delta_p]$. Note that $|\text{GL}(n,{\Bbb Z}_2)|=2^{{{n(n-1)}\over 2}}\prod_{i=1}^n(2^i-1)$, see \cite{ab}. \end{rem} \begin{prop} \label{bound} Let $\beta$ be a nonzero element of ${\frak M}_n$. Then $$n+1\leq \vert\mathcal{N}_\beta\vert\leq{{2^{{{n(n-1)}\over 2}}\prod_{i=1}^n(2^i-1)}\over {n!}}.$$ Moreover, both the upper and lower bounds are sharp. \end{prop} \begin{proof} The lower bound on $\vert \mathcal{N}_\beta\vert$ is a special case of Theorem 1.2 in \cite{l}. Thus, it suffices to prove the upper bound. For this, it is enough to show that there is a nonzero element $\beta'\in {\frak M}_n$ such that $\vert \mathcal{N}_{\beta'}\vert={{2^{{{n(n-1)}\over 2}}\prod_{i=1}^n(2^i-1)}\over {n!}}$.
\vskip .2cm Consider the standard $({\Bbb Z}_2)^n$-action $({\Bbb R}P^n, T_0)$ of $({\Bbb Z}_2)^n$ on the real $n$-dimensional projective space ${\Bbb R}P^n$ defined by the $n$ commuting involutions $$t_i([x_0, x_1,...,x_n])=[x_0, x_1,...,x_{i-1}, -x_i,x_{i+1},...,x_n], \ \ i=1,...,n$$ where $t_1,...,t_n$ generate $({\Bbb Z}_2)^n$. This action fixes the $n+1$ isolated points $$p_{i+1}=[\underbrace{0,...,0}_i, 1, 0,...,0]$$ where $i=0, 1,2,...,n$, and one easily sees that its tangent matrix set is \begin{eqnarray*} \mathcal{N}_0=\{ [\Delta_i]=\left[ \begin{pmatrix} 1 & & & & & & & & \\ & \cdot & & & & & & & \\ & & \cdot & & & & & & \\ & & & \cdot & & & & & \\ 1 & & \cdots & & 1 & & \cdots & & 1 \\ & & & & & \cdot & & & \\ & & & & & & \cdot & & \\ & & & & & & & \cdot & \\ & & & & & & & & 1 \\ \end{pmatrix}\right] \vert i=0,1,2,...,n\} \end{eqnarray*} where the row vector $(1,\cdots, 1,\cdots, 1)$ in $\Delta_i$ denotes the $i$-th row, and one makes the convention that $\Delta_0=E$ when $i=0$; in particular, each $[\Delta_i]$ corresponds to the isolated point $p_{i+1}$. Obviously, $\mathcal{N}_0$ is prime. By direct computation, one has that $\Delta_i\Delta_i=E$, and the product $\Delta_i\Delta_j$ ($i, j\not=0$, $j\not=i$) is obtained from $\Delta_j$ by exchanging its $i$-th and $j$-th columns. Thus, for $i, j\not=0$, one has \begin{equation} \label{r} [\Delta_i\Delta_j] = \begin{cases} [E] & \text{ if } i=j\\ \text{$[\Delta_j]$} & \text{ if } i\not=j. \end{cases} \end{equation} \vskip .2cm Now, let $B_{n+1}$ denote the subset of $\text{GL}(n,{\Bbb Z}_2)$ defined as follows: $$B_{n+1}=\{\sigma\in \text{GL}(n,{\Bbb Z}_2)\vert \sigma\mathcal{N}_0=\mathcal{N}_0\}$$ where $\sigma\mathcal{N}_0=\{[\sigma\Delta_0],[\sigma\Delta_1],...,[\sigma\Delta_n]\}$. Obviously, $B_{n+1}$ is a subgroup of $\text{GL}(n,{\Bbb Z}_2)$, and each element of $B_{n+1}$ permutes $[\Delta_0],[\Delta_1],...,[\Delta_n]$.
One then knows from (\ref{r}) that each $\Delta_i\in B_{n+1}$. \vskip .2cm {\bf Claim I.} $\vert B_{n+1}\vert=(n+1)!$. \vskip .2cm First, we prove that $B_{n+1}$ contains the symmetric group ${\bf S}_n$. Indeed, for any $E_{ij}$ in ${\bf S}_n$ and any $\Delta_l$, $$[E_{ij}\Delta_l]=\begin{cases} [\Delta_l] & \text{ if $i,j\not=l$ or $l=0$}\\ [\Delta_i] & \text{ if $j=l\not=0$}\\ [\Delta_j] & \text{ if $i=l\not=0$.} \end{cases}$$ Next, it is easy to see that any $\sigma$ in $B_{n+1}$ can be expressed as a product of matrices of ${\bf S}_n$ and some of the $\Delta_i$'s. Obviously, when $i\not=0$, $\Delta_i\not\in {\bf S}_{n}$. Thus, $B_{n+1}$ is generated by the matrices of ${\bf S}_n$ and all the $\Delta_i$; since $B_{n+1}$ acts faithfully on the $(n+1)$-element set $\{[\Delta_0],[\Delta_1],...,[\Delta_n]\}$ and these generators realize every transposition of that set, Claim I follows. \vskip .2cm {\bf Claim II.} For any $\sigma,\tau\in \text{GL}(n,{\Bbb Z}_2)$, $$\sigma\mathcal{N}_0\cap\tau\mathcal{N}_0\not=\emptyset\Longleftrightarrow \sigma\mathcal{N}_0=\tau\mathcal{N}_0.$$ \vskip .2cm It is obvious that if $\sigma\mathcal{N}_0=\tau\mathcal{N}_0$ then $\sigma\mathcal{N}_0\cap\tau\mathcal{N}_0\not=\emptyset$. Conversely, if $\sigma\mathcal{N}_0\cap\tau\mathcal{N}_0\not=\emptyset$, then there are $[\Delta_i], [\Delta_j]\in \mathcal{N}_0$ such that $[\sigma\Delta_j]=[\tau\Delta_i]$. By the definition of $B_{n+1}$, one has that $$\sigma\mathcal{N}_0=\tau\mathcal{N}_0\Longleftrightarrow \sigma^{-1}\tau\in B_{n+1}.$$ Hence, it suffices to show that $\sigma^{-1}\tau\in B_{n+1}$. From $[\sigma\Delta_j]=[\tau\Delta_i]$, there is an element $s\in \text{\bf S}_n$ such that $\sigma\Delta_js=\tau\Delta_i$, so $$\sigma^{-1}\tau=\Delta_j s\Delta_i.$$ Note that $\Delta_i^{-1}=\Delta_i$ and that $\Delta_j$, $s$ and $\Delta_i$ all lie in $B_{n+1}$. One thus concludes that $\sigma^{-1}\tau\in B_{n+1}$.
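\vskip .2cm For concreteness (an illustrative aside, not part of the original argument), one may evaluate these quantities in the case $n=3$: Claim I gives $$\vert B_4\vert=4!=24, \qquad \vert\text{GL}(3,{\Bbb Z}_2)\vert=2^{{3\cdot 2}\over 2}(2-1)(2^2-1)(2^3-1)=8\cdot 1\cdot 3\cdot 7=168,$$ and the upper bound of Proposition~\ref{bound} reads $\vert\mathcal{N}_\beta\vert\leq 168/3!=28$.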
\vskip .2cm For any automorphism $\sigma:({\Bbb Z}_2)^n\longrightarrow ({\Bbb Z}_2)^n$ with $\sigma\in \text{GL}(n,{\Bbb Z}_2)$, one obtains new generators $\sigma (t_1),..., \sigma(t_n)$ of $({\Bbb Z}_2)^n$, and hence a new $({\Bbb Z}_2)^n$-action $({\Bbb R}P^n, \sigma T_0)$ from $({\Bbb R}P^n, T_0)$ by using the generators $\sigma (t_1),..., \sigma(t_n)$, whose tangent matrix set is $(\sigma^{-1})^\top\mathcal{N}_0$. By the above arguments together with Corollary~\ref{ns}, up to equivariant cobordism, there are ${{\vert\text{GL}(n,{\Bbb Z}_2)\vert}\over{\vert B_{n+1}\vert}}={{2^{{{n(n-1)}\over 2}}\prod_{i=1}^n(2^i-1)}\over {(n+1)!}}$ different $({\Bbb Z}_2)^n$-actions $({\Bbb R}P^n, \sigma T_0)$; in particular, the union of their tangent matrix sets consists of all elements of $\text{GL}(n,{\Bbb Z}_2)/{\bf S}_{n}$. Therefore, taking $$({M'}^n, T')=\bigsqcup_{\{(\sigma^{-1})^\top\}\in\text{GL}(n,{\Bbb Z}_2)/B_{n+1}}({\Bbb R}P^n, \sigma T_0)$$ the tangent representation set of this action is prime, and the number of its elements is $$(n+1)\times {{2^{{{n(n-1)}\over 2}}\prod_{i=1}^n(2^i-1)}\over {(n+1)!}}={{2^{{{n(n-1)}\over 2}}\prod_{i=1}^n(2^i-1)}\over {n!}}.$$ This completes the proof of the upper bound. \end{proof} \section{Essential generators of ${\frak M}_n$} \begin{defn} Let $\beta\not=0$ in ${\frak M}_n$. One says that $\beta$ is {\em an essential generator} if $\vert{\mathcal{N}}_{\beta+\gamma}\vert \geq \vert{\mathcal{N}}_\beta\vert$ for any $\gamma\in {\frak M}_n$ with $\vert{\mathcal{N}}_{\gamma}\vert < \vert{\mathcal{N}}_\beta\vert$. \end{defn} We know from Proposition~\ref{bound} that up to equivariant cobordism there are ${{2^{{{n(n-1)}\over 2}}\prod^n_{i=1}(2^i-1)}\over {(n+1)!}}$ different $({\Bbb Z}_2)^n$-actions $({\Bbb R}P^n, \sigma T_0), \sigma\in\text{GL}(n,{\Bbb Z}_2)$, and each $({\Bbb R}P^n, \sigma T_0)$ fixes exactly $n+1$ isolated points with distinct representations.
Since the lower bound of $\vert{\mathcal{N}}_\beta\vert$ for any nonzero element $\beta$ of ${\frak M}_n$ is $n+1$, each $({\Bbb R}P^n, \sigma T_0)$ is an essential generator. \begin{lem} \label{l1} Let $\beta\in {\frak M}_n$. If $\beta$ is an essential generator, then $$\vert{\mathcal{N}}_\beta\vert\leq \begin{cases} {{2^{{{n(n-1)}\over 2}}\prod^n_{i=1}(2^i-1)}\over {2n!}} & \text{ if $n$ is odd,}\\ {{2^{{{n(n-1)}\over 2}}\prod^n_{i=1}(2^i-1)}\over {2(n-1)!(n+1)}} & \text{ if $n$ is even.} \end{cases}$$ \end{lem} \begin{proof} If $\vert{\mathcal{N}}_\beta\vert=n+1$, then the lemma obviously holds. Now suppose that $\vert{\mathcal{N}}_\beta\vert>n+1$, so $\beta$ is not the class of any $({\Bbb R}P^n, \sigma T_0), \sigma\in\text{GL}(n,{\Bbb Z}_2)$. We claim that for each $({\Bbb Z}_2)^n$-action $({\Bbb R}P^n, \sigma T_0)$, ${\mathcal{N}}_\beta$ cannot contain more than $[{{n+1}\over 2}]$ elements of ${\mathcal{N}}_{({\Bbb R}P^n, \sigma T_0)}$. Indeed, if it did, then one would have $\vert{\mathcal{N}}_\beta\vert>\vert{\mathcal{N}}_{\beta+[({\Bbb R}P^n, \sigma T_0)]}\vert$, which is impossible since $\beta$ is an essential generator. The lemma then follows from this claim. \end{proof} \begin{prop} \label{l2} Let $\beta\in {\frak M}_n$. Then $\beta$ is a linear combination of essential generators. \end{prop} \begin{proof} The assertion is trivial if $\beta=0$ or $\beta$ is an essential generator. Suppose that $\beta$ is nonzero and is not an essential generator. Then there exists some element $\gamma\in {\frak M}_n$ with $\vert{\mathcal{N}}_{\gamma}\vert < \vert{\mathcal{N}}_\beta\vert$ such that $\beta=(\beta+\gamma)+\gamma$ with $\vert{\mathcal{N}}_{\beta+\gamma}\vert < \vert{\mathcal{N}}_\beta\vert$. If $\gamma$ or $\beta+\gamma$ is not an essential generator, then since ${\frak M}_n$ contains only finitely many elements, continuing the above process eventually expresses $\beta$ as a linear combination of essential generators.
\end{proof} \section{Small covers} \vskip .2cm An $n$-dimensional convex polytope $P^n$ is said to be {\em simple} if exactly $n$ faces of codimension one meet at each of its vertices. Each point of a simple convex polytope $P^n$ has a neighborhood which is affinely isomorphic to an open subset of the positive cone ${\Bbb R}_{\geq 0}^n$. A smooth closed $n$-manifold $M^n$ is said to be a {\em small cover} if it admits an effective smooth $({\Bbb Z}_2)^n$-action which is locally isomorphic to the standard action of $({\Bbb Z}_2)^n$ on ${\Bbb R}^n$ and whose orbit space is a simple convex polytope $P^n$. \vskip .2cm A small cover is a special 2-torus manifold. A canonical example of a small cover is the $n$-dimensional real projective space ${\Bbb R}P^n$ with the standard $({\Bbb Z}_2)^n$-action, whose orbit space is the $n$-simplex $\Delta^n$. \vskip .2cm Suppose that $\pi:M^n\longrightarrow P^n$ is a small cover over a simple convex polytope $P^n$. Let $\mathcal{F}(P^n)=\{F_1,...,F_\ell\}$ be the set of codimension-one faces (facets) of $P^n$. Then there are $\ell$ connected submanifolds $M_1,...,M_\ell$ determined by $\pi$ and the $F_i$ (i.e., $M_i=\pi^{-1}(F_i)$), which are called {\em characteristic submanifolds} here. Each submanifold $M_i$ is fixed pointwise by a ${\Bbb Z}_2$-subgroup $G_i$ of $({\Bbb Z}_2)^n$, so that each facet $F_i$ corresponds to the ${\Bbb Z}_2$-subgroup $G_i$. Since there is a canonical isomorphism from $({\Bbb Z}_2)^n$ to $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$, the ${\Bbb Z}_2$-subgroup $G_i$ corresponds to an element $\upsilon_i$ in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$.
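\vskip .2cm As a simple illustration (the case $n=2$, included for the reader's convenience): for the standard $({\Bbb Z}_2)^2$-action on ${\Bbb R}P^2$ generated by $t_1([x_0,x_1,x_2])=[x_0,-x_1,x_2]$ and $t_2([x_0,x_1,x_2])=[x_0,x_1,-x_2]$, the orbit space is the 2-simplex $\Delta^2$, the three characteristic submanifolds are the projective lines $M_i=\{x_i=0\}\cong{\Bbb R}P^1$, $i=0,1,2$, and the corresponding ${\Bbb Z}_2$-subgroups are generated by $t_1t_2$, $t_1$ and $t_2$, respectively.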
For each face $F$ of codimension $s$, since $P^n$ is simple, there are $s$ facets $F_{i_1},...,F_{i_s}$ such that $$F=F_{i_1}\cap\cdots \cap F_{i_s}.$$ Then the corresponding characteristic submanifolds $M_{i_1},...,M_{i_s}$ intersect transversally in the $(n-s)$-dimensional submanifold $\pi^{-1}(F)$, and the isotropy subgroup $G_F$ of $\pi^{-1}(F)$ is a subtorus of rank $s$ generated by $G_{i_1},...,G_{i_s}$ (or determined by $\upsilon_{i_1},...,\upsilon_{i_s}$ in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$). Thus, this actually gives a characteristic function (see \cite{dj}) $$\lambda:\mathcal{F}(P^n)\longrightarrow \text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$$ defined by $\lambda(F_i)=\upsilon_i$, such that for any face $F=F_{i_1}\cap\cdots \cap F_{i_s}$ of $P^n$, $\lambda(F_{i_1}),...,\lambda(F_{i_s})$ are linearly independent in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$. When $\dim F=0$ (i.e., $s=n$), $F$ is a vertex of $P^n$, which corresponds to a $({\Bbb Z}_2)^n$-fixed point $p$ of $M$. In this case, $\lambda(F_{i_1}),...,\lambda(F_{i_n})$ uniquely determine a dual basis of $\text{Hom}(({\Bbb Z}_2)^n, {\Bbb Z}_2)$, which just gives the tangent representation at $p$. Thus, the characteristic function $\lambda$ completely determines the tangent representation set $\mathcal{N}_M$ of fixed points of $M^n$. \vskip .2cm By the work of Davis and Januszkiewicz \cite{dj}, there is a reconstruction of $M^n$ from the product bundle $({\Bbb Z}_2)^n\times P^n$ and $\lambda$. Note that each point $x\in \partial P^n$ must lie in the relative interior of a unique face $F(x)$ of $P^n$. Then one may define an equivalence relation on $({\Bbb Z}_2)^n\times P^n$ as follows: $$(t_1, x)\sim (t_2, x)\Longleftrightarrow t_1^{-1}t_2\in G_{F(x)}$$ and the quotient space $$M(\lambda):=({\Bbb Z}_2)^n\times P^n/\sim $$ is equivariantly homeomorphic to $M^n$.
Obviously, both $M^n$ and $M(\lambda)$ have the same characteristic function, so they are also equivariantly cobordant. \vskip .2cm By $\Lambda(P^n)$ we denote the set of all characteristic functions on $P^n$. Then we have \begin{prop} Let $\pi:M^n\longrightarrow P^n$ be a small cover over a simple convex polytope $P^n$. Then, from the viewpoint of cobordism, all small covers over $P^n$ are given by $\{M(\lambda)\vert \lambda\in \Lambda(P^n)\}$. \end{prop} \noindent {\bf Remark.} Generally speaking, characteristic functions (or colorings) need not exist on a simple convex polytope $P^n$ when $n\geq 4$; for example, see [DJ, Nonexamples 1.22]. However, the Four Color Theorem guarantees that every 3-dimensional simple convex polytope admits characteristic functions. \vskip .2cm The correspondence $\lambda\longmapsto\sigma\circ\lambda$ defines an action of $\text{GL}(n,{\Bbb Z}_2)$ on $\Lambda(P^n)$, and it then induces an action of $\text{GL}(n,{\Bbb Z}_2)$ on $\{M(\lambda)\vert \lambda\in \Lambda(P^n)\}$, given by $M(\lambda)\longmapsto M(\sigma\circ \lambda)$. It is easy to check that both actions are free. \vskip .2cm The following two kinds of small covers play a key role in determining the structure of ${\frak M}_3$. \begin{exam}[Small covers over a 3-simplex $\Delta^3$] \label{e1} {\em A 3-simplex $\Delta^3$ has four 2-faces, and a canonical characteristic function $\lambda_0$ on it is defined by assigning $\rho_1^*,\rho_2^*,\rho_3^*,\rho_1^*+\rho_2^*+\rho_3^*$ to the four 2-faces of $\Delta^3$, where $\{\rho_1^*,\rho_2^*,\rho_3^*\}$ is the standard basis of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$, which corresponds to $\rho_1,\rho_2,\rho_3$ of $\text{Hom}(({\Bbb Z}_2)^3, {\Bbb Z}_2)$. Thus, $\{\sigma\circ \lambda_0|\sigma\in\text{GL}(3,{\Bbb Z}_2)\}$ gives all characteristic functions on $\Delta^3$.
Since the characteristic function of the standard action $T_0$ of $({\Bbb Z}_2)^3$ on ${\Bbb R}P^3$ is just $\lambda_0$, $\{M(\sigma\circ \lambda_0)|\sigma\in\text{GL}(3,{\Bbb Z}_2)\}=\{({\Bbb R}P^3, \sigma T_0)|\sigma\in\text{GL}(3,{\Bbb Z}_2)\}$. Proposition~\ref{bound} has shown that, up to equivariant cobordism, there are 7 different small covers in $\{M(\sigma\circ \lambda_0)|\sigma\in\text{GL}(3,{\Bbb Z}_2)\}=\{({\Bbb R}P^3, \sigma T_0)|\sigma\in\text{GL}(3,{\Bbb Z}_2)\}$, denoted by $({\Bbb R}P^3, T_0), ({\Bbb R}P^3, T_1),..., ({\Bbb R}P^3, T_6)$, respectively. A direct calculation gives the following table about the tangent representation sets of seven different small covers. \vskip .2cm \noindent \begin{tabular}{|l|l|l|} \multicolumn{2}{c}{Table I}\\[5pt] \hline \multicolumn{1}{|c|}{ Small cover $M$} & \multicolumn{1}{|c|}{tangent representation set $\mathcal{N}_M$} \\ \hline $({\Bbb R}P^3, T_0)$ & {\scriptsize $\rho_1\rho_2\rho_3, \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_2+\rho_3),\rho_3(\rho_1+\rho_3)(\rho_2+\rho_3)$}\\ \hline $({\Bbb R}P^3, T_1)$ & {\scriptsize $\rho_1(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_1\rho_2(\rho_2+\rho_3),\rho_2\rho_3(\rho_1+\rho_2),\rho_3(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline $({\Bbb R}P^3, T_2)$ & {\scriptsize $\rho_1(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_1\rho_3(\rho_2+\rho_3),\rho_2\rho_3(\rho_1+\rho_3),\rho_2(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline $({\Bbb R}P^3, T_3)$ & {\scriptsize $\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_1\rho_2(\rho_1+\rho_3),\rho_1\rho_3(\rho_1+\rho_2),\rho_3(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline $({\Bbb R}P^3, T_4)$ & {\tiny $\rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1\rho_2(\rho_1+\rho_2+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_1+\rho_3),(\rho_1+\rho_3) (\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline $({\Bbb R}P^3, T_5)$ & {\tiny $\rho_1(\rho_1+\rho_3)(\rho_2+\rho_3), 
\rho_1\rho_3(\rho_1+\rho_2+\rho_3),\rho_3(\rho_1+\rho_2)(\rho_1+\rho_3),(\rho_1+\rho_2) (\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline $({\Bbb R}P^3, T_6)$ & {\tiny $\rho_2(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_2\rho_3(\rho_1+\rho_2+\rho_3),(\rho_1+\rho_2) (\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3),\rho_3(\rho_1+\rho_2)(\rho_2+\rho_3)$}\\ \hline \end{tabular} } \end{exam} \begin{exam}[Small covers over a prism $P^3$]\label{e2} {\em There exists only one simple convex 3-polytope with six vertices (i.e., a prism $P^3$), see \cite{e}. Let $F_1, F_2, F_4$ denote three square facets, and $F_3, F_5$ two triangular facets in $P^3$. From \cite{ccl} we know that essentially there are five different characteristic functions $\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5$ under the action of $\text{GL}(3,{\Bbb Z}_2)$ on $\Lambda(P^3)$, which are respectively defined by $$\begin{tabular}{|l|l|l|l|l|l|} \hline & $F_1$& $F_2$ & $F_3$ & $F_4$ & $F_5$ \\ \hline $\lambda_1$ & $\rho_1^*$ & $\rho_2^*$ & $\rho_3^*$ & $\rho_1^*+\rho_2^*$ & $\rho_1^*+\rho_2^*+\rho_3^*$\\ \hline $\lambda_2$ & $\rho_1^*$ & $\rho_2^*$ & $\rho_3^*$ & $\rho_1^*+\rho_2^*$ & $\rho_1^*+\rho_3^*$\\ \hline $\lambda_3$ & $\rho_1^*$ & $\rho_2^*$ & $\rho_3^*$ & $\rho_1^*+\rho_2^*$ & $\rho_2^*+\rho_3^*$\\ \hline $\lambda_4$ & $\rho_1^*$ & $\rho_2^*$ & $\rho_3^*$ & $\rho_1^*+\rho_2^*$ & $\rho_3^*$\\ \hline $\lambda_5$ & $\rho_1^*$ & $\rho_2^*$ & $\rho_3^*$ & $\rho_1^*+\rho_2^*+\rho_3^*$ & $\rho_3^*$\\ \hline \end{tabular}$$ It is easy to check that for any $\sigma\in \text{GL}(3,{\Bbb Z}_2)$, every one of $M(\sigma\circ \lambda_4)$ and $M(\sigma\circ \lambda_5)$ always bounds equivariantly. A direct calculation shows that for $\sigma_1=\begin{pmatrix} 1 & & \\ 1 & 1 & \\ & & 1 \end{pmatrix}$, $\mathcal{N}_{M(\sigma_1\circ \lambda_1)}=\mathcal{N}_{M(\lambda_2)}$, and for $\sigma_2= \begin{pmatrix} 1 & 1& \\ & 1 & \\ & & 1 \end{pmatrix}$, $\mathcal{N}_{M(\sigma_2\circ \lambda_1)}=\mathcal{N}_{M(\lambda_3)}$. 
Since $\mathcal{N}_{M(\lambda_1)}$ is prime, by Corollary~\ref{ns}, all nonzero equivariant cobordism classes in $\{M(\sigma\circ\lambda_1)|\sigma\in \text{GL}(3,{\Bbb Z}_2)\}$ give those of all small covers over $P^3$. By further computations, one obtains that there are exactly four matrices $$\tau_1=\begin{pmatrix} 1 & & \\ & 1 & \\ & & 1 \end{pmatrix}, \tau_2=\begin{pmatrix} 1 & & \\ 1& 1 & \\ & & 1 \end{pmatrix},\tau_3=\begin{pmatrix} 1 & & \\ & 1 & 1\\ & & 1 \end{pmatrix},\tau_4=\begin{pmatrix} 1 & & \\ 1& 1 &1 \\ & & 1 \end{pmatrix}$$ such that $\tau_i\mathcal{N}_{M(\lambda_1)}=\mathcal{N}_{M(\lambda_1)}, i=1,2,3,4$, and these four matrices form a subgroup of $\text{GL}(3,{\Bbb Z}_2)$. Thus, up to equivariant cobordism, there are ${{\vert\text{GL}(3,{\Bbb Z}_2)\vert}\over 4}=42$ different nonbounding small covers over $P^3$. We can construct such small covers explicitly as follows. Consider the $({\Bbb Z}_2)^3$-action $\Phi_0$ on $S^1\times {\Bbb R}P^2=S^1\times {\Bbb R}P({\Bbb C}\oplus{\Bbb R})$ defined by the following three commuting involutions $$t_1: (z, [v, w])\longmapsto (\bar{z}, [\bar{z}v, w])$$ $$t_2: (z, [v, w])\longmapsto (z, [z\bar{v}, w])$$ $$t_3: (z, [v, w])\longmapsto (z, [-z\bar{v}, w]).$$ This action fixes six isolated points $(\pm 1, [0, 1]), (\pm 1, [1, 0]), (\pm 1, [\sqrt{-1}, 0])$, and its orbit space is just a prism $P^3$. A direct calculation shows that $\mathcal{N}_{(S^1\times {\Bbb R}P^2, \Phi_0)}$ consists of six distinct monomials $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_2+\rho_3),\rho_1\rho_3(\rho_2+\rho_3),\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_2+\rho_3),\rho_1(\rho_1+\rho_3)(\rho_2+\rho_3)$ of ${\Bbb Z}_2[\rho_1,\rho_2,\rho_3]$, so $(S^1\times {\Bbb R}P^2, \Phi_0)$ is nonbounding.
Further, up to equivariant cobordism, the 42 different nonbounding small covers over $P^3$ can be given by applying automorphisms of $({\Bbb Z}_2)^3$ to $(S^1\times {\Bbb R}P^2, \Phi_0)$, and they are denoted by $(S^1\times {\Bbb R}P^2, \Phi_0)$, $(S^1\times {\Bbb R}P^2, \Phi_1)$, ..., $(S^1\times {\Bbb R}P^2, \Phi_{41})$, respectively. } \end{exam} \vskip .2cm \section{Graphs of actions} Given a nonzero element $\beta$ in ${\frak M}_n$, let $(M^n,\phi)$ be a representative of $\beta$ such that ${\mathcal{N}}_M$ is prime. Choose a nontrivial irreducible representation $\rho$ in $\text{Hom}(({\Bbb Z}_2)^n,{\Bbb Z}_2)$, and let $C$ be a component of the fixed point set of $\ker\rho\ (\cong ({\Bbb Z}_2)^{n-1})$ acting on $M$ such that $\dim C>0$ and the action of $({\Bbb Z}_2)^n/\ker\rho$ on $C$ has a nonempty fixed point set. Then the dimension of $C$ must be 1 since the action is effective, and thus $C$ is equivariantly diffeomorphic to the circle $S^1$ with a reflection fixing exactly two fixed points. One then has an edge joining these two fixed points, which is labeled by $\rho$. Furthermore, one obtains a graph $\Gamma_M$, which is the union of all those edges chosen for each $\rho$ and $C$. Clearly, the set of vertices of $\Gamma_M$ is just the fixed point set of $({\Bbb Z}_2)^n$ acting on $M$. Since the tangent representation at a fixed point $p$ has $n$ irreducible summands, the number of edges in $\Gamma_M$ meeting at $p$ is exactly $n$, so $\Gamma_M$ is a regular graph of valence $n$. It should be pointed out that, in general, $\Gamma_M$ is not uniquely determined by $\beta$; it depends upon the choice of representative of $\beta$. \vskip .2cm Let $E_{\Gamma_M}$ denote the set of all edges in $\Gamma_M$, and let $V_{\Gamma_M}$ denote the set of all vertices in $\Gamma_M$. Given a vertex $p$ in $V_{\Gamma_M}$, let $E_p$ denote the set of $n$ edges meeting at $p$.
Then there is a natural map $\alpha: E_{\Gamma_M}\longrightarrow \text{Hom}(({\Bbb Z}_2)^n, {\Bbb Z}_2)$ (called an axial function or a $({\Bbb Z}_2)^n$-coloring, cf~\cite{gz1}, \cite{gz2}, \cite{bl}). One knows from \cite{l} that $\alpha$ satisfies the following properties: \vskip .2cm 1) for each vertex $p$ in $V_{\Gamma_M}$, $\alpha(E_p)$ spans $\text{Hom}(({\Bbb Z}_2)^n, {\Bbb Z}_2)$; 2) for each edge $e$ in $E_{\Gamma_M}$, $$\prod_{x\in E_p-E_e}\alpha(x)\equiv \prod_{y\in E_q-E_e}\alpha(y)\mod \alpha(e)$$ where $p, q$ are the two endpoints of $e$, and $E_e$ denotes the set of all edges joining the two endpoints of $e$. The pair $(\Gamma_M, \alpha)$ is called the {\em moment graph} of $(M^n,\phi)$. Since ${\mathcal{N}}_M$ is prime, one has from \cite{l} that for each edge $e$ in $\Gamma_M$, $\vert E_e\vert=1$. \vskip .2cm {\em Note.} If $M$ is a small cover over a simple convex polytope $P^n$, then $\Gamma_M$ is just the 1-skeleton of $P^n$. In this case, it is easy to see that the map $\alpha: E_{\Gamma_M}\longrightarrow \text{Hom}(({\Bbb Z}_2)^n, {\Bbb Z}_2)$ is dual to the characteristic function $\lambda:\mathcal{F}(P^n)\longrightarrow \text{Hom}({\Bbb Z}_2,({\Bbb Z}_2)^n)$; in other words, $\alpha$ and $\lambda$ determine each other. \vskip .2cm By \cite{bl} we know that $(\Gamma_M,\alpha)$ is a ``good'' $({\Bbb Z}_2)^n$-coloring, so that each $k$-nest $\Delta^k$ of $(\Gamma_M,\alpha)$ is a connected regular $k$-valent subgraph of $\Gamma_M$ with $\dim\text{Span}\alpha(\Delta^k)=k$, where $\text{Span}\alpha(\Delta^k)$ denotes the linear space spanned by all colors of edges in $\Delta^k$. Let $\mathcal{K}_{(\Gamma_M, \alpha)}$ denote the set of all nests of $(\Gamma_M, \alpha)$. Since each $k$-nest $(k>0)$ determines a $k$-dimensional subspace of $\text{Hom}(({\Bbb Z}_2)^n, {\Bbb Z}_2)$, it corresponds to an $(n-k)$-dimensional subspace in the dual space $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$.
This actually gives a dual map $\eta$ from $\mathcal{K}_{(\Gamma_M, \alpha)}$ to the set of all subspaces of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^n)$, which is just the characteristic function when $M$ is a small cover. Obviously, $\eta$ maps each $(n-1)$-nest of $\mathcal{K}_{(\Gamma_M, \alpha)}$ to a nonzero element in $\text{Hom}({\Bbb Z}_2,({\Bbb Z}_2)^n)$. Since each vertex $p$ is the intersection of $n$ $(n-1)$-nests of $\mathcal{K}_{(\Gamma_M, \alpha)}$, it corresponds to a basis of $\text{Hom}({\Bbb Z}_2,({\Bbb Z}_2)^n)$, which is just the dual basis of the basis $\alpha(E_p)$ in $\text{Hom}(({\Bbb Z}_2)^n,{\Bbb Z}_2)$. \vskip .2cm From \cite{bl} one knows that if $\dim M\leq 3$, then $(\Gamma_M, \alpha)$ always admits a skeletal expansion (note that for $\dim M>4$, the question of under what conditions $(\Gamma_M, \alpha)$ admits a skeletal expansion is still open). We shall use this result to study the group structure of ${\frak M}_3$. \begin{prop} [\cite{bl}]\label{gr} If $\dim M=3$, then $(\Gamma_M,\alpha)$ admits a 2-skeletal expansion $(N,K)$ such that $N$ is a closed surface. \end{prop} \section{Determination of ${\frak M}_3$} This section is devoted to determining the structure of ${\frak M}_3$. \begin{lem}\label{l4} Let $\beta\in {\frak M}_3$. Then $\vert{\mathcal{N}}_\beta\vert$ is even. \end{lem} \begin{proof} The Euler characteristic of any 3-dimensional closed manifold is always zero, and the lemma then follows from the classical Smith Theorem. \end{proof} By Proposition~\ref{l2}, the key point in determining the structure of ${\frak M}_3$ is to find all essential generators in ${\frak M}_3$. The following proposition characterizes the essential generators of ${\frak M}_3$. \begin{prop} \label{p} A nonzero element $\beta\in{\frak M}_3$ is an essential generator if and only if $\vert{\mathcal{N}}_\beta\vert\leq 6$.
Further, all essential generators of ${\frak M}_3$ are given by the two kinds of small covers $({\Bbb R}P^3, \sigma T_0)$ and $(S^1\times {\Bbb R}P^2, \sigma \Phi_0)$. \end{prop} \begin{lem}\label{l5} Let $\beta\in{\frak M}_3$ be nonzero. If $\vert{\mathcal{N}}_\beta\vert\leq 6$, then $\beta$ is an essential generator. \end{lem} \begin{proof} If $\vert{\mathcal{N}}_\beta\vert=4$, then $\beta$ is one of the classes $[({\Bbb R}P^3, \sigma T_0)]$, so $\beta$ is an essential generator. Thus, by Lemma~\ref{l4} it suffices to consider the case $\vert{\mathcal{N}}_\beta\vert=6$. From Example~\ref{e1}, we see that the sets $\mathcal{N}_{({\Bbb R}P^3, T_i)}, i=0,1,...,6$, are pairwise disjoint. We first claim that no intersection $\mathcal{N}_\beta\cap \mathcal{N}_{({\Bbb R}P^3, T_i)}$ can contain four elements. If it did, then there would exist some $i'$ such that $|\mathcal{N}_{\beta+[({\Bbb R}P^3, T_{i'})]}|=2$; by \cite{ks}, $\beta+[({\Bbb R}P^3, T_{i'})]$ would then be zero in ${\frak M}_3$, which is impossible. Next, we claim that no intersection $\mathcal{N}_\beta\cap \mathcal{N}_{({\Bbb R}P^3, T_i)}$ can contain three elements. If it did, then there would exist some $i''$ such that $|\mathcal{N}_{\beta+[({\Bbb R}P^3, T_{i''})]}|=4$, so that $\beta+[({\Bbb R}P^3, T_{i''})]$ would be the equivariant cobordism class of another $({\Bbb R}P^3, T_j)$ with $j\not=i''$. Then $\beta$ would be the sum $[({\Bbb R}P^3, T_{i''})]+[({\Bbb R}P^3, T_j)]$, so $|\mathcal{N}_\beta|$ would be 8 rather than 6, a contradiction. Combining the above arguments, one has $|\mathcal{N}_\beta\cap \mathcal{N}_{({\Bbb R}P^3, T_i)}|\leq 2$ for every $i$, so that $|\mathcal{N}_{\beta+[({\Bbb R}P^3, T_i)]}|=6+4-2|\mathcal{N}_\beta\cap \mathcal{N}_{({\Bbb R}P^3, T_i)}|\geq 6=|\mathcal{N}_\beta|$, and the lemma follows. \end{proof} The following lemma indicates the connection between $\mathcal{A}=\{({\Bbb R}P^3, T_i)| i=0,1,...,6\}$ and $\mathcal{B}=\{(S^1\times {\Bbb R}P^2, \Phi_j)| j=0,1,...,41\}$.
\begin{lem}\label{l6} Each $({\Bbb R}P^3, T_i)$ of $\mathcal{A}$ corresponds to six small covers $(S^1\times {\Bbb R}P^2, \Phi_{i_1})$,..., $(S^1\times {\Bbb R}P^2, \Phi_{i_6})$ of $\mathcal{B}$ which are pairwise non-cobordant, such that $|\mathcal{N}_{({\Bbb R}P^3, T_i)}\cap \mathcal{N}_{(S^1\times {\Bbb R}P^2, \Phi_{i_u})}|=2, u=1,...,6$. \end{lem} \begin{proof} Since the sets $\mathcal{N}_{({\Bbb R}P^3, T_i)}, i=0,1,...,6$, are all distinct and since the actions $({\Bbb R}P^3, T_i), i=0,1,...,6$, can be translated to each other up to cobordism by applying automorphisms of $({\Bbb Z}_2)^3$, it suffices to consider the case of $({\Bbb R}P^3, T_0)$. We see from Table I of Example~\ref{e1} that $$\mathcal{N}_{({\Bbb R}P^3, T_0)}=\{\rho_1\rho_2\rho_3, \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_2+\rho_3),\rho_3(\rho_1+\rho_3)(\rho_2+\rho_3)\}.$$ Obviously, any two monomials of $\mathcal{N}_{({\Bbb R}P^3, T_0)}$ give five elements of $\text{Hom}(({\Bbb Z}_2)^3,{\Bbb Z}_2)$, and there are exactly six such pairs in $\mathcal{N}_{({\Bbb R}P^3, T_0)}$. Considering the two monomials $\rho_1\rho_2\rho_3, \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3)$ of $\mathcal{N}_{({\Bbb R}P^3, T_0)}$, we get five elements $\rho_1, \rho_2, \rho_3, \rho_1+\rho_2, \rho_1+\rho_3$ of $\text{Hom}(({\Bbb Z}_2)^3,{\Bbb Z}_2)$. Using these five elements, we can define an axial function $\alpha$ on the 1-skeleton of a prism $P^3$ as shown in Figure~\ref{a1}.
\begin{figure}[h] \input{F11.pstex_t}\centering \caption[]{ An axial function $\alpha$ on the 1-skeleton of a prism $P^3$}\label{a1} \end{figure} Since $\alpha$ uniquely determines a characteristic function on $P^3$, we obtain a small cover $(S^1\times {\Bbb R}P^2, \Phi_{0_1})$ with six fixed points over $P^3$ such that its tangent representation set $\mathcal{N}_{(S^1\times {\Bbb R}P^2, \Phi_{0_1})}$ consists of six monomials $\rho_1\rho_2\rho_3, \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_1\rho_2(\rho_2+\rho_3), \rho_1\rho_3(\rho_2+\rho_3),\rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_2+\rho_3)$. Similarly, for other five pairs in $\mathcal{N}_{({\Bbb R}P^3, T_0)}$, we can obtain five small covers $(S^1\times {\Bbb R}P^2, \Phi_{0_u}), u=2,...,6$ with their tangent representation sets as follows {\tiny $$\begin{tabular}{|l|l|} \hline $u$ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\mathcal{N}_{(S^1\times {\Bbb R}P^2, \Phi_{0_u})}$\\ \hline 2 & $\{\rho_1\rho_2\rho_3,\rho_1\rho_2(\rho_1+\rho_3), \rho_2\rho_3(\rho_1+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_2+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_3)(\rho_2+\rho_3)\}$\\ \hline 3 & $\{\rho_1\rho_2\rho_3,\rho_1\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_2),\rho_3(\rho_1+\rho_3)(\rho_2+\rho_3),\rho_3(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_2+\rho_3)\}$\\ \hline 4 & $\{\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3),\rho_1\rho_3(\rho_1+\rho_2), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_2+\rho_3),\rho_2\rho_3(\rho_1+\rho_2), \rho_3(\rho_1+\rho_2)(\rho_2+\rho_3)\}$\\ \hline 5 & $\{\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3),\rho_1\rho_2(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3),\rho_3(\rho_1+\rho_3)(\rho_2+\rho_3),\rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_3)(\rho_2+\rho_3)\}$\\ \hline 6 & $\{\rho_2(\rho_1+\rho_2)(\rho_2+\rho_3),\rho_1\rho_2(\rho_2+\rho_3), 
\rho_1(\rho_1+\rho_2)(\rho_2+\rho_3),\rho_3(\rho_1+\rho_3)(\rho_2+\rho_3),\rho_1\rho_3(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_2+\rho_3)\}$\\ \hline \end{tabular}$$ } \noindent Then the lemma follows from the above argument and Corollary~\ref{ns}. \end{proof} \vskip .2cm \begin{rem}\label{re} Lemma~\ref{l6} also gives a method of constructing the 42 different small covers (up to equivariant cobordism) with 6 fixed points. In particular, we easily see that for each $(S^1\times {\Bbb R}P^2, \Phi_j)$, two elements of $\mathcal{N}_{(S^1\times {\Bbb R}P^2, \Phi_j)}$ lie in some $\mathcal{N}_{({\Bbb R}P^3, T_i)}$, and the others are distributed among four different sets $\mathcal{N}_{({\Bbb R}P^3, T_{i_1})}, \mathcal{N}_{({\Bbb R}P^3, T_{i_2})}, \mathcal{N}_{({\Bbb R}P^3, T_{i_3})}, \mathcal{N}_{({\Bbb R}P^3, T_{i_4})}$ with $i_v\not=i, v=1,2,3,4$. In addition, we also see from the argument of Lemma~\ref{l6} that $$\delta_3([(S^1\times {\Bbb R}P^2, \Phi_{0_1})]+[(S^1\times {\Bbb R}P^2, \Phi_{0_6})]+[({\Bbb R}P^3, T_0)])=0$$ $$\delta_3([(S^1\times {\Bbb R}P^2, \Phi_{0_2})]+[(S^1\times {\Bbb R}P^2, \Phi_{0_5})]+[({\Bbb R}P^3, T_0)])=0$$ $$\delta_3([(S^1\times {\Bbb R}P^2, \Phi_{0_3})]+[(S^1\times {\Bbb R}P^2, \Phi_{0_4})]+[({\Bbb R}P^3, T_0)])=0$$ where $\delta_3$ is the monomorphism of Theorem~\ref{s}. This means that we actually need only consider half of the 42 different small covers $(S^1\times {\Bbb R}P^2, \Phi_j), j=0,1,...,41$, chosen so that, up to equivariant cobordism, the disjoint union of no two of them is one of $({\Bbb R}P^3, T_i), i=0,1,...,6$. Without loss of generality we may assume that these 21 different small covers are just $(S^1\times {\Bbb R}P^2, \Phi_j), j=0,1,...,20$, with their tangent representation sets stated in Table II.
\end{rem} \noindent \begin{tabular}{|l|l|l|} \multicolumn{2}{c}{Table II}\\[5pt] \hline \multicolumn{1}{|c|}{ Small cover $M$} & \multicolumn{1}{|c|}{tangent representation set $\mathcal{N}_M$} \\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_0)$} & {\scriptsize $\rho_1\rho_2\rho_3,\rho_1\rho_2(\rho_2+\rho_3), \rho_1\rho_3(\rho_2+\rho_3),\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_2+\rho_3),$}\\ &{\scriptsize $ \rho_1(\rho_1+\rho_3)(\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_1)$} & {\scriptsize $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_1+\rho_3), \rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_2(\rho_1+\rho_3)(\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_2)$} & {\scriptsize $\rho_1\rho_2\rho_3, \rho_1\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_2), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_2+\rho_3),$}\\ & {\scriptsize $ \rho_3(\rho_1+\rho_3)(\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_3)$} & {\scriptsize $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_1+\rho_3), \rho_2\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_3), \rho_2\rho_3(\rho_1+\rho_2+\rho_3),$}\\ & {\scriptsize $ \rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_4)$} & {\scriptsize $\rho_1\rho_2(\rho_1+\rho_2+\rho_3), \rho_1\rho_3(\rho_1+\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_1(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_3(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_5)$} & {\scriptsize $\rho_1\rho_3(\rho_1+\rho_2), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), $}\\ & {\scriptsize $\rho_3(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), 
(\rho_1+\rho_2)(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_6)$} & {\scriptsize $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_2+\rho_3), \rho_1\rho_3(\rho_1+\rho_2), $}\\ & {\scriptsize $\rho_1\rho_3(\rho_2+\rho_3), \rho_1\rho_3(\rho_1+\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_7)$} & {\scriptsize $\rho_1\rho_2(\rho_1+\rho_2+\rho_3), \rho_2\rho_3(\rho_1+\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_2(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_3(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_8)$} & {\scriptsize $\rho_2\rho_3(\rho_1+\rho_2), \rho_2(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_3(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_9)$} & {\scriptsize $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_1+\rho_3), \rho_1\rho_2(\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_1\rho_2(\rho_1+\rho_2+\rho_3), \rho_1\rho_3(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline \end{tabular} \noindent \begin{tabular}{|l|l|l|} \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{10})$} & {\scriptsize $\rho_1\rho_3(\rho_1+\rho_2+\rho_3), \rho_2\rho_3(\rho_1+\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_2(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_3(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_3(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{11})$} & {\scriptsize $\rho_2\rho_3(\rho_1+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_2(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_3(\rho_1+\rho_3)(\rho_2+\rho_3), 
(\rho_1+\rho_3)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{12})$} & {\scriptsize $\rho_1\rho_2(\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_2(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_2(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_3)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{13})$} & {\scriptsize $\rho_1\rho_2(\rho_1+\rho_3), \rho_1\rho_2(\rho_1+\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), $}\\ & {\scriptsize $\rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{14})$} & {\scriptsize $\rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), $}\\ & {\scriptsize $\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{15})$} & {\scriptsize $\rho_1\rho_3(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_3(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_3(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{16})$} & {\scriptsize $\rho_1\rho_3(\rho_1+\rho_2), \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_1(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3), (\rho_1+\rho_2)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{17})$} & {\scriptsize $\rho_1\rho_3(\rho_1+\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3),(\rho_1+\rho_2)(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), $}\\ & {\scriptsize 
$\rho_3(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_3)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{18})$} & {\scriptsize $\rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3), $}\\ & {\scriptsize $\rho_3(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_3(\rho_1+\rho_3)(\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{19})$} & {\scriptsize $\rho_2\rho_3(\rho_1+\rho_2), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline {\scriptsize $(S^1\times {\Bbb R}P^2, \Phi_{20})$} & {\scriptsize $\rho_2\rho_3(\rho_1+\rho_2+\rho_3), \rho_2(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3),(\rho_1+\rho_2)(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), $}\\ & {\scriptsize $\rho_3(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_2)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3), (\rho_1+\rho_3)(\rho_2+\rho_3)(\rho_1+\rho_2+\rho_3)$}\\ \hline \end{tabular} \vskip .2cm Now let $\beta\in {\frak M}_3$ be an essential generator. By Lemma~\ref{l1}, one knows that $\vert{\mathcal{N}}_\beta\vert\leq 14$. \vskip .2cm \noindent {\bf Claim 1.} {\em $\vert{\mathcal{N}}_\beta\vert$ must be less than 12.} \begin{proof} If $\vert{\mathcal{N}}_\beta\vert=14$, then for each $i$ $(i=0,1,...,6)$, there must be two monomials $\delta^{(i)}_1, \delta^{(i)}_2$ in $\mathcal{N}_{({\Bbb R}P^3, T_i)}$ such that $\delta^{(i)}_1$ and $\delta^{(i)}_2$ are contained in ${\mathcal{N}}_\beta$. By Lemma~\ref{l6} and Remark~\ref{re}, an easy argument shows that there must be some $(S^1\times {\Bbb R}P^2, \Phi_j)$ such that ${\mathcal{N}}_{(S^1\times {\Bbb R}P^2, \Phi_j)}\subset {\mathcal{N}}_\beta$. 
Then $8=\vert{\mathcal{N}}_{\beta+[(S^1\times {\Bbb R}P^2, \Phi_j)]}\vert<\vert{\mathcal{N}}_{\beta}\vert=14$. However, this is a contradiction since $\beta$ is an essential generator. Thus, $\vert{\mathcal{N}}_\beta\vert=14$ is impossible. \vskip .2cm If $\vert{\mathcal{N}}_\beta\vert=12$, then since each $\mathcal{N}_{({\Bbb R}P^3, T_i)}$ contains at most two monomials of ${\mathcal{N}}_\beta$, the set ${\mathcal{N}}_\beta=\{\delta_1,$ $\delta_2, ... , \delta_{11}, \delta_{12}\}$ falls into one of the following two possible cases: (i) all elements of ${\mathcal{N}}_\beta$ may be divided into six pairs $\{\delta_1,\delta_2\}$, $... , \{\delta_{11}, \delta_{12}\}$ that are distributed in six different $\mathcal{N}_{({\Bbb R}P^3, T_i)}$ respectively; (ii) ${\mathcal{N}}_\beta$ may be divided into seven parts $\{\delta_1,\delta_2\}$, $... , \{\delta_{9}, \delta_{10}\}$, $\{\delta_{11}\}$, $\{\delta_{12}\}$ such that these seven parts are just distributed in $\mathcal{N}_{({\Bbb R}P^3, T_0)}, ..., \mathcal{N}_{({\Bbb R}P^3, T_6)}$, respectively. A similar argument to the above then shows that there must be some $(S^1\times {\Bbb R}P^2, \Phi_j)$ such that in case (i), at least five elements of ${\mathcal{N}}_{(S^1\times {\Bbb R}P^2, \Phi_j)}$ are contained in ${\mathcal{N}}_\beta$, and in case (ii), at least four elements of ${\mathcal{N}}_{(S^1\times {\Bbb R}P^2, \Phi_j)}$ are contained in ${\mathcal{N}}_\beta$. Then $\vert{\mathcal{N}}_{\beta+[(S^1\times {\Bbb R}P^2, \Phi_j)]}\vert\leq 10<\vert{\mathcal{N}}_{\beta}\vert=12$. This contradicts the fact that $\beta$ is an essential generator. Thus, $\vert{\mathcal{N}}_\beta\vert=12$ cannot occur. \end{proof} Let $(M, \phi)$ be a representative of $\beta$ such that $\mathcal{N}_M$ is prime, and let $(\Gamma_M,\alpha)$ be the moment graph of $(M, \phi)$. \vskip .2cm \noindent {\bf Claim 2.} {\em $\Gamma_M$ is connected.} \begin{proof} Suppose that $\Gamma_M$ is disconnected. Let $\Gamma^{'}$ be a connected component of $\Gamma_M$. 
Then the restriction $\alpha|_{\Gamma^{'}}$ is still an axial function of $\Gamma^{'}$. By Claim 1, one has $\vert{\mathcal{N}}_M\vert\leq 10$, so the number of vertices of $\Gamma_M$ is less than or equal to 10. If $|V_{\Gamma^{'}}|=2$, then obviously $\alpha(E_{p_1})=\alpha(E_{p_2})$ for $p_1,p_2\in V_{\Gamma^{'}}$, but this is impossible since $\mathcal{N}_M$ is prime. If $|V_{\Gamma^{'}}|=4$, then $\Gamma^{'}$ must be the 1-skeleton of a 3-simplex, and thus $(\Gamma^{'}, \alpha|_{\Gamma^{'}})$ is the moment graph of some $({\Bbb R}P^3, T_i)$. Further, the disjoint union of $(M, \phi)$ and $({\Bbb R}P^3, T_i)$ gives a $({\Bbb Z}_2)^3$-action with at most six fixed points. This contradicts the assumption that $\beta$ is an essential generator. If $|V_{\Gamma^{'}}|=6$, then since the number of vertices of $\Gamma_M$ is less than or equal to 10, $\Gamma_M$ must have another connected component with 2 or 4 vertices, so that the problem is reduced to the case $|V_{\Gamma^{'}}|=2$ or 4. This completes the proof. \end{proof} \vskip .2cm By Proposition~\ref{gr} and Claim 2, the 2-skeletal expansion $N$ of $(\Gamma_M,\alpha)$ is a connected closed surface. Denote by $F_{\Gamma_M}$ the set of all 2-nests in $\mathcal{K}_{(\Gamma_M,\alpha)}$. Then one has the formula \begin{eqnarray}\chi(N)=\vert V_{\Gamma_M}\vert-\vert E_{\Gamma_M}\vert+ \vert F_{\Gamma_M}\vert \end{eqnarray} where $\chi(N)$ is the Euler characteristic of $N$. Note that $\vert V_{\Gamma_M}\vert=\vert{\mathcal{N}}_\beta\vert$ and $ 3\vert V_{\Gamma_M}\vert=2\vert E_{\Gamma_M}\vert$. \vskip .2cm \noindent {\bf Claim 3.} {\em The 2-skeletal expansion $N$ of $(\Gamma_M,\alpha)$ is a sphere of dimension 2.} \begin{proof} It suffices to show that the Euler characteristic $\chi(N)$ is 2. By Claim 1, one has $\vert{\mathcal{N}}_M\vert\leq 10$, so one needs to consider the cases of $\vert{\mathcal{N}}_M\vert=4,6,8,10$. 
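As a quick sanity check on the bookkeeping in this proof (an illustrative aside, not part of the argument): since $\Gamma_M$ is trivalent, $3\vert V_{\Gamma_M}\vert=2\vert E_{\Gamma_M}\vert$, so formula (6.1) determines $\vert F_{\Gamma_M}\vert$ once $\chi(N)$ is fixed. A short sketch, with the graph data abstracted to vertex counts:

```python
# Bookkeeping for formula (6.1): chi(N) = |V| - |E| + |F|,
# with |E| = 3|V|/2 because the moment graph is trivalent.
def forced_face_count(num_vertices, chi):
    """Number of 2-nests |F| forced by chi(N) = |V| - |E| + |F|."""
    num_edges = 3 * num_vertices // 2
    return chi - num_vertices + num_edges

# If N is a 2-sphere (chi = 2), the number of 2-nests is 2 + |V|/2:
for v in (4, 6, 8, 10):
    print(v, forced_face_count(v, chi=2))
```

In particular, in the $\vert{\mathcal{N}}_M\vert=10$ case, $\vert F_{\Gamma_M}\vert=6$ corresponds to $\chi(N)=1$, matching the case analysis carried out in the proof.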
\vskip .2cm When $\vert{\mathcal{N}}_M\vert=4$, if $\chi(N)$ is not 2, then from (6.1) one has that $\vert F_{\Gamma_M}\vert\leq 3$, so all 2-nests in $(\Gamma_M, \alpha)$ correspond to at most three nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$. However, any three nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ cannot produce four different bases of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$. Thus, $\chi(N)$ must be 2. \vskip .2cm When $\vert{\mathcal{N}}_M\vert=6$, since any four nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ cannot produce six different bases of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$, one has that $\vert F_{\Gamma_M}\vert$ must be 5, so $\chi(N)$ is 2. \vskip .2cm When $\vert{\mathcal{N}}_M\vert=8$, if $N$ is not a sphere of dimension 2, then the above argument ensures that $\vert F_{\Gamma_M}\vert$ must be 5, and the dual map $\eta$ of $\alpha$ maps five 2-nests of $\mathcal{K}_{(\Gamma_M, \alpha)}$ to five different nonzero elements of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$, respectively. An easy argument shows that any five nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ can be translated into five given nonzero elements by applying an automorphism of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$. Thus we may choose five special elements $\rho_1^*, \rho_2^*, \rho_3^*, \rho_1^*+\rho_2^*, \rho_1^*+\rho_3^*$ as the images of $\eta$ on five 2-nests of $\mathcal{K}_{(\Gamma_M, \alpha)}$, where $\{\rho_1^*, \rho_2^*, \rho_3^*\}$ is the standard basis of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$, which corresponds to the standard basis $\{\rho_1, \rho_2, \rho_3\}$ of $\text{Hom}(({\Bbb Z}_2)^3, {\Bbb Z}_2)$. 
Then from these five chosen elements, one may produce just 8 bases of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ as follows: $$\{\rho_1^*,\rho_2^*,\rho_3^*\}, \{\rho_1^*,\rho_2^*,\rho_1^*+\rho_3^*\}, \{\rho_1^*,\rho_3^*,\rho_1^*+\rho_2^*\}, \{\rho_1^*,\rho_1^*+\rho_2^*,\rho_1^*+\rho_3^*\},$$ $$\{\rho_2^*,\rho_3^*,\rho_1^*+\rho_2^*\}, \{\rho_2^*,\rho_3^*,\rho_1^*+\rho_3^*\}, \{\rho_2^*,\rho_1^*+\rho_2^*,\rho_1^*+\rho_3^*\}, \{\rho_3^*,\rho_1^*+\rho_2^*,\rho_1^*+\rho_3^*\}.$$ So, $\mathcal{N}_M$ consists of 8 monomials $\rho_1\rho_2\rho_3$, $\rho_2\rho_3(\rho_1+\rho_3)$, $\rho_2\rho_3(\rho_1+\rho_2)$, $\rho_2\rho_3(\rho_1+\rho_2+\rho_3)$, $\rho_1\rho_3(\rho_1+\rho_2)$, $\rho_1\rho_2(\rho_1+\rho_3)$, $\rho_3(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3)$, $\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3)$. Further, we see from Table I that $\mathcal{N}_{({\Bbb R}P^3, T_3)}\subset \mathcal{N}_M$, so $|\mathcal{N}_{\beta+[({\Bbb R}P^3, T_3)]}|<8$. This means that $\beta$ is not an essential generator, which gives a contradiction. Thus, when $\vert{\mathcal{N}}_M\vert=8$, $\vert F_{\Gamma_M}\vert$ must be 6 so $\chi(N)$ is still 2. \vskip .2cm When $\vert{\mathcal{N}}_M\vert=10$, suppose that $\chi(N)$ is not 2. As shown above, any five nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ cannot produce ten different bases of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$, and thus the only possibility of $\vert F_{\Gamma_M}\vert$ is 6. Further, one has from (6.1) that $\chi(N)$ must be 1. To ensure that $\vert{\mathcal{N}}_M\vert=10$, six 2-nests in $\mathcal{K}_{(\Gamma_M, \alpha)}$ must then correspond to six different nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ by the dual map $\eta$. It is easy to check that any six different nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ can still be translated into the given six different nonzero elements by an automorphism of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$. 
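The count of eight bases above can be confirmed by brute force; the following sketch (illustrative only, not part of the proof) encodes elements of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ as integer bitmasks, an encoding chosen here for convenience: $\rho_1^*\to 1$, $\rho_2^*\to 2$, $\rho_3^*\to 4$, so $\rho_1^*+\rho_2^*\to 3$ and $\rho_1^*+\rho_3^*\to 5$, with vector addition given by XOR.

```python
from itertools import combinations

def is_basis(triple):
    """A 3-subset of (Z_2)^3 is a basis iff its GF(2)-span has 8 elements."""
    a, b, c = triple
    span = {(i & 1) * a ^ (i >> 1 & 1) * b ^ (i >> 2 & 1) * c for i in range(8)}
    return len(span) == 8

# The five chosen elements rho1*, rho2*, rho3*, rho1*+rho2*, rho1*+rho3*:
five = [1, 2, 4, 3, 5]
bases = [t for t in combinations(five, 3) if is_basis(t)]
print(len(bases))  # 8, matching the eight bases listed above
```

The same brute force also confirms the claim used in the $\vert{\mathcal{N}}_M\vert=10$ case: no five nonzero elements of $({\Bbb Z}_2)^3$ yield ten different bases.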
Thus, as in the argument of the case $\vert{\mathcal{N}}_M\vert=8$, it suffices to consider six special nonzero elements of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$. Taking six nonzero elements $\rho_1^*, \rho^*_2, \rho^*_3, \rho^*_1+\rho^*_2, \rho^*_1+\rho^*_3, \rho^*_1+\rho^*_2+\rho^*_3$ in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ as the images of $\eta$ on the six 2-nests, one may then produce 16 different bases of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$ as follows: $\{\rho_1^*, \rho_2^*, \rho_3^*\}, \{\rho_2^*, \rho_3^*, \rho_1^*+\rho_2^*+\rho_3^*\}, \{\rho_1^*, \rho_3^*, \rho_1^*+\rho_2^*+\rho_3^*\}, \{\rho_1^*, \rho_2^*, \rho_1^*+\rho_2^*+\rho_3^*\},$ $\{\rho_3^*, \rho_1^*+\rho_2^*, \rho_1^*+\rho_3^*\}, \{\rho_2^*, \rho_3^*, \rho_1^*+\rho_3^*\}, \{\rho_2^*, \rho_1^*+\rho_2^*, \rho_1^*+\rho_3^*\}, \{\rho_2^*, \rho_3^*, \rho_1^*+\rho_2^*\},$ $\{\rho_1^*+\rho_2^*, \rho_1^*+\rho_3^*, \rho_1^*+\rho_2^*+\rho_3^*\}, \{\rho_1^*, \rho_1^*+\rho_3^*, \rho_1^*+\rho_2^*+\rho_3^*\}, \{\rho_1^*, \rho_1^*+\rho_2^*, \rho_1^*+\rho_2^*+\rho_3^*\}, \{\rho_1^*, \rho_1^*+\rho_2^*, \rho_1^*+\rho_3^*\},$ $\{\rho_1^*, \rho_3^*, \rho_1^*+\rho_2^*\}, \{\rho_1^*, \rho_2^*, \rho_1^*+\rho_3^*\}, \{\rho_3^*, \rho_1^*+\rho_3^*, \rho_1^*+\rho_2^*+\rho_3^*\}, \{\rho_2^*, \rho_1^*+\rho_2^*, \rho_1^*+\rho_2^*+\rho_3^*\}.$ These 16 bases are dual to 16 bases in $\text{Hom}(({\Bbb Z}_2)^3, {\Bbb Z}_2)$, which give the following 16 monomials. 
$$\rho_1\rho_2\rho_3, \rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1\rho_2(\rho_1+\rho_2+\rho_3),$$ $$\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3), \rho_1\rho_2(\rho_1+\rho_3), \rho_3(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_1\rho_3(\rho_1+\rho_2),$$ $$(\rho_1+\rho_2)(\rho_1+\rho_3)(\rho_1+\rho_2+\rho_3), \rho_2(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_2\rho_3(\rho_1+\rho_2+\rho_3),$$ $$\rho_2\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3).$$ One sees that the first row above is just $\mathcal{N}_{({\Bbb R}P^3, T_0)}$, the second row is $\mathcal{N}_{({\Bbb R}P^3, T_3)}$, and the third row is $\mathcal{N}_{({\Bbb R}P^3, T_6)}$, but $\rho_2\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3)$ belong to $\mathcal{N}_{({\Bbb R}P^3, T_1)}$, $\mathcal{N}_{({\Bbb R}P^3, T_2)}$, $\mathcal{N}_{({\Bbb R}P^3, T_4)}$, $\mathcal{N}_{({\Bbb R}P^3, T_5)}$, respectively. Then $\mathcal{N}_M$ must contain $\rho_2\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3)$, and $|\mathcal{N}_M\cap \mathcal{N}_{({\Bbb R}P^3, T_i)}|=2$ for $i=0,3,6$. \vskip .2cm Now choose any two $\gamma_1, \gamma_2$ among $\rho_2\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_3(\rho_1+\rho_2)(\rho_1+\rho_3)$; it is easy to show that there is always one $(S^1\times{\Bbb R}P^2, \Phi_j)$ such that $\mathcal{N}_{(S^1\times{\Bbb R}P^2, \Phi_j)}$ contains $\gamma_1$ and $\gamma_2$. Without loss of generality, we may let $\gamma_1=\rho_2\rho_3(\rho_1+\rho_2)$ and $\gamma_2=\rho_2\rho_3(\rho_1+\rho_3)$. 
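The count of 16 bases above can likewise be confirmed by brute force (again an illustrative check with the bitmask encoding $\rho_1^*\to 1$, $\rho_2^*\to 2$, $\rho_3^*\to 4$, addition given by XOR; this encoding is an assumption of the sketch, not part of the proof):

```python
from itertools import combinations

def is_basis(triple):
    """A 3-subset of (Z_2)^3 is a basis iff its GF(2)-span has 8 elements."""
    a, b, c = triple
    span = {(i & 1) * a ^ (i >> 1 & 1) * b ^ (i >> 2 & 1) * c for i in range(8)}
    return len(span) == 8

# rho1*, rho2*, rho3*, rho1*+rho2*, rho1*+rho3*, rho1*+rho2*+rho3*:
six = [1, 2, 4, 3, 5, 7]
bases = [t for t in combinations(six, 3) if is_basis(t)]
print(len(bases))  # 16, as in the list above
```

Of the $\binom{6}{3}=20$ triples, exactly four are GF(2)-dependent, leaving the 16 bases enumerated in the proof.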
Then one has that $\mathcal{N}_{(S^1\times{\Bbb R}P^2, \Phi_j)}=\{\rho_2\rho_3(\rho_1+\rho_2), \rho_2\rho_3(\rho_1+\rho_3),\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_1+\rho_3),\rho_2\rho_3(\rho_1+\rho_2+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3) \}$ with $\rho_1\rho_2\rho_3\in \mathcal{N}_{({\Bbb R}P^3, T_0)}$, $\rho_1\rho_2(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3)\in \mathcal{N}_{({\Bbb R}P^3, T_3)}$, $\rho_2\rho_3(\rho_1+\rho_2+\rho_3)\in \mathcal{N}_{({\Bbb R}P^3, T_6)}$. If $\mathcal{N}_M$ contains at least two of $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_1+\rho_3),\rho_2\rho_3(\rho_1+\rho_2+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3)$, then forming the disjoint union $(M,\phi)\sqcup(S^1\times{\Bbb R}P^2, \Phi_j)$ gives $$|\mathcal{N}_{\beta+[(S^1\times{\Bbb R}P^2, \Phi_j)]}|<10,$$ which contradicts the fact that $\beta$ is an essential generator. Thus, this case cannot occur. If $\mathcal{N}_M$ contains only one (say $\omega$) of $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_1+\rho_3),\rho_2\rho_3(\rho_1+\rho_2+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3)$, then forming the union $(M,\phi)\sqcup(S^1\times{\Bbb R}P^2, \Phi_j)\sqcup({\Bbb R}P^3, T_l)$, where $({\Bbb R}P^3, T_l)$ is one of $({\Bbb R}P^3, T_0), ({\Bbb R}P^3, T_3), ({\Bbb R}P^3, T_6)$ with $\omega\not\in \mathcal{N}_{({\Bbb R}P^3, T_l)}$, gives $$|\mathcal{N}_{\beta+[(S^1\times{\Bbb R}P^2, \Phi_j)]+[({\Bbb R}P^3, T_l)]}|<10,$$ which leads to a contradiction (note that $|\mathcal{N}_{[(S^1\times{\Bbb R}P^2, \Phi_j)]+[({\Bbb R}P^3, T_l)]}|<10$). Finally, if $\mathcal{N}_M$ does not contain any of $\rho_1\rho_2\rho_3, \rho_1\rho_2(\rho_1+\rho_3),\rho_2\rho_3(\rho_1+\rho_2+\rho_3),\rho_2(\rho_1+\rho_2)(\rho_1+\rho_2+\rho_3)$, then considering the disjoint union of $(M,\phi)\sqcup(S^1\times{\Bbb R}P^2, \Phi_j)$ with $({\Bbb R}P^3, T_3)$, a contradiction still occurs (i.e., $\beta$ would not be an essential generator). Therefore, $\chi(N)$ must be 2. 
\vskip .2cm Combining the above arguments, we complete the proof. \end{proof} \begin{lem} \label{l7} Let $\beta\in {\frak M}_3$. If $\beta$ is an essential generator, then $\vert{\mathcal{N}}_\beta\vert\leq 6$. \end{lem} \begin{proof} By Claim 1 it suffices to show that $\vert{\mathcal{N}}_\beta\vert$ is not equal to $8$ or $10$. One knows by Claim 3 that the 2-skeletal expansion $N$ is a sphere of dimension 2, so $\Gamma_M$ is planar and, in particular, is the 1-skeleton of a simple convex 3-polytope $P^3$. In this case, $M$ is a small cover over $P^3$, so the axial function $\alpha$ on $\Gamma_M$ is dual to the characteristic function $\lambda$ on $P^3$. The argument proceeds as follows. \vskip .2cm {\em Case $(i)$}: $\vert{\mathcal{N}}_\beta\vert=8$. \vskip .2cm If $\vert{\mathcal{N}}_\beta\vert=8$, then $\Gamma_M$ is the 1-skeleton of a simple convex polytope with 8 vertices. From \cite{g} one knows that there are only two different combinatorial types of simple 3-polytopes with eight vertices, as shown in Figure~\ref{a2}. \begin{figure}[h] \input{LM6.pstex_t}\centering \caption[]{Two simple 3-polytopes with eight vertices }\label{a2} \end{figure} If $\Gamma_M$ is the 1-skeleton of the 3-dimensional cube $P_1$, then it is easy to check that $P_1$ does not admit any characteristic function mapping its six 2-faces to six different nonzero elements in $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$, so this case is impossible. Thus, $\Gamma_M$ cannot be the 1-skeleton of $P_1$. If $\Gamma_M$ is the 1-skeleton of $P_2$, take a triangular facet $F$ of $P_2$; then, up to automorphisms of $\text{Hom}({\Bbb Z}_2, ({\Bbb Z}_2)^3)$, it is easy to see that the characteristic function $\lambda$ on $P_2$ maps $F$ together with its 3 adjacent 2-faces to either $\rho_1^*, \rho_2^*, \rho_3^*, \rho_1^*+\rho_2^*+\rho_3^*$ or $\rho_1^*, \rho_2^*, \rho_3^*, \rho_1^*+\rho_2^*$. 
When $\lambda$ maps $F$ with its 3 adjacent 2-faces into $\rho_1^*, \rho_2^*, \rho_3^*, \rho_1^*+\rho_2^*+\rho_3^*$, obviously there must be some $({\Bbb R}P^3, T_i)$ such that $|\mathcal{N}_{\beta+[({\Bbb R}P^3, T_i)]}|=6<8$. This contradicts the fact that $\beta$ is an essential generator, and thus this case cannot occur. When $\lambda$ maps $F$ with its 3 adjacent 2-faces into $\rho_1^*, \rho_2^*, \rho_3^*, \rho_1^*+\rho_2^*$, it is easy to check that there must be some $(S^1\times{\Bbb R}P^2, \Phi_j)$ such that $|\mathcal{N}_{\beta+[(S^1\times{\Bbb R}P^2, \Phi_j)]}|=6<8$. This is also impossible, so $\Gamma_M$ cannot be the 1-skeleton of $P_2$. Thus, if $\beta$ is an essential generator, then $\vert{\mathcal{N}}_\beta\vert=8$ is impossible. \vskip .2cm {\em Case $(ii)$}: $\vert{\mathcal{N}}_\beta\vert=10$. \vskip .2cm If $\vert{\mathcal{N}}_\beta\vert=10$, then $\Gamma_M$ is the 1-skeleton of a simple convex polytope with 10 vertices. From \cite{g} one knows that there are only five different combinatorial types of simple 3-polytopes with ten vertices, as shown in Figures~\ref{a3} and~\ref{a4}. \begin{figure}[h] \input{LM7.pstex_t}\centering \caption[]{Simple 3-polytopes with ten vertices }\label{a3} \end{figure} \begin{figure}[h] \input{LM8.pstex_t}\centering \caption[]{ Simple 3-polytopes with ten vertices }\label{a4} \end{figure} An easy argument shows that $\Gamma_M$ cannot be the 1-skeleton of $P_3$. Since each of $P_4,P_5,P_6,P_7$ has at least one triangular facet, similarly to the proof of case (i), one may prove that $\Gamma_M$ cannot be the 1-skeleton of $P_4,P_5,P_6,P_7$, respectively. Therefore, $\vert{\mathcal{N}}_\beta\vert=10$ is impossible, too. \vskip .2cm Combining the above arguments, one completes the proof. \end{proof} Combining Lemma~\ref{l5}, Lemma~\ref{l7} and Remark~\ref{re}, we complete the proof of Proposition~\ref{p}. 
\begin{thm}\label{dim} As a vector space over ${\Bbb Z}_2$, ${\frak M}_3$ has dimension 13, and it is generated by $({\Bbb R}P^3, T_0), ({\Bbb R}P^3, T_1),...,({\Bbb R}P^3, T_6), (S^1\times {\Bbb R}P^2, \Phi_0), (S^1\times {\Bbb R}P^2, \Phi_1), ..., (S^1\times {\Bbb R}P^2, \Phi_4), (S^1\times {\Bbb R}P^2, \Phi_6)$. \end{thm} \begin{proof} By Propositions~\ref{l2} and~\ref{p}, any element of ${\frak M}_3$ is a linear combination of the 28 small covers $({\Bbb R}P^3, T_i), i=0,1,...,6$, and $(S^1\times {\Bbb R}P^2, \Phi_j), j=0,1,...,20$. Thus, in order to calculate the dimension of ${\frak M}_3$, one needs to determine a maximal linearly independent set among the above 28 small covers. Let $$\sum_{i=0}^6l_i[({\Bbb R}P^3, T_i)]+\sum_{j=0}^{20}k_j[(S^1\times{\Bbb R}P^2, \Phi_j)]=0$$ where $l_i,k_j\in {\Bbb Z}_2$. Using the Stong homomorphism $\delta_3$ in Theorem~\ref{s}, one then has that \begin{equation} \label{e}\sum_{i=0}^6l_i\delta_3([({\Bbb R}P^3, T_i)])+\sum_{j=0}^{20}k_j\delta_3([(S^1\times{\Bbb R}P^2, \Phi_j)])=0. 
\end{equation} Since $\text{Hom}(({\Bbb Z}_2)^3,{\Bbb Z}_2)$ gives 28 different bases, from (\ref{e}) and Tables I and II, one obtains a system of 28 equations whose coefficient matrix is {\tiny \begin{eqnarray*} \setcounter{MaxMatrixCols}{28} A= \begin{pmatrix} 1 & 0 &0 &0 &0 &0&0 &1 & 1& 1&1 &0 & 0& 1& 0& 0& 1& 0 &0& 0& 0&0 &0 &0 &0 &0 &0 &0 \\ 1 &0&0&0&0&0&0&1&0&0&0&0&1&0&0&0&0&0&0&0&1&0&0&1&0&0&0&0 \\ 1 & 0 &0 &0 &0 &0 &0&0 &1 &0&0&0&0&0&0&1&0&0&0&1&0&0&0&0&0&0&1&0 \\ 1 &0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0&0&1&0&0&0&1&0&0&1&0&0 \\ 0&1&0&0&0&0&0&0&0&0&1&1&1&0&1&0&0&0&0&0&0&1&0&0&0&0&1&0\\ 0&1&0&0&0&0&0&0&1&0&1&0&0&0&0&0&1&0&0&0&1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&1&0&0&0&0&0&0&1&1&0&0\\ 0&1&0&0&0&0&0&0&0&1&0&0&1&1&0&0&0&0&0&0&0&0&0&1&0&0&0&0\\ 0&0&1&0&0&0&0&0&0&0&0&1&0&1&1&1&0&0&0&0&0&1&0&1&0&0&0&0\\ 0&0&1&0&0&0&0&1&0&0&0&0&0&1&0&0&1&0&0&1&0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0&0&0&0&0&0&0&1&0&0&1&0&0&0&0&1&0&0&0&0&1\\ 0&0&1&0&0&0&0&0&0&1&1&0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&1&0\\ 0&0&0&1&0&0&0&0&0&0&0&1&0&0&0&0&1&1&1&0&1&0&0&0&1&0&0&0\\ 0&0&0&1&0&0&0&1&0&0&0&0&0&1&0&0&1&0&0&0&0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0&0&0&0&0&0&0&1&0&0&1&0&1&0&0&0&0&0&0&0&1\\ 0&0&0&1&0&0&0&0&1&0&1&0&0&0&0&0&0&0&1&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&1&0&0&1&0&0&0&0&0&0&0&1&0&0&0&1&1&1&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&0&0&0&1&0&0&1\\ 0&0&0&0&1&0&0&0&0&0&0&1&0&0&1&0&1&0&0&0&1&0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0&0&1&0&0&0&1&0&0&0&0&0&0&0&0&1&0&0&0&0&1&0\\ 0&0&0&0&0&1&0&0&0&0&0&0&0&0&0&1&0&0&0&0&0&1&1&1&1&0&0&1\\ 0&0&0&0&0&1&0&1&0&0&0&0&0&0&0&0&0&0&1&0&1&0&1&0&0&0&0&0\\ 0&0&0&0&0&1&0&0&0&1&0&0&1&0&0&0&0&0&0&0&0&0&0&1&0&1&0&0\\ 0&0&0&0&0&1&0&0&0&0&0&1&0&1&0&0&0&1&0&0&0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&1&0&0&0&0&0&1&0&0&0&0&0&0&0&0&1&0&0&1&1&1&1\\ 0&0&0&0&0&0&1&0&1&0&0&0&0&0&0&0&0&0&1&1&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0&0&1&0&0&0&0&0&1&0&0&0&0&0&0&1&0&0&0&1&0\\ 0&0&0&0&0&0&1&0&0&0&1&0&0&0&1&0&0&1&0&0&0&0&0&0&0&0&0&1 \end{pmatrix}. 
\end{eqnarray*}} By doing elementary row operations, $A$ is changed into {\tiny \begin{eqnarray*} \setcounter{MaxMatrixCols}{28} A^{'}= \begin{pmatrix} 1&0&0&0&0&0&0 &0&0&1&0&0&0&0&0&0&0&0&1&0&0&0&1&0&0&1&0&0 \\ 0&1&0&0&0&0&0 &0&0&0&0&1&0&0&0&0&0&1&0&0&0&0&0&0&1&1&0&0 \\ 0&0&1&0&0&0&0 &0&0&0&0&0&0&0&1&0&0&1&0&0&0&0&1&0&0&0&0&1 \\ 0&0&0&1&0&0&0 &0&0&0&0&0&0&0&1&0&0&1&0&1&0&0&0&0&0&0&0&1 \\ 0&0&0&0&1&0&0 &0&0&0&0&0&0&0&0&0&0&0&1&1&0&0&0&0&1&0&0&1\\ 0&0&0&0&0&1&0 &0&0&0&0&0&0&0&0&1&0&0&0&0&0&1&1&1&1&0&0&1\\ 0&0&0&0&0&0&1 &0&0&0&0&0&1&0&0&0&0&0&0&0&0&1&0&0&1&1&1&1\\ 0&0&0&0&0&0&0 &1&0&0&0&0&0&0&0&1&0&0&1&0&1&1&0&1&1&0&0&1\\ 0&0&0&0&0&0&0 &0&1&0&0&0&1&0&0&0&0&0&1&1&0&1&0&0&1&0&1&1\\ 0&0&0&0&0&0&0 &0&0&1&0&0&1&0&0&1&0&0&0&0&0&1&1&0&1&1&0&1\\ 0&0&0&0&0&0&0 &0&0&0&1&0&1&1&0&1&1&0&1&1&1&0&1&1&0&1&1&0\\ 0&0&0&0&0&0&0 &0&0&0&0&1&0&0&1&0&1&0&1&1&1&0&0&0&1&0&0&1\\ 0&0&0&0&0&0&0 &0&0&0&0&0&0&1&1&1&1&1&1&1&1&1&1&1&1&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\ \end{pmatrix}. \end{eqnarray*}} so the rank of $A$ is 13, which is just the dimension of ${\frak M}_3$. Theorem~\ref{dim} then follows from this. 
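Both counts used in this proof can be checked mechanically (an illustrative sketch, not part of the argument): the 28 equations correspond to the 28 unordered bases of $\text{Hom}(({\Bbb Z}_2)^3,{\Bbb Z}_2)$, and the rank of a 0-1 matrix such as $A$ can be computed by Gaussian elimination over $GF(2)$. Here elements of a rank-3 $GF(2)$-vector space are encoded as bitmasks $1,\dots,7$, and matrix rows as integer bitmasks; the 28 rows of $A$ would be supplied in that form.

```python
from itertools import combinations

def is_basis(triple):
    """A 3-subset of (Z_2)^3 is a basis iff its GF(2)-span has 8 elements."""
    a, b, c = triple
    span = {(i & 1) * a ^ (i >> 1 & 1) * b ^ (i >> 2 & 1) * c for i in range(8)}
    return len(span) == 8

# 28 unordered bases of a rank-3 GF(2)-vector space:
n_bases = sum(is_basis(t) for t in combinations(range(1, 8), 3))
print(n_bases)  # 28

def gf2_rank(rows):
    """Rank over GF(2); each row is an int whose bits are the 0-1 entries."""
    rows, rank = list(rows), 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot  # pivot column: lowest set bit
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

# e.g. three rows with one GF(2)-dependency have rank 2:
print(gf2_rank([0b011, 0b101, 0b110]))  # 2
```

Feeding the 28 rows of $A$ into `gf2_rank` reproduces the rank 13 obtained above by elementary row operations.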
\end{proof} \section{Representatives of equivariant cobordism classes of ${\frak M}_3$} Given two small covers $\pi_i: M_i^n\longrightarrow P_i^n, i=1,2$, their equivariant connected sum along fixed points proceeds as follows: Take a vertex $v_i$ from $P_i^n$ and let $p_i$ be its preimage in $M_i$, $i=1,2$. Without loss of generality one may assume that the $({\Bbb Z}_2)^n$-actions are equivalent in a neighborhood of $p_i$ (actually, if necessary, one can change the action by using an automorphism of $({\Bbb Z}_2)^n$). Then one can perform the connected sum equivariantly near the fixed points $p_1, p_2$. The result is a 2-torus manifold $M_1^n\sharp M_2^n$, and its orbit space, $P_1^n\sharp P_2^n$, is given by removing a small ball around $v_i$ from $P_i^n$ and gluing the results together. As pointed out in \cite{dj}, generally $P_1^n\sharp P_2^n$ is not canonically identified with a simple polytope but is almost as good in that its boundary complex is dual to some PL triangulation of $S^{n-1}$. However, it is easy to see that if $n=3$, then $P_1^n\sharp P_2^n$ is also a simple polytope, so $M_1^n\sharp M_2^n$ is a small cover over $P_1^n\sharp P_2^n$. \begin{lem} \label{l8} There exists a 3-dimensional small cover $\pi: M^3\longrightarrow P^3$ such that $M$ is equivariantly cobordant to a 2-torus 3-manifold $N$ with $\mathcal{N}_M$ prime and $|\mathcal{N}_N|=28$. \end{lem} \begin{proof} Considering the two small covers $(S^1\times{\Bbb R}P^2, \Phi_0)$ and $(S^1\times{\Bbb R}P^2, \Phi_1)$ over a prism $P^3$, one sees from Table II that they have fixed points with the same representation $\rho_1\rho_2\rho_3$. 
Then one can make an equivariant connected sum along the fixed points with representation $\rho_1\rho_2\rho_3$, such that $(S^1\times{\Bbb R}P^2, \Phi_0)\sharp(S^1\times{\Bbb R}P^2, \Phi_1)$ is also a small cover over a simple 3-polytope with 10 vertices, and its tangent representation set is just equal to $\mathcal{N}_{[(S^1\times{\Bbb R}P^2, \Phi_0)]+[(S^1\times{\Bbb R}P^2, \Phi_1)]}$, consisting of $\rho_1\rho_2(\rho_2+\rho_3), \rho_1\rho_3(\rho_2+\rho_3),\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_1(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_1(\rho_1+\rho_3)(\rho_2+\rho_3), \rho_1\rho_2(\rho_1+\rho_3), \rho_2\rho_3(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_2+\rho_3), \rho_2(\rho_1+\rho_3)(\rho_2+\rho_3)$. From Table I one sees the following properties: \vskip .2cm (a) For any $({\Bbb R}P^3, T_i)$, the intersection of $\mathcal{N}_{[(S^1\times{\Bbb R}P^2, \Phi_0)]+[(S^1\times{\Bbb R}P^2, \Phi_1)]}$ and $\mathcal{N}_{[({\Bbb R}P^3, T_i)]}$ is always non-empty. (b) The two elements $\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3), \rho_2(\rho_1+\rho_2)(\rho_2+\rho_3)$ of $\mathcal{N}_{[(S^1\times{\Bbb R}P^2, \Phi_0)]+[(S^1\times{\Bbb R}P^2, \Phi_1)]}$ are contained in $\mathcal{N}_{[({\Bbb R}P^3, T_0)]}$. \vskip .2cm Next, one performs an equivariant connected sum of two copies of $(S^1\times{\Bbb R}P^2, \Phi_0)\sharp(S^1\times{\Bbb R}P^2, \Phi_1)$ along the fixed point with representation $\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3)$. Then the resulting $({\Bbb Z}_2)^3$-manifold $M^{'}$ has 18 isolated fixed points and is also a small cover over a simple polytope with 18 vertices. Obviously, the representations at the 18 fixed points of $M^{'}$ appear in pairs, so $M^{'}$ bounds equivariantly. 
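The cancellation behind this computation is a symmetric difference: adding cobordism classes cancels coincident fixed-point representations in pairs, which is why $\rho_1\rho_2\rho_3$ disappears and ten monomials remain. A sketch (illustrative only), with each monomial encoded as the unordered set of its three linear factors written as bitmasks ($\rho_1\to 1$, $\rho_2\to 2$, $\rho_3\to 4$, so e.g. $\rho_2+\rho_3\to 6$; the encoding is an assumption of the sketch):

```python
# N_{Phi_0} and N_{Phi_1} from Table II; each monomial is the (unordered)
# set of its three linear factors, written as bitmasks.
phi0 = {frozenset(m) for m in
        [{1, 2, 4}, {1, 2, 6}, {1, 4, 6}, {1, 3, 5}, {1, 3, 6}, {1, 5, 6}]}
phi1 = {frozenset(m) for m in
        [{1, 2, 4}, {1, 2, 5}, {2, 4, 5}, {2, 3, 5}, {2, 3, 6}, {2, 5, 6}]}

# Adding the two cobordism classes cancels the common representation
# rho1*rho2*rho3 = {1, 2, 4} and leaves the ten monomials listed above.
connected_sum = phi0 ^ phi1
print(len(connected_sum))  # 10
print(frozenset({1, 2, 4}) in connected_sum)  # False
```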
Since $\mathcal{N}_{[(S^1\times{\Bbb R}P^2, \Phi_0)]+[(S^1\times{\Bbb R}P^2, \Phi_1)]}\setminus \{\rho_1(\rho_1+\rho_2)(\rho_1+\rho_3)\}\subset \mathcal{N}_{M^{'}}$, by properties (a) and (b), one has that for any $({\Bbb R}P^3, T_i)$, the intersection $\mathcal{N}_{M^{'}}\cap \mathcal{N}_{[({\Bbb R}P^3, T_i)]}$ is non-empty, so that one can perform an equivariant connected sum of $M^{'}$ with each $({\Bbb R}P^3, T_i)$ along fixed points with the same representation. Let $M$ be the equivariant connected sum of $M^{'}$ with all $({\Bbb R}P^3, T_i)$ in the above way. Then $M$ is the desired small cover. \end{proof} \begin{thm}\label{small} Any element $\beta$ in ${\frak M}_3$ contains a small cover as its representative. \end{thm} \begin{proof} If $\beta=0$, then the bounding small cover $M^{'}$ in Lemma~\ref{l8} can be chosen as a representative of $\beta$. If $\beta\not=0$, then $\beta$ is a linear combination of the 13 small covers stated in Theorem~\ref{dim}. Consider the small cover $M$ constructed in Lemma~\ref{l8} and take a fixed point $p$ of $M$ with representation $\rho_1\rho_2\rho_3$. One first performs an equivariant connected sum $M\sharp M$ of two copies of $M$ along the fixed point $p$, so that $M\sharp M$ is also a small cover and bounds equivariantly. Obviously, one can still perform an equivariant connected sum of $M\sharp M$ with any small covers chosen arbitrarily from the 13 small covers stated in Theorem~\ref{dim} such that the resulting 2-torus manifold is a small cover. This means that $\beta$ must contain a small cover as its representative. \end{proof}
https://arxiv.org/abs/1812.03024
Dickson's Lemma, Higman's Theorem and Beyond: a survey of some basic results in order theory
We provide proofs for the fact that certain orders have no descending chains and no antichains.
\section{Introduction} \label{sec:intro} We investigate some finiteness conditions of a partially ordered set $\algop{A}{\le}$. As usual, we write $a \ge b$ for $b \le a$, and $a < b$ (or $b > a$) for $a \le b$ and $a \neq b$. Furthermore, $a \perp b$ stands for ($a \not\le b$ and $b \not \le a$); in this case, we call $a$ and $b$ \emph{incomparable}. A sequence $(a_i)_{i \in \mathbb{N}_0}$ from $A$ is a \emph{descending chain} if $a_i > a_{i+1}$ for all $i \in \mathbb{N}_0$; it is an \emph{antichain} if for all $i, j \in \mathbb{N}_0$, $a_i \le a_j$ implies $i = j$; and it is an \emph{ascending chain} if $a_i < a_{i+1}$ for all $i \in \mathbb{N}_0$. A subset $U$ of $A$ is called \emph{upward closed} if $u \in U$, $a \in A$, and $u \le a$ imply $a \in U$. For a subset $B$ of $A$, we define the \emph{upward closed set generated by $B$} by ${\uparrow} B := \{ a \in A \mid \exists b \in B : b \le a \}$. By $U \algop{A}{\le}$ or simply $U(A)$ we denote the set of upward closed subsets of $A$. This set can be ordered by set inclusion $\subseteq$. One frequently uses the fact that certain partially ordered sets have no descending chain and no antichain; such orders are called \emph{well partial orders}. \cite{La:WASO,AH:FTIS} provide powerful techniques for establishing that a given order is a well partial order. In this note, we restrict our attention to some particular ordered sets: The first set that we consider is the set $\mathbb{N}_0^m$ of vectors of natural numbers of some fixed length $m$, which we order by $(a_1, \ldots, a_m) \le (b_1, \ldots, b_m)$ if $a_i \le b_i$ for all $i \in \{1,\ldots,m\}$, and we provide proofs of the following well known facts: \begin{thm} \label{thm:n} Let $m \in \mathbb{N}$. \begin{enumerate} \item \label{it:n1} $(\mathbb{N}_0^m, \le)$ has no descending chain and no antichain \cite[Lemma~A]{Di:FOTO}.
\item \label{it:n2} $(U(\mathbb{N}_0^m, \le), \subseteq)$ has no ascending chain and no antichain \cite{Ma:AOMI}, \cite[Corollary~1.8]{AP:OOMI}. \end{enumerate} \end{thm} It is easy to see that $(\mathbb{N}_0^m, \le)$ has no descending chain. In 1913, Dickson proved that $(\mathbb{N}_0^m, \le)$ has no antichain, a fact that lies at the basis of the theory of Gr\"obner bases \cite{BL:AS,Bu:EAKF}. This fact can be stated differently. We call an ideal of the polynomial ring $\mathbb{Q}[x_1, \ldots, x_n]$ \emph{monomial} if it is generated by monomials. Then Dickson's Lemma states that every monomial ideal is finitely generated (which of course also follows from Hilbert's Basis Theorem). A somewhat surprising fact is that the set of monomial ideals of $\mathbb{Q}[x_1, \ldots, x_n]$ has no antichain, which has been proved in \cite{Ma:AOMI}, but probably much earlier in an order-theoretic setting: in this setting, the result states that the set of upward closed subsets of $(\mathbb{N}_0^m, \le)$ has no antichain. A direct proof is given at the end of Section~\ref{sec:dickson}. Another proof using an ordering of words over a finite alphabet that goes back to G.\ Higman \cite{Hi:OBDI} is given in Section~\ref{sec:other}. This is the second type of order relation we study: For a finite alphabet $A$, we say that a word $u \in A^* = \bigcup_{n \in \mathbb{N}_0} A^n$ \emph{embeds into} a word $v$ if $u$ can be obtained from $v$ by cancelling some letters, and in this case we write $u\le_e v$. For example $u = aabbca$ embeds into $v = \underline{a}b\underline{a}\underline{b}a\underline{b}\underline{c}\underline{a}c$. It follows from \cite{Hi:OBDI} that for a finite set $A$, $(A^*, \le_e)$ has no antichain. Also, the upward closed subsets of $(A^*, \le_e)$ have no antichain \cite{Na:OWLS}. Formally, we define when $x \le_e y$ holds by recursion on the length of $x$. First, the empty word $\emptyset$ satisfies $\emptyset \le_e y$ for all $y \in A^*$.
If $x = a u$ with $a \in A$ and $u \in A^*$, then $x \le_e y$ if there are words $v,w \in A^*$ such that $y = vaw$ and $u \le_e w$. Then we have: \begin{thm} \label{thm:a} Let $A$ be a finite set. \begin{enumerate} \item \label{it:a1} $(A^*, \le_e)$ has no descending chain and no antichain \cite{Hi:OBDI}. \item \label{it:a2} $(U(A^*, \le_e), \subseteq)$ has no ascending chain and no antichain. \end{enumerate} \end{thm} We will give a proof of this theorem in Section~\ref{sec:higman}. We also investigate the following ordering of words used in \cite{AMM:OTNO}. Let $A$ be a finite set, and let $B := (A \times \{0\}) \cup (A \times \{1\})$. We define a mapping $\varphi : A^* \to B^*$ by $\varphi (a_1, \ldots ,a_n) := (b_1, \ldots, b_n)$ with $b_i := (a_i, 0)$ if $a_i \not\in \{a_1, \ldots, a_{i-1}\}$ and $b_i := (a_i, 1)$ if $a_i \in \{a_1, \ldots, a_{i-1}\}$. For $u = (a_1, \ldots, a_n)$, we use $S(u)$ to denote the set of letters that occur in $u$, formally $S(u) := \{ a_i \mid i \in \{1,\ldots, n\} \}$. For $u, v \in A^*$, we say that $u \le_E v$ if $\varphi (u) \le_e \varphi (v)$ and $S(u) = S(v)$. \begin{thm} \label{thm:E} Let $A$ be a finite set. \begin{enumerate} \item \label{it:E1} $(A^*, \le_E)$ has no descending chain and no antichain \cite[Lemma~3.2]{AMM:OTNO}. \item \label{it:E2} $(U(A^*, \le_E), \subseteq)$ has no ascending chain and no antichain. \end{enumerate} \end{thm} The first listed author has used these results on several occasions: in \cite{Ai:CMCO}, the absence of antichains in $(A^*, \le_e)$ is used to establish that every constantive Mal'cev clone on a finite set is finitely related. In \cite{AM:SOCO}, it is proved that the set of admissible higher commutator operations on a finite congruence lattice is well partially ordered by a natural ordering. 
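The recursive definition of $\le_e$, together with $\varphi$ and $S$, translates directly into a short program. The following Python sketch (the function names are our own choosing, not part of the text) tests $u \le_e v$ via the equivalent formulation that $u$ is a scattered subsequence of $v$, and tests $u \le_E v$ exactly as defined above:

```python
def embeds(u, v):
    # u <=_e v: u can be obtained from v by cancelling letters,
    # i.e. u is a scattered subsequence of v.  This matches the
    # recursion: the empty word embeds into every word, and
    # a u <=_e v iff v = w a w' for some words w, w' with u <=_e w'.
    it = iter(v)
    return all(a in it for a in u)  # 'in' consumes the iterator

def phi(u):
    # Mark each letter with 0 at its first occurrence and 1 afterwards.
    seen, out = set(), []
    for a in u:
        out.append((a, 1 if a in seen else 0))
        seen.add(a)
    return out

def leq_E(u, v):
    # u <=_E v iff phi(u) <=_e phi(v) and S(u) = S(v).
    return set(u) == set(v) and embeds(phi(u), phi(v))
```

On the example from the introduction, `embeds("aabbca", "abababcac")` returns `True`; on the other hand `embeds("ba", "ab")` returns `False`, and `leq_E("ab", "ba")` is `False` although both words use the same letters.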
The variant of Higman's embedding ordering considered in Theorem~\ref{thm:E} was used in \cite{AMM:OTNO} to generalize the results from \cite{Ai:CMCO}, resulting in a proof that every finite algebra with few subpowers is finitely related, and \cite{AM:FGEC} uses the same ordering to prove that every subvariety of the variety generated by such an algebra is finitely generated. Recently, McDevitt \cite{Mc:ACOR} gave a construction of a large class of word orderings generalizing $\le_e$ and $\le_E$, and he established a sufficient criterion \cite[Proposition~47]{Mc:ACOR} for such orders to be well partial orders, thereby generalizing Theorems~\ref{thm:a}\eqref{it:a1} and \ref{thm:E}\eqref{it:E1}. An open question in clone theory is whether there is a finite set with an antichain of clones containing a Mal'cev operation. One motivation for proving the absence of antichains is that this absence often allows testing certain properties of a structure by considering whether it contains finitely many forbidden substructures \cite{RS:GM}. The aim of this note is to establish the order theoretic results that are listed above and used in \cite{Ai:CMCO, AM:SOCO, AMM:OTNO, AM:FGEC} in a rather direct way. In particular, we will not resort to the theory of better quasi orderings \cite{La:WASO}. The note is self-contained, in particular we introduce the well known and very useful concept of \emph{minimal bad sequences} due to \cite{Na:OFT}, although this is done in a similar way at several other places (cf. \cite{AH:FTIS}). However, we will not give a proof of the following theorem due to F.P.\ Ramsey \cite{Ra:OAPO} (cf. \cite{Ne:RT}): Denoting the $2$-element subsets of $\mathbb{N}_0$ by $\VecTwo{\mathbb{N}_0}{2}$, Ramsey's Theorem states that for every finite set $T$ and for every $c : \VecTwo{\mathbb{N}_0}{2} \to T$, there is an infinite subset $Y$ of $\mathbb{N}_0$ such that the restriction of $c$ to $\VecTwo{Y}{2}$ is constant. 
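Theorem~\ref{thm:n}\eqref{it:n1} guarantees in particular that every infinite sequence in $\mathbb{N}_0^m$ contains indices $i < j$ with $a^{(i)} \le a^{(j)}$ componentwise. On a finite initial segment such a pair can be searched for by brute force; a minimal Python sketch (names are our own choosing):

```python
from itertools import combinations

def leq(a, b):
    # Componentwise order on N_0^m.
    return all(x <= y for x, y in zip(a, b))

def good_pair(seq):
    # First pair (i, j) with i < j and seq[i] <= seq[j] componentwise,
    # or None if no such pair exists in the given finite segment.
    # Dickson's Lemma says that None cannot persist forever as the
    # segment grows.
    for i, j in combinations(range(len(seq)), 2):
        if leq(seq[i], seq[j]):
            return (i, j)
    return None
```

For example, $(3,0), (2,1), (1,2), (0,3)$ is an antichain in $\mathbb{N}_0^2$, so `good_pair` returns `None` on it; appending $(2,2)$ produces the pair $(1, 4)$, since $(2,1) \le (2,2)$.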
Most of the results and proofs in this note are well known and can be found, e.g., in \cite{AH:FTIS}. However, we believe that the two proofs of Theorem~\ref{thm:n}\eqref{it:n2} and the proof of Theorem~\ref{thm:a}\eqref{it:a2} are new. The derivation of Theorem~\ref{thm:E} from Theorem~\ref{thm:a} using quasi-embeddings (cf. \cite{AH:FTIS}) was suggested to the author by N.\ Ru\v{s}kuc in 2015 (cf. \cite{Ru:WICA}). Considering \cite{La:WASO,Na:OWLS,AH:FTIS}, the reader will easily find out that the theory of well quasi orders and better quasi orders goes much deeper than the glimpse given in the present note. The aim of the present note is to give easily accessible proofs for some basic and very useful results in this theory. The first listed author has faced the need for such proofs when teaching the basics of Gr\"obner Basis Theory, e.g., the existence of universal Gr\"obner bases (cf. \cite{Ai:KA}), or, in universal algebra \cite{BS:ACIU}, the order theoretical foundations of the finite relatedness of finite algebras with few subpowers \cite{AMM:OTNO}. \section{Basics of order theory} It can easily be shown (using, however, some form of the axiom of choice) that a partially ordered set $\algop{A}{\le}$ has no descending chain if and only if every nonempty subset $X$ of $A$ contains a minimal element. A sequence $(b_i)_{i \in \mathbb{N}_0}$ is a \emph{subsequence} of $(a_i)_{i \in \mathbb{N}_0}$ if there is a mapping $t : \mathbb{N}_0 \to \mathbb{N}_0$ such that $t(i) < t(j)$ whenever $i < j$, and $b_i = a_{t(i)}$ for all $i \in \mathbb{N}_0$. In this case, $(b_i)_{i \in \mathbb{N}_0} = (a_{t(i)})_{i \in \mathbb{N}_0}$. \begin{lem} \label{lem:chainsexist} Let $\ob{A} = \algop{A}{\le}$ be a partially ordered set, and let $S = (a_i)_{i \in \mathbb{N}_0}$ be a sequence from $A$.
Then $S$ has a subsequence $T = (a_{t(i)})_{i \in \mathbb{N}_0}$ such that one of the following conditions holds: \begin{enumerate} \item $T = (a_{t(i)})_{i \in \mathbb{N}_0}$ is \emph{constant}, which means that for all $i, j \in \mathbb{N}_0$, $a_{t(i)} = a_{t(j)}$. \item $T = (a_{t(i)})_{i \in \mathbb{N}_0}$ is a \emph{descending chain}, which means that for all $i, j \in \mathbb{N}_0$, $i < j \Rightarrow a_{t(i)} > a_{t(j)}$. \item $T = (a_{t(i)})_{i \in \mathbb{N}_0}$ is an \emph{ascending chain}, which means that for all $i, j \in \mathbb{N}_0$, $i < j \Rightarrow a_{t(i)} < a_{t(j)}$. \item $T = (a_{t(i)})_{i \in \mathbb{N}_0}$ is an \emph{antichain}, which means that for all $i, j \in \mathbb{N}_0$, $i < j \Rightarrow a_{t(i)} \perp a_{t(j)}$. \end{enumerate} \end{lem} \emph{Proof:} We define a coloring $c$ of the $2$-element subsets of $\mathbb{N}_0$. Let $i, j \in \mathbb{N}_0$ with $i < j$. We set $c ( \{ i, j\} ) := 1$ if $a_i = a_j$, $c ( \{ i, j\} ) := 2$ if $a_i < a_j$, $c ( \{ i, j\} ) := 3$ if $a_i > a_j$, and $c ( \{ i, j\} ) := 4$ if $a_i \perp a_j$. By Ramsey's Theorem, $\mathbb{N}_0$ contains an infinite subset $Y$ such that $c$ is constant on $\VecTwo{Y}{2}$. We let $t : \mathbb{N}_0 \to Y$ be an injective increasing function from $\mathbb{N}_0$ into $Y$. Then $T := (a_{t(i)})_{i \in \mathbb{N}_0}$ is the required subsequence. \qed \section{Bad sequences} \begin{de} Let $\algop{A}{\le}$ be a partially ordered set. A sequence $(a_i)_{i \in \mathbb{N}_0}$ from $A$ is \emph{good} if there are $i, j \in \mathbb{N}_0$ such that $i < j$ and $a_i \le a_j$, and it is \emph{bad} if it is not good. \end{de} \begin{lem} \label{lem:badseq} Let $\ob{A} = \algop{A}{\le}$ be an ordered set. The following are equivalent: \begin{enumerate} \item Every sequence from $A$ is good. \item $\ob{A}$ has no descending chain and no antichain. \end{enumerate} \end{lem} \emph{Proof:} Assume that every sequence from $A$ is good.
Then $\ob{A}$ has no descending chain and no antichain, since descending chains and antichains are bad sequences. Now assume that $\ob{A}$ has no descending chain and no antichain, and let $(a_i)_{i \in \mathbb{N}_0}$ be a sequence from $A$. Then by Lemma~\ref{lem:chainsexist}, $(a_i)_{i \in \mathbb{N}_0}$ has a subsequence $T = (a_{t(i)})_{i \in \mathbb{N}_0}$ that is constant, a descending chain, an ascending chain, or an antichain. Descending chains and antichains are excluded by the assumptions, thus $T$ is either constant or ascending. In both cases, $a_{t(0)} \le a_{t(1)}$ with $t(0) < t(1)$, and hence $(a_i)_{i \in \mathbb{N}_0}$ is good. \qed We call a sequence $(a_i)_{i \in \mathbb{N}_0}$ a \emph{minimal bad sequence} if it is a bad sequence, and for every $i \in \mathbb{N}_0$ and for every $b < a_i$, every sequence starting with $(a_0, a_1, \ldots, a_{i-1}, b)$ is good. \begin{lem} \label{lem:minbad} Let $\ob{A} = \algop{A}{\le}$ be an ordered set. If $\ob{A}$ has no descending chain and has an antichain, then it has a minimal bad sequence. \end{lem} \emph{Proof:} We assume that $(A, \le)$ has an antichain. This antichain is a bad sequence. Inductively, we can define a minimal bad sequence. We define $a_0$ to be a minimal element with respect to $\le$ of $S_0 := \{ y_0 \mid (y_i)_{i \in \mathbb{N}_0} \text{ is a bad sequence from $A$} \}$. For $j \in \mathbb{N}$, having defined $(a_0, \ldots, a_{j-1})$, we let \[ S_j := \{ y_j \mid (y_i)_{i \in \mathbb{N}_0} \text{ is a bad sequence from $A$ with } (y_0, \ldots, y_{j-1}) = (a_0, \ldots, a_{j-1}) \}, \] and we choose $a_j$ to be a minimal element of $S_j$ with respect to $\le$; since $\ob{A}$ has no descending chain and $S_j$ is nonempty, such a minimal element exists. The sequence $(a_j)_{j \in \mathbb{N}_0}$ obtained in this way is bad: if $i < j$ and $a_i \le a_j$, then any bad sequence with initial segment $(a_0, \ldots, a_j)$ (such a sequence exists since $a_j \in S_j$) would be good, a contradiction. Minimality is immediate from the choice of the $a_j$. \qed \begin{lem} \label{lem:acc} Let $\algop{A}{\le}$ be a partially ordered set. The following are equivalent: \begin{enumerate} \item $\algop{A}{\le}$ has no descending chain and no antichain. \item $\algop{U(A, \le)}{\subseteq}$ has no ascending chain.
\end{enumerate} \end{lem} \emph{Proof:} Let $(a_i)_{i \in \mathbb{N}_0}$ be a descending chain or an antichain from $A$, and let $B_i := {\uparrow} \{a_0,\ldots, a_i\}$ for $i \in \mathbb{N}_0$. Then $(B_i)_{i \in \mathbb{N}_0}$ is an ascending chain: indeed, $a_{i+1} \not\in B_i$, since $a_k \le a_{i+1}$ for some $k \le i$ would contradict the fact that $(a_i)_{i \in \mathbb{N}_0}$ is a descending chain or an antichain. For the other implication, let $(C_i)_{i \in \mathbb{N}_0}$ be an ascending chain, and choose $c_i \in C_{i+1} \setminus C_i$. We show that $(c_i)_{i \in \mathbb{N}_0}$ is bad. Suppose $i < j$ and $c_i \le c_j$. Since $C_{i+1}$ is upward closed, we then have $c_j \in C_{i+1}$, and thus $c_j \in C_j$. This contradicts the choice of $c_j$ in $C_{j+1} \setminus C_j$. Hence $\algop{A}{\le}$ has a bad sequence, and thus by Lemma~\ref{lem:badseq}, it must have a descending chain or an antichain. \qed \section{Dickson's ordering} \label{sec:dickson} In this section we provide a direct proof of Theorem~\ref{thm:n} by using Ramsey's theorem. In Section~\ref{sec:other} we show that these results can also be obtained as a consequence of Theorem~\ref{thm:a} by using quasi-embeddings. \begin{proof}[Proof of {Theorem~\ref{thm:n}\eqref{it:n1}}.] For $m \in \mathbb{N}$ and $k \in \{1,\ldots,m\}$ we denote the $m$-tuple $(a_1,\dots,a_m)\in \mathbb{N}_0^m$ by $\vb{a}$ and the $k$\,th component of $\vb{a}$ by $\vb{a}_k$. It can easily be seen that $(\mathbb{N}_0^m,\le)$ has no descending chain. We assume that $(\mathbb{N}_0^m,\le)$ has an antichain $(\vb{a}^{(i)})_{i \in \mathbb{N}_0 }$. Now we color the $2$-element subsets of $\mathbb{N}_0$ with the elements of ${\{1,2\}}^{\{1,\dots, m\}}$ as colors. For $i<j$, we set \[ c (\{i,j\}) \, (k) := \left\{ \begin{array}{rl} 1 & \text{ if } \vb{a}^{(i)}_k \le \vb{a}^{(j)}_k, \\ 2 & \text{ if } \vb{a}^{(i)}_k > \vb{a}^{(j)}_k. \end{array} \right. \] By Ramsey's Theorem, we find a subsequence $(\vb{a}^{(t(i))})_{i \in \mathbb{N}_0}$ and a color $C \in \{1,2\}^{\{1,\dots, m\}}$ such that for all $i,j \in \mathbb{N}_0$ with $i <j$, we have $c ( \{t(i), t(j)\} ) = C$.
Assume there exists $k\in {\{1,\dots, m\}}$ such that $C(k) = 2$. Then $\vb{a}^{(t(0))}_k > \vb{a}^{(t(1))}_k > \vb{a}^{(t(2))}_k > \cdots$, in contradiction to the fact that $(\mathbb{N}_0, \le)$ has no descending chain. Hence $C(k) = 1$ for all $k$, and therefore $\vb{a}^{(t(0))} \leq \vb{a}^{(t(1))} \leq \vb{a}^{(t(2))} \leq \cdots$, contradicting our assumption that $(\vb{a}^{(i)})_{i \in \mathbb{N}_0}$ is an antichain. \end{proof} \begin{proof}[Proof of {Theorem~\ref{thm:n}\eqref{it:n2}}.] By Theorem~\ref{thm:n}\eqref{it:n1}, $(\mathbb{N}_0^m, \le)$ has no descending chain and no antichain. Hence Lemma~\ref{lem:acc} yields that $(U(\mathbb{N}_0^m, \le), \subseteq)$ has no ascending chain. It remains to show that $(U(\mathbb{N}_0^m, \le), \subseteq)$ has no antichain. If $m = 1$, the set of upward closed subsets of $\mathbb{N}_0$ is totally ordered, and hence there cannot be an antichain. In the case $m \ge 2$, we encode each upward closed subset of $\mathbb{N}_0^m$ by a function from $\mathbb{N}_0^{m-1}$ to $\mathbb{N}_0 \cup \{\infty\}$. For each upward closed subset $F$ of $\mathbb{N}_0^m$ we define a function $\Phi_F : \mathbb{N}_0^{m-1} \to \mathbb{N}_0 \cup \{ \infty \}$ by \[ \Phi_F (\vb{a}) := \left\{ \begin{array}{rl} \min \{ c \in \mathbb{N}_0 \mid (\vb{a},c) \in F \} & \text{ if there exists $c' \in \mathbb{N}_0$ such that $(\vb{a},c') \in F$}, \\ \infty & \text{ otherwise} \end{array} \right. \] for $\vb{a} \in \mathbb{N}_0^{m-1}$. First we show that for all $\vb{a}, \vb{b} \in \mathbb{N}_0^{m-1}$ with $\vb{a} \le \vb{b}$, we have $\Phi_F (\vb{a}) \ge \Phi_F (\vb{b})$. To this end, let $c := \Phi_F (\vb{a})$. If $c = \infty$, the inequality holds trivially, so we assume $c \neq \infty$. Then we have $(\vb{a}, c) \in F$. Since $F$ is an upward closed set, also $(\vb{b}, c) \in F$, and hence $\Phi_F (\vb{b}) \le c = \Phi_F (\vb{a})$.
Moreover, for upward closed subsets $F,G$ of $\mathbb{N}_0^m$ the inclusion $F \subseteq G$ holds if and only if $\Phi_F (\vb{a}) \ge \Phi_G (\vb{a})$ for all $\vb{a} \in \mathbb{N}_0^{m-1}$. Let $(F_i)_{i \in \mathbb{N}_0}$ be an antichain in $(U(\mathbb{N}_0^m, \le), \subseteq)$. Thus for $i, j \in \mathbb{N}_0$ with $i < j$, $F_j \not\subseteq F_i$ holds. Accordingly, there exists $\vb{a}^{(i,j)} \in \mathbb{N}_0^{m-1}$ such that \[ \Phi_{F_j} (\vb{a}^{(i,j)}) < \Phi_{F_i}(\vb{a}^{(i,j)}). \] Now we color the $3$-element subsets of $\mathbb{N}_0$ with the elements of ${\{1,2\}}^{\{1,\dots, m-1\}}$ as colors. For $l \in \{1,\ldots,m-1\}$, we denote the $l$\,th component of $\vb{a}^{(i,j)}$ by $\vb{a}^{(i,j)}_l$. For $i<j<k$ we define the coloring of $\{i,j,k\}$ in the following way: \[ c (\{i,j, k\}) \, (l) := \left\{ \begin{array}{rl} 1 & \text{if } \vb{a}^{(i,j)}_l \le \vb{a}^{(j,k)}_l, \\ 2 & \text{if } \vb{a}^{(i,j)}_l > \vb{a}^{(j,k)}_l. \end{array} \right. \] By Ramsey's Theorem, we find an infinite subset $T$ of $\mathbb{N}_0$, $T=\{t_0, t_1, t_2,\dots\}$ with $t_0<t_1<t_2<\dots$, and a color $C \in \{1,2\}^{\{1,\dots, m-1\}}$ such that for all $i,j,k \in \mathbb{N}_0$ with $i<j<k$, we have $c ( \{t_i, t_j, t_k\} ) = C$. We now show that $C(l) = 1$ for all $l \in \{1,\ldots,m-1\}$. Suppose, for a contradiction, that there exists $l$ such that $C(l) = 2$. Then \[ \vb{a}^{(t_0, t_1)}_l > \vb{a}^{(t_1, t_2)}_l > \vb{a}^{(t_2,t_3)}_l> \cdots \] holds. Hence we have constructed a descending chain of natural numbers, which is impossible. Thus for all $r \in \mathbb{N}_0$ the inequality $\vb{a}^{(t_r, t_{r+1})} \le \vb{a}^{(t_{r+1}, t_{r+2})}$ holds. Now let $r \in \mathbb{N}_0$. Because of the choice of $\vb{a}^{(t_r, t_{r+1})}$, we have \[ \Phi_{F_{t_r}} (\vb{a}^{(t_r, t_{r+1})}) > \Phi_{F_{t_{r+1}}} (\vb{a}^{(t_r, t_{r+1})}).
\] Since $\vb{a}^{(t_r, t_{r+1})} \le \vb{a}^{(t_{r+1}, t_{r+2})}$, we also have \[ \Phi_{F_{t_{r+1}}} (\vb{a}^{(t_r, t_{r+1})}) \ge \Phi_{F_{t_{r+1}}} (\vb{a}^{(t_{r+1}, t_{r+2})}). \] Hence the sequence $(\Phi_{F_{t_i}} (\vb{a}^{(t_i, t_{i+1})}))_{i \in \mathbb{N}_0}$ is a descending chain in $\mathbb{N}_0 \cup \{\infty\}$, which is impossible. Consequently there cannot exist an antichain of upward closed subsets of $\mathbb{N}_0^m$. \end{proof} \section{Higman's ordering} \label{sec:higman} Of the results listed in Section~\ref{sec:intro}, we now prove Theorem~\ref{thm:a}. \begin{proof}[Proof of {Theorem~\ref{thm:a}\eqref{it:a1}}.] Since $u <_e v$ implies that the length of $u$ is strictly smaller than the length of $v$, $(A^*, \le_e)$ has no descending chain. We assume that $(A^*, \le_e)$ has an antichain. Then by Lemma~\ref{lem:minbad}, we find a minimal bad sequence $U = (u_i)_{i \in \mathbb{N}_0}$ in $A^*$. Such a sequence cannot contain the empty word, since $u_i = \emptyset$ implies $u_i \le_e u_{i+1}$, and therefore $U$ is good. Hence we can write $u_i = a_i v_i$ with $a_i \in A$ and $v_i \in A^*$. Since $A$ is finite, there is a subsequence $(a_{t(i)})_{i \in \mathbb{N}_0}$ and $b \in A$ such that $a_{t(i)} = b$ for all $i \in \mathbb{N}_0$. Since $v_{t(0)} <_e u_{t(0)}$, the minimality of $U$ implies that the sequence $(u_0, u_1, \ldots, u_{t(0)-1}, v_{t(0)}, v_{t(1)}, \ldots)$ is good. Therefore, we find $i, j \in \mathbb{N}_0$ such that either $i < j < t(0)$ and $u_i \le_e u_j$, or $i < t(0)$ and $u_i \le_e v_{t(j)}$, or $i < j$ and $v_{t(i)} \le_e v_{t(j)}$. In the case $i < j < t(0)$ and $u_i \le_e u_j$, $U$ is good, contradicting the assumptions. If $i < t(0)$ and $u_i \le_e v_{t(j)}$, then $u_i \le_e v_{t(j)} \le_e a_{t(j)} v_{t(j)} = u_{t(j)}$, and thus $U$ is good, contradicting the assumptions. If $i < j$ and $v_{t(i)} \le_e v_{t(j)}$, then $u_{t(i)} = a_{t(i)} v_{t(i)} = b v_{t(i)} \le_e b v_{t(j)} = a_{t(j)} v_{t(j)} = u_{t(j)}$, and thus $U$ is good, again contradicting the assumptions.
\end{proof} Before proving Theorem~\ref{thm:a}\eqref{it:a2}, we need some preparation. For $X \subseteq A^*$ and $a \in A$, we define $a^{-1}X$ by \[ a^{-1} X := \{ y \in A^* \mid ay \in X \}. \] The set of \emph{starting letters of minimal elements} of $X$ is defined by \[ \operatorname{Som} (X) := \{ a \in A \mid \exists u \in A^* : au \text{ is a minimal element of } X \}. \] \begin{lem} \label{lem:XaX} Let $X, Y$ be upward closed subsets of $\algop{A^*}{\le_e}$, and let $a \in A$. Then we have: \begin{enumerate} \item \label{it:X1} $X \subseteq a^{-1} X$. \item \label{it:X2} If $a \in \operatorname{Som} (X)$, then $X \subset a^{-1} X$. \item \label{it:X3} If $X \neq A^*$ and $b^{-1} X \subseteq b^{-1} Y$ for all $b \in \operatorname{Som} (X)$, then $X \subseteq Y$. \end{enumerate} \end{lem} \begin{proof} For item~\eqref{it:X1}, let $x \in X$. Then $x \le_e ax$, and therefore, since $X$ is upward closed, $ax \in X$. Thus $x \in a^{-1}X$. For item~\eqref{it:X2}, we choose $y \in A^*$ such that $ay$ is a minimal element of $X$ with respect to $\le_e$. Then $y \in a^{-1}X$. Since $y <_e ay$ and $ay$ is minimal in $X$, we have $y \not\in X$. Thus the inclusion $X \subset a^{-1} X$ is indeed proper. For item~\eqref{it:X3}, we fix $x \in X$. Then there is a minimal element $y$ of $X$ with respect to $\le_e$ such that $y \le_e x$. In the case that $y$ is the empty word, $X = A^*$, which is excluded by the assumptions. If $y$ is not empty, there are $b \in A$ and $z \in A^*$ such that $y = bz$. Then $b \in \operatorname{Som} (X)$ and $z \in b^{-1} X$. Therefore $z \in b^{-1} Y$, and therefore $bz \in Y$. Thus $y \in Y$, and since $Y$ is upward closed, $x \in Y$. \end{proof} \begin{proof}[Proof of {Theorem~\ref{thm:a}\eqref{it:a2}}.] By Theorem~\ref{thm:a}\eqref{it:a1}, $(A^*, \le_e)$ has no descending chain and no antichain. Hence Lemma~\ref{lem:acc} yields that $(U(A^*, \le_e), \subseteq)$ has no ascending chain.
We call a sequence $(X_i)_{i \in \mathbb{N}_0}$ \emph{co-good} if there are $i, j \in \mathbb{N}_0$ with $i < j$ and $X_i \supseteq X_j$, and \emph{co-bad} otherwise. We assume that $U(A^*)$ has an antichain. This antichain is a co-bad sequence. Inductively, we can define a maximal co-bad sequence. We define $X_0$ to be a maximal element with respect to $\subseteq$ of $S_0 := \{ Y_0 \mid (Y_i)_{i \in \mathbb{N}_0} \text{ is a co-bad sequence of $U(A^*)$} \}$. This maximal element exists because $(U(A^*, \le_e), \subseteq)$ has no ascending chain. For $j \in \mathbb{N}$, having defined $(X_0, \ldots, X_{j-1})$, we let $S_j := \{ Y_j \mid (Y_i)_{i \in \mathbb{N}_0} \text{ is a co-bad sequence of $U(A^*)$ with } (Y_0, \ldots, Y_{j-1}) = (X_0, \ldots, X_{j-1}) \}$, and we choose $X_j$ to be a maximal element of $S_j$ with respect to $\subseteq$. Since the sequence $(\operatorname{Som} (X_i))_{i \in \mathbb{N}_0}$ can take at most $2^{|A|}$ values, there is $B \subseteq A$ and a subsequence $(X_{t(i)})_{i \in \mathbb{N}_0}$ such that $\operatorname{Som} (X_{t(i)}) = B$ for all $i \in \mathbb{N}_0$. Now we color the $2$-element subsets of $\mathbb{N}_0$ with the elements of $\{1,2\}^B$ as colors. For $i < j$ and $b \in B$, we set $c( \{i, j \} ) \, (b) := 1$ if $b^{-1} X_{i} \not\supseteq b^{-1} X_{j}$, and $c( \{i, j \} ) \, (b) := 2$ if $b^{-1} X_{i} \supseteq b^{-1} X_{j}$. We restrict our coloring to the $2$-element subsets of $t[\mathbb{N}_0]$. By Ramsey's Theorem, we find a subsequence $(X_{t(r(i))})_{i \in \mathbb{N}_0} =: (X_{s(i)})_{i \in \mathbb{N}_0}$ and a color $C \in \{1,2\}^B$ such that for all $i,j \in \mathbb{N}_0$ with $i <j$, we have $c ( \{s(i), s(j)\} ) = C$. Furthermore, $\operatorname{Som}(X_{s(i)}) = B$ for all $i \in \mathbb{N}_0$. Let us first consider the case that we have $b \in B$ such that $C(b) = 1$. To this end, we consider the sequence $Y := (X_0, X_1, \ldots, X_{s(0)-1}, b^{-1} X_{s(0)}, b^{-1} X_{s(1)}, \ldots)$.
Since $b \in \operatorname{Som} (X_{s(0)})$, Lemma~\ref{lem:XaX}\eqref{it:X2} yields $X_{s(0)} \subset b^{-1} X_{s(0)}$. By the maximality of $(X_{i})_{i \in \mathbb{N}_0}$, the sequence $Y$ is co-good. Therefore, we find $i, j \in \mathbb{N}_0$ such that either $i < j < s(0)$ and $X_i \supseteq X_j$, or $i < s(0)$ and $X_i \supseteq b^{-1} X_{s(j)}$, or $i < j$ and $b^{-1} X_{s(i)} \supseteq b^{-1} X_{s(j)}$. The case $i < j < s(0)$ and $X_i \supseteq X_j$ cannot occur because the sequence $(X_i)_{i \in \mathbb{N}_0}$ is co-bad. In the case $i < s(0)$ and $X_i \supseteq b^{-1} X_{s(j)}$, Lemma~\ref{lem:XaX}~\eqref{it:X1} yields $X_i \supseteq b^{-1} X_{s(j)} \supseteq X_{s(j)}$. Again this cannot occur because $(X_i)_{i \in \mathbb{N}_0}$ is co-bad. In the case $i < j$ and $b^{-1} X_{s(i)} \supseteq b^{-1} X_{s(j)}$, we obtain $c (\{s(i), s(j) \}) \, (b) = 2$, contradicting the assumption $C(b) = 1$. Hence we have $C(b) = 2$ for all $b \in B$. Therefore, for all $b \in \operatorname{Som} (X_{s(1)})$, we have $b^{-1} (X_{s(0)}) \supseteq b^{-1} (X_{s(1)})$. Since $(X_i)_{i \in \mathbb{N}_0}$ is co-bad, we have $X_{s(1)} \not\supseteq X_{s(2)}$. From this, we conclude $X_{s(1)} \neq A^*$. Now Lemma~\ref{lem:XaX}~\eqref{it:X3} yields $X_{s(0)} \supseteq X_{s(1)}$, contradicting the fact that $(X_i)_{i \in \mathbb{N}_0}$ is co-bad. \end{proof} \section{Other well partially ordered sets} \label{sec:other} We will derive the other results stated in Section~\ref{sec:intro} by using \emph{quasi-embeddings}: \begin{de} \cite{AH:FTIS} Let $\ob{A} = (A, \le_A)$ and $\ob {B} = (B, \le_B)$ be partially ordered sets. A mapping $f : A \to B$ is a \emph{quasi-embedding} from $\ob{A}$ into $\ob{B}$ if for all $a_1, a_2 \in A : f (a_1) \le_B f (a_2) \Rightarrow a_1 \le_A a_2$. \end{de} \begin{lem} \label{lem:qe1} \cite{AH:FTIS} Let $\algop{A}{\le_A}$ and $\algop{B}{\le_B}$ be partially ordered sets. 
Let $f : A \to B$ be a quasi-embedding from $\algop{A}{\le_A}$ into $\algop{B}{\le_B}$. Then we have: \begin{enumerate} \item \label{it:a1} If $\algop{A}{\le_A}$ has an antichain, then $\algop{B}{\le_B}$ has an antichain. \item \label{it:a3} If $\algop{A}{\le_A}$ has a descending chain, then $\algop{B}{\le_B}$ has an antichain or a descending chain. \end{enumerate} \end{lem} \emph{Proof:} For~\eqref{it:a1}, we let $(a_i)_{i \in \mathbb{N}_0}$ be an antichain. We define $b_i := f(a_i)$ and we claim that $(b_i)_{i \in \mathbb{N}_0}$ is an antichain. To this end, let $j, k \in \mathbb{N}_0$ with $b_j \le b_k$. Then $f(a_j) \le f(a_k)$, and therefore $a_j \le a_k$. Since $(a_i)_{i \in \mathbb{N}_0}$ is an antichain, $j = k$, completing the proof that $(b_i)_{i \in \mathbb{N}_0}$ is an antichain. For~\eqref{it:a3}, let $(a_i)_{i \in \mathbb{N}_0}$ be a descending chain. According to Lemma~\ref{lem:chainsexist}, it is sufficient to show that the sequence $(b_i)_{i \in \mathbb{N}_0} := (f(a_i))_{i \in \mathbb{N}_0}$ has no subsequence that is constant or an ascending chain. Seeking a contradiction, we let $(b_{t(i)})_{i \in \mathbb{N}_0}$ be a subsequence with $b_{t(0)} \le b_{t(1)} \le \cdots$. Since $f$ is a quasi-embedding, $a_{t(0)} \le_A a_{t(1)}$, which contradicts the fact that $(a_i)_{i \in \mathbb{N}_0}$ is descending. \qed \begin{lem} \label{lem:qeu} Let $\algop{A}{\le_A}$ and $\algop{B}{\le_B}$ be partially ordered sets. Let $f : A \to B$ be a quasi-embedding from $\algop{A}{\le_A}$ into $\algop{B}{\le_B}$. Let $\phi : U(A, \le_A) \to U(B, \le_B)$ be defined by $\phi (X) := {\uparrow} f[X] = \{ b \in B \mid \exists x \in X : b \ge_B f(x) \}$. Then $\phi$ is a quasi-embedding from $(U(A, \le_A), \subseteq)$ into $(U(B, \le_B), \subseteq)$. \end{lem} \begin{proof} We have to show that for all $X, Y \in U(A)$, we have \begin{equation} \label{eq:x1} {\uparrow} f[X] \subseteq {\uparrow} f[Y] \Rightarrow X \subseteq Y.
\end{equation} To this end, we fix $X, Y \in U(A)$ and assume ${\uparrow} f[X] \subseteq {\uparrow} f[Y]$. We let $x \in X$. Then $f(x) \in f[X]$. Hence $f(x) \in {\uparrow} f[X]$, and therefore $f(x) \in {\uparrow} f[Y]$. Therefore, there exists $y \in Y$ such that $f(x) \ge_B f(y)$. By the assumptions on $f$, $x \ge_A y$. Since $Y$ is upward closed, we obtain $x \in Y$. This completes the proof of~\eqref{eq:x1}. \end{proof} \begin{cor} \label{cor:emb} Let $(A, \le_A)$ and $(B, \le_B)$ be partially ordered sets, and let $f$ be a quasi-embedding from $(A, \le_A)$ into $(B, \le_B)$. If $(B, \le_B)$ has no descending chain and no antichain and $(U(B, \le_B), \subseteq)$ has no antichain, then $(A, \le_A)$ has no descending chain and no antichain, and $(U(A, \le_A), \subseteq)$ has no ascending chain and no antichain. \end{cor} \begin{proof} From Lemma~\ref{lem:qe1}\eqref{it:a1}, we obtain that $\algop{A}{\le_A}$ has no antichain. From Lemma~\ref{lem:qe1}\eqref{it:a3}, we obtain that $\algop{A}{\le_A}$ has no descending chain. Lemma~\ref{lem:acc} now yields that $(U(A, \le_A), \subseteq)$ has no ascending chain. By Lemma~\ref{lem:qeu}, there exists a quasi-embedding from $(U(A, \le_A), \subseteq)$ into $(U(B, \le_B), \subseteq)$. Applying Lemma~\ref{lem:qe1}\eqref{it:a1} to this quasi-embedding, we obtain that $(U(A, \le_A), \subseteq)$ has no antichain. \end{proof} \begin{proof}[Proof of {Theorem~\ref{thm:n}}.] We first construct a quasi-embedding from $\ob{A} := (\mathbb{N}_0^m, \le)$ into $\ob{B} := (\{1,\ldots,m\}^*, \le_e)$. We define $f:A \to B$ by \[ f (x_1, \ldots,x_m) := \underbrace{11\ldots 1}_{x_1} \underbrace{22\ldots 2}_{x_2} \ldots \underbrace{mm\ldots m}_{x_m} = 1^{x_1}2^{x_2} \ldots m^{x_m} \] for all $(x_1, \ldots, x_m) \in \mathbb{N}_0^m$. Now if $1^{x_1}2^{x_2} \ldots m^{x_m} \le_e 1^{y_1}2^{y_2} \ldots m^{y_m}$, then $(x_1, \ldots, x_m) \le (y_1, \ldots, y_m)$. Thus $f$ is a quasi-embedding.
By Corollary~\ref{cor:emb} and Theorem~\ref{thm:a}, $\ob{A}$ has no antichain and no descending chain and $(U(\ob{A}), \subseteq)$ has no ascending chain and no antichain. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:E}.] We define a mapping $f: A^* \to B^* \times \mathcal{P} (A)$ by $f (u) := (\varphi (u), S(u))$. We order the set $B^* \times \mathcal{P}(A)$ by $(u, S) \le (v, T)$ if $u \le_e v$ and $S = T$. Then $f$ is a quasi-embedding from $(A^*, \le_E)$ into $B^* \times \mathcal{P} (A)$. Since $(B^*, \le_e)$ has no antichain and $\mathcal{P}(A)$ is finite, $B^* \times \mathcal{P} (A)$ has no antichain. Hence Lemma~\ref{lem:qe1}\eqref{it:a1} implies that $(A^*, \le_E)$ has no antichain. Since $u <_E v$ implies that the length of $u$ is strictly smaller than the length of $v$, $(A^*, \le_E)$ has no descending chain. Thus, using Lemma~\ref{lem:acc}, we get that $(U(A^*, \le_E), \subseteq)$ has no ascending chain. It remains to show that $(U(A^*, \le_E), \subseteq)$ has no antichain. Using Corollary~\ref{cor:emb}, it is sufficient to show that $C := ( U(B^* \times \mathcal{P} (A), \le), \subseteq )$ has no antichain. Seeking a contradiction, we let $(M_i)_{i \in \mathbb{N}_0}$ be an antichain in $C$, and for $S \in \mathcal{P} (A)$, let $X(M_i, S) := \{ w \in B^* \mid (w, S) \in M_i \}$. Then $( \langle X(M_i, S) \mid S \in \mathcal{P}(A) \rangle )_{i \in \mathbb{N}_0}$ is a sequence in $(U (B^*, \le_e))^{\mathcal{P}(A)}$. Since $\mathcal{P}(A)$ is finite, and since $U(B^*, \le_e)$ has no ascending chain and no antichain by Theorem~\ref{thm:a}\eqref{it:a2}, a modification of the argument in the proof of Theorem~\ref{thm:n}\eqref{it:n1} can be used to show that $(U (B^*, \le_e))^{\mathcal{P}(A)}$ has no ascending chain and no antichain. Hence by the dual of Lemma~\ref{lem:badseq}, there are $i, j \in \mathbb{N}_0$ with $i < j$ such that $X(M_i, S) \supseteq X(M_j, S)$ for all $S \in \mathcal{P} (A)$. 
Therefore $M_i \supseteq M_j$, contradicting the fact that $(M_i)_{i \in \mathbb{N}_0}$ is an antichain. \end{proof} \section{Acknowledgements} The authors would like to thank J.\ Farley and N.\ Ru\v{s}kuc for numerous discussions on the theory of ordered sets. Part of this research was done while the first listed author was visiting P.\ Aglian\`{o} at the University of Siena, Italy. \bibliographystyle{alpha}
https://arxiv.org/abs/2106.00437
Homological duality for covering groups of reductive $p$-adic groups
In this largely expository paper we extend properties of the homological duality functor $RHom_{\mathcal H}(-,{\mathcal H})$ where ${\mathcal H}$ is the Hecke algebra of a reductive $p$-adic group, to the case where it is the Hecke algebra of a finite central extension of a reductive $p$-adic group. The most important properties are that $RHom_{\mathcal H}(-,{\mathcal H})$ is concentrated in a single degree for irreducible representations and that it gives rise to Schneider--Stuhler duality for Ext groups (a Serre functor like property). Along the way we also study Grothendieck--Serre duality with respect to the Bernstein center and provide a proof of the folklore result that on admissible modules this functor is nothing but the contragredient duality. We single out a necessary and sufficient condition for when these three dualities agree on finite length modules in a given block. In particular, we show this is the case for all cuspidal blocks as well as, due to a result of Roche, on all blocks with trivial stabilizer in the relative Weyl group.
\section{Introduction} \subsection{} Let $G$ be a reductive $p$-adic group or a covering group (finite central extension) of such a group. The homological duality for the abelian category $\mathcal M(G)$ of smooth representations of $G$ is defined at the level of derived categories as \[ D_h:=\RHom_\mathcal H(-,\mathcal H)\colon \mathcal D^b(\mathcal M(G))\to \mathcal D^b(\mathcal M(G))^{op}\] where $\mathcal H$ is the Hecke algebra of $G$. An important property established by Bernstein in his Harvard notes \cite{BerNotes} is that if $\pi$ is irreducible then $D_h(\pi)$ is concentrated in a single degree (and is irreducible). We will denote this representation by $\bD_h(\pi)$. On the other hand, in \cite{SchStu} Schneider and Stuhler also prove this property using localization techniques on the Bruhat-Tits building. Moreover, they show that this functor $\bD_h$ enjoys a Serre functor like property, namely that for any irreducible representation $\pi$ of $G$ and any smooth representation $\pi'$ of $G$, there is a natural isomorphism \begin{align}\label{Eq:intro Sch-Stuh duality} \Ext^i_G(\pi,\pi')^*\simeq \Ext_G^{d(\pi)-i}(\pi',\bD_h(\pi)^\vee). \end{align} Actually in \cite{SchStu} this was proved only in the subcategory of representations with a fixed central character. The more general version was proved in \cite{NoriPras} and \cite{BBK}. One of the aims of this work is to present a direct proof of \eqref{Eq:intro Sch-Stuh duality} for covering groups that does not make use of \cite{SchStu} or of localization techniques. The strategy is to first show that the full homological duality $D_h$ satisfies a Serre functor like property \[ \RHom_\mathcal H(\pi,\pi')^*\simeq \RHom_\mathcal H(\pi',D_h(\pi)^\vee) \] for any finitely generated representations $\pi$ and $\pi'$. This is easy and follows from basic adjunctions naturally extended to idempotented algebras. 
For convenience of the reader, and for lack of a precise reference, we provide all the details in \cref{S:abstract duality theorem}. The second step, namely showing that $D_h(\pi)$ is concentrated in a single degree for an irreducible representation, is non-formal and is based on several ingredients: Bernstein decomposition, second adjointness and a special property of the algebra governing a cuspidal block (Frobenius-symmetric algebra over its center). Although the proof is already in \cite{BerNotes}, we hope that the argument that we present is more streamlined. Another property of the homological functor $D_h$ is that on finite length cuspidal representations it agrees (up to a shift) with the contragredient functor. A sketch of this result for irreducible representations appears already in \cite{BerNotes}, but we were not able to supply the details, so we decided to include a different proof with full details. To this effect we were led to study the Grothendieck--Serre duality over the Bernstein center $D_{GS/\mathcal Z}$ (see \cref{SS:consequences dualities} for the definition). In particular we provide a proof of the folklore result that $D_{GS/\mathcal Z}$ agrees with the contragredient for admissible representations. Moreover we identify a necessary and sufficient condition (see \eqref{Cond: R iso to D_GS(R)[d]}) for the two functors $D_h$ and $D_{GS/\mathcal Z}$ to agree (up to a shift) on a Bernstein block. In order to state the condition, we need to introduce some notation. If $\mathfrak s=[L,\rho]$ is a cuspidal datum, up to conjugation and inertia, then there is an equivalence of the Bernstein block $\mathcal M(G)_\mathfrak s$ with the module category $\rmod\mathcal R_\mathfrak s$ for some algebra $\mathcal R_\mathfrak s$ with center $\mathcal Z_\mathfrak s$ that can be described explicitly (see \cref{T:center of M(G)_s as invariants}). 
The condition alluded to before, which we call Frobenius-symmetric-Gorenstein, since it is reminiscent of both these properties, is \[ D_{GS/{\mathcal Z_\mathfrak s}}(\mathcal R_\mathfrak s)\simeq \mathcal R_\mathfrak s[d] \text{ as $\mathcal R_\mathfrak s$-bimodules}\] where $d=d(\mathfrak s)$ is the split rank of the center of $L$ (equals the Krull dimension of $\mathcal Z_\mathfrak s$). We are able to check this condition on a cuspidal block $\mathfrak s=[G,\rho]$ because in this case $\mathcal Z_\mathfrak s$ is a Laurent-polynomial algebra and $\mathcal R_\mathfrak s$ is Azumaya (see \cref{P:R_rho is Azumaya and Z Laur pol} and \cref{C:Frob sym cond for R_rho}). Therefore we deduce that on all finite length cuspidal representations in a given block, the homological duality and the Grothendieck--Serre duality agree with the contragredient duality. Moreover, if $\mathfrak s=[L,\rho]$ is a cuspidal datum, by a result of Roche \cite[Theorem 3.1]{Roche-parab}, parabolic induction induces an equivalence of the blocks $\mathcal M(L)_{[\rho]}\to\mathcal M(G)_{\mathfrak s}$ if and only if the stabilizer of the inertia class $[\rho]$ in the relative Weyl group $N_G(L)/L$ is trivial, i.e., if and only if there is no non-trivial $w\in N_G(L)/L$ such that ${}^w\rho \simeq \rho\chi$ for some unramified character $\chi$ of $L$. We also deduce that for these blocks, $D_h$ and $D_{GS/\mathcal Z}$ agree (up to a shift). There is yet another duality\footnote{It is indeed involutive but \emph{co}variant and so the name duality is slightly misleading.} on smooth representations, namely the Aubert--Zelevinsky duality. Originally, it arose in representation theory of finite groups of Lie type, where it was defined on the Grothendieck group of finite dimensional representations. Deligne--Lusztig introduced what is now called the Deligne--Lusztig complex in order to have a definition of the involution at the level of representations. 
For $p$-adic groups as well as their covering groups, the involution so constructed is usually called the Aubert--Zelevinsky involution. It can be proved (\cite{Aub,BBK,BezrPhD}) that the Aubert--Zelevinsky involution defines an exact functor from the abelian category $\mathcal M(G)$ to itself, hence it extends trivially to a functor on the derived category $\mathcal D^b(\mathcal M(G))$: \[D_{AZ}\colon \mathcal D^b(\mathcal M(G))\to \mathcal D^b(\mathcal M(G)).\] At the level of Grothendieck groups, it was also shown in \cite{Aub} that $D_{AZ}$ commutes with the contragredient, and commutation formulae with the parabolic induction and restriction functors were also provided. None of these are known for representations, say of finite length, of covering groups, as even for linear groups, all these assertions are proved by relating the Aubert--Zelevinsky involution with the homological duality, and that is not available for covering groups. The same involution was considered by Schneider--Stuhler in \cite{SchStu} and by Bezrukavnikov in \cite[Theorem 4.13]{BezrPhD} where the following isomorphism of functors was proved using localization techniques \begin{align}\label{intro:D_AZ D_GS=D_h} D_{AZ}\circ D_{GS/\mathcal Z}=D_h. \end{align} Actually, in \cite{SchStu} this was shown only for finite length modules and in the Grothendieck group whereas in \cite{BezrPhD} the more general result was established. This isomorphism was recently revisited in \cite{BBK} where a simpler proof was given using the geometry of the wonderful compactification. Results similar to Aubert's were proved in the context of smooth representations of covering groups in the work of Ban--Jantzen \cite{DubJan-I,DubJan-II} (again for the Grothendieck group of finite length representations). In this work we do not address the question of whether \eqref{intro:D_AZ D_GS=D_h} holds for covering groups nor do we study the Aubert--Zelevinsky involution in this context. 
An immediate obstacle to generalizing the approach of \cite{BBK} is that, as covering groups are no longer linear, we do not have at our disposal an obvious wonderful compactification. However, in favor of the isomorphism \eqref{intro:D_AZ D_GS=D_h} is the fact that $[d]\circ D_h=D_{GS/\mathcal Z}$ on cuspidal blocks, a result that we prove in \cref{SS:consequences dualities}. By the aforementioned result of Roche, this continues to be the case for blocks $\mathfrak s=[L,\rho]$ such that the stabilizer of the inertia class of $\rho$ is trivial in the relative Weyl group $N_G(L)/L$. One reason for interest in homological duality, and the attendant Serre functor property, is in the context of Ext analogues of branching laws, as discussed in \cite{Pra18}. Recall that usually the branching laws in the context of $p$-adic groups are considered for a representation $\pi_1$ of a reductive group $G$ to a representation $\pi_2$ of a closed subgroup $H$, as $\Hom_H(\pi_1,\pi_2)$ (and never as $\Hom_H(\pi_2,\pi_1)$). It has been suggested in \cite{Pra18} that the success of these branching laws in the various cases studied stems from the vanishing of $\Ext^i_H(\pi_1,\pi_2), i>0$, whereas by the Serre functor property, $\Ext^i_H(\pi_2,\pi_1) = 0, i< d(\pi_2)$. In particular, $\Hom_H(\pi_2,\pi_1)$ is typically zero (so no wonder it is never considered!), and shows up only through the higher extension groups, i.e., $\Ext^{d(\pi_2)}_H(\pi_2,\pi_1)$. A big chunk of the paper is expository: we give an exposition of some basic representation theoretic results in this context to show that they generalize in a naive way to the case of covering groups, culminating with Bernstein's decomposition (following closely the linear setting). We also include a sketch of the proof of second adjointness (following the exposition in \cite[VI.9.7]{Renard} of the linear case, which makes use of completion functors in order to streamline the argument). 
Then we include a short discussion of Grothendieck--Serre duality limiting ourselves to the context that is sufficient for the applications we have in mind. The reader familiar with the above classical results should directly consult \S\ref{S:homological duality}, \S\ref{S:duality SchSt}, \S\ref{S:dualities on finite length}, \S\ref{S:abstract duality theorem}. \subsection{} Below follows a more precise description of our work. Let $G=\mathbb{G}(F)$ be a $p$-adic reductive group and let ${\widetilde{G}}$ be a finite covering group of $G$. Denote by $\mathcal H$ the Hecke algebra of ${\widetilde{G}}$ and by $\mathcal M({\widetilde{G}})$ the category of smooth complex representations of ${\widetilde{G}}$. The Levi subgroups and parabolic subgroups of ${\widetilde{G}}$ are, by definition, the pullbacks of those of $G$. We have parabolic induction and restriction functors (Jacquet modules) defined as in the linear case. A representation is called cuspidal if it is killed by all proper parabolic restriction functors. We denote by $\mathcal B({\widetilde{G}})$ the equivalence classes (conjugation and inertia) of pairs $({\widetilde{L}},\rho)$ of a Levi subgroup ${\widetilde{L}}$ and an irreducible cuspidal representation $\rho$ of ${\widetilde{L}}$. The following is the Bernstein decomposition for ${\widetilde{G}}$ (see \cite{BerNotes,BerDel,Renard} for the linear case): \begin{theorem} \label{T:Bernstein dec-intro} We have a block decomposition \[ \mathcal M({\widetilde{G}})\simeq \prod_{\mathfrak s\in \mathcal B({\widetilde{G}})} \mathcal M({\widetilde{G}})_\mathfrak s. \] Moreover, each block $\mathcal M({\widetilde{G}})_\mathfrak s$ is equivalent to the category of modules over some $\mathbb{C}$-algebra $\mathcal R_\mathfrak s$ containing a finitely generated commutative subalgebra over which it is finite. \end{theorem} We will outline the proof in \cref{S:Bernstein dec} which follows closely the linear case as exposed in \cite{BerNotes} or \cite{Renard}. 
To that end, and to fix the notation, we will collect in Sections 1, 2, 3, 4 all the necessary results from the classical theory (i.e., the linear case) as well as some rudiments from category theory that go into the proof of it. This is meant to convince the reader that the same strategy as in the linear case gives also the Bernstein decomposition for covering groups. Using the second adjointness theorem of Bernstein, we prove the equivalence of $\mathcal M({\widetilde{G}})_\mathfrak s$ with $\rmod\mathcal R_\mathfrak s$ in \cref{S:Blocks as module cats}. For completeness, we also give in \cref{S:second adj} a skeleton of the proof of the second adjointness which is again meant to convince the reader that the proof from the linear case goes through without changes for covering groups. The argument we present follows \cite{Renard} rather than \cite{BerNotes} in that it uses completion of modules in order to streamline the proof. We take advantage of this opportunity to introduce the completion functors in \cref{SS:completion} and we show that (at least on admissible representations) it allows one to recover the lost properties of the invariants $(-)^N$ functor. See also \cref{C:invar and coinvar are isomorphic} for a far more advanced version (essentially equivalent to the second adjointness theorem). For an irreducible representation $\pi$, the block $\mathfrak s=[{\widetilde{L}},\rho]$ that corresponds to $\pi$ is called the cuspidal support of $\pi$. We denote by $d(\pi)=d(\mathfrak s)=d({\widetilde{L}})$, the dimension of a maximal split torus in the center of the algebraic group $\mathbb{L}$. The category of (smooth) representations of ${\widetilde{G}}$ has finite global dimension (see \cref{SS:finite homological dimension}). 
Consider the homological duality functor $D_h$ on the bounded derived category of ${\widetilde{G}}$-modules \[ D_h:=\RHom_\mathcal H(-,\mathcal H)\colon \mathcal D^b(\mathcal M({\widetilde{G}}))\to \mathcal D^b(\mathcal M({\widetilde{G}}))^{op}. \] It is surprisingly easy to show that\footnote{In the context of finite dimensional algebras, this functor is called \emph{Nakayama duality}, and it was observed in \cite[3.2 Example 3]{BonKapr} that it is a Serre functor.} $(\,)^\vee \circ D_h$ satisfies a Serre-functor-like property for the full $\RHom$, namely: \begin{theorem}\label{T:Serre functor on derived cat-intro} (see \cref{C:natural pairing RHom is perfect}) For $\pi, \pi' \in \mathcal D^b(\mathcal M({\widetilde{G}}))$ with $\pi$ finitely generated, we have a natural pairing of complexes of $\mathbb{C}$-vector spaces \[ \RHom_\mathcal H(\pi,\pi')\otimes^L_\mathbb{C} \RHom_\mathcal H(\pi',D_h(\pi)^\vee)\to \RHom_\mathcal H(\pi,D_h(\pi)^\vee)\to \mathbb{C}\] providing a natural isomorphism \[ \RHom_\mathcal H(\pi,\pi')^* = \RHom_\mathcal H(\pi',D_h(\pi)^\vee) \] where $(-)^*$ is taking the dual vector space degree-wise. \end{theorem} Notice that we do not claim that $\RHom_\mathcal H(\pi,D_h(\pi)^\vee)\to \mathbb{C}$ is an isomorphism in general but merely that such a canonical map exists. For a parabolic ${\widetilde{P}}$ with Levi decomposition ${\widetilde{P}}={\widetilde{L}} N$, we denote by $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$ (resp., $\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$) the parabolic induction (resp., restriction) functors. 
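For orientation, the Serre-functor-like statement above has a well-known finite-dimensional counterpart (the Nakayama-functor adjunction alluded to in the footnote); we record it here as an illustration only, under the simplifying assumption that $A$ is a finite-dimensional $\mathbb{C}$-algebra, $P$ a finitely generated projective $A$-module, and $M$ arbitrary. With $\nu(P):=\Hom_A(P,A)^*$ one has a natural isomorphism
\[
\Hom_A(P,M)^* \simeq \Hom_A(M,\nu(P)),
\]
induced, exactly as in the theorem, by the evaluation pairing
\[
\Hom_A(P,M)\otimes_{\mathbb{C}}\Hom_A(M,\nu(P))\longrightarrow \Hom_A(P,\nu(P))\longrightarrow \mathbb{C}.
\]
For $A=\mathbb{C}$ this reduces to the trace-pairing isomorphism $\Hom(V,W)^*\simeq \Hom(W,V)$ for finite-dimensional vector spaces.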
Here are further homological properties that one can prove about $D_h$: \begin{theorem} \label{T:homol prop of D_h-intro} The functor $D_h\colon\mathcal D^b(\mathcal M({\widetilde{G}}))\to (\mathcal D^b(\mathcal M({\widetilde{G}})))^{op}$ enjoys the following properties \begin{enumerate} \item\label{T:subpoint:vanishing Ext for D_h} If $\pi\in\mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl}$ is a finite length representation in a fixed Bernstein block, then $D_h(\pi)$ is concentrated in degree $d(\mathfrak s)$, \item\label{subp:intro-D_h2=1} $D_h^2\simeq \Id$, \item\label{subp:intro-D_h on cusp} If $\pi$ is cuspidal of finite length and lives in a single block, then $D_h(\pi) = \pi^\vee[-d(\pi)]$, \item $D_h \circ \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}} = \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}^-}^{{\widetilde{G}}} \circ D_h$ and $D_h\circ \mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}} = \mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}} \circ D_h$, \item\label{T:subpoint:exact involution} $\bD_h:=D_h[d(\mathfrak s)]$ restricted to finite length representations in $\mathcal M({\widetilde{G}})_\mathfrak s$ is an exact involution providing an equivalence \[ \bD_h\colon\mathcal M({\widetilde{G}})_\mathfrak s^{\mathsf{fl}}\xrightarrow{\sim} (\mathcal M({\widetilde{G}})_{\mathfrak s^\vee}^{\mathsf{fl}} )^{op},\] where $\mathfrak s^\vee$ is the contragredient block of $\mathfrak s$. \end{enumerate} \end{theorem} Part \eqref{T:subpoint:vanishing Ext for D_h} is proved in \cref{SS:vanishing}. The involution statement is shown in \cref{C:subp main thm:involut}. Part \eqref{subp:intro-D_h on cusp} is shown in \cref{SS:consequences dualities} after a quick interlude on the Grothendieck--Serre duality. The commutation with parabolic induction and restriction is dealt with in \cref{SS:Dh and ind res}. 
The last part is a consequence of \eqref{T:subpoint:vanishing Ext for D_h}, \eqref{subp:intro-D_h2=1} and \eqref{subp:intro-D_h on cusp}. See also \cref{C:subp main thm:involut}. Let us say a few words about the Grothendieck--Serre duality which we will briefly review in \cref{S:dualities on finite length} in the generality that we need. Any representation $\pi\in\mathcal M({\widetilde{G}})$, besides having an action of ${\widetilde{G}}$, has also an action of the Bernstein center $\mathcal Z$, and by definition these two actions commute. Therefore, any operation that one does to a $\mathcal Z$-module will produce again a ${\widetilde{G}}$-module. The most trivial operation one can think of is taking the contragredient. This works well, in the sense that it is an involution, on admissible representations. One would like to extend the contragredient in a functorial way to all finitely generated modules such that it stays an involution. Working with a fixed Bernstein block at a time, the question is now transferred to the court of commutative algebra: we have a nice ring (the invariants of a Laurent polynomial algebra under a finite group, in particular Cohen--Macaulay) and finitely generated modules over it. We would like to extend the contragredient from finite length modules to all finitely generated modules in such a way that it stays an involution. One way to do this, coming from algebraic geometry, is to use the Grothendieck--Serre duality. Although all the properties are nice and formal, one must now work in the derived category since the abelian category of modules is no longer preserved under this duality. For a commutative regular ring $A$ of Krull dimension $d$, the normalized dualizing (or canonical) complex is defined to be $\omega_A^\circ:=\omega_A[d]\in\mathcal D^b(A\lmod)$, where $\omega_A=\Lambda^d\Omega_A^1$ is the module of top differentials on $A$. 
If $A\to B$ is a finite map of commutative rings and $\omega_A^\circ$ is the normalized dualizing complex of $A$, then $\omega_B^\circ:=\RHom_A(B,\omega_A^\circ)$ is the normalized dualizing complex of $B$. In our context, the (component of the) Bernstein center $\mathcal Z_\mathfrak s$ is Cohen--Macaulay, hence it is finite free over a regular subalgebra. The above paragraph allows us to see that it has a normalized dualizing complex which moreover lives in a single degree (this is a characterization of Cohen--Macaulay rings). For a commutative ring $A$ with a normalized dualizing complex $\omega_A^\circ$, one defines the Grothendieck--Serre duality as \begin{align*} D_{GS} &\colon \mathcal D^b(A\lmod)\to (\mathcal D^b(A\lmod))^{op} \\ M & \mapsto \RHom_A(M,\omega_A^\circ). \end{align*} This has the nice property of being an involution $D_{GS}^2\simeq \Id$ when restricted to finitely generated modules. It is not obvious how to compute $D_{GS}(M)$ for a particular module $M$. In general, it is a complex of modules even if $M$ is only a module. However, a particular case in which $D_{GS}(M)$ is easy to compute is that of finite length modules $M$ (over an arbitrary ring $A$!). Indeed, we have $D_{GS}(M)=M^*$ for any $M\in A\lmod$ of finite length. For convenience of the reader we include the (short) proof of this in \cref{S:dualities on finite length}. Going back to the category of smooth representations of ${\widetilde{G}}$, we have the Grothendieck--Serre duality with respect to the Bernstein center (component by component, see \cref{SS:consequences dualities}): \begin{align*} D_{GS/\mathcal Z}&\colon\mathcal D^b(\mathcal M({\widetilde{G}})) \to (\mathcal D^b(\mathcal M({\widetilde{G}})))^{op}\\ \pi & \mapsto \RHom_\mathcal Z(\pi,\omega_\mathcal Z^\circ)^\mathrm{sm} \end{align*} where the superscript ``$\mathrm{sm}$'' means taking smooth vectors. On finitely generated representations it is an involution, i.e., $D_{GS/\mathcal Z}^2\simeq \Id$. 
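A toy example, not from the text but included only for orientation, illustrates the computation of $D_{GS}$ on a finite length module. Take $A=\mathbb{C}[t]$, so $d=1$ and $\omega_A^\circ=A\,dt\,[1]$, and let $M=\mathbb{C}[t]/(t)$ with free resolution
\[
0\to A \xrightarrow{\ \cdot t\ } A \to M \to 0.
\]
Applying $\RHom_A(-,\omega_A^\circ)$ to this resolution gives
\[
\RHom_A(M,\omega_A^\circ)\simeq \bigl[\,A\,dt \xrightarrow{\ \cdot t\ } A\,dt\,\bigr][1]
\simeq (A/tA)\,dt \simeq \mathbb{C},
\]
concentrated in degree $0$, in agreement with $D_{GS}(M)=M^*$ for finite length modules.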
In order to make sure we have extended the contragredient from admissible representations to all finitely generated ones, one needs to check that $D_{GS/\mathcal Z}(\pi)=\pi^\vee$ for all admissible representations $\pi$ of $G$. This is a folklore result whose (again easy and rather formal) proof we include for the convenience of the reader and for lack of a better reference (see \cref{C:D_GS on admissibles is contrag}). Going back to homological duality, this allows us to prove point \eqref{subp:intro-D_h on cusp} of \cref{T:homol prop of D_h-intro}. Additionally, it permits us to understand precisely under what conditions we have an isomorphism $[d]\circ D_h\simeq D_{GS/\mathcal Z}$ on a given block. For precise details, we refer to \cref{C:D_h=D_GS if and only if} and condition \eqref{Cond: R iso to D_GS(R)[d]}. \medskip Returning to homological duality and its Serre functor property, let us state the main result of interest in this work. For $\pi\in\mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl}$, a finite length representation in a Bernstein block, we put $\bD_h(\pi) = H^{d(\pi)}(D_h(\pi))$, the unique non-zero cohomology of $D_h(\pi)$ (see \cref{T:homol prop of D_h-intro}\eqref{T:subpoint:vanishing Ext for D_h}). \begin{theorem}\label{T:Serre functor reps-intro} (see \cref{T:main duality Schneider-Stuhler}) Let $\pi\in \mathcal M({\widetilde{G}})_\mathfrak s$ be of finite length and $\pi'\in \mathcal M({\widetilde{G}})$ be arbitrary. Then the following natural pairing is perfect \[ \Ext^i_{{\widetilde{G}}}(\pi,\pi')\otimes \Ext^{d(\pi)-i}_{{\widetilde{G}}}(\pi',\bD_h(\pi)^\vee)\to \Ext^{d(\pi)}_{{\widetilde{G}}}(\pi,\bD_h(\pi)^\vee)\to \mathbb{C}.\] \end{theorem} The proof is an easy consequence of \cref{T:Serre functor on derived cat-intro} and \cref{T:homol prop of D_h-intro}. It is an improvement of the results of \cite{NoriPras} where the theorem was proved only for $\pi$ irreducible. 
\begin{remark}\hfill \begin{enumerate} \item As in the linear case, it would be desirable to have an identification of $D_h$ with $D_{AZ}\circ (-)^\vee$, where $D_{AZ}$ is the Aubert--Zelevinsky duality, similar to \cite{BBK}, \cite[Proposition IV.5.1]{SchStu} (Grothendieck group) or \cite[Theorem 4.2]{BezrPhD}. This is not at all discussed in this paper. However, one can consult \cite{DubJan-II} for a discussion of Aubert--Zelevinsky duality for finite central extensions of reductive $p$-adic groups (in the Grothendieck group of representations). \item As already mentioned, the duality \cref{T:Serre functor on derived cat-intro} was first proved in \cite[Duality Theorem]{SchStu} using a more involved argument. Their strategy was to first show the vanishing of Ext groups from \cref{T:homol prop of D_h-intro}(\ref{T:subpoint:vanishing Ext for D_h}) and then proceed to prove the isomorphism from \cref{T:Serre functor on derived cat-intro} using this vanishing. Both steps hinge on nice projective resolutions constructed from the Bruhat-Tits building. The advantage of the proof that we present is that by keeping the homological duality functor $D_h$ at the derived level, the duality theorem becomes very easy and requires no technology. Once the vanishing of Ext-groups \cref{T:homol prop of D_h-intro}(\ref{T:subpoint:vanishing Ext for D_h}) is proved (for which we follow the argument in \cite{BerNotes}), we immediately deduce the required duality theorem at the level of abelian categories. \end{enumerate} \end{remark} \begin{center}{\bf Acknowledgments}\end{center} D.F. would like to thank IIT Mumbai, where this work was started, for its hospitality. The second author thanks SERB, India for its support through the JC Bose Fellowship, JBR/2020/000006. His work was also supported by a grant of the Government of the Russian Federation for the state support of scientific research carried out under the agreement 14.W03.31.0030 dated 15.02.2018. 
We would like to thank Roman Bezrukavnikov for some very useful remarks concerning homological duality and Grothendieck--Serre duality. \section{Categorical generalities} \subsection{Preliminaries on decomposing categories and centers}\label{SS:prelim centers} Given $\mathcal C_i, i\in I$, a family of abelian categories, we can define the product category $\mathcal C:=\prod_{i\in I}\mathcal C_i$ through the usual universal property. Concretely, one constructs it as follows: \begin{itemize} \item the objects are tuples $(X_i)_{i\in I}$ with $X_i\in \mathcal C_i$ for every $i\in I$; \item the morphisms are defined by \[ \Hom_\mathcal C((X_i)_i, (Y_j)_j):=\prod_{i\in I} \Hom_{\mathcal C_i}(X_i,Y_i); \] \item the projection functors $\pi_i\colon \mathcal C\to \mathcal C_i$ send a tuple $(X_s)_s$ to $X_i$. \end{itemize} The universal property of a product is immediately verified. In addition to the projection functors $\pi_i\colon\mathcal C \to \mathcal C_i$, there are also natural inclusion functors \[ \iota_i\colon \mathcal C_i\to \mathcal C \] sending an object $X\in \mathcal C_i$ to the tuple that has $X$ in position $i$ and $0$ everywhere else. By construction, the functor $\iota_i$ is fully faithful so we can think of the category $\mathcal C_i$ as a full subcategory of the product category $\mathcal C$. We will drop the functor $\iota_i$ from the notation most of the time. Two such subcategories $\mathcal C_i$ and $\mathcal C_j$ for $i\neq j$ are (derived) orthogonal, namely \[ \Ext^k_\mathcal C(X_i,X_j) = 0,\text{ for all }k\ge 0.\] Moreover we notice that the object $(X_i)_i\in\mathcal C$ is the direct sum of the objects $\iota_i(X_i)$ in $\mathcal C$. 
Indeed, let us check the universal property for direct sums: for an arbitrary object $Y=(Y_i)_i\in\mathcal C$ we have \begin{align*} \Hom_{\mathcal C}((X_i)_i, (Y_i)_i)& = \prod_i \Hom_{\mathcal C_i}(X_i,Y_i)\\ & = \prod_i \Hom_{\mathcal C}(\iota_i(X_i),(Y_j)_j)\\ & = \Hom_{\mathcal C}(\oplus_i \iota_i(X_i),(Y_j)_j). \end{align*} As a summary, we have a category $\mathcal C$ with full subcategories $\mathcal C_i$, $i\in I$, that are pairwise derived orthogonal and moreover every object of $\mathcal C$ is a direct sum of objects from $\mathcal C_i$ (the subcategories $\mathcal C_i$ split the category $\mathcal C$). Conversely, a category $\mathcal C$ with full subcategories $\mathcal C_i$, $i \in I$, with the above properties is a direct product of the $\mathcal C_i$ provided we assume a small technical condition on the categories that we work with, which in practice is always verified. \begin{proposition}\label{P:categ is product if and only if} (cf. \cite[\S1.9]{BerDel}) Let $\mathcal C$ be an abelian category and let $\mathcal C_i\subset \mathcal C$, $i\in I$ be full abelian subcategories of $\mathcal C$. Assume that \begin{enumerate} \item\label{i:sums} $\mathcal C$ admits direct sums indexed by $I$, \item\label{i:obj dir sum} every object of $\mathcal C$ can be written as a direct sum of objects of $\mathcal C_i$, \item\label{i:orth} $\Hom_\mathcal C(X_i,Y_j)=0$ for all $X_i\in\mathcal C_i$, $Y_j\in\mathcal C_j$ and $i\neq j$, \item\label{i:technical cond} if $X_i\in\mathcal C_i$, $i\in I$ and $f\colon Y\to \oplus_i X_i$ is such that ${\operatorname{proj}}_i\circ f=0$ for all $i\in I$ then $f=0$.\footnote{There are pathological examples where this condition is not satisfied.} \end{enumerate} Then the natural functor $\prod_i \mathcal C_i \to \mathcal C$ is an equivalence of categories. 
\end{proposition} \begin{proof} There is a natural functor $\prod_i \mathcal C_i \to \mathcal C$ given by $(X_i)_i\mapsto \oplus_i X_i$ that is well defined by assumption \eqref{i:sums}. By \eqref{i:obj dir sum} it is essentially surjective. In order to show that it is an equivalence it remains to check that it is fully faithful. Taking into account assumptions \eqref{i:orth} and \eqref{i:technical cond}, we have \begin{align*} \Hom_{\prod_i \mathcal C_i}((X_i)_i,(Y_i)_i) & = \prod_i \Hom_{\mathcal C_i}(X_i,Y_i)\\ & = \prod_i \Hom_{\mathcal C}(X_i,\oplus_j Y_j)\\ & = \Hom_\mathcal C (\oplus_i X_i,\oplus_j Y_j) \end{align*} which proves full faithfulness. \end{proof} \begin{definition} Given an abelian category $\mathcal C$, we define its center $\mathcal Z(\mathcal C)$ to be $\End(\Id_{\mathcal C})$, the endomorphisms of the identity functor. \end{definition} \begin{remark}\label{R:equiv of cats center} The center of a category is preserved under categorical equivalence. \end{remark} Since $\mathcal C$ is additive we see that $\mathcal Z(\mathcal C)$ is a commutative ring with unit. Moreover, the category $\mathcal C$ becomes naturally $\mathcal Z(\mathcal C)$-enriched, i.e., the $\Hom$ spaces in $\mathcal C$ have a natural action of $\mathcal Z(\mathcal C)$ making them $\mathcal Z(\mathcal C)$-modules and $\mathcal C$ into a $\mathcal Z(\mathcal C)$-linear category. \begin{example}\label{Ex:center of A-mod} If $\mathcal C = A\lmod$ for an algebra $A$ with unit, then it is not hard to see that $\mathcal Z(\mathcal C)$ is the center $\mathcal Z(A)$ of $A$. \end{example} For $e\in \mathcal Z(\mathcal C)$ an idempotent we denote by $e\mathcal C$ its image in $\mathcal C$: it is the full subcategory consisting of those objects on which $e$ acts by the identity.
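For instance (a standard example, recorded here only to illustrate the definitions), take $A = A_1\times A_2$ a product of unital algebras and $\mathcal C = A\lmod$. The element $e=(1,0)\in\mathcal Z(A)=\mathcal Z(\mathcal C)$ is a central idempotent, every module decomposes as $M = eM\oplus(1-e)M$, and
\[ e(A\lmod)\simeq A_1\lmod, \qquad (1-e)(A\lmod)\simeq A_2\lmod, \]
so that $A\lmod\simeq A_1\lmod\times A_2\lmod$, in accordance with \cref{P:categ is product if and only if}.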
An argument similar to the one in \cref{P:categ is product if and only if} proves the first part of \begin{proposition}\label{P:center of product of cats} If $\mathcal C\simeq \prod_i \mathcal C_i$ then $\mathcal Z(\mathcal C) \simeq \prod_i \mathcal Z(\mathcal C_i)$. Conversely, if $\mathcal Z(\mathcal C) \simeq \prod_i Z_i$ then the identity of each $Z_i$ provides an idempotent $e_i\in \mathcal Z(\mathcal C)$ and we have $\mathcal C\simeq \prod_i e_i\mathcal C$. \end{proposition} \begin{proof} Only the last part is non-trivial and it follows by applying \cref{P:categ is product if and only if}. \end{proof} The following is a generalization of \cref{Ex:center of A-mod} for algebras without unit but with enough idempotents. An algebra $A$ is said to have enough idempotents if for any element $a\in A$ there exists an idempotent $e\in A$ such that $ae=ea=a$. A left $A$-module $M$ is said to be non-degenerate if for any $m\in M$ there exists an idempotent $e\in A$ such that $em=m$. The category of non-degenerate left $A$-modules is denoted by $A\lmod^\mathsf{nd}$. If $A^o$ is the opposite algebra to $A$, then $\overline{A}:=\Hom_{A^o}(A,A)$ is the space of right $A$-invariant maps from $A$ to $A$. We can write $\overline{A}= \Hom_{A^o}(\varinjlim eA,A)= \varprojlim Ae$ where the limit is taken over all the idempotents of $A$. Clearly $\overline{A}$ is an $A$-bimodule. \begin{proposition}[{\cite[Lemme 1.5]{BerDel} or \cite[I.1.7]{Renard}}]\label{P:center module idempot algebra}\hfill\\ The center of the category $A\lmod^\mathsf{nd}$ is identified with the center of $\overline{A}$ as an $A$-bimodule (i.e., with the $A$-bimodule endomorphisms of $A$). It can be further identified with $\varprojlim Z(eAe)$ where $e$ goes over all idempotents of $A$. \end{proposition} In our context, the abelian categories that we will encounter will be equivalent to module categories over a unitary ring and as such their center is just the center of the corresponding ring.
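Let us record, for convenience, the standard verification behind \cref{Ex:center of A-mod}. An element $z\in\mathcal Z(A)$ acts on every left $A$-module and thus defines an endomorphism of the identity functor. Conversely, an endomorphism $\eta$ of the identity functor is determined by $z:=\eta_A(1)$: by $A$-linearity of $\eta_A$ we have $\eta_A(a)=az$, while naturality of $\eta$ with respect to the right multiplications $r_a\colon A\to A$ (which are morphisms of left $A$-modules) gives
\[ \eta_A(a) = \eta_A(r_a(1)) = r_a(\eta_A(1)) = za, \quad\text{for all } a\in A, \]
so $z$ is central; finally, naturality with respect to the maps $A\to M$, $a\mapsto am$ shows that $\eta_M$ is multiplication by $z$ on every module $M$.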
We use a standard tool from category theory to detect when an abelian category is equivalent to a module category. Recall that a functor $F\colon \mathcal C\to\mathcal D$ is called \emph{faithful} (resp., \emph{fully faithful}) if for any objects $A,B\in\mathcal C$ it induces an injective (resp., bijective) map on Hom spaces: \[ \Hom_\mathcal C(A,B)\to \Hom_\mathcal D(F(A),F(B)).\] One calls $F$ \emph{essentially surjective} if every object in $\mathcal D$ is isomorphic to the image under $F$ of an object from $\mathcal C$. A basic theorem in category theory says that $F\colon \mathcal C\to \mathcal D$ is an equivalence of categories if and only if it is essentially surjective and fully faithful. An object $P$ of an abelian category $\mathcal A$ is called \begin{enumerate} \item compact (or finite) if $\Hom_\mathcal A(P,-)$ commutes with arbitrary coproducts (= direct sums), \item projective if $\Hom_\mathcal A(P,-)$ is exact, \item a generator if $\Hom_\mathcal A(P,-)$ is faithful (injective on $\Hom$ spaces), \item a progenerator if it is projective and a generator. \end{enumerate} \begin{proposition}[see {\cite[4.11]{Pareigis-categories}}]\label{P:equiv module category} Let $\mathcal A$ be an abelian category admitting coproducts. For a compact progenerator $P$ of $\mathcal A$ the functor \begin{align*} \mathcal A & \to \rmod\End_\mathcal A(P)\\ X&\mapsto \Hom_\mathcal A(P,X) \end{align*} is an equivalence of categories between $\mathcal A$ and the category of right modules over the ring $\End_\mathcal A(P)$. \end{proposition} \subsection{Serre functors}\label{SS:Serre functors} The notion of a Serre functor for an additive category was introduced in \cite{BonKapr} in order to capture duality phenomena similar to Serre's duality theorem on smooth projective varieties. It is intimately related to questions of representability of certain functors. Moreover this turns out to be a very rigid notion.
In particular, if a Serre functor exists then it is unique (up to natural isomorphism). \begin{definition}\cite[Definition 1.28]{Huyb-FM}\cite[Definition 3.1]{BonKapr} Let $\mathcal C$ be a $k$-linear category. A Serre functor on $\mathcal C$ is the data of an equivalence of categories $\mathbb{S}\colon \mathcal C\to \mathcal C$ together with natural isomorphisms \[\eta_{A,B}\colon \Hom_\mathcal C(A,B)^*\to \Hom_\mathcal C(B,\mathbb{S}(A)), \,\text{ for all } A,B\in\mathcal C.\] \end{definition} \begin{remark} It is observed in \cite[Lemma 1.30]{Huyb-FM} that if the categories $\mathcal C_1,\mathcal C_2$ have finite dimensional Hom spaces and Serre functors $\mathbb{S}_1,\mathbb{S}_2$, then any equivalence between $\mathcal C_1$ and $\mathcal C_2$ commutes with the Serre functors. \end{remark} An example of a Serre functor, actually the one that motivated the notion, is classical Serre duality. Namely, let $X$ be a smooth projective variety over $k$ of dimension $d$ and denote by $\omega_X$ its canonical sheaf. Then Serre duality stipulates a natural isomorphism \[ \Ext^i_{\mathcal O_X}(\mathcal F,\mathcal G)^* \simeq \Ext^{d-i}_{\mathcal O_X}(\mathcal G,\mathcal F\otimes\omega_X), \text{ for all } \mathcal F,\mathcal G\in\mathcal D^b(\Coh(X)).\] In other words, the functor \[ -\otimes \omega_X[d]\colon \mathcal D^b(\Coh(X))\to \mathcal D^b(\Coh(X)) \] is a Serre functor. A simpler example comes from finite dimensional algebras and is known as the Nakayama functor, see \cite[\S3.2, Example 3]{BonKapr}. Let $A$ be a finite dimensional algebra over $k$ and suppose it has finite homological dimension. (Although there are no nontrivial finite dimensional commutative algebras of finite homological dimension, there are many non-commutative ones; for example, there are constructions using quivers.) We consider the derived category $\mathcal C:=\mathcal D^b_{fd}(A\lmod)$ of finite dimensional left $A$-modules.
There are two duality functors \[ \mathcal D^b_{fd}(A\lmod) \stackrel{\delta}{\to} \mathcal D^b_{fd}(\rmod A) \stackrel{(-)^\vee}\to \mathcal D^b_{fd}(A\lmod),\] where $\delta(M) = \RHom_A(M,A)$ and $M^\vee = \RHom_{\mathcal D^b(k)}(M,k)$ for any object $M\in \mathcal D^b(A\lmod)$. We put $D_{Nak}:= (-)^\vee\circ \delta$ and call it the Nakayama functor. The following proposition is very easy to prove from classical adjunctions and our results in \cref{S:abstract duality theorem} are essentially a detailed version of it for idempotented algebras (see \cref{T:Nakayama functor is Serre}): \begin{proposition} The Nakayama functor $D_{Nak}\colon \mathcal D^b_{fd}(A\lmod)\to\mathcal D^b_{fd}(A\lmod)$ is a Serre functor. \end{proposition} \begin{remark} Notice that we had to restrict ourselves to finite dimensional modules; the reason is the appearance of the dual vector space, which provides an equivalence of categories only for finite dimensional modules. \end{remark} In order to extend the notion to more general categories, in particular to finitely generated modules over an algebra finite over its center, Bezrukavnikov and Kaledin propose the notion of a relative Serre functor (see \cite[\S 2.1]{BezKal}). Since we do not prove anything about relative Serre functors we prefer to defer this discussion to a later work. \subsection{Central extensions}\label{SS:central ext} Let $G=\mathbb{G}(F)$ be the locally compact group of $F$-rational points of a reductive linear algebraic group $\mathbb{G}$ over a non-archimedean local field $F$. Let ${\widetilde{G}}$ be a finite central extension of $G$ with kernel a finite abelian group ${\mu}$: \[ 1\to \mu\to{\widetilde{G}}\to G\to 1 \] that is moreover a topological covering. In this situation it is proved, for example in \cite[Lemma 2.2]{DubJan-I}, that ${\widetilde{G}}$ admits a basis of neighborhoods of the identity consisting of compact open subgroups lifted from $G$.
So ${\widetilde{G}}$ is a totally disconnected group or an $l$-group in the terminology of \cite{BerNotes}. \subsection{Representations} In this section we recall some notions from the representation theory of a locally compact totally disconnected group $G$. All representations will be on complex vector spaces. One can consult \cite{BerNotes,Renard} for details. We will work with the category $\mathcal M(G)$ of smooth complex representations of such a group $G$. Let $H\le G$ be a closed subgroup of $G$. Then restricting a representation from $G$ to $H$ we obtain an exact functor between the categories of smooth representations \begin{align*} \operatorname{Res}_H^G\colon \mathcal M(G)\to \mathcal M(H). \end{align*} The restriction functor has a right adjoint given by induction \[ \operatorname{Ind}_H^G\colon \mathcal M(H)\to \mathcal M(G)\] \[\operatorname{Ind}_H^G(V):=\{f\colon G\to V\mid f(hg) = hf(g),\text{ for all }h\in H \text{ and }g\in G\}^\mathrm{sm}.\] The pair of adjoint functors $\operatorname{Res}_H^G\dashv \operatorname{Ind}_H^G$ goes under the name of Frobenius reciprocity. For details, one can look at \cite{BerNotes} or \cite[III.2.5]{Renard}. The induction functor admits a subfunctor $\operatorname{ind}_H^G\subset \operatorname{Ind}_H^G$ called \emph{compact induction} consisting of functions with compact support modulo $H$: \[ \operatorname{ind}_H^G(V):=\{f\in \operatorname{Ind}_H^G(V)\mid H\backslash\supp(f) \text{ is compact}\}. \] In case $H\backslash G$ is compact we clearly have $\operatorname{ind}_H^G=\operatorname{Ind}_H^G$. \begin{definition} If $V$ is a smooth representation of $G$ then the contragredient representation of $V$ is defined to be the smooth part of the linear dual $V^\vee:=(V^*)^\mathrm{sm}$. \end{definition} One can prove (see \cite[III.2.7]{Renard}) that induction and compact induction are related to each other through the contragredient.
More precisely, for $V$ a smooth representation of $H$ we have \[ \operatorname{Ind}_H^G(V^\vee) = \operatorname{ind}_H^G(V\otimes \delta_{H\backslash G})^\vee, \] where $\delta_{H\backslash G}$ is the quotient of the modular character of $G$ (restricted to $H$) by the modular character of $H$. Suppose now that $H\le G$ is open. Since $H$ is the complement of the union of the cosets $Hg$ with $g\notin H$, it is also a closed subgroup. \begin{lemma} (see \cite[III.2.6.5]{Renard})\label{L:induction from open adjunction} If $H\le G$ is an open subgroup then the restriction functor $\operatorname{Res}_H^G$ is right adjoint to the compact-induction functor $\operatorname{ind}_H^G$. In particular, $\operatorname{ind}_H^G$ sends projective objects to projective objects. \end{lemma} \subsection{Projectives and injectives} We continue with $G$ being a locally compact, totally disconnected group. Let $\mathcal M(G)$ be the abelian category of smooth representations of $G$. Let $\mathcal{H}(G)$ be the Hecke algebra of locally constant compactly supported functions on $G$ endowed with convolution (one needs to choose a left invariant Haar measure on $G$). The algebra $\mathcal{H}(G)$ is an algebra without unit but with a rich supply of idempotents\footnote{It is what is called an \emph{idempotented} algebra: for every finite set of elements $a_i$, there is an idempotent $e$ such that $ea_i=a_ie=a_i$ for each $a_i$.} because $G$ has a basis of neighborhoods of identity consisting of open compact subgroups. A representation $V$ of $\mathcal{H}(G)$ is said to be \emph{non-degenerate} if it has the property that $\mathcal{H}(G)V=V$. There is a well-known equivalence between the category of smooth representations of $G$ and that of non-degenerate representations of $\mathcal{H}(G)$: \[ \mathcal M(G)\simeq \mathcal H(G)\lmod^\mathsf{nd}. \] \begin{remark}\label{R:H(G) is projective} Suppose that $G$ admits a countable basis of neighborhoods of identity consisting of compact open subgroups.
Then the Hecke algebra $\mathcal H(G)$ is a projective object in $\mathcal M(G)$. Indeed, one writes $\mathcal H(G) = \bigcup_{i=1}^\infty \mathcal H(G)e_i$ where $e_i=e_{K_i}$ are idempotents corresponding to a countable basis of compact open subgroups of $G$. Since $ \Hom_G(\mathcal H(G)e_i,V) = V^{K_i}$, each of the $\mathcal H(G)e_i$ is a projective object in $\mathcal M(G)$, and then one notices that $\Hom_G(\mathcal H(G), -) = \varprojlim_i \Hom_G(\mathcal H(G)e_i,-)$ and this inverse limit is exact because the transition maps are surjective (and hence the Mittag-Leffler condition is automatically satisfied). \end{remark} The abelian category $\mathcal M(G)$ has a good supply of injective and projective objects: for example, for any open compact subgroup $K$ of $G$, $\operatorname{ind}_K^G (\mathbb{C})$ is a projective object (see \cref{L:induction from open adjunction}), and its smooth dual $\operatorname{Ind}_K^G (\mathbb{C})$ is an injective object. We use $\Ext^i_{G}(V,V')$ to denote Ext groups in $\mathcal M(G)$. \subsection{Homological dimension}\label{SS:finite homological dimension} Here $G=\mathbb{G}(F)$ and ${\widetilde{G}}$ is a central extension of $G$ as defined in \cref{SS:central ext}. It is shown in \cite[Theorem 29]{BerNotes} that the category of smooth representations of $G$ has finite homological dimension. The argument uses the building of $G$ to give a resolution of the trivial module by projective modules which are sums of representations of $G$ of the form $\operatorname{ind}_K^G (\mathbb{C})$ (where $K$ are compact open subgroups of $G$): \[0\rightarrow P_d \rightarrow \cdots \rightarrow P_1 \rightarrow P_0 \rightarrow \mathbb{C} \rightarrow 0,\] where $d$ is the split rank of $G$. Tensoring this resolution with any $G$-module $V$, and observing that \[\operatorname{ind}_K^G (\mathbb{C}) \otimes V = \operatorname{ind}_K^G(V|_K),\] we find a projective resolution of length $\leq d$ for any $G$-module $V$.
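To illustrate with the group of smallest split rank (a standard example, not needed in the sequel): for $G=SL_2(F)$ the building is the Bruhat--Tits tree, on which $G$ acts with two orbits of vertices, with stabilizers the maximal compact subgroup $K_0=SL_2(\mathcal O_F)$ (where $\mathcal O_F$ is the ring of integers of $F$) and a conjugate $K_1$ of it, and with one orbit of edges, with stabilizer the Iwahori subgroup $I=K_0\cap K_1$. The augmented simplicial chain complex of the tree is then the resolution
\[ 0\rightarrow \operatorname{ind}_{I}^{G}\,\mathbb{C} \rightarrow \operatorname{ind}_{K_0}^{G}\,\mathbb{C}\oplus \operatorname{ind}_{K_1}^{G}\,\mathbb{C} \rightarrow \mathbb{C}\rightarrow 0, \]
whose exactness expresses the contractibility of the tree; here $d=1$.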
This argument with the resolution of the trivial module $\mathbb{C}$ for $G$ works just as well for covering groups ${\widetilde{G}}\to G$. Therefore, $\Ext^i_{{\widetilde{G}}}(V,V')=0$ for any representations $V,V'$ of ${\widetilde{G}}$, if $i>d$. Since the category of representations is noetherian and, as we have seen, of finite homological dimension, any finitely generated module admits a finite resolution by finitely generated projective modules. (If $d$ is the projective dimension of a finitely generated representation $V$, resolve $V$ by finitely generated projective modules $ P_{d-1} \rightarrow \cdots \rightarrow P_1 \rightarrow P_0 \rightarrow V \rightarrow 0,$ and then observe, as in the proof of the Hilbert syzygy theorem, that the kernel of the map $P_{d-1}\rightarrow P_{d-2}$ must be projective.) This will be useful when we apply the abstract results from \cref{S:abstract duality theorem} to representation theory in \cref{S:homological duality} and \cref{S:duality SchSt} as all finitely generated modules will be perfect. \section{Splitting the category of representations}\label{S:splitting reps} \subsection{Compact representations}\label{SS:compact split} In this section $G$ denotes an arbitrary locally compact td-group which is countable at infinity. The most important result we need is that compact representations split the category of smooth representations of $G$. \begin{definition} A smooth representation $(\pi,V)$ of $G$ is said to be compact if all its matrix coefficients have compact support. \end{definition} \begin{remark} The existence of a compact irreducible representation implies that $G$ has compact center. \end{remark} We denote by $\mathcal M(G)_c$ the full subcategory of $\mathcal M(G)$ consisting of representations whose irreducible subquotients are compact representations. It is clearly a subcategory closed under subquotients, direct sums and extensions.
We denote by $\mathcal M(G)_{nc}$ the full subcategory of $\mathcal M(G)$ formed of representations that have no compact irreducible subquotient. For $\mathcal S$, a collection of irreducible representations of $G$, we denote by $\mathcal M(G)_{[\mathcal S]}$ the subcategory of $\mathcal M(G)$ formed of representations such that all their irreducible subquotients are isomorphic to an object in $\mathcal S$. Denote by $\mathcal M(G)_{[\text{out }\mathcal S]}$ the subcategory of representations such that none of their irreducible subquotients are isomorphic to an object in $\mathcal S$. The first important theorem about compact representations is \begin{theorem}\label{T:compact rep as GxG} Let $\rho$ be a compact irreducible representation of $G$. Then matrix coefficients of $\rho$ provide us with an injective map of $G\times G$-modules \[\rho\boxtimes \rho^\vee \subset\mathcal H(G),\] and this is the only subquotient of $\mathcal H(G)$ isomorphic to $\rho\boxtimes\rho^\vee$. Moreover, if $\rho'$ is another irreducible compact representation of $G$, non-isomorphic to $\rho$, then the $G\times G$ representation $\rho\boxtimes\rho'^\vee$ does not appear as a subquotient of $\mathcal H(G)$. \end{theorem} The following theorem summarizes the main properties of compact representations: \begin{theorem}[see {\cite[IV.1.3]{Renard}}]\label{T:main on cpct reps} For $\mathcal S$, a finite collection of compact irreducible representations of $G$, we have: \begin{enumerate} \item The category $\mathcal M(G)_{[\mathcal S]}$ is semisimple: all the objects in $\mathcal M(G)_{[\mathcal S]}$ are isomorphic to a direct sum of objects in $\mathcal S$. \item There is an equivalence of categories \begin{align}\label{Eq:decompose subquotients in A and outside A} \mathcal M(G) \simeq \mathcal M(G)_{[\mathcal S]}\times \mathcal M(G)_{[\text {out } \mathcal S]}. 
\end{align} \end{enumerate} Moreover, the category $\mathcal M(G)_c$ admits a decomposition \begin{align} \mathcal M(G)_c = \prod_{\tau} \mathcal M(G)_{[\tau]} \end{align} where the product runs over all isomorphism classes of compact irreducible representations of $G$. In particular, $\mathcal M(G)_c$ is a semisimple category. \end{theorem} It is natural to ask if we can always decompose a representation into a direct sum of a compact and a non-compact part. This is not completely automatic and one needs a further finiteness condition, called ``compact finite'' (see \cite[IV.1.7 (KF)]{Renard}): \begin{align}\label{Eq:condition (KF)} \tag{KF}\quad \begin{minipage}{300pt} for any compact open subgroup $K\le G$ there is only a finite number of isomorphism classes of compact irreducible representations of $G$ having a non-zero $K$-fixed vector. \end{minipage} \end{align} \begin{theorem}\label{T:dec compact times non-compact} If the group $G$ satisfies the above condition (KF), then we have a decomposition of categories \begin{align}\label{Eq:dec compact times non-compact} \mathcal M(G) = \mathcal M(G)_c\times \mathcal M(G)_{nc}. \end{align} \end{theorem} \begin{remark} Notice that in light of \cref{T:dec compact times non-compact}, every compact representation is projective-injective in $\mathcal M(G)$. This is a remarkably strong homological property that will be useful in the sequel. \end{remark} In our situation, the condition (KF) is satisfied thanks to the uniform admissibility theorem, see \cref{T:uniform adm}. \subsection{Compact modulo center}\label{SS:compact mod center} We have seen that compact representations behave as nicely as one could hope, but for many interesting groups there are no such representations. It turns out that the issue comes from the non-compactness of the center.
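The simplest illustration of this phenomenon (a standard observation): for $G = F^\times = GL_1(F)$ every irreducible smooth representation is a character $\chi$, whose matrix coefficient is $\chi$ itself, so that
\[ \supp(\chi) = F^\times \ \text{is non-compact}, \qquad Z(G)\backslash \supp(\chi) \ \text{is a point}, \]
since $Z(G)=G$. Thus $G$ has no compact representations, while every character is trivially compact modulo center.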
In this subsection we present an analogue of the decompositions of categories of \cref{T:main on cpct reps} and \cref{T:dec compact times non-compact}. \begin{definition} A representation of $G$ is called \emph{compact modulo center} if its matrix coefficients have compact support modulo $Z(G)$. \end{definition} Denote by $G^\circ$ the subgroup of $G$ generated by all compact subgroups of $G$. If $G=\mathbb{G}(F)$, for $\mathbb{G}$ a reductive group over $F$, one can define $G^\circ$ algebraically too: it is the intersection of the kernels of the maps $|\chi|\colon G \rightarrow \mathbb{R}^+$, where $\chi\colon G \rightarrow F^\times$ runs over the algebraic characters of $G$ defined over $F$. The subgroup $G^\circ$ of $G$ has the property that $G/G^\circ\simeq \mathbb{Z}^d$ for some $d\ge 0$. Further, $G^\circ Z(G)$ is a normal subgroup of finite index in $G$. We put $\mathcal X(G):=\Hom_\mathsf{gr}(G/{G^\circ}, \mathbb{C}^\times)$ and call it the group of \emph{unramified characters} of $G$. \begin{remark} A representation $\pi$ of $G$ is compact modulo center if and only if its restriction $\pi|_{G^\circ}$ is compact. \end{remark} The following well-known proposition, although simple, is the main ingredient allowing one to pass from ${G^\circ}$ to $G$: \begin{proposition}\cite[VI.3.2]{Renard}\label{P:irred res to Go and inertia classes} \begin{enumerate} \item Let $(V,\pi)$ be an irreducible representation of $G$. The irreducible representations of ${G^\circ}$ appearing in $\operatorname{Res}_{G^\circ}^G(\pi)$ are all conjugate under $G$. Moreover, the representation $\operatorname{Res}_{G^\circ}^G(\pi)$ is semisimple and of finite length. \item Let $(V_1,\pi_1)$ and $(V_2,\pi_2)$ be two irreducible representations of $G$.
The following are equivalent \begin{enumerate} \item $\operatorname{Res}_{G^\circ}^G(\pi_1)\simeq \operatorname{Res}_{G^\circ}^G(\pi_2)$, \item there exists an unramified character $\chi\in\mathcal X(G)$ such that \[ \pi_1\otimes\chi\simeq \pi_2, \] \item $\Hom_{{G^\circ}}(\operatorname{Res}_{G^\circ}^G(\pi_1),\operatorname{Res}_{G^\circ}^G(\pi_2))\neq 0$. \end{enumerate} \end{enumerate} \end{proposition} We denote by $\mathcal M(G)_\sc$ the full subcategory of $\mathcal M(G)$ formed of those representations all of whose subquotients are compact modulo center. The set of irreducible objects of $\mathcal M(G)_\sc$ is denoted by $\operatorname{Irr}(G)_\sc$. \begin{definition} We say that two irreducible representations $\rho_1, \rho_2\in\operatorname{Irr}(G)_\sc$ are in the same \emph{inertia class} if there exists a character $\chi\in\mathcal X(G)$ such that $\rho_1\simeq \chi\rho_2$. We write the corresponding equivalence relation as $\rho_1\sim \rho_2$, and denote the inertia class containing $\rho \in \operatorname{Irr}(G)_\sc$ by the square bracket $[\rho]$. We denote the set of inertia classes in $\operatorname{Irr}(G)_\sc$ by $[\operatorname{Irr}(G)_\sc]$. \end{definition} Given $\pi\in\operatorname{Irr}(G)_\sc$, we denote by $\mathcal M(G)_{[\pi]}$ the full subcategory of $\mathcal M(G)$ consisting of representations whose restriction to ${G^\circ}$ has all its irreducible subquotients among those of $\pi|_{G^\circ}$ (a finite set of compact representations of ${G^\circ}$). \begin{remark} \cref{P:irred res to Go and inertia classes} says that two irreducible representations of $G$ are in the same inertia class if and only if their restrictions to ${G^\circ}$ are isomorphic. Therefore the objects of the category $\mathcal M(G)_{[\pi]}$ are those smooth representations of $G$ all of whose irreducible subquotients are in the same inertia class as $\pi$.
\end{remark} Using the above proposition and \cref{T:main on cpct reps}, one immediately deduces: \begin{theorem}\label{T:dec of comp mod center} For an irreducible compact modulo center representation $\pi$ of $G$, we have a decomposition of the category $\mathcal M(G)$ as \[ \mathcal M(G)\simeq \mathcal M(G)_{[\pi]}\times \mathcal M(G)_{[\text{out }\pi]}. \] Moreover, the category $\mathcal M(G)_\sc$ decomposes as \[\mathcal M(G)_\sc =\prod_{[\pi]\in[\operatorname{Irr}(G)_\sc]} \mathcal M(G)_{[\pi]}.\] \end{theorem} The full subcategory of $\mathcal M(G)$ formed of those representations that have no irreducible subquotient that is compact modulo center is denoted by $\mathcal M(G)_{\mathrm{ind}}$. Using \cref{P:irred res to Go and inertia classes} and \cref{T:dec compact times non-compact} one deduces easily: \begin{theorem}\label{T:dec com mod center x others} If the group ${G^\circ}$ satisfies condition \eqref{Eq:condition (KF)} then we have a decomposition of categories \[ \mathcal M(G) = \mathcal M(G)_\sc\times \mathcal M(G)_{\mathrm{ind}}.\] \end{theorem} \subsection{The simplest Bernstein component} The goal of this subsection is to formulate and prove an analogue of \cref{T:compact rep as GxG}. Fix $\pi\in\mathcal M(G)$ an irreducible, compact modulo center representation of $G$. Then $\pi\boxtimes \pi^\vee\in\mathcal M(G\times G)$ is irreducible and compact modulo center. We consider the Hecke algebra $\mathcal H(G) = \operatorname{ind}_{\Delta G}^{G\times G}\mathbb{C}$ as a $G\times G$-module and we try to understand the part of $\mathcal H(G)$, denoted by $\mathcal H(G)[\pi\boxtimes\pi^\vee]$, that lives in $\mathcal M(G\times G)_{[\pi\boxtimes\pi^\vee]}$ (see \cref{T:dec of comp mod center}). Let us fix $\pi_0\subset \pi|_{G^\circ}$, an irreducible representation of ${G^\circ}$. Put $G_1 = \{g\in G\mid \pi_0\simeq {}^g\pi_0\}\le G$. It is a normal subgroup of $G$ and it contains the finite index subgroup ${G^\circ} Z(G)$.
Denote by $\Sigma$ the group $G/G_1$ and its order by $f$. Put $H:=\Delta(G) ({\Go\times\Go})\le G\times G$. The following is the main result of this subsection: \begin{proposition}\label{P:cusp component of H(G) as GxG module} For $\pi\in\mathcal M(G)$ an irreducible, compact modulo center representation of $G$ we have an isomorphism of $G\times G$ representations \begin{align}\label{Equation 1} \mathcal H(G)[\pi\boxtimes\pi^\vee]\simeq \operatorname{ind}_{\Delta(G_1)({\Go\times\Go})}^{G\times G}(\pi_0\boxtimes \pi_0^\vee). \end{align} Further, \begin{align}\label{Equation 2} \begin{array}{rl}(\pi\boxtimes\pi^\vee)\otimes \operatorname{ind}_H^{G\times G}\mathbb{C} &\simeq \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}}) \otimes \mathcal H(G)[\pi\boxtimes\pi^\vee]\\ &\simeq e^2f \mathcal H(G)[\pi\boxtimes\pi^\vee], \end{array} \end{align} where $ \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}})$ is a finite dimensional representation of $G/{G^\circ}$, treated as a representation of $G\times G$ trivial on $H=\Delta(G) ({\Go\times\Go})\le G\times G$, and $e$ is the common multiplicity of the irreducible summands of $\pi|_{{G^\circ}}$ (defined in the proof below). \end{proposition} \begin{proof} Put $\mathcal S = \{\pi' \mid \pi'\subset \pi\boxtimes\pi^\vee |_{G^\circ\times G^\circ} \text{ irreducible}\}$. The set $\mathcal S$ is a finite set of compact irreducible representations of ${\Go\times\Go}$ and given \cref{T:main on cpct reps} we can write \[ \mathcal H(G)|_{{\Go\times\Go}} = \mathcal H(G)_{[\mathcal S]}\oplus \mathcal H(G)_{[\text{out }\mathcal S]}\] as ${\Go\times\Go}$-representations. Moreover, since $\mathcal S$ is stable under conjugation by $G\times G$, the above decomposition holds as $G\times G$-modules. Let us write $\mathcal H(G){[\pi\boxtimes \pi^\vee]}$ for the $G\times G$ representation $\mathcal H(G)_{[\mathcal S]}$. For $\sigma\in\Sigma$, let $\pi_\sigma:=\sigma\pi_0\subset \pi$, an irreducible sub ${G^\circ}$-representation of $\pi|_{G^\circ}$.
Notice that $\pi_\sigma$ is isomorphic to the conjugate representation ${}^\sigma\pi_0$, where the action on $\pi_0$ is twisted by conjugation by $\sigma$. The irreducible summands of $\pi|_{G^\circ}$ are isomorphic to $\pi_\sigma$ for some $\sigma\in\Sigma$ and $\pi_\sigma\not\simeq\pi_{\sigma'}$ if $\sigma\neq\sigma'$. Let us write the restriction of $\pi$ to ${G^\circ}$ as (for an integer $e\geq 1$): \[ \pi|_{G^\circ} = \bigoplus_{\sigma\in\Sigma} e \pi_\sigma. \] For an irreducible representation $\pi$ of $G$, the set of unramified characters $\chi\colon G/{G^\circ}\to \mathbb{C}^\times$ with $\pi \otimes \chi \simeq \pi$ is a finite abelian group; call it $X(\pi)$. The space $\Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}})$ carries an action of the abelian group $G/{G^\circ}$; diagonalizing this action gives a basis, canonical up to scalars, whose elements are precisely the intertwining operators $\pi \otimes \chi \simeq \pi$. Thus $\dim \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}}) = e^2f$ is the number of these intertwining operators, and $\Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}})$ comes equipped with a basis $e_\chi$ such that $g\cdot e_\chi = \chi(g)e_\chi$. The ${G^\circ}$-representation $\pi_0$ does not extend to a representation of $G_1$ but the $({\Go\times\Go})$-representation $\pi_0\boxtimes\pi_0^\vee$ does extend to a representation of $\Delta(G_1) ({\Go\times\Go})$: this is most easily seen by noticing that $\pi_0$ extends to a projective representation of $G_1$, and therefore $\pi_0\boxtimes \pi_0^\vee$ is canonically a representation of $\Delta(G_1)$, hence of $\Delta(G_1) ({\Go\times\Go})$. The representations $\pi_\sigma\boxtimes \pi_{\sigma'}^\vee$ are also representations of $ \Delta(G_1) ({\Go\times\Go}) $ as they can be written as $(\sigma,\sigma')\pi_0\boxtimes\pi_0^\vee$.
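As a consistency check (a routine computation, recorded for convenience), the dimension $\dim \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}}) = e^2f$ above can also be read off directly from the decomposition of $\pi|_{{G^\circ}}$: since the $\pi_\sigma$ are pairwise non-isomorphic irreducibles, Schur's lemma gives
\[ \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}}) = \bigoplus_{\sigma\in\Sigma} \Hom_{{G^\circ}}(e\,\pi_\sigma, e\,\pi_\sigma) \simeq \bigoplus_{\sigma\in\Sigma} M_e(\mathbb{C}), \]
a space of dimension $e^2|\Sigma| = e^2 f$.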
In what follows, let $\tau$ be the irreducible representation of $H= \Delta(G)({\Go\times\Go})$, \[\tau = \bigoplus_{\sigma \in \Sigma} \pi_\sigma \boxtimes \pi_{\sigma}^\vee = \operatorname{ind}_{ \Delta(G_1)({\Go\times\Go})}^{H} (\pi_0\boxtimes \pi_{0}^\vee) .\] By induction in stages: $G \times G \supset {\Delta(G)}({G^\circ} \times {G^\circ}) \supset {\Delta(G)}$, we can write: \[ \mathcal H(G) = \operatorname{ind}_{\Delta(G)}^{G\times G} \mathbb{C} = \operatorname{ind}_H^{G\times G}\mathcal H({G^\circ}), \] as a representation of $G\times G$, where we recall that $H=\Delta(G) ({\Go\times\Go})\le G\times G$ is a normal subgroup of $G\times G$ with a natural action on $\mathcal H({G^\circ})$ (where $\Delta(G)$ acts on ${G^\circ}$ by the conjugation action, and ${\Go\times\Go}$ acts on ${G^\circ}$ by left and right translations). Since the set $\mathcal S$ is stable under conjugation by $G\times G$, we deduce that \[ \mathcal H(G)[\pi\boxtimes\pi^\vee] = \operatorname{ind}_H^{G\times G}(\mathcal H({G^\circ})_{[\mathcal S]}).\] \cref{T:compact rep as GxG} tells us that in $\mathcal M({\Go\times\Go})$, we have a natural isomorphism \[\mathcal H({G^\circ})_{[\mathcal S]} \simeq \bigoplus_{\sigma\in\Sigma} \pi_\sigma\boxtimes\pi_\sigma^\vee \simeq \operatorname{ind}_{\Delta(G_1)({\Go\times\Go})}^{ \Delta(G)({\Go\times\Go}) }( \pi_0\boxtimes \pi_{0}^\vee ),\] which is moreover an isomorphism of $H$-modules, proving the isomorphism \eqref{Equation 1}. All the irreducible subrepresentations of $\pi\boxtimes\pi^\vee|_H$ are of the form $(\sigma,1)\tau\simeq {}^{(\sigma,1)}\tau$, hence \begin{align}\label{Eq:dpi restricted to H} \pi\boxtimes\pi^\vee|_H = \bigoplus_{\sigma\in\Sigma}e^2 (\sigma,1)\tau.
\end{align} Put these together and use Mackey's relation $V\otimes \operatorname{ind}_K^{G} W = \operatorname{ind}_K^G(V|_{K}\otimes W)$ to get \begin{align*} (\pi\boxtimes\pi^\vee)\otimes \operatorname{ind}_H^{G\times G}(\mathbb{C}) & \simeq \operatorname{ind}_H^{G\times G}(\pi\boxtimes\pi^\vee|_H)\\ & \simeq e^2f\operatorname{ind}_H^{G\times G}(\tau) \\ & \simeq e^2f \operatorname{ind}_{ \Delta(G_1)({\Go\times\Go})}^{ G \times G } (\pi_0\boxtimes \pi_{0}^\vee) \\ & \simeq \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}}) \otimes \operatorname{ind}_{ \Delta(G_1)({\Go\times\Go})}^{ G \times G } (\pi_0\boxtimes \pi_{0}^\vee) \\ & \simeq \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}}) \otimes \mathcal H(G)[\pi\boxtimes\pi^\vee], \end{align*} where in the second isomorphism above we have used \eqref{Eq:dpi restricted to H} together with the fact that $H$ is a normal subgroup of $G \times G$, so that $\operatorname{ind}_H^{G\times G}(\tau) = \operatorname{ind}_H^{G\times G}({}^{(\sigma, 1)}\tau)$ for all $\sigma \in \Sigma$; the last isomorphism is the isomorphism \eqref{Equation 1}, and the one before it holds because $\Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}})$ consists of characters of $G/{G^\circ}$, each treated as a character of $G\times G$ trivial on $\Delta(G)({G^\circ} \times {G^\circ})$. Thus we have proved the isomorphism \eqref{Equation 2}. \end{proof} \begin{remark} The isomorphism \eqref{Equation 2} of the above Proposition can be interpreted as saying that for the cuspidal representation $\pi \boxtimes \pi^\vee$ of $G\times G$, the corresponding Bernstein component in $\mathcal H(G)$ ``contains'' all representations $(\pi\alpha) \boxtimes (\pi \alpha)^\vee$ where $\alpha$ runs over all the unramified characters of $G/{G^\circ}$. Since $\pi \otimes \chi \simeq \pi$ for exactly $e^2f$ characters $\chi$, this number shows up as a multiplicity in the right-hand side of \eqref{Equation 2}.
This means that for each irreducible cuspidal representation $\pi$ of $G$, $\pi \boxtimes \pi^\vee$ ``appears'' in $\mathcal H(G)$ with multiplicity 1, where the precise meaning of ``appears'' is to be understood in the derived sense, see \cref{dual}. \end{remark} \begin{remark} Suppose the group $G$ is either a reductive $p$-adic group, or a finite cover of it. Then for a standard parabolic $P=LN$ inside $G$, and any cuspidal pair $[L, \rho]$ defining a Bernstein block $\mathfrak s$ (see \cref{S:Bernstein dec}), similar to \cref{P:cusp component of H(G) as GxG module}, it is natural to propose the following isomorphism in $\mathcal M(G\times G)$ \[\operatorname{ind}^{ G \times G}_{P \times P^-}(\rho \boxtimes \rho^\vee \otimes \operatorname{ind}_{\Delta(L) (L^\circ \times L^\circ)}^{ L\times L} (\mathbb{C})) \simeq e^2fw \mathcal H(G)[\mathfrak s \times \mathfrak s^\vee],\] where $e,f$ are defined as before for the cuspidal representation $\rho$ of $L$, and $w$ is the order of $N_G(L, [\rho])/L$, where $N_G(L, [\rho])$ is the normalizer of $L$ preserving $\rho$ up to an unramified character. This would be a kind of Plancherel decomposition in the smooth category; see \cite{Hei-Planch} for some results in this direction. \end{remark} The following corollary is a consequence of the proof of the above proposition. \begin{cor} The component of $\mathcal H(G)$ in $\mathcal M(G\times G)_{[\pi_1\boxtimes\pi_2^\vee]}$ is zero unless $\pi_2\simeq\pi_1\chi$ for some unramified character $\chi$ of $G$. \end{cor} \begin{remark} In the language of the Bernstein decomposition reviewed in \cref{S:Bernstein dec}, this is equivalent to saying that the Bernstein component of $\mathcal H(G)$ corresponding to $\pi_1\boxtimes\pi_2^\vee$ is zero unless $\pi_2\simeq\pi_1\chi$ for some unramified character $\chi$ of $G$. \end{remark} We note the following corollary that will be useful to us in \cref{S:Blocks as module cats} when studying contragredients.
It also appears in \cite[Lemma B.5]{Aiz-Say}. \begin{cor}\label{C:full cuspidal homological dual of it} For any irreducible representation $\pi\in\mathcal M(G)$, compact modulo center, we have an isomorphism of $G$-modules \[ \Hom_G(\operatorname{ind}_{{G^\circ}}^G(\pi|_{{G^\circ}}),\mathcal H(G))\simeq \operatorname{ind}_{{G^\circ}}^G(\pi^\vee|_{{G^\circ}}).\] \end{cor} \begin{proof} By Mackey theory, since $({G^\circ} \times G) H = G \times G$, it follows that the restriction of $\operatorname{ind}_H^{G\times G}(\mathbb{C})$ to ${G^\circ} \times G$ is $\mathbb{C} \otimes \operatorname{ind}_{{G^\circ}}^G\mathbb{C}$. Hence by the isomorphism \eqref{Equation 2}, and Frobenius reciprocity: \begin{eqnarray*} e^2f\Hom_G(\operatorname{ind}_{{G^\circ}}^G(\pi|_{{G^\circ}}),\mathcal H(G)) & \simeq & e^2f\Hom_G(\operatorname{ind}_{{G^\circ}}^G(\pi|_{{G^\circ}}), \mathcal H(G)[\pi\boxtimes\pi^\vee]) \\ & \simeq & \Hom_G( \operatorname{ind}_{{G^\circ}}^G(\pi|_{{G^\circ}}), \pi \otimes \pi^\vee \otimes \operatorname{ind}_H^{G\times G}(\mathbb{C}) ) \\ & \simeq & \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}} \otimes (\pi^\vee \otimes \operatorname{ind}_{{G^\circ}}^{G}\mathbb{C})) \\ & \simeq & \Hom_{{G^\circ}}(\pi|_{{G^\circ}}, \pi|_{{G^\circ}}) \otimes (\pi^\vee \otimes \operatorname{ind}_{{G^\circ}}^{G}\mathbb{C}) \\ &\simeq & e^2f \operatorname{ind}_{{G^\circ}}^G(\pi^\vee|_{{G^\circ}}), \end{eqnarray*} where in the 4th isomorphism we have used the following lemma. \begin{lemma} Let $H_1$ (resp., $H_2$) be a totally disconnected group, and $\pi_1$ (resp., $\pi_2$) any smooth representation of $H_1$ (resp., $H_2$). If $\pi_1$ has finite length, then \[\Hom_{H_1}(\pi_1,\pi_1 \otimes \pi_2) \simeq \Hom_{H_1}(\pi_1,\pi_1) \otimes \pi_2\] as representations of $H_2$. \end{lemma} \end{proof} The following proposition, based on \cref{P:cusp component of H(G) as GxG module}, seems worth including as its proof is fairly elementary.
However, a generalization to finite length plus functoriality is proved in \cref{C:D_h on finlen cusp is contrag}. Recall that $G/{G^\circ}\simeq \bZ^d$ for some integer $d\ge 0$. \begin{proposition} \label{dual} Let $\pi$ be an irreducible, compact modulo center representation of $G$. Then $\RHom_{G}^{\bullet} (\pi, \mathcal H(G))$ lives only in degree $d$, where it is isomorphic to $\pi^\vee$. \end{proposition} \begin{proof} By Proposition \ref{P:cusp component of H(G) as GxG module}, the proof of this proposition boils down to proving that $\RHom_{G}^{\bullet} (\pi, \pi \boxtimes \pi^\vee \otimes \operatorname{ind}_H^{G\times G}(\mathbb{C}))$ lives only in degree $d$ and \[\Ext_{G}^{d} (\pi, \pi \boxtimes \pi^\vee \otimes \operatorname{ind}_H^{G\times G}(\mathbb{C})) = (e^2f)\pi^\vee.\] For any two modules $M,N$ of $G\times G$ which are semisimple when restricted to ${G^\circ}$ sitting in the first copy of $G$, i.e., $G \times 1 \subset G \times G$, it is well known that \[ \Ext^i_{G}[M,N] = H^i(G/{G^\circ}, \Hom_{{G^\circ}}[M,N]),\] where the groups $G,{G^\circ}$ are in $G \times 1 \subset G \times G$, and both $\Ext^i$ and $H^i$ here are modules for $1 \times G \subset G\times G$. We apply this to $M=\pi$ (treated as a module for $G\times G$ on which $1 \times G$ acts trivially), and $N= \pi \boxtimes \pi^\vee \otimes \operatorname{ind}_H^{G\times G}(\mathbb{C})$. Note that $ \operatorname{ind}_H^{G\times G}(\mathbb{C})$ restricted to ${G^\circ}$ is $\operatorname{ind}_{{G^\circ}}^{G}(\mathbb{C})$ as a representation space for $1 \times G \subset G \times G$. Thus, $N$ restricted to ${G^\circ} \times G \subset G \times G$ is $\pi|_{{G^\circ}} \otimes \pi^\vee \otimes \operatorname{ind}_{{G^\circ}}^{G}(\mathbb{C})$; further, the actions of $G/{G^\circ}$ and $1 \times G$ coincide on $\operatorname{ind}_{{G^\circ}}^{G}(\mathbb{C})$.
Thus \[ \Hom_{{G^\circ}}[\pi , \pi \otimes \pi^\vee \otimes \operatorname{ind}_H^{G\times G}(\mathbb{C}) ] =\Hom_{{G^\circ}}[\pi , \pi] \otimes \pi^\vee \otimes \operatorname{ind}_{{G^\circ}}^{G}(\mathbb{C}) ,\] therefore \[H^i(G/{G^\circ}, \Hom_{{G^\circ}}[M,N]) =H^i(G/{G^\circ}, \Hom_{{G^\circ}}[\pi , \pi] \otimes \operatorname{ind}_{{G^\circ}}^{G}(\mathbb{C})) \otimes \pi^\vee.\] Now we know that $\Hom_{{G^\circ}}[\pi , \pi]$ is a representation of $G/{G^\circ}$ of dimension $e^2f$, therefore, \[\Hom_{{G^\circ}}[\pi , \pi] \otimes \operatorname{ind}_{{G^\circ}}^{G}(\mathbb{C}) \simeq e^2f \operatorname{ind}_{{G^\circ}}^{G}(\mathbb{C}) ,\] both as a module for $G\times 1$ and $1 \times G$. Hence we are reduced to understanding $H^i(\bZ^d, \mathbb{C}[\bZ^d])$, which is well known to be zero for $i<d$, while $H^d(\bZ^d, \mathbb{C}[\bZ^d]) = \Ext_{\bZ^d}^d(\mathbb{C}, \mathbb{C}[\bZ^d]) \simeq \mathbb{C}$, completing the proof of the Proposition. \end{proof} \begin{remark} Later, in Corollary \ref{C:D_h on finlen cusp is contrag}, we will prove that Proposition \ref{dual} remains true for any finite length cuspidal representation of a $p$-adic group or of a finite cover of it. We take this occasion to mention that the Bernstein block even of a cuspidal representation is not totally trivial: although it is true that a finite length indecomposable cuspidal representation of ${\widetilde{G}}$ is a successive extension of a fixed irreducible cuspidal representation $\rho$ of ${\widetilde{G}}$, it is not true that such a finite length indecomposable cuspidal representation of ${\widetilde{G}}$ is of the form $\rho \otimes \lambda$ where $\rho$ is an irreducible cuspidal representation of ${\widetilde{G}}$ and $\lambda$ is a finite dimensional indecomposable representation of ${\widetilde{G}}/{\wG^\circ}$ (this is the case when $\rho|_{{\wG^\circ}}$ is irreducible, and then this Bernstein block is indeed rather simple).
\end{remark} \section{Basic notions of representation theory} We place ourselves in the setting $G=\mathbb{G}(F)$ for $\mathbb{G}$ a reductive group over a non-archimedean local field $F$ and ${\widetilde{G}}$ a covering group of $G$ (see \cref{SS:central ext}). All the basic notions for linear groups have analogues for the covering group ${\widetilde{G}}\to G$. \subsection{Parabolics, Levi and the Weyl group} A parabolic subgroup of ${\widetilde{G}}$ is simply the preimage of a parabolic subgroup of $G$; similarly for Levi subgroups. A Levi decomposition $P=LN$ for $P\le G$ lifts to a Levi decomposition ${\widetilde{P}} = {\widetilde{L}} N$ where $N\le {\widetilde{G}}$ is the unique lift of $N$ to ${\widetilde{G}}$ that is normalized by ${\widetilde{L}}$ (in characteristic zero such a lift is obvious; in general, see \cite[Appendix Lemma]{MoeWald}). For $P$ a parabolic subgroup of $G$ with Levi decomposition $LN$ we denote by $P^-$ the opposite parabolic with Levi decomposition $LN^-$; similarly for their preimages in ${\widetilde{G}}$. If $T$ is a maximal torus of $G$ then we denote by ${\widetilde{T}}$ its preimage\footnote{Notice that ${\widetilde{T}}$ is not necessarily abelian but this is immaterial to us.} in ${\widetilde{G}}$, and the Weyl group of ${\widetilde{G}}$ is defined simply to be equal to $W$, the Weyl group of $G$. We have $W=N_{{\widetilde{G}}}({\widetilde{T}})/{\widetilde{T}}$. \subsection{Parabolic induction and restriction} The notions of parabolic restriction (Jacquet modules) and parabolic induction make sense for $\widetilde{G}$, and they have the same adjointness properties as in the linear case. Let ${\widetilde{P}} = {\widetilde{L}} N$ be a parabolic with a Levi decomposition.
The (normalized) functors of parabolic induction and parabolic restriction are defined as follows (for details see \cite[VI.1]{Renard}): \begin{align*} \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}\colon &\mathcal M({\widetilde{L}})\to \mathcal M({\widetilde{G}}) \\ &\pi\mapsto \operatorname{ind}_{{\widetilde{P}}}^{{\widetilde{G}}}(\pi\otimes\delta_{\widetilde{P}}^{-1/2})\\ \mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}}\colon& \mathcal M({\widetilde{G}})\to \mathcal M({\widetilde{L}}) \\ &\tau\mapsto (\operatorname{Res}_{{\widetilde{P}}}^{\widetilde{G}} (\tau)\otimes\delta_{{\widetilde{P}}}^{1/2})_N \end{align*} where $(-)_N$ is the functor of coinvariants under $N$ and $\delta_{\widetilde{P}}\colon {\widetilde{P}}\to \mathbb{R}^\times_{+}$ is the modular character of ${\widetilde{P}}$. Both functors are exact. Frobenius reciprocity states that $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}}$ is right adjoint to $\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$. As a consequence, parabolic induction preserves limits and parabolic restriction preserves colimits. It is easy to establish that parabolic induction preserves admissibility (see \cite[III.2.3]{Renard}): this is because the double coset space ${\widetilde{K}}\backslash {\widetilde{G}}/{\widetilde{P}}$ is finite for any open compact ${\widetilde{K}}\le {\widetilde{G}}$. We define cuspidal representations of $\widetilde{G}$ just as in the linear case: those smooth representations of $\widetilde{G}$ all of whose nontrivial parabolic restrictions vanish (i.e., all Jacquet modules vanish). By the same arguments as in the linear case (see \cite[VI.5.1]{Renard}), the geometric lemma holds and allows one to calculate the Jordan--Hölder series of the Jacquet modules of a principal series representation of $\widetilde{G}$.
Then one can conclude, as in the linear case, that every irreducible representation of $\widetilde{G}$ is a subquotient of a principal series representation of $\widetilde{G}$ induced from a cuspidal representation of a Levi subgroup of $\widetilde{G}$. \subsection{Iwahori decomposition} \begin{definition}\label{D:Iwahori factorization} A compact open subgroup $K$ of $G$ (or of ${\widetilde{G}}$) is said to have an Iwahori factorization with respect to a parabolic $P=LN$ with opposite parabolic $P^-=LN^-$ if the natural map given by multiplication: \[ m\colon K^+ \times K^0 \times K^- \rightarrow K,\] is a bijection, where \begin{align*} K^+ &= K \cap N, \\ K^0 &= K \cap L ,\\ K^- &= K \cap N^-. \end{align*} For central extensions ${\widetilde{G}}$, we replace $P=LN$ with ${\widetilde{P}}={\widetilde{L}} N$ and ${\widetilde{P}}^-={\widetilde{L}} N^-$, and the same definition gives the notion of an Iwahori factorization in ${\widetilde{G}}$. \end{definition} \subsection{Cartan decomposition} The Cartan decomposition for the group $G$ plays an important role in the proof of the uniform admissibility theorem. The analogous theorem for ${\widetilde{G}}$ is proved by taking the inverse image from $G$. Let us recall some notions before we state the theorem. Recall from \cref{SS:compact mod center} that ${\wG^\circ}$ is the subgroup of ${\widetilde{G}}$ generated by all compact subgroups. It is the preimage of the similarly defined subgroup ${G^\circ}\le G$. We put $\Lambda({\widetilde{G}}):={\widetilde{G}}/{\wG^\circ} = G/{G^\circ}$; it is a finite rank free $\bZ$-module. \begin{definition}\label{D:unramified characters} The set of unramified characters of ${\widetilde{G}}$ is defined to be \[ \mathcal X({\widetilde{G}}):=\Hom({\widetilde{G}}/{\wG^\circ},\mathbb{C}^\times) = \Hom(G/{G^\circ},\mathbb{C}^\times).
\] \end{definition} It is clear that the image of $Z({\widetilde{G}})$ has finite index in ${\widetilde{G}}/{\wG^\circ}$ and hence the rank of $Z({\widetilde{G}})$ is the same as the rank of $Z(G)$. In other words, we have $\mathcal X({\widetilde{G}})\simeq (\mathbb{C}^\times)^{d_G}$ where $d_G$ is the split rank of $Z(G)$. Notice that by definition, the unramified characters of ${\widetilde{G}}$ are the same as those of $G$ (identified through the map ${\widetilde{G}} \to G$). \textbf{Construction.} For $\mathbb{L}$, a reductive group over $F$, we denote by $A_\mathbb{L}$, the maximal split torus contained in the center of $\mathbb{L}$, and denote $d(\mathbb{L}) :=\dim(A_\mathbb{L})$, and call it the split rank of the center of $\mathbb{L}$. We extend this definition to any finite central extension ${\widetilde{L}}$ of $\mathbb{L}(F)$, defining $d({\widetilde{L}})= d(\mathbb{L}) =\dim(A_\mathbb{L})$. Put $A:=A_\mathbb{L}(F)$ and notice that the image of $A/A^\circ \to L/L^\circ$ has finite index. Moreover, the surjection $A\to A/A^\circ$ admits a section that is a group homomorphism (use a uniformizer in $F$) and so we get an injection $A/A^\circ \hookrightarrow L$ whose image in $L/L^\circ$ has finite index. If ${\widetilde{L}}$ is a finite central extension of $L$, then the natural map $Z({\widetilde{L}})\rightarrow Z(L)$ induces a map of lattices $\Lambda(Z({\widetilde{L}}))\to \Lambda(Z(L))$ whose image has finite index. We can therefore find a lift $\Lambda(Z({\widetilde{L}}))\to {\widetilde{L}}$ that is a group homomorphism and moreover its image in $\Lambda({\widetilde{L}})=\Lambda(L)$ has finite index. 
Summarizing, we have a diagram \[ \xymatrix{ {\widetilde{L}} \ar[r]& \Lambda({\widetilde{L}}) \ar[r]^{=} & \Lambda(L)\\ Z({\widetilde{L}})\ar@{^(->}[u] \ar[r]& \Lambda(Z({\widetilde{L}})) \ar@{^(->}[r] \ar@{^(->}[u]& \Lambda(Z(L)) \ar@{^(->}[u] } \] and from the discussion above we have a section $\Lambda(Z({\widetilde{L}}))\to Z({\widetilde{L}})$ which is a group homomorphism. Denote its image by $C_A$; it is a central subgroup of ${\widetilde{L}}$. Fix also finitely many elements $F_A:=\{f_1,\dots,f_r\}\subset {\widetilde{L}}$ that lift some system of representatives for $\Lambda({\widetilde{L}})/\Lambda(Z({\widetilde{L}}))$. All in all, $C_A F_A$ provides a set-theoretic lift of $\Lambda({\widetilde{L}})$ to ${\widetilde{L}}$ that is moreover a group homomorphism when restricted to $\Lambda(Z({\widetilde{L}}))$. Apply the above discussion to a parabolic subgroup ${\widetilde{P}}={\widetilde{L}} N$ and recall the notion of dominant cocharacters of $L$ with respect to $P$. Let $L^+$ denote the preimage of the dominant cone under the map $L\to \Hom_\mathsf{gr}(L,\mathbb{G}_m)^*$ given by $m\mapsto (\chi\mapsto \chi(m))$. We let ${\widetilde{L}}^+$ denote its preimage in ${\widetilde{L}}$ under the map ${\widetilde{L}}\to L$. We can choose the above $F_A$ to lie in ${\widetilde{L}}^+$ and similarly we can consider $C_A^+\subset C_A$, the subset of dominant elements in $C_A$. Let $P_0$ be a fixed minimal parabolic subgroup of $G$ with Levi decomposition $P_0=L_0N_0$ and denote by ${\widetilde{P}}_0 = {\widetilde{L}}_0 N_0$ the corresponding subgroups in ${\widetilde{G}}$. Let $A=A_0$ be the split component of $P_0$ and $C_AF_A\subset {\widetilde{L}}_0$ as above.
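To keep the construction concrete, here is what it amounts to in the most classical case (a standard example, included only as an illustration): for $G = \mathrm{GL}_n(F)$, take $P_0$ the Borel subgroup of upper triangular matrices and $L_0 = T$ the diagonal torus, so that $A = T$ and $\Lambda(T) = \Lambda(Z(L_0)) \simeq \bZ^n$, whence $F_A = \{1\}$. A choice of uniformizer $\varpi$ of $F$ gives the section
\[ \bZ^n \longrightarrow T, \qquad (a_1,\dots,a_n) \longmapsto \operatorname{diag}(\varpi^{a_1},\dots,\varpi^{a_n}), \]
whose image is $C_A$, and $C_A^+$ consists of the dominant elements, those with $a_1\ge a_2\ge \dots\ge a_n$. The Cartan decomposition below then recovers the classical decomposition of $\mathrm{GL}_n(F)$ into $\mathrm{GL}_n(\mathcal O_F)$-double cosets indexed by such dominant tuples (Smith normal form).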
\begin{theorem}[Cartan decomposition; see {\cite[V.5.1]{Renard}}] There exists a maximal compact subgroup ${\widetilde{K}}_0$ of ${\widetilde{G}}$ such that \begin{enumerate} \item ${\widetilde{G}} = {\widetilde{P}}_0 {\widetilde{K}}_0 = {\widetilde{K}}_0{\widetilde{P}}_0$, \item ${\widetilde{G}} = \bigsqcup_{af\in C_A^+F_A} {\widetilde{K}}_0 af{\widetilde{K}}_0$. \end{enumerate} \end{theorem} Put $\mathcal H_0 = \mathcal H({\widetilde{K}}_0,{\widetilde{K}})$, the Hecke algebra of ${\widetilde{K}}$-biinvariant functions on ${\widetilde{K}}_0$. It is a finite dimensional algebra over $\mathbb{C}$. A rather easy consequence of this theorem, which is essential in the proof of the uniform admissibility theorem, is the following decomposition of the Hecke algebra: \begin{theorem}\label{T:dec Hecke for compact} Let ${\widetilde{K}}\le{\widetilde{G}}$ be a compact open subgroup of ${\widetilde{G}}$. Then the Hecke algebra $\mathcal H({\widetilde{G}},{\widetilde{K}})$ decomposes as \[ \mathcal H({\widetilde{G}},{\widetilde{K}}) = \mathcal H_0 D\mathcal C\mathcal H_0 \] where $D$ is a vector subspace spanned by functions indexed by $F_A$ and $\mathcal C$ is a subalgebra isomorphic to the group algebra of $C_A\simeq \Lambda(Z({\widetilde{L}}_0))$. \end{theorem} \section{Basic theorems} As before, ${\widetilde{G}}$ is a covering group (see \cref{SS:central ext}) of a reductive $p$-adic group $G$, where $G=\mathbb{G}(F)$ for $\mathbb{G}$ a reductive group over $F$, a non-archimedean local field. All the basic theorems concerning representations of reductive $p$-adic groups hold also for ${\widetilde{G}}$ with the same proofs. We will give precise references to \cite{Renard} where the analogous results for $G$ are proved. The groups $G$ and ${\widetilde{G}}$ do not have any compact representations in general because their centers may be non-compact.
However, we can ask for the next best thing (see \cref{SS:compact mod center}): \begin{definition} A representation $(V,\pi)\in \mathcal M({\widetilde{G}})$ is called compact modulo center if all its matrix coefficients have compact support in ${\widetilde{G}}/Z({\widetilde{G}})$. \end{definition} The classical theorem of Harish-Chandra still holds with the same proof: \begin{theorem} [Harish-Chandra, see {\cite[VI.2.1]{Renard}} for linear groups]\label{T:Harish-Chandra} A representation $(\pi,V)$ of ${\widetilde{G}}$ is cuspidal if and only if it is compact modulo center. \end{theorem} Using the easy fact that parabolic induction preserves admissibility, we obtain as a consequence the admissibility of irreducible representations (for a proof in the linear case see for example {\cite[VI.2.2]{Renard}} or {\cite[Theorem 12]{BerNotes}}): \begin{theorem}\label{T:irred is adm} Any irreducible representation of ${\widetilde{G}}$ is admissible. \end{theorem} Not only are irreducible representations admissible; in fact they are so in a uniform way. Below is the precise formulation of the Uniform Admissibility Theorem due to Bernstein, whose proof is based on the decomposition of the bi-invariant Hecke algebra given in \cref{T:dec Hecke for compact} and some tricky linear algebra: \begin{theorem} [uniform admissibility, {\cite{Bernstein-unif-adm}}] \label{T:uniform adm} Let ${\widetilde{K}}\le {\widetilde{G}}$ be an open compact subgroup. There exists a constant $c=c({\widetilde{K}})$ such that for all irreducible representations $(\pi,V)$ of ${\widetilde{G}}$ we have $\dim(V^{\widetilde{K}})\le c$. \end{theorem} One can also look at {\cite[VI.2.3]{Renard}} for a proof.
For establishing the Bernstein decomposition, and prior to that, verifying the condition \eqref{Eq:condition (KF)} which leads to \cref{T:dec compact times non-compact}, one makes use of the following \begin{corollary}[see {\cite[VI.2.4]{Renard}} for the linear case] Given an open compact subgroup ${\widetilde{K}}\le {\widetilde{G}}$ there is a compact modulo center subset $\Omega\subset {\widetilde{G}}$ such that for any irreducible cuspidal representation $(V,\pi)$ of ${\widetilde{G}}$ and any vector $v\in V^{\widetilde{K}}$ the function \[ g\mapsto e_{{\widetilde{K}}}\cdot\pi(g)(v)\] has support contained in $\Omega$. \end{corollary} \section{The Bernstein decomposition}\label{S:Bernstein dec} We give an exposition of a small part of a theory due to Bernstein which allows one to decompose the category $\mathcal M({\widetilde{G}})$ of smooth complex representations of ${\widetilde{G}}$ as a direct product of certain indecomposable full subcategories, now called the Bernstein components of $\mathcal M({\widetilde{G}})$. The results are due to \cite{BerDel}, where it is also stated that they hold for finite central extensions of reductive $p$-adic groups. Indeed, no essential modifications are needed to adapt the proof from \cite{BerDel} to the case of finite central extensions. In what follows, however, we mostly adhere to the exposition of the linear case from \cite[VI.7]{Renard}, and we give precise references. The idea is that whereas ${\widetilde{G}}$ does not have compact representations, the group ${\wG^\circ}$ does, and moreover, by Harish-Chandra's \cref{T:Harish-Chandra}, all irreducible cuspidal representations of ${\widetilde{G}}$ restrict to compact representations of ${\wG^\circ}$. So, using the results of \cref{SS:compact mod center}, we can decompose $\mathcal M({\widetilde{G}})$ into a cuspidal part and an induced part. Using parabolic induction, we then express the induced part in a similar way.
\subsection{Cuspidals split} In this section we sketch the decomposition of $\mathcal M({\widetilde{G}})$ into a product of cuspidal and induced parts. We define $\mathcal M({\widetilde{G}})_\sc$ (resp., $\mathcal M({\widetilde{G}})_{\mathrm{ind}}$) to be the full subcategory of $\mathcal M({\widetilde{G}})$ formed of representations all of whose irreducible subquotients are cuspidal (resp., that have no cuspidal irreducible subquotients). The set $\operatorname{Irr}_\sc({\widetilde{G}})$ denotes the set of irreducible cuspidal representations of ${\widetilde{G}}$. \begin{remark} The category $\mathcal M({\widetilde{G}})_\sc$ (resp., $\mathcal M({\widetilde{G}})_{\mathrm{ind}}$) is the pullback of the subcategory of compact representations (resp., non-compact representations) from $\mathcal M({\wG^\circ})$ through the functor $\operatorname{Res}_{\wG^\circ}^{\widetilde{G}}$. \end{remark} Let $(V,\pi)\in\operatorname{Irr}_\sc({\widetilde{G}})$ and denote by $[\pi]$ its inertia class, i.e., its orbit under $\mathcal X({\widetilde{G}})$. From \cref{P:irred res to Go and inertia classes} we have that the restriction of $\pi$ to ${\wG^\circ}$ depends only on the inertia class, and we know moreover that this restriction has finite length and is semisimple. \begin{definition} We define $\mathcal M({\widetilde{G}})_{[\pi]}$ to be the full subcategory of $\mathcal M({\widetilde{G}})$ formed of those representations all of whose subquotients belong to $[\pi]$. Similarly define $\mathcal M({\widetilde{G}})_{[\text{\rm out }\pi]}$. \end{definition} \begin{remark} Here is a different way of thinking about $\mathcal M({\widetilde{G}})_{[\pi]}$.
If $\tau_1,\tau_2,\dots,\tau_l$ are the irreducible summands of $\pi$ restricted to ${\wG^\circ}$, then the category $\mathcal M({\widetilde{G}})_{[\pi]}$ consists precisely of those representations of ${\widetilde{G}}$ whose restriction to ${\wG^\circ}$ is a direct sum of the $\tau_i$, $i=1,\dots, l$, i.e., those representations of ${\widetilde{G}}$ whose restriction to ${\wG^\circ}$ belongs to $\mathcal M({\wG^\circ})_{[\mathcal A]}$, where $\mathcal A = \{\tau_i\mid i=1,\dots, l\}$. (See \cref{SS:compact split}.) \end{remark} Harish-Chandra's \cref{T:Harish-Chandra} together with the results from \cref{SS:compact mod center} give \begin{theorem}\label{T:cuspidal block} For an irreducible cuspidal representation $\pi$ of ${\widetilde{G}}$, we have a decomposition of the category $\mathcal M({\widetilde{G}})$ as \[ \mathcal M({\widetilde{G}})\simeq \mathcal M({\widetilde{G}})_{[\pi]}\times \mathcal M({\widetilde{G}})_{[\text{out }\pi]}. \] \end{theorem} One uses the uniform admissibility \cref{T:uniform adm} to check the condition \eqref{Eq:condition (KF)} in \cref{SS:compact split} and then applies \cref{T:dec of comp mod center} and \cref{T:dec com mod center x others} to get the following decomposition. \begin{theorem}[see {\cite[VI.3.5]{Renard}}]\label{T:dec of M(G) into cusp and indu} The subcategories $\mathcal M({\widetilde{G}})_\sc$ and $\mathcal M({\widetilde{G}})_{\mathrm{ind}}$ split the category $\mathcal M({\widetilde{G}})$: \begin{align}\label{Eq:dec cuspidal x induced} \mathcal M({\widetilde{G}}) = \mathcal M({\widetilde{G}})_\sc\times \mathcal M({\widetilde{G}})_{\mathrm{ind}} \end{align} and moreover the cuspidal part decomposes as \begin{align}\label{Eq:dec cuspidals} \mathcal M({\widetilde{G}})_\sc = \prod_{[\pi]\in[\operatorname{Irr}_\sc({\widetilde{G}})]} \mathcal M({\widetilde{G}})_{[\pi]}.
\end{align} \end{theorem} Since the center of a product of categories is the product of their centers (see \cref{P:center of product of cats}), we have \begin{corollary} \begin{align*} \mathcal Z(\mathcal M({\widetilde{G}})) = \prod_{[\pi]\in[\operatorname{Irr}_\sc({\widetilde{G}})]} \mathcal Z_{[\pi]}\times \mathcal Z(\mathcal M({\widetilde{G}})_{\mathrm{ind}}). \end{align*} \end{corollary} It turns out that for an irreducible cuspidal representation $\pi$ the center $\mathcal Z_{[\pi]}$ of $\mathcal M({\widetilde{G}})_{[\pi]}$ is not very hard to determine. Recall the notation $\Lambda({\widetilde{G}}) = {\widetilde{G}}/{\wG^\circ}$ and $\mathcal X({\widetilde{G}}) = \Hom_\mathsf{gr}(\Lambda({\widetilde{G}}),\mathbb{C}^\times)$, and notice that the group algebra $\mathbb{C}[\Lambda({\widetilde{G}})]$ is the algebra of regular functions on the algebraic variety (a torus) $\mathcal X({\widetilde{G}})$. We first need a lemma, whose easy proof proceeds by considering the effect on the central character. \begin{lemma}\cite[V.2.7]{Renard} Given $\pi\in\operatorname{Irr}_\sc({\widetilde{G}})$ its stabilizer in $\mathcal X({\widetilde{G}})$ is finite. \end{lemma} Let us put $\mathcal G_\pi:=\Stab_{\mathcal X({\widetilde{G}})}(\pi)$, the stabilizer of $\pi$ in the group $\mathcal X({\widetilde{G}})$. \begin{theorem}[see {\cite[VI.10]{Renard}} or {\cite[1.12-1.14]{BerDel}} for a slick proof]\label{T:center for a cuspidal component} We have a canonical isomorphism \begin{align*} \mathcal Z(\mathcal M({\widetilde{G}})_{[\pi]})\simeq \mathbb{C}[\Lambda({\widetilde{G}})]^{\mathcal G_\pi} = \mathcal O(\mathcal X({\widetilde{G}})/\mathcal G_\pi). \end{align*} In particular, $\mathcal Z(\mathcal M({\widetilde{G}})_{[\pi]})$ is isomorphic to a ring of Laurent polynomials and hence is smooth of Krull dimension equal to the rank of $\Lambda({\widetilde{G}})$.
\end{theorem} Actually, one can do a bit better and find an equivalence of categories $\mathcal M({\widetilde{G}})_{[\pi]}\simeq \rmod\mathcal R_{[\pi]}$ with $\mathcal R_{[\pi]} := \End_{{\widetilde{G}}}(\Pi_{[\pi]})$ and $\Pi_{[\pi]} := \operatorname{ind}_{{\wG^\circ}}^{\widetilde{G}} (\operatorname{Res}_{{\wG^\circ}}^{\widetilde{G}}(\pi))$. This will be discussed in \cref{S:Blocks as module cats}. \subsection{Induced representations} In this section we look at the category $\mathcal M({\widetilde{G}})_{\mathrm{ind}}$ and decompose it into blocks. The main input is the geometric lemma (see \cite[VI.5.1]{Renard} for a proof that works also for ${\widetilde{G}}$). \begin{definition}\hfill \begin{enumerate} \item A cuspidal datum is a couple $({\widetilde{L}},\rho)$ where ${\widetilde{L}}$ is a Levi subgroup of ${\widetilde{G}}$ and $\rho\in\operatorname{Irr}_\sc({\widetilde{L}})$ is an irreducible cuspidal representation of ${\widetilde{L}}$. \item We say that two cuspidal data $({\widetilde{L}},\rho), ({\widetilde{M}},\tau)$ are conjugate (or associate) if there exists $g\in{\widetilde{G}}$ such that \begin{align*} {\widetilde{L}} = {}^g{\widetilde{M}} \text{ and } \rho = {}^g\tau. \end{align*} \item We say that two cuspidal data $({\widetilde{L}},\rho), ({\widetilde{M}},\tau)$ define the same inertial support if there exists $g\in{\widetilde{G}}$ and $\chi\in\mathcal X({\widetilde{L}})$ such that \begin{align*} {\widetilde{L}} = {}^g{\widetilde{M}} \text{ and } \rho = {}^g\tau \chi. \end{align*} \end{enumerate} \end{definition} We denote by $\Omega({\widetilde{G}})$ the set of cuspidal data up to conjugation and by $\mathcal B({\widetilde{G}})$ the set of cuspidal data up to conjugation and inertia. The following theorem is extensively used and goes under the name of ``\emph{the geometric lemma}''. It is due to Bernstein and Zelevinsky \cite[2.12]{BerZel}.
\begin{theorem}[see {\cite[VI.5.3]{Renard}}]\label{T:geometric lemma conseq} Let ${\widetilde{P}}={\widetilde{M}} N$ and ${\widetilde{Q}}={\widetilde{L}} U$ be two parabolic subgroups of ${\widetilde{G}}$ with Levi decompositions. Let $\rho$ be an irreducible cuspidal representation of ${\widetilde{M}}$ and put $\tau:=\mathbf{r}_{{\widetilde{L}},{\widetilde{Q}}}^{\widetilde{G}} \mathbf{i}_{{\widetilde{M}},{\widetilde{P}}}^{\widetilde{G}}(\rho)$. Then we have \begin{enumerate} \item If ${\widetilde{L}}$ has no Levi subgroup conjugate to ${\widetilde{M}}$, then $\tau=0$. \item If ${\widetilde{M}}$ is not conjugate to ${\widetilde{L}}$, then $\tau$ has no cuspidal subquotient. \item If ${\widetilde{M}}$ and ${\widetilde{L}}$ are standard and conjugate, then $\tau$ has a filtration with subquotients isomorphic to ${}^w\rho$ for $w\in W({\widetilde{L}},{\widetilde{M}})/W_{\widetilde{L}}$. In particular, $\tau$ is cuspidal. \end{enumerate} \end{theorem} In the above statement $W_{\widetilde{L}}$ is the Weyl group of ${\widetilde{L}}$ and $W({\widetilde{L}},{\widetilde{M}})$ is the subset of the Weyl group of ${\widetilde{G}}$ conjugating ${\widetilde{L}}$ into ${\widetilde{M}}$. The next result, proved using the above theorem, shows that the Jordan--Hölder factors of a representation induced from an irreducible cuspidal one are independent of the chosen parabolic and depend only on the conjugation class of the cuspidal datum. \begin{theorem}[see {\cite[VI.5.4]{Renard}}]\label{T:JH series of induced of cuspidals} Let ${\widetilde{P}}={\widetilde{L}} N$ and ${\widetilde{P}}'={\widetilde{L}}' N'$ be two parabolic subgroups with Levi decompositions. Let $\rho\in\operatorname{Irr}_\sc({\widetilde{L}})$ and $\rho'\in\operatorname{Irr}_\sc({\widetilde{L}}')$, and let $\pi = \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\rho)$ and $\pi' = \mathbf{i}_{{\widetilde{L}}',{\widetilde{P}}'}^{\widetilde{G}}(\rho')$ be the induced representations.
The following are equivalent: \begin{enumerate} \item The cuspidal data $({\widetilde{L}},\rho)$ and $({\widetilde{L}}',\rho')$ are conjugate; \item The Jordan--Hölder series of $\pi$ and $\pi'$ are equivalent; \item The Jordan--Hölder series of $\pi$ and $\pi'$ have a common element. \end{enumerate} In particular, if ${\widetilde{L}}={\widetilde{L}}'$ and $\rho=\rho'$ this shows that $\pi$ and $\pi'$ have the same Jordan--Hölder series, independently of the chosen parabolic. \end{theorem} The previous theorem guarantees that the following notion is well defined. \begin{definition}\label{D:cuspidal support} Let $\pi\in\mathcal M({\widetilde{G}})$ be an irreducible representation. We define its cuspidal support to be a cuspidal datum up to conjugation $({\widetilde{L}},\rho)\in\Omega({\widetilde{G}})$ such that $\pi$ appears in the Jordan--Hölder series of the parabolic induction $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\rho)$ for some parabolic with Levi decomposition ${\widetilde{P}}={\widetilde{L}} N$. \end{definition} \begin{remark} In order to decompose $\mathcal M({\widetilde{G}})_{\mathrm{ind}}$ we would like to pull back to ${\widetilde{G}}$, for each cuspidal datum up to conjugation $({\widetilde{L}},\rho)\in\Omega({\widetilde{G}})$, the decomposition from \eqref{Eq:dec cuspidal x induced} for ${\widetilde{L}}$. However, we also need to take the inertia into account, because $\mathcal M({\widetilde{L}})_{[\rho]}$ is an indecomposable subcategory. \end{remark} Recall the set $\mathcal B({\widetilde{G}})$ of cuspidal data up to conjugation and inertia. For $({\widetilde{L}},\rho)$ a cuspidal datum we denote by $[{\widetilde{L}},\rho]_{\widetilde{G}}$ its class in $\mathcal B({\widetilde{G}})$. If no confusion can arise, we drop the subscript ${\widetilde{G}}$.
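To fix ideas, we spell out the classical example in the linear case (so the cover is trivial); it is standard and not needed in the sequel.

\begin{remark} Let ${\widetilde{G}} = G = GL_2(F)$ and let $T$ be the diagonal torus. Every character $\chi_1\otimes\chi_2$ of $T\simeq F^\times\times F^\times$ is a cuspidal representation of $T$, and two data $(T,\chi_1\otimes\chi_2)$ and $(T,\chi_1'\otimes\chi_2')$ define the same class in $\mathcal B(G)$ if and only if
\[ \{\chi_1',\chi_2'\} = \{\chi_1\nu_1,\chi_2\nu_2\} \quad\text{for some unramified characters } \nu_1,\nu_2 \text{ of } F^\times, \]
the unordered pair accounting for the Weyl group swap. Thus $\mathcal B(G)$ consists of the principal series classes $[T,\chi_1\otimes\chi_2]$ together with the cuspidal classes $[G,\rho]$, the latter indexed by irreducible cuspidal representations $\rho$ of $GL_2(F)$ up to unramified twist. \end{remark}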
Given $[{\widetilde{L}},\rho]\in\mathcal B({\widetilde{G}})$, \cref{T:JH series of induced of cuspidals} allows us to define unambiguously the subcategory of representations of ${\widetilde{G}}$ all of whose irreducible subquotients have cuspidal support in $[{\widetilde{L}},\rho]$: \begin{align} \mathcal M({\widetilde{G}})_{[{\widetilde{L}},\rho]} = \left\{\pi\in \mathcal M({\widetilde{G}})\bigg| \begin{array}{l}\text{ all irreducible subquotients of $\pi$ as a ${\widetilde{G}}$-module}\\ \text{ have cuspidal support in } [{\widetilde{L}},\rho]\end{array}\right\}. \end{align} The next lemma gives another characterization of the category $\mathcal M({\widetilde{G}})_{[{\widetilde{L}},\rho]}$. \begin{lemma}\label{L:block as generated by induction from cusp component} Let $[{\widetilde{L}},\rho]\in\mathcal B({\widetilde{G}})$ and ${\widetilde{P}}={\widetilde{L}} N$ be a parabolic subgroup with Levi ${\widetilde{L}}$. Then the subcategory $\mathcal M({\widetilde{G}})_{[{\widetilde{L}},\rho]}$ is the smallest full subcategory of $\mathcal M({\widetilde{G}})$ closed under subquotients and containing $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\mathcal M({\widetilde{L}})_{[\rho]})$. \end{lemma} \begin{proof} This follows easily from \cref{T:geometric lemma conseq} and \cref{T:JH series of induced of cuspidals}. \end{proof} Frobenius reciprocity, \cref{T:geometric lemma conseq}, and the exactness of parabolic induction and restriction yield the following proposition. \begin{proposition}\label{P:derived orthogonal of components of M(G)} If $[{\widetilde{L}},\rho], [{\widetilde{L}}',\rho']$ are distinct elements of $\mathcal B({\widetilde{G}})$, then the subcategories $\mathcal M({\widetilde{G}})_{[{\widetilde{L}},\rho]}$ and $\mathcal M({\widetilde{G}})_{[{\widetilde{L}}',\rho']}$ are (derived) orthogonal, i.e., there are no non-zero Ext groups between them.
\end{proposition} The above proposition together with \cref{P:categ is product if and only if} implies that we have a fully faithful functor \[ \prod_{\mathfrak s\in\mathcal B({\widetilde{G}})}\mathcal M({\widetilde{G}})_\mathfrak s \hookrightarrow \mathcal M({\widetilde{G}}).\] Proving that this functor is essentially surjective\footnote{A functor $ F\colon \mathcal C\to \mathcal D$ is called essentially surjective if for any $X\in \mathcal D$ there exists $A\in\mathcal C$ such that $F(A)\simeq X$.} gives the Bernstein decomposition for $\mathcal M({\widetilde{G}})$: \begin{theorem}[see {\cite[VI.7.2]{Renard}}]\label{T:Bernstein dec for tildeG} The category of smooth representations of ${\widetilde{G}}$ decomposes into blocks indexed by $\mathcal B({\widetilde{G}})$: \begin{equation}\label{Eq:Bernstein decomposition} \mathcal M({\widetilde{G}}) = \prod_{{\mathfrak s} \in \mathcal{B}({\widetilde{G}})} \mathcal M({\widetilde{G}})_\mathfrak s. \end{equation} \end{theorem} \begin{proof} Using \cref{T:dec of M(G) into cusp and indu}, what is left to show is that every representation $\pi\in\mathcal M({\widetilde{G}})_{\mathrm{ind}}$ can be written as a direct sum of representations $\pi_\mathfrak s\in\mathcal M({\widetilde{G}})_\mathfrak s$: \begin{align}\label{Eq:decompose rep along subcategories indexed by B(G)} \pi \simeq \oplus_{\mathfrak s\in\mathcal B({\widetilde{G}})}\pi_\mathfrak s \quad \text{ for some } \pi_\mathfrak s\in\mathcal M({\widetilde{G}})_\mathfrak s. \end{align} First one observes that \eqref{Eq:decompose rep along subcategories indexed by B(G)} holds for an induced representation of a cuspidal representation from a Levi. This is tautological using \cref{L:block as generated by induction from cusp component}. Next one notices that if $\pi'\subset \pi$ is a subrepresentation and $\pi$ satisfies \eqref{Eq:decompose rep along subcategories indexed by B(G)} then the same is true of $\pi'$.
Namely, consider $\pi'_\mathfrak s= \pi_\mathfrak s \cap \pi'$; then $\pi'= \oplus_{\mathfrak s\in\mathcal B({\widetilde{G}})}\pi'_\mathfrak s$, as follows easily using only the fact that the $\pi_\mathfrak s$ for different $\mathfrak s$ have no isomorphic non-zero subquotients (the analogous assertion in group theory is known as Goursat's lemma). Finally, one needs to show that every representation $\pi\in\mathcal M({\widetilde{G}})_{\mathrm{ind}}$ can be embedded into a sum of induced representations. This is done using a pair of adjoint functors that are direct sums of all parabolic induction functors (resp., parabolic restriction) indexed by all the standard parabolic subgroups. For details, see \cite[VI.7.2 second Lemme]{Renard}. \end{proof} \section{Second adjointness theorem}\label{S:second adj} Recall that Frobenius reciprocity states that for ${\widetilde{P}}$ a parabolic subgroup with Levi ${\widetilde{L}}$ the functor $\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$ is left adjoint to $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$. It is natural to look for an adjoint in the other direction. Although showing that $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$ admits a right adjoint is not difficult (see below), identifying this adjoint precisely is quite hard and is the content of Bernstein's second adjointness theorem: $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$ is left adjoint to $\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}^-}^{\widetilde{G}}$, where ${\widetilde{P}}^-$ is the opposite parabolic of ${\widetilde{P}}$. In this section we will sketch the main steps in the proof following the exposition in \cite[VI.9]{Renard}, where all the details are spelled out for linear groups. In loc.\ cit.\ the argument is streamlined using the notion of \emph{completion} of a representation of ${\widetilde{G}}$, which we recall below.
Other references for the linear case include \cite[Theorem 19]{BerNotes} and a geometric proof \cite{BezKazh2nd}. \subsection{Completion}\label{SS:completion} We recall the notion of completion for smooth $G$-modules, where $G$ is a locally compact, totally disconnected group admitting a countable basis of neighborhoods of 1 given by compact open subgroups. For $V$ a smooth $G$-module, its completion $\overline{V}$ is a $G$-module (not smooth anymore) sitting between $V$ and $(V^{\vee})^*$. This notion will play a simplifying role in the proof of the second adjointness theorem and it also helps clarify the relationship between $N$-invariants and $N$-coinvariants. Notice that for a representation $V$ of $G$ and compact open subgroups $L\subset K$ with corresponding idempotents $e_K,e_L\in\mathcal H(G)$ we have a natural map $e_L V\to e_K V$ sending $w$ to $e_Kw$. \begin{definition} For $V$ a smooth representation of $G$ we define the functor \[ \overline{(\,\,)}\colon \mathcal M(G)\to \operatorname{Rep}_\mathbb{C}(G) \text{ by }\] \[ \overline{V}:=\varprojlim (e_K V) = \varprojlim (V^K)\] where the inverse limit is taken over all compact open subgroups of $G$. \end{definition} A few remarks are in order. \begin{remark} \hfill \begin{enumerate} \item Since the set of compact open subgroups of $G$ is invariant under conjugation by $G$, it follows that $\overline{V}$ is a $G$-module and that $\overline{V}^K=V^K$. Clearly $V\subset \overline{V}$ and the smooth part of $\overline{V}$ is $V$. \item The completion functor is exact because the Mittag-Leffler condition is automatically satisfied. \item Another way of writing the completion functor is as \[ \overline{V} = \Hom_G(\mathcal H(G),V) \] and this is simply because $\mathcal H(G)=\varinjlim \mathcal H(G)e_K$. The exactness of the functor $V \mapsto \overline{V}$ is equivalent to $\mathcal H(G)$ being a projective $G$-module.
\end{enumerate} \end{remark} \begin{proposition}\label{P:completion of contragredient} For $V$, a smooth representation of $G$, we have \[ \overline{V^\vee} = V^*.\] \end{proposition} \begin{proof} Suppose we are given a pairing $B\colon V\times W\to \mathbb{C}$ of smooth $G$-representations inducing an injective map of representations $V\hookrightarrow W^*$. Since $(W^*)^K = (W^K)^*$ for any smooth representation $W$ of $G$ and any compact open subgroup $K \subset G$, we obtain \[ \overline{V} = \varprojlim (V^K) \subset \varprojlim(W^K)^* = (\varinjlim W^K)^* = W^*.\] In particular, taking $B\colon V^\vee\times V\to \mathbb{C}$ to be the natural pairing and using that $(V^\vee)^K = (V^*)^K$ for all open compact subgroups $K$, we obtain the claimed equality. \end{proof} Applying this to the standard bilinear form $B\colon \mathcal H(G) \times \mathcal H(G) \to \mathbb{C},$ we find \begin{corollary} The completion $\overline{\mathcal H}(G)$ of $\mathcal H(G)$ is the space of distributions $D$ on $G$ such that $e_K*D$ is a compactly supported distribution on $G$ for all idempotents $e_K$. \end{corollary} In particular, coupled with \cref{P:center module idempot algebra}, we deduce: \begin{corollary} The center of $\mathcal M(G)$ can be described as $\overline{\mathcal H(G)}^G$, i.e., the space of invariant distributions $D$ on $G$ such that $e_K*D$ is a compactly supported distribution on $G$ for all idempotents $e_K$. \end{corollary} \begin{remark} It is known that for smooth representations taking invariants with respect to a unipotent subgroup $N$ is not a well-behaved functor and as such it is almost never used. On the other hand, taking coinvariants under $N$ leads to nice exact functors (Jacquet functors) which are extremely important in the theory. Completion allows us to reconcile invariants and coinvariants for $N$.
\end{remark} \begin{proposition} The functor $(-)^N\circ \overline{(-)} \circ (-)^\vee\colon \mathcal M(G)\to\operatorname{{Vect}}_\mathbb{C}$ is exact. \end{proposition} \begin{proof} Notice that we have a natural isomorphism $(V^*)^N = (V_N)^*$ for any representation $V$ of $G$. In particular, for a smooth representation $V$ of $G$, using \cref{P:completion of contragredient} we obtain \[ \overline{V^\vee}^N = (V^*)^N = (V_N)^*.\] Taking $N$-coinvariants is an exact functor because $N$ is an increasing union of compact open subgroups. So the functor $(-)^N\circ \overline{(-)} \circ (-)^\vee$ identifies with the composition of two exact functors $(-)^*\circ (-)_N$, which concludes the proof. \end{proof} In other words, taking $N$-invariants on completions of contragredient $G$-modules is an exact functor. In particular, we deduce \begin{corollary} The functor $(-)^N\circ \overline{(-)}$ is exact on admissible modules. \end{corollary} \subsection{Adjoints} From now on, $G$ is a reductive $p$-adic group and ${\widetilde{G}}$ is a finite covering group of it (see \cref{SS:central ext}). We fix ${\widetilde{P}} = {\widetilde{L}} N$, a parabolic subgroup with a Levi decomposition in ${\widetilde{G}}$, and we denote by ${\widetilde{P}}^-$ the opposite parabolic subgroup with Levi decomposition ${\widetilde{L}} N^-$. We denote by $\delta_{\widetilde{P}}$ the modulus character of ${\widetilde{P}}$. The induction functor $\operatorname{ind}_{\widetilde{P}}^{\widetilde{G}} \colon \mathcal M({\widetilde{P}})\to \mathcal M({\widetilde{G}})$ has a different, more algebraic, expression in terms of the Hecke algebras. First, recall that we have an equivalence of categories \[ \mathcal M({\widetilde{G}})\simeq \mathcal H({\widetilde{G}})\lmod^\mathsf{nd} \] between smooth representations of ${\widetilde{G}}$ and non-degenerate $\mathcal H({\widetilde{G}})$-modules and that this holds also for ${\widetilde{P}}$ and ${\widetilde{L}}$.
The following result gives a different incarnation of the induction from ${\widetilde{P}}$ to ${\widetilde{G}}$: \begin{proposition}[{\cite[Théorème III.2.6]{Renard}}]\label{P:ind_P^G as tensor product} There is a natural isomorphism of functors \[ \mathcal H({\widetilde{G}})\otimes_{\mathcal H({\widetilde{P}})} - \simeq \operatorname{ind}_{\widetilde{P}}^{\widetilde{G}} \circ (-\otimes_\mathbb{C} \delta_{\widetilde{P}}). \] \end{proposition} Now consider the functor of taking ${\widetilde{G}}$-smooth vectors (or the non-degenerate submodule for the Hecke algebra) \[ (-)_{nd\mhyp{\widetilde{G}}}\colon \mathcal H({\widetilde{G}})\lmod \to \mathcal H({\widetilde{G}})\lmod^\mathsf{nd}\] which to a module associates its non-degenerate submodule. Similarly for $\mathcal H({\widetilde{P}})$-modules. \begin{remark}\label{R:non-deg is right adjoint} The functor $(-)_{nd\mhyp{\widetilde{G}}}$ is right adjoint to the inclusion functor \[\mathcal H({\widetilde{G}})\lmod^\mathsf{nd}\hookrightarrow \mathcal H({\widetilde{G}})\lmod. 
\] \begin{proposition}\label{P:right adjoint to ind_P^G} The functor \[ \operatorname{ind}_{\widetilde{P}}^{\widetilde{G}} \circ (-\otimes_\mathbb{C} \delta_{\widetilde{P}})\colon \mathcal M({\widetilde{P}})\to\mathcal M({\widetilde{G}}) \] is left adjoint to \[ (-)_{nd\mhyp{\widetilde{P}}}\circ \operatorname{Res}_{\widetilde{P}}^{\widetilde{G}}\circ \overline{(-)}\colon\mathcal M({\widetilde{G}})\to \mathcal M({\widetilde{P}}).\] \end{proposition} \begin{proof} The usual tensor--Hom adjunction has the following analogue here: the functor \[ \mathcal H({\widetilde{G}})\otimes_{\mathcal H({\widetilde{P}})} -\colon \mathcal M({\widetilde{P}})\to\mathcal M({\widetilde{G}}) \] is left adjoint to \[ (-)_{nd\mhyp {\widetilde{P}}}\circ \Hom_{\mathcal H({\widetilde{G}})}(\mathcal H({\widetilde{G}}),-)\colon \mathcal M({\widetilde{G}})\to\mathcal M({\widetilde{P}}) \] where $\mathcal H({\widetilde{G}})$ is viewed as a left $\mathcal H({\widetilde{G}})$-module and a right $\mathcal H({\widetilde{P}})$-module. Therefore the functor $\Hom_{\mathcal H({\widetilde{G}})}(\mathcal H({\widetilde{G}}),-)\colon \mathcal M({\widetilde{G}})\to \mathcal H({\widetilde{P}})\lmod$ is identified with $\operatorname{Res}_{\widetilde{P}}^{\widetilde{G}} \circ \overline{(-)}$. Now we conclude using the fact that the image of a non-degenerate module is a non-degenerate module.
\end{proof} We immediately obtain an expression for the right adjoint of parabolic induction: \begin{corollary}\label{C:right adjoint of parab ind} The functor $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}\colon \mathcal M({\widetilde{L}})\to \mathcal M({\widetilde{G}})$ is left adjoint to \[ (-)^N\circ (-)_{nd\mhyp{\widetilde{P}}}\circ \operatorname{Res}_{\widetilde{P}}^{\widetilde{G}}\circ \overline{(-)}.\] \end{corollary} \begin{proof} Use \cref{R:non-deg is right adjoint} and that the right adjoint of the inflation functor $\operatorname{Res}_{\widetilde{P}}^{\widetilde{L}}\colon \mathcal M({\widetilde{L}})\to \mathcal M({\widetilde{P}})$ is $(-)^N$. \end{proof} The key ingredient in the proof of Bernstein's second adjointness is the generalized Jacquet lemma, which was first proved by Casselman for admissible representations and later generalized by Bernstein to all smooth representations. See \cite[p.65 Jacquet's Lemma]{BerNotes}, \cite[VI.9.1]{Renard}. The same proof works for finite central extensions: \begin{theorem}\label{T:gen Jacquet} Let ${\widetilde{K}}\le {\widetilde{G}}$ be an open compact subgroup admitting an Iwahori decomposition with respect to ${\widetilde{P}}={\widetilde{L}} N$. Then for any representation $V\in\mathcal M({\widetilde{G}})$, the projection \[V^{\widetilde{K}}\to (V_N)^{{\widetilde{K}}\cap {\widetilde{L}}} \] is surjective and has a natural (functorial in $V$) section, call it $s_{{\widetilde{K}}}\colon (V_N)^{{\widetilde{K}}\cap {\widetilde{L}}} \to V^{\widetilde{K}}$.
Further, for ${\widetilde{K}}' \subset {\widetilde{K}}$ open compact, both admitting Iwahori decompositions with respect to ${\widetilde{P}}={\widetilde{L}} N$, we have the commutative diagram (see \cite[VI.9.6.6]{Renard}): \[ \begin{tikzcd} (V_N)^{{\widetilde{K}}' \cap {\widetilde{L}}} \ar[r,"s_{{\widetilde{K}}'}"] \ar[d," e_{{\widetilde{K}} \cap {\widetilde{L}}} "'] & V^{{\widetilde{K}}'} \arrow[d,"e_{{\widetilde{K}}}"]\\ (V_N)^{{\widetilde{K}} \cap {\widetilde{L}}} \ar[r,"s_{{\widetilde{K}}}"] & V^{\widetilde{K}}. \end{tikzcd}\] \end{theorem} Recall that for a ${\widetilde{G}}$-module $V$, we defined its completion to be \[ \overline{V}:=\Hom_{\widetilde{G}}(\mathcal H({\widetilde{G}}),V) = \lim_{\widetilde{K}} V^{\widetilde{K}} \] where the limit is taken over all the open compact subgroups ${\widetilde{K}}$ of ${\widetilde{G}}$. The previous theorem is actually equivalent to the following clean statement, which is proved for linear groups in \cite[VI.9.7]{Renard}. The same proof works for finite central extensions. (The commutativity of the diagram in Theorem \ref{T:gen Jacquet} allows one to construct a map $\overline{V_{N}} \to \overline{V}$ which is then proved to land inside the space of $N^-$-invariants.) \begin{corollary}\label{C:invar and coinvar are isomorphic} If ${\widetilde{P}}={\widetilde{L}} N$ is a parabolic with opposite ${\widetilde{P}}^- = {\widetilde{L}} N^-$ and $V$ is a smooth representation of ${\widetilde{G}}$ then the natural map \[ \overline{V}^{N^-}\to \overline{V_{N}} \] is an isomorphism of ${\widetilde{L}}$-representations. \end{corollary} In the representation theory of $p$-adic groups taking invariants is not a well-behaved functor and one only works with coinvariants. In some sense, this corollary recovers the lost properties of invariants provided we are willing to work with completed representations.
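The following toy example, stated for the group $N$ alone rather than for a representation of ${\widetilde{G}}$, illustrates how badly $N$-invariants can behave compared with $N$-coinvariants.

\begin{remark} Assume $N\neq 1$ and let $V = C_c^\infty(N)$ with $N$ acting by right translation. Then $V^N = 0$: an $N$-invariant function is constant, and a constant function with compact support on the non-compact group $N$ vanishes. On the other hand, integration against a Haar measure induces an isomorphism $V_N\simeq\mathbb{C}$; indeed the kernel of $f\mapsto\int_N f(n)\,dn$ is exactly the span of the functions $n\cdot f - f$, as one checks by averaging over a compact open subgroup containing the support of $f$. \end{remark}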
Using \cref{C:invar and coinvar are isomorphic} one can now prove Bernstein's second adjointness theorem easily (we follow the exposition in \cite[VI.9.7]{Renard} for linear groups). Other references for the linear case are \cite[Theorem 19]{BerNotes}, \cite{BezKazh2nd}. \begin{theorem}\label{T:second adjointness} The functor $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}$ is left adjoint to $\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}^-}^{\widetilde{G}}$. \end{theorem} \begin{proof} We have the following natural isomorphisms: \begin{align*} \Hom_{\widetilde{G}}(\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(V),W) & = \Hom_{\widetilde{G}}(\operatorname{ind}_{\widetilde{P}}^{\widetilde{G}} \operatorname{Res}_{\widetilde{P}}^{\widetilde{L}} (\delta_{\widetilde{P}}^{1/2}\otimes V),W) & \text{ by definition}\\ &=\Hom_{\widetilde{G}}(\mathcal H({\widetilde{G}})\otimes_{\mathcal H({\widetilde{P}})}\operatorname{Res}_{\widetilde{P}}^{\widetilde{L}}(\delta_{\widetilde{P}}^{-1/2}\otimes V),W) & \text{by \cref{P:ind_P^G as tensor product}}\\ & = \Hom_{\widetilde{P}}(\operatorname{Res}_{\widetilde{P}}^{\widetilde{L}}(\delta_{\widetilde{P}}^{-1/2}\otimes V),\overline{W}) & \text{by \cref{P:right adjoint to ind_P^G}}\\ & = \Hom_{\widetilde{L}}(\delta_{\widetilde{P}}^{-1/2}\otimes V,\overline{W}^N) & \operatorname{Res}_{\widetilde{P}}^{\widetilde{L}} \dashv (-)^N\\ & = \Hom_{\widetilde{L}}(V,\delta_{\widetilde{P}}^{1/2}\otimes \overline{W_{N^-}})&\text{by \cref{C:invar and coinvar are isomorphic}}\\ & = \Hom_{\widetilde{L}}(V,\delta_{\widetilde{P}}^{1/2} \otimes W_{N^-})& \text{ by \cref{R:non-deg is right adjoint}}\\ & = \Hom_{\widetilde{L}}(V,\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}^-}^{\widetilde{G}}(W)) & \text{ by definition.} \end{align*} \end{proof} Since every functor that admits an exact right adjoint preserves projective objects, we deduce the rather non-trivial fact: \begin{corollary}\label{C:induction preserves projectives} The functor $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}\colon \mathcal
M({\widetilde{L}})\to \mathcal M({\widetilde{G}})$ sends projective objects to projective objects. \end{corollary} \section{Blocks as module categories}\label{S:Blocks as module cats} In this section we describe each block $\mathcal M({\widetilde{G}})_\mathfrak s$, $\mathfrak s\in\mathcal B({\widetilde{G}})$, as the category of modules over some algebra $\mathcal R_\mathfrak s$ and prove some basic homological properties of this algebra. The key results of this section are \cref{C:Frob sym cond for R_rho} and \cref{P:vanishing of ext over R}. \subsection{Cuspidals} Fix $({\widetilde{L}},\rho)$ a cuspidal datum and denote by $\mathfrak s\in\mathcal B({\widetilde{G}})$ its equivalence class (conjugation and inertia). Define the following representation of ${\widetilde{L}}$: \[ \Pi_{[\rho]}:=\operatorname{ind}_{{\wL^\circ}}^{\widetilde{L}} (\rho\mid_{{\wL^\circ}}) \] where we recall that ${\wL^\circ}$ is the subgroup of ${\widetilde{L}}$ generated by all compact subgroups. \begin{proposition}\label{P:projective generator for cuspidal block} The representation $\Pi_{[\rho]}$ is a finite type projective generator in $\mathcal M({\widetilde{L}})_{[\rho]}$. \end{proposition} \begin{proof} First we show projectivity. Since ${\wL^\circ}\le{\widetilde{L}}$ is an open subgroup, the functor $\operatorname{ind}_{{\wL^\circ}}^{\widetilde{L}}$ is also left adjoint to $\operatorname{Res}_{{\wL^\circ}}^{\widetilde{L}}$ (see \cref{L:induction from open adjunction}) which is exact, hence $\operatorname{ind}_{{\wL^\circ}}^{\widetilde{L}}$ preserves projective objects. It is therefore enough to show that $\rho|_{\wL^\circ}$ is projective, which follows immediately from Harish-Chandra's \cref{T:Harish-Chandra} and the fact that compact representations are projective by \cref{S:splitting reps}. Let us now show that $\Pi_{[\rho]}$ is a generator of $\mathcal M({\widetilde{L}})_{[\rho]}$. Let $\pi$ be an arbitrary representation in $\mathcal M({\widetilde{L}})_{[\rho]}$.
We must show that there is a non-zero morphism $\Pi_{[\rho]}\to \pi$. Recall (paragraph before \cref{T:cuspidal block}) that the category $\mathcal M({\widetilde{L}})_{[\rho]}$ is defined as consisting of those smooth representations of ${\widetilde{L}}$ all of whose irreducible subquotients belong to $[\rho]$. Since $\Pi_{[\rho]}$ is a projective representation of ${\widetilde{L}}$, it is enough to prove that an irreducible representation belonging to $\mathcal M({\widetilde{L}})_{[\rho]}$, thus isomorphic to $\chi\rho$ for some unramified character $\chi\in\mathcal X({\widetilde{L}})$, appears as a quotient of $\Pi_{[\rho]}$. By the adjunction \cref{L:induction from open adjunction}, we have \[ \Hom_{{\widetilde{L}}}(\Pi_{[\rho]},\chi\rho) = \Hom_{\wL^\circ}(\rho\mid_{{\wL^\circ}},\chi\rho\mid_{\wL^\circ}), \] and the latter is non-zero as it contains the identity. In order to show that $\Pi_{[\rho]}$ is of finite type, one notices that ${\wL^\circ}\backslash {\widetilde{L}}$ is a discrete group; this leads to a basis of $\operatorname{ind}_{\wL^\circ}^{\widetilde{L}}(\rho\mid_{\wL^\circ})$ indexed by these cosets and by a basis of $\rho$. The action of ${\widetilde{L}}$ permutes this basis and hence, by choosing a finite generating set of $\rho\mid_{\wL^\circ}$, we obtain a finite generating set of $\Pi_{[\rho]}$ as an ${\widetilde{L}}$-representation, proving that $\Pi_{[\rho]}$ is of finite type. \end{proof} We put $\mathcal R_{[\rho]}:=\End_{\widetilde{L}}(\Pi_{[\rho]})$. Generalities from category theory (see \cref{P:equiv module category}) give us the following corollary. \begin{corollary} We have an equivalence of categories \begin{align*} \mathcal M({\widetilde{L}})_{[\rho]} &\longrightarrow \rmod\mathcal R_{[\rho]}\\ \pi &\mapsto \Hom_{\widetilde{L}}(\Pi_{[\rho]},\pi).\notag \end{align*} \end{corollary} Below we give a description of the algebra $\mathcal R_{[\rho]}$ by generators and relations.
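Before the general presentation, here is the simplest linear example, which the reader may keep in mind.

\begin{remark} Take ${\widetilde{L}} = F^\times$ and $\rho = \chi$ a character. Every compact subgroup of $F^\times$ lies in the units, so ${\wL^\circ} = \mathcal O_F^\times$ and $\Lambda = {\widetilde{L}}/{\wL^\circ}\simeq\mathbb{Z}$, generated by the class of a uniformizer $\varpi$. Then
\[ \Pi_{[\chi]} = \operatorname{ind}_{\mathcal O_F^\times}^{F^\times}(\chi\mid_{\mathcal O_F^\times}), \qquad \mathcal R_{[\chi]} = \End_{F^\times}(\Pi_{[\chi]}) \simeq \mathbb{C}[\Lambda]\simeq \mathbb{C}[X,X^{-1}], \]
and the corollary above identifies $\mathcal M(F^\times)_{[\chi]}$ with modules over a Laurent polynomial ring; the simple module at $X = a$ corresponds to the unramified twist $\chi\nu$ with $\nu(\varpi) = a$. Since no non-trivial unramified twist fixes $\chi$, no twisted group algebra appears in this case. \end{remark}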
Recall that $\mathcal G_\rho\subset \Hom_\mathsf{gr}(\Lambda,\mathbb{C}^\times) = \mathcal X({\widetilde{L}})$ denotes the stabilizer of $\rho$, where $\Lambda = {\widetilde{L}}/{\wL^\circ}$, i.e., $\mathcal G_\rho = \{\chi \in \Hom_\mathsf{gr}(\Lambda,\mathbb{C}^\times) | \rho \otimes \chi \simeq \rho\}$. The group $\mathcal G_\rho$ naturally acts on the complex torus $\mathcal X({\widetilde{L}})$ by translations, and hence also on the algebra of regular functions $\mathcal O(\mathcal X({\widetilde{L}})) = \mathbb{C}[\Lambda] =: \mathcal A$; we denote this action by $\chi(f)= {}^\chi f$ for $\chi \in \mathcal G_\rho$. By Schur's lemma, each isomorphism $\rho \otimes \chi \simeq \rho$ is unique up to scalar, hence provides us with a 2-cocycle class $c \in H^2(\mathcal G_\rho, \mathbb{C}^\times)$, which in turn defines a twisted group algebra $\mathbb{C}[\mathcal G_{\rho},c]$, the algebra generated by the intertwining operators from $\rho$ to itself through the isomorphisms $\rho \otimes \chi \simeq \rho$. \begin{proposition}[{\cite[Proposition 28]{BerNotes}}] \label{P:R_rho presentation} The algebra $\mathcal R_{[\rho]}$ has the following presentation: \begin{enumerate} \item As a vector space $\mathcal R_{[\rho]} = \mathcal A\otimes \mathbb{C}[\mathcal G_{\rho},c]$, \item $\mathcal A$ and $\mathbb{C}[\mathcal G_{\rho},c]$ are subalgebras, \item $f b_\chi = b_\chi {}^\chi f$ for all $f\in\mathcal A$ and $\chi\in \mathcal G_{\rho}$, with $b_\chi = 1 \otimes \chi \in \mathcal A\otimes \mathbb{C}[\mathcal G_{\rho},c] =\mathcal R_{[\rho]}$. \end{enumerate} \end{proposition} \begin{proof} The proof is fairly elementary, but we give a sketch.
First notice that by adjunction, we have a natural isomorphism of vector spaces \begin{align}\label{Eq:R_rho as v space} \mathcal R_{[\rho]} = \End_{\widetilde{L}}(\operatorname{ind}_{\wL^\circ}^{\widetilde{L}}(\rho|_{\wL^\circ})) \simeq \End_{\wL^\circ}(\rho|_{\wL^\circ})\otimes \mathcal A. \end{align} Since $\End_{\widetilde{L}}(\operatorname{ind}_{\wL^\circ}^{\widetilde{L}}(\mathbb{C})) = \mathcal A$, and since $\operatorname{ind}_{\wL^\circ}^{\widetilde{L}}(\rho|_{\wL^\circ})\simeq \rho\otimes \operatorname{ind}_{\wL^\circ}^{\widetilde{L}}(\mathbb{C})$, this exhibits $\mathcal A$ as a subalgebra of $\mathcal R_{[\rho]}$ and moreover it shows that \eqref{Eq:R_rho as v space} is an isomorphism of right $\mathcal A$-modules. Second, notice that $\Lambda={\widetilde{L}}/{\wL^\circ}$ acts on $\End_{\wL^\circ}(\rho|_{\wL^\circ})$ and the action factors through the finite quotient ${\widetilde{L}}/Z({\widetilde{L}}){\wL^\circ}$. Moreover the action is diagonalizable and the eigenvalues (i.e., characters of $\Lambda$) that appear are, by definition, exactly the elements of $\mathcal G_\rho$. We deduce a canonical isomorphism of vector spaces \[ \End_{\wL^\circ}(\rho|_{\wL^\circ}) \simeq \mathbb{C}[\mathcal G_\rho]. \] If $b_\chi,b_\mu\in \End_{\wL^\circ}(\rho|_{\wL^\circ})$ correspond to $\chi,\mu\in\mathcal G_\rho$ through the previous isomorphism, then their product belongs to the eigenspace corresponding to $\chi\mu$, i.e., we have $b_\chi b_\mu = c(\chi,\mu)b_{\chi\mu}$ for some non-zero $c(\chi,\mu)\in\mathbb{C}^\times$. Associativity together with the unit axiom implies that $c(-,-)$ is a 2-cocycle of $\mathcal G_\rho$ with values in $\mathbb{C}^\times$.
This allows us to construct a twisted group algebra $\mathbb{C}[\mathcal G_\rho,c]$ and in turn provides us with an isomorphism of algebras \[ \End_{\wL^\circ}(\rho|_{\wL^\circ}) \simeq \mathbb{C}[\mathcal G_\rho,c].\] The last step consists in showing how the subalgebras $\mathcal A$ and $\mathbb{C}[\mathcal G_\rho,c]$ interact inside $\mathcal R_{[\rho]}$. This is done by writing down explicitly the isomorphism in \eqref{Eq:R_rho as v space}. \end{proof} Recall (see \cite[Proposition III.12.1]{Artin-nonc}) that an Azumaya algebra $R$ over a ring $Z$ is a $Z$-algebra such that for some faithfully flat extension of rings $Z\hookrightarrow Z'$ we have $R\otimes_ZZ'\simeq M_n(Z')$. Equivalently, the multiplication map $R\otimes_Z R^{op} \to \End_Z(R)$ is an isomorphism of rings. In particular, one deduces in a straightforward way from the above presentation of $\mathcal R_{[\rho]}$ the following: \begin{proposition}\label{P:R_rho is Azumaya and Z Laur pol} The natural inclusion $\mathcal A^{\mathcal G_\rho}\hookrightarrow \mathcal R_{[\rho]}$ identifies $\mathcal A^{\mathcal G_\rho}$ with the center $Z(\mathcal R_{[\rho]})$. In particular $Z(\mathcal R_{[\rho]})$ is isomorphic to a Laurent polynomial algebra. Moreover $\mathcal R_{[\rho]}$ is an Azumaya algebra over its center. \end{proposition} For a different proof, see \cite[Proposition 8.1]{Bus-Hen}. Note that our algebra $\mathcal R_{[\rho]}$ is a matrix algebra over the algebra $E_G$ of \cite{Bus-Hen}. The next lemma is easy and is valid for any Azumaya algebra (see \cite[Proposition IV.2.1]{Artin-nonc}). \begin{lemma} Let $R$ be an Azumaya algebra over its center $Z$ which is a noetherian ring. Then the trace map $\mathsf{tr} \colon R \to Z$ provides an isomorphism of $R$-bimodules \[ R\to \Hom_Z(R,Z).\] \end{lemma} \begin{proof} First, this is obvious for $R=M_n(Z)$.
Using faithfully-flat descent, one first defines the trace map unambiguously for any Azumaya algebra $R$ and then sees that it provides a non-degenerate symmetric pairing $R\otimes_Z R\to Z$ which leads to the isomorphism asserted in the statement of the lemma. \end{proof} Applying this to our algebra $\mathcal R_{[\rho]}$ with center $\mathcal Z_{[\rho]}:=Z(\mathcal R_{[\rho]})$, we get, in particular, the following corollary. \begin{corollary}\label{C:Frob sym cond for R_rho} There is an isomorphism of $\mathcal R_{[\rho]}$-bimodules \[\mathcal R_{[\rho]}\simeq \Hom_{\mathcal Z_{[\rho]}}(\mathcal R_{[\rho]},\mathcal Z_{[\rho]}).\] \end{corollary} Let us now apply what we have learned about $\mathcal R_{[\rho]}$ to prove the following proposition, which plays a crucial role in the study of the homological duality functor (see \cref{T:homological duality single degree}). \begin{proposition}\label{P:vanishing of ext over R} Let $V$ be an $\mathcal R_{[\rho]}$-module which is finite dimensional over $\mathbb{C}$. Then we have \begin{align*} \Ext^i_{\mathcal R_{[\rho]}}(V,\mathcal R_{[\rho]}) = 0 \text{ for all }i\neq d(\rho). \end{align*} \end{proposition} \begin{proof} For simplicity we will put $\mathcal R = \mathcal R_{[\rho]}$ and $\mathcal Z:=Z(\mathcal R_{[\rho]})$ in this proof. The forgetful functor \begin{align*} F\colon \rmod\mathcal R\to \rmod\mathcal Z \end{align*} can also be written as $F = -\otimes_{\mathcal R}\mathcal R$ where ${}_\mathcal R\cR_\mathcal Z$ is viewed as an $\mathcal R\mhyp\mathcal Z$-bimodule. As such, $F$ has a right adjoint $F':=\Hom_\mathcal Z(\mathcal R_\mathcal Z,-)$. Since $\mathcal R$ is an Azumaya algebra (\cref{P:R_rho is Azumaya and Z Laur pol}), it is a projective $\mathcal Z$-module, hence both $F$ and $F'$ are exact functors. It follows that the adjunction extends to Ext groups.
Applying \cref{C:Frob sym cond for R_rho} we therefore get \[ \Ext^i_\mathcal R(V,\mathcal R) = \Ext^i_\mathcal R(V,F'(\mathcal Z)) = \Ext^i_\mathcal Z(V,\mathcal Z),\] and now the proposition follows from a well-known result in commutative algebra: $\mathcal Z$ is a Laurent polynomial algebra (by \cref{P:R_rho is Azumaya and Z Laur pol}) of Krull dimension $d(\rho)$, hence regular, and $V$ is a $\mathcal Z$-module which is finite dimensional over $\mathbb{C}$, so $\Ext^i_\mathcal Z(V,\mathcal Z)=0$ for $i\neq d(\rho)$. \end{proof} \begin{remark}\label{R:what makes vanishing} The statement holds with the same proof if we only assume that $\mathcal R_{[\rho]}\simeq\Hom_{\mathcal Z_{[\rho]}}(\mathcal R_{[\rho]},\mathcal Z_{[\rho]})$ as right $\mathcal R_{[\rho]}$-modules. \end{remark} \subsection{Non-cuspidal blocks: a projective generator}\label{SS:induced proj gen} Moving on to the block in ${\widetilde{G}}$ corresponding to $\mathfrak s=[{\widetilde{L}},\rho]\in\mathcal B({\widetilde{G}})$, let us define the representation \[ \Pi_\mathfrak s:=\oplus_{{\widetilde{P}}} \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\Pi_{[\rho]}) = \oplus_{{\widetilde{P}}}\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}} (\operatorname{ind}_{{\wL^\circ}}^{{\widetilde{L}}}(\rho|_{{\wL^\circ}}))\] where the sum ranges over all parabolics ${\widetilde{P}}$ with Levi ${\widetilde{L}}$ (it is a finite sum). \begin{remark} Actually it would be enough to consider $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\Pi_{[\rho]})$ for a single parabolic as this module turns out to be independent of the parabolic containing ${\widetilde{L}}$. This is not so easy to prove (see \cite[VI.10.1]{Renard} for comments and proofs or \cite[p. 96]{BerNotes}). We do not need it though. \end{remark} % \begin{lemma}\label{L:projective generator of M(G)_s} The module $\Pi_\mathfrak s$ is a finite type projective generator of $\mathcal M({\widetilde{G}})_\mathfrak s$.
\end{lemma} \begin{proof} The projectivity follows at once as a consequence of the second-adjointness \cref{C:induction preserves projectives} together with \cref{P:projective generator for cuspidal block}: the functor $\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}}$ has an exact right adjoint, hence it preserves projectives, and $\Pi_{[\rho]}$ is projective. Since parabolic induction preserves finite type, $\Pi_\mathfrak s$ is also finitely generated. Let $\pi\in \mathcal M({\widetilde{G}})_\mathfrak s$ be a non-zero representation of ${\widetilde{G}}$. We need to show that $\Hom_{{\widetilde{G}}}(\Pi_\mathfrak s,\pi)$ is non-zero and, for that, since $\Pi_\mathfrak s$ is projective, it is enough to consider the case $\pi$ irreducible. By definition of the block $\mathcal M({\widetilde{G}})_\mathfrak s$, there exists a parabolic ${\widetilde{Q}}$ with Levi ${\widetilde{L}}$ such that $\mathbf{r}_{{\widetilde{L}},{\widetilde{Q}}^-}^{\widetilde{G}}(\pi)$ is non-zero in the block $\mathcal M({\widetilde{L}})_{[\rho]}$. The second-adjointness \cref{T:second adjointness} and the fact that $\Pi_{[\rho]}$ is a generator of $\mathcal M({\widetilde{L}})_{[\rho]}$ (see \cref{P:projective generator for cuspidal block}) imply that \[ \Hom_{\widetilde{G}}( \mathbf{i}_{{\widetilde{L}},{\widetilde{Q}}}^{\widetilde{G}}(\Pi_{[\rho]}),\pi) = \Hom_{\widetilde{L}}(\Pi_{[\rho]}, \mathbf{r}_{{\widetilde{L}},{\widetilde{Q}}^-}^{\widetilde{G}}(\pi))\neq 0 \] and hence $\Hom_{\widetilde{G}}(\Pi_\mathfrak s,\pi)\neq 0$. \end{proof} An abelian category admitting all coproducts and having a finitely generated (compact) progenerator is equivalent to the category of right modules over its endomorphism ring (see \cref{P:equiv module category}). Putting $\mathcal R_\mathfrak s:=\End_{{\widetilde{G}}}(\Pi_\mathfrak s)$, we thus deduce the following proposition.
\begin{proposition}\label{P:block equiv to modules over algebra} The functor \begin{align*} \mathcal M({\widetilde{G}})_\mathfrak s &\to \rmod\mathcal R_\mathfrak s\\ V&\mapsto \Hom_{{\widetilde{G}}}(\Pi_\mathfrak s,V) \end{align*} is an equivalence of categories sending finite length representations of ${\widetilde{G}}$ in $\mathcal M({\widetilde{G}})_\mathfrak s$ to $\mathcal R_\mathfrak s$-modules which are finite dimensional over $\mathbb{C}$. \end{proposition} \subsection{Center of $\mathcal M({\widetilde{G}})_\mathfrak s$}\label{SS:center of a block} This section plays no role in this work but we record it for completeness. We give the description of the center of the algebra $\mathcal R_\mathfrak s$ for $\mathfrak s = [{\widetilde{L}},\rho]\in\mathcal B({\widetilde{G}})$. In turn this determines the center of the block $\mathcal M({\widetilde{G}})_\mathfrak s$ (see \cref{P:block equiv to modules over algebra}, \cref{R:equiv of cats center} and \cref{Ex:center of A-mod}). Recall that $\mathcal X({\widetilde{L}})$ acts on the irreducible cuspidals $\operatorname{Irr}({\widetilde{L}})_\sc$ and we denoted by $\mathcal G_\rho$ the stabilizer of $\rho$. Denote by $W_{[\rho]}:=\Stab_{W_{{\widetilde{L}},{\widetilde{G}}}}([\rho])$, the stabilizer of the inertia class $[\rho]$ in the relative Weyl group $W_{{\widetilde{L}},{\widetilde{G}}}$. Recall also the $\bZ$-lattice $\Lambda({\widetilde{L}}) = {\widetilde{L}}/{\wL^\circ} = \Lambda(L) = L/{L^\circ}$. The characters of $\Lambda({\widetilde{L}})$ form a torus whose ring of regular functions is identified with the group algebra of $\Lambda({\widetilde{L}})$: \[ \mathbb{C}[\Lambda({\widetilde{L}})] = \mathcal O(\mathcal X({\widetilde{L}})). \] The linear case of the following theorem is due to \cite[2.12]{BerDel} but the proof works also for finite central extensions. Another reference is \cite[VI.10.4]{Renard}. 
\begin{theorem} \label{T:center of M(G)_s as invariants} The algebra $\mathcal R_\mathfrak s$ contains the Laurent polynomial algebra $\mathcal O(\mathcal X({\widetilde{L}})/\mathcal G_\rho)$ as a subalgebra and it is finite as a left (or right) module over it. Moreover, the center of $\mathcal R_\mathfrak s$ is nothing but \[ \mathcal Z(\mathcal R_\mathfrak s) =\mathcal O(\mathcal X({\widetilde{L}})/\mathcal G_\rho)^{W_{[\rho]}}. \] \end{theorem} Since $\mathcal Z(\mathcal R_\mathfrak s)$ has no non-trivial idempotents, we immediately get the following corollary. \begin{corollary}\label{L:block M(G)_s is indec} The category $\mathcal M({\widetilde{G}})_\mathfrak s$ is indecomposable. \end{corollary} % % % % \section{An abstract duality theorem}\label{S:abstract duality theorem} The purpose of this section is to prove an abstract duality theorem reminiscent of the Serre functor property of the Nakayama functor $\RHom_A(-,A)^*$ (see \cite{BonKapr} and \cref{SS:Serre functors}). This is achieved in \cref{T:Nakayama functor is Serre} and \cref{C:D_h square to id for fin.h.dim}. Although the results presented in this section are well-known and we do not claim any originality, for lack of a precise reference, we provide all the details. We do not strive for maximal generality, so sometimes we make hypotheses which might not be necessary but which hold in our applications to $p$-adic groups. Let $k$ be a field and let $A$ be an idempotented $k$-algebra, i.e., for every $a\in A$ there exists an idempotent $e\in A$ such that $ae=ea=a$. Clearly any unitary algebra is idempotented. We suppose moreover that $A$ has a countable filtered set of idempotents. A left $A$-module $M$ is said to be \emph{non-degenerate} if $AM=M$, equivalently, if for any $m\in M$, there exists an idempotent $e\in A$ such that $em=m$. \begin{remark} For a non-unitary ring, a free module is not necessarily projective. The basic projective left modules are $Ae$ with $e$ an idempotent.
Any projective finitely generated non-degenerate $A$-module is a direct summand of $\oplus_i Ae_i$ where $\{e_i\}$ is a finite collection of idempotents. This follows as in the unitary case using the fact that the module is non-degenerate and finitely generated. \end{remark} We denote by $A\lmod$, the category of all left $A$-modules, and by $A\lmod^\mathsf{nd}$ the full subcategory of non-degenerate left $A$-modules. We use similar notation for right $A$-modules. Consider the functors \begin{align}\label{Eq:functors Hom and tensor product} \begin{split} \Hom_A(-,-)\colon &(A\lmod^\mathsf{nd})^{op}\times A\lmod^\mathsf{nd} \to \bZ\lmod\\ -\otimes_A- \colon & \rmodnd A\times A\lmod \to \bZ\lmod \end{split} \end{align} The category of non-degenerate left $A$-modules, $A\lmod^\mathsf{nd}$, has enough projective objects\footnote{We do not know if the category of all left modules over $A$ has enough projective objects, but this plays no role for us.} and so we can derive the functors \eqref{Eq:functors Hom and tensor product} on the first argument since non-degenerate projective modules are still acyclic: \begin{align}\label{Eq:derived functors Hom and tensor product} \begin{split} \RHom_A(-,-)\colon &(\mathcal D^b(A\lmod^\mathsf{nd}))^{op}\times \mathcal D^b(\rmodnd A) \to \mathcal D(\bZ\lmod)\\ -\otimes^L_A- \colon & \mathcal D^b(\rmodnd A) \times \mathcal D^b(A\lmod) \to \mathcal D(\bZ\lmod) \end{split} \end{align} \begin{remark} The functor $\Hom_A(-,A)\colon (A\lmod^\mathsf{nd})^{op}\to \rmod A$ does not land inside the subcategory of non-degenerate modules. This can already be seen for $A$ itself: $\Hom_A(A,A) \simeq \lim_e eA$ where the limit is taken over the poset of idempotents $e$ in $A$ (can take a filtered subset). However, if $M$ is a \emph{finitely generated} non-degenerate module then $\Hom_A(M,A)$ is non-degenerate. 
\end{remark} For two $A$-modules $M,N\in A\lmod^\mathsf{nd}$, there is a canonical morphism \begin{align}\label{Eq:dual Hom to Hom natural morphism} \mathsf{can}_{N,M}\colon \Hom_A(M,A)\otimes_A N\to \Hom_A(M,N) \end{align} that extends to the derived category $\mathcal D^b(A\lmod^\mathsf{nd})$: \begin{align}\label{Eq:can Hom in derived cat} \mathsf{can}_{N,M}\colon \RHom_A(M,A)\otimes_A^L N\to \RHom_A(M,N). \end{align} \begin{definition} A non-degenerate module $M$ over $A$ is said to be \emph{perfect} if it has a finite resolution by finitely generated non-degenerate projective $A$-modules. An object of $\mathcal D^b(A\lmod^\mathsf{nd})$ is said to be perfect if it is isomorphic to a complex of finitely generated non-degenerate projective $A$-modules. \end{definition} The next lemma tells us that $\mathsf{can}_{N,M}$ is an isomorphism when $M$ is perfect: \begin{lemma}\label{L:Hom(M N)=Hom(M A)otimesN} If $M,N\in\mathcal D^b(A\lmod^\mathsf{nd})$ and $M$ is perfect, the canonical morphism \eqref{Eq:can Hom in derived cat} \[ \RHom_A(M,A)\otimes_A^L N\to \RHom_A(M,N) \] is an isomorphism. \end{lemma} \begin{proof} In this proof all modules are non-degenerate. We split the proof into several steps. First notice that for $M=Ae$ with $e$ an idempotent we have canonical identifications $\RHom_A(Ae,N)=\Hom_A(Ae,N)=eN$ and in particular $\Hom_A(Ae,A)=eA$ is a projective right $A$-module and so one can compute the derived tensor product with it. Hence in this situation we have \[ \RHom_A(Ae,A)\otimes^L_A N = eA\otimes_A N=eN=\Hom_A(Ae,N) \] and we are done. Second, notice that the canonical morphism from the statement of the lemma is compatible with finite direct sums (in both arguments). Since a finitely generated projective module is a direct summand of $\oplus_{i=1}^n Ae_i$ for some idempotents $e_i$ we deduce the validity of the lemma for $M$ a finitely generated projective module.
Next, let $M$ be a perfect complex, quasi-isomorphic to $P_\bullet$ where each $P_r$ is a finitely generated non-degenerate projective module. For $N=N_\bullet\in\mathcal D^b(A\lmod^\mathsf{nd})$, $\RHom_A(M,N)$ is computed by the totalization of a double complex with entries $\Hom_A(P_r,N_s)$ which by what we said above is isomorphic to $\Hom_A(P_r,A)\otimes_A N_s$ through $\mathsf{can}_{P_r,N_s}$. The latter are the entries of the double complex $\Hom_A(P_\bullet,A)\otimes_A N_\bullet$ whose totalization computes $\RHom_A(M,A)\otimes^L_A N$. The naturality of \eqref{Eq:dual Hom to Hom natural morphism} ensures that these isomorphisms commute with all the differentials in the double complexes and therefore the total complexes are isomorphic. In other words, the natural map $\mathsf{can}_{N,M}\colon \RHom_A(M,A)\otimes^L_A N\to \RHom_A(M,N)$ is an isomorphism. \end{proof} \begin{definition} For an $A$-module $P$, we denote by $P^\mathsf{nd}$ its non-degenerate submodule, i.e., the subspace of $P$ consisting of elements that are fixed by some idempotent in $A$. \end{definition} It is clear that a morphism from a non-degenerate $A$-module to an arbitrary $A$-module lands inside its non-degenerate part. Said otherwise, we have \begin{proposition} The right adjoint of the natural inclusion $A\lmod^\mathsf{nd}\to A\lmod$ is the functor $(-)^\mathsf{nd}$ of taking the non-degenerate part of a module. Moreover $(-)^\mathsf{nd}$ is exact so the adjunction extends trivially to derived categories. \end{proposition} \textbf{Contragredient.} Any non-degenerate left or right $A$-module $M$ has a natural structure of $k$-vector space. We can therefore construct the contragredient module $M^\vee:=\Hom_k(M,k)^{^\mathsf{nd}}$ as the non-degenerate $k$-linear maps from $M$ to $k$. We extend the contragredient to the bounded derived categories \[ (-)^\vee\colon \mathcal D^b(A\lmod^\mathsf{nd})\to\mathcal D^b(\rmodnd A) \] by applying it degree-wise. 
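As a toy illustration of the functor $(-)^\mathsf{nd}$ and of the contragredient (an example we add here; it does not appear in the sources cited above), take $A=\bigoplus_{i\in\mathbb N}k$ with coordinatewise multiplication; the idempotents $e_I=\sum_{i\in I}e_i$, for finite $I\subset\mathbb N$, form a countable filtered set, so $A$ is idempotented. Then \[ \Hom_k(A,k)=\prod_{i\in\mathbb N}k, \qquad A^\vee = \Hom_k(A,k)^{\mathsf{nd}}=\bigoplus_{i\in\mathbb N}k, \] since a linear form $(\lambda_i)_i$ is fixed by the idempotent $e_I$ if and only if $\lambda_i=0$ for all $i\notin I$. Likewise $\Hom_A(A,A)\simeq \varprojlim_e eA = \prod_{i\in\mathbb N}k$ is degenerate, while its non-degenerate part recovers $\bigoplus_{i\in\mathbb N}k$, illustrating the remark after \eqref{Eq:derived functors Hom and tensor product}.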
For a right $A$-module $M$, a left $A$-module $N$, both non-degenerate and a $k$-vector space $V$, we have a natural adjunction morphism \[ \tau_{M,N,V}\colon \Hom_k(M\otimes_A N,V)\to \Hom_A(N,\Hom_k(M,V)^{^\mathsf{nd}}) \] that can be checked (as in the classical situation) to be an isomorphism. This isomorphism extends to bounded derived categories and we get the usual derived tensor-hom adjunction \begin{align}\label{Eq:tensor hom adjunction non unitary} \tau_{M,N,V}\colon \RHom_k(M\otimes_A^L N,V) \stackrel{\sim}{\to} \RHom_A(N,\RHom_k(M,V)^{^\mathsf{nd}}) \end{align} For a complex of vector spaces $V\in \mathcal D^b(k)$, we denote by $V^*$, the complex of dual vector spaces. Denote by $D^b_{\mathrm{perf}}(A\lmod^\mathsf{nd})$, the full subcategory of $D^b(A\lmod^\mathsf{nd})$ consisting of perfect complexes. \begin{theorem}\label{T:Nakayama functor is Serre} The following functor \begin{align*} D_{\mathrm{Nak}}\colon D^b_{\mathrm{perf}}(A\lmod^\mathsf{nd}) & \longrightarrow D^b(A\lmod^\mathsf{nd}) \\ M\quad & \mapsto \RHom_A(M,A)^\vee, \end{align*} is a Serre functor, i.e., for any perfect object $M$ and any object $N\in \mathcal D^b(A\lmod^\mathsf{nd})$, we have a natural isomorphism of $k$-vector spaces: \[ \Hom_{\mathcal D^b(A)}(M,N)^* \simeq \Hom_{\mathcal D^b(A)}(N,D_\mathrm{Nak}(M)). \] \end{theorem} \begin{remark} A Serre functor (see \cref{SS:Serre functors}) is, moreover, required to be an equivalence of categories. In our situation, in order for $D_{\mathrm{Nak}}$ to be an equivalence, we must restrict to finite dimensional modules (because of duality over $k$). In the context of representations of $p$-adic groups this means restricting to admissible modules. 
Replacing the contragredient by Grothendieck--Serre duality over the Bernstein center extends $D_\mathrm{Nak}$ from admissible modules to finitely generated modules and indeed gives a Serre functor (relative to the center) for the whole category of finitely generated representations.\footnote{We are indebted to Roman Bezrukavnikov for explaining this to us.} This statement already appears in \cite{BBK}. \end{remark} \begin{proof} Apply \cref{L:Hom(M N)=Hom(M A)otimesN} and the tensor-hom adjunction \eqref{Eq:tensor hom adjunction non unitary} to get natural quasi-isomorphisms \begin{align*} \RHom_k( \RHom_{A}(M,N), k) & \simeq \RHom_k(\RHom_A(M,A) \stackrel{L}{\otimes} _A N, k) \\ & \simeq \RHom_A(N, \RHom_k(\RHom_A(M,A), k)^{^\mathsf{nd}})\\ & \simeq \RHom_A(N,\RHom_A(M,A)^\vee)\\ & \simeq \RHom_A(N,D_\mathrm{Nak}(M)). \end{align*} Taking $\mathrm H^0$ gives the desired result. \end{proof} Putting $N=M$ and $N=D_\mathrm{Nak}(M)$ above, we get the following: \begin{corollary}\label{C:Hom M into DNak(M) is one dim} Suppose $M$ is a non-degenerate irreducible left $A$-module which belongs to $\mathcal D^b_\mathrm{perf}(A\lmod^\mathsf{nd})$ and for which Schur's lemma holds. Then, \[\Hom_{\mathcal D^b(A)}(M,M)^* \simeq \Hom_{\mathcal D^b(A)}(M,D_\mathrm{Nak}(M)) \simeq\Hom_{\mathcal D^b(A)}(D_\mathrm{Nak}(M),D_\mathrm{Nak}(M))^*\simeq k.\] \end{corollary} \begin{remark} The above corollary shows that for $M$ irreducible, $D_\mathrm{Nak}(M)$ is indecomposable. So if $D_\mathrm{Nak}(M)$ is concentrated in degree $0$, then it is isomorphic to $M$. In particular, if $M$ is irreducible and projective, then $D_\mathrm{Nak}(M)\simeq M$. Applied to a finite dimensional semisimple algebra $A$, this gives the simple fact that $D_\mathrm{Nak}$ stabilizes every irreducible module. See \cref{C:D_h on finlen cusp is contrag} for a statement for $p$-adic groups. In general however, it looks like $D_\mathrm{Nak}(M)$ might be an interesting non-trivial involution.
\end{remark} Taking $N=D_\mathrm{Nak}(M)$ in the above theorem we get a canonical map \begin{align} \mathsf{can}_M\colon \Hom_{\mathcal D^b(A)}(M,D_\mathrm{Nak}(M))\to k \end{align} and moreover, unwinding the definitions (see the proof of \cref{T:Nakayama functor is Serre}), we obtain \begin{corollary}\label{C:natural pairing RHom is perfect} For $M,N\in\mathcal D^b(A\lmod^\mathsf{nd})$ with $M$ perfect, the natural pairing \[ \Hom_{\mathcal D^b(A)}(M,N)\times \Hom_{\mathcal D^b(A)}(N,D_\mathrm{Nak}(M))\to \Hom_{\mathcal D^b(A)}(M,D_\mathrm{Nak}(M))\stackrel{\mathsf{can}_M}{\to} k \] is perfect giving rise to the isomorphism \[ \Hom_{\mathcal D^b(A)}(M,N)^*\simeq \Hom_{\mathcal D^b(A)}(N,D_\mathrm{Nak}(M)) \] from \cref{T:Nakayama functor is Serre}. \end{corollary} Define the \emph{homological duality} functor as \begin{align}\label{Eq:def functor homological duality for alg A} D_h\colon \mathcal D^+(A\lmod)&\longrightarrow \mathcal D^-(\rmod A)^{op}\\ M&\mapsto \RHom_A(M,A)\notag \end{align} The same formula sends a right module into a left module. By abuse of notation we will still write $D_h$ for this functor. Notice that without further hypotheses, the target category of $D_h$ is only bounded from below. \begin{proposition}\label{P:abstract D_h is an involution} The functor $D_h$ preserves perfect complexes and is an involution on $\mathcal D^b_\mathrm{perf}(A\lmod^\mathsf{nd})$. \end{proposition} \begin{proof} Notice that if $P$ is a finitely generated projective left $A$-module, then $\Hom_A(P,A)$ is a finitely generated projective right $A$-module. It follows that $D_h$ preserves perfect complexes. Let us now prove that $D_h^2\simeq \Id$ on perfect complexes. There is a natural transformation $\mathsf{ev}\colon\Id\to D_h^2$ coming from the evaluation morphism \[ \mathsf{ev}_P\colon P\to \Hom_A(\Hom_A(P,A),A). \] Note that $\mathsf{ev}_P$ is compatible with finite direct sums, i.e., $\mathsf{ev}_{\oplus P_i} = \oplus_i \mathsf{ev}_{P_i}$. 
Hence in order to show that $\mathsf{ev}_P$ is an isomorphism for every finitely generated projective, it is enough to show it for $P=Ae$, where $e$ is an idempotent. But then $\Hom_A(P,A)=eA$ and $\Hom_A(eA,A) = Ae=P$ and so clearly $\mathsf{ev}_P$ is an isomorphism. Since $\mathsf{ev}$ is a natural transformation (functorial) it behaves well on complexes and hence we deduce that $\mathsf{ev}_{P^\bullet}$ is also an isomorphism for every perfect complex $P^\bullet\in\mathcal D^b_\mathrm{perf}(A\lmod^\mathsf{nd})$. \end{proof} We denote by $\mathcal D^b_{fg}(A\lmod^\mathsf{nd})$, the bounded derived category of finitely generated, non-degenerate, left $A$-modules. (Similarly for right $A$-modules.) \begin{corollary}\label{C:D_h square to id for fin.h.dim} Suppose $A$ has both left and right finite homological dimension. Then the homological duality $D_h$ gives a functor \[ D_h\colon \mathcal D^b_{fg}(A\lmod^\mathsf{nd})\to \mathcal D^b_{fg}(\rmodnd A)^{op} \] whose square is isomorphic to the identity, i.e., $D_h^2\simeq \Id$. \end{corollary} \begin{proof} Since $A$ has finite left and right homological dimension, $\mathcal D^b_{fg}(A\lmod^\mathsf{nd}) = \mathcal D^b_{\mathrm{perf}}(A\lmod^\mathsf{nd})$, and similarly for right modules. Now, the conclusion follows from \cref{P:abstract D_h is an involution}. \end{proof} \section{Homological duality}\label{S:homological duality} \subsection{Vanishing}\label{SS:vanishing} Here we prove the vanishing part of our main result, namely \cref{T:homol prop of D_h-intro}\eqref{T:subpoint:vanishing Ext for D_h}, following the strategy in \cite{BerNotes}. The main ingredients are \cref{S:Bernstein dec} on Bernstein's decomposition, Bernstein's second adjoint \cref{T:second adjointness}, and the vanishing result from \cref{P:vanishing of ext over R}. Consider $\mathcal M({\widetilde{G}})_{fg}$ the full subcategory of smooth representations of ${\widetilde{G}}$ which are finitely generated. 
It is still an abelian category and the parabolic induction and restriction functors preserve it. The homological duality is defined as the following functor between derived categories \begin{align} D_h\colon &\mathcal D^b_{fg}(\mathcal M({\widetilde{G}}))\to \mathcal D^b_{fg}(\mathcal M({\widetilde{G}}))^{op}\\ & \pi \mapsto \RHom_{\widetilde{G}}(\pi,\mathcal H({\widetilde{G}}))\notag \end{align} where $\mathcal H({\widetilde{G}})$ denotes the Hecke algebra of ${\widetilde{G}}$. The left ${\widetilde{G}}$-representation structure comes from the right action of $\mathcal H({\widetilde{G}})$ on itself, twisted by the involution $f\mapsto \breve{f}\colon \mathcal H({\widetilde{G}})\to \mathcal H({\widetilde{G}})$ defined by $\breve{f}(g):=f(g^{-1})$. \begin{remark}\hfill \begin{enumerate} \item Notice that $D_h$ indeed lands in the bounded derived category because the category of smooth representations $\mathcal M({\widetilde{G}})$ has finite global dimension (\cref{SS:finite homological dimension}). \item One could also define $D_h$ without the finite generation assumption, but then $D_h$ would land outside smooth modules, indeed in the category of all $\mathcal H({\widetilde{G}})$-modules. We could come back to smooth modules simply by taking the smooth part (see the paragraph after \cref{P:ind_P^G as tensor product}). \end{enumerate} \end{remark} The following theorem is the most important property of the homological duality functor and is due to Bernstein. Fix $\mathfrak s = [{\widetilde{L}},\rho]\in\mathcal B({\widetilde{G}})$ a cuspidal datum. \begin{theorem}\label{T:homological duality single degree} If $\pi\in\mathcal M({\widetilde{G}})_\mathfrak s$ is of finite length, then $D_h(\pi)$ has cohomology only in degree $d(\mathfrak s)$.
\end{theorem} \begin{proof} (following \cite[Theorem 31]{BerNotes}) We need to show that $\Ext^i_{\widetilde{G}}(\pi,\mathcal H) = 0$ for $i\neq d(\mathfrak s)$, where we put for short $\mathcal H = \mathcal H({\widetilde{G}})$. Decompose the representation $\mathcal H({\widetilde{G}})$ according to Bernstein's decomposition, \cref{T:Bernstein dec for tildeG}. Clearly only the component $\mathcal H({\widetilde{G}})_\mathfrak s$ is important to us. Moreover, since $\mathcal H({\widetilde{G}})$ is a projective representation (\cref{R:H(G) is projective}), and $\Pi_\mathfrak s$ is a projective generator of $\mathcal M({\widetilde{G}})_\mathfrak s$ (see \cref{L:projective generator of M(G)_s}), it is enough to show: \begin{align}\label{Eq:Ext vanishing with Pi_fs} \Ext^i_{\widetilde{G}}(\pi,\Pi_\mathfrak s)=0,\text{ for all }i\neq d(\mathfrak s). \end{align} Recall from \S\ref{SS:induced proj gen} that $\Pi_\mathfrak s = \oplus_{{\widetilde{P}}}\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{{\widetilde{G}}}(\Pi_{[\rho]})$, where the sum ranges over all parabolics with Levi subgroup ${\widetilde{L}}$. Since parabolic induction and restriction are exact functors, the Frobenius adjunction gives \begin{align}\label{Eq:Ext with Pi_fs equal with Pi_rho} \Ext^i_{\widetilde{G}}(\pi,\Pi_\mathfrak s) = \oplus_{{\widetilde{P}}}\Ext^i_{\widetilde{L}}(\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\pi),\Pi_{[\rho]}). \end{align} The representation $\pi$ is admissible (being of finite length, since every irreducible representation is admissible by \cref{T:irred is adm}). Because parabolic restriction preserves admissibility and finite type, $\mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\pi)$ is a finite type admissible representation of ${\widetilde{L}}$. As such, it is of finite length.
Through the categorical equivalence $\mathcal M({\widetilde{L}})_{[\rho]}\simeq \rmod\mathcal R_{[\rho]}$ (see \cref{P:block equiv to modules over algebra}), the vanishing of \eqref{Eq:Ext with Pi_fs equal with Pi_rho} follows from \[ \Ext^i_{\mathcal R_{[\rho]}}(V,\mathcal R_{[\rho]}) = 0, \text{ for all }i\neq d(\rho)=d(\mathfrak s), \] for all finite dimensional $\mathcal R_{[\rho]}$-modules $V$. This was proved in \cref{P:vanishing of ext over R}. \end{proof} Given a finite length representation $\pi\in\mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl}$ in a given block, we will denote by $\bD_h(\pi)$ the representation $H^{d(\mathfrak s)}(D_h(\pi))$. \subsection{Interaction with induction and restriction}\label{SS:Dh and ind res} The objective of this section is to investigate how the (full, derived) homological duality commutes with parabolic induction and restriction. We follow the proof of the linear case from \cite[Theorem 31(4,5)]{BerNotes}. \begin{proposition}\label{P:D_h an ind res} Let ${\widetilde{P}} = {\widetilde{L}} N$ be a parabolic with Levi decomposition in ${\widetilde{G}}$. We have the following natural isomorphisms of functors when restricted to the bounded derived category of finitely generated smooth ${\widetilde{G}}$-modules $\mathcal D^b(\mathcal M({\widetilde{G}})_{fg})$: \begin{enumerate} \item $D_h \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}} \simeq \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}^-}^{\widetilde{G}} D_h$, \item $D_h \mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}} \simeq \mathbf{r}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}} D_h$. \end{enumerate} \end{proposition} \begin{proof} To shorten the notation, we write $\mathbf{i}_{\widetilde{P}}$ and $\mathbf{r}_{\widetilde{P}}$ for the parabolic induction and restriction functors.
Since all objects in $\mathcal M({\widetilde{G}})_{fg}$ admit a finite resolution by finitely generated projective objects, it is enough to define and prove the required isomorphisms for finitely generated projective objects. Therefore, using the second adjointness and the Frobenius reciprocity, we see that the isomorphisms in part (1) and in part (2) of the proposition are equivalent to proving natural isomorphisms \begin{align} \mathbf{i}_{{\widetilde{P}}^-} \Hom_{\widetilde{L}}(V,\mathcal H({\widetilde{L}})) &\simeq \Hom_{\widetilde{L}}(V,\mathbf{r}_{{\widetilde{P}}^-}(\mathcal H({\widetilde{G}}))),\label{Eq:ind Hom}\\ \mathbf{r}_{\widetilde{P}}\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}})) & \simeq \Hom_{\widetilde{G}}(W,\mathbf{i}_{\widetilde{P}}(\mathcal H({\widetilde{L}}))),\label{Eq:res Hom} \end{align} for $V\in\mathcal M({\widetilde{L}})$ and $W\in\mathcal M({\widetilde{G}})$, both projective, and finitely generated representations, and where we have considered $\mathcal H({\widetilde{L}})$ as an ${\widetilde{L}}\times {\widetilde{L}}$ module and $\mathcal H({\widetilde{G}})$ as a ${\widetilde{G}}\times {\widetilde{G}}$-module. For the proof of these isomorphisms, we begin by noticing the following identifications of ${\widetilde{L}}\times{\widetilde{G}}$-modules: \begin{align}\label{Eq:ind of H(L)} (\id\times\mathbf{i}_{\widetilde{P}})(\mathcal H({\widetilde{L}})) \simeq \mathcal C^\infty_c(({\widetilde{G}}\times {\widetilde{L}})/{\widetilde{P}}) \simeq \mathcal C^\infty_c({\widetilde{G}}/N) \simeq (\id\times\mathbf{r}_{\widetilde{P}})(\mathcal H({\widetilde{G}})). 
\end{align} The following is an easy but crucial observation valid for any finitely generated smooth representation $V$ of ${\widetilde{L}}$ (we will apply this to finitely generated projective modules as noted earlier): \begin{align} \label{Eq:easy crucial obs} \mathbf{i}_{\widetilde{P}}\Hom_{\widetilde{L}}(V,\mathcal H({\widetilde{L}})) &\simeq \Hom_{\widetilde{L}}(V,(\id\times\mathbf{i}_{\widetilde{P}}) (\mathcal H({\widetilde{L}}))), \end{align} as representations of ${\widetilde{G}}$ where in $\Hom_{\widetilde{L}}(V,\mathcal H({\widetilde{L}}))$, both $V$ and $\mathcal H({\widetilde{L}})$ are considered as left ${\widetilde{L}}$-modules, thus $\Hom_{\widetilde{L}}(V,\mathcal H({\widetilde{L}}))$ is a right ${\widetilde{L}}$-module through the right ${\widetilde{L}}$-action on $\mathcal H({\widetilde{L}})$; the ${\widetilde{L}} \times {\widetilde{G}}$-representation $(\id\times\mathbf{i}_{\widetilde{P}}) (\mathcal H({\widetilde{L}})) = \mathbf{i}_{\widetilde{P}}^{{\widetilde{G}}} (\mathcal H({\widetilde{L}}))$ has ${\widetilde{L}}$-action through the left action of ${\widetilde{L}}$ on $\mathcal H({\widetilde{L}})$, and a right ${\widetilde{G}}$-action. To see the isomorphism in equation \eqref{Eq:easy crucial obs}, note that by the definition of an induced representation, both sides of \eqref{Eq:easy crucial obs} give rise to functions $F\colon {\widetilde{G}} \times V \rightarrow \mathcal H({\widetilde{L}})$ which are linear maps $V \rightarrow \mathcal H({\widetilde{L}})$ when restricted to any $g \in {\widetilde{G}}$, and satisfy: \begin{enumerate} \item $F(g, \ell v) = \ell F(g,v)$, for all $g\in {\widetilde{G}}, \ell \in {\widetilde{L}}, v \in V$. \item $F(gp, v) = F(g,v)\cdot \ell$ where $p \in {\widetilde{P}} = {\widetilde{L}} N$ has the form $p=\ell n$, with $\ell \in {\widetilde{L}}$, and $n \in N$.
\end{enumerate} Further, such an $F$ arises from the left hand side of the isomorphism in \eqref{Eq:easy crucial obs} if and only if the corresponding map ${\widetilde{G}} \rightarrow \Hom (V, \mathcal H({\widetilde{L}}))$ is locally constant on ${\widetilde{G}}$, whereas such an $F$ arises from the right hand side of the equality in \eqref{Eq:easy crucial obs} if and only if for each $v \in V$, $F(-,v)$ is a locally constant map from ${\widetilde{G}}$ to $\mathcal H({\widetilde{L}})$. Thus the functions $F$ which arise from the left hand side of equation \eqref{Eq:easy crucial obs} are always contained in those which arise from the right hand side, and the converse holds if $V$ is finitely generated over ${\widetilde{L}}$, say by $v_1,\cdots, v_n$. Indeed, if each of the functions $F(-,v_i)$ is constant in a neighborhood $U(g_0)$ of a fixed $g_0\in {\widetilde{G}}$, then by condition (1) above, $F(g,\ell v_i) = \ell F(g, v_i)=\ell F(g_0, v_i)$ for all $g \in U(g_0)$, $\ell \in {\widetilde{L}}$, hence $F(g,v) = F(g_0,v)$ for all $g \in U(g_0)$, $v \in V$. Now the isomorphism \eqref{Eq:easy crucial obs} (applied to ${\widetilde{P}}^-$ in place of ${\widetilde{P}}$) together with \eqref{Eq:ind of H(L)} proves the isomorphism \eqref{Eq:ind Hom}, and hence the isomorphism in part (1) of the proposition. Similarly for any finitely generated smooth projective representation $W$ of ${\widetilde{G}}$ we have \[ \mathbf{r}_{\widetilde{P}}\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}}))\simeq \Hom_{\widetilde{G}}(W, (\id\times\mathbf{r}_{\widetilde{P}})(\mathcal H({\widetilde{G}}))),\] as ${\widetilde{L}}$-modules and naturally in $W$.
This assertion is equivalent to proving that the Jacquet module of $\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}}))$ (considered as a right ${\widetilde{G}}$-module) is the same as $\Hom_{\widetilde{G}}(W,\mathcal C^\infty_c({\widetilde{G}}/N) )$ which is equivalent to proving that: \begin{enumerate} \item The natural map from $\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}}))$ to $\Hom_{\widetilde{G}}(W,\mathcal C^\infty_c({\widetilde{G}}/N) )$ is surjective. This is a consequence of $W$ being projective. \item The kernel of the natural map $\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}})) \to \Hom_{\widetilde{G}}(W,\mathcal C^\infty_c({\widetilde{G}}/N) )$ consists of $\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}})) [N]$ where for any smooth representation ${\mathcal W}$ of $N$, \[{\mathcal W}[N] = \operatorname{span}\{n\cdot w -w\mid n\in N,\ w \in{\mathcal W}\} = \{v \in {\mathcal W}\mid \int_{N_i}n\cdot v \,dn = 0 \}\] where $N_i$ is some compact open subgroup of $N$ depending on $v \in {\mathcal W}$. It follows that \[\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}})) [N] = \Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}}) [N]),\] using that $ \Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}}))$ is a smooth representation of $N$ which is the case as $W$ is a finitely generated ${\widetilde{G}}$-module. \end{enumerate} Applying the functor $\Hom_{\widetilde{G}}(W,-)$ to the exact sequence \[ 0 \to \mathcal H({\widetilde{G}})[N] \to \mathcal H({\widetilde{G}}) \to \mathcal C^\infty_c({\widetilde{G}}/N) \to 0,\] and noting that $W$ is projective, we have the exact sequence \[ 0 \to \Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}}) [N]) \to \Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}})) \to \Hom_{\widetilde{G}}(W, \mathcal C^\infty_c({\widetilde{G}}/N)) \to 0,\] proving the assertion that the Jacquet module of $\Hom_{\widetilde{G}}(W,\mathcal H({\widetilde{G}}))$, considered as a right ${\widetilde{G}}$-module, is the same as $\Hom_{\widetilde{G}}(W,\mathcal C^\infty_c({\widetilde{G}}/N) )$.
This completes the proof of the isomorphism in \eqref{Eq:res Hom}, and hence part 2. of the proposition. \end{proof} \subsection{On projective generators}\label{SS:projective generators and D_h} In order to better understand the homological duality functor $D_h$, we will compute its value on the projective generators $\Pi_{[\rho]}\in\mathcal M({\widetilde{G}})_{[\rho]}$ and $\Pi_\mathfrak s\in \mathcal M({\widetilde{G}})_\mathfrak s$ from \cref{S:Blocks as module cats}. The answer is as nice as it could possibly be. We will then use this computation to describe the functors $(-)^\vee$ and $D_h$ on the $\mathcal R_\mathfrak s$-side: we obtain the contragredient, resp. the homological duality, for $\mathcal R_\mathfrak s$. Recall that if $\rho\in\mathcal M({\widetilde{G}})$ is irreducible cuspidal, then $\Pi_{[\rho]}$ was defined to be $\operatorname{ind}_{{\wG^\circ}}^{\widetilde{G}}(\rho|_{\wG^\circ})$. Since a cuspidal representation is compact modulo center by Harish-Chandra's theorem, the following is a restatement of \cref{C:full cuspidal homological dual of it}: \begin{lemma}\label{L:homological dual of progen Pi_rho} If $\rho\in\mathcal M({\widetilde{G}})$ is irreducible cuspidal, then \[ D_h(\Pi_{[\rho]})\simeq \Pi_{[\rho^\vee]}.\] \end{lemma} Let $\mathfrak s = ({\widetilde{L}},\rho)\in\mathcal B({\widetilde{G}})$ be a cuspidal datum (up to inertia and conjugation). The projective generator $\Pi_\mathfrak s\in\mathcal M({\widetilde{G}})_\mathfrak s$ was defined in \cref{SS:induced proj gen} as \[ \Pi_\mathfrak s = \oplus_{\widetilde{P}} \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}} \Pi_{[\rho]}\] where the sum runs over all parabolics with Levi ${\widetilde{L}}$. We denote by $\mathfrak s^\vee = ({\widetilde{L}},\rho^\vee)$ the contragredient cuspidal datum (it could be the same as $\mathfrak s$ but in general it is not). 
\begin{proposition}\label{P:homological dual of progren Pi_s} With the above notation we have $D_h(\Pi_\mathfrak s)\simeq \Pi_{\mathfrak s^\vee}$. \end{proposition} \begin{proof} Use the previous lemma together with $D_h \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}\simeq \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}^-}^{\widetilde{G}} D_h$ (see \cref{P:D_h an ind res}) to get the following isomorphisms: \begin{align*} D_h(\Pi_\mathfrak s) & = \oplus_{{\widetilde{P}}} D_h(\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}}^{\widetilde{G}}(\Pi_{[\rho]}))\\ &\simeq \oplus_{{\widetilde{P}}} \mathbf{i}_{{\widetilde{L}},{\widetilde{P}}^-}^{\widetilde{G}} D_h(\Pi_{[\rho]})\\ & \simeq \oplus_{{\widetilde{P}}}\mathbf{i}_{{\widetilde{L}},{\widetilde{P}}^-}^{\widetilde{G}}\Pi_{[\rho^\vee]}\\ & = \Pi_{\mathfrak s^\vee}.\qedhere \end{align*} \end{proof} \begin{corollary}\label{C:D_h sends block to its contragredient} Homological duality restricts to a functor $D_h\colon \mathcal D^b(\mathcal M({\widetilde{G}})_\mathfrak s)\to \mathcal D^b(\mathcal M({\widetilde{G}})_{\mathfrak s^\vee})^{op}$, which is moreover involutive when restricted to finitely generated modules. \end{corollary} \begin{proof} The functor $D_h$ sends the projective generator $\Pi_\mathfrak s$ of $\mathcal M({\widetilde{G}})_\mathfrak s$ to the projective generator $\Pi_{\mathfrak s^\vee}$ of $\mathcal M({\widetilde{G}})_{\mathfrak s^\vee}$. The duality statement for finitely generated modules follows from \cref{C:D_h square to id for fin.h.dim}. 
\end{proof} Thanks to the vanishing result for finite length modules (\cref{T:homological duality single degree}), we can deduce: \begin{corollary}\label{C:subp main thm:involut} The functor $D_h$ restricts to an involution \[ \bD_h\colon \mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl} \to (\mathcal M({\widetilde{G}})_{\mathfrak s^\vee}^\mathsf{fl})^{op}.\] \end{corollary} \begin{corollary}\label{C:R_s anti-iso to R_svee} The homological duality induces an anti-isomorphism of algebras \[ \mathcal R_\mathfrak s\simeq \mathcal R_{\mathfrak s^\vee},\] or equivalently, an isomorphism of algebras $\mathcal R_\mathfrak s^{op}\simeq \mathcal R_{\mathfrak s^\vee}$. \end{corollary} \begin{proof} Since $D_h^2\simeq\Id$ (see \cref{C:D_h square to id for fin.h.dim}), we see that in particular $D_h$ is a (contravariant) equivalence of categories. Hence it induces an anti-isomorphism of algebras: \[\mathcal R_\mathfrak s = \Hom_{\widetilde{G}}(\Pi_\mathfrak s,\Pi_\mathfrak s) \simeq \Hom_{\widetilde{G}}(D_h\Pi_\mathfrak s,D_h\Pi_\mathfrak s) = \mathcal R_{\mathfrak s^\vee},\] where in the last equality we used \cref{P:homological dual of progren Pi_s}. \end{proof} The above corollary means that we have an equivalence of categories $\mathcal R_\mathfrak s\lmod\simeq \rmod\mathcal R_{\mathfrak s^\vee}$. In other words, given a right module $V\in\rmod\mathcal R_\mathfrak s$, its dual $V^*=\Hom_\mathbb{C}(V,\mathbb{C})$ is naturally a left $\mathcal R_\mathfrak s$-module, hence we can view it as a right $\mathcal R_{\mathfrak s^\vee}$-module. Put shortly, taking dual vector spaces gives us a functor \[ (-)^*\colon \rmod\mathcal R_\mathfrak s\to \rmod\mathcal R_{\mathfrak s^\vee}.\] We are now ready to describe the contragredient on the $\mathcal R_\mathfrak s$-module side. 
The answer is not surprising at all: \begin{proposition}\label{P:contragredient on the R_s side} There is a commutative square of functors \[ \begin{tikzcd} \mathcal M({\widetilde{G}})_{\mathfrak s} \ar[r,"\sim"] \ar[d,"(-)^\vee"'] & \rmod \mathcal R_{\mathfrak s} \arrow[d,"(-)^*"]\\ (\mathcal M({\widetilde{G}})_{\mathfrak s^\vee})^{op} \ar[r,"\sim"] & (\rmod \mathcal R_{\mathfrak s^\vee})^{op}. \end{tikzcd}\] \end{proposition} \begin{proof} The fact that the right vertical arrow makes sense was discussed above as a consequence of \cref{C:R_s anti-iso to R_svee}. Note that $\Pi_{\mathfrak s}$ being projective and finitely generated, together with the calculation from \cref{P:homological dual of progren Pi_s}, implies that we have natural isomorphisms of functors \[ \Hom_{{\widetilde{G}}}(\Pi_{\mathfrak s},-)\simeq \Hom_{\widetilde{G}}(\Pi_\mathfrak s,\mathcal H({\widetilde{G}}))\otimes_{\mathcal H({\widetilde{G}})}- \simeq \Pi_{\mathfrak s^\vee}\otimes_{\mathcal H({\widetilde{G}})} - .\] Let $V\in\mathcal M({\widetilde{G}})_\mathfrak s$ and use tensor-hom adjunction to obtain natural isomorphisms \begin{align*} \Hom_{\widetilde{G}}(\Pi_{\mathfrak s}, V)^* & \simeq \Hom_\mathbb{C}(\Pi_{\mathfrak s^\vee}\otimes_{\mathcal H({\widetilde{G}})} V, \mathbb{C})\\ & \simeq \Hom_{\widetilde{G}}(\Pi_{\mathfrak s^\vee},V^*)\\ & \simeq \Hom_{\widetilde{G}}(\Pi_{\mathfrak s^\vee},V^\vee) \end{align*} where the last isomorphism holds because any ${\widetilde{G}}$-equivariant map from the smooth module $\Pi_{\mathfrak s^\vee}$ to $V^*$ takes values in the smooth vectors, i.e., in $V^\vee$. 
\end{proof} As a final computation, we show that the homological duality on the $\mathcal M({\widetilde{G}})_\mathfrak s$ side goes to homological duality on the $\mathcal R_\mathfrak s$-module side: \begin{proposition}\label{P:homolog duality on the R_s side} There is a commutative square of functors \[ \begin{tikzcd} \mathcal D^b(\mathcal M({\widetilde{G}})_{\mathfrak s}) \ar[r,"\sim"] \ar[d,"D_h"'] & \mathcal D^b(\rmod \mathcal R_{\mathfrak s}) \arrow[d,"D_h"]\\ \mathcal D^b(\mathcal M({\widetilde{G}})_{\mathfrak s^\vee})^{op} \ar[r,"\sim"] & \mathcal D^b(\rmod \mathcal R_{\mathfrak s^\vee})^{op}. \end{tikzcd}\] where the $D_h$ on the right is $\RHom_{\mathcal R_\mathfrak s}(-,\mathcal R_\mathfrak s)$ and we again use the algebra anti-isomorphism $\mathcal R_\mathfrak s \simeq \mathcal R_{\mathfrak s^\vee}$ from \cref{C:R_s anti-iso to R_svee}. \end{proposition} \begin{proof} We deal first with the left-bottom composition. Since $D_h$ is involutive (see \cref{C:D_h square to id for fin.h.dim}), it induces an isomorphism on Hom spaces. In particular, using \cref{P:homological dual of progren Pi_s}, we get an isomorphism of left $\mathcal R_\mathfrak s$-modules \[ \RHom_{\widetilde{G}}(\Pi_{\mathfrak s^\vee}, D_h(V)) \simeq \RHom_{\widetilde{G}}(V,\Pi_\mathfrak s)\] for all $V\in\mathcal D^b(\mathcal M({\widetilde{G}})_\mathfrak s)$. On the other hand, let us recall that the equivalence $\mathcal M({\widetilde{G}})_\mathfrak s\to \rmod\mathcal R_\mathfrak s$ was given by the functor $\Hom_{\widetilde{G}}(\Pi_\mathfrak s,-)$. The tensor-Hom adjunction gives its left adjoint as $-\otimes_{\mathcal R_\mathfrak s} \Pi_\mathfrak s$. Since this adjunction is an equivalence, it is in fact a bi-adjunction, i.e., we also have that $\Hom_{{\widetilde{G}}}(\Pi_\mathfrak s,-)$ is left adjoint to $-\otimes_{\mathcal R_\mathfrak s} \Pi_\mathfrak s$. 
We can now compute the right-top composition and conclude the proof: \begin{align*} \RHom_{\mathcal R_\mathfrak s}(\RHom_{\widetilde{G}}(\Pi_\mathfrak s,V),\mathcal R_{\mathfrak s}) &\simeq \RHom_{\widetilde{G}}(V,\mathcal R_{\mathfrak s}\otimes_{\mathcal R_\mathfrak s} \Pi_\mathfrak s)\\ &\simeq \RHom_{\widetilde{G}}(V,\Pi_\mathfrak s), \end{align*} which are natural isomorphisms of left $\mathcal R_\mathfrak s$-modules. \end{proof} \begin{remark} Using a similar argument and Morita theory (see for example \cite[4.11 Theorem 2, Corollary 2]{Pareigis-categories}) one shows that under an equivalence of module categories $R\lmod \simeq S\lmod$, the $R$-bimodule $R$ is sent to the $S$-bimodule $S$, and as a consequence, the homological duality for $R$ corresponds to the homological duality for $S$. \end{remark} As a consequence of this section, in order to study homological duality on $\mathcal M({\widetilde{G}})$ we can study it on $\rmod \mathcal R_\mathfrak s$ for all the cuspidal data $\mathfrak s$. This will play an important role in \cref{SS:consequences dualities}, where we show that homological duality on finite length cuspidal representations is isomorphic to the contragredient (up to a shift). \section{The duality theorem of Schneider--Stuhler}\label{S:duality SchSt} We will use the duality theorem from \cref{S:abstract duality theorem}, applied to the Hecke algebra $\mathcal H({\widetilde{G}})$, to deduce a theorem of Schneider and Stuhler \cite[Duality theorem]{SchStu}. In \emph{loc.\ cit.}, the result is stated and proved in the subcategory of modules with central character, but this restriction was removed in \cite{NoriPras}. Our proof is independent both of \cite{SchStu} and of \cite{NoriPras}, and it moreover provides a generalization of the main result of \cite{NoriPras} from irreducible modules to modules of finite length. The approach that we take was suggested in \cite[\S 3.4]{BBK}. 
Let $\mathfrak s=[\rho,{\widetilde{L}}]\in\mathcal B({\widetilde{G}})$ be a cuspidal datum and put $d=d(\mathfrak s)$ for the split rank of the center of ${\widetilde{L}}$. We denote the contragredient cuspidal datum by $\mathfrak s^\vee = [\rho^\vee,{\widetilde{L}}]$, for which we have $d(\mathfrak s^\vee)=d$. Recall that in \cref{T:homological duality single degree} it was proved that, for $\pi\in\mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl}$, we have $H^i(D_h(\pi))\neq0$ only for $i=d$. We put $\bD_h(\pi) = H^d(D_h(\pi))$. \begin{theorem}\label{T:main duality Schneider-Stuhler} Let $\pi\in\mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl}$ be a finite length representation in the Bernstein block $\mathfrak s$ and let $\pi'\in\mathcal M({\widetilde{G}})$ be any smooth representation. Then the natural pairing \[ \Ext^{i}_{\widetilde{G}}(\pi, \pi') \times \Ext^{d-i}_{\widetilde{G}}(\pi', \bD_h(\pi)^\vee) \rightarrow \Ext^{d}_{\widetilde{G}}(\pi, \bD_h(\pi)^\vee) \to \mathbb{C}\] provides an isomorphism \[ \Ext^{i}_{\widetilde{G}}(\pi, \pi')^*\simeq \Ext^{d-i}_{\widetilde{G}}(\pi', \bD_h(\pi)^\vee).\] If $\pi$ is, moreover, irreducible, then $\Ext^d_{\widetilde{G}}(\pi,\bD_h(\pi)^\vee)\simeq \mathbb{C}$. \end{theorem} \begin{proof} The first part is just a reformulation of \cref{C:natural pairing RHom is perfect}. The second part comes from \cref{C:Hom M into DNak(M) is one dim} by noticing that $D_\mathrm{Nak}(\pi) = \bD_h(\pi)^\vee[d]$. \end{proof} \begin{remark} Note that $(-)^\vee\circ \bD_h$ is an endofunctor of the category $\mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl}$ of finite length modules. We expect that $\bD_h$ commutes with the contragredient on finite length modules, but we are not able to prove it. Equivalently, this amounts to saying that $(-)^\vee\circ \bD_h$ is an involution on $\mathcal M({\widetilde{G}})_\mathfrak s^\mathsf{fl}$. 
\cref{C:D_h on finlen cusp is contrag} confirms this for finite length \emph{cuspidal} representations. More generally, $D_h$ should commute with Grothendieck--Serre duality (over the Bernstein center) for all finitely generated ${\widetilde{G}}$-modules.\footnote{This was suggested to us by Roman Bezrukavnikov.} \end{remark} \section{Dualities on finite length representations}\label{S:dualities on finite length} In this section we prove that the homological duality restricted to finite length cuspidal representations is nothing else but the contragredient duality up to a shift (see \cref{C:D_h on finlen cusp is contrag}). Along the way we will show the well-known fact that Grothendieck--Serre duality over the Bernstein center, restricted to finite length modules, is just the contragredient duality (see \cref{C:D_GS on admissibles is contrag}). The tools that we will use are the dualizing (or canonical) complex $\omega_A^\circ\in\mathcal D^b(A\lmod)$ for a commutative ring $A$, as well as the exceptional pull-back functor $f^!\colon D^+(A\lmod)\to D^+(B\lmod)$ for any finite map of algebras $f\colon A\to B$. We will provide some of the details without striving for optimal generality. For a thorough discussion of these matters, one should consult \cite[Ch III]{Hart-RD} or \cite[\href{https://stacks.math.columbia.edu/tag/0A7A}{0A7A}]{SP}. See \cite[\href{https://stacks.math.columbia.edu/tag/0AU3}{0AU3}]{SP} for a summary. We start with some general discussion of Grothendieck--Serre and homological duality, and then we apply it to the context of smooth representations of $p$-adic groups. \subsection{General algebra}\label{SS:general algebra} For an algebra $R$, we denote by $R\lmod$ the abelian category of left modules over $R$. We put $\mathcal D^+(R\lmod)$ for the bounded below derived category of $R\lmod$ and $\mathcal D^b(R\lmod)$ for the \emph{bounded} derived category. For right modules we use the notation $\rmod R$. 
The full subcategory of $R\lmod$ consisting of finite length modules is denoted by $R\lmod^\mathsf{fl}$, and similarly for right modules. To simplify the discussion of dualizing complexes and upper-shriek functoriality, we start with the following definition: \begin{definition}\label{D:dualizing complex for A^n} For the polynomial ring $A=k[X_1,\dots,X_d]$, we put $\omega_A^\circ = A[d]$ as an object in $\mathcal D^b(A\lmod)$ and call it the normalized dualizing complex. \end{definition} \begin{remark} Notice that in general one speaks of dualizing complex\emph{es} over a ring $A$ (or more generally over a scheme $X$) and that they are not unique. In order to make it unique, one normalizes it in such a way as to make it compatible with exceptional pullback functors $f^!$, i.e., such that $f^!\omega_Y^\circ = \omega_X^\circ$ for any map of schemes $f\colon X\to Y$. Since a discussion about $f^!$ for a general map of algebras $f\colon A\to B$ would take us too far afield, we prefer to start with \cref{D:dualizing complex for A^n} and only discuss $f^!$ for finite maps. \end{remark} As a trivial example, notice that for a field $k$, we have $\omega_k^\circ = k[0]$. If $f\colon A\to B$ is a map of (commutative) algebras, then the natural restriction functor $B\lmod \to A\lmod$ is exact and extends in the obvious way to derived categories. We will denote this functor by $f_*\colon \mathcal D^b(B\lmod)\to \mathcal D^b(A\lmod)$. Notice that it can also be described as $f_* = {}_AB_{B}\otimes_B -$ where we have denoted by ${}_AB_B$ the $A$-$B$-bimodule $B$. \begin{lemma}\label{L:upper shriek for finite} Let $f\colon A\to B$ be a finite map of commutative algebras. Then the restriction functor $f_*\colon \mathcal D^+(B\lmod)\to \mathcal D^+(A\lmod)$ is left adjoint to the functor \[ f^!:=\RHom_A(B,-)\colon \mathcal D^+(A\lmod)\to\mathcal D^+(B\lmod). 
\] \end{lemma} \begin{proof} Since $f_* = {}_AB_{B}\otimes_B -$, the tensor-Hom adjunction gives for a $B$-module $M$ and an $A$-module $N$ the following natural isomorphism: \[ \RHom_A(B\otimes_B M,N) = \RHom_B(M,\RHom_A(B,N)).\] In other words, $f^!$ is right adjoint to $f_*$. \end{proof} \begin{remark} The previous proof does not need the map $f$ to be finite. If the map $f$ is not finite, however, the right adjoint $\RHom_A(B,-)$ is not denoted by $f^!$. In general, $f^!$ is right adjoint to a functor $f_!$ (restriction with compact support) that we will not define (see for example \cite[Appendix]{Hart-RD} or \cite[\href{https://stacks.math.columbia.edu/tag/0G4Z}{0G4Z}]{SP}). See \cite[\href{https://stacks.math.columbia.edu/tag/0A9Y}{0A9Y}]{SP} for a construction of $f^!$ in general. If $f$ is a finite map, then $f_*=f_!$, so the right adjoint of $f_*$ deserves the name $f^!$. \end{remark} \begin{definition}\label{D:dualizing from finite maps} Suppose we have defined, for a commutative $k$-algebra $A$, a (normalized) dualizing complex $\omega_A^\circ\in\mathcal D^b(A\lmod)$. Then for any finite map $f\colon A\to B$ of commutative algebras, we put $\omega_B^\circ:=f^!(\omega_A^\circ)$ and call it the (normalized) dualizing complex of $B$. \end{definition} \begin{remark} This definition of $\omega_B^\circ$ seems to depend on $A$ and on the map $f$. It turns out that this is not the case, see \cite[\href{https://stacks.math.columbia.edu/tag/0A7A}{0A7A}]{SP} and more generally \cite[\href{https://stacks.math.columbia.edu/tag/0BZI}{0BZI}]{SP} for details. \end{remark} \begin{example} For any ideal $I\le A$, the map $A\to A/I$ is finite, hence $\omega_{A/I}^\circ = \RHom_A(A/I,\omega_A^\circ)$. In particular, for $\mathfrak m$ a maximal ideal of $A$ with residue field $k$, we recover $\RHom_A(k,\omega_A^\circ) = \omega_k^\circ = k[0]$. \end{example} \begin{example}\label{E:dualizing for fdim com alg} If $A$ is a finite dimensional $k$-algebra, then $\omega_A^\circ = A^*[0]$. 
Indeed, using the finite map $k\to A$, and the fact that $\omega_k^\circ = k[0]$, we deduce immediately the equality $\omega_A^\circ = \RHom_k(A,k) = A^*[0]$. \end{example} \begin{definition}\label{D:D_GS for commutative k-algebras} For a commutative $k$-algebra $A$ with normalized dualizing complex $\omega_A^\circ$, we define the Grothendieck--Serre duality functor as \[ D_{GS}:=\RHom_A(-,\omega_A^\circ)\colon \mathcal D^b(A\lmod)\to (\mathcal D^b(A\lmod))^{op}.\] \end{definition} \begin{remark} The canonical evaluation morphism $\Id\to D_{GS}^2$ is an isomorphism on finitely generated modules. This follows from the definition of a dualizing complex, see \cite[\href{https://stacks.math.columbia.edu/tag/0A7C}{0A7C}]{SP}. \end{remark} \begin{example}\label{E:D_GS f.dim. com alg} For $A$ a finite dimensional commutative $k$-algebra we have that $D_{GS} = (-)^*$. To see this, use that $\omega_A^\circ = A^*[0]$ and the tensor-Hom adjunction. \end{example} \begin{lemma}\label{L:D_GS on finl for commutative A} For any commutative algebra $A$ with normalized dualizing complex $\omega_A^\circ$, the functor $D_{GS}$ restricts to a duality \[ D_{GS}\colon A\lmod^\mathsf{fl} \to (A\lmod^\mathsf{fl})^{op}\] which moreover can be identified with the contragredient duality $(-)^* = \Hom_k(-,k)$. \end{lemma} \begin{proof} Let $M$ be a finite length $A$-module. Then there exists a finite codimension ideal $I\le A$ such that the $A$-module structure on $M$ factors through the (finite) map of algebras $A\to A/I$. Using the right adjoint of restriction (see \cref{L:upper shriek for finite}) together with the definition of the normalized dualizing complex, we get natural isomorphisms of $A$-modules: \begin{align*} D_{GS}(M) & = \RHom_A(M,\omega_A^\circ)\\ & = \RHom_{A/I}(M,\RHom_A(A/I,\omega_A^\circ))\\ & = \RHom_{A/I}(M,\omega_{A/I}^\circ)\\ & = \RHom_{A/I}(M,(A/I)^*)\\ & = M^*, \end{align*} where in the last two equalities we have used \cref{E:D_GS f.dim. com alg}. 
\end{proof} \begin{remark} Part of the content of the above lemma is that $D_{GS}(M)$ is concentrated in a single degree if $M$ is a finite length $A$-module. This is not a priori obvious from the definition of $D_{GS}$ and it is a feature of dualizing complexes. The fact that it lives in degree $0$ has to do with the normalization that we chose. \end{remark} We would like to apply all this to non-commutative algebras. From now on, we consider $R$ a (possibly) non-commutative $k$-algebra together with an algebra map $A\to Z(R)$ from a commutative $k$-algebra $A$ to the center of $R$. Suppose moreover that $R$ is finite as an $A$-module and that $A$ has a (normalized) dualizing complex $\omega_A^\circ$. The category of left $R$-modules becomes linear over $A$ and therefore one can consider the following functor \begin{align*} D_{GS/A}\colon &\mathcal D^{b}(R\lmod)\to (\mathcal D^{b}(\rmod R))^{op} \\ &M\mapsto \RHom_A(M,\omega_A^\circ) \end{align*} from the (bounded) derived category of left $R$-modules to the derived category of right $R$-modules. We will abuse notation and denote by the same symbol the similar functor from right modules to left modules. \begin{definition} A \emph{duality} on a $k$-linear category $\mathcal C$ is a $k$-linear functor $D\colon \mathcal C\to \mathcal C^{op}$ such that $D^2\simeq \Id$. \end{definition} Notice that for abelian and triangulated categories, a duality is necessarily exact. \begin{corollary}\label{C:D_GS/A on finite length R-mod} In the above setting, the functor $D_{GS/A}$ restricts to a duality \[ D_{GS/A}\colon R\lmod^\mathsf{fl} \to (\mathbf{mod}^\mathsf{fl}\mhyp R)^{op} \] that can be moreover identified with the contragredient $(-)^*$. \end{corollary} \begin{proof} Observe that since $R$ is a finite $A$-module, any finite length $R$-module restricts to a finite length $A$-module. 
We can therefore apply \cref{L:D_GS on finl for commutative A} to deduce that $D_{GS/A}(M)\simeq M^*$ as $A$-modules for any $M$ a finite length left (resp., right) $R$-module. The naturality implies the isomorphism $D_{GS/A}(M)\simeq M^*$ holds also as right (resp., left) $R$-modules. \end{proof} \begin{remark} Notice that if we do not assume that $R$ is finite over $A$, then the proof still applies to $R$-modules that are of finite length as $A$-modules. \end{remark} Recall that for a $k$-algebra $R$, the homological duality was defined in \cref{S:abstract duality theorem} as the functor \[ D_h:=\RHom_R(-,R)\colon \mathcal D^\pm(R\lmod)\to \mathcal D^\mp(\rmod R)^{op},\] where it was shown that it is a duality on perfect complexes. If moreover $R$ has finite global dimension, then we get a duality on the bounded derived category of finitely generated $R$-modules (see \cref{P:abstract D_h is an involution}): \[ D_h\colon \mathcal D^b_{fg}(R\lmod)\to \mathcal D^b_{fg}(\rmod R)^{op}.\] In general, there is no reason for this functor to preserve any abelian subcategories or to have good properties. However, in the case of representations of $p$-adic groups it does (see \cref{T:homological duality single degree}) thanks to second-adjointness and the following technical condition that is satisfied on the cuspidal blocks (keeping the assumptions on $R$ and $A$): \begin{align}\label{Cond: R iso to D_GS(R)[d]} \begin{array}{c} \text{ there is an integer }d \text{ such that }\\ D_{GS/A}(R)\simeq R[d] \\ \text{as } R\text{-bimodules} \end{array}\tag{FsG} \end{align} \begin{remark} The name \eqref{Cond: R iso to D_GS(R)[d]} is inspired by \emph{Frobenius symmetric} and \emph{Gorenstein} since for a finite dimensional $k$-algebra the above condition is equivalent to $R$ being Frobenius symmetric. If $R$ is commutative and local then this condition is equivalent to $R$ being Gorenstein. 
\end{remark} \begin{proposition}\label{P:D_h=D_GS = (-)^* for finite length R-mod under cond (FsG)} Keeping the same assumptions, suppose moreover that $R$ satisfies condition \eqref{Cond: R iso to D_GS(R)[d]} above. Then the (shifted by $d$) homological duality functor $[d]\circ D_h$ is isomorphic to $D_{GS/A}$. In particular, it restricts to a duality on finite length $R$-modules \[ [d]\circ D_h\colon R\lmod^\mathsf{fl} \to (\mathbf{mod}^\mathsf{fl}\mhyp R)^{op}\] that is moreover isomorphic to the contragredient. \end{proposition} \begin{proof} Using condition \eqref{Cond: R iso to D_GS(R)[d]}, tensor-hom adjunction and \cref{C:D_GS/A on finite length R-mod}, we get natural isomorphisms of functors: \begin{align*} [d]\circ D_h & = [d]\circ \RHom_R(-,R) \\ & = \RHom_R(-,\RHom_A(R,\omega_A^\circ))\\ & = \RHom_A(-,\omega_A^\circ)\\ & = D_{GS/A}\\ & = (-)^* \qquad\text{ for finite length modules}.\qedhere \end{align*} \end{proof} \begin{remark}\label{R:replace bimodule by left/right in condition FsG} If in condition \eqref{Cond: R iso to D_GS(R)[d]} we replace $R$-bimodule by left $R$-module, then we still get that $[d]\circ D_h$ sends $R\lmod^\mathsf{fl}$ to $\mathbf{mod}^\mathsf{fl}\mhyp R$, but we cannot identify it with the contragredient. \end{remark} The above proof gives us a bit more: \begin{corollary}\label{C:D_h=D_GS if and only if FsG} Keeping the same assumptions, we have that $[d]\circ D_h \simeq D_{GS/A}$ on all $R$-modules if and only if condition \eqref{Cond: R iso to D_GS(R)[d]} is satisfied. \end{corollary} \begin{proof} By the definition of $D_{GS/A}$ and adjunction we have \begin{align*} D_{GS/A} &= \RHom_A(-,\omega_A^\circ)\\ & = \RHom_R(-,\RHom_A(R,\omega_A^\circ)) \end{align*} and by Yoneda's lemma this last functor is isomorphic to $[d]\circ D_h=\RHom_R(-,R[d])$ if and only if condition \eqref{Cond: R iso to D_GS(R)[d]} is satisfied. \end{proof} Let us consider a particular case that will be of importance to us. 
Suppose that $R$ is a finite projective $A$-algebra where $A=k[X_1^{\pm1},\dots,X_d^{\pm1}]$. We have that $\omega_A^\circ = A[d]$ and then condition \eqref{Cond: R iso to D_GS(R)[d]} becomes \begin{align}\label{Cond: Hom(R,A)=R as bimodules} \Hom_A(R,A) \simeq R \text{ as $R$-bimodules}. \end{align} \begin{corollary}\label{C:D_h=D_h/A=D_GS=(-)^* on finite length if (FsG) satisfied} Keeping the above notation and assuming condition \eqref{Cond: Hom(R,A)=R as bimodules}, we have the following isomorphisms of functors when restricted to finite-length $R$-modules \[ \RHom_R(-,R)[d]\simeq \RHom_A(-,A)[d] \simeq \Hom_k(-,k).\] \end{corollary} \subsection{Consequences for representations}\label{SS:consequences dualities} Recall the Bernstein decomposition (\cref{T:Bernstein dec for tildeG}): the category $\mathcal M({\widetilde{G}})$ of smooth representations of ${\widetilde{G}}$ decomposes into blocks $\mathcal M({\widetilde{G}})\simeq \prod_\mathfrak s \mathcal M({\widetilde{G}})_\mathfrak s$, and for each block, we have $\mathcal M({\widetilde{G}})_\mathfrak s \simeq \rmod \mathcal R_\mathfrak s$ (see \cref{P:block equiv to modules over algebra}). Moreover, we have shown (see \cref{P:contragredient on the R_s side} and \cref{P:homolog duality on the R_s side}) that under the equivalence $\mathcal M({\widetilde{G}})_\mathfrak s \simeq \rmod \mathcal R_\mathfrak s$, the contragredient on $\mathcal M({\widetilde{G}})_\mathfrak s$ corresponds to taking dual vector spaces on $\rmod\mathcal R_\mathfrak s$ and that homological duality on $\mathcal M({\widetilde{G}})_\mathfrak s$ corresponds to homological duality on $\rmod\mathcal R_\mathfrak s$. Recall from \cref{SS:center of a block} that the center of the block $\mathcal M({\widetilde{G}})_\mathfrak s$ is $\mathcal Z_\mathfrak s:=\mathcal Z(\mathcal R_\mathfrak s)$ and that $\mathcal R_\mathfrak s$ is finite over $\mathcal Z_\mathfrak s$. 
From the block decomposition, the (Bernstein) center of the category $\mathcal M({\widetilde{G}})$ satisfies \[ \mathcal Z = \prod_\mathfrak s \mathcal Z_\mathfrak s.\] Moreover, we know that each $\mathcal Z_\mathfrak s$ is the ring of invariants of a Laurent polynomial ring by a finite group (see \cref{T:center of M(G)_s as invariants}), therefore it is a Cohen-Macaulay ring, and has a dualizing complex $\omega_{\mathcal Z_\mathfrak s}^\circ$. The dualizing complex for the ring $\mathcal Z$ is then $\prod_\mathfrak s \omega_{\mathcal Z_\mathfrak s}^\circ$. \begin{definition} The Grothendieck--Serre duality relative to the center is defined by \[ D_{GS/\mathcal Z} =\RHom_\mathcal Z(-,\omega^\circ_\mathcal Z)^\mathrm{sm}\colon \mathcal D^b_{fg}(\mathcal M({\widetilde{G}}))\to \mathcal D^b_{fg}(\mathcal M({\widetilde{G}}))^{op} \] where the superscript ``sm'' signifies taking smooth vectors. \end{definition} The fact that $D_{GS/\mathcal Z}$ lands in the bounded derived category is a feature of the dualizing complex, namely that Grothendieck--Serre duality preserves the bounded derived category of finitely generated modules. \begin{corollary} \label{C:D_GS on admissibles is contrag} The functor $D_{GS/\mathcal Z}$ restricted to admissible modules is isomorphic to the contragredient functor. In particular, this holds for all finite length ${\widetilde{G}}$-modules. \end{corollary} \begin{proof} Let $V$ be an admissible module for ${\widetilde{G}}$. For any open compact subgroup ${\widetilde{K}} \subset {\widetilde{G}}$, $V^{\widetilde{K}}$ is a finite dimensional module over the ${\widetilde{K}}$-biinvariant Hecke algebra $\mathcal H({\widetilde{G}}/\mkern-6mu/ {\widetilde{K}})$ and it is also a module over the center $\mathcal Z$. 
We have the following natural isomorphisms: \begin{align*} \RHom_\mathcal Z(V,\omega_\mathcal Z^\circ) & = \RHom_\mathcal Z(\bigcup_{\widetilde{K}} V^{\widetilde{K}}, \omega_\mathcal Z^\circ)&\\ & = \varprojlim_{\widetilde{K}} \RHom_\mathcal Z(V^{\widetilde{K}},\omega_\mathcal Z^\circ) &\\ & = \varprojlim_{\widetilde{K}} (V^{\widetilde{K}})^*& \text{ by \cref{C:D_GS/A on finite length R-mod}} \\ & = V^*& \end{align*} and we conclude by taking smooth vectors. \end{proof} Grothendieck--Serre duality relative to the center behaves well under equivalences of categories. We record the following for completeness as an analogue of \cref{P:contragredient on the R_s side} and \cref{P:homolog duality on the R_s side}. \begin{proposition}\label{P:D_GS under equiv to R_s-mod} Let $\mathfrak s=[{\widetilde{L}},\rho]\in\mathcal B({\widetilde{G}})$ be a cuspidal datum. Then, under the equivalence $\mathcal M({\widetilde{G}})_\mathfrak s\simeq \rmod \mathcal R_\mathfrak s$, we have the following commutative square of functors \[ \begin{tikzcd} \mathcal D^b(\mathcal M({\widetilde{G}})_{\mathfrak s}) \ar[r,"\sim"] \ar[d,"D_{GS}"'] & \mathcal D^b(\rmod \mathcal R_{\mathfrak s}) \arrow[d,"D_{GS}"]\\ \mathcal D^b(\mathcal M({\widetilde{G}})_{\mathfrak s^\vee})^{op} \ar[r,"\sim"] & \mathcal D^b(\rmod \mathcal R_{\mathfrak s^\vee})^{op}. \end{tikzcd}\] \end{proposition} \begin{proof} Under an equivalence of categories, the centers are isomorphic. In this proof we put $Z:=\mathcal Z_\mathfrak s$, so the two $D_{GS}$ functors in the diagram are both $\RHom_Z(-,\omega_{Z}^\circ)$. \cref{C:D_GS on admissibles is contrag} ensures that the target of the left vertical arrow is correct. Similarly, use \cref{C:D_GS/A on finite length R-mod} together with \cref{C:R_s anti-iso to R_svee} for the right vertical arrow. 
We have to show that, for $V\in\mathcal M({\widetilde{G}})$, there are natural isomorphisms \begin{align}\label{Eq:D_GS commutes with equiv} \RHom_Z(\Hom_{\widetilde{G}}(\Pi_\mathfrak s,V),\omega_Z^\circ) \simeq \Hom_{\widetilde{G}} (\Pi_{\mathfrak s^\vee},\RHom_Z(V,\omega_Z^\circ)). \end{align} Notice that $\Pi_\mathfrak s$ is projective, so we don't need to derive the Hom functor out of it. Using \cref{L:Hom(M N)=Hom(M A)otimesN} and \cref{P:homological dual of progren Pi_s} we have $\Hom_{\widetilde{G}}(\Pi_\mathfrak s,V)\simeq \Pi_{\mathfrak s^\vee}\otimes_{\mathcal H({\widetilde{G}})} V$. We can rewrite the left-hand side of \eqref{Eq:D_GS commutes with equiv} accordingly and use tensor-hom adjunction to conclude. \end{proof} Let $\rho\in\mathcal M({\widetilde{G}})$ be an irreducible cuspidal representation and put as usual $d=\rank_\bZ({\widetilde{G}}/{\wG^\circ})$. It has already been proved that homological duality restricts to a contravariant functor \[ \bD_h:=[d]\circ D_h\colon \mathcal M({\widetilde{G}})_{[\rho]}^\mathsf{fl} \to (\mathcal M({\widetilde{G}})_{[\rho^\vee]}^\mathsf{fl})^{op},\] see \cref{C:D_h sends block to its contragredient} and \cref{T:homological duality single degree}. We can see that it is not new: \begin{corollary}\label{C:D_h on finlen cusp is contrag} The above functor $\bD_h$ is isomorphic to the contragredient. \end{corollary} \begin{proof} The block $\mathcal M({\widetilde{G}})_{[\rho]}$ is equivalent to the category $\rmod\mathcal R_{[\rho]}$ and this equivalence commutes with homological duality (see \cref{P:homolog duality on the R_s side}) as well as with the contragredient (see \cref{P:contragredient on the R_s side}). Moreover, it sends finite length ${\widetilde{G}}$-modules to finite length $\mathcal R_{[\rho]}$-modules. In order to prove the claim it is therefore enough to do it for $\rmod\mathcal R_{[\rho]}$. 
The center of $\mathcal R_{[\rho]}$ is a Laurent polynomial algebra of dimension $d$ (see \cref{P:R_rho is Azumaya and Z Laur pol}) and moreover $\mathcal R_{[\rho]}$ satisfies condition \eqref{Cond: Hom(R,A)=R as bimodules} by \cref{C:Frob sym cond for R_rho}. Applying \cref{C:D_h=D_h/A=D_GS=(-)^* on finite length if (FsG) satisfied} we conclude. \end{proof} The same argument gives \begin{corollary}\label{C:D_h=D_GS on all cuspidal block} On the cuspidal block $\mathcal D^b(\mathcal M({\widetilde{G}})_{[\rho]})$ we have a natural isomorphism of functors \[ [d]\circ D_h\simeq D_{GS/\mathcal Z}.\] \end{corollary} \begin{remark} If $G$ is a multiplicative group then all smooth representations of a finite central extension ${\widetilde{G}}$ are cuspidal and therefore the above corollaries apply to \emph{all} finite length smooth ${\widetilde{G}}$-modules. Moreover, in this situation the Grothendieck--Serre duality and homological duality agree (up to shift) on all smooth modules. \end{remark} More generally, the following gives a necessary and sufficient condition for homological duality to be isomorphic to Grothendieck--Serre duality: \begin{corollary}\label{C:D_h=D_GS if and only if} On a block $\mathcal D^b(\mathcal M({\widetilde{G}})_\mathfrak s)$, we have that $[d]\circ D_h\simeq D_{GS/\mathcal Z}$ if and only if $\mathcal R_\mathfrak s$ satisfies condition \eqref{Cond: R iso to D_GS(R)[d]}. \end{corollary} \begin{proof} Given \cref{P:D_GS under equiv to R_s-mod}, this is simply a restatement of \cref{C:D_h=D_GS if and only if FsG}. \end{proof} Let us also note the particular case of a free abelian group $\Gamma\simeq \bZ^d$. We have that all representations of $\Gamma$ are smooth and the Hecke algebra is just the group algebra $\mathbb{C}[\Gamma]\simeq \mathbb{C}[X_1^\pm,\dots,X_d^\pm]$. 
From \cref{C:D_h=D_h/A=D_GS=(-)^* on finite length if (FsG) satisfied} we get \begin{corollary} Grothendieck--Serre duality, homological duality (shifted by $d$), and contragredient duality all agree on finite length $\Gamma$-modules: \[ D_{GS} = [d]\circ D_h = (-)^* \text{ as functors on } \mathcal M(\Gamma)^\mathsf{fl}.\] \end{corollary}
https://arxiv.org/abs/2106.00437
Homological duality for covering groups of reductive $p$-adic groups
In this largely expository paper we extend properties of the homological duality functor $RHom_{\mathcal H}(-,{\mathcal H})$ where ${\mathcal H}$ is the Hecke algebra of a reductive $p$-adic group, to the case where it is the Hecke algebra of a finite central extension of a reductive $p$-adic group. The most important properties being that $RHom_{\mathcal H}(-,{\mathcal H})$ is concentrated in a single degree for irreducible representations and that it gives rise to Schneider--Stuhler duality for Ext groups (a Serre functor like property). Along the way we also study Grothendieck--Serre duality with respect to the Bernstein center and provide a proof of the folklore result that on admissible modules this functor is nothing but the contragredient duality. We single out a necessary and sufficient condition for when these three dualities agree on finite length modules in a given block. In particular, we show this is the case for all cuspidal blocks as well as, due to a result of Roche, on all blocks with trivial stabilizer in the relative Weyl group.
https://arxiv.org/abs/2006.04745
Generalized golden mean and the efficiency of thermal machines
We investigate generic heat engines and refrigerators operating between two heat reservoirs, for the condition when their efficiencies are equal to each other. It is shown that the corresponding value of efficiency is given as the inverse of the generalized golden mean, $\phi_p$, where the parameter $p$ depends on the degree of irreversibility of both engine and refrigerator. The reversible case ($p=1$) yields the efficiency in terms of the standard golden mean. We also extend the analysis to a three-heat-reservoir setup.
\section{Introduction} Although the golden mean (golden ratio) has engaged artists, mathematicians and philosophers since antiquity \cite{Coxeter, Ogilvy1990, Markowsky1992, Livio}, it has been appreciated more recently that it is not a unique number as far as many of its algebraic and geometric properties are concerned \cite{Falbo2005, Fowler1982}. In fact, one of the simplest generalizations of the golden mean may be defined by the positive solution, $\phi_p$, of the following quadratic equation: \begin{equation} x^2 - p x -1 = 0, \label{x2p} \end{equation} where $p$ is a given positive number. The number $\phi_p$ is given by: \begin{equation} \phi_p = \cfrac{\sqrt{p^2+4} + p}{2}, \label{phip} \end{equation} also referred to as the $p$th order extreme mean (POEM) \cite{Falbo2005}. More specifically, when $p$ is a positive integer $n$, it is known as the $n$th order extreme mean (NOEM) \cite{Fowler1982} or a member of the family of metallic means. For example, $p=1$ gives the golden mean $\phi_1 = (\sqrt{5}+1)/2$; $p=2$ yields the silver mean, $\phi_2 = \sqrt{2} +1$, and so on. Amongst other mathematical constructions, the ratio of successive terms in the generalized Fibonacci sequence $F_{n}^{} = p F_{n-1}^{} + F_{n-2}^{}$ tends to this number: \begin{equation} \lim _{n \to \infty} \frac{F_{n+1}}{F_n} = \phi_p. \end{equation} It also relates the lengths of the diagonals of a regular $n$-gon for odd $n\geq 5$. For other properties and identities related to $\phi_p$, see Ref. \cite{Fowler1982}. The generalized golden mean ($\phi_p$) appears as the optimal solution determining the shape of Newton's frustum that faces the least resistance while moving through a rare medium \cite{Sampedro2010}. The metallic means family has been related to quasiperiodic dynamics \cite{Spinadel1997}. Another simple physical example is a semi-infinite resistor network \cite{Srinivasan1992, Kasperski2013} as shown in Fig. 
1a, whose equivalent resistance ($r'$) between points $A$ and $B$ satisfies the quadratic equation $(r')^2-r r' -1 =0$, and therefore, $r' = \phi_r$. In this article, we describe an occurrence of the generalized golden mean in the context of thermodynamics by relating this number to the instance of equal efficiencies of a heat engine and a refrigerator. \begin{figure}[ht] \includegraphics[width=1.1\linewidth]{equalresistor.pdf} \caption{a) A semi-infinite resistor network where each resistance in series is $r$ while a resistance in parallel is $1/r$. b) The effective resistance between points A and B is $r' = \phi_r$.} \label{equalresis} \end{figure} \section{Two-heat-reservoir setup} A heat engine and a refrigerator constitute basic mechanisms of a thermal machine operating between two heat reservoirs at unequal temperatures, say, $T_h$ and $T_c$ ($T_c < T_h$) \cite{Zemansky1997}. An irreversible machine always leads to a net entropy increase of the universe. On the other hand, the ideal limit of a reversible operation implies no net entropy change of the universe. Further, irreversibility leads to a decrease in performance of the machine, and so the efficiency of a thermal machine is limited by its maximal value, known as the Carnot efficiency, obtained in the reversible case. {\it Heat engine}: We first consider an irreversible heat engine working in a cycle, and suppose that $W$ amount of work is extracted per cycle when $Q_h$ amount of heat is absorbed from the hot reservoir. The total entropy generated per cycle is: \begin{equation} \Delta_{\rm tot} S = -\frac{Q_h}{T_h} + \frac{Q_c}{T_c} >0, \label{dst} \end{equation} where $Q_c = Q_h-W$ is the heat rejected to the cold reservoir. All quantities defined above are positive. From Eq. (\ref{dst}), we can express the work output of an irreversible cycle as: \begin{equation} W = (1-\theta) Q_h - T_c \Delta_{\rm tot} S, \label{wirr} \end{equation} where $\theta = T_c/T_h$. 
Let $\Delta_{\rm rev} S = Q_h/T_h$ be the entropy transferred from the hot reservoir to the working medium, in the reversible case. Then, the efficiency, $E=W/Q_h$, can be written as \begin{equation} E = 1 - \left( 1+ \frac{\Delta_{\rm tot} S}{\Delta_{\rm rev} S} \right) \theta. \label{efirr} \end{equation} Here, we define the ``irreversibility'' parameter, $z = 1+ {\Delta_{\rm tot} S}/{\Delta_{\rm rev} S} > 1$, so that the efficiency of the engine can be expressed as $E = 1- z \theta$. The reversible case corresponds to $\Delta_{\rm tot} S=0$, or $z=1$, yielding Carnot efficiency equal to $1-\theta$. In other words, $0 \le E \le 1-\theta $ implies $1\le z \le 1/\theta$. {\it Refrigerator}: Now, let us consider the machine in the refrigerator mode. Suppose an input work $\mathscr{W}$ is required to transport $\mathscr{Q}_c$ amount of heat against the temperature gradient. Let $\mathscr{Q}_h = \mathscr{Q}_c + \mathscr{W}$ be the amount of heat deposited in the hot reservoir. The efficiency of a refrigerator is defined as $R= \mathscr{Q}_c/\mathscr{W}$. The total entropy generated per cycle \cite{Zemansky1997} is: \begin{equation} \Delta_{\rm tot} \mathscr{S} = \frac{ \mathscr{Q}_h}{T_h} - \frac{ \mathscr{Q}_c}{T_c} >0. \end{equation} Let $\Delta_{\rm rev} \mathscr{S}$ be the entropy transferred reversibly from the {\it cold} reservoir to the working medium. Then we have $\mathscr{Q}_c = T_c \Delta_{\rm rev} \mathscr{S}$, and so, we can write \begin{equation} \mathscr{W} = \frac{ 1-\theta}{\theta} \mathscr{Q}_c + T_h \Delta_{\rm tot} \mathscr{S}. \end{equation} The efficiency is then given by: \begin{equation} R = \theta \left(1+ \frac{\Delta_{\rm tot} \mathscr{S}} {\Delta_{\rm rev} \mathscr{S}} - \theta \right)^{-1}. 
\label{refirr} \end{equation} Analogously to the case of the engine, we define the parameter $z' = 1+ {\Delta_{\rm tot} \mathscr{S}}/{\Delta_{\rm rev} \mathscr{S}} >1$, and express the efficiency of the refrigerator as: $R = \theta (z' -\theta)^{-1}$. In the reversible case, $z'=1$, and so $R = \theta (1 -\theta)^{-1}$. Note that, unlike the parameter $z$, $z'$ is not bounded from above. In the above, we have identified the expressions for the efficiencies as functions of the ratio of temperatures as well as the irreversibility parameter $z$ or $z'$. These expressions refer to {\it any} irreversible thermal cycle between the two reservoirs, with given values of $\theta, z$ and $z'$. We may now ask: for what ratio of temperatures do a heat engine and a refrigerator, with given values of their respective irreversibility parameters, have the {\it same} efficiency, and what is its value? Thus, setting $E = R$, and solving for $0<\theta <1$, we obtain: \begin{equation} \theta = \frac{z z' + 2 -\sqrt{(z z')^2 +4 }}{2z}. \label{theq} \end{equation} Then, the efficiency, $E=1-z\theta$, at the above condition is given by: \begin{equation} E = R = \frac{\sqrt{(z z')^2 + 4} - z z'}{2}. \label{ezrzp} \end{equation} Interestingly, the above expression for the efficiency depends only on the {\it product} of the irreversibility parameters. Thus, we may define ${z z'} \equiv p$, and rewrite Eq. (\ref{ezrzp}) as follows: \begin{equation} E = \frac{\sqrt{p^2 + 4} - p}{2} = \frac{1}{\phi_p}, \label{ez} \end{equation} where $E$ is expressed in terms of $\phi_p$ from Eq. (\ref{phip}). Note that, in the above, $p \geqslant 1$, where $p=1$ corresponds to the reversible case, with $z= z' = 1$, yielding $E = R = (\sqrt{5}-1)/2 = 1/\phi_1$, which was earlier noted in Ref. \cite{Popkov}. Correspondingly, the ratio of temperatures in the reversible case is required to be: $\theta = (3-\sqrt{5})/2 = 2-\phi_1$. 
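The algebra above can be checked numerically. The following Python sketch (ours, not part of the original text) evaluates Eq. (\ref{theq}) for a few illustrative values of $z$ and $z'$ and confirms that the engine and refrigerator efficiencies coincide and equal $1/\phi_p$ with $p = zz'$:

```python
import math

def phi(p):
    # generalized golden mean: positive root of x^2 - p*x - 1 = 0
    return (math.sqrt(p * p + 4) + p) / 2

def equal_efficiency(z, zp):
    # temperature ratio theta at which engine and refrigerator
    # efficiencies coincide, together with both efficiencies
    zz = z * zp
    theta = (zz + 2 - math.sqrt(zz * zz + 4)) / (2 * z)
    E = 1 - z * theta            # engine efficiency, E = 1 - z*theta
    R = theta / (zp - theta)     # refrigerator efficiency, R = theta/(z'-theta)
    return theta, E, R

for z, zp in [(1.0, 1.0), (1.2, 1.5), (2.0, 3.0)]:
    theta, E, R = equal_efficiency(z, zp)
    assert 0 < theta < 1
    assert abs(E - R) < 1e-12                  # efficiencies agree
    assert abs(E - 1 / phi(z * zp)) < 1e-12    # and equal 1/phi_p

# reversible case: E = 1/phi_1 = (sqrt(5) - 1)/2
_, E, _ = equal_efficiency(1.0, 1.0)
print(round(E, 6))  # prints 0.618034
```

The values of $z$ and $z'$ used here are arbitrary illustrative choices; any $z, z' \geq 1$ give the same agreement, since the identity holds exactly.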
\section{Three-heat-reservoir setup} Next, we show the occurrence of the generalized golden mean in a slightly different setting. Let us consider three heat reservoirs with temperatures ordered as $T_l < T_c < T_h$. Assume that we can operate an engine between the reservoirs at $T_h$ and $T_l$, and a refrigerator between the reservoirs at $T_c$ and $T_l$ (see Fig. 2). Further, let these be ideal or reversible machines, so that the respective thermal efficiencies are given as: \begin{equation} E_{hl} = \frac{T_h-T_l}{T_h}, \quad R_{cl} = \frac{T_l}{T_c-T_l}. \end{equation} \begin{figure}[ht] \includegraphics[width=1.2\linewidth]{3reser.pdf} \caption{A three-heat-reservoir setup, in which a heat engine runs between the hottest ($T_h$) and the coldest ($T_l$) reservoirs, while a refrigerator runs between the coldest and an intermediate reservoir ($T_c$). For given values of $T_h$ and $T_c$, we look for $T_l$ at which the efficiencies of the engine and the refrigerator are equal.} \label{3src} \end{figure} Now, for given values of $T_h$ and $T_c$, we look for the temperature $T_l$ such that these two efficiencies become equal. Setting $E_{hl} = R_{cl}$, we obtain a quadratic equation for $T_l$: \begin{equation} T_{l}^{2} - (2T_h + T_c) T_l + T_h T_c = 0, \end{equation} whose solutions are: \begin{equation} T_l = \frac{T_h}{2} \left[ \theta +2 \pm \sqrt{\theta^2+4} \right], \end{equation} where $\theta = T_c/T_h$. Only the negative root above yields an allowed value of the efficiency, given by: \begin{equation} E_{hl} = R_{cl} = \frac{ \sqrt{\theta^2 + 4}- \theta}{2} = \frac{1}{\phi_\theta}. \label{phth} \end{equation} Note that the generalized golden mean appears above within a reversible setup. Secondly, due to $0< \theta <1$, it covers a range of parameters complementary to the expression (\ref{ez}), where $p \geqslant 1$. Finally, we extend our exploration of the three-heat-reservoir case into the irreversible regime. 
Consider an irreversible engine between $T_h$ and $T_l$, specified by the irreversibility parameter $z$. Likewise, let the corresponding parameter for the refrigerator operating between $T_c$ and $T_l$ be $z'$. Then, following Section II, the efficiencies of these machines can be written as: \begin{equation} \bar{E}_{hl} = 1-z\frac{T_l}{T_h}; \quad \bar{R}_{cl} = \frac{T_l/T_c}{z'- T_l/T_c}. \end{equation} Now setting the condition $ \bar{E}_{hl} = \bar{R}_{cl}$, we can solve for $T_l$: \begin{equation} T_l = \frac{T_h}{2z} \left[ p\theta + 2 \pm \sqrt{(p \theta)^2 + 4} \right], \end{equation} where $p = z z'$. Consequently, the allowed solution for the efficiency is given by: \begin{equation} \bar{E}_{hl} = \frac{\sqrt{(p \theta)^2+ 4}-p \theta}{2} = \frac{1}{\phi_{p\theta}^{}}. \end{equation} For $p=1$, we obtain the reversible case discussed above (Eq. (\ref{phth})). Moreover, letting $\theta \to 1$, which may be realized by taking $T_c \to T_h$, we revert to the case of two reservoirs (at $T_h$ and $T_l$), as discussed in Section II. \section{Conclusions} In the above, we have observed that the generalized golden mean determines the efficiencies of a heat engine and a refrigerator when these efficiencies are set equal to each other. When both machines are reversible, the efficiency is related to the standard golden mean. To extend to the irreversible domain, we have first recast the efficiencies of a generic heat engine and a refrigerator in terms of an irreversibility parameter and the ratio of the reservoir temperatures. Then, the efficiency depends only on the product of the two irreversibility parameters. The importance of this step can be appreciated by noting that {\it only} for the reversible case, we have $R = Q_c/W$ along with $E = W/Q_h$, where $Q_c = Q_h -W$, i.e. the same amounts of heat and work appear for a {\it reversible} heat engine as well as for a refrigerator. 
So, in this case, we can express the condition $E=R$ as follows: \begin{equation} \frac{W}{Q_h} = \frac{Q_c}{W}. \end{equation} From this, we obtain the equation $E = E^{-1} - 1$, whose solution is $E = R = 1/\phi_1$ \cite{Popkov}. However, the efficiencies in the irreversible case are $E= W/Q_h$ and $R = \mathscr{Q}_c/\mathscr{W}$, which do not seem useful for applying the $E=R$ condition directly. Therefore, the forms (\ref{efirr}) and (\ref{refirr}) have been employed. Further, we have also extended this condition to the three-reservoir scenario, which includes the two-reservoir setup as a limiting case. It will be interesting to identify physical situations leading to the equality of the efficiencies of an engine and a refrigerator under the given conditions. Here, we have discussed one possible setup using three heat reservoirs. The interested reader can identify other engine-refrigerator pairs in the three-reservoir setup which lead to equal efficiencies, expressed in terms of suitable generalized golden means.
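The three-reservoir result admits the same kind of numerical check. The Python sketch below (ours; the temperatures are illustrative choices, not values from the text) computes $T_l$ from the negative root of the quadratic and verifies that both efficiencies equal $1/\phi_{p\theta}$:

```python
import math

def phi(p):
    # generalized golden mean: positive root of x^2 - p*x - 1 = 0
    return (math.sqrt(p * p + 4) + p) / 2

def three_reservoir(Th, Tc, z, zp):
    # T_l at which the (Th, Tl) engine and the (Tc, Tl) refrigerator,
    # with irreversibility parameters z and z', have equal efficiency
    theta = Tc / Th
    p = z * zp
    Tl = Th / (2 * z) * (p * theta + 2 - math.sqrt((p * theta) ** 2 + 4))
    E = 1 - z * Tl / Th                       # engine efficiency
    R = (Tl / Tc) / (zp - Tl / Tc)            # refrigerator efficiency
    return Tl, E, R

Th, Tc = 500.0, 400.0                         # illustrative temperatures (our choice)
for z, zp in [(1.0, 1.0), (1.3, 1.6)]:
    Tl, E, R = three_reservoir(Th, Tc, z, zp)
    assert 0 < Tl < Tc                        # ordering T_l < T_c < T_h holds
    assert abs(E - R) < 1e-12                 # efficiencies agree
    assert abs(E - 1 / phi(z * zp * Tc / Th)) < 1e-12   # and equal 1/phi_{p*theta}
```

Setting $z=z'=1$ reproduces the reversible result $1/\phi_\theta$, and letting $T_c \to T_h$ recovers the two-reservoir value $1/\phi_p$.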
https://arxiv.org/abs/math/0604356
Long $n$-zero-free sequences in finite cyclic groups
A sequence in the additive group ${\mathbb Z}_n$ of integers modulo $n$ is called $n$-zero-free if it does not contain subsequences with length $n$ and sum zero. The article characterizes the $n$-zero-free sequences in ${\mathbb Z}_n$ of length greater than $3n/2-1$. The structure of these sequences is completely determined, which generalizes a number of previously known facts. The characterization cannot be extended in the same form to shorter sequence lengths. Consequences of the main result are best possible lower bounds for the maximum multiplicity of a term in an $n$-zero-free sequence of any given length greater than $3n/2-1$ in ${\mathbb Z}_n$, and also for the combined multiplicity of the two most repeated terms. Yet another application is finding the values in a certain range of a function related to the classic theorem of Erdős, Ginzburg and Ziv.
\section{Introduction} \label{Intro} The Erd\H{o}s--Ginzburg--Ziv theorem \cite{ErdosGinzburgZiv} states that each sequence of length ${2n{-}1}$ in the cyclic group of order~$n$ has a subsequence of length~$n$ and sum~zero. This article characterizes all sequences with length greater than~$3n/2{-}1$ in the same group that do not satisfy the conclusion of the celebrated theorem. In the sequel, the cyclic group of order~$n$ is identified with the additive group~$\Zn/=\Z//n\Z/$ of integers modulo~$n$. A sequence in~$\Zn/$ is called a {\em zero sequence} or a {\em zero sum} if its sum is the zero element of~$\Zn/$. A sequence is {\em \zf/} if it does not contain a nonempty zero subsequence. Sequences in~$\Zn/$ without zero subsequences of length~$n$ will be called {\em $n$-\zf/}. \looseness=-1 The $n$-\zfs/s in~$\Zn/$ were given considerable attention. Here we mention only results most closely related to our topic. Yuster and~Peterson \cite{PetersonYuster} and, independently, Bialostocki and Dierker~\cite{BialostockiDierker}, determined all $n$-\zfs/s of length~$2n{-}2$ in~$\Zn/$. These are the sequences containing exactly two distinct elements $a$ and~$b$ of~$\Zn/$, each of them repeated $n{-}1$~times, such that ${a-b}$ generates~$\Zn/$. Ordaz and Flores~\cite{FloresOrdaz} solved the same problem for length~$2n{-}3$. Again, two distinct terms have high combined multiplicity (details can be found in Section~\ref{Highmult}). In general, the combined multiplicity of the two most represented terms was intensively studied. Gao \cite{Gao2} proved a statement to this effect for $n$-\zfs/s of length roughly greater than~$7n/4$. A recent work of Gao, Panigrahi and Thangadurai \cite{GaoPanigrahiThagadurai} considered the same question in the case of a prime~$n$, for length roughly greater than~$5n/3$. 
Based on the main theorem in~\cite{Gao2}, Bialostocki et~al.\ \cite{BialostockiDierkerGrynkiewiczLotspeich} obtained an explicit characterization of the $n$-\zfs/s in~$\Zn/$ with length greater than or equal to ${2n-2-\lfloor n/4\rfloor}$. The core of their proof is essentially present already in the article~\cite{GaoHamidoune} of Gao and Hamidoune. Our goal is to characterize the $n$-\zfs/s in~$\Zn/$ of length greater than~$3n/2{-}1$. The argument relies on a key structural result from~\cite{PCheetahSCat} about \zfs/s of length greater than~$n/2$ in~$\Zn/$. The description obtained generalizes the one from~\cite{BialostockiDierkerGrynkiewiczLotspeich} and cannot be extended in the same form to shorter sequences. In this sense the range of the characterization is optimal. Let $a$ be an integer coprime to~$n$ and $b$ an element of~$\Zn/$. The function $x\mapsto ax+b$ from~$\Zn/$ into itself will be called an {\em affine map}. In particular the {\em translations}~$x\mapsto x+b$ are affine maps, for each~$b\in \Zn/$. Suppose that a sequence~$\beta$ in~$\Zn/$ can be obtained from a sequence~$\alpha$ through an affine map and rearrangement of terms. Then we say that $\alpha$ is {\em similar} to~$\beta$ and write $\alpha\sim\beta$. Clearly $\sim$ is an equivalence relation. Affine maps preserve zero sums of length~$n$ and do not bring in new ones. So it is usual not to distinguish between similar sequences in questions involving $n$-term zero subsequences. Our characterization will be up to similitude, i.e.\ up to affine maps and rearrangement of terms. If $a\in\Zn/$, let $\v{a}$ denote the unique integer in~$[1,n]$ that belongs to the congruence class~$a$ modulo~$n$. We call~$\v{a}$ the {\em least positive representative} of~$a$. Least positive representatives occur frequently in the text, so we allow a certain abuse of notation to simplify the exposition. In some cases we write $a$ instead of~$\v{a}$. 
This applies mostly to the group elements $0$ and $1$; the actual meaning of the symbols~$0$ and $1$ should be clear from the context. Furthermore, for a sequence~$\alpha$ in~$\Zn/$ we denote by $\v{\alpha}$ the sequence of its \lpr/, and by $L(\alpha)$ the sum of~$\v{\alpha}$. The sequences considered are written multiplicatively, and multiplicities of sequence terms are indicated by using exponents. The length of the sequence~$\alpha$ is denoted by~$|\alpha|$. The {\em union} of two sequences $\alpha$ and $\beta$, denoted $\alpha\cup\beta$, is the sequence formed by appending the terms of~$\beta$ to~$\alpha$. Also, $1-\beta$ is the sequence obtained by replacing each term $b$ of $\beta$ by $1-b$. Now the main result in the article, Theorem~\ref{characterization}, can be stated as follows: \begin{quote}\em A sequence of length greater than $3n/2-1$ in~$\Zn/$ is $n$-\zf/ if and only if it is similar to the union of two sequences $\alpha$ and $\beta$ in~$\Zn/$ such that $L(\alpha)<n$ and $L(1-\beta)<n$. \end{quote} Once such a characterization is available, certain basic questions about sufficiently long $n$-\zfs/s in cyclic groups receive satisfactory answers. The preliminaries needed for the key proof are included in Section~\ref{Pre}. The main result is proven in Section~\ref{MainResult}. It is preceded by some properties of sequences of the form $\alpha\cup\beta$, where $\alpha$ and $\beta$ are sequences in~$\Zn/$ satisfying $L(\alpha)<n$ and $L(1-\beta)<n$. In Section~\ref{Highmult} we study the maximum multiplicity of a term in an $n$-\zfs/ of length~$n-1+k$, where $n/2<k<n$, and also the combined multiplicity of the two most repeated terms. Best possible lower bounds are established in both cases. The main theorem enables us to determine, in Section~\ref{gnk}, the values in a certain range of a function related to a variant of the Erd\H{o}s--Ginzburg--Ziv theorem. 
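Before turning to the proofs, the stated characterization can be confirmed by exhaustive search for a small modulus. The Python sketch below (ours, not part of the paper) checks, for $n=4$ and length $6 > 3n/2-1$, that a sequence in~$\Zn/$ is $n$-zero-free exactly when some affine image of it splits into $\alpha$ and $\beta$ with $L(\alpha)<n$ and $L(1-\beta)<n$:

```python
from itertools import combinations, combinations_with_replacement
from math import gcd

def lpr(x, n):
    # least positive representative of x mod n, lying in [1, n]
    return (x - 1) % n + 1

def n_zero_free(seq, n):
    # True if no length-n subsequence sums to 0 mod n
    return all(sum(c) % n != 0 for c in combinations(seq, n))

def separable(seq, n):
    # True if some affine image a*x+b (gcd(a,n)=1) splits into
    # alpha and beta with L(alpha) < n and L(1 - beta) < n
    m = len(seq)
    for a in range(1, n):
        if gcd(a, n) != 1:
            continue
        for b in range(n):
            img = [(a * x + b) % n for x in seq]
            la = [lpr(t, n) for t in img]        # lpr if the term goes to alpha
            lb = [lpr(1 - t, n) for t in img]    # lpr of 1 - term if it goes to beta
            for mask in range(1 << m):
                if (sum(la[i] for i in range(m) if mask >> i & 1) < n
                        and sum(lb[i] for i in range(m) if not mask >> i & 1) < n):
                    return True
    return False

n, length = 4, 6                                 # length > 3n/2 - 1
for seq in combinations_with_replacement(range(n), length):
    assert n_zero_free(seq, n) == separable(seq, n)
print("characterization verified for n =", n, "length =", length)
```

Enumerating multisets suffices, since both properties are invariant under rearrangement of terms; larger moduli can be checked the same way at the cost of the $2^m$ partition loop.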
\section{Preliminaries} \label{Pre} For sequences $\alpha$ and $\beta$ in~$\Zn/$, we say that $\alpha$ is {\em equivalent} to~$\beta$ if $\beta$ can be obtained from~$\alpha$ through multiplication by an integer coprime to~$n$ and rearrangement of terms. Such multiplication is an affine map preserving all zero sums in~$\Zn/$, not just the ones of length~$n$. In particular equivalent sequences are similar. Our characterization rests on the following result from~\cite{PCheetahSCat}: \begin{thm}\label{zero-free} Each \zfs/ of length greater than~$n/2$ in the cyclic group~$\Zn/$ is equivalent to a sequence whose sum of the \lpr/ is less than~$n$. \end{thm} A restatement of a fact from~\cite{Gao1} is also used in the main proof. \begin{prop}\label{gao} Suppose that a sequence in an abelian group of order~$n$ is such that the multiplicity of each term is at most the multiplicity of\kern1pt~$0$. Then each subsequence sum of length greater than~${n}$ equals a subsequence sum of length exactly~${n}$. \end{prop} One more statement is necessary for the main argument. \begin{prop}\label{general} Let $\alpha$ be a sequence with positive integer terms of length~$\ell$ and sum~$S$, where ${2\ell>S}$. Then: a) $\alpha$ contains at least $2\ell-S$ terms equal to~$1$; b) each integer in the interval $[2\ell-S,S]$ is representable as the sum of a subsequence of~$\alpha$ with length at least~$2\ell-S$. \end{prop} \begin{pf} Part~a) is straightforward. If $\alpha$ contains $x$ terms equal to~1 then each of the remaining $\ell-x$~terms is at least~2, hence $S\ge x+2(\ell-x)=2\ell-x$. This implies $x\ge 2\ell-S$. For part~b), fix $2\ell-S$ ones in~$\alpha$. The remaining ${\ell-(2\ell-S)=S-\ell}$ terms add up to $S-(2\ell-S)=2(S-\ell)$, so their average is~2. Label these terms $a_1,\dots,a_{S-\ell}$, assuming that $1\le a_1\le \cdots\le a_{S-\ell}$. 
Due to this nondecreasing order, the sequence $a_1,(a_1+a_2)/2,(a_1+a_2+a_3)/3,\dots$ of arithmetic means is nondecreasing, hence these means are all at most~2. In other words, $a_1+\cdots+a_i\le 2i$ for all $i=1,\dots,S-\ell$. Suppose that $b_1, \dots,b_k$ are \pin/s such that $b_1+\cdots+b_i\le 2i$ for all $i=1,\dots,k$. Denoting $S_k=\sum_{i=1}^{k}b_i$, we prove by induction on~$k$ that the sumset of the sequence $1b_1\dots b_k$ is $\{1,2,\dots,S_k+1\}$. By {\em sumset} we mean the set of integers representable as a nonempty subsequence sum. The base $k=1$ is clear. For the inductive step, let $\Sigma_{k-1}$ and $\Sigma_{k}$ be the sumsets of $1b_1\dots b_{k-1}$ and $1b_1\dots b_{k-1}b_{k}$, respectively. Now $\Sigma_{k-1}=\{1,2,\dots,S_{k-1}+1\}$ by the induction hypothesis, hence $\Sigma_{k}=\{1,2,\dots,S_{k-1}+1\}\cup\{b_{k},b_{k}+1,\dots,b_{k}+S_{k-1}+1\}$. Since $b_{k}+S_{k-1}=S_k$, it suffices to check that $b_k\le S_{k-1}+2$ which is equivalent to $S_k\le 2S_{k-1}+2$. The latter is true as $2S_{k-1}+2\ge 2(k-1)+2=2k\ge S_k$. The induction is complete. Going back to the proof of~b), we infer from the previous paragraph that the sequence $1a_1\dots a_{S-\ell}$ has sumset $\{1,2,\dots,2(S-\ell)+1\}$. Take an arbitrary $x\in [2\ell-S,S]$ and set $y=x-(2\ell-S-1)$. Since $1\le y\le 2(S-\ell)+1$, one can express~$y$ as a nonempty subsequence sum of $1a_1\dots a_{S-\ell}$. In view of~a), adding $2\ell-S-1$ to both sides of this representation shows that $x$ equals the sum of at least $2\ell-S$~terms of the original sequence~$\alpha$. \qed \end{pf} \section{The main result} \label{MainResult} We are about to characterize all sufficiently long $n$-\zfs/s in~$\Zn/$. Up to similitude, a sequence of length greater than~$3n/2-1$ is $n$-\zf/ if and only if it can be divided into two sequences $\alpha$ and~$\beta$ satisfying $L(\alpha)<n$ and $L(1-\beta)<n$. (Recall that $L(\omega)$ denotes the sum of the \lpr/ of the sequence~$\omega$.) 
There exist sequences of any length less than $2n-1$ that are ``separable'' in the sense just described. We discuss them before the main theorem in order to indicate that most of their basic properties do not depend on whether or not the sequence is ``long.'' A couple of technical remarks will be necessary. Let $\alpha$ and~$\beta$ be sequences in~$\Zn/$ such that $L(\alpha)<n$ and $L(1-\beta)<n$. Because $\v{0}=n$, note that $a\ne 0$ for $a\in\alpha$ and $b\ne 1$ for $b\in\beta$. We will need the observations that \begin{equation}\label{technical} \v{-b}=n-\v{b}\quad\text{and}\quad \v{1-b}=1+\v{-b}\qquad\text{for each $b\in\Zn/$, $b\ne 0$.} \end{equation} By~(\ref{technical}), for each sequence $\beta$ in~$\Zn/$ one can write \begin{equation}\label{bars} L(1-\beta)=\sum_{b\in\beta,b\ne 0}\v{-b}+|\beta|. \end{equation} In what follows, the empty sequence is assumed to have sum~$0$, in~$\Z/$ and in~$\Zn/$. \begin{prop}\label{separable} Let $n$ and $k$ be integers such that $0<k<n$. Suppose that the sequences $\alpha$ and~$\beta$ in~$\Zn/$ satisfy the conditions $|\alpha|+|\beta|=n-1+k$, $L(\alpha)<n$ and $L(1-\beta)<n$. Then: \begin{enumerate} \item[a)] The union $\alpha\cup\beta$ is $n$-\zf/. \item[b)] $k\le |\alpha|<n$, $k\le |\beta|<n$ and $\v{b}-\v{a}\ge k$ for all $a\in\alpha$, $b\in\beta$. In particular $a\ne b$ for all $a\in\alpha$, $b\in\beta$. \item[c)] The multiplicities $u$ and $v$ of \kern1pt$1$ and \kern1pt$0$ in~$\alpha\cup\beta$ satisfy \[ u+v\ge 2k,\quad \max(u,v)\ge k,\quad \min(u,v)\ge 2k-n+1. \] The equality $u+v=2k$ is attained if and only if $\alpha=1^{2p-n+1}2^{n-1-p}$ and $\beta=0^{2q-n+1}(-1)^{n-1-q}$, for integers $p$ and $q$ such that $(n-1)/2\le p<n$, $(n-1)/2\le q<n$ and $p+q=n-1+k$. The equality $\max(u,v)=k$ is attained if and only if $n$ and $k$ have different parity and $\alpha=1^{k}2^{(n-1-k)/2}$, $\beta=0^{k}(-1)^{(n-1-k)/2}$. 
\looseness=-1 \item[d)] For $k\ge (n{-}1)/2$, the highest multiplicity of a term in~$\alpha\cup\beta$ is $\max(u,v)$. \end{enumerate} \end{prop} \begin{pf} a) Consider a zero subsequence~$\gamma$ of~$\alpha\cup\beta$. Let $\gamma$ contain $r$~terms $a_1,\dots ,a_r$ from~$\alpha$, $s$~nonzero terms $b_1,\dots,b_s$ from~$\beta$, and several zeros, from~$\beta$ again. Because the sum of $\gamma$ is zero in~$\Zn/$, the integers $\sum_{i=1}^r\v{a_i}$ and $\sum_{j=1}^s\v{-b_j}$ are congruent modulo~$n$. Also $0\le \sum_{i=1}^r\v{a_i}\le L(\alpha)<n$ and, by~(\ref{bars}), \[0\le \sum_{j=1}^s\v{-b_j}\le \sum_{b\in\beta,b\ne 0}\v{-b}=L(1-\beta)-|\beta|<n-|\beta|\le n. \] Hence $\sum_{i=1}^r\v{a_i}=\sum_{j=1}^s\v{-b_j}$. Therefore $r\le \sum_{i=1}^r\v{a_i}=\sum_{j=1}^s\v{-b_j} <n-|\beta|$, implying $r+|\beta|<n$. Since $|\gamma|\le r+|\beta|$, we infer that $\alpha\cup\beta$ is $n$-\zf/. b) The first two inequalities are immediate, because $|\alpha|+|\beta|=n-1+k$, $|\alpha|\le L(\alpha)<n$ and $|\beta|=|1-\beta|\le L(1-\beta)<n$. To show that $\v{b}-\v{a}\ge k$ for $a\in\alpha$ and $b\in\beta$, denote $M=\max_{a\in\alpha}\v{a}+\max_{b\in\beta}\v{1-b}$. Then \[ 2(n-1)\ge L(\alpha)+L(1-\beta)\ge M+(|\alpha|-1)+(|\beta|-1)=M+n-3+k. \] This yields $M\le n+1-k$; thus $\v{a}+\v{1-b}\le n+1-k$ for all $a\in\alpha$, $b\in\beta$. If $b\ne 0$ then $\v{1-b}=1+\v{-b}=1+n-\v{b}$ by~(\ref{technical}), so $\v{a}+\v{1-b}\le n+1-k$ becomes $\v{b}-\v{a}\ge k$. The same conclusion holds if $b=0$, as then $\v{b}=n$, $\v{1-b}=1$. c) We have $n-1 \ge L(\alpha)\ge u+2(|\alpha|-u)=2|\alpha|-u$, since $\v{a}\ge 2$ for $a\ne 1$. Similarly, $\v{1-b}\ge 2$ for $b\ne 0$, so that $n-1 \ge L(1-\beta)\ge v+2(|\beta|-v)=2|\beta|-v$. Adding up yields $2(n-1)\ge 2(|\alpha|+|\beta|)-(u+v)=2(n-1+k)-(u+v)$. It follows that $u+v\ge 2k$, so $\max(u,v)\ge k$. Clearly $\max(u,v)\le n-1$ by~b), which implies $\min(u,v)\ge 2k-n+1$. 
The equality $u+v=2k$ occurs if and only if $n-1=L(\alpha)=2|\alpha|-u$ and $n-1 =L(1-\beta)=2|\beta|-v$. These conditions imply $u=2|\alpha|-n+1$, $v=2|\beta|-n+1$; also $\v{a}=2$ for $a\in\alpha$, $a\ne 1$ and $\v{1-b}=2$ for $b\in\beta$, $b\ne 0$. In particular $|\alpha|\ge (n-1)/2$, $|\beta|\ge (n-1)/2$. So setting $p=|\alpha|$, $q=|\beta|$ and taking~b) into account, we obtain $(n-1)/2\le p<n$, $(n-1)/2\le q<n$, $p+q=n-1+k$ and $\alpha=1^{2p-n+1}2^{n-1-p}$, $\beta=0^{2q-n+1}(-1)^{n-1-q}$. The converse is easy to check; we note only that the last two sequences are well-defined whenever $(n-1)/2\le p<n$, $(n-1)/2\le q<n$. If $\max(u,v)=k$ then ${u=v=k}$ in view of $u+v\ge 2k$, so $u+v=2k$. The conclusions of the last paragraph imply ${\alpha=1^{k}2^{(n-1-k)/2}}$, $\beta=0^{k}(-1)^{(n-1-k)/2}$. These sequences are well-defined only if $k\not\equiv n\ \text{(mod 2)}$. The converse is clear. d) We have $u+v\ge 2k$ by~c), so the number of terms different from $1$ and $0$ in~$\alpha\cup\beta$ is at most $(n-1+k)-2k=n-1-k$. Now it suffices to observe that $n-1-k\le k$ for $k\ge (n-1)/2$, and that $\max(u,v)\ge k$ by~c). \qed \end{pf} One can see that sequences ``separable" in the sense of Proposition~\ref{separable} are not just $n$-\zf/ but have an interesting general structure. This is unexpected at first glance as $\alpha$ and $\beta$ do not seem to be related in any way. However, while the properties listed in Proposition~\ref{separable} are fairly simple to derive, it is less trivial to establish that each sufficiently long $n$-\zfs/ in~$\Zn/$ is ``separable." The next theorem proves that length greater than~${3n/2{-}1}$ is enough to guarantee this. Moreover, shorter $n$-\zfs/s are not necessarily ``separable." These conclusions form the essential part of the article. 
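For small parameters, the hypotheses of Proposition~\ref{separable} and conclusion~a) can be confirmed by brute force. Below is a minimal Python sketch (not part of the proof; the helper names are ours, and the least positive residue of $0$ is taken to be $n$, in accordance with $\v{0}=n$):

```python
from itertools import combinations

def n_zero_free(seq, n):
    """Brute force: no subsequence of length n sums to 0 mod n."""
    return all(sum(c) % n != 0 for c in combinations(seq, n))

def L(seq, n):
    """Sum of least positive residues; the residue of 0 counts as n."""
    return sum(x % n if x % n != 0 else n for x in seq)

# n = 7, k = 4: alpha = 1^4 2 and beta = 0^4 (-1) give |alpha| + |beta| = 10 = n - 1 + k
n, k = 7, 4
alpha = [1, 1, 1, 1, 2]
beta = [0, 0, 0, 0, -1]
assert L(alpha, n) < n                      # L(alpha) = 6
assert L([1 - b for b in beta], n) < n      # L(1 - beta) = 6
assert n_zero_free(alpha + beta, n)         # part a): the union is 7-zero-free
```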
\begin{thm}\label{characterization} A sequence of length greater than~$3n/2{-}1$ in the cyclic group~$\Zn/$ does not contain an $n$-term zero subsequence if and only if it is similar to the union of two sequences $\alpha$ and $\beta$ in~$\Zn/$ such that \begin{equation*} L(\alpha)<n\qquad\text{and}\qquad L(1-\beta)<n. \end{equation*} \end{thm} \begin{pf} The sufficiency follows from Proposition~\ref{separable}~a). For the necessity, let $\gamma$ be an $n$-\zfs/ of length greater than~$3n/2{-}1$ in~$\Zn/$. Translations in~$\Zn/$ do not affect sums of length~$n$, so one may assume that 0 is a term of~$\gamma$ with maximum multiplicity~$v$. Then Proposition~\ref{gao} shows that each zero subsequence of~$\gamma$ has length less than~$n$. In particular $v<n$. Select a zero subsequence~$\sigma$ of $\gamma$ with nonzero terms and of maximum length; $\sigma$~may be the empty sequence. This choice implies that the remaining nonzero terms of~$\gamma$ form a \zfs/~$\tau$. By the remark above, the lengths of~$\sigma$ and~$\tau$ satisfy $|\sigma|<n-v$ and $|\tau|>(3n/2{-}1)-(n{-}1)=n/2$. Therefore Theorem~\ref{zero-free} applies to the \zfs/~$\tau$. Hence multiplying~$\tau$ by a certain integer coprime to~$n$ yields an equivalent sequence with sum of the \lpr/ less than~$n$. We multiply all remaining terms of~$\gamma$ by the same integer, which preserves the zero sums of any length. So there is no loss of generality in assuming that $\gamma=0^v\sigma\tau$, where $\sigma$ is a zero subsequence of~$\gamma$ with nonzero terms and of maximum length, and $\tau$ is a \zfs/ of length greater than~$n/2$ satisfying $L(\tau)<n$. Let $\sigma=1^wb_1\dots b_q$, where $b_1,\dots,b_q$ are all terms of $\sigma$ different from~$1$. The following inequality, stronger than $L(\tau)<n$, implies the conclusion directly: \begin{equation}\label{claim2} L(\tau)+\sum_{j=1}^q\v{-b_j}<n. \end{equation} Indeed, assume~(\ref{claim2}) is true, and let $\alpha=1^w\tau$, $\beta=0^vb_1\dots b_q$.
Then $\gamma=\alpha\cup\beta$; in addition, $L(\alpha)<n$ and $L(1-\beta)<n$. Firstly, $w\equiv \sum_{j=1}^q\v{-b_j}\ \text{(mod~$n$)}$, since $\sigma$ has sum zero. Also $0\le\sum_{j=1}^q\v{-b_j}<n$ by~(\ref{claim2}), and clearly $0\le w<n$. Therefore $w=\sum_{j=1}^q\v{-b_j}$. So (\ref{claim2}) can be written as $L(\tau)+w<n$, which is the inequality~$L(\alpha)<n$. Furthermore, we obtain $|\sigma|=w+q=\sum_{j=1}^q\v{-b_j}+q$, and because $|\sigma|<n-v$, it follows that $\sum_{j=1}^q\v{-b_j}+q<n-v$. This is the same as $\sum_{b\in\beta,b\ne 0}\v{-b}+|\beta|<n$. By~(\ref{bars}), the latter means that $L(1-\beta)<n$. \looseness=-1 So it suffices to prove~(\ref{claim2}), which is clear if $\sigma$ is the empty sequence. Hence let $\sigma\ne\emptyset$, implying $q\ge 1$ ($\sigma$ cannot have only terms equal to~$1$). For the sake of clarity, denote $|\tau|=\ell>n/2$, $L(\tau)=S<n$ and $\v{-b_j}=v_j$, $j=1,\dots,q$. Note that $2\ell-S\ge 2\ell-(n-1)\ge 2$ as $\ell>n/2$. Thus Proposition~\ref{general} applies to the sequence~$\v{\tau}$ of the \lpr/ of~$\tau$. Also, $1\le v_j<n{-}1$ by the choice of $b_1,\dots,b_q$. The proof of~(\ref{claim2}) is based on the following observation. \begin{quote} \em Suppose that $m$~terms $v_{j_1},\dots ,v_{j_m}$ of the sequence $v_{1}\dots v_{q}$ are such that the integer $T=n-(v_{j_1}+\cdots+v_{j_m})$ satisfies $1<T\le S$. Then $m\ge 2\ell-S$ if $2\ell-S\le T\le S$ and $m\ge T$ if $1<T<2\ell-S$. \end{quote} Indeed, if $T$ represents the congruence class~$t$ modulo~$n$ then $t=\sum_{i=1}^{m}b_{j_i}$. Let $2\ell-S\le T\le S$. By Proposition~\ref{general}, there is a subsequence~$\omega$ of~$\tau$ with length at least $2\ell-S$ such that $T=\sum_{c\in\omega}\v{c}$. Hence $\sum_{c\in\omega}c=t=\sum_{i=1}^{m}b_{j_i}$. This implies $m\ge|\omega|\ge 2\ell-S$, as $m<|\omega|$ would yield a zero subsequence of $\gamma$ with nonzero terms which is longer than $\sigma$, obtained upon replacing $b_{j_1}\dots b_{j_m}$ by~$\omega$.
Similarly, if $1<T<2\ell-S$ then $T$ can be expressed as the sum of $T$ terms equal to~$1$ of~$\v{\tau}$. (There are at least~$2\ell-S$ ones in $\v{\tau}$ by Proposition~\ref{general}.) Now the same argument as above gives $m\ge T$, by the maximality of~$\sigma$. It follows from the observation that $n-v_j>S$, $j=1,\dots,q$. Indeed, if $1<n-v_j\le S$ for some~$j$ then $1\ge 2\ell-S$ or $1\ge n-v_j$, neither of which is true. Therefore $1\le v_j<n-S$, $j=1,\dots,q$. Passing on to the proof of~(\ref{claim2}), suppose that it is false. Then there are subsequences of $v_1\dots v_q$ whose sum is at least~$n-S$, for instance $v_1 \dots v_q$ itself. Without loss of generality, let $v_1\dots v_m$ be such a (nonempty) subsequence of minimum length~$m$. So $T=n-\sum_{j=1}^{m}v_j\le S$ but $T+v_j>S$ for all $j=1,\dots,m$. Note that $T>1$ in view of the previous paragraph, because $v_m<n-S$ yields $T>S-v_m>S-(n-S)=2S-n\ge 2\ell-n\ge 1$. Let $2\ell-S\le T\le S$. Then $m\ge 2\ell-S$ by the observation above. Hence \[ S+1\le n-\sum_{j=1}^{m-1}v_j \le n-(m-1)\le n-(2\ell-S-1)=(n-2\ell)+S+1, \] implying $n\ge 2\ell$, which is a contradiction. Next, let $1<T<2\ell-S$. Now the observation gives $m\ge T$. Recalling that $T+v_j>S$, we have $v_j\ge S+1-T>0$ for $j=1,\dots,m$, implying \[ n=T+\sum_{j=1}^mv_j\ge T+m(S+1-T)\ge T+T(S+1-T)=T(S+2-T). \] Consider the quadratic function $g(t)=t^2-(S+2)t+n$. We obtained $g(T)\ge 0$ for some $T\in\{2,\dots,2\ell-S-1\}$. But the maximum of~$g$ on $\{2,\dots,2\ell-S-1\}$ is $g(2)=n-2S$, and $n-2S\le n-2\ell<0$. This is a contradiction again; claim~(\ref{claim2}) follows, concluding the main argument. \qed \end{pf} Theorem~\ref{characterization} establishes the desired characterization, in a form hopefully providing general insight into the structure of $n$-\zfs/s.
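The characterization can also be confirmed exhaustively for a single small modulus. The sketch below (function names ours) checks, in the cyclic group of order $6$, that every $6$-zero-free sequence of length $9>3\cdot6/2-1$ is similar to a union $\alpha\cup\beta$ with $L(\alpha)<6$ and $L(1-\beta)<6$; similarity is implemented as the affine maps $x\mapsto cx+d$ with $c$ coprime to the modulus:

```python
from itertools import combinations, combinations_with_replacement
from math import gcd

def zero_free(seq, n):
    """No subsequence of length n sums to 0 mod n."""
    return all(sum(c) % n != 0 for c in combinations(seq, n))

def separable(seq, n):
    """Is some similar copy of seq a union alpha ∪ beta with
    L(alpha) < n and L(1 - beta) < n (L = sum of least positive residues)?"""
    v = lambda x: x % n if x % n else n          # least positive residue, v(0) = n
    idx = range(len(seq))
    for c in (c for c in range(1, n) if gcd(c, n) == 1):
        for d in range(n):                       # similar copy x -> c*x + d
            img = [(c * x + d) % n for x in seq]
            va = [v(x) for x in img]
            vb = [v(1 - x) for x in img]
            tb = sum(vb)
            for r in range(len(seq) + 1):        # try every split into alpha, beta
                for A in combinations(idx, r):
                    if sum(va[i] for i in A) < n and tb - sum(vb[i] for i in A) < n:
                        return True
    return False

# for n = 6, every 6-zero-free sequence of length 9 is separable
n = 6
for seq in combinations_with_replacement(range(n), 9):
    if zero_free(seq, n):
        assert separable(seq, n)
```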
On the other hand, the practically important consequence of the theorem is that each $n$-\zfs/ of length~${n-1+k}$ in~$\Zn/$, where ${n/2<k<n}$, is similar to a sequence satisfying the conclusions of Proposition~\ref{separable}. Both Theorem~\ref{characterization} and Proposition~\ref{separable} are needed for a really clear picture of the ``long" $n$-\zfs/s in~$\Zn/$. The next observation adds one more detail to this picture. The affine map $x\mapsto 1-x$ interchanges~0 and~1 and transforms arbitrary sequences $\alpha$ and $\beta$ into $\alpha_1=1-\alpha$ and $\beta_1=1-\beta$, respectively. If the inequalities $L(\alpha)<n$ and $L(1-\beta)<n$ hold true, they can be written as $L(1-\alpha_1)<n$ and $L(\beta_1)<n$. So if a sequence $\gamma$ is similar to $\alpha\cup\beta$, it is also similar to $\alpha_1\cup\beta_1$, a sequence with all properties from Proposition~\ref{separable}, in which the multiplicities of~0 and~1 are interchanged. Therefore one can assume additionally that $u\le v$. For $k\ge (n-1)/2$, Proposition~\ref{separable}~d) then implies that $0$ is a term of highest multiplicity in $\alpha\cup\beta$. The conditions $L(\alpha)<n$ and $L(1-\beta)<n$ can be expanded to obtain an explicit form of the characterization established in Theorem~\ref{characterization}. Up to certain details, this explicit description has the same shape as the one in~\cite{BialostockiDierkerGrynkiewiczLotspeich} of the $n$-\zfs/s with length $n-1+k$, for $k$ roughly greater than~$3n/4$. It is worth noting that the range~$n/2<k<n$ for~$k$ is the natural scope of such a characterization. There are $n$-\zfs/s of length $n-1+\lfloor n/2\rfloor$ that do not obey the conclusion of Theorem~\ref{characterization}. Here are examples. For an odd $n\ge 9$, consider the sequence $0^{n-1}2^{(n-5)/2}3^2$. Its length is $n-1+(n-1)/2=n-1+\lfloor n/2\rfloor$. For an even $n\ge 6$, consider the sequence $0^{n-1}2^{n/2-1}3$, of length $n-1+n/2=n-1+\lfloor n/2\rfloor$. Both sequences are $n$-\zf/. 
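For these two instances, both the zero-freeness and the failure of the conclusion of Theorem~\ref{characterization} can be verified by exhaustive search over all affine images and all splittings. A sketch (helper names ours):

```python
from itertools import combinations
from math import gcd

def zero_free(seq, n):
    """No subsequence of length n sums to 0 mod n."""
    return all(sum(c) % n != 0 for c in combinations(seq, n))

def separable(seq, n):
    """Search all similar copies of seq for a split alpha ∪ beta with
    L(alpha) < n and L(1 - beta) < n (L = sum of least positive residues)."""
    v = lambda x: x % n if x % n else n          # least positive residue, v(0) = n
    idx = range(len(seq))
    for c in (c for c in range(1, n) if gcd(c, n) == 1):
        for d in range(n):
            img = [(c * x + d) % n for x in seq]
            va = [v(x) for x in img]
            vb = [v(1 - x) for x in img]
            tb = sum(vb)
            for r in range(len(seq) + 1):
                for A in combinations(idx, r):
                    if sum(va[i] for i in A) < n and tb - sum(vb[i] for i in A) < n:
                        return True
    return False

# 0^{n-1} 2^{(n-5)/2} 3^2 for odd n = 9, and 0^{n-1} 2^{n/2-1} 3 for even n = 6:
# both are n-zero-free of length n - 1 + floor(n/2), yet not separable
for n, seq in [(9, [0] * 8 + [2, 2, 3, 3]), (6, [0] * 5 + [2, 2, 3])]:
    assert zero_free(seq, n)
    assert not separable(seq, n)
```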
Suppose that either of them is similar to a union $\alpha\cup\beta$ where $\alpha$ and $\beta$ satisfy~$L(\alpha)<n$, $L(1-\beta)<n$. Because $k\ge (n-1)/2$, $\alpha$ and $\beta$ can be chosen so that $0$ is a term of highest multiplicity~$v$ in~$\alpha\cup\beta$. Then $v=n-1$, so $\beta=0^{n-1}$. It follows that $2^{(n-5)/2}3^2$ or $2^{n/2-1}3$ is equivalent to $\alpha$, a sequence satisfying $L(\alpha)<n$. However, one can check that the latter is not true. \section{Terms of high multiplicity} \label{Highmult} Let $n/2<k<n$, and let $\gamma$ be an $n$-\zfs/ of length $n-1+k$ in $\Zn/$. It follows from Theorem~\ref{characterization} and Proposition~\ref{separable} that $\gamma$ contains a term of multiplicity at least $k$, and two distinct terms of combined multiplicity at least $2k$. Now we obtain precise forms of these statements. Let $\alpha\cup\beta$ be a sequence similar to~$\gamma$ and satisfying the conclusions of Proposition~\ref{separable}. We may also assume that $0$ is a term of maximum multiplicity~$v$ in~$\alpha\cup\beta$, as explained in the previous section. By Proposition~\ref{separable}~c), the equality $v=k$ holds if and only if $k\not\equiv n\ \text{(mod 2)}$ and $\alpha\cup\beta=0^k1^k2^{(n-1-k)/2}(-1)^{(n-1-k)/2}$. This is the unique sequence satisfying $v=k$, up to affine maps and rearrangement of terms. Suppose now that $k\equiv n\ \text{(mod 2)}$. Then $v\ge k+1$ by Proposition~\ref{separable}~c) again. The equality $v=k+1$ can be attained, for instance by the sequence $0^{k+1}1^{k-1}2^{(n-k)/2}(-1)^{(n-k-2)/2}$, which is well-defined and $n$-\zf/ by Proposition~\ref{separable}~a) (setting $\alpha=1^{k-1}2^{(n-k)/2}$, $\beta=0^{k+1}(-1)^{(n-k-2)/2}$). This proves the following corollary. \begin{cor}\label{bestmult} Let $n$ and~$k$ be integers satisfying $n/2<k<n$.
Each $n$-\zfs/ of length $n-1+k$ in~$\Zn/$ contains a term of multiplicity at least~$k$, if $n$ and $k$ are of different parity, and at least~$k+1$, if $n$ and $k$ are of the same parity. Both estimates are best possible. \end{cor} The sum of the two highest multiplicities has probably been the most widely explored question concerning $n$-\zfs/s in~$\Zn/$. We are now in a position to resolve this question completely for each length $n-1+k$ where $n/2<k<n$. Indeed, the lower bound $u+v\ge 2k$ for this combined multiplicity follows from the discussion above. Now let us take another look at the examples for the maximum multiplicity of a single term. In both possible cases, $k\not\equiv n\ \text{(mod 2)}$ and $k\equiv n\ \text{(mod 2)}$, it is easy to see that 0 and 1 are the two terms with highest combined multiplicity, and the value of this multiplicity is~$2k$. So we proceed with one more structural conclusion. \begin{cor}\label{bestcombinedmult} Let $n$ and~$k$ be integers satisfying $n/2<k<n$. Each $n$-\zfs/ of length $n-1+k$ in~$\Zn/$ contains two terms of combined multiplicity at least~$2k$, and this estimate is best possible. \end{cor} Naturally, some well-known results about the structure of the $n$-\zfs/s with a certain length are now immediate. For example, let us consider the lengths~$2n{-}2$ (as in~\cite{PetersonYuster}, \cite{BialostockiDierker}) and~$2n{-}3$ (as in~\cite{FloresOrdaz}). By the discussion above, any $n$-\zfs/ of length~$2n{-}2$ (i.~e. $n-1+k$ with $k=n{-}1$) is similar to~$0^v1^u$, where $v=n{-}1$ (as $v\ge k$) and $u=n{-}1$ (as $u+v\ge 2k$). Here we assume $n>2$ to ensure that $k>n/2$; however, the same conclusion holds for $n=2$ as well. Similarly, for $n>4$, any $n$-\zfs/ of length~$2n{-}3$ (i.~e. $n-1+k$ with $k=n{-}2$) is similar to~$0^v1^u\gamma$, where $v$ is the maximum multiplicity of a term and $u+v\ge 2k=2n-4$. Since $k=n{-}2\equiv n\ \text{(mod 2)}$, Corollary~\ref{bestmult} implies $v\ge k{+}1=n{-}1$.
Hence $v=n{-}1$ and $u=n{-}2$ or $u=n{-}3$. Now it is easy to infer that each $n$-\zfs/ of length~$2n{-}3$, $n>4$, is similar to~$0^{n-1}1^{n-2}$ or~$0^{n-1}1^{n-3}2$. For $n=3,4$, this conclusion can be checked directly. For~$n=2$, the only $n$-\zfs/ of length $2n-3=1$ is similar to the one-term sequence~$0$. \section{The $g(n,k)$ function} \label{gnk} For \pin/s $n$ and $k$, $k\le n$, let $g(n,k)\ge k$ be the least integer such that each sequence in~$\Zn/$ with at least $k$ distinct terms and length $g(n,k)$ contains an $n$-term zero sum. The function $g(n,k)$ was introduced by Bialostocki and Lotspeich in~\cite{BialostockiLotspeich}. The structural results about $n$-\zfs/s of lengths~$2n{-}2$ and~$2n{-}3$ (such as mentioned after Corollary~\ref{bestcombinedmult}) imply $g(n,2)=2n{-}1$ for $n\ge 2$ and $g(n,3)=2n{-}2$ for~$n\ge 4$. It is easy to see that $g(3,3)=3$. The values of $g(n,k)$ for $4\le k\le \sqrt{n+4}+1$ were found in~\cite{BialostockiDierkerGrynkiewiczLotspeich}: If $k\ge 4$ is even and $n\ge k^2-2k-4$, or if $k\ge 5$ is odd and $n\ge k^2-2k-3$, then \begin{equation*} g(n,k)=2n-1-\left\lfloor\left(\frac{k-1}{2}\right)^2\right\rfloor. \end{equation*} For the lower bound, the following examples were used. In the case of an even~$k\ge 2$, consider the sequence \[ -\left( \frac{k-2}{2}\right)\dots(-1)(0)^{n-(k^2+2k)/8}(1)^{n-(k^2+2k)/8}(2)\dots\left(\frac k2\right); \] if $k\ge 3$ is odd, consider the sequence \[ -\left( \frac{k-3}{2}\right)\dots(-1)(0)^{n-(k^2-1)/8}(1)^{n-(k^2+4k+3)/8}(2)\dots\left(\frac {k+1}{2}\right). \] These examples are valid under the weaker restrictions $n\ge (k^2+2k)/8+1$ when $k$ is even, and $n\ge (k^2+4k+3)/8+1$ when $k$ is odd. The multiplicities of~0 and~1 in both sequences are positive integers. By Proposition~\ref{separable}~a), both sequences are $n$-\zf/, and each one contains $k$ distinct terms. 
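The even-$k$ example is easy to check directly for a small instance, say $k=4$ and $n=10$, where it becomes $(-1)\,0^7\,1^7\,2$. A brute-force sketch (the helper name is ours):

```python
from itertools import combinations

def has_n_zero_sum(seq, n):
    """Does some subsequence of length n sum to 0 mod n?"""
    return any(sum(c) % n == 0 for c in combinations(seq, n))

# even k: the sequence -(k-2)/2 ... (-1) 0^{n-(k^2+2k)/8} 1^{n-(k^2+2k)/8} 2 ... k/2,
# instantiated at k = 4, n = 10, i.e. (-1) 0^7 1^7 2
n, k = 10, 4
seq = [-1] + [0] * 7 + [1] * 7 + [2]
assert len(seq) == 2 * n - 2 - (k - 1) ** 2 // 4     # length g(n,k) - 1
assert len({x % n for x in seq}) == k                # k distinct terms
assert not has_n_zero_sum(seq, n)                    # 10-zero-free
```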
Here we prove that the function $g(n,k)$ obeys the same formula as above under the weaker constraints $4\le k\le\sqrt{2n-1}+1$. In this range the examples above still provide the lower bound $g(n,k)\ge 2n-1-\lfloor((k{-}1)/2)^2\rfloor$. \begin{thm} \label{brakemeier} Let $n\ge k$ be integers such that ${4\le k\le\sqrt{2n-1}+1}$. Then \begin{equation*} g(n,k)=2n-1-\left\lfloor\left(\frac{k-1}{2}\right)^2\right\rfloor. \end{equation*} \end{thm} \begin{pf} As already mentioned, we need to prove only the upper bound. The condition $k\le\sqrt{2n-1}+1$ is equivalent to $n-\lfloor((k{-}1)/2)^2\rfloor>n/2$. Also $k\ge 4$, so the integer $\ell=n-\lfloor((k{-}1)/2)^2\rfloor$ satisfies $n/2<\ell<n$. Consider any $n$-\zfs/ $\gamma$ of length $n-1+\ell$ in~$\Zn/$. It suffices to prove that the number of distinct terms in~$\gamma$ is less than $k$; then the definition of $g(n,k)$ implies $g(n,k)\le n-1+\ell=2n-1-\lfloor((k{-}1)/2)^2\rfloor$. Let $\alpha\cup\beta$ be a sequence similar to $\gamma$, where $\alpha$ and~$\beta$ satisfy the conditions in Proposition~\ref{separable}, with $k$ replaced by~$\ell$. Let there be $x$ distinct terms in~$\alpha$ and $y$ distinct terms in~$\beta$. Then Proposition~\ref{separable}~b) shows that the number of distinct terms in $\gamma$ is $z=x+y$. The sum $L(\alpha)$ does not increase upon replacing $x$ distinct summands in it by the least possible values $1,2,\dots,x$, and all remaining summands by~$1$. Therefore \[ 1+2+\cdots+x+(|\alpha|-x)\le L(\alpha)\le n-1, \] which gives $(x^2-x)/2\le n-1-|\alpha|$. Likewise, noticing that there are $y$ distinct terms in $1-\beta$, we obtain $(y^2-y)/2\le n-1-|\beta|$. Hence \[ \frac12(x^2-x)+\frac12(y^2-y)\le 2(n-1)-\left(|\alpha|+|\beta|\right). \] Because $|\alpha|+|\beta|=n-1+\ell$, the right-hand side expression is equal to $n-1-\ell=\lfloor((k{-}1)/2)^2\rfloor-1$.
On the other hand, \[ \frac12(x^2-x)+\frac12(y^2-y)\ge \left(\frac{x+y}{2}\right)^2-\frac{x+y}{2}=\left(\frac{z}{2}\right)^2-\frac{z}{2} =\left(\frac{z-1}{2}\right)^2-\frac 14. \] Thus $((z{-}1)/2)^2-1/4\le \lfloor((k{-}1)/2)^2\rfloor -1$ which implies the desired $z<k$ and completes the proof.\qed \end{pf}
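For the smallest instance covered by the theorem, $n=5$ and $k=4$ (where $k=\sqrt{2n-1}+1$ exactly), the value $g(5,4)=7$ can be confirmed by exhaustive search (a sketch; helper names ours):

```python
from itertools import combinations, combinations_with_replacement

def has_n_zero_sum(seq, n):
    """Does some subsequence of length n sum to 0 mod n?"""
    return any(sum(c) % n == 0 for c in combinations(seq, n))

n, k = 5, 4
g = 2 * n - 1 - (k - 1) ** 2 // 4    # the predicted value g(5, 4) = 7

# upper bound: every sequence of length g with at least k distinct terms
# contains a 5-term zero sum
for seq in combinations_with_replacement(range(n), g):
    if len(set(seq)) >= k:
        assert has_n_zero_sum(seq, n)

# lower bound: the even-k example (-1) 0^2 1^2 2 has length g - 1,
# k distinct terms, and no 5-term zero sum
example = [-1, 0, 0, 1, 1, 2]
assert len({x % n for x in example}) == k
assert not has_n_zero_sum(example, n)
```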
{ "timestamp": "2006-04-16T21:00:31", "yymm": "0604", "arxiv_id": "math/0604356", "language": "en", "url": "https://arxiv.org/abs/math/0604356", "abstract": "A sequence in the additive group ${\\mathbb Z}_n$ of integers modulo $n$ is called $n$-zero-free if it does not contain subsequences with length $n$ and sum zero. The article characterizes the $n$-zero-free sequences in ${\\mathbb Z}_n$ of length greater than $3n/2-1$. The structure of these sequences is completely determined, which generalizes a number of previously known facts. The characterization cannot be extended in the same form to shorter sequence lengths. Consequences of the main result are best possible lower bounds for the maximum multiplicity of a term in an $n$-zero-free sequence of any given length greater than $3n/2-1$ in ${\\mathbb Z}_n$, and also for the combined multiplicity of the two most repeated terms. Yet another application is finding the values in a certain range of a function related to the classic theorem of Erdős, Ginzburg and Ziv.", "subjects": "Combinatorics (math.CO); Number Theory (math.NT)", "title": "Long $n$-zero-free sequences in finite cyclic groups", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9790357616286122, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.7096739229187201 }
https://arxiv.org/abs/1506.03435
Nielsen-Schreier implies the finite Axiom of Choice
We present a new proof that the statement 'every subgroup of a free group is free' implies the Axiom of Choice for finite sets.
\section{Introduction} In 1921, Nielsen \cite{Nielsen1921NS} proved that every finitely generated subgroup of a free group is free. This result was generalised to arbitrary subgroups by Schreier \cite{Schreier1927Untergruppen} in 1927, giving us the following statement. \begin{principle}{NS}{Nielsen-Schreier} If $F$ is a free group and $K\leq F$ is a subgroup, then $K$ is a free group. \end{principle} Since every known proof of $\mathsf{NS}$ uses the Axiom of Choice, it is natural to ask whether the two are equivalent. The first step was made by L\"auchli \cite{Lauchli1962Auswahlaxiom}, who showed that $\mathsf{NS}$ cannot be proved in $\mathsf{ZF}$ set theory with atoms. Jech and Sochor's embedding theorem \cite{Jech2008Choice} allows this result to be transferred to standard $\mathsf{ZF}$ set theory. It was improved in 1985 by Howard \cite{Howard1985Subgroups}, who showed that $\mathsf{NS}$ implies $\mathsf{AC}_{fin}$, the Axiom of Choice for finite sets: \begin{principle}[fin]{\mathsf{AC}}{Axiom of Choice for finite sets} Every set of non-empty finite sets has a choice function. \end{principle} Another Choice principle used in this article is the Axiom of Choice for pairs: \begin{principle}[2]{\mathsf{AC}}{Axiom of Choice for pairs} Every set of 2-element sets has a choice function. \end{principle} The purpose of this paper is to provide a new and shorter proof of Howard's result. \section{Nielsen-Schreier implies $\mathsf{AC}_{fin}$} \label{section: Nielsen-Schreier implies ACfin} Before beginning the proof, we must fix some notation and terminology. If $X$ is a set, let $X^-=\{x^{-1}:x\in X\}$ be a set of formal inverses of $X$. It does not matter what the elements of $X^-$ are, as long as $X^-$ is disjoint from $X$. Members of $X^\pm=X\cup X^-$ are called $X$-\emph{letters}. Finite sequences $x_1\cdots x_n$ with $x_1,...,x_n\in X^\pm$ are $X$-\emph{words}. An $X$-word $x_1\cdots x_n$ is $X$-\emph{reduced} if $x_i\not=x_{i+1}^{-1}$ for $i=1,...,n-1$.
If $\alpha$ is an $X$-word, the $X$-\emph{reduction} of $\alpha$ is the $X$-reduced $X$-word obtained by performing all possible cancellations within $\alpha$. For notational simplicity, we don't distinguish between $X$-words and their $X$-reductions. Reference to $X$ is omitted if $X$ is clear from the context. If $G$ is a group and $S\subseteq G$, then $\langle S\rangle$ is the subgroup of $G$ \emph{generated by} $S$. \begin{defn} Let $X$ be a set. The \emph{free group on} $X$, written $F(X)$, consists of all reduced $X$-words. The group operation is concatenation followed by reduction, and the identity is the empty word ${\bf 1}$. A group $G$ is \emph{free} if it is isomorphic to $F(X)$ for some $X\subseteq G$. If this is the case, $X$ is a \emph{basis} for $G$. \end{defn} The following proofs will start with a family $Y$ of non-empty sets and construct a choice function $c:Y\rightarrow\bigcup Y$. Without loss of generality, we assume that the members of $Y$ are pairwise disjoint. We then define $X=\bigcup Y$ to be the basis of the free group $F=F(X)$. With every $y\in Y$ we associate a function $\sigma_y:F\rightarrow\mathbb{Z}$ which counts the number of occurrences of $y$-letters in words $\alpha\in F$ as follows. \begin{quote} Write $\alpha=x_1^{\epsilon_1}\cdots x_n^{\epsilon_n}$ as an $X$-reduced word with $x_1,...,x_n\in X$ and $\epsilon_1,...,\epsilon_n\in\{\pm1\}$. Then define $$\sigma_y(\alpha)=|\{i:x_i\in y\land\epsilon_i=1\}|-|\{i:x_i\in y\land\epsilon_i=-1\}|.$$ It is easily checked that, for each $y\in Y$, $\sigma_y$ is a group homomorphism from the free group $F$ to the additive group of integers. \end{quote} Before proving theorem \ref{theorem: NS implies ACfin} we handle a special case in lemma \ref{lemma: NS implies AC2}. Its proof serves as an introduction to ideas used in the proof of the main theorem. 
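The $X$-reduction of words and the counting homomorphisms $\sigma_y$ are easy to make concrete; the following Python sketch (the encoding of a letter as a pair (symbol, $\pm1$) is ours) also illustrates why $\sigma_y$ is a homomorphism: letters cancelled during reduction contribute $+1-1=0$ to the count.

```python
def reduce_word(word):
    """X-reduction: cancel adjacent inverse pairs. A letter is (symbol, ±1)."""
    stack = []
    for sym, eps in word:
        if stack and stack[-1] == (sym, -eps):
            stack.pop()          # the new letter cancels the previous one
        else:
            stack.append((sym, eps))
    return stack

def sigma(y, word):
    """sigma_y: signed number of occurrences of y-letters in the reduction."""
    return sum(eps for sym, eps in reduce_word(word) if sym in y)

y = {'a', 'b'}
w1 = [('a', 1), ('c', 1)]        # the word a c
w2 = [('c', -1), ('b', -1)]      # the word c^{-1} b^{-1}
assert sigma(y, w1 + w2) == sigma(y, w1) + sigma(y, w2)   # 0 == 1 + (-1)
```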
\begin{lem} \label{lemma: NS implies AC2} $\mathsf{ZF}\vdash\mathsf{NS}\Rightarrow\mathsf{AC}_2$ \end{lem} \begin{proof} Let $Y$ be a family of 2-element sets. Without loss of generality, assume that the members of $Y$ are pairwise disjoint. Let $X=\bigcup Y$, let $F=F(X)$ be the free group on $X$, and define the subgroup $K\leq F$ by $$K=\langle\{wx^{-1}:(\exists y\in Y)w,x\in y\}\rangle.$$ By the Nielsen-Schreier theorem, $K$ has a basis $B$. Note that \begin{equation} \label{equation: K is in the kernel} \sigma_y(\alpha)=0\text{ for all }y\in Y\text{ and all }\alpha\in K. \end{equation} We will construct a choice function for $Y$, i.e. a function $c:Y\rightarrow X$ satisfying $c(y)\in y$ for each $y\in Y$. Let $y\in Y$. Define the function $s_y:y\rightarrow y$ to swap the two elements of $y$. For any choice of $x\in y$, $y=\{x,s_y(x)\}$. To simplify notation, we set $x_i=s_y^i(x)$ for all $i\in\mathbb{Z}$; hence $y=\{x_0,x_1\}$. Express $x_0x_1^{-1}$ and $x_1x_0^{-1}$ as reduced $B$-words: \begin{eqnarray*} x_0x_1^{-1}&=&b_{0,1}\cdots b_{0,l_0}\\ x_1x_0^{-1}&=&b_{1,1}\cdots b_{1,l_1}, \end{eqnarray*} where $b_{i,j}\in B^\pm$ for all $i,j$. As $x_0x_1^{-1}=(x_1x_0^{-1})^{-1}$, it follows that $l_0=l_1=l$, say, and that \begin{equation} \label{equation: cancellation in NS=>AC2} b_{1,1}=b_{0,l}^{-1},...,b_{1,l}=b_{0,1}^{-1}. \end{equation} There are two cases: \begin{enumerate}[(i)] \item $l$ is odd: Let $m=(l-1)/2$. The middle $B$-letter of $x_0x_1^{-1}$ is $b_{0,m+1}$, whereas the middle $B$-letter of $x_1x_0^{-1}$ is $b_{1,m+1}=b_{0,m+1}^{-1}$ by (\ref{equation: cancellation in NS=>AC2}). One of these two is in $B$, while the other is in $B^-$. Define $c(y)$ to be the unique element $x\in y$ such that the middle $B$-letter of $xs_y(x)^{-1}$ is a member of $B$. \item $l$ is even: Let $m=l/2$. The following two functions are the key to the proof. 
\begin{eqnarray*} f_y:&y\rightarrow K:&x_i\mapsto b_{i,1}\cdots b_{i,m}\\ g_y:&y\rightarrow F:&x\mapsto f_y(x)^{-1}\cdot x \end{eqnarray*} The idea of $f_y$ is to map $x_i$ to the 'first half' of $x_ix_{i+1}^{-1}$ in terms of the new basis $B$. $f_y(x)$ is intended to represent $x$ in $K$. Using (\ref{equation: cancellation in NS=>AC2}), we obtain \begin{equation} \begin{split} \label{equation: f is well-behaved} f_y(x_i)f_y(x_{i+1})^{-1}&=b_{i,1}\cdots b_{i,m}b_{i+1,m}^{-1}\cdots b_{i+1,1}^{-1}\\ &=b_{i,1}\cdots b_{i,m}b_{i,m+1}\cdots b_{i,2m}\\ &=x_ix_{i+1}^{-1}. \end{split} \end{equation} It follows that $g_y(x_0)=g_y(x_1)$. Hence the image of $y$ under $g_y$ has a single member, $\alpha_y$, say. Note that \begin{equation} \label{equation: sigma of alpha is 1} \begin{split} \sigma_y(\alpha_y)& =\sigma_y(g_y(x_0))\\ & =\sigma_y(f_y(x_0)^{-1}x_0)\\ & =\sigma_y(f_y(x_0)^{-1})+\sigma_y(x_0)\\ & =0+1\text{ using (\ref{equation: K is in the kernel}), }f_y(x_0)\in K\text{, and }x_0\in y \end{split} \end{equation} is non-zero. This means that $\alpha_y$ mentions at least one $y$-letter. So we define $c(y)$ to be the $y$-letter which appears first in the $X$-reduction of $\alpha_y$. \end{enumerate} \end{proof} We are now ready to prove the general case: \begin{thm} \label{theorem: NS implies ACfin} $\mathsf{ZF}\vdash\mathsf{NS}\Rightarrow\mathsf{AC}_{fin}$. \end{thm} \begin{proof} Let $Z$ be a family of non-empty finite sets. Without loss of generality, assume that the members of $Z$ are pairwise disjoint. We form a new family $$Y=\{y:y\not=\emptyset\land(\exists z\in Z)y\subseteq z\},$$ i.e. the closure of $Z$ under taking non-empty subsets. Since $Z\subseteq Y$, any choice function for $Y$ immediately gives a choice function for $Z$. Let $X=\bigcup Y$, let $F=F(X)$ be the free group on $X$, and let $K\leq F$ be the subgroup defined by $$K=\langle\{wx^{-1}:(\exists y\in Y)w,x\in y\}\rangle.$$ By the Nielsen-Schreier theorem, $K$ has a basis $B$. 
For each $n<\omega$, let $Y^{(n)}=\{y\in Y:|y|=n\}$ and $Y^{(\leq n)}=\{y\in Y:|y|\leq n\}$. By induction on $n$, we will find a choice function $c_n$ on $Y^{(\leq n)}$ for each $2\leq n<\omega$. By construction, the $c_n$ will be nested, so that $\bigcup_{2\leq n<\omega}c_n$ is a choice function for $Y$. A choice function $c_2$ on $Y^{(\leq2)}$ is guaranteed by lemma \ref{lemma: NS implies AC2}. Assume that $n\geq3$ and that there is a choice function $c_{n-1}$ for $Y^{(\leq n-1)}$. For every $y\in Y^{(n)}$ we define a function $s_y$ by $$s_y:y\rightarrow y:x\mapsto c_{n-1}(y\setminus\{x\}).$$ Note that, as $Y$ is closed under taking non-empty subsets, $y\setminus\{x\}\in Y^{(n-1)}$, so $c_{n-1}(y\setminus\{x\})$ is defined. There are four cases: \begin{enumerate}[(i)] \item $s_y$ is not a bijection: In this case, $|\{s_y(x):x\in y\}|\leq n-1$, so defining $$c_n(y)=c_{n-1}(\{s_y(x):x\in y\})$$ gives a choice for $y$. \item $s_y$ is a bijection with at least two orbits\footnote{Thanks to Thomas Forster for suggesting a simplification of this part of the proof}: Since there are at least two orbits, each orbit has size $\leq n-1$. Moreover, as $s_y(x)\not=x$ for all $x\in y$, the number of orbits is also $\leq n-1$. So choosing one point from each orbit, and then choosing one point from among the chosen points gives a single element of $y$. More specifically, if we write $orb(x)$ for the orbit of $x\in y$ under $s_y$, we define $$c_n(y)=c_{n-1}(\{c_{n-1}(orb(x)):x\in y\}).$$ \item $s_y$ is a bijection with one orbit, and $n$ is even: If $n$ is even, $s_y^2$ is a bijection with two orbits. Remembering that we are assuming $n\geq 3$, this gives us $\leq n-1$ orbits of size $\leq n-1$ each. A choice is made as in the previous case. \item $s_y$ is a bijection with one orbit, and $n$ is odd: Notice that, for any $x\in y$, $y=\{x,s_y(x),s_y^2(x),...,s_y^{n-1}(x)\}$. $s_y(x)$ may be viewed as the successor of $x$. 
For simplicity, we set $x_i=s_y^i(x)$ for $i\in\mathbb{Z}$, so that $y=\{x_0,x_1,...,x_{n-1}\}$. In order to further simplify our notation, we shall assume that the elements of $Y^{(n)}$ are pairwise disjoint. Of course, this cannot actually hold when $Y$ is constructed as above. But replacing every $y\in Y^{(n)}$ with $y\times\{y\}$ makes no difference to the argument, so the proof carries over without any changes. Recall the basis $B$ of the subgroup $K$ defined earlier in the proof. We may write \begin{eqnarray*} x_0x_1^{-1}&=&b_{0,1}\cdots b_{0,l_0}\\ x_1x_2^{-1}&=&b_{1,1}\cdots b_{1,l_1}\\ &...&\\ x_{n-1}x_0^{-1}&=&b_{n-1,1}\cdots b_{n-1,l_{n-1}} \end{eqnarray*} as reduced $B$-words, with $b_{i,j}\in B^\pm$ for all $i,j$. First, we make two simplifications: \begin{enumerate}[(a)] \item If it is \emph{not} the case that $l_0=...=l_{n-1}$, let $l=\min\{l_i:i=0,...,n-1\}$. Then $\{x_i:l_i=l\}$ is a proper non-empty subset of $y$, and we define $$c_n(y)=c_{n-1}(\{x_i:l_i=l\}).$$ From now on it is assumed that $l_0=...=l_{n-1}=l$, say. \item Note that $$(x_0x_1^{-1})(x_1x_2^{-1})\cdots(x_{n-1}x_0^{-1})={\bf 1},$$ i.e. \begin{equation} \label{equation: cyclic cancelling} (b_{0,1}\cdots b_{0,l})(b_{1,1}\cdots b_{1,l})\cdots(b_{n-1,1}\cdots b_{n-1,l})={\bf 1}. \end{equation} For $i=0,...,n-1$, let $k_i$ be the number of $B$-cancellations in \begin{equation} \label{equation: number of cancellations} (b_{i,1}\cdots b_{i,l})(b_{i+1,1}\cdots b_{i+1,l}). \end{equation} If it is \emph{not} the case that $k_0=...=k_{n-1}$, let $k=\min\{k_i:i=0,...,n-1\}$. Then $\{x_i:k_i=k\}$ is a proper non-empty subset of $y$, and we define $$c_n(y)=c_{n-1}(\{x_i:k_i=k\}).$$ From now on it is assumed that $k_0=...=k_{n-1}=k$, say.
\end{enumerate} As letters always cancel in pairs, (\ref{equation: cyclic cancelling}) implies that $nl$ is even.\footnote{I would like to thank John Truss and Benedikt L\"owe for finding an error in this proof and suggesting a solution.} Since we are assuming that $n$ is odd, it follows that $l$ is even. Define $m=l/2$, and note that $k\geq m$: if not, then complete cancellation in (\ref{equation: cyclic cancelling}) would not be possible. This allows us to define functions $f_y$ and $g_y$, as in the proof of lemma \ref{lemma: NS implies AC2}: \begin{eqnarray*} f_y:&y\rightarrow K:&x_i\mapsto b_{i,1}\cdots b_{i,m}\\ g_y:&y\rightarrow F:&x\mapsto f_y(x)^{-1}x. \end{eqnarray*} Since there are $k\geq m$ cancellations in (\ref{equation: number of cancellations}), we have $b_{i+1,1}=b_{i,l}^{-1},...,b_{i+1,m}=b_{i,l-m+1}^{-1}=b_{i,m+1}^{-1}$ for all $i$. By the same calculation as in (\ref{equation: f is well-behaved}), it follows that $$f_y(x_i)f_y(x_{i+1})^{-1}=x_ix_{i+1}^{-1}$$ for all $i$, and hence that $g_y(x_i)=g_y(x_{i+1})$ for all $i$. So $g_y:y\rightarrow F$ is a constant function, taking a single value $\alpha_y$, say. The same calculation as (\ref{equation: sigma of alpha is 1}) yields $$\sigma_y(\alpha_y)=1.$$ So we set $c_n(y)$ to be the first $y$-letter occurring in the $X$-reduction of $\alpha_y$. \end{enumerate} \end{proof} Whether the Nielsen-Schreier theorem is equivalent to the Axiom of Choice remains an open question. A positive answer might be obtainable by adapting the proof of theorem \ref{theorem: NS implies ACfin}. Finiteness of the sets was used to define the choice function recursively, splitting into cases (i) -- (iv). Cases (i) -- (iii) were easily dealt with. Case (iv) gave us a cyclic ordering on the finite set -- enough structure to use the basis of the subgroup $K$ to choose a single element.
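Cases (i)--(iii) of the induction step are effectively computable once $s_y$ and $c_{n-1}$ are given; only case (iv) uses the basis $B$. The following sketch (representation and names ours, with $c_{n-1}$ modelled as a dictionary on frozensets) mirrors those three cases:

```python
from itertools import combinations

def choose(y, s, c_prev):
    """Cases (i)-(iii) of the induction step; c_prev plays the role of c_{n-1},
    defined on the proper non-empty subsets of y (and sets built from them).
    Case (iv) (one orbit, n odd) needs the free-group basis B and is omitted."""
    n = len(y)
    image = {s[x] for x in y}
    if len(image) < n:                       # (i): s_y is not a bijection
        return c_prev[frozenset(image)]
    orbits, seen = [], set()
    for x in y:                              # orbits of the permutation s_y
        if x not in seen:
            orb, z = set(), x
            while z not in orb:
                orb.add(z)
                z = s[z]
            seen |= orb
            orbits.append(frozenset(orb))
    if len(orbits) >= 2:                     # (ii): at least two orbits
        return c_prev[frozenset(c_prev[o] for o in orbits)]
    if n % 2 == 0:                           # (iii): one orbit, n even -> use s_y^2
        return choose(y, {x: s[s[x]] for x in y}, c_prev)
    raise NotImplementedError("case (iv): one orbit, n odd")

# a toy c_{n-1}: pick the minimum of every proper non-empty subset of {0,1,2,3}
y = [0, 1, 2, 3]
c_prev = {frozenset(t): min(t) for r in (1, 2, 3) for t in combinations(y, r)}
assert choose(y, {x: (x + 1) % 4 for x in y}, c_prev) in y   # case (iii), then (ii)
```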
https://arxiv.org/abs/math/0611158
Simple Homotopy Types and Finite Spaces
We present a new approach to simple homotopy theory of polyhedra using finite topological spaces. We define the concept of collapse of a finite space and prove that this new notion corresponds exactly to the concept of a simplicial collapse. More precisely, we show that a collapse of finite spaces induces a simplicial collapse of their associated simplicial complexes. Moreover, a simplicial collapse induces a collapse of the associated finite spaces. This establishes a one-to-one correspondence between simple homotopy types of finite simplicial complexes and simple equivalence classes of finite spaces. We also prove a similar result for maps: We give a complete characterization of the class of maps between finite spaces which induce simple homotopy equivalences between the associated polyhedra. Furthermore, this class describes all maps coming from simple homotopy equivalences at the level of complexes. The advantage of this theory is that the elementary move of finite spaces is much simpler than the elementary move of simplicial complexes: It consists of removing (or adding) just a single point of the space.
\section{Introduction} J.H.C. Whitehead's theory of simple homotopy types is inspired by Tietze's theorem in combinatorial group theory, which states that any finite presentation of a group could be deformed into any other by a finite sequence of elementary moves, which are now called Tietze transformations. Whitehead translated these algebraic moves into the well-known geometric moves of elementary collapses and expansions of finite simplicial complexes. His beautiful theory of simple homotopy types turned out to be fundamental for the development of piecewise-linear topology: The s-cobordism theorem, Zeeman's conjecture \cite{Zee}, the applications of the theory in surgery, Milnor's classical paper on Whitehead Torsion \cite{Mil} and the topological invariance of torsion are some of its major uses and advances. \medskip In this paper we show how to use finite topological spaces to study simple homotopy types. There is a strong relationship between finite spaces and finite simplicial complexes, which was discovered by McCord \cite{Mcc}. Explicitly, given a finite simplicial complex $K$, one can associate to $K$ a finite $T_0$-space $\mathcal{X} (K)$ which corresponds to the poset of simplices of $K$ ordered by inclusion. Moreover, a simplicial map $\varphi:K\to L$ gives rise to a continuous map $\mathcal{X}(\varphi)$ between the associated finite spaces. Conversely, one can associate to a finite $T_0$-space $X$ a simplicial complex $\mathcal{K} (X)$, whose simplices are the non-empty chains of $X$, and a weak equivalence $\mathcal{K} (X)\to X$. This construction is also functorial. \medskip In \cite{Bar} we showed that finite spaces are very useful for studying homotopy invariants of (general) spaces. In fact, in that article we were looking for \it finite minimal models \rm of some spaces, i.e. the smallest finite spaces which are weak equivalent to a given space. 
In our opinion, finite spaces are more effective for studying homotopy theory than simplicial complexes, because, besides their combinatorial nature, they have the extra topological structure. \medskip It is easy to prove that if two finite $T_0$-spaces $X, Y$ are homotopy equivalent, their associated simplicial complexes $\mathcal{K}(X), \mathcal{K}(Y)$ are also homotopy equivalent. Furthermore, Osaki \cite{Osa} showed that in this case, the latter have the same simple homotopy type. Nevertheless, we noticed that the converse of this result is not true in general: There are finite spaces with different homotopy types whose associated simplicial complexes have the same simple homotopy type. Starting from this point, we were looking for the relation that $X$ and $Y$ should satisfy for their associated complexes to be simple homotopy equivalent. More specifically, we wanted to find an elementary move in the setting of finite spaces (if one existed) which corresponds exactly to a simplicial collapse of the associated polyhedra. \medskip We discovered this elementary move when we were looking for a homotopically trivial finite space (i.e. weak equivalent to a point) which was non-contractible. In order to construct such a space, we developed a method of reduction (i.e. a method that allows us to reduce a finite space to a smaller weak equivalent space). This method of reduction, together with the homotopically trivial and non-contractible space (of 11 points) that we found, is exhibited in section 3. Surprisingly, this method, which consists of removing a \it weak point \rm of the space (see \ref{weakpoint}), turned out to be the key to solving the problem of translating simplicial collapses into this setting. \medskip We will say that two finite spaces are \textit{simply equivalent} if we can obtain one of them from the other by adding and removing weak points. 
If $Y$ is obtained from $X$ by only removing weak points, we say that $X$ \textit{collapses} to $Y$ and write $X \searrow Y$. The first main result of this article is the following \begin{fmteo} \begin{enumerate} \item[ ] \item[(a)] Let $X$ and $Y$ be finite $T_0$-spaces. Then, $X$ and $Y$ are simply equivalent if and only if $\mathcal{K}(X)$ and $\mathcal{K} (Y)$ have the same simple homotopy type. Moreover, if $X \searrow Y$ then $\mathcal{K} (X) \searrow \mathcal{K}(Y)$. \item[(b)] Let $K$ and $L$ be finite simplicial complexes. Then, $K$ and $L$ are simple homotopy equivalent if and only if $\mathcal{X}(K)$ and $\mathcal{X} (L)$ are simply equivalent. Moreover, if $K \searrow L$ then $\mathcal{X} (K) \searrow \mathcal{X} (L)$. \end{enumerate} \end{fmteo} In particular, the functors $\mathcal{K}$ and $\mathcal{X}$ induce a one-to-one correspondence between simple equivalence classes of finite spaces and simple homotopy types: \begin{displaymath} \xymatrix@C=50pt{ \{Finite\ T_0-Spaces\} \! \! \textrm{\raisebox{-2ex}{\Huge{/}} \raisebox{-2.7ex}{$\! \! \! \! \! \diagup \! \! \! \! \: \! \: \searrow$}} \ar@<2.4ex>^{\! \! \! \! \! \! \! \! \! \! \! \! \mathcal{K}}[r] & \{Finite\ Simplicial\ Complexes\} \! \! \textrm{\raisebox{-2ex}{\Huge{/}} \raisebox{-2.7ex}{$\! \! \! \! \! \diagup \! \! \! \! \: \! \: \searrow$}} \ar@<-0.3ex>[l]^{\! \! \! \! \! \! \! \! \! \! \! \! \mathcal{X}} } \end{displaymath} We are now able to study finite spaces using all the power of Whitehead's simple homotopy theory for CW-complexes. But also, and more importantly, we can use finite spaces to strengthen the classical theory. The elementary move in this setting is much simpler to handle and describe because it consists of adding or removing just one single point. \medskip As an application of this theorem, we study \textit{collapsible} finite spaces and their relationship with collapsible complexes. 
We also relate simple types of finite spaces to the notion of minimal finite model introduced in \cite{Bar}. \medskip In the last section of this article we investigate the class of maps between finite spaces which induce simple homotopy equivalences between their associated simplicial complexes. To this end, we introduce the notion of a \it distinguished \rm map. Similarly to the classical case, the class of maps which induce simple homotopy equivalences can be generated, in a certain way, by expansions and a kind of formal homotopy inverses of expansions. Remarkably, this class, which we denote $\overline{\mathcal{D}}$, is also generated by the distinguished maps. The second main result of the article is the following \begin{smteo} \begin{enumerate} \item[] \item[(a)] Let $f:X\to Y$ be a map between finite $T_0$-spaces. Then $f\in \overline{\mathcal{D}}$ if and only if $\mathcal{K}(f):\mathcal{K}(X)\to \mathcal{K}(Y)$ is a simple homotopy equivalence. \item[(b)] Let $\varphi:K\to L$ be a simplicial map between finite simplicial complexes. Then $\varphi$ is a simple homotopy equivalence if and only if $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$. \end{enumerate} \end{smteo} \section{Preliminaries} In this section we recall the basic notions on finite spaces which are essential in sections 3 and 4. For more details on finite spaces we refer the reader to \cite{Mcc, Sto} and P. May's beautiful notes \cite{May, May2}. First we describe Alexandroff's correspondence between topologies and preorders on a finite set. \bigskip Let $X$ be a finite topological space and let $x$ be a point of $X$. The \textit{minimal open set} $U_x$ of $x$ is defined as the intersection of all open sets containing $x$. The minimal open sets of $X$ form a basis for the topology on $X$, which is called the \textit{minimal basis} of $X$ for obvious reasons. The preorder associated to the topology on $X$ is given by the relation $x\le y$ if $x\in U_y$. 
\bigskip Conversely, given a preorder $\le$ on $X$, we define for each $x\in X$ the set $$U_x=\{y\in X \ | \ y\le x\}.$$ It is not hard to see that these sets form a basis for a topology, which is the topology associated to $\le$. \bigskip These applications define a one-to-one correspondence between topologies and preorders on the finite set $X$. Moreover, $T_0$-topologies correspond to orders. One can therefore regard finite $T_0$-spaces as finite posets and vice versa. \bigskip It is very useful to represent finite $T_0$-spaces with their \textit{Hasse diagram}. The Hasse diagram of $X$ is a digraph whose vertex set is $X$ and whose edges are the ordered pairs $(x,y)$ such that $x<y$ and there exists no $z\in X$ with $x<z<y$. \bigskip Consider for example the space $X=\{a,b,c,d\}$ whose proper open sets are $\{a,c,d\}$, $\{b,c,d\}$, $\{c,d\}$ and $\{d\}$. Its Hasse diagram is \begin{displaymath} \xymatrix@C=10pt{ ^a \bullet \ \ \ar@{-}[dr] & & \ \ \bullet ^b \ar@{-}[ld] \\ & \ \ \bullet _c \ar@{-}[d] & \\ & \ \ \bullet _d \\ } \end{displaymath} Instead of representing an edge $(x,y)$ with an arrow, one simply writes $y$ over $x$. \bigskip Given a finite space $X$, we will denote by $X^{op}$ the space with the same underlying set as $X$ but with the opposite preorder. \bigskip \bigskip Following Stong \cite{Sto} and May \cite{May}, we say that a point $x$ of a finite $T_0$-space $X$ is an \textit{up beat point} if the set of points which are greater than $x$ has a minimum. A \textit{down beat point} is one such that the set of points below it has a maximum.\ \begin{obs} \label{comparable} If $x\in X$ is a beat point (up or down), there exists $y\in X$, $y\neq x$ such that any point which is comparable with $x$ is also comparable with $y$. \end{obs} Stong proved in \cite{Sto} that if $x\in X$ is a beat point, then $X\smallsetminus \{x\}$ is a strong deformation retract of $X$.\ A finite $T_0$-space with no beat points is called a \textit{minimal finite space}. 
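As an illustration of the correspondence just described (not part of the original text), the following Python sketch encodes the four-point example by storing for each point its strict down-set, and recovers minimal open sets and beat points; the dictionary `below` and the function names are of course our own.

```python
# The example space X = {a,b,c,d} with proper open sets
# {a,c,d}, {b,c,d}, {c,d}, {d}: in the associated order,
# below[x] is the set of points strictly below x.
below = {"a": {"c", "d"}, "b": {"c", "d"}, "c": {"d"}, "d": set()}

def leq(x, y):
    """x <= y in the order associated to the topology."""
    return x == y or x in below[y]

def U(x):
    """Minimal open set of x: all points <= x."""
    return below[x] | {x}

def is_up_beat(x):
    """x is an up beat point iff the points above x have a minimum."""
    up = {y for y in below if x in below[y]}
    return bool(up) and any(all(leq(m, y) for y in up) for m in up)

def is_down_beat(x):
    """x is a down beat point iff the points below x have a maximum."""
    down = below[x]
    return bool(down) and any(all(leq(y, m) for y in down) for m in down)

print(sorted(U("a")))   # the minimal open set {a, c, d}
print(is_up_beat("d"))  # c is the minimum of the points above d: True
print(is_down_beat("a"))  # c is the maximum of the points below a: True
```

Note that this example space has a minimum ($d$), and indeed every point other than $d$ is a beat point.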
A \textit{core} of a finite space $X$ is a strong deformation retract of $X$ which is a minimal finite space. \ Given a finite $T_0$-space $X$, one can find a sequence of spaces $X=X_0\supsetneq X_1 \supsetneq \ldots \supsetneq X_n$ where $X_{i+1}$ is obtained from $X_i$ by removing a beat point and such that $X_n$ has no beat points. Therefore every finite $T_0$-space has a core. \smallskip Furthermore, Stong proved that every homotopy equivalence between minimal finite spaces is a homeomorphism and therefore, the core of any finite space $X$ is unique up to homeomorphism. It can be described as the smallest space which is homotopy equivalent to $X$. \begin{obs} If $X$ is a contractible finite $T_0$-space, there exists a sequence $X=X_0\supsetneq X_1\supsetneq \ldots \supsetneq X_n=*$ where $X_{i+1}$ is obtained from $X_i$ by removing a beat point. \end{obs} \smallskip It is not hard to prove that a finite $T_0$-space with maximum or minimum is contractible. \begin{obs} A point $x$ of a finite $T_0$-space $X$ is an up beat point if and only if $x$ is a down beat point of $X^{op}$. Therefore, $X$ is contractible if and only if $X^{op}$ is contractible. \end{obs} \bigskip Note that a function $f:X \to Y$ between finite spaces is continuous if and only if it is order preserving.\ Given two functions $f,g:X \to Y$, we will say that $f\le g$ if $f(x)\le g(x)$ for every $x\in X$. It is not difficult to prove that if $f$ and $g$ are continuous and $f\le g$, then $f$ is homotopic to $g$ (see \cite{May,Sto} for more details). \bigskip \bigskip Following McCord \cite{Mcc} (cf. also \cite{Bar, May2}) one can associate to a finite $T_0$-space $X$ a simplicial complex $\mathcal{K} (X)$, whose simplices are the non-empty chains of $X$, and a weak equivalence $|\mathcal{K} (X)|\to X$. Here $|\mathcal{K}(X)|$ denotes the geometric realization of $\mathcal{K}(X)$. The application $\mathcal{K}$ is in fact functorial. 
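Stong's results above yield an effective procedure for computing a core: delete beat points one at a time until none remain. A minimal Python sketch, assuming a down-set encoding of the poset (`below[x]` is the set of points strictly below `x`; since full down-sets are stored, deleting a point preserves the induced order):

```python
def core(below):
    """Compute a core by repeatedly removing beat points (Stong).
    below[x] is the set of points strictly below x."""
    below = {x: set(s) for x, s in below.items()}  # work on a copy
    def leq(a, b): return a == b or a in below[b]
    while True:
        for x in below:
            up = {y for y in below if x in below[y]}
            down = below[x]
            up_beat = up and any(all(leq(m, y) for y in up) for m in up)
            down_beat = down and any(all(leq(y, m) for y in down) for m in down)
            if up_beat or down_beat:
                del below[x]               # remove the beat point ...
                for s in below.values():
                    s.discard(x)           # ... from every down-set
                break
        else:
            return below  # no beat points left: a minimal finite space

# The four-point example has a minimum, hence is contractible:
example = {"a": {"c", "d"}, "b": {"c", "d"}, "c": {"d"}, "d": set()}
print(len(core(example)))  # -> 1

# The four-point "pseudo-circle" (a, b each above both c, d) is minimal:
circle = {"a": {"c", "d"}, "b": {"c", "d"}, "c": set(), "d": set()}
print(len(core(circle)))  # -> 4
```

By Stong's uniqueness result, the size of the output does not depend on the order in which beat points are removed, and a finite $T_0$-space is contractible exactly when its core is a single point.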
A continuous map $f:X\to Y$ between finite $T_0$-spaces induces a simplicial map $\mathcal{K}(f):\mathcal{K} (X)\to \mathcal{K}(Y)$ which coincides with $f$ on vertices. \bigskip Given a map $f:X\to Y$, one has the following commutative diagram \begin{displaymath} \xymatrix@C=20pt{ |\mathcal{K} (X)| \ar@{->}[d] \ar@{->}^{|\mathcal{K}(f)|}[r] & |\mathcal{K} (Y)| \ar@{->}[d] \\ X \ar@{->}^f[r] & Y } \end{displaymath} If $f\simeq g: X\to Y$, it can be proved that $\mathcal{K}(f),\mathcal{K}(g) :\mathcal{K} (X)\to \mathcal{K}(Y)$ lie in the same contiguity class. In particular $|\mathcal{K}(f)|\simeq |\mathcal{K}(g)|$. \bigskip Conversely, one can associate to each finite simplicial complex $K$ a finite $T_0$-space $\mathcal{X} (K)$. This finite space is the poset of simplices of $K$ ordered by inclusion. Note that $\mathcal{K} (\mathcal{X} (K))=K'$ is the first barycentric subdivision of $K$. This implies of course that there exists a weak equivalence $|K|\to \mathcal{X} (K)$. This application is also functorial. A simplicial map $\varphi:K\to L$ between finite simplicial complexes induces a continuous map $\mathcal{X}(\varphi): \mathcal{X} (K)\to \mathcal{X} (L)$, where $\mathcal{X}(\varphi)(S)=\varphi(S)$ for every simplex $S$ of $K$. In this case one has the following diagram that commutes up to homotopy \begin{displaymath} \xymatrix@C=20pt{ |K| \ar@{->}[d] \ar@{->}^{|\varphi|}[r] & |L| \ar@{->}[d] \\ \mathcal{X} (K) \ar@{->}^{\mathcal{X}(\varphi)}[r] & \mathcal{X} (L) } \end{displaymath} \bigskip Recall that two spaces $X$ and $Y$ (not necessarily finite) are weak equivalent if there exists a sequence of spaces $X=X_1, X_2, \ldots ,X_n=Y$ such that for each $1\le i<n$ there is a weak equivalence $X_i\to X_{i+1}$ or $X_{i+1}\to X_i$. We denote $X\overset{we}{\approx} Y$. \bigskip We call a space $X$ homotopically trivial if $X\overset{we}{\approx} *$ (i.e. all its homotopy groups are trivial). 
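For small examples the functors $\mathcal{K}$ and $\mathcal{X}$ can be computed directly. The following Python sketch (illustrative only) builds the face poset $\mathcal{X}(K)$ and the order complex $\mathcal{K}(X)$, and checks the identity $\mathcal{K}(\mathcal{X}(K))=K'$ on the boundary of a triangle, whose barycentric subdivision is a hexagon.

```python
from itertools import combinations

def face_poset(maximal):
    """X(K): the set of all non-empty faces of K, to be ordered
    by inclusion (K is given by its maximal simplices)."""
    return {frozenset(c) for s in maximal
            for r in range(1, len(s) + 1)
            for c in combinations(sorted(s), r)}

def order_complex(points, leq):
    """K(X): the simplices are the non-empty chains of the poset."""
    pts = list(points)
    chains = set()
    for r in range(1, len(pts) + 1):
        for c in combinations(pts, r):
            # a chain is a pairwise comparable subset
            if all(leq(a, b) or leq(b, a) for a, b in combinations(c, 2)):
                chains.add(frozenset(c))
    return chains

# K = boundary of a triangle: 3 vertices and 3 edges, 6 simplices in all
K = face_poset([{1, 2}, {2, 3}, {1, 3}])
# K(X(K)) = K', the barycentric subdivision: a hexagon with 12 simplices
Kprime = order_complex(K, lambda a, b: a <= b)  # <= is inclusion of frozensets
print(len(K), len(Kprime))  # -> 6 12
```

The 6 vertices of $K'$ are the simplices of $K$, and the 12 simplices match a hexagon, as expected.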
\bigskip Note that if $X$, $Y$ are finite $T_0$-spaces, then by Whitehead's Theorem, $X\overset{we}{\approx} Y$ if and only if $|\mathcal{K} (X)|$ and $|\mathcal{K}(Y)|$ have the same homotopy type. For finite simplicial complexes $K$ and $L$, $|K|$ and $|L|$ are homotopy equivalent if and only if $\mathcal{X} (K)\overset{we}{\approx} \mathcal{X} (L)$. \bigskip \bigskip We finish this introductory section by recalling the basic notions on simple homotopy theory for simplicial complexes. Essentially we want to fix the notations that we will use in the main sections of the paper. The standard references for this are of course Whitehead's papers \cite{Whi, Whi2, Whi4}, Milnor's article \cite{Mil} and M.M. Cohen's book \cite{Coh}. \bigskip Let $K$ and $L$ be finite simplicial complexes. Recall that there is an \textit{elementary simplicial collapse} from $K$ to $L$ if there is a simplex $S$ of $K$ and a vertex $a$ of $K$ not in $S$ such that $K=L\cup aS$ and $L\cap aS=a\dot{S}$. Elementary collapses will be denoted, as usual, $K\searrow \! \! \! \! \! ^e \: \: L$. \bigskip We say that $K$ \textit{(simplicially) collapses} to $L$ (or that $L$ \textit{expands} to $K$) if there exists a sequence $K=K_1, K_2, \ldots, K_n=L$ of finite simplicial complexes such that $K_i\searrow \! \! \! \! \! ^e \: \: K_{i+1}$ for all $i$. This is denoted by $K \searrow L$ or $L \nearrow K$. Two complexes $K$ and $L$ have the same simple homotopy type if there is a sequence $K=K_1, K_2, \ldots, K_n=L$ such that $K_i\searrow K_{i+1}$ or $K_i \nearrow K_{i+1}$ for all $i$. Following M.M. Cohen's notation, we denote this by $K \diagup \! \! \! \! \: \! \: \searrow L$. \bigskip It is well known that $K \diagup \! \! \! \! \: \! \: \searrow L$ if and only if $|K|$ and $|L|$ are simple homotopy equivalent viewed as CW-complexes \cite{Whi4}. \section{Simple Homotopy Types: The First Main Theorem} We start by introducing the notion of a \textit{weak beat point}. 
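The elementary simplicial collapse recalled above is easy to detect mechanically: the free face $S$ is characterized by being properly contained in exactly one simplex of $K$, namely $aS$. A small Python sketch (illustrative; complexes are represented as sets of frozensets):

```python
from itertools import combinations

def simplicial_closure(maximal):
    """A finite complex, given as the set of all non-empty faces
    of its maximal simplices."""
    return {frozenset(c) for s in maximal
            for r in range(1, len(s) + 1)
            for c in combinations(sorted(s), r)}

def free_faces(K):
    """Simplices S properly contained in exactly one simplex of K;
    deleting such an S together with its unique coface aS is an
    elementary collapse of K."""
    return {S for S in K if sum(1 for T in K if S < T) == 1}

# In the full triangle each edge is a free face (its unique coface is
# the 2-simplex), while each vertex lies in three larger simplices.
K = simplicial_closure([{1, 2, 3}])
print(frozenset({1, 2}) in free_faces(K))  # -> True
print(frozenset({1}) in free_faces(K))     # -> False
```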
This concept appeared naturally when we were searching for reduction methods (cf. \cite{Bar}) to find the \it minimal finite models \rm of some spaces, i.e. the smallest finite spaces which are weak equivalent to a given space. Surprisingly, this new notion turned out to be crucial for studying the relationship between finite spaces and simplicial collapses of their associated complexes. \begin{defi} \label{weakpoint} Let $X$ be a finite $T_0$-space. We will say that $x\in X$ is a \textit{weak beat point of $X$} (or a \textit{weak point}, for short) if either $U_x\smallsetminus \{x\}$ is contractible or $\overline{\{x\}} \smallsetminus \{x\}$ is contractible. \end{defi} Here $\overline{\{x\}}$ denotes the closure of $\{x\}$, i.e. the set of points which are greater than or equal to $x$. Note that if $x\in X$ is a down beat point, $U_x\smallsetminus \{x\}$ has a maximum, and if $x$ is an up beat point, $\overline{\{x\}} \smallsetminus \{x\}$ has a minimum. Therefore, beat points are particular cases of weak points. \bigskip As we have seen in the previous section, when $x$ is a beat point of $X$, the inclusion $i: X\smallsetminus \{x\} \hookrightarrow X$ is a homotopy equivalence. The following proposition generalizes Stong's result. \begin{prop} \label{weak} Let $x$ be a weak point of a finite $T_0$-space $X$. Then the inclusion map $i: X\smallsetminus \{x\} \hookrightarrow X$ is a weak equivalence. \end{prop} \begin{proof} We may suppose that $U_x\smallsetminus \{x\}$ is contractible since the other case follows from this one by considering $X^{op}$. Note that the minimal open set $U_x$ of $x$ in $X^{op}$ is the closure $\overline{\{x\}}$ of $x$ in $X$ and that $\mathcal{K}(X^{op})=\mathcal{K}(X)$. Given $y\in X$, we have that $i^{-1}(U_y)=U_y \smallsetminus \{x\}$, which has a maximum if $y\neq x$ and is contractible if $y=x$. It is clear then that $$i|_{i^{-1}(U_y)}:i^{-1}(U_y)\to U_y$$ is a weak homotopy equivalence for every $y\in X$. 
Now the result follows from Theorem 6 of \cite{Mcc} applied to the basis-like cover given by the minimal basis of $X$. \end{proof} In \cite{Osa} Corollary 3.4, Osaki proves that if two finite $T_0$-spaces $X$ and $Y$ are homotopy equivalent, then their associated simplicial complexes, $\mathcal{K} (X)$ and $\mathcal{K} (Y)$, have the same simple homotopy type. \bigskip One might ask if the converse of this result also holds. Explicitly, if $\mathcal{K} (X)$ and $\mathcal{K}(Y)$ have the same simple homotopy type, is it true that $X$ and $Y$ are homotopy equivalent? This question, which we can refer to as (Q1), is related to the following question (Q2): Is there a finite space which is homotopically trivial but not contractible? In \cite{Bar} we have already shown that Whitehead's Theorem does not hold for finite spaces (in fact, one can exhibit many examples of finite spaces which are weak equivalent but not homotopy equivalent), but we did not know whether (Q2) was true or not. Both questions are related in the following sense: An affirmative answer to (Q2) would give a negative answer to (Q1). If $X$ is a homotopically trivial finite $T_0$-space, then $|\mathcal{K} (X)|$ is contractible, hence its Whitehead group is trivial, and therefore, $\mathcal{K} (X) \diagup \! \! \! \! \: \! \: \searrow *$. \smallskip As an application of the last proposition, we found the following example of a homotopically trivial space of 11 points which is not contractible. 
\begin{ej}[The Wallet] \label{wallet} Let us consider $W$, the finite $T_0$-space whose Hasse diagram is the following \begin{displaymath} \xymatrix@C=10pt{ \bullet \ar@{-}[d] \ar@{-}[drr] & & \bullet \ar@{-}[lld] \ar@{-}[rrd] & & \underset{}{\overset{x}{\bullet}} \ar@{-}[lld] \ar@{-}[rrd] & & \bullet \ar@{-}[lld] \ar@{-}[d] \\ \bullet \ar@{-}[dr] \ar@{-}[drrr] & & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \bullet \ar@{-}[dlll] \ar@{-}[dl] \\ & \bullet & & \bullet & & \bullet } \end{displaymath} \begin{center} Fig. 1: $W$ \end{center} Note that $W$ is not contractible since it is a minimal finite space (with more than one point). Nevertheless it contains a weak point $x$ (see Fig. 1), since $U_x \smallsetminus \{x\}$ is contractible (see Fig. 2). \begin{displaymath} \xymatrix@C=10pt{ & \bullet \ar@{-}[dl] \ar@{-}[dr] & & & & \bullet \ar@{-}[dlll] \ar@{-}[dl] \\ \bullet & & \bullet & & \bullet } \end{displaymath} \begin{center} Fig. 2: $U_x\smallsetminus \{x\}$ \end{center} Therefore $W$ is weak equivalent to $W\smallsetminus \{x\}$, whose Hasse diagram is as follows \begin{displaymath} \xymatrix@C=10pt{ \bullet \ar@{-}[d] \ar@{-}[drr] & & \bullet \ar@{-}[lld] \ar@{-}[rrd] & & & & \bullet \ar@{-}[lld] \ar@{-}[d] \\ \bullet \ar@{-}[dr] \ar@{-}[drrr] & & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \bullet \ar@{-}[dlll] \ar@{-}[dl] \\ & \bullet & & \bullet & & \bullet } \end{displaymath} \begin{center} Fig. 3: $W\smallsetminus \{x\}$ \end{center} Now it is easy to see that this subspace is contractible. In fact, $W \smallsetminus \{x\}$ does have beat points, and one can get rid of them one by one. Therefore $W$ is homotopically trivial but not contractible. \end{ej} As we pointed out before, this example shows that the converse of Osaki's result does not hold. 
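The claims in this example can be verified mechanically. The following Python sketch is an illustration only: the labels t1, t2, x, t4 (top row of Fig. 1), m1,...,m4 (middle row) and b1, b2, b3 (bottom row) are our own. It enters $W$ through its Hasse diagram and uses Stong's beat-point removal: since the core is the smallest space homotopy equivalent to a given one, a finite space is contractible exactly when its core is a single point.

```python
def down_sets(covers):
    """Strict down-sets from a Hasse diagram: covers[p] is the set of
    points covered by p; iterate to the transitive closure."""
    pts = set(covers) | {q for s in covers.values() for q in s}
    below = {p: set() for p in pts}
    changed = True
    while changed:
        changed = False
        for p in pts:
            new = set(covers.get(p, set()))
            for q in covers.get(p, set()):
                new |= below[q]
            if new != below[p]:
                below[p], changed = new, True
    return below

def core(below):
    """Remove beat points until none remain (Stong's core)."""
    below = {p: set(s) for p, s in below.items()}
    def leq(a, b): return a == b or a in below[b]
    while True:
        for p in below:
            up = {q for q in below if p in below[q]}
            down = below[p]
            if (up and any(all(leq(m, q) for q in up) for m in up)) or \
               (down and any(all(leq(q, m) for q in down) for m in down)):
                del below[p]
                for s in below.values():
                    s.discard(p)
                break
        else:
            return below

# The Wallet W, read off Fig. 1 (x is the marked point in the top row).
covers = {"t1": {"m1", "m2"}, "t2": {"m1", "m3"},
          "x":  {"m2", "m4"}, "t4": {"m3", "m4"},
          "m1": {"b1", "b2"}, "m2": {"b1", "b2"},
          "m3": {"b2", "b3"}, "m4": {"b2", "b3"}}
W = down_sets(covers)

assert len(core(W)) == 11  # W is minimal: no beat points, not contractible
Ux = {p: W[p] & W["x"] for p in W["x"]}
assert len(core(Ux)) == 1  # U_x minus {x} is contractible: x is a weak point
Wx = {p: s - {"x"} for p, s in W.items() if p != "x"}
assert len(core(Wx)) == 1  # W minus {x} is contractible
print("W is homotopically trivial but not contractible")
```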
This naturally leads us to the following harder problem, which is one of the main goals of the paper: What exactly is the relation that $X$ and $Y$ must satisfy for $\mathcal{K}(X)$ and $\mathcal{K} (Y)$ to have the same simple homotopy type? More precisely, is there an \textit{elementary move} in the setting of finite spaces which corresponds to a simplicial collapse of the associated complexes? \smallskip We found out that the notion of weak point (\ref{weakpoint}) was the key to solving this problem: \begin{defi} \label{definicion} Let $X$ be a finite $T_0$-space and let $Y\subsetneq X$. We say that $X$ \textit{collapses} to $Y$ by an \textit{elementary collapse} (or that $Y$ \textit{expands} to $X$ by an \textit{elementary expansion}) if $Y$ is obtained from $X$ by removing a weak point. We denote $X\searrow \! \! \! \! \! ^e \: \: Y$ or $Y\nearrow \! \! \! \! \! \! \! ^e \ \ X$. \smallskip In general, given two finite $T_0$-spaces $X$ and $Y$, we say that $X$ \textit{collapses} to $Y$ (or $Y$ \textit{expands} to $X$) if there is a sequence $X=X_1, X_2, \ldots, X_n=Y$ of finite $T_0$-spaces such that for each $1\le i <n$, $X_i\searrow \! \! \! \! \! ^e \: \: X_{i+1}$. In this case we write $X\searrow Y$ or $Y\nearrow X$. \smallskip Two finite $T_0$-spaces $X$ and $Y$ are \textit{simply equivalent} if there is a sequence $$X=X_1, X_2, \ldots, X_n=Y$$ of finite $T_0$-spaces such that for each $1\le i <n$, $X_i\searrow X_{i+1}$ or $X_i\nearrow X_{i+1}$. We denote in this case $X \diagup \! \! \! \! \: \! \: \searrow Y$, following the same notation that we adopted for simplicial complexes. \end{defi} It follows immediately from \ref{weak} that if $X$ and $Y$ are simply equivalent finite $T_0$-spaces, then they are weak equivalent. We shall see later, as an immediate corollary of our first main result \ref{main}, that homotopy equivalent finite spaces are simply equivalent. 
This result follows from the fact that beat points are weak points and that homeomorphic finite spaces are simply equivalent. The main reason for this curious situation (in the classical setting, a simple homotopy equivalence is in particular a homotopy equivalence) is that Whitehead's Theorem does not hold in this context. \bigskip In order to prove the First Main Theorem, we need some preliminary results. We show first that the associated finite space $\mathcal{X} (K)$ of a simplicial cone $K$ is contractible. \bigskip Suppose $K=aL$ is a cone, i.e. $K$ is the join of a simplicial complex $L$ with a vertex $a\notin L$. Since $|K|$ is contractible, it is clear that $\mathcal{X} (K)$ is homotopically trivial. The following lemma shows that $\mathcal{X} (K)$ is, in fact, contractible. \begin{lema} \label{cono} Let $K=aL$ be a finite cone. Then $\mathcal{X} (K)$ is contractible. \end{lema} \begin{proof} Define $f:\mathcal{X} (K)\to \mathcal{X} (K)$ by $f(S)=S\cup \{a\}$. This function is order-preserving and therefore continuous. If $g:\mathcal{X} (K)\to \mathcal{X} (K)$ is the constant map that takes all of $\mathcal{X} (K)$ to $\{a\}$, we have that $$ 1_{\mathcal{X} (K)}\le f \ge g .$$ This proves that the identity is homotopic to a constant map. \end{proof} It is well known that any finite simplicial complex $K$ has the same simple homotopy type as its barycentric subdivision $K'$. We prove next an analogous result for finite spaces. \bigskip Following \cite{Har}, the barycentric subdivision of a finite $T_0$-space $X$ is defined by $X'=\mathcal{X} (\mathcal{K} (X))$. Explicitly, $X'$ consists of the non-empty chains of $X$ ordered by inclusion. It is shown in \cite{Har} that there is a weak equivalence $X'\to X$ which takes each chain $C$ to $max(C)$. \bigskip \begin{notac} We fix some notation that we will adopt for the rest of the paper. 
Given a set $A$ which can be regarded as a subset of different spaces $X$ and $Y$, we will denote by $\overline{A}^X$ the closure of $A$ in the space $X$ so as not to confuse it with its closure in $Y$. Similarly, given $x\in X$, we will denote by $U_x^X$ the minimal open set of $x$ in the space $X$. \end{notac} \begin{prop} \label{bedex} Let $X$ be a finite $T_0$-space. Then $X$ and $X'$ are simply equivalent. \end{prop} \begin{proof} Consider the space $B(X)$ whose underlying set is $X\sqcup X'$ with the following order relation. Given $a,b\in B(X)$ we say that $a\le b$ if one of the following holds: \begin{itemize} \item $a,b\in X$ and $a\le b$ in $X$. \item $a,b\in X'$ and $a\le b$ in $X'$. \item $a\in X'$, $b\in X$ and $max(a)\le b$ in $X$. \end{itemize} It is easy to see that this relation defines an order on $B(X)$, thus it is a finite $T_0$-space. We will show that $X$ and $X'$ are both simply equivalent to $B(X)$. \bigskip We label all the elements $C_1, C_2, \ldots, C_n$ of $X'$ in such a way that $C_i\le C_j$ implies $i\le j$. Then we define $X_i=X\sqcup \{C_1, C_2, \ldots, C_i\} \subseteq B(X)$ for every $0\le i\le n$. Since $$\overline{\{C_i\}}^{B(X)}\smallsetminus \{C_i\}=\{x\in X \ | \ x\ge max(C_i)\}\sqcup \{C\in X' \ | \ C\supsetneq C_i\}, $$ we have that $$\overline{\{C_i\}}^{X_i}\smallsetminus \{C_i\}=\{x\in X \ | \ x\ge max(C_i)\},$$ which is homeomorphic to $\overline{\{max(C_i)\}}^X$. Therefore, it has a minimum and is thus contractible. We have just proved that $C_i$ is a weak point of $X_i$ for every $1\le i\le n$. Hence, $X_i \searrow \! \! \! \! \! ^e \: \: X_{i-1}$ for $1\le i\le n$, and then $X=X_0$ is simply equivalent to $B(X)=X_n$. \bigskip Now order the elements $x_1,x_2, \ldots, x_m$ of $X$ in such a way that $x_i\le x_j$ implies $i\le j$. We define $X_i'=\{x_{i+1},x_{i+2}, \ldots, x_m\}\sqcup X'\subseteq B(X)$ for every $0\le i\le m$. 
Then $$U_{x_{i}}^{B(X)}\smallsetminus \{x_{i}\}=\{x\in X \ | \ x< x_{i}\}\sqcup \{C\in X' \ | \ max(C)\le x_{i}\}$$ Therefore $$U_{x_{i}}^{X_{i-1}'}\smallsetminus \{x_{i}\}=\{C\in X' \ | \ max(C)\le x_{i}\},$$ which is homeomorphic to $\mathcal{X} (\mathcal{K} (U_{x_{i}}^X))$. But $\mathcal{K} (U_{x_{i}}^X)=x_{i}\mathcal{K} (U_{x_{i}}^X\smallsetminus \{x_{i}\})$ is a cone. By the previous lemma, $U_{x_{i}}^{X_{i-1}'}\smallsetminus \{x_{i}\}$ is contractible. Thus $x_{i}$ is a weak point of $X_{i-1}'$ for every $1\le i\le m$, and then $X_{i-1}'\searrow \! \! \! \! \! ^e \: \: X_i'$ for $1\le i\le m$. Therefore $B(X)=X_0'$ is simply equivalent to $X'=X_m'$. \end{proof} The next technical lemma will also be used in the proof of the First Main Theorem. \begin{lema} \label{expansion} Let $L$ be a subcomplex of a finite simplicial complex $K$. Let $T$ be a set of simplices of $K$ which are not in $L$, and let $a$ be a vertex of $K$ which is contained in no simplex of $T$, but such that $aS$ is a simplex of $K$ for every $S\in T$. Finally, suppose that $K=L\cup \bigcup\limits_{S\in T} \{S,aS\}$ (i.e. the simplices of $K$ are those of $L$ together with the simplices $S$ and $aS$ for every $S$ in $T$). Then $L$ simplicially expands to $K$. \end{lema} \begin{proof} Suppose that $T=\{S_1,S_2, \ldots, S_n\}$ where $i\le j$ implies $\# S_i\le \# S_j$. We define $K_i=L\cup \bigcup\limits_{j=1}^{i} \{S_j,aS_j\}$ for $0\le i \le n$. Let $1\le i\le n$, and let $S\subsetneq S_i$. If $S\in T$, since $\# S<\# S_i$, we have that $S, aS\in K_{i-1}$. If $S\notin T$, then $S, aS\in L\subseteq K_{i-1}$. Therefore, we have proved that $aS_i\cap K_{i-1}=a\dot{S_i}$. \smallskip Inductively we have that $K_i$ is a simplicial complex for every $i$ and that there is an elementary simplicial expansion from $K_{i-1}$ to $K_i$ for every $1\le i\le n$, thus $L=K_0$ expands simplicially to $K=K_n$. \end{proof} Now we are ready to prove the first main result of this article. 
\begin{fmteo} \label{main} \begin{enumerate} \item[ ] \item[(a)] Let $X$ and $Y$ be finite $T_0$-spaces. Then, $X$ and $Y$ are simply equivalent if and only if $\mathcal{K}(X)$ and $\mathcal{K} (Y)$ have the same simple homotopy type. Moreover, if $X \searrow Y$ then $\mathcal{K} (X) \searrow \mathcal{K}(Y)$. \item[(b)] Let $K$ and $L$ be finite simplicial complexes. Then, $K$ and $L$ are simple homotopy equivalent if and only if $\mathcal{X}(K)$ and $\mathcal{X} (L)$ are simply equivalent. Moreover, if $K \searrow L$ then $\mathcal{X} (K) \searrow \mathcal{X} (L)$. \end{enumerate} \end{fmteo} \begin{proof} Let $X$ be a finite $T_0$-space and let $x\in X$ be a weak point. Suppose first that $U_x\smallsetminus \{x\}$ is contractible. In this case, there exists a sequence of spaces $U_x\smallsetminus \{x\}=X_n\supsetneq X_{n-1} \supsetneq \ldots \supsetneq X_1=\{x_1\}$, $X_i=\{x_1,x_2,\ldots ,x_i\}$ and such that $x_i$ is a beat point of $X_i$ for $i\ge 2$. It follows from \ref{comparable} that for each $2\le i\le n$ there exists $y_i\in X_{i-1}$ with the following property: if $z\in X_i$ is comparable with $x_i$, then it is comparable with $y_i$. \bigskip For every $1\le i \le n$ we define $K_i\subseteq \mathcal{K} (X)$, the subcomplex whose simplices are the chains of $Y=X\smallsetminus {\{x\}}$ and the chains of $\overline{\{x\}}\cup X_i\subseteq X$. In other words, $K_i=\mathcal{K} (Y) \cup \mathcal{K} (\overline{\{x\}}\cup X_i)$. \bigskip The simplices of $K_1$ which are not in $\mathcal{K} (Y)$, are the chains of $\overline{\{x\}}\cup \{x_1\}$ that contain $x$. Taking $T=\{S\in K_1 \ | \ x\in S, \ x_1\notin S\}$ and $a=x_1$ in \ref{expansion} it is easy to see that $\mathcal{K} (Y) \nearrow K_1$. If $S\in T$, every element of $S$ is greater than or equal to $x$, and therefore comparable with $x_1$. That is, $x_1S$ is a simplex of $K_1$. 
\bigskip Now, if $i\ge 2$, the simplices of $K_i$ which are not in $K_{i-1}$ are the chains of $\overline{\{x\}}\cup X_i$ that contain both $x$ and $x_i$. In order to use \ref{expansion}, we define $T=\{S\in K_i \ | \ x,x_i\in S, \ y_i\notin S\}$ and $a=y_i$. If $S\in T$, every element $y\in S$ satisfies one of the following: (i) $y\ge x$ or (ii) $y\in X_i$ is comparable with $x_i$. In either case, $y$ is comparable with $y_i$, and then $y_iS\in K_i$. We deduce then that $K_{i-1}\nearrow K_i$ and therefore $\mathcal{K}(Y) \nearrow K_n=\mathcal{K} (X)$. \bigskip Suppose now that $\overline{\{x\}}\smallsetminus \{x\}$ is contractible. If we consider the opposite order on $X$, it follows from the previous case that $\mathcal{K} (X\smallsetminus \{x\}) \nearrow \mathcal{K}(X)$. We have then proved that $X\searrow Y$ implies $\mathcal{K} (X)\searrow \mathcal{K}(Y)$. In particular, $X\diagup \! \! \! \! \: \! \: \searrow Y$ implies $\mathcal{K} (X)\diagup \! \! \! \! \: \! \: \searrow \mathcal{K}(Y)$. \bigskip Now suppose that $K$ and $L$ are finite simplicial complexes such that there is an elementary simplicial collapse from $K$ to $L$. Hence, there exist $S\in K$ and a vertex $a$ of $K$ not in $S$ such that $aS\in K$, $K=L\cup \{S,aS\}$ and $aS\cap L=a\dot{S}$. There is only one simplex of $K$ containing $S$ properly, namely $aS$. Therefore, $S$ is an up beat point in $\mathcal{X} (K)$, and then $\mathcal{X} (K) \searrow \! \! \! \! \! ^e \: \: \mathcal{X} (K)\smallsetminus \{S\}$. The simplices contained properly in $aS$ which are distinct from $S$ form a cone, and then $$U_{aS}^{\mathcal{X} (K)\smallsetminus \{S\}} \smallsetminus \{aS\}=\mathcal{X} (a\dot{S})$$ is contractible by \ref{cono}. Then $aS$ is a weak point of $\mathcal{X} (K)\smallsetminus \{S\}$, which therefore collapses to $\mathcal{X} (K)\smallsetminus \{S,aS\}=\mathcal{X} (L)$. This proves the first part of $(b)$ and the ``moreover'' part.
\bigskip If $X$, $Y$ are finite $T_0$-spaces such that $\mathcal{K} (X)\diagup \! \! \! \! \: \! \: \searrow \mathcal{K} (Y)$, we have just proved that $$X'=\mathcal{X} (\mathcal{K} (X))\diagup \! \! \! \! \: \! \: \searrow Y'=\mathcal{X} (\mathcal{K} (Y)).$$ However, by \ref{bedex}, $X\diagup \! \! \! \! \: \! \: \searrow X'$ and $Y\diagup \! \! \! \! \: \! \: \searrow Y'$, and then $X\diagup \! \! \! \! \: \! \: \searrow Y$.\ Finally, if $K$, $L$ are finite simplicial complexes such that $\mathcal{X} (K) \diagup \! \! \! \! \: \! \: \searrow \mathcal{X} (L)$, then $$K'=\mathcal{K} (\mathcal{X} (K))\diagup \! \! \! \! \: \! \: \searrow L'=\mathcal{K} (\mathcal{X} (L)).$$ Since $K\diagup \! \! \! \! \: \! \: \searrow K'$ and $L\diagup \! \! \! \! \: \! \: \searrow L'$, it follows that $K\diagup \! \! \! \! \: \! \: \searrow L$. This completes the proof. \end{proof} In particular, we have the following corollary. \begin{coro} The functors $\mathcal{K}$, $\mathcal{X}$ induce a one-to-one correspondence between simple equivalence classes of finite spaces and simple homotopy types of finite simplicial complexes \begin{displaymath} \xymatrix@C=50pt{ \{Finite\ T_0-Spaces\} \! \! \textrm{\raisebox{-2ex}{\Huge{/}} \raisebox{-2.7ex}{$\! \! \! \! \! \diagup \! \! \! \! \: \! \: \searrow$}} \ar@<2.4ex>^{\! \! \! \! \! \! \! \! \! \! \! \! \mathcal{K}}[r] & \{Finite\ Simplicial\ Complexes\} \! \! \textrm{\raisebox{-2ex}{\Huge{/}} \raisebox{-2.7ex}{$\! \! \! \! \! \diagup \! \! \! \! \: \! \: \searrow$}} \ar@<-0.3ex>[l]^{\! \! \! \! \! \! \! \! \! \! \! \! \mathcal{X}} } \end{displaymath} \end{coro} The theorem shows that the exact translation of the notion of simple homotopy type in the context of finite spaces is the one defined in \ref{definicion}. This result can be used in both directions. On the one hand, to prove results on finite spaces using the machinery of Whitehead's theory. On the other hand, to enrich the classical theory with the incipient theory of finite spaces.
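The correspondence in the corollary is completely combinatorial, and the two functors are straightforward to compute on small examples. The sketch below assumes our own encodings — a poset as a transitively closed set of strict relations, a complex as a set of frozensets closed under nonempty subsets — and the function names are ours, not the paper's.

```python
# Sketch of the functors K (order complex) and X (face poset) on finite data.
from itertools import combinations

def order_complex(elems, pairs):
    """K(X): the simplices are the nonempty chains of the poset X."""
    le = lambda a, b: a == b or (a, b) in pairs
    def is_chain(c):
        return all(le(a, b) or le(b, a) for a, b in combinations(c, 2))
    return {frozenset(c) for r in range(1, len(elems) + 1)
            for c in combinations(list(elems), r) if is_chain(c)}

def face_poset(complex_):
    """X(K): the simplices of K ordered by strict inclusion."""
    return {(S, T) for S in complex_ for T in complex_ if S < T}
```

Composing the two functors realizes the barycentric subdivision $K'=\mathcal{K}(\mathcal{X}(K))$: starting from a single $1$-simplex (three simplices in total), the round trip produces the five simplices of its subdivision (three vertices and two edges).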
We hope to find in the future a new way to characterize the obstruction for two simplicial complexes to have the same simple homotopy type, which is measured by the Whitehead groups of the complexes, via their associated finite spaces. Note that in the finite space setting the elementary move consists of removing (or adding) just one point. \bigskip If two finite $T_0$-spaces $X$ and $Y$ are homeomorphic, we could use a \mbox{construction similar} to $B(X)$ in \ref{bedex} to prove that $X\diagup \! \! \! \! \: \! \: \searrow Y$ (see \ref{dis}). However that is not necessary, \mbox{since this} \mbox{follows immediately} from the theorem because $\mathcal{K} (X)$ and $\mathcal{K} (Y)$ are isomorphic and therefore simple homotopy equivalent. It follows that homotopy equivalent finite spaces are simply equivalent. Explicitly, if $X$ and $Y$ have the same homotopy type, their cores $X_c$ and $Y_c$ are homeomorphic and then $X\searrow X_c\diagup \! \! \! \! \: \! \: \searrow Y_c\nearrow Y$. \bigskip The following diagrams illustrate the whole situation. \begin{displaymath} \xymatrix@C=20pt{ X \overset{he}{\simeq} Y \ar@{=>}[r] & X \diagup \! \! \! \! \: \! \: \searrow Y \ar@{=>}[r] \ar@{<=>}[d] & X \overset{we}{\approx} Y \ar@{<=>}[d] & \\ & \mathcal{K} (X) \diagup \! \! \! \! \: \! \: \searrow \mathcal{K} (Y) \ar@{=>}[r] & |\mathcal{K} (X)| \overset{we}{\approx} |\mathcal{K} (Y)| \ar@{<=>}[r] & |\mathcal{K} (X)| \overset{he}{\simeq} |\mathcal{K} (Y)| } \end{displaymath} \begin{displaymath} \xymatrix@C=20pt{ \mathcal{X} (K) \overset{he}{\simeq} \mathcal{X} (L) \ar@{=>}[r] & \mathcal{X} (K) \diagup \! \! \! \! \: \! \: \searrow \mathcal{X} (L) \ar@{=>}[r] \ar@{<=>}[d] & \mathcal{X} (K) \overset{we}{\approx} \mathcal{X} (L) \ar@{<=>}[d] & \\ & K \diagup \! \! \! \! \: \! \: \searrow L \ar@{=>}[r] & |K| \overset{we}{\approx} |L| \ar@{<=>}[r] & |K| \overset{he}{\simeq} |L| } \end{displaymath} Here $\overset{he}{\simeq}$ denotes that the spaces are homotopy equivalent. 
The Wallet $W$ satisfies $W\searrow *$, however $W \overset{he}{\simeq \! \! \! \! \! \! \! \: /} *$. Therefore $X\diagup \! \! \! \! \: \! \: \searrow Y \Rightarrow \! \! \! \! \! \! \! \! \: / \ \ X\overset{he}{\simeq} Y$. Since $|K|\overset{he}{\simeq} |L| \Rightarrow \! \! \! \! \! \! \! \! \: / \ \ K\diagup \! \! \! \! \: \! \: \searrow L$, we also have that $X \overset{we}{\approx} Y \Rightarrow \! \! \! \! \! \! \! \! \: / \ \ X\diagup \! \! \! \! \: \! \: \searrow Y$. \bigskip Note that, if $X\overset{we}{\approx} Y$ and their Whitehead group $Wh (\pi_1 (X))$ is trivial, then $|\mathcal{K} (X)|$ and $|\mathcal{K} (Y)|$ are homotopy equivalent CW-complexes with trivial Whitehead group and therefore, simple homotopy equivalent. It follows from \ref{main} that $X\diagup \! \! \! \! \: \! \: \searrow Y$. Thus we have proved \begin{coro} Let $X$, $Y$ be weak equivalent finite $T_0$-spaces such that $Wh (\pi_1 (X))=0$. Then $X\diagup \! \! \! \! \: \! \: \searrow Y$. \end{coro} As another immediate consequence of the theorem, we have \begin{coro} Let $X$, $Y$ be finite $T_0$-spaces. If $X\searrow Y$, then $X'\searrow Y'$. \end{coro} Note also that, as a corollary of the theorem one deduces the following known fact: if $K$ and $L$ are finite simplicial complexes such that $K\searrow L$, then $K'\searrow L'$. \bigskip \bigskip \textbf{Collapsible finite spaces} \bigskip As one can imagine, we will say that a finite $T_0$-space $X$ is \textit{collapsible} if $X\searrow *$.\ Observe that every contractible finite $T_0$-space is collapsible, however the converse is not true. The Wallet $W$ introduced in \ref{wallet} is collapsible and non-contractible. One could ask if there is a finite $T_0$-space which is homotopically trivial but non-collapsible. We will come back to this question in a minute. \bigskip Note that if a finite $T_0$-space $X$ is collapsible, its associated simplicial complex $\mathcal{K} (X)$ is also collapsible. 
Moreover, if $K$ is a collapsible complex, then $\mathcal{X}(K)$ is a collapsible finite space. Therefore, if $X$ is a collapsible finite space, its subdivision $X'$ is also collapsible. \bigskip Let us consider now a compact contractible polyhedron $X$ such that any triangulation of $X$ is non-collapsible, for instance the Dunce Hat \cite{Zee}. Let $K$ be any triangulation of $X$. Since $X$ is contractible, $\mathcal{X} (K)$ is homotopically trivial. Nevertheless, $\mathcal{X} (K)$ cannot be collapsible since $K'$ is not collapsible. \bigskip This is an interesting example of how classical simple homotopy theory can help us to answer natural questions on finite spaces. It was not easy to find a non-contractible homotopically-trivial finite space as we did in \ref{wallet}. However, the finite space associated to the Dunce Hat, despite being much bigger than $W$, arises in a more natural way. \bigskip We have the following situation $$\textrm{contractible} \Rightarrow \textrm{collapsible} \Rightarrow \textrm{homotopically trivial}$$ and none of the converses holds. \bigskip \bigskip \textbf{Minimal simple models} \bigskip The core of a finite space $X$ is the smallest space which is homotopy equivalent to $X$. As we pointed out before, the core is unique up to homeomorphism. In \cite{Bar} we have studied the \textit{minimal finite models} of a space $X$ (not necessarily finite), which are the smallest spaces which are weak equivalent to $X$. We proved that in general these models are not unique. In \cite{Bar} we characterized the minimal finite models of finite graphs (finite CW-complexes of dimension one) and as an example we showed that $\bigvee\limits_{i=1}^3 S^1$ has three minimal finite models up to homeomorphism. It makes sense to make the following definition. \begin{defi} A \textit{minimal simple model} of a finite $T_0$-space $X$ is a finite $T_0$-space simply equivalent to $X$ of minimum cardinality.
We will say that a space is a minimal simple model if it is a minimal simple model of itself. \end{defi} For finite $T_0$-spaces we have that $$\textrm{minimal finite model} \Rightarrow \textrm{minimal simple model} \Rightarrow \textrm{minimal finite space}$$ Note that if the Whitehead group $Wh(\pi_1(X))$ is trivial, the converse of the first implication holds. Therefore if we have a finite $T_0$-space $X$ such that $Wh(\pi_1(X))=0$, we could reach any minimal finite model of $X$ ``simply'' by adding and removing weak points from $X$. \smallskip Going back to the first paragraph of this section: Elementary collapses and expansions give us an effective method of reduction when the space has trivial Whitehead group. Unfortunately this is not exactly what one looks for, since it is not possible to get a minimal simple model just by taking away weak points. More explicitly: A minimal simple model has no weak points. However, if we consider a triangulation $K$ of the Dunce Hat, and we remove as many weak points as we can from $\mathcal{X} (K)$, we will obtain a space without weak points which is not a minimal simple model. This is very different from the homotopy type case, where removing beat points is an effective way of getting a core. \bigskip Of course, minimal simple models are not unique in general. For example, we can consider the space $\mathbb{S} D_3$ \begin{displaymath} \xymatrix@C=6pt{ & \bullet \ar@{-}[dl] \ar@{-}[dr] \ar@{-}[drrr] & & \bullet \ar@{-}[dlll] \ar@{-}[dl] \ar@{-}[dr]\\ \bullet & & \bullet & & \bullet} \end{displaymath} and its opposite, which are minimal simple models because they are minimal finite models. Moreover $\mathbb{S} D_3\diagup \! \! \! \! \: \! \: \searrow (\mathbb{S} D_3) ^{op}$ since $\mathcal{K} (\mathbb{S} D_3)$ and $\mathcal{K} ((\mathbb{S} D_3)^{op})$ are isomorphic. \bigskip \section{Simple Homotopy Equivalences: The Second Main Theorem} In this section we prove the second main result of the article.
We present the notion of a \textit{distinguished} map between finite $T_0$-spaces and the class $\overline{\mathcal{D}}$ generated by these maps. We shall prove that every map in $\overline{\mathcal{D}}$ induces a simple homotopy equivalence between the associated complexes. Conversely, if $|\mathcal{K}(f)|$ is a simple homotopy equivalence, then $f\in \overline{\mathcal{D}}$. Furthermore, for a simplicial map $\varphi:K\to L$, $|\varphi|$ is a simple homotopy equivalence if and only if $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$. \bigskip Recall first that a homotopy equivalence $f:|K|\to |L|$ between compact polyhedra is a simple homotopy equivalence if it is homotopic to a composition of a finite sequence of maps $|K| \to |K_1|\to\ldots \to |K_n|\to |L|$, each of them an expansion or a homotopy inverse of one \cite{Coh,Sie}. \smallskip We prove first that homotopy equivalences between finite spaces induce simple equivalences between the associated polyhedra. \begin{teo} \label{ehinduce} If $f:X\to Y$ is a homotopy equivalence between finite $T_0$-spaces, then $|\mathcal{K}(f)|: |\mathcal{K} (X)|\to |\mathcal{K} (Y)|$ is a simple homotopy equivalence. \end{teo} \begin{proof} Let $X_c$ and $Y_c$ be cores of $X$ and $Y$. Let $i_X:X_c\to X$ and $i_Y:Y_c\to Y$ be the inclusions and $r_X:X\to X_c$, $r_Y:Y\to Y_c$ retractions of $i_X$ and $i_Y$ such that $i_Xr_X\simeq 1_X$ and $i_Yr_Y\simeq 1_Y$. Since $r_Yfi_X:X_c\to Y_c$ is a homotopy equivalence between minimal finite spaces, it is a homeomorphism. Therefore $\mathcal{K}(r_Yfi_X):\mathcal{K} (X_c)\to \mathcal{K}(Y_c)$ is an isomorphism and then $|\mathcal{K}(r_Yfi_X)|$ is a simple equivalence. \bigskip Since $\mathcal{K}(X)\searrow \mathcal{K}(X_c)$, $|\mathcal{K}(i_X)|$ is a simple equivalence, and then the homotopy inverse $|\mathcal{K}(r_X)|$ is also a simple equivalence. Analogously $|\mathcal{K}(i_Y)|$ is a simple equivalence. 
Finally, since $f\simeq i_Yr_Yfi_Xr_X$, it follows that $|\mathcal{K}(f)|\simeq |\mathcal{K}(i_Y)| |\mathcal{K}(r_Yfi_X)| |\mathcal{K}(r_X)|$ is a simple equivalence. \end{proof} \begin{defi} A map $f:X\to Y$ between finite $T_0$-spaces is \textit{distinguished} if $f^{-1}(U_y)$ is contractible for each $y\in Y$. We denote by $\mathcal{D}$ the class of distinguished maps. \end{defi} \begin{obs} \label{eind} From the proof of \ref{weak}, it is clear that if $x\in X$ is a weak point such that $U_x\smallsetminus \{x\}$ is contractible, the inclusion $X\smallsetminus \{x\}\hookrightarrow X$ is distinguished. \end{obs} Note that by the theorem of McCord (\cite{Mcc}; Theorem 6), every distinguished map is a weak equivalence and therefore induces a homotopy equivalence between the associated complexes. We will prove in \ref{dis} that in fact the induced map is a simple equivalence. \begin{obsi} \label{hardie} Let $X$ be a finite $T_0$-space. In \cite{Har} and \cite{May2} it is proved that the map $h:X'\to X$ defined by $h(C)=max(C)$ is a weak equivalence. Moreover $\mathcal{K}(h):\mathcal{K} (X)'\to \mathcal{K}(X)$ is a simplicial approximation to the identity and then $\mathcal{K}(h)$ is in fact a simple homotopy equivalence. We give here a different approach: Since $$h^{-1}(U_x)=\{C \ | \ max(C)\le x \}=\mathcal{X} (\mathcal{K} (U_x))=\mathcal{X} (x\mathcal{K} (U_x\smallsetminus \{x\}))$$ is contractible for every $x\in X$, $h$ is a distinguished map. In particular, by \ref{dis}, $h$ induces a simple homotopy equivalence. \end{obsi} It is easy to see that homeomorphisms are distinguished: If $f:X\to Y$ is a homeomorphism, then $f^{-1}(U_y)=U_{f^{-1}(y)}$, which is contractible. However homotopy equivalences are not distinguished in general. 
The map \begin{displaymath} \xymatrix@C=6pt{ & \ \ \bullet ^a \ar@{-}[dl] \ar@{-}[dr] & \\ _b \bullet \ & & \ \bullet _c } \qquad \xymatrix@C=10pt{ \ar@{->}^f[rr] & & \\ & & & } \qquad \xymatrix@C=6pt{ \ \ \bullet ^1 \ar@{-}[d] \\ \ \ \bullet _0 } \end{displaymath} defined by $f(a)=1$, $f(b)=f(c)=0$ is a homotopy equivalence because both spaces are contractible, however $f^{-1}(U_0)=\{b,c\}$ which is not contractible. \begin{teo} \label{dis} Every distinguished map induces a simple homotopy equivalence. \end{teo} \begin{proof} Suppose $f:X\to Y$ is distinguished. We define the \textit{non-Hausdorff mapping cylinder} $B(f)$ as the set $X\sqcup Y$ with the following order. Given $a, b$ in $B(f)$, $a\le b$ if one of the following holds: \begin{itemize} \item $a,b\in X$ and $a\le b$ in $X$. \item $a,b\in Y$ and $a\le b$ in $Y$. \item $a\in X$, $b\in Y$ and $f(a)\le b$ in $Y$. \end{itemize} Note that the space $B(X)$ constructed in \ref{bedex} is a particular case of the non-Hausdorff cylinder when $f=h:X'\to X$ as defined in \ref{hardie}. We will show that both $X$ and $Y$ expand to $B(f)$. \bigskip Labeling all the elements $x_1, x_2, \ldots, x_n$ of $X$ in such a way that $x_i\le x_j$ implies $i\le j$ and defining $Y_i=\{x_1, x_2, \ldots, x_i\} \sqcup Y \subseteq B(f)$ for every $0\le i\le n$, we have that $$\overline{\{x_i\}}^{Y_i}\smallsetminus \{x_i\}=\{y\in Y \ | \ y\ge f(x_i)\}$$ is homeomorphic to the contractible space $\overline{\{f(x_i)\}}^Y$. Therefore $Y_i \searrow \! \! \! \! \! ^e \: \: Y_{i-1}$ for $1\le i\le n$, and then $Y=Y_0$ expands to $B(f)=Y_n$. Notice that we have not used yet the fact that $f$ is distinguished. \bigskip Now order the elements $y_1,y_2, \ldots, y_m$ of $Y$ in such a way that $y_i\le y_j$ implies $i\le j$ and define $X_i=X \sqcup \{y_{i+1},y_{i+2}, \ldots, y_m\} \subseteq B(f)$ for every $0\le i\le m$. 
Then $$U_{y_{i}}^{X_{i-1}}\smallsetminus \{y_{i}\}=\{x\in X \ | \ f(x)\le y_{i}\}$$ is homeomorphic to $f^{-1}(U_{y_i})$, which is contractible by hypothesis. Thus $X_{i-1}\searrow \! \! \! \! \! ^e \: \: X_i$ for $1\le i\le m$. Therefore $B(f)=X_0$ collapses to $X=X_m$. \bigskip The following diagram \begin{displaymath} \xymatrix@C=30pt{ & B(f) & \\ X \ar@{^{(}->}^{i_X}[ur] \ar@{->}^f[rr] & & Y \ar@{_{(}->}_{i_Y}[ul] } \end{displaymath} does not commute, but $i_X\le i_Yf$ and then $i_X\simeq i_Yf$. Therefore $|\mathcal{K}(i_X)|\simeq |\mathcal{K}(i_Y)| |\mathcal{K}(f)|$. The expansions $|\mathcal{K}(i_X)|$ and $|\mathcal{K}(i_Y)|$ are simple equivalences, and then so is $|\mathcal{K}(f)|$. \end{proof} We have already shown that expansions, homotopy equivalences and distinguished maps induce simple equivalences at the level of complexes. Note that if $f,g,h$ are three maps between finite $T_0$-spaces such that $fg\simeq h$ and two of them induce simple equivalences, then the third map also does. This provides us with a way to construct new maps that induce simple equivalences: \begin{defi} Let $\mathcal{C}$ be a class of continuous maps between finite $T_0$-spaces. We define recursively the class $\mathcal{C}_n$ in the following way. $\mathcal{C}_0=\mathcal{C}$, $$\mathcal{C}_{n+1}=\{f,g,h \ | \ fg\simeq h \textrm{ and at least 2 of the 3 maps are in }\mathcal{C}_n \}$$ We call $\overline{\mathcal{C}}=\bigcup\limits_{n\in \mathbb{N}}\mathcal{C}_n$ the class \textit{generated} by $\mathcal{C}$. In other words, $\overline{\mathcal{C}}$ is the smallest class having the 2-of-3 property (up to homotopy) containing $\mathcal{C}$. \end{defi} It is clear that no matter what $\mathcal{C}$ is, the class $\overline{\mathcal{C}}$ is closed under composition and homotopy. Note that if every map in $\mathcal{C}$ induces a simple equivalence, each map in $\overline{\mathcal{C}}$ does.
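The order on the non-Hausdorff mapping cylinder $B(f)$ used in the proof of \ref{dis} is also easy to realize concretely. The sketch below follows the three cases of its definition; the tagging of points to form the disjoint union, and all names, are our own assumptions, not the paper's.

```python
# Sketch: the strict order relations on the non-Hausdorff mapping cylinder
# B(f) of f: X -> Y. Posets are encoded as transitively closed sets of
# strict relations; f is a dict. Points are tagged to realize X ⊔ Y.

def cylinder_order(X_elems, X_pairs, Y_elems, Y_pairs, f):
    leX = lambda a, b: a == b or (a, b) in X_pairs
    leY = lambda a, b: a == b or (a, b) in Y_pairs
    points = [('X', x) for x in X_elems] + [('Y', y) for y in Y_elems]
    rel = set()
    for a in X_elems:                  # case 1: a, b in X and a <= b in X
        for b in X_elems:
            if a != b and leX(a, b):
                rel.add((('X', a), ('X', b)))
    for a in Y_elems:                  # case 2: a, b in Y and a <= b in Y
        for b in Y_elems:
            if a != b and leY(a, b):
                rel.add((('Y', a), ('Y', b)))
    for a in X_elems:                  # case 3: a in X, b in Y and f(a) <= b
        for b in Y_elems:
            if leY(f[a], b):
                rel.add((('X', a), ('Y', b)))
    return points, rel
```

For the non-distinguished homotopy equivalence $f$ pictured before \ref{dis} ($f(a)=1$, $f(b)=f(c)=0$), the cylinder $B(f)$ has five points and eight strict relations.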
If we denote by $\mathcal{E}$ the class of elementary expansions of finite $T_0$-spaces, then it is clear that every map in $\overline{\mathcal{E}}$ induces a simple homotopy equivalence. \bigskip Observe that the class of simple equivalences between CW complexes is the smallest class closed under the 2-of-3 property (up to homotopy) containing elementary expansions. In the setting of finite spaces, a map which induces a simple equivalence need not have a homotopy inverse. This is the reason why the definition of $\overline{\mathcal{E}}$ is not as simple as in the setting of CW-complexes. \begin{obs} \label{dine} Every expansion of finite $T_0$-spaces is in $\overline{\mathcal{E}}$ because it is a composition of maps in $\mathcal{E}$. The proof of \ref{dis} shows that for every distinguished map $f$, there exist expansions $i$, $j$ such that $i\simeq jf$. Therefore $f\in \overline{\mathcal{E}}$. \end{obs} A map $f:X\to Y$ such that $f^{-1}(\overline{\{y\}})$ is contractible for every $y$ need not be distinguished. However, it is clear that $|\mathcal{K}(f)|$ is a simple equivalence since $f^{op}:X^{op}\to Y^{op}$ is distinguished. Here, $f^{op}$ denotes the map that coincides with $f$ on the underlying sets. We prove that in fact $f\in \overline{\mathcal{D}}$. \medskip We denote by $\mathcal{D} ^{op}$ the class of maps $f$ such that $f^{op}\in \mathcal{D}$. \begin{prop} \label{dopind} Let $f:X\to Y$ be in $\mathcal{D} ^{op}$. Then $f\in \overline{\mathcal{D}}$. \end{prop} \begin{proof} Consider the following commutative diagram \begin{displaymath} \xymatrix@C=30pt{ X \ar@{->}^f[r] & Y \\ X'=(X^{op})' \ar@{->}^{h_X}[u] \ar@{->}^{f'}[r] \ar@{->}^{h_{X^{op}}}[d] & Y'=(Y^{op})' \ar@{->}^{h_Y}[u] \ar@{->}^{h_{Y^{op}}}[d] \\ X^{op} \ar@{->}^{f^{op}}[r] & Y^{op} } \end{displaymath} Here, $f'$ denotes the map $\mathcal{X}(\mathcal{K}(f))$.
Since $\overline{\mathcal{D}}$ satisfies the 2-of-3 property, and since $h_{X^{op}}$, $h_{Y^{op}}$ are distinguished by \ref{hardie} while $f^{op}$ is distinguished by hypothesis, we obtain $f'\in \overline{\mathcal{D}}$. And since $h_X$, $h_Y$ are distinguished, $f\in \overline{\mathcal{D}}$. \end{proof} Therefore if $x\in X$ is a weak point such that $\overline{\{x\}}\smallsetminus \{x\}$ is contractible, the inclusion $X\smallsetminus \{x\} \hookrightarrow X$ lies in $\overline{\mathcal{D}}$. \begin{coro} \label{todoigual} By \ref{eind} and what we have just proved, every elementary expansion is in $\overline{\mathcal{D}}$, which proves that $\overline{\mathcal{E}}\subseteq \overline{\mathcal{D}}$. From \ref{dine}, it follows that $\overline{\mathcal{E}}= \overline{\mathcal{D}}$. From \ref{dopind} and a symmetric result we also obtain that $\overline{\mathcal{D}}=\overline{\mathcal{D} ^{op}}$.\ The proof of \ref{ehinduce} shows that if $f:X\to Y$ is a homotopy equivalence, then $fi_X\simeq i_Yr_Yfi_X$ where $i_X$, $i_Y$ are expansions and $r_Yfi_X$ is a homeomorphism. Thus, every homotopy equivalence lies in $\overline{\mathcal{E}}=\overline{\mathcal{D}}=\overline{\mathcal{D} ^{op}}$. \end{coro} Below we shall prove that $\overline{\mathcal{E}}=\overline{\mathcal{D}}=\overline{\mathcal{D} ^{op}}$ is exactly the class of maps which induce simple homotopy equivalences between the associated polyhedra. \begin{lema} \label{lema1} Let $\varphi,\psi:K\to L$ be simplicial maps which lie in the same contiguity class. Then $\mathcal{X} (\varphi)\simeq \mathcal{X} (\psi)$. \end{lema} \begin{proof} We may assume that $\varphi$ and $\psi$ are contiguous. Then the map $f :\mathcal{X}(K)\to \mathcal{X}(L)$ defined by $f (S)=\varphi(S)\cup \psi(S)$ is well defined and continuous. Moreover $\mathcal{X}(\varphi)\le f \ge \mathcal{X}(\psi)$, and then $\mathcal{X}(\varphi)\simeq \mathcal{X}(\psi)$. \end{proof} Given $n\in\mathbb{N}$ we denote by $K^n$ the $n$-th barycentric subdivision of $K$.
\begin{lema} \label{lema2} Let $\lambda :K^n\to K$ be a simplicial approximation to the identity. Then $\mathcal{X}(\lambda)\in \overline{\mathcal{D}}$. \end{lema} \begin{proof} Suppose first that $n=1$. Let $\lambda :K'\to K$ be any simplicial approximation to $1_{|K|}$. Then $\mathcal{X}(\lambda): \mathcal{X}(K)'\to \mathcal{X}(K)$ is homotopic to $h_{\mathcal{X}(K)}$, for if $S_1\subsetneq S_2\subsetneq \ldots \subsetneq S_m$ is a chain of simplices of $K$, then $\mathcal{X}(\lambda)(\{S_1,S_2,\ldots,S_m\})=\{\lambda(S_1),\lambda(S_2),\ldots,\lambda(S_m)\}\subseteq S_m=h_{\mathcal{X}(K)}(\{S_1,S_2,\ldots,S_m\})$. By \ref{hardie}, it follows that $\mathcal{X}(\lambda)\in \overline{\mathcal{D}}$. \bigskip Now let $n\ge 1$ be arbitrary. A composition of approximations to the identity $K^{i+1}\to K^i$ for $0\le i<n$ defines an approximation to the identity $\nu :K^n\to K$. By the case $n=1$, $\mathcal{X}$ applied to each factor lies in $\overline{\mathcal{D}}$; since $\overline{\mathcal{D}}$ is closed under composition, $\mathcal{X}(\nu)\in \overline{\mathcal{D}}$. Any other approximation $\lambda: K^n\to K$ to $1_{|K|}$ is contiguous to $\nu$. By \ref{lema1}, $\mathcal{X}(\lambda)$ is homotopic to $\mathcal{X}(\nu)$ and lies in $\overline{\mathcal{D}}$. \end{proof} \begin{lema} \label{lema3} Let $\varphi,\psi:K\to L$ be simplicial maps such that $|\varphi|\simeq|\psi|$ and such that $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$. Then $\mathcal{X}(\psi)$ also lies in $\overline{\mathcal{D}}$. \end{lema} \begin{proof} There exist $n\ge 1$ and simplicial approximations $\widetilde{\varphi}, \widetilde{\psi}: K^n\to L$ to $|\varphi|$ and $|\psi|$ in the same contiguity class. Let $\lambda :K^n\to K$ be a simplicial approximation to $1_{|K|}$. Then $\varphi\lambda$ is an approximation to $|\varphi|$ and therefore $\varphi\lambda$ and $\widetilde{\varphi}$ are contiguous. Analogously $\psi\lambda$ and $\widetilde{\psi}$ are contiguous.
Hence, $\varphi\lambda$ and $\psi\lambda$ lie in the same contiguity class and, by \ref{lema1}, $\mathcal{X}(\varphi)\mathcal{X}(\lambda)= \mathcal{X}(\varphi\lambda)\simeq \mathcal{X}(\psi\lambda) =\mathcal{X}(\psi)\mathcal{X}(\lambda)$. By hypothesis and \ref{lema2}, $\mathcal{X}(\varphi),\mathcal{X}(\lambda)\in \overline{\mathcal{D}}$. It follows that $\mathcal{X}(\psi)\in \overline{\mathcal{D}}$. \end{proof} \begin{teo} Let $K_0,K_1,\ldots,K_n$ be finite simplicial complexes and let $f_i:|K_i|\to |K_{i+1}|$ be such that for each $0\le i<n$ one of the following holds: \begin{enumerate} \item[(1)] $f_i=|\varphi_i|$ where $\varphi_i:K_i\to K_{i+1}$ is a simplicial map such that $\mathcal{X}(\varphi_i)\in \overline{\mathcal{D}}$. \item[(2)] $f_i$ is a homotopy inverse of $|\varphi_i|$ where $\varphi_i:K_{i+1}\to K_{i}$ is a simplicial map such that $\mathcal{X}(\varphi_i)\in \overline{\mathcal{D}}$. \end{enumerate} Let $\varphi:K_0\to K_n$ be a simplicial map such that $|\varphi|\simeq f_{n-1}f_{n-2}\ldots f_0$. Then $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$. \end{teo} \begin{proof} We may assume that $f_0$ satisfies condition $(1)$. Otherwise we define $\widetilde{K_0}=K_0$, $\widetilde{f_0}=|1_{K_0}|:|\widetilde{K_0}|\to |K_0|$ and then $|\varphi|\simeq f_{n-1}f_{n-2}\ldots f_0\widetilde{f_0}$. \bigskip The theorem is proved by induction on $n$. If $n=1$, $|\varphi|\simeq |\varphi_0|$ where $\mathcal{X}(\varphi_0)\in \overline{\mathcal{D}}$ and the result follows directly from \ref{lema3}. Suppose now that the theorem is true for $n$. Let $K_0,K_1,\ldots, K_n,K_{n+1}$ be finite complexes, $f_i:|K_i|\to |K_{i+1}|$ maps satisfying conditions $(1)$ or $(2)$ and $\varphi:K_0\to K_{n+1}$ such that $|\varphi|\simeq f_nf_{n-1}\ldots f_0$. We consider two cases: $f_n$ satisfies condition $(1)$ or $f_n$ satisfies condition $(2)$. \bigskip In the first case we define $g:|K_0|\to |K_n|$, $g=f_{n-1}f_{n-2}\ldots f_0$.
Let $\widetilde{g}:K_0^m\to K_n$ be a simplicial approximation to $g$ and let $\lambda :K_0^m\to K_0$ be a simplicial approximation to the identity. Then $|\widetilde{g}|\simeq g|\lambda|=f_{n-1}f_{n-2}\ldots f_1(f_0|\lambda|)$ where $f_0|\lambda|=|\varphi_0\lambda|$ and $\mathcal{X}(\varphi_0\lambda)=\mathcal{X}(\varphi_0)\mathcal{X}(\lambda)\in \overline{\mathcal{D}}$ by \ref{lema2} and the hypothesis on $\varphi_0$. By induction, $\mathcal{X}(\widetilde{g})\in \overline{\mathcal{D}}$, and then $\mathcal{X}(\varphi_n\widetilde{g})\in \overline{\mathcal{D}}$. Since $|\varphi\lambda|\simeq f_ng|\lambda|\simeq f_n|\widetilde{g}|=|\varphi_n\widetilde{g}|$, by lemma \ref{lema3}, $\mathcal{X}(\varphi\lambda)$ lies in $\overline{\mathcal{D}}$. Therefore $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$. \bigskip In the other case, $|\varphi_n\varphi|\simeq f_{n-1}f_{n-2}\ldots f_0$ and by induction, $\mathcal{X}(\varphi_n\varphi)\in \overline{\mathcal{D}}$. Therefore $\mathcal{X}(\varphi)$ also lies in $\overline{\mathcal{D}}$. \end{proof} \begin{coro} Let $\varphi:K\to L$ be a simplicial map such that $|\varphi|$ is a simple homotopy equivalence. Then $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$. \end{coro} \begin{proof} Since $|\varphi|$ is a simple equivalence, there exist finite complexes $K=K_0, K_1,\ldots, K_n=L$ and maps $f_i:|K_i|\to |K_{i+1}|$, which are simplicial expansions or homotopy inverses of simplicial expansions, and such that $|\varphi|\simeq f_{n-1}f_{n-2}\ldots f_0$. By our First Main Theorem \ref{main}, simplicial expansions between complexes induce expansions between the associated finite spaces which lie in $\overline{\mathcal{D}}$ by \ref{todoigual}. Therefore, the last theorem applies. \end{proof} Now we are ready to prove the second important result of this article which is the analogue for maps of our First Main Theorem. In fact, we have already done most of the work. \begin{smteo} \begin{enumerate} \item[] \item[(a)] Let $f:X\to Y$ be a map between finite $T_0$-spaces.
Then $f\in \overline{\mathcal{D}}$ if and only if $|\mathcal{K}(f)|:|\mathcal{K}(X)|\to |\mathcal{K}(Y)|$ is a simple homotopy equivalence. \item[(b)] Let $\varphi:K\to L$ be a simplicial map between finite simplicial complexes. Then $|\varphi|$ is a simple homotopy equivalence if and only if $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$. \end{enumerate} \end{smteo} \begin{proof} Suppose $f:X\to Y$ is a map such that $|\mathcal{K}(f)|$ is a simple equivalence. By the last corollary, $f':X'\to Y'$ lies in $\overline{\mathcal{D}}$ and since $fh_X=h_Yf'$, we have that $f\in \overline{\mathcal{D}}$. If $\varphi:K\to L$ is a simplicial map such that $\mathcal{X}(\varphi)\in \overline{\mathcal{D}}$, then $|\varphi'|:|K'|\to |L'|$ is a simple equivalence. Here $\varphi'=\mathcal{K}(\mathcal{X}(\varphi))$ is the barycentric subdivision of $\varphi$. Let $\lambda_K:K'\to K$ and $\lambda_L:L'\to L$ be simplicial approximations to the identities. Then $\lambda_L\varphi'$ and $\varphi\lambda_K$ are contiguous. In particular $|\lambda_L||\varphi'|\simeq |\varphi||\lambda_K|$ and then $|\varphi|$ is a simple equivalence. \end{proof} Since there are homotopy equivalences which are not simple in the setting of polyhedra, the theorem says that in the setting of finite spaces the inclusions $$\{homotopy \ equivalences\}\subsetneq \overline{\mathcal{D}} \subsetneq \{weak \ equivalences\}$$ are both strict. However, if $f:X\to Y$ is a weak equivalence between finite $T_0$-spaces with trivial Whitehead group, then $f\in \overline{\mathcal{D}}$.
https://arxiv.org/abs/math/0611158
Simple Homotopy Types and Finite Spaces
We present a new approach to simple homotopy theory of polyhedra using finite topological spaces. We define the concept of collapse of a finite space and prove that this new notion corresponds exactly to the concept of a simplicial collapse. More precisely, we show that a collapse of finite spaces induces a simplicial collapse of their associated simplicial complexes. Moreover, a simplicial collapse induces a collapse of the associated finite spaces. This establishes a one-to-one correspondence between simple homotopy types of finite simplicial complexes and simple equivalence classes of finite spaces. We also prove a similar result for maps: We give a complete characterization of the class of maps between finite spaces which induce simple homotopy equivalences between the associated polyhedra. Furthermore, this class describes all maps coming from simple homotopy equivalences at the level of complexes. The advantage of this theory is that the elementary move of finite spaces is much simpler than the elementary move of simplicial complexes: It consists of removing (or adding) just a single point of the space.
https://arxiv.org/abs/1207.3606
Dual concepts of almost distance-regularity and the spectral excess theorem
Generally speaking, `almost distance-regular' graphs share some, but not necessarily all, of the regularity properties that characterize distance-regular graphs. In this paper we propose two new dual concepts of almost distance-regularity, thus giving a better understanding of the properties of distance-regular graphs. More precisely, we characterize $m$-partially distance-regular graphs and $j$-punctually eigenspace distance-regular graphs by using their spectra. Our results can also be seen as a generalization of the so-called spectral excess theorem for distance-regular graphs, and they lead to a dual version of it.
\section{Preliminaries} Almost distance-regular graphs, recently studied in the literature, are graphs which share some, but not necessarily all, of the regularity properties that characterize distance-regular graphs. Two examples of such graphs are partially distance-regular graphs \cite{p91} and $m$-walk-regular graphs \cite{dfg09}. In this paper we propose and characterize two dual concepts of almost distance-regularity, and study some cases where distance-regularity is attained. As in the theory of distance-regular graphs, the two proposed concepts lead to several duality results. Our results can also be seen as a generalization of the so-called spectral excess theorem for distance-regular graphs (see \cite{fg97}; for short proofs, see \cite{vd08,fgg09}). This theorem characterizes distance-regular graphs by their spectra and the average number of vertices at extremal distance. A dual version of this theorem is also derived. We use standard concepts and results for distance-regular graphs \cite{biggs,bcn}, spectral graph theory \cite{cds80,g93}, and spectral and algebraic characterizations of distance-regular graphs \cite{f02}. Moreover, for some more details and other concepts of almost distance-regularity (such as distance-polynomial and partially distance-regular graphs), we refer the reader to our recent paper \cite{ddfgg10}. In what follows, we recall the main concepts, terminology, and results involved. Let $\Gamma$ be a simple, connected, $\delta$-regular graph, with vertex set $V$, order $n=|V|$, and adjacency matrix $\textbf{\emph{A}}$. The {\it distance} between two vertices $u$ and $v$ is denoted by $\mathop{\rm dist }\nolimits (u,v)$, so the {\it diameter} of $\Gamma$ is $D=\textrm{max}_{u,v\in V}\mathop{\rm dist }\nolimits(u,v)$. The set of vertices at distance $i$ from a given vertex $u\in V$ is denoted by $\Gamma_i(u)$, for $i=0,1,\ldots,D$.
The {\em distance-$i$ graph} $\Gamma_i$ is the graph with vertex set $V$ and where two vertices $u$ and $v$ are adjacent if and only if $\mathop{\rm dist }\nolimits(u,v)=i$ in $\Gamma$. Its adjacency matrix $\textbf{\emph{A}}_i$ is usually referred to as the {\em distance-$i$ matrix} of $\Gamma$. The spectrum of $\Gamma$ is denoted by $ \textrm{sp}\,\Gamma = \{\lambda_0^{m_0},\lambda_1^{m_1},\ldots, \lambda_d^{m_d}\}, $ where the different eigenvalues of $\Gamma$ are in decreasing order, $\lambda_0>\lambda_1>\cdots >\lambda_d$, and the superscripts stand for their multiplicities $m_i=m(\lambda_i)$. \subsection{The predistance and preidempotent polynomials} From the spectrum of $\Gamma$, we consider the {\em predistance polynomials} $\{p_i\}_{0\le i\le d}$ which are orthogonal with respect to the following scalar product in $\mathbb{R}_d[x]$: \begin{equation} \label{product} \langle f, g\rangle_{\vartriangle} =\frac{1}{n}\textrm{tr}\, (f(\textbf{\emph{A}})g(\textbf{\emph{A}}))=\frac{1}{n} \sum_{i=0}^d m_i f(\lambda_i) g(\lambda_i), \end{equation} and which satisfy $\textrm{deg}\,p_i=i$ and $\langle p_i,p_j \rangle_\vartriangle= \delta_{ij}p_i(\lambda_0)$, for all $i,j=0,1,\ldots,d$. For more details, see \cite{fg97}. Like every sequence of orthogonal polynomials, the predistance polynomials satisfy a three-term recurrence of the form \begin{equation} \label{recur-pol} xp_i=\beta_{i-1}p_{i-1}+\alpha_i p_i+\gamma_{i+1}p_{i+1},\qquad i=0,1,\ldots,d, \end{equation} with $\beta_{-1}=\gamma_{d+1}=0$. Some basic properties of these coefficients, such as $\alpha_i+\beta_i+\gamma_i=\lambda_0$ for $i=0,1,\ldots, d$, and $\beta_i n_i=\gamma_{i+1}n_{i+1}\neq0$ for $i=0,1,\ldots, d-1$, where $n_i=\|p_i\|_{\vartriangle}^2=p_i(\lambda_0)$, can be found in \cite{cffg09}. Let $\omega_i$ be the leading coefficient of $p_i$. Then, from the above recurrence and since $p_0=1$, it is immediate that $\omega_i= (\gamma_1\gamma_2\cdots \gamma_i)^{-1}$ for $i=1,\ldots,d$.
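As an illustration (not part of the paper), the predistance polynomials can be computed by Gram--Schmidt orthogonalization of the monomials with respect to the scalar product (\ref{product}), rescaling so that $\|p_i\|_{\vartriangle}^2=p_i(\lambda_0)$. The Python/NumPy sketch below does this for the Petersen graph and checks that $p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$, as must happen for a distance-regular graph:

```python
import itertools
import numpy as np

# Petersen graph as Kneser graph K(5,2): vertices are the 2-subsets of
# {0,...,4}, two of them adjacent iff disjoint.  It is distance-regular
# with diameter D = d = 2, so p_i(A) = A_i must hold.
V = [frozenset(c) for c in itertools.combinations(range(5), 2)]
n = len(V)
A = np.array([[1.0 if not u & v else 0.0 for v in V] for u in V])

# Distinct eigenvalues in decreasing order and their multiplicities.
w = np.linalg.eigvalsh(A)
lam, mult = np.unique(np.round(w, 8), return_counts=True)
lam, mult = lam[::-1], mult[::-1]
d = len(lam) - 1

def ip(f, g):
    # <f,g>_triangle = (1/n) * sum_i m_i f(lambda_i) g(lambda_i)
    return np.sum(mult * f(lam) * g(lam)) / n

# Gram-Schmidt on 1, x, x^2, ..., then rescale so <p_i,p_i> = p_i(lambda_0).
ortho = []
for i in range(d + 1):
    q = np.polynomial.Polynomial([0.0] * i + [1.0])   # monomial x^i
    for p in ortho:
        q = q - (ip(q, p) / ip(p, p)) * p
    ortho.append(q)
pre = [q * (q(lam[0]) / ip(q, q)) for q in ortho]     # predistance polynomials

def evalm(p, M):
    # Evaluate a polynomial at a matrix (matrix powers, not entrywise).
    R, P = np.zeros_like(M), np.eye(len(M))
    for c in p.coef:
        R, P = R + c * P, P @ M
    return R

# Distance matrices of a diameter-2 graph: A_0 = I, A_1 = A, A_2 = J - I - A.
A_i = [np.eye(n), A, np.ones((n, n)) - np.eye(n) - A]
print([np.allclose(evalm(pre[i], A), A_i[i]) for i in range(3)])
# -> [True, True, True]
```

The procedure recovers $p_0=1$, $p_1=x$ and $p_2=x^2-3$ for the Petersen graph, with $p_2(\lambda_0)=6$, the number of vertices at distance $2$ from any vertex.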
For any graph, the sum of all the predistance polynomials gives the {\em Hoffman polynomial} $H$ satisfying $H(\lambda_i)=n\delta_{0i}$, $i=0,1,\ldots,d$, which characterizes regular graphs via the condition $H(\textbf{\emph{A}})=\textbf{\emph{J}}$, the all-$1$ matrix \cite{hof63}. Note that the leading coefficient $\omega_d$ of $H$ (and also of $p_d$) is $\omega_d=n/\pi_0$, where $\pi_0=\prod_{i=1}^{d}(\lambda_0-\lambda_i)$. From the predistance polynomials, we define the so-called {\em preidempotent polynomials} $q_j$, $j=0,1,\ldots, d$, by $$ q_j(\lambda_i)= \frac{m_j}{n_i}p_i(\lambda_j),\qquad i=0,1,\ldots, d, $$ which are orthogonal with respect to the scalar product \begin{equation} \label{product-preadj} \langle f,g\rangle_\blacktriangle =\frac{1}{n}\textrm{tr}\, (f\{\textbf{\emph{A}}\}g\{\textbf{\emph{A}}\})=\frac{1}{n}\sum_{i=0}^d n_i f(\lambda_i) g(\lambda_i), \end{equation} where $f\{\textbf{\emph{A}}\}=\frac{1}{\sqrt{n}}\sum_{i=0}^d f(\lambda_i) p_i(\textbf{\emph{A}})$. Note that, since $q_j(\lambda_0)=m_j$, the duality between the two scalar products (\ref{product}) and (\ref{product-preadj}) and their associated polynomials is made apparent by writing \begin{eqnarray} \langle p_i, p_j\rangle_\vartriangle &=& \frac{1}{n}\sum_{l=0}^d m_l p_i(\lambda_l) p_j(\lambda_l)=\delta_{ij} n_i,\qquad i,j=0,1,\ldots,d, \label{basic-predistance} \\ \langle q_i, q_j\rangle_{\blacktriangle} &=& \frac{1}{n}\sum_{l=0}^d n_l q_i(\lambda_l) q_j(\lambda_l)=\delta_{ij} m_i,\qquad i,j=0,1,\ldots,d. \label{basic-preadj} \end{eqnarray} \subsection{Vector spaces, algebras and bases} \label{subsec_alg} Let $\Gamma$ be a graph with diameter $D$, adjacency matrix $\textbf{\emph{A}}$ and $d+1$ distinct eigenvalues.
We consider the vector spaces ${\cal A}= \mathbb{R}_{d}[\textbf{\emph{A}}] = \linebreak \textrm{span} \{\textbf{\emph{I}}, \textbf{\emph{A}}, \textbf{\emph{A}}^2, \ldots, \textbf{\emph{A}}^{d}\}$ and ${\cal D}= \textrm{span} \{\textbf{\emph{I}},\textbf{\emph{A}},\textbf{\emph{A}}_2,\ldots,\textbf{\emph{A}}_D\}$, with dimensions $d+1$ and $D+1$, respectively. Then, ${\cal A}$ is an algebra with the ordinary product of matrices, known as the {\it adjacency algebra}, with orthogonal bases $A_p=\{p_0(\textbf{\emph{A}}),p_1(\textbf{\emph{A}}),p_2(\textbf{\emph{A}}),\ldots, p_d(\textbf{\emph{A}})\}$ and $A_\lambda=\{\textbf{\emph{E}}_0,\textbf{\emph{E}}_1,\ldots, \textbf{\emph{E}}_d\}$, where the matrices $\textbf{\emph{E}}_i$, $i=0,1,\ldots,d$, corresponding to the orthogonal projections onto the eigenspaces, are the {\it $($principal\/$)$ idempotents} of $\textbf{\emph{A}}$. Besides, since $\textbf{\emph{I}},\textbf{\emph{A}},\textbf{\emph{A}}^2,\ldots,\textbf{\emph{A}}^D$ are linearly independent, we have that $\textrm{dim}\, \mathcal{A}=d+1\ge D+1$ and, therefore, we always have $D\le d$ \cite{biggs}. Moreover, ${\mathcal D}$ forms an algebra with the entrywise or Hadamard product of matrices, defined by $(\textbf{\emph{X}}\circ\textbf{\emph{Y}})_{uv}=\textbf{\emph{X}}_{uv}\textbf{\emph{Y}}_{uv}$. We call ${\mathcal D}$ the {\em distance $\circ$-algebra}, which has orthogonal basis $D_{\lambda}= \{\textbf{\emph{I}},\textbf{\emph{A}},\textbf{\emph{A}}_2,\ldots,\textbf{\emph{A}}_D\}$. From now on, we work with the vector space ${\cal T}={\cal A}+{\cal D}$, and relate the distance-$i$ matrices $\textbf{\emph{A}}_i \in {\mathcal D}$ to the matrices $p_i(\textbf{\emph{A}}) \in {\mathcal A}$. Note that $\textbf{\emph{I}}$, $\textbf{\emph{A}}$, and $\textbf{\emph{J}}$ are matrices in ${\cal A}\cap{\cal D}$ since $\textbf{\emph{J}}=H(\textbf{\emph{A}})\in \mathcal{A}$. Recall that ${\mathcal A}={\mathcal D}$ if and only if $\Gamma$ is distance-regular (see \cite{biggs,bcn}).
In this case, we have $D=d$, and the predistance polynomials become the {\em distance polynomials} satisfying $\textbf{\emph{A}}_i=p_i(\textbf{\emph{A}})$. In ${\cal T}$, we consider the following scalar product: \begin{equation} \label{equationscalarproduct} \langle\textbf{\emph{R}},\textbf{\emph{S}}\rangle= \frac 1n\textrm{tr}\, (\textbf{\emph{RS}})= \frac 1n\textrm{sum}\,(\textbf{\emph{R}}\circ\textbf{\emph{S}}), \end{equation} where $\textrm{sum}\,(\textbf{\emph{M}})$ denotes the sum of all entries of $\textbf{\emph{M}}$. Observe that the factor $1/n$ ensures that $\|\textbf{\emph{I}}\|^2=1$, whereas $\|\textbf{\emph{J}}\|^2=n$. Note also that the {\em average degree} of $\Gamma_i$ is $\overline{\delta}_i=\|\textbf{\emph{A}}_i\|^2$ and the {\em average multiplicity} of $\lambda_j$ is $\overline{m}_j=\frac{m_j}{n}=\|\textbf{\emph{E}}_j\|^2$. According to (\ref{product}), this scalar product of matrices satisfies $\langle f(\textbf{\emph{A}}),g(\textbf{\emph{A}})\rangle=\langle f,g\rangle_\vartriangle$. \section{Two dual approaches to almost distance-regularity} Here we limit ourselves to the case of graphs with spectrally maximum diameter (or the `non-degenerate' case) $D=d$. Consequently, we will use the two symbols $D$ and $d$ interchangeably, depending on what we are referring to. In this context, let us consider the following two definitions of almost distance-regularity: \begin{defi} \label{D1} For a given $i$, $0\le i\le D$, a graph $\Gamma$ is {\em $i$-punctually distance-regular} when there exist constants $p_{ji}$ such that \begin{equation} \label{def(a)} \textbf{A}_i\textbf{E}_j = p_{ji}\textbf{E}_j \end{equation} for every $j=0,1,\ldots,d$; and $\Gamma$ is {\em $m$-partially distance-regular} when it is $i$-punctually distance-regular for all $i\le m$.
\end{defi} \begin{defi} \label{D2} For a given $j$, $0\le j\le d$, a graph $\Gamma$ is {\em $j$-punctually eigenspace distance-regular} when there exist constants $q_{ij}$ such that \begin{equation}\label{def(b)} \textbf{E}_j\circ \textbf{A}_i = q_{ij}\textbf{A}_i \end{equation} for every $i=0,1,\ldots,D$; and $\Gamma$ is {\em $m$-partially eigenspace distance-regular} when it is $j$-punctually eigenspace distance-regular for all $j\le m$. \end{defi} Notice that the concepts of $D$-partial distance-regularity and $d$-partial \linebreak eigenspace distance-regularity coincide with the known dual definitions of distance-regularity (see \cite{bcn}). Some basic characterizations of punctual distance-regularity, in terms of the distance matrices and the idempotents, were given in \cite{ddfgg10}. \begin{pro}[\cite{ddfgg10}] \label{first-charac(D1)} Let $D=d$. Then, $\Gamma$ is $i$-punctually distance-regular if and only if any of the following conditions holds: \begin{enumerate} \item[$(a1)$] $\textbf{A}_i\in {\cal A}$, \item[$(a2)$] $p_i(\textbf{A})\in {\cal D}$, \item[$(a3)$] $\textbf{A}_i=p_i(\textbf{A})$. \end{enumerate} \end{pro} Following the duality between Definitions \ref{D1} and \ref{D2}, it seems natural to conjecture the dual of this proposition: A graph $\Gamma$ is {\em $j$-punctually eigenspace distance-regular} if and only if any of the following conditions is satisfied: \begin{enumerate} \item[$(b1)$] $\textbf{\emph{E}}_j\in {\cal D}$, \item[$(b2)$] $q_j[\textbf{\emph{A}}]\in {\cal A}$, \item[$(b3)$] $\textbf{\emph{E}}_j=q_j[\textbf{\emph{A}}]$, \end{enumerate} where $f[\textbf{\emph{A}}]=\frac{1}{n}\sum_{i=0}^d f(\lambda_i) \textbf{\emph{A}}_i$. However, although $(b1)$ is clearly equivalent to Definition \ref{D2} and $(b3)\Rightarrow (b1),(b2)$, until now we have not been able to prove any of the other equivalences and we leave them as conjectures. 
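The conjectured equivalences can at least be sanity-checked numerically. The following sketch (not part of the paper; Python with NumPy, using the Petersen graph, whose distance polynomials are $p_0=1$, $p_1=x$, $p_2=x^2-3$) confirms that $(b3)$, $\textbf{\emph{E}}_j=q_j[\textbf{\emph{A}}]$, indeed holds for a distance-regular graph, as it must since then $\textbf{\emph{E}}_j\in{\cal D}$ with constant entries on each distance class:

```python
import itertools
import numpy as np

# Petersen graph: distance-regular with D = d = 2, so E_j = q_j[A]
# certainly holds; here we confirm it numerically.
V = [frozenset(c) for c in itertools.combinations(range(5), 2)]
n = len(V)
A = np.array([[1.0 if not u & v else 0.0 for v in V] for u in V])
I, J = np.eye(n), np.ones((n, n))
A_i = [I, A, J - I - A]                     # distance matrices (diameter 2)

w = np.linalg.eigvalsh(A)
lam, mult = np.unique(np.round(w, 8), return_counts=True)
lam, mult = lam[::-1], mult[::-1]           # lambda_0 > ... > lambda_d
d = len(lam) - 1

# Principal idempotents E_j by Lagrange interpolation in A.
E = []
for lj in lam:
    P = I.copy()
    for lk in lam:
        if lk != lj:
            P = P @ (A - lk * I) / (lj - lk)
    E.append(P)

# Distance polynomials of the Petersen graph: 1, x, x^2 - 3.
p = [np.polynomial.Polynomial(c) for c in ([1.0], [0.0, 1.0], [-3.0, 0.0, 1.0])]
n_i = [pi(lam[0]) for pi in p]              # n_i = p_i(lambda_0)

# q_j[A] = (1/n) sum_i q_j(lambda_i) A_i, with q_j(lambda_i) = m_j p_i(lambda_j)/n_i.
ok = []
for j in range(d + 1):
    qjA = sum(mult[j] * p[i](lam[j]) / n_i[i] * A_i[i] for i in range(d + 1)) / n
    ok.append(np.allclose(E[j], qjA))
print(ok)        # -> [True, True, True]
```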
In order to derive some new characterizations of punctual distance-regularity, besides the already defined $\overline \delta_i$ and $\overline m_j$, we consider the following average numbers: \begin{itemize} \item The {\em average crossed local multiplicities} are \begin{equation}\label{averagecrossed} \overline{m}_{ij}=\frac1{n\overline{\delta}_i}\sum_{\mathop{\rm dist }\nolimits(u,v)=i}m_{uv}(\lambda_j) =\frac{\langle \textbf{\emph{E}}_j,\textbf{\emph{A}}_i\rangle}{\|\textbf{\emph{A}}_i\|^2}, \end{equation} where $m_{uv}(\lambda_j)=(\textbf{\emph{E}}_j)_{uv}$ are the {\em crossed local multiplicities}. \item The {\em average number of shortest $i$-paths from a vertex} is \begin{equation} \label{meanPi} \overline P_i= \frac{1}{n}\sum_{u\in V} P_i(u)=\frac{1}{n}\textrm{sum}\,(\textbf{\emph{A}}^i\circ \textbf{\emph{A}}_i)=\langle \textbf{\emph{A}}^i, \textbf{\emph{A}}_i\rangle= \frac{1}{\omega_i}\langle p_i(\textbf{\emph{A}}), \textbf{\emph{A}}_i\rangle, \end{equation} where $P_i(u)$ denotes the number of shortest paths from a vertex $u$ to the vertices in $\Gamma_i(u)$ and $\omega_i=(\gamma_1 \gamma_2 \cdots \gamma_i)^{-1}$ is the leading coefficient of $p_i$, $i=1,\ldots,d$. \item The {\em average number of shortest $i$-paths} is \begin{equation} \label{mean-aii} \overline a_i^{(i)}=\frac{1}{n\overline\delta_i}\textrm{sum}\,(\textbf{\emph{A}}^i \circ \textbf{\emph{A}}_i)=\frac{\overline P_i}{\overline\delta_i}. \end{equation} \end{itemize} \begin{pro} \label{propo-punt-dr} Let $\Gamma$ be a graph with predistance polynomials $p_i$ and recurrence coefficients $\gamma_i,\alpha_i,\beta_i$, $i=0,1,\ldots, d$. Then, $\Gamma$ is $i$-punctually distance-regular if and only if any of the following equalities holds: \begin{itemize} \item[$(a1)$] $\displaystyle \frac{1}{\overline{\delta}_i} = \sum_{j=0}^d\frac{\overline{m}_{ij}^2}{\overline m_j}$. 
\item[$(a2)$] $ \overline P_i= \frac{1}{\omega_i}\sqrt{p_i(\lambda_0)\overline \delta_i} =\sqrt{\beta_0\beta_1\cdots \beta_{i-1}\overline \delta_i \gamma_i\gamma_{i-1}\cdots \gamma_1}$. \item[$(a3)$] $ \label{bound-aii} \omega_i\overline a_i^{(i)}=1 \quad\mbox{and}\quad \overline \delta_i=p_i(\lambda_0)$. \end{itemize} Moreover, $\Gamma$ is $j$-punctually eigenspace distance-regular if and only if \begin{itemize} \item[$(b1)$] $\displaystyle \overline m_j=\sum_{i=0}^D\overline{\delta}_i\overline{m}_{ij}^2$. \end{itemize} \end{pro} \begin{pf} $(a1)$ This is a result from \cite{ddfgg10}. $(a2)$ From (\ref{meanPi}) and the Cauchy-Schwarz inequality, we get \begin{equation} \label{boundPi} \omega_i \overline P_i =\langle p_i(\textbf{\emph{A}}), \textbf{\emph{A}}_i\rangle \le \|p_i(\textbf{\emph{A}})\|\|\textbf{\emph{A}}_i\|=\sqrt{p_i(\lambda_0)\overline \delta_i} = \sqrt{\frac{\beta_0\beta_1\cdots \beta_{i-1}}{\gamma_1\gamma_2\cdots \gamma_i}\overline \delta_i}. \end{equation} Moreover, equality occurs if and only if the matrices $p_i(\textbf{\emph{A}})$ and $\textbf{\emph{A}}_i$ are proportional, which is equivalent to $\Gamma$ being $i$-punctually distance-regular by Proposition \ref{first-charac(D1)}. $(a3)$ From (\ref{mean-aii}) and (\ref{boundPi}) we have that $\omega_i \overline a_i^{(i)}\le \sqrt{p_i(\lambda_0)/\overline\delta_i}$, with equality if and only if $\Gamma$ is $i$-punctually distance-regular. Thus, if the conditions in $(a3)$ hold, $\Gamma$ satisfies the claimed property. Conversely, if $\Gamma$ is $i$-punctually distance-regular, both equalities in $(a3)$ are simple consequences of $p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$. Indeed, the first one comes from considering the $uv$-entries, with $\mathop{\rm dist }\nolimits(u,v)=i$, in the above matrix equation, whereas the second one is obtained by taking square norms.
$(b1)$ From (\ref{averagecrossed}), we find that the orthogonal projection of $\textbf{\emph{E}}_j$ on ${\cal D}$ is $ \widehat{\textbf{\emph{E}}_j} =\sum_{i=0}^D \overline m_{ij}\textbf{\emph{A}}_i $. Now, from $\|\widehat{\textbf{\emph{E}}_j}\|^2\le \|\textbf{\emph{E}}_j\|^2$ we get $$ \sum_{i=0}^D \overline m_{ij}^2\|\textbf{\emph{A}}_i\|^2 = \sum_{i=0}^D\overline{\delta}_i\overline{m}_{ij}^2\le \overline{m}_j $$ and, in the case of equality, Definition \ref{D2} applies with $q_{ij}=\overline m_{ij}$. \end{pf} Notice the duality between $(a1)$ and $(b1)$, with the roles of $\frac{1}{\overline{\delta}_i}$ and $\overline{m}_j$ interchanged. Now, let us consider the more global concept of partial distance-regularity. In this case, we also have the following new result where, for a given $0\le i\le d$, $s_i=\sum_{j=0}^i p_j$, $t_i=H-s_{i-1}=\sum_{j=i}^d p_j$, $\textbf{\emph{S}}_i=\sum_{j=0}^i \textbf{\emph{A}}_j$, and $\textbf{\emph{T}}_i=\textbf{\emph{J}}-\textbf{\emph{S}}_{i-1}=\sum_{j=i}^d \textbf{\emph{A}}_j$. \begin{pro} \label{charac-pardr} A graph $\Gamma$ is $m$-partially distance-regular if and only if any of the following conditions holds: \begin{itemize} \item[$(a1)$] $\Gamma$ is $i$-punctually distance-regular for $i=m,m-1,\ldots,\max\{2,2m-d\}$. \item[$(a2)$] $\Gamma$ is $m$-punctually distance-regular and $t_{m+1}(\textbf{A})\circ \textbf{S}_m=\textbf{O}$. \item[$(a3)$] $s_i(\textbf{A})=\textbf{S}_i$ for $i=m,m-1$. \end{itemize} \end{pro} \begin{pf} In all cases, the necessity is clear since $p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$ for every $0\le i\le m$ (for $(a2)$, note that $t_{m+1}(\textbf{\emph{A}})=\textbf{\emph{J}}-s_m(\textbf{\emph{A}})$). Then, let us prove sufficiency. The result in $(a1)$ is basically Proposition 3.7 in \cite{ddfgg10}.
In order to prove $(a2)$, we show by (backward) induction that $p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$ and $t_{i+1}(\textbf{\emph{A}})\circ \textbf{\emph{S}}_i=\textbf{\emph{O}}$ for $i= m,m-1,...,0.$ By assumption, these equations are valid for $i=m$. Suppose now that $p_i(\textbf{\emph{A}})=\textbf{\emph{A}}_i$ and $t_{i+1}(\textbf{\emph{A}})\circ \textbf{\emph{S}}_i=\textbf{\emph{O}}$ for some $i>0$. Then, $t_i(\textbf{\emph{A}})\circ \textbf{\emph{S}}_i=\textbf{\emph{A}}_i$ and, multiplying both terms by $\textbf{\emph{S}}_{i-1}$ (with the Hadamard product), we get $t_{i}(\textbf{\emph{A}})\circ \textbf{\emph{S}}_{i-1}=\textbf{\emph{O}}$. So, what remains is to show that $p_{i-1}(\textbf{\emph{A}})=\textbf{\emph{A}}_{i-1}$. To this end, let us consider the following three cases: \begin{itemize} \item[$(i)$] For $\mathop{\rm dist }\nolimits(u,v)>i-1$, we have $(p_{i-1}(\textbf{\emph{A}}))_{uv}=0$. \item[$(ii)$] For $\mathop{\rm dist }\nolimits(u,v)=i-1$, we have $(t_{i+1}(\textbf{\emph{A}}))_{uv}=0$, so $(p_{i-1}(\textbf{\emph{A}}))_{uv}=(s_{i-1}(\textbf{\emph{A}}))_{uv}$ $= (s_{i-1}(\textbf{\emph{A}}))_{uv}+(\textbf{\emph{A}}_{i})_{uv}=(s_{i}(\textbf{\emph{A}}))_{uv}= 1-(t_{i+1}(\textbf{\emph{A}}))_{uv}=1$. \item[$(iii)$] For $\mathop{\rm dist }\nolimits(u,v)<i-1$, we use the recurrence (\ref{recur-pol}) to write \begin{eqnarray*} xt_i=\sum_{j=i}^d xp_j & = & \sum_{j=i}^d (\beta_{j-1}p_{j-1}+\alpha_j p_j+ \gamma_{j+1}p_{j+1})\\ & = & \beta_{i-1}p_{i-1}- \gamma_i p_i + \sum_{j=i}^d(\alpha_j+\beta_j+\gamma_j) p_j \\ & = & \beta_{i-1}p_{i-1}- \gamma_ip_i+ \delta t_i , \end{eqnarray*} which gives $$ \textbf{\emph{A}}t_i(\textbf{\emph{A}})=\beta_{i-1}p_{i-1}(\textbf{\emph{A}}) -\gamma_i \textbf{\emph{A}}_i+ \delta t_i(\textbf{\emph{A}}). 
$$ Then, since $(t_i(\textbf{\emph{A}}))_{uv}=(\textbf{\emph{A}}_i)_{uv}=0$ and $\beta_{i-1}\neq 0$, we get $$ (p_{i-1}(\textbf{\emph{A}}))_{uv}=\frac1{\beta_{i-1}}(\textbf{\emph{A}}t_i(\textbf{\emph{A}}))_{uv}= \frac1{\beta_{i-1}}\sum_{w\in \Gamma(u)}(t_i(\textbf{\emph{A}}))_{wv}=0, $$ because $\mathop{\rm dist }\nolimits(v,w)\le \mathop{\rm dist }\nolimits(v,u)+\mathop{\rm dist }\nolimits(u,w) \le i-1$ for the relevant $w$. \end{itemize} From $(i),(ii),$ and $(iii)$, we have that $p_{i-1}(\textbf{\emph{A}})=\textbf{\emph{A}}_{i-1}$, so by induction $\Gamma$ is $m$-partially distance-regular, and the sufficiency of $(a2)$ is proven. Finally, the sufficiency of $(a3)$ follows from that of $(a2)$ because $s_i(\textbf{\emph{A}})=\textbf{\emph{S}}_i$ for every $i\in\{m-1,m\}$ implies that $p_m(\textbf{\emph{A}})=(s_m-s_{m-1})(\textbf{\emph{A}})=\textbf{\emph{S}}_m-\textbf{\emph{S}}_{m-1}=\textbf{\emph{A}}_m$ and $t_{m+1}(\textbf{\emph{A}})\circ \textbf{\emph{S}}_m=(\textbf{\emph{J}}-s_m(\textbf{\emph{A}}))\circ \textbf{\emph{S}}_m=(\textbf{\emph{J}}-\textbf{\emph{S}}_m)\circ \textbf{\emph{S}}_m= \textbf{\emph{O}}$. \end{pf} Given some vertex $u$ and an integer $i\le \textrm{ecc}(u)$, we denote by $N_i(u)$ the {\em $i$-neighborhood} of $u$, which is the set of vertices that are at distance at most $i$ from $u$. In \cite{f02} it was proved that $s_i(\lambda_0)$ is upper bounded by the harmonic mean of the numbers $|N_i(u)|$ and equality is attained if and only if $s_i(\textbf{\emph{A}})=\textbf{\emph{S}}_i$. A direct consequence of this property and Proposition \ref{charac-pardr}$(a3)$ is the following characterization. \begin{thm} \label{thm-pdr} A graph $\Gamma$ is $m$-partially distance-regular if and only if, for every $i\in \{m-1,m\}$, $$ s_i(\lambda_0) = \frac{n}{\sum_{u\in V}|N_i(u)|^{-1}}. $$ \end{thm} \section{Distance-regular graphs} Let us particularize our results to the case of distance-regular graphs. 
With this aim, we use the following theorem giving some known characterizations. \begin{thm}[\cite{f01,fgy1b}] \label{charac-drg} A graph $\Gamma$ with $d+1$ distinct eigenvalues and diameter $D=d$ is distance-regular if and only if any of the following statements is satisfied: \begin{itemize} \item[$(a)$] $\Gamma$ is $D$-punctually distance-regular. \item[$(b)$] $\Gamma$ is $j$-punctually eigenspace distance-regular for $j=1,d$. \end{itemize} \end{thm} In fact, notice that $(a)$ corresponds to any of the conditions in Proposition \ref{charac-pardr} with $m=d$. Moreover, the duality between $(a)$ and $(b)$ is made apparent when they are stated as follows: \begin{itemize} \item[$(a)$] $\textbf{\emph{A}}_0(=\textbf{\emph{I}}),\textbf{\emph{A}}_1(=\textbf{\emph{A}}),\textbf{\emph{A}}_D\in {\cal A}$; \item[$(b)$] $\textbf{\emph{E}}_0(=\frac{1}{n}\textbf{\emph{J}}),\textbf{\emph{E}}_1,\textbf{\emph{E}}_d\in {\cal D}$. \end{itemize} Then, by using Theorem \ref{charac-drg} and Proposition \ref{propo-punt-dr}$(a1)$ and $(b1)$, and Theorem \ref{thm-pdr} (with $m=d$), we have the spectral excess theorem \cite{fg97} in the next condition $(a)$, its dual form in $(b)$, and its harmonic mean version \cite{f02,vd08} in $(c)$. \begin{thm} A regular graph $\Gamma$ with $D=d$ is distance-regular if and only if any of the following equalities holds: \begin{itemize} \item[$(a)$] $\displaystyle \frac{1}{\overline{\delta}_d}= \sum_{j=0}^d\frac{\overline{m}_{dj}^2}{\overline m_j}$. \item[$(b)$] $\displaystyle \overline m_j=\sum_{i=0}^D\overline{\delta}_i\overline{m}_{ij}^2\ \mbox{ for $j=1,d$}$. \item[$(c)$] $\displaystyle s_{d-1}(\lambda_0) = \frac{n}{\sum_{u\in V}|N_{d-1}(u)|^{-1}}$. 
\end{itemize} \end{thm} In fact, condition $(a)$ is usually written in its equivalent form $\overline{\delta}_d=p_d(\lambda_0)$ as, when $i=d$, the first condition in Proposition \ref{propo-punt-dr}$(a3)$ always holds since $$ \overline a_d^{(d)}=\frac{1}{\overline \delta_d}\langle\textbf{\emph{A}}^d, \textbf{\emph{A}}_d\rangle= \frac{1}{\overline \delta_d \omega_d}\langle H(\textbf{\emph{A}}), \textbf{\emph{A}}_d\rangle= \frac{1}{\overline \delta_d \omega_d}\langle \textbf{\emph{J}}, \textbf{\emph{A}}_d\rangle= \frac{1}{\overline \delta_d \omega_d}\| \textbf{\emph{A}}_d\|^2=\frac{1}{\omega_d}. $$ Notice also that, in $(c)$, we do not need to impose the condition of Theorem \ref{thm-pdr} for $i=d$ since $s_d(\lambda_0)=H(\lambda_0)=n$ and $|N_d(u)|=n$ for every $u\in V$.
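As a numerical illustration (not part of the paper; Python with NumPy), the spectral excess condition $(a)$ can be checked directly on the Petersen graph: the idempotents are computed by Lagrange interpolation in $\textbf{\emph{A}}$, the averages $\overline{\delta}_d$, $\overline{m}_j$ and $\overline{m}_{dj}$ from the scalar product (\ref{equationscalarproduct}), and equality holds with $\overline{\delta}_d=p_d(\lambda_0)=6$:

```python
import itertools
import numpy as np

# Petersen graph: distance-regular with D = d = 2, so the spectral
# excess equality (condition (a)) must hold with equality.
V = [frozenset(c) for c in itertools.combinations(range(5), 2)]
n = len(V)
A = np.array([[1.0 if not u & v else 0.0 for v in V] for u in V])
I, J = np.eye(n), np.ones((n, n))
A_d = J - I - A                            # distance-d matrix (here d = 2)

w = np.linalg.eigvalsh(A)
lam = np.unique(np.round(w, 8))[::-1]      # lambda_0 > ... > lambda_d

# Principal idempotents E_j by Lagrange interpolation in A.
E = []
for lj in lam:
    P = I.copy()
    for lk in lam:
        if lk != lj:
            P = P @ (A - lk * I) / (lj - lk)
    E.append(P)

sq = lambda M: np.trace(M @ M.T) / n       # ||M||^2 = (1/n) tr(M M^T)
delta_d = sq(A_d)                          # average degree of Gamma_d
m_bar = [sq(Ej) for Ej in E]               # average multiplicities m_j / n
m_dj = [np.sum(Ej * A_d) / n / delta_d for Ej in E]   # <E_j,A_d>/||A_d||^2

lhs = 1.0 / delta_d
rhs = sum(md ** 2 / mb for md, mb in zip(m_dj, m_bar))
print(np.isclose(lhs, rhs), np.isclose(delta_d, 6.0))
# -> True True
```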
{ "timestamp": "2012-07-17T02:03:06", "yymm": "1207", "arxiv_id": "1207.3606", "language": "en", "url": "https://arxiv.org/abs/1207.3606", "subjects": "Combinatorics (math.CO)", "title": "Dual concepts of almost distance-regularity and the spectral excess theorem" }
https://arxiv.org/abs/1112.0098
The fast track to Löwner's theorem
The operator monotone functions defined in the positive half-line are of particular importance. We give a version of the theory in which integral representations for these functions can be established directly without invoking Löwner's detailed analysis of matrix monotone functions of a fixed order or the theory of analytic functions. We find a canonical relationship between positive and arbitrary operator monotone functions defined in the positive half-line, and this result effectively reduces the theory to the case of positive functions. MSC2010 classification: 26A48; 26A51; 47A63. Key words and phrases: operator monotone function; integral representation; Löwner's theorem.
\section{Introduction and preliminaries} The functional calculus is defined by the spectral theorem. Since we only deal with matrices, the function $ f(x) $ of a hermitian matrix $ x $ is defined for any function $ f $ defined on the spectrum of $ x. $ \begin{definition} Let $I$ be an interval of any type. A function $ f\colon I \to\mathbf R $ is said to be \textit{n-matrix monotone} (or just \textit{n-monotone}) if \[ x\le y\quad\Rightarrow\quad f(x)\leq f(y) \] for every pair of $ n\times n $ hermitian matrices $ x $ and $y$ with spectra in $I.$ \end{definition} \begin{definition} Let $I$ be an interval of any type. A function $ f\colon I \to\mathbf R $ is said to be \textit{n-matrix convex} (or just \textit{n-convex}) if \[ f(\lambda x+(1-\lambda)y)\le\lambda f(x)+(1-\lambda)f(y) \] for every $ \lambda\in[0,1] $ and every pair of $ n\times n $ hermitian matrices $ x $ and $y$ with spectra in $I$. \end{definition} Note that the spectrum of the matrix $ \lambda x+(1-\lambda)y $ in the definition is automatically contained in $ I. $ The functional calculus on the left-hand side is therefore well-defined. We realise that a point-wise limit of $ n $-monotone ($ n $-convex) functions is $ n $-monotone ($ n $-convex). \begin{definition} A function $ f\colon I \to\mathbf R $ defined in an interval $ I $ is said to be operator monotone (operator convex) if it is $ n $-monotone ($ n $-convex) for all natural numbers $ n. $ \end{definition} We realise that a point-wise limit of operator monotone (operator convex) functions is operator monotone (operator convex). \subsection{Other proofs of Löwner's theorem} Karl Löwner\footnote{Karel Löwner was a Czech known under his German name Karl Löwner.
Fleeing the Nazis in 1939 he moved to the United States and changed his name to Charles Loewner.} \cite{kn:loewner:1934} analyses in great detail matrix monotone functions of a fixed order and then arrives at the characterisation of operator monotone functions by means of interpolation theory. Wigner and von Neumann give in \cite{kn:wigner:1954} a new proof of Löwner's theorem based on continued fractions which is almost never cited. Bendat and Sherman \cite{kn:bendat:1955} give a new proof of Löwner's theorem that relies on Löwner's detailed analysis of matrix monotone functions of a fixed order but combines it with the Hamburger moment problem. They also rely on Kraus \cite{kn:kraus:1936} to essentially prove that a function is operator convex if and only if the secant-slope function is operator monotone. Kor{\'a}nyi \cite{kn:koranyi:1956} gives a new proof of Löwner's theorem by using a variant of Löwner's characterisation of matrix monotone functions of a fixed order and spectral theory for unbounded self-adjoint operators. The monograph of Donoghue \cite{kn:donoghue:1974} follows \cite{kn:loewner:1934} closely but introduces some simplifications. Sparr \cite{kn:sparr:1980} gives a new proof of Löwner's theorem that combines Löwner's characterisation of matrix monotone functions of a fixed order with the theory of interpolation spaces. The paper \cite{kn:hansen:1982} by Pedersen and the author introduces the idea of first determining the extreme operator monotone functions and then obtaining Löwner's theorem by applying Krein-Milman's theorem. The paper does not rely on Löwner's detailed analysis of matrix monotone functions but uses algebraic methods based on Jensen's operator inequality. The proof in \cite{kn:hansen:1982} is used in a number of other sources including the book of Bhatia \cite{kn:bhatia:1997}.
Ameur \cite{kn:ameur:2003} combines the techniques of applying Jensen's operator inequality as in \cite{kn:hansen:1982} with interpolation theory in the sense of Foiaş-Lions to obtain a new proof of Löwner's theorem. \section{Matrix monotonicity and matrix concavity} There is a striking connection between matrix monotonicity and matrix concavity for functions defined in an interval extending to plus infinity. \begin{theorem}\label{theorem: 2n-monotone function is n-concave} Let $ f:(0,\infty)\to{\mathbf R} $ be a $ 2n $-monotone function where $ n\ge 1. $ Then $ f $ is matrix concave of order $ n. $ In particular, $ f $ is continuous. \end{theorem} \begin{proof} Let $ x_1,x_2 $ be positive definite matrices of order $ n $ and take $ s\in[0,1]. $ We consider the unitary block matrix $ V $ of order $ 2n\times 2n $ given by \[ V=\left(\begin{array}{cc} s^{1/2} & -(1-s)^{1/2}\\[0.5ex] (1-s)^{1/2} & s^{1/2} \end{array}\right) \] and obtain by an elementary calculation that \[ V^*\left(\begin{array}{cc} x_1 & 0\\ 0 & x_2 \end{array}\right)V =\left(\begin{array}{cc} s x_1+(1-s)x_2 & s^{1/2}(1-s)^{1/2}(x_2-x_1)\\[0.5ex] s^{1/2}(1-s)^{1/2}(x_2-x_1) & (1-s)x_1+s x_2 \end{array}\right). \] We set $ d=-s^{1/2}(1-s)^{1/2}(x_2-x_1) $ and notice that to a given $ \varepsilon>0 $ the difference \[ \begin{array}{l} \left(\begin{array}{cc} s x_1+(1-s)x_2+\varepsilon & 0\\[0.5ex] 0 & 2\lambda \end{array}\right)- V^*\left(\begin{array}{cc} x_1 & 0\\[0.5ex] 0 & x_2 \end{array}\right)V \\[3ex] \ge\left(\begin{array}{cc} \varepsilon & d\\[0.5ex] d & \lambda \end{array}\right) \qquad\qquad\text{for}\quad\displaystyle \lambda\ge (1-s)x_1+s x_2. \end{array} \] Since the last block matrix is positive semi-definite for $ \lambda\ge \varepsilon^{-1}\|d\|^2 $ we realize that \[ V^*\left(\begin{array}{cc} x_1 & 0\\ 0 & x_2 \end{array}\right)V\le\left(\begin{array}{cc} s x_1+(1-s)x_2+\varepsilon & 0\\[0.5ex] 0 & 2\lambda \end{array}\right) \] for a sufficiently large $ \lambda>0. 
$ Since $ f $ is $ 2n $-monotone we then obtain \[ f\left(V^*\left(\begin{array}{cc} x_1 & 0\\[0.5ex] 0 & x_2 \end{array}\right)V\right) \le\left(\begin{array}{cc} f\left(sx_1+(1-s)x_2+\varepsilon\right) & 0\\[0.5ex] 0 & f(2\lambda) \end{array}\right) \] for such $ \lambda, $ and since \[ \begin{array}{l} f\left(V^*\left(\begin{array}{cc} x_1 & 0\\[0.5ex] 0 & x_2 \end{array}\right)V\right) =V^*\left(\begin{array}{cc} f(x_1) & 0\\[0.5ex] 0 & f(x_2) \end{array}\right)V\\[3ex] =\left(\begin{array}{cc} s f(x_1)+(1-s)f(x_2) & s^{1/2}(1-s)^{1/2}(f(x_2)-f(x_1))\\[0.5ex] s^{1/2}(1-s)^{1/2}(f(x_2)-f(x_1)) & (1-s)f(x_1)+ s f(x_2) \end{array}\right) \end{array} \] we realize that \begin{equation}\label{epsilon concave function} s f(x_1)+(1-s) f(x_2)\le f\left(s x_1+(1-s) x_2+\varepsilon\right). \end{equation} Since $ f $ is monotone the right limit $ f^+ $ defined by setting \[ f^+(t)=\displaystyle\lim_{\varepsilon\searrow 0} f(t+\varepsilon)\qquad t>0 \] is well-defined. For positive numbers $ t_1,t_2>0 $ we obtain \[ \begin{array}{rl} sf^+(t_1)+(1-s)f^+(t_2)&\le sf(t_1+\varepsilon)+(1-s)f(t_2+\varepsilon)\\[1ex] &\le f\left(st_1+(1-s)t_2+2\varepsilon\right), \end{array} \] where the first inequality follows from the definition of the right limit and the second follows from inequality (\ref{epsilon concave function}) by setting $ x_1=t_1+\varepsilon $ and $ x_2=t_2+\varepsilon. $ By letting $ \varepsilon $ tend to zero we then obtain \[ sf^+(t_1)+(1-s)f^+(t_2)\le f^+\left(s t_1+(1-s)t_2\right), \] therefore $ f^+ $ is concave and thus continuous. Since $ f $ is monotone increasing we have \[ f^+(t-\varepsilon)\le f(t)\le f^+(t)\qquad t>0,\, 0<\varepsilon<t, \] and since $ f^+ $ is continuous we obtain $ f=f^+ $ by letting $ \varepsilon $ tend to zero. 
Finally, since we established that $ f $ is continuous, we may let $ \varepsilon $ tend to zero in inequality (\ref{epsilon concave function}) to obtain \[ sf(x_1)+(1-s)f(x_2)\le f\left(s x_1+(1-s)x_2\right), \] showing that $ f $ is $ n $-concave. \end{proof} The above theorem, with the added condition that $ f $ is continuous, was proved by Mathias \cite{kn:mathias:1991}. That a $ 4n $-monotone function defined in the positive half-line is $ n $-concave already follows from \cite[proofs of 2.5.~Theorem and 2.1.~Theorem]{kn:hansen:1982}. The idea of the above proof is taken from \cite{kn:hansen:2003:1}. \begin{corollary}\label{operator monotonicity implies operator concavity} An operator monotone function $ f:(0,\infty)\to{\mathbf R} $ is automatically operator concave. \end{corollary} It is essential for the above result that the function is defined in an interval stretching out to infinity. Without this assumption there are easy counterexamples. \begin{theorem} Let $ f:(0,\infty)\to\mathbf R $ be a non-negative function which is $ n $-concave for some $ n\ge 1. $ Then $ f $ is also $ n $-monotone. \end{theorem} \begin{proof} Let $ x $ and $ y $ be positive definite $ n\times n $ matrices with $ x< y $ and take $ \lambda $ in the open interval $ (0,1). $ We may write \[ \lambda y=\lambda x +(1-\lambda )\bigl(\lambda(1-\lambda)^{-1}(y-x)\bigr) \] as a convex combination of two positive definite matrices. Since $ f $ is $ n $-concave we thus obtain \[ f(\lambda y)\ge\lambda f(x)+(1-\lambda)f(\lambda(1-\lambda)^{-1}(y-x))\ge \lambda f(x), \] where we used that $ f $ is non-negative. Since $ f $ is continuous we obtain $ f(x)\le f(y) $ by letting $ \lambda\to 1. $ In the general case, where just $ x\le y, $ we have \[ \mu x<x\le y\quad\text{for }\, 0<\mu<1, \] since $ x $ is positive definite, and then obtain $ f(\mu x)\le f(y). $ The assertion now follows by letting $ \mu\to 1. $ \end{proof} The above proof is taken from \cite[2.5.~Theorem]{kn:hansen:1982}.
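The matrix concavity in Theorem~\ref{theorem: 2n-monotone function is n-concave} is easy to probe numerically. The following sketch (added for illustration; it is not part of the original argument) checks the inequality $ s f(x_1)+(1-s)f(x_2)\le f(s x_1+(1-s)x_2) $ for the operator monotone square root on a pair of $ 2\times 2 $ positive definite matrices. It uses the closed form $ \sqrt A=(A+\sqrt{\det A}\,I)/\sqrt{\operatorname{tr}A+2\sqrt{\det A}} $ for $ 2\times 2 $ matrices and the fact that a symmetric $ 2\times 2 $ matrix is positive semi-definite exactly when its trace and determinant are non-negative:

```python
import math

def mat_sqrt_2x2(A):
    # closed-form square root of a 2x2 symmetric positive definite matrix:
    # sqrt(A) = (A + sqrt(det A) I) / sqrt(tr A + 2 sqrt(det A))
    d = math.sqrt(A[0][0]*A[1][1] - A[0][1]*A[1][0])
    t = math.sqrt(A[0][0] + A[1][1] + 2*d)
    return [[(A[i][j] + (d if i == j else 0.0))/t for j in range(2)]
            for i in range(2)]

def combo(s, A, t, B):
    # the linear combination s*A + t*B of two 2x2 matrices
    return [[s*A[i][j] + t*B[i][j] for j in range(2)] for i in range(2)]

x1 = [[2.0, 1.0], [1.0, 2.0]]
x2 = [[5.0, 0.0], [0.0, 1.0]]
s = 0.4

lhs = combo(s, mat_sqrt_2x2(x1), 1 - s, mat_sqrt_2x2(x2))  # s f(x1) + (1-s) f(x2)
rhs = mat_sqrt_2x2(combo(s, x1, 1 - s, x2))                # f(s x1 + (1-s) x2)
C = combo(1.0, rhs, -1.0, lhs)                             # should be positive semi-definite

trace_C = C[0][0] + C[1][1]
det_C = C[0][0]*C[1][1] - C[0][1]*C[1][0]
print(trace_C, det_C)
```

For these matrices both the trace and the determinant of the difference come out positive, as the theorem predicts.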
\begin{corollary}\label{monotonicity and concavity} A function mapping the positive half-line into itself is operator monotone if and only if it is operator concave. \end{corollary} \subsection{Regularization} The following regularization procedure is standard, cf. for example \cite[Page 11]{kn:donoghue:1974}. Let $ \varphi $ be a positive and even $ C^{\infty} $-function defined in the real line, vanishing outside the closed interval $ [-1,1] $ and normalized such that \[ \int_{-1}^1 \varphi(x)\,dx=1. \] For any locally integrable function $ f $ defined in an open interval $ (a,b), $ where possibly $ b=\infty, $ we form, for small $ \varepsilon>0, $ its regularization, \[ f_\varepsilon(t)=\frac{1}{\varepsilon}\int_a^b \varphi\left(\frac{t-s}{\varepsilon}\right)f(s)\,ds\qquad t\in(a+\varepsilon,b-\varepsilon), \] and realize that it is infinitely many times differentiable. We may also write \[ f_\varepsilon(t)=\int_{-1}^1\varphi(s)f(t-\varepsilon s)\,ds\qquad t\in(a+\varepsilon,b-\varepsilon). \] If $ f $ is continuous, then $ f_\varepsilon $ is eventually well-defined and converges uniformly towards $ f $ on any compact subinterval of $ (a,b). $ In particular, for each $ t\in(a,b), $ the net $ f_\varepsilon(t) $ is well-defined for sufficiently small $ \varepsilon $ and converges to $ f(t) $ as $ \varepsilon $ tends to zero. Suppose now that $ f $ is $ n $-monotone in $ (0,\infty) $ for $ n\ge 2. $ We notice that $ f $ is continuous by Theorem~\ref{theorem: 2n-monotone function is n-concave}. It follows from the last integral representation that $ f_\varepsilon $ is $ n $-monotone in the interval $ (\varepsilon,\infty) $ for $ \varepsilon>0. $ We realise that the restriction of $ f $ to any compact interval $ J $ in $ (0,\infty) $ is the uniform limit of a sequence of $ n $-monotone functions that are infinitely many times differentiable in a neighbourhood of $ J. $ A similar statement is obtained for $ n $-convex functions defined in an open interval $ (a,b).
$ Notice that in this case the continuity is immediate. \section{Bendat and Sherman's theorem} For a differentiable function $ f\colon I\to\mathbf R $ the (first) divided difference $ [t, s]_f $ for $ t,s\in I $ is defined by \[ [t, s]_f=\left\{ \begin{array}{ll}\displaystyle \frac{f(t)-f(s)}{t-s}\quad &t\neq s\\[2ex] f'(t)& t=s, \end{array}\right. \] and the Löwner matrix $ L(\lambda_1,\dots,\lambda_n) $ is defined by setting \[ L(\lambda_1,\dots,\lambda_n)=\Big([\lambda_i, \lambda_j]_f\Big)_{i,j=1}^n \] for $ \lambda_1,\dots,\lambda_n\in I. $ Notice that a Löwner matrix is linear in the function $ f. $ If $ f $ is twice continuously differentiable the second divided difference $ [t,s,r]_f $ for distinct numbers $ s,t,r\in I $ is defined by setting \[ [t,s,r]_f=\frac{[t,s]_f-[s,r]_f}{t-r} \] and the definition is then extended by continuity to arbitrary numbers $ t,s,r\in I. $ Notice that in this way $ [t,t,t]_f=f''(t)/2. $ Divided differences are symmetric in the entries. \begin{lemma} Let $ f $ be a real function in $ C^1(I), $ where $ I $ is an open interval, and let $ x $ be an $ n\times n $ diagonal matrix with diagonal elements $ \lambda_1,\dots,\lambda_n\in I. $ The function $ t\to f(x+th) $ is defined in a neighbourhood of zero for any hermitian $ n\times n $ matrix $ h=(h_{i,j})_{i,j=1}^n $ and \[ \df{}{t} \bigl(f(x+th)\xi\mid\xi\bigr)\Big|_{t=0}=(h\circ L(\lambda_1,\dots,\lambda_n)\xi\mid\xi)\qquad\xi\in\mathbf C^n, \] where $ h\circ L(\lambda_1,\dots,\lambda_n) $ denotes the Hadamard (entry-wise) product of $ h $ and $ L(\lambda_1,\dots,\lambda_n). $ \end{lemma} \begin{proof} We first prove the lemma for a monomial $ f(t)=t^m, $ where $ m\ge 1 $ is an integer. 
The first divided difference \[ \begin{array}{l} \displaystyle [\lambda_i,\lambda_j]_f=\frac{\lambda_i^m-\lambda_j^m}{\lambda_i-\lambda_j} =\lambda_i^{m-1}+\lambda_i^{m-2}\lambda_j+\cdots+\lambda_i\lambda_j^{m-2}+\lambda_j^{m-1}\\[3ex] =\displaystyle\sum_{a+b=m-1}\lambda_i^a\lambda_j^b \end{array} \] and this holds also for $ \lambda_i=\lambda_j. $ Therefore, \[ \begin{array}{l} \displaystyle (h\circ L(\lambda_1,\dots,\lambda_n)\xi\mid\xi)=\sum_{i=1}^n \bigl(h\circ L(\lambda_1,\dots,\lambda_n)\xi\bigr)_i\bar\xi_i\\[2ex] =\displaystyle\sum_{i,j=1}^n h_{i,j} \sum_{a+b=m-1}\lambda_i^a\lambda_j^b\xi_j\bar\xi_i =\sum_{a+b=m-1} (x^a h x^b\xi\mid\xi) \end{array} \] which is the first order term in $ t $ of $ \bigl((x+th)^m\xi\mid\xi\bigr). $ By using linearity the statement of the lemma follows for arbitrary polynomials. The general case then follows by approximation. \end{proof} \begin{theorem}\label{characterization in terms of Loewner matrices} Let $ f $ be a real function in $ C^1(I), $ where $ I $ is an open interval and take a natural number $ n\ge 1. $ Then $ f $ is $ n $-monotone if and only if the Löwner matrix $ L(\lambda_1,\dots,\lambda_n) $ is positive semi-definite for all sequences $ \lambda_1,\dots,\lambda_n\in I. $ \end{theorem} \begin{proof} It follows from classical analysis that $ f $ is $ n $-monotone if and only if \[ \df{}{t} \bigl(f(x+th)\xi\mid\xi\bigr)\Big|_{t=0}\ge 0 \] for every hermitian $ x $ with spectrum in $ I, $ every positive semi-definite matrix $ h, $ and every $ \xi\in\mathbf C^n. $ We may now choose $ x $ as a diagonal matrix with diagonal elements $ \lambda_1,\dots,\lambda_n\in I. $ By choosing $ h $ as the positive semi-definite matrix with $ h_{i,j}=1 $ for $ i,j=1,\dots,n $ we realise that $ L(\lambda_1,\dots,\lambda_n)\ge 0 $ if $ f $ is $ n $-monotone.
Since the Hadamard product of two positive semi-definite matrices is positive semi-definite (indeed, it is a principal submatrix of the tensor product), we realise that $ f $ is $ n $-monotone if all the Löwner matrices $ L(\lambda_1,\dots,\lambda_n)\ge 0 $ for arbitrary $ \lambda_1,\dots,\lambda_n\in I. $ \end{proof} \begin{definition} Take a function $ f\in C^2(I), $ where $ I $ is an open interval and (not necessarily distinct) numbers $ \lambda_1,\dots,\lambda_n\in I. $ The associated Kraus~\cite{kn:kraus:1936, kn:hansen:1997:2} matrices $ H(1),\dots,H(n) $ are defined by setting \[ H(p)=2\Big([\lambda_p,\lambda_i,\lambda_j]_f\Big)_{i,j=1}^n \] for $ p=1,\dots,n. $ \end{definition} Notice that a Kraus matrix is linear in the function $ f. $ \begin{lemma} Let $ f $ be a real function in $ C^2(I), $ where $ I $ is an open interval and let $ x $ be an $ n\times n $ diagonal matrix with diagonal elements $ \lambda_1,\dots,\lambda_n\in I. $ The function $ t\to f(x+th) $ is defined in a neighbourhood of zero for any hermitian $ n\times n $ matrix $ h=(h_{i,j})_{i,j=1}^n $ and \[ \dto{}{t} \bigl(f(x+th)\xi\mid\xi\bigr)\Big|_{t=0}=\sum_{p=1}^n \bigl(H(p) \eta(p)\mid \eta(p)\bigr), \] where \begin{enumerate}[(i)] \item $ \xi=(\xi_1,\dots,\xi_n) $ is a vector in $ \mathbf C^n. $ \item $ H(1),\dots,H(n) $ are the Kraus matrices associated with $ f $ and $ \lambda_1,\dots,\lambda_n. $ \item $ \eta(p)=\bigl(\xi_1 h_{p,1},\dots,\xi_n h_{p,n}\bigr) $ for $ p=1,\dots,n. $ \end{enumerate} \end{lemma} \begin{proof} We first prove the lemma for a monomial $ f(t)=t^m, $ where $ m\ge 2 $ is an integer.
Since \[ \begin{array}{l} \displaystyle [\lambda_p,\lambda_i]_f-[\lambda_i,\lambda_j]_f\\[2ex] \displaystyle=\lambda_p^{m-1}+\lambda_p^{m-2}\lambda_i+\cdots+\lambda_p\lambda_i^{m-2}+\lambda_i^{m-1}\\[1ex] \hskip 10em-\bigl(\lambda_i^{m-1}+\lambda_i^{m-2}\lambda_j+\cdots+\lambda_i\lambda_j^{m-2}+\lambda_j^{m-1}\bigr)\\[2ex] =\lambda_i^{m-2}(\lambda_p-\lambda_j)+\lambda_i^{m-3}(\lambda_p^2-\lambda_j^2)+\cdots+ \lambda_i(\lambda_p^{m-2}-\lambda_j^{m-2})+\lambda_p^{m-1}-\lambda_j^{m-1} \end{array} \] the second divided difference \[ \begin{array}{l} \displaystyle[\lambda_p,\lambda_i,\lambda_j]_f=\frac{[\lambda_p,\lambda_i]_f-[\lambda_i,\lambda_j]_f}{\lambda_p-\lambda_j}\\[2ex] =\lambda_i^{m-2}+\lambda_i^{m-3}(\lambda_p+\lambda_j)+\lambda_i^{m-4}(\lambda_p^2+\lambda_p\lambda_j+\lambda_j^2)+\cdots\\[1ex] \hskip 8em +\,\lambda_i(\lambda_p^{m-3}+\lambda_p^{m-4}\lambda_j+\cdots+\lambda_p\lambda_j^{m-4}+\lambda_j^{m-3})\\[1ex] \hskip 10em+\,(\lambda_p^{m-2}+\lambda_p^{m-3}\lambda_j+\cdots+\lambda_p\lambda_j^{m-3}+\lambda_j^{m-2})\\[2ex] =\displaystyle\sum_{a+b+c=m-2}\lambda_p^a\lambda_i^b\lambda_j^c\,, \end{array} \] where summation limits should be properly interpreted, and the case $ \lambda_p=\lambda_j $ is handled separately. We then obtain \[ \begin{array}{l} \displaystyle\sum_{p=1}^n \bigl(H(p)\eta(p)\mid\eta(p)\bigr)=\sum_{p,i,j=1}^n 2[\lambda_p,\lambda_i,\lambda_j]_f \xi_j h_{p,j} \bar\xi_i \bar h_{p,i}\\[2ex] =\displaystyle 2\sum_{i,j,p=1}^n\sum_{a+b+c=m-2}\lambda_p^a\lambda_i^b\lambda_j^c \xi_j h_{p,j}\bar\xi_i \bar h_{p,i}= 2\sum_{a+b+c=m-2} (x^b h x^a h x^c\xi\mid\xi) \end{array} \] which is the second derivative at $ t=0, $ that is, twice the second order term in $ t, $ of $ \bigl((x+th)^{m}\xi\mid\xi\bigr). $ By using linearity the statement of the lemma follows for arbitrary polynomials. The general case then follows by approximation. \end{proof} \begin{theorem}\label{convexity in terms of Kraus matrices} Let $ f $ be a real function in $ C^2(I), $ where $ I $ is an open interval.
Then $ f $ is $ n $-convex if and only if the Kraus matrices associated with $ f $ and any choice of $ \lambda_1,\dots,\lambda_n\in I $ are positive semi-definite. \end{theorem} \begin{proof} The sufficiency of the conditions is obvious from the above lemma. Assume now that $ f $ is $ n $-convex and choose $ \lambda_1,\dots,\lambda_n\in I. $ Take a fixed $ p=1,\dots,n $ and a fixed vector $ \eta\in\mathbf C^n. $ Given $ \varepsilon>0 $ we choose a vector $ \xi $ by setting $ \xi_i=\varepsilon^{-1} $ for $ i\ne p $ and $ \xi_p=1. $ We then choose a vector $ a $ by setting \[ a_i=\frac{\eta_i}{\xi_i}=\left\{\begin{array}{ll} \varepsilon\eta_i &i\ne p\\[0.5ex] \eta_p &i=p. \end{array}\right. \] We finally choose a self-adjoint (actually a positive semi-definite) matrix $ h $ by setting $ h_{i,j}=\bar a_i a_j $ for $ i,j=1,\dots,n $ and calculate \[ \eta(q)_i=\xi_i h_{q,i}=\xi_i \bar a_q a_i\qquad q=1,\dots,n. \] With these choices we obtain \[ \eta(p)=\bar\eta_p\,\eta\qquad\text{and}\qquad\eta(q)=\varepsilon\bar\eta_q\,\eta\quad\text{for}\quad q\ne p. \] Therefore, \[ \dto{}{t} \bigl(f(x+th)\xi\mid\xi\bigr)\Big|_{t=0}=|\eta_p|^2 \bigl(H(p)\eta\mid\eta\bigr)+ \varepsilon^2\sum_{q\ne p}^n |\eta_q|^2 \bigl(H(q)\eta\mid\eta\bigr) \] is non-negative, and since $ \eta $ is a fixed vector we obtain \[ |\eta_p|^2 \bigl(H(p)\eta\mid\eta\bigr)\ge 0 \] by letting $ \varepsilon $ tend to zero. In particular, $ \bigl(H(p)\eta\mid\eta\bigr)\ge 0 $ for all vectors $ \eta\in\mathbf C^n $ with $ \eta_p\ne 0. $ By continuity we finally realize that $ H(p) $ is positive semi-definite. \end{proof} \begin{theorem}[Bendat and Sherman]\label{theorem: Bendat and Sherman} Let $ f $ be a real function in $ C^2(I), $ where $ I $ is an open interval. Then $ f $ is operator convex if and only if the function \[ g(t)=\left\{\begin{array}{ll} \displaystyle\frac{f(t)-f(t_0)}{t-t_0}\quad &t\ne t_0\\[2.5ex] f'(t_0) &t=t_0 \end{array}\right. \] is operator monotone for each $ t_0\in I.
$ \end{theorem} \begin{proof} Using the symmetry of divided differences we realise that \[ [\lambda_i,\lambda_j]_g=[t_0,\lambda_i,\lambda_j]_f\qquad i,j=1,\dots,n. \] If $ f $ is $ (n+1) $-convex then $ g $ is $ n $-monotone for each $ t_0\in I $ by Theorem~\ref{characterization in terms of Loewner matrices} and Theorem~\ref{convexity in terms of Kraus matrices}. Conversely, if $ g $ is $ n $-monotone for each $ t_0\in I $ then $ f $ is $ n $-convex. \end{proof} \subsection{Further preparations} \begin{theorem}[Bendat and Sherman]\label{theorem: Bendat and Sherman, general case} Let $ f $ be an operator convex function defined in the positive half-line. Then $ f $ is differentiable, and the function \[ g(t)=\left\{\begin{array}{ll} \displaystyle\frac{f(t)-f(t_0)}{t-t_0}\quad &t\ne t_0\\[2.5ex] f'(t_0) &t=t_0 \end{array}\right. \] is operator monotone for each $ t_0>0. $ \end{theorem} \begin{proof} Suppose $ f $ is operator convex, thus in particular continuous. Using regularization (with $ \varepsilon < t_0 $) we obtain $ f $ as the point-wise limit, for $ \varepsilon\to 0, $ of a sequence $ (f_{\varepsilon})_{\varepsilon>0} $ of infinitely differentiable operator convex functions. The functions \[ g_\varepsilon(t)=\left\{\begin{array}{ll} \displaystyle\frac{f_\varepsilon(t)-f_\varepsilon(t_0)}{t-t_0}\quad &t\ne t_0\\[3ex] \displaystyle f_\varepsilon'(t_0)\quad &t=t_0 \end{array}\right. \] are operator monotone in $ (\varepsilon,\infty) $ by Theorem~\ref{theorem: Bendat and Sherman}. In addition, $ g_\varepsilon(t)\to g(t) $ for $ t\ne t_0. $ Since $ f $ is convex the set of derivatives $ \{f'_\varepsilon(t_0)\} $ is bounded for small $ \varepsilon<t_0. $ A subsequence of $ (g_\varepsilon)_{\varepsilon>0} $ therefore converges towards an operator monotone function which is continuous according to Theorem~\ref{theorem: 2n-monotone function is n-concave}.
But then $ f $ is differentiable at $ t_0 $ and we conclude that $ f'(t_0)=\lim_{\varepsilon\to 0} f_\varepsilon'(t_0). $ \end{proof} In the above proof we also learn that $ f'_\varepsilon(t)\to f'(t) $ for every $ t\in (0,\infty), $ where $ f_\varepsilon $ is the regularization of $ f. $ In connection with Corollary~\ref{monotonicity and concavity} we obtain \begin{corollary}\label{corollary: automatic differentiability} An operator monotone or operator convex function $ f $ defined in the positive half-line is automatically differentiable and $ f'_\varepsilon(t)\to f'(t) $ for every $ t\in(0,\infty), $ where $ f_\varepsilon $ is the regularization of $ f. $ \end{corollary} By applying regularization of $ f $ and then appealing to Theorem~\ref{characterization in terms of Loewner matrices} and Corollary~\ref{corollary: automatic differentiability} we obtain: \begin{corollary}\label{corollary: positivity of Loewner matrices} Let $ f $ be an operator monotone function defined in the positive half-line. The Löwner matrices $ L(\lambda_1,\dots,\lambda_n) $ associated with $ f $ are well-defined and positive semi-definite for arbitrary $ \lambda_1,\dots,\lambda_n\in(0,\infty). $ \end{corollary} \begin{corollary}\label{the derivative of a non-constant 2-monotone function cannot have a zero} Let $ f $ be an operator monotone function defined in the positive half-line. If the derivative $ f'(t)=0 $ at any point $ t>0, $ then $ f $ is a constant function. \end{corollary} \begin{proof} The Löwner matrix \[ L(t,s)=\begin{pmatrix} f'(t) & [t,s]_f\\[0.5ex] [s,t]_f & f'(s) \end{pmatrix}\qquad t\ne s \] is well-defined and positive semi-definite by Corollary~\ref{corollary: positivity of Loewner matrices}, thus \[ f'(t)f'(s)\ge\left(\frac{f(t)-f(s)}{t-s}\right)^2. \] If $ f'(t)=0, $ then necessarily $ f(s)=f(t) $ for every $ s>0. $ \end{proof} Notice that for this result we only need $ 2 $-monotonicity of $ f, $ cf.~\cite{kn:hansen:2007:1}.
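The Löwner matrix criterion of Theorem~\ref{characterization in terms of Loewner matrices} and the determinant inequality above can be illustrated numerically. The sketch below (an addition for illustration; it is not part of the original text) verifies that the Löwner matrices of the operator monotone function $ t\to\sqrt t $ have positive leading principal minors, while already the $ 2\times 2 $ Löwner matrix of $ t\to e^t, $ which is increasing but not $ 2 $-monotone, has negative determinant:

```python
import math

def loewner(f, df, lam):
    # Löwner matrix ([lam_i, lam_j]_f) built from first divided differences
    def dd(t, s):
        return df(t) if t == s else (f(t) - f(s))/(t - s)
    n = len(lam)
    return [[dd(lam[i], lam[j]) for j in range(n)] for i in range(n)]

def leading_minors(M):
    # determinants of the leading principal submatrices, via the pivots
    # of Gaussian elimination (product of the first k pivots = k-th minor)
    n = len(M)
    A = [row[:] for row in M]
    minors, det = [], 1.0
    for k in range(n):
        det *= A[k][k]
        minors.append(det)
        for i in range(k + 1, n):
            r = A[i][k]/A[k][k]
            for j in range(k, n):
                A[i][j] -= r*A[k][j]
    return minors

sqrt_minors = leading_minors(
    loewner(math.sqrt, lambda t: 0.5/math.sqrt(t), [1.0, 4.0, 9.0]))
exp_minors = leading_minors(loewner(math.exp, math.exp, [1.0, 2.0]))
print(sqrt_minors)  # all positive: sqrt passes the 3x3 test
print(exp_minors)   # the 2x2 determinant is negative: exp fails
```

For $ \sqrt t $ at the points $ 1,4,9 $ the Löwner matrix is the Cauchy-type matrix with entries $ 1/(\sqrt{\lambda_i}+\sqrt{\lambda_j}), $ which is positive definite, whereas for $ e^t $ the determinant inequality of the corollary already fails at $ t=1, $ $ s=2. $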
\section{The fast track to Löwner's theorem} \begin{lemma}\label{main lemma on involutions} Let $ f\colon(0,\infty)\to(0,\infty) $ be an operator monotone function. The function $ t\to t^{-1}f(t) $ is operator monotone decreasing. \end{lemma} \begin{proof} For $ \varepsilon>0 $ the function $ f_{\varepsilon}(t)= f(t+\varepsilon) $ is defined in the open set $ (-\varepsilon,\infty) $ containing zero. Since $ f $ and hence $ f_{\varepsilon} $ are operator monotone and therefore operator concave by Corollary~\ref{operator monotonicity implies operator concavity} we may use Theorem~\ref{theorem: Bendat and Sherman, general case} (Bendat and Sherman) to obtain that the function \[ t\to\frac{f_{\varepsilon}(t)-f_{\varepsilon}(0)}{t-0}=\frac{f(t+\varepsilon)-f(\varepsilon)}{t} \] is operator monotone decreasing. By using $ f(\varepsilon)>0 $ and the identity \[ \frac{f(t+\varepsilon)}{t}=\frac{f(t+\varepsilon)-f(\varepsilon)}{t}+\frac{f(\varepsilon)}{t} \] we realize that the function $ t\to t^{-1} f(t+\varepsilon) $ is operator monotone decreasing when restricted to the positive half-line. The result now follows by letting $ \varepsilon $ tend to zero. \end{proof} \begin{corollary}\label{the two involution of positive oper mon func} Let $ f\colon(0,\infty)\to(0,\infty) $ be an operator monotone function. The functions \[ f^\sharp(t)=t f(t)^{-1}\quad\text{and}\quad f^*(t)=t f(t^{-1}) \] are operator monotone in the positive half-line. \end{corollary} \begin{proof} Since $ t\to f^\sharp(t)^{-1}=t^{-1}f(t) $ is operator monotone decreasing by the above lemma it follows that $ f^\sharp $ is operator monotone (increasing). The second assertion follows from the same argument by first replacing $ f $ with the operator monotone function $ t\to f(t^{-1})^{-1}. $ \end{proof} The corollary states that the mappings $ f\to f^\sharp $ and $ f\to f^* $ are involutions of the set of positive operator monotone functions defined in the positive half-line.
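As a concrete illustration (added here; the example is not in the original text), take $ f(t)=2t/(t+1), $ a positive operator monotone function with $ f(1)=1. $ One computes $ f^\sharp(t)=(t+1)/2 $ and $ f^*=f, $ and both maps square to the identity, as the following sketch checks numerically:

```python
def sharp(f):
    # the involution f -> f#, where f#(t) = t / f(t)
    return lambda t: t/f(t)

def star(f):
    # the involution f -> f*, where f*(t) = t * f(1/t)
    return lambda t: t*f(1.0/t)

f = lambda t: 2.0*t/(t + 1.0)   # positive operator monotone, f(1) = 1
g = sharp(f)                     # expected: g(t) = (t+1)/2

samples = [0.5, 1.0, 2.0, 7.5]
assert all(abs(g(t) - (t + 1.0)/2.0) < 1e-12 for t in samples)
assert all(abs(star(f)(t) - f(t)) < 1e-12 for t in samples)   # f is self-adjoint under *
# both maps are involutions: f## = f and f** = f
assert all(abs(sharp(sharp(f))(t) - f(t)) < 1e-12 for t in samples)
assert all(abs(star(star(f))(t) - f(t)) < 1e-12 for t in samples)
```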
\begin{lemma}\label{bound for positive operator monotone function} We have the bound $ f(t)\le t+1 $ for any positive operator monotone function $ f $ defined in the positive half-line with $ f(1)=1. $ \end{lemma} \begin{proof} Since $ f $ is increasing we obviously have \[ f(t)\le f(1)=1\le t+1\qquad\text{for }\, 0<t\le 1. \] We also notice that $ f $ is concave by Theorem~\ref{theorem: 2n-monotone function is n-concave}. It follows, for $ t>1, $ that $ f(t) $ is bounded by the continuation of the chord between $ (0, \lim_{\varepsilon\to 0}f(\varepsilon)) $ and $ (1,f(1))=(1,1). $ But the continuation of this chord is bounded by $ t+1. $ \end{proof} Let $ \mathcal P $ denote the set of positive operator monotone functions defined in the positive half-line and consider the convex set \[ \mathcal P_0=\{f\in\mathcal P\mid f(1)=1\}. \] We equip $ \mathcal P_0 $ with the topology of point-wise convergence and realize, by the preceding lemma, that $ \mathcal P_0 $ is compact in this topology. \begin{theorem}\label{reduction to positive functions} Let $ f\colon(0,\infty)\to\mathbf R $ be a non-constant operator monotone function. Then $ f $ can be written in the form \[ f(t)=f(1)+f'(1)\frac{t-1}{t} (\T f)(t)\qquad t>0, \] where $ \T f\in \mathcal P_0 $ is given by \[ (\T f)(t)=\frac{t}{f'(1)}\cdot \left\{\begin{array}{ll} \displaystyle \frac{f(t)-f(1)}{t-1}\qquad &t\ne 1\\[2ex] f'(1) &t=1. \end{array}\right. \] \end{theorem} Notice that $ f'(1)>0 $ by Corollary~\ref{the derivative of a non-constant 2-monotone function cannot have a zero} since $ f $ is non-constant. \begin{proof} The function \[ h_1(t)=\frac{1}{f'(1)}\cdot\frac{f(t)-f(1)}{t-1} \] is positive since $ f $ is strictly increasing, and $ h_1(1)=1. $ Since $ f $ is operator monotone and thus operator concave the function $ h_1 $ is operator monotone decreasing by Theorem~\ref{theorem: Bendat and Sherman, general case}.
By composing with the operator monotone decreasing function $ t\to t^{-1} $ we obtain that \[ h_2(t)=h_1(t^{-1})=\frac{1}{f'(1)}\cdot\frac{f(t^{-1})-f(1)}{t^{-1}-1} \] is positive and operator monotone with $ h_2(1)=1. $ By applying the involution $ h_2\to h_2^* $ we finally obtain that the function \[ (\T f) (t)=h_2^*(t)=t h_2(t^{-1})= \frac{t}{f'(1)}\cdot\frac{f(t)-f(1)}{t-1} \] is operator monotone by Corollary~\ref{the two involution of positive oper mon func}. It is also positive and $ (\T f)(1)=1. $ The assertion now follows by solving the equation for $f. $ \end{proof} \begin{lemma}\label{The two invariant operations} The involution $ f\to f^* $ maps $ \mathcal P_0 $ into itself, and the operation $ f\to\T f $ maps the non-constant functions in $ \mathcal P_0 $ into $ \mathcal P_0. $ \end{lemma} \begin{proof} Follows immediately from Corollary~\ref{the two involution of positive oper mon func} and Theorem~\ref{reduction to positive functions}. \end{proof} \begin{lemma} The sum of the derivatives \[ \left.\df{}{t}f(t)\right|_{t=1}+\left.\df{}{t}f^*(t)\right|_{t=1}=1 \] for any $ f\in \mathcal P_0. $ \end{lemma} \begin{proof}The assertion follows from the calculation \[ \frac{f(t)-1}{t-1}+\frac{f^*(t^{-1})-1}{t^{-1}-1}=1 \] by letting $ t $ tend to $ 1. $ \end{proof} Both $ f $ and $ f^* $ are increasing functions. By Corollary~\ref{the derivative of a non-constant 2-monotone function cannot have a zero} we therefore obtain: \begin{corollary} The derivative of $ f $ satisfies \[ 0<f'(1)<1 \] for any function $ f\in\mathcal P_0 $ different from the constant function $ t\to1 $ or the identity function $ t\to t. $ \end{corollary} \begin{lemma}\label{formula for the extreme points} An extreme point $ f $ in $ \mathcal P_0 $ is necessarily of the form \[ f(t)=\frac{t}{f'(1)+(1-f'(1))t}\qquad t>0. 
\] \end{lemma} \begin{proof} Take first a function $ f\in\mathcal P_0 $ which is neither the constant function $ t\to 1 $ nor the identity function $ t\to t, $ thus $ \lambda=f'(1)\in(0,1) $ by the above corollary. An elementary calculation shows that \begin{equation} \lambda\T{f} + (1-\lambda) (\T{f^*})^*=f. \end{equation} Indeed, \[ \lambda(\T{f})(t)=t\,\frac{f(t)-1}{t-1}\qquad t\ne 1 \] and \[ (1-\lambda) (\T{f^*})^*(t)=(1-\lambda)t(\T{f^*})(t^{-1})=\frac{f^*(t^{-1})-1}{t^{-1}-1}= \frac{f(t)-t}{1-t} \] from which the assertion follows. Consequently, if $ f $ is an extreme point in $ \mathcal P_0 $ then $ \T{f}=f, $ that is, \[ \frac{t}{\lambda}\cdot\frac{f(t)-1}{t-1}=f(t)\qquad t>0 \] from which it follows that \[ f(t)=\frac{t}{\lambda+(1-\lambda)t}\qquad t>0. \] Finally, the two functions we left out may also be written in this way. Indeed, the constant function $ t\to 1 $ appears in the formula by setting $ \lambda=0 $ while the identity function $ t\to t $ appears by setting $ \lambda=1. $ \end{proof} \begin{theorem}\label{theorem: formula for oper mon func with measure on [0,1]} Let $ f $ be a positive operator monotone function defined in the positive half-line. There is a bounded positive measure $ \mu $ on the closed interval $ [0,1] $ such that \[ f(t)=\int_0^1 \frac{t}{\lambda+(1-\lambda)t}\,d\mu(\lambda)\qquad t>0. \] Conversely, any function given in this form is operator monotone. The measure $ \mu $ is a probability measure if and only if $ f(1)=1. $ \end{theorem} \begin{proof} We noticed that $ \mathcal P_0 $ is convex and compact in the topology of point-wise convergence of functions. Therefore, by the Krein--Milman theorem, it is generated by its extreme points $ \mathit{Ext}(\mathcal P_0) $ in the sense that $ \mathcal P_0 $ is the closure \[ \mathcal P_0=\mathit{\overline{conv}(Ext}(\mathcal P_0)) \] of the convex hull of $ \mathit{Ext}(\mathcal P_0).
$ By Lemma~\ref{formula for the extreme points} the convex hull of $ \mathit{Ext}(\mathcal P_0) $ consists of functions of the form \begin{equation}\label{formula with a discrete measure} f(t)=\int_0^1 \frac{t}{\lambda+(1-\lambda)t}\,d\mu(\lambda)\qquad t>0, \end{equation} where $ \mu $ is a discrete probability measure on $ [0,1]. $ A function $ f $ in $ \mathcal P_0 $ is therefore the limit of a net of functions $ (f_j)_{j\in J} $ written in the form (\ref{formula with a discrete measure}) in terms of discrete probability measures $ (\mu_j)_{j\in J}. $ Since the set of probability measures on $ [0,1] $ is compact in the weak topology there exists an accumulation point $ \mu $ of this net of measures such that $ f $ is expressed as in the statement of the theorem. \end{proof} Notice that a possible atom at zero of the measure $ \mu $ in the above theorem contributes the constant term $ \mu\{0\} $ in the integral. A possible atom at $ 1 $ contributes the term $ \mu\{1\} t. $ A brief outline of the theory presented in Theorem~\ref{reduction to positive functions}, Lemma~\ref{formula for the extreme points} and Theorem~\ref{theorem: formula for oper mon func with measure on [0,1]} was given in the author's PhD thesis~\cite[Pages 12--13]{kn:hansen:1983:2}. It is illuminating to consider the linear mapping \[ \Lambda(f)(t)=\left\{\begin{array}{ll} \displaystyle t\frac{f(t)-f(1)}{t-1}\qquad &t\ne 1\\[2ex] f'(1) &t=1 \end{array}\right. \] defined for differentiable functions $ f\colon(0,\infty)\to\mathbf R. $ It is closely related to the non-linear transformation $ T $ introduced in Theorem~\ref{reduction to positive functions}. Indeed, \[ \Lambda(f)=f'(1) \T{f}\qquad\text{for}\quad f\in\mathcal P_0 \] and $ \Lambda $ is thus a transformation of $ \mathcal P.
$ However, we cannot replace $ T $ by $ \Lambda $ in the proof of Lemma~\ref{formula for the extreme points} since $ \Lambda $ does not map $ \mathcal P_0 $ into itself, and we cannot alternatively work directly with $ \mathcal P $ since $ \mathcal P $ is not compact. \begin{theorem} The measure $ \mu $ appearing in Theorem~\ref{theorem: formula for oper mon func with measure on [0,1]} is uniquely defined by the operator monotone function $ f\in\mathcal P. $ \end{theorem} \begin{proof} The action of $ \Lambda $ on functions in $ \mathcal P $ is calculated by noticing that \[ \Lambda\left(\frac{t}{\lambda+(1-\lambda)t}\right)=\frac{\lambda t}{\lambda+(1-\lambda)t} \] and thus \[ p(\Lambda)\left(\frac{t}{\lambda+(1-\lambda)t}\right)= \frac{p(\lambda) t}{\lambda+(1-\lambda)t} \] for any polynomial $ p. $ For a function $ f\in\mathcal P $ we thus have \[ p(\Lambda)(f)=\int_0^1 \frac{t}{\lambda+(1-\lambda)t}\, p(\lambda)\,d\mu(\lambda), \] where $ \mu $ is the representing measure for $ f. $ This identity recovers the measure $ \mu $ from $ f $ by using Weierstrass's polynomial approximation theorem. \end{proof} \section{Other integral representations} \begin{corollary}\label{integral formula for positive operator monotone functions} Let $ f $ be a positive operator monotone function defined in the positive half-line. There is a bounded positive measure $ \mu $ on the closed extended half-line $ [0,\infty] $ such that \[ f(t)=\int_0^\infty \frac{t(1+\lambda)}{t+\lambda}\, d\mu(\lambda)\qquad t>0. \] Conversely, any function given in this form is operator monotone. The measure $ \mu $ is a probability measure if and only if $ f(1)=1.
$ \end{corollary} \begin{proof} The assertion follows from the previous theorem by applying the transformation \[ \lambda\to\alpha=\lambda(1-\lambda)^{-1} \] which maps the closed interval $ [0,1] $ onto the closed extended half-line $ [0,\infty], $ and by noticing the identity \[ \frac{t}{\lambda+(1-\lambda)t}= \frac{t(1-\lambda)^{-1}}{\lambda(1-\lambda)^{-1}+t}=\frac{t(1+\alpha)}{t+\alpha} \] which is valid also at the endpoints of the two intervals. \end{proof} We are finally able to give an integral formula for the operator monotone functions defined in the positive half-line. There are various ways of doing so, but the following formula establishes the connection between operator monotone functions and the theory of Pick functions \cite{kn:donoghue:1974} in complex analysis. \begin{theorem}\label{formula for operator monotone function} Let $ f\colon(0,\infty)\to\mathbf R $ be an operator monotone function. There exists a positive measure $ \nu $ on the closed positive half-line $ [0,\infty) $ with $ \int (1+\lambda^2)^{-1}\, d\nu(\lambda)<\infty $ such that \[ f(t)=\alpha t+\beta+\int_0^\infty\left(\frac{\lambda}{1+\lambda^2} - \frac{1}{t+\lambda}\right)\, d\nu(\lambda)\qquad t>0, \] where $ \alpha\ge 0 $ and $ \beta\in\mathbf R. $ Conversely, any function given in this form is operator monotone. \end{theorem} \begin{proof} We first use Theorem~\ref{reduction to positive functions} to write $ f $ in the form \[ f(t)=f(1)+f'(1)\frac{t-1}{t} (\T f)(t)\qquad t>0, \] where $ \T f $ is a positive and normalized operator monotone function. We can then apply Corollary~\ref{integral formula for positive operator monotone functions} to obtain a probability measure $ \mu $ on the closed extended half-line $ [0,\infty] $ such that \[ f(t)=f(1)+f'(1)\frac{t-1}{t}\int_0^\infty \frac{t(1+\lambda)}{t+\lambda}\, d\mu(\lambda)\qquad t>0.
\] We explicitly remove a possible atom at $ \infty $ to obtain \[ f(t)=f(1)+f'(1)\mu(\{\infty\}) (t-1)+ f'(1)\int_0^\infty \frac{(t-1)(1+\lambda)}{t+\lambda}\, d\tilde\mu(\lambda), \] where $ \tilde\mu $ is a positive finite measure on the closed half-line $ [0,\infty). $ We then make use of the identity \[ \frac{(t-1)(1+\lambda)}{t+\lambda}=(1+\lambda)^2\left(\frac{\lambda}{1+\lambda^2}-\frac{1}{t+\lambda}\right) +\frac{1-\lambda^2}{1+\lambda^2} \] to obtain \[ f(t)=\alpha t+\beta+ f'(1)\int_0^\infty (1+\lambda)^2\left(\frac{\lambda}{1+\lambda^2}-\frac{1}{t+\lambda}\right)\, d\tilde\mu(\lambda), \] where $ \alpha=f'(1)\mu(\{\infty\})\ge 0 $ and \[ \beta=f(1)-\mu(\{\infty\})f'(1)+f'(1)\int_0^\infty \frac{1-\lambda^2}{1+\lambda^2}\, d\tilde\mu(\lambda) \] is finite since the integrand is bounded between $ -1 $ and $ 1. $ The assertion now follows by setting $ d\nu(\lambda)=f'(1)(1+\lambda)^2\, d\tilde\mu(\lambda) $ and noticing that \[ 1\le (1+\lambda)^2/(1+\lambda^2)\le 2 \] for $ 0\le\lambda<\infty. $ \end{proof} \begin{remark} The uniqueness of the representing measure $ \mu $ in Theorem~\ref{theorem: formula for oper mon func with measure on [0,1]} readily implies uniqueness of the representing measures in Corollary~\ref{integral formula for positive operator monotone functions} and Theorem~\ref{formula for operator monotone function}. \end{remark} \subsection{Löwner's theorem} We learn from the integral expression in the previous theorem that an operator monotone function $ f $ defined in the positive half-line can be continued to an analytic function defined in $ \mathbf C\backslash (-\infty,0]. $ Since the imaginary part satisfies \[ \Im\left(-\frac{1}{z+\lambda}\right)=\frac{\Im z}{|z+\lambda|^2} \] we also learn that the analytic continuation of $ f $ to the complex upper half-plane has non-negative imaginary part. In fact, the imaginary part of the continuation is positive if $ f $ is not constant.
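This sign behaviour can be checked numerically for the extreme functions $ t\to t/(\lambda+(1-\lambda)t) $ of Lemma~\ref{formula for the extreme points} and for the principal square root (an illustrative sketch, not part of the original text):

```python
import cmath

def extreme(lam):
    # analytic continuation of the extreme function t -> t/(lam + (1-lam) t)
    return lambda z: z/(lam + (1.0 - lam)*z)

points = [1.0 + 1.0j, -2.0 + 0.5j, 0.3 + 3.0j]   # points in the upper half-plane
for lam in [0.2, 0.5, 0.9]:
    f = extreme(lam)
    # the continuation maps the upper half-plane into itself
    assert all(f(z).imag > 0 for z in points)

# the principal branch of the square root also has positive imaginary
# part throughout the upper half-plane
assert all(cmath.sqrt(z).imag > 0 for z in points)
```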
\begin{theorem}[Löwner] Let $ f:I\to \mathbf R $ be a function defined in an open interval which is either finite $ I=(a,b) $ or infinite of the form $ (a,\infty). $ Then $ f $ is operator monotone if and only if it allows an analytic continuation to the upper half-plane with non-negative imaginary part. \end{theorem} \begin{proof} The case where $ I $ is the positive half-line follows from Theorem~\ref{formula for operator monotone function} and from the theory of Pick functions \cite{kn:donoghue:1974}, and the case $ I=(a,\infty) $ then follows by a simple translation. The remaining cases may be similarly reduced to the case $ I=(0,1). $ The function \[ h(t)=\frac{t}{t+1}\qquad t>0 \] is a bijection between $ (0,\infty) $ and the interval $ (0,1). $ It is operator monotone, and the inverse function \[ h^{-1}(t)=\frac{t}{1-t}=\frac{1}{t^{-1}-1}\qquad 0<t<1 \] is also operator monotone. Both functions have analytic continuations which map the complex upper half-plane into itself. Composition with $ h $ therefore establishes a bijection between the operator monotone functions defined in the two intervals $ (0,\infty) $ and $ (0,1). $ It also establishes a bijection between the functions on the two intervals that allow an analytic continuation to the complex upper half-plane with non-negative imaginary part. \end{proof} \subsection{The representing measure} \begin{theorem}\label{calculation of the associated measure} Let $ f\colon(0,\infty)\to\mathbf R $ be an operator monotone function, and let $ \nu $ be the representing measure as given in Theorem~\ref{formula for operator monotone function}. Let $ \tilde\nu $ be the measure obtained from $ \nu $ by removing a possible atom at zero. Then \[ \lim_{\varepsilon\to 0}\frac{1}{\pi}\int_0^\infty \Im f(-t+i\varepsilon) g(t)\, dt=\frac{g(0)}{2}\nu(\{0\})+\int_0^\infty g(\lambda)\, d\tilde\nu(\lambda) \] for every continuous, bounded and integrable function $ g $ defined in $ [0,\infty).
$ \end{theorem} \begin{proof} By applying Theorem~\ref{formula for operator monotone function} we obtain \[ \begin{array}{rl} I_\varepsilon&=\displaystyle\frac{1}{\pi}\int_0^\infty \Im f(-t+i\varepsilon) g(t)\, dt\\[3ex] &=\displaystyle\frac{1}{\pi}\int_0^\infty\left(\varepsilon\alpha + \int_0^\infty\frac{\varepsilon}{(\lambda-t)^2+\varepsilon^2}\, d\nu(\lambda) \right) g(t)\, dt. \end{array} \] By Fubini's theorem we may then write \[ I_\varepsilon=\frac{\varepsilon\alpha}{\pi} \int_0^\infty g(t)\, dt +\frac{1}{\pi}\int_0^\infty\int_0^\infty\frac{\varepsilon}{(\lambda-t)^2+\varepsilon^2} g(t)\, dt\, d\nu(\lambda). \] Since \[ \frac{1}{\pi}\int_{-\infty}^\infty\frac{\varepsilon}{(\lambda-t)^2+\varepsilon^2}\, dt =1, \] we obtain by Lebesgue's convergence theorem \[ \lim_{\varepsilon\to 0}\frac{1}{\pi}\int_0^\infty\frac{\varepsilon}{(\lambda-t)^2+\varepsilon^2} g(t)\, dt=g(\lambda)\qquad\text{for}\quad\lambda>0. \] For $ \lambda=0 $ we obtain only $ g(0)/2 $, since the integration is restricted to $ [0,\infty) $ and thus captures only half of the mass of the Poisson kernel. \end{proof} \begin{acknowledgement*} The author is indebted to the referees for a number of useful suggestions. \end{acknowledgement*}
https://arxiv.org/abs/1304.1217
On the communication complexity of sparse set disjointness and exists-equal problems
In this paper we study the two-player randomized communication complexity of the sparse set disjointness and the exists-equal problems and give matching lower and upper bounds (up to constant factors) for any number of rounds for both of these problems. In the sparse set disjointness problem, each player receives a k-subset of [m] and the goal is to determine whether the sets intersect. For this problem, we give a protocol that communicates a total of O(k\log^{(r)}k) bits over r rounds and errs with very small probability. Here we can take r=\log^{*}k to obtain a O(k) total communication \log^{*}k-round protocol with exponentially small error probability, improving on the O(k)-bit, O(\log k)-round constant error probability protocol of Hastad and Wigderson from 1997. In the exists-equal problem, the players receive vectors x,y\in [t]^n and the goal is to determine whether there exists a coordinate i such that x_i=y_i. Namely, the exists-equal problem is the OR of n equality problems. Observe that exists-equal is an instance of sparse set disjointness with k=n, hence the protocol above applies here as well, giving an O(n\log^{(r)}n) upper bound. Our main technical contribution in this paper is a matching lower bound: we show that when t=\Omega(n), any r-round randomized protocol for the exists-equal problem with error probability at most 1/3 should have a message of size \Omega(n\log^{(r)}n). Our lower bound holds even for super-constant r <= \log^*n, showing that any O(n)-bit exists-equal protocol should have \log^*n - O(1) rounds.
\section{Discussion} \label{sec:discussion} The $r$-round protocol we gave in \autoref{sec:upperbound} solves the sparse set disjointness problem in $O(k\log^{(r)}k)$ total communication. As we proved in \autoref{sec:lowerbound} this is optimal. The same, however, cannot be said of the error probability. With the same protocol, but with a more careful setting of the parameters, the exponentially small error $O(2^{-\sqrt k})$ of the $\log^*k$-round protocol can be further decreased to $2^{-k^{1-o(1)}}$. For small (say, constant) values of $r$ this protocol cannot achieve exponentially small error without an increase in the complexity if the universe size $m$ is unbounded. But if $m$ is polynomial in $k$ (or even slightly larger, $m=\exp^{(r)}(O(\log^{(r)}k))$), we can replace the last round of the protocol by one player deterministically sending his or her entire ``current set'' $S_r$. With careful setting of the parameters in other rounds, this modified protocol has the same $O(k\log^{(r)}k)$ complexity but the error is now exponentially small: $O(2^{-k/\log k})$. Note that in our lower bound on the $r$-round complexity of the sparse set disjointness we use the exists-equal problem with parameters $n=k$ and $t=4k$. This corresponds to the universe size $m=tn=4k^2$. In this case any protocol solving the exists-equal problem with $1/3$ error can be strengthened to exponentially small error using the same number of rounds and only a constant factor more communication. Our lower and upper bounds match for the exists-equal problem with parameters $n$ and $t=\Omega(n)$, since the upper bounds were established without any regard to the universe size, while the lower bounds worked for $t=4n$. Extensions of the techniques presented in this paper give matching bounds also in the case $3\le t<n$, where the $r$-round complexity is $\Theta(n\log^{(r)}t)$ for $r\le\log^*t$. 
Note, however, that in this case one needs to consider significantly more complicated input distributions and a more refined isoperimetric inequality that does not permit arbitrary mismatches. The $\Omega(n)$ lower bound applies for the exists-equal problem of parameters $n$ and $t\ge3$ regardless of the number of rounds, as the disjointness problem on a universe of size $n$ is a sub-problem. For $t=2$ the situation is drastically different: the exists-equal problem with $t=2$ is equivalent to a single equality problem. Finally, a remark on using the joint random source model of randomized protocols throughout the paper. By a result of Newman \cite{Newman91} our protocols of \autoref{sec:upperbound} can be made to work in the private coin model (or even if one of the players is forced to behave deterministically) by increasing the first message length by $O(\log\log(N)+\log(1/\epsilon))$ bits, where $N= {m \choose k}$ is the number of possible inputs. In our case this means adding the term $O(\log\log m)+o(k)$ to our bound of $O(k\log^{(r)}k)$, since our protocols make at least $\exp(-k/\log k)$ error. This additional cost is insignificant for reasonably small values of $m$, but it is necessary for large values as the equality problem, which is an instance of disjointness, requires $\Omega(\log \log m)$ bits in the private coin model. Note also that we achieve a super-linear increase in the communication for the OR of $n$ instances of equality even in the private coin model for $r=1$. For $r\geq 2$, no such increase happens in the private coin model as the communication complexity of $\EE^t_n$ is at most $O(n\log\log t)$, whereas a single equality problem requires $\Omega(\log \log t)$ bits. 
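The bounds above are stated in terms of the iterated logarithm $\log^{(r)}$ and of $\log^*$, defined formally in the notation section; a minimal sketch of both, under the paper's base-2 convention:

```python
import math

def ilog(x, r):
    """Iterated base-2 logarithm: log^{(0)} x = x, log^{(r)} x = log(log^{(r-1)} x)."""
    for _ in range(r):
        x = math.log2(x)
    return x

def log_star(x):
    """Smallest r such that log^{(r)} x < 2 (the paper's convention)."""
    r = 0
    while x >= 2:
        x = math.log2(x)
        r += 1
    return r

assert ilog(65536, 2) == 4.0   # log log 65536 = 4
assert log_star(65536) == 4    # 65536 -> 16 -> 4 -> 2 -> 1
```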
\section{Lower bound for single round protocols} \label{sec:elementary} In this section we give a combinatorial proof that any single round randomized protocol for the exists-equal problem with parameters $n$ and $t=4n$ has complexity $\Omega(n\log n)$ if its error probability is at most $1/3$. As pointed out in the Introduction, to our knowledge this is the first established case in which solving the OR of $n$ instances of a communication problem requires strictly more than $n$ times the complexity needed to solve a single such instance. We start with a simple and standard reduction from the randomized protocol to the deterministic one and further to a large set of inputs that makes the first (and in this case only) message fixed. These steps are also used in the general round elimination argument; we therefore state them in general form. Let $\epsilon>0$ be a small constant and let $P$ be a $1/3$-error randomized protocol for the exists-equal problem with parameters $n$ and $t=4n$. We repeat the protocol $P$ in parallel, taking the majority output, so that the number of rounds does not change, the length of the messages is multiplied by a constant and the error probability decreases below $\epsilon$. Now we fix the coins of this $\epsilon$-error protocol so as to make the resulting deterministic protocol err on at most an $\epsilon$ fraction of the possible inputs. Denote the deterministic protocol we obtain by $Q$. \begin{lemma} \label{lem:determine-s} Let $Q$ be a deterministic protocol for the $\EE_n$ problem that makes at most $\epsilon$ error on the uniform distribution. Assume Alice sends the first message of length $c$. There exists an $S\subset [t]^n$ of measure $\mu(S)=2^{-c-1}$ such that the first message of Alice is fixed when $x\in S$ and we have $\Pr_{y\sim \mu}[Q(x,y)\neq\EE(x,y)]\leq 2\epsilon$ for all $x\in S$. 
\end{lemma} \begin{proof} Note that the quantity $e(x)=\Pr_{y\sim \mu}[Q(x,y)\neq\EE(x,y)]$, averaged over all $x$, is the error probability of $Q$ on the uniform input, hence is at most $\epsilon$. Therefore for at least half of $x$, we have $e(x)\leq 2\epsilon$. The first message of Alice partitions this half into at most $2^c$ subsets. We pick $S$ to consist of $t^n/2^{c+1}$ vectors of the same part: at least one part must have this many elements. \end{proof} We fix a set $S$ as guaranteed by the lemma. We assume we started with a single round protocol, so $Q(x,y)=Q(x',y)$ whenever $x,x'\in S$. Indeed, Alice sends the same message by the choice of $S$ and then the output is determined by Bob, who has the same input in the two cases. We call a pair $(x,y)$ {\em bad} if $x\in S$, $y\in[t]^n$ and $Q$ errs on this input, i.e., $Q(x,y)\ne\EE(x,y)$. Let $b$ be the number of bad pairs. By \autoref{lem:determine-s} each $x\in S$ is involved in at most $2\epsilon t^n$ bad pairs, so we have $$b\le2\epsilon|S|t^n.$$ We call a triple $(x,x',y)$ {\em bad} if $x,x'\in S$, $y\in[t]^n$, $\EE(x,y)=1$ and $\EE(x',y)=0$. The proof is based on double counting the number $z$ of bad triples. Note that for a bad triple $(x,x',y)$ we have $Q(x,y)=Q(x',y)$ but $\EE(x,y)\ne\EE(x',y)$, so $Q$ must err on either $(x,y)$ or $(x',y)$ making one of these pairs bad. Any pair (bad or not) is involved in at most $|S|$ bad triples, so we have $$z\le b|S|\le2\epsilon|S|^2t^n.$$ Let us fix arbitrary $x,x'\in S$ with $\Match(x,x')\le n/2$. We estimate the number of $y\in[t]^n$ that make $(x,x',y)$ a bad triple. Such a $y$ must have $\Match(x,y)>\Match(x',y)=0$. To simplify the calculation we only count the vectors $y$ with $\Match(x,y)=1$. The match between $y$ and $x$ can occur at any position $i$ with $x_i\ne x'_i$. After fixing the coordinate $y_i=x_i$ we can pick the remaining coordinates $y_j$ of $y$ freely as long as we avoid $x_j$ and $x'_j$. 
Thus we have $$|\{y\,|\,(x,x',y)\hbox{ is bad}\}|\ge(n-\Match(x,x'))(t-2)^{n-1}\ge(n/2)(t-2)^{n-1}>t^n/14,$$ where in the last inequality we used $t=4n$. Let $s$ be the size of the Hamming ball $B_{n/2}(x)=\{y\in[t]^n\,|\,\Match(x,y)>n/2\}$. By the Chernoff bound we have $s<t^n/n^{n/2}$ (using $t=4n$ again). For a fixed $x$ we have at least $|S|-s$ choices for $x'\in S$ with $\Match(x,x')\le n/2$, for which the above bound on triples applies. Thus we have $$z\ge|S|(|S|-s)t^n/14.$$ Combining this with the upper bound on the number of bad triples we get $$28\epsilon|S|\ge|S|-s.$$ Therefore we conclude that we either have large error $\epsilon>1/56$ or else we have $|S|\le2s<2t^n/n^{n/2}$. As we have $|S|=t^n/2^{c+1}$ the latter possibility implies $$c\ge n\log n/2-2.$$ Summarizing, we have the following. \begin{theorem} \label{thm:singleround} A single round probabilistic protocol for $\EE_n$ with error probability $1/3$ has complexity $\Omega(n\log n)$. A single round deterministic protocol for $\EE_n$ that errs on at most a $1/56$ fraction of the inputs has complexity at least $n\log n/2-2$. \end{theorem} \section{Introduction} In a two player communication problem the players, named Alice and Bob, receive separate inputs, $x$ and $y$, and they communicate in order to compute the value $f(x,y)$ of a function $f$. In an $r$-round protocol, the players can take at most $r$ turns alternately sending each other a message and the last player to receive a message declares the output of the protocol. A protocol can be {\em deterministic} or {\em randomized}, in the latter case the players can base their actions on a common random source and we measure the {\em error probability}: the maximum over inputs $(x,y)$, of the probability that the output of the protocol differs from $f(x,y)$. \subsection{Sparse set disjointness} Set disjointness is perhaps the most studied problem in communication complexity. 
In the most standard version Alice and Bob receive a subset of $[m]:=\{1,\ldots,m\}$ each, with the goal of deciding whether their sets intersect or not. The primary question is whether the players can improve on the trivial deterministic protocol, where the first player sends the entire input to the other player, thereby communicating $m$ bits. The first lower bound on the randomized complexity of this problem was given in \cite{BabaiFS86} by Babai et al., who showed that any $\epsilon$-error protocol for disjointness must communicate $\Omega(\sqrt{m})$ bits. The tight bound of $\Omega(m)$ bits was first given by Kalyanasundaram and Schnitger \cite{KalyanasundaramS92} and was later simplified by Razborov \cite{Razborov92} and Bar-Yossef et al.\ \cite{Bar-YossefJKS04}. In the sparse set disjointness problem $\DISJ_k^m$, the sets given to the players are guaranteed to have at most $k$ elements. The deterministic communication complexity of this problem is well understood. The trivial protocol, where Alice sends her entire input to Bob, solves the problem in one round using $O(k\log(2m/k))$ bits. On the other hand, an $\Omega(k\log(2m/k))$ bit total communication lower bound can be shown even for protocols with an arbitrary number of rounds, say using the rank method; see \cite{KushilevitzN97}, page 175. The randomized complexity of the problem is far more subtle. The results cited above immediately imply an $\Omega(k)$ lower bound for this version of the problem. The folklore $1$-round protocol solves the problem using $O(k\log k)$ bits, wherein Alice sends $O(\log k)$-bit hashes for each element of her set. H\aa stad and Wigderson \cite{HastadW07} gave a protocol that matches the $\Omega(k)$ lower bound mentioned above. Their $O(k)$-bit randomized protocol runs in $O(\log k)$ rounds and errs with a small constant probability. 
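The folklore one-round hashing protocol mentioned above can be sketched as follows; the hash construction and the parameter choices are illustrative stand-ins, not taken from the paper:

```python
import random

def folklore_disjointness(A, B, bits=40, seed=0):
    """One-round sketch of sparse set disjointness: Alice sends a `bits`-bit
    hash of each of her (at most k) elements, for a total of O(k log k) bits
    when bits = Theta(log k); Bob answers "disjoint" iff none of his elements
    hashes into Alice's message.  A common element always hashes equally, so
    the only errors are false intersections caused by hash collisions, each
    occurring with probability 2**-bits per pair of distinct elements."""
    rng = random.Random(seed)              # stands in for the shared random source
    salt = rng.getrandbits(64)
    h = lambda e: hash((salt, e)) & ((1 << bits) - 1)
    message = {h(a) for a in A}            # Alice's single message to Bob
    return all(h(b) not in message for b in B)

# An intersection is never missed (one-sided error):
assert folklore_disjointness({1, 2, 3}, {3, 4, 5}) is False
```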
In \autoref{sec:upperbound}, we improve this protocol to run in $\log^*k$ rounds, still with $O(k)$ total communication, but with exponentially small error in $k$. We also present an $r$-round protocol for any $r<\log^*k$ with total communication $O(k\log^{(r)}k)$ and error probability well below $1/k$; see \autoref{thm:ub}. (Here $\log^{(r)}$ denotes the iterated logarithm function, see \autoref{notation}.) As the exists-equal problem with parameters $t$ and $n$ (see below) is a special case of $\DISJ_n^{tn}$, our lower bounds for the exists-equal problem show that the complexity of this protocol is optimal for any number $r\le\log^*k$ of rounds, even if we allow the much larger error probability of $1/3$. Buhrman et al.~\cite{BuhrmanGMW12} and Woodruff \cite{Woodruff08} (as presented in \cite{Patrascu09}) show an $\Omega(k\log k)$ lower bound for the $1$-round complexity of $\DISJ^m_k$ by a reduction from the indexing problem (a similar reduction was also given in \cite{DasguptaKS12}). We note that these lower bounds do not apply to the exists-equal problem, as the input distribution they use generates instances inherently specific to the disjointness problem; furthermore this distribution admits an $O(\log k)$ protocol in two rounds. \subsection{The exists-equal problem} In the equality problem Alice and Bob receive elements $x$ and $y$ of a universe $[t]$ and they have to decide whether $x=y$. We define the two player communication game exists-equal with parameters $t$ and $n$ as follows. Each player is given an $n$-dimensional vector from $[t]^n$, namely $x$ and $y$. The value of the game is one if there exists a coordinate $i\in[n]$ such that $x_i = y_i$, zero otherwise. Clearly, this problem is the OR of $n$ independent instances of the equality problem. 
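Concretely, exists-equal embeds into sparse set disjointness by encoding each coordinate as a (position, value) pair; the encoding below is one natural choice, written out as a sketch:

```python
def exists_equal(x, y):
    """EE(x, y) = 1 iff x_i = y_i for some coordinate i (the OR of n equalities)."""
    return int(any(a == b for a, b in zip(x, y)))

def as_disjointness(x, y):
    """Encode EE^t_n as DISJ_n^{tn}: a vector becomes its set of n
    (coordinate, value) pairs, so a matching coordinate is exactly a
    common element of the two n-element sets."""
    return {(i, v) for i, v in enumerate(x)}, {(i, v) for i, v in enumerate(y)}

x, y = (1, 3, 2), (2, 3, 4)
A, B = as_disjointness(x, y)
assert exists_equal(x, y) == int(bool(A & B))  # both detect the match in coordinate 2
```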
The direct sum problem in communication complexity is the study of whether $n$ instances of a problem can be solved using less than $n$ times the communication required for a single instance of the problem. This question has been studied extensively for specific communication problems as well as certain classes of problems \cite{ChakrabartiSWY01, JainRS03, JainRS05, Ben-AroyaRW08, Gavinsky08, JainKN08, HarshaJMR10, BarakBCR10}. The so-called direct sum approach is a very powerful tool to show lower bounds for communication games. In this approach, one expresses the problem at hand as, say, the OR of $n$ instances of a simpler function, and the lower bound is obtained by combining a lower bound for the simpler problem with a direct sum argument. For instance, the two-player and multi-player disjointness bounds of \cite{Bar-YossefJKS04}, the lopsided set disjointness bounds \cite{Patrascu11}, and the lower bounds for several communication problems that arise from streaming algorithms \cite{JayramW09, MagniezMN10} are a few examples of results that follow this approach. Exists-equal with parameters $t$ and $n$ is a special case of $\DISJ_n^{tn}$, so our protocols in \autoref{sec:upperbound} solve exists-equal. We show that when $t=\Omega(n)$ these protocols are optimal, namely every $r$-round randomized protocol ($r\le\log^*n$) with at most $1/3$ error probability needs to send at least one message of $\Omega(n\log^{(r)}n)$ bits. See \autoref{thm:main}. Our result shows that computing the OR of $n$ instances of the equality problem requires {\em strictly more} than $n$ times the communication required to solve a single instance of the equality problem when the number of rounds is smaller than $\log^* n-O(1)$. Recall that the equality problem admits an $\epsilon$-error $\log(1/\epsilon)$-bit one-round protocol in the common random source model. 
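That classic equality protocol can be sketched via shared random parity bits; the construction below is a standard one chosen for illustration, not a protocol taken from this paper:

```python
import math
import random

def equality(x, y, eps=2.0 ** -32, seed=1):
    """Common-random-source equality sketch: the players compare
    ceil(log(1/eps)) shared random parity bits of their inputs.  Equal
    inputs always agree; for x != y, each random parity bit of x XOR y
    equals 1 with probability exactly 1/2, so all bits agree (a wrong
    "equal" answer) with probability at most eps."""
    bits = max(1, math.ceil(math.log2(1 / eps)))
    rng = random.Random(seed)                  # the common random source
    width = max(x.bit_length(), y.bit_length(), 1)
    for _ in range(bits):
        mask = rng.getrandbits(width)          # shared random subset of bit positions
        if bin(x & mask).count("1") % 2 != bin(y & mask).count("1") % 2:
            return False                       # a parity bit distinguishes x and y
    return True

assert equality(123456789, 123456789) is True  # equal inputs are never rejected
```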
For $r=1$, our result implies that to compute the OR of $n$ instances of the equality problem with {\em constant probability}, no protocol can do better than solving each instance of the equality problem with {\em high probability} so that the union bound can be applied when taking the OR of the computed results. The single round case of our lower bound also generalizes the $\Omega(n\log n)$ lower bound of Molinaro et al.\ \cite{MolinaroWY13} for the one round communication problem, where the players have to find all the answers of $n$ equality problems, outputting an $n$ bit string. \subsection{Lower bound techniques} We obtain our general lower bound via a round elimination argument. In such an argument one assumes the existence of a protocol $P$ that solves a communication problem, say $f$, in $r$ rounds. By suitably modifying the internals of $P$, one obtains another protocol $P'$ with $r-1$ rounds, which typically solves smaller instances of $f$ or has larger error than $P$. Iterating this process, one obtains a protocol with zero rounds. If the protocol we obtain solves non-trivial instances of $f$ with good probability, we conclude that we have arrived at a contradiction; therefore the protocol we started with, $P$, cannot exist. Although round elimination arguments have been used for a long time, our round elimination lemma is the first to prove a {\em super-linear} communication lower bound in the number of primitive problems involved, and obtaining it requires new and interesting ideas. The general round elimination argument presented in \autoref{sec:lowerbound} is very involved, but the lower bound on one-round protocols can also be obtained in a more elementary way. As the one round case exhibits the most dramatic super-linear increase in the communication cost and also generalizes the lower bound in \cite{MolinaroWY13}, we include this combinatorial argument separately in \autoref{sec:elementary}, see \autoref{thm:singleround}. 
At the heart of the general round elimination lemma is a new isoperimetric inequality on the discrete cube $[t]^n$ endowed with the Hamming distance. We present this result, \autoref{thm:isoperimetry}, in \autoref{sec:isoperimetry}. To the best of our knowledge, the first isoperimetric inequality on this metric space was proven by Lindsey in \cite{Lindsey64}, where the subsets of $[t]^n$ of a given size with minimum so-called induced-edge number were characterized. This result was rediscovered in \cite{KleitmanKR71} and \cite{Clements71} as well. See \cite{AzizogluO03} for a generalization of this inequality to universes which are $n$-dimensional boxes with arbitrary side lengths. In \cite{BollobasL91}, Bollobás et al.\ study isoperimetric inequalities on $[t]^n$ endowed with the $\ell_1$ distance. For the purposes of our proof we need to find sets $S$ that minimize a substantially more complicated measure. This measure also captures how spread out $S$ is and can be described roughly as the average over points $x\in[t]^n$ of the logarithm of the number of points in the intersection of $S$ and a Hamming ball around $x$. \subsection{Related work} In \cite{MiltersenNSW98}, a round elimination lemma was given, which applies to a class of problems with certain self-reducibility properties. The lemma is then used to get lower bounds for various problems including the greater-than and the predecessor problems. This result was later tightened in \cite{Sen03} to get better bounds for the aforementioned problems. Different round elimination arguments were also used in \cite{KarchmerW90, HalstenbergR88, NisanW93,Miltersen94, DurisGS87,BeameF01} for various communication complexity lower bounds and most recently in \cite{BrodyC09} and \cite{BrodyCRVW10} for obtaining lower bounds for the gapped Hamming distance problem. 
Independently of and in parallel with the present form of this paper, Brody et al.\ \cite{BrodyCK12} have also established an $\Omega(n\log^{(r)}n)$ lower bound for the $r$-round communication complexity of the exists-equal problem with parameter $n$. Their result applies to protocols with a polynomially small error probability like $1/n$. This stronger assumption on the protocol allows for simpler proof techniques, namely the information complexity based direct sum technique developed in several papers including \cite{ChakrabartiSWY01}, but it is not enough to create an example where solving the OR of $n$ communication problems requires more than $n$ times the communication of solving a single instance. Indeed, even in the shared random source model one needs $\log n$ bits of communication (independent of the number of rounds) to achieve $1/n$ error in a single equality problem. \subsection{Notation}\label{notation} For a positive integer $t$, we write $[t]$ for the set of positive integers not exceeding $t$. For two $n$-dimensional vectors $x$, $y$, let $\Match(x,y)$ be the number of coordinates where $x$ and $y$ agree. Notice that $n-\Match(x,y)$ is the Hamming distance between $x$ and $y$. For a vector $x\in[t]^n$ we write $x_i$ for its $i$\/th coordinate. We denote the distribution of a random variable $X$ by $\dist(X)$ and the support set of it by $\supp(X)$. We write $\Pr_{x\sim\nu}[\cdot]$ and $\E_{x\sim\nu}[\cdot]$ for the probability and expectation, respectively, when $x$ is distributed according to a distribution $\nu$. We write $\mu$ for the uniform distribution on $[t]^n$. For instance, for a set $S\subseteq [t]^n$, we have $\mu(S) = |S| / t^n$. For $x,y\in [t]^n$ we denote the value of the exists-equal game by $\EE_n^t(x,y)$. Recall that it is zero if and only if $x$ and $y$ differ in each coordinate. Whenever we drop $t$ from the notation we assume $t=4n$. 
Often we will also drop $n$ and simply denote the game value by $\EE(x,y)$ if $n$ is clear from the context. All logarithms in this paper are to the base 2. Analogously, throughout this paper we take $\exp(x)=2^x$. We will also use the iterated versions of these functions: \begin{align*} \log^{(0)}x&\defeq x, & \exp^{(0)}x&\defeq x,\\ \log^{(r)}x&\defeq \log(\log^{(r-1)}x), & \exp^{(r)}x&\defeq \exp(\exp^{(r-1)}x) \quad\text{for $r\geq 1$}. \end{align*} Moreover we define $\log^* x$ to be the smallest integer $r$ for which $\log^{(r)} x<2$. Throughout the paper we ignore divisibility problems, e.g., in \autoref{lem:determine-s} in \autoref{sec:elementary} we assume that $t^n/2^{c+1}$ is an integer. Dealing with rounding issues would complicate the presentation but does not add to the complexity of the proofs. \subsection{Information theory} Here we briefly review some definitions and facts from information theory that we use in this paper. For a random variable $X$, we denote its binary Shannon entropy by $\Ent(X)$. We will also use conditional entropies $\Ent(X\,|\, Y)=\Ent(X,Y)-\Ent(Y)$. Let $\mu$ and $\nu$ be two probability distributions, supported on the same set $S$. We denote the binary Kullback-Leibler divergence between $\mu$ and $\nu$ by $\D(\mu\,\|\, \nu)$. A random variable with Bernoulli distribution with parameter $p$ takes the value $1$ with probability $p$ and the value $0$ with probability $1-p$. The entropy of this variable is denoted by $\BEnt(p)$. For two reals $p,q\in (0,1)$, we denote by $\BD(p\,\|\, q)$ the divergence between the Bernoulli distributions with parameters $p$ and $q$. If $X\in[t]^n$ and $L\subseteq[n]$, then the projection of $X$ to the coordinates in $L$ is denoted by $X_L$. Namely, $X_L$ is obtained from $X=(X_1,\ldots,X_n)$ by keeping only the coordinates $X_i$ with $i\in L$. The following lemma of Chung et al.~\cite{ChungGFS86} relates the entropy of a variable to the entropy of its projections. 
\begin{lemma} \label{lem:ent-subset}{\rm (Chung et al.~\cite{ChungGFS86})} Let $\supp(X)\subseteq[t]^n$. We have $\frac{l}{n}\Ent(X) \leq \E_L[\Ent(X_L)]$, where the expectation is taken for a uniform random $l$-subset $L$ of $[n]$. \end{lemma} \subsection{Structure of the paper} We start in \autoref{sec:upperbound} with our protocols for the sparse set disjointness. Note that the exists-equal problem is a special case of sparse set disjointness, so our protocols work also for the exists-equal problem. In the rest of the paper we establish matching lower bounds showing that the complexity of our protocols is within a constant factor of optimal for both the exists-equal and the sparse set disjointness problems, and for any number of rounds. In \autoref{sec:elementary} we give an elementary proof for the case of single round protocols. In \autoref{sec:isoperimetry} we develop our isoperimetric inequality and in \autoref{sec:lowerbound} we use it in our round elimination proof to get the lower bound for multiple round protocols. Finally in \autoref{sec:discussion} we point toward possible extensions of our results. \section{An isoperimetric inequality on the discrete grid} \label{sec:isoperimetry} The isoperimetric problem on the Boolean cube $\{0,1\}^n$ proved extremely useful in theoretical computer science. The problem is to determine the set $S\subseteq \{0,1\}^n$ of a fixed cardinality with the smallest ``perimeter'', or more generally, to establish connection between the size of a set and the size of its boundary. Here the boundary can be defined in several ways. Considering the Boolean cube as a graph where vertices of Hamming distance 1 are connected, the {\em edge boundary} of a set $S$ is defined as the set of edges connecting $S$ and its complement, while the {\em vertex boundary} consists of the vertices outside $S$ having a neighbor in $S$. 
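Both boundary notions are easy to explore by brute force on small cubes; the specific sets below (a $2$-dimensional subcube and a radius-$1$ Hamming ball in $\{0,1\}^4$) are our own illustrative examples:

```python
from itertools import product

def neighbors(v):
    """The n Hamming-neighbors of a vertex of {0,1}^n."""
    return [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(len(v))]

def vertex_boundary(S, n):
    """Vertices outside S having a neighbor in S."""
    cube = set(product((0, 1), repeat=n))
    return {u for u in cube - S if any(w in S for w in neighbors(u))}

def edge_boundary(S):
    """Edges connecting S and its complement (oriented out of S here)."""
    return {(v, u) for v in S for u in neighbors(v) if u not in S}

n = 4
subcube = {p + (0, 0) for p in product((0, 1), repeat=2)}     # a 2-dim subcube, |S| = 4
ball = {v for v in product((0, 1), repeat=n) if sum(v) <= 1}  # radius-1 Hamming ball, |S| = 5

assert len(edge_boundary(subcube)) == 8    # 4 vertices, 2 outgoing edges each
assert len(vertex_boundary(ball, n)) == 6  # exactly the weight-2 vertices
```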
Harper \cite{Harper64} showed that the vertex boundary of a Hamming ball is smallest among all sets of equal size, and the same holds for the edge boundary of a subcube. These results can be generalized to other cardinalities \cite{Hart76}; see the survey by Bezrukov \cite{Bezrukov94}. Consider the metric space over the set $[t]^n$ endowed with the Hamming distance. Let $f$ be a concave function on the nonnegative integers and $1\le M<n$ be an integer. We consider the following value as a generalized perimeter of a set $S\subseteq[t]^n$: \begin{align*} \E_{x\sim\mu}[f\left(\left|B_M(x)\cap S\right|\right)], \end{align*} where $B_M(x)=\{y\in[t]^n\mid\Match(x,y)\ge M\}$ is the radius $n-M$ Hamming ball around $x$. Note that when $M=n-1$ and $f$ is the counting function given as $f(0)=0$ and $f(l)=1$ for $l>0$ (which is concave), the above quantity is exactly the normalized size of $S$ together with its vertex boundary. For other concave functions $f$ and parameters $M$ this quantity can still be considered a measure of how ``spread out'' the set $S$ is. We conjecture that $n$-dimensional boxes minimize this measure in every case. \begin{conjecture} \label{conj:product} Let $1\le k\le t$ and $1\le M<n$ be integers. Let $S$ be an arbitrary subset of $[t]^n$ of size $k^n$ and $P=[k]^n$. We have \begin{align*} \E_{x\sim\mu}[f\left(\left|B_M(x)\cap P\right|\right)]\leq \E_{x\sim\mu}[f\left(\left|B_M(x)\cap S\right|\right)]. \end{align*} \end{conjecture} Even though a proof of \autoref{conj:product} has remained elusive, in \autoref{thm:isoperimetry}, we prove an approximate version of this result, where, for technical reasons, we have to restrict our attention to a small fraction of the coordinates. Having this weaker result allows us to prove our communication complexity lower bound in the next section, but proving the conjecture would simplify that argument. We start the technical part of this section by introducing the notation we will use. 
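Before turning to that notation, the generalized perimeter of \autoref{conj:product} can be rendered by brute force on tiny grids; the parameters and the sanity check below (using the linear, hence concave, $f(l)=l$, for which symmetry gives a closed form) are our own illustration:

```python
from itertools import product

def match(x, y):
    """Number of coordinates where x and y agree."""
    return sum(a == b for a, b in zip(x, y))

def gen_perimeter(S, t, n, M, f):
    """E_{x ~ mu}[ f(|B_M(x) ∩ S|) ] by brute force, where
    B_M(x) = { y : Match(x, y) >= M }."""
    grid = list(product(range(1, t + 1), repeat=n))
    return sum(f(sum(1 for y in S if match(x, y) >= M)) for x in grid) / len(grid)

t, n, M = 3, 2, 1
box = [(a, b) for a in (1, 2) for b in (1, 2)]   # P = [2]^2, |P| = 4

# For the linear f(l) = l, symmetry gives |S| * |B_M(x)| / t^n = 4 * (9 - 4) / 9.
assert abs(gen_perimeter(box, t, n, M, lambda l: l) - 20 / 9) < 1e-12
```

With other concave choices of $f$ (such as the counting function), enumerating all subsets of a given size on such tiny grids is a convenient way to probe the conjecture experimentally.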
For $x,y\in[t]^n$ and $i\in[n]$ we write $x\sim_iy$ if $x_j=y_j$ for $j\in[n]\setminus\{i\}$. Observe that $\sim_i$ is an equivalence relation. A set $K\subseteq [t]^n$ is called an {\em $i$-ideal} if $x\sim_i y$, $x_i<y_i$ and $y\in K$ implies $x\in K$. We call a set $K\subseteq[t]^n$ an {\em ideal} if it is an $i$-ideal for all $i\in[n]$. For $i\in[n]$ and $x\in[t]^n$ we define $\down_{i}(x)=(x_1,\ldots,x_{i-1},x_i-1,x_{i+1},\ldots,x_n)$. We have $\down_i(x)\in[t]^n$ whenever $x_i>1$. Let $K\subseteq [t]^n$ be a set, $i\in[n]$ and $2\le a\in[t]$. For $x\in K$, we define $\down_{i,a}(x,K)=\down_i(x)$ if $x_i=a$ and $\down_i(x)\notin K$ and we set $\down_{i,a}(x,K)=x$ otherwise. We further define $\down_{i,a}(K)=\{\down_{i,a}(x,K)\mid x\in K\}$. For $K\subseteq[t]^n$ and $i\in[n]$ we define \begin{align*} \down_i(K)=\big\{y\in[t]^n \mid y_i\le|\{z\in K\mid y\sim_iz\}|\big\}. \end{align*} Finally for $K\subseteq[t]^n$ we define \begin{align*} \down(K)=\down_1(\down_2(\ldots\down_n(K)\ldots)). \end{align*} The following lemma states a few simple observations about these down operations. \begin{lemma} \label{lem:down} Let $K\subseteq[t]^n$ be a set and let $i,j\in[n]$ be integers. The following hold. \begin{enumerate}[(i)] \item $\down_i(K)$ can be obtained from $K$ by applying several operations $\down_{i,a}$. \item $|\down_{i,a}(K)|=|K|$ for each $2\le a\le t$, $|\down_i(K)|=|K|$ and $|\down(K)|=|K|$. \item $\down_i(K)$ is an $i$-ideal and if $K$ is a $j$-ideal, then $\down_i(K)$ is also a $j$-ideal. \item $\down(K)$ is an ideal. For any $x\in\down(K)$ we have $P\defeq[x_1]\times[x_2]\times\cdots\times[x_n]\subseteq\down(K)$ and there exists a set $T\subseteq K$ with $P= \down(T)$. \end{enumerate} \end{lemma} \begin{proof} For statement (i) notice that as long as $K$ is not an $i$-ideal one of the operations $\down_{i,a}$ will not fix $K$ and hence will decrease $\sum_{x\in K}x_i$. Thus a finite sequence of these operations will transform $K$ into an $i$-ideal. 
It is easy to see that the operations $\down_{i,a}$ preserve the number of elements in each equivalence class of $\sim_i$, thus the $i$-ideal we arrive at must indeed be $\down_i(K)$. Statement (ii) follows directly from the definitions of each of these $\down$ operations. The first claim of statement (iii), namely that $\down_i(K)$ is an $i$-ideal, is trivial from the definition. Now assume $j\ne i$ and $K$ is a $j$-ideal, $y\in\down_i(K)$ and $y_j>1$. To see that $\down_i(K)$ is a $j$-ideal it is enough to prove that $\down_j(y)\in\down_i(K)$. Since $y\in\down_i(K)$, there are $y_i$ distinct vectors $z\in K$ that satisfy $z\sim_i y$. Considering the vectors $\down_j(z)\sim_i\down_j(y)$ and using that these distinct vectors are in the $j$-ideal $K$ proves that $\down_j(y)$ is indeed contained in $\down_i(K)$. By statement (iii), $\down(K)$ is an $i$-ideal for each $i\in [n]$. Therefore $\down(K)$ is an ideal and the first part of statement (iv), that is, $P\subseteq\down(K)$, follows. We prove the existence of a suitable $T$ by induction on the dimension $n$. The base case $n=0$ (or even $n=1$) is trivial. For the inductive step consider $K'=\down_2(\down_3(\ldots\down_n(K)\ldots))$. As $x\in\down(K)=\down_1(K')$, we have distinct vectors $x^{(k)}\in K'$ for $k=1,\ldots, x_1$, satisfying $x^{(k)}\sim_1x$. Notice that the construction of $K'$ from $K$ is performed independently on each of the $(n-1)$-dimensional ``hyperplanes'' $S^l=\{y\in[t]^n\mid y_1=l\}$ as none of the operations $\down_2,\ldots,\down_n$ change the first coordinate of the vectors. We apply the inductive hypothesis to obtain the sets $T^{(k)}\subseteq S^{x^{(k)}_1}\cap K$ such that $\down_2(\ldots\down_n(T^{(k)})\ldots)=\{x^{(k)}_1\} \times[x_2]\times\cdots\times[x_n]$.
Using again that these sets are in distinct hyperplanes and the operations $\down_2,\ldots,\down_n$ act separately on the hyperplanes $S^l$, we get for $T:=\cup_{k=1}^{x_1}T^{(k)}$ that $$\down_2(\dots\down_n(T)\dots)=\{x^{(k)}_1\mid k\in[x_1]\}\times[x_2]\times\cdots\times[x_n].$$ Applying $\down_1$ on both sides finishes the proof of this last part of the lemma. \end{proof} For a vector $x\in[t]^n$, a set $I\subseteq[n]$, and an integer $M\in[n]$ we define $B_{I,M}(x)=\{y\in[t]^n\mid\Match(x_I,y_I)\ge M\}$. The projection of $B_{I,M}(x)$ to the coordinates in $I$ is the Hamming ball of radius $|I|-M$ around the projection of $x$. \begin{lemma} \label{lem:list} Let $I\subseteq[n]$, $M\in[n]$ and let $f$ be a concave function on the nonnegative integers. For arbitrary $K\subseteq [t]^n$ we have $$\E_{x\sim\mu}[f(|B_{I,M}(x)\cap\down(K)|)]\le\E_{x\sim\mu}[f(|B_{I,M}(x)\cap K|)].$$ \end{lemma} \begin{proof} By \autoref{lem:down}(i), the set $\down(K)$ can be obtained from $K$ by a series of operations $\down_{i,a}$ with various $i\in[n]$ and $2\le a\le t$. Therefore, it is enough to prove that the expectation in the lemma does not increase in any one step. Let us fix $i\in[n]$ and $2\le a\le t$. We write $N_x=B_{I,M}(x)\cap K$ and $N'_x=B_{I,M}(x)\cap\down_{i,a}(K)$ for $x\in[t]^n$. We need to prove that \begin{align*} \E_{x\sim\mu}[f(|N_x|)]\ge\E_{x\sim\mu}[f(|N'_x|)]. \end{align*} Note that $|N_x|=|N'_x|$ whenever $i\notin I$ or $x_i\notin\{a,a-1\}$. Thus, we can assume $i\in I$ and concentrate on $x\in[t]^n$ with $x_i\in\{a,a-1\}$. It is enough to prove $f(|N_x|)+f(|N_y|)\ge f(|N'_x|)+f(|N'_y|)$ for any pair of vectors $x,y\in[t]^n$ satisfying $x_i=a$ and $y=\down_i(x)$. Let us fix such a pair $x,y$ and set $C=\{z\in K\setminus\down_{i,a}(K)\,|\,\Match(x_I,z_I)=M\}$. Observe that $N_x = N'_x \cup C$ and $N'_x\cap C=\emptyset$. Similarly, observe that $N'_y = N_y \cup \down_{i,a}(C)$ and $N_y \cap \down_{i,a}(C)=\emptyset$.
Thus we have $|N'_x|=|N_x|-|C|$ and $|N'_y|=|N_y|+|\down_{i,a}(C)|=|N_y|+|C|$. The inequality $f(|N_x|)+f(|N_y|)\ge f(|N'_x|)+f(|N'_y|)$ follows now from the concavity of $f$, the inequalities $|N'_x|\le|N_y|\le|N'_y|$ and the equality $|N_x|+|N_y|=|N'_x|+|N'_y|$. Here the first inequality follows from $\down_{i,a}(N'_x)\subseteq\down_{i,a}(N_y)$, while the second inequality and the equality come from the observations of the previous paragraph. \end{proof} \begin{lemma} \label{lem:find-prod} Let $K\subseteq [t]^n$ be arbitrary. There exists a vector $x\in K$ having at least $n/5$ coordinates that are greater than $k\defeq\frac{t}{2}\mu(K)^{5/(4n)}$. \end{lemma} \begin{proof} The number of vectors that have at most $n/5$ coordinates greater than $k$ can be upper bounded as \begin{align*} {n\choose n/5} t^{n/5} k^{4n/5} = t^n {n\choose n/5} (k/t)^{4n/5} = |K|\frac{{n \choose n/5}}{2^{4n /5}}, \end{align*} where in the last step we have substituted $\frac{k}{t}=\frac{1}{2}\mu(K)^{5/(4n)}$ and $\mu(K) = |K| / t^n$. Estimating ${n\choose n/5}\le 2^{n\BEnt(1/5)}$ and using $\BEnt(1/5)<4/5$, we obtain that the above quantity is less than $|K|$. Therefore, there must exist an $x\in K$ that has at least $n/5$ coordinates greater than $k$. \end{proof} \begin{theorem} \label{thm:isoperimetry} Let $S$ be an arbitrary subset of $[t]^n$. Let $k=\frac{t}{2}\mu(S)^{5/(4n)}$ and $M = nk/(20t)$. There exists a subset $T\subset S$ of size $k^{n/5}$ and $I\subset [n]$ of size $n/5$ such that, defining $N_x=\{x'\in T\mid\Match(x_I,x'_I)\ge M\}$, we have \begin{enumerate}[(i)] \item $\Pr_{x\sim\mu}[N_x=\emptyset] \le 5^{-M}$ and \item $\E_{x\sim \mu}[\log|N_x|]\geq (n/5-M)\log k - n\log k /5^M$, where we take $\log 0 = -1$ to make the above expectation exist. \end{enumerate} \end{theorem} \begin{proof} By \autoref{lem:down}(ii), we have $|\down(S)|=|S|$. By \autoref{lem:find-prod}, there exists an $x\in\down(S)$ having at least $n/5$ coordinates that are greater than $k$.
Fix such a vector $x\in\down(S)$ and let $I\subset[n]$ be a set of $n/5$ coordinates with $x_i\geq k$ for all $i\in I$. By \autoref{lem:down}(iv), $\down(S)$ is an ideal and thus it contains the set $P=\prod_iP_i$, where $P_i=[k]$ for $i\in I$ and $P_i=\{1\}$ for $i\notin I$. Also by \autoref{lem:down}(iv), there exists a $T\subseteq S$ such that $P = \down(T)$. We fix such a set $T$. Clearly, $|T|=k^{n/5}$. For a vector $x\in[t]^n$, let $h(x)$ be the number of coordinates $i\in I$ such that $x_i\in [k]$. Note that $\E_{x\sim \mu}[h(x)] = 4M$ and $h(x)$ has a binomial distribution. By the Chernoff bound we have $\Pr_{x\sim \mu}[h(x)<M] < 5^{-M}$. For $x$ with $h(x)\ge M$ we have $|B_{I,M}(x)\cap P|\ge k^{n/5-M}$, but for $h(x)<M$ we have $B_{I,M}(x)\cap P=\emptyset$. With the unusual convention $\log0=-1$ we have \begin{align*} \E_{x\sim \mu} [\log|B_{I,M}(x)\cap P|]&\ge\Pr[h(x)\ge M](n/5-M)\log k-\Pr[h(x)<M]\\ &>(n/5-M)\log k-n\log k/5^M. \end{align*} We have $\down(T)=P$ and our unusual $\log$ is concave on the nonnegative integers, so \autoref{lem:list} applies and proves statement (ii): \begin{align*} \E_{x\sim \mu}[\log |N_x|] &\ge\E_{x\sim \mu} [\log|B_{I,M}(x)\cap P|]\\ &\ge(n/5-M)\log k - n\log k /5^M. \end{align*} To show statement (i), we apply \autoref{lem:list} with the concave function $f$ defined as $f(0)=-1$ and $f(l)=0$ for all $l>0$. We obtain that \begin{align*} \Pr_{x\sim\mu}[N_x=\emptyset] &=-\E_{x\sim\mu}[f(|N_x|)]\\ &\le-\E_{x\sim\mu}[f(|B_{I,M}(x)\cap P|)]\\ &=\Pr_{x\sim\mu}[B_{I,M}(x)\cap P=\emptyset]\\ &<5^{-M}. \end{align*} This completes the proof. \end{proof} \section{Lower bound for multiple round protocols} \label{sec:lowerbound} In this section we prove our main lower bound result: \begin{theorem} \label{thm:main} For any $r\leq\log^*n$, an $r$-round probabilistic protocol for $\EE_n$ with error probability at most $1/3$ sends at least one message of size $\Omega(n\log^{(r)}n)$.
\end{theorem} Note that the $r=1$ round case of this theorem was proved as \autoref{thm:singleround} in \autoref{sec:elementary}. The other extreme, which immediately follows from \autoref{thm:main}, is the following. \begin{corollary} Any probabilistic protocol for $\EE_n$ with maximum message size $O(n)$ and error $1/3$ has at least $\log^* n - O(1)$ rounds. \end{corollary} \autoref{thm:main} is a direct consequence of the corresponding statement on deterministic protocols with small distributional error on the uniform distribution; see \autoref{thm:main2} at the end of this section. Indeed, we can decrease the error of a randomized protocol below any constant $\epsilon>0$ at the price of increasing the message length by a constant factor, then we can fix the coins of this low error protocol in a way that makes the resulting deterministic protocol $Q$ err on at most an $\epsilon$ fraction of the possible inputs. Applying \autoref{thm:main2} to the protocol $Q$ proves \autoref{thm:main}. In the rest of this section we use round elimination to prove \autoref{thm:main2}, that is, we will use $Q$ to solve smaller instances of the exists-equal problem in a way that the first message is always the same, and hence can be eliminated. Suppose Alice sends the first message of $c$ bits in $Q$. By \autoref{lem:determine-s}, there exists a set $S\subset [t]^n$ with $\mu(S)=2^{-c-1}$ such that the first message of Alice is fixed when $x\in S$ and we have $\Pr_{y\sim \mu}[Q(x,y)\neq\EE(x,y)]\leq 2\epsilon$ for all $x\in S$. Fix such a set $S$ and let $k\defeq t/2^{\frac{5(c+1)}{4n} + 1}$ and $M \defeq nk/(20t)$. By \autoref{thm:isoperimetry}, there exists a $T\subset S$ of size $k^{n/5}$ and $I\subset[n]$ of size $n/5$ such that defining \begin{align*} N_x=\{y\in T\mid\Match(x_I,y_I)\ge M\} \end{align*} we have $\Pr_{x\sim\mu}[N_x=\emptyset] \le 5^{-M}$ and $\E_{x\sim \mu}[\log|N_x|]\geq (n/5-M)\log k - n\log k /5^M$. Let us fix such sets $T$ and $I$.
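The guarantee $\Pr_{x\sim\mu}[N_x=\emptyset]\le5^{-M}$ invoked here rests on a Chernoff bound for the binomial variable $h(x)$ of \autoref{thm:isoperimetry}, which has mean $4M$. A quick sanity check against the exact binomial tail (Python; the values $N=50$, $p=0.4$, $M=5$ are illustrative, not taken from the proof):

```python
from math import comb

# h(x) ~ Binomial(N, p) with mean N * p = 4M; the proof uses Pr[h(x) < M] < 5^{-M}.
# Illustrative parameters: N plays the role of n/5, p the role of k/t, M = N*p/4.
N, p, M = 50, 0.4, 5
tail = sum(comb(N, j) * p**j * (1 - p) ** (N - j) for j in range(M))
print(tail, 5.0 ** -M)  # exact lower tail vs the claimed Chernoff-type bound
```

The exact tail is several orders of magnitude below $5^{-M}$ here, reflecting the slack in the multiplicative Chernoff bound for deviations a factor $4$ below the mean.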
Note also that \autoref{thm:isoperimetry} guarantees that $T$ is a strict subset of $S$. Designate an arbitrary element of $S\setminus T$ as $x'_e$. \subsection{Embedding the smaller problem} \label{sec:embed} The players embed a smaller instance $u,v\in[t']^{n'}$ of the exists-equal problem in $\EE_n$ concentrating on the coordinates $I$ determined above. We set $n'\defeq M/10$ and $t'\defeq4n'$. Ideally, the same embedding would guarantee low error probability for all pairs of inputs, but for technical reasons we need to know the number of coordinate agreements $\Match(u,v)$ for the input pairs $(u,v)$ in the smaller problem having $\EE_{n'}(u,v)=1$. Let $R\ge1$ be this number, so we are interested in inputs $u,v\in[t']^{n'}$ with $\Match(u,v)=0$ or $R$. We need this extra parameter so that we can eliminate a non-constant number of rounds and still keep the error bound a constant. For results on constant-round protocols one can concentrate on the $R=1$ case. In order to solve the exists-equal problem with parameters $t'$ and $n'$ Alice and Bob use the joint random source to turn their input $u,v\in[t']^{n'}$ into longer random vectors $X',Y\in[t]^n$, respectively, and apply the protocol $Q$ above to solve this exists-equal problem for these larger inputs. Here we informally list the main requirements on the process generating $X'$ and $Y$. We require these properties for the random vectors $X',Y\in[t]^n$ generated from a fixed pair $u,v\in[t']^{n'}$ satisfying $\Match(u,v)=0$ or $R$. \begin{enumerate}[(P1)]\item $\EE(X',Y)=\EE(u,v)$ with large probability, \label{prop:2} \item $\supp(X')=T\cup \{x'_e\}$ and \label{prop:3} \item for most $x'\sim X'$, the distribution $\dist(Y\,|\, X'=x')$ is close to the uniform distribution on $[t]^n$.
\label{prop:4} \end{enumerate} Combining these properties with the fact that $\Pr_{y\sim \mu}[Q(x,y)\neq\EE(x,y)]\leq 2\epsilon$ for each $x\in S$, we will argue that for the considered pairs of inputs $Q(X',Y)$ equals $\EE(u,v)$ with large probability, thus the combined protocol solves the small exists-equal instance with small error, at least for input pairs with $\Match(u,v)=0$ or $R$. Furthermore, by \propref{prop:3} the first message of Alice will be fixed and hence does not need to be sent, making the combined protocol one round shorter. The random variables $X'$ and $Y$ are constructed as follows. Let $m\defeq 2n/(MR)$ be an integer. Each player repeats his or her input ($u$ and $v$, respectively) $m$ times, obtaining a vector of length $n/(5R)$. Then using the shared randomness, the players pick $n/(5R)$ uniform random maps $m_i:[t']\to[t]$ independently and apply $m_i$ to the $i$th coordinate. Furthermore, the players pick a uniform random 1-1 mapping $\pi:[n/(5R)]\to I$ and use it to embed the coordinates of the vectors they constructed among the coordinates of the vectors $X$ and $Y$ of length $n$. The remaining $n-n/(5R)$ coordinates of $X$ are picked uniformly at random by Alice and similarly, the remaining $n-n/(5R)$ coordinates of $Y$ are picked uniformly at random by Bob. Note that the marginal distributions of both $X$ and $Y$ are uniform on $[t]^n$. If $\Match(u,v)=0$ the vectors $X$ and $Y$ are independent, while if $\Match(u,v)=R$, then $Y$ can be obtained by selecting a random subset of $I$ of cardinality $mR$, copying the corresponding coordinates of $X$ and filling the rest of $Y$ uniformly at random. This completes the description of the random process for Bob. However, Alice generates one more random variable $X'$ as follows. Recall that $N_x=\{z\in T\mid\Match(z_I,x_I)\ge M\}$. The random variable $X'$ is obtained by drawing $x\sim X$ first and then choosing a uniform random element of $N_x$.
In the (unlikely) case that $N_x=\emptyset$, Alice chooses $X'=x'_e$. Note that $X'$ either equals $x'_e$ or takes values from $T$, hence \propref{prop:3} holds. In the next lemma we quantify and prove \propref{prop:2} as well. \begin{lemma} \label{lem:error} Assume $n\ge3$, $M\ge2$ and $u,v\in[t']^{n'}$. We have \begin{enumerate}[(i)] \item if $\Match(u,v)=0$ then $\Pr[\EE(X',Y)=0] > 0.77$; \item if $\Match(u,v)=R$, then $\Pr[\EE(X', Y)=1] \ge 0.80$. \end{enumerate} \end{lemma} \begin{proof} For the first claim, note that when $\Match(u,v) = 0$, the random variables $X$ and $Y$ are independent and uniformly distributed. We construct $X'$ based on $X$, so its value is also independent of $Y$. Hence $\Pr[\EE(X',Y)=0]=(1-1/t)^n$. This quantity tends to $e^{-1/4}$ since $t=4n$ and is larger than $0.77$ when $n\geq 3$. This establishes the first claim. For the second claim let $J = \{i\in I\mid X_i=Y_i\}$ and $K=\{i\in I\mid X'_i = X_i\}$. By construction, $|J|=\Match(X_{I},Y_{I})\ge mR$ and $|K|=\Match(X'_{I}, X_I) \geq M$ unless $N_X=\emptyset$. By our construction, each $J\subset I$ of the same size is equally likely by symmetry, even when we condition on fixed values of $X$ and $X'$. Thus we have $\E[|J\cap K|\,|\, N_X\ne\emptyset]\ge mRM/|I|=10$ and $\Pr[J\cap K=\emptyset\,|\, N_X\ne\emptyset]<e^{-10}$. Note that $X$ is distributed uniformly over $[t]^n$, therefore by Theorem~\ref{thm:isoperimetry}(i) the probability that $N_X=\emptyset$ is at most $5^{-M}$. Note that $\Match(X',Y)\ge|J\cap K|$ and thus $\Pr[\EE(X',Y)=0]\le\Pr[J\cap K=\emptyset]\le\Pr[J\cap K=\emptyset\,|\, N_X\ne\emptyset] +\Pr[N_X=\emptyset]\le e^{-10}+5^{-M}$. This completes the proof. \end{proof} We measure ``closeness to uniformity'' in \propref{prop:4} by simply calculating the entropy. This entropy argument is postponed to the next subsection; here we show how such a bound on the entropy implies that the error introduced by $Q$ is small.
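The next lemma bounds the error via the divergence $\D(\dist(Y\,|\,X'=x')\,\|\,\mu)$; the key step is that applying the binary error indicator to a distribution can only decrease its divergence from $\mu$ (an instance of data processing, derivable from the chain rule by splitting the domain into erring and non-erring inputs). A small numeric sketch (Python; the four-point domain, error pattern, and distributions are made up for illustration):

```python
from math import log

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) in bits, for finite distributions
    return sum(pi * log(pi / qi, 2) for pi, qi in zip(p, q) if pi > 0)

def bd(a, b):
    # binary divergence BD(a || b)
    return kl([a, 1 - a], [b, 1 - b])

# A domain of four inputs y; err[i] = 1 iff the (hypothetical) protocol errs on input i.
err = [1, 0, 0, 1]
mu = [0.25, 0.25, 0.25, 0.25]   # uniform input distribution
nu = [0.10, 0.50, 0.30, 0.10]   # a stand-in for dist(Y | X' = x')
e_mu = sum(m for m, e in zip(mu, err) if e)   # error probability under mu
e_nu = sum(m for m, e in zip(nu, err) if e)   # error probability under nu
# data processing: the error indicator can only lose divergence
print(kl(nu, mu), bd(e_nu, e_mu))
```

Here the full divergence exceeds the binary divergence of the two error probabilities, as the lemma's first inequality requires; the gap is the conditional divergence within the erring and non-erring parts of the domain.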
\begin{lemma} \label{lem:kl-err} Let $x'\in S$ be fixed and let $\gamma$ be a probability in the range $2\epsilon\le\gamma<1$. If $\Ent(Y\,|\, X'=x')\geq n\log t - \BD(\gamma \,\|\, 2\epsilon)$ then $\Pr_{y\sim Y|X'=x'}[Q(x',y) \neq \EE(x',y)]\leq \gamma$. \end{lemma} \begin{proof} For a distribution $\nu$ over $[t]^n$, let $e(\nu) = \Pr_{y\sim \nu}[Q(x', y)\neq\EE(x', y)]$. We prove the contrapositive of the statement of the lemma, that is, assuming $\Pr_{y\sim Y|X'=x'}[Q(x',y) \neq \EE(x',y)]>\gamma$ we prove $\Ent(Y\,|\, X'=x')< n\log t - \BD(\gamma \,\|\, 2\epsilon)$: \begin{align*} n\log t -\Ent(Y\,|\, X' = x') &= \D(\dist(Y\,|\, X'=x')\,\|\, \mu)\\ &\geq \BD(e(\dist(Y\,|\, X'=x'))\,\|\, e(\mu))\\ &\geq\BD(\gamma\,\|\, 2\epsilon), \end{align*} where the first inequality follows from the chain rule for the Kullback-Leibler divergence. \end{proof} \subsection{Establishing \propref{prop:4}} We quantify \propref{prop:4} using the conditional entropy $\Ent(Y\,|\, X')$. If $\Match(u,v)=R$, our process generates $X$ and $Y$ with the expected number of matches $\E[\Match(X_I,Y_I)]$ only slightly more than the minimum $mR$. We lose most of these matches with $Y$ when we replace $X$ by $X'$ and only an expected constant number remains. A constant number of forced matches with $X'$ within $I$ restricts the number of possible vectors $Y$ but it only decreases the entropy by $O(1)$. The calculations in this subsection make this intuitive argument precise. \begin{lemma} \label{lem:y-uniform} Let $X',Y$ be as constructed above. The following hold. \begin{enumerate}[(i)] \item If $\Match(u,v)=0$ we have $\Ent(Y\,|\, X')=n\log t.$ \item If $M>100\log n$ and $\Match(u,v)=R$ we have $\Ent(Y\,|\, X') = n\log t - O(1).$ \end{enumerate} \end{lemma} \begin{proof} Part (i) holds as $Y$ is uniformly distributed and independent of $X'$ whenever $\EE(u,v)=0$.
For part (ii) recall that if $\Match(u,v)=R$ one can construct $X$ and $Y$ by uniformly selecting a size $mR$ set $L\subseteq I$ and selecting $X$ and $Y$ uniformly among all pairs satisfying $X_L=Y_L$. Recall that $L$ is the set of coordinates the $mR$ matches between $u^m$ and $v^m$ were mapped to. These are the ``intentional matches'' between $X_I$ and $Y_I$. Note that there may also be ``unintended matches'' between $X_I$ and $Y_I$, but not too many: their expected number is $(n/5-mR)/t<1/20$. Given any fixed $L$, the marginal distributions of both $X$ and $Y$ are still uniform; in particular, $X$ is independent of $L$, and so is $X'$, which is constructed from $X$. Therefore we have $$\Ent(Y\,|\, X') =\Ent(Y\,|\, X', L) + \Ent(L) - \Ent(L\,|\, Y, X').$$ We treat the terms separately. First we split the first term: $$\Ent(Y\,|\, X',L)=\Ent(Y_L\,|\, X',L)+\Ent(Y_{[n]\setminus L}\,|\, X',L,Y_L)$$ and use that $Y_{[n]\setminus L}$ is uniformly distributed for any fixed $L$, $X'$ and $Y_L$, making $$\Ent(Y_{[n]\setminus L}\,|\, X',L,Y_L)=(n-mR)\log t.$$ We have $X_L=Y_L$, thus \begin{align*} \Ent(Y_L\,|\, X',L)&=\Ent(X_L\,|\, X',L)\\ &\ge \frac{mR}{n/5}\Ent(X_I\,|\, X')\\ &\ge mR\log t-10\log k-\frac{MR}{5^{M-1}}\log k, \end{align*} where the first inequality follows by \autoref{lem:ent-subset} as $L$ is uniform and independent of $X$ and $X'$, and the second inequality follows from \autoref{lem:x-uniform}, which we will prove shortly, and the formula defining $m$. The next term, $\Ent(L)$, is easy to compute as $L$ is a uniform subset of $I$ of size $mR$: $$\Ent(L)=\log{n/5\choose mR}.$$ It remains to bound the term $\Ent(L\,|\, Y, X')$. Let $Z=\{i\mid i\in I \text{ and } X'_i=Y_i\}$. Note that $Z$ can be derived from $X',Y$ (as $I$ is fixed) hence $\Ent(L\,|\, Y,X')\leq \Ent(L\,|\, Z)$. Further, let $C=|Z\setminus L|$.
We obtain \begin{align*} \Ent(L\,|\, Y,X')&\le\Ent(L\,|\, Z)\leq \Ent(L\,|\, Z, C) + \Ent(C)\\ &< \E_{Z,C}\left[\log{n/5-|Z|+C \choose mR - |Z|+C}\right] + \E_{Z,C}\left[\log {|Z| \choose C}\right]+2 \end{align*} where we used $\Ent(C)<2$. Note that for any fixed $x'\in T$ and $x\in \supp(X\,|\, X'=x')$, we have $$\E[|Z|-C\,|\, X=x, X'=x']=\Match(x_I,x_I') mR /(n/5) \geq 10$$ as $\Match(x_I, x_I')\geq M$ by definition. Hence we have $$\log{n/5\choose mR}-\log{n/5-|Z|+C\choose mR-|Z|+C}\ge10\log\frac n{5m}-O(1),$$ $$\E_{Z,C}\left[\log {|Z| \choose C}\right] \le \E[|Z|] < 20.$$ Summing the estimates above for the various parts of $\Ent(Y\,|\, X')$ the statement of the lemma follows. \end{proof} It remains to prove the following simple lemma that ``reverses'' the conditional entropy bound in Theorem~\ref{thm:isoperimetry}(ii): \begin{lemma} \label{lem:x-uniform} For any $u,v\in[t']^{n'}$ we have $\Ent(X_I\,|\, X')\geq \frac{n}{5}\log t - M\log k - n\log k / 5^M$. \end{lemma} \begin{proof} Using the fact that $\Ent(A,B) = \Ent(A\,|\, B) +\Ent(B) = \Ent(B \,|\, A) + \Ent(A)$ we get \begin{align*} \Ent(X_I\,|\, X') &= \Ent(X' \,|\, X_I) + \Ent(X_I) - \Ent(X')\\ &\ge \frac{n}{5}\log t + \Ent(X'\,|\, X_I) - \frac{n}{5}\log k, \end{align*} where in the last step we used $\Ent(X')\le \log|\supp(X')| = \log |T| = \frac{n}{5}\log k$ and $\Ent(X_I)=(n/5)\log t$ as $X$ is uniformly distributed. Observe that $\Ent(X'\,|\, X_I)=\Ent(X'\,|\, X) = \E_{x\sim \mu}[\log|N_x|]$, where $\log 0$ is now taken to be $0$. From \autoref{thm:isoperimetry}(ii) we get $\Ent(X'\,|\, X)\geq \frac{n}{5}\log k - M\log k - n\log k/5^M$, finishing the proof of the lemma. \end{proof} \subsection{The round elimination lemma} Let $\nu_n$ be the uniform distribution on $[t]^n\times [t]^n$, where we set $t=4n$. The following lemma gives the base case of the round elimination argument.
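Before stating it, here is a quick numeric check (Python) of the estimate $3/4\le(1-1/(4n))^n<e^{-1/4}<0.78$, the probability that a uniform $x\in[t]^n$ with $t=4n$ agrees with a fixed $y$ in no coordinate; this estimate drives the base case below.

```python
from math import exp

# Pr[EE(x, y) = 0] = (1 - 1/t)^n for x uniform on [t]^n and any fixed y; with
# t = 4n this lies in [3/4, e^{-1/4}), so a 0-round protocol errs with
# probability at least 1 - 0.78 = 0.22 whichever constant answer it outputs.
for n in [1, 2, 3, 10, 100, 10**4]:
    t = 4 * n
    p_no_match = (1 - 1 / t) ** n
    assert 0.75 <= p_no_match < exp(-0.25) < 0.78
print("bounds hold for all tested n; limiting value:", exp(-0.25))
```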
\begin{lemma}\label{lem:terminal} Any 0-round deterministic protocol for $\EE_n$ has at least 0.22 distributional error on $\nu_n$, when $n\geq 1$. \end{lemma} \begin{proof} The output of the protocol is decided by a single player, say Bob. For any given input $y\in[t]^n$ we have $3/4 \leq\Pr_{x\sim\mu}[\EE(x,y)=0] < e^{-1/4} < 0.78$. Therefore the distributional error is at least $0.22$ for any given $y$ regardless of the output Bob chooses, thus the overall error is also at least $0.22$. \end{proof} Now we give our full round elimination lemma. \begin{lemma}\label{lem:roundel} Let $r>0$ and let $c,n$ be integers such that $c < (n\log n)/2$. There is a constant $0<\epsilon_0<1/200$ such that if there is an $r$-round deterministic protocol with $c$-bit messages for $\EE_n$ that has $\epsilon_0$ error on $\nu_n$, then there is an $(r-1)$-round deterministic protocol with $O(c)$-bit messages for $\EE_{n'}$ that has $\epsilon_0$ error on $\nu_{n'}$, where $n' = \Omega(n/2^\frac{5c}{4n})$. \end{lemma} \begin{proof} We start with an intuitive description of our reduction. Let us be given the deterministic protocol $Q$ for $\EE_n$ that errs on an $\epsilon_0$ fraction of the inputs. To solve an instance $(u,v)$ of the smaller $\EE_{n'}$ problem the players perform the embedding procedure described in the previous subsection $k_0$ times independently for each parameter $R\in[R_0]$. Here $k_0$ and $R_0$ are constants we set later. They perform the protocol $Q$ in parallel for each of the $k_0R_0$ pairs of inputs they generated. Then they take the majority of the $k_0$ outputs for a fixed parameter $R$. We show that this result gives the correct value of $\EE(u,v)$ with large probability provided that $\Match(u,v)=0$ or $R$. Finally they take the OR of these results for the $R_0$ possible values of $R$. By the union bound this gives the correct value $\EE(u,v)$ with large probability provided $\Match(u,v)\le R_0$.
Fixing the random choices of the reduction we obtain a deterministic protocol. The probability of error for the uniform random input can only grow by the small probability that $\Match(u,v)>R_0$ and we make sure it remains below $\epsilon_0$. The rest of the proof makes this argument precise. For the random variables $X'$ and $Y$ constructed in \autoref{sec:embed}, \autoref{lem:y-uniform} guarantees that $\Ent(Y\,|\, X')\ge n\log t - \alpha_0$ for some constant $\alpha_0$, as long as $M>100\log n$ and $\Match(u,v)=R$. Let $\epsilon_0$ be a constant such that $\BD(1/10\,\|\, 2\epsilon_0) > 200(\alpha_0 + 1)$. Note that such an $\epsilon_0$ can be found as $\BD(1/10\,\|\, \epsilon)$ tends to infinity as $\epsilon$ goes to 0. We can bound $\Pr_{(x,y)\sim\nu_m}[\Match(x,y) \ge l] \le 1/(4^l l!)$ for all $m\ge1$. We set $R_0$ such that $\Pr_{(x,y)\sim\nu_m}[\Match(x,y) \ge R_0 ] \le \epsilon_0 / 2$ for all $m\ge1$. Let $Q$ be a deterministic protocol for $\EE_n$ that sends $c < (n\log n)/2$ bits in each round and that has $\epsilon_0$ error on $\nu_n$. Let $S$ be as constructed in \autoref{lem:determine-s} and let $M$ be as defined in \autoref{thm:isoperimetry}. We have $M=\frac{n}{40}2^{\frac{-5(c+1)}{4n}}$ as $t=4n$ and $\mu(S)=2^{-(c+1)}$ by \autoref{lem:determine-s}. Note that by our choice of $c$, we have $M>100\log n$, hence the hypotheses of \autoref{lem:y-uniform} are satisfied. Let $n' = M/10 = \frac{n}{400}2^{\frac{-5(c+1)}{4n}}$. Now we give a randomized protocol $Q'$ for $\EE_{n'}$. Suppose the players are given an instance of $\EE_{n'}$, namely the vectors $(u,v)\in[4n']^{n'}\times[4n']^{n'}$. Let $k_0 = 10\log (R_0 + 1/\epsilon_0)$. For $R\in[R_0]$ and $k\in [k_0]$, the players construct the vectors $X'_{R,k}$ and $Y_{R,k}$ as described in \autoref{sec:embed} with parameter $R$ and with fresh randomness for each of the $R_0k_0$ procedures. The players run $R_0 k_0$ instances of protocol $Q$ in parallel, on inputs $X'_{R,k}, Y_{R,k}$ for $R\in[R_0]$ and $k\in[k_0]$.
Note that the first message of the first player, Alice, is fixed for all instances of $Q$ by \propref{prop:3} and \autoref{lem:determine-s}. Therefore, the second player, Bob, can start the protocol assuming Alice has sent the fixed first message. After the protocols finish, for each $R\in[R_0]$, the last player who received a message computes $b_R$ as the majority of $Q(X_{R,k}',Y_{R,k})$ for $k\in [k_0]$. Finally, this player outputs $0$ if $b_R=0$ for all $R\in[R_0]$ and outputs $1$ otherwise. Suppose now that $\EE(u,v) = 0$. By \autoref{lem:error}(i), we have $\Pr[\EE(X'_{R,k},Y_{R,k}) = 0] \ge 0.77$ for each $R$ and $k$. Recall that $Y_{R,k}$ is distributed uniformly for each $R$ and $k$ and since $\EE(u,v)=0$, it is independent of $X'_{R,k}$. Therefore, by $X'_{R,k}\in S$ (\propref{prop:3}) and the fact that $\Pr_{y\sim \mu}[Q(x,y)\neq\EE(x,y)]\leq 2\epsilon_0$ for all $x\in S$ as per \autoref{lem:determine-s}, we obtain $\Pr[Q(X'_{R,k},Y_{R,k}) = 0] \ge 0.77 - 2\epsilon_0 > 0.76$. By the Chernoff bound we have $\Pr[b_R = 1] < \epsilon_0/(2R_0)$, and by the union bound $\Pr[Q'\hbox{ outputs }0]\ge1-\epsilon_0 /2$. Let us now consider the case $\Match(u,v) = R$ for some $R\in[R_0]$. Fix any $k\in[k_0]$ and set $X'=X'_{R,k}$, $Y=Y_{R,k}$. By \autoref{lem:error}(ii), $\Pr[\EE(X',Y) = 1]\ge 0.80$. By \autoref{lem:y-uniform}, $\Ent(Y\,|\, X')\geq n\log t -\alpha_0$, and so we have $\E_{x'\sim X'}[\Ent(Y) - \Ent(Y\,|\, X'=x')] < \alpha_0$. Let $Z=\{x'\mid\Ent(Y) - \Ent(Y\,|\, X'=x') > 10\alpha_0\}$. Note that $Y$ is uniform, and has full entropy, therefore $\Ent(Y) - \Ent(Y\,|\, X'=x') \geq 0$. Using Markov's inequality we have $\Pr[X'\in Z]<1/10$. When $X'\in Z$ we cannot effectively bound the probability that $\EE(u,v)\neq Q(X', Y)$; namely, we bound this probability by 1. But if $X'\notin Z$, then by \autoref{lem:kl-err} and our choice of $\epsilon_0$, we have $\Pr[\EE(X', Y)\neq Q(X', Y)] < 1/10$.
Furthermore, by \autoref{lem:error}(ii), $\Pr[\EE(u,v) \neq \EE(X', Y)]< 0.20$, hence with probability at least $0.60$ we have $\EE(u,v) = Q(X', Y)$. This happens independently for all the values of $k\in[k_0]$, so by the Chernoff bound and our choice of $k_0$, we have $\Pr[Q'\hbox{ outputs }0]\le\Pr[b_R = 0] < \epsilon_0 / 2$. Finally, $\Pr_{(u,v)\sim \nu_{n'}}[\Match(u,v) \ge R_0] \le \epsilon_0 /2$ by our choice of $R_0$. Note that the protocol $Q'$ uses a shared random bit string, say $W$, in the construction of the vectors $X'_{R,k}$ and $Y_{R,k}$. Hence, overall, we have \begin{align*} \Pr_{W, (u,v)\sim\nu_{n'}}[\EE(u,v) = Q'(u,v)] \ge 1 - \epsilon_0. \end{align*} Since we measure the error of the protocol under a distribution, we can fix $W$ to a value without increasing the error under the aforementioned distribution by the so-called easy direction of Yao's lemma. Namely, there exists a $w\in \supp(W)$ such that \begin{align*} \Pr_{(u,v)\sim\nu_{n'}}[\EE(u,v) = Q'(u,v)\,|\, W=w] \ge 1 - \epsilon_0. \end{align*} Fix such a $w$. Observe that $Q'$ is an $(r-1)$-round protocol for $\EE_{n'}$ where $n'=\frac{n}{400}2^\frac{-5(c+1)}{4n}=\Omega(n/2^\frac{5c}{4n})$ and it sends at most $R_0k_0c = O(c)$ bits in each message. Furthermore, $Q'$ is deterministic and has at most $\epsilon_0$ error on $\nu_{n'}$ as desired. \end{proof} \begin{theorem} \label{thm:main2} There exists a constant $\epsilon_0$ such that for any $r\leq\log^*n$, an $r$-round deterministic protocol for $\EE_n$ which has $\epsilon_0$ error on $\nu_n$ sends at least one message of size $\Omega(n\log^{(r)}n)$. \end{theorem} \begin{proof} Suppose we have an $r$-round protocol with $c$-bit messages for $\EE_n$ that has $\epsilon_0$ error on $\nu_n$, where $c=\gamma n\log^{(r)}n$ for some $\gamma<4/5-o(1)$.
By \autoref{lem:roundel}, this protocol can be converted to an $(r-1)$-round protocol with $\alpha c$-bit messages for $\EE_{n'}$ that has $\epsilon_0$-error on $\nu_{n'}$, where $n'=\beta n/2^{5c/4n}$ for some $\alpha, \beta >0$. We only need to verify that $\alpha c \leq \gamma n'\log^{(r-1)}n'$. We have \begin{align*} \gamma n'\log^{(r-1)} n' &= \gamma\beta n/2^{5c/4n}\log^{(r-1)} (\beta n/2^{5c/4n})\\ &= \gamma\beta n/2^{\frac{5\gamma}{4}\log^{(r)}n}\log^{(r-1)} (\beta n/2^{5c/4n})\\ &\geq \gamma\beta n \left(\log^{(r-1)}n\right)^{1-\frac{5\gamma}{4}-o(1)}\\ &\geq \gamma\alpha n\log^{(r)}n \end{align*} for $\gamma< 4/5 - o(1)$ and large enough $n$. Therefore, by iteratively applying \autoref{lem:roundel} we obtain a $0$-round protocol for $\EE_{\bar n}$ that makes $\epsilon_0$ error on $\nu_{\bar n}$ for some $\bar n$ satisfying $\gamma {\bar n}^2 = \gamma \bar n \log^{(0)} \bar n\geq c \alpha^r$. Therefore $\bar n \geq 1$ and since $\epsilon_0< 0.22$, the protocol we obtain contradicts \autoref{lem:terminal}, showing that the protocol we started with cannot exist. \end{proof} \begin{remark} We note that in the proof of \autoref{thm:main}, to show that a protocol with small communication does not exist, we start with the given protocol and apply the round elimination lemma (i.e., \autoref{lem:roundel}) $r$ times to obtain a $0$-round protocol with small error probability, which is shown to be impossible by \autoref{lem:terminal}. Alternatively, one can apply the round elimination $r-1$ times to obtain a $1$-round protocol with $o(n\log n)$ communication for $\EE_{n}$, which is ruled out by \autoref{thm:singleround}. \end{remark} \section*{Acknowledgments} We would like to thank Hossein Jowhari for many stimulating discussions during the early stages of this work.
\section{The upper bound} \label{sec:upperbound} Recall that in the communication problem $\DISJ_k^m$, each of the two players is given a subset of $[m]$ of size at most $k$ and they communicate in order to determine whether their sets are disjoint or not. In 1997, H\aa stad and Wigderson \cite{ParnafesIRWA97,HastadW07} gave a probabilistic protocol that solves this problem with $O(k)$ bits of communication and has constant one-sided error probability. The protocol takes $O(\log k)$ rounds. Let us briefly review this protocol as this is the starting point of our protocol. Let $S,T\subseteq[m]$ be the inputs of Alice and Bob. Observe that if they find a set $Z$ satisfying $S\subseteq Z\subseteq [m]$, then Bob can replace his input $T$ with $T'=T\cap Z$ as $T'\cap S=T\cap S$. The main observation is that if $S$ and $T$ are disjoint, then a random set $Z\supseteq S$ will intersect $T$ in a uniform random subset, so one can expect $|T'|\approx|T|/2$. In the H\aa stad-Wigderson protocol the players alternate in finding a random set that contains the current input of one of them, effectively halving the other player's input. If in this process the input of one of the players becomes empty, they know the original inputs were disjoint. If, however, the sizes of their inputs do not show the expected exponential decrease in time, then they declare that their inputs intersect. This introduces a small one sided error. Note that one of the two outcomes happens in $O(\log k)$ rounds. An important observation is that Alice can describe a random set $Z\supseteq S$ to Bob using an expected $O(|S|)$ bits by making use of the joint random source. This makes the total communication $O(k)$. In our protocol proving the next theorem, we do almost the same, but we choose the random sets $Z\supseteq S$ not uniformly, but from a biased distribution favoring ever smaller sets. 
This makes the size of the input sets of the players decrease much more rapidly, but describing the random set $Z$ to the other player becomes more costly. By carefully balancing the parameters we optimize for the total communication given any number of rounds. When the number of rounds reaches $\log^*k-O(1)$ the communication reaches its minimum of $O(k)$ and the error becomes exponentially small. \begin{theorem}\label{thm:ub} For any $r\leq \log^*k$, there is an $r$-round probabilistic protocol for $\DISJ^m_k$ with $O(k\log^{(r)}k)$ bits total communication. There is no error for intersecting input sets, and the probability of error for disjoint sets can be made $O(1/\exp^{(r)}(c\log^{(r)} k)+ \exp(-\sqrt k))\ll 1/k$ for any constant $c > 1$. For $r=\log^*k-O(1)$ rounds this means an $O(k)$-bit protocol with error probability $O(\exp(-\sqrt k))$. \end{theorem} \begin{proof} We start with the description of the protocol. Let $S_0$ and $S_1$ be the input sets of Alice and Bob, respectively. For $1\le i\le r$ with $i$ even, Alice sends a message describing a set $Z_i\supseteq S_i$ based on her ``current input'' $S_i$, and Bob updates his ``current input'' $S_{i-1}$ to $S_{i+1}\defeq S_{i-1}\cap Z_i$. In odd-numbered rounds the same happens with the roles of Alice and Bob reversed. We depart from the H\aa stad-Wigderson protocol in the way we choose the sets $Z_i$: Using the shared random source, the players generate $l_i$ random subsets of $[m]$ containing each element of $[m]$ independently and with probability $p_i$. We will set these parameters later. The set $Z_i$ is chosen to be the first such set containing $S_i$. Alice or Bob (depending on the parity of $i$) sends the index of this set or ends the protocol by sending a special error signal if none of the generated sets contain $S_i$. The protocol ends with declaring the inputs disjoint if the error signal is never sent and we have $S_{r+1}=\emptyset$. 
In all other cases the protocol ends with declaring ``not disjoint''. This finishes the description of the protocol except for the setting of the parameters. Note that the error of the protocol is one-sided: $S_0\cap S_1=S_i\cap S_{i+1}$ for $i\le r$, so intersecting inputs cannot yield $S_{r+1}=\emptyset$. We set the parameters (including $k_i$ used in the analysis) as follows: \begin{align*} u&=(c+1)\log^{(r)}k,\\ p_i&=\frac1{\exp^{(i)}u}&\hbox{for }1\le i\le r,\\ l_1&=k\exp(ku),\\ l_i&=k2^{k/2^{i-4}}&\hbox{for }2\le i\le r,\\ k_0&=k_1=k,\\ k_i&=\frac k{2^{i-4}\exp^{(i-1)}u}&\hbox{for }2\le i\le r,\\ k_{r+1}&=0. \end{align*} The message sent in round $i>1$ has length $\lceil\log(l_i+1)\rceil<k/2^{i-4}+\log k+1$, thus the total communication in all rounds but the first is $O(k)$. The length of the first message is $\lceil\log(l_1+1)\rceil\le ku+\log k+1$. The total communication is $O(ku)=O(ck\log^{(r)}k)$ as claimed (recall that $c$ is a constant). Let us assume the input pair is disjoint. To estimate the error probability we call round $i$ {\em bad} if an error message is sent or a set $S_{i+1}$ is created with $|S_{i+1}|>k_{i+1}$. If no bad round exists, we have $S_{r+1}=\emptyset$ and the protocol makes no error. In what follows we bound the probability that round $i$ is bad assuming the previous rounds are not bad, and therefore $|S_j|\le k_j$ for $0\le j\le i$. The probability that a random set constructed in round $i$ contains $S_i$ is $p_i^{|S_i|}\ge p_i^{k_i}$. The probability that none of the $l_i$ sets contains $S_i$ and thus an error message is sent is therefore at most $(1-p_i^{k_i})^{l_i}<e^{-k}$. If no error occurs in the first bad round $i$, then $|S_{i+1}|>k_{i+1}$. Note that in this case $S_{i+1}=S_{i-1}\cap Z_i$ contains each element of $S_{i-1}$ independently and with probability $p_i$. 
This is because $Z_i$ was chosen based only on whether it contains $S_i$, so its intersection with $S_{i-1}$ is independent of that choice (recall that $S_i\cap S_{i-1}=S_1\cap S_0=\emptyset$). For $i<r$ we use the Chernoff bound. The expected size of $S_{i+1}$ is $|S_{i-1}|p_i\le k_{i-1}p_i\le k_{i+1}/2$, thus the probability of $|S_{i+1}|>k_{i+1}$ is at most $2^{-k_{i+1}/4}$. Finally, for the last round $i=r$, we use the simpler Markov estimate: the probability of $|S_{r+1}|>k_{r+1}=0$ is at most the expected size $p_rk_{r-1}\le k/\exp^{(r)}u$. Summing over all these estimates we obtain the following error bound for our protocol: $$\Pr[\hbox{error}]\le re^{-k}+\frac k{\exp^{(r)}u}+\sum_{i=2}^r2^{-k_i/4}.$$ In case $k_r\ge4\sqrt k$ this error estimate proves the theorem. In case $k_r<4\sqrt k$ we need to make minor adjustments in the setting of our parameters. We take $j$ to be the smallest value with $k_j<4\sqrt k$, modify the parameters for round $j$ and stop the protocol after this round, declaring ``disjoint'' if $S_{j+1}=\emptyset$ and ``intersecting'' otherwise. The new parameters for round $j$ are $k'_j=4\sqrt k$, $p'_j=2^{-2\sqrt k}$, $l'_j=k2^{8k}$. This new setting of the parameters makes the message in the last round linear in $k$, while both the probability that round $j-1$ is bad because it makes $|S_j|>k'_j$ and the probability that round $j$ is bad for any reason (error message or $S_{j+1}\ne\emptyset$) are $O(2^{-\sqrt k})$. This finishes the analysis of our protocol. \end{proof}
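The round/communication trade-off of \autoref{thm:ub} can be made concrete with a small calculation. The helper names below are ours; the message-length bounds are simply the ones used in the proof ($ku+\log k+1$ bits for round $1$ and $k/2^{i-4}+\log k+1$ bits for round $i>1$).

```python
import math

def iter_log(r, x):
    """log^{(r)} x: the r-fold iterated base-2 logarithm."""
    for _ in range(r):
        x = math.log2(x)
    return x

def log_star(x):
    """log* x: the number of times log2 can be applied before the
    value drops to at most 1."""
    s = 0
    while x > 1:
        x = math.log2(x)
        s += 1
    return s

def total_bits(k, r, c=2):
    """Upper bound on the total communication of the r-round
    protocol: round 1 costs at most k*u + log k + 1 bits with
    u = (c+1) log^{(r)} k, and round i > 1 costs at most
    k/2^(i-4) + log k + 1 bits, i.e. O(k log^{(r)} k) overall."""
    u = (c + 1) * iter_log(r, k)
    bits = k * u + math.log2(k) + 1
    for i in range(2, r + 1):
        bits += k / 2 ** (i - 4) + math.log2(k) + 1
    return bits
```

For example, for $k=2^{16}$ we have $\log^*k=4$, and the bound drops from roughly $48k$ bits at $r=1$ to under $13k$ bits at $r=3$: adding rounds shrinks the dominant first message while the later rounds contribute only the geometric tail $4k+2k+\cdots$.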
{ "timestamp": "2013-04-05T02:00:49", "yymm": "1304", "arxiv_id": "1304.1217", "language": "en", "url": "https://arxiv.org/abs/1304.1217", "abstract": "In this paper we study the two player randomized communication complexity of the sparse set disjointness and the exists-equal problems and give matching lower and upper bounds (up to constant factors) for any number of rounds for both of these problems. In the sparse set disjointness problem, each player receives a k-subset of [m] and the goal is to determine whether the sets intersect. For this problem, we give a protocol that communicates a total of O(k\\log^{(r)}k) bits over r rounds and errs with very small probability. Here we can take r=\\log^{*}k to obtain a O(k) total communication \\log^{*}k-round protocol with exponentially small error probability, improving on the O(k)-bits O(\\log k)-round constant error probability protocol of Hastad and Wigderson from 1997.In the exist-equal problem, the players receive vectors x,y\\in [t]^n and the goal is to determine whether there exists a coordinate i such that x_i=y_i. Namely, the exists-equal problem is the OR of n equality problems. Observe that exists-equal is an instance of sparse set disjointness with k=n, hence the protocol above applies here as well, giving an O(n\\log^{(r)}n) upper bound. Our main technical contribution in this paper is a matching lower bound: we show that when t=\\Omega(n), any r-round randomized protocol for the exists-equal problem with error probability at most 1/3 should have a message of size \\Omega(n\\log^{(r)}n). Our lower bound holds even for super-constant r <= \\log^*n, showing that any O(n) bits exists-equal protocol should have \\log^*n - O(1) rounds.", "subjects": "Computational Complexity (cs.CC)", "title": "On the communication complexity of sparse set disjointness and exists-equal problems", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. 
YES", "lm_q1_score": 0.9790357591818726, "lm_q2_score": 0.7248702761768248, "lm_q1q2_score": 0.7096739211451513 }
https://arxiv.org/abs/1206.6367
A comparison of the discrete Kolmogorov-Smirnov statistic and the Euclidean distance
Goodness-of-fit tests gauge whether a given set of observations is consistent (up to expected random fluctuations) with arising as independent and identically distributed (i.i.d.) draws from a user-specified probability distribution known as the "model." The standard gauges involve the discrepancy between the model and the empirical distribution of the observed draws. Some measures of discrepancy are cumulative; others are not. The most popular cumulative measure is the Kolmogorov-Smirnov statistic; when all probability distributions under consideration are discrete, a natural noncumulative measure is the Euclidean distance between the model and the empirical distributions. In the present paper, both mathematical analysis and its illustration via various data sets indicate that the Kolmogorov-Smirnov statistic tends to be more powerful than the Euclidean distance when there is a natural ordering for the values that the draws can take -- that is, when the data is ordinal -- whereas the Euclidean distance is more reliable and more easily understood than the Kolmogorov-Smirnov statistic when there is no natural ordering (or partial order) -- that is, when the data is nominal.
\section{Introduction} \label{intro} Testing goodness-of-fit is one of the foundations of modern statistics, as elucidated by~\cite{rao}, for example. The formulation in the discrete setting involves $n$ independent and identically distributed (i.i.d.)\ draws from a probability distribution over $m$ bins (``categories,'' ``cells,'' and ``classes'' are common synonyms for ``bins''). In accordance with the standard conventions, we will use $p$ to denote the actual (unknown) underlying distribution of the draws; $p = (p^{(1)}, p^{(2)}, \dots, p^{(m)})$, with $p^{(1)}$,~$p^{(2)}$, \dots, $p^{(m)}$ being nonnegative and \begin{equation} \label{prob} \sum_{j=1}^m p^{(j)} = 1. \end{equation} We will use $p_0$ to denote a user-specified distribution, usually called the ``model''; again $p_0 = (p_0^{(1)}, p_0^{(2)}, \dots, p_0^{(m)})$, with $p_0^{(1)}$,~$p_0^{(2)}$, \dots, $p_0^{(m)}$ being nonnegative and \begin{equation} \label{prob0} \sum_{j=1}^m p_0^{(j)} = 1. \end{equation} A goodness-of-fit test produces a value --- the ``P-value'' --- that gauges the consistency of the observed data with the assumption that $p = p_0$. In many formulations, the user-specified model $p_0$ consists of a family of probability distributions parameterized by $\theta$, where $\theta$ can be integer-valued, real-valued, complex-valued, vector-valued, matrix-valued, or any combination of the many possibilities. In such cases, the P-value gauges the consistency of the observed data with the assumption that $p = p_0(\hat\theta)$, where $\hat\theta$ is an estimate (taken to be the maximum-likelihood estimate throughout the present paper). We now review the definition of P-values. P-values are defined via the empirical distribution $\hat{p}$, where $\hat{p} = (\hat{p}^{(1)}, \hat{p}^{(2)}, \dots, \hat{p}^{(m)})$, with $\hat{p}^{(j)}$ being the proportion of the $n$ observed draws that fall in the $j$th bin, that is, $\hat{p}^{(j)}$ is the number of draws falling in the $j$th bin, divided by $n$. 
P-values involve a hypothetical experiment taking $n$ i.i.d.\ draws from the assumed actual underlying distribution $p = p_0(\hat\theta)$. We denote by $\hat{P}$ the empirical distribution of the draws from the hypothetical experiment; we denote by $\hat\Theta$ a maximum-likelihood estimate of $\theta$ obtained from the hypothetical experiment. The P-value is then the probability that the discrepancy between the random variables $\hat{P}$ and $p_0(\hat\Theta)$ is at least as large as the observed discrepancy between $\hat{p}$ and $p_0(\hat\theta)$, calculating the probability under the assumption that $p = p_0(\hat\theta)$. To complete the definition of P-values, we must choose a measure of discrepancy. In the present paper, we consider the (discrete) Kolmogorov-Smirnov and Euclidean distances, \begin{equation} \label{dKolmogorov-Smirnov} d_1(a,b) = \max_{1 \le k \le m} \left| \sum_{j=1}^k a^{(j)} - \sum_{j=1}^k b^{(j)} \right| \end{equation} and \begin{equation} \label{dEuclidean} d_2(a,b) = \sqrt{\sum_{j=1}^m (a^{(j)} - b^{(j)})^2}, \end{equation} respectively. The P-value for the Kolmogorov-Smirnov statistic is the probability that $d_1(\hat{P},p_0(\hat\Theta)) \ge d_1(\hat{p},p_0(\hat\theta))$; the P-value for the Euclidean distance is the probability that $d_2(\hat{P},p_0(\hat\Theta)) \ge d_2(\hat{p},p_0(\hat\theta))$. When evaluating the probabilities, we view $\hat{P}$ and $\hat\Theta$ as random variables, constructed with i.i.d.\ draws from the assumed distribution $p = p_0(\hat\theta)$, while viewing the observed $\hat{p}$ and $\hat\theta$ as fixed, not random. If a P-value is very small, then we can be confident that the given observed draws are inconsistent with the assumed model, are not i.i.d.,\ or are both inconsistent and not i.i.d. Needless to say, the Kolmogorov-Smirnov distance defined in~(\ref{dKolmogorov-Smirnov}) is the maximum absolute difference between cumulative distribution functions. 
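For a fixed, fully specified model (no parameter estimation), the two discrepancies and the Monte Carlo P-value they induce can be sketched as follows. This is our own illustrative code, not the authors' implementation, and it omits the estimated-parameter case, in which $p_0(\hat\Theta)$ must be re-estimated inside each simulated experiment.

```python
import random

def ks_distance(a, b):
    """d_1: the maximum absolute difference of cumulative sums."""
    s, best = 0.0, 0.0
    for aj, bj in zip(a, b):
        s += aj - bj
        best = max(best, abs(s))
    return best

def euclidean_distance(a, b):
    """d_2: the Euclidean distance between distributions on m bins."""
    return sum((aj - bj) ** 2 for aj, bj in zip(a, b)) ** 0.5

def p_value(stat, p0, p_hat, n, sims=4000, seed=0):
    """Fraction of simulated empirical distributions of n i.i.d.
    draws from p0 whose discrepancy from p0 is at least the
    observed discrepancy stat(p_hat, p0)."""
    rng = random.Random(seed)
    m = len(p0)
    observed = stat(p_hat, p0)
    bins = range(m)
    hits = 0
    for _ in range(sims):
        counts = [0] * m
        for j in rng.choices(bins, weights=p0, k=n):
            counts[j] += 1
        if stat([c / n for c in counts], p0) >= observed:
            hits += 1
    return hits / sims
```

The standard error of such an estimate over $\ell$ simulations is $\sqrt{P(1-P)/\ell}$, as recalled in Section~\ref{data_analysis}.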
The Kolmogorov-Smirnov statistic depends on the ordering of the bins, unlike the Euclidean distance. As supported by the investigations below, we recommend using the Kolmogorov-Smirnov statistic when there is a natural ordering of the bins, while the Euclidean distance is more reliable and more easily understood than the Kolmogorov-Smirnov statistic when there is no natural ordering (or partial order). Unlike the Euclidean distance, the Kolmogorov-Smirnov statistic utilizes the information in a natural ordering of the bins, when the latter is available. \cite{horn} gave similar recommendations when comparing the $\chi^2$ and Kolmogorov-Smirnov statistics. Detailed comparisons between the Euclidean distance and $\chi^2$ statistics are available in~\cite{perkins-tygert-ward3}. The Kolmogorov-Smirnov statistic is cumulative; it accentuates low-frequency differences between the model and the empirical distribution of the draws, but tends to average away and otherwise obscure high-frequency differences. Similar observations have been made by~\cite{pettitt-stephens}, \cite{dagostino-stephens}, \cite{choulakian-lockhart-stephens}, \cite{from}, \cite{best-rayner}, \cite{haschenburger-spinelli}, \cite{steele-chaseling}, \cite{lockhart-spinelli-stephens}, \cite{ampadu}, and~\cite{ampadu-wang-steele}, among others. Our suggestions appear to be closest to those of~\cite{horn}. There are many cumulative approaches similar to the Kolmogorov-Smirnov statistic. These include the Cram\'er--von-Mises, Watson, Kuiper, and R\'enyi statistics, as well as their Anderson-Darling variants; Section~14.3.4 of~\cite{press-teukolsky-vetterling-flannery}, \cite{stephens2}, and~\cite{renyi} review these statistics. We ourselves are fond of the Kuiper approach. However, the present paper focuses on the popular Kolmogorov-Smirnov statistic; the Cram\'er--von-Mises, Watson, and Kuiper variants are very similar. 
The remainder of the present paper has the following structure: Section~\ref{nonatorder} describes how the Euclidean distance is generally preferable to the Kolmogorov-Smirnov statistic when there is no natural ordering (or partial order) of the bins. Section~\ref{natorder} describes how the Kolmogorov-Smirnov statistic is generally preferable to the Euclidean distance when there is a natural ordering of the bins. Section~\ref{data_analysis} illustrates both cases with examples of data sets and the associated P-values, computing the P-values via Monte-Carlo simulations with guaranteed error bounds. The reader may wish to begin with Section~\ref{data_analysis}, referring back to earlier sections as needed. \section{The case when the bins do not have a natural order} \label{nonatorder} The Euclidean distance is generally preferable to the Kolmogorov-Smirnov statistic when there is no natural ordering (or partial order) of the bins. As discussed by~\cite{perkins-tygert-ward2}, the interaction of parameter estimation and the Euclidean distance is easy to understand and quantify, at least asymptotically, in the limit of large numbers of draws. In contrast, the interaction of parameter estimation and the Kolmogorov-Smirnov statistic can be very complicated, though \cite{choulakian-lockhart-stephens} and~\cite{lockhart-spinelli-stephens} have pointed out that the interaction is somewhat simpler with Cram\'er's and von Mises', Watson's, and some of Anderson's and Darling's very similar statistics. That said, the Euclidean distance can be more reliable even when there are no parameters in the model, that is, when the model $p_0$ is a single, fixed, fully specified probability distribution; the remainder of the present section describes why. 
The basis of the analysis is the following lemma, a reformulation of the fact that the expected maximum absolute deviation from zero of the standard Brownian bridge is $\sqrt{\pi/2} \cdot \ln(2) \approx .8687$ \citep[see, for example, Section~3 of][]{marsaglia-tsang-wang}. \begin{lemma} \label{bridge} Suppose that $m$ is even and that $D^{(1)}$,~$D^{(2)}$, \dots, $D^{(m)}$ form a randomly ordered list of $m/2$ positive ones and $m/2$ negative ones (with the ordering drawn uniformly at random). Then, \begin{equation} {\bf E \,} \max_{1 \le k \le m} \left| \sum_{j=1}^k D^{(j)} \right| \Bigg/ \sqrt{m} \quad \longrightarrow \quad \sqrt{\pi/2} \cdot \ln(2) \end{equation} in the limit that $m \to \infty$, where (as usual) ${\bf E \,}$\,produces the expected value. \end{lemma} We denote by $p$ the actual underlying distribution of the $n$ observed i.i.d.\ draws. We denote by $p_0$ the model distribution. We denote by $\hat{P}$ the empirical distribution of the $n$ draws. These are all probability distributions, that is, $p^{(j)} \ge 0$, $p_0^{(j)} \ge 0$, and $\hat{P}^{(j)} \ge 0$ for $j = 1$,~$2$, \dots, $m$, and (\ref{prob}) and~(\ref{prob0}) hold. Suppose that the actual underlying distribution $p^{(1)}$,~$p^{(2)}$, \dots, $p^{(m)}$ of the draws is the same as the model distribution $p_0^{(1)}$,~$p_0^{(2)}$, \dots, $p_0^{(m)}$; the random variables $\hat{P}^{(1)}$,~$\hat{P}^{(2)}$, \dots, $\hat{P}^{(m)}$ are then the proportions of $n$ i.i.d.\ draws from $p_0$ that fall in the respective $m$ bins. The Euclidean distance is \begin{equation} U = \sqrt{\sum_{j=1}^m (\hat{P}^{(j)} - p_0^{(j)})^2}. \end{equation} The Kolmogorov-Smirnov statistic is \begin{equation} V = \max_{1 \le k \le m} \left|\sum_{j=1}^k (\hat{P}^{(j)} - p_0^{(j)})\right|. \end{equation} The expected value of the square of the Euclidean distance is \begin{equation} \label{expectedx} {\bf E \,} U^2 = \sum_{j=1}^m {\bf E \,}(\hat{P}^{(j)} - p_0^{(j)})^2 = \sum_{j=1}^m \frac{p_0^{(j)}}{n} = \frac{1}{n}. 
\end{equation} As shown, for example, by~\cite{durbin} using Lemma~\ref{bridge} above, the expected value of $\sqrt{n}$ times the Kolmogorov-Smirnov statistic is \begin{equation} \label{expecteds} {\bf E \,} V\sqrt{n} \to \sqrt{\pi/2} \cdot \ln(2) \approx .8687 \end{equation} in the limit that $n \to \infty$ and $\max_{1 \le j \le m} p_0^{(j)} \to 0$. Comparing~(\ref{expectedx}) and~(\ref{expecteds}), we see that $U$ and $V$ are roughly the same size (inversely proportional to $\sqrt{n}$) when the actual underlying distribution of the draws is the same as the model distribution. However, when the actual underlying distribution of the draws differs from the model distribution, the Euclidean distance and the Kolmogorov-Smirnov statistic can be very different. If the number $n$ of draws is large, then the empirical distribution $\hat{P}$ will be very close to the actual distribution $p$. Therefore, to study the performance of the goodness-of-fit statistics as $n \to \infty$ when the actual distribution $p$ differs from the model distribution $p_0$ (and both are independent of $n$), we can focus on the difference between $p$ and $p_0$ (rather than the difference between $\hat{P}$ and $p_0$). We now define and study the difference \begin{equation} \label{diffs} d^{(j)} = p^{(j)} - p_0^{(j)} \end{equation} for $j = 1$,~$2$, \dots, $m$. The Euclidean distance between $p$ and $p_0$ (the root-sum-square difference) is \begin{equation} \label{Euclidean} u = \sqrt{\sum_{j=1}^m (d^{(j)})^2}. \end{equation} The Kolmogorov-Smirnov statistic (the maximum absolute cumulative difference) is \begin{equation} \label{KS} v = \max_{1 \le k \le m} \left| \sum_{j=1}^k d^{(j)} \right|. 
\end{equation} For simplicity (and because the following analysis generalizes straightforwardly), let us consider the illustrative case in which $|d^{(1)}| = |d^{(2)}| = \dots = |d^{(m)}|$, that is, \begin{equation} \label{equal} |d^{(j)}| = c_m \end{equation} for all $j = 1$,~$2$, \dots, $m$, where $c_m$ is a positive real number ($c_m$ must always satisfy $m \cdot c_m \le 2$, since $m \cdot c_m = \sum_{j=1}^m c_m = \sum_{j=1}^m |d^{(j)}| \le \sum_{j=1}^m [p^{(j)} + p_0^{(j)}] = 2$). Combining~(\ref{diffs}), (\ref{prob}), and~(\ref{prob0}) yields that \begin{equation} \label{zero} \sum_{j=1}^m d^{(j)} = 0. \end{equation} Together, (\ref{zero}) and~(\ref{equal}) imply that $m$ is even and that half of $d^{(1)}$,~$d^{(2)}$, \dots, $d^{(m)}$ are equal to $+c_m$, and the other half are equal to $-c_m$. Combining~(\ref{equal}) and~(\ref{Euclidean}) yields that the Euclidean distance is \begin{equation} u = \sqrt{m} \cdot c_m. \end{equation} The fact that half of $d^{(1)}$,~$d^{(2)}$, \dots, $d^{(m)}$ are equal to $+c_m$, and the other half are equal to $-c_m$, yields that the Kolmogorov-Smirnov statistic $v$ defined in~(\ref{KS}) could be as small as $c_m$ or as large as $m \cdot c_m/2$, depending on the ordering of the signs in $d^{(1)}$,~$d^{(2)}$, \dots, $d^{(m)}$. If all orderings are equally likely (which is equivalent to ordering the bins uniformly at random), then by Lemma~\ref{bridge} the mean value for $v$ is $\sqrt{m\pi/2} \cdot \ln(2) \cdot c_m \approx \sqrt{m} \cdot .8687 \cdot c_m$ in the limit that $m$ is large (this is the expected maximum absolute deviation from zero of a tied-down random walk with $m$ steps, each of length $c_m$, that starts and ends at zero; the random walk ends at zero due to~(\ref{zero})). 
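Lemma~\ref{bridge} is easy to check numerically. The following Monte Carlo sketch (our own code, not part of the paper) estimates the expected maximum absolute partial sum for a uniformly random ordering of $m/2$ plus-ones and $m/2$ minus-ones.

```python
import math
import random

def max_abs_partial_sum(signs):
    """max over k of |sum of the first k entries|."""
    s, best = 0, 0
    for d in signs:
        s += d
        best = max(best, abs(s))
    return best

def bridge_constant(m, trials=2000, seed=0):
    """Monte Carlo estimate of E max_k |D^(1)+...+D^(k)| / sqrt(m)
    for a uniformly random ordering of m/2 ones and m/2 minus ones;
    Lemma (bridge) says this tends to sqrt(pi/2)*ln 2 ~ .8687 as m
    grows."""
    rng = random.Random(seed)
    signs = [1] * (m // 2) + [-1] * (m // 2)
    total = 0
    for _ in range(trials):
        rng.shuffle(signs)
        total += max_abs_partial_sum(signs)
    return total / trials / math.sqrt(m)
```

Already for $m$ in the thousands the estimate is close to $\sqrt{\pi/2}\cdot\ln(2)\approx.8687$, while the Euclidean distance $u=\sqrt{m}\cdot c_m$ is of course the same for every ordering.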
Thus, in the limit that the number $n$ of draws is large (and $\max_{1 \le j \le m} p_0^{(j)} \to 0$, while both the model $p_0$ and the alternative distribution $p$ are independent of $n$), the Euclidean distance and the Kolmogorov-Smirnov statistic have similar statistical power on average, if all orderings of the bins are equally likely. However, the Euclidean distance is the same for any ordering of the bins, whereas the power of the Kolmogorov-Smirnov statistic depends strongly on the ordering. We see, then, that the Euclidean distance is more reliable than the Kolmogorov-Smirnov statistic when there is no especially natural ordering for the bins. \begin{remark} \label{l1} It is possible to use an ordering for which the Kolmogorov-Smirnov statistic attains its greatest value (this corresponds to renumbering the bins such that the differences $D^{(j)} = \hat{P}^{(j)}-p_0^{(j)}$ satisfy $D^{(1)} \ge D^{(2)} \ge \dots \ge D^{(m)}$ or $D^{(1)} \le D^{(2)} \le \dots \le D^{(m)}$). However, this data-dependent ordering produces a statistic which is proportional to the $l^1$ distance $\sum_{j=1}^m |D^{(j)}|$ (whereas the Euclidean distance is the $l^2$ distance), as remarked at the top of page~396 of~\cite{hoeffding}. The resulting statistic is no longer cumulative. \end{remark} \section{The case when the bins have a natural order} \label{natorder} The Kolmogorov-Smirnov statistic is often preferable to the Euclidean distance when there is a natural ordering of the bins. In fact, the Kolmogorov-Smirnov statistic is always preferable when the data is very sparse and there is a natural ordering of the bins. 
In the limit that the maximum expected number of draws per bin tends to zero, the Euclidean distance always takes the same value under the null hypothesis, providing no discriminative power: indeed, when the draws producing the empirical distribution $\hat{P}$ are taken from the model distribution $p_0$, the Euclidean distance is almost surely $1/\sqrt{n}$, \begin{equation} \sqrt{\sum_{j=1}^m (\hat{P}^{(j)}-p_0^{(j)})^2} = \frac{1}{\sqrt{n}}, \end{equation} in the limit that $n \cdot \max_{1 \le j \le m} p_0^{(j)} \to 0$ (the reason is that, in this limit, $\max_{1 \le j \le m} p_0^{(j)} \to 0$ and moreover almost every realization of the experiment satisfies that, for all $j = 1$,~$2$, \dots,~$m$, $\hat{P}^{(j)} = 0$ or $\hat{P}^{(j)} = 1/n$, that is, there is at most one observed draw per bin). In contrast, the Kolmogorov-Smirnov statistic is nontrivial even in the limit that the maximum expected number of draws per bin tends to zero --- in fact, this is exactly the continuum limit for the original Kolmogorov-Smirnov statistic involving continuous cumulative distribution functions (as opposed to the discontinuous cumulative distribution functions arising from the discrete distributions considered in the present paper). Furthermore, the Kolmogorov-Smirnov statistic is sensitive to symmetry (or asymmetry) in a distribution, and can detect other interesting properties of distributions that depend on the ordering of the bins. \section{Data analysis} \label{data_analysis} This section gives four examples illustrating the performance of the Kolmogorov-Smirnov statistic and the Euclidean distance in various circumstances. The Kolmogorov-Smirnov statistic is more powerful than the Euclidean distance in the first two examples, for which there are natural orderings of the bins. The Euclidean distance is more reliable than the Kolmogorov-Smirnov statistic in the last two examples, for which any ordering of the bins is necessarily rather arbitrary. 
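Before turning to the examples, here is a quick numerical check of the sparse-limit claim from Section~\ref{natorder} (our own illustrative code): with $n$ draws from a uniform model over $m\gg n^2$ bins, the draws almost surely land in distinct bins and the Euclidean distance is pinned at essentially $1/\sqrt{n}$.

```python
import random
from collections import Counter

def sparse_euclidean(n=50, m=10**9, seed=0):
    """Euclidean distance between the empirical distribution of n
    uniform draws on m bins and the uniform model p0 = (1/m,...,1/m),
    computed over the occupied bins plus the contribution of the
    ~m empty bins.  For m >> n^2 the result is essentially
    1/sqrt(n) for every realization."""
    rng = random.Random(seed)
    counts = Counter(rng.randrange(m) for _ in range(n))
    occupied = sum((c / n - 1 / m) ** 2 for c in counts.values())
    empty = (m - len(counts)) * (1 / m) ** 2
    return (occupied + empty) ** 0.5
```

The Kolmogorov-Smirnov statistic, by contrast, retains its continuum behavior in this limit, which is why it can separate very sparse data from a model that the Euclidean distance cannot distinguish.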
We computed all P-values via Monte-Carlo simulations with guaranteed error bounds, as in Remark~3.3 of~\cite{perkins-tygert-ward3}. Remark~3.4 of~\cite{perkins-tygert-ward3} proves that the standard error of the estimate for a P-value $P$ is $\sqrt{P(1-P)/\ell}$, where $\ell$ is the number of simulations conducted to calculate the P-value. \subsection{A test of randomness} A particular random number generator is supposed to produce an integer from 1 to $2^{32}$ uniformly at random. The model distribution for such a generator is \begin{equation} \label{simplemod} p_0^{(j)} = 2^{-32} \end{equation} for $j = 1$,~$2$, \dots, $2^{32}$. We test the (obviously poor) generator which produces the numbers 1, 2, 3, \dots, $n$, in that order, so that the observed distribution of the generated numbers is \begin{equation} \label{baddata} \hat{p}^{(j)} = \left\{ \begin{array}{rl} 1/n, & j = 1,\ 2,\ \dots,\ n \\ 0, & j = n+1,\ n+2,\ \dots,\ 2^{32} \end{array} \right. \end{equation} for $j = 1$,~$2$, \dots, $2^{32}$. For these observations, the P-value for the Euclidean distance is 1 to several digits of precision, while the P-value for the Kolmogorov-Smirnov statistic is 0 to several digits, at least for $n$ between a hundred and a million. So, as expected, the Euclidean distance has almost no discriminative power for such sparse data, whereas the Kolmogorov-Smirnov statistic easily discerns that the data~(\ref{baddata}) is inconsistent with the model~(\ref{simplemod}). \begin{remark} Like the Euclidean distance, classical goodness-of-fit statistics such as $\chi^2$, $G^2$ (the log--likelihood-ratio), and the Freeman-Tukey/Hellinger distance are invariant to the ordering of the bins, and also produce P-values that are equal to 1 to several digits of precision, at least for $n$ between a hundred and a million. For definitions and further discussion of the $\chi^2$, $G^2$, and Freeman-Tukey statistics, see Section~2 of~\cite{perkins-tygert-ward3}. 
\end{remark} \subsection{A test of Poissonity} A Poisson-distributed random number generator with mean $100$ is supposed to produce a nonnegative integer according to the model \begin{equation} \label{models} p_0^{(j)} = \frac{100^j}{j! \cdot \exp(100)} \end{equation} for $j = 0$,~$1$,~$2$,~$3$, \dots. We test the (obviously poor) generator which produces the numbers 100, 101, 102, \dots, 109, so that the observed distribution of the numbers is \begin{equation} \label{observations} \hat{p}^{(j)} = \left\{ \begin{array}{rl} 1/10, & j = 100, 101, 102, \dots, 109 \\ 0, & \hbox{otherwise} \end{array} \right. \end{equation} for $j = 0$,~$1$,~$2$,~$3$, \dots. The P-values, each computed via 4,000,000 simulations, are \begin{itemize} \item Kolmogorov-Smirnov: .0075 \item Euclidean distance: .998 \item $\chi^2$: .999 \item $G^2$ (the log--likelihood-ratio): .999 \item Freeman-Tukey (the Hellinger distance): .998 \end{itemize} For definitions and further discussion of the $\chi^2$, $G^2$, and Freeman-Tukey statistics, see Section~2 of~\cite{perkins-tygert-ward3}. The Kolmogorov-Smirnov statistic is far more powerful for this example, in which the bins have a natural ordering (in this example the bins are the nonnegative integers). Figure~\ref{observedpmf} plots the model probabilities $p_0^{(0)}$, $p_0^{(1)}$, $p_0^{(2)}$, \dots\ defined in~(\ref{models}) along with the observed proportions $\hat{p}^{(0)}$, $\hat{p}^{(1)}$, $\hat{p}^{(2)}$, \dots\ defined in~(\ref{observations}). Figure~\ref{simulatedpmf} plots the model probabilities $p_0^{(0)}$, $p_0^{(1)}$, $p_0^{(2)}$, \dots\ along with analogues of the proportions $\hat{p}^{(0)}$, $\hat{p}^{(1)}$, $\hat{p}^{(2)}$, \dots\ for a simulation generating 10 i.i.d.\ draws according to the model. 
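The Poisson example can be re-run at reduced scale with the following sketch (our own code, not the authors'; a few thousand simulations instead of 4,000,000, with the model truncated at $j=300$, where the Poisson tail with mean $100$ is negligible). It reproduces the qualitative gap: a small Kolmogorov-Smirnov P-value and a Euclidean P-value near 1, though the exact values differ from the high-precision ones reported above.

```python
import math
import random

def poisson_pmf(mean, jmax):
    """Model probabilities p0^(j) = mean^j e^(-mean)/j!, j = 0..jmax."""
    return [math.exp(j * math.log(mean) - mean - math.lgamma(j + 1))
            for j in range(jmax + 1)]

def ks_stat(a, b):
    s, best = 0.0, 0.0
    for x, y in zip(a, b):
        s += x - y
        best = max(best, abs(s))
    return best

def eucl_stat(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def poisson_p_value(stat, sims=2000, seed=0, jmax=300):
    """Monte Carlo P-value of the observations (1/10 on each of
    j = 100..109) against the fixed Poisson(100) model."""
    rng = random.Random(seed)
    n = 10
    p0 = poisson_pmf(100, jmax)
    obs = [0.0] * (jmax + 1)
    for j in range(100, 110):
        obs[j] = 1 / n
    observed = stat(obs, p0)
    bins = range(jmax + 1)
    hits = 0
    for _ in range(sims):
        counts = [0] * (jmax + 1)
        for j in rng.choices(bins, weights=p0, k=n):
            counts[j] += 1
        if stat([c / n for c in counts], p0) >= observed:
            hits += 1
    return hits / sims
```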
Figure~\ref{observedcmf} plots the cumulative model probabilities \ $p_0^{(0)}$,\; $p_0^{(0)}+p_0^{(1)}$,\; $p_0^{(0)}+p_0^{(1)}+p_0^{(2)}$, \ \dots\ along with the cumulative observed proportions \ $\hat{p}^{(0)}$,\; $\hat{p}^{(0)}+\hat{p}^{(1)}$,\; $\hat{p}^{(0)}+\hat{p}^{(1)}+\hat{p}^{(2)}$, \ \dots. Figure~\ref{simulatedcmf} plots the cumulative model probabilities \ $p_0^{(0)}$,\; $p_0^{(0)}+p_0^{(1)}$,\; $p_0^{(0)}+p_0^{(1)}+p_0^{(2)}$, \ \dots\ along with analogues of the cumulative proportions \ $\hat{p}^{(0)}$,\; $\hat{p}^{(0)}+\hat{p}^{(1)}$,\; $\hat{p}^{(0)}+\hat{p}^{(1)}+\hat{p}^{(2)}$, \ \dots\ for the simulation generating 10 i.i.d.\ draws according to the model. \begin{figure}[p] \begin{center} \rotatebox{-90}{\scalebox{.47}{\includegraphics{plotfish}}} \\\vspace{.1in} \caption{Proportions associated with the bins for the observations} \label{observedpmf} \end{center} \end{figure} \begin{figure} \begin{center} \rotatebox{-90}{\scalebox{.47}{\includegraphics{plotfishr}}} \\\vspace{.1in} \caption{Proportions associated with the bins for a simulation} \label{simulatedpmf} \end{center} \end{figure} \begin{figure} \begin{center} \rotatebox{-90}{\scalebox{.47}{\includegraphics{plotfishc}}} \\\vspace{.1in} \caption{Cumulative proportions associated with the bins for the observations} \label{observedcmf} \end{center} \end{figure} \begin{figure} \begin{center} \rotatebox{-90}{\scalebox{.47}{\includegraphics{plotfishrc}}} \\\vspace{.1in} \caption{Cumulative proportions associated with the bins for the simulation from Figure~\ref{simulatedpmf}} \label{simulatedcmf} \end{center} \end{figure} \subsection{A test of Hardy-Weinberg equilibrium} In a population with suitably random mating, the proportions of pairs of Rhesus haplotypes in members of the population (each member has one pair) can be expected to follow the Hardy-Weinberg law discussed by~\cite{guo-thompson}, namely to arise via random sampling from the model \begin{equation} \label{hw} 
p_0^{(j,k)}(\theta_1, \theta_2, \dots, \theta_9) = \left\{ \begin{array}{cl} 2 \cdot \theta_j \cdot \theta_k, & j > k \\ (\theta_k)^2, & j = k \end{array} \right. \end{equation} for $j,k = 1$,~$2$, \dots,~$9$ with $j \ge k$, under the constraint that \begin{equation} \sum_{j=1}^9 \theta_j = 1, \end{equation} where the parameters $\theta_1$,~$\theta_2$, \dots,~$\theta_9$ are the proportions of the nine Rhesus haplotypes in the population (naturally, their maximum-likelihood estimates are the proportions of the haplotypes in the given data). For $j,k = 1$,~$2$, \dots,~$9$ with $j \ge k$, therefore, $p_0^{(j,k)}$ is the expected probability that the pair of haplotypes in the genome of an individual is the pair $j$ and $k$, given the parameters $\theta_1$,~$\theta_2$, \dots,~$\theta_9$. In this formulation, the hypothesis of suitably random mating entails that the members of the sample population are i.i.d.\ draws from the model specified in~(\ref{hw}); if a goodness-of-fit statistic rejects the model with high confidence, then we can be confident that mating has not been suitably random. Table~\ref{hwt} provides data on $n = 8297$ individuals; we duplicated Figure~3 of~\cite{guo-thompson} to obtain Table~\ref{hwt}. Figure~\ref{phwt} plots the associated P-values, each computed via 90,000 Monte-Carlo simulations. The Kolmogorov-Smirnov statistic depends on the ordering of the bins; for the first trial $t=1$ in Figure~\ref{phwt}, the order of the bins is the lexicographical ordering, namely $(1,1)$,~$(2,1)$, $(2,2)$, $(3,1)$, $(3,2)$, $(3,3)$, \dots, $(9,9)$. The nine trials $t = 2$,~$3$, \dots, $10$ displayed in Figure~\ref{phwt} use pseudorandom orderings of the bins. Please note that the Euclidean distance does not depend on the ordering. Generally, a more powerful statistic produces lower P-values. In Figure~\ref{phwt}, the P-values for the Kolmogorov-Smirnov statistic are sometimes lower, sometimes higher than the P-values for the Euclidean distance. 
There is no particularly natural ordering of the bins for~Figure~\ref{phwt}; Figure~\ref{phwt} displays 10 different orderings corresponding to 10 different trials. Figure~\ref{phwt} demonstrates that the Euclidean distance is more reliable than the Kolmogorov-Smirnov statistic when there is no natural ordering (or partial order) for the bins. \begin{remark} The P-values for classical goodness-of-fit statistics are substantially higher; the classical statistics are less powerful for this example. The P-values, each computed via 4,000,000 Monte-Carlo simulations, are \begin{itemize} \item Euclidean distance: .039 \item $\chi^2$: .693 \item $G^2$ (the log--likelihood-ratio): .600 \item Freeman-Tukey (the Hellinger distance): .562 \end{itemize} For definitions and further discussion of the $\chi^2$, $G^2$, and Freeman-Tukey statistics, see Section~4.5 of~\cite{perkins-tygert-ward3}. Like the Euclidean distance, the $\chi^2$, $G^2$, and Freeman-Tukey statistics are all invariant to the ordering of the bins. 
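A minimal sketch of these statistics (using one common convention for the Freeman-Tukey statistic, with made-up counts) illustrates that all three are invariant to the ordering of the bins:

```python
import numpy as np

# chi-square, log-likelihood-ratio (G^2), and Freeman-Tukey statistics for
# binned counts against model probabilities p; counts are illustrative
def chi2(counts, p):
    n = counts.sum()
    return np.sum((counts - n * p) ** 2 / (n * p))

def g2(counts, p):
    n = counts.sum()
    nz = counts > 0        # empty bins contribute 0 to the sum
    return 2 * np.sum(counts[nz] * np.log(counts[nz] / (n * p[nz])))

def freeman_tukey(counts, p):
    n = counts.sum()
    return 4 * np.sum((np.sqrt(counts) - np.sqrt(n * p)) ** 2)

counts = np.array([8, 6, 2, 4])
p = np.full(4, 0.25)
perm = np.array([2, 0, 3, 1])
for stat in (chi2, g2, freeman_tukey):
    # each statistic is invariant to a permutation of the bins
    assert np.isclose(stat(counts, p), stat(counts[perm], p[perm]))
```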
\end{remark} \begin{table} \caption{Frequencies of pairs of Rhesus haplotypes} \label{hwt} \begin{center} \hspace{3.5pc}$k$\\\vspace{4pt} $j$ \begin{tabular}{c||c|c|c|c|c|c|c|c|c} \hspace{-1pc}$_{j\hspace{-.3pc}}\diagdown{}^{\hspace{-.35pc}k}$\hspace{-1.3pc} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline\hline 1 & 1236 &&&&&&& \\\hline 2 & 120 & 3 &&&&&&& \\\hline 3 & 18 & 0 & 0 &&&&& \\\hline 4 & 982 & 55 & 7 & 249 &&&& \\\hline 5 & 32 & 1 & 0 & 12 & 0 &&& \\\hline 6 & 2582 & 132 & 20 & 1162 & 29 & 1312 && \\\hline 7 & 6 & 0 & 0 & 4 & 0 & 4 & 0 & \\\hline 8 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\hline 9 & 115 & 5 & 2 & 53 & 1 & 149 & 0 & 0 & 4 \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \rotatebox{-90}{\scalebox{.47}{\includegraphics{plotks}}} \\\vspace{.1in} \caption{P-values for Table~\ref{hwt} to be consistent with formula~(\ref{hw})} \label{phwt} \end{center} \end{figure} \subsection{A test of uniformity} Table~\ref{skittlest} duplicates Table~1 of~\cite{gilchrist}, giving the colors of the $n = 62$ pieces of candy in a 2.17 ounce bag. Figure~\ref{pskittlest} plots the P-values for Table~\ref{skittlest} to be consistent up to expected random fluctuations with Table~\ref{skittlestu}, the model of uniform proportions. We computed each P-value via 4,000,000 Monte-Carlo simulations. The Kolmogorov-Smirnov statistic depends on the ordering of the bins; the ten trials $t = 1$,~$2$, \dots, $10$ displayed in Figure~\ref{pskittlest} use pseudorandom orderings of the bins. The Euclidean distance does not depend on the ordering. Generally, a more powerful statistic produces lower P-values. In Figure~\ref{pskittlest}, the P-values for the Kolmogorov-Smirnov statistic are sometimes lower, sometimes higher than the P-values for the Euclidean distance. There is no particularly natural ordering of the bins for Table~\ref{skittlestu}; Figure~\ref{pskittlest} displays 10 different pseudorandom orderings corresponding to 10 different trials. 
Figure~\ref{pskittlest} illustrates that the Euclidean distance is more reliable than the Kolmogorov-Smirnov statistic when there is no natural ordering (or partial order) for the bins. \pagebreak \begin{remark} Table~\ref{skittlest} provides a possible means for ordering the bins. However, such an ordering will depend on the observed data. Using a data-dependent ordering can profoundly alter the nature of the goodness-of-fit statistic; see Remark~\ref{l1}. \end{remark} \begin{remark} Like the Euclidean distance, many classical goodness-of-fit statistics are invariant to the ordering of the bins. The following are P-values, each computed via 4,000,000 Monte-Carlo simulations: \begin{itemize} \item Euclidean distance: .770 \item $\chi^2$: .770 \item $G^2$ (the log--likelihood-ratio): .766 \item Freeman-Tukey (the Hellinger distance): .755 \end{itemize} For definitions and further discussion of the $\chi^2$, $G^2$, and Freeman-Tukey statistics, see Section~2 of~\cite{perkins-tygert-ward3}. For this example, the Euclidean distance and the $\chi^2$ statistic produce exactly the same P-values: for the model of homogeneous proportions, displayed in Table~\ref{skittlestu}, the Euclidean distance is directly proportional to the square root of the $\chi^2$ statistic, and hence the Euclidean distance is a strictly increasing function of $\chi^2$. 
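The proportionality claimed above can be checked numerically (a sketch with simulated counts):

```python
import numpy as np

# Under the uniform model p_j = 1/k, the Euclidean distance equals
# sqrt(chi2 / (n * k)), hence is a strictly increasing function of chi2.
rng = np.random.default_rng(0)
k = 5
p = np.full(k, 1.0 / k)
for _ in range(100):
    counts = rng.multinomial(62, p)   # 62 draws, matching the bag size above
    n = counts.sum()
    euclid = np.sqrt(np.sum((counts / n - p) ** 2))
    chi2 = np.sum((counts - n * p) ** 2 / (n * p))
    assert np.isclose(euclid, np.sqrt(chi2 / (n * k)))
```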
\end{remark}

\begin{table}
\caption{Observed frequencies of colors of candies in a 2.17 ounce bag}
\label{skittlest}
\begin{center}
\begin{tabular}{ccccccc}
{\it color} && red & orange & yellow & green & violet \\
{\it number} && 15 & 9 & 14 & 11 & 13
\end{tabular}
\end{center}
\end{table}

\begin{table}
\caption{Expected frequencies of colors of candies in a 2.17 ounce bag}
\label{skittlestu}
\begin{center}
\begin{tabular}{ccccccc}
{\it color} && red & orange & yellow & green & violet \\
{\it number} && 12.4 & 12.4 & 12.4 & 12.4 & 12.4
\end{tabular}
\end{center}
\end{table}

\begin{figure}
\begin{center}
\rotatebox{-90}{\scalebox{.47}{\includegraphics{plotskittles}}}
\\\vspace{.1in}
\caption{P-values for Table~\ref{skittlest} to be consistent with the model displayed in Table~\ref{skittlestu}}
\label{pskittlest}
\end{center}
\end{figure}

\section*{Acknowledgements}
\addcontentsline{toc}{section}{\protect\numberline{}Acknowledgements}
We would like to thank Alex Barnett, G\'erard Ben Arous, James Berger, Tony Cai, Sourav Chatterjee, Ronald Raphael Coifman, Ingrid Daubechies, Jianqing Fan, Jiayang Gao, Andrew Gelman, Leslie Greengard, Peter W. Jones, Deborah Mayo, Peter McCullagh, Michael O'Neil, Ron Peled, William Perkins, William H. Press, Vladimir Rokhlin, Joseph Romano, Gary Simon, Amit Singer, Michael Stein, Stephen Stigler, Joel Tropp, Larry Wasserman, and Douglas A. Wolfe. This work was supported in part by Alfred P. Sloan Research Fellowships, a Donald D. Harrington Faculty Fellowship, and a DARPA Young Faculty Award.

\addcontentsline{toc}{section}{\protect\numberline{}References}
\bibliographystyle{asamod}
% arXiv:2003.07382, \url{https://arxiv.org/abs/2003.07382}
\title{Slack Ideals in Macaulay2}
\begin{abstract}
Recently Gouveia, Thomas and the authors introduced the slack realization space, a new model for the realization space of a polytope. It represents each polytope by its slack matrix, the matrix obtained by evaluating each facet inequality at each vertex. Unlike the classical model, the slack model naturally mods out projective transformations. It is inherently algebraic, arising as the positive part of a variety of a saturated determinantal ideal, and provides a new computational tool to study classical realizability problems for polytopes. We introduce the package SlackIdeals for Macaulay2, which provides methods for creating and manipulating slack matrices and slack ideals of convex polytopes and matroids. Slack ideals are often difficult to compute. To improve the power of the slack model, we develop two strategies to simplify computations: we scale as many entries of the slack matrix as possible to one; we then obtain a reduced slack model combining the slack variety with the more compact Grassmannian realization space model. This allows us to study slack ideals that were previously out of computational reach. As applications, we show that the well-known Perles polytope does not admit rational realizations and prove the non-realizability of a large quasi-simplicial sphere.
\end{abstract}
\section{Introduction}
Slack matrices of polytopes are nonnegative real matrices whose entries express the slack of a vertex in a facet inequality. In particular, the zero pattern of a slack matrix encodes the vertex-facet incidence structure of the polytope. Slack matrices have found remarkable use in the theory of extended formulations of polytopes: Yannakakis \cite{Y91} proved that the extension complexity of a polytope is equal to the nonnegative rank of its slack matrix. More generally, one can define the slack matrix of a matroid by computing the slacks of the ground set vectors in the hyperplanes of the matroid. If $P$ is a $d$-dimensional polytope, replacing all positive entries in the slack matrix with distinct variables, one obtains a new sparse generic matrix $S_P(\xx)$, called the \textit{symbolic slack matrix} of $P$. Then we define the \textit{slack ideal} $I_P$ of $P$ as the ideal of all $(d+2)$-minors of $S_P(\xx)$, saturated with respect to the product of all variables in $S_P(\xx)$. Slack ideals were introduced for polytopes in \cite{GPRT17}, where it was also noted that they could be used to model the realization space of a polytope. The details of this realization space model and further properties of the slack ideal were studied in \cite{GMTWfirstpaper}, \cite{GMTWsecondpaper} and \cite{GMWthirdpaper}. An analogous realization space model for matroids was introduced in \cite{BW19}. In this paper, we describe the \texttt{Macaulay2} \cite{M2} package \texttt{SlackIdeals.m2}, which is available at \url{https://bitbucket.org/macchia/slackideals/src/master/SlackIdeals.m2}. It provides methods to define and manipulate slack matrices of polytopes, matroids, polyhedra, and cones; to obtain a slack matrix directly from the Gale transform of a polytope; to compute the symbolic slack matrix and the slack ideal from a slack matrix; and to compute the graphic ideal of a polytope, the cycle ideal and the universal ideal of a matroid.
Slack ideal computations are often out of computational reach. Therefore we develop two techniques to speed up and simplify computations. First, we set as many entries of the slack matrix as possible to one. One can compute the slack ideal of this dehomogenized slack matrix and then rehomogenize the resulting ideal (see Proposition \ref{PROP:rehomIdeal}). The new ideal coincides with the original slack ideal if the latter is radical. Second, we obtain a reduced slack matrix by keeping the columns of a set of facets $F$ that contains a flag (a maximal chain in the face lattice of $P$) and such that the facets not in $F$ are simplicial. Combining these two strategies, we have a powerful tool for the study of hard realizability questions. As applications, we show that the well-known Perles polytope does not admit rational realizations and prove the non-realizability of a large quasi-simplicial sphere.

\section{Slack matrices and slack ideals}
Given a collection of points $V = \{\vv_1,\ldots, \vv_n\}\subset\RR^d$ and a collection of (affine) hyperplanes $H = \{\{\xx\in\RR^d: b_i-\aalpha_i^\top\xx = 0\} : i=1,\ldots,f\}$, we can define a \textit{slack matrix of the pair} $(V,H)$ by
\[ S_{(V,H)} = \begin{bmatrix} \1 & \vv_1 \\ \vdots & \vdots \\ \1 & \vv_n \end{bmatrix} \begin{bmatrix} b_1 & \cdots & b_f \\ -\aalpha_1 & \cdots & -\aalpha_f \end{bmatrix} \in \RR^{n\times f}, \]
so that the $(i,j)$ entry is the slack $b_j - \aalpha_j^\top\vv_i$, which vanishes exactly when $\vv_i$ lies on the $j$th hyperplane. If $P$ is a $d$-polytope, we take $V = \textup{vert}(P)$ and $H$ to be the set of facet-defining hyperplanes. Then $S_P = S_{(V,H)}$. When coordinates $V$ are given for the vectors of a matroid~$M$, they are always assumed to be an affine configuration which gets homogenized to form the matroid; in particular, this means that if $V = \textup{vert}(P)$, then the associated matroid is the matroid of the polytope $P$. The hyperplanes are taken to be all hyperplanes of $M$, and then $S_M = S_{(V,H)}$.
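The definition above can be illustrated numerically for the unit square (a sketch in numpy; the facet ordering and the hyperplane data $(b_j, \aalpha_j)$ below are our choice for illustration, not output of the package):

```python
import numpy as np

# Slack matrix of the unit square P = conv(V); entry (i, j) is the slack
# b_j - alpha_j . v_i of vertex v_i in facet hyperplane {x : b_j - alpha_j^T x = 0}.
V = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], dtype=float)
# facets, oriented so the slack is nonnegative on P:
# x1 >= 0, x2 >= 0, x1 <= 1, x2 <= 1
b = np.array([0, 0, 1, 1], dtype=float)
alpha = np.array([[-1, 0], [0, -1], [1, 0], [0, 1]], dtype=float)

S = np.hstack([np.ones((4, 1)), V]) @ np.vstack([b, -alpha.T])

assert np.all(S >= 0)                     # every vertex satisfies every facet
assert np.linalg.matrix_rank(S) == 2 + 1  # slack matrix of a d-polytope has rank d+1
```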
{\renewcommand{\baselinestretch}{0.9} \begin{verbatim} i1 : needsPackage "SlackIdeals"; i2 : V = {{0,0},{0,1},{1,1},{1,0}}; -- Compute the slack matrix of P=conv(V) i3 : slackMatrix(V) o3 = | 0 1 0 1 | | 1 0 0 1 | | 0 1 1 0 | | 1 0 1 0 | -- Compute the slack matrix of matroid of V i4 : slackMatrix(V, Object=>"matroid") o4 = | -1 -1 0 -1 0 0 | | -1 0 1 0 1 0 | | 0 1 1 0 0 -1 | | 0 0 0 -1 -1 -1 | \end{verbatim} } The \texttt{slackMatrix} command also takes a pre-computed matroid, polyhedron or cone object as input. Another way to compute the slack matrix of a polytope is from its Gale transform using the command \texttt{slackFromGaleCircuits}. Let $G$ be a matrix with real entries whose columns are the vectors of a Gale transform of a polytope $P$. A slack matrix of $P$ is computed by finding the minimal positive circuits of $G$, see \cite[Section 5.4]{G03}. Alternatively, the command \texttt{slackFromGalePlucker} applies the maps of \cite[Section~5]{GMWthirdpaper} to fill a slack matrix with Pl\"ucker coordinates of the Gale transform. The slack matrices of a few specific polytopes and matroids of theoretical importance are built-in, using the command \texttt{specificSlackMatrix}. \medskip The \textit{symbolic slack matrix} can be obtained by replacing the nonzero entries of a slack matrix by distinct variables; that is, \[ [S_{(V,H)}(\xx)]_{i,j} = \begin{cases} 0 & \text{ if } \vv_i\in H_j \\ x_{i,j} & \text{ if } \vv_i\notin H_j \end{cases}. \] From this sparse generic matrix we obtain the \textit{slack ideal} as the saturation of the ideal of its $(d+2)$-minors by the product of all variables in $S_{(V,H)}(\xx)$: \[ I_{(V,H)} = \langle (d+2)-\textup{minors of } S_{(V,H)}(\xx)\rangle : \left(\prod_{j=1}^f\prod_{i : \vv_i\notin H_j} x_{i,j}\right)^\infty. \] Given a (symbolic) slack matrix of a $d$-polytope, $(d+1)$-dimensional cone, or rank $d+1$ matroid, we can compute the associated slack ideal, specifying $d$ as an input. 
Unless we pass variable names as an option, the function labels the variables consecutively by rows with a single index starting from $1$: {\renewcommand{\baselinestretch}{0.9} \begin{verbatim} -- Compute slack ideal of d-polytope P=conv(V) i10 : V = {{0,0},{0,1},{1,1},{1,0}}; i11 : slackIdeal(2, slackMatrix(V)) -- here d=2 o11 = ideal(x x x x - x x x x ) 0 3 5 6 1 2 4 7 \end{verbatim} } We get the same result if we compute \texttt{slackIdeal(2,V)}, giving only the list of vertices of a $d$-polytope or ground set vectors of a matroid instead of a slack matrix. {We also get the same result with \texttt{slackIdeal(V)}, but the computation is faster if you provide $d$ as an argument.} As optional argument, one can choose the object to be set as \texttt{"polytope"}, \texttt{"cone"}, or \texttt{"matroid"} (default is \texttt{Object=>"polytope"}). To a polytope or matroid we can also associate a specific toric ideal, known as the {\em graphic} or {\em cycle ideal}, respectively. These ideals are important in the classification of certain projectively unique polytopes \cite{GMTWsecondpaper} and matroids \cite{BW19}, and can be computed using the commands \texttt{graphicIdeal} and \texttt{cycleIdeal}. In \cite[Section 4]{GMWthirdpaper} it is shown that a slack matrix can be filled with Pl\"ucker coordinates of a matrix formed from the vertex coordinates of a polytope (or extreme ray generators of a cone or ground set vectors of a matroid). This idea is the basis for the reduction technique described in \cite[Section 6] {GMWthirdpaper} and Section~\ref{S.Reduction}. The Grassmannian section ideal of a polytope is also defined and shown to cut out exactly a set of representatives of the slack variety that are constructed in this way \cite[Section 4.1]{GMWthirdpaper}. The command \texttt{grassmannSectionIdeal} computes this section ideal given a set of vertices of a polytope and the indices of vertices that span each facet. 
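As a numeric sanity check (a sketch, not part of the package), the binomial generator computed in \texttt{o11} vanishes on every matrix obtained from the square's $0/1$ slack matrix by positive row and column scalings, all of which lie in the slack variety:

```python
import numpy as np

# The generator x0*x3*x5*x6 - x1*x2*x4*x7, with variables read off the
# nonzero entries of the slack matrix row by row, vanishes on positive
# row/column scalings of the square's 0/1 slack matrix.
pattern = np.array([[0, 1, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0],
                    [1, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
for _ in range(20):
    S = np.diag(rng.uniform(1, 2, 4)) @ pattern @ np.diag(rng.uniform(1, 2, 4))
    x = S[S != 0]          # x0, ..., x7 in row-major order
    assert np.isclose(x[0]*x[3]*x[5]*x[6] - x[1]*x[2]*x[4]*x[7], 0)
```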
\section{On the dehomogenization of the slack ideal} Let $P$ be a polytope and $S_P$ its slack matrix. We define the \textit{non-incidence graph} $G_P$ as the bipartite graph whose vertices are the vertices and facets of $P$, and whose edges are the vertex-facet pairs of $P$ such that the vertex is not on the facet. This graphic structure provides a systematic way to scale a maximal number of entries in $S_P$ to~$1$, as spelled out in \cite[Lemma~5.2]{GMTWsecondpaper}. In particular, we may scale the rows and columns of $S_P(\xx)$ so that it has ones in the entries indexed by the edges in a maximal spanning forest of the graph $G_P$. This can be done using \texttt{setOnesForest}, which outputs a sequence $(Y,F)$ where $Y$ is the scaled symbolic slack matrix and $F$ is the spanning forest used to scale $Y$. {\renewcommand{\baselinestretch}{0.9} \begin{verbatim} i23 : V = {{0,0,0},{1,0,0},{0,1,0},{0,0,1},{1,0,1},{1,1,0}}; i24 : (Y, F) = setOnesForest(symbolicSlackMatrix(V)); Y o24 = | 0 1 0 0 1 | | 1 0 0 0 1 | | 0 1 1 0 0 | | 1 0 x_7 0 0 | | 0 1 0 1 0 | | 1 0 0 x_11 0 | \end{verbatim}} This leads to a dehomogenized version of the slack ideal defined as follows. Given $S_P$ and a maximal spanning forest $F$ of $G_P$, let $S_P(\xx^F)$ be the symbolic slack matrix of $P$ with all the variables corresponding to edges in~$F$ set to $1$. Then the dehomogenized ideal, $I_P^F$, is the slack ideal of this scaled slack matrix: $$I_P^F := \langle (d+2)-\textup{minors of }S_P(\xx^F)\rangle : \left(\prod \xx^F\right)^\infty.$$ It is natural to ask what is the relation between $I_P^F$ and the original slack ideal $I_P$. In particular, we might wish to know if we can recover the full slack ideal from $I_P^F$. From \cite[Lemma 5.2]{GMTWsecondpaper} we know that any slack matrix in $\mathcal V(I_P)$ (or, in fact, any point in the slack variety with all coordinates that correspond to $F$ being nonzero) can be scaled to a matrix in $\mathcal V(I^F_P)$. 
Conversely, it is clear that any point in $\mathcal V(I^F_P)$ can be thought of as a point in $\mathcal V(I_P)$. Thus, in terms of the varieties we have $\VV(I_P)^*/(\RR^v\times\RR^f) \cong \VV(I_P^F)^*,$ where $\VV(I)^*$ denotes the part of the variety where all coordinates are nonzero. To see the algebraic implications of this, let us introduce the following rehomogenization process. Notice that in the proof of \cite[Lemma 5.2]{GMTWsecondpaper}, we dehomogenize by following the edges of forest $F$ starting from some chosen root(s) and moving toward the leaves. The destination vertex of each edge tells us which row or column to scale, and the edge label is the variable by which we scale. Now, given a polynomial in $I_P^F$, using the same forest and orientation we proceed in the reverse order: starting at the leaves, for each edge of the forest, we reintroduce the variable corresponding to it in order to rehomogenize the polynomial with respect to the row or column corresponding to the destination vertex of that edge. \begin{example} Consider the slack matrix $S_P(\xx^F)$ of the triangular prism $P$ scaled \noindent\begin{minipage}[c]{0.65\textwidth} according to forest $F$, pictured in Figure \ref{F:forest}. Then $I_P^F = \langle x_7-1,x_{11}-1\rangle$. So we can rehomogenize, for example, the element $x_7-x_{11}$ with respect to forest~$F$ as follows.\\ \indent First, consider the leaf corresponding to column 3. \end{minipage} \hspace{0.02\textwidth} {\renewcommand{\arraystretch}{0.8}\begin{minipage}[c]{0.29\textwidth} \vspace{-10pt} \[ S_P(\xx^F) \!=\! \begin{bmatrix} 0 & 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & x_7 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & x_{11} & 0 \end{bmatrix} \] \end{minipage}}\\[0.1cm] Its edge is labeled with $x_5$, so we reintroduce that variable to the monomial $x_{11}$ since its degree in column 3 is currently $0$, while the degree of $x_7$ in that column is $1$. 
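The scaling procedure of \cite[Lemma 5.2]{GMTWsecondpaper} can be sketched as follows (an illustrative reimplementation, not the package's \texttt{setOnesForest}): build the bipartite non-incidence graph, take a breadth-first spanning forest, and scale the row or column at the destination of each tree edge so that the corresponding entry becomes one.

```python
import numpy as np
from collections import deque

def scale_by_forest(S):
    # Scale rows/columns of a nonnegative matrix so that the entries on a
    # BFS spanning forest of its bipartite non-incidence graph become 1.
    m, n = S.shape
    S = S.astype(float).copy()
    # nodes 0..m-1 are rows, m..m+n-1 are columns; edges are nonzero entries
    adj = {u: [] for u in range(m + n)}
    for i in range(m):
        for j in range(n):
            if S[i, j] != 0:
                adj[i].append(m + j)
                adj[m + j].append(i)
    seen, queue, forest = {0}, deque([0]), []
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                i, j = (u, v - m) if v >= m else (v, u - m)
                # scale the line of the destination node so S[i, j] becomes 1;
                # each node is scaled only once, so forest entries stay 1
                if v >= m:
                    S[:, j] /= S[i, j]
                else:
                    S[i, :] /= S[i, j]
                forest.append((i, j))
                queue.append(v)
    return S, forest

# non-incidence pattern of the triangular prism (cf. the matrix Y above),
# filled with arbitrary positive slacks
pattern = np.array([[0, 1, 0, 0, 1],
                    [1, 0, 0, 0, 1],
                    [0, 1, 1, 0, 0],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [1, 0, 0, 1, 0]])
rng = np.random.default_rng(2)
S0 = pattern * rng.uniform(1, 3, size=pattern.shape)
scaled, forest = scale_by_forest(S0)
assert all(np.isclose(scaled[i, j], 1) for i, j in forest)
# 12 nonzero entries and 10 forest edges leave two free entries (cf. x7, x11)
assert len(forest) == 10
```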
We continue this process until all the edges of $F$ have been used. \begin{figure} \scalebox{1}{\hspace{-15pt} \begin{tikzpicture} \newcommand{\midarrow}{\tikz \draw[-{>[length=1.5mm, width=1.5mm,angle'=45,open]}] (0,0) -- +(0.1,0);} \draw (-1,0.2) node[] {$F$}; \draw (0,-6) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {below:$c_1$}] (c1) {}; \draw (-1,-5) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {left:$r_2$}] (r2) {}; \draw (0,-5) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {above:$r_4$}] (r4) {}; \draw (1,-5) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {right:$r_6$}] (r6) {}; \draw (-1,-4) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {left:$c_5$}] (c5) {}; \draw (-1,-3) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {left:$r_1$}] (r1) {}; \draw (-1,-2) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {left:$c_2$}] (c2) {}; \draw (-1.5,-1) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {left:$r_3$}] (r3) {}; \draw (-0.5,-1) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {right:$r_5$}] (r5) {}; \draw (-1.5,0) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {left:$c_3$}] (c3) {}; \draw (-0.5,0) node[circle, fill, inner sep = 0pt, minimum size = 4pt, label = {right:$c_4$}] (c4) {}; \draw (c1) -- node[sloped, label={[label distance=-6pt]below:$x_{2}$}]{\rotatebox{180}{\midarrow}} (r2) --node[sloped,label={[label distance=-5pt]above:$x_{3}$}]{\midarrow} (c5) --node[sloped,label={[label distance=-5pt]above:$x_{1}$}]{\midarrow} (r1) --node[sloped,label={[label distance=-5pt]above:$x_{0}$}]{\midarrow} (c2) --node[sloped,label={[label distance=-5pt]-110:$x_{4}$}]{\rotatebox{180}{\midarrow}} (r3) --node[sloped,label={[label distance=-5pt]above:$x_{5}$}]{\midarrow} (c3); \draw (c1) -- node[sloped,label={[label distance=-7pt]60:$x_{6}$}]{\midarrow} (r4); \draw (c1) -- node[sloped,label={[label 
distance=-6pt]below:$x_{10}$}]{\midarrow} (r6); \draw (c2) -- node[sloped,label={[label distance=-5pt]-70:$x_{8}$}]{\midarrow} (r5) -- node[sloped,label={[label distance=-5pt]below:$x_{9}$}]{\midarrow} (c4); \draw (1.3,0) node[label=right:{$\red{x_{9}}x_7-x_{11}\red{x_5}$, {\small both terms now have degree $1$ in columns 3 and 4}}] (p1) {}; \draw[-{>[length=1.5mm, width=1.5mm,angle'=45,open]}] (p1) -- (0.5,0); \draw (1.3,-1) node[label=right:{$\red{x_4}x_{9}x_7-x_{11}x_5\red{x_8}$, {\small both terms now have degree $1$ in rows 3 and 5 }}] (p2) {}; \draw[-{>[length=1.5mm, width=1.5mm,angle'=45,open]}] (p2) -- (0.5,-1); \draw (1.3,-2) node[label=right:{$x_4x_{9}x_7-x_{11}x_5x_8$, {\small both terms already have degree $1$ in column 2}}] (p3) {}; \draw[-{>[length=1.5mm, width=1.5mm,angle'=45,open]}] (p3) -- (0.5,-2); \draw (1.3,-3) node[label=right:{$x_4x_{9}x_7-x_{11}x_5x_8$, {\small both terms already have degree $0$ in row 1}}] (p4) {}; \draw[-{>[length=1.5mm, width=1.5mm,angle'=45,open]}] (p4) -- (0.5,-3); \draw (1.3,-4) node[label=right:{$x_4x_{9}x_7-x_{11}x_5x_8$, {\small both terms already have degree $0$ in column 5}}] (p5) {}; \draw[-{>[length=1.5mm, width=1.5mm,angle'=45,open]}] (p5) -- (0.5,-4); \draw (2.3,-5) node[label=right:{$\red{x_{10}}x_4x_{9}x_7-x_{11}x_5x_8\red{x_6}$, }] (p6) {}; \draw (2.3,-5.5) node[label=right:{{\small both terms already have degree $0$ in row 2,}}] {}; \draw (2.3,-6) node[label=right:{{\small both terms now have degree $0$ in rows 4 and 6}}] {}; \draw[-{>[length=1.5mm, width=1.5mm,angle'=45,open]}] (p6) -- (1.7,-5); \end{tikzpicture} } \caption{A spanning forest for the triangular prism} \label{F:forest} \end{figure} \end{example} Call the resulting ideal $H({I}_P^F)$. By the tree structure, the rehomogenization process does indeed end with a polynomial that is homogeneous, as once we make it homogeneous for a row or column we never add variables in that row or column again. 
We now consider the effect of this rehomogenization on minors.
\begin{lemma}
Let $p$ be a minor of $S_P(\xx)$ and $p^F$ its dehomogenization by $F$. Then its rehomogenization $H(p^F)$ equals $p$ divided by the product of all variables in $F$ that divide $p$.
\label{LEM:minorhomog}
\end{lemma}
\begin{proof}
Note that all monomials in a minor have degree precisely one on every relevant row and column. In fact, they can be interpreted as perfect matchings on the subgraph of $G_P$ corresponding to the $(d+2) \times (d+2)$ submatrix being considered. Let $\xx^{\aa}$ and $\xx^{\bb}$ be two distinct monomials in the minor; then their dehomogenizations are also distinct. To see this, note that if we interpret $\aa$ and $\bb$ as matchings, a common dehomogenization would be a common submatching $\cc$ of both, with all the remaining edges being in $F$. But $\aa \setminus \cc$ and $\bb \setminus \cc$ would then be distinct matchings on the same set of variables, hence their union contains a cycle, so they cannot both be contained in the forest $F$. Now note that when rehomogenizing a minor, we start with all degrees being zero or one for every row and column, and since we visit each node (corresponding to each of the rows/columns) exactly once by the tree structure, the degree of every row and column is at most one after homogenizing. In the first step of rehomogenizing, we start with a leaf of~$F$, which means the variable $x_i$ labeling its edge is the only variable in the row or column corresponding to that leaf which was set to~1. Thus, if any monomial of the minor has degree zero on that row or column, it must be because $x_i$ occurred in that monomial in the original minor.
Hence rehomogenizing will just add that variable to the monomials where it was originally present, with the exception of the case where it was present on all monomials, in which case there will be no need to add it, as the dehomogenized polynomial would be homogeneous (of degree 0) for that particular row/column. All degrees remain 0 or 1 after this process, and now the node incident to the leaf we just rehomogenized corresponds to a row/column with exactly one variable that is still dehomogenized. Thus we can repeat the argument on the entire forest to find that each monomial rehomogenizes to itself divided by the variables that were originally present in all monomials of the minor.
\end{proof}
\begin{remark}
It is important to note that $H(I_P^F)$ is the ideal of {\em all elements} of $I_P^F$ rehomogenized. In general, this is different from the ideal generated by the rehomogenized generators of $I_P^F$. In the package, we rehomogenize the whole ideal by rehomogenizing the generators and saturating the resulting ideal by all the variables we just homogenized by.
\end{remark}
For example, let $V$ be the set of vertices of the triangular prism, with scaled slack matrix $Y$ and spanning forest $F$ as computed before, and let us compute the rehomogenized ideal $H(I_P^F)$.
{\renewcommand{\baselinestretch}{0.9}
\begin{verbatim}
i25 : HIF = rehomogenizeIdeal(3, Y, F)
\end{verbatim}
}
{\renewcommand{\baselinestretch}{0.9}
\begin{verbatim}
o25 = ideal (x x x x  - x x x x  , x x x x  - x x x x  ,
             4 7 9 10   5 6 8 11   0 3 9 10   1 2 8 11
      x x x x - x x x x )
      0 3 5 6   1 2 4 7
\end{verbatim}
}
Notice that, in this case, the rehomogenized ideal $H(I_P^F)$ equals the slack ideal~$I_P$.
\begin{example}
Recall that the generators of $I_P^F$ for the triangular prism were $x_7-1$ and $x_{11}-1$, which rehomogenize to $x_1x_2x_4x_7-x_0x_3x_5x_6$ and $x_1x_2x_8x_{11}-x_0x_3x_9x_{10}$, respectively. However,
\[ \langle x_1x_2x_4x_7-x_0x_3x_5x_6, x_1x_2x_8x_{11}-x_0x_3x_9x_{10} \rangle \neq H(I_P^F).
\]
\end{example}
The relation between the rehomogenized ideal $H(I_P^F)$ and the original slack ideal is given in the following proposition. The proof relies on the key fact that the variety of the rehomogenized ideal is still the same as the slack variety that we started with.
\begin{proposition}
Given a spanning forest $F$ for the non-incidence graph of polytope $P$, the rehomogenization of its scaled slack ideal is an intermediate ideal between the slack ideal and its radical: $I_P \subseteq H(I_P^F) \subseteq \sqrt{I_P}$.
\label{PROP:rehomIdeal}
\end{proposition}
\begin{proof}
To prove the inclusion $I_P \subseteq H(I_P^F)$, note that $p \in I_P$ if and only if $\xx^{\aa} p \in J$ for some exponent vector $\aa$, where $J$ is the ideal generated by all $(d+2)$-minors of the symbolic slack matrix of $P$. Dehomogenizing, we get $\xx^{\bb} p^F \in J^F$, which means $p^F$ is in the saturation of $J^F$ by the product of all variables, which is precisely the definition of $I_P^F$. From Lemma~\ref{LEM:minorhomog} it follows that $p\in H(I_P^F)$. To prove that $H(I_P^F) \subseteq \sqrt{I_P}$, it is enough to show that any polynomial in $H(I_P^F)$ vanishes on the slack variety. By construction, any such polynomial must vanish on the points of the slack variety where the variables corresponding to the forest $F$ are nonzero, $\VV(I_P)\backslash \VV(\langle \xx^F\rangle)$. Thus, they vanish on the Zariski closure of that set. Considering the containments
$$\VV(I_P)\backslash\VV(\langle\xx\rangle) \subset \VV(I_P)\backslash \VV(\langle\xx^F\rangle) \subset \VV(I_P),$$
we get that this closure is exactly the slack variety, since $\overline{\VV(I_P)\backslash\VV(\langle\xx\rangle)} = \VV(I_P:\left<\xx\right>^\infty) = \VV(I_P)$.
\end{proof}
\begin{remark}
One would like to say that $I_P = H(I_P^F)$; so far we have no counterexample to this equality, since it always holds if $I_P$ is radical, and we know of no examples of non-radical slack ideals.
\end{remark}

\section{Reduced slack matrices}
\label{S.Reduction}
In general, computing the slack ideal may take a long time or be infeasible, especially if the dimension of the polytope is small compared to its number of vertices and facets. In some cases we can speed up this computation by combining the slack and the Grassmannian realization space models \cite[Section 6]{GMWthirdpaper}. In fact, we do not need to work with the full slack matrix, since the essential information is contained in a sufficiently large submatrix. We will see in Examples~\ref{EX:Perles} and~\ref{EX:sphere} that slack ideals which we were previously unable to compute (on personal computers) can now be calculated in a matter of seconds. To give an estimate of the improvement, computing the slack ideal of the full slack matrix in Example~\ref{EX:Perles} requires the computation of about $8.6 \cdot 10^9$ minors, whereas the reduced slack ideal only requires the computation of about $1.9 \cdot 10^4$ minors. More precisely, let $P$ be a realizable polytope and $F$ be a set of facets of~$P$ such that $F$ contains a set of facets that can be intersected to form a flag in the face lattice of $P$ and all facets of $P$ not in $F$ are simplicial. We call the submatrix, $S_F$, of $S_P$ consisting of only the columns indexed by $F$ a \textit{reduced slack matrix} for $P$. Set $\VV_F$ to be the nonzero part of the slack variety $\VV(I_F)$. If $\overline{\VV_F}$ is irreducible, then $\VV_F \times \mathbb{C}^h$ and $\VV(I_P)^*$ are birationally equivalent, where~$h$ denotes the number of facets of $P$ outside $F$ \cite[Proposition 6.9]{GMWthirdpaper}.
\begin{example}
Let $P$ be the Perles projectively unique polytope with no rational realization coming from the point configuration in \cite[Figure 5.5.1, p.~93]{G03}. This is an $8$-polytope with $12$ vertices and $34$ facets, and its symbolic slack matrix $S_P(\xx)$ is a $12 \times 34$ matrix with 120 variables.
Let $S_F$ be the following submatrix of $S_P$ whose 13 columns correspond to all the nonsimplicial facets of $P$: {\renewcommand{\baselinestretch}{0.9} \begin{verbatim} i28 : S = specificSlackMatrix("perles1"); -- Checking that the first 13 columns of S indeed contain a flag i29 : containsFlag(toList(0..12),S) o29 = true i30 : SF = reducedSlackMatrix(8, S, FlagIndices=>toList(0..12)); \end{verbatim} } The associated symbolic slack matrix is: \[ S_F(\xx) = \begin{bmatrix} 0 & 0 & 0 & x_0 & x_1 & x_2 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & x_3 & 0 & 0 & x_4 & x_5 & x_6 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & x_7 & 0 & 0 & x_8 & x_9 & 0 & 0\\ 0 & 0 & 0 & 0 & x_{10} & 0 & 0 & 0 & 0 & 0 & 0 & x_{11} & x_{12}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & x_{13} & 0 & x_{14} & 0 & x_{15} & 0\\ x_{16} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & x_{17} & 0 & x_{18}\\ 0 & x_{19} & 0 & 0 & 0 & 0 & 0 & 0 & x_{20} & 0 & 0 & 0 & 0\\ 0 & 0 & x_{21} & 0 & 0 & x_{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ x_{23} & 0 & 0 & x_{24} & 0 & 0 & 0 & x_{25} & 0 & 0 & 0 & 0 & 0\\ 0 & x_{26} & 0 & 0 & x_{27} & 0 & 0 & 0 & 0 & x_{28} & 0 & 0 & 0\\ 0 & 0 & x_{29} & 0 & 0 & 0 & x_{30} & 0 & 0 & 0 & 0 & 0 & x_{31}\\ 0 & 0 & 0 & 0 & 0 & x_{32} & 0 & 0 & x_{33} & 0 & x_{34} & x_{35} & 0 \end{bmatrix}. \] Using \cite[Lemma 5.2]{GMTWsecondpaper}, we first set $x_i=1$ for $i = 0,3,4,5,6,7,8,9,12,14,15,16,\break 17,20,21,25,26,27,28,29,30,31,32,34$. The resulting scaled reduced slack ideal is: \begin{gather*} \langle \boldsymbol{x_{35}^2+x_{35}-1}, x_{33}-x_{35}-1, x_{24}-x_{35}, x_{23}-x_{35}, x_{22}-1,x_{19}-x_{35},\\ x_{18}-x_{35},x_{13}-x_{35}-1, x_{11}-x_{35},x_{10}-1,x_{2}-1,x_{1}-x_{35}-1 \rangle. \end{gather*} It follows that $x_{35}=\frac{-1 \pm \sqrt{5}}{2}$. Hence, $P$ does not admit rational realizations. 
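The irrationality conclusion can be double-checked outside Macaulay2: the generator $x_{35}^2+x_{35}-1$ is a monic integer quadratic, so it has a rational root if and only if its discriminant is a perfect square. A quick Python sanity check (illustrative only, not part of the computation above):

```python
import math

# The scaled reduced slack ideal of the Perles polytope forces
#   x35^2 + x35 - 1 = 0.
# By the rational root criterion, a monic integer quadratic has a
# rational root iff its discriminant b^2 - 4ac is a perfect square.
a, b, c = 1, 1, -1
disc = b * b - 4 * a * c                      # discriminant = 5
is_square = math.isqrt(disc) ** 2 == disc     # 5 is not a perfect square

# The two (irrational) roots (-1 +/- sqrt(5))/2, golden-ratio conjugates:
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
print(disc, is_square, roots)
```

Since neither root is rational, every realization of the reduced slack matrix must contain an irrational entry, matching the conclusion above.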
\label{EX:Perles} \end{example} \begin{example} \label{EX:sphere} Let $P$ be the abstract polytope, labeled \#1963 in \cite{CS19}, with 14 vertices labeled $0, \dots, 6, a, \dots, g$ and with 94 facets: $\{0,1,2,3,4,5,6\}$, $\{a,b,c,d,e,f,g\}$, and the other 92 listed in \cite[Table 3]{CS19}. A reduced slack matrix for $P$ (where the facets $F = \{F_0, F_1, F_2, F_3, F_4, F_{10}\}$ form a flag) with the maximum number of variables set to one is the following: \[ S_F(\xx) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & x_{10} & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & x_{18} & 0 & 0 \\ 0 & 1 & x_{22} & x_{23} & 0 & 0 \\ 0 & x_{33} & x_{34} & 0 & x_{35} & 1 \\ 0 & x_{43} & x_{44} & x_{45} & x_{46} & 1 \\ 0 & 1 & 0 & 1 & 1 & 1 \\ x_{68} & 0 & x_{69} & 0 & x_{70} & 1 \\ x_{76} & 0 & x_{77} & x_{78} & 0 & 1 \\ x_{85} & 0 & x_{86} & x_{87} & x_{88} & 1 \\ x_{95} & 0 & 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & x_{105} & 1 \\ x_{113} & 0 & x_{114} & x_{115} & x_{116} & 1 \\ x_{127} & 0 & x_{128} & x_{129} & x_{130} & 1 \end{bmatrix} \] Now we reconstruct the remaining columns of the slack matrix. We can then recursively determine the sign of each column by looking for monomial entries and setting the signs of all the entries of that column so they are positive. From this process, we get a collection of polynomials that must be simultaneously positive. In particular, from this matrix we get polynomials that imply the inequalities $x_{76} > x_{77} > x_{34} > 1$. Furthermore, we have degree 2 polynomials including $-x_{34}x_{76} + x_{34} + x_{76} - x_{77} > 0$. The first inequalities give us \[ \frac{x_{76}-x_{77}}{x_{76}-1} < 1 \] while the second gives \[ \frac{x_{76} - x_{77}}{x_{76}-1} > x_{34}, \] contradicting $x_{34}>1$. Thus we have found a subset of the entries of the slack matrix which cannot be simultaneously positive, so $P$ is not realizable.
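The contradiction is purely algebraic: the chain $x_{76}>x_{77}>x_{34}>1$ already forces $-x_{34}x_{76}+x_{34}+x_{76}-x_{77}<0$. A randomized Python check of this implication (illustrative only; not a substitute for the argument above):

```python
import random

def witness(x76, x77, x34):
    # The degree-2 polynomial that realizability of P would force to be positive.
    return -x34 * x76 + x34 + x76 - x77

rng = random.Random(0)
for _ in range(10_000):
    # Sample triples satisfying the sign constraints x76 > x77 > x34 > 1.
    x34 = 1.0 + 1e-6 + 10 * rng.random()
    x77 = x34 + 1e-6 + 10 * rng.random()
    x76 = x77 + 1e-6 + 10 * rng.random()
    # witness = x34*(1 - x76) + (x76 - x77) < -(x76 - 1) + (x76 - 1) = 0
    assert witness(x76, x77, x34) < 0
print("no sign pattern satisfies all constraints simultaneously")
```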
\end{example} The previous example shows that the reduction process can be a powerful tool for showing the nonrealizability of large quasi-simplicial spheres. \bigskip \noindent \textbf{Acknowledgements.} We would like to thank Jo\~{a}o Gouveia for helping us with Section 3. \bibliographystyle{splncs04}
https://arxiv.org/abs/1510.07284
Random version of Dvoretzky's theorem in $\ell_p^n$
We study the dependence on $\varepsilon$ in the critical dimension $k(n,p,\varepsilon)$ for which one can find random sections of the $\ell_p^n$-ball which are $(1+\varepsilon)$-spherical. We give lower (and upper) estimates for $k(n,p,\varepsilon)$ for all eligible values $p$ and $\varepsilon$ as $n\to \infty$, which agree with the sharp estimates for the extreme values $p=1$ and $p=\infty$. Toward this end, we provide tight bounds for the Gaussian concentration of the $\ell_p$-norm.
\section{Introduction} The fundamental theorem of Dvoretzky from \cite{Dvo} in geometric language states that every centrally symmetric convex body on $\mathbb R^n$ has a central section of large dimension which is almost spherical. The optimal form of the theorem, which was proved by Milman in \cite{Mil}, reads as follows. For any $\varepsilon \in (0,1)$ there exists $\eta=\eta(\varepsilon)>0$ with the following property: for every $n$-dimensional symmetric convex body $A$ there exist a linear image $A_1$ of $A$ and a $k$-dimensional subspace $F$ with $k \geq \eta(\varepsilon) \log n$ such that \begin{align*} (1-\varepsilon) B_F \subseteq A_1\cap F \subseteq (1+\varepsilon)B_F, \end{align*} where $B_F$ denotes the Euclidean ball in $F$. The example of the cube $A= B_\infty^n$ shows that this result is best possible with respect to $n$ (see \cite{Sch3} for the details). The approach of \cite{Mil} is probabilistic in nature and shows that most of the $k$-dimensional sections are $(1+\varepsilon)$-spherical (or Euclidean). Here ``most'' means with overwhelming probability in terms of the Haar probability measure $\nu_{n,k}$ on the Grassmann manifold $G_{n,k}$. More precisely, given a centrally symmetric convex body $A$ on $\mathbb R^n$ and $\varepsilon \in (0,1)$, the random $k$-dimensional subspace $F$ satisfies: \begin{align*} \frac{1-\varepsilon}{M} B_F \subseteq A\cap F \subseteq \frac{1+\varepsilon}{M}B_F \end{align*} with probability greater than $1-e^{-k}$ as long as $k\leq c(\varepsilon)k(A)$. Here $c(\varepsilon)$ stands for the function of $\varepsilon$ in the probabilistic formulation and $k(A)$ is usually referred to as the ``critical dimension'' of the body $A$. The latter can be computed in terms of the global parameters $M=M(A)=\int_{S^{n-1}} \|\theta\|_A \, d\sigma(\theta)$ and $b=b(A)=\max_{\theta \in S^{n-1}} \|\theta\|_A$; that is, $k(A) \simeq n(M/b)^2$. Recall that $1/b$ is the radius of the maximal centered inscribed ball in $A$.
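The critical dimension $k(A)=\mathbb E\|Z\|_A^2/b(A)^2$ is easy to estimate by simulation. A Monte Carlo sketch (not from the paper; the dimension and sample size are arbitrary) recovering $k(B_\infty^n)\simeq\log n$ and $k(B_1^n)\simeq n$:

```python
import math
import random

def critical_dimension(n, norm, b, samples=2000, seed=1):
    # k(A) = E ||Z||_A^2 / b(A)^2 for a standard Gaussian vector Z in R^n.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]
        total += norm(z) ** 2
    return total / samples / b ** 2

n = 400
# For the cube B_inf^n: b = 1 and E ||Z||_inf^2 ~ 2 log n.
k_inf = critical_dimension(n, lambda z: max(abs(t) for t in z), b=1.0)
# For the cross-polytope B_1^n: b = max ||theta||_1 = sqrt(n) on the sphere.
k_one = critical_dimension(n, lambda z: sum(abs(t) for t in z), b=math.sqrt(n))
print(k_inf / math.log(n))   # bounded constant: k(B_inf^n) ~ log n
print(k_one / n)             # roughly 2/pi:     k(B_1^n) ~ n
```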
Next, one may select a good position of the body $A$ for which $k(A)$ is large enough with respect to $n$ (see \cite{MS} for further details). It has been proved in \cite{MS2} that this formulation is optimal with respect to the dimension $k(A)$ in the following sense: the maximal dimension $m$ for which the random $m$-dimensional sections are $4$-Euclidean with probability greater than $\frac{n}{n+m}$ is less than $Ck(A)$ for some absolute constant $C>0$, i.e. $m\lesssim k(A)$.\footnote{For any two quantities $\Gamma,\Delta$ depending on $n,p$, etc. we write $\Gamma\lesssim \Delta$ if there exists a numerical constant $C>0$, independent of everything, such that $\Gamma \leq C \Delta$. We write $\Gamma \gtrsim \Delta$ if $\Delta \lesssim \Gamma$, and $\Gamma \simeq \Delta$ if $\Gamma \lesssim \Delta$ and $\Delta \lesssim \Gamma$. Accordingly, we write $\Gamma \simeq_p \Delta$ if the constants involved depend only on $p$.} (Here and everywhere else $C,c,C_1, c_1, \ldots$ stand for positive absolute constants whose values may change from line to line). The proof in \cite{Mil} provides the lower bound $c(\varepsilon) \geq c\varepsilon^2 /\log \frac{1}{\varepsilon}$; this is improved to $c(\varepsilon) \geq c\varepsilon^2$ by Gordon in \cite{Go}, and an alternative approach is given by Schechtman in \cite{Sch1}. This dependence is known to be optimal. The recent works of Schechtman in \cite{Sch2} and Tikhomirov in \cite{Tik} established that the dependence on $\varepsilon$ in the randomized Dvoretzky theorem for $B_\infty^n$ is of the exact order $\varepsilon/ \log \frac{1}{\varepsilon}$. As far as the dependence on $\varepsilon$ in the existential version of Dvoretzky's theorem is concerned, Schechtman proved in \cite{Sch2} that one can always $(1+\varepsilon)$-embed $\ell_2^k$ in any $n$-dimensional normed space $E$ with $k(E, \varepsilon) \geq c\varepsilon \log n / (\log\frac{1}{\varepsilon})^2 $.
Tikhomirov in \cite{Tik2} proved that for 1-symmetric spaces $E$ we may have $k(E,\varepsilon) \geq c \log n/ \log \frac{1}{\varepsilon}$, complementing the previously known result due to Bourgain and Lindenstrauss from \cite{BL}. Recall that a normed space $(\mathbb R^n,\|\cdot\|)$ is said to be 1-symmetric if the norm satisfies $\|\sum_i \varepsilon_i a_i e_{\pi(i)}\|= \|\sum_i a_i e_i\|$ for all scalars $(a_i)$, for all choices of signs $\varepsilon_i=\pm 1$ and for any permutation $\pi$, where $(e_i)$ is the standard basis of $\mathbb R^n$. Tikhomirov's result was subsequently extended by Fresen in \cite{Fres} to permutation-invariant spaces with uniformly bounded basis constant. In this note we will not deal with the existential form of Dvoretzky's theorem. Related results for $\ell_p$ spaces are presented in \cite{Ko}. For more detailed information on the subject, explicit statements and historical remarks the reader is referred to the recent monograph \cite{AGM}. Our goal here is to study the random version for the spaces $\ell_p^n$ and to give bounds on the dimension $k(n,p,\varepsilon)\equiv k(\ell_p^n, \varepsilon)$ for which the $k$-dimensional random section of $B_p^n$ is $(1+\varepsilon)$-Euclidean with high probability on $G_{n,k}$. These bounds are continuous with respect to $p$ and coincide with the known bounds in the extreme cases $p=1$ and $p=\infty$. To this end we first study the concentration phenomenon for the $\ell_p$ norms and prove the following result: \begin{theorem} \label{thm: 1.1} For all sufficiently large $n$ and for any $1\leq p\leq \infty$ one has: \begin{align*} P\left( \big | \|X\|_p-\mathbb E\|X\|_p \big| > \varepsilon \mathbb E\|X\|_p \right) \leq C_1\exp(-c_1\beta(n,p,\varepsilon)), \quad 0<\varepsilon<1, \end{align*} where $X$ is a standard $n$-dimensional Gaussian vector and $C_1,c_1>0$ are absolute constants.
The function $\beta(n,p,\varepsilon)$ is defined as follows: \begin{align*} \beta(n,p,\varepsilon) = \left\{ \begin{array}{lll} \varepsilon^2n, & 1\leq p\leq 2 \\ \max\left\{ \min \left\{ p^2 2^{-p}\varepsilon^2 n, (\varepsilon n)^{2/p} \right\}, \varepsilon pn^{2/p} \right\}, & 2<p\leq c_0\log n \\ \varepsilon pn^{2/p}, & p> c_0 \log n \end{array}\right. , \end{align*} where $0< c_0 < 1$ is a suitable absolute constant. Furthermore, for $p\leq c_0\log n$ we have: \begin{align*} P\left( \big | \|X\|_p-\mathbb E\|X\|_p \big| > \varepsilon \mathbb E\|X\|_p \right) \leq \frac{C_1}{1+p^2 2^{-p} \varepsilon^2 n}, \end{align*} for all $\varepsilon> 0$. \end{theorem} The bound we retrieve in the case of fixed $p$ is not new. The corresponding estimates have been studied by Naor \cite{Naor} in an even more general probabilistic context. Also, for $p=\infty$ we recover the same bound proved by Schechtman in \cite{Sch2}. Therefore, the above concentration result interpolates between the sharp concentration estimates for fixed $1\leq p<\infty$ and $p=\infty$ and is derived in a unified way. However, our methods are different from the techniques used in \cite{Naor} and \cite{Sch2} and utilize Gaussian functional inequalities. Actually, following the same ideas as in \cite{Sch1}, we will prove a distributional inequality for Gaussian random matrices similar to the concentration inequality described above. Using this inequality and a chaining argument we prove the second main result, which gives the critical dimension $k(n,p,\varepsilon)$ in the randomized Dvoretzky theorem for the $B_p^n$ balls. \begin{theorem} \label{thm: 1.2} For all large enough $n$, for any $1\leq p\leq \infty$ and for any $0 < \varepsilon <1 $, the random $k$-dimensional section of $B_p^n$ with dimension $k\leq k(n,p,\varepsilon)$ is $(1+\varepsilon)$-Euclidean with probability greater than $1-C\exp(-c k(n,p,\varepsilon) )$, where $k(n,p,\varepsilon)$ is defined as: \begin{itemize} \item [i.]
If $1\leq p<2$, then \begin{align*} k(n,p,\varepsilon) \gtrsim \varepsilon^2n. \end{align*} \item [ii.] If $2<p<c_0 \log n$, then \begin{align} k(n,p,\varepsilon) \gtrsim \left\{ \begin{array}{ccc} (Cp)^{-p} \varepsilon^2n , & 0< \varepsilon \leq (Cp)^{p/2} n^{-\frac{p-2}{2(p-1)}} \\ p^{-1} \varepsilon^{2/p} n^{2/p}, & (Cp)^{p/2} n^{-\frac{p-2}{2(p-1)}} <\varepsilon \leq 1/p \\ \varepsilon pn^{2/p}/\log \frac{1}{\varepsilon}, & 1/p < \varepsilon <1 \end{array} \right. . \end{align} Furthermore for $p< c_0\log n$ we have: \begin{align*} k(n,p,\varepsilon) \gtrsim \log n/ \log\frac{1}{\varepsilon}. \end{align*} \item [iii.] If $p\geq c_0 \log n$, then \begin{align*} k(n,p, \varepsilon) \gtrsim \varepsilon \log n/ \log\frac{1}{\varepsilon} . \end{align*} \end{itemize} where $C,c, c_0>0$ are absolute constants. \end{theorem} As one observes, the dependence on $\varepsilon$ for $1\leq p\leq 2$ is $\varepsilon^2$, as predicted by V. Milman's proof (and its improvement by \cite{Gor} and \cite{Sch1}). However, for $p>2$ the dependence on $\varepsilon$ is much better than $\varepsilon^2$ for all values of $p$. This permits us to find sections of $B_p^n$ of polynomial dimension which are closer to the Euclidean ball than previously obtained. Observe that Theorem \ref{thm: 1.2} recovers the right dependence on $\varepsilon$ at $p=1$ (in fact, whenever $p$ is fixed) and at $p=\infty$. The rest of the paper is organized as follows: In Section 2 we fix the notation, give the required background material and include some basic probabilistic inequalities. Gaussian functional inequalities such as the logarithmic Sobolev inequality, Talagrand's $L_1-L_2$ inequality and Pisier's Gaussian inequality are also included. Before the proof of Theorem \ref{thm: 1.1} we prefer to deal with an easier problem first: the problem of determining the right order of the Gaussian variance of the $\ell_p$ norm. We study this question in Section 3.
This is a warm-up for the concentration result we will investigate in Section 4. The main techniques that we will use, as well as the main problems we have to resolve, will already be apparent in Section 3. This estimate will be used to obtain the dependence $\log n /\log \frac{1}{\varepsilon}$, still proportional to $\log n$, for $p\leq c_0\log n$ in Theorem \ref{thm: 1.2}. In Section 4 we present the proof of Theorem \ref{thm: 1.1}. Moreover, efforts have been made to provide lower estimates for the probability described in Theorem \ref{thm: 1.1} (see also the Appendix by Tikhomirov). In Section 5 we prove Theorem \ref{thm: 1.2} and we show that in several cases the result is best possible up to constants. We conclude in Section 6 with further remarks and open questions. \section{Notation and background material} We work in $\mathbb R^n$ equipped with the standard inner product $\langle x, y\rangle =\sum_{i=1}^n x_iy_i$ for $x=(x_1,\ldots,x_n)$ and $y=(y_1,\ldots, y_n) $ in $\mathbb R^n$. The $\ell_p$-norm in $\mathbb R^n$ ($1\leq p<\infty$) is defined as: \begin{align*} \|x\|_{\ell_p^n} \equiv \|x\|_p := \left( \sum_{i=1}^n |x_i|^p\right)^{1/p} , \; x=(x_1,\ldots,x_n) \end{align*} and for $p=\infty$ as: \begin{align*} \|x\|_{\ell_\infty^n} \equiv \|x\|_\infty := \max_{1\leq i\leq n} |x_i| , \, x=(x_1,\ldots, x_n). \end{align*} The Euclidean sphere is defined as: $S^{n-1}= \{x\in \mathbb R^n : \|x\|_2=1 \}$. The normed space $(\mathbb R^n, \|\cdot\|_p)$ is denoted by $\ell_p^n$, for $1\leq p\leq \infty$, and its unit ball by $B_p^n$, i.e. $B_p^n=\{x\in \mathbb R^n : \|x\|_p\leq 1\}$. For $1\leq p < q \leq \infty$ we have: \begin{align}\label{eq: Holder - p-norms} \|x\|_q \leq \|x\|_p \leq n^{1/p-1/q} \|x\|_q, \end{align} for all $x\in \mathbb R^n$. We write $\|\cdot\|$ for an arbitrary norm on $\mathbb R^n$ and $\|\cdot\|_A$ if the norm is induced by the centrally symmetric convex body $A$ on $\mathbb R^n$.
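The comparison \eqref{eq: Holder - p-norms} between $\ell_p$ norms is easy to verify numerically; a minimal sketch (the test vectors and exponent pairs are arbitrary):

```python
import random

def lp_norm(x, p):
    # ell_p norm on R^n; p = float('inf') gives the sup-norm.
    if p == float('inf'):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

rng = random.Random(42)
n = 50
x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
for p, q in [(1.0, 2.0), (2.0, 4.0), (1.5, float('inf'))]:
    inv_q = 0.0 if q == float('inf') else 1.0 / q
    lo, hi = lp_norm(x, q), lp_norm(x, p)
    # ||x||_q <= ||x||_p <= n^(1/p - 1/q) ||x||_q
    assert lo <= hi <= n ** (1.0 / p - inv_q) * lo + 1e-9
print("norm comparison verified")
```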
For any subspace $F$ of $\mathbb R^n$ we write: $S_F: = S^{n-1}\cap F$ and $B_F:= B_2^n \cap F$. The random variables in some probability space $(\Omega, \mathcal A, P)$ are denoted by $\xi , \eta, \ldots$ while the random vectors by $X=(X_1,\ldots, X_n)$ or simply $X, Y, Z, \ldots$. The random vectors under consideration are going to be Gaussian unless it is stated otherwise. If $\mu$ is a probability measure we write $\mathbb E_\mu$ and $\rm Var_\mu$ for the expectation and the variance respectively with respect to $\mu$. If the measure is prescribed the subscript is omitted. We shall make frequent use of the Paley-Zygmund inequality (for a proof see \cite{BLM}): \begin{lemma} \label{lem: PZ-ineq} Let $\xi$ be a non-negative random variable defined on some probability space $(\Omega, \mathcal A, P)$ with $\xi\in L_2(\Omega, \mathcal A, P)$. Then, \begin{align*} P \left( \xi \geq t \mathbb E\xi \right) \geq (1-t)^2 \frac{(\mathbb E\xi)^2}{\mathbb E\xi^2}, \end{align*} for all $0<t<1$. \end{lemma} \medskip Also the multivariate version of Chebyshev's association inequality due to Harris will be useful: \begin{proposition} \label{prop:Harris} Let $Z=(\zeta_1,\ldots, \zeta_k)$ where $\zeta_1,\ldots,\zeta_k$ are i.i.d. random variables taking values almost surely in $A \subseteq \mathbb R$. If $F,G: A^k \subseteq \mathbb R^k \to \mathbb R$ are coordinatewise non-decreasing\footnote{A real valued function $H$ defined on $U \subseteq \mathbb R^k$ is said to be {\it coordinatewise non-decreasing} if it is non-decreasing in each variable while keeping all the other variables fixed at any value.} functions, then we have: \begin{align*} \mathbb E [F(Z) G(Z) ] \geq \mathbb E [ F(Z) ] \mathbb E [ G(Z) ]. \end{align*} \end{proposition} Harris' inequality can be derived from consecutive applications of Chebyshev's association inequality and conditioning. For a detailed proof we refer the reader to \cite{BLM}. 
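Lemma \ref{lem: PZ-ineq} can be checked by simulation; a sketch for $\xi=|g|$ with $g$ standard Gaussian (the sample size is ad hoc):

```python
import random

def paley_zygmund_check(t, samples=100_000, seed=7):
    # Compare the empirical tail P(xi >= t E xi) with the Paley-Zygmund
    # lower bound (1 - t)^2 (E xi)^2 / E xi^2, for xi = |g|.
    rng = random.Random(seed)
    xs = [abs(rng.gauss(0.0, 1.0)) for _ in range(samples)]
    mean = sum(xs) / samples
    second = sum(v * v for v in xs) / samples
    tail = sum(1 for v in xs if v >= t * mean) / samples
    bound = (1 - t) ** 2 * mean ** 2 / second
    return tail, bound

for t in (0.1, 0.5, 0.9):
    tail, bound = paley_zygmund_check(t)
    assert tail >= bound
    print(f"t={t}: tail {tail:.3f} >= bound {bound:.3f}")
```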
For some measure space $( \Omega, \mathcal {E}, \mu)$ we write \begin{align*} \|f\|_{L_p(\mu)} := \left( \int_\Omega |f|^p \, d\mu \right)^{1/p}, \; 1\leq p<\infty, \end{align*} for any measurable function $f:\Omega \to \mathbb R$. If $\mu$ is a Borel probability measure on $\mathbb R^n$ and $K$ is a centrally symmetric convex body on $\mathbb R^n$, we also use the notation \begin{align*} I_r(\mu, K) := \left( \int_{\mathbb R^n} \|x\|_K^r \, d\mu(x) \right)^{1/r}, \, -n<r \neq 0, \end{align*} and for $r=0$ \begin{align*} I_0(\mu, K):= \exp\left( \int_{\mathbb R^n} \log \|x\|_K \, d\mu(x) \right). \end{align*} If $\sigma$ is the (unique) probability measure on $S^{n-1}$ which is invariant under orthogonal transformations and $A$ is a centrally symmetric convex body on $\mathbb R^n$, then we write: \begin{align} M_q(A) := \left( \int_{S^{n-1}} \|\theta\|_A^q \, d\sigma(\theta) \right)^{1/q}, \quad q\neq 0. \end{align} For $q=1$ we simply write $M(A)=M_1(A)$. For the random version of Dvoretzky's theorem recall V. Milman's formulation from \cite{Mil} (see also \cite{MS} or \cite{AGM}); see \cite{Gor} and \cite{Sch1} for the dependence on $\varepsilon$: \begin{theorem} \label{thm: VMil} Let $A$ be a centrally symmetric convex body on $\mathbb R^n$. Define the critical dimension $k(A)$ of $A$ as follows: \begin{align} \label{eq:dvo-num} k(A) = \frac{ \mathbb E\|Z\|_A^2 }{b^2(A)} \simeq n\left( \frac{M(A)}{b(A)}\right)^2, \end{align} where $b(A)$ is the Lipschitz constant of the map $x\mapsto \|x\|_A$, i.e. $b=\max_{\theta\in S^{n-1}} \|\theta\|_A$, and $Z$ is a standard Gaussian $n$-dimensional random vector. Then, the random $k$-dimensional subspace $F$ of $ (\mathbb R^n, \|\cdot\|_A)$ satisfies: \begin{align*} \frac{1}{(1+\varepsilon)M}B_F \subseteq A\cap F \subseteq \frac{1}{(1-\varepsilon)M} B_F \end{align*} with probability greater than $1-e^{-ck}$ provided that $k\leq k(A,\varepsilon)$, where $k(A,\varepsilon) \simeq \varepsilon^2 k(A)$ and $M\equiv M(A)$.
\end{theorem} Here the probability is considered with respect to the Haar probability measure $\nu_{n,k}$ on the Grassmann manifold $G_{n,k}$, which is invariant under the orthogonal group action. With some abuse of terminology, for a subspace $F$ of a normed space $(\mathbb R^n, \|\cdot\|)$ (or equivalently for a section $A\cap F$ of a centrally symmetric convex body $A$ on $\mathbb R^n$) we say that it is $(1+\varepsilon)$-{\it spherical} (or {\it Euclidean}) if: \begin{align*} \max_{\theta \in S_F} \|\theta\| / \min_{\theta \in S_F} \|\theta\| < 1+\varepsilon \quad {\rm or} \quad \max_{z\in S_F} \|z\|_A / \min_{z\in S_F} \|z\|_A <1+\varepsilon. \end{align*} Thus, the previous theorem states that the random $k$-dimensional subspace of $(\mathbb R^n, \|\cdot\|_A)$ is $\frac{1+\varepsilon}{1-\varepsilon}$-spherical with probability greater than $1-e^{-ck}$ as long as $k\leq \varepsilon^2 k(A)$. In the next paragraph we provide asymptotic estimates for $k_{p,n}:=k(\ell_p^n) \equiv k(B_p^n)$ in terms of $n$ and $p$. \subsection{Gaussian random variables} If $g$ is a standard Gaussian random variable, we set $\sigma_p^p:=\mathbb E|g|^p$ for every $p>0$. The next asymptotic estimate follows easily from Stirling's formula: \begin{align} \label{eq: 2.4} \sigma_p^p = \mathbb E|g|^p =\frac{2^{p/2}}{\sqrt{\pi}} \Gamma \left(\frac{p+1}{2} \right) \sim \sqrt{2} \left( \frac{p}{e} \right)^{p/2} , \quad p\to\infty.\end{align} The $n$-dimensional standard Gaussian measure with density $(2\pi)^{-n/2} e^{-\|x\|_2^2/2}$ is denoted by $\gamma_n$. In the next proposition, the asymptotic estimate \eqref{eq:mean-ell-p} is a special case of a more general result from \cite{SZ}. \begin{proposition}\label{prop:mean-ell-p} Let $1\leq p\leq \infty$ and let $Z$ be distributed according to $\gamma_n$.
Then, we have: \begin{align} \label{eq:mean-ell-p} \mathbb E\|Z\|_p = \int_{\mathbb R^n} \|x\|_p \, d\gamma_n(x) \simeq \left\{ \begin{array}{ll} n^{1/p}\sqrt{p}, & p< \log n\\ \sqrt{\log n}, & p\geq \log n \end{array} \right. . \end{align} Therefore, for the critical dimension of $B_p^n$, we have: \begin{align*} k_{p,n}=k(B_p^n) \simeq \left\{ \begin{array}{lll} n & 1\leq p\leq 2 \\ pn^{2/p} & 2\leq p \leq \log n \\ \log n & p\geq \log n \end{array} \right. . \end{align*} \end{proposition} We shall need Gordon's lemma for Mill's ratio from \cite{Go}: \begin{lemma} \label{lem:Gordon-ineq} For any $a>0$ we have: \begin{align} \label{eq:gordon-1}\frac{a}{1+a^2}\leq e^{a^2/2}\int_a^\infty e^{-t^2/2}\, dt\leq \frac{1}{a}. \end{align} Equivalently, we have: \begin{align} \label{eq:gordon-2} 1\leq \frac{\phi(a)}{a(1-\Phi(a) )} \leq 1+\frac{1}{a^2}, \end{align} for $a>0$, where $\Phi(x)= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-t^2/2}\, dt$ and $\phi=\Phi'$. \end{lemma} The following technical lemma will be useful: \begin{lemma} \label{lem:gauss^p} Let $2\leq p <\infty$ and let $g_1,g_2$ be i.i.d. standard normal variables. The following properties hold: \begin{itemize} \item [\rm i.] The function $t\mapsto P\left ( \big| |g_1|^p -|g_2|^p \big| >t\right)$ is log-convex in $(0,\infty)$. \item [\rm ii.] For any $r\geq 1$ we have: \begin{align} \left( \mathbb E \big| |g_1|^p -|g_2|^p \big|^r\right)^{1/r} \simeq r^{p/2} \sigma_p^p. \end{align} \end{itemize} \end{lemma} \noindent {\it Proof.} (i) Set $H_p(t):= P\left( \big| |g_1|^p -|g_2|^p \big| >t\right)$. Then, we may check that: \begin{align*} H_p(t) = \sqrt{\frac{2}{\pi}} \int_{\mathbb R} H_p(x,t) \, d\gamma_1(x), \end{align*} where \begin{align*} H_p(x,t) := \int_{(|x|^p+t)^{1/p}}^\infty e^{-y^2/2} \, dy \quad (x,t) \in \mathbb R\times (0,\infty). \end{align*} We have the following: \smallskip \noindent {\it Claim 1.} For fixed $x\in \mathbb R$, the map $t\mapsto H_p(x,t)$ is log-convex on $(0,\infty)$. 
\smallskip \noindent To this end it suffices to check that $ H_p(x,t) \geq (H'_p(x,t))^2/ H_p''(x,t)$ for all $t>0$, equivalently: \begin{align*} \int_{(|x|^p +t)^{1/p} }^\infty e^{-y^2/2}\, dy \geq \exp \left(-\frac{1}{2}(|x|^p+t)^{2/p} \right) \frac{(|x|^p+t)^{1/p} }{p-1+ (|x|^p+t)^{2/p}}. \end{align*} The latter follows from \eqref{eq:gordon-1} (for $a=(|x|^p+t)^{1/p}$) in Lemma \ref{lem:Gordon-ineq}. The first assertion now follows by H\"{o}lder's inequality. \smallskip \noindent (ii) The upper estimate is a consequence of the triangle inequality and the fact that $\sigma_{pr}^p \simeq r^{p/2} \sigma_p^p$ (see estimate \eqref{eq: 2.4}). For the lower bound we have to argue more carefully. Using polar coordinates we may write: \begin{align*} \mathbb E\big| |g_1|^p -|g_2|^p \big|^r = \frac{2^{\frac{pr}{2}+2}}{\pi} \Gamma\left(\frac{pr}{2}+1 \right) \int_0^{\pi/4} \left( \cos^p\theta -\sin^p\theta \right)^r \, d\theta. \end{align*} We have the following: \smallskip \noindent {\it Claim 2.} For $r\geq 1$ we have: \begin{align*} \int_0^{\pi/4} \left( \cos^p\theta -\sin^p\theta \right)^r \, d\theta \gtrsim (2/3)^r /\sqrt{pr}. \end{align*} \smallskip \noindent Indeed, we may write: \begin{align*} \int_0^{\pi/4} \left( \cos^p\theta -\sin^p\theta \right)^r \, d\theta \geq \int_0^{\pi/6} \left( \cos^p\theta -\sin^p\theta \right)^r \, d\theta \geq \left( 1-3^{-p/2} \right)^r \int_0^{\pi/6} (\cos\theta)^{pr} \, d\theta, \end{align*} where we have used the fact that $\sin \theta \leq 3^{-1/2} \cos \theta$ for any $\theta \in [0,\pi/6]$.
Next, we have: \begin{align*} \int_0^{\pi/6} (\cos\theta)^{pr} \, d\theta = \frac{1}{2}B\left( \frac{pr +1}{2}, \frac{1}{2}\right)- \int_{\pi/6}^{\pi/2} (\cos \theta)^{pr} \, d\theta \geq \frac{1}{2}B\left( \frac{pr+1}{2}, \frac{1}{2}\right)- \frac{2\cos^{pr+1}(\pi/6)}{pr+1}.\end{align*} A standard approximation for the Beta function provides: \begin{align*} B \left(\frac{pr+1}{2}, \frac{1}{2} \right) \simeq (pr)^{-1/2}, \end{align*} and thus Claim 2 follows. Finally, Stirling's approximation formula yields $2^{pr/2}\Gamma(\frac{pr}{2}+1)\simeq (pr)^{1/2} (pr/e)^{pr/2}$ and the result follows. $\hfill \quad \Box$ \subsection{Functional inequalities on Gauss' space} First we recall the logarithmic Sobolev inequality. In general, if $\mu$ is a Borel measure on $\mathbb R^n$, we say that $\mu$ satisfies a {\it log-Sobolev inequality with constant $\rho$} if for any smooth function $f$ we have: \begin{align*} {\rm Ent}_\mu (f^2):= \mathbb E_\mu(f^2\log f^2) - \mathbb E_\mu f^2 \log(\mathbb E_\mu f^2) \leq \frac{2}{\rho} \int \|\nabla f\|^2_2\, d\mu. \end{align*} It is well known (see \cite{Led}) that the standard $n$-dimensional Gaussian measure $\gamma_n$ satisfies the log-Sobolev inequality with $\rho =1$. The next lemma, based on the classical Herbst argument, is a useful estimate which holds for any measure satisfying a log-Sobolev inequality: \begin{lemma} \label{lem:log-sob-moms} Let $\mu$ be a measure satisfying the log-Sobolev inequality with constant $\rho>0$. Then, for any Lipschitz\footnote{Recall that for a Lipschitz map $f :(X,\rho)\to \mathbb R$ on some metric space $(X,\rho)$ the Lipschitz constant of $f$ is defined by $\|f\|_{\rm Lip}=\sup_{x,y\in X, x\neq y}\frac{|f(x)-f(y)|}{\rho(x,y)}$.} map $f$ and for any $2\leq p< q$ we have: \begin{align} \label{eq: 2.19} \|f\|_{L_q(\mu)}^2- \|f\|_{L_p(\mu)}^2 \leq \frac{\|f\|_{\rm Lip}^2}{\rho}(q-p).
\end{align} In particular, we have: \begin{align}\label{eq:2.7} \frac{ \|f\|_{L_q(\mu)} }{\|f\|_{L_2(\mu)} }\leq \sqrt{1+\frac{q-2}{ \rho k(f)} }, \end{align} for $q\geq 2$, where $k(f):= \|f\|_{L_2(\mu)}^2/ \|f\|_{\rm Lip}^2$. Furthermore, \begin{align} \frac{\|f\|_{L_2(\mu)} }{\|f\|_{L_p(\mu)} } \leq \exp\left( \frac{1/p-1/2}{ \rho k(f)}\right), \end{align} for $0<p \leq 2$. \end{lemma} \noindent {\it Proof.} The proof of the first estimate is essentially contained in \cite{SV}. The second one is a direct application of the first for $p=2$. For the last assertion, note that by Lyapunov's inequality (see \cite{HLP}) the map $p \stackrel{\phi} \mapsto \log\|f\|_p^p$ is convex. Moreover, we have: $p\phi'(p)-\phi(p) =\frac{{\rm Ent}_\mu(|f|^p)}{ \int |f|^p \, d\mu}$. Hence, for any $0<p<2$, the convexity of $\phi$ and the log-Sobolev inequality yield: \begin{align*} 2 \frac{\phi(2)-\phi(p)}{2-p} \leq 2\phi'(2) = \frac{{\rm Ent}_\mu(f^2)}{\|f\|_2^2} +\phi(2) \leq \frac{2}{2\rho k} +\phi(2), \end{align*} where $k\equiv k(f)$. The result follows. $\hfill \quad \Box$ \smallskip \begin{note} When $f$ is a Lipschitz map with $k(f)\gtrsim 1$, the above two estimates imply \begin{align} \label{eq:2.11} \frac{\|f\|_{L_q(\gamma_n)} }{ \|f\|_{L_1(\gamma_n)} } \leq \sqrt{ 1+c_1 \frac{q-1}{k(f)}}, \quad q\geq 1. \end{align} In the case where $A$ is a centrally symmetric convex body on $\mathbb R^n$, integration in polar coordinates yields: \begin{align} I_r(\gamma_n,A) = c_{n,r} M_r(A), \end{align} where $c_{n,r}: = \sqrt{2}[\Gamma(\frac{n+r}{2})/ \Gamma (\frac{n}{2})]^{1/r}$ and $M^r_r(A) :=\int_{S^{n-1}} \|\theta\|_A^r \, d\sigma(\theta)$. Applying this for $A=B_2^n$ we readily see that $c_{n,r}=I_r(\gamma_n,B_2^n)$.
Therefore, for $-n< s < r$ we obtain: \begin{align} \label{eq:2.15} \max\left\{ \frac{M_r(A)}{M_s(A)}, \frac{I_r(\gamma_n, B_2^n)}{ I_s(\gamma_n,B_2^n)} \right\} \leq \frac{M_r(A) I_r(\gamma_n, B_2^n)}{M_s(A) I_s(\gamma_n,B_2^n)} = \frac{I_r(\gamma_n,A)}{I_s(\gamma_n,A)} . \end{align} It follows that: \begin{align} \label{eq:iso-M} M_q(A)/M_1(A) \leq \sqrt{ 1+c_1\frac{q-1}{k(A)} }, \quad q\geq 1. \end{align} This estimate improves considerably upon the estimate presented in \cite[Statement 3.1]{LMS} or \cite[Proposition 1.10, (1.19)]{Led} in the range $1\leq q \leq k(A)$. For a purely probabilistic approach to this fact we refer the reader to \cite{PPV}. \end{note} It is immediate that \begin{align*} \| f\|_{L_r(\gamma_n)} \lesssim \left\{ \begin{array}{ll} \displaystyle \|f\|_{L_1(\gamma_n)} , & 1\leq r\leq k(f) \\ \displaystyle \sqrt{ \frac{r}{k(f)} } \|f\|_{L_1(\gamma_n)}, & r\geq k(f) \end{array} \right. \end{align*} for any Lipschitz function $f$ in $(\mathbb R^n,\gamma_n)$. In \cite{LMS} it is proved that for norms this estimate can be reversed: \begin{lemma} \label{lem:equiv-r-means} Let $\|\cdot\|_A$ be a norm on $\mathbb R^n$. Then, we have: \begin{align*} I_r(\gamma_n,A) \simeq \left\{ \begin{array}{ll} I_1(\gamma_n,A) , & r\leq k(A) \\ \displaystyle \sqrt{\frac{r}{k(A)} } I_1(\gamma_n, A), & r\geq k(A) \end{array} \right. . \end{align*} \end{lemma} \medskip This result implies the next well-known fact: \begin{proposition} \label{prop: sharp-tails} Let $\|\cdot \|\equiv \|\cdot\|_A$ be a norm on $\mathbb R^n$. Then, we have: \begin{align*} c\exp(-Ct^2k) \leq P \left( \|X\| >(1+t) \mathbb E\|X\| \right) \leq C\exp(-ct^2 k), \end{align*} for $t\geq 1$. Moreover, one has: \begin{align*} \left( \mathbb E \big| \|X\| -\mathbb E\|X\|\big|^r \right)^{1/r} \simeq \sqrt{\frac{r}{k}} \mathbb E\|X\|, \end{align*} for all $r\geq k$, where $k\equiv k(A)$ and $X$ is a standard Gaussian $n$-dimensional random vector.
\end{proposition} \noindent {\it Sketch of proof of Proposition \ref{prop: sharp-tails}.} Set $I_r\equiv I_r(\gamma_n,A)$. There exists $c_1 \in (0,1)$ such that $I_s\geq c_1\sqrt{s/k}I_1$ for all $s>k$ by Lemma \ref{lem:equiv-r-means}. Thus, for $t\geq 1$, if we choose $r>k$ by $c_1\sqrt{r/k}=4t$, we may write: \begin{align*} P \left( \|X\| >\frac{1}{2} I_r \right) \leq P \left( \|X\| > \frac{c_1}{2} \sqrt{r/k} I_1 \right) \leq P(\|X\| \geq (1+t)I_1). \end{align*} On the other hand the Paley-Zygmund inequality (Lemma \ref{lem: PZ-ineq}) yields: \begin{align*} P \left( \|X\| >\frac{1}{2} I_r \right) \geq (1-2^{-r})^2 (I_r/I_{2r})^{2r} \geq c_2e^{-C_2r} \geq c_2\exp(-C_2' t^2 k), \end{align*} where we have also used the fact that $I_r\simeq I_{2r}$ which follows by Lemma \ref{lem:equiv-r-means}. For the second assertion we apply integration by parts and we use the first estimate. $\hfill \quad \Box$ \medskip The above estimate shows that the large deviation estimate for norms with respect to $\gamma_n$ is completely settled. Therefore for the concentration inequalities we are interested in, we may restrict ourselves to the range $0<\varepsilon<1$. Other important functional inequalities related to the concentration of measure phenomenon are the Poincar\'{e} inequalities. Using a standard variational argument (see \cite{Led}) one can show that any measure which satisfies a log-Sobolev inequality with constant $\rho$ also satisfies a Poincar\'{e} inequality with constant $\rho$, i.e. \begin{align} \label{eq:Poin-ineq} \rho {\rm Var}_\mu(f) \leq \int_{\mathbb R^n} \|\nabla f\|_2^2 \, d\mu, \end{align} for any smooth function $f$. 
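For the standard Gaussian measure $\gamma_n$ one may take $\rho=1$ in \eqref{eq:Poin-ineq}. As a quick numerical sanity check (not part of the argument; the dimension, sample size, and seed below are arbitrary choices), the following Python sketch tests this inequality for $f(x)=\|x\|_1$, whose gradient is ${\rm sign}(x)$ almost everywhere, so that $\|\nabla f\|_2^2=n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 50, 200_000

# Sample N points from the standard Gaussian measure gamma_n on R^n.
X = rng.standard_normal((N, n))

# f(x) = ||x||_1 has gradient sign(x) a.e., hence ||grad f(x)||_2^2 = n.
f = np.abs(X).sum(axis=1)
var_f = f.var()          # empirical Var_{gamma_n}(f); true value is n(1 - 2/pi)
dirichlet = float(n)     # E ||grad f||_2^2 = n exactly for this f

print(var_f <= dirichlet)  # the Poincare inequality holds with room to spare
```

Here the Dirichlet form is computed exactly rather than sampled; the slack in the inequality (roughly a factor $1/(1-2/\pi)$) reflects that $\|\cdot\|_1$ is far from an extremal function.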
A refinement of the Poincar\'{e} inequality was proved by Talagrand in \cite{Tal} for the discrete cube $\{-1,1\}^n$ (see also \cite{BLM} for a recent exposition) and its continuous version, in the Gaussian context, was presented in \cite{CL} (see also \cite{Cha}): \begin{theorem}[Talagrand's $L_1-L_2$ bound] \label{thm:Talagrand bd} Let $f: \mathbb R^n \to \mathbb R$ be a smooth function. If $A_i:=\|\partial_i f\|_{L_2(\gamma_n)} $ and $a_i:= \|\partial_i f\|_{L_1(\gamma_n)}$, then one has: \begin{align*} {\rm Var}_{\gamma_n} (f) \leq C \sum_{i=1}^n \frac{A_i^2}{1+\log ( A_i / a_i) }, \end{align*} where $\partial_i f$ stands for the partial derivative $\partial f / \partial x_i$. \end{theorem} \noindent This inequality will be used in order to prove concentration for the $\ell_p$ norm when $p$ is sufficiently large. Pisier discovered in \cite{Pis} another Gaussian inequality which contains the $(r,r)$-Poincar\'{e} inequalities and the Gaussian concentration inequality as special cases (see Remarks \ref{rems:2-12}). \begin{theorem}\label{thm:Pis-ineq} Let $\phi:\mathbb R\to \mathbb R$ be a convex function and let $f:\mathbb R^n\to \mathbb R$ be $C^1$-smooth. Then, if $X,Y$ are independent copies of a Gaussian random vector, we have: \begin{align*} \mathbb E \phi\left( f(X)-f(Y)\right)\leq \mathbb E \phi \left( \frac{\pi}{2} \langle \nabla f(X), Y\rangle \right). \end{align*} \end{theorem} \begin{remarks} \label{rems:2-12} \rm \noindent 1. {\it $(r,r)$-Poincar\'{e} inequalities.} For $\phi(t)=|t|^r, \, r\geq 1$ we get: \begin{align} \label{eq: 2.9} \|f-\mathbb Ef\|_{L_r(\gamma_n)} \simeq \left(\mathbb E |f(X)-f(Y)|^r \right)^{1/r} \leq \frac{\pi}{2}\sigma_r \left( \mathbb E \|\nabla f(X)\|_2^r \right)^{1/r}. \end{align} In particular for $r=2$ we have ${\rm Var}(f(X)) \leq \frac{\pi^2}{8} \mathbb E \|\nabla f(X)\|_2^2$, which is the Gaussian Poincar\'e inequality with a non-optimal constant. \noindent 2.
{\it Gaussian concentration.} The choice $\phi_\lambda(t)=\exp(\lambda t), \; \lambda>0 $ and a standard optimization argument on $\lambda$ (see \cite{Pis} for the details) yield: \begin{align} \label{eq:2.14} P( |f(X)- \mathbb Ef(X) |>t )\leq 2 \exp(-t^2/(2\pi^2 \|f\|^2_{\rm Lip})), \end{align} for all $t>0$. Alternatively, we may conclude a similar estimate by equation \eqref{eq: 2.9} and Markov's inequality. \end{remarks} \subsection{Negative moments of norms} The next result is due to Klartag and Vershynin from \cite{KV} (see also \cite{LO} for an estimate similar to \eqref{eq:gauss-sb} with $k(A)$ instead of $d(A)$): \begin{proposition} \label{prop:small-ball} Let $A$ be a centrally symmetric convex body on $\mathbb R^n$. We define: \begin{align} d(A):= \min\left\{ n, -\log \gamma_n \left( \frac{m}{2} A \right)\right\}, \end{align} where $m$ is the median of $x \mapsto \|x\|_A$ with respect to $\gamma_n$. Then, one has: \begin{align} \label{eq:gauss-sb} \gamma_n\left( \left \{ x: \|x\|_A \leq c\varepsilon \mathbb E\|X\|_A \right \} \right)\leq (C\varepsilon)^{cd(A)}, \end{align} for all $0<\varepsilon<\varepsilon_0$ where $\varepsilon_0>0$ is an absolute constant. Moreover, for all $0<k< d(A)$ we have: $ I_{-k}(\gamma_n, A) \geq c I_1(\gamma_n, A)$. Note that $d(A)> c_1 k(A)$. \end{proposition} This result implies that the negative moments exhibit stable behavior up to the level $d(A)$. In fact, one can show that up to the critical dimension the moments of any norm with respect to the Gaussian (or the uniform measure on the sphere) are almost constant, thus complementing the estimates \eqref{eq:2.11} and \eqref{eq:iso-M}. In order to quantify the latter we need the next consequence of Proposition \ref{prop:small-ball}.
\begin{lemma} \label{lem:reduc-neg-moms} Let $A$ be a centrally symmetric convex body on $\mathbb R^n$ which satisfies the small ball probability estimate: \begin{align*} \gamma_n( \varepsilon I_1 A) < (K\varepsilon)^{\alpha d}, \end{align*} for all $0<\varepsilon< \varepsilon_0$ ($K, \alpha>0$). Then, for all $r,s>0$ with $r+s< \alpha d/3$ we have: \begin{align*} I_{-r-s}^{-r-s}(\gamma_n, A) \leq \left(\frac{CK}{I_1}\right)^s I_{-r}^{-r}(\gamma_n, A), \end{align*} where $I_1=I_1(\gamma_n,A)$ and $C>0$ is an absolute constant. \end{lemma} \noindent {\it Proof.} We set $I_q=I_q(\gamma_n,A)$. For any $0<\varepsilon <\varepsilon_0$ we may write: \begin{align*} I_{-r-s}^{-r-s} = \int \frac{1}{\|x\|_A^{r+s}} \, d\gamma_n(x) &\leq \frac{1}{(\varepsilon I_1)^s} \int \frac{1}{\|x\|_A^r} \, d\gamma_n(x)+\int_{\varepsilon I_1A} \frac{1}{\|x\|_A^{r+s}} \, d\gamma_n(x) \\ &\leq \frac{1}{(\varepsilon I_1)^s} I_{-r}^{-r} + (K\varepsilon)^{\alpha d/2} I_{-2(r+s)}^{-r-s}, \end{align*} by the Cauchy-Schwarz inequality. Note that the small ball probability assumption implies that: $I_{-s}\geq c\varepsilon_0 I_1$ for all $0<s< 2\alpha d/3$. Thus, if $r+s<\alpha d/3$, we get $I_{-2(r+s)} >c_1 I_{-(r+s)}$ and the previous estimate yields: \begin{align*} I_{-r-s}^{-r-s} <\frac{1}{(\varepsilon I_1)^s} I_{-r}^{-r} + (K\varepsilon)^{\alpha d/2} c_1^{-r-s} I_{-r-s}^{-r-s}. \end{align*} Choosing $\varepsilon$ small enough so that $(K\varepsilon)^{\alpha d/2}< c_1^{r+s}/2$, say $0< \varepsilon \leq c_1/(2K)$, we conclude the result. $\hfill \quad \Box$ \begin{theorem}\label{thm:stability-moms} Let $A$ be a centrally symmetric convex body on $\mathbb R^n$. Then, one has: \begin{align*} \frac{I_r(\gamma_n, A)}{I_{-r}(\gamma_n,A)} \leq 1+\frac{Cr}{k(A)}, \end{align*} for all $0<r<ck(A)$, where $C,c>0$ are absolute constants. \end{theorem} \noindent {\it Proof.} We present the argument in two steps: \smallskip \noindent {\it Step 1.} (positive moments).
We use the log-Sobolev inequality to estimate the growth of the moments. The basic observation is that: \begin{align*} \frac{d}{dr} \left( \log \|f\|_{L_r(\mu)} \right) = \frac{{\rm Ent}_\mu (|f|^r)}{r^2 \|f\|_{L_r(\mu)}^r}, \end{align*} for any Lipschitz function $f$. Applying this to the function $f=\|\cdot\|_A $ we get: \begin{align*} (\log I_r)' \leq \frac{1}{2I_r^r} \mathbb E \|X\|_A^{r-2} \| \, \nabla \|X\|_A \, \|_2^2 \leq \frac{b^2}{2I_r^r} I_{r-2}^{r-2}, \end{align*} for all $r>0$, where $b=b(A)$ is the Lipschitz constant of $\|\cdot\|_A$. It is easy to see that $(\log I_r)' \leq \frac{1}{2k(A)}$ for $r\geq 2$, while for $0<r<2$ we may write: \begin{align}\label{eq:4.11} (\log I_r)' \leq \frac{b^2}{2 I_{-(2-r)}^2}\leq \frac{C_1b^2}{I_1^2} \leq \frac{C_1'}{k(A)}, \end{align} where we have used Proposition \ref{prop:small-ball}. Using \eqref{eq:4.11} we may write: \begin{align} \label{eq:2.41} \log (I_r/I_0)= \int_0^r (\log I_t)'\, dt \leq \int_0^r \frac{C_1}{k} \, dt=\frac{C_1r}{k}, \end{align} for all $r>0$. \smallskip \noindent {\it Step 2.} (negative moments). As before, using the log-Sobolev inequality, for all $0<r<c_1d(A)$ we may write: \begin{align*} (\log I_{-r})' \geq -\frac{b^2}{2I_{-r}^{-r}}I_{-r-2}^{-r-2} \geq -\frac{C_2 b^2}{I_1^2} \geq -\frac{C_2'}{k(A)}, \end{align*} where we have used Lemma \ref{lem:reduc-neg-moms}. The same reasoning as in \eqref{eq:2.41} shows that $\log (I_{-r}/I_0)\geq -\frac{C_2r}{k}$, for all $0<r<c_1d(A)$. Combining the two steps and restricting to $0<r<c_2k(A)$ we conclude the result. $\hfill \quad \Box$ \section{The Gaussian variance of the $\ell_p$ norm} A standard method for bounding the variance is the concentration inequality \eqref{eq:2.14}, see e.g. \cite{LMS} or \cite[Proposition 1.9]{Led}. An integration by parts argument implies that if $f:\mathbb R^n\to \mathbb R$ is an $L$-Lipschitz function, then ${\rm Var}(f) \lesssim L^2$.
In particular, if $f(x)=\|x\|_p$ this estimate yields: \begin{align*} {\rm Var} \|X\|_p \lesssim b^2(B_p^n) \simeq \max\{ n^{2/p-1}, 1\}, \quad 1\leq p\leq \infty. \end{align*} For $1\leq p\leq 2$ this estimate turns out to be the correct one. But for $2<p\leq \infty$ this method gives bounds which are far from the actual ones. The purpose of this Section is to compute the correct order of magnitude for the Gaussian variance of the $\ell_p$ norm. Our first approach lies in determining the limit distribution of the sequence of variables $(\|g\|_{\ell_p^n})_{n=1}^\infty$. Here $\|g\|_{\ell_p^n}$ stands for the $\ell_p$ norm of the $n$-dimensional ``truncation'' of the sequence $(g_i)_{i=1}^\infty$ of i.i.d. standard Gaussian random variables, i.e. $\|g\|_{\ell_p^n}:=(\sum_{i\leq n}|g_i|^p)^{1/p}$. \subsection{ The variance of the $\ell_p$ norm for $1 \leq p <\infty$} In this case we use the next Proposition, known in Statistics as the ``Delta Method'' (for a proof see \cite{Cra}): \begin{proposition} \label{prop: delta-meth} Let $\theta, \sigma \in \mathbb R$ and let $(Y_n)$ be a sequence of random variables that satisfies $n^{1/2}(Y_n-\theta) \longrightarrow N(0,\sigma^2)$ in distribution. Assume that the function $h$ is differentiable with $h'(\theta)\neq 0$. Then, \begin{align*} n^{1/2} (h(Y_n)-h(\theta)) \longrightarrow N(0,\sigma^2(h'(\theta))^2) \end{align*} in distribution. \end{proposition} Now we may prove the next asymptotic estimate: \begin{theorem} \label{thm:up-low bound var} Let $1\leq p <\infty$. Let $(\xi_j)_{j=1}^\infty$ be a sequence of i.i.d. random variables with $m_{3p}^{3p}:=\mathbb E|\xi_1 |^{3p}<\infty$. Then, there exist positive constants $c_p,C_p$ depending only on $p$ and the distribution of $(\xi_j)$ such that: \begin{align*} c_p n^{\frac{2}{p}-1} \leq {\rm Var}\| \xi \|_{\ell_p^n} \leq C_p n^{\frac{2}{p}-1}, \end{align*} for all $n$, where $\|\xi\|_{\ell_p^n}^p =\sum_{j\leq n} |\xi_j|^p$.
\end{theorem} \noindent {\it Proof.} Let $Y_n:=\frac{1}{n}\sum_{j=1}^n |\xi_j|^p$. Then by the Central Limit Theorem we know that: \begin{align*} \sqrt{n}(Y_n-m_p^p) \longrightarrow N(0, v_p^2) \end{align*} in distribution, where $v_p^2 := {\rm Var}|\xi_1|^p$. Consider the function $h(t)=t^{1/p}, \; t>0$ and apply Proposition \ref{prop: delta-meth} to get: \begin{align*} \zeta_n:= \sqrt{n}(n^{-1/p}\|\xi\|_p-m_p) \longrightarrow N \left( 0, \frac{v_p^2}{p^2} m_p^{2(1-p)} \right), \end{align*} in distribution. Using the fact that $m_{3p}<\infty$ we may conclude the uniform integrability of $(\zeta_n^2)_{n=1}^\infty$: \noindent {\it Claim.} For all $n\geq 1$ we have: \begin{align*} \mathbb E |\zeta_n|^3 \lesssim m_{3p}^{3p}/ m_p^{3(p-1)}. \end{align*} \noindent {\it Proof of Claim.} We may write: \begin{align*} \mathbb E |\zeta_n|^3 & =n^{\frac{3}{2}-\frac{3}{p}} \mathbb E \left| \|\xi\|_p -n^{1/p} m_p \right|^3 \leq \frac{ n^{\frac{3}{2}-\frac{3}{p}} }{ (n^{1/p}m_p)^{3(p-1)} } \mathbb E \left| \|\xi\|_p^p -nm_p^p \right|^3 \leq \frac{ n^{-3/2} }{ m_p^{3(p-1)} } \mathbb E \left | \|\xi\|_p^p - \|\xi'\|_p^p \right|^3, \end{align*} where $\xi'$ is an independent copy of $\xi$ and we have also used the numerical inequality $a^{p-1} |z-a| \leq |z^p-a^p|$ for $z\geq 0, a>0, p\geq 1$ and Jensen's inequality. Finally, a standard symmetrization argument yields: \begin{align*} \mathbb E \left| \sum_{j=1}^n \left( |\xi_j|^p -|\xi_j'|^p \right) \right|^3 \lesssim \mathbb E \left[\sum_{j=1}^n \left( |\xi_j|^p -|\xi_j'|^p \right)^2 \right]^{3/2} \lesssim n^{3/2} \mathbb E \left| |\xi_1|^p -|\xi_1'|^p \right|^3 \lesssim n^{3/2} m_{3p}^{3p}, \end{align*} where we have also used Jensen's inequality, again. This proves the claim. 
Hence, we may conclude: \begin{align} \label{eq:3.7} n^{1-\frac{2}{p}} {\rm Var}(\| \xi \|_p) = {\rm Var} \left( n^{\frac{1}{2}-\frac{1}{p}} \|\xi \|_p \right) = {\rm Var} \left[ \sqrt{n}(n^{-1/p}\| \xi \|_p-m_p) \right] \to \frac{v_p^2}{p^2}m_p^{2(1-p)}, \end{align} as $n\to \infty$ and the result follows. $\hfill \quad \Box$ \smallskip \begin{remark} \rm The reader should notice that, for fixed $p \geq 1$, the dependence we obtain on the dimension is the same regardless of the randomness we choose for the underlying variables $(\xi_i)$. In addition, the argument is essentially based on stochastic independence. Moreover, in the case that $(\xi_i)$ are standard normals, the above limit value is estimated as: \begin{align} \label{eq:3.8} \frac{v_p^2}{p^2}m_p^{2(1-p)} \sim \frac{1}{e\sqrt{2}} \frac{2^p}{p}, \quad p\to \infty. \end{align} This suggests that the constants $c_p,C_p$ should depend exponentially on $p$. \end{remark} \subsection {The variance of the $\ell_\infty$ norm} Of course the variance in that case can be computed by employing the tail estimates for the $\ell_\infty$-norm proved in \cite{Sch2}. However, we prefer here to give a proof with a more ``probabilistic flavor''. Actually, the argument we present below works for all i.i.d. random variables with exponential tails, but we shall focus on Gaussians. Let $(g_i)_{i=1}^\infty$ be independent, standard Gaussian random variables and let $Y_n:= \max_{i\leq n} |g_i|, \; n\geq 2$. We set $a_n:= -\Phi^{-1}(\frac{1}{2n}) >0$. Note that $a_n\to \infty$ and Gordon's inequality \eqref{eq:gordon-2} shows that $a_n \sim \sqrt{2\log n}$ as $n\to \infty$. We define $W_n: = a_n(Y_n-a_n)$ and we have the next well-known fact (see \cite[\S \, 9.3]{Dav}): \begin{proposition} Let $\eta$ be a Gumbel random variable, that is, the cumulative distribution function of $\eta$ is given by: \begin{align*} F_\eta(t) := \exp(-e^{-t}), \; t\in \mathbb R.
\end{align*} If $(W_n)$ is the sequence defined above, then for every $t\in \mathbb R$ we have: \begin{align*} \mathbb P (W_n\leq t) \to \exp(-e^{-t}) , \end{align*} that is, $W_n$ converges to the Gumbel variable in distribution. \end{proposition} For the random variable $\eta$ it is known that $\mathbb E (\eta) =\gamma$ (the Euler-Mascheroni constant) and ${\rm Var} (\eta) = \pi^2/6$. Therefore, we obtain: \begin{align*} a_n^2 {\rm Var}(Y_n) = {\rm Var}(W_n) \to {\rm Var} (\eta) , \end{align*} as $n\to \infty$. This proves the following: \begin{theorem}\label{thm:var-ell-infty} If $Z$ is an $n$-dimensional standard Gaussian random vector, we have: \begin{align*} {\rm Var} \|Z\|_\infty ={\rm Var}_{\gamma_n} \|x\|_\infty \simeq (\log n)^{-1}. \end{align*} \end{theorem} \medskip It should be noticed that the dependence on dimension we get for fixed $1\leq p<\infty$ is polynomial in $n$, while for $p=\infty$ it is logarithmic in $n$. As we have already explained, this ``skew'' behavior relies on the fact that as $p$ grows, the constants in the equivalence should be expected to be exponential in $p$ (see \eqref{eq:3.7} and \eqref{eq:3.8}). In the rest of this paragraph we study and quantify this phenomenon. Our aim is to give bounds that are as sharp as possible and to describe how they behave as $p$ varies with $n$. \subsection{Tightening the bounds} The purpose of this subsection is to provide continuous bounds in terms of $p$ for the variance of the $\ell_p$ norm when the dimension $n \to \infty$ and $p$ varies from $1$ to $\infty$ (along with $n$). One can easily see that: \begin{align*} c_1 p \leq n^{1-2/p} {\rm Var}\|X\|_p \leq c_2 p {\rm Var}|g_1|^p \simeq p(2p/e)^p, \end{align*} by just comparing with the variance of the $\ell_2$ norm and the $p$-th power of the $\ell_p$ norm. Below, we show that one can always have better estimates. In order to prove these estimates we will use the following: \begin{lemma} \label{lem:bd-var} Let $4\leq p\leq \infty$.
Then one has: \begin{align*} I_r (\gamma_n,B_p^n) / I_{-r} (\gamma_n,B_p^n) \leq \exp \left( \frac{C_1r}{k_{p,n} \log n} \right) , \quad 0<r< c_1\sqrt{k_{p,n} \log n}, \end{align*} where $k_{p,n}\equiv k(B_p^n)$. \end{lemma} We postpone the proof of this Lemma to Section 4 (Theorem \ref{thm:stability-r-means}). \subsubsection{Upper bound (via Talagrand's inequality)} For $p>1$ we have: $\partial_i \|x\|_p =\frac{|x_i|^{p-1}}{\|x\|_p^{p-1}} \sgn{x_i}$ a.s. Thus, one has: \begin{align*} A^2:= \big \| \partial_i \| \cdot \|_p \big\|_{L_2}^2 \leq \sigma_{2p-2}^{2p-2}I_{-2(p-1)}^{-2(p-1)}(\gamma_{n-1},B_p^{n-1}), \; a: = \big \| \partial_i \|\cdot \|_p \big\|_{L_1} \leq \sigma_{p-1}^{p-1}I_{-(p-1)}^{-(p-1)}(\gamma_{n-1},B_p^{n-1}). \end{align*} Set $I_s(\gamma_{n-1},B_p^{n-1})\equiv I_s$. Thus, a direct application of Theorem \ref{thm:Talagrand bd} yields: \begin{align} \label{eq:6} {\rm Var}(\|X\|_p)\leq C n\ \frac{\sigma_{2p-2}^{2p-2} I_{-2(p-1)}^{-2(p-1)} }{1+\log\left(\frac{\sigma_{2p-2}^{p-1} }{\sigma_{p-1}^{p-1}} \frac{I_{-(p-1)}^{p-1}}{I_{-2(p-1)}^{p-1} } \right) } \leq C_1 n\frac{\sigma_{2p-2}^{2p-2}/I_{-2(p-1)}^{2(p-1)} }{p}, \end{align} where we have used the fact that $\left(\sigma_{2p-2}/ \sigma_{p-1}\right)^{p-1} \simeq 2^p$, which follows by \eqref{eq: 2.4}. As long as $2p<c_1\sqrt{k_{p,n} \log n}$, which is satisfied when $p\leq c_0\log n$ for some sufficiently small absolute constant $c_0>0$ in view of Proposition \ref{prop:mean-ell-p}, we may apply Lemma \ref{lem:bd-var} to get: \begin{align*} I_{-2(p-1)}^{2(p-1)} \geq e^{ - \frac{c' p^2}{k_{p,n} \log n}}I_p^{2(p-1)} \geq c_1' \sigma_p^{2(p-1)} (n-1)^{2-2/p}. \end{align*} Plugging this estimate into \eqref{eq:6}, we derive the upper bound: \begin{align*} {\rm Var}\|X\|_p \leq C_2 \frac{ \sigma_{2p-2}^{2p-2} }{\sigma_p^{2p-2} p} n^{\frac{2}{p}-1} \simeq \frac{2^p}{p} n^{2/p-1}. \end{align*} Note that this is exactly the same order as the limit value we obtained using the Delta Method.
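The order $\frac{2^p}{p}n^{2/p-1}$ is easy to probe numerically. The following Python sketch (the dimension, sample size, seed, and the test value $p=4$ are arbitrary choices, not part of the argument) compares the empirical value of $n^{1-2/p}{\rm Var}\|X\|_p$ with the Delta Method limit $\frac{v_p^2}{p^2}m_p^{2(1-p)}$ of \eqref{eq:3.7}; for $p=4$ and Gaussian entries, $m_4^4=\mathbb E g^4=3$ and $v_4^2=\mathbb E g^8-9=96$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, p = 4_000, 10_000, 4

# Sample ||X||_p, X ~ N(0, I_n), in batches to keep memory moderate.
norms = np.concatenate([
    (np.abs(rng.standard_normal((2_000, n))) ** p).sum(axis=1) ** (1.0 / p)
    for _ in range(N // 2_000)
])

# Delta Method limit of n^{1-2/p} Var||X||_p for p = 4 and Gaussian entries:
# m_p = 3^{1/p} (since E|g|^4 = 3) and v_p^2 = Var|g|^4 = E g^8 - 9 = 96.
limit = (96.0 / p**2) * 3.0 ** (2.0 * (1.0 - p) / p)
scaled = n ** (1.0 - 2.0 / p) * norms.var()

print(scaled, limit)   # the two values should agree up to a few percent
```

For moderate $p$ the agreement is already very good at $n$ in the thousands; the exponential-in-$p$ constants only become visible for larger $p$.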
\subsubsection{Lower bound (via Talagrand's inequality)} Here we will use the next numerical result: \begin{lemma} \label{lem:2-sided-ineq} Let $a,b>0$ and $0<\theta \leq 1$. Then, we have: \begin{align*} \theta |a-b| \left( \frac{2}{a+b}\right)^{1-\theta} \leq |a^\theta -b^\theta| \leq \theta |a-b| \frac{a^{\theta-1} + b^{\theta-1}}{2} . \end{align*} \end{lemma} \noindent {\it Proof.} We may assume without loss of generality that $0<a<b$ and $0<\theta<1$. If we set $f(t)=t^{\theta -1}, \; t>0$, note that $f$ is convex in $[a,b]$, hence the estimate follows by the Hermite-Hadamard inequality (see \cite{HLP}). $\hfill \quad \Box$ \medskip Applying the lower bound of Lemma \ref{lem:2-sided-ineq} for $a=\|X\|_p^p, \; b=\|Y\|_p^p$ and $\theta =1/p$, where $X,Y$ are independent and $X,Y \sim N({\bf 0}, I_n)$, we obtain: \begin{align} \label{eq:10} 2 {\rm Var}\|X\|_p = \mathbb E( \|X\|_p-\|Y\|_p)^2 \geq \frac{2^{2/q}}{p^2} \mathbb E \frac{(\|X\|_p^p -\|Y\|_p^p )^2}{( \|X\|_p^p +\|Y\|_p^p)^{2/q}} \geq \frac{1}{p^2} \mathbb E \left|\sum_{i=1}^n \frac{|X_i|^p-|Y_i|^p}{S^{1/q}}\right|^2, \end{align} where $q$ is the conjugate exponent of $p$, i.e. $1/p+1/q=1$ and \begin{align*} S:= \|X\|_p^p+\|Y\|_p^p = \|Z\|_p^p , \quad Z=(Z_1, \ldots ,Z_{2n}) \sim N({\bf 0},I_{2n}). \end{align*} Now we observe that the variables $\eta_j : =\frac{|X_j|^p - |Y_j|^p}{S^{1/q}}$ have the same distribution and satisfy $\mathbb E(\eta_i \eta_j)=0$ for $i\neq j$. Therefore, we have: \begin{align*} \mathbb E \left|\sum_{i=1}^n \frac{|X_i|^p-|Y_i|^p}{S^{1/q}}\right|^2 = \mathbb E \left|\sum_{i=1}^n \eta_i\right|^2 =\sum_{i=1}^n \mathbb E \eta_i^2 = n\mathbb E \eta_1^2 \end{align*} Hence, estimate \eqref{eq:10} becomes: \begin{align*} {\rm Var}\|X\|_p \geq \frac{n}{2p^2} \mathbb E \frac{(|X_1|^p-|Y_1|^p)^2}{S^{2/q}} = \frac{n}{p^2} \left( \mathbb E \frac{|X_1|^{2p}}{S^{2/q}} - \mathbb E \frac{|X_1|^p |Y_1|^p}{S^{2/q}}\right). \end{align*} Let $T:=\sum_{i>1}|X_i|^p+ \sum_{i>1}|Y_i|^p$. 
Note that $T\leq S$, thus we obtain: \begin{align} \label{eq:var-1} {\rm Var} \|X\|_p \geq \frac{n}{p^2} \left[ \mathbb E\frac{|Z_1|^{2p}}{S^{2/q}}-\sigma_p^{2p} \mathbb E (T^{-2/q}) \right]. \end{align} An application of Lemma \ref{lem:bd-var} yields \begin{align} \label{eq:var-2} \mathbb E(T^{-2/q}) \lesssim \frac{1}{\sigma_p^{2p-2}(n-1)^{2-2/p}}, \end{align} as long as $p\leq c_0\log n$. For the term $\mathbb E\frac{|Z_1|^{2p}}{S^{2/q} }$ we may write: \begin{align*} \mathbb E\frac{|Z_1|^{2p}}{S^{2/q} } = (2n)^{-1} \mathbb E \frac{\|Z\|_{2p}^{2p}}{S^{2/q}} =(2n)^{-1} \mathbb E \frac{\|Z\|_{2p}^{2p}}{\|Z\|_p^{2(p-1)}} \geq \frac{(\mathbb E \|Z\|_{2p}^p)^2}{2n \mathbb E\|Z\|_p^{2(p-1)} }, \end{align*} where we have used that the variables $|Z_j|^{2p}/S^{2/q}$ are equidistributed and the Cauchy-Schwarz inequality. Now by using Lemma \ref{lem:bd-var} again we obtain: \begin{align*} \mathbb E\|Z\|_{2p}^{2p} \leq e^{ \frac{cp^2}{k_{2p,2n} \log n}} (\mathbb E \|Z\|_{2p}^p)^2 \leq C_1 (\mathbb E \|Z\|_{2p}^p)^2 \end{align*} and similarly we have: $ \mathbb E \|Z\|_p^{2(p-1)} \leq C_2 (\mathbb E \|Z\|_p^p)^{2(p-1)/p}$, as long as $p\leq c_0\log n$. Therefore, we get: \begin{align} \label{eq:var-3} \mathbb E\frac{|Z_1|^{2p}}{S^{2/q} } \geq \frac{c_3}{n} \frac{ \mathbb E \|Z\|_{2p}^{2p}}{ (\mathbb E \|Z\|_p^{p})^{2(p-1)/p} } \simeq \frac{\sigma_{2p}^{2p}}{ n^{2-2/p} \sigma_p^{2(p-1)}}. \end{align} Inserting \eqref{eq:var-2} and \eqref{eq:var-3} in \eqref{eq:var-1} we get: \begin{align*} {\rm Var}\|X\|_p \geq \frac{c_4n}{p^2} \left[ c_5 \frac{\sigma_{2p}^{2p} }{n^{2-2/p}\sigma_p^{2(p-1)}} - c_6\frac{\sigma_p^{2p} }{\sigma_p^{2p-2} n^{2-2/p}} \right] = \frac{c_4c_5 \sigma_p^2}{p^2 n^{1-2/p} } \left[ \frac{\sigma_{2p}^{2p} }{\sigma_p^{2p}}-\frac{c_6}{c_5}\right]. 
\end{align*} Taking into account that $(\sigma_{2p}/\sigma_p)^{2p}\simeq 2^p$ we may conclude: \begin{align} {\rm Var}(\|X\|_p) \geq c_7\frac{2^p}{p} n^{2/p-1}, \end{align} provided that $p$ is greater than some large absolute constant. \bigskip Finally, for much larger values of $p$, namely for $p\geq c_0\log n$, we employ Theorem \ref{thm:Talagrand bd} again. This is an extension of the known argument for $\ell_\infty$, which can be found in \cite{Cha}. As before, if $a_i:= \|\partial_i f\|_{L_1(\gamma_n)}$ we may write: \begin{align*} a_i = \int_{\mathbb R^n} \frac{|x_i|^{p-1}}{\|x\|_p^{p-1}}\, d\gamma_n(x)= \frac{1}{n} \int_{\mathbb R^n} \left( \frac{\|x\|_{p-1}}{\|x\|_p}\right)^{p-1}\, d\gamma_n(x) \leq \frac{n^{1/p}}{n}=n^{-1/q}, \end{align*} where in the last step we have used estimate \eqref{eq: Holder - p-norms} and $q$ is the conjugate exponent of $p$. Moreover, we have: \begin{align*} A_i^2:=\|\partial_i f\|_{L_2(\gamma_n)}^2 = \int_{\mathbb R^n}\frac{|x_i|^{2p-2}}{\|x\|_p^{2p-2}}\, d\gamma_n(x)= \frac{1}{n} \int_{\mathbb R^n} \left(\frac{\|x\|_{2p-2}}{\|x\|_p}\right)^{2p-2}\, d\gamma_n(x) \leq 1/n, \end{align*} by the estimates \eqref{eq: Holder - p-norms} again. These bounds and Theorem \ref{thm:Talagrand bd} yield: \begin{align} \label{eq:var-4} {\rm Var}(\|X\|_p) \leq C \sum_{i=1}^n \frac{A_i^2}{1+ \frac{1}{q}\log n +\log A_i } \leq \frac{C}{\log n}, \quad p\geq c_0\log n \end{align} where we have used the monotonicity of $t\mapsto \frac{t^2}{1+ \frac{1}{q}\log n +\log t}$ and that $q$ stays bounded away from $2$. \smallskip Let us also note that the variance of the $\ell_p$ norm stabilizes around $\frac{1}{\log n }$ for $p> (\log n)^2$. This is a special case of the next reverse concentration estimate: \begin{proposition} \label{prop: 3.5} Let $p> (\log n)^2$ and let $X$ be an $n$-dimensional standard Gaussian random vector.
Then we have: \begin{align*} P\left( \big| \|X\|_p -\mathbb E \|X\|_p \big| > \varepsilon \mathbb E\|X\|_p \right) \geq c e^{-C\varepsilon \log n} , \end{align*} for all $0<\varepsilon<1$, where $C,c>0$ are absolute constants. In particular, we have: \begin{align*} {\rm Var}\|X\|_p \simeq \frac{1}{\log n}. \end{align*} \end{proposition} \noindent {\it Proof.} Consider $\frac{2}{\log n} <\varepsilon <1$ and write: \begin{align*} P( \|X\|_p > (1+\varepsilon)\mathbb E\|X\|_p) & \geq P(\|X\|_\infty > (1+\varepsilon)n^{1/p} \mathbb E\|X\|_\infty) \\ & \geq P(\|X\|_\infty > (1+2\varepsilon) \mathbb E\|X\|_\infty) > c e^{-C\varepsilon \log n}, \end{align*} where we have used \eqref{eq: Holder - p-norms} and at the last step the concentration from \cite{Sch2}. Hence, \begin{align*} P\left( \big| \|X\|_p -\mathbb E\|X\|_p \big| >\varepsilon \mathbb E\|X\|_p \right) & \geq c' e^{-C \varepsilon \log n} , \end{align*} for all $0<\varepsilon <1$. For the second assertion we may write: \begin{align*} {\rm Var}(\|X\|_p ) &= 2 (\mathbb E\|X\|_p)^2 \int_0^{\infty} t P \left( \big| \|X\|_p -\mathbb E\|X\|_p \big| > t\mathbb E\|X\|_p \right) \, dt \\ & \geq 2 c'(\mathbb E\|X\|_p)^2 \int_0^1 t e^{-Ct\log n} \, dt \gtrsim \frac{(\mathbb E\|X\|_p)^2}{(\log n)^2}. \end{align*} The result follows by Proposition \ref{prop:mean-ell-p}. $\hfill \quad \Box$ \medskip The results of this paragraph can be summarized in the following theorem: \begin{theorem} \label{thm:var-ell_p} There exist absolute constants $c_0, c_1,C_1>0$ with the following property: For all $n$ large enough and for any $1\leq p\leq c_0\log n$ we have: \begin{align} c_1\frac{2^p}{p} \leq n^{1-\frac{2}{p}} {\rm Var} \|X\|_p \leq C_1 \frac{2^p}{p}. \end{align} If $p>c_0 \log n$ then we have: \begin{align} {\rm Var}\|X\|_p \leq \frac{C_1} {\log n} , \end{align} whereas for $p\geq (\log n)^2$ we also have: \begin{align} {\rm Var} \|X\|_p \geq \frac{c_1}{\log n}, \end{align} where $X\sim N({\bf 0},I_n)$.
\end{theorem} \begin{note} While this paper was under review, Tikhomirov \cite{Tik3} improved Proposition \ref{prop: 3.5} by extending the range to $p \geq C_0\log n$ (his proof gives $C_0=12$). In particular, ${\rm Var} \|X\|_p \gtrsim (\log n)^{-1}$ for $p \geq C_0 \log n$. We present his argument in the Appendix. This only leaves a relatively small interval $(c_0\log n, C_0\log n)$, for which the behavior of the variance is not exactly determined. In other words, we do not know for which constant $c_t>0$ the transition from polynomial to logarithmic behavior occurs. Our bounds strongly suggest $c_t=1/\log 2$ as a plausible value for this constant. \end{note} \medskip We close this section with some discussion on the methods used for bounding the variance. If we are interested in upper bounds, we may use the Poincar\'{e} inequality \eqref{eq:Poin-ineq}, which estimates the variance by the $L_2$ average of the Euclidean norm of the gradient of $f$. In principle the latter average is smaller than the Lipschitz constant: $\big\| \|\nabla f\|_2 \big\|_{L_2(\gamma_n)} \leq \big\| \|\nabla f\|_2 \big\|_{L_\infty (\gamma_n) }=L$. The reader may check that for $2< p<\infty$ and $f=\|\cdot\|_p$ we have: \begin{align*} \int_{\mathbb R^n} \big \| \nabla f(x) \big\|_2^2 \, d\gamma_n(x) = \int_{\mathbb R^n} \frac{\|x\|_{2p-2}^{2p-2}}{\|x\|_p^{2p-2}}\, d\gamma_n(x) \simeq_p \frac{1}{n^{1-\frac{2}{p}}} \ll 1= b^2(B_p^n) \equiv {\rm Lip}(f)^2. \end{align*} In the case $p=\infty$ we have $\big\| \nabla\|x\|_\infty \big\|_2\equiv 1$ a.e., hence: \begin{align*} \int_{\mathbb R^n} \big\| \nabla \|x\|_\infty \big\|_2^2 \, d\gamma_n(x) =1 =b(B_\infty^n). \end{align*} Thus, the Poincar\'{e} inequality also fails to give the sharp upper bound for the variance in this case.
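This gap is easy to observe numerically: $\mathbb E\big\|\nabla\|X\|_\infty\big\|_2^2=1$ for every $n$, yet the variance keeps shrinking as the dimension grows. A rough Monte Carlo sketch in Python (the sample sizes and seed are arbitrary choices, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(2)

def var_linf(n, N=10_000, batch=1_000):
    """Empirical Var ||X||_inf for X ~ N(0, I_n), sampled in batches."""
    maxima = np.concatenate([
        np.abs(rng.standard_normal((batch, n))).max(axis=1)
        for _ in range(N // batch)
    ])
    return maxima.var()

v_small, v_large = var_linf(100), var_linf(10_000)
# E ||grad ||x||_inf ||_2^2 = 1 for all n, yet the variance keeps shrinking:
print(v_small, v_large)
```

Both values are well below the Poincar\'{e} bound $1$, and the second is noticeably smaller than the first, consistent with the $(\log n)^{-1}$ decay of Theorem \ref{thm:var-ell-infty}.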
The correct estimate is recovered through the different orders of magnitude of the $L_1$ and $L_2$ norms of the partial derivatives of $x\mapsto \|x\|_\infty$, combined with Talagrand's inequality (see \cite{Cha} for the details). The phenomenon that ${\rm Var}\|X\|_{\ell_\infty^n}\simeq 1/\log n$ while $\mathbb E \big\| \nabla \|X\|_\infty \big\|_2^2\simeq 1$ is referred to as {\it super-concentration} following \cite{Cha}. For recent results on this subject see \cite{Tan}. \section{Gaussian concentration for $\ell_p$ norms} In this Section we study the Gaussian concentration for the $\ell_p$-norms for $1\leq p\leq \infty$. First we show how we may employ the log-Sobolev inequality in order to get concentration results. \subsection{An argument via the log-Sobolev inequality} Note that for the $\ell_p$ norm with $1\leq p\leq 2$ the estimate \eqref{eq:2.11} implies: \begin{align*} \frac{I_r(\gamma_n,B_p^n)}{I_1(\gamma_n,B_p^n)} \leq \sqrt{1+\frac{C_1r}{k(B_p^n) } } \leq \exp \left( \frac{C_2r}{n} \right), \end{align*} for all $r\geq 1 $. Therefore, for any $0<\varepsilon <1$ we apply Markov's inequality to get: \begin{align*} P(\|X\|_p >(1+\varepsilon) I_1 ) \leq P( \|X\|_p > e^{\varepsilon/2} I_1) \leq e^{-\varepsilon r/2} (I_r/I_1)^r\leq \exp(-\varepsilon r/2 +C_2r^2/n). \end{align*} Choosing $r= \varepsilon n/(4C_2)$ (as long as $\varepsilon >4C_2/n$) we obtain: \begin{align*} P(\|X\|_p >(1+\varepsilon) I_1 ) \leq \exp \left(-\frac{1}{16C_2} \varepsilon^2 n \right). \end{align*} Taking into account Theorem \ref{thm:stability-moms} and arguing similarly we find: \begin{align*} P(\|X\|_p <(1-\varepsilon)I_1 )\leq \exp(-c_2\varepsilon^2n). \end{align*} Combining these two estimates we arrive at the next concentration result: \begin{align*} P\left( \big| \|X\|_p -I_1 \big| >\varepsilon I_1 \right )\leq C_3 \exp(-c_3\varepsilon^2n), \end{align*} for all $0<\varepsilon <1$.
This estimate is sharp, as we will show later, but for the $\ell_p$ norm with $2<p \leq \infty$ the same method fails to give the correct concentration estimate. Carefully inspecting the proofs of the estimates used above, we see that we have bounded the $L_2$ norm of the gradient by its $L_\infty$ norm, i.e. the Lipschitz constant. A first attempt to improve the estimates would be to improve the bound on that quantity. To this end, we restrict ourselves to the range $2<p<\log n $ and we use the log-Sobolev inequality. We have the following: \begin{proposition} \label{prop: stab-ell_p} Let $2<p< c \log n$. Then, for every $r>0$ we have: \begin{align*} \displaystyle \frac{d}{dr}(\log I_r) \leq \frac{C^p}{n} \left( 1+ \frac{r}{k(B_{2p-2}^n)}\right)^{p-1} \leq \left\{ \begin{array}{ll} C_1^p/n , & 0<r\leq k(B_{2p-2}^n) \\ \\ \displaystyle \frac{1}{r} \left( \frac{C_1r}{k(B_p^n)} \right)^p , & k(B_{2p-2}^n) \leq r < k(B_p^n)/C_1 \end{array} \right. , \end{align*} while for $0< r < c d(B_p^n)$ we have: \begin{align*} - \frac{d}{dr}(\log I_{-r}) \leq \frac{C^p}{n}, \end{align*} where $c,C,C_1>0$ are absolute constants and $I_s\equiv I_s(\gamma_n, B_p^n)$. \end{proposition} \noindent {\it Proof.} First we prove the growth condition on the positive moments. Our starting point is the next estimate: \begin{align*} \frac{d}{dr} (\log I_r) =\frac{1}{r^2 I_r^r} {\rm Ent}_{\gamma_n}(\|x\|_p^r) \leq \frac{2}{r^2I_r^r} \mathbb E \left\| \nabla (\|X\|_p^{r/2})\right\|_2^2 =\frac{1}{2 I_r^r} \mathbb E\|X\|_{2p-2}^{2p-2} \|X\|_p^{r-2p}, \end{align*} where we have used the log-Sobolev inequality.
We distinguish two cases: \smallskip \noindent {\it Case 1: $0<r \leq 2p$.} We may write: \begin{align*} \frac{d}{dr} (\log I_r) &\leq \frac{n}{\mathbb E \|X\|^r_p} \mathbb E \frac{|X_1|^{2p-2}}{\|X\|_p^{2p-r}} \leq \frac{n \sigma_{2p-2}^{2p-2}}{ \mathbb E \|X\|^r_{p}} \mathbb E \frac{ 1}{\|X\|^{2p-r}_{p} } \leq \frac{n (cp)^{p-1}}{I_r^r(B_p^{n}) I_{-(2p-r)}^{2p-r}(B_p^{n})} \leq \frac{n(cp)^p}{I_{-2p}^{2p}(B_p^{n})}, \end{align*} by Proposition \ref{prop:Harris} and H\"{o}lder's inequality. By Proposition \ref{prop:small-ball} for $0< s < c_1k_{p,n}$ we have: $I_{-s}\geq c_2 I_1$. Since $p < c_1 k_{p,n}$ for $p\lesssim \log n$, we get: $(\log I_r)' \leq C_2^p / n$. \smallskip \noindent {\it Case 2: $r> 2p$.} We may write: \begin{align*} \frac{d}{dr} (\log I_r) &\leq \frac{1}{2 I_r^r} \mathbb E\|X\|_{2p-2}^{2p-2} \|X\|_p^{r-2p} \leq \frac{I_r^{2p-2}(\gamma_n,B_{2p-2}^n) }{2I_r^{2p}} , \end{align*} by H\"{o}lder's inequality. By Lemma \ref{lem:log-sob-moms} we get: \begin{align*} \frac{d}{dr} (\log I_r) \leq \frac{I_{2p-2}^{2p-2}(\gamma_n,B_{2p-2}^n) }{2I_p^{2p}} \left(1+\frac{r}{k_{2p-2,n}} \right)^{p-1} &=\frac{\sigma_{2p-2}^{2p-2}/\sigma_p^{2p}}{2n} \left(1+\frac{r}{k_{2p-2,n} }\right)^{p-1} \\ &\leq \frac{C_3^p}{n} \left(1+\frac{r}{k_{2p-2,n} }\right)^{p-1}, \end{align*} for some absolute constant $C_3>0$. \smallskip Now we turn to providing bounds for the negative moments. Here the argument is simpler. Using the log-Sobolev inequality again and Proposition \ref{prop:Harris} we have: \begin{align*} \frac{d}{dr} (\log I_{-r}) & \geq - \frac{1}{2I_{-r}^{-r}} \mathbb E \|X\|_{2p-2}^{2p-2} \|X\|_p^{-r-2p} \geq - \frac{1}{2I_{-r}^{-r}} \mathbb E \|X\|_{2p-2}^{2p-2} \mathbb E\|X\|_p^{-r-2p} \\ &= - \frac{1}{2I_{-r}^{-r}} \mathbb E \|X\|_{2p-2}^{2p-2} I_{-r-2p}^{-r-2p} \geq -C_2^p \frac{ \sigma_{2p-2}^{2p-2} n}{I_1^{2p}} \geq -C_3^p/n , \end{align*} for $r\leq c_4 d(B_p^n)$, where in the last step we have used Lemma \ref{lem:reduc-neg-moms}. The result easily follows.
$\hfill \quad \Box$ \medskip We are now ready to prove the following concentration inequality. Note that the dependence on $\varepsilon$ is better than what one obtains by employing \eqref{eq:2.14}. \begin{proposition} \label{prop: weak-conc} Let $4\leq p < c_0 \log n$. Then, one has: \begin{align*} P \left( \big | \|X\|_p- \mathbb E\|X\|_p \big| >\varepsilon \mathbb E\|X\|_p \right) \leq C_1\exp \left(-c_1 \varepsilon^{1+\frac{1}{p}} k(B_p^n) \right), \end{align*} for all $0<\varepsilon<1$. Moreover, we have: \begin{align*} P \left( \|X\|_p \leq (1-\varepsilon) \mathbb E \|X\|_p \right) \leq C_2 \exp \left( -c_2\varepsilon k(B_p^n) \right), \end{align*} for $0<\varepsilon <1$. \end{proposition} \noindent {\it Proof.} Let $4\leq p\leq c\log n$, where $c>0$ is the constant from Proposition \ref{prop: stab-ell_p}. Then, for each $0<\varepsilon<1$, using Markov's inequality we may write: \begin{align} \label{eq:4.8} P(\|X\|_p > (1+\varepsilon) I_0 )\leq e^{-\varepsilon r/2} \exp( r \log (I_r/I_0)) =\exp\left[ -r\left(\frac{\varepsilon}{2}- \log(I_r/I_0) \right) \right], \end{align} for all $r>0$. Using Proposition \ref{prop: stab-ell_p} we obtain: \begin{align*} \log (I_r/I_0) \leq \frac{C^p}{n}\int_0^r \left( 1+\frac{s}{k_{2p-2,n}} \right)^{p-1} \, ds < \frac{C^p k_{2p-2,n} }{pn } \left( 1+\frac{r}{k_{2p-2,n}} \right)^p < \frac{(2C)^p k_{2p-2,n} }{pn } \left(\frac{r}{k_{2p-2,n}} \right)^p , \end{align*} for $r> k_{2p-2,n}$. Therefore, \eqref{eq:4.8} becomes: \begin{align*} P(\|X\|_p > (1+\varepsilon) I_0 )\leq \exp \left(-\frac{\varepsilon r}{2} +\frac{(2C)^p }{pn k_{2p-2,n}^{p-1}} r^{p+1} \right), \end{align*} for $r>k_{2p-2,n}$. Minimizing the right-hand side with respect to $r$, we find that $r_{\rm min} =r_0$ satisfies: \begin{align} \label{eq:4.10} \frac{(2C)^p }{pn k_{2p-2,n}^{p-1}} (p+1) r_0^p -\frac{\varepsilon}{2} =0 \Longrightarrow r_0\simeq \varepsilon^{1/p} k_{p,n}, \end{align} and in order for this value to be admissible we ought to have $r_0>k_{2p-2,n}$.
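For the reader's convenience we sketch the computation behind \eqref{eq:4.10}, taking for granted the standard two-sided estimate $k_{q,n}\simeq qn^{2/q}$ for $2\leq q\lesssim \log n$: \begin{align*} r_0 = \varepsilon^{1/p} \left( \frac{p\, n\, k_{2p-2,n}^{p-1}}{2(p+1)(2C)^p} \right)^{1/p} \simeq \varepsilon^{1/p} \left( (2p-2)^{p-1} n^2 \right)^{1/p} \simeq \varepsilon^{1/p}\, p\, n^{2/p} \simeq \varepsilon^{1/p} k_{p,n}, \end{align*} where the factor $(2C)^p$ contributes only an absolute constant once the $p$-th root is taken.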
Hence, the value $r_0$ is admissible if $\varepsilon$ satisfies: \begin{align*} r_0>k_{2p-2,n} \Longleftrightarrow (2C)^{-p} \frac{\varepsilon n}{2} \frac{p}{p+1} > k_{2p-2,n} \Longleftrightarrow \varepsilon >(2C)^p \frac{2(p+1)}{pn} k_{2p-2,n}. \end{align*} Note that Proposition \ref{prop:mean-ell-p} implies that: \begin{align} \label{eq: 4.11} k_{q,n} \leq c_2qn^{2/q}, \quad \forall \, 2\leq q\leq \log n. \end{align} Since $p\geq 4$ it suffices to have $\varepsilon > (2C)^p 8c_2 p n^{-\frac{p-2}{p-1}}$, and hence it is enough to have $\varepsilon > (16ec_2C)^p n^{-\frac{p-2}{p-1}}$. First consider the case $k_{p,n}^{-\frac{p}{p+1}} <\varepsilon <1$. In this case the above restriction is satisfied as long as $p\leq c_3\log n$ for some sufficiently small absolute constant $c_3>0$. Indeed, one needs to check that $k_{p,n}^{-\frac{p}{p+1}} > (16ec_2C)^p n^{-\frac{p-2}{p-1}}$; taking into account \eqref{eq: 4.11} again, it suffices to have $\frac{ n^{\frac{p-2}{p-1}} }{(c_2p n^{2/p})^{\frac{p}{p+1}}} > (16ec_2C)^p$, and thus it is enough to have: \begin{align*} n^{\frac{p^2-3p}{p^2-1}} > (16e^2c_2^2 C)^p=e^{p/c_4}. \end{align*} Thus, if $c_0:= \min \{c_3,c_4/4\}>0$, all the requirements are met and we may conclude: \begin{align*} P(\|X\|_p > (1+\varepsilon) I_0 )\leq \exp \left(-\frac{\varepsilon r_0}{2} +\frac{(2C)^p }{pn k_{2p-2,n}^{p-1}} r_0^{p+1} \right) & \stackrel{ \eqref{eq:4.10}}=\exp \left( -\frac{\varepsilon}{2}r_0 +\frac{\varepsilon r_0}{2(p+1)} \right) \\ &=\exp \left( -\frac{p}{2(p+1)} \varepsilon r_0 \right) \\ & \leq \exp \left(-c \varepsilon^{1+\frac{1}{p}} k_{p,n} \right), \end{align*} for all $4\leq p\leq c_0 \log n$ and for all $k_{p,n}^{-\frac{p}{p+1}} <\varepsilon<1$. By adjusting the constants we get: \begin{align*} P(\|X\|_p > (1+\varepsilon) I_0 )\leq C' \exp \left(-c \varepsilon^{1+\frac{1}{p}} k_{p,n} \right), \end{align*} for the whole range $0<\varepsilon <1$ and for $4\leq p \leq c_0 \log n$.
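Let us also justify the last adjustment of constants: in the complementary range $0<\varepsilon\leq k_{p,n}^{-\frac{p}{p+1}}$ the asserted bound is trivial, since \begin{align*} \varepsilon^{1+\frac{1}{p}} k_{p,n} = \varepsilon^{\frac{p+1}{p}} k_{p,n} \leq k_{p,n}^{-\frac{p}{p+1}\cdot \frac{p+1}{p}}\, k_{p,n} =1, \end{align*} so that $C' \exp \left(-c \varepsilon^{1+\frac{1}{p}} k_{p,n} \right)\geq C'e^{-c}\geq 1$ once $C'\geq e^{c}$.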
\medskip \noindent Now we turn to bounding the probability $P(\|X\|_p \leq (1-\varepsilon)I_0)$. Proposition \ref{prop: stab-ell_p} shows that $(\log I_{-r})' \geq -C^p/n$ for $0<r\leq c_1k_{p,n}$. Hence, we get: \begin{align*} P(\|X\|_p \leq (1-\varepsilon)I_0) \leq P( \|X\|_p\leq e^{-\varepsilon} I_0) \leq e^{-\varepsilon r} \left( \frac{I_0}{I_{-r}} \right)^r \leq \exp(-\varepsilon r+ r^2C^p/n), \end{align*} for all $0<r<c_1k_{p,n}$, where we have used the bound: \begin{align*} \log(I_0/I_{-r}) = -\int_0^r (\log I_{-s})' \, ds \leq \frac{C^p}{n}r, \end{align*} for $0<r<c_1k_{p,n}$. Finally, choosing $r\simeq k_{p,n}$ we see that $C^pk_{p,n}^2/n< (2eC)^p n^{4/p-1} \leq C'$ as long as $4\leq p\leq c_1' \log n$, hence we conclude: \begin{align*} P(\|X\|_p \leq (1-\varepsilon)I_0) \leq C' \exp(-c' \varepsilon k_{p,n}), \end{align*} for $0<\varepsilon <1$. $\hfill \quad \Box$ \medskip Although this concentration result improves upon the one we get by just using \eqref{eq:2.14}, it is still suboptimal. It turns out that, although the $L_2$ average of the Euclidean norm of the gradient is the right quantity to estimate for the concentration result, it should not be used to bound the growth of the high moments of the norm in this range of $p$. \subsection{Estimating centered moments} In this paragraph we study centered moments of the $\ell_p$ norm, i.e. $(\mathbb E \left| \|X\|_p- \|Y\|_p \right|^r)^{1/r}$. To this end we distinguish three cases: (a) $1\leq p\leq 2$, (b) $2<p < c_0\log n$ and (c) $c_0\log n\leq p\leq \infty$, where $c_0>0$ is a sufficiently small absolute constant. While in the first two cases we estimate the centered moments directly in terms of $n,p,r$, in the last case we have to argue differently and study the almost constant behavior of the noncentered moments. This is because the estimates break down when $p$ grows along with $n$. To overcome this obstacle we use Talagrand's $L_1-L_2$ bound.
\subsubsection{The case $1\leq p\leq 2$} In this subsection we sketch the proof of the following theorem: \begin{theorem} \label{thm:conc-1<p<2} Let $1\leq p\leq 2$. Then, one has: \begin{align}\label{eq:4.1} c_1\exp(-C_1\varepsilon^2n)\leq P\left( \left| \|X\|_p -\mathbb E\|X\|_p\right| > \varepsilon \mathbb E\|X\|_p \right) \leq C_2\exp(-c_2\varepsilon^2 n), \end{align} for $0 < \varepsilon <1$, where $C_1,c_1,C_2,c_2>0$ are absolute constants. \end{theorem} \noindent {\it Proof (Sketch).} The right-hand side inequality follows from the Gaussian concentration inequality \eqref{eq:2.14}, Proposition \ref{prop:mean-ell-p} and the fact that ${\rm Lip}(\|\cdot\|_p)=b(B_p^n)=n^{1/p-1/2}$ for $1\leq p\leq 2$. Now we focus on the left-hand side inequality. We have the following: \begin{proposition} Let $1\leq p\leq 2$. Then, we have: \begin{align*} \left( \mathbb E \left| \|X\|_p-\mathbb E\|X\|_p \right|^r \right)^{1/r} \simeq \sqrt{\frac{r}{n}} \mathbb E\|X\|_p, \end{align*} for all $r\geq 1$. \end{proposition} \noindent {\it Proof.} Indeed, the estimate \begin{align} \label{eq:4.3} \left( \mathbb E \left| \|X\|_p-\mathbb E\|X\|_p \right|^r \right)^{1/r} \leq C_3 \sqrt{\frac{r}{n}} \mathbb E\|X\|_p , \quad r\geq 1 \end{align} is well known and follows by integration by parts combined with the right-hand side estimate in \eqref{eq:4.1}. For the estimate \begin{align} \label{eq:4.4} \left( \mathbb E \left| \|X\|_p-\mathbb E\|X\|_p \right|^r \right)^{1/r} \geq c_3 \sqrt{\frac{r}{n}} \mathbb E\|X\|_p \end{align} we may apply the triangle inequality, Lemma \ref{lem:2-sided-ineq} and finally the Cauchy-Schwarz inequality to write: \begin{align*} 2\left( \mathbb E \left| \|X\|_p-\mathbb E\|X\|_p \right|^r \right)^{1/r} \geq \left( \mathbb E \left| \|X\|_p-\|Y\|_p \right|^r \right)^{1/r} \geq \frac{1}{2p} \frac{\left(\mathbb E\big|\|X\|_p^p -\|Y\|_p^p \big|^{r/2}\right)^{2/r}}{\left( \mathbb E \|X\|_p^{r(p-1)}\right)^{1/r} }.
\end{align*} Note that \eqref{eq:4.3} already implies $\left(\mathbb E\|X\|_p^s\right)^{1/s} \leq 2C_3 \mathbb E\|X\|_p\simeq n^{1/p}$ for all $1\leq s\leq n$. Moreover, we have: \begin{align*} \left(\mathbb E \left| \|X\|_p^p -\|Y\|_p^p \right|^s\right)^{1/s} \geq \mathbb E\big| |X_1|^p-|Y_1|^p \big| \cdot \left( \mathbb E_\varepsilon \left| \sum_{i=1}^n \varepsilon_i \right|^s \right)^{1/s}\simeq \sqrt{sn}, \end{align*} where we have used the fact that the joint distribution of $(\varepsilon_i | |X_i|^p-|Y_i|^p |)_i$ is the same as that of $(|X_i|^p-|Y_i|^p)_i$, Jensen's inequality and, at the last step, the fact that $\left( \mathbb E_\varepsilon \left| \sum_{i=1}^n \varepsilon_i \right|^s \right)^{1/s} \simeq \sqrt{sn}$ for $1\leq s\leq n$ (see e.g. \cite{M}). Putting these estimates together we see: \begin{align*} \left( \mathbb E \left| \|X\|_p-\mathbb E\|X\|_p \right|^r \right)^{1/r} \geq c_4 \frac{\sqrt{rn}}{n^{1-1/p}} \simeq \sqrt{ \frac{r}{n}} \mathbb E\|X\|_p, \end{align*} which completes the proof. $\hfill \quad \Box$ \smallskip Now we turn to proving the lower bound in the probabilistic estimate \eqref{eq:4.1}: For every $\frac{c_3}{2}n^{-1/2} <\varepsilon < \frac{c_3}{2}$ choose $r\in [1,n]$ so that $\varepsilon= \frac{c_3}{2}\sqrt{r/n}$, and write: \begin{align*} P \left( \big| \|X\|_p -\mathbb E \|X\|_p \big| >\varepsilon \mathbb E\|X\|_p \right) & \geq P \left( \big| \|X\|_p-\mathbb E\|X\|_p \big| \geq \frac{1}{2} \left( \mathbb E\big| \|X\|_p-\mathbb E\|X\|_p \big|^r \right)^{1/r} \right) \\ &= P(\zeta \geq 2^{-r} \mathbb E\zeta) \geq (1-2^{-r})^2 \frac{(\mathbb E\zeta)^2}{\mathbb E\zeta^2}, \end{align*} by Lemma \ref{lem: PZ-ineq}, where $\zeta := \big| \|X\|_p-\mathbb E\|X\|_p \big|^r$. Employing the estimates \eqref{eq:4.3} and \eqref{eq:4.4} we conclude: \begin{align*} P \left( \big| \|X\|_p -\mathbb E \|X\|_p \big| >\varepsilon \mathbb E\|X\|_p \right) \geq c_5 e^{-C_5r}, \end{align*} as required.
$\hfill \quad \Box$ \subsubsection{The case $2<p \leq c_0 \log n$} It is clear from the argument of the previous paragraph that in order to obtain sharp concentration inequalities it is enough to get sharp estimates for the centered moments: $\left( \mathbb E \left| \|X\|_p-\|Y\|_p \right|^r \right)^{1/r}.$ In view of Lemma \ref{lem:2-sided-ineq} it is also obvious that estimates for the centered moments $\left( \mathbb E \big| \|X\|_p^p- \|Y\|_p^p \big|^r \right)^{1/r}$ will provide estimates for the moments $\left( \mathbb E \left| \|X\|_p-\|Y\|_p \right|^r \right)^{1/r}$. Note that in order to estimate the centered moments from above one may also employ Theorem \ref{thm:Pis-ineq} in the form of an $(r,r)$-Poincar\'{e} inequality \eqref{eq: 2.9}. We use this method in the next section in order to derive the optimal dependence on $\varepsilon$ in the critical dimension of randomized Dvoretzky. Here we shall prove the following result (see \cite{Naor} for a different approach): \begin{proposition} \label{prop:ctrd-p-pwr} Let $2< p <\infty$. Then, we have: \begin{align} \left( \mathbb E \big| \|X\|_p^p- \|Y\|_p^p \big|^r \right)^{1/r} \simeq \sigma_p^p \max \left\{ 2^{p/2} ( r n )^{1/2} , \, r^{p/2} n^{1/r} \right\} , \end{align} for all $r\geq 2$. \end{proposition} \noindent {\it Proof.} Note that if $X=(X_1,\ldots,X_n)$ is a Gaussian random vector and $Y$ is an independent copy of it, then the variables $\xi_i:= |X_i|^p-|Y_i|^p$ are i.i.d. and the functions $t\mapsto P(|\xi_i| >t)$ are log-convex on $(0,\infty)$ by Lemma \ref{lem:gauss^p}.
Then we may apply the main result from \cite{HMSO} to get: \begin{align*} \left( \mathbb E \big| \|X\|_p^p -\|Y\|_p^p \big|^r\right)^{1/r} \equiv \left\| \sum_{i=1}^n \xi_i \right\|_r &\simeq \left(\sum_{i=1}^n\|\xi_i\|_r^r \right)^{1/r} + \sqrt{r} \left( \sum_{i=1}^n \| \xi_i\|_2^2 \right)^{1/2} \\ &\simeq n^{1/r} \|\xi_1\|_r +\sqrt{rn} \|\xi_1\|_2 \\ &\simeq n^{1/r} r^{p/2} \sigma_p^p + \sqrt{rn} 2^{p/2} \sigma_p^p, \end{align*} where we have used Lemma \ref{lem:gauss^p} again. The proof is complete. $\hfill \quad \Box$ \medskip Now we are ready to prove the following: \begin{theorem} \label{thm:conc-p>2-I-p} Let $n>C$ and let $2<p \leq c_0 \log n$. Then, we have: \begin{align*} P\left( \left| \|X\|_p - \mathbb E \|X\|_p \right| > \varepsilon \mathbb E\|X\|_p \right)\leq C \exp \left(-c \min \left\{ \frac{\varepsilon^2 p^2 n}{2^p}, (\varepsilon n)^{2/p} \right\} \right), \end{align*} for all $0<\varepsilon<1/p$, where $C,c,c_0>0$ are absolute constants. \end{theorem} \noindent {\it Proof.} Define $\alpha(n,p,r):= \max\{ 2^{p/2} (rn)^{1/2}, r^{p/2}n^{1/r} \}, \; r\geq 2$. Note that for fixed $n,p$ the map $r \mapsto \alpha(n,p,r)$ is strictly increasing with inverse $A(n,p,s) \simeq \min\{ \frac{s^2}{2^p n} , s^{2/p}\}$. Then, Proposition \ref{prop:ctrd-p-pwr} shows that: \begin{align} \label{eq:equiv-p-moms} c_1 \sigma_p^p \alpha(n,p,r) \leq \left( \mathbb E \big| \|X\|_p^p- \mathbb E\|X\|_p^p \big|^r \right)^{1/r} \leq C_1 \sigma_p^p \alpha(n,p,r), \end{align} for all $r\geq 2$. Applying Markov's inequality and choosing $r$ so that $C_1\alpha(n,p,r)=tn/e$, that is, $r=A(n,p,etn/C_1)$, we get: \begin{align*} P\left( \left| \|X\|_p^p -\mathbb E \|X\|_p^p \right| > t \mathbb E\|X\|_p^p \right) \leq \left( \frac{C_1 \alpha(n,p,r) }{t n} \right)^r &= \exp \left( - A(n,p, etn/C_1)\right) \\ &\leq \exp \left(-c_2 \min\left\{ \frac{t^2 n}{2^p}, (tn)^{2/p} \right \} \right) , \end{align*} provided that $etn/C_1 > \alpha(n,p,2)\simeq 2^{p/2}n^{1/2}$.
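Let us note why the restriction on $t$ can then be dropped at the cost of a constant factor: if $etn/C_1\leq \alpha(n,p,2)$, then by the monotonicity of $\alpha$ in $r$ we have \begin{align*} A(n,p, etn/C_1) \leq A\big(n,p,\alpha(n,p,2)\big) = 2, \quad \text{hence} \quad \exp \left(-c_2 \min\left\{ \frac{t^2 n}{2^p}, (tn)^{2/p} \right\} \right) \geq e^{-2}, \end{align*} after adjusting $c_2$ if necessary, so in this complementary range the bound $e^2\exp(-c_2\min\{\cdot\})\geq 1$ holds trivially.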
It follows that \begin{align*} P\left( \left| \|X\|_p^p -\mathbb E \|X\|_p^p \right| > t \mathbb E\|X\|_p^p \right) \leq e^2 \exp \left(-c_2 \min\left\{ \frac{t^2 n}{2^p}, (tn)^{2/p} \right\} \right),\end{align*} for all $t>0$. Now fix $0< \varepsilon<1/p$. Then, we may write: \begin{align*} P \left( \|X\|_p < (1-\varepsilon) ( \mathbb E\|X\|_p^p)^{1/p} \right) \leq P\left( \|X\|_p^p < \left(1- \frac{\varepsilon p}{2} \right) \mathbb E\|X\|_p^p \right) \leq e^2 \exp \left(-c_3 \min \left\{ \frac{\varepsilon^2 p^2 n}{2^p}, (\varepsilon n)^{2/p} \right\}\right), \end{align*} by the previous estimate. Arguing similarly we show the upper estimate. Thus, \begin{align*} P\left( \left| \|X\|_p - (\mathbb E \|X\|_p^p)^{1/p} \right| > \varepsilon (\mathbb E\|X\|_p^p)^{1/p} \right)\leq e^2 \exp \left(-c_4 \min \left\{ \frac{\varepsilon^2 p^2 n}{2^p}, (\varepsilon n)^{2/p} \right\}\right), \end{align*} for all $0<\varepsilon<1/p$. The result follows. $\hfill \quad \Box$ \medskip \begin{remark}\rm For fixed $2<p< \infty$ the estimate can be reversed. The argument is similar to that in Theorem \ref{thm:conc-1<p<2}. Let $\frac{ 2^{p/2} }{2c_1 n^{1/2} } <t <1$ and choose $r\geq 2$ with $\alpha(n,p,r) = 2c_1 n t$. Then, in view of the lower estimate in \eqref{eq:equiv-p-moms} we get: \begin{align*} P \left(\big| \|X\|_p^p-\mathbb E\|X\|_p^p \big| > t \mathbb E\|X\|_p^p \right) &\geq P \left(\big| \|X\|_p^p-\mathbb E\|X\|_p^p \big| > \frac{1}{2} \left( \mathbb E \left| \|X\|_p^p -\mathbb E \|X\|_p^p \right|^r \right)^{1/r} \right) \\ &\geq (1-2^{-r})^2 \frac{\left( \mathbb E \big| \|X\|_p^p-\mathbb E\|X\|_p^p \big|^r \right)^2}{\mathbb E \big| \|X\|_p^p-\mathbb E\|X\|_p^p \big|^{2r} } \\ & \geq \frac{1}{4} e^{-c_5 p r} \geq \frac{1}{4} \exp\left(-c_6 p A(n, p, 2c_1nt) \right). 
\end{align*} It follows, as before, that: \begin{align*} P \left(\big| \|X\|_p-(\mathbb E\|X\|_p^p)^{1/p} \big| > \varepsilon (\mathbb E\|X\|_p^p)^{1/p} \right) \geq c_7\exp\left(-c_8 p A(n,p, 2c_1np\varepsilon) \right), \end{align*} for $ \frac{2^{p/2}}{2c_1p n^{1/2}} < \varepsilon <1/p$. Next recall that Proposition \ref{prop: stab-ell_p} implies $I_p/I_1 \leq 1+ \frac{2^{p/2}}{4c_1p n^{1/2}}$ for $p\leq c_0\log n$, where $I_r\equiv I_r(\gamma_n,B_p^n)$. Thus, we may replace $I_p$ by $I_1$ in the above concentration estimate. This yields the following two-sided estimate: \begin{proposition} \label{prop:conc-p>2-u-l-b} For all sufficiently large $n$ and for $2<p<c_0\log n$ one has: \begin{align} c\exp\left( -C p \theta(n,p,\varepsilon) \right) \leq P\left( \left| \|X\|_p - \mathbb E \|X\|_p \right| > \varepsilon \mathbb E\|X\|_p \right) \leq C \exp \left(-c \theta(n,p,\varepsilon) \right), \end{align} for all $0<\varepsilon<1/p$, where $\theta(n,p,\varepsilon) := \min \left\{ \frac{p^2 \varepsilon^2 n}{2^p}, (\varepsilon n)^{2/p}\right\}$ and $C, c, c_0>0$ are absolute constants. \end{proposition} \end{remark} \noindent {\it Note.} Let us mention that the extra factor $p$ in the exponent in the lower estimate can be removed if we restrict the range to $p^{-1}2^{p/2}n^{-1/2} \lesssim \varepsilon \lesssim p^{-1}2^{p/2} n^{-\frac{p-2}{2(p-1)} }$. \subsubsection{The case $c_0\log n< p\leq \infty$} In this subsection we deal with large values of $p$ relative to the dimension, namely $p\gtrsim \log n$. We have the following: \begin{theorem}\label{thm:stability-r-means} Let $4< p\leq \infty$. Then, for any $0< r <s \leq c_1\sqrt{k_{p,n} \log n}$ we have: \begin{align*} \frac{I_s(\gamma_n,B_p^n)}{I_r(\gamma_n,B_p^n)} \leq \exp \left( \frac{c_2(2s-r)}{k_{p,n} \log n} \right), \quad \frac{I_{-s} (\gamma_n,B_p^n)}{I_{-r}(\gamma_n,B_p^n)} \geq \exp \left( -\frac{c_2(2s-r)}{k_{p,n} \log n} \right), \end{align*} where $c_1,c_2>0$ are absolute constants.
\end{theorem} \noindent {\it Proof.} Set $I_s\equiv I_s(\gamma_n, B_p^n)$ and let $f(x):=\|x\|_p^r$ for some $r\neq 0$. If $a= a_i := \|\partial_i f\|_{L_1(\gamma_n)} $ we get: \begin{align} \label{eq:L-1-conc} a_i = \frac{|r|}{n} \int_{\mathbb R^n} \|x\|_{p-1}^{p-1} \|x\|_p^{r-p}\, d\gamma_n(x) \leq \frac{|r|}{n^{1/q}}I_{r-1}^{r-1}, \end{align} where we have used \eqref{eq: Holder - p-norms}. Similarly, for $A=A_i:= \|\partial_i f\|_{L_2(\gamma_n)}$ we have that: \begin{align} \label{eq:L-2-conc} \frac{|r|}{n^{1/q}} I_{2r-2}^{r-1} \leq A= \frac{|r|}{n^{1/2}} \left( \int_{\mathbb R^n} \|x\|_p^{2r-2p} \|x\|_{2p-2}^{2p-2} \, d\gamma_n(x) \right)^{1/2} \leq \frac{|r|}{n^{1/2}} I_{2r-2}^{r-1}. \end{align} We apply Theorem \ref{thm:Talagrand bd} to $f$ to obtain: \begin{align*} {\rm Var}_{\gamma_n}(f) \leq C_1 n \frac{A^2}{ 1+\log(A/a)} . \end{align*} The function $t \mapsto \frac{t^2}{1+\log (t/a)}, \; t>a$ is increasing, thus \eqref{eq:L-1-conc} and \eqref{eq:L-2-conc} imply that: \begin{align} \label{eq: 4.35} I_{2r}^{2r}-I_r^{2r} = {\rm Var}_{\gamma_n} (f) \leq C_1 r^2 \frac{I_{2r-2}^{2r-2}}{1+ \log \left(n^{1/q-1/2} \left(\frac{I_{2r-2}}{I_{r-1}} \right)^{r-1} \right) } \leq C_2 r^2 \frac{I_{2r-2}^{2r-2}}{\log n}, \end{align} for all $r\neq 0$, since $1\leq q<4/3$ and $\log (I_{2r-2}/I_{r-1}) \geq 0$. \smallskip \noindent {\it Claim.} For $r > - k_{p,n}, \; r\neq 0$ we have: \begin{align*}I_{2r-2}^{2r-2} \leq C_3 I_{2r}^{2r} / k_{p,n}. \end{align*} We distinguish three cases: \begin{itemize} \item For $0<r<1$ we have: $I_{2r-2}^{2r-2} =\frac{I_{2r-2}^{2r}}{I_{2r-2}^2}\leq c_2' I_1^{-2} I_{2r-2}^{2r} \leq \frac{c_2'}{k_{p,n} } I_{2r}^{2r}$. \item For $r\geq 1$ we may write: $I_{2r-2}^{2r-2} = \frac{I_{2r-2}^{2r} }{ I_{2r-2}^2 } \leq c_3 \frac{ I_{2r}^{2r} }{I_1^2} = \frac{c_3}{k_{p,n}} I_{2r}^{2r}$, since $I_1\simeq I_0$.
\item Finally, for $- k_{p,n} < r<0$ we have: $I_{2r-2}^{2r-2} \leq \frac{c_4}{I_1^2} I_{2r}^{2r} =\frac{c_4}{k_{p,n} } I_{2r}^{2r}$, by Lemma \ref{lem:reduc-neg-moms}. \end{itemize} Thus, \eqref{eq: 4.35} yields: \begin{align} \label{eq:recursive-ineq} I_{2r}^{2r} -I_r^{2r} \leq Cr^2 \frac{I_{2r}^{2r}}{k_{p,n} \log n}, \end{align} for $r> - k_{p,n}, \; r\neq 0$. We only prove the stability for the positive moments (the negative moments are treated similarly): As long as $0 < r < \sqrt{k_{p,n} \log n /C}$ we may write \begin{align*} I_{2r}^{2r} \leq \left( 1+\frac{Cr^2}{k_{p,n} \log n} \right) I_r^{2r}. \end{align*} Iterating the last inequality we find: \begin{align*} \frac{I_{2^mr}}{I_r} \leq \exp \left( C\sum_{j=0}^{m-1} \frac{2^j r}{k_{p,n} \log n}\right) \leq \exp\left( \frac{C(2^mr-r)}{k_{p,n} \log n}\right), \end{align*} for $m=1,2,\ldots$ as long as $2^mr \leq \sqrt{k_{p,n} \log n/C}$. The result follows. $\hfill \quad \Box$ \medskip The following corollary is immediate: \begin{corollary} \label{cor:conc-large-p} Let $ c_0\log n<p\leq \infty$. Then, one has: \begin{align*} P \left( \big| \|X\|_p -\mathbb E\|X\|_p \big| >\varepsilon \mathbb E\|X\|_p \right) \leq C \exp \left(-c \varepsilon \log n \right), \end{align*} for all $\varepsilon \in (0,1)$, where $C,c,c_0>0$ are absolute constants. \end{corollary} \noindent {\it Proof.} Let $K:=k_{p,n} \log n$. Using Markov's inequality and Theorem \ref{thm:stability-r-means} we may write: \begin{align*} P \left( \|X\|_p\geq (1+\varepsilon) I_1 \right)\leq P \left( \|X\|_p \geq e^{\varepsilon/2} I_1 \right)\leq e^{-\varepsilon r/2} \left(\frac{I_r}{I_1}\right)^r \leq \exp(-\varepsilon r/2+c_2r^2/K), \end{align*} for all $0<r<c_1\sqrt{K}$. The choice $r\simeq \sqrt{K}$ yields the one-sided estimate: \begin{align*} P \left( \|X\|_p >(1+\varepsilon) I_1 \right)\leq C_1\exp \left( -c_1' \varepsilon \sqrt{K} \right).
\end{align*} Working similarly with the probability $P( \|X\|_p < (1-\varepsilon) I_1)$ and taking into account the fact that $k(B_p^n) \simeq \log n$ for $p \gtrsim \log n$, we conclude the asserted estimate. $\hfill \quad \Box$ \medskip Summarizing the results of this paragraph (by taking into account Theorem \ref{thm:conc-1<p<2}, Theorem \ref{thm:conc-p>2-I-p} and Proposition \ref{prop: weak-conc} and the variance estimates from Section 3) we obtain a concentration inequality which interpolates between the concentration estimates for fixed $p\geq 1$ and $p=\infty$: \begin{theorem} \label{thm:conc-full} For all large enough $n$ and for any $1\leq p\leq \infty$ one has: \begin{align*} P\left( \big | \|X\|_p-\mathbb E\|X\|_p \big| > \varepsilon \mathbb E\|X\|_p \right) \leq C_1\exp ( -c_1\beta(n,p,\varepsilon) ), \end{align*} for every $0<\varepsilon <1$, where $\beta(n,p,\varepsilon)$ is defined as follows: \begin{align*} \beta(n,p,\varepsilon) = \left\{ \begin{array}{lll} \varepsilon^2 n, & 1\leq p\leq 2 \\ \max \left\{ \min \left\{ p^2 2^{-p} \varepsilon^2n , (\varepsilon n)^{2/p} \right \} , \varepsilon pn^{2/p} \right\}, & 2< p\leq c_0 \log n \\ \varepsilon \log n, & p> c_0 \log n \end{array}\right. , \end{align*} where $c_0\in (0,1)$ and $C_1,c_1>0$ are suitable absolute constants. Furthermore, for $p\leq c_0 \log n$ we have the estimate: \begin{align*} P\left( \big | \|X\|_p-\mathbb E\|X\|_p \big| > \varepsilon \mathbb E\|X\|_p \right) \leq \exp \left( -\log\left(1 +c_1\frac{p^2}{2^p} \varepsilon^2n \right) \right), \end{align*} for every $\varepsilon\in (0,1)$. \end{theorem} \section{The critical dimension in random Dvoretzky for $\ell_p^n$} In this paragraph we study the critical dimension $k(n,p,\varepsilon)$ (and in particular the dependence on $\varepsilon$) in the random version of Dvoretzky's theorem for $\ell_p^n$ spaces. Our method is inspired by Schechtman's approach in \cite{Sch1}.
The key point is a distributional inequality for rectangular matrices with independent standard Gaussian entries. In \cite{Sch1} it is proved that if $G=(g_{ij})_{i,j=1}^{n,k}$ is a Gaussian matrix, then the process $(\|Gx\|)_{x\in S^{k-1}}$ is sub-Gaussian with constant $b=\max_{\theta\in S^{n-1}}\|\theta\|$. The proof of \cite[Lemma]{Sch1} is based on an orthogonal splitting, combined with a conditioning argument and inequality \eqref{eq:2.14}. Here we use similar ideas to prove a functional inequality which generalizes \cite[Lemma]{Sch1}. Once again, the advantage of this new inequality is that it involves $\| \nabla f\|_2$ instead of the Lipschitz constant of $f$. Our result reads as follows: \begin{theorem}\label{thm:Sch-funct-ineq} Let $a,b\in S^{k-1}$ and let $G=(g_{ij})_{i,j=1}^{n,k}$ be a random matrix with i.i.d. standard Gaussian entries. If $f:\mathbb R^n\to \mathbb R$ is $C^1$-smooth, then we have: \begin{align*} \left( \mathbb E \big| f(Ga)-f(Gb) \big|^r \right)^{1/r} \leq \pi \sigma_r \|a-b\|_2 \left(\mathbb E \|\nabla f(W)\|_2^r \right)^{1/r}, \end{align*} for all $r\geq 1$, where $W\sim N(\mathbf 0, I_n)$. \end{theorem} \noindent {\it Proof.} Fix $a,b\in S^{k-1}$ and assume without loss of generality that $a\neq \pm b$. Define $w:=\frac{a+b}{2}$ and note that since $\|a\|_2=\|b\|_2$ the vector $u:=a-w$ is perpendicular to $w$. If we set $X:=G(u)$ and $Z:=G(w)$ then $X, Z$ are independent Gaussian random vectors in $\mathbb R^n$ with $X \sim N({\bf 0}, \|u\|_2^2I_n)$, $Z \sim N({\bf 0}, \|w\|_2^2I_n)$ and $G(a)=Z+X$ while $G(b)=Z-X$. Thus, we may write: \begin{align*} \mathbb E \left| f(Ga) -f(Gb) \right|^r =\mathbb E_Z\mathbb E_X \left| f(Z+X) -f(Z-X) \right|^r. \end{align*} For $x, z\in \mathbb R^n$ we define $F(x,z):=f(z+x)-f(z-x)$. Note that for fixed $z$ we have $\mathbb E_X F(X,z)=0$ since $X$ is a symmetric random vector.
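In the computation below we will also use the following elementary identity, where $\varrho>0$ and $v\in \mathbb R^n$ are arbitrary: if $Y\sim N(\mathbf 0, \varrho^2 I_n)$, then $\langle v, Y\rangle \sim N(0,\varrho^2\|v\|_2^2)$ and therefore \begin{align*} \left( \mathbb E \left| \langle v, Y\rangle \right|^r \right)^{1/r} = \sigma_r\, \varrho\, \|v\|_2, \qquad \sigma_r := \left( \mathbb E|g|^r \right)^{1/r}, \; g\sim N(0,1). \end{align*} In our setting $Y$ will be an independent copy of $X$, so that $\varrho=\|u\|_2=\frac{1}{2}\|a-b\|_2$.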
Applying Theorem \ref{thm:Pis-ineq} for $\phi(t)=|t|^r,\; r\geq 1$ and $x\mapsto F(x,z)$ instead of $f$ we derive: \begin{align*} \mathbb E_X|F(X,z)|^r = \mathbb E_X \left| f(z+X) -f(z-X) \right|^r &\leq \left(\frac{\pi}{2}\right)^r \mathbb E_{X,Y} \left| \langle \nabla f(z+X) ,Y\rangle + \langle \nabla f(z-X),Y\rangle \right|^r \\ &\leq \pi^r \mathbb E_{X,Y} \left| \langle \nabla f(z+X), Y\rangle \right|^r\\ &\leq \pi^r \|a-b\|_2^r \sigma_r^r \, \mathbb E_X \left \|\nabla f(z+X) \right\|_2^r. \end{align*} Moreover, note that $W:=X+Z\sim N({\bf 0},I_n)$, thus we get: \begin{align*} \mathbb E \left| f(Ga) -f(Gb) \right|^r = \mathbb E|F(X,Z)|^r &\leq \pi^r \|a-b\|_2^r \sigma_r^r \, \mathbb E_{X,Z} \left \|\nabla f(Z+X) \right\|_2^r, \end{align*} as required. $\hfill \quad \Box$ \smallskip \begin{remarks}\rm 1. If we assume in addition that $f$ is $L$-Lipschitz, then applying Markov's inequality we may conclude the following more general form of \cite[Lemma]{Sch1}: \begin{align*} {\rm Prob} \left( \big| f(G(a))-f(G(b)) \big| > t \right) \leq 2 \exp\left(-\frac{2}{\pi^2} \frac{t^2}{L^2 \|a-b\|_2^2}\right), \quad t>0. \end{align*} \smallskip \noindent 2. The same proof provides the following variant of Theorem \ref{thm:Pis-ineq} which we state here for future reference: \begin{theorem} Let $\phi:\mathbb R\to \mathbb R$ be a convex function and let $f:\mathbb R^n\to \mathbb R$ be $C^1$-smooth. If $G=(g_{ij})_{i,j=1}^{n,k}$ is a Gaussian matrix and $a,b\in S^{k-1}$, then we have: \begin{align*} \mathbb E \phi\left( f(Ga)-f(Gb)\right)\leq \mathbb E \phi \left( \frac{\pi}{2} \|a-b\|_2 \langle \nabla f(X), Y\rangle \right), \end{align*} where $X,Y$ are independent copies of a Gaussian $n$-dimensional random vector. \end{theorem} The proof is left as an exercise to the interested reader (see also \cite{PV-dvo}). \smallskip \noindent 3. For $a,b\in S^{k-1}$ with $\langle a,b\rangle=0$ the above statements are reduced to the inequalities we discussed in Section 2.
\end{remarks} The following result is an application of Theorem \ref{thm:Sch-funct-ineq} for the $\ell_p$ norm. \begin{theorem} \label{thm:main-5-1} Let $n$ be large enough and let $2<p <c_0\log n$. Let $a,b \in S^{k-1}$ and let $G=(g_{ij})_{i,j=1}^{n,k}$ be a random matrix with i.i.d. standard Gaussian entries. Then, \begin{align*} \left( \mathbb E \left| \|Ga\|_p-\|Gb\|_ p\right|^r \right)^{1/r} \lesssim \|a-b\|_2 \psi(n,p,r) \mathbb E\|Z\|_p, \end{align*} for $r\geq 2$, where $\psi(n,p,r)$ is defined as: \begin{align*} \psi(n,p,r):=\sqrt{r} \min \left\{ \frac{1}{\sigma_p n^{1/p}}, \frac{\sigma_{2p-2}^{p-1}}{n^{1/2}\sigma_p^p } \left(1+\frac{pr}{\sigma_{2p-2}^2n^{\frac{1}{p-1}}} \right)^{\frac{p-1}{2}} \right\} . \end{align*} Moreover, for any $\varepsilon >0$ one has: \begin{align*} P \left( \left| \|Ga\|_p-\|Gb\|_ p\right| > \varepsilon \mathbb E\|Z\|_p \right) \leq C \exp \left (-c \tau \left( n,p,\frac{\varepsilon}{\| a-b\|_2} \right) \right), \end{align*} where \begin{align*} \tau(n,p,t) := \max \left\{ t^2 pn^{2/p} , \min \left\{ \frac{t^2 n}{C^p}, (t n)^{2/p} \right \} \right \}, \quad t>0 \end{align*} and $C,c>0$ are absolute constants. \end{theorem} \noindent {\it Proof.} In view of Theorem \ref{thm:Sch-funct-ineq} we need an upper estimate for the quantity: \begin{align} \label{eq:main-5-1-1} \left( \mathbb E \big\| \nabla \|X\|_p \big\|_2^r \right)^{1/r} = \left( \mathbb E \frac{\|X\|_{2p-2}^{r(p-1)} }{\|X\|_p^{r(p-1)} } \right)^{1/r} \leq \frac{I_{r(p-1)}^{p-1}(\gamma_n,B_{2p-2}^n) }{I_{-r(p-1)}^{p-1}(\gamma_n,B_p^n)}, \end{align} where in the last step we have used Proposition \ref{prop:Harris}. A standard application of Lemma \ref{lem:log-sob-moms} (we use \eqref{eq: 2.19}) yields: \begin{align} \label{eq:main-5-1-2} \frac{I_{r(p-1)}^{p-1}(\gamma_n,B_{2p-2}^n) }{n^{1/2} \sigma_{2p-2}^{p-1}} = \frac{I_{r(p-1)}^{p-1}(\gamma_n,B_{2p-2}^n) }{I_{2p-2}^{p-1}(\gamma_n,B_{2p-2}^n)} \leq \left(1+\frac{(p-1)(r-2)}{\sigma_{2p-2}^2n^{\frac{1}{p-1}}} \right)^{\frac{p-1}{2}}.
\end{align} Moreover, from Proposition \ref{prop:small-ball} we see that: \begin{align} \label{eq:main-5-1-3} \frac{I_{-r(p-1)}^{p-1} (\gamma_n,B_p^n)}{n^{1-1/p} \sigma_p^{p-1} } \gtrsim \frac{I_{-r(p-1)}^{p-1} (\gamma_n,B_p^n)}{I_p^{p-1}(\gamma_n,B_p^n)} \gtrsim 1, \end{align} for $r \leq c_1k(B_p^n)$. Plugging estimates \eqref{eq:main-5-1-2} and \eqref{eq:main-5-1-3} in \eqref{eq:main-5-1-1} we find: \begin{align*} \left( \mathbb E \big\| \nabla \|X\|_p \big\|_2^r \right)^{1/r} \lesssim \frac{ (\sigma_{2p-2} / \sigma_p)^{p-1} }{n^{1/2-1/p} } \left(1+\frac{p(r-2)}{\sigma_{2p-2}^2n^{\frac{1}{p-1}}} \right)^{\frac{p-1}{2}}, \end{align*} for $2\leq r\leq c_1k(B_p^n)$. Taking into account that $\big\| \nabla \|X\|_p \big\|_2 \leq 1$ a.s. we conclude the first assertion. For the distributional inequality we argue as in the proof of Theorem \ref{thm:conc-p>2-I-p}, i.e. we use Markov's inequality and the previous estimate. $\hfill \quad \Box$ \medskip \noindent {\bf The chaining method: Dudley-Fernique decomposition.} For each $j=1,2,\ldots$ consider $\delta_j$-nets ${\cal N}_j$ on $S^{k-1}$ with cardinality $|{\cal N}_j| \leq (3/\delta_j)^k$ (see \cite[Lemma 2.6]{MS}). Note that for any $\theta\in S^{k-1}$ and for all $j$ there exists $u_j\in {\cal N}_j$ with $\|\theta-u_j\|_2\leq \delta_j$, and by the triangle inequality it follows that $\|u_j-u_{j-1}\|_2\leq \delta_j+\delta_{j-1}$. Moreover, if we assume that $\delta_j\to 0$ as $j\to \infty$ and $(t_j)$ is a sequence of numbers with $t_j\geq 0$ and $\sum_j t_j\leq 1$ then, for any $\varepsilon>0$ we have the following: \smallskip \noindent {\it Fact.} Set $E:=\mathbb E\|X\|$.
If we define the following sets: \begin{align*} A:= \left\{\omega \mid \, \exists \theta\in S^{k-1} : \big| \| G_\omega(\theta)\|-E \big| > \varepsilon E \right\}, \\ A_1:= \left \{ \omega \mid \exists u_1\in {\cal N}_1 : \big| \|G_\omega(u_1)\| -E \big| > t_1\varepsilon E \right\} \nonumber \end{align*} and for $j\geq 2$ \begin{align*} A_j:= \left \{\omega \mid \exists u_j\in {\cal N}_j, u_{j-1}\in {\cal N}_{j-1} : \left| \|G_\omega(u_j)\|- \|G_\omega(u_{j-1})\| \right| > t_j \varepsilon E \right \}, \end{align*} then one has: $A\subseteq \bigcup_{j=1}^\infty A_j$ (see also \cite{Sch1}). \medskip Now we apply the above chaining method for the $\ell_p$ norm with $p>2$ and we employ the distributional inequality of Theorem \ref{thm:main-5-1} to prove our second main result: \begin{theorem} [Random Dvoretzky for $\ell_p^n$] \label{thm:rdm-dvo-p>2} For all large $n$, for any $1\leq p \leq \infty$ and for every $0<\varepsilon<1$ there exists $k(n,p,\varepsilon)$ with the following property: the random $k$-dimensional subspace of $\ell_p^n$ with $k\leq k(n,p,\varepsilon)$ is $(1+\varepsilon)$-Euclidean with probability greater than $1-C\exp(-c k(n,p,\varepsilon) )$, where $k(n,p,\varepsilon)$ is estimated as follows: \begin{itemize} \item [\rm (i)] For $1\leq p<2$ we have: \begin{align*} k(n,p,\varepsilon) \gtrsim \varepsilon^2 n, \end{align*} \item [\rm (ii)] For $2<p < c_0 \log n$ we have: \begin{align*} k(n, p, \varepsilon) \gtrsim \left\{ \begin{array}{lll} (Cp)^{-p} \varepsilon^2 n, & 0<\varepsilon \leq (Cp)^{p/2} n^{-\frac{p-2}{2(p-1)}} \\ \frac{1}{p} (\varepsilon n)^{2/p} , & (Cp)^{p/2} n^{-\frac{p-2}{2(p-1)}} < \varepsilon \leq 1/p\\ \varepsilon pn^{2/p}/ \log\frac{1}{\varepsilon} , & 1/p < \varepsilon <1. \end{array} \right. 
\end{align*} Moreover, for $p <c_0 \log n$ we have: \begin{align*} k(n,p,\varepsilon) \gtrsim \log n/ \log \frac{1}{\varepsilon}, \end{align*} \item [\rm (iii)] For $c_0 \log n <p \leq \infty$ we have: \begin{align*} k(n, p, \varepsilon) \gtrsim \varepsilon \log n /\log \frac{1}{\varepsilon}, \end{align*} \end{itemize} where $C, c, c_0>0$ are absolute constants. \end{theorem} \noindent {\it Sketch of proof.} For $1\leq p<2$ the assertion follows from Theorem \ref{thm: VMil} and the fact that $k(B_p^n) \simeq n$. Let $2<p<c_0\log n$ and fix $0<\varepsilon<1/p$. Choose $\delta_j=e^{-j}$, $t_j = s_p^{-1} j^{p/2}e^{-j}$, with $s_p:=\sum_{j=1}^\infty j^{p/2}e^{-j}$. Then, according to the previous chaining method we may write: \begin{align*} P(A) &\leq C|{\cal N}_1| \exp( -c_1 \tau(n,p,\varepsilon t_1) ) + C\sum_{j=2}^\infty |{\cal N}_{j-1}| \cdot |{\cal N}_j| \exp (- c_1\tau(n,p, \varepsilon t_je^j/4)) \\ &\leq C\sum_{j=1}^\infty (3 e^{j})^{2k} \exp(-c_2 \tau(n,p, s_p^{-1} \varepsilon j^{p/2}) ), \end{align*} where $\tau(n,p,t)$ was defined in Theorem \ref{thm:main-5-1}, hence: \begin{align*} \tau(n,p, t) \simeq \min \left\{ \frac{t^2n}{C_1^p}, (tn )^{2/p}\right\}, \, t>0. \end{align*} Note that \begin{align*} \tau(n, p, s_p^{-1} \varepsilon j^{p/2}) \gtrsim j \min\left\{ \frac{\varepsilon^2 n}{(Cp)^p}, \frac{(\varepsilon n)^{2/p}}{p} \right\}=: j k(n, p, \varepsilon), \end{align*} where we have used the fact that $s_p \lesssim \sqrt{p} (\frac{p}{2e})^{p/2}$. Therefore, we have: \begin{align*} P(A) &\leq C \sum_{j=1}^\infty \exp \left( c_3j k- c_4 j k(n,p,\varepsilon) \right) \\ &\leq \sum_{j=1}^\infty \exp \left( - \frac{c_4}{2} j k(n,p,\varepsilon) \right) \leq C' \exp \left( - \frac{c_4}{2} k(n,p,\varepsilon) \right), \end{align*} as long as $k\leq \frac{c_4}{2c_3} k(n,p, \varepsilon)$.
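For completeness, the bound $s_p\lesssim \sqrt{p}\, (p/2e)^{p/2}$ used above follows by comparing the series with a Gamma integral: since $x\mapsto x^{p/2}e^{-x}$ is unimodal, \begin{align*} s_p=\sum_{j=1}^\infty j^{p/2} e^{-j} \leq \int_0^\infty x^{p/2}e^{-x}\, dx + \max_{x>0} x^{p/2}e^{-x} \leq 2\,\Gamma\left( \frac{p}{2}+1 \right) \lesssim \sqrt{p} \left( \frac{p}{2e} \right)^{p/2}, \end{align*} by Stirling's formula.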
In the case that $p<c_0 \log n$ and $p\gg 1$, for the range $1/p<\varepsilon<1$ we have, for any fixed $\theta\in S^{k-1}$, the concentration inequality \begin{align*} P\left( \big| \|G\theta\|_p -\mathbb E\|X\|_p \big| > \varepsilon \mathbb E\|X\|_p \right) \leq C \exp(-c\varepsilon k(B_p^n) ), \end{align*} by Proposition \ref{prop: weak-conc}. Thus, the classical net argument yields the estimate $k(n,p,\varepsilon) \gtrsim \varepsilon k(B_p^n) /\log \frac{1}{\varepsilon}$. We omit the details. Moreover, for $2< p < c_0 \log n$ with $p\simeq \log n$, the main result of Section 2 shows that ${\rm Var}\|X\|_p \lesssim n^{-c_1}$ for some absolute constant $c_1>0$. Therefore, Chebyshev's inequality and the net argument as before imply $k(n, p, \varepsilon) \gtrsim \log n / \log \frac{1}{\varepsilon}$. Finally, for $p\gtrsim \log n$ we employ Corollary \ref{cor:conc-large-p}, combined with the net argument again, to get $k(n,p,\varepsilon) \gtrsim \varepsilon \log n/ \log \frac{1}{\varepsilon}$. $\hfill \quad \Box$ \medskip Below we show that the dependence on $\varepsilon$ we get for the randomized Dvoretzky theorem in $\ell_p^n$, for fixed $2< p<\infty$, is essentially optimal. We have the following: \begin{theorem} [Optimality in the Random Dvoretzky for $\ell_p^n$] \label{thm:sharp-dvo} Let $2< p< c_0 \log n$ and let $\frac{2^{p/2}}{p} n^{-\frac{p-2}{2(p-1)}} < \varepsilon <1/p$. Assume that, with probability larger than $1-e^{-\beta k}$, the random $k$-dimensional subspace of $\ell_p^n$ is such that the $\ell_p^n$ norm is $(1+\varepsilon)$-equivalent to a multiple of the $\ell_2^n$ norm on it. Then $k\lesssim \beta^{-1} \varepsilon^{2/p} k(B_p^n)$. \end{theorem} For the proof we will need the next lemma from \cite{Sch2}: \begin{lemma} \label{lem:meas-grass-gauss} Let $1\leq k\leq n-1$ and let ${\cal A}\subset G_{n,k}$ be a $\nu_{n,k}$-measurable set.
Then, for $U_{\cal A}: =\bigcup \{F \mid F\in {\cal A} \}$ we have: \begin{align*} \nu_{n,k}({\cal A}) \leq [\gamma_n(U_{\cal A})]^k. \end{align*} \end{lemma} \noindent {\it Proof of Theorem \ref{thm:sharp-dvo}.} Let $0<\varepsilon<1/3$ and define the collection of all $k$-dimensional subspaces of a space $(\mathbb R^n, \| \cdot \|)$ for which the restricted norm there has distortion (with respect to the Euclidean norm) at most $1+\varepsilon$: \begin{align*} {\cal A}_\varepsilon:=\{F\in G_{n,k} \mid \exists \lambda_F : \lambda_F \leq \|\theta\| \leq (1+\varepsilon) \lambda_F, \, \forall \theta\in S_F\}. \end{align*} Note that for $F\in {\cal A}_\varepsilon$ we have: $(1+\varepsilon)^{-1}M_F \leq \lambda_F\leq M_F$. Thus instead of working with $\lambda_F$ we may define ${\cal A}_\varepsilon$ using $M_F:=M(F\cap B)$ (here $B=\{x : \|x\|\leq 1\}$) namely, if \begin{align*} {\cal F}_\varepsilon:= \left\{ F\in G_{n,k} \mid (1+\varepsilon)^{-1} M_F\leq \|\theta\| \leq (1+\varepsilon)M_F\, \; \forall \theta\in S_F \right\}, \end{align*} then we get ${\cal A}_\varepsilon \subset {\cal F}_{\varepsilon}$. 
Define further: \begin{align*} {\cal B}_\varepsilon:= \left\{ F\in {\cal F}_\varepsilon \mid (1-2\varepsilon)\frac{\mathbb E\|g\|}{\mathbb E\|g\|_2}\leq M_F\leq (1+2\varepsilon)\frac{\mathbb E\|g\|}{\mathbb E\|g\|_2}\right\} \end{align*} and note that ${\cal F}_\varepsilon, \; {\cal B}_\varepsilon$ are measurable.\footnote{The map $F\mapsto M_F$ is Lipschitz continuous with respect to the unitarily invariant metric $d$ on $G_{n,k}$ defined as: $d(E,F)= \inf\{ \|I-U\|_{\rm op} : U(E)=F, \, U\in O(n)\}, \; E,F\in G_{n,k}$.} Hence, an application of Lemma \ref{lem:meas-grass-gauss} yields: \begin{align*} \nu_{n,k}({\cal F}_\varepsilon) &= \nu_{n,k}({\cal F}_\varepsilon \setminus {\cal B}_\varepsilon)+ \nu_{n,k}({\cal B}_\varepsilon) \\ & \leq \left[ \gamma_n \left( \left\{ x: \|x\| \geq \frac{1+ 2 \varepsilon}{1+\varepsilon}\frac{\mathbb E\|g\|}{\mathbb E\|g\|_2} \|x\|_2 \; {\rm \bf or} \; \|x\| \leq (1+ \varepsilon)(1- 2\varepsilon)\frac{\mathbb E\|g\|}{\mathbb E\|g\|_2} \|x\|_2 \right\} \right) \right]^k + \\ & \left[ \gamma_n \left( \left\{ x: \frac{1-2\varepsilon}{1+\varepsilon} \|x\|_2 \frac{\mathbb E\|g\|}{\mathbb E\|g\|_2} \leq \|x\|\leq (1+\varepsilon)(1+2\varepsilon) \frac{\mathbb E \|g\|}{\mathbb E \|g\|_2} \|x\|_2 \right\}\right) \right]^k. \end{align*} Apply this argument for the $\ell_p$ norm with $2<p< c_0 \log n$ and consider the next claim which follows easily by Theorem \ref{thm:conc-1<p<2} and Proposition \ref{prop:conc-p>2-u-l-b}: \smallskip \noindent {\it Claim.} For every $2^{p/2} p^{-1} n^{-\frac{p-2}{2(p-1)} } < t < 1/p$ we have: \begin{align*} c e^{-Cp (tn)^{2/p} } \leq P \left( \|g\|_p \leq \frac{(1-t)\mathbb E \|g\|_p}{\mathbb E\|g\|_2} \|g\|_2 \; {\rm \bf or} \; \|g\|_p \geq \frac{(1+t)\mathbb E \|g\|_p}{\mathbb E\|g\|_2} \|g\|_2 \right) \leq C e^{-c (tn)^{2/p} }. 
\end{align*} \smallskip \noindent Now assume that $2^{p/2} p^{-1} n^{-\frac{p-2}{2(p-1)} }<\varepsilon<1/p$, so by the previous claim we get: \begin{align*} \nu_{n,k}({\cal F}_\varepsilon) &\leq C^k e^{-ck(\varepsilon n)^{2/p}}+ (1-ce^{-Cp(\varepsilon n)^{2/p}})^k \leq e^{-c'k(\varepsilon n)^{2/p}} +1-ce^{-Cp(\varepsilon n)^{2/p}}. \end{align*} Now employing the assumption that $\nu_{n,k}({\cal F}_\varepsilon) \geq 1- e^{-\beta k}$ for some absolute constant $\beta>0$ and that $\beta \ll ( \varepsilon n)^{2/p}$, we conclude: \begin{align*} 1-ce^{-Cp(\varepsilon n)^{2/p}} \geq 1-e^{-\beta k} -e^{-c'k(\varepsilon n)^{2/p}} \geq 1- 2e^{-c'' \beta k}, \end{align*} which implies $k\leq \frac{C'}{\beta} p (\varepsilon n)^{2/p}$, as required. $\hfill \quad \Box$ \section{Further remarks and questions} \noindent {\bf 1. Instability of the variance.} It is worth mentioning that the variance is not an isomorphic invariant. One can observe that: \begin{quote} \it There exists absolute constant $0< c_0 <1$ with the following property: for every $n\geq 2$ there exist 1-symmetric convex bodies $K$ and $L$ on $\mathbb R^n$ such that: \begin{align*} {\rm Var}\|Z\|_K \simeq \frac{1}{n^{\delta} \log n}, \quad {\rm Var}\|Z\|_L \simeq \frac{1}{\log n} \quad {\rm and} \quad e^{-1/c_0}L \subseteq K \subseteq L, \end{align*} where $\delta =1-c_0 \log 2$ and $Z\sim N({\bf 0},I_n)$. \end{quote} \noindent Indeed; for $p_0:= c_0 \log n$, where $0<c_0<1$ as in Theorem \ref{thm:var-ell_p}, we consider the bodies $K:=B_{p_0}^n$ and $L:=B_\infty^n$. We can easily see that these bodies enjoy the aforementioned properties. \medskip \noindent {\bf 2. Non-centered moments.} We know that for any centrally symmetric convex body $T$ on $\mathbb R^n$ one has: \begin{align*} \frac{c_1r}{n} \leq \left( \frac{I_r(\gamma_n,T)}{I_1(\gamma_n,T)} \right)^2 -1 \leq \frac{c_2r}{k(T)}, \end{align*} for all $r\geq 2$, where $c_1,c_2>0$ are absolute constants. 
This follows from the lower estimate in \eqref{eq:2.15} and Lemma \ref{lem:log-sob-moms}. In particular, for $1\leq r \leq k(T)$ we obtain: \begin{align} \label{eq:6.2} \frac{c_1' r}{n} \leq \frac{I_r(\gamma_n,T)}{I_1(\gamma_n,T)} -1 \leq \frac{c_2' r}{k(T)} \end{align} and when $k(T) \simeq n$ we readily see that this estimate is sharp up to constants, in particular for the $\ell_p$ norms with $1\leq p\leq 2$. Furthermore, one can show that the same behavior holds true for $2< p< c_0\log n$ even though the critical dimension $k(B_p^n)$ in that case is much smaller than $n$. For $2<p<c_0 \log n$ we have: \begin{align*} \frac{I_r(\gamma_n, B_p^n)}{I_1(\gamma_n,B_p^n)} \leq 1 +\frac{C^p}{n}r, \end{align*} for all $1\leq r\leq k(B_p^n)/C$. In fact, for the negative moments this is already clear if we take into account Theorem \ref{thm:stability-moms}, Proposition \ref{prop: stab-ell_p} and Theorem \ref{thm:stability-r-means}. More precisely: for $1\leq p < c_0\log n$ and for any $1\leq r\leq c k(B_p^n)$ we get: \begin{align*} \max \left \{ \frac{I_1(\gamma_n, B_p^n)}{I_{-r}(\gamma_n,B_p^n)} , \, \frac{I_r(\gamma_n, B_p^n)}{I_1(\gamma_n,B_p^n)} \right\} \leq 1 +\frac{C^p}{n}r \end{align*} and for $p\geq c_0\log n$ and $1\leq r \leq c k(B_p^n)$ we have: \begin{align*} \max \left \{ \frac{I_1(\gamma_n, B_p^n)}{I_{-r} (\gamma_n,B_p^n)}, \, \frac{I_r(\gamma_n, B_p^n)}{I_1(\gamma_n,B_p^n)} \right\} \leq 1 +\frac{C}{(\log n)^2}r. \end{align*} We should note the following threshold phenomenon when $2<p\leq \infty$: \begin{itemize} \item $2<p\leq c_0\log n$: We have $I_r/I_1 -1 \lesssim_p r / n =O_p(n^{2/p-1})$ for $1 \leq r\leq c_1 k(B_p^n)$, while for $r\geq c_2 k(B_p^n)$ we have $I_r/I_1 -1 \simeq 1$. \item $p>c_0\log n$: We have $I_r/I_1 -1 \lesssim r / (\log n)^2 =O( (\log n)^{-1})$ for $1\leq r\leq c_1 k(B_p^n)$, while for $r\geq c_2k(B_p^n)$ we have $I_r/I_1 -1 \simeq 1$, \end{itemize} for absolute constants $0< c_1< c_2$.
The detailed study of this phenomenon will be presented elsewhere. Let us also note that although the behavior of the quantities $\frac{I_r}{I_1} -1, \, \frac{I_1}{I_{-r}}-1$ is completely determined for the $\ell_p$ norms -- it is of the order $r/n$ for $1\leq r\leq ck(B_p^n)$ -- combining this information with Markov's inequality we still do not derive the optimal concentration inequality in the whole range $2<p <\infty$. \medskip \noindent {\bf 3. Gaussian concentration and randomized Dvoretzky.} One can show that the Gaussian concentration for norms $\|\cdot\|_A$ with $k(A) \simeq n$ is essentially optimal: \begin{lemma} Let $\alpha\in (0,1)$ and let $A$ be a centrally symmetric convex body on $\mathbb R^n$ with $k=k(A) \geq \alpha n $. Then, \begin{align*} P \left( \big | \|Z\|_A - \mathbb E \|Z\|_A \big |\geq \varepsilon \mathbb E\|Z\|_A \right) \geq c e^{-C\varepsilon^2 k / \alpha^2}, \end{align*} for all $n^{-1/2} <\varepsilon<1$. \end{lemma} \noindent {\it Proof.} Set $I_q^q=\mathbb E \|Z\|_A^q$. Taking into account \eqref{eq:6.2} we may write: \begin{align*} 1+ \frac{c_1r}{n} \leq \frac{I_r}{I_1} \leq \sqrt{ 1+\frac{C_1r}{k} }, \end{align*} for all $r\geq 2$. Let $n^{-1/2} <\varepsilon < 1$. If we set $r_0:=\frac{2n\varepsilon}{c_1}$, then by the previous estimates and the Paley-Zygmund inequality we have: \begin{align*} P\left( \|Z\| > (1+\varepsilon) I_1 \right)\geq P\left( \|Z\| > \frac{1+\varepsilon}{1+\frac{c_1r_0}{n}} I_{r_0}\right) = P( \|Z\|> \delta I_{r_0}) \geq (1-\delta^{r_0})^2 \frac{I_{r_0}^{2r_0}}{I_{2r_0}^{2r_0}} \geq c_2 e^{-C_2 r_0^2/k } , \end{align*} where $\delta := \frac{1+\varepsilon}{1+\frac{c_1r_0}{n}}$. The result easily follows. $\hfill \quad \Box$ \smallskip Although the Gaussian concentration for spaces $E=(\mathbb R^n, \|\cdot\|)$ with $k(E) \simeq n$ is sharp, the argument provided in Section 5 fails to give the optimal dependence on $\varepsilon$ in randomized Dvoretzky.
The reason is that in Gauss' space, norms with concentration estimate less than $e^{-\varepsilon^2n}$ cannot be distinguished from the Euclidean norm. Therefore it is more appropriate to work on the sphere when we study almost spherical sections in normed spaces. \medskip \noindent {\bf 4. Refined Gaussian concentration and ``new dimensions''.} The reader should notice that the refined form of the Gaussian concentration for $2<p<\infty$ (Theorem \ref{thm:conc-full}) and, moreover, Theorem \ref{thm:rdm-dvo-p>2} provide random, almost Euclidean subspaces of relatively large dimensions in which the norm has very small distortion. Previously, that phenomenon could not be observed when using the classical concentration inequality in terms of the Lipschitz constant. To illustrate this, let us consider an example, say the $\ell_p$ norm with $p=5$. The classical setting yields the existence of random $k$-dimensional sections of $B_5^n$ which are $(1+\varepsilon)$-isomorphic to a multiple of $B_2^k$ as long as $k \lesssim \varepsilon^2 n^{2/5}$. The latter is relatively large when $\varepsilon \gg n^{-1/5}$. Now, we may consider distortions smaller than $n^{-1/5}$, in fact as small as $n^{-1/2}$, since $\tau(n,5,\varepsilon) \simeq \min\{\varepsilon^2 n, (\varepsilon n)^{2/5} \}$. For instance (for $\varepsilon \simeq n^{-2/5}$), the random $k$-dimensional section of $B_5^n$ with $k\simeq n^{1/5}$ is $(1+n^{-2/5})$-isomorphic to a multiple of $B_2^k$ with probability greater than $1-e^{-cn^{1/5}}$. \medskip \noindent {\bf 5. The existence of $\log(1/\varepsilon)$ as $p\to \infty$.} Note that Theorem \ref{thm:stability-r-means} and, furthermore, Corollary \ref{cor:conc-large-p} suggest that the concentration of the $\ell_p$ norm with $p\gtrsim \log n$ is similar to the one we get for the $\ell_\infty$ norm.
This means that the classical net argument yields random subspaces which are $(1+\varepsilon)$-spherical as long as $k\lesssim \varepsilon \log n/ \log \frac{1}{\varepsilon}$. We do not know if this $\log(1/\varepsilon)$ term is needed, for this range of $p$. As an easy corollary of the main result of \cite{Tik}, we have: \begin{proposition} Let $p> (\log n)^2$ and $\varepsilon \in(0,1/3)$. If the random $k$-dimensional subspace of $\ell_p^n$ is $(1+\varepsilon)$-spherical with probability greater than $3/4$, then $k\leq C \varepsilon \log n/ \log \frac{1}{\varepsilon}$, where $C>0$ is an absolute constant. \end{proposition} \medskip \noindent {\bf Acknowledgements.} The authors are indebted to the anonymous referees whose valuable comments helped to improve the presentation of this note. The authors would also like to thank Konstantin Tikhomirov who was interested in their question and kindly allowed them to include his argument here.
https://arxiv.org/abs/1510.07284
Random version of Dvoretzky's theorem in $\ell_p^n$
We study the dependence on $\varepsilon$ in the critical dimension $k(n,p,\varepsilon)$ for which one can find random sections of the $\ell_p^n$-ball which are $(1+\varepsilon)$-spherical. We give lower (and upper) estimates for $k(n,p,\varepsilon)$ for all eligible values $p$ and $\varepsilon$ as $n\to \infty$, which agree with the sharp estimates for the extreme values $p=1$ and $p=\infty$. Toward this end, we provide tight bounds for the Gaussian concentration of the $\ell_p$-norm.
https://arxiv.org/abs/1705.01350
The Samuelson's model as a singular discrete time system
In this paper we revisit the famous classical Samuelson's multiplier-accelerator model for national economy. We reform this model into a singular discrete time system and study its solutions. The advantage of this study gives a better understanding of the structure of the model and more deep and elegant results.
\section{Introduction} Many authors have studied generalised discrete \& continuous time systems, see [1-19], and their applications, especially in cases where a memory effect is needed, including generalised discrete \& continuous time systems with delays, see [20-38]. Many of these results have already been extended to systems of differential \& difference equations with fractional operators, see [43-49]. Keynesian macroeconomics inspired the seminal work of Samuelson (1939), which introduced business cycle theory. Although primitive and based only on the demand side, Samuelson's perspective still provides an excellent insight into the problem and a justification of the business cycles appearing in national economies. In the past decades, many more sophisticated models have been proposed by other researchers [20-38]. All these models use more delicate mechanisms involving monetary aspects, inventory issues, business expectations, borrowing constraints, welfare gains and multi-country consumption correlations. Some of these articles also contribute to the discussion of the inadequacies of Samuelson's model. The basic shortcoming of the original model is its inability to produce a stable path for the national income when realistic values of the parameters (the multiplier and accelerator parameters) are entered into the system of equations. This contradicts the empirical evidence, which supports temporary or long-lasting business cycles. In this article, we propose an alternative view of the model by reforming it into a singular discrete time system. The paper is organized as follows. Section 2 provides a short review of the original model, and in Section 3 we introduce the proposed reformulation into a system of difference equations. Section 4 investigates the solutions of the proposed system.
\section{The original model} The original version of Samuelson's multiplier-accelerator original model is based on the following assumptions: \\\\ \textit{Assumption 2.1.} National income $T_k$ in year $k$, equals to the summation of three elements: consumption, $C_k$, private investment, $I_k$, and governmental expenditure $G_k$ \begin{equation}\label{eq11} T_k=C_k+I_k+G_k. \end{equation} \textit{Assumption 2.2.} Consumption $C_k$ in year $k$, depends on past income (only on last year's value) and on marginal tendency to consume, modeled with $a$, the multiplier parameter, where $0 < a < 1$, \begin{equation}\label{eq12} C_k=aT_{k-1}. \end{equation} \textit{Assumption 2.3.} Private investment $I_k$ in year $k$, depends on consumption changes and on the accelerator factor $b$, where $b>0$. Consequently, $I_k$ depends on national income changes, \begin{equation}\label{eq13} I_k=b(C_k-C_{k-1})=ab(T_{k-1}-T_{k-2}). \end{equation} \textit{Assumption 2.4.} Governmental expenditure $G_k$ in year $k$, remains constant \[ G_k=\bar G. \] Hence, the national income is determined via the following second-order linear difference equation \[ T_{k+2}-a(1+b)T_{k+1}+abT_k=\bar G. \] See [39-42] for the needed theory of difference equations that lead to the solution of the above equation. \section{The reformulation - Singular Samuelson's model} Let \[ Y_k=\left[\begin{array}{c}T_k\\C_k\\I_k\end{array}\right] \] Then \eqref{eq11} can be written as \[ 0=-T_k+C_k+I_k+G_k, \] or, equivalently, \[ \left[\begin{array}{ccc}0&0&0\end{array}\right]Y_{k+1}=\left[\begin{array}{ccc}-1&1&1\end{array}\right]Y_k+G_k. \] The equation \eqref{eq12} can be written as \[ C_{k+1}=aT_k \] or, equivalently, \[ \left[\begin{array}{ccc}0&1&0\end{array}\right]Y_{k+1}=\left[\begin{array}{ccc}a&0&0\end{array}\right]Y_k. \] Finally \eqref{eq13} can be written as \[ I_{k+1}=b(C_{k+1}-C_k). \] or, equivalently, \[ -bC_{k+1}+I_{k+1}=-bC_k. 
\] or, equivalently, \[ \left[\begin{array}{ccc}0&-b&1\end{array}\right]Y_{k+1}=\left[\begin{array}{ccc}0&-b&0\end{array}\right]Y_k. \] Hence the above expressions can be written in the following matrix form \begin{equation}\label{eq1} \begin{array}{cc} FY_{k+1}=GY_k+V_k, & k= 2, 3,..., \end{array} \end{equation} where \[ F=\left[\begin{array}{ccc} 0&0&0\\ 0&1&0\\ 0&-b&1\end{array}\right],\quad G=\left[\begin{array}{ccc} -1&1&1\\ a&0&0\\ 0&-b&0\end{array}\right],\quad V_k=\left[\begin{array}{c} G_k\\0\\0\end{array}\right]. \] Note that $F$ is singular (det$F=0$). Throughout the paper we will use matrix pencil theory in several places to establish our results. A matrix pencil is a family of matrices $sF-G$, parametrized by a complex number $s$, see [46-53]. \\\\ \textbf{Definition 3.1.} Given $F,G\in \mathbb{R}^{r \times m}$ and an arbitrary $s\in\mathbb{C}$, the matrix pencil $sF-G$ is called: \begin{enumerate} \item regular when $r=m$ and det$(sF-G)\not\equiv 0$; \item singular when $r\neq m$, or $r=m$ and det$(sF-G)\equiv 0$. \end{enumerate} \textbf{Corollary 3.1.} The system \eqref{eq1} always has a \textsl{regular pencil}, for all $a,b$.\\\\ \textbf{Proof.} The determinant is det$(sF-G)=s^2-a(b+1)s+ab\not\equiv 0$. Hence, from Definition 3.1, the pencil is regular. The proof is completed. \\\\ The class of $sF-G$ is characterized by a uniquely defined element, known as the Weierstrass canonical form, see [50-57], specified by the complete set of invariants of $sF-G$. This is the set of elementary divisors of type $(s-a_j)^{p_j}$, called \emph{finite elementary divisors}, where $a_j$ is a finite eigenvalue of algebraic multiplicity $p_j$ ($1\leq j \leq \nu$), and the set of elementary divisors of type $\hat{s}^q=\frac{1}{s^q}$, called \emph{infinite elementary divisors}, where $q$ is the algebraic multiplicity of the infinite eigenvalue. Here $\sum_{j =1}^\nu p_j = p$ and $p+q=m$.
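The determinant computation in the proof of Corollary 3.1 can be checked numerically. The following minimal Python sketch (the values $a=0.9$, $b=0.1$ are arbitrary sample choices) evaluates $\det(sF-G)$ by cofactor expansion and compares it with $s^2-a(b+1)s+ab$:

```python
def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    (m11, m12, m13), (m21, m22, m23), (m31, m32, m33) = M
    return (m11 * (m22 * m33 - m23 * m32)
            - m12 * (m21 * m33 - m23 * m31)
            + m13 * (m21 * m32 - m22 * m31))

a, b = 0.9, 0.1  # sample multiplier and accelerator values
F = [[0, 0, 0], [0, 1, 0], [0, -b, 1]]
G = [[-1, 1, 1], [a, 0, 0], [0, -b, 0]]

for s in (-1.0, 0.5, 2.0, 3.7):
    sFG = [[s * F[i][j] - G[i][j] for j in range(3)] for i in range(3)]
    assert abs(det3(sFG) - (s * s - a * (b + 1) * s + a * b)) < 1e-12
```

Since the determinant is a non-trivial quadratic in $s$, the pencil is regular, as claimed.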
\\\\ From the regularity of $sF-G$, there exist non-singular matrices $P$, $Q$ $\in \mathbb{R}^{m \times m}$ such that \begin{equation}\label{eq3} \begin{array}{c}PFQ=\left[\begin{array}{cc} I_p&0_{p,q}\\0_{q,p}&H_q\end{array}\right], \\\\ PGQ=\left[\begin{array}{cc} J_p&0_{p,q}\\0_{q,p}&I_q\end{array}\right].\end{array} \end{equation} $J_p$, $H_q$ are appropriate matrices with $H_q$ a nilpotent matrix with index $q_*$, $J_p$ a Jordan matrix and $p+q=m$. With $0_{q,p}$ we denote the zero matrix of $q\times p$. The matrix $Q$ can be written as \begin{equation}\label{eq4} Q=\left[\begin{array}{cc}Q_p & Q_q\end{array}\right]. \end{equation} $Q_p\in \mathbb{R}^{m \times p}$ and $Q_q\in \mathbb{R}^{m \times q}$. The matrix $P$ can be written as \begin{equation}\label{eq5} P=\left[\begin{array}{c}P_1 \\ P_2\end{array}\right]. \end{equation} $P_1\in \mathbb{R}^{p \times r}$ and $P_2\in \mathbb{R}^{q \times r}$. \\\\ The solution of system \eqref{eq1} is given by the following Theorem: \\\\ \textbf{Theorem 3.1.} (See [1-19]) We consider the system \eqref{eq1}. Since its pencil is always regular, its solution exists and for $k\geq 0$, is given by the formula \[ Y_k=Q_pJ_p^kC+QD_k. \] Where $D_k=\left[ \begin{array}{c} \sum^{k-1}_{i=0}J_p^{k-i-1}P_1V_i\\-\sum^{q_{*}-1}_{i=0}H_q^iP_2V_{k+i} \end{array}\right]$ and $C\in\mathbb{R}^p$ is a constant vector. The matrices $Q_p$, $Q_q$, $P_1$, $P_2$, $J_p$, $H_q$ are defined by \eqref{eq3}, \eqref{eq4}, \eqref{eq5}. \section{Main Results} In this section we will present our main results. We will provide the solution to the system \eqref{eq1} and consequently we will derive the sequence for the national income, the consumption and the private investment. \\\\ \textbf{Theorem 4.1.} We consider the system \eqref{eq1}. 
Then in year $k$, national income $T_k$, consumption $C_k$ and private investment $I_k$ are given by: \[ \begin{array}{c} T_k=s_1^{k+1}c_1+s_2^{k+1}c_2+a\sum^{k-1}_{i=0}[(s_1^{k-i-1}+s_2^{k-i-1})]G_i,\\\\ C_k=a(s_1^kc_1+s_2^kc_2)+a^2\sum^{k-1}_{i=0}[(s_1^{k-i-1}+s_2^{k-i-1})]G_i,\\\\ I_k=s_1^{k}(s_1-a)c_1+s_2^{k}(s_2-a)c_2+a\sum^{k-1}_{i=0}[((s_1-a)s_1^{k-i-1}+(s_2-a)s_2^{k-i-1})]G_i \end{array} \] \textbf{Proof.} From Corollary 3.1, the pencil $sF-G$ is always regular. Furthermore the pencil has one infinite eigenvalue and two finite eigenvalues: \[ s_1=\frac{a(1+b)+\sqrt{a^2(1+b)^2-4ab}}{2},\quad s_2=\frac{a(1+b)-\sqrt{a^2(1+b)^2-4ab}}{2}. \] From Theorem 3.1, the solution of \eqref{eq1} is given by \[ Y_k=Q_pJ_p^kC+Q\left[ \begin{array}{c} \sum^{k-1}_{i=0}J_p^{k-i-1}P_1V_i\\-\sum^{q_{*}-1}_{i=0}H_q^iP_2V_{k+i} \end{array}\right]. \] Since we have one infinite eigenvalue we have \[ H_q=0 \] and $J_p$ is the Jordan matrix of the two finite eigenvalues: \[ Y_k=Q_p \left[ \begin{array}{cc} s_1^k&0\\0&s_2^k \end{array}\right]C+Q\left[ \begin{array}{c} \sum^{k-1}_{i=0}J_p^{k-i-1}P_1V_i\\0 \end{array}\right]. \] The matrix $Q_p$ has the two eigenvectors of the two finite eigenvalues: \[ Q_p= \left[ \begin{array}{cc} s_1&s_2\\a&a\\s_1-a&s_2-a \end{array}\right], \] while $Q_q$ is the eigenvector of the infinite eigenvalue: \[ Q_q= \left[ \begin{array}{c} 1\\0\\0 \end{array}\right]. \] Hence: \[ Q=\left[ \begin{array}{ccc} s_1&s_2&1\\a&a&0\\s_1-a&s_2-a&0 \end{array}\right] \] and the solution $Y_k$ takes the form: \[ Y_k= \left[ \begin{array}{cc} s_1&s_2\\a&a\\s_1-a&s_2-a \end{array}\right] \left[ \begin{array}{cc} s_1^k&0\\0&s_2^k \end{array}\right]C+ \] \[ \left[ \begin{array}{ccc} s_1&s_2&1\\a&a&0\\s_1-a&s_2-a&0 \end{array}\right] \left[ \begin{array}{c} \sum^{k-1}_{i=0} \left[ \begin{array}{cc} s_1^{k-i-1}&0\\0&s_2^{k-i-1} \end{array}\right]P_1V_i\\0 \end{array}\right].
\] Here $P_1$ is the matrix which contains the right eigenvectors corresponding to the finite eigenvalues: \[ P_1= \left[ \begin{array}{ccc} a&1&\frac{a}{s_1}\\a&1&\frac{a}{s_2} \end{array}\right]. \] Hence \[ Y_k= \left[ \begin{array}{cc} s_1&s_2\\a&a\\s_1-a&s_2-a \end{array}\right] \left[ \begin{array}{cc} s_1^k&0\\0&s_2^k \end{array}\right]C+ \] \[ \left[ \begin{array}{ccc} s_1&s_2&1\\a&a&0\\s_1-a&s_2-a&0 \end{array}\right]\left[ \begin{array}{c} \sum^{k-1}_{i=0}\left[ \begin{array}{cc} s_1^{k-i-1}&0\\0&s_2^{k-i-1} \end{array}\right]\left[ \begin{array}{ccc} a&1&\frac{a}{s_1}\\a&1&\frac{a}{s_2} \end{array}\right]\left[ \begin{array}{c} G_i\\0\\0 \end{array}\right]\\0 \end{array}\right], \] or, equivalently, \[ Y_k= \left[ \begin{array}{c} s_1^{k+1}c_1+s_2^{k+1}c_2+a\sum^{k-1}_{i=0}[(s_1^{k-i-1}+s_2^{k-i-1})]G_i\\ a(s_1^kc_1+s_2^kc_2)+a^2\sum^{k-1}_{i=0}[(s_1^{k-i-1}+s_2^{k-i-1})]G_i\\ s_1^{k}(s_1-a)c_1+s_2^{k}(s_2-a)c_2+a\sum^{k-1}_{i=0}[((s_1-a)s_1^{k-i-1}+(s_2-a)s_2^{k-i-1})]G_i \end{array}\right], \] or, equivalently, \[ \left[\begin{array}{c}T_k\\C_k\\I_k\end{array}\right]= \left[ \begin{array}{c} s_1^{k+1}c_1+s_2^{k+1}c_2+a\sum^{k-1}_{i=0}[(s_1^{k-i-1}+s_2^{k-i-1})]G_i\\ a(s_1^kc_1+s_2^kc_2)+a^2\sum^{k-1}_{i=0}[(s_1^{k-i-1}+s_2^{k-i-1})]G_i\\ s_1^{k}(s_1-a)c_1+s_2^{k}(s_2-a)c_2+a\sum^{k-1}_{i=0}[((s_1-a)s_1^{k-i-1}+(s_2-a)s_2^{k-i-1})]G_i \end{array}\right]. \] The proof is completed. \subsection*{Initial Conditions} We consider system \eqref{eq1} with known initial conditions (IC) $Y_{2}$. \\\\ \textbf{Definition 4.1.} Consider the system \eqref{eq1} with known IC. Then the IC are called consistent if there exists a solution for the system \eqref{eq1} which satisfies the given conditions. \\\\ \textbf{Proposition 4.2.} (See [1-19]) The IC of system \eqref{eq1} are consistent if and only if \[ Y_2\in {\rm colspan}\,Q_p +QD_2. \] \textbf{Proposition 4.3.} (See [1-19]) Consider the system \eqref{eq1} with given IC.
Then the solution of the initial value problem is unique if and only if the IC are consistent. In that case, the unique solution is given by the formula \[ Y_k=Q_pJ_p^kZ^p_{2}+QD_k, \] where $D_k=\left[ \begin{array}{c} \sum^{k-1}_{i=0}J_p^{k-i-1}P_1V_i\\-\sum^{q_{*}-1}_{i=0}H_q^iP_2V_{k+i} \end{array}\right]$ and $Z^p_2$ is the unique solution of the algebraic system $Y_2=Q_pZ^p_2+D_2$. \\\\ \textbf{Proposition 4.4.} The reformulated singular Samuelson model always has a unique solution for given initial conditions.\\\\ \textbf{Proof.} The reformulated singular Samuelson model is a singular system of the form \eqref{eq1}. For $k=2$ we get: \[ Y_2= \left[\begin{array}{c}T_2\\C_2\\I_2\end{array}\right], \] or, equivalently, \[ Y_2 = \left[\begin{array}{c}T_2\\aT_1\\ab(T_1-T_0)\end{array}\right], \] or, equivalently, \[ Y_2 = \left[\begin{array}{c}1\\0\\0\end{array}\right]T_2+\left[\begin{array}{c}0\\1\\b\end{array}\right]aT_1+\left[\begin{array}{c}0\\0\\-b\end{array}\right]aT_0. \] However \[ {\rm colspan}\,Q_p +QD_2=\left\langle\left[\begin{array}{c}0\\1\\b\end{array}\right],\left[\begin{array}{c}0\\0\\-b\end{array}\right]\right\rangle+\left[\begin{array}{c}1\\0\\0\end{array}\right] \] and hence, from Proposition 4.2, the IC of the reformulated singular Samuelson model are always consistent and, from Proposition 4.3, the model has a unique solution for given IC. The proof is completed.
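The reformulation of Section 3 can also be verified numerically: if $T_k$ follows the scalar recursion of Section 2 and $C_k$, $I_k$ are built from Assumptions 2.2 and 2.3, then $Y_k=(T_k,C_k,I_k)^T$ satisfies $FY_{k+1}=GY_k+V_k$ row by row at every step. A minimal Python sketch (the parameter values and initial incomes are arbitrary sample choices):

```python
a, b, Gbar = 0.9, 0.1, 5.0   # sample multiplier, accelerator and expenditure
T = [10.0, 12.0]             # arbitrary initial national incomes T_0, T_1
for k in range(2, 60):       # T_k = a(1+b) T_{k-1} - ab T_{k-2} + Gbar
    T.append(a * (1 + b) * T[-1] - a * b * T[-2] + Gbar)

def Y(k):
    # Y_k = (T_k, C_k, I_k) with C_k = a T_{k-1}, I_k = ab (T_{k-1} - T_{k-2})
    return (T[k], a * T[k - 1], a * b * (T[k - 1] - T[k - 2]))

for k in range(2, 59):
    Tk, Ck, Ik = Y(k)
    Tn, Cn, In = Y(k + 1)
    assert abs(-Tk + Ck + Ik + Gbar) < 1e-9   # row 1: 0 = -T_k + C_k + I_k + G_k
    assert abs(Cn - a * Tk) < 1e-9            # row 2: C_{k+1} = a T_k
    assert abs(-b * Cn + In + b * Ck) < 1e-9  # row 3: -b C_{k+1} + I_{k+1} = -b C_k
```

With these sample parameters both finite eigenvalues of the pencil lie inside the unit disc, so $T_k$ converges to the fixed point $\bar G/(1-a)$.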
https://arxiv.org/abs/1911.03958
A spanning bandwidth theorem in random graphs
The bandwidth theorem [Mathematische Annalen, 343(1):175--205, 2009] states that any $n$-vertex graph $G$ with minimum degree $(\frac{k-1}{k}+o(1))n$ contains all $n$-vertex $k$-colourable graphs $H$ with bounded maximum degree and bandwidth $o(n)$. In [arXiv:1612.00661] a random graph analogue of this statement is proved: for $p\gg (\frac{\log n}{n})^{1/\Delta}$ a.a.s. each spanning subgraph $G$ of $G(n,p)$ with minimum degree $(\frac{k-1}{k}+o(1))pn$ contains all $n$-vertex $k$-colourable graphs $H$ with maximum degree $\Delta$, bandwidth $o(n)$, and at least $C p^{-2}$ vertices not contained in any triangle. This restriction on vertices in triangles is necessary, but limiting.In this paper we consider how it can be avoided. A special case of our main result is that, under the same conditions, if additionally all vertex neighbourhoods in $G$ contain many copies of $K_\Delta$ then we can drop the restriction on $H$ that $Cp^{-2}$ vertices should not be in triangles.
\section{Introduction} One major topic of research in extremal graph theory is to determine minimum degree conditions on a graph~$G$ which force it to contain copies of a spanning subgraph $H$. The primal example of such a theorem is Dirac's theorem~\cite{dirac1952}, which states that if $\delta(G)\ge\tfrac12 v(G)$ then $G$ is Hamiltonian. Optimal results of this type were established for a wide range of other spanning subgraphs~$H$ with bounded maximum degree such as powers of Hamilton cycles, trees, or $F$-factors for any fixed graph~$F$ (see e.g.~\cite{kuhnsurvey} for a survey). One characteristic all these graphs~$H$ have in common is that they have sublinear bandwidth. The \emph{bandwidth} of a labelling of the vertex set of~$H$ by integers $1, \ldots, n$ is the minimum~$b$ such that $|i-j| \leq b$ for every edge $ij$ of~$H$. The bandwidth of~$H$ is the minimum bandwidth among all its labellings. The relevance of this parameter was highlighted in~\cite{bottcher2009proof}, where the following asymptotically optimal general result was proved. \begin{theorem}[Bandwidth Theorem~\cite{bottcher2009proof}] \label{thm:bandwidth} For every $\gamma >0$, $\Delta \geq 2$, and $k\geq 1$, there exist $\beta>0$ and $n_0 \geq 1$ such that for every $n\geq n_0$ the following holds. If $G$ is a graph on $n$ vertices with minimum degree $\delta(G) \geq \left(\frac{k-1}{k}+\gamma\right)n$ and if $H$ is a $k$-colourable graph on $n$ vertices with maximum degree $\Delta(H) \leq \Delta$ and bandwidth at most $\beta n$, then $G$ contains a copy of $H$. \qed \end{theorem} More recently, the transference of extremal results from dense graphs to sparse graphs became a research focus. Again, a prime example, due to Lee and Sudakov~\cite{lee2012}, is that if $\Gamma=G(n,p)$ is a typical binomial random graph with $p\ge C\tfrac{\log n}{n}$, for some large $C$, then any $G\subset\Gamma$ with minimum degree $\big(\tfrac12+o(1)\big)pn$ is Hamiltonian. 
This is a transference of Dirac's theorem to sparse random graphs. Further such results exist, all focused on finding small-bandwidth subgraphs (for a comprehensive list see, e.g., the recent survey~\cite{BCCsurv}). One can also ask similar questions in sparse graphs other than random graphs---for example for sufficiently pseudorandom graphs---but we will not focus on this question here. As for the classical extremal statements, it is desirable to have a result covering a very general class of spanning subgraphs. This is achieved in~\cite{ABET}, where the following transference of the Bandwidth Theorem to sparse random graphs is proved. \begin{theorem}[{Sparse Bandwidth Theorem~\cite[Theorem~6]{ABET}}] \label{thm:abet} For each $\gamma >0$, $\Delta \geq 2$, and $k \geq 1$, there exist constants $\beta^\ast >0$ and $C^{\ast} >0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p \geq C^{\ast}\big(\frac{\log n}{n}\big)^{1/\Delta}$. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\left(\frac{k-1}{k}+ \gamma\right)pn$, and let $H$ be a $k$-colourable graph on $n$ vertices with $\Delta(H) \leq \Delta$, bandwidth at most $\beta^\ast n$, and with at least $C^{\ast} p^{-2}$ vertices which are not contained in any triangles of $H$. Then $G$ contains a copy of $H$. \qed \end{theorem} Note however that this result is not quite what one would expect as a transference of the Bandwidth Theorem. There is an additional restriction that some vertices of $H$ may not be in triangles. This restriction is necessary, since in a sparse random graph an adversary who creates $G$ from $\Gamma$ can typically remove only a tiny fraction of the edges at each vertex and still make the neighbourhoods of $\Omega(p^{-2})$ vertices into independent sets. This prompts the question: how should we restrict the adversary so that any~$H$ with small maximum degree and sublinear bandwidth is contained in~$G$?
The following theorem, which is the main result of this paper, answers this question. \begin{theorem}[Main result] \label{thm:main} For each $\gamma >0$, $\Delta \geq 2$, $k \geq 2$ and $0\le s\le k-1$, there exist constants $\beta^\ast >0$ and $C^{\ast} >0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p \geq C^{\ast}\big(\frac{\log n}{n}\big)^{1/\Delta}$. Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\left(\frac{k-1}{k}+ \gamma\right)pn$, such that for each $v\in V(G)$ there are at least $\gamma p^{\binom{s}{2}}(pn)^s$ copies of $K_s$ in $N_G(v)$. Let $H$ be a graph on $n$ vertices with $\Delta(H) \leq \Delta$ and bandwidth at most $\beta^\ast n$, and suppose that there is a proper $k$-colouring of $V(H)$ under which at least $C^{\ast} p^{-2}$ vertices of $H$ have neighbourhoods containing only $s$ colours. Then $G$ contains a copy of $H$. \end{theorem} To help understand this statement, observe that the extra condition we put on $G$ is that each vertex neighbourhood contains a constant (but perhaps rather small) fraction of the copies of $K_s$ which it has in $\Gamma$. The additional restriction on $H$ which is not present in the Bandwidth Theorem specialises to requiring $\Omega(p^{-2})$ vertices not in triangles if $s=1$ (re-proving Theorem~\ref{thm:abet}) and becomes trivially satisfied when $s=k-1$, since no vertex in a proper $k$-colouring can have neighbours of $k$ or more different colours. In particular, imposing the additional requirement on $G$ that every vertex neighbourhood contains many copies of $K_{k-1}$ allows us to find copies of all the graphs handled by the Bandwidth Theorem. The reader might have expected, by analogy with Theorem~\ref{thm:abet}, to see a different condition on $H$, namely that there should exist $\Omega(p^{-2})$ vertices whose neighbourhood is $s$-colourable (ignoring the overall colouring of $H$).
However this condition is not sufficient: we will give in Section~\ref{sec:construct} an example of a graph $H$ which satisfies this condition (for $s=2$) but which need not be a subgraph of $G$ satisfying the conditions of Theorem~\ref{thm:main}. We should comment on the relation between this result and the recent work of Fischer, \v{S}kori\'c, Steger and Truji\'c~\cite{FSST}, who show `triangle-resilience' for the square of a Hamilton cycle. Triangle-resilience is a stronger condition to impose on $G$ than our Theorem~\ref{thm:main} would require for proving the existence of the square of a Hamilton cycle, so in this sense our result is stronger. However we can only work with $p\gg\big(\tfrac{\log n}{n}\big)^{1/4}$, whereas in~\cite{FSST} $p$ may be as small as $Cn^{-1/2}\log^3n$. This is rather close to the lower bound $p=n^{-1/2}$ at which point even a typical $G(n,p)$ does not contain the square of a Hamilton cycle, so in this sense the result of~\cite{FSST} is much stronger. It would be very interesting to improve the probability bounds in our result. But the method of~\cite{FSST} uses the structure of the square of a Hamilton cycle in an essential way (in particular that it has constant bandwidth), and it is not clear how one might use their ideas in our more general situation. \subsection{Outline of the paper} We prove Theorem~\ref{thm:main} by making use of the sparse regularity lemma of Kohayakawa and R\"odl~\cite{kohayakawa1997,kohayakawa2003}, the sparse blow-up lemma of~\cite{blowup}, and several lemmas from~\cite{ABET}. In Section~\ref{sec:preliminaries} we give the definitions and results necessary to state and use the sparse regularity lemma and the sparse blow-up lemma, and also a few probabilistic lemmas. 
In Section~\ref{sec:mainlemmas} we give a somewhat more general statement (Theorem~\ref{thm:maink}) than Theorem~\ref{thm:main}, which allows for graphs $H$ which are not quite $k$-colourable, and briefly outline how to prove it using various lemmas. The basic proof strategy, and most of the lemmas, are taken from~\cite{ABET}. The main exception is the pre-embedding lemma, Lemma~\ref{lem:coverv}, which replaces the `Common Neighbourhood Lemma' of~\cite{ABET}. The proof of this lemma, which is provided in Section~\ref{sec:pel}, requires new ideas and is the main work of this paper. The setup that this pre-embedding lemma creates also entails a number of modifications to the proof from~\cite{ABET}, which require some care. The details are given in Section~\ref{sec:mainproof}, where we give the proof of the main technical theorem, Theorem~\ref{thm:maink}. Finally, we finish with some concluding remarks in Section~\ref{sec:remarks}. \section{Preliminaries} \label{sec:preliminaries} Throughout the paper $\log$ denotes the natural logarithm. We assume that the order $n$ of all graphs tends to infinity and therefore is sufficiently large whenever necessary. Our graph-theoretic notation is standard. In particular, given a graph $G$ its vertex set is denoted by $V(G)$ and its edge set by $E(G)$. Let $A,B\subseteq V(G)$ be disjoint vertex sets. We denote the number of edges between $A$ and $B$ by $e(A,B)$. For a vertex $v \in V(G)$ we write $N_G(v)$ for the neighbourhood of $v$ in $G$ and $N_G(v,A):= N_G(v) \cap A$ for the neighbourhood of $v$ restricted to $A$. Finally, let $\deg_G(v) := |N_G(v)|$ be the degree of $v$ in $G$. For the sake of readability, we do not make any effort to optimise the constants in our theorems and proofs. \subsection{The sparse regularity method} Now we introduce some definitions and results of the regularity method as well as related tools that are essential in our proofs.
In particular, we state a minimum degree version of the sparse regularity lemma (Lemma~\ref{lem:regularitylemma}) and the sparse blow-up lemma (Lemma~\ref{thm:blowup}). Both lemmas use the concept of regular pairs. Let $G= (V,E)$ be a graph, $\varepsilon, d >0$, and $p \in (0,1]$. Moreover, let $X,Y \subseteq V$ be two disjoint nonempty sets. The \emph{$p$-density} of the pair $(X,Y)$ is defined as \[d_{G,p}(X,Y) := \frac{e_G(X,Y)}{p|X||Y|}.\] We now define regular, fully-regular, and super-regular pairs. Note that what we are calling `regular' is sometimes referred to as `lower-regular' by contrast with `fully-regular' (sometimes just called `regular') pairs in which an upper bound on $p$-densities is also imposed. It is immediate from the definition of the latter that a fully-regular pair is also lower-regular, with the same parameters; the converse is false. \begin{definition}[regular pairs, fully-regular pairs, super-regular pairs] \label{def:regular} The pair $(X,Y)$ is called \emph{$(\varepsilon,d,p)_G$-regular} if for every $X'\subseteq X$ and $Y'\subseteq Y$ with $|X'|\geq \varepsilon|X|$ and $|Y'|\geq \varepsilon |Y|$ we have $d_{G,p}(X',Y') \geq d- \varepsilon$. It is called \emph{$(\varepsilon,d,p)_G$-fully-regular} if there is some $d'\ge d$ such that for every $X'\subseteq X$ and $Y'\subseteq Y$ with $|X'|\geq \varepsilon|X|$ and $|Y'|\geq \varepsilon |Y|$ we have $\big|d_{G,p}(X',Y')-d'\big| \le \varepsilon$. If $(X,Y)$ is $(\varepsilon,d,p)_G$-regular, and in addition we have \begin{align*} |N_G(x,Y)| &\geq (d-\varepsilon)\max\big(p|Y|,\deg_\Gamma(x,Y)/2\big)\quad\text{and}\\ |N_G(y,X)| &\geq (d-\varepsilon)\max\big(p|X|,\deg_\Gamma(y,X)/2\big) \end{align*} for every $x \in X$ and $y \in Y$, then the pair $(X,Y)$ is called \emph{$(\varepsilon,d,p)_G$-super-regular}. \end{definition} A direct consequence of the definition of $(\varepsilon,d,p)$-regular pairs is the following proposition about the sizes of neighbourhoods in regular pairs.
\begin{proposition} \label{prop:neighbourhood} Let $(X,Y)$ be $(\varepsilon, d,p)$-regular. Then there are fewer than $\varepsilon |X|$ vertices $x\in X$ with $|N(x,Y)| < (d-\varepsilon)p|Y|$. \qed \end{proposition} The following proposition is another immediate consequence of Definition~\ref{def:regular}. It states that an $(\varepsilon,d,p)$-regular pair remains regular, with a weaker regularity parameter, when we pass to subsets that contain a linear proportion of the vertices on each side. \begin{proposition} \label{prop:subpairs} Let $(X,Y)$ be $(\varepsilon, d,p)$-regular and suppose $X'\subseteq X$ and $Y' \subseteq Y$ satisfy $|X'| \geq \mu |X|$ and $|Y'|\geq \nu |Y|$ with some $\mu, \nu >0$. Then $(X',Y')$ is $(\frac{\varepsilon}{\min\{\mu,\nu\}},d,p)$-regular. \qed \end{proposition} In order to state the sparse regularity lemma, we need some more definitions. A partition $\mathcal V = \{V_i\}_{i\in\{0,\ldots,r\}}$ of the vertex set of $G$ is called an \emph{$(\varepsilon,p)_G$-regular partition} of $V(G)$ if $|V_0|\leq \varepsilon |V(G)|$ and $(V_i,V_{i'})$ forms an $(\varepsilon,0,p)_G$-fully-regular pair for all but at most $\varepsilon\binom{r}{2}$ pairs $\{i,i'\}\in \binom{[r]}{2}$. It is called an \emph{equipartition} if $|V_i| = |V_{i'}|$ for every $i,i'\in[r]$. The partition $\mathcal V$ (or the pair $(G,\mathcal V)$) is called \emph{$(\varepsilon,d,p)_G$-regular} on a graph $R$ with vertex set $[r]$ if $(V_i, V_{i'})$ is $(\varepsilon,d,p)_G$-regular for every $\{i,i'\} \in E(R)$. The graph $R$ is referred to as the \emph{$(\varepsilon,d,p)_G$-reduced graph} of $\mathcal V$, the partition classes $V_i$ with $i \in [r]$ as \emph{clusters}, and $V_0$ as the \emph{exceptional set}. We also say that $\mathcal V$ (or the pair $(G,\mathcal V)$) is \emph{$(\varepsilon,d,p)_G$-super-regular} on a graph $R'$ with vertex set $[r]$ if $(V_i, V_{i'})$ is $(\varepsilon,d,p)_G$-super-regular for every $\{i,i'\}\in E(R')$.
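As an aside, the $p$-density and the lower-regularity condition of Definition~\ref{def:regular} can be illustrated computationally. The following sketch is our own illustration, not part of the paper; note that sampling subpairs can only certify irregularity, it can never prove regularity:

```python
import random
from itertools import combinations

def p_density(edges, X, Y, p):
    """p-density d_{G,p}(X, Y) = e(X, Y) / (p |X| |Y|) of a disjoint pair."""
    e = sum(1 for x in X for y in Y if frozenset({x, y}) in edges)
    return e / (p * len(X) * len(Y))

def is_lower_regular(edges, X, Y, eps, d, p, samples=200, seed=0):
    """Heuristic check of (eps, d, p)-(lower-)regularity: sample subsets
    X' of X and Y' of Y with |X'| >= eps|X|, |Y'| >= eps|Y| and verify
    d_{G,p}(X', Y') >= d - eps.  Returning False certifies irregularity;
    returning True only means the sampled subpairs passed the check."""
    rng = random.Random(seed)
    mx = max(1, int(eps * len(X)))
    my = max(1, int(eps * len(Y)))
    for _ in range(samples):
        Xp = rng.sample(list(X), rng.randint(mx, len(X)))
        Yp = rng.sample(list(Y), rng.randint(my, len(Y)))
        if p_density(edges, Xp, Yp, p) < d - eps:
            return False
    return True
```

For example, a complete bipartite pair passes with $p=1$ (every subpair has $p$-density exactly $1$), while an empty pair fails immediately.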
Analogously to Szemer\'edi's regularity lemma for dense graphs, the sparse regularity lemma, proved by Kohayakawa, R\"odl, and Scott~\cite{kohayakawa1997, kohayakawa2003, scott2011}, asserts the existence of an $(\varepsilon,p)$-regular partition of constant size of any sparse graph. We state a minimum degree version of this lemma, whose proof can be found in the appendix of~\cite{ABET}. \begin{lemma}[Minimum degree version of the sparse regularity lemma] \label{lem:regularitylemma} For each $\varepsilon >0$, each $\alpha \in [0,1]$, and $r_0\geq 1$ there exists $r_1\geq 1$ with the following property. For any $d\in[0,1]$, any $p>0$, and any $n$-vertex graph $G$ with minimum degree at least $\alpha p n$ such that for any disjoint $X,Y\subset V(G)$ with $|X|,|Y|\ge\tfrac{\varepsilon n}{r_1}$ we have $e(X,Y)\le \big(1+\tfrac{1}{1000}\varepsilon^2\big)p|X||Y|$, there is an $(\varepsilon,p)_G$-regular equipartition of $V(G)$ with $(\varepsilon,d,p)_G$-reduced graph $R$ satisfying $\delta(R) \geq (\alpha-d-\varepsilon)|V(R)|$ and $r_0 \leq |V(R)| \leq r_1$. \qed \end{lemma} We will need the following version of the sparse regularity lemma (see e.g.~\cite[Lemma~29]{ABET} for a proof), allowing for a partition equitably refining an initial partition with parts of very different sizes. Given a partition $V(G)=V_1\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}}\dots\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}} V_s$, we say a partition $\{V_{i,j}\}_{i\in[s],j\in[t]}$ is an \emph{equitable $(\varepsilon,p)$-regular refinement} of $\{V_i\}_{i\in[s]}$ if $|V_{i,j}|=|V_{i,j'}|\pm 1$ for each $i\in[s]$ and $j,j'\in[t]$, and there are at most $\varepsilon s^2t^2$ pairs $(V_{i,j},V_{i',j'})$ which are not $(\varepsilon,0,p)$-fully-regular. \begin{lemma}[Refining version of the sparse regularity lemma] \label{lem:SRLb} For each $\varepsilon>0$ and $s\in\mathbb{N}$ there exists $t_1\geq 1$ such that the following holds.
Given any graph $G$, suppose $V_1\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}}\dots\mathbin{\text{\mbox{\makebox[0mm][c]{\hphantom{$\cup$}$\cdot$}$\cup$}}} V_s$ is a partition of $V(G)$. Suppose that $e(V_i)\le 3p|V_i|^2$ for each $i\in[s]$, and $e(V_i,V_{i'})\le 2p|V_i||V_{i'}|$ for each $i\neq i'\in[s]$. Then there exist sets $V_{i,0}\subset V_i$ for each $i\in[s]$ with $|V_{i,0}|<\varepsilon|V_i|$, and an equitable $(\varepsilon,p)$-regular refinement $\{V_{i,j}\}_{i\in[s],j\in[t]}$ of $\{V_i\setminus V_{i,0}\}_{i\in[s]}$ for some $t\le t_1$. \qed \end{lemma} A key ingredient in the proof of our main theorem is the so-called sparse blow-up lemma established in~\cite{blowup}. Given a subgraph $G \subseteq \Gamma =G(n,p)$ with $p \gg (\log n/n)^{1/\Delta}$ and an $n$-vertex graph $H$ with maximum degree at most $\Delta$ with vertex partitions $\mathcal V$ and $\mathcal W$, respectively, the sparse blow-up lemma guarantees under certain conditions a spanning embedding of $H$ in $G$ which respects the given partitions. In order to state this lemma we need some definitions. Let $G$ and $H$ be graphs on $n$ vertices with partitions $\mathcal V=\{V_i\}_{i\in[r]}$ of $V(G)$ and $\mathcal W=\{W_i\}_{i\in[r]}$ of $V(H)$. We say that $\mathcal V$ and $\mathcal W$ are \emph{size-compatible} if $|V_i|=|W_i|$ for all $i\in[r]$. If there exists an integer $m \geq 1$ such that $m \leq |V_i| \leq \kappa m$ for every $i\in [r]$, then we say that $(G,\mathcal V)$ is \emph{$\kappa$-balanced}. Given a graph $R$ on $r$ vertices, we call $(H, \mathcal W)$ an \emph{$R$-partition} if for every edge $\{x,y\}\in E(H)$ with $x \in W_i$ and $y\in W_{i'}$ we have $\{i,i'\}\in E(R)$. The following definition allows for image restrictions in the sparse blow-up lemma. \begin{definition}[Restriction pair] \label{def:restrict} Let $\varepsilon,d>0$, $p \in [0,1]$, and let $R$ be a graph on $r$ vertices. 
Furthermore, let $G$ be a (not necessarily spanning) subgraph of $\Gamma = G(n,p)$ and let $H$ be a graph given with vertex partitions $\mathcal V= \{V_i\}_{i\in[r]}$ and $\mathcal W = \{W_i\}_{i\in[r]}$, respectively, such that $(G,\mathcal V)$ and $(H,\mathcal W)$ are size-compatible $R$-partitions. Let $\mathcal I=\{I_x\}_{x\in V(H)}$ be a collection of subsets of $V(G)$, called \emph{image restrictions}, and $\mathcal J=\{J_x\}_{x\in V(H)}$ be a collection of subsets of $V(\Gamma)\setminus V(G)$, called \emph{restricting vertices}. For each $i\in [r]$ we define $R_i\subseteq W_i$ to be the set of all vertices $x \in W_i$ for which $I_x \neq V_i$. We say that $\mathcal I$ and $\mathcal J$ are a \emph{$(\rho,\zeta,\Delta,\Delta_J)$-restriction pair} if the following properties hold for each $i\in[r]$ and $x\in W_i$. \begin{enumerate}[label=\itmarab{RP}] \item\label{itm:restrict:numres} We have $|R_i|\leq\rho|W_i|$. \item\label{itm:restrict:sizeIx} If $x\in R_i$, then $I_x\subseteq \bigcap_{u\in J_x} N_\Gamma(u, V_i)$ is of size at least $\zeta(dp)^{|J_x|}|V_i|$. \item\label{itm:restrict:Jx} If $x\in R_i$, then $|J_x|+\deg_H(x)\leq\Delta$ and if $x\in W_i\setminus R_i$, then $J_x=\varnothing$. \item\label{itm:restrict:DJ} Each vertex in $V(G)$ appears in at most $\Delta_J$ of the sets of $\mathcal J$. \item\label{itm:restrict:sizeGa} We have $\big|\bigcap_{u\in J_x} N_\Gamma(u, V_i)\big| = (p\pm\varepsilon p)^{|J_x|}|V_i|$. \item\label{itm:restrict:Ireg} If $x\in R_i$, for each $xy\in E(H)$ with $y\in W_j$, \[\text{the pair }\quad\Big( V_i \cap \bigcap_{u\in J_x}N_\Gamma(u), V_j \cap \bigcap_{v\in J_y}N_\Gamma(v)\Big)\quad\text{ is $(\varepsilon,d,p)_G$-regular.}\] \end{enumerate} \end{definition} The sparse blow-up lemma does not require all pairs in the reduced graph~$R$ to be super-regular, but only those in a subgraph~$R'$ of~$R$. This, however, is only possible if a good proportion of~$H$ is embedded into the pairs in~$R'$.
The following definition of buffer-sets makes this requirement precise. Moreover, we need certain regularity inheritance properties for the pairs in~$R'$. \begin{definition}[$(\vartheta, R')$-buffer, regularity inheritance] \label{def:buffer} Let $R$ and $R'$ be graphs on vertex set $[r]$ with $R'\subset R$. Suppose that $(H,\mathcal W)$ is an $R$-partition and that $(G,\mathcal V)$ is a size-compatible $(\varepsilon,d,p)_G$-regular partition with reduced graph~$R$. We say that the family $\widetilde{\mathcal{W}}=\{\widetilde{W}_i\}_{i\in[r]}$ of subsets $\widetilde{W}_i\subseteq W_i$ is an \emph{$(\vartheta,R')$-buffer} for $H$ if \begin{enumerate}[label=\rom] \item $|\widetilde{W}_i|\geq\vartheta |W_i|$ for all $i\in[r]$, and \item for each $i\in[r]$ and each $x\in\widetilde{W}_i$, the first and second neighbourhood of $x$ go along $R'$, i.e.,\ for each $\{x,y\},\{y,z\}\in E(H)$ with $y\in W_j$ and $z\in W_k$ we have $\{i,j\}\in E(R')$ and $\{j,k\}\in E(R')$. \end{enumerate} We say $(G,\mathcal V)$ has \emph{one-sided inheritance} on $R'$ if for every $\{i,j\}, \{j,k\}\in E(R')$ and every $v\in V_i$ the pair $\big(N_\Gamma(v, V_j),V_k\big)$ is $(\varepsilon,d,p)_G$-regular. We say $(G,\mathcal V)$ has \emph{two-sided inheritance} on $R'$ for $\widetilde{\mathcal{W}}$ if for each $i,j,k \in V(R')$ such that there is a triangle $x_ix_jx_k$ in~$H$ with $x_i\in \widetilde{W}_i$, $x_j\in W_j$, and $x_k\in W_k$ the following holds. For every $v\in V_i$ the pair $\big(N_\Gamma(v, V_j),N_\Gamma(v, V_k)\big)$ is $(\varepsilon,d,p)_G$-regular. \end{definition} Now we can finally state the sparse blow-up lemma. 
\begin{lemma}[{Sparse blow-up lemma~\cite[Lemma 1.21]{blowup}}] \label{thm:blowup} For each $\Delta$, $\Delta_{R'}$, $\Delta_J$, $\vartheta,\zeta, d>0$, $\kappa>1$ there exist $\eps_{\scalebox{\scalefactor}{$\mathrm{BL}$}},\rho>0$ such that for all $r_1$ there is a $C_{\scalebox{\scalefactor}{$\mathrm{BL}$}}$ such that for $p\geq C_{\scalebox{\scalefactor}{$\mathrm{BL}$}}(\log n/n)^{1/\Delta}$ the random graph $\Gamma=G(n,p)$ asymptotically almost surely satisfies the following. Let $R$ be a graph on $r\le r_1$ vertices and let $R'\subseteq R$ be a spanning subgraph with $\Delta(R')\leq \Delta_{R'}$. Let $H$ and $G\subseteq \Gamma$ be graphs given with $\kappa$-balanced, size-compatible vertex partitions $\mathcal W=\{W_i\}_{i\in[r]}$ and $\mathcal V=\{V_i\}_{i\in[r]}$ with parts of size at least $m\geq n/(\kappa r_1)$. Let $\mathcal I=\{I_x\}_{x\in V(H)}$ be a family of image restrictions, and $\mathcal J=\{J_x\}_{x\in V(H)}$ be a family of restricting vertices. Suppose that \begin{enumerate}[label=\itmarab{BUL}] \item\label{itm:blowup:H} $\Delta(H)\leq \Delta$, for every edge $\{x,y\}\in E(H)$ with $x\in W_i$ and $y\in W_j$ we have $\{i,j\}\in E(R)$ and $\widetilde{\mathcal{W}}=\{\widetilde{W}_i\}_{i\in[r]}$ is an $(\vartheta,R')$-buffer for $H$, \item\label{itm:blowup:G} $(G,\mathcal V)$ is $(\eps_{\scalebox{\scalefactor}{$\mathrm{BL}$}},d,p)_G$-regular on $R$, $(\eps_{\scalebox{\scalefactor}{$\mathrm{BL}$}},d,p)_G$-super-regular on $R'$, has one-sided inheritance on $R'$, and two-sided inheritance on $R'$ for $\widetilde{\mathcal{W}}$, \item\label{itm:blowup:restrict} $\mathcal I$ and $\mathcal J$ form a $(\rho,\zeta,\Delta,\Delta_J)$-restriction pair. \end{enumerate} Then there is an embedding $\phi\colon V(H)\to V(G)$ such that $\phi(x)\in I_x$ for each $x\in V(H)$.
\qed \end{lemma} Observe that in the blow-up lemma for dense graphs, proved by Koml{\'o}s, S{\'a}rk{\"o}zy, and Szemer{\'e}di~\cite{komlos1997blow}, one does not need to explicitly ask for one- and two-sided inheritance properties since they are always fulfilled by dense regular partitions. This is, however, not true in general in the sparse setting. The following two lemmas will be very useful whenever we need to redistribute vertex partitions in order to achieve some regularity inheritance properties. \begin{lemma}[One-sided regularity inheritance~\cite{blowup}] \label{lem:OSRIL} For each $\eps_{\scalebox{\scalefactor}{$\mathrm{OSRIL}$}}, \alpha_{\scalebox{\scalefactor}{$\mathrm{OSRIL}$}} >0$ there exist $\varepsilon_0 >0$ and $C >0$ such that for any $0 < \varepsilon < \varepsilon_0$ and $0 < p <1$ asymptotically almost surely $\Gamma= G(n,p)$ has the following property. For any disjoint sets $X$ and $Y$ in $V(\Gamma)$ with $|X|\geq C\max\big(p^{-2}, p^{-1} \log n\big)$ and $|Y| \geq C p^{-1} \log n$, and any subgraph $G$ of $\Gamma[X,Y]$ which is $(\varepsilon, \alpha_{\scalebox{\scalefactor}{$\mathrm{OSRIL}$}},p)_G$-regular, there are at most $C p^{-1}\log (en/|X|)$ vertices $z \in V(\Gamma)$ such that $(X \cap N_{\Gamma}(z),Y)$ is not $(\eps_{\scalebox{\scalefactor}{$\mathrm{OSRIL}$}},\alpha_{\scalebox{\scalefactor}{$\mathrm{OSRIL}$}},p)_G$-regular. \qed \end{lemma} \begin{lemma}[Two-sided regularity inheritance~\cite{blowup}] \label{lem:TSRIL} For each $\eps_{\scalebox{\scalefactor}{$\mathrm{TSRIL}$}},\alpha_{\scalebox{\scalefactor}{$\mathrm{TSRIL}$}}>0$ there exist $\varepsilon_0>0$ and $C >0$ such that for any $0<\varepsilon<\varepsilon_0$ and $0<p<1$, asymptotically almost surely $\Gamma=G(n,p)$ has the following property.
For any disjoint sets $X$ and $Y$ in $V(\Gamma)$ with $|X|,|Y|\ge C\max\{p^{-2},p^{-1}\log n\}$, and any subgraph $G$ of $\Gamma[X,Y]$ which is $(\varepsilon,\alpha_{\scalebox{\scalefactor}{$\mathrm{TSRIL}$}},p)_G$-regular, there are at most $C\max\{p^{-2},p^{-1}\log (en/|X|)\}$ vertices $z \in V(\Gamma)$ such that $\big(X\cap N_\Gamma(z),Y\cap N_\Gamma(z)\big)$ is not $(\eps_{\scalebox{\scalefactor}{$\mathrm{TSRIL}$}},\alpha_{\scalebox{\scalefactor}{$\mathrm{TSRIL}$}},p)_G$-regular. \qed \end{lemma} Finally, we need a statement about random subpairs of regular pairs (which is used to prove Lemma~\ref{lem:TSRIL}). \begin{corollary}[{\cite[Corollary 3.8]{GKRS}}]\label{cor:TSI} For any $d$, $\beta$, $\varepsilon'>0$ there exist $\varepsilon_0>0$ and $C$ such that for any $0<\varepsilon<\varepsilon_0$ and $0<p<1$, if $(X,Y)$ is an $(\varepsilon,d,p)$-regular pair in a graph $G$, then the number of pairs $X'\subseteq X$ and $Y'\subseteq Y$ with $|X'|=w_1\ge C/p$ and $|Y'|=w_2\ge C/p$ such that $(X',Y')$ is an $(\varepsilon',d,p)$-regular pair in~$G$ is at least $(1-\beta^{\min(w_1,w_2)})\binom{|X|}{w_1}\binom{|Y|}{w_2}$. \qed \end{corollary} \subsection{Concentration inequalities} We close this section with two of Chernoff's bounds for random variables that follow a binomial (Theorem~\ref{thm:chernoff}) and a hypergeometric distribution (Theorem~\ref{thm:hypergeometric}), respectively, and the following useful observation. Roughly speaking, it states that a.a.s.~nearly all vertices in $G(n,p)$ have approximately the expected number of neighbours within large enough subsets (for a proof see e.g.~\cite[Proposition~18]{ABET}). \begin{proposition} \label{prop:chernoff} For each $\varepsilon>0$ there exists a constant $C >0$ such that for every $0<p<1$ asymptotically almost surely $\Gamma=G(n,p)$ has the property that for any sets $X,Y\subset V(\Gamma)$ with $|X|\ge Cp^{-1}\log n$ and $|Y|\ge Cp^{-1}\log (en/|X|)$ the following holds. 
\begin{enumerate}[label=\itmit{\alph{*}}] \item If~$X$ and~$Y$ are disjoint, then $e(X,Y)=(1\pm\varepsilon)p|X||Y|$. \item We have $e(X)\le 2p|X|^2$. \item At most $C p^{-1} \log (en/|X|)$ vertices $v \in V(\Gamma)$ satisfy $\big||N_{\Gamma}(v,X)| - p |X|\big| > \varepsilon p |X|$. \end{enumerate}\qed \end{proposition} We use the following version of Chernoff's inequality (see e.g.~\cite[Chapter~2]{janson2011random} for a proof). \begin{theorem}[Chernoff's Inequality,~\cite{janson2011random}] \label{thm:chernoff} Let $X$ be a random variable which is the sum of independent Bernoulli random variables. Then we have for $\varepsilon\leq 3/2$ \[\Pr\big[|X-\mathbb{E}[X]| > \varepsilon \mathbb{E}[X]\big] < 2e^{-\varepsilon^2\mathbb{E}[X]/3}\,.\] Furthermore, if $t\ge 6\mathbb{E}[X]$ then we have \[\Pr\big[X\ge\mathbb{E}[X]+t\big]\le e^{-t}\,.\] \qed \end{theorem} Finally, let $N$, $m$, and $s$ be positive integers and let $S$ and $S' \subseteq S$ be two sets with $|S| = N$ and $|S'| = m$. The \emph{hypergeometric distribution} is the distribution of the random variable $X$ that is defined by drawing $s$ elements of $S$ without replacement and counting how many of them belong to $S'$. It can be shown that Theorem~\ref{thm:chernoff} still holds in the case of hypergeometric distributions (see e.g.~\cite[Chapter~2]{janson2011random} for a proof) with $\mathbb{E}[X]= ms/N$. \begin{theorem}[Hypergeometric inequality,~\cite{janson2011random}] \label{thm:hypergeometric} Let $X$ be a random variable that is hypergeometrically distributed with parameters $N$, $m$, and $s$. Then for any $\varepsilon>0$ and $t\ge\varepsilon ms/N$ we have \[\Pr\big[|X - ms/N| > t \big] < 2e^{-\varepsilon^2t/3}\,.\] \qed \end{theorem} We require the following technical lemma, which is a consequence of the hypergeometric inequality stated in Theorem~\ref{thm:hypergeometric}.
\begin{lemma}\label{lem:hypgeo} For each $\varepsilon^+_0,d^+>0$ there exists $\varepsilon^+>0$, and for each $\varepsilon,d>0$ there exists $\varepsilon^->0$, such that for each $\eta>0$ and $\Delta$ there exists $C$ such that the following holds for each $p>0$. Let $W\subset [n]$, let $t\le 100n^{\Delta+1}$, and let $T_1,\ldots,T_t$ be subsets of $W$. Let~$G$ be a graph on~$W$. For each $i\in[t]$ let $(X_i,Y_i)$ be a pair which is either $(\varepsilon^+,d^+,p)_G$-regular, or $(\varepsilon^-,d,p)_G$-regular (respectively), and which satisfies $m|X_i|/|W|,m|Y_i|/|W|\ge 2Cp^{-1}\log n$. Then for each $m\le |W|$ there is a set $S\subset W$ of size $m$ such that for each $i\in[t]$ \[|T_i\cap S|=\tfrac{m}{|W|}|T_i|\pm \big(\eta|T_i|+C\log n\big)\,,\] and the pair $\big(X_i\cap S,Y_i\cap S\big)$ is $\big(\varepsilon^+_0,d^+,p\big)$-regular, or $(\varepsilon,d,p)$-regular (respectively). \end{lemma} \begin{proof} Given $\varepsilon^+_0,d^+$, let $\varepsilon^+$ be returned by Corollary~\ref{cor:TSI} for input $d^+$, $\beta=\tfrac12$ and $\varepsilon^+_0$. Given $\varepsilon,d$, let $\varepsilon^-$ be returned by Corollary~\ref{cor:TSI} for input $d$, $\beta=\tfrac12$ and $\varepsilon$. Let $C\ge 30\eta^{-2}\Delta$ be large enough for these applications of Corollary~\ref{cor:TSI}. Let $S$ be a subset of $W$ of size $m$ chosen uniformly at random. Observe that for each $i$, the size of $T_i\cap S$ is hypergeometrically distributed. By Theorem~\ref{thm:hypergeometric}, for each $i$ we have \[\Pr\big[|T_i\cap S|\neq \tfrac{m}{|W|}|T_i|\pm \big(\eta|T_i|+C\log n\big)\big]<2e^{-\eta^2C\log n/3}<\frac{2}{n^{2+\Delta}}\,,\] so taking the union bound over all $i\in[t]$ we conclude that the probability that the first property fails is at most $2t/n^{2+\Delta}\le 200/n$, which tends to zero as $n\to\infty$. To obtain the second property, observe that Theorem~\ref{thm:hypergeometric} also implies that we have $|X_i\cap S|,|Y_i\cap S|\ge Cp^{-1}\log n$ for each $i\in[t]$ with probability tending to one as $n\to\infty$.
Conditioning on the size of $|X_i\cap S|$, the set $X_i\cap S$ is a uniformly distributed subset of $X_i$ of size $|X_i\cap S|$, and the same applies to $Y_i\cap S$. Now Corollary~\ref{cor:TSI} says that, conditioning on $|X_i\cap S|,|Y_i\cap S|\ge Cp^{-1}\log n$, the probability that $\big(X_i\cap S,Y_i\cap S\big)$ fails to have the desired regularity in $G$ is at most $2^{-Cp^{-1}\log n}$, and taking a union bound over the choices of~$i$ the result follows. \end{proof} \section{Main technical result and main lemmas} \label{sec:mainlemmas} We deduce Theorem~\ref{thm:main} from the following technical result (corresponding results also appear in the predecessor papers~\cite{ABET,bottcher2009proof}). This result is more general in that it allows for an extra colour, \emph{zero}, in the colouring of $H$, provided that this colour does not appear too often. \begin{definition}[Zero-free colouring]\label{def:zerofree} Let $H$ be a $(k+1)$-colourable graph on $n$ vertices and let $\mathcal L$ be a labelling of its vertex set of bandwidth at most $\beta n$. A proper $(k+1)$-colouring $\sigma:V(H) \to \{0,\ldots,k\}$ of its vertex set is said to be \emph{$(z,\beta)$-zero-free} with respect to $\mathcal L$ if any $z$ consecutive blocks contain at most one block in which colour zero is used, where a block is defined as a set of the form $\{(t-1)4k\beta n +1, \ldots, t4k\beta n\}$ with $t \in [1/(4k\beta)]$. \end{definition} \begin{theorem}[Main technical result] \label{thm:maink} For each $\gamma>0$, $\Delta \geq 2$, $k\geq 2$ and $1\le s\le k-1$, there exist constants $\beta >0$, $z>0$, and $C>0$ such that the following holds asymptotically almost surely for $\Gamma = G(n,p)$ if $p\geq C\big(\frac{\log n}{n}\big)^{1/\Delta}$.
Let $G$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq\big(\frac{k-1}{k}+\gamma\big)pn$ such that for each $v\in V(G)$ there are at least $\gamma p^{\binom{s}{2}}(pn)^s$ copies of $K_s$ in $N_G(v)$, and let $H$ be a graph on $n$ vertices with $\Delta(H) \leq \Delta$ that has a labelling $\mathcal L$ of its vertex set of bandwidth at most $\beta n$ and a $(k+1)$-colouring that is $(z,\beta)$-zero-free with respect to $\mathcal L$, where the first $\sqrt{\beta} n$ vertices in $\mathcal L$ are not given colour zero, and where the first $\beta n$ vertices in $\mathcal L$ include $Cp^{-2}$ vertices whose neighbourhood contains only $s$ colours. Then $G$ contains a copy of $H$. \end{theorem} The basic proof strategy for this theorem is analogous to the proof strategy for~\cite[Theorem~23]{ABET}. Eventually, we will apply the sparse blow-up lemma, Lemma~\ref{thm:blowup}, to embed most of~$H$ into $G$, and we need to obtain the necessary conditions for this lemma. The difficulty is that, whatever regular partition of $G$ we take, there may be some exceptional vertices which are `badly behaved' with respect to this partition. Our first main lemma, the Lemma for~$G$ below, states that there is a partition with only a few such vertices, which we collect in a set~$V_0$. These vertices will be dealt with in a pre-embedding stage before the application of the sparse blow-up lemma. For the application of the sparse blow-up lemma the following two graphs $B^k_r$ and $K^k_r$, which we shall find as subgraphs of the reduced graph of~$G$, are essential. Let $r, k \geq 1$ and let $B^k_r$ be the \emph{backbone graph} on $kr$ vertices. That is, we have \[V(B^k_r) := [r] \times [k]\] and for every $j \neq j' \in [k]$ we have $\{(i,j),(i',j')\} \in E(B^k_r)$ if and only if $|i-i'|\le1$.
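For intuition, the adjacency rule of the backbone graph can be sketched directly (an illustrative snippet of ours, not part of the paper):

```python
from itertools import combinations

def backbone_edges(r, k):
    """Edge set of the backbone graph B^k_r on vertex set [r] x [k]:
    (i, j) is adjacent to (i', j') iff j != j' and |i - i'| <= 1."""
    vertices = [(i, j) for i in range(1, r + 1) for j in range(1, k + 1)]
    return {frozenset({u, v})
            for u, v in combinations(vertices, 2)
            if u[1] != v[1] and abs(u[0] - v[0]) <= 1}
```

Counting same-level and consecutive-level pairs shows that $B^k_r$ has $r\binom{k}{2}+(r-1)k(k-1)$ edges; for instance $B^2_3$ (that is, $r=3$, $k=2$) has $7$ edges.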
Let $K^k_r \subseteq B^k_r$ be the spanning subgraph of $B^k_r$ that is the disjoint union of $r$ complete graphs on $k$ vertices, where for each $i\in [r]$ the complete graph $K^k_r[\{(i,1),\ldots, (i,k)\}]$ is called the \emph{$i$-th component} of $K^k_r$. A vertex partition $\mathcal V' = \{V_{i,j}\}_{i\in[r],j\in[k]}$ is called \emph{$k$-equitable} if $\big||V_{i,j}| -|V_{i,j'}|\big|\leq 1$ for every $i\in [r]$ and $j,j'\in[k]$. Similarly, an integer partition $\{n_{i,j}\}_{i\in[r],j\in[k]}$ of $n$ (meaning that $n_{i,j} \in \mathbb Z_{\geq 0}$ for every $i\in [r],j\in[k]$ and $\sum_{i\in[r],j\in[k]} n_{i,j} = n$) is \emph{$k$-equitable} if $|n_{i,j}-n_{i,j'}| \leq 1$ for every $i\in[r]$ and $j,j'\in[k]$. The Lemma for~$G$ then guarantees a $k$-equitable partition for~$G$ whose reduced graph~$R^k_r$ contains a copy of the backbone graph~$B^k_r$, is super-regular on $K^k_r\subset B^k_r$, and satisfies certain regularity inheritance properties. \begin{lemma}[Lemma for $G$,~{\cite[Lemma~24]{ABET}}] \label{lem:G} For each $\gamma > 0$ and integers $k \geq 2$ and $r_0 \geq 1$ there exists $d > 0$ such that for every $\varepsilon \in \left(0, \frac{1}{2k}\right)$ there exist $r_1\geq 1$ and $C^{\ast}>0$ such that the following holds a.a.s.~for $\Gamma = G(n,p)$ if $p \geq C^{\ast} \left(\log n/n\right)^{1/2}$. Let $G=(V,E)$ be a spanning subgraph of $\Gamma$ with $\delta(G) \geq \left(\frac{k-1}{k} + \gamma\right)pn$. Then there exists an integer $r$ with $r_0\leq kr \leq r_1$, a subset $V_0 \subseteq V$ with $|V_0| \leq C^{\ast} p^{-2}$, a $k$-equitable vertex partition $\mathcal V = \{V_{i,j}\}_{i\in[r],j\in[k]}$ of $V(G)\setminus V_0$, and a graph $R^k_r$ on the vertex set $[r] \times [k]$ with $K^k_r \subseteq B^k_r \subseteq R^k_r$, with $\delta(R^k_r) \geq \left(\frac{k-1}{k} + \frac{\gamma}{2}\right)kr$, and such that the following is true.
\begin{enumerate}[label=\itmarab{G}] \item \label{lemG:size} $\frac{n}{4kr}\leq |V_{i,j}| \leq \frac{4n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item \label{lemG:regular} $\mathcal V$ is $(\varepsilon,d,p)_G$-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item \label{lemG:inheritance} both $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i',j'})\big)$ are $(\varepsilon,d,p)_G$-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, \item \label{lemG:gamma} $|N_{\Gamma}(v,V_{i,j})| = (1 \pm \varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \end{enumerate} \end{lemma} The next step is to find a partition of $H$ which more or less matches that of $G$. This partition of~$H$ defines an assignment of the vertices of~$H$ to the clusters of~$G$. In other words, we assign the vertices in $V(H)$ indices $(i,j)$ of the partition $\mathcal V$, such that about $|V_{i,j}|$ vertices are assigned $(i,j)$ and all edges of~$H$ are assigned to edges of $R^k_r$. In fact, the lemma states further that most edges of~$H$ are assigned to edges of~$K^k_r$, and only those incident to vertices of a small set of special vertices~$X$ may be assigned to other edges of~$R^k_r$. \begin{lemma}[Lemma for $H$,~{\cite[Lemma~25]{ABET}}]\label{lem:H2} Given $D, k, r \geq 1$ and $\xi, \beta > 0$ the following holds if $\xi \leq 1/(kr)$ and $\beta \leq 10^{-10}\xi^2/(D k^4r)$. Let $H$ be a $D$-degenerate graph on $n$ vertices, let $\mathcal L$ be a labelling of its vertex set of bandwidth at most $\beta n$ and let $\sigma: V(H) \to \{0,\ldots,k\}$ be a proper $(k+1)$-colouring that is $(10/\xi, \beta)$-zero-free with respect to $\mathcal L$, where the colour zero does not appear in the first $\sqrt{\beta}n$ vertices of $\mathcal{L}$.
Furthermore, let $R^k_r$ be a graph on vertex set $[r] \times [k]$ with $K^k_r \subseteq B^k_r \subseteq R^k_r$ such that for every $i\in [r]$ there exists a vertex $z_i \in \big([r]\setminus\{i\}\big) \times [k]$ with $\big\{z_i, (i,j)\big\} \in E(R^k_r)$ for every $j\in [k]$. Then, given a $k$-equitable integer partition $\{m_{i,j}\}_{i\in[r],j\in[k]}$ of $n$ with $n/(10kr) \leq m_{i,j} \leq 10n/(kr)$ for every $i \in[r]$ and $j\in [k]$, there exists a mapping $f \colon V(H) \to [r]\times[k]$ and a set of special vertices $X \subseteq V(H)$ such that we have for every $i\in [r]$ and $j\in[k]$ \begin{enumerate}[label=\itmarab{H}] \item\label{lemH:H1} $m_{i,j} - \xi n \leq |f^{-1}(i,j)| \leq m_{i,j} + \xi n$, \item\label{lemH:H2} $|X| \leq \xi n$, \item\label{lemH:H3} $\{f(x),f(y)\} \in E(R^k_r)$ for every $\{x,y\} \in E(H)$, \item\label{lemH:H4} $y,z\in \cup_{j'\in[k]}f^{-1}(i,j')$ for every $x\in f^{-1}(i,j)\setminus X$ and $xy,yz\in E(H)$, and \item\label{lemH:H5} $f(x) = \big(1, \sigma(x)\big)$ for every $x$ in the first $\sqrt{\beta}n$ vertices of $\mathcal{L}$ \end{enumerate} \end{lemma} Our next lemma concerns the pre-embedding stage, in which we cover the vertices in $V_0\subset V(G)$ with vertices of~$H$. For this purpose we use the vertices of~$H$ whose neighbourhood contains only~$s$ colours. Let~$x$ be one of these vertices, let~$H'$ be the subgraph of~$H$ induced on all vertices of distance at most~$s+1$ from~$x$ (including~$x$), and let~$T$ be the set of those vertices in~$H'$ of distance exactly $s+1$ from~$x$. We cover a vertex~$v$ of $V_0$ by embedding such a vertex~$x$ of~$H$ onto~$v$, and we also embed all other vertices in the corresponding~$H'$ which are not in~$T$. This creates image restrictions on the vertices of $G$ to which we can embed the vertices in~$T$. 
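To make the pre-embedding setup concrete: the subgraph $H'$ and the boundary set $T$ are obtained from a breadth-first search around $x$. The following sketch (our own illustration, not from the source; the adjacency-dictionary representation and the function name are assumptions) computes the vertex set of $H'$, namely the ball of radius $s+1$ around $x$, together with the sphere $T$ at distance exactly $s+1$.

```python
from collections import deque

def ball_and_sphere(adj, x, radius):
    """BFS from x in the graph given by the adjacency dict `adj`.

    Returns (ball, sphere): the vertices at distance <= radius from x
    (the vertex set of H') and those at distance exactly radius (the
    set T of the pre-embedding stage).
    """
    dist = {x: 0}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue  # do not explore past the sphere
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    ball = set(dist)
    sphere = {u for u, d in dist.items() if d == radius}
    return ball, sphere
```

On a path $0$--$1$--$2$--$3$--$4$ rooted at $0$ with radius $2$, this yields the ball $\{0,1,2\}$ and the sphere $\{2\}$.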
For the application of Lemma~\ref{thm:blowup} we need that these image restrictions satisfy certain conditions, and that this pre-embedding preserves the super-regularity of the remaining partition of~$G$. To achieve the latter we take a random induced subgraph~$G'$ of~$G$ containing roughly $\mu n$ vertices, and perform the pre-embedding in~$G'$ only. In each cluster of~$G$, the subgraph~$G'$ selects roughly a $\mu$-fraction of the vertices, and the induced partitions on~$G'$ and on $G-V(G')$ are also super-regular. The next lemma states that we can also obtain suitable image restrictions for the vertices in~$T$ while performing the pre-embedding in~$G'$. This lemma is one of the main differences to the proof in~\cite{ABET}, and it is the place where we need that the neighbourhood of every vertex in $G$ contains a certain density of copies of $K_s$. A further difference this lemma creates in our proof strategy is that it selects a clique $\{q_1,\ldots,q_k\}$ in~$R$, which might not be one of the cliques of the chosen $K^k_r\subseteq R$, and the vertices of~$T$ are assigned to the corresponding clusters in $G$ (that is, the image restriction of~$y\in T$ is a subset of the cluster $V_{q_j}$ to which it is assigned). This assignment may well differ from the assignment given by the Lemma for~$H$, so in our proof of Theorem~\ref{thm:maink} we need to adapt to this difference by reassigning some more $H$-vertices. \begin{lemma}[Pre-embedding lemma]\label{lem:coverv} For $\Delta,k\ge 2$, $2\le s\le k-1$, and $\gamma,d>0$ with $d \le \frac{\gamma}{32}$ there exists $\zeta>0$ such that for every $\varepsilon'>0$ there exists $\varepsilon_0>0$ such that for all $0 < \varepsilon < \varepsilon_0$, all $\mu>0$ and $r\geq 10^5\gamma^{-1}$, there exists a constant $C^\ast>0$ such that the random graph $\Gamma=G(n,p)$ a.a.s.\ has the following property if $p\ge C^\ast\big(\tfrac{\log n}{n}\big)^{1/\Delta}$. Suppose we have the following setup.
\begin{enumerate}[label=\itmarab{P}] \item $H'$ is a graph with $\Delta(H')\le\Delta$, with a root vertex $x$, and no vertex at distance greater than $s+1$ from $x$. \item $\rho$ is a proper $k$-colouring of $V(H')$ in which $N(x)$ receives at most $s$ colours, and $T$ is the set of vertices in $H'$ at distance exactly $s+1$ from $x$. \item $G$ is a spanning subgraph of $\Gamma$ with $\delta(G)\ge\big(\tfrac{k-1}{k}+{\gamma}\big)pn$ with an $(\varepsilon,p)$-regular partition $V(G)=V_0\mathbin{\dot\cup} V_1\mathbin{\dot\cup}\dots\mathbin{\dot\cup} V_r$ with $(\varepsilon,d,p)$-reduced graph $R$, and such that $\frac{n}{4r} \le |V_i| \le \frac{4n}{r}$ for all $i \in [r]$. \item $G'\subset G$ is a graph with $|V(G')|=(1\pm\varepsilon)\mu n$, with $\delta(G')\ge\big(\tfrac{k-1}{k}+{\gamma}\big)p|V(G')|$, and $|N_{G'}(W)| \le 2 \mu n p^{t}$ for any set $W \subset V(G')$ of size $t \le \Delta$. Suppose further that $\left|V_i\cap V(G')\right|=(1\pm\varepsilon)\mu|V_i|$ for each $i$, and that $V_0\cap V(G'),\dots,V_r\cap V(G')$ is also an $(\varepsilon,p)$-regular partition of $G'$ with $(\varepsilon,d,p)$-reduced graph $R$. \item $v\in V(G')$ is a vertex such that there are at least $\gamma p^{\binom{s+1}{2}}(\mu n)^s$ copies of $K_s$ in $N_{G'}(v)$. \end{enumerate} Then there exist a partial embedding $\phi:V(H')\setminus T\to V(G')$ of $H'$ into $G'$ and a subset $\{q_1,\dots,q_k\}\subset [r]$ with the following properties. For each $u,u'\in T$, each $j\in[k]$, and for $\Pi(u)=\phi\big(N_{H'}(u)\cap \mathrm{Dom}(\phi)\big)$, we have \begin{enumerate}[label=\itmarabp{P}{'}] \item\label{item:pel-v} $\phi(x)=v$. \item\label{item:pel-clique} $q_1,\dots,q_k$ forms a clique in $R$. \item\label{item:pel-size} $\big|N_\Gamma\big(\Pi(u)\big)\cap V_{q_{\rho(u)}}\big|=(1\pm\varepsilon')p^{|\Pi(u)|}|V_{q_{\rho(u)}}|$.
\item\label{item:pel-minsize} $\big|N_G\big(\Pi(u)\big)\cap V_{q_{\rho(u)}}\cap V(G')\big|\ge2\zeta p^{|\Pi(u)|}|V_{q_{\rho(u)}}\cap V(G')|$. \item\label{item:pel-osril} If $j\neq\rho(u)$ and $|\Pi(u)|\le\Delta-1$ then the pair $\big(N_\Gamma(\Pi(u),V_{q_{\rho(u)}}),V_{q_j}\big)$ is $(\varepsilon',d,p)_G$-regular. \item\label{item:pel-tsril} If $uu'\in H'$ then the pair $\big(N_\Gamma(\Pi(u),V_{q_{\rho(u)}}),N_\Gamma(\Pi(u'),V_{q_{\rho(u')}})\big)$ is $(\varepsilon',d,p)_G$-regular. \end{enumerate} \end{lemma} After the pre-embedding stage, we want to apply the sparse blow-up lemma to embed the remainder of~$H$. However, the sizes of the clusters $V_{i,j}$ from Lemma~\ref{lem:G} do not quite match the sizes of the sets $X_{i,j}$ from Lemma~\ref{lem:H2}. Also, Lemma~\ref{lem:coverv} embeds some vertices, creating a little further imbalance, and we need to slightly alter the mapping $f$ from Lemma~\ref{lem:H2} to accommodate these pre-embedded vertices. The next lemma allows us to change the sizes of the clusters $V_{i,j}$ slightly to match the partition of $H$, without destroying the properties of the partition of $G$ and of the pre-embedded vertices we worked to achieve. \begin{lemma}[Balancing lemma,~{\cite[Lemma~27]{ABET}}] \label{lem:balancing} For all integers $k\geq 1$, $r_1, \Delta \geq 1$, and reals $\gamma, d >0$ and $0 < \varepsilon < \min\{d,1/(2k)\}$ there exist $\xi >0$ and $C^{\ast} >0$ such that the following is true for every $p \geq C^{\ast} \left(\log n/n\right)^{1/2}$ and every $10\gamma^{-1}\le r \leq r_1$ provided that $n$ is large enough. Let~$\Gamma$ be a graph on vertex set $[n]$ and let $G=(V,E)\subseteq \Gamma$ be a (not necessarily spanning) subgraph with vertex partition $\mathcal V = \{V_{i,j}\}_{i\in[r],j\in[k]}$ that satisfies $n/(8kr) \leq |V_{i,j}| \leq 4n/(kr)$ for each $i\in[r]$, $j\in[k]$. Let $\{n_{i,j}\}_{i \in [r], j\in [k]}$ be an integer partition of $\sum_{i\in[r],j\in[k]} |V_{i,j}|$. 
Let $R^k_r$ be a graph on the vertex set $[r] \times [k]$ with minimum degree $\delta(R^k_r) \geq \big((k-1)/k+\gamma/2\big) kr$ such that $K^k_r \subseteq B^k_r \subseteq R^k_r$. Suppose that the partition $\mathcal V$ satisfies the following properties for each $i\in[r]$, each $j\neq j'\in[k]$, and each $v\in V$. \begin{enumerate}[label=\itmarab{B}] \item \label{lembalancing:sizes} $n_{i,j} - \xi n \leq |V_{i,j}| \leq n_{i,j} + \xi n$, \item \label{lembalancing:regular1} $\mathcal V$ is $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-regular on $R^k_r$ and $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-super-regular on $K^k_r$, \item \label{lembalancing:inheritance1} $\big(N_{\Gamma}(v, V_{i,j}),V_{i,j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i,j'})\big)$ are $\big(\tfrac{\varepsilon}{4},d,p\big)_G$-regular, and \item \label{lembalancing:gamma1} $|N_{\Gamma}(v,V_{i,j})| = \big(1 \pm \tfrac{\varepsilon}{4}\big)p|V_{i,j}|$. \end{enumerate} Then, there exists a partition $\mathcal{V'}= \{V'_{i,j}\}_{i\in[r],j\in[k]}$ of $V$ such that for each $i\in[r]$, each $j\neq j'\in [k]$, and each $v\in V$ we have \begin{enumerate}[label=\itmarabp{B}{'}] \item\label{lembalancing:sizesout} $|V'_{i,j}|=n_{i,j}$, \item\label{lembalancing:symd} $|V_{i,j}\triangle V'_{i,j}|\le 10^{-10}\varepsilon^4k^{-2}r_1^{-2} n$, \item \label{lembalancing:regular} $\mathcal{V'}$ is $(\varepsilon,d,p)_G$-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item \label{lembalancing:inheritance} $\big(N_{\Gamma}(v,V'_{i,j}), V'_{i,j'}\big)$ and $\big(N_{\Gamma}(v,V'_{i,j}), N_\Gamma(v,V'_{i,j'})\big)$ are $(\varepsilon,d,p)_G$-regular, and \item\label{lembalancing:gammaout} for each $1\le s\le\Delta$ and for each $v_1,\ldots,v_s\in[n]$ \[\Big|\bigcap_{\ell\in[s]} N_\Gamma(v_\ell,V_{i,j})\triangle \bigcap_{\ell\in[s]} N_\Gamma(v_\ell,V'_{i,j})\Big|\le 10^{-10}\varepsilon^4k^{-2}r_1^{-2}\deg_\Gamma(v_1,\ldots,v_s)+C^{\ast}\log n\,.\] \end{enumerate} \end{lemma} After
applying Lemma~\ref{lem:balancing} it remains only to check that the conditions of Lemma~\ref{thm:blowup} are met to complete the embedding of $H$ and thus the proof of Theorem~\ref{thm:maink}. \section{Proof of the pre-embedding lemma} \label{sec:pel} The basic idea of the proof is as follows. We construct $\phi$ by first embedding $x$ to $v$, then the neighbours of $x$ to $W:=N_{G'}(v)$, and then we keep embedding further vertices at greater distance from $x$ into $G'$ until we finally reach the neighbours of $T$, which we embed such that their neighbourhoods in the regular partition $(V_q)_{q\in V(R)}$ have the desired sizes and regularity properties. We want to use the regularity method to perform this embedding. That is, we begin by assigning each vertex of $H'$ to a cluster, such that each edge of $H'$ is assigned to two clusters which form an $(\varepsilon,d,p)$-regular pair; these clusters are the initial \emph{candidate sets} for the vertices of $H'$. We embed vertices sequentially. When we embed $y\in V(H')$, we will naturally decrease the candidate sets for $z\in V(H')$ such that $yz\in E(H')$; we need to choose an image for $y$ which has enough $G$-neighbours in these candidate sets in order for the embedding to continue. To guarantee this, we will also need to keep track of $\Gamma$-neighbourhoods, and we will need to maintain the property that if two vertices of $H'$ are adjacent, then their candidate sets form an $(\varepsilon,d,p)$-regular pair. All these properties are easy to maintain because only very few vertices will fail them at any step. The idea is that at the end, the candidate sets for the vertices of $T$ (which we do not embed) are large enough for~\ref{item:pel-minsize}. In order to apply this strategy to prove Lemma~\ref{lem:coverv}, we need a new \emph{fine} regular partition, which partitions $W$ into clusters and which is also consistent with $(V_q)_{q\in V(R)}$.
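The sequential embedding strategy just described can be sketched in code (a simplified illustration under our own naming, not the source's construction; the actual proof additionally tracks $\Gamma$-neighbourhoods and the regularity of candidate-set pairs, which is omitted here): each embedded vertex restricts the candidate sets of its unembedded neighbours to the $G$-neighbourhood of its image.

```python
def greedy_embed(H_adj, order, candidates, G_adj):
    """Sequentially embed H into G along `order`.

    H_adj and G_adj are adjacency dictionaries; candidates[u] is the
    initial candidate set (the cluster assigned to u).  Embedding y
    restricts the candidate set of every unembedded neighbour z of y
    to the G-neighbourhood of y's image, so any later choice for z is
    automatically adjacent to the images of z's embedded neighbours.
    Returns the embedding phi, or None if some candidate set runs dry.
    """
    cand = {u: set(C) for u, C in candidates.items()}
    phi = {}
    used = set()
    for y in order:
        image = None
        for v in cand[y] - used:
            # v must leave a non-empty candidate set for every
            # still-unembedded neighbour of y
            if all(cand[z] & set(G_adj[v])
                   for z in H_adj[y] if z not in phi):
                image = v
                break
        if image is None:
            return None
        phi[y] = image
        used.add(image)
        for z in H_adj[y]:
            if z not in phi:
                cand[z] &= set(G_adj[image])
    return phi
```

In the proof the choice of image is not arbitrary but avoids the few vertices violating the regularity and neighbourhood-size conditions; the sketch only shows the candidate-set bookkeeping.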
The obvious way to do this is to apply Lemma~\ref{lem:SRLb} with $W$ and the sets $(V(G')\cap V_q\setminus W)_{q\in V(R)}$ as an initial partition. However, this turns out not to work: we need the bound $t_1$ on the number of parts in the fine partition produced by Lemma~\ref{lem:SRLb} to be small compared to $\varepsilon^{-1}$, where $\varepsilon$ is the regularity from Lemma~\ref{lem:G}, but we have $v(R)\gg\varepsilon^{-1}$ and $t_1\gg v(R)$, giving a circular dependence. So what we do instead is to fix a number $\ell$ which is small compared to $\varepsilon^{-1}$, select a set $L$ of $\ell$ clusters from $(V_q)_{q\in V(R)}$, and apply Lemma~\ref{lem:SRLb} to the union of $W$ and the $(V(G')\cap V_q\setminus W)_{q\in L}$ with that initial partition; this breaks the circular dependence. We need $R[L]$ to have similar statistical properties to $R$ in order to begin the regularity embedding, i.e.\ to find parts of the fine partition to which we can assign the vertices of $H'$. We show that a random choice of $L$ is likely to give the desired $R[L]$. \begin{proof}[Proof of Lemma~\ref{lem:coverv}] First we fix all constants that we need throughout the proof. Let $\Delta, k\geq 2$ and $\gamma, d>0$ be given. Recall $d\le\tfrac{\gamma}{32}$ by assumption of the lemma. Let $d' = \min(\tfrac12d, 10^{-5k}\gamma)$ and choose $\xi= 10^{-6}2^{-k}\gamma$ and an integer \[\ell=\max\big(1000\xi^{-6}\log\xi^{-1},100\cdot 2^k\gamma^{-1}\big)\,.\] Let $\nu^{\ast}_{\Delta-1,\Delta-1}=\nu^{\ast}_{i,\Delta}=\nu^{\ast}_{\Delta,i}=\tfrac{1}{100\Delta}8^{-\Delta}d'^{\Delta}$ for $i \in [\Delta]$.
For each $(i,j) \in\{0, \dots, \Delta-1\}^2\setminus\{(\Delta-1,\Delta-1)\}$ in reverse lexicographic order, we choose~$\nu^{\ast}_{i,j}\le\nu^{\ast}_{i+1,j},\nu^{\ast}_{i,j+1},\nu^{\ast}_{i+1,j+1}$ not larger than the $\varepsilon_0$ returned by Lemma~\ref{lem:OSRIL} for both input $\nu^{\ast}_{i+1,j}$ and $d'$, and for input $\nu^{\ast}_{i,j+1}$ and $d'$, and not larger than the $\varepsilon_0$ returned by Lemma~\ref{lem:TSRIL} for input $\nu^{\ast}_{i+1,j+1}$ and $d'$. Choose $\nu_0=\min\big(\nu^{\ast}_{0,0},\tfrac{d'}{2},10^{-5}\gamma\big)$. Now, Lemma~\ref{lem:SRLb} with input $\nu_0^2/(16\ell^2)$ and $2\ell$ returns $t_1$. Set $\zeta= \big(\tfrac{d'}{4}\big)^\Delta /(4 t_1)$. Given $\varepsilon'$, let $\eps^{\ast\ast}_{\Delta} = \varepsilon'$ and for each $i = \Delta-1, \dots,1,0$ in turn, let $\eps^{\ast\ast}_{i}\le\eps^{\ast\ast}_{i+1}$ be returned by Lemma~\ref{lem:OSRIL} with input $\eps_{\scalebox{\scalefactor}{$\mathrm{OSRIL}$}}= \eps^{\ast\ast}_{i+1}$ and $\alpha_{\scalebox{\scalefactor}{$\mathrm{OSRIL}$}} = d$. Next, let $\eps^{\ast}_{\Delta-1,\Delta-1}=\varepsilon'$ and $\eps^{\ast}_{i,\Delta}=\eps^{\ast}_{\Delta,i}=1$ for $i \in [\Delta]$. For each $(i,j) \in\{0, \dots, \Delta-1\}^2\setminus\{(\Delta-1,\Delta-1)\}$ in reverse lexicographic order, we choose~$\eps^{\ast}_{i,j}\le\eps^{\ast}_{i+1,j},\eps^{\ast}_{i,j+1},\eps^{\ast}_{i+1,j+1}$ not larger than the $\varepsilon_0$ returned by Lemma~\ref{lem:OSRIL} for both input $\eps^{\ast}_{i+1,j}$ and $d$, and for input $\eps^{\ast}_{i,j+1}$ and $d$, and not larger than the $\varepsilon_0$ returned by Lemma~\ref{lem:TSRIL} for input $\eps^{\ast}_{i+1,j+1}$ and $d$. We choose $\varepsilon_0 \leq \min\big(\eps^{\ast\ast}_0, \eps^{\ast}_{0,0}, \tfrac{\nu_0}{2t_1}\big)$ small enough such that $(1+\varepsilon_0)^{\Delta} \leq 1+\varepsilon'$ and $(1-\varepsilon_0)^{\Delta} \geq 1-\varepsilon'$.
Given $r\ge 10^5\gamma^{-1}$, $\varepsilon$ with $0<\varepsilon\le\varepsilon_0$, and $\mu > 0$, let $C$ be a large enough constant for all of the above calls to Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, and for Proposition~\ref{prop:chernoff} with input $\varepsilon_0$. Finally, we choose $C^{\ast}=10^{10\Delta}d'^{-\Delta}\ell t_1r\mu^{-1}$. Let $\Gamma = G(n,p)$ with $p \geq C^{\ast} {(\log n/n)}^{1/\Delta}$. Then a.a.s.~$\Gamma$ satisfies the properties stated in Lemma~\ref{lem:OSRIL}, Lemma~\ref{lem:TSRIL}, Proposition~\ref{prop:chernoff} and Lemma~\ref{lem:SRLb} with the parameters specified above. We assume from now on that these good events occur, that is, that $\Gamma$ has these properties. Let $G'$, $v\in V(G')$, $G$, $\{V_i\}_{i\in \{0, \ldots, r\}}$, $H'$, $x \in V(H')$, the $k$-colouring $\rho$ of $V(H')$, and the $(\varepsilon,d,p)$-reduced graph $R$, be as in the statement of the lemma. Since $\varepsilon\le\varepsilon_0$, $R$ is also an $(\varepsilon_0,d,p)$-reduced graph. To be able to apply Lemma~\ref{lem:SRLb} we need to choose a bounded-size subset of the clusters $\{V_i\}_{i\in \{0, \ldots, r\}}$. As the clusters $\{V_i\}_{i\in \{0, \ldots, r\}}$ might be of different sizes and we will want to have a minimum degree condition on the reduced graph, we will consider a weighted version of this degree that takes the cluster sizes into account. \begin{claim}\label{claim:Vast} There exists $L \subset [r]$ of size $\ell$ such that $R^\ast:=R[L]$ satisfies the following weighted minimum degree condition, where $V^\ast = \bigcup_{i \in L} V_i$. \[ \forall i \in L: \sum_{j \in N_R(i) \cap L} \frac{|V_j|}{|V^\ast|} \ge \left(\frac{k-1}{k} + \frac{\gamma}{5}\right)\,.
\] Additionally, we have that \[ W:= \left\{ w \in N_{G'}(v): |N_{G'}(w) \cap V^\ast| \ge \left(\frac{k-1}{k} + \frac{\gamma}{5}\right)p|V^\ast \cap V(G')| \right\} \] has size at least $(1 - \xi) |N_{G'}(v)|$ and there are at least $\frac12 \gamma p^{\binom{s+1}{2}}{(\mu n)}^s$ copies of $K_s$ in $W$. \end{claim} \begin{claimproof} We choose a subset $L \subset [r]$ of size $\ell$ uniformly at random. First, we will transfer the minimum degree of $G$ to the reduced graph and show that with high probability the minimum degree is preserved on the chosen clusters. Recall that $G$ has minimum degree $\delta(G) \ge (\frac{k-1}{k} + \gamma)pn$ and that we have the following bounds on the cluster sizes. \begin{equation}\label{eq:cluster-sizes} \frac{4n}{r} \ge |V_i| \ge \frac{n}{4r} \ge Cp^{-1} \log n \end{equation} Without loss of generality, we may assume that no $V_i$ forms an irregular pair with more than a $\sqrt{\varepsilon}$-fraction of the clusters; otherwise we add it to $V_0$, which over all clusters increases the size of $V_0$ by at most $4\sqrt{\varepsilon}n$. Fix $i \in [r]$. Proposition~\ref{prop:chernoff} applied to the edges between $V_i$ and $V_0$ implies that \[ e(V_i, V_0) \le 2p (\varepsilon + 4\sqrt{\varepsilon})n |V_i| \quad \text{and} \quad e(V_i) \le 2p {|V_i|}^2 \le 2p \frac{16}{r}n |V_i|\,. \] Also, we can bound the number of edges from $V_i$ to other clusters that lie in pairs which are not dense or not $(\varepsilon,p)$-regular as follows. \[ e\Big(V_i, \bigcup_{j \in R \setminus N_R(i)} V_j\Big) \le dpn |V_i| + 2p \cdot 4\sqrt{\varepsilon} n |V_i|.
\] Putting the above together, we obtain that \[ e\Big(V_i, \bigcup_{j \in N_R(i)} V_j\Big) \ge \left(\frac{k-1}{k} + \gamma - 2\varepsilon - 16\sqrt{\varepsilon} - d - \frac{32}{r}\right)pn |V_i|\,. \] As, again by Proposition~\ref{prop:chernoff}, the number of edges between any $V_i$ and $V_j$ is at most $(1+\varepsilon_0)p|V_i||V_j|$, we get that \[ \sum_{j \in N_R(i)} \frac{|V_j|r}{|V(G)|} \ge \left(\frac{k-1}{k} + \gamma - 2\varepsilon - 16\sqrt{\varepsilon} - d - \frac{32}{r}\right) {(1+\varepsilon_0)}^{-1} r \ge \left(\frac{k-1}{k} + \frac{\gamma}{2} \right) r. \] By the size conditions on the clusters, the relative sizes $w_j := \frac{|V_j|r}{|V(G)|}$ take values in $(\frac 14,4)$. We now consider \[ w'_j = \xi \left\lfloor {w_j}/{\xi} \right\rfloor, \] the discretisation of $w_j$ into steps of size~$\xi$. Of these discretised weights, we will ignore those that occur fewer than $\xi^2 r$ times. We lose at most a factor of $4\xi$ due to the discretisation as all weights are at least $\frac 14$. Also weights in $(\frac 14,4)$ occurring fewer than $\xi^2 r$ times contribute at most $16 \xi r$ to the sum, so we get the following lower bound. \[ \sum_{j \in N_R(i)} w'_j \ge \left(1-4\xi\right) \left(\frac{k-1}{k} + \frac{\gamma}{2} \right) r - 16 \xi r \ge \left(\frac{k-1}{k} + \frac{\gamma}{3} \right) r. \] We can now apply the hypergeometric inequality (Theorem~\ref{thm:hypergeometric}) to all possible rounded weight values separately. For any $j \in [r]$ the probability that $j$ is in $L$ is $\ell/r$, and so for a given weight value in $(\frac 14, 4)$, which occurs, say, $\theta r$ times, the probability that this value is chosen fewer than $(1-\xi)\theta\ell$ times is at most $2e^{-\xi^2 \cdot \xi \theta \ell/3} \le 2e^{-\xi^5 \ell/3}$.
This implies by the union bound that with probability at most $4\xi^{-1} 2e^{-\xi^5 \ell/3}$ we do not have \begin{equation}\label{eq:weighted-min-degree} \sum_{j \in N_R(i) \cap L} w_j \ge (1-\xi)\left(\frac{k-1}{k} + \frac{\gamma}{3} \right) \frac{\ell}{r} r \ge \left(\frac{k-1}{k} + \frac{\gamma}{4} \right) \ell. \end{equation} So by linearity of expectation the expected number of vertices in $R^\ast$ that do not satisfy~\eqref{eq:weighted-min-degree} is at most $8 \xi^{-1}\ell e^{-{\xi^5 \ell}/{3}}<1/10$, where the inequality is by choice of $\ell$. By Markov's inequality, the probability that there is any such vertex in $R^\ast$ is thus at most~$1/10$. By the same discretisation of $w_j$ and application of the hypergeometric inequality to the discretised weights, we can also deduce that \begin{equation}\label{eq:size-Vast} |V^\ast| = \frac{|V(G)|}{r} \sum_{i \in L} w_i = (1 \pm 100\xi) \frac{\ell}{r} \sum_{i \in [r]} w_i = (1 \pm 100\xi) (1 \pm \varepsilon) \frac{\ell |V(G)|}{r} \end{equation} with probability at least $9/10$. Putting~\eqref{eq:weighted-min-degree} and~\eqref{eq:size-Vast} together implies that with probability at least $8/10$ the first claimed statement holds. For the claim, we also require that the minimum degree condition of the vertices in $N_{G'}(v)$ carries over to the chosen clusters for most vertices. Fix $w \in N_{G'}(v)$. For $j \in [r]$ we consider the following weighted $p$-density, which may take values in $(0,5)$. \[ {d}_{w,j} = {d}_{G,p}(\{w\},V_j \cap V(G')) \frac{|V_j \cap V(G')|r}{|V(G')|}. \] Accounting for the exceptional set $V_0$ with Proposition~\ref{prop:chernoff}, the minimum degree condition on $G'$ of $(\frac{k-1}{k} + {\gamma})p|V(G')|$ implies that these weighted $p$-densities satisfy \[ \sum_{j \in [r]} {d}_{w,j} \ge \left(\frac{k-1}{k} + {\gamma} - 2\varepsilon\right)r \ge \left(\frac{k-1}{k} + \frac{\gamma}{2}\right)r.
\] Similarly to before, we consider ${d'}_{w,i} = \xi \lfloor {{d}_{w,i}}/{\xi} \rfloor$, the discretisation of ${d}_{w,i}$ into steps of size~$\xi$. Of these discretised weighted densities, we ignore those that occur fewer than $\xi^2 r$ times and those that are smaller than $\sqrt{\xi}$. The small densities contribute at most $\sqrt{\xi}r$ to the sum and we lose a factor of at most $\sqrt{\xi}$ due to the discretisation for larger values. Also weights in $(\sqrt{\xi},5)$ occurring fewer than $\xi^2 r$ times contribute at most $25\xi r$ to the sum, so we get the following lower bound. \[ \sum_{i \in [r]} {d'}_{w,i} \ge (1 - \sqrt{\xi}) \left(\frac{k-1}{k} + \frac{\gamma}{2} - \sqrt{\xi} - 25 \xi\right)r \ge \left(\frac{k-1}{k} + \frac{\gamma}{3}\right)r. \] Applying the hypergeometric inequality to all density values separately as before, we get that for any $w \in N_{G'}(v)$, with probability at most $5\xi^{-1} 2e^{-\xi^5 \ell/3} \le \xi / 10$ we do not have \begin{equation}\label{eq:W-weighted-minimum-degree} \sum_{i \in L} {d'}_{w,i} \ge \left( 1 - \xi\right) \left(\frac{k-1}{k} + \frac{\gamma}{3}\right) \frac{\ell}{r} r \ge \left(\frac{k-1}{k} + \frac{\gamma}{4}\right) \ell. \end{equation} So the expected number of vertices in $N_{G'}(v)$ not satisfying~\eqref{eq:W-weighted-minimum-degree} is at most $\xi |N_{G'}(v)| / 10$. By Markov's inequality, with probability at least $9/10$ at most a fraction $\xi$ of vertices in $N_{G'}(v)$ violate~\eqref{eq:W-weighted-minimum-degree}. In particular, all vertices satisfying~\eqref{eq:W-weighted-minimum-degree} have at least \[ (1 - 100\xi) (1 - \varepsilon) \left(\frac{k-1}{k} + \frac{\gamma}{4}\right) (1 - \varepsilon) \mu p|V^\ast| \ge \left(\frac{k-1}{k} + \frac{\gamma}{5}\right)p|V^\ast \cap V(G')| \] neighbours in $V^\ast \cap V(G')$ if~\eqref{eq:size-Vast} holds. So indeed with probability at least $7/10$ the first two claimed statements hold, so assume we chose $L$ such that they do.
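The discretisation trick used in both halves of this argument is elementary and can be checked numerically (a sketch with arbitrary illustrative values of $\xi$ and the weights, not taken from the source): rounding a weight $w\ge\tfrac14$ down to a multiple of $\xi$ loses less than $\xi\le 4\xi w$, i.e.\ at most a factor $(1-4\xi)$.

```python
import math
import random

def discretise(w, xi):
    """Round w down to the nearest multiple of xi (the w' of the proof)."""
    return xi * math.floor(w / xi)

random.seed(0)
xi = 0.01
for w in (random.uniform(0.25, 4.0) for _ in range(1000)):
    wd = discretise(w, xi)
    # rounding down loses less than one step of size xi ...
    assert w - xi - 1e-12 < wd <= w + 1e-12
    # ... which, since w >= 1/4, is at most a 4*xi fraction of w
    assert wd >= (1 - 4 * xi) * w - 1e-12
```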
For the claim, it only remains to show the lower bound on the number of cliques in $W$. By the assumption in the lemma that any $t \le \Delta$ vertices of $G'$ have at most $2 p^t \mu n$ common neighbours in $G'$, building up cliques inductively shows that $v$ and each $w \in N_{G'}(v)$ are together contained in at most \[ \prod_{t=2}^{s} 2 p^t \mu n = p^{\binom{s+1}{2}-1} {(2\mu n)}^{s-1} \] copies of $K_{s+1}$. Since $|W|\ge (1-\xi)|N_{G'}(v)|$, the number of copies of $K_s$ which are in $N_{G'}(v)$ but not $W$ is at most $\xi|N_{G'}(v)|\cdot p^{\binom{s+1}{2}-1}(2\mu n)^{s-1}$. Since $|N_{G'}(v)|\le 2\mu pn$, and $N_{G'}(v)$ contains at least $\gamma p^{\binom{s+1}{2}}(\mu n)^s$ copies of $K_s$, there are at least \[ \gamma p^{\binom{s+1}{2}}{(\mu n)}^s - \xi \cdot2\mu pn\cdot p^{\binom{s+1}{2}-1} {(2\mu n)}^{s-1} \ge \tfrac12 \gamma p^{\binom{s+1}{2}} {(\mu n)}^s \] copies of $K_s$ in $W$. \end{claimproof} Let $\{W_i\}_{i \in [\ell]}$ be an arbitrary equipartition of $W$ into $\ell$ parts (so that the fine partition we are about to obtain has enough parts in $W$). We apply Lemma~\ref{lem:SRLb} to $G'$ with the $2\ell$-part initial partition $\{ (V_i \cap V(G')) \setminus W \}_{i\in L} \cup \{W_i\}_{i \in [\ell]}$ and input parameter $\nu_0^2/(16\ell^2)$. This returns a partition refining each of these sets into $1\le t\le t_1$ clusters $\{V_{i,j}\}_{i\in L,j\in[t]}\cup\{W_{i,j}\}_{i\in[\ell],j\in[t]}$ together with small exceptional sets~$\{V_{i,0}:i\in L\}\cup\{ W_{i,0}: i\in[\ell]\}$. From the definition of a regular refinement, there are at most $\tfrac{\nu_0^2}{16\ell^2}\cdot(2\ell t)^2$ irregular pairs in this partition, and in particular at most $\nu_0 t$ of the clusters form an irregular pair with more than $\nu_0 t$ of the clusters. We include the vertices of all those clusters in the exceptional sets, which then make up a fraction of at most $2 \nu_0$ of the vertices.
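The exponent bookkeeping behind the product $\prod_{t=2}^{s} 2p^t\mu n = p^{\binom{s+1}{2}-1}(2\mu n)^{s-1}$ above (and its variant starting at $t=3$ used below) reduces to the identity $\sum_{t=2}^{s} t = \binom{s+1}{2}-1$; a quick exact-arithmetic sanity check (our own, with arbitrary rational test values for $p$ and $\mu n$):

```python
from math import comb
from fractions import Fraction

# Verify the two product identities used in the clique counts, with
# exact rationals so there is no floating-point slack.
p, mun = Fraction(1, 7), Fraction(100)
for s in range(2, 12):
    lhs = Fraction(1)
    for t in range(2, s + 1):
        lhs *= 2 * p**t * mun
    assert lhs == p**(comb(s + 1, 2) - 1) * (2 * mun)**(s - 1)
    # the variant with the product starting at t = 3
    lhs3 = Fraction(1)
    for t in range(3, s + 1):
        lhs3 *= 2 * p**t * mun
    assert lhs3 == p**(comb(s + 1, 2) - 3) * (2 * mun)**(s - 2)
```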
We now want to obtain $s$ clusters $W'_1,\dots,W'_s$ in ${\{W_{i,j}\}}_{i \in [\ell],j \in [t]}$ that are pairwise $(\nu_0, d', p)$-regular. Assume for a contradiction that no such clusters exist. Then each copy of $K_s$ in $W$ must contain an edge meeting an exceptional set $W_{i,0}$, an edge which does not lie in a $(\nu_0, d', p)$-regular pair, or an edge contained completely in some set $W_{i,j}$ for $i\in[\ell]$ and $j\in[t]$. Note that we have for all $i \in [\ell]$ and $j \in [t]$ that \[ |W_{i,j}| \ge \frac{1}{2 \ell t_1} |W| \ge \frac{\mu np}{4\ell t_1} \ge Cp^{-1} \log n. \] So we may apply Proposition~\ref{prop:chernoff} to bound the number of edges within and between clusters. Using the upper bound on common neighbourhoods in $G'$ given in the lemma statement to bound the number of edges meeting the exceptional sets, we obtain that deleting at most \[ 2 \nu_0 |W| 2p^2 \mu n + 2p(\nu_0 + d') {|W|}^2 + \ell 2p {(|W|/\ell)}^2 \le (8\nu_0 + 8\nu_0 + 8d' + {2}/{\ell}) p^3 \mu^2 n^2 \] edges would remove all cliques from $W$. Again by the upper bound on common neighbourhoods in $G'$ given in the lemma, any of these edges is contained in at most \[ \prod_{t=3}^{s} 2 p^t \mu n = p^{\binom{s+1}{2}-3} {(2\mu n)}^{s-2} \] copies of $K_{s+1}$ together with $v$. So there would be at most \[ (16\nu_0 + 8d' + {2}/{\ell}) p^3 \mu^2 n^2 p^{\binom{s+1}{2}-3} {(2\mu n)}^{s-2} < \tfrac12 \gamma p^{\binom{s+1}{2}} {(\mu n)}^s \] copies of $K_s$ in $W$, a contradiction. It follows that there are $s$ clusters $W'_1,\dots,W'_s$ in ${\{W_{i,j}\}}_{i \in [\ell],j \in [t]}$ which are pairwise $(\nu_0, d', p)$-regular. Because the vertices of $W$ each have at least $\big(\tfrac{k-1}{k}+\tfrac{\gamma}{5}\big)p|V^*\cap V(G')|$ $G'$-neighbours in $V^*$, the number of edges leaving each cluster $W'_i$ to $V^*$ is at least $|W'_i|\big(\tfrac{k-1}{k}+\tfrac{\gamma}{5}\big)p|V^*\cap V(G')|$.
By Proposition~\ref{prop:chernoff}, and because at most $\nu_0t$ irregular pairs leave $W'_i$, at most $(1+\varepsilon_0)p|W'_i|\nu_0|V^*\cap V(G')|$ of these edges lie in irregular pairs. By definition, at most $d'p|W'_i||V^*\cap V(G')|$ of these edges lie in pairs of relative density less than $d'$. Thus the remaining edges lie in $(\nu_0,d',p)$-regular pairs, and there are at least $|W'_i|\big(\tfrac{k-1}{k}+\tfrac{\gamma}{6}\big)p|V^*\cap V(G')|$ of these edges. Since the number of edges between $W'_i$ and any given $V_{i',j'}$ is at most $(1+\varepsilon_0)p|W'_i||V_{i',j'}|$ by Proposition~\ref{prop:chernoff}, we obtain \begin{equation}\label{eq:weighted-regular-min-degree} \sum_{V_{i',j'}:\, (W'_i, V_{i',j'})\text{ is $(\nu_0, d', p)$-regular}} \frac{|V_{i',j'}|}{|V^\ast \cap V(G')|} \ge \left(\frac{k-1}{k} + \frac{\gamma}{8}\right)\,. \end{equation} Now we can choose the clusters into which we will embed the vertices of $H'$. We choose sequentially \[ (q_{s+1},j_{s+1}),\dots,(q_{k+1},j_{k+1}) \in L \times [t] \] such that for each $1\le i\le s$ and each $s+1\le i'\le k+1$, the pair $\big(W'_i,V_{q_{i'},j_{i'}}\cap V(G')\big)$ is $(\nu_0,d',p)$-regular, and for each $s+1\le i'<i''\le k+1$ the pair $(q_{i'},q_{i''})$ is an edge of $R^*$. This is possible by~\eqref{eq:weighted-regular-min-degree} and Claim~\ref{claim:Vast}, which give a weighted minimum degree condition that implies that for any $k$ clusters (in $W$ or $V^*$ or a mixture) there is a cluster in $V^*$ which satisfies the given condition with respect to all $k$ clusters.
We then choose pairs $(q_{s},j_{s}),\dots,(q_{1},j_{1})$ in that order sequentially such that for each $a \in \{s, \dots, 1\}$ the clusters \[ W'_1,\dots,W'_{a-1}, V_{q_{a},j_{a}},V_{q_{a+1},j_{a+1}},\dots,V_{q_{k+1},j_{k+1}} \] satisfy the same condition, i.e.\ for each $1\le i\le a-1$ and each $a\le i'\le k+1$, the pair $\big(W'_i,V_{q_{i'},j_{i'}}\cap V(G')\big)$ is $(\nu_0,d',p)$-regular, and for each $a\le i'<i''\le k+1$ the pair $(q_{i'},q_{i''})$ is an edge of $R^*$. Note that by choice of $\varepsilon_0$, if $(q_{i'},q_{i''})$ is an edge of $R^*$ then the pair $\big(V_{q_{i'},j_{i'}}\cap V(G')\setminus W,V_{q_{i''},j_{i''}}\cap V(G')\setminus W\big)$ is $(\nu_0,d',p)$-regular in $G'$. For convenience, we let $V'_i:=V_{q_i,j_i}\cap V(G')\setminus W$ for each $1\le i\le k+1$. We will embed $H'-(\{x\}\cup T)$ into the chosen clusters, i.e.\ $W'_1,\dots,W'_s,V'_1,\dots,V'_{k+1}$, using the regularity embedding strategy mentioned above. We will need to embed some vertices of $H'$ which are not neighbours of $x$ into the sets $W'_i$. For this to work, each such vertex $u$ needs to have at most $\Delta-3$ neighbours which we embed before $u$, and the aim of the next arguments is to assign vertices of $H'$ to clusters, and put an order on $V(H')$, which ensures this. Recall that $\rho$ is a proper $k$-colouring of $H'$ which uses at most $s$ colours on $N(x)$. Reordering the colours if necessary, let us assume $\rho$ uses only colours in $[s]$ on $N(x)$. We define a proper $(k+1)$-vertex colouring $\rho':V(H') \to [k+1]$ inductively as follows. Initially we set $\rho'(w)=\rho(w)$ for all $w$ in $H'$. Let \[ U_{\rho'} = \bigcup_{i=2}^{s} \left\{ w \in N^i(x): \rho'(w) \le s-i+1 \right\}, \] where $N^i(x)$ refers to the vertices at distance $i$ from $x$. If $U_{\rho'}$ contains a vertex $w$ with no neighbour in ${\rho'}^{-1}(i)$ for some $\rho'(w)+1\le i\le k+1$, we set $\rho'(w)=i$ (if there are several such $i$, we choose one arbitrarily).
We repeat this step until $U_{\rho'}$ contains no such vertices. Since the colour of any given vertex only increases through this process, the recolouring procedure must terminate eventually. The resulting $\rho'$ has the following property: if $u$ is any vertex with $d(x,u)\ge 2$ and $d(x,u)+\rho'(u)\le s+1$, then $u$ has a neighbour in each of the colour classes $\rho'(u)+1,\dots,k+1$. In particular, since $\rho'(u)\le s-1$ (as otherwise $d(x,u)+\rho'(u)\le s+1$ is impossible), and since $s\le k-1$ by assumption of the lemma, $u$ has a neighbour in each of the colour classes $k-1$, $k$ and $k+1$. Observe that no vertex in these colour classes is in $U_{\rho'}$ by definition. Note that the colouring remains unchanged on $N(x)$ and the vertices at distance $s+1$ from $x$. We define an order $<_{\rho'}$ on $V(H')\setminus \{x\}$ by putting first all the vertices of $U_{\rho'}$ in an arbitrary order, then the remaining vertices of $V(H')\setminus (T\cup\{x\})$ in an arbitrary order, and finally the vertices of $T$ in an arbitrary order. With the colouring $\rho'$ defined as above, this gives us, for all $u$ at distance at least two from $x$ with $\rho'(u) + d(x,u) \le s+1$: \begin{equation} \label{eq:order} |\text{pred}_{<_{\rho'}}(u) \cap N(u)| = |\{u': u' <_{\rho'} u, u' \in N(u)\}| \le \Delta - 3\,. \end{equation} Now we can assign the vertices of $H'$ to clusters. For $u \in V(H')$, let \[ V_u = V_{q_{\rho'(u)}} \quad \text{and} \quad C_u = \begin{cases} W'_{\rho'(u)} & \text{if } \rho'(u) + d(x,u) \le s+1 \\ V_{q_{\rho'(u)},j_{\rho'(u)}} & \text{otherwise}.\\ \end{cases} \] We now iteratively embed the vertices of $H'$ in the order specified above respecting the assignments to clusters. The following claim, which we prove by induction on the number of embedded vertices, encapsulates the conditions we maintain through this embedding. 
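The recolouring procedure defining $\rho'$ can be written out as a short fixed-point iteration (an illustrative sketch with our own data representation, not from the source: `adj` is an adjacency dictionary, `dist` the distances from $x$, and colours are $1,\dots,k+1$).

```python
def recolour(adj, dist, rho, s, k):
    """Compute rho' from a proper k-colouring rho, as in the text.

    While some vertex w with 2 <= dist[w] <= s and
    rho'[w] <= s - dist[w] + 1 (i.e. w is in U_{rho'}) has no
    neighbour coloured i for some i > rho'[w], recolour w with such
    an i.  Colours only ever increase, so the loop terminates, and
    recolouring to a colour absent from the neighbourhood keeps the
    colouring proper.
    """
    rp = dict(rho)
    changed = True
    while changed:
        changed = False
        for w in adj:
            if not (2 <= dist[w] <= s and rp[w] <= s - dist[w] + 1):
                continue  # w is not in U_{rho'}
            nbr_cols = {rp[u] for u in adj[w]}
            for i in range(rp[w] + 1, k + 2):
                if i not in nbr_cols:
                    rp[w] = i
                    changed = True
                    break
    return rp
```

On a path $x$--$a$--$b$--$c$ with $s=2$, $k=3$ and initial colours $1,2,1,2$, the vertex $b$ (distance $2$, colour $1\le s-2+1$) is recoloured to $3$, after which no vertex of $U_{\rho'}$ remains recolourable.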
Here, as in the statement of the lemma, we set $\Pi(u)=\phi\big(N_{H'}(u)\cap \mathrm{Dom}(\phi)\big)$, and recall that $T$ is the set of vertices of $H'$ at distance exactly $s+1$ from $x$. \begin{claim}\label{claim:coverv} For each integer $0 \le z \le |V(H')\setminus T|-1$ there exists an embedding $\phi$ of the first $z$ vertices of~$H'\setminus (T\cup\{x\})$ (w.r.t.\ the order $<_{\rho'}$) into $G$ such that \begin{enumerate}[label=\itmarab{I}] \item\label{item:claim-coverv-right-cluster} for every $u \in \mathrm{Dom}(\phi)$ we have $\phi(u) \in C_{u}$, \end{enumerate} and for every $u,u' \in H'\setminus (\mathrm{Dom}(\phi)\cup\{x\})$ with $u' \in N_{H'}(u)$ we have the following. \begin{enumerate}[label=\itmarab{I},start=2] \item\label{item:claim-coverv-2} $|N_G(\Pi(u),C_u)| \geq \left(\frac{d'}4\right)^{|\Pi(u)|} p^{|\Pi(u)|}|C_u|$, \item\label{item:claim-coverv-3} $|N_{\Gamma}(\Pi(u),C_u)| = {(1\pm\nu_0)}^{|\Pi(u)|}p^{|\Pi(u)|} |C_u|$, \item\label{item:claim-coverv-4} $\big(N_{\Gamma}(\Pi(u),C_u),N_{\Gamma}(\Pi(u'),C_{u'})\big)$ is $(\nu^{\ast}_{|\Pi(u)|,|\Pi(u')|},d',p)_G$-regular. \end{enumerate} Also, if $d(x,u)+\rho'(u),d(x,u')+\rho'(u')> s+1$ we have \begin{enumerate}[label=\itmarab{L}] \item\label{item:claim-coverv-1prime} if $|\Pi(u)|\le\Delta-1$ then $\big(N_{\Gamma}(\Pi(u),V_u),V_{q_j}\big)$ is $(\eps^{\ast\ast}_{|\Pi(u)|}, d, p)_G$-regular for each $j\neq\rho'(u)$, \item\label{item:claim-coverv-3prime} $|N_{\Gamma}(\Pi(u),V_u)| = {(1\pm\varepsilon_0)}^{|\Pi(u)|}p^{|\Pi(u)|} |V_u|$, \item\label{item:claim-coverv-4prime} $\big(N_{\Gamma}(\Pi(u),V_u),N_{\Gamma}(\Pi(u'),V_{u'})\big)$ is $(\eps^{\ast}_{|\Pi(u)|,|\Pi(u')|},d,p)_G$-regular. \end{enumerate} \end{claim} \begin{claimproof} We prove the claim inductively, starting with $z=0$ and $\phi$ the empty embedding. We first check that the claimed properties hold for this embedding. \ref{item:claim-coverv-right-cluster} is true vacuously.
Since $\Pi(u)=\emptyset$ for each $u\in V(H')\setminus\{x\}$, the various neighbourhoods in $C_u$ and $C_{u'}$ are equal to $C_u$ and $C_{u'}$. So~\ref{item:claim-coverv-2} and~\ref{item:claim-coverv-3} hold trivially, and~\ref{item:claim-coverv-4} holds by choice of the $W'_i$ and by choice of $\nu^{\ast}_{0,0}$. Similarly,~\ref{item:claim-coverv-1prime} and~\ref{item:claim-coverv-4prime} hold because by choice of the $q_j$ the pair $(V_{q_j},V_{q_{j'}})$ is $(\varepsilon,d',p)$-regular for each $1\le j<j'\le k+1$, and~\ref{item:claim-coverv-3prime} holds trivially. We now prove the induction step. Suppose that for some $0\le z<|V(H')\setminus T|-1$, the map $\phi$ is an embedding of the first $z$ vertices of $H'-(T\cup \{x\})$ satisfying the conclusion of Claim~\ref{claim:coverv}. Let $w$ be the $(z+1)$st vertex of $H'-(T\cup\{x\})$. We aim to show the existence of an embedding $\phi'$ extending $\phi$ and satisfying the conclusion of Claim~\ref{claim:coverv} for $z+1$. To do this, it is enough to show that, for each statement among~\ref{item:claim-coverv-2}--\ref{item:claim-coverv-4} and~\ref{item:claim-coverv-1prime}--\ref{item:claim-coverv-4prime} separately, the number of vertices in $N_G\big(\Pi(w),C_w\big)$ which cause the given statement to fail is small compared to $\big|N_G\big(\Pi(w),C_w\big)\big|$; then we can choose a vertex $y$ in that set (so guaranteeing~\ref{item:claim-coverv-right-cluster}) which causes none of the statements to fail, and obtain the desired embedding $\phi\cup\{w\to y\}$. We therefore record some lower bounds on $\big|N_G\big(\Pi(w),C_w\big)\big|$. Suppose either that $d(x,w)\ge2$ and $d(x,w)+\rho'(w)\le s+1$, or that $d(x,w)=1$ and $w$ has two neighbours in $H'-x$ which come after $w$ in $<_{\rho'}$. In the first case, by~\eqref{eq:order}, we have $|\Pi(w)|\le\Delta-3$. In the second case, since $w$ has three neighbours in $H'$ which do not come before it in $<_{\rho'}$ (as $x$ is not in that order at all) we have $|\Pi(w)|\le\Delta-3$.
In either case, by~\ref{item:claim-coverv-2}, we get \begin{equation}\label{eq:coverv:two} \big|N_G\big(\Pi(w),C_w\big)\big|\ge\big(\tfrac{d'}{4}\big)^{\Delta-3}p^{\Delta-3}|C_w|\ge\big(\tfrac{d'}{4}\big)^{\Delta-3}p^{\Delta-2}\cdot\tfrac{\mu n}{4\ell t_1}\ge 100C\Delta^2p^{-2}\log n\,, \end{equation} where the final inequality uses $p\ge C^*\big(\tfrac{\log n}{n}\big)^{1/\Delta}$ and the choice of $C^*$. By a similar calculation, if either $d(x,w)=1$ and $w$ has a neighbour coming after it in $<_{\rho'}$, or $d(x,w)+\rho'(w)>s+1$ and $w$ has a neighbour coming after it in $<_{\rho'}$, we have \begin{equation}\label{eq:coverv:one} \big|N_G\big(\Pi(w),C_w\big)\big|\ge\big(\tfrac{d'}{4}\big)^{\Delta-1}p^{\Delta-1}\cdot\tfrac{\mu n}{4\ell t_1r}\ge 100C\Delta^2p^{-1}\log n\,. \end{equation} Finally, if either $d(x,w)=1$ or $d(x,w)+\rho'(w)>s+1$, we get \begin{equation}\label{eq:coverv:none} \big|N_G\big(\Pi(w),C_w\big)\big|\ge\big(\tfrac{d'}{4}\big)^{\Delta}p^{\Delta}\cdot\tfrac{\mu n}{4\ell t_1r}\ge 100C\Delta^2\log n\,. \end{equation} We now estimate the fraction of $N_G\big(\Pi(w),C_w\big)$ which causes each of the desired statements to fail. The statement~\ref{item:claim-coverv-2} can only fail for a neighbour $u$ of $w$, and then only if we choose $y\in N_G\big(\Pi(w),C_w\big)$ which has too few neighbours in $N_G\big(\Pi(u),C_u\big)$. But by~\ref{item:claim-coverv-4} these two sets are on either side of a $\big(\nu^*_{|\Pi(w)|,|\Pi(u)|},d',p\big)_G$-regular pair, and by~\ref{item:claim-coverv-2} and~\ref{item:claim-coverv-3} the latter covers more than a $\nu^*_{\Delta,\Delta}$-fraction of $N_{\Gamma}\big(\Pi(u),C_u\big)$. So by regularity, at most $\nu^*_{\Delta,\Delta}\big|N_{\Gamma}\big(\Pi(w),C_w\big)\big|$ vertices of $N_G\big(\Pi(w),C_w\big)$ can cause~\ref{item:claim-coverv-2} to fail for $u$.
Using~\ref{item:claim-coverv-2} and~\ref{item:claim-coverv-3}, and summing over the at most $\Delta$ choices of $u$, we see that at most an $8^\Delta (d')^{-\Delta}\Delta\nu^*_{\Delta,\Delta}$-fraction of the vertices of $N_G\big(\Pi(w),C_w\big)$ cause~\ref{item:claim-coverv-2} to fail. For~\ref{item:claim-coverv-3}, we note that embedding $w$ can only cause this statement to fail if $w$ has at least one neighbour in $H'$ coming after it in $<_{\rho'}$, and in this case by~\eqref{eq:coverv:two} and~\eqref{eq:coverv:one}, we have $\big|N_G\big(\Pi(w),C_w\big)\big|\ge 100C\Delta^2p^{-1}\log n$. Now a vertex $y\in N_G\big(\Pi(w),C_w\big)$ can only cause~\ref{item:claim-coverv-3} to fail if it has the wrong number of neighbours in $N_{\Gamma}\big(\Pi(u),C_u\big)$ for some neighbour $u$ of $w$. Because the good event of Proposition~\ref{prop:chernoff} occurs, this happens for at most $Cp^{-1}\log n$ vertices for each such $u$, and summing over the at most $\Delta$ choices of $u$, we see that at most a $\tfrac{1}{100}$-fraction of the vertices of $N_G\big(\Pi(w),C_w\big)$ cause~\ref{item:claim-coverv-3} to fail. For~\ref{item:claim-coverv-4}, we need to be a bit more careful. To start with, if there are no neighbours of $w$ coming after $w$ in $<_{\rho'}$, then no matter how we embed $w$ we cannot make \ref{item:claim-coverv-4} fail. Suppose first that there are neighbours of $w$ coming after $w$ in $<_{\rho'}$, but that no two such neighbours are adjacent. As above, by~\eqref{eq:coverv:two} and~\eqref{eq:coverv:one}, we have $\big|N_G\big(\Pi(w),C_w\big)\big|\ge 100C\Delta^2p^{-1}\log n$. Since no two neighbours of $w$ coming after it are adjacent, a vertex $y\in N_G\big(\Pi(w),C_w\big)$ can only cause~\ref{item:claim-coverv-4} to fail for a given pair $u,u'$ if exactly one of the two, say $u$, is a neighbour of $w$, and $y$ is one of the at most $Cp^{-1}\log n$ vertices which fail to inherit regularity, as guaranteed by the good event of Lemma~\ref{lem:OSRIL}.
Summing over the at most $\Delta^2$ choices of $u,u'$, we see that in this case at most a $\tfrac{1}{100}$-fraction of the vertices of $N_G\big(\Pi(w),C_w\big)$ cause~\ref{item:claim-coverv-4} to fail. The remaining case is that there are two adjacent neighbours of $w$ coming after $w$ in $<_{\rho'}$. In this case we need the good events of Lemmas~\ref{lem:OSRIL} and~\ref{lem:TSRIL}, and consequently for given $u,u'$ up to $Cp^{-2}\log n$ vertices might fail to inherit regularity. But in this case by~\eqref{eq:coverv:two} we have $\big|N_G\big(\Pi(w),C_w\big)\big|\ge 100C\Delta^2p^{-2}\log n$, and again at most a $\tfrac{1}{100}$-fraction of the vertices of $N_G\big(\Pi(w),C_w\big)$ cause~\ref{item:claim-coverv-4} to fail. The proofs that at most a $\tfrac{1}{100}$-fraction of the vertices of $N_G\big(\Pi(w),C_w\big)$ cause any one of~\ref{item:claim-coverv-1prime}--\ref{item:claim-coverv-4prime} to fail are essentially identical, and we omit the details. Summing up, by choice of $\nu^*_{\Delta,\Delta}$ and since $|V(H')|\le\sum_{i=0}^{s+1}\Delta^i$, we see that at least half of the vertices $y$ of $N_G\big(\Pi(w),C_w\big)$ are such that $\phi\cup\{w\to y\}$ satisfies the conclusions of Claim~\ref{claim:coverv} for $z+1$, completing the induction step and hence the proof of the claim. \end{claimproof} Now we can conclude the proof of Lemma~\ref{lem:coverv}. Given an embedding of $H'-(T\cup\{x\})$ satisfying the conclusions of Claim~\ref{claim:coverv}, we extend it to an embedding $\phi$ of $H'-T$ by setting $\phi(x) = v$. This is a valid embedding since we embedded all neighbours of $x$ into $W$, and we obtain~\ref{item:pel-v}. Property~\ref{item:pel-clique} holds by choice of the $q_1,\dots,q_{k}$. For every vertex $u$ in~$T$ we have $C_u = V_{q_{\rho'(u)},j_{\rho'(u)}}$ and $|C_u| \ge \big|V_{q_{\rho'(u)}}\cap V(G')\big|/(2 t_1)$. So by the choice of~$\zeta$,~\ref{item:pel-minsize} follows from~\ref{item:claim-coverv-2}.
The choice of constants ensures that the remaining statements in the lemma are a direct consequence of~\ref{item:claim-coverv-1prime}--\ref{item:claim-coverv-4prime}. \end{proof} \section{Proof of the main technical result} \label{sec:mainproof} The proof of Theorem~\ref{thm:maink} is broadly similar to the proof of~\cite[Theorem~23]{ABET}. The idea is again to apply the lemmas of Section~\ref{sec:mainlemmas} in order to first find a well-behaved partition of $G$ and a corresponding partition of $H$. We then deal with the few badly-behaved vertices of $G$ by sequentially pre-embedding onto them some vertices of $H$ whose neighbourhoods contain at most $s$ colours. Lemma~\ref{lem:coverv} deals with this pre-embedding, and sets up restriction sets, in the sense of Definition~\ref{def:restrict}, for those vertices which are not pre-embedded but which have pre-embedded neighbours. We then adjust the partition of $H$ to fit this pre-embedding, and balance the partition of $G$ to match. Finally, we see that the conditions of Lemma~\ref{thm:blowup} are met, and that lemma completes the desired embedding of $H$ in $G$. As in~\cite{ABET}, there are two slightly subtle points. The first is that for $\Delta=2$ we can have $Cp^{-2}>pn$, so that we should be worried that we come to some badly-behaved vertex of $G$ onto which we wish to pre-embed and discover that all its neighbours have already been used in pre-embedding. As in~\cite{ABET}, this is easy to handle: at each step we choose the badly-behaved vertex with the most neighbours already covered by the pre-embedding. It is easy to check that this ordering avoids the above problem. The second, more serious, problem is that we need restriction sets fulfilling the conditions of Definition~\ref{def:restrict}. Although Lemma~\ref{lem:coverv} gives us pre-embeddings satisfying these conditions, we might destroy the conditions when we pre-embed later vertices.
The condition we could destroy is simply that we need each restriction set to be reasonably large; the danger is that we pre-embed many vertices to some restriction set. The solution to this is (as in~\cite{ABET}) to select a set $S$, whose size is linear in $n$ but small, using Lemma~\ref{lem:hypgeo} to avoid large intersections with any possible restriction set. When we apply Lemma~\ref{lem:coverv} to cover a badly-behaved vertex $v$, we will pre-embed to $v$ and to some vertices chosen from $S$, and not to any other vertex. The badly-behaved vertices are not (by construction) in any restriction set, while $S$ has small intersection with all restriction sets, so that even removing all of $S$ would not make the restriction sets too small. The only point in the proof where we really need to do more than in~\cite{ABET} (apart from using Lemma~\ref{lem:coverv} to pre-embed) is that we need to ensure the conditions of Lemma~\ref{lem:coverv} are met. When we wish to cover a badly-behaved $v$, its neighbourhood within the set $S$ must contain many copies of $K_s$. Further, some vertices of $S$ will have been used in earlier pre-embeddings, and we need to ensure that these used vertices do not hit too many of the copies of $K_s$. For this, we apply the sparse regularity lemma, Lemma~\ref{lem:SRLb}, to $G\big[N_G(v)\big]$ before choosing $S$. We will see that (since $N_G(v)$ contains many copies of $K_s$) we find a set of $s$ clusters in $N_G(v)$ such that all the pairs are relatively dense and regular. When we use Lemma~\ref{lem:hypgeo} to choose $S$, we also insist that $S$ contains a significant fraction of each of these clusters. The order in which we cover badly-behaved vertices ensures that a (slightly smaller but still) significant fraction of each cluster is not used by the previous pre-embedding; and we find the desired many copies of $K_s$ in $N_G(v)\cap S$ as a result. 
As a final observation, Lemma~\ref{lem:coverv}~\ref{item:pel-minsize} gives us something which looks like an image restriction set suitable for Definition~\ref{def:restrict}---but it is a subset of $S$. A careful reader will see from the constant choices below that it is therefore too small for Lemma~\ref{thm:blowup}. However, the fact that $S$ is selected at random allows us to deduce the existence of a larger image restriction set which is suitable for Lemma~\ref{thm:blowup}. \begin{proof}[Proof of Theorem~\ref{thm:maink}] Given $\gamma>0$, we set $d^+=2^{-s-5}\gamma$ and $\varepsilon^+_{s-2}=16^{-s}(d^+)^{2s}/s$. For each $i=s-3,s-4,\ldots,0$ sequentially, let $0<\varepsilon^+_i\le \varepsilon^+_{i+1}$ be sufficiently small for Lemma~\ref{lem:TSRIL} with input $d^+$ and $\varepsilon^+_{i+1}$. Let $\varepsilon^+\le\varepsilon^+_0$ be small enough for an application of Lemma~\ref{lem:hypgeo} with input $d^+$ and $\varepsilon^+_0$. Let $t_1^+$ be returned by Lemma~\ref{lem:SRLb} for input $\varepsilon^+$ and $\lceil 1/d^+\rceil$, and let $\alpha^+=\tfrac14d^+/t_1^+$. Let $\gamma^+=2^{-4s^2}(d^+)^{2s^2}(t_1^+)^{-s}$. Note that we have $\gamma^+<\gamma$. We now choose $d \le \frac{\gamma^+}{32}$ not larger than the $d$ given by Lemma~\ref{lem:G} for input~$\gamma$,~$k$ and $r_0:=10^5\gamma^{-1}$. We let $\alpha$ be the $\zeta$ returned by Lemma~\ref{lem:coverv} for input $\Delta$, $k$, $s$, $\gamma^+$ and $d$. We set $D=\Delta$ and let $\eps_{\scalebox{\scalefactor}{$\mathrm{BL}$}}$ be returned by Lemma~\ref{thm:blowup} for input $\Delta$, $\Delta_{R'}=3k$, $\Delta_J=\Delta$, $\vartheta=\tfrac{1}{100D}$, $\zeta=\tfrac{1}{4}\alpha$, $d$ and $\kappa=64$. Next, putting $\varepsilon^*:=\tfrac18\eps_{\scalebox{\scalefactor}{$\mathrm{BL}$}}$ into Lemma~\ref{lem:coverv} (with earlier parameters as above) returns $\varepsilon_0>0$.
We set $\varepsilon=\min(\varepsilon_0,d,\varepsilon^*/4\Delta,1/100k)$, and let $\varepsilon^-\le\varepsilon$ be small enough for Lemma~\ref{lem:hypgeo} with input as above and $d,\varepsilon$. Now Lemma~\ref{lem:G}, for input $\varepsilon^-$ and earlier constants as above, returns $r_1$. At last, Lemma~\ref{lem:balancing}, for input $k$, $r_1$, $\Delta$, $\gamma$, $d$ and $8\varepsilon$, returns $\xi>0$. Without loss of generality, we may assume $\xi<1/(10kr_1)$, and set $\beta=10^{-12}\xi^2/(\Delta k^4 r_1^2)$. Let $\mu=\varepsilon^2/(100000kr_1)$. Next, suppose $C^{\ast}$ is large enough for Lemma~\ref{lem:coverv}, and also to play the r\^ole of $C$ in each of the lemmas above, and also for Proposition~\ref{prop:chernoff} with input $\varepsilon$, for Lemma~\ref{lem:TSRIL} with input $d^+$ and each of $\varepsilon^+_i$ for $i=1,\ldots,s-2$, and for Lemma~\ref{lem:hypgeo} with input $\varepsilon\mu^2$, $\varepsilon$, $\min(d,d^+)$ and $\Delta$. We set $C=10^{100}k^2 r_1^2 \varepsilon^{-2}\xi^{-1}\Delta^{1000k^3}\mu^{-\Delta}C^{\ast}$ and $z=10/\xi$. Given $p\ge C\big(\tfrac{\log n}{n}\big)^{1/\Delta}$, a.a.s.\ $\Gamma=G(n,p)$ satisfies the good events of each of the lemmas and propositions listed above with each of the specified inputs. In addition, for each set $W$ of at most $\Delta$ vertices of $G(n,p)$, the size of the common neighbourhood $N_{G(n,p)}(W)$ is distributed as a binomial random variable with mean $p^{|W|}(n-|W|)$. By Theorem~\ref{thm:chernoff}, the probability that this size is $(1\pm\varepsilon)p^{|W|}n$ is at least $1-n^{-(\Delta+1)}$ for sufficiently large $n$. By the union bound, we conclude that a.a.s.\ $G(n,p)$ satisfies \begin{equation}\label{eq:nolargedegs} \text{for each $W\subset V\big(G(n,p)\big)$ with $|W|\le\Delta$ we have }\big|N_{G(n,p)}(W)\big|=(1\pm\varepsilon)p^{|W|}n\,. \end{equation} Suppose that $\Gamma=G(n,p)$ satisfies these good events.
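Spelled out, the union bound giving~\eqref{eq:nolargedegs} is the following routine estimate, recorded here for the reader's convenience:

```latex
% at most (\Delta+1)n^{\Delta} candidate sets W, each failing with
% probability at most n^{-(\Delta+1)}:
\[
  \#\big\{W\subset V\big(G(n,p)\big) : |W|\le\Delta\big\}
  \;\le\; \sum_{w=0}^{\Delta}\binom{n}{w}
  \;\le\; (\Delta+1)\,n^{\Delta}\,,
\]
\[
  \mathbb{P}\big[\text{\eqref{eq:nolargedegs} fails}\big]
  \;\le\; (\Delta+1)\,n^{\Delta}\cdot n^{-(\Delta+1)}
  \;=\; (\Delta+1)\,n^{-1}
  \;\to\; 0\,.
\]
```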
Let $G$ be a spanning subgraph of $\Gamma$ such that $\delta(G)\ge\big(\tfrac{k-1}{k}+\gamma\big)pn$ and such that for each $v\in V(G)$ the neighbourhood $N_G(v)$ contains at least $\gamma p^{\binom{s}{2}}(pn)^s$ copies of $K_s$. Let $H$ be a graph on $n$ vertices with $\Delta(H)\le\Delta$. Let $\sigma$ be a proper colouring of $V(H)$ using colours $\{0,\dots,k\}$, and let $\mathcal L$ be a labelling of $V(H)$ with bandwidth at most $\beta n$ with the following properties. The colouring $\sigma$ is $(z,\beta)$-zero-free with respect to $\mathcal L$, the first $\sqrt{\beta}n$ vertices of $\mathcal L$ do not use the colour zero, and the first $\beta n$ vertices of $\mathcal L$ contain at least $Cp^{-2}$ vertices whose neighbourhood receives at most $s$ colours. We now claim that for each $v\in V(G)$ we can find $s$ large subsets of $N_G(v)$ all pairs of which are dense and regular in $G$. This forms a `robust witness' that each vertex neighbourhood in $G$ contains many copies of $K_{s}$. \begin{claim} For each $v\in V(G)$, there exist sets $Q_{v,1},\dots,Q_{v,s}\subset N_G(v)$ each of size at least $\alpha^+pn$ such that for each $i<j$ the pair $(Q_{v,i},Q_{v,j})$ is $(\varepsilon^+,d^+,p)$-regular in $G$. \end{claim} \begin{claimproof} We apply Lemma~\ref{lem:SRLb} with input $\varepsilon^+$ and $\lceil 1/d^+\rceil$ to $G\big[N_G(v)\big]$, with an arbitrary equipartition into $\lceil 1/d^+\rceil$ sets as an initial partition. Note that the conditions of Lemma~\ref{lem:SRLb} are satisfied because the good event of Proposition~\ref{prop:chernoff} holds. We obtain an $(\varepsilon^+,p)$-regular partition of $N_G(v)$ whose non-exceptional parts are of size between $\alpha^+pn$ and $8\alpha^+pn$, by choice of $\alpha^+$ and since $\big|N_G(v)\big|>\tfrac12pn$. If there exist $s$ parts in this partition all pairs of which form $(\varepsilon^+,d^+,p)$-regular pairs, then these parts form the desired $Q_{v,1}$,\dots,$Q_{v,s}$.
So we may assume for a contradiction that no such $s$ parts exist. It follows that when we delete all edges within parts, all edges meeting the exceptional set, all edges in irregular pairs, and all edges in pairs of density less than $d^+p$, we remove all copies of $K_s$ from $G\big[N_G(v)\big]$. The total number of such edges is, since the good event of Proposition~\ref{prop:chernoff} holds, at most \begin{align*} (d^+)^{-1}\cdot 8p^3n^2(d^+)^{2}+2p(2\varepsilon^+pn)(2pn)+4\varepsilon^+p^3n^2+4d^+p^3n^2&\le (12\varepsilon^++12d^+)p^3n^2\\ &\le 2^{-s}\gamma p^3n^2\,, \end{align*} where the final inequality is by choice of $d^+$ and $\varepsilon^+$. We now give a simple upper bound on the number of copies of $K_{s+1}$ in $\Gamma$ containing a given edge $e$ together with $v$. Since by~\eqref{eq:nolargedegs} any $\ell$-tuple of vertices of $\Gamma$ has at most $2p^\ell n$ common neighbours, the number of copies of $K_4$ containing $e$ and $v$ is at most $2p^3n$, and inductively the number of copies of $K_{s+1}$ containing $e$ and $v$ is at most \[\prod_{\ell=3}^{s}2p^\ell n=2^{s-2}p^{\binom{s+1}{2}-3}n^{s-2}\,.\] Putting these estimates together we see that the total number of copies of $K_s$ in $G\big[N_G(v)\big]$ is at most $\tfrac12\gamma p^{\binom{s+1}{2}}n^{s}$. This contradicts the assumed lower bound on the number of copies of $K_s$ in $N_G(v)$, completing the proof. \end{claimproof} We apply Lemma~\ref{lem:G} to $G$, with input $\gamma$, $k$, $r_0$ and $\varepsilon^-$, to obtain an integer $r$ with $10\gamma^{-1}\le kr\le r_1$, a set $V_0\subset V(G)$ with $|V_0|\le C^\ast p^{-2}$, a $k$-equitable partition $\mathcal V=\big\{V_{i,j}\big\}_{i\in[r],j\in[k]}$ of $V(G)\setminus V_0$, and a graph $R^k_r$ on $[r]\times[k]$ with minimum degree $\delta(R^k_r)\ge\big(\tfrac{k-1}{k}+\tfrac{\gamma}{2}\big)kr$, such that $K^k_r\subset B^k_r\subset R^k_r$ and such that the following hold.
\begin{enumerate}[label=\itmarabp{G}{a}] \item\label{main:Gsize} $\frac{n}{4kr}\leq |V_{i,j}| \leq \frac{4n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item\label{main:Greg} $\mathcal V$ is $(\varepsilon^-,d,p)_G$-regular on $R^k_r$ and $(\varepsilon^-,d,p)_G$-super-regular on $K^k_r$, \item\label{main:Ginh} both $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i',j'})\big)$ are $(\varepsilon^-, d,p)_G$-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item\label{main:Ggam} $|N_{\Gamma}(v,V_{i,j})| = (1 \pm \varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \end{enumerate} Given $i\in[r]$, because $\delta(R^k_r)>(k-1)r$, there exists $v\in V(R^k_r)$ adjacent to each $(i,j)$ with $j\in[k]$. This, together with our assumptions on $H$, allows us to apply Lemma~\ref{lem:H2} to $H$, with input $D$, $k$, $r$, $\tfrac{1}{10}\xi$ and $\beta$, and with $m_{i,j}:=|V_{i,j}|+\tfrac{1}{kr}|V_0|$ for each $i\in[r]$ and $j\in[k]$, choosing the rounding such that the $m_{i,j}$ form a $k$-equitable integer partition of $n$. Since $\Delta(H)\le\Delta$, in particular $H$ is $\Delta$-degenerate. Let $f\colon V(H) \to [r] \times [k]$ be the mapping returned by Lemma~\ref{lem:H2}, let $W_{i,j} := f^{-1}(i,j)$, and let $X \subseteq V(H)$ be the set of special vertices returned by Lemma~\ref{lem:H2}. For every $i\in [r]$ and $j\in [k]$ we have \begin{enumerate}[label=\itmarabp{H}{a}] \item\label{H:size} $m_{i,j} - \tfrac{1}{10}\xi n \leq |W_{i,j}| \leq m_{i,j} + \tfrac{1}{10}\xi n$, \item\label{H:sizeX} $|X| \leq \xi n$, \item\label{H:edge} $\{f(x),f(y)\} \in E(R^k_r)$ for every $\{x,y\} \in E(H)$, \item\label{H:special} $y,z\in \bigcup_{j'\in[k]}f^{-1}(i,j')$ for every $x\in f^{-1}(i,j)\setminus X$ and $xy,yz\in E(H)$, and \item\label{H:v1} $f(x)=\big(1,\sigma(x)\big)$ for every $x$ in the first $\sqrt{\beta}n$ vertices of $\mathcal{L}$.
\end{enumerate} We let $F$ be the first $\beta n$ vertices of $\mathcal{L}$. By definition of $\mathcal{L}$, in $F$ there are at least $C p^{-2}$ vertices whose neighbourhood in $H$ receives at most $s$ colours from $\sigma$. Next, we apply Lemma~\ref{lem:hypgeo}, with input $\varepsilon\mu^2$ and $\Delta$, to choose a set $S\subset V(G)$ of size $\mu n$. We let the $T_i$ of Lemma~\ref{lem:hypgeo} be all sets which are common neighbourhoods in $\Gamma$ of at most $\Delta$ vertices of $\Gamma$, and all sets obtained by intersecting a set of $\mathcal V$ with the common neighbourhood in $G$ of at most $\Delta$ vertices of $\Gamma$, together with the sets $V_{i,j}$ for $i\in[r]$ and $j\in[k]$, and the sets $Q_{v,i}$ for $v\in V(G)$ and $i\in[s]$. We let the regular pairs $(X_i,Y_i)$ of Lemma~\ref{lem:hypgeo} be the pairs $(Q_{v,i},Q_{v,j})$ for $1\le i<j\le s$ and $v\in V(G)$, and all regular pairs $(V_{i,j},V_{i',j'})$ with $\{(i,j),(i',j')\}\in E(R^k_r)$. The result of Lemma~\ref{lem:hypgeo} is that for any $1\le\ell\le\Delta$, any $V\in\mathcal V$, and any vertices $u_1,\dots,u_\ell$ of $V(G)$, we have \begin{equation}\label{eq:intS} \begin{split} \Big|S\cap\bigcap_{1\le i\le\ell}N_\Gamma(u_i)\Big|&=(1\pm\varepsilon\mu)\mu\Big|\bigcap_{1\le i\le\ell}N_\Gamma(u_i)\Big|\pm \varepsilon\mu p^\ell n\,,\\ \Big|S\cap V\cap \bigcap_{1\le i\le\ell}N_G(u_i)\Big|&=(1\pm\varepsilon\mu)\mu\Big|V\cap\bigcap_{1\le i\le\ell}N_G(u_i)\Big|\pm \tfrac{\varepsilon\mu p^\ell n}{4kr}\,,\quad \text{and}\\ \big|S\cap V_{i,j}\big|&=(1\pm\tfrac12\varepsilon)\mu|V_{i,j}|\quad\text{for each $i\in[r]$ and $j\in[k]$,} \end{split} \end{equation} where we use the fact $p\ge C\big(\tfrac{\log n}{n}\big)^{1/\Delta}$ and the choice of $C$ to deduce $C^{\ast}\log n<\tfrac{\varepsilon\mu p^\Delta n}{4kr}$.
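The deduction of $C^{\ast}\log n<\tfrac{\varepsilon\mu p^\Delta n}{4kr}$ at the end is a routine verification; it uses $p^\Delta n\ge C^\Delta\log n$, $C^\Delta\ge C$, $r\le r_1$, and the choice of $C$ (together with $\xi^{-1},\Delta^{1000k^3}\ge1$):

```latex
\[
  \frac{\varepsilon\mu p^{\Delta} n}{4kr}
  \;\ge\; \frac{\varepsilon\mu C^{\Delta}}{4kr}\log n
  \;\ge\; \frac{\varepsilon\mu C}{4kr_1}\log n
  \;\ge\; \tfrac14\,10^{100}\,k r_1\,\varepsilon^{-1}\mu^{1-\Delta}C^{\ast}\log n
  \;>\; C^{\ast}\log n\,.
\]
```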
Furthermore, for each $v\in V(G)$ and $1\le i<j\le s$ the pair $\big(Q_{v,i}\cap S,Q_{v,j}\cap S\big)$ is $(\varepsilon^{+}_0,d^+,p)$-regular in $G$, and for each $\{(i,j),(i',j')\}\in E(R^k_r)$ the pair $\big(V_{i,j}\cap S,V_{i',j'}\cap S\big)$ is $(\varepsilon,d,p)$-regular in $G$. Our next task is to create the pre-embedding that covers the vertices of $V_0$. We use the following algorithm, starting with $\phi_0$ the empty partial embedding. \begin{algorithm} \caption{Pre-embedding} \label{alg:pre} $t:=0$ \; \While{$V_0\setminus\mathrm{Im}(\phi_t)\neq\emptyset$}{ \lnl{line:choosev} Let $v_{t+1}\in V_0\setminus\mathrm{Im}(\phi_t)$ maximise $\big|N_G(v)\cap S\cap\mathrm{Im}(\phi_t)\big|$ over $v\in V_0\setminus\mathrm{Im}(\phi_t)$ \; Choose $x_{t+1}\in F$ such that $\big|\sigma\big(N_H(x_{t+1})\big)\big|\le s$ and $\mathrm{dist}\big(x_{t+1},\mathrm{Dom}(\phi_t)\big)\ge 100k^2$ \; $H_{t+1}:= H \left[\big\{y\in V(H):\mathrm{dist}(x_{t+1},y)\le s+1\big\}\right]$ \; Let $G'_{t+1}$ be the maximum subgraph of $G\big[(S\cup\{v_{t+1}\})\setminus\mathrm{Im}(\phi_t)\big]$ \break\mbox{}\hspace{7cm} with minimum degree $\left(\tfrac{k-1}{k}+\tfrac{\gamma}{4}\right)\mu pn$ \; Let $\phi$ and $q_1,\dots,q_k$ be given by Lemma~\ref{lem:coverv} with input $G'_{t+1}$, $H_{t+1}$ and colouring $\sigma|_{V(H_{t+1})}$ \; $\phi_{t+1}:=\phi_t\cup\phi$ \; \ForEach{$y\in H_{t+1}$ such that $\mathrm{dist}(x_{t+1},y)=s+1$}{ Let $f^{**}(y):=q_{\sigma(y)}$ \; Let $J_y:=\phi\big(\mathrm{Dom}(\phi)\cap N_H(y)\big)$ \; Let $I'_y:=N_{G}(J_y)\cap V_{q_{\sigma(y)}}\cap V(G'_{t+1})$ \; } $t:=t+1$ \; } \end{algorithm} Suppose this algorithm does not fail, terminating with $t=t^*$ and with a final embedding $\phi:=\phi_{t^*}$. Let $H'=H\setminus\mathrm{Dom}(\phi)$. Then $\phi$ is an embedding of $H\big[V(H)\setminus V(H')\big]$ into $V(G)$ whose image covers $V_0$ and is contained in $V_0\cup S$.
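The `maximum subgraph with minimum degree at least a given threshold' step in Algorithm~\ref{alg:pre} is the standard peeling procedure: repeatedly delete any vertex of too small degree; the result does not depend on the deletion order, being the unique maximal such subgraph. A minimal illustrative sketch in toy notation (plain adjacency dictionaries, not the paper's code):

```python
# Illustrative sketch only: peeling to the maximum subgraph of
# minimum degree >= threshold (possibly the empty graph).
def min_degree_core(adj, threshold):
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    queue = [v for v in adj if len(adj[v]) < threshold]
    while queue:
        v = queue.pop()
        if v not in adj:  # already deleted
            continue
        for u in adj.pop(v):  # delete v and update its surviving neighbours
            if u not in adj:
                continue
            adj[u].discard(v)
            if len(adj[u]) < threshold:
                queue.append(u)
    return adj
```

For a triangle with one pendant vertex and threshold $2$, the pendant vertex is deleted and the triangle survives; with threshold $3$ the whole graph is peeled away.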
The algorithm in addition defines a vertex $f^{**}(y)$ of $R^k_r$, and sets $J_y\subset S$ and $I'_y\subset S$, for each $y\in V(H')$ which has $H$-neighbours in $\mathrm{Dom}(\phi)$. The meanings of these are as follows. When we apply the sparse blow-up lemma, we will embed $y$ to the cluster $V_{f^{**}(y)}$. We will need to image restrict $y$ (as in Definition~\ref{def:restrict}), and the image restricting vertices will be $J_y$. The set $I'_y$ will \emph{not} be the image restriction we use, but we will deduce the existence of a suitable image restriction from $I'_y$. Before we explain this, we first claim that the algorithm does not fail, and that the requirements of Lemma~\ref{lem:coverv} are met at each iteration. \begin{claim} Algorithm~\ref{alg:pre} does not fail, and the conditions of Lemma~\ref{lem:coverv} are met at each iteration. \end{claim} \begin{claimproof} Observe that in total we embed at most $\Delta^{s+2}$ vertices in each iteration, and the number of iterations is at most $|V_0|\le C^{\ast} p^{-2}$, so that the total number of vertices we embed is at most $C^{\ast}\Delta^{s+2}p^{-2}$. We begin by discussing the choice of $v_{t+1}$. Suppose that at some time $t$ we pick a vertex $v=v_{t+1}$ such that $\big|N_G(v)\cap S\cap\mathrm{Im}(\phi_t)\big|>\tfrac12\alpha^+\mu pn$. For each $t-\tfrac14\Delta^{-s-2}\mu\alpha^+ pn\le t'<t$, we have $\big|N_G(v)\cap S\cap\mathrm{Im}(\phi_{t'})\big|>\tfrac14\alpha^+\mu pn$, yet at each of these times $v$ is not picked, so that the vertex picked at each time $t'$ has at least $\tfrac14\alpha^+\mu pn$ neighbours in $\mathrm{Im}(\phi_t)\cap S$, and in particular in $\mathrm{Im}(\phi_t)$, a set of size at most $C^{\ast}\Delta^{s+2}p^{-2}$. Let $Z$ be a superset of $\mathrm{Im}(\phi_t)$ of size at least $C^{\ast} p^{-1}\log n$ and at most $2C^{\ast}\Delta^{s+2}p^{-2}$; such a set exists since $|\mathrm{Im}(\phi_t)|\le C^{\ast}\Delta^{s+2}p^{-2}$. Now the good event of Proposition~\ref{prop:chernoff} states that at most $C^{\ast} p^{-1}\log n$ vertices of $\Gamma$ have more than $2p|Z|<\tfrac14\alpha^+\mu pn$ neighbours in $Z$.
Since $\tfrac14\Delta^{-s-2}\mu\alpha^+ pn>C^{\ast} p^{-1}\log n$ by choice of $p$, this is a contradiction. We conclude that at each time $t$, the vertex $v_{t+1}$ picked at time $t$ satisfies $\big|N_G(v)\cap S\cap\mathrm{Im}(\phi_t)\big|\le\tfrac12\alpha^+\mu pn$. From this point on we consider a fixed time $t$, and write $v$ rather than $v_{t+1}$, and $\phi$ for $\phi_t$, and so on. Since we cover at most $C^{\ast}\Delta^{s+2} p^{-2}$ vertices, we have $|S\setminus\mathrm{Im}(\phi)|=(1\pm\tfrac12\varepsilon)\mu n$. Now, to obtain the maximum subgraph of $G\big[(S\cup\{v\})\setminus\mathrm{Im}(\phi)\big]$ with minimum degree $\big(\tfrac{k-1}{k}+\tfrac{\gamma}{4}\big)\mu pn$, we successively remove vertices whose degree is too small until none remain. We claim that fewer than $\tfrac18\mu\alpha^+pn$ vertices are removed, and that $v$ is not one of the vertices removed. To see this, observe that every vertex has at least $\big(\tfrac{k-1}{k}+\tfrac{\gamma}{2}\big)\mu pn$ neighbours in $S$ by~\eqref{eq:intS}. Suppose for a contradiction that there is a set $Z$ of $\tfrac18\mu\alpha^+pn$ vertices which are the first removed from $S$ in this process. Then each vertex of $Z$ has at least $\tfrac14\gamma\mu p n$ neighbours in $Z\cup\mathrm{Im}(\phi)$, which by choice of $\alpha^+$ contradicts the good event of Proposition~\ref{prop:chernoff}. We conclude $\big|(S\cup\{v\})\setminus\mathrm{Im}(\phi)\big|=(1\pm\varepsilon)\mu n$. Since $v$ has at least $\big(\tfrac{k-1}{k}+\tfrac{\gamma}{2}\big)\mu pn$ neighbours in $S$, of which at most $\tfrac12\alpha^+\mu pn$ are in $\mathrm{Im}(\phi)$ and at most $|Z|$ are in $Z$, the vertex $v$ is not removed. Furthermore, for each $i\in[s]$ we have $\big|Q_{v,i}\cap V(G')\big|\ge\tfrac12\big|Q_{v,i}\cap S\big|$. We now use this to count copies of $K_s$ in $N_{G'}(v)$.
We choose for $i=1,\ldots,s$ sequentially vertices in $Q_{v,i}\cap V(G')$, at each step choosing a vertex $w_i$ which is adjacent to the previous vertices, and which is such that $w_1,\ldots,w_i$ have at least $(d^+-\varepsilon^+_{s-2})^ip^i|Q_{v,j}|$ common $G$-neighbours in each $Q_{v,j}$ for $j>i$, and have $(1\pm\varepsilon)^ip^i|Q_{v,j}|$ common $\Gamma$-neighbours in each $Q_{v,j}$ for $j>i$, and the pair \[\Big(\bigcap_{\ell\in[i]} N_{\Gamma}(w_\ell,Q_{v,j}), \bigcap_{\ell\in[i]} N_{\Gamma}(w_\ell,Q_{v,j'})\Big)\] is $(\varepsilon^+_i,d^+,p)$-regular in $G$ for each $i<j<j'\le s$. Note that all these properties hold when $i=0$ vertices have been chosen. Assuming these properties hold when we come to choose $w_i$, there are at least $2^{1-i}(d^+)^{i-1}p^{i-1}|Q_{v,i}|$ vertices of $Q_{v,i}$ which are adjacent to all previously chosen vertices. If $i=s$ then all of these are valid choices. If $i<s$, by Propositions~\ref{prop:neighbourhood} and~\ref{prop:subpairs}, and because the good event of Proposition~\ref{prop:chernoff} holds, at most \begin{equation*} s\cdot 4^i(d^+)^{1-i}\varepsilon^+_{s-2}p^{i-1}|Q_{v,i}|+s\cdot C^{\ast} p^{-1}\log n \end{equation*} vertices of $Q_{v,i}$ cause the number of common $G$- or $\Gamma$-neighbours in some $Q_{v,j}$ with $j>i$ to be wrong. Finally, if $i=s-1$ then there is no choice of $i<j<j'\le s$ and so no failure of regularity can occur, while if $i<s-1$ then by the good event of Lemma~\ref{lem:TSRIL} the number of vertices which cause a failure of regularity is at most $s^2C^{\ast} p^{-2}\log n$. By choice of $\varepsilon^+_{s-2}$ and $p$, in total at least $2^{-i}(d^+)^{i-1}p^{i-1}|Q_{v,i}|$ vertices of $Q_{v,i}$ are thus valid choices for $w_i$. Finally, by choice of $\gamma^+$ the total number of copies of $K_s$ in $N_{G'}(v)$ is at least $2\gamma^+ p^{\binom{s}{2}}\big(p|S|\big)^s \ge \gamma^+ p^{\binom{s+1}{2}}(\mu n)^s$, as desired. The remaining conditions of Lemma~\ref{lem:coverv} are simpler to check.
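The final rewriting in the count just completed uses only $|S|=\mu n$ and an exponent identity:

```latex
\[
  p^{\binom{s}{2}}\big(p|S|\big)^{s}
  \;=\; p^{\binom{s}{2}+s}\,|S|^{s}
  \;=\; p^{\binom{s+1}{2}}(\mu n)^{s}\,,
  \qquad\text{since}\qquad
  \binom{s}{2}+s=\binom{s+1}{2}\,.
\]
```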
By~\eqref{eq:intS} we have $\big|N_{G'}(W)\big| \le \big|N_\Gamma(W)\cap S\big|\le 2\mu n p^{|W|}$ for any $W \subset V(G')$ of size at most $\Delta$. The graph $G$ with the regular partition $(V_{i,j})_{i\in[r],j\in[k]}$, with reduced graph $R^k_r$, has the required minimum degree. By~\eqref{eq:intS} the intersection of the part $V_{i,j}$ with $S$ has size $(1\pm\tfrac12 \varepsilon)\mu |V_{i,j}|$, so that $|V_{i,j}\cap V(G')|=(1\pm\varepsilon)\mu|V_{i,j}|$ as required. Furthermore the regular pairs of $R$ intersected with $S$ are regular, and so by Proposition~\ref{prop:subpairs} the subpairs obtained by intersecting with $V(G')$ (which is, except for $v$, contained in $S$; and $v$ is in $V_0$ hence not in any of these pairs) are also sufficiently regular. Finally, the graph $H_{t+1}$ chosen at each time $t$ satisfies the conditions of Lemma~\ref{lem:coverv} by definition. Note that we can at each step choose $x_{t+1}$ and hence $H_{t+1}$ because there are at least $Cp^{-2}$ vertices of $F$ whose neighbourhood is coloured with at most $s$ colours; even after embedding all of $V_0$, the domain of $\phi$ contains at most $C^{\ast}\Delta^{s+2}p^{-2}$ vertices, and hence at most $C^{\ast}\Delta^{s+100k^2+3}p^{-2}<Cp^{-2}$ vertices of $H$ are too close to $\mathrm{Dom}(\phi)$. \end{claimproof} We next define image restricting vertex sets and create an updated homomorphism $f^*:V(H')\to [r]\times[k]$. The former is easier. Let $X^{**}$ consist of the vertices of $H'$ which have at least one $H$-neighbour in $\mathrm{Dom}(\phi)$. The vertices of $\mathrm{Dom}(\phi)$ are partitioned according to the $x_t$ chosen at each time in Algorithm~\ref{alg:pre}, and because these vertices are chosen far apart in $H$, any vertex $y$ of $X^{**}$ is at distance $s+1$ from some $x_t$. 
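Both ``far apart in $H$'' and ``too close to $\mathrm{Dom}(\phi)$'' refer to graph distance, which is computed by multi-source breadth-first search; a minimal sketch (our own helper):

```python
from collections import deque

def within_distance(adj, sources, d):
    """Return all vertices at graph distance at most d from the vertex
    set `sources`, by multi-source breadth-first search."""
    dist = {v: 0 for v in sources}
    queue = deque(sources)
    while queue:
        v = queue.popleft()
        if dist[v] == d:
            continue  # do not expand past radius d
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return set(dist)
```

Since a ball of radius $d$ in a graph of maximum degree $\Delta\ge 2$ contains at most $\sum_{i=0}^{d}\Delta^i\le 2\Delta^d$ vertices, a source set of size $C^{\ast}\Delta^{s+2}p^{-2}$ reaches only $O\big(\Delta^{s+d+2}p^{-2}\big)$ vertices; this is the counting behind the bound $C^{\ast}\Delta^{s+100k^2+3}p^{-2}$ above.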
The neighbours in $H'$ of $y$ are either also at distance $s+1$ in $H$ from $x_t$ and not adjacent to any vertices of $\mathrm{Dom}(\phi)$ corresponding to other $x_{t'}$, or they are not adjacent to any vertex of $\mathrm{Dom}(\phi)$ at all. It follows that for each $y\in X^{**}$ the quantities $f^{**}(y)$, $J_y$ and $I'_y$ are set exactly once in the running of Algorithm~\ref{alg:pre}. By Lemma~\ref{lem:coverv} and~\eqref{eq:intS}, given $y\in X^{**}$, we have $|I'_y|\ge 2\alpha p^{|J_y|}(1-\varepsilon)\mu|V_{f^{**}(y)}|$. We claim this implies \begin{equation}\label{eq:mainproof:sizeI} \big|N_G(J_y)\cap V_{f^{**}(y)}\big|\ge\alpha p^{|J_y|}\big|V_{f^{**}(y)}\big|\,. \end{equation} Indeed, suppose for a contradiction that~\eqref{eq:mainproof:sizeI} fails. Since $I'_y$ is by construction contained in $S$, we have $|I'_y|\le\big|N_G(J_y)\cap V_{f^{**}(y)}\cap S\big|$. Using~\eqref{eq:intS} to estimate the size of the latter set, we get \[|I'_y|\le (1\pm\varepsilon\mu)\mu\cdot\alpha p^{|J_y|}\big|V_{f^{**}(y)}\big|+\tfrac{\varepsilon\mu p^{|J_y|}n}{4kr}<2\alpha p^{|J_y|}(1-\varepsilon)\mu|V_{f^{**}(y)}|\,,\] where the final inequality is by choice of $\varepsilon$ and since $|V_{f^{**}(y)}|\ge\tfrac{n}{4kr}$ by~\ref{main:Gsize}. This contradicts the lower bound on $|I'_y|$ from Lemma~\ref{lem:coverv} stated above. We construct the updated homomorphism as follows. We will have $f^*(y)=f(y)$ for all vertices which are not within distance $s+\binom{k+1}{2}$ of $\mathrm{Dom}(\phi)$ in $H$. Given a vertex $x$ of $H$ chosen at some time $t$ in Algorithm~\ref{alg:pre}, we set $f^*(y)$ for each $y$ at distance between $s+1$ and $s+\binom{k+1}{2}$ from $x$ in $H$ as follows. We will generate a collection $Z_1,\ldots,Z_{\binom{k+1}{2}}$ of copies of $K_k$ in $R^k_r$, each labelled with the integers $1,\ldots,k$. For each $i=1,\ldots,\binom{k+1}{2}$, if $y$ is at distance $s+i$ from $x$ in $H$, then we set $f^*(y)$ to be the cluster of $Z_i$ labelled $\sigma(y)$. 
The properties of the sequence $Z_1,\ldots,Z_{\binom{k+1}{2}}$ we require are the following. First, $Z_1$ is the clique returned by the application of Lemma~\ref{lem:coverv} at $x$ with the labelling given by that lemma. Second, $Z_{\binom{k+1}{2}}$ is the clique $\big(V_{1,1},\dots,V_{1,k}\big)$, labelled $1,\ldots,k$ in that order. Third, for each $i=2,\ldots,\binom{k+1}{2}$, each cluster of $Z_i$ is adjacent in $R^k_r$ to each differently-labelled cluster of $Z_{i-1}$. Assuming such a sequence of cliques exists, the resulting $f^*$ has the properties that each vertex $y$ of $X^{**}$ is assigned by $f^*$ to $f^{**}(y)$, that each edge of $H'$ is mapped by $f^*$ to an edge of $R^k_r$, and that $f$ and $f^*$ disagree on at most $C^{\ast} p^{-2}\Delta^{s+\binom{k+1}{2}+3}$ vertices of $H'$, all in the first $\sqrt{\beta}n$ vertices of $\mathcal L$. These will be the properties we need of $f^*$. Note that this definition is consistent, in that it does not attempt to set $f^*(y)$ to two different clusters for any $y$, because the vertices chosen at each step of Algorithm~\ref{alg:pre} are at pairwise distance at least $100k^2$. It remains only to show that the desired sequence of cliques always exists. \begin{claim} For any $k$-cliques $Z_1$ and $Z_{\binom{k+1}{2}}$ in $R^k_r$ a sequence $Z_1,\ldots,Z_{\binom{k+1}{2}}$ with the above properties exists. \end{claim} \begin{claimproof} By the minimum degree of $R^k_r$, any $k$-set in $V(R^k_r)$ has at least one common neighbour. We will use this fact at each step in the following algorithm. Set $t=2$. We loop through $j=1,\ldots,k-1$ sequentially. For each value of $j$ we perform the following operation. For each $i=j+1,\ldots,k$ sequentially, choose a cluster $w_{t}$ of $R^k_r$ which is adjacent to all the clusters of $Z_{t-1}$ except possibly that labelled $i$, and which is also adjacent to the cluster of $Z_{\binom{k+1}{2}}$ labelled $j$. 
We let $Z_{t}$ be the clique obtained from $Z_{t-1}$ by replacing the label $i$ cluster with $w_t$, which we label $i$; all other clusters keep their previous label. We increment $t$. After performing the $i=k$ operation, we let $Z_{t}$ be obtained from $Z_{t-1}$ by replacing the label $j$ cluster of $Z_{t-1}$ with the label $j$ cluster of $Z_{\binom{k+1}{2}}$, and increment $t$. We now proceed with the next round of the $j$-loop. Observe that after the completion of each $j$-loop, the clusters of $Z_{t-1}$ labelled $1,\ldots,j$ are the same as those of $Z_{\binom{k+1}{2}}$. In particular the given $Z_{\binom{k+1}{2}}$ has the required adjacencies in $Z_{\binom{k+1}{2}-1}$ (the final clique constructed in the $j=k-1$ loop), while the remaining required adjacencies hold by construction. \end{claimproof} At this point we complete the proof almost exactly as in~\cite{ABET}. What follows is taken from there, with only trivial changes, for completeness' sake. For each $i\in[r]$ and $j\in[k]$, let $W'_{i,j}$ be the set of vertices $w\in V(H')$ with $f^*(w)\in V_{i,j}$, and let $X'$ consist of $X$ together with all vertices of $H'$ at $H$-distance $100k^2$ or less from some $x_t$ with $t\in[t^*]$. The total number of vertices $z\in V(H)$ at distance at most $100k^2$ from some $x_t$ is at most $2\Delta^{200k^2}|V_0|<\tfrac{1}{100}\xi n$. Since $W_{i,j}\triangle W'_{i,j}$ contains only such vertices, we have \begin{enumerate}[label=\itmarabp{H}{b}] \item\label{Hp:sizeWp} $m_{i,j}-\tfrac15\xi n\le |W'_{i,j}|\le m_{i,j}+\tfrac15\xi n$, \item\label{Hp:sizeX} $|X'| \leq 2\xi n$, \item\label{Hp:edge} $\{f^*(x),f^*(y)\} \in E(R^k_r)$ for every $\{x,y\} \in E(H')$, and \item\label{Hp:special} $y,z\in \bigcup_{j'\in[k]}W'_{i,j'}$ for every $x\in W'_{i,j}\setminus X'$ and $xy,yz\in E(H')$. 
\end{enumerate} where~\ref{Hp:sizeX},~\ref{Hp:edge} and~\ref{Hp:special} hold by~\ref{H:sizeX} and definition of $X'$, by definition of $f^*$, and by~\ref{H:special} and choice of $X'$ respectively. Furthermore, we have \begin{enumerate}[label=\itmarabp{G}{a}] \item $\frac{n}{4kr}\leq |V_{i,j}| \leq \frac{4n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item $\mathcal V$ is $(\varepsilon,d,p)_G$-regular on $R^k_r$ and $(\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item both $\big(N_{\Gamma}(v, V_{i,j}),V_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V_{i,j}),N_{\Gamma}(v, V_{i',j'})\big)$ are $(\varepsilon, d,p)_G$-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item $|N_{\Gamma}(v,V_{i,j})| = (1\pm \varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \item\label{main:GpI} $\big|V_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)\big|\ge\alpha p^{|J_x|}|V_{f^*(x)}|$ for each $x\in V(H')$, \item\label{main:GpGI} $\big|V_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u)\big|=(1\pm\varepsilon^*)p^{|J_x|}|V_{f^*(x)}|$ for each $x\in V(H')$, and \item\label{main:GpIreg} $\big(V_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u),V_{f^*(y)}\cap\bigcap_{v\in J_y}N_\Gamma(v)\big)$ is $(\varepsilon^*,d,p)_G$-regular for each $xy\in E(H')$, and \item\label{main:GaI} $\big|\bigcap_{u\in J_x}N_\Gamma(u)\big|\le(1+\eps^{\ast}) p^{|J_x|}n$ for each $x\in V(H')$. \end{enumerate} Properties~\ref{main:Gsize} to~\ref{main:Ggam} are repeated for convenience (replacing $\varepsilon^-$ with the larger $\varepsilon$). Properties~\ref{main:GpI},~\ref{main:GpGI} and~\ref{main:GaI} are trivial when $J_x=\emptyset$. Otherwise,~\ref{main:GpI} is guaranteed by~\eqref{eq:mainproof:sizeI}, and~\ref{main:GpGI} and~\ref{main:GaI} are guaranteed by Lemma~\ref{lem:coverv}. Finally,~\ref{main:GpIreg} follows from~\ref{main:Greg} when $J_x,J_y=\emptyset$, and otherwise is guaranteed by Lemma~\ref{lem:coverv}, as follows. 
If both $J_x$ and $J_y$ are non-empty, then~\ref{item:pel-tsril} states that the desired pair is $(\eps^{\ast},d,p)_G$-regular. If exactly one of $J_x$ and $J_y$ is non-empty, say $J_x$ (the other case is symmetric), then necessarily $|J_x|\le\Delta-1$, and by~\ref{item:pel-osril} the pair $\big(V_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u),V_{f^*(y)}\big)$ is $(\varepsilon^*,d,p)_G$-regular. For each $i\in[r]$ and $j\in[k]$, let $V'_{i,j}=V_{i,j}\setminus\mathrm{Im}(\phi_{t^*})$, and let $\mathcal V'=\{V'_{i,j}\}_{i\in[r],j\in[k]}$. Because $V_{i,j}\setminus V'_{i,j}\subset S$ for each $i\in[r]$ and $j\in[k]$, using~\eqref{eq:intS} and Proposition~\ref{prop:subpairs}, and our choice of $\mu$, we obtain \begin{enumerate}[label=\itmarabp{G}{b}] \item\label{Gp:sizeV} $\frac{n}{6kr}\leq |V'_{i,j}| \leq \frac{6n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item\label{Gp:Greg} $\mathcal V'$ is $(2\varepsilon,d,p)_G$-regular on $R^k_r$ and $(2\varepsilon,d,p)_G$-super-regular on $K^k_r$, \item\label{Gp:Ginh} both $\big(N_{\Gamma}(v, V'_{i,j}),V'_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V'_{i,j}),N_{\Gamma}(v, V'_{i',j'})\big)$ are $(2\varepsilon, d,p)_G$-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item\label{Gp:GsGa} $|N_{\Gamma}(v,V'_{i,j})| = (1 \pm 2\varepsilon)p|V_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \item\label{Gp:sizeI} $\big|V'_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)\big|\ge\tfrac12\alpha p^{|J_x|}|V'_{f^*(x)}|$, \item\label{Gp:sizeGa} $\big|V'_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u)\big|=(1\pm2\varepsilon^*)p^{|J_x|}|V'_{f^*(x)}|$, and \item\label{Gp:Ireg} $\big(V'_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u),V'_{f^*(y)}\cap\bigcap_{v\in J_y}N_\Gamma(v)\big)$ is $(2\varepsilon^*,d,p)_G$-regular, and \item\label{Gp:GaI} $\big|\bigcap_{u\in J_x}N_\Gamma(u)\big|\le(1+2\eps^{\ast}) p^{|J_x|}n$ for each $x\in V(H')$. \end{enumerate} We are now almost finished. 
The only remaining problem is that we do not necessarily have $|W'_{i,j}|=|V'_{i,j}|$ for each $i\in[r]$ and $j\in[k]$. Since $|V'_{i,j}|=|V_{i,j}|\pm 2\Delta^{200k^2}|V_0|=m_{i,j}\pm 3\Delta^{200k^2}|V_0|$, by~\ref{Hp:sizeWp} we have $|V'_{i,j}|=|W'_{i,j}|\pm \xi n$. We can thus apply Lemma~\ref{lem:balancing}, with input $k$, $r_1$, $\Delta$, $\gamma$, $d$, $8\varepsilon$, and $r$. This gives us sets $V''_{i,j}$ with $|V''_{i,j}|=|W'_{i,j}|$ for each $i\in[r]$ and $j\in[k]$ by~\ref{lembalancing:sizesout}. Let $\mathcal V''=\{V''_{i,j}\}_{i\in[r],j\in[k]}$. Lemma~\ref{lem:balancing} guarantees us the following. \begin{enumerate}[label=\itmarabp{G}{c}] \item\label{Gpp:sizeV} $\frac{n}{8kr}\leq |V''_{i,j}| \leq \frac{8n}{kr}$ for every $i\in[r]$ and $j\in[k]$, \item\label{Gpp:Greg} $\mathcal V''$ is $(4\eps^{\ast},d,p)_G$-regular on $R^k_r$ and $(4\eps^{\ast},d,p)_G$-super-regular on $K^k_r$, \item\label{Gpp:Ginh} both $\big(N_{\Gamma}(v, V''_{i,j}),V''_{i',j'}\big)$ and $\big(N_{\Gamma}(v, V''_{i,j}),N_{\Gamma}(v, V''_{i',j'})\big)$ are $(4\eps^{\ast}, d,p)_G$-regular pairs for every $\{(i,j),(i',j')\} \in E(R^k_r)$ and $v\in V\setminus V_0$, and \item\label{Gpp:GsGa} we have $(1-4\varepsilon)p|V''_{i,j}| \leq |N_{\Gamma}(v,V''_{i,j})| \leq (1 + 4\varepsilon)p|V''_{i,j}|$ for every $i \in [r]$, $j\in [k]$ and every $v \in V \setminus V_0$. \item\label{Gpp:sizeI} $\big|V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)\big|\ge\tfrac14\alpha p^{|J_x|}|V''_{f^*(x)}|$, \item\label{Gpp:sizeGa} $\big|V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u)\big|=(1\pm4\varepsilon^*)p^{|J_x|}|V''_{f^*(x)}|$, and \item\label{Gpp:Ireg} $\big(V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_\Gamma(u),V''_{f^*(y)}\cap\bigcap_{v\in J_y}N_\Gamma(v)\big)$ is $(4\varepsilon^*,d,p)_G$-regular. \end{enumerate} Here~\ref{Gpp:sizeV} comes from~\ref{Gp:sizeV} and~\ref{lembalancing:symd}, while~\ref{Gpp:Greg} comes from~\ref{lembalancing:regular} and choice of $\varepsilon$. 
\ref{Gpp:Ginh} is guaranteed by~\ref{lembalancing:inheritance}. Now, each of~\ref{Gpp:GsGa},~\ref{Gpp:sizeI} and~\ref{Gpp:sizeGa} comes from the corresponding~\ref{Gp:GsGa},~\ref{Gp:sizeI} and~\ref{Gp:sizeGa} together with~\ref{lembalancing:gammaout}. Finally,~\ref{Gpp:Ireg} comes from~\ref{Gp:Ireg} and~\ref{Gp:GaI} together with Proposition~\ref{prop:subpairs} and~\ref{lembalancing:gammaout}. For each $x\in V(H')$ with $J_x=\emptyset$, let $I_x=V''_{f^*(x)}$. For each $x\in V(H')$ with $J_x\neq\emptyset$, let $I_x=V''_{f^*(x)}\cap\bigcap_{u\in J_x}N_G(u)$. Now $\mathcal W'$ and $\mathcal V''$ are $\kappa$-balanced by~\ref{Gpp:sizeV}, size-compatible by construction, partitions of $V(H')$ and of $V(G)\setminus\mathrm{Im}(\phi_{t^*})$ respectively, with parts of size at least $n/(\kappa r_1)$ by~\ref{Gpp:sizeV}. Letting $\widetilde{W}_{i,j}:=W'_{i,j}\setminus X'$, by~\ref{Hp:sizeX}, choice of $\xi$, and~\ref{Hp:special}, $\{\widetilde{W}_{i,j}\}_{i\in[r],j\in[k]}$ is a $\big(\vartheta,K^k_r\big)$-buffer for $H'$. Furthermore, since $f^*$ is a graph homomorphism from $H'$ to $R^k_r$, we have~\ref{itm:blowup:H}. By~\ref{Gpp:Greg},~\ref{Gpp:Ginh} and~\ref{Gpp:GsGa} we have~\ref{itm:blowup:G}, with $R=R^k_r$ and $R'=K^k_r$. Finally, the pair $(\mathcal I,\mathcal J)=\big(\{I_x\}_{x\in V(H')},\{J_x\}_{x\in V(H')}\big)$ forms a $\big(\rho,\tfrac14\alpha,\Delta,\Delta\big)$-restriction pair. To see this, observe that the total number of image restricted vertices in $H'$ is at most $\Delta^2|V_0|<\rho|V_{i,j}|$ for any $i\in[r]$ and $j\in[k]$, giving~\ref{itm:restrict:numres}. Since for each $x\in V(H')$ we have $|J_x|+\deg_{H'}(x)=\deg_H(x)\le\Delta$ we have~\ref{itm:restrict:Jx}, while~\ref{itm:restrict:sizeIx} follows from~\ref{Gpp:sizeI}, and~\ref{itm:restrict:sizeGa} follows from~\ref{Gpp:sizeGa}. Finally,~\ref{itm:restrict:Ireg} follows from~\ref{Gpp:Ireg}, and~\ref{itm:restrict:DJ} follows since $\Delta(H)\le\Delta$. Together this gives~\ref{itm:blowup:restrict}. 
Thus, by Lemma~\ref{thm:blowup} there exists an embedding $\phi$ of $H'$ into $G\setminus\mathrm{Im}(\phi_{t^*})$, such that $\phi(x)\in I_x$ for each $x\in V(H')$. Finally, $\phi\cup\phi_{t^*}$ is an embedding of $H$ in $G$, as desired. \end{proof} \section{Concluding remarks} \label{sec:remarks} \subsection{Optimality of Theorem~\ref{thm:main}} In Theorems~\ref{thm:abet} and~\ref{thm:main}, the requirement for $C^\ast p^{-2}$ vertices in $H$ whose neighbourhood contains few colours is optimal up to the value of $C^\ast$. However, the value of $C^\ast$ we obtain derives from (multiple applications of) the sparse regularity lemma and is hence very far from optimal. One can use the methods of this paper to obtain an improved (but still far from sharp) constant, and we expect that these methods can be pushed to determine an optimal $C^\ast$ asymptotically, at least in special cases. The way to obtain this improvement is the following. We work exactly as in the proof of Theorem~\ref{thm:maink}, except that for each $v\in V(G)$ we identify the largest $1\le s\le k-1$ for which there are many copies of $K_s$ in $N_G(v)$, and obtain a robust witness for this property as in that proof. Now when we come to cover the vertices of the set $V_0$ returned by Lemma~\ref{lem:G}, we use vertices from zero-free regions of $\mathcal L$ which are not in the first few vertices of $\mathcal L$ whenever possible: in particular this is always possible when we are to cover a vertex which is in many copies of $K_k$. Our proof, with trivial modification, shows that this pre-embedding method succeeds. The result is that we can reduce $C^{\ast}$ to a quantity on the order of $\Delta^{100k^2}$; this number comes from our requirement to choose vertices in $\mathcal L$ which are widely separated in $H$ for the pre-embedding onto the vertices of $V_0$ which are not in many copies of $K_{k}$. 
When $H$ contains many isolated vertices, this requirement disappears and we can further improve. We believe (but have not attempted to prove) that there is some $C_k$ with the following property. Let $\Gamma$ be a typical instance of $G(n,p)$, where $p\gg n^{-1/k}$. Suppose $G\subset\Gamma$ has minimum degree $\big(\tfrac{k-1}{k}+o(1)\big)pn$. Then any choice of $G$ contains at most $\big(C_k+o(1)\big)p^{-2}$ vertices which are in $o\big( p^{\binom{k}{2}}n^{k-1}\big)$ copies of $K_k$; on the other hand there is a choice of $G$ which has $\big(C_k-o(1)\big)p^{-2}$ vertices not in any copy of $K_k$. Assuming the above statement to be true, it follows that $C_k$ is the asymptotically optimal $C^\ast$ whenever all vertices of $H$ are either isolated or contained in a copy of $K_k$; for example when $H$ consists of a $(k-1)$st power of a cycle together with some isolated vertices. One could generalise further and, for example, try to establish an optimal value of $C^\ast$ in Theorem~\ref{thm:abet}; but the answer would also presumably depend on the graph structure of $H$. If the vertices of $H$ which are not in triangles are far apart in $H$, then the generalisation is easy (and the answer is the same), but if they are not generally far apart it seems likely that one would have to use several such vertices to cover one badly-behaved vertex of $G$, and hence $C^\ast$ would need to be larger than the above $C_k$. \subsection{Local colourings of~$H$ versus global colourings} \label{sec:construct} Recall that Theorem~\ref{thm:abet} requires some vertices in $H$ to have neighbourhoods which contain no edges, and that this is necessary because otherwise we can `locally' avoid $H$-containment simply by picking a vertex of $G(n,p)$ and removing all edges in its neighbourhood to form $G$. 
Theorem~\ref{thm:main} implies that, when $H$ is $3$-colourable, this is really the only obstruction: if we insist that every vertex of $G$ has a reasonable number of edges in its neighbourhood, then $G$ contains all $3$-colourable $H$ with small bandwidth and maximum degree. It is natural to guess that a similar `local' obstruction generalises: perhaps for every $k$, if $H$ is a $k$-colourable graph with small bandwidth and constant maximum degree which has $\Omega(p^{-2})$ vertices whose neighbourhoods are bipartite, then $H$ is guaranteed to be contained in any subgraph $G$ of $G(n,p)$ with sufficiently high minimum degree and in which every vertex neighbourhood has a reasonable number of edges. The purpose of this section is to observe that the above guess is false. Indeed, one cannot merely consider the chromatic number of vertex neighbourhoods, but really has to take into account the number of colours used on vertex neighbourhoods in the whole $k$-colouring of $H$ (as in the statement of Theorem~\ref{thm:main}). 
\begin{figure}[t] \tikzstyle{every node}=[circle, draw, fill=black, inner sep=0pt, minimum width=4pt] \begin{tikzpicture}[thick,scale=1] \node at (-2,0) (4) [label=left:$4$] {} ; \node at (1,-0.75) (1) [label=above left:$1$] {} ; \node at (1,0) (2) [label=above left:$2$] {} ; \node at (1,0.75) (3) [label=below left:$3$] {} ; \node at (3.5,1.25) (a) [label=above:$a$] {} ; \node at (4,0.5) (b) [label=below right:$b$] {} ; \node at (4,-0.5) (c) [label=above right:$c$] {} ; \node at (3.5,-1.25) (d) [label=below:$d$] {} ; \node at (3,-0.5) (e) [label=above right:$e$] {} ; \node at (3,0.5) (f) [label=below right:$f$] {} ; \node at (6,0) (r) [label=right:$r$] {} ; \path[draw] (4) edge [line width=0.5pt,bend right=20] (1.center) ; \path[draw] (4) edge [line width=0.5pt] (2.center) ; \path[draw] (4) edge [line width=0.5pt,bend left=20] (3.center) ; \path[draw] (1) edge [line width=0.5pt] (2.center) ; \path[draw] (2) edge [line width=0.5pt] (3.center) ; \path[draw] (3) edge [line width=0.5pt,bend left] (1.center) ; \path[draw] (a) edge [line width=0.5pt] (b.center) ; \path[draw] (b) edge [line width=0.5pt] (c.center) ; \path[draw] (c) edge [line width=0.5pt] (d.center) ; \path[draw] (d) edge [line width=0.5pt] (e.center) ; \path[draw] (e) edge [line width=0.5pt] (f.center) ; \path[draw] (f) edge [line width=0.5pt] (a.center) ; \path[draw] (r) edge [line width=0.5pt,bend right=20] (a.center) ; \path[draw] (r) edge [line width=0.5pt,bend right=10] (b.center) ; \path[draw] (r) edge [line width=0.5pt,bend left=10] (c.center) ; \path[draw] (r) edge [line width=0.5pt,bend left=20] (d.center) ; \path[draw] (r) edge [line width=0.5pt,bend left] (e.center) ; \path[draw] (r) edge [line width=0.5pt,bend right] (f.center) ; \path[draw] (4) edge [line width=0.5pt,bend left] (a.center) ; \path[draw] (4) edge [line width=0.5pt,bend left=35] (b.center) ; \path[draw] (4) edge [line width=0.5pt,bend right=35] (c.center) ; \path[draw] (4) edge [line width=0.5pt,bend right] (d.center) ; 
\path[draw] (4) edge [line width=0.5pt,bend right] (e.center) ; \path[draw] (4) edge [line width=0.5pt,bend left] (f.center) ; \path[draw] (1) edge [line width=0.8pt,bend right=5] (b.center) ; \path[draw] (1) edge [line width=0.8pt,bend right=10] (c.center) ; \path[draw] (1) edge [line width=0.8pt] (e.center) ; \path[draw] (1) edge [line width=0.8pt] (f.center) ; \path[draw] (2) edge [line width=0.8pt] (a.center) ; \path[draw] (2) edge [line width=0.8pt,bend left=15] (c.center) ; \path[draw] (2) edge [line width=0.8pt] (d.center) ; \path[draw] (2) edge [line width=0.8pt] (f.center) ; \path[draw] (3) edge [line width=0.8pt] (a.center) ; \path[draw] (3) edge [line width=0.8pt,bend left=10] (b.center) ; \path[draw] (3) edge [line width=0.8pt] (d.center) ; \path[draw] (3) edge [line width=0.8pt] (e.center) ; \end{tikzpicture} \caption{The graph $F$}\label{fig:date} \end{figure} Consider the following graph $F$ (see Figure~\ref{fig:date}). We begin with vertices $1,2,3,4$ which form a clique, and vertices $a,b,c,d,e,f$ which form a cycle of length six (in that order). We join $4$ to all of $a,b,c,d,e,f$, we join $1$ to $b,c,e,f$, and $2$ to $a,c,d,f$, and $3$ to $a,b,d,e$. Finally we add a vertex $r$ adjacent to $a,b,c,d,e,f$. This graph has the following properties. It is $4$-colourable and in any $4$-colouring the vertices $a,d$ have the same colour as $1$, the vertices $b,e$ have the same colour as $2$, and $c,f$ have the same colour as $3$. All vertices except $r$ are in a copy of $K_4$. The neighbourhood of $r$ is a cycle of order $6$, which is bipartite. Given $n$ divisible by $11$, we let $H$ consist of $n/11$ disjoint copies of $F$. 
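Since $F$ has only $11$ vertices, its stated properties can be checked mechanically. The following sketch (our own encoding of the edge list above) verifies that every vertex except $r$ lies in a copy of $K_4$, that $r$ does not, and that $N(r)$ induces the $6$-cycle:

```python
from itertools import combinations

# Encode F: a K_4 on {1,2,3,4}, a 6-cycle a-b-c-d-e-f, the joins listed
# in the text, and r joined to the cycle.
edges = set()
def join(u, vs):
    for v in vs:
        edges.add(frozenset((u, v)))
for u, v in combinations([1, 2, 3, 4], 2):
    edges.add(frozenset((u, v)))
cycle = "abcdef"
for i in range(6):
    edges.add(frozenset((cycle[i], cycle[(i + 1) % 6])))
join(4, "abcdef"); join(1, "bcef"); join(2, "acdf"); join(3, "abde")
join("r", "abcdef")

verts = {1, 2, 3, 4, "r"} | set(cycle)
adj = {v: {u for e in edges if v in e for u in e - {v}} for v in verts}

def in_K4(v):
    # v lies in a copy of K_4 if and only if N(v) contains a triangle
    return any(all(frozenset(p) in edges for p in combinations(T, 2))
               for T in combinations(list(adj[v]), 3))
```

One can similarly confirm the forced $4$-colouring by checking, for instance, that $a$ is adjacent to exactly $2,3,4$ among the numbered vertices, so that $a$ must receive the colour of $1$.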
By Theorem~\ref{thm:main}, with $s=3$, if $G$ is a subgraph of a typical random graph $\Gamma=G(n,p)$, where $p\gg\big(\tfrac{\log n}{n}\big)^{1/9}$, such that $\delta(G)\ge\big(\tfrac{3}{4}+\gamma\big)pn$, and in addition the neighbourhood of every vertex of $G$ contains at least $\gamma p^9n^3$ copies of $K_3$, then we have $H\subset G$. Observe that we cannot take $s$ smaller than $3$, since in every $4$-colouring of $F$ every vertex has three different colours in its neighbourhood, including $r$. This is why Theorem~\ref{thm:abet} requires many copies of $K_3$ in every vertex neighbourhood. However, the neighbourhood of $r$ itself is $K_3$-free, and in fact bipartite. We now give a construction that shows that it is not enough for every vertex neighbourhood to contain many edges (or indeed many copies of $C_6$). We begin by selecting (for some small $\varepsilon>0$) a set $X$ of $\varepsilon p^{-1}$ vertices, and then generating $\Gamma=G(n,p)$. With high probability no vertex of $X$ has more than $\log n$ neighbours in $X$, and the neighbourhood $Y:=N_\Gamma(X)$ of $X$, that is, the union of the neighbourhoods of its vertices, has size at most $2\varepsilon n$. We randomly partition $Y=Y_1\cup Y_2$ into two equal parts, and we randomly partition $Z:=V(\Gamma)\setminus(X\cup Y)$ into five equal parts $Z_1,\dots,Z_5$. We let $G$ be the subgraph of $\Gamma$ obtained by taking all edges from $X$ to $Y$, all edges between $Y_1$ and $Y_2$, all edges from $Y_1$ to $Z\setminus Z_1$ and from $Y_2$ to $Z\setminus Z_2$, and all edges within $Z$ which are not contained in any $Z_i$. It is easy to check that with high probability $G$ has minimum degree roughly $\tfrac45pn$, and that neighbourhoods of all vertices contain many edges (and many copies of $C_6$). However, we claim $G$ does not contain $H$. Indeed, consider any $x\in X$. Since $N_G(x)$ is contained in $Y$, the graph $G\big[N_G(x)\big]$ is bipartite, so that any copy of $F$ using $x$ must place $r\in F$ on $x$. 
Furthermore, the vertices $a,b,c,d,e,f$ must be placed alternating in $Y_1$ and $Y_2$. Without loss of generality suppose $a,c,e\in Y_1$ and $b,d,f\in Y_2$. Now each of $1,2,3,4$ has at least one neighbour in $\{a,c,e\}$, and at least one neighbour in $\{b,d,f\}$, so that none of $1,2,3,4$ can be placed in $Y$, or in $Z_1$, or in $Z_2$. It follows that none of $1,2,3,4$ can be placed in $X$ (since all neighbours of vertices in $X$ are in $Y$), and so all of $1,2,3,4$ must be in $Z_3\cup Z_4\cup Z_5$. But $1,2,3,4$ form a copy of $K_4$ in $F$, and $Z_3\cup Z_4\cup Z_5$ induces a tripartite subgraph of $G$, a contradiction. In this example we cannot have $F$-copies at any vertex of $X$, so the best we can do is find $\tfrac{n}{v(F)}-\Omega(p^{-1})$ vertex-disjoint copies of $F$. This may be asymptotically optimal; we have not investigated this problem. We note also that it is straightforward to generalise this construction to higher chromatic numbers $k$: we add to $F$ further numbered vertices $5,\dots,k$, adjacent to all other vertices but $r$; and we partition $Z$ into $k+1$ parts. \bibliographystyle{abbrv}
https://arxiv.org/abs/1604.08574
Axial compression of a thin elastic cylinder: bounds on the minimum energy scaling law
We consider the axial compression of a thin elastic cylinder placed about a hard cylindrical core. Treating the core as an obstacle, we prove upper and lower bounds on the minimum energy of the cylinder that depend on its relative thickness and the magnitude of axial compression. We focus exclusively on the setting where the radius of the core is greater than or equal to the natural radius of the cylinder. We consider two cases: the "large mandrel" case, where the radius of the core exceeds that of the cylinder, and the "neutral mandrel" case, where the radii of the core and cylinder are the same. In the large mandrel case, our upper and lower bounds match in their scaling with respect to thickness, compression, and the magnitude of pre-strain induced by the core. We construct three types of axisymmetric wrinkling patterns whose energy scales as the minimum in different parameter regimes, corresponding to the presence of many wrinkles, few wrinkles, or no wrinkles at all. In the neutral mandrel case, our upper and lower bounds match in a certain regime in which the compression is small as compared to the thickness; in this regime, the minimum energy scales as that of the unbuckled configuration. We achieve these results for both the von Kármán-Donnell model and a geometrically nonlinear model of elasticity.
\section{Introduction} In many controlled experiments involving the axial compression of thin elastic cylinders, one observes complex folding patterns (see, e.g., \cite{donnell1934new,horton1965imperfections,pogorelov1988bendings,seffen2014surface}). It is natural to wonder if such patterns are required to minimize elastic energy, or if they are instead due to loading history. Before we can begin to answer these questions, we need to understand the minimum energy and in particular its dependence on external parameters. This paper offers progress towards this goal. Since the work of Horton and Durham \cite{horton1965imperfections}, it is a common experimental practice to place the elastic cylinder about a hard inner core that stabilizes its deformation during loading. In this paper, we consider the minimum energy of a compressed thin elastic cylinder fit about a hard cylindrical core (which we also refer to as the ``mandrel''). We prove upper and lower bounds on the minimum energy which quantify its dependence on the thickness of the cylinder, $h$, and the amount of axial compression, $\lambda$. Ultimately, our goal is to identify the first term in the asymptotic expansion of the minimum energy about $h,\lambda=0$. A more modest goal, closer to what we achieve, is to prove upper and lower bounds that match in scaling but not necessarily in pre-factor, e.g., \[ Ch^{\alpha}\lambda^{\beta}\leq\min\,E\leq C'h^{\alpha}\lambda^{\beta}. \] When our bounds match, which they do in some cases, we will have identified the minimum energy scaling law along with test functions that achieve this scaling. There is a growing mathematical literature on minimum energy scaling laws for thin elastic sheets. Some recent studies have considered problems in which the direction of wrinkling is known in advance. 
This could be due to the presence of a tensile boundary condition \cite{bella2014wrinkles}, or a tensile body force such as gravity pulling on a heavy curtain \cite{bella2015coarsening}. Such a tensile force acts as a stabilizing mechanism, in that it pulls the wrinkles taut and sets their direction. Then, the question is typically: how should the wavelength of the wrinkles change throughout the sheet, in order to achieve (nearly) minimal energy? Other works concern problems in which the direction, or even the presence, of wrinkling is unknown \emph{a priori}. These include works on blistering patterns \cite{belgacem2000rigorous,jin2001energy}; delamination \cite{bedrossian2015blister}; herringbone patterns \cite{kohn2013analysis}; and crumpling and folding of paper \cite{conti2008confining,venkataramani2004lower}. In these papers, an important point is the construction of energetically favorable crumpling or folding patterns which accommodate biaxial compressive loads. In our view, the cylinder-mandrel problem can belong to either category, depending on whether or not the cylinder fits snugly onto the mandrel. Our analysis addresses the following two cases: the ``large mandrel'' case, in which the natural radius of the cylinder is smaller than that of the core, and the ``neutral mandrel'' case, in which the radii of the cylinder and the core are the same. In the first case, the mandrel pre-strains the cylinder along its hoops and, in the presence of axial compression, this drives the formation of axisymmetric wrinkles. In this setting, we prove upper and lower bounds on the minimum energy that match in their scaling. The neutral mandrel case is different, as there is no pre-strain to set the direction of wrinkling. In this case, our best upper and lower bounds do not match (so that at least one of them is suboptimal). 
Nevertheless, our lower bound is among the few examples thus far of ansatz-free lower bounds in problems involving confinement with the possibility of crumpling. The cylinder-mandrel problem is similar in spirit to that of \cite{kohn2013analysis}: in some sense, the obstacle in our analysis plays the role of their elastic substrate. A key difference, however, is that in this paper the cost of deviating from the mandrel is felt internally by the elastic cylinder, whereas in \cite{kohn2013analysis} the cost of deviating from the substrate is included as a separate bulk effect. In this sense, our discussion is also similar to that in \cite{bedrossian2015blister}, where the delaminated set is unknown. These problems belong to a larger class in which the emergence of ``microstructure'' is modeled using a nonconvex variational problem regularized by higher order terms (see, e.g., \cite{desimone2006recent,kohn1994surface,serfaty2006vortices}). While we would like to understand energy minimizers, and eventually local minimizers, a natural first step is to understand how the value of the minimum energy depends on the problem's external parameters. Proving upper bounds is conceptually straightforward, as it involves evaluating the energy of suitable test functions; proving lower bounds is more difficult, as the argument must be ansatz-free. The presence of the inner obstacle in the cylinder-mandrel setup has a stabilizing effect. This has been exploited in experiments which explore both the incipient buckling load \cite{horton1965imperfections}, as well as buckled states deep into the bifurcation diagram \cite{seffen2014surface}. In practice, there is a gap between the cylinder and the core (we call this the ``small mandrel'' case). In the recent experimental work \cite{seffen2014surface}, the authors explore the effect of this gap size on the resulting buckling patterns. 
The character of the observed patterns depends strongly on the size of the gap between the cylinder and the core: in some cases the resulting structures resemble origami folding patterns (e.g., the Yoshimura pattern), while in other cases they resemble delamination patterns (e.g., the ``telephone-cord'' patterns discussed in \cite{moon2002characterization}). The effect of imposing a cylindrical geometry on confined thin elastic sheets has also been explored in the literature. In the experimental work \cite{roman2012stress}, Roman and Pocheau consider the axial compression of a sheet trapped between two cylindrical obstacles. The authors explore the effect of the size of the gap between the obstacles on the compression-driven deformation of the sheet. When the gap is large, the sheet exhibits crumples and folds; as the gap shrinks, the sheet ``uncrumples'' in a striking fashion. At the smallest reported gap sizes, the sheet appears to be (almost) axially symmetric. This raises the question of whether the deformations from \cite{seffen2014surface} would also become axially symmetric if the size of the gap between the cylinder and mandrel were reduced to zero. In the large mandrel case of the present paper, we prove that axially symmetric wrinkling patterns achieve the minimum energy scaling law. Our upper bounds in the neutral mandrel case also use axisymmetric wrinkling patterns, but we wonder if optimal deformations must be axisymmetric there. In the recent paper \cite{paulsen2016curvature}, Paulsen et al.\ consider the axial compression of a thin elastic sheet bonded to a cylindrical substrate. The substrate acts as a Winkler foundation, and sets the effective shape in the vanishing thickness limit. The effective cylindrical geometry, in turn, gives rise to an additional geometric stiffness which adds to the inherent stiffness of the substrate. 
The authors also consider the effect of applying tension along the wrinkles; the result is a local prediction for the optimal wavelength of wrinkles in the sheet via the ``Far-From-Threshold'' approach \cite{davidovitch2011prototypical}. The cylinder-mandrel problem offers a similar opportunity to discuss the competition between stiffness of geometrical and physical origin. In particular, in the neutral mandrel case, our lower bounds quantify the additional stability afforded by the cylindrical obstacle. While a flat sheet placed along a planar obstacle is immediately unstable to compressive uniaxial loads, the same is not true in the presence of cylindrical obstacles: superimposing wrinkles onto a curved shape costs additional stretching energy. In the large mandrel case, our upper and lower bounds balance the pre-strain induced stiffness against the bending resistance. Since the resulting bounds match up to prefactor, our prediction for the wavelength of wrinkling is optimal in its scaling. The present paper is not a study of the buckling load of a thin elastic cylinder under axial compression, though this is an interesting problem in its own right. This is the subject of the recent papers by Grabovsky and Harutyunyan \cite{grabovsky2015rigorous,grabovsky2015scaling}, which give a rigorous derivation of Koiter's formula for the buckling load from a fully nonlinear model of elasticity. These papers also discuss the sensitivity of buckling to imperfections; in the context of the von K\'arm\'an-Donnell equations, this is discussed in \cite{horak2006cylinder}. (See also \cite{horak2008numerical,hunt2003cylindrical} for related work.) The existence of a large family of buckling modes associated with the incipient buckling load of a thin cylinder is consistent with the development of geometric complexity when buckling first occurs. One might imagine that the complexity seen experimentally reflects the initial and perhaps subsequent bifurcations. 
Nevertheless, it still makes sense to ask whether this complexity is required for, or even consistent with, achievement of minimal energy. We cannot begin to answer this question without first understanding the energy scaling law. In this paper, we prove upper and lower bounds on the minimum energy in the cylinder-mandrel problem. Our upper bounds are ansatz-driven, and we achieve them by constructing competitive test functions. In contrast, our lower bounds are ansatz-free. Given enough compression, low-energy test functions must buckle. Buckling in the presence of the mandrel requires ``outwards'' displacement, and this leads to tensile hoop stresses which cost elastic energy at leading order. Thus, the mandrel drives buckling patterns to refine their length scales to minimize elastic energy; this is compensated for by bending effects, which prefer larger length scales overall. Through the use of various Gagliardo-Nirenberg interpolation inequalities, we deduce lower bounds by balancing these effects. In the large mandrel case, this argument proves the minimum energy scaling law. In the neutral mandrel case, the optimal such argument leads to matching bounds only when the compression is small as compared to the thickness. For a more detailed discussion of these ideas, we refer the reader to \prettyref{sub:DiscussionofProofs}, following the statements of the main results. \subsection{The elastic energies} We now describe the energy functionals that will be discussed in this paper. Each is a model for the elastic energy per thickness of a unit cylinder. Throughout this paper, we let $\theta\in I_{\theta}=[0,2\pi]$ be the reference coordinate along the ``hoops'' of the cylinder and $z\in I_{z}=[-\frac{1}{2},\frac{1}{2}]$ be the reference coordinate along the generators. The reference domain is $\Omega=I_{\theta}\times I_{z}$. 
\subsubsection{The von K\'arm\'an-Donnell model} The first model we consider is a geometrically linear model of elasticity, which we refer to as the von K\'arm\'an-Donnell (vKD) model. Let $\phi:\Omega\to\mathbb{R}^{3}$ be a displacement field, given in cylindrical coordinates by $\phi=(\phi_{\rho},\phi_{\theta},\phi_{z})$. Treating the ``in-cylinder'' displacements, $\phi_{\theta},\phi_{z}$, as ``in-plane'' displacements, the elastic strain tensor is given in the vKD model by \begin{equation} \epsilon=e(\phi_{\theta},\phi_{z})+\frac{1}{2}D\phi_{\rho}\otimes D\phi_{\rho}+\phi_{\rho}e_{\theta}\otimes e_{\theta}.\label{eq:FvKstrain} \end{equation} Assuming a trivial Hooke's law, the elastic energy per thickness is given in this model by \begin{equation} E_{h}^{vKD}(\phi)=\int_{\Omega}\,\abs{\epsilon}^{2}+h^{2}\abs{D^{2}\phi_{\rho}}^{2}\,d\theta dz.\label{eq:EFvK} \end{equation} Here, the symmetric linear strain tensor $e=e\left(\phi_{\theta},\phi_{z}\right)$ is given in $\left(\theta,z\right)$-coordinates by $e_{ij}=\left(\partial_{i}\phi_{j}+\partial_{j}\phi_{i}\right)/2$, $i,j\in\left\{ \theta,z\right\} $, and the vectors $\left\{ e_{\theta},e_{z}\right\} $ are the reference coordinate basis vectors. The first term in \prettyref{eq:EFvK} is known as the ``membrane term'', the second is the ``bending term'', and the parameter $h$ is the (non-dimensionalized) thickness of the sheet. The primary interest in this functional as a model of elasticity is in the ``thin'' regime, $h\ll1$. We note here that, as in \cite{horak2006cylinder,horak2008numerical,hunt2003cylindrical}, we choose to call this the von K\'arm\'an-Donnell model of elasticity. In doing so, we invite comparison with the well-known F\"oppl-von K\'arm\'an model for the elastic energy of a thin plate. 
In the F\"oppl-von K\'arm\'an model, the elastic strain tensor is given by \[ \epsilon=e(u_{x},u_{y})+\frac{1}{2}Dw\otimes Dw, \] where $u=(u_{x},u_{y})$ and $w$ are the ``in-plane'' and ``out-of-plane'' displacements respectively. The elastic energy per thickness is then given by the direct analog of \prettyref{eq:EFvK}. The key difference between this model and the vKD model described above is the presence of the last term in \prettyref{eq:FvKstrain}. This term is of geometrical origin: it arises as $\phi_{\rho}$ describes the radial, or ``out-of-cylinder'', displacement in the present work. To model axial confinement of the elastic cylinder in the presence of the mandrel, we consider the minimization of $E_{h}^{vKD}$ over the admissible set \begin{equation} \begin{split}A_{\lambda,\varrho,m}^{vKD}= & \{\phi:\Omega\to\mathbb{R}^{3}\ :\ \phi_{\rho}\in H_{\text{per}}^{2}(\Omega),\ \phi_{\theta}\in H_{\text{per}}^{1}(\Omega),\ \phi_{z}+\lambda z\in H_{\text{per}}^{1}(\Omega)\}\\ & \quad\cap\{\phi_{\rho}\geq\varrho-1,\ \max_{\substack{i\in\left\{ \theta,z\right\} ,\,j\in\{\rho,\theta,z\}} }\norm{\partial_{i}\phi_{j}}_{L^{\infty}(\Omega)}\leq m\}. \end{split} \label{eq:AFvK} \end{equation} The parameter $\lambda\in(0,1)$ is the relative axial confinement of the cylinder. The parameter $\varrho\in(0,\infty)$ is the radius of the mandrel,\footnote{We warn the reader that while we use the subscript $\rho$ to denote the radial component of a vector in $\mathbb{R}^{3}$, e.g., $x_{\rho}$, we use the symbol $\varrho$ to denote the radius of the mandrel. } which we treat as an obstacle. The parameter $m\in(0,\infty]$ gives an \emph{a priori }bound on the ``slope'' of the displacement, $D\phi$. (As we will show, minimization of $E_{h}^{vKD}$ under axial confinement prefers unbounded slopes as $h\to0$. We introduce the hypothesis $m<\infty$ in order to systematically discuss sequences of test functions which do not feature exploding slopes.) 
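Before turning to the nonlinear model, we pause for a small numerical illustration (our own, not part of the analysis that follows) of how the membrane and bending terms of $E_{h}^{vKD}$ compete under axial confinement. The sketch below evaluates the energy of a one-mode axisymmetric ansatz in the case $\varrho=1$, where the obstacle constraint reads $\phi_{\rho}\geq0$; the amplitude is fixed by the periodicity of the axial displacement, and all function names are ours.

```python
import numpy as np

def trapezoid(f, z):
    """Trapezoid rule on a uniform grid."""
    dz = z[1] - z[0]
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * dz)

def vkd_energy_one_mode(k, lam, h, n=20001):
    """E_h^{vKD} of the axisymmetric ansatz (our illustration)
        phi_rho = w(z) = A*(1 + sin(2*pi*k*z)) >= 0,   phi_theta = 0,
        phi_z   = -lam*z + u(z)  with  u' = lam - (w')**2/2,
    which makes eps_zz = -lam + u' + (w')**2/2 = 0 and eps_{theta z} = 0.
    Periodicity of u forces the mean of (w')**2/2 to equal lam, fixing the
    amplitude: A = 2*sqrt(lam)/(2*pi*k).  Only the hoop strain
    eps_{theta theta} = w and the bending term survive, so
        E = 2*pi * int_{-1/2}^{1/2} ( w**2 + h**2*(w'')**2 ) dz.
    """
    A = 2.0 * np.sqrt(lam) / (2.0 * np.pi * k)
    z = np.linspace(-0.5, 0.5, n)
    w = A * (1.0 + np.sin(2.0 * np.pi * k * z))
    wpp = -A * (2.0 * np.pi * k) ** 2 * np.sin(2.0 * np.pi * k * z)
    return 2.0 * np.pi * trapezoid(w**2 + h**2 * wpp**2, z)

# Finer wrinkles (larger k) reduce the hoop strain w ~ sqrt(lam)/k but pay
# in bending ~ h**2 * lam * k**2; the optimum balances the two at E ~ h*lam.
h, lam = 1e-3, 0.01
energies = {k: vkd_energy_one_mode(k, lam, h) for k in range(1, 40)}
k_best = min(energies, key=energies.get)
```

Sweeping over $k$, the minimum is attained at a moderate number of wrinkles and scales like $h\lambda$, consistent with the neutral-mandrel upper bounds proved later: finer wrinkles pay in bending, coarser ones in hoop strain.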
The assumption of periodicity in the $z$-direction is for simplicity and does not change the essential features of the problem. \subsubsection{A nonlinear model of elasticity} The vKD model described in the previous section fails to be physically valid when the ``slope'' of the displacement, $D\phi$, is too large. In this paper, we also consider the following nonlinear model for the elastic energy per thickness: \begin{equation} E_{h}^{NL}(\Phi)=\int_{\Omega}\,\abs{D\Phi^{T}D\Phi-\text{id}}^{2}+h^{2}\abs{D^{2}\Phi}^{2}\,d\theta dz\label{eq:ENL} \end{equation} where $\Phi:\Omega\to\mathbb{R}^{3}$ is the deformation of the cylinder. This is related to the displacement, $\phi$, through the formulas \[ \Phi_{\rho}=1+\phi_{\rho},\quad\Phi_{\theta}=\theta+\phi_{\theta},\ \text{and}\quad\Phi_{z}=z+\phi_{z}. \] The functional $E_{h}^{NL}$ is a widely-used replacement for the fully nonlinear elastic energy of a thin sheet (see, e.g., \cite{bella2014metric,conti2008confining}). We note two simplifications from a fully nonlinear model: the energy is written as the sum of a membrane term and a bending term; and where the difference between the second fundamental forms of the deformed and undeformed configurations would usually appear, it has been replaced by the full matrix of second partial derivatives of the deformation, $D^{2}\Phi$. In parallel with the vKD model, we consider the minimization of $E_{h}^{NL}$ over the admissible set \begin{equation} \begin{split}A_{\lambda,\varrho,m}^{NL}= & \{\Phi:\Omega\to\mathbb{R}^{3}\ :\ \Phi_{\rho}\in H_{\text{per}}^{2}(\Omega),\ \Phi_{\theta}-\theta\in H_{\text{per}}^{2}(\Omega),\ \Phi_{z}-(1-\lambda)z\in H_{\text{per}}^{2}(\Omega)\}\\ & \quad\cap\{\Phi_{\rho}\geq\varrho,\ \max_{\substack{i\in\left\{ \theta,z\right\} ,\,j\in\{\rho,\theta,z\}} }\norm{\partial_{i}\Phi_{j}}_{L^{\infty}(\Omega)}\leq m,\ \partial_{z}\Phi_{z}\geq0\ \text{Leb-a.e.}\}. 
\end{split} \label{eq:ANL} \end{equation} As above, $\lambda\in(0,1)$ is the relative axial confinement, $\varrho\in(0,\infty)$ is the radius of the mandrel, and $m\in(0,\infty]$ is an $L^{\infty}$\emph{-a priori }bound on $D\Phi$. The final hypothesis, on the sign of $\partial_{z}\Phi_{z}$, has no analog in \prettyref{eq:AFvK}, and deserves some additional discussion. One might imagine that the cylinder should fold over itself to accommodate axial compression. Indeed, if $z\to\Phi_{z}$ need not be invertible, one can construct test functions that have significantly lower energy than given in \prettyref{thm:NLlargemandrelscaling} or \prettyref{thm:NLneutralbounds}. (In the notation of these results, such test functions can be made to have excess energy no larger than $C(\varrho_{0})\max\{[(\varrho^{2}-1)\vee h^{2}]^{1/3}h^{4/3},h^{3/2}\}$ whenever $\varrho\in[1,\varrho_{0}]$ and $h,\lambda\in(0,\frac{1}{2}]$.) In order to avoid this, and to facilitate a direct comparison with the geometrically linear setting, we introduce the hypothesis that $\partial_{z}\Phi_{z}\geq0$ in the definition of \prettyref{eq:ANL}. We remark that such a hypothesis can be relaxed; as discussed in \prettyref{rem:invertibilityhypothesis}, one only needs to prevent $\partial_{z}\Phi_{z}$ from approaching the well at $-1$ in order to obtain our results. \subsection{Statement of results} We prove quantitative bounds on the minimum energy of $E_{h}^{vKD}$ and $E_{h}^{NL}$ in two cases: the ``large mandrel case'', where $\varrho>1$, and the neutral mandrel case, where $\varrho=1$. The small mandrel case, where $\varrho<1$, is close to the poorly understood question of the energy scaling law of a crumpled sheet of paper, which is still a matter of conjecture (despite significant recent progress offered in \cite{conti2008confining}). \subsubsection{The large mandrel case\label{sub:largemandrelresults}} We begin with the case where $\varrho>1$. 
In this setting, our methods prove the minimum energy scaling law. We state the results first for the vKD model. Define \begin{equation} \mathcal{E}_{b}^{vKD}(\varrho)=\left|\Omega\right|\left(\varrho-1\right)^{2}\label{eq:EbFVK} \end{equation} and let $c_{0}(\lambda,h,m)=\min\{\lambda^{1/2}h^{1/4},m^{1/2}h^{1/2}\}$. \begin{thm} \label{thm:FvKlargemandrelscaling} Let $h,\lambda\in(0,\frac{1}{2}]$, $\varrho\in[1,\infty)$, and $m\in[2,\infty)$. Then we have that \[ \min_{A_{\lambda,\varrho,m}^{vKD}}E_{h}^{vKD}-\mathcal{E}_{b}^{vKD}\sim_{m}\min\left\{ \lambda^{2},\max\left\{ (\varrho-1)^{4/7}h^{6/7}\lambda^{5/7}, (\varrho-1)^{2/3}h^{2/3}\lambda\right\} \right\} \] whenever $\varrho-1\geq c_{0}(\lambda,h,m)$. In the case that $m=\infty$, we have that \[ \min_{A_{\lambda,\varrho,\infty}^{vKD}}E_{h}^{vKD}-\mathcal{E}_{b}^{vKD}\sim\min\left\{ \lambda^{2},(\varrho-1)^{4/7}h^{6/7}\lambda^{5/7}\right\} \] whenever $\varrho-1\geq c_{0}(\lambda,h,\infty)$.\end{thm} \begin{rem} \label{rem:FvKslopeexlposion} Note that the scaling law $(\varrho-1)^{2/3}h^{2/3}\lambda $ disappears from the result when one does not assume an \emph{a priori} $L^{\infty}$-bound on $D\phi$. Indeed, this assumption changes the character of minimizing sequences. A consequence of our methods is a quantification of the blow-up rate of $\norm{D\phi}_{L^{\infty}}$ as $h\to0$. For instance, if we fix $\varrho\in(1,\infty)$ and $\lambda\in(0,\frac{1}{2}]$, then the minimizers $\{\phi_{h}\}$ of $E_{h}^{vKD}$ over $A_{\lambda,\varrho,\infty}^{vKD}$ satisfy $\norm{D\phi_{h}}_{L^{\infty}}\gtrsim_{\varrho,\lambda}h^{-2/7}$ as $h\to0$. The interested reader is directed to \prettyref{sub:Blowuprate_largemandrel} for a precise statement of the full result. 
In any case, we are led by this observation to include the parameter $m$ in the definition of the admissible set, $A_{\lambda,\varrho,m}^{vKD}$, in order to prevent the non-physical explosion of slope that is energetically preferred in the large mandrel vKD problem.\end{rem} \begin{proof} \begin{comment} By \prettyref{prop:FvKUB}, we have that \[ \min_{A_{\lambda,\varrho,m}^{FvK}}E_{h}^{FvK}-\mathcal{E}_{b}^{FvK}\lesssim\min\left\{ \lambda^{2},\max\left\{ \lambda h,h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7},m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\right\} \right\} \] whenever $h\in(0,\frac{1}{16}]$, $\lambda\in(0,\frac{1}{2}]$, $\varrho\in[1,\infty)$, and $m\in[2,\infty]$. By \prettyref{prop:FvKlargemandrelLB}, we have that \[ \min\left\{ \max\left\{ m^{-2/3}(\varrho-1)^{2/3}h^{2/3}\lambda,\lambda^{5/7}(\varrho-1)^{4/7}h^{6/7}\right\} ,\lambda^{2}\right\} \lesssim\min_{A_{\lambda,\varrho,m}^{FvK}}E_{h}^{FvK}-\mathcal{E}_{b}^{FvK} \] whenever $h,\lambda\in(0,\infty)$, $\varrho\in[1,\infty)$, and $m\in(0,\infty]$. The result now follows from the observation that \[ \lambda h\leq\max\{h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7},m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\}\iff\min\{\lambda^{1/2}h^{1/4},m^{1/2}h^{1/2}\}\leq\varrho-1. \] \end{comment} {} \prettyref{thm:FvKlargemandrelscaling} follows from \prettyref{prop:FvKUB} and \prettyref{prop:FvKlargemandrelLB}, once we note that \[ \lambda h\leq\max\{h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7},m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\}\iff\min\{\lambda^{1/2}h^{1/4},m^{1/2}h^{1/2}\}\leq\varrho-1. \] \end{proof} This theorem shows that there are three types of patterns (three ``phases'') which achieve the minimum energy scaling law, and that there are two types of patterns if $m=\infty$. As we will see in the proof of the upper bounds, these patterns consist of axisymmetric wrinkles. Roughly speaking, the phases correspond to the absence of wrinkles, the presence of one or a few wrinkles, or the presence of many wrinkles. 
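The phase selection just described can be made concrete with a small illustrative routine (ours; it drops all constants, the $m$-dependence of the constants, and the hypothesis $\varrho-1\geq c_{0}$, and simply evaluates the right-hand side of \prettyref{thm:FvKlargemandrelscaling}):

```python
import numpy as np

def excess_energy_scaling(rho, h, lam, m=np.inf):
    """Right-hand side of the large-mandrel vKD scaling law, up to constants:
        min{ lam**2, max{ (rho-1)**(4/7) * h**(6/7) * lam**(5/7),
                          (rho-1)**(2/3) * h**(2/3) * lam } },
    where the second entry of the max is present only when m < infinity.
    Returns the scaling together with the name of the phase attaining it."""
    d = rho - 1.0
    buckled = {"one wrinkle": d ** (4 / 7) * h ** (6 / 7) * lam ** (5 / 7)}
    if np.isfinite(m):
        buckled["many wrinkles"] = d ** (2 / 3) * h ** (2 / 3) * lam
    phase = max(buckled, key=buckled.get)
    if lam**2 <= buckled[phase]:
        return lam**2, "unbuckled"
    return buckled[phase], phase
```

For instance, at moderate compression and small thickness the many-wrinkle branch is selected when $m<\infty$, while dropping the slope bound ($m=\infty$) switches the selection to the one-wrinkle branch; at very small compression the unbuckled branch $\lambda^{2}$ wins.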
The distinction between ``few'' and ``many'' is made clear in \prettyref{sec:upperbounds} (see \prettyref{lem:FvKUB_onewrinkle} and \prettyref{lem:FvKUB_manywrinkles}). See \prettyref{fig:wrinklingpatterns} for a depiction of these wrinkling patterns. \begin{figure} \subfloat[]{\includegraphics[height=0.18\textheight]{cylinderunwrinkled}}\hspace{1.5em}\subfloat[]{\includegraphics[height=0.18\textheight]{cylinder1wrinkle}}\hspace{1.5em}\subfloat[]{\includegraphics[height=0.18\textheight]{cylinderNwrinkle}} \caption{This figure depicts the three types of axisymmetric wrinkling patterns that achieve the minimum energy scaling laws from \prettyref{thm:FvKlargemandrelscaling}. In each, a thin elastic cylinder of unit radius and thickness $h$ is compressed axially by amount $\lambda$, and lies entirely outside of an inner cylindrical mandrel of radius $\varrho>1$. Pattern A shows the trivial wrinkling pattern, i.e., the unbuckled configuration, which achieves an excess energy scaling as $\lambda^{2}$. Pattern B is made up of one wrinkle, and achieves an excess energy scaling as $(\varrho-1)^{4/7}h^{6/7}\lambda^{5/7}$. Pattern C features many wrinkles, and achieves an excess energy scaling as $(\varrho-1)^{2/3} h^{2/3}\lambda$. In this pattern, the number of wrinkles scales as $(\varrho-1)^{1/3}h^{-2/3}\lambda $. A similar discussion applies for \prettyref{thm:NLlargemandrelscaling}, where $\varrho-1$ is replaced by $(\varrho^{2}-1)\vee h^{2}$. \label{fig:wrinklingpatterns}} \end{figure} A similar result can be proved for the nonlinear energy. Define \begin{equation} \mathcal{E}_{b}^{NL}(\varrho,h)=\left|\Omega\right|\left(\varrho^{2}-1\right)^{2}+\left|\Omega\right|\varrho^{2}h^{2}\label{eq:EbNL} \end{equation} and recall the definition of $c_{0}$ given immediately before the statement of \prettyref{thm:FvKlargemandrelscaling} above. 
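To indicate where the two pieces of $\mathcal{E}_{b}^{NL}$ come from, it may help to evaluate $E_{h}^{NL}$ at the unbuckled deformation $\Phi(\theta,z)=(\varrho,\theta,(1-\lambda)z)$; this is a back-of-the-envelope computation of ours, included only for orientation. In this case
\[
D\Phi^{T}D\Phi=\begin{pmatrix}\varrho^{2} & 0\\
0 & (1-\lambda)^{2}
\end{pmatrix}\quad\text{and}\quad\abs{D^{2}\Phi}^{2}=\varrho^{2},
\]
so that
\[
E_{h}^{NL}(\Phi)=\left|\Omega\right|(\varrho^{2}-1)^{2}+\left|\Omega\right|\lambda^{2}(2-\lambda)^{2}+\left|\Omega\right|\varrho^{2}h^{2}=\mathcal{E}_{b}^{NL}(\varrho,h)+\left|\Omega\right|\lambda^{2}(2-\lambda)^{2}.
\]
The unbuckled configuration therefore carries an excess energy of order $\lambda^{2}$, which is one of the branches appearing in the result below; the wrinkled constructions beat it when $h$ is sufficiently small.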
\begin{thm} \label{thm:NLlargemandrelscaling} Let $\varrho_{0}\in[1,\infty)$, and let $h,\lambda\in(0,\frac{1}{2}]$, $\varrho\in[1,\varrho_{0}]$, and $m\in[1,\infty)$. Then we have that \[ \min_{A_{\lambda,\varrho,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL}\sim_{\varrho_{0},m}\min\left\{ \lambda^{2},\max\left\{ [(\varrho^{2}-1)\vee h^{2}]^{4/7}h^{6/7}\lambda^{5/7},[(\varrho^{2}-1)\vee h^{2}]^{2/3}h^{2/3}\lambda \right\} \right\} \] whenever $(\varrho^{2}-1)\vee h^{2}\geq c_{0}(\lambda,h,1)$.\end{thm} \begin{rem} \label{rem:apriorislopebound} In contrast with \prettyref{thm:FvKlargemandrelscaling}, we do not address the case $m=\infty$ in this result. As the reader will observe, our proof of the lower bound part of \prettyref{thm:NLlargemandrelscaling} rests on the assumption that $m<\infty$. However, in the proof of the upper bound part, the successful test functions belong to $A_{\lambda,\varrho,1}^{NL}$ uniformly in $h$. It does not appear to us that one can improve the scaling of these upper bounds by considering test functions with exploding slopes. This should be contrasted with the blow-up estimates discussed for the vKD model in \prettyref{rem:FvKslopeexlposion}. % \begin{comment} We wonder if of $E_{h}^{NL}$ over $A_{\lambda,\varrho,\infty}^{NL}$: are minimizers Lipschitz uniformly in $h$? Try an argument a la Chipot+Evans? \end{comment} \end{rem} \begin{proof} \begin{comment} By \prettyref{prop:NLUBs}, we have that \[ \min_{A_{\lambda,\varrho,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0}}\min\left\{ \lambda^{2},\max\left\{ \lambda h,h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7},[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\right\} \right\} \] whenever $h\in(0,\frac{1}{4}]$, $\lambda\in(0,\frac{1}{2}]$, $\varrho\in[1,\varrho_{0}]$, and $m\in[1,\infty]$. 
By , we have that \[ \min\left\{ \max\left\{ \left[(\varrho^{2}-1)\vee h^{2}\right]^{2/3}h^{2/3}\lambda,\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}h^{6/7}\right\} ,\lambda^{2}\right\} \lesssim_{m,\varrho_{0}}\min_{A_{\lambda,\varrho,m}^{FvK}}E_{h}^{NL}-\mathcal{E}_{b}^{NL} \] whenever $h,\lambda\in(0,1]$, $\varrho\in[1,\varrho_{0}]$, and $m\in(0,\infty)$. The result now follows from the observation that \[ \lambda h\leq\max\{h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7},[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\}\iff\min\{\lambda^{1/2}h^{1/4},h^{1/2}\}\leq(\varrho^{2}-1)\vee h^{2}. \] \end{comment} {} \prettyref{thm:NLlargemandrelscaling} follows from \prettyref{prop:NLUBs} and \prettyref{prop:NLLBs_largemandr} once we observe that \[ \lambda h\leq\max\{h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7},[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\}\iff\min\{\lambda^{1/2}h^{1/4},h^{1/2}\}\leq(\varrho^{2}-1)\vee h^{2}. \] \end{proof} \subsubsection{The neutral mandrel case\label{sub:neutralmandrelresults}} Next we turn to the borderline case between the large and small mandrel cases, given by $\varrho=1$. In this case, our methods prove upper and lower bounds on the minimum energy which fail to match in general, though they do match in a regime in which the thickness, $h$, is large as compared to the compression, $\lambda$. % \begin{comment} Note that in the neutral mandrel case, the bulk energy vanishes as \$h\textbackslash{}to0\$. \end{comment} We begin with the results for the vKD model. \begin{thm} \label{thm:FvKneutralbounds} Let $h,\lambda\in(0,\frac{1}{2}]$ and $m\in[2,\infty)$. Then we have that \[ \min\left\{ \max\{h\lambda^{3/2},(h\lambda)^{12/11}\},\lambda^{2}\right\} \lesssim_{m}\min_{A_{\lambda,1,m}^{vKD}}E_{h}^{vKD}\lesssim\min\left\{ h\lambda,\lambda^{2}\right\} . 
\] In the case that $m=\infty$, we have that \[ \min\left\{ (h\lambda)^{12/11},\lambda^{2}\right\} \lesssim\min_{A_{\lambda,1,\infty}^{vKD}}E_{h}^{vKD}\lesssim\min\left\{ h\lambda,\lambda^{2}\right\} . \] \end{thm} \begin{rem} Although the lower bound in this result changes when $m=\infty$, in this case it does not imply a blow-up rate for $\norm{D\phi}_{L^{\infty}}$ as $h\to0$. Indeed, as discussed in \prettyref{rem:unifbddslopes_neutralmandrel}, minimizing sequences need not have exploding slopes in the neutral mandrel case. \end{rem} \begin{proof} Taking $\varrho=1$ in \prettyref{prop:FvKUB} proves the upper bound part of \prettyref{thm:FvKneutralbounds}. To prove the lower bound part, we first observe that if we define \begin{equation} FS_{h}(\phi)=\int_{\Omega}\,\abs{\epsilon_{\theta\theta}}^{2}+\abs{\epsilon_{zz}}^{2}+h^{2}\abs{D^{2}\phi_{\rho}}^{2}\,d\theta dz,\label{eq:FS} \end{equation} then \[ E_{h}^{vKD}(\phi)\geq FS_{h}(\phi)\quad\forall\,\phi\in A_{\lambda,\varrho,m}^{vKD}. \] \prettyref{prop:FSscalinglaw} identifies the minimum energy scaling law of $FS_{h}$ over $A_{\lambda,1,m}^{vKD}$, and this proves the result. \end{proof} As the reader will note, the argument in the proof above uses only the $\theta\theta$- and $zz$-components of the membrane term. As far as scaling is concerned, the lower bounds given in \prettyref{thm:FvKneutralbounds} are the optimal bounds that can be proved by such a method. This is discussed in more detail in \prettyref{sub:neutralmandrelLB_FvK}; the essential point is that our lower bounds arise as the minimum energy scaling law of what we call the \emph{free-shear functional}, defined in \prettyref{eq:FS} above. The upper and lower bounds from \prettyref{thm:FvKneutralbounds} match in a certain regime of the form $h\geq\lambda^{\alpha}$. \begin{cor} \label{cor:unbuckled}Let $h,\lambda\in(0,\frac{1}{2}]$ and $m\in[2,\infty)$. 
If $h\geq\lambda^{5/6}$, we have that \[ \min_{A_{\lambda,1,m}^{vKD}}E_{h}^{vKD}\sim_{m}\lambda^{2}. \] The same result holds in the case that $m=\infty$. \end{cor} \begin{comment} This scaling law is achieved by the test function $\phi=(0,0,-\lambda z)$. Is it true that in the region where $\min E_{h}^{FvK}\sim\lambda^{2}$, this test function is the unique minimizer? \end{comment} \begin{rem} We note here a possible connection between our analysis and that of \cite{grabovsky2015rigorous,grabovsky2015scaling}, which derives Koiter's formula for the incipient buckling load of a (perfect) thin cylinder via an analysis of the fully nonlinear model. Although our focus is not on buckling as such, \prettyref{cor:unbuckled} proves that, in the regime $\lambda\leq h^{6/5}$, the minimum energy scales as that of the unbuckled deformation. In comparison, the buckling load of a thin elastic cylinder scales linearly with $h$. If the effect of the neutral mandrel is to upgrade local stability to global stability, then perhaps the upper bound from \prettyref{thm:FvKneutralbounds} is optimal in its scaling.\end{rem} \begin{proof} \prettyref{cor:unbuckled} follows from \prettyref{thm:FvKneutralbounds}, after observing that, since $\lambda\leq1$, \[ h\geq\lambda^{5/6}\iff\max\{h\lambda^{3/2},(h\lambda)^{12/11}\}\geq\lambda^{2}. \] \end{proof} Now we state the corresponding results for the nonlinear energy. \begin{thm} \label{thm:NLneutralbounds} Let $h,\lambda\in(0,\frac{1}{2}]$ and $m\in[1,\infty)$. Then we have that \[ \min\left\{ \max\left\{ h\lambda^{3/2},(h\lambda)^{12/11}\right\} ,\lambda^{2}\right\} \lesssim_{m}\min_{A_{\lambda,1,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL}(1,h)\lesssim\min\left\{ \lambda h,\lambda^{2}\right\} . \] \end{thm} \begin{rem} As discussed in \prettyref{rem:apriorislopebound}, the lower bound in the case that $m=\infty$ is not addressed for the nonlinear model by our methods. 
\end{rem} \begin{proof} Taking $\varrho=\varrho_{0}=1$ in \prettyref{prop:NLUBs} gives the upper bound part, once we observe that \[ \lambda\leq1\implies\lambda h\geq\min\{h^{2}\lambda^{5/7},\lambda^{2}\}. \] The lower bound part follows from \prettyref{prop:NLneutralLBs}.\end{proof} \begin{cor} Let $h,\lambda\in(0,\frac{1}{2}]$ and $m\in[1,\infty)$. If $h\geq\lambda^{5/6}$, then we have that \[ \min_{A_{\lambda,1,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL}(1,h)\sim_{m}\lambda^{2}. \] \end{cor} \begin{proof} Arguing as in the proof of \prettyref{cor:unbuckled}, we see that the result follows from \prettyref{thm:NLneutralbounds}. \end{proof} \subsection{Discussion of the proofs\label{sub:DiscussionofProofs}} We turn now to a discussion of the mathematical ideas behind the proofs of these results. To fix ideas, we focus exclusively in this section on the nonlinear model, given in \prettyref{eq:ENL}. For added clarity, we consider \textbf{only} the case where $h\to0$ while $\lambda\in(0,\frac{1}{2}]$, $\varrho\in[1,\infty)$, and $m\in[1,\infty)$ are held fixed. Under these additional assumptions, \prettyref{thm:NLlargemandrelscaling} and \prettyref{thm:NLneutralbounds} imply the following results: \begin{itemize} \item If $\varrho>1$, there are constants $c,C$ depending only on $\lambda,\varrho,m$ such that \begin{equation} ch^{2/3}\leq\min_{A_{\lambda,\varrho,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL}\leq Ch^{2/3}\quad\text{as}\ h\to0.\label{eq:asymptoticexp} \end{equation} \item If $\varrho=1$, there are constants $c,C$ depending only on $\lambda,m$ such that \begin{equation} ch\leq\min_{A_{\lambda,1,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL}\leq Ch\quad\text{as}\ h\to0.\label{eq:asymptoticexp-1} \end{equation} \end{itemize} \subsubsection{The bulk energy} We see from \prettyref{eq:EbNL} that $\mathcal{E}_{b}^{NL}$ is of the form \[ \mathcal{E}_{b}^{NL}=b_{m}(\varrho)+b_{\kappa}(\varrho)h^{2}. 
\] The first term, $b_{m}$, is the ``bulk membrane energy'' that remains in the limit $h\to0$. The second term, $b_{\kappa}h^{2}$, is the ``bulk bending energy'' and appears in $\mathcal{E}_{b}^{NL}$ due to our choice of bending term. The bulk membrane energy can be found by solving the relaxed problem: \begin{equation} b_{m}=\min_{\Phi\in A_{\lambda,\varrho,m}^{NL}}\int_{\Omega}QW(D\Phi)\,dx.\label{eq:relaxedpblm} \end{equation} Here, $QW$ is the quasiconvexification of $W(F)=\left|F^{T}F-\text{id}\right|^{2}$. It follows from the results of \cite{pipkin1994relaxed} that \[ QW(F)=(\lambda_{1}^{2}-1)_{+}^{2}+(\lambda_{2}^{2}-1)_{+}^{2} \] where $\{\lambda_{i}\}_{i=1,2}$ are the singular values of $F$. Regardless of whether we consider the large, neutral, or small mandrel cases, the deformation \[ \Phi_{\text{eff}}(\theta,z)=(1+(\varrho-1)_{+},\theta,(1-\lambda)z) \] is a minimizer of \prettyref{eq:relaxedpblm}. The effective (first Piola\textendash Kirchhoff) stress field is given by \begin{equation} \sigma_{\text{eff}}=DQW(D\Phi_{\text{eff}})=4\varrho(\varrho^{2}-1)_{+}E_{\theta}\otimes e_{\theta},\label{eq:effectivestress} \end{equation} and the bulk membrane energy satisfies \[ b_{m}=|\Omega|(\varrho^{2}-1)_{+}^2. \] We note here that in the large mandrel case, where $\varrho>1$, both $\sigma_{\text{eff}}$ and $b_{m}$ are non-zero, whereas for the small or neutral mandrels these both vanish. As will become clear, the appearance of different power laws for the scaling of the excess energy in \prettyref{eq:asymptoticexp} and \prettyref{eq:asymptoticexp-1} is due precisely to the vanishing or non-vanishing of $\sigma_{\text{eff}}$. \subsubsection{Upper bounds} To achieve the upper bounds from \prettyref{eq:asymptoticexp} and \prettyref{eq:asymptoticexp-1}, one must construct a good test function and estimate its elastic energy. 
The particular test functions that we use are of the form \begin{equation} \Phi(\theta,z)=(\varrho+w(z),\theta,(1-\lambda)z+u(z)).\label{eq:axisymmpattern} \end{equation} We refer to such constructions as ``axisymmetric wrinkling patterns'' (see \prettyref{fig:wrinklingpatterns}). By construction, the metric tensor $g=D\Phi^{T}D\Phi$ satisfies $g_{\theta z}=0$, and by choosing $u,w$ suitably we can ensure that $g_{zz}=1$ as well, so that the pattern is unstretched in the axial direction. In \prettyref{sec:upperbounds}, we estimate the elastic energy of \prettyref{eq:axisymmpattern}. The result is that the excess energy is bounded above by a multiple of \[ \int_{I_{z}}(\varrho^{2}-1)_{+}|w|+|w|^{2}+h^{2}|w''|^{2}\,dz, \] where $\norm{w'}_{L^{2}}\geq c(\lambda)$. Minimizing over all such $w$ leads to the desired upper bounds. Evidently, both the character of the optimal $w$ and the scaling in $h$ of the resulting upper bound depend crucially on whether $\varrho>1$. \subsubsection{Ansatz-free lower bounds} The proofs of the lower bounds from \prettyref{eq:asymptoticexp} and \prettyref{eq:asymptoticexp-1} require an ansatz-free argument. We start by establishing the following claims: \begin{enumerate} \item With enough axial confinement, low-energy configurations must buckle; \item Buckling in the presence of the mandrel induces excess hoop stress, and costs energy.
\end{enumerate} The first claim is quantified in \prettyref{cor:NLbucklingcontrol}, with the result being that low-energy configurations must satisfy \begin{equation} \norm{D\Phi_{\rho}}_{L^{2}}\geq c(\lambda).\label{eq:LBsketch-inequality1} \end{equation} The second claim is quantified in \prettyref{lem:NLmembrLB}; this result implies in particular that the excess energy is bounded below by a multiple of \begin{equation} (\varrho^{2}-1)_{+}\norm{\Phi_{\rho}-\varrho}_{L^{1}(\Omega)}+\norm{\Phi_{\rho}-\varrho}_{L_{z}^{2}L_{\theta}^{1}}^{2}.\label{eq:LBsketch-inequality2} \end{equation} The anisotropic norm appearing here is characteristic of our neutral mandrel analysis. It arises because we consider the stretching of each $\theta$-hoop individually in this case, a choice that may be sub-optimal in general as it ignores the cost of shear. Finally, we prove in \prettyref{lem:NLbendingcontrol} that, for low-energy configurations, the excess energy is bounded below by a multiple of \begin{equation} h^{2}\norm{D^{2}\Phi_{\rho}}_{L^{2}(\Omega)}^{2}.\label{eq:LBsketch-inequality3} \end{equation} While such a bound comes for free when we consider $E_{h}^{vKD}$, it requires some extra work for $E_{h}^{NL}$, due to the nonlinearities in the bending term. Combining \prettyref{eq:LBsketch-inequality1}, \prettyref{eq:LBsketch-inequality2}, and \prettyref{eq:LBsketch-inequality3} with various Gagliardo\textendash Nirenberg interpolation inequalities (see \prettyref{sec:Appendix}), we conclude the desired lower bounds. \subsubsection{The role of $\sigma_{\text{eff}}$ in lower bounds} As described above, the vanishing of the effective applied stress, $\sigma_{\text{eff}}$, affects both the scaling law of the excess energy and the character of low-energy sequences. We wish now to present a short argument for the first part of \prettyref{eq:LBsketch-inequality2}.
While this argument is not strictly necessary for the proof of the main results, we believe that it helps to clarify the role of $\sigma_{\text{eff}}$ in the lower bounds. It turns out that \[ E_{h}^{NL}(\Phi)-\mathcal{E}_{b}^{NL}\geq\int_{\Omega}W(D\Phi)-b_{m}, \] i.e., the excess energy can be split into its membrane and bending parts (see \prettyref{lem:NLexcesssplits}). Since $QW\leq W$, we have that \[ \int_{\Omega}W(D\Phi)-b_{m}\geq\int_{\Omega}QW(D\Phi)-QW(D\Phi_{\text{eff}}). \] If $\sigma_{\text{eff}}\neq0$, then to first order \begin{equation} QW(D\Phi)-QW(D\Phi_{\text{eff}})=\left\langle \sigma_{\text{eff}},D(\Phi-\Phi_{\text{eff}})\right\rangle +\text{h.o.t.},\label{eq:TaylorQW} \end{equation} and in fact we have that \[ QW(D\Phi)-QW(D\Phi_{\text{eff}})\geq\left\langle \sigma_{\text{eff}},D(\Phi-\Phi_{\text{eff}})\right\rangle \] since $QW$ is convex (this also follows from \cite{pipkin1994relaxed}). Integrating by parts with the formula \prettyref{eq:effectivestress}, and using that $\Phi_{\rho}\geq\varrho$, we conclude that \[ \int_{\Omega}\left\langle \sigma_{\text{eff}},D(\Phi-\Phi_{\text{eff}})\right\rangle =\int_{\Omega}|\sigma_{\text{eff}}||\Phi_{\rho}-\varrho|. \] Hence, \[ E_{h}^{NL}(\Phi)-\mathcal{E}_{b}^{NL}\geq|\sigma_{\text{eff}}|\norm{\Phi_{\rho}-\varrho}_{L^{1}(\Omega)}\quad\forall\,\Phi\in A_{\lambda,\varrho,\infty}^{NL}. \] While this argument succeeds in proving the first part of \prettyref{eq:LBsketch-inequality2}, it fails to prove the second part since, essentially, the expansion \prettyref{eq:TaylorQW} fails to capture the leading order behavior of $QW$ in the neutral mandrel case. Nevertheless, one can prove the full strength of \prettyref{eq:LBsketch-inequality2} assuming only that the cylinder is at least as large as the mandrel, i.e., $\varrho\geq1$. The argument we give in \prettyref{sub:largemandrelLB_nonlinear} establishes both parts at once, using only familiar calculus and Sobolev-type inequalities along with the basic definitions.
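The convexity inequality above can also be tested pointwise on random deformation gradients. In the following sketch (our notation; the values $\varrho=1.3$, $\lambda=0.2$ and the sampling scale are purely illustrative), the gap $QW(F)-QW(D\Phi_{\text{eff}})-\langle\sigma_{\text{eff}},F-D\Phi_{\text{eff}}\rangle$ is never negative beyond roundoff:

```python
import numpy as np

def QW(F):
    # Relaxed density: sum of (s_i^2 - 1)_+^2 over the singular values of F.
    s = np.linalg.svd(F, compute_uv=False)
    return float(np.sum(np.maximum(s**2 - 1.0, 0.0)**2))

rho, lam = 1.3, 0.2
F_eff = np.diag([rho, 1.0 - lam])

# sigma_eff = DQW(D Phi_eff): only the hoop-hoop entry is nonzero.
sigma_eff = np.zeros((2, 2))
sigma_eff[0, 0] = 4.0*rho*(rho**2 - 1.0)

# Convexity of QW gives QW(F) >= QW(F_eff) + <sigma_eff, F - F_eff> for all F.
rng = np.random.default_rng(0)
gaps = []
for _ in range(2000):
    F = F_eff + rng.normal(scale=0.5, size=(2, 2))
    gaps.append(QW(F) - QW(F_eff) - float(np.sum(sigma_eff*(F - F_eff))))
worst_gap = min(gaps)
print(worst_gap)   # nonnegative, up to roundoff
```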
\subsection{Outline} In \prettyref{sec:upperbounds}, we give the proofs of the upper bound parts of \prettyref{thm:FvKlargemandrelscaling}, \prettyref{thm:NLlargemandrelscaling}, \prettyref{thm:FvKneutralbounds}, and \prettyref{thm:NLneutralbounds}. In \prettyref{sec:largemandrelLB} we prove the lower bounds in the large mandrel case, i.e., the lower bound parts of \prettyref{thm:FvKlargemandrelscaling} and \prettyref{thm:NLlargemandrelscaling}. In \prettyref{sec:neutralmandrelLB}, we consider the analysis of lower bounds in the neutral mandrel case. There, we prove the lower bound parts of \prettyref{thm:FvKneutralbounds} and \prettyref{thm:NLneutralbounds}, as well as the energy scaling law for the free-shear functional. We end with a short appendix in \prettyref{sec:Appendix} which contains the various interpolation inequalities that we use. \subsection{Notation\label{sub:notation}} The notation $X\lesssim Y$ means that there exists a positive numerical constant $C$ such that $X\leq CY$, and the notation $X\lesssim_{a}Y$ means that there exists a positive constant $C'$ depending only on $a$ such that $X\leq C'(a)Y$. The notation $X\sim Y$ means that $X\lesssim Y$ and $Y\lesssim X$, and similarly for $X\sim_{a}Y$. When the meaning is clear, we sometimes abbreviate function spaces on $\Omega$ by dropping the dependence on the domain, e.g., $H^{k}=H^{k}(\Omega)$. The space $H_{\text{per}}^{k}=H_{\text{per}}^{k}(\Omega)$ is the space of periodic Sobolev functions on $\Omega$ of order $k$ and integrability $2$. We employ the following notation regarding mixed $L^{p}$-norms: \[ \norm{f}_{L_{x_{1}}^{p_{1}}L_{x_{2}}^{p_{2}}}=\left(\int\left(\int|f(x_{1},x_{2})|^{p_{2}}\,dx_{2}\right)^{\frac{p_{1}}{p_{2}}}\,dx_{1}\right)^{\frac{1}{p_{1}}} \] and \[ \norm{f}_{L_{x_{1}}^{p}}(x_{2})=\left(\int|f(x_{1},x_{2})|^{p}\,dx_{1}\right)^{\frac{1}{p}}. 
\] We refer to the unit basis vectors for the reference $\theta,z$-coordinates on $\Omega$ as $\{e_{i}\}_{i\in\{\theta,z\}}$, and the unit frame of coordinate vectors for the cylindrical $\rho,\theta,z$-coordinates on $\mathbb{R}^{3}$ as $\{E_{i}\}_{i\in\{\rho,\theta,z\}}$. Note that $E_{\rho}=E_{\rho}(x)$ and $E_{\theta}=E_{\theta}(x)$ depend on $x\in\mathbb{R}^{3}$ through its $\theta$-coordinate, $x_{\theta}$; our convention is that $E_{\rho}$ points in the direction of increasing radial coordinate, $\rho$, and $E_{\theta}$ in the direction of increasing azimuthal coordinate, $\theta$, so that in particular $x=x_{\rho}E_{\rho}(x)+x_{z}E_{z}$. We will sometimes perform Lebesgue averages of a function $f:\Omega\to\mathbb{R}$ over the reference $\theta$-coordinate. We denote this by \[ \overline{f}(z)=\frac{1}{\left|I_{\theta}\right|}\int_{I_{\theta}}f(\theta,z)\,d\theta. \] The notation $\left|A\right|$ denotes the Euclidean volume of the (Lebesgue measurable) set $A$. The set $\mathcal{B}(U)$ denotes the set of Lebesgue measurable subsets $A\subset U$. \subsection{Acknowledgements} We would like to thank our advisor R.~V.\ Kohn for his constant support. We would like to thank S.\ Conti for many inspirational discussions during an intermediate phase of this project, and in particular for his insight into the analysis of the free-shear functional. We would like to thank the University of Bonn for its hospitality during our visit in April and May of 2015. This research was conducted while the author was supported by a National Science Foundation Graduate Research Fellowship DGE-0813964, and National Science Foundation grants OISE-0967140 and DMS-1311833. \section{Elastic energy of axisymmetric wrinkling patterns\label{sec:upperbounds}} We begin our analysis of the compressed cylinder by estimating the elastic energy of various axisymmetric wrinkling patterns. This amounts to considering test functions that depend only on the $z$-coordinate. 
The results in this section constitute the upper bound parts of \prettyref{thm:FvKlargemandrelscaling}, \prettyref{thm:NLlargemandrelscaling}, \prettyref{thm:FvKneutralbounds}, and \prettyref{thm:NLneutralbounds}. We consider the vKD model in \prettyref{sub:FvKUB} and the nonlinear model in \prettyref{sub:NLUB}. \subsection{vKD model\label{sub:FvKUB}} Recall the definitions of $E_{h}^{vKD}$, $A_{\lambda,\varrho,m}^{vKD}$, and $\mathcal{E}_{b}^{vKD}$, given in \prettyref{eq:EFvK}, \prettyref{eq:AFvK}, and \prettyref{eq:EbFVK} respectively. In this section, we prove the following upper bound. \begin{prop} \label{prop:FvKUB} We have that \[ \min_{A_{\lambda,\varrho,m}^{vKD}}E_{h}^{vKD}-\mathcal{E}_{b}^{vKD}\lesssim\min\left\{ \lambda^{2},\max\left\{ \lambda h,h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7},m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\right\} \right\} \] whenever $h,\lambda\in(0,\frac{1}{2}]$, $\varrho\in[1,\infty)$, and $m\in[2,\infty]$.\end{prop} \begin{proof} The upper bound of $\lambda^{2}$ is achieved by the unbuckled configuration, $\phi=(\varrho-1,0,-\lambda z)$. To prove the remainder of the upper bound, note first that it suffices to achieve it for $(h,\lambda,\varrho,m)\in(0,h_{0}]\times(0,\frac{1}{2}]\times[1,\infty)\times[2,\infty]$ for some $h_{0}\in(0,\frac{1}{2}]$. We apply \prettyref{lem:FvKUB_manywrinkles}, \prettyref{lem:FvKUB_onewrinkle}, and \prettyref{lem:FvKUB_uniforminmandrel} to deduce the required upper bound in the stated parameter range with $h_{0}=\frac{1}{2^{4}}$. \end{proof} In the remainder of this section, we will \textbf{assume} that \[ h\in(0,\frac{1}{2^{4}}],\ \lambda\in(0,\frac{1}{2}],\ \varrho\in[1,\infty),\ \text{and}\ m\in[2,\infty] \] unless otherwise explicitly stated. We begin by defining a two-scale axisymmetric wrinkling pattern. We will refer to the parameters $n\in\mathbb{N}$ and $\delta\in(0,1]$, which are the number of wrinkles and their relative extent. 
We refer the reader to \prettyref{fig:radialheightfield} for a schematic of this construction. Fix $f\in C^{\infty}(\mathbb{R})$ such that \begin{itemize} \item $f$ is non-negative and one-periodic \item $\text{supp}\,f\cap[-\frac{1}{2},\frac{1}{2}]\subset(-\frac{1}{2},\frac{1}{2})$ \item $\norm{f'}_{L^{\infty}}\leq2$ \item $\norm{f'}_{L^{2}(B_{1/2})}^{2}=1$, \end{itemize} and define $f_{\delta,n}\in C^{\infty}(\mathbb{R})$ by \[ f_{\delta,n}(t)=\frac{\sqrt{\delta}}{n}f(\frac{n}{\delta}\{t\})\ind{\{t\}\in B_{\delta/2}}. \] Define $w_{\delta,n,\lambda},u_{\delta,n,\lambda}:\Omega\to\mathbb{R}$ by \[ w_{\delta,n,\lambda}(\theta,z)=\sqrt{2\lambda}f_{\delta,n}(z)\quad\text{and}\quad u_{\delta,n,\lambda}(\theta,z)=\int_{-\frac{1}{2}\leq z'\leq z}\lambda-\frac{1}{2}(\partial_{z}w_{\delta,n,\lambda}(\theta,z'))^{2}\,dz'. \] Finally, define $\phi_{\delta,n,\lambda,\varrho}:\Omega\to\mathbb{R}^{3}$ by \[ \phi_{\delta,n,\lambda,\varrho}=(w_{\delta,n,\lambda}+\varrho-1,0,-\lambda z+u_{\delta,n,\lambda}), \] in cylindrical coordinates. \begin{figure} \includegraphics[height=0.18\textheight]{cylinderNwrinkle_radialdisp} \caption{This schematic depicts the axisymmetric wrinkle construction used in the proof of the upper bounds. The pattern features $n$ wrinkles in the $e_{z}$-direction with volume fraction $\delta$. The optimal choice of $\delta,n$ depends on the axial compression, $\lambda$, the thickness, $h$, the mandrel's radius, $\varrho$, and the \emph{a priori} $L^{\infty}$ slope bound, $m$. \label{fig:radialheightfield}} \end{figure} Now, we estimate the elastic energy of this construction in the vKD model. Define \[ m_{1}(\lambda,\delta)=2\max\left\{ \sqrt{\frac{2\lambda}{\delta}},\frac{2\lambda}{\delta}\right\} . \] \begin{lem} \label{lem:FvKadmissability}We have that $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m_{1}}^{vKD}$. 
Furthermore, \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\max\left\{ (\varrho-1)\frac{\lambda^{1/2}\delta^{3/2}}{n},\frac{\lambda\delta^{2}}{n^{2}},h^{2}\frac{\lambda n^{2}}{\delta^{2}}\right\} . \] \end{lem} \begin{proof} Abbreviate $\phi_{\delta,n,\lambda,\varrho}$ by $\phi$, $w_{\delta,n,\lambda}$ by $w$, and $u_{\delta,n,\lambda}$ by $u$. We claim that $\phi_{\rho}\in H_{\text{per}}^{2}$, $\phi_{\theta}\in H_{\text{per}}^{1}$, and $\phi_{z}+\lambda z\in H_{\text{per}}^{1}$. To see this, observe that \[ \int_{I_{z}}\frac{1}{2}|\partial_{z}w_{\delta,n,\lambda}|^{2}dz=\lambda\int_{B_{\delta/2}}|f'_{\delta,n}|^{2}dt=\lambda\int_{B_{1/2}}|f'|^{2}dt=\lambda \] for all $\theta\in I_{\theta}$, so that $u\in H_{\text{per}}^{1}$. That $w\in H_{\text{per}}^{2}$ follows from its definition. Observe also that $\phi_{\rho}\geq\varrho-1$, since $w\geq0$. Now we check the slope bounds. By construction, we have that \[ \epsilon_{zz}=\partial_{z}\phi_{z}+\frac{1}{2}(\partial_{z}\phi_{\rho})^{2}=0 \] and that \[ \partial_{z}\phi_{\rho}=\partial_{z}w=\sqrt{2\lambda}f'_{\delta,n}. \] Hence, \[ \norm{\partial_{z}\phi_{\rho}}_{L^{\infty}}\le\sqrt{2\lambda}\norm{f_{\delta,n}'}_{L^{\infty}}\leq2\sqrt{\frac{2\lambda}{\delta}} \] and \[ \norm{\partial_{z}\phi_{z}}_{L^{\infty}}\leq\lambda\norm{f'_{\delta,n}}_{L^{\infty}}^{2}\leq\frac{4\lambda}{\delta}. \] It follows that \[ \max_{\substack{i\in\left\{ \theta,z\right\} ,\,j\in\{\rho,\theta,z\}} }\norm{\partial_{i}\phi_{j}}_{L^{\infty}}\leq m_{1}(\lambda,\delta), \] and therefore that $\phi\in A_{\lambda,\varrho,m_{1}}^{vKD}$. Now we bound the elastic energy of this construction. 
Since $\epsilon_{zz}=\epsilon_{\theta z}=0$ and $w$ depends only on $z$, we see that \[ E_{h}^{vKD}(\phi)=\int_{\Omega}\,\abs{w+\varrho-1}^{2}+h^{2}\abs{\partial_{z}^{2}w}^{2}\,d\theta dz \] and hence that \[ E_{h}^{vKD}(\phi)-\mathcal{E}_{b}^{vKD}\lesssim\max\left\{ (\varrho-1)_{+}\norm{w}_{L^{1}(\Omega)},\norm{w}_{L^{2}(\Omega)}^{2},h^{2}\norm{\partial_{z}^{2}w}_{L^{2}(\Omega)}^{2}\right\} . \] Now we conclude the desired result from the elementary bounds \[ \norm{w}_{L^{1}(\Omega)}\lesssim\frac{\lambda^{1/2}\delta^{3/2}}{n},\quad\norm{w}_{L^{2}(\Omega)}^{2}\lesssim\frac{\lambda\delta^{2}}{n^{2}},\ \text{and}\quad\norm{\partial_{z}^{2}w}_{L^{2}(\Omega)}^{2}\lesssim\frac{\lambda n^{2}}{\delta^{2}}. \] \end{proof} We make three choices of the parameters $n,\delta$ in what follows. First, we consider a construction which features many wrinkles as $h\to0$. \begin{lem} \label{lem:FvKUB_manywrinkles} Assume that $m<\infty$ and that \[ m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\geq\max\{\lambda h,h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}\}. \] Let $n\in\mathbb{N}$ and $\delta\in(0,1]$ satisfy \[ n\in\left[(\varrho-1)^{1/3}\lambda h^{-2/3}m^{-7/6},2(\varrho-1)^{1/3}\lambda h^{-2/3}m^{-7/6}\right]\quad\text{and}\quad\delta=4\lambda m^{-1}. \] Then, $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$ and \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\frac{(\varrho-1)^{2/3}h^{2/3}\lambda}{m^{1/3}}. \] \end{lem} \begin{proof} Rearranging the inequality $m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\geq h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}$, we find that $(\varrho-1)^{1/3}\lambda h^{-2/3}m^{-7/6}\geq1$ so that there exists such an $n\in\mathbb{N}$. Also, with our choice of $\delta$ we have that $m_{1}(\lambda,\delta)=m$. We note that indeed $\delta\leq1$ since $\lambda\leq\frac{1}{2}$ and $m\geq2$.
It follows from \prettyref{lem:FvKadmissability} that $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$, and that \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\max\left\{ (\varrho-1)^{2/3}h^{2/3}m^{7/6}\delta^{3/2}\frac{1}{\lambda^{1/2}},\frac{\delta^{2}h^{4/3}m^{7/3}}{(\varrho-1)^{2/3}\lambda},h^{2/3}\frac{\lambda^{3}(\varrho-1)^{2/3}}{\delta^{2}m^{7/3}}\right\} . \] Using that $\delta\sim\frac{\lambda}{m}$, we have that \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\max\left\{ \frac{(\varrho-1)^{2/3}h^{2/3}\lambda}{m^{1/3}},\lambda m^{1/3}\frac{h^{4/3}}{(\varrho-1)^{2/3}}\right\} . \] Since \[ \frac{(\varrho-1)^{2/3}h^{2/3}\lambda}{m^{1/3}}\geq\lambda m^{1/3}\frac{h^{4/3}}{(\varrho-1)^{2/3}}\iff(\varrho-1)^{2/3}\geq m^{1/3}h^{1/3}, \] the result follows. \end{proof} Next, we consider a construction consisting of one wrinkle. \begin{lem} \label{lem:FvKUB_onewrinkle} Assume that \[ h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}\geq\max\{\lambda h,m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\}. \] Let $n=1$ and let $\delta\in(0,1]$ be given by \[ \delta=4\lambda^{1/7}(\varrho-1)^{-2/7}h^{4/7}. \] Then, $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$ and \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}. \] \end{lem} \begin{proof} First, we check that $\delta\leq1$. Note that $4\lambda^{1/7}h^{4/7}(\varrho-1)^{-2/7}\leq1$ if and only if $\lambda h^{4}\leq(\varrho-1)^{2}\frac{1}{2^{14}}$. By assumption, we have that $\lambda h\leq h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}$ so that $\lambda h^{1/2}\leq(\varrho-1)^{2}$. Since $h\leq\frac{1}{2^{4}}$, it follows that $h^{4}\leq\frac{1}{2^{14}}h^{1/2}$ and hence that $\lambda h^{4}\leq\frac{1}{2^{14}}(\varrho-1)^{2}$ as required. Now we check the slope bounds. 
We have that \[ m_{1}(\lambda,\delta)=\max\left\{ \sqrt{2}\lambda^{3/7}(\varrho-1)^{1/7}h^{-2/7},\lambda^{6/7}(\varrho-1)^{2/7}h^{-4/7}\right\} . \] By assumption, $m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}\leq h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}$ so that $(\varrho-1)^{2/7}\lambda^{6/7}h^{-4/7}\leq m$. Since $m\geq2$, we have that $m^{2}\geq2m$ so that $2(\varrho-1)^{2/7}\lambda^{6/7}h^{-4/7}\leq2m\leq m^{2}$ and hence $\sqrt{2}(\varrho-1)^{1/7}\lambda^{3/7}h^{-2/7}\leq m$. It follows that $m_{1}(\lambda,\delta)\leq m$. Using \prettyref{lem:FvKadmissability}, we conclude that $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$ and that \[ E_{h}^{vKD}(\phi)-\mathcal{E}_{b}^{vKD}\lesssim\max\left\{ (\varrho-1)^{4/7}\lambda^{5/7}h^{6/7},\lambda^{9/7}(\varrho-1)^{-4/7}h^{8/7}\right\} . \] Since \[ (\varrho-1)^{4/7}\lambda^{5/7}h^{6/7}\geq\lambda^{9/7}(\varrho-1)^{-4/7}h^{8/7}\iff(\varrho-1)^{2}\geq\lambda h^{1/2} \] we conclude the desired result. \end{proof} The previous two results fail to cover the neutral mandrel case, where $\varrho=1$. Our next result includes this case. \begin{lem} \label{lem:FvKUB_uniforminmandrel} Assume that \[ \lambda h\geq\max\{m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3},h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}\}. \] If $\lambda\leq mh^{1/2}$, then upon taking $n=1$ and $\delta=4h^{1/2}\in(0,1]$ we find that $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$ and that \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\lambda h. \] If $\lambda>mh^{1/2}$, then upon taking $n\in\mathbb{N}$ and $\delta\in(0,1]$ which satisfy \[ n\in[\lambda h^{-1/2}m^{-1},2\lambda h^{-1/2}m^{-1}]\quad\text{and}\quad\delta=4\lambda m^{-1}, \] we find that $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$ and that \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\lambda h. 
\] \end{lem} \begin{rem} \label{rem:unifbddslopes_neutralmandrel} We note here that if $\varrho-1$ is small enough, then the scaling law of $\lambda h$ can be achieved by a construction with uniformly bounded slopes. Indeed, if one takes $n\sim h^{-1/2}$ and $\delta=1$, then the resulting $\phi_{\delta,n,\lambda,\varrho}$ belongs to $A_{\lambda,\varrho,m}^{vKD}$ for all $\lambda\in[0,\frac{1}{2}]$ and $m\in[2,\infty]$, and the excess energy is bounded by a multiple of $\lambda h$ whenever $\varrho-1\leq\lambda^{1/2}h^{1/2}$. \end{rem} \begin{proof} We prove this in two parts. Assume first that $\lambda\leq mh^{1/2}$. Then let $n=1$ and $\delta=4h^{1/2}$. Note that $\delta\in(0,1]$ if and only if $h\leq\frac{1}{2^{4}}$. Also, \[ m_{1}(\lambda,\delta)=\max\left\{ 2\sqrt{\frac{2\lambda}{4h^{1/2}}},\frac{4\lambda}{4h^{1/2}}\right\} =\max\left\{ \sqrt{\frac{2\lambda}{h^{1/2}}},\frac{\lambda}{h^{1/2}}\right\} . \] Since $m\geq2$, $2m\leq m^{2}$. Thus, $\lambda\leq mh^{1/2}\implies2\lambda\leq2mh^{1/2}\leq m^{2}h^{1/2}$ so that $(2\lambda h^{-1/2})^{1/2}\leq m$. Thus, $m_{1}(\lambda,\delta)\leq m$. By \prettyref{lem:FvKadmissability}, we have that $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$ and that \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\max\left\{ (\varrho-1)\lambda^{1/2}h^{3/4},\lambda h\right\} . \] Note that $(\varrho-1)\lambda^{1/2}h^{3/4}\leq\lambda h$ is a rearrangement of $\lambda h\geq h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}$. Thus, \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\lambda h. \] Now assume that $\lambda>mh^{1/2}$. Let $n\in\mathbb{N}$ and $\delta\in(0,1]$ satisfy \[ n\in[\lambda h^{-1/2}m^{-1},2\lambda h^{-1/2}m^{-1}]\quad\text{and}\quad\delta=4\lambda m^{-1}. \] Note that $\lambda h^{-1/2}m^{-1}>1$ is a rearrangement of $\lambda>h^{1/2}m$, so that such an $n$ exists. 
Also, note that $\delta\leq1$ since $m\geq2$ and $\lambda\leq\frac{1}{2}$, and that $m_{1}(\lambda,\delta)=m$. Hence by \prettyref{lem:FvKadmissability}, we have that $\phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,m}^{vKD}$ and that \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\max\left\{ (\varrho-1)\frac{\lambda h^{1/2}}{m^{1/2}},\lambda h\right\} . \] Since $(\varrho-1)\frac{\lambda h^{1/2}}{m^{1/2}}\leq\lambda h$ is a rearrangement of $\lambda h\geq m^{-1/3}(\varrho-1)^{2/3}\lambda h^{2/3}$, we conclude that \[ E_{h}^{vKD}(\phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{vKD}\lesssim\lambda h. \] \end{proof} \subsection{Nonlinear model\label{sub:NLUB}} Recall the definitions of $E_{h}^{NL}$, $A_{\lambda,\varrho,m}^{NL}$, and $\mathcal{E}_{b}^{NL}$, given in \prettyref{eq:ENL}, \prettyref{eq:ANL}, and \prettyref{eq:EbNL}. In this section, we prove the following upper bound. \begin{prop} \label{prop:NLUBs}Let $\varrho_{0}\in[1,\infty)$. Then we have that \[ \min_{A_{\lambda,\varrho,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0}}\min\left\{ \lambda^{2},\max\left\{ \lambda h,h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7},[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\right\} \right\} \] whenever $h,\lambda\in(0,\frac{1}{2}]$, $\varrho\in[1,\varrho_{0}]$, and $m\in[1,\infty]$.\end{prop} \begin{proof} Note that since $A_{\lambda,\varrho,m}^{NL}\subset A_{\lambda,\varrho,m'}^{NL}$ if $m\leq m'$, we need only prove the claim for $m=1$. The upper bound of $\lambda^{2}$ is achieved by the unbuckled configuration, $\Phi=(\varrho,\theta,(1-\lambda)z)$. To prove the remainder of the upper bound, note first that it suffices to achieve it for $(h,\lambda,\varrho)\in(0,h_{0}]\times(0,\frac{1}{2}]\times[1,\varrho_{0}]$ for some $h_{0}\in(0,\frac{1}{2}]$.
We apply \prettyref{lem:NLUB_manywrinkles}, \prettyref{lem:NLUB_onewrinkle}, and \prettyref{lem:NLUB_uniforminmandrel} to deduce the required upper bound in the stated parameter range with $h_{0}=\frac{1}{4}$. Note that the dependence of the constants in these lemmas on $f$ can be dropped, since $f$ is fixed in the subsequent paragraphs. \end{proof} In the remainder of this section, we\textbf{ fix} $\varrho_{0}\in[1,\infty)$ as in the claim. Furthermore, we \textbf{assume} that \[ h\in(0,\frac{1}{4}],\ \lambda\in(0,\frac{1}{2}],\ \text{and}\quad\varrho\in[1,\varrho_{0}] \] unless otherwise explicitly stated. As in the analysis of the vKD model, we define a two-scale axisymmetric wrinkling pattern. We refer to $n\in\mathbb{N}$ and $\delta\in(0,1]$, which represent the number of wrinkles and their relative extent respectively. Again, we refer the reader to \prettyref{fig:radialheightfield} for a schematic of this construction. We start by fixing $f\in C^{\infty}(\mathbb{R})$ such that \begin{itemize} \item $f$ is non-negative and one-periodic \item $\text{supp}\,f\cap[-\frac{1}{2},\frac{1}{2}]\subset(-\frac{1}{2},\frac{1}{2})$ \item $\norm{f'}_{L^{\infty}}<1$ \item $\int_{-\frac{1}{2}}^{\frac{1}{2}}\sqrt{1-f'^{2}}\,dt=\frac{1}{2}$. \end{itemize} Define $f_{\delta,n}\in C^{\infty}(\mathbb{R})$ by \[ f_{\delta,n}(t)=\frac{\delta}{n}f(\frac{n}{\delta}\{t\})\ind{\{t\}\in B_{\delta/2}}. \] Let $S_{f}:[0,1]\to\mathbb{R}$ be defined by \[ S_{f}(q)=1-\int_{-\frac{1}{2}}^{\frac{1}{2}}\sqrt{1-q^{2}f'^{2}}\,dt, \] and observe that $S_{f}$ is a bijection of $[0,1]\leftrightarrow[0,\frac{1}{2}]$. Hence, \textbf{if} $\delta\in[2\lambda,1]$, we can define $w_{\delta,n,\lambda},u_{\delta,n,\lambda}:\Omega\to\mathbb{R}$ by \[ w_{\delta,n,\lambda}(\theta,z)=S_{f}^{-1}\left(\frac{\lambda}{\delta}\right)f_{\delta,n}(z)\quad\text{and}\quad u_{\delta,n,\lambda}(\theta,z)=\int_{-\frac{1}{2}\leq z'\leq z}\sqrt{1-(\partial_{z}w_{\delta,n,\lambda}(\theta,z'))^{2}}-(1-\lambda)\,dz'. 
\] Finally, we define $\Phi_{\delta,n,\lambda,\varrho}:\Omega\to\mathbb{R}^{3}$ by \[ \Phi_{\delta,n,\lambda,\varrho}=(w_{\delta,n,\lambda}+\varrho,\theta,(1-\lambda)z+u_{\delta,n,\lambda}), \] in cylindrical coordinates. We now estimate the elastic energy of this wrinkling pattern. \begin{lem} \label{lem:NLadmissability} Let $\delta\in[2\lambda,1]$. Then we have that $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$. Furthermore, \[ E_{h}^{NL}(\Phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0},f}\max\left\{ \left[(\varrho^{2}-1)\vee h^{2}\right]\frac{\lambda^{1/2}\delta^{3/2}}{n},\frac{\lambda\delta^{2}}{n^{2}},h^{2}\frac{\lambda n^{2}}{\delta^{2}}\right\} . \] \end{lem} \begin{proof} Abbreviate $\Phi_{\delta,n,\lambda,\varrho}$ by $\Phi$, $w_{\delta,n,\lambda}$ by $w$, and $u_{\delta,n,\lambda}$ by $u$. By its definition, $\Phi_{\rho}\in H_{\text{per}}^{2}$, $\Phi_{\theta}-\theta\in H_{\text{per}}^{2}$, and $\Phi_{z}-(1-\lambda)z\in H_{\text{per}}^{2}$. To see these, note that $w,u\in H_{\text{per}}^{2}$. Indeed, we have that \begin{align*} \int_{-\frac{1}{2}}^{\frac{1}{2}}\sqrt{1-(\partial_{z}w(\theta,z))^{2}}\,dz & =\int_{[-\frac{1}{2},\frac{1}{2}]\backslash B_{\delta/2}}1\,dt+\int_{B_{\delta/2}}\sqrt{1-\left(S_{f}^{-1}\left(\frac{\lambda}{\delta}\right)f_{\delta,n}'(t)\right)^{2}}\,dt\\ & =2(\frac{1}{2}-\frac{\delta}{2})+\delta\int_{-\frac{1}{2}}^{\frac{1}{2}}\sqrt{1-(S_{f}^{-1}\left(\frac{\lambda}{\delta}\right))^{2}\left(f'(t)\right)^{2}}\,dt\\ & =1-\delta S_{f}\circ S_{f}^{-1}(\frac{\lambda}{\delta})=1-\lambda \end{align*} for each $\theta\in I_{\theta}$. Also, we have that $\Phi_{\rho}\geq\varrho$, since $w\geq0$, and that \[ \partial_{z}\Phi_{z}=1-\lambda+\partial_{z}u=\sqrt{1-(\partial_{z}w)^{2}}\geq0. \] Now we check the slope bounds. 
Note that \[ \partial_{z}\Phi_{\rho}=\partial_{z}w=S_{f}^{-1}\left(\frac{\lambda}{\delta}\right)f_{\delta,n}'(z) \] so that \[ \norm{\partial_{z}\Phi_{\rho}}_{L^{\infty}}\leq\left|S_{f}^{-1}\left(\frac{\lambda}{\delta}\right)\right|\norm{f_{\delta,n}'}_{L^{\infty}}\leq\norm{f'}_{L^{\infty}}<1. \] Also, by the above, we have that \[ \partial_{z}\Phi_{z}=\sqrt{1-(\partial_{z}w)^{2}}\in[0,1]. \] Hence, \[ \max_{\substack{i\in\left\{ \theta,z\right\} ,\,j\in\{\rho,\theta,z\}} }\norm{\partial_{i}\Phi_{j}}_{L^{\infty}}\leq1 \] and it follows that $\Phi\in A_{\lambda,\varrho,1}^{NL}$. Now we bound the energy of this construction. Since $g_{zz}=1$, $g_{\theta z}=0$, and $u,w$ are functions of $z$ alone, we have that \[ E_{h}^{NL}(\Phi)=\int_{\Omega}\,\left|(\varrho+w)^{2}-1\right|^{2}+h^{2}(\left|\varrho+w\right|^{2}+|\partial_{z}^{2}w|^{2}+2|\partial_{z}w|^{2}+|\partial_{z}^{2}u|^{2})\,d\theta dz. \] Hence, \begin{align*} E_{h}^{NL}(\Phi)-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0}} & \max\{\left[(\varrho^{2}-1)\vee h^{2}\right]\norm{w}_{L^{1}(\Omega)},\norm{w}_{L^{2}(\Omega)}^{2},\\ & \qquad h^{2}\left(\norm{\partial_{z}^{2}w}_{L^{2}(\Omega)}^{2}\vee\norm{\partial_{z}w}_{L^{2}(\Omega)}^{2}\vee\norm{\partial_{z}^{2}u}_{L^{2}(\Omega)}^{2}\right)\}. \end{align*} (Here we used that $\norm{w}_{L^{\infty}}\leq1$, which follows from its definition and our choice of $f$.) By definition, we have that \[ \partial_{z}^{2}u=-\frac{\partial_{z}w\partial_{z}^{2}w}{\sqrt{1-(\partial_{z}w)^{2}}} \] so that \[ \norm{\partial_{z}^{2}u}_{L^{2}(\Omega)}\lesssim_{f}\norm{\partial_{z}^{2}w}_{L^{2}(\Omega)}.
\] Also, we have that \begin{align*} & \norm{w}_{L^{1}(\Omega)}\lesssim S_{f}^{-1}(\frac{\lambda}{\delta})\frac{\delta^{2}}{n},\quad\norm{w}_{L^{2}(\Omega)}^{2}\lesssim\left(S_{f}^{-1}(\frac{\lambda}{\delta})\right)^{2}\frac{\delta^{3}}{n^{2}},\\ & \norm{\partial_{z}w}_{L^{2}(\Omega)}^{2}\lesssim\left(S_{f}^{-1}(\frac{\lambda}{\delta})\right)^{2}\delta,\ \text{and}\quad\norm{\partial_{z}^{2}w}_{L^{2}(\Omega)}^{2}\lesssim\left(S_{f}^{-1}(\frac{\lambda}{\delta})\right)^{2}\frac{n^{2}}{\delta}. \end{align*} Since \[ \frac{q^{2}}{2}\norm{f'}_{L^{2}([-\frac{1}{2},\frac{1}{2}])}^{2}\leq S_{f}(q) \] it follows that \[ S_{f}^{-1}(\frac{\lambda}{\delta})\lesssim_{f}\left(\frac{\lambda}{\delta}\right)^{1/2}. \] Combining the above, we conclude that \[ E_{h}^{NL}(\Phi)-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0},f}\max\left\{ \left[(\varrho^{2}-1)\vee h^{2}\right]\frac{\lambda^{1/2}\delta^{3/2}}{n},\frac{\lambda\delta^{2}}{n^{2}},h^{2}\left(\frac{\lambda n^{2}}{\delta^{2}}\vee\lambda\right)\right\} \] and the result immediately follows. \end{proof} Next, we choose $n,\delta$ which are optimal for our construction in various regimes. Our first choice exhibits many wrinkles, and is the nonlinear analog of \prettyref{lem:FvKUB_manywrinkles}. \begin{lem} \label{lem:NLUB_manywrinkles}Assume that \[ [(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\geq\max\{\lambda h,h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}\}. \] Let $n\in\mathbb{N}$ and $\delta\in(0,1]$ satisfy \[ n\in\left[[(\varrho^{2}-1)\vee h^{2}]^{1/3}\lambda h^{-2/3},2[(\varrho^{2}-1)\vee h^{2}]^{1/3}\lambda h^{-2/3}\right]\quad\text{and}\quad\delta=2\lambda. \] Then, $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$ and \[ E_{h}^{NL}(\Phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0},f}[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}. 
\] \end{lem} \begin{proof} Rearranging the inequality $[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\geq h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}$, we find that $[(\varrho^{2}-1)\vee h^{2}]^{1/3}\lambda h^{-2/3}\geq1$ so that there exists such an $n\in\mathbb{N}$. Also, with our choice of $\delta$ we have that $\delta\in[2\lambda,1]$. It follows immediately from \prettyref{lem:NLadmissability} that $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$. Finally, the bound on the energy follows from \prettyref{lem:NLadmissability} as in the proof of \prettyref{lem:FvKUB_manywrinkles}, where $\varrho-1$ is replaced by $(\varrho^{2}-1)\vee h^{2}$ and $m$ is replaced by the number $1$. \end{proof} Next, we consider a pattern consisting of one wrinkle. \begin{lem} \label{lem:NLUB_onewrinkle}Assume that \[ h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}\geq\max\{\lambda h,[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\}. \] Let $n=1$ and let $\delta\in[2\lambda,1]$ be given by \[ \delta=2\lambda^{1/7}[(\varrho^{2}-1)\vee h^{2}]^{-2/7}h^{4/7}. \] Then, $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$ and \[ E_{h}^{NL}(\Phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0},f}h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}. \] \end{lem} \begin{proof} First, we check that $\delta\in[2\lambda,1]$. For the upper bound, note that $2\lambda^{1/7}[(\varrho^{2}-1)\vee h^{2}]^{-2/7}h^{4/7}\leq1$ if and only if $\lambda h^{4}\leq\frac{1}{2^{7}}[(\varrho^{2}-1)\vee h^{2}]^{2}$. By assumption, we have that $\lambda h\leq h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}$ so that $\lambda h^{1/2}\leq[(\varrho^{2}-1)\vee h^{2}]^{2}$. Since $h\leq\frac{1}{4}$, it follows that $h^{4}\leq\frac{1}{2^{7}}h^{1/2}$ and hence that $\lambda h^{4}\leq\frac{1}{2^{7}}[(\varrho^{2}-1)\vee h^{2}]^{2}$ as required. 
For the lower bound, we note that $2\lambda^{1/7}[(\varrho^{2}-1)\vee h^{2}]^{-2/7}h^{4/7}\geq2\lambda$ if and only if $h^{4}\geq\lambda^{6}[(\varrho^{2}-1)\vee h^{2}]^{2}$. As this is a rearrangement of $[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3}\leq h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}$, we conclude the lower bound. It follows from \prettyref{lem:NLadmissability} that $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$. The bound on the energy also follows from \prettyref{lem:NLadmissability}, as in the proof of \prettyref{lem:FvKUB_onewrinkle} but where $\varrho-1$ is replaced by $(\varrho^{2}-1)\vee h^{2}$. \end{proof} Finally, we discuss the neutral mandrel case, where $\varrho=1$. \begin{lem} \label{lem:NLUB_uniforminmandrel} Assume that \[ \lambda h\geq\max\{[(\varrho^{2}-1)\vee h^{2}]^{2/3}\lambda h^{2/3},h^{6/7}\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}\}. \] If $\lambda\leq h^{1/2}$, then upon taking $n=1$ and $\delta=2h^{1/2}\in[2\lambda,1]$ we find that $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$ and that \[ E_{h}^{NL}(\Phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0},f}\lambda h. \] If $\lambda>h^{1/2}$, then upon taking $n\in\mathbb{N}$ and $\delta\in[2\lambda,1]$ which satisfy \[ n\in[\lambda h^{-1/2},2\lambda h^{-1/2}]\quad\text{and}\quad\delta=2\lambda, \] we find that $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$ and that \[ E_{h}^{NL}(\Phi_{\delta,n,\lambda,\varrho})-\mathcal{E}_{b}^{NL}\lesssim_{\varrho_{0},f}\lambda h. \] \end{lem} \begin{proof} We prove this in two parts. Assume first that $\lambda\leq h^{1/2}$. Then let $n=1$ and $\delta=2h^{1/2}$. Note that $\delta\in[2\lambda,1]$ if and only if $h\leq\frac{1}{4}$ and $h^{1/2}\geq\lambda$. 
It follows from \prettyref{lem:NLadmissability} that $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$, and the bound on the energy follows from \prettyref{lem:NLadmissability} as in the proof of \prettyref{lem:FvKUB_uniforminmandrel}, where $\varrho-1$ is replaced by $(\varrho^{2}-1)\vee h^{2}$. Now assume that $\lambda>h^{1/2}$. Let $n\in\mathbb{N}$ and $\delta\in[2\lambda,1]$ satisfy \[ n\in[\lambda h^{-1/2},2\lambda h^{-1/2}]\quad\text{and}\quad\delta=2\lambda. \] Note that $\lambda h^{-1/2}>1$ is a rearrangement of $\lambda>h^{1/2}$, so that such an $n$ exists. It follows immediately from \prettyref{lem:NLadmissability} that $\Phi_{\delta,n,\lambda,\varrho}\in A_{\lambda,\varrho,1}^{NL}$. The bound on the energy follows from \prettyref{lem:NLadmissability} as in the proof of \prettyref{lem:FvKUB_uniforminmandrel}, where $\varrho-1$ is replaced by $(\varrho^{2}-1)\vee h^{2}$ and $m$ is replaced by the number $1$. \end{proof} \section{Ansatz-free lower bounds in the large mandrel case \label{sec:largemandrelLB}} We turn now to prove the ansatz-free lower bounds from \prettyref{thm:FvKlargemandrelscaling} and \prettyref{thm:NLlargemandrelscaling}. The key idea behind their proof is that buckling in the presence of the mandrel requires ``outwards'' displacement, i.e., displacement in the direction of increasing $\rho$, and that this results in the presence of non-trivial tensile hoop stresses. This observation leads to lower bounds on $E_{h}^{vKD}$ in \prettyref{sub:largemandrelLB_FvK} and on $E_{h}^{NL}$ in \prettyref{sub:largemandrelLB_nonlinear}. These bounds are optimal in certain regimes of the form $\varrho-1\geq c_{m}(\lambda,h)>0$ (for the precise statement, we refer the reader to \prettyref{sub:largemandrelresults} in the introduction). 
\subsection{vKD model\label{sub:largemandrelLB_FvK}} Recall the definitions of $E_{h}^{vKD}$, $A_{\lambda,\varrho,m}^{vKD}$, and $\mathcal{E}_{b}^{vKD}$ from \prettyref{eq:EFvK}, \prettyref{eq:AFvK}, and \prettyref{eq:EbFVK}. In \prettyref{sub:largemandrelLBproof}, we prove the following lower bound. \begin{prop} \label{prop:FvKlargemandrelLB} We have that \[ \min\left\{ \max\left\{ m^{-2/3}(\varrho-1)^{2/3}h^{2/3}\lambda,\lambda^{5/7}(\varrho-1)^{4/7}h^{6/7}\right\} ,\lambda^{2}\right\} \lesssim\min_{A_{\lambda,\varrho,m}^{vKD}}E_{h}^{vKD}-\mathcal{E}_{b}^{vKD} \] whenever $h,\lambda\in(0,\infty)$, $\varrho\in[1,\infty)$, and $m\in(0,\infty]$. \end{prop} \begin{proof} This follows from \prettyref{cor:FvKLB_largemandr} and \prettyref{cor:FvKLB_largemandrel2}, which combine to prove the equivalent statement that \[ \min_{A_{\lambda,\varrho,m}^{vKD}}E_{h}^{vKD}-\mathcal{E}_{b}^{vKD}\gtrsim\max\left\{ \min\{m^{-2/3}(\varrho-1)^{2/3}h^{2/3}\lambda,\lambda^{2}\},\min\{\lambda^{5/7}(\varrho-1)^{4/7}h^{6/7},\lambda^{2}\}\right\} . \] \end{proof} In \prettyref{sub:Blowuprate_largemandrel}, we prove an estimate on the blow-up rate of $D\phi$ as $h\to0$ for the minimizers of the $m=\infty$ problem. \subsubsection{Proof of the ansatz-free lower bound\label{sub:largemandrelLBproof}} We begin by controlling various features of the radial displacement, $\phi_{\rho}$. Given $\phi\in A_{\lambda,\varrho,m}^{vKD}$ we call \[ \Delta^{vKD}=E_{h}^{vKD}(\phi)-\mathcal{E}_{b}^{vKD}, \] which is the excess elastic energy in the vKD model. \begin{lem} \label{lem:FvKLBs_largemandr} Let $\phi\in A_{\lambda,\varrho,\infty}^{vKD}$. Then we have that \[ \Delta^{vKD}\geq\max\left\{ (\varrho-1)\norm{\phi_{\rho}-(\varrho-1)}_{L^{1}(\Omega)},h^{2}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2},\norm{\frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}-\lambda}_{L_{\theta}^{2}}^{2}\right\} . 
\] \end{lem} \begin{proof} Make the substitution \[ \phi=(w+\varrho-1,u_{\theta},u_{z}-\lambda z), \] given in cylindrical coordinates. By definition, the vKD strain tensor, $\epsilon$, satisfies \[ \epsilon_{\theta\theta}=\partial_{\theta}u_{\theta}+\frac{1}{2}(\partial_{\theta}w)^{2}+w+(\varrho-1)\quad\text{and}\quad\epsilon_{zz}=\partial_{z}u_{z}-\lambda+\frac{1}{2}(\partial_{z}w)^{2}. \] Since $u_{\theta}\in H_{\text{per}}^{1}$, we have that \begin{align*} E_{h}^{vKD}(\phi) & \geq\int_{\Omega}\abs{\epsilon_{\theta\theta}}^{2}+\abs{\epsilon_{zz}}^{2}+h^{2}\abs{D^{2}w}^{2}\\ & \geq\int_{\Omega}(\varrho-1)^{2}+2(\varrho-1)(\partial_{\theta}u_{\theta}+\frac{1}{2}(\partial_{\theta}w)^{2}+w)+\abs{\epsilon_{zz}}^{2}+h^{2}\abs{D^{2}w}^{2}\\ & \geq\mathcal{E}_{b}^{vKD}+\int_{\Omega}2(\varrho-1)w+\abs{\epsilon_{zz}}^{2}+h^{2}\abs{D^{2}w}^{2}. \end{align*} Since $w$ is non-negative, we conclude that \[ \Delta^{vKD}\geq\max\left\{ 2(\varrho-1)\norm{w}_{L^{1}(\Omega)},\norm{\epsilon_{zz}}_{L^{2}(\Omega)}^{2},h^{2}\norm{D^{2}w}_{L^{2}(\Omega)}^{2}\right\} . \] By applying Jensen's inequality and using that $u_{z}\in H_{\text{per}}^{1}$, it follows that \[ \norm{\epsilon_{zz}}_{L^{2}(\Omega)}^{2}\geq\frac{1}{\abs{I_{z}}}\int_{I_{\theta}}\abs{\int_{I_{z}}\epsilon_{zz}\,dz}^{2}\,d\theta=\frac{1}{\abs{I_{z}}}\norm{\frac{1}{2}\norm{\partial_{z}w}_{L_{z}^{2}}^{2}-\lambda}_{L_{\theta}^{2}}^{2}. \] Since $\abs{I_{z}}=1$, the result follows. \end{proof} Now, we will apply the Gagliardo-Nirenberg interpolation inequalities from \prettyref{sec:Appendix} to deduce the desired lower bounds. \begin{cor} \label{cor:FvKLB_largemandr} If $\phi\in A_{\lambda,\varrho,m}^{vKD}$, then \[ \Delta^{vKD}\gtrsim\min\{m^{-2/3}(\varrho-1)^{2/3}h^{2/3}\lambda,\lambda^{2}\}. \] In fact, if $\phi\in A_{\lambda,\varrho,\infty}^{vKD}$, then \[ \Delta^{vKD}\gtrsim\min\{\norm{D\phi_{\rho}}_{L^{\infty}}^{-2/3}(\varrho-1)^{2/3}h^{2/3}\lambda,\lambda^{2}\}. 
\] \end{cor} \begin{proof} Observe that by \prettyref{lem:FvKLBs_largemandr} and an application of H\"older's inequality, we have that \[ (\Delta^{vKD})^{1/2}\geq\abs{I_{z}}^{-1/2}|I_{\theta}|^{-1/2}\norm{\frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}-\lambda}_{L_{\theta}^{1}}. \] Hence, by the triangle inequality, \[ \frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}+|\Omega|^{1/2}(\Delta^{vKD})^{1/2}\geq\lambda|I_{\theta}|. \] Now we perform a case analysis. If $\phi$ satisfies $\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\leq\lambda|I_{\theta}|$, then we conclude by the above that $\Delta^{vKD}\gtrsim\lambda^{2}$. If, on the other hand, $\phi$ satisfies $\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}>\lambda|I_{\theta}|$, then we can combine the interpolation inequality from \prettyref{lem:2dinterp-1} (applied to $f=\phi_{\rho}-(\varrho-1)$) with \prettyref{lem:FvKLBs_largemandr} to conclude that \[ \lambda\lesssim\norm{D\phi_{\rho}}_{L^{\infty}(\Omega)}^{2/3}\left(\frac{1}{\varrho-1}\Delta^{vKD}\right)^{2/3}\left(\frac{1}{h^{2}}\Delta^{vKD}\right)^{1/3}\lesssim m^{2/3}(\varrho-1)^{-2/3}h^{-2/3}\Delta^{vKD}. \] These observations combine to prove the desired result.\end{proof} \begin{cor} \label{cor:FvKLB_largemandrel2} If $\phi\in A_{\lambda,\varrho,m}^{vKD}$, then \[ \Delta^{vKD}\gtrsim\min\{\lambda^{5/7}(\varrho-1)^{4/7}h^{6/7},\lambda^{2}\}. \] \end{cor} \begin{proof} Evidently, it suffices to prove that \[ \Delta^{vKD}\leq\frac{1}{4}|I_{\theta}|\lambda^{2}\implies\Delta^{vKD}\gtrsim\lambda^{5/7}(\varrho-1)^{4/7}h^{6/7}. \] Assume that $\Delta^{vKD}\leq\frac{1}{4}|I_{\theta}|\lambda^{2}$, and define the set \[ Z=\left\{ \theta\in I_{\theta}\ :\ \abs{\frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}-\lambda}\geq\frac{\lambda}{\sqrt{2}}\right\} . \] We claim that $|I_{\theta}\backslash Z|\geq\frac{1}{2}|I_{\theta}|$. Indeed, by Chebyshev's inequality and \prettyref{lem:FvKLBs_largemandr}, we have that \[ \frac{\lambda^{2}}{2}\abs{Z}\leq\norm{\frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}-\lambda}_{L_{\theta}^{2}}^{2}\leq\frac{1}{4}|I_{\theta}|\lambda^{2} \] so that $|Z|\leq\frac{1}{2}|I_{\theta}|$ as desired. Since $\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}\geq(2-\sqrt{2})\lambda$ on $I_{\theta}\backslash Z$, it follows that \[ \lambda^{5/7}\abs{I_{\theta}}\lesssim\int_{I_{\theta}\backslash Z}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{10/7}\,d\theta\leq\int_{I_{\theta}}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{10/7}\,d\theta. \] Applying the first interpolation inequality from \prettyref{lem:1dinterp} to $f=\phi_{\rho}-(\varrho-1)$, we conclude that \[ \lambda^{5/7}\abs{I_{\theta}}\lesssim\int_{I_{\theta}}\norm{f}_{L_{z}^{1}}^{4/7}\norm{\partial_{z}^{2}f}_{L_{z}^{2}}^{6/7}\,d\theta\leq\norm{\phi_{\rho}-(\varrho-1)}_{L^{1}(\Omega)}^{4/7}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{6/7}. \] Note that we used H\"older's inequality in the second step. Finally, \prettyref{lem:FvKLBs_largemandr} proves that \[ \lambda^{5/7}\lesssim\left(\frac{1}{\varrho-1}\Delta^{vKD}\right)^{4/7}\left(\frac{1}{h^{2}}\Delta^{vKD}\right)^{3/7}=(\varrho-1)^{-4/7}h^{-6/7}\Delta^{vKD} \] and the lower bound follows. \end{proof} \subsubsection{Blow-up rate of $D\phi$ as $h\to0$\label{sub:Blowuprate_largemandrel}} We can now make \prettyref{rem:FvKslopeexlposion} precise, regarding the claim that $E_{h}^{vKD}$ prefers exploding slopes in the limit $h\to0$. The following result can be seen to justify the introduction of the parameter $m$ in the definition of the admissible set, $A_{\lambda,\varrho,m}^{vKD}$. \begin{cor} Let $\left\{ (h_{\alpha},\lambda_{\alpha},\varrho_{\alpha})\right\} _{\alpha\in\mathbb{R}_{+}}$ be such that $h_{\alpha},\lambda_{\alpha}\in(0,\frac{1}{2}]$ and $\varrho_{\alpha}\geq1+\lambda_{\alpha}^{1/2}h_{\alpha}^{1/4}$. 
Assume that $h_{\alpha}\ll(\varrho_{\alpha}-1)^{-2/3}\lambda_{\alpha}^{3/2}$ as $\alpha\to\infty$, and let $\{\phi^{\alpha}\}_{\alpha\in\mathbb{R}_{+}}$ satisfy \[ \phi^{\alpha}\in A_{\lambda_{\alpha},\varrho_{\alpha},\infty}^{vKD}\quad\text{and}\quad E_{h_{\alpha}}^{vKD}(\phi^{\alpha})=\min_{A_{\lambda_{\alpha},\varrho_{\alpha},\infty}^{vKD}}E_{h_{\alpha}}^{vKD}. \] Then we have that \[ (\varrho_{\alpha}-1)^{1/7}h_{\alpha}^{-2/7}\lambda_{\alpha}^{3/7}\lesssim\norm{D\phi_{\rho}^{\alpha}}_{L^{\infty}}\quad\text{as}\ \alpha\to\infty. \] \end{cor} \begin{proof} For ease of notation, we omit the index $\alpha$ in what follows. By \prettyref{prop:FvKUB} we have that \[ E_{h}^{vKD}(\phi)-\mathcal{E}_{b}^{vKD}\lesssim h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}. \] Hence, by \prettyref{cor:FvKLB_largemandr}, it follows that \[ \lambda^{2}\lesssim h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}\quad\text{or}\quad\norm{D\phi_{\rho}}_{L^{\infty}}^{-2/3}(\varrho-1)^{2/3}h^{2/3}\lambda\lesssim h^{6/7}\lambda^{5/7}(\varrho-1)^{4/7}. \] Rearranging, we have that \[ h\gtrsim(\varrho-1)^{-2/3}\lambda^{3/2}\quad\text{or}\quad(\varrho-1)^{1/7}h^{-2/7}\lambda^{3/7}\lesssim\norm{D\phi_{\rho}}_{L^{\infty}}. \] By assumption the first inequality does not hold, and the result follows. \end{proof} \subsection{Nonlinear model\label{sub:largemandrelLB_nonlinear}} Recall the definitions of $E_{h}^{NL}$, $A_{\lambda,\varrho,m}^{NL}$, and $\mathcal{E}_{b}^{NL}$ given in \prettyref{eq:ENL}, \prettyref{eq:ANL}, and \prettyref{eq:EbNL}. In this section, we prove the following lower bound. \begin{prop} \label{prop:NLLBs_largemandr}Let $\varrho_{0}\in[1,\infty)$. 
Then we have that \[ \min\left\{ \max\left\{ \left[(\varrho^{2}-1)\vee h^{2}\right]^{2/3}h^{2/3}\lambda,\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}h^{6/7}\right\} ,\lambda^{2}\right\} \lesssim_{m,\varrho_{0}}\min_{A_{\lambda,\varrho,m}^{NL}}E_{h}^{NL}-\mathcal{E}_{b}^{NL} \] whenever $h,\lambda\in(0,1]$, $\varrho\in[1,\varrho_{0}]$, and $m\in(0,\infty)$. \end{prop} The reader may notice that, although it is certainly more involved, the following argument shares the same overall structure as the one given for the vKD model in \prettyref{sub:largemandrelLB_FvK}. For more on this, we refer to the discussion in \prettyref{sub:DiscussionofProofs}. In the remainder of this section, we \textbf{assume} that \[ 0<h,\lambda\leq1,\quad1\leq\varrho\leq\varrho_{0}<\infty,\quad\text{and}\quad0<m<\infty. \] Given $\Phi\in A_{\lambda,\varrho,m}^{NL}$ we call \begin{equation} \Delta^{NL}=E_{h}^{NL}(\Phi)-\mathcal{E}_{b}^{NL},\label{eq:NLexcess} \end{equation} which is the excess elastic energy in the nonlinear model. Observe that we may \textbf{assume} that \[ \Phi\ \text{satisfies}\ \Delta^{NL}\leq1, \] since otherwise the desired bound is clear. As the reader will note, this assumption simplifies the discussion throughout. 
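To spell out why the bound is clear when the assumption fails: since $\lambda\leq1$, if $\Delta^{NL}>1$ then
\[
\Delta^{NL}>1\geq\lambda^{2}\geq\min\left\{ \max\left\{ \left[(\varrho^{2}-1)\vee h^{2}\right]^{2/3}h^{2/3}\lambda,\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}h^{6/7}\right\} ,\lambda^{2}\right\} ,
\]
so such a $\Phi$ already satisfies the estimate of \prettyref{prop:NLLBs_largemandr}.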
We will make frequent use of the following identities concerning the components of the metric tensor, $g=D\Phi^{T}D\Phi$, in $(\theta,z)$-coordinates: \begin{align}\label{eq:g_components} \begin{split} g_{\theta\theta}&=\left(\partial_{\theta}\Phi_{\rho}\right)^{2}+\Phi_{\rho}^{2}\left(\partial_{\theta}\Phi_{\theta}\right)^{2}+\left(\partial_{\theta}\Phi_{z}\right)^{2}\\ g_{zz}&=\left(\partial_{z}\Phi_{\rho}\right)^{2}+\Phi_{\rho}^{2}\left(\partial_{z}\Phi_{\theta}\right)^{2}+\left(\partial_{z}\Phi_{z}\right)^{2}\\ g_{\theta z}&=\partial_{\theta}\Phi_{\rho}\partial_{z}\Phi_{\rho}+\Phi_{\rho}^{2}\partial_{\theta}\Phi_{\theta}\partial_{z}\Phi_{\theta}+\partial_{\theta}\Phi_{z}\partial_{z}\Phi_{z} \end{split} \end{align}We will also make use of the following identities concerning the components of $D^{2}\Phi$ in $(\theta,z)$-coordinates: \begin{align}\label{eq:DDPhi} \begin{split} \partial_{\theta}^{2}\Phi&=(\partial_{\theta}^{2}\Phi_{\rho}-\Phi_{\rho}(\partial_{\theta}\Phi_{\theta})^{2})E_{\rho}(\Phi)+(2\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}+\Phi_{\rho}\partial_{\theta}^{2}\Phi_{\theta})E_{\theta}(\Phi)+\partial_{\theta}^{2}\Phi_{z}E_{z}\\ \partial_{z}^{2}\Phi&=(\partial_{z}^{2}\Phi_{\rho}-\Phi_{\rho}(\partial_{z}\Phi_{\theta})^{2})E_{\rho}(\Phi)+(2\partial_{z}\Phi_{\rho}\partial_{z}\Phi_{\theta}+\Phi_{\rho}\partial_{z}^{2}\Phi_{\theta})E_{\theta}(\Phi)+\partial_{z}^{2}\Phi_{z}E_{z}\\ \partial_{\theta z}\Phi&=(\partial_{\theta z}\Phi_{\rho}-\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\partial_{z}\Phi_{\theta})E_{\rho}(\Phi)+(\partial_{\theta}\Phi_{\rho}\partial_{z}\Phi_{\theta}+\partial_{\theta}\Phi_{\theta}\partial_{z}\Phi_{\rho}+\Phi_{\rho}\partial_{\theta z}\Phi_{\theta})E_{\theta}(\Phi)+\partial_{\theta z}\Phi_{z}E_{z} \end{split} \end{align}Here, $\{E_{i}\}_{i\in\{\rho,\theta,z\}}$ denotes the unit frame of coordinate vectors for the cylindrical $\rho,\theta,z$-coordinates on $\mathbb{R}^{3}$ (as defined in \prettyref{sub:notation}). 
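These identities are obtained by differentiating in the moving frame. As an illustration, writing $\Phi=\Phi_{\rho}E_{\rho}(\Phi)+\Phi_{z}E_{z}$ and using that $\partial_{\theta}E_{\rho}(\Phi)=\partial_{\theta}\Phi_{\theta}\,E_{\theta}(\Phi)$ and $\partial_{\theta}E_{\theta}(\Phi)=-\partial_{\theta}\Phi_{\theta}\,E_{\rho}(\Phi)$, one finds that
\[
\partial_{\theta}\Phi=\partial_{\theta}\Phi_{\rho}\,E_{\rho}(\Phi)+\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\,E_{\theta}(\Phi)+\partial_{\theta}\Phi_{z}\,E_{z},
\]
and since the frame is orthonormal, the formula for $g_{\theta\theta}=\abs{\partial_{\theta}\Phi}^{2}$ in \prettyref{eq:g_components} follows. The remaining identities in \prettyref{eq:g_components} follow in the same way, and those in \prettyref{eq:DDPhi} follow upon differentiating once more.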
\subsubsection{Controlling the radial deformation} We begin by proving that the excess energy controls the membrane and bending terms individually. \begin{lem} \label{lem:NLexcesssplits} If $\Phi\in A_{\lambda,\varrho,\infty}^{NL}$, then \begin{align*} \Delta^{NL} & \geq\max\left\{ \int_{\Omega}\abs{g_{\theta\theta}-1}^{2}-(\varrho^{2}-1)^{2},\norm{g_{\theta z}}_{L^{2}(\Omega)}^{2},\norm{g_{zz}-1}_{L^{2}(\Omega)}^{2}\right\} \\ \Delta^{NL} & \geq h^{2}\max\left\{ \int_{\Omega}\abs{\partial_{\theta}^{2}\Phi}^{2}-\varrho^{2},\norm{\partial_{\theta z}\Phi}_{L^{2}(\Omega)}^{2},\norm{\partial_{z}^{2}\Phi}_{L^{2}(\Omega)}^{2}\right\} . \end{align*} \end{lem} \begin{proof} By the definition of $\Delta^{NL}$ in \prettyref{eq:NLexcess}, it suffices to prove the following two inequalities to conclude the result: \[ \int_{\Omega}\abs{g_{\theta\theta}-1}^{2}-(\varrho^{2}-1)^{2}\geq0\quad\text{and}\quad\int_{\Omega}\abs{\partial_{\theta}^{2}\Phi}^{2}-\varrho^{2}\geq0. \] To see the first inequality, we begin by noting that \begin{equation} (g_{\theta\theta}-1)^{2}-(\varrho^{2}-1)^{2}=2(\varrho^{2}-1)(g_{\theta\theta}-\varrho^{2})+(g_{\theta\theta}-\varrho^{2})^{2}\label{eq:excessthetaenergy} \end{equation} and \begin{equation} g_{\theta\theta}-\varrho^{2}=(\partial_{\theta}\Phi_{\rho})^{2}+\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}+(\partial_{\theta}\Phi_{z})^{2}-\varrho^{2}\label{eq:excessthetastrain} \end{equation} by \prettyref{eq:g_components}. 
It follows that \begin{equation} (g_{\theta\theta}-1)^{2}-(\varrho^{2}-1)^{2}\geq2(\varrho^{2}-1)(\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2}+(\partial_{\theta}\Phi_{\rho})^{2}+(\partial_{\theta}\Phi_{z})^{2}).\label{eq:excessenergylarge} \end{equation} Using the hypothesis that $\Phi_{\rho}\geq\varrho$ and applying Jensen's inequality, we see that \begin{equation} \int_{\Omega}\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2}\geq\frac{\varrho^{2}}{\left|\Omega\right|}\left(\left(\int_{\Omega}\partial_{\theta}\Phi_{\theta}\right)^{2}-\left|\Omega\right|^{2}\right)=\frac{\varrho^{2}}{\left|\Omega\right|}\left(\left|\Omega\right|^{2}-\left|\Omega\right|^{2}\right)=0.\label{eq:excessenergylarge-1} \end{equation} Since $\varrho\geq1$, the first inequality follows. To see the second inequality, note that by \prettyref{eq:DDPhi} we have that \[ \abs{\partial_{\theta}^{2}\Phi}\geq\abs{\partial_{\theta}^{2}\Phi_{\rho}-\Phi_{\rho}(\partial_{\theta}\Phi_{\theta})^{2}}. \] Hence, by Jensen's inequality and since $\Phi_{\rho}\in H_{\text{per}}^{2}$, it follows that \[ \int_{\Omega}\abs{\partial_{\theta}^2\Phi}^{2}-\varrho^{2}\geq\frac{1}{\abs{\Omega}}\left(\int_{\Omega}\partial_{\theta}^2\Phi_{\rho}-\Phi_{\rho}(\partial_{\theta}\Phi_{\theta})^{2}\right)^{2}-\abs{\Omega}\varrho^{2}=\frac{1}{\abs{\Omega}}\left(\int_{\Omega}\Phi_{\rho}(\partial_{\theta}\Phi_{\theta})^{2}\right)^{2}-\abs{\Omega}\varrho^{2}. \] Using that $\Phi_{\rho}\geq\varrho$ and applying Jensen's inequality again, we conclude that \[ \int_{\Omega}\abs{\partial_{\theta}^2\Phi}^{2}-\varrho^{2}\geq\frac{\varrho^{2}}{|\Omega|}\left(\left(\int_{\Omega}(\partial_{\theta}\Phi_{\theta})^{2}\right)^{2}-\abs{\Omega}^{2}\right)\geq\frac{\varrho^{2}}{|\Omega|}\left(|\Omega|^{2}-|\Omega|^{2}\right)=0 \] as desired. \end{proof} Next, we establish control on the radial component of the deformation, $\Phi_{\rho}$. 
As we will require the uniform-in-mandrel estimates from this result to complete the proof of \prettyref{prop:NLLBs_largemandr}, we record these alongside the large mandrel estimates now. \begin{lem} \label{lem:NLmembrLB}Let $\Phi\in A_{\lambda,\varrho,\infty}^{NL}$. Then we have that \begin{align*} \Delta^{NL} & \gtrsim(\varrho^{2}-1)\max\{\norm{\Phi_{\rho}-\varrho}_{L^{1}(\Omega)},\norm{\partial_{\theta}\Phi_{\rho}}_{L^{2}(\Omega)}^{2},\norm{\partial_{\theta}\Phi_{\theta}-1}_{L^{2}(\Omega)}^{2},\norm{\partial_{\theta}\Phi_{z}}_{L^{2}(\Omega)}^{2}\}\\ (\Delta^{NL})^{1/2} & \gtrsim\max\left\{ \norm{\Phi_{\rho}-\varrho}_{L_{z}^{2}L_{\theta}^{1}},\norm{\partial_{\theta}\Phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{2},\norm{\partial_{\theta}\Phi_{\theta}-1}_{L_{z}^{4}L_{\theta}^{2}}^{2},\norm{\partial_{\theta}\Phi_{z}}_{L_{z}^{4}L_{\theta}^{2}}^{2}\right\} . \end{align*} \end{lem} \begin{proof} We begin by proving the first estimate. Recall \prettyref{lem:NLexcesssplits} and equations \prettyref{eq:excessenergylarge} and \prettyref{eq:excessenergylarge-1}. Altogether, these imply that \begin{equation} \Delta^{NL}\geq2(\varrho^{2}-1)\max\left\{ \int_{\Omega}\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2},\norm{\partial_{\theta}\Phi_{\rho}}_{L^{2}(\Omega)}^{2},\norm{\partial_{\theta}\Phi_{z}}_{L^{2}(\Omega)}^{2}\right\} .\label{eq:NLmembrLB-1} \end{equation} Introduce the displacements $\phi_{\rho}=\Phi_{\rho}-\varrho$ and $\phi_{\theta}=\Phi_{\theta}-\theta$. 
In these variables, \begin{equation} \Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2}\geq\varrho^{2}\left(2\partial_{\theta}\phi_{\theta}+(\partial_{\theta}\phi_{\theta})^{2}\right)+2\varrho\phi_{\rho}(\partial_{\theta}\phi_{\theta}+1)^{2}.\label{eq:NLmembrLB-0} \end{equation} Since the second term is non-negative, and since $\phi_{\theta}\in H_{\text{per}}^{2}$ and $\varrho\geq1$, we conclude from \prettyref{eq:NLmembrLB-0} that \begin{equation} I:=\int_{\Omega}\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2}\geq\int_{\Omega}\varrho^{2}\left(2\partial_{\theta}\phi_{\theta}+(\partial_{\theta}\phi_{\theta})^{2}\right)\geq\norm{\partial_{\theta}\phi_{\theta}}_{L^{2}(\Omega)}^{2}.\label{eq:NLmembrLB-2} \end{equation} In a similar manner, we can conclude from \prettyref{eq:NLmembrLB-0} that \[ I\geq\int_{\Omega}2\varrho\phi_{\rho}(\partial_{\theta}\phi_{\theta}+1)^{2}\geq\int_{\Omega}\phi_{\rho}(2\partial_{\theta}\phi_{\theta}+1) \] and, since $\phi_{\rho}\geq0$, that \[ I+\left|\int_{\Omega}\phi_{\rho}\partial_{\theta}\phi_{\theta}\right|\gtrsim\norm{\phi_{\rho}}_{L^{1}(\Omega)}. \] Recall the notation $\overline{f}$ for the $\theta$-average of a function $f$, introduced in \prettyref{sub:notation}. Integrating by parts and applying Poincar\'e's inequality, we see that \begin{align*} \abs{\int_{\Omega}\phi_{\rho}\partial_{\theta}\phi_{\theta}} & =\abs{\int_{\Omega}\partial_{\theta}\phi_{\rho}(\phi_{\theta}-\overline{\phi_{\theta}})}\leq\norm{\partial_{\theta}\phi_{\rho}}_{L^{2}(\Omega)}\norm{\phi_{\theta}-\overline{\phi_{\theta}}}_{L^{2}(\Omega)}\\ & \lesssim\norm{\partial_{\theta}\phi_{\rho}}_{L^{2}(\Omega)}\norm{\partial_{\theta}\phi_{\theta}}_{L^{2}(\Omega)}. 
\end{align*} Hence, \begin{equation} I+\norm{\partial_{\theta}\phi_{\rho}}_{L^{2}(\Omega)}\norm{\partial_{\theta}\phi_{\theta}}_{L^{2}(\Omega)}\gtrsim\norm{\phi_{\rho}}_{L^{1}(\Omega)}.\label{eq:NLmembrLB-3} \end{equation} Combining \prettyref{eq:NLmembrLB-1}, \prettyref{eq:NLmembrLB-2}, and \prettyref{eq:NLmembrLB-3} gives the required bound. We turn now to prove the second estimate. First, we observe that by \prettyref{eq:excessthetastrain} and \prettyref{eq:excessenergylarge-1}, \[ \int_{\Omega}g_{\theta\theta}-\varrho^{2}\geq\int_{\Omega}\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2}\geq0. \] Hence, by \prettyref{lem:NLexcesssplits}, \prettyref{eq:excessthetaenergy}, and since $\varrho\geq1$, we have that \[ \Delta^{NL}\geq\int_{\Omega}|g_{\theta\theta}-1|^{2}-(\varrho^{2}-1)^{2}\geq\int_{\Omega}(g_{\theta\theta}-\varrho^{2})^{2}. \] Applying Jensen's inequality along the slices $\{z\}\times I_{\theta}$, we find that \begin{equation} (\Delta^{NL})^{1/2}\gtrsim\norm{\overline{g_{\theta\theta}-\varrho^{2}}}_{L_{z}^{2}}.\label{eq:NLmembrLB-4} \end{equation} Now we estimate the integrand in the line above. It follows from \prettyref{eq:excessthetastrain} that \[ \overline{g_{\theta\theta}-\varrho^{2}}\geq\max\left\{ \overline{\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2}},\norm{\partial_{\theta}\Phi_{\rho}}_{L_{\theta}^{2}}^{2},\norm{\partial_{\theta}\Phi_{z}}_{L_{\theta}^{2}}^{2}\right\} \] for a.e.\ $z\in I_{z}$. Here we used that \[ II=\overline{\Phi_{\rho}^{2}(\partial_{\theta}\Phi_{\theta})^{2}-\varrho^{2}}\geq0 \] for a.e.\ $z\in I_{z}$, which follows from Jensen's inequality (as in the proof of \prettyref{eq:excessenergylarge-1}). Now, we apply the same reasoning to $II$ as for $I$ above. The analog of \prettyref{eq:NLmembrLB-2} is that \[ II\geq\norm{\partial_{\theta}\phi_{\theta}}_{L_{\theta}^{2}}^{2}\quad a.e., \] and this is implied by \prettyref{eq:NLmembrLB-0}. 
The analog of \prettyref{eq:NLmembrLB-3} is that \[ II+\norm{\partial_{\theta}\phi_{\rho}}_{L_{\theta}^{2}}\norm{\partial_{\theta}\phi_{\theta}}_{L_{\theta}^{2}}\gtrsim\norm{\phi_{\rho}}_{L_{\theta}^{1}}\quad a.e. \] This also follows from \prettyref{eq:NLmembrLB-0}, by an integration by parts argument and Poincar\'e's inequality. It follows that \[ \overline{g_{\theta\theta}-\varrho^{2}}\gtrsim\max\left\{ \norm{\phi_{\rho}}_{L_{\theta}^{1}},\norm{\partial_{\theta}\phi_{\theta}}_{L_{\theta}^{2}}^{2},\norm{\partial_{\theta}\Phi_{\rho}}_{L_{\theta}^{2}}^{2},\norm{\partial_{\theta}\Phi_{z}}_{L_{\theta}^{2}}^{2}\right\} \quad a.e. \] Combining this with \prettyref{eq:NLmembrLB-4} proves the required bound. \end{proof} Now, we turn to quantify the observation that if $\lambda$ is large enough, the cylinder should buckle. \begin{lem} \label{lem:bucklingestimate_witherror}Let $\Phi\in A_{\lambda,1,\infty}^{NL}$. Then we have that \[ \lambda\abs{A}\lesssim\max\{\int_{A}\norm{\partial_{z}\Phi_{\rho}}_{L_{z}^{2}}^{2}\,d\theta,(\Delta^{NL})^{1/2},\norm{\Phi_{\rho}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)}^{2}\} \] for all $A\in\mathcal{B}(I_{\theta})$.\end{lem} \begin{rem} \label{rem:invertibilityhypothesis}It is precisely in the proof of this lemma that the hypothesis on the sign of $\partial_{z}\Phi_{z}$ from the definition of $A_{\lambda,\varrho,m}^{NL}$ is used. We note that this can be relaxed, the crucial hypothesis being that $\partial_{z}\Phi_{z}$ ``stays away'' from the well at $-1$. Indeed, the lemma would remain true if the statement that $\partial_{z}\Phi_{z}\geq0$ from \prettyref{eq:ANL} were replaced with the statement that there exists a constant $c>0$ such that $|\partial_{z}\Phi_{z}+1|\geq c$ a.e. \end{rem} \begin{proof} Since $\Phi\in A_{\lambda,1,\infty}^{NL}$, we have that \[ \int_{I_{z}}\partial_{z}\Phi_{z}-1\,dz=1-\lambda-1=-\lambda \] for a.e.\ $\theta\in I_{\theta}$. 
Since we have assumed that $\partial_{z}\Phi_{z}\geq0$ a.e., it follows that \[ \lambda\leq\int_{I_{z}}|\partial_{z}\Phi_{z}-1||1+\partial_{z}\Phi_{z}|\,dz=\norm{(\partial_{z}\Phi_{z})^{2}-1}_{L_{z}^{1}} \] for a.e.\ $\theta\in I_{\theta}$. By the identity for $g_{zz}$ in \prettyref{eq:g_components}, we see that \[ \lambda\leq\norm{g_{zz}-1}_{L_{z}^{1}}+\norm{\partial_{z}\Phi_{\rho}}_{L_{z}^{2}}^{2}+\norm{\Phi_{\rho}\partial_{z}\Phi_{\theta}}_{L_{z}^{2}}^{2}. \] Now the result follows from \prettyref{lem:NLexcesssplits} by an application of H\"older's inequality. \end{proof} Now we control the cross-term, $\Phi_{\rho}\partial_{z}\Phi_{\theta}$. \begin{lem} \label{lem:NLcross-term}Let $\Phi\in A_{\lambda,\varrho,m}^{NL}.$ Then we have that \[ \norm{\Phi_{\rho}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)}\lesssim_{\varrho_{0},m}(\Delta^{NL})^{1/4}. \] \end{lem} \begin{proof} Since $\Phi_{\rho}\geq1$, we have that \[ \abs{\Phi_{\rho}\partial_{z}\Phi_{\theta}}\leq\abs{\Phi_{\rho}\partial_{z}\Phi_{\theta}\partial_{\theta}\Phi_{\theta}}+\abs{\Phi_{\rho}\partial_{z}\Phi_{\theta}(\partial_{\theta}\Phi_{\theta}-1)}\leq\Phi_{\rho}^{2}\abs{\partial_{z}\Phi_{\theta}\partial_{\theta}\Phi_{\theta}}+\abs{\Phi_{\rho}}\abs{\partial_{z}\Phi_{\theta}}\abs{\partial_{\theta}\Phi_{\theta}-1}. \] From the definition of $g_{\theta z}$ in \prettyref{eq:g_components}, we see that \[ \Phi_{\rho}^{2}\abs{\partial_{z}\Phi_{\theta}\partial_{\theta}\Phi_{\theta}}\leq\abs{g_{\theta z}}+\abs{\partial_{\theta}\Phi_{\rho}}\abs{\partial_{z}\Phi_{\rho}}+\abs{\partial_{\theta}\Phi_{z}}\abs{\partial_{z}\Phi_{z}}. 
\] Using a Lipschitz bound along with \prettyref{lem:NLmembrLB} and H\"older's inequality, we see that \begin{align*} \norm{\Phi_{\rho}}_{L^{\infty}(\Omega)} & \lesssim\norm{\Phi_{\rho}}_{L^{1}(\Omega)}+\norm{D\Phi_{\rho}}_{L^{\infty}(\Omega)}\leq\varrho|\Omega|+\norm{\Phi_{\rho}-\varrho}_{L^{1}(\Omega)}+\norm{D\Phi_{\rho}}_{L^{\infty}(\Omega)}\\ & \lesssim\varrho+(\Delta^{NL})^{1/2}+\norm{D\Phi_{\rho}}_{L^{\infty}(\Omega)}. \end{align*} Combining the above with the definition of $A_{\lambda,\varrho,m}^{NL}$ and the hypotheses that $\varrho\leq\varrho_{0}$ and $\Delta^{NL}\leq1$ gives that \[ \abs{\Phi_{\rho}\partial_{z}\Phi_{\theta}}\lesssim_{\varrho_{0},m}\max\{\abs{g_{\theta z}},\abs{\partial_{\theta}\Phi_{\rho}},\abs{\partial_{\theta}\Phi_{z}},\abs{\partial_{\theta}\Phi_{\theta}-1}\}. \] It follows that \[ \norm{\Phi_{\rho}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)}\lesssim_{\varrho_{0},m}\max\{\norm{g_{\theta z}}_{L^{2}(\Omega)},\norm{\partial_{\theta}\Phi_{\rho}}_{L^{2}(\Omega)},\norm{\partial_{\theta}\Phi_{\theta}-1}_{L^{2}(\Omega)},\norm{\partial_{\theta}\Phi_{z}}_{L^{2}(\Omega)}\}. \] Thus, after applying \prettyref{lem:NLexcesssplits}, \prettyref{lem:NLmembrLB}, and using H\"older's inequality, we find that \[ \norm{\Phi_{\rho}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)}\lesssim_{\varrho_{0},m}\max\{(\Delta^{NL})^{1/2},(\Delta^{NL})^{1/4}\}=(\Delta^{NL})^{1/4} \] as desired. \end{proof} Combining \prettyref{lem:bucklingestimate_witherror} and \prettyref{lem:NLcross-term} gives the following result. \begin{cor} \label{cor:NLbucklingcontrol}Let $\Phi\in A_{\lambda,\varrho,m}^{NL}$. Then we have that \[ \lambda\abs{A}\lesssim_{\varrho_{0},m}\max\{\int_{A}\norm{\partial_{z}\Phi_{\rho}}_{L_{z}^{2}}^{2}\,d\theta,(\Delta^{NL})^{1/2}\} \] for all $A\in\mathcal{B}(I_{\theta})$. \end{cor} Finally, we consider the bending term. \begin{lem} \label{lem:NLbendingcontrol}Let $\Phi\in A_{\lambda,\varrho,m}^{NL}$. 
Then we have that \[ \max\left\{ \frac{1}{h^{2}}\Delta^{NL},(\Delta^{NL})^{1/2}\right\} \gtrsim_{\varrho_{0},m}\max\left\{ \norm{D^{2}\Phi_{\rho}}_{L^{2}(\Omega)}^{2},\norm{\Phi_{\rho}-\varrho}_{L^{1}(\Omega)}\right\} . \] \end{lem} \begin{proof} First, we consider the $\theta z$- and $zz$-components of $D^{2}\Phi_{\rho}$. From \prettyref{eq:DDPhi}, it follows that \begin{align*} \abs{\partial_{\theta z}\Phi} & \geq\abs{\partial_{\theta z}\Phi_{\rho}-\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\partial_{z}\Phi_{\theta}}\\ \abs{\partial_z^2\Phi} & \geq\abs{\partial_z^2\Phi_{\rho}-\Phi_{\rho}\left(\partial_{z}\Phi_{\theta}\right)^{2}} \end{align*} so that \begin{align*} \norm{\partial_{\theta z}\Phi_{\rho}}_{L^{2}(\Omega)} & \leq\norm{\partial_{\theta z}\Phi}_{L^{2}(\Omega)}+\norm{\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)}\\ \norm{\partial_z^2\Phi_{\rho}}_{L^{2}(\Omega)} & \leq\norm{\partial_z^2\Phi}_{L^{2}(\Omega)}+\norm{\Phi_{\rho}\left(\partial_{z}\Phi_{\theta}\right)^{2}}_{L^{2}(\Omega)}. \end{align*} Using \prettyref{lem:NLcross-term}, we can bound the error terms in the same manner: \begin{align*} \norm{\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)} & \leq\norm{\Phi_{\rho}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)}\norm{\partial_{\theta}\Phi_{\theta}}_{L^{\infty}(\Omega)}\lesssim_{\varrho_{0},m}(\Delta^{NL})^{1/4}\\ \norm{\Phi_{\rho}\left(\partial_{z}\Phi_{\theta}\right)^{2}}_{L^{2}(\Omega)} & \leq\norm{\Phi_{\rho}\partial_{z}\Phi_{\theta}}_{L^{2}(\Omega)}\norm{\partial_{z}\Phi_{\theta}}_{L^{\infty}(\Omega)}\lesssim_{\varrho_{0},m}(\Delta^{NL})^{1/4}. \end{align*} Combining this with \prettyref{lem:NLexcesssplits}, we find that \[ \norm{\partial_{\theta z}\Phi_{\rho}}_{L^{2}(\Omega)}\vee\norm{\partial_z^2\Phi_{\rho}}_{L^{2}(\Omega)}\lesssim_{\varrho_{0},m}(\frac{1}{h^{2}}\Delta^{NL})^{1/2}\vee(\Delta^{NL})^{1/4}. \] This completes the estimate of the $\theta z$- and $zz$-components. 
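Indeed, squaring the last display puts it in the form required by the statement of the lemma:
\[
\norm{\partial_{\theta z}\Phi_{\rho}}_{L^{2}(\Omega)}^{2}\vee\norm{\partial_{z}^{2}\Phi_{\rho}}_{L^{2}(\Omega)}^{2}\lesssim_{\varrho_{0},m}\max\left\{ \frac{1}{h^{2}}\Delta^{NL},(\Delta^{NL})^{1/2}\right\} .
\]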
Now we consider the $\theta\theta$-component of $D^{2}\Phi$, which requires a more careful estimate. We begin by using \prettyref{eq:DDPhi} to write that \begin{equation} \abs{\partial_{\theta}^2\Phi}^{2}-\varrho^{2}\geq\abs{\partial_{\theta}^2\Phi_{\rho}-\Phi_{\rho}\left(\partial_{\theta}\Phi_{\theta}\right)^{2}}^{2}+\abs{2\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}+\Phi_{\rho}\partial_{\theta}^2\Phi_{\theta}}^{2}-\varrho^{2}=\abs{\partial_{\theta}^2\Phi_{\rho}}^{2}+I+II\label{eq:NLbendingeq1} \end{equation} where \begin{align*} I & =\abs{\Phi_{\rho}\left(\partial_{\theta}\Phi_{\theta}\right)^{2}}^{2}-\varrho^{2}\\ II & =\abs{\Phi_{\rho}\partial_{\theta}^2\Phi_{\theta}}^{2}+4\abs{\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}}^{2}+4\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\Phi_{\rho}\partial_{\theta}^2\Phi_{\theta}-2\Phi_{\rho}\partial_{\theta}^2\Phi_{\rho}\left(\partial_{\theta}\Phi_{\theta}\right)^{2}. \end{align*} First, we discuss $I$. Introducing the displacement $\phi_{\rho}=\Phi_{\rho}-\varrho$, which is non-negative, we have that \[ I=(\phi_{\rho}+\varrho)^{2}\left(\partial_{\theta}\Phi_{\theta}\right)^{4}-\varrho^{2}\geq\varrho^{2}((\partial_{\theta}\Phi_{\theta})^{4}-1)+2\varrho|\phi_{\rho}|(\partial_{\theta}\Phi_{\theta})^{4}. \] By Jensen's inequality and since $\varrho\geq1$, \[ \int_{\Omega}I\geq2\varrho\int_{\Omega}|\phi_{\rho}|(\partial_{\theta}\Phi_{\theta})^{4}\geq\norm{\phi_{\rho}(\partial_{\theta}\Phi_{\theta})^{4}}_{L^{1}(\Omega)}. \] In particular, this shows that $\int_{\Omega}I\geq0$. 
Continuing, we have that \begin{align*} \norm{\phi_{\rho}}_{L^{1}(\Omega)} & \leq\norm{\phi_{\rho}((\partial_{\theta}\Phi_{\theta})^{4}-1)}_{L^{1}(\Omega)}+\int_{\Omega}I \\ & \leq\norm{\phi_{\rho}}_{L^{2}(\Omega)}\norm{(\partial_{\theta}\Phi_{\theta})^{4}-1}_{L^{2}(\Omega)}+\int_{\Omega}I \lesssim_{m}\norm{\phi_{\rho}}_{L^{2}(\Omega)}\norm{\partial_{\theta}\Phi_{\theta}-1}_{L^{2}(\Omega)}+\int_{\Omega}I \\ &\lesssim(\norm{\partial_{\theta}\phi_{\rho}}_{L^{2}(\Omega)}\vee\norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}})\norm{\partial_{\theta}\Phi_{\theta}-1}_{L^{2}(\Omega)}+\int_{\Omega}I \end{align*} where in the last step we used Poincar\'e's inequality. So by \prettyref{lem:NLmembrLB}, H\"older's inequality, and our assumption that $\Delta^{NL}\leq1$, it follows that \begin{equation} \norm{\phi_{\rho}}_{L^{1}(\Omega)}\lesssim_{m}(\Delta^{NL})^{1/2}\vee\left|\int_{\Omega}I\right|.\label{eq:NLbendingeq2} \end{equation} Next, we discuss $II$. An integration by parts argument shows that \[ \int_{\Omega}\Phi_{\rho}\partial_{\theta}^2\Phi_{\rho}\left(\partial_{\theta}\Phi_{\theta}\right)^{2}=-\int_{\Omega}\left(\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\right)^{2}+2\Phi_{\rho}\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\partial_{\theta}^2\Phi_{\theta}, \] so that by an elementary Young's inequality we have that \[ \int_{\Omega}II=\int_{\Omega}\abs{\Phi_{\rho}\partial_{\theta}^2\Phi_{\theta}}^{2}+6\abs{\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}}^{2}+8\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}\Phi_{\rho}\partial_{\theta}^2\Phi_{\theta}\geq-10\int_{\Omega}\abs{\partial_{\theta}\Phi_{\rho}\partial_{\theta}\Phi_{\theta}}^{2}. \] Hence, by H\"older's inequality and \prettyref{lem:NLmembrLB}, it follows that \[ \int_{\Omega}II\gtrsim_{m}-\norm{\partial_{\theta}\Phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{2}\gtrsim-(\Delta^{NL})^{1/2}. \] Now we combine the estimates.
Using \prettyref{lem:NLexcesssplits} along with \prettyref{eq:NLbendingeq1} and the fact that $\int_{\Omega}\,I\geq 0$, we have that \[ \frac{1}{h^{2}}\Delta^{NL}\geq\int_{\Omega}\abs{\partial_{\theta}^2\Phi}^{2}-\varrho^{2}\geq\norm{\partial_{\theta}^2\Phi_{\rho}}_{L^{2}(\Omega)}^{2}+\left|\int_{\Omega}I\right|+\int_{\Omega}II \] and hence that \begin{equation} \left|\int_{\Omega}I\right|+\norm{\partial_{\theta}^2\Phi_{\rho}}_{L^{2}(\Omega)}^{2}\leq\frac{1}{h^{2}}\Delta^{NL}-\int_{\Omega}II\lesssim_{m}(\frac{1}{h^{2}}\Delta^{NL})\vee(\Delta^{NL})^{1/2}.\label{eq:NLbendingeq3} \end{equation} Combining \prettyref{eq:NLbendingeq2} and \prettyref{eq:NLbendingeq3} gives the desired result. \end{proof} \subsubsection{Proof of the ansatz-free lower bound} We now combine the above estimates with the Gagliardo-Nirenberg interpolation inequalities from \prettyref{sec:Appendix} to prove the desired lower bound. At this stage, the argument is more-or-less parallel to the one given for the vKD model in \prettyref{sub:largemandrelLB_FvK}. \begin{proof}[Proof of \prettyref{prop:NLLBs_largemandr}] Introduce the radial displacement, $\phi_{\rho}=\Phi_{\rho}-\varrho$. As a result of \prettyref{lem:NLmembrLB}, \prettyref{cor:NLbucklingcontrol}, and \prettyref{lem:NLbendingcontrol}, we have the following estimates: \[ \Delta^{NL}\gtrsim(\varrho^{2}-1)\norm{\phi_{\rho}}_{L^{1}(\Omega)}, \] \[ \max\left\{ \frac{1}{h^{2}}\Delta^{NL},(\Delta^{NL})^{1/2}\right\} \gtrsim_{\varrho_{0},m}\max\left\{ \norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2},\norm{\phi_{\rho}}_{L^{1}(\Omega)}\right\} , \] and \[ \max\{\int_{A}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}\,d\theta,(\Delta^{NL})^{1/2}\}\gtrsim_{\varrho_{0},m}\lambda\abs{A}\quad\forall\,A\in\mathcal{B}(I_{\theta}). \] We now conclude the proof by a case analysis. First, consider the case that $\frac{1}{h^{2}}\Delta^{NL}\leq(\Delta^{NL})^{1/2}$. 
In this case, we conclude by Poincar\'e's inequality (since $\phi_{\rho}\in H_{\text{per}}^{2}$) that \[ (\Delta^{NL})^{1/2}\gtrsim_{\varrho_{0},m}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\gtrsim\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2} \] and hence that \[ \Delta^{NL}\gtrsim_{\varrho_{0},m}\lambda^{2} \] upon taking $A=I_{\theta}$. In the opposite case, we have the lower bound \[ \Delta^{NL}\gtrsim_{\varrho_{0},m}\max\left\{ \left[(\varrho^{2}-1)\vee h^{2}\right]\norm{\phi_{\rho}}_{L^{1}(\Omega)},h^{2}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\right\} . \] Now, we give two separate arguments that combine to give the desired result. First, we apply the interpolation inequality from \prettyref{lem:2dinterp-1} to $\phi_{\rho}$ to conclude that \begin{align*} \norm{D\phi_{\rho}}_{L^{2}(\Omega)}^{2} &\lesssim_{\varrho_{0},m}\norm{D\phi_{\rho}}_{L^{\infty}(\Omega)}^{2/3}\left(\frac{1}{(\varrho^{2}-1)\vee h^{2}}\Delta^{NL}\right)^{2/3}\left(\frac{1}{h^{2}}\Delta^{NL}\right)^{1/3}\\ &\lesssim_{\varrho_{0},m}\left[(\varrho^{2}-1)\vee h^{2}\right]^{-2/3}h^{-2/3}\Delta^{NL}. \end{align*} Taking $A=I_{\theta}$ gives that \[ \max\{\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2},(\Delta^{NL})^{1/2}\}\gtrsim_{\varrho_{0},m}\lambda \] so that \[ \max\left\{ \left[(\varrho^{2}-1)\vee h^{2}\right]^{-2/3}h^{-2/3}\Delta^{NL},(\Delta^{NL})^{1/2}\right\} \gtrsim_{\varrho_{0},m}\lambda. \] Therefore, we conclude by this argument that \[ \Delta^{NL}\gtrsim_{\varrho_{0},m}\min\left\{ \lambda^{2},h^{2/3}\left[(\varrho^{2}-1)\vee h^{2}\right]^{2/3}\lambda\right\} . \] For the second argument, we begin by defining the sets \[ Z_{\epsilon}=\left\{ \theta\in I_{\theta}\ :\ \norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}\geq\epsilon\lambda\right\} \] for $\epsilon\in\mathbb{R}_{+}$.
Choosing $A=I_{\theta}\backslash Z_{\epsilon}$ gives that \[ \max\{\epsilon\lambda\abs{I_{\theta}\backslash Z_{\epsilon}},(\Delta^{NL})^{1/2}\}\geq c_{1}(\varrho_{0},m)\lambda\abs{I_{\theta}\backslash Z_{\epsilon}}. \] In particular, taking $\epsilon=c_{1}/2$, we conclude that \[ \Delta^{NL}\geq c_{1}^{2}\abs{I_{\theta}\backslash Z_{c_{1}/2}}^{2}\lambda^{2}. \] Now if $\abs{I_{\theta}\backslash Z_{c_{1}/2}}\geq\frac{1}{2}\abs{I_{\theta}}$, we conclude that \[ \Delta^{NL}\geq\frac{c_{1}^{2}}{4}\abs{I_{\theta}}^{2}\lambda^{2}. \] Otherwise, we are in the case where $\abs{Z_{c_{1}/2}}>\frac{1}{2}\abs{I_{\theta}}$. In this final case, we have that \[ \lambda^{5/7}\lesssim_{\varrho_{0},m}\frac{1}{2}\abs{I_{\theta}}(\frac{c_{1}}{2}\lambda)^{5/7}\leq\int_{Z_{c_{1}/2}}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{10/7}\,d\theta\leq\int_{I_{\theta}}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{10/7}\,d\theta. \] Applying the first interpolation inequality in \prettyref{lem:1dinterp} to $\phi_{\rho}$, we get that \begin{align*} \lambda^{5/7} & \lesssim_{\varrho_{0},m}\int_{I_{\theta}}\left(\norm{\phi_{\rho}}_{L_{z}^{1}}^{2/5}\norm{\partial_z^2\phi_{\rho}}_{L_{z}^{2}}^{3/5}\right)^{10/7}\,d\theta=\int_{I_{\theta}}\norm{\phi_{\rho}}_{L_{z}^{1}}^{4/7}\norm{\partial_z^2\phi_{\rho}}_{L_{z}^{2}}^{6/7}\,d\theta\\ & \leq\norm{\phi_{\rho}}_{L^{1}(\Omega)}^{4/7}\norm{\partial_z^2\phi_{\rho}}_{L^{2}(\Omega)}^{6/7} \end{align*} after an application of H\"older's inequality. It follows that \[ \lambda^{5/7}\lesssim_{\varrho_{0},m}\left(\frac{1}{(\varrho^{2}-1)\vee h^{2}}\Delta^{NL}\right)^{4/7}\left(\frac{1}{h^{2}}\Delta^{NL}\right)^{3/7}=\left[(\varrho^{2}-1)\vee h^{2}\right]^{-4/7}h^{-6/7}\Delta^{NL} \] and so we conclude the second result: \[ \Delta^{NL}\gtrsim_{\varrho_{0},m}\min\left\{ \lambda^{2},\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}h^{6/7}\right\} .
\] In conclusion, we have proved that \[ \Delta^{NL}\gtrsim_{\varrho_{0},m}\min\left\{ \lambda^{2},\min\left\{ \lambda^{2},h^{2/3}\left[(\varrho^{2}-1)\vee h^{2}\right]^{2/3}\lambda\right\} \vee\min\left\{ \lambda^{2},\lambda^{5/7}[(\varrho^{2}-1)\vee h^{2}]^{4/7}h^{6/7}\right\} \right\} , \] which is simply a restatement of the desired result. \end{proof} \section{Ansatz-free lower bounds in the neutral mandrel case \label{sec:neutralmandrelLB}} In this section, we prove the lower bounds from \prettyref{thm:FvKneutralbounds} and \prettyref{thm:NLneutralbounds}. We begin with the vKD model in \prettyref{sub:neutralmandrelLB_FvK}. There, we introduce the free-shear functional from \prettyref{eq:FS} as a bounding device and prove its minimum energy scaling law. Then, we turn to the nonlinear model in \prettyref{sub:neutralmandrelLB_NL}. \subsection{vKD model\label{sub:neutralmandrelLB_FvK}} In the neutral mandrel case, where $\varrho=1$, the estimates proved in \prettyref{sub:largemandrelLB_FvK} do not lead to useful lower bounds on $E_{h}^{vKD}$. Nevertheless, buckling in the presence of the mandrel continues to induce tensile hoop stresses when $\varrho=1$, and this can still be used to prove non-trivial lower bounds. We emphasize that it is not clear at first how successful this approach should be: indeed, the magnitude of the hoop stresses induced by the mandrel vanishes as $h\to0$ in the neutral mandrel case. This is in stark contrast with the large mandrel case, where the effective hoop stresses are of order one and the excess hoop stresses set the minimum energy scaling law. For more on this, we refer the reader to the discussion in \prettyref{sub:DiscussionofProofs}.
Let us briefly recall from \prettyref{sub:neutralmandrelresults} our approach to \prettyref{thm:FvKneutralbounds}: introducing the free-shear functional, \[ FS_{h}(\phi)=\int_{\Omega}\,\abs{\epsilon_{\theta\theta}}^{2}+\abs{\epsilon_{zz}}^{2}+h^{2}\abs{D^{2}\phi_{\rho}}^{2}\,d\theta dz, \] we observe that \[ E_{h}^{vKD}(\phi)\geq FS_{h}(\phi)\quad\forall\,\phi\in A_{\lambda,\varrho,m}^{vKD} \] since in the definition of $FS_{h}$ we have simply neglected the cost of shear in the membrane term. Thus, lower bounds on the minimum of $FS_{h}$ give lower bounds on the minimum of $E_{h}^{vKD}$. In the present section, we give the optimal argument along these lines. To do so, we answer the following question: what is the minimum energy scaling law of the free-shear functional? Let $A_{\lambda,m}=A_{\lambda,1,m}^{vKD}$. \begin{prop} \label{prop:FSscalinglaw} Let $h,\lambda\in(0,\frac{1}{2}]$ and $m\in[2,\infty)$. Then we have that \[ \min_{A_{\lambda,m}}\,FS_{h}\sim_{m}\min\left\{ \max\{h\lambda^{3/2},(h\lambda)^{12/11}\},\lambda^{2}\right\} . \] In the case that $m=\infty$, we have that \[ \min_{A_{\lambda,\infty}}\,FS_{h}\sim\min\left\{ (h\lambda)^{12/11},\lambda^{2}\right\} . \] \end{prop} \begin{rem} \label{rem:blowupFS} As in the analysis of the large mandrel case, we can quantify the blow-up rate of $\norm{D\phi}_{L^{\infty}}$ for the free-shear functional as $h\to0$. See \prettyref{sub:BlowuprateFS} for the precise statement of this result. \end{rem} \begin{proof} The asserted lower bounds follow from \prettyref{cor:FSLB-1} and \prettyref{cor:FSLB-2}. The upper bound of $\lambda^{2}$ is achieved by the unbuckled configuration, $\phi=(0,0,-\lambda z)$. To prove the remainder of the upper bound, note first that it suffices to achieve it for $(h,\lambda,m)\in(0,h_{0}]\times(0,\frac{1}{2}]\times[2,\infty)$ for some $h_{0}\in(0,\frac{1}{2}]$.
So, we take $h_{0}=\frac{1}{2^{10}}$ and apply \prettyref{lem:FSUB_manywrinkles_tilted}, \prettyref{lem:FSUB_onewrinkle_tilted_long}, and \prettyref{lem:FSUB_onewrinkle_tilted} to get that \[ \min_{A_{\lambda,m}}\,FS_{h}\lesssim\min\left\{ \lambda^{2},\max\left\{ m^{-1/2}h\lambda^{3/2},(h\lambda)^{12/11},h^{6/5}\lambda\right\} \right\} \] in the stated parameter range. Since \[ \min\left\{ \lambda^{2},\max\left\{ (h\lambda)^{12/11},h^{6/5}\lambda\right\} \right\} =\min\left\{ \lambda^{2},(h\lambda)^{12/11}\right\} \] the result follows. \end{proof} This result shows that the free-shear functional prefers three types of low-energy patterns if $m<\infty$, and two if $m=\infty$. See \prettyref{fig:FSheightfield} for a schematic of these patterns. \subsubsection{Lower bounds on the free-shear functional} Here, we prove the lower bound from \prettyref{prop:FSscalinglaw}. Our first result is the free-shear version of \prettyref{lem:FvKLBs_largemandr}. \begin{lem} \label{lem:FSLBs}Let $\phi\in A_{\lambda,\infty}$. Then we have that \[ FS_{h}(\phi)\gtrsim\max\left\{ \norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}}^{2},\norm{\partial_{\theta}\phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{4},h^{2}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2},\norm{\frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}-\lambda}_{L_{\theta}^{2}}^{2}\right\} . \] \end{lem} \begin{proof} By the definition of $FS_{h}$ in \prettyref{eq:FS}, we have that \[ FS_{h}(\phi)=\int_{\Omega}\,\abs{\partial_{\theta}\phi_{\theta}+\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}+\phi_{\rho}}^{2}+\abs{\partial_{z}\phi_{z}+\frac{1}{2}(\partial_{z}\phi_{\rho})^{2}}^{2}+h^{2}\abs{D^{2}\phi_{\rho}}^{2}\,d\theta dz. 
\] Applying Jensen's inequality in the $\theta$-direction and using that $\phi_{\theta}\in H_{\text{per}}^{1}$ and that $\phi_{\rho}\geq0$ we see that \[ \norm{\partial_{\theta}\phi_{\theta}+\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}+\phi_{\rho}}_{L^{2}(\Omega)}\gtrsim\norm{\int_{I_{\theta}}\partial_{\theta}\phi_{\theta}+\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}+\phi_{\rho}\,d\theta}_{L_{z}^{2}}\gtrsim\norm{\partial_{\theta}\phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{2}\vee\norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}}. \] Applying Jensen's inequality in the $z$-direction and using that $\phi_{z}+\lambda z\in H_{\text{per}}^{1}$ we see that \[ \norm{\partial_{z}\phi_{z}+\frac{1}{2}(\partial_{z}\phi_{\rho})^{2}}_{L^{2}(\Omega)}\gtrsim\norm{\int_{I_{z}}\partial_{z}\phi_{z}+\frac{1}{2}(\partial_{z}\phi_{\rho})^{2}\,dz}_{L_{\theta}^{2}}=\norm{\frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}-\lambda}_{L_{\theta}^{2}}. \] The result now follows. \end{proof} Now, we apply the Gagliardo-Nirenberg interpolation inequalities from \prettyref{sec:Appendix} to deduce the desired lower bounds. \begin{cor} \label{cor:FSLB-1} If $\phi\in A_{\lambda,m}$, then \[ FS_{h}(\phi)\gtrsim\min\{ m^{-1}h\lambda^{3/2},\lambda^{2}\} \] whenever $h,\lambda\in(0,\infty)$ and $m\in(0,\infty]$. In fact, if $\phi\in A_{\lambda,\infty}$, then \[ FS_{h}(\phi)\gtrsim\min\{\norm{D\phi_{\rho}}_{L^{\infty}(\Omega)}^{-1}h\lambda^{3/2},\lambda^{2}\}. \] \end{cor} \begin{proof} Observe that by \prettyref{lem:FSLBs} and H\"older's inequality, we have that \[ c_{1}\left(FS_{h}(\phi)\right)^{1/2}\geq\norm{\frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L_{z}^{2}}^{2}-\lambda}_{L_{\theta}^{1}} \] for some numerical constant $c_{1}$. Hence, by the triangle inequality, \[ \frac{1}{2}\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}+c_{1}(FS_{h}(\phi))^{1/2}\geq\lambda|I_{\theta}|. \] Now we perform a case analysis. 
If $\phi$ satisfies $\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\leq\lambda|I_{\theta}|$, then we conclude by the above that $FS_{h}(\phi)\gtrsim\lambda^{2}$. On the other hand, suppose that $\phi$ satisfies $\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}>\lambda|I_{\theta}|$. Then, observe that by \prettyref{lem:FSLBs} and H\"older's inequality, \[ FS_{h}(\phi)\gtrsim\max\left\{ \norm{\phi_{\rho}}_{L^{1}(\Omega)}^{2},h^{2}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\right\} . \] Combining this with the interpolation inequality from \prettyref{lem:2dinterp-1}, we conclude that \[ \lambda^{1/2}\lesssim\norm{D\phi_{\rho}}_{L^{2}(\Omega)}\lesssim\norm{D\phi_{\rho}}_{L^{\infty}(\Omega)}^{1/3}\norm{\phi_{\rho}}_{L^{1}(\Omega)}^{1/3}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{1/3}\lesssim m^{1/3}h^{-1/3}(FS_{h}(\phi))^{1/3} \] and the result follows.\end{proof} \begin{cor} \label{cor:FSLB-2} If $\phi\in A_{\lambda,m}$, then \[ FS_{h}(\phi)\gtrsim\min\left\{ (h\lambda)^{12/11},\lambda^{2}\right\} \] whenever $h,\lambda\in(0,1]$ and $m\in(0,\infty]$.\end{cor} \begin{proof} As in the proof of \prettyref{cor:FSLB-1}, it suffices to prove that \[ \norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\gtrsim\lambda\implies FS_{h}(\phi)\gtrsim(h\lambda)^{12/11}. 
\] Combining the third interpolation inequality from \prettyref{lem:2dinterp} with the anisotropic interpolation inequality from \prettyref{lem:mixedinterp_1}, we find that \begin{align*} \norm{D\phi_{\rho}}_{L^{2}(\Omega)} & \lesssim\norm{\phi_{\rho}}_{L^{2}(\Omega)}^{1/2}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{1/2}\lesssim(\norm{\partial_{\theta}\phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{1/3}\norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}}^{2/3}+\norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}})^{1/2}\norm{D^{2}\phi_{\rho}}_{L_{\theta z}^{2}}^{1/2}\\ & \lesssim\max\left\{ \norm{\partial_{\theta}\phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{1/6}\norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}}^{1/3}\norm{D^{2}\phi_{\rho}}_{L_{\theta z}^{2}}^{1/2},\norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}}^{1/2}\norm{D^{2}\phi_{\rho}}_{L_{\theta z}^{2}}^{1/2}\right\} . \end{align*} Hence, by \prettyref{lem:FSLBs}, we conclude that \[ h\lambda\lesssim\max\left\{ FS_{h}^{11/12},FS_{h}\right\} . \] It follows immediately that \[ FS_{h}\gtrsim\min\left\{ (h\lambda)^{12/11},h\lambda\right\} =(h\lambda)^{12/11} \] as desired. \end{proof} \subsubsection{Upper bounds on the free-shear functional} In this section, we prove the upper bound from \prettyref{prop:FSscalinglaw}. Since this upper bound matches the lower bounds from the previous section, our analysis of the free-shear functional is optimal as far as scaling laws are concerned. In the remainder of this section, we will \textbf{assume} that \[ h\in(0,\frac{1}{2^{10}}],\ \lambda\in(0,\frac{1}{2}],\ \text{and}\ m\in[2,\infty) \] unless otherwise explicitly stated. We begin by defining a two-scale wrinkling pattern along a to-be-chosen direction. We refer to the parameters $n,k\in\mathbb{N}$ and $\delta\in(0,1]$, which are the number of wrinkles, the number of times each wrinkle wraps about the cylinder, and the relative extent of the wrinkles. See \prettyref{fig:FSheightfield} for a schematic of this construction. 
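Before giving the details, we note a heuristic for the parameter choices made below; this is only a back-of-the-envelope guide, and the rigorous choices, which must also respect the slope bound $m$, are made in \prettyref{lem:FSUB_manywrinkles_tilted}--\prettyref{lem:FSUB_onewrinkle_tilted}. Anticipating the energy bound of \prettyref{lem:FSadmissability}, in the regime where the optimal scaling is $(h\lambda)^{12/11}$, the choices $n\sim1$, $k\sim h^{-3/11}\lambda^{5/22}$, and $\delta\sim(h\lambda)^{2/11}$ balance all three terms of that bound:
\[
\frac{\lambda\delta^{3}}{k^{2}n^{2}}\sim\lambda(h\lambda)^{6/11}h^{6/11}\lambda^{-5/11}=(h\lambda)^{12/11},\qquad\frac{\lambda^{2}}{k^{4}}\sim\lambda^{2}h^{12/11}\lambda^{-10/11}=(h\lambda)^{12/11},\qquad h^{2}\frac{\lambda k^{2}n^{2}}{\delta^{2}}\sim\frac{h^{2}\lambda h^{-6/11}\lambda^{5/11}}{(h\lambda)^{4/11}}=(h\lambda)^{12/11}.
\]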
To define the construction, we fix $f\in C^{\infty}(\mathbb{R})$ such that \begin{itemize} \item $f$ is non-negative and one-periodic \item $\text{supp}\,f\cap[-\frac{1}{2},\frac{1}{2}]\subset(-\frac{1}{2},\frac{1}{2})$ \item $\norm{f'}_{L^{\infty}}\leq2$ \item $\norm{f'}_{L^{2}(B_{1/2})}^{2}=1$. \end{itemize} Define $f_{\delta,n}\in C^{\infty}(\mathbb{R})$ by \[ f_{\delta,n}(t)=\frac{\sqrt{\delta}}{n}f(\frac{n}{\delta}\{t\})\ind{\{t\}\in B_{\delta/2}} \] and $w_{\delta,n,k,\lambda}:\Omega\to\mathbb{R}$ by \[ w_{\delta,n,k,\lambda}(\theta,z)=\frac{\sqrt{2\lambda}}{k}f_{\delta,n}(\frac{\theta}{2\pi}+kz). \] Recall that we write $\overline{f}$ to denote the $\theta$-average of $f$, as given in \prettyref{sub:notation}. Define $u^{\delta,n,k,\lambda}=(u_{\theta}^{\delta,n,k,\lambda},u_{z}^{\delta,n,k,\lambda}):\Omega\to\mathbb{R}^{2}$ by \begin{align*} u_{\theta}^{\delta,n,k,\lambda}(\theta,z) & =\int_{0\leq\theta'\leq\theta}\left[\left(\overline{\frac{1}{2}(\partial_{\theta}w)^{2}+w}\right)(z)-\frac{1}{2}(\partial_{\theta}w(\theta',z))^{2}-w(\theta',z)\right]\,d\theta'\\ u_{z}^{\delta,n,k,\lambda}(\theta,z) & =\int_{-\frac{1}{2}\leq z'\leq z}\left[\lambda-\frac{1}{2}(\partial_{z}w(\theta,z'))^{2}\right]\,dz' \end{align*} where $w=w_{\delta,n,k,\lambda}$. Finally, define $\phi_{\delta,n,k,\lambda}:\Omega\to\mathbb{R}^{3}$ by \[ \phi_{\delta,n,k,\lambda}=(w_{\delta,n,k,\lambda},u_{\theta}^{\delta,n,k,\lambda},-\lambda z+u_{z}^{\delta,n,k,\lambda}), \] in cylindrical coordinates. \begin{figure} \includegraphics[height=0.18\textheight]{freeshearconstrN} \caption{This schematic depicts the free-shear construction. The pattern features $n$ wrinkles which wrap $k$ times about the cylinder, with total volume fraction $\delta$. The optimal choice of $n,k,\delta$ depends on the axial compression, $\lambda$, the thickness, $h$, and the \emph{a priori} $L^{\infty}$ slope bound, $m$. \label{fig:FSheightfield}} \end{figure} Now, we estimate the energy of this construction.
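Before doing so, we record the basic scalings of $f_{\delta,n}$, which follow directly from its definition and the properties of $f$ listed above, and which drive all of the estimates below:
\[
\norm{f_{\delta,n}}_{L^{\infty}}\leq\frac{\sqrt{\delta}}{n},\qquad\norm{f'_{\delta,n}}_{L^{\infty}}\leq\frac{2}{\sqrt{\delta}},\qquad\norm{f'_{\delta,n}}_{L^{2}(B_{1/2})}^{2}=\frac{1}{n}\int_{B_{n/2}}\abs{f'(s)}^{2}\,ds=1.
\]
The first bound uses that $\norm{f}_{L^{\infty}}\leq1$, which follows from the slope bound $\norm{f'}_{L^{\infty}}\leq2$ and the support condition on $f$; the last identity follows from the substitution $s=\frac{n}{\delta}\{t\}$ together with the one-periodicity of $f$ and the normalization $\norm{f'}_{L^{2}(B_{1/2})}^{2}=1$.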
Let \[ m_{2}(\delta,n,k,\lambda)=2\max\left\{ \sqrt{\frac{2\lambda}{\delta}},\frac{2\lambda}{\delta},\frac{2\lambda}{\pi k\delta}+\frac{2\pi\sqrt{2\lambda\delta}}{n}\right\} . \] \begin{lem} \label{lem:FSadmissability}We have that $\phi_{\delta,n,k,\lambda}\in A_{\lambda,m_{2}}$. Furthermore, \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim\max\left\{ \frac{\lambda\delta^{3}}{k^{2}n^{2}},\frac{\lambda^{2}}{k^{4}},h^{2}\frac{\lambda k^{2}n^{2}}{\delta^{2}}\right\} . \] \end{lem} \begin{proof} Abbreviate $\phi_{\delta,n,k,\lambda}$ by $\phi$, $w_{\delta,n,k,\lambda}$ by $w$, and $u^{\delta,n,k,\lambda}$ by $u$. By its definition, $\phi_{\rho}\in H_{\text{per}}^{2}$, $\phi_{\theta}\in H_{\text{per}}^{1}$, and $\phi_{z}+\lambda z\in H_{\text{per}}^{1}$. In particular, we note that \[ \int_{-\frac{1}{2}\leq z'\leq\frac{1}{2}}\frac{1}{2}|\partial_{z}w(\theta,z')|^{2}dz'=\lambda\int_{B_{\delta/2}}|f'_{\delta,n}|^{2}dt=\lambda\int_{B_{1/2}}|f'|^{2}dt=\lambda \] for all $\theta\in I_{\theta}$, so that $u_{z}^{\delta,n,k,\lambda}\in H_{\text{per}}^{1}$. Also, we have that $w\geq0$ so that $\phi_{\rho}\geq0$. Now we obtain the slope bounds.
Since \begin{align*} \epsilon_{\theta\theta} & =\partial_{\theta}\phi_{\theta}+\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}+\phi_{\rho}=\overline{\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}+\phi_{\rho}}\\ \epsilon_{zz} & =\partial_{z}\phi_{z}+\frac{1}{2}(\partial_{z}\phi_{\rho})^{2}=0 \end{align*} and \begin{align*} \partial_{\theta}\phi_{\rho}(\theta,z) & =\partial_{\theta}w(\theta,z)=\frac{1}{2\pi}\frac{\sqrt{2\lambda}}{k}f'_{\delta,n}(\frac{\theta}{2\pi}+kz)\\ \partial_{z}\phi_{\rho}(\theta,z) & =\partial_{z}w(\theta,z)=\sqrt{2\lambda}f'_{\delta,n}(\frac{\theta}{2\pi}+kz), \end{align*} we find that \begin{align*} \norm{\partial_{\theta}\phi_{\rho}}_{L^{\infty}(\Omega)} & \leq\frac{1}{2\pi}\frac{\sqrt{2\lambda}}{k}\norm{f_{\delta,n}'}_{L^{\infty}}\leq\frac{1}{\pi k}\sqrt{\frac{2\lambda}{\delta}}\\ \norm{\partial_{z}\phi_{\rho}}_{L^{\infty}(\Omega)} & \le\sqrt{2\lambda}\norm{f_{\delta,n}'}_{L^{\infty}}\leq2\sqrt{\frac{2\lambda}{\delta}}\\ \norm{\partial_{z}\phi_{z}}_{L^{\infty}(\Omega)} & \leq\lambda\norm{f'_{\delta,n}}_{L^{\infty}}^{2}\leq\frac{4\lambda}{\delta}, \end{align*} and that \begin{align*} \norm{\partial_{\theta}\phi_{\theta}}_{L^{\infty}(\Omega)} & \leq\norm{\overline{\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}+\phi_{\rho}}-\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}-\phi_{\rho}}_{L^{\infty}(\Omega)}\leq2\norm{\frac{1}{2}(\partial_{\theta}\phi_{\rho})^{2}+\phi_{\rho}}_{L^{\infty}(\Omega)}\\ & \leq2(\frac{1}{4\pi^{2}}\frac{\lambda}{k^{2}}\norm{f'_{\delta,n}}_{L^{\infty}}^{2}+\frac{\sqrt{2\lambda}}{k}\norm{f_{\delta,n}}_{L^{\infty}})\leq2\left(\frac{\lambda}{\pi^{2}k^{2}\delta}+\frac{\sqrt{2\lambda\delta}}{kn}\right). \end{align*} Here, we used that $\norm{f}_{L^{\infty}}\leq1$, which follows from its definition. Now we deal with the shear terms. 
We have that \begin{align*} \partial_{\theta}\phi_{z}(\theta,z) & =\partial_{\theta}u_{z}(\theta,z)=-\int_{-\frac{1}{2}\leq z'\leq z}\partial_{z}w\partial_{\theta z}w(\theta,z')\,dz'\\ \partial_{z}\phi_{\theta}(\theta,z) & =\partial_{z}u_{\theta}(\theta,z)=\int_{0\leq\theta'\leq\theta}\left[\overline{\partial_{\theta}w\partial_{z\theta}w+\partial_{z}w}(z)-\partial_{\theta}w\partial_{z\theta}w(\theta',z)-\partial_{z}w(\theta',z)\right]\,d\theta'. \end{align*} Since \[ \partial_{\theta z}w(\theta,z)=\frac{\sqrt{2\lambda}}{2\pi}f''_{\delta,n}(\frac{\theta}{2\pi}+kz), \] we see that \begin{align*} \partial_{\theta}\phi_{z}(\theta,z) & =-\int_{-\frac{1}{2}\leq z'\leq z}\frac{2\lambda}{2\pi}f'_{\delta,n}f''_{\delta,n}(\frac{\theta}{2\pi}+kz')\,dz'=-\int_{-\frac{1}{2}\leq t\leq z}\frac{\lambda}{2\pi}\frac{1}{k}\frac{d}{dt}\left[\left(f'_{\delta,n}\right)^{2}(\frac{\theta}{2\pi}+kt)\right]\,dt\\ & =\frac{1}{2\pi}\frac{\lambda}{k}\left(\left(f'_{\delta,n}\right)^{2}(\frac{\theta}{2\pi}-\frac{k}{2})-\left(f'_{\delta,n}\right)^{2}(\frac{\theta}{2\pi}+kz)\right) \end{align*} so that \[ \norm{\partial_{\theta}\phi_{z}}_{L^{\infty}(\Omega)}\leq2\frac{1}{2\pi}\frac{\lambda}{k}\norm{f'_{\delta,n}}_{L^{\infty}}^{2}\leq\frac{4\lambda}{\pi k\delta}. 
\] Similarly, we have that \begin{align*} & \int_{0\leq\theta'\leq\theta}\left[\partial_{\theta}w\partial_{z\theta}w(\theta',z)+\partial_{z}w(\theta',z)\right]\,d\theta'\\ & \qquad=\int_{0\leq\theta'\leq\theta}\left[\frac{2\lambda}{(2\pi)^{2}}\frac{1}{k}f'_{\delta,n}f''_{\delta,n}(\frac{\theta'}{2\pi}+kz)+\sqrt{2\lambda}f'_{\delta,n}(\frac{\theta'}{2\pi}+kz)\right]\,d\theta'\\ & \qquad=\int_{0\leq t\leq\theta}\frac{1}{2\pi}\frac{\lambda}{k}\frac{d}{dt}\left[\left(f'_{\delta,n}\right)^{2}(\frac{t}{2\pi}+kz)\right]+2\pi\sqrt{2\lambda}\frac{d}{dt}\left[f{}_{\delta,n}(\frac{t}{2\pi}+kz)\right]\,dt\\ & \qquad=\frac{1}{2\pi}\frac{\lambda}{k}\left(\left(f'_{\delta,n}(\frac{\theta}{2\pi}+kz)\right)^{2}-\left(f'_{\delta,n}(kz)\right)^{2}\right)+2\pi\sqrt{2\lambda}\left(f{}_{\delta,n}(\frac{\theta}{2\pi}+kz)-f{}_{\delta,n}(kz)\right). \end{align*} Hence, \[ \partial_{z}\phi_{\theta}(\theta,z)=-\frac{1}{2\pi}\frac{\lambda}{k}\left(\left(f'_{\delta,n}(\frac{\theta}{2\pi}+kz)\right)^{2}-\left(f'_{\delta,n}(kz)\right)^{2}\right)-2\pi\sqrt{2\lambda}\left(f{}_{\delta,n}(\frac{\theta}{2\pi}+kz)-f{}_{\delta,n}(kz)\right) \] so that \[ \norm{\partial_{z}\phi_{\theta}}_{L^{\infty}(\Omega)}\leq2\left(\frac{1}{2\pi}\frac{\lambda}{k}\norm{f'_{\delta,n}}_{L^{\infty}}^{2}+2\pi\sqrt{2\lambda}\norm{f_{\delta,n}}_{L^{\infty}}\right)\leq2\left(\frac{2\lambda}{\pi k\delta}+\frac{2\pi\sqrt{2\lambda\delta}}{n}\right). \] Combining the above, we have shown that \[ \max_{\substack{i\in\left\{ \theta,z\right\} ,\,j\in\{\rho,\theta,z\}} }\norm{\partial_{i}\phi_{j}}_{L^{\infty}(\Omega)}\leq2\max\left\{ \sqrt{\frac{2\lambda}{\delta}},\frac{2\lambda}{\delta},\frac{2\lambda}{\pi k\delta}+\frac{2\pi\sqrt{2\lambda\delta}}{n}\right\} =m_{2} \] and it follows that $\phi\in A_{\lambda,m_{2}}$. Now we bound the free-shear energy of this construction. 
Since $\epsilon_{\theta\theta}=\overline{\epsilon_{\theta\theta}}$ and $\epsilon_{zz}=0$, we have that \[ FS_{h}(\phi)=\int_{\Omega}\,\abs{\overline{\frac{1}{2}(\partial_{\theta}w)^{2}+w}}^{2}+h^{2}\abs{D^{2}w}^{2}\,d\theta dz \] so that \[ FS_{h}(\phi)\lesssim\max\left\{ \norm{w}_{L_{z}^{2}L_{\theta}^{1}}^{2},\norm{\partial_{\theta}w}_{L_{z}^{4}L_{\theta}^{2}}^{4},h^{2}\norm{D^{2}w}_{L^{2}(\Omega)}^{2}\right\} . \] Since \[ \norm{w}_{L_{z}^{2}L_{\theta}^{1}}^{2}\lesssim\frac{\lambda\delta^{3}}{k^{2}n^{2}},\quad\norm{\partial_{\theta}w}_{L_{z}^{4}L_{\theta}^{2}}^{4}\lesssim\frac{\lambda^{2}}{k^{4}},\ \text{and}\quad\norm{D^{2}w}_{L^{2}(\Omega)}^{2}\lesssim\frac{\lambda k^{2}n^{2}}{\delta^{2}}, \] it follows that \[ FS_{h}(\phi)\lesssim\max\left\{ \frac{\lambda\delta^{3}}{k^{2}n^{2}},\frac{\lambda^{2}}{k^{4}},h^{2}\frac{\lambda k^{2}n^{2}}{\delta^{2}}\right\} . \] \end{proof} Next, we choose $n,k,\delta$ to optimize this bound. Note that each of the following three choices is optimal in a different parameter regime. First, we consider a construction made up of many wrinkles, each of which wraps many times about the cylinder. \begin{lem} \label{lem:FSUB_manywrinkles_tilted} Assume that \[ m^{-1/2}h\lambda^{3/2}\geq\max\{h^{6/5}\lambda,(h\lambda)^{12/11}\}. \] Let $n,k\in\mathbb{N}$ and $\delta\in(0,1]$ satisfy \begin{align*} n&\in\left[7\lambda^{9/8}h^{-1/4}m^{-11/8},8\lambda^{9/8}h^{-1/4}m^{-11/8}\right] \\ k&\in\left[7h^{-1/4}\lambda^{1/8}m^{1/8},8h^{-1/4}\lambda^{1/8}m^{1/8}\right] \\ \delta&=4\lambda m^{-1}. \end{align*} Then, $\phi_{\delta,n,k,\lambda}\in A_{\lambda,m}$ and \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim\frac{1}{m^{1/2}}h\lambda^{3/2}. \] \end{lem} \begin{proof} Rearranging the inequality $m^{-1/2}h\lambda^{3/2}\geq(h\lambda)^{12/11}$, we find that $\lambda^{9/8}h^{-1/4}m^{-11/8}\geq1$ so that there exists such an $n\in\mathbb{N}$.
Rearranging the inequality $m^{-1/2}h\lambda^{3/2}\geq h^{6/5}\lambda$, we find that $\lambda^{5/8}\geq h^{1/4}m^{5/8}$. Since $m\geq1$ and $\lambda\leq1$, it follows that $\lambda^{1/8}m^{1/8}h^{-1/4}\geq1$. Hence, there exists such a $k\in\mathbb{N}$. Also, we have that $\delta\leq1$, since $\lambda\leq\frac{1}{2}$ and $m\geq2$. Now we check the slope bound. We claim that $m_{2}(\delta,n,k,\lambda)=m.$ Indeed, we have that \[ m_{2}=2\max\left\{ \sqrt{\frac{m}{2}},\frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}\lambda}{nm^{1/2}}\right\} =2\max\left\{ \frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}\lambda}{nm^{1/2}}\right\} , \] and using that $m\geq2$, $\lambda\leq\frac{1}{2}$, and $n,k\geq7$ we see that \[ \frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}\lambda}{nm^{1/2}}\leq\frac{m}{2} \] so that $m_{2}\leq m$ as required. It follows from \prettyref{lem:FSadmissability} that $\phi_{\delta,n,k,\lambda}\in A_{\lambda,m}$, and that \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim\max\left\{ \frac{hm^{5/2}\delta^{3}}{\lambda^{3/2}},\frac{1}{m^{1/2}}h\lambda^{3/2},\frac{h\lambda^{7/2}}{m^{5/2}\delta^{2}}\right\} . \] Using that $\delta\sim\frac{\lambda}{m}$, we have that \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim\frac{1}{m^{1/2}}h\lambda^{3/2}. \] \end{proof} We now consider a construction made up of a few wrinkles, each of which wraps many times about the cylinder. \begin{lem} \label{lem:FSUB_onewrinkle_tilted_long} Assume that \[ (h\lambda)^{12/11}\geq\max\{h^{6/5}\lambda,m^{-1/2}h\lambda^{3/2}\}. \] Let $n,k\in\mathbb{N}$ and $\delta\in(0,1]$ satisfy \[ n=12,\quad k\in\left[12h^{-3/11}\lambda^{5/22},13h^{-3/11}\lambda^{5/22}\right],\ \text{and}\quad\delta=4(h\lambda)^{2/11}. \] Then, $\phi_{\delta,n,k,\lambda}\in A_{\lambda,m}$ and \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim(h\lambda)^{12/11}. 
\] \end{lem} \begin{proof} Rearranging the inequality $(h\lambda)^{12/11}\geq h^{6/5}\lambda$, we find that $h^{-3/11}\lambda^{5/22}\geq1$ so that there exists such a $k\in\mathbb{N}$. Also we note that $\delta\leq1$ since $\lambda\leq\frac{1}{2}$ and $h\leq\frac{1}{2^{10}}$. Now we check the slope bound. We have that \[ m_{2}=2\max\left\{ \sqrt{\frac{\lambda^{9/11}}{2h{}^{2/11}}},\frac{\lambda^{9/11}}{2h{}^{2/11}},\frac{1}{\pi k}\frac{\lambda^{9/11}}{2h{}^{2/11}}+2\pi\frac{2\sqrt{2}h^{1/11}\lambda^{13/22}}{n}\right\} . \] Rearranging the inequality $(h\lambda)^{12/11}\geq m^{-1/2}h\lambda^{3/2}$, we find that $m\geq\lambda^{9/11}h^{-2/11}$ so that \[ m_{2}\leq2\max\left\{ \sqrt{\frac{m}{2}},\frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}h^{1/11}\lambda^{13/22}\right\} =2\max\left\{ \frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}h^{1/11}\lambda^{13/22}\right\} . \] Using that $h^{-3/11}\lambda^{5/22}\geq1$ we see that \[ m_{2}\leq2\max\left\{ \frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}\lambda^{2/3}\right\} . \] Since $m\geq2$ , $\lambda\leq\frac{1}{2}$, and $n,k\geq12$ we find that \[ \frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}\lambda^{2/3}\leq\frac{m}{2} \] so that $m_{2}\leq m$ as required. It follows from \prettyref{lem:FSadmissability} that $\phi_{\delta,n,k,\lambda}\in A_{\lambda,m}$, and that \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim(h\lambda)^{12/11}. \] \end{proof} Finally, we consider a construction made up of a few wrinkles, each of which wraps a few times about the cylinder. \begin{lem} \label{lem:FSUB_onewrinkle_tilted}Assume that \[ h^{6/5}\lambda\geq\max\{m^{-1/2}h\lambda^{3/2},(h\lambda)^{12/11}\}. \] Let $n,k\in\mathbb{N}$ and $\delta\in(0,1]$ satisfy \[ n=2,\quad k=2,\ \text{and}\quad\delta=4h^{2/5}. \] Then, $\phi_{\delta,n,k,\lambda}\in A_{\lambda,m}$ and \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim h^{6/5}\lambda. 
\] \end{lem} \begin{rem} Although this choice of $n,k,\delta$ is sometimes optimal with respect to the wrinkling construction considered in this section, it is suboptimal at the level of the free-shear functional. More precisely, in the regime of this result, one can achieve significantly less free-shear energy by not wrinkling at all. Indeed, the scaling law of $h^{6/5}\lambda$ is not present in the statement of \prettyref{prop:FSscalinglaw}. \end{rem} \begin{proof} Note that $\delta\leq1$ since $h\leq\frac{1}{2^{5}}$. Now we check the slope bound. We have that \[ m_{2}=2\max\left\{ \sqrt{\frac{\lambda}{2h^{2/5}}},\frac{\lambda}{2h^{2/5}},\frac{1}{2\pi}\frac{1}{k}\frac{\lambda}{h^{2/5}}+2\pi\frac{2\sqrt{2}\lambda^{1/2}h^{1/5}}{n}\right\} . \] Rearranging the inequality $h^{6/5}\lambda\geq m^{-1/2}h\lambda^{3/2}$, we find that $m\geq\lambda h^{-2/5}$ so that \[ m_{2}\leq2\max\left\{ \sqrt{\frac{m}{2}},\frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}\lambda^{1/2}h^{1/5}\right\} =2\max\left\{ \frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}\lambda^{1/2}h^{1/5}\right\} . \] Rearranging the inequality $h^{6/5}\lambda\geq(h\lambda)^{12/11}$ we find that $\lambda\leq h^{6/5}$, and hence that \[ m_{2}\leq2\max\left\{ \frac{m}{2},\frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}h^{4/5}\right\} . \] Using that $h\leq\frac{1}{2^{5}}$, $m\geq2$, and $n,k\geq2$ we see that \[ \frac{1}{2\pi}\frac{m}{k}+2\pi\frac{2\sqrt{2}}{n}h^{4/5}\leq\frac{m}{2} \] so that $m_{2}\leq m$ as required. It follows from \prettyref{lem:FSadmissability} that $\phi_{\delta,n,k,\lambda}\in A_{\lambda,m}$, and that \[ FS_{h}(\phi_{\delta,n,k,\lambda})\lesssim\max\left\{ \lambda h^{6/5},\lambda^{2}\right\} =\lambda h^{6/5}. \] \end{proof} \subsubsection{Blow-up rate of $D\phi$ as $h\to0$ for the free-shear functional\label{sub:BlowuprateFS}} We can now make \prettyref{rem:blowupFS} precise. 
\begin{cor} Let $\left\{ (h_{\alpha},\lambda_{\alpha})\right\} _{\alpha\in\mathbb{R}_{+}}$ be such that $h_{\alpha},\lambda_{\alpha}\in(0,\frac{1}{2}]$. Assume that $h_{\alpha}\ll\lambda_{\alpha}^{5/6}$ as $\alpha\to\infty$, and let $\{\phi^{\alpha}\}_{\alpha\in\mathbb{R}_{+}}$ satisfy \[ \phi^{\alpha}\in A_{\lambda_{\alpha},\infty}\quad\text{and}\quad FS_{h_{\alpha}}(\phi^{\alpha})=\min_{A_{\lambda_{\alpha},\infty}}FS_{h_{\alpha}}. \] Then we have that \[ h_{\alpha}^{-1/11}\lambda_{\alpha}^{9/22}\lesssim\norm{D\phi_{\rho}^{\alpha}}_{L^{\infty}(\Omega)}\quad\text{as}\ \alpha\to\infty. \] \end{cor} \begin{proof} For ease of notation, we omit the index $\alpha$ in what follows. By \prettyref{prop:FSscalinglaw} we have that \[ FS_{h}(\phi)\lesssim(h\lambda)^{12/11}. \] Hence, by \prettyref{cor:FSLB-1}, it follows that \[ \lambda^{2}\lesssim(h\lambda)^{12/11}\quad\text{or}\quad\norm{D\phi_{\rho}}_{L^{\infty}(\Omega)}^{-1}h\lambda^{3/2}\lesssim(h\lambda)^{12/11}. \] Rearranging, we have that \[ \lambda^{5/6}\lesssim h\quad\text{or}\quad h^{-1/11}\lambda^{9/22}\lesssim\norm{D\phi_{\rho}}_{L^{\infty}(\Omega)}. \] By assumption the first inequality does not hold, so the result follows. \end{proof} \subsection{Nonlinear model\label{sub:neutralmandrelLB_NL}} By combining the interpolation inequalities used in the analysis of the free-shear functional above and the uniform-in-mandrel lower bounds from \prettyref{sub:largemandrelLB_nonlinear}, we obtain the following lower bound in the neutral mandrel case. \begin{prop} \label{prop:NLneutralLBs}We have that \[ \min_{A_{\lambda,1,m}^{NL}}\,E_{h}^{NL}-\mathcal{E}_{b}^{NL}(1,h)\gtrsim_{m}\min\left\{ \max\{m^{-1}h\lambda^{3/2},(h\lambda)^{12/11}\},\lambda^{2}\right\} \] whenever $h,\lambda\in(0,1]$ and $m\in(0,\infty)$.\end{prop} \begin{proof} Let $\Phi\in A_{\lambda,1,m}^{NL}$ and introduce the radial displacement $\phi_{\rho}=\Phi_{\rho}-1$. Recall the definition of the excess energy given in \prettyref{eq:NLexcess}. 
Applying \prettyref{lem:NLmembrLB}, \prettyref{cor:NLbucklingcontrol}, and \prettyref{lem:NLbendingcontrol} in the case $\varrho=\varrho_{0}=1$, we obtain the following estimates: \[ \Delta^{NL}\gtrsim\norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}}^{2}\vee\norm{\partial_{\theta}\phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{4}, \] \[ \max\left\{ \frac{1}{h^{2}}\Delta^{NL},(\Delta^{NL})^{1/2}\right\} \gtrsim_{m}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2}, \] and \[ \max\{\norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2},(\Delta^{NL})^{1/2}\}\gtrsim_{m}\lambda. \] As in the proof of \prettyref{prop:NLLBs_largemandr}, we see that either $\Delta^{NL}\gtrsim_{m}\lambda^{2}$ or else \[ \Delta^{NL}\gtrsim_{m}\max\left\{ \norm{\phi_{\rho}}_{L_{z}^{2}L_{\theta}^{1}}^{2},\norm{\partial_{\theta}\phi_{\rho}}_{L_{z}^{4}L_{\theta}^{2}}^{4},h^{2}\norm{D^{2}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\right\} \] and \[ \norm{\partial_{z}\phi_{\rho}}_{L^{2}(\Omega)}^{2}\gtrsim_{m}\lambda. \] Now the result follows from the interpolation inequalities in \prettyref{sec:Appendix}, just as in the proofs of \prettyref{cor:FSLB-1} and \prettyref{cor:FSLB-2}. \end{proof} \section{Appendix\label{sec:Appendix}} In this appendix, we collect the interpolation inequalities that were used in \prettyref{sec:largemandrelLB} and \prettyref{sec:neutralmandrelLB}. We call $I=[-\frac{1}{2},\frac{1}{2}]$ and $Q=[-\frac{1}{2},\frac{1}{2}]^{2}$. \subsection{Isotropic interpolation inequalities} The following periodic Gagliardo-Nirenberg inequalities are standard. They can, for example, be easily deduced from their non-periodic analogs (see, e.g., \cite{friedman1969partial} for the non-periodic case). 
\begin{lem} \label{lem:1dinterp} \label{lem:2dinterp} We have that \[ \norm{f}_{L^{1}(I)}^{2/5}\norm{f''}_{L^{2}(I)}^{3/5}\gtrsim\norm{f'}_{L^{2}(I)} \] for all $f\in H_{\text{per}}^{2}(I)$, and that \begin{align*} \norm{f}_{L^{1}(Q)}^{1/2}\norm{D^{2}f}_{L^{2}(Q)}^{1/2} & \gtrsim\norm{Df}_{L^{4/3}(Q)}\\ \norm{f}_{L^{2}(Q)}^{1/2}\norm{D^{2}f}_{L^{2}(Q)}^{1/2} & \gtrsim\norm{Df}_{L^{2}(Q)} \end{align*} for all $f\in H_{\text{per}}^{2}(Q)$. \end{lem} Combining H\"older's inequality with the second inequality above, we deduce the following result. \begin{lem} \label{lem:2dinterp-1} We have that \[ \norm{Df}_{L^{\infty}(Q)}^{1/3}\norm{f}_{L^{1}(Q)}^{1/3}\norm{D^{2}f}_{L^{2}(Q)}^{1/3}\gtrsim\norm{Df}_{L^{2}(Q)} \] for all $f\in H_{\text{per}}^{2}(Q)$.\end{lem} \subsection{An anisotropic interpolation inequality} The next lemma was used to interpolate between the mixed norms appearing in the discussion of the neutral mandrel case (see \prettyref{sec:neutralmandrelLB}). Here, we refer to a point $x\in Q$ by its coordinates, i.e., $x=(x_{1},x_{2})$ where $x_{i}\in I$, $i=1,2$. Recall the notation for mixed $L^{p}$-norms given in \prettyref{sub:notation}. \begin{lem} \label{lem:mixedinterp_1} We have that \[ \norm{f}_{L_{x_{2}}^{2}L_{x_{1}}^{1}}+\norm{\partial_{x_{1}}f}_{L_{x_{2}}^{4}L_{x_{1}}^{2}}^{1/3}\norm{f}_{L_{x_{2}}^{2}L_{x_{1}}^{1}}^{2/3}\gtrsim\norm{f}_{L^{2}(Q)} \] for all $f\in W^{1,4}(Q)$.\end{lem} \begin{proof} By a standard one-dimensional Gagliardo-Nirenberg interpolation inequality, we have that \[ \norm{f}_{L_{x_{1}}^{2}}\lesssim\norm{\partial_{x_{1}}f}_{L_{x_{1}}^{2}}^{1/3}\norm{f}_{L_{x_{1}}^{1}}^{2/3}+\norm{f}_{L_{x_{1}}^{1}} \] for a.e.\ $x_2 \in I$.
After integrating and applying H\"older's inequality, it follows that \[ \norm{f}_{L_{x_{2}}^{2}L_{x_{1}}^{2}}\lesssim\norm{\norm{\partial_{x_{1}}f}_{L_{x_{1}}^{2}}^{1/3}\norm{f}_{L_{x_{1}}^{1}}^{2/3}}_{L_{x_{2}}^{2}}+\norm{f}_{L_{x_{2}}^{2}L_{x_{1}}^{1}}\lesssim \norm{\partial_{x_{1}}f}_{L_{x_{2}}^{4}L_{x_{1}}^{2}}^{1/3}\norm{f}_{L_{x_{2}}^{2}L_{x_{1}}^{1}}^{2/3} + \norm{f}_{L_{x_{2}}^{2}L_{x_{1}}^{1}}. \] \end{proof} \bibliographystyle{amsplain}
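As a quick numerical cross-check of the one-dimensional inequality in \prettyref{lem:1dinterp} (an illustration only, not part of the proof; the function name below is ours), the sketch evaluates the ratio of the two sides for the pure modes $f(x)=\cos(2\pi kx)$ on $I$ by midpoint quadrature:

```python
import math

def gn_ratio(k, N=8192):
    """Ratio LHS/RHS of ||f||_{L1}^{2/5} ||f''||_{L2}^{3/5} >= c ||f'||_{L2}
    for the periodic test function f(x) = cos(2*pi*k*x) on I = [-1/2, 1/2],
    with norms computed by midpoint quadrature on N cells."""
    h = 1.0 / N
    w = 2.0 * math.pi * k
    l1 = l2p_sq = l2pp_sq = 0.0
    for n in range(N):
        x = -0.5 + (n + 0.5) * h
        f = math.cos(w * x)
        fp = -w * math.sin(w * x)       # f'
        fpp = -w * w * math.cos(w * x)  # f''
        l1 += abs(f) * h
        l2p_sq += fp * fp * h
        l2pp_sq += fpp * fpp * h
    lhs = l1 ** (2.0 / 5.0) * l2pp_sq ** (0.3)  # (||f''||_2^2)^{3/10} = ||f''||_2^{3/5}
    return lhs / l2p_sq ** 0.5
```

For $k=1$ the ratio is about $1.38$ and it grows like $k^{1/5}$, so these modes are consistent with a universal constant of order one.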
https://arxiv.org/abs/2012.02125
On the Impossibility of Convergence of Mixed Strategies with No Regret Learning
We study the limiting behavior of the mixed strategies that result from optimal no-regret learning strategies in a repeated game setting where the stage game is any 2 by 2 competitive game. We consider optimal no-regret algorithms that are mean-based and monotonic in their argument. We show that for any such algorithm, the limiting mixed strategies of the players cannot converge almost surely to any Nash equilibrium. This negative result is also shown to hold under a broad relaxation of these assumptions, including popular variants of Online-Mirror-Descent with optimism and/or adaptive step-sizes. Finally, we conjecture that the monotonicity assumption can be removed, and provide partial evidence for this conjecture. Our results identify the inherent stochasticity in players' realizations as a critical factor underlying this divergence in outcomes between using the opponent's mixtures and realizations to make updates.
\subsection{Beyond the monotonicity assumption: A conjecture}\label{sec:conjecture} In Section~\ref{sec: beyond_mean_based}, we showed that our results can be extended beyond exact mean-based strategies, allowing their applicability to a wide range of popular online learning algorithms. In this section, we ask whether the other principal assumption of \textit{monotonicity} of the mapping from empirical average to mixed strategy at every round (Definition~\ref{as:monotonic}) is also relaxable. While this assumption of monotonicity will intuitively be satisfied by any learning agent that aims to maximize its utility\footnote{Note that this is significantly more general than expected-utility theory, as it is only a qualitative statement, not a quantitative one.}, and can be verified to be satisfied by all known mean-based no-regret algorithms, it is of interest to examine whether it is truly necessary to prove our results. We conjecture below that last-iterate oscillations will continue to occur for mean-based strategies that are not monotonic. \begin{conjecture}\label{con:lastiteratedivergence} If both players $1$ and $2$ use mean-based (not necessarily monotonic) repeated game strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ that are uniformly no-regret and each have a regret rate of $(1/2,c)$, then the pair of mixed strategies $(\bm{P_t}, \bm{Q_t})$ does not converge almost surely. \end{conjecture} Conjecture~\ref{con:lastiteratedivergence} turns out to be significantly more difficult to prove than the corresponding result with the additional monotonicity assumption, i.e. Theorem~\ref{thm:lastiteratedivergence}.
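To see the phenomenon the conjecture describes, here is a minimal simulation (an illustration only, not the paper's construction; function names are ours) of two Hedge learners, a standard mean-based, monotonic no-regret algorithm, playing matching pennies with realization feedback: each player updates from the opponent's sampled action, never from the mixed strategy itself.

```python
import math
import random

def hedge_matching_pennies(T, seed=0):
    """Two Hedge (multiplicative-weights) learners in matching pennies,
    with realization feedback: each player observes only the opponent's
    sampled action. Returns the time-averaged mixed strategy of player 1
    and the per-round mixed strategies (probability of action 1)."""
    rng = random.Random(seed)
    u1 = lambda i, j: 1.0 if i == j else -1.0  # player 2's payoff is -u1
    cum1, cum2 = [0.0, 0.0], [0.0, 0.0]  # cumulative payoffs vs realizations
    p_hist = []
    for t in range(1, T + 1):
        eta = math.sqrt(math.log(2) / t)  # vanishing step size
        def prob1(cum):
            w0, w1 = math.exp(eta * cum[0]), math.exp(eta * cum[1])
            return w1 / (w0 + w1)
        p, q = prob1(cum1), prob1(cum2)
        i = 1 if rng.random() < p else 0  # realizations of the mixed plays
        j = 1 if rng.random() < q else 0
        for a in (0, 1):  # mean-based update from the opponent's realization
            cum1[a] += u1(a, j)
            cum2[a] += -u1(i, a)
        p_hist.append(p)
    return sum(p_hist) / T, p_hist
```

In typical runs the time-average of player $1$'s mixed strategy settles near the NE value $1/2$, while the per-round mixed strategies keep fluctuating; the conjecture asks whether the same happens once monotonicity is dropped.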
The assumption of monotonicity on strategy maps (Definition~\ref{as:monotonic}) allowed us to link the event of last-iterate oscillations to an inequality relation on the empirical average $\bm{\widehat{Q}_{t-1}}$; therefore, we could lower bound the probability of a last-iterate oscillation by the cumulative distribution function (CDF) of $\bm{\widehat{Q}_{t-1}}$ and invoke limit theorems. In the absence of monotonicity of strategy maps, it turns out that we need to reason about the \textit{probability mass function} (PMF) of $\bm{\widehat{Q}_{t-1}}$ instead. This is a mathematically far more difficult object to study under the general scenario where $\bm{Q}_t$ is now a complex functional of the history of $(\bm{I}^{t-1},\bm{J}^{t-1})$. Importantly, the realizations of player $2$, given by $\{\bm{J_t}\}_{t \geq 1}$, are \textit{not} mutually independent, even conditionally on the mixed strategies $\{\bm{Q_t}\}_{t \geq 1}$. Moreover, martingale structure by itself is insufficient to obtain adequate control on the PMF of $\bm{\widehat{Q}_{t-1}}$. Nevertheless, we can make partial progress on proving Conjecture~\ref{con:lastiteratedivergence}. First, we note that the proof of the ``warm-up'' Theorem~\ref{thm:warmup} can be modified to show that player $1$ would oscillate in the idealized case where player $2$ has already converged to his NE even when the mean-based strategies are non-monotonic (Proposition~\ref{prop:warmup_ext} in Appendix~\ref{sec:fixedconvergent}). Second, while we know that the realizations $\{\bm{J_t}\}_{t \geq 1}$ cannot be mutually independent, they are still likely to be ``minimally stochastic'' across rounds in a certain sense: after all, an independent coin is being tossed on every round to generate the realization of player $2$, $\bm{J_t}$, from his mixed strategy on that round, $\bm{Q_t}$.
Thus, it is reasonable to conjecture that the PMF of $\bm{\widehat{Q}_t}$ is sufficiently ``close'' to a sum of independent random variables, which we denote by: \begin{align*} \bm{Z}'_t &:= \sum_{s=1}^t \bm{J}'_s \text{ where } \\ \bm{J}'_s &\sim \text{Ber}(\bbE[\bm{Q}_s]), \bm{J'}_s \text{ mutually independent. } \end{align*} More precisely, suppose that we could show that the induced distribution on the empirical average of player $2$, i.e. $\bm{\widehat Q_t}$, was similar to the normalized sum of independent random variables, defined above by $\bm{Z}'_t/t$, in the following quantitative sense: \begin{definition} The mean-based strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ satisfy the \textbf{shaky-hands} property if for any $\gamma > 0$, there exists an $\epsilon > 0$ and a $T > 0$ such that \begin{align}\label{eq: req_measure_ratio_prop} \frac{\bbP(t\bm{\widehat{Q}_t} = z)}{\bbP(\bm{Z'_t} = z)} \geq \epsilon \text{ for all } z \in [tq^* - \gamma \sqrt{t}, tq^* + \gamma \sqrt{t}], \end{align} for all $t \geq T$. \end{definition} This \emph{shaky-hands} property, if true, would posit a minimal amount of stochasticity on the dependent realizations of player $2$. The following theorem shows that Conjecture~\ref{con:lastiteratedivergence} is true if the shaky-hands property holds. \begin{theorem}\label{thm:shakyhands} If the mean-based strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ satisfy the shaky-hands property (Equation~\eqref{eq: req_measure_ratio_prop}), and the strategy $\{f_t\}_{t \geq 1}$ of player $1$ has a regret rate of $(1/2,c)$, then player $1$'s last iterates diverge from the equilibrium strategy $p^*$ in probability, i.e. there exist positive constants $(\delta,\epsilon_0)$ such that \begin{align} {\lim \sup}_{t \to \infty} \bbP\left[|\bm{P_{t}} - p^*| \geq \delta \right] \geq \epsilon_0 .
\end{align} \end{theorem} Theorem~\ref{thm:shakyhands} shows, in fact, a stronger statement of lack of convergence \emph{in probability} under the shaky-hands property. This is a valuable result, as it ensures that proving Conjecture~\ref{con:lastiteratedivergence} reduces to proving that the shaky-hands property (Equation~\eqref{eq: req_measure_ratio_prop}) will be satisfied by an arbitrary mean-based, non-monotonic no-regret algorithm. The proof of Theorem~\ref{thm:shakyhands} is significantly more technically involved than the CLT-based arguments provided in the previous sections, and is provided in Appendix~\ref{sec:fixedconvergent}. \section{Introduction} \label{sec: intro} The mixed strategy Nash equilibrium (NE) is one of the oldest solution concepts central to game theory. A finer understanding of how the NE arises as an outcome of learning behavior in a repeated game setting continues to be an active area of research. Classical research in economics\footnote{For a recent survey, see~\citet{fudenberg1998learning}.} dating back to~\citet{brown1951iterative} and~\citet{robinson1951iterative} as well as recent work in computer science~\citep{freund1999adaptive} tells us that when both the players in a two-player zero-sum game use strategies based on no-regret learning dynamics~\citep{hannan1957approximation, littlestone1989weighted,kalai2005efficient}, then the time-average of their strategies will converge, almost surely, to a Nash equilibrium~\citep{freund1999adaptive}. However, the convergence of the \textit{time-averaged} mixed actions to a NE does not necessarily imply that the \textit{day-to-day behavior}, i.e. the sequence of mixed strategies, of these players converges. In the asymptotic sense, the quantity of interest is the tuple of the limiting mixed strategies of both players, also referred to as the last-iterate (e.g. in~\citet{daskalakis2018last}). 
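The gap between time-averaged and day-to-day convergence is already visible in a toy (non-learning) example: a sequence of mixed strategies that alternates between the two pure strategies has a time-average tending to the uniform mixture, while the sequence itself has no limit. A minimal sketch (illustrative naming):

```python
def alternating_time_average(T):
    """Mixed strategies p_t alternating between the two pure strategies:
    the running average tends to 1/2, but the sequence (p_t) itself
    never converges."""
    ps = [t % 2 for t in range(1, T + 1)]  # p_t in {0, 1}: 1, 0, 1, 0, ...
    return sum(ps) / T, (ps[-2], ps[-1])   # time-average, last two iterates
```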
\citet{bailey2018multiplicative} discovered the following surprising property of the last iterate: When the players in a two-player zero-sum game compete against each other using the multiplicative weights update algorithm (a popular no-regret algorithm) with certain learning rates, then their resulting mixed strategies drift away from any interior NE --- in fact, they drift towards the boundary of the strategy space. This intriguing result is derived in an environment where players can play what we term \textit{telepathic strategies}, i.e. player $1$ can observe the exact mixed strategies used by player $2$, and vice versa. However, in the traditional repeated game setting, players can only observe the \emph{realizations} of the opponent's mixed strategies. One would only expect the oscillation problem to be exacerbated by the ensuing stochastic feedback. The natural question that arises is whether these \textit{last-iterate oscillations} are a specific property of the family of multiplicative weights algorithms, or a fundamental consequence of the no-regret property itself. This paper provides substantial evidence that it is the latter, by showing that last-iterate oscillation occurs for a broad, generic class of asymptotically optimal no-regret algorithms in the traditional repeated game setting (where players only observe realizations of each other's mixed strategies, not the mixed strategies themselves). In this ``non-telepathic'' scenario, we show that the ensuing stochasticity in realizations is one of the critical ingredients underlying the last-iterate oscillation. Our results suggest that no-regret learning strategies possess certain intrinsic properties by which the two notions---no-regret and convergence of the limiting mixed strategies---inherently conflict with one another. {\bf Our contributions:} We consider a repeated $2 \times 2$ game, i.e.
a two-player game played infinitely many times at steps $t = 1, 2, \dots$, where both the players can play mixtures of two pure strategies each. The repeated game strategy for a player outlines the rule by which she picks her mixed action at step $t$ based on the history up to and including step $(t-1)$. We will first describe our main result in its most stylized form to help identify the key components responsible for the phenomenon of last-iterate oscillation. We will make three natural assumptions on each player's repeated game strategy, all of which are ubiquitous to most popular learning dynamics: \begin{enumerate} \item We assume the player's strategy to be an \textit{optimal no-regret} strategy with respect to her utility function, that is, she has an expected regret of $\mathcal{O}(t^{1/2})$ irrespective of the strategy employed by the other player. See Definition~\ref{def: uniform_noregret_rate} for formal definitions of no-regret algorithms, optimal or otherwise. \item We assume that the player's optimal no-regret strategy is \textit{mean-based}, i.e. the player uses only the empirical average of the actions of the other player at step $(t-1)$ as a sufficient statistic to decide her mixed action\footnote{In other words, the player is agnostic to the ordering in the opponent's action realizations. In Section~\ref{sec: beyond_mean_based}, we weaken this assumption to show the validity of our results with strategies that display recency bias.} at step $t$. Note, further, that such strategies are \textit{self-agnostic}, in the sense that they do not use the actual realizations of their own mixed strategies to update their strategy. In general, the player is aware of the step $t$, and we accordingly allow her rule for mapping empirical averages to mixed strategies to depend on the step $t$. \item We assume that the player's optimal no-regret, mean-based strategy is \textit{monotonic} in its argument, i.e.
the empirical average of her opponent's actions, at every step $t \geq 1$. The monotonicity does not need to be strict, and its direction (increasing or decreasing) can vary arbitrarily across rounds. We note that monotonicity in the direction of the player's best response (in the sense that the larger the relative advantage of a response is, the more likely a player is to use it) is a natural constraint to impose on a rational agent. In this context, time-varying monotonicity constitutes a significantly weaker regularity condition on the player's strategies. We discuss a possible relaxation of this monotonicity assumption in Section~\ref{sec:conjecture}. \end{enumerate} Most popular online learning dynamics, such as \textit{Online-Mirror-Descent}~\citep{nemirovsky1983problem} (or \textit{Follow-the-Regularized Leader}~\citep{shalev2011online}) strategies, can easily be verified to satisfy all three of these assumptions. Our main contribution is to show that if both players deploy strategies satisfying the above three properties, and the stage game possesses a unique completely mixed NE\footnote{We note that any $2\times2$ game that possesses only completely mixed NE can be shown to possess one unique NE, which is also the unique correlated equilibrium of the game~\citep{phade2019geometry}. These games are designated as \textit{competitive games} by~\citet{calvo2006set}. These games have been of special interest in the design of experiments to test the performance of NE as a predictor of behavior in games as they have an unambiguous NE. For example,~\citet{selten2008stationary,binmore2001does} refer to these games as completely mixed $2\times 2$ games.}, then their mixed actions \textit{cannot} converge to the NE. We denote the unique NE of the stage game by the tuple $(p^*,q^*)$, where $0 < p^*, q^* < 1$ denote the equilibrium strategies of playing action $1$ by players $1$ and $2$ respectively.
In Theorem~\ref{thm:lastiteratedivergence}, we prove the following statement for any such game (described here informally): \begin{center} \textit{If players $1$ and $2$ use (possibly different) optimal-no-regret, mean-based and monotonic repeated game strategies, then their mixed strategies \textbf{cannot} converge to the NE $(p^*,q^*)$.} \end{center} Our proof technique isolates the ensuing stochasticity in either of the player's realizations as a critical ingredient underlying these \textit{last-iterate oscillations}. In particular, we prove Theorem~\ref{thm:lastiteratedivergence} via contradiction: suppose, instead, that the sequence of mixed strategies $(\bm{P_t},\bm{Q_t})$ converged to $(p^*,q^*)$, which implies that $\bm{Q_t} \to q^*$. We show that this would cause sufficient stochasticity by itself to necessitate $\bm{P_t}$ to oscillate with a positive probability \textit{as a fundamental consequence} of no-regret (together with the mean-based and monotonic properties). The intuition for why stochasticity in realizations is the primary cause of last-iterate oscillations is contained in an elementary ``warm-up'' argument provided in Theorem~\ref{thm:warmup}, which shows that the iterates of player $1$ oscillate even in an idealized scenario in which player $2$ has already converged to his NE strategy, i.e. $q_t = q^*$ for all $t \geq 1$. The mean-based and monotonic properties described above do not constitute \textit{all} popular no-regret strategies used in practice; notable exceptions are the family of \textit{optimistic} no-regret algorithms~\citep{rakhlin2013optimization,syrgkanis2015fast,daskalakis2018last} and Online-Mirror-Descent algorithms run with data-adaptive step sizes~\citep{hazan2010extracting,cesa2007improved,erven2011adaptive,rakhlin2013online,rakhlin2013optimization}. 
However, we prove that such strategies can be reduced in a natural way to mean-based and monotonic strategies in Section~\ref{sec: beyond_mean_based}, and consequently show that last-iterate oscillations will continue to arise when these strategies are used. Our negative result for optimistic strategies in particular highlights a contrast to the \textit{telepathic setting}, in which~\citet{daskalakis2018last} showed that the last iterate (which is deterministic under telepathic dynamics) will converge to NE when both players use optimistic mirror descent strategies. A complete removal of the mean-based and monotonic assumptions remains an important direction for future work; however, in Section~\ref{sec:conjecture} we provide partial evidence that the monotonicity assumption in particular can be removed. {\bf Related work:} While the evolution of the \textit{time-averages} of players' strategies as a consequence of multiple players using no-regret dynamics has been an active topic of study for several decades~\citep{brown1951iterative,robinson1951iterative,foster1997calibrated,fudenberg1998learning,freund1999adaptive,hart2000simple,hart2005adaptive,kalai2005efficient}, the properties of the limiting mixed strategies, or the last-iterates, have only been examined more recently. This topic has also seen substantial attention in the related setup of \textit{min-max optimization}~\citep{daskalakis2018training,mertikopoulos2018optimistic,liang2019interaction,abernethy2019last,lei2020last}, where the primary goal is to attain a pure-strategy NE of a game with a continuous-pure-strategy set through the use of first-order optimization algorithms, e.g. gradient descent-ascent. This problem has been primarily studied in the \textit{deterministic setting}, corresponding to the aforementioned telepathic dynamics in the game-theoretic setup. 
Recently,~\citet{daskalakis2018last} showed that a modification of the multiplicative weights strategy that incorporates \textit{recency bias} succeeds in last-iterate convergence in the game-theoretic setup with telepathic dynamics. This type of recency bias, commonly called optimism, has also been shown to successfully converge in min-max optimization when applied to the gradient descent/ascent algorithms~\citep{daskalakis2018training,mertikopoulos2018optimistic,liang2019interaction,abernethy2019last,lei2020last}. Moreover, optimistic algorithms have other notable properties, such as leading to faster convergence rates of the time-average of mixed strategies in zero-sum as well as non-zero-sum games~\citep{rakhlin2013optimization,syrgkanis2015fast}. However, we show in Section~\ref{sec: beyond_mean_based} that when stochastic realization-based feedback is considered, optimistic variants on mean-based strategies \textit{do not} resolve the last-iterate oscillation issue. In fact, the key phenomena that we outlined above manifest in recency-bias-based strategies as well. This illustrates that the issue of last-iterate oscillation runs deeper in the traditional repeated-game setting than in the telepathic setting. We briefly discuss alternative (non-constructive) strategies that could satisfy the last-iterate-convergence property in Section~\ref{sec: concl} --- these strategies are not no-regret, but satisfy a weaker property of ``smoothly calibrated forecasting''~\citep{foster2018smooth}. \section{Proof of Theorem~\ref{thm:optimism}} \label{sec: optimismproof} We first define notation pertinent to $\ell$-recency-bias strategies. We denote $\bm{Z^\ell_t} := \sum_{t'=1}^t \bm{J_{t'}} + \sum_{j=1}^{\ell} r_j \bm{J_{t-j + 1}}$, where $\{\bm{J}_t\}_{t \geq 1}$ denotes the sequence of realizations generated by player $2$. Note that $\bm{\widehat{Q}^\ell_t} = \bm{Z^\ell_t}/t$. 
More conveniently, we can also write \begin{align*} \bm{Z^\ell_t} := \sum_{t'=1}^t (1 + r'_{t'}) \bm{J_{t'}} , \end{align*} where we designate $r'_{t'} = 0$ for $t' \leq (t - \ell)$, and $r'_{t'} = r_{t - t' + 1}$ thereafter. We will essentially mimic the proof-by-contradiction approach of Theorem~\ref{thm:lastiteratedivergence}. We will suppose that $(\bm{P}_t,\bm{Q}_t) \to (p^*,q^*)$ almost surely, and show that if $\bm{Q}_t \to q^*$ almost surely, then $\bm{P}_t$ must oscillate, which provides the desired contradiction. We state the following claim: \begin{claim}\label{claim:optimismclt} Let $C$ be the universal constant (well-defined by Lemma~\ref{lem:timeave} for any pair of optimal no-regret algorithms for players $1$ and $2$) such that $\bm{\overline{Q}_t} \geq q^* - C/\sqrt{t}$ pointwise. Then, for any deterministic subsequence $\{t_k\}_{k \geq 1}$ and any $\beta > 0$, we have \begin{align}\label{eq:optimismclt} \lim_{k \to \infty} \bbP(\sqrt{t_k} (\bm{\widehat{Q}^\ell_{t_k}} - q^*) \geq \beta) \geq \text{erfc}\left(\frac{\beta + C}{\gamma(q^*)}\right) > 0 , \end{align} where $\gamma(\cdot) > 0$ is defined as in the proof of Theorem~\ref{thm:lastiteratedivergence}. \end{claim} Claim~\ref{claim:optimismclt} essentially provides a CLT-like statement for the recency-bias-adjusted random sequence $\{\bm{\widehat{Q}^\ell_{t_k}}\}_{k \geq 1}$, and is proved below.
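Before turning to the proof, the two expressions for $\bm{Z^\ell_t}$ can be cross-checked numerically. The sketch below uses our own naming and assumes, purely for illustration, that the recency weights satisfy $0 \le r_j \le \ell$; under that assumption it also checks the sandwich bound $\bm{\widehat{Q}_t} \le \bm{\widehat{Q}^\ell_t} \le \bm{\widehat{Q}_t} + \ell^2/t$.

```python
import random

def z_ell_forms(J, r):
    """Evaluate Z^ell_t two ways: the direct definition
    sum_t J_t + sum_j r_j * J_{t-j+1}, and the folded form
    sum_t (1 + r'_{t'}) J_{t'} with r'_{t'} = r_{t-t'+1} for t' > t - ell.
    J is the 0-indexed list of realizations J_1..J_t; r holds r_1..r_ell."""
    t, ell = len(J), len(r)
    direct = sum(J) + sum(r[j - 1] * J[t - j] for j in range(1, ell + 1))
    rprime = [0.0] * t
    for tp in range(t - ell + 1, t + 1):  # rounds t' with t' > t - ell
        rprime[tp - 1] = r[t - tp]        # r'_{t'} = r_{t - t' + 1}
    folded = sum((1 + rprime[tp - 1]) * J[tp - 1] for tp in range(1, t + 1))
    return direct, folded
```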
\begin{proof} By the definition of $\bm{\widehat{Q}^\ell_{t_k}}$, we notice that \begin{align*} \bm{\widehat{Q}_{t_k}} \leq \bm{\widehat{Q}^\ell_{t_k}} \leq \bm{\widehat{Q}_{t_k}} + \frac{\ell^2}{t_k} \text{ a.s.}, \end{align*} which gives us \begin{align*} \sqrt{t_k}(\bm{\widehat{Q}_{t_k}} - q^*) \leq \sqrt{t_k}(\bm{\widehat{Q}^\ell_{t_k}} - q^*) \leq \sqrt{t_k}(\bm{\widehat{Q}_{t_k}} - q^*) + \frac{\ell^2}{\sqrt{t_k}} \text{ a.s.} \end{align*} Since $\ell$ is assumed to be a constant that does not grow with $t$, the two outer bounds pinch together, and we conclude that \begin{align*} \sqrt{t_k}(\bm{\widehat{Q}^\ell_{t_k}} - q^*) - \sqrt{t_k}(\bm{\widehat{Q}_{t_k}} -q^*) \to 0 \text{ a.s.} \end{align*} Combining Equation~\eqref{eq:martingaleCLTsubsequence} from the proof of Theorem~\ref{thm:lastiteratedivergence} (which used the martingale CLT together with the time-averaged convergence property) with the above yields \begin{align*} \lim_{k \to \infty} \bbP(\sqrt{t_k}(\bm{\widehat{Q}^\ell_{t_k}} - q^*) \geq \beta) \geq \text{erfc}\left(\frac{\beta + C}{\gamma(q^*)}\right), \end{align*} which completes the proof of the claim. \end{proof} We now use Claim~\ref{claim:optimismclt} to complete the proof of Theorem~\ref{thm:optimism}. We denote $\{\bm{J'_{t}}\}_{t \geq 1}$ to be a sequence of iid Bernoulli$(q^*)$ random variables. Using this notation, we can then write, for any $0 \leq s \leq t$, \begin{align*} \bm{(Z'')^\ell_{t,s}} &:= \sum_{t'=1}^{t-s} (1 + r'_{t'}) \bm{J'_{t'}} + \sum_{t' = t - s + 1}^t (1 + r'_{t'}) \\ \bm{(Z')^\ell_{t}} &:= \sum_{t'=1}^{t} (1 + r'_{t'}) \bm{J'_{t'}} \\ \bm{(\widehat{Q}'')^\ell_{t,s}} &:= \frac{\bm{(Z'')^\ell_{t,s}}}{t} \\ \bm{(\widehat{Q}')^\ell_{t}} &:= \frac{\bm{(Z')^\ell_{t}}}{t}.
\\ \end{align*} From Proposition~\ref{prop:noregretsensitivity}, we know that there exists a sequence $\{t_k,s_k\}_{k \geq 1}$ such that $0 \leq s_k \leq \alpha (t_k)^{1/2}$ for all $k \geq 1$, and \begin{align*} \bbE\left[f_{t_k}(\bm{(\widehat{Q}'')^\ell_{t_k,s_k}})\right] \geq p^* + 2 \delta \text{ for all } k \geq 1 . \end{align*} As in the proof of Theorem~\ref{thm:warmup}, we can use Markov's inequality to get \begin{align*} \bbP\left(f_{t_k}(\bm{(\widehat{Q}'')^\ell_{t_k,s_k}}) > p^* + \delta \right) \geq \epsilon_0 , \end{align*} where $\epsilon_0 := \delta/((1 - p^*) - \delta)$. Note that $0 < \epsilon_0 < 1/2$, as in the proof of Theorem~\ref{thm:lastiteratedivergence}. Next, in an argument similar to the proof of Claim~\ref{claim:optimismclt}, we can apply the central-limit-theorem to the recency-biased random variable $\bm{(\widehat{Q}')^\ell_t}= \bm{(Z')^\ell_t}/t$. This will yield \begin{equation} \label{eq: prob_markov_tailcut_bdd_opt} \bbP \l(f_{t_k} \l(\frac{\bm{({Z}'')^\ell_{t_k, s_k}}}{t_k} \r) \geq p^* + \delta, \bm{(Z'')^\ell_{t_k, s_k}} \leq q^* t_k + \beta \sqrt{t_k} \r) \geq \frac{\epsilon_0}{2}. \end{equation} Equation~\eqref{eq: prob_markov_tailcut_bdd_opt} together with the monotonicity assumption (Assumption~\ref{as:monotonic}) yields \begin{align*} f_{t_k}\left(q^* + \frac{\beta}{\sqrt{t_k}}\right) \geq p^* + \delta \text{ for all } k \geq 1, \end{align*} where the justification is the same as in the proof of Theorem~\ref{thm:warmup}, and $\beta > 0$ is chosen in the same way. Accordingly, we get \begin{align*} \lim_{k \to \infty} \bbP[\bm{P}_{t_k} \geq p^* + \delta] \geq \lim_{k \to \infty} \bbP[\sqrt{t_k}(\bm{\widehat{Q}_{t_k}} - q^*) \geq \beta] \geq \text{erfc} \left(\frac{\beta + C}{\gamma(q^*)}\right), \end{align*} where the last inequality follows from Equation~\eqref{eq:optimismclt}. This completes the proof of Theorem~\ref{thm:optimism}. 
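For completeness, the Markov-inequality step used above (which produces $\epsilon_0 = \delta/((1-p^*)-\delta)$) can be expanded in one line. Writing $\bm{X}$ for the bounded random variable $f_{t_k}(\bm{(\widehat{Q}'')^\ell_{t_k,s_k}}) \leq 1$ with $\bbE[\bm{X}] \geq p^* + 2\delta$:

```latex
\begin{align*}
p^* + 2\delta \;\leq\; \bbE[\bm{X}]
  &\leq (p^* + \delta)\,\bbP(\bm{X} \leq p^* + \delta) + 1 \cdot \bbP(\bm{X} > p^* + \delta) \\
  &= (p^* + \delta) + \big(1 - p^* - \delta\big)\,\bbP(\bm{X} > p^* + \delta),
\end{align*}
so that, rearranging, $\bbP(\bm{X} > p^* + \delta) \geq \delta/\big((1 - p^*) - \delta\big) = \epsilon_0$.
```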
\section{Proof of Theorem~\ref{thm:shakyhands}}\label{sec:fixedconvergent} In this section, we provide the proof for our partial result (Theorem~\ref{thm:shakyhands}) that shows that last-iterate oscillations can occur even as a consequence of non-monotonic strategies under the conjecture that the shaky-hands condition (Equation~\eqref{eq: req_measure_ratio_prop}) holds. We begin by showing a version of the ``warm-up'' Theorem~\ref{thm:warmup} that holds for non-monotonic strategies. \begin{proposition}\label{prop:warmup_ext} Let player $2$'s strategy $\{\bm{J_t}\}_{t \geq 1}$ be an i.i.d. sequence of Bernoulli($q^*$) random variables. Then, any mean-based repeated game strategy $\{f_t\}_{t \geq 1}$ that has a regret rate of $(1/2,c)$ causes player $1$'s last iterate to diverge, i.e., there exist positive constants $(\delta,\epsilon)$ such that \begin{align} \label{eq: liminf_prob_dev_response} \limsup_{t \to \infty} \bbP\left[|\bm{P_{t}} - p^*| \geq \delta \right] \geq \epsilon . \end{align} \end{proposition} \begin{proof} The proof is identical to the proof of Theorem~\ref{thm:warmup} until the step \begin{align*} \bbP \l(f_{t_k} \l(\frac{\bm{{Z}''_{t_k, s_k}}}{t_k} \r) \geq p^* + \delta, \bm{Z''_{t_k, s_k}} \leq q^* \cdot t_k + \beta \sqrt{t_k} \r) \geq \frac{\epsilon_0}{2}. \end{align*} Since our strategy $\{f_t\}_{t \geq 1}$ is no longer guaranteed to be monotonic, we can no longer turn the above equation into a deterministic statement to which we can apply the central limit theorem. We now use a more specialized argument that controls the ratio of probability mass functions. Recall that we defined the opponent sequence $\{\bm{J'_t}\}_{t \geq 1}$ to be an i.i.d. sequence of Bernoulli($q^*$) random variables. Note that $\bm{Z'_t} \overset{d}{=} t \bm{\widehat{Q}_{t}} $, and by definition $\bm{Z''_{t,s}} \geq s$ point-wise. We denote $\beta_0 := \beta/q^*$. Note that $q^* t + \beta \sqrt{t} = q^* (t + \beta_0\sqrt{t})$ for any $t$.
Now, we show that \begin{equation} \label{eq: prob_ratio_lower_bdd} \min_{s \leq z \leq q^*(t + \beta_0 \sqrt{t}) } \; \frac{\bbP(\bm{Z'_t} = z)}{\bbP(\bm{Z''_{t,s}} = z)} \geq e^{-\alpha \beta_0}, \end{equation} for all $0 < s \leq \alpha \sqrt{t}$, $t \geq 1$. Indeed, we have \begin{align*} \frac{\bbP(\bm{Z'_t} = z)}{\bbP(\bm{Z''_{t,s}} = z)} &= \frac{\binom{t}{z}(q^*)^z (1 - q^*)^{t - z}}{\binom{t-s}{z-s}(q^*)^{z-s} (1 - q^*)^{t - z}} = \frac{t}{z} \cdot \frac{t-1}{z-1} \cdots \frac{t-s+1}{z-s+1} \cdot (q^*)^s\\ &\geq \l(\frac{q^*t}{z}\r)^s \geq \l(\frac{t}{t + \beta_0 \sqrt{t}}\r)^{\alpha \sqrt{t}}\\ &\geq e^{-\alpha \beta_0} > 0, \end{align*} where the first inequality follows from $z \leq t$ and therefore $\frac{t - \ell}{z - \ell}$ is increasing in $\ell$, the second inequality follows from $z \leq q^*(t + \beta_0 \sqrt{t})$ and $s \leq \alpha \sqrt{t}$, and the last inequality follows from the fact that \[ \l(\frac{t + \beta_0 \sqrt{t}}{t}\r)^{\alpha \sqrt{t}} = \l(1 + \frac{\beta_0}{\sqrt{t}}\r)^{\alpha \sqrt{t}} \leq e^{\alpha \beta_0}. \] (This follows from $\ln(1 + u) \leq u$ for $u \geq 0$; the function $(1 + \beta_0/x)^{\alpha x}$ increases to $e^{\alpha \beta_0}$ as $x \to \infty$.) We are now ready to complete our proof via a simple ``change-of-measure'' argument and the above lower bound on the ratio of the probability mass functions.
From Equations~\eqref{eq: prob_markov_tailcut_bdd} and \eqref{eq: prob_ratio_lower_bdd} and the law of total probability, we get \begin{align*} &\bbP \l(f_{t_k} \l(\frac{\bm{{Z}'_{t_k}}}{t_k} \r) \geq p^* + \delta, \bm{Z'_{t_k}} \leq q^* \cdot t_k + \beta \sqrt{t_k} \r) \\ &\geq \sum_{z=s_k}^{q^* \cdot t_k + \beta \sqrt{t_k}} \bbP \l(\bm{Z'_{t_k}} = z\r) \cdot \mathbb{I}\left[f_{t_k}\l(\frac{z}{t_{k}}\r) \geq p^* + \delta\right]\\ &\geq e^{-\alpha \beta_0} \cdot \sum_{z=s_k}^{q^* \cdot t_k + \beta\sqrt{t_k}} \bbP \l(\bm{Z''_{t_k,s_k}} = z\r) \cdot \mathbb{I}\left[f_{t_k}\l(\frac{z}{t_{k}}\r) \geq p^* + \delta\right] \\ &= e^{-\alpha \beta_0} \cdot \bbP \l(f_{t_k} \l(\frac{\bm{Z''_{t_k,s_k}}}{t_k} \r) \geq p^* + \delta, \bm{Z''_{t_k,s_k}} \leq q^* \cdot t_k + \beta \sqrt{t_k} \r) \\ &\geq \frac{\epsilon_0}{2} e^{-\alpha \beta_0}, \end{align*} and hence \[ \bbP \l(f_{t_k} \l(\frac{\bm{{Z}'_{t_k}}}{t_k} \r) \geq p^* + \delta\r) \geq \frac{\epsilon_0}{2} e^{-\alpha \beta_0}, \] for $k \geq k_1$. Since $\bm{P_{t_k}} \overset{d}{=} f_{t_k}(\bm{\widehat{Q}_{t_k}})$, taking $\epsilon := (\epsilon_0/2) e^{-\alpha \beta_0}$, we get \begin{align*} \bbP\left[\bm{P_{t_k}} \geq p^* + \delta \right] \geq \epsilon, \end{align*} for all $k \geq k_1$. This implies Equation~\eqref{eq: liminf_prob_dev_response} and completes the proof of the proposition. \end{proof} Now, to extend the ideas from the proof of Proposition~\ref{prop:warmup_ext}, we need to show that the probability mass function of the random variable $\bm{Z_t} := \sum_{s=1}^t \bm{J_s}$ is sufficiently similar to the probability mass function of the sum of independent Bernoulli$(q^*)$ random variables, which we denoted above by $\bm{Z'_t}$. In other words, it suffices to show that \begin{align}\label{eq:pmfratio} \frac{\bbP(\bm{{Z}_t} = z)}{\bbP(\bm{Z'_t} = z)} \geq \epsilon \text{ for all } z \in [tq^* - \beta\sqrt{t}, tq^* + \beta\sqrt{t}] \end{align} for some universal positive constant $\epsilon > 0$.
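As an aside, the change-of-measure step above hinges on a pmf-ratio lower bound between $\bm{Z'_t} \sim \text{Binomial}(t, q^*)$ and $\bm{Z''_{t,s}}$ ($s$ forced successes plus $\text{Binomial}(t-s, q^*)$). The following is a minimal numerical check of that ratio against the conservative constant $e^{-\alpha\beta_0}$; the parameter values are illustrative.

```python
from math import comb, exp, sqrt

# Hedged numerical check (illustrative parameters): the ratio
# P(Z'_t = z) / P(Z''_{t,s} = z) over the relevant range of (s, z)
# should stay above exp(-alpha * beta0).
q_star, alpha, beta0, t = 0.4, 2.0, 1.5, 400

def pmf_Zp(z):           # Z'_t ~ Binomial(t, q*)
    return comb(t, z) * q_star**z * (1 - q_star)**(t - z)

def pmf_Zpp(z, s):       # s deterministic successes + Binomial(t - s, q*)
    return comb(t - s, z - s) * q_star**(z - s) * (1 - q_star)**(t - z)

bound = exp(-alpha * beta0)
worst = min(
    pmf_Zp(z) / pmf_Zpp(z, s)
    for s in range(1, int(alpha * sqrt(t)) + 1)
    for z in range(s, int(q_star * (t + beta0 * sqrt(t))) + 1)
)
assert worst >= bound > 0
```

The brute-force minimum over $(s, z)$ sits well above the analytic constant at these parameter values.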
We will do this by defining two sets of intermediate random variables: \begin{enumerate} \item The random variable $\bm{Y_t} := \sum_{s=1}^t \bm{J''_s}$, where $\bm{J''_s} \sim \text{Bernoulli}(\bbE[\bm{Q_s}])$ and $\{\bm{J''_t}\}_{t \geq 1}$ are mutually independent. \item The random variable $\bm{Y'_t} \sim \text{Binomial}(t,\bbE[\bm{\overline{Q}_t}])$. \end{enumerate} We will show that the ratios of the pmfs between $(\bm{Z_t}, \bm{Y_t})$, $(\bm{Y_t},\bm{Y'_t})$, and $(\bm{Y'_t}, \bm{Z'_t})$ are lower bounded under the shaky-hands conjecture, i.e., assuming that Equation~\eqref{eq: req_measure_ratio_prop} holds. First, we note that Equation~\eqref{eq: req_measure_ratio_prop} directly yields \begin{align*} \frac{\bbP(\bm{Z_t} = z)}{\bbP(\bm{Y_t} = z)} \geq \epsilon \text{ for all } z \in [tq^* - \beta \sqrt{t}, tq^* + \beta \sqrt{t}]. \end{align*} It remains to show that the pmf of $\bm{Y_t}$ is sufficiently similar to the pmf of $\bm{Z'_t} \sim \text{Binomial}(t,q^*)$. Henceforth, we denote $q_t := \bbE[\bm{Q_t}]$ and $\overline{q}_t := \bbE[\bm{\overline{Q}_t}]$ as shorthand. We will use a proof-by-contradiction approach similar to the one in the proof of Theorem~\ref{thm:lastiteratedivergence} to establish the required pmf control. In other words, we will suppose that $\bm{Q_t} \to q^*$ in probability, and show that we cannot then have $\bm{P_t} \to p^*$ in probability. Note that $\bm{Q_t} \to q^*$ in probability obviously requires $q_t \to q^*$. Moreover, it follows from the property of time-averaged convergence and the definition of a competitive game (Appendix~\ref{sec:timeave}) that $|\overline{q}_t - q^*| \leq \frac{C}{\sqrt{t}}$ for all $t \geq 1$.
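The following is a minimal numerical illustration of the kind of pmf comparison this section establishes: a Poisson binomial sum whose success probabilities average out to $q^*$ has a pmf comparable to the $\text{Binomial}(t, q^*)$ pmf on the central window. The alternating sequence $q_s = q^* \pm \delta$ and all parameter values are illustrative choices, not taken from the paper.

```python
import math

# Hedged illustration (illustrative parameters): Y_t is a Poisson binomial
# sum with success probabilities averaging q*; Z'_t ~ Binomial(t, q*).
# We check that their pmf ratio is bounded below on the central window.
t, q_star, delta, beta = 200, 0.4, 0.1, 1.0
qs = [q_star + delta * (-1) ** s for s in range(t)]  # averages to q*

def poisson_binomial_pmf(probs):
    # dynamic-programming convolution of independent Bernoulli pmfs
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for z, mass in enumerate(pmf):
            nxt[z] += mass * (1 - p)
            nxt[z + 1] += mass * p
        pmf = nxt
    return pmf

pmf_Y = poisson_binomial_pmf(qs)
pmf_Z = poisson_binomial_pmf([q_star] * t)  # Binomial(t, q*)

lo = math.floor(t * q_star - beta * math.sqrt(t))
hi = math.ceil(t * q_star + beta * math.sqrt(t))
worst = min(pmf_Y[z] / pmf_Z[z] for z in range(lo, hi + 1))
assert worst > 0.5
```

Both distributions have mean $t q^*$ and nearly equal variances here, so the ratio stays close to $1$ on the window; the lemmas below make a bound of this type precise.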
In summary, the sequence $\{\bbE[\bm{Q_t}]\}_{t \geq 1}$ belongs to the following class of \textit{fixed-convergent} strategies defined with respect to $q^*$: \begin{subequations} \begin{align} |q_t - q^*| &\leq \delta/2 \text{ for all } t \geq t_0 \label{eq:lastiterateoblivious} \\ |\overline{q}_t - q^*| &\leq \frac{C}{\sqrt{t}} \text{ for all } t \geq 1 \text{, where } \label{eq:timeaverageoblivious} \\ \overline{q}_t &:= \frac{1}{t} \sum_{s=1}^t q_s \nonumber. \end{align} \end{subequations} We also denote the set of such fixed-convergent strategies by $\mathcal{Q}_{\delta,t_0,C}$ and denote their truncation to step $t$ by $\mathcal{Q}_{\delta,t_0,C}(t)$. First, we show that the pmf of $\bm{Y_t}$ is very close to the pmf of $\bm{Y'_t} \sim \text{Binomial}(t, \bar q_t)$ for any fixed-convergent sequence $\{q_t\}_{t \geq 1}$ satisfying Equations~\eqref{eq:lastiterateoblivious} and~\eqref{eq:timeaverageoblivious} (note that $\bar q_t = \frac{1}{t}\sum_{s = 1}^t q_s$). This is encapsulated in the following lemma. \begin{lemma}\label{lem:changeofmeasure} Consider any fixed-convergent sequence $\{q_t\}_{t \geq 1}$, i.e., such that Equations~\eqref{eq:lastiterateoblivious} and~\eqref{eq:timeaverageoblivious} both hold. Let $\bm{Y_t} = \sum_{s=1}^t \bm{J''_s}$ where $\bm{J''_s} \sim \text{Bernoulli}(q_s)$ and $\{\bm{J''_t}\}_{t \geq 1}$ are mutually independent, and let $\bm{Y'_t} \sim \text{Binomial}(t, \bar q_t)$. Then, there exist a positive constant $\epsilon_b$ and an integer $t_{0,b}$ such that for all $t \geq t_{0,b}$, we have \begin{align*} \frac{\bbP(\bm{Y_t} = z)}{\bbP(\bm{Y'_t} = z)} \geq \epsilon_b \text{ for all } z \in [tq^* - \beta \sqrt{t}, tq^* + \beta \sqrt{t}] . \end{align*} \end{lemma} We next show that the pmfs of $\bm{Y'_t} \sim \text{Binomial}(t,\overline{q_t})$ and $\bm{Z'_t} \sim \text{Binomial}(t,q^*)$ are sufficiently close.
\begin{lemma} \label{lem: ration_YtoZ} There exist a positive constant $\epsilon_d > 0$ and a sufficiently large $t_{0,d}$ such that for all $t > t_{0,d}$, we have \[ \frac{\bbP(\bm{Y'_t} = z)}{\bbP(\bm{Z'_t} = z)} \geq \epsilon_d, \text{ for all } z \in [q^*t - \beta\sqrt{t}, q^*t + \beta\sqrt{t}]. \] \end{lemma} Notice that from Lemma~\ref{lem:changeofmeasure}, Lemma~\ref{lem: ration_YtoZ}, and Equation~\eqref{eq: req_measure_ratio_prop}, we get \begin{align*} \frac{\bbP(\bm{Z_t} = z)}{\bbP(\bm{Z'_t} = z)} \geq \epsilon \text{ for all } z \in [tq^* - \beta\sqrt{t}, tq^* + \beta\sqrt{t}] . \end{align*} for a constant $\epsilon = \epsilon_b \epsilon_d > 0$. In the following two subsections, we prove these two lemmas. This then completes the proof of Theorem~\ref{thm:shakyhands}. \qed It remains to prove Lemmas~\ref{lem:changeofmeasure} and~\ref{lem: ration_YtoZ}, which are significantly more technical than the martingale CLT that could be applied under the monotonicity assumption. We denote the pdf of the normal distribution $\mathcal{N}(\mu, \sigma^2)$ by $p(\cdot;\mu,\sigma^2)$. To prove these lemmas, we use the following de Moivre--Laplace theorem in several sub-lemmas that follow. \begin{theorem}[de Moivre--Laplace; statement from~\citet{feller1957introduction}] \label{thm:demoivrelaplace} Let $\bm{X} \sim \text{Binomial}(t,q)$ for any $0 < q < 1$, and consider any sequence $\{k_t\}_{t \geq 1}$ such that $k_t^3/t^2 \to 0$ as $t \to \infty$. Then, for every $0 < \epsilon < 1$, there exists a $t_0$ sufficiently large such that for all $t > t_0$, we have \[ 1 - \epsilon < \frac{\bbP(\bm{X} = z)}{p(z;tq, tq(1-q))} \leq 1 + \epsilon, \text{ for all integers } qt - k_t \leq z \leq qt + k_t. \] \end{theorem} Note that Theorem~\ref{thm:demoivrelaplace} is a much sharper form of asymptotic normality than the typically stated central limit theorem, as it obtains direct control on the probability mass function itself.
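The local pmf control in Theorem~\ref{thm:demoivrelaplace} can be checked numerically. The following is a minimal illustration with illustrative parameters: on a window of width $\sqrt{t}$ (which satisfies $k_t^3/t^2 \to 0$), the binomial pmf tracks the matching normal density to within $1\%$.

```python
import math

# Hedged numerical illustration (illustrative parameters) of the
# de Moivre--Laplace local limit theorem: the Binomial(t, q) pmf is
# uniformly close to the matching normal density on a central window.
t, q = 4000, 0.4
mu, var = t * q, t * q * (1 - q)

def binom_pmf(z):
    # log-space evaluation avoids floating-point underflow for large t
    logp = (math.lgamma(t + 1) - math.lgamma(z + 1) - math.lgamma(t - z + 1)
            + z * math.log(q) + (t - z) * math.log(1 - q))
    return math.exp(logp)

def normal_pdf(x):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

window = int(math.sqrt(t))  # k_t = sqrt(t) satisfies k_t^3 / t^2 -> 0
ratios = [binom_pmf(z) / normal_pdf(z)
          for z in range(int(mu) - window, int(mu) + window + 1)]
assert all(abs(r - 1) < 0.01 for r in ratios)
```

Widening the window (e.g. to $t^{5/9}$, as used later in the proofs) still satisfies the $k_t^3/t^2 \to 0$ condition, but the approximation error grows toward the window edges.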
\subsection*{Proof of Lemma~\ref{lem:changeofmeasure}} We consider the constant step index $t_{0,b} := \max\{t_0, 4C^2/\delta^2\}$, and note that by the triangle inequality and the assumed convergence (that we are going to contradict), we have \begin{align*} |q_t - \overline{q_t}| &\leq |q_t - q^*| + |q^* - \overline{q_t}| \\ &\leq \frac{\delta}{2} + \frac{C}{\sqrt{t}} \\ &\leq \frac{\delta}{2} + \frac{\delta}{2} = \delta , \end{align*} where the last inequality holds for all $t \geq t_{0,b}$. Thus, we have $|q_t - \overline{q_t}| \leq \delta$, which we will need in order to compare the pmfs of the random variables $\bm{Y_t}$ and $\bm{Y'_t}$. Consider a fixed $t \geq t_{0,b}$. In general, relating the probability mass function of $\bm{Y_t}$, which is a Poisson binomial random variable, directly to the binomial distribution is challenging. The following technical sublemma characterizes the sequence $\{q_s\}_{s=1}^t$ that \textit{minimizes} the probability mass function $\bbP(\bm{Y_t} = z)$ for a fixed choice of $z$. This minimizing sequence takes values $q_s \in \{\overline q_t - \delta, \overline q_t, \overline q_t + \delta\}$, which turns out to be a much simpler form to analyze. \begin{sublemma}\label{lem:extremizing} Consider any step index $t \geq 1$. Then, for every $z \in \{1,\ldots, t\}$, there exists an even integer $0 \leq n_t(z)\leq t$ such that \begin{align*} \bbP(\bm{Y_t} = z) \geq \bbP(\bm{\widetilde{Y}_t} = z) , \end{align*} where $\bm{\widetilde{Y}_t} = \text{Binomial}\left(\frac{n_t(z)}{2}, \overline{q_t} + \delta\right) + \text{Binomial}\left(\frac{n_t(z)}{2}, \overline{q_t} - \delta\right) + \text{Binomial}\left(t - n_t(z), \overline{q_t}\right)$. (Here, the three random variables are independent.) \end{sublemma} \begin{proof}[Proof of Sublemma~\ref{lem:extremizing}] Let $\eta_s := q_s - \overline q_t$, for $1 \leq s \leq t$, denote the deviation of $q_s$ from the time-$t$ average $\overline{q_t}$.
Thus we have $\sum_{s = 1}^t \eta_s = 0$, and $\eta_s \in [-\delta, \delta]$ for all $s \in \{1,\ldots, t\}$. Let $e^t := \{e_1, e_2, \dots, e_t\}$, where $e_s \in \{-1, +1\}$ for all $1 \leq s \leq t$, represent the unique encoding of the outcome sequence $J^t \in \{0,1\}^t$. Let $|e^t| := |\{e_s = +1 : 1 \leq s \leq t\}|$ denote the number of positive ones in the vector $e^t$. Now, we consider $1 \leq z \leq t$. We have \begin{align*} \bbP(\bm{Y_t} = z) &= \sum_{|e^t| = z} \prod_{s = 1}^t \l( q_s \1\{e_s = +1\} + (1 - q_s)\1 \{e_s = -1\} \r)\\ &= \sum_{|e^t| = z} \prod_{s = 1}^t \l( (\overline{q_t} + \eta_s)\1\{e_s = +1\} + (1 - \overline{q_t} - \eta_s)\1 \{e_s = -1\} \r). \end{align*} On the other hand, recalling that $\bm{Y'_t} \sim \text{Binomial}(t, \overline{q_t})$, we have \begin{align*} \bbP(\bm{Y'_t} = z) &= \sum_{|e^t| = z} \prod_{s = 1}^t \l( \overline{q_t}\1\{e_s = +1\} + (1 - \overline{q_t})\1 \{e_s = -1\} \r)\\ &= \binom{t}{z} (\overline{q_t})^z (1 - \overline{q_t})^{(t - z)}. \end{align*} Thus, we get \begin{align*} \frac{\bbP(\bm{Y_t} = z)}{ \bbP(\bm{Y'_t} = z)} &= \binom{t}{z}^{-1} \sum_{|e^t| = z} \prod_{s = 1}^t \l( \frac{\overline{q_t} + \eta_s}{\overline{q_t}}\1\{e_s = +1\} + \frac{1 - \overline{q_t} - \eta_s}{1 - \overline{q_t}}\1 \{e_s = -1\} \r)\\ &= \binom{t}{z}^{-1} \sum_{|e^t| = z} \prod_{s = 1}^t \l( \l(1 + \frac{\eta_s}{\overline{q_t}}\r)\1\{e_s = +1\} + \l(1 - \frac{\eta_s}{1 - \overline{q_t}}\r)\1 \{e_s = -1\} \r). \end{align*} Let $\widehat e_s = \frac{+1}{\overline{q_t}}$ if $e_s = +1$ and $\widehat e_s = \frac{-1}{1 - \overline{q_t}}$ if $e_s = -1$. Let $\widehat e^t = \{\widehat e_1, \dots, \widehat e_t\}$ and let $|\widehat e^t| := |\{\widehat e_s = +1/\overline{q_t} : 1 \leq s \leq t\}|$. Then, we get \begin{equation} \frac{\bbP(\bm{Y_t} = z)}{ \bbP(\bm{Y'_t} = z)} = \binom{t}{z}^{-1} \sum_{|\widehat e^t| = z} \prod_{s = 1}^t (1 + \widehat e_s \eta_s) .
\end{equation} We will now lower bound the ratio ${\bbP(\bm{Y_t} = z)}/{ \bbP(\bm{Y'_t} = z)}$ over $\eta^t := \{\eta_1, \dots, \eta_t\}$ such that $\eta_s \in [-\delta, \delta]$ for all $1 \leq s \leq t$ and $\sum_{s = 1}^t \eta_s = 0$. Let $F$ denote the set of all such vectors $\eta^t$. Let \[ P(\eta^t) = \sum_{|\widehat e^t| = z} \prod_{s = 1}^t (1 + \widehat e_s \eta_s), \] for $\eta^t \in F$, and let \[ \widetilde \eta^t \in \arg \min_{\eta^t \in F} P(\eta^t). \] Note that $P(\eta^t)$ is a multinomial in $\eta_1, \dots, \eta_t$. We now show that $\widetilde \eta^t$ satisfies $\widetilde \eta_s \in \{-\delta, 0, \delta\}$ for all $1 \leq s \leq t$. First, note that if $\widetilde \eta_s \in \{-\delta, \delta \}$ for all $1 \leq s \leq t$, then we are done. If this does not hold, then without loss of generality let $\widetilde \eta_t \in (-\delta, \delta)$. Since $\sum_{s = 1}^t \widetilde \eta_s = 0$, let us substitute $\widetilde \eta_t = -\sum_{s = 1}^{t-1} \widetilde \eta_s$. We now argue that $\widetilde \eta_1 \in \{-\delta, 0, \delta \}$. We have \begin{align*} &\sum_{|\widehat e^t| = z} \prod_{s = 1}^t (1 + \widehat e_s \widetilde \eta_s) = \sum_{|\widehat e^t| = z} (1 + \widehat e_1 \widetilde \eta_1) \l(1 - \widehat e_t ( \widetilde \eta_1 + \dots + \widetilde \eta_{t-1})\r) \prod_{s = 2}^{t-1} (1 + \widehat e_s \widetilde \eta_s) \\ &= \sum_{|\widehat e^t| = z} \l(1 + \widehat e_1 \widetilde \eta_1 - \widehat e_t \widetilde \eta_1 - \widehat e_t( \widetilde \eta_2 + \dots + \widetilde \eta_{t-1}) - \widehat e_1 \widehat e_t \widetilde \eta_1^2 - \widehat e_1 \widehat e_t \widetilde \eta_1 ( \widetilde \eta_2 + \dots + \widetilde \eta_{t-1})\r) H(\widehat e_2, \dots, \widehat e_{t-1}), \end{align*} where \[ H(\widehat e_2, \dots, \widehat e_{t-1}) = \prod_{s = 2}^{t-1} (1 + \widehat e_s \widetilde \eta_s). \] Note that the above is a quadratic expression in $\widetilde \eta_1$. We now observe that the coefficient of $\widetilde \eta_1$ in this expression is zero.
Indeed, the coefficient of $\widetilde \eta_1$ is given by \begin{align*} \sum_{|\widehat e^t| = z} H(\widehat e_2, \dots, \widehat e_{t-1}) (\widehat e_1 - \widehat e_t) = 0, \end{align*} because of the symmetry in $\widehat e_1$ and $\widehat e_t$ in the above expression. A quadratic of the form $ax^2 + b$ attains its minimum on an interval $[l,h]$ either at $x = l, h$ or $x = 0$. This establishes that $\widetilde \eta_1 \in \{-\delta, 0, \delta\}$, and indeed the same argument works for every index $s \in \{1,\ldots, t - 1\}$. Moreover, we get $\widetilde \eta_t \in \{-\delta, 0, \delta\}$, as these are the only choices that can allow $\sum_{s=1}^t \widetilde \eta_s = 0$. Thus, we have established that $\widetilde \eta_s \in \{-\delta, 0, \delta\}$ for all $s \in \{1,\ldots, t\}$. Thus, there must be exactly $n_t(z)/2$ values of $s$ corresponding to $\widetilde \eta_s = \delta$, $n_t(z)/2$ values of $s$ corresponding to $\widetilde \eta_s = -\delta$, and $(t - n_t(z))$ values of $s$ corresponding to $\widetilde \eta_s = 0$. Thus, we have shown that \begin{align*} \bbP(\bm{Y_t} = z) \geq \bbP(\bm{\widetilde{Y}_t} = z), \end{align*} which completes the proof of the sublemma. \end{proof} We need one more sublemma relating the random variables $\bm{\widetilde{Y}_t}$ and $\bm{Y'_t} \sim \text{Binomial}(t,\overline{q_t})$. \begin{sublemma}\label{lem:demoivre} Let $\bm{Y_t(n)} := \text{Binomial}\left(\frac{n}{2}, \overline{q_t} + \delta\right) + \text{Binomial}\left(\frac{n}{2}, \overline{q_t} - \delta\right) + \text{Binomial}\left(t - n, \overline{q_t}\right)$ for any \textit{even} $n \in \{0,\ldots, t\}$. Then, there exists a universal constant $\epsilon_c > 0$ such that for every $z \in [q^*t - \beta \sqrt{t}, q^*t + \beta \sqrt{t}]$, we have \begin{align*} \frac{\bbP(\bm{Y_t(n)} = z)}{\bbP(\bm{Y'_t} = z)} \geq \epsilon_c > 0. \end{align*} Recall that we defined $\bm{Y'_t} \sim \text{Binomial}(t,\overline{q_t})$.
\end{sublemma} Note that Sublemma~\ref{lem:demoivre} immediately implies that \begin{align*} \frac{\bbP(\bm{\widetilde{Y}_t} = z)}{\bbP(\bm{Y'_t} = z)} = \frac{\bbP(\bm{Y_t(n_t(z))} = z)}{\bbP(\bm{Y'_t} = z)}\geq \epsilon_c > 0, \end{align*} and we have thus related the original random variable $\bm{Y_t}$ to the Binomial random variable $\bm{Y'_t}$ through the constant $\epsilon_b := \epsilon_c$, which would complete the proof of Lemma~\ref{lem:changeofmeasure}. Thus, it only remains to prove Sublemma~\ref{lem:demoivre}, which we do below. \begin{proof}[Proof of Sublemma~\ref{lem:demoivre}] Our proof will critically use the classical de Moivre--Laplace theorem, stated earlier. To see how we can apply the de Moivre--Laplace theorem to the denominator $\bbP(\bm{Y'_t} = z)$, we fix $q := \overline{q_t}$. Then, note that $z \in [tq^* - \beta \sqrt{t}, tq^* + \beta\sqrt{t}]$ and, from \eqref{eq:timeaverageoblivious}, we have $\overline q_t \in [q^* - C/\sqrt{t},q^* + C/\sqrt{t}]$. Thus, we have $$z \in [t\overline q_t - (C+\beta)\sqrt{t}, t\overline q_t + (C + \beta)\sqrt{t}].$$ Designating $C_c := (\beta + C)$, we consider the choice of sequence $\{k_t = C_c \sqrt{t}\}_{t \geq 1}$. This sequence clearly satisfies $k_t^3/t^2 \to 0$, and so we can directly apply the statement of the de Moivre--Laplace theorem to get \begin{align*} (1 - \epsilon_c) \cdot p\left(z;t \overline{q_t},t\overline{q_t}(1 - \overline{q_t})\right) \leq \bbP(\bm{Y'_t} = z) \leq (1 + \epsilon_c) \cdot p\left(z;t \overline{q_t},t\overline{q_t}(1 - \overline{q_t})\right), \end{align*} for all $t \geq t_{0,c}$ and all $qt - k_t \leq z \leq qt + k_t$. Further, we enlarge $t_{0,c}$ (replacing it by $t_{0,c}/q^*$) so that $t > t_{0,c}$ implies $q^* t > t_{0,c}$. Recall that $p(\cdot;\mu, \sigma^2)$ denotes the pdf of the normal distribution $\mathcal{N}(\mu, \sigma^2)$. There are two cases to study depending on the value that $n$ takes. The first one considers $n \leq t_{0,c}$.
Noting that $t_{0,c}$ is a constant, in this case we can directly bound the ratio of pmfs. First, we very crudely lower bound the numerator to get \begin{align*} \bbP(\bm{Y_t(n)} = z) &= \sum_{0 \leq k_1,k_2 \leq n/2,0 \leq k_3 \leq (t-n), k_1 + k_2 + k_3 = z} \binom{n/2}{k_1} \binom{n/2}{k_2} \binom{t-n}{k_3} \\ &(\overline{q_t} + \delta)^{k_1} \cdot (1 - \overline{q_t} - \delta)^{n/2 - k_1} \cdot (\overline{q_t} - \delta)^{k_2} \cdot (1 - \overline{q_t} + \delta)^{n/2 - k_2} \cdot (\overline{q_t})^{k_3} (1 - \overline{q_t})^{t - n - k_3} \\ &> \binom{t-n}{z - n} (\overline{q_t} + \delta)^{n/2} (\overline{q_t} - \delta)^{n/2} (\overline{q_t})^{z - n}(1 - \overline{q_t})^{t - z} , \end{align*} where in the last inequality we considered only the point $k_1 = k_2 = n/2, k_3 = z - n$ in the sum. (Note that this is a valid point as $z \leq t$ and $z - n \geq q^* t - C_c\sqrt{t} - t_{0,c} > 0$ for large enough $t$; here we use that $q^* t > t_{0,c}$.) On the other hand, for the denominator we have \begin{align*} \bbP(\bm{Y'_t} = z) = \binom{t}{z} (\overline{q_t})^z (1 - \overline{q_t})^{t - z} , \end{align*} and so we get, after some algebraic simplification, \begin{align*} \frac{\bbP(\bm{Y_t(n)} = z)}{\bbP(\bm{Y'_t} = z)} &> \frac{\binom{t - n}{z - n} \left(1 - \frac{\delta^2}{\overline{q_t}^2}\right)^{n/2}}{\binom{t}{z}} \\ &\geq \epsilon_c > 0 , \end{align*} where the constant $\epsilon_c$ will depend on $(q^*, \delta,t_{0,c})$, but not on $t$. Here, we have critically used $n \leq t_{0,c}$ to lower bound the term $\left(1 - \frac{\delta^2}{\overline{q_t}^2}\right)^{n/2}$ by a constant, as well as noting that \begin{align*} \frac{\binom{t-n}{z - n}}{\binom{t}{z}} &= \frac{z}{t} \cdot \frac{z - 1}{t - 1} \ldots \frac{z - n + 1}{t - n + 1} \\ &\geq \left(\frac{ z - n + 1}{t - n + 1}\right)^{n} \\ &\geq (\epsilon_c)^n \geq (\epsilon_c)^{t_{0,c}} , \end{align*} for some constant $\epsilon_c$ that is close to $q^*$.
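The binomial-coefficient ratio manipulation above can be checked directly. Below is a minimal numerical verification, with illustrative parameter values, of the telescoping identity and the resulting lower bound.

```python
from math import comb

# Hedged numerical check (illustrative parameters): C(t-n, z-n) / C(t, z)
# equals the telescoping product z/t * (z-1)/(t-1) * ... * (z-n+1)/(t-n+1),
# and is lower bounded by ((z-n+1)/(t-n+1))^n since the factors decrease.
t, z, n = 500, 200, 20

lhs = comb(t - n, z - n) / comb(t, z)
prod = 1.0
for j in range(n):
    prod *= (z - j) / (t - j)          # telescoping product, factor by factor
rhs = ((z - n + 1) / (t - n + 1)) ** n  # bound via the smallest factor

assert abs(lhs - prod) < 1e-9 * lhs
assert lhs >= rhs > 0
```

Each factor $(z-j)/(t-j)$ is decreasing in $j$ (since $z \leq t$), so the smallest factor is the last one, which gives the stated bound.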
Notice that the above crude argument does not work for the case where $n > t_{0,c}$; in particular, the case where $n$ can grow indefinitely as a function of $t$ is less trivial. For this case, we make the following claim using the de Moivre--Laplace theorem, from which the sublemma follows. \begin{claim}\label{claim:demoivre} There exists a constant $\epsilon_c \in (0,1)$ that can depend on $C_c$, but is independent of $(t, n)$, such that for $t_{0,c} \leq n \leq t$, we have \begin{align} \bbP(\bm{Y'_t} = z) &\leq (1 + \epsilon_c) \cdot p\left(z;t \overline{q_t},t\overline{q_t}(1 - \overline{q_t})\right) \label{eq:binomialdemoivre} \\ \bbP(\bm{Y_t(n)} = z) &\geq (1 - \epsilon_c) \cdot p\left(z; t \overline{q_t}, t \sigma^2(\overline{q_t}, n, t)\right) \label{eq:mixtureofbinomialsdemoivre} \end{align} where \[ \sigma^2(\overline{q_t},n,t) := \frac{1}{t} \left(\frac{n}{2}v^+(\overline{q_t};\delta) + \frac{n}{2}v^-(\overline{q_t};\delta) + (t-n) \overline{q_t}(1 - \overline{q_t})\right), \] for any $z \in [tq^* - C_c\sqrt{t}, tq^* + C_c\sqrt{t}]$. \end{claim} First, notice that Claim~\ref{claim:demoivre} directly gives us our proof for the case where $n \geq t_{0,c}$. To see this, consider the case where $\overline{q_t} > 1/2$. This gives us \begin{align*} \frac{\bbP(\bm{Y_t(n)} = z)}{\bbP(\bm{Y'_t} = z)} &\geq \frac{(1 - \epsilon_c)}{(1 + \epsilon_c)} \cdot \frac{p(z; t \overline{q_t}, t\sigma^2(\overline{q_t},n,t))}{p\left(z;t \overline{q_t},t\overline{q_t}(1 - \overline{q_t})\right)} \\ &= \frac{(1 - \epsilon_c)}{(1 + \epsilon_c)} \cdot \frac{\sqrt{2\pi \cdot t(\overline{q_t})(1 - \overline{q_t})}}{\sqrt{2\pi \cdot t\sigma^2(\overline{q_t},n,t)}} \cdot \frac{e^{-\frac{(z - t\overline{q_t})^2}{2 t \sigma^2(\overline{q_t},n,t)}}}{e^{-\frac{(z - t\overline{q_t})^2}{2 t\overline{q_t}(1 - \overline{q_t})}}} . \end{align*} First, we note that $\sigma^2(\overline{q_t}, n,t) \leq \frac{1}{4}$.
Moreover, we know that $\overline{q_t} \in [q^* - \delta, q^* + \delta]$, and so we have \begin{align*} \frac{\sqrt{2\pi \cdot t\overline{q_t}(1 - \overline{q_t})}}{\sqrt{2 \pi \cdot t\sigma^2(\overline{q_t}, n,t)}} \geq \epsilon_c > 0 , \end{align*} where $\epsilon_c$ is a constant that depends only on $\delta$. Thus, we get \begin{align*} \frac{\bbP(\bm{Y_t(n)} = z)}{\bbP(\bm{Y'_t} = z)} \geq \epsilon_c \cdot \frac{e^{-\frac{(z - t\overline{q_t})^2}{2 t\sigma^2(\overline{q_t},n,t)}}}{e^{-\frac{(z - t\overline{q_t})^2}{2 t\overline{q_t}(1 - \overline{q_t})}}} . \end{align*} Finally, we note that $z \in [tq^* - C_c\sqrt{t}, tq^* + C_c\sqrt{t}]$. Thus, to lower bound the numerator we get \begin{align*} e^{-\frac{(z - t\overline{q_t})^2}{2 t\sigma^2(\overline{q_t},n,t)}} &\geq e^{-\frac{4C_c^2 \cdot t}{2 t\sigma^2(\overline{q_t},n,t)}} \\ &\geq e^{-\frac{4C_c^2}{2\sigma^2(\overline{q_t}, n,t)}} \geq \epsilon_c > 0 , \end{align*} where we now use the fact that $\sigma^2(\overline{q_t}, n,t) \geq (\overline{q_t} + \delta)(1 - \overline{q_t} - \delta)$, which is bounded below by a positive constant because $\overline{q_t} + \delta \leq q^* + 2\delta < 1$. Note that this constant $\epsilon_c$ will depend on $(q^*, \delta, C_c)$, but is independent of $t$. For the denominator, we trivially have $e^{-\frac{(z - t\overline{q_t})^2}{2 t\overline{q_t}(1 - \overline{q_t})}} \leq 1$. Putting all of these together, we get \begin{align*} \frac{\bbP(\bm{Y_t(n)} = z)}{\bbP(\bm{Y'_t} = z)} \geq \epsilon_c > 0 , \end{align*} where $\epsilon_c$ is the product of all the above constants and thus depends on $(t_{0,c},q^*, C_c, \delta)$, but is independent of $t$. Thus, given Claim~\ref{claim:demoivre}, we have proved Sublemma~\ref{lem:demoivre}. (A symmetric argument, which we omit, also works for the case $\overline{q_t} \leq 1/2$.) It only remains to prove this claim, which we do below using the de Moivre--Laplace theorem.
\begin{proof}[Proof of Claim~\ref{claim:demoivre}] As we noted above, Equation~\eqref{eq:binomialdemoivre} follows immediately from the statement of Theorem~\ref{thm:demoivrelaplace}. To prove Equation~\eqref{eq:mixtureofbinomialsdemoivre}, we need to do a little more work, but essentially we can exploit the mixture-of-binomials structure in the random variable $\bm{Y_t(n)} := \text{Binomial}\left(\frac{n}{2}, \overline{q_t} + \delta\right) + \text{Binomial}\left(\frac{n}{2}, \overline{q_t} - \delta\right) + \text{Binomial}\left(t - n, \overline{q_t}\right)$ for any \textit{even} $n \in \{0,\ldots, t\}$. First, we consider the extreme case where the distribution is ``most different'' from $\bm{Y'_t}$, i.e., $n = t$. In this case, note that $\bm{Y_t(t)} = \bm{Y_{t,1}} + \bm{Y_{t,2}}$ where $\bm{Y_{t,1}} \sim \text{Binomial}\left(\frac{t}{2}, \overline{q_t} + \delta\right)$ and $\bm{Y_{t,2}} \sim \text{Binomial}\left(\frac{t}{2}, \overline{q_t} - \delta\right)$, and the random variables $\bm{Y_{t,1}}$ and $\bm{Y_{t,2}}$ are independent. Thus, we get \begin{align*} \bbP(\bm{Y_t(t)} = z) &= \sum_{y = 0}^{z} \bbP(\bm{Y_{t,1}} = y) \bbP(\bm{Y_{t,2}} = (z-y)) \\ &\geq \sum_{y = \frac{t}{2} \cdot (\overline{q_t} + \delta) - C_c t^{5/9}}^{\frac{t}{2} \cdot (\overline{q_t} + \delta) + C_c t^{5/9}} \bbP(\bm{Y_{t,1}} = y) \bbP(\bm{Y_{t,2}} = (z-y)) . \end{align*} Now, observe that $y \in [t/2 \cdot (\overline{q_t} + \delta) - C_ct^{5/9}, t/2 \cdot (\overline{q_t} + \delta) + C_ct^{5/9}]$, and because we have assumed that $z \in [tq^* - C_c\sqrt{t}, tq^* + C_c\sqrt{t}]$, we also have $(z-y) \in [t/2 \cdot (\overline{q_t} - \delta) - C_c t^{5/9}, t/2 \cdot (\overline{q_t} - \delta) + C_ct^{5/9}]$ for slightly adjusted constant $C_c$. Moreover, it is easy to verify that the sequence $\{k_t := C_c t^{5/9}\}_{t \geq 1}$ satisfies the conditions required for application of the de Moivre--Laplace theorem.
We denote $v^+(q;\delta) := (q + \delta)(1 - q - \delta)$ and $v^-(q;\delta) := (q - \delta)(1 - q + \delta)$ as the variances of the Bernoulli random variables with parameters $(q + \delta)$ and $(q - \delta)$ respectively. Therefore, for large enough $t \geq t_{0,c}$ (where $t_{0,c}$ will depend on $(\epsilon_c, \delta, q^*, C_c)$), for an appropriately chosen constant $\epsilon_c \in (0,1)$, and for $(y, z)$ in the specified ranges, we get \begin{align*} \bbP(\bm{Y_{t,1}} = y) &\geq (1-\epsilon_c) \cdot p\left(y;\frac{t}{2}(\overline{q_t} + \delta), \frac{t}{2}v^+(\overline{q_t};\delta)\right) \\ \bbP(\bm{Y_{t,2}} = (z-y)) &\geq (1 - \epsilon_c) \cdot p\left((z-y);\frac{t}{2}(\overline{q_t} - \delta), \frac{t}{2}v^-(\overline{q_t};\delta)\right) , \end{align*} and so we get \begin{align*} &\bbP(\bm{Y_t(t)} = z) \\ &\geq (1 - \epsilon_c)^2 \sum_{y = \frac{t}{2} \cdot (\overline{q_t} + \delta) - C_ct^{5/9}}^{\frac{t}{2} \cdot (\overline{q_t} + \delta) + C_ct^{5/9}} p\left(y;\frac{t}{2}(\overline{q_t} + \delta),\frac{t}{2}v^+(\overline{q_t};\delta)\right) \cdot p\left((z-y);\frac{t}{2}(\overline{q_t} - \delta), \frac{t}{2}v^-(\overline{q_t};\delta)\right) \\ &\stackrel{(\mathsf{i})}{\geq} (1 - \epsilon_c)^2 \cdot p\left(z;\frac{t}{2}(\overline{q_t} + \delta), \frac{t}{2}v^+(\overline{q_t};\delta)\right) \star p\left(z;\frac{t}{2}(\overline{q_t} - \delta), \frac{t}{2}v^-(\overline{q_t};\delta)\right) - 2(1 -\epsilon_c)^2 \cdot e^{-C_c t^{1/9}} \\ &= (1 - \epsilon_c)^2 \cdot p\left(z; t\overline{q_t}, \frac{t}{2}v^+(\overline{q_t};\delta) + \frac{t}{2}v^-(\overline{q_t};\delta)\right) - 2(1 - \epsilon_c)^2 \cdot e^{-C_c t^{1/9}} \\ &\stackrel{(\mathsf{ii})}{\geq} \epsilon_c (1 - \epsilon_c)^2 \cdot p\left(z; t\overline{q_t}, \frac{t}{2}v^+(\overline{q_t};\delta) + \frac{t}{2}v^-(\overline{q_t};\delta)\right) .
\end{align*} Here, inequality $(\mathsf{ii})$ follows for large enough $t \geq t_{0,c}$, noting that for the specified range of $z$, the density $p\left(z; t\overline{q_t}, \frac{t}{2}v^+(\overline{q_t};\delta) + \frac{t}{2}v^-(\overline{q_t};\delta)\right)$ is at least of order $1/\sqrt{t}$, while $e^{-C_c t^{1/9}}$ goes to $0$ faster than any polynomial in $t$. Inequality $(\mathsf{i})$ follows by noting that \begin{align*} \sum_{y > \frac{t}{2} \cdot (\overline{q_t} + \delta) + C_ct^{5/9}} p\left(y;\frac{t}{2}(\overline{q_t} + \delta),\frac{t}{2}v^+(\overline{q_t};\delta)\right) &\leq \bbP\left(\bm{W} > C_c t^{1/18}\right) \\ &\leq e^{-C_c t^{1/9}} , \end{align*} where $\bm{W}$ denotes the standard normal random variable, and we have overloaded notation in choices of $C_c$. Similarly, we have \begin{align*} \sum_{y < \frac{t}{2} \cdot (\overline{q_t} - \delta) - C_ct^{5/9}}p\left(y;\frac{t}{2}(\overline{q_t} - \delta),\frac{t}{2}v^-(\overline{q_t};\delta)\right) \leq e^{-C_c t^{1/9}} . \end{align*} From this, we get \begin{align*} &\sum_{y = \frac{t}{2} \cdot (\overline{q_t} + \delta) - C_ct^{5/9}}^{\frac{t}{2} \cdot (\overline{q_t} + \delta) + C_ct^{5/9}} p\left(y;\frac{t}{2}(\overline{q_t} + \delta),\frac{t}{2}v^+(\overline{q_t};\delta)\right) \cdot p\left((z-y);\frac{t}{2}(\overline{q_t} - \delta), \frac{t}{2}v^-(\overline{q_t};\delta)\right) \\ &\geq p\left(z;\frac{t}{2}(\overline{q_t} + \delta), \frac{t}{2}v^+(\overline{q_t};\delta)\right) \star p\left(z;\frac{t}{2}(\overline{q_t} - \delta), \frac{t}{2}v^-(\overline{q_t};\delta)\right) - 2e^{-C_c t^{1/9}} , \end{align*} and plugging this in above gives inequality $(\mathsf{i})$. Thus, we have proved Equation~\eqref{eq:mixtureofbinomialsdemoivre} for this extreme case. Let us now extend this extreme case more generally. Recall that we assumed $n \geq t_{0,c}$.
In this case, by an identical argument to the above, we get \begin{align*} \bbP(\bm{Y_{n,1}} + \bm{Y_{n,2}} = z') \geq \epsilon_c(1 - \epsilon_c)^2 \cdot p\left(z'; n\overline{q_t}, \frac{n}{2}v^+(\overline{q_t};\delta) + \frac{n}{2}v^-(\overline{q_t};\delta)\right) , \end{align*} where $\bm{Y_{n,1}} \sim \text{Binomial}(n/2, \overline{q_t} + \delta)$ and $\bm{Y_{n,2}} \sim \text{Binomial}(n/2, \overline{q_t} - \delta)$ are independent. Thus, we can utilize a similar convolution argument as before to study $\bm{Y_t(n)} := \bm{Y_{n,1}} + \bm{Y_{n,2}} + \bm{Y_{(t-n),3}}$, where $\bm{Y_{(t-n),3}} \sim \text{Binomial}((t-n), \overline{q_t})$ and is independent from $\{\bm{Y_{n,1}},\bm{Y_{n,2}}\}$. Thus, we get \begin{align*} \bbP(\bm{Y_t(n)} = z) &= \bbP(\bm{Y_{n,1}} + \bm{Y_{n,2}} + \bm{Y_{(t-n),3}} = z) \\ &\geq \sum_{z' = n \overline{q_t} - C_cn^{5/9}}^{n \overline{q_t} + C_cn^{5/9}} \bbP(\bm{Y_{n,1}} + \bm{Y_{n,2}} = z') \bbP(\bm{Y_{(t-n),3}} = (z -z')) . \end{align*} There are two cases depending on the value of $(t-n)$: \begin{enumerate} \item $(t - n) \leq t_{0,c}$. In this case, because $\bm{Y_{(t-n),3}} \in \{0,\ldots, t_{0,c}\}$, we have \begin{align*} \bbP(\bm{Y_{(t-n),3}} = (z - z')) &\geq (\min\{\overline{q_t}, 1 - \overline{q_t}\})^{(t - n)} \\ &\geq (\min\{\overline{q_t}, 1 - \overline{q_t}\})^{t_{0,c}} . \end{align*} Similarly, it is easy to verify that the normal pdf $p((z-z');(t-n)\overline{q_t}, (t-n) \overline{q_t}(1 - \overline{q_t}))$ is also bounded above by a constant $c'$, as $(z - z') \leq t_{0,c}$. Thus, the ratio of the two is bounded below by a universal constant $\epsilon_c$ \textit{ for all } $(z - z') \in \{0,\ldots,t_{0,c}\}$. \item The second case is $(t-n) \geq t_{0,c}$. Now, we note that $(z - z') \in [(t - n)\overline{q_t} - C_c\sqrt{n} - C_c\sqrt{t}, (t-n) \overline{q_t} + C_c\sqrt{n} + C_c\sqrt{t}]$, therefore, $(z - z') \in [(t-n) \overline{q_t} - C_c\sqrt{(t-n)}, (t-n) \overline{q_t} + C_c\sqrt{(t-n)}]$ for slightly adjusted constant $C_c$.
Then, since $(t - n) \geq t_{0,c}$ as well, we can apply the de Moivre--Laplace theorem to the Binomial random variable $\bm{Y_{(t-n),3}}$ and show that \begin{align*} \frac{\bbP(\bm{Y_{(t-n),3}} = (z - z'))}{p((z-z');(t-n)\overline{q_t}, (t-n) \overline{q_t}(1 - \overline{q_t}))} \geq (1 - \epsilon_c) \end{align*} for the specified range on $(z - z')$. \end{enumerate} Thus, in both cases, for appropriately chosen $\epsilon_c > 0$ we get \begin{align*} \bbP(\bm{Y_t(n)} = z) &\geq \epsilon_c \sum_{z' = n \overline{q_t} - C_c n^{5/9}}^{n \overline{q_t} + C_cn^{5/9}} p\left(z';n \overline{q_t}, \frac{n}{2}v^+(\overline{q_t};\delta) + \frac{n}{2}v^-(\overline{q_t};\delta)\right) \cdot \\ &p\left((z-z'); (t-n) \overline{q_t}, (t-n) \overline{q_t} (1 - \overline{q_t}) \right) \\ &\geq \epsilon_c^2 \cdot p\left(z;n \overline{q_t}, \frac{n}{2}v^+(\overline{q_t};\delta) + \frac{n}{2}v^-(\overline{q_t};\delta)\right) \star \\ &p\left(z; (t-n) \overline{q_t}, (t-n) \overline{q_t} (1 - \overline{q_t}) \right) \\ &= \epsilon_c^2 \cdot p\left(z;t\overline{q_t}, \frac{n}{2}v^+(\overline{q_t};\delta) + \frac{n}{2}v^-(\overline{q_t};\delta) + (t-n) \overline{q_t}(1 - \overline{q_t})\right) , \end{align*} where the second inequality uses an identical argument as earlier, again noting that $n \geq t_{0,c}$. This establishes the claim, and thus completes the proof. \end{proof} Now that we have proved Claim~\ref{claim:demoivre}, we have completed the proof of Sublemma~\ref{lem:demoivre}. \end{proof} \subsection*{Proof of Lemma~\ref{lem: ration_YtoZ}} \begin{proof} By the de Moivre--Laplace theorem, there exist a positive constant $\epsilon_d \in (0,1)$ and a sufficiently large $t_{0,d}$ such that for all $t > t_{0,d}$, we have \begin{align*} \bbP(\bm{Y'_t} = z) &\geq (1 - \epsilon_d) \cdot p\left(z;t \overline{q_t},t\overline{q_t}(1 - \overline{q_t})\right), \\ \bbP(\bm{Z'_t} = z) &\leq (1 + \epsilon_d) \cdot p\left(z;t q^*,t q^*(1 - q^*)\right).
\end{align*} Hence, we have \begin{align*} \frac{\bbP(\bm{Y'_t} = z)}{\bbP(\bm{Z'_t} = z)} &\geq \frac{(1 - \epsilon_d)}{(1 + \epsilon_d)} \cdot \frac{p(z; t \overline{q_t}, t\overline{q_t}(1 - \overline{q_t}))}{p\left(z;t{q^*},t{q^*}(1 - {q^*})\right)} \\ &= \frac{(1 - \epsilon_d)}{(1 + \epsilon_d)} \cdot \frac{\sqrt{2\pi \cdot t(\overline{q_t})(1 - \overline{q_t})}}{\sqrt{2\pi \cdot t{q^*}(1 - q^*)}} \cdot \frac{e^{-\frac{(z - t\overline{q_t})^2}{2 t \overline{q_t}(1 - \overline{q_t})}}}{e^{-\frac{(z - t{q^*})^2}{2 t{q^*}(1 - q^*)}}} . \end{align*} Now, we have \[ \frac{\sqrt{2\pi \cdot t(\overline{q_t})(1 - \overline{q_t})}}{\sqrt{2\pi \cdot t{q^*}(1 - q^*)}} \geq \min\l\{\frac{\sqrt{2\pi \cdot (q^* - \delta)(1 - q^* + \delta)}}{\sqrt{2\pi \cdot {q^*}(1 - q^*)}}, \frac{\sqrt{2\pi \cdot (q^* + \delta)(1 - q^* - \delta)}}{\sqrt{2\pi \cdot {q^*}(1 - q^*)}}\r\} > 0, \] for all $t$. This is because $\overline{q_t} \in [q^* - \delta, q^* + \delta]$, and $\overline{q_t}(1 - \overline{q_t})$, being concave over this interval, attains its minimum on the boundary. Further, we get \[ e^{-\frac{(z - t\overline{q_t})^2}{2 t \overline{q_t}(1 - \overline{q_t})}} \geq \min \l\{e^{-\frac{2C_d^2}{2 (q^* - \delta)(1 - q^* + \delta)}}, e^{-\frac{2C_d^2}{2 (q^* + \delta)(1 - q^* - \delta)}} \r\} > 0. \] Note that we have obtained bounds that do not depend on $t$. Thus, there exists a positive constant (which, overloading notation, we again denote by $\epsilon_d$) such that the statement of the lemma holds. \end{proof} \section{Main results}\label{sec:results} In this section, we prove in a series of steps that the limiting mixed strategies of two players who both use optimal, mean-based, and monotonic no-regret learning \emph{cannot} converge. In the process, we will establish some fundamental properties of no-regret strategies that hold in more general settings and may be of independent interest.
\subsection{A fundamental sensitivity to fluctuations}\label{sec:fluctuationsensitivity} We start by proving a condition that any (not necessarily mean-based or monotonic) self-agnostic no-regret strategy with rate $r \geq 1/2$ necessarily satisfies. We show that any such strategy is \emph{fundamentally} sensitive to stochastic fluctuations in the outcomes of player $2$'s strategy, in the sense that such fluctuations will cause player $1$ to deviate from her equilibrium strategy by an additive constant with a constant non-zero probability. More precisely, we show that there must be infinitely many mappings $\{f_t(\cdot)\}_{t \geq 1}$ for player $1$ that deviate in expectation by at least a constant value (say $\delta > 0$) from her NE strategy $p^*$ whenever player $2$ deviates from his NE strategy and plays action $1$ (or $0$) for a fraction of steps on the order of $t^{-(1 -r)}$. When the strategy is mean-based, a deviation by player $2$ for an $\mathcal{O}(t^{-(1-r)})$ fraction of steps corresponds to a deviation of $\mathcal{O}(t^{-(1-r)})$ in the empirical average of the actions of player $2$. Overall, this shows that achieving better regret rates requires the mappings constituting the no-regret algorithm to be more fluctuation-sensitive. This fluctuation-sensitivity is depicted in Figure~\ref{fig:sensitivity_figure} for the example of the multiplicative weights algorithm, parameterized to have the optimal no-regret rate $r = 1/2$, in the matching pennies game. The property of fluctuation-sensitivity is formally defined and proved below. \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{./Figures/sensitivity_figure} \caption{Plot of $f_t(\bm{\widehat{Q}_t})$ vs.
$\bm{\widehat{Q}_t}$ for the no-regret strategies in the Online-Mirror-Descent family with the entropy regularizer (commonly known as multiplicative weights) and log-barrier regularizer, with learning rate $\eta_t = 1/\sqrt{t}$. Here, we set $t = 10^6$.} \label{fig:sensitivity_figure} \end{subfigure}% ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/sensitivity_figure_proof_2} \caption{Response of player $1$ against the randomized sequence in Equation~\eqref{eq:randomsequence} (for multiplicative weights). The blue line plots the time-averages of player $2$, and the red line plots the iterates of player $1$. Since the $y$-axis is on linear scale, the fluctuation in the time-averages of player $2$ (which begins at $s_t := 9.99\times 10^5$) is not visible.} \label{fig:sensitivity_figure_proof} \end{subfigure} \caption{Depiction of the sensitivity of common no-regret strategies in the matching pennies game in terms of $f_t(\bm{\widehat{Q}_t})$, as a function of $\bm{\widehat{Q}_t}$ for $t = 10^6$. 
Figure~\ref{fig:sensitivity_figure} depicts the extent of sensitivity for two popular no-regret strategies, and Figure~\ref{fig:sensitivity_figure_proof} illustrates the proof strategy.}\label{fig:sensitivity} \end{figure} \begin{proposition}\label{prop:noregretsensitivity} For any self-agnostic repeated game strategy $\{f_t\}_{t \geq 1}$ used by player $1$ that is uniformly no-regret with a rate of $(r,c)$ (where $1/2 \leq r < 1$), and any $0 < \delta < (1 -p^*)/3$, there exist a positive constant $\alpha$ and an infinite sequence of integer tuples $\{(t_k,s_k)\}_{k \geq 1}$ such that \begin{equation} \label{eq:sensitivitymargin} 0 < s_k \leq \alpha (t_k)^{r}, \text{ for all } k \geq 1, \end{equation} and \begin{align}\label{eq:sensitivitycondition} \mathbb{E}\left[f_{t_k}\left(\bm{(J'(k))^{t_k}}\right)\right] \geq p^* + 2\delta \text{ for all } k \geq 1 , \end{align} where the expectation is over the random sequence $\bm{(J'(k))^{t_k}} := \{\bm{J'_s(k)}\}_{s=1}^{t_k}$ defined as below: \begin{align}\label{eq:randomsequence} \bm{J'_s(k)} = \begin{cases} \bm{J^*_s} \text{ i.i.d. } \sim \text{Bernoulli}(q^*), \text{ if } 1 \leq s \leq t_k - s_k, \\ 1 \text{ otherwise. } \end{cases} \end{align} Similarly, we also have \begin{align}\label{eq:sensitivityconditionopp} \mathbb{E}\left[f_{t_k}\left(\bm{(J''(k))^{t_k}}\right)\right] \leq p^* - 2\delta \text{ for all } k \geq 1 , \end{align} where the expectation is over the random sequence $\bm{(J''(k))^{t_k}} := \{\bm{J''_s(k)}\}_{s=1}^{t_k}$ defined as below: \begin{align}\label{eq:randomsequenceopp} \bm{J''_s(k)} = \begin{cases} \bm{J^*_s} \text{ i.i.d. } \sim \text{Bernoulli}(q^*), \text{ if } 1 \leq s \leq t_k - s_k, \\ 0 \text{ otherwise. } \end{cases} \end{align} \end{proposition} In particular, if the self-agnostic repeated game strategy $\{f_t\}_{t \geq 1}$ is \textit{optimally} uniformly no-regret, i.e.
$r = 1/2$, then condition~\ref{eq:sensitivitymargin} would be \[ 0 < s_k \leq \alpha \sqrt{t_k}, \text{ for all } k \geq 1. \] For this case, Figure~\ref{fig:sensitivity_figure_proof} shows player $1$'s response, when she uses the multiplicative weight algorithm, to an opponent who plays the random sequence defined in Equation~\eqref{eq:randomsequence}. The game that is being played is matching pennies, which is a zero-sum game with $G(0,0) = G(1,1) = 1$ and $G(0,1) = G(1,0) = -1$. Player $1$'s strategy, averaged over player $2$'s realizations, is shown to deviate sizably from the NE of the matching pennies game, $p^* = 0.5$. This case is important because, even if player $2$ remained close to her equilibrium mixed strategy $q^*$ (as is \textit{necessary} in any hypothetical scenario of last-iterate convergence), there is a non-trivial probability of player $2$'s empirical average deviating from $q^*$ by a number on the order of $1/\sqrt{t}$ at step $t$. The sensitivity in an optimal no-regret strategy to pick deviations of this order will allow us to show the oscillation of player $1$'s mixed strategies in the subsequent Sections~\ref{sec:warmup},~\ref{sec:mainresult},~\ref{sec: beyond_mean_based} and~\ref{sec:conjecture}. To execute this proof approach, we would use Equation~\eqref{eq:sensitivitycondition} if $f_t(\cdot)$ is monotonically \textit{increasing} in its argument, and Equation~\eqref{eq:sensitivityconditionopp} if $f_t(\cdot)$ is monotonically \textit{decreasing} in its argument. Henceforth, we will assume that $f_t(\cdot)$ is increasing in its argument without loss of generality. We conclude this subsection with the proof of Proposition~\ref{prop:noregretsensitivity}. \begin{proof} Recall that by the competitive $2 \times 2$ game assumption, we have $G(0,1) < G(1,1)$. Thus, for any $0 \leq p < 1$, we have $G(p,1) < G(1,1)$. Consider $\{f_t\}_{t \geq 1}$ to be any self-agnostic uniformly no-regret strategy with a no-regret rate of $(r,c)$. 
We note that for any $t$, and any sequence $\{J_s\}_{s=1}^t$, we have $$\max_{i \in \{0,1\}} \sum_{s=1}^t G(i,J_s) = t \cdot \max\{G(0,\widehat Q_t), G(1,\widehat Q_t)\}.$$ Recall that $$\widehat Q_t = \frac{1}{t} \sum_{s = 1}^t J_s.$$ Thus, for $c' > c$, there exists a sufficiently large $t_0$ such that for all $t \geq t_0$, we have \begin{equation} \label{eq: suff_large_no_regret} \frac{1}{t^{r}} \l[t \cdot \max\{G(0,\widehat Q_t), G(1,\widehat Q_t)\} - \sum_{s = 1}^t G(f_s((J)^{s-1}), J_s) \r] \leq c', \end{equation} for any sequence $\{J_s\}_{s = 1}^t$. Recall that $R^*$ denotes the Nash equilibrium payoff of player $1$. Because the equilibrium $(p^*,q^*)$ is strictly mixed, we have $R^* = G(p^*,q^*) = G(p^*, 1) < G(1,1)$. Let $0 < \delta < \frac{(1 - p^*)}{3} = \frac{(G(1,1) - R^*)}{3(G(1,1) - G(0,1))},$ and let $\delta' := \delta(G(1,1) - G(0,1))$. Note that $0 < \delta' < \frac{(G(1,1) - R^*)}{3}$. Let $\alpha := \frac{c'}{(G(1,1) - R^* - 3\delta')} > 0$. For any $t > t_1 := \max\{t_0, \alpha^{1/r} ,(\alpha \delta'/G(1,1))^{-1/r}\}$, let $t^*(t) := t - \lfloor \alpha t^{r} \rfloor,$ where $\lfloor \cdot \rfloor$ is the floor function. Note that since $t > \alpha^{1/r}$, we have $t^*(t) \geq 1$. Let $\{\bm{J_s^*}\}$ be an i.i.d. sequence of Bernoulli($q^*$) random variables for $1 \leq s \leq t$. Let $\bm{\widehat Q_s^*} := \frac{1}{s} \sum_{s' = 1}^s \bm{J_{s'}^*}$ denote the empirical average of this sequence at step $s$. We state the following useful lemma. \begin{lemma} \label{lem: martingale_prop} For the sequence $\{\bm{J_s^*}\}_{s \geq 1}$ defined above, and any $t \geq 1$, we have \begin{subequations} \begin{align} \mathbb{E}\left[\sum_{s = 1}^{t} G(f_s(\bm{(J^*)^{s-1}}), \bm{J_s^*})\right] &= t R^* \text{ and } \label{eq:Gteq}\\ \mathbb{E}\left[\sum_{s=1}^{t} \bm{J_s^*}\right] &= t q^* \label{eq:Jteq}. \end{align} \end{subequations} \end{lemma} See Appendix~\ref{sec: martingaleproof} for the proof of this lemma.
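The identity in Equation~\eqref{eq:Gteq} ultimately rests on the indifference property of a strictly mixed equilibrium: $G(p, q^*) = R^*$ for every $p \in [0,1]$, so any strategy earns exactly $tR^*$ in expectation against i.i.d.\ Bernoulli($q^*$) play. A minimal numerical check of this property, using the matching pennies payoffs as an illustrative assumption:

```python
# Sanity check of the indifference property behind Eq. (eq:Gteq): at a
# strictly mixed NE, G(p, q*) = R* for every p, so any strategy earns
# t * R* in expectation against i.i.d. Bernoulli(q*) play.
def G(p, q):
    # Expected payoff to player 1 in matching pennies:
    # G(0,0) = G(1,1) = 1 and G(0,1) = G(1,0) = -1.
    return p * q + (1 - p) * (1 - q) - p * (1 - q) - (1 - p) * q

q_star, R_star = 0.5, 0.0  # mixed NE and equilibrium payoff of matching pennies
for p in [0.0, 0.25, 0.5, 0.9, 1.0]:
    assert abs(G(p, q_star) - R_star) < 1e-12
```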
We define the sequence $\{\bm{J_s'}\}_{s=1}^{t}$ as specified in the statement of Proposition~\ref{prop:noregretsensitivity}. In other words, we define $\bm{J_s'} = \bm{J^*_s}$ for $1 \leq s \leq t^*(t)$, and $\bm{J_s'} = 1$ for $t^*(t) < s \leq t$. Then, we can denote the empirical average of this sequence as $\bm{\widehat Q_s'} := \frac{1}{s} \sum_{s' = 1}^s \bm{J_{s'}'}$ for $1 \leq s \leq t$. We denote \begin{align} \label{eq: M_def} M := \frac{1}{\lfloor \alpha t^{r} \rfloor} \cdot \mathbb{E}\left[\sum_{s = 1}^{\lfloor \alpha t^{r} \rfloor} G\left(f_{t^*(t) + s}\left(\bm{(J')^{t^*(t) + s - 1}}\right),1\right)\right]. \end{align} From the definition of uniform no-regret, i.e. Equation~\eqref{eq: suff_large_no_regret}, we have \begin{align*} c' &\geq \frac{1}{t^{r}} \cdot \mathbb{E} \l[t \max\{G(0,\bm{\widehat Q_t'}), G(1,\bm{\widehat Q_t'})\} - \sum_{s = 1}^{t} G(f_s(\bm{(J')^{s-1}}), \bm{J_s'}) \r] \\ &\geq \frac{1}{t^{r}} \cdot \mathbb{E} \l[t \cdot G(1,\bm{\widehat Q_t'}) - \sum_{s = 1}^{t} G(f_s(\bm{(J')^{s-1}}), \bm{J_s'}) \r] \\ &= \frac{1}{t^{r}} \cdot \l[ t^*(t) \cdot R^* + \lfloor \alpha t^{r} \rfloor \cdot G(1,1) - t^*(t) \cdot R^* - \lfloor \alpha t^{r} \rfloor \cdot M \r] \\ &\geq (G(1,1) - M) \alpha - G(1,1) t^{-r}. \end{align*} Using the fact that $\alpha := \frac{c'}{(G(1,1) - R^* - 3\delta')}$ and $t > \left(\frac{\alpha \delta'}{G(1,1)}\right)^{-1/r}$, we get \begin{align*} M \geq G(1,1) - \frac{c'}{\alpha} - \frac{G(1,1) \cdot t^{-r}}{\alpha} = R^* + 3\delta' - \frac{G(1,1) \cdot t^{-r}}{\alpha} \geq R^* + 2\delta' . \end{align*} Now, using linearity of expectation, and linearity of the payoff function $G(p,1)$ in the argument $p$, we get \begin{align*} M &= G(\overline{f}, 1) \text{ where } \\ \overline{f} &:= \frac{1}{\lfloor \alpha t^r \rfloor} \sum_{s = 1}^{\lfloor \alpha t^r \rfloor} \mathbb{E}\left[f_{t^*(t) + s}\left(\bm{(J')^{t^*(t) + s - 1}}\right)\right] .
\end{align*} Now, since $G(1,1) > G(0,1)$, we note that $G(p,1) = G(0,1) + (G(1,1) - G(0,1)) p$ is an increasing function in $p$ and so we get \begin{align*} \overline{f} \geq p^* + \frac{2\delta'}{G(1,1) - G(0,1)} = p^* + 2 \delta, \end{align*} from the above inequality on $M$. Thus, there exists $s(t)$ such that $1 \leq s(t) \leq \lfloor \alpha t^{r} \rfloor$ and \begin{align}\label{eq:sensitivitycondition2} \bbE \l[ f_{t^*(t) + s(t)}\left(\bm{(J')^{t^*(t) + s(t) - 1}}\right) \r] \geq p^* + 2\delta. \end{align} To write this in the language of Equation~\eqref{eq:sensitivitycondition}, we observe that \begin{align*} 0 < \frac{s(t)}{t^*(t) + s(t)} \leq \frac{\lfloor \alpha t^{r} \rfloor}{t^*(t) + \lfloor \alpha t^{r} \rfloor} \leq \frac{\alpha t^{r}}{t} = \alpha t^{r-1} \leq \alpha (t^*(t) + s(t))^{r-1}, \end{align*} and therefore we get, \begin{align}\label{eq:stinequality} s(t) \leq \alpha (t^*(t) + s(t))^{r} . \end{align} We note that $t^*(t) \to \infty$ as $t \to \infty$, and hence we can define an infinite sequence of integer tuples $\{t_k, s_k\}_{k \geq 1}$ (where we have defined $t_k = t^*(t_1 + k) + s_k$ and $s_k = s(t_1 + k)$ as above) such that $0 < s_k \leq \alpha (t_k)^{r}$ and \begin{align} \label{eq: twisted_response_exp} \mathbb{E}\left[f_{t_k}\left(\bm{(J'(k))^{t_k}}\right)\right] \geq p^* + 2\delta \text{ for all } k = 1,2, \ldots \end{align} This is precisely the statement in Equation~\eqref{eq:sensitivitycondition}, and completes the proof of Proposition~\ref{prop:noregretsensitivity}. \end{proof} \subsection{Warmup: Last-iterate oscillation when opponent is already at equilibrium}\label{sec:warmup} Equation~\eqref{eq:sensitivitycondition} highlights a critical property of any self-agnostic uniformly no-regret algorithm with a regret rate of $(r,c)$: it needs to be sufficiently sensitive to small perturbations on the order of $t^r$ in the opponent's strategy. We can concretize this property to show that the \emph{last-iterate oscillates} (i.e. 
does not converge almost surely) when both players use optimal no-regret strategies (i.e. $r = 1/2$) that are mean-based (Definition~\ref{as:meanbased}) and monotonic (Definition~\ref{as:monotonic}). Recall that under the mean-based assumption, player $1$'s strategy functions are given by $f_t(\bm{(J)^{t-1}}) := f_t(\bm{\widehat{Q}_{t-1}}) \text{ for all } t \geq 1$. In this section, we state and prove a warm-up result that clearly illustrates how the stochasticity in realizations \textit{alone} can lead to last-iterate oscillation. We consider the idealized case in which player $2$ is playing his equilibrium strategy at all steps, i.e. $\{\bm J_t\}_{t \geq 1} \text{ i.i.d} \sim \text{Bernoulli}(q^*)$. Remarkably, we show that even this simple case\footnote{This is in stark contrast to the setting of telepathic dynamics, in which a simple algorithm like multiplicative weights used in the matching pennies game would trivially lead player $1$ to converge to the NE strategy, $p^* = 1/2$.} necessitates the limiting mixed strategy of player $1$ to diverge! The central reason for this is as follows: if player $2$ is playing the mixed NE $q^*$ at all his steps, then the time-averages of his realized actions will fluctuate on the order of $t^{-1/2}$ infinitely often as well, leading to a $\delta$-deviation of player $1$ from her NE strategy $p^*$ with a positive probability via Proposition~\ref{prop:noregretsensitivity}. Figure~\ref{fig:average_iterates_figure} (which should remind the reader of the fluctuations of a symmetric random walk) depicts\footnote{This figure was inspired by Dean P. Foster's illustration of the law of the iterated logarithm: \url{https://en.wikipedia.org/wiki/Law_of_the_iterated_logarithm}} these recurring fluctuations on the order of $t^{-1/2}$ of player $2$'s time-averaged actions for a typical realization of player $2$ playing mixed strategy $q^* = 1/2$ at all steps. The ensuing last-iterate oscillation is stated and proved below. 
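Before the formal statement, this mechanism can be observed in a minimal simulation sketch: multiplicative weights with learning rate $\eta_t = 1/\sqrt{t}$ in matching pennies against an i.i.d.\ Bernoulli($q^*$) opponent. All parameters here are illustrative assumptions, not part of the formal construction.

```python
import math
import random

def simulate(T=200_000, q_star=0.5, seed=0):
    """Multiplicative weights (eta_t = 1/sqrt(t)) for player 1 in matching
    pennies, against an opponent playing i.i.d. Bernoulli(q_star)."""
    rng = random.Random(seed)
    # Cumulative payoffs of the two pure actions; payoff of action i
    # against action j is +1 if i == j else -1 (matching pennies).
    cum = [0.0, 0.0]
    max_dev = 0.0
    for t in range(1, T + 1):
        eta = 1.0 / math.sqrt(t)
        # Mixed strategy P_t: softmax of the cumulative payoffs.
        w1 = math.exp(eta * (cum[1] - cum[0]))
        p_t = w1 / (1.0 + w1)
        j = 1 if rng.random() < q_star else 0
        cum[0] += 1.0 if j == 0 else -1.0
        cum[1] += 1.0 if j == 1 else -1.0
        max_dev = max(max_dev, abs(p_t - 0.5))
    return max_dev

# The maximal deviation of P_t from p* = 1/2 is typically a non-vanishing
# constant, consistent with the last-iterate oscillation discussed above.
print(simulate())
```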
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{./Figures/average_iterates_figure} \caption{Depiction of constant fluctuations of player $2$'s time-averages, i.e. $\bm{\widehat{Q}_t}$, as a function of $t$, when player $2$ plays $\bm{Q_t} \text{ i.i.d } \sim q^*$.}\label{fig:average_iterates_figure} \end{figure} \begin{theorem}\label{thm:warmup} Let player $2$'s strategy $\{\bm{J_t}\}_{t \geq 1}$ be an i.i.d. sequence of Bernoulli($q^*$) random variables. Then, any mean-based and monotonic%
\footnote{We prove this theorem without the monotonicity assumption in Proposition~\ref{prop:warmup_ext}.}
repeated game strategy $\{f_t\}_{t \geq 1}$ that has a regret rate of $(1/2,c)$ causes player $1$'s last iterate to diverge in probability, i.e. there exist positive constants $(\delta,\epsilon)$ such that \begin{align} \label{eq: liminf_prob_dev_response} {\lim \sup}_{t \to \infty} \bbP\left[|\bm{P_{t}} - p^*| \geq \delta \right] \geq \epsilon . \end{align} \end{theorem} The proof of Theorem~\ref{thm:warmup} constitutes an application of Markov's inequality and the central limit theorem, and is provided below.
\input{tex/proof_warmup}
The mean-based assumption underlies the broad family of Online-Mirror-Descent algorithms that satisfy the external-no-regret property. More generally, strategies that use an appropriate mean of the past history of outcomes are among the earliest algorithms satisfying related properties\footnote{Note, however, that most algorithms that satisfy these latter properties are not self-agnostic, and so we do not study their properties here.} like Blackwell approachability~\citep{blackwell1956}, internal no-regret~\citep{blum2007external} and calibration~\citep{foster1998asymptotic}. Moreover, our techniques only require strategies to be mean-based in an approximate sense: in Section~\ref{sec:optimism} we show that essentially the same results hold when the players use variants of this class of strategies that incorporate a \textit{recency bias}.
\subsection{Main result: Last-iterate oscillations when \textit{both} players use no-regret}\label{sec:mainresult}
The case for which player $2$ is \textit{already} at equilibrium serves to isolate the ramifications of the stochasticity in realizations. While the day-to-day behavior of players is clearly quite different when both players are using popular no-regret strategies (see Figures~\ref{fig:MPoptimalfixedNE} and~\ref{fig:MPoptimalstochastic} in Section~\ref{sec:simulations} for a comparison), we show below that the inherent stochasticity in realizations continues to be the dominating factor in last-iterate oscillations. \begin{theorem}\label{thm:lastiteratedivergence} If both players $1$ and $2$ use mean-based and monotonic, optimal no-regret repeated game strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$, respectively, then the pair of mixed strategies of both the players $(\bm{P_t}, \bm{Q_t})$ cannot converge almost surely.
\end{theorem} \input{tex/proof_martingale} \subsection{Extensions beyond the mean-based assumption}\label{sec:optimism} \label{sec: beyond_mean_based} In this section, we examine the fidelity of our last-iterate divergence results beyond exact mean-based strategies (Assumption~\ref{as:meanbased}). While the mean-based assumption is fairly strong, it is worth noting that mean-based strategies underlie the design of almost all no-regret algorithms in practice. Moreover, as we will now show, the essence of our results continues to hold even for algorithmic variants of mean-based strategies that are ubiquitous in the online learning and games literature. One of the most common such variants incorporates a form of recency bias, colloquially called ``optimism''. We define the broad class of recency-bias strategies below. \begin{definition}\label{def:krecencybias} The class of $\ell$-recency-bias strategies is defined as follows: \begin{align*} f_t(\bm{J^{t-1}}) := f_t(\bm{\widehat{Q}^\ell_{t-1}}) \text{ where } \\ \bm {\widehat{Q}^\ell_t} := \frac{1}{t} \left(\sum_{s=1}^{t} \bm{J_s} + \sum_{j = 1}^{\ell} r_j \bm{J_{t - j + 1}}\right) , \end{align*} and $\{r_j\}_{j=1}^\ell$ are positive integers taking values in $\{1,\ldots, \ell\}$. \end{definition} Note that the class of $0$-recency-bias strategies is precisely the class of mean-based strategies, and $1$-recency-bias strategies with $r_1 = 1$ constitute the class of \textit{optimistic} mean-based strategies, since they use $\sum_{s=1}^t \bm{J_s} + \bm{J_t}$ as the summary statistic. As mentioned in Section~\ref{sec: intro}, the study of the last iterate of optimism-based strategies has generated a lot of interest in the optimization literature~\citep{daskalakis2018training,mertikopoulos2018optimistic,liang2019interaction,abernethy2019last,lei2020last}; moreover, these strategies are known to yield faster time-averaged convergence~\citep{rakhlin2013optimization,syrgkanis2015fast}.
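To make Definition~\ref{def:krecencybias} concrete, the following sketch (our own illustration; the function name is ours) computes the recency-biased summary statistic $\bm{\widehat{Q}^\ell_t}$: an empty weight list recovers the plain empirical average used by mean-based strategies, and $\ell = 1$, $r_1 = 1$ gives the optimistic statistic $\frac{1}{t}(\sum_{s=1}^t \bm{J_s} + \bm{J_t})$.

```python
def recency_biased_stat(J, r):
    """l-recency-biased summary statistic from the definition above.

    J: realized actions J_1, ..., J_t of the opponent (each 0 or 1).
    r: recency weights r_1, ..., r_l; an empty list recovers the plain
       empirical average, i.e. the mean-based summary statistic.
    """
    t = len(J)
    base = sum(J)                                 # sum_{s=1}^t J_s
    # r_j multiplies J_{t-j+1} (1-indexed), i.e. J[t-j] in 0-indexed Python.
    bias = sum(r_j * J[t - j] for j, r_j in enumerate(r, start=1))
    return (base + bias) / t                      # may exceed the empirical mean
```

For example, on the history $J = (1, 0, 1)$ the mean-based statistic is $2/3$ while the optimistic one ($r = (1)$) is $1$; unlike the empirical average, the biased statistic can exceed $1$.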
Most recently, it was shown that the last iterates of the players' strategies in the setting of telepathic dynamics (which arise when players use each other's mixtures to update their strategies) converge~\citep{daskalakis2018last}. In the more realistic realization model, the following result shows that the ensuing stochasticity \textit{alone} causes recency-bias strategies to diverge. \begin{theorem}\label{thm:optimism} Assume that player $2$'s strategy $\{\bm{J_t}\}_{t \geq 1}$ is an i.i.d. sequence of Bernoulli($q^*$) random variables. Then, any $\ell$-recency-bias strategy $\{f_t\}_{t \geq 1}$, as defined in Definition~\ref{def:krecencybias}, that has a regret rate of $(1/2,c)$ causes player $1$'s last iterate to diverge, i.e. there exist positive constants $(\delta,\epsilon)$ such that \begin{align} \label{eq: liminf_prob_dev_response_optimism} {\lim \sup}_{t \to \infty} \bbP\left[|\bm{P_{t}} - p^*| \geq \delta \right] \geq \epsilon . \end{align} \end{theorem} The proof of Theorem~\ref{thm:optimism} is a relatively simple variant of the proof of Theorem~\ref{thm:warmup}, with slightly more involved algebraic calculations of the relevant probability mass functions owing to the recency bias. The full proof is contained in Appendix~\ref{sec: optimismproof}. It is worth noting that the bounded memory of the recency bias, as well as the bounded increments $\{r_j\}_{j=1}^{\ell}$, are critical to the argument. It is plausible that stronger recency biases that grow with the number of steps $t$ could lead to different last-iterate behavior; however, such stronger recency biases could also break the no-regret property. \subsection{Open questions and conjectures}\label{sec:lastiteratedivergence} We now ask what happens when both players use mean-based, optimal no-regret repeated game strategies, given by the sequences of functions $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ respectively.
This generates a probability distribution on the set of sequences $\{I_t, J_t\}_{t \geq 1}$. Suppose that we could show that the induced distribution on the empirical average of player $2$, i.e. $\bm{\widehat Q_t}$, were similar to a (normalized) Binomial random variable, i.e. $\bm{Z_t}/t$, in the following sense: \begin{align} \label{eq: req_measure_ratio_prop} \frac{\bbP(t\bm{\widehat Q_t} = z)}{\bbP(\bm{Z_t} = z)} \geq \epsilon_1 \text{ for all } z \in [tq^* - \beta' \sqrt{t}, tq^* + \beta \sqrt{t}], \end{align} for some $\epsilon_1 > 0$ and all $t$ sufficiently large (here, $\beta$ is as defined in the previous subsection, and $\beta' > 0$ is a suitable constant). Then, using the same ``change-of-measure'' argument as in the proof of Theorem~\ref{thm:warmup}, we would get $\bbP\left[\bm{P_{t_k}} \geq p^* + \delta \right] \geq (\epsilon_1 \epsilon)/2$ for all $k$ sufficiently large. We conjecture that when the strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ are mean-based and satisfy optimal no-regret, property~\eqref{eq: req_measure_ratio_prop} holds. This in turn would imply the following remarkable result. \begin{conjecture}\label{con:lastiteratedivergence} If both players $1$ and $2$ use self-agnostic, mean-based repeated game strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$, respectively, that are uniformly no-regret and each have a regret rate of $(1/2,c)$, then the pair of mixed strategies of the two players $(\bm{P_t}, \bm{Q_t})$ diverges with positive probability. \end{conjecture} Observe that it suffices to show divergence of the pair $(\bm{P_t}, \bm{Q_t})$ from the equilibrium $(p^*, q^*)$ alone, with positive probability.% \footnote{This is because if a sequence $(\bm{P_t}, \bm{Q_t})_{t \geq 1}$ did converge to some other point $(p, q) \neq (p^*, q^*)$, then the time-averages $(\bm{\hat P_t}, \bm{\hat Q_t})_{t \geq 1}$ would have to converge to $(p,q)$ as well.
However, we know that the time-averages have to converge to the equilibrium $(p^*, q^*)$ almost surely; thus the event that the sequence $(\bm{P_t}, \bm{Q_t})_{t \geq 1}$ converges to some point other than the equilibrium has zero probability.} In particular, it suffices to show that there exist positive constants $(\delta,\epsilon)$ such that \begin{subequations}\label{eq:lastiteratedivergence} \begin{align} {\lim \sup}_{t \to \infty} \bbP\left[|\bm{P_{t}} - p^*| \geq \delta \right] &\geq \epsilon \text{ or } \label{eq:lastiteratedivergence1} \\ {\lim \sup}_{t \to \infty} \bbP\left[|\bm{Q_{t}} - q^*| \geq \delta \right] &\geq \epsilon .\label{eq:lastiteratedivergence2} \end{align} \end{subequations} While we do not prove this conjecture, let us provide some intuition for it. Suppose player $2$'s strategy results in him using a sequence of mixed strategies $\{\bm q_t\}_{t \geq 1}$ on a given instance of the randomness in the two players' strategies. If this sequence does not converge to $q^*$, then we cannot have last-iterate convergence and we are done. The more interesting case is when the sequence $\{\bm q_t\}_{t \geq 1}$ converges to $q^*$. Now, as a consequence of the optimal no-regret strategies of both players, one can show that after sufficiently many steps the empirical average of player $2$'s action realizations is concentrated in an interval of length $\mathcal{O}(1/\sqrt{t})$ around $q^*$ at step $t$. Also, player $2$ will be playing mixed actions $q_t$ close to $q^*$ for all $t$ sufficiently large. In particular, his mixed actions will be bounded away from the pure actions, since his NE strategy satisfies $q^* \in (0,1)$. As a result, it is unlikely that player $2$ would be able to exert precise control over the empirical average of his realized actions. Thus, we would expect the condition in Equation~\eqref{eq: req_measure_ratio_prop}, which we call the \emph{shaky-hands} property, to hold. This property is formally defined below.
{\bf Shaky-hands property:} The mean-based strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ are such that for any $\gamma > 0$, there exist an $\epsilon > 0$ and a $T > 0$ such that \begin{align*} \frac{\bbP(t\bm{\hat Q_t} = z)}{\bbP(\bm{Z_t} = z)} \geq \epsilon \text{ for all } z \in [tq^* - \gamma \sqrt{t}, tq^* + \gamma \sqrt{t}], \end{align*} for all $t \geq T$. The following proposition, whose proof is a straightforward consequence of the argument in the proof of Theorem~\ref{thm:warmup}, shows that strategies satisfying this shaky-hands property automatically cause last-iterate divergence. \begin{proposition} If the mean-based strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ satisfy the shaky-hands property, and the strategy $\{f_t\}_{t \geq 1}$ of player $1$ has a regret rate of $(1/2,c)$, then player $1$'s last iterates diverge from the equilibrium strategy $p^*$, i.e. there exist positive constants $(\delta,\epsilon_0)$ such that \begin{align} {\lim \sup}_{t \to \infty} \bbP\left[|\bm{P_{t}} - p^*| \geq \delta \right] \geq \epsilon_0 . \end{align} \end{proposition} \begin{proof} First, we carry over steps directly from the proof of Theorem~\ref{thm:warmup}, where we showed that \[ \bbP \l(f_{t_k} \l(\frac{\bm{{Z}_{t_k}}}{t_k} \r) \geq p^* + \delta, \bm{Z_{t_k}} \leq q^* \cdot t_k + \beta \sqrt{t_k} \r) \geq \epsilon, \] for all $k$ sufficiently large (here, $\epsilon$ is as defined in the previous subsection). Let $\beta' > 0$ be such that \[ \bbP \l(\bm{Z_t} < q^* t - {\beta'}{\sqrt{t}}\r) \leq \frac{\epsilon}{2}, \] for all sufficiently large $t$. Recall that $\bm Z_t \sim \text{Binomial}(t, q^*)$, and hence such a $\beta'$ exists.
Then, we have \begin{equation} \label{eq: prob_f_Z_int} \bbP \l(f_{t_k} \l(\frac{\bm{{Z}_{t_k}}}{t_k} \r) \geq p^* + \delta, \bm{Z_{t_k}} \in [q^* t_k - \beta' \sqrt{t_k}, q^* \cdot t_k + \beta \sqrt{t_k}] \r) \geq \frac{\epsilon}{2}, \end{equation} for sufficiently large $k$, where $\beta$ is defined as in the proof of Theorem~\ref{thm:warmup}. The bound~\eqref{eq: req_measure_ratio_prop} then follows from the shaky-hands property by taking $\gamma = \max\{\beta', \beta\}$; applying the change-of-measure argument with $\epsilon_0 = \epsilon \epsilon_1/2$, we get the required result. \end{proof} \subsubsection{A comment on adaptive strategies} Unfortunately, proving that player $2$'s iterates satisfy the shaky-hands property when both players use optimal-no-regret strategies is non-trivial. The difficulty is primarily technical: the realizations of player $2$ are not mutually independent due to player $2$'s adaptivity. However, the iterates $\{\bm{Q_t}\}_{t \geq 1}$ do \textit{marginally} satisfy certain favorable conditions that would result in the shaky-hands property if the realizations were independent. To capture this, Theorem~\ref{thm:lastiteratedivergenceoblivious} in Appendix~\ref{sec:fixedconvergent} shows that the shaky-hands property holds for \textit{a-priori fixed} sequences $\{q_t\}_{t \geq 1}$ satisfying these conditions. This implies last-iterate divergence of player $1$ when player $2$ plays non-adaptively. More generally, nothing in our techniques precludes dependencies across realizations, as long as the probability mass function of player $2$'s sum of realizations at any step sufficiently resembles the probability mass function of a sum of mutually independent coin tosses. Thus, we believe that Theorem~\ref{thm:lastiteratedivergenceoblivious} implies last-iterate divergence for the full stochastic dynamical system.
However, the generality of our algorithmic framework and the minimal assumptions made on the strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ will necessitate new mathematical techniques to formally prove this result. {Under an additional natural monotonicity assumption on $\{f_t\}_{t \geq 1}$ (that each $f_t$ is increasing in its argument), we show in the sequel that the ``shaky-hands'' property is replaced by a condition on the cumulative distribution function of the empirical averages that is technically easier to reason about. Then, we can show last-iterate divergence of the entire dynamical system. Showing this last-iterate divergence in the absence of this monotonicity assumption, however, remains an intriguing open direction for future work.} \section{Notation conventions for proofs}\label{sec:notationconvention} Before presenting the full proofs of our statements, we state our notation conventions for the appendix. We designate constants that take a value in $(0,1)$ by $\epsilon$, and strictly positive, finite constants by $C \in (0,\infty)$. Moreover, we designate $t_0$ as a lower bound on $t$ above which all our statements apply. In general, these constants can depend on the parameters of the game, either directly or through the equilibrium strategies $(p^*,q^*)$. For ease of exposition, we also subscript these constants with letters $\{a,b,\ldots\}$ corresponding to the lemmas in which they appear and are used. Thus, for example, in the first lemma the constants are denoted $\{\epsilon_a,C_a\}$ and the lower bound on $t$ is denoted $t_{0,a}$. While in general we will overload notation within a lemma for our choice of constants, we will be explicit about manipulations where possible. \section{Proof of Lemma~\ref{lem: martingale_prop}} \label{sec: martingaleproof} We consider the distribution of \textit{mutually independent} coin tosses, \begin{align}\label{eq:Jtunif} \bm{J_t} \text{ i.i.d.
} \sim \text{Bernoulli}(q^*) , \end{align} and denote the expectation of quantities under this probability distribution by $\mathbb{E}[\cdot]$. Recall that $q^*$ is the Nash equilibrium strategy of player $2$. By linearity of expectation, the second statement is immediate, i.e. \begin{align*} \mathbb{E}\l[\sum_{t=1}^{T} \bm{J_t}\r] = T q^*. \end{align*} To show the first statement, recall that $\bm{J_t} \perp \bm{(J)^{t-1}}$ for all $t \in \{1,\ldots, T\}$ due to mutual independence. Thus, we use the law of iterated expectations to get \begin{align*} &\mathbb{E}\l[\sum_{t=1}^{T} G(f_t(\bm{(J)^{t-1}}),\bm{J_t})\r]\\ &= \mathbb{E}\l[\sum_{t=1}^{T} \mathbb{E}\l[f_t(\bm{(J)^{t-1}}) \cdot G(1,\bm J_t) + (1 - f_t(\bm{(J)^{t-1}}))G(0,\bm J_t) \Big{|} \bm{(J)^{t-1}} \r] \r] \\ &= \mathbb{E}\l[\sum_{t=1}^{T} f_t(\bm{(J)^{t-1}}) \cdot G(1,q^*) + (1 - f_t(\bm{(J)^{t-1}})) \cdot G(0,q^*) \r] \\ &= T R^*, \end{align*} where the last equality follows from $G(0,q^*) = G(1,q^*) = G(p^*,q^*) = R^*$. This completes the proof. \qed \section{Proof of Claim~\ref{clm: conv_as_frac}} \label{sec: claim_as_fracconv} Let $\Omega$ denote the sample space generated by $\{\bm{P}_t,\bm{Q}_t\}_{t \geq 1}$ and the strategies $\{f_t(\cdot),g_t(\cdot)\}_{t \geq 1}$, let $\omega \in \Omega$ denote any realization, and let $(\bm{P}_t(\omega),\bm{Q}_t(\omega))$ denote the corresponding mixed strategies at time $t$. Then, $\bm{Q}_t \to q^*$ a.s. implies that \begin{align*} \bbP\left(\omega \in \Omega: \lim_{t \to \infty} \bm{Q}_t(\omega) = q^* \right) = 1 . \end{align*} Thus, we get \begin{align*} \bbP\left(\omega \in \Omega: \lim_{k \to \infty} \bm{Q}_{t_k}(\omega) = q^* \right) = 1 \end{align*} for any subsequence $\{t_k\}_{k \geq 1}$.
This implies, by the definition of a limit, that for every $\omega$ such that $\lim_{t \to \infty} \bm{Q}_{t}(\omega) = q^*$, there exists finite $t_0(\omega)$ such that \begin{align*} \bm{Q}_{t}(\omega) \in \left[\frac{q^*}{2},\frac{q^* + 1}{2} \right] \text{ for all } t \geq t_0(\omega) . \end{align*} Let $k_0(\omega) := \min\{k: t_k \geq t_0(\omega)\}$. This directly gives us \begin{align*} {\lim \inf}_{k \to \infty} \frac{1}{t_k} \Big{|} \left\{s \leq t_k: \frac{q^*}{2} \leq \bm{Q}_s(\omega) \leq \frac{q^* + 1}{2}\right\} \Big{|} \geq {\lim \inf}_{k \to \infty} \frac{t_k - t_{k_0(\omega)}}{t_k} = 1 . \end{align*} Thus, we get \begin{align*} \bbP\left(\omega \in \Omega: {\lim \inf}_{k \to \infty} \frac{1}{t_k} \Big{|} \left\{s \leq t_k: \frac{q^*}{2} \leq \bm{Q}_s(\omega) \leq \frac{q^* + 1}{2}\right\} \Big{|} = 1 \right) = 1 , \end{align*} which completes the proof. \qed \section{Conclusion and future work} \label{sec: concl} In this paper, we have shown partial but compelling evidence for a fundamental tension between the guarantees of no-regret and last-iterate convergence on uncoupled dynamics that use the opponents' realizations alone as feedback. Perhaps the most important immediate question to address is whether last-iterate oscillations occur for strategies not satisfying monotonicity; in particular, whether Conjecture~\ref{con:lastiteratedivergence} is true. Additionally, we can ask whether the mean-based nature of the strategies is truly needed for our impossibility result. While we did show that our results are quite robust to inexact versions of the mean-based assumption (through Theorem~\ref{thm:optimism} and Corollary~\ref{cor:extension}), whether they also hold for strategies that significantly deviate from the class of mean-based strategies is an intriguing question. Section~\ref{sec:simulations} provided preliminary empirical evidence that last-iterate oscillation can occur even when no-regret strategies with suboptimal rates are used. 
Our techniques break down in the face of suboptimal no-regret rates, so it is interesting to ponder whether last-iterate oscillation happens for general suboptimal no-regret algorithms, or if it is just a property of the particular algorithms that were simulated. Finally, while last-iterate convergence may seem like a strong requirement in the realization-based repeated game model, we note that there are non-trivial classes of strategies that can be shown to satisfy it. Of particular interest are the \textit{smoothly calibrated} strategies proposed by~\citet{foster2018smooth} (which are computationally intractable for large games). These strategies constitute randomized responses to deterministic forecasting, and are conceptually quite different from strategies satisfying the no-regret property. Whether these strategies can be studied more constructively, and from a behavioral game theory standpoint, is an important question for future work. \section*{Acknowledgements} We thank Venkat Anantharam for useful preliminary discussions. VM acknowledges support from a Simons-Berkeley Research Fellowship. SP acknowledges the support of NSF grants CNS-1527846, CCF-1618145 and the NSF Science \& Technology Center grant CCF-0939370 (Science of Information). AS acknowledges support of the ML4Wireless center member companies and NSF grants AST-144078 and ECCS-1343398. Part of this work was done while some of the authors were visiting the Simons Institute for the Theory of Computing. \section{Simplified ``warm-up'' proof under monotonicity} \section{Full proof of last-iterate divergence using martingale CLT} Finally, we show that Equation~\eqref{eq:noregretsensitivity2} admits a full proof of last-iterate divergence, essentially proving Conjecture 3.4, using the martingale CLT. \begin{proof} We will adopt the same ``proof-by-contradiction'' structure.
We will show that, if $\bm{Q}_t \to 1/2 \text { a.s.}$, then we must have \begin{align*} \lim_{t \to \infty} \bbP\left[\bm{P}_t \geq \frac{1}{2} + \delta \right] \geq \epsilon(\delta) > 0 \text{ for any } 0 < \delta < 1/4 . \end{align*} First, we prove the following claim. \begin{claim} $\bm{Q}_t \to 1/2 \text{ a.s. }$ implies that \begin{align}\label{eq:continuumclosetoNE} {\lim \inf}_{k \to \infty} \frac{1}{t_k} \Big{|} \left\{s \leq t_k: \frac{1}{4} \leq \bm{Q}_s \leq \frac{3}{4}\right\} \Big{|} = 1 \text{ almost surely, } \end{align} where $\{t_k\}_{k \geq 1}$ is the deterministic subsequence used in Proposition 8. \end{claim} Equation~\eqref{eq:continuumclosetoNE} is reminiscent of a convergence condition used by~\citet{foster2018smooth}. We now prove the claim. \begin{proof} Let $\Omega$ denote the sample space generated by $\{\bm{P}_t,\bm{Q}_t\}_{t \geq 1}$ and the strategies $\{f_t(\cdot),g_t(\cdot)\}_{t \geq 1}$, let $\omega \in \Omega$ denote any realization, and let $(\bm{P}_t(\omega),\bm{Q}_t(\omega))$ denote the corresponding mixed strategies at time $t$. Then, $\bm{Q}_t \to 1/2$ a.s. implies that \begin{align*} \bbP\left(\omega \in \Omega: \lim_{t \to \infty} \bm{Q}_t(\omega) = 1/2 \right) = 1 . \end{align*} This directly implies that for any subsequence $\{t_k\}_{k \geq 1}$, we get \begin{align*} \bbP\left(\omega \in \Omega: \lim_{k \to \infty} \bm{Q}_{t_k}(\omega) = 1/2 \right) = 1 . \end{align*} For every $\omega$ such that $\lim_{t \to \infty} \bm{Q}_{t}(\omega) = 1/2$, there exists finite $t_0(\omega)$ such that \begin{align*} \bm{Q}_{t}(\omega) \in \left[\frac{1}{4},\frac{3}{4} \right] \text{ for all } t \geq t_0(\omega) . \end{align*} Let $k_0(\omega) := \min\{k: t_k \geq t_0(\omega)\}$. This directly gives us \begin{align*} {\lim \inf}_{k \to \infty} \frac{1}{t_k} \Big{|} \{s \leq t_k: \frac{1}{4} \leq \bm{Q}_s(\omega) \leq \frac{3}{4}\} \Big{|} \geq {\lim \inf}_{k \to \infty} \frac{t_k - t_{k_0(\omega)}}{t_k} = 1 .
\end{align*} Thus, we directly get \begin{align*} \bbP\left(\omega \in \Omega: {\lim \inf}_{k \to \infty} \frac{1}{t_k} \Big{|} \left\{s \leq t_k: \frac{1}{4} \leq \bm{Q}_s(\omega) \leq \frac{3}{4}\right\} \Big{|} = 1 \right) = 1 , \end{align*} which completes the proof. \end{proof} As a direct consequence of the above claim, we have \begin{align}\label{eq:variancebound} {\lim \inf}_{t \to \infty} \frac{1}{t} \sum_{s=1}^t \bm{Q}_s(1 - \bm{Q}_s) \geq \frac{3}{16} \text{ almost surely. } \end{align} We consider the filtration $\{\mathcal{F}_t := (\bm{I}_s,\bm{J}_s)_{s=1}^{t-1}\}_{t \geq 1}$, and the stochastic process $\{\bm{D}_t := \bm{Z}_t - \bm{Z}_{t-1} - \bm{Q}_t\}_{t \geq 1}$. Recall that we had defined $\bm{Z}_t := \sum_{s=1}^t \bm{J}_s$, where $\bm{J}_t \sim \text{Ber}(\bm{Q}_t)$. We observe the following properties of the stochastic process $\{\bm{D}_t\}_{t \geq 1}$: \begin{enumerate} \item $\bm{D}_t$ is a martingale difference sequence with respect to the filtration $\mathcal{F}_{t-1}$. This is a direct consequence of $\bm{Q}_t$ being a \textit{deterministic} function of $\mathcal{F}_{t-1}$. \item $|\bm{D}_t| \leq 1$. \item The sum of conditional variances diverges to $\infty$. In other words, we have \begin{align*} \sigma_t^2 := \bbE\left[\bm{D}_t^2 | \mathcal{F}_{t-1}\right] &= \bbE\left[(\bm{J}_t - \bm{Q}_t)^2 | \mathcal{F}_{t-1}\right] \\ &= \bm{Q}_t(1 - \bm{Q}_t) , \end{align*} and from Equation~\eqref{eq:variancebound} it is clear that $\sum_{s=1}^t \sigma_s^2 \to \infty$ a.s. \end{enumerate} As a consequence of these properties, we can apply the martingale central limit theorem~\citep{hall2014martingale} in the following form: \begin{theorem}[Martingale CLT,~\citep{hall2014martingale}]\label{thm:martingaleCLT} Let $\{\bm{D}_t\}_{t \geq 1}$ be a martingale difference sequence with respect to the filtration $\mathcal{F}_{t-1}$ such that $|\bm{D}_t| \leq 1$.
Further, let $\{\sigma_t^2 := \bbE\left[\bm{D}_t^2 | \mathcal{F}_{t-1}\right]\}_{t \geq 1}$ denote the conditional variance sequence, with the property that $\sum_{t=1}^{\infty} \sigma_t^2$ diverges to infinity almost surely. Then, we have the convergence in distribution \begin{align}\label{eq:martingaleCLT} \frac{\sum_{s=1}^t \bm{D}_s}{\sqrt{\sum_{s=1}^t \sigma_s^2}} \xrightarrow{d} \bm{Z} \sim \mathcal{N}(0,1) . \end{align} \end{theorem} Note that Equation~\eqref{eq:martingaleCLT} directly implies \begin{align}\label{eq:martingaleCLTsubsequence} \frac{\sum_{s=1}^{t_k} \bm{D}_s}{\sqrt{\sum_{s=1}^{t_k} \sigma_s^2}} \xrightarrow{d} \bm{Z} \sim \mathcal{N}(0,1) \end{align} for any deterministic subsequence $\{t_k\}_{k \geq 1}$. From Proposition 8, we get \begin{align*} {\lim \inf}_{k \to \infty} \bbP\left( \bm{P}_{t_k} \geq \frac{1}{2} + \delta \right) &\geq \lim_{k \to \infty} \bbP\left(\sqrt{t_k} (\widehat{\bm{Q}}_{t_k} - 1/2) \geq \beta \right) \\ &\stackrel{\mathsf{(i)}}{\geq} \lim_{k \to \infty} \bbP\left(\sqrt{t_k} (\widehat{\bm{Q}}_{t_k} - \overline{\bm{Q}}_{t_k}) \geq \beta' \right) . \end{align*} Inequality $(\mathsf{i})$ follows from the bound established in Appendix C of the main paper, namely \begin{align*} \overline{\bm{Q}}_t \geq \frac{1}{2} - \frac{C}{\sqrt{t}} \text{ pointwise, for all } t \geq 1 \end{align*} for some bounded constant $C > 0$; here, we denote $\beta' := \beta + C$. Now, note that for any $k \geq 1$, we have $\frac{\sum_{s=1}^{t_k} \bm{D}_s}{\sqrt{t_k}} = \frac{1}{\sqrt{t_k}}\left( \bm{Z}_{t_k} - \sum_{s=1}^{t_k} \bm{Q}_s \right) = \sqrt{t_k} (\widehat{\bm{Q}}_{t_k} - \overline{\bm{Q}}_{t_k})$.
Therefore, we get \begin{align*} \lim_{k \to \infty} \bbP\left(\sqrt{t_k} (\widehat{\bm{Q}}_{t_k} - \overline{\bm{Q}}_{t_k}) \geq \beta' \right) &= \lim_{k \to \infty} \bbP\left( \frac{\sum_{s=1}^{t_k} \bm{D}_s}{\sqrt{t_k}} \geq \beta' \right) \\ &\stackrel{\mathsf{(i)}}{\geq} \lim_{k \to \infty} \bbP\left(\frac{\sum_{s=1}^{t_k} \bm{D}_s}{\sqrt{\sum_{s=1}^{t_k} \bm{Q}_s(1 - \bm{Q}_s)}}\geq \frac{16\beta'}{3} \right) \\ &\stackrel{\mathsf{(ii)}}{\geq} \frac{1}{2}\text{erfc}\left(\frac{c}{\sqrt{2}}\right) > 0 , \end{align*} where $c := \frac{16\beta'}{3}$. Inequality $(\mathsf{i})$ uses Equation~\eqref{eq:continuumclosetoNE}, and inequality $(\mathsf{ii})$ uses the martingale CLT on the subsequence $\{t_k\}_{k \geq 1}$ (as described in Equation~\eqref{eq:martingaleCLTsubsequence}), together with the standard Gaussian tail probability $\bbP(\bm{Z} \geq c) = \frac{1}{2}\text{erfc}(c/\sqrt{2})$. Thus, we have shown that if the last iterate of player $2$ were assumed to converge, the last iterate of player $1$ could not converge (in fact, not even in probability). This completes our \textit{proof-by-contradiction}: at least one of the players cannot converge almost surely to the NE. \end{proof} \section{Extended proof for Hedge with adaptive step sizes} In this section, we present a very brief proof of Conjecture 3.4 for all known variants of Hedge with adaptive step sizes, i.e. \begin{align*} \bm{P}_t &= \frac{\exp\{\eta_t \bm{Z}_{t-1} \}}{\exp\{\eta_t \bm{Z}_{t-1} \} + \exp\{\eta_t (t - 1 - \bm{Z}_{t-1})\}} , \end{align*} where $\eta_t := \eta_t(\mathcal{F}_{t-1})$. This includes algorithms that achieve variation-based bounds~\cite{hazan2010extracting}, algorithms that adapt between adversity and stochasticity~\cite{cesa2007improved,erven2011adaptive}, and learning on predictable sequences~\cite{rakhlin2013online,rakhlin2013optimization}. The adaptive learning rate can be set via the ubiquitous ``doubling-trick''~\cite{cesa2007improved,hazan2010extracting} or in a more continuous manner~\cite{erven2011adaptive,rakhlin2013online,rakhlin2013optimization}.
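For concreteness, the Hedge iterate above, evaluated at the worst-case learning-rate floor $\eta_t = C/\sqrt{t}$, can be sketched as follows (our own illustration; the constant $C$ and the log-sum-exp shift are implementation choices, not part of the paper's formalism):

```python
import math

def hedge_iterate(Z_prev, t, C=1.0):
    """Hedge iterate P_t for two actions, with eta_t = C / sqrt(t).

    Z_prev: opponent's cumulative count of action 1 over rounds 1..t-1.
    Adaptive variants are only guaranteed to satisfy eta_t >= C / sqrt(t)
    pointwise, which pushes their iterates at least this far from 1/2.
    """
    eta = C / math.sqrt(t)
    a = eta * Z_prev             # cumulative score of action 1
    b = eta * (t - 1 - Z_prev)   # cumulative score of action 0
    m = max(a, b)                # shift exponents for numerical stability
    return math.exp(a - m) / (math.exp(a - m) + math.exp(b - m))
```

A perfectly balanced history gives $\bm{P}_t = 1/2$, while an imbalance on the $\sqrt{t}$ scale, which the CLT makes typical, keeps $\bm{P}_t$ bounded away from $1/2$ at every horizon.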
Either way, none of these algorithms satisfies the mean-based property, and some of them (most notably AdaHedge~\cite{cesa2007improved,erven2011adaptive}) do not even satisfy the self-agnostic property. This is because, in general, the learning-rate function $\eta_t(\mathcal{F}_{t-1})$ will depend in some non-linear way on $\{\bm{P}_s,\bm{Q}_s,\bm{I}_s,\bm{J}_s\}_{s=1}^{t-1}$. In spite of this, we can prove last-iterate divergence for these algorithms. We critically use the following property, which is shared by all such adaptive algorithms: for some universal constant $C > 0$, we have \begin{align*} \eta_t \geq \frac{C}{\sqrt{t}} \text{ pointwise. } \end{align*} This property is usually used to prove that adaptive Hedge algorithms retain the $\mathcal{O}(\sqrt{t})$ regret guarantee in the worst case (but can achieve faster rates under more favorable conditions). For a given round $t$, we define $\bm{Z}_{t-1}$ as above, and further define \begin{align*} \bm{P}_{t,\mathsf{Hedge}} := f_{t,C}(\widehat{\bm{Q}}_{t-1}) := \frac{\exp\{C \sqrt{t} \widehat{\bm{Q}}_{t-1} \}}{\exp\{C \sqrt{t} \widehat{\bm{Q}}_{t-1} \} + \exp\{C \sqrt{t} (1 - \widehat{\bm{Q}}_{t-1})\}} \end{align*} as the counterfactual iterate that would have been generated if the algorithm switched to Hedge with learning rate $\eta_t = C/\sqrt{t}$ at round $t$, and we note that $|2\bm{P}_{t} - 1| \geq |2\bm{P}_{t,\mathsf{Hedge}} - 1|$. Clearly, $\{f_{t,C}(\cdot)\}_{t \geq 1}$ is a mean-based and self-agnostic strategy, and so it satisfies the deterministic statement of Equation~\eqref{eq:noregretsensitivity2} at every $t = t_k$. That is, \begin{align*} f_{t_k, C}(x) \geq \frac{1}{2} + \delta \text{ for all } x \geq \frac{1}{2} + \frac{\beta}{\sqrt{t_k}} \text{ and for all } k \geq 1 .
\end{align*} Therefore, we get for any $k \geq 1$, \begin{align*} \bbP\left(\bm{P}_{t_k} \geq \frac{1}{2} + \delta \right) \geq \bbP\left(\bm{P}_{t_k,\mathsf{Hedge}} \geq \frac{1}{2} + \delta \right) , \end{align*} and the proof directly follows by lower bounding the right-hand side. This completes the proof of last-iterate divergence for Hedge with adaptive step sizes. \qed \begin{remark} Note that the above proof did not use any details about the adaptive step sizes or the fact that the underlying framework was the Hedge algorithm. The only necessary property is that the iterates stochastically dominate the iterates of an equivalent mean-based no-regret algorithm. Thus, this proof idea is more broadly applicable to any no-regret algorithm that can be ``reduced'' to a mean-based, self-agnostic algorithm in this sense. \end{remark} \section{Convergence of the empirical average: Proof of Lemma~\ref{lem:timeave}}\label{sec:timeave} In this section, we provide for completeness the proof of Lemma~\ref{lem:timeave}, which shows almost-sure convergence of the time-averages of the mixed strategies of players who deploy no-regret algorithms to the unique mixed NE of a competitive game. Suppose that both players employ no-regret algorithms with regret rate $r$.
Then, \citet[Proposition~17.9]{roughgarden2016twenty} shows that the product distribution $\bm{\overline{P}_t} \times \bm{\overline{Q}_t}$ is an approximate \emph{coarse-correlated equilibrium (CCE)}, in the sense that there exists a universal constant $C > 0$ such that \[ \bbE_{(\bm{I},\bm{J}) \sim \bm{\overline{P}_t} \times \bm{\overline{Q}_t}}[G(\bm{I},\bm{J})] \leq \bbE_{(\bm{I},\bm{J}) \sim \bm{\overline{P}_t} \times \bm{\overline{Q}_t}}[G(i',\bm{J})] + \frac{C}{t^r} \text{ pointwise} \] for any unilateral deviation $i' \in \{0,1\}$ of player $1$, and similarly, \[ \bbE_{(\bm{I},\bm{J}) \sim \bm{\overline{P}_t} \times \bm{\overline{Q}_t}}[H(\bm{I},\bm{J})] \leq \bbE_{(\bm{I},\bm{J}) \sim \bm{\overline{P}_t} \times \bm{\overline{Q}_t}}[H(\bm{I},j')] + \frac{C}{t^r} \text{ pointwise} \] for any unilateral deviation $j' \in \{0,1\}$ of player $2$. Next, we observe that for any competitive game, there is a unique CCE, i.e. there is a unique distribution $\mu$ that satisfies \[ \bbE_{(\bm{I},\bm{J}) \sim \mu }[G(\bm{I},\bm{J})] \leq \bbE_{(\bm{I},\bm{J}) \sim \mu }[G(i',\bm{J})], \] for any unilateral deviation $i' \in \{0,1\}$ of player $1$, and \[ \bbE_{(\bm{I},\bm{J}) \sim \mu}[H(\bm{I},\bm{J})] \leq \bbE_{(\bm{I},\bm{J}) \sim \mu }[H(\bm{I},j')], \] for any unilateral deviation $j' \in \{0,1\}$ of player $2$. In fact, this distribution corresponds to the unique NE of this game, i.e. $\mu = p^* \times q^*$. As a consequence of this uniqueness, and from the conditions on the payoffs of a competitive game given in Definition~\ref{def: competitive}, we get that the distribution $\mu_t := \bm{\overline{P}_t} \times \bm{\overline{Q}_t}$ is within a distance of order $\mathcal{O}(t^{-r})$ from $\mu$ in the absolute norm. In particular, when $r = 1/2$, we get that the time-average of player $2$'s actions converges to $q^*$ at the rate \begin{align*} |\overline{\bm{Q}}_t - q^*| \leq \frac{C}{\sqrt{t}} \text{ pointwise, for all } t \geq 1.
\end{align*} which is precisely Equation~\eqref{eq:timeave}. This completes the proof of Lemma~\ref{lem:timeave}. \qed While the above reasoning holds for all competitive games and all no-regret algorithms, we note that this lemma was first proved by~\citet{freund1999adaptive} for the special case of zero-sum games with agents playing the multiplicative weights algorithm. \section{Connections between non-zero-sum and zero-sum competitive games}\label{sec:competitive} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/game_eg1_NZS_player1_iterates} \caption{Non-zero-sum game with parameters $(\alpha = 0.111, \beta = 4)$.} \label{fig:game1NZS} \end{subfigure}% ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/game_eg1_ZS_player1_iterates} \caption{Zero-sum ``equivalent'' with parameters $(\alpha = 0.111, \beta = 4)$.} \label{fig:game1ZS} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/game_eg2_NZS_player1_iterates} \caption{Non-zero-sum game with parameters $(\alpha = 0.25, \beta = 0.429)$.} \label{fig:game2NZS} \end{subfigure}% ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/game_eg2_ZS_player1_iterates} \caption{Zero-sum ``equivalent'' with parameters $(\alpha = 0.25, \beta = 0.429)$.} \label{fig:game2ZS} \end{subfigure} \caption{Comparison of the iterates of player $1$ using common no-regret algorithms in the non-zero-sum and zero-sum versions of the competitive games as specified in Definition~\ref{def:competitivegames2} for two choices of parameters $\alpha,\beta$.
While the non-zero-sum and zero-sum games have identical NEs ($p^* = 0.8$ for player $1$ under the first choice of parameters, and $p^* = 0.3$ for player $1$ under the second), the iterates produced by no-regret learning exhibit qualitatively different behavior.}\label{fig:competitive} \end{figure} In this section, we recap the connection, alluded to in Remark~\ref{remark:competitive}, between a non-zero-sum competitive game and a zero-sum competitive game that share the same best-response functions (and thus the same NE). The following characterization of competitive games was provided in~\citet{phade2019geometry}: \begin{definition}\label{def:competitivegames2} For any choice of parameters $\alpha, \beta > 0$, we define a competitive game with payoff matrix entries given by \begin{align*} G(0,0) = -\alpha, G(1,1) &= -1, H(0,0) = \beta, H(1,1) = 1 \text{ and } \\ G(0,1) = G(1,0) &= H(0,1) = H(1,0) = 0 . \end{align*} The unique Nash equilibrium of this game (which is also a correlated equilibrium) is given by $p^* = \beta/(1 + \beta), q^* = \alpha/(1 + \alpha)$. Moreover, the corresponding zero-sum game with payoff matrix entries for player $1$ given by \begin{align*} G'(0,0) = \frac{1 - \alpha \beta}{1 + \beta}, G'(0,1) = 1, G'(1,0) = \frac{1 + \alpha}{1 + \beta}, G'(1,1) = 0 \end{align*} is equivalent to the above competitive game in its best-response functions and NE. Note that the transformation between the payoff matrices $(G,H)$ and $(G',-G')$ is non-affine. \end{definition} As mentioned in Remark~\ref{remark:competitive}, this equivalence in best-response functions does not translate to an equivalence in actual day-to-day behavior when the players use no-regret algorithms. See Figure~\ref{fig:competitive} for a depiction of the qualitative discrepancies in behavior for two choices of parameters $(\alpha,\beta)$. Understanding these discrepancies at a deeper level remains an intriguing direction for future work.
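As a sanity check on Definition~\ref{def:competitivegames2}, the following short Python sketch (our own illustration; the variable names, parameter values, and tolerances are ours) verifies numerically that the profile $(p^*, q^*)$ makes each player indifferent between her two actions both in the non-zero-sum game $(G,H)$ and in its zero-sum counterpart $(G',-G')$, so the two games indeed share their best responses at the equilibrium:

```python
# Numerical sanity check for the competitive game of Definition
# "competitivegames2" and its zero-sum counterpart. Variable names,
# parameter values, and tolerances are our own illustrative choices.

alpha, beta = 0.111, 4.0

# p, q denote probabilities of playing action 1.
p_star = beta / (1 + beta)    # player 1's NE strategy (= 0.8 here)
q_star = alpha / (1 + alpha)  # player 2's NE strategy

# Non-zero-sum payoffs: G(0,0) = -alpha, G(1,1) = -1,
# H(0,0) = beta, H(1,1) = 1, and all off-diagonal entries are zero.
def payoff1(i, q):  # player 1's expected payoff for pure action i vs. mixed q
    return (1 - q) * (-alpha) if i == 0 else q * (-1.0)

def payoff2(j, p):  # player 2's expected payoff for pure action j vs. mixed p
    return (1 - p) * beta if j == 0 else p * 1.0

# At the NE, each player is indifferent between her two actions.
assert abs(payoff1(0, q_star) - payoff1(1, q_star)) < 1e-9
assert abs(payoff2(0, p_star) - payoff2(1, p_star)) < 1e-9

# Zero-sum counterpart: player 1 receives G', player 2 receives -G'.
Gp = {(0, 0): (1 - alpha * beta) / (1 + beta), (0, 1): 1.0,
      (1, 0): (1 + alpha) / (1 + beta),        (1, 1): 0.0}

def payoff1_zs(i, q):
    return (1 - q) * Gp[(i, 0)] + q * Gp[(i, 1)]

def payoff2_zs(j, p):
    return -((1 - p) * Gp[(0, j)] + p * Gp[(1, j)])

# The same profile (p_star, q_star) is an equilibrium of the zero-sum game.
assert abs(payoff1_zs(0, q_star) - payoff1_zs(1, q_star)) < 1e-9
assert abs(payoff2_zs(0, p_star) - payoff2_zs(1, p_star)) < 1e-9
```

Here $p$ and $q$ denote probabilities of playing action $1$, consistent with $p^* = \beta/(1+\beta)$ and $q^* = \alpha/(1+\alpha)$.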
\section{Introduction} The mixed-strategy Nash equilibrium (NE) of a multi-player game is among the oldest solution concepts in one-shot game theory. Its existence for all games with a finite pure strategy space was one of the first central results in game theory \citep{von1945theory,nash1951non}, but a finer understanding of how the NE arises as an outcome of repeated, ``natural'' play remains a somewhat elusive goal as well as an active area of research\footnote{It is interesting to note that a question that is somewhat pre-requisite to this goal is whether the equilibrium can even be computed efficiently --- and even this has been shown to be intractable (PPAD-complete) for general-sum games in the algorithmic game theory community~\citep{daskalakis2009complexity}. However, for special classes of games, such as zero-sum (or constant-sum) games, it is well known that the computation of the NE reduces to solving a linear program (LP).}. Of special interest is whether players who use certain \textit{learning dynamics}, based on the principle of minimizing regret with respect to the best response to the empirical average of the opponent's plays\footnote{The first choice of dynamics satisfying this property was introduced in 1957~\citep{hannan1957approximation}; this was refined a few decades later in a number of algorithms in the \textit{online learning} literature in the 1990s~\citep{littlestone1989weighted,freund1999adaptive,kalai2005efficient}.}, are able to converge to equilibrium. Such learning dynamics are fundamentally \textit{uncoupled}, in the sense that the players only use knowledge of their own payoff function, and not their opponents', to update their strategies. Classical research in the economics~\citep{foster1997calibrated,foster1998asymptotic,hart2000simple,hart2005adaptive} as well as computer science~\citep{freund1999adaptive,blum2007external} communities has shown that when players play commonly used ``no-regret'' algorithms against each other, the \textit{time-average} of their strategies converges (almost surely) to Nash equilibrium when the game being played is zero-sum, or constant-sum\footnote{Convergence to NE does not occur for general non-zero-sum games; in fact, classes of non-zero-sum games have been constructed for which no choice of uncoupled dynamics can lead to even time-averaged convergence to NE~\citep{hart2003uncoupled}.
However, algorithms that satisfy a further refined notion of \textit{internal} no-regret can be shown to converge to the polytope of correlated equilibria of any general game (for a general discussion, see~\citealp[Chapter 4]{nisan2007algorithmic}).}. A subtle point here is that the above guarantee --- on convergence of the \textit{time averages} to NE --- does not necessarily imply that the day-to-day behavior converges. That is, the actual mixed strategy used by each player need not converge to the NE. Stated alternatively, the \textit{last iterate} of the players' strategies can diverge. Consider the \textit{matching pennies} game (rock--paper--scissors is a common variant with more than two strategies), in which the pure strategy space for both players is $\{H,T\}$ and the unique mixed-strategy NE of the game consists of the uniform distribution over $\{H,T\}$ for both players. Clearly, the time average of the dynamic in which the players repeatedly cycle through the pure strategy profiles $\{(H,H),(H,T),(T,H),(T,T)\}$ converges to the NE --- but the last iterate cycles by definition, and therefore diverges. In fact, a surprising property was recently discovered for the multiplicative weights/Mirror Descent families of algorithms on games like matching pennies~\citep{bailey2018multiplicative}: while the time average of the players' strategies converges, the individual strategies, i.e.\ the last iterate, converge to a limit cycle. The question naturally arises whether last-iterate divergence is a specific property of multiplicative weights/Mirror Descent-style algorithms, or a fundamental consequence of the property of no-regret itself.
This paper provides substantial evidence that it is the latter, by proving lack of last-iterate convergence for a broad class of generic no-regret algorithms under minimal assumptions. Moreover, the setting we consider corresponds to the realistic repeated-game setting in which players can only observe the opponent's realizations of their mixed strategies (as opposed to the mixed strategy itself, which is assumed in some recent literature~\citep{bailey2018multiplicative,daskalakis2018last}). The ensuing stochasticity in realizations is shown to be one of the critical ingredients underlying last-iterate divergence. \subsection{Our contributions} We consider the ensemble of $2 \times 2$ \textit{zero-sum games} for which at least one NE is strictly mixed for both players. Moreover, the game is assumed to have no strictly dominant strategies for either player. Then, we show the following result (for a formal statement, see Theorem~\ref{thm:lastiterateconvergence}).
\textit{Even if one of the players has already converged to their NE strategy, the other player necessarily diverges in their last iterate, if she is constrained to use a strategy that is no-regret AND a functional of the empirical averages of the other player's realized plays.} We unpack the above statement in a series of comments below. \begin{remark} The result above implies that the outcome of \textit{both} players playing no-regret algorithms with the above properties is last-iterate divergence, via the following ``proof-by-contradiction'' style reasoning: if last-iterate convergence of \textit{both} players were possible, then the outcome of this interaction would need to have player $2$ converging \textit{almost surely} to NE, and thus remaining arbitrarily close to NE with arbitrarily high probability after enough iterations. Our result then implies that player $1$ will necessarily diverge, as a direct consequence of the stochasticity in the realizations of player $2$'s mixed strategy (which is always arbitrarily close to NE) on every round. \end{remark} \begin{remark} Our result does not imply last-iterate divergence for \textit{all} no-regret algorithms: we have assumed that the strategies use empirical averages of the other player's plays as a sufficient statistic (along with the round number itself).
However, we subsequently show, in two important extensions of this result, that this reasoning can be slightly modified to show last-iterate divergence for commonly used variations of such strategies. The first such variation is \textit{optimism}, which has seen success in achieving last-iterate convergence in the \textit{deterministic} variant of repeated interaction, where each player can see the other's mixtures (not just their realizations). In the actual setting that occurs in repeated interaction, players only observe their opponents' realizations, and we show that all optimistic algorithms will also suffer last-iterate divergence. The second variation that we consider, for completeness, constitutes reductions to \textit{internal-no-regret} algorithms that use a slightly different notion of empirical average as a sufficient statistic. We show, using very similar arguments to Theorem~\ref{thm:lastiterateconvergence}, that all such algorithms will also diverge in their last iterate. Thus, the reliance on empirical averages for the players' strategies is more an approximate one than an exact one for the purposes of our proofs. Our results, in sum, imply that any strategy that could successfully converge in the last iterate would need to deviate significantly from using the empirical averages of past play, and use a much stronger recency bias than the one used in typical optimistic variants of Mirror Descent --- whether it is possible to design such algorithms while maintaining the no-regret property is an open question.
\end{remark} \begin{remark} Throughout, we restrict attention to \textit{optimal} no-regret strategies, i.e.\ strategies whose regret is of order at most $\sqrt{T}$. \end{remark} Putting this reasoning together, our results provide partial but compelling corroborative evidence for the possibility that the very requirement of no-regret itself could preclude last-iterate convergence for either player, even in the simplest possible case of $2 \times 2$ zero-sum games. We note that recent work on \textit{smoothly calibrated strategies} (which do not in general satisfy the no-regret property) highlights the possibility of their last-iterate convergence to NE, even for general games. We discuss this work in more detail in Section~\ref{sec:otheralgorithms}. \section{Related work} In this section, we discuss work related to our perspective on last-iterate convergence/divergence of learning dynamics. \subsection{Convergence in time averages to equilibrium} Classical guarantees on convergence to equilibrium, whether Nash or correlated, have been shown for the \textit{time averages} of the players' realizations of strategies. Several choices of learning dynamics have been considered in the economics~\citep{brown1951iterative,fudenberg1998learning,foster1997calibrated,hart2000simple,hart2005adaptive}, online learning~\citep{littlestone1989weighted,freund1999adaptive,kalai2005efficient,blum2007external} and control theory literatures. One of the oldest learning dynamics, \textit{fictitious play}~\citep{brown1951iterative}, involves players best-responding to the empirical average of their opponents' play.
This dynamic has long been known to exhibit cycling of the players' actual, day-to-day strategies, even for simple zero-sum games. Learning dynamics that incorporate explicit randomization, such as multiplicative weights and ``Follow-the-Perturbed-Leader'' (the former was first conceptualized by \citet{littlestone1989weighted}; the latter idea goes back to \citet{hannan1957approximation}), have also been considered under the umbrella term of ``adaptive heuristics''. These have seen more success: the time averages of their updates are known to converge to the NE of a zero-sum game\footnote{In fact, as detailed in \citet[Chapter 9]{cesa2006prediction}, this constitutes an alternative proof of the minimax theorem.}. These are all examples of \textit{no-regret algorithms} --- the idea of no-regret is spiritually related to Blackwell's approachability condition~\citep{blackwell1956}, and more recently these two notions were shown to be equivalent in a strong sense~\citep{abernethy2011blackwell}. In fact, remarkable properties of the time-average of strategies can be shown even in general games, although the convergence is not to the NE\footnote{In fact, convergence to the NE for a special class of general games is known to be impossible for all uncoupled dynamics~\citep{hart2003uncoupled}, of which no-regret algorithms form a subset.}. The notion of no-regret that all of the above algorithms satisfy (called \textit{external} no-regret in the online learning literature) measures the suboptimality of utility with respect to the utility that could have been obtained by playing any single strategy \textit{on all rounds} of the game. A more refined benchmark, called \textit{internal} no-regret in the online learning literature~\citep{blum2007external}, benchmarks utility against the maximal utility obtainable by swapping all instances of a particular pure strategy played with an alternative strategy.
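To make the two benchmarks concrete, the following Python sketch (our own illustration; the action and loss sequences are arbitrary) computes both external regret and internal (swap) regret for a logged two-action play sequence:

```python
# Illustrative computation of external regret and internal (swap) regret
# for a logged sequence of two-action plays. losses[t][a] is the loss that
# action a would have incurred at round t; all names here are our own.

def external_regret(actions, losses):
    # Realized cumulative loss minus that of the best single fixed action.
    realized = sum(losses[t][a] for t, a in enumerate(actions))
    best_fixed = min(sum(l[a] for l in losses) for a in (0, 1))
    return realized - best_fixed

def internal_regret(actions, losses):
    # Largest gain achievable by swapping every play of action a to b.
    best_swap_gain = 0
    for a in (0, 1):
        b = 1 - a
        gain = sum(losses[t][a] - losses[t][b]
                   for t, played in enumerate(actions) if played == a)
        best_swap_gain = max(best_swap_gain, gain)
    return best_swap_gain

actions = [0, 1, 0, 0, 1]
losses = [[1, 0], [0, 1], [1, 0], [0, 1], [1, 0]]
assert external_regret(actions, losses) == 1
assert internal_regret(actions, losses) == 1
```

External regret compares against the best single fixed action in hindsight, while internal regret considers, for each pure strategy, retroactively swapping every round on which it was played to the alternative.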
When strategies satisfying the stronger no-internal-regret condition are played against one another, they will converge to the set of correlated equilibria of a general game~\citep{foster1997calibrated,hart2000simple,hart2005adaptive}. Note that the time averages of the players' strategies are random quantities, owing to the randomness in realizations. Accordingly, the time average of strategies is shown to converge \textit{almost surely} over this randomness. \subsection{Recent work on last-iterate convergence and min-max optimization} As we noted in the introduction, convergence of the time average of the players' realizations of strategies to NE does not imply that their joint distributions used at every round, themselves, converge. The question of whether such \textit{last-iterate} convergence happens has received recent attention, with a surprising result~\citep{bailey2018multiplicative}: players who use the common no-regret dynamic exponential weights/Follow-the-Regularized-Leader suffer last-iterate divergence, i.e.\ convergence to a limit cycle of joint distributions (which averages to the NE). This chaotic behavior has also been observed in the continuous-time equivalents of these algorithms~\citep{mertikopoulos2018cycles}. Spiritually related to this is the cycling behavior of continuous optimization algorithms such as gradient descent (ascent) when used in \textit{min-max} optimization of a differentiable function that is convex in its first argument and concave in its second argument (see, e.g.,~\citealp{mazumdar2018convergence}, among many others).
These algorithmic observations have also received much attention in the ML community, as gradient-descent-ascent is commonly used as an optimization algorithm to train generative adversarial networks (GANs), which involves solving an appropriately defined\footnote{In addition to the chaotic algorithmic behavior, several other issues arise in the optimization landscape relating to the limit points of gradient-descent-ascent, owing to the possible non-convexity/concavity of the objective in GANs.} min-max optimization problem. The chaotic behavior is usually attributed to specific issues with gradient descent being used as the base algorithm: in particular, \textit{optimistic} variants of gradient descent/ascent, which incorporate a recency bias in their updates, can be shown to remedy the issue and lead to convergence in min-max optimization~\citep{daskalakis2017training,mertikopoulos2018mirror,liang2018interaction}. Optimism has also led to faster convergence of \textit{time averages} in the game-theoretic setting, both for zero-sum and non-zero-sum games~\citep{rakhlin2013optimization,syrgkanis2015fast}. Most recently, last-iterate convergence was also shown in the game-theoretic setting for optimistic exponential weights~\citep{daskalakis2018last}. However, this result made a critical assumption that is unrealistic in repeated interaction: namely, that every player is able to observe her opponent's choice of \textit{mixed strategy} in past rounds, and use this information in her update. Under this assumption, the evolution of jointly distributed strategies is a \textit{deterministic} dynamical system.
We consider the more realistic, and traditionally considered, setting in which players only use their opponents' \textit{stochastic realizations} drawn from their mixed strategies, constituting a \textit{stochastic} dynamical system\footnote{Note that all of the classical results on convergence of the time averages concern stochastic dynamical systems for this reason, with the stochasticity arising from realizations of the mixed strategies of both players and impacting both the outcomes of current rounds and future strategies.}. As we will see shortly, this stochasticity critically matters, and we show that optimistic variants of algorithms no longer converge in the last iterate in this realistic setting. \subsection{Properties of other algorithms of interest (that can suffer linear regret)}\label{sec:otheralgorithms} A very reasonable question to ask, in light of our negative results on last-iterate divergence for such a broad class of no-regret algorithms, is whether \textit{any} non-trivial dynamics, whether no-regret or otherwise, can be guaranteed to converge in the last iterate. In fact, algorithms satisfying a notion of \textit{smoothed calibration} can be shown to converge in precisely this fashion~\citep{foster2018smooth}. \section{Simulations} \label{sec:simulations} In this section, we provide empirical evidence for last-iterate oscillations under even \emph{suboptimal} no-regret strategies. We evaluate three popular no-regret strategies: \begin{enumerate} \item The standard multiplicative weights update, which is known to lead to last-iterate oscillation even under telepathic dynamics (the deterministic setting)~\citep{bailey2018multiplicative}. \item The optimistic multiplicative weights update, which converges in the last iterate in the deterministic setting~\citep{daskalakis2018last}. \item The online mirror descent algorithm with the logarithmic function as regularizer, often called the ``log barrier''~\citep{nemirovsky1983problem}. This regularizer has been successfully used to establish robustness of fast \textit{time-average} convergence guarantees in limited-information feedback settings~\citep{foster2016learning}, and is thus naturally interesting to evaluate. \end{enumerate} All of the above algorithms fall under the online-mirror-descent framework, and employ non-adaptive (data-independent) learning rates $\{\eta_t\}_{t \geq 1}$. The rate of decay of $\eta_t$ with $t$ dictates the no-regret rate in all three cases: for any $r \geq 1/2$, if $\eta_t = 1/t^r$, the no-regret rate is equal to $(r,c)$ for a suitable positive constant $c$. We will evaluate these algorithms with two learning rate choices: $\eta_t = 1/\sqrt{t}$ (optimal), and $\eta_t = 1/t^{0.7}$ (suboptimal rate\footnote{It is easy to verify that the same suboptimal rate $r = 0.7$ would also result from the slower choice of learning rate decay, $\eta_t = 1/t^{0.3}$. We do not evaluate this choice, as the argument in Proposition~\ref{prop:noregretsensitivity} can be employed to show that it results in even more fluctuation-sensitivity than the case of optimal-no-regret; thus, it provably causes last-iterate oscillation.
Intuitively, higher learning rates correspond to less randomness in the mixed strategies as a function of the past, and so more fluctuation-sensitivity.} $r = 0.7$), and study the evolution of player $1$'s iterates until $T := 10^8$ steps. Furthermore, we will consider the simplest $2 \times 2$ game: the matching pennies game, for which $G(0,0) = G(1,1) = 1$ and $G(0,1) = G(1,0) = 0$ (without loss of generality, player $1$ is the player who wants the actions to match). Note that the unique mixed-strategy equilibrium of this game is $p^* = q^* = 1/2$. We will plot the evolution of the mixed strategies of player $1$ with time --- since the matching pennies game is symmetric, player $2$ has similar behavior. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/matching_pennies_player1_optimal_deterministic} \caption{Deterministic case.} \label{fig:MPoptimaldeterministic} \end{subfigure}% ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/matching_pennies_player1_optimal} \caption{Stochastic case (typical realization).} \label{fig:MPoptimalstochastic} \end{subfigure} \caption{Evolution of the iterates of multiplicative weights in the matching pennies game (for player $1$) when the optimal-no-regret rate $r = 1/2$ is used.}\label{fig:MPoptimal} \end{figure} Figure~\ref{fig:MPoptimal} studies the optimal-no-regret case, and shows the striking difference between the evolution of the mixed strategies when the players use opponents' mixtures (the deterministic case) as opposed to their realizations (the stochastic case, studied in this paper). Notably, we see in Figure~\ref{fig:MPoptimaldeterministic} that while multiplicative weights converges to a limit cycle, optimistic multiplicative weights converges to NE quite quickly. 
The third algorithm, log-barrier Online-Mirror-Descent, also oscillates, but the amplitude of its cycles is so much smaller\footnote{This likely reflects the increased entropy of the strategies used in the log-barrier algorithm.} than for multiplicative weights that it is not visible in the figure. On the other hand, we see in Figure~\ref{fig:MPoptimalstochastic} that all three of these algorithms diverge in the last iterate. In fact, they are very rarely close to the equilibrium strategy $p^* = 0.5$! All in all, Figure~\ref{fig:MPoptimalstochastic} empirically corroborates Theorems~\ref{thm:lastiteratedivergence} and~\ref{thm:optimism}, and shows in particular that introducing optimism into no-regret strategies does not fix the issue of last-iterate oscillation. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{./Figures/matching_pennies_player1_player2atNE} \caption{Evolution of the iterates of multiplicative weights in the matching pennies game (for player $1$) when the optimal-no-regret rate $r = 1/2$ is used, and player $2$ is already at NE.}\label{fig:MPoptimalfixedNE} \end{figure} It is also worth examining the differential impact on player $1$ when player $2$ uses an optimal no-regret strategy, as opposed to player $2$ playing his fixed NE strategy. In the case of the matching pennies game, the latter corresponds to player $2$ playing $q^* = 0.5$ at every step. Figure~\ref{fig:MPoptimalfixedNE} depicts the evolution of the mixed strategies of player $1$ in this latter case. Comparing the evolution to Figure~\ref{fig:MPoptimalstochastic}, it is evident that the mixed strategies diverge in both cases. While the ``period'' of the limiting cycles, if any, seems to be larger in the fixed-strategy case, the amplitude of oscillation is similar in both cases.
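The qualitative behavior described above is easy to reproduce. The following minimal Python sketch (our own illustrative re-implementation, not the code used for the figures; the constant $C$, the horizon $T$, and the random seed are arbitrary choices) simulates both players using a mean-based Hedge strategy of the form $P_t \propto \exp\{C\sqrt{t}\,\widehat{Q}_{t-1}\}$ in matching pennies with realization feedback:

```python
import math
import random

# Minimal realization-feedback simulation of a mean-based Hedge strategy
# P_t ∝ exp{C sqrt(t) Qhat_{t-1}} for both players in matching pennies.
# This is our own illustrative re-implementation; C, T, and the seed are
# arbitrary and not the values used for the figures.

random.seed(0)
C, T = 1.0, 5000
count1 = 0  # number of times player 1 has played action 1 so far
count2 = 0  # number of times player 2 has played action 1 so far
traj = []   # player 1's mixed strategy P_t over time

def hedge(freq, t):
    # exp{C sqrt(t) freq} / (exp{C sqrt(t) freq} + exp{C sqrt(t) (1 - freq)})
    z = C * math.sqrt(t) * (2 * freq - 1)
    return 1.0 / (1.0 + math.exp(-z))

for t in range(1, T + 1):
    qhat = count2 / (t - 1) if t > 1 else 0.5  # empirical freq. of 2's action 1
    phat = count1 / (t - 1) if t > 1 else 0.5  # empirical freq. of 1's action 1
    p = hedge(qhat, t)        # player 1 wants to match player 2
    q = hedge(1.0 - phat, t)  # player 2 wants to mismatch player 1
    traj.append(p)
    count1 += 1 if random.random() < p else 0
    count2 += 1 if random.random() < q else 0

avg = sum(traj) / T            # time average of player 1's mixed strategies
spread = max(traj) - min(traj) # how widely the iterates swing
```

One should expect the time average of the iterates to remain near the NE value $1/2$ while the iterates themselves oscillate, mirroring Figure~\ref{fig:MPoptimalstochastic}.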
\begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/matching_pennies_player1_suboptimal_deterministic} \caption{Deterministic case.} \label{fig:MPsuboptimaldeterministic} \end{subfigure}% ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{./Figures/matching_pennies_player1_suboptimal} \caption{Stochastic case (typical realization).} \label{fig:MPsuboptimalstochastic} \end{subfigure} \caption{Evolution of the iterates of multiplicative weights in the matching pennies game (for player $1$) when the suboptimal-no-regret rate $r = 0.7$ is used.}\label{fig:MPsuboptimal} \end{figure} Finally, the techniques in our theory have crucially relied on the optimality of the no-regret algorithms used by both players. It is naturally interesting to ask whether relaxing the optimality of the no-regret rate could lead to better results in the evolution of the mixed strategies. Figure~\ref{fig:MPsuboptimal} provides preliminary empirical evidence that this may not be the case. In particular, Figure~\ref{fig:MPsuboptimalstochastic} shows that the suboptimal variants of all three algorithms continue to lead to last-iterate oscillation in the stochastic setting, although the amplitude of the oscillations does appear to be sizably reduced. \subsection{Beyond the mean-based assumption} \label{sec: beyond_mean_based} In this section, we show that an \emph{exact} mean-based strategy (Definition~\ref{as:meanbased}) is not required to produce the last-iterate oscillation phenomenon. In particular, we show that equivalents of Theorem~\ref{thm:lastiteratedivergence} hold under two algorithmic variants of mean-based strategies that are ubiquitous in the online learning and games literature. \subsubsection{Oscillation under optimism} One of the most common algorithmic variants of exact mean-based strategies incorporates a form of recency bias, colloquially called ``optimism''. 
We define the broad class of recency-bias strategies below. \begin{definition}\label{def:krecencybias} The class of $\ell$-recency bias strategies is defined as follows: \begin{align*} f_t(\bm{J^{t-1}}) := f_t(\bm{\widehat{Q}^\ell_{t-1}}) \text{ where } \\ \bm{\widehat{Q}^\ell_t} := \frac{1}{t} \left(\sum_{s=1}^{t} \bm{J_s} + \sum_{j = 1}^{\ell} r_j \bm{J_{t - j + 1}}\right) , \end{align*} and $\{r_j\}_{j=1}^\ell$ are positive integers taking values in $\{1,\ldots, \ell\}$. We continue to assume that $f_t(\cdot)$ is monotonic in its argument for every $t \geq 1$ (similar to Definition~\ref{as:monotonic}). \end{definition} Note that the class of $0$-recency bias strategies is equivalent to the class of mean-based strategies. Further, $1$-recency bias strategies with $r_1 = 1$ constitute the class of \textit{optimistic} mean-based strategies, since they use $\sum_{s=1}^t \bm{J_s} + \bm{J_t}$ as the summary statistic. As mentioned in Section~\ref{sec: intro}, the study of the last iterate of optimism-based strategies has generated a lot of interest in the optimization literature~\citep{daskalakis2018training,mertikopoulos2018optimistic,liang2019interaction,abernethy2019last,lei2020last}; moreover, these strategies are known to cause faster time-averaged convergence~\citep{rakhlin2013optimization,syrgkanis2015fast}. Most recently, it was shown that the last iterate of the players' strategies in the setting of telepathic dynamics (which arises when players use each others' mixtures to update their strategies) will converge~\citep{daskalakis2018last}. In the more realistic realization-based model, the following result shows that the ensuing stochasticity \textit{alone} causes recency-bias-based strategies to diverge. \begin{theorem}\label{thm:optimism} Fix any $\ell > 0$.
If both players $1$ and $2$ use any $\ell$-recency-biased, monotonic, optimal-no-regret strategies $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$ respectively, then the pair of mixed strategies $(\bm{P}_t,\bm{Q}_t)$ cannot converge to $(p^*,q^*)$ almost surely. \end{theorem} The proof of Theorem~\ref{thm:optimism} is a relatively simple modification of the proof of Theorem~\ref{thm:lastiteratedivergence}, and is provided in Appendix~\ref{sec: optimismproof} for completeness. It is worth noting that the bounded memory of the recency bias, as well as the bounded increments $\{r_j\}_{j=1}^{\ell}$, are critical to the essence of the proof argument. It is plausible that stronger recency biases that grow with the number of steps $t$ could lead to different last-iterate behavior; however, such stronger recency biases could also break the no-regret property. \subsubsection{Oscillation under stochastic domination (e.g., adaptive step sizes)} The assumption of mean-based strategies (Definition~\ref{as:meanbased}) is not satisfied by some of the most popular online learning algorithms that retain the worst-case optimal no-regret guarantee, but obtain faster rates under ``easier data''. These include variants of Online-Mirror-Descent that adapt to small cumulative loss or variance~\citep{hazan2010extracting}, predictable sequences~\citep{rakhlin2013online,rakhlin2013optimization}, and stochasticity~\citep{cesa2007improved,erven2011adaptive}, while retaining the usual worst-case bounds. The catch in these algorithms is that the learning rate (or step size) can now be set adaptively, i.e.\ $\eta_t := \eta_t(\bm{(I)^{t-1}}, \bm{(J)^{t-1}})$. The adaptive learning rate can be set via the ubiquitous ``doubling trick''~\citep{cesa2007improved,hazan2010extracting} or in a more continuous manner~\citep{erven2011adaptive,rakhlin2013online,rakhlin2013optimization}.
Either way, none of these algorithms satisfy the mean-based property since, in general, the learning rate function $\eta_t(\mathcal{F}_{t-1})$ will depend in some non-linear way on $\{\bm{P}_s,\bm{Q}_s,\bm{I}_s,\bm{J}_s\}_{s=1}^{t-1}$. In spite of this, we can prove that the last iterate oscillates even for these algorithms by showing that their iterates \emph{stochastically dominate} those of an algorithm in the Online-Mirror-Descent family, which is mean-based\footnote{It is important to note that the Online-Mirror-Descent family does not comprise all algorithms used in online prediction. In particular, the ``parameter-free'' online learning paradigm~\citep{chaudhuri2009parameter,orabona2014simultaneous,koolen2015second,van2016metagrad} is not mean-based or easily reducible to a mean-based family. Understanding whether our techniques could plausibly extend to the parameter-free online learning paradigm is an intriguing direction for future work.}. More generally, we now show that our main result of last-iterate oscillation would directly extend to \textit{any} algorithm whose iterates stochastically dominate those of a mean-based algorithm. \begin{corollary}\label{cor:extension} Consider any optimal-no-regret strategy for player $1$ (or player $2$) $\{f_t\}_{t \geq 1}$ that \textbf{stochastically dominates} some mean-based and monotonic optimal-no-regret strategy $\{f'_t\}_{t \geq 1}$ in the following sense: for every $t \geq 1$ and every history $((I)^{t-1}, (J)^{t-1})$, we have \begin{align}\label{eq:stochasticdominance} |f_t((I)^{t-1}, (J)^{t-1}) - p^*| \geq |f'_t(\widehat{Q}_{t-1}) - p^*| . \end{align} Further, let player $2$ (or player $1$) follow \textbf{any} optimal-no-regret strategy $\{g_t\}_{t \geq 1}$. Then, the pair of mixed strategies $(\bm{P_t},\bm{Q_t})$ cannot converge almost surely.
\end{corollary} Corollary~\ref{cor:extension} automatically implies that the players cannot converge almost surely if even one of them is using any of the aforementioned adaptive variants of Online-Mirror-Descent. Let us see why this is true for the simplest case of adaptive variants of multiplicative weights/Hedge (which is a special case of the Online-Mirror-Descent family). We recall the following critical property that is shared by all adaptive variants of Hedge: for some universal constant $C > 0$, we have \begin{align*} \eta_t \geq \frac{C}{\sqrt{t}} \text{ pointwise. } \end{align*} This property is usually used to prove that such adaptive algorithms retain the $\mathcal{O}(\sqrt{t})$ regret guarantee in the worst case (while admitting faster rates under more favorable conditions). For a given round $t$, we define $\bm{Z}_{t-1}$ as above, and further define \begin{align*} \bm{P}_{t,\mathsf{Hedge}} := f_{t,C}(\widehat{\bm{Q}}_{t-1}) := \frac{\exp\{C \sqrt{t} \widehat{\bm{Q}}_{t-1} \}}{\exp\{C \sqrt{t} \widehat{\bm{Q}}_{t-1} \} + \exp\{C \sqrt{t} (1 - \widehat{\bm{Q}}_{t-1})\}} \end{align*} as the \emph{counterfactual iterate} that would have been generated if the algorithm switched to Hedge with learning rate $\eta_t = C/\sqrt{t}$ at round $t$. We note that $|\bm{P}_{t} - 1/2| \geq |\bm{P}_{t,\mathsf{Hedge}} - 1/2|$, implying that the stochastic dominance property in Equation~\eqref{eq:stochasticdominance} is satisfied. We conclude this section with the proof of Corollary~\ref{cor:extension}, provided below. \section{Setup}\label{sec:setup} We consider $2 \times 2$ games, i.e. a two player game where both the players have two pure strategies, namely, \textit{action $0$ and action $1$}. Let the payoff matrices for player $1$ and player $2$ be given by \begin{align*} G := \begin{bmatrix} G(0,0) & G(0,1) \\ G(1,0) & G(1,1) \end{bmatrix} \text{ and } H := \begin{bmatrix} H(0,0) & H(0,1) \\ H(1,0) & H(1,1) \end{bmatrix}, \end{align*} respectively. 
Thus, if player $1$ plays action $i \in \{0,1\}$ and player $2$ plays action $j \in \{0,1\}$, the payoff to player $1$ is given by $G(i,j)$ and the payoff to player $2$ is given by $H(i,j)$. We denote by the indicator random variables $\bm I$ and $\bm J$ the realizations of the mixed strategies of player $1$ and player $2$, respectively. We follow the convention of denoting random variables by the bold versions of their corresponding deterministic variables. Let $p := \bbE [\bm I]$ and $q := \bbE [\bm J]$ be the probabilities with which the two players play action $1$, respectively. In general, since the two players will implement their mixed strategies independently, the random variables $\bm I$ and $\bm J$ will be independent. Therefore, the expected payoff for player $1$ and player $2$ corresponding to the choice of mixed strategies $(p,q)$ is given by $G(p,q)$ and $H(p,q)$, respectively, where \begin{align*} X(p,q) := (1-p)(1-q)X(0,0) + (1-p)q X(0,1) + p(1-q) X(1,0) + pq X(1,1), \end{align*} where $X$ stands for $G$ or $H$. In the repeated game setting, $\{\bm{I_t}\}_{t \geq 1}$ and $\{\bm{J_t}\}_{t \geq 1}$ denote the action sequences of the two players and $\{\bm{P_t}\}_{t \geq 1}$ and $\{\bm{Q_t}\}_{t \geq 1}$ denote the mixed strategy sequences of the two players. We denote by $${\bm{(I)^t}} := \{\bm{I_s}\}_{s = 1}^t \text{ and } {\bm{(J)^t}} := \{\bm{J_s}\}_{s = 1}^t$$ the random sequences of actions up to step $t$. The \emph{empirical averages}, or \textit{time-averages}, of the \emph{actions} of the two players are given by $${\bm{\widehat P_t}} := \frac{1}{t} \sum_{s = 1}^t \bm{I_s} \text{ and } {\bm{\widehat Q_t}} := \frac{1}{t} \sum_{s = 1}^t \bm{J_s},$$ respectively. Similarly, the \emph{empirical averages}, or \textit{time-averages}, of the \emph{mixed strategies} of the two players are given by $${\bm{\overline P_t}} := \frac{1}{t} \sum_{s = 1}^t \bm{P_s} \text{ and } {\bm{\overline Q_t}} := \frac{1}{t} \sum_{s = 1}^t \bm{Q_s},$$ respectively.
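The bilinear payoff $X(p,q)$ and the time-averages defined above can be sketched directly; the function names and the matching-pennies payoff matrix used as a check are our own illustration.

```python
def expected_payoff(X, p, q):
    """X(p,q) as defined above, with X a 2x2 payoff matrix (nested list):
    player 1 plays action 1 w.p. p, player 2 plays action 1 w.p. q."""
    return ((1 - p) * (1 - q) * X[0][0] + (1 - p) * q * X[0][1]
            + p * (1 - q) * X[1][0] + p * q * X[1][1])

def empirical_average(actions):
    """Time-average (1/t) * sum_{s<=t} of a 0/1 action sequence."""
    return sum(actions) / len(actions)
```

For matching pennies, $G = \bigl[\begin{smallmatrix} 1 & -1 \\ -1 & 1 \end{smallmatrix}\bigr]$, the uniform mixture $(1/2, 1/2)$ gives expected payoff $0$.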
General repeated game strategies for player $1$ and player $2$ are given by sequences of functions $\{f_t\}_{t \geq 1}$ and $\{g_t\}_{t \geq 1}$, where for every $t \geq 1$, $f_t, g_t: \{0,1\}^{2(t-1)} \to [0,1]$ map the history up to step $t$ to mixed strategies given by $$\bm{P_t} := f_t(\bm{(I)^{t-1}}, \bm{(J)^{t-1}}) \text{ and } \bm{Q_t} := g_t(\bm{(I)^{t-1}}, \bm{(J)^{t-1}})$$ for players $1$ and $2$ respectively. We will refer to these functions $f_t$ and $g_t$ as the \emph{strategy functions} for players $1$ and $2$ at step $t$. Critically, observe that we are not allowing for telepathy in the updates. In other words, the history used by player $1$ at step $t$ does not include $\{\bm{Q_s}\}_{s=1}^{t-1}$, and the history used by player $2$ at step $t$ does not include $\{\bm{P_s}\}_{s=1}^{t-1}$. This is in agreement with the information structure of the traditional repeated game environment. We now put forward several definitions to identify key sub-classes of these strategies. \begin{definition}\label{def:selfagnostic} We say that a repeated game strategy for player $1$ is \emph{self-agnostic} if player $1$ uses only the action sequence of player $2$ to decide her mixed strategy $\bm{P_t}$ at step $t$. With an abuse of notation, the strategy function can be replaced by a function $f_t: \{0,1\}^{t-1} \to [0,1]$ such that the mixed strategy for player $1$ at step $t$ is given by $\bm{P_t} = f_t(\bm{(J)^{t-1}})$. \end{definition} Note that player $1$ is actually aware of her mixed strategies $\bm{P_1}, \bm{P_2}, \dots, \bm{P_{t-1}}$ at step $t$ since she is aware of her strategy functions $f_1, f_2, \dots, f_{t-1}$ and, for $1 \leq s \leq t-1$, $\bm{P_s}$ can be determined from $f_s$ and $\bm{(J)^{s-1}}$. Thus, by self-agnostic, we only mean that player $1$ is agnostic to the actual realizations of her actions up to that step in the process of choosing her next mixed strategy.
From the point of view of player $1$, we now define \textit{no-regret} strategies, as well as \textit{uniformly no-regret} strategies against an \textit{oblivious opponent}. The former is precisely the classical definition of consistency first proposed by~\citet{hannan1957approximation}, while the latter is a strictly stronger condition, requiring an effective non-asymptotic guarantee on regret. We will use the stronger uniform no-regret condition to derive our results. \begin{definition} A self-agnostic repeated game strategy $\{f_t\}_{t \geq 1}$ is said to be \emph{no-regret} if \[ \limsup_{T \to \infty} \frac{1}{T} \l[ \max_{i \in \{0,1\}} \sum_{t=1}^T G(i, J_t) - \sum_{t = 1}^T G(f_t((J)^{t-1}), J_t) \r] \leq 0, \] for all opponent sequences $\{J_t\}_{t \geq 1}$. \end{definition} \begin{definition} \label{def: uniform_noregret_rate} A self-agnostic repeated game strategy $\{f_t\}_{t \geq 1}$ is said to be \emph{uniformly no-regret} if \[ \limsup_{T \to \infty} \max_{\{J_t\}_{t =1}^T} \frac{1}{T} \l[\max_{i \in \{0,1\}} \sum_{t=1}^T G(i, J_t) - \sum_{t = 1}^T G(f_t((J)^{t-1}), J_t) \r] \leq 0. \] In particular, the strategy achieves a \emph{no-regret rate} of $(r,c)$ if \[ \limsup_{T \to \infty} \max_{\{J_t\}_{t =1}^T} \frac{1}{T^r} \l[\max_{i \in \{0,1\}} \sum_{t=1}^T G(i, J_t) - \sum_{t = 1}^T G(f_t((J)^{t-1}), J_t) \r] \leq c. \] \end{definition} Observe that all uniformly no-regret strategies are also no-regret. The converse need not hold --- however, algorithms that are no-regret but not uniformly no-regret are typically contrived examples that are unlikely to be used in practice. The following properties of uniformly no-regret strategies $\{f_t\}$ can easily be verified: \begin{enumerate} \item If a strategy $\{f_t\}$ satisfies a uniform no-regret rate of $(r,c)$, then it satisfies a no-regret rate of $(r,c')$ for all $c' \geq c$. 
\item If a strategy $\{f_t\}$ satisfies a uniform no-regret rate of $(r,c)$, then it satisfies a no-regret rate of $(r',0)$ for all $r' > r$. \end{enumerate} It is well-known (e.g. see~\cite[Chapter 3]{cesa2006prediction}) that, for any finite constant $0 < c < \infty$, the best possible no-regret rate is $r = 1/2$. Moreover, several commonly used algorithms, such as those in the Online-Mirror-Descent family~\citep{nemirovsky1983problem,shalev2011online}, typically match the optimal no-regret rate for appropriately chosen constant $c$. We state two additional assumptions satisfied by no-regret strategies commonly encountered in the literature. We will prove our main results under these assumptions. \begin{definition}\label{as:meanbased} The strategy of player $1$ (or $2$) is called \emph{mean-based} if player $1$ uses only the empirical averages of player $2$ (or $1$) as a sufficient statistic to determine her mixed strategy $\bm{P_t}$ at step $t$. In this case, with another abuse of notation, the strategy function can be replaced by a function $f_t: [0,1] \to [0,1]$, such that the mixed strategy for player $1$ at step $t$ is given by $\bm{P_t} = f_t(\bm{\widehat Q_{t-1}})$. \end{definition} For example, all algorithms in the popular Online-Mirror-Descent framework~\citep{nemirovsky1983problem,shalev2011online} satisfy this assumption. Our results will also apply to variants on the mean-based property that incorporate a recency bias or adaptive step-sizes. These extensions are discussed in Section~\ref{sec: beyond_mean_based}. \begin{definition}\label{as:monotonic} The mean-based strategy of player $1$ (see Definition~\ref{as:meanbased}) is called monotonic if, for every $t \geq 1$, $f_t(\cdot)$ is either \textit{non-increasing} or \textit{non-decreasing} in its argument. \end{definition} Note that this assumption \textit{does not} require strict monotonicity, and also does not require the direction of the monotonicity to be the same across rounds.
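As a concrete instance of a mean-based, monotonic strategy, vanilla Hedge with learning rate $\eta_t = C/\sqrt{t}$ can be sketched as follows, together with the time-averaged regret of Definition~\ref{def: uniform_noregret_rate}. The constant $C$, the function names, and the matching-pennies check in the usage below are illustrative assumptions, not the paper's implementation.

```python
import math

def hedge(q_hat, t, C=1.0):
    """A mean-based, monotonic strategy: P_t depends on the history only
    through q_hat = the empirical average of the opponent's actions, and
    is non-decreasing in q_hat."""
    a = math.exp(C * math.sqrt(t) * q_hat)
    return a / (a + math.exp(C * math.sqrt(t) * (1.0 - q_hat)))

def time_averaged_regret(G, P, J):
    """(1/T) [ max_i sum_t G(i, J_t) - sum_t G(P_t, J_t) ], where for a
    mixed strategy p, G(p, j) = (1-p) G(0,j) + p G(1,j)."""
    T = len(J)
    best_fixed = max(sum(G[i][j] for j in J) for i in (0, 1))
    earned = sum((1 - p) * G[0][j] + p * G[1][j] for p, j in zip(P, J))
    return (best_fixed - earned) / T
```

Against the constant sequence $J_t \equiv 1$ in matching pennies, always playing action 1 ($P_t \equiv 1$) gives zero regret, while always playing action 0 gives time-averaged regret $2$.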
We can interpret it as a regularity condition that we impose primarily for technical reasons. We conjecture that even this regularity condition is not required to show that last-iterate oscillations occur, and discuss partial evidence for this conjecture at length in Section~\ref{sec:conjecture}. In this paper, we will restrict our attention to $2 \times 2$ games for which none of the NE lie on the boundary of the mixed strategy space, i.e. neither player plays a pure strategy in any NE\footnote{Note that, although for different reasons, the last-iterate oscillation result shown by~\citet{bailey2018multiplicative} for multiplicative weights also requires the assumption that all the NE lie in the interior of the strategy space.}. \citet{phade2019geometry} characterize the NE of all $2 \times 2$ games, and show that all the NE of a $2 \times 2$ game are completely mixed if and only if it is a \emph{competitive game}, defined for completeness below. \begin{definition} \label{def: competitive} A $2 \times 2$ game is said to be competitive if either of the following conditions holds: \begin{enumerate}[(a)] \item $G(0,0) > G(1,0)$, $G(0,1) < G(1,1)$, $H(0,0) < H(0,1)$, $H(1,0) > H(1,1)$, \item $G(0,0) < G(1,0)$, $G(0,1) > G(1,1)$, $H(0,0) > H(0,1)$, $H(1,0) < H(1,1)$. \end{enumerate} \end{definition} Moreover, competitive games have a unique NE $(p^*,q^*)$, which is also the game's unique correlated equilibrium. Henceforth, without loss of generality, we will assume that the payoffs satisfy condition (a) above. Observe that in the special case of zero-sum games, i.e. $H = -G$, condition (a) would become: $G(0,0) > G(1,0)$, $G(0,1) < G(1,1)$, $G(0,0) > G(0,1)$ and $G(1,0) < G(1,1)$. This condition is satisfied by several common zero-sum games, including the \textit{matching pennies} game which we will use as a running example. \begin{remark}\label{remark:competitive} Note that the set of zero-sum games with unique NE is strictly contained in the set of competitive games.
The results in \citet{phade2019geometry} imply that, for any competitive game, there exists a corresponding zero-sum game% \footnote{We note that the transformation between the non-zero-sum game and corresponding zero-sum game may not be linear, where a linear transformation is one that is obtained by changing the payoff matrices to be $a_1 G + b_1$ and $a_2 H + b_2$ for player $1$ and $2$, respectively, where $a_1, b_1, a_2, b_2 \in \bbR$.} such that the best-response functions of both the players are exactly the same. It is easy to see that if two games have the same best-response functions, then they have the same NE. However, this does not imply that the players' behavior (whether on average, or day-to-day) will be the same when such games are played repeatedly. Appendix~\ref{sec:competitive} illustrates significant qualitative differences in the last-iterate behavior of players playing no-regret strategies for two such ``equivalent'' games. \end{remark} We define $R^* := G(p^*, q^*) = G(1,q^*) = G(0,q^*)$ (where the chain of equalities follows because $p^*$ is in the interior and by the definition of a Nash equilibrium). \subsection{Main result: Last-iterate oscillation when \textit{both} players use no-regret}\label{sec:mainresult} The case for which player $2$ is \textit{already} at equilibrium serves to isolate the ramifications of the stochasticity in realizations. While the day-to-day behavior of players is clearly quite different when both players are using no-regret strategies (see Figures~\ref{fig:MPoptimalfixedNE} and~\ref{fig:MPoptimalstochastic} in Section~\ref{sec:simulations} for a comparison), we show below that the inherent stochasticity in realizations continues to be the dominating factor that causes last-iterate oscillations. Our main result is stated below. 
\begin{theorem}\label{thm:lastiteratedivergence} If both players $1$ and $2$ use optimal no-regret repeated game strategies that are mean-based (Definition~\ref{as:meanbased}) and monotonic (Definition~\ref{as:monotonic}), then the pair of mixed strategies of both the players $(\bm{P_t}, \bm{Q_t})$ cannot converge to the NE $(p^*,q^*)$ almost surely. \end{theorem} The proof of Theorem~\ref{thm:lastiteratedivergence} mirrors the proof of the simpler ``warm-up'' case (Theorem~\ref{thm:warmup}) by taking a \textit{proof-by-contradiction} approach: Suppose, instead, that both players \textit{did} converge almost surely. Then player $2$ would have to converge almost surely by definition, and we show that in this scenario player $1$ cannot converge because player $2$'s realizations are effectively too stochastic to allow it. The primary challenge is in showing this sufficient stochasticity in player $2$'s realizations; after all, player $2$'s realizations are highly dependent on past outcomes of player $1$. To do this, we use the martingale central limit theorem~\citep{hall2014martingale} together with a classical result on the convergence of the \textit{time-average} of the mixed strategies to NE~\citep{freund1999adaptive,roughgarden2016twenty}. The full proof is provided below.
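For reference, the competitiveness condition (a) and the unique interior NE $(p^*, q^*)$ that the theorem concerns admit a direct computation: $p^*$ makes player 2 indifferent between her two actions and $q^*$ makes player 1 indifferent. This is a sketch under condition (a) with our own function names.

```python
def is_competitive_a(G, H):
    """Condition (a) of the competitiveness definition above."""
    return (G[0][0] > G[1][0] and G[0][1] < G[1][1]
            and H[0][0] < H[0][1] and H[1][0] > H[1][1])

def interior_nash(G, H):
    """Unique completely mixed NE of a competitive 2x2 game under
    condition (a): p* equalizes player 2's payoffs across her actions,
    and q* equalizes player 1's."""
    p_star = (H[0][1] - H[0][0]) / ((H[0][1] - H[0][0]) + (H[1][0] - H[1][1]))
    q_star = (G[0][0] - G[1][0]) / ((G[0][0] - G[1][0]) + (G[1][1] - G[0][1]))
    return p_star, q_star
```

For matching pennies, `G = [[1, -1], [-1, 1]]` and `H = -G`, this returns the familiar equilibrium $(1/2, 1/2)$.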
https://arxiv.org/abs/1812.05833
Total Colourings - A survey
The total chromatic number of a graph is the smallest number of colors needed to color its elements (vertices and edges) so that the coloring is proper, i.e., no two adjacent or incident elements receive the same color. Vizing and Behzad conjectured that the total coloring can be done using at most $\Delta(G)+2$ colors, where $\Delta(G)$ is the maximum degree of $G$. The conjecture is not settled even for planar graphs. In this paper we give a survey on total coloring of graphs.
\section{Introduction} Let $G$ be a simple graph with vertex set $V(G)$ and edge set $E(G)$. An element of $G$ is a vertex or an edge of $G$. The \textit{total coloring} of a graph $G$ is an assignment of colors to the vertices and edges such that no two incident or adjacent elements receive the same color. The \textit{total chromatic number} of $G$, denoted by $\chi''(G)$, is the least number of colors required for a total coloring. Clearly, $\chi''(G)\geq \Delta(G)+1$, where $\Delta(G)$ is the maximum degree of $G$. Behzad~\cite{76,77} and Vizing~\cite{78} independently posed a conjecture called the Total Coloring Conjecture (TCC), which states that for any simple graph $G$, the total chromatic number is either $\Delta(G)+1$ or $\Delta(G)+2$. In~\cite{86}, Molloy and Reed gave a probabilistic approach to prove that for sufficiently large $\Delta(G)$, the total chromatic number is at most $\Delta(G)+10^{26}$. If $\chi''(G)=\Delta(G)+1$ then $G$ is known as a type-I graph, and if $\chi''(G)=\Delta(G)+2$ then $G$ is a type-II graph. In this paper, we present a comprehensive survey on total coloring. Yap~\cite{79} gave a nice survey on total colorings that covers the results till 1995. Therefore, our survey covers results from 1996 onwards. There are four sections in this paper. In the second section, we focus on results on planar graphs. In~\cite{80}, Borodin gave a survey of results on total colorings of planar graphs up to 2009. There are several improved results on planar graphs since then. Thus, we only present the results from 2010 onwards. For the earlier results we refer the readers to the two excellent earlier surveys mentioned above. The third section of this paper consists of results on non-planar graphs. In this paper, we also prove that TCC holds for unitary Cayley graphs, mock threshold graphs and odd graphs. In the last section, we survey the results on complexity aspects of total coloring.
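As a quick illustration of the definitions above, the following sketch greedily colors the elements of a small graph and checks that the coloring is a proper total coloring. Greedy coloring is not guaranteed to use only $\Delta(G)+2$ colors in general, so this is a demonstration of the definitions, not evidence for TCC; the representation of elements as tagged tuples is our own.

```python
def incident(a, b, edgeset):
    """Whether two elements, each of the form ('v', v) or
    ('e', frozenset({u, w})), are adjacent or incident."""
    (ka, xa), (kb, xb) = a, b
    if ka == 'v' and kb == 'v':
        return frozenset((xa, xb)) in edgeset   # adjacent vertices
    if ka == 'e' and kb == 'e':
        return bool(xa & xb)                    # edges sharing an endpoint
    v, e = (xa, xb) if ka == 'v' else (xb, xa)
    return v in e                               # vertex incident to edge

def greedy_total_coloring(vertices, edges):
    """Assign each element the smallest color not already used by an
    incident or adjacent element."""
    edgeset = {frozenset(e) for e in edges}
    elems = [('v', v) for v in vertices] + [('e', frozenset(e)) for e in edges]
    color = {}
    for x in elems:
        used = {c for y, c in color.items() if incident(x, y, edgeset)}
        k = 0
        while k in used:
            k += 1
        color[x] = k
    return color
```

On the 4-cycle $C_4$ (where $\Delta = 2$), this greedy pass happens to use exactly $4 = \Delta + 2$ colors.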
\section{Planar graphs} In this section, we consider the results related to planar graphs (graphs that have a plane embedding). Many of the results on total coloring of planar graphs are based on maximum degree and girth constraints. One of the most fruitful techniques on planar graphs is the Discharging Method. While one can say that the discharging method has been used in graph theory for more than 100 years, it came to prominence when it was used to prove the four color theorem by Appel and Haken. Since then, the method has been applied to many types of problems (including graph embeddings and decompositions, spread of infections in networks, geometric problems, etc.). It is especially useful for dealing with planar graphs. An excellent guide to the method of discharging is given by Cranston and West~\cite{98}. A rough sketch of using the discharging method is as follows~\cite{99}: \textbf{Charging phase:} 1. Assign initial charges to certain elements of a graph (vertices, edges, faces, etc.). 2. Compute the total charge assigned to the whole graph (for planar graphs typically using Euler's formula). \textbf{Discharging phase:} 3. Redistribute charge in the graph according to a set of discharging rules. 4. Compute the total charge again (using the specific properties of the graph), and derive a conclusion. A configuration in a graph $G$ can be any structure in $G$ (often a specified sort of subgraph). A configuration is reducible for a graph property $Q$ if it cannot occur in a minimal graph not having property $Q$. The method of discharging is used to show that a set of reducible configurations is unavoidable in the class of graphs being discussed, which establishes that the property $Q$ cannot have a counterexample in the class. Let $d_G(v)$ or simply $d(v)$ denote the degree (number of neighbors) of vertex $v$ in $G$, and let $d(G)$ denote the average of the vertex degrees in $G$.
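As a tiny numerical check of the charging phase, consider a standard charge assignment for a connected plane graph: give each vertex $v$ charge $d(v)-6$ and each face $f$ charge $2d(f)-6$. Euler's formula $|V|-|E|+|F|=2$ together with $\sum_v d(v) = \sum_f d(f) = 2|E|$ forces the total charge to be $2E - 6V + 4E - 6F = -6(V - E + F) = -12$. The function below and the cube/tetrahedron examples are our own illustration of this identity.

```python
def total_charge(vertex_degrees, face_lengths):
    """Total charge under ch(v) = d(v) - 6 and ch(f) = 2 d(f) - 6.
    For any connected plane graph this always sums to -12."""
    return (sum(d - 6 for d in vertex_degrees)
            + sum(2 * l - 6 for l in face_lengths))
```

A discharging proof then moves charge around so that, in a hypothetical minimal counterexample, every element ends up with nonnegative charge, contradicting the fixed negative total.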
Degree charging is the assignment to each vertex $v$ of an initial charge equal to $d(v)$. In the following, we present the total coloring results on planar graphs. \\ The total coloring conjecture was verified for planar graphs with $\Delta(G)\leq 5$~\cite{kos96}. Sanders and Zhao~\cite{95} showed that every planar graph with $\Delta(G)\leq 7$ is 9-total colorable. Yap~\cite{79} verified TCC for planar graphs with $\Delta(G)= 8$. In~\cite{kss08}, Kowalik et al. proved that any planar graph with $\Delta(G)\geq9$ is type-I. Shen et al.~\cite{91} posed a conjecture on the total coloring of planar graphs which states that ``Planar graphs with $4\leq \Delta(G)\leq 8$ are $\Delta(G)+1$-total colorable''. Wang et al.~\cite{12} proved that for a planar graph $G$ with maximum degree $\Delta(G)$ and girth $g$ such that $G$ has no cycles of length from $g+1$ to $t, \ t>g$, the total chromatic number is $\Delta(G)+1$ provided $(\Delta(G), g, t)\in \{(5,4,6), (4,4,17)\}$ or $\Delta(G)=3$ and $(g,t)\in\{(5,13), (6,11), (7,11), (8,10), (9,10)\}$, where each vertex is incident with at most one $g$-cycle. For more details we refer to the excellent survey by Borodin~\cite{80}. \begin{center} \scalebox{1} { \begin{pspicture}(0,-2.5189064)(5.8984375,2.5189064) \pstriangle[linewidth=0.04,dimen=outer](2.6135938,0.72046876)(1.64,1.24) \psdots[dotsize=0.2](2.6335938,1.9604688) \psdots[dotsize=0.2](3.4135938,-0.33953115) \usefont{T1}{ptm}{m}{n} \rput(2.6264062,2.3304687){$u_1$} \usefont{T1}{ptm}{m}{n} \rput(4.0264063,0.85046875){$u_3$} \usefont{T1}{ptm}{m}{n} \rput(1.3464062,0.6904687){$u_2$} \usefont{T1}{ptm}{m}{n} \rput(4.0264063,-0.32953125){$u_4$} \usefont{T1}{ptm}{m}{n} \rput(4.0064063,-1.8095311){$u_5$} \psline[linewidth=0.04cm](3.4140625,0.73890626)(3.3940625,-1.6810937) \usefont{T1}{ptm}{m}{n} \rput(2.9232812,-2.2910938){Fig. 1.
Reducible configuration from~\cite{91}} \end{pspicture} } \end{center} First we start with results involving planar graphs with maximum degree at least six. As far as we know, the first work on total coloring with $\Delta(G)=6$ was given by Wang et al.~\cite{92}. They verified TCC for planar graphs without 4-cycles. Shen et al. improved the result by showing that planar graphs without 4-cycles are type-I~\cite{91}, with the reducible configuration shown in Fig. 1. Sun et al.~\cite{60} proved that every planar graph $G$ with maximum degree 6 is totally 8-colorable if no two triangles in $G$ share a common edge (which implies that every vertex $v$ in $G$ is incident with at most $\lfloor \frac{d(v)}{2}\rfloor$ triangles. In other words, every vertex is missing either a 3-cycle or a 4-cycle). They also proved that a planar graph without adjacent triangles and without cycles of length $k$, $k\geq 5$, is type-I. Nicolas Roussel~\cite{62} strengthened the result of~\cite{60} by showing that a planar graph $G$ is total 8-colorable if every vertex of $G$ is missing some $k_v$-cycle for $k_v \in \{3,4,5,6,7,8\}$. In 2017, Zhu and Xu~\cite{51} improved the result of Sun et al.~\cite{60} to show that TCC holds for planar graphs $G$ with $\Delta(G)=6$, provided $G$ does not contain any subgraph isomorphic to a 4-fan. Further improvements on the total coloring of planar graphs with $\Delta(G)\geq 6$ are as follows. Zhang and Wang~\cite{65} showed that every planar graph with $\Delta(G)\geq 6$ and without adjacent short cycles (cycles of length at most 4) is $\Delta(G)+1$-total-colorable. Hou et al.~\cite{8} proved that a planar graph $G$ is type-I if $\Delta(G)\geq 5$ and $G$ contains neither 4-cycles nor 6-cycles, or $\Delta(G)\geq 6$ and $G$ contains neither 5-cycles nor 6-cycles. A chordal $k$-cycle is a $k$-cycle with at least one chord. In~\cite{6}, Wu et al.
improved the result of~\cite{8} by showing that a planar graph with maximum degree $\Delta(G)$ is $M$-totally colorable if it contains neither a chordal 5-cycle nor a chordal 6-cycle, where $M=\max\{7,\Delta(G)+1\}$. Dong et al.~\cite{48} proved that a planar graph $G$ in which no 6-cycle has a chord satisfies TCC provided $\Delta(G)\geq 6$. Li~\cite{50} verified TCC for planar graphs with maximum degree six if, for each vertex $v$, there is an integer $k_v \in \{3,4,5,6\}$ such that $G$ has no $k_v$-cycle containing $v$. A notion closely related to total coloring of graphs is list total coloring of graphs. Suppose that a set $L(x)$ of colors, called a list of $x$, is assigned to each element $x\in V(G)\cup E(G)$. A total coloring $\phi$ is called a list total coloring of $G$, or an $L$-coloring, if $\phi(x)\in L(x)$ for each element $x\in V(G)\cup E(G)$. If $|L(x)|=k$ for every $x\in V(G)\cup E(G)$, then a total $L$-coloring is called a list total $k$-coloring; we say that $G$ is totally $k$-choosable, and the minimum integer $k$ for which $G$ is totally $k$-choosable is the total choosability of $G$. Liu et al.~\cite{49} proved that a planar graph $G$ is totally $(\Delta(G)+2)$-choosable (in particular, $(\Delta(G)+2)$-total colorable) whenever (1) $\Delta(G)\geq 7$ and $G$ has no adjacent triangles, or (2) $\Delta(G)\geq 6$ and $G$ has no intersecting triangles, or (3) $\Delta(G)\geq 5$ and $G$ has no adjacent triangles and no $k$-cycles for some integer $k\in \{5,6\}$. We now list the results on the total coloring of planar graphs with maximum degree at least 7. The first work in this direction is due to Sanders and Zhao~\cite{95}.
\begin{center} \scalebox{1} { \begin{pspicture}(0,-3.9289062)(12.44,3.8889062) \pstriangle[linewidth=0.04,dimen=outer](0.69804156,2.5747952)(1.14,1.16) \psline[linewidth=0.04cm](1.2280415,2.5947952)(1.2280415,0.81479514) \psdots[dotsize=0.2](0.6880415,3.714795) \psdots[dotsize=0.2](1.2480415,1.6947951) \usefont{T1}{ptm}{m}{n} \rput(0.453125,0.7189062){a} \rput{-178.91699}(9.035453,6.4149876){\pstriangle[linewidth=0.04,dimen=outer](4.548042,2.5547953)(1.68,1.22)} \pstriangle[linewidth=0.04,dimen=outer](4.5480413,1.3747951)(1.68,1.22) \psdots[dotsize=0.2](5.3680415,3.774795) \psdots[dotsize=0.2](5.3480415,1.3947952) \psline[linewidth=0.04cm](5.3480415,1.4147952)(5.6680417,0.93479514) \usefont{T1}{ptm}{m}{n} \rput(4.644219,0.8389062){b} \psdiamond[linewidth=0.04,dimen=outer](8.278042,2.714795)(0.73,0.92) \psline[linewidth=0.04cm](7.5880413,2.714795)(8.948042,2.714795) \psdots[dotsize=0.2](8.268042,3.6347952) \psdots[dotsize=0.2](8.268042,1.7947952) \psline[linewidth=0.04cm](8.288041,1.7747952)(8.288041,0.7747951) \usefont{T1}{ptm}{m}{n} \rput(8.247969,0.4989063){c} \psdiamond[linewidth=0.04,dimen=outer](1.79,-1.0510937)(0.55,1.14) \psline[linewidth=0.04cm](0.7,-2.1310937)(2.92,-2.1310937) \psline[linewidth=0.04](2.32,-1.0510937)(3.48,-1.0510937)(2.92,-2.1310937) \psline[linewidth=0.04](1.24,-1.0510937)(0.02,-1.0510937)(0.7,-2.1310937) \psdots[dotsize=0.2](2.32,-1.0710938) \psdots[dotsize=0.2](1.3,-1.0510937) \psdots[dotsize=0.2](2.92,-2.1110938) \psdots[dotsize=0.2](0.74,-2.1110938) \pspolygon[linewidth=0.04](6.5,-2.1310937)(7.12,-1.3310938)(7.1,-0.57109374)(6.5,-1.3310938)(5.88,-0.5310938)(5.9,-1.3310938) \psline[linewidth=0.04cm](6.48,-1.3110938)(6.5,-2.0910938) \psline[linewidth=0.04](7.12,-1.3110938)(7.88,-1.2910937)(7.14,-2.1310937)(5.9,-2.1110938)(5.22,-1.3310938)(5.92,-1.3310938) \psdots[dotsize=0.2](6.48,-1.2910937) \psdots[dotsize=0.2](7.14,-1.3110938) \psdots[dotsize=0.2](7.14,-2.1310937) \psdots[dotsize=0.2](5.92,-1.3310938) 
\psdots[dotsize=0.2](5.92,-2.1310937) \pspolygon[linewidth=0.04](10.82,-2.2910938)(11.44,-1.4910938)(11.42,-0.73109376)(10.82,-1.4910938)(10.2,-0.69109374)(10.22,-1.4910938) \psline[linewidth=0.04cm](10.8,-1.4710938)(10.82,-2.2510939) \psline[linewidth=0.04](11.44,-1.4710938)(12.2,-1.4510938)(11.46,-2.2910938)(10.22,-2.2710938)(9.54,-1.4910938)(10.24,-1.4910938) \psline[linewidth=0.04](10.8,-2.2910938)(10.32,-3.1510937)(9.66,-3.1510937)(10.22,-2.2710938) \psdots[dotsize=0.2](10.34,-3.1710937) \psdots[dotsize=0.2](10.2,-2.3310938) \psdots[dotsize=0.2](10.2,-1.4910938) \psdots[dotsize=0.2](10.84,-1.4510938) \psdots[dotsize=0.2](11.46,-1.5110937) \psdots[dotsize=0.2](11.48,-2.2710938) \pspolygon[linewidth=0.04](11.46,1.3089062)(12.32,2.5089064)(12.32,3.6489062)(11.5,2.5089064)(10.68,3.7289062)(10.68,2.4689062)(10.7,2.4489062) \psline[linewidth=0.04cm](11.5,2.5289063)(11.48,1.3289063) \psdots[dotsize=0.2](11.5,2.5089064) \psdots[dotsize=0.2](10.7,2.4689062) \psdots[dotsize=0.2](12.32,2.4689062) \usefont{T1}{ptm}{m}{n} \rput(11.464531,0.93890625){d} \usefont{T1}{ptm}{m}{n} \rput(1.668125,-2.5210938){e} \usefont{T1}{ptm}{m}{n} \rput(6.5314064,-2.5610938){f} \usefont{T1}{ptm}{m}{n} \rput(11.281406,-3.0010939){g} \usefont{T1}{ptm}{m}{n} \rput(5.4979687,-3.7010937){Fig. 2. Reducible configurations from ~\cite{5}} \end{pspicture} } \end{center} Shen and Wang~\cite{5} showed that the planar graphs with maximum degree 7 and without 5-cycles are 8-totally colorable. Fig.2. shows the reducible configuration used in ~\cite{5}. Chang et al. ~\cite{63} proved that a planar graph $G$ with maximum degree 7, with the additional property that for every vertex $v$, there is an integer $k_v \in \{3,4,5,6\}$ so that $v$ is not incident with any $k_v$-cycle, is type-I. 
Wang and Wu~\cite{46} proved that a planar graph of maximum degree $\Delta(G)\geq 7$ is $\Delta(G)+1$-totally colorable if no 3-cycle has a common vertex with a 4-cycle or no 3-cycle is adjacent to a cycle of length less than 6. In~\cite{13}, Wang et al. proved that any planar graph with maximum degree $\Delta(G)\geq 7$ and without intersecting 3-cycles (no two cycles of length 3 are incident with a common vertex) has total chromatic number $\Delta(G)+1$; the same holds without intersecting 5-cycles~\cite{14}. Wu et al.~\cite{1} proved that for a planar graph with maximum degree at least 7 and without adjacent 4-cycles the total chromatic number is $\Delta(G)+1$. The total chromatic number of a planar graph with $\Delta(G)\geq 7$ and without chordal 6-cycles~\cite{15}, or without chordal 7-cycles~\cite{68}, is $\Delta(G)+1$. We now turn to the results on maximum degree at least 8. There are many works on planar graphs with maximum degree at least 8. Yap~\cite{79} verified the TCC for planar graphs with $\Delta(G)\geq 8$. Roussel and Zhu~\cite{7} proved that a planar graph $G$ with maximum degree 8 satisfies $\chi''(G)=9$ if every vertex $x$ of $G$ is missing some $k_x$-cycle, $k_x\in \{3,4,5,6,7,8\}$. This is an improvement over~\cite{103}. The reducible configurations used by Roussel and Zhu are given in Fig. 3. Further, Wang et al.~\cite{47} strengthened the result and proved that for a planar graph with $\Delta(G)\geq 8$, if for every vertex $v \in V$ there exist two integers $i_v, j_v \in \{3,4,5,6,7,8\}$ such that $v$ is not incident with intersecting $i_v$-cycles and $j_v$-cycles, then the total chromatic number of $G$ is $\Delta(G)+1$. Wang et al.~\cite{9} showed that the total chromatic number of planar graphs with $\Delta(G)\geq 8$ is $\Delta(G)+1$ if, for every vertex $v\in V(G)$, there exist two integers $i_v, j_v \in \{3,4,5,6,7\}$ such that $v$ is not incident with adjacent $i_v$-cycles and $j_v$-cycles.
\begin{center} \scalebox{1} { \begin{pspicture}(0,-1.7489063)(4.67375,1.7489063) \psdiamond[linewidth=0.04,dimen=outer](1.85375,0.45890626)(1.54,1.19) \psline[linewidth=0.04cm](1.85375,1.6089063)(1.85375,-0.69109374) \psline[linewidth=0.04cm](3.37375,0.46890625)(4.59375,0.44890624) \psdots[dotsize=0.2](1.85375,1.6289062) \psdots[dotsize=0.2](0.33375,0.46890625) \psdots[dotsize=0.2](1.85375,-0.69109374) \psdots[dotsize=0.2](4.55375,0.44890624) \psdots[dotsize=0.2](3.35375,0.46890625) \usefont{T1}{ptm}{m}{n} \rput(1.7698437,-1.5210937){Fig. 3. Reducible configuration from~\cite{7}} \end{pspicture} } \end{center} Xu and Wu~\cite{66} proved that if $G$ is a planar graph with maximum degree at least 8 and every 7-cycle of $G$ contains at most two chords, then $G$ has a $(\Delta(G)+1)$-total coloring. Wang et al.~\cite{69} considered planar graphs $G$ with maximum degree $\Delta(G)\geq 8$ and showed that if $G$ contains no adjacent $i,j$-cycles with two chords for some $i,j\in \{5,6,7\}$, then $G$ is total $(\Delta(G)+1)$-colorable. Jian Chang et al.~\cite{11} proved that planar graphs with maximum degree at least 8 and without 5-cycles with two chords are $(\Delta(G)+1)$-total colorable. In~\cite{70}, Cai et al. proved that planar graphs with maximum degree 8 and without intersecting chordal 4-cycles are 9-totally colorable. Xu et al.~\cite{74} proved that if $G$ is a planar graph with maximum degree at least 8 and every 6-cycle of $G$ contains at most one chord, or no two chordal 6-cycles are adjacent, then $G$ has a $(\Delta(G)+1)$-total coloring. In 2014, Wang et al.~\cite{16} showed that a planar graph with $\Delta(G)\geq 8$ and without adjacent cycles of size $i$ and $j$, for some $3\leq i\leq j \leq 5$, is $(\Delta(G)+1)$-total colorable. Later, Wang et al.~\cite{10} proved that a planar graph with $\Delta(G)\geq 8$ is type-I if no vertex $v\in V(G)$ is incident with a chordal 6-cycle, a chordal 7-cycle, or a 2-chordal 5-cycle.
This is a generalisation of~\cite{16}. In~\cite{17}, Chang et al. proved that a planar graph with maximum degree 8 is 9-total colorable if every vertex $v$ is incident with at most $d(v)-2\lfloor\frac{d(v)}{5} \rfloor$ triangles. This is a generalisation of~\cite{70} and~\cite{11}. There are some other classes of graphs which are similar to planar graphs; we discuss the total coloring of some of them. A graph is 1-planar if it can be drawn in the plane so that each edge is crossed at most once. The following are some results on total coloring of 1-planar graphs. Zhang et al.~\cite{45} proved that every 1-planar graph with $\Delta(G)\geq 16$ is $(\Delta(G)+2)$-total colorable, and $(\Delta(G)+1)$-total colorable if $\Delta(G)\geq 21$. J\'{u}lius Czap~\cite{2} studied 1-planar graphs and gave some upper bounds for the total chromatic number. He showed that a 1-planar graph with $\Delta(G) \geq 10$ satisfies TCC if $\chi(G)\leq 4$. He also proved that if $G$ is a 1-planar graph without adjacent triangles and with $\Delta(G) \geq 8$, then $\chi''(G)\leq \Delta(G)+3$, and if moreover $\chi(G)\leq 4$, then $\chi''(G)\leq \Delta(G)+2$. Xin Zhang et al.~\cite{4} showed that for a 1-planar graph $G$ with $\Delta(G)\leq r$, where $r\geq 13$ is an integer, $\chi''(G)\leq r+2$. An outerplanar graph is a planar graph that has a plane embedding such that all vertices lie on the boundary of the outer face. In~\cite{3}, Y. Wang and W. Wang characterized the adjacent vertex distinguishing total chromatic number of outerplanar graphs. They proved that if $G$ is an outerplane graph with $\Delta(G)\geq 4$, then $\chi''_a(G)\leq \Delta(G)+2$, and so the TCC is satisfied. They also proved that if $G$ is an outerplane graph with $\Delta(G)\geq 4$ and without adjacent vertices of maximum degree, then $\chi''_a(G)= \Delta(G)+1$, and hence $G$ is a type-I graph.
A graph is pseudo-outerplanar if each of its blocks has an embedding in the plane such that all vertices lie on a fixed circle and all edges lie inside the disk of this circle, with each edge crossing at most one other edge. In~\cite{67}, Xin Zhang and Guizhen Liu verified the TCC for pseudo-outerplanar graphs and proved that the total chromatic number of every pseudo-outerplanar graph with $\Delta(G)\geq 5$ is $\Delta(G)+1$. Xin Zhang~\cite{64} proved that every pseudo-outerplanar graph with $\Delta(G)\geq 5$ is totally $(\Delta(G)+1)$-choosable, and hence its total chromatic number also has this upper bound. Toroidal graphs are graphs embedded on the torus without crossing edges, and a graph is 1-toroidal if it can be drawn on the torus so that each edge is crossed at most once. Tao Wang~\cite{72} proved that if $G$ is a 1-toroidal graph with maximum degree $\Delta(G)\geq 11$ and without adjacent triangles, then $G$ has a total coloring with at most $\Delta(G)+2$ colors. \section{Non-planar graphs} \subsection{Circulant Graph} For a sequence of positive integers $1\leq d_1<d_2<\ldots<d_l\leq \lfloor \frac{n}{2} \rfloor$, the circulant graph $G=C_n(d_1,d_2,\ldots,d_l)$ has vertex set $V=Z_n=\{0,1,2,\ldots,n-1\}$, two vertices $x$ and $y$ being adjacent iff $x=(y\pm d_i) \bmod n$ for some $i$, $1\leq i\leq l$. Riadh Khennoufa and Olivier Togni~\cite{61} studied total colorings of circulant graphs and proved that the 4-regular circulant graphs $C_{5p}(1,k)$, for any positive integer $p$ and $k<\frac{5p}{2}$ with $k\equiv 2$ or $3 \pmod 5$, and $C_{6p}(1,k)$, for $p\geq 3$ and $k< 3p$ with $k\equiv 1$ or $2 \pmod 3$, are type-I. A graph is a \textit{power of cycle}, denoted $C^k_n$, where $n$ and $k$ are integers with $1\leq k<\lfloor\frac{n}{2}\rfloor$, if $V(C^k_n)=\{v_0,v_1,\ldots,v_{n-1}\}$ and $E(C^k_n)=E^1 \cup E^2 \cup \ldots\cup E^k$, where $E^i=\{e_0^i,e_1^i,\ldots,e^i_{n-1}\}$ and $e_j^i=(v_j,v_{(j+i) \bmod n})$ for $0 \leq j \leq n-1$ and $1\leq i\leq k$.
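The two definitions above are closely related: the power of a cycle $C_n^k$ is exactly the circulant graph $C_n(1,2,\ldots,k)$. The following minimal sketch generates both edge sets from the definitions (the function names are ours, not from the cited papers):

```python
def circulant_edges(n, ds):
    """Edge set of the circulant graph C_n(d_1, ..., d_l):
    x ~ y iff x = (y +/- d_i) mod n for some i."""
    edges = set()
    for x in range(n):
        for d in ds:
            edges.add(frozenset({x, (x + d) % n}))
    return edges

def power_of_cycle_edges(n, k):
    """Edge set of C_n^k: v_j ~ v_{(j+i) mod n} for 1 <= i <= k."""
    edges = set()
    for j in range(n):
        for i in range(1, k + 1):
            edges.add(frozenset({j, (j + i) % n}))
    return edges

# C_n^k coincides with the circulant graph C_n(1, 2, ..., k);
# for k < n/2 the graph is 2k-regular with nk edges.
assert power_of_cycle_edges(10, 3) == circulant_edges(10, [1, 2, 3])
assert len(power_of_cycle_edges(10, 3)) == 30
```

The identity $C_n^k=C_n(1,\ldots,k)$ is why results on circulant graphs and powers of cycles sit naturally in the same subsection.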
Campos and de Mello~\cite{39} proved that $C_n^2$, $n\neq 7$, is type-I and that $C_7^2$ is type-II. They~\cite{18} verified the TCC for powers of cycles $C_n^k$, $n$ even and $2<k<\frac{n}{2}$, and also showed that one can obtain a $(\Delta(G)+2)$-total coloring for these graphs in polynomial time. They also proved that $C_n^k$ with $n\equiv 0 \pmod{\Delta(C_n^k)+1}$ is type-I, and they proposed the following conjecture. \begin{con} Let $G=C_n^k$, with $2\leq k<\lfloor \frac{n}{2} \rfloor$. Then, $\chi''(G)=\begin{cases}\Delta(G)+2, & \text{ if }k>\frac{n}{3}-1 \ \text{and } n \ \text{is \ odd}\\ \Delta(G)+1 & \text{ otherwise}. \end{cases}$ \end{con} Geetha et al.~\cite{96} proved this conjecture for certain values of $n$ and $k$. They also verified the TCC for the complements of powers of cycles $\overline{C^k_n}$. In particular, they proved that $\overline{C_n^2}$ is type-II for $n\leq 8$. Cayley graphs are those whose vertices are the elements of a group, with adjacency defined by a subset of the group. Cayley graphs contain long paths and have many other nice combinatorial properties; they have been used to construct other combinatorial structures, various communication networks, and difference sets in design theory. Cayley graphs have also been used to analyze algorithms for computing with groups. Let $\Gamma$ be a multiplicative group with identity 1. For $S\subseteq \Gamma$ with $1\notin S$ and $S^{-1}=\{s^{-1}:s\in S\}=S$, the \textit{Cayley graph} $X=Cay(\Gamma, S)$ is the undirected graph having vertex set $V(X)=\Gamma$ and edge set $E(X)=\{(a,b): ab^{-1}\in S\}$. For a positive integer $n>1$, the \textit{unitary Cayley graph} $X_n=Cay(Z_n, U_n)$ is defined by the additive group of the ring $Z_n$ of integers modulo $n$ and the multiplicative group $U_n$ of its units. If we represent the elements of $Z_n$ by the integers $0,1,\ldots,n-1$, then it is well known that \begin{center} $U_n=\{a\in Z_n: \gcd(a,n)=1\}$.
\end{center} So $X_n$ has vertex set $V(X_n)=Z_n=\{0,1,2,\ldots,n-1\}$ and edge set \begin{center} $E(X_n)=\{(a,b): a,b \in Z_n, \gcd(a-b,n)=1\}$. \end{center} Boggess et al.~\cite{bhjk08} studied the structure of unitary Cayley graphs; they also discussed the chromatic number, vertex and edge connectivity, planarity and crossing number. Klotz and Sander~\cite{ws07} determined the clique number, the independence number and the diameter, and gave a necessary and sufficient condition for the perfectness of $X_n$. The graph $X_n$ is regular of degree $|U_n|=\varphi(n)$, where $\varphi$ denotes the Euler totient function. Let the prime factorization of $n$ be $p_1^{\alpha_1} p_2^{\alpha_2}\cdots p_t^{\alpha_t}$, where $p_1<p_2<\ldots<p_t$. If $n=p$ is a prime number, then $X_n=K_p$ is the complete graph on $p$ vertices. If $n=p^\alpha$ is a prime power, then $X_n$ is a complete $p$-partite graph. In the following theorem we prove that the TCC holds for unitary Cayley graphs. \begin{thm} A unitary Cayley graph $X_n$ is $(\Delta(X_n)+2)$-total colorable. \end{thm} \begin{proof} We know that a unitary Cayley graph can be obtained from a balanced $r$-partite graph by deleting some edges. Suppose $n=p$ is a prime number; then $X_n$ is the complete graph on $p$ vertices. Also, if $n=p^\alpha$ is a prime power, then $X_n$ is a complete $p$-partite graph, and the TCC holds for these two classes~\cite{79}. When $n=2k$, $k \in \mathbb{N}$, the unitary Cayley graph is a bipartite graph, and any bipartite graph is total colorable. Suppose now that $n$ is odd. As $p_1$ is the smallest prime factor of $n$, the sets $\{kp_1,kp_1+1,\ldots,(k+1)p_1-1\}$, for $k=0,1,2,\ldots,\frac{n}{p_1}-1$, induce $\frac{n}{p_1}$ vertex-disjoint cliques, each of order $p_1$ (any two elements of such a set differ by less than $p_1$, hence by a number coprime to $n$). Since $p_1$ is odd, we can totally color all the elements of these $\frac{n}{p_1}$ cliques using $p_1$ colors~\cite{79}. Now remove the edges of these cliques. The remaining graph is a $(\varphi(n)-p_1+1)$-regular graph whose vertices are already colored.
By Vizing's theorem, we can color the edges of this remaining graph with $\varphi(n)-p_1+2$ new colors. Thus we have used $\varphi(n)+2$ colors in total for the total coloring of $X_n$. \end{proof} In the following section, we look at graph products and papers on total coloring of product graphs. \subsection{Product Graphs} Graph products were first defined by Sabidussi~\cite{93} and Vizing~\cite{94}. A lot of work has been done on various topics related to graph products, but on the other hand there are still many questions open. There are four standard graph products, namely, the \textit{cartesian product} ($G \Box H$), the \textit{direct product} ($G \times H$), the \textit{strong product} ($G\boxtimes H$) and the \textit{lexicographic product} ($G \circ H$). In~\cite{93}, these products have been widely discussed with significant applications. The vertex sets of these products are the same: $V(G) \times V(H)$. The edge sets are $E(G \Box H)=\{((g,h),(g',h'))| \ g=g', \ hh'\in E(H), \ \text{or} \ gg' \in E(G), \ h=h'\}$, \\ $E(G \times H)= \{((g,h),(g',h'))| \ gg' \in E(G) \ \text{and} \ hh'\in E(H)\}$, \\ $E(G\boxtimes H)=E(G\Box H)\cup E(G\times H)$ and \\ $E(G \circ H)=\{((g,h),(g',h'))| \ g=g', \ hh'\in E(H), \ \text{or} \ gg' \in E(G) \}$. \\ The first three products are commutative; the lexicographic product is associative but not commutative.\\ The total coloring conjecture was verified for the cartesian product of two graphs. Seoud et al.~\cite{35, SMWW97} determined the total chromatic number of the join of two paths, the cartesian product of two paths, the cartesian product of a path and a cycle, certain classes of the corona of two graphs, and the theta graphs. Kemnitz and Marangio~\cite{36} classified the cartesian product of complete graphs $K_n \Box K_m$ as type-I if $n\geq m\geq 4$ with $n\equiv 0 \pmod 4$, or $n>m \geq 4$ with $n\equiv 2 \pmod 4$, where $n$ and $m$ are even; and as type-II if $n$ is even, $m$ is odd and $n>(m-1)^2$.
They also obtained the total chromatic number of the cartesian products $C_n \Box C_m$, $K_n \Box H$ and $C_n \Box H$, where $H$ is a bipartite graph. Using the fact that if $G$ is a regular graph with adjacent vertex distinguishing chromatic index $\chi'_a(G)=\Delta(G)+1$ then $\chi''(G)=\Delta(G)+1$, Baril et al.~\cite{BHO12} proved that $K_n \Box K_m$ is type-I if $m$ and $n$ are even. There remain cases of these products of complete graphs which are not classified as type-I or type-II. The equitable total chromatic number of a graph $G$ is the smallest integer $k$ for which $G$ has a $k$-total coloring such that the numbers of vertices and edges colored with each color differ by at most one. Tong Chunling et al.~\cite{22} improved the results of Seoud et al.~\cite{SMWW97} and Kemnitz et al.~\cite{36} by showing that the cartesian product of two cycles $C_m$ and $C_n$, $m,n\geq 3$, has an equitable total 5-coloring; that is, $C_n \Box C_m$ is type-I. Zmazek and \v{Z}erovnik~\cite{19} proved that if the TCC holds for graphs $G$ and $H$, then it holds for the cartesian product $G \Box H$. They also proved that if the factor with the largest vertex degree is of type-I, then the product is also of type-I. \\ There are only a few results on total colorings of the other three products. Katja Prnaver and Bla\v{z} Zmazek~\cite{24} verified the conjecture for the direct product of a path and any graph $G$ with $\chi'(G)=\Delta(G)$. Geetha and Somasundaram~\cite{58} proved that the direct product of two complete graphs of even order is type-I and that the direct product of two cycles $C_m$ and $C_n$ is type-I for certain values of $m$ and $n$. They also proved that if $K_2\boxtimes H$ ($K_2 \circ H$) satisfies TCC, then $G\boxtimes H$ ($G \circ H$) satisfies TCC, where $G$ is any bipartite graph.
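For concreteness, the four edge sets defined at the start of this subsection can be generated directly from the definitions. The sketch below does exactly that (names are ours; it is an illustration, not code from any cited paper):

```python
from itertools import product

def product_edges(VG, EG, VH, EH, kind):
    """Edge set of a standard graph product on V(G) x V(H).
    EG, EH are sets of frozenset 2-element edges;
    kind is one of 'cartesian', 'direct', 'strong', 'lex'."""
    edges = set()
    for (g, h), (g2, h2) in product(product(VG, VH), repeat=2):
        if (g, h) == (g2, h2):
            continue
        cart = (g == g2 and frozenset({h, h2}) in EH) or \
               (frozenset({g, g2}) in EG and h == h2)
        direct = frozenset({g, g2}) in EG and frozenset({h, h2}) in EH
        if kind == 'cartesian':
            ok = cart
        elif kind == 'direct':
            ok = direct
        elif kind == 'strong':
            ok = cart or direct
        else:  # lexicographic
            ok = (g == g2 and frozenset({h, h2}) in EH) or frozenset({g, g2}) in EG
        if ok:
            edges.add(frozenset({(g, h), (g2, h2)}))
    return edges

# Sanity checks on K_2 with itself:
VG = VH = [0, 1]
EG = EH = {frozenset({0, 1})}
assert len(product_edges(VG, EG, VH, EH, 'cartesian')) == 4  # K_2 box K_2 = C_4
assert len(product_edges(VG, EG, VH, EH, 'direct')) == 2     # two disjoint edges
assert len(product_edges(VG, EG, VH, EH, 'strong')) == 6     # K_4
```

The checks on $K_2$ already show why the direct product is delicate for total coloring: $K_2\times K_2$ is disconnected even though both factors are connected.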
Mohan et al.~\cite{59} proved that the corona product of two graphs $G$ and $H$ is always type-I, provided $G$ is total colorable and $H$ is either a cycle, a complete graph or a bipartite graph. The \textit{deleted lexicographic product} of two graphs $G$ and $H$, denoted by $D_{lex}(G,H)$, is the graph with vertex set $V(G) \times V(H)$ and edge set $\{((g, h), (g', h') ) : (g, g') \in E(G) $ and $ h \neq h'$, or $\ (h, h') \in E(H)$ and $ g=g' \}$. As with the lexicographic product, $D_{lex}(G, H)$ and $D_{lex}(H, G)$ are not necessarily isomorphic. Recently, Vignesh et al.~\cite{VGS18} proved that if $G$ is a bipartite graph and $H$ is any total colorable graph, then $G\circ H$ is also total colorable. They further showed that for any class-I graph $G$ and any graph $H$ with at least 3 vertices, $D_{lex}(G, H)$ is total colorable; in particular, if $H$ is class-I, then $D_{lex}(G, H)$ is type-I. We now present the results on Sierpi\'{n}ski graphs. \subsection{Sierpi\'{n}ski Graphs} The Sierpi\'{n}ski graph $S(n,k)$ (also written $S(n,K_k)$, where $K_k$ is the complete graph on $k$ vertices), $n,k\geq1$, $n,k \in \mathbb{N}$, is defined on the vertex set $\{1, 2, \ldots,k\}^n$. Two different vertices $u=(u_1, u_2,\ldots, u_n)$ and $v=(v_1, v_2,\ldots,v_n)$ are adjacent if and only if there exists an $h\in \{1, 2, \ldots, n\}$ such that\\ \indent a) $u_t=v_t$ for $t=1,2,\ldots,h-1$;\\ \indent b) $u_h\neq v_h$; and\\ \indent c) $u_t=v_h \text { and } v_t=u_h$ for $t=h+1,\ldots, n$.\\ Sierpi\'{n}ski gasket graphs $S_n$ were introduced by Scorer, Grundy and Smith~\cite{40}. The graph $S_n$ is obtained from the Sierpi\'{n}ski graph $S(n,3)$ by contracting every edge of $S(n,3)$ that lies in no triangle.
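The adjacency rule a)--c) above translates directly into code. The following sketch (function names are ours) builds $S(n,k)$ and checks two small cases: $S(1,k)=K_k$, and $S(2,3)$, which consists of three triangles joined by three bridge edges:

```python
from itertools import product

def sierpinski_adjacent(u, v):
    """Adjacency test for S(n, k): u, v are n-tuples over {1, ..., k}."""
    if u == v:
        return False
    for h in range(len(u)):
        if u[h] == v[h]:
            continue  # condition a): equal prefix up to the first difference
        # h is the first position where u and v differ (condition b));
        # condition c): the tails are the two differing symbols, swapped.
        return all(u[t] == v[h] and v[t] == u[h] for t in range(h + 1, len(u)))
    return False

def sierpinski_edges(n, k):
    verts = list(product(range(1, k + 1), repeat=n))
    return {frozenset({u, v}) for u in verts for v in verts
            if sierpinski_adjacent(u, v)}

assert len(sierpinski_edges(1, 4)) == 6    # S(1, 4) = K_4
assert len(sierpinski_edges(2, 3)) == 12   # 3 triangles + 3 bridges
```

The recursion $|E(S(n,k))| = k\,|E(S(n-1,k))| + \binom{k}{2}$ is visible in the second assertion.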
Marko Jakovac and Sandi Klav\v{z}ar~\cite{90} generalized the graphs $S(n,3)$ to the Sierpi\'{n}ski graphs $S(n,k)$, $k\geq 3$, and determined the total colorings of the Sierpi\'{n}ski gasket graphs $S_n$. In particular, they proved that, for any $n\geq 2$, the graphs $S(n,k)$ with odd $k \geq 3$ and the graphs $S(n, 4)$ are type-I. For even values of $k \geq 6$, they believed that $S(n, k)$ is always type-II, and hence they conjectured that $S(n, k)$ is type-II. Three years later, Andreas M. Hinz and Daniele Parisse~\cite{20} disproved this conjecture using canonical total colorings. They also proved that the Hanoi graphs $H_p^n$ are type-I. Geetha and Somasundaram~\cite{57} considered the generalized Sierpi\'{n}ski graphs $S(n, G)$ and proved that $S(n, G)$ is type-I for certain classes of $G$. We now turn our attention to chordal graphs. \subsection{Chordal Graphs} Chordal graphs are graphs in which every induced cycle is a 3-cycle. They form a very important class of graphs due to the fact that they have good algorithmic properties. The TCC has been verified for several subfamilies of chordal graphs, such as interval graphs, split graphs and strongly chordal graphs. A graph $G$ is called a split graph if its vertex set can be partitioned into two subsets $U$ and $V$ such that $U$ induces a clique and $V$ is an independent set in $G$. A color diagram $\mathcal{C}=\{R_1, R_2,\ldots, R_k\}$ of frame $d=(d_1,d_2,\ldots,d_k)$ is an ordered set of color arrays, where the color array $R_i=\{c_{i,1},c_{i,2},\ldots,c_{i,d_i}\}$, of length $d_i$, consists of distinct colors for each $1\leq i\leq k$. In~\cite{27}, Chen et al. proved that split graphs satisfy the TCC. They also proved that if $G$ is a split graph with $\Delta(G)$ even, then $G$ is type-I. They extensively used the concept of color diagram to prove these results.
Campos et al.~\cite{28} gave conditions for a split-indifference graph $G$ to be type-II and constructed a $(\Delta(G)+1)$-total coloring for the remaining ones. Hilton~\cite{97} proved the following (known as Hilton's condition): let $G$ be a simple graph with an even number of vertices. If $G$ has a universal vertex, then $G$ is type-II if and only if $\left| E(\overline{G}) \right| + \alpha(\overline{G}) < \frac{|V(G)|}{2}$, where $\alpha(\overline{G})$ is the cardinality of a maximum independent set of edges of $\overline{G}$. Three-clique graphs are a generalization of the split-indifference graphs. Campos et al.~\cite{28} proposed a conjecture based on Hilton's condition. \begin{con} A 3-clique graph is type-II if and only if it satisfies Hilton's condition. \end{con} A graph is dually chordal if it is the clique graph of a chordal graph. The class of dually chordal graphs generalizes known subclasses of chordal graphs such as doubly chordal graphs, strongly chordal graphs, interval graphs, and indifference graphs. Figueiredo et al.~\cite{41} proved that the TCC holds for dually chordal graphs. A pullback from $G$ to $G'$ is a function $f: V(G) \rightarrow V(G')$ such that: (i) $f$ is a homomorphism and (ii) $f$ is injective when restricted to the neighborhood of each $x\in V(G)$. Based on this pullback method, they proved that if $G$ is dually chordal and $\Delta(G)$ is even, then $G$ is type-I. A family of sets satisfies the Helly property if any subfamily of pairwise intersecting sets has nonempty intersection. A graph is neighborhood-Helly when the set $\{N(v): v \in V(G) \}$ satisfies the Helly property. A characterization of dually chordal graphs says that $G$ is dually chordal if and only if $G$ is neighborhood-Helly and $G^2$ is chordal. It is also proved in~\cite{41} that the TCC holds for neighborhood-Helly graphs $G$ such that $G^2$ is perfect.
They also proposed the following problem, which is still open: determine the largest graph class for which all odd maximum degree graphs are class-I and all even maximum degree graphs are type-I.\\ A graph is weakly chordal if neither the graph nor its complement has an induced cycle on five or more vertices. A simple graph $G$ on $[n]= \{1, 2, \ldots , n\}$ is threshold if $G$ can be built sequentially from the empty graph by adding vertices one at a time, where each new vertex is either isolated (nonadjacent to all the previous) or dominant (connected to all the previous). A graph $G$ is said to be mock threshold if there is a vertex ordering $v_1, \ldots , v_n$ such that for every $i \ (1 \leq i \leq n)$ the degree of $v_i$ in $G[v_1, \ldots , v_i]$ is 0, 1, $i-2$, or $i-1$. Mock threshold graphs are a simple generalization of threshold graphs; they are perfect, and indeed weakly chordal, but not necessarily chordal~\cite{bsz18}. Moreover, the complement of a mock threshold graph is also mock threshold. In the following, we prove the TCC for mock threshold graphs. \noindent \textbf{Note:} A total coloring of $K_n$ can be constructed as follows (this total coloring is due to Hinz and Parisse~\cite{20}). When $n$ is even, we first construct an edge coloring of $K_n$ and extend it. We denote $[n]_0=\{0,1,2,\ldots,n-1\}$. For $k\in [n]_0$, let $\tau_k$ be the transposition of $k$ and $n-1$ on $[n]_0$. For even $n$, $c_n(i,j)=(\tau_i(j)+\tau_j(i)+2)\bmod (n+1)$, for $i,j \in [n]_0$, $i\neq j$, defines an $(n+1)$-edge coloring. In this coloring assignment, line $k\in [n]_0$ has the missing colors $k$ and $(k+1)\bmod n$. We set $c_n(i)=i$ for all $i\in [n]_0$.\vspace{0.3cm} When $n$ is odd, we use the same coloring of $K_{n-1}$. In the coloring assignment of $K_{n-1}$, the color $(k+1) \bmod (n-1)$ is still missing in line $k\in [n-1]_0$.
We use these colors on the edges incident with the $n$-th vertex, and assign the color $n$ to the $n$-th vertex. \begin{thm} The total coloring conjecture holds for any mock threshold graph $G$. \end{thm} \begin{proof} Consider the mock threshold graph $G$ with vertex ordering $v_1, v_2, \ldots, v_n$. We prove the theorem by induction on the induced subgraphs $G[v_1,v_2,\ldots,v_k]$. For $k\leq 4$, the maximum degree of every induced subgraph is at most 3, and we know that a graph with maximum degree at most 3 satisfies the TCC~\cite{kos96}.\vspace{0.3cm} Let us assume that $G[v_1,v_2,\ldots,v_k]$, $k\geq 5$, satisfies the TCC.\vspace{0.3cm} \noindent \textbf{Claim:} The graph $G[v_1,v_2,\ldots,v_k,v_{k+1}]$ satisfies the TCC.\vspace{0.3cm} The degree of the vertex $v_{k+1}$ in $G[v_1,v_2,\ldots,v_{k+1}]$ can be $0, 1, k-1$ or $k$. \noindent Case-1: Suppose $d(v_{k+1})=0$. In this case the vertex $v_{k+1}$ is isolated, and by the induction assumption $G[v_1,v_2,\ldots,v_k,v_{k+1}]$ satisfies the TCC. \noindent Case-2: Suppose $d(v_{k+1})=1$. In this case, the vertex $v_{k+1}$ is adjacent to exactly one vertex, say $v_i$, in $G[v_1, v_2, \ldots, v_k]$. Since $G[v_1,v_2,\ldots,v_k]$ is total colorable with at most $\Delta(G[v_1,v_2,\ldots,v_k])+2$ colors, at each vertex at least one color is missing. We assign the color missing at $v_i$ to the edge $(v_i,v_{k+1})$, and to the vertex $v_{k+1}$ we assign any color different from the colors of $v_i$ and of the edge $(v_i,v_{k+1})$. Therefore, $G[v_1, v_2, \ldots, v_{k+1}]$ satisfies the TCC. \noindent Case-3: Suppose $d(v_{k+1})=k-1$. Let us assume that the vertex $v_{k+1}$ is not adjacent to $v_i$, and also assume that $\Delta(G[v_1, v_2, \ldots, v_{k+1}])=k-1$. We consider the following two subcases: \noindent Subcase-1: $k$ is even. Since $k$ is even, $k+1$ is odd. Construct the complete graph on the vertices $v_1,v_2,\ldots,v_{i-1},v_{i+1},\ldots,v_{k+1}$ by adding the missing edges.
Now, color this complete graph of even order $k$ using the colors $\{0,1,\ldots, k\}$ as given in the note (taking $n=k$ there). In this coloring assignment there is one missing color at each vertex, and these missing colors are distinct. Now, color the edges $(v_i,v_j)$, for each neighbor $v_j$ of $v_i$, $j\neq i$, with the corresponding missing colors, and assign the color $k-1$ to the vertex $v_i$. To get a total coloring of $G[v_1,v_2,\ldots,v_{k},v_{k+1}]$, we remove the added edges; there is no change in the maximum degree. \noindent Subcase-2: $k$ is odd. In this case $k+1$ is even, say $2p$. It is known that a graph of order $2p$ with maximum degree $2p-2$ satisfies the TCC (see~\cite{hil90, chen92}). \noindent Case-4: Suppose $d(v_{k+1})=k$. The maximum degree of $G[v_1,v_2,\ldots,v_k,v_{k+1}]$ is $k$. Construct the complete graph on the vertex set $\{v_1, v_2, \ldots, v_k,v_{k+1}\}$. We know that the complete graph satisfies the TCC. After removing the added edges we get a total coloring of $G[v_1,v_2,\ldots,v_k,v_{k+1}]$. Hence, in all cases, the mock threshold graph satisfies the TCC. \end{proof} \subsection{Multipartite Graphs} Graph amalgamation~\cite{100} is one of the powerful techniques for various graph problems. A graph $H$ is an amalgamation of a graph $G$ if there exist a function $\phi$, called an amalgamation function, from $V(G)$ onto $V(H)$ and a bijection $\phi': E(G)\rightarrow E(H)$ such that $e$ joining $u$ and $v$ is in $E(G)$ if and only if $\phi'(e)$ joining $\phi(u)$ and $\phi(v)$ is in $E(H)$. The total coloring conjecture has been verified for some classes of multipartite graphs using the amalgamation technique. Dong and Yap~\cite{83} proved that the complete $p$-partite graph $K=K(r_1,r_2,\ldots, r_p)$, $r_1\leq r_2\leq \ldots\leq r_p$, is type-I if $r_2\leq r_3-2$ and $|V(K)|=2n$. The deficiency of a graph $G$ is defined to be def$(G)= \sum_{v\in V(G)} (\Delta(G) - d(v))$. Dalal and Rodger~\cite{82} proved that $K = K(r_1, \ldots , r_5)$ is type-II if and only if $|V(K)|\equiv 0 \pmod 2$ and def($K$) is less than the number of parts of $K$ of odd size.
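The deficiency used in these classifications is easy to evaluate for complete multipartite graphs: a vertex in a part of size $r_i$ has degree $|V(K)|-r_i$, so $\Delta(K)=|V(K)|-r_{\min}$ and def$(K)=\sum_i r_i(r_i-r_{\min})$. A minimal sketch of this computation and of the Dalal--Rodger criterion as stated above (function names are ours):

```python
def multipartite_deficiency(parts):
    """def(K) for K(r_1, ..., r_p): a vertex in a part of size r has
    degree n - r, so Delta = n - min(parts) and
    def(K) = sum_i r_i * (r_i - min(parts))."""
    r_min = min(parts)
    return sum(r * (r - r_min) for r in parts)

def dalal_rodger_type2(parts):
    """Criterion of Dalal and Rodger for K(r_1, ..., r_5) to be type-II:
    even order and def(K) less than the number of odd-size parts."""
    n = sum(parts)
    odd_parts = sum(1 for r in parts if r % 2 == 1)
    return n % 2 == 0 and multipartite_deficiency(parts) < odd_parts

# K(1,1,1,1,2): order 6 (even), def = 2, four odd parts.
assert multipartite_deficiency([1, 1, 1, 1, 2]) == 2
assert dalal_rodger_type2([1, 1, 1, 1, 2])
```

In particular, def$(K)=0$ exactly when all parts have equal size, which is why the balanced case behaves differently in these theorems.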
Dalal et al.~\cite{56} proved that the complete $p$-partite graph $K=K(r_1,r_2,\ldots, r_p)$ is type-I if and only if $K\neq K_{r,r}$ and, whenever $K$ has an even number of vertices, def($K$) is at least the number of parts of odd size. Using the graph amalgamation technique, they showed that all complete multipartite graphs of the form $K(r,r,\ldots,r+1)$ are type-I. Chen et al.~\cite{89} proved that an $(n-2)$-regular equi-bipartite graph $K_{n,n}-E(J)$ is type-I if and only if $J$ contains a 4-cycle. Campos and de Mello~\cite{31} determined the total chromatic number of some bipartite graphs like grids, near-ladders and $k$-dimensional cubes.\\ \subsection{Cubic Graphs} In~\cite{23}, Dantas et al. proved that for each integer $k\geq 2$ there exists an integer $N(k)$ such that, for any $n\geq N(k)$, the generalized Petersen graph $G(n,k)$ has total chromatic number 4. Snarks are cyclically 4-edge-connected cubic graphs that do not allow a 3-edge-coloring. Cavicchioli et al.~\cite{101} proved, with the aid of a computer, that all snarks of order less than 30 are type-I. They also proposed the open problem: ``find (if any) the smallest snark (with respect to the order) which is of type-II''. Motivated by that question, Campos et al.~\cite{71} proved that all graphs in three infinite families of snarks, the Flower Snarks, the Goldberg Snarks, and the Twisted Goldberg Snarks, are type-I. They gave recursive procedures to construct total colorings that use 4 colors in each case, and they proposed the conjecture that all snarks are type-I. Sasaki et al.~\cite{102} proved that the total chromatic number of some classes of snarks, such as the Loupekhine and Goldberg snarks and both Blanu\v{s}a families of graphs, is 4. They observed that the total chromatic number seems to have no relation with the chromatic index for a cubic graph of cyclic-edge-connectivity less than 4. They also posed the following questions: \\ (i).
What is the smallest type-II cubic graph without a square? \\ (ii). What is the smallest type-II snark?\\ Gunnar Brinkmann et al.~\cite{75} considered the problems posed by Campos et al.~\cite{71} and Sasaki et al.~\cite{102} and showed that there exist type-II snarks of each even order $n\geq 40$. By a computer search, they showed that all cubic graphs with girth 5 and up to 32 vertices are type-I. They also posed the following questions:\\ (i). Does there exist a type-II snark of order less than 40? (The only possible orders for which the existence is not yet known are 36 and 38.)\\ (ii). What is the smallest type-II cubic graph with girth at least 5?\\ (iii). Is there a girth $g$ such that all cubic graphs with girth at least $g$ are type-I?\\ \subsection{Graphs with Degree Constraints} Since the conjecture is very difficult in general, it makes sense to prove it either for known classes of graphs or for graphs with some degree constraints. Hilton and Hind~\cite{25} showed that the TCC holds for graphs $G$ with $\Delta(G)\geq \frac{3}{4}|V(G)|$. Chetwynd et al.~\cite{43} gave a necessary and sufficient condition for $\chi''(G)=\Delta(G)+1$ when $G$ is of odd order and regular of degree $d\geq \frac{1}{3}\sqrt7 |V(G)|$. \\ Recall that def$(G)= \sum_{v\in V(G)} (\Delta(G) - d(v))$. A graph $G$ is said to be conformable if $G$ has a vertex coloring that uses $\Delta(G) + 1$ colors such that def$(G) \geq n$, where $n$ is the number of color classes with parity different from that of $|V(G)|$. Chew~\cite{30} improved the previous result (Chetwynd et al.~\cite{43}) to $d\geq \frac{\sqrt{37}-1}{6}|V(G)|$: he proved that any regular graph $G$ of odd order with $d\geq \frac{\sqrt{37}-1}{6}|V(G)|$ is type-I if and only if $G$ is conformable, and type-II otherwise.\\ Dezheng Xie and Zhongshi He~\cite{26} showed that if $G$ is a regular graph of even order and $\delta(G)\geq \frac{2}{3}|V(G)|+ \frac{23}{6}$, then $\chi''(G)\leq \Delta(G)+2$.
Later, Xie DeZheng and Yang WanNian~\cite{55} proved the same result for regular graphs of odd order. Combining these two results, we conclude that if $G$ is a regular graph with $\delta(G)\geq \frac{2}{3}|V(G)|+ \frac{23}{6}$, then $G$ satisfies the TCC. In~\cite{32}, Machado and de Figueiredo proved that every non-complete \{square, unichord\}-free graph of maximum degree at least 4 is type-I. They also proved that every \{square, unichord\}-free graph is total colorable. Using graph decompositions, the same authors~\cite{33} proved that the non-complete \{square, unichord\}-free graphs of maximum degree 3 are type-I. A graph is said to be $s$-degenerate, for an integer $s\geq 1$, if it can be reduced to a trivial graph by successive removals of vertices with degree at most $s$. For example, every planar graph is 5-degenerate. Shuji Isobe et al.~\cite{37} proved that an $s$-degenerate graph $G$ admits a total coloring with $\Delta(G)+1$ colors if $\Delta(G)\geq 4s+3$. The proof is based on Vizing's and K\"{o}nig's theorems on edge colorings. Further, they gave a linear-time algorithm to find a total coloring of a graph $G$ with the minimum number of colors if $G$ is a partial $k$-tree. \subsection{Other Classes of Graphs} Mycielski~\cite{42} introduced the Mycielskian graph $\mu(G)$ to build graphs with high chromatic number and small clique number. Let $G$ be a graph with vertex set $V^0=\{v_1^0,v_2^0,\ldots,v_n^0\}$ and edge set $E^0$. Given an integer $m\geq 1$, the $m$-Mycielskian of $G$, denoted by $\mu_m(G)$, is the graph with vertex set $V^0\cup V^1\cup \ldots\cup V^m \cup\{u\}$, where $V^i=\{v_j^i:v_j^0\in V^0\}$ is the $i^{th}$ distinct copy of $V^0$ for $i=1,2,\ldots,m$, and edge set $E^0 \cup\left(\bigcup_{i=0}^{m-1}\{v_j^iv_{j'}^{i+1}:v_j^0v_{j'}^0 \in E^0\}\right)\cup \{v_j^m u:v_j^m \in V^m\}$. Chen et al.~\cite{29} showed that the generalized Mycielski graphs satisfy the TCC.
They also proved that the total chromatic number of the generalized Mycielski graph $\mu_m(G)$ is $\Delta(\mu_m(G))+1$ if $\Delta(G)\leq \frac{|V(G)|-1}{2}$. Zhi-wen Wang et al.~\cite{38} proved that the vertex distinguishing total chromatic number and the total chromatic number are the same for the graphs $P_n\vee P_n$ and $C_n\vee C_n$. Li and Zhang~\cite{85} proved that the join of a complete inequibipartite graph $K_{n_1,n_2}$ and a path $P_m$ is type-I. Hilton et al.~\cite{84} determined the total chromatic numbers of graphs of the form $G_1+G_2$, where $G_1$ and $G_2$ are graphs of maximum degree at most two.\\ The line graph of $G$, denoted by $L(G)$, has the set $E(G)$ as its vertex set, and two distinct vertices $e_1, e_2 \in V (L(G))$ are adjacent if and only if they share a common vertex in $G$. Vignesh et al.~\cite{VGS18} showed in a direct manner that $L(K_n)$ is type-I for $n\leq 4$. They believe that $L(K_n)$ is always type-I, and hence they proposed the following conjecture: \begin{con} For any complete graph $K_n$, $\chi''(L(K_n))= 2n-3$. \end{con} The \textit{double graph} $D(G)$ of a given graph $G$ is constructed by making two copies of $G$ (including the initial edge set of each) and adding the edges $((u,1),(v,2))$ and $((v,1),(u,2))$ for every edge $uv$ of $G$. Vignesh et al.~\cite{VGS18} also proved that for any total colorable graph $G$,\\ $\chi''(D(G)) \begin{cases}=\Delta(D(G))+1 & \text{if $G$ is type-I}\\ \leq \Delta(D(G))+2 & \text{if $G$ is type-II}. \end{cases} $ We know that middle graphs are subclasses of total graphs and superclasses of line graphs. Muthuramakrishnan and Jayaraman~\cite{MJ17} obtained the total chromatic numbers of the line, middle and total graphs of stars and bistars. \\ The Kneser graph $K(n,k)$ is the graph whose vertices correspond to the $k$-element subsets of a set of $n$ elements, and in which two vertices are adjacent if and only if the two corresponding sets are disjoint.
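A Kneser graph is straightforward to generate from this definition; $K(n,k)$ is $\binom{n-k}{k}$-regular, since a neighbor of a $k$-set is any $k$-set drawn from the remaining $n-k$ elements. A minimal sketch (function name is ours):

```python
from itertools import combinations

def kneser_edges(n, k):
    """Edges of K(n, k): vertices are the k-subsets of {0, ..., n-1},
    two vertices are adjacent iff the subsets are disjoint."""
    verts = [frozenset(c) for c in combinations(range(n), k)]
    return {frozenset({u, v}) for u in verts for v in verts if not (u & v)}

# K(5, 2) is the Petersen graph: 10 vertices, 3-regular, 15 edges.
assert len(kneser_edges(5, 2)) == 15
```

The odd graphs discussed next are the special case $K(2n-1,\,n-1)$, where each vertex has exactly $\binom{n}{n-1}=n$ neighbors.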
A vertex of the odd graph $O_n$ is an $(n-1)$-element subset of a $(2n-1)$-element set, and two vertices are connected by an edge if and only if the corresponding subsets are disjoint. Note that the odd graphs are a particular case of Kneser graphs: $O_n=K(2n-1,n-1)$. \begin{thm} The odd graph $O_n$ satisfies TCC. \end{thm} \begin{proof} Consider a $(2n-1)$-element set $X$ and let $O_n$ be the odd graph defined from the subsets of $X$. Let $x$ be any element of $X$. Then, among the vertices of $O_n$, exactly $\binom{2n-2}{n-2}$ vertices correspond to sets that contain $x$. Because all these sets contain $x$, they are pairwise non-disjoint and form an independent set of $O_n$. That is, $O_n$ has $2n-1$ different independent sets of size $\binom{2n-2}{n-2}$. Further, every maximum independent set must have this form, so $O_n$ has exactly $2n-1$ maximum independent sets. If $I$ is a maximum independent set, formed by the sets that contain $x$, then the complement of $I$ is the set of vertices that do not contain $x$. This complementary set induces a matching in $O_n$. Each vertex of the independent set is adjacent to $n$ vertices of the matching, and each vertex of the matching is adjacent to $n-1$ vertices of the independent set~\cite{god80}. Based on this decomposition, we give a total coloring of $O_n$ in the following way. Assign $n$ colors to the edges between the vertices in the maximum independent set $I$ and the vertices in the matching. Color the edges of the matching and the vertices in $I$ with one new color. Color one endpoint of each matching edge with another new color, and color the remaining endpoints with the colors missing at those vertices. This gives a total coloring of $O_n$ using at most $n+2=\Delta(O_n)+2$ colors. \end{proof} In the next section we look at the algorithmic aspects of TCC that have been discussed in the literature. \section{Algorithms} It is known~\cite{81} that the problem of finding a minimal total coloring of a graph is NP-hard in general.
In the same paper, Sanchez-Arroyo also proved that the problem remains NP-complete even for cubic bipartite graphs. For general graphs, total coloring appears to be even harder than edge coloring. Due to this complexity, several authors have aimed to find classes of graphs that admit a polynomial time algorithm for optimal total coloring. Bojarshinov~\cite{21} showed that the Behzad and Vizing conjecture holds for interval graphs. He also proved that every interval graph with even maximum degree can be totally colored with $\Delta(G)+1$ colors in time $O(|V(G)|+|E(G)|+(\Delta(G))^2)$. This was the first known polynomial time algorithm for total coloring. Recently, Golumbic~\cite{53} showed that a rooted path graph $G$ is type-I if $\Delta(G)$ is even, and otherwise satisfies TCC. He also gave a greedy algorithm (the very greedy neighborhood coloring algorithm) which takes $O(|V(G)|+|E(G)|)$ time. \\ Chordal graphs are a subclass of the perfect graphs, and linear time algorithms exist for vertex coloring of chordal graphs. Yet the complexity of total coloring is open for the class of chordal graphs. The complexity is known for interval graphs~\cite{21}, split graphs~\cite{27} and dually chordal graphs~\cite{41}. In~\cite{54}, Machado and de Figueiredo proved that total coloring of bipartite unichord-free graphs is NP-complete, using the concept of a separating class. Machado et al.~\cite{34} used a decomposition result to establish that every chordless graph of maximum degree $\Delta(G)\geq 3$ has total chromatic number $\Delta(G)+1$, and proved that such a coloring can be obtained in time $O(|V(G)|^3|E(G)|)$. Machado et al.~\cite{52} discussed the time complexity for \{square, unichord\}-free graphs and showed that a total coloring can be obtained in polynomial time; interestingly, edge coloring for this class of graphs is NP-complete.
\\ \begin{table}[h] \centering \begin{tabular} {| l | l | l |} \hline \textbf{Class of graphs} & \textbf{Edge coloring} & \textbf{Total coloring} \\ \hline \hline Unichord-free & NP-complete~\cite{87} & NP-complete~\cite{32} \\ \hline Chordless & Polynomial~\cite{34} & Polynomial~\cite{34} \\ \hline \{Square, unichord\}-free, $\Delta \geq 4$ & Polynomial~\cite{87} & Polynomial~\cite{33} \\ \hline \{Square, unichord\}-free, $\Delta = 3$ & NP-complete~\cite{87} & Polynomial~\cite{52} \\ \hline Bipartite unichord-free & NP-complete~\cite{50} & NP-complete~\cite{54} \\ \hline Interval graphs & Polynomial~\cite{21} & Polynomial~\cite{21} \\ \hline Some classes of circulant graphs & Polynomial~\cite{88} & Polynomial~\cite{18,39,61} \\ \hline \end{tabular} \caption{Computational complexity of edge and total colorings} \label{ccec} \end{table} Shuji Isobe et al.~\cite{37} proved that the total coloring problem for $s$-degenerate graphs can be solved in time $O(n\log n)$ for a fairly large class of graphs, including all planar graphs with sufficiently large maximum degree. Further, they showed that the total coloring problem can be solved in linear time for partial $k$-trees with bounded $k$. Dantas et al.~\cite{73} proved that the problem of deciding whether the equitable total chromatic number is 4 is NP-complete for bipartite cubic graphs. They also found a family of type-I cubic graphs of girth 5 having equitable total chromatic number 4. There are several classes of graphs for which the complexity of total coloring is unknown. We conclude this survey with a listing of the computational complexity of edge and total colorings of certain classes of graphs in Table~\ref{ccec}. \bibliographystyle{plain}
{ "timestamp": "2018-12-17T02:09:19", "yymm": "1812", "arxiv_id": "1812.05833", "language": "en", "url": "https://arxiv.org/abs/1812.05833", "abstract": "The smallest integer $k$ needed for the assignment of colors to the elements so that the coloring is proper (vertices and edges) is called the total chromatic number of a graph. Vizing and Behzad conjectured that the total coloring can be done using at most $\\Delta(G)+2$ colors, where $\\Delta(G)$ is the maximum degree of $G$. It is not settled even for planar graphs. In this paper we give a survey on total coloring of graphs.", "subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)", "title": "Total Colourings - A survey" }
https://arxiv.org/abs/0802.2109
On slicing invariants of knots
The slicing number of a knot, $u_s(K)$, is the minimum number of crossing changes required to convert $K$ to a slice knot. This invariant is bounded above by the unknotting number and below by the slice genus $g_s(K)$. We show that for many knots, previous bounds on unknotting number obtained by Ozsvath and Szabo and by the author in fact give bounds on the slicing number. Livingston defined another invariant $U_s(K)$ which takes into account signs of crossings changed to get a slice knot, and which is bounded above by the slicing number and below by the slice genus. We exhibit an infinite family of knots $K_n$ with slice genus $n$ and Livingston invariant greater than $n$. Our bounds are based on restrictions (using Donaldson's diagonalisation theorem or Heegaard Floer homology) on the intersection forms of four-manifolds bounded by the double branched cover of a knot.
\section{Introduction} \label{sec:intro} The unknotting number of a knot is the minimum number of crossing changes required to convert it to an unknot. Ozsv\'{a}th\ and Szab\'{o}\ used Heegaard Floer theory to provide a powerful obstruction to a knot having unknotting number one \cite{osu1}. This obstruction was generalised in \cite{u} to higher unknotting numbers. In this paper we show that similar techniques yield information about the number of crossing changes required to convert it to a slice knot. The slice genus $g_s(K)$ of a knot $K$ in the three-sphere is the minimum genus of a connected oriented smoothly properly embedded surface in the four-ball with boundary $K$. A knot is called \emph{slice} if $g_s(K)=0$. Given any diagram $D$ for a knot $K$, a new knot may be obtained by changing one or more crossings of $D$. The slicing number $u_s(K)$ is the minimum number of crossing changes required to obtain a slice knot, where the minimum is taken over all diagrams for $K$. A ``movie'' of a sequence of crossing changes represents an immersed annulus in $S^3\times [0,1]$ with a singularity for each crossing change. A neighbourhood of each singular point may be removed and replaced with an annulus; if the last frame of the movie is a slice knot it may be capped off with a disk, yielding a surface in $B^4$ with genus $u_s(K)$ and boundary $K$. Recall that crossings in a knot diagram may be given a sign as in Figure \ref{fig:crossings} (independent of the choice of orientation of the knot). Suppose that $K$ may be \emph{sliced} (converted to a slice knot) by changing $p$ positive and $n$ negative crossings (in some diagram). Form the immersed annulus in $S^3\times [0,1]$ as before. The sign of each self-intersection of this annulus agrees with the sign of the corresponding crossing in the changed diagram.
Take two self-intersections of opposite sign, and in each case remove a disk neighbourhood of the singularity from just one of the intersecting sheets and connect the boundary components by a tube. This leads to a surface in $B^4$ with genus $\max(p,n)$. Livingston defined the following slicing invariant: $$U_s(K)=\min(\max(p,n)),$$ where the minimum is taken over all diagrams for $K$ and over all sets of crossing changes in a diagram which give a slice knot. From the preceding discussion we see that $$g_s(K)\le U_s(K)\le u_s(K).$$ Livingston showed in \cite{liv} that the two-bridge knot $S(15,4)$, also known as $7_4$, has $u_s=2$ and $g_s=1$, thus giving a negative answer to a question of Askitas \cite{ask}. (Murakami and Yasuhara showed in \cite{my} that $8_{16}$ has $u_s=2$ and $g_s=1$. Their proof is based on a four-manifold bounded by the double-branched cover of the knot. We take a similar approach here.) Livingston also asked whether in fact $U_s$ is always equal to the slice genus, and suggested that $7_4$ may be a counterexample. \begin{figure}[tbp] \begin{center} \ifpic \leavevmode \begin{xy} 0;/r2pc/: (0,0)*{ \begin{xy} 0;/r6pc/: (0,0)*{}="1"; (1,0)*{}="2"; (1,1.2)*{}="3"; (0,1.2)*{}="4"; "1";"3" **\crv{}?(1)*\dir{>}; \POS?(.5)*{\hole}="x"; "2";"x" **\crv{}; "x";"4" **\crv{}?(1)*\dir{>}; (0.5,-0.2)*{{\rm Positive}}; \end{xy}}; (5,0)*{ \begin{xy} 0;/r6pc/: (0,0)*{}="1"; (1,0)*{}="2"; (1,1.2)*{}="3"; (0,1.2)*{}="4"; "2";"4" **\crv{}?(1)*\dir{>}; \POS?(.5)*{\hole}="x"; "1";"x" **\crv{}; "x";"3" **\crv{}?(1)*\dir{>}; (0.5,-0.2)*{{\rm Negative}}; \end{xy}} \end{xy} \else \vskip 5cm \fi \begin{narrow}{0.3in}{0.3in} \caption{ \bf{Signed crossings in a knot diagram.}} \label{fig:crossings} \end{narrow} \end{center} \end{figure} Let $\sigma(K)$ denote the signature of a knot $K$. 
It is shown in \cite[Proposition 2.1]{cl} (also \cite[Theorem 5.1]{st}) that if $K'$ is obtained from $K$ by changing a positive crossing, then $$\sigma(K')\in\{\sigma(K),\sigma(K)+2\};$$ similarly if $K'$ is obtained from $K$ by changing a negative crossing then $$\sigma(K')\in\{\sigma(K),\sigma(K)-2\}.$$ Now suppose that $K$ may be sliced by changing $p$ positive and $n$ negative crossings (in some diagram). Since a slice knot has zero signature, it follows that a bound for $n$ is given by \begin{equation} \label{eqn:nsig} n\ge\sigma(K)/2. \end{equation} In this paper we give an obstruction to equality in (\ref{eqn:nsig}). Let $\Sigma(K)$ denote the double cover of $S^3$ branched along $K$, and suppose that crossing changes in some diagram for $K$ result in a slice knot $J$. It follows from ``Montesinos' trick'' (\cite{mont}, or see \cite{u}) that $\Sigma(K)$ is given by Dehn surgery on some framed link in $\Sigma(J)$ with half-integral framing coefficients. \begin{maindef} \label{def:halfint} An integer-valued symmetric bilinear form $Q$ on a free abelian group of rank $2r$ is said to be of \emph{half-integer surgery type} if it admits a basis $\{x_1,\dots,x_r,y_1,\dots,y_r\}$ with \begin{eqnarray*} Q(x_i,x_j)&=&2\delta_{ij},\\ Q(x_i,y_j)&=&\delta_{ij}. \end{eqnarray*} \end{maindef} \noindent{\bf Examples.} The positive-definite rank 2 unimodular form is of half-integer surgery type since it may be represented by the matrix $\left(\begin{matrix}2&1\\1&1\end{matrix}\right)$. The form represented by the matrix $\left(\begin{matrix}4&1\\1&4\end{matrix}\right)$ is not of half-integer surgery type since it has no vectors of square 2. \ Converting the half-integer surgery description above to integer surgery in the standard way gives a cobordism $W$ from $\Sigma(J)$ to $\Sigma(K)$ whose intersection form $Q_W$ is of half-integer surgery type. Since $J$ is slice, $\Sigma(J)$ bounds a rational homology ball $B$. 
Joining $B$ to $W$ along $\Sigma(J)$ gives a smooth closed four-manifold $X$ bounded by $\Sigma(K)$. The second Betti number of $X$ is twice the number of crossing changes used to get from $K$ to $J$. Suppose now that $K$ is converted to $J$ by changing $p$ positive and $n$ negative crossings, with $n=\sigma(K)/2$. Then $K$ bounds a disk in $B^4\#^{p+n}{\mathbb C}{\mathbb P}^2$. Let $X'$ be the double cover of the blown-up four-ball branched along this disk. It follows from a theorem of Cochran and Lickorish \cite[Theorem 3.7]{cl} that $X'$ is positive-definite with $b_2(X')=2(p+n)$. The following theorem is based on the idea that in fact $X$ is diffeomorphic to $X'$. \begin{maintheorem} \label{thm:mainthm} Suppose that a knot $K$ may be converted to a slice knot by changing $p$ positive and $n$ negative crossings, with $n=\sigma(K)/2$. Then the branched double cover $\Sigma(K)$ bounds a positive-definite smooth four-manifold $X$ with $b_2(X)=2(p+n)$ whose intersection form $Q_X$ is of half-integer surgery type, with exactly $n$ of $Q_X(x_1,x_1),\dots,Q_X(x_{p+n},x_{p+n})$ even, and $\det Q_X$ divides $\det K$ with quotient a square. \end{maintheorem} For knots whose determinant is square-free, it follows that the first two parts of Ozsv\'{a}th\ and Szab\'{o}'s obstruction to unknotting number one \cite[Theorem 1.1]{osu1} (without the symmetry condition) in fact give an obstruction to $u_s(K)=1$. \begin{maincor} \label{cor:os} The knots $$7_4, 8_{16}, 9_5, 9_{15}, 9_{17}, 9_{31}, 10_{19}, 10_{20}, 10_{24}, 10_{36}, 10_{68}, 10_{69}, 10_{86},$$ $$10_{97}, 10_{105}, 10_{109}, 10_{116}, 10_{121}, 10_{122}, 10_{144}, 10_{163}, 10_{165}$$ have slice genus 1 and slicing number 2. \end{maincor} (Note that as mentioned above this was shown for $7_4$ in \cite{liv} and for $8_{16}$ in \cite{my}. The slice genus information in Corollaries \ref{cor:os} and \ref{cor:u} is taken from \cite{knotinfo}.) 
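As an aside, the two rank-2 examples following Definition \ref{def:halfint} can be verified by a brute-force basis search (an illustrative Python sketch; the coordinate bound is an ad hoc choice that suffices for forms this small):

```python
from itertools import product

def q(G, a, b):
    """Evaluate the bilinear form with Gram matrix G on integer vectors a, b."""
    return sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2))

def half_integer_type_rank2(G, bound=3):
    """Search for a basis x, y of Z^2 with Q(x, x) = 2 and Q(x, y) = 1, as in
    the rank-2 case of the definition; (x, y) is a genuine basis exactly when
    the change-of-basis matrix has determinant +-1."""
    vecs = list(product(range(-bound, bound + 1), repeat=2))
    for x in vecs:
        if q(G, x, x) != 2:
            continue
        for y in vecs:
            if q(G, x, y) == 1 and abs(x[0] * y[1] - x[1] * y[0]) == 1:
                return x, y
    return None

print(half_integer_type_rank2([[2, 1], [1, 1]]))  # a basis is found
print(half_integer_type_rank2([[4, 1], [1, 4]]))  # -> None (no vector of square 2)
```

For $\left(\begin{matrix}2&1\\1&1\end{matrix}\right)$ a basis is found, while $\left(\begin{matrix}4&1\\1&4\end{matrix}\right)$ represents no vector of square 2, so the search correctly fails.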
Furthermore, the obstruction given in \cite[Theorem 5]{u} to unknotting a knot by changing $p$ positive and $n=\sigma(K)/2$ negative crossings is in fact an obstruction to slicing, provided again that $\det K$ is square-free. \begin{maincor} \label{cor:u} The knots $$9_{10}, 9_{13}, 9_{38}, 10_{53}, 10_{101}, 10_{120}$$ have slice genus $2$ and slicing number $3$. The 11-crossing two-bridge knot $S(51,35)$ (Dowker-Thistlethwaite name $11a365$) has slice genus $3$ and slicing number $4$. \end{maincor} It may also be shown that for some of the knots in Corollaries \ref{cor:os} and \ref{cor:u}, Livingston's invariant $U_s$ is not equal to the slice genus. The knot $7_4$ is such an example (as Livingston suggested in \cite{liv}), and in fact we find that it is the first member of an infinite family of such examples. \begin{maincor} \label{cor:Kn} For each positive integer $n$, there exists a two-bridge knot $K_n$ with signature $2n$ and slice genus $n$ which cannot be sliced by changing $n$ negative crossings and any number of positive crossings; hence $U_s(K_n)>n$. \end{maincor} \vskip2mm \noindent{\bf Acknowledgements.} It is a pleasure to thank Andr\'{a}s Stipsicz and Tom Mark for helpful conversations. \section{Proof of Theorem \ref{thm:mainthm}} \label{sec:proof} In this section we prove our main result. Recall that a positive-definite integer-valued symmetric bilinear form $Q$ on a free abelian group $A$ gives an integer lattice $L$ in Euclidean space on tensoring with ${\mathbb R}$. We say a lattice $L$ in ${\mathbb R}^n$ is of half-integer surgery type if the corresponding form $Q$ is (see Definition \ref{def:halfint}). Also a matrix representative for $Q$ is referred to as a Gram matrix for $L$. For convenience we will frequently denote $Q(x,y)$ by $x\cdot y$, and $Q(x,x)$ by $x^2$. The proof of Theorem \ref{thm:mainthm} consists of a topological and an algebraic step. 
Following \cite{u} we show using careful analysis of Montesinos' trick that, under the hypotheses of the theorem, $\Sigma(K)$ bounds a positive-definite manifold $X$ and that the intersection pairing of $X$ is of half-integer surgery type when restricted to some finite index sublattice. We then show that if a lattice $M$ has an odd index sublattice $L$ of half-integer surgery type then in fact $M$ is of half-integer surgery type. We begin with a couple of lemmas. \begin{lemma} \label{lem:mod4} Let $Q$ be a form of half-integer surgery type, with $m_i=Q(y_i,y_i)$. Then $$\det Q\equiv\prod_{i=1}^r(2m_i-1)\pmod4.$$ \end{lemma} {\bf \noindent Proof.\ } This follows from the discussion after Lemma 2.2 in \cite{u}.\endproof \begin{lemma} \label{lem:mod2} Let $Q$ be a block matrix of $r\times r$ blocks of the form $\left(\begin{matrix}2I & *\\ * & *\end{matrix}\right)$ which is congruent modulo 2 to $\left(\begin{matrix}2I & I\\I & X\end{matrix}\right)$. Then there exists $P=\left(\begin{matrix}I & *\\0 & R\end{matrix}\right)\in GL(2r,{\mathbb Z})$ with $$P^T Q P=\left(\begin{matrix}2I & I\\I & X'\end{matrix}\right),$$ and $X'\equiv X\pmod2$. \end{lemma} {\bf \noindent Proof.\ } Let $Q$ be the Gram matrix of a lattice with basis $x_1,\dots,x_r,z_1,\dots,z_r$. By successively adding multiples of $x_i$ to each of $z_1,\dots,z_r$ we get a new basis $x_1,\dots,x_r$, $y_1,\dots,y_r$ with $x_i\cdot y_j=\delta_{ij}$; since $x_i$ has even square this preserves parities on the diagonal. \endproof The following was originally proved by Ozsv\'{a}th\ and Szab\'{o}\ \cite{osu1} in the case $p+n=1$ and $J$ is the unknot. \begin{proposition} \label{prop:monttrick} Suppose that a knot $K$ may be converted to a slice knot $J$ by changing $p$ positive and $n$ negative crossings, with $n=\sigma(K)/2$. Then the branched double cover $\Sigma(K)$ bounds a positive-definite four-manifold $X$ with $b_2(X)=2(p+n)$. 
The lattice $(H_2(X;{\mathbb Z}),Q_X)$ contains a finite index sublattice of half-integer type, which has a basis as in Definition \ref{def:halfint} with exactly $p$ elements of odd square. \end{proposition} {\bf \noindent Proof.\ } We adapt the proof of \cite[Lemma 3.2]{u}. By Montesinos' lemma (\cite{mont}, or see \cite[Lemma 3.1]{u}), $\Sigma(J)$ is the result of surgery on some link $L$ in $S^3$ with half-integer framing coefficients. Convert to integer surgery (see \cite{gs} or \cite[Lemma 2.2]{u}), and let $Q_J$ be the resulting linking matrix of half-integer type. We may assume (after possibly adding a $-1/2$ framed unknot to $L$) that $\det Q_J$ is positive. Denote by $X_J$ the two-handlebody with boundary $\Sigma(J)$ and intersection form represented by $Q_J$. Note that since $J$ is slice it has signature zero and determinant $\det J=\det Q_J=k^2$ for some odd integer $k$. Suppose that $K_-$ is a knot of signature 2 which may be converted to $J$ by changing a single negative crossing $c$. Then $\Sigma(K_-)$ is the result of surgery on $L\cup C$ for some knot $C$ in $S^3$, with framing $(2m-1)/2$ on $C$ for some integer $m$. Let $K_0$ be the result of taking the oriented resolution of the crossing $c$; then as in \cite[Lemma 3.2]{u} we have that $\Sigma(K_0)$ is surgery on $L\cup C$ with framing $m$ on $C$. Converting to integer surgery we find that $\Sigma(K_-),\Sigma(K_0)$ are given by integer surgeries with linking matrices $$Q_-=\left(\begin{matrix}2&1&&0&\\1&m&&*&\\&&&&\\0&*&&Q_J&\\&&&&\end{matrix}\right),\quad Q_0=\left(\begin{matrix}m&&*&\\&&&\\ *&&Q_J&\\&&&\end{matrix}\right).$$ Let $\Delta_K(t)$ denote the Conway-normalised Alexander polynomial of a knot $K$.
This satisfies $$\Delta_K(-1)=(-1)^{\sigma(K)/2}\det K,$$ and thus $$\Delta_J(-1)=k^2,\ \Delta_{K_-}(-1)=-|\det Q_-|=-|2\det Q_0 -k^2|.$$ The skein relation for the Alexander polynomial (see \cite{lick2}) then yields $$k^2+|2\det Q_0 -k^2|=2|\det Q_0|,$$ from which we conclude that $\det Q_-=2\det Q_0 -k^2$ is positive. Now using Lemma \ref{lem:mod4} we have $$2m-1\equiv\det Q_-=\det K_-\equiv3\pmod4,$$ and thus $m$ is even. (The last congruence is due to Murasugi \cite{m}: $\det K\equiv\sigma(K)+1\pmod4$.) Now suppose $K_+$ is a knot of signature 0 which may be converted to $J$ by changing a positive crossing. Again $\Sigma(K_+)$ is obtained by half-integer surgery on $L\cup C$ for some knot $C$ in $S^3$, with framing $(2m-1)/2$ on $C$. Let $Q_+$ denote the linking matrix after converting to integer surgery. A similar argument as above (see \cite{u} for the case where $J$ is the unknot) shows that $\det Q_+>0$ and $m$ is odd. Let $c_1,\dots,c_{p+n}$ be the set of crossings ($p$ positive, $n$ negative) in some chosen diagram of $K$ that we change to convert to $J$. Then $\Sigma(K)$ is Dehn surgery on the link $L\cup C_1\cup\dots\cup C_{p+n}$, with half-integer framing coefficients. Each $C_i$ corresponds to a crossing $c_i$. Dehn surgery on $L$ union a sublink of $C_1\cup\dots\cup C_{p+n}$ gives the double branched cover of a knot which is obtained from $K$ by changing a subset of the crossings $c_1,\dots,c_{p+n}$. In particular surgery on the link $L\cup C_i$ yields the double branched cover of the knot $K'_i$ which is obtained from $K$ by changing all of the crossings except $c_i$. By the condition $n=\sigma(K)/2$ the knot signature changes every time a negative crossing is changed and remains constant when a positive crossing is changed. It follows from the discussion above applied to $K'_i$ that the framing on each $C_i$ is of the form $(2m_i-1)/2$ and exactly those $m_i$ which correspond to changing negative crossings of $K$ are even.
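As a numerical aside, the congruence of Lemma \ref{lem:mod4} used above is easy to spot-check on random forms of half-integer surgery type (an illustrative Python sketch; the ranks and entry ranges are arbitrary choices):

```python
import random
from fractions import Fraction

def det(M):
    """Exact determinant of an integer matrix, by Gaussian elimination
    over the rationals."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    d = Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if A[r][i] != 0), None)
        if p is None:
            return 0
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return int(d)

random.seed(0)
for _ in range(100):
    r = random.randint(1, 4)
    X = [[0] * r for _ in range(r)]
    for i in range(r):
        for j in range(i, r):
            X[i][j] = X[j][i] = random.randint(-5, 5)
    # Q = [[2I, I], [I, X]] is of half-integer surgery type with m_i = X_ii.
    Q = [[2 * (i == j) for j in range(r)] + [int(i == j) for j in range(r)]
         for i in range(r)]
    Q += [[int(i == j) for j in range(r)] + list(X[i]) for i in range(r)]
    prod = 1
    for i in range(r):
        prod *= 2 * X[i][i] - 1
    assert (det(Q) - prod) % 4 == 0  # Lemma: det Q = prod(2 m_i - 1) mod 4
print("congruence holds on 100 random examples")
```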
Denote by $X_K$ the two-handlebody with boundary $\Sigma(K)$ that results from converting to integer surgery (i.e. surgery on the link $L\cup C_1\cup\dots\cup C_{p+n}$, followed by surgery on a 2-framed meridian of each component). Then $X_K$ has intersection form of half-integer surgery type; moreover we can view $X_K$ as the union of $X_J$ and a surgery cobordism $W$ along the common boundary $\Sigma(J)$. We will show by an induction argument that this cobordism is positive-definite. Let $K_j$ denote the knot obtained from $K$ by changing crossings $c_{j+1},\dots,c_{p+n}$, and let $Q_j$ be the linking matrix of half-integer type obtained by converting the corresponding Dehn surgery diagram of $\Sigma(K_j)$ to integer type. Suppose that $\det Q_{j-1}$ is positive and hence equal to $\det K_{j-1}$. We have $$Q_j=\left(\begin{matrix}2&1&&0&\\1&m_j&&*&\\&&&&\\0&*&&Q_{j-1}&\\&&&&\end{matrix}\right),$$ and by Lemma \ref{lem:mod4} $$\det Q_j\equiv(2m_j-1)\det Q_{j-1}\pmod4.$$ If $c_j$ is a positive crossing then $m_j$ is odd and so \begin{equation} \label{eqn:detQ}\det Q_j\equiv\det Q_{j-1}\pmod4. \end{equation} On the other hand the signature of $K_{j-1}$ is equal to that of $K_j$ and so \begin{equation} \label{eqn:detK}\det K_j\equiv\det K_{j-1}\pmod4. \end{equation} Comparing (\ref{eqn:detQ}) and (\ref{eqn:detK}) we see that $\det Q_j$ is congruent modulo 4 to its absolute value. Since it is an odd number it must be positive. On the other hand if $c_j$ is negative then $m_j$ is even, so both congruences (\ref{eqn:detQ}) and (\ref{eqn:detK}) hold with an extra minus sign on the right hand side, and again it follows that $\det Q_j$ is positive. By induction we see that $Q_J=Q_0, Q_1, \dots ,Q_{p+n}$ all have positive determinants. Thus the surgery cobordism $W$ is built by attaching $2(p+n)$ two-handles to the two-handlebody $X_J$, and before and after each attachment we have a four-manifold whose intersection pairing has positive determinant. It follows that $W$ is positive-definite.
We claim $(H_2(W;{\mathbb Z}),Q_W)$ contains a finite index sublattice with a basis as in Definition \ref{def:halfint} with exactly $p$ elements of odd square. Suppose that $L$ has $r$ components, so that $b_2(X_K)=2(p+n+r)$. Let $\{x_i,y_i\}$ be a basis for $H_2(X_K;{\mathbb Z})$ as in Definition \ref{def:halfint}, chosen so that $\{x_i,y_i\}_{i>p+n}$ is a basis for the sublattice $H_2(X_J;{\mathbb Z})$. For $i\le p+n$, take $y_i$ to be the class corresponding to the two-handle attached along $C_i$, and $x_i$ that corresponding to the two-handle attached along the meridian of $C_i$. Note that each $x_i$ is contained in $H_2(W;{\mathbb Z})$. The rough idea is to form a sublattice by projecting the span of $\{x_i,y_i\}_{i\le p+n}$ orthogonally to $H_2(W;{\mathbb Z})$. Of course we cannot quite do this; however since $H_1(\Sigma(J);{\mathbb Z})$ has order $k^2$ we may write $$k^2 y_i=z_i+w_i,\qquad i=1,\dots,p+n$$ with $z_i\in H_2(W;{\mathbb Z})$ and $w_i\in H_2(X_J;{\mathbb Z})$. We claim that the self-intersection of $z_i$ has the same parity as that of $y_i$. To see this note that for each $j>p+n$, $y_i$ is orthogonal to $x_j$ and hence so is $w_i$. It follows that $w_i$ is in the span of $\{x_i\}_{i>p+n}$ and so has even self-intersection. Thus we have a full rank sublattice of $(H_2(W;{\mathbb Z}),Q_W)$ with basis $\{x_i,z_i\}_{i\le p+n}$; by Lemma \ref{lem:mod2}, this sublattice has a basis as in Definition \ref{def:halfint} with exactly $p$ elements of odd square. Form the manifold $X$ with one boundary component by capping off the $\Sigma(J)$ end of $W$ with the rational ball $B$ given as the double branched cover of $B^4$ along a slice disk bounded by $J$ \cite{cg}. Then $(H_2(W;{\mathbb Z}),Q_W)$ is a sublattice of $(H_2(X;{\mathbb Z}),Q_X)$, and therefore so is $\Span(\{x_i,z_i\}_{i\le p+n})$. 
\endproof \begin{proposition} \label{prop:subhalfint} Suppose $M$ is a positive-definite integer lattice of rank $2r$ and $L$ is a sublattice of $M$ of odd index $l$. If $L$ is of half-integer type then so is $M$. Moreover, the number of elements of a basis of $M$ as in Definition \ref{def:halfint} with odd square is the same as that for $L$. \end{proposition} This will follow from the next two lemmas, the first of which is standard. \begin{lemma} \label{lem:gln} The natural map from $GL(n,{\mathbb Z})$ to $GL(n,{\mathbb Z}/2)$ is onto for any $n$. \end{lemma} {\bf \noindent Proof.\ } Use induction on $n$. Suppose that $R\in M(n,{\mathbb Z})$ has odd determinant. The cofactor expansion across the first row yields $$\det R=r_{11}R_{11}+r_{12}R_{12}+\cdots+r_{1n}R_{1n}.$$ Since the determinant is odd, so is at least one $r_{1j}R_{1j}$. By induction we may choose $\tilde{R}\equiv R\pmod2$ with $\tilde{R}_{1j}=1$, then adjust the value of $r_{1j}$ to get $\det\tilde{R}=1$. \endproof \begin{lemma} \label{lem:mod2basis} Suppose $M$ is a positive-definite integer lattice of rank $2r$, and that $L$ is a lattice of half-integer type which is a sublattice of $M$ of odd index $l$. Let $x_1,\dots,x_r,y_1,\dots,y_r$ be a basis for $L$ as in Definition \ref{def:halfint}, and let $Q_L$ be the Gram matrix of $L$ in this basis. Then $x_1,\dots,x_r$ may be extended to a basis $x_1,\dots,x_r,z_1,\dots,z_r$ for $M$ with $$Q_M\equiv Q_L\pmod2.$$ \end{lemma} {\bf \noindent Proof.\ } Let $m_i=y_i\cdot y_i$. In the given basis $Q_L$ is in block form $\left(\begin{matrix}2I & I\\I & X\end{matrix}\right)$, with $\diag(X)=(m_1,\dots,m_r)$. By Theorem 6 in Chapter 1 \S 3 of \cite{grublek} there exists a basis $a_1,\dots,a_r,z_1,\dots,z_r$ of $M$ with $x_i\in\Span_{\mathbb Z}\{a_1,\dots,a_i\}$. A simple induction argument using the fact that $x_i\cdot y_j=\delta_{ij}$ shows that in fact (after possibly multiplying by $-1$) we have $a_i=x_i$ for $i=1,\dots,r$. 
Let $P\in M(2r,{\mathbb Z})$ be the matrix whose $i$th column is the coefficient vector of the $i$th basis vector of $L$ in the basis $x_1,\dots,x_r,z_1,\dots,z_r$. Then $P$ is in block form $\left(\begin{matrix}I & *\\0 & R\end{matrix}\right)$, and $$Q_L=P^T Q_M P.$$ Since $\det Q_L=l^2\det Q_M$ we have $\det R=\pm l$, which is odd. By Lemma \ref{lem:gln} we may choose $\tilde{R}\in GL(r,{\mathbb Z})$ with $\tilde{R}\equiv R\pmod2$. Applying the transition matrix $\tilde{P}=\left(\begin{matrix}I & *\\0 & \tilde{R}\end{matrix}\right)$ to $M$ yields the required basis. \endproof \noindent{\it Proof of Proposition \ref{prop:subhalfint}.} Let $x_1,\dots,x_r,z_1,\dots,z_r$ be the basis of $M$ given by Lemma \ref{lem:mod2basis}, in which $$Q_M=\left(\begin{matrix}2I & *\\ * & *\end{matrix}\right) \equiv Q_L=\left(\begin{matrix}2I & I\\I & X\end{matrix}\right)\pmod2.$$ The proposition now follows from Lemma \ref{lem:mod2}. \endproof \noindent{\it Proof of Theorem \ref{thm:mainthm}.} Let $X$ be the four-manifold bounded by $\Sigma(K)$ given by Proposition \ref{prop:monttrick}. The order of the first homology of the double branched cover of a knot is always odd; it follows that $\det Q_X$ is odd and that the sublattice given by Proposition \ref{prop:monttrick} has odd index in $(H_2(X;{\mathbb Z}),Q_X)$. Theorem \ref{thm:mainthm} now follows immediately from Proposition \ref{prop:subhalfint}.\endproof \section{Examples} \label{sec:examples} Theorem \ref{thm:mainthm} tells us that to show that a knot $K$ cannot be converted to a slice knot by changing $p$ positive and $n=\sigma(K)/2$ negative crossings, we must show that its double branched cover cannot bound a four-manifold with an intersection form with certain properties. We will make use of two very effective gauge-theoretic obstructions to a rational homology three-sphere $Y$ bounding a positive-definite form $Q$.
On the one hand, Ozsv\'{a}th\ and Szab\'{o}\ define a function $$d:\ifmmode{{\rm Spin}^c}\else{${\rm spin}^c$\ }\fi(Y)\to{\mathbb Q}$$ coming from the absolute grading in Heegaard Floer homology, and they show that for each \ifmmode{{\rm Spin}^c}\else{${\rm spin}^c$\ }\fi structure $\mathfrak s$ on a positive-definite four-manifold bounded by $Y$, the following must hold: \begin{eqnarray} c_1(\mathfrak s)^2-b_2(X)&\ge& 4d(\mathfrak s|_Y),\label{eqn:ineq}\\ \mbox{and}\quad c_1(\mathfrak s)^2-b_2(X)&\equiv& 4d(\mathfrak s|_Y) \pmod2.\label{eqn:cong} \end{eqnarray} The left hand side depends on the intersection form of $X$. To determine if a given knot $K$ may be sliced by changing $p$ positive and $n=\sigma(K)/2$ negative crossings we carry out the following steps: \renewcommand{\labelenumi}{\arabic{enumi}.} \begin{enumerate} \item Compute $d:\ifmmode{{\rm Spin}^c}\else{${\rm spin}^c$\ }\fi(\Sigma(K))\to{\mathbb Q}$; \item Find a complete finite set of representatives $Q_1,\dots,Q_m$ of forms of rank $2(p+n)$ satisfying the conclusion of Theorem \ref{thm:mainthm}; \item Check using (\ref{eqn:ineq},\ref{eqn:cong}) whether $\Sigma(K)$ is obstructed from bounding each of $Q_1,\dots,Q_m$. \end{enumerate} Details on Heegaard Floer theory and the $d$ invariant may be found in \cite{os4,os6,osu1}; for a summary of how this theory may be used in our context see \cite{u}. Another approach to understanding the set of positive-definite forms that a three-manifold $Y$ may bound is to make use of Donaldson's diagonalisation theorem \cite{d}. This approach works well for Seifert fibred rational homology spheres, and especially for lens spaces (see e.g.~\cite{lisca}, \cite{det3}). Knowledge of a particular negative-definite four-manifold $X_1$ bounded by $Y$ constrains the intersection form of a positive-definite $X_2$ with the same boundary, since $X=X_2\cup-X_1$ is a closed positive-definite manifold with $(H_2(X;{\mathbb Z}),Q_X)\cong{\mathbb Z}^m$ for some $m$. We will illustrate this technique in Subsection \ref{subsec:Kn}.
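For step 3, each spin$^c$ structure $\mathfrak s$ on $X$ has $c_1(\mathfrak s)^2=c^TQ^{-1}c$ for a characteristic covector $c$, and the covectors restricting to a fixed spin$^c$ structure on the boundary form a class modulo $2Q{\mathbb Z}^n$; by (\ref{eqn:ineq}) the minimum of $c^TQ^{-1}c$ over each class must be at least $4d+b_2(X)$. For rank-2 forms these minima can be computed by brute force (an illustrative Python sketch; the enumeration box is an ad hoc choice, and the matching of classes with spin$^c$ structures on $\Sigma(K)$ is not implemented):

```python
from fractions import Fraction
from itertools import product

def char_minima(Q, bound=25):
    """For a rank-2 positive-definite Gram matrix Q, return the minimum of
    c^T Q^{-1} c over the characteristic covectors c in each class modulo
    2*Q*Z^2 (c is characteristic iff c_i = Q_ii mod 2 for all i)."""
    detQ = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
    inv = [[Fraction(Q[1][1], detQ), Fraction(-Q[0][1], detQ)],
           [Fraction(-Q[1][0], detQ), Fraction(Q[0][0], detQ)]]
    minima = {}
    for c in product(range(-bound, bound + 1), repeat=2):
        if (c[0] - Q[0][0]) % 2 or (c[1] - Q[1][1]) % 2:
            continue                                  # not characteristic
        t = [inv[i][0] * c[0] + inv[i][1] * c[1] for i in range(2)]
        sq = c[0] * t[0] + c[1] * t[1]                # c^T Q^{-1} c
        # c ~ c' iff Q^{-1}(c - c')/2 is integral, so the class is determined
        # by the fractional part of Q^{-1} c / 2.
        key = ((t[0] / 2) % 1, (t[1] / 2) % 1)
        if key not in minima or sq < minima[key]:
            minima[key] = sq
    return sorted(minima.values())

# Candidate form for a signature-2, determinant-15 knot such as 7_4 (m = 8):
print(char_minima([[8, 1], [1, 2]]))
```

For the form $\left(\begin{matrix}8&1\\1&2\end{matrix}\right)$ this produces one minimum for each of the $\det Q=15$ classes, to be compared against the $d$ invariants from step 1.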
The slicing number of a knot is the same as that for its reflection. We assume in what follows that all knots have nonnegative signature. (This distinguishes between the knot and its reflection unless the signature is zero.) \subsection{Knots with slice genus one} For a knot with $\sigma(K)=2$ and $u_s(K)=1$, it follows from inequality (\ref{eqn:nsig}) and Theorem \ref{thm:mainthm} that $\Sigma(K)$ bounds a four-manifold whose intersection form is represented by the matrix $$\left(\begin{matrix}m&1\\1&2\end{matrix}\right),$$ with $\det K=(2m-1)t^2$ for some integer $t$. For a knot with $\sigma(K)=0$ and $u_s(K)=1$ we find that either $\Sigma(K)$ or $-\Sigma(K)$ must bound such a positive-definite four-manifold. The knots listed in Corollary \ref{cor:os} have square-free determinant. For each of them, Ozsv\'{a}th\ and Szab\'{o}\ have shown in \cite{osu1}, using (\ref{eqn:ineq},\ref{eqn:cong}), that $\pm\Sigma(K)$ cannot bound $$\left(\begin{matrix}\frac{\det K +1}{2}&1\\1&2\end{matrix}\right);$$ it is also known in each case that the knot can be unknotted with two crossing changes. We conclude that $u_s(K)=2$. \begin{remark} Each of the knots $10_{29}$, $10_{40}$, $10_{65}$, $10_{67}$, $10_{89}$, $10_{106}$, and $10_{108}$ has signature 2 and $\det K=st^2$ for some $t>1$. In each case Ozsv\'{a}th\ and Szab\'{o}\ have shown $\Sigma(K)$ cannot bound $\left(\begin{matrix}\frac{\det K +1}{2}&1\\1&2\end{matrix}\right)$ and hence $K$ has unknotting number two. However we find in each case $\Sigma(K)$ is \emph{not} obstructed from bounding $\displaystyle\left(\begin{matrix}\frac{s+1}{2}&1\\1&2\end{matrix}\right)$. \end{remark} \subsection{Knots with slice genus two or three} We now consider the knots in Corollary \ref{cor:u}. Each of $9_{10}, 9_{13}, 9_{38}, 10_{53}, 10_{101}, 10_{120}$ has signature 4 and slice genus 2. 
In \cite{u} we have shown, using (\ref{eqn:ineq},\ref{eqn:cong}), that in each case $\Sigma(K)$ cannot bound a positive-definite form $Q$ as in Definition \ref{def:halfint} with rank 4, $\det Q=\det K$ and $Q(x_i,x_i)$ even. Since the knots have square-free determinant, it follows from Theorem \ref{thm:mainthm} that they cannot be sliced with two crossing changes. Each can be unknotted with three crossing changes, and so in each case $u_s(K)=3$. Similarly $K=11a365$ is shown in \cite{u} to have unknotting number 4, and since its determinant is square-free the same argument shows it has $u_s(K)=4$. \subsection{Knots with large slice genus} \label{subsec:Kn} In this subsection we prove Corollary \ref{cor:Kn}. We define $K_n$ to be the 4-plat closure of the four-string braid $(\sigma_1{}\!^{4}\sigma_2{}\!^{4})^n$, as illustrated in Figure \ref{fig:Kn} for $n=2$. For $n=1$ this is the knot $7_4=S(15,11)$ shown by Lickorish to have unknotting number 2 and by Livingston to have slicing number 2 \cite{lick,liv}. 
\begin{figure}[htbp] \begin{center} \ifpic \leavevmode \xygraph{ !{0;/r1.5pc/:} !{\hcap[-3]}[d] !{\hcap[-1]}[u] !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\htwist}[ldd] !{\xcaph[1]@(0)}[ld] !{\xcaph[1]@(0)}[uuu !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\xcaph[1]@(0)}[ld] !{\htwist}[ldd] !{\xcaph[1]@(0)}[uuu] !{\hcap}[dd] !{\hcap}[u(0.9)] !{\knotstyle{.}} !{\xcapv[0.8]@(0)}[u(1)r(0.2)] !{\xcapv[0.8]@(0)}[u(0.7)l(17.6)] !{\xcaph[0.8]@(0)}[l(1)d(0.2)] !{\xcaph[0.8]@(0)} } \else \vskip 5cm \fi \begin{narrow}{0.3in}{0.3in} \caption{ {\bf{The knot $K_2$.}} The two pairs of dashed arcs indicate where to attach ribbons to go from $K_n$ to $K_{n-1}$.} \label{fig:Kn} \end{narrow} \end{center} \end{figure} As illustrated in the diagram, two oriented ribbon moves convert $K_n$ to $K_{n-1}$. Since $K_0$ is the unknot this shows that the slice genus of $K_n$ is at most $n$. The signature of $K_n$ may be shown (see below) to be $2n$. We conclude that $$g_s(K_n)=n.$$ Let $P(a_1,a_2,\dots,a_m)$ denote the plumbing of disk bundles over two-spheres corresponding to the linear graph with $m$ vertices, where the $i$th vertex has weight $a_i$. The double cover of $S^3$ branched along $K_n$ is the boundary of $P(4,4,\dots,4)$. 
Let $Q_n$ denote the rank $2n$ intersection form of this plumbing, and let $L_n$ denote the associated lattice in ${\mathbb R}^{2n}$. The following sequence of blow-ups and blow-downs exhibits $\Sigma(K_n)$ as the boundary of a negative-definite plumbing: \begin{eqnarray*} \Sigma(K_n)&\cong&\partial P(4,4,\dots,4)\\ &\cong&\partial P(-1,2,-1,2,\dots,-1,2,-1)\\ &\cong&\partial P(-2,-1,1,-2,-1,1,\dots,-2,-1,1,-1)\\ &\cong&\partial P(-2,-2,-3,-2,-3,-2,\dots,-3,-2,-3,-2,-2).\\ \end{eqnarray*} (We note that the above shows that $K_n$ may also be represented by the alternating diagram which is the closure of the braid $$(\sigma_1{}\!^{-1}\sigma_2{}\!^{2})^{2n}\sigma_1{}\!^{-1};$$ using the formula of Gordon and Litherland \cite{gl} it is easy to compute the signature from this diagram.) Let $X'_n$ denote the positive-definite plumbing $P(2,2,3,2,3,2,\dots,3,2,2)$ whose boundary is $-\Sigma(K_n)$. Let $Q'_n$ denote its intersection form and let $L'_n$ denote the associated lattice. Note that $L'_n$ has dimension $2n+3$: there are $n$ vertices with weight $3$ and $n+3$ with weight $2$. \begin{lemma} \label{lem:Knobst1} Suppose $\Sigma(K_n)$ is given as the boundary of a smooth four-manifold $X$ with positive-definite intersection form $Q_X$. Then $(H_2(X;{\mathbb Z}),Q_X)$ embeds as a finite index sublattice of $L_n\oplus {\mathbb Z}^k$ for some $k\ge0$. \end{lemma} {\bf \noindent Proof.\ } Gluing $X$ to $X'_n$ along their boundary gives a closed positive-definite manifold. It follows from Donaldson's theorem that the lattice $L'_n$ embeds as a sublattice of ${\mathbb Z}^m$ with the lattice $(H_2(X;{\mathbb Z}),Q_X)$ contained in its orthogonal complement. Let $e_1,\dots,e_m$ be an orthonormal basis of ${\mathbb Z}^m$. Up to automorphism of ${\mathbb Z}^m$ there is a unique way to embed $L'_n$: the first vertex vector must map to $e_1+e_2$, the second to $e_2+e_3$, the third to $e_3+e_4+e_5$, the fourth to $e_5+e_6$ and so on.
Thus the image of $L'_n$ is contained in a ${\mathbb Z}^{3n+4}$ sublattice of ${\mathbb Z}^m$. An easy calculation shows that the orthogonal complement of $L'_n$ in ${\mathbb Z}^{3n+4}$ is spanned by the vectors $e_1-e_2+e_3-e_4$, $e_4-e_5+e_6-e_7$,\dots, $e_{3n+1}-e_{3n+2}+e_{3n+3}-e_{3n+4}$. These span a copy of $L_n$, from which the conclusion follows.\endproof \begin{lemma} \label{lem:Knobst2} For any $n\ge1$, $k\ge0$, the lattice $L_n\oplus {\mathbb Z}^k$ does not admit any finite index sublattices of half-integer surgery type. \end{lemma} {\bf \noindent Proof.\ } For $k=0$ this is immediate since $L_n$ has no nonzero vectors of square less than $4$. If $k>0$ let $e_1$,\dots,$e_k$ be an orthonormal basis of ${\mathbb Z}^k$, and suppose we have a sublattice of $L_n\oplus {\mathbb Z}^k$ of half-integer surgery type with basis $\{x_i,y_i\}$ as in Definition \ref{def:halfint}. Up to an automorphism of $L_n\oplus {\mathbb Z}^k$ we have $x_1=e_1+e_2$. Then $x_2$ is orthogonal to $x_1$. We cannot have $x_2=e_1-e_2$ since $y_1$ pairs evenly with $x_2$ and oddly with $x_1$. Thus up to automorphism, $x_2=e_3+e_4$. It follows that any sublattice of $L_n\oplus {\mathbb Z}^k$ of half-integer surgery type has rank at most $k$.\endproof Corollary \ref{cor:Kn} now follows from Lemmas \ref{lem:Knobst1} and \ref{lem:Knobst2} and Theorem \ref{thm:mainthm}. \begin{remark} \label{rem:Kn} Livingston conjectured in \cite{liv} that the difference $U_s-g_s$ can be arbitrarily large. It is possible to unknot $K_n$ by changing $2n$ positive crossings in the diagram as in Figure \ref{fig:Kn}. In the absence of any further evidence it is tempting to conjecture that for these knots $U_s-g_s=n$; in any case this would seem to be a good candidate with which to attempt to verify Livingston's conjecture. 
\end{remark} \begin{remark} \label{rem:disk} The trace of a homotopy from a knot $K$ to a slice knot $J$ is an immersed annulus in $S^3\times I$; capping this off with a slice disk yields an immersed disk $D$ in $B^4$ bounded by $K$. If $J$ is obtained from $K$ by changing $p$ positive and $n$ negative crossings then the resulting disk $D$ has $p$ negative self-intersections and $n$ positive self-intersections. (This is why changing a positive crossing is often referred to as a ``negative crossing change''.) Instead of considering crossing changes one may ask whether a knot $K$ bounds an immersed disk in $B^4$ with a prescribed number of self-intersections, or with prescribed numbers of self-intersections of each sign. Rudolph has shown in \cite{r} that the minimal number of self-intersections in a \emph{ribbon} immersed disk bounded by $K$ is equal to the minimal number of crossing changes to get from $K$ to a ribbon knot. (Here a ribbon surface in $B^4$ is one on which the radial distance function has no maxima, and a ribbon knot is a knot which bounds an embedded ribbon disk.) It would be very interesting to know whether a result analogous to Rudolph's holds for non-ribbon disks and slice knots. It is to be expected that the conclusion of Theorem \ref{thm:mainthm} holds under the weaker hypothesis that $K$ bounds an immersed disk with $p$ negative, $n=\sigma(K)/2$ positive self-intersections. The expected proof would generalise that of \cite{my} and also make use of \cite[Theorem 3.7]{cl}. \end{remark} \newpage
% arXiv:0802.2109 --- ``On slicing invariants of knots''
https://arxiv.org/abs/1902.06484
Find Subtrees of Specified Weight and Cycles of Specified Length in Linear Time
We apply the Euler tour technique to find subtrees of specified weight as follows. Let $k, g, N_1, N_2 \in \mathbb{N}$ such that $1 \leq k \leq N_2$, $g + h > 2$ and $2k - 4g - h + 3 \leq N_2 \leq 2k + g + h - 2$, where $h := 2N_1 - N_2$. Let $T$ be a tree of $N_1$ vertices and let $c : V(T) \rightarrow \mathbb{N}$ be vertex weights such that $c(T) := \sum_{v \in V(T)} c(v) = N_2$ and $c(v) \leq k$ for all $v \in V(T)$. We prove that a subtree $S$ of $T$ of weight $k - g + 1 \leq c(S) \leq k$ exists and can be found in linear time. We apply it to show, among others, the following: (i) Every planar hamiltonian graph $G = (V(G), E(G))$ with minimum degree $\delta \geq 4$ has a cycle of length $k$ for every $k \in \{\lfloor \frac{|V(G)|}{2} \rfloor, \dots, \lceil \frac{|V(G)|}{2} \rceil + 3\}$ with $3 \leq k \leq |V(G)|$. (ii) Every $3$-connected planar hamiltonian graph $G$ with $\delta \geq 4$ and $|V(G)| \geq 8$ even has a cycle of length $\frac{|V(G)|}{2} - 1$ or $\frac{|V(G)|}{2} - 2$. Each of these cycles can be found in linear time if a Hamilton cycle of the graph is given. This work was partially motivated by conjectures of Bondy and Malkevitch on cycle spectra of 4-connected planar graphs.
\section{Introduction} \label{sec:intro} Given a tree $T$ and vertex weights $c: V(T) \rightarrow \mathbb{N}$, it is natural to ask which specified weights can be realised by subtrees. Let $S$ be a subtree of $T$. We define $c(S) := \sum_{v \in V(S)} c(v)$. Let $k, g \in \mathbb{N}$ with $1 \leq k \leq c(T)$. We aim at finding a subtree $S$ of weight $k - g + 1 \leq c(S) \leq k$. Denote by $N_1$ and $N_2$ the number of vertices of the tree $T$ and the weight $c(T)$ of the whole tree, respectively. Note that if we allow $N_2$ to be arbitrarily large when compared to $N_1$, then our goal becomes hopeless. For example, if every vertex has weight $g' \gg g$, then a subtree $S$ of weight $k - g + 1 \leq c(S) \leq k$ exists if and only if $k \equiv 0, \dots, g - 1 \mod g'$. This means that the desired subtree does not exist for most choices of $k$. We denote by $h$ twice the difference between $N_1$ and $\frac{N_2}{2}$, that is, $h := 2N_1 - N_2$. Our main goal is to prove the following lemma, which can be interpreted as saying that the closer our target $k$ is to the medium weight $\frac{N_2}{2}$, and the smaller the medium weight $\frac{N_2}{2}$ is compared to the number of vertices $N_1$, the more favourable the conditions for finding the desired subtree. It is complemented by a deterministic linear-time algorithm, which will be given in Section~\ref{subsec:proof}. \begin{lemma} \label{lem:ksubtree} Let $k, g, N_1, N_2 \in \mathbb{N}$ such that $1 \leq k \leq N_2$, $g + h > 2$ and $2k - 4g - h + 3 \leq N_2 \leq 2k + g + h - 2$, where $h := 2N_1 - N_2$. Let $T$ be a tree of $N_1$ vertices and let $c : V(T) \rightarrow \mathbb{N}$ be vertex weights such that $c(T) = N_2$ and $c(v) \leq k$ for all $v \in V(T)$. Then there exists a subtree $S$ of $T$ of weight $k - g + 1 \leq c(S) \leq k$ and $S$ can be found in $O(N_1)$ time. \end{lemma} For the running time we assume that each arithmetic operation can be done in constant time.
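As a quick aid to the reader, the arithmetic hypotheses of the lemma can be checked mechanically. The following Python sketch (our own illustration; the function name is hypothetical, not from the paper) tests them for given parameters:

```python
def lemma_hypotheses_hold(k, g, N1, N2):
    """Check the arithmetic hypotheses of the subtree lemma:
    1 <= k <= N2,  g + h > 2,  and  2k - 4g - h + 3 <= N2 <= 2k + g + h - 2,
    where h := 2*N1 - N2."""
    h = 2 * N1 - N2
    return (1 <= k <= N2
            and g + h > 2
            and 2 * k - 4 * g - h + 3 <= N2 <= 2 * k + g + h - 2)

# A tree on N1 = 7 vertices with total weight N2 = 12 (so h = 2):
# the target k = 6 with slack g = 1 satisfies every hypothesis.
print(lemma_hypotheses_hold(6, 1, 7, 12))   # True
```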
There are numerous results concerning subtrees of a tree with vertex weights, e.g. partitioning a tree into subtrees subject to constraints, or finding a subtree of maximum weight. However, the author is not aware of results similar to ours on finding one subtree of specified weight. Along with the existence of subtrees of specified weight, we present an optimal linear-time algorithm for finding them. Note that a tree may in general have exponentially many subtrees. Hence, our algorithm considers only a rather restricted (linear-size) subclass of subtrees. We will exploit the Euler tour technique and find a subtree by local search. Formally, given a tree $T = (V(T), E(T))$, we construct a directed cycle $C_T$ of size $2|E(T)|$ and consider the canonical homomorphism which maps vertices of $C_T$ to those of $T$ (as depicted in Figure~\ref{fig:CT}). It is clear that every path in $C_T$ is thereby mapped to a subtree of $T$. We will prove that some subtree arising in this way indeed satisfies the requirement. A linear-time algorithm then follows as a simple consequence: it searches greedily for a path in $C_T$ whose corresponding subtree in $T$ is what we are looking for. To prove that such a path in $C_T$ exists, we assume that this is not the case, and then deduce a contradiction by counting the number of vertices of weight 1 in $T$ in two ways. Section~\ref{sec:main} is devoted to the proof of Lemma~\ref{lem:ksubtree}. The problem of finding cycles of specified length is among the most fundamental problems in algorithmic graph theory. By a novel method called \emph{color-coding}, Alon et al.~\cite{Alon1995} gave a randomized algorithm which, for a fixed $k$ and a planar graph $G = (V(G), E(G))$ containing a cycle of length $k$, finds such a cycle in linear expected time. It can be derandomized at the price of a $\log |V(G)|$ factor.
We refer to~\cite{Alon1997, Feder2010} for more related results on finding cycles of specified length efficiently. In this paper we will apply Lemma~\ref{lem:ksubtree} and obtain some closely related results. We prove, among others, the following: \begin{itemize} \item Every planar hamiltonian graph $G$ with minimum degree $\delta \geq 4$ has a cycle of length $k$ for every $k \in \{\lfloor \frac{|V(G)|}{2} \rfloor, \dots, \lceil \frac{|V(G)|}{2} \rceil + 3\}$ with $3 \leq k \leq |V(G)|$. \item Every $3$-connected planar hamiltonian graph $G$ with $\delta \geq 4$ has a cycle of length $\frac{|V(G)|}{2} - 1$ or $\frac{|V(G)|}{2} - 2$ if $|V(G)| \geq 8$ is even. \end{itemize} Each of these cycles can be found in linear time if a Hamilton cycle of the graph is given. These results were partially motivated by two conjectures, one posed by Bondy in 1973 and the other by Malkevitch in 1988. A detailed account of this subject is given in Section~\ref{sec:cycle}. Finding a subtree of specified weight in a tree can be seen as a harder problem than finding a subset of specified sum in a multiset of integers, since one can weight the vertices of the tree by the integers in the multiset. We refer to Appendix~\ref{sec:number} for further discussion. \section{Notation} \label{sec:notation} We use the minus sign to denote set subtraction, and parentheses are omitted for single elements when this causes no ambiguity. We consider only simple graphs in this paper. Let $G$ be an undirected graph. We denote by $V(G)$ and $E(G)$ the vertex set and the edge set of $G$, and call $|V(G)|$ and $|E(G)|$ the order and the size of $G$, respectively. We denote by $d_G(v)$ the degree of a vertex $v \in V(G)$ in the graph $G$. The minimum and maximum degrees of $G$ are defined as $\delta(G) := \min_{v \in V(G)} d_G(v)$ and $\Delta(G) := \max_{v \in V(G)} d_G(v)$, respectively. For $W \subseteq V(G)$, $G[W]$ is defined to be the induced subgraph of $G$ on $W$.
Let $c : V(G) \rightarrow \mathbb{N}$ be vertex weights. For $i \in \mathbb{N}$, we denote by $V_i(G) \subseteq V(G)$ the set of vertices $v$ in $G$ with $c(v) = i$. We write $V_i := V_i(G)$ if there is no ambiguity. For a subgraph $H$ of $G$, we define $c(H) := \sum_{v \in V(H)} c(v)$. In an undirected graph $G$ we denote by $vw$ or $wv$ the edge with endvertices $v, w \in V(G)$. We abuse the notation of a sequence of vertices as follows. Let $t \in \mathbb{N}$. For $t$ distinct vertices $v_1, v_2, \dots, v_t$, we denote by $v_1 v_2 \dots v_t$ the \emph{path} $P$ with endvertices $v_1, v_t$ such that $V(P) := \{v_1, v_2, \dots, v_t\}$ and $E(P) := \{v_1 v_2, v_2 v_3, \dots, v_{t - 1} v_t\}$. For $t \geq 3$ distinct vertices $v_1, v_2, \dots, v_t$, we denote by $v_1 v_2 \dots v_t v_1$ the \emph{cycle} $K$ of length $t$ such that $V(K) := \{v_1, v_2, \dots, v_t\}$ and $E(K) := \{v_1 v_2, v_2 v_3, \dots, v_{t - 1} v_t, v_t v_1\}$. In a directed graph $G$ we denote by $vw$ the edge directed from $v$ to $w$ for $v, w \in V(G)$. Let $C$ be a directed cycle. For $u, v \in V(C)$, we define $[u, v]_C$ to be the path directed from $u$ to $v$ along $C$. Subscripts may be omitted if the cycle is clear from the context. For an edge $vw$ in $C$, we define $v^+ := w$ and $w^- := v$. For a plane graph $G$, we identify the faces of $G$ not only with the vertices of the dual graph $G^*$ but also with the cycles bounding the faces, provided that $G$ is 2-connected and is not a cycle. Let $T$ be a tree. For $vw \in E(T)$, we denote by $T[vw; v]$ the connected component of $T - vw$ containing $v$. Given a vertex $a \in V(T)$, we denote the tree $T$ rooted at $a$ by $T^{(a)}$. For $v \in V(T)$, $T^{(a)}_v$ is defined as the subtree of $T$ consisting of $v$ and all of its descendants in $T^{(a)}$.
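To make the rooted-tree notation concrete, here is a short Python sketch (ours, with hypothetical names, not from the paper) that computes the weights $c(T^{(a)}_v)$ of all rooted subtrees in one bottom-up pass; note that if $w$ is the parent of $v$ in $T^{(a)}$, then $c(T[vw; v]) = c(T^{(a)}_v)$.

```python
def subtree_weights(adj, c, a):
    """Return {v: c(T^(a)_v)} for the tree with adjacency lists `adj`,
    vertex weights `c`, rooted at `a` (iterative DFS, then a bottom-up pass)."""
    weight = dict(c)                      # start with each vertex's own weight
    order, parent, stack = [], {a: None}, [a]
    while stack:                          # build a DFS ordering with parents
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)
    for v in reversed(order):             # accumulate weights bottom-up
        if parent[v] is not None:
            weight[parent[v]] += weight[v]
    return weight

# The path 0-1-2 with unit weights, rooted at 0:
adj = {0: [1], 1: [0, 2], 2: [1]}
print(subtree_weights(adj, {0: 1, 1: 1, 2: 1}, 0))  # {0: 3, 1: 2, 2: 1}
```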
A graph $G$ is said to be $\kappa$-connected for some $\kappa \in \mathbb{N}$ if $G$ has at least $\kappa + 1$ vertices and $G - U$ is connected for every $U \subseteq V(G)$ with $|U| < \kappa$. \section{Find Subtrees of Specified Weight} \label{sec:main} This section is devoted to the proof of Lemma~\ref{lem:ksubtree}. To this end, we \emph{may} assume that it does not hold and then deduce a contradiction. However, it would be hopeless to derive an efficient algorithm from such a proof, since we would then have to search over possibly exponentially many subtrees of a tree. Fortunately, we can restrict attention to a linear-size subclass of subtrees instead, which we define in Section~\ref{sec:DFS}. From then on we assume, towards a contradiction, that this subclass contains no subtree of $T$ of the desired weight. We outline how to prove Lemma~\ref{lem:ksubtree} by double counting in Section~\ref{sec:oda}. We introduce a helpful notion called the support subtree in Section~\ref{sec:support}, and study the subtrees in the aforementioned linear-size subclass in Section~\ref{sec:study}. We give a proof of Lemma~\ref{lem:ksubtree} and pseudocode for a linear-time algorithm in Section~\ref{subsec:proof}. In Section~\ref{sec:examples} we present some examples showing that the conditions in Lemma~\ref{lem:ksubtree} are tight in several respects. \subsection{An Overload-Discharge Approach} \label{sec:ODa} \subsubsection{Overloading Subtrees by ETT} \label{sec:DFS} To define the subclass of subtrees mentioned above, we consider the subtrees collected by the so-called \emph{Euler tour technique} (ETT), which was first introduced by Tarjan and Vishkin~\cite{Tarjan1984} and has abundant applications in computing and data structures. We fix a planar embedding of the tree $T$ and walk around it, i.e. we view the edges of $T$ as walls perpendicular to the plane and walk on the plane along the walls. This walk yields a cycle of size $2(|V(T)| - 1)$.
To make this precise, we define the auxiliary directed cycle graph $C_T$ as follows. For each $v \in V(T)$, we enumerate the edges incident to $v$ in the clockwise order given by the planar embedding and denote them by $e_{v, 1}, e_{v, 2}, \dots, e_{v, d_T(v)}$. The vertex set $V(C_T)$ consists of $d_T(v)$ vertices $w_{v, 1}, w_{v, 2}, \dots, w_{v, d_T(v)}$ for each $v \in V(T)$. Let $w_{v, d_T(v) + 1} := w_{v, 1}$. For every edge $uv \in E(T)$, say $uv = e_{u, i} = e_{v, j}$ for some $i \in \{1, \dots, d_T(u)\}$ and $j \in \{1, \dots, d_T(v)\}$, $E(C_T)$ contains the edges $w_{u, i} w_{v, j + 1}$ and $w_{v, j} w_{u, i + 1}$. It is clear that $C_T$ is our desired cycle of size $2(|V(T)| - 1)$ (see Figure~\ref{fig:CT}). Note that a directed path in $C_T$ corresponds naturally to a subtree of $T$. Moreover, growing a subtree of $T$ by this walk along the walls is equivalent to growing a directed path in $C_T$. We define the mapping $\rho$ (a homomorphism) from $V(C_T)$ to $V(T)$ by $\rho(w_{v, i}) := v$ for $w_{v, i} \in V(C_T)$ with $v \in V(T)$ and $i \in \{1, \dots, d_T(v)\}$. We extend this mapping to paths $[u, v]$ directed from $u$ to $v$ in $C_T$ ($u, v \in V(C_T)$) by defining $\rho([u, v]) := T[\{\rho(w) : w \in V([u, v])\}]$. We then extend the weight function $c$ to the vertices $w$ and directed paths $[u, v]$ of $C_T$ ($w, u, v \in V(C_T)$) by $c(w) := c(\rho(w))$ and $c([u, v]) := c(\rho([u, v]))$. Here we state an assumption (towards a contradiction) which we adopt from now on: \begin{itemize} [\textbf{Assumption \setword{($\Omega$)}{ass:ksubtree}.}] \item There are no $x, y \in V(C_T)$ with ${k} - g + 1 \leq c([x, y]) \leq {k}$. \end{itemize} In other words, no subtree of $T$ of weight between ${k} - g + 1$ and ${k}$ can be found by searching along the Euler tour.
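The walk can be sketched in a few lines of Python. Rather than constructing $C_T$ itself, the sketch below (our own illustration, not from the paper) records the cyclic sequence of images $\rho(w)$, $w \in V(C_T)$, which is all the search described next needs; children are visited in the order of the adjacency lists, playing the role of the clockwise embedding order.

```python
def euler_tour(adj, root):
    """Return the cyclic sequence rho(w), w in V(C_T): walk around the tree
    `adj` (adjacency lists in embedding order), recording a vertex each time
    the walk enters it.  The result has length 2|E(T)|.
    (Recursive, so intended for small trees.)"""
    def walk(v, parent):
        seq = [v]
        for w in adj[v]:                  # children in clockwise order
            if w != parent:
                seq += walk(w, v) + [v]   # descend into w, then return to v
        return seq
    return walk(root, None)[:-1]          # drop the final return to the root

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(euler_tour(adj, 0))                 # [0, 1, 3, 1, 0, 2]
```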
Once we show that Assumption~\ref{ass:ksubtree} cannot hold, we know that there exists a subtree $S$ of $T$ of weight ${k} - g + 1 \leq c(S) \leq {k}$. Indeed, we can then exhibit a linear-size subclass of subtrees containing some subtree $S$ of weight ${k} - g + 1 \leq c(S) \leq {k}$. For instance, if $k < c(T)$, we can consider the subtrees corresponding to the paths $[u, v]$ ($u, v \in V(C_T)$) each satisfying $c([u, v]) \le k$, $c([u^-, v]) > k$ and $c([u, v^+]) > k$. There are at most $|V(C_T)| = O(|V(T)|)$ such subtrees, and at least one of them is of weight between $k - g + 1$ and $k$ when Assumption~\ref{ass:ksubtree} does not hold. This helps us to devise a linear-time algorithm (Algorithm~\ref{alg:overloaddischarge}) for finding a subtree of the desired weight. By Assumption~\ref{ass:ksubtree}, the inequalities $c([u, v]) \ge k - g + 1$ and $c([u, v]) \le k$ ($u, v \in V(C_T)$) are equivalent to $c([u, v]) > k$ and $c([u, v]) < k - g + 1$, respectively. (Such uses of Assumption~\ref{ass:ksubtree} will occur tacitly.) We mention some further consequences of Assumption~\ref{ass:ksubtree}. It is easy to see that $c(v) \leq k - g$ for any $v \in V(T)$ and that $N_2 > k$. Consider a path $[u, v]$ ($u, v \in V(C_T)$) satisfying $c([u, v]) \ge k$, $c([u^+, v]) \le k$ and $c([u, v^-]) \le k$, equivalently, $c([u, v]) > k$, $c([u^+, v]) < k - g + 1$ and $c([u, v^-]) < k - g + 1$. It is clear that such a path exists, that $\rho(u) \neq \rho(v)$, and that both $c(u)$ and $c(v)$ are at least $g + 1$. In particular, we have $\sum_{i \geq g + 1} |V_i| \geq 2$. Let $u, v \in V(C_T)$. The path $[u, v]$ is \emph{$k$-overloading} or simply \emph{overloading} if $c([u^-, v^-]) > k$, $c([u, v^-]) \le k$ and $c([u, v]) > k$, and we say $\rho([u, v])$ is an \emph{overloading subtree}. Note that an overloading path always exists when we assume~\ref{ass:ksubtree}, since we are given that $1 \le k < N_2 = c(T)$ and $c(v) \le k$ for every $v \in V(T)$.
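The greedy scan over this subclass can be sketched as a sliding window on the Euler tour (our own Python illustration, assuming the tour is given as the cyclic list of images $\rho(w)$). The weight of a window is the weight of the set of \emph{distinct} tree vertices it contains, matching $c([u, v]) = c(\rho([u, v]))$; under the hypotheses of Lemma~\ref{lem:ksubtree} the scan is guaranteed to succeed.

```python
def find_subtree(tour, c, k, g):
    """Slide a window over the cyclic Euler tour `tour`, maintaining the
    weight of the set of distinct tree vertices inside it; return such a
    set of weight in [k - g + 1, k], or None if the scan fails."""
    n = len(tour)
    count, weight, left = {}, 0, 0
    for right in range(2 * n):            # two rounds handle wrap-around windows
        v = tour[right % n]
        if count.get(v, 0) == 0:          # v enters the window's vertex set
            weight += c[v]
        count[v] = count.get(v, 0) + 1
        while weight > k and left < right:
            u = tour[left % n]            # shrink from the left while overloaded
            count[u] -= 1
            if count[u] == 0:             # u leaves the window's vertex set
                weight -= c[u]
            left += 1
        if k - g + 1 <= weight <= k:
            return {tour[i % n] for i in range(left, right + 1)}
    return None

tour = [0, 1, 2, 3, 2, 1]                 # Euler tour of the path 0-1-2-3
c = {0: 1, 1: 2, 2: 1, 3: 2}
print(find_subtree(tour, c, 3, 1))        # {0, 1}
```

The returned vertex set comes from a contiguous arc of the tour, so it always induces a subtree of $T$.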
\begin{figure}[!ht] \begin{center} \subfloat[The tree $T$ with vertex weights. \label{subfig:vertexweight2}]{% \includegraphics[scale=1]{Simple_original_weights} } \hspace{1.5cm} \subfloat[The tree $T$, and the auxiliary cycle $C_T$ whose edges are directed clockwise. Set $k := 7$ and $g := 1$. $Q_{x, y}$ is a maximal overload-discharge quadruple, since $c({[}x, y{]}) > 7$, $c({[}x, y^-{]}) < 7$ and $c({[}x^-, y^-{]}) > 7$. The path ${[}x, y{]}$ and the corresponding overloading subtree are indicated in red and orange, respectively.\label{subfig:overloadingsubtree}]{% \includegraphics[scale=1]{Simple_algorithm_result} } \caption{Walk around the tree by ETT.} \label{fig:CT} \end{center} \end{figure} \subsubsection{Bounds on $|V_1|$} \label{sec:oda} We now show how to count the number of vertices of weight 1 in two ways so that a contradiction arises. By considering the sum of all the vertex weights, we have that \begin{align*} N_2 &= \sum_{i \geq 1} i |V_i|\\ &= 2 \sum_{i \geq 1} |V_i| + \sum_{1 \leq i \leq g} (i - 2) |V_i| + \sum_{i \geq g + 1} (i - 2) |V_i|\\ &= 2 N_1 - |V_1| + \sum_{2 \leq i \leq g} (i - 2) |V_i| + \sum_{i \geq g + 1} (i - g - 1) |V_i| + \sum_{i \geq g + 1} (g - 1) |V_i|\\ &= \sum_{i \geq g + 1} (i - g - 1)|V_i| + \sum_{i \geq 2} \min \{i - 2, g - 1\} |V_i| + 2 N_1 - |V_1|. \end{align*}As $h = 2 N_1 - N_2$ and $\sum_{i \geq g + 1} |V_i| \geq 2$, we have the following lower bound on $|V_1|$: \begin{align} \label{for:1} |V_1| \geq \sum_{i \geq g + 1} (i - g - 1)|V_i| + 2g + h - 2. \tag{$\diamond$} \end{align} The intuitive idea of our proof of Lemma~\ref{lem:ksubtree} is that if $|V_1|$ is large enough, i.e. if there are many vertices of weight $1$, then this should facilitate the search for a subtree of the desired weight. Therefore, if Lemma~\ref{lem:ksubtree} did not hold, there would be an upper bound on the number of vertices of weight $1$ contradicting inequality (\ref{for:1}).
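The chain of equalities above is a rearrangement valid for arbitrary weights. The following Python sketch (ours, purely illustrative) verifies the resulting identity $N_2 = \sum_{i \geq g+1}(i-g-1)|V_i| + \sum_{i \geq 2}\min\{i-2, g-1\}|V_i| + 2N_1 - |V_1|$ numerically for a sample weight multiset:

```python
from collections import Counter

def identity_holds(weights, g):
    """Check the rearrangement
        N2 = sum_{i>=g+1} (i-g-1)|V_i| + sum_{i>=2} min(i-2, g-1)|V_i|
             + 2*N1 - |V_1|
    for an arbitrary list of positive vertex weights (g >= 1)."""
    V = Counter(weights)                  # V[i] = number of vertices of weight i
    N1, N2 = len(weights), sum(weights)
    rhs = (sum((i - g - 1) * V[i] for i in V if i >= g + 1)
           + sum(min(i - 2, g - 1) * V[i] for i in V if i >= 2)
           + 2 * N1 - V[1])
    return N2 == rhs

print(identity_holds([1, 1, 1, 2, 3, 5, 5], 2))   # True
```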
The upper bound is realized by the following observation. \begin{observation} \label{obs:1bound} Let $g, {k} \in \mathbb{N}$. Let $S$ be a subtree of $T$ with $c(S) > {k}$, $l$ be a leaf of $S$ with $c(S - l) < {k} - g + 1$, $M$ be a subset of $V(S) - l$ and $n$ a vertex in $S - M - l$ such that $S - M$ remains a tree, $n$ is a leaf of $S - M$, $c(S - M) > {k}$ and $c(S - M - n) < {k} - g + 1$. Then we have \begin{align} \label{for:observation} |M \cap V_1| \leq c(S) - ({k} + 1) \leq c(l) - g - 1. \tag{$\ast$} \end{align} \end{observation} The vertex set $M$ can be seen as a set of vertices collected in a leaf-cutting process: we cut leaves (other than $l$) one by one from $S$ in such a way that the weight of the remainder stays larger than ${k}$, and it drops below ${k} - g + 1$ only once we further cut the vertex $n$. Note that $l$ is never cut from the subtree and always stays a leaf of the remaining part. \begin{proof} Since $c(S) - c(M) = c(S - M) > {k}$ and $c(S) - c(l) = c(S - l) < {k} - g + 1$, we have $|M \cap V_1| \leq c(M) \leq c(S) - ({k} + 1) \leq c(l) - g - 1$. \end{proof} Note that the conditions given in Observation~\ref{obs:1bound} arise naturally when the tree $T$ has no subtree of weight between ${k} - g + 1$ and ${k}$. We carry out an \emph{overload-discharge process} as follows. We grow a subtree (say, from a single vertex) of weight less than ${k} - g + 1$ until adding a vertex $l$ makes the weight of the subtree at least ${k} - g + 1$. As we assume that no subtree has weight between ${k} - g + 1$ and ${k}$, when we halt the growth the weight of the subtree is in fact not only at least ${k} - g + 1$ but greater than ${k}$. We then cut its leaves (other than $l$) one by one until the weight drops below ${k} - g + 1$ again. The overload and discharge steps can always be carried out provided that $N_2 > {k}$ and $c(v) \leq {k} - g$ for all $v \in V(T)$.
We say that $l$ \emph{overloads} $S$ and that a \emph{discharge} $M \cup \{n\}$ containing the \emph{last discharge} $n$ follows, and we call $(S, l, M, n)$ an \emph{overload-discharge quadruple}. It is clear that $l$ and $n$ are two distinct vertices of weight at least $g + 1$. Let us take a look at a crude argument for how a contradiction would arise. Suppose we have a family of overload-discharge quadruples $(S_f, l_f, M_f, n_f)$ (with some indices $f$) such that the vertices of weight 1 in $T$ are \emph{covered} by the discharges, i.e. $V_1 \subseteq \bigcup_f M_f$, and each overloading vertex $l_f$ corresponds to precisely one overload-discharge quadruple. Then, by inequality~(\ref{for:observation}), we can simply deduce the following contradiction to inequality (\ref{for:1}):\begin{align*} |V_1| \leq \sum_f |M_f \cap V_1| \leq \sum_f (c(l_f) - g - 1) \leq \sum_{i \geq g + 1} (i - g - 1)|V_i|. \end{align*} Although it is not always possible to obtain such a family of quadruples, we can still obtain a \emph{sufficiently good} family which leads to a contradiction to inequality~(\ref{for:1}) even if we only assume~\ref{ass:ksubtree}. We will consider the family of overload-discharge quadruples corresponding to overloading subtrees. We demonstrate how an overload-discharge quadruple can be formed by considering paths in $C_T$. Let $u, v$ be two distinct vertices of $C_T$. If $c([u, v^-]) < {k} - g + 1$ but $c([u, v]) > {k}$, then there exists $w \in V([u, v^-])$ such that $c([w, v]) > {k}$ and $c([w^+, v]) < {k} - g + 1$. It is clear that $\rho(v)$ overloads the subtree $\rho([u, v])$ and we call $$(\rho([u, v]), \rho(v), V(\rho([u, v])) - V(\rho([w, v])), \rho(w)) =: Q_{u, v}$$ an \emph{overload-discharge quadruple associated with $u, v$}.
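The construction of $Q_{u,v}$ amounts to a scan for the split vertex $w$. A brute-force Python sketch (ours; quadratic in the window length, and only an illustration of the definitions): given the list of tree vertices $\rho(\cdot)$ along $[u, v]$, it advances while the suffix $[w^+, v]$ still has weight greater than $k$; in general the suffix weight then drops to at most $k$, and under Assumption~($\Omega$) even below $k - g + 1$, yielding the discharge.

```python
def overload_discharge(window, c, k):
    """`window` lists the tree vertices rho(.) along a path [u, v] of C_T
    with c([u, v^-]) < k - g + 1 and c([u, v]) > k.  Find the split index i
    such that the suffix [window[i], v] has weight > k while the suffix
    [window[i+1], v] does not.  Return the overloading vertex l = rho(v),
    the discharge M union {n}, and the last discharge n = rho(window[i])."""
    def suffix_weight(i):                 # weight of distinct vertices in window[i:]
        return sum(c[x] for x in set(window[i:]))
    i = 0
    while suffix_weight(i + 1) > k:
        i += 1
    l, n = window[-1], window[i]
    M = set(window) - set(window[i:])     # vertices cut before the last discharge
    return l, M | {n}, n

l, D, n = overload_discharge([0, 1, 2, 3], {0: 1, 1: 1, 2: 1, 3: 5}, 6)
print(l, D, n)                            # 3 {0, 1} 1
```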
An overload-discharge quadruple $Q_{u, v}$ associated with $u, v \in V(C_T)$ is \emph{maximal} if $c([u^-, v^-]) > {k}$ holds, or equivalently, if $[u, v]$ is an overloading path in $C_T$ (see Figure~\subref*{subfig:overloadingsubtree}). We let $\mathcal{Q}(T; c, {k}) =: \mathcal{Q}$ be the family of all maximal overload-discharge quadruples associated with some $u, v \in V(C_T)$. \subsection{Support Vertices and Support Subtree} \label{sec:support} In order to see how the overloading subtrees from $\mathcal{Q}$ are \emph{packed} in the weighted tree $T$, we need to study the structure of $T$ in more detail. We introduce the notions of support vertices and of the support subtree of the weighted tree $T$ in this section. We first fix an arbitrary vertex $a \in V(T)$ and consider the rooted tree $T^{(a)}$. Note that there always exists a vertex $r$ such that $c(T^{(a)}_r) > k$ and $c(T^{(a)}_w) \leq k$ for all children $w$ of $r$ in $T^{(a)}$, as we assume $c(T^{(a)}) = N_2 > k$. We take one such vertex $r$ and consider the tree $T^{(r)}$ rooted at $r$. Let $r_1, \dots, r_t$ ($t \in \mathbb{N}$) be the vertices each of which satisfies $c(T^{(r)}_{r_i}) > k$ and $c(T^{(r)}_w) \leq k$ for all children $w$ of $r_i$ in $T^{(r)}$ ($i = 1, \dots, t$). We call $r, r_1, \dots, r_t$ \emph{support vertices} of $T$, and the minimal subtree $T^*$ containing all support vertices the \emph{support subtree} of $T$. Note that $T^{(r)}_w$ is exactly the same subtree as $T^{(a)}_w$ for every $w \in V(T^{(a)}_r) - r$; therefore $c(T^{(r)}_w) = c(T^{(a)}_w) \leq k$ and $r_i \notin V(T^{(a)}_r) - r$ for every $i = 1, \dots, t$. If there are two distinct support vertices $r_{i_1}, r_{i_2}$ with $i_1, i_2 \in \{1, \dots, t\}$, then $r_{i_1}$ is neither an ancestor nor a descendant of $r_{i_2}$ in $T^{(r)}$; in particular, neither $r_{i_1}$ nor $r_{i_2}$ can be $r$. It is possible that $r$ is one of $r_1, \dots, r_t$; this happens if and only if $r$ is the only support vertex and $|V(T^*)| = 1$.
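The support vertices can be computed directly from the definition. A Python sketch of this (ours; it re-roots and recomputes all subtree weights, so it is a brute-force illustration, not the paper's linear-time routine): first find one vertex $r$ whose rooted subtree is minimally of weight exceeding $k$, then repeat from the root $r$ to collect $r_1, \dots, r_t$.

```python
def support_vertices(adj, c, k, a):
    """Compute the support vertices of (T, c) for threshold k: find r with
    c(T^(a)_r) > k but every child subtree of weight <= k, then re-root at r
    and collect r_1, ..., r_t in the same way."""
    def rooted(root):                     # subtree weights and parents for T^(root)
        parent, order, stack = {root: None}, [], [root]
        while stack:
            v = stack.pop()
            order.append(v)
            for w in adj[v]:
                if w != parent[v]:
                    parent[w] = v
                    stack.append(w)
        weight = dict(c)
        for v in reversed(order):         # bottom-up accumulation
            if parent[v] is not None:
                weight[parent[v]] += weight[v]
        return weight, parent

    def minimal_heavy(root):              # vertices whose subtree is minimally > k
        weight, parent = rooted(root)
        return [v for v in adj
                if weight[v] > k
                and all(weight[w] <= k for w in adj[v] if w != parent[v])]

    r = minimal_heavy(a)[0]               # such an r exists since c(T) > k
    return {r} | set(minimal_heavy(r))

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(support_vertices(path, {i: 1 for i in range(5)}, 1, 0))   # {1, 3}
```

For the unit-weight path on five vertices with $k = 1$, the support subtree is the middle path on vertices $1, 2, 3$, with support vertices (its leaves) $1$ and $3$.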
We conclude that the leaves of $T^*$ are exactly the support vertices if $|V(T^*)| > 1$, as $T^*$ is the union of the paths $P_i$ ($i = 1, \dots, t$), where $P_i$ is the path in $T$ with endvertices $r$ and $r_i$. It is clear that $T - E(T^*)$ is a forest of $|V(T^*)|$ subtrees. For $\tilde{r} \in V(T^*)$, we denote by $T^*[\tilde{r}]$ the maximal subtree of $T - E(T^*)$ containing $\tilde{r}$. If $|V(T^*)| > 1$, then the subtrees $T^{(a)}_r, T^{(r)}_{r_1}, \dots, T^{(r)}_{r_t}$ are exactly $T^*[r], T^*[r_1], \dots, T^*[r_t]$, and each of them has weight at least $k + 1$. Let $t^*$ be the number of support vertices. It is clear that $t^* = t = 1$ if $|V(T^*)| = 1$, and $t^* = t + 1$ if $|V(T^*)| > 1$. We claim that $N_2 \geq t^*(k + 1)$. It holds trivially if $|V(T^*)| = 1$. If $|V(T^*)| > 1$, then we have $t^*$ vertex-disjoint subtrees $T^*[r], T^*[r_1], \dots, T^*[r_t]$. Thus we have $$N_2 = c(T) \geq c(T^*[r]) + c(T^*[r_1]) + \dots + c(T^*[r_t]) \geq t^*(k + 1).$$ In particular, $N_2 \geq 2k + 2$ if $|V(T^*)| > 1$. We remark that for a fixed $k$ the support subtree $T^*$ is uniquely defined if $|V(T^*)| > 1$, while it is not always uniquely defined if $|V(T^*)| = 1$, as it would depend on the initial root $a$. For ease of presentation we assume that some support subtree is fixed throughout. \subsection{Overloading Vertices and Discharges} \label{sec:study} In this section we focus on the vertices which overload subtrees from $\mathcal{Q}$ and see how many discharges each of them can carry. We first show a sufficient condition for a vertex to be contained in some discharge from $\mathcal{Q}$. \begin{lemma} \label{lem:indischarge} Let $vw$ be an edge in $T$. If $c(T[vw; w]) \geq k - g$, then there exists $(S, l, M, n) \in \mathcal{Q}$ with $v \in M \cup \{n\}$. \end{lemma} \begin{proof} Let $i \in \{1, \dots, d_T(v)\}$ be such that the edge $vw$ is $e_{v, i}$.
We grow a path in $C_T$ from $u := w_{v, i}$ to obtain an overload-discharge quadruple $Q_{u, y}$ associated with $u, y$ for some $y \in V(C_T)$. The corresponding situation in $T$ is that a subtree starts growing at $v$ and then immediately traverses the edge $e_{v, i}$. It will overload, i.e. its weight will exceed $k$, without revisiting $v$, since $c(v) + c(T[vw; w]) \geq k - g + 1$. We can augment the path $[u, y]$ backwards along the cycle $C_T$ to obtain $[x, y]$ such that $c([x, y^-]) < k - g + 1$ but $c([x^-, y^-]) > k$. Then we have $Q_{x, y} \in \mathcal{Q}$. Note that $u$ is the only vertex in $[u, y]$ with $\rho(u) = v$, and $u$ cannot stay in the path after discharge since $c([u, y]) \geq k - g + 1$. Therefore the discharge of $Q_{x, y}$ must contain $v$. \end{proof} Now we give a necessary condition for a vertex to be an overloading vertex in some quadruple in $\mathcal{Q}$. \begin{lemma} \label{lem:olvertex} Let $Q_{x, y} \in \mathcal{Q}$ be an overload-discharge quadruple associated with some $x, y \in V(C_T)$. We have $c(T[\rho(y)\rho(y^-); \rho(y^-)]) + c(\rho(y)) > k$. \end{lemma} \begin{proof} It is clear that the subtree $\rho([x, y^-])$ is contained in the subtree $T[\rho(y)\rho(y^-); \rho(y^-)]$. Therefore, $c(T[\rho(y)\rho(y^-); \rho(y^-)]) + c(\rho(y)) \geq c([x, y]) > k$ as $\rho(y)$ overloads $\rho([x, y])$. \end{proof} We next show that if there is more than one overloading subtree having the same overloading vertex, then the mutual intersection among these subtrees can only be the overloading vertex, and such a vertex must be in the support subtree. We also prove upper bounds on the corresponding discharges. \begin{lemma} \label{lem:overlap} Let $Q_{x_1, y_1}, Q_{x_2, y_2} \in \mathcal{Q}$ be two distinct overload-discharge quadruples associated with $x_1, y_1 \in V(C_T)$ and with $x_2, y_2 \in V(C_T)$, respectively.
If $\rho(y_1) = \rho (y_2) =: l$, we have $$V(\rho([x_1, y_1])) \cap V(\rho([x_2, y_2])) = \{l\}$$ and $l$ must be a vertex in the support subtree $T^*$. \end{lemma} \begin{proof} As $Q_{x_1, y_1}$ and $Q_{x_2, y_2}$ are distinct overload-discharge quadruples, by the maximality of the elements of $\mathcal{Q}$, $y_1$ must be different from $y_2$. Hence we have distinct indices $i, j \in \{1, \dots, d_T(l)\}$ such that $y_1 = w_{l, i + 1}$ and $y_2 = w_{l, j + 1}$. Note that $y_f$ ($f = 1, 2$) is the only vertex in $[x_f, y_f]$ with $\rho(y_f) = l$ since $l$ is the overloading vertex. It means that $l$ is not in the subtree $\rho([x_f, {y_f}^-])$ ($f = 1, 2$). Moreover, the subtree $\rho([x_1, {y_1}^-])$ is a subtree of $T[e_{l, i}; \rho({y_1}^-)]$, i.e. the component not containing $l$ when deleting the edge $e_{l, i}$, and similarly, $\rho([x_2, {y_2}^-])$ is a subtree of the component not containing $l$ when deleting the edge $e_{l, j}$. Therefore $V(\rho([x_1, {y_1}^-])) \cap V(\rho([x_2, {y_2}^-])) = \emptyset$ and $V(\rho([x_1, y_1])) \cap V(\rho([x_2, y_2])) = \{l\}$. Suppose $l \notin V(T^*)$. Let $u \in V(T^*)$ and let $w$ be the neighbor of $l$ such that $u \in T[lw; w]$. By the definition of the support subtree, for every neighbor $v$ of $l$ other than $w$, we have $c(T[lv; v]) + c(l) \leq k$. By Lemma~\ref{lem:olvertex}, there is at most one overloading subtree whose overloading vertex is $l$, contradicting that $Q_{x_1, y_1}$ and $Q_{x_2, y_2}$ are distinct quadruples sharing the overloading vertex $l$. Hence $l \in V(T^*)$. \end{proof} \begin{lemma} \label{lem:hodotiu} Let $T_0$ be a subtree of $T$. Let $l$ be an overloading vertex shared by $t \in \mathbb{N}$ quadruples $(S_i, l, M_i, n_i) \in \mathcal{Q}$, for $i = 1, \dots, t$, such that $S_i$ is a subtree of $T_0$ for every $i = 1, \dots, t$. We have \begin{align*} \sum_{i = 1}^t |M_i \cap V_1| &\leq c(l) + (t - 2)(c(l) - k - 1) + c(T_0) - 2k - 2 \leq c(T_0) - k - 1.
\end{align*} \end{lemma} \begin{proof} By Lemma~\ref{lem:overlap}, the overloading subtrees share only the overloading vertex $l$, hence $c(l) + \sum_i (c(S_i) - c(l)) \leq c(T_0)$. By Observation~\ref{obs:1bound} and the assumption $c(v) \leq k$ for all $v \in V(T)$, we have \begin{align*} \sum_i |M_i \cap V_1| &\leq \sum_i (c(S_i) - (k + 1)) = \sum_i (c(S_i) - c(l)) + t (c(l) - (k + 1))\\ &\leq c(T_0) - c(l) + t (c(l) - (k + 1)) = c(T_0) + c(l) - 2(k + 1) + (t - 2)(c(l) - k - 1)\\ &= c(T_0) - k - 1 + (t - 1)(c(l) - k - 1) \leq c(T_0) - k - 1. \end{align*} \end{proof} As the last preparation for the proof of Lemma~\ref{lem:ksubtree} we show that a reasonable portion of the vertices will be covered by the discharges from $\mathcal{Q}$. \begin{lemma} \label{lem:covering41} If $|V(T^*)| > 1$, then we have $$\bigcup_{(S, l, M, n) \in \mathcal{Q}} (M \cup \{n\}) = V(T).$$ If $|V(T^*)| = 1$ and $N_2 \geq 2k - 2g - D$ for some $D \in \mathbb{N}$, then we have $|\bigcup_{(S, l, M, n) \in \mathcal{Q}} (M \cup \{n\})| \geq |V(T)| - D$. If $|V(T^*)| = 1$ and $N_2 \geq 2k - 2g$, then we have $$\bigcup_{(S, l, M, n) \in \mathcal{Q}} (M \cup \{n\}) \supseteq V(T) - V(T^*).$$ \end{lemma} \begin{proof} If $|V(T^*)| > 1$, then for a vertex $v$ in $T$ we can take a support vertex $u \neq v$ such that $v \notin T^*[u]$. Let $w$ be the vertex adjacent to $v$ such that $u \in T[vw; w]$. We have $c(T[vw; w]) \geq c(T^*[u]) \geq k + 1 \geq k - g$, and hence, by Lemma~\ref{lem:indischarge}, there exists $(S, l, M, n) \in \mathcal{Q}$ such that $v \in M \cup \{n\}$. Thus $\bigcup_{(S, l, M, n) \in \mathcal{Q}} (M \cup \{n\}) = V(T)$. If $|V(T^*)| = 1$ and $N_2 \geq 2k - 2g - D$ for some $D \in \mathbb{N}$, let $r$ be the unique support vertex. Consider the tree $T^{(r)}$ rooted at $r$. Let $U$ be the set of vertices $v$ with $c(T^{(r)}_v) \geq N_2 - k + g + 1$.
For $v \in V(T) - r - U$, let $w$ be the parent of $v$ in $T^{(r)}$; we have $c(T[vw; w]) \geq N_2 - (N_2 - k + g) = k - g$ and hence, by Lemma~\ref{lem:indischarge}, $v$ is covered by some discharge from $\mathcal{Q}$. We can assume that $U$ is not empty (otherwise at most one vertex, namely the root $r$, may fail to be covered by a discharge from $\mathcal{Q}$). We consider the subtree $T[U]$ of $T$ induced by $U$. If $T[U]$ has two leaves $v, w$ other than $r$, then $|U| \leq 2 + c(T[U] - v - w) \leq 2 + N_2 - c(T^{(r)}_v) - c(T^{(r)}_w) \leq 2 + N_2 - 2(N_2 - k + g + 1) = - N_2 + 2k - 2g \leq D$. Otherwise $T[U]$ is a path with $r$ as one of the endvertices, say $r v_1 v_2 \dots v_t$ for some integer $t \geq 0$. If $t > D$, then $c(T^{(r)}_{v_{1}}) \geq \sum_{i = 1}^{t - 1} c(v_{i}) + c(T^{(r)}_{v_t}) \geq D + c(T^{(r)}_{v_t}) \geq D + (N_2 - k + g + 1) \geq k - g + 1$, which contradicts the definition of the support subtree $T^*$, since in this case $v_1$ would be in $V(T^*)$. If $t = D$, similarly as above, we have $c(T^{(r)}_{v_1}) \geq k - g$ and hence, by Lemma~\ref{lem:indischarge}, $r$ is covered by some discharge from $\mathcal{Q}$. In any case, there are at most $D$ vertices which are not covered by any discharge from $\mathcal{Q}$, i.e. $|\bigcup_{(S, l, M, n) \in \mathcal{Q}} (M \cup \{n\})| \geq |V(T)| - D$. If $|V(T^*)| = 1$ and $N_2 \geq 2k - 2g$, let $r$ be the unique support vertex and let $v \in V(T) - r$ be any other vertex of $T$. By the definition of the support subtree, we have that $c(T^{(r)}_v) \leq k - g$. Let $w$ be the parent of $v$ in $T^{(r)}$. We have $c(T[vw; w]) = N_2 - c(T^{(r)}_v) \geq (2k - 2g) - (k - g) = k - g$. Therefore $V(T) - r \subseteq \bigcup_{(S, l, M, n) \in \mathcal{Q}} (M \cup \{n\})$. \end{proof} \subsection{Proof of Lemma~\ref{lem:ksubtree}} \label{subsec:proof} In this section we prove Lemma~\ref{lem:ksubtree}. We first consider the case that $N_2 \geq 2k - 2g$. If $|V(T^*)| = 1$, let $r$ be the unique support vertex.
If $c(r) < g + 1$, then by Lemmas~\ref{lem:covering41} and~\ref{lem:overlap} and the condition that $g + h > 2$, we have $|V_1| \leq \sum_{(S, l, M, n) \in \mathcal{Q}} |M \cap V_1| + 1 \leq \sum_{(S, l, M, n) \in \mathcal{Q}} (c(l) - g - 1) + 1 \leq \sum_{i \geq g + 1} (i - g - 1)|V_i| + 1 < \sum_{i \geq g + 1} (i - g - 1)|V_i| + 2g + h - 2$. Otherwise $c(r) \geq g + 1$, so $r$ can be an overloading vertex, and we apply Lemma~\ref{lem:hodotiu} (take $T_0 := T$) to bound the corresponding discharges as follows: $|V_1| \leq \sum_{(S, l, M, n) \in \mathcal{Q}, l \neq r} |M \cap V_1| + \sum_{(S, l, M, n) \in \mathcal{Q}, l = r} |M \cap V_1| \leq \sum_{(S, l, M, n) \in \mathcal{Q}, l \neq r} (c(l) - g - 1) + \max \{0, c(r) - g - 1, c(r) + N_2 - 2k - 2\} \leq \sum_{(S, l, M, n) \in \mathcal{Q}, l \neq r} (c(l) - g - 1) + c(r) + g + h - 4 \leq \sum_{i \geq g + 1} (i - g - 1)|V_i| + 2g + h - 3$. The third inequality follows from the condition that $N_2 \leq 2k + g + h - 2$. In any case the inequality~(\ref{for:1}) is contradicted. If $|V(T^*)| > 1$, then, by Lemma~\ref{lem:covering41}, all vertices in $V_1$ are covered by some discharge from $\mathcal{Q}$. For a vertex $u \in V(T^*)$, by Lemma~\ref{lem:hodotiu} (take $T_0 := T^*[u]$) and Observation~\ref{obs:1bound}, we have $$\sum_{(S, u, M, n) \in \mathcal{Q}} |M \cap V_1| \leq \max \{0, c(T^*[u]) - k - 1\} + d_{T^*}(u) \max \{0, c(u) - g - 1\}.$$ Define $U_1$ to be the set of vertices $u \in V(T^*)$ satisfying $d_{T^*}(u) = 1$, $U_2$ the set of vertices $u \in V(T^*)$ satisfying $d_{T^*}(u) > 1$ and $c(T^*[u]) \geq k + 1$, and $U_3$ the set of vertices $u \in V(T^*)$ satisfying $c(u) \geq g + 1$. Recall that $U_1$ is exactly the set of support vertices and $c(T^*[u]) \geq k + 1$ for all $u \in U_1$.
As $U_1$ is disjoint from $U_2$, we have $N_2 \geq \sum_{u \in U_1 \cup U_2} c(T^*[u]) + \sum_{u \in U_3 - (U_1 \cup U_2)} c(u)$, and \begin{align*} &\sum_{(S, u, M, n) \in \mathcal{Q}, u \in V(T^*)} |M \cap V_1|\\ \leq &\sum_{u \in U_1 \cup U_2} (c(T^*[u]) - k - 1) + \sum_{u \in U_3} d_{T^*}(u) (c(u) - g - 1)\\ = &\sum_{u \in U_1 \cup U_2} (c(T^*[u]) - k - 1) + \sum_{u \in U_3} (c(u) - g - 1) + \sum_{u \in U_3 - U_1} (d_{T^*}(u) - 1) (c(u) - g - 1)\\ \leq &\sum_{u \in U_1 \cup U_2} c(T^*[u]) + \sum_{u \in U_3 - (U_1 \cup U_2)} c(u) + \sum_{u \in U_3} (c(u) - g - 1) + \sum_{u \in U_1} (- k - 1)\\ &+ \sum_{u \in U_3 \cap U_2} ((d_{T^*}(u) - 1) (c(u) - g - 1) - k - 1) + \sum_{u \in U_3 - (U_1 \cup U_2)} ((d_{T^*}(u) - 1) (c(u) - g - 1) - c(u))\\ \leq & \hspace{0.06cm}N_2 + \sum_{u \in U_3} (c(u) - g - 1) + \sum_{u \in U_3 - U_1} (d_{T^*}(u) - 2)(- k - 1) + 2(- k - 1)\\ &+ \sum_{u \in U_3 - U_1} (d_{T^*}(u) - 2) (c(u) - g - 1)\\ \leq &\sum_{u \in U_3} (c(u) - g - 1) + N_2 - 2k - 2. \end{align*} In the third inequality we utilize the basic fact about trees that $\sum_{u \in U_1} 1 = \sum_{u \in U_1} d_{T^*}(u) = \sum_{u \in V(T^*) - U_1} (d_{T^*}(u) - 2) + 2$. Thus we have \begin{align*} |V_1| &\leq \sum_{(S, l, M, n) \in \mathcal{Q}, l \notin V(T^*)} |M \cap V_1| + \sum_{(S, l, M, n) \in \mathcal{Q}, l \in V(T^*)} |M \cap V_1|\\ &\leq \sum_{(S, l, M, n) \in \mathcal{Q}, l \notin V(T^*)} (c(l) - g - 1) + \sum_{l \in U_3} (c(l) - g - 1) + N_2 - 2k - 2\\ &\leq \sum_{i \geq g + 1} (i - g - 1)|V_i| + g + h - 4, \end{align*} which contradicts the inequality~(\ref{for:1}). We now consider the case that $N_2 < 2k - 2g$. Note that in this case $|V(T^*)| = 1$ always holds. Let $r$ be the unique support vertex.
Setting $D := 2g + h - 3 > 0$ in Lemma~\ref{lem:covering41}, we have \begin{align*} |V_1| &\leq \sum_{(S, l, M, n) \in \mathcal{Q}, l \neq r} (c(l) - g - 1) + \max \{0, c(r) - g - 1, c(r) + N_2 - 2k - 2\} + D\\ &\leq \sum_{(S, l, M, n) \in \mathcal{Q}, l \neq r} (c(l) - g - 1) + \max \{0, c(r) - g - 1, c(r) - 2g - 3\} + 2g + h - 3\\ &\leq \sum_{i \geq g + 1} (i - g - 1)|V_i| + 2g + h - 3, \end{align*} which contradicts the inequality~(\ref{for:1}). Thus the existence of a subtree $S$ with weight $k - g + 1 \leq c(S) \leq k$ is proved. As Assumption~\ref{ass:ksubtree} cannot hold, it is not hard to see that the subtree $S$ can be found by the iterative overload-discharge process described in Algorithm~\ref{alg:overloaddischarge}. The cycle $C_T$ and the mapping $\rho$ can be constructed in $O(N_1)$ time~\cite{Hierholzer1873}. Note that there are $O(|V(C_T)|) = O(N_1)$ iterations, since the initial vertex $v \in V(C_T)$ can be revisited at most once. Thus $S$ can be computed in $O(N_1)$ time. This completes the proof of Lemma~\ref{lem:ksubtree}. For instance, if we set $s := x$ in the example given in Figure~\subref*{subfig:overloadingsubtree}, Algorithm~\ref{alg:overloaddischarge} will output the subtree $\rho([z, w])$. We remark that here we assume that each arithmetic operation can be done in constant time. If the arithmetic operations require logarithmic cost, then the running time becomes $O(\Delta(T) \cdot N_1 \log \frac{N_2}{N_1})$, where $\Delta(T)$ denotes the maximum degree of $T$.
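To make the iteration of Algorithm~\ref{alg:overloaddischarge} concrete, the following Python sketch implements the analogous two-pointer search in the special case where the input tree is a path, so that subtrees are contiguous windows of the weight sequence; the function name \texttt{find\_window} and the list encoding of the weights are our own illustration, not part of the algorithm as stated.

```python
def find_window(weights, k, g):
    """Overload-discharge search on a weighted path: return indices
    (s, t) of a contiguous subpath whose total weight lies in
    [k - g + 1, k], or None if no such subpath exists."""
    s, total = 0, 0
    for t, w in enumerate(weights):
        total += w                      # grow the window at the right end
        if total < k - g + 1:
            continue                    # window not yet heavy enough
        if total <= k:
            return (s, t)               # weight landed in [k - g + 1, k]
        while total > k:                # overload: discharge from the left
            total -= weights[s]
            s += 1
        if total >= k - g + 1:
            return (s, t)               # discharge stopped inside the range
    return None
```

For instance, on the tight path example with $p = 2$ and $q = 1$ (weights $2, 1, 1, 2$, $k = 5$, $g = 1$), the search correctly reports that no subpath of weight $5$ exists.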
\smallskip \begin{algorithm}[H] \label{alg:overloaddischarge} \SetAlgoCaptionSeparator{} \caption{} \DontPrintSemicolon \SetKw{KwGoTo}{go to} \KwIn{A tree $T$ of $N_1$ vertices and vertex weights $c : V(T) \rightarrow \mathbb{N}$ with $c(T) = N_2$ such that $1 \leq k \leq N_2$, $g + h > 2$ and $2k - 4g - h + 3 \leq N_2 \leq 2k + g + h - 2$ ($k, g, N_1, N_2 \in \mathbb{N}$), where $h := 2N_1 - N_2$, and $c(v) \le k$ for any $v \in V(T)$.} \KwOut{A subtree $S$ of $T$ with $k - g + 1 \leq c(S) \leq k$.} Construct the directed cycle $C_T$ and the homomorphism $\rho: V(C_T) \rightarrow V(T)$. Choose an arbitrary vertex $v$ of $C_T$. Set $s:= v$ and $t := v$.\\ \While{$c([s, t]) < k - g + 1$}{ \label{od} Set $t := t^+$. } \If{$c([s, t]) \leq k$}{ Output $\rho([s, t])$. } \While{$c([s, t]) > k$}{ Set $s := s^+$. } \If{$c([s, t]) \geq k - g + 1$}{ Output $\rho([s, t])$. } \KwGoTo \scriptsize \bfseries \ref{od}.\\ \end{algorithm} \subsection{Some Examples} \label{sec:examples} In this section we give some examples and show that the conditions in Lemma~\ref{lem:ksubtree} are tight from several aspects. A useful fact for studying the examples mentioned below is that Algorithm~\ref{alg:overloaddischarge} is an exhaustive search when the input tree is a path. The condition $1 \le k \le N_2$ clearly needs to be included for the statement to be meaningful. We show that the condition $g + h > 2$ is tight. Consider the \emph{star} $T$ of order $2p$ for some $p > 1$, such that the center vertex has weight 1 and the other $2p - 1$ vertices have weight 2. We have $N_2 = 2N_1 - 1 = 4p - 1$ and $h = 1$. Set $k := N_1 = 2p \ge 4$ and $g := 1$. One can easily check that all conditions are satisfied except that $g + h = 2$, and $T$ has no subtree of weight $k$. For the condition $N_2 \ge 2k - 4g - h + 3$, we consider the path $v_1 v_2 \dots v_{p + 2q}$ of order $p + 2q$ for some integers $p > 1$ and $q \ge 1$.
Set $c(v_{q + i}) := 1$ for $i = 1, 2, \dots, p$, and $c(v_j) := 2$ for any $j \neq q + 1, q + 2, \dots, q + p$. We have $N_2 = p + 4q$ and $h = p$. Set $k := p + 2q + 1$ and $g := 1$. One can easily check that all conditions are satisfied except that $N_2 = 2k - 4g - h + 2$, and $T$ has no subtree of weight $k$. We next discuss the condition $N_2 \le 2k + g + h - 2$. Let $p > 1$ be an integer, and $T$ be a path $v_1 v_2 \dots v_{2p + 3}$ of order $2p + 3$. Set $c(v_{p + 2}) := p + 2$, $c(v_{p + i}) := 2$ for $i = 1, 3$, and $c(v_j) := 1$ for any $j \neq p + 1, p + 2, p + 3$. We have $N_2 = 3p + 6$ and $h = 2N_1 - N_2 = p$. Set $k := p + 3$ and $g := 1$. One can easily check that all conditions are satisfied except that $N_2 = 2k + g + h - 1$, and $T$ has no subtree of weight $k$. As we have seen, the condition $c(v) \le k$ for all $v \in V(T)$ is one of the key ingredients to make the overload-discharge process work. If this condition is violated, then the existence of a subtree of the desired weight cannot be assured. Let $T$ be the star of order $p + 1$ for some integer $p > 1$. We set the vertex weight of the center vertex to be $q + 1$ for some integer $2 < q < p + 2$, and those of the other $p$ vertices (leaves) to be 1. We have $N_2 = p + q + 1$ and $h = p - q + 1$. Set $k := q$ and $g := 2$. Then it is clear that all conditions are satisfied except that there exists a vertex (the center vertex) with weight larger than $k$, and $T$ has no subtree of weight between $k - g + 1$ and $k$. \section{Cycle Spectra of Planar Graphs} \label{sec:cycle} The \emph{cycle spectrum} ${CS}(G)$ of a graph $G$ is defined to be the set of integers $k$ for which there is a cycle of length $k$ in $G$. $G$ is said to be \emph{hamiltonian} if $|V(G)| \in CS(G)$ and \emph{pancyclic} if its cycle spectrum has all possible lengths, i.e. $CS(G) = \{3, \dots, |V(G)|\}$. 
Cycle spectra of graphs have been extensively studied in many directions; in this paper we study cycle spectra of planar hamiltonian graphs with minimum degree $\delta \geq 4$. We first give an overview of previous results in Section~\ref{subsec:cycleconj} and present our results in Sections~\ref{sec:Mohr} and~\ref{sec:halfminus1}. \subsection{An Overview} \label{subsec:cycleconj} In 1956, Tutte~\cite{Tutte1956} proved his seminal result that every $4$-connected planar graph is hamiltonian. Motivated by Tutte's theorem together with the metaconjecture proposed by Bondy~\cite{Bondy1975} that almost any non-trivial condition for hamiltonicity of a graph should also imply pancyclicity, Bondy~\cite{Bondy1975} conjectured in 1973 that every $4$-connected planar graph $G$ is almost pancyclic, i.e. $|CS(G)| \geq |V(G)| - 3$, and Malkevitch~\cite{Malkevitch1988} conjectured in 1988 that every $4$-connected planar graph is pancyclic if it contains a cycle of length $4$ (see~\cite{Malkevitch1971, Malkevitch1978} for other variants). These two conjectures remain open, while $4$ is the only known cycle length that can be missing from the cycle spectrum of a $4$-connected planar graph. For example, the line graph of a cyclically $4$-edge-connected cubic planar graph with girth at least $5$ is a $4$-regular $4$-connected planar graph with no cycle of length $4$, see also~\cite{Malkevitch1971, Trenkler1989}. If we relax the connectedness, more cycle lengths are known to be absent from some cycle spectra. Choudum~\cite{Choudum1977} showed that for every integer $k \geq 7$, there exist $4$-regular $3$-connected planar hamiltonian graphs of order larger than $k$, each of which has cycles of all possible lengths except $k$; in other words, every integer $k \geq 7$ can be absent from the cycle spectra of some $4$-regular $3$-connected planar hamiltonian graphs.
Another interesting example was given by Malkevitch~\cite{Malkevitch1971}, which is, for every $p \in \mathbb{N}$, a $4$-regular planar hamiltonian graph $G$ of order $|V(G)| = 6p$ whose cycle spectrum is $CS(G) = \{3, 4, 5, 6\} \cup \{r \in \mathbb{N}: \frac{|V(G)|}{2} \leq r \leq |V(G)|\}$, as shown in Figure~\ref{fig:Malkevitch}. \begin{figure}[b!] \centering \includegraphics[scale = 1]{Malkevitch} \caption{A $4$-regular planar hamiltonian graph $G$ which has no cycle of length between $7$ and $\frac{|V(G)|}{2} - 1$.} \label{fig:Malkevitch} \end{figure} So far we have seen which cycle lengths can be absent from some cycle spectra; we now ask the opposite question, i.e. which cycle lengths must be present in all cycle spectra. It is known that every planar graph with $\delta \geq 4$ must contain cycles of length $3$, $5$~\cite{Wang2002} and $6$~\cite{Fijavz2002}, which is shown to be best possible by the aforementioned examples. It is also known that every $2$-connected planar graph with $\delta \geq 4$ must have a cycle of length $4$ or $7$~\cite{Hornak2008}, a cycle of length $4$ or $8$, and a cycle of length $4$ or $9$~\cite{Madaras}. While the presence of a cycle of length $3$ follows easily from Euler's formula, the remaining results were shown by the discharging method. Another powerful tool for finding cycles of specified length is the so-called Tutte path method, which was first introduced by Tutte in his proof of hamiltonicity of $4$-connected planar graphs. Using this technique, Nelson (see~\cite{Plummer1975, Thomassen1983}), Thomas and Yu~\cite{Thomas1994} and Sanders~\cite{Sanders1996} showed that every $4$-connected planar graph contains cycles of length $|V(G)| - 1$, $|V(G)| - 2$ and $|V(G)| - 3$, respectively. Note that we always assume $k \geq 3$ when we say a graph contains a cycle of length $k$.
Chen et al.~\cite{Chen2004} noticed that the Tutte path method cannot be generalized for smaller cycle lengths; they hence combined Tutte paths with contractible edges and showed the existence of cycles of length $|V(G)| - 4$, $|V(G)| - 5$ and $|V(G)| - 6$. Following this approach, Cui et al.~\cite{Cui2009} showed that every $4$-connected planar graph has a cycle of length $|V(G)| - 7$. To summarize, every $4$-connected planar graph contains a cycle of length $k$ for every $k \in \{|V(G)|, |V(G)| - 1, \dots, |V(G)| - 7\}$ with $k \geq 3$. \subsection{Cycles of Length Close to Medium-Length} \label{sec:Mohr} With the knowledge of these short and long cycles, Mohr \cite{Mohr2018} asked whether cycles of length close to $\frac{|V(G)|}{2}$ also exist, and he answered his question by showing that every planar hamiltonian graph $G$ satisfying $|E(G)| \geq 2|V(G)|$ has a cycle of length between $\frac{1}{3}|V(G)|$ and $\frac{2}{3}|V(G)|$. We present his simple and elegant argument in the following. Let $G^*$ be the dual graph of the plane graph $G$ and $C$ be a Hamilton cycle of $G$. Note that $C$ separates the Euclidean plane into two open regions $C_{\textrm{int}}$ and $C_{\textrm{ext}}$, neither of which contains a vertex. Let $G_{\textrm{int}}$ and $G_{\textrm{ext}}$ be the graphs obtained from $G$ by deleting the edges in $C_{\textrm{ext}}$ and in $C_{\textrm{int}}$, respectively. We always assume that $|E(G_{\textrm{int}})| \geq |E(G_{\textrm{ext}})|$. As $C$ is a Hamilton cycle, its dual disconnects $G^*$ into two trees, say $T_{\textrm{int}}$ lying in $C_{\textrm{int}}$ and $T_{\textrm{ext}}$ in $C_{\textrm{ext}}$ (see Figure~\ref{fig:transformation}). By Euler's formula, we have $|V(G^*)| = |E(G)| - |V(G)| + 2$, and hence $|V(T_{\textrm{int}})| \geq \frac12|V(G^*)| \geq \frac{1}{2}|V(G)| + 1$.
We define the vertex weight $c(v) := d_{G^*}(v) - 2 \geq 1$ for every vertex $v \in V(G^*) \supset V(T_{\textrm{int}})$, where $d_{G^*}(v)$ is the degree of $v$ in $G^*$, or equivalently, the face length of $v$ in $G$ (see Figure~\subref*{subfig:vertexweight}). It is not hard to see that for every subtree $S$ of $T_{\textrm{int}}$, the set of edges of $G^*$ having exactly one endvertex in $S$ is indeed the dual of the edge set of a cycle in $G$ of length $c(S) + 2$, where $c(S) := \sum_{v \in V(S)} c(v)$ (see Figure~\subref*{subfig:treeandcycle}). \begin{figure}[!ht] \centering \subfloat[The tree $T_{\textrm{int}}$ with vertex weights. \label{subfig:vertexweight}]{% \includegraphics[scale=1.2]{New10} } \hspace{1.5cm} \subfloat[A subtree (red) of $T_{\textrm{int}}$ of weight $6$ corresponds to a cycle (blue) of length $8$ in $G$.\label{subfig:treeandcycle}]{% \includegraphics[scale=1.2]{New20} } \caption{The tree $T_{\textrm{int}}$ (black) in the dual graph of $G$ (grey).} \label{fig:transformation} \end{figure} Thus the problem is transformed to that of finding a cycle of specified length in a hamiltonian outerplanar graph with sufficient edge density, or dually, a subtree of specified weight: the existence of a subtree $S$ of weight $k$ in $T_{\textrm{int}}$ implies the existence of a cycle of length $k + 2$ in $G$. It remains to show that there is a subtree $S$ in $T_{\textrm{int}}$ with $\frac{1}{3}|V(G)| - 2 \leq c(S) \leq \frac{2}{3}|V(G)| - 2$. First note that $c(v) \leq \frac12|V(G)| - 2$ for all $v \in V(T_{\textrm{int}})$; otherwise $c(T_{\textrm{int}}) >\frac12|V(G)| - 2 + |V(T_{\textrm{int}})| - 1 \geq |V(G)| - 2$, which is not possible as $T_{\textrm{int}}$ corresponds to the Hamilton cycle of length $|V(G)|$. If there is a vertex $v \in V(T_{\textrm{int}})$ with $c(v) \geq \frac13|V(G)| - 2$, then we can simply take $S$ to be this single vertex $v$. Suppose $c(v) < \frac13|V(G)| - 2$ for all $v \in V(T_{\textrm{int}})$.
We take $S$ to be a maximal subtree of $T_{\textrm{int}}$ with $c(S) \leq \frac23|V(G)| - 2$; it is clear that $c(S) \geq \frac13|V(G)| - 2$. Thus $G$ has a cycle of length between $\frac{1}{3}|V(G)|$ and $\frac{2}{3}|V(G)|$. We recapitulate the main content of Mohr's proof. Given a planar hamiltonian graph $G$, we can obtain a tree $T$ (in the dual graph) of at least $\frac12|E(G)| - \frac12|V(G)| + 1$ vertices with vertex weights $c: V(T) \rightarrow \mathbb{N}$ such that $c(T) = |V(G)| - 2$ and $c(v) \leq c(T) - |V(T)| + 1 \leq \frac32|V(G)| - \frac12|E(G)| - 2$ for all $v \in V(T)$. Moreover, if there is a subtree of weight $k$ in $T$, then there is a cycle of length $k + 2$ in $G$. By Lemma~\ref{lem:ksubtree} we have the following. \begin{theorem} \label{cor:EgeqV} Let $G$ be a planar hamiltonian graph with $|E(G)| \geq (2 + \gamma)|V(G)|$ for some real number $-1 \leq \gamma < 1$. Let $k, g \in \mathbb{N}$ be such that $g + \lceil \gamma |V(G)| \rceil + 2 > 0$, $3 \leq k \leq |V(G)|$ and $\lfloor \frac{(1 - \gamma)|V(G)|}{2} \rfloor \leq k \leq \frac{\lceil (1 + \gamma)|V(G)| \rceil}{2} + 2g + \frac32$. There exists a cycle $K$ in $G$ of length $k - g + 1 \leq |V(K)| \leq k$, and $K$ can be found in linear time if a Hamilton cycle of $G$ is given. \end{theorem} \begin{proof} Let $T$ be the tree with vertex weights $c$ that we mentioned before. We set $\tilde{k} := k - 2 \geq 1$ and $h := \lceil \gamma |V(G)| \rceil + 4$. We check the conditions required for applying Lemma~\ref{lem:ksubtree} on the parameters $\tilde{k}, g, h, N_1$ and $N_2$ as follows. First we have that $g + h > 2$, $1 \leq \tilde{k} \leq N_2 = |V(G)| - 2$, $2\tilde{k} \leq \lceil (1 + \gamma) |V(G)| \rceil + 4g - 1 = N_2 + 4g + h - 3$, and $\tilde{k} \geq \lfloor \frac{(1 - \gamma)|V(G)|}{2} \rfloor - 2 \geq \frac{\lfloor (1 - \gamma) |V(G)| \rfloor}{2} - \frac12 - 2 = \frac{|V(G)| - \lceil \gamma |V(G)| \rceil}{2} - \frac12 - 2 \geq \frac{N_2}{2} - \frac{g}{2} - \frac{h}{2} + 1$.
Note also that $2|V(T)| \geq |E(G)| - |V(G)| + 2 \geq (1 + \gamma) |V(G)| + 2 = c(T) + \gamma |V(G)| + 4$ implies $2N_1 \geq N_2 + h$, and for every $v \in V(T)$, $c(v) \leq \frac32 |V(G)| - \frac12 |E(G)| - 2 \leq \frac32 |V(G)| - \frac{2 + \gamma}{2} |V(G)| - 2 = \frac{1 - \gamma}{2} |V(G)| - 2$ implies $c(v) \leq \lfloor \frac{(1 - \gamma) |V(G)|}{2} \rfloor - 2 \leq \tilde{k}$. As all conditions are satisfied, by Lemma~\ref{lem:ksubtree} there exists a subtree $S$ of $T$ of weight $\tilde{k} - g + 1 \leq c(S) \leq \tilde{k}$ which can be found in linear time. Hence $G$ has a cycle $K$ of length $k - g + 1 \leq |V(K)| \leq k$ which can be found in linear time provided a Hamilton cycle of $G$ is given, since every planar graph can be embedded in the plane in linear time~\cite{Chiba1985} and the tree $T_{\textrm{int}}$ can then be easily constructed from the planar embedding in linear time. \end{proof} We specify some implications as follows. \begin{corollary} Every planar hamiltonian graph $G$ with $\delta(G) \geq 3$ has a cycle of length $k$ for some $k$ with $\lfloor \frac{|V(G)|+1}{4} \rfloor + 2 \leq k \leq \lfloor \frac{3|V(G)|}{4} \rfloor$. Every planar hamiltonian graph $G$ with $\delta(G) \geq 4$ has a cycle of length $k$ for every $k \in \{\lfloor \frac{|V(G)|}{2} \rfloor, \dots, \lceil \frac{|V(G)|}{2} \rceil + 3\}$ with $3 \leq k \leq |V(G)|$. Every planar hamiltonian graph $G$ with $\delta(G) \geq 5$ has a cycle of length $k$ for every $k \in \{\lfloor \frac{|V(G)|}{4} \rfloor, \dots, \lceil \frac{3|V(G)|}{4} \rceil + 3\}$ with $3 \leq k \leq |V(G)|$. Each of these cycles can be found in linear time if a Hamilton cycle of $G$ is given. \end{corollary} \begin{proof} It follows immediately from Theorem~\ref{cor:EgeqV} when we set $\gamma := - \frac12$, $g := \lfloor \frac{|V(G)|}{2} \rfloor - 1$ and $k := \lfloor \frac{3|V(G)|}{4} \rfloor$; $\gamma := 0$ and $g := 1$; and $\gamma := \frac12$ and $g := 1$, respectively.
\end{proof} It is known that a Hamilton cycle can be found in linear time for every $4$-connected planar graph~\cite{Chiba1989}; thus, in this case, each of the cycles mentioned above can be found in linear time. \subsection{3-Connected Planar Hamiltonian Graphs} \label{sec:halfminus1} Note that Malkevitch's example (see Figure~\ref{fig:Malkevitch}) illustrates that not every planar hamiltonian graph $G$ with $\delta \geq 4$ has a cycle of length $\lfloor \frac{|V(G)|}{2} \rfloor - 1$ or $\lfloor \frac{|V(G)|}{2} \rfloor - 2$. As a further application we prove in this section that a cycle of one of these lengths can be assured for $3$-connected planar hamiltonian graphs with $\delta \geq 4$. \begin{theorem} Let $G$ be a 3-connected planar hamiltonian graph with minimum degree $\delta(G) \geq 4$. If $|V(G)| \geq 8$ is even, there exists a cycle of length either $\frac12|V(G)| - 2$ or $\frac12|V(G)| - 1$ in $G$, and it can be found in linear time if a Hamilton cycle is given. \end{theorem} \begin{proof} We adopt the notation defined in Section~\ref{sec:Mohr}. If every face of $G_{\textrm{int}}$ is of length either $|V(G)|$ or less than $\frac12|V(G)|$, i.e. $c(v) \leq \frac12|V(G)| - 3$ for every $v \in V(T_{\textrm{int}})$, then by Lemma~\ref{lem:ksubtree} (set $g := 2$ and $h := 4$), there exists a subtree of weight either $\frac12|V(G)| - 4$ or $\frac12|V(G)| - 3$ in $T_{\textrm{int}}$ and hence a cycle of length either $\frac12|V(G)| - 2$ or $\frac12|V(G)| - 1$ in $G$. Recall that $|E(G_{\textrm{int}})| \geq \frac32|V(G)|$. If $|E(G_{\textrm{int}})| > \frac32|V(G)|$, then $|V(T_{\textrm{int}})| \geq \frac12|V(G)| + 2 = \frac12c(T_{\textrm{int}}) + 3$ and $c(v) \leq c(T_{\textrm{int}}) - |V(T_{\textrm{int}})| + 1 \leq \frac12|V(G)| - 3$ for all $v \in V(T_{\textrm{int}})$. By Lemma~\ref{lem:ksubtree} (set $g := 1$ and $h := 6$), there exists a subtree of weight $\frac12|V(G)| - 3$ in $T_{\textrm{int}}$ and hence a cycle of length $\frac12|V(G)| - 1$ in $G$.
Now we can assume that $|E(G_{\textrm{int}})| = \frac32|V(G)|$ and $G_{\textrm{int}}$ has a face of length $\frac12|V(G)|$. It holds immediately that $|E(G_{\textrm{ext}})| = \frac32|V(G)|$, since $|E(G_{\textrm{int}})| + |E(G_{\textrm{ext}})| = |E(G)| + |V(G)| \geq 3|V(G)|$ and $|E(G_{\textrm{int}})| \geq |E(G_{\textrm{ext}})|$. And we can also assume that $G_{\textrm{ext}}$ has a face of length $\frac12|V(G)|$. In this case we have $d_G(v) = 4$ and $d_{G_{\textrm{int}}}(v) + d_{G_{\textrm{ext}}}(v) = 6$ for every $v \in V(G)$, and each of $G_{\textrm{int}}$ and $G_{\textrm{ext}}$ has exactly one face of length $|V(G)|$, one face of length $\frac12|V(G)|$ and $\frac12|V(G)|$ faces of length $3$. We denote by $F_{\textrm{int}}$ and $F_{\textrm{ext}}$ the faces of length $\frac12|V(G)|$ in $G_{\textrm{int}}$ and $G_{\textrm{ext}}$, respectively. We claim that $G$ is the square of a cycle of length $|V(G)|$, which is obtained from a cycle of length $|V(G)|$ by adding an edge between every pair of vertices at distance $2$ (see Figure~\ref{fig:squareC16}). It is obvious that the square of a cycle is pancyclic. We call a face of length $3$ an \emph{$i$-triangle} ($i = 0, 1, 2$) if it contains exactly $i$ edges of the Hamilton cycle $C$. We assume that the plane graph $G$ has the maximum number of $2$-triangles over all of its planar embeddings. Let the Hamilton cycle $C$ of $G$ be $v_0 v_1 v_2 \dots v_{|V(G)| - 1} v_{0}$ (indices modulo $|V(G)|$). \begin{figure}[b!] \centering \includegraphics[scale = 1]{squareC16} \caption{The square of a cycle of length $16$.} \label{fig:squareC16} \end{figure} Suppose there is a $0$-triangle $v_0 v_i v_j v_0$ in $G$, say it is also in $G_{\textrm{int}}$, for some $0 < i - 1 < j - 2 < |V(G)| - 3$. If $i > 2$, then the face in $G_{\textrm{int}}$ containing the path $v_1 v_0 v_i v_{i - 1}$ is of length larger than $3$ and smaller than $|V(G)|$.
As there is exactly one such face in $G_{\textrm{int}}$, namely $F_{\textrm{int}}$, we can assume that $i = 2$ and $j = 4$. Then $d_{G_{\textrm{int}}}(v_1) = d_{G_{\textrm{int}}}(v_3) = 2$ and $d_{G_{\textrm{ext}}}(v_1) = d_{G_{\textrm{ext}}}(v_3) = 4$. Let $v_{i_1}, v_{i_2}$ be the neighbors of $v_1$ other than $v_0, v_2$, and $v_{i_3}, v_{i_4}$ be the neighbors of $v_3$ other than $v_2, v_4$, for some $2 < i_1 < i_2 < |V(G)|$ and $4 < i_3 < i_4 < |V(G)| + 2$. If $v_1$ is adjacent to $v_3$ in $G_{\textrm{ext}}$, i.e. $i_1 = 3$ and $i_4 = |V(G)| + 1$, and the face in $G_{\textrm{ext}}$ containing $v_{i_2} v_1 v_3 v_{i_3}$ is a face of length $3$, i.e. $i_2 = i_3$, then it must be a $0$-triangle. We can assume that $i_2 = i_3 = 5$. Clearly, $\{v_0, v_5\}$ is a separator of $G$, which contradicts that $G$ is $3$-connected. If $v_1$ is adjacent to $v_3$ in $G_{\textrm{ext}}$, but the face in $G_{\textrm{ext}}$ containing $v_{i_2} v_1 v_3 v_{i_3}$ is a face of length larger than $3$, then the faces in $G_{\textrm{ext}}$ containing $v_0 v_1 v_{i_2}$ and $v_{i_3} v_3 v_4$ must be $2$-triangles and $\{v_{i_2}, v_{i_3}\} = \{v_{-1}, v_5\}$ is a separator of $G$, contradiction. If $v_1$ is not adjacent to $v_3$, then the face in $G_{\textrm{ext}}$ containing $v_{i_1} v_1 v_2 v_3 v_{i_4}$ is of length larger than $3$, and hence $v_{-1} v_0 v_1 v_{-1}$ and $v_3 v_4 v_5 v_3$ must be $2$-triangles in $G_{\textrm{ext}}$. In this case we can swap $v_0$ and $v_1$ and swap $v_3$ and $v_4$ to obtain a planar embedding with more $2$-triangles (see Figure~\subref*{subfig:0triangleswap}), which contradicts the maximality of the number of $2$-triangles. Thus there is no $0$-triangle in the plane graph $G$. Suppose there is a $1$-triangle $v_0 v_1 v_i v_0$ in $G$, say also in $G_{\textrm{int}}$, for some $2 < i < |V(G)| - 1$. It is not hard to see that we can assume that the face in $G_{\textrm{int}}$ containing $v_0 v_i v_{i + 1}$ is $F_{\textrm{int}}$. 
Under this assumption we must have a sequence of $i - 1$ faces of length 3 such that all faces are $1$-triangles except the last one which is a $2$-triangle, namely $v_0 v_1 v_i v_0, v_1 v_{i - 1} v_i v_1, v_1 v_2 v_{i - 1} v_1, \dots, v_{\lceil \frac{i}{2} \rceil - 1} v_{\lceil \frac{i}{2} \rceil} v_{\lceil \frac{i}{2} \rceil+ 1} v_{\lceil \frac{i}{2} \rceil - 1}$. We claim that $i \leq 4$. Suppose $i > 4$; we prove the claim for odd $i$, as it can be proved for even $i$ in a similar way. It is clear that $d_{G_{\textrm{ext}}}(v_{\lceil \frac{i}{2} \rceil - 3}) \leq 3$, $d_{G_{\textrm{ext}}}(v_{\lceil \frac{i}{2} \rceil - 1}) = 3$ and $d_{G_{\textrm{ext}}}(v_{\lceil \frac{i}{2} \rceil}) = 4$. Let $v_{i_1}, v_{i_2}$ be the neighbors of $v_{\lceil \frac{i}{2} \rceil}$ other than $v_{\lceil \frac{i}{2} \rceil - 1}, v_{\lceil \frac{i}{2} \rceil + 1}$, and $v_{i_3}$ be the neighbor of $v_{\lceil \frac{i}{2} \rceil - 1}$ other than $v_{\lceil \frac{i}{2} \rceil - 2}, v_{\lceil \frac{i}{2} \rceil}, v_{\lceil \frac{i}{2} \rceil + 1}$, for some $i < i_1 < i_2 \leq i_3 \leq |V(G)|$. Note that the face in $G_{\textrm{ext}}$ containing $v_{i_1} v_{\lceil \frac{i}{2} \rceil} v_{\lceil \frac{i}{2} \rceil + 1} v_{\lceil \frac{i}{2} \rceil + 2}$ is of length larger than $3$. Therefore the face in $G_{\textrm{ext}}$ containing $v_{i_3} v_{\lceil \frac{i}{2} \rceil - 1} v_{\lceil \frac{i}{2} \rceil} v_{i_2}$ and that containing $v_{\lceil \frac{i}{2} \rceil - 3} v_{\lceil \frac{i}{2} \rceil - 2} v_{\lceil \frac{i}{2} \rceil - 1} v_{i_3}$ must be of length $3$. This implies that $v_{\lceil \frac{i}{2} \rceil - 3} = v_{i_3} = v_{i_2}$ and $d_{G_{\textrm{ext}}}(v_{\lceil \frac{i}{2} \rceil - 3}) \geq 4$, a contradiction. Now we consider the case when $i = 4$. It is clear that $d_{G_{\textrm{ext}}}(v_2) = 4$ and $d_{G_{\textrm{ext}}}(v_3) = 3$.
Let $v_{i_1}$ be the neighbor of $v_3$ other than $v_1, v_2, v_4$, and $v_{i_2}, v_{i_3}$ be the neighbors of $v_2$ other than $v_1, v_3$, for some $4 < i_1 \leq i_2 < i_3 \leq |V(G)|$. If the face in $G_{\textrm{ext}}$ containing $v_{i_1} v_3 v_4$ is of length larger than $3$, then $i_1 = i_2 = |V(G)| - 1$, $i_3 = |V(G)|$ and $\{v_{-1}, v_4\}$ is a separator of $G$. If the face in $G_{\textrm{ext}}$ containing $v_{i_1} v_3 v_4$ is of length $3$ but that containing $v_{i_1} v_3 v_2 v_{i_2}$ is of length larger than $3$, then $i_1 = 5$, $i_2 = |V(G)| - 1$, $i_3 = |V(G)|$ and $\{v_{-1}, v_5\}$ is a separator of $G$. If the faces in $G_{\textrm{ext}}$ containing $v_{i_1} v_3 v_4$ and $v_{i_1} v_3 v_2 v_{i_2}$ are of length $3$ but that containing $v_{i_2} v_2 v_{i_3}$ is of length larger than $3$, then $i_1 = i_2 = 5$, $i_3 = |V(G)|$ and $\{v_0, v_5\}$ is a separator of $G$. If the faces in $G_{\textrm{ext}}$ containing $v_{i_1} v_3 v_4$, $v_{i_1} v_3 v_2 v_{i_2}$ and $v_{i_2} v_2 v_{i_3}$ are of length $3$, then $i_1 = i_2 = 5$, $i_3 = 6$ and $\{v_0, v_6\}$ is a separator of $G$. In any case this contradicts that $G$ is $3$-connected. Finally, we consider the case when $i = 3$. It is clear that $d_{G_{\textrm{ext}}}(v_1) = 3$ and $d_{G_{\textrm{ext}}}(v_2) = 4$. Let $v_{i_1}, v_{i_2}$ be the neighbors of $v_2$ other than $v_1, v_3$, and $v_{i_3}$ be the neighbor of $v_1$ other than $v_0, v_2, v_3$, for some $3 < i_1 < i_2 \leq i_3 < |V(G)|$. If the face in $G_{\textrm{ext}}$ containing $v_{i_1} v_2 v_3$ is of length larger than $3$, then $i_1 = |V(G)| - 2$ and $i_2 = i_3 = |V(G)| - 1$, which has been shown to be impossible. If the face in $G_{\textrm{ext}}$ containing $v_{i_1} v_2 v_3$ is of length $3$ but that containing $v_{i_1} v_2 v_{i_2}$ is of length larger than $3$, then $i_1 = 4$, $i_2 = i_3 = |V(G)| - 1$ and $d_{G_{\textrm{int}}}(v_0) = 4$. Let $v_{i_4}$ be the neighbor of $v_0$ other than $v_{-1}, v_1, v_3$ for some $3 < i_4 \leq |V(G)| - 2$.
If the face in $G_{\textrm{int}}$ containing $v_{-1} v_0 v_{i_4}$ is of length larger than $3$, then $i_4 = 4$, which has been shown to be impossible. Hence $v_{-1} v_0 v_{i_4} v_{-1}$ is a $2$-triangle, $i_4 = |V(G)| - 2$ and $\{v_{-2}, v_4\}$ is a separator of $G$, which is not possible. If the faces in $G_{\textrm{ext}}$ containing $v_{i_1} v_2 v_3$ and $v_{i_1} v_2 v_{i_2}$ are of length $3$, then $i_1 = 4$ and $i_2 = 5$. Swapping $v_2$ and $v_3$ yields a planar embedding with more $2$-triangles (see Figure~\subref*{subfig:1triangleswap}), which contradicts the maximality of the number of $2$-triangles. Hence we can conclude that there is no $1$-triangle in the plane graph $G$. It is clear that $G$ is the square of a cycle of length $|V(G)|$ if it has a planar embedding with no $0$- or $1$-triangles. To find a cycle of the desired length, one can apply Algorithm~\ref{alg:overloaddischarge} for $T_{\textrm{int}}$ if $|E(G_{\textrm{int}})| > \frac32 |V(G)|$ or if there is no face of length $\frac12 |V(G)|$ in $G_{\textrm{int}}$ or $G_{\textrm{ext}}$; otherwise, swap some vertex pairs (at most once for each face) to obtain a planar embedding of the square of a cycle of length $|V(G)|$ with no $0$- or $1$-triangles, in which a cycle of length $\frac12 |V(G)| - 1$ can easily be found in linear time. \end{proof} \begin{figure}[!ht] \centering \subfloat[Swap $v_0, v_1$, and $v_3, v_4$. \label{subfig:0triangleswap}]{% \includegraphics[scale=1.2]{0triangleswap} } \hspace{1.8cm} \subfloat[Swap $v_2, v_3$. \label{subfig:1triangleswap}]{% \includegraphics[scale=1.2]{1triangleswap} } \caption{Swapping vertices to obtain a planar embedding with more $2$-triangles. Edges in $C$, $C_{\textrm{int}}$ and $C_{\textrm{ext}}$ are indicated in black, green and red, respectively.} \label{fig:triangleswap} \end{figure}
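As an aside, the pancyclicity of the square of a cycle, used in the proof above, can be checked constructively: walking forward along the Hamilton cycle in steps of $2$ and returning through the skipped vertices produces a cycle of every length $3 \leq k \leq n$ in $C_n^2$. The following sketch is our own illustration (function names are ours, not from the paper):

```python
def square_cycle_edges(n):
    """Edge set of C_n^2: vertex pairs at cyclic distance 1 or 2."""
    edges = set()
    for i in range(n):
        edges.add(frozenset((i, (i + 1) % n)))
        edges.add(frozenset((i, (i + 2) % n)))
    return edges

def cycle_of_length(n, k):
    """A cycle of length k in C_n^2 (3 <= k <= n): walk forward in
    steps of 2, then come back through the skipped vertices."""
    if k % 2 == 0:
        forward = list(range(0, k - 1, 2))    # 0, 2, ..., k-2
        backward = list(range(k - 1, 0, -2))  # k-1, k-3, ..., 1
    else:
        forward = list(range(0, k, 2))        # 0, 2, ..., k-1
        backward = list(range(k - 2, 0, -2))  # k-2, k-4, ..., 1
    return forward + backward

def is_cycle(vertices, edges):
    """Check that the vertex list is a cycle of the given graph."""
    if len(vertices) < 3 or len(set(vertices)) != len(vertices):
        return False
    return all(frozenset((vertices[i], vertices[(i + 1) % len(vertices)])) in edges
               for i in range(len(vertices)))

n = 16  # the example of the figure
edges = square_cycle_edges(n)
assert all(is_cycle(cycle_of_length(n, k), edges) for k in range(3, n + 1))
```

All steps of the constructed cycle have size $1$ or $2$, so they are edges of $C_n^2$, and the forward part uses even labels while the backward part uses odd ones, so the vertices are distinct.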
https://arxiv.org/abs/1702.00669
The variation of the maximal function of a radial function
We study the problem concerning the variation of the Hardy-Littlewood maximal function in higher dimensions. As the main result, we prove that the variation of the non-centered Hardy-Littlewood maximal function of a radial function is comparable to the variation of the function itself.
\section{Introduction} The non-centered Hardy-Littlewood maximal operator $M$ is defined by setting for $f\in L^1_{loc}(\mathbb{R}^n)\,$ that \begin{equation}\label{eq:1} M f(x)=\sup_{B(z,r)\ni x} \frac{1}{|B(z,r)|}\int_{B(z,r)}|f(y)|\,dy\,=:\,\sup_{B(z,r)\ni x}\vint_{B(z,r)}|f(y)|\,dy\ \end{equation} for every $x\in\mathbb{R}^n\,$. The centered version of $M$, denoted by $M_c$, is defined by taking the supremum over all balls centered at $x$. The classical theorem of Hardy, Littlewood and Wiener asserts that $M$ (and $M_c$) is bounded on $L^p(\mathbb{R}^n)\,$ for $1<p\leq\infty\,$. This result is one of the cornerstones of harmonic analysis. While the absolute size of a maximal function is usually the principal interest, applications in Sobolev spaces and in potential theory have motivated active research on the regularity properties of maximal functions. The first observation was made by Kinnunen, who verified \cite{Ki} that $M_c$ is bounded on the Sobolev space $W^{1,p}(\mathbb{R}^n)\,$ for $1<p\leq\infty\,$, and that the inequality \begin{equation}\label{est1} |DM_cf(x)|\leq M_c(|Df|)(x)\, \end{equation} holds for all $x\in\mathbb{R}^n$. The proof is relatively simple, and inequality (\ref{est1}) (and the boundedness) holds also for $M$ and many other variants. The most challenging open problem in this field is the so-called `$W^{1,1}$-problem': does it hold for all $f\in W^{1,1}(\mathbb{R}^n)\,$ that $Mf\in W^{1,1}(\mathbb{R}^n)$ and \begin{equation*} \norm{DMf}_1\leq C_n\norm{Df}_1\,? \end{equation*} This problem has been discussed (and studied) for example in \cite{AlPe}, \cite{CaHu}, \cite{CaMa}, \cite{HO}, \cite{HM}, \cite{Ku} and \cite{Ta}. The fundamental obstacle is that $M$ is not bounded on $L^1$, and therefore inequality (\ref{est1}) is not enough to solve the problem. In the case $n=1$ the answer is known to be positive, as was proved by Tanaka \cite{Ta}. For $M_c$ the problem turns out to be very complicated also when $n=1$.
However, Kurka \cite{Ku} managed to show that the answer is positive also in this case. The goal of this paper is to develop technology for the $W^{1,1}$-problem in higher dimensions, where the problem is still completely open. The known proofs in the one-dimensional case are strongly based on the simplicity of the topology: the crucial trick (in the non-centered case) is that $Mf$ does not have a strict local maximum outside the set $\{Mf(x)=f(x)\}$. This fact is a strong tool when $n=1$ but is far from sufficient for higher dimensions. The formula for the derivative of the maximal function (see Lemma \ref{peruskama} or \cite{L}) has an important role in the paper. It says that if $Mf(x)=\vint_{B}|f|$, $|f(x)|<Mf(x)<\infty$, and $Mf$ is differentiable at $x$, then \begin{equation}\label{formula} DMf(x)=\vint_{B}Df(y)\,dy\,. \end{equation} From this formula one can see immediately the validity of the estimate (\ref{est1}) for $M$. However, since $B$ is exactly the ball which gives the maximal average (for $|f|$), it is expected that one can derive from (\ref{formula}) much more sophisticated estimates than (\ref{est1}). In Section \ref{sec2} (Lemma \ref{peruskama}), we perform basic analysis related to this issue. The key observation we make is that if $B$ is as above, then \begin{equation}\label{formula2} \int_{B}Df(y)\cdot(y-x)\,dy\,=\,0\,. \end{equation} In the background of this equality stands a more general principle, concerning other maximal operators as well: if the value of the maximal function is attained at a ball (or other admissible object) $B$, then the \textit{weighted} integral of $Df$ over $B$ vanishes for a set of weights depending on the maximal operator. We believe that the utilization of this principle is a key to a possible solution of the $W^{1,1}$-problem. As the main result of this paper, we employ equality (\ref{formula2}) to show that in the case of \textit{radial functions} the answer to the $W^{1,1}$-problem is positive (Theorem \ref{main}).
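For orientation, let us record what (\ref{formula2}) says in the simplest situation $n=1$ (a sanity check only; this reformulation is not needed in the sequel). If $Mf(x)=\frac{1}{b-a}\int_a^b|f|$ with $x\in[a,b]$, then an integration by parts turns (\ref{formula2}) into
\begin{equation*}
0=\int_a^b D|f|(y)(y-x)\,dy\,=\,(b-x)|f(b)|+(x-a)|f(a)|-\int_a^b|f(y)|\,dy\,,
\end{equation*}
so that the maximal average is the convex combination
\begin{equation*}
\frac{1}{b-a}\int_a^b|f(y)|\,dy\,=\,\frac{b-x}{b-a}|f(b)|+\frac{x-a}{b-a}|f(a)|
\end{equation*}
of the boundary values of $|f|$; formally, if $a<x<b$, the first-order optimality conditions for the maximizing interval force $|f(a)|=|f(b)|=Mf(x)$.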
Even in this case the problem is evidently non-trivial and truly differs from the one-dimensional case. To become convinced of this, consider the important special case where $f$ is radially decreasing ($f(x)=g(|x|)$, where $g:[0,\infty)\to \mathbb{R}$ is decreasing). In this case $Mf$ is radially decreasing as well and $Mf(0)=f(0)$. If $n=1$, these facts immediately imply that $\norm{DMf}_1=\norm{Df}_1$, but if $n\geq 2$ this is definitely not the case: additional estimates are necessary. An estimate of this type for radially decreasing functions can be derived from (\ref{formula}) and (\ref{formula2}); it says that \begin{equation}\label{ref1} |DMf(x)|\leq \frac{C_n}{|x|}\vint_{B(0,|x|)}|Df(y)||y|\,dy\,. \end{equation} By using this inequality, the positive answer to the $W^{1,1}$-problem for radially decreasing functions follows straightforwardly from Fubini's theorem (Corollary \ref{cor1}). For general radial functions, inequality (\ref{ref1}) turns out to hold only if the maximal average is achieved in a ball with radius comparable to $|x|$. To overcome this problem, we study the auxiliary maximal function $M^{I}$, defined for $f\in L^{1}_{loc}(\mathbb{R}^n)$ by \begin{equation*} M^If(x)=\sup_{x\in B(z,r), r\leq|x|/4 }\vint_{B(z,r)}|f(y)|\,dy\,, \end{equation*} and prove (Lemma \ref{basic3}) that for all radial $f\in W^{1,1}(\mathbb{R}^n)$ it holds that \begin{equation}\label{eq:eq} \norm{DM^If}_1\leq C_n\norm{Df}_1\,. \end{equation} The proof of this auxiliary result resembles the proof of the $W^{1,1}$-problem (for $M$) in the case $n=1$. As the first step, we prove by a straightforward calculation that for the `endpoint operator' of $M^I$, defined by \begin{equation}\label{def99} f_{/4}(x):=\sup_{x\in B(z,|x|/4)}\vint_{B(z,|x|/4)}\,|f(y)|\,dy\,, \end{equation} it holds that $\norm{Df_{/4}}_1\leq C\norm{Df}_1$ for all $f\in W^{1,1}(\mathbb{R}^n)$.
Recall again the fact that $Mf$ does not have a local maximum in $\{Mf(x)>|f(x)|\}$, leading to the estimate $\norm{DMf}_1\leq \norm{Df}_1$ in the case $n=1$. As a multidimensional counterpart for radial functions, we show that $M^If$ does not have a local maximum in $\{M^If(x)>\max\{|f(x)|,f_{/4}(x)\}\}$ and that for every $k\in \mathbb{Z}$ it holds that \begin{equation*} \int_{\{2^k\leq|y|\leq 2^{k+1}\}}|DM^If(y)|\,dy\,\leq\,C_n\int_{\{2^{k-1}\leq |y|\leq 2^{k+2}\}}|D|f|(y)|\,dy\,. \end{equation*} Estimate (\ref{eq:eq}) can be easily derived from this fact. The main result follows by combining (\ref{eq:eq}) with the estimate (\ref{ref1}), exploited in $\{Mf(x)>M^{I}f(x)\}$. \subsection*{Question} The analysis presented in this paper raises interest in the integrability properties of certain \textit{conditional} maximal operators. As an example, (\ref{formula}) and (\ref{formula2}) yield that $|DMf(x)|\leq \widetilde{M}(D|f|)(x)$, where $\widetilde{M}$ is defined for all locally integrable gradient fields $F:\mathbb{R}^n\to\mathbb{R}^n$ by \begin{equation*} \widetilde{M}F(x)= \sup\bigg\{\,\bigg|\vint_{B(z,r)}F\bigg|\,:\,x\in B(z,r)\,,\,\,\int_{B(z,r)}F(y)\cdot(y-x)dy\,=0\,\bigg\}\,. \end{equation*} It is clear that $\widetilde{M}F$ is bounded by $M(|F|)$, but does $\widetilde{M}$ have even better integrability properties than $M$? What about boundedness on the Hardy space $H^1$, or even on $L^1$? Notice that the boundedness of $\widetilde{M}$ on $L^1$ would imply the solution to the $W^{1,1}$-problem. This problem is almost completely open, even in the case $n=1$. Counterexamples would be highly interesting as well. \textit{Acknowledgements.} The author would like to thank Antti V\"{a}h\"{a}kangas for useful comments on the manuscript and inspiring discussions. \section{Preliminaries and general results}\label{sec2} Let us introduce some notation. The boundary of the $n$-dimensional unit ball is denoted by $S^{n-1}$.
The $s$-dimensional Hausdorff measure is denoted by $\mathcal{H}^s$. The volume of the $n$-dimensional unit ball is denoted by $\omega_n$ and the $\mathcal{H}^{n-1}$-measure of $S^{n-1}$ by $\sigma_n$. The weak derivative of $f$ (if it exists) is denoted by $Df$. If $v\in S^{n-1}$, then \begin{equation*} D_vf(x):=\lim_{h\to 0}\frac{1}{h}(f(x+hv)-f(x))\,, \end{equation*} whenever the limit exists. \begin{definition} For $f\in L^1_{loc}(\mathbb{R}^n)$ let \begin{align*} \mathcal{B}_x:=&\{B(z,r):\,x\in\bar{B}(z,r),r>0,\vint_{B(z,r)}|f|=Mf(x)\}\,. \end{align*} \end{definition} It is easy to see that if $f\in L^1(\mathbb{R}^n)$ and $|f(x)|<Mf(x)<\infty$, then $\mathcal{B}_x\not=\emptyset\,$. The following lemma is the main result of this section. We point out that item $(6)$ below is especially useful in the case of radial functions. \begin{lemma}\label{peruskama} Suppose that $f\in W^{1,1}(\mathbb{R}^n)$, $Mf(x)>f(x)$ and $Mf$ is differentiable at $x$. Then \begin{enumerate} \item For all $v\in S^{n-1}$ and $B\in\mathcal{B}_x\,$, it holds that \begin{equation*} DMf(x)=\vint_{B}D|f|(y)\,dy\,\text{ and }\,D_{v}Mf(x)=\vint_{B}D_{v}|f|(y)\,dy\,. \end{equation*} \item If $x\in B$ for some $B\in \mathcal{B}_x$, then $DMf(x)=0\,.$ \item If $x\in \partial B$, $B=B(z,r)\in \mathcal{B}_x$ and $DMf(x)\not=0$, then \begin{equation*} \frac{DMf(x)}{|DMf(x)|}=\frac{z-x}{|z-x|}\,. \end{equation*} \item If $B\in\mathcal{B}_x$, then \begin{equation}\label{char} \int_{B}D|f|(y)\cdot (y-x)\,dy\,=\,0\,. \end{equation} \item If $x\in \partial B$, $B=B(z,r)\in \mathcal{B}_x$, then \begin{equation*} |DMf(x)|=\frac{1}{r}\vint_{B}D|f|(y)\cdot (z-y)\,dy\,. \end{equation*} \item If $B\in\mathcal{B}_x$, then \begin{equation}\label{nice} DMf(x)\cdot\frac{x}{|x|}=\frac{1}{|x|}\vint_{B}D|f|(y)\cdot y\,dy\,. \end{equation} \end{enumerate} \end{lemma} The proof of Lemma \ref{peruskama} is essentially based on the following auxiliary propositions.
\begin{proposition}\label{aux1} Suppose that $f\in W^{1,1}(\mathbb{R}^n)$, $B$ is a ball, $h_i\in\mathbb{R}$ such that $h_i\to 0$ as $i\to\infty$, and $B_i=L_i(B)$, where $L_i$ are \textit{affine} mappings and \begin{equation*} \lim_{i\to\infty}\frac{L_i(y)-y}{h_i}\,=\,g(y)\,. \end{equation*} Then \begin{equation}\label{fir} \lim_{i\to \infty}\frac{1}{h_i}\bigg(\vint_{B_i}f(y)\,dy\,-\vint_{B}f(y)\,dy\,\bigg)\,=\,\vint_{B}Df(y)\cdot g(y)\,dy\,. \end{equation} \end{proposition} \begin{proof}The proof is a simple calculation: \begin{align*} &\frac{1}{h_i}\bigg(\vint_{B_i}f(y)\,dy\,-\vint_{B}f(y)\,dy\,\bigg) =\frac{1}{h_i}\bigg(\vint_{L_i(B)}f(y)\,dy\,-\vint_{B}f(y)\,dy\,\bigg)\\ =&\frac{1}{h_i}\bigg(\vint_{B}f(L_i(y))-f(y)\,dy\,\bigg) =\vint_{B}\frac{f(y+(L_i(y)-y))-f(y)}{h_i}\,dy\,\\ \approx&\vint_{B}\frac{D f(y)\cdot(L_i(y)-y)}{h_i}\,dy\,\to\vint_{B}D f(y)\cdot g(y)\,dy\,, \end{align*} as $i\to \infty\,$. \end{proof} \begin{lemma}\label{aux2} Let $f\in W^{1,1}(\mathbb{R}^n)$, $x\in\mathbb{R}^n$, $B\in\mathcal{B}_x$, $\delta>0$, and let $L_h$, $h\in[-\delta,\delta]$, be \textit{affine} mappings such that $x\in L_h(\bar{B})$ and \begin{equation}\label{assump} \lim_{h\to 0}\frac{L_h(y)-y}{h}\,=\,g(y)\,. \end{equation} Then \begin{equation}\label{sec} \int_{B} D|f|(y)\cdot g(y)\,dy\,=0\,. \end{equation} \end{lemma} \begin{proof} Let us denote $B_h:=L_h(B)$. By Proposition \ref{aux1} it holds that \begin{equation*} \vint_{B} D|f|(y)\cdot g(y)\,dy\,=\,\lim_{h\to 0}\frac{1}{h}\bigg(\vint_{B_h}|f|(y)-\vint_{B}|f|(y)\bigg)\,. \end{equation*} Since $B\in\mathcal{B}_x$ and $x\in \bar{B}_h$, the sign of the quantity inside the large parentheses is non-positive for all $h\in[-\delta,\delta]$. However, the sign of $1/h$ depends on the sign of $h$. The conclusion is that the above equality is possible only if (\ref{sec}) is valid.
\end{proof} \subsection*{Proof of Lemma \ref{peruskama}} \begin{enumerate} \item The claim is a counterpart of the formula for $DM_cf$, which was first proved in \cite{L}. Suppose that $B=B(z,r)\in \mathcal{B}_x$ and let $B_h:=B(z+hv,r)$. Then it holds that \begin{align*} &D_vMf(x)=\lim_{h\to 0}\frac{1}{h}(Mf(x+hv)-Mf(x))\\ \geq& \lim_{h\to 0}\frac{1}{h}\bigg(\vint_{B_h}|f(y)|\,dy\,-\vint_{B}|f(y)|\,dy\,\bigg)\\ =&\lim_{h\to 0}\frac{1}{h}\bigg(\vint_{B}|f(y+hv)|-|f(y)|\,dy\,\bigg)\,=\,\vint_{B}D_v|f|(y)\,dy\,. \end{align*} On the other hand, if $B_h:= B(z-hv,r)$, then \begin{align*} &D_vMf(x)=\lim_{h\to 0}\frac{1}{h}(Mf(x)-Mf(x-hv))\\ \leq&\lim_{h\to 0}\frac{1}{h}\bigg(\vint_{B}|f(y)|\,dy\,-\vint_{B_h}|f(y)|\,dy\,\bigg)\\ =&\lim_{h\to 0}\frac{1}{h}\bigg(\vint_{B}|f(y)|-|f(y-hv)|\,dy\,\bigg)\,=\,\vint_{B}D_v|f|(y)\,dy\,. \end{align*} These inequalities imply the claim. \item If $B\in\mathcal{B}_x$ and $x\in B$, then $y\in B$ if $|y-x|$ is small enough, and thus $Mf(y)\geq Mf(x)$; since $Mf$ is differentiable at $x$, this yields $DMf(x)=0$. \item Let $B=B(z,r)\in\mathcal{B}_x$, $v\in S^{n-1}$ such that $v\cdot(z-x)=0$, and $h_i\in(0,\infty)$, $h_i\to 0$ as $i\to\infty\,$. Moreover, let us denote $B_i:=B(z,|z-(x+h_iv)|)$. Then it clearly holds that $x+h_iv\in \bar{B}_i$ and it is also easy to see that $B_i=L_i(B)$ for an affine mapping $L_i$ given by \begin{equation*} L_i(y)=y+\bigg(\frac{|z-(x+h_iv)|-|z-x|}{|z-x|}\bigg)(y-z)\,. \end{equation*} By the assumption $v\cdot (z-x)=0$ it follows that \begin{equation*} \lim_{i\to \infty}\frac{L_i(y)-y}{h_i}\,=\,(y-z)\lim_{i\to \infty}\bigg(\frac{|z-(x+h_iv)|-|z-x|}{h_i|z-x|}\,\bigg)\,=\,0\,. \end{equation*} Therefore, Proposition \ref{aux1} implies that \begin{equation*} \lim_{i\to \infty}\frac{1}{h_i}\bigg(\vint_{B_i}|f|(y)\,dy\,-\vint_{B}|f|(y)\,dy\,\bigg)\,=\,0\,. \end{equation*} This shows that $D_{v}Mf(x)=0$ for all $v$ orthogonal to $(z-x)$. In particular, it follows that $DMf(x)$ is parallel to $z-x$ or $x-z$.
The final claim follows easily by the fact that $Mf(x+h(z-x))\geq Mf(x)$ if $0<h\leq 2$. \item Let $B\in\mathcal{B}_x$ and $L_h(y):=y+h(y-x)\,$, $h\in\mathbb{R}$. Then it holds that $L_h$ is an affine mapping, $L_h(x)=x$, and so $x\in L_h(B)=:B_h$, and $(L_h(y)-y)/h=y-x\,$ for all $h\in\mathbb{R}\,$. Therefore, Lemma \ref{aux2} implies that \begin{equation*} \int_{B} D|f|(y)\cdot (y-x)\,dy\,=0\,. \end{equation*} \item By combining $(1)$, $(3)$ and $(4)$ the claim follows by \begin{align*} |DMf(x)|&=DMf(x)\cdot\bigg(\frac{z-x}{|z-x|}\bigg)=\vint_{B} D|f|(y)\cdot\bigg(\frac{z-x}{|z-x|}\bigg)\,dy\,\\ &=\vint_{B} D|f|(y)\cdot\bigg(\frac{z-y}{|z-x|}\bigg)\,dy\,. \end{align*} \item The claim follows from $(1)$ and $(4)$. \end{enumerate} \hfill$\Box$ \section{$W^{1,1}$-problem for radial functions} \subsection*{Radial functions and notation} In what follows, we will interpret a radial function on $\mathbb{R}^n$ as a function on $(0,\infty)$ in a natural way. To be more precise, if $f\in W^{1,1}_{loc}(\mathbb{R}^n)$ is radial, it is a well-known fact that there exists a continuous function $\tilde{f}:(0,\infty)\to\mathbb{R}$ such that $\tilde{f}$ is weakly differentiable, \begin{equation*} \int_{0}^{\infty}|\tilde{f}'(t)|t^{n-1}\,dt\,<\infty\,, \end{equation*} and (by a possible redefinition of $f$ in a set of measure zero) for all $t\in(0,\infty)$ it holds that $f(x)=\tilde{f}(t)$ and $D_{x/|x|}f(x)=\tilde{f}'(t)$ if $|x|=t$. In what follows, we will simplify the notation and use $f$ to denote $\tilde{f}$ as well. To avoid the possibility of misunderstanding, we usually use the variable $t$ and the notation $f'$ (instead of $Df$) when we are actually working with $\tilde{f}$. We also say that $f$ is radially decreasing if $f$ is radial and $f(t_1)\leq f(t_2)$ whenever $t_1>t_2$. Notice also that if $f$ is radial then $Mf$ is also radial. The following result is an easy consequence of Lemma \ref{peruskama}.
\begin{corollary}\label{cor1} If $f\in W^{1,1}(\mathbb{R}^n)$ is radially decreasing, then $Mf\in W^{1,1}(\mathbb{R}^n)$ and $\norm{DMf}_1\leq C_n\norm{Df}_1\,$. \end{corollary} \begin{proof} Since $f$ is radially decreasing, it is easy to show (the rigorous proof is left to the reader) that if $Mf(x)\not = 0$ and $B\in \mathcal{B}_x$, then $0\in \bar{B}$ and $\bar{B}\subset \bar{B}(0,|x|)$. Especially, we get by Lemma \ref{peruskama}, $(6)$, that \begin{equation}\label{ref2} |DMf(x)|\leq \frac{C_n}{|x|}\vint_{B(0,|x|)}|Df(y)||y|\,dy\,. \end{equation} Then the claim follows by Fubini's theorem: \begin{align*} &\int_{\mathbb{R}^n}\bigg(\frac{1}{|x|}\vint_{B(0,|x|)}|Df(y)||y|\,dy\,\bigg)\,dx\\ =&\int_{\mathbb{R}^n}|D f(y)||y|\bigg(\int_{\mathbb{R}^n}\frac{\chi_{B(0,|x|)}(y)}{\omega_n|x|^{n+1}}\,dx\,\bigg)\,dy\\ =&\int_{\mathbb{R}^n}|D f(y)||y|\bigg(\int_{\{x:|x|\geq |y|\}}\frac{1}{\omega_n|x|^{n+1}}\,dx\,\bigg)\,dy\\ =&\int_{\mathbb{R}^n}|D f(y)||y|\bigg(\int_{S^{n-1}}\int_{|y|}^{\infty}\frac{1}{\omega_nt^{n+1}}t^{n-1}\,dt\,d\mathcal{H}^{n-1}\bigg)\,dy\,\\ =&\frac{\sigma_n}{\omega_n}\int_{\mathbb{R}^n}|D f(y)||y|\bigg(\int_{|y|}^{\infty}\frac{1}{t^2}\,dt\bigg)\,dy\,\\ =&\frac{\sigma_n}{\omega_n}\int_{\mathbb{R}^n}|D f(y)|\,dy\,.\\ \end{align*} \end{proof} For general radial functions, (\ref{ref1}) is valid (and useful) only for those $x$ for which the radius of $B\in\mathcal{B}_x$ is comparable to $|x|$. As explained in the introduction, the main auxiliary tool in the case of general radial functions is the following result (recall the definition of $M^I$ in the introduction): \begin{lemma}\label{basic3} If $f\in W^{1,1}(\mathbb{R}^n)$ is radial, then $M^{I}f\in W^{1,1}(\mathbb{R}^n)$ and \\ $\norm{DM^{I}f}_1\leq C_n\norm{Df}_1\,$. \end{lemma} Before the actual proof of this result, we prove several auxiliary results. The first of them is well known. \begin{proposition}\label{union} Suppose that $E\subset\mathbb{R}$ is open.
Then there exist disjoint intervals $(a_i,b_i)$ such that $E=\cup_{i=1}^{\infty}(a_i,b_i)\,$ and $a_i,b_i\in\partial E\cup\{-\infty,\infty\}$ for all $i\in{\mathbb N}\,$. \end{proposition} The following auxiliary result is repeatedly utilized in the proof. The result is well known, but we present the proof for the reader's convenience. \begin{lemma}\label{basic} Suppose that $\Omega\subset\mathbb{R}^n$ is open, $f\in W^{1,1}(\Omega)$ is continuous, $g:\Omega\to\mathbb{R}$ is continuous and weakly differentiable in $E:=\{x\in\Omega:g(x)>f(x)\}\,$, and $\int_{E}|Dg|<\infty\,$. Then $\max\{f,g\}$ is weakly differentiable in $\Omega\,$ and \begin{equation*} D(\max\{f,g\})=\chi_{E}Dg+\chi_{\Omega\cap E^c}Df\,. \end{equation*} \end{lemma} \begin{proof} Suppose that $\phi$ is a smooth test function, compactly supported in $\Omega$, $1\leq i\leq n$, $L(t)=p+te_i$, $p\in\mathbb{R}^n\,$, and let $L$ denote also the line $L(\mathbb{R})$. By Proposition \ref{union}, $E\cap L$ can be written as a union of disjoint and open (in $\Omega\cap L$) line segments $E_j=L((a_j,b_j))$, $j\in{\mathbb N}$, such that $L(a_j),L(b_j)\in \partial E$ (with respect to $\Omega\cap L$) or $a_j=-\infty$ or $b_j=\infty$. In particular, $f(L(a_j))=g(L(a_j))$ if $a_j\not=-\infty$ and $f(L(b_j))=g(L(b_j))$ if $b_j\not=\infty$. Since $\phi$ is compactly supported, it follows that \begin{align*} &f(L(a_j))\phi(L(a_j))=g(L(a_j))\phi(L(a_j))\,\text{ and }\\ &f(L(b_j))\phi(L(b_j))=g(L(b_j))\phi(L(b_j))\,\text{ for all }j\in{\mathbb N}\,.
\end{align*} Therefore, by using the assumptions on $g$, it holds that \begin{align*} &\int_{E_j}g(D_i\phi)d{\mathcal{H}}^1=\int_{E_j}D_i(g\phi)d{\mathcal{H}}^1-\int_{E_j}(D_ig)\phi d{\mathcal{H}}^1\,\\ =&\,g(L(b_j))\phi(L(b_j))-g(L(a_j))\phi(L(a_j))-\int_{E_j}(D_ig)\phi d{\mathcal{H}}^1\\ =&\,f(L(b_j))\phi(L(b_j))-f(L(a_j))\phi(L(a_j))-\int_{E_j}(D_ig)\phi d{\mathcal{H}}^1\\ =&\,\int_{E_j}D_i(f\phi)d{\mathcal{H}}^1-\int_{E_j}(D_ig)\phi d{\mathcal{H}}^1\\ =&\,\int_{E_j}(D_if)\phi + f(D_i\phi)-(D_ig)\phi\, d{\mathcal{H}}^1\, \end{align*} for all $j\in{\mathbb N}\,$. Then \begin{align*} &\int_{\Omega\cap L}\max\{f,g\}(D_i\phi)d{\mathcal{H}}^1=\int_{E\cap L}g(D_i\phi)d{\mathcal{H}}^1+\int_{\Omega\cap E^c\cap L}f(D_i\phi)d{\mathcal{H}}^1\\ =&\sum_{j=1}^{\infty}\int_{E_j}g(D_i\phi)d{\mathcal{H}}^1+\int_{\Omega\cap E^c\cap L}f(D_i\phi)d{\mathcal{H}}^1\\ =&\int_{E\cap L}(D_if)\phi +f(D_i\phi)-(D_ig)\phi\, d{\mathcal{H}}^1 +\int_{\Omega\cap E^c\cap L}f(D_i\phi)\,d{\mathcal{H}}^1\\ =&\int_{\Omega\cap L}f(D_i\phi)\,d{\mathcal{H}}^1+\int_{E\cap L}(D_if)\phi\,d{\mathcal{H}}^1-\int_{E\cap L}(D_ig)\phi\, d{\mathcal{H}}^1\\ =&-\int_{\Omega\cap L}(D_i f)\phi\,d{\mathcal{H}}^1+\int_{E\cap L}(D_if)\phi\,d{\mathcal{H}}^1-\int_{E\cap L}(D_ig)\phi\, d{\mathcal{H}}^1\\ =&-\int_{\Omega\cap E^c\cap L}(D_i f)\phi\,d{\mathcal{H}}^1-\int_{E\cap L}(D_ig)\phi\, d{\mathcal{H}}^1\,\\ =&-\int_{\Omega\cap L}\big(\chi_{E}D_ig+\chi_{\Omega\cap E^c}D_if\big)\phi\,d{\mathcal{H}}^1\,. \end{align*} This implies the claim. \end{proof} \begin{definition} Let $f:\Omega\to\mathbb{R}$, where $\Omega\subset\mathbb{R}$ is open. We say that $x$ is a \textit{local strict maximum} of $f$ in $(a,b)\subset\Omega$, $-\infty\leq a<b\leq\infty$, if there exist $a',b'\in(a,b)$ such that $a'<x<b'$, $f(t)\leq f(x)$ if $t\in (a',b')$, and $\max\{f(a'),f(b')\}<f(x)$.
\end{definition} \begin{proposition}\label{lation} Suppose that $f:[a,b]\to\mathbb{R}$ is continuous and $c\in (a,b)$ such that $f(c)>\max\{f(a),f(b)\}$. Then $f$ has a local strict maximum on $(a,b)$. \end{proposition} \begin{proof} Since $f$ is continuous, it attains its maximum over $[a,b]$ at some point $x_0$, and $x_0\in(a,b)$ because $f(c)>\max\{f(a),f(b)\}$. Since $f\leq f(x_0)$ on all of $[a,b]$, choosing $a',b'\in(a,b)$ close enough to $a$ and $b$, so that $\max\{f(a'),f(b')\}<f(x_0)$, shows that $x_0$ is a local strict maximum of $f$. \end{proof} \begin{proposition}\label{deejee} Suppose that $f:[a,b]\to\mathbb{R}$ is continuous and does not have a local strict maximum on $(a,b)$. Then there exists $c\in[a,b]$ such that $f$ is non-increasing on $[a,c]$ and non-decreasing on $[c,b]$. \end{proposition} \begin{proof} Since $f$ is continuous, we can choose $c\in[a,b]$ such that $f(c)=\min f$. To show that $f$ is non-decreasing on $[c,b]$, let $c<y_1<y_2<b$ and assume, on the contrary, that $f(y_2)<f(y_1)$. This implies that $f(y_1)>\max\{f(c),f(y_2)\}$, and thus $f$ has a local strict maximum on $(c,y_2)$ by Proposition \ref{lation}. This is the desired contradiction. To show that $f$ is non-increasing on $[a,c]$, let $a<y_1<y_2<c$ and assume, on the contrary, that $f(y_1)<f(y_2)$. This implies that $f(y_2)>\max\{f(y_1),f(c)\}$, and thus $f$ has a local strict maximum on $(y_1,c)$ by Proposition \ref{lation}. This is the desired contradiction. \end{proof} Let us define for $0<a\leq b<\infty$ the annular domains \begin{align*} A_n(a,b):=&A(a,b):=\{x\in\mathbb{R}^n\,:\,a< |x|< b\}\,\,\text{ and }\\ A_n[a,b]:=&A[a,b]:=\{x\in\mathbb{R}^n\,:\,a\leq |x|\leq b\}\,. \end{align*} \begin{lemma}\label{simppeli} If $f\in W^{1,1}(\mathbb{R}^n)$ is radial, then $Mf$ does not have a local strict maximum in $\{t\in(0,\infty):\,Mf(t)>f(t)\}\,$. \end{lemma} \begin{proof} Suppose, on the contrary, that $t_0\in(0,\infty)$ is a local strict maximum of $Mf$ and $Mf(t_0)>f(t_0)$.
Let us choose \begin{align*} t^-:=&\sup\{t<t_0:\,Mf(t)<Mf(t_0)\}\,\text{ and }\\ \,t^+:=&\inf\{t>t_0\,:\,Mf(t)<Mf(t_0)\}\,. \end{align*} By the definition of the local strict maximum, it follows that $t_0\in[t^-,t^+]$ and \begin{equation}\label{eq:last} Mf(t)=Mf(t_0) \text{ for all }t\in[t^-,t^+]\,. \end{equation} Suppose that $|x|=t_0$. Since $Mf(t_0)>f(t_0)$, there exists a ball $B$ such that $Mf(t_0)=\vint_{B}|f|$ and $x\in \bar{B}\,$. Suppose first that $B\not\subset A[t^-,t^+]$. In this case there exists ${\varepsilon}>0$ such that $[t^--{\varepsilon},t^-]\subset\{|y|: y\in \bar{B}\}$ or $[t^+,t^++{\varepsilon}]\subset\{|y|: y\in \bar{B}\}$. In particular, it follows by the definition of $M$ that $Mf(t)\geq \vint_{B}|f|=Mf(t_0)$ if $t\in[t^--{\varepsilon},t^-]$ or $t\in[t^+,t^++{\varepsilon}]$, respectively. Obviously this contradicts the choice of $t^-$ and $t^+$. This verifies that $B\subset A[t^-,t^+]$. Therefore, it holds by (\ref{eq:last}) that \begin{equation}\label{dispose} Mf(y)= Mf(t_0)\text{ for all }y\in B\,. \end{equation} However, $f(t_0)<Mf(t_0)$ also implies that there exists a ball $B'$ with positive radius such that $B'\subset B$ and $|f|<Mf(t_0)$ in $B'$. Combining this with (\ref{dispose}) yields the desired contradiction: \begin{align*} Mf(t_0)&=\vint_{B}|f|\leq\frac{1}{|B|}\bigg(\int_{B\setminus B'}|f|+\int_{B'}|f|\bigg)\\ &<\frac{1}{|B|}\bigg(\int_{B\setminus B'}Mf+\int_{B'}Mf(t_0)\bigg) =Mf(t_0)\,. \end{align*} \end{proof} Recall the definition of $f_{/4}$ (the endpoint operator of $M^I$, (\ref{def99})) from the introduction. Before showing the boundedness of $M^I$, we have to prove the boundedness of $f_{/4}$. \begin{proposition}\label{basic2} If $f\in W^{1,1}(\mathbb{R}^n)$, then $f_{/4}\in W^{1,1}(\mathbb{R}^n)$ and $\norm{Df_{/4}}_1\leq C_n\norm{Df}_1\,$. \end{proposition} \begin{proof} It is easy to check that $f_{/4}$ is Lipschitz outside the origin.
Therefore, it suffices to verify the desired norm estimates for $Df_{/4}$. We will exploit Proposition \ref{aux1}. For $x\not=0$, we are going to show that if $h>0$ is small enough and $v\in S^{n-1}$, then \begin{equation}\label{aim1} \frac{1}{h}|f_{/4}(x)-f_{/4}(x+hv)|\leq C_n\vint_{B(x,\frac{|x|}{2})}|D|f|(y)|\,dy\,. \end{equation} To show this, we may assume that $f_{/4}(x)>f_{/4}(x+hv)$. Suppose that \begin{align*} f_{/4}(x)=&\vint_{B(z,|x|/4)}|f(y)|dy\,\,,\,\,\,\,x\in \bar{B}(z,|x|/4)=:B\,,\\ g_h(y):=&x+hv+\frac{|x+hv|}{|x|}(y-x)\,\text{ and }\\ B_h:=&g_h(B)=B(x+hv+\frac{|x+hv|}{|x|}(z-x),|x+hv|/4)\,. \end{align*} In particular, $x+hv\in \bar{B}_h$. Moreover, it is easy to compute that \begin{equation*} \lim_{h\to 0}\frac{g_h(y)-y}{h}=\lim_{h\to 0}\frac{hv+\big(\,\frac{|x+hv|}{|x|}-1\,\big)\big(y-x\big)}{h}\,=\,v+\frac{v\cdot x}{|x|^2}(y-x)\,. \end{equation*} Then it follows by Proposition \ref{aux1} that \begin{align*} &\lim_{h\to 0}\,\frac{f_{/4}(x)-f_{/4}(x+hv)}{h}\leq \lim_{h\to 0}\frac{1}{h}\bigg(\vint_{B}|f(y)|\,dy\,-\vint_{B_h}|f(y)|\,dy\,\bigg)\,\\ =&\,\vint_{B}D|f|(y)\cdot (v+\frac{v\cdot x}{|x|^2}(y-x))\,dy\,\leq \vint_{B}|D|f|(y)|(1+\frac{|y-x|}{|x|})\,dy\,\\ \leq &\vint_{B}(1+\frac{1}{4})|D |f|(y)|\,dy\,\leq C_n\vint_{B(x,\frac{|x|}{2})}|D |f|(y)|\,dy\,. \end{align*} This proves (\ref{aim1}). The claim then follows, e.g., by using Fubini's theorem: let us denote below $B_x=B(x,\frac{|x|}{2})\,$. By the above estimate, \begin{align*} &\int_{\mathbb{R}^n}|Df_{/4}(x)|\,dx\,\leq\,C_n\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\frac{\chi_{B_x}(y)}{|B_x|}|Df(y)|\,dx\,dy\,\\ \leq&\,C_n\int_{\mathbb{R}^n}|Df(y)|\bigg(\int_{\{x\,:\,\frac{2|y|}{3}\leq |x|\leq 2|y|\}}|B_x|^{-1}\,dx\,\bigg)\,dy\, \leq C'_n\norm{Df}_1\,. \end{align*} \end{proof} The following estimate is well known.
\begin{proposition} If $f\in W^{1,1}(\mathbb{R}^n)$ is radial and $0<a<b<\infty$, then \begin{equation*} \sigma_na^{n-1}\int_a^b|f'(t)|\,dt\,\leq \int_{A(a,b)}|Df(y)|\,dy\,\leq \sigma_nb^{n-1}\int_a^b|f'(t)|\,dt\,. \end{equation*} \end{proposition} \subsection*{The proof of Lemma \ref{basic3}} Let \begin{equation*} g(x)=\max\{f_{/4}(x),|f(x)|\}\,. \end{equation*} By Lemma \ref{basic} and Proposition \ref{basic2} it follows that $g\in W^{1,1}(\mathbb{R}^n)$ and $\norm{Dg}_1\leq C_n\norm{Df}_1\,$. Let \begin{equation*} E:=\{x\in\mathbb{R}^n\,:\,M^If(x)>g(x)\}\,\text{ and }\,E_k:=E\cap A[2^{-k},2^{-k+1}]\,,\,\,k\in\mathbb{Z}. \end{equation*} It is well known that the mapping $M^If$ is locally Lipschitz in $E$ and, in particular, that $D(M^If)$ exists almost everywhere in $E$. By Lemma \ref{basic}, it suffices to show that $\int_{E}|DM^If|\leq C_n\norm{Dg}_1\,$. First observe that since $|f|$ is radial, $M^If$ and $g$ are radial as well, and continuous in $\mathbb{R}^n\setminus\{0\}$. In particular, if \begin{equation*} E_k^{\mathbb{R}}:=\{|x|\,:\,x\in E_k\}\,, \end{equation*} then $x\in E_k$ if and only if $|x|\in E^{\mathbb{R}}_k$. Since $E_k^{\mathbb{R}}$ is open, we can write \begin{equation*} E_k^{\mathbb{R}}=\cup_{i=1}^{\infty}(a_i,b_i)\,, \end{equation*} where $a_i<b_i$, the intervals $(a_i,b_i)$ are pairwise disjoint and $a_i,b_i\in\partial E_k^{\mathbb{R}}$. In other words, \begin{equation*} E_k=\bigcup_{i=1}^{\infty}A(a_i,b_i)\,, \end{equation*} and (by the definition of $E_k$) for all $i\in\mathbb{N}$ it holds that \begin{equation}\label{cases} M^If(x)=g(x)\text{ if }|x|=a_i>2^{-k}\text{ and }M^If(x)=g(x)\text{ if }|x|=b_i<2^{-k+1}\,. \end{equation} Moreover, since $M^{I}f>f$ in $E_k$, Lemma \ref{simppeli} says that $M^{I}f$ does not have a local strict maximum in $E_k^{\mathbb{R}}$.
In particular, by Proposition \ref{deejee} there exists $c_i\in(a_i,b_i)$ such that \begin{align*} \int_{A(a_i,b_i)}|DM^{I}f(y)|\,dy\,&\leq \,\sigma_n b_i^{n-1}\int_{a_i}^{b_i}|(M^{I}f)'(t)|\,dt\,\\ &= \,\sigma_n b_i^{n-1}(M^If(a_i)-M^If(c_i)+M^If(b_i)-M^If(c_i))\\ &\leq\,\sigma_n b_i^{n-1}(M^If(a_i)-g(c_i)+M^If(b_i)-g(c_i))\,. \end{align*} Combining this with (\ref{cases}) implies that if $2^{-k}<a_i<b_i<2^{-k+1}$, then \begin{align*} \int_{A(a_i,b_i)}|DM^{I}f(y)|\,dy\,&\leq \sigma_n b_i^{n-1}(g(a_i)-g(c_i)+g(b_i)-g(c_i))\\ &\leq \sigma_nb_i^{n-1}\int_{a_i}^{b_i}|g'(t)|\,dt\,\leq\,\bigg(\frac{b_i}{a_i}\bigg)^{n-1}\int_{A(a_i,b_i)}|Dg(y)|\,dy\,\\ &\leq\,2^{n-1}\int_{A(a_i,b_i)}|Dg(y)|\,dy\,. \end{align*} For the case $a_i=2^{-k}$ or $b_i=2^{-k+1}$, we employ the fact \begin{equation*} M^If(2^{-k}),M^If(2^{-k+1})\leq \sup_{y\in A(2^{-k-1},2^{-k+2})}g(y)\, \end{equation*} to obtain the estimates ($a_i=2^{-k}$ or $b_i=2^{-k+1}$) \begin{align*} \int_{A(a_i,b_i)}|DM^{I}f(y)|\,dy\,&\leq \sigma_n b_i^{n-1}(M^If(a_i)-g(c_i)+M^If(b_i)-g(c_i))\\ &\leq \sigma_nb_i^{n-1}\int_{2^{-k-1}}^{2^{-k+2}}|g'(t)|\,dt\,\\ &\leq\,2^{3(n-1)}\int_{A(2^{-k-1},2^{-k+2})}|Dg(y)|\,dy\,. \end{align*} Combining these estimates implies that \begin{align*} &\int_{E_k}|DM^{I}f(y)|\,dy\,=\sum_{i=1}^{\infty}\int_{A(a_i,b_i)}|DM^{I}f(y)|\,dy\,\\ \leq &\,2^{n-1}\sum_{i=1}^{\infty}\bigg[\int_{A(a_i,b_i)}|Dg(y)|\,dy\,\bigg]\,+2(2^{3(n-1)})\int_{A(2^{-k-1},2^{-k+2})}|Dg(y)|\,dy\,\\ \leq &\,2^{3n}\int_{A(2^{-k-1},2^{-k+2})}|Dg(y)|\,dy\,. \end{align*} Therefore, \begin{align*} \int_{E}|DM^If(y)|\,dy\,&\leq \sum_{k\in{\mathbb Z}}\int_{E_k}|DM^{I}f(y)|\,dy\\ &\leq 2^{3n}\sum_{k\in{\mathbb Z}}\int_{A(2^{-k-1},2^{-k+2})}|Dg(y)|\,dy\\ &= \,3(2^{3n})\sum_{k\in{\mathbb Z}}\int_{A(2^{-k},2^{-k+1})}|Dg(y)|\,dy\,=\,3(2^{3n})\norm{Dg}_1\,. \end{align*} This completes the proof. \hfill$\Box$ We are now ready to prove our main theorem.
\begin{theorem}\label{main} If $f\in W^{1,1}(\mathbb{R}^n)$ is radial, then $Mf\in W^{1,1}(\mathbb{R}^n)$ and $\norm{DMf}_1\leq C_n\norm{Df}_1\,$. \end{theorem} \begin{proof} Let \begin{equation*} E:=\{x\in\mathbb{R}^n\,:\,Mf(x)>M^{I}f(x)\,,\,\,\,DMf(x)\not= 0\,\}. \end{equation*} It is well known that $Mf$ is locally Lipschitz in $\{Mf(x)>f(x)\}$, implying the existence of $DMf$ almost everywhere in $\{Mf(x)>f(x)\}$. Since $Mf(x)\geq M^{I}f(x)$ for every $x$, it holds that $Mf=\max\{Mf,M^{I}f\}$. Therefore, the theorem follows by Lemmas \ref{basic} and \ref{basic3}, if we can show that \begin{equation}\label{goal} \int_{E}|DMf(y)|\,dy\,\leq C_n\norm{Df}_1\,. \end{equation} To show this, observe first that for all $x\in E$ there exist $r_x>\frac{|x|}{4}$ and $z_x\in\mathbb{R}^n$ such that $x\in B(z_x,r_x)\in \mathcal{B}_x$. Moreover, since $DMf(x)\not=0$, Lemma \ref{peruskama} ($(2)$ and $(3)$) says that $x\in\partial B(z_x,r_x)$ and $DMf(x)/|DMf(x)|=(z_x-x)/|z_x-x|$. On the other hand, $Mf$ is radial and so $DMf(x)/|DMf(x)|=\pm x/|x|$. We conclude that \begin{equation*} B_x=B(c_xx,|c_xx-x|)\text{ for some }c_x\in\mathbb{R}\,. \end{equation*} Observe that $r_x=|c_xx-x|=|c_x-1||x|> |x|/4$ by the assumption, and thus $|c_x-1|>1/4\,$. Moreover, it holds that $c_x\geq -1$. To see this, observe that if $c_x<-1$, then $-x\in B_x$ and, since $Mf$ is radial, $B_x\in\mathcal{B}_{-x}$, implying by Lemma \ref{peruskama} that $0=DMf(-x)=DMf(x)$, which contradicts the assumption $x\in E$. Summing up, we can write $E=E_+\cup E_-$, where \begin{equation*} E_+=\{x\in E\,:\,c_x> 1+1/4\,\}\,\text{ and }\,E_-=\{x\in E\,:\,-1\leq c_x<3/4\,\}\,. \end{equation*} We are going to use different estimates for $DMf(x)$ in $E_+$ and $E_-\,$. Since $|DMf(x)|=|DMf(x)\cdot \frac{x}{|x|}|$, it follows from Lemma \ref{peruskama} (\ref{nice}) that \begin{equation*} |DMf(x)|\leq \frac{1}{|x|}\vint_{B_x}|D|f|(y)||y|\,dy\,.
\end{equation*} This estimate will be used in $E_-$, while in $E_+$ we will use the (easier) estimate $|DMf(x)|\leq \vint_{B_x}|D|f||\,$ (Lemma \ref{peruskama}, $(1)$). We get that \begin{align*} &\int_{E}|DMf(x)|\,dx\,\leq\int_E\chi_{E_+}(x)|DMf(x)|+\chi_{E_-}(x)|DMf(x)|\,dx\,\\ \leq & \int_E\chi_{E_+}(x)\bigg(\vint_{B_x}|D|f|(y)|dy\,\bigg)+\chi_{E_-}(x)\bigg(\vint_{B_x}|D|f|(y)|\frac{|y|}{|x|}\,dy\,\bigg)\,dx\,\\ =& \int_E\int_{\mathbb{R}^n}\frac{\chi_{E_+}(x)\chi_{B_x}(y)|D|f|(y)|}{|B_x|}+\frac{\chi_{E_-}(x)\chi_{B_x}(y)|D|f|(y)||y|}{|B_x||x|}\,dy\,dx\,\\ =&\int_{\mathbb{R}^n}|D|f|(y)|\bigg(\int_{E_+}\frac{\chi_{B_x}(y)}{|B_x|}\,dx\,+\int_{E_-}\frac{\chi_{B_x}(y)|y|}{|B_x||x|}\,dx\,\bigg)\,dy. \end{align*} If $y\in B_x$ and $x\in E_+$, it follows from the definition of $E_+$ that $|x|\leq |y|$. Moreover, if $y\in B_x$ and $x\in E$, then $|y-x|\leq 2r_x$ and $r_x\geq \frac{|x|}{4}$, so that $|y|\leq |x|+|y-x|\leq 6r_x$, i.e. $r_x\geq \frac{|y|}{6}\,$. This implies the estimate \begin{equation*} \int_{E_+}\frac{\chi_{B_x}(y)}{|B_x|}\,dx\,\leq \int_{B(0,|y|)}\frac{dx}{\omega_n(|y|/6)^n}\leq C_n\,, \text{ for all }y\in\mathbb{R}^n\,. \end{equation*} On the other hand, if $x\in E_-$, then $\,-1\leq c_x<3/4$ implies that $B_x\subset B(0,|x|)$. Therefore, if $x\in E_-$ and $y\in B_x$, then $y\in B(0,|x|)$, and thus $|x|\geq |y|\,$. Recall also that $r_x\geq \frac{|x|}{4}\,$. Combining these yields that \begin{align*} \int_{E_-}\frac{\chi_{B_x}(y)|y|}{|B_x||x|}\,dx\,\leq |y|\int_{\mathbb{R}^n\setminus B(0,|y|)}\frac{dx}{\omega_n(|x|/4)^{n+1}}=C'_n|y|\int_{|y|}^{\infty}\frac{dt}{t^2}=C'_n\,, \end{align*} for all $y\in\mathbb{R}^n\,$. This completes the proof. \end{proof}
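The discrete one-dimensional analogue of Theorem \ref{main} is easy to experiment with. The following sketch is an illustration only, not part of the argument; the brute-force routine, the grid and the tent profile are our own choices. It computes the non-centered maximal function of a sampled even ("radial") function and compares total variations:

```python
import numpy as np

def noncentered_maximal(f):
    """Discrete non-centered maximal function on a uniform grid:
    Mf[k] is the largest average of |f| over index windows [i, j] with i <= k <= j."""
    g = np.abs(np.asarray(f, dtype=float))
    n = len(g)
    S = np.concatenate(([0.0], np.cumsum(g)))  # prefix sums: S[j] = g[0] + ... + g[j-1]
    Mf = np.empty(n)
    for k in range(n):
        best = 0.0
        for i in range(k + 1):
            for j in range(k, n):
                best = max(best, (S[j + 1] - S[i]) / (j - i + 1))
        Mf[k] = best
    return Mf

def variation(u):
    """Total variation of a grid function."""
    return float(np.sum(np.abs(np.diff(u))))

# An even W^{1,1}-type profile: a tent centered at the origin.
x = np.linspace(-4.0, 4.0, 81)
f = np.maximum(0.0, 1.0 - np.abs(x))

Mf = noncentered_maximal(f)
print(variation(f), variation(Mf))  # the second number does not exceed the first
```

On this example $Mf$ dominates $f$, is unimodal, and its variation stays below the variation of $f$, in line with the comparability asserted by the theorem.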
https://arxiv.org/abs/1905.01366
Noncommutative cross-ratio and Schwarz derivative
We present here a theory of noncommutative cross-ratio, Schwarz derivative and their connections and relations to the operator cross-ratio. We apply the theory to "noncommutative elementary geometry" and relate it to noncommutative integrable systems. We also provide a noncommutative version of the celebrated "pentagramma mirificum".
\section{Introduction} Cross-ratio and Schwarz derivative are some of the most famous invariants in mathematics (see \cite{Lab}, \cite{OT}, \cite{OT2}). Different versions of their noncommutative analogs and various applications of these constructions to integrable systems, control theory and other subjects were discussed in several publications including \cite{DGP2}. In this paper, which is the first in a series of works, we recall some of these definitions, revisit the previous results and discuss their connections with each other and with noncommutative elementary geometry. In the forthcoming papers we shall further discuss the role of the noncommutative cross-ratio in the theory of noncommutative integrable models and in topology. The present paper is organized as follows. In Sections 2 and 3 we recall a definition of noncommutative cross-ratios based on the theory of noncommutative quasi-Pl\"ucker invariants (see \cite{GR2, GGRW}), and in Section 4 we use the theory of quasideterminants (see \cite{GR1}) to obtain noncommutative versions of Menelaus's and Ceva's theorems. In Section 5 we compare our definition of cross-ratio with the operator version used in control theory \cite{Zelikin06} and show how Schwarz derivatives appear as the infinitesimal analogs of noncommutative cross-ratios. In Section 6 we revisit an approach to the noncommutative Schwarz derivative from \cite{RS}, and Section 7 deals with possible applications of the theory we develop. It should also be mentioned that in the present paper we develop the constructions and ideas first outlined in \cite{RS}. It is our pleasure to dedicate this paper to Emma Previato, whose intelligence, erudition, and interest in various domains of our science are spectacular, and whose friendship is constant and loyal. Her results (\cite{DGP2}) were one of the important motives that inspired us to think once more about the role of the non-commutative cross-ratio. \vskip 2mm \noindent{\bf Acknowledgements.} The authors are grateful to B.
Khesin, V. Ovsienko and S. Tabachnikov for helpful discussions. This research was started during V. Retakh's visit to LAREMA and the Department of Mathematics, University of Angers. He is thankful to the project DEFIMATH for its support and to LAREMA for hospitality. V.~Roubtsov thanks the project IPaDEGAN (H2020-MSCA-RISE-2017), Grant Number 778010, for support of his visits to CRM, University of Montreal, where the paper was finished, and the CRM group of Mathematical Physics for hospitality. He is partly supported by the Russian Foundation for Basic Research under the Grant RFBR 18-01-00461. G.~Sharygin is thankful to IHES and LAREMA for hospitality during his visits. His research is partly supported by the Russian Science Foundation, Grant No. 16-11-10069. \section{Quasi-Pl\"ucker coordinates} We begin with a list of basic properties of noncommutative cross-ratios introduced in \cite{R}. To this end we first recall the definition and properties of quasi-Pl\"ucker coordinates; observe that we shall only deal with the quasi-Pl\"ucker coordinates for $2\times n$-matrices over a noncommutative division ring $\mathcal R$. The corresponding theory for general $k\times n$-matrices is presented in \cite{GR2, GGRW}. Recall (see \cite{GR, GR1} and subsequent papers) that for a matrix $\begin{pmatrix} a_{1k}&a_{1i}\\ a_{2k}&a_{2i}\end{pmatrix}$ one can define four quasideterminants provided the corresponding elements are invertible: $$ \begin{vmatrix} \boxed{a_{1k}}&a_{1i}\\ a_{2k}&a_{2i}\end{vmatrix}=a_{1k}-a_{1i}a_{2i}^{-1}a_{2k}, \ \ \begin{vmatrix} a_{1k}&\boxed{a_{1i}}\\ a_{2k}&a_{2i}\end{vmatrix}=a_{1i}-a_{1k}a_{2k}^{-1}a_{2i}, $$ $$ \begin{vmatrix} a_{1k}&a_{1i}\\ \boxed{a_{2k}}&a_{2i}\end{vmatrix}=a_{2k}-a_{2i}a_{1i}^{-1}a_{1k},\ \ \begin{vmatrix} a_{1k}&a_{1i}\\ a_{2k}&\boxed{a_{2i}}\end{vmatrix}=a_{2i}-a_{2k}a_{1k}^{-1}a_{1i}. $$ Let $A=\begin{pmatrix} a_{11}&a_{12}&\dots&a_{1n}\\a_{21}&a_{22}&\dots&a_{2n}\end{pmatrix}$ be a matrix over $\mathcal R$.
\begin{lemma} Let $i\neq k$. Then $$ \begin{vmatrix} a_{1k}&\boxed{a_{1i}}\\ a_{2k}&a_{2i}\end{vmatrix}^{-1} \begin{vmatrix} a_{1k}&\boxed{a_{1j}}\\ a_{2k}&a_{2j}\end{vmatrix}= \begin{vmatrix} a_{1k}&a_{1i}\\ a_{2k}&\boxed{a_{2i}}\end{vmatrix}^{-1} \begin{vmatrix} a_{1k}&a_{1j}\\ a_{2k}&\boxed{a_{2j}}\end{vmatrix} $$ if the corresponding expressions are defined. \end{lemma} Note that in the formula the boxed elements on the left and on the right must be in the same row. \begin{definition} We call the expression $$ q_{ij}^k(A)=\begin{vmatrix} a_{1k}&\boxed{a_{1i}}\\ a_{2k}&a_{2i}\end{vmatrix}^{-1} \begin{vmatrix} a_{1k}&\boxed{a_{1j}}\\ a_{2k}&a_{2j}\end{vmatrix}= \begin{vmatrix} a_{1k}&a_{1i}\\ a_{2k}&\boxed{a_{2i}}\end{vmatrix}^{-1} \begin{vmatrix} a_{1k}&a_{1j}\\ a_{2k}&\boxed{a_{2j}}\end{vmatrix} $$ the quasi-Pl\"ucker coordinates of the matrix $A$. \end{definition} Our terminology is justified by the following observation. Recall that in the commutative case the expressions $$ p_{ik}(A)=\begin{vmatrix}a_{1i}&a_{1k}\\ a_{2i}&a_{2k}\end{vmatrix}= a_{1i}a_{2k}-a_{1k}a_{2i} $$ are the Pl\"ucker coordinates of $A$. One can see that in the commutative case $$ q_{ij}^k(A)=\frac{p_{jk}(A)}{p_{ik}(A)}, $$ i.e. quasi-Pl\"ucker coordinates are ratios of Pl\"ucker coordinates. Let us list here the properties of quasi-Pl\"ucker coordinates over a (noncommutative) division ring $\mathcal R$. For the sake of brevity we shall sometimes write $q_{ij}^k$ instead of $q_{ij}^k(A)$ where it cannot lead to a confusion. \begin{enumerate} \item Let $g$ be an invertible matrix over $\mathcal R$. Then $$ q_{ij}^k(g\cdot A)=q_{ij}^k(A).$$ \item Let $\Lambda = \text {diag}\ (\lambda_1, \lambda _2,\dots, \lambda_n)$ be an invertible diagonal matrix over $\mathcal R$. Then $$ q_{ij}^k(A\cdot \Lambda)=\lambda_i^{-1}\cdot q_{ij}^k(A)\cdot \lambda_j.$$ \item If $j=k$ then $q_{ij}^k=0$; if $j=i$ then $q_{ij}^k=1$ (we always assume $i\neq k$). \item $q_{ij}^k\cdot q_{j\ell}^k=q_{i\ell}^k\ $.
In particular, $q_{ij}^kq_{ji}^k=1$. \item ``Noncommutative skew-symmetry": For distinct $i,j,k$ $$ q_{ij}^k\cdot q_{jk}^i\cdot q_{ki}^j=-1. $$ One can also rewrite this formula as $q_{ij}^kq_{jk}^i=-q_{ik}^j$. \item ``Noncommutative Pl\"ucker identity": For distinct $i,j,k,\ell$ $$ q_{ij}^k q_{ji}^{\ell} + q_{i\ell}^k q_{\ell i}^j=1. $$ \end{enumerate} \medskip\noindent One can easily check the last two formulas in the commutative case. In fact, $$ q_{ij}^k\cdot q_{jk}^i\cdot q_{ki}^j= \frac{p_{jk}p_{ki}p_{ij}}{p_{ik}p_{ji}p_{kj}}=-1 $$ because Pl\"ucker coordinates are skew-symmetric: $p_{ij}=-p_{ji}$ for any $i,j$. Also, assuming that $i<j<k<\ell$, $$ q_{ij}^k q_{ji}^{\ell} + q_{i\ell}^k q_{\ell i}^j= \frac{p_{jk}p_{i\ell}}{p_{ik}p_{j\ell}}+ \frac{p_{\ell k}p_{ij}}{p_{ik}p_{\ell j}}. $$ Since $\frac{p_{\ell k}}{p_{\ell j}}=\frac {p_{k\ell}}{p_{j\ell}}$, the last expression is equal to $$ \frac{p_{jk}p_{i\ell}}{p_{ik}p_{j\ell}}+ \frac{p_{k\ell }p_{ij}}{p_{ik}p_{j\ell }}= \frac {p_{ij}p_{k\ell}+p_{i\ell}p_{jk}}{p_{ik}p_{j\ell}}=1 $$ due to the celebrated Pl\"ucker identity $$ p_{ij}p_{k\ell} - p_{ik}p_{j\ell} +p_{i\ell}p_{jk}=0. $$ \begin{remark} We present here the theory of the {\it left} quasi-Pl\"ucker coordinates for $2$ by $n$ matrices where $n>2$. The theory of the {\it right} quasi-Pl\"ucker coordinates for $n$ by $2$ or, more generally, for $n$ by $k$ matrices where $n>k$ can be found in \cite{GR2, GGRW}. \end{remark} \section{Definition and basic properties of cross-ratios} \subsection{Non-commutative cross-ratio: basic definition.} We define cross-ratios over a (noncommutative) division ring $\mathcal R$ by imitating the definition of classical cross-ratios in homogeneous coordinates. Namely, if four points on the (real or complex) projective line can be represented in homogeneous coordinates by vectors $a,b,c,d$ such that $c=a+b$ and $d=ka+b$, then their cross-ratio is $k$.
So we let $$ x=\begin{pmatrix} x_1\\ x_2\\ \end{pmatrix},\ \ y=\begin{pmatrix} y_1\\ y_2\\ \end{pmatrix},\ \ z=\begin{pmatrix} z_1\\ z_2\\ \end{pmatrix},\ \ t=\begin{pmatrix} t_1\\ t_2 \end{pmatrix} $$ be four vectors in $\mathcal R^2$. We define their cross-ratio $\kappa=\kappa (x,y,z,t)$ by the equations $$ \begin{cases} t=x\alpha +y\beta\\ z=x\alpha \gamma +y\beta \gamma \cdot \kappa \end{cases} $$ where $\alpha, \beta, \gamma, \kappa \in \mathcal R$. In order to obtain explicit formulas, let us consider the matrix $$ \begin{pmatrix} x_1&y_1&z_1&t_1\\ x_2&y_2&z_2&t_2 \end{pmatrix}. $$ We shall identify its columns with $x,y,z,t$. Then we have the following theorem (see \cite{R}): \begin{theorem} $$ \kappa (x,y,z,t)=q_{zt}^y\cdot q_{tz}^x\ . $$ \end{theorem} Note that in the generic case $$ \begin{aligned} \kappa &(x,y,z,t)= \begin{vmatrix} y_1&\boxed{z_1}\\ y_2&z_2\end{vmatrix}^{-1} \begin{vmatrix} y_1&\boxed{t_1}\\ y_2&t_2\end{vmatrix}\cdot \begin{vmatrix} x_1&\boxed{t_1}\\ x_2&t_2\end{vmatrix}^{-1} \begin{vmatrix} x_1&\boxed{z_1}\\ x_2&z_2\end{vmatrix}\\ &=z_2^{-1}(z_1z_2^{-1}-y_1y_2^{-1})^{-1}(t_1t_2^{-1}-y_1y_2^{-1})(t_1t_2^{-1}-x_1x_2^{-1})^{-1}(z_1z_2^{-1}-x_1x_2^{-1})z_2 \end{aligned} $$ which shows that $\kappa(x,y,z,t)$ coincides with the standard cross-ratio in the commutative case and also demonstrates the importance of conjugation in the noncommutative world. \begin{corollary} Let $x,y,z,t$ be vectors in $\mathcal R^2$, let $g$ be a $2$ by $2$ matrix over $\mathcal R$ and let $\lambda_i\in \mathcal R$, $i=1,2,3,4$. If the matrix $g$ and the elements $\lambda_i$ are invertible, then \begin{equation} \kappa (gx\lambda_1, gy\lambda_2, gz\lambda_3, gt\lambda_4)= \lambda_3^{-1}\kappa (x,y,z,t)\lambda_3\ . \end{equation} \end{corollary} Again, as expected, in the commutative case the right hand side of (3.1) equals $\kappa (x,y,z,t)$.
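The identities above involve only ring operations and inverses of the expressions that occur in them, so they can be sanity-checked numerically by letting generic $2\times 2$ real matrices play the role of elements of $\mathcal R$. The sketch below is our own illustration (all helper names are ours, and numpy matrices are only a stand-in for division-ring elements); it verifies the Lemma of Section 2, properties 4--6 of the quasi-Pl\"ucker coordinates, and computes $\kappa=q_{zt}^y\cdot q_{tz}^x$:

```python
import numpy as np

rng = np.random.default_rng(0)
inv = np.linalg.inv
I = np.eye(2)

def R():
    # a generic 2x2 real matrix playing the role of a "noncommutative scalar"
    return rng.normal(size=(2, 2))

# columns of a 2 x 4 matrix over the ring: col[m] = (a_{1m}, a_{2m})
col = [(R(), R()) for _ in range(4)]

def qd1(i, k):
    # quasideterminant of columns (k, i) with a_{1i} boxed
    return col[i][0] - col[k][0] @ inv(col[k][1]) @ col[i][1]

def qd2(i, k):
    # quasideterminant of columns (k, i) with a_{2i} boxed
    return col[i][1] - col[k][1] @ inv(col[k][0]) @ col[i][0]

def q(i, j, k):
    # quasi-Pluecker coordinate q_{ij}^k, computed from the first row
    return inv(qd1(i, k)) @ qd1(j, k)

# the Lemma: the first and the second row give the same answer
assert np.allclose(q(0, 1, 2), inv(qd2(0, 2)) @ qd2(1, 2))
# property 4 and its consequence q_{ij}^k q_{ji}^k = 1
assert np.allclose(q(0, 1, 2) @ q(1, 3, 2), q(0, 3, 2))
assert np.allclose(q(0, 1, 2) @ q(1, 0, 2), I)
# noncommutative skew-symmetry: q_{ij}^k q_{jk}^i q_{ki}^j = -1
assert np.allclose(q(0, 1, 2) @ q(1, 2, 0) @ q(2, 0, 1), -I)
# noncommutative Pluecker identity: q_{ij}^k q_{ji}^l + q_{il}^k q_{li}^j = 1
assert np.allclose(q(0, 1, 2) @ q(1, 0, 3) + q(0, 3, 2) @ q(3, 0, 1), I)

def kappa(x, y, z, t):
    # kappa(x,y,z,t) = q_{zt}^y q_{tz}^x, arguments are column indices
    return q(z, t, y) @ q(t, z, x)

K = kappa(0, 1, 2, 3)
print(np.round(K, 3))
```

With $1\times 1$ matrices (scalars) the same code reproduces the classical cross-ratio, in agreement with the explicit formula above.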
\begin{remark} Note that the group $GL_2(\mathcal R)$ acts on vectors in $\mathcal R^2$ by multiplication from the left: $(g,x)\mapsto gx$, and the group $\mathcal R^{\times}$ of invertible elements in $\mathcal R$ acts by multiplication from the right: $(\lambda, x)\mapsto x\lambda^{-1}$. These actions determine the action of $GL_2(\mathcal R)\times T_4(\mathcal R)$ on $P_4=\mathcal R^2\times \mathcal R^2\times \mathcal R^2\times \mathcal R^2$ where $T_4(\mathcal R)=(\mathcal R^{\times})^4$. The cross-ratios are {\it relative invariants} of this action. \end{remark} The following theorem generalizes the main property of cross-ratios to the noncommutative case (see \cite{R}). \begin{theorem} Let $\kappa (x,y,z,t)$ be defined and $\kappa (x,y,z,t)\neq 0,1$. Then $4$-tuples $(x,y,z,t)$ and $(x',y',z',t')$ from $P_4$ belong to the same orbit of $GL_2(\mathcal R)\times T_4(\mathcal R)$ if and only if there exists $\mu \in \mathcal R^{\times}$ such that \begin{equation} \kappa (x,y,z,t)=\mu \cdot \kappa (x',y',z',t')\cdot \mu ^{-1}\ . \end{equation} \end{theorem} The following corollary shows that the cross-ratios we defined satisfy {\it cocycle conditions} (see \cite{Lab}). \begin{corollary} For all vectors $x,y,z,t,w$ the following equations hold \begin{align*} \kappa(x,y,z,t)=\kappa (w,y,z,t)\kappa(x,w,z,t)\\ \kappa (x,y,z,t)=1-\kappa(t,y,z,x), \end{align*} if all the cross-ratios in these formulas exist. \end{corollary} The last corollary can be further generalized as follows: \begin{corollary} For all vectors $x,x_1, x_2,\dots x_n, z,t\in \mathcal R^2$ one has $$\kappa (x,x,z,t)=1$$ and $$ \kappa(x_{n-1},x_n,z,t)\kappa(x_{n-2}, x_{n-1},z,t)\dots \kappa(x_1,x_2,z,t)= \kappa(x_1,x_n,z,t) $$ where we assume that all the cross-ratios exist. \end{corollary} \subsection{Noncommutative cross-ratios and permutations} There are $24$ cross-ratios defined for vectors $x,y,z,t\in \mathcal R^2$, if we permute them.
They are related by the following formulas: \begin{proposition} Let $x,y,z,t\in \mathcal R^2$. Then \begin{align} q_{tz}^x\kappa (x,y,z,t)q_{zt}^x= q_{tz}^y\kappa (x,y,z,t)q_{zt}^y=\kappa (y,x,t,z);\\ q_{xz}^y\kappa (x,y,z,t)q_{zx}^y= q_{xz}^t\kappa (x,y,z,t)q_{zx}^t=\kappa (z,t,x,y);\\ q_{yz}^x\kappa (x,y,z,t)q_{zy}^x= q_{yz}^t\kappa (x,y,z,t)q_{zy}^t=\kappa (t,z,x,y);\\ \kappa (x,y,z,t)^{-1}=\kappa (y,x,z,t). \end{align} \end{proposition} Note again the appearance of conjugation in the noncommutative case; this happens since $q_{ij}^k$ and $q_{ji}^k$ are inverse to each other. Also observe that using Proposition 3.7 and the cocycle condition (Corollary 3.5) one can get all 24 formulas for cross-ratios of $x,y,z,t$ knowing just one of them. \subsection{Noncommutative triple ratio} Let $\mathcal R$ be a division ring as above; we shall work with the ``right $\mathcal R$-plane'' $\mathcal R^2$, i.e. we use right multiplication of vectors by the elements from $\mathcal R$. Consider the triangle with vertices $O(0,0),\ X(x,0)$, and $Y(0,y)$ in $\mathcal R^2$. Let $A(a_1,a_2)$ be a point on side $XY$, $B(b,0)$ be a point on side $OX$ and $C(0,c)$ be a point on side $OY$. Recall that the geometric condition $A\in XY$ means $$x^{-1}a_1+y^{-1}a_2=1.$$ Let $P(p_1,p_2)$ be the point of intersection of $XC$ and $YB$. Then one has $$ p_1=(y-c)(yb^{-1}-cx^{-1})^{-1}, \ p_2=(x-b)(by^{-1}-xc^{-1})^{-1}\ . $$ Let $Q$ be the point of intersection of $OP$ and $XY$. The non-commutative cross-ratio of $Y,A,Q,X$ is equal to $$ x^{-1}(1-p_1p_2^{-1}a_2a_1^{-1})^{-1}x. $$ By changing the order of $Y,A,Q,X$ we get, up to a conjugation, \begin{equation} p_1p_2^{-1}a_2a_1^{-1}=-(y-c)(yb^{-1}-cx^{-1})^{-1}(x-b)^{-1}xc^{-1}(yb^{-1}-cx^{-1})bx^{-1}(x-a_1)a_1^{-1} \end{equation} In the commutative case (up to a sign) we have $$ p_1p_2^{-1}a_2a_1^{-1}=(y-c)c^{-1}b(x-b)^{-1}(x-a_1)a_1^{-1} $$ (compare with Ceva's theorem in elementary geometry).
Note that $(x-a_1)^{-1}a_1=YA/AX$, and that (3.7) is a non-commutative analogue of the triple cross-ratio (see Section 6.5 in the book by Ovsienko and Tabachnikov \cite{OT}). \subsection {Noncommutative angles and cross-ratios.} Let $\mathcal R$ be a noncommutative division ring. Recall that noncommutative angles (or noncommutative $\lambda$-lengths) $T_i^{jk}=T_i^{kj}$ for vectors $A_1,A_2, A_3,A_4\in {\mathcal R}^2,\ A_i=(a_{1i},a_{2i})$ are defined by the formulas $$ T_i^{jk}=x_{ji}^{-1}x_{jk}x_{ik}^{-1}. $$ Here $x_{ij}= a_{1j} - a_{1i}a_{2i}^{-1}a_{2j}$, or $x_{ij} = a_{2j} - a_{2i}a_{1i}^{-1}a_{1j}$ (see \cite{BR18}). On the other hand the cross-ratio $\kappa (A_1,A_2,A_3,A_4)=\kappa(1,2,3,4)$ (see Definition 2.2) is $$ \kappa(1,2,3,4)=q_{34}^2q_{43}^1. $$ It follows that $$ \kappa (1,2,3,4)=x_{43}^{-1}(T_4^{23})^{-1}T_4^{31}x_{43}\ . $$ In other words, the cross-ratio is a ratio of two angles up to conjugation. Under the transformation $x_{ij}\mapsto \lambda _ix_{ij}$ we have $$ T_i^{jk}\mapsto T_i^{jk}\cdot \lambda_i^{-1}\ . $$ Also note that $$ T_i^{jk}(T_i^{mk})^{-1}=q_{ik}^jq_{ki}^m\ , $$ i.e. $T_i^{jk}(T_i^{mk})^{-1}$ is a cross-ratio. Further details on the properties of $T_i^{jk}$ can be found in \cite{BR18}. \section{Noncommutative Menelaus' and Ceva's theorems} \bigskip \subsection{Higher rank quasi-determinants: reminder} Let $A=(a_{ij})$, $i,j=1,2,\dots,n$ be a matrix over a ring. Denote by $A^{pq}$ the submatrix of the matrix $A$ obtained from $A$ by removing the $p$-th row and the $q$-th column. Let $r_p=(a_{p1}, a_{p2},\dots, \hat a_{pq},\dots a_{pn})$ be the row submatrix and $c_q=(a_{1q}, a_{2q},\dots, \hat a_{pq},\dots a_{nq})^T$ be the column submatrix of $A$. Following \cite{GR} we say that the quasideterminant $|A|_{pq}$ is defined if and only if the submatrix $A^{pq}$ is invertible. In this case $$ |A|_{pq}=a_{pq}-r_p(A^{pq})^{-1}c_q\ . $$ In the commutative case $|A|_{pq}=(-1)^{p+q}\det A/\det A^{pq}$.
It is sometimes convenient to use the notation $$ |A|_{pq}=\left | \begin{matrix} \dots &\dots&a_{1q}&\dots\\ \dots&\dots&\dots&\dots\\ a_{p1}&\dots&\boxed {a_{pq}}&\dots\\ \dots&\dots&\dots &\dots \end{matrix}\right |\ . $$ \medskip \subsection{Commutative Menelaus' and Ceva's theorems} We follow the affine geometry proof. Let the points $D,E,F$ lie on the straight lines $AB,BC$ and $AC$ respectively (see figure 1 (\textit{a})). Denote by $\lambda_D$ the coefficient of the homothety with center $D$ sending $B$ to $C$, by $\lambda_E$ the coefficient of the homothety with center $E$ sending $C$ to $A$, and by $\lambda_F$ the coefficient of the homothety with center $F$ sending $A$ to $B$. Note that (in a generic case) \[ \begin{aligned} \lambda_D&=(b_i-d_i)^{-1}(c_i-d_i), \ i=1,2,\\ \lambda_E&=(c_i-e_i)^{-1}(a_i-e_i), \ i=1,2,\\ \lambda_F&=(a_i-f_i)^{-1}(b_i-f_i), \ i=1,2. \end{aligned} \] Here $(a_1,a_2)$ are the coordinates of $A$ etc. We shall omit the indices and write $\lambda_D=(b-d)^{-1}(c-d)$, etc. \medskip \begin{theorem}\label{ComMenelaus} Points $E,D,F$ belong to a straight line if and only if $$ (a-f)^{-1}(b-f)\cdot (c-e)^{-1}(a-e)\cdot (b-d)^{-1}(c-d)=1\ . $$ \end{theorem} \begin{figure}[tb] \includegraphics[scale =0.8]{Menelaus.eps}\ \includegraphics[scale=0.8]{Ceva.eps}\\ \begin{center} (\textit{a})\hspace{8cm}(\textit{b}) \end{center} \caption{The classical Menelaus (part (\textit{a})) and Ceva (part (\textit{b})) theorems.} \end{figure} \noindent This is Menelaus' theorem in the commutative case. \begin{proof} The composition of the homotheties with coefficients $\lambda_D, \lambda_E, \lambda _F$ leaves the point $B$ unchanged, thus it is equal to a homothety with center $B$. On the other hand, if the points $D,E,F$ belong to the same straight line, then the center should belong to this line too, so the composition is equal to the identity. Hence $$\lambda_F \lambda_E \lambda _D=1,$$ i.e. $$ (a-f)^{-1}(b-f)\cdot (c-e)^{-1}(a-e)\cdot (b-d)^{-1}(c-d)=1.
$$ The converse statement can be proved by contradiction. \end{proof} Somewhat dually, one obtains Ceva's theorem (see figure 1 (\textit{b})): \begin{theorem}\label{ComCeva} Lines $AD$, $BE$ and $CF$ intersect each other in a point $O$ if and only if $$ (e-a)^{-1}(e-c)\cdot (f-b)^{-1}(f-a)\cdot (d-c)^{-1}(d-b)= -1\ . $$ \end{theorem} This is Ceva's theorem in the commutative case. \subsection{Non-commutative Menelaus' and Ceva's theorems} Let $\mathcal R$ be a noncommutative division ring. Consider $\mathcal R^2$ as the right vector space over $\mathcal R$. For a point $X\in \mathcal R^2$ denote by $x_i$ its $i$-th coordinate, $i=1,2$. Here and below we shall use the properties of quasideterminants, see \cite{GR,GR1}: \begin{proposition}\label{NCalign} Let the points $X$ and $Y$ be in generic position, i.e. assume that the matrix $$\left (\begin{matrix}x_1&y_1\\ x_2&y_2\end{matrix} \right ) $$ is invertible. Then the points $X,Y,Z\in\mathcal R^2$ belong to the same straight line (in the sense of linear algebra) if and only if $$ \left | \begin{matrix} x_1 & y_1 & z_1\\ x_2 &y_2 & z_2\\ 1 & 1 & {\boxed 1} \end{matrix}\right | =0\ . $$ \end{proposition} \begin{proof} From the general theory of quasideterminants it follows that $$ \left | \begin{matrix} x_1 & y_1 & z_1\\ x_2 &y_2 & z_2\\ 1 & 1 & {\boxed 1} \end{matrix}\right | = 1 - \lambda -\mu $$ where $\lambda, \mu\in \mathcal R$ satisfy the equation $X\lambda + Y\mu =Z$. Note that $X$, $Y$ and $Z$ belong to the same straight line if and only if $\lambda + \mu =1$. \end{proof} \begin{corollary}\label{NCaligncond} Assume that $x_i-y_i\in\mathcal R,\ i=1,2$ are invertible. Then $X,Y,Z$ belong to one straight line if and only if $$ (y_1-x_1)^{-1}(z_1-x_1)=(y_2-x_2)^{-1}(z_2-x_2)\ .
$$ \end{corollary} \begin{proof} Note that $$ \left | \begin{matrix} x_1 & y_1 & z_1\\ x_2 &y_2 & z_2\\ 1 & 1 & {\boxed 1} \end{matrix}\right | = -\left |\begin{matrix} x_1&y_1-x_1\\ \boxed {x_2}&y_2-x_2\end{matrix} \right |^{-1}\cdot \left |\begin{matrix} y_1-x_1&z_1-x_1\\ y_2-x_2&\boxed {z_2-x_2}\end{matrix} \right | $$ and that $(y_1-x_1)^{-1}(z_1-x_1)=(y_2-x_2)^{-1}(z_2-x_2)$ if and only if $$ \left |\begin{matrix} y_1-x_1&z_1-x_1\\ y_2-x_2&\boxed {z_2-x_2}\end{matrix} \right | = 0\ . $$ \end{proof} \subsection{NC analogue of Konopelchenko equations} Let again $\mathcal R$ be a division ring. Consider $\mathcal R^2$ as the right module over $\mathcal R$. \begin{proposition} Let $F_1=(x_1,y_1),\ F_2=(x_2,y_2)$ be two points in $\mathcal R^2$ in a generic position. Then the equation of the straight line $L_{12}$ passing through $F_1$ and $F_2$ is $$ (y_2-y_1)^{-1}(y-y_1)=(x_2-x_1)^{-1}(x-x_1). $$ \end{proposition} \begin{corollary} An equation of the line $L_{12}'$ parallel to $L_{12}$ and passing through $(0,0)$ is $$ (y_2-y_1)^{-1}y=(x_2-x_1)^{-1}x, $$ i.e. any point $F_{12}$ on $L_{12}'$ has coordinates $((x_2-x_1)f_{12}, (y_2-y_1)f_{12})$ for some $f_{12}\in\mathcal R$. \end{corollary} The proposition and the corollary are both straightforward consequences of Proposition \ref{NCalign} and Corollary \ref{NCaligncond}. Denote by $L_{ij}$ the straight line passing through $F_i=(x_i,y_i)$ and $F_j=(x_j,y_j)$ and by $L_{ij}'$ the parallel line through $(0,0)$. Consider now (additionally to the line $L_{12}'$ and the point $F_{12}=((x_2-x_1)f_{12}, (y_2-y_1)f_{12})$ on it) the points $F_{23}=((x_3-x_2)f_{23}, (y_3-y_2)f_{23})$ on the line $L_{23}'$ and $F_{31}=((x_1-x_3)f_{31}, (y_1-y_3)f_{31})$ on the line $L_{31}'$. \begin{proposition} For generic points $F_1,F_2,F_3$ the points $F_{12}, F_{23}, F_{31}$ belong to a straight line iff $$ f_{12}^{-1} + f_{23}^{-1} + f_{31}^{-1}\ = 0 .
$$ \end{proposition} \medskip \noindent{\bf Warning:} Note that before we considered points with coordinates $(x_1,x_2)$, $(y_1,y_2)$, $(z_1,z_2)$, while now the coordinates are $(x_i, y_i)$. \begin{proof} According to Proposition \ref{NCalign}, in order to show that $F_{12}$, $F_{23}$, $F_{31}$ lie on the same straight line it is necessary and sufficient to check that $$ \theta: =\left |\begin{matrix} (x_2-x_1)f_{12}&(x_3-x_2)f_{23}&(x_1-x_3)f_{31}\\ (y_2-y_1)f_{12}&(y_3-y_2)f_{23}&(y_1-y_3)f_{31}\\ 1&1&\boxed {1}\end{matrix} \right | = 0. $$ According to the standard properties of quasideterminants, $$ \theta= \left |\begin{matrix} (x_2-x_1)&(x_3-x_2)&(x_1-x_3)\\ (y_2-y_1)&(y_3-y_2)&(y_1-y_3)\\ f_{12}^{-1}&f_{23}^{-1}&\boxed {f_{31}^{-1}}\end{matrix} \right |f_{31}. $$ Adding the first two columns to the third one does not change $\theta$, so $$ \theta= \left |\begin{matrix} (x_2-x_1)&(x_3-x_2)&0\\ (y_2-y_1)&(y_3-y_2)&0\\ f_{12}^{-1}&f_{23}^{-1}&\boxed {f_{12}^{-1}+f_{23}^{-1}+f_{31}^{-1}}\end{matrix} \right |f_{31}= (f_{12}^{-1}+f_{23}^{-1}+f_{31}^{-1})f_{31}. $$ \end{proof} This is a noncommutative generalization of formula (32) from Konopelchenko \cite{Konop}. \subsection{Noncommutative Menelaus theorem and quasi-Pl\"ucker coordinates} Let, as above, $\mathcal R$ be a noncommutative division ring. Consider $\mathcal R^2$ as a right vector space over $\mathcal R$. For a point $X\in\mathcal R^2$ denote by $x_i$ its $i$-th coordinate, $i=1,2$. Recall that points $X,Y,Z$ are collinear if and only if $$ \left | \begin{matrix} x_1 & y_1 & z_1\\ x_2 &y_2 & z_2\\ 1 & 1 & {\boxed 1} \end{matrix}\right | =0\ . $$ \begin{corollary} \label{cor1} Points $X,Y,Z$ are collinear if and only if $$ (y_1-x_1)^{-1}(z_1-x_1)=(y_2-x_2)^{-1}(z_2-x_2) $$ or, equivalently, $$ (x_2-y_2)^{-1}(z_2-y_2)=q_{XZ}^Y.
$$ \end{corollary} \noindent \begin{remark}The second identity is equivalent to the equality $$ \left | \begin{matrix} x_1 & y_1 & {\boxed {z_1}}\\ x_2 &y_2 & z_2\\ 1 & 1 & 1\end{matrix}\right | =0\ . $$ \end{remark} \begin{proposition}\label{prop1} Let $A,B,C$ be non-collinear points in $\mathcal R^2$. Then any point $P\in\mathcal R^2$ can be uniquely written as $$P=At+Bu+Cv, \ \ t,u,v\in \mathcal R, \ \ t+u+v=1\ .$$ \end{proposition} We will write $P=[t,u,v]$. \begin{proposition}\label{kap} Let $P_i=[t_i,u_i,v_i]$, $i=1,2,3$. Then $P_1,P_2,P_3$ are collinear if and only if $$ \left | \begin{matrix} t_1 & t_2 & {\boxed {t_3}}\\ u_1 &u_2 & u_3\\ v_1 & v_2 & v_3\end{matrix}\right | =0\ . $$ \end{proposition} We now follow the book by Kaplansky \cite{Kap}, pages 88--89. Consider a triangle $ABC$ (with the vertices going anticlockwise). Take a point $R$ on the line $AB$, a point $P$ on the line $BC$, and a point $Q$ on the line $AC$. Then $$ P=B(1-t)+Ct, \ Q=C(1-u)+Au, \ R=A(1-v)+Bv\ . $$ Proposition \ref{kap} implies \begin{theorem}\label{thmABC} Points $P,Q,R$ are collinear if and only if $$ u(1-u)^{-1}t(1-t)^{-1}v(1-v)^{-1}=-1\ . $$ \end{theorem} Note that $t(1-t)^{-1}=(c_1-p_1)^{-1}(p_1-b_1)$. Corollary \ref{cor1} implies that $$t(1-t)^{-1}=-q_{CB}^P$$ where $q_{CB}^P$ is a quasi-Pl\"ucker coordinate. Similarly, $$ u(1-u)^{-1}=-q_{AC}^Q, \ \ v(1-v)^{-1}=-q_{BA}^R $$ and Theorem \ref{thmABC} implies \begin{theorem} $$ q_{AC}^Qq_{CB}^Pq_{BA}^R=1\ . $$ \end{theorem} \section{Relation with matrix cross-ratio.} In this section we discuss the cross-ratio in noncommutative algebras, introduced above in terms of \textit{quasideterminants}, and its relation with the \textit{operator cross-ratio} of Zelikin (see \cite{Zelikin06} and also Chapter 5 of \cite{Zelikin-book}). We also consider the infinitesimal part of the cross-ratio in this case, which, just as in the classical case, will lead us to the construction of the noncommutative Schwarz derivative.
Further details about the construction of the Schwarzian will be given in the next section. Recall that we defined the cross-ratio of four elements $a,b,c,d\in \mathcal R^{\oplus 2}$ by the following explicit formula: \[ \kappa(a,b,c,d)=\begin{vmatrix}b_1 & \boxed{d_1}\\ b_2 & d_2\end{vmatrix}^{-1}\begin{vmatrix}b_1 & \boxed{c_1}\\ b_2 & c_2\end{vmatrix}\begin{vmatrix}a_1 & \boxed{c_1}\\ a_2 & c_2\end{vmatrix}^{-1}\begin{vmatrix}a_1 & \boxed{d_1}\\ a_2 & d_2\end{vmatrix}, \] under the assumption that all these expressions exist (in fact, besides the existence of the inverses of $a_2$ and $b_2$, it is enough to assume that the matrices $\begin{pmatrix}a_1 & c_1\\ a_2 & c_2\end{pmatrix}$ and $\begin{pmatrix}b_1 & d_1\\ b_2 & d_2\end{pmatrix}$ are invertible). The expression $\kappa(a,b,c,d)$ has various algebraic properties (see Sections 1--3). We are now going to compare it with the \textit{operator cross-ratio} of Zelikin (see \cite{Zelikin06}). To this end we begin with a description of his construction. Let $\mathcal H$ be an even-dimensional (possibly infinite-dimensional) vector space; let us fix its polarization $\mathcal H=V_0\oplus V_1$, where the subspaces $V_0,\,V_1$ have the same dimension (in the infinite-dimensional case one can assume that there is a fixed isomorphism $\psi:V_0\to V_1$ between them); let $(\mathscr P_1,\mathscr P_2)$ and $(\mathscr Q_1,\mathscr Q_2)$ be two other pairs of subspaces polarizing $\mathcal H$, i.e. $\mathscr P_i,\,\mathscr Q_i$ are isomorphic to $V_j$ and $\mathscr P_1$ (resp. $\mathscr Q_1$) is transversal to $\mathscr P_2$ (resp. to $\mathscr Q_2$).
Then the cross-ratio of these two pairs (or of the spaces $\mathscr P_1,\,\mathscr P_2,\,\mathscr Q_1,\,\mathscr Q_2$) is the operator \[ \mathrm{DV}(\mathscr P_1,\mathscr P_2,\mathscr Q_1,\mathscr Q_2)=(\mathscr P_1\stackrel{\mathscr P_2}{\to}\mathscr Q_1\stackrel{\mathscr Q_2}{\to}\mathscr P_1). \] Here we use the notation from [9], where $\mathscr P_1\stackrel{\mathscr P_2}{\to}\mathscr Q_1$ denotes the projection of $\mathscr P_1$ to $\mathscr Q_1$ along $\mathscr P_2$, and similarly for the second arrow. In the cited paper the following explicit formula for $\mathrm{DV}$ was proved: let $\mathscr P_i$ be given by the graph of an operator $P_i:V_0\to V_1,\ i=1,2$, and similarly for $\mathscr Q_j$; then \[ \mathrm{DV}(\mathscr P_1,\mathscr P_2,\mathscr Q_1,\mathscr Q_2)=(P_1-P_2)^{-1}(P_2-Q_1)(Q_1-Q_2)^{-1}(Q_2-P_1):V_0\to V_0. \] The invertibility of the operators $P_1-P_2$ and $Q_1-Q_2$ is provided by the transversality of $\mathscr P_1$ and $\mathscr P_2$ (resp. $\mathscr Q_1$ and $\mathscr Q_2$). The first claim we are going to make is the following: \begin{proposition} The operator cross-ratio $\mathrm{DV}(\mathscr Q_2,\mathscr P_2,\mathscr Q_1,\mathscr P_1)$ (if it exists) is equal to $\kappa(p_1,p_2,q_1,q_2)$, for $p_1=\begin{pmatrix} 1\\ P'_1\end{pmatrix},\ p_2=\begin{pmatrix} 1\\ P'_2\end{pmatrix},\ q_1=\begin{pmatrix} 1\\ Q'_1\end{pmatrix},\ q_2=\begin{pmatrix} 1\\ Q'_2\end{pmatrix}$, where $1$ is the identity operator on $V_0$ and we identify $V_0$ and $V_1$ using the fixed map $\psi$, so that $P'_i=\psi^{-1}\circ P_i:V_0\to V_0$.
\end{proposition} \begin{proof} This is a direct computation based on the explicit formula: \[ \begin{aligned} \kappa(p_1,p_2,q_1,q_2)&=\begin{vmatrix}1 & \boxed{1}\\ P'_2 & Q'_2\end{vmatrix}^{-1}\begin{vmatrix}1 & \boxed{1}\\ P'_2 & Q'_1\end{vmatrix}\begin{vmatrix}1 & \boxed{1}\\ P'_1 & Q'_1\end{vmatrix}^{-1}\begin{vmatrix} 1 & \boxed{1}\\ P'_1 & Q'_2\end{vmatrix}\\ &=(1-(P_2')^{-1}Q'_2)^{-1}(1-(P_2')^{-1}Q_1')(1-(P_1')^{-1}Q_1')^{-1}(1-(P_1')^{-1}Q_2')\\ &=(1-P_2^{-1}Q_2)^{-1}(1-P_2^{-1}Q_1)(1-P_1^{-1}Q_1)^{-1}(1-P_1^{-1}Q_2)\\ &=(P_2-Q_2)^{-1}P_2P_2^{-1}(P_2-Q_1)(P_1-Q_1)^{-1}P_1P_1^{-1}(P_1-Q_2)\\ &=(Q_2-P_2)^{-1}(P_2-Q_1)(Q_1-P_1)^{-1}(P_1-Q_2)\\ &=\mathrm{DV}(\mathscr Q_2,\mathscr P_2,\mathscr Q_1,\mathscr P_1). \end{aligned} \] \end{proof} Observe, that the role of $\psi$ is insignificant here: in effect, one can define the quasideterminants in the context of categories, i.e. for $A$ being a matrix of morphisms in certain category with its entries $a_{ij}$ being maps from the $i$-th object to the $j$-th object (see \cite{GGRW}). This makes the use of $\psi$ redundant. \subsection{Cocycle identity: cross-ratio and ``classifying map''} It is shown in \cite{Zelikin06} that the following equality holds for the $\mathrm{DV}$: let $(\mathscr P_1,\mathscr P_2)$ be a polarizing pair, and $\mathscr X,\,\mathscr Y,\,\mathscr Z$ three hyperplanes, then \begin{equation} \label{eq:coc1} \mathrm{DV}(\mathscr P_1,\mathscr X,\mathscr P_2,\mathscr Y)\,\mathrm{DV}(\mathscr P_1,\mathscr Y,\mathscr P_2,\mathscr Z)\,\mathrm{DV}(\mathscr P_1,\mathscr Z,\mathscr P_2,\mathscr X)=1, \end{equation} or, using the algebraic properties of $\mathrm{DV}$, \begin{equation} \label{eq:coc2} \mathrm{DV}(\mathscr P_1,\mathscr X,\mathscr P_2,\mathscr Y)\,\mathrm{DV}(\mathscr P_1,\mathscr Y,\mathscr P_2,\mathscr Z)=\mathrm{DV}(\mathscr P_1,\mathscr X,\mathscr P_2,\mathscr Z) \end{equation} if all three terms are well-defined. 
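Both the identification of $\kappa$ with the operator cross-ratio and the cocycle identity \eqref{eq:coc2} are identities in any ring where the relevant elements are invertible, so they lend themselves to a quick numerical sanity check. The following sketch (our own illustration, not part of the formal development; all names are ours) models the noncommutative ring by $3\times 3$ real matrices, in which generic elements are invertible:

```python
# Sanity check over the noncommutative ring of 3x3 real matrices:
#  (i)  kappa(p1,p2,q1,q2) = DV(Q2,P2,Q1,P1)  (the proposition above),
#  (ii) the cocycle identity kappa(y,x,p2,p1) kappa(z,y,p2,p1) = kappa(z,x,p2,p1).
import numpy as np

rng = np.random.default_rng(0)
inv = np.linalg.inv

def rand():  # generic 3x3 matrix, invertible with probability 1
    return rng.standard_normal((3, 3))

def qd(u, v):
    # 2x2 quasideterminant |u1 v1; u2 v2| with boxed entry v1: v1 - u1 u2^{-1} v2
    return v[0] - u[0] @ inv(u[1]) @ v[1]

def kappa(a, b, c, d):
    # kappa(a,b,c,d) = |b d|^{-1} |b c| |a c|^{-1} |a d|
    return inv(qd(b, d)) @ qd(b, c) @ inv(qd(a, c)) @ qd(a, d)

I = np.eye(3)
P1, P2, Q1, Q2 = rand(), rand(), rand(), rand()
p1, p2, q1, q2 = (I, P1), (I, P2), (I, Q1), (I, Q2)

# (i) kappa against the operator cross-ratio DV(Q2, P2, Q1, P1)
DV = inv(Q2 - P2) @ (P2 - Q1) @ inv(Q1 - P1) @ (P1 - Q2)
print(np.allclose(kappa(p1, p2, q1, q2), DV))

# (ii) cocycle identity for three random "hyperplanes" x, y, z
x, y, z = (rand(), rand()), (rand(), rand()), (rand(), rand())
lhs = kappa(y, x, p2, p1) @ kappa(z, y, p2, p1)
print(np.allclose(lhs, kappa(z, x, p2, p1)))
```

Both checks pass for generic data: the first is exactly the computation in the proof above, and the second reduces to a telescoping product of the factors $|\,\cdot\,|^{-1}|\,\cdot\,|$.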
This relation can be reinterpreted topologically, by saying that the operator cross-ratio corresponds to the change-of-coordinates function in the tautological fibre bundle over the Grassmannian of polarizations of $\mathcal H$ (see \cite{Zelikin06}). Recall now that the noncommutative cross-ratio $\kappa(a,b,c,d)$ verifies a similar relation \begin{equation} \label{eq:coc3} \kappa(y,x,p_2,p_1)\kappa(z,y,p_2,p_1)=\kappa(z,x,p_2,p_1), \end{equation} see Sections 2 and 3 above for a purely algebraic proof. One can ask whether there exists an analogous topological interpretation of $\kappa$, i.e. whether one can construct an analog of the tautological bundle in the purely algebraic case. Here we shall sketch a construction intended to answer this question, postponing the details to a forthcoming paper dealing with the topological applications of the noncommutative cross-ratio. In order to give an interpretation of relations \eqref{eq:coc1}--\eqref{eq:coc3}, let us fix a vector $\omega=\begin{pmatrix}\omega_1\\ \omega_2\end{pmatrix}\in \mathcal R^{\oplus 2}$; let $\mathcal R^2_\omega$ denote the set of the elements $\begin{pmatrix}a_1\\ a_2\end{pmatrix}\in \mathcal R^{\oplus 2}$ such that the matrix \[ \begin{pmatrix}\omega_1 & a_1\\ \omega_2 & a_2\end{pmatrix} \] is invertible. Let $a\in \mathcal R^2_\omega$ and let $x\in\mathcal R^{\oplus 2}$ be such that both matrices \[ \begin{pmatrix}a_1 & x_1\\ a_2 & x_2\end{pmatrix}\ \mbox{and}\ \begin{pmatrix}\omega_1 & x_1\\ \omega_2 & x_2\end{pmatrix} \] are invertible. It is clear that the set of such $x$ is equal to the intersection $\mathcal R^2_\omega\bigcap \mathcal R^2_a$; we shall denote it by $\mathcal R^2_{a,\omega}=\tilde{\mathcal R}^2_a$ since $\omega$ is fixed.
Consider now a \v Cech type simplicial complex $\check C_\cdot(\mathcal R^2)$: its set of $n$-simplices is spanned by the disjoint union of the intersections \[ \check C_n(\mathcal R^2;\omega)=\coprod_{a_0,\dots,a_n}\tilde {\mathcal R}^2_{a_0}\bigcap \tilde {\mathcal R}^2_{a_1}\bigcap\dots\bigcap\tilde {\mathcal R}^2_{a_n}, \] and the faces/degeneracies are given by omitting/repeating the terms in the intersections, respectively. Then the formula \[ \phi=\{\phi_{a_0,a_1}\}:\check C_1(\mathcal R^2;\omega)\to\mathcal R^*,\ \phi_{a_0,a_1}(x)=\kappa(a_1,a_0,x,\omega), \] determines a map on the second term of this complex. Observe that the cocycle condition can now be interpreted as the statement that $\phi$ can be extended to a simplicial map from $\check C_\cdot(\mathcal R^2;\omega)$ to the bar-resolution of the group $\mathcal R^*$ of invertible elements in $\mathcal R$. Namely, put \begin{align*} \phi_0=1&:\check C_0(\mathcal R^2;\omega)\to [1]=B_0(\mathcal R^*);\\ \phi_1=\phi&:\check C_1(\mathcal R^2;\omega)\to\mathcal R^*=B_1(\mathcal R^*);\\ \intertext{and for all other $n\ge 2$} \phi_n&:\check C_n(\mathcal R^2;\omega)\to(\mathcal R^*)^{\times n}=B_n(\mathcal R^*)\\ \intertext{given by the formula} \phi_n(x)&=[\phi_{a_0,a_1}(x)|\phi_{a_1,a_2}(x)|\dots|\phi_{a_{n-1},a_n}(x)], \end{align*} for all $x\in \tilde{\mathcal R}^2_{a_0,\dots,a_n}$. Then \begin{proposition} The collection of maps $\{\phi_n\}_{n\ge 0}$ determines a simplicial map from $\check C_\cdot(\mathcal R^2,\omega)$ to $B_\cdot(\mathcal R^*)$. \end{proposition} \begin{remark} The construction we just described bears a striking similarity to the well-known Goncharov complex (see \cite{Gonch}), so one can wonder whether there is any relation with the actual Goncharov Grassmannian complex and higher cross-ratios/polylogarithms in this case.
\end{remark} \subsection{Schwarzian operator} In the classical theory the Schwarzian is a quadratic differential operator measuring the ``non-projectivity'' of a diffeomorphism of the (real or complex) projective line. Applying the same ideas to the noncommutative plane, we shall obtain an analog of this operator as the infinitesimal part of the deformation of the cross-ratio. From the construction we use it then follows that this operator is invariant with respect to the action of $GL_2(\mathcal R)$ and to the multiplication by invertible elements from $\mathcal R$. First, following the ideas in \cite{Zelikin06}, we consider a smooth one-parameter family $Z(t)=\begin{pmatrix}Z(t)_1\\ Z(t)_2\end{pmatrix}$ of elements in $\mathcal R^{\oplus 2}$, such that for all distinct $t_1,t_2,t_3,t_4$ the cross-ratio $\kappa(Z(t_1),Z(t_2),Z(t_3),Z(t_4))$ is well defined. Then let us consider the function \[ \begin{aligned} f(t,t_1,t_2,t_3)&=\kappa(Z(t_3),Z(t_1),Z(t),Z(t_2))\\ &=(z(t_2) -z(t_1))^{-1}(z(t_1) -z(t))(z(t) -z(t_3))^{-1}(z(t_3) - z(t_2)), \end{aligned} \] where $z(t)=Z(t)_1^{-1}Z(t)_2$. Fix $t=0$, and let $t_2\to 0$. Then $f(0,t_1,t_2,t_3)\to 1$ and \[ \frac{\partial f}{\partial t_2}(0,t_1,0,t_3)=-(z(0)-z(t_1))^{-1}z'(0)+(z(0)-z(t_3))^{-1}z'(0). \] Thus, \[ f(t,t_1,t_2,t_3)=1-(t_2-t)\left((z(t)-z(t_1))^{-1}z'(t)-(z(t)-z(t_3))^{-1}z'(t)\right)+o(t_2-t). \] If $t_1=t_3$, the derivative on the right vanishes; consider the second partial derivative: \[ \frac{\partial^2f}{\partial t_3\partial t_2}(0,t_1,0,t_1)=-(z(0)-z(t_1))^{-1}z'(t_1)(z(0)-z(t_1))^{-1}z'(0), \] so that \[ f(t,t_1,t_2,t_3)=1-(t_2-t)(t_3-t_1)(z(t)-z(t_1))^{-1}z'(t_1)(z(t)-z(t_1))^{-1}z'(t)+o((t_2-t)(t_1-t_3)). \] This expression has a singularity at $t_1=0$.
Now, using the Taylor series for $z(t)$, we compute for $t_1\to 0$: \[ \frac{\partial^2f}{\partial t_3\partial t_2}(0,t_1,0,t_1)=t_1^{-2}\left(1+\frac{t_1^2(z'(0))^{-1}z'''(0)}{6}-\frac{t_1^2((z'(0))^{-1}z''(0))^2}{4}+...\right) \] where $...$ denotes the terms of degree $3$ and higher in $t_1$. So we obtain \begin{equation} \label{eq:schwa1} \frac{\partial^2f}{\partial t_3\partial t_2}(0,t_1,0,t_1)=t_1^{-2}\left(1+\frac{t_1^2}{6}S(Z)+...\right) \end{equation} where we put \[ S(Z)=(z'(0))^{-1}z'''(0)-\frac32((z'(0))^{-1}z''(0))^2. \] Here $Z$ and $z$ are related as explained above. This differential operator is well defined on functions with values in $\mathcal R^{\oplus 2}$; it is invariant with respect to the action of $GL_2(\mathcal R)$, and it is conjugated by $\lambda\in\mathcal R^\times$ when $Z$ is multiplied by $\lambda$ on the right. Thus we come up with the following statement: \begin{proposition} Suppose we have a $1$-parameter family of elements in the projective noncommutative plane $\mathcal R^{\oplus 2}$; then the infinitesimal part of the cross-ratio of four generic points in this family is equal to the noncommutative Schwarzian $S(Z)$. \end{proposition} \begin{proof} Above we have given a sketch explaining the formula for $S(Z)$; however, the way we obtained formula \eqref{eq:schwa1} there was somewhat artificial. Now let us consider the formal Taylor expansion of $z(t_i)$ near $t_i=0:\ z(t_i)=z(0)+z'(0)t_i+\frac12z''(0)t_i^2+\frac16z'''(0)t_i^3+...,\ t_i=t,t_1,t_2,t_3$. Then (omitting the argument $(0)$ from our notation) \[ \begin{aligned} z(t_i)-z(t_j)&=z'(t_i-t_j)+\frac12z''(t_i^2-t_j^2)+\frac16z'''(t_i^3-t_j^3)+...\\ &=(t_i-t_j)(z'+\frac12z''(t_i+t_j)+\frac16z'''(t_i^2+t_it_j+t_j^2))+...
\end{aligned} \] and similarly \[ \begin{aligned} (z&(t_i)-z(t_j))^{-1}(z(t_k)-z(t_l))=\\ &=\frac{t_k-t_l}{t_i-t_j}\Bigl(1+\frac12(z')^{-1}z''(t_i+t_j)+\frac16(z')^{-1}z'''(t_i^2+t_it_j+t_j^2)\Bigr)^{-1}\\ &\qquad\qquad\qquad\Bigl(1+\frac12(z')^{-1}z''(t_k+t_l)+\frac16(z')^{-1}z'''(t_k^2+t_kt_l+t_l^2)\Bigr)+...\\ &=\frac{t_k-t_l}{t_i-t_j}\Bigl(1-\frac12(z')^{-1}z''(t_i+t_j)-\frac16(z')^{-1}z'''(t_i^2+t_it_j+t_j^2)+\frac14((z')^{-1}z'')^2(t_i+t_j)^2\Bigr)\\ &\qquad\qquad\qquad\Bigl(1+\frac12(z')^{-1}z''(t_k+t_l)+\frac16(z')^{-1}z'''(t_k^2+t_kt_l+t_l^2)\Bigr)+...\\ &=\frac{t_k-t_l}{t_i-t_j}\Bigl(1+\frac12(z')^{-1}z''(t_k+t_l-t_i-t_j)+\frac16(z')^{-1}z'''(t_k^2+t_kt_l+t_l^2-t_i^2-t_it_j-t_j^2)\\ &\qquad\qquad\qquad +\frac14((z')^{-1}z'')^2((t_i+t_j)^2-(t_k+t_l)(t_i+t_j))\Bigr)+... \end{aligned} \] where we use $...$ to denote the terms of degree $3$ and higher in the $t_i$. In particular, taking $t_i=t_2,\ t_j=t_1,\ t_k=t_1,\ t_l=t$, we obtain \[ \begin{aligned} (z&(t_2)-z(t_1))^{-1}(z(t_1)-z(t))=\\ &=\frac{t_1-t}{t_2-t_1}\Bigl(1+(t-t_2)\bigl(\frac12(z')^{-1}z''+\frac16(z')^{-1}z'''(t+t_1+t_2)-\frac14((z')^{-1}z'')^2(t_2+t_1)\bigr)\Bigr)+... \end{aligned} \] Similarly, with $t_i=t,\ t_j=t_3,\ t_k=t_3,\ t_l=t_2$, we have: \[ \begin{aligned} (z&(t)-z(t_3))^{-1}(z(t_3)-z(t_2))=\\ &=\frac{t_3-t_2}{t-t_3}\Bigl(1+(t_2-t)\bigl(\frac12(z')^{-1}z''+\frac16(z')^{-1}z'''(t+t_2+t_3)-\frac14((z')^{-1}z'')^2(t+t_3)\bigr)\Bigr)+... \end{aligned} \] Finally, taking the product of these two expressions we obtain \[ f(t,t_1,t_2,t_3)=\frac{(t_1-t)(t_3-t_2)}{(t_2-t_1)(t-t_3)}\left(1+(t_2-t)(t_3-t_1)\left(\frac16(z'(0))^{-1}z'''(0)-\frac14((z'(0))^{-1}z''(0))^2\right)\right)+... \] \end{proof} Compare this formula with formula (4.7) from the paper \cite{AZ}. We call the expression $Sch(z)=(z')^{-1}z'''-\frac32((z')^{-1}z'')^2$ \textit{the noncommutative Schwarzian of $z(t)$}.
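When $\mathcal R$ is commutative, the expression $(z')^{-1}z'''-\frac32((z')^{-1}z'')^2$ reduces to the classical Schwarz derivative, whose M\"obius invariance can be checked symbolically. The following sketch (our own illustration, assuming sympy is available; all names are ours) performs this check in the commutative case:

```python
# Commutative sanity check: the classical Schwarzian
#   Sch(h) = h'''/h' - (3/2)(h''/h')^2
# is invariant under Mobius transformations h -> (a h + b)/(c h + d).
import sympy as sp

t, a, b, c, d = sp.symbols('t a b c d')
h = sp.Function('h')(t)

def sch(f):
    # classical Schwarz derivative of f with respect to t
    return sp.diff(f, t, 3)/sp.diff(f, t) \
        - sp.Rational(3, 2)*(sp.diff(f, t, 2)/sp.diff(f, t))**2

g = (a*h + b)/(c*h + d)
print(sp.simplify(sch(g) - sch(h)))  # simplifies to 0
```

In the noncommutative case the analogous invariance (up to conjugation) follows from the derivation via the cross-ratio, as explained next.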
Just like the classical Schwarz derivative, this operator is invariant (up to conjugation) with respect to the M\"obius transformations in $\mathcal R^2$: this is a direct consequence of the way we derived this formula from the (operator) cross-ratio. \subsection{Infinitesimal Ceva ratio} The following expression is intended as a 2-dimensional analog of the Schwarzian operator. More accurately, the Schwarz derivative can be regarded as the infinitesimal transformation of the cross-ratio under a diffeomorphism of the projective line. It is natural to assume that the role of the cross-ratio in the projective plane should in some sense be played by the Ceva theorem (see figure 1, part (\textit{b})). Thus here we try to find the infinitesimal part of the transformation of the Ceva ratio under a diffeomorphism; in the general case this is quite a difficult question, so we do it under certain additional conditions. Let $\xi,\,\eta$ be two commuting vector fields on a manifold $M$, and let $f:M\to M$ be a self-map of $M$ such that $df(\xi)=\kappa\cdot\xi,\,df(\eta)=\kappa\cdot\eta$ for some smooth function $\kappa\in C^\infty(M)$. It follows from this condition that $f$ maps integral trajectories of both fields, and of any of their linear combinations with constant coefficients, to integral trajectories. One can imagine this map as a ``change of coordinates along a 2-dimensional net'', or a generalized conformal map. However, we do not assume that these fields are linearly independent; they can even be proportional to each other. Let us consider the following construction: take any point $x$; let $\phi(t)$ and $\psi(s)$ be the one-parameter families of diffeomorphisms generated by $\xi$ and $\eta$ respectively. Since these fields commute, the composition $\phi(-r)\circ\psi(r)=\psi(r)\circ\phi(-r)=:\theta(r)$ is the one-parameter family corresponding to their difference $\zeta=\eta-\xi$.
Consider now the infinitesimal ``triangle'' at $x$: first we move from $x$ to $\phi(\epsilon)(x)$, then from this point to $\phi(2\epsilon)(x)$; then we apply to this point $\theta(\epsilon)$ and $\theta(2\epsilon)$; and finally we apply twice the diffeomorphism $\psi(-\epsilon)$. By definition, we come to the point $x$ again, having traced a ``curvilinear triangle'' $ABC$ ($A=x,\,B=\phi(2\epsilon)(x),\ C=\psi(2\epsilon)(x)$) with points $K,L,M$ on its sides ($K=\phi(\epsilon)(x),\,L=(\phi(\epsilon)\circ\psi(\epsilon))(x),\,M=\psi(\epsilon)(x)$). If we use the inherent ``time'' along the trajectories of the vector fields to measure length along these trajectories, then the points $K,L$ and $M$ will be midpoints of the sides of $ABC$ and the standard Ceva relation will be trivially equal to $1$: \[ c(A,B,C;K,L,M)=\frac{AK}{KB}\cdot\frac{BL}{LC}\cdot\frac{CM}{MA}=1. \] Consider now the image of the triangle $ABC$ under $f$: the points $K,L$ and $M$ will again fall on the ``sides'' of this image, however the lengths will be somehow distorted (in fact, even the fields $df(\xi)=\kappa\cdot\xi$ and $df(\eta)=\kappa\cdot\eta$ need not commute). Let us now compute this ``distortion'' up to degree $2$ in $\epsilon$: \begin{proposition} Up to degree $2$ the difference between the distorted Ceva relation and $1$ is trivial; if we put \[ c(f(A),f(B),f(C);f(K),f(L),f(M))-1=:\epsilon^2S_3(f,\xi,\eta;x)+o(\epsilon^2), \] then \[ S_3(f,\xi,\eta;x)=\frac56\frac{\kappa''_{\eta\eta}(x)-\kappa''_{\xi\xi}(x)}{\kappa(x)}, \] where we use the standard notation $\kappa'_\xi=\xi(\kappa),\,\kappa'_\eta=\eta(\kappa)$.
\end{proposition} \begin{proof} We compute: \[ \begin{aligned} f(A)f(K)&=\epsilon\kappa(x)+\frac12\epsilon^2\kappa'_\xi(x)+\frac16\epsilon^3\kappa''_{\xi\xi}(x)+o(\epsilon^3),\\ f(K)f(B)&=\epsilon\kappa(x+\epsilon\xi)+\frac12\epsilon^2\kappa'_\xi(x+\epsilon\xi)+\frac16\epsilon^3\kappa''_{\xi\xi}(x+\epsilon\xi)+o(\epsilon^3)\\ &=\epsilon\kappa(x)+\frac32\epsilon^2\kappa'_\xi(x)+\frac53\epsilon^3\kappa''_{\xi\xi}(x)+o(\epsilon^3),\\ f(M)f(A)&=-\epsilon\kappa(x)-\frac12\epsilon^2\kappa'_\eta(x)-\frac16\epsilon^3\kappa''_{\eta\eta}(x)+o(\epsilon^3),\\ f(C)f(M)&=-\epsilon\kappa(x)-\frac32\epsilon^2\kappa'_\eta(x)-\frac53\epsilon^3\kappa''_{\eta\eta}(x)+o(\epsilon^3),\\ f(B)f(L)&=\epsilon\kappa(x+2\epsilon\xi)+\frac12\epsilon^2\kappa'_\zeta(x+2\epsilon\xi)+\frac16\epsilon^3\kappa''_{\zeta\zeta}(x+2\epsilon\xi)+o(\epsilon^3)\\ &=\epsilon\kappa(x)+2\epsilon^2\kappa'_\xi(x)+2\epsilon^3\kappa''_{\xi\xi}(x)\\ &\quad+\frac12\epsilon^2\kappa'_\zeta(x)+\epsilon^3\kappa''_{\xi\zeta}(x)+\frac16\epsilon^3\kappa''_{\zeta\zeta}(x)+o(\epsilon^2)\\ &=\epsilon\kappa(x)+\frac32\epsilon^2\kappa'_\xi(x)+\frac12\epsilon^2\kappa'_\eta(x)\\ &\quad+\frac56\epsilon^3\kappa''_{\xi\xi}(x)+\frac16\epsilon^3\kappa''_{\eta\eta}(x)+\frac23\epsilon^3\kappa''_{\xi\eta}(x)+o(\epsilon^3)\\ f(L)f(C)&=\epsilon\kappa(x+\epsilon(\xi+\eta))+\frac12\epsilon^2\kappa'_\zeta(x+\epsilon(\xi+\eta))\\ &\qquad\qquad\qquad+\frac16\epsilon^3\kappa''_{\zeta\zeta}(x+\epsilon(\xi+\eta))+o(\epsilon^3)\\ &=\epsilon\kappa(x)+\frac32\epsilon^2\kappa'_\eta(x)+\frac12\epsilon^2\kappa'_\xi(x)\\ &\quad+\frac12\epsilon^3\kappa''_{\xi\xi}(x)+\frac12\epsilon^3\kappa''_{\eta\eta}(x)+\epsilon^3\kappa''_{\xi\eta}(x)\\ &\quad+\frac12\epsilon^3\kappa''_{\eta\eta}(x)-\frac12\epsilon^3\kappa''_{\xi\xi}(x)\\ &\quad+\frac16\epsilon^3\kappa''_{\xi\xi}(x)+\frac16\epsilon^3\kappa''_{\eta\eta}(x)-\frac13\epsilon^3\kappa''_{\xi\eta}(x)+o(\epsilon^3)\\ &=\epsilon\kappa(x)+\frac32\epsilon^2\kappa'_\eta(x)+\frac12\epsilon^2\kappa'_\xi(x)\\ 
&\quad+\frac16\epsilon^3\kappa''_{\xi\xi}(x)+\frac56\epsilon^3\kappa''_{\eta\eta}(x)+\frac23\epsilon^3\kappa''_{\xi\eta}(x)+o(\epsilon^3). \end{aligned} \] Plugging these expressions into the formula for $c(A,B,C;K,L,M)$, we obtain the expression we need. \end{proof} The analogy between this expression and the Schwarz derivative is quite evident. One can ask whether it is possible to extend it in any reasonable way to a more general situation with fewer restrictions on the diffeomorphism, and also whether there exists a non-commutative version of this operator. We are going to address these questions in forthcoming papers. \section {Non-commutative Schwarzian and differential relations} In this section we present an alternative construction of the Schwarz derivative: we obtain it as an invariant of a system of ``differential equations'' on an algebra. In the commutative case the relation between the Schwarzian and differential equations is well known, see for example \cite{OT2}. It is remarkable that this construction can also be phrased in purely algebraic terms. We shall also discuss below some properties of this construction. Consider the following system of linear ``differential equations'': \begin{equation} \label{eq:sys1} \begin{cases} f_1''+af_1'+bf_1&\!\!\!\!=0\\ f_2''+af_2'+bf_2&\!\!\!\!=0. \end{cases} \end{equation} Here $a,\,b,\,f_1,\,f_2$ are elements of a division ring $\mathcal R$, and ${}'$ denotes a linear differentiation in this ring, i.e. a linear endomorphism of $\mathcal R$ verifying the noncommutative Leibniz identity (a model example is the algebra of smooth operator-valued functions of one (real) variable; however, one can plug in an arbitrary algebra with a differentiation of any sort on it). Below we shall assume that all the elements we deal with are invertible whenever necessary.
Using this assumption it is not difficult to solve the equations \eqref{eq:sys1} as a linear system in $a$ and $b$: multiplying the equations by $f_1^{-1}$ and $f_2^{-1}$ respectively and subtracting the second one from the first one, we obtain (see [3]) \[ a=-(f_1''f_1^{-1}-f_2''f_2^{-1})(f_1'f_1^{-1}-f_2'f_2^{-1})^{-1} \] and similarly \[ b=-(f_1''(f_1')^{-1}-f_2''(f_2')^{-1})(f_1(f_1')^{-1}-f_2(f_2')^{-1})^{-1}. \] We can rewrite these formulas a little: \[ \begin{aligned} a&=-(f_1''-f_2''f_2^{-1}f_1)(f_1'-f_2'f_2^{-1}f_1)^{-1}\\ b&=-(f_1''-f_2''(f_2')^{-1}f_1')(f_1-f_2(f_2')^{-1}f_1')^{-1} \end{aligned} \] so that now it is evident that $a$ and $b$ can be expressed as $a=-q^1_{32},\ b=-q^2_{31}$, where $q^i_{jk}$ are the right quasi-Pl\"ucker coordinates of the $3\times 2$-matrix $\begin{pmatrix}f_1 & f_1' & f_1''\\ f_2 & f_2' & f_2''\end{pmatrix}^T$. See Section 1 for details. Observe that in the process of solving \eqref{eq:sys1} we obtained the expression \begin{equation} \label{eq:inter2} -b=af_1'f_1^{-1}+f_1''f_1^{-1}=af_2'f_2^{-1}+f_2''f_2^{-1}. \end{equation} (This expression is a special case of the formula from Proposition 4.8.1 of [2], rewritten for right quasi-Pl\"ucker coordinates; the proposition connects quasi-Pl\"ucker coordinates for matrices of different sizes.) Thus \[ af_1'f_1^{-1}f_2+f_1''f_1^{-1}f_2=af_2'+f_2''. \] Hence \begin{equation} \label{eq:inter1} af_1(f_1^{-1}f_1'f_1^{-1}f_2-f_1^{-1}f_2')=f_1(f_1^{-1}f_2''-f_1^{-1}f_1''f_1^{-1}f_2). \end{equation} Now one has the formulas \[ (f^{-1})''=-(f^{-1}f'f^{-1})'=2f^{-1}f'f^{-1}f'f^{-1}-f^{-1}f''f^{-1} \] and \[ (fg)''=f''g+2f'g'+fg'', \] for all $f,g\in \mathcal R$; so \[ (f^{-1}g)''=2f^{-1}f'f^{-1}f'f^{-1}g-2f^{-1}f'f^{-1}g'-f^{-1}f''f^{-1}g+f^{-1}g''.
\] Thus on the right-hand side of \eqref{eq:inter1} we have \[ \begin{aligned} f_1(f_1^{-1}f_2''-f_1^{-1}f_1''f_1^{-1}f_2)&=f_1(2f_1^{-1}f_1'f_1^{-1}f_1'f_1^{-1}f_2-2f_1^{-1}f_1'f_1^{-1}f_2'-f_1^{-1}f_1''f_1^{-1}f_2+f_1^{-1}f_2'')\\ &\quad-2f_1(f_1^{-1}f_1'f_1^{-1}f_1'f_1^{-1}f_2-f_1^{-1}f_1'f_1^{-1}f_2')\\ &=f_1(f_1^{-1}f_2)''-2f_1'(f_1^{-1}f_1'f_1^{-1}f_2-f_1^{-1}f_2')\\ &=f_1(f_1^{-1}f_2)''+2f_1'(f_1^{-1}f_2)'. \end{aligned} \] On the other hand, on the left-hand side of \eqref{eq:inter1} we have $-af_1(f_1^{-1}f_2)'$, so denoting $\varphi=f_1^{-1}f_2$ we get: \begin{equation} \label{eq:eq2} af_1=-2f_1'-f_1\varphi''(\varphi')^{-1}, \end{equation} or equivalently \begin{equation} \label{eq:eq2'} a=-2f_1'f_1^{-1}-f_1\varphi''(\varphi')^{-1}f_1^{-1}. \end{equation} Here is a simple corollary of formula \eqref{eq:eq2'}: \begin{proposition} \label{prop:1} When the elements $f_i\in \mathcal R$ are replaced by $\tilde f_i=hf_i,\ i=1,2$, for some $h\in \mathcal R$, the coefficient $a$ in the system \eqref{eq:sys1} should be replaced by $\tilde a=-2h'h^{-1}+hah^{-1}$. \end{proposition} \begin{proof} Observe that $\varphi$ is not affected by the coordinate change $f_i\leftrightarrow \tilde f_i,\ i=1,2$. Now a direct calculation with formula \eqref{eq:eq2'} shows \[ \begin{aligned} \tilde a&=-2{\tilde f_1}'{\tilde f_1}^{-1}-\tilde f_1\varphi''(\varphi')^{-1}{\tilde f_1}^{-1}\\ &=-2(h'f_1+hf_1')f_1^{-1}h^{-1}-h(f_1\varphi''(\varphi')^{-1}f_1^{-1})h^{-1}\\ &=-2h'h^{-1}+h(-2f_1'f_1^{-1}-f_1\varphi''(\varphi')^{-1}f_1^{-1})h^{-1}. \end{aligned} \] \end{proof} \begin{remark}\rm It is worth observing the striking similarity between the expression in Proposition \ref{prop:1} and the gauge transformation of a linear connection (the unnecessary $2$ in front of $h'h^{-1}$ can be eliminated by considering $\alpha=\frac12a$). \end{remark} It is now our purpose to find the way $b$ changes when $f_1,\,f_2$ are multiplied by $h$, at least under some additional assumptions on $h$.
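The formulas for $a$, $b$ and the gauge law of Proposition \ref{prop:1} are identities in any algebra with a derivation, so they can be sanity-checked numerically by prescribing the jets $(f_i,f_i',f_i'')$ by hand. The following sketch (our own illustration, not part of the text) models $\mathcal R$ by $3\times 3$ real matrices:

```python
# Check the recovered coefficients a, b of the system f'' + a f' + b f = 0
# and the gauge law a -> -2 h' h^{-1} + h a h^{-1}, over 3x3 matrices.
import numpy as np

rng = np.random.default_rng(1)
inv = np.linalg.inv
r = lambda: rng.standard_normal((3, 3))

a, b = r(), r()
f1, d1 = r(), r()             # f_1 and its prescribed "derivative" f_1'
f2, d2 = r(), r()
dd1 = -(a @ d1 + b @ f1)      # f_1'' forced by the system
dd2 = -(a @ d2 + b @ f2)

# a = -(f1'' f1^{-1} - f2'' f2^{-1})(f1' f1^{-1} - f2' f2^{-1})^{-1}
a_rec = -(dd1 @ inv(f1) - dd2 @ inv(f2)) @ inv(d1 @ inv(f1) - d2 @ inv(f2))
# b = -(f1'' (f1')^{-1} - f2'' (f2')^{-1})(f1 (f1')^{-1} - f2 (f2')^{-1})^{-1}
b_rec = -(dd1 @ inv(d1) - dd2 @ inv(d2)) @ inv(f1 @ inv(d1) - f2 @ inv(d2))
print(np.allclose(a_rec, a), np.allclose(b_rec, b))

# Gauge law: f_i -> h f_i with an arbitrary jet (h, h', h'')
h, dh, ddh = r(), r(), r()
tf1, td1 = h @ f1, dh @ f1 + h @ d1
tdd1 = ddh @ f1 + 2 * dh @ d1 + h @ dd1
tf2, td2 = h @ f2, dh @ f2 + h @ d2
tdd2 = ddh @ f2 + 2 * dh @ d2 + h @ dd2
a_new = -(tdd1 @ inv(tf1) - tdd2 @ inv(tf2)) @ inv(td1 @ inv(tf1) - td2 @ inv(tf2))
print(np.allclose(a_new, -2 * dh @ inv(h) + h @ a @ inv(h)))
```

The second check passes for an arbitrary jet of $h$, in agreement with the purely algebraic proof of Proposition \ref{prop:1}.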
We begin with a simple observation: \begin{corollary} If $h$ verifies the ``differential equation'' $h'=\frac12ha$ then $\tilde a=0$. \end{corollary} Further, there is another simple consequence of formula \eqref{eq:eq2}: \begin{proposition} Assume that $h$ verifies the equation $h'=\frac12ha$; denote $\tilde f_1=h f_1,\ \theta=\varphi''(\varphi')^{-1}$. Then \[ {\tilde f_1}'=-\frac{\tilde f_1}{2}\theta. \] \end{proposition} \begin{proof} \[ {\tilde f_1}'=(h f_1)'=\frac12haf_1+hf_1'=\mbox{using equation \eqref{eq:eq2}}=-hf_1'-\frac12hf_1\theta+hf_1'=-\frac{\tilde f_1}{2}\theta. \] \end{proof} Differentiating once more we see that \begin{equation} \label{eq:inter3} {\tilde f_1}''=-\left (\frac{\tilde f_1}{2}\theta\right)'=\frac{\tilde f_1}{4}\theta^2-\frac{\tilde f_1}{2}\theta'. \end{equation} Finally, substituting these formulas in the first expression of \eqref{eq:inter2} we obtain the following result: \begin{theorem} If $h$ satisfies the equation $h'=\frac12ha$ then the coordinate change $f_i\mapsto \tilde f_i=hf_i,\ i=1,2$, transforms the system \eqref{eq:sys1} in such a way that \[ \begin{aligned} a&\mapsto 0\\ b&\mapsto \frac12\tilde f_1\left(\theta'-\frac12\theta^2\right){\tilde f_1}^{-1} \end{aligned} \] where $\theta=\varphi''(\varphi')^{-1},\ \varphi=f_1^{-1}f_2$ and the equation $2{\tilde f_1}'+\tilde f_1\theta=0$ holds. \end{theorem} \begin{proof} Since $a\mapsto 0$, we obtain from \eqref{eq:inter2}: \[ b=-{\tilde f_1}''{\tilde f_1}^{-1}=\mbox{using \eqref{eq:inter3}}=-\left(\frac{\tilde f_1}{4}\theta^2-\frac{\tilde f_1}{2}\theta'\right){\tilde f_1}^{-1}. \] \end{proof} \begin{remark}\rm Observe that in the commutative case the expression $\theta'-\frac12\theta^2$ coincides with the classical Schwarz derivative of $\varphi$. \end{remark} \subsection{Generalized NC Schwarzian} Let $f$ and $g$ be two (invertible) elements of a division ring $\mathcal R$, equipped with a derivation ${}'$ (see the previous section).
We suppose that they satisfy so-called {\it left coefficient equations} $f''=F_1f$, $g''=F_2g$ for some $F_1,F_2\in\mathcal R$. We set $h:=fg^{-1}$ and $G:=F_1h-hF_2$. \begin{theorem}\label{ncSch} If $G=0$ then we have the following relation: \begin{equation}\label{NCSch} h'''=(3/2)h''(h')^{-1}h'' - 2h'F_2 \end{equation} (a {\it non-commutative analogue of the Schwarzian equation}). \end{theorem} \begin{proof} $$ h'=f'g^{-1}-hg'g^{-1}, $$ $$ h''=G-2h'g'g^{-1}, $$ $$ h'''=G'-2h''g'g^{-1}-2h'F_2+2h'(g'g^{-1})^2\ . $$ One can express $g'g^{-1}=(1/2)(h')^{-1}(G-h'')$ and get $$ h'''=G'+(3/2)h''(h')^{-1}h'' - 2h'F_2-(3/2)h''(h')^{-1}G +(1/2)G(h')^{-1}(G-h'')\ , $$ which proves the claim, since $G=0$ implies $G'=0$. Let $f^{-1}f''= g^{-1} g''$, i.e. let $f, g$ be solutions of the same differential equation with {\it right coefficients}. Let $g'' = F g$, i.e. $g$ is also a solution of a differential equation with a left coefficient. Let $h = f g^{-1}$. Then $$ h'''-(3/2)h'' (h')^{-1} h'' = -2h'F . $$ Note that the left-hand side is stable under the M\"{o}bius transform $$ h \to (ah + b)(ch + d)^{-1} $$ where $a'=b'= c'= d'=0.$ \end{proof} \begin{remark} Consider the commutative analogue of (\ref{NCSch}): \begin{equation}\label{Sch} (h')^{-1}h'''=(3/2)(h')^{-2}h''^2- 2F_2. \end{equation} This equality can be regarded as yet another definition of the Schwarzian ${\rm Sch}(h)$: $$ {\rm Sch}(h):= (h')^{-1}h'''-(3/2)(h')^{-2}h''^2 = - 2F_2.
$$ Hence, we obtain one more justification for calling {\it a NC Schwarzian of $h$} the following expression \begin{equation}\label{NCSch1} {\rm NCSch}(h):=(h')^{-1}h''' - (3/2)(h')^{-1}h''(h')^{-1}h'' \end{equation} \end{remark} \begin{remark} In the commutative case there exists the following famous version of the KdV equation \begin{equation}\label{SchKdV} h_t = (h'){\rm Sch}(h) \end{equation} It is invariant under the projective action of $SL_2$ and, when written as an evolution on the invariant ${\rm Sch}(h)$, it becomes the "usual" KdV \begin{equation}\label{KdV} {\rm Sch}(h)_t = {\rm Sch}(h)^{'''} + 3{\rm Sch}(h)' {\rm Sch}(h). \end{equation} \end{remark} Introducing two commuting derivatives $\partial_x = '$ and $\partial_t$ of our skew-field $\mathcal R$ with respect to two distinguished elements $x$ and $t$, one can write the analogue of (\ref{SchKdV}): \begin{equation}\label{NSchKdV} h_t = (h'){\rm NCSch}(h) = h''' - (3/2)h''(h')^{-1}h'' \end{equation} \begin{remark} The equation (\ref{NSchKdV}) has an interesting geometric interpretation (specialisation) as the {\it Spinor Schwarzian-KdV equation} (see the equation (4.6) in \cite{MB}). \end{remark} \section{Some applications of NC cross-ratios.} Let us briefly describe a few possible applications of noncommutative cross-ratios, inspired by the classical constructions. \subsection{Noncommutative leapfrog map} Let $\mathbb P^1$ be the projective line over a noncommutative division ring $\mathcal R.$ Consider five points $S_{i-1} , S_i , S_{i+1} , S_i^{-}$ and $S_i^{+}$ on $\mathbb P^1$. The theory of noncommutative cross-ratios (see Theorem 3.4) implies that there exists a projective transformation sending $$ \left(S_{i-1}, S_i, S_{i+1}, S_i^{-}\right) \to \left(S_{i+1} , S_i , S_{i-1} , S_i^{+}\right) $$ (in this order!)
if and only if the corresponding cross-ratios coincide: $$ \begin{aligned} \bigl(S_{i+1}&-S_i \bigr)^{-1}\left(S_i^{-} - S_i\right)\left(S_i^{-} - S_{i-1}\right)^{-1}\left(S_{i+1} - S_{i-1}\right)=\\ &= \lambda^{-1}\left(S_{i-1} - S_i\right)^{-1}\left(S_i^{+}-S_i\right)\left(S_i^{+} - S_{i+1}\right)^{-1}\left(S_{i-1} - S_{i+1}\right)\lambda \end{aligned} $$ where $\lambda \in \mathcal R.$ Note that the factor $\left(S_{i+1}-S_{i-1}\right)$ appears on both sides of the equation, but with different signs. It follows that in the commutative case one gets the identity (5.14) from \cite{GSTV}; this map is integrable and constitutes a part of the pentagram family of maps, see the next paragraph. \begin{problem} It is very intriguing whether the same properties persist in the noncommutative case. \end{problem} \subsection{Noncommutative cross-ratios and the pentagramma mirificum} \subsubsection{Classical 5-recurrence} There is a wonderful observation (known as the {\it Gauss Pentagramma mirificum}): when a pentagram is drawn on a unit sphere in $\mathbb R^3$ with successively orthogonal great circles (see figure 2, where the orthogonality of the great circles is not apparent), with the lengths of the inner side arcs denoted $\alpha_i,\quad i=1,\ldots,5$, and one takes $y_i := \tan^2 (\alpha_i),$ then the following recurrence relation holds: \begin{figure}[tb] \includegraphics[scale =.65]{penta-mirifi2.eps} \caption{Gauss' pentagramma mirificum} \end{figure} \begin{equation}\label{pentmirGauss} y_i y_{i+1} = 1 + y_{i+3}, \quad {\rm mod} \quad \mathbb Z_5. \end{equation} Gauss observed that the first three equations, for $i=1,2,3$, in \eqref{pentmirGauss} completely determine the last two equations, for $i=4,5$.
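Using the cross-ratio parametrization of the $y_i$ recalled below, the recurrence \eqref{pentmirGauss} can be tested in exact rational arithmetic; a small illustrative sketch (the five points are an arbitrary choice; Python is used only for illustration):

```python
from fractions import Fraction as F

def cross_ratio(a, b, c, d):
    # [a, b, c, d] = (d - a)(c - b) / ((d - c)(b - a))
    return (d - a) * (c - b) / ((d - c) * (b - a))

# Five points on the rational projective line (arbitrary, pairwise distinct).
p = [F(0), F(1), F(2), F(3), F(5)]

# y_i = [p_{i+1}, p_{i+2}, p_{i+3}, p_{i+4}], indices mod 5 (0-based here).
y = [cross_ratio(*(p[(i + j) % 5] for j in (1, 2, 3, 4))) for i in range(5)]

# Gauss' pentagramma mirificum recurrence: y_i y_{i+1} = 1 + y_{i+3}.
for i in range(5):
    assert y[i] * y[(i + 1) % 5] == 1 + y[(i + 3) % 5]
```

Any other choice of five pairwise distinct points gives the same five identities.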
It was discussed in \cite{M-G} (which is our main source of the classical data for the Gauss Pentagramma Mirificum) that the variables $y_i$ can be expressed via the classical cross-ratios: $$y_i = [p_{i+1}, p_{i+2}, p_{i+3},p_{i+4}] = \frac{(p_{i+4}-p_{i+1})(p_{i+3}-p_{i+2})}{(p_{i+4}-p_{i+3})(p_{i+2}-p_{i+1})},$$ where $p_i = p_{i+5}$ are five points on the real or complex projective line. \begin{proposition} Suppose that any two cyclically consecutive points $p_i$ and $p_{i+1}$ are different. Then the five cross-ratios $y_i$ satisfy the relation \eqref{pentmirGauss}. \end{proposition} It was remarked in \cite{M-G} that after the renaming $x_1=y_1,\,x_2=y_4,\,x_3=y_2,\,x_4=y_5,\,x_5=y_3,$ the variables $x_i,\ i=1,\ldots,5$ satisfy the famous \textit{pentagon recurrence}: $$x_{i-1}x_{i+1}= 1+x_{i}.$$ It is also known that this construction is closely related to cluster algebras, see \cite{FR} for further details. \subsubsection{Non-commutative analogues} Let $\mathcal R$ be an associative division ring. In [6] (see sections 2 and 3) we defined the cross-ratio $\kappa(i, j, k, l)$ of four vectors $i, j, k, l \in \mathcal R^2.$ Recall that $$ \kappa(i, j, k, l)= q^j_{kl}q^i_{lk} $$ where $q^k_{ij}$ is the corresponding quasi-Pl\"{u}cker coordinate. In particular, $q^k_{ij}$ and $q^k_{ji}$ are inverse to each other and $$ \kappa(j, i, l, k) = q^i_{lk}\kappa(i, j, k,l)q^i_{kl},\ \kappa(k,l, i, j) = q^j_{ik}\kappa(i, j, k, l)q^j_{ki}. $$ We set $\overline{\kappa(i, j, k,l)} = \kappa(j, i, k, l).$ Let now $i, j, k, l, m$ be five vectors in $\mathcal R^2.$ We start with multiplicative relations for their cross-ratios. All these relations are redundant in the commutative case.
$$ \begin{aligned} \kappa(i, j, k, l)q^{i}_{km} \kappa(i, k, m, l) q^{i}_{mk}& = q^{j}_{kl}\overline{\kappa(i,k,m,l)}\, \overline{\kappa(i, j,k, l)}q^{j}_{lk},\\ q^{l}_{ mk }\kappa(i, j, k, l)q^{l}_{ ki} \kappa(l, k, i, m)q^{l} _{im}& = \overline{\kappa(l, k, i, m)}q^{k}_{ ml} \overline{\kappa(i, j, k, l)}q^{k}_{lm }. \end{aligned} $$ Noncommutative versions of the {\it pentagramma mirificum} relations can be written as follows: $$\kappa(i, j, k, l)q^{i}_{kj}\kappa(m, l, j, i)q^{i}_{jk} = 1 - \kappa(m, j, k, i) , $$ $$\kappa(i, j, k, l)q^{l}_{ki} \kappa(l, k, i, m)q^{i}_{jk} = 1 - \kappa(l, j, k, m) , $$ $$q^{l}_{ jk} \kappa(i, j, k, l)q^{l}_{kj}\kappa(m, l, j, i) = 1 - \kappa(i, k, j, m) . $$ For five vectors $1, 2, 3, 4, 5$ in $\mathcal R^2$ we set $$x_1 = -\kappa(1, 2, 3, 4),\ x_2 = -\kappa(5, 2, 3, 1),\ x_3 = -\kappa(5, 4, 2, 1), $$ $$x_4 = -\kappa(3, 4, 2, 5),\ x_5 = -\kappa(3, 1, 4, 5) .$$ Then $$x_1 q^1_{32}x_3 q ^1_{23} = 1 + x_2, \quad x_4 q^5_{23}x_2 q ^5_{32} = 1 + x_3, $$ $$x_3 q^5_{24}x_5 q ^5_{42} = 1 + x_4, \quad x_6 q^3_{42}x_4 q ^3_{24} = 1 + x_5, $$ $$x_5 q^3_{41}x_7 q ^3_{14} = 1 + x_6,$$ where $x_6 := \bar x_1$ and $x_7 := \bar x_2 .$ Note the different order of the factors on the even and odd left-hand sides. So we have a {\it 5-antiperiodicity}, i.e. periodicity up to the anti-involution $x_{k+5} = \bar x_k$. Also, the relations with odd left-hand sides imply the relations with even left-hand sides, as in the commutative case. \begin{remark} There is an important "continuous limit" of "higher pentagramma" maps on polygons in $\mathbb P^n$, which is the Boussinesq (or generalized $(2, n+1)$-KdV hierarchy) equation (\cite{KhS}). \end{remark} \begin{problem} What is a non-commutative "higher analogue" of the pentagramma recurrences? Is there a related non-commutative integrable analogue of the Boussinesq equation? \end{problem} We hope to return to these questions in our future paper devoted to new examples of NC integrable systems (\cite{RRS}).
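In the commutative specialization, the renaming $x_1=y_1$, $x_2=y_4$, $x_3=y_2$, $x_4=y_5$, $x_5=y_3$ recalled earlier reduces these relations to the pentagon recurrence $x_{i-1}x_{i+1}=1+x_i$; a quick exact-arithmetic check (the five points are an arbitrary choice, and Python is used only for illustration):

```python
from fractions import Fraction as F

def cr(a, b, c, d):
    # classical cross-ratio [a, b, c, d] = (d - a)(c - b) / ((d - c)(b - a))
    return (d - a) * (c - b) / ((d - c) * (b - a))

# Arbitrary distinct rational points, with p_{i+5} = p_i.
p = [F(0), F(1), F(2), F(3), F(5)]
y = [cr(p[(i + 1) % 5], p[(i + 2) % 5], p[(i + 3) % 5], p[(i + 4) % 5])
     for i in range(5)]

# The renaming x1 = y1, x2 = y4, x3 = y2, x4 = y5, x5 = y3 (0-based below).
xs = [y[0], y[3], y[1], y[4], y[2]]

# Pentagon recurrence x_{i-1} x_{i+1} = 1 + x_i (indices mod 5).
for i in range(5):
    assert xs[(i - 1) % 5] * xs[(i + 1) % 5] == 1 + xs[i]
```
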
https://arxiv.org/abs/1905.01366
Noncommutative cross-ratio and Schwarz derivative
We present here a theory of noncommutative cross-ratio, Schwarz derivative and their connections and relations to the operator cross-ratio. We apply the theory to "noncommutative elementary geometry" and relate it to noncommutative integrable systems. We also provide a noncommutative version of the celebrated "pentagramma mirificum".
https://arxiv.org/abs/1912.02846
The Maximum Wiener Index of Maximal Planar Graphs
The Wiener index of a connected graph is the sum of the distances between all pairs of vertices in the graph. It was conjectured that the Wiener index of an $n$-vertex maximal planar graph is at most $\lfloor\frac{1}{18}(n^3+3n^2)\rfloor$. We prove this conjecture and for every $n$, $n \geq 10$, determine the unique $n$-vertex maximal planar graph for which this maximum is attained.
\section{Introduction} The Wiener index is a graph invariant based on distances in the graph. For a connected graph $G$, the Wiener index is the sum of distances between all unordered pairs of vertices in the graph and is denoted by $W(G)$. That is, \begin{equation*} W(G) = \sum_{\{u,v\} \subseteq V(G)} d_G(u,v), \end{equation*} where $d_G(u,v)$ denotes the distance from $u$ to $v$, i.e.\ the minimum length of a path from $u$ to $v$ in the graph $G$. It was first introduced by Harry Wiener in 1947, who studied its correlation with the boiling points of paraffins in terms of their molecular structure \cite{first}. Since then, it has been one of the most frequently used topological indices in chemistry, as molecular structures are usually modelled as undirected graphs. Many results on the Wiener index and closely related parameters, such as the gross status \cite{third}, the distance of graphs \cite{fourth} and the transmission \cite{fifth}, have been obtained. A great deal of knowledge on the Wiener index is accumulated in several survey papers \cite{1,2,3,4,5}. Finding sharp bounds on the Wiener index for graphs under various constraints has been a research topic attracting many researchers. The most basic upper bound on $W(G)$ states that, if $G$ is a connected graph of order $n$, then \begin{equation} W(G) \leq \frac{(n-1)n(n+1)}{6}, \end{equation} which is attained only by a path \cite{15,7,8}. Many sharp or asymptotically sharp bounds on $W(G)$ in terms of other graph parameters are known, for instance, minimum degree \cite{9,10,11}, connectivity \cite{12,13}, edge-connectivity \cite{14,15} and maximum degree \cite{16}. For more details on the mathematical aspects of the Wiener index, see also \cite{17,18,19,20,21,22,23,24,25,26}. One can study the Wiener index of the family of connected planar graphs. Since the bound given in (1) is attained by a path, it is natural to ask for sharp bounds on some particular family of planar graphs.
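The definition of $W(G)$ and the bound (1) are easy to check computationally; a small sketch computing the Wiener index by breadth first search and verifying that the path $P_n$ attains (1) with equality (illustrative code only, not part of the proofs):

```python
from collections import deque

def wiener_index(adj):
    # Sum of distances over unordered pairs, via BFS from every vertex.
    n = len(adj)
    total = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist)
    return total // 2  # each pair was counted twice

n = 7
path = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
assert wiener_index(path) == (n - 1) * n * (n + 1) // 6  # bound (1), with equality
```
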
For instance, consider the family of maximal planar graphs. The Wiener index of a maximal planar graph with $n$ vertices, $n\geq 3$, has the sharp lower bound $(n-2)^2+2$; the bound is attained by any maximal planar graph in which the distance between any pair of vertices is at most 2 (for instance, a maximal planar graph containing the $n$-vertex star). Z. Che and K.L. Collins \cite{28}, and independently \' E. Czabarka, P. Dankelmann, T. Olsen and L.A. Sz\' ekely \cite{29}, gave a sharp upper bound for a particular class of maximal planar graphs known as \textit{Apollonian networks}. An Apollonian network may be formed, starting from a single triangle embedded on the plane, by repeatedly selecting a triangular face of the embedding, adding a new vertex inside the face, and connecting the new vertex to each of the three vertices of the face. They showed the following. \begin{theorem}(\cite{28, 29}) Let $G$ be an Apollonian network of order $n\geq 3$. Then $W(G)$ satisfies the sharp upper bound \begin{equation*}W(G)\leq \bigg\lfloor\frac{1}{18}(n^3+3n^2)\bigg\rfloor= \begin{cases} \frac{1}{18}(n^3+3n^2), &\text{if $n\equiv 0(mod \ 3)$;}\\ \frac{1}{18}(n^3+3n^2-4), &\text{if $n\equiv 1(mod \ 3)$;}\\ \frac{1}{18}(n^3+3n^2-2), &\text{if $n\equiv 2(mod \ 3)$.} \end{cases} \end{equation*} \end{theorem} It was shown explicitly that the maximum is attained by the maximal planar graphs $T_n$; we give the construction of $T_n$ in the next section, see Definition \ref{xx}. The authors of \cite{28} also conjectured that this bound holds for every maximal planar graph. It has been shown that the conjectured bound holds asymptotically \cite{29}. In particular, they showed the following result. \begin{theorem}(\cite{29})\label{az} Let $k\in\{3,4,5\}$. Then there exists a constant $C$ such that \begin{equation*} W(G)\leq \frac{1}{6k}n^3+C n^{5/2} \end{equation*} for every $k$-connected maximal planar graph of order $n$. \end{theorem} In this paper, we confirm the conjecture.
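The three residue classes appearing in the theorem above are just the explicit values of the single floor expression; a quick check (illustrative only):

```python
def bound(n):
    # floor((n^3 + 3 n^2) / 18)
    return (n**3 + 3 * n**2) // 18

for n in range(3, 200):
    r = n % 3
    if r == 0:
        assert 18 * bound(n) == n**3 + 3 * n**2
    elif r == 1:
        assert 18 * bound(n) == n**3 + 3 * n**2 - 4
    else:
        assert 18 * bound(n) == n**3 + 3 * n**2 - 2
```
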
\begin{theorem}\label{Main_Theorem} Let $G$ be an $n$-vertex maximal planar graph, $n\geq 6$. Then we have \begin{equation*}W(G)\leq \bigg\lfloor\frac{1}{18}(n^3+3n^2)\bigg\rfloor= \begin{cases} \frac{1}{18}(n^3+3n^2), &\text{if $n\equiv 0(mod \ 3)$;}\\ \frac{1}{18}(n^3+3n^2-4), &\text{if $n\equiv 1(mod \ 3)$;}\\ \frac{1}{18}(n^3+3n^2-2), &\text{if $n\equiv 2(mod \ 3)$.} \end{cases} \end{equation*} For all $n$, $n\geq 10$, equality holds if and only if $G$ is isomorphic to $T_n$. \end{theorem} \section{Notations and Preliminaries} Let $G$ be a graph. We denote the vertex set and the edge set of $G$ by $V(G)$ and $E(G)$ respectively. For a vertex set $S \subset V(G)$, the \textit{status} of $S$ is defined as the sum of all distances from $S$ to the vertices of the graph. It is denoted by $\sigma_G(S)$, thus \begin{align*} \sigma_G(S)=\sum_{u\in V(G)}d_G(S,u). \end{align*} For simplicity, we may use the notation $\sigma(S)$ instead of $\sigma_G(S)$ when the underlying graph is clear. We have \begin{equation*} W(G) = \frac{1}{2} \sum_{v \in V(G)} \sigma_G(v). \end{equation*} Here we define the Apollonian network $T_n$ on $n$ vertices. We will prove later that it is the unique maximal planar graph which maximizes the Wiener index. \begin{definition}\label{xx} The Apollonian network $T_n$ is the maximal planar graph on $n\geq 3$ vertices with the following structure, see Figure \ref{apollonian}. If $n$ is a multiple of $3$, say $n=3k$, then the vertex set of $T_n$ can be partitioned into three sets of the same size, $A=\{a_1,a_2,\cdots,a_k\}$, $B=\{b_1, b_2,\dots,b_k\}$ and $C=\{c_1,c_2,\cdots,c_k\}$. The edge set of $T_n$ is the union of the following three sets: $E_1=\bigcup_{i=1}^{k}\{(a_i,b_i), (b_i,c_i), (c_i,a_i)\}$ forming concentric triangles, $E_2=\bigcup_{i=1}^{k-1} \{(a_i,b_{i+1}), (a_i,c_{i+1}), (b_i,c_{i+1})\}$ forming `diagonal' edges, and $E_3= \bigcup_{i=1}^{k-1} \{(a_i,a_{i+1}), (b_i,b_{i+1}), (c_i,c_{i+1})\}$ forming paths in each vertex class, see Figure \ref{a}.
Note that there are two triangular faces $a_1,b_1,c_1$ and $a_k,b_k,c_k$. If $3|n-1$, then $T_n$ is the Apollonian network which may be obtained from $T_{n-1}$ by adding a degree three vertex in the face $a_1,b_1,c_1$ or $a_{\frac{n-1}{3}},b_{\frac{n-1}{3}},c_{\frac{n-1}{3}}$, see Figure \ref{b}. Note that both graphs are isomorphic. If $3|n-2$, then $T_n$ is the Apollonian network which may be obtained from $T_{n-2}$ by adding a degree three vertex in each of the faces $a_1,b_1,c_1$ and $a_{\frac{n-2}{3}},b_{\frac{n-2}{3}},c_{\frac{n-2}{3}}$, see Figure \ref{c}. \end{definition} \begin{figure}[h] \begin{subfigure}{0.3\linewidth} \centering \begin{tikzpicture}[scale=.4] \foreach \x in {1,2,4,5}{ \draw[thick] (90:\x) -- (90:{\x+1}) (-30:\x) -- (-30:{\x+1}) (210:\x) -- (210:{\x+1}); \draw[red] (90:{\x+1}) -- (-30:\x) -- (210:{\x+1}) (210:\x) -- (90:{\x+1}); } \foreach \x in {1,2,4,5,...,6}{ \filldraw (90:\x) circle (4pt) (-30:\x) circle (4pt) (210:\x) circle (4pt); \draw[thick] (90:\x) -- (-30:\x) -- (210:\x) -- (90:\x); } \filldraw (90:3) circle (4pt) (210:3) circle (4pt); \draw[thick] (90:3) -- (210:3); \draw (0,-3) node[below]{}; \end{tikzpicture} \caption{$3\mid n$} \label{a} \end{subfigure} \begin{subfigure}{0.3\linewidth} \centering \begin{tikzpicture}[scale=.4] \filldraw (210:1) -- (0,0) circle (4pt) -- (90:1) (-30:1) -- (0,0); \foreach \x in {1,2,4,5}{ \draw[thick] (90:\x) -- (90:{\x+1}) (-30:\x) -- (-30:{\x+1}) (210:\x) -- (210:{\x+1}); \draw[red] (90:{\x+1}) -- (-30:\x) -- (210:{\x+1}) (210:\x) -- (90:{\x+1}); } \foreach \x in {1,2,4,5,...,6}{ \filldraw (90:\x) circle (4pt) (-30:\x) circle (4pt) (210:\x) circle (4pt); \draw[thick] (90:\x) -- (-30:\x) -- (210:\x) -- (90:\x); } \filldraw (90:3) circle (4pt) (210:3) circle (4pt); \draw[thick] (90:3) -- (210:3); \draw (0,-3) node[below]{}; \end{tikzpicture} \caption{$\; 3\mid n-1$} \label{b} \end{subfigure} \begin{subfigure}{0.3\linewidth} \centering \begin{tikzpicture}[scale=.35] \filldraw (210:6) -- (0,7)
circle (4pt) -- (-30:6) (0,7) -- (0,6); \filldraw (210:1) -- (0,0) circle (4pt) -- (90:1) (-30:1) -- (0,0); \foreach \x in {1,2,4,5}{ \draw[thick] (90:\x) -- (90:{\x+1}) (-30:\x) -- (-30:{\x+1}) (210:\x) -- (210:{\x+1}); \draw[red] (90:{\x+1}) -- (-30:\x) -- (210:{\x+1}) (210:\x) -- (90:{\x+1}); } \foreach \x in {1,2,4,5,...,6}{ \filldraw (90:\x) circle (4pt) (-30:\x) circle (4pt) (210:\x) circle (4pt); \draw[thick] (90:\x) -- (-30:\x) -- (210:\x) -- (90:\x); } \filldraw (90:3) circle (4pt) (210:3) circle (4pt); \draw[thick] (90:3) -- (210:3); \end{tikzpicture} \caption{$ \; 3 \mid n-2$} \label{c} \end{subfigure} \caption{Apollonian networks maximizing Wiener index of maximal planar graphs} \label{apollonian} \end{figure} The following lemmas will be used in the proof of Theorem \ref{Main_Theorem}. \begin{lemma}\label{S_Connected_Cycle_lemma} Let $G$ be an $s$-connected maximal planar graph. Then every cut set of size $s$ induces a subgraph containing a cycle of length $s$, i.e.\ a Hamiltonian cycle of the cut set. \end{lemma} \begin{proof} Let $S=\{v_1,v_2,\dots,v_s\}$ be a cut set of size $s$. Let $u$ and $w$ be two vertices such that any path from $u$ to $w$ contains at least one vertex from $S$. Since $G$ is $s$-connected, by Menger's Theorem, there are $s$ vertex disjoint paths from $u$ to $w$. These paths intersect $S$ in pairwise disjoint nonempty sets, therefore each of the paths contains exactly one vertex from $S$. We may assume that, in a particular planar embedding of $G$, those paths are ordered in such a way that one of the two regions determined by the cycle obtained from the two paths from $u$ to $w$ containing $v_{i_x}$ and ${v_{i_{x+1}}}$ has no vertex from $S$ (where indices are taken modulo $s$), see Figure \ref{S_connected_Picture}. Then the edge $\{v_{i_x},v_{i_{x+1}}\}$ must be present; otherwise, from the maximality of the planar graph, there is a path from the vertex $u$ to the vertex $w$ that does not contain a vertex from $S$, a contradiction.
Therefore we have a cycle of length $s$ on the vertex set $S$: $v_{i_{1}},v_{i_{2}},\cdots,v_{i_{s}},v_{i_{1}}$. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.2] \draw[fill=black](-15,0)circle(10pt); \draw[fill=black](15,0)circle(10pt); \draw[fill=black](0,12)circle(10pt); \draw[fill=black](0,-12)circle(10pt); \draw[fill=black](0,7)circle(10pt); \draw[fill=black](0,-7)circle(10pt); \draw[fill=black](0,2)circle(5pt); \draw[fill=black](0,-2)circle(5pt); \draw[fill=black](0,0)circle(5pt); \draw[black,thick](-15,0)..controls (-9,15) and (-3,0) .. (0,12); \draw[black,thick](-15,0)..controls (-9,10) and (-3,0) .. (0,7); \draw[black,thick](-15,0)..controls (-9, -8) and (-3,-6) .. (0,-12); \draw[black,thick](-15,0)..controls (-9,0) and (-3,0) .. (0,-7); \draw[black,thick](15,0)..controls (9,-15) and (3,0) .. (0,-12); \draw[black,thick](15,0)..controls (9,-10) and (3,0) .. (0,-7); \draw[black,thick](15,0)..controls (9,8) and (3,7) .. (0,7); \draw[black,thick](15,0)..controls (9,14) and (3,13) .. (0,12); \node at (-16,0) {$u$}; \node at (16,0) {$w$}; \node at (0,13) {$v_{i_1}$}; \node at (0,8) {$v_{i_2}$}; \node at (0,-8) {$v_{i_{s-1}}$}; \node at (0,-13) {$v_{i_s}$}; \end{tikzpicture} \caption{$s$ pairwise disjoint paths from $u$ to $w$.} \label{S_connected_Picture} \end{figure} \end{proof} The following definition will be particularly helpful. Given a set $S\subseteq V$, we define the Breadth First Search partition of $V$ with root $S$, denoted $\pa{S}^G$ or simply $\pa{S}$ when the underlying graph is clear, by $\pa{S} = \{S_0,S_1,\dots\}$, where $S_0 = S$, and for $i \geq 1$, $S_i$ is the set of vertices at distance exactly $i$ from $S$. We refer to those sets as \emph{levels} (of $\pa{S}$); $S_1$ is the \emph{first level}, and if $k$ is the largest integer such that $S_k \neq \emptyset$, then we refer to $S_k$ as the \emph{last level}. We refer to $S_0$ and the last level as the \emph{terminal levels}.
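As a computational sanity check (illustrative only, not part of the proof), for $3\mid n$ one can build $T_n$ from the edge sets $E_1,E_2,E_3$ of Definition \ref{xx} and verify that its Wiener index equals the claimed extremal value $\lfloor\frac{1}{18}(n^3+3n^2)\rfloor$:

```python
from collections import deque

def wiener(adj):
    # Sum of distances over unordered pairs, via BFS from every vertex.
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2

def T(k):
    # T_n for n = 3k: E1 concentric triangles, E2 diagonal edges,
    # E3 paths inside each vertex class a, b, c.
    adj = {(c, i): set() for c in 'abc' for i in range(1, k + 1)}
    def add(u, v):
        adj[u].add(v)
        adj[v].add(u)
    for i in range(1, k + 1):
        add(('a', i), ('b', i)); add(('b', i), ('c', i)); add(('c', i), ('a', i))
    for i in range(1, k):
        add(('a', i), ('b', i + 1)); add(('a', i), ('c', i + 1)); add(('b', i), ('c', i + 1))
        for c in 'abc':
            add((c, i), (c, i + 1))
    return adj

for k in range(1, 7):
    n = 3 * k
    assert wiener(T(k)) == (n**3 + 3 * n**2) // 18
```
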
Note that, by definition, every level besides the terminal ones is a cut set of $G$. We denote by $\pa{v}$ the Breadth First Search partition from $v$, that is, the partition $\pa{\{v\}}.$ \begin{lemma}\label{Three_Connected_Lemma} Let $G$ be an $(n+s)$-vertex graph and let $S\subset V(G)$ be a set of vertices of size $s$ such that each non-terminal level of $\pa{S}$ has size at least $3$. Then we have \begin{equation*} \sigma(S)\leq \sigma_3(n):= \begin{cases} \frac{1}{6}(n^2+3n), &\text{if $n\equiv 0 \; (mod \ 3)$;}\\ \frac{1}{6}(n^2+3n+2), &\text{if $n\equiv 1,2 \; (mod \ 3)$.} \end{cases} \end{equation*} \end{lemma} \begin{proof} If $\pa{S} = \{S_0,S_1,\dots\}$, by definition we have that $\sigma(S) = \displaystyle\sum_{i} i\abs{S_i}.$ Therefore \begin{align*} \sigma(S) &= \abs{S_1} + 2\abs{S_2} + 3\abs{S_3} + \cdots \\&\leq 3\bigg(1+2+\dots+ \floor{\frac{n}{3}}\bigg)+\bigg(\floor{\frac{n}{3}}+1 \bigg)\bigg(n-3 \floor{\frac{n}{3}}\bigg)= \sigma_3(n). \tag*{\qedhere} \end{align*} \end{proof} Similarly we can prove the following lemmas. \begin{lemma}\label{Four_Connected_Lemma} Let $G$ be an $(n+s)$-vertex graph and let $S\subset V(G)$ be a set of vertices of size $s$ such that each non-terminal level of $\pa{S}$ has size at least $4$. Then we have \begin{equation*} \sigma(S)\leq \sigma_4(n):= \begin{cases} \frac{1}{8}(n^2+4n), &\text{if $n\equiv 0 \;(mod \ 4)$;}\\ \frac{1}{8}(n^2+4n+3), &\text{if $n\equiv 1,3 \; (mod \ 4)$;}\\ \frac{1}{8}(n^2+4n+4), &\text{if $n\equiv 2 \;(mod \ 4)$.} \end{cases} \end{equation*} \end{lemma} \begin{lemma}\label{Five_Connected_Lemma} Let $G$ be an $(n+s)$-vertex graph and let $S\subset V(G)$ be a set of vertices of size $s$ such that each non-terminal level of $\pa{S}$ has size at least $5$.
Then we have \begin{align*} \sigma(S)\leq \sigma_5(n):= \begin{cases} \frac{1}{10}(n^2+5n), &\text{if $n\equiv 0 \;(mod \ 5)$;}\\ \frac{1}{10}(n^2+5n+4), &\text{if $n\equiv 1,4\; (mod \ 5)$;}\\ \frac{1}{10}(n^2+5n+6), &\text{if $n\equiv 2, 3\; (mod \ 5)$.} \end{cases} \end{align*} \end{lemma} \section[Proof of Theorem 1.3]{Proof of Theorem \ref{Main_Theorem}} \begin{proof} We are going to prove Theorem \ref{Main_Theorem} by induction on the number of vertices. In \cite{29} it is shown that Theorem \ref{Main_Theorem} holds for $n \leq 18$. Therefore, we may assume $n\geq 19$. Let $G$ be a maximal planar graph. The proof contains three cases depending on the connectivity of the graph $G$. Since $G$ is a maximal planar graph, it is either $3$-, $4$- or $5$-connected. Thus, we consider three different cases. \noindent\begin{bf} Case $1$. Let $G$ be a $5$-connected graph.\end{bf} For every fixed vertex $v \in V(G)$, consider $\pa{v}$. Since $G$ is $5$-connected, and each of the non-terminal levels of $\pa{v}$ is a cut set, we have that each non-terminal level has size at least $5$. Therefore, from Lemma \ref{Five_Connected_Lemma}, we have $$W(G)=\frac{1}{2}\sum_{v\in V(G)}\sigma(v)\leq \frac{n}{2} \sigma_5(n-1) \leq \frac{n}{20}(n^2+3n+2)< \bigg\lfloor\frac{1}{18}(n^3+3n^2)\bigg\rfloor$$ for all $n\geq 4$. Therefore we are done if $G$ is $5$-connected, since $n\geq 19$. \noindent\begin{bf} Case $2$. Let $G$ be $4$-connected and not $5$-connected. \end{bf} Then $G$ contains a cut set of size $4$, which induces a cycle of length four, by Lemma \ref{S_Connected_Cycle_lemma}. Let us denote the vertices of this cut set by $v_1,v_2,v_3$ and $v_4$, forming the cycle in this given order. The cut set divides the plane into two regions; we will call them the inner and the outer region, respectively. Let us denote the number of vertices in the inner region by $x$. Let us assume, without loss of generality, that $x$ is as small as possible, but greater than one.
Obviously $x\leq \frac{n-4}{2}$ or $x=n-5$. From here on, we deal with several sub-cases depending on the value of $x$. \begin{bf}Case $2.1$. \end{bf}In this case we assume $x\geq 4$ and $x\neq n-5$. Let us consider the sub-graph of $G$, say $G'$, obtained by deleting all vertices from the outer region of the cycle $v_1,v_2,v_3,v_4$ in $G$. The graph $G'$ is not maximal, since the outer face is a $4$-cycle. The graph $G$ is $4$-connected, therefore it does not contain the edges $\{v_1,v_3\}$ and $\{v_2,v_4\}$; consequently we may add either of them to $G'$ to obtain a maximal planar graph. Adding an edge decreases the Wiener index of $G'$. In the following paragraph, we prove that one of the edges decreases the Wiener index of $G'$ by at most $\frac{x^2}{16}$. Let $A_i=\{v\in V(G')|d(v,v_i)<d(v,v_j), \forall j\in\{1,2,3,4\}\setminus\{i\}\}$ for $i\in\{1,2,3,4\}$. Let $A$ be the subset of vertices of $G'$ not contained in any of the $A_i$'s. So $A,A_1,A_2,A_3,A_4$ is a partition of the vertices of $G'$. It is simple to observe that if adding the edge $\{v_i,v_{i+2}\}$, for $i\in \{1,2\}$, decreases the distance between a pair of vertices, then these vertices must be from $A_{i}$ and $A_{i+2}$. If there is a vertex $u$ which has three neighbours in the cut set, without loss of generality say $v_1,v_2,v_3$, then $A_{2}=\emptyset$, since $G$ is $4$-connected; therefore we are done in this situation. Otherwise, for each of the pairs $\{v_1, v_{2}\}$, $\{v_2, v_{3}\}$, $\{v_3, v_{4}\}$, $\{v_4, v_{1}\}$, there is a distinct vertex which is adjacent to both vertices of the pair. Therefore the size of $A$ is at least $4$. Hence the size of the vertex set $\cup_{i=1}^{4}A_i$ is at most $x$. By the AM-GM inequality, we have that one of $\abs{A_1}\cdot\abs{A_3}$ or $\abs{A_2}\cdot\abs{A_4}$ is at most $\frac{x^2}{16}$.
Therefore we can choose one of the edges $\{v_1,v_3\}$ or $\{v_2,v_4\}$ such that, after adding that edge to the graph $G'$, the Wiener index of the graph decreases by at most $\frac{x^2}{16}$. Let us denote the maximal planar graph obtained by adding this edge to $G'$ by $G_{x+4}$. Similarly, we denote by $G_{n-x}$ the maximal planar graph obtained from $G$ by deleting all vertices in the inner region and adding the diagonal which decreases the Wiener index by at most $\frac{(n-x-4)^2}{16}$. Consider the graph $G_{n-x}$ and the subset of its vertices $S=\{v_{1},v_{2},v_{3},v_{4}\}$. Since the graph $G$ is $4$-connected, each non-terminal level of $\pa{S}^{G_{n-x}}$ has at least $4$ vertices. Therefore we get that $\sigma_{G_{n-x}}(S)\leq\sigma_4(n-x-4)\leq \frac{(n-x-2)^2}{8}$, from Lemma \ref{Four_Connected_Lemma}. Recall that $G'$ is the graph obtained from $G$ by deleting the vertices from the outer region. For each $i \in \{1,2,3,4\}$, consider the BFS partition $\pa{v_i}^{G'}$. Since $x\geq 4$, $G$ is $4$-connected, and, by the minimality of $x$, $x>1$, every non-terminal level of $\pa{v_i}^{G'}$ has at least $5$ vertices, except for the first level, which may contain only four vertices, and the level before the last, which may also contain four vertices; in that case the last level has size exactly one. The status of $v_i$ is maximized if the first level and the level before the last contain four vertices, the last level contains only one vertex, and every other level contains exactly five vertices. To simplify the calculation of the status of the vertex $v_i$, we may hang a new temporary vertex on the root and bring a vertex from the last level to the previous level. Together these modifications do not change the status of the vertex, but they increase the number of vertices. Now we may apply Lemma \ref{Five_Connected_Lemma} to this BFS partition, considering that the number of vertices in every level is exactly $5$.
Therefore we have $\sigma_{G'}(v_i)\leq \frac{(x+4)^2+5(x+4)}{10}$. Observe that this status includes the distances from $v_i$ to the other vertices of the cut set, which sum to four. Note that this is a uniform upper bound for the status of each of the vertices from the cut set. Finally, we may upper bound the Wiener index of $G$ in the following way: \begin{equation*} \begin{split} W(G) &\leq W(G_{n-x})+\frac{(n-x-4)^2}{16}+W(G_{x+4})+\frac{x^2}{16}-8\\ &+x\cdot\sigma_{G_{n-x}}(\{v_1,v_2,v_3,v_4\})+(n-x-4)\cdot(\sigma_{G'}(v_1)-4). \end{split} \end{equation*} In the first line we upper bound all distances between pairs of vertices in the cut set and the outer region, and between pairs of vertices in the cut set and the inner region. We subtract $8$ since the distances between the pairs from the cut set were double counted. In the second line we upper bound all distances from the outer region to the inner region. These distances are split in two: distances from the outer region to the cut set, and distances from a fixed vertex of the cycle, without loss of generality say $v_1$, to the inner region. We are going to prove that $W(G)\leq \frac{1}{18}(n^3+3n^2)-1$, which settles this sub-case. We need to prove the following inequality: \begin{equation*} \begin{split} \frac{1}{18}(n^3+3n^2)-1 &\geq \frac{1}{18}((n-x)^3+3(n-x)^2)+\frac{(n-x-4)^2}{16}\\ &+\frac{1}{18}((x+4)^3+3(x+4)^2)+\frac{x^2}{16}-8\\ &+x\cdot\frac{(n-x-2)^2}{8}+(n-x-4)\cdot(\frac{(x+4)^2+5(x+4)}{10}-4). \end{split} \end{equation*} After simplification, we get \begin{equation} \begin{split} \frac{82}{45}- \frac{9n}{10}+\frac{n^2}{16}+ \frac{x}{5} + \frac{41 n x}{120}- \frac{n^2 x}{24}- \frac{3 x^2}{40}+ \frac{n x^2}{60} + \frac{x^3}{40}\leq 0. \end{split} \label{eq1} \end{equation} We know that $4\leq x\leq \frac{n-4}{2}$, and if we set $x=4$, we get $2176 + 528 n - 75 n^2\leq 0$, which holds for all $n$, $n\geq 10$.
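For moderate $n$, the inequality \eqref{eq1} can also be confirmed by exhaustive exact arithmetic over the whole admissible range of $x$; an illustrative sketch (the upper limit $n<80$ is an arbitrary choice):

```python
from fractions import Fraction as F

def lhs(n, x):
    # Left-hand side of inequality (2), in exact rational arithmetic.
    n, x = F(n), F(x)
    return (F(82, 45) - F(9, 10) * n + n**2 / 16 + x / 5 + F(41, 120) * n * x
            - n**2 * x / 24 - F(3, 40) * x**2 + n * x**2 / 60 + x**3 / 40)

# The inequality holds on the whole range 4 <= x <= (n-4)/2 for n >= 19.
for n in range(19, 80):
    for x in range(4, (n - 4) // 2 + 1):
        assert lhs(n, x) <= 0

# Endpoint value quoted in the text: 720 * lhs(n, 4) = 2176 + 528 n - 75 n^2.
for n in range(10, 80):
    assert 720 * lhs(n, 4) == 2176 + 528 * n - 75 * n**2
```
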
Therefore, if the derivative with respect to $x$ of the left-hand side of Inequality (\ref{eq1}) is negative for all $x$ with $4\leq x\leq \frac{n-4}{2}$, then the inequality holds for all these values of $x$. Differentiating the left-hand side of Inequality (\ref{eq1}) with respect to $x$, we get \begin{align} \label{oscar1} \begin{split} &\frac{\delta}{\delta x} \bigg( \frac{82}{45}- \frac{9n}{10}+\frac{n^2}{16}+ \frac{x}{5} + \frac{41 n x}{120}- \frac{n^2 x}{24}- \frac{3 x^2}{40}+ \frac{n x^2}{60} + \frac{x^3}{40} \bigg)\\ &=\frac{1}{5} + \frac{41 n}{120} - \frac{n^2}{24} - \frac{3 x}{20} + \frac{n x}{30} + \frac{3 x^2}{40}. \end{split} \end{align} If we set $x=4$ in (\ref{oscar1}), we get $\frac{1}{120} (96 + 57 n - 5 n^2)$, which is negative for all $n$, $n\geq 13$. If we set $x=\frac{n-4}{2}$ in (\ref{oscar1}), we get $\frac{1}{160} (-n^2 + 8 n + 128)$, which is negative for all $n$, $n\geq 17$. Since (\ref{oscar1}) is a convex quadratic in $x$, being negative at both endpoints it is negative on the whole interval. Since $n\geq 19$, we have $W(G)\leq \frac{1}{18}(n^3+3n^2)-1$, and this sub-case is settled. \begin{bf} Case $2.2$. \end{bf} In this case, we assume $2\leq x\leq 3$. From the minimality of $x$, we have $x=2$. Let us consider the maximal planar graph, denoted by $G_{n-2}$, obtained from $G$ by deleting the two vertices of the inner region and adding an edge which decreases the Wiener index by at most $\frac{(n-6)^2}{16}$. By the choice of $x$, we have that for a vertex $v$ in the inner region, each level of $\pa{v}^{G}$ contains at least $5$ vertices, except the first one, which contains only $4$, and the level before the last, which may also contain $4$ vertices, followed by one vertex in the last level. Therefore the status of the vertex $v$ is maximized if the last level contains one vertex, the level before the last and the first level contain four vertices, and every other level contains five vertices. Therefore the status of each of the inner vertices can be bounded by $\frac{1}{10}(n^2+5n)$.
This bound comes from Lemma \ref{Five_Connected_Lemma}, after modifications of the BFS partition similar to those in the previous case. Finally we have \begin{equation} \begin{split} W(G)&\leq W(G_{n-2})+\frac{(n-6)^2}{16}+\frac{2}{10}(n^2+5n)\\ &\leq \frac{1}{18}((n-2)^3+3(n-2)^2)+\frac{(n-6)^2}{16}+\frac{2}{10}(n^2+5n)\\ &=\frac{1}{18}n^3 + \frac{23}{240}n^2+\frac{1}{4}n - \frac{89}{36}\leq \frac{1}{18}(n^3+3n^2)-1. \end{split} \end{equation} The last inequality holds for all $n\geq 10$. Therefore we have settled this sub-case too, since $n\geq 19$. \begin{bf}Case $2.3$. \end{bf}In this case we assume $x=n-5$, that is, the outer region contains exactly one vertex. With reasoning similar to the previous case, we get \begin{equation} \begin{split} W(G)&\leq W(G_{n-1})+\frac{(n-5)^2}{16}+\frac{1}{10}(n^2+5n)\\ &\leq \frac{1}{18}((n-1)^3+3(n-1)^2)+\frac{(n-5)^2}{16}+\frac{1}{10}(n^2+5n)\\ &=\frac{1}{18}n^3 + \frac{13}{80}n^2+\frac{7}{24}n - \frac{241}{144}\leq \frac{1}{18}(n^3+3n^2)-1. \end{split} \end{equation} The last inequality holds for all $n\geq 9$. Therefore, we have settled this sub-case too, since $n\geq 19$. We have considered all sub-cases when $G$ is $4$-connected and proved that in this case the Wiener index is strictly less than the desired upper bound. \noindent\begin{bf} Case $3$. Let $G$ be $3$-connected and not $4$-connected. \end{bf} Since $G$ is not $4$-connected and it is a maximal planar graph, it must have a cut set of size $3$, say $\{v_1,v_2,v_3\}$, which induces a triangle by Lemma \ref{S_Connected_Cycle_lemma}. Let us assume, without loss of generality, that the number of vertices in the inner region of the cut set is minimal, say $x$. \begin{bf} Case $3.1.$ \end{bf} Assume $x\leq 2$. From the minimality of $x$, we have $x \not=2$, therefore $x=1$. Let us denote this vertex by $v$. Let $G_{n-1}$ be the maximal planar graph obtained from $G$ by deleting the vertex $v$.
From Lemma \ref{Three_Connected_Lemma}, we have $\sigma_G(v)\leq \frac{1}{6}(n^2+n)-\frac{1}{3}\mathbb{1}_{3|(n-1)}$. Finally we have, \begin{equation} \begin{split} W(G) & \leq W(G_{n-1})+\sigma_G(v) \\ & \leq \frac{1}{18}((n-1)^3+3(n-1)^2)-\frac{1}{9}\mathbb{1}_{3|n}-\frac{2}{9}\mathbb{1}_{3|(n-2)} \\ &+\frac{1}{6}(n^2+n)-\frac{1}{3}\mathbb{1}_{3|(n-1)}= \frac{n^3}{18}+\frac{n^2}{6}+\frac{1}{9}-\frac{1}{9}\mathbb{1}_{3|(n)}-\frac{2}{9}\mathbb{1}_{3|(n-2)}-\frac{1}{3}\mathbb{1}_{3|(n-1)}\\ &\leq \bigg\lfloor\frac{1}{18}(n^3+3n^2)\bigg\rfloor. \end{split} \end{equation} In this case equality holds if and only if the graph obtained after deleting the vertex $v$ is $T_{n-1}$. We can observe that, if we add the vertex $v$ to the graph $T_{n-1}$, the only choice that maximizes the status of $v$ is the one that yields the graph $T_n$. Hence we have the desired upper bound on the Wiener index, and equality holds if and only if $G=T_n$. \begin{bf} Case $3.2.$ \end{bf} Assume $x=3$. Let us denote the vertices in the inner region as $x_1,\ x_2\text{ and }x_3$. From the minimality of $x$ and the maximality of $G$, the structure of $G$ in the inner region is well defined, see Figure \ref{x=3}. If we remove these three inner vertices, the graph we get is denoted by $G_{n-3}$ and is still maximal. Hence we may use the induction hypothesis for the graph $G_{n-3}$. Consider the graph $G_{n-3}$ and the vertex set $S=\{v_1,v_2,v_3\}$. Each level of $\pa{S}^{G_{n-3}}$ has at least three vertices except the terminal one. Therefore we may apply Lemma \ref{Three_Connected_Lemma} to obtain $\sigma_{G_{n-3}}(\{v_1,v_2,v_3\})\leq \frac{1}{6}((n-6)^2+3(n-6)+2)$. To estimate the distances from the vertices in the outer region to the vertices in the inner region we do the following. We first estimate the distances from the outer region to the cut set, and from a fixed vertex on the cut set to all $x_i$.
The sum of the distances from the vertices in the outer region to the set $\{v_1,v_2,v_3\}$ is $\sigma_{G_{n-3}}(\{v_1,v_2,v_3\})$. The sum of the distances from each $v_i$ to the vertices $\{x_1,x_2,x_3\}$ is $4$. Note that, if we take a vertex in the outer region which has at least two neighbours on the cut set, then for this vertex we need to count only $3$ for the distances from the cut set to the vertices $\{x_1,x_2,x_3\}$. Since we have at least two such vertices, all cross distances can be bounded by $3\sigma_{G_{n-3}}( \{v_1,v_2,v_3\})+4(n-5)+6$. Then we have, \begin{equation} \begin{split} W(G) & \leq W(G_{n-3})+W(K_3)+3\sigma_{G_{n-3}}(\{v_1,v_2,v_3\})+4n-14 \\ & \leq \frac{1}{18}((n-3)^3+3(n-3)^2)+\frac{1}{2}((n-6)^2+3(n-6)+2)+4n-11\\ &< \bigg\lfloor\frac{1}{18}(n^3+3n^2)\bigg\rfloor. \end{split} \end{equation} Therefore, this case is also settled. \begin{figure}[h] \begin{subfigure}{0.55\linewidth} \centering \begin{tikzpicture}[scale=0.15] \draw[fill=black](-12,0)circle(12pt); \draw[fill=black](12,0)circle(12pt); \draw[fill=black](0,18)circle(12pt); \draw[fill=black](-4,4)circle(12pt); \draw[fill=black](4,4)circle(12pt); \draw[fill=black](0,10)circle(12pt); \draw[thick](-12,0)--(0,18)--(12,0)--(-12,0); \draw[thick](-4,4)--(0,10)--(4,4)--(-4,4); \draw[thick](-12,0)--(-4,4)(12,0)--(4,4)(0,18)--(0,10)(-12,0)--(0,10)(12,0)--(-4,4)(0,18)--(4,4); \node at (-12,-2){$v_1$}; \node at (12,-2){$v_3$}; \node at (0,20){$v_2$}; \end{tikzpicture} \caption{$x=3$.} \label{x=3} \end{subfigure} \begin{subfigure}{0.3\linewidth} \centering \begin{tikzpicture}[scale=0.15] \draw[fill=black](-12,0)circle(12pt); \draw[fill=black](12,0)circle(12pt); \draw[fill=black](0,18)circle(12pt); \draw[fill=black](0,3)circle(12pt); \draw[fill=black](0,6)circle(12pt); \draw[fill=black](-4,7.67)circle(12pt); \draw[fill=black](4,7.67)circle(12pt); \draw[thick](-12,0)--(12,0)--(0,18)--(-12,0); \draw[thick](-12,0)--(-4,7.67)--(0,18); \draw[thick](-12,0)--(0,3)--(12,0); \draw[thick](12,0)--(4,7.67)--(0,18);
\draw[thick](-12,0)--(0,6)--(4,7.67)--(-4,7.67)--(0,6)--(0,3)--(4,7.67); \node at (-12,-2) {$v_1$}; \node at (12,-2) {$v_3$}; \node at (0,20) {$v_2$}; \end{tikzpicture} \caption{$x=4$.} \label{x=4} \end{subfigure} \caption{The unique inner regions for the $3$-connected case when $x=3$ and $x=4$.} \label{x=3-4} \end{figure} \begin{bf} Case $3.3$ \end{bf} Assume $x=4$. From the minimality of $x$ and the maximality of the planar graph $G$, the only possible configuration of the inner region is the one in Figure \ref{x=4}. Consider a maximal planar graph on $n-4$ vertices, say $G_{n-4}$, which is obtained from $G$ by deleting the four inner vertices. We will apply the induction hypothesis to this graph, to upper bound the sum of distances between all pairs of vertices from $V(G_{n-4})$ in $G$. By applying Lemma \ref{Three_Connected_Lemma} for $G_{n-4}$ and $S= \{v_1,v_2,v_3\}$, we get $\sigma_{G_{n-4}}(\{v_1,v_2,v_3\})\leq\frac{1}{6}((n-4-3)^2+(n-4-3)+2)$. The sum of the distances between the four inner vertices is $7$. The sum of the distances from each $v_i$ to all of the vertices inside is at most six. By following a similar argument as in the previous case, we have \begin{equation} \begin{split} W(G) & \leq \frac{1}{18}((n-4)^3+3(n-4)^2)+7+\frac{4}{6}((n-7)^2+(n-7)+2)+6(n-4)\\ &< \bigg\lfloor\frac{1}{18}(n^3+3n^2)\bigg\rfloor. \end{split} \end{equation} Therefore, this case is also settled. \begin{bf} Case $3.4$ \end{bf} Assume $x=5$. From the minimality of $x$ and the maximality of the planar graph $G$, there are three configurations of the inner region, see Figure \ref{x=5}. Consider a maximal planar graph on $n-5$ vertices, say $G_{n-5}$, which is obtained from $G$ by deleting the $5$ vertices from the inner region. We will apply the induction hypothesis to this graph $G_{n-5}$, to bound the sum of the distances between the vertices of $V(G_{n-5})$ in the graph $G$.
By applying Lemma \ref{Three_Connected_Lemma} for $G_{n-5}$ and $S= \{v_1,v_2,v_3\}$, we get $\sigma_{G_{n-5}}(\{v_1,v_2,v_3\})\leq\frac{1}{6}((n-8)^2+(n-8)+2)$. The sum of the distances between the five inner vertices is at most $13$. The sum of the distances from each $v_i$ to all of the vertices inside is at most $8$. Finally we have, \begin{equation} \begin{split} W(G) & \leq \frac{1}{18}((n-5)^3+3(n-5)^2)+13+\frac{5}{6}((n-8)^2+(n-8)+2)+8(n-5)\\ &< \bigg\lfloor\frac{1}{18}(n^3+3n^2)\bigg\rfloor. \end{split} \end{equation} Therefore this case is also settled. \begin{figure}[h] \centering \begin{tikzpicture}[scale=0.15] \draw[fill=black](-12,0)circle(12pt); \draw[fill=black](12,0)circle(12pt); \draw[fill=black](0,18)circle(12pt); \draw[fill=black](0,4)circle(12pt); \draw[fill=black](0,2)circle(12pt); \draw[fill=black](0,6)circle(12pt); \draw[fill=black](-4,7.67)circle(12pt); \draw[fill=black](4,7.67)circle(12pt); \draw[thick](-12,0)--(12,0)--(0,18)--(-12,0); \draw[thick](-12,0)--(-4,7.67)--(0,18); \draw[thick](-12,0)--(0,4)--(4,7.67)(0,2)--(4,7.67); \draw[thick](12,0)--(4,7.67)--(0,18); \draw[thick](-12,0)--(0,6)--(4,7.67)--(-4,7.67)--(0,6)--(0,4); \draw[thick](-12,0)--(0,2)--(12,0)(0,2)--(0,4); \node at (-12,-2){$v_1$}; \node at (12,-2){$v_3$}; \node at (0,20){$v_2$}; \end{tikzpicture}\qquad \begin{tikzpicture}[scale=0.15] \draw[fill=black](-12,0)circle(12pt); \draw[fill=black](12,0)circle(12pt); \draw[fill=black](0,18)circle(12pt); \draw[fill=black](-4,4)circle(12pt); \draw[fill=black](0,3)circle(12pt); \draw[fill=black](0,9)circle(12pt); \draw[fill=black](-4,7.67)circle(12pt); \draw[fill=black](4,7.67)circle(12pt); \draw[thick](-12,0)--(12,0)--(0,18)--(-12,0); \draw[thick](-12,0)--(-4,7.67)--(0,18); \draw[thick](-12,0)--(0,3)--(12,0); \draw[thick](12,0)--(4,7.67)--(0,18); \draw[thick](-4,7.67)--(-4,4)--(0,3)--(4,7.67)--(0,9)--(-4,7.67)(0,18)--(0,9); \draw[thick](-12,0)--(-4,4)--(0,9)--(0,3); \node at (-12,-2){$v_1$}; \node at (12,-2){$v_3$}; \node at (0,20){$v_2$};
\end{tikzpicture}\qquad \begin{tikzpicture}[scale=0.15] \draw[fill=black](-12,0)circle(12pt); \draw[fill=black](12,0)circle(12pt); \draw[fill=black](0,18)circle(12pt); \draw[fill=black](0,3)circle(12pt); \draw[fill=black](0,6)circle(12pt); \draw[fill=black](-4,4)circle(12pt); \draw[fill=black](-4,7.67)circle(12pt); \draw[fill=black](4,7.67)circle(12pt); \draw[thick](-12,0)--(12,0)--(0,18)--(-12,0); \draw[thick](-12,0)--(-4,7.67)--(0,18); \draw[thick](4,7.67)(0,3)--(4,7.67); \draw[thick](12,0)--(4,7.67)--(0,18)(-12,0)--(0,3)--(12,0); \draw[thick](-12,0)--(0,6)--(4,7.67)--(-4,7.67)(0,3)--(0,6)(0,3)--(-4,4); \draw[thick](-4,4)--(-4,7.67)--(0,6); \node at (-12,-2){$v_1$}; \node at (12,-2){$v_3$}; \node at (0,20){$v_2$}; \end{tikzpicture} \caption{3-connected, $x=5$.} \label{x=5} \end{figure} \begin{bf} Case $3.5$ \end{bf} Assume $x\geq 6$. First we settle the case $x\geq 7$, and then the case $x=6$. Consider the maximal planar graph on $n-x$ vertices, say $G_{n-x}$, which is obtained from $G$ by deleting those $x$ vertices from the inner region of the cut set $\{v_1,v_2,v_3\}$. Consider the maximal planar graph on $x+3$ vertices, say $G_{x+3}$, which is obtained from $G$ by deleting all $n-x-3$ vertices from the outer region of the cut set $\{v_1,v_2,v_3\}$. We know by induction that $W(G_{x+3})\leq \frac{1}{18}((x+3)^3+3(x+3)^2)$. There are at least two vertices from the cut set $\{v_1,v_2,v_3\}$ such that each of them has at least two neighbours in the outer region of the cut set. Without loss of generality, we may assume they are $v_1$ and $v_2$. Hence if we consider $\pa{v_1}^{G_{n-x}}$ and $\pa{v_2}^{G_{n-x}}$, we will have $4$ vertices in the first level of each and at least three in the following levels until the last one. Therefore we have $\sigma_{G_{n-x}}(v_1)\leq \sigma_3(n-x-2)+1\leq\frac{1}{6}((n-x-2)^2+3(n-x-2)+8)$ from Lemma \ref{Three_Connected_Lemma}, and the same for $v_2$.
Now let us consider $\pa{\{v_1,v_2\}}^{G_{x+3}}$; from the minimality of $x$, each non-terminal level of $\pa{\{v_1,v_2\}}^{G_{x+3}}$ contains at least $4$ vertices. Therefore, by applying Lemma \ref{Four_Connected_Lemma}, we get $\sigma_{G_{x+3}}(\{v_1,v_2\})\leq \frac{1}{8}(x^2+6x+9)$. We have, \begin{equation} \begin{split} W(G) \leq & (W(G_{x+3})+W(G_{n-x})-3)+(n-x-3)(\sigma_{G_{x+3}}(\{v_1,v_2\})-1)\\& +x \bigg(\text{max} \bigg\{ \sigma_{G_{n-x}}(v_{1}),\sigma_{G_{n-x}}(v_{2}) \bigg\}-2\bigg). \end{split} \end{equation} The first term of the sum is an upper bound for the sum of all distances which do not cross the cut set. The second and the third terms upper bound all cross distances in the following way: each crossing distance is split into two parts, from the inner vertex to the set $\{v_1, v_2\}$ and from $v_i$, $i \in \{1,2\}$, to the outer vertex; these parts are bounded by the second and the third terms of the sum, respectively. Therefore, applying these estimates, it suffices to show that \begin{equation}\label{nn} \begin{split} \frac{1}{18}(n^3+3n^2)-1 &\geq \frac{1}{18}((x+3)^3+3(x+3)^2)+\frac{1}{18}((n-x)^3+3(n-x)^2)-3\\ & +\frac{(n-x-3)(x^2+6x+1)}{8}+\frac{x((n-x-2)^2+3(n-x-2)-4)}{6}. \end{split} \end{equation} After simplification, Inequality (\ref{nn}) becomes \begin{equation}\label{Tempo1} -x^3+x^2(n+3)+x(21-6n)-(15+3n)\geq 0, \end{equation} where \[ \frac{d}{dx} \bigg(-x^3+x^2(n+3)+x(21-6n)-(15+3n)\bigg)=-3x^2+(2n+6)x+21-6n. \] The derivative is positive when $x\in [7,\frac{n}{2}]$. Hence, since Inequality (\ref{Tempo1}) holds for $x=7$, it also holds for all $x \in [7,\frac{n}{2}]$. Therefore, if $x\geq 7$ we are done. Finally, if $x=6$, then the sum of the distances from $v_1$ and $v_2$ to all vertices inside is $9$ instead of $\frac{73}{8}$ as in (\ref{nn}). Thus we get an improvement of Inequality (\ref{nn}), which shows that $W(G) < \floor{\frac{1}{18}(n^3+3n^2)}$ even for $x=6$. Therefore we have settled the $3$-connected case too.
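The simplification leading to (\ref{Tempo1}) can be double-checked numerically. The following standalone snippet (exact rational arithmetic; an editorial aid, not part of the proof) confirms that the left hand side of (\ref{Tempo1}) equals $24$ times the difference of the two sides of Inequality (\ref{nn}), and that it is non-negative for $n\geq 19$ and $7\leq x\leq n/2$:

```python
from fractions import Fraction as F

def nn_gap(n, x):
    # Difference between the two sides of Inequality (nn), computed exactly.
    n, x = F(n), F(x)
    lhs = (n**3 + 3 * n**2) / 18 - 1
    rhs = (((x + 3)**3 + 3 * (x + 3)**2) / 18
           + ((n - x)**3 + 3 * (n - x)**2) / 18 - 3
           + (n - x - 3) * (x**2 + 6 * x + 1) / 8
           + x * ((n - x - 2)**2 + 3 * (n - x - 2) - 4) / 6)
    return lhs - rhs

def tempo1(n, x):
    # Left hand side of (Tempo1).
    return -x**3 + x**2 * (n + 3) + x * (21 - 6 * n) - (15 + 3 * n)

for n in range(19, 60):
    for x in range(7, n // 2 + 1):
        assert 24 * nn_gap(n, x) == tempo1(n, x)   # the simplification
        assert tempo1(n, x) >= 0                   # the claimed inequality
```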
\end{proof} \section{Concluding Remarks} By Theorem \ref{Main_Theorem}, there is a unique maximal planar graph, $T_n$, maximizing the Wiener index. Clearly $T_n$ is not $4$-connected. One may ask for the maximum Wiener index for the families of $4$-connected and $5$-connected maximal planar graphs. In \cite{29}, asymptotic results were proved for both cases. Moreover, based on their constructions, the authors conjecture sharp bounds for both $4$-connected and $5$-connected maximal planar graphs. Their conjectures are the following. \begin{conjecture} Let $G$ be an $n$-vertex, $n\geq 6$, maximal, $4$-connected, planar graph. Then we have \begin{equation*}W(G)\leq \begin{cases} \frac{1}{24}n^3+\frac{1}{4}n^2+\frac{1}{3}n-2, &\text{if $n\equiv 0,2 \; (mod \ 4)$;}\\ \frac{1}{24}n^3+\frac{1}{4}n^2+\frac{5}{24}n-\frac{3}{2}, &\text{if $n\equiv 1 \;(mod \ 4)$;}\\ \frac{1}{24}n^3+\frac{1}{4}n^2+\frac{5}{24}n-1, &\text{if $n\equiv 3 \;(mod \ 4)$.} \end{cases} \end{equation*} \end{conjecture} \begin{conjecture} Let $G$ be an $n$-vertex, $n\geq 12$, maximal, $5$-connected, planar graph. Then we have \begin{equation*}W(G)\leq \begin{cases} \frac{1}{30}n^3+\frac{3}{10}n^2-\frac{23}{15}n+32, &\text{if $n\equiv 0 \;(mod \ 5)$;}\\ \frac{1}{30}n^3+\frac{3}{10}n^2-\frac{23}{15}n+\frac{156}{5}, &\text{if $n\equiv 1 \;(mod \ 5)$;}\\ \frac{1}{30}n^3+\frac{3}{10}n^2-\frac{23}{15}n+\frac{168}{5}, &\text{if $n\equiv 2 \;(mod \ 5)$;}\\ \frac{1}{30}n^3+\frac{3}{10}n^2-\frac{23}{15}n+31, &\text{if $n\equiv 3 \;(mod \ 5)$;}\\ \frac{1}{30}n^3+\frac{3}{10}n^2-\frac{23}{15}n+\frac{161}{5}, &\text{if $n\equiv 4 \;(mod \ 5)$.}\\ \end{cases} \end{equation*} \end{conjecture} \section*{Acknowledgements} The research of the second and the fourth authors is partially supported by the National Research, Development and Innovation Office -- NKFIH, grants K 116769 and SNN 117879.
The research of the fourth author is partially supported by Shota Rustaveli National Science Foundation of Georgia SRNSFG, grant number FR-18-2499.
arXiv:1912.02846, The Maximum Wiener Index of Maximal Planar Graphs (math.CO), https://arxiv.org/abs/1912.02846
https://arxiv.org/abs/1907.11094
Improving the Accuracy of Principal Component Analysis by the Maximum Entropy Method
Classical Principal Component Analysis (PCA) approximates data in terms of projections on a small number of orthogonal vectors. There are simple procedures to efficiently compute various functions of the data from the PCA approximation. The most important function is arguably the Euclidean distance between data items. This can be used, for example, to solve the approximate nearest neighbor problem. We use random variables to model the inherent uncertainty in such approximations, and apply the Maximum Entropy Method to infer the underlying probability distribution. We propose using the expected values of distances between these random variables as improved estimates of the distance. We show analytically and experimentally that in most cases results obtained by our method are more accurate than what is obtained by the classical approach. This improves the accuracy of a classical technique that has been used with little change for over 100 years.
\subsection{Experiments with Rayleigh Quotients} \input{tablesrq} Table~\ref{tab:RQ1} describes the average difference in evaluating the column and row space Rayleigh quotient. A smaller mean and standard deviation of $\lvert \text{$r_\text{ent}$}-r \rvert$ indicate better estimates for the Maximum Entropy Method. The vectors evaluated in this experiment were randomly drawn from a Gaussian distribution. The plots in Fig.~\ref{fig:rq1} show the advantage of the new formulas, using the same format as in Figs.~\ref{fig:distanceionosphere}, \ref{fig:distancemsd}, and \ref{fig:distancewdbc}. \input{plotsrq} \newcommand{\bsection}[1]{\section{\textbf{#1}}} \newcommand{\bsubsection}[1]{\subsection{\textbf{#1}}} \newcommand{\bsubsectionS}[1]{\subsection*{\textbf{#1}}} \newcommand{\reff}[1]{(\ref{#1})} \newcommand{\Ereff}[1]{Equation~\reff{#1}} \newcommand{\fR}[1]{\mathbb{R}^{#1}} \newcommand{\ex}[1]{E\{ #1 \}} \begin{document} \title{ Improving the Accuracy of Principal Component Analysis by the Maximum Entropy Method } \author{ \IEEEauthorblockN{Guihong Wan} \IEEEauthorblockA{\textit{Department of Computer Science} \\ \textit{The University of Texas at Dallas}\\ Richardson, Texas 75083\\ Guihong.Wan@utdallas.edu} \and \IEEEauthorblockN{Crystal Maung} \IEEEauthorblockA{\textit{7 Next} \\ \textit{7-Eleven Inc.}\\ Irving, Texas 75063\\ Crystal.Maung@7-11.com} \and \IEEEauthorblockN{Haim Schweitzer} \IEEEauthorblockA{\textit{Department of Computer Science} \\ \textit{The University of Texas at Dallas}\\ Richardson, Texas 75083\\ HSchweitzer@utdallas.edu} } \maketitle \begin{abstract} Classical Principal Component Analysis (PCA) approximates data in terms of projections on a small number of orthogonal vectors.
There are simple procedures to efficiently compute various functions of the data from the PCA approximation. The most important function is arguably the Euclidean distance between data items. This can be used, for example, to solve the approximate nearest neighbor problem. We use random variables to model the inherent uncertainty in such approximations, and apply the Maximum Entropy Method to infer the underlying probability distribution. We propose using the expected values of distances between these random variables as improved estimates of the distance. We show analytically and experimentally that in most cases results obtained by our method are more accurate than what is obtained by the classical approach. This improves the accuracy of a classical technique that has been used with little change for over 100 years. \end{abstract} \begin{IEEEkeywords} Principal Component Analysis (PCA), Rayleigh Quotient, Dimension Reduction, Low Rank Matrix Representation, Maximum Entropy Method \end{IEEEkeywords} \bsection{Introduction} \label{secIntro} We consider the standard representation of numerical data as a large matrix of numeric values. Let $n$ be the number of data items in the dataset, and let $m$ be the size of each item. The data can be viewed as a matrix of size $m \times n$, as illustrated in Fig.~\ref{figmn}. In many practical situations both $m$ and $n$ are very large. For example, datasets containing genome data may have $m$ in the thousands and $n$ in the millions~\cite{HapMap05}. In such cases even simple tasks, such as searching the data for a particular item, become computationally expensive. \begin{figure} \begin{center} \input{fig1.tikz} \end{center} \caption{ The view of data as a matrix. There are $n$ data items, and each one is of size $m$. A data item is a column of an $m \times n$ matrix.
} \label{figmn} \end{figure} A standard approach to address this ``curse of dimensionality'' is dimension reduction, reducing the dimension of each data item from $m$ to $k$, where $k < m$. For a review of dimension reduction techniques see, e.g.,~\cite{Burges10:book}. The most common approach is the Principal Component Analysis (PCA), known for over 100 years. For references see, for example~\cite{Gray17,Jolliffe02,HLR:AAAI19,astarRPCA:IJAIT}. The uncentered variant can be described as follows. Let $A$ be the data matrix of size $m \times n$; define the $m \times m$ matrix $B$ by: $B = A A^T$. Let $V$ be an $m \times k$ matrix whose columns are the $k$ eigenvectors of $B$ corresponding to the $k$ largest eigenvalues. The columns of $V$ are orthonormal, and their span gives the best possible approximation of rank $k$ to the column space of $A$. Let $a_i$ be the $i$th column of $A$. The following approximations hold: \begin{equation} \label{AVW} A \approx V W, \quad a_i \approx V w_i \end{equation} Here $W$ is $k \times n$, representing $A$ in the reduced dimension. In particular, the $i$th column of $A$ is the vector $a_i$, and it is represented by $w_i$, the $i$th column of $W$. The matrix $W$ or any specific column $w_i$ can be computed by: \begin{equation} \label{WVTA} W = V^T A, \quad w_i = V^T a_i \end{equation} The centered variant of the PCA is the same as the uncentered PCA with an initial centering of each column. The centering is performed by mean subtraction. See, e.g.,~\cite{Cadima09}. The PCA enables fast computation of many data-related techniques. The low dimension also helps with the visualization and the interpretation of the data. We proceed to describe how to use the representation in \reff{AVW} to approximate the Euclidean distance between data items. Recall that the squared Euclidean distance between two vectors $x$ and $y$ is given by: \[ \text{distance}^2(x,y) = \|x - y\|^2 \] It is computed as the sum of $m$ squared coordinates.
Thus, the cost of computing this distance is $O(m)$. Now suppose both $x$ and $y$ are from the same dataset with known PCA, as given by \Ereff{AVW}. For clarity we take $x=a_i$ and $y=a_j$. Then because of the orthogonality of $V$ we have: \begin{multline} \label{aiaj} \text{distance}^2(a_i,a_j) = \| a_i - a_j \|^2 \\ \approx \| V (w_i - w_j) \|^2 = \| w_i - w_j \|^2 \end{multline} This is a classical approximation formula, known for over 100 years. See, e.g., \cite{Jolliffe02,Burges10:book}. It shows that the approximate value of $\| a_i - a_j \|^2$ can be computed in $O(k)$, whereas the exact computation takes $O(m)$. Another common situation where the PCA leads to significant improvements in the running time is the following. Suppose the vector $x$ is not necessarily a column of $A$, and one has to calculate the $n$ squared distances $d_i^2 = \| x - a_i \|^2$ for $i=1,\ldots, n$. (These are the distances between $x$ and all the columns of $A$.) This situation occurs, for example, in calculating the nearest neighbor of $x$ among the columns of $A$ (e.g., \cite{Weber98}), or in the computation of multi-dimensional scaling (e.g., \cite{CoxCox}). The direct approach requires computing $n$ distances, which takes $O(mn)$. If the PCA of $A$ is known, the approximate $n$ distances can be computed in $O(km+kn)$ by the following algorithm: \begin{equation} \label{wx} w_x = V^T x, \quad d_j^2 \approx \| w_x - w_j \|^2 ~~ \text{for $j=1,\ldots n$} \end{equation} \bsubsectionS{Our contributions} Our main results are formulas that improve the quality of the approximations in \reff{aiaj} and \reff{wx}. Clearly, the approximation in \reff{aiaj}, and sometimes also the approximation in \reff{wx}, can be improved by increasing $k$, the rank of the reduced dimension. But this increases the computation cost and reduces the effectiveness of working in a reduced dimension. It also makes the interpretation of the data in the low dimension harder.
For example, with $k=2$ the data can be visualized in a plane. Increasing $k$ to $4$ creates a representation that is much harder to visualize. Our main result is new formulas that improve the accuracy in \reff{aiaj} and \reff{wx} without increasing $k$. Specifically, we propose the approximation formula~\reff{d3a} as an alternative to \reff{aiaj}, and the approximation formula~\reff{d3x} as an alternative to \reff{wx}: \begin{align} \label{d3a} & \text{distance}^2(a_i, a_j) \approx \notag \\ & ~~~~ \| w_i - w_j \|^2 + \|a_i\|^2 - \|w_i\|^2 + \|a_j\|^2 - \|w_j\|^2 \end{align} \begin{align} \label{d3x} & \text{distance}^2(x, a_j) \approx \notag \\ & ~~~~ \| w_x - w_j \|^2 + \|x\|^2 - \|w_x\|^2 + \|a_j\|^2 - \|w_j\|^2 \\ & \text{where~~} w_x = V^T x \notag \end{align} The following are some observations about the result: \begin{compactitem} \item The formulas \reff{d3a} and \reff{d3x} use additional information, the squared norms $\|a_i\|^2$ for each column $a_i$ of $A$. This information can be pre-computed during the PCA calculation, without significant change to the running time of the PCA. \item The complexity of using~\reff{d3a} to compute the approximate squared distance between $a_i$ and $a_j$ is $O(k)$, the same as the complexity of using~\reff{aiaj} to approximate the squared distance between $a_i$ and $a_j$. \item The complexity of using~\reff{d3x} to compute the approximate squared distances between $x$ and all the columns of $A$ is $O(km + kn)$, the same as the complexity of using~\reff{wx} to compute the approximate squared distances between $x$ and all the columns of $A$. \item The new approximations \reff{d3a} and \reff{d3x} are not always better than the old approximations \reff{aiaj} and \reff{wx}. But we claim that ``on the average'' the new approximations are better. This follows from the derivation of these approximations using the Maximum Entropy Method and extensive evaluation on real datasets. 
\end{compactitem} \noindent The technique that we use to derive the formulas \reff{d3a} and \reff{d3x} can also be used for other applications of the PCA besides Euclidean distances. We derive related formulas for accurate computation of the Rayleigh Quotient, an important statistic that indicates the similarity of a vector to a collection of vectors. Another important contribution of the paper is the method in which the approximations are derived. We model the uncertainty in the PCA representation in terms of random variables with an unknown distribution. We then use the Maximum Entropy Method to determine the most likely distribution. Expected values are then used as the improved estimates. This approach appears to be novel. We are not aware of any previous studies that apply similar approaches to improve deterministic estimates. \bsubsectionS{Paper organization} The paper is organized as follows. Section~\ref{secDR} formulates dimension reduction as an approximate estimation of column vectors with unknown quantities. A key idea is to model the unknown quantities as random variables in an unknown probability distribution. Section~\ref{secME} describes the Maximum Entropy Method, a classical method of inferring the most likely probability distribution from partial information about random variables. We apply the Maximum Entropy Method to the random variables of Section~\ref{secDR} to derive the most likely probability distribution of the PCA estimates. A key theorem proved in this section characterizes the probability density of the unknown quantities. In Section~\ref{EVPCA} we use the probability density of Section~\ref{secME} to compute expected values of several expressions of PCA approximations. In Section~\ref{secDistances} we apply the results of Section~\ref{EVPCA} to compute estimates of distances between vectors. In Section~\ref{secRQ} we derive maximum entropy estimates of Rayleigh quotients.
Section~\ref{secExperiments} describes extensive experimental results evaluating our approximation formulas on real data. \bsection{A probabilistic setting for PCA} \label{secDR} As discussed in Section~\ref{secIntro} the PCA approximation of the $m \times n$ matrix $A$ is given by \Ereff{AVW}. In this section we use a slightly different notation for the same relation. We write the PCA approximation as: \begin{equation} \label{AV1W1} A \approx V_1 W_1 \end{equation} Since $V_1$ has orthonormal columns, it is always possible to extend these columns to an orthonormal basis of $\mathbb{R}^m$. Let $V_2$ be such an extension; then $V_1$ and $V_2$ are orthogonal complements. They satisfy the following properties: \begin{equation} \label{V1V2} V_1^T V_1 = I, \quad V_2^T V_2 = I, \quad V_1 V_1^T + V_2 V_2^T = I \end{equation} Using both $V_1$ and $V_2$ there is an exact representation of $A$ that can be expressed as follows: \begin{equation} \label{AV1W1V2W2} A = V_1 W_1 + V_2 W_2, \quad a_i = V_1 w_1^i + V_2 w_2^i \end{equation} where $a_i$ is the $i$th column of $A$, $w_1^i$ is the $i$th column of $W_1$, and $w_2^i$ is the $i$th column of $W_2$. Suppose the PCA of $A$ is given as the matrices $V_1$ and $W_1$. Without loss of generality $V_2$ can be selected as any orthogonal complement of $V_1$. This means that the only unknown quantities in \reff{AV1W1V2W2} are the entries of the matrix $W_2$. The special case of classical PCA is obtained by taking $W_2=0$. Instead, we propose to view $W_2$ as a random matrix, with entries that are random variables. From \reff{AV1W1V2W2} it follows that if $W_2$ is a random matrix then $A$ is also a random matrix, and so are the columns $a_i$ and $w_2^i$. \Ereff{rvA} identifies random variables with $\rv{~}$ as shown below: \begin{equation} \label{rvA} \rv{A} = V_1 W_1 + V_2 \rv{W_2}, \quad \rv{a_i} = V_1 w_1^i + V_2 \rv{w_2^i} \end{equation} We note that some of the matrices in \reff{rvA} are too big to manipulate explicitly.
The size of $V_2$ is $m \times (m{-}k)$, and the size of $W_2$ is $(m{-}k) \times n$. A practical solution should not manipulate these matrices explicitly. We proceed to show that modeling the unknown $W_2$ as a random matrix has an advantage over setting it to be $0$. Suppose the probability density of $W_2$ is \underline{known}. Applying the expectation operator $\ex{}$ to both sides of the first equation in~\reff{rvA} we get: \[ \ex{\rv{A}} = V_1 W_1 + V_2 \ex{\rv{W_2}} \] Thus, taking $\ex{\rv{A}}$ as an improved estimate of $A$, we can expect a result different from the classical one whenever $\ex{\rv{W_2}}$ is nonzero. Similarly, using the orthogonality of $V_1,V_2$ it is easy to derive the following relation from~\reff{rvA}: $\rv{A^T}\rv{A} = W_1^T W_1 + \rv{W_2^T}\rv{W_2}$. Taking expectations we see that: \[ \ex{\rv{A^T}\rv{A}} = W_1^T W_1 + \ex{\rv{W_2^T}\rv{W_2}} \] Therefore, the improved estimate of $A^TA$ is different from the classical estimate whenever $\ex{\rv{W_2^T}\rv{W_2}} \neq 0$. Observe that $\ex{\rv{W_2^T}\rv{W_2}} \neq \ex{\rv{W_2}}^T \ex{\rv{W_2}}$, so that $\ex{\rv{W_2^T}\rv{W_2}}$ may be nonzero even if $\ex{\rv{W_2}}$ is 0. In our case the probability density of $W_2$ is unknown. We use the Maximum Entropy Method to compute the most likely probability distribution under the assumption that the column norms of $A$ are known. It is not surprising that under this probability density $\ex{\rv{W_2}} = 0$, but we found it surprising that $\ex{\rv{W_2^T}\rv{W_2}} \neq 0$.
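The practical effect of the correction terms can be illustrated with a small numerical sketch. The following standalone Python snippet (an editorial illustration with synthetic Gaussian data; any matrix with orthonormal columns may play the role of $V_1$ here, although the true PCA basis would make the residuals smallest) compares the classical estimate \reff{aiaj} with the corrected estimate \reff{d3a}:

```python
import random

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    # Orthonormalize a list of (linearly independent) vectors.
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = dot(w, w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

m, n, k = 50, 40, 5
A = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]   # n columns, each of dimension m
V1 = gram_schmidt([[random.gauss(0, 1) for _ in range(m)] for _ in range(k)])

W = [[dot(v, a) for v in V1] for a in A]               # w_i = V_1^T a_i
z = [dot(a, a) - dot(w, w) for a, w in zip(A, W)]      # z_i = ||a_i||^2 - ||w_i||^2

err_classic = err_ent = pairs = 0
for i in range(n):
    for j in range(i + 1, n):
        true_sq = sum((p - q) ** 2 for p, q in zip(A[i], A[j]))
        proj_sq = sum((p - q) ** 2 for p, q in zip(W[i], W[j]))
        err_classic += abs(proj_sq - true_sq)              # classical estimate ||w_i - w_j||^2
        err_ent += abs(proj_sq + z[i] + z[j] - true_sq)    # corrected estimate with z_i + z_j
        pairs += 1

print("avg error, classical:", err_classic / pairs)
print("avg error, corrected:", err_ent / pairs)
```

On such data the classical estimate misses the entire residual energy $\|w_2^i-w_2^j\|^2$, while the corrected estimate misses only the cross term $2\,(w_2^i)^T w_2^j$, which is close to zero on average.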
In this paper we use the following special case, described in Chapter 14 of Papoulis~\cite{papoulis}. \smallskip \noindent \textbf{Theorem~1:} ~ Let $x = (x_1, \ldots, x_n)^T$ be a random vector, where the coordinates $x_i$ are $n$ random variables. Let $R = \ex{x x^T}$ be the correlation matrix associated with $x$. Suppose $R$ is known. Let $\Delta$ be the determinant of $R$ and assume $ \Delta \neq 0$. Then according to the Maximum Entropy Method the probability density $f(x)$ and the entropy $H(x)$ are given by: \begin{equation} \label{fH} \begin{aligned} & f(x) = \frac{1}{\sqrt{(2 \pi)^n \Delta }} e^{-\frac{1}{2}x^T R^{-1} x} \\ & H(x) = \ln \sqrt{(2 \pi e)^n \Delta} \end{aligned} \end{equation} \noindent See~\cite{papoulis} for the proof. \smallskip \noindent As stated in~\reff{fH} the entropy of $x$ is determined by $\Delta$, the determinant of the correlation matrix $R$. If $R$ is only partially known, it can be determined by the Maximum Entropy Method by maximizing the determinant $\Delta$ over the unknown quantities. We use this technique to derive the following theorem, which is our main technical result: \smallskip \noindent \textbf{Theorem~2:} ~ Let $W = (w_1, \ldots, w_n)$ be a random matrix of dimensions $k \times n$. Suppose $z_i = \ex{\|w_i\|^2}$ is known for $i=1,\ldots,n$, but nothing else is known about the probability density of $W$. Then according to the Maximum Entropy Method: \begin{compactdesc} \item[1.] All entries of the matrix $W$ have 0 mean: \[ \ex{w_{ij}} = 0, \quad \text{for}~ i=1,\ldots, n, ~ j=1,\ldots,k \] \item[2.] The random variable $w_{i_1,j_1}$ is independent of the random variable $w_{i_2,j_2}$ unless $i_1 = i_2$ and $j_1 = j_2$. \item[3.] The expected value of $\|w_{ij}\|^2$ is given by: \[ \ex{\|w_{ij}\|^2} = \frac{z_i}{k} \] \item[4.]
The probability density of $W$ is given by: \[ \begin{aligned} & f(W) = \frac{1}{\sqrt{(2 \pi)^{kn} \Delta }} e^{s(W)} \\ & \text{where:} \\ & \Delta = \frac{\prod_{i=1}^n z_i^k}{k^{kn}}, \quad s(W) = -\frac{k}{2} \sum_{i,j} \frac{w_{i,j}^2}{z_i} \end{aligned} \] \end{compactdesc} \smallskip \noindent \textbf{Proof:} ~ Let $q$ be the vector of $kn$ random variables, created by concatenating all the columns of $W$: \[ q = \begin{pmatrix} w_{11}, \ldots, w_{1k}, \ldots, w_{n1}, \ldots, w_{nk} \end{pmatrix} ^T \] Let $R$ be the correlation matrix of $q$: $R = \ex{q q^T}$. Observe that the matrix $R$ is $nk \times nk$. The value of $R$ at row $I$ and column $J$ is: $R_{IJ} = \ex{w_{i_1,j_1} w_{i_2,j_2}}$ for some $i_1,j_1,i_2,j_2$. Define: $\nu_{ij} = \ex{w_{ij}^2}$. Then from the definition of $z_i$: \begin{equation} \label{sumnu} z_i = \ex{\|w_i\|^2} = \sum_j \ex{w_{ij}^2} = \sum_j \nu_{ij} \end{equation} The following information is known about the diagonal elements $R_{II}$, where the location $I$ in $q$ corresponds to the location $i,j$ in $W$: \[ R_{II} = \ex{w_{ij}^2} = \nu_{ij}. \] From Theorem~1 it follows that the maximum entropy of $q$ is obtained by maximizing $\Delta$, the determinant of $R$, under the constraints~\reff{sumnu}. The proof of the theorem follows from this maximization. According to the Hadamard determinant inequality (see, e.g., \cite{Rozanski17}), $\Delta \leq \prod_I R_{II}$. Since $\Delta = \prod_I R_{II}$ when all off-diagonal elements are 0, it follows that there is a maximum where $\ex{w_{i_1,j_1} w_{i_2,j_2}} = 0$ unless $i_1 {=} i_2$ and $j_1 {=} j_2$. This proves parts 1 and 2 of the theorem. To prove part 3 observe that from parts 1,2 it follows that \[ \Delta = \prod_I R_{II} = \prod_{i,j} \nu_{ij} = \prod_i (\prod_j \nu_{ij}) \] Therefore, for each $i$ we need to maximize $\prod_{j} \nu_{ij}$ subject to the constraint that $\sum_{j=1}^k \nu_{ij} = z_i$.
It can be easily shown (for example using the method of Lagrange multipliers) that the maximizing solution is $\nu_{ij} = \frac{z_i }{k}$. This proves part 3 of the theorem. Part 4 of the theorem follows from Theorem 1 by observing that: \begin{compactitem} \item From part 3: \[ \Delta = \prod_{i=1}^n \prod_{j=1}^k (z_i/k) = \frac{\prod_{i=1}^n z_i^k}{k^{kn}} \] \item Since $R$ is diagonal: \[ q^T R^{-1} q = \sum_I \frac{q_I^2}{R_{II}} = \sum_{i,j} \frac{w_{i,j}^2}{z_i/k} \] \end{compactitem} This completes the proof of Theorem~2. \hfill $\blacksquare$ \bsection{Expected values of PCA approximations} \label{EVPCA} In \Ereff{rvA} the PCA is expressed as a relation between random variables. We apply Theorem~2 to the matrix $W_2$ and determine its most likely probability density. The expected values of various estimates can then be computed in closed form. The matrix $W_2$ is of size $(m{-}k) \times n$, and its $i$th column, $w_2^i$, is of size $m{-}k$. Since the orthogonality of $V_1,V_2$ gives $\|a_i\|^2 = \|w_1^i\|^2 + \|\rv{w_2^i}\|^2$, the value of $z_i$ in Theorem 2 can be computed as follows: \begin{equation} \label{zi} z_i = \ex{\|\rv{w_2^i}\|^2} = \|a_i\|^2 - \|w_1^i\|^2, \quad i=1,\ldots,n \end{equation} Applying Theorem~2 gives the following expected values of expressions related to $w_2^i$, the $i$th column of $W_2$: \begin{equation} \label{w2} \begin{aligned} & \ex{ \rv{w_2^i} } = 0 , \quad \ex{\rv{\|w_2^i\|^2}} = z_i \\ & \ex{\rv{(w_2^i)^T}\rv{w_2^j}} = 0 ,\quad \ex{\rv{w_2^i}\rv{(w_2^i)^T}} = \frac{z_i}{m-k} I \\ & \text{where $i {\neq} j$, and $I$ is the $(m{-}k) \times (m{-}k)$ identity matrix.} \end{aligned} \end{equation} From~\reff{w2} we get the following expected values related to the entire matrix $W_2$: \begin{equation} \label{W2} \begin{aligned} & \ex{\rv{W_2}} = 0 \\ & \ex{\rv{W_2^T}\rv{W_2}} = \begin{pmatrix} z_1 \\ & z_2 \\ & & \ddots & \\ & & & z_n \end{pmatrix} \\ & \ex{\rv{W_2}\rv{W_2^T}} = \frac{\sum_{i=1}^n z_i}{m-k} I = \delta I \\ & \text{where $\delta = {\sum_{i=1}^n z_i} / {(m-k)}$} \end{aligned} \end{equation} The
corresponding formulas for $\rv{A} = V_1 W_1 + V_2 \rv{W_2}$ (as in~\reff{rvA}) are: \begin{equation} \label{AA} \begin{aligned} & \ex{\rv{A}} = V_1 W_1 \\ & \ex{\rv{A^T}\rv{A}} = W_1^T W_1 + \text{Diag}(z_1, \ldots z_n) \\ & \ex{\rv{A}\rv{A^T}} = V_1 W_1 W_1^T V_1^T + \delta (I - V_1 V_1^T) \end{aligned} \end{equation} The first equation follows from the first formula in~\reff{W2}. The second equation follows by applying expectations to the identity: $\rv{A^T}\rv{A}= W_1^T W_1 + \rv{W_2^T}\rv{W_2}$. The third equation follows by applying expectations to the identity: \[ \begin{aligned} \rv{A}\rv{A^T} = & V_1 W_1 W_1^T V_1^T + V_2 \rv{W_2}\rv{W_2^T} V_2^T \\ & + V_1 W_1 \rv{W_2^T} V_2^T + V_2 \rv{W_2} W_1^T V_1^T \\ \end{aligned} \] Taking expectations, the last two terms vanish since $\ex{\rv{W_2}} = 0$. The final result is obtained by applying~\reff{W2} to the second expression, and using~\reff{V1V2} to replace $V_2V_2^T$ with $I-V_1V_1^T$. \bsection{Computing distances with PCA} \label{secDistances} In this section we assume we are given the matrix $A$ with a pre-computed PCA expressed as: $A \approx V_1 W_1$. In addition to the PCA we assume that the column norms $\|a_i\|$ are known for all the columns of $A$. Two cases are analyzed. In the first case the goal is to compute distances between columns of $A$. In the second case the goal is to compute distances between a vector $x$ unrelated to $A$ and columns of $A$. In each case we describe three formulas. The first formula, which we denote by $\text{$d_\text{classic}$}$, is the classical formula. It does not use the additional information of column norms. The second formula, which we denote by $\text{$d_\text{ent}$}$, is obtained from the Maximum Entropy Method. It requires the additional information of column norms. Since $\text{$d_\text{ent}$}$ works much better than $\text{$d_\text{classic}$}$, one may suspect that the reason is simply the additional information of the column norms.
We use this information to derive another distance formula, a tight lower bound on the true distance, which also requires the additional information of column norms. We denote this third distance formula by $\text{$d_\text{lower}$}$. Our experimental results show that typically $\text{$d_\text{lower}$}$ is much better than $\text{$d_\text{classic}$}$, and $\text{$d_\text{ent}$}$ is much better than $\text{$d_\text{lower}$}$. \bsubsection{Distances between columns of $A$} \label{secdaa} We consider approximating the distance between $a_i$ and $a_j$, two columns of $A$. Their PCA representation is: \begin{equation} \label{aiajPCA} a_i \approx V_1 w_1^i, \quad a_j \approx V_1 w_1^j \end{equation} As discussed in Section~\ref{secIntro}, the classical approximation formula for the squared distance between them is: \begin{equation} \label{dci} \text{distance}^2(a_i,a_j) \approx \text{$d_\text{classic}$}(a_i,a_j) = \|w_1^i - w_1^j\|^2 \end{equation} \smallskip \noindent \textbf{Theorem~3:} ~ Let $a_i$ and $a_j$ be two columns of $A$ with PCA representation as shown in~\reff{aiajPCA}.
The estimate of the squared distance between them according to the Maximum Entropy Method is: \[ \begin{aligned} & \text{distance}^2(a_i,a_j) \approx \\ & \qquad \text{$d_\text{classic}$}(a_i,a_j) + \|a_i\|^2 - \|w_1^i\|^2 + \|a_j\|^2 - \|w_1^j\|^2 \end{aligned} \] \noindent \textbf{Proof:} ~ The random variable representation in~\reff{rvA} gives: \[ \rv{a_i} = V_1 w_1^i + V_2 \rv{w_2^i}, \quad \rv{a_j} = V_1 w_1^j + V_2 \rv{w_2^j} \] Computing the squared Euclidean distance between them as a random variable gives: \[ \begin{aligned} & \|\rv{a_i}-\rv{a_j}\|^2 = \| V_1 (w_1^i - w_1^j) + V_2 (\rv{w_2^i} - \rv{w_2^j}) \|^2 \\ & = \| w_1^i - w_1^j \|^2 + \| \rv{w_2^i} - \rv{w_2^j} \|^2 \\ & = \| w_1^i - w_1^j \|^2 + \| \rv{w_2^i} \|^2 + \| \rv{w_2^j} \|^2 - 2 (\rv{w_2^i})^T \rv{w_2^j} \end{aligned} \] Taking expectations and using \Ereff{w2}, we see that the expected value of the rightmost term is 0, and the expected values of the middle two terms are $z_i$, $z_j$. This gives: \begin{equation} \label{de} \begin{aligned} & \text{$d_\text{ent}$}(a_i,a_j) = \text{$d_\text{classic}$}(a_i,a_j) + z_i + z_j \end{aligned} \end{equation} The theorem now follows from~\reff{zi}. \hfill $\blacksquare$ \smallskip \noindent \textbf{Theorem~4:} ~ Define $\text{$d_\text{lower}$}$ as follows: \[ \begin{aligned} & \text{$d_\text{lower}$}(a_i,a_j) = \text{$d_\text{classic}$}(a_i,a_j) + z_i + z_j - 2 \sqrt{z_i z_j} \\ & \text{where $z_i = \| a_i\|^2 - \|w_1^i\|^2$} \end{aligned} \] Then: \[ \text{$d_\text{classic}$}(a_i,a_j) \leq \text{$d_\text{lower}$}(a_i,a_j) \leq \text{distance}^2(a_i,a_j) \] \noindent \textbf{Proof:} ~ The following relations hold: \[ \begin{aligned} a.~& \text{distance}^2(a_i,a_j) = \|w_1^i - w_1^j\|^2 + \|w_2^i - w_2^j\|^2 \\ b.~&\|w_2^i - w_2^j\|^2 \geq (\|w_2^i\| - \|w_2^j\|)^2 = z_i + z_j - 2 \sqrt{z_i z_j} \\ c.~& \text{$d_\text{classic}$} = \|w_1^i - w_1^j\|^2 \end{aligned} \] Relation $a$ follows from~\reff{AV1W1V2W2}.
Relation $b$ follows from the reverse triangle inequality. Relation $c$ is the definition of $\text{$d_\text{classic}$}$. Combining relations $b$ and $c$ gives the left inequality in the theorem. Combining relations $a$ and $b$ gives the right inequality in the theorem. \hfill $\blacksquare$ \noindent This shows that $\text{$d_\text{lower}$}$ is a lower bound on the true distance. The bound is tight since there is an equality in $b$ when the angle between $w_2^i$ and $w_2^j$ is 0. \smallskip \noindent In summary, we describe three formulas for estimating distances between matrix columns using PCA data: \[ \begin{aligned} &\text{$d_\text{classic}$}(a_i,a_j) = \|w_1^i - w_1^j\|^2 \\ &\text{$d_\text{lower}$}(a_i,a_j) = \text{$d_\text{classic}$}(a_i,a_j) + z_i + z_j - 2 \sqrt{z_i z_j} \\ &\text{$d_\text{ent}$}(a_i,a_j) = \text{$d_\text{classic}$}(a_i,a_j) + z_i + z_j \end{aligned} \] In these formulas $w_1^i$ is the representation of column $a_i$ in PCA space, and $z_i = \|a_i\|^2 - \|w_1^i\|^2$. Column norms are used by $\text{$d_\text{ent}$}$ and $\text{$d_\text{lower}$}$. Both $\text{$d_\text{classic}$}$ and $\text{$d_\text{lower}$}$ are lower bounds on the true distance, and $\text{$d_\text{lower}$}$ is guaranteed to be better than $\text{$d_\text{classic}$}$. The promise of $\text{$d_\text{ent}$}$ is that it was derived from the best probability distribution according to the Maximum Entropy Method. As we show in the experimental section, its accuracy is significantly better than the accuracy of $\text{$d_\text{lower}$}$ and $\text{$d_\text{classic}$}$. \bsubsection{Distances between an arbitrary vector and columns of $A$} \label{secdax} Let $x$ be an arbitrary ($m$ dimensional) vector. Our goal is to approximate efficiently and accurately distances between $x$ and the columns of $A$. As in Section~\ref{secdaa} we assume the availability of the PCA of $A$, as well as the norms of the columns of $A$.
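To make the three column--to--column formulas of Section~\ref{secdaa} concrete, here is a small worked example (our own toy data, with $m=3$, $k=1$ and $V_1$ taken to be the first coordinate axis, so that $w_1^i$ is simply the first coordinate of $a_i$):

```python
import math

# Toy example (assumed data, not from the paper): m = 3, k = 1, and V1
# is the first coordinate axis, so w1^i is the first coordinate of a_i
# and z_i = ||a_i||^2 - ||w1^i||^2 is the energy PCA discards.
a_i = [1.0, 2.0, 0.0]
a_j = [4.0, 0.0, 1.0]

w1_i, w1_j = a_i[0], a_j[0]
z_i = sum(t * t for t in a_i) - w1_i ** 2
z_j = sum(t * t for t in a_j) - w1_j ** 2

d_classic = (w1_i - w1_j) ** 2
d_lower = d_classic + z_i + z_j - 2.0 * math.sqrt(z_i * z_j)
d_ent = d_classic + z_i + z_j
d_true = sum((s - t) ** 2 for s, t in zip(a_i, a_j))

print(d_classic, d_lower, d_ent, d_true)  # 9.0 10.0 14.0 14.0
```

In this example the residual components $w_2^i = (2,0)$ and $w_2^j = (0,1)$ happen to be orthogonal, which is exactly the behavior the Maximum Entropy density predicts on average; consequently $\text{$d_\text{ent}$}$ matches the true squared distance, while $\text{$d_\text{classic}$}$ and $\text{$d_\text{lower}$}$ underestimate it.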
We begin by defining the vectors $w_1^x$ and $w_2^x$ as analogous to $w_1^i$ and $w_2^i$ for a column of $A$: \begin{equation} \label{w12x} w_1^x = V_1^T x, \quad w_2^x = V_2^T x \end{equation} With this definition most of the analysis in Section~\ref{secdaa} applies to this case as well. The only difference in the analysis is that $w_2^x$ can be explicitly calculated, and therefore it is not a random variable. Still, the three distance formulas from Section~\ref{secdaa} can be used in this case as well. As in~\reff{dci}, the classical estimate is given below: \begin{equation} \label{dcx} \text{distance}^2(x,a_j) \approx \text{$d_\text{classic}$}(x,a_j) = \|w_1^x - w_1^j\|^2 \end{equation} This approximation can only be accurate when the projection of $x$ on $V_2$ is small. \smallskip \noindent \textbf{Theorem~5:} ~ Define $\text{$d_\text{lower}$}$ as follows: \[ \begin{aligned} & \text{$d_\text{lower}$}(x,a_j) = \text{$d_\text{classic}$}(x,a_j) + z_x + z_j - 2 \sqrt{z_x z_j} \\ & \text{where $z_x = \|x\|^2 - \|w_1^x\|^2$, $z_j = \| a_j\|^2 - \|w_1^j\|^2$} \end{aligned} \] Then: \[ \text{$d_\text{classic}$}(x,a_j) \leq \text{$d_\text{lower}$}(x,a_j) \leq \text{distance}^2(x,a_j) \] \noindent \textbf{Proof:} This is identical to the proof of Theorem~4. \hfill $\blacksquare$ \smallskip \noindent \textbf{Theorem~6:} ~ The estimate of the squared distance between $x$ and a column of $A$ according to the Maximum Entropy Method is: \[ \begin{aligned} & \text{distance}^2(x,a_j) \approx \\ & \qquad \text{$d_\text{classic}$}(x,a_j) + \|x\|^2 - \|w_1^x\|^2 + \|a_j\|^2 - \|w_1^j\|^2 \end{aligned} \] \noindent \textbf{Proof:} ~ Both $x$ and the random variable $a_j$ can be described in terms of their projections on $V_1,V_2$.
\[ x = V_1 w_1^x + V_2 w_2^x, \quad \rv{a_j} = V_1 w_1^j + V_2 \rv{w_2^j} \] Calculating the squared norm of their difference: \[ \begin{aligned} & \|x -\rv{a_j}\|^2 \\ & ~~ = \|x\|^2 + \|w_1^j\|^2 + \|\rv{w_2^j}\|^2 - 2 (w_1^x)^T w_1^j -2 (w_2^x)^T \rv{w_2^j} \\ & ~~ = \| w_1^x - w_1^j \|^2 - \| w_1^x\|^2 + \| \rv{w_2^j}\|^2 + \|x\|^2 - 2(w_2^x)^T \rv{w_2^j} \end{aligned} \] Taking expectations and using \Ereff{w2} we get: \[ \begin{aligned} & \text{$d_\text{ent}$}(x,a_j) = \text{$d_\text{classic}$}(x,a_j) + \|x\|^2 - \|w_1^x\|^2 + z_j \end{aligned} \] The theorem now follows from~\reff{zi}. \hfill $\blacksquare$ \bsection{Rayleigh Quotients} \label{secRQ} The Rayleigh Quotient (e.g., \cite{gv4}) is given by the following formula: \begin{equation} \label{RQ} r(v) = \frac{v^T B v}{\|v\|^2} \end{equation} For a given matrix $A$ we are interested in the two special cases of $B=AA^T$ and $B=A^TA$. In the first case the Rayleigh Quotient gives the sum of squared correlations between $v$ and the columns of $A$. In the second case it gives the sum of squared correlations between $v$ and the rows of $A$. Intuitively, the Rayleigh quotients measure the likelihood of the direction of $v$ among the columns/rows of the matrix $A$. The challenge we address here is how to estimate these Rayleigh Quotients when given the PCA of $A$ instead of $A$ itself. The classical solution is to replace $A$ with its PCA representation, as given by~\reff{AV1W1}. As in the case of distances, the Maximum Entropy Method gives an improved solution. \bsubsection{Column space Rayleigh quotient} \label{seccs} In this section we consider the case in which $B=AA^T$ in~\reff{RQ}.
For a vector $x \in \fR{m}$ the exact expression we wish to approximate is: \[ r(x) = \frac{x^T A A^T x}{\|x\|^2} \] When $A$ is approximated as in~\reff{AV1W1} we have: \begin{equation} \label{rc1} \begin{aligned} & \text{$r_\text{classic}$}(x) = \frac{x^T V_1 W_1 W_1^T V_1^T x }{\|x\|^2} = \frac{\|W_1^T w_1^x\|^2 }{\|x\|^2} \\ & \text{where $w_1^x = V_1^T x$}. \end{aligned} \end{equation} For the derivation of the Maximum Entropy solution we use the representation of $A$ as a random matrix in~\reff{rvA}. \[ \rv{r(x)} = \frac{x^T \rv{A}\rv{A^T}x}{\|x\|^2} \] Taking expectations of both sides and using the result in~\Ereff{AA} we get: \[ \begin{aligned} & \text{$r_\text{ent}$}(x) = \frac{x^T V_1 W_1 W_1^T V_1^T x + \delta (\|x\|^2 - x^T V_1 V_1^T x)}{\|x\|^2} \\ & = \frac{x^T V_1 (W_1 W_1^T - \delta I) V_1^T x} {\|x\|^2} + \delta \\ & = \frac{\|W_1^T w_1^x\|^2}{\|x\|^2} + \delta (1 - \frac{\|w_1^x\|^2}{\|x\|^2} ) = \text{$r_\text{classic}$}(x) + \delta (1 - \frac{\|w_1^x\|^2}{\|x\|^2} ) \end{aligned} \] \bsubsection{Row space Rayleigh quotient} \label{secrs} In this section we consider the case in which $B=A^TA$ in~\reff{RQ}. For a vector $y \in \fR{n}$ the exact expression we wish to approximate is: \[ r(y) = \frac{y^T A^T A y}{\|y\|^2} \] When $A$ is approximated as in~\reff{AV1W1} we have: \begin{equation} \label{rr1} \text{$r_\text{classic}$}(y) = \frac{y^T W_1^T W_1 y }{\|y\|^2} = \frac{\|W_1 y\|^2}{\|y\|^2} \end{equation} For the derivation of the Maximum Entropy solution we use the representation of $A$ as a random matrix in~\reff{rvA}.
\[ \rv{r(y)} = \frac{y^T \rv{A^T}\rv{A}y}{\|y\|^2} \] Taking expectations of both sides and using the result in~\Ereff{AA} we get: \[ \begin{aligned} \text{$r_\text{ent}$}(y) & = \frac{y^T ( W_1^T W_1 + \text{Diag}(z_1, \ldots z_n)) y}{\|y\|^2} \\ &= \text{$r_\text{classic}$}(y) + \frac{y^T \text{Diag}(z_1, \ldots z_n) y}{\|y\|^2} \\ &= \text{$r_\text{classic}$}(y) + \frac{\sum_{i=1}^n z_i (y(i))^2}{\|y\|^2} \end{aligned} \] In summary we have the following formulas: \begin{equation} \label{RQf} \begin{aligned} & \text{column space} && \text{$r_\text{classic}$}(x) = \frac{\|W_1^T w_1^x\|^2 }{\|x\|^2} \\ & \text{column space} && \text{$r_\text{ent}$}(x) = \text{$r_\text{classic}$}(x) + \delta (1 - \frac{\|w_1^x\|^2}{\|x\|^2} ) \\ & \text{row space} && \text{$r_\text{classic}$}(y) = \frac{\|W_1 y\|^2}{\|y\|^2} \\ & \text{row space} && \text{$r_\text{ent}$}(y) = \text{$r_\text{classic}$}(y) + \frac{ \sum_{i=1}^n z_i (y(i))^2}{\|y\|^2} \end{aligned} \end{equation} \bsection{Experimental results} \label{secExperiments} \input{experiments.tex} \bsection{Concluding remarks} This paper considers a common situation in which a matrix $A$ is approximated by PCA as: $A =VW$. A nice aspect of this representation is that many of the operations that involve the matrix data can be performed ``in the PCA space'', without reconstructing the matrix or any of its columns. The paper discusses two such cases. The first is computing distances that involve matrix columns, and the second is the computation of Rayleigh quotients. Our main result is a novel method of modeling the uncertainty in the estimates that one obtains from PCA approximations. The idea is to replace the unknown quantities with random variables. Using information that is typically available when the matrix $W$ is created, together with the Maximum Entropy Method, one can determine the likely distribution of these random variables.
Thus, evaluating expressions that involve the matrix $A$ becomes a matter of estimating expected values. Applying this framework allows us to derive closed form solutions for distances and Rayleigh quotients that appear to be novel. Experimental results show that these new formulas produce a significant improvement in accuracy when compared to the classical formulas.
https://arxiv.org/abs/1806.01638
Numerical Integration as an Initial Value Problem
Numerical integration (NI) packages commonly used in scientific research are limited to returning the value of a definite integral at the upper integration limit, also commonly referred to as numerical quadrature. These quadrature algorithms are typically of a fixed accuracy and have only limited ability to adapt to the application. In this article, we present a highly adaptive algorithm that not only can efficiently compute definite integrals encountered in physical problems but can also be applied to other problems such as indefinite integrals, integral equations, and linear and non-linear eigenvalue problems. More specifically, a finite element based algorithm is presented that numerically solves first order ordinary differential equations (ODE) by propagating the solution function from a given initial value (lower integration limit). The algorithm incorporates powerful techniques, including adaptive step size choice of elements, local error checking, and enforcement of continuity of both the integral and the integrand across consecutive elements.
\section{Introduction} \label{sec:introduction} Numerical integration (NI) is one of the most useful numerical tools and is routinely utilized in all scientific and engineering applications. There are general and specialized methods that efficiently compute definite integrals even for many pathological cases. Most of the integration algorithms are available as software packages in almost all scientific libraries. Ref.~\cite{davis2007methods} is an excellent book on the subject containing a comprehensive reference to scientific articles and the accompanying software. However, as efficient and advanced as those quadrature algorithms are, and perhaps for this very reason, the techniques implemented in those algorithms do not necessarily carry over to treat other related numerical problems. Applications pertinent to NI are usually a lot more complex than simply a value at a single point at the upper integration limit. A broader and more versatile advantage can be gained if integration is viewed from its general mathematical perspective as a specific case of an initial value problem. In this paper, we present an algorithm that solves ODEs and that, with a slight modification, can be used on many relevant applications, one of the most important of which is NI. As far as numerical quadrature is concerned, the efficiency of the method given here is comparable to the common NI packages available in scientific libraries. Moreover, the present method is so simple to use that it can be readily modified and applied to a broader spectrum of numerical applications as long as they can be set up as initial value problems. The algorithm in question has recently been applied to a second order ODE to solve and treat a challenging system--namely, the soft Coulomb problem, where its versatility and power are self--evident~\cite{PhysRevE.89.053319}.
In this work, we discuss in some detail how the same algorithm can be adapted to solve a first-order ODE and thereby apply it, in a purely mathematical setting, to the calculation of the numerical integral of a given function. Illustrative examples that are best solved by the present method, and thereby highlight its important features, will also be included. \section{Description of Algorithm} \label{sec:descr-algor} The main intent of this paper is to present a finite element algorithm that numerically approximates a solution function $y$ to the following first order ODE \begin{equation} \label{eq:1} \frac{{\rm{d}}}{{\rm{d}}x} y(x) = f(x) \end{equation} \noindent with a given initial condition $y(x=a) = y_a$, where both $y_a$ and $a$ are assumed to be finite. Generally, $f(x)$ can also be a function of $y$, in which case the above equation may be a linear or non--linear eigenvalue problem. $f$ can also be a kernel function of a homogeneous or inhomogeneous integral equation~\cite{kythe2002computational}. In particular, if $f(x)$ is a predetermined simple function and $y_a = 0$, then the above equation will be equivalent to a single integral given by \begin{equation} \label{eq:2} y(x) = \int_a^x f(t) \, {\rm{d}}t. \end{equation} We begin the numerical solution of eq.~(\ref{eq:1}) by breaking the $x$--axis into finite elements and mapping each element onto a local variable $\tau$ with domain $-1 \leq \tau \leq 1$, defined by the linear transformation given below. \begin{equation} \label{eq:3} x = x_i + q_i(\tau + 1), \qquad x_i \le x \le x_{i + 1} \end{equation} Here, $i = 1, 2, \ldots, i_{\rm{max}}$ labels the elements with $x_1 = a$, while $q_i = (x_{i + 1} - x_i)/2$ is half the size of the element. In terms of the local variable eq.~(\ref{eq:1}) can be re--written as \begin{equation} \label{eq:4} \frac{{\rm{d}}}{{\rm{d}}\tau} \bar{y}(\tau) = q \bar{f}(\tau).
\end{equation} The over--bar indicates the appropriate change in functional form while the element index $i$ is dropped for simplification of notation. At this point, we will expand $\bar{y}(\tau)$ in a polynomial basis set as follows. \begin{equation} \label{eq:5} \bar{y}(\tau) = \sum_{\mu = 0}^{M-1} u_\mu(\tau)B_\mu + s_0(\tau) q \bar{f}(-1) + \bar{y}(-1) \end{equation} \noindent Notice that $\bar{f}(-1) \equiv f(x_i)$ and similarly for $\bar{y}$. The main import of the above expansion is that it allows us to enforce continuity of both $f$ and $y$ across the boundary of two consecutive elements. This is because the basis functions $u$ and their derivatives (denoted by $s$) identically vanish at $-1$. These functions $u$ and $s$ are defined in terms of Legendre polynomials of the first kind $P$ \cite{citeulike:1816367} as \begin{equation} \label{eq:6} s_{\mu}(\tau) = \int_{-1}^{\tau} P_{\mu}(t) \, \rm{d}t, \qquad u_{\mu}(\tau) = \int_{-1}^{\tau} s_{\mu}(t) \, \rm{d}t. \end{equation} \noindent They are discussed in~\cite{PhysRevE.89.053319} in more detail. These polynomials satisfy the following recurrence identities presented here for the first time. It is surprising to see how these relations remain three--term, with no surface values, despite the fact that the polynomials $s$ and $u$ are sequentially primitives of Legendre polynomials. 
\begin{eqnarray} \label{eq:7} s_0(\tau) &=& \tau + 1, \qquad s_1(\tau) = \frac{1}{2}(\tau^2-1) \nonumber \\ (\mu + 1)s_\mu(\tau) &=& (2\mu - 1)\tau s_{\mu - 1}(\tau) - (\mu - 2)s_{\mu - 2}(\tau) \qquad \mu \ge 2 \\ \label{eq:8} u_0(\tau) &=& \frac{1}{2}(\tau+1)^2, \qquad u_1(\tau) = \frac{1}{6}(\tau+1)^2(\tau-2) \nonumber \\ (\mu + 2)u_\mu(\tau) &=& (2\mu - 1)\tau u_{\mu - 1}(\tau) - (\mu - 3)u_{\mu - 2}(\tau) \qquad \mu \ge 2 \end{eqnarray} \noindent The above two relations allow us to employ the Clenshaw recurrence formula~\cite{Press:1992:NRF:141273}, which is known to facilitate effective numerical evaluation of summations such as the one included in eq.~(\ref{eq:5}). Substituting the expansion given in eq.~(\ref{eq:5}) into eq.~(\ref{eq:4}), evaluating the resulting equation at the Gauss--Legendre (GL) abscissas, and after some rearrangement, we get the following set of simultaneous equations of size $M$, \begin{equation} \label{eq:9} \sum_{\mu = 0}^{M-1} s_\mu(\tau_{\nu})B_\mu = q \left[ \bar{f}(\tau_{\nu}) - \bar{f}(-1) \right] \end{equation} \noindent where $\tau_{\nu}$ is a root of an $M^{\rm{th}}$ order Legendre polynomial. This technique, known as the collocation method, is an alternative way of constructing a linear system of equations compared to the more familiar projection integrals. $s_\mu(\tau_{\nu})$ are now elements of a square matrix which, for a given size $M$, is constant and hence needs to be constructed only once, LU decomposed, and stored for all times. Thus, solving for the unknown coefficients $B$ only involves back substitution. Construction of the right--hand side column, on the other hand, requires an evaluation of the function $f$ at $M + 1$ points, including at the beginning of the element. In the examples that follow we will denote the total number of function evaluations as $N$.
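As an independent sanity check of the recurrences in eqs.~(\ref{eq:7}) and~(\ref{eq:8}), the short sketch below (our own verification code, not part of the paper) compares $s_2$, $s_3$ and $u_2$ produced by the three--term recurrences with the closed--form primitives obtained by integrating the Legendre polynomials directly:

```python
def s_poly(mu, t):
    """s_mu(t) from the three-term recurrence in eq. (7)."""
    s0, s1 = t + 1.0, 0.5 * (t * t - 1.0)          # s_0, s_1
    if mu == 0:
        return s0
    for m in range(2, mu + 1):
        s0, s1 = s1, ((2 * m - 1) * t * s1 - (m - 2) * s0) / (m + 1)
    return s1

def u_poly(mu, t):
    """u_mu(t) from the three-term recurrence in eq. (8)."""
    u0 = 0.5 * (t + 1.0) ** 2                      # u_0
    u1 = (t + 1.0) ** 2 * (t - 2.0) / 6.0          # u_1
    if mu == 0:
        return u0
    for m in range(2, mu + 1):
        u0, u1 = u1, ((2 * m - 1) * t * u1 - (m - 3) * u0) / (m + 2)
    return u1

# Closed-form primitives for comparison:
#   s_2 = (t^3 - t)/2,  s_3 = (5t^4 - 6t^2 + 1)/8,  u_2 = (t^2 - 1)^2/8.
err = max(
    max(abs(s_poly(2, t) - (t**3 - t) / 2.0),
        abs(s_poly(3, t) - (5 * t**4 - 6 * t**2 + 1) / 8.0),
        abs(u_poly(2, t) - (t * t - 1.0) ** 2 / 8.0))
    for t in (-1.0, -0.3, 0.5, 1.0))
print(err)   # agreement to rounding error
```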
After the coefficients $B$ are calculated this way, the value of the integral at the end point of the element $i$ can be obtained by evaluating eq.~(\ref{eq:5}) at $\tau=+1$. The result is simply \begin{equation} \label{eq:10} \bar{y}(+1) = 2B_0 - \frac{2}{3}B_1 + 2q\bar{f}(-1) + \bar{y}(-1). \end{equation} \noindent One of the advantages of representing the solution in the form given in eq.~(\ref{eq:5}) is that it allows us to evaluate the value of the integral at any point inside the element. In this sense, $y$ is simply a function whose domain extends into all of the solved elements as the propagation proceeds. Hence, from a numerical perspective, this algorithm is best implemented by way of object--oriented programming in order to incorporate and preserve all the necessary quantities of all the elements in a hierarchy of derived variables. The solution function $y(x)$, for instance, can be saved (as an object) for later use by including, among other quantities, the grid containing the coordinates of the steps $\left\{ x_i \right\}_{i = 1}^{i_{\rm{max}}}$. Locating the index of the element to which a given point $x$ belongs is then a table--search exercise for which efficient codes already exist. See, for instance, subroutine \emph{hunt} and its accompanying notes in ref.~\cite{Press:1992:NRF:141273}. For a sorted array, as is our case, this search generally takes about $\log_2 i_{\rm{max}}$ tries, while further points within close proximity can be located rather easily. An even more important advantage of this form of the solution is its suitability for estimation of the size of the next element via the method described in~\cite{PhysRevE.89.053319}. This adaptive step size choice relies on the knowledge of the derivatives of $y$ up to $4^{\rm{th}}$ order at the end of a previous element. Those derivatives can be directly computed from eq.~(\ref{eq:5}).
Instead of solving two quadratic equations as we did in the previous paper, however, we will solve one cubic equation, which is more suited for the present purpose. The user is required to guess only the size of the very first element. Overestimation of the size of an element may cause an increase in the number of function evaluations, but there will be no compromise in the accuracy of the solution, as will be made clear in a moment. One other attractive feature is that we can precisely measure the error of the calculated solution directly from eq.~(\ref{eq:4}). Specifically, at the end of a solved element $i$, the error between the exact $f(x_{i + 1})$ and the calculated \begin{eqnarray} \label{eq:11} \frac{\rm{d}}{{\rm{d}}x} y(x)\biggr|_{x=x_{i+1}} = \frac{1}{q_i} \frac{\rm{d}}{{\rm{d}} \tau} \bar{y}(\tau) \biggr|_{\tau=+1} = \frac{2B_0}{q_i} + f(x_i) \end{eqnarray} \noindent can be obtained. Notice that $f(x_{i + 1})$ will be needed at the beginning point of the next element. If the resulting error is not satisfactory, a bisection step will be taken to reduce the size of the element by moving the upper limit $x_{i + 1}$ closer to $x_i$ and re--solving. The main purpose of the adaptive step size choice is then to estimate \emph{a priori} an optimum step size, thereby reducing the number of bisections and/or the number of function evaluations necessary. Since this error is that of the integrand $f$, and not of the integral $y$, it is not necessary to demand that this error be as small as the machine precision. The reason is that higher derivatives of $y$ computed from eq.~(\ref{eq:5}) will successively deteriorate in accuracy as the order increases. In a $16$ digit calculation a relative error of $10^{-4} - 10^{-7}$ in $f(x_{i + 1})$ often suffices, depending on how smooth the integrand is, for calculating the solution function $y$ correctly to within $\mathcal{O}(10^{-14})$. Other quadrature methods do not directly take the integrand into account in their error estimation.
They often use a formula, an outcome usually of a non--trivial analytic derivation, that estimates the upper bound of a residual term for a specific order~\cite{Ehrich1999}. In the present algorithm, the number of basis functions is kept constant. Other methods, such as Clenshaw--Curtis quadrature, take advantage of a convenient property of the roots of Chebyshev polynomials that allows for conserving preceding function evaluations whenever the order of expansion is doubled. But this recursive doubling of basis set size is not necessarily effective as far as improving the accuracy of the integration is concerned. The reason is that, for a given working precision, there is usually only a limited range of an optimum number of basis functions, say $10-16$, whose half or double is either too small or excessive. Rather, a more effective way is to estimate an adaptive step size, fix an optimum order of expansion, and reduce the size of the element whenever necessary--which is what is done here. Implementation of the algorithm starts by fixing the size of the first step $q_1$ and the size of the linear system $M$. Also, define $f(a) = f(x_1)$. The rest of the procedure is sketched below. \begin{enumerate} \item Construct the right--hand side of eq.~(\ref{eq:9}) by evaluating the function $\bar{f}(\tau_\nu)$ at the GL nodes. \item Calculate the solution coefficients $B$ by back substitution. \item Calculate the error between the two sides of eq.~(\ref{eq:1}) at the end point using eq.~(\ref{eq:11}) for the left--hand side and by evaluating $f(x_{i+1})$ directly. Retain the value of $f(x_{i+1})$, which will be needed at the beginning of the next element. \item If the error is not satisfactory, reduce $q_i \rightarrow q_i/2$ and go to step~(1). \item Otherwise calculate $y$ as per eq.~(\ref{eq:10}) and its higher derivatives at $\tau = 1$ using eq.~(\ref{eq:5}) and estimate the size of the next step $q_{i+1}$ using the method presented in~\cite{PhysRevE.89.053319}.
\item For finite domain systems with an upper limit $b$, make sure the resulting step size does not extend beyond $b$; i.e., take $q_{i+1} \leftarrow \min(q_{i+1}, (b - x_{i+1})/2)$. For open integrals, on the other hand, keep propagating until the value of the integral $y$ converges. \end{enumerate} Finally, we may sometimes prefer not to evaluate the function $f(a)$ at the lower limit of the integration. A good example is an integrand containing singular terms at the origin. In those instances, only for the very first element, the main expansion given in eq.~(\ref{eq:5}) can be altered to be in terms of the $s$ polynomials as follows. \begin{equation} \label{eq:12} \bar{y}(\tau) = \sum_{\mu = 0}^{M-1} s_\mu(\tau)B_\mu + \bar{y}(-1) \end{equation} \noindent The rest of the propagation can then resume normally, and the other modifications that follow can be worked out straightforwardly. \section{Numerical Examples} \label{sec:numerical-examples} We will now consider illustrative examples that demonstrate the typical behavior of the numerical algorithm discussed above. All of the examples chosen are difficult to calculate with an accuracy close to working precision without breaking the range of integration into smaller intervals. Whenever applicable, comparisons will be made with DQAG~\cite{Quadpackref}, which is one of the integration subroutines compiled in QUADPACK \footnote{http://www.netlib.org/quadpack}. DQAG is an adaptive method that keeps bisecting the element with the highest error estimate until the value of the overall integral is achieved to within the desired error. In all of the examples that follow, the error at the end of the elements has been determined by \begin{eqnarray} \label{eq:13} \left| \frac{2B_0}{q_i} + f(x_i) - f(x_{i+1}) \right| \le \left| f(x_{i+1}) \right| \delta_{\rm{rel}} + \delta_{\rm{abs}} \end{eqnarray} \noindent where $\delta_{\rm{abs}}$ and $\delta_{\rm{rel}}$ denote the absolute and relative error tolerances, respectively. 
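The element-stepping procedure above can be sketched as a generic driver. The Python sketch below is not our production C++ code: `solve_element` is a hypothetical stand-in that replaces the $M$-term Legendre solve of steps (1)--(2) by a three-point parabolic model of the integrand, but the end-point error test in the spirit of eq.~(\ref{eq:13}), the bisection of step (4), and the step capping of step (6) are as described.

```python
import math

def accept(err, f_right, rel=2.22e-4, abs_tol=2.22e-19):
    """Mixed relative/absolute acceptance test in the spirit of eq. (13)."""
    return err <= abs(f_right) * rel + abs_tol

def solve_element(f, x, q):
    """Hypothetical stand-in for steps (1)-(2): fit a parabola to f at the
    nodes x, x+2q/3, x+4q/3 of the element [x, x+2q] (the right end point is
    deliberately NOT a node, as in the spectral solve), and return the
    integral of the parabola over the element together with its extrapolated
    value at x+2q, which plays the role of the computed y'(x_{i+1})."""
    h = 2.0 * q / 3.0
    f0, f1, f2 = f(x), f(x + h), f(x + 2.0 * h)
    dy = h * (0.75 * f0 + 2.25 * f2)           # exact integral of the parabola
    y_prime_right = f0 - 3.0 * f1 + 3.0 * f2   # extrapolation to x + 2q
    return dy, y_prime_right

def propagate(f, a, b, q1=0.25):
    """Element-by-element driver mirroring steps (1)-(6)."""
    x, q, y = a, q1, 0.0
    while b - x > 1e-14:
        q = min(q, (b - x) / 2.0)                       # step (6): cap at b
        while True:
            dy, deriv = solve_element(f, x, q)          # steps (1)-(2)
            f_right = f(x + 2.0 * q)
            if accept(abs(deriv - f_right), f_right):   # step (3), cf. eq. (13)
                break
            q /= 2.0                                    # step (4): bisect
        x, y = x + 2.0 * q, y + dy                      # step (5): advance
        q *= 2.0                   # crude stand-in for the step-size estimate
    return y
```

With the parabolic stand-in the driver already reproduces smooth integrals such as $\int_0^1 e^x \,{\rm d}x$ accurately, and is exact for quadratic integrands.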
The values $\delta_{\rm{rel}} \approx 2.22 \times 10^{-4}$ and $\delta_{\rm{abs}} \approx 2.22 \times 10^{-19}$ will be used unless otherwise specified. The size of the first step is fixed at $0.5$--that is, $q_1 = 0.25$--and the number of basis functions used is $M = 13$. DQAG, on the other hand, has been run with absolute and relative errors of $\delta_{\rm{rel}} = \delta_{\rm{abs}} \approx 1.11 \times 10^{-13}$, and the lowest order ($key = 1$) has been used in order to keep the number of function evaluations at a minimum, so as to provide a fair comparison. Our program is written in modern C++ and run on a late 2013, 2.8 GHz Intel Core i7 MacBook Pro laptop computer with the Apple LLVM 8.1 compiler. \subsection{Closed Integral} \label{sec:closed-integral} We consider the $15$ closed integrals studied in~\cite{doi:10.1080/10586458.2005.10128931}. These integrals exhibit different properties and were chosen by Bailey \emph{et al.} to test three other methods of NI. Herein we attempt all of them and report the output. Table~\ref{tab:table1} shows the results for the numerical values of those integrals calculated using the present method as well as DQAG. \begin{table}[htp] \caption{Results for $15$ closed integrals taken from~\cite{doi:10.1080/10586458.2005.10128931}. The values of the integral $y(b)$ from the present method are shown in the second column. 
$\delta_{\rm{rel}}$ and $N$ of both the present method and DQAG are also shown.} \begin{center}\footnotesize \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|l|l|l|l|}\hline \textrm{label} & \textrm{$y(b)$} & $\delta_{\rm{rel}}$ & $\delta_{\rm{rel}}$ (DQAG) &\multicolumn{1}{|c|}{$N$} & $N$ (DQAG) \\ \hline $1$ & 0.250 000 000 000 000 & 1.110[-16] & 0.000 & 29 & 15 \\ $2$ & 0.210 657 251 225 807 & 0.000 & 1.318[-16] & 29 & 45 \\ $3$ & 1.905 238 690 482 68 & 0.000 & 0.000 & 191 & 15 \\ % $4$ & 0.514 041 895 890 071 & 0.000 & 0.000 & 29 & 45 \\ $5$ & -0.444 444 444 444 445 & 8.743[-16] & 3.747[-16] & 871 & 885 \\ $6$ & 0.785 398 163 397 448 & 1.414[-16] & 2.827[-16] & 974 & 795 \\ % $7$ & 1.198 140 227 142 81 & 6.337[-9] & 6.159[-9] & 2129 & 1725 \\ $8$ & 1.999 999 999 999 98 & 8.438[-15] & 4.441[-16] & 922 & 1545 \\ $9$ & -1.088 793 045 151 79 & 9.993[-15] & 2.855[-15] & 1243 & 1335 \\ % $10$ & 2.221 441 454 672 65 & 6.485[-9] & 2.907[-9] & 2032 & 1725 \\ $11$ & 1.570 796 326 794 90 & 0.000 & 0.000 & 29 & 105 \\ $12$ & 1.772 453 840 168 93 & 6.057[-9] & 5.929[-9] & 2439 & 1455 \\ % $13$ & 1.253 314 137 315 62 & 9.514[-14] & 0.000 & 96 & 255 \\ $14$ & 0.500 000 000 000 001 & 1.110[-15] & 0.000 & 231 & 375 \\ $15$ & 1.570 796 326 794 90 & 1.414[-16] & 2.218[-11] & 1523 & 210 \\ \hline \end{tabular} \end{center} \label{tab:table1} \end{table} $\delta_{\rm{rel}}$ with respect to the exact values and the number of function evaluations $N$ are also indicated. $\delta_{\rm{rel}}$ and $N$ of the output from DQAG are also shown for comparison. The two methods are qualitatively similar in performance. Notice that the labels of the integrals are taken from~\cite{doi:10.1080/10586458.2005.10128931}. Let us take a closer look at one of the integrals (label 6) in order to demonstrate the behavior of our algorithm. Example 6 has integration limits $[0, 1]$ and the integrand $f$ given below. 
\begin{equation} \label{eq:14} f(x) = \sqrt{1 - x^2}, \qquad 0 \le x \le 1 \end{equation} \noindent The integral of the above function is \begin{equation} \label{eq:15} y(x) = \frac{1}{2} \left[ x \sqrt{1 - x^2} + \arcsin(x) \right], \end{equation} \noindent with $y(1) = \pi/4$. Both functions $f$ and $y$ are plotted in Fig.~\ref{fig:1}. \begin{figure} \includegraphics[scale=0.7]{pyfy} \caption{(Color online) Plots of the functions given in Eqs.~(\ref{eq:14}) and~(\ref{eq:15}). $f$ varies very rapidly towards the upper integration limit, causing numerical difficulty.} \label{fig:1} \end{figure} \noindent The calculated value of the integral is $y(1) = 0.785 398 163 397 448$, which has a $\delta_{\rm{rel}}$ of $\approx 1.414\times 10^{-16}$ compared to the exact value. It took $N = 974$ function evaluations and $i_{\rm{max}} = 35$ steps to propagate the integral $y$ from $0$ to $1$, where the size of the first step was set to $0.5$ as mentioned above. As can be seen from Fig.~\ref{fig:2}, the step sizes chosen by our algorithm kept decreasing to as low as $2.22\times 10^{-11}$, which is consistent with the singularity of the derivative of the integrand $f$ at the upper integration limit. \begin{figure} \includegraphics[scale=0.8]{pyfyerr} \caption{(Color online) Step size of the finite elements (blue .) and $\delta_{\rm{rel}}$ of the integral at the end of the elements (red *) are shown. The algorithm spends more time and picks up more errors towards the upper integration limit. Vertical axis values are $\log{}_{10}$ of the actual.} \label{fig:2} \end{figure} Fig.~\ref{fig:2} also shows the relative errors of the integral $y$ at the end of all the elements in comparison with the exact values from eq.~(\ref{eq:15}). Clearly, the maximum error occurs at the last step near $1$, which is what motivated our choice of this example. 
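As an independent cross-check of this example (not using the present method), the classic recursive adaptive Simpson scheme sketched below also recovers $y(1)=\pi/4$ and, like our algorithm, clusters its subdivisions near $x=1$ where the derivative of the integrand is singular:

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10, depth=50):
    """Classic recursive adaptive Simpson quadrature: each subinterval is
    bisected until the local Richardson error estimate meets its share of
    the tolerance (or the depth limit is reached)."""
    def simpson(fa, fm, fb, h):
        return h / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol, depth):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        if depth <= 0 or abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol, depth - 1)
                + recurse(m, b, fm, frm, fb, right, 0.5 * tol, depth - 1))

    fa, fb, fm = f(a), f(b), f(0.5 * (a + b))
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol, depth)

f = lambda x: math.sqrt(1.0 - x * x)   # eq. (14)
val = adaptive_simpson(f, 0.0, 1.0)    # should approach pi/4, cf. eq. (15)
```

Instrumenting the recursion confirms that almost all of its depth is spent on the subintervals adjacent to $x=1$, mirroring the step-size behavior in Fig.~\ref{fig:2}.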
Rapidly changing functions, such as those with an integrable singularity, are generally troublesome for the algorithm because the step size may not be small enough and/or the order of the polynomials high enough to accommodate portions of the integrand with a (nearly) vertical shape. In comparison, DQAG takes $795$ function evaluations and calculates the integral with a $\delta_{\rm{rel}}$ of $2.827\times 10^{-16}$. \subsection{Nonlinear Problem} \label{sec:nonlinear-problem} Bender \emph{et al.} have studied the following interesting nonlinear eigenvalue problem, for which the numerical solution can be very challenging~\cite{1751-8121-47-23-235204}. \begin{equation} \label{eq:16} \frac{{\rm{d}}}{{\rm{d}}x} y(x) = \cos \left[\pi xy(x) \right], \qquad x \ge 0 \end{equation} \noindent Since this is a nonlinear equation, it has to be solved iteratively as \begin{equation} \label{eq:17} \frac{{\rm{d}}}{{\rm{d}}x} y_{\sigma + 1}(x) = f_{\sigma}(x), \qquad \sigma = 1, 2, \ldots \end{equation} \noindent where $f_\sigma(x) = \cos \left[\pi xy_\sigma(x) \right]$ and $\sigma$ labels the levels of the iteration. Notice that at any stage of the iteration only the values of $f$ at the GL nodes are required. The first iterate $f_1$ is seeded from the solution vector $B$ of the final result in the preceding element. This is not only convenient but also an excellent approximation, because the adaptive step choice implemented here is based on the assumption that the solution function between two consecutive elements remains constant up to a fourth--order Taylor series expansion. The first element has been started by setting all the elements of the solution vector $B$ to unity. Naturally, for this problem only, the first two steps of the procedure given in Section~\ref{sec:descr-algor} must be repeated until the iteration in eq.~(\ref{eq:17}) converges. 
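The fixed-point structure of eq.~(\ref{eq:17}) can be illustrated on a single element with a crude trapezoidal discretization standing in for the Legendre-basis solve; the function `picard_element` below is a hypothetical sketch, not our implementation, but it shows how rapidly the iterates $y_\sigma$ converge on a small element:

```python
import math

def picard_element(y0, x0, h, n=64, iters=40):
    """Picard iteration for y' = cos(pi*x*y) on one element [x0, x0+h]:
        y_{s+1}(x) = y0 + int_{x0}^{x} cos(pi*t*y_s(t)) dt,
    discretized on n+1 nodes with the cumulative trapezoid rule (a crude
    stand-in for the spectral solve).  Returns the value of y at the right
    end point and the sup-norm change of the last iteration."""
    xs = [x0 + h * i / n for i in range(n + 1)]
    y = [y0] * (n + 1)                  # seed iterate: constant = y0
    diff = float("inf")
    for _ in range(iters):
        fvals = [math.cos(math.pi * x * yy) for x, yy in zip(xs, y)]
        new = [y0]
        for i in range(n):              # cumulative trapezoid rule
            new.append(new[i] + 0.5 * (h / n) * (fvals[i] + fvals[i + 1]))
        diff = max(abs(a - b) for a, b in zip(new, y))
        y = new
        if diff < 1e-14:
            break
    return y[-1], diff
```

On an element of width $0.1$ starting from $y(0)=1$ the contraction factor is of order $\pi h \max|t| \ll 1$, so only a handful of iterations are needed before successive iterates agree to near machine precision.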
The results of our calculation for $y(x), x\in [0, 24]$, are plotted in Fig.~\ref{fig:3} for initial values at the origin $y_a = n, n = 1, 2, \ldots , 10$. \begin{figure} \includegraphics[scale=0.7]{pybender} \caption{(Color online) Calculated solution functions to eq.~(\ref{eq:16}) for $0 \le x \le 24$ with initial conditions $y_a = 1, 2, \ldots , 10$ are shown. The functions are oscillatory near the origin before they evolve into an asymptotic discrete bundle.} \label{fig:3} \end{figure} Similar plots have been reported in ref.~\cite{1751-8121-47-23-235204}, which are the result of point--wise convergent calculations. Our solution function, on the other hand, has been propagated from the origin outward. The very large number of steps the solution required for such a modest distance of $x = 24$ is quite remarkable. Table~\ref{tab:table2} summarizes the output of our program. \begin{table}[htp] \caption{Results for $y(24)$ to eq.~(\ref{eq:16}) for the respective initial conditions $y(0)$. The rest of the columns are: total number of steps, average step size, average number of function evaluations per element and time elapsed.} \begin{center}\footnotesize \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|l|c|}\hline \textrm{$y(0)$} & \textrm{$y(24)$} & \textrm{$i_{\rm{max}}$} & \textrm{$2 q_{\rm{ave}} \, (\times 10^{-5})$} & \multicolumn{1}{|c|}{\textrm{$N_{\rm{ave}}$}} & \textrm{$t \, [sec]$} \\ \hline $1$ & 0.020 844 865 419 015 3 & 1 633 376 & 1.469 & 254.338 & 63 \\ $2$ & 0.104 224 327 270 128 & 1 701 378 & 1.411 & 253.336 & 66 \\ $3$ & 0.270 983 253 633 302 & 1 908 989 & 1.257 & 251.829 & 73 \\ $4$ & 0.437 742 187 280 245 & 2 068 413 & 1.160 & 250.609 & 79 \\ % $5$ & 0.687 880 611 222 152 & 2 397 070 & 1.001 & 249.380 & 91 \\ $6$ & 0.938 019 076 811 230 & 2 636 633 & 0.9103 & 248.157 & 100 \\ $7$ & 1.271 537 122 002 93 & 3 041 709 & 0.7890 & 247.071 & 116 \\ % $8$ & 1.688 434 875 810 57 & 3 586 933 & 0.6691 & 245.951 & 135 \\ $9$ & 2.105 332 915 403 23 & 
4 021 621 & 0.5968 & 244.952 & 151 \\ $10$ & 2.605 611 041 676 66 & 4 626 563 & 0.5187 & 244.110 & 174 \\ \hline \end{tabular} \end{center} \label{tab:table2} \end{table} It took millions of steps, with an average step size as low as $\approx 5.187\times 10^{-6}$ and an average number of function evaluations of up to $N_{\rm{ave}} \approx 254.3$ per element. Notice that this $N_{\rm{ave}}$ includes all the iterations, which, along with the brevity of the elapsed times shown, indicates the efficiency of our implementation. We have also included the final numerical values at $y(24)$ for reference. For this example, the required error has been relaxed to $\delta_{\rm{rel}} = 3.0\times 10^{-9}$, since demanding a much tighter tolerance over millions of iterative steps would be prohibitively expensive. In order to check the validity of our results, we have re--run the program after further lowering the magnitude of the error to $\delta_{\rm{rel}} = 3.0\times 10^{-10}$. With the lowered error, the indicated values of $y(24)$ vary by an amount no larger than $\sim 1.86\times 10^{-13}$. The computational times in the last column of the table also increased, to as high as $470$ seconds. Maintaining this much accuracy after millions of steps, and in an iterative calculation, shows how robust our method is. This non--linear eigenvalue problem is computationally challenging indeed. \subsection{Double--Range Integrals} \label{sec:double-range-integr} In this example, we consider a double integral for which one of the limits of the inner integral is identical to the variable of the outer integral. These types of integrals are common in studies of many--particle dynamical systems involving Green's functions or double--range addition theorems such as the Laplace expansion. In particular, we will look at the following integral, which is the most prominent radial integral encountered in calculations that involve exponential type orbitals or geminals in a spherical coordinate system. 
It stems from the addition theorem for ${r_{12}}^{n} e^{-\alpha r_{12}}$ given in~\cite{QUA:QUA24319}, and the resulting integrals are still topics of interest in recent research~\cite{Jiao2015140, Rico2012, QUA:QUA21002}. Let us define the integral as \begin{equation} \label{eq:18} \hspace*{-0.5cm} I_{\lambda_1 \lambda_2}^{\mu_1 \mu_2}(\alpha_1, \beta_1, \alpha_2, \beta_2) = \int_0^\infty {\rm{d}} y \, e^{-\alpha_1 y} y^{{\mu}_1} \hat{i}_{\lambda_1}(\beta_1 y) \int_y^\infty {\rm{d}} x \, e^{-\alpha_2 x} x^{{\mu}_2} \hat{k}_{\lambda_2}(\beta_2 x) \end{equation} \noindent where $\hat{i}$ and $\hat{k}$ are spherical modified Bessel functions of the first and second kind respectively~\cite{citeulike:1816367}. The screening parameters $\alpha_1, \beta_1, \alpha_2, \beta_2$ are positive real numbers, while all of the indices $\mu_1,\lambda_1, \mu_2, \lambda_2$ are integers. These parameters must be composed such that both the inner and outer integrals remain finite, as is the case in physical applications. In order to propagate from the origin, both lower integration limits need to be set to zero. This can be achieved by switching the order of the two integrals using the following identity, which preserves the $x$--$y$ region of integration~\cite{hassani2000mathematical}. \begin{equation} \label{eq:19} \int_0^\infty {\rm{d}} y \, f(y) \int_y^\infty {\rm{d}} x \, g(x) \equiv \int_0^\infty {\rm{d}} x \, g(x) \int_0^x {\rm{d}} y \, f(y) \end{equation} \noindent Hence, eq.~(\ref{eq:18}) can be written as \begin{equation} \label{eq:20} \hspace*{-0.5cm} I_{\lambda_1 \lambda_2}^{\mu_1 \mu_2}(\alpha_1, \beta_1, \alpha_2, \beta_2) = \int_0^\infty {\rm{d}} x \, e^{-\alpha_2 x} x^{{\mu}_2} \hat{k}_{\lambda_2}(\beta_2 x) J_{\lambda_1}^{\mu_1}(\alpha_1, \beta_1; x) \end{equation} \noindent where $J$ now represents the inner integral, given below, which needs to be propagated only once from the origin until the integral converges. 
\begin{equation} \label{eq:21} J_{\lambda_1}^{\mu_1}(\alpha_1, \beta_1; x) = \int_0^x {\rm{d}} y \, e^{-\alpha_1 y} y^{{\mu}_1} \hat{i}_{\lambda_1}(\beta_1 y), \qquad 0 \le x \le x_{i_{\rm{max}}+1} \end{equation} \noindent Here $x_{i_{\rm{max}}+1}$ signifies the end point of the last element, where the result of the above integral has converged to its value at infinity. Hence, beyond this point $J$ is considered to be a constant function, i.e., $J(x) = J(x_{i_{\rm{max}}+1})$ for $x>x_{i_{\rm{max}}+1}$. Once the above integral is done and all the relevant parameters are stored, it can be evaluated at any desired point $x>0$, which is a significant computational gain since the double integral $I$ given in eq.~(\ref{eq:18}) has essentially been reduced to two simple integrals. This demonstrates one of the main advantages contained in the present algorithm. Table~\ref{tab:table3} shows a sample calculation for the integral $I$ in eq.~(\ref{eq:20}) for $\beta_1, \beta_2 \in \{0.5, 1.0, 2.0 \}$. \begin{table}[htp] \caption{Exact and calculated values of the integral $I$ are shown for $\beta_1$ \& $\beta_2 \in \{0.5, 1.0, 2.0 \}$. Columns 3 and 4 show the number of function evaluations $N$ taken in the integrals Eq.~(\ref{eq:21}) and Eq.~(\ref{eq:20}) respectively. Numbers in square brackets signify powers of ten.} \begin{center}\footnotesize \renewcommand{\arraystretch}{1.3} \begin{tabular}{|c|c|c|c|l|l|}\hline \textrm{$\beta_1$} & \textrm{$\beta_2$} & \textrm{N of $J$} & \textrm{N of $I$} & \multicolumn{1}{|c|}{I -- calculated} & \multicolumn{1}{|c|}{I -- exact} \\ \hline $0.5$ & 0.5 & 219 & 259 & 1.627 473 168 386 65 [27] & 1.627 473 168 386 653 87 [27] \\ $0.5$ & 1.0 & 219 & 218 & 2.559 085 779 949 79 [22] & 2.559 085 779 949 794 01 [22] \\ $0.5$ & 2.0 & 219 & 231 & 3.103 777 873 917 21 [17] & 3.103 777 873 917 210 86 [17] \\ $1.0$ & 0.5 & 232 & 259 & 2.946 389 365 576 82 [23] & 2.946 389 365 576 741 23 [23] \\ $1.0$ & 1.0 & 232 & 245 & 6.062 810 005 197 87 [18] & 6.062 810 005 197 874 73 [18] \\ $1.0$ & 2.0 & 232 & 204 & 9.533 337 428 978 80 [13] & 9.533 337 428 978 808 27 [13] \\ $2.0$ & 0.5 & 245 & 259 & 4.342 544 722 241 59 [19] & 4.342 544 722 241 718 83 [19] \\ $2.0$ & 1.0 & 245 & 245 & 1.097 615 571 907 40 [15] & 1.097 615 571 907 438 80 [15] \\ $2.0$ & 2.0 & 245 & 231 & 2.258 572 729 378 12 [10] & 2.258 572 729 378 146 95 [10] \\ \hline \end{tabular} \end{center} \label{tab:table3} \end{table} The rest of the parameters are set as $\lambda_1 = -11, \mu_1 = 12, \lambda_2 = -13, \mu_2 = 14, \alpha_1 = 2\beta_1, \alpha_2 = 2\beta_2$. The Bessel functions $\hat{i}$ and $\hat{k}$ have been computed using the subroutines in the GNU Scientific Library (GSL) \footnote{http://www.gnu.org/software/gsl/}. For the first element, eq.~(\ref{eq:12}) has been used in order to avoid evaluation at the origin. We have also calculated the exact values of the integrals using \emph{Mathematica} \cite{Mathematica7}, the first 18 digits of which are displayed in the last column. Comparison with the present calculated results reveals that the integral $I$ was computed accurately. The table also shows the total number of function evaluations $N$ for the inner and outer integrals. 
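As an aside, the order-swapping identity of eq.~(\ref{eq:19}) is easy to verify numerically on a simple separable pair of integrands. The sketch below is an illustration with assumed integrands $f(y)=e^{-y}$ and $g(x)=e^{-2x}$ (not the Bessel-type integrands above); it truncates the infinite ranges at $R=20$ and uses Gauss--Legendre quadrature, so both orderings must give $\int_0^\infty e^{-y}\,\tfrac{1}{2}e^{-2y}\,{\rm{d}}y = 1/6$ up to an exponentially small truncation error:

```python
import numpy as np

# fixed Gauss-Legendre rule on [-1, 1], mapped to arbitrary [a, b]
NODES, WEIGHTS = np.polynomial.legendre.leggauss(60)

def gauss(func, a, b):
    """60-point Gauss-Legendre quadrature of a scalar function on [a, b]."""
    x = 0.5 * (b - a) * NODES + 0.5 * (b + a)
    return 0.5 * (b - a) * sum(w * func(t) for w, t in zip(WEIGHTS, x))

R = 20.0                         # truncation of the infinite upper limits
f = lambda y: np.exp(-y)         # example inner-variable integrand
g = lambda x: np.exp(-2.0 * x)   # example outer-variable integrand

# the two orderings of eq. (19); both should equal 1/6
lhs = gauss(lambda y: f(y) * gauss(g, y, R), 0.0, R)
rhs = gauss(lambda x: g(x) * gauss(f, 0.0, x), 0.0, R)
```

The second ordering is the computationally favorable one for the present method, since the inner integral starts at the origin and can be propagated once and reused, exactly as done for $J$ above.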
\section{Conclusion} \label{sec:conclusion} There are many physical applications that can be modeled as initial value problems, and the methods available to compute them are equally diverse. No single algorithm is known to address all of them at once, and hence researchers usually digress from their main area of interest so as to familiarize themselves with many of the computational options. The present algorithm is by no means capable of solving all initial value problems, but it comes pragmatically close, especially for most physical applications. This is possible because it produces solutions on finite elements whose size is chosen locally, checks the validity of the solution, and communicates the solution function and its first derivative to the next element, maintaining continuity of both. More importantly, it can be applied to any ODE because it is easy to implement and modify, especially with the proper use of object--oriented programming techniques. The basis functions $u$ and $s$ discussed above are based on Legendre polynomials. In the future, we will use other classical orthogonal polynomials, such as Chebyshev or Jacobi, to determine if further advantages can be gained. \section{Acknowledgement} \label{sec:acknowledgement} DHG and CAW were partially supported by the Department of Energy, National Nuclear Security Administration, under Award Number(s) DE-NA0002630. CAW was also supported in part by the Defense Threat Reduction Agency. \section{REFERENCES} \label{sec:references} \bibliographystyle{elsarticle-num}
https://arxiv.org/abs/0809.2981
Presenting the cohomology of a Schubert variety
We extend the short presentation due to [Borel '53] of the cohomology ring of a generalized flag manifold to a relatively short presentation of the cohomology of any of its Schubert varieties. Our result is stated in a root-system uniform manner by introducing the essential set of a Coxeter group element, generalizing and giving a new characterization of [Fulton '92]'s definition for permutations. Further refinements are obtained in type A.
\section{Introduction} \label{intro-section} The cohomology of the generalized flag manifold $G/B$, for any complex semisimple algebraic group $G$ and Borel subgroup $B$, has a classical presentation due to Borel \cite{Borel}. Pick a maximal torus $T\subset B\subset G$, choose a field $\Bbbk$ of characteristic zero, and let $V=\Bbbk\otimes_{{\mathbb Z}} X(T)$, where $X(T)$ is the coweight lattice of $T$. Let $\Bbbk[V]:=\mathrm{Sym}(V^{\star})$ be the symmetric algebra for $V^{\star}$; it naturally carries an action of the Weyl group $W := N_G(T)/T$. Furthermore let $\Bbbk[V]^W$ be the ring of $W$-invariants of $\Bbbk[V]$, and $\Bbbk[V]^W_+$ the ideal of $\Bbbk[V]$ generated by the $W$-invariants of positive degree. A classical theorem of Borel \cite{Borel} states that \begin{equation} \label{Borel-presentation} H^*(G/B,\Bbbk) \cong \Bbbk[V]/(\Bbbk[V]^W_+). \end{equation} A desirable feature of Borel's presentation is its shortness: since $W$ acts on $V^\star$ as a finite reflection group, by Chevalley's Theorem, the $W$-invariants $\Bbbk[V]^W$ are a polynomial algebra $\Bbbk[f_1,\ldots,f_n]$, where $n:=\dim_\Bbbk V$ is the rank of $G$ \cite[Section~3.5]{humphreys}. Hence Borel's presentation shows that $H^\star(G/B)$ is a complete intersection as it has $n$ generators and $n$ relations. There is a second way to describe $H^{\star}(G/B)$ arising from a cell decomposition of $G/B$. One has $$ \begin{array}{rll} G&=\bigsqcup_{w \in W} BwB & \text{(Bruhat decomposition)}\\ G/B&=\bigsqcup_{w \in W} X_w^\circ & \text{(Schubert cell decomposition), with}\\ X_w^\circ&:=BwB/B \cong {\mathbb C}^{\ell(w)} &\text{ (open Schubert cell)}. \end{array} $$ Here $\ell(w)$ denotes the Coxeter group length of $w$ under the Coxeter system $(W,S)$, where $S$ are the reflections in the simple roots $\Pi$ among the positive roots $\Phi^+$ within the natural root system $\Phi$ associated to our Lie pinning. 
This gives a $CW$-decomposition for $G/B$ in which all cells have even real dimension. Hence the fundamental homology classes $[X_w]$ of their closures $X_w:=\overline{X_w^\circ}$, the {\bf Schubert varieties}, form a ${\mathbb Z}$-basis for the integral homology $H_\star(G/B)$, and their (Kronecker) duals $\sigma_w:=[X_w]^\star$ form the dual ${\mathbb Z}$-basis for the integral cohomology $H^\star(G/B)$. The above facts lead to an {\it a priori} presentation for $H^\star(X_w)$; see also \cite[Cor. 4.4]{Carrell} and \cite[Prop. 2.1]{GasharovReiner}: The Schubert variety $X_w$ is the union of the Schubert cells $X_u^\circ$ for which $u \leq w$ in the {\it (strong) Bruhat order}; hence it inherits a cell decomposition from the flag variety. Consequently, the map on cohomology $$ H^\star(G/B) \rightarrow H^\star(X_w) $$ induced by including $X_w \hookrightarrow G/B$ is surjective with kernel \begin{equation} \label{first-I-def} I_w:=\Bbbk\text{-span of }\{ \sigma_u: u \not\leq w \}. \end{equation} This gives rise to presentations \begin{equation} \label{wasteful-presentation} \begin{aligned} H^\star(X_w) &\cong H^\star(G/B)/I_w &\text{ working over }{\mathbb Z}\\ &\cong \Bbbk[V]/\left( I_w + \Bbbk[V]^W_+ \right)& \text{ working over }\Bbbk.\\ \end{aligned} \end{equation} This presentation \eqref{wasteful-presentation} involves a generating set \eqref{first-I-def} for $I_w$ with at most $|W|$ generators. However, this generating set for $I_w$ is wasteful in that it not only generates $I_w$ as an ideal but actually spans it $\Bbbk$-\emph{linearly} within $H^\star(G/B)$. Therefore, a basic question is to request more efficient generating sets for $I_w$. For type $A_{n-1}$, earlier work \cite{GasharovReiner} reduced the $|W|=n!$ upper bound on the number of generators for $I_w$ to a polynomial bound of $n^2$ for the class of Schubert varieties $X_w$ \emph{defined by inclusions}; this class includes all smooth $X_w$. 
Moreover, for a certain subclass of smooth Schubert varieties $X_w$ considered originally by Ding \cite{Ding1,Ding2}, they gave a smaller generating set for these $I_w$ having only $n$ generators. This latter result was applied in \cite{DevelinMartinReiner} to classify these varieties up to isomorphism and homeomorphism (the generating set \eqref{first-I-def} having proved too unwieldy). One motivation for this work arises from the desire to extend this classification to general Schubert varieties of type $A$. Experience suggests that presentations for $I_w$ that are as simple as possible are best for this purpose. The question of finding simple presentations of $I_w$ for other root systems appears to have been less studied. The main goal of this paper is to give a concise and root-system uniform extension of Borel's presentation that produces for arbitrary $w\in W$ an abbreviated list of generators for $I_w$. Our first main result, Theorem~\ref{general-bound}, achieves this via a strong restriction on the {\bf descent set} $$ \mathrm{Des}(u):=\{ s \in S: \ell(us) < \ell(u) \} $$ of the elements $u$ in $W$ that index elements in our list of ideal generators $\sigma_u$ for $I_w$. We will need the following definitions, the first two of which are standard: \begin{enumerate} \item[$\bullet$] An element $u\in W$ is {\bf grassmannian} if $|\mathrm{Des}(u)| \leq 1$. \item[$\bullet$] An element $v \in W$ is {\bf bigrassmannian} if both $v$ and $v^{-1}$ are grassmannian. \item[$\bullet$] Given $w \in W$, the {\bf essential set for} $w$, denoted ${\mathcal{E}}(w)$, is the set of $u\in W$ which are minimal in the Bruhat order among those {\it not} below $w$. \end{enumerate} The nomenclature ``essential set'' for ${\mathcal{E}}(w)$ is justified in Proposition~\ref{essential-proposition}, where we give a new characterization of Fulton's essential set \cite{Fulton} for the case of the symmetric group $W=S_n$. 
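For small symmetric groups, ${\mathcal{E}}(w)$ can be computed directly from the definition. The Python sketch below uses the standard tableau criterion for Bruhat comparison in $S_n$ (in one-line notation, $u \leq w$ iff for every $i$ the sorted prefix $u(1),\ldots,u(i)$ is entrywise at most the sorted prefix of $w$); for example, it finds ${\mathcal{E}}(213) = \{132\}$, which is indeed bigrassmannian:

```python
from itertools import permutations

def bruhat_leq(u, w):
    """Tableau criterion for Bruhat order on S_n, one-line notation:
    u <= w iff sorted(u[:i]) <= sorted(w[:i]) entrywise for all i."""
    return all(a <= b
               for i in range(1, len(u))
               for a, b in zip(sorted(u[:i]), sorted(w[:i])))

def essential_set(w):
    """E(w): Bruhat-minimal elements among those NOT below w."""
    n = len(w)
    above = [u for u in permutations(range(1, n + 1))
             if not bruhat_leq(u, w)]
    return {u for u in above
            if not any(v != u and bruhat_leq(v, u) for v in above)}

def descents(u):
    """Positions i with u(i) > u(i+1)."""
    return {i for i in range(1, len(u)) if u[i - 1] > u[i]}
```

Enumerating all $w$ in $S_4$ and checking the descent sets of each $v \in {\mathcal{E}}(w)$ and of $v^{-1}$ confirms the Lascoux--Sch\"utzenberger bigrassmannian property stated above in these small cases.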
Indeed, ${\mathcal{E}}(w)$ has been previously studied from a different point of view: a result of Lascoux and Sch\"utzenberger \cite[Th\'eor\`eme 3.6]{LascouxSchutzenberger}, implicit in our first main result below, is that the elements in ${\mathcal{E}}(w)$ are bigrassmannian for any $w$ in $W$.\footnote{ For Lascoux and Sch\"utzenberger this arises in their work (see also Geck and Kim \cite{GeckKim} and Reading \cite{Reading}) seeking efficient {\it encodings} of the strong Bruhat order; it is perhaps not surprising that it should arise in our search for efficient cohomology presentations as well.} \begin{theorem} \label{general-bound} For any $w\in W$, working in a field $\Bbbk$ of characteristic zero, the ideal $I_w$ defining $H^\star(X_w)$ as a quotient of $H^\star(G/B)$ is generated by the cohomology classes $\sigma_u$ where $u \not\leq w$ and $u$ is grassmannian. More precisely, $I_w$ is generated by the classes $\sigma_u$ indexed by those grassmannian $u$ for which there exist some bigrassmannian $v$ in ${\mathcal{E}}(w)$ satisfying both $u \geq v$ and $\mathrm{Des}(u) = \mathrm{Des}(v)$. \end{theorem} In type $A$, a similar result was obtained by Akyildiz, Lascoux, and Pragacz \cite[Theorem~2.2]{ALP}. Specifically, they prove the first sentence of Theorem~\ref{general-bound}, though they do not address the strengthening given by the second sentence. Their methods are mainly geometric, as opposed to our essentially combinatorial arguments. Their work provides, to our knowledge, the first inroads towards an abbreviated generating set for $I_w$. Theorem~\ref{general-bound} replaces the general upper bound of $|W|$ on the number of generators needed for $I_w$ with the bound \begin{equation} \label{less-wasteful-general-bound} \sum_{s \in S} [W:W_{S\setminus\{s\}}], \end{equation} where, for any subset $J\subset S$, $W_J$ denotes the parabolic subgroup of $W$ generated by $J$. 
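In type $A_{n-1}$ the bound \eqref{less-wasteful-general-bound} is concrete: $[S_n : S_k \times S_{n-k}] = \binom{n}{k}$, so the bound equals $\sum_{k=1}^{n-1}\binom{n}{k} = 2^n - 2$, while a direct count shows that $S_n$ contains exactly $2^n - n$ grassmannian permutations (the identity plus, for each descent position $k$, the $\binom{n}{k}-1$ non-identity ones). A quick enumeration confirms both counts:

```python
from itertools import permutations
from math import comb

def descent_positions(u):
    """Positions i with u(i) > u(i+1), one-line notation."""
    return [i for i in range(1, len(u)) if u[i - 1] > u[i]]

def count_grassmannian(n):
    """Number of permutations in S_n with at most one descent."""
    return sum(1 for u in permutations(range(1, n + 1))
               if len(descent_positions(u)) <= 1)

# the bound: sum of the parabolic indices [S_n : S_k x S_{n-k}] = 2^n - 2
bound = lambda n: sum(comb(n, k) for k in range(1, n))
```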
Our theorem is deduced in Section~\ref{Hiller-section} from a more general result (Theorem~\ref{Hiller-BGG-general-bound}) that applies to Hiller's extension \cite[Chapter IV]{Hiller} of Schubert calculus as introduced by Bernstein, Gelfand, and Gelfand \cite{BGG} and Demazure~\cite{Demazure} to the \emph{coinvariant algebras} of finite reflection groups $W$. Section~\ref{Hiller-section} explains and proves Theorem~\ref{general-bound}. In Section~\ref{parabolic-section}, we exploit the particular form of our generators to derive a straightforward extension to Schubert varieties in any partial flag manifold $G/P$ associated to a parabolic subgroup $P$ of $G$. Section~\ref{type-A-section} examines more closely Theorem~\ref{general-bound} in type $A_{n-1}$. Here one can take $G=GL_n({\mathbb C})$, with $B$ the subgroup of invertible upper triangular matrices, $T$ the invertible diagonal matrices, and $W=S_n$ the permutation matrices. The bound on generators for $I_w$ in \eqref{less-wasteful-general-bound} becomes $2^n$, at least a practical improvement on $|W|=n!$. More importantly, one can be even more explicit and efficient in the generating sets for $I_w$. Identify points of $G/B$ with complete flags $$ \langle 0 \rangle \subset V_1 \subset \cdots \subset V_{n-1} \subset {\mathbb C}^n. $$ Under this identification, each Schubert variety $X_w$ is the set of flags satisfying certain specific conditions derived from $w$ of the form $\dim_{\mathbb C} (V_r \cap {\mathbb C}^s)\geq t$. The bigrassmannians $v$ comprising ${\mathcal{E}}(w)$ correspond to Fulton's {\it essential Schubert conditions}, a minimal list of such conditions defining the Schubert variety $X_w$. 
Our second main result (Theorem~\ref{J-generators}) provides for each bigrassmannian $v$ in $W=S_n$ a generating set for the ideal \begin{equation} \label{eqn:J_vgenset} J_v:=\Bbbk\text{-span of }\{ \sigma_u: u \geq v \} \end{equation} in type $A$ that is smaller than the one used as a general step (Theorem~\ref{general-J-generators}) in the proof of Theorem~\ref{general-bound} for arbitrary finite Coxeter groups $W$. Our proof of Theorem~\ref{J-generators} is based on symmetric function identities that we devise for this purpose. Therefore, concatenating these generating sets for $J_v$ gives a generating set for \begin{equation} \label{eqn:I_wconcat} I_w = \sum_{v \in {\mathcal{E}}(w)} J_v \end{equation} that is smaller than the one provided by Theorem~\ref{general-bound}. We remark that this result subsumes (and slightly improves upon; see Example~\ref{GR-comparison}) the generating set of size $n^2$ given by \cite{GasharovReiner} in the case of Schubert varieties defined by inclusions. Actually, we conjecture that this smaller generating set for $J_v$ in type $A$ is minimal (although the generating set (\ref{eqn:I_wconcat}) for $I_w$ obtained by concatenation is not always minimal; see Example~\ref{non-minimal-example}.) The significance of this minimality conjecture, as explained in Section~\ref{minconj}, is that it implies an exponential \emph{lower} bound of at least \[\binom{n/2}{n/4} \sim \frac{{\sqrt 2}^{n+2}}{\sqrt{\pi n}}\] on the number of generators needed for $I_w$, accompanying our exponential upper bound of $2^n$. Thus one would not be able to expect short presentations for $H^*(X_w)$ in general, at least in type~$A$. \section{Proof of Theorem~\ref{general-bound}} \label{Hiller-section} As in Section~1, let $\Bbbk$ be a field of characteristic zero. 
For $v,w$ in $W$, define two $\Bbbk$-linear subspaces $J_v, I_w$ of the cohomology $H^\star(G/B)$ with $\Bbbk$ coefficients: \begin{equation} \label{I-J-definitions} \begin{aligned} J_v&:=\Bbbk\text{-span of }\{ \sigma_u: u \geq v \}\\ I_w&:=\Bbbk\text{-span of }\{ \sigma_u: u \not\leq w\} \left(=\sum_{v \in {\mathcal{E}}(w)} J_v\right). \end{aligned} \end{equation} Recall the essential set ${\mathcal{E}}(w)$ is the set of all Bruhat-minimal elements of $\{v \in W: v \not\leq w\}$. The Schubert cell decomposition of $G/B$ shows that both of these $\Bbbk$-subspaces are actually ideals in the cohomology ring $H^\star(G/B)$: one has that $I_w$ (respectively $J_v$) is the kernel of the surjection $H^\star(G/B) \rightarrow H^\star(X)$ induced by the inclusion of the $B$-stable subvariety $X \subseteq G/B$, where $X=X_w$ (respectively $X= \bigcup_{u \not\geq v} X_u$). Also recall from Section~1 the following important property of bigrassmannians in the Bruhat order on Coxeter groups, originally due to Lascoux and Sch\"utzenberger \cite[Th\'eor\`eme 3.6]{LascouxSchutzenberger} and Geck and Kim \cite[Lemma 2.3 and Theorem 2.5]{GeckKim}. \begin{lemma} \label{bigrassmannian-fact} For any Coxeter system $(W,S)$ and $w$ in $W$, every element of ${\mathcal{E}}(w)$ is bigrassmannian. \end{lemma} \noindent See Section~\ref{essential-subsection} below for a further interpretation of ${\mathcal{E}}(w)$ when $W$ is a Weyl group of type $A_{n-1}$. As a consequence of Lemma~\ref{bigrassmannian-fact} and (\ref{I-J-definitions}), finding generators of $J_v$ for bigrassmannian $v$ automatically gives generators for the ideals $I_w$. We will actually work at the level of generality of irreducible Coxeter systems $(W,S)$ with $W$ finite, using Hiller's version \cite{Hiller} of the Schubert calculus \cite{BGG, Demazure} for coinvariant algebras. 
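These definitions are small enough to explore by brute force in type $A$. The sketch below (our own illustration; `bruhat_leq` implements the standard tableau criterion for Bruhat order on $S_n$) computes ${\mathcal{E}}(w)$ directly as the Bruhat-minimal elements of $\{v : v \not\leq w\}$ and confirms Lemma~\ref{bigrassmannian-fact} on the permutation $w=425163$, an example revisited in Section~\ref{type-A-section}:

```python
from itertools import permutations

def bruhat_leq(u, w):
    """Tableau criterion: u <= w in Bruhat order on S_n iff, for every k,
    the sorted prefix u_1..u_k is entrywise <= the sorted prefix w_1..w_k."""
    return all(a <= b
               for k in range(1, len(u))
               for a, b in zip(sorted(u[:k]), sorted(w[:k])))

def descents(u):
    """Des(u) = {r : u_r > u_{r+1}}, as positions 1..n-1."""
    return {i + 1 for i in range(len(u) - 1) if u[i] > u[i + 1]}

def is_bigrassmannian(u):
    inv = tuple(sorted(range(1, len(u) + 1), key=lambda i: u[i - 1]))  # u^{-1}
    return len(descents(u)) == 1 and len(descents(inv)) == 1

def essential_set(w):
    """Bruhat-minimal elements of {v : v not <= w}."""
    n = len(w)
    not_below = [v for v in permutations(range(1, n + 1))
                 if not bruhat_leq(v, w)]
    # sort by length so the minimality test below short-circuits quickly
    not_below.sort(key=lambda v: sum(v[i] > v[j]
                                     for i in range(n) for j in range(i + 1, n)))
    return [v for v in not_below
            if not any(u != v and bruhat_leq(u, v) for u in not_below)]

E = essential_set((4, 2, 5, 1, 6, 3))
print(sorted(E))
print(all(is_bigrassmannian(v) for v in E))   # True, as Lemma predicts
```

The same brute-force routine works, slowly, for any small symmetric group.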
Working in this generality emphasizes that the arguments of this section and the next depend only on Coxeter combinatorics and formal properties of divided difference operators and coinvariant algebras. We review here the relevant facts from \cite[Chapter IV]{Hiller}. Let $W$ be a finite and irreducible Coxeter group, and $V$ its reflection representation. One then picks a (possibly non-crystallographic) root system $\Phi$ for $W$ as follows: $\Phi \subset V^\star$ is any $W$-stable choice\footnote{Note that this may require coefficients in a subfield $\Bbbk$ of ${\mathbb R}$ strictly larger than ${\mathbb Q}$ when $W$ is not crystallographic.} of a set of linear functionals $\alpha$ and $-\alpha$ such that the perpendicular spaces $\alpha^\perp$ in $V$ run through the reflecting hyperplanes of $W$. These reflecting hyperplanes divide $V$ into chambers; we pick one which we call the dominant chamber $\mathcal{C}$. One declares the positive roots $\Phi^+\subseteq\Phi$ to be the roots which are positive on this chamber. The Coxeter generators $S$ of $W$ are declared to be the reflections across the hyperplanes which are facets of $\mathcal{C}$. Among the positive roots are the simple roots $\Pi=\{\alpha_s\}_{s \in S}$, where $\alpha_s$ is the positive root vanishing on the reflecting hyperplane of $s$. With these choices, one defines for each $w\in W$ the {\bf Hiller Schubert polynomial}\footnote{``BGG/Demazure Schubert polynomial'' would be as fair, but for us the main point is to distinguish these polynomials from the type $A$ Lascoux-Sch\"{u}tzenberger version forthcoming in Section~4, which are not the same as elements of $\Bbbk[V]$, although equivalent mod $\Bbbk[V]^W_+$.} $S_w$ in $\Bbbk[V]=\mathrm{Sym}(V^\star)$ by $$ S_w := \partial_{w^{-1}w_0} S_{w_0}, $$ where $w_0$ is the unique longest element in $W$, for which one declares that $$ S_{w_0} := \frac{1}{|W|}\prod_{\alpha \in \Phi^+} \alpha.
$$ The images of the polynomials $S_w$, as $w$ ranges over all of $W$, form a basis for the {\bf coinvariant algebra} $\Bbbk[V]/(\Bbbk[V]^W_+)$. Here the {\bf divided difference} operators $\partial_u$ on $\Bbbk[V]$ are defined first for $s\in S$ by $$ \partial_s (f):= \frac{f-s(f)}{\alpha_s} $$ and then for any $u\in W$ of Coxeter length $\ell=\ell(u)$ by $$ \partial_u:=\partial_{s_{i_1}} \cdots \partial_{s_{i_\ell}}, $$ where $u=s_{i_1} \cdots s_{i_\ell}$ is any choice of a (reduced) decomposition expressing $u$ in terms of the generators $s\in S$. The relation to a generalized flag manifold $G/B$ is that $G$, $B$, and $T$ come equipped with a (crystallographic) root system and Weyl group $W$. Let $\mathfrak{h}\subset \mathfrak{b} \subset \mathfrak{g}$ be the Lie algebras associated to $T\subset B\subset G$. One takes the reflection representation $V$ to be $V=\mathfrak{h}$ (or more generally $V=\Bbbk \otimes_{\mathbb Z} X(T)$) and the root system $\Phi$ to be the set of nonzero weights of the adjoint representation of $\mathfrak{g}$ acting on itself, with $\Phi^+$ the nonzero weights of the adjoint action on $\mathfrak{b}$. In this case, it was proven in \cite{BGG, Demazure} that the element $S_w\in\Bbbk[\mathfrak{h}]$ is a lift under the surjection $$ \Bbbk[\mathfrak{h}] \quad \rightarrow \quad \Bbbk[\mathfrak{h}]/(\Bbbk[\mathfrak{h}]^W_+) \quad \left( \cong H^\star(G/B) \right) $$ of the Schubert cohomology class $\sigma_w$ in $H^{\ell(w)}(G/B)$ which is {\it Kronecker dual} to the fundamental homology class $[X_w]$ of $X_w$ in $H_{\ell(w)}(G/B)$ and {\it Poincar\'e dual} to the fundamental homology class $[X_{w_0w}]$ in $H_{\ell(w_0)-\ell(w)}(G/B)$. We collect basic properties of the divided difference operators $\partial_s$ and Schubert polynomials $S_w$ that we need below. \begin{enumerate} \item[(a)] {\bf Leibniz rule}: $\partial_s (fg) = \partial_s(f) \cdot g + s(f) \partial_s(g).$ \item[(b)] $\partial_s(f)=0$ if and only if $s(f)=f$.
When $f=S_w$ for some $w$, one has $\partial_s(S_w)=0$ if and only if $\ell(ws) > \ell(w)$, or equivalently if and only if $s \not\in \mathrm{Des}(w)$. \item[(c)] Consequently, $\partial_s$ preserves the ideal $I= (\Bbbk[V]^W_+)$ generated by the $W$-invariants of positive degree in $\Bbbk[V]$. \item[(d)] $$ \partial_u \partial_v = \begin{cases} \partial_{uv}& \text{ if }\ell(u)+\ell(v)=\ell(uv)\\ 0 & \text{ otherwise.}\\ \end{cases} $$ \item[(e)] If $\ell(w)=\ell(w')$ then $\partial_{w'} S_w = \delta_{w,w'}$. (By definition, $\delta_{a,b}=1$ if $a=b$ and $\delta_{a,b}=0$ otherwise.) \item[(f)] Consequently, the {\bf Schubert structure constants} $c_{u,v}^w$ which uniquely express $$ S_u S_v = \sum_{\substack{w \in W:\\ \ell(w)=\ell(u)+\ell(v)}} c_{u,v}^w S_w \qquad \mod \Bbbk[V]^W_+ $$ can be computed by the formula $c_{u,v}^w= \partial_w \left( S_u S_v \right)$. \item[(g)] These structure constants also satisfy this sparsity rule: $$ c_{u,v}^w=0 \text{ unless }w \geq u,v\text{ in Bruhat order.} $$ This is a consequence of the Pieri formula \cite[\S IV.3]{Hiller} for multiplying the $S_u$ by any of the degree one elements that generate $\Bbbk[V]/(\Bbbk[V]^W_+)$. \end{enumerate} The key to the proof of Theorem~\ref{general-bound} turns out to be the following lemma (perhaps of independent interest) about some further sparsity of the Schubert structure constants $c_{u,v}^w$ appearing in property $(f)$ above. Given $J \subset S$, recall that every $w$ in $W$ has a unique length-additive {\bf parabolic factorization} $$ w=u \cdot x $$ where $x$ lies in the parabolic subgroup $W_J$ generated by $J$, and $u$ lies in the set $W^J$ of minimum-length coset representatives for $W/W_J$, characterized by the property that \linebreak $\mathrm{Des}(u) \cap J = \varnothing$. \begin{lemma} \label{sparsity-lemma} Let $J \subset S$. Suppose $w,w'$ in $W$ lie in the same coset $wW_J=w'W_J$, so that $w = u \cdot x$ and $w'=u \cdot x'$ for some $u$ in $W^J$ and $x, x'$ in $W_J$.
Then $ c_{u,x}^{w'}= \delta_{w',w} = \delta_{x',x}. $ \end{lemma} \begin{proof} Using property (f) of divided differences above, one can rephrase the lemma as saying that, for any $w'\in W$ with $w'=u \cdot x'$ where $x' \in W_J$ and $\ell(x')=\ell(x)$ (so that $\ell(w')=\ell(w)$), one has \begin{equation} \label{rephrased-sparsity} \partial_{w'} (S_u S_x) = \delta_{x,x'}. \end{equation} We prove \eqref{rephrased-sparsity} by induction on $\ell(x')$. In the base case where $\ell(x')=0$, one has $x'=x=1$ and $w'=w=u$, so the assertion \eqref{rephrased-sparsity} follows from property (e) above. In the inductive step, let $\ell(x')>0$. Thus, there exists $s\in J$ with $\ell(x's)<\ell(x')$. Consequently, $$ \ell(w')=\ell(u)+\ell(x') = \ell(u)+\ell(x's)+\ell(s). $$ Hence by properties (d), (a) and (b), one has, respectively, \begin{equation} \label{Leibniz-induction} \begin{aligned} \partial_{w'}( S_u S_x ) & = \partial_u \partial_{x's} \partial_s ( S_u S_x ) \\ & = \partial_u \partial_{x's} \left( \partial_s(S_u) \cdot S_x + s(S_u) \partial_s(S_x ) \right)\\ & = \partial_u \partial_{x's} ( S_u \partial_s(S_x ) ). \end{aligned} \end{equation} Now consider two cases. \vskip .1in \noindent {\sf Case 1.} $\ell(xs) > \ell(x)$. Then property (b) says that $\partial_s(S_x)=0$. Using this in the last line of \eqref{Leibniz-induction}, one concludes that $$ \partial_{w'}( S_u S_x ) = 0= \delta_{x,x'} $$ since $\ell(x's)< \ell(x')$ implies $x \neq x'$. \vskip .1in \noindent {\sf Case 2.} $\ell(xs) < \ell(x)$. Then $\partial_s(S_x)=S_{xs}$, and $$ \begin{aligned} \partial_{w'}( S_u S_x ) & = \partial_u \partial_{x's} ( S_u \partial_s(S_x ) ) \\ & = \partial_u \partial_{x's} ( S_u S_{xs} )\\ & = \delta_{xs,x's} \\ & = \delta_{x,x'}, \end{aligned} $$ where the second-to-last equality applied the inductive hypothesis to $x's$ and $xs$.
\end{proof} We now use this lemma to find a smaller generating set for the ideal $J_v$ as defined in \eqref{I-J-definitions} based on the descent set $\mathrm{Des}(v)$. Working more generally, for any finite Coxeter group $W$ and a choice of root data for the Hiller Schubert calculus, define the following two $\Bbbk$-subspaces within the coinvariant algebra $\Bbbk[V]/(\Bbbk[V]^W_+)$: \begin{equation} \label{Hiller-I-and-J-definitions} \begin{aligned} J_v&:=\Bbbk\text{-span of }\{ S_u: u \geq v \}\\ I_w&:=\Bbbk\text{-span of }\{ S_u: u \not\leq w\} = \sum_{v\in{\mathcal{E}}(w)} J_v. \end{aligned} \end{equation} Note that in this context we can appeal to property (g) to see that $J_v$ and $I_w$ are actually {\it ideals} within the coinvariant algebra $\Bbbk[V]/(\Bbbk[V]^W_+)$. \begin{theorem} \label{general-J-generators} Let $v$ be an element of a finite Coxeter group $W$ and $J$ a subset of $S$. Assume $v$ lies in $W^J$, or, equivalently, $\mathrm{Des}(v) \cap J = \varnothing$. Then $J_v$ is generated as an ideal within the coinvariant algebra $\Bbbk[V]/(\Bbbk[V]^W_+)$ by the set $$ \{ S_u: u \in W^J, \,\, u \geq v \}. $$ \end{theorem} \begin{proof} For any $t\in W^J$, let $J^\prime_t$ denote the ideal generated by the set $$ \{ S_u: u\in W^J, \,\, u\geq t \}. $$ We need to show $J^\prime_v=J_v$. Certainly $J^\prime_v \subseteq J_v$, since $J_v$ is an ideal containing these generators, so it remains to show the reverse inclusion. Proceed by induction on the {\bf colength} $\ell(w_0)-\ell(v)$ of $v$. In the base case where $v$ has colength $0$, $v=w_0$; therefore the assumption $v \in W^J$ implies $J=\varnothing$, so $W^J=W$, and there is nothing to prove. In the inductive step, given $w \geq v$, one must show that $S_w$ lies in $J'_v$. Factor $w=u\cdot x$ uniquely with $u \in W^J$ and $x \in W_J$. We will use repeatedly the fact (see \cite[\S 2.5]{BjornerBrenti}) that the map $$ \begin{aligned} W & \overset{P^J}{\longrightarrow} W^J \\ w &\longmapsto u \end{aligned} $$ is order-preserving for the Bruhat order.
In particular, since it was assumed that $w \geq v$ above, one has $u \geq v$ here. By Lemma~\ref{sparsity-lemma}, one has $$ \begin{aligned} S_u S_x &= S_w + \sum_{w'} c_{u,x}^{w'} S_{w'},\,\,\text{ so that } \\ S_w &= S_u S_x - \sum_{w'} c_{u,x}^{w'} S_{w'}. \end{aligned} $$ Here each $w'$ appearing in the sums satisfies $w' \geq u$ by property (g), and hence if one factors $w'=u' \cdot x'$ with $u' \in W^J$ and $x' \in W_J$, then $w' \geq u$ implies $u' \geq u$. But then Lemma~\ref{sparsity-lemma} also says that $c_{u,x}^{w'}=0$ unless $u' > u\ (\geq v)$; hence, by induction, for any $w^\prime$ with $S_{w^\prime}$ appearing with nonzero coefficient in the right-hand sum, one has $J_{u^\prime}=J^\prime_{u^\prime}$ for the corresponding $u^\prime$ in the factorization $w^\prime=u^\prime\cdot x^\prime$, so that $S_{w'}$ lies in $J_{u'}=J'_{u'}$ as $w' \geq u'$. Since $u' > v$, one has $J'_{u'} \subseteq J'_v$, and since $S_u$ also lies in $J'_v$ (as $u \in W^J$ and $u \geq v$), one concludes that $S_w$ lies in $J'_v$ as desired. \end{proof} Theorem~\ref{general-bound} is then a special case of the following result, which is immediate from Lemma~\ref{bigrassmannian-fact} and Theorem~\ref{general-J-generators}: \begin{theorem} \label{Hiller-BGG-general-bound} Let $(W,S)$ be a Coxeter system with $W$ finite, and let $w$ be an element of $W$. Then the ideal $I_w$ of the coinvariant algebra $\Bbbk[V]/(\Bbbk[V]^W_+)$, defined as in \eqref{Hiller-I-and-J-definitions}, is generated by the Schubert polynomials $$ \{S_u: u \not\leq w, \text{ and }u\text{ is grassmannian}\}. $$ More precisely, $I_w$ is generated by the set $\{S_u \}$ for those $u$ for which there exists some (bigrassmannian) $v$ in ${\mathcal{E}}(w)$ satisfying both $u \geq v$ and $\mathrm{Des}(u) = \mathrm{Des}(v)$.
\end{theorem} \section{Reducing presentations in $H^{\star}(G/P)$ to those in $H^{\star}(G/B)$} \label{parabolic-section} We explain in this section how Theorem~\ref{Hiller-BGG-general-bound} leads to a shorter presentation more generally for the cohomology of Schubert varieties in any partial flag manifold. Given $J \subset S$, one has the parabolic subgroup $W_J$ of $W$ generated by $J$. One also has a corresponding parabolic subgroup $P_J$ of $G$, generated by the Borel subgroup $B$ together with representatives within $G$ that lift the elements $J \subset W=N_G(T)/T$. The Borel picture identifies the cohomology ring of the {\bf (generalized) partial flag manifold} $G/P_J$ as the subring of $W_J$-invariants inside the cohomology of $G/B$. In other words, the quotient map \[G/B \overset{\pi}{\rightarrow} G/P_J\] induces the inclusion \begin{equation} \label{parabolic-inclusion} H^\star(G/P_J) \cong H^\star(G/B)^{W_J} \overset{i}{\hookrightarrow} H^\star(G/B). \end{equation} Recall that $W^J$ denotes the set of minimum-length coset representatives for $W/W_J$. Inside $H^\star(G/B)$, the cohomology classes $\{ \sigma_w: w \in W^J\}$ lie in this $W_J$-invariant subalgebra $H^\star(G/B)^{W_J}$ and form a $\Bbbk$-basis identified with the $\Bbbk$-basis of Schubert cohomology classes $\sigma_{wW_J}$ for $H^\star(G/P_J)$. One also has that the pre-images of Schubert varieties in $G/P_J$ are certain Schubert varieties of $G/B$: specifically, $$ \pi^{-1}(X_{wW_J}) = X_{w_{\max}}, $$ where $w_{\max}$ is the unique {\it maximum}-length coset representative in $wW_J$. Working more generally with the Hiller Schubert calculus for any finite Coxeter system $(W,S)$, the inclusion \eqref{parabolic-inclusion} generalizes to the inclusion \begin{equation} \label{Hiller-parabolic-inclusion} \Bbbk[V]^{W_J}/ \Bbbk[V]^W_+ \Bbbk[V]^{W_J} \cong \left( \Bbbk[V] / ( \Bbbk[V]^W_+ ) \right)^{W_J} \overset{i}{\hookrightarrow} \Bbbk[V]/(\Bbbk[V]^W_+).
\end{equation} The first isomorphism shown in \eqref{Hiller-parabolic-inclusion} is a consequence of the fact that one has an {\it averaging} map \begin{equation} \label{parabolic-retraction} \begin{array}{rll} \Bbbk[V] &\overset{\rho}{\longrightarrow} &\Bbbk[V]^{W_J} \\ f &\longmapsto &\frac{1}{|W_J|} \sum_{w \in W_J} w(f) \end{array} \end{equation} which provides a $\Bbbk[V]^{W_J}$-linear (and hence also $\Bbbk[V]^W$-linear) retraction map for $i$, meaning that $\rho \circ i = 1_{\Bbbk[V]^{W_J}}$. Inside $\Bbbk[V]/(\Bbbk[V]^W_+)$, the Hiller Schubert polynomials $\{ S_w : w \in W^J\}$ lie in this $W_J$-invariant subalgebra $\left( \Bbbk[V] / ( \Bbbk[V]^W_+ ) \right)^{W_J}$ and provide a $\Bbbk$-basis for it. The retraction in \eqref{parabolic-retraction} also provides the relation between the cohomology presentations for the Schubert varieties $X_{wW_J}$ and $X_{w_{\max}}$. Recall that when one has an inclusion of rings $R \overset{i}{\hookrightarrow} \hat{R}$, one can relate ideals of $R$ and $\hat{R}$ by the operations of {\it extension} and {\it contraction}: given an ideal $I$ in $R$, its extension $\hat{R}I$ to $\hat{R}$ is the ideal it generates in $\hat{R}$, and given an ideal $\hat{I}$ of $\hat{R}$, its contraction to $R$ is the intersection $\hat{I} \cap R$. Say that the inclusion $R \overset{i}{\hookrightarrow} \hat{R}$ is a {\it split inclusion} if it has an $R$-linear {\it retraction} $\hat{R} \overset{\rho}{\rightarrow} R$, meaning that $\rho \circ i = 1_R$. The following proposition about this situation is straightforward and well-known. \begin{proposition} \label{retraction-proposition} Assume $R \overset{i}{\hookrightarrow} \hat{R}$ is a split inclusion, and $\hat{I}$ is an ideal of $\hat{R}$ which is generated by its contraction $I:=\hat{I} \cap R$ to $R$. Then a set of elements $\{g_\alpha\}$ lying in $R$ generate $\hat{I}$ as an ideal of $\hat{R}$ if and only if the same elements $\{g_\alpha\}$ generate $I=\hat{I}\cap R$ as an ideal of $R$. 
\end{proposition} We will apply this proposition to the split inclusion \eqref{Hiller-parabolic-inclusion} and these ideals \begin{equation} \label{parabolic-ideal-definition} \begin{array}{llll} I=I_{wW_J} &:= \Bbbk\text{-span of }\{S_u: u \in W^J, \,\, uW_J \not\leq wW_J\} &\subset \left( \Bbbk[V] / ( \Bbbk[V]^W_+ ) \right)^{W_J}& = R\\ {\hat I} = I_{w_{\max}} &:= \Bbbk\text{-span of }\{S_u: u \in W, \,\, u \not\leq w_{\max}\} &\subset \Bbbk[V] / ( \Bbbk[V]^W_+ ) & ={\hat R} \end{array} \end{equation} which in the case where $(W,S)$ comes from an algebraic group $G$ have the following interpretations as kernels: \begin{equation} \label{kernel-interpretations} \begin{aligned} I_{wW_J} &=\ker\left( H^\star(G/P_J) \overset{i^\star}{\rightarrow} H^\star(X_{wW_J}) \right) \\ I_{w_{\max}}&=\ker\left( H^\star(G/B) \overset{i^\star}{\rightarrow} H^\star(X_{w_{\max}}) \right). \end{aligned} \end{equation} Borel's picture already gives a very short presentation for $H^\star(G/P_J)$ or more generally $\left( \Bbbk[V] / ( \Bbbk[V]^W_+ ) \right)^{W_J}$, as we now explain. The isomorphism in \eqref{Hiller-parabolic-inclusion} says that a presentation of $\left( \Bbbk[V] / ( \Bbbk[V]^W_+ ) \right)^{W_J}$ is equivalent to a presentation of the quotient $\Bbbk[V]^{W_J}/ \Bbbk[V]^W_+ \Bbbk[V]^{W_J}$. Since both $W_J$ and $W$ are finite reflection groups acting on $V$, their invariant rings are both polynomial algebras $$ \begin{aligned} \Bbbk[V]^{W_J}&=\Bbbk[g_1,\ldots,g_n] \\ \Bbbk[V]^W &=\Bbbk[f_1,\ldots,f_n],\\ \end{aligned} $$ and hence the quotient can be presented as a graded complete intersection ring: $$ \begin{aligned} \left( \Bbbk[V] / ( \Bbbk[V]^W_+ ) \right)^{W_J} &\cong \Bbbk[V]^{W_J}/ \Bbbk[V]^W_+ \Bbbk[V]^{W_J} \\ &\cong \Bbbk[g_1,\ldots,g_n]/ (f_1,\ldots,f_n). \end{aligned} $$ Thus we only need to provide generators for the ideal $I_{wW_J}$. 
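As a quick sanity check (our own illustration, not part of the paper's argument) that the averaging map \eqref{parabolic-retraction} really is a retraction, one can evaluate numerically in type $A$: take $W=S_3$ acting on points of ${\mathbb Q}^3$ by permuting coordinates and $J=\{s_2\}$, so $W_J=\{1,s_2\}$.

```python
from fractions import Fraction

# W = S_3 acting on points of Q^3 by permuting coordinates; W_J = {1, s_2}.
def s2(x):
    return (x[0], x[2], x[1])

def average(f):
    """Average f over the parabolic subgroup W_J = {1, s_2}."""
    return lambda x: (f(x) + f(s2(x))) / 2

pt = (Fraction(3), Fraction(1, 2), Fraction(-5))

g = lambda x: x[1] * x[2] + x[1] + x[2]   # W_J-invariant: symmetric in x_2, x_3
h = lambda x: x[0] * x[1] ** 2            # not W_J-invariant

print(average(g)(pt) == g(pt))            # rho fixes W_J-invariants: True
print(average(lambda x: g(x) * h(x))(pt)
      == g(pt) * average(h)(pt))          # rho is linear over invariants: True
```

The second check illustrates the $\Bbbk[V]^{W_J}$-linearity of $\rho$, which is exactly what makes it a retraction in the sense used by Proposition~\ref{retraction-proposition}.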
\begin{theorem} \label{parabolic-generators-theorem} Let $(W,S)$ be a finite Coxeter system, with $J \subseteq S$ and $w$ in $W$, and $w_{\max}$ the unique maximum-length representative of $wW_J$. Consider the inclusion $$ R:=\left( \Bbbk[V] / ( \Bbbk[V]^W_+ ) \right)^{W_J} \quad \overset{i}{\hookrightarrow} \quad \hat{R}:=\Bbbk[V] / ( \Bbbk[V]^W_+ ). $$ Then the following hold. \begin{enumerate} \item[(i)] The essential set ${\mathcal{E}}(w_{\max})$ lies entirely in $W^J$. \item[(ii)] The ideal $I_{w_{\max}}$ of $\hat{R}$ is generated by its contraction $I_{w_{\max}} \cap R$. \item[(iii)] This contraction is the same as $I_{wW_J}$. \item[(iv)] The set $$ \bigcup_{v \in {\mathcal{E}}(w_{\max})} \{S_u: u \geq v, \mathrm{Des}(u) = \mathrm{Des}(v)\} $$ both generates $I_{w_{\max}}$ as an ideal of $\hat{R}$ and also generates the contraction $I_{wW_J}$ as an ideal of~$R$. \end{enumerate} \end{theorem} \begin{proof} For assertion (i), assume for the sake of contradiction that $v\in {\mathcal{E}}(w_{\max})$ and that $vs < v$ for some $s$ in $J$. Since $v$ is Bruhat-minimal among the elements {\it not} below $w_{\max}$, this implies $vs \leq w_{\max}$. However $w_{\max} s < w_{\max}$ by maximality of $w_{\max}$ within $wW_J$, so the {\it lifting property} \cite[Prop. 2.2.7, Cor. 2.2.8]{BjornerBrenti} of Bruhat order implies that $v \leq w_{\max}$, a contradiction. For assertion (ii), apply Theorem~\ref{Hiller-BGG-general-bound} to $w_{\max}$ to conclude that $I_{w_{\max}}$ is generated by the set $\{S_u\}$ for those $u$ for which there exists $v\in {\mathcal{E}}(w_{\max})$ satisfying both $u \geq v$ and $\mathrm{Des}(u) = \mathrm{Des}(v)$. By (i), this forces $u$ to lie in $W^J$, so that $S_u$ is $W_J$-invariant and therefore lies in $R$.
For assertion (iii), from the definition \eqref{parabolic-ideal-definition} and the fact that $R$ has a $\Bbbk$-basis given by $\{S_u: u \in W^J\}$, it suffices to show that for any $u$ in $W^J$, one has $uW_J \leq wW_J$ if and only if $u \leq w_{\max}$. By definition, $uW_J \leq wW_J$ if and only if $u \leq w_{\min}$, where $w_{\min}$ is the unique representative of $wW_J$ lying in $W^J$. The usual parabolic factorization $W=W^J \cdot W_J$ allows one to write down a reduced word $\omega$ for $w_{\max}$ in the concatenated form $$ \omega=\omega_1 \cdot \omega_2 $$ where the prefix $\omega_1$ is a reduced word for $w_{\min}$, and the suffix $\omega_2$ contains only generators in $J$. The {\it subword characterization} of Bruhat order \cite[Cor. 2.2.3]{BjornerBrenti} shows that $u \leq w_{\max}$ if and only if $u$ is factored by a reduced subword of this word $\omega$; such a subword can use no letters of the suffix $\omega_2$: splitting it at the $\omega_1/\omega_2$ boundary gives a length-additive factorization $u=u''\cdot x''$ with $x''$ in $W_J$, and $\mathrm{Des}(u) \cap J = \varnothing$ for $u$ in $W^J$ forces $x''=1$. Hence the subword must actually be a subword of $\omega_1$. Thus $u \leq w_{\max}$ if and only if $u \leq w_{\min}$, as desired. Assertion (iv) then follows from assertions (ii), (iii) and Proposition~\ref{retraction-proposition}. \end{proof} \section{Refinements in Type $A$} \label{type-A-section} We investigate further the situation when $W$ is a Weyl group of type $A_{n-1}$, which exhibits extra features; one can: \begin{enumerate} \item[$\bullet$] be more explicit about bigrassmannians and their essential sets ${\mathcal{E}}(w)$, \item[$\bullet$] produce even smaller generating sets for the ideals $I_w$ and $J_v$, which are conjecturally minimal in the case of $J_v$, and \item[$\bullet$] work with ${\mathbb Z}$ coefficients rather than over a field $\Bbbk$ of characteristic zero.
\end{enumerate} \subsection{Schubert conditions and bigrassmannians} \label{bigrassmannian-subsection} In type $A_{n-1}$, points in the variety $G/B$ are identified with complete flags of subspaces $$ \langle 0\rangle \subset V_1 \subset \cdots \subset V_{n-1} \subset {\mathbb C}^n $$ having $\dim_{\mathbb C} V_i = i$. Pick as our particular base flag $\{ {\mathbb C}^i \}_{i=0}^n$, where ${\mathbb C}^i$ is spanned by the first $i$ standard basis elements; this flag is fixed by the Borel subgroup $B$ consisting of the invertible upper-triangular matrices within $G:=GL_n({\mathbb C})$. Picking the maximal torus $T$ of invertible diagonal matrices, one identifies the Weyl group $W=N_G(T)/T$ with the symmetric group $S_n$. The Coxeter generators $S$ for $W=S_n$ associated to our Borel subgroup $B$ are the adjacent transpositions $S=\{(1\leftrightarrow 2), (2\leftrightarrow 3),\ldots,(n-1 \leftrightarrow n)\}$. The Schubert variety $X_w$ corresponding to a permutation $w=w_1 w_2 \cdots w_n\in S_n$ (written in one-line notation) can be defined as the subvariety of flags satisfying the conjunction of the {\bf Schubert conditions} \begin{equation} \label{Schubert-condition} \dim_{\mathbb C} (V_r \cap {\mathbb C}^s) \geq t \end{equation} where \begin{equation} \label{eqn:rankfn} t={t}_{r,s}(w):=|\{w_1,w_2,\ldots,w_r\} \cap \{1,2,\ldots,s\}| \end{equation} for $r,s=1,2,\ldots,n-1$; here $t_{r,s}(w)$ is the {\bf rank function} associated to $w$. Denote the condition \eqref{Schubert-condition} by $C_{r,s,t}$ (for arbitrary $t$, not necessarily of the form (\ref{eqn:rankfn})). Note that $C_{r,s,t}$ is vacuous unless $t > r+s-n$. The following explicit identification of bigrassmannian permutations is well-known and straightforward. \begin{lemma} The bigrassmannian permutations (other than the identity) are parameterized by $r,s,t$ with $1\leq t \leq r,s \leq n-1$ and $t>r+s-n$.
Let $v_{r,s,t,n}$ denote the unique bigrassmannian permutation $v_1\ldots v_n\in S_n$ such that \begin{enumerate} \item[$\bullet$] $\mathrm{Des}(v)=\{(r\leftrightarrow r+1)\}$ \item[$\bullet$] $\mathrm{Des}(v^{-1})=\{(s\leftrightarrow s+1)\}$, and \item[$\bullet$] $v_t=s+1$. \end{enumerate} Then explicitly we have: \begin{equation} \label{v-explicitly} \begin{aligned} v_{r,s,t,n}&:=(1, 2, \ldots,t-1 , \\ & \qquad s+1, s+2, \ldots, s+r-t+1, \\ & \quad \qquad t, t+1,t+2, \ldots,s,\\ & \qquad \qquad s+r-t+2,s+r-t+3,\ldots,n). \end{aligned} \end{equation} \end{lemma} There is a simple relation between these Schubert conditions $C_{r,s,t}$ and the bigrassmannian permutations in $W=S_n$: \begin{proposition} \label{bigrassmannian-as-Schubert-condition} Let $w\in S_n$. Then the Schubert condition $C_{r,s,t}$ is satisfied by all flags in $X_w$ if and only if $v_{r,s,t,n} \not\leq w$. \end{proposition} \begin{proof} Note $C_{r,s,t}$ is a Schubert condition on $X_w$ if and only if $t_{r,s}(w) \geq t$. It is then straightforward to check that the latter is equivalent to $v_{r,s,t,n} \not\leq w$ using the {\it tableau criterion} \cite[Theorem 2.6.3]{BjornerBrenti} for comparing elements in the Bruhat ordering. \end{proof} Note that imposing an arbitrary conjunction of Schubert conditions on complete flags cuts out a $B$-stable subvariety of $G/B$, but this subvariety may be a {\it reducible} union of Schubert varieties rather than a single Schubert variety $X_w$. However, in type $A$, when one imposes a {\it single} Schubert condition, the result is always a (single) Schubert variety. This fact can be traced to special properties of the Bruhat order in type $A$, first identified by Lascoux and Sch\"utzenberger \cite{LascouxSchutzenberger}, and exploited further by Geck and Kim \cite{GeckKim} and Reading \cite{Reading}. To explain this, we first recall some terminology. 
\begin{definition} In a poset $P$, say that an element $v$ is a {\it dissector} of $P$ if there exists a (necessarily unique) element $w$ in $P$ for which $P$ decomposes as the disjoint union of the principal order filter above $v$ and the principal order ideal below $w$: $$ P= \{u \in P: u \geq v \} \quad \bigsqcup \quad \{u \in P: u \leq w \}. $$ Say that an element $a$ in a poset $P$ (which need not be a lattice) is {\it join-irreducible} if there does not exist a subset $X \subset P$ with $a \not\in X$ such that $a$ is the least element among all upper bounds for $X$ in $P$. \end{definition} There are two subtle issues to point out in this definition of join-irreducibles. Firstly, when the finite poset is a {\it lattice}, an element is join-irreducible if and only if it covers a unique element. However, for non-lattices, one can have join-irreducibles that cover more than one element. For example, the strong Bruhat order in type $A_2$ has four non-minimal, non-maximal elements, each of which is join-irreducible, but two of them cover two elements. Secondly, all of the posets that we will consider have a unique least element (e.g., in Bruhat order on $W$, the least element is the identity of $W$), and this least element is {\it not} considered join-irreducible because it is the least element among all upper bounds for the empty set $X=\varnothing$. \begin{theorem}\cite{LascouxSchutzenberger, GeckKim, Reading} \label{dissective-theorem} \begin{enumerate} \item[(i)] In any finite poset, every dissector is join-irreducible. When the poset is the Bruhat order for a Coxeter system $(W,S)$ of type $A, B, H_3,H_4$ or $I_2(m)$, the converse holds: the join-irreducible elements are exactly the dissectors. \item[(ii)] In the Bruhat order for any finite Coxeter system $(W,S)$, every join-irreducible element is bigrassmannian. In type $A$, the converse holds: the (non-identity) bigrassmannian elements are exactly the join-irreducibles.
\end{enumerate} In particular, in type $A$, for every bigrassmannian $v$ in $W$, there exists a (necessarily unique) element $w$ in $W$ for which ${\mathcal{E}}(w)=\{v\}$. \end{theorem} \begin{corollary} In type $A_{n-1}$, a single Schubert condition $C_{r,s,t}$ on the flags in $G/B$ cuts out the Schubert variety $X_{w_{r,s,t,n}}$ in $G/B$, where $w_{r,s,t,n}$ is the unique element with ${\mathcal{E}}(w_{r,s,t,n})=\{v_{r,s,t,n}\}$ as in Theorem~\ref{dissective-theorem}. Thus in type $A_{n-1}$, for any bigrassmannian $v_{r,s,t,n}$, one has equality of the two ideals $J_{v_{r,s,t,n}}=I_{w_{r,s,t,n}}$ within the coinvariant algebra $\Bbbk[V]/(\Bbbk[V]^W_+)$. \end{corollary} \noindent We remark that, as with $v_{r,s,t,n}$, one knows $w_{r,s,t,n}$ explicitly (see \cite[\S 8]{Reading}): $$ \begin{aligned} w_{r,s,t,n} &=(n, n-1, \ldots,(n-r+t+1), \\ & \qquad s, s-1, \ldots, s-t+1, \\ & \quad \qquad n-r+t,n-r+t-1, \ldots,s+1,\\ & \qquad \qquad s-t,s-t-1,\ldots,1). \end{aligned} $$ \subsection{Bigrassmannians and essential Schubert conditions} \label{essential-subsection} Next, we explain the relation between what we have called the essential set ${\mathcal{E}}(w)$ for $w$ and Fulton's essential set of Schubert conditions for $X_w$. Note that there are implications among the various Schubert conditions $C_{r,s,t}$. Fulton introduced the {\bf essential set} of a permutation, a set of coordinates $\{(r_i,s_i)\}\subset \{1,\ldots,n\}\times\{1,\ldots,n\}$ which give an inclusion-minimal subset of Schubert conditions $C_{r_i,s_i,t}$ with $t=t_{r_i,s_i}(w)$ that suffice to define $X_w$ as a subset of the flag manifold. (See further remarks in Example~\ref{exa:essential_convention} below.) Correspondingly, we call these Schubert conditions the {\bf essential Schubert conditions} for $X_w$; see \cite[\S3]{Fulton}, \cite[pp. 20-21]{FultonPragacz}, and \cite[\S2]{ErikssonLinusson}.
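Both the explicit formula \eqref{v-explicitly} for $v_{r,s,t,n}$ and Proposition~\ref{bigrassmannian-as-Schubert-condition} are easy to confirm by brute force in small cases. A sketch (our own illustration, using one-line notation and the tableau criterion for Bruhat order):

```python
from itertools import permutations

def v_rstn(r, s, t, n):
    """One-line notation for the bigrassmannian v_{r,s,t,n}: the segments
    (1..t-1), (s+1..s+r-t+1), (t..s), (s+r-t+2..n)."""
    return (tuple(range(1, t)) + tuple(range(s + 1, s + r - t + 2))
            + tuple(range(t, s + 1)) + tuple(range(s + r - t + 2, n + 1)))

def t_rank(w, r, s):
    """Rank function t_{r,s}(w) = |{w_1,...,w_r} ∩ {1,...,s}|."""
    return len(set(w[:r]) & set(range(1, s + 1)))

def bruhat_leq(u, w):
    """Tableau criterion for Bruhat order on S_n."""
    return all(a <= b for k in range(1, len(u))
               for a, b in zip(sorted(u[:k]), sorted(w[:k])))

n = 5
for r in range(1, n):
    for s in range(1, n):
        for t in range(max(1, r + s - n + 1), min(r, s) + 1):
            v = v_rstn(r, s, t, n)
            # Proposition: C_{r,s,t} holds on all of X_w  <=>  v_{r,s,t,n} not <= w
            assert all((t_rank(w, r, s) >= t) == (not bruhat_leq(v, w))
                       for w in permutations(range(1, n + 1)))
print("ok")
```

The same loop run for $n=6$ reproduces the four bigrassmannians $341256$, $152346$, $134526$, $123645$ of the table in Example~\ref{exa:essential_convention}.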
\begin{proposition} \label{essential-proposition} The Schubert condition $C_{r,s,t}$ implies the Schubert condition $C_{r',s',t'}$ if and only if $v_{r,s,t,n} \leq v_{r',s',t',n}$ in Bruhat order. Therefore in type $A_{n-1}$, the Schubert conditions $C_{r,s,t}$ in Fulton's essential set for $X_w$ correspond bijectively to the elements of the essential set ${\mathcal{E}}(w)$ for $w$ defined for a general Coxeter group. \end{proposition} \begin{proof} For the first assertion, note that the Schubert cell decomposition for $X_w$ and Theorem~\ref{dissective-theorem} give the following: $$ \begin{aligned} C_{r,s,t} \text{ implies }C_{r',s',t'} \Leftrightarrow& \qquad X_{w_{r,s,t,n}} \subseteq X_{w_{r',s',t',n}} \\ \Leftrightarrow& \qquad \{u \in W: u \leq w_{r,s,t,n} \} \subseteq \{u \in W: u \leq w_{r',s',t',n}\}\\ \Leftrightarrow& \qquad \{u \in W: u \geq v_{r,s,t,n} \} \supseteq \{u \in W: u \geq v_{r',s',t',n}\}\\ \Leftrightarrow& \qquad v_{r,s,t,n} \leq v_{r',s',t',n}. \end{aligned} $$ The second assertion follows immediately from the first. \end{proof} \begin{example} \label{exa:essential_convention} In order to be explicit about the bijection asserted in Proposition~\ref{essential-proposition}, it will be convenient for us to use a slight adaptation of Fulton's essential set. This bijection can be inferred from the discussion in \cite{FultonPragacz} and \cite{GasharovReiner} (our conventions are in line with those of the latter text), and one can thereby also give an explicit bijection between our essential set and Fulton's essential set as originally defined. Given $w=w_1 w_2 \cdots w_n\in S_n$, draw an $n\times n$ matrix of ``bubbles'' $\circ$. Replace the bubbles in positions $(i, w_i)$ with an $\times$ and erase all bubbles in the ``hooks'' weakly below and (nonstandardly) to the \emph{left} of each $\times$. The {\bf diagram} of $w$, denoted ${\mathcal D}(w)$, consists of all bubbles left, which are those not in any hook.
Under this convention $|{\mathcal D}(w)|=\ell(w_0 w)$. This reflects the fact that our diagram is the left$\leftrightarrow$right mirror image of the \emph{standard} diagram of $w_0 w$; see \cite[p.11]{FultonPragacz}. Fulton's essential set is then defined as the set of bubbles $(i,j)$ in ${\mathcal D}(w)$ such that neither $(i+1,j)$ nor $(i,j-1)$ is in ${\mathcal D}(w)$. Let us denote Fulton's essential set by ${\mathcal{E}}_{\rm Fulton}(w)$. The desired bijection between bubbles in ${\mathcal{E}}_{\rm Fulton}(w)$ and ${\mathcal{E}}(w)$ sends the essential bubble with (row,column) indices $(r,s+1)$ to the bigrassmannian $v_{r,s,t,n}$ where $t$ is the number of bubbles weakly above the essential one in the same column. For example, let $w=425163$. The figure below shows the positions $(i,w_i)$ marked with an $\times$, and the bubbles in the diagram ${\mathcal D}(w)$ shown as $\bullet$ or $\circ$ depending upon whether or not they lie in ${\mathcal{E}}_{\rm Fulton}(w)$: $$ \begin{matrix} &1&2&3&4&5&6 \\ 1& & & &\times&\circ&\circ\\ 2& &\times&\bullet& &\bullet&\circ\\ 3& & & & &\times&\circ\\ 4&\times& &\bullet& & &\bullet\\ 5& & & & & &\times\\ 6& & &\times& & & \\ \end{matrix} $$ The following table then summarizes the bijection between the bubbles lying in Fulton's essential set ${\mathcal{E}}_{\rm Fulton}(w)$, the essential Schubert conditions defining $X_w$, and the bigrassmannians that comprise ${\mathcal{E}}(w)$. \begin{tabular}{|c|c|c|}\hline $(r,s+1) = $ (row,column) & Schubert condition $C_{r,s,t}$: & bigrassmannian $v=v_{r,s,t,n}$ \\ for bubble in ${\mathcal{E}}_{\rm Fulton}(w)$ & $\dim V_r \cap {\mathbb C}^s \geq t$ & in ${\mathcal{E}}(w)$ \\ \hline\hline & & \\ $(2,3)$ & $\dim V_2 \cap {\mathbb C}^2 \geq 1$ & 341256 \\ & & \\ \hline & & \\ $(2,5)$ & $\dim V_2 \cap {\mathbb C}^4 \geq 2$ & 152346 \\ & (i.e. $V_2 \subset {\mathbb C}^4$) & \\ \hline & & \\ $(4,3)$ & $\dim V_4 \cap {\mathbb C}^2 \geq 2$ & 134526 \\ & (i.e.
${\mathbb C}^2 \subset V_4$) & \\ \hline & & \\ $(4,6)$ & $\dim V_4 \cap {\mathbb C}^5 \geq 4$ & 123645 \\ & (i.e. $V_4 \subset {\mathbb C}^5$) & \\ \hline \end{tabular} \end{example} \subsection{Grassmannians and symmetric functions} \label{LS-subsection} In looking for generators for the ideals $J_{v_{r,s,t,n}}$, we wish to take advantage of symmetric function identities, so we briefly review here the relation between symmetric functions and the Schubert calculus in type $A$. We also point out how the calculations may be performed over ${\mathbb Z}$ rather than a coefficient field $\Bbbk$ of characteristic zero. Let $J:=S\setminus\{s_r\}$ where $s_r=(r \leftrightarrow r+1)$, so that $W_J=S_r \times S_{n-r}$, and $G/P_J$ is the {\it Grassmannian} of $r$-planes in ${\mathbb C}^n$. The cohomology inclusion \eqref{parabolic-inclusion} or \eqref{Hiller-parabolic-inclusion} remains valid when working with coefficients in ${\mathbb Z}$ and becomes $$ \begin{array}{lll} H^\star(G/P_J) &\cong H^\star(G/B)^{W_J} &\hookrightarrow H^\star(G/B)\\ {\mathbb Z}[\mathbf{x}]^{W_J}/{\mathbb Z}[\mathbf{x}]^W_+ {\mathbb Z}[\mathbf{x}]^{W_J} &\cong \left( {\mathbb Z}[\mathbf{x}]/({\mathbb Z}[\mathbf{x}]^W_+) \right)^{W_J} &\hookrightarrow {\mathbb Z}[\mathbf{x}]/({\mathbb Z}[\mathbf{x}]^W_+). \end{array} $$ Here ${\mathbb Z}[\mathbf{x}]:={\mathbb Z}[x_1,\ldots,x_n]$ is viewed as the symmetric algebra ${\mathbb Z}[V]$, where $V$ is no longer the irreducible reflection representation of dimension $n-1$ for $W=S_n$ but rather the natural permutation representation of dimension $n$.
In order to work over ${\mathbb Z}$, one can replace the retraction in \eqref{parabolic-retraction} with the {\bf Demazure operator} $$ {\mathbb Z}[\mathbf{x}] \overset{\pi_{w_0(J)}}{\longrightarrow} {\mathbb Z}[\mathbf{x}]^{W_J} $$ associated to the longest element $w_0(J)$ in $W_J$, where $$ \begin{aligned} \pi_{s_i}(f) &:=\frac{x_i f-x_{i+1} s_i(f)}{x_i-x_{i+1}} \end{aligned} $$ and $\pi_w :=\pi_{s_{i_1}} \cdots \pi_{s_{i_\ell}}$ if $w=s_{i_1} \cdots s_{i_\ell}$ is any reduced decomposition for $w$. In type $A_{n-1}$, one can replace the Hiller Schubert polynomial $S_w$ with {\bf Lascoux and} \linebreak {\bf Sch\"utzenberger's Schubert polynomial} $\mathfrak{S}_w$ (see for example \cite{Macdonald, Manivel}): one chooses the root linear functionals to be $x_i-x_j$ for $1 \leq i \neq j \leq n$ and replaces the previous choice of $S_{w_0}= \prod_{i<j}(x_i-x_j)$ with an element which is equivalent modulo the ideal $({\mathbb Z}[\mathbf{x}]^W_+)$, namely $$ \mathfrak{S}_{w_0} := \mathbf{x}^{\delta_n}:=x_1^{n-1} x_2^{n-2} \cdots x_{n-1}^1 x_n^0. $$ Defining $\mathfrak{S}_w := \partial_{w^{-1}w_0} \mathfrak{S}_{w_0}$, property (c) from Section~\ref{Hiller-section} tells us that the images of $S_w$ and $\mathfrak{S}_w$ within ${\mathbb Z}[\mathbf{x}]/({\mathbb Z}[\mathbf{x}]^W_+)$ are the same for all $w\in W$. 
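Both the divided differences $\partial_i$ and the resulting polynomials $\mathfrak{S}_w$ are easy to compute exactly, since $\partial_i$ acts monomial by monomial: for $a\neq b$ one checks that $\partial_i\bigl(x_i^a x_{i+1}^b\bigr) = \pm\sum_{k=\min(a,b)}^{\max(a,b)-1} x_i^k x_{i+1}^{a+b-1-k}$, with sign $+$ when $a>b$. A minimal Python sketch (our own encoding of polynomials as dictionaries from exponent tuples to integer coefficients, not tied to any library; it uses the standard recursion $\mathfrak{S}_w = \partial_i\,\mathfrak{S}_{ws_i}$ at an ascent $w_i < w_{i+1}$, which is equivalent to the definition above):

```python
def divided_difference(f, i):
    """partial_i applied to f = {exponent tuple: integer coefficient}, using
    (x^a y^b - x^b y^a)/(x - y) = sign * sum_{k=min}^{max-1} x^k y^{a+b-1-k}."""
    out = {}
    for exp, c in f.items():
        a, b = exp[i], exp[i + 1]
        if a == b:
            continue                      # symmetric part is killed
        sgn = c if a > b else -c
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i], e[i + 1] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sgn
    return {e: c for e, c in out.items() if c}

def schubert(w):
    """Schubert polynomial of w (one-line notation), from S_{w0} = x^delta
    and S_w = partial_i S_{w s_i} whenever w_i < w_{i+1}."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):                  # w = w0
        return {tuple(range(n - 1, -1, -1)): 1}     # x1^{n-1} x2^{n-2} ... xn^0
    for i in range(n - 1):
        if w[i] < w[i + 1]:                         # ascent: l(w s_i) = l(w) + 1
            v = w[:]
            v[i], v[i + 1] = v[i + 1], v[i]
            return divided_difference(schubert(v), i)
```

As a check, this recovers the familiar values in $S_3$: $\mathfrak{S}_{132}=x_1+x_2$, $\mathfrak{S}_{213}=x_1$, $\mathfrak{S}_{231}=x_1x_2$, $\mathfrak{S}_{312}=x_1^2$. The Demazure operator displayed above is then recovered as $\pi_{s_i}(f)=\partial_i(x_i f)$.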
These $\mathfrak{S}_w$: \begin{enumerate} \item[$\bullet$] lie in ${\mathbb Z}[\mathbf{x}]$ and have nonnegative integer coefficients, \item[$\bullet$] lift the cohomology classes $\sigma_w$ in the cohomology with {\it integer} coefficients \[H^\star(G/B,{\mathbb Z}) \cong {\mathbb Z}[\mathbf{x}]/({\mathbb Z}[\mathbf{x}]^W_+),\] and \item[$\bullet$] give us Schur functions in finite variable sets whenever $w$ is grassmannian: if one has $\mathrm{Des}(w) \subseteq \{(r,r+1)\}$ (in which case we say $w$ is $r$-{\bf grassmannian}), so that $$ w_1 < w_2 < \cdots < w_r \text{ and } w_{r+1} < w_{r+2} < \cdots < w_n, $$ then $$ \mathfrak{S}_w = s_\lambda(x_1,\ldots,x_r), $$ where $\lambda$ is the partition $\lambda=(w_r-r,\ldots,w_2-2,w_1-1)$. Note that $\lambda$ has at most $r$ parts, all of size at most $n-r$, so its Young diagram fits inside an $r \times (n-r)$ rectangle. \end{enumerate} In order both to suppress the variable set $x_1,\ldots,x_r$ from the notation and to make more convenient use of symmetric function identities, we will work within a quotient of the {\it ring of symmetric functions} with integral coefficients $$ \Lambda=\Lambda_{\mathbb Z}(x_1,x_2,\ldots); $$ see \cite[Chapter 1]{Macdonald-symm-fns}, \cite[Chapter 7]{Stanley-EC2}. The ${\mathbb Z}$-basis for $\Lambda$ given by the Schur functions $s_\lambda$ has the property that the ${\mathbb Z}$-submodule $I_{r,n-r}$ spanned by all $s_\lambda$ with $\lambda \not\subseteq (n-r)^r$ forms an ideal, and the map sending $s_\lambda$ to $s_\lambda(x_1,\ldots,x_r)$ induces an isomorphism $$ \Lambda/I_{r,n-r} \quad \overset{\sim}{\longrightarrow} \quad {\mathbb Z}[\mathbf{x}]^{W_J}/{\mathbb Z}[\mathbf{x}]^W_+ {\mathbb Z}[\mathbf{x}]^{W_J} \quad \left( \cong H^\star(G/P_J,{\mathbb Z}) \right). $$ Thus $H^\star(G/P_J,{\mathbb Z})$ has ${\mathbb Z}$-basis given by $$ \{ \sigma_w: w \in W^J \} = \{s_\lambda: \lambda \subseteq (n-r)^r \}.
$$ \subsection{A shorter presentation in type $A$} \label{type-A-generators-subsection} Starting with Theorem~\ref{general-J-generators}, our goal is to find an even smaller set of generators for the ideal $J_{v_{r,s,t,n}}$ within the coinvariant algebra, so that through (\ref{eqn:I_wconcat}) we obtain an even shorter presentation of $I_w$ in type $A$. First note that even though our proof of Theorem~\ref{general-J-generators} for all finite Coxeter groups was done in $\Bbbk[V]/(\Bbbk[V]^W_+)$ where the field $\Bbbk$ has characteristic zero, the same proof works in type $A$ more generally for the {\it integral} coinvariant algebra $H^\star(G/B,{\mathbb Z})$. This follows since the Schubert polynomials $\{\mathfrak{S}_w\}$ satisfy the integer coefficient versions of all of the requisite properties (a)-(g) used in Section~\ref{Hiller-section}. The bigrassmannian $v_{r,s,t,n}$ described explicitly in \eqref{v-explicitly} is $r$-grassmannian, and corresponds to the $j \times i$ rectangular partition $i^j$, where we define $$ \begin{aligned} i&:=s-t+1,\\ j&:=r-t+1. \end{aligned} $$ When $u$ and $v$ are $r$-grassmannian and correspond respectively to partitions $\lambda$ and $\mu$, the Bruhat order relation $u \geq v$ is equivalent to inclusion $\lambda \supseteq \mu$ of their Young diagrams, meaning that $\lambda_m \geq \mu_m$ for all $m$. Thus Theorem~\ref{general-J-generators} says that $J_{v_{r,s,t,n}}$ is generated as an ideal of $\Lambda/I_{r,n-r}$ by \begin{equation} \label{step0-generators} \{\mathfrak{S}_u: \mathrm{Des}(u) = \{(r,r+1)\}, \,\, u \geq v_{r,s,t,n}\} =\{ s_{\mu}: i^j \subseteq \mu \subseteq (n-r)^r \}. \end{equation} This presentation from Theorem~\ref{general-J-generators} can be improved in type $A_{n-1}$ as follows: \begin{theorem} \label{J-generators} Given a bigrassmannian $v=v_{r,s,t,n}$ in type $A_{n-1}$ with $\mathfrak{S}_v = s_{i^j}$, let $$ \begin{aligned} a&:=\min(n-r-i,r-j)\\ b&:=\min(i,j).
\end{aligned} $$ Then $J_v$ is generated as an ideal of $H^{\star}(G/P_J, {\mathbb Z})\cong \Lambda/I_{r,n-r}$ by \begin{equation} \label{generating-set-one} \{ s_{\mu}: i^j \subseteq \mu \subseteq ((i+a)^b,i^{j-b}) \}. \end{equation} Alternatively, $J_v$ is generated by \begin{equation} \label{generating-set-two} \{ s_{\mu}: i^j \subseteq \mu \subseteq (i^j,b^a) \}. \end{equation} \end{theorem} We delay our proof of Theorem~\ref{J-generators} until Section~\ref{minconjsection}. \noindent Note that in both of the asserted generating sets \eqref{generating-set-one} and \eqref{generating-set-two} for $J_v$, the shapes $\mu$ indexing the generators $s_\mu$ run through an interval between the $j \times i$ rectangular shape $i^j$ and the disjoint union of $i^j$ with a smaller rectangle of shape $a \times b$ or $b \times a$. In \eqref{generating-set-one} the smaller rectangle is to the right of the rectangle $i^j$, with both top-justified, while in \eqref{generating-set-two} the smaller rectangle is below the rectangle $i^j$, with both left-justified. Thus in both cases, the generating sets have size $\binom{a+b}{a}$ and consist of generators whose multiset of degrees has generating function $$ \sum_\mu q^{|\mu|} = q^{ij} \qbin{a+b}{a}, $$ where $\qbin{a+b}{a}$ is a $q$-{\it binomial coefficient} or {\it Gaussian polynomial} \cite[I.2 Exer. 3]{Macdonald}. \begin{example} \label{GR-comparison} We examine the special case where the bigrassmannian $v:=v_{r,s,t,n}$ has $t$ equal to $r$ or $s$, so that the Schubert condition $C_{r,s,t}$ in \ref{Schubert-condition} becomes an {\it inclusion} $V_r \subseteq {\mathbb C}^s$ or $V_r \supseteq {\mathbb C}^s$. Schubert varieties $X_w$ in type $A$ for which all Schubert conditions on $X_w$ take one of these two forms were called Schubert varieties {\it defined by inclusions} in \cite{GasharovReiner}.
That paper gave a presentation for the cohomology containing \begin{enumerate} \item[$\bullet$] for each inclusion condition $V_r \supseteq {\mathbb C}^s$, a set of $s$ generators for $J_v$ of the form \begin{equation} \label{first-kind-of-inclusion} \{ e_m(x_1,\ldots,x_r) \}_{m=r-s+1,r-s+2,\ldots,r}, \end{equation} \item[$\bullet$] and for each inclusion condition $V_r \subseteq {\mathbb C}^s$, a set of $n-s$ generators for $J_v$ of the form \begin{equation} \{ e_m(x_{r+1},\ldots,x_n) \}_{m=s-r+1,s-r+2,\ldots,n-r}. \end{equation} \end{enumerate} We compare this with the presentation for $J_v$ in Theorem~\ref{J-generators}, say for the inclusion conditions of the form $V_r \supseteq {\mathbb C}^s$, and using the generators given in \eqref{generating-set-two}. Since $t=s$, one has $$ \begin{aligned} i&=s-t+1=1\\ j&=r-s+1\\ b&=\min(i,j)=i=1\\ a&=\min(n-r-1,s-1) \end{aligned} $$ Hence \eqref{generating-set-two} says that $J_v$ is generated by the set of $a+1$ Schur functions $$ \{ s_\mu: 1^{r-s+1} \subseteq \mu \subseteq 1^{a+r-s+1} \} =\{ e_m(x_1,\ldots,x_r) \}_{m=r-s+1,r-s+2,\ldots,a+r-s+1} $$ which is exactly the first $a+1=\min(n-r,s)$ out of the $s$ generators listed in \eqref{first-kind-of-inclusion}. Hence Theorem~\ref{J-generators} provides a dramatic reduction in the size of the generating set for $J_v$ whenever $n-r$ is small compared to $s$. We remark also that the techniques utilized in \cite{GasharovReiner} seem very particular to the case where $X_w$ is defined by inclusions. We do not know how to use them for some alternate approach to the case of general $X_w$ considered in this paper. \end{example} \subsection{A minimality conjecture} \label{minconj} As we shall see in a moment, our generators for $I_w$ are not minimal. However, we believe the following holds: \begin{conjecture} \label{minimality-conjecture} The two generating sets for the ideal $J_{v_{r,s,t,n}}$ given in Theorem~\ref{J-generators} are both minimal.
\end{conjecture} Via computer, we have verified this conjecture for all bigrassmannian permutations where $r\leq 4$ and $n-r\leq 5$. In fact, Conjecture~\ref{minimality-conjecture} indicates obstructions to short presentations of $H^{\star}(X_w)$ in general. We now give a family of ideals that would require a large number of generators if the conjecture is true. For a positive integer $m$, let $n=4m$, and consider in $W=S_n=S_{4m}$ the bigrassmannian $v_{r,s,t,n}$ that corresponds to $r=n-r=2m$ and $i=j=m$. Then $a=b=m$, and $J_{v_{r,s,t,n}}(=I_{w_{r,s,t,n}})$ requires \begin{equation} \label{eqn:badlowerbnd} \binom{2m}{m} \sim \frac{4^m}{\sqrt{\pi m}} = \frac{\sqrt{2}^{n+2}}{\sqrt{\pi n}} \end{equation} generators according to Conjecture~\ref{minimality-conjecture}. The size of any minimal generating set of a homogeneous ideal is well-defined. This is implied by the following well-known fact: \begin{proposition} \label{prop:commalgfact} Let $R$ be a commutative ring, and $\Lambda=\oplus_{n \geq 0} \Lambda_n$ a graded, connected $R$-algebra, meaning that $\Lambda_0 =R$ and $\Lambda_i \Lambda_j \subset \Lambda_{i+j}$. Let $M$ be a graded $\Lambda$-module, with degrees bounded below, meaning that $M=\oplus_{n \geq N} M_n$ for some $N \in {\mathbb Z}$, and $\Lambda_i M_j \subset M_{i+j}$. Then a set of homogeneous elements $\{m_i\}_{i=1}^t$ generates $M$ as a $\Lambda$-module if and only if their images $\{\bar{m_i}\}_{i=1}^t$ span $M/\Lambda_+ M$ as an $R$-module. In particular, $\{m_i\}_{i=1}^t$ form a minimal $\Lambda$-generating set with respect to inclusion for $M$ if and only if $\{\bar{m_i}\}_{i=1}^t$ form a minimal $R$-spanning set for $M/\Lambda_+ M$. \end{proposition} In our setting, the well-definedness follows by setting $\Lambda=\oplus_{n\geq 0}\Lambda_{n}$ to be the graded ring of symmetric functions with ${\mathbb Z}$ coefficients and setting $M=J_{v_{r,s,t,n}}$, so that the ${\mathbb Z}$-module $M/\Lambda_{+}M$ is a finitely generated abelian group.
Thus we conjecture that this abelian group $M/ \Lambda_+ M$ requires $\binom{a+b}{a}$ generators, and in fact, we suspect that $M/\Lambda_{+}M\cong {\mathbb Z}^{\binom{a+b}{a}}$. So far a proof has eluded us. \begin{example} \label{non-minimal-example} Since $I_w=\sum_{v \in {\mathcal{E}}(w)} J_v$, and since we have conjectured that the generating sets provided by Theorem~\ref{J-generators} for $J_v$ are minimal, one might wonder whether their concatenation gives a minimal generating set of $I_w$. As mentioned above, this turns out to be false in general. The smallest counterexample is given by $w=1243$, which has $$ {\mathcal{E}}(w) = \{v_1 = 2134, v_2=1324\}. $$ The generating sets given in Theorem~\ref{J-generators} for the ideals $J_{v_1}$ and $J_{v_2}$ are $$ \begin{aligned} J_{v_1} &= \langle s_{(1)}(x_1) \rangle = \langle x_1 \rangle \\ & \\ J_{v_2} &= \langle s_{(1)}(x_1,x_2), \quad s_{(2)}(x_1,x_2) \rangle \quad \text{ or } \quad \langle s_{(1)}(x_1,x_2), \quad s_{(1,1)}(x_1,x_2) \rangle \\ &= \langle x_1+x_2, \quad x_1^2+x_1 x_2+x_2^2 \rangle \text{ or }\langle x_1+x_2, \quad x_1 x_2 \rangle, \end{aligned} $$ and in each case they minimally generate their ideals $J_{v_i}$. However, concatenating them gives non-minimal generating sets for $I_w$, namely $$ \begin{aligned} I_w = J_{v_1}+J_{v_2} &= \langle x_1, \quad x_1+x_2,\quad x_1^2+x_1 x_2+x_2^2 \rangle \quad \text{ or } \quad \langle x_1, \quad x_1+x_2, \quad x_1 x_2 \rangle \\ & \left( = \langle x_1, \quad x_1+x_2 \rangle \right) . \end{aligned} $$ \end{example} \begin{example} Some readers may find this smallest counterexample artificial: though $X_w=X_{1243}$ lives inside $GL_4/B$, it is isomorphic to $X_{21}=GL_2/B \cong {\mathbb P}^1$. However, one can easily produce further counterexamples with similar properties but without this artificial feature. For example, take $w=23541$, which has ${\mathcal{E}}(w) = \{v_1 = 31245, v_2=14235\}$.
Then $J_{v_1}$ and $J_{v_2}$ require one and two generators respectively, but the sum $I_w=J_{v_1}+J_{v_2}$ requires only two generators, not three. \end{example} \subsection{Some symmetric function identities} The proof of Theorem~\ref{J-generators} on generators for $J_{v_{r,s,t,n}}$ will use some symmetric function identities which we describe and prove in this section. We will make use of standard terminology, such as in \cite{Macdonald-symm-fns, Sagan, Stanley-EC2}. In particular, we will use the {\bf Pieri rule} expanding the product of an elementary symmetric function $e_k:=s_{1^k}$ with an arbitrary Schur function into Schur functions: \begin{equation} \label{Pieri-rule} e_k s_\lambda = \sum_\mu s_\mu, \end{equation} where the sum runs over all partitions $\mu$ obtained from $\lambda$ by adding a vertical strip of length $k$. The following easy consequence will be used in the proof of Theorem~\ref{J-generators} below. \begin{lemma} \label{Woo-lemma} For any partition $\nu$ and nonnegative integer $k$, one has \begin{equation} \label{eqn:Woo-lemma} s_{(\nu,1^k)} = \sum_{\ell=0}^k (-1)^{\ell} e_{k-\ell} \sum_\lambda s_\lambda, \end{equation} where the inner sum runs over partitions $\lambda$ obtained from $\nu$ by adding a horizontal strip of length~$\ell$. \end{lemma} \begin{proof} Using the Pieri rule \eqref{Pieri-rule} to expand the right side of (\ref{eqn:Woo-lemma}), one obtains $$ \sum_{\ell=0}^k (-1)^{\ell} e_{k-\ell} \sum_\lambda s_\lambda= \sum_{(\ell,\lambda)}(-1)^\ell s_\lambda, $$ where the sum runs over pairs $(\ell,\lambda)$ in which both $0 \leq \ell \leq k$, and $\lambda$ is obtained from $\nu$ by first adding a horizontal $\ell$-strip within the first $\ell(\nu)$ rows, then adding an arbitrary vertical $(k-\ell)$-strip.
Cancel all these pairs, except for the one with $\ell=0$ and $\lambda=(\nu,1^k)$, via the following sign-reversing involution: if $x$ (respectively $y$) is the farthest east (respectively, farthest north) box in the horizontal (respectively vertical) strip, then \begin{enumerate} \item[$\bullet$] when $y$ is to the right of $x$ (or when $\ell=0$ and $\lambda \neq (\nu,1^k)$), move $y$ from the vertical to the horizontal strip, and, \item[$\bullet$] when $y$ is below $x$, move $x$ from the horizontal strip to the vertical strip. \end{enumerate} \end{proof} We also need the {\bf Jacobi-Trudi identity}: \begin{equation} \label{Jacobi-Trudi} s_\lambda = \det( h_{\lambda_i - i + j} )_{i,j=1,2,\ldots,\ell(\lambda)} \end{equation} with the usual convention that $h_r := s_{(r)}$ for $r \geq 0$ and $h_r = 0$ for $r < 0$. This has the following consequence, also to be used in the proof of Theorem~\ref{J-generators} below. \begin{lemma} \label{Woo-2nd-lemma} Let $i < k$, and assume $\mu$ is a partition with $ \mu_k > i \geq \mu_{k+1}, $ so that the $(i+1)^{st}$ column of the Young diagram for $\mu$ has length $k > i$. Then $$ s_\mu = \sum_{m=1}^k (-1)^{k-m} \,\, h_{\mu_m+k-i-m} \,\, s_{\mu^{(m)}}, $$ where for $m=1,2,\ldots,k$ one defines $$ \mu^{(m)}:=(\mu_1,\mu_2,\ldots,\mu_{m-1},\widehat{\mu_m}, \mu_{m+1}-1,\mu_{m+2}-1, \ldots,\mu_k-1,i,\mu_{k+1},\mu_{k+2},\ldots,\mu_{\ell}), $$ where $\ell:=\ell(\mu)$ and $\widehat{\mu_m}$ refers to the deletion of the entry $\mu_m$. \end{lemma} \begin{proof} Start with the $\ell \times \ell$ Jacobi-Trudi matrix for $\mu$. From this create an $(\ell+1) \times \ell$ matrix by inserting a new row between its row $k$ and row $k+1$, having entries $$ (h_{i-k+1},h_{i-k+2},\ldots,h_{i-k+\ell}). $$ Then from this $(\ell+1) \times \ell$ matrix, create a singular $(\ell+1) \times (\ell+1)$ matrix by introducing an $(\ell+1)^{st}$ column that duplicates the $(k-i)^{th}$ column. 
This last duplicated column is $$ \begin{array}{lccrll} \ \ \ \ (h_{\mu_1+k-i-1},\ldots,h_{\mu_k-i}, &h_{i-k + (k-i)},&h_{\mu_{k+1}-i-1},&\ldots, &h_{\mu_{\ell}+k-i-\ell}&)^T \\ = (h_{\mu_1+k-i-1},\ldots,h_{\mu_k-i}, &1, &0, &\ldots,&0&)^T. \end{array} $$ Here we have used the facts that $h_0=1$ and that $h_{\mu_m+k-i-m}=0$ for $m \geq k+1$ because $\mu_m \leq \mu_{k+1} \leq i$ implies $\mu_m+k-i-m=(\mu_m-i)+(k-m)<0$. One then checks that cofactor expanding the (zero) determinant of this $(\ell+1) \times (\ell+1)$ matrix along this duplicated column gives the asserted identity. \end{proof} \subsection{Proof of Theorem~\ref{J-generators}} \label{minconjsection} The proof of the second statement will follow from the first, via the well-known ring involution $\omega$ on symmetric functions defined by $$ \begin{aligned} \Lambda &\overset{\omega}{\rightarrow} \Lambda \\ s_\lambda &\longmapsto s_{\lambda'} \end{aligned} $$ where $\lambda'$ is the conjugate partition to $\lambda$. This means that $\omega$ sends the ideal $I_{r,n-r}$ to the ideal $I_{n-r,r}$. Hence the set \eqref{generating-set-two} generates $J_v$ within $\Lambda/I_{r,n-r}$, where $v$ corresponds to a $j \times i$ rectangle, if and only if the set \eqref{generating-set-one} generates the ideal $J_{v^\prime}$ within $\Lambda/I_{n-r,r}$, where $v^\prime$ corresponds to an $i\times j$ rectangle. The proof for \eqref{generating-set-one} is by induction on the degree $d$, which is the number of boxes in our partition. Our inductive hypothesis is that the portion of $J_v$ of degree at most $d$ is generated by those elements of \eqref{generating-set-one} of degree at most $d$, or equivalently, that all elements of \eqref{step0-generators} of degree at most $d$ can be written in terms of elements of \eqref{generating-set-one} of degree at most $d$. The base case, $d=ij$, is clear, since $s_{i^j}$ is the only element of degree $ij$ in both sets. Our proof for the inductive case proceeds in three steps.
Start with the generating set for $J_v$ given in \eqref{step0-generators}. We wish to show that, modulo $I_{r,n-r}$, all such $s_\mu$ with $|\mu|=d$ lie in the ideal generated by those $s_\mu$ with $|\mu|<d$ and those \begin{enumerate} \item[{\sf Step 1.}] with $\mu$ in the interval $[i^j,(n-r)^j]$, and then furthermore \item[{\sf Step 2.}] with $\mu$ in the interval $[i^j,(i+a)^j]$, and then finally \item[{\sf Step 3.}] with $\mu$ in the interval $[i^j,((i+a)^b,i^{j-b})]$. \end{enumerate} \vskip.1in \noindent {\sf Step 1.} We will use induction on a certain partial order on partitions which depends on the index $j$. For a partition $\lambda$, define $$ \hat{\lambda}:=(\lambda_{j+1},\lambda_{j+2},\ldots), $$ so that the Young diagram of $\hat{\lambda}$ consists of rows $j+1,j+2,\ldots$ from the Young diagram of $\lambda$. Then partially order the partitions containing $i^j$ by decreeing $\lambda \prec_j \mu$ if either $|\hat{\lambda}| <|\hat{\mu}|$, or if $|\hat{\lambda}| = |\hat{\mu}|$ but $\hat{\lambda}<\hat{\mu}$ in the {\it dominance order}, meaning that $$ \lambda_{j+1} + \lambda_{j+2} + \cdots + \lambda_k \leq \mu_{j+1} + \mu_{j+2} + \cdots + \mu_k $$ for each $k \geq j$. Now if $\mu$ does not already lie in the interval $[i^j,(n-r)^j]$, say $\ell(\mu) = k+j > j$, let $\nu$ be the partition obtained from $\mu$ by removing $1$ from its last $k$ nonempty parts $\mu_{j+1},\mu_{j+2},\ldots,\mu_{j+k}$. Then by the Pieri rule \eqref{Pieri-rule}, $e_ks_\nu=s_\mu+\sum_{\lambda} s_\lambda$, where $\lambda$ runs through partitions other than $\mu$ obtained from $\nu$ by adding a vertical strip of size $k$. One can check that any such $\lambda$ satisfies $\lambda \prec_j \mu$: either the vertical strip contains some boxes in the first $j$ rows, so that $|\hat{\lambda}| <|\hat{\mu}|$, or if not, the location of the vertical strip forces $\hat{\lambda}<\hat{\mu}$ in dominance. Also, $k\geq1$, so $|\nu|<|\mu|$.
Consequently, by induction on the order $\prec_j$, one has an expression for $s_\mu$ showing that it is in the ideal generated by $s_\lambda$ where either $|\lambda|<|\mu|$ or $\lambda$ is in the interval $[i^j,(n-r)^j]$. \vskip.1in \noindent {\sf Step 2.} We will again use induction, this time on reverse dominance order. We wish to write $s_\mu$ where $\mu$ is in the interval $[i^j,(n-r)^j]$ in terms of $s_\lambda$ where $|\lambda|<|\mu|$ or $\lambda$ lies in the interval $[i^j,(i+a)^j]$. Recall that $a=\min(n-r-i,r-j)$, and if $a=n-r-i$, then $n-r=i+a$ so there is nothing to do after Step 1. Thus we may assume $a=r-j$. If $\mu$ does not already lie in the interval $[i^j,(i+a)^j]$, so that $\mu_1 > i+a$, let $k:=\mu_1-i > a$, and let $\nu$ be the partition obtained from $\mu$ by removing $1$ box from each of the last $k$ nonempty columns in the Young diagram of $\mu$. Note that $\ell(\nu) \geq \ell(\mu)$ since $k<\mu_1$, and $\ell(\mu) \geq j$ since $i^j \subset \mu$. Hence $(\nu,1^k)$ has length at least $j+k > j+a = r$, so that the partition $(\nu,1^k) \not\subseteq (n-r)^r$, and hence Lemma~\ref{Woo-lemma} tells us that $$ \sum_{\ell=0}^k (-1)^\ell e_{k-\ell} \sum_\lambda s_\lambda \equiv 0 \mod I_{r,n-r} $$ where in the sum $\lambda$ runs through partitions having no more than $j$ parts obtained from $\nu$ by adding a horizontal strip of length $\ell$. We claim that almost all of the terms in this sum, excepting the single term with $\ell=k$ and $\lambda=\mu$, will have $|\lambda| < |\mu|$ or $\lambda>\mu$. If $\ell < k$, then $|\lambda|<|\mu|$. If $\ell = k$, note the horizontal strip $\lambda/\nu$ cannot have any boxes in the first $i$ columns as those already have length $j$ in $\nu$. Therefore, the location of the horizontal strip forces $\lambda > \mu$ in dominance. Consequently, by induction, one has an expression for $s_\mu$ showing that it is in the ideal generated by $s_\lambda$ where $|\lambda|<|\mu|$ or $\lambda$ is in the interval $[i^j,(i+a)^j]$.
\vskip.1in \noindent {\sf Step 3.} Now we show that, if $\mu$ fits inside $(i+a)^j$ but not inside $((i+a)^b,i^{j-b})$, then $s_\mu$ can always be written as a sum of terms of the form $h_r s_\lambda$ for $r>0$ and $\lambda$ containing $i^j$. Since $r>0$ forces $|\lambda|<|\mu|$, this suffices to finish the proof. Recall that $b=\min(i,j)$, and if $b=j$ then there is nothing to do after Step 2. Thus we may assume $b=i < j$. Let $k$ be the number of parts of $\mu$ which are strictly larger than $i$, so that $k$ is the size of the $(i+1)^{st}$ column in the Young diagram of $\mu$. Since $\mu$ does not fit inside $((i+a)^i,i^{j-i})$, it must be that $k > i$, and we are in the situation of Lemma~\ref{Woo-2nd-lemma}. Hence $$ s_\mu = \sum_{m=1}^k (-1)^{k-m} h_{\mu_m+k-i-m}s_{\mu^{(m)}} $$ where $$ \mu^{(m)}:=(\mu_1,\mu_2,\ldots,\mu_{m-1},\mu_{m+1}-1,\mu_{m+2}-1, \ldots,\mu_k-1,i,\mu_{k+1},\mu_{k+2},\ldots,\mu_{\ell}) $$ and $\ell=\ell(\mu)$. Note that since $\mu$ contains $i^j$, one has $k \leq j$, and hence each $\mu^{(m)}$ also contains $i^j$. Also note that each factor $h_{\mu_m+k-i-m}$ has positive degree: $m \leq k$ implies $\mu_m \geq \mu_k > i$, and hence $\mu_m+k-i-m = (\mu_m-i)+(k-m)> 0$. \qed \section{A question} \label{questions-section} \begin{question} Can one find a {\it minimal} generating set for the ideal $I_w$ in type $A_{n-1}$?
Can this at least be done for some of the recently-studied subclasses \cite{GasharovReiner, HultmanLinussonShareshianSjostrand, OhPostnikovYoo} where $I_w$ can be generated by $n^2$ elements, such as \begin{enumerate} \item[$\bullet$] when $X_w$ is {\it defined by inclusions}, which occurs when $w$ avoids the patterns $$\{4231, 35142, 42513, 351624\},$$ \item[$\bullet$] or more restrictively, when $X_w$ is {\it smooth}, which occurs when $w$ avoids the patterns $$\{3412,4231\}?$$ \end{enumerate} \end{question} \noindent It was mentioned in the introduction that for a special subclass of smooth Schubert varieties $X_w$ originally considered by Ding \cite{Ding1, Ding2}, there is a known minimal (in fact, complete intersection) presentation for $H^*(X_w,{\mathbb Z})$ with $n$ relations that was exploited in \cite{DevelinMartinReiner}. Short presentations would be useful to extend that work further. \section*{Acknowledgements} The authors thank Nathan Reading for helpful comments and corrections. We also thank the anonymous referee for pointing out the reference \cite{ALP} to us, and for other helpful remarks. VR is supported by NSF grant DMS-0601010. AY is supported by NSF grants DMS-0601010 and DMS-0901331. AW is supported by NSF VIGRE grant DMS-0135345. This work was partially completed while AY was a visitor at the Fields Institute in Toronto, and was facilitated by a printer graciously provided by Lawrence Gray through the University of Minnesota.
https://arxiv.org/abs/2009.04451
Dimension of finite free complexes over commutative Noetherian rings
Foxby defined the (Krull) dimension of a complex of modules over a commutative Noetherian ring in terms of the dimension of its homology modules. In this note it is proved that the dimension of a bounded complex of free modules of finite rank can be computed directly from the matrices representing the differentials of the complex.
\section*{Introduction} \noindent This short note concerns certain homological invariants---specifically, dimension and depth---of complexes of modules over commutative Noetherian local rings. The concepts of dimension and depth for modules, introduced by Krull and by Auslander and Buchsbaum, respectively, need no recollection. Both concepts were extended to complexes of modules by Foxby \cite{HBF79}, and also by Iversen \cite{BIv77}. Their extensions agree up to a normalization; in what follows we work with Foxby's definitions, recalled further below, for they are better suited to computations in the derived category. The depth and dimension of a complex depend only on the quasi-isomorphism class of the complex; said differently, they are defined on the derived category of the ring. To compute these invariants one can usually reduce to the case where the complex is finite free, for they are independent of the domain. Indeed, if $Q\to R$ is a surjective map of rings with $Q$ a regular local ring, then the depth and dimension of an $R$-complex $M$ coincide with the corresponding invariants of $M$ viewed as a complex over $Q$. And, at least when $M$ is homologically finite, it is quasi-isomorphic, over $Q$, to a finite free complex. Thus, in what follows we consider a complex over a local ring $R$ of the form: \begin{equation*} F \:\colonequals\: 0 \longrightarrow F_b\xra{\partial} \cdots \xra{\partial} F_a\longrightarrow 0 \end{equation*} where each $F_i$ is a free $R$-module of finite rank. We assume that $F$ is minimal, in that $\partial(F)\subseteq \mathfrak{m} F$, where $\mathfrak{m}$ is the maximal ideal of $R$.
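To fix a concrete example of such an $F$ (our illustration, not taken from the text): over $R=k[x,y]_{(x,y)}$, the Koszul complex on $x,y$, namely $0\to R\xra{\partial_2} R^2\xra{\partial_1} R\to 0$ with $\partial_2=(-y,x)^T$ and $\partial_1=(x\ \ y)$, is a minimal finite free complex. A quick pure-Python check that the differentials compose to zero and that all entries lie in $\mathfrak{m}=(x,y)$:

```python
def pmul(f, g):
    """Multiply polynomials in k[x, y], stored as {(deg_x, deg_y): coeff}."""
    out = {}
    for (a1, b1), c1 in f.items():
        for (a2, b2), c2 in g.items():
            key = (a1 + a2, b1 + b2)
            out[key] = out.get(key, 0) + c1 * c2
    return {k: c for k, c in out.items() if c}

def padd(f, g):
    out = dict(f)
    for k, c in g.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def _dot(row, col):
    acc = {}
    for f, g in zip(row, col):
        acc = padd(acc, pmul(f, g))
    return acc

def matmul(A, B):
    """Multiply matrices with polynomial entries."""
    return [[_dot(row, col) for col in zip(*B)] for row in A]

x, y, neg_y = {(1, 0): 1}, {(0, 1): 1}, {(0, 1): -1}
d1 = [[x, y]]           # F_1 = R^2 -> F_0 = R
d2 = [[neg_y], [x]]     # F_2 = R   -> F_1 = R^2

assert matmul(d1, d2) == [[{}]]                       # d1 d2 = 0: F is a complex
assert all(e and min(sum(k) for k in e) >= 1          # entries in m = (x, y),
           for row in d1 + d2 for e in row)           # so F is minimal
```

The same bookkeeping applies to any finite free complex presented by matrices, which is the setting of the theorem proved below.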
For such a complex $F$, the depth can be read off easily: The equality of Auslander and Buchsbaum for modules of finite projective dimension applies equally to complexes---this was proved by Foxby \cite{HBF80}---and yields that the depth of $F$ equals $\operatorname{depth} R - b$, provided that $F_b \ne 0$. In this note we establish a formula that expresses the dimension of $F$ in terms of the ranks of the modules $F_i$ and the Fitting ideals of the differentials; see Theorem~\ref{mainthm}. We were led to it in an attempt to relate the codimension, in the sense of Bruns and Herzog~\cite{bruher}, to other homological invariants. It turns out that the codimension of $F$ equals $\dim R - \dim_R \operatorname{Hom}_R(F,R)$; see Remark~\ref{bhcodim}. This observation gives a different perspective on, and different proofs of, certain results in \cite{bruher} related to the homological conjectures; see Proposition~\ref{sup} and Theorem~\ref{Fstar}. \section*{Dimension} \noindent Let $R$ be a ring. By an $R$-complex we mean a complex of $R$-modules, with lower grading: \begin{equation*} X \: \colonequals \: \cdots \longrightarrow X_n \xra{\partial_{n}} X_{n-1}\longrightarrow \cdots \end{equation*} A graded $R$-module, such as the homology $\operatorname{H}(X)$ of $X$, is viewed as an $R$-complex with zero differentials; in particular, we use the same grading convention for such objects. \subsection*{Dimension} Let $R$ be a commutative Noetherian ring. In \cite[Section 3]{HBF79} Foxby introduced the \emph{dimension} of an $R$-complex $X$ to be \begin{equation} \label{dim0} \dim_R X \colonequals \sup\{\dim(R/\fp) - \inf\operatorname{H}(X)_\fp \mid \fp\in\operatorname{Spec} R \}\:. \end{equation} By \cite[Proposition 3.5]{HBF79} this invariant can be computed in terms of the homology: \begin{equation} \label{dim1} \dim_R X = \sup\{\dim_R \operatorname{H}_n(X) - n \mid n\in \mathbb Z\}\:. \end{equation} The convention is that the dimension of the zero module is $-\infty$.
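Formula \eqref{dim1} makes the dimension of a homologically finite complex a finite computation once the Krull dimensions of its homology modules are known. A tiny Python sketch of this bookkeeping (ours; the input records only the nonzero homology modules, and the empty input returns $-\infty$ per the convention above):

```python
def foxby_dim(homology_dims):
    """dim_R X = sup over n of (dim_R H_n(X) - n), with dim of the
    zero complex equal to -infinity.
    homology_dims: {n: Krull dimension of H_n(X)}, nonzero H_n only."""
    if not homology_dims:
        return float("-inf")
    return max(d - n for n, d in homology_dims.items())
```

For instance, the Koszul complex on a system of parameters of a local ring has $\operatorname{H}_0=k$ of dimension $0$ as its only nonzero homology, so the sketch returns $0-0=0$; for a module $M$ concentrated in degree $0$ it returns $\dim_R M$, as it should.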
\subsection*{Finite free complexes} By a \emph{finite free} $R$-complex we mean a bounded $R$-complex \begin{equation} \label{ffc} F \:\colonequals\: 0\longrightarrow F_b\xra{\partial_b} F_{b-1} \longrightarrow \cdots \longrightarrow F_{a+1} \xra{\partial_{a+1}}F_a\longrightarrow 0 \end{equation} where each $F_i$ is a free $R$-module of finite rank. For such a complex $F$ we set \begin{equation} \label{sn} s_n = \sum_{i \le n}(-1)^{n-i} \operatorname{rank}_R F_i \quad\text{for each $n\in\mathbb Z$}\:. \end{equation} Given a map $\varphi$ between finite free modules we write $I_s(\varphi)$ for the ideal generated by the $s\times s$ minors of a matrix representing $\varphi$; see, for example, \cite[p.~21]{bruher}. It is convenient to adopt the convention that the determinant and all minors of the empty matrix are $1$; in particular, for $s \le 0$ an $s \times s$ minor of any matrix is $1$. For $s \ge 1$ an $s\times s$ minor of a non-empty matrix is $0$ if $s$ exceeds the number of rows or columns. For the differentials of the complex \eqref{ffc} this means that $I_{s_{n}}(\partial_{n+1}) = R$ holds for integers $n$ outside $[a,b]$. \begin{theorem} \label{mainthm} With $F$ and $s_n$ as above, there is an equality \begin{equation*} \dim_R F = \sup\{\dim (R/I_{s_{n}}(\partial_{n+1})) - n \mid n\in\mathbb Z \}\:. \end{equation*} \end{theorem} \begin{proof} To prove the inequality ``$\ge$'' we verify that \begin{equation*} \dim (R/I_{s_{n}}(\partial_{n+1})) \le \dim_R F + n \end{equation*} holds for every integer $n$ in $[a,b]$. Fix such an $n$. The inequality above holds if and only if one has $I_{s_{n}}(\partial_{n+1})_\fp=R_\fp$ for every $\fp\in\operatorname{Spec} R$ with $\dim(R/\fp) > \dim_R F + n$. For such a prime ideal and any integer $i\le n$ one gets \begin{equation*} \dim_R\operatorname{H}_i(F)\le \dim_R F + i <\dim(R/\fp) \end{equation*} where the first inequality holds by \eqref{dim1}.
This yields \begin{equation*} \operatorname{H}_i(F)_\fp =0 \quad \text{for all $i\le n$ }\:, \end{equation*} which implies that the homology of the complex \begin{equation} \label{eq:Fp} (F_{n+1})_\fp \xrightarrow{(\partial_{n+1})_\fp} (F_{n})_{\fp} \longrightarrow \cdots \longrightarrow (F_{a+1})_\fp \xrightarrow{(\partial_{a+1})_\fp}(F_{a})_{\fp} \longrightarrow 0 \end{equation} is zero in degrees $\le n$. It follows that the image of $(\partial_{n+1})_{\fp}$ is a free $R_{\fp}$-module of rank $s_n$. Hence one has $I_{s_{n}}(\partial_{n+1})_\fp=R_\fp$. To prove the opposite inequality, ``$\le$'', we show that \begin{equation*} \dim_R \operatorname{H}_{n}(F) - n \leq \sup\{\dim (R/I_{s_i}(\partial_{i+1})) - i \mid a \le i \le b\} \end{equation*} holds for each integer $n$ in $[a,b]$. Let $t$ be the supremum above. One needs to verify that $\operatorname{H}_{n}(F)_{\fp}=0$ holds for primes $\fp$ with $\dim(R/\fp) > t + n$. Fix such a $\fp$; for every $i\le n$ one has \begin{equation*} \dim (R/I_{s_i}(\partial_{i+1})) \le t+ i \le t+n <\dim(R/\fp) \end{equation*} so that $I_{s_i}(\partial_{i+1})_{\fp} = R_{\fp}$. We now argue by induction on $i$ that the homology of the complex \eqref{eq:Fp} is zero in degrees $\le n$; in particular, one has $\operatorname{H}_{n}(F)_{\fp}=0$, as desired. With $f_i = \operatorname{rank}_R F_i$ and $K_i = \ker \partial_i$ the argument goes as follows: In the base case $i=a$ one applies \cite[Lemma~1.4.9]{bruher} to the presentation of the image of $\partial_{a+1}$ afforded by \eqref{eq:Fp}, and one concludes that it is a free submodule of $(F_a)_\fp$ of rank $f_a$, i.e.\ all of $(F_a)_\fp$. One also notices that a free module contained in $K_{a+1}$ has rank at most $s_{a+1}$. In the induction step one applies \emph{op.\ cit.}\ to the presentation of the image of $\partial_{i+1}$ and concludes that it is a free module of rank $f_{i+1} - s_{i+1}=s_i$.
By the induction hypothesis a free module contained in $K_i$ has rank at most $s_i$, so the complex is exact at $(F_i)_\fp$. \end{proof} \subsection*{Codimension} Let $R$ be a commutative Noetherian ring and $F$ a finite free $R$-complex as in \eqref{ffc}. For each integer $n$ set \begin{equation*} r_n\colonequals \sum_{i\ge n} (-1)^{i-n} \operatorname{rank}_R(F_i)\:. \end{equation*} For $n$ in $[a+1,b]$ this is the \emph{expected rank} of the map $\partial_n$; see \cite[p.~24]{bruher}. \begin{corollary} \label{codim1} With $F$ and $r_n$ as above there is an equality \begin{equation*} \dim_R \operatorname{Hom}_R(F,R) = \sup\{\dim(R/I_{r_n}(\partial^F_n)) + n \mid n \in \mathbb Z \}\:. \end{equation*} \end{corollary} \begin{proof} Set $G\colonequals \operatorname{Hom}_R(F,R)$. This too is a finite free complex, concentrated in degrees $[-b,-a]$, with differentials $\partial^{G}_{n} = \operatorname{Hom}_R(\partial^F_{1-n},R)$ for each $n$. It is now easy to check that the expected ranks $r_n$ of $F$ and the invariants $s_n$ of $G$, from \eqref{sn}, determine each other: \begin{equation*} s_n(G) = r_{-n}(F)\quad\text{for each $n$}. \end{equation*} Whence one gets equalities \begin{align*} \dim_R G &= \sup\{\dim (R/I_{s_n}(\partial^G_{n+1})) - n\mid n\in \mathbb Z\} \\ &= \sup\{\dim (R/I_{r_{-n}}(\partial^F_{-n})) -n \mid n\in\mathbb Z\} \\ &= \sup\{\dim (R/I_{r_n}(\partial^F_n)) + n \mid n \in \mathbb Z \}\:. \qedhere \end{align*} \end{proof} \begin{remark} \label{bhcodim} Bruns and Herzog~\cite[Section 9.1]{bruher} have introduced a notion of ``codimension'' for finite free complexes. This is perhaps a misnomer: Applied to the minimal free resolution of a module, the codimension does not equal the usual codimension of the module. In fact, Corollary~\ref{codim1} yields that the codimension, in their sense, of any finite free $R$-complex $F$, is precisely $\dim R - \dim_R \operatorname{Hom}_R(F,R)$.
Foxby also has a notion of codimension for an $R$-complex $X$, namely the invariant \begin{align*} \operatorname{codim}_R X & \colonequals \inf\{\dim R_\fp + \inf\operatorname{H}(X)_\fp \mid \fp\in\operatorname{Spec} R \} \\ & = \inf\{\operatorname{codim}_R \operatorname{H}_n(X) + n \mid n\in \mathbb Z\}\:; \end{align*} see \cite[Lemma 5.1]{HBF79} and the definition preceding it. From the definitions one immediately gets $\operatorname{codim}_R X + \dim_R X \le \dim R$; equality holds if $R$ is local, catenary, and equidimensional. For a finite free complex $F$ over such a ring one thus has \begin{equation*} \operatorname{codim}_R \operatorname{Hom}_R(F,R) = \dim R - \dim_R\operatorname{Hom}_R(F,R)\:. \end{equation*} In particular, the codimension of $F$ in the sense of \cite{bruher} is the codimension of the dual complex, $\operatorname{Hom}_R(F,R)$, in the sense of \cite{HBF79}. \end{remark} In Bruns and Herzog's \cite{bruher} treatment of the homological conjectures---most of which are now theorems thanks to Andr\'e~\cite{YAn18}---their notion of codimension of a finite free complex is key. Per Remark~\ref{bhcodim} this suggests that estimates on the dimension of $\operatorname{Hom}_R(F,R)$ are useful, and that motivates the development below. \subsection*{Support} Let $R$ be a commutative Noetherian ring. The \emph{large support} of an $R$-complex $X$ is the support of the graded module $\operatorname{H}(X)$, i.e. \begin{equation*} \operatorname{Supp}_R X \colonequals \{\fp\in\operatorname{Spec} R \mid \operatorname{H}_n(X)_\fp \ne 0 \text{ for some $n$}\}\:. \end{equation*} Foxby \cite[Section 2]{HBF79} also introduced the (small) \emph{support} of $X$ to be the set \begin{equation*} \operatorname{supp}_R X \colonequals \{\fp\in\operatorname{Spec} R \mid \operatorname{H}(\kappa(\fp) \otimes^{\mathsf{L}}_R X) \ne 0 \}\:; \end{equation*} as usual, $\kappa(\fp)$ denotes the residue field of the local ring $R_\fp$. 
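For orientation: for a finitely generated $R$-module $M$, viewed as a complex concentrated in degree $0$, the two notions of support coincide. Indeed, $\operatorname{H}_0(\kappa(\fp) \otimes^{\mathsf{L}}_R M) \cong M_\fp/\fp M_\fp$, which is nonzero exactly when $M_\fp \ne 0$, by Nakayama's lemma; thus $\operatorname{supp}_R M = \operatorname{Supp}_R M$. For general complexes $X$ one only has the inclusion $\operatorname{supp}_R X \subseteq \operatorname{Supp}_R X$, since $\operatorname{H}(X)_\fp = 0$ forces $\operatorname{H}(\kappa(\fp) \otimes^{\mathsf{L}}_R X) = 0$.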
Support is connected to the finiteness of the depth of $X$: \begin{equation*} \label{supp1} \operatorname{supp}_R X = \{\fp\in \operatorname{Spec} R \mid \operatorname{depth}_{R_\fp} X_\fp < \infty \}\:. \end{equation*} We recall that the \emph{depth} of a complex $X$ over a local ring $R$ with residue field $k$ is \begin{equation*} \operatorname{depth}_RX \colonequals \inf\{n \in \mathbb Z \mid \operatorname{Ext}^n_R(k,X)\ne 0\}\:. \end{equation*} This invariant can also be computed in terms of the Koszul homology, and the local cohomology, of $X$; see \cite{HBFSIn03}. \begin{proposition} \label{fstar} Let $R$ be a commutative Noetherian ring. For every finite free $R$-complex $F$ one has \begin{equation*} \dim_R \operatorname{Hom}_R(F,R) \le \dim R + \sup \operatorname{H}(F) \:. \end{equation*} \end{proposition} \begin{proof} For every prime ideal $\fp$ the complex $F_\fp$ has finite projective dimension, so the Auslander--Buchsbaum Formula combines with standard (in)equalities between invariants to yield \begin{align*} -\inf\operatorname{H}(\operatorname{Hom}_R(F,R))_\fp & = \sup\{m\in\mathbb Z \mid \operatorname{Ext}_{R_\fp}^m(F_\fp,R_\fp) \ne 0\} \\ & = \operatorname{proj.dim}_{R_\fp}F_\fp \\ & = \operatorname{depth} R_\fp - \operatorname{depth}_{R_\fp}F_\fp \\ & \le \dim R_\fp + \sup \operatorname{H}(F)_\fp \:. \end{align*} From the definition \eqref{dim0} one now gets \begin{align*} \dim_R\operatorname{Hom}_R(F,R) &\le \sup\{\dim (R/\fp) + \dim R_\fp + \sup \operatorname{H}(F)_\fp \mid \fp\in\operatorname{Spec} R \} \\ &\le \dim R + \sup \operatorname{H}(F)\:. \qedhere \end{align*} \end{proof} \begin{remark} For the minimal free resolution of a finitely generated module of finite projective dimension, the codimension considered in \cite{bruher} is non-negative by the Buchsbaum--Eisenbud acyclicity criterion; see the comment before \cite[Lemma 9.1.8]{bruher}.
This compares to the inequality in Proposition~\ref{fstar}, rewritten as \begin{equation*} \dim R - \dim_R\operatorname{Hom}_R(F,R) \ge -\sup\operatorname{H}(F) \:. \end{equation*} \end{remark} \subsection*{Balanced big Cohen--Macaulay modules} Let $(R,\mathfrak{m})$ be local and $M$ a big Cohen--Macaulay module; that is, a module with $\operatorname{depth}_RM=\dim R$ and $\mathfrak{m} M\ne M$. Hochster~\cite{MHc73a,MHc75} proved that such a module exists for every equicharacteristic local ring, and Andr\'e \cite{YAn18a} proved their existence over local rings of mixed characteristic. A big Cohen--Macaulay $R$-module $M$ is called \emph{balanced} if every system of parameters for $R$ is an $M$-regular sequence. The $\mathfrak{m}$-adic completion of any big Cohen--Macaulay module is balanced; see \cite[Theorem 8.5.3]{bruher}. Sharp~\cite{RYS81} demonstrated that these modules behave much like maximal Cohen--Macaulay modules. Of interest here is the fact that for a balanced big Cohen--Macaulay module $M$ one has \begin{equation} \label{bbcm} \operatorname{depth}_{R_\fp} M_\fp = \dim R - \dim (R/\fp) \quad \text{for each $\fp \in\operatorname{supp}_RM$\:;} \end{equation} this is part $(iii)$ in \cite[Theorem 3.2]{RYS81}. Note that what Sharp calls the supersupport of $M$ is the support of $M$, in the sense above; this follows from comparison of \cite[Remark~2.9]{HBF79} and part $(v)$ in \emph{op.\:cit.} \begin{proposition} \label{sup} Let $R$ be a local ring, $F$ a finite free $R$-complex, and $M$ a balanced big Cohen--Macaulay module. One has \begin{equation*} \sup\operatorname{H}(F \otimes_R M) = \dim_R \operatorname{Hom}_R(F,R) - \dim R \:. \end{equation*} \end{proposition} \begin{proof} Set $G \colonequals \operatorname{Hom}_R(F,R)$. There is an isomorphism $F \otimes_R M \cong \operatorname{Hom}_R(G,M)$.
In the computation below, the first equality holds by \cite[Proposition 3.4]{HBF79}. The second equality follows from \eqref{bbcm} and the fourth one follows from \eqref{dim0}. \begin{align*} \sup\operatorname{H}(\operatorname{Hom}_R(G,M)) & = -\inf\{\operatorname{depth}_{R_\fp} M_\fp + \inf \operatorname{H}(G)_\fp \mid \fp\in\operatorname{Spec} R \} \\ & = -\inf\{\dim R - \dim (R/\fp) + \inf \operatorname{H}(G)_\fp \mid \fp\in\operatorname{Spec} R \} \\ & = \sup\{\dim (R/\fp) - \inf \operatorname{H}(G)_\fp \mid \fp\in\operatorname{Spec} R \} - \dim R \\ & = \dim_R G - \dim R\:. \qedhere \end{align*} \end{proof} Given Remark~\ref{bhcodim}, the proposition above recovers \cite[Lemma 9.1.8]{bruher}: \begin{corollary} \label{BH} Let $R$ be a local ring and $F \colonequals 0 \to F_b \to \cdots \to F_0 \to 0$ a finite free $R$-complex. If $\dim R - \dim_R\operatorname{Hom}_R(F,R) \ge 0$ holds, then for any balanced big Cohen--Macaulay module $M$ one has $\operatorname{H}_i(F \otimes_R M)=0$ for all $i\ge 1$. \end{corollary} In \cite[Section 9.4]{bruher} it is shown how to derive various intersection theorems, including the New Intersection Theorem, from Corollary \ref{BH}. The next result sheds further light on the invariant $\dim R - \dim_R\operatorname{Hom}_R(F,R)$. \begin{theorem} \label{Fstar} Let $R$ be a local ring and $F$ a finite free $R$-complex. One has \begin{equation*} \dim R + \inf \operatorname{H}(F) \le \dim_R \operatorname{Hom}_R(F,R) \le \dim R + \sup \operatorname{H}(F)\:. \end{equation*} \end{theorem} \begin{proof} The right-hand inequality holds by Proposition~\ref{fstar}. Since $R$ is local, one can apply the version of the New Intersection Theorem recorded by Foxby \cite[Lemma 4.1]{HBF79} to the complex $\operatorname{Hom}_R(F,R)$ to get \begin{equation*} \dim R - \dim_R\operatorname{Hom}_R(F,R) \le \operatorname{proj.dim}_R \operatorname{Hom}_R(F,R)\;.
\end{equation*} As $\operatorname{Hom}_R(F,R)$ is also a finite free complex, one has \begin{equation*} \operatorname{proj.dim}_R \operatorname{Hom}_R(F,R) = -\inf\operatorname{H}(\operatorname{Hom}_R(\operatorname{Hom}_R(F,R),R)) = -\inf \operatorname{H}(F)\:. \qedhere \end{equation*} \end{proof} \begin{remark} Let $R$ be a local ring and $M$ a nonzero finitely generated $R$-module of finite projective dimension. Applying Theorem~\ref{Fstar} to a finite free resolution of $M$ yields the equality \begin{equation*} \max\{\dim_R \operatorname{Ext}_R^n(M,R) + n \mid n\in \mathbb Z\} = \dim R\:. \end{equation*} Notice that with $p \colonequals \operatorname{proj.dim}_R M$ one gets inequalities \begin{equation*} \dim R - \dim_R M \le \operatorname{proj.dim}_R M \le \dim R - \dim_R \operatorname{Ext}^p_R(M,R) \:; \end{equation*} the inequality on the left is the version of the New Intersection Theorem that went into the proof of Theorem \ref{Fstar}. \end{remark}
https://arxiv.org/abs/1710.10674
Existence and stability of steady compressible Navier-Stokes solutions on a finite interval with noncharacteristic boundary conditions
We study existence and stability of steady solutions of the isentropic compressible Navier-Stokes equations on a finite interval with noncharacteristic boundary conditions, for general, not necessarily small-amplitude, data. We show that there exists a unique solution, about which the linearized spatial operator possesses (i) a spectral gap between neutral and growing/decaying modes, and (ii) an even number of nonstable eigenvalues (i.e., eigenvalues with nonnegative real part). In the case that there are no nonstable eigenvalues, i.e., of spectral stability, we show this solution to be nonlinearly exponentially stable in $H^2 \times H^3$. Using "Goodman-type" weighted energy estimates, we establish spectral stability for small-amplitude data. For large-amplitude data, we obtain high-frequency stability, reducing stability investigations to a bounded frequency regime. On this remaining, bounded-frequency regime, we carry out a numerical Evans function study, with results again indicating universal stability of solutions.
\section{Introduction} In this paper, we initiate, in the simplest setting of 1D isentropic gas dynamics, a systematic study of existence and stability of steady solutions of systems of hyperbolic-parabolic equations on a bounded domain, with noncharacteristic inflow or outflow boundary conditions, and data and solutions of amplitudes that are not necessarily small. We have in mind the scenario of a ``shock tube'', or finite-length channel with inflow-outflow boundary conditions, which in turn could be viewed as a generalization of Poiseuille flow in the incompressible case. \medskip Our conclusions in the present, isentropic case, obtained by rigorous nonlinear and spectral stability theory, augmented in the large-amplitude case by numerical Evans function analysis, are that for any choice of data there exists a unique solution, and this solution is linearly and nonlinearly time-exponentially stable in $H^2\times H^3$. These results suggest a number of interesting directions for further investigation in 1- and multi-D. \subsection{Setting} We consider the 1D isentropic compressible Navier-Stokes equations \begin{equation}\label{NS} \left\{ \begin{aligned} &\rho_{t} + \left(\rho u \right)_{x} = 0,\\ &\left(\rho u \right)_{t} + \left(\rho u^{2} + P(\rho) \right)_{x} = \nu u_{xx} \end{aligned} \right. \end{equation} on the interval $[0,1]$, with the noncharacteristic boundary conditions \begin{equation}\label{BC} \left\{ \begin{aligned} &\rho(t,0)=\rho_{0} > 0,\\ &u(t,0)=u_{0} > 0,\\ &u(t,1)=u_{1} > 0. \end{aligned} \right. \end{equation} Notice that we have an inflow boundary condition at $x=0$ and an outflow boundary condition at $x=1$. We assume that the viscosity $\nu$ is positive and constant and that the pressure $P$ is a smooth function satisfying \begin{equation}\label{pressure_cond} P' > 0. \end{equation} Stability of steady states for hyperbolic-parabolic systems has been studied by many authors.
For problems on the whole line, the reader can refer to \cite{mascia_zumbrun,num_stab_zum} and references therein. In the case of noncharacteristic boundary conditions on the half line, see for instance \cite{toan_zumbrun_ns,toan_zumbrun_hyp_par}. For studies of scalar conservation laws on a bounded interval, one may see for instance \cite{Kreiss_Kreiss_burgers,Jiu_Pan_scalar_cons_bounded}. Finally, we refer to \cite{control_ns_1d} for the study of boundary controllability of the 1D Navier-Stokes equations. \medskip In this paper, we study the existence and stability of steady states of \eqref{NS} satisfying the boundary conditions \eqref{BC}. Section \ref{section_existence_steady} is devoted to the existence and the uniqueness of such steady states. In Section \ref{section_linear_estimate}, we study the corresponding linearized problem about the steady state. In Section \ref{section_spectral_stab}, we show that constant steady states and almost constant steady states (see Condition \eqref{almost_constant_assump}) are spectrally stable. We also show that general steady states are numerically spectrally stable. Section \ref{section_local_existence} is devoted to a local well-posedness result for problem \eqref{NS}-\eqref{BC}. Then, in Section \ref{section_nonlinear_stab}, we show the nonlinear stability of steady states that are spectrally stable. Theorem \ref{stab_result} is the main result of this paper. Finally, in Section \ref{section_improvement}, we improve the previous theorem under more restrictive assumptions. \begin{remark}\label{bcrmk} It is worth noting that boundary conditions \eqref{BC} are not the only ones we can deal with. For instance, the case \begin{equation*} \left\{ \begin{aligned} &\rho(t,1)=\rho_{0} > 0,\\ &u(t,0)=u_{0} < 0,\\ &u(t,1)=u_{1} < 0, \end{aligned} \right. \end{equation*} is equivalent by the change of variables $x \shortrightarrow 1-x$ and the change of unknowns $(\hat{\rho}, \hat{u}) \shortrightarrow (\hat{\rho}, -\hat{u})$.
Moreover, these two possibilities are the only types of noncharacteristic boundary conditions yielding physically realizable steady states. Indeed, the first equation of \eqref{NS} yields that steady solutions have constant momentum $\rho u\equiv m$, so that $u(0)$ and $u(1)$ necessarily agree in sign. By similar reasoning, characteristic boundary conditions $u(0)=u(1)=0$ yield $u\equiv 0$, hence only trivial, constant steady states $(\rho, u)\equiv (\rho_0, 0)$. \end{remark} \subsection{Discussion and open problems}\label{s:discussion} As mentioned earlier, our goal in this paper is to open a line of investigation of large-amplitude steady solutions for inflow-outflow problems on bounded domains. The main technical contribution is our argument for nonlinear exponential stability of spectrally stable solutions, which is both particularly simple and also applies to general hyperbolic-parabolic systems of ``Kawashima type'', as considered on the whole- and half-line in \cite{mascia_zumbrun,num_stab_zum,toan_zumbrun_ns,toan_zumbrun_hyp_par}. Our goal, and the novelty of the argument as compared to those for the whole- and half-line, was to take advantage of the spectral gap to obtain a simple proof based on standard semigroup/energy methods. However, a close reading will reveal that this is deceptively difficult to accomplish, involving the introduction of a precisely chosen space $(\rho, u)\in H^1 \times L^2$, with a norm strong enough that we can carry out energy-based high-frequency resolvent estimates, different from the usual Kawashima-type estimates, yet weak enough that the range of nonlinear terms is densely contained. \medskip The reduction of nonlinear to spectral stability gives a base for investigation of more general systems such as full (nonisentropic) gas dynamics or (isentropic or nonisentropic) MHD. Our results on uniqueness and universal stability, on the other hand, are likely accidents of low dimension.
For example, the demonstration of unstable large-amplitude boundary layers in \cite{Serre_Zumbrun, zumbrun_standingshock} suggests, via the large interval-length limit from the bounded interval toward the half-line, that unstable large-amplitude steady solutions might occur on bounded intervals for polytropic full gas dynamics in some parameter regimes. Indeed, the example of unstable shock waves on the whole line in \cite{zumbrun_convex_entropy} together with the asymptotic analysis in \cite{sandstede_abs_conv_instab,zumbrun_standingshock} of spectra in the whole-line limit shows that unstable steady solutions are possible on an interval for full gas dynamics with an artificial equation of state satisfying all of the usual requirements imposed in standard theory, including existence of a convex entropy, genuine nonlinearity of acoustic modes, etc. \medskip Moreover, due to the presence of spectral gap/absence of essential spectra in the bounded-interval problem, differently from the whole- and half-line problems, changes in stability of the type considered in \cite{Serre_Zumbrun, zumbrun_standingshock}, involving passage of a real eigenvalue through zero, are associated necessarily with bifurcation/nonuniqueness, by Lyapunov-Schmidt or center manifold reduction to the finite-dimensional case.\footnote{ See in particular the center manifold theory for generators of $C^0$ semigroups in \cite{haragus_Iooss} or the still more general Fredholm-based Lyapunov-Schmidt reduction of \cite[Appendix D]{Monteiro2014} for closed densely defined operators with an isolated crossing eigenvalue, along with the general finite-dimensional bifurcation result of \cite[Lemma 3.10]{Barker_convexEntropy}.
This is to be contrasted with the case of the whole line discussed in \cite[\S 6.2]{Zumbrun_multi_dim_stab_planar}, for which $\lambda=0$ is embedded in essential spectrum and a crossing eigenvalue at $\lambda=0$ may signal {\it either} steady state bifurcation as here or more complicated time-dependent bifurcations involving far-field behavior and solutions of an associated inviscid Riemann problem.} Thus, any such violations of stability should also yield examples of large-amplitude nonuniqueness at the same time. Small-amplitude uniqueness, on the other hand, follows readily from uniqueness of constant solutions (itself a consequence of energy estimates like those here), plus continuity. The investigation of large-amplitude uniqueness and stability for larger systems thus appears to be a very interesting direction for future exploration; likewise, the study of the corresponding multi-D problem, for which existence/uniqueness of small-amplitude solutions has been studied for example in \cite{kellogg_inflow_unbounded,kellogg_inflow_bounded,poiseuille_Stab_3d}. In both 1- and multi-D, a very interesting open problem would be to study the asymptotic structure of solutions in the small-viscosity limit, particularly in the multi-D case analogous to Poiseuille flow. \subsection*{ Notation } \noindent In this paper, $C(\cdot)$ denotes a nondecreasing positive function and $C$ a generic positive constant whose exact value is of no importance. $\left\lvert \; \right\rvert_{2}$ refers to the $L^{2}$-norm on $(0,1)$ and $\left\lvert \; \right\rvert_{H^{n}}$, for $n\geq1$, to the $H^{n}$-norm. $\left\lvert \; \right\rvert_{\infty}$ refers to the $L^{\infty}$-norm on $[0,1]$. \subsection*{ Acknowledgment } We would like to thank the anonymous referees for careful reading and for very valuable comments on the manuscript. \section{Existence and uniqueness of steady states}\label{section_existence_steady} \subsection{Analytical results} In this part, we prove the following result.
\begin{prop}\label{existence_steady_sol} Assume that $P$ is a smooth function. For any $(\rho_0, u_0, u_1)$, with $\rho_0>0,$ $u_0>0$ and $u_1>0$, problem \eqref{NS}-\eqref{BC} has a unique steady solution $\left(\hat{\rho},\hat{u} \right)$ with $\hat{\rho} > 0$. \end{prop} \begin{proof} A steady solution $\left(\hat{\rho},\hat{u} \right)$ of \eqref{NS}-\eqref{BC} satisfies \begin{equation*} \left\{ \begin{aligned} &\left(\hat{\rho} \hat{u} \right)_{x} = 0,\\ &\left(\hat{\rho} \hat{u}^{2} + P(\hat{\rho}) \right)_{x} = \nu \hat{u}_{xx},\\ &\hat{\rho}(0)=\rho_{0},\\ &\hat{u}(0)=u_{0} \text{ , } \hat{u}(1)=u_{1}. \end{aligned} \right. \end{equation*} Thus, $\hat{\rho} \hat{u}=\rho_{0} u_{0}$ and \begin{equation}\label{ode1} \left\{ \begin{aligned} &\nu \rho_{0} u_{0} \hat{\rho}_{x} = b \hat{\rho}^{2} - (\rho_{0} u_{0})^{2} \hat{\rho} - \hat{\rho}^{2} P(\hat{\rho}),\\ &\hat{\rho}(0)= \rho_{0} \end{aligned} \right. \end{equation} where $b$ is a constant that has to be determined. We define the map \begin{equation*} \phi \colon b \longmapsto \hat{\rho}(1) - \frac{\rho_{0} u_{0}}{u_{1}} \end{equation*} where $\hat{\rho}$ is the unique solution of System \eqref{ode1}. Notice that $\phi(b)$ is only defined when $\hat{\rho}$ is defined on all of $[0,1]$. Then, we remark that $\phi(\rho_{0} u_{0}^{2} + P(\rho_{0}))=\rho_{0} - \frac{\rho_{0} u_{0}}{u_{1}}$ and that $\phi$ is increasing. We also remark that if $b_{1}$ is in the domain of $\phi$, any $b<b_{1}$ is in the domain of $\phi$. Therefore, the domain of $\phi$ is an interval containing $\rho_{0} u_{0}^{2} + P(\rho_{0})$. Furthermore, $\phi$ is not bounded from above. Otherwise, one can show that $\hat{\rho}$ is bounded uniformly with respect to $b$ and then that $\phi(b)>b$ for $b$ large enough. One can also show that $\underset{b \shortrightarrow -\infty}{\lim} \phi(b) = - \frac{\rho_{0} u_{0}}{u_{1}}$. Therefore, there exists a unique $b$ such that $\hat{\rho}(1)=\frac{\rho_{0} u_{0}}{u_{1}}$ and $\hat{\rho}>0$.
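The argument above is a shooting method in the parameter $b$. As a minimal illustrative sketch (not the authors' code; it assumes the pressure law $P(\rho)=\rho^{1.4}$, $\nu=1$, and the expansive data $\rho_{0}=3$, $u_{0}=2$, $u_{1}=3$ used in the numerical simulations below, and it uses bisection throughout in place of Newton's iteration, purely for robustness), one may locate the zero of $\phi$ as follows:

```python
# Shooting solver for the steady profile (illustrative sketch).
# With constant momentum m = rho_0 * u_0, the profile solves
#   nu * m * rho'(x) = b*rho^2 - m^2*rho - rho^2*P(rho),  rho(0) = rho_0,
# and b is chosen so that rho(1) = m/u_1, i.e. u(1) = u_1.

def P(rho):
    # gamma-law pressure used in the simulations below (assumption of this sketch)
    return rho ** 1.4

nu, rho0, u0, u1 = 1.0, 3.0, 2.0, 3.0   # expansive example from the text
m = rho0 * u0                            # constant momentum of the steady state

def rhs(rho, b):
    return (b * rho ** 2 - m ** 2 * rho - rho ** 2 * P(rho)) / (nu * m)

def rho_at_1(b, n=2000):
    """Integrate the profile ODE over [0,1] with the classical RK4 scheme."""
    h, rho = 1.0 / n, rho0
    for _ in range(n):
        k1 = rhs(rho, b)
        k2 = rhs(rho + 0.5 * h * k1, b)
        k3 = rhs(rho + 0.5 * h * k2, b)
        k4 = rhs(rho + h * k3, b)
        rho += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return rho

def phi(b):
    return rho_at_1(b) - m / u1

# phi is increasing; b0 gives the constant state rho == rho0, so phi(b0) > 0
# for these expansive data, while phi(b) < 0 for b near 0: bisect for the zero.
b0 = rho0 * u0 ** 2 + P(rho0)
lo, hi = 0.0, b0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if phi(mid) > 0.0 else (mid, hi)
b_star = 0.5 * (lo + hi)   # rho_at_1(b_star) matches m/u_1 up to tolerance
```

The bracket $[0, b_{0}]$ is valid here because $\phi(b_{0})=\rho_{0}-\rho_{0}u_{0}/u_{1}>0$ for expansive data ($u_{0}<u_{1}$); for compressive data one would bracket above $b_{0}$ instead.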
\end{proof} \noindent Notice that a solution of \eqref{NS} is constant if and only if $u_{0} = u_{1}$. \medskip In the following, we study among other things the stability of almost constant steady solutions of \eqref{NS}, by which we mean solutions satisfying \begin{equation}\label{almost_constant_assump} \exists \varepsilon > 0 \text{ , } \varepsilon \ll 1 \text{ and } \left|u_{0} - u_{1} \right| \leq \varepsilon. \end{equation} For this, the following lemma will be useful. \begin{lemma}\label{control_almost_constant_state} Assume the hypotheses of Proposition \ref{existence_steady_sol}. Then, the unique steady solution $(\hat{\rho}, \hat{u})$ of problem \eqref{NS}-\eqref{BC} is smooth and if $u_{1} \neq u_{0}$ we have \begin{equation*} \hat{\rho} > 0 \text{ , } \hat{u} > 0 \text{ , } \left( u_{1} - u_{0} \right) \hat{\rho}_{x} < 0 \text{ , } \left( u_{1} - u_{0} \right) \hat{u}_{x} > 0 \text{ , } \underset{u_{1} \shortrightarrow u_{0}}{\lim}\left( \left\lvert \hat{\rho}_{x} \right\rvert_{\infty} + \left\lvert \hat{u}_{x} \right\rvert_{\infty} \right) = 0. \end{equation*} \end{lemma} \begin{proof} The first four inequalities are clear (notice that if $u_{0} \neq u_{1}$, $|\hat{u}_{x}|>0$). The last inequality follows from a comparison argument and the continuity of the map $\phi$. \end{proof} We denote solutions as {\it compressive} when $\hat{u}_{x} < 0$ and {\it expansive} when $\hat{u}_{x} > 0$. \subsection{Numerical simulations} A steady solution $\left(\hat{\rho},\hat{u} \right)$ of \eqref{NS}-\eqref{BC} is characterized by system \eqref{ode1} where $b$ is the unique zero of $\phi$. The numerical computation of such a solution is carried out in two steps: \medskip \noindent - We compute $b$ with Newton's method. We initiate the process with $b=b_{0}$ where \begin{equation*} b_{0} = \rho_{0} u_{0}^{2} + P(\rho_{0}). \end{equation*} Note that for a small viscosity ($\nu \leq 1$), the initial starting point $b=b_{0}$ ceases to be relevant.
Thus, in this case, we use a dichotomy method to find a better starting point. \medskip \noindent - The solution of system \eqref{ode1} is computed with a fourth-order Runge-Kutta method. \medskip \medskip We display the results of numerical simulations for a polytropic pressure law $P(\rho) = \rho^{1.4}$ with $\nu=1$. Figure \ref{steadysol1} represents the expansive solution for $u_{0}=2$, $u_{1}=3$ and $\rho_{0}=3$. Figure \ref{steadysol2} represents the compressive solution when $u_{0}=1.5$, $u_{1}=1$ and $\rho_{0}=2$. \begin{figure}[!t] \centering \includegraphics[scale=0.25]{steadysol1.png} \caption{A steady expansive solution of \eqref{NS}. Left: $\rho$ ; Right: $u$} \label{steadysol1} \end{figure} \begin{figure}[!t] \centering \includegraphics[scale=0.25]{steadysol2.png} \caption{A steady compressive solution of \eqref{NS}. Left: $\rho$ ; Right: $u$} \label{steadysol2} \end{figure} \section{Linear estimates}\label{section_linear_estimate} \subsection{The eigenvalue problem}\label{eigenvalue_problem_part} In order to study the stability of steady states, we linearize system \eqref{NS} about the steady state $\left(\hat{\rho},\hat{u} \right)$. Then, we study the corresponding eigenvalue problem for $(r,v)$ \begin{equation}\label{linear_steady} \left\{ \begin{aligned} &\lambda r + \left( \hat{\rho} v +\hat{u} r \right)_{x} = 0,\\ &\lambda \hat{\rho} v + \left(\hat{\rho} \hat{u} v + P'(\hat{\rho}) r \right)_{x} + \hat{u}_{x} \left(\hat{u} r + \hat{\rho} v \right) = \nu v_{xx}, \end{aligned} \right. \end{equation} with \begin{equation}\label{BC_linear_steady} r(0)=v(0)=v(1) = 0.
\end{equation}
We define the linear unbounded operator
\begin{equation}\label{linear_op}
\mathcal{L} \left( r , v \right) =
\begin{pmatrix}
- \left( \hat{\rho} v +\hat{u} r \right)_{x} \\
\nu v_{xx} - \left(\hat{\rho} \hat{u} v + P'(\hat{\rho}) r \right)_{x} - \hat{u}_{x} \left(\hat{u} r + \hat{\rho} v \right)
\end{pmatrix},
\end{equation}
for $(r,v)$ in the domain $\mathcal{D}(\mathcal{L}) = \left\{(r,v) \in H^{1} \times H^{2} \text{, } r(0)=v(0)=v(1)= 0 \right\}$ and the matrix
\begin{equation}\label{linear_op_S}
\mathcal{S} =
\begin{pmatrix}
1 & 0 \\
0 & \hat{\rho}
\end{pmatrix}.
\end{equation}
\begin{remark}
For constant steady states, the eigenvalue problem simplifies into
\begin{equation}\label{linear_constant}
\left\{
\begin{aligned}
&\lambda r + \hat{\rho} v_{x} +\hat{u} r_{x} = 0,\\
&\lambda \hat{\rho} v + \hat{\rho} \hat{u} v_{x} + P'(\hat{\rho}) r_{x} = \nu v_{xx}.
\end{aligned}
\right.
\end{equation}
\end{remark}
In the following, we denote by $\sigma(\mathcal{S}^{-1} \mathcal{L})$ the spectrum of $\left( \mathcal{S}^{-1} \mathcal{L}, \mathcal{D}(\mathcal{L}) \right)$ in $L^{2}(0,1)$. The following proposition shows that $\sigma(\mathcal{S}^{-1} \mathcal{L})$ only contains eigenvalues.
\begin{prop}\label{compact_inverse_L}
The inverse of $\mathcal{L}$ exists and is compact, and $\sigma(\mathcal{S}^{-1} \mathcal{L})$ only contains eigenvalues. Furthermore, the spectrum of $\left( \mathcal{S}^{-1} \mathcal{L}, \mathcal{D}(\mathcal{L}) \cap H^{2} \times H^{3} \right)$ in the space $\left\{ (r,v) \in H^{1} \text{, } r(0)=v(0)=v(1)= 0 \right\}$ only contains eigenvalues.
\end{prop}
\begin{proof}
First we show that $0 \notin \sigma(\mathcal{L})$. For $f$ and $g$ in $L^{2}(0,1)$, we solve
\begin{equation*}
\mathcal{L} (r,v) = (f,g) \text{ with } r(0)=v(0)=v(1) = 0.
\end{equation*}
This leads to the system
\begin{equation*}
\left\{
\begin{aligned}
&\hat{\rho} v +\hat{u} r = -\int_{0}^{x} f,\\
&\nu v_{x} = \nu v_{x}(0) + \hat{\rho} \hat{u} v - \frac{P'(\hat{\rho})}{\hat{u}} \left(\hat{\rho} v + \int_{0}^{x} f \right) + \int_{0}^{x} g - \int_{0}^{x} \hat{u}_{x}(y) \int_{0}^{y} f \, dy.
\end{aligned}
\right.
\end{equation*}
\noindent Then, we can solve the second equation with the initial condition $v(0)=0$
\small
\begin{equation*}
v(x) = \frac{1}{\nu} \int_{0}^{x} \exp \left( \frac{1}{\nu} \int_{y}^{x} \hat{\rho} \hat{u} - \frac{P'(\hat{\rho})}{\hat{u}} \hat{\rho} \, dz \right) \left( \nu v_{x}(0) - \frac{P'(\hat{\rho})}{\hat{u}} \int_{0}^{y} f + \int_{0}^{y} g - \int_{0}^{y} \hat{u}_{x}(z) \int_{0}^{z} f \, dz \right) dy.
\end{equation*}
\normalsize
\noindent Since $v(1)=0$, we can compute $v_{x}(0)$, which implies that $\mathcal{L}$ is invertible. Furthermore, if $f$ and $g$ are bounded in $L^{2}(0,1)$, we get from the previous equality that $v_{x}(0)$ is bounded and that $v$ is bounded in $H^{1}(0,1)$. The first statement follows easily. The second statement follows from similar computations.
\end{proof}
\begin{remark}
\noindent Notice that $(\hat{\rho}',\hat{u}')$ cannot be an eigenfunction of $\mathcal{S}^{-1} \mathcal{L}$ for the eigenvalue $\lambda=0$ since it does not satisfy the boundary conditions \eqref{BC_linear_steady}. This differs from the whole line case.
\end{remark}
In order to prove the spectral stability of steady solutions, we need high frequency estimates for problem \eqref{linear_steady}-\eqref{BC_linear_steady}. First, we establish a useful lemma.
\begin{lemma}\label{high_frequency_control} For any $(r,v)$ satisfying the boundary conditions \eqref{BC_linear_steady} and $\left\lvert \lambda \right\rvert$ large enough, we have \begin{align*} &\left\lvert \tilde{r} \right\rvert_{2}^{2} \leq \frac{C}{\left\lvert \lambda \right\rvert} \left( \left\lvert r \right\rvert_{2}^{2} + \left\lvert v \right\rvert_{2}^{2} + \left\lvert \left( \lambda \mathcal{S} - \mathcal{L} \right)(r,v)_{1} \right\rvert^{2}_{2} \right),\\ &\left\lvert r \right\rvert_{2}^{2} + \left\lvert v \right\rvert_{2}^{2} \leq \frac{C}{\left\lvert \lambda \right\rvert} \left( \left\lvert r_{x} \right\rvert_{2}^{2} + \left\lvert v_{x} \right\rvert_{2}^{2} + \left\lvert \left( \lambda \mathcal{S} - \mathcal{L} \right)(r,v) \right\rvert^{2}_{2} \right),\\ &\left\lvert v_{x} \right\rvert_{2}^{2} \leq \frac{C}{\left\lvert \lambda \right\rvert} \left( \left\lvert r_{x} \right\rvert_{2}^{2} + \left\lvert v_{xx} \right\rvert_{2}^{2} + \left\lvert \left( \lambda \mathcal{S} - \mathcal{L} \right) (r,v) \right\rvert^{2}_{H^{1}} \right), \end{align*} where $\tilde{r}(x) = \int_{0}^{x} r(y) dy$. \end{lemma} \begin{proof} If we denote $\left( \lambda \mathcal{S} - \mathcal{L} \right) (r,v) = (f,g)$ and $\tilde{f}(x) = \int_{0}^{x} f(y) dy$, we have \begin{equation}\label{system_int} \left\{ \begin{aligned} &\lambda \tilde{r} + \hat{\rho} v +\hat{u} r = \tilde{f},\\ &\lambda r + \left( \hat{\rho} v +\hat{u} r \right)_{x} = f,\\ &\lambda \hat{\rho} v + \left(\hat{\rho} \hat{u} v + P'(\hat{\rho}) r \right)_{x} + \hat{u}_{x} \left(\hat{u} r + \hat{\rho} v \right) = \nu v_{xx} + g. \end{aligned} \right. \end{equation} Thus, we easily see that \begin{equation*} \left\lvert \tilde{r} \right\rvert_{2}^{2} \leq \frac{C}{\left\lvert \lambda \right\rvert} \left( \left\lvert r \right\rvert^{2}_{2} + \left\lvert v \right\rvert^{2}_{2} + \left\lvert f \right\rvert^{2}_{2} \right). 
\end{equation*} Furthermore, by integrating by parts, we get \small \begin{equation*} \left\lvert r \right\rvert_{2}^{2} + \left\lvert \sqrt{\hat{\rho}} v \right\rvert_{2}^{2} = \frac{1}{\left\lvert \lambda \right\rvert} \left\lvert \int_{0}^{1} \overline{r} \lambda r \right\rvert + \frac{1}{\left\lvert \lambda \right\rvert} \left\lvert \int_{0}^{1} \overline{v} \lambda \hat{\rho} v \right\rvert \leq \frac{C}{\left\lvert \lambda \right\rvert} \left( \left\lvert r \right\rvert^{2}_{2} + \left\lvert \left[ \hat{u} \left\lvert r \right\rvert^{2} \right]_{0}^{1} \right\rvert + \left\lvert v \right\rvert_{H^{1}}^{2} + \left\lvert (f,g) \right\rvert^{2}_{2} \right) \end{equation*} \normalsize and the second inequality follows from Lemma \ref{Linfty_controls}. Finally, by differentiating the second equation of \eqref{system_int}, we obtain \small \begin{equation*} \left\lvert \left(\hat{\rho} v \right)_{x} \right\rvert_{2}^{2} = \frac{1}{\left\lvert \lambda \right\rvert} \left\lvert \int_{0}^{1} \overline{\left(\hat{\rho} v \right)_{x}} \lambda \left(\hat{\rho} v \right)_{x} \right\rvert \leq \frac{C}{\left\lvert \lambda \right\rvert} \left( \left\lvert r \right\rvert_{H^{1}}^{2} + \left\lvert v \right\rvert_{H^{2}}^{2} + \left\lvert \left[\hat{\rho} \overline{v_{x}} \left(\nu v_{xx} - P'(\hat{\rho}) r_{x} \right) \right]_{0}^{1} \right\rvert + \left\lvert (f,g) \right\rvert^{2}_{H^{1}} \right). \end{equation*} \normalsize Then, we notice that \small \begin{equation*} \left\{ \begin{aligned} &\nu v_{xx}(0) - P'(\rho_{0}) r_{x}(0) = \rho_{0} u_{0} v_{x}(0)- g(0),\\ &\nu v_{xx}(1) - P'(\hat{\rho}(1)) r_{x}(1) = \hat{\rho}(1) u_{1} v_{x}(1) + P''(\hat{\rho}(1)) \hat{\rho}_{x}(1) r(1) + \hat{u}_{x}(1) u_{1} r(1) - g(1) \end{aligned} \right. 
\end{equation*}
\normalsize
and thanks to Lemma \ref{Linfty_controls}, we get
\begin{equation*}
\left\lvert \left[\hat{\rho} \overline{v_{x}} \left(\nu v_{xx} - P'(\hat{\rho}) r_{x} \right) \right]_{0}^{1} \right\rvert \leq C \left\lvert v \right\rvert_{H^{2}}^{2} + C \left\lvert r \right\rvert_{H^{1}}^{2} + C \left\lvert g \right\rvert_{H^{1}}^{2}.
\end{equation*}
The result follows easily.
\end{proof}
We can now establish a high frequency estimate in $H^{1}$.
\begin{prop}\label{high_freq_estimate}
Assume that $P$ satisfies \eqref{pressure_cond}. There exists a constant $\alpha > 0$ such that if $\Re(\lambda) > - \alpha$ and $\left\lvert \lambda \right\rvert$ is large enough,
\begin{equation*}
\left\lvert (r,v) \right\rvert^{2}_{H^{1}} \leq C \left\lvert \left(\lambda \mathcal{S} - \mathcal{L} \right) (r,v) \right\rvert^{2}_{H^{1} \times L^2}
\end{equation*}
for any $(r,v)\in \left\{(r,v) \in H^{1} \times H^{2} \text{, } r(0)=v(0)=v(1)= 0 \right\}$.
\end{prop}
\begin{proof}
This proof is based on an appropriate Goodman-type energy estimate. In the following we denote $ \left(\lambda \mathcal{S} - \mathcal{L}\right) (r,v) =(f,g)$. We define the energy
\begin{equation*}
\mathcal{E} \left(r,v \right) = \frac{1}{2} \int_{0}^{1} \phi_{1} \left\lvert r_{x} \right\rvert^{2} + \phi_{2} \left\lvert \left( \hat{\rho} v \right)_{x} \right\rvert^{2}
\end{equation*}
where $\phi_{1}$ and $\phi_{2}$ satisfy
\begin{equation*}
\phi_{1} > 0 \text{ , } \phi_{2} > 0 \text{ , } \phi_{1} = P'(\hat{\rho}) \phi_{2} \text{ , } \frac{1}{2} (\hat{u} \phi_{1})_{x} - 2 \hat{u}_{x} \phi_{1} < 0\footnote{For instance, $\phi_{1}(0)=1$ and $\hat{u} \phi_{1}'=3 \hat{u}' \phi_{1} - \delta \hat{u}$ for $\delta>0$ small enough.}.
\end{equation*}
This energy is equivalent to the $H^{1}$-norm by the Poincar\'e inequality (see Lemma \ref{poincare}).
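The choice suggested in the footnote can be checked directly: solving $\hat{u} \phi_{1}' = 3 \hat{u}' \phi_{1} - \delta \hat{u}$ with $\phi_{1}(0)=1$ by variation of constants gives
\begin{equation*}
\phi_{1}(x) = \hat{u}(x)^{3} \left( \frac{1}{u_{0}^{3}} - \delta \int_{0}^{x} \frac{dy}{\hat{u}(y)^{3}} \right),
\end{equation*}
which is positive for $\delta$ small enough since $\hat{u}$ is bounded below, and
\begin{equation*}
\frac{1}{2} (\hat{u} \phi_{1})_{x} - 2 \hat{u}_{x} \phi_{1} = \frac{1}{2} \left( 4 \hat{u}_{x} \phi_{1} - \delta \hat{u} \right) - 2 \hat{u}_{x} \phi_{1} = - \frac{\delta \hat{u}}{2} < 0.
\end{equation*}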
Then, we compute
\begin{equation*}
2 \Re(\lambda) \mathcal{E} \left(r,v \right) = \Re \left( \int_{0}^{1} \phi_{1} \overline{r_{x}} \lambda r_{x} \right) + \Re \left( \int_{0}^{1} \phi_{2} \overline{\left( \hat{\rho} v \right)_{x}} \lambda \left( \hat{\rho} v \right)_{x} \right).
\end{equation*}
Arguing by density, we assume that $(r,v) \in H^{2} \times H^{3}$. We have
\begin{equation*}
\begin{aligned}
2 \Re(\lambda) \mathcal{E} \left(r,v \right) \leq &- \int_{0}^{1} \Re \left[ \phi_{1} \overline{r_{x}} \left( \hat{u} r_{xx} + \hat{\rho} v_{xx} + \hat{u}_{x} r_{x} \right) \right] + C \left\lvert r_{x} \right\rvert_{2} \left( \left\lvert r \right\rvert_{2} + \left\lvert v \right\rvert_{H^{1}} + \left\lvert f \right\rvert_{H^{1}} \right)\\
&+\int_{0}^{1} \Re \left[ \phi_{2} \hat{\rho} \overline{v_{x}} \left( \nu v_{xxx} - P'(\hat{\rho}) r_{xx} - \hat{\rho} \hat{u} v_{xx} \right) \right] + \int_{0}^{1} \phi_{2} \hat{\rho} \overline{v_{x}} g_{x}\\
&+\int_{0}^{1} \Re \left[ \phi_{2} \hat{\rho}_{x} \overline{v} \left( \nu v_{xxx} - P'(\hat{\rho}) r_{xx} \right) \right] + C \left\lvert v \right\rvert_{H^{1}} \left( \left\lvert r \right\rvert_{H^{1}} + \left\lvert v \right\rvert_{H^{1}} + \left\lvert v_{xx} \right\rvert_{2}\right).
\end{aligned} \end{equation*} Therefore, we have \small \begin{equation*} \begin{aligned} 2 \Re(\lambda) \mathcal{E} \left(r,v \right) \leq & \int_{0}^{1} -\nu \hat{\rho} \phi_{2} \left\lvert v_{xx} \right\rvert^{2} + \left(\frac{1}{2} (\hat{u} \phi_{1})_{x} - 2 \hat{u}_{x} \phi_{1} \right) \left\lvert r_{x} \right\rvert^{2} + \Re(\overline{r_{x}} v_{xx}) \left(- \hat{\rho} \phi_{1} + \hat{\rho} P'(\hat{\rho}) \phi_{2} \right)\\ &+ \Re \left( \left[-\frac{1}{2} \hat{u} \phi_{1} \left\lvert r_{x} \right\rvert^{2} + \phi_{2} \hat{\rho} \overline{v_{x}} \left(\nu v_{xx} - P'(\hat{\rho}) r_{x} + g \right) \right]_{0}^{1} \right)\\ &+ C \left\lvert r_{x} \right\rvert_{2} \left( \left\lvert r \right\rvert_{2} + \left\lvert v \right\rvert_{H^{1}} + \left\lvert f \right\rvert_{H^{1}} \right) + C \left\lvert v \right\rvert_{H^{1}} \left( \left\lvert r \right\rvert_{H^{1}} + \left\lvert v \right\rvert_{H^{1}} + \left\lvert v_{xx} \right\rvert_{2}\right) + C \left\lvert g \right\rvert_{2} \left\lvert v_{xx} \right\rvert_{2}. \end{aligned} \end{equation*} \normalsize Then, we notice that \begin{equation}\label{boundary_equalities} \begin{aligned} & u_{0} r_{x}(0) = - \rho_{0} v_{x}(0) + f(0),\\ &\nu v_{xx}(0) - P'(\rho_{0}) r_{x}(0) + g(0) = \rho_{0} u_{0} v_{x}(0),\\ &\nu v_{xx}(1) - P'(\hat{\rho}(1)) r_{x}(1) + g(1) = \hat{\rho}(1) u_{1} v_{x}(1) + \left( P''(\hat{\rho}(1)) \hat{\rho}_{x}(1) + \hat{u}_{x}(1) u_{1} \right) r(1) \end{aligned} \end{equation} and thanks to Lemma \ref{Linfty_controls}, we obtain \small \begin{equation*} \Re \left( \left[-\frac{1}{2} \hat{u} \phi_{1} \left\lvert r_{x} \right\rvert^{2} + \phi_{2} \hat{\rho} \overline{v_{x}} \left(\nu v_{xx} - P'(\hat{\rho}) r_{x} + g \right) \right]_{0}^{1} \right) \leq C \left\lvert r \right\rvert_{2} \left\lvert r_{x} \right\rvert_{2} + C \left\lvert v_{x} \right\rvert_{2} \left\lvert v_{x} \right\rvert_{H^{1}} + C \left\lvert f \right\rvert_{H^{1}}^{2}. 
\end{equation*}
\normalsize
Then, using the second and the third inequality of Lemma \ref{high_frequency_control} and Young's inequality, we can find a constant $\alpha > 0$ such that for $\left\lvert \lambda \right\rvert$ large enough,
\begin{equation*}
2 \Re(\lambda) \mathcal{E} \left(r,v \right) \leq - \alpha \left\lvert (r_{x},v_{xx}) \right\rvert_{2}^{2} + C \left\lvert \left(\lambda \mathcal{S} - \mathcal{L}\right) (r,v) \right\rvert_{H^{1} \times L^{2}}^{2}.
\end{equation*}
Since $\mathcal{E}$ is a norm equivalent to the $H^{1}$-norm, the inequality follows from the Poincar\'e-Wirtinger inequality on $v_{x}$.
\end{proof}
\subsection{The linear time evolution problem}
In this part, we study the linearization of system \eqref{NS} about the steady state $\left(\hat{\rho},\hat{u} \right)$
\begin{equation*}
\left\{
\begin{aligned}
&\mathcal{S} \begin{pmatrix} r \\ v \end{pmatrix}_{t} - \mathcal{L} \begin{pmatrix} r \\ v \end{pmatrix} = 0,\\
&\left( r,v \right)_{|t=0} = \left( r_{0},v_{0} \right),\\
&r(0)=v(0)=v(1)=0.
\end{aligned}
\right.
\end{equation*}
We define for $k \in \mathbb{N}$ the spaces
\begin{align*}
&\mathcal{H}_{k} = \left\{ (r,v) \in H^{k} \text{, } \frac{d^{l}r}{dx^{l}}(0)=\frac{d^{l}v}{dx^{l}}(0)=\frac{d^{l}v}{dx^{l}}(1)= 0 \text{ , for any } l<k \right\}\\
&\mathcal{H}_{1,0} = \left\{(r,v) \in H^{1} \times L^{2} \text{, } r(0)=0 \right\}.
\end{align*}
The main goal of this subsection is to show linear exponential stability in $\mathcal{H}_{1,0}$. This will help us to show nonlinear exponential stability (see Remark \ref{H1XL2_explanation}).
\noindent The following lemmas show that $\mathcal{S}^{-1} \mathcal{L}$ generates a $\mathcal{C}^{0}$-semigroup on $\mathcal{H}_{k}$ and $\mathcal{H}_{1,0}$.
\begin{lemma}\label{gen_C0_semigroup_L2}
The operator $\left( \mathcal{S}^{-1} \mathcal{L}, \mathcal{D}(\mathcal{L}) \right)$ is closed and densely defined on $L^{2}$ and generates a $\mathcal{C}^{0}$-semigroup.
Similarly $\left( \mathcal{S}^{-1} \mathcal{L}, \mathcal{H}_{k} \cap H^{k+1} \times H^{k+2} \right)$ is closed and densely defined on $\mathcal{H}_{k}$ and generates a $\mathcal{C}^{0}$-semigroup.
\end{lemma}
\begin{proof}
The proof is similar to the proof of Proposition 2.2 in \cite{mascia_zumbrun}. In the following we denote $(f,g)=(\lambda-\mathcal{S}^{-1}\mathcal{L})(r,v)$. For $\lambda>0$ and $(r,v) \in \mathcal{D}(\mathcal{L})$
\begin{equation*}
\begin{aligned}
\lambda \left\lvert r \right\rvert_{2}^{2} + \left( \lambda \hat{\rho} v , \frac{1}{\hat{\rho}} v \right)_{2} &\leq -\left( \hat{u} r_{x}, r \right)_{2} - \left( \hat{\rho} v_{x}, r \right)_{2} + \left( f, r \right)_{2} + \nu \left(v_{xx}, v \right)_{2}\\
&\hspace{0.45cm} -\left( P'(\hat{\rho}) r_{x}, v \right)_{2} - \left( \hat{\rho} \hat{u} v_{x}, v \right)_{2} + \left( g, v \right)_{2} + C \left\lvert (r,v) \right\rvert_{2}^{2} \\
&\leq - \nu \left\lvert v_{x} \right\rvert_{2}^{2} + \left( f, r \right)_{2} + \left( g, v \right)_{2} + C \left\lvert (r,v) \right\rvert_{2} \left( \left\lvert (r,v) \right\rvert_{2} + \left\lvert v_{x} \right\rvert_{2} \right)
\end{aligned}
\end{equation*}
where we have integrated by parts and we have noticed a good sign for $\left\lvert r(1) \right\rvert^{2}$. Applying Young's inequality, there exists a constant $C_{0}>0$ such that
\begin{equation}\label{o}
\lambda \left\lvert r \right\rvert_{2}^{2} + \lambda \left\lvert v \right\rvert_{2}^{2} \leq C_{0} \left\lvert (r,v) \right\rvert_{2}^{2} +\left\lvert (r,v) \right\rvert_{2} \left\lvert (f,g) \right\rvert_{2}.
\end{equation} Dividing by $ \left\lvert (r, v) \right\rvert_{2}$, we get \begin{equation*} \lambda \left\lvert (r, v) \right\rvert_{2} \leq C_{0} \left\lvert (r,v) \right\rvert_{2} + \left\lvert (\lambda-\mathcal{S}^{-1} \mathcal{L})(r,v) \right\rvert_{2}, \end{equation*} hence for $\lambda > C_{0}$ \begin{equation}\label{int1_C0semigroup} \left\lvert (r, v) \right\rvert_{2} \leq \frac{1}{\lambda - C_{0}} \left\lvert (\lambda-\mathcal{S}^{-1}\mathcal{L})(r,v) \right\rvert_{2}. \end{equation} Similarly, we have for $(r,v) \in \mathcal{H}_{1} \cap H^{2} \times H^{3}$ \small \begin{equation*} \lambda \left\lvert r_{x} \right\rvert_{2}^{2} + \lambda \left\lvert v_{x} \right\rvert_{2}^{2} = (f_{x},r_{x})_{2} + (g_{x},v_{x})_{2} -(\hat{u} r_{xx},r_{x})_{2} + (v_{xxx}, \frac{\nu}{\hat{\rho}} v_{x})_{2} + C \left( \left\lvert v_{xx} \right\rvert_{2} + \left\lvert (r_{x},v_{x}) \right\rvert_{2} \right) \left\lvert(r,v) \right\rvert_{H^{1}} \end{equation*} \normalsize and there exists a constant $C_{1}>0$ such that for any $\lambda>0$ \begin{equation}\label{t} \lambda \left\lvert (r_{x}, v_{x}) \right\rvert_{2}^{2} \leq C_{1} \left\lvert (r,v) \right\rvert_{H^{1}}^{2} + \left\lvert (r_{x},v_{x}) \right\rvert_{2} \left\lvert (f_{x},g_{x}) \right\rvert_{2}. 
\end{equation}
\noindent Summing \eqref{o} and \eqref{t}, and noting that
\begin{equation*}
\left\lvert (r,v) \right\rvert_{2} \left\lvert (f,g) \right\rvert_{2} + \left\lvert (r_{x},v_{x}) \right\rvert_{2} \left\lvert (f_{x},g_{x}) \right\rvert_{2} \leq \left\lvert(r,v)\right\rvert_{H^1} \left\lvert(f,g)\right\rvert_{H^1}
\end{equation*}
by the Cauchy-Schwarz inequality, we get
\begin{equation*}
\lambda \left\lvert (r, v) \right\rvert_{H^{1}}^{2} \leq (C_{0}+C_{1}) \left\lvert (r,v) \right\rvert_{H^{1}}^{2} + \left\lvert (r,v) \right\rvert_{H^{1}} \left\lvert (\lambda-\mathcal{S}^{-1} \mathcal{L})(r,v) \right\rvert_{H^{1}}
\end{equation*}
hence for $\lambda > C_{0}+C_{1}$,
\begin{equation}\label{int2_C0semigroup}
\left\lvert (r, v) \right\rvert_{H^{1}} \leq \frac{1}{\lambda - C_{0}-C_{1}} \left\lvert (\lambda-\mathcal{S}^{-1}\mathcal{L})(r,v) \right\rvert_{H^{1}}.
\end{equation}
Since we know from Proposition \ref{compact_inverse_L} that the spectrum of $\mathcal{S}^{-1} \mathcal{L}$ only contains eigenvalues, the inequalities \eqref{int1_C0semigroup} and \eqref{int2_C0semigroup} provide resolvent bounds, which show that $\mathcal{S}^{-1} \mathcal{L}$ generates a $\mathcal{C}^{0}$-semigroup on $L^{2}$ and $\mathcal{H}_{1}$ by the Hille-Yosida theorem (see also \cite{pazy_semigroup}). The case $k \geq 2$ is a small adaptation of the previous estimates.
\end{proof}
In the following, we denote this $\mathcal{C}^{0}$-semigroup by $e^{t \mathcal{S}^{-1} \mathcal{L}}$.
\begin{lemma}
There exists a constant $\omega>0$ such that for any $(r_{0},v_{0}) \in \mathcal{H}_{1,0}$
\begin{equation}\label{eg}
\lvert e^{t\mathcal{S}^{-1} \mathcal{L}} (r_{0},v_{0}) \rvert_{H^{1} \times L^{2}} \leq e^{\omega t} \lvert (r_{0},v_{0}) \rvert_{H^{1} \times L^{2}}.
\end{equation}
Furthermore, $e^{t\mathcal{S}^{-1} \mathcal{L}}$ is a $\mathcal{C}^{0}$-semigroup on $\mathcal{H}_{1,0}$.
\end{lemma}
\begin{proof}
We argue by density and we take $(r_{0},v_{0}) \in \mathcal{H}_{2}$.
In the following, we denote $(r(t),v(t))=e^{t\mathcal{S}^{-1} \mathcal{L}} (r_{0},v_{0})$. Notice that we have $r(t,0)=v(t,0)=v(t,1)=r_{x}(t,0)=0$ for any $t\geq 0$. We define the energy \begin{equation*} \mathcal{E}(r,v)= \frac{A}{2} \left( \lvert r \rvert_{2}^{2} + \left( \hat{\rho} v, v \right)_{2} \right) + \left(v, \hat{\rho}^{2} r_{x} \right)_{2} + \frac{\nu}{2} \left\lvert r_{x} \right\rvert_{2}^{2}. \end{equation*} In the following we take $A>0$ large enough. In particular, $\mathcal{E}$ is equivalent to the $H^{1} \times L^{2}$-norm. We get \begin{equation*} \begin{aligned} \frac{d}{dt} \mathcal{E}(r,v) &\leq \nu A \left(v_{xx} , v \right)_{2} + AC \left\lvert (r,v) \right\rvert_{2} \left\lvert (r,v) \right\rvert_{H^{1}} + \left( \nu v_{xx} , \hat{\rho} r_{x} \right)_{2} - \left( v , \hat{\rho}^{3} v_{xx} \right)_{2}\\ &\hspace{0.45cm} - \left( v , \hat{\rho}^{2} \hat{u} r_{xx} \right)_{2} - \left( \nu \hat{\rho} v_{xx} , r_{x} \right)_{2} - \nu \left(\hat{u} r_{xx}, r_{x} \right)_{2}+ C\left( \left\lvert v \right\rvert_{2} + \left\lvert r_{x} \right\rvert_{2} \right) \left\lvert (r,v) \right\rvert_{H^{1}}\\ &\leq - A \nu \left\lvert v_{x} \right\rvert_{2}^{2} + \left( v_{x} , \hat{\rho}^{3} v_{x} \right)_{2} + \frac{\nu}{2} \left(\hat{u}_{x} r_{x}, r_{x} \right)_{2} + C\left( A \left\lvert (r,v) \right\rvert_{2} + \left\lvert r_{x} \right\rvert_{2} \right) \left\lvert (r,v) \right\rvert_{H^{1}}\\ \end{aligned} \end{equation*} where we have used cancellation of the highest-order terms $\pm \left( \nu v_{xx} , \hat{\rho} r_{x} \right)_{2} $, we have integrated by parts and we have noticed a good sign for $r_{x}(1)^{2}$. Then, applying Young's inequality, we obtain for some $\omega>0$ large enough, \begin{equation*} \frac{d}{dt} \mathcal{E}(r,v) \leq 2 \omega \left( \lvert r \rvert_{2}^{2} + \lvert v \rvert_{2}^{2} + \lvert r_{x} \rvert_{2}^{2} \right) \leq 2\omega \mathcal{E}. \end{equation*} The inequality \eqref{eg} follows easily. 
Finally for any $U_{0} \in \mathcal{H}_{1,0}$ and $V_{0} \in \mathcal{H}_{1}$ \begin{equation*} \left\lvert e^{t\mathcal{S}^{-1} \mathcal{L}} U_{0} - U_{0} \right\rvert_{H^{1} \times L^{2}} \leq \left\lvert e^{t\mathcal{S}^{-1} \mathcal{L}} V_{0} - V_{0} \right\rvert_{H^{1}} + (1+e^{\omega t}) \left\lvert U_{0}-V_{0} \right\rvert_{H^{1} \times L^{2}} \end{equation*} and continuity at $t=0$ follows since $e^{t\mathcal{S}^{-1} \mathcal{L}}$ is continuous at $t=0$ on $\mathcal{H}_{1}$ and $\mathcal{H}_{1}$ is dense in $\mathcal{H}_{1,0}$. \end{proof} \begin{comment} \begin{lemma} Denoting $(r(t),v(t))=e^{t\mathcal{S}^{-1} \mathcal{L}} (r_{0},v_{0})$, there exist two constants $C>0$ and $\omega>0$ such that for any $(r_{0},v_{0}) \in H^{1} \times L^{2}$ with $r_{0}(0)=0$ \begin{equation*} \frac{t}{1+t} \lvert v_{x} \rvert_{2}^{2} \leq C e^{\omega t} \lvert (r_{0},v_{0}) \rvert_{H^{1} \times L^{2}}^{2}. \end{equation*} \end{lemma} \begin{proof} We argue by density and we take $(r_{0},v_{0}) \in \mathcal{H}_{2}$. We define \begin{equation*} \mathcal{F}(r,v)= \frac{A}{2} \left\lvert v \right\rvert_{2}^{2} + \frac{t}{1+t} \left\lvert v_{x} \right\rvert_{2}^{2}. \end{equation*} One can easily show that, taking $A>0$ large enough, \begin{equation*} \frac{d}{dt} \mathcal{F}(r,v) \leq C \left( \left\lvert r \right\rvert_{2}^{2} + \left\lvert v \right\rvert_{2}^{2} + \left\lvert r_{x} \right\rvert_{2}^{2} \right). \end{equation*} The result follows from the previous lemma. \end{proof} \end{comment} The following proposition gives linear exponential stability under the assumption of a spectral gap. It is the main result of this subsection. \begin{prop}\label{pruss_thm} Assume that $P$ satisfies \eqref{pressure_cond}. Assume that there exists a constant $\alpha>0$, such that $\Re \sigma(\mathcal{S}^{-1} \mathcal{L}) < - \alpha$. 
Then, there exist $\theta$ and $C$ with $0 < \theta < \alpha$ such that for any $(r_{0},v_{0}) \in \mathcal{H}_{1,0}$
\begin{equation*}
\left\lvert e^{t \mathcal{S}^{-1} \mathcal{L}}(r_{0},v_{0}) \right\rvert_{H^{1} \times L^{2}} \leq C e^{-\theta t} \left\lvert (r_{0},v_{0}) \right\rvert_{H^{1} \times L^{2}}.
\end{equation*}
\end{prop}
\begin{proof}
If $(r_{0},v_{0}) \in \mathcal{H}_{1}$, Proposition \ref{high_freq_estimate} gives two constants $C$ and $\theta$ with $0 < \theta < \alpha$ such that for any $\lambda \in \mathbb{C}$ satisfying $\Re(\lambda) = - \theta$
\begin{equation*}
\left\lvert \left(\lambda - \mathcal{S}^{-1} \mathcal{L} \right)^{-1} (r_{0},v_{0}) \right\rvert_{H^{1} \times L^{2}} \leq C \left\lvert (r_{0},v_{0}) \right\rvert_{H^{1} \times L^{2}}.
\end{equation*}
\noindent The result follows by density and Pr\"uss' theorem (see for instance \cite{Pruss_theorem,Yosida_book}).
\begin{comment}
we get that for $0<\theta<\alpha$
\begin{equation*}
\left\lvert e^{t \mathcal{S}^{-1} \mathcal{L}}(r_{0},v_{0}) \right\rvert_{H^{1}} \leq C e^{-\theta t} \left\lvert (r_{0},v_{0}) \right\rvert_{H^{1}}.
\end{equation*}
Then using the previous lemmas we have for $t \geq 1$
\begin{equation*}
\left\lvert e^{t \mathcal{S}^{-1} \mathcal{L}}(r_{0},v_{0}) \right\rvert_{H^{1} \times L^{2}} \leq C e^{-\theta (t-1)} \left\lvert (r(1),v(1)) \right\rvert_{H^{1}} \leq C e^{-\theta (t-1)} \left\lvert (r_{0},v_{0}) \right\rvert_{H^{1} \times L^{2}}.
\end{equation*}
The result follows easily.
\end{comment}
\end{proof}
\section{Spectral stability}\label{section_spectral_stab}
\subsection{Constant and almost constant states}
First, we study the spectral stability of constant states.
\begin{prop}\label{spec_stab_constant}
Assume that $\left(\hat{\rho},\hat{u} \right)$ is a constant solution of \eqref{NS} and that $P$ satisfies \eqref{pressure_cond}. Then, there exists $\alpha>0$ such that $\Re \sigma(\mathcal{S}^{-1} \mathcal{L}) \leq - \alpha$.
\end{prop}
\begin{proof}
Computing $\Re \left( \left(\eqref{linear_constant}_{1}, P'(\hat{\rho}) \overline{r} \right)_{L^{2}} + \left(\eqref{linear_constant}_{2}, \hat{\rho} \overline{v} \right)_{L^{2}}\right)$, we get
\begin{equation*}
\Re(\lambda) \left( \left\lvert \sqrt{P'(\hat{\rho})} r \right\rvert^{2}_{2} + \left\lvert \sqrt{\hat{\rho}} v \right\rvert^{2}_{2} \right) + \nu \left\lvert v_{x} \right\rvert_{2}^{2} + \frac{1}{2} P'(\hat{\rho}(1)) u_{1} \left\lvert r(1) \right\rvert^{2} = 0.
\end{equation*}
Thus, $\Re(\lambda) < 0$. The result follows from Proposition \ref{compact_inverse_L} and Proposition \ref{high_freq_estimate}.
\end{proof}
We can now establish the main proposition of this part. We recall that $\left(\hat{\rho}, \hat{u} \right)$ is a steady solution of \eqref{NS}-\eqref{BC}. We introduce the Evans function associated to $\left(\hat{\rho}, \hat{u} \right)$
\begin{equation*}
\mathcal{D} \left[\rho_{0},u_{0},u_{1} \right] (\lambda) = v(1),
\end{equation*}
where $\left(r,v \right)$ satisfies the ordinary differential equation
\begin{equation}\label{eq_diff_evans_func}
\left\{
\begin{aligned}
&r_{x} = -\frac{\lambda r + \left( \hat{\rho} v \right)_{x} + \hat{u}_{x} r}{\hat{u}},\\
&\nu v_{xx} = \lambda \hat{\rho} v + \left(\hat{\rho} \hat{u} v \right)_{x} + P''(\hat{\rho}) \hat{\rho}_{x} r - P'(\hat{\rho}) \frac{\lambda r + \left( \hat{\rho} v \right)_{x} +\hat{u}_{x} r}{\hat{u}} + \hat{u}_{x} \left(\hat{u} r + \hat{\rho} v \right)
\end{aligned}
\right.
\end{equation}
with
\begin{equation*}
r(0) = v(0) = 0 \text{ , } v'(0)=1.
\end{equation*}
One can easily show that $\mathcal{D} \left[\rho_{0},u_{0},u_{1} \right](\lambda)=0$ if and only if $\lambda$ is an eigenvalue of \eqref{linear_steady}. We now establish the spectral stability of almost constant steady solutions of \eqref{NS}.
\begin{prop}
Assume that $P$ satisfies \eqref{pressure_cond}. Let $\left(\hat{\rho},\hat{u} \right)$ be the unique steady solution of \eqref{NS}-\eqref{BC}.
Assume that \eqref{almost_constant_assump} holds with $\varepsilon>0$ small enough. Then every eigenvalue $\lambda$ of \eqref{linear_steady}-\eqref{BC_linear_steady} has a negative real part.
\end{prop}
\begin{proof}
By Proposition \ref{high_freq_estimate}, problem \eqref{linear_steady}-\eqref{BC_linear_steady} does not have any eigenvalue with nonnegative real part outside a compact set $K$. Furthermore, from Proposition \ref{spec_stab_constant}, $\mathcal{D}\left[\rho_{0},u_{0},u_{0} \right]$ does not have any zero inside $K \cap \left\lbrace \Re > 0 \right\rbrace$. Since the Evans function $\mathcal{D}$ depends continuously on the boundary conditions, $\mathcal{D}\left[\rho_{0},u_{0},u_{1} \right]$ never vanishes inside $K \cap \left\lbrace \Re > 0 \right\rbrace$ for $\varepsilon$ small enough.
\end{proof}
\subsection{About general steady states}
In the previous part, we only proved the spectral stability of almost constant states. In this part, we present theoretical and numerical arguments that support the spectral stability of general steady states.
\medskip
We know from previous works that the stability index criterion gives a necessary condition for spectral stability (see for instance \cite{gardner_jones_stab_ind,pego_weinstein_stab_ind,gardner_zumbrun_gap_lemma}). The stability index criterion states that
\begin{equation*}
\sgn \left(\mathcal{D} \left[\rho_{0},u_{0},u_{1} \right](0) \right) \sgn \left( \mathcal{D} \left[\rho_{0},u_{0},u_{1} \right](+ \infty) \right) = 1.
\end{equation*}
The following proposition shows that this criterion is satisfied.
\begin{prop}
For all steady states of problem \eqref{NS}-\eqref{BC}, the stability index criterion is satisfied.
\end{prop}
\begin{proof}
First, we compute $\sgn \left(\mathcal{D} \left[\rho_{0},u_{0},u_{1} \right](0) \right)$.
Proceeding as in Proposition \ref{compact_inverse_L}, we get the following system
\begin{equation*}
\left\{
\begin{aligned}
&\hat{\rho} v +\hat{u} r = 0,\\
&\hat{\rho} \hat{u} v - \frac{\hat{\rho}}{\hat{u}} P'(\hat{\rho}) v = \nu v_{x} - \nu v_{x}(0),\\
&r(0)=v(0)=0 \text{ , } v_{x}(0)=1,
\end{aligned}
\right.
\end{equation*}
\noindent and we obtain
\begin{equation*}
v(x) = \int_{0}^{x} \exp \left( \frac{1}{\nu} \int_{y}^{x} \hat{\rho} \hat{u} - \frac{P'(\hat{\rho})}{\hat{u}} \hat{\rho} \, dz \right) dy \; v_{x}(0).
\end{equation*}
\noindent Then $\sgn v(1) = \sgn v_{x}(0) = 1$. Secondly, we compute $\mathcal{D} \left[\rho_{0},u_{0},u_{1} \right](+ \infty)$. We have
\begin{equation}\label{high_freq_system}
\left\{
\begin{aligned}
&\lambda r + \hat{u} r_{x} = f,\\
& \nu v_{xx} = \lambda \hat{\rho} v + P'(\hat{\rho}) r_{x} + g,\\
&r(0)=v(0)=0 \text{ , } v_{x}(0)=1,
\end{aligned}
\right.
\end{equation}
where $\left\lvert f \right\rvert_{2} + \left\lvert g \right\rvert_{2} \leq C \left(\left\lvert r \right\rvert_{2} + \left\lvert v \right\rvert_{2} + \left\lvert v_{x} \right\rvert_{2} \right)$. By solving the first equation of system \eqref{high_freq_system} we get for $\lambda$ large enough
\begin{equation*}
\left\lvert r \right\rvert_{2} \leq \frac{C}{\sqrt{\lambda}} \left(\left\lvert v \right\rvert_{2} + \left\lvert v_{x} \right\rvert_{2} \right).
\end{equation*}
\noindent We can rewrite the second equation of system \eqref{high_freq_system} as
\begin{equation*}
\nu v_{xx} = \lambda \hat{\rho} v + P'(\hat{\rho}) r_{x} + \widetilde{g} \text{ with } v(0)=0 \text{ , } v_{x}(0)=1,
\end{equation*}
where $\left\lvert \widetilde{g} \right\rvert_{2} \leq C \left(\left\lvert v \right\rvert_{2} + \left\lvert v_{x} \right\rvert_{2} \right)$. Then, we consider for $s \in [0,1]$ the equation
\begin{equation*}
\nu w_{xx} = \lambda \left((1-s) \hat{\rho} + s \right) w + (1-s) P'(\hat{\rho}) r_{x} + (1-s) \widetilde{g} \text{ with } w(0)=w(1)=0.
\end{equation*}
\noindent Multiplying by $w$ and integrating, we notice that when $\lambda$ is large enough the only solution of this equation is $w = 0$. Therefore, for $\lambda$ large enough, we define $z$ as the solution of
\begin{equation*}
\nu z_{xx} = \lambda z \text{ with } z(0)=0 \text{ , } z_{x}(0)=1,
\end{equation*}
\noindent and $v(1)$ and $z(1)$ agree in sign. It follows that $\sgn v(1) = \sgn z(1) = \sgn z_{x}(0) = 1$.
\end{proof}
This proposition also shows that problem \eqref{linear_steady}-\eqref{BC_linear_steady} has an even number of nonstable eigenvalues, i.e. eigenvalues with a nonnegative real part (see \cite{gardner_jones_stab_ind,pego_weinstein_stab_ind,gardner_zumbrun_gap_lemma}).
\medskip
\noindent Thanks to Proposition \ref{high_freq_estimate}, we can numerically check that $\sigma(\mathcal{S}^{-1} \mathcal{L})$ does not contain nonstable eigenvalues. Such verifications have for instance been done on the whole line (see \cite{num_stab_zum}).
\begin{figure}[!t]
\centering
\includegraphics[scale=0.3]{lambdagraph.png}
\caption{Contour in the complex plane.}
\label{lambdagraph}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.35]{evanscontour.png}
\caption{Image of a contour mapped by the Evans function. $\nu=1$, $u_{0} = \frac{3}{2}$, $u_{1} = 1$, $\rho_{0} = 2$.}
\label{evanscontour}
\end{figure}
\medskip
In the following, we display some numerical simulations for a polytropic pressure law $P(\rho) = \rho^{1.4}$. For any $\lambda$, we can compute the associated Evans function thanks to system \eqref{eq_diff_evans_func}. We use a fourth-order Runge-Kutta scheme. For each value of $u_{0}$, $u_{1}$, $\rho_{0}$ and $\nu$, we compute the Evans function along semi-circular contours of radius $M$ (see Figure \ref{lambdagraph}). We choose $M$ large enough such that our domain contains the half ball of Lemma \ref{high_frequency_control}.
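The contour computation can be sketched in a few lines. The sketch below is only an illustration, specialised to a constant steady state (so that the Evans system \eqref{eq_diff_evans_func} reduces to the constant-coefficient form of \eqref{linear_constant}); the parameter values are illustrative and are not taken from the simulations above. The winding number of the image contour counts the eigenvalues enclosed.

```python
# Sketch of the Evans-function winding-number check for a CONSTANT steady
# state (rho_h, u_h), where the Evans system becomes
#   r' = -(lam*r + rho_h*v')/u_h,
#   nu*v'' = lam*rho_h*v + rho_h*u_h*v' + P'(rho_h)*r',
# with r(0)=v(0)=0, v'(0)=1 and D(lam)=v(1).  Illustrative parameters only.
import cmath
import math

nu, rho_h, u_h = 1.0, 2.0, 1.5
dP = 1.4 * rho_h ** 0.4                    # P'(rho_h) for P(rho) = rho^1.4

def evans(lam, n=200):
    """D(lam) = v(1), computed with a classical RK4 scheme on (r, v, v')."""
    def f(y):
        r, v, w = y                        # w stands for v'
        rx = -(lam * r + rho_h * w) / u_h
        return (rx, w, (lam * rho_h * v + rho_h * u_h * w + dP * rx) / nu)
    h, y = 1.0 / n, (0j, 0j, 1 + 0j)
    for _ in range(n):
        k1 = f(y)
        k2 = f(tuple(a + 0.5 * h * b for a, b in zip(y, k1)))
        k3 = f(tuple(a + 0.5 * h * b for a, b in zip(y, k2)))
        k4 = f(tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h * (p + 2 * q + 2 * s + t) / 6.0
                  for a, p, q, s, t in zip(y, k1, k2, k3, k4))
    return y[1]

def winding_number(M=10.0, n=200):
    """Winding of D along the boundary of the half-disc {Re > 0, |lam| < M}."""
    pts = [M * cmath.exp(1j * (-math.pi / 2 + math.pi * k / n))
           for k in range(n + 1)]                          # semicircle
    pts += [1j * M * (1.0 - 2.0 * k / n) for k in range(1, n)]  # imaginary axis
    vals = [evans(lam) for lam in pts] + [evans(pts[0])]   # close the contour
    total = sum(cmath.phase(b / a) for a, b in zip(vals, vals[1:]))
    return round(total / (2 * math.pi))
```

For a constant state the winding number is expected to be zero, consistent with Proposition \ref{spec_stab_constant}; a nonzero value would reveal eigenvalues inside the half-disc.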
Figure \ref{evanscontour} represents the image of the contour with $M=10$, $\nu=1$, $u_{0} = \frac{3}{2}$, $u_{1} = 1$ and $\rho_{0} = 2$. Figure \ref{evanscontour2} represents the image of the contour with $M=10$, $\nu=0.1$, $u_{0} = \frac{3}{2}$, $u_{1} = 1$ and $\rho_{0} = 2$. We can see in these examples that the winding numbers of both graphs are zero. Several computations have been performed for other values of the parameters $\nu \in [0.1,10]$, $u_{0} \in [1,10]$, $u_{1} \in [1,10]$ and $\rho_{0} \in [1,10]$. We could not find any nonstable eigenvalues. \begin{figure}[!t] \centering \includegraphics[scale=0.35]{evanscontour2.png} \caption{Image of a contour mapped by the Evans function. $\nu=0.1$, $u_{0} = \frac{3}{2}$, $u_{1} = 1$, $\rho_{0} = 2$.} \label{evanscontour2} \end{figure} \section{Local existence}\label{section_local_existence} In this section, we state a local well-posedness result for problem \eqref{NS}-\eqref{BC} (see e.g. \cite{Matsumura_Nishida,Matsumura_Nishihara}). \begin{prop}\label{local_existence} Let $\rho_0>0,$ $u_0>0$ and $u_1>0$. Assume that $P$ satisfies \eqref{pressure_cond}. Let $\left(\rho_{ini},u_{ini} \right) \in H^{1}$ satisfy the boundary conditions \eqref{BC} and $\rho_{ini} > 0$. Then, there exists a time $T>0$ such that problem \eqref{NS}-\eqref{BC} has a unique solution $(\rho,u)$ in $\mathcal{C}\left([0,T];H^{1}(0,1) \right)$ with \begin{equation*} \underset{[0,T]}{\sup} \left\lvert \left( \rho,u \right)(t) \right\rvert_{H^{1}} \leq 2 \left\lvert \left( \rho_{ini},u_{ini} \right) \right\rvert_{H^{1}} \text{ and } \rho(t,x) \geq \frac{\rho_{ini}(x)}{2} \text{ , } 0 \leq t \leq T \text{ , } 0 \leq x \leq 1. \end{equation*} \end{prop} \section{Nonlinear stability}\label{section_nonlinear_stab} For a solution $(\rho,u)$ of problem \eqref{NS}-\eqref{BC}, we define $(r,v) = (\rho - \hat{\rho}, u - \hat{u})$.
We notice that $(r,v)$ satisfies the boundary conditions \eqref{BC_linear_steady} and ($\mathcal{L}$ and $\mathcal{S}$ are defined in Section \ref{eigenvalue_problem_part}) \begin{equation}\label{eq_nonlinear_int} \begin{pmatrix} 1 & 0 \\ 0 & \rho \end{pmatrix} \begin{pmatrix} r \\ v \end{pmatrix}_{t} - \mathcal{L} \begin{pmatrix} r \\ v \end{pmatrix} = \begin{pmatrix} - \left( rv \right)_{x} \\ -(\hat{u}v)_{x} r - \hat{\rho} v v_{x} - vrv_{x} - \left(P(\rho) - P(\hat{\rho}) - P'(\hat{\rho})r \right)_{x} \end{pmatrix}. \end{equation} Then, we get \begin{equation}\label{eq_nonlinear} \mathcal{S} \begin{pmatrix} r \\ v \end{pmatrix}_{t} - \mathcal{L} \begin{pmatrix} r \\ v \end{pmatrix} = \mathcal{N} \text{ with } r(t,0) = v(t,0) = v(t,1)=0 \end{equation} where $\mathcal{N}_{1} = -(rv)_{x}$ and \begin{align*} \mathcal{N}_{2} = &-\frac{\hat{\rho}}{\hat{\rho} + r} \left[ (\hat{u}v)_{x} r + \hat{\rho} v v_{x} + vrv_{x} + \left(P(r + \hat{\rho}) - P(\hat{\rho}) - P'(\hat{\rho})r \right)_{x} \right]\\ &+ \frac{r}{\hat{\rho} + r} \left[\left(\hat{\rho} \hat{u} v + P'(\hat{\rho}) r \right)_{x} + \hat{u}_{x} \left(\hat{u} r + \hat{\rho} v \right) - \nu v_{xx} \right]. \end{align*} Notice that $\mathcal{N}_{1}(t,0) = \mathcal{N}_{2}(t,0) = 0$ and that \small \begin{equation*} \mathcal{N}_{2}(t,1) = - \left[ \hat{u} v_{x} r + \left(P'(\hat{\rho}+r) - P'(\hat{\rho}) - P''(\hat{\rho})r \right) \left(\hat{\rho} + r \right)_{x} + P''(\hat{\rho}) r r_{x} \right](t,1). \end{equation*} \normalsize The following proposition is a nonlinear damping estimate. \begin{prop}\label{nonlinear_damping} Let $T>0$ and consider a solution $(r,v) \in \mathcal{C}\left([0,T];H^{1} \right)$ of \eqref{eq_nonlinear} on $[0,T]$. Assume that $P$ satisfies \eqref{pressure_cond} and that there exists $\varepsilon >0$ small enough such that \begin{equation*} \underset{[0,T]}{\sup} \left\lvert (r,v)(t) \right\rvert_{H^{1}} \leq \varepsilon.
\end{equation*} Then, there exist constants $C>0$ and $\theta_{0} > 0$ such that for all $0 \leq t \leq T$ and any $\theta \leq \theta_{0}$, \begin{equation*} \left\lvert (r,v)(t) \right\rvert_{H^{1}}^{2} \leq C e^{-\theta t} \left\lvert (r,v)(0) \right\rvert_{H^{1}}^{2} + C \int_{0}^{t} e^{-\theta (t-s)} \left\lvert (r,v)(s) \right\rvert_{2}^{2} ds. \end{equation*} Furthermore, if $(r,v) \in \mathcal{C}\left([0,T];H^{2} \times H^{3} \right) \cap \mathcal{C}^{1}\left([0,T];H^{1} \right)$ and, for $\varepsilon$ small enough, \begin{equation*} \underset{[0,T]}{\sup} \left\lvert (r,v)(t) \right\rvert_{H^{2} \times H^{3}} \leq \varepsilon, \end{equation*} then there exist constants $C>0$ and $\theta_{1} > 0$ such that for any $0 \leq t \leq T$ and $\theta < \theta_{1}$, \begin{equation*} \left\lvert (r,v)(t) \right\rvert_{H^{2} \times H^{3}}^{2} \leq C e^{-\theta t} \left\lvert (r,v)(0) \right\rvert_{H^{2} \times H^{3}}^{2} + C \int_{0}^{t} e^{-\theta (t-s)} \left\lvert (r,v)(s) \right\rvert_{2}^{2} ds. \end{equation*} \end{prop} \begin{proof} This proof is based on an appropriate Goodman-type energy estimate and is similar to the proof of Proposition \ref{high_freq_estimate}. We define an energy equivalent to the $H^{1}$-norm (by the Poincar\'e inequality \ref{poincare}) \begin{equation*} \mathcal{E} \left(r,v \right) = \frac{1}{2} \int_{0}^{1} \phi_{1} \left\lvert r_{x} \right\rvert^{2} + \phi_{2} \left\lvert \left( \hat{\rho} v \right)_{x} \right\rvert^{2} \end{equation*} where $\phi_{1}$ and $\phi_{2}$ satisfy \begin{equation*} \phi_{1} > 0 \text{ , } \phi_{2} > 0 \text{ , } \phi_{1} = P'(\hat{\rho}) \phi_{2} \text{ , } \frac{1}{2} (\hat{u} \phi_{1})_{x} - 2 \hat{u}_{x} \phi_{1} < 0.
\end{equation*} Then, after some computations, we obtain \small \begin{equation*} \begin{aligned} \frac{d}{dt} \mathcal{E} \left(r,v \right) \leq &-\nu \int_{0}^{1} \hat{\rho} \phi_{2} \left\lvert v_{xx} \right\rvert^{2} + \left(\frac{1}{2} (\hat{u} \phi_{1})_{x} - 2 \hat{u}_{x} \phi_{1} \right) \left\lvert r_{x} \right\rvert^{2} + r_{x} v_{xx} \left(- \hat{\rho} \phi_{1} + \hat{\rho} P'(\hat{\rho}) \phi_{2} \right)\\ &+ \left[-\frac{1}{2} \hat{u} \phi_{1} \left\lvert r_{x} \right\rvert^{2} + \phi_{2} \hat{\rho} v_{x} \left(\nu v_{xx} - P'(\hat{\rho}) r_{x} \right) \right]_{0}^{1} + \int_{0}^{1} \phi_{1} r_{x} \left( \mathcal{N}_{1} \right)_{x} + \phi_{2} \left( \hat{\rho} v \right)_{x} \left(\mathcal{N}_{2} \right)_{x}\\ &+C \left\lvert r_{x} \right\rvert_{2} \left( \left\lvert r \right\rvert_{2} + \left\lvert v \right\rvert_{H^{1}} \right) + C \left\lvert v \right\rvert_{H^{1}} \left( \left\lvert r \right\rvert_{H^{1}} + \left\lvert v \right\rvert_{H^{1}} + \left\lvert v_{xx} \right\rvert_{2}\right). \end{aligned} \end{equation*} \normalsize Integrating by parts and using Lemma \ref{Linfty_controls} we get \begin{equation*} \int_{0}^{1} \phi_{1} r_{x} \left( \mathcal{N}_{1} \right)_{x} + \phi_{2} \left( \hat{\rho} v \right)_{x} \left(\mathcal{N}_{2} \right)_{x} \leq \left[\phi_{2} \hat{\rho} v_{x} \mathcal{N}_{2} \right]_{0}^{1} + C \left\lvert (r,v) \right\rvert_{H^{1}}^{2} (\left\lvert (r,v) \right\rvert_{H^{1}} + \left\lvert v_{xx} \right\rvert_{2}). \end{equation*} Then, since $\mathcal{N}_{1}(t,0) = \mathcal{N}_{2}(t,0) = 0$, we have \small \begin{align*} & u_{0} r_{x}(t,0) = - \rho_{0} v_{x}(t,0),\\ &\nu v_{xx}(t,0) - P'(\rho_{0}) r_{x}(t,0) = \rho_{0} u_{0} v_{x}(t,0),\\ &\nu v_{xx}(t,1) - P'(\hat{\rho}(1)) r_{x}(t,1) + \mathcal{N}_{2}(t,1) = \hat{\rho}(1) u_{1} v_{x}(t,1) + P''(\hat{\rho}(1)) \hat{\rho}_{x}(1) r(t,1) + \hat{u}_{x}(1) u_{1} r(t,1). 
\end{align*} \normalsize Finally, thanks to the previous boundary equalities, Lemma \ref{Linfty_controls}, Lemma \ref{interpolation_thm}, Young's inequality and the fact that $\left\lvert (r,v) \right\rvert_{H^{1}}$ is small enough, we obtain \begin{equation*} \frac{d}{dt} \mathcal{E} \left(r,v \right) \leq - \theta_{0} \left\lvert (r_{x},v_{xx}) \right\rvert_{2}^{2} + C \left\lvert (r,v) \right\rvert_{2}^{2}. \end{equation*} The first inequality easily follows from the Poincar\'e-Wirtinger inequality. Similarly, since $(r_{t},v_{t})$ satisfies the boundary conditions \eqref{BC_linear_steady}, we get for $\varepsilon$ and $\theta_{0}$ small enough \begin{equation*} \frac{d}{dt} \mathcal{E} \left(r_{t},v_{t} \right) \leq - \theta_{0} \left\lvert (r_{tx},v_{txx}) \right\rvert_{2}^{2} + C\left\lvert (r_{t},v_{t}) \right\rvert_{2}^{2}. \end{equation*} Then, using \eqref{eq_nonlinear_int} we notice that \begin{equation*} \left\lvert (r_{t},v_{t}) \right\rvert_{2} \leq C \left\lvert (r,v) \right\rvert_{H^{1}} + C \left\lvert v_{xx} \right\rvert_{2}. \end{equation*} Therefore using the Poincar\'e inequalities, we get for $\delta$ and $\theta_{1}$ small enough \begin{equation}\label{Et} \frac{d}{dt} \left( \mathcal{E} \left( r,v \right) + \delta \mathcal{E} \left( r_{t},v_{t} \right) \right) \leq - \theta_{1} \left( \left\lvert (r_{x},v_{x}) \right\rvert_{2}^{2} + \left\lvert (r_{tx},v_{tx}) \right\rvert_{2}^{2} \right) + C \left\lvert (r,v) \right\rvert_{2}^{2} \end{equation} and the result easily follows from the fact that \begin{equation*} \left\lvert (r,v) \right\rvert_{H^{2} \times H^{3}} \leq C \left\lvert (r,v) \right\rvert_{H^{1}} + C \left\lvert (r_{t},v_{t}) \right\rvert_{2}. \end{equation*} \end{proof} \begin{remark}\label{NZrmk} By taking further time-derivatives, we could obtain an estimate similar to \eqref{Et} in an arbitrarily high-regularity Sobolev space of mixed type $H^r\times H^s$, with $s\sim 2r$ as $r\to \infty$. 
This observation repairs a minor error in \cite{toan_zumbrun_ns}, where an estimate with $r=s$ is cited. \end{remark} We can now state the main result of this paper. \begin{thm}\label{stab_result} Let $\rho_0>0,$ $u_0>0$ and $u_1>0$. Let $(\hat{\rho},\hat{u})$ be the unique steady solution of problem \eqref{NS}-\eqref{BC}. Assume that $P$ satisfies \eqref{pressure_cond}. Assume that there exists $\alpha>0$ such that $\Re (\sigma(\mathcal{S}^{-1} \mathcal{L})) < - \alpha$. Then, there exist $\varepsilon>0$, $\theta > 0$ and $C>0$ such that for any $\left(\rho_{ini},u_{ini} \right) \in H^{2} \times H^{3}$ satisfying the boundary conditions \eqref{BC}, the compatibility conditions \small \begin{equation}\label{compatibility_bound_cond} \left(\rho_{ini} u_{ini} \right)_{x}\!(0)=0, \left(\rho_{ini} u_{ini}^{2} + P(\rho_{ini}) - \nu u_{ini \; x} \right)_{x}\!(0)=0, \left(\rho_{ini} u_{ini}^{2} + P(\rho_{ini}) - \nu u_{ini \; x} \right)_{x}\!(1)=0 \end{equation} \normalsize and \begin{equation*} \left\lvert \left(\rho_{ini},u_{ini} \right) - \left(\hat{\rho},\hat{u} \right) \right\rvert_{H^{2} \times H^{3}} \leq \varepsilon, \end{equation*} the unique solution $(\rho,u)$ of problem \eqref{NS}-\eqref{BC} with the initial condition $\left(\rho_{ini},u_{ini} \right)$ satisfies \begin{equation*} \left\lvert \left(\rho,u \right)(t) - \left(\hat{\rho},\hat{u} \right) \right\rvert_{H^{2} \times H^{3}} \leq C \left\lvert \left(\rho_{ini},u_{ini} \right) - \left(\hat{\rho},\hat{u} \right) \right\rvert_{H^{2} \times H^{3}} e^{- \theta t}. \end{equation*} \end{thm} \begin{remark}\label{H1XL2_explanation} As we will see in the proof, since we do not know if $\mathcal{N}_{2}(t,1) = 0$, the only way to use a linear damping estimate is to work in $L^{2}$ for the $v$ component. That is why in Proposition \ref{pruss_thm} we used $H^{1} \times L^{2}$ and not $H^{1}$. Notice also that we impose the compatibility conditions \eqref{compatibility_bound_cond} in order to get enough regularity.
\end{remark} \begin{proof} We set $U(t,x) = \left(\rho,u \right)(t,x) - \left(\hat{\rho},\hat{u} \right)(x)$. Let $T$ be the existence time of Proposition \ref{local_existence}. The Duhamel formulation of Equation \eqref{eq_nonlinear} is, for $0\leq t \leq T$, \begin{equation*} U(t) = e^{t \mathcal{S}^{-1} \mathcal{L}} U(0) + \int_{0}^{t} e^{(t-s) \mathcal{S}^{-1} \mathcal{L}} \mathcal{S}^{-1} \mathcal{N}(s) ds. \end{equation*} Noticing that $\mathcal{N} \in \left\{(r,v) \in H^{1} \times L^{2} \text{, } r(0)=0 \right\}$ and that $\mathcal{N}$ contains at least quadratic terms, Proposition \ref{pruss_thm} gives the existence of $\theta>0$ such that \begin{equation*} \left\lvert U(t) \right\rvert_{2} \leq \left\lvert U(t) \right\rvert_{H^{1} \times L^{2}} \leq C e^{- \theta t} \left\lvert U(0) \right\rvert_{H^{1} \times L^{2}} + \int_{0}^{t} e^{- \theta (t-s)} \left\lvert U(s) \right\rvert_{H^{2}}^{2} C \left(\left\lvert U(s) \right\rvert_{H^{2}} \right) ds. \end{equation*} Then, the equality \eqref{eq_nonlinear_int} gives \begin{equation*} \left\lvert U(t) \right\rvert_{2} \leq C e^{- \theta t} \left\lvert U(0) \right\rvert_{H^{1} \times L^{2}} + \int_{0}^{t} C \left(\left\lvert U(s) \right\rvert_{H^{2}} \right) e^{- \theta (t-s)} \left( \left\lvert U(s) \right\rvert_{H^{1}}^{2} + \left\lvert U_{t}(s) \right\rvert_{H^{1}}^{2} \right) ds. \end{equation*} Furthermore, the compatibility conditions \eqref{compatibility_bound_cond} imply that $U \in \mathcal{C}\left([0,T];H^{2} \times H^{3} \right)$. Therefore, we can use the nonlinear damping estimate of Proposition \ref{nonlinear_damping} and, by Proposition \ref{local_existence}, the $H^{2}$-norm of $U$ is controlled by the initial condition.
We get for $\varepsilon$ and $\theta>0$ small enough \begin{equation*} \left\lvert U(t) \right\rvert_{2} \leq C \left(\left\lvert U(0) \right\rvert_{H^{2} \times H^{3}} \right) \left((1+t) e^{- \theta t} \left\lvert U(0) \right\rvert_{H^{2} \times H^{3}} + \int_{0}^{t} (t-s) e^{- \theta (t-s)} \left\lvert U(s) \right\rvert_{2}^{2} ds \right). \end{equation*} Denoting $\zeta_{0}(t) = \underset{[0,t]}{\sup} \;\; e^{\frac{\theta}{2} s} \left\lvert U(s) \right\rvert_{2}$, we obtain that for $0 \leq t \leq T$ \begin{equation*} \zeta_{0}(t) \leq C \left( \left\lvert U(0) \right\rvert_{H^{2} \times H^{3}} \right) \left( \varepsilon + \zeta_{0}(t)^{2} \right). \end{equation*} Furthermore, denoting $\zeta_{1}(t) = \underset{[0,t]}{\sup} \;\; \left( e^{\frac{\theta}{2} s} \left\lvert U(s) \right\rvert_{H^{1}} + e^{\frac{\theta}{2} s} \left\lvert U_{t}(s) \right\rvert_{H^{1}} \right)$ and using Proposition \ref{nonlinear_damping}, $\zeta_{1}$ is also controlled on $[0,T]$. Finally, if $\varepsilon$ is small enough, we can take $T = +\infty$ and $\zeta_{1}$ is bounded on $\mathbb{R}^{+}$. \end{proof} \section{An improvement in some situations}\label{section_improvement} The main result of this paper, Theorem \ref{stab_result}, states that spectrally stable steady states are stable in $H^{2} \times H^{3}$. In this part, we prove that under more restrictive conditions, we can state a stability result in $H^{1} \times H^{2}$. To achieve that, we add another assumption \begin{equation}\label{cond2} \begin{aligned} &P'' > 0 \text{ if } \hat{u}_{x} > 0 \text{ (compressive solutions)},\\ &\frac{P''(y)}{P'(y)} < \frac{2}{y} \text{ and } \hat{\rho}_{x} < \frac{1}{4} \hat{\rho} \text{ if } \hat{u}_{x} < 0 \text{ (small expansive solutions)}.\\ \end{aligned} \end{equation} With this additional assumption, we can establish a high frequency estimate in $L^{2}$. 
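For instance, for a $\gamma$-law pressure $P(\rho) = \rho^{\gamma}$ with $\gamma>1$ (the numerical simulations above use $\gamma = 1.4$), we have
\begin{equation*}
P''(y) = \gamma(\gamma-1) y^{\gamma-2} > 0 \qquad \text{and} \qquad \frac{P''(y)}{P'(y)} = \frac{\gamma-1}{y} < \frac{2}{y} \text{ for } \gamma < 3,
\end{equation*}
so that the compressive condition in \eqref{cond2} always holds and the first expansive condition holds for any $1<\gamma<3$; only the smallness requirement $\hat{\rho}_{x} < \frac{1}{4} \hat{\rho}$ depends on the steady solution itself.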
\begin{prop}\label{high_freq_estimate_L2} Assume that $P$ satisfies \eqref{pressure_cond} and that Condition \eqref{cond2} is satisfied. There exist constants $\alpha > 0$ and $C>0$ such that if $\Re(\lambda) > - \alpha$ and $\left\lvert \lambda \right\rvert$ is large enough, \begin{equation*} \left\lvert (r,v) \right\rvert^{2}_{2} \leq C \left\lvert \left(\lambda \mathcal{S} - \mathcal{L} \right) (r,v) \right\rvert^{2}_{2}, \end{equation*} for any $(r,v)$ satisfying the boundary conditions \eqref{BC_linear_steady}. \end{prop} \begin{proof} This proof is based on an appropriate Goodman-type energy estimate. In the following we denote $\left(\lambda \mathcal{S} - \mathcal{L}\right) (r,v) =(f,g)$. We define the following energy \begin{equation*} \mathcal{E} \left(r,v \right) = \frac{1}{2} \int_{0}^{1} \phi_{1} \left\lvert r \right\rvert^{2} + \phi_{2} \hat{\rho} \left\lvert v \right\rvert^{2} \end{equation*} where $\phi_{1}$ and $\phi_{2}$ satisfy \begin{align*} \phi_{1} > 0 \text{ , } \phi_{2} > 0 \text{ , } \hat{\rho} \phi_{1} = P'(\hat{\rho}) \phi_{2}. \end{align*} Then, we compute \begin{equation*} 2 \Re(\lambda) \mathcal{E} \left(r,v \right) = \Re \left( \int_{0}^{1} \phi_{1} \overline{r} \lambda r \right) + \Re \left( \int_{0}^{1} \phi_{2} \hat{\rho} \overline{v} \lambda v \right).
\end{equation*} After some computations we get \small \begin{equation*} \begin{aligned} 2 \Re(\lambda) \mathcal{E} \left(r,v \right) \leq &\int_{0}^{1} - \nu \phi_{2} \left\lvert v_{x} \right\rvert^{2} + \frac{1}{2} \hat{u}^{2} \left( \frac{\phi_{1}}{\hat{u}} \right)_{x} \left\lvert r \right\rvert^{2} + \Re(\overline{r} v_{x}) \left[P'(\hat{\rho}) \phi_{2} - \hat{\rho} \phi_{1} \right] + C \left\lvert (f,g) \right\rvert_{2} \left\lvert (r,v) \right\rvert_{2}\\ &+ \int_{0}^{1} \left[\nu (\phi_{2})_{xx} - 2 \phi_{2} \hat{u}_{x} \hat{\rho} + (\phi_{2})_{x} \hat{\rho} \hat{u} \right] \frac{\left\lvert v \right\rvert^{2}}{2} + \Re \left(v \overline{r} \right) \left[(\phi_{2})_{x} P'(\hat{\rho}) - \phi_{1} \hat{\rho}_{x} - \phi_{2} \hat{u}_{x} \hat{u} \right]\!. \end{aligned} \end{equation*} \normalsize Then, we separately consider the three situations $\hat{u}_{x} > 0$ (compressive solution), $\hat{u}_{x} = 0$ (constant solution) and $\hat{u}_{x} < 0$ (expansive solution). \medskip \noindent - If $\hat{u}_{x} > 0$, we take $\phi_{2} = 1$, $\phi_{1} = \frac{P'(\hat{\rho})}{\hat{\rho}}$ and we get \begin{equation*} \hat{u}^{2} \left( \frac{\phi_{1}}{\hat{u}} \right)_{x} = \hat{u} (\phi_{1})_{x} - \hat{u}_{x} \phi_{1} = \frac{P''(\hat{\rho}) \hat{\rho}_{x} \hat{u}}{\hat{\rho}} < 0 \text{ , } \nu (\phi_{2})_{xx} - 2 \phi_{2} \hat{u}_{x} \hat{\rho} + (\phi_{2})_{x} \hat{\rho} \hat{u} = -2 \hat{u}_{x} \hat{\rho} < 0. \end{equation*} \medskip \noindent - If $\hat{u}_{x} = 0$, we take $\phi_{1} = P'(\hat{\rho}) - \beta x$, $\phi_{2} = \hat{\rho} - \beta \frac{\hat{\rho}}{P'(\hat{\rho})} x$ with $\beta > 0$ small enough, and we get \begin{equation*} \hat{u} (\phi_{1})_{x} - \hat{u}_{x} \phi_{1} = - \beta \hat{u} < 0 \text{ , } \nu (\phi_{2})_{xx} - 2 \phi_{2} \hat{u}_{x} \hat{\rho} + (\phi_{2})_{x} \hat{\rho} \hat{u} = - \beta \frac{\hat{\rho}^{2} \hat{u}}{P'(\hat{\rho})} < 0. 
\end{equation*} \medskip \noindent - If $\hat{u}_{x} < 0$, we take $\phi_{2}(x) = \sqrt{M-2x}$, $M>2$ and $\phi_{1}(x) = \frac{P'(\hat{\rho})}{\hat{\rho}} \phi_{2}(x)$ and thanks to Condition \eqref{cond2} we get \begin{align*} &\hat{u} (\phi_{1})_{x} - \hat{u}_{x} \phi_{1} = \frac{\phi_{2}}{\hat{\rho}} P'(\hat{\rho}) \hat{u} \left( \frac{P''(\hat{\rho})}{P'(\hat{\rho})} \hat{\rho}_{x} - \frac{1}{M-2x} \right) < 0,\\ &\nu (\phi_{2})_{xx} - 2 \phi_{2} \hat{u}_{x} \hat{\rho} + (\phi_{2})_{x} \hat{\rho} \hat{u} \leq \phi_{2} \hat{u} \left(2 \hat{\rho}_{x} - \frac{\hat{\rho}}{M-2x} \right) < 0. \end{align*} \medskip \noindent Moreover, in any case, we have (denoting $\tilde{r}(x) = \int_{0}^{x} r(y) dy$) \begin{equation*} \int_{0}^{1} \Re \left(v \overline{r} \right) \left((\phi_{2})_{x} P'(\hat{\rho}) - \phi_{1} \hat{\rho}_{x} - \phi_{2} \hat{u}_{x} \hat{u} \right) \leq C \left\lvert \tilde{r} \right\rvert_{2} \left\lvert v \right\rvert_{H^{1}}. \end{equation*} \noindent Thus, using the first inequality of Lemma \ref{high_frequency_control}, we can find a constant $\alpha > 0$ such that, for $\left\lvert \lambda \right\rvert$ large enough, \begin{equation*} 2 \Re(\lambda) \mathcal{E} \left(r,v \right) \leq - \alpha \left\lvert (r,v) \right\rvert_{2}^{2} + C \left\lvert \left(\lambda \mathcal{S} - \mathcal{L}\right) (r,v) \right\rvert_{2}^{2}, \end{equation*} and the inequality follows. \end{proof} Thanks to this $L^2$ high frequency estimate, we can improve Proposition \ref{pruss_thm}. Under the assumption that $\Re \left( \sigma(\mathcal{S}^{-1} \mathcal{L}) \right) \leq - \alpha < 0$, we get \begin{equation*} \left\lvert e^{t \mathcal{S}^{-1} \mathcal{L}} (r,v) \right\rvert_{2} \leq C e^{-\alpha t} \left\lvert (r,v) \right\rvert_{2}. \end{equation*} Furthermore, thanks to the previous appropriate Goodman-type estimate, we can improve the nonlinear damping estimate in Proposition \ref{nonlinear_damping}.
If $(r,v) \in \mathcal{C}\left([0,T];H^{1} \times H^{2} \right)$ is a solution of \eqref{eq_nonlinear} on $[0,T]$ and \begin{equation*} \underset{[0,T]}{\sup} \left\lvert (r,v)(t) \right\rvert_{H^{1} \times H^{2}} \leq \varepsilon \end{equation*} for $\varepsilon$ small enough, we have \begin{equation*} \left\lvert (r_{t},v_{t})(t) \right\rvert_{2}^{2} \leq C e^{-\theta t} \left\lvert (r,v)(0) \right\rvert_{H^{1} \times H^{2}}^{2} + C \int_{0}^{t} e^{-\theta (t-s)} \left\lvert (r,v)(s) \right\rvert_{2}^{2} ds. \end{equation*} Finally, applying the Duhamel formulation in $L^{2}$, we obtain the following theorem. \begin{thm}\label{stab_result2} Let $\rho_0>0,$ $u_0>0$ and $u_1>0$. Let $(\hat{\rho},\hat{u})$ be the unique steady solution of problem \eqref{NS}-\eqref{BC}. Assume that $P$ satisfies \eqref{pressure_cond} and that Condition \eqref{cond2} is satisfied. Assume that there exists $\alpha>0$ such that $\Re (\sigma(\mathcal{S}^{-1} \mathcal{L})) < - \alpha$. Then, there exist $\varepsilon>0$, $\theta > 0$ and $C>0$ such that for any $\left(\rho_{ini},u_{ini} \right) \in H^{1} \times H^{2}$ satisfying the boundary conditions \eqref{BC} and \begin{equation*} \left\lvert \left(\rho_{ini},u_{ini} \right) - \left(\hat{\rho},\hat{u} \right) \right\rvert_{H^{1} \times H^{2}} \leq \varepsilon, \end{equation*} the unique solution $(\rho,u)$ of problem \eqref{NS}-\eqref{BC} with the initial condition $\left(\rho_{ini},u_{ini} \right)$ satisfies \begin{equation*} \left\lvert \left(\rho,u \right)(t) - \left(\hat{\rho},\hat{u} \right) \right\rvert_{H^{1} \times H^{2}} \leq C \left\lvert \left(\rho_{ini},u_{ini} \right) - \left(\hat{\rho},\hat{u} \right) \right\rvert_{H^{1} \times H^{2}} e^{- \theta t}. \end{equation*} \end{thm}
https://arxiv.org/abs/1303.3064
Locality and thermalization in closed quantum systems
We derive a necessary and sufficient condition for the thermalization of a local observable in a closed quantum system which offers an alternative explanation, independent of the eigenstate thermalization hypothesis, for the thermalization process. We also show that this approach is useful to investigate thermalization based on a finite-size scaling of numerical data. The condition follows from an exact representation of the observable as a sum of a projection onto the local conserved charges of the system and a projection onto the non-local ones. We show that thermalization requires that the time average of the latter part vanishes in the thermodynamic limit while time and statistical averages for the first part are identical. As an example, we use this thermalization condition to analyze exact diagonalization data for a one-dimensional spin model. We find that local correlators do thermalize in the thermodynamic limit although we find no indications that the eigenstate thermalization hypothesis applies.
\section{Supplementary Material} The purpose of this supplementary material is threefold. In Sec.~\ref{Details} we first provide some additional information about the basis rotation required to split an observable into a local and a non-local part, present numerical data testing the ETH, and give some technical details about the coarse graining procedure used to obtain a continuous density of states. Secondly, we explain in Sec.~\ref{DMRG} the time-dependent DMRG algorithm used to produce the data for infinite system size. In particular, we present as additional information the time dependence of the correlation functions after the quench. Adding to the study of generic quantum systems presented in the paper, we finally study the same quantities for an integrable quench in Sec.~\ref{Integrable}. \section{Details of the calculation} \label{Details} In Sec.~\ref{splitting} we give some details about the splitting of an observable into a local and a non-local part. In Sec.~\ref{scaling} we present a finite-size scaling analysis of the fluctuations in $O_{nn}$ and $\widetilde O_{nn}$. Finally, we explain in Sec.~\ref{dist} the coarse graining procedures used to obtain the continuous distributions. \subsection{Splitting into local and non-local part} \label{splitting} In the paper we have argued that the proper basis in operator space to study thermalization consists of the local conserved charges plus an additional set of non-local charges which span the rest of the diagonal operator space. In this basis the time and statistical averages for the part of the observable obtained by projecting onto the local conserved charges agree by construction, so that thermalization becomes a statement about the projection onto the non-local conserved charges only. \paragraph{Multiple local conservation laws} The generalization of Eq.~(8) of the main paper for a system with $f> 1$ local conserved charges, $\{\mathcal{Q}_1,\ldots \mathcal{Q}_f\}$, is straightforward.
We can still decompose the operator into a local and a non-local part, \begin{equation} \label{Mazur2} O_\textrm{diag} =\underbrace{\sum_{n=1}^{f}\frac{\langle O \widetilde P_n\rangle_\textrm{th}}{\langle \widetilde P_n^2\rangle_\textrm{th}} \widetilde P_n}_{O_\textrm{loc}} +\underbrace{\sum_{n=f+1}^{D} \frac{\langle O \widetilde P_n\rangle_\textrm{th}}{\langle \widetilde P_n^2\rangle_\textrm{th}} \widetilde P_n}_{O_\textrm{nonloc}}\,, \end{equation} with $\{\widetilde P_n\}$ an orthogonal basis set. First, for $n=1,\cdots,f$, the $\widetilde{P}_n\equiv\mathcal{Q}_{n}$ are the local conserved quantities. Second, the $\widetilde{P}_n$ with $n=f+1,\cdots,D$ are non-local operators defined such that the set $\{\widetilde P_n\}$ is an orthogonal basis. Such a set can always be constructed explicitly. The thermal ensemble average is now given by $\langle O\rangle_\textrm{th}\!=\Tr\{O\rho_\textrm{th}\}$ with $\rho_\textrm{th}\!=\exp(-\sum_n \!\beta_n \mathcal{Q}_n)/Z_\textrm{th}$, $Z_\textrm{th}=\Tr \exp(-\sum_n \!\beta_n \mathcal{Q}_n)$. The Lagrange parameters $\{\beta_n\}$ are determined by the set of equations $\langle\Psi_0|\mathcal{Q}_n|\Psi_0\rangle=\Tr\{\mathcal{Q}_n\rho_\textrm{th}\}$, which in turn ensures that the time and thermal ensemble averages for $O_\textrm{loc}$ are identical by construction, i.e.~$ \overline{O}_\textrm{loc}\equiv \langle O_\textrm{loc}\rangle_\textrm{th}$ is still guaranteed, and thermalization remains a statement about the non-local contributions. \paragraph{Energy shift} By shifting the energy, $H\to H-E_0$, the projection operators $\widetilde P_n$ are modified because of the orthogonality condition $\langle H\widetilde P_n\rangle=0$. The qualitative behavior of $\widetilde{O}_{nn}$ is, however, not affected. A convenient, unique gauge is obtained by demanding that $\langle O_{\rm nonloc}\rangle_\textrm{th}=0$.
Focusing once again on a system with only one local conserved quantity, this is achieved by choosing \begin{equation} E_0=\frac{\langle OH\rangle_\textrm{th}\langle H\rangle_\textrm{th}-\langle O\rangle_\textrm{th}\langle H^2\rangle_\textrm{th}}{\langle OH\rangle_\textrm{th}-\langle O\rangle_\textrm{th}\langle H\rangle_\textrm{th}}\,, \end{equation} which is the shift we have used in the main part of the paper. For a system with $f> 1$ local conserved quantities a similar condition can be found. \paragraph{Relation between the original and the rotated basis} The relation between the old and new operator basis can be expressed as \begin{equation} \widetilde P_m =\sum_{n=1}^D a_n^m P_n =\sum_{n=1}^D \underbrace{a_n^m \sqrt{\langle P_n\rangle_\textrm{th}}}_{\langle\widetilde P_m P'_n\rangle_\textrm{th}} P'_n \; (m=2,\cdots,D) \end{equation} where $P'_n =P_n/\sqrt{\langle P_n^2\rangle_\textrm{th}}$ are the normalized projection operators, $\langle P'_i P'_j\rangle_\textrm{th}=\delta_{ij}$ and $P_n^2=P_n$. Using the definition of the Householder reflection and some simple algebra, we find \begin{equation} \label{coeffs} \langle\widetilde P_mP'_n\rangle_\textrm{th}=\delta_{nm}-\frac{2\varepsilon_n\varepsilon_m}{\langle(H'-P'_D)^2\rangle_\textrm{th}} \frac{\sqrt{\langle P_n\rangle_\textrm{th}}\sqrt{\langle P_m\rangle_\textrm{th}}}{\langle H^2\rangle_{\rm th}}. \end{equation} While $\langle(H'-P'_D)^2\rangle_\textrm{th}\propto\mathcal{O}(1)$ and $\langle H^2\rangle_\textrm{th}\propto \mathcal{O}(N^2)$, we have $\langle P_n\rangle_\textrm{th}\propto \text{e}^{-N}$ so that the expansion coefficients $a_n^m$ are sharply peaked at $n=m$. As a consequence, the initial distribution is not affected by the rotation in the thermodynamic limit, i.e., \begin{equation} \langle\Psi_0|\widetilde P_n|\Psi_0\rangle \stackrel{N\to\infty}{\to} \langle\Psi_0|P_n|\Psi_0\rangle \quad (n=2,\cdots, D) \end{equation} and becomes sharply peaked in the thermodynamic limit.
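The basic property of the Householder reflection invoked above can be illustrated numerically. The following is a minimal sketch in a small vector space, not the construction of the actual operator basis: given unit vectors $h$ and $e$, the reflection $R = \mathbb{1} - 2\,ww^{T}/(w^{T}w)$ with $w = h - e$ is orthogonal and maps $h$ onto $e$.

```python
import numpy as np

def householder_reflection(h, e):
    """Orthogonal reflection R with R h = e, for unit vectors h != e:
    R = I - 2 w w^T / (w^T w) with w = h - e."""
    w = h - e
    return np.eye(len(h)) - 2.0 * np.outer(w, w) / (w @ w)

# Map a unit vector onto the last basis vector, analogous to the
# rotation sending the normalised Hamiltonian H' onto P'_D.
h = np.array([1.0, 2.0, 2.0]) / 3.0   # unit vector
e = np.array([0.0, 0.0, 1.0])         # target basis vector
R = householder_reflection(h, e)
print(np.allclose(R @ h, e), np.allclose(R @ R.T, np.eye(3)))  # True True
```

Since $R$ is orthogonal, applying it to the remaining basis vectors preserves all inner products, which is what guarantees that the rotated set $\{\widetilde P_n\}$ stays orthogonal.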
The matrix elements of the observable \begin{eqnarray} \label{Bar_O} \widetilde O_{mm}&=&\frac{\langle O\widetilde P_m\rangle_\textrm{th}}{\langle \widetilde P_m^2\rangle_\textrm{th}}=\sum_n O_{nn} \langle P_n\widetilde P_m\rangle_\textrm{th} \nonumber \\ &=&\sum_n a_n^m\langle P_n\rangle_\textrm{th} O_{nn} \end{eqnarray} are, however, changed because they are given by summing over the exponentially many old matrix elements $O_{nn}$ so that the exponentially small corrections in $a_n^m$, see Eq.~(\ref{coeffs}), still matter. \subsection{Fluctuations in $O_{nn}$ and $\widetilde O_{nn}$ and the eigenstate thermalization hypothesis} \label{scaling} According to the eigenstate thermalization hypothesis, $O_{nn}$ should become a smooth function of the eigenenergy $\varepsilon_n$ in the thermodynamic limit. For the system sizes we are able to exactly diagonalize, we are clearly far from that limit and fluctuations in $O_{nn}$ are large. Nevertheless, we can still check how these fluctuations scale with system size. In order to investigate the scaling with system size we define a measure of the overall size of the fluctuations in an energy interval: \begin{equation}\label{Onn_scaling} \Delta_O=\sum'_n\left|O_{nn}-\left<O_{nn}\right>_{mc}\right|\,. \end{equation} The prime on the sum refers to a restriction to an energy interval of half-width $0.05$ times the bandwidth $W_\Delta=\varepsilon_D-\varepsilon_1$, centered on the middle of the spectrum, $E=W_\Delta/2+\varepsilon_1$, with $\varepsilon_1$ the ground state energy. This can be more precisely defined as \begin{eqnarray} \sum'_nf_n&=&\sum_{n=1}^Df_n\Gamma(\varepsilon_n-E)\textrm{, where}\nonumber\\ \Gamma(\varepsilon)&=&\theta\left[\varepsilon+0.05W_\Delta\right]-\theta\left[\varepsilon-0.05W_\Delta\right] \end{eqnarray} and $\theta(\varepsilon)$ is the Heaviside function.
$\left<O_{nn}\right>_{mc}$ is the locally defined average, in other words the microcanonical ensemble average, calculated here with the same energy window: \begin{equation} \left<O_{nn}\right>_{mc}=\frac{\sum_{m=1}^DO_{mm}\Gamma(\varepsilon_m-\varepsilon_n)}{\sum_{m=1}^D\Gamma(\varepsilon_m-\varepsilon_n)}\,. \end{equation} To compare the size of the fluctuations with the magnitude of the operator we define $\left<O\right>_{E}=\sum'_nO_{nn}$. We can define the same measure in the rotated basis: \begin{equation} \Delta_{\widetilde{O}}=\sum'_n\left|\widetilde{O}_{nn}-\left<\widetilde{O}_{nn}\right>_{mc}\right| \end{equation} with the interval for the sum defined as for Eq.~\eqref{Onn_scaling}. Strictly speaking, $\left<\widetilde{O}_{nn}\right>_{mc}$ is no longer the microcanonical ensemble average, as $n$ no longer labels the eigenenergies. Nonetheless one can define an analogue, and we retain the same notation for ease of presentation. \begin{figure} \includegraphics*[width=0.9\columnwidth]{Supp_Mat_Figure1} \caption{(Color online) Quench with $|\Psi_0(5,0.2)\rangle$ and $H(1,0.2)$. Shown is a comparison of (a) $\Delta_{\widetilde{O}}$, (b) $\Delta_O$, and (c) $\Delta_O/\langle O\rangle_E$ for different system sizes from $N=8$ to $16$. Plotted are the observables $O=\vec{S}_i\vec{S}_{j}$ with $|i-j|=1$ (black circles), $|i-j|=2$ (red squares), $|i-j|=3$ (green triangles), $|i-j|=4$ (blue diamonds), and $|i-j|=5$ (purple triangles). $\Delta_{\widetilde{O}}$ shows clear exponential scaling to zero (note the logarithmic scales).} \label{Supp_Mat_Figure1} \end{figure} We consider again the same quench as in the paper with $|\Psi_0(5,0.2)\rangle$ and $H(1,0.2)$, and look at observables $O=\vec{S}_i\vec{S}_{j}$ for different $|i-j|$. In Fig.~\ref{Supp_Mat_Figure1} we plot $\Delta_{\widetilde{O}}$ and $\Delta_O$ for system sizes $N=8$ to $16$.
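A compact sketch of how $\Delta_O$ can be evaluated from exact-diagonalization output (eigenenergies and diagonal matrix elements). Function and variable names are illustrative, and the microcanonical mean is normalized over the energy window:

```python
import numpy as np

def in_window(eps, center, half_width):
    # indicator for the window Gamma: |eps - center| < half_width
    return np.abs(eps - center) < half_width

def delta_O(eps, Onn, frac=0.05):
    """Summed deviation of O_nn from its microcanonical mean, restricted
    to a window of half-width frac * bandwidth around mid-spectrum."""
    eps = np.asarray(eps)
    Onn = np.asarray(Onn)
    W = eps.max() - eps.min()            # bandwidth W_Delta
    E = eps.min() + W / 2                # middle of the spectrum
    hw = frac * W
    # microcanonical mean in a window around each eigenenergy
    mc = np.array([Onn[in_window(eps, e, hw)].mean() for e in eps])
    sel = in_window(eps, E, hw)          # the primed (restricted) sum
    return float(np.abs(Onn - mc)[sel].sum())
```

For a perfectly smooth $O_{nn}$ the measure vanishes; fluctuations around the local mean give a positive value.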
The absolute magnitude of the fluctuations, $\Delta_O$, Fig.~\ref{Supp_Mat_Figure1}(b), can be several orders of magnitude larger than the average value of the observable even for $N=16$, see Fig.~\ref{Supp_Mat_Figure1}(c), so that ETH does not apply. In particular, ETH does not seem to offer an explanation for the data in Fig.~2(a) of the paper. Though the fluctuations tend, on the whole, to become smaller for larger system sizes, no clear-cut scaling can be seen. The relative size of the fluctuations, defined as $\Delta_O/\langle O\rangle_{E}$ as in Ref.~[\onlinecite{Rigol2009}], shows even poorer scaling behavior with system size, see Fig.~\ref{Supp_Mat_Figure1}(c). However, for the non-local part $\Delta_{\widetilde{O}}$, shown in Fig.~\ref{Supp_Mat_Figure1}(a), one sees a clear scaling to zero which depends exponentially on the system size. This agrees with our expectations because $\widetilde{O}_{nn}$ is defined by a projection onto a non-local operator, and it supports the division into local and non-local operators which we have implemented. \subsection{Energy distributions and coarse graining}\label{dist} In order to plot the continuum energy distributions a coarse graining is necessary. The density of states is first made continuous by approximating \begin{equation} \nu(\varepsilon)\equiv\sum_n\delta(\varepsilon-\varepsilon_n)\approx\sum_n\chi_{W}(\varepsilon-\varepsilon_n)\,, \end{equation} with the envelope function \begin{equation} \chi_{W}(\varepsilon)=\frac{e^{-\varepsilon^2/(2W^2)}}{\sqrt{2 \pi W^2}}\,. \end{equation} In the paper we have used $W=10\delta$ for $N=16$, where $\delta$ is the mean level spacing, with an additional running average. The results of these procedures for the density of states are shown in Fig.~\ref{Supp_Mat_Figure2}. The same procedure is performed for the canonical ensemble.
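The Gaussian broadening of the level spectrum can be sketched as follows (the grid size and the running-average window are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

def coarse_grained_dos(levels, W, npts=2000):
    """Smooth the density of states: replace each delta peak at an
    eigenenergy by a normalized Gaussian chi_W of width W."""
    levels = np.asarray(levels)
    grid = np.linspace(levels.min() - 5 * W, levels.max() + 5 * W, npts)
    gauss = np.exp(-(grid[:, None] - levels[None, :])**2 / (2 * W**2))
    nu = gauss.sum(axis=1) / np.sqrt(2 * np.pi * W**2)
    return grid, nu

def running_average(y, k=11):
    """Additional running average over k grid points."""
    return np.convolve(y, np.ones(k) / k, mode="same")
```

Integrating the smoothed $\nu(\varepsilon)$ over the grid recovers the total number of levels, which is a quick consistency check on the broadening.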
As a check that this procedure works correctly, one should compare operator averages computed with these coarse grained distributions against the exact ones. Note that whilst a coarse graining over a wider energy range (\emph{e.g.}~$W=50\delta$) gives the same result for the density of states as in Fig.~\ref{Supp_Mat_Figure2}, it does not give accurate results for the canonical ensemble. \begin{figure} \includegraphics*[width=0.8\columnwidth]{Supp_Mat_Figure2} \caption{(Color online) Coarse grained density of states, $\nu(\varepsilon)$, for the Hamiltonian $H(1,0.2)$ with $N=16$. The coarse graining width is $W=10\delta$, where $\delta$ is the mean level spacing. Shown is the result after coarse graining (red circles), and the result after an additional running average (blue line).} \label{Supp_Mat_Figure2} \end{figure} For the microcanonical ensemble one simply broadens the delta function around the initial energy $E=\langle\Psi_0|H|\Psi_0\rangle$, \begin{eqnarray} \Gamma_\textrm{mic}(\varepsilon)\approx\hside(\varepsilon-E+W/2)-\hside(\varepsilon-E-W/2)\,, \end{eqnarray} where, again, $\hside(\varepsilon)$ is the Heaviside function. In principle one could also apply this procedure to the initial distribution to calculate the time average. However, for the system sizes we are able to consider, we find that it is not possible to smooth the initial distribution and, at the same time, retain accurate averages for physical quantities. \section{Infinite size time-dependent DMRG} \label{DMRG} In order to show that the quench in the non-integrable case considered in Fig.~1 and Fig.~2 of the main paper does indeed lead to a thermalization of the local correlation functions in the thermodynamic limit at long times, we have performed infinite size time-dependent DMRG (tDMRG) calculations. The time evolution is performed using a third-order Trotter-Suzuki decomposition with a time step $J\delta t=0.05$.
In order to obtain results in the thermodynamic limit we have simulated the dynamics on a light cone which grows with an effective velocity, set by the Trotter time step, that is much larger than the Lieb-Robinson velocity. Further details of the algorithm are given in Ref.~\onlinecite{EnssSirker}. The initial state as well as the thermalized state were calculated using an imaginary time evolution. In Fig.~\ref{Supp_Mat_Fig_DMRG} we show results for $\langle\Psi_0|\vec{S}_i\vec{S}_{i+j}(t)|\Psi_0\rangle-\langle\vec{S}_i\vec{S}_{i+j}\rangle_{\textrm{th}}$ for the quench with $|\Psi_0(5,0.2)\rangle$ and $H(1,0.2)$ considered in the main paper. \begin{figure} \includegraphics*[width=0.8\columnwidth]{Supp_Mat_Fig_DMRG} \caption{(Color online) Difference between the time dependent expectation value after the quench in the non-integrable system and the thermal expectation value, $\langle\Psi_0|\vec{S}_i\vec{S}_{i+j}(t)|\Psi_0\rangle-\langle\vec{S}_i\vec{S}_{i+j}\rangle_{\textrm{th}}$, for distances $j=1,2,3,4$ as indicated on the plot.} \label{Supp_Mat_Fig_DMRG} \end{figure} At the longest times we can simulate, this difference becomes smaller than $10^{-3}$. Due to the Trotter decomposition we expect an error of order $(\delta t)^2\sim 10^{-3}$, so that the system has already thermalized within error bars. While we could, in principle, reduce the Trotter step $\delta t$, we also see oscillations of order $10^{-3}$ at these times, so that a tighter bound on thermalization would in addition require substantially longer simulation times, which are not feasible using present-day computers and algorithms. Note that the relative deviation $\Delta_\textrm{rel}$ shown in Fig.~2(a) of the main paper is extremely sensitive to small errors because the difference plotted in Fig.~\ref{Supp_Mat_Fig_DMRG} is divided by the time average of the correlator.
For the longer-range correlation functions this value becomes very small---we obtain, for example, $\overline{\vec{S}_i\vec{S}_{i+4}}\approx 0.039$---thus magnifying the numerical error in the time-dependent correlation function. \section{The integrable case} \label{Integrable} Non-equilibrium dynamics in integrable systems and the question of the appropriate statistical ensemble to describe the long-time limit have been intensely studied in recent years.\cite{Rigol2007,FagottiEssler,SantosRigol2010,SantosRigol2010b,CauxKonik} In our paper we only briefly touched upon this issue by explaining how the splitting into a local and a non-local part can be generalized to the integrable case. While for non-interacting systems the additional local conservation laws simply become the occupation numbers of the diagonal modes, they are quite complicated in the interacting case \cite{GrabowskiMathieu} and a detailed study is beyond the scope of this Letter. Here we simply want to look for indications of integrability when taking only the Hamiltonian itself into the local part, as in Eq.~(7) of the paper, thus ignoring all other locally conserved quantities. In Fig.~\ref{Supp_Mat_Figure3} we present the same data as in Fig.~2 of the paper but for the integrable quench with $|\Psi_0(5,0)\rangle$ and $H(1/2,0)$. \begin{figure} \includegraphics*[width=0.8\columnwidth]{Supp_Mat_Figure3_v2} \caption{(Color online) Scaling of the relative deviation $\Delta_\textrm{rel}$ for $O=\vec{S}_i\vec{S}_j$ with $|i-j|=1,2,3,4$ (from bottom to top) in the integrable case. The tDMRG data for $N=\infty$ clearly show that the correlation functions at long times are not described by the canonical ensemble. Note that $\Delta_\textrm{rel}$ is the absolute value of the relative deviation.} \label{Supp_Mat_Figure3} \end{figure} The scaling behavior is now quite different. 
$\Delta_\textrm{rel}$ for the correlation function $\langle\vec{S}_i\vec{S}_{i+2}\rangle$, in particular, shows an upturn for the largest system sizes which we have considered by exact diagonalization. Furthermore, the results for infinite system size obtained by tDMRG clearly show that the correlation functions at long times after the quench are no longer described by the canonical ensemble. This is also immediately obvious from the time-dependent data shown in Fig.~\ref{Supp_Mat_Fig_DMRG_integrable}. \begin{figure} \includegraphics*[width=0.8\columnwidth]{Supp_Mat_Fig_DMRG_integrable} \caption{(Color online) Difference between the time dependent expectation value after the quench in the integrable system and the thermal expectation value, $\langle\Psi_0|\vec{S}_i\vec{S}_{i+j}(t)|\Psi_0\rangle-\langle\vec{S}_i\vec{S}_{i+j}\rangle_{\textrm{th}}$, for distances $j=1,2,3,4$ as indicated on the plot.} \label{Supp_Mat_Fig_DMRG_integrable} \end{figure} \begin{figure} \includegraphics*[width=1.0\columnwidth]{Supp_Mat_Figure4} \caption{(Color online) Integrable quench for $N=16$. Shown are: (a) the initial, microcanonical and canonical energy distribution functions, and results for $O=\vec{S}_i\vec{S}_{i+1}$ in (b) and $O=\vec{S}_i\vec{S}_{i+4}$ in (c). (b1) and (c1) show $O_{nn}$ while (b2) and (c2) show $\widetilde O_{nn}$.} \label{Supp_Mat_Figure4} \end{figure} Finally, we present in Fig.~\ref{Supp_Mat_Figure4} the analogue of Fig.~3 of the paper for the integrable case. One of the obvious differences is that the fluctuations in $O_{nn}$ for the case $O=\vec{S}_i\vec{S}_{i+1}$ are substantially larger than in the non-integrable case, see Fig.~\ref{Supp_Mat_Figure4}(b1). In $\widetilde O_{nn}$, shown in Fig.~\ref{Supp_Mat_Figure4}(b2) and (c2), the largest fluctuations are of similar magnitude as in the non-integrable case shown in the paper; however, substantial fluctuations persist to much higher energies.
Note that according to the thermalization condition, Eq.~(8) of the main paper, also $\langle\Psi_0|\widetilde P_n|\Psi_0\rangle$ enters. Thus the different energy-level distributions in the two cases will play a role as well: While level repulsion leads to a Wigner-Dyson distribution for a generic model, each state in an integrable model is uniquely characterized by the quantum numbers of the local conserved quantities, so that levels can cross, leading to a Poissonian distribution. The fact that we are missing local conservation laws when using the canonical ensemble for an integrable system would, of course, be immediately obvious if we chose one of the additional local conserved charges as our observable: its expectation value would be time independent and thus determined by the initial state.
https://arxiv.org/abs/1804.07494
Parallel Quicksort without Pairwise Element Exchange
Quicksort is an instructive classroom approach to parallel sorting on distributed memory parallel computers with many opportunities for illustrating specific implementation alternatives and tradeoffs with common communication interfaces like MPI. The (two) standard distributed memory Quicksort implementations exchange partitioned data elements at each level of the Quicksort recursion. In this note, we show that this is not necessary: It suffices to distribute only the chosen pivots, and postpone element redistribution to the bottom of the recursion. This reduces the total volume of data exchanged from $O(n\log p)$ to $O(n)$, $n$ being the total number of elements to be sorted and $p$ a power-of-two number of processors, by trading off against a total of $O(p)$ additional pivot element distributions. Based on this observation, we describe new, \emph{exchange-free} implementation variants of parallel Quicksort and of Wagar's HyperQuicksort. We have implemented the discussed four different Quicksort variations in MPI, and show that with good pivot selection, Quicksort without pairwise element exchange can be significantly faster than standard implementations on moderately large problems.
\section{Introduction} Quicksort~\cite{Hoare62} is often used in the classroom as an example of a sorting algorithm with obvious potential for parallelization on different types of parallel computers, and with enough obstacles to make the discussion instructive. Still, distributed memory parallel Quicksort is practically relevant (fastest) for certain ranges of (smaller) problem sizes and numbers of processors~\cite{AxtmannSanders17,AxtmannWiebigkeSanders18}. This note presents two new parallel variants of the Quicksort scheme: select a pivot, partition elements around pivot, recurse on two disjoint sets of elements. A distributed memory implementation needs to efficiently parallelize both the pivot selection and the partitioning step in order to let the two recursive invocations proceed concurrently. In standard implementations, partitioning usually involves exchanging elements between neighboring processors in a hypercube communication pattern. We observe that this explicit element exchange is not necessary; it suffices instead to distribute the chosen pivots over the involved processors. This leads to two new \emph{exchange-free} parallel Quicksort variants with a cost tradeoff between element exchange and pivot distribution. We discuss implementations of the two variants using the \emph{Message-Passing Interface} (MPI)~\cite{MPI-3.1}, and compare these to standard implementations of parallel Quicksort. Experiments on a medium scale cluster show that this can be faster than the standard pairwise exchange variants when the number of elements per process is not too small. The two approaches can be combined for a smooth transition between exchange-based and exchange-free Quicksort. Using MPI terminology, we let $p=2^k$ denote the number of MPI \emph{processes} that will be mapped to physical processor(core)s. For the Quicksort variants discussed here, $p$ must be a power of two. MPI processes are \emph{ranked} consecutively, $i=0,\ldots,p-1$. 
We let $n$ denote the total number of input elements, and assume that these are initially distributed evenly, possibly randomized~\cite{AxtmannSanders17}, over the $p$ processes, such that each process has roughly $n/p$ elements. Elements may be large and complex and hence expensive to exchange between processes, but all have a key from some ordered domain with a comparison function $<$ that can be evaluated in $O(1)$ time. For each process, input and output elements are stored consecutively in process-local arrays. For each process, the number of output elements should be close to the number of input elements, but actual load balance depends on the quality of the selected pivots. The output elements for each process must be sorted, and all output elements for process $i$ must be smaller than or equal to all output elements of process $i+1$, $0\leq i<p-1$. \section{Standard, distributed memory Quicksort and HyperQuicksort} Standard, textbook implementations of parallel Quicksort for distributed memory systems work roughly as follows~\cite{Quinn03,GramaKarypisKumarGupta03,LanMohamed92,SundarMalhotraBiros13,Wagar87}. A global pivot is chosen by some means and distributed over the processes, after which the processes all perform a local partitioning of their input elements. The processes pair up such that process $i$ is paired with process $i\oplus p/2$ (with $\oplus$ denoting bitwise exclusive or), and process pairs exchange data elements such that all elements smaller than (or equal to) the pivot end up at the lower ranked process, and elements larger than (or equal to) the pivot at the higher ranked process. After this, the set of processes is split into two groups, those with rank lower than $p/2$ and those with larger rank. The algorithm is invoked recursively on these two sets of processes, and terminates with a local sorting step when each process belongs to a singleton set of processes.
Assuming that pivots close to the median element can be effectively determined, the communication volume for the element exchange over all recursive calls is $O(n\log p)$. With linear time communication costs, the exchange time per process is $O(n/p \log p)$. Global pivot selection and distribution is normally done by the processes locally selecting a (sample of) pivot(s) and agreeing on a global pivot by means of a suitable collective operation. If we assume the cost for this to be $O(\log p+s)$, where $s\geq 1$ is the size of the pivot sample per process, the total cost for the pivot selection becomes $O(\log p(\log p+s))$. Some textbook implementations simply use the local pivot from some designated process which is distributed by an \textsf{MPI\_\-Bcast}\xspace operation~\cite{Quinn03,GramaKarypisKumarGupta03}; others use local pivots from all processes from which a global pivot closer to the median is determined by an \textsf{MPI\_\-Allreduce}\xspace-like operation~\cite{AxtmannSanders17,SiebertWolf11}. Either of these takes time $O(\log p+s)$ in a linear communication cost model, see, e.g.\@\xspace~\cite{Traff09:twotree}. Before the recursive calls, the set of MPI processes is split in two, which can conveniently be done using the collective \textsf{MPI\_\-Comm\_\-create}\xspace operation, and the recursive calls simply consist in each process recursing on the subset of processes to which it belongs. Ideally, \textsf{MPI\_\-Comm\_\-create}\xspace\footnote{It can be assumed that both \textsf{MPI\_\-Comm\_\-create}\xspace and \textsf{MPI\_\-Comm\_\-create\_\-group}\xspace are faster than the alternative \textsf{MPI\_\-Comm\_\-split}\xspace in any reasonable MPI library implementation; if not, a better implementation of \textsf{MPI\_\-Comm\_\-create}\xspace can trivially be given in terms of \textsf{MPI\_\-Comm\_\-split}\xspace. For evidence, see, e.g.\@\xspace~\cite{AxtmannWiebigkeSanders18}.} takes time $O(\log p)$.
At the end of the recursion, each process locally sorts (close to) $n/p$ elements. The best overall running time for this parallel Quicksort implementation becomes $O(n/p \log p+n/p \log(n/p)+\log^2 p)=O(n/p \log n+\log^2 p)$ assuming a small (constant) sample size $s$ is used, with linear speed-up over sequential $O(n \log n)$ Quicksort when $n/p$ is in $\Omega(\log n)$. We refer to this implementation variant as standard, \emph{parallel Quicksort}. Wagar~\cite{Wagar87} observed that much better pivot selection would result from using the real medians from the processes to determine the global pivots. In this variation of the Quicksort scheme, the processes \emph{first sort} their $n/p$ elements locally, and during the recursion keep their local elements in order. The exact, local medians for the processes are the middle elements in the local arrays, among which a global pivot is selected and distributed by a suitable collective operation. As above, this can be done in $O(\log p)$ time. The local arrays are split into two halves of elements smaller (or equal) and larger (or equal) than the global pivot. Instead of having to scan through the array as in the parallel Quicksort implementation, this can be done in $O(\log(n/p))$ time by binary search. Processes pairwise exchange small and large elements, and to maintain order in the local arrays, each process has to merge its own elements with those received from its partner in the exchange. The processes then recurse as explained above. The overall running time of $O(n/p \log n+\log^2 p)$ is the same as for parallel Quicksort. Wagar's name for this Quicksort variant is \emph{HyperQuicksort}. Wagar~\cite{Wagar87}, Quinn~\cite{Quinn89,Quinn03}, Axtmann and Sanders~\cite{AxtmannSanders17} and others show that HyperQuicksort can perform better than parallel Quicksort due to the possibly better pivot selection.
A potential drawback of HyperQuicksort is that the process-local merge step can only be done after the elements from the partner process have been received. In contrast, in parallel Quicksort, the local copying of the elements that are kept at the process can potentially be done concurrently (overlapped) with the reception of elements from the partner process. \leaveout{ For completeness, the two standard parallel Quicksort implementation variants are shown as Algorithm~\ref{alg:standardqsort} and Algorithm~\ref{alg:sortfirstqsort} in the appendix. } \section{Exchange-free, parallel Quicksort} We observe that the partitioning step can be done without actually exchanging any input elements. Instead, it suffices to distribute the pivots, and postpone the element redistribution to the end of the recursion. \begin{algorithm} \caption{Exchange-free, per process Quicksort of elements in $n$-element array $a$ for $p=2^k$.} \label{alg:exchangefreeqsort} \begin{algorithmic}[1] \Procedure{ExchangeFreeQsort}{$a,n$} \State $k\gets p$ \State $as[0]\gets a$ \Comment First segment is whole array $a$ \State $an[0]\gets n$ \Comment of $n$ elements \Repeat \State $j\gets 0$ \Comment Segment count \For{$i=0,k,2k,\ldots,p-k$} \State \Comment Local pivot selection for segment $i$ \State $x[j]\gets\mbox{\Call{local-Choose}{$as[i],an[i]$}}$ \State $j\gets j+1$ \EndFor \State \Comment Global consensus on all $j$ pivots \State $x'[0,\ldots,j-1]\gets\mbox{\Call{global-Choose}{$x[0,\ldots,j-1]$}}$ \State $j\gets 0$ \For{$i=0,k,2k,\ldots,p-k$} \State $n_0\gets\mbox{\Call{Partition}{$as[i],an[i],x'[j]$}}$ \State \Comment{$as[i][0,n_0-1]\leq x'[j], as[i][n_0,an[i]-1]\geq x'[j]$} \State $as[i+k/2]\gets as[i]+n_0$ \State $an[i+k/2]\gets an[i]-n_0$ \State $an[i]\gets n_0$ \State $j\gets j+1$ \EndFor \State $k\gets k/2$ \Until{$k=1$} \State \Call{Alltoall}{$as[0,\ldots,p-1],an[0,\ldots,p-1],bs[0,\ldots,p-1],bn[0,\ldots,p-1]$} \State $m\gets\sum_{i=0}^{p-1}bn[i]$ \State $b\gets bs[0]$ \Comment 
Consecutive $bs[i]$ segments \State \Call{local-Qsort}{$b,m$} \Comment Load imbalance $|m-n|$ \EndProcedure \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:exchangefreeqsort} shows how this is realized for the standard parallel Quicksort algorithm. The idea is best described iteratively. Before iteration $i, i=0,\ldots,\log_2 p-1$, each process maintains a partition of its elements into $2^i$ segments with all elements in segment $j$ being smaller than (or equal to) all elements in segment $j+1, j=0,\ldots,2^i-2$. In iteration $i$, pivots for all $2^i$ segments are chosen locally by the processes, and by a collective communication operation they agree on a global pivot for each segment. The processes then locally partition their segments, resulting in $2^{i+1}$ segments for the next iteration. The process is illustrated in Figure~\ref{fig:partitioning}. After the $\log_2 p$ iterations, each process has $p$ segments with the mentioned ordering property, which with good pivot selection each contain about $n/p^2$ elements. By an all-to-all communication operation, all $j$th segments are sent to process $j$, after which the processes locally sort their received, approximately $n/p$ elements. Note that no potentially expensive process set splitting is necessary as was the case for the standard Quicksort variations. Also note that the algorithm as performed by each MPI process is actually oblivious to the process rank. All communication is done by process-symmetric, collective operations. If pivot selection is done by a collective \textsf{MPI\_\-Bcast}\xspace or \textsf{MPI\_\-Allreduce}\xspace operation, the cost over all iterations will be $O(\log^2 p+p)$, since in iteration $i$, $2^i$ pivots need to be found and each collective operation takes $O(\log p+2^i)$ time~\cite{Traff09:twotree}. A small difficulty here is that some processes' segments could be empty (if pivot selection is bad) and for such segments no local pivot candidate is contributed. 
This can be handled using an all-reduction operation that reduces only elements from non-empty processes, or by relying on the reduction operator having a neutral element for the processes not contributing a local pivot. This Quicksort algorithm variant can be viewed as a sample sort implementation (see discussion in~\cite{JaJa00} and, e.g.\@\xspace~\cite{HarschKaleSolomonik18}) with sample key selection done by the partitioning iterations. No communication of elements is necessary during the sampling process; only the pivots are distributed in each iteration. In each iteration, all processes participate, which can make it possible to select better pivots than in the standard Quicksort variations, where the number of participating processes is halved in each recursive call. Compared to standard, parallel Quicksort, the $\log_2 p$ pivot selection and exchange terms of $O(n/p + \log p)$ are traded for a single $O(n/p + p)$ term accounting for the all-to-all distribution at the end of the partitioning, and an $O(p+\log^2 p)$ term for the pivot selection over all $\log_2 p$ iterations, thus effectively saving a logarithmic factor on the expensive element exchange. The total running time becomes $O(n/p \log n+\log^2 p+p)$, which means that this version scales worse than the standard Quicksort variants, with linear speed-up when $n/p$ is in $\Omega(p/\log n)$.
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.60] \draw (0,8) rectangle node {Segment 0} (16,9); \draw[->] (8,10) node () [above] {$\mathrm{pivot}[p/2]$} -- (8,9.1); \draw (0,4) rectangle node {Segment 0} (8,5); \draw (8,4) rectangle node {Segment 1} (16,5); \draw[->] (4,6) node () [above] {$\mathrm{pivot}[p/4]$} -- (4,5.1); \draw[->] (12,6) node () [above] {$\mathrm{pivot}[3p/4]$} -- (12,5.1); \draw (0,0) rectangle node {Segment 0} (4,1); \draw (4,0) rectangle node {Segment 1} (8,1); \draw (8,0) rectangle node {Segment 2} (12,1); \draw (12,0) rectangle node {Segment 3} (16,1); \draw[->] (2,2) node () [above] {$\mathrm{pivot}[p/8]$} -- (2,1.1); \draw[->] (6,2) node () [above] {$\mathrm{pivot}[3p/8]$} -- (6,1.1); \draw[->] (10,2) node () [above] {$\mathrm{pivot}[5p/8]$} -- (10,1.1); \draw[->] (14,2) node () [above] {$\mathrm{pivot}[7p/8]$} -- (14,1.1); \end{tikzpicture} \end{center} \caption{The first three partitioning iterations, from the viewpoint of a single MPI process, assuming for the sake of illustration perfect pivot selection. Before the start of iteration $i, i\geq 0$, the local array of approximately $n/p$ elements is divided into $2^i$ segments. In iteration $i$, all $p$ processes agree on $2^i$ new pivots.
Each stores the new pivots in $\mathrm{pivot}[(2j+1)p/2^{i+1}]$ for $j=0,\ldots,2^i-1$, and partitions the $2^i$ segments for the next iteration.} \label{fig:partitioning} \end{figure} \section{Exchange-free HyperQuicksort} \begin{algorithm} \caption{Exchange-free, per process HyperQuicksort of elements in $n$-element array $a$ for $p=2^k$.} \label{alg:sortfirstexchangefreeqsort} \begin{algorithmic}[1] \Procedure{ExchangeFreeHyperQsort}{$a,n,i\in\{0,\ldots,p-1\}$} \State \Call{local-Qsort}{$a,n$} \State $k\gets p$ \State $as[0]\gets a$ \Comment First segment is whole array $a$ \State $an[0]\gets n$ \Comment of $n$ elements \Repeat \State $j\gets 0$ \Comment Segment count \For{$i=0,k,2k,\ldots,p-k$} \State \Comment Local pivot selection for segment $i$: $x[j]=as[i][an[i]/2]$ \State $x[j]\gets\mbox{\Call{local-Choose}{$as[i],an[i]$}}$ \State $j\gets j+1$ \EndFor \State \Comment Global consensus on all $j$ pivots \State $x'[0,\ldots,j-1]\gets\mbox{\Call{global-Choose}{$x[0,\ldots,j-1]$}}$ \State $j\gets 0$ \For{$i=0,k,2k,\ldots,p-k$} \State $n_0\gets\mbox{\Call{Split}{$as[i],an[i],x'[j]$}}$ \State \Comment{$as[i][0,n_0-1]\leq x'[j], as[i][n_0,an[i]-1]\geq x'[j]$} \State $as[i+k/2]\gets as[i]+n_0$ \State $an[i+k/2]\gets an[i]-n_0$ \State $an[i]\gets n_0$ \State $j\gets j+1$ \EndFor \State $k\gets k/2$ \Until{$k=1$} \State \Call{Alltoall}{$as[0,\ldots,p-1],an[0,\ldots,p-1],bs[0,\ldots,p-1],bn[0,\ldots,p-1]$} \State $m\gets\sum_{i=0}^{p-1}bn[i]$ \State $b\gets bs[0]$ \Comment Consecutive $bs[i]$ segments \State \Call{multiway-Merge}{$bn[0,\ldots,p-1],bs[0,\ldots,p-1],c$} \EndProcedure \end{algorithmic} \end{algorithm} The observation that actual exchange of partitioned elements is not necessary also applies to the HyperQuicksort algorithm~\cite{AxtmannSanders17,Quinn03,Wagar87}. The exchange-free variant is shown as Algorithm~\ref{alg:sortfirstexchangefreeqsort}.
In iteration $i$, $i=0,\ldots,\log_2 p-1$, each process chooses $2^i$ (optimal) local pivots, one from each of its $2^i$ segments; each of these local pivots is just the middle element of the corresponding sorted segment. With a collective operation, $2^i$ global pivots are selected, and each process performs a split of its segments by binary search for the pivot. Iterations thus become fast: over all iterations, the collective all-reduce operations for global pivot selection take $O(\log^2 p+p)$ time, and the $2^i$ binary searches per iteration sum to $O(p\log(n/p))$ time, for a total of $O(\log^2 p+p+p\log(n/p))$. At the end, an (irregular) all-to-all operation is again necessary to send all $j$th segments to process $j$. Each of the $p$ segments received per process is ordered; therefore, a multiway merge over the $p$ segments is necessary to get the received elements into sorted order. This takes $O((n/p)\log p)$ time. Since no element exchanges are done in this algorithm and in Algorithm~\ref{alg:exchangefreeqsort} before the all-to-all exchange, the amount of work for the different processes remains balanced over the iterations (assuming that all processes have roughly $n/p$ elements to start with). \section{Concrete Implementation and Experimental Results} We have implemented all four parallel Quicksort variants discussed above with MPI~\cite{MPI-3.1}. For the process local sorting, we use the standard, C library \texttt{qsort()}\xspace function. Speed-up is also evaluated relative to \texttt{qsort()}\xspace. In the experimental evaluation we seek to compare the standard parallel Quicksort variants against the proposed exchange-free variants. For that purpose we use inputs where (almost) optimal pivots can be determined easily. Concretely, each local pivot is selected by interpolation between the maximum and the minimum element found in a sample of size $s$.
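The interpolation-based pivot selection can be sketched as follows; the collective reductions are modeled by plain \texttt{min}/\texttt{max} over the simulated processes, and the helper name is ours:

```python
def interpolated_pivot(samples):
    """Pivot by interpolation between the global extremes of the
    per-process samples; gmin/gmax stand in for the results of the
    collective reductions described in the text."""
    gmin = min(min(s) for s in samples)
    gmax = max(max(s) for s in samples)
    return (gmin + gmax) / 2
```

For (near-)uniformly distributed keys this interpolated value lies close to the true median, which is why it works well for the benchmark inputs described below.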
For the HyperQuicksort variants, where input elements are kept in order, process-local minimum and maximum elements are simply the first and last element in the input element array. For the standard Quicksort variants, process-local maximum and minimum elements are chosen from a small sample of $s=20$ elements. Global maximum and minimum elements over a set of processes are computed by an \textsf{MPI\_\-Allreduce}\xspace operation with the \texttt{MPI\_\-MAX}\xspace operator. The global pivot is interpolated as the average of the global maximum and minimum elements. As inputs we have used either random permutations of $0,1,\ldots,n-1$ or uniformly generated random numbers in the range $[0,n-1]$. With these inputs, the chosen pivot selection procedure leads to (almost) perfect pivots, and all processes handle almost the same number of elements, roughly $n/p$, throughout. For the standard parallel Quicksort variants, the standard two-way partition with sentinel elements is used, modified such that sequences of elements equal to the pivot are partitioned evenly. For inputs with many equal elements, partition into three segments might be used to improve the load balance~\cite{BentleyMcIlroy93}. For the HyperQuicksort variants, the multiway merge is done using a straightforward binary heap, see, e.g.\@\xspace~\cite{SedgewickWayne11}. For both variants without element exchange, the final data redistribution is done by an \textsf{MPI\_\-Alltoall}\xspace followed by an \textsf{MPI\_\-Alltoallv}\xspace operation\footnote{The implementations are available from the author.}. \begin{figure} \includegraphics[width=0.48\textwidth]{QsortParExch.pdf} \includegraphics[width=0.48\textwidth]{QsortHyperExch.pdf} \caption{Strong scaling results, $n=10^8$ random doubles in the range $[0,10^8-1]$ on a medium-large InfiniBand cluster. Plotted running times and speed-up (SU) are the best observed over 43 measurements. Left plot shows parallel Quicksort versus exchange-free parallel Quicksort.
Right plot shows HyperQuicksort versus exchange-free HyperQuicksort.} \label{fig:strongresults} \end{figure} The plots in Figure~\ref{fig:strongresults} show a few strong scaling results on a medium-sized InfiniBand cluster consisting of 2020 dual-socket nodes with two Intel Xeon E5-2650v2, 2.6 GHz, 8-core Ivy Bridge-EP processors and Intel QDR-80 dual-link high-speed InfiniBand fabric\footnote{This is the Vienna Scientific Cluster (VSC). The author is grateful for access and support.}. The MPI library used is \texttt{OpenMPI 3.0.0}\xspace, and the programs were compiled with \texttt{gcc 6.4} with optimization level \texttt{-O3}. The $n=10^8$ input elements are uniformly randomly generated doubles in the range $[0,10^8-1]$, and $p$ varies from $2^4$ to $2^{12}$. The $p$ MPI processes are distributed with 16 processes per compute node. For each input, measurements were repeated 43 times with 5 non-timed, warm-up measurements, and the best observed times for the slowest process are shown in the plots and used for computing speed-up. As can be seen, the exchange-free variant of parallel Quicksort consistently gives about 20\% higher speed-up than the corresponding standard variant with element exchanges, up to about $1024$ processes. From then on, the number of elements per process ($n/p<100\,000$) becomes so small that the \textsf{MPI\_\-Allreduce}\xspace on the vectors of pivots and the linear latency of the \textsf{MPI\_\-Alltoallv}\xspace operation become more expensive than the explicit element exchanges. The exchange-free variant of HyperQuicksort is slightly faster than HyperQuicksort, but by a much smaller margin. Interestingly, with (almost) perfect pivot selection as in these experiments, standard parallel Quicksort seems preferable to Wagar's HyperQuicksort.
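The heap-based $p$-way merge used by the HyperQuicksort variants after the final all-to-all can be sketched in C as follows (a simplified illustration with hypothetical names, fixing a small maximum of 64 segments for the sketch; not the paper's code):

```c
#include <stddef.h>

/* Sketch of the p-way merge over the received sorted segments, using a
 * straightforward binary min-heap of segment heads. */
struct head { double key; size_t seg; };

static void sift_down(struct head *h, size_t size, size_t i)
{
    for (;;) {
        size_t l = 2 * i + 1, r = l + 1, m = i;
        if (l < size && h[l].key < h[m].key) m = l;
        if (r < size && h[r].key < h[m].key) m = r;
        if (m == i) return;
        struct head t = h[i]; h[i] = h[m]; h[m] = t;
        i = m;
    }
}

/* Merge p sorted segments bs[j] of length bn[j] into out; O(m log p)
 * total time for m output elements.  Assumes p <= 64 for this sketch. */
void multiway_merge(size_t p, double **bs, const size_t *bn, double *out)
{
    struct head heap[64];
    size_t pos[64], size = 0;

    for (size_t j = 0; j < p; j++) {   /* collect the non-empty heads */
        if (bn[j] > 0) { heap[size].key = bs[j][0]; heap[size].seg = j; size++; }
        pos[j] = 0;
    }
    for (size_t i = size; i-- > 0;)    /* heapify */
        sift_down(heap, size, i);

    for (size_t k = 0; size > 0; k++) {
        size_t j = heap[0].seg;
        out[k] = heap[0].key;          /* emit smallest head */
        if (++pos[j] < bn[j])          /* refill from the same segment */
            heap[0].key = bs[j][pos[j]];
        else
            heap[0] = heap[--size];    /* segment exhausted */
        sift_down(heap, size, 0);
    }
}
```

Each output element costs one sift-down of depth $O(\log p)$, matching the $O((n/p)\log p)$ bound stated for the merge.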
\begin{figure} \includegraphics[width=0.48\textwidth]{QsortParCombined.pdf} \includegraphics[width=0.48\textwidth]{QsortHyperCombined.pdf} \caption{Strong scaling results, $n=10^8$ random doubles in the range $[0,10^8-1]$ on a medium-large InfiniBand cluster for the combined Quicksort implementation. Plotted running times and speed-up (SU) are the best observed over 43 measurements. Left plot shows combined, parallel Quicksort. Right plot shows combined HyperQuicksort.} \label{fig:strongcombinedresults} \end{figure} To counter the degradation in the exchange-free variants for small $n/p$, the explicit exchange and exchange-free variants can be combined. Throughout the recursion in parallel Quicksort and HyperQuicksort, the number of elements per process $n'$ stays the same (under the assumption that optimal pivots are chosen) whereas the number of processes is halved in each recursive call. Thus, the explicit-exchange recursion is stopped when $n'>c p/\log n$, for some chosen, implementation- and system-dependent constant $c$, and the corresponding exchange-free variant is invoked. By choosing $c$ well, this gives a smooth transition from standard parallel Quicksort for small per-process input sizes to exchange-free Quicksort as $n/p$ grows. Results from these \emph{combined Quicksort} variants are shown in Figure~\ref{fig:strongcombinedresults}. With the constant $c$ chosen experimentally as $c=1500$, the combined Quicksort is never worse than either the standard or the exchange-free variant. Combined parallel Quicksort reaches a speed-up of more than 1000 on $p=4096$ processes. This speed-up, as well as the speed-up of 650 with $p=2048$ processes, is larger than achieved with either parallel Quicksort or exchange-free Quicksort alone.
\begin{figure} \includegraphics[width=0.48\textwidth]{QsortParExchWeak.pdf} \includegraphics[width=0.48\textwidth]{QsortHyperExchWeak.pdf} \caption{Weak scaling results on a medium-large InfiniBand cluster, $p=8192$, and number of elements per process varying from $n/p=10\,000$ to $n/p=640\,000$. The input elements are random doubles in the range $[0,10^7-1]$. Plotted running times and parallel efficiency are the best observed over 43 measurements. Left plot shows parallel Quicksort versus exchange-free parallel Quicksort. Right plot shows HyperQuicksort versus exchange-free HyperQuicksort.} \label{fig:weakresults} \end{figure} The plots in Figure~\ref{fig:weakresults} show results from a weak scaling experiment where $p=8192$ is kept fixed, and the initial input size per process varies from $n/p=10\,000$ to $n/p=640\,000$ randomly generated doubles in the range $[0,10^7-1]$. The experimental setup is as for the strong scaling experiment. Beyond about $n/p=200\,000$ elements per process, the exchange-free variants perform better than the standard variants, reaching a parallel efficiency of $15\%$ for exchange-free Quicksort, and $12\%$ for exchange-free HyperQuicksort. Repeating the experiments with random permutations and with integer type elements does not qualitatively change the results. \section{Concluding remarks} This note presented two new variants of parallel Quicksort for the classroom that trade explicit element exchanges throughout the Quicksort recursion against global selection of multiple pivots and a single element redistribution. All communication in the new variants is delegated to MPI collective operations, and the quality of the MPI library will co-determine the scalability of the implementations. 
For moderately large numbers of elements per process, these variants can be faster than standard parallel Quicksort variants by a significant factor, and can be combined with the standard, exchange-based variants to provide a smoothly scaling parallel Quicksort implementation. \bibliographystyle{abbrv}
{ "timestamp": "2018-10-25T02:10:45", "yymm": "1804", "arxiv_id": "1804.07494", "language": "en", "url": "https://arxiv.org/abs/1804.07494", "abstract": "Quicksort is an instructive classroom approach to parallel sorting on distributed memory parallel computers with many opportunities for illustrating specific implementation alternatives and tradeoffs with common communication interfaces like MPI. The (two) standard distributed memory Quicksort implementations exchange partitioned data elements at each level of the Quicksort recursion. In this note, we show that this is not necessary: It suffices to distribute only the chosen pivots, and postpone element redistribution to the bottom of the recursion. This reduces the total volume of data exchanged from $O(n\\log p)$ to $O(n)$, $n$ being the total number of elements to be sorted and $p$ a power-of-two number of processors, by trading off against a total of $O(p)$ additional pivot element distributions. Based on this observation, we describe new, \\emph{exchange-free} implementation variants of parallel Quicksort and of Wagar's HyperQuicksort.We have implemented the discussed four different Quicksort variations in MPI, and show that with good pivot selection, Quicksort without pairwise element exchange can be significantly faster than standard implementations on moderately large problems.", "subjects": "Distributed, Parallel, and Cluster Computing (cs.DC)", "title": "Parallel Quicksort without Pairwise Element Exchange", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9473810436809827, "lm_q2_score": 0.7490872243177518, "lm_q1q2_score": 0.7096710363822422 }
https://arxiv.org/abs/2105.12864
The Maker-Breaker percolation game on the square lattice
We study the $(m,b)$ Maker-Breaker percolation game on $\mathbb{Z}^2$, introduced by Day and Falgas-Ravry. As our first result, we show that Breaker has a winning strategy for the $(m,b)$-game whenever $b \geq (2-\frac{1}{14} + o(1))m$, breaking the ratio $2$ barrier proved by Day and Falgas-Ravry. Addressing further questions of Day and Falgas-Ravry, we show that Breaker can win the $(m,2m)$-game even if he allows Maker to claim $c$ edges before the game starts, for any integer $c$, and that he can moreover win rather fast (as a function of $c$). Finally, we consider the game played on $\mathbb{Z}^2$ after the usual bond percolation process with parameter $p$ was performed. We show that when $p$ is not too much larger than $1/2$, Breaker almost surely has a winning strategy for the $(1,1)$-game, even if Maker is allowed to choose the origin after the board is determined.
\section{Introduction} \label{sec:intro} \subsection{Background} The so-called \emph{Maker-Breaker games} are a large and well-studied class of positional games. To define the simplest version of a Maker-Breaker game, we need a finite or infinite set $\Lambda$ and a family $\mathcal{F}$ of subsets of $\Lambda$. The set $\Lambda$ is called the \emph{board} of the game, and $\mathcal{F}$ is the collection of \emph{winning sets} or \emph{winning configurations}. The game is played in \emph{rounds}. In each round, Maker and then Breaker each claim an as yet unclaimed element of $\Lambda$, with Maker moving first. Breaker wins the game if, at some finite point of the game, he has claimed at least one element of each $F \in \mathcal{F}$. Otherwise, Maker wins. On a finite board, this is equivalent to Maker claiming all elements of some $F \in \mathcal{F}$ by the end of the game, though the same is not true for infinite boards. This version of the game is also called the \emph{unbiased Maker-Breaker game}. In the \emph{biased Maker-Breaker game}, introduced by Chvátal and Erd\H{o}s \cite{chvatal1978biased}, the players may claim more elements. To be precise, given natural numbers $m$ and $b$, in each round of the $(m, b)$ Maker-Breaker game, Maker claims $m$ elements of the board whereas Breaker claims $b$. For more information about Maker-Breaker games as well as other positional games, see the excellent books of Beck \cite{beck2008combinatorial}, and of Hefetz, Krivelevich, Stojaković and Szabó~\cite{hefetz2014positional}. In this paper, we address the following Maker-Breaker game played on an infinite board, introduced by Day and Falgas-Ravry~\cites{day2020maker1,day2020maker2}. Let $\Lambda$ be an infinite connected graph and let $v_0 \in V(\Lambda)$ be a vertex. Within the scope of this paper, $\Lambda$ is always $\mathbb{Z}^2$ or an infinite connected subgraph of it.
The \emph{$(m,b)$ Maker-Breaker percolation game on $(\Lambda, v_0)$} is the game with board $E(\Lambda)$ where the winning sets are all infinite connected subgraphs of $\Lambda$ containing $v_0$. That is, Maker's goal is to ensure that $v_0$ is always contained in an infinite subgraph of $\Lambda$ spanned by the edges that she claimed and the unclaimed edges. Note that Breaker wins the game by claiming at least one element in each winning set. Here, this means that Breaker's goal is to claim any set of edges separating $v_0$ from infinity. Notably, the $(1,1)$-game on $\Lambda$ can be seen as a generalisation of the well-known Shannon switching game to an infinite board, see Lehman~\cite{lehman1964solution} for a description and a solution of this game. Throughout the paper, we refer to the $(m,b)$ Maker-Breaker percolation game on $(\Lambda, v_0)$ as the $(m,b)$-game on $(\Lambda, v_0)$. If $\Lambda$ is a transitive graph, we omit the vertex $v_0$ in our notation, as it does not change the analysis of the game. Next, we summarise the main results of the paper \cite{day2020maker2} of Day and Falgas-Ravry. \begin{theorem} \label{thm:original} Let $m, b \in \mathbb{N}$. Then \begin{enumerate} \item Maker has a winning strategy for the $(1,1)$-game on $\mathbb{Z}^2$; \item If $m \geq 2b$, then Maker has a winning strategy for the $(m,b)$-game on $\mathbb{Z}^2$; \item If $b \geq 2m$, then Breaker has a winning strategy for the $(m,b)$-game on $\mathbb{Z}^2$. \end{enumerate} \end{theorem} Note that clearly, neither player is harmed by having more moves on their turn, so if, for instance, Breaker wins the $(m,b)$-game on a board, he also wins the $(m,b')$-game on that same board with $b' \geq b$. This property is called \emph{bias monotonicity}. In view of \Cref{thm:original}, Day and Falgas-Ravry raised many interesting questions.
Most strikingly, they asked whether there is some critical ratio $\rho^*$ and a positive function $\varphi(m) = o(m)$ such that Breaker wins the $\p[\big]{m, \rho^*m + \varphi(m)}$-game and Maker wins the $\p[\big]{m, \rho^*m - \varphi(m)}$-game. \Cref{thm:original} shows that if such a ratio exists, then $1/2 \leq \rho^* \leq 2$. These bounds are associated with the fact that a set of $k$ connected edges in $\mathbb{Z}^2$ has edge-boundary of size at most $2k + 4$, see \Cref{lem:perimetric}. By the \emph{perimetric barrier} we refer to the limitation that one player needs to be roughly twice as powerful as the other player. Although the main problem they suggest is to break the perimetric barrier, they also pose as an open problem to determine whether Breaker or Maker can win the $(m,2m-1)$- or $(2b-1,b)$-games, respectively. \subsection{Our results} As our first result, we break the perimetric barrier on Breaker's side, making progress towards answering \cite{day2020maker2}*{Question 5.5} of Day and Falgas-Ravry about the critical ratio. \begin{theorem} \label{thm:main1} Consider the $(m,b)$ Maker-Breaker percolation game on $\mathbb{Z}^2$, where $m \ge 29$ and $b\ge 2m - s$ for some $0 \le s \le \frac{m-22}{14}$. Then Breaker has a winning strategy, which moreover ensures that he wins within the first $3$ rounds of the game. \end{theorem} In particular, this shows that if $\rho^*$ exists, then $1/2 \leq \rho^* \leq 27/14 \approx 1.93$, and thus breaks the perimetric barrier as discussed previously. We do not believe this bound to be tight; moreover, we did not attempt to optimise the constant, as we also believe that the current method will not yield the optimal bound. \Cref{thm:main1} improves the ratio on Breaker's side for $m \ge 36$, and it also shows that for $m \ge 29$, Breaker wins the $(m,2m)$-game rather fast. Nonetheless, it is also of interest to determine how powerful Breaker is in the $(m,2m)$-game for smaller values of $m$.
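To make the constant $27/14$ explicit (a short computation, for illustration only): taking the largest admissible integer $s$ in \Cref{thm:main1}, Breaker wins whenever
\begin{equation*}
b \;\ge\; 2m - \Big\lfloor \frac{m-22}{14} \Big\rfloor \;\le\; \frac{27m + 22}{14} + 1 \;=\; \Big( \frac{27}{14} + o(1) \Big) m,
\end{equation*}
which is where the upper bound $\rho^* \le 27/14$ comes from.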
In the proof of \Cref{thm:original}, Day and Falgas-Ravry show that Breaker can win the $(m,2m)$-game on $\mathbb{Z}^2$ within $m^{16m+O(1)}$ rounds, and ask~\cite{day2020maker2}*{Question 5.7} how far this is from best possible. Concerning this question, we prove a slightly stronger result, showing that Breaker can win fast even when Maker is granted an initial boost in the form of an option to claim some edges before the game starts. We consider the following variant of the game. For integers $m,b \ge 1$ and $c \ge 0$, define the \emph{$c$-boosted $(m,b)$ Maker-Breaker percolation game} on $\mathbb{Z}^2$ to be the same as the $(m,b)$ percolation game, with the addition that, in her very first turn only, Maker claims $c$ extra edges (so in her first round she claims $m+c$ edges overall). Concerning this game, Day and Falgas-Ravry asked~\cite{day2020maker2}*{Question 5.6} whether Breaker having a winning strategy for the $(m,b)$-game on $(\Lambda, v_0)$ implies that he also has a winning strategy for the $c$-boosted version of the same game. In view of~\cite{day2020maker2}*{Questions 5.6, 5.7}, we prove the following result. \begin{theorem} \label{thm:boost} Let $m \ge 1$ and $c \ge 0$ be integers, and let $b \ge 2m$. Then Breaker wins the $c$-boosted $(m,b)$ Maker-Breaker percolation game on $\mathbb{Z}^2$, and moreover, he can ensure a win within the first $(2c+4)(2c+5)\p[\big]{\ceil{ \frac{2c+2}{m}} + 1}$ rounds. \end{theorem} This theorem tells us not only that Breaker can win the $(m,2m)$-game on $\mathbb{Z}^2$ quite fast, and not only that he can win the $c$-boosted game for any $c$, but also that he can win the $c$-boosted game quite fast. Moreover, the number of rounds Breaker needs is uniformly bounded in $m$ and polynomial in $c$. Thus, for the $(m,2m)$-game, we answer the stronger combined version of Questions 5.6 and 5.7 of \cite{day2020maker2}.
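Indeed, the uniformity in $m$ can be checked directly (a quick illustrative computation): since $\ceil{\frac{2c+2}{m}} \le 2c+2$ for every $m \ge 1$, the round bound of \Cref{thm:boost} satisfies
\begin{equation*}
(2c+4)(2c+5)\p[\Big]{\ceil[\Big]{\frac{2c+2}{m}} + 1} \;\le\; (2c+4)(2c+5)(2c+3) \;=\; O(c^3),
\end{equation*}
independently of $m$ and $b$.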
Applying \Cref{thm:boost} with $c = 0$, we get the following extension of \Cref{thm:main1} for the $(m,2m)$-game without an initial boost. \begin{corollary} \label{cor:m2mBreaker} Let $m \geq 1$ and $b \ge 2m$ be integers. Then Breaker can guarantee to win the $(m,b)$ percolation game on $\mathbb{Z}^2$ within the first $40$ rounds of the game. Moreover, if $m\ge 29$ then Breaker wins within $3$ rounds. \end{corollary} Note that this cannot be improved to a win within $3$ rounds for every $m$: in fact, for $m = 1$, Maker can survive for $5$ rounds. In the range $1 \leq m \leq 28$, the bound of $40$ we obtain is not optimal, as the proof of \Cref{thm:boost} specialised to $c = 0$ could be greatly simplified. For $m \geq 29$, Maker can indeed survive for $3$ rounds, as becomes clear in the proof of \Cref{thm:main1}. The final subject we address regards the Maker-Breaker percolation game played on a random board. Studying Maker-Breaker games on random boards is very common in general. In our case, however, it is of special interest, since the percolation game was originally introduced in analogy to percolation theory. It is then very natural to consider the variant of the Maker-Breaker percolation game on $(\mathbb{Z}^2)_p$, the subgraph of open edges in the usual bond percolation process on $\mathbb{Z}^2$ with parameter $p \in [0,1]$. It is well known that for $p \leq 1/2$, almost surely there are no infinite connected components in $(\mathbb{Z}^2)_p$, and if $p \in (1/2,1)$, the probability that the origin is in an infinite connected component is strictly between zero and one. To avoid an easy win for Breaker before the game even starts, we grant Maker the power to choose the origin $v_0$ after sampling the configuration. We call this the \emph{Maker-Breaker percolation game on $p$-polluted $\mathbb{Z}^2$}.
Day and Falgas-Ravry asked for the value of the critical threshold $\theta^{*} \in [1/2,1]$ such that when $p < \theta^{*}$, Breaker almost surely has a winning strategy for the $(1,1)$-game on $p$-polluted $\mathbb{Z}^2$. We make first progress in this direction, providing a non-trivial lower bound for this threshold. To do so, we explore a connection to a different percolation model, the north-east oriented percolation model on $\mathbb{Z}^2$, which is known to have a critical threshold $p^*$ (see \Cref{sec:polluted} for more details about this model and about the percolation game on a $p$-polluted board). \begin{theorem} \label{thm:polluted} Let $p^{*}$ be the critical threshold for the north-east oriented percolation on $\mathbb{Z}^2$. Then for any $p < p^{*}$, Breaker almost surely has a winning strategy for the $(1,1)$-game on $p$-polluted $\mathbb{Z}^2$. \end{theorem} In particular, as $p^{*} > 0.6298$ by a result of Dhar~\cite{dhar1982directed}, for any $p \le 0.6298$, Breaker almost surely has a winning strategy for the $(1,1)$-game on $(\mathbb{Z}^2)_p$. Although considering the critical threshold for the north-east oriented percolation yields a non-trivial result for Breaker's win on the polluted board, this strategy cannot be extended to every $p < 1$, since it was shown by Balister, Bollobás and Stacey~\cite{balister1994improved} that $p^* < 0.6863$. \subsection{Tools and strategy} \label{subsec:tools} Throughout the paper we use two important tools. The first is relating our game to an auxiliary game, where Maker either has to keep her graph connected, or at least connected in some generalised sense. However, if we want to claim that it is enough to prove that Breaker wins against a restricted Maker, we have a certain price to pay. In particular, we consider only strategies of Breaker in which he claims edges from the edge-boundary of Maker's connected component, or a slightly generalised version of that.
More importantly, we enable Maker to `save' some edges for later, to make the auxiliary game resemble the original one. Despite these changes, this setting ends up being much easier to analyse, as one of the hard things to tackle when considering strategies for Breaker is handling different connected components in Maker's graph. Furthermore, it turns out that with these adjustments, analysing the auxiliary game is indeed sufficient to prove our results for the original game. Our second tool is considering variations on \Cref{lem:perimetric} (Lemma 2.3 in \cite{day2020maker1}). This simple result tells us that the edge-boundary of any connected finite subgraph of $\mathbb{Z}^2$ is at most `a bit' larger than twice the number of edges of this subgraph. Hence, for instance, when playing the $(m,2m)$-game, it is enough to force Maker to play several `bad moves', not enlarging the edge-boundary of her graph by too much, so that Breaker can surround her connected component completely. When we use a more general notion of connectivity, we need a slightly more general version of \Cref{lem:perimetric}. This variant allows us to analyse the game in a global sense. This, in particular, is how we manage to break the perimetric barrier. Finally, to prove \Cref{thm:polluted}, we use only simpler tools. In particular, one of those is a pairing strategy in the same spirit as the one that Day and Falgas-Ravry~\cite{day2020maker2} used in their proof for Maker's win in the $(1,1)$-game, but this time as a strategy for Breaker. \subsection{Organisation} Firstly, we present our notation and the definitions we work with in \Cref{sec:preliminary}. After that, we prove \Cref{thm:main1} in \Cref{sec:ratio} and \Cref{thm:boost} in \Cref{sec:fast-boost}. We discuss the game played on the polluted board and prove \Cref{thm:polluted} in \Cref{sec:polluted}. Finally, we present several open problems and further directions in \Cref{sec:open}.
\section{Preliminaries} \label{sec:preliminary} For a graph $G$, we denote by $V(G)$ its vertex set, and by $E(G)$ its edge set. Furthermore, we set $e(G) \mathrel{\coloneqq} \card{E(G)}$. For a subset of vertices $U \subseteq V(G)$ we denote by $G[U]$ the subgraph of $G$ induced by $U$. The \emph{edge boundary} $\boundary H$ of a finite subgraph $H$ of a possibly infinite graph $G$ is \begin{equation*} \boundary H \mathrel{\coloneqq} \set[\big]{ \set{x,y} \in E(G) \setminus E(H) \colon \set{x,y} \cap V(H) \neq \emptyset }. \end{equation*} We usually abbreviate `edge boundary' to `boundary', as we do not consider any other type of boundary in this paper. We use the standard terminology where by the square lattice $\mathbb{Z}^2$, we mean the infinite graph with the following vertex and edge sets. \begin{align*} V(\mathbb{Z}^2) &\mathrel{\coloneqq} \set[\big]{ (x,y) \colon x,y \in \mathbb{Z} }, \\ E(\mathbb{Z}^2) &\mathrel{\coloneqq} \set[\big]{ \set{(x,y), (x',y')} \subseteq\mathbb{Z}^2 \colon \abs{x - x'} + \abs{y - y'} = 1}. \end{align*} When considering probability in \Cref{sec:polluted}, we say that an event occurs almost surely (abbreviated a.s.) if it occurs with probability $1$. \subsection{Useful lemmas} Throughout the paper, we use several times the following reverse isoperimetric inequality observed by Day and Falgas-Ravry~\cite{day2020maker1}. \begin{lemma} \label{lem:perimetric} Let $C$ be a finite connected subgraph of $\mathbb{Z}^2$, then \begin{equation*} \card{\boundary C} \leq 2e(C) + 4. \end{equation*} \end{lemma} We present two more versions of this lemma, for which we need some definitions. \begin{definition} \label{def:box} We say that a finite subgraph $B\subseteq \mathbb{Z}^2$ is a \emph{box}, if it is induced by a set of vertices of the form \begin{align*} \set[\big]{ (x,y) \colon a \leq x \leq b, \, c \leq y \leq d }, \end{align*} for some $a,b,c,d \in \mathbb{Z}$. 
\end{definition} \begin{definition}\label{def:bounding-box} Let $S \subseteq E(\mathbb{Z}^2)$ be a finite set of edges and let $V_S$ be the set of vertices in the graph it spans. The \emph{bounding box} of $S$, denoted $\bb(S)$, is the minimal box in $\mathbb{Z}^2$ containing $S$. To spell it out, let \begin{align*} m_x(S) \mathrel{\coloneqq} \min \set[\big]{ x \colon (x,y) \in V_S }, && M_x(S) \mathrel{\coloneqq} \max\set[\big]{ x \colon (x,y) \in V_S }. \end{align*} Also, let $m_y(S), M_y(S)$ be defined analogously for the $y$-axis. Then $\bb(S)$ is the box induced by the following set of vertices \begin{equation*} \set[\big]{ (x,y) \colon m_x(S) \le x \le M_x(S), \, m_y(S) \le y \le M_y(S) }. \end{equation*} \end{definition} For a finite subgraph $G$ of $\mathbb{Z}^2$, we write $\bb(G)$ for $\bb(E(G))$. \begin{figure} \centering \begin{tikzpicture}[scale=0.5] \tikzstyle{s-helper}=[lightgray, thin]; \definecolor{s-b-col}{RGB}{215,25,28}; \tikzstyle{s-b}=[s-b-col, ultra thick]; \definecolor{s-o-col}{RGB}{253,174,97}; \tikzstyle{s-o}=[s-o-col, ultra thick]; \definecolor{s-m-col}{RGB}{171,217,233}; \tikzstyle{s-m}=[s-m-col, ultra thick]; \definecolor{s-g-col}{RGB}{220,220,220}; \tikzstyle{s-g}=[s-g-col, line width=0.1pt]; \clip (-7.5,-2) rectangle (5.9,7.6); \draw[s-m] (-7,0.5) to[out=20,in=150] (-3.5,4.2) (-7.3,3.6) to[out=35, in=-150] (-1.1,1.1); \draw[s-m] (-2.1, 7) to[out=45, in=60] (1.7, 3.8) (-0.5, 5.7) to[out=75, in=-150] (3.2, 6.7); \draw[s-m] (1.8, -1.7) to[out=-15, in=160] (5.7, 1.6) (3.5, -0.5) to[out=-30, in=-150] (4.8, -1.2); \draw[darkgray, densely dashed, thick] (-7.4,0.4) rectangle (-1,4.5); \draw[darkgray, densely dashed, thick] (-2.2,3.7) rectangle (3.3,7.5); \draw[darkgray, densely dashed, thick] (1.7,-1.9) rectangle (5.8,1.8); \draw[black, thick] (-7.4,-1.9) rectangle (5.8,7.5); \end{tikzpicture} \caption{Three connected components, their bounding boxes (dashed), and the bounding box of their box-component (solid).} \label{fig:bounding-box}
\end{figure} \begin{lemma}\label{lem:perim-bb} Let $D$ be a finite connected subgraph of $\mathbb{Z}^2$, then \begin{equation*} \card{\boundary \bb(D)} \le \card{\boundary D}. \end{equation*} \end{lemma} \begin{proof} Let $\boundary D$ consist of $h$ horizontal and $v$ vertical edges, so that $\card{\boundary D} = h + v$. Let $m_x = m_x(D)$, $M_x = M_x(D)$, $m_y = m_y(D)$, and $M_y = M_y(D)$ be as in \Cref{def:bounding-box}. Note that for any $m_x \le x_0 \le M_x$ there are at least two vertical boundary edges in $\boundary D$ of the form $\set{ (x_0, y), (x_0, y+1)}$. Analogously, for any $m_y \le y_0 \le M_y$ there are at least two horizontal boundary edges in $\boundary D$ of the form $\set{ (x, y_0), (x+1, y_0)}$. In particular we have $h \ge 2(M_y - m_y + 1)$ and $v \ge 2(M_x - m_x + 1)$. Recall that the box $\bb(D)$ has sides of lengths $M_x - m_x$ and $M_y - m_y$, so \begin{equation*} \card{\boundary \bb(D)} = 2(M_x - m_x + M_y - m_y) + 4, \end{equation*} and the result follows, since $h + v \ge 2(M_x - m_x + M_y - m_y) + 4$. \end{proof} We define a generalisation of a connected component that incorporates the notion of bounding box in \Cref{def:bounding-box}. To do so, we go through several definitions. \begin{definition} \label{def:box-intesect} Let $D_1, D_2 \subseteq \mathbb{Z}^2$ be boxes. We say that $D_1, D_2$ \emph{box-intersect} if \begin{equation*} \p[\big]{E(D_1) \cup (\boundary D_1)} \cap E(D_2) \neq \emptyset. \end{equation*} \end{definition} Note that the relation in \Cref{def:box-intesect} is symmetric. \begin{definition}\label{def:box-comp} Let $D \subseteq \mathbb{Z}^2$ be a finite subgraph. Let $C_1, \ldots, C_t$ be the connected components of $D$, and let $\mathcal{R} \mathrel{\coloneqq} \set{\bb(C_1), \ldots, \bb(C_t)}$ be the collection of their bounding boxes. As long as possible, repeat the following process. If there exist $R_i, R_j \in \mathcal{R}$ which box-intersect, remove them from $\mathcal{R}$ and replace them by the bounding box of their union, that is, by $\bb(R_i \cup R_j)$.
The final $\mathcal{R}$ obtained at the end of this process is called a collection of \emph{box-components of $D$}. If the final $\mathcal{R}$ contains precisely one box-component, then we say that $D$ is \emph{box-connected}. \end{definition} \begin{observation} Given a finite subgraph $D \subseteq \mathbb{Z}^2$, the output of the process defined in \Cref{def:box-comp} on $D$ is unique. Therefore, it induces an equivalence relation on the connected components of $D$. Note further that a box-component contains no isolated vertices, unless the component itself is a single vertex. \end{observation} Hence, for each subgraph $D\subseteq \mathbb{Z}^2$ we can consider its collection of box-components. This allows us to state a slightly generalised version of \Cref{lem:perimetric}. \begin{lemma}\label{lem:perim-box-comp} Let $D$ be a finite box-connected subgraph of $\mathbb{Z}^2$, then \begin{equation*} \card{\boundary \bb(D)} \le 2e(D) + 4. \end{equation*} \end{lemma} \begin{proof} If $D$ is connected, then the result follows immediately from combining \Cref{lem:perim-bb} and \Cref{lem:perimetric}. Otherwise, it is enough to prove the statement for the case where $D$ is a union of two graphs $C_1, C_2$, for both of which the statement holds, and where $\bb(C_1), \bb(C_2)$ box-intersect. Indeed, we then apply this repeatedly for each step in the process defined in \Cref{def:box-comp}, which ends in a single box-component for $D$. Denote $R_i \mathrel{\coloneqq} \bb (C_i)$ for $i=1,2$, so by assumption we have \begin{equation} \label{eq:assump} \card{\boundary R_i} \le 2e(C_i) + 4. \end{equation} As $R_1, R_2$ box-intersect, we have that \begin{align*} \p[\big]{E(R_1) \cup (\boundary R_1)} \cap E(R_2) \neq \emptyset. \end{align*} If either $R_1 \subseteq R_2$ or $R_2 \subseteq R_1$ then we either have \begin{equation*} \card{\boundary \bb(D)} = \card{\boundary R_1} \le 2e(C_1) + 4 < 2e(D) + 4, \end{equation*} or a similar relation holds with $R_1$ replaced by $R_2$.
Otherwise, we have $\card{(\boundary R_1) \cap E(R_2)} \ge 1$ and $R_2 \nsubseteq R_1$. It follows by simple geometric observations that $\card[\big]{ \p[\big]{E(R_1) \cup (\boundary R_1)} \cap (\boundary R_2)} \ge 3$. In particular, \begin{equation*} \card{\boundary(R_1 \cup R_2)} \le \card{\boundary R_1} + \card{\boundary R_2} - 4. \end{equation*} Consequently, \begin{align*} \card{\boundary \bb(D)} &= \card{ \boundary \bb(C_1 \cup C_2)} \\ &= \card{\boundary \bb(R_1 \cup R_2)} \\ &\le \card{\boundary(R_1 \cup R_2)} && \p{\text{By \Cref{lem:perim-bb}}} \\ &\le \card{\boundary R_1} + \card{\boundary R_2} - 4 \\ &\le 2e(C_1) + 2e(C_2) + 4 && \p{\text{By (\ref{eq:assump})}} \nonumber \\ &= 2e(D) + 4. \qedhere \end{align*} \end{proof} \begin{remark} \label{rem:BoxBdryRec} Note that the edge boundary $\boundary B$ of any box $B \subseteq \mathbb{Z}^2$, when regarded as a set of dual edges, forms a rectangle. \end{remark} \section{Breaking the perimetric ratio} \label{sec:ratio} In this section, we prove \Cref{thm:main1}. First, let us recall that by bias monotonicity, it is enough to prove \Cref{thm:main1} for $b=2m-s$, as having more power cannot harm Breaker. As a first step, we describe an auxiliary game for which we show that a win of Breaker in this game implies a win of him in the original game. After that, we provide Breaker with an explicit strategy. The rest of this section is devoted to showing that Breaker, by following the suggested strategy, wins the auxiliary game within three rounds. We prove this by a geometric analysis, relying crucially on the introduced notion of box-connectivity and our tools from \Cref{sec:preliminary}. If Breaker plays only on the boundary, it is natural to arrive at the perimetric barrier of the ratio $2$, because of \Cref{lem:perimetric}. More precisely, when Breaker only claims edges from the boundary of Maker's graph, he cannot react to her future moves in advance. 
That is, in each turn, Maker is able to create as many new unclaimed boundary edges as possible, to which Breaker must respond. To get around this, it is helpful for Breaker to consider the global structure of Maker's graph. Indeed, in general terms, one could interpret our strategy as Breaker forcing Maker to claim edges in an already played region of the board. This extra power from previous turns leads to the improvement in the ratio. When analysing the game from the point of view of Breaker, we wish to consider the graph of Maker as being always box-connected. Hence, we define an auxiliary game in which, in each round, we consider the box-component of Maker's graph containing the origin as her graph. Consequently, we allow more flexibility in the number of edges that she can claim in each turn. Also, when defining Breaker's strategy later, we must insist that he plays only in a certain way, so that a result about the auxiliary game translates into a result about the original game. \begin{definition}[$(m,b)$ Maker-Breaker box-limited percolation game on $\mathbb{Z}^2$] \label{def:box-lntd-game} Two players, Maker and Breaker, alternately claim unclaimed edges of the board $\mathbb{Z}^2$, starting in round 1 with Maker going first. \begin{itemize} \item In round $i$, Maker chooses a non-negative integer $m_i$ such that \begin{equation} \label{eq:misum} \sum_{j=1}^i m_j \leq im, \end{equation} and then claims $m_i$ unclaimed edges from $E(\mathbb{Z}^2)$. Moreover, Maker must play in such a way that at the end of each of her turns, all her edges are in the box-component of $v_0$ (see \Cref{def:box-comp}). \item In each round, Breaker claims at most $b$ unclaimed edges. \item Breaker wins if the connected component of $v_0$ in the graph formed by Maker's edges and all unclaimed edges becomes finite. If Maker can ensure that this never happens, then she wins.
\end{itemize} \end{definition} A key result for us is the following proposition relating the two games. \begin{proposition} \label{prop:boxlmtd-unlmted} Let $m, b \geq 1$ be integers. Assume that Breaker can ensure his win in the $(m,b)$ box-limited percolation game on $\mathbb{Z}^2$ within the first $k$ rounds by claiming only edges from the boundary of the bounding box of Maker's graph, or from inside the box itself. Then he can also ensure his win in the $(m,b)$ percolation game on $\mathbb{Z}^2$ within the first $k$ rounds. \end{proposition} \begin{proof} We show that if Maker has a strategy to ensure that Breaker will not win within the first $k$ rounds of the $(m,b)$ percolation game, then she can also ensure that Breaker will not win within the first $k$ rounds of the box-limited percolation game, assuming that Breaker claims only edges from the boundary of the bounding box of her graph, or from inside it. Assume that Maker has such a strategy for the (unlimited) percolation game. Then she can achieve this in the box-limited game by playing as follows. Denote by $M$ the box-component spanned by the edges claimed by Maker. Maker follows her winning strategy for the percolation game, and whenever this includes playing some edge $e$ that after the end of her turn would be in a box-component other than $M$, she only marks this edge as an \emph{imaginary} edge and does not play it in that round. However, she claims an imaginary edge $e$ right after the first time she plays some edge that puts $e$ in $M$. Firstly, Maker can afford to save imaginary edges and claim them later in the game, as the terms $m_j$ only have to satisfy $\sum_{j=1}^i m_j \leq im$ for any $i \geq 1$. Furthermore, Breaker cannot claim an imaginary edge before Maker claims it, as we assume that Breaker claims only edges in $(\boundary M) \cup E(M)$. The result follows.
\end{proof} Now consider the \emph{$(m, 2m - s)$ Maker-Breaker box-limited percolation game} on $\mathbb{Z}^2$, where $m \geq 36$ and $1 \leq s \leq \frac{m - 22}{14}$. We provide Breaker with the following strategy. \begin{strategy}[Breaker's strategy for the $(m,b)$ box-limited percolation game on $\mathbb{Z}^2$] \label{str:B-box-lmtd} For any $i \geq 1$, let $M_i$ be the set of edges claimed by Maker in her $i$-th turn. Breaker plays according to the following steps. If at any point of the game Breaker cannot follow any particular step, he forfeits the game. \begin{description}[labelindent=\parindent, leftmargin=2\parindent] \item[First round] \hfill\\ Set $B_1 \mathrel{\coloneqq} \bb(M_1)$. \begin{enumerate} \item[(1)] If $\card{\boundary B_1} \leq 2m-s$, claim all edges in $\boundary B_1$. \item[(2)] Otherwise, let $g_1 \mathrel{\coloneqq} \card{\boundary B_1}- 2m+s$. Claim $2m - s$ edges from $\boundary B_1$, leaving $g_1$ unclaimed boundary-edges in the middle (up to being possibly shifted by one edge, for parity reasons) of one of the longer sides of the box $B_1$. Denote by $G_1$ this set of $g_1$ unclaimed edges. \end{enumerate} \item[Second round] \hfill \begin{enumerate} \item[(1)] If $M_2 \cap G_1 = \emptyset$, then claim all edges in $G_1$ if possible, or forfeit if not possible. \item[(2)] Otherwise, $M_2 \cap G_1 \neq \emptyset$. Let $V_1$ be the set of vertices in $\mathbb{Z}^2 \setminus B_1$ which are contained in edges of $G_1$. Let $P_1 \mathrel{\coloneqq} E \p*{ \mathbb{Z}^2[V_1] }$ be the set of edges in the path induced by the vertices $V_1$. Let $C_1 \mathrel{\coloneqq} E(B_1) \cup \boundary B_1$ and $B_2 \mathrel{\coloneqq} \bb((M_2 \cup P_1) \setminus C_1)$. \item[(2.1)] If $\card{(\boundary B_2) \setminus C_1} \le 2m-s$, claim all edges in $(\boundary B_2) \setminus C_1$. \item[(2.2)] Otherwise, let $g_2 \mathrel{\coloneqq} \card{(\boundary B_2) \setminus C_1} - 2m + s$. 
As $G_1$ is a set of boundary-edges in the middle of one of the longer sides of $B_1$, it splits the boundary edges adjacent to this side into two sets of consecutive boundary-edges. Denote these two sets by $L_1$ and $R_1$, such that $\card{R_1 \cap (\boundary B_2)} \leq \card{L_1 \cap (\boundary B_2 )}$. Let $e$ be an edge in $(\boundary B_2) \setminus C_1$ of minimal distance to $G_1$. Let $G_2$ be $g_2$ consecutive edges in $(\boundary B_2) \setminus C_1$, starting from $e$ (see \Cref{fig:round2} for an illustration). Claim all $2m-s$ edges in $(\boundary B_2) \setminus C_1$ excluding those edges in $G_2$. \end{enumerate} \item[Third round] \hfill \begin{enumerate} \item If there is a set of at most $2m-s$ unclaimed edges such that claiming them ensures a win in this round, claim all edges in this set. \item Otherwise, forfeit. \end{enumerate} \end{description} \end{strategy} While the description of the strategy may seem complicated at first, it is in fact rather simple. For illustrations of the geometric content of this strategy, see \Cref{fig:round1,fig:round2,fig:round3}. We now show that \Cref{str:B-box-lmtd} is enough to break the perimetric barrier for the box-limited game. Note that when using \Cref{str:B-box-lmtd}, Breaker only claims edges from the bounding box of Maker's graph or from its boundary. Hence, combining \Cref{prop:boxlmtd-unlmted} with the following proposition gives us \Cref{thm:main1}. \begin{proposition}\label{prop:B-box-lmtd} Let $m \ge 36$ and $1 \le s \le \tfrac{m-22}{14}$. Then by following \Cref{str:B-box-lmtd}, Breaker wins the $(m, 2m - s)$ box-limited percolation game on $\mathbb{Z}^2$ in at most three rounds. \end{proposition} \begin{proof} We analyse the game by following \Cref{str:B-box-lmtd} step by step, showing that Breaker can indeed follow it without forfeiting at any point, and thus wins the game by the end of the third round.
In fact, we show that by the end of the third round, Breaker claims all edges in the boundary of the bounding box of Maker's graph, or of a subgraph of it containing the origin. Note that if during the game Breaker grants Maker extra edges and wins when playing as if she had claimed them, then he also wins the game without granting her those edges. We will use this observation, as it simplifies the analysis of the game. Recall that for each $j \ge 1$, we denote by $M_j$ the set of edges that Maker claimed in her $j$-th turn. Let $m_j \mathrel{\coloneqq} \card{M_j}$, so we have $\sum_{j=1}^i m_j \le im$ for any $i \ge 1$. Refer to \Cref{fig:round1,fig:round2,fig:round3} for a representation of Rounds 1, 2 and 3 respectively. We use in several points of this analysis that the game is invariant under translations, rotations by multiples of $\pi/2$, and horizontal and vertical reflections. \subsection*{First Round} Maker plays all her $m_1$ edges $M_1$ in a box-component containing the origin. Set $B_1 \mathrel{\coloneqq} \bb(M_1)$, and let $a_1$ and $b_1$ be the numbers of vertices in the sides of $B_1$, with $a_1 \geq b_1$. Assume, without loss of generality, that the top and bottom sides of $B_1$ are at least as large as the left and right ones, that is, they consist of $a_1$ vertices. Note that as $\card{\boundary B_1} = 2a_1 + 2b_1$, we get \begin{equation} \label{eq:a1} a_1 \ge \tfrac{1}{4} \card{\boundary B_1}. \end{equation} \begin{description}[leftmargin=0pt] \item[Step (1)] If $\card{\boundary B_1} \le 2m-s$, then by claiming all edges in $\boundary B_1$, Breaker surrounds Maker's graph and wins the game. \item[Step (2)] Assume otherwise, so we have \begin{equation} \label{eq:B1bdrylwr} \card{\boundary B_1} \ge 2m - s + 1. \end{equation} Moreover, by \Cref{lem:perim-box-comp} we get \begin{equation} \label{eq:B1bdryupr} \card{\boundary B_1} \le 2m_1 + 4, \end{equation} so in particular, \begin{equation} \label{eq:m1lwr} m_1 \ge m - \tfrac{1}{2}s - \tfrac{3}{2}.
\end{equation} Denote \begin{equation} \label{eq:g1} g_1 \mathrel{\coloneqq} \card{\boundary B_1} - 2m + s. \end{equation} Breaker chooses $g_1$ boundary-edges in the middle (up to being possibly shifted by one edge, for parity reasons) of the bottom side of $\boundary B_1$, and denotes them by $G_1$. We refer to this set of edges as the `gate' for the first round, see \Cref{fig:round1}. Then Breaker claims all edges in $(\boundary B_1) \setminus G_1$. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=0.5] \tikzstyle{s-helper}=[lightgray, thin]; \definecolor{s-b-col}{RGB}{215,25,28}; \tikzstyle{s-b}=[s-b-col, ultra thick]; \definecolor{s-o-col}{RGB}{253,174,97}; \tikzstyle{s-o}=[s-o-col, ultra thick]; \definecolor{s-m-col}{RGB}{122, 183, 204}; \tikzstyle{s-m}=[s-m-col, ultra thick]; \definecolor{s-g-col}{RGB}{220,220,220}; \tikzstyle{s-g}=[s-g-col, line width=0.1pt]; \clip (-10,-2) rectangle (10, 6); \foreach \x in {-8,-7.8,...,-1.2} \filldraw[xshift=\x cm, s-b] (0,0) -- (0,-0.2); \foreach \x in {1.2,1.4,...,8} \filldraw[xshift=\x cm, s-b] (0,0) -- (0,-0.2); \foreach \x in {-8,-7.8,...,8} \filldraw[xshift=\x cm, yshift=4cm, s-b] (0,0) -- (0,0.2); \foreach \y in {0,0.2,...,4} \filldraw[xshift=-8cm, yshift=\y cm, s-b] (0,0) -- (-0.2,0); \foreach \y in {0,0.2,...,4} \filldraw[xshift=8cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \draw[draw opacity=0] (-0.5,1.5) rectangle (0.5,2.5) node[pos=.5, black] {$B_1$}; \draw[s-m] (1.8,0.3) to[out=25, in=130] (5,3.4) (5,3.4) to[out=-50, in=150] (6.7,0.3) (-7.6,2.5) to[out=40, in=-120] (1.3,2.1) (-4.6,0.4) to[out=10, in=180] (-1.5,3.5) (-1.5,3.5) to[out=0, in=-90] (7.5, 2.8) (-1.2,2.3) to[out=-55, in=120] (-0.4,0.8) (-0.4,0.8) to[out=-60, in=-130] (0.8,0.4) (-7.8,1.2) to[out=10, in=130] (-5.5,0.4) (-7.5,0.3) to[out=0, in=-70] (-6.3, 2.3); \foreach \x in {-1,-0.8,...,0.8,1} \filldraw[xshift=\x cm, s-o] (0,0) -- (0,-0.2); \draw[decorate,decoration={brace,amplitude=5pt, mirror}] (-1,-0.5) -- (1,-0.5) node [midway,below=4pt] 
{$G_1$}; \draw [|-|] (-8,4.6) -- (8,4.6) node [midway,above=3pt] {$a_1$}; \draw [|-|] (8.6,0) -- (8.6,4) node [midway,right=3pt] {$b_1$}; \draw [|-|] (1.2,-0.6) -- (8,-0.6) node [midway,below=3pt] {$c_1$}; \end{tikzpicture} \caption{End of Round 1, where Maker is in light blue and Breaker in dark red. The set $G_1$ of $g_1$ edges in the gate, in orange, is unclaimed.} \label{fig:round1} \end{figure} This is possible because the bottom side of $B_1$ has at least $g_1$ adjacent boundary-edges, as we observe below. \begin{align} \label{eq:g1uprs} g_1 &\leq s + 4 - 2(m - m_1) \leq s + 4 && \p*{\text{By (\ref{eq:g1}), (\ref{eq:B1bdryupr}), and $m_1 \leq m$}} \\ &\leq \frac{1}{4}\p{2m - s + 1} \leq \frac{1}{4}\card{\boundary B_1} && \p*{\text{As $s \leq \tfrac{m-22}{14}$, $m \geq 29$ and (\ref{eq:B1bdrylwr})}} \nonumber\\ \label{eq:g1upr} &\leq a_1. && \p*{\text{By (\ref{eq:a1})}} \end{align} Assume further, without loss of generality, that $B_1$ is fully contained in the upper half-plane and that the origin $(0,0)$ is as close as possible to the centre of the bottom side of the box $B_1$. In particular, it is then also in the centre of the gate $G_1$. \end{description} \subsection*{Second Round} First note that $\card{G_1} = g_1 \le a_1 \le m+1 \le 2m - s$, where the first inequality is (\ref{eq:g1upr}) and the second holds as $a_1 + b_1 = \tfrac{1}{2}\card{\boundary B_1} \le m_1 + 2$ by (\ref{eq:B1bdryupr}) and $b_1 \ge 1$. \begin{description}[leftmargin=0pt] \item[Step (1)] If $M_2 \cap G_1 = \emptyset$, then by claiming all at most $2m - s$ edges in $G_1$, Breaker surrounds $B_1$ completely and thus wins the game. \item[Step (2)] Otherwise, we have $M_2 \cap G_1 \neq \emptyset$. Let $P_1$, $L_1$ and $R_1$ be as in \Cref{str:B-box-lmtd}, and assume without loss of generality that $L_1$ and $R_1$ are the sets of boundary-edges to the left and to the right of $G_1$, respectively. Following the strategy, we set $C_1 \mathrel{\coloneqq} E(B_1) \cup \boundary B_1$. In fact, Breaker can regard the edges in $E(B_1) \cup G_1$ as being played by Maker, and thus regard $C_1$ as the set of already claimed edges.
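The purely numerical step in the chain bounding $g_1$ in the first round, namely $s + 4 \le \tfrac{1}{4}(2m - s + 1)$, reduces to the linear inequality $5s \le 2m - 15$, which follows from $s \le \tfrac{m-22}{14}$. A brute-force check over a finite range (illustrative only, not part of the proof):

```python
# Check s + 4 <= (2m - s + 1)/4, i.e. 5s <= 2m - 15, for all admissible
# pairs (m, s) with m up to 500 and 1 <= s <= (m - 22)/14.  This is the
# numerical step used to conclude g_1 <= a_1.
assert all(
    4 * (s + 4) <= 2 * m - s + 1
    for m in range(29, 501)
    for s in range(1, (m - 22) // 14 + 1)
)
```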
Note that as $M_2 \cap G_1 \neq \emptyset$, we get $M_2 \cap C_1 \neq \emptyset$. Consider the graph spanned by the set of edges $M'_2 \mathrel{\coloneqq} (M_2 \cup P_1) \setminus C_1$. As this is a proper subset of $M_2 \cup P_1$, it might not be box-connected. Consider its box-component containing $P_1$, and assume without loss of generality that it is $M'_2$ itself, as otherwise it has even fewer edges. Indeed, we can `return' the edges outside the box-component of $P_1$ back to Maker, since they will not be claimed by Breaker in this turn and Maker can reclaim them immediately in the next turn if she wishes to do so. Denote by $B_2 \mathrel{\coloneqq} \bb(M'_2)$ the new bounding box. Let $a_2$ and $b_2$ be the numbers of vertices in the top and bottom sides and in the left and right sides of $B_2$, respectively. Since $M_2 \cap G_1 \neq \emptyset$, we have $\card{M_2 \setminus C_1} \le m_2 - 1$. Furthermore, \begin{equation*} \card{P_1} = \card{V_1} - 1 = \card{G_1} - 1 = g_1 - 1, \end{equation*} so we have \begin{equation*} \card{M'_2} \le m_2 + g_1 - 2. \end{equation*} Therefore, \Cref{lem:perim-box-comp} implies that \begin{equation} \label{eq:B2bdry} \card{\boundary B_2} = 2a_2 + 2b_2 \le 2m_2 + 2g_1. \end{equation} Moreover, as $P_1 \subseteq B_2$, we get \begin{equation} \label{eq:g1a2} g_1 \leq a_2. \end{equation} \item[Step (2.1)] If $\card{(\boundary B_2) \setminus C_1} \le 2m - s$, then Breaker claims all edges in $(\boundary B_2) \setminus C_1$. Note that $G_1 \subseteq \boundary B_1 \cap (E(B_2) \cup \boundary B_2)$. Thus, Breaker surrounds $B_1 \cup B_2$ completely and wins the game. \item[Step (2.2)] Assume otherwise, and denote \begin{equation} \label{eq:g2} g_2 \mathrel{\coloneqq} \card{(\boundary B_2) \setminus C_1} - 2m + s. \end{equation} First, note that by (\ref{eq:B2bdry}) we get \begin{equation} \label{eq:g2upr} g_2 \le 2g_1 + s + 2(m_2 - m).
\end{equation} Combining the assumption that $g_2 \geq 1$ and (\ref{eq:B2bdry}), it follows that \begin{equation*} 2m - s + 1 \le \card{(\boundary B_2) \setminus C_1} \le 2m_2 + 2g_1 - \card{(\boundary B_2) \cap C_1}, \end{equation*} and hence \begin{equation} \label{eq:B2projB1} \card{(\boundary B_2) \cap C_1} \le 2g_1 + s - 1 + 2(m_2 - m). \end{equation} As $L_1$ and $R_1$ are the sets of Breaker-claimed boundary-edges of $B_1$ on the two sides of $G_1$, define $c_1 \mathrel{\coloneqq} \min \set{\card{L_1},\card{R_1}}$. Then we have \begin{align} c_1 &= \floor{\tfrac{1}{2} (a_1 - g_1)} \geq \tfrac{1}{2}(a_1 - g_1 - 1) \nonumber \\ &\ge \tfrac{1}{2} \p[\big]{\tfrac{1}{4}\card{\boundary B_1} - \card{\boundary B_1} + 2m-s -1 } && \p{\text{By (\ref{eq:a1}) and (\ref{eq:g1})}} \nonumber \\ &= \tfrac{1}{2} \p[\big]{2m - s - 1 - \tfrac{3}{4}\card{\boundary B_1} } \nonumber \\ &\ge \tfrac{1}{2} \p[\big]{2m - s - 1 - \tfrac{3}{4}(2m+4)} && \p{\text{By (\ref{eq:B1bdryupr}) and $m_1 \le m$}} \nonumber \\ \label{eq:c1} &= \tfrac{1}{4}m - \tfrac{1}{2}s - 2.
\end{align} \begin{figure} \centering \begin{tikzpicture}[scale=0.6] \tikzstyle{s-helper}=[lightgray, thin]; \definecolor{s-b-col}{RGB}{215,25,28}; \tikzstyle{s-b}=[s-b-col, ultra thick]; \definecolor{s-o-col}{RGB}{253,174,97}; \tikzstyle{s-o}=[s-o-col, ultra thick]; \definecolor{s-m-col}{RGB}{171,217,233}; \tikzstyle{s-m}=[s-m-col, ultra thick]; \definecolor{s-g-col}{RGB}{220,220,220}; \tikzstyle{s-g}=[s-g-col, line width=0.1pt]; \clip (-9,-11.6) rectangle (9, 4.6); \foreach \x in {-8,-7.8,...,-1.2} \filldraw[xshift=\x cm, s-b] (0,0) -- (0,-0.2); \foreach \x in {1.2,1.4,...,8} \filldraw[xshift=\x cm, s-b] (0,0) -- (0,-0.2); \foreach \x in {-8,-7.8,...,8} \filldraw[xshift=\x cm, yshift=4cm, s-b] (0,0) -- (0,0.2); \foreach \y in {0,0.2,...,4} \filldraw[xshift=-8cm, yshift=\y cm, s-b] (0,0) -- (-0.2,0); \foreach \y in {0,0.2,...,4} \filldraw[xshift=8cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \y in {-0.2,-0.4,...,-10} \filldraw[xshift=-4cm, yshift=\y cm, s-b] (0,0) -- (-0.2,0); \foreach \x in {-4,-3.8,...,2} \filldraw[xshift=\x cm, yshift=-10cm, s-b] (0,0) -- (0,-0.2); \foreach \y in {-1.8,-2.0,...,-10} \filldraw[xshift=2cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \filldraw[s-m, step=0.2] (-8,0) rectangle (8,4); \filldraw[s-m] (-0.5,1.5) rectangle (0.5,2.5) node[pos=.5, black] {$B_1$}; \foreach \y in {-0.2,-0.4,...,-1.6} \draw[yshift=\y cm, xshift=2 cm, s-o] (0,0) -- (0.2,0); \draw[s-m] (0,0) -- (0,-0.2); \draw[s-m] (-1,-0.2) -- (1,-0.2); \draw[s-m] (0,0) to[out=-90,in=120, looseness=2] (-1.5, -8.5) (-3.1,-8.2) to[out=50, in=-170] (1.8,-8) (-0.8,-7.1) to[out=-120, in=100] (0 ,-9.8) (-3.6,-0.7) to[out=-30, in=170] (-1.5,-2.4) (-3.8,-3.2) to[out=70,in=-120] (-1,-1.2) ; \draw[draw opacity=0] (-1.5,-4.5) rectangle (-0.5,-5.5) node[pos=.5, black] {$B_2$}; \draw [decorate,decoration={brace,amplitude=5pt}] (2.5,-0.25) -- (2.5,-1.65) node [midway,right=4pt] {$G_2$}; \draw [|-|] (-4,-10.6) -- (2,-10.6) node [midway,below=3pt] {$a_2$}; \draw [|-|] (-4.6,-0.2) -- (-4.6,-10) 
node [midway,left=3pt] {$b_2$}; \draw [|-|] (2.6,-1.8) -- (2.6,-10) node [midway,right=3pt] {$b_2 - g_2$}; \draw [decorate,decoration={brace,amplitude=5pt}] (1.2,0.2) -- (8,0.2) node [midway,above=4pt] {$R_1$}; \draw [<-] (0.5,-0.3) -- (0.5,-1) node[below] {$P_1$}; \end{tikzpicture} \caption{End of Round 2, where Maker is in light blue and Breaker in dark red. The set $G_2$ of $g_2$ consecutive unclaimed edges, in orange, forms the gate for the second round.} \label{fig:round2} \end{figure} Let $x_0 \ge 0$ be minimal such that \begin{equation*} \abs{x_0} \geq \max \set*{ \abs{x} \colon (x,y) \in V \p*{ \mathbb{Z}^2[\boundary B_1]} }. \end{equation*} Suppose that Maker claimed an edge which is either to the right or to the left of $B_1$, that is, an edge which has a vertex $v = (\pm x_0,y_0)$ for some $y_0$. In particular, as Maker plays box-connected, we get that $B_2 \setminus C_1$ contains a horizontal path of length at least $c_1 + g_1$, and thus $(\boundary B_2) \cap C_1$ contains such a path as well. Hence, we get that \begin{align*} \card{(\boundary B_2) \cap C_1} &\ge c_1 + g_1 \\ &\ge \tfrac{1}{4}m - \tfrac{1}{2}s - 2 + g_1 && \p{\text{By (\ref{eq:c1})}} \\ &\ge 2s + 4 + g_1 && \p[\big]{\text{As $s \leq \tfrac{m - 22}{14}$ and $m \geq 29$}}\\ &\geq 2s + 4 - 2(m - m_1) + 2(m_2 - m) + g_1 && \p{\text{By (\ref{eq:misum})}} \\ &\ge 2g_1 + s + 2(m_2 - m), && \p{\text{By (\ref{eq:g1uprs})}} \end{align*} contradicting (\ref{eq:B2projB1}). Hence, Maker did not claim any such edge. It follows that $B_2$ is completely contained in the infinite vertical strip defined by the right and left sides of $B_1$. We remark that requiring $m \ge 29$ is necessary precisely for this step. More formally, we have \begin{equation} \label{eq:a2} a_2 = \card{(\boundary B_2) \cap C_1}. \end{equation} Furthermore, note that we must have $B_1 \cap B_2 = \emptyset$, which implies $G_1 \subseteq \boundary B_2$.
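The numerical step $\tfrac{1}{4}m - \tfrac{1}{2}s - 2 \ge 2s + 4$ in the chain above amounts to $m \ge 10s + 24$, which is exactly where the requirement $m \ge 29$ enters: for $s \le \tfrac{m-22}{14}$ one has $10s + 24 \le \tfrac{10m + 116}{14} \le m$ precisely when $m \ge 29$. A quick exhaustive check (illustrative only, not part of the proof):

```python
# Verify m/4 - s/2 - 2 >= 2s + 4 (equivalently m >= 10s + 24) for all
# admissible pairs (m, s) with m up to 500 and 1 <= s <= (m - 22)/14.
assert all(
    m >= 10 * s + 24
    for m in range(29, 501)
    for s in range(1, (m - 22) // 14 + 1)
)
```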
In particular, by (\ref{eq:B2projB1}), we get \begin{equation} \label{eq:a2upr} a_2 \le 2g_1 + s - 1 + 2(m_2 - m), \end{equation} and by (\ref{eq:g2}), we get that \begin{equation} \label{eq:g2precise} g_2 = \card{\boundary B_2} - a_2 - 2m + s. \end{equation} Recall that $\card{R_1 \cap (\boundary B_2)} \le \card{L_1 \cap (\boundary B_2)}$, so in total we get \begin{equation} \label{eq:a1-a2} \card{R_1 \setminus (\boundary B_2)} \ge \tfrac{1}{2} (a_1 - a_2). \end{equation} Let $G_2$ be a set of $g_2$ consecutive edges starting at an edge $e\in (\boundary B_2) \setminus C_1$ of minimal distance to $G_1$. Note that $e$ is in fact the top-right horizontal edge in $\boundary B_2$, and thus $G_2$ is the set of $g_2$ topmost boundary-edges on the right side of $B_2$. Following \Cref{str:B-box-lmtd}, Breaker claims all $2m - s$ edges of $(\boundary B_2) \setminus C_1$, leaving the edges of $G_2$ unclaimed as in \Cref{fig:round2}; we refer to $G_2$ as the gate for the second round. \end{description} \subsection*{Third Round} We now show that there exists a set of at most $2m-s$ unclaimed edges such that by claiming them, Breaker wins the game in this round. We do this by a slightly more careful geometric analysis, combining all the information gathered in the previous two rounds of the game. We start by introducing some notation analogous to that of the first two rounds in \Cref{str:B-box-lmtd}. Let $M_3$ be the set of edges that Maker claimed in her third turn, and set $m_3 \mathrel{\coloneqq} \card{M_3}$. Let $C_2 \mathrel{\coloneqq} C_1 \cup E(B_2) \cup \boundary B_2$. Again, we sometimes regard the edges in $E(B_2) \cup G_2$ as being played by Maker, and thus regard $C_2$ as the set of already claimed edges. Denote by $D$ the set of boundary-edges adjacent to the right side of $B_2$, so we have $\card{D} = b_2$.
Denote by $A$ the box whose left side has $D$ as its set of boundary-edges, and whose top side has $R_1 \setminus (\boundary B_2)$ as its set of boundary-edges (see \Cref{fig:round3}). We consider three cases. \begin{description}[leftmargin=0pt] \item[Case 1] $M_3 \cap G_2 = \emptyset$. Similarly to the second round, by (\ref{eq:g2upr}) and (\ref{eq:g1uprs}), and the fact that $m_1 + m_2 \le 2m$, we have $\card{G_2} = g_2 \le 3s + 8 \le 2m - s$. Thus, by claiming all edges in the gate $G_2$, Breaker surrounds $B_1 \cup B_2$ completely and wins the game. \item[Case 2] $M_3 \cap G_2 \neq \emptyset$ and $(M_3 \setminus C_2) \cap (\boundary A) = \emptyset$. In this case, by claiming all unclaimed edges in $\boundary A$, Breaker surrounds Maker's graph completely and wins the game. We show that he can indeed do so. Note that there are $\card{\boundary A} = 2\card{D} + 2\card{R_1\setminus (\boundary B_2)}$ edges in the boundary of $A$, of which only $\card{D} + \card{R_1\setminus (\boundary B_2)}$ are unclaimed. We get that \begin{align} \card{(\boundary A) \setminus C_2} &\le \card{D} + \card{R_1 \setminus (\boundary B_2)} \nonumber\\ &\le \card{D} + \tfrac{1}{2}m_1 && \p{\text{As $\card{R_1} \le \tfrac{1}{2}m_1$}} \nonumber\\ &\le m_2 + \tfrac{1}{2}m_1 && \p{\text{As $\card{D} = b_2 \leq m_2$}} \nonumber\\ &\le 2m - \tfrac{1}{2}m_1 && \p{\text{As $m_1 + m_2 \le 2m$}} \nonumber\\ \label{eq:case2} &\le 2m - s, && \p{\text{As $m_1 \geq 2s$ by (\ref{eq:m1lwr})}} \end{align} and therefore, Breaker wins the game.
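Both numerical steps in Cases 1 and 2 reduce to linear inequalities in $m$ and $s$: in Case 1, $3s + 8 \le 2m - s$ needs $4s \le 2m - 8$, and in Case 2, $m_1 \ge 2s$ follows from (\ref{eq:m1lwr}) once $m - \tfrac{1}{2}s - \tfrac{3}{2} \ge 2s$, that is, $2m \ge 5s + 3$. Both are implied by $s \le \tfrac{m-22}{14}$; a brute-force check over a finite range (illustrative only, not part of the proof):

```python
# Case 1: 3s + 8 <= 2m - s, i.e. 4s <= 2m - 8.
# Case 2: m - s/2 - 3/2 >= 2s, i.e. 2m >= 5s + 3 (so that m_1 >= 2s).
assert all(
    4 * s <= 2 * m - 8 and 2 * m >= 5 * s + 3
    for m in range(36, 501)
    for s in range(1, (m - 22) // 14 + 1)
)
```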
\begin{figure} \centering \begin{tikzpicture}[scale=0.6] \tikzstyle{s-helper}=[lightgray, thin]; \definecolor{s-b-col}{RGB}{215,25,28}; \tikzstyle{s-b}=[s-b-col, ultra thick]; \definecolor{s-o-col}{RGB}{253,174,97}; \tikzstyle{s-o}=[s-o-col, ultra thick]; \definecolor{s-m-col}{RGB}{171,217,233}; \tikzstyle{s-m}=[s-m-col, ultra thick]; \definecolor{s-g-col}{RGB}{220,220,220}; \tikzstyle{s-g}=[s-g-col, line width=0.1pt]; \clip (-9,-12.2) rectangle (10, 5.6); \foreach \x in {-8,-7.8,...,-1.2} \filldraw[xshift=\x cm, s-b] (0,0) -- (0,-0.2); \foreach \x in {1.2,1.4,...,8} \filldraw[xshift=\x cm, s-b] (0,0) -- (0,-0.2); \foreach \x in {-8,-7.8,...,8} \filldraw[xshift=\x cm, yshift=4cm, s-b] (0,0) -- (0,0.2); \foreach \y in {0,0.2,...,4} \filldraw[xshift=-8cm, yshift=\y cm, s-b] (0,0) -- (-0.2,0); \foreach \y in {0,0.2,...,4} \filldraw[xshift=8cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \y in {-0.2,-0.4,...,-10} \filldraw[xshift=-4cm, yshift=\y cm, s-b] (0,0) -- (-0.2,0); \foreach \x in {-4,-3.8,...,2} \filldraw[xshift=\x cm, yshift=-10cm, s-b] (0,0) -- (0,-0.2); \foreach \y in {-1.8,-2.0,...,-10} \filldraw[xshift=2cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \y in {-0.2,-0.4,...,-9.8} \filldraw[xshift=7cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \x in {3.6,3.8,4.0,4.2,4.4,4.6,4.8,5.0} \filldraw[xshift=\x cm, yshift=-10cm, s-b] (0,0) -- (0,-0.2); \foreach \x in {-5.4,-5.2,...,3.4} \filldraw[xshift=\x cm, yshift=-11.2cm, s-b] (0,0) -- (0,-0.2); \foreach \y in {-9.0,-9.2,...,-11.2} \filldraw[xshift=-5.6cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \x in {-5.4,-5.2,...,-4.2} \filldraw[xshift=\x cm, yshift=-8.8cm, s-b] (0,0) -- (0,-0.2); \foreach \y in {-10.2,-10.4,...,-11.2} \filldraw[xshift=3.4cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \y in {-10.2,-10.4,...,-11.8} \filldraw[xshift=5.0cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \x in {5.2,5.4,...,7.6} \filldraw[xshift=\x cm, yshift=-12.0cm, s-b] (0,0) -- (0,0.2); \foreach \y in 
{-10,-10.2,...,-11.8} \filldraw[xshift=7.6cm, yshift=\y cm, s-b] (0,0) -- (0.2,0); \foreach \x in {7.2,7.4,7.6} \filldraw[xshift=\x cm, yshift=-10.0cm, s-b] (0,0) -- (0,0.2); \filldraw[s-m-col] (-8,0) rectangle (8,4); \filldraw[s-m] (-0.5,1.5) rectangle (0.5,2.5) node[pos=.5, black] {$B_1$}; \filldraw[s-m-col, step=0.2] (-4,-0.2) rectangle (2,-10); \filldraw[s-m] (-1.5,-4.5) rectangle (-0.5,-5.5) node[pos=.5, black] {$B_2$}; \draw[draw opacity=0] (4.1,-4.5) rectangle (5.1,-5.5) node[pos=.5, black] {$A'$}; \draw[draw opacity=0] (-5.5,-10.3) rectangle (-4.5,-11.3) node[pos=.5, black] {$S_1$}; \draw[draw opacity=0] (6.5,-10.3) rectangle (7.5,-11.3) node[pos=.5, black] {$S_2$}; \draw[s-m] (0,0) -- (0,-0.2); \draw[s-m] (2,-1) -- (2.2,-1); \draw[s-m] (2.2,-0.2) -- (2.2,-1.6); \draw[s-m] (2.2,-1) to[out=0,in=90] (3.2,-10) -- (3.2,-10.2) to[out=-100, in=0] (-1,-10.7) to[out=180, in=-110] (-5.2,-9.2) (5.3,-5.8) to[out=100, in=-80] (6.8,-3.7) (5.4,-3.7) to[out=-100, in=80] (6.3,-5.5) (3.0,-7.8) to[out=70, in=-110] (6.2,-8.2) (6,-7.6) to[out=-150, in=90] (5.8,-10) -- (5.8,-10.2) to[out=-90, in=170] (7.5,-11.7) (5.3,-11.7) to[out=0,in=-140] (6.0,-11.3) ; \draw [decorate,decoration={brace,amplitude=5pt, mirror}] (1.7,-0.25) -- (1.7,-9.95) node [midway,left=4pt] {$D$}; \draw [|-|] (8.6,-0.2) -- (8.6,-10) node [midway,right=3pt] {$b_3$}; \draw [|-|] (2.25,-0.5) -- (6.95,-0.5) node [midway,below=3pt] {$a_3$}; \draw [decorate,decoration={brace,amplitude=5pt}] (2.4,0.2) -- (7.95,0.2) node [midway,above=4pt] {$R_1 \setminus (\partial B_2)$}; \draw [|-|] (-8,4.6) -- (8,4.6) node [midway,above=3pt] {$a_1$}; \draw [|-|] (-3.95,0.3) -- (1.95,0.3) node [midway,above=3pt] {$a_2$}; \draw [densely dashed] (2.2,-10) -- (8,-10) -- (8,-0.2); \draw [<-] (2.4,-1.4) -- (2.9,-1.9) node[right=5pt,below=0pt] {$P_2$}; \end{tikzpicture} \caption{End of Round 3, where Maker is in light blue and Breaker in dark red. 
The region bounded by the dashed lines is the box $A$.} \label{fig:round3} \end{figure} \item[Case 3] $M_3 \cap G_2 \neq \emptyset$ and $(M_3 \setminus C_2) \cap (\boundary A) \neq \emptyset$. For this case we need some further notation. Similarly to the second round, let $V_2$ be the set of vertices in $\mathbb{Z}^2 \setminus B_2$ which are contained in edges of $G_2$. Let further $P_2 \mathrel{\coloneqq} E \p*{\mathbb{Z}^2[V_2]}$ be the set of edges in the path induced by the vertices of $V_2$. Note that we have $P_2 \subseteq A$. Let $M'_3 \mathrel{\coloneqq} (M_3 \cup P_2)\setminus C_2$. Similarly to the second round, we have $\card{P_2} = g_2 - 1$, and since $M_3 \cap G_2 \neq \emptyset$, we also have $\card{M_3 \setminus C_2} \le m_3 - 1$. In total we get \begin{equation} \label{eq:M3} \card{M'_3} \le m_3 + g_2 - 2. \end{equation} Let $M_3^A \mathrel{\coloneqq} M'_3 \cap E(A)$, and let $A' \mathrel{\coloneqq} \bb(M_3^A)$ be its bounding box (see \Cref{fig:round3}). By box-connectivity, and since $(M_3 \setminus C_2) \cap (\boundary A) \neq \emptyset$, we get that the box $A'$ shares at least one full side with the box $A$. In \Cref{fig:round3}, for instance, they share the left side. If $(\boundary A') \cap M'_3 = \emptyset$, then Breaker wins simply by claiming all the edges in $(\boundary A') \setminus C_2$. Indeed, as $A' \subseteq A$, by (\ref{eq:case2}) we have \begin{align*} \card{(\boundary A') \setminus C_2} \le 2m - s. \end{align*} Hence we may assume that $(\boundary A') \cap M'_3 \neq \emptyset$. Let $S_1,\dotsc,S_t$ be the box-components of $M'_3 \setminus M_3^A$ that intersect the boundary of $A'$. That is, for $i \in [t]$ we have $S_i \cap (\boundary A') \neq \emptyset$. Further, denote $Q_i \mathrel{\coloneqq} \bb(S_i)$. We now show that Breaker can claim all unclaimed edges that surround \begin{equation*} A' \cup Q_1 \cup \dotsb \cup Q_t, \end{equation*} which, in particular, implies that he wins the game.
More precisely, we show that there are at most $2m-s$ such edges, that is, \begin{equation} \label{eq:BoxesBdry} \card[\big]{\boundary \p[\big]{A' \cup Q_1 \cup \dotsb \cup Q_t} \setminus C_2} \le 2m-s. \end{equation} Let us emphasise that in the equation above we mean the edge-boundary of a union of boxes, rather than the edge-boundary of their joint bounding box, as in previous rounds. To prove (\ref{eq:BoxesBdry}), first note that for any distinct $i,j \in [t]$, we have $Q_i \cap Q_j = \emptyset$ by box-connectivity. Therefore, \begin{align} \card[\big]{\boundary \p[\big]{A' \cup Q_1 \cup \dotsb \cup Q_t} \setminus C_2} &\le \card[\big]{(\boundary A')\setminus C_2} + \sum_{i=1}^t \card[\big]{\boundary (A' \cup Q_i) \setminus \boundary A'} \nonumber \\ \label{eq:SnakesBdry} &\le \card[\big]{(\boundary A')\setminus C_2} + \sum_{i=1}^t \p[\big]{\card{\boundary (A' \cup Q_i)} - \card{\boundary A'}}. \end{align} Now we bound each of these two terms separately. We start with the sum in (\ref{eq:SnakesBdry}). Similarly to the argument in the proof of \Cref{lem:perim-box-comp}, for each $i\in [t]$ we have \begin{equation*} \card{\boundary(A' \cup Q_i)} \le \card{\boundary A'} + \card{\boundary Q_i} - 4. \end{equation*} Recall that by \Cref{lem:perim-box-comp}, we have $\card{\boundary Q_i} \le 2e(S_i) + 4$ for each $i\in [t]$, so in total we get \begin{equation} \label{eq:SnakeSum} \sum_{i=1}^t \p[\big]{\card{\boundary (A' \cup Q_i)} - \card{\boundary A'}} \le 2\sum_{i=1}^t e(S_i). \end{equation} As for the first term in (\ref{eq:SnakesBdry}), let $a_3$ and $b_3$ be the numbers of vertices in the top and bottom sides and in the left and right sides of $A'$, respectively. Recall that $A'$ shares at least one side with $A$, so in particular we have either $a_3 = \card{R_1 \setminus (\boundary B_2)}$ or $b_3 = b_2$.
It follows that there are at least $\min \set{ \card{R_1 \setminus (\boundary B_2)}, b_2 - g_2 }$ edges in $\boundary A'$ which are already claimed by Breaker. Moreover, we have $G_2 \subseteq \boundary A'$, so the number of edges that Breaker still has to claim in $\boundary A'$ is reduced by at least $g_2$ more. In total, we get \begin{equation*} \card[\big]{(\boundary A')\cap C_2} \ge g_2 + \min \set[\big]{ \card{R_1 \setminus (\boundary B_2 )} ,\, b_2 - g_2}. \end{equation*} Denote $m^A_3 \mathrel{\coloneqq} \card{M^A_3}$ and recall that $A' = \bb(M^A_3)$. By \Cref{lem:perim-box-comp} and by the above, we get that \begin{align*} \card[\big]{(\boundary A' )\setminus C_2} &= \card[\big]{\boundary A'} - \card[\big]{(\boundary A')\cap C_2} \\ &\le 2 m_3^A + 4 - g_2 - \min \set[\big]{ \card{R_1 \setminus (\boundary B_2 )} ,\, b_2 - g_2}. \end{align*} To finish the proof, we need the following technical claim, which we prove later. \begin{claim} \label{claim:min} We have \begin{equation} \label{eq:round3min} \min \set[\big]{ \card{R_1 \setminus (\boundary B_2 )} ,\, b_2 - g_2} \ge g_2 + s + 2(m_3 - m). \end{equation} \end{claim} Assume for now that \Cref{claim:min} holds. We get that \begin{equation} \label{eq:A'Bdry} \card[\big]{(\boundary A') \setminus C_2} \le 2(m + m^A_3 - m_3) - 2g_2 - s + 4. \end{equation} Furthermore, by (\ref{eq:M3}) we have \begin{align*} m_3 + g_2 - 2 \ge \card{M'_3} \ge m^A_3 + \sum_{i=1}^t e(S_i), \end{align*} and in particular \begin{align*} m^A_3 - m_3 \le g_2 - 2 - \sum_{i=1}^t e(S_i). \end{align*} So by (\ref{eq:A'Bdry}), we get \begin{align*} \card[\big]{(\boundary A') \setminus C_2} \le 2 \p[\Big]{m + g_2 - 2 - \sum_{i=1}^t e(S_i)} - 2g_2 - s + 4 = 2m - s - 2\sum_{i=1}^t e(S_i).
\end{align*} Finally, by (\ref{eq:SnakeSum}), (\ref{eq:SnakesBdry}), and by the above, we conclude \begin{align*} \card[\Big]{\boundary \p[\big]{A' \cup Q_1 \cup \dotsb \cup Q_t} \setminus C_2} \leq 2m - s, \end{align*} proving (\ref{eq:BoxesBdry}), as required. Hence, it is only left to prove \Cref{claim:min}. \begin{proof}[Proof of \Cref{claim:min}] We start with the second term. Observe that \begin{align*} g_2 &= a_2 + 2b_2 - 2m + s && \p{\text{By (\ref{eq:g2precise}) and (\ref{eq:B2bdry})}}\\ &\le b_2 + g_1 + s - (2m - m_2). && \p{\text{Again by (\ref{eq:B2bdry})}} \end{align*} Therefore, we have \begin{align} b_2 - g_2 &\ge 2m - m_2 - s - g_1 \nonumber\\ &\ge 2m - m_2 - 2s - 4 + 2(m - m_1) && \p{\text{By (\ref{eq:g1uprs})}}\nonumber\\ \label{eq:min2large} &\ge m - 2s - 4, \end{align} where we used the fact that $m_1 \leq m$ and $m_1 + m_2 \leq 2m$. On the other hand, we have \begin{align*} g_2 + s + 2(m_3 - m) &\le 2g_1 + 2s + 2(m_2 - m) + 2(m_3 - m) && \p{\text{By (\ref{eq:g2upr})}} \\ &\le 4s + 8 + 4m_1 + 2(m_2 + m_3) - 8m && \p{\text{By (\ref{eq:g1uprs})}} \\ &\le 4s + 8 && \p{\text{By (\ref{eq:misum})}} \\ &\le m - 2s - 4. && \p[\big]{\text{As $s \le \tfrac{m-22}{14}$}}. \end{align*} Combining this with (\ref{eq:min2large}), we get \begin{equation*} b_2 - g_2 \ge g_2 + s + 2(m_3 - m), \end{equation*} proving the claim for the second term in~(\ref{eq:round3min}). As for the first term, we start by noticing that \begin{align} \tfrac{1}{2}a_1 + \tfrac{1}{2}a_2 - 2g_1 &\ge \tfrac{1}{2} (a_1 - 3g_1) && \p{\text{By (\ref{eq:g1a2})}} \nonumber\\ &\ge \tfrac{1}{2} \p[\Big]{\tfrac{1}{4}\card{\boundary B_1} - 3\p[\big]{\card{\boundary B_1} - 2m + s } } && \p{\text{By (\ref{eq:a1}) and (\ref{eq:g1})}} \nonumber \\ &= \tfrac{1}{2}\p[\big]{6m - 3s - \tfrac{11}{4}\card{\boundary B_1} } \nonumber \\ \label{eq:a1a2g1} &\ge \tfrac{1}{2}\p[\big]{6m - 3s - \tfrac{11}{4}(2m_1+4)}. 
&& \p{\text{By (\ref{eq:B1bdryupr})}} \end{align} In addition, using (\ref{eq:g2precise}) and (\ref{eq:B2bdry}), we can also write \begin{equation} \label{eq:g2bnd} g_2 \le 2g_1 + s - a_2 - 2(m - m_2). \end{equation} Hence we get \begin{align*} \card{R_1 \setminus (\boundary B_2 )} - g_2&\ge \tfrac{1}{2} (a_1 - a_2) - g_2 && \p{\text{By (\ref{eq:a1-a2})}}\\ &\ge \tfrac{1}{2}a_1 + \tfrac{1}{2}a_2 - 2g_1 - s + 2(m - m_2) && \p{\text{By (\ref{eq:g2bnd})}}\\ &\ge \tfrac{1}{2}\p[\big]{6m - 3s - \tfrac{11}{4}(2m_1+4)} - s + 2(m - m_2) && \p{\text{By (\ref{eq:a1a2g1})}}\\ &= 7m - 2(m_1 + m_2 + m_3) \\ &\quad- \tfrac{3}{4}m_1 - \tfrac{5}{2}s - \tfrac{11}{2} + 2(m_3 - m) \\ &\ge m - \tfrac{3}{4}m_1 - \tfrac{5}{2}s - \tfrac{11}{2} + 2(m_3 - m) && \p{\text{By (\ref{eq:misum})}} \\ &\ge \tfrac{1}{4}m - \tfrac{5}{2}s - \tfrac{11}{2} + 2(m_3 - m) && \p{\text{As $m_1 \leq m$}} \\ &\ge s + 2(m_3 - m) && \p[\big]{\text{As $s \leq \tfrac{m-22}{14}$}}, \end{align*} finishing the proof of the claim. \end{proof} \end{description} This completes the proof. \end{proof} Note that a more careful analysis, in a similar spirit to the one above, might produce a somewhat smaller ratio than $2-\frac{1}{14}+o(1)$ in \Cref{thm:main1}. However, as our main aim was to break the perimetric barrier, we did not attempt to optimise it, for the benefit of clarity. \section{Fast win of Breaker in the boosted \texorpdfstring{$(m,2m)$}{(m,2m)}-game} \label{sec:fast-boost} In this section we prove \Cref{thm:boost}. Note that, as the game is bias monotone in $b$, it is enough to prove \Cref{thm:boost} for $b = 2m$. As discussed in \Cref{sec:intro}, the proof contains two important ideas. Firstly, assuming that Maker always keeps her graph connected significantly simplifies the analysis of the game. Hence we consider a slight variation of the game for which the following hold.
\begin{enumerate} \item If Breaker wins this variant within the first $k$ rounds using a strategy satisfying a certain further condition, then he also wins the $(m,b)$ percolation game within the first $k$ rounds. \item Maker must keep the graph spanned by her claimed edges connected. \end{enumerate} Note that there is a certain price we have to pay for this, in the form of restrictions on Breaker's strategy, and in allowing Maker, instead of claiming precisely $m$ edges in each round, to sometimes claim a bit more and sometimes a bit less. The second important idea is to define a strategy of Breaker in such a way that he forces Maker to create a component of a `bad' shape. More precisely, as we can see by \Cref{lem:perimetric}, if we give Maker $km + c$ edges to build a connected component and Breaker $2km$ edges to place in the boundary of the said component, Breaker can ensure that he claims `almost' all edges in the boundary. Hence, if he can force Maker to create a component that is sufficiently far from equality in \Cref{lem:perimetric}, Breaker can in fact claim all the edges in its boundary. In the rest of the section, we expand these ideas into a formal proof. We start by defining the variation of the game that we mentioned before. \begin{definition}[$c$-boosted $(m,b)$ Maker-Breaker limited percolation game on $\mathbb{Z}^2$] \label{def:bstd-lmtd-game} Two players, Maker and Breaker, alternate claiming as yet unclaimed edges of the board $\mathbb{Z}^2$, starting in round $1$ with Maker going first. \begin{itemize} \item In round $i$, Maker chooses a non-negative integer $m_i$ such that \begin{equation*} \sum_{j=1}^i m_j \leq im + c, \end{equation*} and then claims $m_i$ unclaimed edges from $E(\mathbb{Z}^2)$. Moreover, Maker must play in such a way that at the end of each of her turns, all of her edges lie in the connected component of $v_0$. \item In each round, Breaker claims at most $b$ unclaimed edges.
\item Breaker wins if the connected component of $v_0$ consisting only of Maker's edges and unclaimed edges becomes finite. If Maker can ensure that this never happens, then she wins. \end{itemize} \end{definition} The following proposition is a key result in the same spirit as \Cref{prop:boxlmtd-unlmted}, relating these two games. \begin{proposition}\label{prop:lmtd-unlmtd} Let $m, b \ge 1$ and $c \ge 0$ be integers. Assume that Breaker can ensure his win in the $c$-boosted $(m,b)$ limited percolation game on $\mathbb{Z}^2$ within the first $k$ rounds by claiming only edges from the boundary of the graph spanned by Maker's edges. Then he can also ensure his win in the $c$-boosted $(m,b)$ percolation game on $\mathbb{Z}^2$ within the first $k$ rounds. \end{proposition} The proof of \Cref{prop:lmtd-unlmtd} is similar to the proof of \Cref{prop:boxlmtd-unlmted} except for some minor details, so we do not include it. Throughout the rest of the section, we consider only the $c$-boosted limited percolation game on $\mathbb{Z}^2$, and prove the following. \begin{proposition} \label{prop:lmtdBwin} Let $m \ge 1$ and $c \geq 0$ be integers. Then Breaker can guarantee to win the $c$-boosted $(m,2m)$ limited percolation game on $\mathbb{Z}^2$ within the first $(2c+4)(2c+5)\p[\big]{\ceil{ \frac{2c+2}{m} } + 1}$ rounds of the game. \end{proposition} \Cref{thm:boost} then follows by combining \Cref{prop:lmtd-unlmtd,prop:lmtdBwin}. We start by providing various definitions and assumptions that we need in our proof. Firstly, without loss of generality, we may assume that Maker's graph is not only connected after each turn of hers, but also that she is adding edges to it, one by one, so that her graph, which we denote by $C$, is connected at any single point during her turn. We then let $C(\ell)$ be her graph after Maker has claimed $\ell$ edges in total, and we denote by $C_{k} \mathrel{\coloneqq} C(\sum_{i=1}^{k} m_i)$ Maker's graph after she has played $k$ full turns.
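The edge-boundary bookkeeping used throughout this section is easy to experiment with. The following Python sketch (our own illustration, not part of the paper; edges are represented as frozensets of endpoints, a convention we choose here) computes the edge boundary of a finite edge set in $\mathbb{Z}^2$ and checks the perimetric bound $\card{\boundary C} \le 2\card{E(C)} + 4$ from \Cref{lem:perimetric} on two small connected examples.

```python
def boundary(edges):
    """Edge boundary of an edge set in Z^2: all grid edges incident to a
    vertex of the set that do not themselves belong to the set."""
    verts = {v for e in edges for v in e}
    bnd = set()
    for (x, y) in verts:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            e = frozenset({(x, y), nb})
            if e not in edges:
                bnd.add(e)
    return bnd

def E(*pairs):
    """Build an edge set from pairs of endpoints."""
    return {frozenset(p) for p in pairs}

# A path with 3 edges meets the perimetric bound with equality: 2|E| + 4 = 10.
path = E(((0, 0), (1, 0)), ((1, 0), (2, 0)), ((2, 0), (3, 0)))
assert len(boundary(path)) == 2 * len(path) + 4

# A unit square has 4 edges but only 8 boundary edges, strictly below 12.
square = E(((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0)))
assert len(boundary(square)) == 8 < 2 * len(square) + 4
```

The path is extremal because each added edge contributes three new incident edges and removes itself from the boundary; the square falls short because one boundary edge is shared by two of its vertices.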
Recall that we denote by $\boundary C(\ell)$ the edge boundary of $C(\ell)$, and note that we include in this boundary also the edges that Breaker has already claimed. The set of those edges in the boundary of $C(\ell)$ that are yet unclaimed by Breaker at this point of the game is denoted by $\boundary_F C(\ell)$, which stands intuitively for the `free' boundary of $C(\ell)$. While this definition may be ambiguous as $\boundary_F C(\ell)$ changes during the turn of Breaker, we will always make it clear to which particular point we refer when using this notation. \begin{definition} \label{def:edgetypes} For any $\ell \ge 1$, we call an edge $e \in \boundary C(\ell)$: \begin{itemize} \item \emph{Awful} if $\card{\boundary(C(\ell)+e)} - \card{\boundary C(\ell)} \leq 1$. \item \emph{Bad} if it is not awful, but by claiming $e$, Maker creates at least one new awful edge $f$ in $\boundary(C(\ell) + e)$ such that $f$ touches $e$. \item \emph{Good} otherwise. \end{itemize} \end{definition} We now provide Breaker with a strategy. \begin{strategy}[Breaker's strategy for the $c$-boosted $(m,b)$ limited percolation game on $\mathbb{Z}^2$] \label{str:B-bstd-lmtd} Let $k \ge 1$ and assume that by the end of her $k$-th turn Maker has claimed $\ell$ edges in total, for some $\ell \ge 0$. In his $k$-th turn, Breaker claims edges one by one from $\boundary_F C(\ell)$ in the following order of priority: \begin{enumerate} \item good edges, \item bad edges, \item awful edges. \end{enumerate} \end{strategy} Note that if at some point Breaker cannot follow \Cref{str:B-bstd-lmtd}, then it means that there are no unclaimed edges in the boundary of Maker's graph, which in particular means that Breaker has won the game. We make the following straightforward observation that does not require proof. \begin{observation}\label{obs:awfulbad} An awful edge stays awful until it is claimed by either player. A bad edge either stays bad or becomes awful until it is claimed by either player.
\end{observation} Further, let \begin{equation*} v_k \mathrel{\coloneqq} 2\card{E(C_k)} + 4 - \card{\boundary C_k}, \end{equation*} and let $w_k$ be the number of awful edges in $\boundary_F C_k$ after $k$ turns of both Maker and Breaker, and set $v_0 = w_0 = 0$. Let $\boundary_G C_k \subseteq \boundary_F C_k$ denote the subset of good edges. We consider the lexicographic order on ordered pairs $\set{ (x;y) : x,y \in \mathbb{R} }$. That is, we have \begin{equation*} (x;y) > (z;w) \iff x > z \text{ or } \p[\big]{x = z \text{ and } y > w}. \end{equation*} When we write inequalities between ordered pairs, we always refer to this ordering. \Cref{prop:lmtdBwin} is implied by the following observation and proposition. \begin{observation} \label{obs:vkwk} Let $k \geq 1$ and assume that after $k$ turns of both Maker and Breaker, Breaker has not yet won. Then we have $0 \leq v_k \leq 2c+3$ and $0 \leq w_k \leq 2c+4$. \end{observation} \begin{proof} Firstly, recall that an awful edge is in particular an unclaimed edge, so by the definition of $w_k$ and by \Cref{lem:perimetric} we have \begin{equation*} 0 \le w_k \le \card{\boundary_F C_k} = \card{\boundary C_k} - 2mk \le 2c + 4. \end{equation*} Secondly, note that $v_k \geq 0$ simply by \Cref{lem:perimetric}. Since Breaker has not won the game yet by the end of his $k$-th turn, and as he claims only edges from the boundary of Maker's graph, we have $\card{\boundary C_k} \ge 2mk+1$, and thus \begin{equation*} v_k = 2\card{E(C_k)} + 4 - \card{\boundary C_k} \le 2c+3. \qedhere \end{equation*} \end{proof} \begin{proposition} \label{prop:vkwk} Let $k \ge 1$ and $c' = \ceil{\frac{2c+2}{m}}$. Assume that after $k+c'$ turns of both Maker and Breaker, Breaker has not yet won the game. Then there exists some $1 \leq r \leq c'+1$, such that $(v_{k+r};w_{k+r}) > (v_k;w_k)$. \end{proposition} Before proving \Cref{prop:vkwk}, let us see why it is useful for us.
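The lexicographic order on pairs coincides with Python's built-in tuple comparison, which gives a quick way to sanity-check the potential argument: a strictly increasing sequence of pairs confined to a finite grid has at most as many terms as the grid has elements. A minimal sketch, with an illustrative value of $c$ (the value is our choice, not the paper's):

```python
c = 3  # illustrative value; any non-negative integer works

# All possible potential pairs (v_k; w_k), per the bounds
# 0 <= v_k <= 2c+3 and 0 <= w_k <= 2c+4.
S = [(x, y) for x in range(2 * c + 4) for y in range(2 * c + 5)]
assert len(S) == (2 * c + 4) * (2 * c + 5)

# Python tuples compare lexicographically, matching the order (x;y) > (z;w).
assert (1, 0) > (0, 9) and (2, 3) > (2, 2)

# Sorting S yields the longest possible strictly increasing sequence, so any
# strictly increasing sequence of potentials has at most |S| terms.
chain = sorted(S)
assert all(chain[i] < chain[i + 1] for i in range(len(chain) - 1))
```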
\begin{claim} \Cref{prop:lmtdBwin} is implied by \Cref{obs:vkwk} and \Cref{prop:vkwk}. \end{claim} \begin{proof} Consider the set \begin{equation*} S \mathrel{\coloneqq} \set[\big]{ (x;y) \colon x \in \set{0,1,2,\dotsc,2c+3} ,\, y \in \set{0,1,2,\dotsc,2c+4} }. \end{equation*} We clearly have $\card{S} = (2c+4)(2c+5)$. By \Cref{obs:vkwk} we have $(v_k;w_k) \in S$ for every $k$ such that Breaker has not yet won after $k$ rounds of the game. By \Cref{prop:vkwk} we can find a strictly increasing sequence \begin{equation*} (v_{i_1};w_{i_1}) < (v_{i_2};w_{i_2}) < \dotsb \end{equation*} with $i_1=1$ and $i_{t+1}-i_t \leq c'+1$ for each $t \geq 1$. Since all terms of this sequence lie in $S$, it has at most $\card{S}$ terms, and as consecutive indices differ by at most $c'+1$, Breaker must win within $(2c+4)(2c+5)(c'+1)$ rounds. \end{proof} It remains to prove \Cref{prop:vkwk}. We start by proving two easy lemmas. \begin{lemma}\label{lem:vk} Let $k \geq 0$ and assume that after $k$ rounds of the game Breaker has not won yet. Assume that out of $m_{k+1}$ edges that Maker claimed in her $(k+1)$-th turn, $t$ were awful at the time they were claimed, for some $0 \le t \le m_{k+1}$. Then \begin{equation*} \card{\boundary C_{k+1}} - \card{\boundary C_k} \leq 2 m_{k+1} -t. \end{equation*} In particular we get \begin{equation*} v_{k+1} \ge v_k + t. \end{equation*} \end{lemma} \begin{proof} When claimed by Maker, an edge is removed from the edge boundary of the new component, and at most three new edges are added to it, which means that $\card{\boundary C(\ell+1)} - \card{\boundary C(\ell)} \leq 2$, for any $\ell \ge 1$. If the $\ell$-th edge claimed by Maker is awful, then by definition we have $\card{\boundary C(\ell+1)} - \card{\boundary C(\ell)} \leq 1$. The first result follows by combining these two observations. Then in particular, by the definition of $v_k$ and by the above, we have \begin{align*} v_{k+1} &= 2\card{E(C_{k+1})} + 4 - \card{\boundary C_{k+1}} \\ &\ge 2 \p[\big]{\card{E(C_k)} + m_{k+1} } + 4 - \card{\boundary C_k} - 2m_{k+1} + t = v_k + t.
\qedhere \end{align*} \end{proof} \begin{lemma}\label{lem:sum-mi} Let $k \geq 1$ and assume that after $k$ rounds of the game, Breaker has not yet won. Then \begin{equation*} \sum_{i=1}^k m_i \geq km-1. \end{equation*} \end{lemma} \begin{proof} Assume for contradiction that $\sum_{i=1}^k m_i \leq km-2$. Then by \Cref{lem:perimetric}, we have $\card{\boundary C_k} \le 2(km-2)+4 = 2km$. However, within his first $k$ turns, Breaker claims precisely $2km$ edges, all in $\boundary C_k$. Thus he must have won at the latest after $k$ rounds, which is the desired contradiction. \end{proof} Next we show that Maker's moves never create many good edges. \begin{lemma}\label{lem:fewgoodedges} For any $\ell \geq 1$, the number of good edges in $\boundary C(\ell+1)$ is at most one more than the number of good edges in $\boundary C(\ell)$, that is \begin{equation*} \card{\boundary_G C(\ell+1)} - \card{\boundary_G C(\ell)} \le 1. \end{equation*} Moreover, $\card{\boundary_G C(1)}, \card{\boundary_G C(2)} \le 2$. \end{lemma} \begin{proof} That $\card{\boundary_G C(1)}, \card{\boundary_G C(2)} \le 2$ follows by inspection, so it only remains to prove the first assertion. Let $\ell \geq 1$. By \Cref{obs:awfulbad}, no edge that was bad or awful in $\boundary C(\ell)$ can be good in $\boundary C(\ell+1)$. So it suffices to rule out the possibility that two of the at most three edges in $\boundary C(\ell+1) \setminus \boundary C(\ell)$ are good. Let $e$ be the $(\ell+1)$-st edge claimed by Maker. By symmetry, we may assume that $e$ is horizontal, i.e. that $e= \set{ (x,y), (x+1,y) }$ for some $x,y \in \mathbb{Z}$. Further, since Maker always claims an edge in the edge boundary of her only connected component and due to symmetry again, we only need to consider the following two cases. \begin{enumerate} \item The edge $\set[\big]{(x-1,y), (x,y)}$ was already in $C(\ell)$. \item The edge $\set[\big]{(x,y+1), (x,y)}$ was already in $C(\ell)$.
\end{enumerate} In either case, we have that $\boundary C(\ell+1) \setminus \boundary C(\ell)$ is contained in the set \begin{equation*} \set[\Big]{ \set[\big]{(x+1,y), (x+2,y)}, \set[\big]{(x+1,y), (x+1,y-1)}, \set[\big]{(x+1,y), (x+1,y+1)} }, \end{equation*} and out of these three edges, only $\set[\big]{(x+1,y), (x+2,y)}$ may be in $\boundary_G C(\ell+1)$. \end{proof} \Cref{lem:fewgoodedges} has the following corollary. \begin{corollary}\label{cor:fewgoodedges} For any $k \ge 1$, at the end of Breaker's $k$-th turn, we have \begin{equation*} \card{\boundary_G C_k} \le (c - mk)_+, \end{equation*} where $n_+ \mathrel{\coloneqq} \max \set{n, 0}$. \end{corollary} \begin{proof} If $m \ge 3$, then we can assume $\card{E(C_1)} \ge 2$, otherwise Breaker wins already in the first round. And in the case when $\card{E(C_1)} \ge 2$, by \Cref{lem:fewgoodedges}, we get $\card{\boundary_G C_1} \le \card{E(C_1)}$ at the end of Maker's first turn. In his $k$-th turn, Breaker claims at least $\min \set{ 2m, \card{\boundary_G C_k} }$ good edges. As this holds for any $k \ge 1$, at the end of Breaker's $k$-th turn we have \begin{equation*} \card{\boundary_G C_k} \le \p[\big]{\card{E(C_k)} - 2mk }_+ \le (c - mk)_+. \end{equation*} If $m \in \set{1,2}$ and $m_1 \le 1$, then although we can have $\card{\boundary_G C_1} > \card{E(C_1)} $ at the end of Maker's first turn, we still have $\card{\boundary_G C_1} = 0$ by the end of Breaker's turn, and the result follows similarly to the first case. \end{proof} We are now ready to prove \Cref{prop:vkwk}. \begin{proof}[Proof of \Cref{prop:vkwk}] The crucial part of our proof is the following claim. \begin{claim}\label{clm:crucial} There exists $1 \le j \le c'-1$, such that in his $(k+j)$-th turn Breaker claimed a bad or an awful edge. 
\end{claim} \begin{proof}[Proof of \Cref{clm:crucial}] Assume for contradiction that in his $(k+j)$-th turn Breaker claimed only good edges for all $1 \le j \le c'-1$, meaning he claimed a total of $2m(c'-1)$ good edges during these rounds. By \Cref{cor:fewgoodedges}, at the end of Breaker's $k$-th turn we have $\card{\boundary_G C_k} \le (c - mk)_+$. Moreover, by \Cref{lem:sum-mi} Maker claimed at most \begin{equation*} \sum_{i=k+1}^{k+c'-1} m_i \le c + (k+c'-1)m - km + 1 = c + 1 + m(c'-1) \end{equation*} edges after her $k$-th turn. Hence, by \Cref{lem:fewgoodedges} at most $c + 1 + m(c'-1)$ new good edges were added to the boundary of Maker's graph in rounds $k+1, \dotsc, k+c'-1$. And by \Cref{obs:awfulbad} no awful or bad edge became good, so we get \begin{equation*} 2m(c'-1) \le (c - mk)_+ + c + 1 + m(c'-1), \end{equation*} contradicting the choice of $c' = \ceil{\frac{2c+2}{m}}$. \end{proof} Recall that by \Cref{lem:vk} the sequence $v_i$ is non-decreasing. Pick the smallest such $j$ from \Cref{clm:crucial}. In her $(k+j+1)$-st turn, Maker claims some bad or awful edge. If Maker claims an awful edge (or claimed an awful edge in any other of the rounds $k+1,\dotsc,k+j+1$), we have $v_{k+j+1} > v_k$ and we are done. If Maker claims no awful edge (and did not in any other of the rounds $k+1,\dotsc,k+j+1$) but claims a bad edge in her $(k+j+1)$-st turn, note that Breaker also must have claimed no awful edge in his $(k+j)$-th turn. Next, there are two options. Either Breaker claims no awful edge in his $(k+j+1)$-th turn, and then $w_{k+j+1}>w_k$ and we are done (as $v_{k+j+1}=v_k$). Or Breaker claims some awful edge in his $(k+j+1)$-th turn, but then Maker claims some awful edge in her $(k+j+2)$-th turn, hence $v_{k+j+2}>v_k$ and we are again done. Hence, the proof of \Cref{prop:vkwk} is finished. \end{proof} This now also concludes the proof of \Cref{thm:boost}.
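The good/bad/awful classification from \Cref{def:edgetypes} that drives Breaker's strategy in this section can also be checked mechanically. Below is a small Python sketch (our own illustration, with edges stored as frozensets of endpoints): it verifies that closing a unit square is an awful move, since the boundary shrinks, while extending a straight path is not, since the boundary grows by two.

```python
def boundary(edges):
    """Edge boundary: grid edges incident to a vertex of the set, not in it."""
    verts = {v for e in edges for v in e}
    return {frozenset({(x, y), nb})
            for (x, y) in verts
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if frozenset({(x, y), nb}) not in edges}

def is_awful(edges, e):
    """An edge e of the boundary is awful if claiming it grows the boundary
    of Maker's component by at most 1 (here it actually shrinks it)."""
    return len(boundary(edges | {e})) - len(boundary(edges)) <= 1

E = lambda *ps: {frozenset(p) for p in ps}

u_shape = E(((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)))
closing = frozenset({(0, 1), (0, 0)})    # completes a unit square
assert is_awful(u_shape, closing)        # boundary shrinks: 9 -> 8

path = E(((0, 0), (1, 0)), ((1, 0), (2, 0)))
extend = frozenset({(2, 0), (3, 0)})     # prolongs a straight path
assert not is_awful(path, extend)        # boundary grows by 2: 8 -> 10
```

Intuitively, awful moves are exactly those where Maker fills in a nook of her own component, which is why they push the potential $v_k$ up.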
\section{Polluted board} \label{sec:polluted} In this section, we consider the Maker-Breaker percolation game on a random board, as suggested by Day and Falgas-Ravry~\cite{day2020maker2}. As we have so far considered the setting $(\Lambda, v_0) = (\mathbb{Z}^2, (0,0))$, it is natural to consider the same game after bond percolation is performed. Consider an infinite connected graph $\Lambda = (V,E)$ and $p \in [0,1]$. In \indef{bond percolation}, each edge of $\Lambda$ is declared \indef{open} with probability $p$, independently of all the other edges. An edge is said to be \indef{closed} if it is not open. We denote by $(\Lambda)_p$ the random subgraph of $\Lambda$ with vertex set $V$ and edge set $\set{ e \in E \colon e \text{ is open} }$. Denote by $\Pperc{p}$ the probability measure induced by this process. The most striking property of this probabilistic model when $\Lambda = \mathbb{Z}^2$ is that it undergoes a phase transition at $p = 1/2$. Indeed, if by an \indef{open cluster} we mean a connected component of open edges, then $\pperc{p}{\text{there exists an infinite open cluster}} = 0$ for all $p \leq 1/2$, as shown by Harris~\cite{harris1960lower}, and $\pperc{p}{\text{there exists an infinite open cluster}} = 1$ for all $p > 1/2$, as shown by Kesten~\cite{kesten1980upper}. For simpler proofs, see Bollobás and Riordan~\cites{bollobas-riordan2006short,bollobas-riordan2007note}, and for a comprehensive introduction to the subject, see the book of Bollobás and Riordan~\cite{bollobas2006percolation}. In this section, we are concerned with the Maker-Breaker percolation game on $(\mathbb{Z}^2)_p$, so to avoid confusion, we refer to the board after performing bond percolation as a \indef{polluted board}. If $p \leq 1/2$, then a.s. (almost surely) the open cluster that contains the origin is finite, so Breaker a.s. wins trivially. On the other hand, if $p > 1/2$, then a.s. there is an infinite open cluster.
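The phase transition at $p = 1/2$ is easy to observe empirically. The Python sketch below (a finite-box Monte Carlo of our own, not part of the paper's argument; all names and parameter values are illustrative) lazily samples edge states while growing the open cluster of the origin, and estimates how often the cluster escapes a box of radius $n$, well below and well above criticality.

```python
import random

def cluster_escapes(n, p, rng):
    """Grow the open cluster of the origin inside the box [-n, n]^2, sampling
    each edge's state (open with probability p) lazily on first visit.
    Returns True if the cluster reaches the boundary of the box."""
    state = {}
    def is_open(u, v):
        e = frozenset({u, v})
        if e not in state:
            state[e] = rng.random() < p
        return state[e]
    seen, stack = {(0, 0)}, [(0, 0)]
    while stack:
        x, y = stack.pop()
        if max(abs(x), abs(y)) == n:
            return True
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in seen and is_open((x, y), nb):
                seen.add(nb)
                stack.append(nb)
    return False

rng = random.Random(0)
escapes = {p: sum(cluster_escapes(20, p, rng) for _ in range(200))
           for p in (0.3, 0.7)}
# Far below criticality the cluster almost never escapes; far above, it
# usually does, reflecting the Harris-Kesten phase transition at 1/2.
assert escapes[0.3] < escapes[0.7]
```

Escaping a finite box only approximates belonging to an infinite cluster, but the contrast between the two regimes is already stark at radius $20$.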
There is no guarantee, however, that the origin will be part of this infinite cluster. To make the game more interesting, as suggested by Day and Falgas-Ravry~\cite{day2020maker2}, we allow Maker to choose the initial vertex $v_0$ after the randomness of the bond percolation is realised. \begin{definition} \label{def:polluted} We define the $(m, b)$ Maker-Breaker percolation game on a $p$-polluted board $\Lambda$ as follows. Before the game starts, bond percolation with probability parameter $p$ is performed in the graph $\Lambda$ to obtain the polluted board $(\Lambda)_p$. After all the randomness has been realised, Maker chooses a vertex $v_0$. From then onward, the game proceeds as the usual $(m,b)$ Maker-Breaker percolation game on $((\Lambda)_p, v_0)$ that we defined previously. \end{definition} We abbreviate as before, so we refer to the game above as the \indef{$(m,b)$-game on $p$-polluted $\Lambda$}. Note that all the randomness is realised before the game starts, so it is a deterministic game played on a random board. We are interested in strategies that win almost surely over the choice of the board. The main result of this section is \Cref{thm:polluted}, concerning the $(1,1)$-game on $p$-polluted $\mathbb{Z}^2$, so from now on, unless stated otherwise, we are considering this game. To prove \Cref{thm:polluted}, we present a winning strategy for Breaker that heavily relies on the closely related model of the north-east oriented percolation on $\mathbb{Z}^2$, which we will now briefly describe. Orient each edge of $\mathbb{Z}^2$ in the direction of the increasing coordinate value. That is, each horizontal edge is oriented to the east and each vertical edge is oriented to the north. Next, each oriented edge is declared \indef{open} with probability $p$, independently of all the other edges. An oriented edge is said to be \indef{closed} if it is not open.
We refer to the probability measure $\Porient{p}$ we obtain as the \indef{north-east oriented percolation on $\mathbb{Z}^2$}. For a detailed overview of the results in this model, see the survey of Durrett~\cite{durrett1984oriented}. In bond percolation, we are interested in whether or not there is an infinite open cluster. In the north-east oriented percolation, we are instead interested in whether there exists an infinite oriented open path. Define the critical probability \begin{equation*} p^{*} \mathrel{\coloneqq} \inf \set*{ p \in [0,1] \colon \porient*{p}{(0,0) \text{ is on an infinite open oriented path}} > 0 }. \end{equation*} It is known that $0.6298 \leq p^{*} \leq 0.6735$, where the lower bound is due to Dhar~\cite{dhar1982directed}, improving on the ideas of Gray, Wierman and Smythe~\cite{gray1980lower}, and the upper bound is due to Balister, Bollobás and Stacey~\cite{balister1994improved}. As with the threshold in bond percolation, in the north-east oriented percolation there is a.s. an infinite oriented open path if $p > p^{*}$, and a.s. no infinite oriented open path if $p \leq p^{*}$. This model is relevant to us via a simple coupling argument. Note that by forgetting the orientation of the edges, we obtain that for $p > p^{*}$, \begin{align*} &\pperc*{p}{\text{there is an infinite up-right open path from } 0} \\ &= \porient*{p}{(0,0) \text{ is the starting point of an infinite oriented open path}} > 0. \end{align*} We say that a subgraph of $\mathbb{Z}^2$ is \emph{barred} if it contains no infinite path going in only two directions. That is, if for all vertices $v \in \mathbb{Z}^2$, there is no north-east, no north-west, no south-east and no south-west infinite path starting from $v$. Coupling the bond percolation with four rotated copies of the north-east oriented percolation model as above gives us the following result. \begin{proposition} \label{prop:barred} If $p < p^{*}$, then a.s. each open cluster of $(\mathbb{Z}^2)_p$ is barred.
\end{proposition} We will prove the following proposition, which together with \Cref{prop:barred} implies \Cref{thm:polluted}. \begin{proposition} \label{prop:barred-strategy} Fix any barred percolation configuration of $\mathbb{Z}^2$. Then regardless of which point is fixed by Maker as the origin, Breaker has a winning strategy for the $(1,1)$ Maker-Breaker percolation game on this board. \end{proposition} We say that a path is \emph{unbarred} if it only ever uses at most two directions. Therefore, a barred configuration in $\mathbb{Z}^2$ is precisely a configuration with no infinite unbarred paths. For $d \in \mathbb{N}$ and $v \in \mathbb{Z}^2$, let $B_d(v)= v + [-d,d]^2$ be the ball of radius $d$, centred at $v$ in the $\ell_\infty$-norm. We say that an edge is contained in $B_d(v)$ if both of its endpoints are. We also write $B_d$ for the same ball centred at $(0,0)$. Using a very simple argument in the style of infinite Ramsey theory, we prove the following lemma. \begin{lemma} \label{lem:inframsey} Consider any fixed barred percolation configuration of $\mathbb{Z}^2$. Then for every $v \in \mathbb{Z}^2$, there exists $d = d(v) \in \mathbb{N}$ such that any unbarred path starting from $v$ is contained within $B_{d}(v)$. \end{lemma} \begin{proof} The subtle case we need to treat here is the possibility that there are infinitely many finite unbarred paths from $v$, but still, no infinite unbarred path. Let $S$ be the set of vertices of $\mathbb{Z}^2$ which are reachable from $v$ via unbarred paths. If $S$ is finite, then the result follows. Now assume that $S$ is infinite; we derive a contradiction. Without loss of generality, we may assume that the set $S_0 \subseteq S$ of vertices reachable from $v$ via north-east paths is infinite.
So we can find a vertex $v_1$ such that: $v_1$ is a neighbour of $v$, to the north or east of $v$; the edge $\set{v,v_1}$ is open in the configuration; and the set $S_1 \subseteq S_0$ of the vertices reachable by north-east paths from $v_1$ is infinite. Continuing inductively, we find a sequence of vertices $v_0, v_1, v_2, \dotsc$ such that $v_{i+1}$ is a neighbour of $v_i$ to the north or east, the edge $\set{v_i, v_{i+1}}$ is open and $v_0 = v$. But this is precisely an unbarred infinite path from $v$, which contradicts the configuration being barred. \end{proof} We now define a strategy for Breaker that wins on any barred board, for any choice of $v \in \mathbb{Z}^2$ as the origin. In fact, since a translated board is still barred, we assume without loss of generality that $v = (0,0)$. By \Cref{lem:inframsey}, there is an integer $d$ such that any unbarred path from $(0,0)$ is contained in $B_d$. Breaker's strategy exploits the existence of this box. A vertex in $\mathbb{Z}^2$ is called \indef{axial} if one of its coordinates coincides with a coordinate of $v$ (i.e. if one of its coordinates is $0$, as we took $v=(0,0)$). An edge is called \indef{non-axial} if one of its endpoints is not axial. Breaker's strategy relies on the following pairing. \begin{definition} \label{def:pairing} The \indef{barrier pairing} is the following pairing of the non-axial edges of $\mathbb{Z}^2$. Let $(x,y) \in \mathbb{Z}^2$ be a non-axial vertex. \begin{enumerate} \item If $x > 0$ and $y > 0$, then pair $\set{(x,y), (x,y-1)}$ with $\set{(x,y),(x-1,y)}$; \item If $x > 0$ and $y < 0$, then pair $\set{(x,y), (x,y+1)}$ with $\set{(x,y),(x-1,y)}$; \item If $x < 0$ and $y > 0$, then pair $\set{(x,y), (x,y-1)}$ with $\set{(x,y),(x+1,y)}$; \item If $x < 0$ and $y < 0$, then pair $\set{(x,y), (x,y+1)}$ with $\set{(x,y),(x+1,y)}$. \end{enumerate} \end{definition} \begin{figure}[ht!]
\centering \begin{tikzpicture}[scale=0.6] \tikzstyle{s-helper}=[lightgray, thin]; \tikzstyle{s-bold}=[darkgray, very thick]; \draw[step=1, s-helper] (-4.2,-4.2) grid (4.2, 4.2); \foreach \x in {0,...,3} \foreach \y in {0,...,3} { \draw[xshift= \x cm, yshift= \y cm, s-bold] (1, 0.33) -- (1, 1) -- (0.33,1); \draw[xshift=-\x cm, yshift= \y cm, s-bold] (-1, 0.33) -- (-1, 1) -- (-0.33,1); \draw[xshift= \x cm, yshift=-\y cm, s-bold] (1, -0.33) -- (1, -1) -- (0.33,-1); \draw[xshift=-\x cm, yshift=-\y cm, s-bold] (-1, -0.33) -- (-1, -1) -- (-0.33,-1); } \foreach \x in {-4,...,4} \foreach \y in {-4,...,4} \filldraw[xshift=\x cm, yshift=\y cm, black, fill=white, line width=1.0pt] (0,0) circle (3pt); \draw (0,0) node[anchor=south west] {$v$}; \end{tikzpicture} \caption{The barrier pairing around $v$} \label{fig:pairing} \end{figure} We now define the following strategy for Breaker for the $(1,1)$-game on the $p$-polluted square lattice $\mathbb{Z}^2$. \begin{strategy}[Breaker's strategy for the $(1,1)$-game on a barred board] \label{str:barred} Consider a barred configuration of edges of $\mathbb{Z}^2$ and a fixed origin at $(0,0)$. If the open cluster of the origin is finite, then Breaker wins immediately. Let $d$ be the minimum integer such that any unbarred path starting from the origin is contained within $B_d$, which exists by \Cref{lem:inframsey}. Breaker will react to each move of Maker using the barrier pairing in \Cref{def:pairing} as follows. \begin{enumerate} \item If Maker claims a non-axial edge inside $B_{d+1}$, and its paired edge is both open and unclaimed, then Breaker claims that edge; \item Otherwise, if there is any open unclaimed edge inside $B_{d+1}$, Breaker claims one of them arbitrarily; \item Otherwise, Breaker claims an arbitrary open unclaimed edge in the open cluster of the origin. \end{enumerate} \end{strategy} \begin{figure}[ht!]
\centering \begin{tikzpicture}[scale=0.4] \tikzstyle{s-helper}=[lightgray, thin]; \tikzstyle{s-bold}=[darkgray, very thick]; \definecolor{s-col}{RGB}{253,174,97}; \tikzstyle{s-hi}=[ultra thick, s-col]; \draw[step=1, s-helper] (-15.2,-15.2) grid (15.2, 15.2); \draw[thick] (-14,15) -- (-14,14) (-13,15) -- (-13,14) (-12,15) -- (-12,14) (-11,15) -- (-11,14) (-10,15) -- (-10,14) (-8,15) -- (-8,14) (-6,15) -- (-6,14) (-2,15) -- (-2,14) (-1,15) -- (-1,14) (4,15) -- (4,14) (6,15) -- (6,14) (8,15) -- (8,14) (9,15) -- (9,14) (15,15) -- (15,14) (-15,14) -- (-15,13) (-14,14) -- (-14,13) (-13,14) -- (-13,13) (-9,14) -- (-9,13) (-8,14) -- (-8,13) (-5,14) -- (-5,13) (-2,14) -- (-2,13) (-1,14) -- (-1,13) (1,14) -- (1,13) (4,14) -- (4,13) (6,14) -- (6,13) (7,14) -- (7,13) (9,14) -- (9,13) (11,14) -- (11,13) (12,14) -- (12,13) (13,14) -- (13,13) (15,14) -- (15,13) (-15,13) -- (-15,12) (-13,13) -- (-13,12) (-12,13) -- (-12,12) (-9,13) -- (-9,12) (-8,13) -- (-8,12) (-7,13) -- (-7,12) (-6,13) -- (-6,12) (-2,13) -- (-2,12) (-1,13) -- (-1,12) (1,13) -- (1,12) (2,13) -- (2,12) (4,13) -- (4,12) (6,13) -- (6,12) (8,13) -- (8,12) (14,13) -- (14,12) (15,13) -- (15,12) (-13,12) -- (-13,11) (-12,12) -- (-12,11) (-10,12) -- (-10,11) (-8,12) -- (-8,11) (-5,12) -- (-5,11) (-1,12) -- (-1,11) (0,12) -- (0,11) (2,12) -- (2,11) (3,12) -- (3,11) (4,12) -- (4,11) (5,12) -- (5,11) (7,12) -- (7,11) (8,12) -- (8,11) (12,12) -- (12,11) (13,12) -- (13,11) (14,12) -- (14,11) (15,12) -- (15,11) (-14,11) -- (-14,10) (-12,11) -- (-12,10) (-11,11) -- (-11,10) (-10,11) -- (-10,10) (-5,11) -- (-5,10) (-1,11) -- (-1,10) (0,11) -- (0,10) (8,11) -- (8,10) (9,11) -- (9,10) (13,11) -- (13,10) (-15,10) -- (-15,9) (-14,10) -- (-14,9) (-12,10) -- (-12,9) (-8,10) -- (-8,9) (-4,10) -- (-4,9) (0,10) -- (0,9) (1,10) -- (1,9) (2,10) -- (2,9) (3,10) -- (3,9) (4,10) -- (4,9) (7,10) -- (7,9) (9,10) -- (9,9) (11,10) -- (11,9) (13,10) -- (13,9) (14,10) -- (14,9) (15,10) -- (15,9) (-15,9) -- (-15,8) (-14,9) -- (-14,8) (-12,9) -- 
(-12,8) (-10,9) -- (-10,8) (-8,9) -- (-8,8) (-2,9) -- (-2,8) (0,9) -- (0,8) (1,9) -- (1,8) (8,9) -- (8,8) (11,9) -- (11,8) (12,9) -- (12,8) (13,9) -- (13,8) (14,9) -- (14,8) (15,9) -- (15,8) (-15,8) -- (-15,7) (-11,8) -- (-11,7) (-9,8) -- (-9,7) (-8,8) -- (-8,7) (0,8) -- (0,7) (1,8) -- (1,7) (7,8) -- (7,7) (11,8) -- (11,7) (12,8) -- (12,7) (13,8) -- (13,7) (14,8) -- (14,7) (15,8) -- (15,7) (-15,7) -- (-15,6) (-13,7) -- (-13,6) (-12,7) -- (-12,6) (-11,7) -- (-11,6) (-6,7) -- (-6,6) (0,7) -- (0,6) (13,7) -- (13,6) (14,7) -- (14,6) (-15,6) -- (-15,5) (-14,6) -- (-14,5) (-11,6) -- (-11,5) (-9,6) -- (-9,5) (-8,6) -- (-8,5) (-6,6) -- (-6,5) (-4,6) -- (-4,5) (0,6) -- (0,5) (3,6) -- (3,5) (11,6) -- (11,5) (14,6) -- (14,5) (15,6) -- (15,5) (-15,5) -- (-15,4) (-13,5) -- (-13,4) (-12,5) -- (-12,4) (-11,5) -- (-11,4) (-9,5) -- (-9,4) (-8,5) -- (-8,4) (-7,5) -- (-7,4) (-6,5) -- (-6,4) (-3,5) -- (-3,4) (-2,5) -- (-2,4) (0,5) -- (0,4) (1,5) -- (1,4) (6,5) -- (6,4) (8,5) -- (8,4) (9,5) -- (9,4) (10,5) -- (10,4) (11,5) -- (11,4) (12,5) -- (12,4) (13,5) -- (13,4) (14,5) -- (14,4) (-15,4) -- (-15,3) (-13,4) -- (-13,3) (-12,4) -- (-12,3) (-9,4) -- (-9,3) (-7,4) -- (-7,3) (-3,4) -- (-3,3) (6,4) -- (6,3) (7,4) -- (7,3) (8,4) -- (8,3) (12,4) -- (12,3) (14,4) -- (14,3) (15,4) -- (15,3) (-15,3) -- (-15,2) (-12,3) -- (-12,2) (-10,3) -- (-10,2) (-9,3) -- (-9,2) (-8,3) -- (-8,2) (-7,3) -- (-7,2) (-4,3) -- (-4,2) (3,3) -- (3,2) (4,3) -- (4,2) (5,3) -- (5,2) (6,3) -- (6,2) (8,3) -- (8,2) (10,3) -- (10,2) (12,3) -- (12,2) (13,3) -- (13,2) (-12,2) -- (-12,1) (-11,2) -- (-11,1) (-10,2) -- (-10,1) (-9,2) -- (-9,1) (-8,2) -- (-8,1) (-5,2) -- (-5,1) (-2,2) -- (-2,1) (5,2) -- (5,1) (7,2) -- (7,1) (10,2) -- (10,1) (13,2) -- (13,1) (15,2) -- (15,1) (-15,1) -- (-15,0) (-14,1) -- (-14,0) (-13,1) -- (-13,0) (-12,1) -- (-12,0) (-10,1) -- (-10,0) (-9,1) -- (-9,0) (-8,1) -- (-8,0) (-7,1) -- (-7,0) (-5,1) -- (-5,0) (3,1) -- (3,0) (4,1) -- (4,0) (6,1) -- (6,0) (9,1) -- (9,0) (10,1) -- (10,0) (11,1) -- (11,0) 
(13,1) -- (13,0) (14,1) -- (14,0) (15,1) -- (15,0) (-15,0) -- (-15,-1) (-14,0) -- (-14,-1) (-12,0) -- (-12,-1) (-11,0) -- (-11,-1) (-10,0) -- (-10,-1) (-9,0) -- (-9,-1) (-8,0) -- (-8,-1) (-7,0) -- (-7,-1) (-6,0) -- (-6,-1) (-5,0) -- (-5,-1) (3,0) -- (3,-1) (4,0) -- (4,-1) (5,0) -- (5,-1) (6,0) -- (6,-1) (7,0) -- (7,-1) (8,0) -- (8,-1) (10,0) -- (10,-1) (13,0) -- (13,-1) (14,0) -- (14,-1) (-15,-1) -- (-15,-2) (-14,-1) -- (-14,-2) (-11,-1) -- (-11,-2) (-10,-1) -- (-10,-2) (-9,-1) -- (-9,-2) (-6,-1) -- (-6,-2) (-3,-1) -- (-3,-2) (5,-1) -- (5,-2) (7,-1) -- (7,-2) (12,-1) -- (12,-2) (13,-1) -- (13,-2) (14,-1) -- (14,-2) (-15,-2) -- (-15,-3) (-12,-2) -- (-12,-3) (-11,-2) -- (-11,-3) (-10,-2) -- (-10,-3) (-8,-2) -- (-8,-3) (-4,-2) -- (-4,-3) (4,-2) -- (4,-3) (6,-2) -- (6,-3) (7,-2) -- (7,-3) (9,-2) -- (9,-3) (10,-2) -- (10,-3) (11,-2) -- (11,-3) (12,-2) -- (12,-3) (-13,-3) -- (-13,-4) (-12,-3) -- (-12,-4) (-10,-3) -- (-10,-4) (-9,-3) -- (-9,-4) (-5,-3) -- (-5,-4) (-4,-3) -- (-4,-4) (-3,-3) -- (-3,-4) (3,-3) -- (3,-4) (5,-3) -- (5,-4) (7,-3) -- (7,-4) (9,-3) -- (9,-4) (12,-3) -- (12,-4) (13,-3) -- (13,-4) (15,-3) -- (15,-4) (-15,-4) -- (-15,-5) (-14,-4) -- (-14,-5) (-9,-4) -- (-9,-5) (-6,-4) -- (-6,-5) (-5,-4) -- (-5,-5) (-3,-4) -- (-3,-5) (-2,-4) -- (-2,-5) (-1,-4) -- (-1,-5) (3,-4) -- (3,-5) (6,-4) -- (6,-5) (8,-4) -- (8,-5) (9,-4) -- (9,-5) (10,-4) -- (10,-5) (13,-4) -- (13,-5) (14,-4) -- (14,-5) (-11,-5) -- (-11,-6) (-9,-5) -- (-9,-6) (-8,-5) -- (-8,-6) (-7,-5) -- (-7,-6) (1,-5) -- (1,-6) (2,-5) -- (2,-6) (5,-5) -- (5,-6) (6,-5) -- (6,-6) (7,-5) -- (7,-6) (9,-5) -- (9,-6) (12,-5) -- (12,-6) (15,-5) -- (15,-6) (-14,-6) -- (-14,-7) (-12,-6) -- (-12,-7) (-11,-6) -- (-11,-7) (-10,-6) -- (-10,-7) (-6,-6) -- (-6,-7) (1,-6) -- (1,-7) (3,-6) -- (3,-7) (4,-6) -- (4,-7) (5,-6) -- (5,-7) (9,-6) -- (9,-7) (11,-6) -- (11,-7) (15,-6) -- (15,-7) (-14,-7) -- (-14,-8) (-11,-7) -- (-11,-8) (-4,-7) -- (-4,-8) (2,-7) -- (2,-8) (4,-7) -- (4,-8) (5,-7) -- (5,-8) (7,-7) -- (7,-8) (8,-7) -- 
(8,-8) (10,-7) -- (10,-8) (11,-7) -- (11,-8) (13,-7) -- (13,-8) (15,-7) -- (15,-8) (-15,-8) -- (-15,-9) (-14,-8) -- (-14,-9) (-13,-8) -- (-13,-9) (-9,-8) -- (-9,-9) (-8,-8) -- (-8,-9) (-7,-8) -- (-7,-9) (-4,-8) -- (-4,-9) (-3,-8) -- (-3,-9) (-1,-8) -- (-1,-9) (1,-8) -- (1,-9) (3,-8) -- (3,-9) (5,-8) -- (5,-9) (7,-8) -- (7,-9) (8,-8) -- (8,-9) (10,-8) -- (10,-9) (12,-8) -- (12,-9) (13,-8) -- (13,-9) (14,-8) -- (14,-9) (-15,-9) -- (-15,-10) (-14,-9) -- (-14,-10) (-13,-9) -- (-13,-10) (-10,-9) -- (-10,-10) (-4,-9) -- (-4,-10) (-1,-9) -- (-1,-10) (0,-9) -- (0,-10) (1,-9) -- (1,-10) (2,-9) -- (2,-10) (4,-9) -- (4,-10) (5,-9) -- (5,-10) (7,-9) -- (7,-10) (11,-9) -- (11,-10) (13,-9) -- (13,-10) (14,-9) -- (14,-10) (15,-9) -- (15,-10) (-12,-10) -- (-12,-11) (-11,-10) -- (-11,-11) (-10,-10) -- (-10,-11) (-9,-10) -- (-9,-11) (-7,-10) -- (-7,-11) (-6,-10) -- (-6,-11) (-5,-10) -- (-5,-11) (1,-10) -- (1,-11) (2,-10) -- (2,-11) (4,-10) -- (4,-11) (5,-10) -- (5,-11) (7,-10) -- (7,-11) (8,-10) -- (8,-11) (9,-10) -- (9,-11) (10,-10) -- (10,-11) (11,-10) -- (11,-11) (12,-10) -- (12,-11) (15,-10) -- (15,-11) (-15,-11) -- (-15,-12) (-14,-11) -- (-14,-12) (-13,-11) -- (-13,-12) (-12,-11) -- (-12,-12) (-10,-11) -- (-10,-12) (-9,-11) -- (-9,-12) (-7,-11) -- (-7,-12) (-6,-11) -- (-6,-12) (-5,-11) -- (-5,-12) (-2,-11) -- (-2,-12) (1,-11) -- (1,-12) (2,-11) -- (2,-12) (3,-11) -- (3,-12) (5,-11) -- (5,-12) (8,-11) -- (8,-12) (9,-11) -- (9,-12) (10,-11) -- (10,-12) (14,-11) -- (14,-12) (15,-11) -- (15,-12) (-14,-12) -- (-14,-13) (-13,-12) -- (-13,-13) (-11,-12) -- (-11,-13) (-9,-12) -- (-9,-13) (-1,-12) -- (-1,-13) (0,-12) -- (0,-13) (1,-12) -- (1,-13) (3,-12) -- (3,-13) (4,-12) -- (4,-13) (10,-12) -- (10,-13) (11,-12) -- (11,-13) (12,-12) -- (12,-13) (13,-12) -- (13,-13) (14,-12) -- (14,-13) (15,-12) -- (15,-13) (-11,-13) -- (-11,-14) (-9,-13) -- (-9,-14) (-8,-13) -- (-8,-14) (-7,-13) -- (-7,-14) (-6,-13) -- (-6,-14) (-5,-13) -- (-5,-14) (-4,-13) -- (-4,-14) (-2,-13) -- (-2,-14) (0,-13) -- 
(0,-14) (1,-13) -- (1,-14) (2,-13) -- (2,-14) (3,-13) -- (3,-14) (7,-13) -- (7,-14) (9,-13) -- (9,-14) (12,-13) -- (12,-14) (13,-13) -- (13,-14) (14,-13) -- (14,-14) (15,-13) -- (15,-14) (-15,-14) -- (-15,-15) (-12,-14) -- (-12,-15) (-11,-14) -- (-11,-15) (-10,-14) -- (-10,-15) (-9,-14) -- (-9,-15) (-7,-14) -- (-7,-15) (-6,-14) -- (-6,-15) (-5,-14) -- (-5,-15) (-4,-14) -- (-4,-15) (-3,-14) -- (-3,-15) (-1,-14) -- (-1,-15) (0,-14) -- (0,-15) (1,-14) -- (1,-15) (4,-14) -- (4,-15) (5,-14) -- (5,-15) (6,-14) -- (6,-15) (10,-14) -- (10,-15) (11,-14) -- (11,-15) ; \draw[thick] (-15, 15) -- (-12, 15) (-11, 15) -- (-10, 15) (-9, 15) -- (-7, 15) (-6, 15) -- (-2, 15) (2, 15) -- (3, 15) (4, 15) -- (6, 15) (7, 15) -- (8, 15) (10, 15) -- (11, 15) (14, 15) -- (15, 15) (-15, 14) -- (-14, 14) (-13, 14) -- (-12, 14) (-9, 14) -- (-6, 14) (-2, 14) -- (-1, 14) (1, 14) -- (2, 14) (4, 14) -- (5, 14) (8, 14) -- (9, 14) (11, 14) -- (15, 14) (-12, 13) -- (-10, 13) (-8, 13) -- (-6, 13) (-1, 13) -- (2, 13) (4, 13) -- (6, 13) (8, 13) -- (9, 13) (11, 13) -- (13, 13) (14, 13) -- (15, 13) (-14, 12) -- (-13, 12) (-10, 12) -- (-7, 12) (-3, 12) -- (-2, 12) (-1, 12) -- (0, 12) (1, 12) -- (2, 12) (3, 12) -- (4, 12) (5, 12) -- (9, 12) (13, 12) -- (15, 12) (-15, 11) -- (-14, 11) (-12, 11) -- (-9, 11) (-7, 11) -- (-5, 11) (-3, 11) -- (-1, 11) (1, 11) -- (2, 11) (3, 11) -- (8, 11) (11, 11) -- (13, 11) (14, 11) -- (15, 11) (-15, 10) -- (-11, 10) (-9, 10) -- (-8, 10) (-7, 10) -- (-6, 10) (-5, 10) -- (-4, 10) (1, 10) -- (4, 10) (7, 10) -- (10, 10) (12, 10) -- (15, 10) (-15, 9) -- (-14, 9) (-12, 9) -- (-10, 9) (-8, 9) -- (-4, 9) (-1, 9) -- (1, 9) (3, 9) -- (4, 9) (7, 9) -- (11, 9) (12, 9) -- (13, 9) (14, 9) -- (15, 9) (-13, 8) -- (-10, 8) (-3, 8) -- (-2, 8) (-1, 8) -- (0, 8) (1, 8) -- (2, 8) (8, 8) -- (10, 8) (11, 8) -- (15, 8) (-14, 7) -- (-10, 7) (-9, 7) -- (-7, 7) (-1, 7) -- (0, 7) (1, 7) -- (2, 7) (3, 7) -- (4, 7) (12, 7) -- (13, 7) (14, 7) -- (15, 7) (-15, 6) -- (-14, 6) (-11, 6) -- (-9, 6) (-8, 6) -- 
(-7, 6) (-6, 6) -- (-5, 6) (0, 6) -- (2, 6) (3, 6) -- (4, 6) (14, 6) -- (15, 6) (-15, 5) -- (-14, 5) (-12, 5) -- (-10, 5) (-9, 5) -- (-5, 5) (-4, 5) -- (-2, 5) (0, 5) -- (2, 5) (10, 5) -- (12, 5) (-15, 4) -- (-14, 4) (-13, 4) -- (-12, 4) (-11, 4) -- (-9, 4) (-7, 4) -- (-6, 4) (-4, 4) -- (-2, 4) (-1, 4) -- (1, 4) (6, 4) -- (8, 4) (9, 4) -- (11, 4) (12, 4) -- (14, 4) (-14, 3) -- (-13, 3) (-11, 3) -- (-10, 3) (-7, 3) -- (-5, 3) (-4, 3) -- (-3, 3) (8, 3) -- (9, 3) (10, 3) -- (11, 3) (12, 3) -- (14, 3) (-15, 2) -- (-11, 2) (-8, 2) -- (-5, 2) (3, 2) -- (4, 2) (5, 2) -- (6, 2) (7, 2) -- (8, 2) (9, 2) -- (11, 2) (12, 2) -- (15, 2) (-15, 1) -- (-14, 1) (-12, 1) -- (-9, 1) (-8, 1) -- (-7, 1) (-6, 1) -- (-4, 1) (5, 1) -- (9, 1) (12, 1) -- (13, 1) (-14, 0) -- (-12, 0) (-11, 0) -- (-10, 0) (-9, 0) -- (-6, 0) (4, 0) -- (6, 0) (9, 0) -- (12, 0) (-15, -1) -- (-14, -1) (-12, -1) -- (-10, -1) (-9, -1) -- (-7, -1) (4, -1) -- (8, -1) (9, -1) -- (12, -1) (13, -1) -- (14, -1) (-14, -2) -- (-13, -2) (-12, -2) -- (-11, -2) (-10, -2) -- (-8, -2) (6, -2) -- (7, -2) (9, -2) -- (10, -2) (13, -2) -- (15, -2) (-15, -3) -- (-14, -3) (-13, -3) -- (-12, -3) (-11, -3) -- (-9, -3) (-7, -3) -- (-5, -3) (-4, -3) -- (-3, -3) (4, -3) -- (5, -3) (6, -3) -- (8, -3) (9, -3) -- (13, -3) (14, -3) -- (15, -3) (-13, -4) -- (-12, -4) (-11, -4) -- (-8, -4) (-7, -4) -- (-4, -4) (-3, -4) -- (-2, -4) (5, -4) -- (6, -4) (9, -4) -- (10, -4) (11, -4) -- (12, -4) (13, -4) -- (14, -4) (-15, -5) -- (-13, -5) (-12, -5) -- (-11, -5) (-10, -5) -- (-9, -5) (3, -5) -- (7, -5) (8, -5) -- (13, -5) (14, -5) -- (15, -5) (-14, -6) -- (-13, -6) (-11, -6) -- (-6, -6) (-1, -6) -- (0, -6) (1, -6) -- (6, -6) (7, -6) -- (11, -6) (12, -6) -- (15, -6) (-13, -7) -- (-8, -7) (-5, -7) -- (-3, -7) (-1, -7) -- (1, -7) (2, -7) -- (3, -7) (6, -7) -- (7, -7) (9, -7) -- (10, -7) (13, -7) -- (15, -7) (-15, -8) -- (-14, -8) (-13, -8) -- (-12, -8) (-11, -8) -- (-9, -8) (-8, -8) -- (-7, -8) (-4, -8) -- (-3, -8) (-2, -8) -- (0, -8) (1, -8) -- (2, -8) 
(4, -8) -- (5, -8) (6, -8) -- (8, -8) (9, -8) -- (10, -8) (11, -8) -- (15, -8) (-14, -9) -- (-12, -9) (-2, -9) -- (2, -9) (3, -9) -- (4, -9) (8, -9) -- (14, -9) (-15, -10) -- (-14, -10) (-13, -10) -- (-12, -10) (-7, -10) -- (-6, -10) (-1, -10) -- (0, -10) (1, -10) -- (3, -10) (4, -10) -- (5, -10) (7, -10) -- (9, -10) (10, -10) -- (14, -10) (-15, -11) -- (-12, -11) (-8, -11) -- (-5, -11) (-3, -11) -- (-1, -11) (0, -11) -- (3, -11) (4, -11) -- (5, -11) (7, -11) -- (11, -11) (12, -11) -- (15, -11) (-15, -12) -- (-13, -12) (-10, -12) -- (-9, -12) (-8, -12) -- (-5, -12) (-3, -12) -- (-2, -12) (-1, -12) -- (0, -12) (3, -12) -- (7, -12) (9, -12) -- (11, -12) (12, -12) -- (15, -12) (-15, -13) -- (-14, -13) (-11, -13) -- (-10, -13) (-9, -13) -- (-7, -13) (-6, -13) -- (-5, -13) (-4, -13) -- (-3, -13) (1, -13) -- (4, -13) (5, -13) -- (7, -13) (8, -13) -- (10, -13) (12, -13) -- (14, -13) (-15, -14) -- (-14, -14) (-12, -14) -- (-11, -14) (-10, -14) -- (-9, -14) (-4, -14) -- (2, -14) (3, -14) -- (5, -14) (6, -14) -- (9, -14) (10, -14) -- (11, -14) (13, -14) -- (15, -14) (-15, -15) -- (-12, -15) (-11, -15) -- (-4, -15) (-3, -15) -- (-2, -15) (-1, -15) -- (2, -15) (4, -15) -- (6, -15) (7, -15) -- (8, -15) (11, -15) -- (14, -15) ; \draw[s-hi] (0,0) -- (0,3) -- (-1,3) -- (-1,9) -- (-3,9) (-2,9) -- (-2,10) -- (-3,10) -- (-3,13) -- (-4,13) -- (-4, 14) -- (-5,14) (-3,12) -- (-4,12) -- (-4, 13) (-1,7) -- (-2,7) -- (-2,6) (-1,6) -- (-4,6) (-3,6) -- (-3,8) -- (-7,8) (-3,7) -- (-4,7) -- (-4,8) (-4,7) -- (-6,7) -- (-6,8) (0,0) -- (-4,0) -- (-4,-1) -- (-5,-1) -- (-5,-2) -- (-7,-2) (-3,0) -- (-3,2) (-2,0) -- (-2,-1) (-1,0) -- (-1,2) -- (-2,2) -- (-2,3) (-1,1) -- (1,1) (0,0) -- (2,0) -- (2,1) -- (3,1) (1,0) -- (1,2) -- (0,2) -- (2,2) -- (2,3) -- (0,3) -- (5,3) (2,2) -- (2,8) -- (7,8) (4,3) -- (4,8) (2,4) -- (4,4) (4,6) -- (7,6) -- (7,5) -- (8,5) -- (8,7) -- (9,7) -- (9,6) -- (10,6) -- (10,7) (4,4) -- (5,4) -- (5,5) -- (9,5) -- (9,6) -- (12,6) (5,6) -- (5,10) (6,6) -- (6,7) -- (5,7) -- (5,9) -- 
(6,9) (-1,0) -- (-1,-2) -- (-3,-2) (-2,-2) -- (-2,-3) (-1,-1) -- (1,-1) -- (1,0) (0,-2) -- (3,-2) (0,-3) -- (2,-3) -- (2,-2) (0,0) -- (0,-5) -- (-5,-5) -- (-5,-9) (-1,-5) -- (-1,-7) -- (-2,-7) -- (-2,-10) -- (-4,-10) -- (-4,-11) -- (-3,-11) -- (-3,-10) (-2,-5) -- (-2,-6) -- (-3,-6) -- (-3,-5) (-3,-6) -- (-5,-6) -- (-5,-7) -- (-6,-7) (-5,-8) -- (-6,-8) -- (-6,-9) -- (-9,-9) (-8,-9) -- (-8,-11) ; \foreach \x / \y in { -9/-9,-8/-9,-8/-10,-8/-11,-7/8,-7/-2,-7/-9,-6/7,-6/8,-6/-2,-6/-7, -6/-8,-6/-9,-5/7,-5/8,-5/14,-5/-1,-5/-2,-5/-5,-5/-6,-5/-7,-5/-8, -5/-9,-4/0,-4/6,-4/7,-4/8,-4/12,-4/13,-4/14,-4/-1,-4/-5,-4/-6, -4/-10,-4/-11,-3/0,-3/1,-3/2,-3/6,-3/7,-3/8,-3/9,-3/10,-3/11, -3/12,-3/13,-3/-2,-3/-5,-3/-6,-3/-10,-3/-11,-2/0,-2/2,-2/3,-2/6, -2/7,-2/9,-2/10,-2/-1,-2/-2,-2/-3,-2/-5,-2/-6,-2/-7,-2/-8,-2/-9, -2/-10,-1/0,-1/1,-1/2,-1/3,-1/4,-1/5,-1/6,-1/7,-1/8,-1/9, -1/-1,-1/-2,-1/-5,-1/-6,-1/-7,0/0,0/1,0/2,0/3,0/-1,0/-2,0/-3, 0/-4,0/-5,1/0,1/1,1/2,1/2,1/3,1/-1,1/-2,1/-3, 2/0,2/1,2/2,2/3,2/4,2/5,2/6,2/7,2/8,2/-2,2/-3, 3/1,3/3,3/4,3/8,3/-2,4/3,4/4,4/5,4/6,4/7,4/8, 5/3,5/4,5/5,5/6,5/7,5/8,5/9,5/10, 6/5,6/6,6/7,6/8,6/9,7/5,7/6,7/8,8/5,8/6,8/7, 9/5,9/6,9/7,10/6,10/7,11/6,12/6 } \filldraw[xshift=\x cm, yshift=\y cm, s-hi, fill=white, line width=1.0pt] (0,0) circle (4pt); \filldraw[black, fill=white, line width=1.0pt] (0,0) circle (4pt) node[anchor=south west] {$v$}; \draw[densely dashed, very thick, s-hi] (-14.5,-14.5) rectangle (14.5, 14.5); \end{tikzpicture} \caption{Polluted configuration with $p = 0.56$. The box $B_{d}$ is highlighted, as well as the vertices accessible via unbarred paths from $v$.} \label{fig:polluted} \end{figure} Finally, we show that a.s. \Cref{str:barred} is a winning strategy for Breaker. \begin{proof}[Proof of \Cref{prop:barred-strategy}] Consider a barred configuration of edges of $\mathbb{Z}^2$ and a vertex $v$ chosen by Maker. Translate the board so that $v = (0,0)$, and note that the configuration is still barred. 
We show that Breaker wins the $(1,1)$-game on this board by following \Cref{str:barred}. If the origin is in a finite open cluster, Breaker wins immediately, so assume it is in an infinite open cluster. By \Cref{lem:inframsey}, there is $d$ such that every unbarred path starting from the origin is contained within $B_d$. Since Breaker always claims an open unclaimed edge inside the box $B_{d+1}$ whenever possible, every edge inside $B_{d+1}$ is claimed after a finite number of rounds. We claim that once every edge inside $B_{d+1}$ is claimed, there is no path from $(0,0)$ to $B_{d+2} \setminus B_{d+1}$ consisting solely of open edges claimed by Maker. This clearly implies Breaker's win. Assume for contradiction that there is such a path $P$ claimed by Maker. It cannot be an unbarred path, since every unbarred path is fully contained in $B_d$. Write $P = \set{v_0, v_1, \dotsc, v_\ell}$, where $v_0 = (0,0)$ and $v_\ell \in B_{d+2} \setminus B_{d+1}$. Let $k$ be maximal such that the subpath $\set{v_0, \dotsc, v_{k-1}, v_{k}}$ is unbarred. This implies that $2 \leq k \leq \ell - 2$, and further that the edges $\set{v_{k-1},v_k}$ and $\set{v_k,v_{k+1}}$ are paired and contained in $B_{d+1}$. Therefore, by \Cref{str:barred} and the barrier pairing in \Cref{def:pairing}, Maker could not have claimed both of these edges, a contradiction. Hence, no such path $P$ exists. \end{proof} This finishes the proof of \Cref{prop:barred-strategy}, and hence also of \Cref{thm:polluted}. \section{Open problems} \label{sec:open} In this paper we have made progress on some of the questions that Day and Falgas-Ravry~\cite{day2020maker2} asked. Notably, we have improved the upper bound on the value of the ratio parameter $\rho^*$ (if such $\rho^*$ indeed exists). However, some of their other questions still remain open, as do some new problems, which we present below. There are several directions for further research that we consider to be of great interest.
\begin{description}[labelindent=\parindent, leftmargin=2\parindent] \item[Maker's win] We still do not know of any integers $(m,b)$ with $b>1$ and $m<2b$ for which Maker has a winning strategy in the $(m,b)$-game on $\mathbb{Z}^2$. In fact, we do not know of any such integers even in the boosted version of the game, where Maker is allowed to claim an arbitrary but finite number of edges before her first turn. Maker's strategies in the papers of Day and Falgas-Ravry~\cites{day2020maker1,day2020maker2} suggest that it may be beneficial for Maker to play as a dual Breaker in some auxiliary game. Hence, perhaps some of the techniques developed in the present paper could help in answering those questions. \item[Breaking the perimetric ratio with boosted Maker] In \Cref{sec:ratio}, we have shown that there exists $\delta>0$ such that provided $m$ is large enough and $b \geq (2-\delta)m$, Breaker has a winning strategy in the $(m,b)$-game on $\mathbb{Z}^2$. It would be interesting to study the boosted variant of this game, where Maker is allowed to claim finitely many edges before her first turn. There, we still do not know of any integers $(m,b)$ with $b<2m$ for which Breaker has a winning strategy in the boosted $(m,b)$-game (though as we have shown in \Cref{sec:fast-boost}, $b=2m$ suffices even in this variant). Again, it is possible that some of the techniques of the present paper can be developed further to handle this case as well. \item[Polluted board] In \Cref{sec:polluted} we considered the case when the $(1,1)$-game is played on a random board obtained by running the usual bond percolation process with parameter $p$ on $\mathbb{Z}^2$ (with origin chosen by Maker). This board almost surely has an infinite connected component whenever $p>1/2$, yet we have seen that when $p<0.6298$, Breaker almost surely has a winning strategy. We still do not know if there exists any $\varepsilon > 0$ such that whenever $p > 1-\varepsilon$, Maker almost surely has a winning strategy.
Moreover, note that such a strategy would have to be very different from Maker's winning strategy in the $(1,1)$-game on $\mathbb{Z}^2$ which was suggested in~\cite{day2020maker2}. It might also be of interest to investigate the general $(m,b)$-game on a polluted board. Even more generally, it could be interesting to consider the polluted game played on other boards $\Lambda$, and see if similar phenomena occur there. \end{description} One could also try to improve the bounds in \Cref{thm:main1} and \Cref{thm:polluted} derived in this paper. While improving these constants significantly would be of some interest (note that improving \Cref{thm:main1} by a little should not be very hard, as we did not try to optimise the constant there fully), we believe that breaking the three barriers highlighted above is even more important. For a further overview of more open questions, see the paper of Day and Falgas-Ravry~\cite{day2020maker2}. Note that many seemingly easy questions are still open; for instance, it is not known who wins the $(2,3)$- or the $(3,2)$-game on $\mathbb{Z}^2$. \section*{Acknowledgements} \label{sec:ack} The authors would like to thank their PhD supervisor Professor Béla Bollobás for suggesting this problem and for his comments.
\begin{bibdiv} \begin{biblist} \bib{balister1994improved}{article}{ author={Balister, P.}, author={Bollobás, B.}, author={Stacey, A.}, title={Improved upper bounds for the critical probability of oriented percolation in two dimensions}, date={1994}, journal={Random Structures \& Algorithms}, volume={5}, number={4}, pages={573\ndash 589}, } \bib{beck2008combinatorial}{book}{ author={Beck, J.}, title={Combinatorial {G}ames: {T}ic-{T}ac-{T}oe {T}heory}, publisher={Cambridge University Press}, date={2008}, } \bib{bollobas-riordan2006short}{article}{ author={Bollobás, B.}, author={Riordan, O.}, title={{A Short Proof of the Harris–Kesten Theorem}}, date={2006}, journal={Bulletin of the London Mathematical Society}, volume={38}, number={3}, pages={470\ndash 484}, } \bib{bollobas2006percolation}{book}{ author={Bollobás, B.}, author={Riordan, O.}, title={Percolation}, publisher={Cambridge University Press}, date={2006}, } \bib{bollobas-riordan2007note}{article}{ author={Bollobás, B.}, author={Riordan, O.}, title={{A note on the Harris–Kesten Theorem}}, date={2007}, ISSN={0195-6698}, journal={European Journal of Combinatorics}, volume={28}, number={6}, pages={1720\ndash 1723}, } \bib{chvatal1978biased}{incollection}{ author={Chv{\'a}tal, V.}, author={Erd\H{o}s, P.}, title={Biased positional games}, date={1978}, booktitle={Annals of discrete mathematics}, volume={2}, publisher={Elsevier}, pages={221\ndash 229}, } \bib{day2020maker1}{article}{ author={Day, A.~N.}, author={Falgas-Ravry, V.}, title={{Maker--Breaker percolation games I: Crossing grids}}, date={2020}, journal={Combinatorics, Probability and Computing}, pages={1\ndash 28}, } \bib{day2020maker2}{article}{ author={Day, A.~N.}, author={Falgas-Ravry, V.}, title={{Maker--Breaker percolation games II: Escaping to infinity}}, date={2020}, journal={Journal of Combinatorial Theory, Series B}, } \bib{dhar1982directed}{article}{ author={Dhar, D.}, title={{Directed percolation in two and three dimensions. II. 
Direction dependence of the wetting velocity}}, date={1982}, journal={Journal of Physics A: Mathematical and General}, volume={15}, number={6}, pages={1859\ndash 1864}, } \bib{durrett1984oriented}{article}{ author={Durrett, R.}, title={Oriented percolation in two dimensions}, date={1984}, journal={The Annals of Probability}, volume={12}, number={4}, pages={999\ndash 1040}, } \bib{gray1980lower}{article}{ author={Gray, L.}, author={Wierman, J.}, author={Smythe, R.}, title={Lower bounds for the critical probability in percolation models with oriented bonds}, date={1980}, journal={Journal of Applied Probability}, pages={979\ndash 986}, } \bib{harris1960lower}{article}{ author={Harris, T.~E.}, title={A lower bound for the critical probability in a certain percolation process}, date={1960}, journal={Mathematical Proceedings of the Cambridge Philosophical Society}, volume={56}, number={1}, pages={13\ndash 20}, } \bib{hefetz2014positional}{book}{ author={Hefetz, D.}, author={Krivelevich, M.}, author={Stojakovi{\'c}, M.}, author={Szab{\'o}, T.}, title={Positional games}, publisher={Springer}, date={2014}, } \bib{kesten1980upper}{article}{ author={Kesten, H.}, title={{The critical probability of bond percolation on the square lattice equals $\frac{1}{2}$}}, date={1980}, journal={Communications in Mathematical Physics}, volume={74}, number={1}, pages={41 \ndash 59}, } \bib{lehman1964solution}{article}{ author={Lehman, A.}, title={{A solution of the Shannon switching game}}, date={1964}, journal={Journal of the Society for Industrial and Applied Mathematics}, volume={12}, number={4}, pages={687\ndash 725}, } \end{biblist} \end{bibdiv} \end{document}
https://arxiv.org/abs/1905.09987
On diagonals of operators: selfadjoint, normal and other classes
We provide a survey of the current state of the study of diagonals of operators, especially selfadjoint operators. In addition, we provide a few new results made possible by recent work of Müller-Tomilov and Kaftal-Loreaux. This is an expansion of the second author's lecture part II at OT27.
\section{Introduction} \label{sec:introduction} By a diagonal of an operator $T \in B(\ensuremath{\mathcal{H}})$ we mean a sequence $(\angles{Te_n,e_n})_{n=1}^{\infty}$ where $\{e_n\}_{n=1}^{\infty}$ is an orthonormal basis of $\ensuremath{\mathcal{H}}$. The orthonormal basis is not fixed, and so $T$ has many diagonals. Throughout this paper, we will use $\ensuremath{\mathcal{D}}(T)$ to denote the set of all diagonals of $T$. Instead of varying the orthonormal basis, a useful equivalent viewpoint is to fix the orthonormal basis and consider the diagonals of the operators $UTU^{-1} = UTU^{*}$ in the unitary orbit $\ensuremath{\mathcal{U}}(T)$ relative to this fixed orthonormal basis. If $E$ denotes the canonical trace-preserving conditional expectation onto the subalgebra of diagonal operators determined by this fixed basis (i.e., $E$ denotes the operation of ``taking the main diagonal''), then there is a natural identification between $\ensuremath{\mathcal{D}}(T)$ and $E(\ensuremath{\mathcal{U}}(T))$ via the *-isomorphism $\diag : \ell^{\infty} \to E(B(\ensuremath{\mathcal{H}}))$. As such, sometimes we regard elements of $E(\ensuremath{\mathcal{U}}(T)) \subseteq B(\ensuremath{\mathcal{H}})$ as diagonals of $T$ even though they are operators as opposed to sequences. The collection $\ensuremath{\mathcal{D}}(T)$ contains a substantial amount of information about the operator $T$. For example, since every unit vector (in fact, any $k$-tuple of orthonormal vectors) is contained in some orthonormal basis, $\ensuremath{\mathcal{D}}(T)$ encodes the numerical range $\ensuremath{W}(T)$ (and correspondingly all the $k$-numerical ranges, see \cite{Hal-1964-ASM} for the origin of this notion). Therefore, since an operator is selfadjoint if and only if its numerical range is contained in $\mathbb{R}$, it is clear that $T$ is selfadjoint if and only if $\ensuremath{\mathcal{D}}(T)$ contains only real-valued sequences. 
This illustrates an example of how information about $\ensuremath{\mathcal{D}}(T)$ can yield obvious information about $T$. A less obvious illustration of information encoding: when $\ensuremath{\mathcal{H}}$ is finite dimensional, $T$ is normal if and only if all the $k$-numerical ranges of $T$ are polygons (see \cite{Li-1994-LMA}), so from this, normality of $T$ can be determined from $\ensuremath{\mathcal{D}}(T)$. We believe it is an open question whether normality of $T$ can be determined from $\ensuremath{\mathcal{D}}(T)$ when $\ensuremath{\mathcal{H}}$ is infinite dimensional; normality certainly cannot be determined solely from the $k$-numerical ranges of $T$. Indeed, the latter is because, as is not hard to prove, both the unilateral shift (which is non-normal) and a diagonal operator whose eigenvalues are dense in the open unit disk have that disk as their $k$-numerical range for each $k$. Another less immediate example: diagonals also encode the essential numerical range by means of the fact that $\lambda \in \ensuremath{W_{\text{e}}}(T)$ if and only if there is some diagonal of $T$ which contains a subsequence converging to $\lambda$. Over the past century, diagonals of operators, especially of selfadjoint operators, have been investigated a great deal with substantial success. A foundational result concerning diagonals of selfadjoint operators is due to Schur \cite{Sch-1923-SBMG} and Horn \cite[Theorems~1~and~5]{Hor-1954-AJM}, and is therefore called the \emph{Schur--Horn theorem}. Of central importance in their theorem is the notion of \emph{majorization} of real-valued sequences. 
\begin{definition} \label{def:majorization-finite} Given finite sequences $d,\lambda \in \mathbb{R}^n$, to say that $d$ is \emph{majorized by} $\lambda$ (or that $\lambda$ \emph{majorizes} $d$), denoted $d \prec \lambda$, means \begin{equation} \label{eq:majorization-finite} \sum_{i=1}^m d^{*}_i \le \sum_{i=1}^m \lambda^{*}_i \quad\text{for}\ 1 \le m \le n,\ \text{and}\quad \sum_{i=1}^n d_i = \sum_{i=1}^n \lambda_i, \end{equation} where the sequences $d^{*}, \lambda^{*}$ denote the nonincreasing rearrangements of the sequences $d, \lambda$, respectively. \end{definition} \begin{theorem}[\cite{Sch-1923-SBMG,Hor-1954-AJM}] \label{thm:schur-horn} For a selfadjoint operator $T$ on a finite dimensional Hilbert space $\ensuremath{\mathcal{H}}$ of dimension $n$, with eigenvalue sequence $\lambda$, repeated according to multiplicity, and a sequence $d \in \mathbb{R}^n$, the following are equivalent: \begin{enumerate} \item \label{item:schur-horn-diagonal} $d$ is a diagonal of $T$ ($d \in \ensuremath{\mathcal{D}}(T)$); \item $d$ is majorized by $\lambda$ ($d \prec \lambda$); \item \label{item:schur-horn-convexity} $d$ is a convex combination of permutations of $\lambda$ ($d \in \conv \{ \lambda_{\pi} \in \mathbb{R}^n \mid \pi\ \text{is a permutation} \}$). \end{enumerate} \end{theorem} The Schur--Horn theorem is fascinating for several reasons. Firstly, it was the first major result on diagonals of operators. In addition, it provides a \emph{complete} characterization of the diagonals of any fixed selfadjoint operator $T$ on a finite dimensional Hilbert space. Moreover, it shows that the diagonals $\ensuremath{\mathcal{D}}(T)$ of such an operator form a convex set which, as can be easily shown, has the permutations of the eigenvalue sequence as its extreme points. 
This last fact is particularly surprising in that the authors are unaware of any direct proof; indeed, it is false in general if $T$ is not assumed to be selfadjoint (even for certain normal matrices, see \Cref{ex:nonconvex-normal-diagonal}). Moreover, the unitary orbit of a selfadjoint operator (on a finite dimensional space) is completely determined by $\ensuremath{\mathcal{D}}(T)$ via any extreme point. It is the Schur--Horn theorem that sparked a great deal of interest and focus on diagonals of \emph{selfadjoint} operators in particular, which are the main subject of this survey. The purpose of this paper is to provide a brief survey of the current state of knowledge, describe the history (with apologies to the many significant contributions left unmentioned), add a few new results to the tapestry, and highlight open questions. The information is arranged categorically rather than chronologically. However, we try to indicate the order in which results were discovered when they appear anachronistically in the paper. The remainder of this paper is organized as follows. In \Cref{sec:compact-selfadjoint} we discuss results for diagonals of compact selfadjoint operators on an infinite dimensional Hilbert space. These can be thought of in some way as the most direct generalizations of the Schur--Horn theorem. This section focuses on work found in \cite{AK-2006-OTOAaA,GM-1964-MSN,KW-2010-JFA,LW-2015-JFA,Mar-1964-UMN}. In \Cref{sec:finite-spectrum-selfadjoint} we review the study of diagonals of finite spectrum selfadjoint operators. This was initiated by Kadison \cite{Kad-2002-PNASU,Kad-2002-PNASUa} and a complete characterization was provided by Bownik and Jasper \cite{BJ-2015-TAMS,BJ-2015-BPASM,Jas-2013-JFA}. In \Cref{sec:general-selfadjoint} we discuss several results which hold for broad classes of selfadjoint operators coming from \cite{MT-2019-TAMS,Neu-1999-JFA}.
It is in this context that we are able to establish a new result which completely classifies diagonals of certain selfadjoint operators with at least three points in the essential spectrum (see \Cref{thm:diagonal-characterization}). In \Cref{sec:normal-operators} we review the comparatively small amount of work that has been done for diagonals of normal operators due to \cite{Arv-2007-PNASU,Hor-1954-AJM,JLW-2016-IUMJ,Lor-2019-JOT,Wil-1971-JLMS2}. Finally, in \Cref{sec:miscellaneous} we conclude with an overview of the few results which hold for more general classes of operators from \cite{Fan-1984-TAMS,FFH-1987-PAMS,Neu-1999-JFA,JLW-2016-IUMJ,Tho-1977-SJAM,Sin-1976-CMB,MT-2019-TAMS,Her-1991-RMJM}. \section{Compact selfadjoint operators} \label{sec:compact-selfadjoint} We begin by defining some notation which occurs repeatedly throughout this section. Let $c_0$ denote the collection of infinite sequences converging to zero, let $c_0^+$ denote its subset of nonnegative sequences, and let $c_0^{*}$ denote the subset of $c_0^+$ of nonincreasing sequences. For a sequence $d \in c_0^+$ we let $d^{*} \in c_0^{*}$ denote the nonincreasing rearrangement\footnote{This is not technically a rearrangement in the sense that $d^{*}$ is not always a permutation of $d$, i.e., when $d$ has infinite support but is not strictly positive. However, this is in keeping with the standard terminology in the field and from measure theory.} of $d$. Extending the Schur--Horn theorem to the setting of compact operators first requires a suitable notion of majorization. It turns out that exactly which notion is appropriate depends on the context, but they agree on their common domain of definition (i.e., \Cref{def:weak-majorization,def:real-majorization} coincide for nonnegative sequences in $\ell^1$). \begin{definition}[\protect{\cite[pp. 202--203]{GM-1964-MSN}}] \label{def:weak-majorization} Let $d,\lambda \in c_0^+$.
One says that $d$ is \emph{weakly majorized} by $\lambda$, denoted $d \pprec \lambda$, if for all $n \in \mathbb{N}$, \begin{equation*} \sum_{j=1}^n d^*_j \le \sum_{j=1}^n \lambda^*_j. \end{equation*} If in addition \begin{equation*} \sum_{j=1}^{\infty} d_j = \sum_{j=1}^{\infty} \lambda_j, \end{equation*} then $d$ is \emph{majorized} by $\lambda$, denoted $d \prec \lambda$; here we allow for the case when both sums are infinite. \end{definition} \begin{definition}[\protect{\cite[pp. 202--203]{GM-1964-MSN}}] \label{def:real-majorization} Let $d,\lambda \in \ell^1$ be real-valued. One says that $d$ is \emph{majorized} by $\lambda$, denoted $d \prec \lambda$, if both $d_+ \pprec \lambda_+$ and $d_- \pprec \lambda_-$, and also $\sum_{j=1}^{\infty} d_j = \sum_{j=1}^{\infty} \lambda_j$. \end{definition} Gohberg and Markus \cite{GM-1964-MSN} were the first to extend the Schur--Horn theorem, and ultimately they characterized diagonals of selfadjoint trace-class operators modulo the number of zeros which occur in the diagonal sequence. \begin{theorem}[\protect{\cite[Theorem 1]{GM-1964-MSN}}] \label{thm:gohberg-markus} Let $T$ be a selfadjoint trace-class operator in $B(\mathcal{H})$ with eigenvalue sequence $\lambda \in \ell^1$ repeated according to multiplicity. Then \begin{enumerate} \item $d \in \ensuremath{\mathcal{D}}(T)$ implies $d \in \ell^1$ and $d \prec \lambda$, and conversely, \item $d \in \ell^1$ and $d \prec \lambda$ implies $d \oplus \mathbf{0} \in \ensuremath{\mathcal{D}}(T)$ for some dimension of $\mathbf{0}$. \end{enumerate} \end{theorem} The study of diagonals of selfadjoint operators then remained dormant for 35 years until A.~Neumann's generalization of the convexity portion of the Schur--Horn theorem in \cite{Neu-1999-JFA} (see \Cref{sec:general-selfadjoint} for details), although other results on diagonals of general operators did appear during this interim \cite{Fan-1984-TAMS,FF-1987-PRIASA,FFH-1987-PAMS,FF-1994-PAMS}. 
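For finitely supported nonnegative sequences (viewed in $c_0^+$ by padding with zeros), the infinitely many partial-sum inequalities in \Cref{def:weak-majorization} reduce to a finite computation. The following Python sketch is purely illustrative (the helper names are ours, and a small floating-point tolerance stands in for exact comparison):

```python
# Check the (weak) majorization relations of the definitions above for
# finitely supported nonnegative sequences (tails of zeros dropped).
def weakly_majorized(d, lam):
    """d pprec lam: every partial sum of d* is <= that of lam*."""
    ds = sorted(d, reverse=True)       # nonincreasing rearrangement d*
    ls = sorted(lam, reverse=True)     # nonincreasing rearrangement lam*
    n = max(len(ds), len(ls))
    ds += [0.0] * (n - len(ds))        # pad with zeros to equal length
    ls += [0.0] * (n - len(ls))
    pd = pl = 0.0
    for a, b in zip(ds, ls):
        pd += a
        pl += b
        if pd > pl + 1e-12:            # tolerance for floating-point error
            return False
    return True

def majorized(d, lam):
    """d prec lam: weak majorization plus equality of the total sums."""
    return weakly_majorized(d, lam) and abs(sum(d) - sum(lam)) < 1e-9
```

For instance, $d = (0.3, 0.2, 0, \ldots)$ is weakly majorized, but not majorized, by $\lambda = (1, 0, \ldots)$, since the total sums differ.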
However, it wasn't until the work of Kadison \cite{Kad-2002-PNASU,Kad-2002-PNASUa} in 2002, and of Arveson and Kadison \cite{AK-2006-OTOAaA} in 2006, that the study of diagonals was truly renewed. This sparked a flurry of activity that continues today: \cite{Arv-2007-PNASU,Jas-2013-JFA,BJ-2014-CMB,BJ-2015-TAMS,BJ-2015-BPASM,Arg-2015-IEOT,Lor-2019-JOT} stem from \cite{Kad-2002-PNASUa,Kad-2002-PNASU}, and \cite{KW-2010-JFA,LW-2015-JFA} from \cite{AK-2006-OTOAaA}. The papers proceeding from \cite{Kad-2002-PNASU,Kad-2002-PNASUa} will be discussed in more detail in \Cref{sec:finite-spectrum-selfadjoint}. For now, we will continue with a discussion of \cite{AK-2006-OTOAaA} and the ensuing papers. Arveson and Kadison \cite{AK-2006-OTOAaA} restricted their attention to positive trace-class operators, although at the time of its writing they appeared to be unaware of the trace-class work of Markus \cite{Mar-1964-UMN} and Gohberg--Markus \cite{GM-1964-MSN}. The result Arveson and Kadison obtained (\Cref{thm:ak-schur-horn}) is closely related to \Cref{thm:gohberg-markus}, but they also stated related open problems in type II$_1$ factors which initiated study on that topic, although it lies outside the scope of this survey. Forays into type II factors resulting from the impetus in \cite{Kad-2002-PNASUa,Kad-2002-PNASU,AK-2006-OTOAaA} can be found in \cite{AM-2007-IUMJ,AM-2008-JMAA,AM-2013-PJM,BR-2014-PAMS}. \begin{theorem}[\protect{\cite[Theorem 4.1]{AK-2006-OTOAaA}}] \label{thm:ak-schur-horn} Let $A \in \mathcal{L}^1_+$ be a positive trace-class operator. Then \begin{equation*} E\Big(\overline{\mathcal{U}(A)}^{\snorm{\cdot}_1}\!\Big) = \{ \diag d \mid d \in \ell^1_+, d \prec s(A) \}, \end{equation*} where $\norm{\cdot}_1$ denotes the trace norm, and $s(A)$ the singular value sequence. \end{theorem} The astute reader will have noticed that Arveson and Kadison considered the trace-norm closure of the unitary orbit instead of the unitary orbit itself.
The net effect of taking this closure is essentially to vary the size of the kernel of $A$, as Arveson and Kadison note in \cite[Proposition~3.1(iii)]{AK-2006-OTOAaA}. It turns out that for a positive compact operator $A$, this effect can be achieved by a variety of constructions as we note in \Cref{prop:orbit-closure-equivalences}. \begin{definition}[\protect{\cite[p.~3152]{KW-2010-JFA}}] \label{def:partial-isometry-and-singular-value-orbit} Let $A$ denote a positive compact operator with singular value sequence $s(A)$ and range projection $R_A$. The \emph{partial isometry orbit} $\mathcal{V}(A)$ is the collection \begin{equation*} \mathcal{V}(A) := \{ VAV^{*} \mid V^{*}V = R_A \}. \end{equation*} The \emph{singular value orbit} $S(A)$ is the collection of positive compact operators with the same singular values as $A$, namely, \begin{equation*} S(A) := \{ B \in \mathcal{K}_+ \mid s(B) = s(A) \}. \end{equation*} \end{definition} The next proposition appears in the first author's dissertation, and to our knowledge, is the only reference for this result. When the operator is positive and compact, it unifies the seemingly disparate perspectives of the singular value orbit, partial isometry orbit, norm closure of the unitary orbit and, when the operator is trace-class, even the trace-norm closure of the unitary orbit. This unification makes it possible to realize \Cref{thm:kwpartialisometryorbit} as a strict generalization of \Cref{thm:ak-schur-horn}. \begin{proposition}[\protect{\cite[Proposition~2.1.12]{Lor-2016}}] \label{prop:orbit-closure-equivalences} If $A \in \mathcal{K}_+$ is a positive compact operator, then \begin{equation*} \mathcal{V}(A) = S(A) = \overline{\mathcal{U}(A)}^{\snorm{\cdot}}. \end{equation*} If in addition $A$ is trace-class, then these are also equal to $\overline{\mathcal{U}(A)}^{\snorm{\cdot}_1}$. Furthermore, if $A$ has finite rank, then all these sets coincide with the unitary orbit $\mathcal{U}(A)$. 
\end{proposition} The following fundamental result of Kaftal and Weiss \cite{KW-2010-JFA} characterizes the diagonals of positive compact operators in the partial isometry orbit, which, by \Cref{prop:orbit-closure-equivalences}, is the same as the trace-norm closure of the unitary orbit when the operator is trace-class. Therefore, \Cref{thm:kwpartialisometryorbit} below is a generalization of \Cref{thm:ak-schur-horn} to compact operators. Moreover, it has a striking resemblance to the majorization portion of the Schur--Horn theorem. \begin{theorem}[\protect{\cite[Proposition 6.4]{KW-2010-JFA}}] \label{thm:kwpartialisometryorbit} Let $A\in \mathcal{K}_+$. Then \begin{equation*} E(\mathcal{V}(A)) = \{ \diag d \mid d \in c_0^+, d \prec s(A) \}. \end{equation*} \end{theorem} As we have already mentioned, all the results concerning compact operators thus far characterize the diagonals only \emph{modulo the dimension of the kernel}. The next result, also from \cite{KW-2010-JFA}, is significant in that it overcomes this limitation, at least for positive compact operators with trivial kernel. In the following, $\mathcal{D}$ denotes the diagonal operators. \begin{theorem}[\protect{\cite[Proposition 6.6]{KW-2010-JFA}}] \label{thm:kwrangeprojectionidentity} Let $A\in \mathcal{K}_+$ be an operator whose range projection $R_A$ is the identity. Then \begin{equation*} E(\mathcal{U}(A)) = E(\mathcal{V}(A))\cap\{ B\in\mathcal{D} \mid R_B=I \}. \end{equation*} From the equivalent viewpoint of diagonals as sequences, this becomes \begin{equation*} \ensuremath{\mathcal{D}}(A) = \{ d \in c_0^+ \mid d \prec s(A), d_n \not= 0 \ \text{for all}\ n \}.
\end{equation*} \end{theorem} Since \cite{KW-2010-JFA}, it has become apparent to the authors and several other researchers (private communications) that understanding the interplay between the dimension of the kernel of a positive compact operator and its diagonal sequences is essential to characterizing diagonals of all selfadjoint operators more generally. However, aside from the cases when the kernel is infinite dimensional or trivial, this remains an open problem. \begin{openproblem} \label{prob:finite-kernel-schur-horn} Characterize, in terms of majorization, the diagonals of a positive compact operator with nontrivial, finite dimensional kernel. In particular, the following cases are important representative problems: \begin{enumerate} \item Characterize $\ensuremath{\mathcal{D}}\left(\diag\left(0,1,\frac{1}{2},\frac{1}{4},\frac{1}{8},\ldots\right)\right)$. \item Characterize $\ensuremath{\mathcal{D}}\left(\diag\left(0,1,\frac{1}{2},\frac{1}{3},\frac{1}{4},\ldots\right)\right)$. \end{enumerate} \end{openproblem} There has been some limited progress by the authors \cite{LW-2015-JFA} on the above problem. In order to describe these results, we need the closely related notions of \emph{$p$-majorization} and \emph{approximate $p$-majorization}. \begin{definition} \label{def:pmajorization} Given $d,\lambda\in c_0^+$ and $p \in \mathbb{Z}_{\ge 0}$, we say that $d$ is \emph{$p$-majorized} by $\lambda$, denoted $d\prec_p\lambda$, if $d\prec\lambda$ and for sufficiently large $n$, we have the inequality \begin{equation*} \sum_{k=1}^{n+p} d^{*}_k \le \sum_{k=1}^{n} \lambda^{*}_k. \end{equation*} Moreover, $\infty$-majorization, denoted $d\prec_\infty\lambda$, means $d\prec_p\lambda$ for all $p\in\mathbb{N}$.
\end{definition} \begin{definition} \label{def:approximatepmajorization} Given $d,\lambda\in c_0^+$ and $p \in \mathbb{Z}_{\ge 0}$, we say that $d$ is \emph{approximately $p$-majorized} by $\lambda$, denoted $d\precsim_p\lambda$, if $d\prec\lambda$ and, for every $\epsilon>0$ and all sufficiently large $n$, \begin{equation*} \sum_{k=1}^{n+p} d^{*}_k \le \sum_{k=1}^n \lambda^{*}_k + \epsilon \lambda^{*}_{n+1}. \end{equation*} Furthermore, if $d\precsim_p\lambda$ for infinitely many $p\in\mathbb{N}$ (equivalently, for all $p\in\mathbb{N}$), we call this approximate $\infty$-majorization and denote it by $d\precsim_\infty\lambda$. \end{definition} In the next theorem, for a positive compact operator $A$, $R_A^{\perp}$ is the projection onto the kernel of $A$, and so its trace is the dimension of the kernel. Informally, this theorem says that for a sequence $d$: \begin{enumerate} \item If $d \prec_p s(A)$ and $d$ has at most $p$ fewer zeros than the dimension of the kernel of $A$, then $d \in \ensuremath{\mathcal{D}}(A)$. \item If $d \in \ensuremath{\mathcal{D}}(A)$, then $d \precsim_p s(A)$, where $p$ is the difference between the number of zeros of $d$ and the dimension of the kernel of $A$. \end{enumerate} \begin{theorem}[\protect{\cite[Theorems~2.4 and 3.4]{LW-2015-JFA}}]\label{thm:sufficiencyofpmajorization} Let $A,B\in \mathcal{K}_+$. \begin{enumerate} \item \label{item:sufficiencyofpmajorization} If $B$ is a diagonal operator and for some $p \in \mathbb{Z}_{\ge 0} \cup \{\infty\}$, $\trace R_B^\perp \le\trace R_A^\perp \le \trace R_B^\perp +p$ and $s(B)\prec_p s(A)$, then $B\in E(\mathcal{U}(A))$.
\item \label{item:necessityofapproximatepmajorization} If $B\in E(\mathcal{U}(A))$, then $s(B) \precsim_p s(A)$, where \begin{equation*} p = \min \{ n\in \mathbb{Z}_{\ge 0} \cup \{\infty\} \mid \trace R_A^\perp \le\trace R_B^\perp +n \}. \end{equation*} \end{enumerate} \end{theorem} The $p$-majorization condition in \Cref{thm:sufficiencyofpmajorization}\ref{item:sufficiencyofpmajorization} is known not to be a necessary condition for $d$ to be a diagonal of $A$ \cite[Example~2.6]{LW-2015-JFA}. In contrast, it is not known whether approximate $p$-majorization in \Cref{thm:sufficiencyofpmajorization}\ref{item:necessityofapproximatepmajorization} is a sufficient condition for $d$ to be a diagonal of $A$. However, since $\infty$-majorization and approximate $\infty$-majorization turn out to be equivalent concepts, there is the following corollary which characterizes diagonals of positive compact operators with infinite dimensional kernel. \begin{corollary}[\protect{\cite[Corollary~3.5]{LW-2015-JFA}}] \label{cor:conjecturetrueforinfinitemo} Suppose $A\in \mathcal{K}_+$ has infinite rank and infinite dimensional kernel $(\trace R_A=\infty=\trace R_A^\perp)$. Then \[ E(\mathcal{U}(A)) = E(\mathcal{U}(A))_{fk}\sqcup E(\mathcal{U}(A))_{ik}, \] where the members of $E(\mathcal{U}(A))$ with finite dimensional kernel and infinite dimensional kernel, respectively, are characterized by \[ E(\mathcal{U}(A))_{fk}=\{ B\in\mathcal{D}\cap \mathcal{K}_+\mid s(B)\prec_\infty s(A)\quad\text{and}\quad \trace R_B^\perp<\infty\} \] and \[ E(\mathcal{U}(A))_{ik}=\{ B\in\mathcal{D}\cap \mathcal{K}_+ \mid s(B)\prec s(A)\quad\text{and}\quad \trace R_B^\perp=\infty\}. \] \end{corollary} Essentially, \Cref{cor:conjecturetrueforinfinitemo} says that if $A$ is a positive compact operator with infinite rank and infinite dimensional kernel, then $d \in \ensuremath{\mathcal{D}}(A)$ if and only if either (i) $d \prec s(A)$ and $d$ has infinitely many zeros, or (ii) $d \prec_{\infty} s(A)$.
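The gap between $p$- and $(p+1)$-majorization can be seen concretely on geometric sequences. In the following Python sketch (helper names ours; exact rational arithmetic via the standard library), $\lambda = (2^{-1}, 2^{-2}, \ldots)$ and $d$ is obtained by replacing the two largest terms of $\lambda$ (which sum to $\nicefrac{3}{4}$) with three terms equal to $\nicefrac{1}{4}$. Working on finite truncations, we can only test the shifted partial-sum inequalities of \Cref{def:pmajorization} for finitely many $n$, so this is a numerical illustration rather than a proof:

```python
from fractions import Fraction as F

def partial(seq, n):
    """Sum of the n largest terms of a finite truncation of seq."""
    return sum(sorted(seq, reverse=True)[:n])

def p_majorized_truncated(d, lam, p, n_range):
    """Test the shifted inequalities sum_{k<=n+p} d*_k <= sum_{k<=n} lam*_k
    for n in n_range only -- a finite-n sketch of the 'sufficiently large n'
    quantifier in the definition of p-majorization."""
    return all(partial(d, n + p) <= partial(lam, n) for n in n_range)

N = 40
lam = [F(1, 2**k) for k in range(1, N + 1)]               # 1/2, 1/4, 1/8, ...
d = [F(1, 4)] * 3 + [F(1, 2**k) for k in range(3, N)]     # split the top terms

# On this truncation the inequalities hold for p = 1 (with equality)
# but fail for p = 2 at every tested n:
assert p_majorized_truncated(d, lam, 1, range(5, 30))
assert not p_majorized_truncated(d, lam, 2, range(5, 30))
```

The full (non-truncated) computation shows $d \prec_1 \lambda$ but $d \not\prec_2 \lambda$; so by \Cref{thm:sufficiencyofpmajorization}\ref{item:sufficiencyofpmajorization}, such a $d$, having no zero entries, is a diagonal of a positive compact operator with singular value sequence $\lambda$ and one-dimensional kernel.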
Note that the infinite rank condition in \Cref{cor:conjecturetrueforinfinitemo} is not a true limitation, since the case when $A$ has finite rank was addressed as far back as \cite{AK-2006-OTOAaA} (because $\closure{\ensuremath{\mathcal{U}}(A)}^{\norm{\cdot}_1} \!\! = \, \ensuremath{\mathcal{U}}(A)$ by \Cref{prop:orbit-closure-equivalences}). \section{Finite spectrum selfadjoint operators} \label{sec:finite-spectrum-selfadjoint} As we remarked in \Cref{sec:compact-selfadjoint}, Kadison authored two of the pioneering papers \cite{Kad-2002-PNASU,Kad-2002-PNASUa} which led to a resurgence in the study of diagonals of selfadjoint operators. In these papers, Kadison investigated diagonals of projections starting from first principles, namely, the Pythagorean Theorem. For the link between the Pythagorean Theorem and diagonals of projections, we refer the reader to Kadison's original paper \cite{Kad-2002-PNASU}. However, the real surprise came in the second paper, where Kadison completely characterized diagonals of projections in $B(\ensuremath{\mathcal{H}})$ with $\ensuremath{\mathcal{H}}$ separable and infinite dimensional, a characterization in which an unexpected integer appeared: \begin{theorem}[\protect{\cite[Theorem~15]{Kad-2002-PNASUa}}] \label{thm:kadison-carpenter-pythagorean} A sequence $(d_n)$ is the diagonal of a projection $P$ if and only if it takes values in the unit interval and the quantities \begin{equation*} a := \sum_{d_n < \nicefrac{1}{2}} d_n \quad\text{and}\quad b := \sum_{d_n \ge \nicefrac{1}{2}} (1-d_n) \end{equation*} satisfy one of the mutually exclusive conditions \begin{enumerate} \item \label{item:a+b-infinite} $a+b = \infty$; \item \label{item:a+b-finite} $a+b < \infty$ and $a-b \in \mathbb{Z}$. \end{enumerate} \end{theorem} Since the advent of this theorem, there have been three primary outgrowths. Firstly, there was a push to generalize Kadison's result to arbitrary selfadjoint operators with finite spectrum.
This is a natural extension because projections are just selfadjoint operators with two-point spectrum, suitably normalized. Secondly, several authors have tried to explain the integer appearing in \Cref{thm:kadison-carpenter-pythagorean}. Thirdly, Arveson found a generalization of this integer condition for certain normal operators with finite spectrum. We will discuss the first two of these in this section, and the third in \Cref{sec:normal-operators}. Of course, we would be remiss if we didn't mention that the first paper \cite{Kad-2002-PNASU} launched an investigation of ``diagonals'' (expectations of unitary orbits) in von Neumann factors (\cite{AM-2013-PJM,AM-2008-JMAA,AM-2007-IUMJ,BR-2014-PAMS}), but as mentioned in our introduction, this topic is outside the scope of this survey. In a series of papers \cite{Jas-2013-JFA,BJ-2015-TAMS,BJ-2015-BPASM}, Bownik and Jasper managed to extend Kadison's theorem to arbitrary finite spectrum selfadjoint operators. In \cite{Jas-2013-JFA}, Jasper handles the case when the selfadjoint operator has three points in the spectrum. In \cite{BJ-2015-TAMS}, Bownik and Jasper do all the legwork to deal with the general case. However, this results in a very complex theorem, in part because there is a great deal which depends on the precise multiplicity of each of the eigenvalues. So, in \cite{BJ-2015-BPASM}, they provide a slightly simplified version of their theorem. This new version, although still somewhat complex, is remarkably easier to state (see \Cref{thm:bownik-jasper-finite-spectrum} below), and it comes with an entirely independent and much shorter proof. \begin{theorem}[\protect{\cite[Theorem~1.2]{BJ-2015-BPASM}}] \label{thm:bownik-jasper-finite-spectrum} Let $\{ \lambda_j \}_{j=0}^{n+1}$ be an increasing sequence of real numbers such that $\lambda_0 = 0$ and $\lambda_{n+1} = B$. Let $d$ be a sequence with values in $[0,B]$ such that $\sum d_j = \sum (B-d_j) = \infty$. 
For each $\alpha \in (0,B)$, define \begin{equation*} C(\alpha) = \sum_{d_j < \alpha} d_j \quad\text{and}\quad D(\alpha) = \sum_{d_j \ge \alpha} (B - d_j). \end{equation*} There exists a selfadjoint operator $T$ with finite spectrum $\ensuremath{\sigma}(T) = \{\lambda_j\}_{j=0}^{n+1}$ and diagonal $d$ if and only if either \begin{enumerate} \item $C(B/2) + D(B/2) = \infty$, or \item $C(B/2) + D(B/2) < \infty$ and there exist $N_1, \ldots, N_n \in \mathbb{N}$ and $k \in \mathbb{Z}$ such that \begin{equation*} C(B/2) - D(B/2) = \sum_{j=1}^n \lambda_j N_j + k B \end{equation*} and for all $1 \le r \le n$, \begin{equation*} (B-\lambda_r) C(\lambda_r) + \lambda_r D(\lambda_r) \ge (B - \lambda_r) \sum_{j=1}^r \lambda_j N_j + \lambda_r \sum_{j=r+1}^n (B-\lambda_j)N_j. \end{equation*} \end{enumerate} \end{theorem} Note that \Cref{thm:bownik-jasper-finite-spectrum} does not specify the precise multiplicity of the eigenvalues. This is not a deficiency of the theorem, but rather a feature; Bownik--Jasper have a theorem which completely characterizes diagonals of a finite spectrum selfadjoint operator with specified multiplicities of the eigenvalues \cite[Theorem~1.3]{BJ-2015-TAMS}, but that theorem is significantly more cumbersome. In addition to their generalizations of \Cref{thm:kadison-carpenter-pythagorean}, Bownik and Jasper also provided a new and different proof of the sufficiency direction of this theorem \cite{BJ-2014-CMB}, which Kadison referred to as the Carpenter's theorem, i.e., that conditions \ref{item:a+b-infinite} and \ref{item:a+b-finite} of \Cref{thm:kadison-carpenter-pythagorean} imply that $d$ is a diagonal of a projection $P$. This new proof was constructive, and as a byproduct ensured that the theorem was true even for \emph{real} Hilbert spaces, not just complex ones. 
While that may seem like an esoteric distinction, this is a topic that has actually arisen repeatedly in the study of diagonals of selfadjoint operators, even as far back as Horn's original paper \cite{Hor-1954-AJM} (see also \cite{KW-2010-JFA} in their discussion of orthogonal matrices, i.e., unitaries with real entries; or see \cite{JLW-2016-IUMJ}). The other primary outgrowth of \Cref{thm:kadison-carpenter-pythagorean} is the elucidation of the necessity direction (which Kadison referred to as the Pythagorean Theorem), particularly the integer in condition \Cref{thm:kadison-carpenter-pythagorean}\ref{item:a+b-finite}. Even Kadison referred to it as ``the curious `integrality' condition imposed on $a-b$'' \cite[p.~5220]{Kad-2002-PNASUa}. Perhaps more surprising is Kadison's proof, which concludes: ``As $a-b$ is arbitrarily close to an integer, $a-b$ is an integer'' \cite[p.~5221]{Kad-2002-PNASUa}. This is a rather analytic way to prove some quantity is an integer, and in this case the proof is opaque and does not lend much insight into the origin of this integer. As a result of this unexplained integer, several authors have given new proofs of the necessity of \Cref{thm:kadison-carpenter-pythagorean}\ref{item:a+b-finite}. First, in \cite{Arv-2007-PNASU} Arveson recognized the integer as the index of a certain Fredholm operator, but that description lacked a natural explanation of the role of this Fredholm operator. Later, Kaftal, Ng and Zhang gave another independent proof \cite[Corollary~3.6]{KNZ-2009-JFA} of the necessity of \Cref{thm:kadison-carpenter-pythagorean}\ref{item:a+b-finite} while working on a seemingly unrelated topic: strong sums of projections in von Neumann factors. More recently, Argerami provided yet another proof \cite[Theorem~4.6]{Arg-2015-IEOT} based on an argument of Effros [Effros, Lemma~4.1].
However, the next theorem, due jointly to the first author and V.~Kaftal, provides a very direct path from the original projection $P$ to the integer $a-b$ via the notion of essential codimension of projections due to Brown, Douglas and Fillmore \cite[Remark~4.9]{BDF-1973-PoaCoOT}. \begin{definition}[\protect{\cite[Remark~4.9]{BDF-1973-PoaCoOT}}] \label{def:essential-codimension} For projections $P,Q$ with $P-Q$ compact, the \emph{essential codimension} of $Q$ in $P$, denoted $[P:Q]$, is the integer defined by \begin{equation*} [P:Q] := \begin{cases} \trace P-\trace Q & \text{if}\ \trace P\ \text{and}\ \trace Q < \infty, \\[0.5em] \ind(V^{*}W) & \parbox[c][2em]{0.5\textwidth}{if $\trace P = \trace Q = \infty$, where \\ $W^{*}W = V^{*}V = I, WW^{*} = P, VV^{*} = Q$.} \\[0.4em] \end{cases} \end{equation*} An equivalent alternative definition is $[P:Q] := \ind(QP)$ where $QP : P\ensuremath{\mathcal{H}} \to Q\ensuremath{\mathcal{H}}$. \end{definition} \begin{theorem}[\protect{\cite[Theorem~1.3]{KL-2017-IEOT}}] \label{thm:kadison-integer-essential-codimension} With the notations of \Cref{thm:kadison-carpenter-pythagorean}, if $P \in B(\ensuremath{\mathcal{H}})$ is a projection with $a + b < \infty$ and $Q$ is the projection onto $\spans \{ e_j \mid d_j \ge \nicefrac{1}{2} \}$, then $P-Q$ is Hilbert--Schmidt and $a - b = [P:Q]$ is the essential codimension of the pair $P,Q$. \end{theorem} \Cref{thm:kadison-integer-essential-codimension} can also illuminate the role of the integer $k$ in \Cref{thm:bownik-jasper-finite-spectrum}, but the details are more technical (see \cite[Section~3]{KL-2017-IEOT}). Moreover, there is a collection of integers in Arveson's generalization of \Cref{thm:kadison-carpenter-pythagorean} to finite spectrum normal operators which can also be explained in a similar way via the notion of essential codimension (see \Cref{sec:normal-operators} herein, especially \Cref{thm:arveson,thm:loreaux-normal-essential-codimension}, for details). 
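When a candidate diagonal equals $0$ or $1$ in all but finitely many entries, both $a$ and $b$ of \Cref{thm:kadison-carpenter-pythagorean} are finite sums, so only the integrality test of condition \ref{item:a+b-finite} remains, and \Cref{thm:kadison-integer-essential-codimension} identifies the resulting integer $a-b$ with the essential codimension $[P:Q]$. The following Python sketch evaluates this test with exact rational arithmetic (the helper names are ours and purely illustrative):

```python
from fractions import Fraction as F

def kadison_data(off_values):
    """For a candidate diagonal with values in [0,1] equal to 0 or 1 except
    at the finitely many entries in off_values, return (a, b) from Kadison's
    theorem; both are automatically finite here."""
    a = sum((x for x in off_values if x < F(1, 2)), F(0))
    b = sum((1 - x for x in off_values if x >= F(1, 2)), F(0))
    return a, b

def is_projection_diagonal(off_values):
    """Case (2) of Kadison's theorem: since a + b < infinity automatically,
    the sequence is the diagonal of a projection iff a - b is an integer."""
    a, b = kadison_data(off_values)
    return (a - b).denominator == 1

# (1/3, 2/3, 0, 0, ...): a = b = 1/3, so a - b = 0 and it is a diagonal
assert is_projection_diagonal([F(1, 3), F(2, 3)])
# (1/5, 9/10, 0, 0, ...): a - b = 1/5 - 1/10 = 1/10, not an integer
assert not is_projection_diagonal([F(1, 5), F(9, 10)])
```

In the first example both $P$ and the coordinate projection $Q$ of \Cref{thm:kadison-integer-essential-codimension} have trace $1$, consistent with $a - b = 0 = \trace P - \trace Q$, the finite-trace branch of \Cref{def:essential-codimension}.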
\section{General selfadjoint operators} \label{sec:general-selfadjoint} In this section we present both a survey and new results on this topic. \subsection{Existing results} In \Cref{sec:compact-selfadjoint,sec:finite-spectrum-selfadjoint} we considered two extensions of the Schur--Horn theorem to infinite dimensions, both of which were conservative in their spectral characteristics; indeed, there were only ever finitely many points in the essential spectrum, with the interesting cases being when there were only one or two. In this section, we explore results about diagonals of selfadjoint operators which have much weaker, or even no constraints on the spectral characteristics of the operator. The first of these is due to Neumann in \cite[Theorem~3.13]{Neu-1999-JFA}, where he provided a generalization of the convexity aspect of the Schur--Horn theorem to diagonalizable selfadjoint operators. \begin{theorem}[\protect{\cite[Theorem~3.13]{Neu-1999-JFA}}] \label{thm:neumann-diagonalizable-convexity} If $T = \diag(\lambda)$ is a diagonal selfadjoint operator on $B(\ensuremath{\mathcal{H}})$ with eigenvalue sequence $\lambda$, then \begin{equation*} \closure{\ensuremath{\mathcal{D}}(T)}^{\norm{\cdot}_{\infty}} = \closure{\conv \{ \lambda_{\pi} \mid \pi\ \text{is a permutation} \}}^{\norm{\cdot}_{\infty}}. \end{equation*} \end{theorem} While this theorem is incredibly interesting as a generalization of the Schur--Horn theorem, we remark that because it takes closures, it loses much of the fine structure present in the theorems mentioned in \Cref{sec:compact-selfadjoint,sec:finite-spectrum-selfadjoint}. 
For example, when applied to an infinite, coinfinite projection $P$, we can see that $\closure{\ensuremath{\mathcal{D}}(P)}^{\norm{\cdot}_{\infty}}$ consists precisely of those sequences with values in $[0,1]$, thereby masking the subtle integer condition present in the characterization of $\ensuremath{\mathcal{D}}(P)$ in \Cref{thm:kadison-carpenter-pythagorean}\ref{item:a+b-finite}. In the case of \Cref{thm:neumann-diagonalizable-convexity} when $T$ is a positive compact operator, so that $\lambda \in c_0^+$, then the set on the right-hand side is equal to $\{ d \in c_0^+ \mid d \pprec \lambda \}$ (see \cite[Corollary~2.18]{Neu-1999-JFA}). Contrasting this to the more nuanced (i.e., exact) \Cref{thm:kwpartialisometryorbit}, we see that the effect of taking the $\ell^{\infty}$-closure is in some sense to ignore the exact value of the trace (anything smaller will do). Therefore, even when restricted to compact, or even trace-class, operators, \Cref{thm:neumann-diagonalizable-convexity} misses out on some fine structure in the set of diagonal sequences. Nevertheless, this is an important result in the understanding of diagonals of selfadjoint operators because of its generality. In addition, Neumann proved another powerful result \cite[Remark~4.5]{Neu-1999-JFA}, again using the $\ell^{\infty}$-closure, which applies also to \emph{all} selfadjoint operators. In essence, this theorem states that a sequence $d$ is the diagonal of a selfadjoint operator $T$ if and only if the part of $d$ which lies outside the convex hull of the essential spectrum of $T$ is majorized by the part of the spectrum which lies outside the convex hull of the essential spectrum. \begin{theorem}[\protect{\cite[Remark~4.5]{Neu-1999-JFA}}] \label{thm:neumann-general-schur-horn} Suppose that $T \in B(\ensuremath{\mathcal{H}})$ is selfadjoint and let $\alpha_- = \min \ensuremath{\sigma_{\text{ess}}}(T)$ and $\alpha_+ = \max \ensuremath{\sigma_{\text{ess}}}(T)$. 
Let $T^{\pm} := (T-\alpha_{\pm})_{\pm}$, which are both positive compact operators. Let $d \in \ell^{\infty}$ be a real-valued sequence. Then $d \in \closure{\ensuremath{\mathcal{D}}(T)}^{\norm{\cdot}_{\infty}}$ if and only if there are sequences $d^{\pm} \in c_0^+$ and $d'$ with values in $[\alpha_-,\alpha_+]$ such that $d = d' + d^+ - d^-$ and $d^{\pm} \pprec s(T^{\pm})$. \end{theorem} Very recently, M\"uller and Tomilov \cite{MT-2019-TAMS} recognized an important pattern in results about diagonals of operators, and they have turned this into a powerful theorem. In particular, they noticed that often a sufficient condition for a sequence to be the diagonal of some operator is that the sequence lies ``well inside'' the interior of the essential numerical range, in that it does not approach the boundary too rapidly. This involves what they refer to as a Blaschke-type condition (see \Cref{thm:blaschke-muller-tomilov} below) in reference to the Blaschke products of analytic function theory on the open unit disk, albeit in \Cref{thm:blaschke-muller-tomilov} the condition is that the sum is infinite rather than finite. Situations in which M\"uller and Tomilov noticed the Blaschke-type condition prior to their discovery include \Cref{thm:kadison-carpenter-pythagorean}\ref{item:a+b-infinite}, \Cref{thm:jasper-loreaux-weiss-unitary}, and other examples due to Herrero \cite{Her-1991-RMJM}. The power of their theorem is at least two-fold: it applies to the vast majority of selfadjoint operators, and it provides a sufficient condition for a sequence to lie in $\ensuremath{\mathcal{D}}(T)$, as opposed to the closure of this set as in Neumann's theorems mentioned above.
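To make the Blaschke-type condition concrete, consider a selfadjoint $T$ with $\ensuremath{W_{\text{e}}}(T) = [0,1]$ (an assumption made purely for this illustration), so that the relevant distance is $\dist\{d_k, \mathbb{R} \setminus [0,1]\} = \min(d_k, 1-d_k)$ for $d_k \in (0,1)$. The following Python sketch (helper names ours) contrasts a harmonic-type sequence, whose distance series diverges, with a geometric one, whose series converges:

```python
# Partial sums of the Blaschke-type series for W_e(T) = [0, 1]
# (an assumption for this illustration); d is a function k -> d_k.
def blaschke_partial_sum(d, n):
    return sum(min(d(k), 1 - d(k)) for k in range(1, n + 1))

harmonic = lambda k: 1.0 / (k + 1)   # distances sum like the harmonic series
geometric = lambda k: 0.5 ** k       # distances form a convergent series

# The harmonic-type partial sums eventually exceed any threshold ...
assert blaschke_partial_sum(harmonic, 10_000) > 5
# ... while the geometric ones stay bounded (by 1, up to rounding):
assert blaschke_partial_sum(geometric, 10_000) <= 1.0
```

Only the first sequence satisfies the divergence condition, and hence only for it does the theorem produce a diagonal of such a $T$.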
\begin{theorem}[\protect{\cite[Theorem~1.1]{MT-2019-TAMS}}] \label{thm:blaschke-muller-tomilov} Let $T \in B(\ensuremath{\mathcal{H}})$ be selfadjoint and let $d = (d_k)_{k=1}^{\infty} \subseteq \Int_{\mathbb{R}} \ensuremath{W_{\text{e}}}(T)$ satisfy the Blaschke condition \begin{equation} \label{eq:blaschke-condition} \sum_{k=1}^{\infty} \dist \{ d_k, \mathbb{R} \setminus \ensuremath{W_{\text{e}}}(T) \} = \infty. \end{equation} Then $d \in \mathcal{D}(T)$. \end{theorem} We remark that M\"uller and Tomilov's entire paper actually applies more generally to finite tuples of operators, but we have restricted our focus here to the setting of a single operator both because it is more in line with the scope of this paper and because it would be cumbersome to define the joint essential numerical range. \Cref{thm:blaschke-muller-tomilov} is a wonderful \emph{tour de force} for establishing that such sequences occur as diagonals and it subsumes in part several earlier results of others (e.g., \Cref{thm:kadison-carpenter-pythagorean,thm:bownik-jasper-finite-spectrum,thm:neumann-general-schur-horn}). However, the parts that it misses are the edge cases (e.g., \Cref{thm:kadison-carpenter-pythagorean}\ref{item:a+b-finite}), and these are in general the harder results to establish. Moreover, because this is concerned with the interior of the essential numerical range, this theorem has nothing to say about diagonals of compact operators, whose essential numerical range is $\{0\}$. In this sense, \Cref{thm:blaschke-muller-tomilov} is orthogonal to the study of diagonals of compact operators explored in \Cref{sec:compact-selfadjoint}. \subsection{New results} In what follows, we use the above result (\Cref{thm:blaschke-muller-tomilov}) of M\"uller and Tomilov to provide the legwork in establishing that certain sequences are diagonals of selfadjoint operators.
In conjunction, we use \Cref{lem:kaftal-loreaux-restriction-lemma} from \cite[Lemma~3.3]{KL-2017-IEOT} to place a constraint (\Cref{cor:kaftal-loreaux-corollary}) on which sequences can appear as the diagonals of certain selfadjoint operators. Consequently, in \Cref{thm:diagonal-characterization} we characterize the diagonals of all selfadjoint operators whose numerical range is contained in the essential numerical range (this is equivalent to the minimum and maximum of the spectrum being elements of the essential spectrum), as long as there are at least three points in the essential spectrum. This class of operators includes, for example, all selfadjoint multiplication operators on a nonatomic measure space. We begin with the lemma from \cite{KL-2017-IEOT} which has as a consequence a constraint on the essential spectrum of a selfadjoint operator when the sum in the Blaschke condition \eqref{eq:blaschke-condition} is finite instead of infinite. \begin{lemma}[\protect{\cite[Lemma~3.3]{KL-2017-IEOT}}] \label{lem:kaftal-loreaux-restriction-lemma} Let $\mathcal{J}$ be a proper ideal, $T \in B(\ensuremath{\mathcal{H}})_+$ a positive contraction, and $Q \in B(\ensuremath{\mathcal{H}})$ a projection. \begin{enumerate} \item If $Q - QTQ \in \mathcal{J}$ and $Q^{\perp}TQ^{\perp} \in \mathcal{J}$, then $T - Q \in \mathcal{J}^{\nicefrac{1}{2}}$ and $T\chi_{[0,\epsilon]}(T) \in \mathcal{J}$ for every $0 < \epsilon < 1$. ($\chi_{[0,\epsilon]}(T)$ is the spectral projection onto $[0,\epsilon]$.) \item Assume that $T$ is a projection or that $\mathcal{J}$ is idempotent (i.e., $\mathcal{J} = \mathcal{J}^2$). If $T - Q \in \mathcal{J}^{\nicefrac{1}{2}}$ and $T\chi_{[0,\epsilon]}(T) \in \mathcal{J}$ for some $0 < \epsilon < 1$, then $Q - QTQ \in \mathcal{J}$ and $Q^{\perp}TQ^{\perp} \in \mathcal{J}$. \end{enumerate} \end{lemma} While it may not be immediately obvious, \Cref{lem:kaftal-loreaux-restriction-lemma} has the following new corollary.
\begin{corollary} \label{cor:kaftal-loreaux-corollary} Suppose that $T$ is a selfadjoint operator in $B(\ensuremath{\mathcal{H}})$ with some diagonal $d \in \ensuremath{\mathcal{D}}(T)$. Let $a = \min \ensuremath{\sigma_{\text{ess}}}(T)$ and $b = \max \ensuremath{\sigma_{\text{ess}}}(T)$. If $(T-b)_+$ and $(T-a)_-$ are trace-class and \begin{equation} \label{eq:blaschke-kaftal-loreaux} \sum_{k=1}^{\infty} \dist \{ d_k, \{a,b\} \} < \infty, \end{equation} then the essential spectrum $\ensuremath{\sigma_{\text{ess}}}(T)$ contains at most two points. \end{corollary} \begin{proof}[Proof of corollary] If $a = b$, then $\ensuremath{\sigma_{\text{ess}}}(T) = \{a\}$ and there is nothing to prove, so suppose $a < b$. We first reduce to the case when $a = 0$ and $b = 1$. In order to do this, we simply replace $T$ with $\frac{1}{b-a}(T-a)$. Note that because this is just a scaling and translation of $T$, the cardinality of the essential spectrum is preserved. Moreover, the diagonal of this new operator is the sequence $\big(\frac{d_k-a}{b-a}\big)_{k=1}^{\infty}$, and it is straightforward to check that the corresponding sum \eqref{eq:blaschke-kaftal-loreaux} differs from the one for $T$ by a factor of $b-a$. From the preceding paragraph, we may assume without loss of generality that $a = 0$ and $b = 1$. We next reduce to the case when $(T-1)_+$ and $T_-$ are zero. For this, simply replace $T$ with $T' = T - (T-1)_+ + T_-$. Then notice that their difference $T - T'$ is a trace-class operator, and hence $T,T'$ have the same essential spectrum. The former fact implies that the difference in their diagonal sequences $(d'_k - d_k)_{k=1}^{\infty}$ is absolutely summable. Moreover, \begin{equation*} \sum_{k=1}^{\infty} \dist \{ d'_k, \{a,b\} \} \le \sum_{k=1}^{\infty} \big(\abs{d'_k - d_k} + \dist \{ d_k, \{a,b\} \}\big) < \infty. 
\end{equation*} Hence, the hypotheses of the theorem are satisfied by $T'$, which has the same essential spectrum as $T$, so if \Cref{cor:kaftal-loreaux-corollary} holds for $T'$ it also holds for $T$. From the preceding paragraph, we may assume without loss of generality that $(T-1)_+$ and $T_-$ are zero, so that $T$ is a positive contraction. This implies that $\ensuremath{W}(T) \subseteq \ensuremath{W_{\text{e}}}(T) = [0,1]$; note that we do not necessarily have equality because $\ensuremath{W}(T)$ may not contain the endpoints $0,1$. In particular, this means that $(d_k)_{k=1}^{\infty} \subseteq [0,1]$. Therefore, the sum \eqref{eq:blaschke-kaftal-loreaux} can be split and written as \begin{equation*} \sum_{d_k < \nicefrac{1}{2}} d_k + \sum_{d_k \ge \nicefrac{1}{2}} (1-d_k) = \sum_{k=1}^{\infty} \dist \{ d_k, \{0,1\} \} < \infty. \end{equation*} So, in particular, both sums on the left are finite. Let $(e_k)_{k=1}^{\infty}$ be the orthonormal basis in which $T$ has the diagonal $(d_k)_{k=1}^{\infty}$. Then let $Q$ denote the projection onto the closed span of $\{ e_k \mid d_k \ge \nicefrac{1}{2} \}$. Since $T$ is a positive contraction, $Q^{\perp}TQ^{\perp}$ and $Q-QTQ = Q(1-T)Q$ are positive operators. Moreover, we have \begin{equation*} \trace(Q^{\perp}TQ^{\perp}) = \sum_{d_k < \nicefrac{1}{2}} d_k, \qquad \trace(Q-QTQ) = \sum_{d_k \ge \nicefrac{1}{2}} (1-d_k). \end{equation*} Since both of these sums are finite and the operators are positive, $Q^{\perp}TQ^{\perp}$ and $Q-QTQ$ are trace-class. Therefore $T,Q$ satisfy the hypotheses of \Cref{lem:kaftal-loreaux-restriction-lemma}(i) when $\mathcal{J}$ is the ideal of trace-class operators, and hence $T-Q$ is Hilbert--Schmidt. Being compact, $T-Q$ has image zero in the Calkin algebra. Therefore, the image of $T = (T-Q) + Q$ is a projection in the Calkin algebra, and hence the essential spectrum of $T$ is contained in $\{0,1\}$. 
\end{proof} Using \Cref{thm:blaschke-muller-tomilov} and \Cref{cor:kaftal-loreaux-corollary} we can completely characterize the diagonals of a large class of selfadjoint operators with at least three points in their essential spectrum. This class includes all selfadjoint multiplication operators on a nonatomic measure space. In addition, every selfadjoint operator is a compact perturbation of an operator from this class, hence the image of this class in the Calkin algebra consists of all the selfadjoint elements. \begin{theorem} \label{thm:diagonal-characterization} Suppose that $T \in B(\ensuremath{\mathcal{H}})$ is a selfadjoint operator with $\ensuremath{W}(T) \subseteq \ensuremath{W_{\text{e}}}(T)$ and with at least three points in the essential spectrum. Then a sequence $(d_k)_{k=1}^{\infty}$ lies in $\mathcal{D}(T)$ if and only if $(d_k)_{k=1}^{\infty} \subseteq \ensuremath{W}(T)$ and \begin{equation*} \sum_{k=1}^{\infty} \dist \{ d_k, \mathbb{R} \setminus \ensuremath{W_{\text{e}}}(T) \} = \infty, \end{equation*} and the number of occurrences of each of $\min \ensuremath{\sigma_{\text{ess}}}(T), \max \ensuremath{\sigma_{\text{ess}}}(T)$ in the sequence $(d_k)_{k=1}^{\infty}$ is less than or equal to the dimension of the corresponding eigenspace of $T$. \end{theorem} \begin{proof} Before we prove either direction, we establish a few facts relevant to both directions. When $T$ is normal it is well-known that $\closure{\ensuremath{W}(T)} = \conv \ensuremath{\sigma}(T)$ and the closed set $\ensuremath{W_{\text{e}}}(T) = \conv \ensuremath{\sigma_{\text{ess}}}(T)$. Then, since $T$ is selfadjoint, if $a := \min\ensuremath{\sigma_{\text{ess}}}(T)$ and $b := \max\ensuremath{\sigma_{\text{ess}}}(T)$, then $\conv \ensuremath{\sigma_{\text{ess}}}(T) = [a,b]$ and so $\closure{W(T)} = \conv \ensuremath{\sigma}(T) \supseteq \conv \ensuremath{\sigma_{\text{ess}}}(T) = \ensuremath{W_{\text{e}}}(T) = [a,b]$. 
Moreover, by hypothesis $\ensuremath{W}(T) \subseteq \ensuremath{W_{\text{e}}}(T)$, and using the previous string of inclusions we conclude $\closure{\ensuremath{W}(T)} = \ensuremath{W_{\text{e}}}(T) = [a,b]$. Finally, since $\ensuremath{W}(T)$ is convex and hence an interval that contains $(a,b)$, we know that the difference $\ensuremath{W_{\text{e}}}(T) \setminus \ensuremath{W}(T)$ is contained in $\{a,b\}$. Additionally, $(T-a)_- = (T-b)_+ = 0$ since $a,b$ are also the minimum and maximum of $\ensuremath{\sigma}(T)$. We first prove the ``only if'' direction. Suppose that $(d_k)_{k=1}^{\infty} \in \mathcal{D}(T)$. First note that $d_k = \angles{Te_k,e_k} \in \ensuremath{W}(T)$. Next, because $T$ has at least three points in its essential spectrum by hypothesis, and $(T-a)_- = (T-b)_+ = 0$ since $\ensuremath{W}(T) \subseteq \ensuremath{W_{\text{e}}}(T)$, we can use \Cref{cor:kaftal-loreaux-corollary} to conclude that $\sum_{k=1}^{\infty} \dist \{ d_k, \mathbb{R} \setminus \ensuremath{W_{\text{e}}}(T) \} = \infty.$ Finally, it is elementary to show that $a$ (or $b$) $\in \ensuremath{W}(T)$ if and only if $a$ (or $b$) is an eigenvalue of $T$ because these points are the minimum and maximum of $\ensuremath{\sigma}(T)$. In fact, more is true; if $\angles{Te_k,e_k} = a$ (or $b$), then $e_k$ is an eigenvector for $a$ (or $b$). Therefore, if a diagonal sequence $(d_k)_{k=1}^{\infty}$ of $T$ takes the value $a$ exactly $m$ times (here, $m = \infty$ is allowable), then there are at least $m$ orthogonal eigenvectors for $a$, so the dimension of the eigenspace is at least $m$. We now prove the ``if'' direction. 
Suppose that $(d_k)_{k=1}^{\infty} \subseteq \ensuremath{W}(T)$ is a sequence with \begin{equation*} \sum_{k=1}^{\infty} \dist \{ d_k, \mathbb{R} \setminus \ensuremath{W_{\text{e}}}(T) \} = \infty, \end{equation*} and the number of occurrences of each of $a,b$ in the sequence $(d_k)_{k=1}^{\infty}$ is less than or equal to the dimension of the corresponding eigenspace of $T$. Let the numbers (possibly $\infty$) of these occurrences be $m,n$ for $a,b$, respectively. Then there are orthonormal collections $\{f_k\}_{k=1}^m, \{g_k\}_{k=1}^n$ of eigenvectors for the eigenvalues $a,b$, respectively. Note that if $m$ (or $n$) is infinite, we can choose the vectors $f_k$ (or $g_k$) so that the complement of their span inside the eigenspace for $a$ (or $b$) is infinite dimensional. Let $P$ be the projection (which commutes with $T$) onto the closed span of $\{f_k\}_{k=1}^m \cup \{g_k\}_{k=1}^n$. Note that $P^{\perp}TP^{\perp} = TP^{\perp}$ is a selfadjoint operator with $\ensuremath{W_{\text{e}}}(TP^{\perp}) = [a,b]$. To see $a$ is still in this set, note that if $m$ is finite, we only removed a finite rank portion of $T$ at $a$. Whereas, if $m$ is infinite, our choice of $\{f_k\}_{k=1}^m$ guarantees that the eigenspace of $a$ for $TP^{\perp}$ is infinite dimensional. Similar arguments hold for $b$. Now, consider the sequence $(d'_k)_{k=1}^{\infty} \subseteq (a,b) = \Int_{\mathbb{R}} \ensuremath{W_{\text{e}}}(T)$ obtained by deleting all occurrences of $a,b$ from $(d_k)_{k=1}^{\infty}$. We know that \begin{equation*} \sum_{k=1}^{\infty} \dist \{ d'_k, \mathbb{R} \setminus \ensuremath{W_{\text{e}}}(TP^{\perp}) \} = \sum_{k=1}^{\infty} \dist \{ d_k, \mathbb{R} \setminus \ensuremath{W_{\text{e}}}(T) \} = \infty, \end{equation*} because the terms removed from the sequence $(d_k)_{k=1}^{\infty}$ were at distance zero from $\mathbb{R} \setminus \ensuremath{W_{\text{e}}}(TP^{\perp})$. This ensures the sequence $(d'_k)_{k=1}^{\infty}$ is actually an infinite sequence. 
Therefore by \Cref{thm:blaschke-muller-tomilov}, $(d'_k)_{k=1}^{\infty} \in \mathcal{D}(TP^{\perp})$ (where here the operator acts on the Hilbert space $P^{\perp} \ensuremath{\mathcal{H}}$). Combining the orthonormal basis yielding $(d'_k)_{k=1}^{\infty}$ with the collection $\{f_k\}_{k=1}^m \cup \{g_k\}_{k=1}^n$, we obtain that $(d_k)_{k=1}^{\infty} \in \mathcal{D}(T)$. \end{proof} \section{Normal operators} \label{sec:normal-operators} Up until this point we have only discussed diagonals of selfadjoint operators. That is primarily because the bulk of the research is focused on this case. In this section, we explore results on diagonals of normal operators as well. The first and most natural situation to consider is normal matrices, just as the Schur--Horn theorem (\Cref{thm:schur-horn}) investigated selfadjoint matrices. In this regard, there are several things to be said. \Cref{thm:schur-horn} gives two characterizations of the diagonals of selfadjoint matrices: one in terms of majorization and the other in terms of permutations of the eigenvalue sequence. Of course, the former is only defined for real sequences, so there is little hope of generalizing it to the setting of normal matrices, but the latter is a set of sequences which is easily defined for normal matrices. One might naively hope that the equivalence of \ref{item:schur-horn-diagonal} and \ref{item:schur-horn-convexity} from \Cref{thm:schur-horn} extends verbatim to the setting of normal matrices. The following example, due to Horn \cite[p.~625]{Hor-1954-AJM} and based on an idea he attributes to A.~J.~Hoffman, shows that this fails when three of the eigenvalues are not collinear. \begin{example} \label{ex:nonconvex-normal-diagonal} Suppose $N$ is a $3 \times 3$ normal matrix with eigenvalues $\lambda_1,\lambda_2,\lambda_3$ which are not collinear. 
Then the sequence with values $d_1 = \frac{\lambda_2 + \lambda_3}{2}, d_2 = \frac{\lambda_1 + \lambda_3}{2}, d_3 = \frac{\lambda_1 + \lambda_2}{2}$ is not a diagonal of $N$. To see this, notice that $d$ is a diagonal of $N$ if and only if there is a unitary matrix $U$ such that $U(\diag \lambda)U^{*}$ has diagonal $d$. Computing the diagonal entries of $U(\diag \lambda)U^{*}$ in terms of the entries of $U$ yields \begin{equation*} d_i = \abs{u_{i1}}^2 \lambda_1 + \abs{u_{i2}}^2 \lambda_2 + \abs{u_{i3}}^2 \lambda_3, \end{equation*} which is a convex combination of $\lambda_1,\lambda_2,\lambda_3$ since the rows of $U$ have norm $1$. Since there is a unique way to write each entry $d_i$ as a convex combination of $\lambda_1,\lambda_2,\lambda_3$, the matrix of squared moduli of the entries of $U$ must be \begin{equation*} \frac{1}{2} \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ \end{pmatrix}. \end{equation*} But then $U$ cannot have orthonormal rows: the inner product of the first two rows is $u_{13}\overline{u_{23}}$, which has modulus $\nicefrac{1}{2} \neq 0$. Therefore $U$ cannot actually be unitary, and so $d$ is not a diagonal of $N$. \end{example} In 1971, J.P.~Williams \cite{Wil-1971-JLMS2} completely characterized the diagonals of $3 \times 3$ normal matrices, and the description given is entirely geometric. Below, we restate Williams' theorem using the terminology of geometry, which makes it quite concise; in comparison, the description given by Williams is rather cumbersome. But first we illustrate with a picture the general case, described in \Cref{thm:3x3-normal}\ref{item:williams-generic}, when a diagonal entry does not lie on the boundary of the numerical range. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{williams-ellipse.pdf} \caption{The ellipse described in \Cref{thm:3x3-normal}. The point $d_1^{\dag}$ is the isotomic conjugate of $d_1$, and for each $1 \le i \le 3$, $\trace_{\lambda_i}(d_1)$ denotes the trace of $d_1$ on the edge opposite the vertex $\lambda_i$. 
Likewise for $\trace_{\lambda_i}(d_1^{\dag})$.} \label{fig:williams-ellipse} \end{figure} \begin{theorem}[\protect{\cite[Theorem~2]{Wil-1971-JLMS2}}]\label{thm:3x3-normal} Let $N$ be a $3 \times 3$ normal matrix with noncollinear eigenvalues $\lambda_1, \lambda_2, \lambda_3$, so that $W(N) = \conv \{ \lambda_1, \lambda_2, \lambda_3 \}$ is the triangle whose vertices are the eigenvalues. Then $(d_1,d_2,d_3) \in \ensuremath{\mathcal{D}}(N)$ if and only if one of the following mutually exclusive conditions holds: \begin{enumerate} \item $d_1 = \lambda_i$, and $d_2,d_3$ lie on the edge opposite $\lambda_i$ and are symmetric about its midpoint. \item $d_1 \in \partial W(N) \setminus \{\lambda_1,\lambda_2,\lambda_3\}$, and if $d'_1$ denotes the point which is symmetric to $d_1$ relative to the midpoint of the edge containing $d_1$, then $d_2, d_3$ lie on the line segment joining $d_1$ to $d'_1$ and are symmetric about its midpoint. \item \label{item:williams-generic} $d_1 \in \Int(W(N))$, and $d_2, d_3$ lie in the ellipse inscribed in $W(N)$ which is tangent to $W(N)$ at the traces of the isotomic conjugate $d_1^{\dag}$ of $d_1$, and moreover $d_2, d_3$ are symmetric relative to the center of this ellipse (see \Cref{fig:williams-ellipse} for a diagram). \end{enumerate} \end{theorem} Note that the case when the eigenvalues of $N$ are collinear is actually addressed by \Cref{thm:schur-horn}. Indeed, if the eigenvalues of $N$ are collinear, then there are a selfadjoint matrix $T$ and constants $a,b$ so that $N = aT + b$, and hence $\ensuremath{\mathcal{D}}(N) = \ensuremath{\mathcal{D}}(aT + b) = a\ensuremath{\mathcal{D}}(T) + b$. One might hope to generalize Williams' result to matrices which are larger than $3 \times 3$. However, it seems that at this stage there is little hope for progress in this area. 
Williams' proof of \Cref{thm:3x3-normal} depends in an essential way on the fact that the diagonals of $2 \times 2$ matrices, not just the normal ones, are completely characterized. Indeed, the diagonals of a $2 \times 2$ matrix $T$ are precisely the pairs $(d, \trace T - d)$ with $d \in \ensuremath{W}(T)$, and it is well-known that $\ensuremath{W}(T)$ is an ellipse whose foci are the eigenvalues of $T$. In contrast, there is no analogous characterization of the diagonals of an arbitrary $3 \times 3$ matrix, which makes the $4 \times 4$ normal case intractable using his approach. Although necessary and sufficient conditions for a sequence to be a diagonal of a normal operator seem out of reach in general, in \cite{Arv-2007-PNASU} Arveson did manage to determine a necessary condition on diagonals of certain finite-spectrum normal operators. Arveson discovered this condition as a generalization of Kadison's \Cref{thm:kadison-carpenter-pythagorean}. \begin{theorem}[\protect{\cite[Theorem~4]{Arv-2007-PNASU}}] \label{thm:arveson} Let $X = \{\lambda_1,\ldots,\lambda_m\}$ be the set of vertices of a convex polygon $P \subseteq \mathbb{C}$ and let $d = (d_1,d_2,\ldots)$ be a sequence of complex numbers satisfying $d_n \in P$ for $n \ge 1$, together with the summability condition \begin{equation} \label{eq:summability-condition} \sum_{n=1}^{\infty} \dist(d_n, X) < \infty. \end{equation} Then $d$ is the diagonal of a normal operator $N$ with spectrum $\ensuremath{\sigma}(N) = \ensuremath{\sigma_{\text{ess}}}(N) = X$ if and only if for any $x_n \in X$ such that $\sum_{n=1}^{\infty} \abs{d_n - x_n} < \infty$ there are integers $c_1,\ldots,c_m$ whose sum is zero for which \begin{equation*} \sum_{n=1}^{\infty} (d_n - x_n) = \sum_{n=1}^m c_n \lambda_n. \end{equation*} \end{theorem} The above \Cref{thm:arveson} contains \Cref{thm:kadison-carpenter-pythagorean}(ii) as the special case $X = \{0,1\}$. 
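To see the integrality phenomenon of the special case $X = \{0,1\}$ concretely, the following sketch computes Kadison's integer for the diagonal of a block-diagonal projection built from $2 \times 2$ blocks; the particular projection and all names here are our own toy illustration, not taken from the cited papers.

```python
# Toy illustration of Arveson's theorem at X = {0,1} (Kadison's theorem).
# Build the diagonal of a block-diagonal projection made of 2x2 blocks
# projecting onto span(cos t_k, sin t_k), chosen with sin^2 t_k = 2^{-k}:
# the diagonal interleaves 1 - 2^{-k} and 2^{-k}, so the summability
# condition sum_n dist(d_n, {0,1}) < infinity holds.
diag = []
for k in range(2, 50):
    s2 = 2.0 ** (-k)
    diag.extend([1.0 - s2, s2])

# Kadison's integrality condition: the "excess"
#   sum_{d < 1/2} d  -  sum_{d >= 1/2} (1 - d)
# must be an integer whenever d is the diagonal of a projection.
a = sum(d for d in diag if d < 0.5)
b = sum(1.0 - d for d in diag if d >= 0.5)
print(a - b)  # here the two sums are equal, so the excess is 0
```

Perturbing a single diagonal entry of this pattern by any noninteger amount destroys the integrality of the excess, so the perturbed sequence is no longer the diagonal of a projection; this is the rigidity that the integrality condition expresses.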
Moreover, just as \Cref{thm:kadison-carpenter-pythagorean} was expressed in terms of essential codimension in \Cref{thm:kadison-integer-essential-codimension}, the first author showed in \cite{Lor-2019-JOT} that it is possible to do the same for Arveson's theorem. What follows is a slight generalization and reinterpretation of Arveson's theorem in operator theoretic language. \begin{theorem}[\protect{\cite[Theorem~4.3]{Lor-2019-JOT}}] \label{thm:loreaux-normal-essential-codimension} Let $N$ be a normal operator with finite spectrum. If $N$ is diagonalizable by a unitary which is a Hilbert--Schmidt perturbation of the identity, then there is a diagonal operator $N'$ with $\ensuremath{\sigma}(N') \subseteq \ensuremath{\sigma}(N)$ for which $E(N-N')$ is trace-class. Moreover, for any such $N'$, \begin{equation} \label{eq:trace-in-K-spec-N} \trace\big(E(N-N')\big) = \sum_{\lambda \in \ensuremath{\sigma}(N)} [P_{\lambda}:Q_{\lambda}] \lambda, \end{equation} where $P_{\lambda},Q_{\lambda}$ are the spectral projections onto $\{\lambda\}$ of $N,N'$ respectively, and $P_{\lambda}-Q_{\lambda}$ is Hilbert--Schmidt for each $\lambda \in \ensuremath{\sigma}(N)$. \end{theorem} Even though the problem of characterizing the diagonals of a \emph{specific} normal operator seems intractable at this time, there has been progress in determining the diagonals of all normal operators in certain classes. For example, Horn originally proved \Cref{thm:schur-horn} primarily as a stepping stone to his main results \cite[Theorems~8--11]{Hor-1954-AJM}, which characterize those finite sequences that are diagonals of some rotation or some unitary matrix. \begin{theorem}[\protect{\cite{Hor-1954-AJM}}] \label{thm:horn-rotation} For a sequence $d \in \mathbb{R}^n$, the following are equivalent. \begin{enumerate} \item $d$ is the diagonal of a rotation matrix, i.e., an orthogonal matrix with determinant 1. 
\item $d \in \conv \{ \lambda \in \{-1,1\}^n \mid \lambda_1 \cdots \lambda_n = 1 \}.$ \end{enumerate} If, in addition, $d \ge 0$, then these are also equivalent to \begin{enumerate}[resume] \item $d \in \conv \{ \lambda \in \{0,1\}^n \mid \lambda_1 \cdots \lambda_n = 0 \}.$ \item \label{item:rotation-thompson} $d \in [0,1]^n$ and $2 \left(1 - \min_{1 \le i \le n} d_i \right) \le \sum_{i=1}^n (1 - d_i).$ \end{enumerate} \end{theorem} \begin{theorem}[\protect{\cite{Hor-1954-AJM}}] \label{thm:horn-unitary} A sequence $d \in \mathbb{C}^n$ (respectively $\mathbb{R}^n$) is the diagonal of an $n \times n$ unitary (respectively, orthogonal) matrix if and only if its sequence of absolute values satisfies any of the equivalent conditions of \Cref{thm:horn-rotation}. \end{theorem} Because the unitary (orthogonal) matrices are precisely the matrices whose sequence of singular values is the constant sequence with value 1, it is possible to recognize Theorems \ref{thm:horn-rotation} and \ref{thm:horn-unitary} as special cases of Thompson's Theorem (see \Cref{thm:thompson}) via the equivalence \Cref{thm:horn-rotation}\ref{item:rotation-thompson}. In \cite{JLW-2016-IUMJ}, the authors with J.~Jasper were able to extend Horn's theorem to the infinite dimensional setting. \begin{theorem}[\protect{\cite[Theorem~4.3]{JLW-2016-IUMJ}}] \label{thm:jasper-loreaux-weiss-unitary} A complex-valued sequence $d = (d_n)_{n=1}^{\infty}$ is a diagonal of some unitary operator $U$ if and only if its sequence of absolute values $\abs{d}$ takes values in $[0,1]$ and \begin{equation} \label{eq:unitary-thompson} 2 \left(1 - \inf_{n \in \mathbb{N}} \abs{d_n} \right) \le \sum_{n=1}^\infty (1 - \abs{d_n}). \end{equation} Moreover, if $d$ is real-valued then $U$ can be chosen to be orthogonal. \end{theorem} \section{General operators} \label{sec:miscellaneous} In this section we review results concerning diagonals of general operators with no assumptions of selfadjointness or normality. 
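As a numerical sanity check of condition \ref{item:rotation-thompson} of \Cref{thm:horn-rotation}, the following sketch, a toy instance of our own choosing, verifies both Thompson-type inequalities for an explicit $3 \times 3$ rotation, whose singular values are all equal to $1$.

```python
import math

# Rotation by angle t in the (1,2)-plane of R^3; its diagonal is
# (cos t, cos t, 1) and its singular values are (1, 1, 1).
t = 0.7
R = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]

s = [1.0, 1.0, 1.0]                                   # singular values
d = sorted((abs(R[i][i]) for i in range(3)), reverse=True)

# Majorization-type condition: partial sums of |d|^* vs. partial sums of s.
for k in range(1, 4):
    assert sum(d[:k]) <= sum(s[:k]) + 1e-12

# Second Thompson condition: 2(s_n - |d|_n^*) <= sum_i (s_i - |d|_i^*).
lhs = 2.0 * (s[-1] - d[-1])
rhs = sum(s[i] - d[i] for i in range(3))
assert lhs <= rhs + 1e-12
print(lhs == rhs)  # for this rotation both sides equal 2(1 - cos t)
```

The equality in the last step reflects that for a planar rotation the second condition is sharp, matching the equality case of condition \ref{item:rotation-thompson} when exactly one diagonal entry equals $1$.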
One of the early results along these lines is due to Thompson \cite{Tho-1977-SJAM} and, in dimension 2, independently to Sing \cite{Sin-1976-CMB}. It is a finite dimensional result which characterizes the diagonals of the collection of matrices with specified singular values. \begin{theorem}[\cite{Tho-1977-SJAM}] \label{thm:thompson} Let $0 \le s \in \mathbb{R}^n$ be a nonincreasing sequence and $d \in \mathbb{C}^n$ a complex-valued sequence. There is an $n \times n$ matrix $T$ with singular value sequence $s$ and diagonal $d$ if and only if for the monotone nonincreasing rearrangement $\abs{d}^{*}$ of the sequence of absolute values of $d$, \begin{equation*} \sum_{i=1}^k \abs{d}_i^{*} \leq \sum_{i=1}^k s_i \quad\text{for } 1 \le k \le n, \end{equation*} and \begin{equation*} 2(s_n - \abs{d}_n^{*}) \le \sum_{i=1}^n (s_i - \abs{d}_i^{*}). \end{equation*} Moreover, if $d$ is real-valued, we may choose the matrix $T$ to have real-valued entries. \end{theorem} In \cite{JLW-2016-IUMJ}, the authors with J.~Jasper were able to extend Thompson's Theorem to compact operators in the natural way. In particular, since both sequences $s$ and $d$ converge to zero for a compact operator $T$, it is natural to expect that the second condition in Thompson's theorem disappears entirely, and this is exactly the outcome. \begin{theorem}[\protect{\cite[Theorem~3.9]{JLW-2016-IUMJ}}] \label{thm:compact-thompson} If $s = (s_n)_{n=1}^{\infty}$ is a nonnegative nonincreasing sequence and $d = (d_n)_{n=1}^{\infty}$ is a complex-valued sequence, both converging to zero, then there is a compact operator $T$ with singular value sequence $s$ and diagonal $d$ if and only if, for the monotone nonincreasing rearrangement $\abs{d}^{*}$ of the sequence of absolute values of $d$, \begin{equation*} \sum_{i=1}^k \abs{d}_i^{*} \leq \sum_{i=1}^k s_i \quad\text{for } k \in \mathbb{N}. \end{equation*} Moreover, if $d$ is real-valued, we may choose the operator $T$ to have real-valued entries. 
\end{theorem} As described in \Cref{sec:general-selfadjoint}, the paper \cite{MT-2019-TAMS} of M\"uller and Tomilov focuses on Blaschke-type conditions as sufficient conditions for a sequence to be the diagonal of an operator. Not only are their results generally applicable to tuples of selfadjoint operators, they also describe several results for general single operators (or tuples). In fact, \Cref{thm:blaschke-muller-tomilov} for selfadjoint operators extends nearly verbatim to general operators, with the key difference being that in \Cref{thm:blaschke-muller-tomilov} the interior of the essential numerical range is taken with respect to $\mathbb{R}$, whereas in \Cref{thm:general-blaschke} it is taken with respect to $\mathbb{C}$. \begin{theorem}[\protect{\cite[Corollary 4.3 (simplified to a single operator)]{MT-2019-TAMS}}] \label{thm:general-blaschke} Let $T \in B(\ensuremath{\mathcal{H}})$ be any operator and suppose $(d_n)_{n=1}^{\infty}$ is a sequence which takes values in $\Int \ensuremath{W_{\text{e}}}(T)$ and satisfies \begin{equation} \label{eq:blaschke-condition-general} \sum_{n=1}^{\infty} \dist ( d_n , \mathbb{C} \setminus \ensuremath{W_{\text{e}}}(T) ) = \infty. \end{equation} Then $(d_n)_{n=1}^{\infty} \in \ensuremath{\mathcal{D}}(T)$. \end{theorem} M\"uller and Tomilov also manage to prove an approximation theorem for diagonals when the Blaschke condition is finite instead of infinite. \begin{theorem}[\protect{\cite[Corollary 5.1 (simplified to a single operator)]{MT-2019-TAMS}}] \label{thm:blascke-p-summable-approximation} Let $T \in B(\ensuremath{\mathcal{H}})$ and let $p > 1$. If $(d_n)_{n=1}^{\infty}$ is a complex-valued sequence satisfying \begin{equation} \label{eq:p-blaschke-summable} \sum_{n=1}^{\infty} \dist^p (d_n, \mathbb{C} \setminus \ensuremath{W_{\text{e}}}(T)) < \infty, \end{equation} then there is a compact operator $K$ in the Schatten ideal $\mathcal{C}_p$ such that $(d_n) \in \ensuremath{\mathcal{D}}(T+K)$. 
\end{theorem} In addition, it should be noted that the work of Herrero in \cite{Her-1991-RMJM} preceded and inspired \cite{MT-2019-TAMS}, and we have only refrained from mentioning the main theorem of Herrero because it is completely subsumed by \Cref{thm:general-blaschke} above. In turn, Herrero mentions that he views the start of the attention to diagonals of operators (presumably not just selfadjoint operators) as originating with the work of Fan \cite{Fan-1984-TAMS}, and then later Fan, Fong and Herrero \cite{FFH-1987-PAMS}. In \cite{Fan-1984-TAMS}, Fan proved an extension to infinite dimensions of the following finite dimensional result: any $n \times n$ matrix with trace zero has an orthonormal basis with respect to which the diagonal is identically zero. \begin{theorem}[\protect{\cite[Theorem~1]{Fan-1984-TAMS}}] \label{thm:fan-zero-diagonal} A necessary and sufficient condition that an operator $T$ has a zero-diagonal (i.e., a diagonal whose entries are all zero) is that there exists an orthonormal basis $\{e_j\}_{j=1}^{\infty}$ so that a subsequence of the partial sums \begin{equation*} \sum_{j=1}^n \angles{Te_j,e_j} \end{equation*} of the diagonal entries relative to this basis converges to zero. \end{theorem} This sparked Fan, Fong and Herrero in \cite{FFH-1987-PAMS} to determine the shape of the set of ``traces'' of an operator $T$. To explain this, suppose that $\{e_j\}_{j=1}^{\infty}$ is an orthonormal basis relative to which $\sum_{j=1}^n \angles{Te_j,e_j}$ converges to a sum $s$ as $n \to \infty$. Then $R \{\trace T\}$ denotes the set of all such $s$ as $\{e_j\}_{j=1}^{\infty}$ ranges over all orthonormal bases (for which the partial sums converge). Fan, Fong and Herrero managed to show that $R \{ \trace T\}$ can only have one of four possible shapes: empty, a point, a line or the entire plane $\mathbb{C}$. Moreover, they characterized when each situation occurs. 
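The finite dimensional statement quoted above, that a trace-zero matrix is unitarily equivalent to one with zero diagonal, can be checked in the smallest nontrivial case; the sketch below (our own minimal instance) conjugates $T = \diag(1,-1)$ by the rotation through $45^{\circ}$.

```python
import math

# T = diag(1, -1) has trace zero; conjugating by the 45-degree rotation U
# produces a matrix whose diagonal vanishes identically.
r = 1.0 / math.sqrt(2.0)
U = [[r, r], [r, -r]]
T = [[1.0, 0.0], [0.0, -1.0]]

def mul(A, B):
    """Naive 2x2 real matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ut = [[U[j][i] for j in range(2)] for i in range(2)]  # U transpose (U is real)
C = mul(mul(U, T), Ut)                                # C = U T U^T
print(C[0][0], C[1][1])  # both diagonal entries are exactly 0.0
```

In infinite dimensions no such finite computation suffices, which is exactly why Fan's criterion is phrased in terms of a subsequence of partial sums of the diagonal entries.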
\begin{theorem}[\protect{\cite[Theorem~4]{FFH-1987-PAMS}}] \label{thm:fan-fong-herrero-trace-shape} For any operator $T \in B(\ensuremath{\mathcal{H}})$, the set $R \{ \trace T\}$ of traces of $T$ is: \begin{enumerate} \item empty if and only if for some $\theta$, $\Re(e^{i\theta} T)_+$ is not trace-class, but $\Re(e^{i\theta} T)_-$ \emph{is} trace-class; \item a point if and only if $T$ is trace-class; \item a line if and only if for some $\theta$, $\Re(e^{i\theta} T)$ is trace-class, but neither of $\Im(e^{i\theta} T)_{\pm}$ is trace-class; \item the entire plane $\mathbb{C}$ if and only if for all $\theta$, $\Re(e^{i\theta} T)_+$ is not trace-class. \end{enumerate} \end{theorem} \bibliographystyle{amsalpha}
https://arxiv.org/abs/0803.3846
Combinatorics of binomial primary decomposition
An explicit lattice point realization is provided for the primary components of an arbitrary binomial ideal in characteristic zero. This decomposition is derived from a characteristic-free combinatorial description of certain primary components of binomial ideals in affine semigroup rings, namely those that are associated to faces of the semigroup. These results are intimately connected to hypergeometric differential equations in several variables.
\section{Introduction} A binomial is a polynomial with at most two terms; a binomial ideal is an ideal generated by binomials. Binomial ideals abound as the defining ideals of classical algebraic varieties, particularly because equivariantly embedded affine or projective toric varieties correspond to prime binomial ideals. In fact, the zero set of any binomial ideal is a union of (translated) toric varieties. Thus, binomial ideals are ``easy'' in geometric terms, and one may hope that their algebra is simple as well. This is indeed the case: the associated primes of a binomial ideal are essentially toric ideals, and their corresponding primary components can be chosen binomial as well. These results, due to Eisenbud and Sturmfels \cite{binomialideals}, are concrete when it comes to specifying associated primes, but less so when it comes to primary components themselves, in part because of difficulty in identifying the monomials therein. The main goal of this article is to provide explicit lattice-point combinatorial realizations of the primary components of an arbitrary binomial ideal in a polynomial ring over an algebraically closed field of characteristic zero; this is achieved in Theorems~\ref{t:components} and~\ref{t:toral}. These are proved by way of our other core result, Theorem~\ref{t:zerocomp}, which combinatorially characterizes, in the setting of an affine semigroup ring over an arbitrary field (not required to be algebraically closed or of characteristic zero), primary binomial ideals whose associated prime comes from a face of the semigroup. The hypotheses on the field~$\kk$ are forced upon us, when they occur. Consider the univariate case: in the polynomial ring $\kk[x]$, primary decomposition is equivalent to factorization of polynomials. Factorization into binomials in this setting is the fundamental theorem of algebra, which requires $\kk$ to be algebraically closed. 
On the other hand, $\kk$ must have characteristic zero because of the slightly different behavior of binomial primary decomposition in positive characteristic \cite{binomialideals}: in characteristic zero, every primary binomial ideal contains all of the non-monomial (i.e., two-term binomial) generators of its associated prime, but this is false in positive characteristic. The motivation and inspiration for this work came from the theory of hypergeometric differential equations, and the results here are used heavily in the companion article \cite{dmm} (see Section~\ref{s:applications} for an overview of these applications). In fact, these two projects began with a conjectural expression for the non-fully-supported solutions of a Horn hypergeometric system; its proof reduced quickly to the statement of Corollary~\ref{c:I(B)}, which directed all of the developments here. Our consequent use of $M$-subgraphs (Definition~\ref{d:M}), and more generally the application of commutative monoid congruences toward the primary decomposition of binomial ideals, serves as an advertisement for hypergeometric intuition as inspiration for developments of independent interest in combinatorics and commutative algebra. The explicit lattice-point binomial primary decompositions in Sections~\ref{s:primary} and~\ref{s:toral} have potential applications beyond hypergeometric systems. Consider the special case of monomial ideals: certain constructions at the interface between commutative algebra and algebraic geometry, such as integral closure and multiplier ideals, admit concrete convex polyhedral descriptions. The path is now open to attempt analogous constructions for binomial ideals. \subsection*{Overview of results} The central combinatorial idea behind binomial primary decomposition is elementary and concrete, so we describe it geometrically here. The notation below is meant to be intuitive, but in any case it coincides with the notation formally defined later. 
Fix an arbitrary binomial ideal $I$ in a polynomial ring, or more generally in any affine semigroup ring $\kk[Q]$. Then $I$ determines an equivalence relation (\emph{congruence}) on~$\kk[Q]$ in which two elements $u,v \in Q$ are congruent, written $u \sim v$, if $\ttt^u - \lambda\ttt^v \in I$ for some $\lambda \neq 0$. If one makes a graph with vertex set $Q$ by drawing an edge from $u$ to $v$ when $u \sim v$, then the congruence classes of $\sim$ are the connected components of this graph. For example, for an integer matrix $A$ with $n$ columns, the \emph{toric ideal}\/ \[ I_A = \<\ttt^u - \ttt^v \mid u,v \in \NN^n \text{ and } Au = Av\> \subseteq \CC[\del_1,\ldots,\del_n] = \CC[\ttt] = \CC[\NN^n] \] determines the congruence in which the class of $u \in \NN^n$ consists of the set $(u + \ker(A)) \cap \NN^n$ of lattice points in the polyhedron $ \{\alpha \in \RR^n \mid A\alpha = Au \text{ and } \alpha \geq 0\}. $ The set of congruence classes for $I_A$ can be thought of as a periodically spaced archipelago whose islands have roughly similar shape (but steadily grow in size as the class moves interior to~$\NN^n$). With this picture in mind, the congruence classes for any binomial ideal~$I$ come in a finite family of such archipelagos, but instead of each island coming with a shape predictable from its archipelago, its boundary becomes fragmented into a number of surrounding skerries. The extent to which a binomial ideal deviates from primality is measured by which bridges must be built---and in which directions---to join the islands to their skerries. When $Q = \NN^n$ is an orthant as in the previous example, each prime binomial ideal in $\CC[Q] = \CC[\NN^n]$ equals the sum $\pp + \mm_J$ of a prime binomial ideal~$\pp$ containing no monomials and a prime monomial ideal~$\mm_J$ generated by the variables whose indices lie outside of a subset $J \hspace{-.5pt}\subseteq\hspace{-.5pt} \{1,\ldots,n\}$. 
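In the smallest interesting instance of this picture (our own toy computation, with the $1 \times 2$ matrix $A = (1\;\, 2)$), the congruence classes determined by $I_A$ in $\NN^2$ are simply the fibers of $A$: the sets of lattice points on the segments $\{\alpha \geq 0 \mid \alpha_1 + 2\alpha_2 = c\}$.

```python
from collections import defaultdict
from itertools import product

# Congruence classes of the toric ideal I_A for A = (1 2):
# u ~ v in N^2 iff u1 + 2*u2 = v1 + 2*v2.
A = (1, 2)
classes = defaultdict(list)
for u in product(range(7), repeat=2):      # lattice points in a finite box
    classes[A[0] * u[0] + A[1] * u[1]].append(u)

# The class of any u with Au = 4 is the set of lattice points on a segment:
print(sorted(classes[4]))  # [(0, 2), (2, 1), (4, 0)]
```

Replacing $I_A$ by a general binomial ideal fragments these segments into the smaller classes (the ``skerries'' of the preceding paragraph), and the primary components are recovered by regrouping those fragments along a sublattice.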
The ideal $\pp$ is a toric ideal after rescaling the variables, so its congruence classes are parallel to a sublattice $L = L_\pp \subseteq \ZZ^J$; in the notation above, $L = \ker(A)$. Now suppose that $\pp + \mm_J$ is associated to our binomial ideal $I$. Joining the aforementioned skerries to their islands is accomplished by considering congruences defined by $I + \pp$: a bridge is built from $u$ to~$v$ whenever $u - v \in L$. To be more accurate, just as $I + \pp$ determines a congruence on~$\NN^n$, it determines one---after inverting the variables outside of~$\mm_J$---on $\ZZ^J \times \NN^\oJ$, where $\ZZ^J = \mathrm{span}_\ZZ \{e_j \mid j \in J\}$, and $\NN^{\oJ}$ is defined analogously for the index subset~$\oJ$ complementary to~$J$. Each resulting class in $\ZZ^J \times \NN^\oJ$ is acted upon by~$L$ and hence is a union of cosets of~$L$. The key observation is that, when $\pp + \mm_J$ is an associated prime of~$I$, some of these classes consist of finitely many cosets of~$L$; let us call these \emph{$L$-bounded} classes. The presence of $L$-bounded classes signals that $L$ is ``sufficiently parallel'' to the congruence determined by~$I$, and this is how we visualize the manner in which $\pp + \mm_J$ is associated to~$I$. Intersecting the $L$-bounded $\ZZ^J \times \NN^\oJ$ congruence classes with $\NN^n$ yields \emph{$L$-bounded} classes in~$\NN^n$; again, these are constructed more or less by building bridges in directions from~$L$ to join the classes defined by~$I$. When the prime $\pp + \mm_J$ is minimal over~$I$, there are only finitely many $L$-bounded classes in $\NN^n$. In this case, the primary component $C_{\pp+\mm_J}$ of~$I$ is well-defined, as reflected in its combinatorics: the congruence defined on~$\NN^n$ by $C_{\pp+\mm_J}$ has one huge class consisting of the lattice points in~$\NN^n$ lying in no $L$-bounded class, and each of its remaining classes is $L$-bounded in~$\NN^n$; this is the content of Theorem~\ref{t:components}.1. 
The only difference for a nonminimal associated prime of~$I$ is that the huge class is inflated by swal\-lowing all but a sufficiently large finite number of the $L$-bounded classes; this is the content of the remaining parts of Theorem~\ref{t:components}. Here, ``sufficiently large'' means that every swallowed $L$-bounded class contains a lattice point~$u$ with $\ttt^u$ lying \mbox{in a fixed high power of~$\mm_J$}. In applications, binomial ideals often arise in the presence of multigradings. (One reason for this is that binomial structures are closely related to algebraic tori, whose actions on varieties induce multigradings on coordinate rings.) In this context, the matrix~$A$ above induces the grading: monomials $\ttt^u$ and $\ttt^v$ have equal degree if and only if $Au = Av$. Theorem~\ref{t:toral} expounds on the observation that if $L = \ker(A) \cap \ZZ^J$, then a congruence class for $I + \pp$ in $\ZZ^J \times \NN^\oJ$ is $L$-bounded if and only if its image in $\NN^\oJ$ is finite. This simplifies the description of the primary components because, to describe the set of monomials in a primary component, it suffices to refer to lattice point geometry in $\NN^\oJ$, without mentioning $\ZZ^J \times \NN^\oJ$. When it comes to proofs, the crucial insight is that the geometry of $L$-bounded classes for the congruence determined by $I + \pp$ gives rise to simpler algebra when $\ZZ^J \times \NN^\oJ$ is reduced modulo the action of~$L$. Equivalently, instead of considering the associated prime $\pp + \mm_J$ of an arbitrary binomial ideal~$I$ in~$\CC[t_1,\ldots,t_n]$, consider the prime image of the monomial ideal $\mm_J$ associated to the image of~$I$ in $\CC[Q] = \CC[\ttt]/\pp$, where $Q = \NN^n/L$. Since monomial primes in an affine semigroup ring $\CC[Q]$ correspond to faces of~$Q$, the lattice point geometry is more obvious in this setting, and the algebra is sufficiently uncomplicated that it works over an arbitrary field in place of~$\CC$. 
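As a toy illustration of this passage to an affine semigroup ring (an example of ours, not drawn from the formal development):

```latex
\begin{example}
Let $n = 2$ and $L = \ZZ(1,-1) \subseteq \ZZ^2$, so that
$I_L = \<\del_1 - \del_2\>$ and $Q = \NN^2/L \cong \NN$, with both
generators of~$\NN^2$ mapping to~$1$.  Then
$\CC[\ttt]/I_L \cong \CC[Q] = \CC[\NN]$, the faces of~$Q$ are $\{0\}$
and $Q$ itself, and the monomial prime $\pp_{\{0\}} \subseteq \CC[Q]$
is the image of $\<\del_1,\del_2\>$.  The congruence class of
$u \in \NN^2$ under $\sim_{I_L}$ is the finite diagonal
$\{v \in \NN^2 \mid v_1 + v_2 = u_1 + u_2\}$, visibly the
intersection of a single coset of~$L$ with~$\NN^2$.
\end{example}
```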
If $\Phi$ is a face of an arbitrary affine semigroup~$Q$ whose corresponding prime $\pp_\Phi$ is minimal over a binomial ideal~$I$ in~$\kk[Q]$, then $I$ determines a congruence on the semigroup $Q + \ZZ\Phi$ obtained from $Q$ by allowing negatives for~$\Phi$. The main result in this context, Theorem~\ref{t:zerocomp}, says that the monomials $\ttt^u$ in the $\pp_\Phi$-primary component of~$I$ are precisely those corresponding to lattice points $u \in Q$ not lying in any finite congruence class of $Q + \ZZ\Phi$. This, in turn, is proved by translating lattice point geometry and combinatorics into semigroup-graded commutative algebra in Proposition~\ref{p:graded}. \subsection*{Acknowledgments} This project benefited greatly from visits by its various authors to the University of Pennsylvania, Texas A\&M University, the Institute for Mathematics and its Applications (IMA) in Minneapolis, the University of Minnesota, and the Centre International de Rencontres Math\'ematiques in Luminy (CIRM). We thank these institutions for their gracious hospitality. \section{Binomial ideals in affine semigroup rings}\label{s:semigroup} Our eventual goal is to analyze the primary components of binomial ideals in polynomial rings over the complex numbers~$\CC$ or any algebraically closed field of characteristic zero. Our principal result along these lines (Theorem~\ref{t:components}) is little more than a rephrasing of a statement (Theorem~\ref{t:zerocomp}) about binomial ideals in arbitrary affine semigroup rings in which the associated prime comes from a face, combined with results of Eisenbud and Sturmfels \cite{binomialideals}. The developments here stem from the observation that quotients by binomial ideals are naturally graded by noetherian commutative monoids. Our source for such monoids is Gilmer's excellent book \cite{gilmer}.
For the special case of affine semigroups, by which we mean finitely generated submonoids of free abelian groups, see \cite[Chapter~7]{cca}. We work in this section over an arbitrary field~$\kk$, so it might be neither algebraically closed nor of \mbox{characteristic~zero}. \begin{defn}\label{d:congruence} A \emph{congruence} on a commutative monoid~$Q$ is an equivalence relation $\sim$ with \[ u \,\sim\, v\ \implies\ u\!+\!w \,\sim\, v\!+\!w \quad\text{for all } w \in Q. \] The \emph{quotient monoid} $Q/$$\sim$ is the set of equivalence classes under addition. \end{defn} \begin{defn}\label{d:IQL} The \emph{semigroup algebra} $\kk[Q]$ is the direct sum $\bigoplus_{u \in Q} \kk \cdot \ttt^u$, with multiplication $\ttt^u \ttt^v = \ttt^{u+v}$. Any congruence $\sim$ on~$Q$ induces a $(Q/$$\sim)$-grading on $\kk[Q]$ in which the \emph{monomial}\/ $\ttt^u$ has degree~$\Gamma \in Q/$$\sim$ whenever $u \in \Gamma$. A \emph{binomial ideal}\/ $I \subseteq \kk[Q]$ is an ideal generated by \emph{binomials} $\ttt^u - \lambda\ttt^v$, where $\lambda \in \kk$ is a scalar, possibly equal to zero. \end{defn} \begin{example}\label{ex:Q=NN} A \emph{pure difference}\/ binomial ideal is generated by differences of monic monomials. Given an integer matrix $M$ with $q$~rows, we call $I(M) \subseteq \kk[\del_1,\ldots,\del_q] = \kk[\NN^q]$ the pure difference binomial ideal \begin{align}\label{eq:IM} I(M) &= \<\ttt^u - \ttt^v \mid u - v \text{ is a column of } M, \, u, v \in \NN^q\> \\\nonumber &= \<\ttt^{w_+}-\ttt^{w_-} \mid w = w_+ - w_- \text{ is a column of }M\>. \end{align} Here and in the remainder of this article we adopt the convention that, for an integer vector $w \in \ZZ^q$, the vector $w_+$ has $i^{\rm th}$ coordinate $w_i$ if $w_i\geq 0$ and $0$ otherwise. The vector $w_- \in \NN^q$ is defined by $w_+ - w_- = w$, or equivalently, $w_- = (-w)_+$. If the columns of $M$ are linearly independent, the ideal $I(M)$ is called a \emph{lattice basis ideal} (cf.\ Example~\ref{ex:I(B)}). 
An ideal of $\kk[\del_1,\ldots,\del_q]$ has the form described in~(\ref{eq:IM}) if and only if it is generated by differences of monomials with disjoint~support. The equality of the two definitions in~(\ref{eq:IM}) is easy to see: the ideal in the first line of the display contains the ideal in the second line by definition; conversely, the disjointness of the supports of $w_+$ and~$w_-$ implies that whenever $u - v = w$ is a column of~$M$, the vector $\alpha := u - w_+ = v - w_-$ lies in~$\NN^q$, so the corresponding generator $\ttt^u - \ttt^v = \ttt^\alpha (\ttt^{w_+} - \ttt^{w_-})$ of the first ideal lies in the second ideal. \end{example} \begin{prop}\label{p:sim} A binomial ideal $I \subseteq \kk[Q]$ determines a congruence $\sim_I$ under which \[ u \sim_I v \text{\rm\ if } \ttt^u - \lambda\ttt^v \in I \text{\rm\ for some scalar } \lambda \neq 0. \] The ideal $I$ is graded by the quotient monoid $Q_I = Q/$$\sim_I$, and $\kk[Q]/I$ has $Q_I$-graded Hil\-bert function~$1$ on every congruence class except the class $\{u \in Q \mid \ttt^u \in I\}$ of monomials. \end{prop} \begin{proof} The relation $\sim_I$ is an equivalence relation because $\ttt^u - \lambda\ttt^v \in I$ and $\ttt^v - \lambda'\ttt^w \in I$ imply $\ttt^u - \lambda\lambda'\ttt^w \in I$. It is a congruence because $\ttt^u - \lambda\ttt^v \in I$ implies that $\ttt^{u+w} - \lambda\ttt^{v+w} \in I$. The rest is similarly straightforward. \end{proof} \begin{example} In the case of a pure difference binomial ideal $I(M)$ as in Example~\ref{ex:Q=NN}, the congruence classes under~$\sim_{I(M)}$ from Proposition~\ref{p:sim} are the $M$-subgraphs in the following definition, which---aside from being a good way to visualize congruence classes---will be useful later on (see Example~\ref{ex:UM} and Corollary~\ref{c:I(B)}, as well as Section~\ref{s:applications}).
\end{example} \begin{defn}\label{d:M} Any integer matrix $M$ with $q$ rows defines an undirected graph~$\Gamma(M)$ having vertex set $\NN^q$ and an edge from $u$ to~$v$ if $u - v$ or $v - u$ is a column of~$M$. An \emph{$M$-path}\/ from $u$ to~$v$ is a path in~$\Gamma(M)$ from $u$ to~$v$. A subset of~$\NN^q$ is \emph{$M$-connected}\/ if every pair of vertices therein is joined by an $M$-path passing only through vertices in the subset. An \emph{$M$-subgraph}\/ of~$\NN^q$ is a maximal $M$-connected subset of~$\NN^q$ (a connected component of~$\Gamma(M)$). An $M$-subgraph is \emph{bounded}\/ if it has finitely many vertices, and \emph{unbounded}\/ otherwise. (See Example~\ref{ex:concrete-subgraph} for a concrete computation and an illustrative figure.) \end{defn} These $M$-subgraphs bear a marked resemblance to the concept of \emph{fiber}\/ in \cite[Chapter~4]{gb&cp}. The interested reader will note, however, that even if these two notions have the same flavor, their definitions have mutually exclusive assumptions, since for a square matrix~$M$, the corresponding matrix~$A$ in \cite{gb&cp} is empty. Given a face~$\Phi$ of an affine semigroup $Q \subseteq \ZZ^\ell$, the \emph{localization} of~$Q$ along~$\Phi$ is the affine semigroup $Q + \ZZ\Phi$ obtained from~$Q$ by adjoining negatives of the elements in~$\Phi$. The algebraic version of this notion is a common tool for affine semigroup rings \cite[Chapter~7]{cca}: for each $\kk[Q]$-module $V$, let $V[\ZZ\Phi]$ denote its \emph{homogeneous localization}\/ along~$\Phi$, obtained by inverting $\ttt^\phi$ for all $\phi \in \Phi$. For example, $\kk[Q][\ZZ\Phi] \cong \kk[Q+\ZZ\Phi]$. 
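Returning briefly to Definition~\ref{d:M}: the $M$-subgraphs are simply the connected components of $\Gamma(M)$, so they can be explored by breadth-first search. The following sketch is our illustration only (the helper name \texttt{m\_subgraph} is ours; it restricts the search to a finite box, so a class meeting the box boundary may be the truncation of an unbounded $M$-subgraph).

```python
from collections import deque

def m_subgraph(M, start, box):
    """Breadth-first search of the graph Gamma(M) on N^q, restricted
    to the finite box {0,...,box-1}^q.  M is a list of columns
    (integer tuples); edges join u and v when u - v is plus or minus
    a column of M.  A component touching the box boundary may be the
    truncation of an unbounded M-subgraph."""
    q = len(start)
    moves = list(M) + [tuple(-x for x in c) for c in M]
    seen = {tuple(start)}
    queue = deque([tuple(start)])
    while queue:
        u = queue.popleft()
        for m in moves:
            v = tuple(u[i] + m[i] for i in range(q))
            if all(0 <= x < box for x in v) and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# Single column w = (1,-2), so I(M) = <d1 - d2^2> is toric and every
# M-subgraph is finite: the subgraph of (0,2) is {(0,2), (1,0)}.
assert m_subgraph([(1, -2)], (0, 2), box=10) == {(0, 2), (1, 0)}
```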
Writing \[ \pp_\Phi = {\rm span}_{\kk} \{ \ttt^u \mid u \in Q \minus \Phi \} \subseteq \kk[Q] \] for the prime ideal of the face~$\Phi$, so that $\kk[Q]/\pp_\Phi = \kk[\Phi]$ is the affine semigroup ring for~$\Phi$, we find, as a consequence, that $\pp_\Phi[\ZZ\Phi] = \pp_{\ZZ\Phi} \subseteq \kk[Q+\ZZ\Phi]$, because \[ \kk[Q+\ZZ\Phi]/\pp_\Phi[\ZZ\Phi] = (\kk[Q]/\pp_\Phi)[\ZZ\Phi]= \kk[\Phi][\ZZ\Phi] = \kk[\ZZ\Phi]. \] (We write equality signs to denote canonical isomorphisms.) For any ideal $I \subseteq \kk[Q]$, the localization $I[\ZZ\Phi]$ equals the extension $I\, \kk[Q+\ZZ\Phi]$ of~$I$ to $\kk[Q+\ZZ\Phi]$, and we write \begin{equation}\label{eq:I:Phi} (I:\ttt^\Phi) = I[\ZZ\Phi] \cap \kk[Q], \end{equation} the intersection taking place in $\kk[Q+\ZZ\Phi]$. Equivalently, $(I:\ttt^\Phi)$ is the usual colon ideal $(I:\ttt^\phi)$ for any element $\phi$ sufficiently interior to~$\Phi$ (for example, take $\phi$ to be a high multiple of the sum of the generators of~$\Phi$); in particular, $(I:\ttt^\Phi)$ is a binomial ideal when $I$ is. For the purpose of investigating $\pp_\Phi$-primary components, the ideal $(I:\ttt^\Phi)$ is as good as~$I$ itself: this colon operation does not affect such components, and better still, the natural map from $\kk[Q]/(I:\ttt^\Phi)$ to its homogeneous localization along~$\Phi$ is injective. Combinatorially, what this means is the following.
\begin{lemma} A subset $\Gamma' \subseteq Q$ is a congruence class in $Q_{(I:\ttt^\Phi)}$ determined by $(I:\ttt^\Phi)$ if and only if\/ $\Gamma' = \Gamma \cap Q$ for some class $\Gamma \subseteq Q+\ZZ\Phi$ under the congruence~$\sim_{I[\ZZ\Phi]}$.\qed \end{lemma} \begin{lemma}\label{l:distinct} If a congruence class $\Gamma \subseteq Q+\ZZ\Phi$ under $\sim_{I[\ZZ\Phi]}$ has two distinct elements whose difference lies in~$Q+\ZZ\Phi$, then for all $u \in \Gamma$ the monomial $\ttt^u$ maps to~$0$ in the (usual inhomogeneous) localization $(\kk[Q]/I)_{\pp_\Phi}$ inverting all elements not in~$\pp_\Phi$. \end{lemma} \begin{proof} Suppose $v \not=w \in \Gamma$ with $w-v \in Q+\ZZ\Phi$. The images in~$\kk[Q]/I$ of the monomials $\ttt^u$ for $u \in \Gamma$ are nonzero scalar multiples of each other, so it is enough to show that $\ttt^v$ maps to zero in~$(\kk[Q]/I)_{\pp_\Phi}$. Since $w-v \in Q + \ZZ\Phi$, we have $\ttt^{w-v} \in \kk[Q+\ZZ\Phi]$. Therefore $1-\lambda\ttt^{w-v}$ lies outside of~$\pp_{\ZZ\Phi}$ for all $\lambda \in \kk$, because its image in~$\kk[\ZZ\Phi] = \kk[Q+ \ZZ \Phi]/\pp_{\ZZ\Phi}$ is either $1-\lambda\ttt^{w-v}$ or~$1$, according to whether or not $w-v \in \ZZ\Phi$. (The assumption $v \neq w$ was used here: if $v=w$, then for $\lambda=1$, we have $1-\lambda \ttt^{w-v} = 0 $.) Hence $1-\lambda\ttt^{w-v}$ maps to a unit in~$(\kk[Q]/I)_{\pp_\Phi}$. It follows that $\ttt^v$ maps to~$0$, since $(1-\lambda_{vw}\ttt^{w-v})\ttt^v = \ttt^v-\lambda_{vw}\ttt^w$ maps to~$0$ in~$\kk[Q]/I$ whenever $\ttt^v - \lambda_{vw} \ttt^w \in I$. \end{proof} \begin{lemma}\label{l:unbounded} A congruence class $\Gamma \subseteq Q+\ZZ\Phi$ under $\sim_{I[\ZZ\Phi]}$ is infinite if and only if it contains two distinct elements whose difference lies in\/~$Q+\ZZ\Phi$. \end{lemma} \begin{proof} Let $\Gamma \subseteq Q+\ZZ\Phi$ be a congruence class. If $v,w \in \Gamma$ and $v-w \in Q+\ZZ\Phi$, then $w + \epsilon (v-w) \in \Gamma$ for all positive $\epsilon \in \ZZ$. 
On the other hand, assume $\Gamma$ is infinite. There are two possibilities: either there are $v,w \in \Gamma$ with $v-w \in \ZZ\Phi$, or not. If so, then we are done, so assume not. Let $\ZZ^q$ be the quotient of $\ZZ^\ell/\ZZ\Phi$ modulo its torsion subgroup. (Here $\ZZ^\ell$ is the ambient lattice of $Q$.) The projection $\ZZ^\ell \to \ZZ^q$ induces a map from $\Gamma$ to its image~$\ol\Gamma$ that is finite-to-one. More precisely, if $\Gamma'$ is the intersection of~$\Gamma$ with a coset of $\ZZ\Phi$ in~$\ker(\ZZ^\ell \to \ZZ^q)$, then $\Gamma'$ maps bijectively to its image~$\ol\Gamma{}'$. There are only finitely many cosets, so some~$\Gamma'$ must be infinite, along with~$\ol\Gamma{}'$. But $\ol\Gamma{}'$ is a subset of the affine semigroup~$\ol{Q/\Phi}$, defined as the image of $Q + \ZZ\Phi$ in~$\ZZ^q$. As $\ol{Q/\Phi}$ has unit group zero, every infinite subset contains two points whose difference lies in~$\ol{Q/\Phi}$, and the corresponding lifts of these to~$\Gamma'$ have their difference in $Q + \ZZ\Phi$. \end{proof} \begin{defn}\label{d:U} Fix a face $\Phi$ of an affine semigroup~$Q$. A subset $S \subseteq Q$ is an \emph{ideal}\/ if $Q+S \subseteq S$, and in that case we write $\kk\{S\} = \<\ttt^u \mid u \in S\> = {\rm span}_\kk\{\ttt^u \mid u \in S\}$ for the monomial ideal in~$\kk[Q]$ having $S$ as its $\kk$-basis. An ideal~$S$ is \emph{$\ZZ\Phi$-closed}\/ if $S = Q \cap (S + \ZZ\Phi)$. If~$\sim$ is a congruence on $Q+\ZZ\Phi$, then the \emph{unbounded ideal}\/ $U \subseteq Q$ is the ($\ZZ\Phi$-closed) ideal of elements $u \in Q$ with infinite congruence class under~$\sim$ in $Q + \ZZ\Phi$. Finally, write $\cB(Q+\ZZ\Phi)$ for the set of bounded (i.e.\ finite) congruence classes of $Q+\ZZ\Phi$ under $\sim$. \end{defn} \begin{example}\label{ex:UM} Let $M$ be as in Definition~\ref{d:M} and consider the congruence $\sim_{I(M)}$ on $Q = \NN^q$. 
If $\Phi = \{0\}$, then the unbounded ideal $U \subseteq \NN^q$ is the union of the unbounded $M$-subgraphs of~$\NN^q$, while $\cB(\NN^q)$ is the union of the bounded $M$-subgraphs. \end{example} \begin{prop}\label{p:graded} Fix a face $\Phi$ of an affine semigroup~$Q$, a binomial ideal $I \subseteq \kk[Q]$, and a $\ZZ\Phi$-closed ideal $S \subseteq Q$ containing $U$ under the congruence $\sim_{I[\ZZ\Phi]}$. Write $\cB = \cB(Q+\ZZ\Phi)$ for the bounded classes, $J$ for the binomial ideal $(I:\ttt^\Phi) + \kk\{S\}$, and $\ol Q = (Q+\ZZ\Phi)_{I[\ZZ\Phi]}$. \begin{numbered} \item $\kk[Q]/J$ is graded by $\ol Q$, and its set of nonzero degrees is contained in~$\cB$. \item The group $\ZZ\Phi \subseteq \ol Q$ acts freely on~$\cB$, and the $\kk[\Phi]$-submodule $(\kk[Q]/J)_T \subseteq \kk[Q]/J$ in degrees from any orbit $T \subseteq \cB$ is~$0$ or finitely generated and torsion-free of rank~$1$. \item The quotient $Q_J/\Phi$ of the monoid $(Q+\ZZ\Phi)_{J[\ZZ\Phi]}$ by its subgroup $\ZZ\Phi$ is a partially ordered set if we define $\zeta \preceq \eta$ whenever $\zeta + \xi = \eta\,$ for some $\xi \in Q_J/\Phi$. \item $\kk[Q]/J$ is filtered by $\ol Q$-graded\/ $\kk[Q]$-submodules with associated graded module \[ {\rm gr}(\kk[Q]/J) = \bigoplus_{T \in \cB/\Phi} (\kk[Q]/J)_T, \quad \text{where } \cB/\Phi = \{\ZZ\Phi\text{\rm -orbits }T \subseteq \cB\}, \] the canonical isomorphism being as $\cB$-graded $\kk[\Phi]$-modules, although the left-hand side is naturally a $\kk[Q]$-module annihilated by~$\pp_\Phi$. \item If\/ $(\kk[Q]/J)_T \neq 0$ for only finitely many orbits $T \in \cB/\Phi$, then $J$ is a $\pp_\Phi$-primary ideal. \end{numbered} \end{prop} \begin{proof} The quotient $\kk[Q]/(I:\ttt^\Phi)$ is automatically $\ol Q$-graded by Proposition~\ref{p:sim} applied to $Q+\ZZ\Phi$ and $I[\ZZ\Phi]$, given~(\ref{eq:I:Phi}). The further quotient by~$\kk\{S\}$ is graded by $\cB$ because~\mbox{$S \supseteq U$}. 
$\ZZ\Phi$ acts freely on~$\cB$ by Lemmas~\ref{l:distinct} and~\ref{l:unbounded}: if $\phi \in \ZZ\Phi$ and $\Gamma$ is a bounded congruence class, then the translate $\phi + \Gamma$ is, as well; and if $\phi \neq 0$ then $\phi + \Gamma \neq \Gamma$, because each coset of~$\ZZ\Phi$ intersects $\Gamma$ at most once. Combined with the $\ZZ\Phi$-closedness of~$S$, this shows that $\kk[Q]/J$ is a $\kk[\Phi]$-submodule of the free $\kk[\ZZ\Phi]$-module whose basis consists of the $\ZZ\Phi$-orbits $T \subseteq \cB$. Hence $(\kk[Q]/J)_T$ is torsion-free (it might be zero, of course, if $S$ happens to contain all of the monomials corresponding to congruence classes of~$Q$ arising from $\sim_{I[\ZZ\Phi]}$ classes in~$T$). For item~2, it remains to show that $(\kk[Q]/J)_T$ is finitely generated. Let \mbox{$\TT = \bigcup_{\Gamma \in T} \Gamma \cap Q$}. By construction, $\TT$ is the (finite) union of the intersections $Q \cap (\gamma + \ZZ\Phi)$ of $Q$ with cosets of~$\ZZ\Phi$ in~$\ZZ^\ell$ for $\gamma$ in any fixed $\Gamma \in T$. Such an intersection is a finitely generated $\Phi$-set (a set closed under addition by~$\Phi$) by \cite[Eq.~(1) and Lemma~2.2]{irredres} or \cite[Theorem~11.13]{cca}, where the $\kk$-vector space it spans is identified as the set of monomials annihilated by~$\kk[\Phi]$ modulo an irreducible monomial ideal of~$\kk[Q]$. The images in $\kk[Q]/J$ of the monomials corresponding to any generators for these $\Phi$-sets generate $(\kk[Q]/J)_T$. The point of item~3 is that the monoid $Q_J/\Phi$ acts sufficiently like an affine semigroup whose only unit is the trivial one. To prove it, observe that $Q_J/\Phi$ consists, by item~1, of the (possibly empty set of) orbits $T \in \cB$ such that $(\kk[Q]/J)_T \neq 0$ plus one congruence class $\ol S$ for the monomials in~$J$ (if there are any). The proposed partial order has $T \prec \ol S$ for all orbits $T \in Q_J/\Phi$, and also $T \prec T+v$ if and only if $v \in (Q+\ZZ\Phi) \minus \ZZ\Phi$. 
This relation~$\prec$ a~priori defines a directed graph with vertex set~$Q_J/\Phi$, and we need it to have no directed cycles. The terminal nature of~$\ol S$ implies that no cycle can contain~$\ol S$, so suppose that $T = T+v$. Pick any $u$ lying in a congruence class from the orbit~$T$; then for some $\phi \in \ZZ\Phi$, the translate $u + \phi$ lies in the same congruence class under $\sim_{I[\ZZ\Phi]}$ as $u + v$. Lemma~\ref{l:unbounded} implies that $v-\phi$, and hence $v$ itself, does not lie in $Q+\ZZ\Phi$. For item~4, it suffices to find a total order $T_0,T_1,T_2,\ldots$ on $\cB/\Phi$ such that \mbox{$\bigoplus_{j \geq k} (\kk[Q]/J)_{T_j}$} is a $\kk[Q]$-submodule for all $k \in \NN$. Use the partial order of $\cB/\Phi$ via its inclusion in the monoid $Q_J/\Phi$ in item~3 for $S = U$. Any well-order refining this partial order will do. Item~5 follows from items~2 and~4 because the associated primes of ${\rm gr}(\kk[Q]/J)$ contain every associated prime of~$J$ for any finite filtration of $\kk[Q]/J$ by $\kk[Q]$-submodules. \end{proof} For connections with toral modules (Definition~\ref{d:toral}), we record the following. \begin{cor}\label{c:graded} Fix notation as in Proposition~\ref{p:graded}. If $I$ is homogeneous for a grading of\/~$\kk[Q]$ by a group~$\cA$ via a monoid morphism $Q \to \cA$, then $\kk[Q]/J$ and ${\rm gr}(\kk[Q]/J)$ are $\cA$-graded via a natural coarsening $\cB \to \cA$ that restricts to a group homomorphism $\ZZ\Phi \to \cA$. \end{cor} \begin{proof} The morphism $Q \to \cA$ induces a morphism $\pi_\cA: Q+\ZZ\Phi \to \cA$ by the universal property of monoid localization. The morphism~$\pi_\cA$ is constant on the non-monomial congruence classes in~$Q_I$ precisely because $I$ is $\cA$-graded. It follows that $\pi_\cA$ is constant on the non-monomial congruence classes in~$(Q+\ZZ\Phi)_{I[\ZZ\Phi]}$. In particular, $\pi_\cA$ is constant on the bounded classes $\cB(Q+\ZZ\Phi)$, which therefore map to~$\cA$ to yield the natural coarsening.
The group homomorphism $\ZZ\Phi \to \cA$ is induced by the composite morphism $\ZZ\Phi \to (Q+\ZZ\Phi) \to \cA$, which identifies the group $\ZZ\Phi$ with the $\ZZ\Phi$-orbit in~$\cB$ containing (the class of)~$0$. \end{proof} \begin{theorem}\label{t:zerocomp} Fix a face $\Phi$ of an affine semigroup~$Q$ and a binomial ideal $I \subseteq \kk[Q]$. If\/ $\pp_\Phi$ is minimal over~$I$, then the $\pp_\Phi$-primary component of~$I$ is $(I:\ttt^\Phi) + \kk\{U\}$, where $(I:\ttt^\Phi)$ is the binomial ideal~(\ref{eq:I:Phi}) and $U \subseteq Q$ is the unbounded ideal (Definition~\ref{d:U}) for $\sim_{I[\ZZ\Phi]}$. Furthermore, the only monomials in $(I:\ttt^\Phi) + \kk\{U\}$ are those of the form $\ttt^u$ for $u \in U$. \end{theorem} \begin{proof} The $\pp_\Phi$-primary component of~$I$ is the kernel of the localization homomorphism $\kk[Q] \to (\kk[Q]/I)_{\pp_\Phi}$. As this factors through the homogeneous localization $\kk[Q+\ZZ\Phi]/I[\ZZ\Phi]$, we find that the kernel contains $(I:\ttt^\Phi)$. Lemmas~\ref{l:distinct} and~\ref{l:unbounded} imply that the kernel contains~$\kk\{U\}$. But already $(I:\ttt^\Phi) + \kk\{U\}$ is $\pp_\Phi$-primary by Proposition~\ref{p:graded}.5; the finiteness condition there is satisfied by minimality of~$\pp_\Phi$ applied to the filtration in Proposition~\ref{p:graded}.4. Thus the quotient of $\kk[Q]$ by $(I:\ttt^\Phi) + \kk\{U\}$ maps injectively to its localization at~$\pp_\Phi$. To prove the last sentence of the theorem, observe that under the $\ol Q$-grading from Proposition~\ref{p:graded}.1, every monomial~$\ttt^u$ outside of~$\kk\{U\}$ maps to a $\kk$-vector space basis for the ($1$-dimensional) graded piece corresponding to the bounded congruence class containing~$u$. 
\end{proof} \begin{example} One might hope that when $\pp_\Phi$ is an embedded prime of a binomial ideal~$I$, the $\pp_\Phi$-primary components, or even perhaps the irreducible components, would be unique if we require that they be finely graded (Hilbert function $0$ or~$1$) as in Proposition~\ref{p:graded}. However, this fails even in simple examples, such as $\kk[x,y]/\<x^2-xy,xy-y^2\>$. In this case, $I = \<x^2-xy,xy-y^2\> = \<x^2,y\> \cap \<x-y\> = \<x,y^2\> \cap \<x-y\>$ and $\Phi$ is the face $\{0\}$ of~$Q = \NN^2$, so that $I = (I:\ttt^\Phi)$ by definition. The monoid $Q_I$, written multiplicatively, consists of $1$, $x$, $y$, and a single element of degree~$i$ for each $i \geq 2$ representing the congruence class of the monomials of total degree~$i$. Our two choices $\<x^2,y\>$ and $\<x,y^2\>$ for the irreducible component with associated prime~$\<x,y\>$ yield quotients of~$\kk[x,y]$ with different $Q_I$-graded Hilbert functions, the first nonzero in degree~$x$ and the second nonzero in degree~$y$. \end{example} \section{Primary components of binomial ideals}\label{s:primary} In this section, we express the primary components of binomial ideals in polynomial rings over the complex numbers as explicit sums of binomial and monomial ideals. We formulate our main result, Theorem~\ref{t:components}, after recalling some essential results from \cite{binomialideals}. In this section we work with the complex polynomial ring $\CC[\ttt]$ in (commuting) variables $\ttt = \del_1,\ldots,\del_n$. If $L \subseteq \ZZ^n$ is a sublattice, then with notation as in Example~\ref{ex:Q=NN}, the \emph{lattice ideal}\/ of~$L$ is \[ I_L = \<\ttt^{u_+} - \ttt^{u_-} \mid u = u_+ - u_- \in L\>. \] More generally, any \emph{partial character} $\rho : L \to \CC^*$ of~$\ZZ^n$, which includes the data of both its domain lattice $L \subseteq \ZZ^n$ and the map to~$\CC^*$, determines a binomial ideal \[ I_\rho = \<\ttt^{u_+} - \rho(u)\ttt^{u_-} \mid u = u_+ - u_- \in L\>.
\] (The ideal~$I_\rho$ is called $I_+(\rho)$ in \cite{binomialideals}.) The ideal $I_\rho$ is prime if and only if $L$ is a \emph{saturated} sublattice of~$\ZZ^n$, meaning that $L$ equals its \emph{saturation}, in general defined as \[ \sat(L) = (\QQ L) \cap \ZZ^n, \] where $\QQ L = \QQ \otimes_\ZZ L$ is the rational vector space spanned by~$L$ in $\QQ^n$. In fact, writing $\mm_J = \<\del_j \mid j \notin J\>$ for any $J \subseteq \{1,\ldots,n\}$, every binomial prime ideal in $\CC[\ttt]$ has the form \begin{equation} \label{eq:IrJ} I_{\rho,J} = I_\rho + \mm_J \end{equation} for some \emph{saturated}\/ partial character~$\rho$ (i.e., whose domain is a saturated sublattice) and subset $J$ such that the binomial generators of~$I_\rho$ only involve variables $\del_j$ for $j \in J$ (some of which might actually be absent from the generators of~$I_\rho$) \cite[Corollary~2.6]{binomialideals}. \begin{remark}\label{rem:I_A} A rank $m$ lattice $L \subseteq \ZZ^n$ is saturated if and only if there exists an $(n-m)\times n$ integer matrix $A$ of full rank such that $L=\ker_{\ZZ}(A)$. In this case, if $\rho$ is the trivial character, the ideal $I_{\rho}$ is denoted by $I_A$ and called a \emph{toric ideal}. Note that \begin{equation} \label{eq:IA} I_A = \langle \ttt^u - \ttt^v \mid Au = Av \rangle. \end{equation} If $\rho$ is not the trivial character, then $I_\rho$ becomes isomorphic to~$I_A$ when the variables are rescaled via $\del_i \mapsto \rho(e_i)\del_i$, which induces the rescaling $\ttt^u \mapsto \rho(u)\ttt^u$ on general monomials. 
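For concreteness, here is a small instance of this remark (our illustration, not from the original text):

```latex
For the $1 \times 2$ matrix $A = (1\ \ 1)$ we get
$L = \ker_\ZZ(A) = \ZZ(1,-1)$ and $I_A = \<\del_1 - \del_2\>$.  A
partial character with $\rho(1,-1) = \lambda$ yields
$I_\rho = \<\del_1 - \lambda\del_2\>$, and the rescaling
$\del_2 \mapsto \lambda^{-1}\del_2$ carries $I_\rho$ to~$I_A$.
```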
\end{remark} The characteristic zero part of the main result in \cite{binomialideals}, Theorem~7.1$'$, says that an irredundant primary decomposition of an arbitrary binomial ideal $I \subseteq \CC[\ttt]$ is given by \begin{equation}\label{eq:Hull} I = \bigcap_{I_{\rho,J}\in\mathrm{Ass}(I)}\mathrm{Hull}(I + I_\rho + \mm_J^e) \end{equation} for any large integer~$e$, where $\mathrm{Hull}$ means to discard the primary components for embedded (i.e.\ nonminimal associated) primes, and $\mm_J^e = \<\del_j \mid j \notin J\>^e$. Our goal in this section is to be explicit about the Hull operation. The salient feature of~(\ref{eq:Hull}) is that $I + I_\rho + \mm_J^e$ contains~$I_\rho$. In contrast, (\ref{eq:Hull}) is false in positive characteristic, where $I_\rho + \mm_J^e$ should be replaced by a Frobenius power of~$I_{\rho,J}$ \cite[Theorem~7.1$'$]{binomialideals}. Our notation in the next theorem is as follows. Given a subset $J \subseteq \{1,\ldots,n\}$, let $\oJ = \{1,\ldots,n\} \minus J$ be its complement, and use these sets to index coordinate subspaces of~$\NN^n$ and $\ZZ^n$; in particular, $\NN^n = \NN^J \times \NN^\oJ$. Adjoining additive inverses for the elements in $\NN^J$ yields $\ZZ^J \times \NN^\oJ$, whose semigroup ring we denote by $\CC[\ttt][\ttt_J^{-1}]$, with $\ttt_J = \prod_{j \in J} \del_j$. As in Definition~\ref{d:U}, $\CC\{S\}$ is the monomial ideal in~$\CC[\ttt]$ having $\CC$-basis~$S$. Finally, for a saturated sublattice $L \subseteq \ZZ^J$, we write $\NN^J/L$ for the image of $\NN^J$ in the torsion-free group~$\ZZ^J/L$. \begin{theorem}\label{t:components} Fix a binomial ideal $I \subseteq \CC[\ttt]$ and an associated prime $I_{\rho,J}$ of~$I$, where $\rho : L \to \CC^*$ for a saturated sublattice $L \subseteq \ZZ^J \subseteq \ZZ^n$. Set $\Phi = \NN^J/L$, and write $\sim$ for the congruence on $\ZZ^J \times \NN^\oJ$ determined by the ideal $(I+I_\rho)[\ZZ^J] = (I+I_\rho)\CC[\ttt][\ttt_J^{-1}]$. 
\begin{numbered} \item If $I_{\rho,J}$ is a minimal prime of~$I$ and $\wU$ is the set of $u \in \NN^n$ whose congruence classes in $(\ZZ^J \times \NN^\oJ)/$$\sim$ have infinite image in~$\ZZ\Phi \times \NN^\oJ$, then the $I_{\rho,J}$-primary component of~$I$~is \[ \cC_{\rho,J} = \big( (I + I_\rho) : \ttt_J^\infty\big) + \CC\big\{\wU\big\}. \] \end{numbered}\setcounter{separated}{\value{enumi}}% Fix a monomial ideal $K \subseteq \CC[\del_j \mid j \in \oJ]$ containing a power of each available variable, and let $\approx$ be the congruence on $\ZZ^J \times \NN^\oJ$ determined by $(I+I_\rho+K)\CC[\ttt][\ttt_J^{-1}]$. Write $\wU_K$ for the set of $u \in \NN^n$ whose congruence classes in $(\ZZ^J \times \NN^\oJ)/$$\approx$ have infinite image in~$\ZZ\Phi \times \NN^\oJ$. \begin{numbered}\setcounter{enumi}{\value{separated}} \item The $I_{\rho,J}$-primary component of $\<I + I_\rho + K\> \subseteq \CC[\ttt]$ is $\big( (I + I_\rho + K) : \ttt_J^\infty \big)+\CC\{\wU_K\}$. \item If $K$ is contained in a sufficiently high power of\/ $\mm_J$, then \[ \cC_{\rho,J} = \big( (I + I_\rho + K) : \ttt_J^\infty\big) + \CC\big\{\wU_K\big\} \] is a valid choice of $I_{\rho,J}$-primary component for~$I$. \end{numbered} The only monomials in the above primary components are those in $\CC\{\wU\}$ or $\CC\{\wU_K\}$. \end{theorem} \begin{proof} First suppose $I_{\rho,J}$ is a minimal prime of~$I$. We may, by rescaling the variables $\del_j$ for $j \in J$, harmlessly assume that $\rho$ is the trivial character on its lattice~$L$, so that $I_\rho = I_L$ is the lattice ideal for~$L$. The quotient $\CC[\ttt]/I_L$ is the affine semigroup ring~$\CC[Q]$ for $Q = (\NN^J/L) \times \NN^\oJ = \Phi \times \NN^\oJ$. Now let us take the whole situation modulo~$I_L$. The image of~$I_{\rho,J} = I_L + \mm_J$ is the prime ideal $\pp_\Phi \subseteq \kk[Q]$ for the face~$\Phi$. 
The image in $\CC[Q]$ of the binomial ideal~$I$ is a binomial ideal~$I'$, and $\big( (I + I_L) : \ttt_J^\infty \big)$ has image $(I' : \ttt^\Phi)$, as defined in~(\ref{eq:I:Phi}). Finally, the image of $\wU$ in~$Q$ is the unbounded ideal $U \subseteq Q$ (Definition~\ref{d:U}) by construction. Now we can apply Theorem~\ref{t:zerocomp} to~$I'$ and obtain a combinatorial description of the component associated to $I_{\rho,J}$. The second and third items follow from the first by replacing $I$ with $I+K$, given the primary decomposition in~(\ref{eq:Hull}). \end{proof} \begin{remark} One of the mysteries in \cite{binomialideals} is why the primary components~$\cC$ of binomial ideals turn out to be generated by monomials and binomials. From the perspective of Theorem~\ref{t:components} and Proposition~\ref{p:graded} together, this is because the primary components are \emph{finely graded}: under some grading by a free abelian group, namely $\ZZ\Phi$, the vector space dimensions of the graded pieces of the quotient modulo the ideal~$\cC$ are all $0$ or~$1$ \cite[Proposition~1.11]{binomialideals}. In fact, via Lemma~\ref{l:distinct}, fine gradation is the root cause of primariness. \end{remark} \begin{remark} Theorem~\ref{t:components} easily generalizes to arbitrary binomial ideals in arbitrary commutative noetherian semigroup rings over~$\CC$: simply choose a presentation as a quotient of a polynomial ring modulo a pure difference binomial ideal \cite[Theorem~7.11]{gilmer}. \end{remark} \begin{remark} The methods of Section~\ref{s:semigroup} work in arbitrary characteristic---and indeed, over a field $\kk$ that can fail to be algebraically closed, and can even be finite---because we assumed that a prime ideal $\pp_\Phi$ for a face~$\Phi$ is associated to our binomial ideal. In contrast, this section and the next work only over an algebraically closed field of characteristic zero.
However, it might be possible to produce similarly explicit binomial primary decompositions in positive characteristic by reducing to the situation in Section~\ref{s:semigroup}; this remains an open problem. \end{remark} \section{Associated components and multigradings}\label{s:toral} In this section, we turn our attention to interactions of primary components with various gradings on~$\CC[\ttt]$. These played crucial roles already in the proof of Theorem~\ref{t:components}: taking the quotient of its statement by the toric ideal~$I_\rho$ put us in the situation of Proposition~\ref{p:graded} and Theorem~\ref{t:zerocomp}, which provide excellent control over gradings. The methods here can be viewed as aids for clarification in examples, as we shall see in the case of lattice basis ideals (Example~\ref{ex:I(B)}). However, this theory was developed with applications in mind \cite{dmm}; see Section~\ref{s:applications}. Generally speaking, given a grading of~$\CC[\ttt]$, there are two kinds of graded modules: those with bounded Hilbert function (the \emph{toral} case below) and those without. The main point is Theorem~\ref{t:toral}: if $\CC[\ttt]/\pp$ has bounded Hilbert function for some graded prime~$\pp$, then the $\pp$-primary component of any graded binomial ideal is easier to describe than usual. To be consistent with notation, we adopt the following conventions for this section. \begin{convention}\label{conv:A} $A = (a_{ij}) \in \ZZ^{d \times n}$ denotes an integer $d \times n$ matrix of rank~$d$ whose columns $a_1,\ldots,a_n$ all lie in a single open linear half-space of~$\RR^d$; equivalently, the cone generated by the columns of $A$ is pointed (contains no lines), and all of the columns~$a_i$ are nonzero. We also assume that $\ZZ A = \ZZ^d$; that is, the columns of $A$ span $\ZZ^d$ as a lattice. \end{convention} \begin{convention}\label{conv:B} Let $B = (b_{jk})\in \ZZ^{n\times m}$ be an integer matrix of full rank~$m \leq n$.
Assume that every nonzero element of the column span $\ZZ B$ of~$B$ over the integers~$\ZZ$ is \emph{mixed}, meaning that it has at least one positive and one negative entry; in particular, the columns of $B$ are mixed. We write $b_1,\ldots,b_n$ for the rows of~$B$. Having chosen~$B$, we set $d = n - m$ and pick a matrix $A \in \ZZ^{d \times n}$ such that $AB = 0$ and $\ZZ A = \ZZ^d$. If $d\neq 0$, the mixedness hypothesis on $B$ is equivalent to the pointedness assumption for~$A$ in Convention~\ref{conv:A}. We do allow~\mbox{$d=0$}, in which case $A$ is the empty matrix. \end{convention} The $d \times n$ integer matrix~$A$ in Convention~\ref{conv:A} determines a $\ZZ^d$-grading on $\CC[\ttt]$ in which the degree \mbox{$\deg(\del_j) = a_j$} is defined% \footnote{In noncommutative settings, such as \cite{MMW,dmm}, the variables are written $\ddel_1,\ldots,\ddel_n$, and the degree of $\ddel_j$ is usually defined to be $-a_j$ instead of~$a_j$.} to be the $j^\th$ column of~$A$. Our conventions imply that $\CC[\ttt]$ has finite-dimensional graded pieces, like any finitely generated module \mbox{\cite[Chapter~8]{cca}}. \begin{defn}\label{d:toral} Let $V = \bigoplus_{\alpha \in \ZZ^d} V_\alpha$ be an $A$-graded module over the polynomial ring~$\CC[\ttt]$. The \emph{Hilbert function} $H_V: \ZZ^d \to \NN$ takes the values $H_V(\alpha) = \dim_\CC{V_\alpha}$. If $V$ is finitely generated, we say that the module $V$ is \emph{toral}\/ if the Hilbert function $H_V$ is bounded above. A graded prime~$\pp$ is a \emph{toral prime} if $\CC[\ttt]/\pp$ is a toral module. Similarly, a graded primary component $J$ of an ideal~$I$ is a \emph{toral component} of~$I$ if $\CC[\ttt]/J$ is a toral module. \end{defn} \begin{example} The toric ideal $I_A$ for the grading matrix~$A$ is always an $A$-graded toral prime, since the quotient $\CC[\ttt]/I_A$ is always toral: its Hilbert function takes only the values $0$ or~$1$. 
In contrast, $\CC[\ttt]$ itself is not a toral module unless $d = n$ (which forces $A$ to be invertible over~$\ZZ$, by Convention~\ref{conv:A}). \end{example} We will be most interested in the quotients of $\CC[\ttt]$ by prime and primary binomial ideals. To begin, here is a connection between the natural gradings from Section~\ref{s:semigroup} and the $A$-grading. \begin{lemma}\label{l:toral} Let $I \subseteq \CC[\ttt]$ be an $A$-graded binomial ideal and $\cC_{\rho,J}$ a primary component, with $\rho : L \to \CC^*$ for $L \subseteq \ZZ^J$. The image $\ZZ A_J$ of the homomorphism $\ZZ^J/L = \ZZ\Phi \to \ZZ^d$ induced by Corollary~\ref{c:graded} (with $\cA = \ZZ A = \ZZ^d$) is generated by the columns $a_j$ of~$A$ indexed by~$j \in J$, as is the monoid image of\/ $\Phi = \NN^J/L$, which we denote by~$\NN A_J$.\qed \end{lemma} To make things a little more concrete, let us give one more perspective on the homomorphism $\ZZ\Phi \to \ZZ^d$. Simply put, the ideal $I_{\rho,J}$ is naturally graded by $\ZZ^J/L = \ZZ\Phi$, and the fact that it is also $A$-graded means that $L \subseteq \ker(\ZZ^n \to \ZZ^d)$, the map to $\ZZ^d$ being given by~$A$. (The real content of Corollary~\ref{c:graded} lies with the action on the rest of~$\cB$.) \begin{example} Let $\rho : L \to \CC^*$ for a saturated sublattice $L \subseteq \ZZ^J \subseteq \ZZ^n$. If $\cC_{\rho,J}$ is an $I_{\rho,J}$-primary binomial ideal, then $\CC[\ttt]/\cC_{\rho,J}$ has a finite filtration whose successive quotients are torsion-free modules of rank~$1$ over the affine semigroup ring $R = \CC[\ttt]/I_{\rho,J}$. This follows by applying Proposition~\ref{p:graded} to Theorem~\ref{t:components}.1 and its proof. If, in addition, $I_{\rho,J}$ is $A$-graded, then some $A$-graded translate of each successive quotient admits a $\ZZ^J/L$-grading refining the $A$-grading via $\ZZ^J/L \to \ZZ^d = \ZZ A$; this follows by conjointly applying Corollary~\ref{c:graded}. 
\end{example} The next three results provide alternate characterizations of toral primary binomial ideals. In what follows, $A_J$ is the submatrix of~$A$ on the columns indexed~by~$J$. \begin{prop}\label{p:toral} Every $A$-graded toral prime is binomial. In the situation of Lemma~\ref{l:toral}, $\CC[\ttt]/I_{\rho,J}$ and $\CC[\ttt]/\cC_{\rho,J}$ are toral if and only if the homomorphism $\ZZ\Phi \to \ZZ^d$ is injective. \end{prop} \begin{proof} To prove the first part of the statement, fix a toral prime~$\pp$, and let $h \in \NN$ be the maximum of the Hilbert function of~$\CC[\ttt]/\pp$. It is enough, by \cite[Proposition~1.11]{binomialideals}, to show that $h = 1$. Let~$R$ be the localization of~$\CC[\ttt]/\pp$ by inverting all nonzero homogeneous elements. Because of the homogeneous units in~$R$, all of its graded pieces have the same dimension over~$\CC$; and since $R$ is a domain, this dimension is at least~$h$. Thus we need only show that $R_0 = \CC$. For any given finite-dimensional subspace of~$R_0$, multiplication by a common denominator maps it injectively to some graded piece of $\CC[\ttt]/\pp$. Therefore every finite-dimensional subspace of~$R_0$ has dimension at most~$h$. It follows that $H_R(0) \leq h$, so $R_0$ is artinian. But $R_0$ is a domain because $R_0 \subseteq R$, so $R_0 = \CC$. For the second part, $\CC[\ttt]/\cC_{\rho,J}$ has a finite filtration whose associated graded pieces are $A$-graded translates of quotients of $\CC[\ttt]$ by $A$-graded primes, at least one of which is $I_{\rho,J}$ and all of which contain~it. By additivity of Hilbert functions, $\CC[\ttt]/\cC_{\rho,J}$ is toral precisely when all of these are toral primes. However, if a graded prime~$\pp$ contains a toral prime, then $\pp$ is itself a toral prime. Therefore, we need only treat the case of $\CC[\ttt]/I_{\rho,J}$. 
But $\CC[\ttt]/I_{\rho,J}$ is naturally graded by~$\ZZ\Phi$, with Hilbert function $0$ or~$1$, so injectivity immediately implies that $\CC[\ttt]/I_{\rho,J}$ is toral. On the other hand, if $\ZZ\Phi \to \ZZ^d$ is not injective, then $\NN A_J$ is a proper quotient of the affine semigroup~$\Phi$, and such a proper quotient has fibers of arbitrary cardinality. \end{proof} \begin{cor}\label{c:toralprimes} Let $\rho : L \to \CC^*$ for a saturated lattice $L \subseteq \ZZ^J \cap \ker_{\ZZ}(A) = \ker_{\ZZ}(A_J)$. The quotient\/ $\CC[\ttt]/I_{\rho,J}$ by an $A$-graded prime~$I_{\rho,J}$ is toral if and only if $L = \ker_{\ZZ}(A_J)$.\qed \end{cor} \begin{lemma}\label{l:toral-by-A} Every $A$-graded binomial prime ideal $I_{\rho,J}$ satisfies \[ \dim(I_{\rho,J}) \geq \rank(A_J), \] with equality if and only if\/ $\CC[\ttt]/I_{\rho,J}$ is toral. \end{lemma} \begin{proof} Rescale the variables and assume that $I_{\rho,J} = I_L$, the lattice ideal for a saturated lattice $L \subseteq \ker_{\ZZ}(A_J)$. The rank of $L$ is at most $\# J - \rank(A_J)$; thus $\dim(I_L) = \# J - \rank(L) \geq \rank(A_J)$. Equality holds exactly when $L=\ker_{\ZZ}(A_J)$, i.e.\ when $\CC[\ttt]/I_{\rho,J}$ is toral. \end{proof} \begin{example}\label{ex:I(B)} Fix matrices $A$ and~$B$ as in Convention~\ref{conv:B}. This identifies $\ZZ^d$ with the quotient of\/ $\ZZ^n/\ZZ B$ modulo its torsion subgroup. Consider the \emph{lattice basis ideal}\/ \begin{equation}\label{eq:I(B)} I(B) = \<\ttt^{u_+} - \ttt^{u_-} \mid u = u_+ - u_- \;\mbox{is a column of}\; B\> \subseteq \CC[\del_1,\dots ,\del_n]. \end{equation} The toric ideal~$I_A$ from~(\ref{eq:IA}) is an associated prime of~$I(B)$, the primary component being $I_A$ itself. More generally, all of the minimal primes of the lattice ideal~$I_{\ZZ B}$, one of which is~$I_A$, are minimal over~$I(B)$ with multiplicity~$1$; this follows from \cite[Theorem~2.1]{binomialideals} by inverting the variables.
That result also implies that the minimal primes of~$I_{\ZZ B}$ are precisely the ideals~$I_\rho$ for partial characters $\rho: \sat(\ZZ B) \to \CC^*$ of~$\ZZ^n$ extending the trivial partial character on~$\ZZ B$, so the lattice ideal~$I_{\ZZ B}$ is the intersection of these prime ideals. Hence $I_{\ZZ B}$ is a radical ideal, and every irreducible component of its zero set is isomorphic, as a subvariety of $\CC^n$, to the variety of~$I_A$. In complete generality, each of the minimal primes of~$I(B)$ arises, after row and column permutations, from a block decomposition of~$B$ of the form \begin{equation}\label{eq:MNOB} \left[ \begin{array}{l|r} N & B_J\!\\\hline M & 0\ \end{array} \right], \end{equation} where $M$ is a mixed submatrix of~$B$ of size $q \times p$ for some $0 \leq q \leq p \leq m$ \cite{hostenshapiro}. (Matrices with $q = 0$ rows are automatically mixed; matrices with $q = 1$ row are never mixed.) We note that not all such decompositions correspond to minimal primes: the matrix $M$ has to satisfy another condition which Ho\c{s}ten and Shapiro call irreducibility \cite[Definition~2.2 and Theorem~2.5]{hostenshapiro}. If $I(B)$ is a complete intersection, then only square matrices $M$ will appear in the block decompositions~(\ref{eq:MNOB}), by a result of Fischer and Shapiro \cite{fischer-shapiro}. For each partial character $\rho : \sat(\ZZ B_J) \to \CC^*$ extending the trivial character on~$\ZZ B_J$, the prime $I_{\rho,J}$ is associated to~$I(B)$, where $J = J(M) = \{1,\ldots,n\} \minus {\rm rows}(M)$ indexes the $n-q$ rows not in~$M$. We reiterate that the symbol $\rho$ here includes the specification of the sublattice $\sat(\ZZ B_J) \subseteq \ZZ^n$. The corresponding primary component \mbox{$\cC_{\rho,J} = {\rm Hull}\big(I(B) + I_\rho + \mm_J^e\big)$} of the lattice basis ideal~$I(B)$ is simply $I_\rho$ if $q = 0$, but will in general be non-radical when $q \geq 2$ (recall that $q = 1$ is impossible). 
The quotient $\CC[\ttt]/\cC_{\rho,J}$ is toral if and only if $M$ is square and satisfies either $\det(M) \neq 0$ or $q=0$. To check this statement, observe that $I(B)$ has $m = n - d$ generators, so the dimension of any of its associated primes is at least~$d$. But since $A_J$ has rank at most~$d$, Lemma~\ref{l:toral-by-A} implies that toral primes of~$I(B)$ have dimension exactly~$d$ (and are therefore minimal). If $I_{\rho,J}$ is a toral associated prime of $I(B)$ arising from a decomposition of the form~(\ref{eq:MNOB}), where $M$ is $q \times p$, then the dimension of $I_{\rho,J}$ is $n-p-(m-q) = d+q-p$, and from this we conclude that $M$ is square. That $M$ is invertible follows from the fact that $\rank(\ker_{\ZZ}(A_J)) = d$. The same arguments show that if $M$ is not square invertible, then $I_{\rho,J}$ is not toral. \end{example} \begin{example} A binomial ideal $I \subseteq \CC[\ttt]$ may be $A$-graded for different matrices $A$; in this case, which of the components of $I$ are toral will change if we alter the grading. For instance, the prime ideal $I = \<\del_1\del_4-\del_2\del_3\> \subseteq \CC[\del_1,\dots,\del_4]$ is homogeneous for both the matrix $[1 \, 1 \, 1 \, 1]$ and the matrix $\left[\begin{smallmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{smallmatrix}\right]$. But $\CC[\del_1,\dots,\del_4]/I$ is toral in the $\left[\begin{smallmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \end{smallmatrix}\right]$-grading, while it is not toral in the $[1 \, 1 \, 1 \, 1]$-grading. 
\end{example} \begin{example}\label{ex:binomial} Let $I = \<bd-de,bc-ce,ab-ae,c^3-ad^2,a^2d^2-de^3,a^2cd-ce^3,a^3d-ae^3\>$ be a binomial ideal in $\CC[\ttt]$, where we write $\ttt = (\del_1,\del_2,\del_3,\del_4,\del_5) = (a,b,c,d,e)$, and let \[ A = \left[\begin{array}{ccccc} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 1 \end{array}\right] \qquad\text{and}\qquad B = \left[\begin{array}{rrr} -2 & -1 & 0 \\ 3 & 0 & 1 \\ 0 & 3 & 0 \\ -1 & -2 & 0 \\ 0 & 0 & -1 \end{array}\right]. \] One easily verifies that the binomial ideal~$I$ is graded by $\ZZ A = \ZZ^2$. If $\omega$ is a primitive cube root of unity ($\omega^3 = 1$), then $I$, which is a radical ideal, has the prime decomposition \begin{align*} I = \<a,c,d\> &\cap\<bc-ad,b^2-ac,c^2-bd,b-e\> \\ &\cap\<\omega bc-ad,b^2-\omega ac,\omega^2 c^2-bd,b-e\> \\ &\cap\<\omega^2bc-ad,b^2-\omega^2ac,\omega c^2-bd,b-e\>. \end{align*} The intersectand $\<a,c,d\>$ equals the prime ideal $I_{\rho,J}$ for $J = \{2,5\}$ and $L = \{0\} \subseteq \ZZ^J$. The homomorphism $\ZZ^J \to \ZZ^2$ is not injective since it maps both basis vectors to $\left[\twoline{1}{1}\right]$; therefore the prime ideal $\<a,c,d\>$ is not a toral component of~$I$. In contrast, the remaining three intersectands are the prime ideals $I_{\rho,J}$ for the three characters $\rho$ that are defined on~$\ker(A)$ but trivial on its index~$3$ sublattice~$\ZZ B$ spanned by the columns~of $B$, where $J = \{1,2,3,4,5\}$. These prime ideals are all toral by Corollary~\ref{c:toralprimes}, with $\ZZ A_J = \ZZ A$. \end{example} Toral components can be described more simply than in Theorem~\ref{t:components}. \begin{theorem}\label{t:toral} Fix an $A$-graded binomial ideal $I \subseteq \CC[\ttt]$ and a toral associated prime $I_{\rho,J}$ of~$I$. Define the binomial ideal $\oI = I \cdot \CC[\ttt]/\<\del_j - 1 \mid j \in J\>$ by setting $\del_j = 1$ for $j \in J$. \begin{numbered} \item Fix a minimal prime $I_{\rho,J}$ of\/~$I$. 
If\/ $\oU \subseteq \NN^\oJ$ is the set of elements with infinite congruence class in\/~$\NN^\oJ_\oI$ (Proposition~\ref{p:sim}), and $\ttt_J = \prod_{j \in J} \del_j$, then $I$ has $I_{\rho,J}$-primary component \[ \cC_{\rho,J} = \big( (I + I_\rho) : \ttt_J^\infty\big) + \<\ttt^u \mid u \in \oU\>. \] \end{numbered}\setcounter{separated}{\value{enumi}}% Let $K \subseteq \CC[\del_j \mid j \in \oJ]$ be a monomial ideal containing a power of each available variable, and let $\oU_{\!K} \subseteq \NN^\oJ$ be the set of elements with infinite congruence class in\/~$\NN^\oJ_{\oI + K}$. \begin{numbered}\setcounter{enumi}{\value{separated}} \item The $I_{\rho,J}$-primary component of $\<I + I_\rho + K\> \subseteq \CC[\ttt]$ is $\big( (I + I_\rho) : \ttt_J^\infty \big) + \<\ttt^u \mid u\in \oU_{\!K}\>$. \item If $K$ is contained in a sufficiently high power of\/ $\mm_J$, then \[ \cC_{\rho,J} = \big( (I + I_\rho) : \ttt_J^\infty\big) + \<\ttt^u \mid u \in \oU_{\!K}\> \] is a valid choice of $I_{\rho,J}$-primary component for~$I$. \end{numbered} The only monomials in the above primary components are in $\<\ttt^u \mid u \in \oU\>$ or $\<\ttt^u \mid u \in \oU_{\!K}\>$. \end{theorem} \begin{proof} Resume the notation from the statement and proof of Theorem~\ref{t:components}. As in that proof, it suffices here to deal with the first item. In fact, the only thing to show is that $\wU$ in Theorem~\ref{t:components} is the same as $\NN^J \times \oU$ here. Recall that $I' \subseteq \CC[Q]$ is the image of~$I$ modulo~$I_\rho$. The congruence classes of $\ZZ\Phi \times \NN^\oJ$ determined by $I'[\ZZ\Phi]$ are the projections under $\ZZ^J \times \NN^\oJ \to \ZZ\Phi \times \NN^\oJ$ of the $\sim$ congruence classes. Further projection of these classes to~$\NN^\oJ$ yields the congruence classes determined by the ideal $I'' \subseteq \CC[\NN^\oJ]$, where $I''$ is obtained from $I'[\ZZ\Phi]$ by setting $\ttt^\phi = 1$ for all \mbox{$\phi \in \ZZ\Phi$}. This ideal $I''$ is just~$\oI$. 
Hence we are reduced to showing that a congruence class in $\Phi \times \NN^\oJ$ determined by $I'[\ZZ\Phi]$ is infinite if and only if its projection to~$\NN^\oJ$ is infinite. This is clearly true for the monomial congruence class in $\ZZ\Phi \times \NN^\oJ$. For any other congruence class $\Gamma \subseteq \ZZ\Phi \times \NN^\oJ$, the homogeneity of~$I$ (and hence that of~$I'$) under the $A$-grading implies that $\Gamma$ is contained within a coset of $\KK = \ker(\ZZ\Phi \times \ZZ^\oJ \to \ZZ^d=\ZZ A)$. This kernel~$\KK$ intersects $\ZZ\Phi$ only at $0$ because $I_{\rho,J}$ is toral. Therefore the projection of any coset of~$\KK$ to~$\ZZ^\oJ$ is bijective onto its image. In particular, $\Gamma$ is infinite if and only if its bijective image in $\NN^\oJ$~is~infinite.% \end{proof} \begin{cor}\label{c:I(B)} Resume the notation of Example~\ref{ex:I(B)}. If $I_{\rho,J}$ is a toral minimal prime of the lattice basis ideal~$I(B)$ given by a decomposition as in~(\ref{eq:MNOB}), so $J = J(M)$, then \[ \cC_{\rho,J} = I(B) + I_{\rho,J} + U_M, \] where $U_M \subseteq \CC[\del_j \mid j \in \oJ]$ is the ideal $\CC$-linearly spanned by all monomials whose exponent vectors lie in the union of the unbounded $M$-subgraphs of\/~$\NN^\oJ$, as in Definition~\ref{d:M}. The only monomials in $\cC_{\rho,J}$ belong to $U_M$.\qed \end{cor} \begin{remark} The conclusion of Theorem~\ref{t:toral} need not fail for a component that is not toral, but it certainly can: there can be congruence classes in $\ZZ\Phi \times \NN^\oJ$ that are infinite only in the $\ZZ\Phi$ direction, so that their projections to~$\NN^\oJ$ are finite. \end{remark} \section{Applications, examples, and further directions}\label{s:applications} In this section we give a brief overview of the connection between binomial primary decomposition and hypergeometric differential equations, study some examples, and discuss computational issues.
From the point of view of complexity, primary decomposition is hard: even in the case of zero-dimensional binomial complete intersections, counting the number of associated primes (with or without multiplicity) is a $\# P$-complete problem \cite{cattani-dick:binomial-ci}. However, the primary decomposition algorithms implemented in Singular \cite{Singular} or Macaulay2 \cite{M2} work very well in reasonably sized examples, and in fact, they provide the only implemented method for computing bounded congruence classes or $M$-subgraphs as in Section~\ref{s:primary}. We remind the reader that \cite[Section 8]{binomialideals} contains specialized algorithms for binomial primary decomposition, whose main feature is that they preserve binomiality at each step. In the case that $q=2$, we can study $M$-subgraphs directly by combinatorial means \cite[Section~6]{dms}. The relevant result is the following. \begin{prop} Let $M$ be a mixed invertible $2 \times 2$ integer matrix. Without loss of generality, write $M = \left[ \begin{array}{rr} a & b \\ -c & -d \end{array} \right] $, where $a,b,c,d$ are positive integers. Then the number of bounded $M$-subgraphs is $\min(ad, bc)$. Moreover, if \[ R = \left\{ \begin{array}{lr} \{(s,t) \in \NN^2 \mid s < b \text{ and } t < c \} & \mbox{if}\; ad > bc, \\ \{(s,t) \in \NN^2 \mid s < a \text{ and } t < d \} & \mbox{if}\; ad < bc, \end{array} \right. \] then every bounded $M$-subgraph passes through exactly one of the points in $R$. \end{prop} If $q>2$, a method for computing $M$-subgraphs may be obtained through a link to differential equations. To make this evident, we make a change in the notation for the ambient ring. \begin{notation} All binomial ideals in the remainder of this article are ideals in the polynomial ring $\CC[\ddel]=\CC[\ddel_1,\dots,\ddel_n]$. \end{notation} The following result for $M$-subgraphs can be adapted to fit the more general context of congruences.
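The count $\min(ad,bc)$ in the proposition above can be probed experimentally by constructing the $M$-subgraphs inside a finite window of $\NN^2$. The following Python sketch is ours, not part of the text: the function name, window size, and margin are arbitrary choices, and the boundary heuristic for detecting boundedness assumes the column entries are small compared to the margin. It joins $u+m_+$ to $u+m_-$ for each column $m = m_+ - m_-$ of~$M$ and counts the components that stay away from the window boundary.

```python
from itertools import product

def bounded_M_subgraphs(columns, L=60, margin=6):
    """Count the bounded M-subgraphs of N^2 inside an L x L window.

    M-subgraphs are the connected components of the graph on N^2 whose
    edges join u + m_+ and u + m_- for every column m = m_+ - m_- of M
    and every u in N^2.  A window component is declared bounded when it
    stays strictly below the margin band; this heuristic is reliable
    only when the column entries are much smaller than the margin.
    """
    parent = {c: c for c in product(range(L), repeat=2)}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for m in columns:
        mp = tuple(max(mi, 0) for mi in m)    # m_+
        mm = tuple(max(-mi, 0) for mi in m)   # m_-
        for u in product(range(L), repeat=2):
            v = (u[0] + mp[0], u[1] + mp[1])
            w = (u[0] + mm[0], u[1] + mm[1])
            if max(v) < L and max(w) < L:
                parent[find(v)] = find(w)     # union the two endpoints

    comps = {}
    for c in parent:
        comps.setdefault(find(c), []).append(c)
    return sum(1 for cells in comps.values()
               if all(max(cell) < L - margin for cell in cells))

# M = [[1, 2], [-1, -1]] has columns (1,-1), (2,-1): min(ad, bc) = 1
# M = [[2, 3], [-1, -1]] has columns (2,-1), (3,-1): min(ad, bc) = 2
```

For $M = \left[\begin{smallmatrix}2&3\\-1&-1\end{smallmatrix}\right]$ the two bounded subgraphs are the singletons $\{(0,0)\}$ and $\{(1,0)\}$, in accordance with the set $R$ of the proposition (here $ad < bc$, so $R = \{(s,t) : s<2,\, t<1\}$).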
\begin{prop}\label{prop:sols-via-subgraphs} Let $M$ be a $q\times q$ mixed invertible integer matrix, and assume that $q>0$. Given $\gamma \in \NN^q$, denote by $\Gamma$ the $M$-subgraph containing $\gamma$. Think of the ideal $I(M) \subseteq \CC[\ddel]$ as a system of linear partial differential equations with constant coefficients. \begin{numbered} \item The system of differential equations $I(M)$ has a unique formal power series solution of the form $G_{\gamma} = \sum_{u \in \Gamma} \lambda_u x^u$ in which $\lambda_{\gamma} = 1$. \item The other coefficients $\lambda_u$ of $G_{\gamma}$ for $u \in \Gamma$ are all nonzero. \item The set $\{G_{\gamma} \mid \gamma$ runs over a set of representatives for the $M$-subgraphs of\/~$\NN^q\}$ is a basis for the space of all formal power series solutions of~$I(M)$. \item The set $\{G_{\gamma} \mid \gamma$ runs over a set of representatives for the \emph{bounded} $M$-subgraphs of~$\NN^q\}$ is a basis for the space of polynomial solutions of~$I(M)$. \end{numbered} \end{prop} The straightforward proof of this proposition can be found in \cite[Section~7]{dmm}. The following example illustrates the correspondence between $M$-subgraphs and solutions of $I(M)$. \begin{example} \label{ex:concrete-subgraph} Consider the $3\times 3$ matrix \[ M = \left[ \begin{array}{rrr} 1 & -5 & 0 \\ -1 & 1 & -1 \\ 0 & 3 & 1 \end{array} \right]. \] A basis of solutions (with minimal support under inclusion) of $I(M)$ is easily computed: \[ \left\{ 1, \quad x+y+z, \quad (x+y+z)^2, \quad (x+y+z)^3, \quad \sum_{n \geq 4} \frac{(x+y+z)^n}{n!} \right\}. 
\] \begin{figure} \[ \psfrag{a}{\footnotesize$a$} \psfrag{b}{\footnotesize$b$} \psfrag{c}{\footnotesize$c$} \includegraphics{fig-axes-30.eps} \] \caption{The $M$-subgraphs of $\NN^3$ for Example~\ref{ex:concrete-subgraph}.} \label{f:M-subgraphs} \end{figure} The $M$-subgraphs of $\NN^3$ are the four slices $\{(a,b,c) \in \NN^3 \mid a + b + c = n\}$ for $n \leq 3$; for $n \geq 4$, two consecutive slices are $M$-connected by $(-5,1,3)$, yielding one unbounded $M$-subgraph (see Figure~\ref{f:M-subgraphs}). \end{example} A direct combinatorial algorithm for producing the bounded $M$-subgraphs for $q > 2$, or even for finding their number, would be interesting and useful, as the number of bounded $M$-subgraphs gives the dimension of the polynomial solution space of a hypergeometric system, and also the multiplicity of an associated prime of a lattice basis ideal. In the case where $I(M)$ is a zero-dimensional complete intersection, such an algorithm can be produced from the results in \cite{cattani-dick:binomial-ci}. The combinatorial computation of the number of bounded congruence classes determined by a binomial ideal in a semigroup ring is open. The system of differential equations $I(M) \subseteq \CC[\ddel]$ is a special case in the class of \emph{Horn hypergeometric systems}. That class of systems takes center stage in our companion article \cite{dmm}, in the more general setting of \emph{binomial $D$-modules} that are introduced there. The input data for these consist of a binomial ideal~$I$ and a vector~$\beta$ of complex parameters. The special case where $I$ is prime corresponds to the \emph{$A$-hypergeometric} or \emph{GKZ hypergeometric systems}, after Gelfand, Kapranov, and Zelevinsky \cite{GGZ, GKZ}.
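The solution basis in Example~\ref{ex:concrete-subgraph} can be verified on truncated power series. For the $3\times 3$ matrix there, the columns $(1,-1,0)$, $(-5,1,3)$, $(0,-1,1)$ give $I(M) = \<\partial_x - \partial_y,\ \partial_y\partial_z^3 - \partial_x^5,\ \partial_z - \partial_y\>$, and the series $e^{x+y+z}$, being the sum of the five listed basis solutions, is annihilated by all three operators. The pure-Python sketch below is ours (the helper names and the truncation degree are arbitrary choices): it applies each binomial operator $\partial^{u_+} - \partial^{u_-}$ to a truncated coefficient array $\sum_u \lambda_u x^u$ and checks that the result vanishes up to the truncation order.

```python
from fractions import Fraction
from math import factorial
from itertools import product

def falling(w, u):
    """prod_i w_i!/(w_i - u_i)! -- the factor produced by del^u x^w."""
    r = 1
    for wi, ui in zip(w, u):
        for k in range(wi - ui + 1, wi + 1):
            r *= k
    return r

def apply_binomial_op(coeffs, up, um, N):
    """Coefficients of (del^up - del^um) f, for f = sum coeffs[v] x^v,
    truncated to total degree <= N."""
    out = {}
    for v in product(range(N + 1), repeat=3):
        if sum(v) > N:
            continue
        a = tuple(vi + ui for vi, ui in zip(v, up))
        b = tuple(vi + ui for vi, ui in zip(v, um))
        out[v] = (coeffs.get(a, Fraction(0)) * falling(a, up)
                  - coeffs.get(b, Fraction(0)) * falling(b, um))
    return out

N = 12
# truncation of e^{x+y+z} = sum x^i y^j z^k / (i! j! k!)
exp_coeffs = {v: Fraction(1, factorial(v[0]) * factorial(v[1]) * factorial(v[2]))
              for v in product(range(N + 1), repeat=3) if sum(v) <= N}
# operators from the columns (1,-1,0), (-5,1,3), (0,-1,1) of M
ops = [((1, 0, 0), (0, 1, 0)),   # del_x - del_y
       ((0, 1, 3), (5, 0, 0)),   # del_y del_z^3 - del_x^5
       ((0, 0, 1), (0, 1, 0))]   # del_z - del_y
for up, um in ops:
    res = apply_binomial_op(exp_coeffs, up, um, N - 5)
    assert all(c == 0 for c in res.values())
```

The truncation to degree $N-5$ avoids spurious boundary terms, since the operators shift total degree by at most~$5$.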
Binomial primary decomposition is crucial for the study of Horn systems, and their more general binomial relatives, because the numerics, algebra, and combinatorics of their solutions are directly governed by the corresponding features of the input binomial ideal. The dichotomy between components that are toral or not, for example, distinguishes between the choices of parameters yielding finite- or infinite-dimensional solution spaces; and in the finite case, the multiplicities of the toral components enter into simple formulas for the dimension. The use of binomial primary decomposition to extract invariants and reduce to the $A$-hypergeometric case underlies the entirety of~\cite{dmm}; see Section~1.6 there for precise statements. Using this ``hypergeometric perspective'', it becomes possible to consider algorithms for binomial primary decomposition that exploit methods for solving differential equations. The idea would be to use the fact from \cite[Section 7]{dmm}, that the supports of power series solutions of hypergeometric systems contain the combinatorial information needed to describe certain toral primary components of the underlying binomial ideal. The method of canonical series \cite[Sections~2.5 and~2.6]{SST}, a symbolic algorithm for constructing power series solutions of regular holonomic left $D_n$-ideals, might be useful. Canonical series methods are based on Gr\"obner bases in the Weyl algebra, and generalize the classical Frobenius method in one variable. \raggedbottom
https://arxiv.org/abs/1612.09108
Nonstandard Measure Spaces with Values in non-Archimedean Fields
The aim of this contribution is to bring together the areas of $p$-adic analysis and nonstandard analysis. We develop a nonstandard measure theory with values in a complete non-Archimedean valued field $K$, e.g. the $p$-adic numbers $\mathbb{Q}_p$. The corresponding theory for real-valued measures is well known by the work of P. A. Loeb, R. M. Anderson and others. We first review some of the standard facts on non-Archimedean measures and briefly sketch the prerequisites from nonstandard analysis. Then internal measures on rings and algebras with values in a nonstandard field ${^*K}$ are introduced. We explain how an internal measure induces a $K$-valued Loeb measure. The standard-part map between a Loeb space and the underlying standard measure space is measurable almost everywhere. We establish liftings from measurable functions to internal simple functions. Furthermore, we prove that standard measure spaces can be described as push-downs of hyperfinite internal measure spaces. This result is an analogue of a well-known theorem on hyperfinite representations of Radon spaces. Then standard integrable functions are related to internal $S$-integrable functions and integrals are represented by hyperfinite sums. Finally, the results are applied to measures and integrals on $\mathbb{Z}_p$ and $\mathbb{Z}_p^{\times}$. We obtain explicit series expansions for the $p$-adic zeta function and the $p$-adic Euler-Mascheroni constant, which we use for computations.
\section{Introduction} Integrals of functions with values in a complete non-Archimedean field are studied in the field of {\em $p$-adic analysis} and a general measure-theoretical approach to $p$-adic integration has been developed by A. van Rooij \cite{rooij}. $p$-adic measures and integrals are used in number theory and in arithmetic geometry, in particular in the context of $p$-adic zeta- and $L$-functions. This contribution applies methods of {\em nonstandard analysis} to measures and functions with values in a complete non-Archimedean field, e.g.\ the $p$-adic numbers $\mathbb{Q}_p$. {Nonstandard \mbox{analysis}} was established in the 1960s by A. Robinson \cite{robinson1961},\cite{robinson1996}. Nonstandard extensions can be defined by the {\em ultrapower} construction and they behave in a functorial way \cite{serpe}. In the past, nonstandard analysis has been successfully applied to real measure theory. {\em Loeb measures} \cite{albeverio1986} are of particular importance because they permit the transition from nonstandard to standard measure spaces. Our aim is to investigate $p$-adic measure spaces with nonstandard methods and to obtain Theorems similar to the case of real Radon measures, for example representations by hyperfinite measure spaces. \\ \subsection{Measures with values in non-Archimedean fields} \label{measures} Let $K$ be a field with a non-Archimedean absolute value $|\ |$. We suppose that the absolute value is non-trivial and $K$ is complete. We recall some basic definitions and facts on $K$-valued measures and integrals from \cite{rooij}, where a detailed exposition of the subject can be found. Our description is based on measures and measurable functions rather than distributions and continuous functions. Of course, these terms and definitions are closely related. Let $X$ be a set and $\mathcal{R}$ a {\em ring} of subsets of $X$. 
This means that $\varnothing \in \mathcal{R}$ and for any $A, B \in \mathcal{R}$, we have $A \cup B \in \mathcal{R}$ and $A \setminus B \in \mathcal{R}$. We assume $\mathcal{R}$ is {\em covering} and {\em separating}, i.e., every point of $X$ lies in some $A \in \mathcal{R}$, and for any distinct $a, b \in X$ there is $A \in \mathcal{R}$ such that $a\in A$ and $b\in X \setminus A$. The sets in $\mathcal{R}$ are called {\em measurable} and $(X,\mathcal{R})$ is called a {\em measurable space}. $\mathcal{R}$ is the base of a {\em zero-dimensional} Hausdorff topology on $X$. If in addition $X \in \mathcal{R}$, then $\mathcal{R}$ is an {\em algebra} and we will see that measures on algebras have additional favourable properties.\\ For a given zero-dimensional Hausdorff space $X$, let $B(X)$ be the set of clopen (open and closed) subsets. Then $B(X)$ is a covering and separating algebra. For a locally compact space $X$, the compact and clopen sets form a covering and separating subring of $B(X)$ which is denoted by $B_c(X)$. For example, if $K$ is a locally compact field, then $B_c(K)$ consists of all finite unions of bounded balls $B_r(a) = \{ x \in K\ |\ |x-a| \leq r \}$. \begin{definition} Let $(X,\mathcal{R})$ be a measurable space. A {\em measure} on $\mathcal{R}$ with values in $K$ is a map $\mu: \mathcal{R} \rightarrow K$ with the following properties: \begin{enumerate} \item[(a)] $\mu(A \cup B)=\mu(A)+\mu(B)$ for disjoint sets $A, B \in \mathcal{R}$. (Additivity) \item[(b)] For $A \in \mathcal{R}$, $\|A\|_{\mu} = \sup\ \{|\mu(B)|\ :\ B \subset A,\ B\in \mathcal{R} \} < \infty$. \\ (Boundedness) \item[(c)] For any shrinking set $\mathcal{R}_0 \subset \mathcal{R}$ (i.e., $A, B \in \mathcal{R}_0 \Rightarrow A \cap B \in \mathcal{R}_0$) with empty intersection (i.e., $\bigcap_{A \in \mathcal{R}_0} A = \varnothing$) and for any $\epsilon > 0$ there exists a set $A_{\epsilon} \in \mathcal{R}_0$ with $|\mu(A_{\epsilon})|<\epsilon$. (Continuity) \end{enumerate} $(X,\mathcal{R},\mu)$ is called a measure space.
\label{measure} \end{definition} \noindent {\em Remarks.} If one of the sets $A \in \mathcal{R}_0$ in part (c) is compact in the $\mathcal{R}$-topology, then (c) is automatically satisfied, since in this case a finite intersection must be empty. Furthermore, the continuity implies $\sigma$-additivity, i.e.\ $\mu(\bigcup_{n\in \mathbb{N}} A_n) = \sum_{n\in \mathbb{N}} \mu(A_n)$ for disjoint sets $A_n \in \mathcal{R}$, if $A=\bigcup_{n\in \mathbb{N}} A_n \in \mathcal{R}$ holds. In fact, the sequence $A$, $A \setminus A_1$, $A \setminus (A_1 \cup A_2)$, $\dots$ is shrinking and therefore (c) yields $\lim_{n \rightarrow \infty} \bigl(\mu(A)-(\mu(A_1)+\dots+\mu(A_n))\bigr) = 0$. But note that it is not required that infinite unions are measurable, and $\mathcal{R}$ is usually not a $\sigma$-algebra (even if $\mathcal{R}$ is an algebra). The reason for this difference from real measure spaces is that a $K$-valued measure on a $\sigma$-algebra is purely atomic and therefore almost trivial (see \cite{rooij} 4.19 and 7.A): suppose for example that $X$ is a compact ultrametric space with the $\sigma$-algebra of Borel sets; then the singletons $\{a\}$ are measurable and the continuity property implies that the measure values of a shrinking set of punctured balls with a fixed center converge to $0$. Any measurable set $Y$ can be covered by a finite set of balls where the absolute value of the measure of the punctured balls is small, i.e.\ less than any fixed $\epsilon$. Since the absolute value of $K$ is non-Archimedean, one obtains $|\mu(Y)| \leq \epsilon$ and hence $\mu(Y)=0$, unless the measure is atomic. The real number $\|A\|_{\mu}$ is defined as the supremum of all $|\mu(B)|$ where $B$ is a measurable subset of $A$. If $\mathcal{R}$ is an algebra then $\|X\|_{\mu}$ is a global upper bound for all $|\mu(A)|$. The definition implies $\|A\|_{\mu} \leq \|B\|_{\mu}$ for $A, B \in \mathcal{R}$ and $A \subset B$, as one expects.
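The strong triangle inequality invoked in the argument above can be checked numerically for $K=\mathbb{Q}_p$, restricted to rational arguments. The following Python sketch is purely illustrative and not part of the formal development (the helper `p_adic_abs` is our own name, not a library function).

```python
from fractions import Fraction

def p_adic_abs(x, p):
    """p-adic absolute value |x|_p of a rational x, as an exact Fraction."""
    x = Fraction(x)
    if x == 0:
        return Fraction(0)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:   # p-adic valuation of the numerator
        num //= p
        v += 1
    while den % p == 0:   # valuation of the denominator counts negatively
        den //= p
        v -= 1
    # |x|_p = p^(-v_p(x))
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

# strong triangle inequality: |x+y|_p <= max(|x|_p, |y|_p)
p = 5
for x, y in [(25, 10), (Fraction(1, 5), 3), (75, -75)]:
    assert p_adic_abs(Fraction(x) + Fraction(y), p) <= max(p_adic_abs(x, p), p_adic_abs(y, p))
```

It is this inequality, rather than the Archimedean triangle inequality, that lets finitely many contributions of absolute value at most $\epsilon$ combine to a bound $\leq \epsilon$ instead of a sum.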
The reason for considering $\|.\|_{\mu}$ in addition to $|\mu(.)|$ is that the latter is not monotone; $| \mu(A) | \leq |\mu(B)|$ can be false for a subset $A \subset B$. But the inequality $|\mu(B)| \leq \max(|\mu(A)|, |\mu(B\setminus A)|)$ holds. If $A,B \in \mathcal{R}$ then $\|A \cup B\|_{\mu} \leq \max \{ \|A\|_{\mu}, \|B\|_{\mu} \}$. Indeed, if $C \subset A \cup B$, then $\mu(C) = \mu(C \cap A) + \mu(C \cap (B \setminus A))$ and hence, by the non-Archimedean inequality, $$|\mu(C)| \leq \max \{ \|C \cap A\|_{\mu}, \|C \cap (B \setminus A)\|_{\mu} \} \leq \max \{ \|A\|_{\mu}, \|B\|_{\mu} \}. $$ One can show that part (c) of the above Definition implies the existence of a set $A_{\epsilon} \in \mathcal{R}_0$ with the stronger property $\|A_{\epsilon}\|_{\mu}<\epsilon$ (see \cite{rooij} chapter 7). Thus (c) is equivalent to the following:\\ \begin{enumerate} \item[(c')] For any shrinking set $\mathcal{R}_0 \subset \mathcal{R}$ with empty intersection one has $\displaystyle\lim_{A \in \mathcal{R}_0} \| A\|_{\mu} = 0$. \end{enumerate} We say $A \in \mathcal{R}$ is a {\em $\mu$-null set} or {\em $\mu$-negligible} if $\|A\|_{\mu} = 0$. This notion will be extended below to arbitrary subsets of $X$. One defines a {\em Norm function} $N_{\mu} : X \rightarrow \mathbb{R}_{\geq 0}$ by $$N_{\mu}(x) = \inf \{\|A\|_{\mu} : x\in A \in \mathcal{R} \}$$ $N_{\mu}$ is called a Norm function, since it is used to define a seminorm on the space of $K$-valued functions on $X$ (see Section \ref{sec:int}). For real-valued regular measures, such a function $N_{\mu}$ would mostly be zero (except for atomic measures), but this is not the case for $K$-valued measures. In fact, for every locally compact space $X$ there exists a measure $\mu$ on $B_c(X)$ such that $N_{\mu}=1$ everywhere (\cite{rooij} 7.9).\\ $\|A\|_{\mu}$ can be recovered from $N_{\mu}$ by the formula $\|A\|_{\mu} = \sup_{x\in A} N_{\mu}(x)$ (see \cite{rooij} 7.2). The inequality $\geq$ follows from the definition of $N_{\mu}$.
For the reverse inequality, take $\epsilon >0$. For every $x \in A$ there exists $B \in \mathcal{R}$ with $x \in B$ such that $\|B\|_{\mu} \leq N_{\mu}(x)+\epsilon$. Using the continuity of the measure (the complements $A \setminus B$ of such $B$'s are shrinking with empty intersection), one finds a $B \in \mathcal{R}$ with $\|B\|_{\mu} \leq (\sup_{x \in A} N_{\mu}(x))+\epsilon$ and $\|A \setminus B\|_{\mu} \leq \epsilon$. Thus $\|A\|_{\mu} \leq \max\{\|B\|_{\mu},\| A\setminus B\|_{\mu} \} \leq (\sup_{x\in A} N_{\mu}(x))+\epsilon$ which establishes the asserted inequality.\\ It follows that $A \in \mathcal{R}$ is a $\mu$-null set if and only if $N_{\mu}(x)=0$ for all $x \in A$. The latter is used to define $\mu$-null subsets of $X$ which are not necessarily measurable. Since the class of null sets is thus stable under arbitrary unions, there exists a largest $\mu$-null subset of $X$. \\ Now we extend our ring $\mathcal{R}$ and include all sets which can be approximated by measurable sets. The extended (completed) ring $\mathcal{R}_{\mu} \supset \mathcal{R}$ contains all $A \subset X$ with the following property: for all $\epsilon > 0$ there is a measurable set $B_{\epsilon} \in \mathcal{R}$ such that $N_{\mu}(x) \leq \epsilon$ for all $x \in A \Delta B_{\epsilon} = (A \cup B_{\epsilon}) \setminus (A \cap B_{\epsilon})$. The latter is the symmetric difference of $A$ and $B_{\epsilon}$. This means that a set $A \in \mathcal{R}_{\mu}$ can be approximated by a set $B_{\epsilon} \in \mathcal{R}$. If $B_{\epsilon}' \in \mathcal{R}$ is another approximating set then $N_{\mu}(x) \leq \epsilon$ for all $x \in B_{\epsilon} \Delta B_{\epsilon}'$. This implies $|\mu(B_{\epsilon})-\mu(B'_{\epsilon})| \leq \|B_{\epsilon} \Delta B_{\epsilon}'\|_{\mu} \leq \epsilon $. It can be easily shown that $\mathcal{R}_{\mu}$ is again a ring.
The measure $\mu$ can be extended to $\mathcal{R}_{\mu}$ by taking the limit $ \mu(A)= \lim_{\epsilon \rightarrow 0} \mu(B_{\epsilon})$ which is well defined by the above. The additivity and boundedness are obvious. Let $X_{\epsilon} = \{ x\in X : N_{\mu}(x)>\epsilon \}$. The continuity of the extended measure follows by intersecting the elements of a shrinking subset with $X_{\epsilon}$ and using the continuity of the original measure (see \cite{rooij} 7.4). The extended ring is {\em complete}, i.e.\ it contains all subsets of $\mu$-null sets. We obtain the {\em extended measure space} $(X,\mathcal{R}_{\mu},\mu)$. $\mathcal{R}_{\mu}$ is the base of the zero-dimensional $\mathcal{R}_{\mu}$-topology on $X$ which is finer than the original $\mathcal{R}$-topology. Furthermore, $\mathcal{R}_{\mu}$ is stable under further extension, i.e., a set which can be approximated by sets in $\mathcal{R}_{\mu}$ is already contained in $\mathcal{R}_{\mu}$ (see \cite{rooij} 7.5). \\ The following statement (\cite{rooij} 7.6) is crucial for the next section: \begin{prop} Let $(X,\mathcal{R}_{\mu},\mu)$ be an extended measure space, $A \in \mathcal{R}_{\mu}$, $\epsilon>0$ and $X_{\epsilon} = \{ x\in X : N_{\mu}(x)>\epsilon \}$. Then $X_{\epsilon} \cap A$ is $\mathcal{R}_{\mu}$-compact and $N_{\mu}$ is $\mathcal{R}_{\mu}$-upper semicontinuous. \label{compact} \end{prop} We obtain an interesting relationship between measurable sets and the associated \mbox{topology}: \begin{cor} Let $B$ be any clopen set in $X$ with respect to the zero-dimensional $\mathcal{R}_{\mu}$-topology and suppose that $B \subset A$ for some $A \in \mathcal{R}_{\mu}$. Then $B \in \mathcal{R}_{\mu}$. In particular, if $\mathcal{R}$ is an algebra and $B(X)$ the algebra of clopen sets in the $\mathcal{R}_{\mu}$-topology, then $B(X)=\mathcal{R}_{\mu}$. \label{bx} \end{cor} {\em Proof.} Let $B \in B(X)$ and $\epsilon>0$.
Then Proposition \ref{compact} implies that $A \cap X_{\epsilon}$ and hence also $B \cap X_{\epsilon}$ is compact. Since $B$ can be covered by sets in $\mathcal{R}_{\mu}$, there is a finite sub-cover of $B \cap X_{\epsilon}$ which yields $B \cap X_{\epsilon} \in \mathcal{R}_{\mu}$. Since this holds for all $\epsilon>0$, we then have $B \in \mathcal{R}_{\mu}$. $\hfill \square$\\ {\noindent \em Remark.} A measure $\mu$ on the algebra $B(X)=\mathcal{R}_{\mu}$ is called {\em tight}. In this case, $X_{\epsilon}$ is compact and $\| X \setminus X_{\epsilon}\|_{\mu} \leq \epsilon$ for all $\epsilon > 0$. \\ There is a close connection between measurability and continuity: \begin{cor} The following conditions are equivalent: \begin{enumerate} \item[(a)] $f: X \rightarrow K$ is $(\mathcal{R}_{\mu}, B(K))$-locally measurable, i.e.\ $f \cdot \chi_A$ is measurable for any $A \in \mathcal{R}_{\mu}$. \item[(b)] $f: X \rightarrow K$ is $(\mathcal{R}_{\mu}, B(K))$-continuous. \end{enumerate} \label{cont} \end{cor} {\em Proof.} a) implies b) since $\mathcal{R}_{\mu}$ is a covering ring and conversely, b) implies a) by the above Corollary \ref{bx}.$\hfill\square$ \\ {\noindent \em Remark.} If $\mathcal{R}$ is only a ring, then even the constant functions are only locally measurable. If $\mathcal{R}$ is an algebra then {\em locally measurable} can be replaced by {\em measurable} in part a).\\ If $N_{\mu}$ is everywhere greater than some positive number, then $\mathcal{R}=\mathcal{R}_{\mu}$: \begin{cor} Let $(X,\mathcal{R},\mu)$ be a measure space as above and assume that there is some $\epsilon>0$ such that $N_{\mu}(x) > \epsilon$ for all $x\in X$. Then $\mathcal{R}=\mathcal{R}_{\mu}$, $X$ is locally compact in the $\mathcal{R}$-topology and a function $f: X \rightarrow K$ is $(\mathcal{R}, B(K))$-continuous if and only if $f$ is $(\mathcal{R}, B(K))$-locally measurable.
\end{cor} \subsection{Nonstandard extensions} \label{nonst} In this subsection, we briefly recall the notions and prerequisites from nonstandard analysis (see for instance \cite{albeverio1986}, \cite{lr}, \cite{vaeth} for more details). In particular, we discuss nonstandard interpretations of $p$-adic fields. Nonstandard analysis was invented by Abraham Robinson in 1961 \cite{robinson1961}. His original construction uses model theory and was motivated by a Theorem of T. Skolem, who showed the existence of {\em nonstandard} models of arithmetic: the natural numbers cannot be uniquely characterized in first-order logic \cite{skolem1934}. There are countable models with infinite (unlimited) numbers. Similarly, there exists a nonstandard model ${^*\mathbb{R}}$ of the theory of the real numbers. ${^*\mathbb{R}}$ is an ordered extension field of $\mathbb{R}$ which contains numbers greater than any standard real number \cite{robinson1996}. Since ${^*\mathbb{R}}$ is a field, it also contains infinitesimal numbers. Later W.A.J. Luxemburg gave an explicit construction of the hyperreal numbers by equivalence classes of sequences of real numbers modulo an ultrafilter (the {\em ultrapower} construction) which is widely used today.\\ This construction can be applied to almost any mathematical object which is contained in a {\em superstructure} $V(S)$ above some base set $S$. The superstructure is obtained by iterating the power set operation over the base set (for example $S=\mathbb{R}$) and taking unions. There is a general embedding map $^* : V(\mathbb{R}) \rightarrow V({^*\mathbb{R}})$ called {\em nonstandard extension} between the superstructures over $\mathbb{R}$ and ${^*\mathbb{R}}$. An object $A \in V(\mathbb{R})$ (e.g., a set like $\mathbb{N}$ but also higher order structures such as fields, functions, topological spaces or measure spaces) is mapped to an extended object ${^* A} \in V({^*\mathbb{R}})$.
${^*A}$ can be defined by the ultrapower construction, similar to the case of the real numbers. Taking ultrapowers over the index set $I=\mathbb{N}$ yields a countably saturated embedding. Countable saturation suffices for many applications (e.g.\ for countable sets and separable spaces), but sometimes richer nonstandard embeddings are required, i.e.\ a $\kappa$-saturated or polysaturated embedding of superstructures (see \cite{vaeth} for more details). The elements $A \in V(\mathbb{R})$ are called {\em standard}. Let $B \in V({^*\mathbb{R}})$. Then $B$ is called {\em standard} if $B={^*A}$ for some $A \in V(\mathbb{R})$, i.e.\ $B$ is obtained by a constant sequence of $A$'s. $B$ is called {\em internal} if $B \in {^*A}$ for some $A \in V(\mathbb{R})$, i.e.\ $B$ is represented by a sequence $(a_i)_{i\in I}$ with $a_i \in A$. The remaining objects in $V({^*\mathbb{R}})$ are called {\em external}. Note that the standard copy of $A \in V(\mathbb{R})$ in $V({^*\mathbb{R}})$, i.e.\ the set $^{\sigma}A=\{ {^*a} \in {^*A} \ |\ a \in A\}$, is an external subset of the standard set ${^*A}$. \noindent {\em Example:} ${^*\mathbb{N}}$ can be constructed as the product of copies of $\mathbb{N}$ modulo the given ultrafilter. ${^*\mathbb{N}}$ contains infinite numbers, for example the class of the sequence $(n)_{n\in\mathbb{N}}$. A sequence of finite subsets of $\mathbb{N}$, i.e.\ a sequence of elements in $A=\mathcal{P}(\mathbb{N})$, gives an element of ${^*\mathcal{P}(\mathbb{N})}$ and hence an internal subset of ${^*\mathbb{N}}$. This internal set is standard-finite if it coincides with some fixed finite set on a set of indices contained in the ultrafilter. Otherwise, the set is hyperfinite, i.e.\ internal and nonstandard. If $N \in {^*\mathbb{N}}$ is an infinite number then $\{1,2,\dots, N\}$ is a hyperfinite set which contains a copy of $\mathbb{N}$.
Note that the subsets $^{\sigma}\mathbb{N}$ and ${^*\mathbb{N}} \setminus {^\sigma}\mathbb{N}$ of ${^*\mathbb{N}}$ are external.\\ The embedding ${^*}:\ \mathcal{C} \rightarrow {^* \mathcal {C}}$ of a small category $\mathcal{C}$ (relative to the superstructure) is a covariant functor, and a given functor $F :\mathcal{C}_1 \rightarrow \mathcal{C}_2 $ between small categories can be extended to a functor ${^* F} : {^* \mathcal {C}_1} \rightarrow {^* \mathcal {C}_2}$. It can be shown that these functors are well behaved (see \cite{serpe}). There are a number of important principles which we mention only briefly: \begin{enumerate} \item {\em Transfer Principle}: Terms, formulas and sentences can be extended to the nonstandard universe. Objects are extended by the $*$-embedding and quantifiers over sets are replaced by quantifiers over internal sets. The transfer principle states that a sentence $\varphi$ is true if and only if $^* \varphi$ is true. \item {\em Saturation Principle}: Suppose that a family of internal sets $(A_i)_{i \in I}$ has nonempty finite intersections, the nonstandard embedding $^*$ is $\kappa$-saturated and the cardinality of $I$ is at most $\kappa$. Then $\bigcap_{i \in I} A_i \neq \varnothing$. \item {\em Countable Saturation}: A countable decreasing sequence $(A_n)_{n\in \mathbb{N}}$ of nonempty internal sets has a nonempty intersection $\bigcap_{n\in \mathbb{N}} A_n$. \item {\em Countable Comprehension} (equivalent to countable saturation): A sequence $(a_n)_{n\in \mathbb{N}}$ of elements of an internal set $A$ can be extended to an internal sequence $(a_n)_{n\in {^*\mathbb{N}}}$ of elements of $A$. \item {\em Permanence Principle}: Let $A(n)$ be an internal formula with $n$ the only free variable. If $A(n)$ holds for all $n \in \mathbb{N}$ with $n \geq n_0$, then there exists an infinite $N_0 \in {^*\mathbb{N}}$ such that $A(n)$ holds for all $n \in {^*\mathbb{N}}$ with $n_0 \leq n \leq N_0$.
If $A(n)$ holds for all infinite $n \in {^*\mathbb{N}}$, then there exists $n_0 \in \mathbb{N}$ such that $A(n)$ holds for all $n \geq n_0$. \end{enumerate} A topological space $(X,\mathcal{T})$ possesses an extension $({^* X},{^*\mathcal{T}})$. Let $a \in X$. Then the intersection of all standard neighbourhoods of $a$ in ${^* X}$ is called the {\em monad} of $a$: $$\text{\textit{mon}}\hspace{0.5mm} (a) = \bigcap_{a \in A \in \mathcal{T}} {^*A}$$ An element $x \in {^*X}$ is called {\em nearstandard} if $x \in \text{\textit{mon}}\hspace{0.5mm} (a)$ for some $a\in X$ and $ \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ is the subset of nearstandard elements. If $X$ is a Hausdorff space, then $\text{\textit{mon}}\hspace{0.5mm} (a) \cap \text{\textit{mon}}\hspace{0.5mm} (b)=\varnothing$ for $a \neq b$. This allows one to define the important {\em standard-part} function $\text{\textit{st}}\hspace{0.2mm} _X : \text{\textit{ns}}\hspace{0.5mm} ({^*X}) \rightarrow X$ which projects the monad $\text{\textit{mon}}\hspace{0.5mm} (a)$ to $a$. To simplify notation we often write $\text{\textit{st}}\hspace{0.2mm} $ instead of $\text{\textit{st}}\hspace{0.2mm} _X$. The following statement (see \cite{lr} 21.7) gives a nonstandard characterisation of open, closed and compact sets. We remark that countable saturation of the superstructure embedding suffices if the topological space is separable. \begin{prop} Let $X$ be a Hausdorff space and $A \subset X$. Then: \begin{enumerate} \item $A$ is open if and only if $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \subset {^* A}$, i.e.\ all monads of points of $A$ are contained in ${^*A}$. \item $A$ is closed if and only if ${^*A}\, \cap\, \text{\textit{ns}}\hspace{0.5mm} ({^*X}) \subset \text{\textit{st}}\hspace{0.2mm} ^{-1}(A)$, i.e.\ the nearstandard elements in ${^*A}$ are contained in the monad of some point of $A$. \item $A$ is compact if and only if ${^*A} \subset \text{\textit{st}}\hspace{0.2mm} ^{-1}(A)$, i.e.\ all elements in ${^*A}$ are contained in the monad of some point of $A$. This is equivalent to $ \text{\textit{ns}}\hspace{0.5mm} ({^*A})={^* A}$.
\end{enumerate} \label{topol} \end{prop} Since $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \subset \text{\textit{ns}}\hspace{0.5mm} ({^* A})$ holds by definition, we obtain the following Corollary: \begin{cor} Let $X$ be a Hausdorff space. Then $A \subset X$ is clopen if and only if $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) = {^* A} \cap \text{\textit{ns}}\hspace{0.5mm} ({^*X})$. \label{clopen} \end{cor} We next turn to nonstandard extensions of fields. Let $K$ be a field which is complete with respect to a non-Archimedean absolute value $|\ |_v$. Let $o_K=\{x \in K\ :\ |x|_v \leq 1 \}$ be the ring of integers, $\frak{m}_K=\{x \in K\ :\ |x|_v <1 \}$ its maximal ideal and $k=o_K/\frak{m}_K$ the residue field of $K$. The absolute value $|\ |_v$ extends to an internal absolute value $|\ |_{^*v}: {^*K} \rightarrow {^*\mathbb{R}}_{\geq 0}$. We write $|\ |$ for either $|\ |_v$ or $|\ |_{^*v}$ for simplicity of notation. An element $x \in {^* K}$ is called {\em finite}, if $|x|$ is a finite hyperreal number and is called {\em infinitesimal} if $\text{\textit{st}}\hspace{0.2mm} _K(x)=0$ or equivalently $\text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} |x| = 0$. The set $\text{\textit{fin}}\hspace{0.5mm} ({^*K})$ of finite elements is a {\em valuation ring} which includes $K$. The ideal $\text{\textit{inf}}\hspace{0.5mm} ({^*K})$ of infinitesimal elements is the maximal ideal of the valuation ring $\text{\textit{fin}}\hspace{0.5mm} ({^*K})$ and the residue field is studied below. Two elements $x,y \in {^*K}$ are called {\em approximate} ($x \approx y$) if their difference is infinitesimal. The set $x+\text{\textit{inf}}\hspace{0.5mm} ({^*K})$ of elements which are approximate to $x \in K$ is the {\em monad} of $x$ and the union of all monads is the subset $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$ of nearstandard elements. The internal absolute value defines a uniform structure and a Hausdorff topology on ${^*K}$.
The topology is the nonstandard extension of the topology on $K$ and ${^*K}$ then has the structure of a topological field. It is easy to see that $\text{\textit{inf}}\hspace{0.5mm} ({^*K})$, all monads, $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$ and $\text{\textit{fin}}\hspace{0.5mm} ({^*K})$ are clopen subsets. The composition $\text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} \circ |\ |_{^*v}$ defines a {\em seminorm} on $\text{\textit{fin}}\hspace{0.5mm} ({^*K})$; the corresponding topology is not Hausdorff and is coarser than the topology defined by the internal absolute value. The seminorm on $\text{\textit{fin}}\hspace{0.5mm} ({^*K})$ induces a norm on the quotient space $ \text{\textit{fin}}\hspace{0.5mm} ({^*K}) / \text{\textit{inf}}\hspace{0.5mm} ({^*K})$. An element $y \in {^* K}$ is called {\em pre-nearstandard} ($y \in \text{\textit{pns}}\hspace{0.5mm} ({^*K})$) if for any standard $\epsilon>0$ there is some $x \in K$ such that $|x-y| < \epsilon$ (cf. \cite{lr} 24.7). There are obvious inclusions \[ \text{\textit{inf}}\hspace{0.5mm} ({^*K}) \subset \text{\textit{ns}}\hspace{0.5mm} ({^*K}) \subset \text{\textit{pns}}\hspace{0.5mm} ({^*K}) \subset \text{\textit{fin}}\hspace{0.5mm} ({^*K}) \subset {^*K} . \] \begin{prop} Let $K$ be a locally compact non-Archimedean field. Then $ \text{\textit{ns}}\hspace{0.5mm} ({^*K}) = \text{\textit{fin}}\hspace{0.5mm} ({^*K})$ . \label{pnsfin} \end{prop} {\em Proof.} Let $x \in {^*K}$ be finite. Then $x$ is contained in some ball ${^*B}_r(0)$ with standard radius $r$. For any $\epsilon>0$, $B_r(0)$ can be covered with balls $B_{\epsilon}(y)$ of radius $\epsilon$ and centers $y \in K$. Since $B_r(0)$ is compact, $B_r(0)$ can be covered with a {\em finite} number of balls $B_{\epsilon}(y)$. Since the $^*$-embedding commutes with finite unions, we obtain $x \in {^*B}_{\epsilon}(y)$ for some $y \in K$, so that $x$ can be approximated by $y$. Hence $x$ is pre-nearstandard.
Furthermore, the completeness of $K$ implies $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})=\text{\textit{pns}}\hspace{0.5mm} ({^*K})$ (see \cite{lr} 24.15) which gives our assertion.$\hfill\square$\\ \begin{prop} The standard-part map induces an isometric isomorphism $ \text{\textit{ns}}\hspace{0.5mm} ({^*K}) / \text{\textit{inf}}\hspace{0.5mm} ({^*K}) \cong K $. For a locally compact field $K$ one has \\ $ \text{\textit{fin}}\hspace{0.5mm} ({^*K}) / \text{\textit{inf}}\hspace{0.5mm} ({^*K}) \cong K$. \label{Kv} \end{prop} {\em Proof.} The standard-part map commutes with the absolute value: $\text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} ( |x|) = |\text{\textit{st}}\hspace{0.2mm} _K(x)|_v$ for $x \in \text{\textit{ns}}\hspace{0.5mm} ({^*K})$. The isomorphism follows from a general Theorem on nonstandard representations of complete metric spaces (see \cite{lr} 24.19). For locally compact fields one applies Proposition \ref{pnsfin}.$\hfill\square$\\ \noindent {\em Remark.} The above Proposition also holds for non-complete fields $K$, if one replaces $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$ by $ \text{\textit{pns}}\hspace{0.5mm} ({^*K})$ and completes $K$ on the right-hand side. This construction can be used to define $\mathbb{C}_p$ and a spherical completion $\Omega_p$. Since $\overline{\mathbb{Q}_p}$ is not locally compact, the $\text{\textit{pns}}\hspace{0.5mm} $ subspace of ${^*\overline{\mathbb{Q}_p}}$ is a strict subset of the $\text{\textit{fin}}\hspace{0.5mm} $ subspace. \\ The following Lemma is easy to show: \begin{lemma} Let $(a_n)_{n\in \mathbb{N}}$ be a sequence of elements in $K$. It extends to a sequence $(a_n)_{n\in {^*\mathbb{N}}}$ in ${^*K}$ and \begin{enumerate} \item $\lim_{n\rightarrow \infty} a_n = a \in K$ if and only if $a_N \approx a$ for all infinite $N \in {^*\mathbb{N}}$. \item The series $\sum_{n=0}^{\infty} a_n $ converges in $K$ if and only if $a_N \approx 0$, or equivalently $|a_N| \approx 0$, for all infinite $N \in {^*\mathbb{N}}$.
\end{enumerate} \label{convergence} \end{lemma} \section{Measure spaces} \subsection{Internal measure spaces} In this section, we define internal measure spaces with values in non-Archimedean fields. Let $\Omega$ be an internal set for a given nonstandard extension (e.g. $\Omega={^*X}$ for a standard set $X$) and $\mathcal{S}$ an internal covering and separating subring of $\mathcal{P}(\Omega)$ (e.g. $\mathcal{S} = {^*\mathcal{R}}$ for a standard ring $\mathcal{R}$). Then $(\Omega,\mathcal{S})$ is a measurable space. Let $K$ be a complete non-Archimedean valued field and $B(K)$ the ring of clopen subsets of $K$. Then $(K,B(K))$ and $({^*K},{^*B(K)})$ are measurable spaces. \begin{definition} An internal measure on $\mathcal{S}$ with values in $^*K$ is an internal function $\nu: \mathcal{S} \rightarrow {^*K}$ such that \begin{enumerate} \item[(a)] $\nu(A \cup B)=\nu(A)+\nu(B)$ for disjoint sets $A, B \in \mathcal{S}$. (Additivity) \item[(b)] For all $A \in \mathcal{S}$, $\|A\|_{\nu} = \sup\ \{|\nu(B)|\ :\ B \subset A,\ B\in \mathcal{S} \} $ is a hyperreal number. (Boundedness) \end{enumerate} $(\Omega,\mathcal{S},\nu)$ is called an internal, finitely-additive measure space. \label{definternal} \end{definition} The continuity condition of Definition \ref{measure} (c) is automatically satisfied if the superstructure embedding is sufficiently saturated. \begin{prop} Assume that the superstructure embedding is $\kappa$-saturated. Let $\mathcal{S}_0 \subset \mathcal{S}$ be a shrinking set with empty intersection and suppose the cardinality of $\mathcal{S}_0$ is at most $\kappa$. This is for example satisfied if $\mathcal{S}_0$ is countable. Then there is a set $A \in \mathcal{S}_0$ with $A = \varnothing$. \label{empty} \end{prop} {\em Proof.} This follows from the saturation property of the superstructure embedding $^*$. Assume that $\mathcal{S}_0$ is shrinking and all sets $A \in \mathcal{S}_0$ are non-empty.
Then the intersection over all such $A$ is also non-empty, a contradiction. $\hfill \square$\\ This gives $\sigma$-additivity: if a countable union of disjoint sets satisfies $\cup_{n\in\mathbb{N}} A_n \in \mathcal{S}$ then Proposition \ref{empty} implies that all but finitely many $A_n$ are empty, so that $\sigma$-additivity follows from finite additivity. But countable unions of internal sets usually fail to be internal. On the other hand, additivity holds for $^*$-finite unions, i.e. for a sequence of disjoint subsets $A_n \in \mathcal{S}$ and any $N \in {^*\mathbb{N}}$, the hyperfinite union satisfies $\bigcup_{n=1}^N A_n \in \mathcal{S}$ and $\nu(\bigcup_{n=1}^N A_n)= \sum_{n=1}^N \nu(A_n)$. \\ Obviously, a standard measure space $(X,\mathcal{R},\mu)$ as described in Section \ref{measures} extends to an internal, finitely additive measure space $({^*X},{^*\mathcal{R}},{^*\mu})$. Another example is given by {\em hyperfinite} internal measure spaces, which turn out to be particularly useful. \begin{prop} Let $\Omega$ be a hyperfinite set and $\mathcal{S}$ an internal, covering and separating subring of $\mathcal{P}(\Omega)$. Then $\mathcal{S}$ is the algebra ${^*\mathcal{P}}(\Omega)$ of internal subsets of $\Omega$. Any internal function $\nu: \Omega \rightarrow {^*K}$ defines a measure on the singleton sets and can be uniquely extended to $\mathcal{S}$. This defines an internal, hyperfinite measure space $(\Omega,\mathcal{S},\nu)$. \label{hyperfinite} \end{prop} {\em Proof.} The corresponding statement is obvious for any finite set $\Omega$. Then the assertion follows from the Transfer Principle.$\hfill\square$\\ \noindent {\em Examples: } Let $\Omega={^*\mathbb{Z}_p}/(p^N)$ for some prime $p \neq 2$ with infinite $N \in {^*\mathbb{N}}$ and $K=\mathbb{Q}_p$. Then any internal sequence $(\nu_i)_{0\leq i < p^N}$ with values in ${^*\mathbb{Q}_p}$ defines an internal finitely additive measure $\nu$ on $\mathcal{S}={^*\mathcal{P}}(\Omega)$.
\begin{enumerate} \item If $\nu$ is translation-invariant, then $\nu$ must be constant and if $\nu$ is also normalized, we obtain the {\em internal Haar measure} defined by $\nu_i = p^{-N}$. We have $\nu(a+p^n \Omega)=p^{-n}$ for all $n\leq N$, but since $\nu$ is not finitely bounded, it does not induce a standard measure. \item Set $\nu_i=1$ if $i=0,1,\dots, \frac{p^N-1}{2}$ and $\nu_i=-1$ if $i=\frac{p^N+1}{2},\dots,p^N-1$. Let $n\in \mathbb{N}$ be standard. Then $\nu(a+p^n \Omega)=1$ for $a=0,1,\dots,\frac{p^n-1}{2}$ and $\nu(a+p^n \Omega)=-1$ for $a=\frac{p^n+1}{2}, \dots, p^n -1$. \item Set $\nu_i=(-1)^i$ for $i=0,1,\dots,p^N-1$. Then $\nu(a+p^n \Omega)=(-1)^a$ for $a=0,1,\dots,p^n-1$. \end{enumerate} In the following, we reserve the letters $\mathcal{R}$ and $\mathcal{A}$ for rings or algebras and write $\mathcal{S}$ for internal rings or algebras, for example $\mathcal{S}={^*\mathcal{R}}$.\\ As in Section \ref{measures} above, there is a Norm function $N_{\nu} : \Omega \rightarrow {^*\mathbb{R}}_{\geq 0}$. $N_{\nu}(x)$ is defined as the infimum of all $\|A\|_{\nu}$ with $x \in A \in \mathcal{S}$. We conclude by transfer from the standard case that $\|A\|_{\nu}$ can be recovered as the supremum of the set of all $N_{\nu}(x)$ with $x \in A$.\\ The ring $\mathcal{S}$ can be extended so that it includes all $\nu$-approximable sets. This extension is particularly convenient in the nonstandard setting. \begin{definition} Let $(\Omega,\mathcal{S},\nu)$ be an internal measure space and $A \subset \Omega$ a (possibly external) set. $A$ is called a Loeb null set if $N_{\nu}(x) \approx 0$ for all $x \in A$. Moreover, $A$ is called Loeb-measurable if $B \in \mathcal{S}$ exists such that $A \Delta B$ is a Loeb null set. The ring of Loeb-measurable sets is denoted by $\mathcal{S}_L$. \end{definition} Next, we derive a standard measure with values in $K$ from an internal measure space.
We need to assume that $\nu$ has values in $ \text{\textit{ns}}\hspace{0.5mm} ({^* K})$. If $K$ is locally compact then this is equivalent to $\nu$ having values in $\text{\textit{fin}}\hspace{0.5mm} ({^*K})$ (see Proposition \ref{pnsfin}). If $\nu$ is globally bounded then $\mu(A)=\text{\textit{st}}\hspace{0.2mm} _K(\nu(A))$ defines a measure on $\mathcal{S}$ with values in $K$ which can be extended to $\mathcal{S}_L$. Furthermore, $\|A\|_{\nu}$ is a finite hyperreal number. \\ The following Proposition shows that the Loeb construction gives a measure space in the standard sense. \begin{prop} Let $(\Omega,\mathcal{S},\nu)$ be an internal measure space and assume that $\nu$ has values in $ \text{\textit{ns}}\hspace{0.5mm} ({^* K})$ and that there exists some $C \in \mathbb{R}$ such that $|\nu(A)| \leq C$ for all $A \in \mathcal{S}$. Then the so-called Loeb measure $\nu_L : \mathcal{S}_L \rightarrow K$ is well defined: $\nu_L(A)=\text{\textit{st}}\hspace{0.2mm} _K(\nu(B))$ where $B \in \mathcal{S}$ such that $A \Delta B$ is a Loeb null set. It has the following properties: \begin{enumerate} \item[(a)] $\nu_L(A \cup B)=\nu_L(A)+\nu_L(B)$ for disjoint sets $A, B \in \mathcal{S}_L$. (Additivity) \item[(b)] For $A \in \mathcal{S}_L$, $\|A\|_{\nu_L} = \sup\ \{|\nu_L(B)|\ :\ B \subset A,\ B\in \mathcal{S}_L \} \leq C$. (Boundedness) \item[(c)] For any shrinking set $\mathcal{S}_{L,0} \subset \mathcal{S}_L$ (i.e., $A, B \in \mathcal{S}_{L,0} \Rightarrow A \cap B \in \mathcal{S}_{L,0} $) with empty intersection (i.e., $\bigcap_{A \in \mathcal{S}_{L,0} } A = \varnothing$) and cardinality at most $\kappa$ one has $\displaystyle \lim_{A \in \mathcal{S}_{L,0}} \| A \|_{\nu_L} = 0$. (Continuity) \end{enumerate} We call $(\Omega,\mathcal{S}_L,\nu_L)$ the Loeb measure space associated to $(\Omega,\mathcal{S},\nu)$. \label{loebprop} \end{prop} \noindent {\em Proof.} Let $B, B' \in \mathcal{S}$ such that $A \Delta B$ and $A \Delta B'$ are Loeb null sets.
Then $N_{\nu}(x) \approx 0$ for $x \in B \Delta B'$. The additivity of $\nu$ implies $|\nu(B)-\nu(B')| \leq \max \{ |\nu(B \setminus B')|, |\nu(B' \setminus B)| \} \leq \| B \Delta B' \|_{\nu} \approx 0$. Hence $\nu_L$ is well defined. The fact that $\mathcal{S}_L$ is a covering separating ring and the additivity follow immediately from the definition. Let $A \in \mathcal{S}$ and $B \subset A$ with $B \in \mathcal{S}$. By assumption, $\nu(B)$ is nearstandard and $|\nu(B)| \leq C$. This implies the boundedness of $\|A\|_{\nu_L}$ for $A \in \mathcal{S}_L$. We have to show the continuity property c). Let $\mathcal{S}_{L,0} \subset \mathcal{S}_L$ be a shrinking set with empty intersection and cardinality at most $\kappa$. Let $\epsilon > 0$ be standard and define $\Omega_{\epsilon} \subset \Omega$ to be the internal subset of all $y \in \Omega$ with $N_{\nu}(y) > \epsilon$. For every $A \in \mathcal{S}_{L,0}$ there exists a set $B \in \mathcal{S}$ with $B \cap \Omega_{\epsilon} = A \cap \Omega_{\epsilon}$, by the definition of Loeb measurable sets. Let $\mathcal{S}_0$ be the set of these $B$'s. The assumption $\bigcap_{A \in \mathcal{S}_{L,0}} A = \varnothing$ implies \[ \bigcap_{A \in \mathcal{S}_{L,0}} (A \cap \Omega_{\epsilon}) = \bigcap_{B \in \mathcal{S}_0} (B \cap \Omega_{\epsilon}) = \varnothing . \] $\mathcal{S}_0$ has the same cardinality as $\mathcal{S}_{L,0}$. Therefore Proposition \ref{empty} gives a set $B \in \mathcal{S}_0$ and a corresponding set $A \in \mathcal{S}_{L,0}$ such that $B \cap \Omega_{\epsilon} = A \cap \Omega_{\epsilon} = \varnothing$. This implies $\| A \|_{\nu_L} \leq \epsilon$ which proves the continuity of the Loeb measure.$\hfill\square$ \begin{lemma} Let $(\Omega,\mathcal{S},\nu)$ be an internal measure space and $(\Omega,\mathcal{S}_L,\nu_L)$ a Loeb measure space under the assumptions of the above Proposition \ref{loebprop}. Let $N_{\nu_L}(x)= \inf_{x \in A \in {\mathcal{S}_L}} \|A\|_{\nu_L}$ be the Norm function associated to $\nu_L$.
Then $\|A\|_{\nu_L} = \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} (\| A \|_{\nu})$ for $A \in \mathcal{S}$ and $ N_{\nu_L}(x)= \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} (N_{\nu}(x))$ for all $x \in \Omega$. \label{stloeb} \end{lemma} {\em Proof.} Let $B_1,B_2 \in \mathcal{S}_L$. If $B_1 \Delta B_2$ is a Loeb null set, then the additivity of $\nu_L$ implies $|\nu_L(B_1)-\nu_L(B_2)| \leq \max \{ |\nu_L(B_1 \setminus B_2)| , |\nu_L(B_2 \setminus B_1)| \} \leq \| B_1 \Delta B_2 \|_{\nu_L} = 0$. From this and the definition of $\| \ \|$ we conclude that $\|A\|_{\nu_L} = \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} (\| A \|_{\nu})$ for $A \in \mathcal{S}$. Furthermore, it implies $N_{\nu_L}(x) \leq \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} (N_{\nu}(x))$. Suppose that this inequality is strict for some $x \in \Omega$. Then there exist $A \in \mathcal{S}_L$ with $x \in A$ and $\|A\|_{\nu_L} < \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}} (N_{\nu}(x))$, and $B \in \mathcal{S}$ such that $A \Delta B$ is a Loeb null set. If $x \in B$ then $N_{\nu}(x) \leq \|B\|_{\nu} \approx \|A\|_{\nu_L}$, contradicting the choice of $A$; hence $x \in A \setminus B$. But this implies $N_{\nu}(x) \approx 0$, again a contradiction.$\hfill\square$ \subsection{The standard-part map} For any topological space $X$ with extension ${^*X}$ there exists the standard-part map $\text{\textit{st}}\hspace{0.2mm} _X : \text{\textit{ns}}\hspace{0.5mm} ({^*X}) \rightarrow X$ (see section \ref{nonst}) which gives a transition between nonstandard and standard spaces. We will consider the standard-part map for the real numbers, for complete non-Archimedean valued fields and for measurable spaces where the measurable sets form the base of a zero-dimensional Hausdorff topology.\\ First, we give some compatibility properties. \begin{lemma} Let $(X,\mathcal{R},\mu)$ be a measure space and $({^*X},{^*\mathcal{R}},{^*\mu})$ the corresponding internal measure space. \begin{enumerate} \item Let $y \in \text{\textit{ns}}\hspace{0.5mm} ({^* K})$. Then $|\text{\textit{st}}\hspace{0.2mm} _K(y)| = \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}}(|y|)$. \item For $A \in \mathcal{R}$, one has $\| A\|_{\mu} = \| {^*A}\|_{^*\mu}$.
\item $N_{^*\mu} = {^*(N_{\mu})}$ and $N_{^*\mu} (y)= \inf_{y \in A \in {^* \mathcal{R}}} \|A\|_{^*\mu}$. \item Let $y \in \text{\textit{ns}}\hspace{0.5mm} ({^* X})$. Then $N_{\mu}(\text{\textit{st}}\hspace{0.2mm} _X(y)) \geq \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}}(N_{^*\mu}(y))$. \end{enumerate} \label{compat} \end{lemma} {\em Proof. } a) Since $y \in \text{\textit{ns}}\hspace{0.5mm} ({^* K}) \subset \text{\textit{fin}}\hspace{0.5mm} ({^* K})$, $|y|$ is a finite hyperreal number and $\text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}}(|y|)$ is well defined. The absolute value $|\ |: K \rightarrow \mathbb{R}$ is continuous and $y \approx \text{\textit{st}}\hspace{0.2mm} _K(y)$, which implies $|y| \approx |\text{\textit{st}}\hspace{0.2mm} _K(y)|$ (see \cite{lr} 21.11). This gives the equality.\\ b) The assertion follows from the definition of $\| \ \|$ and a general fact on nonstandard extensions of sets which are defined by formulas (\cite{lr} 7.5): $$^*\{ |\mu(B)|: B \subset A,\ B\in \mathcal{R} \}=\{ |{^*\mu}(B)|: B \subset {^*A},\ B \in {^*\mathcal{R}} \}$$ c) It follows from the definition that $N_{^*\mu}$ is the extension of $N_{\mu}$.\\ d) Set $x=\text{\textit{st}}\hspace{0.2mm} _X(y)$. If $x\in A$ with $A \in \mathcal{R}$, then $y \in {^* A}$ by definition of the standard-part map. Since $N_{\mu}(x)$ is the infimum of all such $\|A\|_{\mu}$, and by part b) $\|A\|_{\mu} = \|{^*A}\|_{^*\mu}$, we conclude $N_{\mu}(x) \geq \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}}(N_{^* \mu}(y))$.$\hfill\square$\\ \noindent {\em Remark:} Equality cannot be deduced in d) because $y$ may be contained in additional internal measurable subsets which are not of type ${^*A}$ for $A \in \mathcal{R}$. Another argument uses the fact that $N_{\mu}$ is only upper-semicontinuous (see \cite{rooij} 7.6).
The elements $y$ and $x=\text{\textit{st}}\hspace{0.2mm} (y)$ are infinitely close and since $N_{\mu}$ is only upper-semicontinuous at $x$, one can only deduce the inequality $N_{\mu}(x) \geq \text{\textit{st}}\hspace{0.2mm} _{\mathbb{R}}(N_{^*\mu}(y))$. \\ The following Lemma shows that $\text{\textit{st}}\hspace{0.2mm} _X$ is defined almost everywhere. \begin{lemma} Let $(X,\mathcal{R},\mu)$ be a measure space, $({^*X},{^*\mathcal{R}},{^*\mu})$ the corresponding internal measure space and $A \in \mathcal{R}$ a measurable set. Then ${^* A} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ is a Loeb null set, i.e.\ $N_{^*\mu}(y) \approx 0$ for $y \in {^* A} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^*X})$. \label{nullset} \end{lemma} {\em Proof.} We know from Proposition \ref{compact} that $X_{\epsilon} \cap A$ is compact for standard $\epsilon > 0$. It follows that ${^* X_{\epsilon}} \cap {^* A} \subset \text{\textit{st}}\hspace{0.2mm} ^{-1}(X_{\epsilon} \cap A)$ (see Proposition \ref{topol}c) and the latter is a subset of $ \text{\textit{ns}}\hspace{0.5mm} ({^*X})$. Using \cite{lr} 7.5 one obtains ${^* X_{\epsilon}}= {^*(X_{\epsilon})} = ({^*X})_{\epsilon}$ which is the set of all $y \in {^*X}$ with $N_{^*\mu}(y) \geq \epsilon$. Hence $({^* X})_{\epsilon} \cap {^* A} \subset \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ for all $\epsilon >0$; a point $y \in {^* A} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ therefore satisfies $N_{^*\mu}(y) < \epsilon$ for every standard $\epsilon > 0$, i.e.\ $N_{^*\mu}(y) \approx 0$. $\hfill\square$\\ \noindent We now give a non-Archimedean analogue of a Theorem on Radon spaces (see \cite{anderson82} 3.3). \begin{theorem} Let $(X,\mathcal{A},\mu)$ be a measure space, $\mathcal{A}$ an algebra, $(X,\mathcal{A}_{\mu},\mu)$ the extended standard measure space and $({^*X},{^*\mathcal{A}}, {^*\mu})$ the internal measure space. Assume that ${^*\mu}$ has values in $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$ and let $({^*X}, {^*\mathcal{A}}_L, {^*\mu}_L)$ be the corresponding Loeb measure space.
Then $\text{\textit{st}}\hspace{0.2mm} _X : ({^*X},{^*\mathcal{A}}_L, {^*\mu}_L) \rightarrow (X,\mathcal{A}_{\mu},\mu)$ is defined outside a Loeb null set. Furthermore, $\text{\textit{st}}\hspace{0.2mm} _X$ is a measurable and measure-preserving map. \label{stpart} \end{theorem} {\em Proof.} Since $\mathcal{A}$ is an algebra, $X$ itself is measurable, so ${^*X} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ is a Loeb null set by Lemma \ref{nullset}, and $\text{\textit{st}}\hspace{0.2mm} =\text{\textit{st}}\hspace{0.2mm} _X$ is defined ${^*\mu}_L$-almost everywhere. ${^*\mu}$ is globally bounded by $\| X \|_{\mu}=\|{^*X}\|_{^*\mu}$. Now let $A \in \mathcal{A}$, i.e.\ $A$ is clopen in the $\mathcal{A}$-topology. It follows from Corollary \ref{clopen} that $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) = {^*A} \cap \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ so that $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A)$ is Loeb measurable and $$\mu(A) = {^* \mu}({^* A}) = {^* \mu}_L({^* A} \cap \text{\textit{ns}}\hspace{0.5mm} ({^*X})) = {^* \mu}_L(\text{\textit{st}}\hspace{0.2mm} ^{-1}(A))$$ A general set $A \in \mathcal{A}_{\mu}$ can be approximated by $B_{\epsilon} \in \mathcal{A}$, so that for any standard $\epsilon>0$, $( A \Delta B_{\epsilon}) \subset (X \setminus X_{\epsilon})$. This yields $$\text{\textit{st}}\hspace{0.2mm} ^{-1}(A \Delta B_{\epsilon}) \subset \text{\textit{st}}\hspace{0.2mm} ^{-1} (X \setminus X_{\epsilon}) \subset {^* X} \setminus {^* X}_{\epsilon}$$ The latter subset relation is true since $X_{\epsilon}$ is compact (see Proposition \ref{compact}) and therefore ${^* X}_{\epsilon} \subset \text{\textit{st}}\hspace{0.2mm} ^{-1}(X_{\epsilon})$, by Proposition \ref{topol}. This implies $$N_{^*\mu}(y) \leq \epsilon$$ for $ y \in \text{\textit{st}}\hspace{0.2mm} ^{-1}(A \Delta B_{\epsilon})=\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \Delta \text{\textit{st}}\hspace{0.2mm} ^{-1} (B_{\epsilon}) = \text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \Delta ({^*B_{\epsilon}} \cap \text{\textit{ns}}\hspace{0.5mm} ({^*X}))$.
We have $\text{\textit{st}}\hspace{0.2mm} ^{-1} (B_{\epsilon}) ={^*B_{\epsilon}} \cap \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ and $N_{^* \mu}(y) \approx 0$ for $y \in {^*X} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ which is a Loeb null set. Hence $N_{^*\mu}(y) \leq \epsilon$ for $y \in \text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \Delta {^*B_{\epsilon}}$. Applying the countable comprehension principle to the sequence $({^*B_{\frac{1}{n}}})_{n \geq 1}$ gives a $B \in {^*\mathcal{A}}$ such that $N_{^*\mu}(y) \approx 0$ for $y \in (\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \Delta (B \cap \text{\textit{ns}}\hspace{0.5mm} ({^*X})))$ and hence also for $y \in (\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \Delta B)$. This implies that $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A)$ is Loeb measurable and ${^* \mu}_L(B) = {^* \mu}_L(\text{\textit{st}}\hspace{0.2mm} ^{-1}(A))$. By definition of the Loeb measure, we have ${^* \mu}_L({^* A})= {^* \mu}_L(B)$. Then the assertion follows from $\mu(A) = {^* \mu}({^* A})$.$\hfill\square$\\ \subsection{Liftings of Measurable Functions} We want to show that a Loeb-measurable function $f: {\Omega} \rightarrow K$ can be lifted to an internal measurable function $F: {\Omega} \rightarrow {^*K}$ with hyperfinite image (see Figure \ref{onelegged}). \\ Such a function is $^*$-simple, i.e.\ there exist $N \in {^*\mathbb{N}}$, measurable sets $A_1,A_2,\dots,A_N$ and $y_1,y_2,\dots,y_N \in {^*K}$ with $F(x)=\sum_{i=1}^N y_i \chi_{A_i}$, where $\chi_{A_i}$ denotes the characteristic function of $A_i$. \\ \begin{figure*}[h] \begin{center} \begin{tikzpicture}[ node distance=3cm, auto] \matrix (m) [matrix of math nodes, row sep=3em, column sep=3em] { \Omega & & {^*K} \\ & & K \\ }; \path[->] (m-1-1) edge node[auto] {$F$ } (m-1-3); \path[->] (m-1-1) edge node[auto] {$f$ } (m-2-3); \path[->,dashed] (m-1-3) edge node[auto] {$\text{\textit{st}}\hspace{0.2mm} _K$ } (m-2-3); \end{tikzpicture} \caption {One-legged lifting of $f$.
The standard-part map $\text{\textit{st}}\hspace{0.2mm} _K$ is defined on the subset $ \text{\textit{ns}}\hspace{0.5mm} ({^* K})$.} \label{onelegged} \end{center} \end{figure*} The following Theorem shows that all Loeb-measurable functions can be lifted and that the standard part of internal measurable functions is Loeb-measurable. \begin{theorem} Let $(\Omega,\mathcal{S},\nu)$ be an internal measure space. Assume that ${\nu}$ has values in $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$ and is globally bounded. Let $({\Omega}, {\mathcal{S}}_L, {\nu}_L)$ be the associated Loeb measure space. \begin{enumerate} \item Let $f: {\Omega} \rightarrow K$ be ${\mathcal{S}}_L$-measurable and suppose that $K$ is separable. Then there is an internal ${\mathcal{S}}$-measurable and $^*$-simple function $F:{\Omega} \rightarrow {^* K}$ such that $f(x)=\text{\textit{st}}\hspace{0.2mm} _K(F(x))$ holds for ${\nu}_L$-almost every $x\in {\Omega}$. \item Conversely, if $F: \Omega \rightarrow {^* K}$ is an internal $\mathcal{S}$-measurable function such that $F(x) \in \text{\textit{ns}}\hspace{0.5mm} ({^* K})$ for ${\nu}_L$-almost every $x\in {\Omega}$, then $f:=\text{\textit{st}}\hspace{0.2mm} _K \circ F$ is defined $\nu_L$-almost everywhere and ${\mathcal{S}}_L$-measurable. \end{enumerate} \label{firstlifting} \end{theorem} {\em Proof.} (a) We follow Anderson's proof for real-valued measure spaces (see \cite{anderson82} 5.3). Choose a countable base $U_1=K, U_2, U_3, \dots$ of clopen sets in $K$, take their preimages $f^{-1}(U_n) \in \mathcal{S}_L$ and replace them by $A_n \in \mathcal{S}$ such that $\nu_L(f^{-1}(U_n)\Delta A_n) = 0$. Then define an approximating sequence of $\mathcal{S}$-measurable simple functions $f_n : \Omega \rightarrow {^*K}$ such that $f_n(A_k) \subset {^*U_k}$ for all $k \leq n$. By countable comprehension, $(f_n)_{n \in \mathbb{N}}$ extends to an internal sequence and $F$ can be defined as $f_{N}$ for some infinite $N\in{^*\mathbb{N}}$.
$F$ is $\mathcal{S}$-measurable with hyperfinite image in ${^* K}$ and one shows that $\text{\textit{st}}\hspace{0.2mm} (F(x))=f(x)$ for $x \in \Omega \setminus \bigcup_{n=1}^{\infty}\ (f^{-1}(U_n)\Delta A_n)$. Although $\mathcal{S}_L$ is not a $\sigma$-algebra, the countable union of Loeb null sets is again a null set. \\ (b) Let $A \subset K$ be a clopen set. Then $$f^{-1}(A)=F^{-1}(\text{\textit{st}}\hspace{0.2mm} ^{-1}(A)) = F^{-1}({^*A} \cap \text{\textit{ns}}\hspace{0.5mm} ({^*K})) = F^{-1}({^* A}) \cap F^{-1}( \text{\textit{ns}}\hspace{0.5mm} ({^*K})) . $$ Since $F$ is $\mathcal{S}$-measurable, $F^{-1}({^* A})\in \mathcal{S}$. The assumption on $F$ ensures that the intersection with $F^{-1}( \text{\textit{ns}}\hspace{0.5mm} ({^*K}))$ reduces this set only by a $\nu_L$-null set and hence $f^{-1}(A) \in \mathcal{S}_L$. \hspace*{1cm} $\hfill\square$\\ Besides liftings of Loeb-measurable functions on an internal measure space $\Omega$, there are also liftings of {\em standard} measurable functions $f:X \rightarrow K$. Natural candidates are the nonstandard extension ${^* f}$ and a lifting $F$ of the composition $f \circ \text{\textit{st}}\hspace{0.2mm} $ as constructed above (see Figure \ref{two-legged}). \begin{figure*}[h] \begin{center} \begin{tikzpicture}[ node distance=3cm, auto] \matrix (m) [matrix of math nodes, row sep=3em, column sep=3em] { {^*X} & & {^*K} \\ X & & K \\ }; \path[->] (m-1-1) edge node[auto] {${^*f}, F$ } (m-1-3); \path[->,dashed] (m-1-1) edge node[auto] {$\text{\textit{st}}\hspace{0.2mm} _X$} (m-2-1); \path[->] (m-2-1) edge node[auto] {$f$ } (m-2-3); \path[->,dashed] (m-1-3) edge node[auto] {$\text{\textit{st}}\hspace{0.2mm} _K$ } (m-2-3); \end{tikzpicture} \caption {Two-legged lifting of $f$. 
The standard-part maps $\text{\textit{st}}\hspace{0.2mm} $ are defined on the subsets $ \text{\textit{ns}}\hspace{0.5mm} ({^* X})$ resp.\ $ \text{\textit{ns}}\hspace{0.5mm} ({^* K})$.} \label{two-legged} \end{center} \end{figure*} \begin{theorem} Let $(X,\mathcal{A},\mu)$ be a measure space, $\mathcal{A}$ an algebra, $(X,\mathcal{A}_{\mu},\mu)$ the extended measure space, $({^*X},{^*\mathcal{A}},{^*\mu})$ the internal measure space. Assume that ${^*\mu}$ has values in $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$. Let $({^*X}, {^*\mathcal{A}}_L, {^*\mu}_L)$ be the corresponding Loeb measure space. Let $f: X \rightarrow K$ be an $\mathcal{A}_{\mu}$-measurable function. Then ${^* f}: {^*X} \rightarrow {^*K}$ is ${^*(\mathcal{A}_{\mu})}$- and ${^*\mathcal{A}}_L$-measurable and $\text{\textit{st}}\hspace{0.2mm} _K({^* f}(y))=f(\text{\textit{st}}\hspace{0.2mm} _X(y))$ for ${^* \mu}_L$-almost all $y \in {^*X}$, i.e.\ the diagram in Figure \ref{two-legged} commutes almost everywhere. \label{commute} \end{theorem} {\em Proof. } By transfer, ${^* f}$ is $({^*(\mathcal{A}_{\mu})}, {^*B(K)})$-measurable. Since ${^*(\mathcal{A}_{\mu})} \subset {^*\mathcal{A}}_L$, ${^*f}$ is also measurable with respect to ${^*\mathcal{A}}_L$. Since $f$ is $\mathcal{A}_{\mu}$-continuous (see Corollary \ref{cont}), the proof of the commutative diagram is easier than in the real case (see \cite{anderson82} 3.7). Let $y \in \text{\textit{ns}}\hspace{0.5mm} ({^* X})$ and set $x=\text{\textit{st}}\hspace{0.2mm} _X(y)$. Then $y \approx x$ and the continuity implies ${^*f}(y) \approx f(x)$ so that $\text{\textit{st}}\hspace{0.2mm} _K({^*f}(y))=f(x)=f(\text{\textit{st}}\hspace{0.2mm} _X(y))$. ${^*X} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^* X})$ is a null set (cf.\ Lemma \ref{nullset}) and this gives the assertion.
$\hfill\square$\\ {\em Remark.} The composition map $f \circ \text{\textit{st}}\hspace{0.2mm} _X : {^* X} \rightarrow K$ is ${^* \mathcal{A}}_L$-measurable and has a ${^* \mathcal{A}}$-measurable lifting $F:{^* X} \rightarrow {^* K}$ (see Theorem \ref{firstlifting}). What are the main differences between ${^* f}$ and $F$? Of course, ${^* f}$ is the natural nonstandard extension of $f$ and ${^* \mathcal{A}}_L$-measurable. In contrast, $F$ is obtained by a non-canonical construction from $f$, but it has additional favourable properties: $F$ is ${^* \mathcal{A}}$-measurable and even $^*$-simple. \subsection{Hyperfinite Spaces} A key technique in nonstandard analysis is {\em hyperfinite approximation}. A well-known example from nonstandard real analysis is the representation of the compact interval $X=[0,1]$ by the hyperfinite set $Y=\{0,\frac{1}{N},\frac{2}{N}, \dots, \frac{N-1}{N},1\}$, where $N \in {^*\mathbb{N}}$ is some infinite natural number. There is a hyperfinite measure space defined on $Y$ and the standard-part map is measurable and measure-preserving. Standard Lebesgue integrals on $X$ can be obtained by a hyperfinite summation over $Y$. This was generalized to Radon probability spaces (see \cite{anderson82}, \cite{albeverio1986}) and it can be shown that Radon measures are push-downs of hyperfinite measure spaces. We show a corresponding statement for measures with values in a complete non-Archimedean field $K$.\\ For a measure space $(Y,\mathcal{S},\tau)$ and a map $p: Y \rightarrow X$, the {\em push-down} of $Y$ to $X$ via $p$ induces a measure space on $X$. A set $A \subset X$ is measurable if $p^{-1}(A) \in \mathcal{S}$ and the measure $p(\tau)$ is defined by $p(\tau)(A)=\tau(p^{-1}(A))$. \begin{theorem} Let $\mathcal{A}$ be an algebra, $(X,\mathcal{A},\mu)$ a measure space, $(X,\mathcal{A}_{\mu},\mu)$ the extended measure space and $({^*X},{^*\mathcal{A}},{^*\mu})$ the corresponding internal measure space.
Assume that ${^*\mu}$ has values in $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$. Then there is a hyperfinite partition of ${^*X}$ such that all partition classes are contained in ${^*\mathcal{A}}$, an equivalence relation $\thicksim$ on ${^* X}$ defined by the partition and a hyperfinite set $Y={^* X}/\thicksim$. It induces a measure $\nu={^*\mu}/\thicksim$ and an internal hyperfinite measure space $(Y,\mathcal{S},\nu)$ such that all internal subsets of $Y$ are measurable. Let $(Y,\mathcal{S}_L,\nu_L)$ be the corresponding Loeb measure space. Then the standard-part map $\text{\textit{st}}\hspace{0.2mm} _Y: Y \rightarrow X$ is well-defined $\mathcal{S}_L$-almost everywhere and $(X,\mathcal{A}_{\mu},\mu)$ is the push-down of $Y$ to $X$ via $\text{\textit{st}}\hspace{0.2mm} _Y$. In particular, $\text{\textit{st}}\hspace{0.2mm} _Y$ is measurable and measure-preserving. \label{hyperfinite-rep} \end{theorem} {\em Proof.} Let $A_1, A_2, \dots, A_n$ be clopen sets in $X$. As in \cite{albeverio1986} 3.4.10, let $P_{A_1,A_2,\dots A_n}$ be the set of hyperfinite partitions of ${^* X}$ into ${^* \mu}$-measurable sets such that each ${^* A_i}$ ($i=1,\dots,n$) is a disjoint union of partition classes. If the nonstandard extension is sufficiently saturated (the usual countable saturation suffices for separable spaces $X$), we obtain a hyperfinite partition $P=\{R_1, R_2, \dots, R_N \}$ of ${^* X}$ into ${^* \mu}$-measurable sets such that each set ${^* A}$ (where $A$ is clopen in $X$) is a disjoint union of $R_i$'s. $P$ defines an equivalence relation $\thicksim$ on ${^* X}$ and $Y:={^* X}/\thicksim$ is a hyperfinite set with $N$ elements. Let $\mathcal{S} \subset {^*\mathcal{A}}$ be the internal set of all hyperfinite unions of partition classes $R_i$ and for $B \in \mathcal{S}$ define $\nu(B)={^*\mu}(B)$. 
Then $(Y,\mathcal{S},\nu)$ is an internal hyperfinite measure space where all internal subsets of $Y$ (resp.\ their corresponding union of partition classes in ${^*X}$) are measurable. We show that the standard-part map $\text{\textit{st}}\hspace{0.2mm} =\text{\textit{st}}\hspace{0.2mm} _X: \text{\textit{ns}}\hspace{0.5mm} ({^* X}) \rightarrow X$ factorizes via the equivalence relation $\thicksim$ which defines $Y$. Anderson \cite{anderson82} calls this property {\em S-separating} and it means that $\text{\textit{st}}\hspace{0.2mm} (y)$ does not depend on the choice of $y \in R_i$. To this end, suppose that $y,y' \in R_i$. For every clopen set $A \subset X$, the elements $y$ and $y'$ either both belong to ${^* A}$ or both do not, according to whether $R_i \subset {^* A}$ holds or not. This implies that they are in the same monad and $\text{\textit{st}}\hspace{0.2mm} (y)=\text{\textit{st}}\hspace{0.2mm} (y')$, if $y \in \text{\textit{ns}}\hspace{0.5mm} ({^* X})$, or equivalently, $y' \in \text{\textit{ns}}\hspace{0.5mm} ({^* X})$. Let $(Y,\mathcal{S}_L,\nu_L)$ be the Loeb measure space which corresponds to $(Y,\mathcal{S},\nu)$. Recall that $A \in \mathcal{S}_L$ if $A$ is a (possibly external) union of $R_i$'s and $B \in \mathcal{S}$ exists with $N_{\nu}(y) \approx 0$ for $y \in A \Delta B$. In that case, $\nu_L(A)=\text{\textit{st}}\hspace{0.2mm} _K(\nu(B)) \in K$. Subsequently, we prove that $\text{\textit{st}}\hspace{0.2mm} _Y$ is $\mathcal{S}_L$-measurable and measure-preserving. First, let $A \in \mathcal{A}$ be a clopen set. Then $\text{\textit{st}}\hspace{0.2mm} _Y^{-1}(A)= {^* A} \cap \text{\textit{ns}}\hspace{0.5mm} ({^* X})$ which is a (possibly external) union of partition classes.
Since ${^*X} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^* X})$ is a ${^* \mu}_L$- and a $\nu_L$-null set, we obtain $\text{\textit{st}}\hspace{0.2mm} _Y^{-1}(A) \in \mathcal{S}_L$ and also $\nu_L(\text{\textit{st}}\hspace{0.2mm} _Y^{-1}(A)) = {^*\mu}_L({^* A} \cap \text{\textit{ns}}\hspace{0.5mm} ({^* X})) = {^* \mu}({^* A})=\mu(A)$. For general $A \in \mathcal{A}_{\mu}$, one proceeds as in the proof of Theorem \ref{stpart}. Finally, we prove that $A \subset X$ is $\mathcal{A}_{\mu}$-measurable if $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \in \mathcal{S}_L$. Let $\epsilon > 0$ be any standard real number and $X_{\epsilon}$ the set of all $x \in X$ with $N_{\mu}(x) > \epsilon$. It suffices to find a set $C \in \mathcal{A}_{\mu}$ such that $A \cap X_{\epsilon} = C \cap X_{\epsilon} $ (see \cite{rooij} 7.3 and 7.8). The assumption yields a $B \in \mathcal{S}$ such that $ B \Delta \text{\textit{st}}\hspace{0.2mm} ^{-1}(A) $ is a Loeb null set. Hence $N_{^*\mu}(x) \leq \epsilon$ for $ x \in B \Delta \text{\textit{st}}\hspace{0.2mm} ^{-1}(A)$ so that the intersection of this set with ${^* X}_{\epsilon}$ is empty and $\text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \cap {^* X}_{\epsilon} = B \cap {^* X}_{\epsilon}$. We apply the $\text{\textit{st}}\hspace{0.2mm} $-map and get $A \cap X_{\epsilon} = \text{\textit{st}}\hspace{0.2mm} (B) \cap X_{\epsilon}$ since $\text{\textit{st}}\hspace{0.2mm} ({^*X_{\epsilon}})= X_{\epsilon}$ by Propositions \ref{compact} and \ref{topol}. Since $B$ is internal, $\text{\textit{st}}\hspace{0.2mm} (B)$ is closed (see \cite{lr} 28.7) and as $X_{\epsilon}$ is compact, we obtain that $A \cap X_{\epsilon}$ is closed. The same argument applies to $X \setminus A$ since ${^* X} \setminus \text{\textit{st}}\hspace{0.2mm} ^{-1}(A) \in \mathcal{S}_L$. This implies that $(X \setminus A) \cap X_{\epsilon}$ is closed. By the above, $X \setminus (A \cap X_{\epsilon})$ is $\mathcal{A}_{\mu}$-open and hence a union of $\mathcal{A}_{\mu}$-clopen sets $A_i$.
The intersection $(X \setminus (A \cap X_{\epsilon})) \cap X_{\epsilon} = (X \setminus A) \cap X_{\epsilon}$ is not only closed, but also compact (since $X_{\epsilon}$ is compact). It can hence be covered by a finite union of $A_i$'s. The complement of this union in $X$, which we call $C$, is a clopen set and has the desired property $A \cap X_{\epsilon} = C \cap X_{\epsilon} $. $\hfill\square$ \\ {\em Remark:} For an ultrametric space $X$ with nonstandard extension ${^*X}$, the hyperfinite space $Y$ can be chosen as the set of all balls of radius $\epsilon$ in ${^*X}$, where $\epsilon>0$ is a fixed infinitesimal number. The balls form a partition of ${^*X}$ which is $S$-separating since each ball $B_{\epsilon}(y)$ is a subset of the monad of $\text{\textit{st}}\hspace{0.2mm} _X(y)$ for $y \in \text{\textit{ns}}\hspace{0.5mm} ({^*X})$. The above Theorem says that a measure on an ultrametric space $X$ is determined by a hyperdiscrete measure on the hyperfinite space $Y$ of infinitesimal balls. \section{Integration} \label{sec:int} With the preceding results, it is not surprising that nonstandard extensions can also be used for the integration of functions with values in non-Archimedean fields. For the convenience of the reader we recall some facts from the standard theory \cite{rooij}. \\ For a measure space $(X,\mathcal{R},\mu)$ and a field $K$ as above, the seminorm of a function $f: X \rightarrow K$ is defined as $\| f \|_{\mu} = \sup_{x\in X} |f(x)| \cdot N_{\mu}(x) \in [0,\infty[ \cup \{\infty\}$. Let $\chi_A : X \rightarrow K$ denote the characteristic function of $A \subset X$. For a measurable set $A \in \mathcal{R}$, we have $\|\chi_A\|_{\mu} = \| A \|_{\mu}$. $f$ is a simple function (step function), if $A_1,A_2,\dots,A_n \in \mathcal{R}$ and $x_1,x_2,\dots,x_n \in K$ exist with $f(x)=\sum_{i=1}^n x_i \chi_{A_i}$. 
The integral is a functional on the space of simple functions defined by $\int_X f(x)\ d\mu = \sum_{i=1}^n x_i \mu(A_i)$ and it satisfies the inequality $| \int_X f(x)\ d\mu | \leq \| f \|_{\mu}$. The space of $\mathcal{R}$-simple functions can be completed w.r.t.\ $\|\ \|_{\mu}$ and a function $f: X\rightarrow K$ is called $\mu$-integrable, if a sequence $(f_n)_{n\in \mathbb{N}}$ of $\mathcal{R}$-simple functions exists such that $\lim_{n \rightarrow \infty} \| f - f_n \|_{\mu} = 0$. The integral $\int_X f(x)\ d\mu$ is defined as $\lim_{n\rightarrow \infty} \int_X f_n(x)\ d\mu$. One can show that $\chi_A$ is $\mu$-integrable if and only if $A \in \mathcal{R}_{\mu}$ and this also provides an alternative way to define the extended ring $\mathcal{R}_{\mu}$. For simple functions the notions of $\mu$-{\em integrability} and $\mathcal{R}_{\mu}$-measurability coincide, but for general functions integrability requires an additional boundedness condition as the following Theorem (\cite{rooij} 7.12 and Corollary \ref{cont}) shows: \begin{theorem} Let $(X,\mathcal{R},\mu)$ be a measure space and $K$ a field as above. A function $f: X\rightarrow K$ is $\mu$-integrable if and only if $f$ is locally $\mathcal{R}_{\mu}$-measurable and for every $\epsilon>0$, the set $\{ x\in X : |f(x)|N_{\mu}(x) \geq \epsilon \}$ is $\mathcal{R}_{\mu}$-compact, hence contained in some set $X_{\delta}$ for $\delta >0$. \label{integral} \end{theorem} Now we study the corresponding nonstandard representation of integrals and show that integrals of $^*$-simple functions are sufficient. Let $(\Omega,\mathcal{S},\nu)$ be an internal measure space, $N \in {^*\mathbb{N}}$, $A_1,\dots,A_N \in \mathcal{S}$ and $y_1,\dots,y_N \in {^*K}$. Then the $^*$-simple function $F=\sum_{i=1}^N y_i \chi_{A_i}$ is $^*$-integrable and $\int_{\Omega} F(x)\ d\nu = \sum_{i=1}^N y_i \nu(A_i) \in {^*K}$. The seminorm on standard functions extends to internal functions $F: \Omega \rightarrow {^* K}$.
For $^*$-simple functions $F$ we have $\| F\|_{\nu} \in {^*\mathbb{R}}_{\geq 0}$ (not $\infty$, but not necessarily finite). It is useful to put the following restriction on $^*$-integrability in order to relate it to standard integrals. \begin{definition} Let $(\Omega,\mathcal{S},\nu)$ be an internal measure space and $F: \Omega \rightarrow {^*K}$ a $^*$-simple function. $F$ is called $S$-integrable if the following conditions are satisfied: \begin{enumerate} \item $\|F\|_{\nu}$ is a finite hyperreal number, and \item If $A \in \mathcal{S}$ with $\|A\|_{\nu} \approx 0$ then $\| F \cdot \chi_{A} \|_{\nu} \approx 0$, and \item If $A \in \mathcal{S}$ with $F \cdot \chi_A \approx 0$ then $\| F \cdot \chi_{A} \|_{\nu} \approx 0$. \end{enumerate} \end{definition} We have adopted the definition of $S$-integrability \cite{anderson76} from real nonstandard measure theory. If $\mathcal{S}$ is an algebra such that $\|\Omega\|_{\nu}$ is finite, then c) is automatically satisfied. We remark that a $^*$-simple function $F=\sum_{i=1}^N y_i \chi_{A_i}$ is $S$-integrable if $|y_i|$ and $\|A_i\|_{\nu}$ are finite for all $i=1,\dots,N$. But for example $\delta$-functions (which exist as nonstandard {\em functions}) are not $S$-integrable. \\ We already know from Theorem \ref{firstlifting} that a Loeb measurable function can be lifted to a $^*$-simple function, and conversely, that the standard part of an internal measurable function is Loeb measurable. The following Theorem relates $S$-integrability to Loeb integrability. Since Loeb measurable functions have values in a standard non-Archimedean field, the conventional definition of integrals can be used here. \begin{theorem} Let $(\Omega,\mathcal{S},\nu)$ be an internal measure space, assume that $\nu$ has values in $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$ and let $(\Omega,\mathcal{S}_L,\nu_L)$ be the corresponding Loeb space. \begin{enumerate} \item Let $f: \Omega \rightarrow K$ be integrable (in the conventional sense w.r.t.\ the Loeb measure).
Then $f$ has an $S$-integrable and $^*$-simple lifting $F : \Omega \rightarrow {^*K}$. \item Let $K$ be locally compact and $F: \Omega \rightarrow {^* K}$ an $S$-integrable and $^*$-simple function such that $f= \text{\textit{st}}\hspace{0.2mm} _K \circ F$ is defined outside a Loeb null set. Then $f$ is integrable w.r.t.\ the Loeb measure. \end{enumerate} Under the hypotheses of a) or b), \[ \int_{\Omega} f(x)\ d\nu_L = \text{\textit{st}}\hspace{0.2mm} _K \int_{\Omega} F(x)\ d\nu \] \label{internal-int} \end{theorem} {\em Proof.} a): Let $(f_n)_{n\in\mathbb{N}}$ be a sequence of $\mathcal{S}_L$-simple functions which converges to $f$ w.r.t.\ $\|\ \|_{\nu_L}$. It follows from the definition of $\mathcal{S}_L$ that there is an $\mathcal{S}$-simple function $g_n: \Omega \rightarrow K$ for each $n$ such that $f_n = g_n$ outside a Loeb null set. This gives $\| f_n - g_n \|_{\nu_L} = 0$ and $\| f_n - g_n \|_{\nu} \approx 0$. Thus one obtains $\|f-f_n\|_{\nu_L} \approx \|f-f_n\|_{\nu} \approx \|f-g_n\|_{\nu}$. By countable comprehension, $(g_n)_{n\in\mathbb{N}}$ extends to an internal sequence of $^*$-simple functions, and for a suitable infinite index the function $F:=g_N$ satisfies $\|f-F\|_{\nu} \approx 0$. This implies $|f(x)-F(x)| \approx 0$ and thus $\text{\textit{st}}\hspace{0.2mm} _K(F(x))=f(x)$ outside a Loeb null set. This gives a $^*$-simple lifting of $f$. Since $\| F\|_{\nu} \leq \max\{ \|F-f\|_{\nu}, \|f\|_{\nu} \}$, it follows that $\| F\|_{\nu}$ is finite. If $A \in \mathcal{S}$ with $\| A\|_{\nu} \approx 0$, then $\| F \chi_A\|_{\nu} \leq \max \{ \|(F-f)\chi_A\|_{\nu}, \|f \chi_A\|_{\nu} \}$, which is infinitesimal. If $F \cdot \chi_A \approx 0$ then $\| F \cdot \chi_A \|_{\nu} \approx 0$ since $\|A\|_{\nu}$ is finite. This shows that $F$ is $S$-integrable.\\ b): Let $F: \Omega \rightarrow {^* K}$ be an $S$-integrable lifting of $f$ with $F=\sum_{i=1}^N y_i \chi_{A_i}$.
Since $\nu$ is an internal measure with values in $ \text{\textit{ns}}\hspace{0.5mm} ({^*K})$, all $\|A_i\|_{\nu}$ are finite and there is a standard real number $M>0$ such that $N_{\nu}(x) < M$ for all $x \in \bigcup_{i=1}^N A_i$. Let $\epsilon>0$ be any standard real number. We claim that there exists an $\mathcal{S}$-simple function $g :\Omega \rightarrow K$ such that $\| F - g\|_{\nu} < \epsilon$ which implies $\| f - g\|_{\nu_L} \leq \epsilon$ and proves the assertion.\\ First, assume that for all $D \in \mathbb{R}$ there exists $x \in \Omega$ such that $|F(x)| > D$ and $|F(x)| N_{\nu}(x) > \epsilon$. Since $F$ is $S$-integrable, the seminorm $C=\| F\|_{\nu}$ is finite. This implies $N_{\nu}(x) < \frac{C}{D}$ which gives a set $A \in \mathcal{S}$ with $x \in A$, $\| A\|_{\nu} < \frac{C}{D}$ and $\| F \chi_A\|_{\nu} > \epsilon$. Then the countable saturation principle yields the existence of a set $A \in \mathcal{S}$ with $\| A\|_{\nu} \approx 0$ and $\| F \chi_A\|_{\nu} > \epsilon$ which contradicts the second property of $S$-integrability. Hence there is a $D \in \mathbb{R}$ such that for all $x \in \Omega$ one has $|F(x)| \leq D$ or $|F(x)| N_{\nu}(x) < \epsilon$. We may therefore choose the approximating function $g$ below to be zero on the set of $x \in \Omega$ with $|F(x)| > D$. \\ Since $K$ is locally compact, the ball with radius $D$ and center $0$ is a finite disjoint union of balls $B_i$ of radius $r$ with $0<r<\frac{\epsilon}{M}$. We define $g$ to be constant on the $\mathcal{S}$-measurable sets $F^{-1}(B_i)$ and zero elsewhere. The value of $g$ on $F^{-1}(B_i)$ is an arbitrary element of $B_i$. $g$ is an $\mathcal{S}$-simple function. Then $|F(x)-g(x)| < \frac{\epsilon}{M}$ and hence $|F(x)-g(x)| N_{\nu}(x) < \epsilon$ holds for all $x$ with $|F(x)| \leq D$; for $|F(x)| > D$ one has $g(x)=0$ and $|F(x)| N_{\nu}(x) < \epsilon$ by the choice of $D$. This implies $\| F - g\|_{\nu} < \epsilon$ as claimed.\\ \noindent It remains to prove the integral formula.
Under the hypotheses of a) or b), we have a $^*$-simple and $S$-integrable lifting $F: \Omega \rightarrow {^*K}$ of the integrable function $f: \Omega \rightarrow K$. Moreover, for any standard $\epsilon > 0$ we have an $\mathcal{S}$-simple function $g : \Omega \rightarrow K$ with $\| F- g\|_{\nu} < \epsilon$ and $|\int_{\Omega} g\ d\nu -\int_{\Omega} f\ d\nu_L | < \epsilon$. These two inequalities imply $|\int_{\Omega} F\ d\nu -\int_{\Omega} f\ d\nu_L | < \epsilon$, which proves the integral formula. $\hfill \square$ \\ Finally, we consider {\em standard } measure spaces $(X,\mathcal{A}_{\mu},\mu)$ with an algebra $\mathcal{A}$ and integrable functions $f: X \rightarrow K$. By Theorem \ref{hyperfinite-rep} we know that $X$ is the push-down of a hyperfinite measure space $Y$ via the standard-part map. The following result shows that the integral on $X$ can be represented by a hyperfinite summation on the space $Y$. \begin{theorem} Let $(X,\mathcal{A}_{\mu},\mu)$ be the push-down of a hyperfinite measure space $(Y,\mathcal{S}_L,\nu_L)$ via the standard-part map $\text{\textit{st}}\hspace{0.2mm} _Y$ as in Theorem \ref{hyperfinite-rep}. Let $f: X \rightarrow K$ be $\mu$-integrable. Then $f \circ \text{\textit{st}}\hspace{0.2mm} _Y$ is Loeb integrable and there is an $S$-integrable lifting $F: Y \rightarrow {^* K}$ such that $f \circ \text{\textit{st}}\hspace{0.2mm} _Y = \text{\textit{st}}\hspace{0.2mm} _K \circ F$ holds $\mathcal{S}_L$-almost everywhere. The following integrals coincide: \[ \int_X f(x)\ d\mu = \int_{Y} f(\text{\textit{st}}\hspace{0.2mm} _Y(y))\ d\nu_L = \int_Y \text{\textit{st}}\hspace{0.2mm} _K(F(y))\ d\nu_L = \text{\textit{st}}\hspace{0.2mm} _K \int_{Y} F(y)\ d\nu \] \label{intformel} \end{theorem} {\em Proof.} Since $f$ is $\mathcal{A}_{\mu}$-measurable by assumption and $\text{\textit{st}}\hspace{0.2mm} _Y$ is measurable by Theorem \ref{hyperfinite-rep}, we obtain the Loeb measurability of $f \circ \text{\textit{st}}\hspace{0.2mm} _Y$.
We have to show that $f \circ \text{\textit{st}}\hspace{0.2mm} _Y$ is integrable and that the integrals of $f \circ \text{\textit{st}}\hspace{0.2mm} _Y$ and $f$ coincide. The remaining statements then follow from Theorems \ref{internal-int} and \ref{commute}. \\ We thus have to prove a {\em change of variables} statement. Since $f$ is $\mu$-integrable there is a sequence of $\mathcal{A}$-simple functions $f_n : X \rightarrow K$ such that $\lim_{n\rightarrow \infty} \| f - f_n \|_{\mu} = 0$. Since $\text{\textit{st}}\hspace{0.2mm} _Y$ is measurable and $\mu(A) = \nu_L(\text{\textit{st}}\hspace{0.2mm} _Y^{-1}(A))$ for $A \in \mathcal{A}$, it is obvious that $f_n \circ \text{\textit{st}}\hspace{0.2mm} _Y$ is $\mathcal{S}_L$-simple and $\int_X f_n(x)\ d\mu = \int_Y f_n(\text{\textit{st}}\hspace{0.2mm} _Y(y))\ d\nu_L$. We have to show that the sequence $f_n \circ \text{\textit{st}}\hspace{0.2mm} _Y$ converges to $f \circ \text{\textit{st}}\hspace{0.2mm} _Y$, i.e.\ $\lim_{n\rightarrow \infty} \| (f - f_n)\circ \text{\textit{st}}\hspace{0.2mm} _Y \|_{\nu_L} = 0$. It is sufficient to prove that $\| g\circ \text{\textit{st}}\hspace{0.2mm} _Y \|_{\nu_L} \leq \| g \|_{\mu}$ for any function $g: X \rightarrow K$. To this end, we let $y \in Y$ and set $x=\text{\textit{st}}\hspace{0.2mm} _Y(y)$. By definition of the seminorm, the above inequality reduces to $N_{\nu_L}(y) \leq N_{\mu}(x)$. For any $A \in \mathcal{A}$ with $x\in A$, one has $y \in \text{\textit{st}}\hspace{0.2mm} ^{-1}(A) = {^* A} \cap \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ by Corollary \ref{clopen}. ${^*X} \setminus \text{\textit{ns}}\hspace{0.5mm} ({^*X})$ is a Loeb null set and Lemma \ref{compat} yields $\| A\|_{\mu} = \|{^*A}\|_{^*\mu} \approx \|\text{\textit{st}}\hspace{0.2mm} _Y^{-1}(A)\|_{\nu_L}$. The definition of $N_{\mu}(x)$ resp.\ $N_{\nu_L}(y)$ as the infimum of seminorm values of measurable sets containing the point finally shows the inequality.
Note that $\mathcal{S}_L$ usually contains more (and in particular smaller) measurable sets than those of type $\text{\textit{st}}\hspace{0.2mm} _Y^{-1}(A)$, so that this argument yields only the desired inequality and not equality. $\hspace*{1cm} \hfill\square$ \section{Examples and Applications} In this section, we apply our results to the compact ultrametric spaces $X=\mathbb{Z}_p$ and $X^{\times}=\mathbb{Z}_p^{\times}$. The algebras $\mathcal{A}=B(X)$ and $\mathcal{A}=B(X^{\times})$ of clopen subsets consist of finite unions of balls $B_{p^{-n}}(a)=a+p^n \mathbb{Z}_p$. \\ \subsection{$q$-adic Haar Measure} We begin with measures with values in a $q$-adic number field $K=\mathbb{Q}_q$ for a prime number $q \neq p$. There is a translation-invariant {\em Haar measure} on $X$ defined by $\mu(B_{p^{-n}}(a)) = p^{-n}$. Since $|p|=1$ for the $q$-adic absolute value, we have $|\mu(A)|=1$ for any non-empty $A \in \mathcal{A}$. Hence $N_{\mu}(x)=1$ for all $x\in X$ and therefore $\mathcal{A}=\mathcal{A}_{\mu}$. The following internal measure space $(Y,\mathcal{S},\nu)$ is a hyperfinite representation of $(X,\mathcal{A},\mu)$ (see Theorem \ref{hyperfinite-rep}): choose any infinite $N \in {^*\mathbb{N}}$ and set $Y={^*\mathbb{Z}_p}/(p^N)$. Let $\mathcal{S}$ be the algebra of all internal subsets of $Y$ and define $\nu$ as a normalized counting measure: $\nu(\{y \}) = p^{-N}$. Then $\nu(A)=\frac{\# A}{p^N}$, where $\# A \in {^*\mathbb{N}}$ denotes the number of elements, which is well-defined by the transfer principle. The standard-part map $\text{\textit{st}}\hspace{0.2mm} _Y : {^*\mathbb{Z}_p}/(p^N) \rightarrow \mathbb{Z}_p$ is measurable and measure-preserving. Furthermore, $A \subset \mathbb{Z}_p$ is measurable if and only if $\text{\textit{st}}\hspace{0.2mm} _Y^{-1}(A)$ is an internal subset of the hyperfinite space $Y$.
\\ Since $X$ is a compact space without $\mu$-null sets (other than $\varnothing$), the following conditions are equivalent for a function $f: X \rightarrow K$: a) $f$ is $\mu$-measurable, b) $f$ is $B(X)$-continuous, and c) $f$ is $\mu$-integrable. If these conditions are satisfied, $f$ can be lifted to an internal simple $S$-integrable function $F: Y \rightarrow {^*K}$, where we can take $F(x+p^N{^*\mathbb{Z}_p})={^* f}(x)$. This does not depend on the choice of the representative $x$, since $f$ is continuous. Theorem \ref{intformel} implies \begin{equation} \int_X f(x)\ d\mu = \text{\textit{st}}\hspace{0.2mm} \left( \int_Y F(y)\ d\nu \right) = \text{\textit{st}}\hspace{0.2mm} \left( \frac{1}{p^N} \sum_{x=0}^{p^N-1} {^*f}(x) \right) \label{q-adic} \end{equation} We remark that a similar result is well known for real-valued measures \cite{cutland2000loeb}. Consider for example the compact real interval $X=[0,1]$. The corresponding Lebesgue measure space can be represented by the {\em hyperfinite time line} $Y=\{0,\frac{1}{N},\frac{2}{N}, \dots , \frac{N-1}{N} \}$, the algebra of internal subsets of $Y$ and the normalized counting measure defined by $\nu(\{y\})=\frac{1}{N}$. Any Lebesgue-integrable function $f: X \rightarrow \mathbb{R}$ (or $\mathbb{C}$) can be lifted to an internal simple $S$-integrable function $F: Y \rightarrow {{^*\mathbb{R}}}$ (resp.\ ${^*\mathbb{C}})$. For continuous (or, more generally, Riemann-integrable) functions $f$, one can take $F(y)={^* f}(y)$. The following integrals coincide: \begin{equation} \int_X f(x)\ d\lambda = \text{\textit{st}}\hspace{0.2mm} \left( \int_Y F(y)\ d\nu \right) = \text{\textit{st}}\hspace{0.2mm} \left( \frac{1}{N} \sum_{k=0}^{N-1} {^*f}(\frac{k}{N}) \right) \label{real} \end{equation} Although \eqref{q-adic} and \eqref{real} look very similar, there are important differences: $|\frac{1}{p^N}| = 1$ for the $q$-adic norm, whereas $|\frac{1}{N}| \approx 0$ for the real norm.
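Formula \eqref{q-adic} can be spot-checked numerically. The following Python sketch (the helper names are ours, not the paper's; exact rational arithmetic via the standard library) computes the finite averages $\frac{1}{p^n}\sum_{x=0}^{p^n-1} f(x)$ for the indicator function of a ball $a+p^k\mathbb{Z}_p$. Since $|1/p^n|_q = 1$, these averages are not merely close to, but exactly equal to, the Haar measure $p^{-k}$ of the ball once $n \geq k$.

```python
from fractions import Fraction

def riemann_average(f, p, n):
    """Finite analogue of (1/p^N) * sum_{x < p^N} f(x), with exact rationals."""
    return Fraction(1, p**n) * sum(f(m) for m in range(p**n))

def ball_indicator(a, p, k):
    """Indicator of the ball a + p^k Z_p, read off on residues mod p^n (n >= k)."""
    return lambda m: Fraction(int(m % p**k == a % p**k))

p, k, a = 5, 2, 7
averages = [riemann_average(ball_indicator(a, p, k), p, n) for n in range(k, k + 3)]
# each average equals mu(a + p^k Z_p) = p^(-k) = 1/25 exactly
```

The exactness at every finite stage mirrors the contrast drawn above: in the real case \eqref{real} one needs a genuine limit, while here the counting weight $1/p^n$ is a $q$-adic unit.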
In the $q$-adic case, only the internal subsets of $Y$ are measurable. For real-valued measures there are additional external Loeb measurable subsets. Needless to say, the space of Lebesgue-integrable functions contains discontinuous functions, which makes the construction of a $^*$-simple lifting less obvious. \subsection{Kubota-Leopoldt $p$-adic Zeta Function} \label{klzeta} Now let $p\neq 2$ and $K=\mathbb{Q}_p$. It is well known that there exists no $p$-adic Haar {\em measure}, but only a Haar {\em distribution}, on $X=\mathbb{Z}_p$ and $X^{\times}=\mathbb{Z}_p^{\times}$. However, for any infinite $N \in {^*\mathbb{N}}$ there is an {\em internal Haar measure} on the hyperfinite set $Y=\{0,1,\dots,p^N-1\}$ given by $\nu(\{a\})=\frac{1}{p^N}$. The integral of any internal function $F:Y \rightarrow {^*K}$ is well defined. Any standard function $f: \mathbb{N} \rightarrow K$ can be uniquely lifted to $Y$; by restriction to $\mathbb{N}$, a function $f: \mathbb{Z}_p \rightarrow K$ can thus also be lifted to an internal function $F:Y \rightarrow {^*K}$. We call this the {\em interpolation lifting} of $f$ and an explicit representation is given by the $^*$-finite Mahler polynomial $$ F(y) = \sum_{n=0}^{p^N-1} a_n \binom{y}{n} \text{ where } a_n = \sum_{i=0}^n (-1)^{n-i} \binom{n}{i} f(i) $$ If $f$ is continuous then \begin{equation} \text{\textit{st}}\hspace{0.2mm} _{K}(F(y)) \approx f(\text{\textit{st}}\hspace{0.2mm} _X(y)) \label{Fcompat} \end{equation} Observe that there is a unique interpolation lifting $F$, but there are also other liftings satisfying the relation \eqref{Fcompat}. If the integral $\int_Y F\ d\nu$ is nearstandard then the standard part gives the {\em Volkenborn integral} \cite{volkenborn1}. It follows from Lemma \ref{convergence} that $\text{\textit{st}}\hspace{0.2mm} _K(\int_Y F d\nu)$ does not depend on the infinite number $N$ and the hyperfinite space $Y$.
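Both the Mahler coefficients $a_n$ and the finite averages underlying the Volkenborn integral can be computed with exact integer arithmetic. The Python sketch below (our own helper names, standard library only) verifies that for a polynomial such as $f(x)=x^2$ only finitely many Mahler coefficients are nonzero and the finite expansion reproduces $f$ on $\mathbb{N}$; the function `volkenborn_average` computes the finite stages $\frac{1}{p^n}\sum_{m<p^n} m^k$, which converge $p$-adically to the Bernoulli number $B_k$.

```python
from fractions import Fraction
from math import comb

def mahler_coeffs(f, n_max):
    """Mahler coefficients a_n = sum_{i<=n} (-1)^(n-i) C(n,i) f(i), i.e. iterated
    forward differences of f at 0."""
    return [sum((-1)**(n - i) * comb(n, i) * f(i) for i in range(n + 1))
            for n in range(n_max + 1)]

def mahler_eval(coeffs, y):
    """Evaluate the finite Mahler expansion sum_n a_n * C(y, n) at y in N."""
    return sum(a * comb(y, n) for n, a in enumerate(coeffs))

def volkenborn_average(k, p, n):
    """Finite stage (1/p^n) * sum_{m < p^n} m^k of the Volkenborn integral of y^k."""
    return Fraction(sum(m**k for m in range(p**n)), p**n)

def padic_val(x, p):
    """p-adic valuation of a nonzero Fraction."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

coeffs = mahler_coeffs(lambda x: x**2, 6)   # expect [0, 1, 2, 0, 0, 0, 0]
```

A usage example: `padic_val(volkenborn_average(2, 5, 2) - Fraction(1, 6), 5)` measures how closely the finite average approximates $B_2 = \frac{1}{6}$ in $\mathbb{Q}_5$.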
We consider the internal function $F: Y \rightarrow {^*K}$ given by $F(y)=y^n$ where $n \in \mathbb{N}$. By transfer of a classical formula on sums of powers and Bernoulli numbers, the integral of $F$ can be easily computed: \[ \int_Y y^n\ d\nu = \frac{1}{p^N} \sum_{m=0}^{p^N-1} m^n = \frac{1}{n+1} \sum_{j=0}^n \binom{n+1}{j} B_j p^{N\cdot(n+1-j)} \frac{1}{p^N} \] The last sum runs through a finite set and the standard-part of all summands with $j<n$ vanishes since $\text{\textit{st}}\hspace{0.2mm} (p^N)=0$. The remaining term equals the Bernoulli number $B_n$ which is closely related to the value of the Riemann zeta function at the negative integer $1-n$ (see \cite{leopoldt}): \[ \text{\textit{st}}\hspace{0.2mm} \left(\int_Y y^n\ d\nu\right) = B_n = (-1)^{n-1} n \zeta(1-n) \] It is well known that $B_n$ and $\zeta(1-n)$ vanish for odd $n\geq 3$.\\ Now let $Y^{\times} \subset Y$ be the subset of integers prime to $p$. We restrict $\nu$ to $Y^{\times}$ and obtain again a hyperfinite measure space. The obvious decomposition \[ \sum_{m=0}^{p^N -1} m^n = p^n \sum_{m=1}^{p^{N-1}-1} m^n + \sum_{\substack{m=0 \\ p\, \nmid\, m}}^{p^N-1} m^n\] yields the integral formula \[ (1-p^{n-1}) \text{\textit{st}}\hspace{0.2mm} \left(\int_Y y^n\ d\nu \right) = \text{\textit{st}}\hspace{0.2mm} \left(\int_{Y^{\times}} y^n\ d\nu\right) = (-1)^{n-1} n (1-p^{n-1}) \zeta(1-n) \] Below, we will use similar integrals for nonstandard versions of $p$-adic zeta functions. \\ From now on we assume that $K=\mathbb{C}_p$. Elements in $\mathbb{Z}_p^{\times}$ can be uniquely written as $x=\omega(x) \langle x \rangle$ where $\omega$ is the Teichmüller character, $\omega(x)$ a $(p-1)$-st root of unity and $\langle x \rangle \in 1+p\mathbb{Z}_p$. 
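The decomposition $x = \omega(x)\langle x \rangle$ is effectively computable: $\omega(m)$ is the $p$-adic limit of the sequence $m^{p^j}$, so its residue modulo $p^k$ is $m^{p^{k-1}} \bmod p^k$. The short Python sketch below (helper names are ours, not the paper's) verifies the defining properties $\omega(m)^{p-1} = 1$, $\omega(m) \equiv m \bmod p$, and $\langle m \rangle \in 1 + p\mathbb{Z}_p$ to finite precision.

```python
def teichmuller(m, p, k):
    """omega(m) mod p^k: since omega(m) = lim_j m^(p^j), the stage j = k-1
    already gives the correct residue mod p^k."""
    return pow(m, p**(k - 1), p**k)

def angle(m, p, k):
    """<m> = m / omega(m) mod p^k, an element of 1 + p Z_p (requires Python 3.8+
    for modular inverses via pow(..., -1, mod))."""
    return (m * pow(teichmuller(m, p, k), -1, p**k)) % p**k

p, k = 7, 4
units = [m for m in range(1, p**k) if m % p != 0][:50]
```

Note that $\omega(m)^{p-1} \equiv 1 \bmod p^k$ holds exactly here because $m^{p^{k-1}(p-1)} = m^{\varphi(p^k)}$, by Euler's theorem.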
It is well known that ${\langle x \rangle}^s=\exp_p( s \log_p \langle x \rangle) $ is well-defined, continuous and analytic in $s \in \mathbb{C}_p$ if $|s|<p^{(p-2)/(p-1)}$: \begin{equation} {\langle x \rangle}^s = \sum_{n=0}^{\infty} (\log_p \langle x \rangle)^n \frac{s^n}{n!} = \sum_{n=0}^{\infty} \binom{s}{n} (\langle x \rangle - 1)^n \label{power-exp} \end{equation} Let $i \in \mathbb{Z}/(p-1)\mathbb{Z}$. Then $\omega^{1-i}$ is a well-defined power of the Teichmüller character. The $p$-adic zeta functions $\zeta_{p,i}(s) = L_p(s,\omega^{1-i})$ can be defined as a $p$-adic Mellin transform using the Volkenborn integral: \begin{equation} \zeta_{p,i}(s) = \frac{(-1)^{i-1}}{s-1} \int_{X^{\times}} \omega(x)^{1-i} \langle x \rangle^{1-s}\ dx \label{kubota-org} \end{equation} This is basically the original definition of Kubota and Leopoldt. $\zeta_{p,i}(s)$ is a $p$-adic meromorphic function and $\zeta_{p,1}(s)=L_p(s,1)$ has a simple pole at $s=1$. \\ Now we give nonstandard formulas for $\zeta_{p,i}(s)$. \begin{align} \zeta_{p,i}(s) & = \frac{(-1)^{i-1}}{s-1} \text{\textit{st}}\hspace{0.2mm} \left ( \int_{Y^{\times}} \omega(y)^{1-i} \langle y \rangle^{1-s} \ d\nu \right) \label{kubota} \\ & = \frac{(-1)^{i-1}}{s-1} \text{\textit{st}}\hspace{0.2mm} \left( \int_{Y^{\times}} \omega(y)^{1-i} \sum_{n=0}^{M} (\log_p \langle y \rangle)^n \frac{(1-s)^n}{n!} \ d\nu \right) \label{kubota-logp} \\ & = \frac{(-1)^{i-1}}{s-1} \text{\textit{st}}\hspace{0.2mm} \left( \int_{Y^{\times}} \omega(y)^{1-i} \sum_{n=0}^M \binom{1-s}{n} (\langle y \rangle -1)^{n} \ d\nu \right) \label{kubota-exp} \end{align} One has $|\log_p\langle y \rangle| \leq \frac{1}{p}$ and $ |\langle y \rangle -1| \leq \frac{1}{p}$. Furthermore, $|\frac{(1-s)^n}{n!}| \leq p^{n(p-2)/(p-1)} | \frac{1}{n!}|$ and $|\binom{1-s}{n}| \leq p^{n(p-2)/(p-1)} |\frac{1}{n!}|$.
An easy computation shows that there exists an $M \in {^*\mathbb{N}}$ such that for all $y \in Y^{\times}$ and $n>M$ $$(\log_p \langle y \rangle)^n \frac{(1-s)^n}{n!} \frac{1}{p^N} \approx 0 \text{ and } \binom{1-s}{n} (\langle y \rangle -1)^{n} \frac{1}{p^N} \approx 0$$ This implies that a hyperfinite summation over $n=0,\dots, M$ suffices in \eqref{kubota-logp} and \eqref{kubota-exp}. $M$ depends only on $N$ and not on $s$. The above formulas \eqref{kubota}, \eqref{kubota-logp} and \eqref{kubota-exp} then follow from Lemma \ref{convergence} and do not depend on the choices of $Y$, $N$ and $M$.\\ The nonstandard representation with hyperfinite sums permits convenient computations. For example, the residue of $\zeta_{p,1}(s)$ at $s=1$ can be easily computed: $\int_{Y^{\times}} d\nu = \frac{(p-1)p^{N-1}}{p^N} = 1- \frac{1}{p}$. All other branches, i.e.\ $i \not\equiv 1 \mod (p-1)$, are holomorphic since the integral vanishes at $s=1$: $\int_{Y^{\times}} \omega(y)^{1-i} d\nu = \frac{p^{N-1}}{p^N} (\omega(1)^{1-i}+\omega(2)^{1-i}+\dots+\omega(p-1)^{1-i})=0$. Another example is contained in section \ref{mascheroni}.\\ The above nonstandard formulas can be reconverted into standard expressions. For example, \eqref{kubota} gives \[ \zeta_{p,i}(s) = \frac{(-1)^{i-1}}{s-1} \lim_{n \rightarrow \infty} \sum_{\substack{m=1 \\ p\, \nmid\, m}}^{p^n} \omega(m)^{1-i} \langle m \rangle^{1-s} \frac{1}{p^n}\] \subsection{Measures and $p$-adic L-functions} \label{measureszeta} Let $p\neq 2$ and $K=\mathbb{C}_p$. It is well known that regularized Bernoulli {\em measures} on $\mathbb{Z}_p^{\times}$ can be used to construct $p$-adic L-functions. Note that the Haar and also the Bernoulli distributions \cite{koblitz} are unbounded in the standard sense. The Bernoulli distribution $\mu_1$ on $X=\mathbb{Z}_p$ is defined by $\mu_1(a+p^n \mathbb{Z}_p) = \frac{a}{p^n} - \frac{1}{2}$ for $0\leq a < p^n$.
It can be regularized by the following transformation: $\mu(U)=\mu_1(U)-2 \mu_1(\frac{1}{2} U)$ for clopen subsets $U \subset \mathbb{Z}_p$. The measure $\mu$ coincides with $\mu_{1,\frac{1}{2}}$ in \cite{koblitz} II.5 and $E_2$ in \cite{washington} 12.2. We compute the values of $\mu$. Let $0\leq a < p^n$. We have $\mu(a+p^n \mathbb{Z}_p) = \frac{a}{p^n} - \frac{1}{2} - 2( \{ \frac{a/2}{p^n} \} - \frac{1}{2})$ where $\{\ \}$ denotes the fractional part of a $p$-adic number. We conclude that $\mu(a+p^n \mathbb{Z}_p) = \frac{1}{2}$ if $a$ is even. If $a=2b+1$ is an odd integer, then $\mu(a+ p^n \mathbb{Z}_p) = \frac{1}{p^n} -2 \{ \frac{1/2}{p^n} \} + \frac{1}{2}$. Now the $p$-adic expansion $\frac{1}{2} = \frac{p+1}{2} + \frac{p-1}{2} p + \frac{p-1}{2} p^2 + \dots$ shows that $2 \{ \frac{1/2}{p^n} \} = (p+1)p^{-n} + (p-1)p^{-n+1} + \dots + (p-1)p^{-1} = p^{-n}+1$ and hence $\mu(a+p^n \mathbb{Z}_p)= - \frac{1}{2}$ in that case. This shows that $\mu$ only takes the values $\pm \frac{1}{2}$ on clopen balls. \\ The $p$-adic zeta functions $\zeta_{p,i}(s)$, and, more generally, $p$-adic L-functions $L_p(s,\chi)$ for arbitrary Dirichlet characters $\chi$, can be defined as a $p$-adic Mellin transform of a measure on $X^{\times}=\mathbb{Z}_p^{\times}$ (see for example \cite{washington} ch.\ 12). The above regularized Bernoulli measure $\mu$ can be used to define $p$-adic L-functions $L_p(s,\omega^k)$ for powers of the Teichmüller character $\omega$ and one has \[ \zeta_{p,i}(s) = L_p(s,\omega^{1-i}) = \frac{-1}{1-\omega(2)^{1-i} \langle 2 \rangle^{1-s}} \int_{X^{\times}} \omega(x)^{-i} \langle x \rangle^{-s} d\mu \] Now let $N \in {^*\mathbb{N}}$ be infinite, $Y={^*\mathbb{Z}_p}/p^N {^*\mathbb{Z}_p}$ and $Y^{\times} = ({^*\mathbb{Z}_p}/p^N {^*\mathbb{Z}_p})^{\times}$ hyper\-finite spaces with a measure defined by $\nu(\{a+p^N {^*\mathbb{Z}_p}\})=\frac{(-1)^a}{2}$ for $0\leq a < p^N$ (see Proposition \ref{hyperfinite} and subsequent examples).
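This case distinction can also be verified mechanically. In the Python sketch below (exact rationals; the helper names are ours), $b$ denotes the representative of $\frac{a}{2}$ in $\{0,\dots,p^n-1\}$, so that $\{ \frac{a/2}{p^n} \} = \frac{b}{p^n}$, and the computed values come out exactly as $\frac{(-1)^a}{2}$.

```python
from fractions import Fraction

def mu_ball(a, p, n):
    """mu(a + p^n Z_p) = mu_1(a + p^n Z_p) - 2 * mu_1((1/2)(a + p^n Z_p)),
    where mu_1(c + p^n Z_p) = c/p^n - 1/2 and (1/2)(a + p^n Z_p) = b + p^n Z_p."""
    b = (a * pow(2, -1, p**n)) % p**n      # representative of a/2 mod p^n
    mu1 = lambda c: Fraction(c, p**n) - Fraction(1, 2)
    return mu1(a) - 2 * mu1(b)

p, n = 5, 2
values = [mu_ball(a, p, n) for a in range(p**n)]
# alternating +1/2, -1/2 according to the parity of a
```

The computation makes the mechanism transparent: $2b$ equals $a$ for even $a$ and $a + p^n$ for odd $a$, which is exactly the dichotomy derived above.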
Theorem \ref{hyperfinite-rep} shows that the measure space $(X,B(X),\mu)$ is the push-down of $(Y,{^*\mathcal{P}}(Y),\nu)$ via the standard-part map. Although there are obviously measurable subsets of $X$ and $Y$ with zero measure (e.g.\ the union of two balls or points with measure $\frac{1}{2}$ and $-\frac{1}{2}$), the weight functions $N_{\mu}$ and $N_{\nu}$ have the constant value $|\frac{1}{2}|=1$. An internal function $F: Y \rightarrow {^*\mathbb{C}_p}$ is $S$-integrable if and only if $\|F\|_{\nu}$ is finite, i.e.\ if $|F(y)|$ is bounded by a standard real number. \\ As above, the continuous function $f(x)=\langle x \rangle^{s}$ on $X^{\times}$ can be lifted to a function $F(y)=\langle y \rangle^{s}$ on $Y^{\times}$ such that $f(\text{\textit{st}}\hspace{0.2mm} _Y (y))=\text{\textit{st}}\hspace{0.2mm} _K( F(y))$ for $y \in Y^{\times}$. One may take the $^*$-polynomial $F(y)= \sum_{n=0}^M \binom{s}{n} (\langle y \rangle -1)^{n}$ where $M \in {^*\mathbb{N}}$ depends on $N$. We obtain a nonstandard representation of $\zeta_{p,i}(s)$: \begin{align} \zeta_{p,i}(s) = & \ \frac{-1}{1-\omega(2)^{1-i} \langle 2 \rangle^{1-s}} \text{\textit{st}}\hspace{0.2mm} _K \left( \int_{Y^{\times}} \omega(y)^{-i} \langle y \rangle^{-s} d\nu \right) \nonumber \\ = & \ \frac{-1}{1-\omega(2)^{1-i} \langle 2 \rangle^{1-s}} \text{\textit{st}}\hspace{0.2mm} _K \left( \sum_{\substack{m=0 \\ p\, \nmid\, m}}^{p^N} \omega(m)^{-i} \langle m \rangle^{-s} \frac{(-1)^m}{2} \right) \label{zeta} \end{align} A standard interpretation of this integral is the following series: \[ \zeta_{p,i}(s) = \frac{1}{2(1-\omega(2)^{1-i} \langle 2 \rangle^{1-s})} \sum_{n=1}^{\infty} \left( \sum_{\substack{m=p^{n-1} \\ p\, \nmid\, m,\ m \text{ odd} }}^{p^n} \omega(m)^{-i} \langle m \rangle^{-s} - \sum_{\substack{m=p^{n-1} \\ p\, \nmid\, m,\ m \text{ even} }}^{p^n} \omega(m)^{-i} \langle m \rangle^{-s} \right) \] For $s=k$ and $i \equiv k \mod (p-1)$ one has \[ \zeta_{p,i}(k) = L_p(k,\omega^{1-k}) =
\frac{-1}{1-2^{1-k}} \cdot \lim_{n \rightarrow \infty} \sum_{\substack{m=1 \\ p\, \nmid\, m}}^{p^n} \frac{(-1)^m}{2} m^{-k} \] Such Dirichlet series expansions were proved by D. Delbourgo in \cite{delbourgo2006dirichlet} using $p$-adic fractional derivatives. $p$-adic Euler products and series expansions for arbitrary Dirichlet characters are treated in his work \cite{delbourgo2009} and it might be possible to obtain similar results with nonstandard methods, but we leave this for future work. \subsection{$p$-adic Euler-Mascheroni constant} \label{mascheroni} Let $p \neq 2$, $K=\mathbb{C}_p$, $\text{\textit{st}}\hspace{0.2mm} =\text{\textit{st}}\hspace{0.2mm} _K$ the standard-part map and $\zeta_{p,\mathbf{1}}(s)= \frac{1-\frac{1}{p}}{s-1} + \gamma_p + \dots$ the Laurent expansion of the $\mathbf{1}$-branch of the $p$-adic zeta function around $s=1$. The coefficient $\gamma_p$ is called the $p$-adic {\em Euler-Mascheroni} constant. We derive different formulas for this constant.\\ \noindent A) First, $\gamma_p$ can be computed with the Kubota-Leopoldt zeta function and the internal Haar measure $\nu$ (see section \ref{klzeta}). By \eqref{kubota-org} above, we find that $\gamma_p$ is the derivative of $\int_{X^{\times}} \langle x \rangle^{1-s}\ dx$ with respect to $s$ at $s=1$.
We use \eqref{kubota}, set $1-s=p^N \approx 0$ and obtain the following nonstandard formula for $\gamma_p$ : \begin{equation} \gamma_p = \text{\textit{st}}\hspace{0.2mm} \left( \frac{1}{-p^N} \left ( \int_{Y^{\times}} \langle y \rangle^{p^N} d\nu \ - (1-\frac{1}{p}) \right) \right) = \text{\textit{st}}\hspace{0.2mm} \left( \frac{-1}{p^N} \left ( \frac{1}{p^N} \sum_{\substack{m=1 \\ p\, \nmid m}}^{p^N} \langle m \rangle^{p^N} - (1-\frac{1}{p}) \right) \right) \label{first} \end{equation} We have $\langle m \rangle^{p^N} = \omega^{-1}(m) m^{p^N}$ and the sums of $p^N$-th powers can be computed with generalized Bernoulli numbers: \[ \sum_{\substack{m=1 \\ p\, \nmid m}}^{p^N} \langle m \rangle^{p^N} = \sum_{m=1}^{p^N} \omega^{-1}(m) m^{p^N} = p^N B_{p^N,\omega^{-1}} + \frac{p^N}{2} (p^N)^2 B_{p^N-1,\omega^{-1}} + \dots \] By the Theorem of von Staudt-Clausen for generalized Bernoulli numbers \cite{leopoldt}, $|B_{k,\omega^{-1}}|_p \leq p$. Therefore \[ \text{\textit{st}}\hspace{0.2mm} \left( \frac{1}{p^N} \sum_{m=1}^{p^N} \omega^{-1}(m) m^{p^N} \right) = B_{p^N,\omega^{-1}} \] Hence it follows that \begin{equation} \gamma_p = \text{\textit{st}}\hspace{0.2mm} \left( \frac{-1}{p^N} \left( B_{p^N,\omega^{-1}} - (1-\frac{1}{p}) \right) \right) = \lim_{n \rightarrow \infty} \frac{1}{p^n} \left( 1-\frac{1}{p} - B_{p^n,\omega^{-1}} \right) \label{knospe} \end{equation} We remark that a similar formula can be found in \cite{koblitz1978}. 
\\ \noindent B) Alternatively, one can use the expansion \eqref{kubota-logp}, which gives \begin{equation} \gamma_p = \text{\textit{st}}\hspace{0.2mm} \left(- \int_{Y^{\times}} \log_p \langle y \rangle d\nu \right) = \text{\textit{st}}\hspace{0.2mm} \left( - \frac{1}{p^N} \sum_{\substack{m=1 \\ p\, \nmid m}}^{p^N} \log_p(m) \right) \label{kubota-log} \end{equation} One has $\displaystyle\sum_{\substack{m=1 \\ p\, \nmid m}}^{p^N} \log_p(m)= \log_p \left( \displaystyle\prod_{\substack{m=1 \\ p\, \nmid m}}^{p^N} m \right) = \log_p (\Gamma_p(p^N))$, where $\Gamma_p$ is the $p$-adic gamma function. Since $\Gamma_p(0)=1$ and $\Gamma_p(x+1)=-\Gamma_p(x)$ for $|x|<1$, we get $\text{\textit{st}}\hspace{0.2mm} (\frac{1}{p^N} \log_p (\Gamma_p(p^N)) )= (\log_p \Gamma_p)'(0) = \frac{\Gamma_p '(0)}{\Gamma_p(0)} = -\Gamma_p'(1)$. $\Gamma_p$ interpolates the factorial with the factors divisible by $p$ removed, and this yields the following formula: \begin{equation} \gamma_p = \Gamma_p'(1) = \text{\textit{st}}\hspace{0.2mm} \left( \frac{1}{p^N} \left(\Gamma_p(p^N+1) -(-1) \right) \right) = \text{\textit{st}}\hspace{0.2mm} \left( \frac{1}{p^N} \left( \frac{p^N !}{p^{N-1}! \ p^{p^{N-1}}} +1 \right) \right) \label{gamma-fak} \end{equation} The standard version of \eqref{gamma-fak} can be found in \cite{schikhof} 36.A. \\ \noindent C) Next, we use the expansion \eqref{kubota-exp} and set $1-s=p^N$.
This gives \begin{align} \gamma_p = & - \text{\textit{st}}\hspace{0.2mm} \left( \frac{1}{p^N} \int_{Y^{\times}} \sum_{n=1}^M \binom{p^N}{n} (\langle y \rangle -1)^{n} \ d\nu \right) \nonumber \\ = & - \text{\textit{st}}\hspace{0.2mm} \left( \int_{Y^{\times}} \sum_{n=1}^M \frac{(-1)^{n+1}}{n} \sum_{j=0}^n \binom{n}{j} \left(\frac{y} {\omega(y)}\right)^{j} (-1)^{n-j} \ d\nu \right) \nonumber \\ = & - \text{\textit{st}}\hspace{0.2mm} \left( \sum_{n=1}^M \frac{(-1)^{n+1}}{n} \sum_{j=0}^n \binom{n}{j} (-1)^{n-j} \int_{Y^{\times}} \omega^{-j}(y) y^{j} \ d\nu \right) \nonumber \\ = & - \text{\textit{st}}\hspace{0.2mm} \left( \sum_{n=1}^M \frac{1}{n} \sum_{j=0}^n \binom{n}{j} (-1)^{j+1} B_{j,\omega^{-j}} \right) \label{newknospe} \end{align} We also state a standard version of the series \eqref{newknospe} which converges very fast. \[ \gamma_p = - \lim_{m \rightarrow \infty} \left( \sum_{n=1}^m \frac{1}{n} \sum_{j=0}^n \binom{n}{j} (-1)^{j+1} B_{j,\omega^{-j}} \right) \] \noindent D) One may also use the regularized Bernoulli measure and its nonstandard representation $\nu$ as in section \ref{measureszeta}.
By \eqref{zeta} above, \[ \zeta_{p,\mathbf{1}}(s) = \frac{-1}{1- \langle 2 \rangle^{1-s}} \text{\textit{st}}\hspace{0.2mm} \left( \int_{Y^{\times}} \frac{1}{y} \langle y \rangle^{1-s} d\nu \right) \] We compute $\gamma_p$ by setting $1-s=p^N$ and get \begin{align*} \gamma_p = & \ \text{\textit{st}}\hspace{0.2mm} \left( \frac{-1}{1- \langle 2 \rangle^{p^N}} \int_{Y^{\times}} \frac{1}{y} \langle y \rangle^{p^N} d\nu + \frac{1-\frac{1}{p}}{p^N} \right) \\ = & \ \text{\textit{st}}\hspace{0.2mm} \ \frac{1}{1-\langle 2 \rangle^{p^N}} \left( \sum_{\substack{m=1 \\ p\, \nmid m}}^{p^N} \frac{(-1)^{m+1}}{2m} \langle m \rangle^{p^N} + (1-\frac{1}{p})(1-\langle 2 \rangle^{p^N}) \frac{1}{p^N} \right) \end{align*} Since $\langle 2 \rangle^{p^N} = \exp_p(p^N \log_p\langle 2 \rangle) = 1 + p^N \log_p\langle 2 \rangle + \frac{1}{2} p^{2N} (\log_p\langle 2 \rangle)^2 + \dots$, we conclude \begin{equation} \gamma_p = \ \text{\textit{st}}\hspace{0.2mm} \ \frac{1}{1-\langle 2 \rangle^{p^N}} \left( \sum_{\substack{m=1 \\ p\, \nmid m}}^{p^N} \frac{(-1)^{m+1}}{2m} \langle m \rangle^{p^N} - (1-\frac{1}{p})\log_p\langle 2 \rangle (1 + \frac{1}{2} p^N \log_p\langle 2 \rangle ) \right) \label{delb} \end{equation} A similar formula for $\gamma_p$ modulo $p^n \mathbb{Z}_p$ was proved by D. Delbourgo \cite{delbourgo2009} 2.7. \\ \noindent E) Yet another formula for $\gamma_p$ follows from the construction of $p$-adic L-functions in \cite{washington} 5.11. One has the following expansion around $s=1$: \begin{align} (s-1)\cdot L_p(s,\mathbf{1}) = & \frac{1}{p} \sum_{a=1}^{p-1} \exp_p( (1-s) \log_p \langle a \rangle) \sum_{j=0}^{\infty} \binom{1-s}{j} B_j \frac{p^j}{a^j} \nonumber \\ = & \frac{p-1}{p} + \frac{1}{p} \sum_{a=1}^{p-1} \left ( - \log_p \langle a \rangle - B_1 \frac{p}{a} + \frac{1}{2} B_2 \frac{p^2}{a^2} - \frac{1}{3} B_3 \frac{p^3}{a^3} \pm \dots \right) (s-1) + \dots \label{wash} \end{align} The linear term of the expansion is equal to $\gamma_p$ and this series converges very fast.
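Formula \eqref{gamma-fak} is amenable to direct integer computation: $\Gamma_p(p^n+1)$ is, for odd $p$, the product of the units $m \leq p^n$ with $p \nmid m$, and by a generalized Wilson theorem (the product of all units of $\mathbb{Z}/p^n\mathbb{Z}$ equals $-1$ for odd $p$) the numerator $\Gamma_p(p^n+1)+1$ is divisible by $p^n$, so the finite stages of \eqref{gamma-fak} are integers. The Python sketch below (helper names are ours) checks exactly this divisibility; we make no claim here about how fast the stages converge to $\gamma_p$.

```python
from math import factorial

def gamma_p_value(p, n):
    """Gamma_p(p^n + 1) for odd p: the product of m <= p^n with p !| m,
    i.e. p^n! / (p^(n-1)! * p^(p^(n-1)))."""
    return factorial(p**n) // (factorial(p**(n - 1)) * p**(p**(n - 1)))

def gamma_p_stage(p, n):
    """Finite stage (Gamma_p(p^n + 1) + 1) / p^n of formula (gamma-fak);
    an integer by the generalized Wilson theorem."""
    num = gamma_p_value(p, n) + 1
    assert num % p**n == 0
    return num // p**n
```

For example, `gamma_p_value(3, 2)` is $2240 \equiv -1 \bmod 9$, confirming the Wilson-type congruence at the smallest nontrivial level.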
\\ We did a number of calculations using the mathematics software {Sage} \cite{sage} to provide additional numerical evidence for \eqref{first}, \eqref{knospe}, \eqref{kubota-log}, \eqref{newknospe}, \eqref{delb}, \eqref{wash}. For example, the formulas give the following values (compare \cite{delbourgo2009}): \begin{align*} \gamma_3 =\ & 2\cdot3 + 2\cdot 3^2 + 3^3 + 2\cdot 3^4 + 3^5 + 2\cdot 3^6 + 2\cdot 3^7 + 2\cdot 3^8 + O(3^{10}) \\ \gamma_5 =\ & 5 + 3\cdot5^3 + 2\cdot5^5 + 3\cdot5^6 + 4\cdot5^7 + 5^8 + 2\cdot5^9 + O(5^{10})\\ \gamma_7 =\ & 5 + 2 \cdot 7 + 4\cdot 7^2 + 6\cdot 7^3 + 2\cdot7^4 + 6\cdot7^6 + 2\cdot 7^7 + 7^9 + O(7^{10}) \\ \gamma_{11} =\ & 1 + 10 \cdot 11 + 2 \cdot 11^2 + 11^3 + 5 \cdot 11^4 + 5 \cdot 11^5 + 4 \cdot 11^6 + 5 \cdot 11^7 + O(11^8) \\ \gamma_{13} =\ & 4\cdot 13 + 7\cdot13^3 + 8\cdot13^4 + 7\cdot13^5 + 6\cdot13^6 + 4\cdot13^7+9\cdot13^8 +O(13^9) \end{align*} \section*{Acknowledgments} The author wishes to express his thanks to Daniel Delbourgo for drawing the author's attention to Dirichlet series expansions of $p$-adic L-functions and to the $p$-adic Euler-Mascheroni constant. Furthermore, he wants to thank Jörn Steuding for pointing him to the $p$-adic Gamma function.
https://arxiv.org/abs/1612.09108
Nonstandard Measure Spaces with Values in non-Archimedean Fields
The aim of this contribution is to bring together the areas of $p$-adic analysis and nonstandard analysis. We develop a nonstandard measure theory with values in a complete non-Archimedean valued field $K$, e.g. the $p$-adic numbers $\mathbb{Q}_p$. The corresponding theory for real-valued measures is well known by the work of P. A. Loeb, R. M. Anderson and others. We first review some of the standard facts on non-Archimedean measures and briefly sketch the prerequisites from nonstandard analysis. Then internal measures on rings and algebras with values in a nonstandard field ${^*K}$ are introduced. We explain how an internal measure induces a $K$-valued Loeb measure. The standard-part map between a Loeb space and the underlying standard measure space is measurable almost everywhere. We establish liftings from measurable functions to internal simple functions. Furthermore, we prove that standard measure spaces can be described as push-downs of hyperfinite internal measure spaces. This result is an analogue of a well-known theorem on hyperfinite representations of Radon spaces. Then standard integrable functions are related to internal $S$-integrable functions and integrals are represented by hyperfinite sums. Finally, the results are applied to measures and integrals on $\mathbb{Z}_p$ and $\mathbb{Z}_p^{\times}$. We obtain explicit series expansions for the $p$-adic zeta function and the $p$-adic Euler-Mascheroni constant which we use for computations.
https://arxiv.org/abs/0708.0923
Spherical Nilpotent Orbits in Positive Characteristic
Let G be a connected reductive linear algebraic group defined over an algebraically closed field of characteristic p. Assume that p is good for G. In this note we classify all the spherical nilpotent G-orbits in the Lie algebra of G. The classification is the same as in the characteristic zero case obtained by D.I. Panyushev in 1994: for e a nilpotent element in the Lie algebra of G, the G-orbit G.e is spherical if and only if the height of e is at most 3.
\section{Introduction} \label{s:intro} Let $G$ be a connected reductive linear algebraic group defined over an algebraically closed field $k$ of characteristic $p > 0$. With the exception of Subsection \ref{sub:bad}, we assume throughout that $p$ is \emph{good} for $G$ (see Subsection \ref{sub:not} for a definition). A \emph{spherical} $G$-variety $X$ is an (irreducible) algebraic $G$-variety on which a Borel subgroup $B$ of $G$ acts with a dense orbit. Homogeneous spherical $G$-varieties $G/H$, for $H$ a closed subgroup of $G$, are of particular interest. They include flag varieties (when $H$ is a parabolic subgroup of $G$) as well as symmetric spaces (when $H$ is the fixed point subgroup of an involutive automorphism of $G$). We refer the reader to \cite{Bri1.5} and \cite{Bri1} for more information on spherical varieties and for their representation theoretic significance. These varieties enjoy a remarkable property: a Borel subgroup of $G$ acts on a spherical $G$-variety only with a finite number of orbits. This fundamental result is due to M.~Brion \cite{Bri2} and \'E.~B.~Vinberg \cite{Vi1} independently in characteristic $0$, and to F.~Knop \cite[2.6]{Kn} in arbitrary characteristic. Let $\gg = \Lie G$ be the Lie algebra of $G$. The aim of this note is to classify the spherical nilpotent $G$-orbits in $\gg$. In case $k$ is of characteristic zero, this classification was obtained by D.I.\ Panyushev in 1994 in \cite{Pa3}. The classification is the same in case the characteristic of $k$ is good for $G$: for $e \in \gg$ nilpotent, $G\cdot e$ is spherical if and only if the height of $e$ is at most $3$ (Theorem \ref{thm 8.40}). The height of $e$ is the highest degree in the grading of $\gg$ afforded by a cocharacter of $G$ associated to $e$ (Definition \ref{defn:height}). 
The methods employed by Panyushev in \cite{Pa3} do not apply in positive characteristic, e.g.\ parts of the argument are based on the concept of ``stabilizers in general position''; it is unknown whether these exist generically in positive characteristic. Thus a different approach is needed to address the question in this case. \smallskip We briefly sketch the contents of the paper. In Section \ref{s:prelim} we collect the preliminary results we require. In particular, we discuss the concepts of complexity and sphericity, and more specifically the question of complexity of homogeneous spaces. In Subsection \ref{sub:kempf} we recall the basic results of Kempf--Rousseau Theory and in Subsection \ref{sub:assco} we recall the fundamental concepts of associated cocharacters for nilpotent elements from \cite[\S 5]{Ja1} and \cite{Pr1}. There we also recall the grading of $\gg$ afforded by a cocharacter associated to a given nilpotent element and define the notion of the height of a nilpotent element as the highest occurring degree of such a grading, Definition \ref{defn:height}. The complexity of fibre bundles is discussed in Subsection \ref{sub:Fibbun} which is crucial for the sequel. In particular, in Theorem \ref{thm 5.10} we show that the complexity of a fixed nilpotent orbit $G\cdot e$ is given by the complexity of a smaller reductive group acting on a linear space. Precisely, let $\lambda$ be a cocharacter of $G$ that is associated to $e$. Then $P_\lambda$ is the destabilizing parabolic subgroup $P(e)$ defined by $e$, in the sense of Geometric Invariant Theory. Moreover, $L = C_G(\lambda(k^*))$ is a Levi subgroup of $P(e)$. We show in Theorem \ref{thm 5.10} that the complexity of $G\cdot e$ equals the complexity of the action of $L$ on the subalgebra $\bigoplus_{i \geqslant 2}\gg(i,\lambda)$ of $\gg$ where the grading $\gg = \bigoplus_{i \in \ZZ}\gg(i,\lambda)$ is afforded by $\lambda$. 
In Subsection \ref{sub:sec 6.2} we recall the concept of a weighted Dynkin diagram associated to a nilpotent orbit from \cite[\S 5]{Ca}. There we also present the classification of the parabolic subgroups $P$ of a simple algebraic group $G$ admitting a dense action of a Borel subgroup of a Levi subgroup of $P$ on the unipotent radical of $P$ from \cite[Thm.\ 4.1]{Br1}. Here we also remind the reader of the classification of the parabolic subgroups of $G$ with an abelian unipotent radical. In Section \ref{sect:classification} we give the classification of the spherical nilpotent orbits in good characteristic: a nilpotent element $e$ in $\gg$ is spherical if and only if the height of $e$ is at most $3$ (Theorem \ref{thm 8.40}). In Subsections \ref{sub:ht2} and \ref{sub:ht4} we show that orbits of height $2$ are spherical and orbits of height at least $4$ are not, respectively. The subsequent subsections deal with the cases of height $3$ nilpotent classes. For classical groups these only occur for the orthogonal groups. For the exceptional groups the height $3$ cases are handled in Subsection \ref{sub:ex} with the aid of a computer programme of S.M.\ Goodwin. In Section \ref{sec:appl} we discuss some further results and some applications of the classification. In Subsection \ref{sub:dist} we discuss the spherical nilpotent orbits that are distinguished and in Subsection \ref{sub:orth} we extend a result of Panyushev in characteristic zero to good positive characteristic: a characterization of the spherical nilpotent orbits in terms of pairwise orthogonal simple roots, see Theorem \ref{thm 20.2}. In Subsection \ref{sub:ideals} we discuss generalizations of results from \cite{PaRo1} and \cite{PaRo2} to positive characteristic. In Theorem \ref{thm 20.04} we show that if $\aaa$ is an abelian ideal of $\bb$, then $G\cdot \aaa$ is a spherical variety. 
In Subsection \ref{sub:geom} we describe a geometric characterization of spherical orbits in simple algebraic groups from \cite{CaCaCo} and \cite{Carno}. Finally, in Subsection \ref{sub:bad} we very briefly touch on the issue of spherical nilpotent orbits in bad characteristic. Thanks to the fact that a Springer isomorphism between the unipotent variety of $G$ and the nilpotent variety of $\gg$ affords a bijection between the unipotent $G$-classes in $G$ and the nilpotent $G$-orbits in $\gg$ (cf.\ \cite{serre}), there is an analogous classification of the spherical unipotent conjugacy classes in $G$. For results on algebraic groups we refer the reader to Borel's book \cite{Bor} and for information on nilpotent classes we cite Jantzen's monograph \cite{Ja1}. \section{Preliminaries} \label{s:prelim} \subsection{Notation} \label{sub:not} Let $H$ be a linear algebraic group defined over an algebraically closed field $k$. We denote the Lie algebra of $H$ by $\Lie H$ or by $\hh$. We write $H^{\circ}$ for the identity component of $H$ and $Z(H)$ for the centre of $H$. The derived subgroup of $H$ is denoted by $\D H$ and we write $\rank H$ for the dimension of a maximal torus of $H$. The unipotent radical of $H$ is denoted by $R_u(H)$. We say that $H$ is reductive provided $H^\circ$ is reductive. Let $K$ be a subgroup of $H$. We write $C_H(K) = \{ h \in H \mid hxh\inverse = x \ \text{for all}\ x \in K\}$ for the centralizer of $K$ in $H$. Suppose $H$ acts morphically on an algebraic variety $X$. Then we say that $X$ is an $H$-variety. Let $x \in X$. Then $H \cdot x$ denotes the $H$-orbit of $x$ in $X$ and $C_H(x) = \{h \in H \mid h\cdot x = x\}$ is the stabilizer of $x$ in $H$. For $e \in \hh$ we denote the centralizers of $e$ in $H$ and $\hh$ by $C_H(e) = \{h\in H \mid \Ad(h) e = e\}$ and $\cc_\hh(e) = \{x \in \hh \mid [x,e] = 0\}$, respectively.
For $S$ a subset of $H$ we write $\cc_\hh(S) =\{x \in \hh \mid \Ad(s) x = x \ \text{for all}\ s \in S\}$ for the centralizer of $S$ in $\hh$. Suppose $G$ is a connected reductive algebraic group. By $\N$ we denote the nilpotent cone of $\gg$. Let $T$ be a maximal torus of $G$. Let $\Psi = \Psi(G,T)$ denote the set of roots of $G$ with respect to $T$. Fix a Borel subgroup $B$ of $G$ containing $T$ and let $\Pi = \Pi(G, T)$ be the set of simple roots of $\Psi$ defined by $B$. Then $\Psi^+ = \Psi(B,T)$ is the set of positive roots of $G$ with respect to $B$. For $I \subset \Pi$, we denote by $P_I$ and $L_I$ the \emph{standard} parabolic and \emph{standard} Levi subgroups of $G$ defined by $I$, respectively; see \cite[\S 2]{Ca}. For $\beta \in \Psi^+$ write $\beta = \sum_{\alpha \in \Pi} c_{\alpha\beta} \alpha$ with $c_{\alpha\beta} \in \mathbb N_0$. A prime $p$ is said to be \emph{good} for $G$ if it does not divide $c_{\alpha\beta}$ for any $\alpha$ and $\beta$; see \cite[Defn.\ 4.1]{SpSt}. Let $U = R_u(B)$ and set $\uu = \Lie U$. For a $T$-stable Lie subalgebra $\mm$ of $\uu$ we write $\Psi(\mm)=\{\beta \in \Psi^+ \mid \gg_{\beta}\subseteq \mm\}$ for the set of roots of $\mm$ (with respect to $T$). For every root $\beta \in \Psi$ we choose a generator $e_{\beta}$ for the corresponding root space $\gg_{\beta}$ of $\gg$. Any element $e \in\uu$ can be uniquely written as $e = \sum_{\beta \in \Psi^+}c_{\beta}e_{\beta}$, where $c_{\beta} \in k$. The \emph{support} of $e$ is defined as $\mathop{\mathrm{supp}}\nolimits(e)=\{\beta \in \Psi^+ \mid c_{\beta}\neq 0\}$. The variety of all Borel subgroups of $G$ is denoted by $\B$. Note that $\B$ is a single conjugacy class $\B =\{B^g \mid g \in G\}$. Also note the isomorphism $\B \cong G/B$. Let $Y(G) = \Hom(k^*,G)$ denote the set of \emph{cocharacters} (one-parameter subgroups) of $G$; likewise, for a closed subgroup $H$ of $G$, we set $Y(H) = \Hom(k^*,H)$ for the set of cocharacters of $H$.
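Since the assumption that $p$ is good for $G$ is in force for most of the paper, we record for the reader's convenience the well-known list of good primes for the simple types (see \cite{SpSt}).
\begin{rem}
Every prime is good for $G$ of type $A_n$; $p$ is good for $G$ of type $B_n$, $C_n$ or $D_n$ if and only if $p \neq 2$; of type $G_2$, $F_4$, $E_6$ or $E_7$ if and only if $p \neq 2, 3$; and of type $E_8$ if and only if $p \neq 2, 3, 5$. For $G$ reductive, $p$ is good for $G$ if and only if it is good for each simple factor of $\D G$.
\end{rem}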
For $\lambda\in Y(G)$ and $g\in G$ we define $g\cdot \lambda \in Y(G)$ by $(g\cdot \lambda)(t) = g\lambda(t)g^{-1}$ for $t \in k^*$; this gives a left action of $G$ on $Y(G)$. For $\mu \in Y(G)$ we write $C_G(\mu)$ for the centralizer of $\mu$ under this action of $G$ which coincides with $C_G(\mu(k^*))$. By a Levi subgroup of $G$ we mean a Levi subgroup of a parabolic subgroup of $G$. The Levi subgroups of $G$ are precisely the subgroups of $G$ which are of the form $C_G(S)$ where $S$ is a torus of $G$, \cite[Thm.\ 20.4]{Bor}. Note that for $S$ a torus of $G$ the group $C_G(S)$ is connected, \cite[Cor.\ 11.12]{Bor}. \subsection{Complexity} \label{sub:Comp} Suppose the linear algebraic group $H$ acts morphically on the (irreducible) algebraic variety $X$. Let $B$ be a Borel subgroup of $H$. Recall that the \emph{complexity of} $X$ (with respect to the $H$-action on $X$) is defined as \[ \kappa_H(X) := \min_{x\in X}\codim_{X} B\cdot x, \] see also \cite{Bri1}, \cite{Kn}, \cite{LuVu}, \cite{Pa3}, and \cite{Vi1}. Since the Borel subgroups of $H$ are conjugate in $H$ (\cite[Thm.\ 21.3]{Hu}), the complexity of the variety $X$ is well-defined. Since a Borel subgroup of $H$ is connected, we have $\kappa_H(X)=\kappa_{H^{\circ}}(X)$. Thus for considering the complexity of an $H$-action, we may assume that $H$ is connected. Concerning basic properties of complexity, we refer the reader to \cite[\S 9]{Vi1}. We return to the general situation of a linear algebraic group $H$ acting on an algebraic variety $X$. For a Borel subgroup $B$ of $H$, we define \[ \Gamma_{X}(B) := \{x\in X \mid \codim_{X} B\cdot x = \kappa_H(X)\}\subseteq X. \] Then we set \[ \Gamma_{X} := \bigcup_{B \in \mathcal{B}}\Gamma_{X}(B) \subseteq X. \] For $x \in X$, we define \[ \Lambda_H(x) := \{B \in \mathcal{B} \mid \codim_{X} B\cdot x =\kappa_H(X)\} \subseteq \B. \] \begin{rem} \label{rem1.20} The following statements are immediate from the definitions. 
\begin{itemize} \item[(i)] If $H$ acts transitively on $X$, then $\Gamma_{X}=X$. \item[(ii)] $B\in \Lambda_H(x)$ if and only if $x \in \Gamma_{X}(B)$. \item[(iii)] $\Lambda_H(x)= \varnothing$ if and only if $x \notin \Gamma_{X}$. \end{itemize} \end{rem} The complexity of a reducible variety can easily be determined from the complexities of its irreducible components: since a Borel subgroup $B$ of $H$ is connected, it stabilizes each irreducible component of $X$, cf.\ \cite[Prop.\ 8.2(d)]{Hu}. Let $x \in \Gamma_{X}(B)$ and choose an irreducible component $X'$ of $X$ such that $x \in X'$. Then $\kappa_H(X)=\kappa_H(X')+\codim_{X}X'$. Therefore, from now on we may assume that $X$ is irreducible. Next we recall the upper semi-continuity of dimension, e.g.\ see \cite[Prop.\ 4.4]{Hu}. \begin{prop} \label{prop 1.10} Let $\varphi : X\to Y$ be a dominant morphism of irreducible varieties. For $x \in X$, let $\varepsilon_{\varphi}(x)$ be the maximal dimension of any component of $\varphi^{-1}(\varphi(x))$ passing through $x$. Then $\{x \in X \mid\: \varepsilon_{\varphi}(x)\geqslant n \}$ is closed in $X$, for all $n \in \mathbb{Z}$. \end{prop} \begin{cor} \label{cor 1.10} Let $X$ be an $H$-variety. The set $\{x \in X \mid \dim H\cdot x \leqslant n \}$ is closed in $X$ for all $n \in \mathbb{Z}$. In particular, the union of all $H$-orbits of maximal dimension in $X$ is an open subset of $X$. \end{cor} \begin{lem} \label{lem 1.40} For every $B \in \mathcal{B}$, the set $\Gamma_{X}(B)$ is a non-empty open subset of $X$. \end{lem} \begin{proof} Note that $\Gamma_{X}(B)$ is the non-empty union of the $B$-orbits of maximal dimension. Thus, by Corollary \ref{cor 1.10}, $\Gamma_{X}(B)$ is open in $X$. \end{proof} \begin{cor} \label{cor 1.20} $\Gamma_{X}$ is open in $X$. \end{cor} Next we need an easy but useful lemma; the proof is elementary. \begin{lem} \label{lem 1.50} Let $\varphi : X \to Y$ be an $H$-equivariant dominant morphism of irreducible $H$-varieties.
For $x \in X$ set $F_{\varphi(x)}=\varphi^{-1}(\varphi(x))$. Then $F_{\varphi(x)}$ is $C_H(\varphi(x))$-stable. \end{lem} Before we can prove the main result of this subsection we need another preliminary result, see \cite[Thm.\ 4.3]{Hu}. \begin{thm} \label{thm 1.20} Let $\varphi: X \to Y$ be a dominant morphism of irreducible varieties. Set $r = \dim X - \dim Y$. Then there is a non-empty open subset $V$ of $Y$ such that $V\subseteq \varphi(X)$ and if $Y'\subseteq Y$ is closed, irreducible and meets $V$ and $Z$ is a component of $\varphi^{-1}(Y')$ which meets $\varphi^{-1}(V)$, then $\dim Z = \dim Y' +r$. In particular, if $v \in V$, then $\dim \varphi^{-1}(v) = r$. \end{thm} For the remainder of this section let $G$ be connected reductive. Let $\varphi : X \to Y$ be a $G$-equivariant dominant morphism of irreducible $G$-varieties. Then $\kappa_G(Y)\leqslant \kappa_G(X)$, \cite[\S 9]{Vi1}. In the main result of this subsection we give an interpretation for the difference $\kappa_G(X) - \kappa_G(Y)$ in terms of the complexity of a smaller subgroup acting on a fibre of $\varphi$. \begin{thm} \label{thm 1.10} Let $\varphi : X \to Y$ be a $G$-equivariant dominant morphism of irreducible $G$-varieties. For $x \in X$ set $F_{\varphi(x)}=\varphi^{-1}(\varphi(x))$. Then for every $B \in \mathcal{B}$ there exists $x \in \Gamma_{X}(B)$ such that for $H = C_B(\varphi(x))^{\circ}$ we have \[ \kappa_G(X)=\kappa_G(Y)+\kappa_H(Z), \] where $Z$ is an irreducible component of $F_{\varphi(x)}$ passing through $x$. \end{thm} \begin{proof} Let $B\in \mathcal{B}$. Let $V$ be a non-empty open subset of $Y$ which satisfies the conditions in Theorem \ref{thm 1.20}. Since $Y$ is irreducible, Lemma \ref{lem 1.40} implies that $\Gamma_{Y}(B)\cap V \neq \varnothing$. For $y \in \Gamma_{Y}(B)\cap V$, Theorem \ref{thm 1.20} implies that any component of $\varphi^{-1}(y)$ has dimension $r = \dim X -\dim Y$, in particular, $\dim \varphi^{-1}(y) = r$. 
Since $\varphi^{-1}(\Gamma_{Y}(B)\cap V)$ is open in $X$, we have $\varphi^{-1}(\Gamma_{Y}(B)\cap V)\cap \Gamma_{X}(B)\neq \varnothing$, by Lemma \ref{lem 1.40}. Now choose $x\in \varphi^{-1}(\Gamma_{Y}(B)\cap V)\cap \Gamma_{X}(B)$. In particular, $\dim F_{\varphi(x)}=r$. Lemma \ref{lem 1.50} implies that $F_{\varphi(x)}$ is $C_B(\varphi(x))$-stable. Clearly, $C_B(x)$ is the stabilizer of $x$ in $C_B(\varphi(x))$. Thus we obtain \begin{align*} \codim_{F_{\varphi(x)}}C_B(\varphi(x))\cdot x &= \dim F_{\varphi(x)} -\dim C_B(\varphi(x))\cdot x\\ &= r - \dim C_B(\varphi(x))+ \dim C_B(x)\\ &= \dim X -\dim Y - \dim C_B(\varphi(x))+\dim C_B(x)+\dim B-\dim B\\ &= \left( \dim X - \dim B+ \dim C_B(x) \right) \\ & \qquad \qquad -\left(\dim Y -\dim B+ \dim C_B(\varphi(x))\right)\\ &= \kappa_G(X)-\kappa_G(Y), \end{align*} where the last equality holds because $x \in \Gamma_{X}(B)$ and $\varphi(x) \in \Gamma_{Y}(B)$. Let $Z$ be an irreducible component of $F_{\varphi(x)}$ which passes through $x$. Theorem \ref{thm 1.20} implies that $Z$ has the same dimension as $F_{\varphi(x)}$. The connected group $H = C_B(\varphi(x))^{\circ}$ stabilizes $Z$. Note that for each $z \in Z$ we have $\varphi(z) = \varphi(x)$ and $C_B(z) = C_{C_B(\varphi(x))}(z)$ (observed for $z=x$ above). Since $x \in \Gamma_{X}(B)$, $\dim C_B(x)$ is minimal among groups of the form $C_B(z)$ for $z \in Z$. Therefore, because $C_B(z) = C_{C_B(\varphi(x))}(z)$, we see that $\dim C_{C_B(\varphi(x))}(x)$ is minimal among groups of the form $C_{C_B(\varphi(z))}(z)$ for $z \in Z$. We deduce that $x \in \Gamma_{Z}(H)$. Consequently, \[ \kappa_{H}(Z) = \dim Z -\dim C_B(\varphi(x))^{\circ}+\dim C_{C_B(\varphi(x))^{\circ}}(x) = \codim_{F_{\varphi(x)}}C_B(\varphi(x))\cdot x. \] The result follows. \end{proof} \subsection{Spherical Varieties} A $G$-variety $X$ is called \emph{spherical} if a Borel subgroup of $G$ acts on $X$ with a dense orbit, that is $\kappa_G(X) = 0$. 
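For illustration we include the following well-known elementary example, which is independent of the characteristic of $k$.
\begin{rem}
Let $G = \mathrm{SL}_2(k)$ act on $X = \mathbb{P}^1$ in the usual way and let $B$ be the Borel subgroup of upper triangular matrices in $G$. Then $B$ has exactly two orbits on $X$: the fixed point $[1:0]$ and its open complement, on which $B$ acts transitively by the affine transformations $z \mapsto a^2z + ab$. In particular, $\kappa_G(X) = 0$, so $X$ is a spherical $G$-variety; note that $X \cong G/B$ is the flag variety of $G$.
\end{rem}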
We recall some standard facts concerning spherical varieties, see \cite{Bri1}, \cite{Kn} and \cite{Pa3}. First we recall an important result due to \'{E}.B.\ Vinberg \cite{Vi1} and M.\ Brion \cite{Bri2} independently in characteristic zero, and to F.\ Knop \cite[Cor.\ 2.6]{Kn} in arbitrary characteristic. Let $B$ be a Borel subgroup of $G$. \begin{thm} \label{lem 1.60} A spherical $G$-variety consists of only finitely many $B$-orbits. \end{thm} We have an immediate corollary. \begin{cor} \label{cor 1.20a} The following are equivalent. \begin{itemize} \item[(i)] The $G$-variety $X$ is spherical. \item[(ii)] There is an open $B$-orbit in $X$. \item[(iii)] The number of $B$-orbits in $X$ is finite. \end{itemize} \end{cor} \subsection{Homogeneous Spaces} Let $H$ be a closed subgroup of $G$. Since $G/H$ is a $G$-variety, we may consider the complexity $\kappa_G(G/H)$. Let $B$ be a Borel subgroup of $G$. The orbits of $B$ on $G/H$ are in bijection with the $(B,H)$-double cosets of $G$. We have that $\kappa_G(G/H) = \codim_{G/H}BgH/H$ for $gH \in \Gamma_{G/H}(B)$. Clearly, $G$ acts transitively on $G/H$, so Remark \ref{rem1.20}(i) implies that we can choose a Borel subgroup $B$ such that $B \in \Lambda_{G}(1H)$. Thus, for this choice of $B$, we have \begin{align} \label{SubCom} \notag \kappa_G(G/H) & = \codim_{G/H}B H/H = \dim G/H -\dim B H/H \\ & = \dim G/H -\dim B/(B\cap H) \\ \notag & = \dim G -\dim H - \dim B + \dim B \cap H. \end{align} Following M.\ Kr\"{a}mer \cite{Kr}, a subgroup $H$ of $G$ is called \emph{spherical} if $\kappa_G(G/H) = 0$. Since $\kappa_G(G/H) = \kappa_G(G/H^\circ)$, by \eqref{SubCom}, in considering the complexity of homogeneous spaces $G/H$ we may assume that the subgroup $H$ is connected. \begin{comment} \begin{cor}\label{cor 1.70} If $T$ is a torus, then any subgroup of $T$ is spherical. \end{cor} \end{comment} We have an easy lemma.
\begin{lem} \label{lem 1.30} Let $G$ be connected reductive and let $H$ be a subgroup of $G$ which contains the unipotent radical of a Borel subgroup of $G$. Then $H$ is spherical. In particular, a parabolic subgroup of $G$ is spherical. \end{lem} \begin{proof} Let $B$ be a Borel subgroup of $G$ such that $U = R_u(B)\leqslant H$. Denote by $B^-$ the opposite Borel subgroup to $B$, relative to some maximal torus of $B$, see \cite[\S 26.2 Cor.\ C]{Hu}. The \emph{big cell} $B^-U$ is an open subset of $G$, \cite[Prop.\ 28.5]{Hu}. We have $B^-U \subseteq B^-H$, so $B^-H$ is a dense subset of $G$. Thus, $G/H$ is spherical. \end{proof} \begin{rem} \label{rem:simplespherical} If both $G$ and $H$ are reductive, then $G/H$ is an affine variety, see \cite[Thm.\ A]{Ri4}. This case has been studied extensively. The classification of spherical reductive subgroups of the simple algebraic groups in characteristic zero was obtained by M.\ Kr\"{a}mer \cite{Kr} and was shown to be the same in positive characteristic by J.\ Brundan \cite{Br1}. M.\ Brion \cite{Bri1.5} classifies all the spherical reductive subgroups of an arbitrary reductive group in characteristic zero. In positive characteristic no such classification is known. However, the classification of the reductive spherical subgroups in simple algebraic groups in positive characteristic follows from work of T.A.\ Springer \cite{Sp2} (see also G.\ Seitz \cite{seitz}), J.\ Brundan \cite{Br1} and R.\ Lawther \cite{Law}. Important examples of reductive spherical subgroups are centralizers of involutive automorphisms of $G$: Suppose that $\mathop{\mathrm{char}}\nolimits k \neq 2$ and let $\theta$ be an involutive automorphism of $G$. Then the fixed point subgroup $C_G(\theta)=\{ g \in G \mid \theta(g)=g\}$ of $G$ is spherical, see \cite[Cor.\ 4.3.1]{Sp2}. \end{rem} For more on the complexity and sphericity of homogeneous spaces see \cite{Bri2}, \cite{LuVu} and \cite{Pa4}.
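To illustrate the computation \eqref{SubCom} in a simple well-known case, we include the following example.
\begin{rem}
Let $G = \mathrm{SL}_2(k)$ and let $H = T$ be the diagonal maximal torus of $G$. If $B$ is the stabilizer in $G$ of a point of $\mathbb{P}^1$ distinct from $[1:0]$ and $[0:1]$, then $B \cap T$ is finite; since $\dim B - \dim (B\cap T) = 2 = \dim G/T$, the $B$-orbit of $1T$ is dense in $G/T$, and \eqref{SubCom} yields
\[
\kappa_G(G/T) = \dim G - \dim T - \dim B + \dim (B\cap T) = 3 - 1 - 2 + 0 = 0.
\]
Hence $T$ is a spherical subgroup of $G$. For $\mathop{\mathrm{char}}\nolimits k \neq 2$ this is also an instance of the last statement of Remark \ref{rem:simplespherical}, as $T$ is the fixed point subgroup of the involutive automorphism of $G$ given by conjugation with $\mathop{\mathrm{diag}}\nolimits(1,-1)$.
\end{rem}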
\begin{rem} In order to compute the complexity of an orbit variety, it suffices to determine the complexity of a homogeneous space. For, suppose that $G$ acts on an algebraic variety $X$. Let $x\in X$. Since $G$ is connected, the orbit $G\cdot x$ is irreducible. The map $\pi_x : G/C_G(x) \to G\cdot x$ defined by $\pi_x(gC_G(x))=g\cdot x$ is a bijective $G$-equivariant morphism, \cite[\S 2.1]{Ja1}. Thus, by applying Theorem \ref{thm 1.10} to $\pi_x$, we have \begin{equation} \label{eq 1.10} \kappa_G(G/C_G(x)) = \kappa_G(G\cdot x). \end{equation} The relevance of \eqref{eq 1.10} is that the left-hand side is easier to compute, since calculating $\kappa_G(G/C_G(x))$ only requires the study of groups of the form $C_B(x)$, cf.\ \eqref{SubCom}, where $B$ is a Borel subgroup of $G$. \end{rem} \subsection{Kempf--Rousseau Theory} \label{sub:kempf} Next we require some standard facts from Geometric Invariant Theory, see \cite{Ke}; also see \cite[\S 2]{Pr1}, \cite[\S 7]{Ri3}. Let $X$ be an affine variety and $\phi : k^* \to X$ be a morphism of algebraic varieties. We say that $ \underset{t\to 0}{\lim}\,\phi(t)$ exists if there exists a morphism $\widehat{\phi}:k\to X$ such that $\widehat{\phi}|_{k^*}=\phi$. If such a limit exists, we set $ \underset{t\to 0}{\lim}\,\phi(t)=\widehat{\phi}(0)$. Note that if such a morphism $\widehat{\phi}$ exists, it is necessarily unique. Let $\lambda$ be a cocharacter of $G$. Define $P_{\lambda}=\{ x \in G\mid \underset{t\to 0}{\lim}\, \lambda(t)x\lambda(t)^{-1} \text{ exists}\}$. Then $P_{\lambda}$ is a parabolic subgroup of $G$, the unipotent radical of $P_{\lambda}$ is given by $R_u(P_{\lambda}) = \{x \in G \mid \underset{t\to 0}{\lim}\, \lambda(t)x\lambda(t)^{-1}=1\}$, and a Levi subgroup of $P_{\lambda}$ is the centralizer $C_G(\lambda) = C_G(\lambda(k^*))$ of the image of $\lambda$ in $G$, \cite[\S 8.4]{Sp0.5}.
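The following standard example may help the reader to keep these notions in mind.
\begin{rem}
Let $G = \mathrm{GL}_n(k)$ and let $\lambda(t) = \mathop{\mathrm{diag}}\nolimits(t^{a_1}, \ldots, t^{a_n})$ with integers $a_1 \geqslant a_2 \geqslant \cdots \geqslant a_n$. For $x = (x_{ij}) \in G$ we have $\lambda(t)x\lambda(t)^{-1} = (t^{a_i - a_j}x_{ij})$, so $\underset{t\to 0}{\lim}\, \lambda(t)x\lambda(t)^{-1}$ exists if and only if $x_{ij} = 0$ whenever $a_i < a_j$. Consequently, $P_{\lambda}$ is the subgroup of block upper triangular matrices, with block sizes the multiplicities of the distinct values among the $a_i$; the unipotent radical $R_u(P_{\lambda})$ consists of the elements of $P_{\lambda}$ whose diagonal blocks are identity matrices; and the Levi subgroup $C_G(\lambda)$ is the corresponding group of block diagonal matrices.
\end{rem}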
Let the connected reductive group $G$ act on the affine variety $X$ and suppose $x \in X$ is a point such that $G\cdot x$ is not closed in $X$. Let $C$ denote the unique closed $G$-orbit in the closure of $G\cdot x$, cf.~\cite[Lem.~1.4]{Ri4}. Set $\Lambda(x) := \{\lambda \in Y(G) \mid \underset{t\to 0}{\lim}\, \lambda(t)\cdot x \textrm{ exists and lies in } C \}$. Then there is a so-called \emph{optimal class} $\Omega(x) \subseteq \Lambda(x)$ of cocharacters associated to $x$. The following theorem is due to G.R.~Kempf, \cite[Thm.\ 3.4]{Ke} (see also \cite{rousseau}). \begin{thm} \label{thm 4.15} Assume as above. Then we have the following: \begin{itemize} \item[(i)] $\Omega(x) \neq \varnothing$. \item[(ii)] There exists a parabolic subgroup $P(x)$ of $G$ such that $P(x) = P_\lambda$ for every $\lambda \in \Omega(x)$. \item[(iii)] $\Omega(x)$ is a single $P(x)$-orbit. \item[(iv)] For $g \in G$, we have $\Omega(g\cdot x) = g\cdot \Omega(x)$ and $P(g \cdot x) = gP(x)g^{-1}$. In particular, $C_G(x) \leqslant N_G(P(x)) = P(x)$. \end{itemize} \end{thm} Frequently, $P(x)$ in Theorem \ref{thm 4.15} is called the \emph{destabilizing} parabolic subgroup of $G$ defined by $x \in X$. \subsection{Associated Cocharacters} \label{sub:assco} In this subsection we closely follow A.\ Premet \cite{Pr1}; also see \cite[\S 5]{Ja1}. We recall that $p$ is a good prime for $G$ throughout this section. Every cocharacter $\lambda \in Y(G)$ induces a grading of $\gg$: \[ \gg = \bigoplus_{i \in \mathbb{Z}}\gg(i,\lambda), \] where \[ \gg(i,\lambda)=\{x\in \gg\mid\Ad(\lambda(t))(x)=t^ix\text{ for all }t\in k^*\}, \] see \cite[\S 5.1]{Ja1}. For $P_\lambda$ as in the previous subsection, we have the following equalities: $\Lie P_{\lambda}=\bigoplus_{i \geqslant 0} \gg(i,\lambda)$; $\Lie R_u(P_{\lambda})=\bigoplus_{i > 0} \gg(i,\lambda)$; and $\Lie C_G(\lambda)=\gg(0,\lambda)$. Frequently, we write $\gg(i)$ for $\gg(i,\lambda)$ once we have fixed a cocharacter $\lambda \in Y(G)$.
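For instance, this grading is easily made explicit for the general linear group.
\begin{rem}
Let $G = \mathrm{GL}_n(k)$, so $\gg = \mathfrak{gl}_n(k)$, and let $\lambda(t) = \mathop{\mathrm{diag}}\nolimits(t^{a_1}, \ldots, t^{a_n})$ with $a_1 \geqslant \cdots \geqslant a_n$. Since $\Ad(\lambda(t))E_{jl} = t^{a_j - a_l}E_{jl}$ for the matrix units $E_{jl}$, the subspace $\gg(i,\lambda)$ is spanned by the $E_{jl}$ with $a_j - a_l = i$. The equalities above then become transparent: $\bigoplus_{i \geqslant 0}\gg(i,\lambda)$ is the block upper triangular subalgebra $\Lie P_{\lambda}$, $\bigoplus_{i > 0}\gg(i,\lambda)$ is the strictly block upper triangular subalgebra $\Lie R_u(P_{\lambda})$, and $\gg(0,\lambda)$ is the block diagonal subalgebra $\Lie C_G(\lambda)$.
\end{rem}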
Let $H$ be a connected reductive subgroup of $G$. A nilpotent element $ e \in \hh$ is called \emph{distinguished in $\hh$} provided each torus in $C_H(e)$ is contained in the centre of $H$, \cite[\S 4.1]{Ja1}. \begin{comment}\begin{rem}\label{rem 4.05} Some texts, for $\mathop{\mathrm{char}}\nolimits k=0$ or large positive characteristic, give alternative definitions, for example \cite[\S 5.7]{Ca} states: For $G$ simple, a nilpotent element $e \in \gg$ is distinguished if it does not commute with any nonzero semisimple elements of $\gg$. This can easily be seen to be equivalent, under the assumptions on $\mathop{\mathrm{char}}\nolimits k$, to the definition above. \end{rem}\end{comment} The following characterization of distinguished nilpotent elements in the Lie algebra of a Levi subgroup of $G$ can be found in \cite[\S 4.6, \S 4.7]{Ja1}. \begin{prop} \label{prop4.20} Let $e \in \gg$ be nilpotent and let $L$ be a Levi subgroup of $G$. Then $e$ is distinguished in $\Lie L$ if and only if $L = C_G(S)$, where $S$ is a maximal torus of $C_G(e)$. \end{prop} Next we recall the definition of an associated cocharacter, see \cite[\S 5.3]{Ja1}. \begin{defn} \label{def 4.30} A cocharacter $\lambda : k^* \to G$ is \emph{associated} to $e \in \N$ if $e \in \gg(2,\lambda)$ and there exists a Levi subgroup $L$ of $G$ such that $e$ is distinguished in $\Lie L$, and $\lambda(k^*) \leqslant \D L$. \end{defn} \begin{rem} \label{rem 4.06} In view of Proposition \ref{prop4.20}, the last two conditions in Definition \ref{def 4.30} are equivalent to the existence of a maximal torus $S$ of $C_G(e)$ such that $\lambda(k^*)\leqslant\D C_G(S)$. We will use this fact frequently in the sequel. \end{rem} Let $e \in \N$. In \cite[\S 2.4, Prop.\ 2.5]{Pr1}, A.~Premet explicitly defines a cocharacter of $G$ which is associated to $e$. Moreover, in \cite[Thm.\ 2.3]{Pr1}, Premet shows that each of these associated cocharacters belongs to the optimal class $\Omega(e)$ determined by $e$. 
Premet shows this under the so-called \emph{standard hypotheses} on $G$, see \cite[\S 2.9]{Ja1}. These restrictions were subsequently removed by G.\ McNinch in \cite[Prop.\ 16]{Mc2} so that this fact holds for any connected reductive group $G$ in good characteristic. It thus follows from \cite[Prop.\ 16]{Mc2}, Theorem \ref{thm 4.15}(iv), and the fact that any two associated cocharacters are conjugate under $C_G(e)$, \cite[Lem.\ 5.3]{Ja1}, that all the cocharacters of $G$ associated to $e \in \N$ belong to the optimal class $\Omega(e)$ defined by $e$; see also \cite[Prop.\ 18, Thm.\ 21]{Mc2}. This motivates and justifies the following notation, which we use in the sequel. \begin{defn} \label{d:Gamma} Let $e \in \gg$ be nilpotent. Then we denote the set of cocharacters of $G$ associated to $e$ by \[ \Omega_G^a(e) := \{\lambda \in Y(G)\mid \lambda \text{ is associated to } e \} \subseteq \Omega(e). \] Further, if $H$ is a (connected) reductive subgroup of $G$ with $e \in \hh$ nilpotent we also write $\Omega_H^a(e)$ to denote the cocharacters of $H$ that are associated to $e$. \end{defn} As indicated above, in good characteristic, associated cocharacters are known to exist for any nilpotent element $e \in \gg$; more precisely, we have the following, \cite[\S 5.3]{Ja1}: \begin{prop} \label{prop 4.10} Suppose that $p$ is good for $G$. Let $e \in \gg$ be nilpotent. Then $\Omega_G^a(e) \ne \varnothing$. Moreover, if $\lambda\in \Omega_G^a(e)$ and $\mu \in Y(G)$, then $\mu \in \Omega_G^a(e)$ if and only if $\mu$ and $\lambda$ are conjugate by an element of $C_G(e)$. \end{prop} \begin{comment} \section{Properties of Gradings Induced From Associated Cocharacters}\label{sec 3.1} In this section we consider for $e$ nilpotent in $\gg$ the grading of $\gg$ induced from an associated cocharacter for $e$ in $\gg$.
First, the grading of $\gg$ depends, up to isomorphism, only on the $G$-orbit of $e$, and not on the choice of an orbit representative or even the choice of an associated cocharacter. This follows readily from the next two lemmas. \begin{lem}\label{lem 4.25} Let $\lambda$ and $\mu$ be cocharacters of $G$ which are conjugate by $x \in G$. Then $\Ad(x)(\gg(i,\lambda))=\gg(i,\mu)$ for all $i \in \mathbb{Z}$.\end{lem} \begin{proof} Let $Y \in \gg(i,\lambda)$. By assumption we have that $\mu = x\lambda x^{-1}$. Now we have \[\Ad(\mu(t))\Ad(x)(Y)=\Ad(x)\Ad(\lambda(t))(Y)=t^i\Ad(x)Y.\] Thus, $\Ad(x)(Y)\in \gg(i,\mu)$. So $\Ad(x)(\gg(i,\lambda)) \subseteq \gg(i,\mu)$. Similarly, we have $\gg(i,\mu)\subseteq \Ad(x^{-1})(\gg(i,\lambda))$ and the result follows. \end{proof} \begin{lem}\label{lem 4.40} Let $e \in \gg$ be nilpotent and let $Y=\Ad(x)(e)$ for some $x \in G$. If $\lambda\in \Omega_G^a(e)$, then $x\lambda x^{-1}\in \Omega_G^a(Y)$.\end{lem} \begin{proof} Since $\lambda$ is associated to $e$, we have $e \in \gg(2,\lambda)$ and there exists a Levi subgroup $L$ of $G$ such that parts $(ii)$ and $(iii)$ of Definition \ref{def 4.30} hold for $\lambda$. Now, \[\Ad(x\lambda(t) x^{-1})\Ad(x)(e)=\Ad(x) \Ad(\lambda(t))(e)=t^2\Ad (x)(e)=t^2Y,\] and so $Y \in \gg(2,x\lambda x^{-1})$. Clearly, $xLx^{-1}$ is a Levi subgroup of $G$ and parts $(ii)$ and $(iii)$ hold for $xLx^{-1}$ and $x\lambda x^{-1}$. \end{proof} Before we can proceed, we need to know that associated cocharacters are optimal cocharacters, that is $\Omega_G^a(e)\subseteq\Omega(e)$. This is indeed the case, see McNinch's generalization \cite[Prop.\ 16]{Mc2} of Premet's result \cite[Thm.\ 2.1]{Pr1}. Note this also justifies our notation for the set of associated cocharacters. \end{comment} Fix a nilpotent element $e \in \gg$ and an associated cocharacter $\lambda\in \Omega_G^a(e)$ of $G$. Set $P = P_{\lambda}$. 
By Theorem \ref{thm 4.15}(ii), $P$ only depends on $e$ and not on the choice of the associated cocharacter $\lambda$. Note that $C_G(\lambda)$ stabilizes $\gg(i)$ for every $i \in \mathbb{Z}$. For $n \in \mathbb{Z}_{\geqslant 0}$ we set \[ \gg_{\geqslant n} = \bigoplus\limits_{i\geqslant n}\gg(i)\ \ \text{ and }\ \ \gg_{>n} = \bigoplus\limits_{i>n}\gg(i). \] Then we have \[ \gg_{\geqslant 0} = \Lie P \ \ \text{ and }\ \ \gg_{>0} = \Lie R_u(P). \] Also, $C_G(e) = C_P(e)$, by Theorem \ref{thm 4.15}(iv). The next result is \cite[Prop.\ 5.9(c)]{Ja1}. \begin{comment}\begin{proof} By definition, $C_G(\lambda)=\{x \in G \mid x \lambda(t)x^{-1}=\lambda(t) \text{ for all } t \in k^*\}$. Let $Y \in \gg(i)$ and $x \in C_G(\lambda)$. Now $\Ad(\lambda(t))\Ad(x)(Y)=\Ad(xx^{-1}\lambda(t)x)(Y)=\Ad(x)\Ad(\lambda(t))(Y) =t^i\Ad(x)(Y)$. Thus, $\Ad(x)(Y) \in \gg(i)$. \end{proof}\end{comment} \begin{prop} \label{prop 4.30} The $P$-orbit of $e$ in $\gg_{\geqslant 2}$ is dense in $\gg_{\geqslant 2}$. \end{prop} \begin{cor} \label{cor 4.10} The $C_G(\lambda)$-orbit of $e$ in $\gg(2)$ is dense in $\gg(2)$. \end{cor} \begin{comment}\begin{proof} Lemma \ref{lem 4.45} states that $C_G(\lambda)$ stabilizes each $\gg(i)$. In particular, $C_G(\lambda)$ stabilizes $\gg(2)$. It is known, see \cite[\S 5.10]{Ja1}, that $(\Ad(R_u(P))(e)-e)\subseteq \gg_{\geqslant 3}$, thus $\Ad(R_u(P))(e)\cap\gg(2)=\{e\}$. The result follows. \end{proof} \begin{cor}\label{cor 4.11}If the characteristic of $k$ is very good for $G$, then.\begin{enumerate}\item $[\pp,e]=\gg_{\geqslant 2}$;\item $[\gg_{>0},e]=\gg_{\geqslant 3}$;\item $[\gg(0),e]=\gg(2)$.\end{enumerate}\end{cor} \begin{proof} The assumption on $\mathop{\mathrm{char}}\nolimits k$ implies that $\dim [\pp,e]=\dim P\cdot e=\dim \gg_{\geqslant 2}$, see \cite[\S 1.14]{Ca}, so $[\pp,e]=\gg_{\geqslant 2}$. The other results follow from the fact that $\pp=\gg(0)\bigoplus\gg_{>0}$ and $e \in \gg(2)$. 
\end{proof}\end{comment} Define \[ C_G(e,\lambda) := C_G(e)\cap C_G(\lambda). \] \begin{cor} \label{cor 3.02} Let $e \in \N$. Then \begin{itemize} \item[(i)] $\dim C_G(e) =\dim \gg(0)+\dim \gg(1)$; \item[(ii)] $\dim R_u(C_G(e))=\dim \gg(1)+\dim\gg(2)$; \item[(iii)] $\dim C_G(e,\lambda)=\dim \gg(0)-\dim \gg(2)$. \end{itemize} \end{cor} \begin{proof} As $C_G(e)=C_P(e)$, part (i) is immediate from Proposition \ref{prop 4.30}. Using the fact that $(\Ad(R_u(P))-1)(e)\subseteq \gg_{\geqslant 3}$ (e.g.\ see \cite[\S 5.10]{Ja1}) and Proposition \ref{prop 4.30}, we see that $\dim \Ad(R_u(P))(e)=\dim \gg_{\geqslant 3}$ and so $\dim C_{R_u(P)}(e)=\dim \gg(1)+\dim\gg(2)$. Finally, part (iii) follows from the first two. \end{proof} The following basic result regarding the structure of $C_G(e)$ can be found in \cite[Thm.\ A]{Pr1}. \begin{prop} \label{pro 4.35} If $\mathop{\mathrm{char}}\nolimits k$ is good for $G$, then $C_G(e)$ is the semi-direct product of $C_G(e,\lambda)$ and $C_G(e)\cap R_u(P)$. Moreover, $C_G(e,\lambda)^{\circ}$ is reductive and $C_G(e)\cap R_u(P)$ is the unipotent radical of $C_G(e)$. \end{prop} \begin{comment} \begin{cor}If the characteristic of $k$ is very good for $G$, then:\begin{enumerate} \item $\dim \cc_\gg(e) =\dim \gg(0)+\dim \gg(1)$; \item $\dim C_{\gg_{>0}}(e)=\dim \gg(1)+\dim\gg(2)$; \item $\dim C_{\gg(0)}(e)=\dim \gg(0)-\dim \gg(2)$.\end{enumerate}\end{cor} Assume for now that the characteristic of $k$ is zero. Recall from Section \ref{chak} the grading of $\gg$ obtained from an $\mathfrak{sl}_2$-triple which contains the nilpotent element $e$. We can also obtain a grading of $\gg$ from an associated cocharacter for $e$ in $\gg$. The following result, see \cite[Prop 5.5]{Ja1}, implies these two gradings are equivalent.
\begin{prop}\label{bicosl}The map $\lambda \to d\lambda_e(1)$ is a bijection from $\Omega_G^a(e)$ to the set $\{ H \in [\gg,e] \mid [H,e]=2e \}$.\end{prop} Now we give some examples of associated cocharacters for certain simple algebraic groups. We pay particular attention to the classical groups, but also cite examples for the exceptional groups. First we construct associated cocharacters for the nilpotent elements of classical Lie algebras. So, let $G$ be a simple classical algebraic group and $V$ be the corresponding natural module. Let $e \in \gg$ be a non-zero nilpotent element with corresponding partition $\pi_e=(d_1,d_2,\ldots, d_r)$, cf.\! Section \ref{Class Gr}. It is known, see \cite[\S 3]{Ja1}, that there exist $v_1,v_2,\ldots,v_r \in V$ such that the set $\{e^jv_i \mid 1\leq i \leq r\text{ and } 0 \leq j < d_i\}$ is a basis of $X$. For $m \in \mathbb{Z}$, we define $V(m)=\langle e^jv_i \mid 2j+1-d_i=m\rangle$, clearly $V(m)$ is a subspace of $V$, in fact $V= \bigoplus_{m \in \mathbb{Z}} V(m)$. Next we define a map $\lambda :k^*\to G$ by $\lambda(t)(v)=t^mv \text{ for all } t \in k^*, v \in V(m)$ and extend $\lambda$ linearly to all of $V$. So $\lambda$ is a cocharacter of $G$. In fact it can be shown, see \cite[\S 5.4]{Ja1}, that $\lambda$ is an associated cocharacter for $e$ in $\gg$. Since all associated cocharacters are conjugate under $C_G(e)$, Proposition \ref{prop 4.10}, all associated cocharacters for $e \in \gg$ are of this form. For information on how to construct associated cocharacters for the exceptional groups see \cite[\S 2.4]{Pr1} or \cite[\S 5.14]{Ja1}. \end{comment} \begin{defn} \label{defn:height} Let $e \in \gg$ be nilpotent. The \emph{height} of $e$ with respect to an associated cocharacter $\lambda\in \Omega_G^a(e)$ is defined to be \[ \mathop{\mathrm{ht}}\nolimits(e) := \max\limits_{ i\in \mathbb{N}}\{i\mid\gg(i,\lambda)\neq 0\}. 
\] Thanks to Proposition \ref{prop 4.10}, the height of $e$ does not depend on the choice of $\lambda\in\Omega_G^a(e)$. Since conjugate nilpotent elements have the same height, we may speak of the height of a given nilpotent orbit. Since $\lambda\in\Omega_G^a(e)$, we have $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 2$ for any nilpotent element $e \in \gg$, cf.\ Definition \ref{def 4.30}. \end{defn} Let $\gg$ be classical with natural module $V$. Set $n = \dim V$. We write a partition $\pi$ of $n$ in one of the following two ways: either $\pi = (d_1, d_2, \ldots , d_r)$ with $d_1 \geqslant d_2 \geqslant \cdots \geqslant d_r \geqslant 0$ and $\sum_{i=1}^r d_i = n$; or $\pi = [1^{r_1} , 2^{r_2} , \ldots]$ with $\sum_i ir_i = n$. These two notations are related by $r_i = |\{j \mid d_j = i\}|$ for $i \geqslant 1$. It is straightforward to determine the height of a nilpotent orbit from the corresponding partition of $\dim V$. We leave the proof of the next proposition to the reader. \begin{prop} \label{prop 4.50} Let $e \in \gg$ be nilpotent with partition $\pi_e=(d_1,d_2, \ldots, d_r)$. \begin{itemize} \item[(i)] If $\gg= \mathfrak{gl}(V)$, $\mathfrak{sl}(V)$ or $\mathfrak{sp}(V)$, then $\mathop{\mathrm{ht}}\nolimits(e)=2(d_1-1)$. \item[(ii)] If $\gg= \mathfrak{so}(V)$, then $\mathop{\mathrm{ht}}\nolimits(e)=\left\{ \begin{array}{ll} 2(d_1-1) & \text{ if } \:\:d_1=d_2, \\ 2d_1-3 & \text{ if } \:\:d_1=d_2+1, \\ 2(d_1-2) & \text{ if } \:\:d_1>d_2+1. \\ \end{array} \right.$ \end{itemize} \end{prop} \begin{rems} \label{rem 3.30} (i). For $\mathop{\mathrm{char}}\nolimits k=0$, Proposition \ref{prop 4.50} was proved in \cite[Thm.\ $2.3$]{Pa2}. (ii). If $e$ is a nilpotent element in $\mathfrak{gl}(V)$, $\mathfrak{sl}(V)$ or $\mathfrak{sp}(V)$, then $\mathop{\mathrm{ht}}\nolimits(e)$ is even. If $e$ is a nilpotent element in $\mathfrak{so}(V)$, then $\mathop{\mathrm{ht}}\nolimits(e)$ is odd if and only if $d_2=d_1-1$.
\end{rems} \subsection{Fibre Bundles} \label{sub:Fibbun} Let $H$ be a closed subgroup of $G$. Suppose that $H$ acts on an affine variety $Y$. Define a morphic action of $H$ on the affine variety $G\times Y$ by $h\cdot(g,y)=(gh,h^{-1}\cdot y)$ for $h \in H,g \in G$ and $y \in Y$. Since $H$ acts fixed-point freely on $G\times Y$, every $H$-orbit in $G\times Y$ has dimension $\dim H$. There exists a surjective quotient morphism $\rho : G\times Y \to (G\times Y)/H$, \cite[\S 1.2]{MuFo}, \cite[\S 4.8]{PV}. We denote the quotient $(G\times Y)/H$ by $\mathop{G\!\ast_H\!Y}\nolimits$; it is the \emph{fibre bundle} with \emph{fibre} $Y$ associated to the \emph{principal bundle} $\pi :G\to G/H$ defined by $\pi(g)=gH$. We denote the element $(g,y)H$ of $\mathop{G\!\ast_H\!Y}\nolimits$ simply by $g\ast y$, see \cite[\S 2]{Ri5}. Let $X$ be a $G$-variety and $Y \subseteq X$ be an $H$-subvariety. The \emph{collapsing} of the fibre bundle $\mathop{G\!\ast_H\!Y}\nolimits$ is the morphism $\mathop{G\!\ast_H\!Y}\nolimits \to G\cdot Y \subseteq X$ defined by $g\ast y \mapsto g\cdot y$. Define an action of $G$ on $\mathop{G\!\ast_H\!Y}\nolimits$ by $g\cdot(g'\ast y)=(gg')\ast y$ for $g,g' \in G$ and $y \in Y$. We then have a $G$-equivariant surjective morphism $\varphi : \mathop{G\!\ast_H\!Y}\nolimits \to G/H$ given by $\varphi(g\ast y)=gH$. Note that $\varphi^{-1}(gH)\cong Y$ for all $gH \in G/H$. \begin{comment}For $x \in G$ set $V_x=\varphi^{-1}(xH)$, so $V_x=\{g\ast v \in \mathop{G\!\ast_H\!Y}\nolimits \mid g=xh \text{ for some } h \in H\}$. The morphism $\phi :V_x\to V$, defined by $\phi(g\ast v)=x^{-1}g\cdot v$, is clearly well-defined and surjective. Suppose that $\phi(g\ast v)=\phi(k\ast u)$, so $g\cdot v=k\cdot u$. Since $g^{-1}k\in H$, we have $g\ast v =g(g^{-1}k)\ast (g^{-1}k)^{-1}\cdot v=k\ast u$, and so $\phi$ is bijective. Now define a morphism $\psi : V\to V_x$ by $\psi(v)=x\ast v$. Again, we have that $\psi$ is bijective. Also $\psi(\phi(g\ast v))=g\ast v$ and $\phi(\psi(v))=v$.
Thus, $\phi$ is an isomorphism of varieties, see \cite[\S 3]{Ha} (with inverse $\psi$). This fact also readily implies that \begin{equation}\label{eq 5.1} \dim \mathop{G\!\ast_H\!Y}\nolimits = \dim G+\dim V -\dim H.\end{equation}\end{comment} \begin{prop} \label{prop 5.10} Let $H$ be a closed subgroup of $G$ and let $Y$ be an $H$-variety. Suppose that $B$ is a Borel subgroup of $G$ such that $\dim B\cap H$ is minimal (among all subgroups of the form $B'\cap H$ for $B'$ ranging over $\B$). Then we have \[ \kappa_G(\mathop{G\!\ast_H\!Y}\nolimits)=\kappa_G(G/H)+\kappa_{B\cap H}(Y). \] \end{prop} \begin{proof} We apply Theorem \ref{thm 1.10} to the morphism $\varphi :\mathop{G\!\ast_H\!Y}\nolimits\to G/H$. Thus, for a Borel subgroup $B$ of $G$ and $g\ast y\in \Gamma_{\mathop{G\!\ast_H\!Y}\nolimits}(B)$, we have that $\kappa_G(\mathop{G\!\ast_H\!Y}\nolimits)=\kappa_G(G/H)+\kappa_K(Z)$, where $Z$ is an irreducible component of $\varphi^{-1}(\varphi(g\ast y))$ passing through $g\ast y$ and $K = C_B(gH)^{\circ}$. Note that $C_B(gH)=B\cap gHg^{-1}$. So, since $g\ast y\in \Gamma_{\mathop{G\!\ast_H\!Y}\nolimits}(B)$, the dimension of $g^{-1}C_B(gH)g = g^{-1}Bg\cap H$ is minimal. Now, as $\mathop{G\!\ast_H\!Y}\nolimits$ is a fibre bundle, for $x \in G$ we have $Y_x := \varphi^{-1}(\varphi(x\ast y))\cong Y$. Define a morphism $\phi : Y_x\rightarrow Y$ by $\phi(g\ast y)=x^{-1}g\cdot y$. Clearly, $xhx^{-1} \in B\cap xHx^{-1}$ acts on $g\ast y \in Y_x$, as $xhx^{-1}\cdot (g\ast y)=xhx^{-1}g\ast y$. Since $g=xh'$ for some $h' \in H$, we have $xhx^{-1}\cdot (g\ast y)=xhh'\ast y$. So $\phi(xhh'\ast y)=hh'\cdot y$. Thus, if we define an action of $B\cap xHx^{-1}$ on $Y$ by $xhx^{-1}\cdot y = h\cdot y$, the morphism $\phi : Y_x\to Y$ becomes a $(B\cap xHx^{-1})$-equivariant isomorphism. It follows that $\kappa_{B\cap xHx^{-1}}(Y_x)=\kappa_{B\cap xHx^{-1}}(Y)$. Since $x^{-1}(B\cap xHx^{-1})x = x^{-1}Bx\cap H$, we finally get $\kappa_{B\cap xHx^{-1}}(Y)=\kappa_{x^{-1}Bx\cap H}(Y)$.
The result follows. \end{proof} \begin{comment} Two parabolic subgroups $P$ and $Q$ of $G$ are called \emph{opposite} if $P\cap Q$ is a Levi subgroup of both $P$ and $Q$, see \cite[\S 14.20]{Bor}. The following result regarding opposite parabolic subgroups of $G$ combines Propositions 14.21 and 14.22 from \cite{Bor}. \begin{prop} Let $P$ and $Q$ be parabolic subgroups of $G$. \begin{itemize} \item[(i)] For any Levi subgroup $L$ of $P$ there exists precisely one parabolic subgroup of $G$ opposite to $P$ and containing $L$. \item[(ii)] Any two parabolic subgroups of $G$ opposite to $P$ are conjugate by a unique element of $R_u(P)$. \item[(iii)] The group $P\cap Q$ is connected and $(P\cap Q)R_u(P)$ is a parabolic subgroup of $G$. In particular, $P\cap Q$ contains a maximal torus of $G$. \end{itemize} \end{prop} \end{comment} Next we need a technical lemma. \begin{lem} \label{lem 5.10} Let $P$ be a parabolic subgroup of $G$. Then for $B$ ranging over $\B$, the intersection $B\cap P$ is minimal if and only if $B\cap P$ is a Borel subgroup of a Levi subgroup of $P$. \end{lem} \begin{proof} We may choose a Borel subgroup $B$ of $G$ so that $BP$ is open dense in $G$, cf.~the proof of Lemma \ref{lem 1.30}. Then the $P$-orbit of the base point in $G/B \cong \B$ is open dense in $\B$. Consequently, the stabilizer of this base point in $P$, that is, $P \cap B$, is minimal among all the isotropy subgroups $P \cap B'$ for $B'$ in $\B$. Clearly, $B$ is opposite to a Borel subgroup of $G$ contained in $P$. Thanks to \cite[Cor.~14.13]{Bor}, $P \cap B$ contains a maximal torus $T$ of $G$. Let $L$ be the unique Levi subgroup of $P$ containing $T$. Then \cite[Thm.\ 2.8.7]{Ca} implies that $P \cap B = T(R_u(B) \cap L)$. Clearly, $T(R_u(B) \cap L)$ is solvable and thus lies in a Borel subgroup of $L$.
A simple dimension counting argument, using Theorem \ref{thm 1.20} applied to the multiplication map $B \times P \to BP$ and the fact that $\dim BP = \dim G$, shows that $P \cap B$ is a Borel subgroup of $L$. Reversing the argument in the previous paragraph shows that if $P \cap B$ is a Borel subgroup of $L$, then $BP$ is dense in $G$ and thus $P \cap B$ is minimal again in the sense of the statement. \begin{comment} Fix a Levi decomposition $P=LU$ of $P$ where $L$ is a Levi subgroup of $P$ and $U=R_u(P)$. Let $P^-$ be the unique opposite parabolic subgroup to $P$ containing $L$. So $P^-=LU^-$ is a Levi decomposition of $P^-$ where $U^-=R_u(P^-)$. Clearly, if we fix a Borel subgroup $B_L$ of $L$, then $B=B_LU^-$ and $B'=B_LU$ are Borel subgroups of $G$. Thus, $B \cap P=B_L$ is a Borel subgroup of $L$. We claim that for this choice of $B$ the intersection $B\cap P$ is minimal for groups of this form. Suppose not, and let $B''$ be a Borel subgroup of $G$ such that $B''\cap P \lneqq B\cap P=B_L$. Clearly, we have $(B''\cap P)U\lneqq (B\cap P)U=B_LU=B'$ and $B'$ is a Borel subgroup of $G$. Consequently, by Proposition \ref{prop 5.20}(iii), $(B''\cap P)U$ is a parabolic subgroup of $G$, properly contained in a Borel subgroup of $G$. That is absurd. So for a Borel subgroup $B$ of $G$, $B \cap P$ is minimal, provided $B\cap P$ is a Borel subgroup of a Levi subgroup of $P$. Conversely, suppose that $B\cap P$ is minimal. For appropriate Levi decompositions $P=LU$ and $B=TU'$, where $T$ is a maximal torus of $G$ contained in $B\cap P$ and $U'=R_u(B)$, \cite[Thm.\ 2.8.7]{Ca} implies that $B\cap P = T(L\cap U')(U\cap T)(U\cap U')$. Since $U\cap T=\{1\}$, we have $B \cap P = T(L\cap U')(U\cap U')$. Thus, we can write $B\cap P = \overline{L}\:\overline{U}$, where $\overline{L}\leqslant L$ and $\overline{U}\leqslant U$ and both, $\overline{L}$ and $\overline{U}$ are solvable. 
Again by Proposition \ref{prop 5.20}(iii), we have $(B\cap P)U = \overline{L}\:\overline{U}U = \overline{L}U$ is a parabolic subgroup of $G$. Clearly, $\overline{L}U\leqslant B_LU$ for a suitable choice of a Borel subgroup $B_L$ of $L$. But $B_LU$ is a Borel subgroup of $G$ and $\overline{L}U$ is a parabolic subgroup of $G$ contained in $B_LU$. Thus, $\overline{L}U=B_LU$ and $\overline{L}=B_L$. Now $B \cap P=B_L\overline{U}$ is assumed to be minimal, but we know from the previous paragraph that a group of the form $B_L$ is minimal. Thus, $\overline{U}=\{1\}$ and $B\cap P$ is a Borel subgroup of a Levi subgroup of $P$. The result follows. \end{comment} \end{proof} Next we consider a special case of Proposition \ref{prop 5.10}. \begin{lem} \label{lem 5.20} Let $P$ be a parabolic subgroup of $G$ and let $Y$ be a $P$-variety. Then \[\kappa_G(\mathop{G\!\ast_P\!Y}\nolimits)=\kappa_L(Y),\] where $L$ is a Levi subgroup of $P$. \end{lem} \begin{proof} Proposition \ref{prop 5.10} implies that $\kappa_G(\mathop{G\!\ast_P\!Y}\nolimits)=\kappa_G(G/P)+\kappa_{B\cap P}(Y)$, where $\dim B\cap P$ is minimal. Lemmas \ref{lem 1.30} and \ref{lem 5.10} imply that $\kappa_G(G/P)=0$ and $B\cap P$ is a Borel subgroup of a Levi subgroup of $P$. The result follows. \end{proof} Let $e \in \N$ be a non-zero nilpotent element, $\lambda \in \Omega_G^a(e)$ be an associated cocharacter of $e$ and $\gg=\bigoplus_{i\in \mathbb{Z}}\gg(i)$ be the grading of $\gg$ induced by $\lambda$. Also let $P$ be the destabilizing parabolic subgroup of $G$ defined by $e$, cf.\ Subsection \ref{sub:kempf}. In particular, we have $\Lie P=\gg_{\geqslant 0}$, see Subsection \ref{sub:assco}. \begin{lem} \label{lem 5.30} Let $e \in \N$. Then $G\cdot\gg_{\geqslant 2}=\overline{G\cdot e}$. In particular, $\dim G\cdot \gg_{\geqslant 2} = \dim G\cdot e$. \end{lem} \begin{proof} Since $\gg_{\geqslant 2}$ is $P$-stable, $G\cdot\gg_{\geqslant 2}$ is closed, \cite[Prop.\ 0.15]{Hu2}. 
Thus, since $e \in \gg(2)\subseteq\gg_{\geqslant 2}$, we have $\overline{G\cdot e}\subseteq G\cdot\gg_{\geqslant 2}$. By Proposition \ref{prop 4.30}, $\overline{P\cdot e}=\gg_{\geqslant 2}$. Since $\overline{P\cdot e}\subseteq \overline{G\cdot e}$, we thus have $\gg_{\geqslant 2}\subseteq \overline{G\cdot e}$. Finally, as $\overline{G\cdot e}$ is $G$-stable, $G\cdot \gg_{\geqslant 2}\subseteq \overline{G\cdot e}$. The result follows. \end{proof} \begin{thm} \label{thm 5.10} Let $e \in \N$. Then \[\kappa_G(G\cdot e) = \kappa_L(\gg_{\geqslant 2}),\] where $L$ is a Levi subgroup of $P$. \end{thm} \begin{proof} We have $\kappa_G(G\cdot e) = \kappa_G(G/C_G(e)) = \kappa_G(G/C_P(e))$, thanks to \eqref{eq 1.10} and the fact that $C_G(e) = C_P(e)$. Moreover, since $G \!\ast_P\! P/C_P(e) \cong G/C_P(e)$, it follows from Lemma \ref{lem 5.20} that $\kappa_G(G/C_P(e)) = \kappa_L(P/C_P(e))$. Finally, thanks to Proposition \ref{prop 4.30} and \eqref{eq 1.10}, we obtain $\kappa_L(P/C_P(e)) = \kappa_L(\gg_{\geqslant 2})$. The result follows. \begin{comment} The collapsing $\phi: \mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits \to G\cdot \gg_{\geqslant 2}$ is $G$-equivariant and dominant. Theorem \ref{thm 1.10} and Lemma \ref{lem 5.30} imply that for a Borel subgroup $B$ of $G$ and $g \ast x \in \Gamma_{\mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits}(B)$, we have $\kappa_G(\mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits) = \kappa_G(\overline{G\cdot e})+\kappa_{H}(Z)$, where $Z$ is an irreducible component of $\phi^{-1}(\phi(g\ast x))$ passing through $g\ast x$ and $H = C_B(g\cdot x)^{\circ}$. We claim that $\dim Z = 0$. Since $\dim Z = \dim \phi^{-1}(\phi(g\ast x))=\dim \mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits -\dim \overline{G\cdot e}$, it suffices to show that $\dim \mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits =\dim G\cdot e$.
So, as $\dim \mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits = \dim G -\dim P +\dim \gg_{\geqslant 2}$ and $\dim P = \dim \gg_{\geqslant 0}$, we have $\dim \mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits = \dim G -\dim \gg(0)-\dim \gg(1)$. Finally, by Corollary \ref{cor 3.02}(i), we have $\dim \mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits =\dim G - \dim C_G(e)= \dim G\cdot e$. So the claim follows. Consequently, $Z = \{g\ast x\}$, and thus $\kappa_{H}(Z)=0$ and so $\kappa_G(\mathop{G\!\ast_P\!\gg_{\geqslant 2}}\nolimits)=\kappa_G(\overline{G\cdot e})$. Since the inclusion map $G\cdot e \to \overline{G\cdot e}$ is dominant, we have $\kappa_G(\overline{G\cdot e}) = \kappa_G(G\cdot e)$, by Theorem \ref{thm 1.10}. The result follows from Lemma \ref{lem 5.20}. \end{comment} \end{proof} \begin{rem} For $\mathop{\mathrm{char}}\nolimits k=0$, Theorem \ref{thm 5.10} was proved by Panyushev in \cite[Thm.\ 4.2.2]{Pa2}. \end{rem} \begin{rem} \label{rem 5.20} Thanks to Theorem \ref{thm 5.10}, in order to determine whether a nilpotent orbit is spherical, it suffices to show that a Borel subgroup of a Levi subgroup of $P$ acts on $\gg_{\geqslant 2}$ with a dense orbit. In our classification we pursue this approach. \end{rem} \subsection{Borel Subgroups of Levi Subgroups Acting on Unipotent Radicals} \label{sub:sec 6.2} Let $e \in \gg$ be a non-zero nilpotent element and let $\lambda\in \Omega_G^a(e)$ be an associated cocharacter for $e$. Let $P=P_{\lambda}$ be the destabilizing parabolic subgroup defined by $e$. We denote the Levi subgroup $C_G(\lambda)$ of $P$ by $L$. Our next result is taken from \cite[\S 3]{Ja1}. We only consider the case when $G$ is simple; the extension to the case when $G$ is reductive is straightforward. \begin{prop} \label{prop 5.05} Let $G$ be a simple classical algebraic group and $0 \ne e\in \gg$ be nilpotent with corresponding partition $\pi_e=[1^{r_1},2^{r_2},3^{r_3},\ldots]$.
Let $a_i,b_i,s,t \in \mathbb{Z}_{\geqslant 0}$ be such that $a_i+1=\sum_{j \geqslant i}r_{2j+1}$, $b_i+1=\sum_{j\geqslant i}r_{2j}$, $2s=\sum_{j\geqslant 0} r_{2j+1}$, and $2t+1=\sum_{j \geqslant 0} r_{2j+1}$. Then the structure of $\D L$ is as follows. \begin{itemize} \item[(i)] If $G$ is of type $A_n$, then $\D L$ is of type $\prod_{i\geqslant 0} A_{a_i} \times \prod_{i\geqslant 1} A_{b_i}$. \item[(ii)] If $G$ is of type $B_n$, then $\D L$ is of type $\prod_{i\geqslant 1} A_{a_i} \times \prod_{i \geqslant 1} A_{b_i}\times B_t$. \item[(iii)] If $G$ is of type $C_n$, then $\D L$ is of type $\prod_{i\geqslant 1} A_{a_i} \times \prod_{i\geqslant 1} A_{b_i}\times C_s$. \item[(iv)] If $G$ is of type $D_n$, then $\D L$ is of type $\prod_{i\geqslant 1} A_{a_i} \times \prod_{i\geqslant 1} A_{b_i}\times D_s$. \end{itemize} \end{prop} We use the conventions that $A_0=B_0=C_0=D_0=\{1\}$, $D_1\cong k^*$ and $D_2 = A_1\times A_1$. \begin{comment} \begin{exmp}\label{eg0.5}Let $G$ be of type $C_{10}$ and $e \in \gg$ be nilpotent with corresponding partition $\pi_e=[4,3^2,2^3,1^4]$. Thus, $a_1=1,b_2=3$ and $s=3$. Proposition \ref{prop 5.05} says that $\D L$ is of type $A_1\times A_3\times C_3$. \end{exmp} \end{comment} In order to describe the Levi subgroups $C_G(\lambda)$ for the exceptional groups we need to know more about associated cocharacters. Let $T$ be a maximal torus of $G$ such that $\lambda(k^*)\leqslant T$. Now let $G_{\mathbb{C}}$ be the simple, simply connected group over $\mathbb{C}$ with the same root system as $G$. Let $\gg_{\mathbb{C}}$ be the Lie algebra of $G_{\mathbb{C}}$. For a nilpotent element $e \in \gg_{\mathbb{C}}$ we can find an $\mathfrak{sl}_2$-triple containing $e$. Let $h\in\gg_{\mathbb{C}}$ be the semisimple element of this $\mathfrak{sl}_2$-triple. Note that $h$ is the image of $1$ under the differential at $1$ of the cocharacter $\lambda_{\mathbb{C}}$ of $G_{\mathbb{C}}$ corresponding to $\lambda$.
Then there exists a set of simple roots $\Pi$ of $\Psi$ such that $\alpha(h) \geqslant 0$ for all $\alpha \in \Psi^+$ and $\alpha(h) = m_{\alpha}\in\{0,1,2\}$ for all $\alpha \in \Pi$, see \cite[\S 5.6]{Ca}. For each simple root $\alpha\in \Pi$ we attach the numerical label $m_{\alpha}$ to the corresponding node of the Dynkin diagram. The resulting labels form the \emph{weighted Dynkin diagram} $\Delta(e)$ of $e$. We denote the set of weighted Dynkin diagrams of $G$ by $\D(\Pi)$. For $e,e' \in \gg_{\mathbb{C}}$ nilpotent, we have that $\Delta(e)=\Delta(e')$ if and only if $e$ and $e'$ are in the same $G_{\mathbb{C}}$-orbit. In order to determine the weighted Dynkin diagram of a given nilpotent orbit we refer to the method outlined in \cite[\S 13]{Ca} for the classical groups, and to the tables in \emph{loc.\ cit.} for the exceptional groups. \begin{comment} \begin{exmp}\label{eg 6.10}\begin{itemize} \item If $G$ is of type $B_5$ and $e \in \gg$ has corresponding partition $\pi_e=[3,2^2,1^4]$, then $e$ has the following weighted Dynkin diagram: \begin{figure}[ht] \beginpicture \setcoordinatesystem units <1.5cm,1.5cm> point at 0 0 \setplotarea x from 0.25 to 4, y from 1.5 to 2 \put {$\Delta(e)$:} [l] at 3.6 2 \put {$>$} at 6.25 2 \multiput {$\bullet$} at 4.5 2 *4 .5 0 / \putrule from 4.54 2 to 5.96 2 \putrule from 6.04 2.02 to 6.46 2.02 \putrule from 6.03 1.98 to 6.47 1.98 \put {1} at 4.5 2.2 \put {0} at 5 2.2 \put {1} at 5.5 2.2 \put {0} at 6 2.2 \put {0} at 6.5 2.2 \endpicture \end{figure \item If $G$ is of type $E_8$ and $e \in \gg$ has corresponding Bala--Carter label $E_6A_1$, then $e$ has the following weighted Dynkin diagram: \begin{figure}[ht] \beginpicture \setcoordinatesystem units <1.5cm,1.5cm> point at 0 0 \setplotarea x from -4 to 4, y from -3 to -2 \put {$\Delta(e)$:} [l] at -1 -2 \multiput {$\bullet$} at 0 -2 *6 .5 0 / \put {$\bullet$} at 1 -1.5 \putrule from 0.04 -2 to 0.46 -2 \putrule from 0.54 -2 to 0.96 -2 \putrule from 1.04 -2 to 1.46 -2 \putrule 
from 1.54 -2 to 1.96 -2 \putrule from 1 -1.96 to 1 -1.53 \put {2} at 0 -2.3 \put {2} at .5 -2.3 \put {1} at 1 -2.3 \put {0} at 1.5 -2.3 \put {1} at 2 -2.3 \put {0} at 1.2 -1.5 \put {0} at 2.5 -2.3 \put {1} at 3 -2.3 \putrule from 2.04 -2 to 2.46 -2 \putrule from 2.54 -2 to 2.96 -2 \endpicture \end{figure} \end{itemize} \end{exmp} \vspace{-1.25cm} \end{comment} We return to the case when the characteristic of $k$ is good for $G$. In this case the classification of the nilpotent orbits does not depend on the field $k$. \cite[\S 5.11]{Ca}. Recently, in \cite{Pr1} Premet gave a proof of this fact for the unipotent classes of $G$ which is free from case by case considerations. This applies in our case, since the classification of the unipotent conjugacy classes in $G$ and of the nilpotent orbits in $\N$ is the same in good characteristic, \cite[\S 9 and \S 11]{Ca}. First assume that $G$ is simply connected and that $G$ admits a finite-dimensional rational representation such that the trace form on $\gg$ is non-degenerate; see \cite[\S 2.3]{Pr1} for the motivation of these assumptions. Under these assumptions, given $\Delta \in \D(\Pi)$, there exists a cocharacter $\lambda=\lambda_{\Delta}$ of $G$ which is associated to $e$, where $e$ lies in the dense $L$-orbit in $\gg(2,\lambda)$, for $L = C_G(\lambda)$, such that \begin{equation} \label{eq 6.2} \Ad(\lambda(t))(e_{\pm \alpha})=t^{\pm m_{\alpha}}e_{\pm \alpha} \text{ and } \Ad(\lambda(t))(x) = x \end{equation} for all $\alpha \in \Pi, e_{\pm \alpha}\in \gg_{\pm \alpha}, x \in \ttt$ and $t \in k^*$, \cite[\S 2.4]{Pr1}. We extend this action linearly to all of $\gg$. Now return to the general simple case. Let $\widehat{G}$ be the simple, simply connected group with the same root datum as $G$. Then there exists a surjective central isogeny $\pi :\widehat{G} \to G$, \cite[\S 1.11]{Ca}. 
Also, an associated cocharacter for $e = d\pi(\widehat{e})$ in $\gg$ is of the form $\pi\circ\widehat{\lambda}$, where $\widehat{\lambda}$ is a cocharacter of $\widehat G$ that is associated to $\widehat{e}$ in $\widehat{\gg}$. This implies that \eqref{eq 6.2} holds for an arbitrary simple algebraic group when the characteristic of $k$ is good for $G$. After these deliberations we can use the tables in \cite[\S 13]{Ca} to determine the structure of the Levi subgroup $C_G(\lambda)$ for the exceptional groups. Recall that $\Lie C_G(\lambda)=\gg(0)$ and $\gg(0)$ is the sum of $\ttt$ and the root spaces $\gg_{\alpha}$, where $\alpha \in \Psi$ with $\langle\alpha,\lambda\rangle=0$. Let $\Pi_{0} = \{\alpha \in \Pi \mid m_\alpha = 0\}$, the set of nodes $\alpha$ of the corresponding weighted Dynkin diagram with label $m_\alpha = 0$. Then $C_G(\lambda)=\langle T,U_{\pm\alpha}\mid \alpha \in \Pi_{0}\rangle$. \begin{comment} \begin{exmp}\label{eg 6.20}Using Example \ref{eg 6.10} we have for $G$ of type $B_5$ that $\D C_G(\lambda)$ is of type $A_1A_2$, and for $G$ of Type $E_8$ that $\D C_G(\lambda)$ is of type $A_1A_1A_1$. \end{exmp} \end{comment} It is straightforward to determine the height of a nilpotent orbit from its associated weighted Dynkin diagram. Let $\tilde\alpha=\sum_{\alpha \in \Pi}c_{\alpha}\alpha$ be the highest root of $\Psi$. For each simple root $\alpha \in \Pi$ we have $\gg_{\alpha}\subseteq \gg(m_{\alpha})$ where $m_{\alpha}$ is the corresponding numerical label on the weighted Dynkin diagram, by \eqref{eq 6.2}. \begin{lem} \label{lem 6.05} Let $\tilde\alpha$ be the highest root of $\Psi$ and set $d = \mathop{\mathrm{ht}}\nolimits(e)$. Then $\gg_{\tilde\alpha}\subseteq \gg(d)$. \end{lem} \begin{proof} Clearly, we have $\gg_{\tilde\alpha}\subseteq \gg(i)$ for some $i\geqslant 0$.
The lemma is immediate, because if $\tilde\alpha=\sum_{\alpha\in\Pi}c_{\alpha}\alpha$ and $\beta=\sum_{\alpha\in\Pi}d_{\alpha}\alpha$ is any other root of $\Psi$, then $c_{\alpha}\geqslant d_{\alpha}$ for all $\alpha \in \Pi$. \end{proof} Lemma \ref{lem 6.05} readily implies \begin{equation} \label{eq 6.05} \mathop{\mathrm{ht}}\nolimits(e) = \sum_{\alpha \in \Pi}m_{\alpha}c_{\alpha}. \end{equation} The identity \eqref{eq 6.05} is also observed in \cite[\S 2.1]{Pa3}. \begin{comment} \begin{exmp}\label{eg 6.25} Let $G$ be of type $E_8$ and consider the nilpotent element $e\in \gg$ in Example \ref{eg 6.10}(ii). Since $\tilde\alpha=2\alpha_1+3\alpha_2+4\alpha_3+6\alpha_4+5\alpha_5+4\alpha_6+3\alpha_7+2\alpha_8$ is the highest root in a root system of type $E_8$ we can easily calculate that $\mathop{\mathrm{ht}}\nolimits(e)=24$.\end{exmp}\end{comment} \begin{comment} Now we digress from nilpotent orbits and consider the problem of Borel subgroups of Levi subgroups acting (via conjugation) on the unipotent radical of their parabolic subgroup. That is we are interested in $\kappa_L(R_u(P))$ where $P=LR_u(P)$ is an arbitrary parabolic subgroup of $G$ and $L$ is a Levi subgroup of $P$. In particular, we are interested when $R_u(P)$ is spherical, that is whether a Borel subgroup of $L$ acts on $R_u(P)$ with a dense orbit. Throughout this section we assume that $G$ is simple. \end{comment} \begin{comment} \begin{proof}If $G$ is simply connected, then the theorem holds, see \cite[Thm.\ 4.1]{Ro1}. Let $\widehat{G}$ be the simple and simply connected algebraic group with the same root system as $G$ and let $\pi : \widehat{G}\to G$ be a central isogeny. Let $\widehat{N}$ be a normal subgroup of a parabolic subgroup $\widehat{P}$ of $\widehat{G}$ which is contained in $R_u(\widehat{P})$. By \cite[Prop.\ 22.4(ii)]{Bor} the restriction of $\pi$ to $\widehat{N}$ is an isomorphism. Set $N=\pi(\widehat{N})$. 
We also note, see for example \cite[Thm.\ 22.6(i)]{Bor}, that $\pi(\widehat{P})$ is a parabolic subgroup of $G$, and all parabolic subgroups of $G$ are of this form. Hence all normal subgroups of $P$ which are contained in $R_u(P)$ are of this form, that is images under $\pi$ of such groups. We also have that $N$ and $\widehat{N}$ are bijective and that $\widehat{N}$ and $\widehat{\nn}$ are bijective. Finally since $\pi: \widehat{N}\to N$ is an isomorphism of varieties we have that $\widehat{\nn}$ and $\nn$ are bijective, so $N$ and $\nn$ are bijective. Clearly, each of the above bijections is $\widehat{P}$-equivariant, for $N$ and $\nn$ the group $\widehat{P}$ acts via the morphism $\pi$. In particular, there exists a $P$-equivariant bijective morphism $\varphi : N \to \nn$. \end{proof} \end{comment} For the remainder of this section we assume that $G$ is simple. The generalization of each of the subsequent results to the case when $G$ is reductive is straightforward. For $P$ a parabolic subgroup of $G$ we set $\pp_u=\Lie R_u(P)$. \begin{prop} \label{prop 6.10} Let $P=LR_u(P)$ be an arbitrary parabolic subgroup of $G$, where $L$ is a Levi subgroup of $P$. Then \[ \kappa_G(G/L)=\kappa_L(P/L)=\kappa_L(R_u(P))=\kappa_L(\pp_u). \] \end{prop} \begin{proof} Thanks to Lemma \ref{lem 5.20}, we have $\kappa_G(G/L) = \kappa_G(G\!\ast_P\! P/L) = \kappa_L(P/L)$. \begin{comment} There is a canonical $G$-equivariant dominant morphism $\varphi : G/L \to G/P$, by $\varphi(xL)= xP$ for $x \in G$. Since $G$ acts transitively on $G/L$ and $\varphi^{-1}(\varphi(1L))=P/L$, by Remark \ref{rem1.20}(i) and Theorem \ref{thm 1.10}, we have $\kappa_G(G/L)=\kappa_G(G/P)+\kappa_H(P/L)$, where $H$ is a subgroup of $G$ of the form $B\cap P$, where $B$ is a Borel subgroup of $G$ and $\dim B\cap P$ is minimal. By Lemma \ref{lem 5.10}, $B\cap P$ is a Borel subgroup of a Levi subgroup of $P$. 
The fact that all Levi subgroups of $P$ are conjugate implies that we may assume that $B\cap P$ is a Borel subgroup of $L$, so $\kappa_H(P/L)=\kappa_L(P/L)$. Finally, by Lemma \ref{lem 1.30}, $\kappa_G(G/P)=0$. It follows that $\kappa_G(G/L)=\kappa_L(P/L)$. \end{comment} If we write $P = R_u(P)L$, then the bijection $P/L = R_u(P)L/L \cong R_u(P)$ gives a canonical $L$-equivariant isomorphism $\phi : P/L \to R_u(P)$ defined by $\phi(xL) = y$, where $x = yz$ with $y\in R_u(P)$ and $z \in L$. Thus, we have $\kappa_L(P/L) = \kappa_L(R_u(P))$. A Springer isomorphism between the unipotent variety of $G$ and $\N$ restricts to an $L$-equivariant isomorphism $R_u(P) \to \pp_u$, e.g., see \cite[Cor.\ 1.4]{Go3}, so that $\kappa_L(R_u(P))=\kappa_L(\pp_u)$. \end{proof} \begin{rems} \label{rema 5.20} (i). While the first two equalities of Proposition \ref{prop 6.10} hold in arbitrary characteristic, the third equality requires the characteristic of the underlying field to be zero or a good prime for $G$; this assumption is required for the existence of a Springer isomorphism, cf.\ \cite[Cor.\ 1.4]{Go3}. (ii). Lemma 4.2 in \cite{Br1} states that there is a dense $L$-orbit on $G/B$ if and only if there is a dense $B_L$-orbit on $R_u(P)$, where $B_L$ is a Borel subgroup of $L$. Notice that there is a dense $L$-orbit on $G/B$ if and only if there is a dense $B$-orbit on $G/L$. In other words, $\kappa_G(G/L)=0$ if and only if $\kappa_L(R_u(P))=0$. Thus, Proposition \ref{prop 6.10} generalizes \cite[Lem.\ 4.2]{Br1}. \end{rems} By Proposition \ref{prop 6.10}, the problem of determining $\kappa_L(R_u(P))$ is equivalent to the problem of determining $\kappa_G(G/L)$. In particular, a Borel subgroup of $L$ acts on $R_u(P)$ with a dense orbit if and only if $L$ is a spherical subgroup of $G$. 
In fact, the latter have been classified: In characteristic zero this result was proved by M.\ Kr\"{a}mer in \cite{Kr} and extended to arbitrary characteristic by J.\ Brundan in \cite[Thm.\ 4.1]{Br1}: \begin{thm} \label{thm 6.10} Let $L$ be a proper Levi subgroup of a simple group $G$. Then $L$ is spherical in $G$ if and only if $(G, \D L)$ is one of $(A_n, A_{i-1}A_{n-i})$, $(B_n, B_{n-1})$, $(B_n, A_{n-1})$, $(C_n, C_{n-1})$, $(C_n, A_{n-1})$, $(D_n, D_{n-1})$, $(D_n, A_{n-1})$, $(E_6, D_5)$, or $(E_7, E_6)$. \end{thm} \begin{comment}\begin{rem}\label{rem 6.10}Note if $G$ is of type $E_8$, $F_4$ or $G_2$, then $G$ has no proper spherical Levi subgroups. Also note that if $L$ is a proper spherical Levi subgroup of $G$, then $\mathop{\mathrm{rk}}\nolimits \D L=\mathop{\mathrm{rk}}\nolimits G -1$.\end{rem}\end{comment} We also recall the classification of the parabolic subgroups of $G$ with an abelian unipotent radical, cf.\ \cite[Lem.\ 2.2]{RiRoSt}. \begin{lem} \label{lem 6.20} Let $G$ be a simple algebraic group and $P$ be a parabolic subgroup of $G$. Then $R_u(P)$ is abelian if and only if $P$ is a maximal parabolic subgroup of $G$ which is conjugate to the standard parabolic subgroup $P_I$ of $G$, where $I = \Pi\setminus\{\alpha\}$ and $\alpha$ occurs in the highest root $\tilde\alpha$ with coefficient $1$. \end{lem} Let $\Pi=\{\alpha_1,\alpha_2, \ldots, \alpha_n\}$ be a set of simple roots of the root system $\Psi$ of $G$. Using Lemma \ref{lem 6.20}, we can readily determine the standard parabolic subgroups $P_I$ of $G$ with an abelian unipotent radical. For $G$ simple we gather this information in Table \ref{Tab 6.1} below along with the structure of the corresponding standard Levi subgroup $L_I$ of $P_I$. Set $P_{\alpha_{i}'} = P_{\Pi\setminus \{\alpha_i\}}$. Here the simple roots are labelled as in \cite[Planches I - IX]{Bo2}. 
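As an illustration of Lemma \ref{lem 6.20} (with the simple roots labelled as in \cite[Planches I - IX]{Bo2}): if $G$ is of type $C_n$, then the highest root is $\tilde\alpha=2\alpha_1+2\alpha_2+\cdots+2\alpha_{n-1}+\alpha_n$, so $\alpha_n$ is the only simple root occurring in $\tilde\alpha$ with coefficient $1$. Hence $P_{\alpha_n'}$ is, up to conjugacy, the unique parabolic subgroup of $G$ with abelian unipotent radical, and its standard Levi subgroup has derived subgroup of type $A_{n-1}$.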
\begin{table}[ht] \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{|c|c|c|} \hline Type of $G$ & $P_I$ & Type of $\D L_I$ \\ \hline $A_n$ & $P_{\alpha_{i}'}$ for $1\leqslant i \leqslant n$ & $A_{i-1}A_{n-i}$ \\ $B_n$ & $P_{\alpha_{1}'}$ & $B_{n-1}$ \\ $C_n$ & $P_{\alpha_{n}'}$ & $A_{n-1}$ \\ $D_n$ & $P_{\alpha_{1}'},P_{\alpha_{n-1}'}$ and $P_{\alpha_{n}'}$ & $D_{n-1}$ or $A_{n-1}$ \\ $E_6$ & $P_{\alpha_{1}'}$ and $P_{\alpha_{6}'}$ & $D_5$ \\ $E_7$ & $P_{\alpha_{7}'}$ & $E_6$ \\ \hline \end{tabular} \medskip \caption{Parabolic Subgroups with Abelian Unipotent Radical.} \label{Tab 6.1} \end{table} Note that if $G$ is of type $E_8$, $F_4$ or $G_2$, then $G$ does not admit a parabolic subgroup with an abelian unipotent radical. Also compare the list of pairs $(G,\D L)$ from Table \ref{Tab 6.1} with the list in Theorem \ref{thm 6.10}. Our next result is immediate from \cite[Thm.\ 4.1, Lem.\ 4.2]{Br1}. \begin{prop} \label{prop 6.20} If $P=LR_u(P)$ is a parabolic subgroup of $G$ with $R_u(P)$ abelian, then $\kappa_L(R_u(P))=0$. \end{prop} \begin{proof} If $R_u(P)$ is abelian, then using Table \ref{Tab 6.1} we see that all the possible pairs $(G,\D L)$ appear in the list of spherical Levi subgroups given in Theorem \ref{thm 6.10}, that is $\kappa_G(G/L)=0$. Proposition \ref{prop 6.10} then implies that $\kappa_L(R_u(P))=0$. \end{proof} \begin{cor} \label{cor 6.10} If $P$ is a parabolic subgroup of $G$ with $R_u(P)$ abelian, then $\kappa_L(\pp_u)=0$. \end{cor} Let $\Psi$ be the root system of $G$ and let $\Pi\subseteq \Psi$ be a set of simple roots of $\Psi$. Let $P = P_I$ ($I\subseteq \Pi$) be a standard parabolic subgroup of $G$. Let $\Psi_I$ be the root system of the standard Levi subgroup $L_I$, i.e., $\Psi_I$ is spanned by $I$. Define $\Psi_I^+=\Psi_I\cap \Psi^+$. 
For any root $\alpha \in \Psi$ we can uniquely write $\alpha=\alpha_I+\alpha_{I'}$ where $\alpha_I=\sum_{\beta \in I} c_{\beta}\beta$ and $\alpha_{I'}=\sum_{\beta \in \Pi\setminus I} d_{\beta}\beta$. We define the \emph{level of} $\alpha$ (\emph{relative to $P$} or \emph{relative to $I$}) to be \[ \mathop{\mathrm{lv}}\nolimits(\alpha) := \sum\limits_{\beta \in \Pi\setminus I} d_{\beta}, \] cf.\ \cite{AzBaSe}. Let $d$ be the maximal level of any root in $\Psi$. If $2i> d$, then \[ A_i := \prod\limits_{\mathop{\mathrm{lv}}\nolimits(\alpha)=i}U_{\alpha} \] is an abelian unipotent subgroup of $G$. Note $A_d$ is the centre of $R_u(P)$. Since $L$ normalizes each $A_i$, we can consider $\kappa_L(A_i)$. \begin{comment} It is sufficient to show that $(U_{\alpha},U_{\beta})=\{e\}$ for all $\alpha,\beta \in \Psi$ with $\mathop{\mathrm{lv}}\nolimits(\alpha)=\mathop{\mathrm{lv}}\nolimits(\beta)=i$. The Chevalley commutator relations imply that $(U_{\alpha},U_{\beta})\subseteq\prod_{l,j>0\:;\:l\alpha+j\beta \in \Psi} U_{l\alpha+j\beta}$, $(U_{\alpha},U_{\beta})=\{e\}$. Since $\mathop{\mathrm{lv}}\nolimits(l\alpha+j\beta)=l\mathop{\mathrm{lv}}\nolimits(\alpha)+j\mathop{\mathrm{lv}}\nolimits(\beta)\geqslant 2i>d$, we see that $l\alpha+j\beta$ is not a root for any $l,j>0$. Note, if $i > d=\mathop{\mathrm{ht}}\nolimits(e)$ then $A_i$ is trivial. \end{comment} \begin{prop} \label{prop 6.30} If $P$ is a parabolic subgroup of $G$ and $2i>d$, then $\kappa_L(A_i)=0$. \end{prop} \begin{proof} We maintain the setup from the previous paragraph. Setting $A_i=\prod_{\mathop{\mathrm{lv}}\nolimits(\alpha)=i}U_{\alpha}$ and $A_i^-=\prod_{\mathop{\mathrm{lv}}\nolimits(\alpha)=-i}U_{\alpha}$, let $H$ be the subgroup of $G$ generated by $A_i$, $A_i^-$, and $L$. Then $H$ is reductive, with root system $\Psi_I\cup\{\alpha\in\Psi \mid \mathop{\mathrm{lv}}\nolimits(\alpha)=\pm i\}$, and $LA_i$ is a parabolic subgroup of $H$. 
Since $A_i$ is abelian, we can invoke Proposition \ref{prop 6.20} to deduce that $\kappa_L(A_i)=0$. \end{proof} There is a natural Lie algebra analogue of Proposition \ref{prop 6.30}: Maintaining the setup from above, for $2i> d$, we see that $\aaa_i := \bigoplus_{\mathop{\mathrm{lv}}\nolimits(\alpha)=i}\gg_{\alpha}$ is an abelian subalgebra of $\gg$. Since $\Lie U_{\alpha} = \gg_{\alpha}$ for all $\alpha \in \Psi$, we have $\Lie A_i =\aaa_i$. Thanks to \cite[Cor.\ 1.4]{Go3}, we obtain the following consequence of Proposition \ref{prop 6.30}. \begin{comment} Since the characteristic of $k$ is good for $G$ we have that there are no degenerations in the Chevalley commutator relations, so we have that $\Lie Z(R_u(P))=Z(\nn)$.\end{comment} \begin{cor} \label{cor 6.35} If $P$ is a parabolic subgroup of $G$ and $2i>d$, then $\kappa_L(\aaa_i)=0$. \end{cor} \begin{comment}\begin{cor}\label{cor 6.30}If $P=LR_u(P)$ is a parabolic subgroup of $G$, then $\kappa_L(Z(\nn))=0$.\end{cor}\end{comment} \begin{rems} (i). Corollary \ref{cor 6.35} was first proved, for a field of characteristic zero, in \cite[Prop.\ 3.2]{Pa3}, although the proof there is somewhat different from ours. (ii). Propositions \ref{prop 6.20} and \ref{prop 6.30} suggest that if $A$ is an abelian subgroup of $R_u(P)$ which is normal in $P$, then $\kappa_L(A)=0$. It is indeed the case that $P$ acts on $A$ with a dense orbit, see \cite[Thm.\ 1.1]{Ro}. However, this is not the case when we consider instead the action of a Borel subgroup of a Levi subgroup of $P$ on $A$. For example, it follows from \cite[Table 1]{Ro} that if $G$ is of type $A_n$, then the dimension of a maximal normal abelian subgroup $A$ of a Borel subgroup $B$ of $G$ is $i(n+1-i)$, where $1\leqslant i \leqslant n$. Clearly, for $1\neq i\neq n$ we have $\dim A > \mathop{\mathrm{rk}}\nolimits G$. Thus, a maximal torus of $B$ cannot act on $A$ with a dense orbit. Using \cite[Table 1]{Ro}, it is easy to construct further examples.
\end{rems} \begin{comment} \ref{cor 6.30} also implies that $L$ (and so $P$) acts on $Z(R_u(P))$ with a dense orbit. Of course, these results are well known, see for example the discussion in \cite[\S 1]{Ro}.\end{comment} \section{The Classification of the Spherical Nilpotent Orbits} \label{sect:classification} \subsection{Height Two Nilpotent Orbits} \label{sub:ht2} In this subsection we show that height two nilpotent orbits are spherical. Let $e \in \gg$ be nilpotent and let $\lambda \in \Omega_G^a(e)$ be an associated cocharacter of $G$. Define the following subalgebra of $\gg$: \begin{equation} \label{eq 6.1} \gg_E := \bigoplus_{i \in \mathbb{Z}}\gg(2i). \end{equation} \begin{prop} \label{prop 6.40} Let $e \in \N$, $\lambda \in \Omega_G^a(e)$, and let $\gg_E$ be the subalgebra of $\gg$ defined in \eqref{eq 6.1}. \begin{itemize} \item[(i)] There exists a connected reductive subgroup $G_E$ of $G$ such that $\Lie G_E=\gg_E$. \item[(ii)] There exists a parabolic subgroup $Q$ of $G_E$ such that $\Lie Q=\bigoplus_{i\geqslant 0}\gg(2i)$. Moreover, $C_G(\lambda)$ is a Levi subgroup of $Q$ and $\Lie R_u(Q)=\bigoplus_{i \geqslant 1}\gg(2i)$. \end{itemize} \end{prop} \begin{proof} Fix a maximal torus $T$ of $G$ such that $\lambda(k^*) \leqslant T$. Set $\Phi=\{\alpha \in\Psi \mid \langle\alpha,\lambda\rangle \in 2\ZZ\}$. Then $\Phi$ is a semisimple subsystem of $\Psi$ and $\gg_E = \Lie T \oplus \bigoplus_{\alpha \in \Phi} \gg_{\alpha}$. The subgroup $G_E$ generated by $T$ and all the one-dimensional root subgroups $U_{\alpha}$ with $\alpha \in \Phi$ is reductive and has Lie algebra $\gg_E$. Let $Q=P\cap G_E$, where $P = P_\lambda$. Since $\lambda(k^*)\leqslant T\leqslant G_E$, we see that $Q$ is a parabolic subgroup of $G_E$, see the remarks preceding Theorem \ref{thm 4.15}. Since $\Lie C_G(\lambda)=\gg(0)$, we have $C_G(\lambda)\leqslant Q$ and so $C_G(\lambda)$ is a Levi subgroup of $Q$.
The remaining claims follow from the fact that $\Lie P=\gg_{\geqslant 0}$, the parabolic subgroup $P$ has Levi decomposition $P=C_G(\lambda)R_u(P)$ and $\Lie R_u(P)=\gg_{>0}$. \end{proof} The following discussion and Lemma \ref{lem 6.500} allow us to reduce the determination of the spherical nilpotent orbits to the case when $G$ is simple. Since the centre of $G$ acts trivially on $\gg$, we may assume that $G$ is semisimple. Let $\tilde G$ be semisimple of adjoint type and $\pi : G \to \tilde G$ be the corresponding isogeny. Let $e \in \gg$ be nilpotent and let $\tilde e = d\pi_1(e)$. Consider the restriction of $d\pi_1$ to the nilpotent variety of $\gg$. Then $d\pi_1 : \N \to \tilde \N$ is a dominant $G$-equivariant morphism, where $\tilde \N$ is the nilpotent variety of $\Lie \tilde G$ and $G$ acts on $\tilde \N$ via $\tilde \Ad \circ \pi$. It then follows from Theorem \ref{thm 1.10} that $\kappa_G(G\cdot e) = \kappa_{\tilde G}(\tilde G \cdot \tilde e)$. We therefore may assume that $G$ is semisimple of adjoint type. \begin{lem} \label{lem 6.500} Let $G$ be semisimple of adjoint type. Then $G$ is a direct product of simple groups $G = G_1G_2 \cdots G_r$. If $e \in \gg$ is nilpotent, then $e = e_1 + e_2 + \ldots + e_r$ for $e_i$ nilpotent in $\gg_i = \Lie G_i$ and $\kappa_G(G\cdot e) = \sum_{i=1}^r \kappa_{G_i}(G_i \cdot e_i)$. \end{lem} \begin{proof} Since $G$ is semisimple of adjoint type, so that $G$ is the direct product $G = G_1G_2 \cdots G_r$ of simple groups $G_i$, we have $\Lie G = \oplus \Lie G_i$. Let $e \in \gg$ be nilpotent. Clearly, any element $x \in C_G(e)$ is of the form $x = x_1 x_2 \cdots x_r$ where $x_i \in G_i$ and we also have that $e = e_1 + e_2 + \ldots +e_r$, where $e_i \in \gg_i$ and each $e_i$ must be nilpotent. We know that $\Ad(x)(e) = e$ so $\Ad(x_1)\Ad(x_2) \cdots \Ad(x_r)(e_1 + e_2 + \ldots + e_r) = e_1 + e_2 + \ldots + e_r$. For $i \ne j$ we have $\Ad(x_i)(e_j) = e_j$, so $\Ad(x)(e_i) = \Ad(x_i)(e_i)$. 
Therefore, as $\Ad(x_i)$ stabilizes $\gg_i$, we have $\Ad(x_i)(e_i) = e_i$. Thus, we obtain the following decomposition $C_G(e) = C_{G_1}(e_1)C_{G_2}(e_2) \cdots C_{G_r}(e_r)$. For $B$ a Borel subgroup of $G$ we have $B = B_1 B_2 \cdots B_r$, where each $B_i$ is a Borel subgroup of $G_i$ and $C_B(e) = C_{B_1}(e_1)C_{B_2}(e_2) \cdots C_{B_r}(e_r)$. In particular, for $B \in \Gamma_G(e)$ we have that $\dim C_B(e)$ is minimal. This implies that $\dim C_{B_i}(e_i)$ is minimal for each $i$ and so $B_i \in \Gamma_{G_i}(e_i)$. Therefore, we have \begin{align*} \kappa_G(G \cdot e) & = \dim G - \dim C_G(e) - \dim B + \dim C_B(e) \\ & = \sum_{i=1}^r \dim G_i - \sum_{i=1}^r \dim C_{G_i}(e_i) - \sum_{i=1}^r \dim B_i + \sum_{i=1}^r \dim C_{B_i}(e_i)\\ & = \sum_{i=1}^r (\dim G_i - \dim C_{G_i}(e_i) - \dim B_i + \dim C_{B_i}(e_i))\\ & = \sum_{i=1}^r \kappa_{G_i}(G_i \cdot e_i), \end{align*} and the result follows. \end{proof} \begin{lem} \label{lem 6.50} Let $G$ be a connected reductive algebraic group and $e \in \gg$ be nilpotent. If $\mathop{\mathrm{ht}}\nolimits(e)=2$, then $e$ is spherical. \end{lem} \begin{proof} First we assume that $G$ is simple. Let $\lambda \in \Omega_G^a(e)$. Let $\gg_E$ be the Lie subalgebra of $\gg$ as defined in \eqref{eq 6.1} and let $Q$ be the parabolic subgroup of $G_E$ as in Proposition \ref{prop 6.40}(ii). Since $\mathop{\mathrm{ht}}\nolimits(e)=2$, we have $\gg_E=\gg(-2)\bigoplus \gg(0)\bigoplus \gg(2)$. Set $L=C_G(\lambda)$. Then $\kappa_G(G\cdot e)=\kappa_L(\gg(2))$, by Theorem \ref{thm 5.10}. Also, by Proposition \ref{prop 6.40}, $\Lie R_u(Q)=\gg(2)$. Since $R_u(Q)$ is abelian, Corollary \ref{cor 6.10} implies that $\kappa_L(\gg(2))=0$. Now suppose that $G$ is reductive. Let $\D G=G_1G_2\cdots G_r$ be a commuting product of simple groups. For $e \in \gg$ we have $e=e_1+e_2+\ldots +e_r$, where $e_i \in \gg_i = \Lie G_i$ and each $e_i$ is nilpotent. 
Since $\mathop{\mathrm{ht}}\nolimits(e)=\max_{1\leqslant i \leqslant r}\mathop{\mathrm{ht}}\nolimits(e_i)$, we have $\mathop{\mathrm{ht}}\nolimits(e_i)\leqslant \mathop{\mathrm{ht}}\nolimits(e)=2$ for all $i$. Since $\kappa_G(G\cdot e) = \sum_{i=1}^r \kappa_{G_i}(G_i\cdot e_i)$, by Lemma \ref{lem 6.500}, the result follows from the simple case just proved. \end{proof} \begin{comment} As mentioned in Section \ref{chak}, Lemma \ref{lem 6.50} was proved by D.I.\ Panyushev for a field of characteristic zero in \cite{Pa3}. His methods are very different from ours, see Section \ref{chak} for a review of Panyushev's proof. We also note that our methods are valid in characteristic zero as well. \begin{thm}\label{thm 6.30}Let $G$ be a simple algebraic group. The following are the height two nilpotent orbits in $\gg$. In particular, these orbits are spherical. \begin{itemize} \item For $G$ of type $A_n$, the orbits with partitions $\pi=[1^j,2^i]$. \item For $G$ of type $B_n$, the orbits with partitions $\pi=[1^j,2^{2i}]$ or $\pi=[1^{j},3]$. \item For $G$ of type $C_n$, the orbits with partitions $\pi=[1^{2j},2^i]$. \item For $G$ of type $D_n$, the orbits with partitions $\pi=[1^j,2^{2i}]$ or $\pi=[1^j,3]$. \item For $G$ of type $G_2$, the orbit with Bala--Carter label $A_1$. \item For $G$ of type $F_4$, the orbits with Bala--Carter labels $A_1$ or $\tilde{A_1}$. \item For $G$ of type $E_6$, the orbits with Bala--Carter labels $A_1$ or $A_1A_1$. \item For $G$ of type $E_7$, the orbits with Bala--Carter labels $A_1$\:,\:$A_1A_1$ or $(A_1A_1A_1)''$. \item For $G$ of type $E_8$, the orbits with Bala--Carter labels $A_1$ or $A_1A_1$. \end{itemize} \end{thm} \begin{proof} For the classical groups we use Proposition \ref{prop 4.50} to determine the orbits with height two. Using the tables in \cite[\S 13]{Ca} and \eqref{eq 6.05}, we see that the orbits listed above are precisely the height two nilpotent orbits for the exceptions groups. 
\end{proof} We will make extensive use of the results from Sections \ref{levi} and \ref{sec 6.2} in the next two chapters, where we consider nilpotent orbits of height at least four and height equal to three, respectively. \chapter{Nilpotent Orbits of Height at Least Four}\label{C7} Throughout this chapter $G$ is a connected reductive algebraic group over an algebraically closed field $k$, the Lie algebra of $G$ is denoted by $\gg$ and the characteristic of $k$ is good for $G$. Recall the standard setup from Chapter \ref{C4}. For $e \in \gg$ a non-zero nilpotent element we let $\lambda\in Y(G)$ be an associated cocharacter for $e$ in $\gg$ and we have the canonical parabolic subgroup $P_{\lambda}=P=C_G(\lambda)R_u(P)$ for $e \in \gg$. Also recall the definition of height of a nilpotent element $e$, see Definition \ref{def 4.40}. \end{comment} \subsection{Even Gradings} \label{sub:even} Suppose that the given nilpotent element $e\in\gg$ satisfies $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$. Also assume that any $\lambda \in \Omega_G^a(e)$ induces an \emph{even grading} on $\gg$, that is $\gg(i,\lambda)=\{0\}$ whenever $i$ is odd. As usual we denote $\gg(i,\lambda)$ simply by $\gg(i)$. \begin{lem} \label{lem 7.10} Let $e \in \N$ and $\lambda \in \Omega_G^a(e)$ be as above. Then $\gg_{\geqslant 2}$ is non-abelian. \end{lem} \begin{proof} Set $\mathop{\mathrm{ht}}\nolimits(e)=d$. For the highest root $\tilde\alpha\in \Psi^+$ we have $\gg_{\tilde\alpha}\subseteq \gg(d)$. Write $\tilde\alpha=\alpha_1+\alpha_2+\ldots +\alpha_r$ as a sum of not necessarily distinct simple roots. The sequence of simple roots $\alpha_1,\alpha_2,\ldots ,\alpha_r$ can be chosen so that $\alpha_1+\alpha_2+\ldots +\alpha_s$ is a root for all $1\leqslant s \leqslant r$, \cite[Cor.\ 10.2.A]{Hu1}. Since the grading of $\gg$ induced by $\lambda$ is even, for all simple roots $\alpha \in \Pi$, we have $\gg_\alpha\subseteq\gg(i)$ with $i \in \{0,2\}$, cf.\ \eqref{eq 6.2}. 
Since $d \geqslant 4$, for at least one $\alpha_i$ we must have $\gg_{\alpha_i}\subseteq \gg(2)$. Let $\alpha_k$ be the last simple root in the sequence $\alpha_1,\alpha_2,\ldots ,\alpha_r$ with this property. Thus, for $\beta=\alpha_1+\alpha_2+\ldots +\alpha_{k-1}$ we have $\gg_{\beta} \subseteq \gg(d-2)\subseteq\gg_{\geqslant 2}$. Since $\mathop{\mathrm{char}}\nolimits k$ is good for $G$, we have $[\gg_{\beta},\gg_{\alpha_k}]=\gg_{\beta'}$ where $\beta'=\beta +\alpha_{k}$. Therefore, $\gg_{\geqslant 2}$ is non-abelian. \end{proof} \begin{cor} \label{cor} Let $P$ be the destabilizing parabolic subgroup of $G$ defined by $e \in \N$. Then $R_u(P)$ is non-abelian. \end{cor} \begin{comment} \begin{proof} Since $\lambda$ induces an even grading on $\gg$, we have that $\Lie R_u(P)=\gg_{\geqslant 2}$, see Section \ref{sec 3.1}. Since $\gg_{\geqslant 2}$ is non-abelian we have that $R_u(P)$ is also non-abelian: The Chevalley commutator relations and the fact that $\mathop{\mathrm{char}}\nolimits k$ is good for $G$ imply that, for $\alpha_k$ and $\beta$ as in Lemma \ref{lem 7.10}, we have $(U_{\alpha_k},U_{\beta})\neq \{e\}$. \end{proof}\end{comment} Set $\pp_u=\Lie R_u(P)$. Because the grading of $\gg$ is even, $\gg_{\geqslant 2}=\pp_u$. Thus, by Proposition \ref{prop 6.10} and Theorem \ref{thm 5.10}, we have $\kappa_G(G\cdot e)=\kappa_G(G/L)$, where $L=C_G(\lambda)$. Using the classification of the spherical Levi subgroups and the classification of the parabolic subgroups of $G$ with abelian unipotent radical, Theorem \ref{thm 6.10} and Lemma \ref{lem 6.20}, we see that there are only two cases, for $G$ simple, when $R_u(P)$ is non-abelian and $L$ is spherical, namely when $G$ is of type $B_n$ and $\D L$ is of type $A_{n-1}$ and when $G$ is of type $C_n$ and $\D L$ is of type $C_{n-1}$. \begin{lem} \label{lem 7.15} Let $G$ be of type $B_n$ or of type $C_n$. Let $e \in \N$ and $\lambda \in \Omega_G^a(e)$. Set $L=C_G(\lambda)$.
If $\pi_e=[1^{r_1},2^{r_2},\ldots]$ is the corresponding partition for $e$, then $\dim Z(L)=|\{a_i,b_i \in \ZZ_{\geqslant 0} \mid a_i+1=\sum_{j\geqslant i}r_{2j+1}, b_i+1=\sum_{j\geqslant i}r_{2j}\}|$. \end{lem} \begin{proof} Since $L$ is reductive, $L=Z(L)\D L$, and $Z(L) \cap \D L$ is finite, we have $\dim L = \dim Z(L) + \dim \D L$. The result follows from Proposition \ref{prop 5.05}. \end{proof} It is straightforward to deduce the following from Propositions \ref{prop 4.50} and \ref{prop 5.05}. \begin{lem} \label{Lem 7.20} Let $e \in \N$ and $\lambda \in \Omega_G^a(e)$ with $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$. Set $L=C_G(\lambda)$. If $G$ is of type $B_n$, then $\D L$ is not of type $A_{n-1}$ and if $G$ is of type $C_n$, then $\D L$ is not of type $C_{n-1}$. \end{lem} \begin{comment} \begin{proof} Let $\pi_e =[1^{r_1},2^{r_2},\ldots]$ be the partition corresponding to $e\in \N$. We refer to Proposition \ref{prop 4.50} for information regarding the relationship between $\mathop{\mathrm{ht}}\nolimits(e)$ and the partition $\pi_e$ and to Proposition \ref{prop 5.05} for information regarding the structure of $\D L$, we also maintain the notation used there. It is sufficient to show that $\mathop{\mathrm{rk}}\nolimits \D L \neq n-1$, or alternatively that $\dim Z(L)\neq 1$. Suppose $G$ is of type $B_n$. Choose $q \in \mathbb{N}$ such that $r_q\neq 0$ but $r_{j}=0$ if $j>q$. Since $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$, either $q\geqslant 4$ or $q=3$ and $r_3\geqslant 2$, by Proposition \ref{prop 4.50}. If $q$ is odd and $q\geqslant 5$, then $a_1,a_2 \geqslant 0$, so $\dim Z(L)\geqslant 2$, by Lemma \ref{lem 7.15}, and $\D L$ is not of type $A_{n-1}$. If $q$ is even, then $b_1,b_2 \geqslant 0$, so $\dim Z(L)\geqslant 2$, by Lemma \ref{lem 7.15}, and $\D L$ is not of type $A_{n-1}$. Finally, if $q=3$ and $r_3\geqslant 2$, then $a_1,t \geqslant 1$ and so groups of type $A_{a_1}$ and $B_t$ are simple factors of $\D L$. 
In particular, $\D L$ is not of type $A_{n-1}$. Suppose $G$ is of type $C_n$. Choose $q \in \mathbb{N}$ such that $r_q\neq 0$ but $r_{j}=0$ if $j>q$. Since $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$, we have $q\geqslant 3$, by Proposition \ref{prop 4.50}. If $q$ is even, then $b_1,b_2 \geqslant 0$, so $\dim Z(L)\geqslant 2$, by Lemma \ref{lem 7.15}, and $\D L$ is not of type $C_{n-1}$. If $q$ is odd, then $r_q\geqslant 2$, see \cite[Thm.\ 1.6]{Ja1}, and $a_1, s\geqslant 1$, so groups of type $A_{a_1}$ and $C_s$ are simple factors of $\D L$. In particular, $\D L$ is not of type $C_{n-1}$. \end{proof} \end{comment} \begin{lem} \label{lem 7.100} Let $e \in \N$ and suppose that $\lambda \in \Omega_G^a(e)$ induces an even grading on $\gg$. If $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$, then $e$ is non-spherical. \end{lem} \begin{proof} First we observe that if $G$ is simple, then the statement follows from the facts that $R_u(P)$ is non-abelian (Corollary \ref{cor}) and that $(G,\D L)$ is not one of the pairs $(B_n,A_{n-1})$ or $(C_n,C_{n-1})$ (Lemma \ref{Lem 7.20}). So by Theorem \ref{thm 6.10} and Lemma \ref{lem 6.20}, we see that $L$ is a non-spherical subgroup. Therefore, by Proposition \ref{prop 6.10}, $\kappa_L(\gg_{\geqslant 2})>0$ and $e$ is non-spherical. In case $G$ is reductive, we argue as in the proof of Lemma \ref{lem 6.50} and reduce to the simple case. \end{proof} \subsection{Nilpotent Orbits of Height at Least Four} \label{sub:ht4} Let $e \in \gg$ be nilpotent and let $\lambda \in \Omega_G^a(e)$. Let $\gg_E$ be the subalgebra of $\gg$ as defined in \eqref{eq 6.1}. Also let $G_E$ be the connected reductive algebraic group such that $\Lie G_E=\gg_E$ and $Q$ be the parabolic subgroup of $G_E$ as in Proposition \ref{prop 6.40}(ii). Since $e \in \gg_E$ and $\lambda(k^*) \leqslant G_E$, it follows from \cite[Thm.\ 1.1]{FoRo} that $\lambda$ is a cocharacter of $G_E$ which is associated to $e$, i.e.\ $\lambda \in \Omega_{G_E}^a(e)$. 
Moreover, for $P = P_\lambda$, we have that $Q = P\cap G_E$ is the destabilizing parabolic subgroup of $G_E$ defined by $e$. Let $\mathop{\mathrm{ht}}\nolimits_E(e)$ denote the height of $e \in \gg_E$. Now if $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$ and $\mathop{\mathrm{ht}}\nolimits(e)$ is even, then $\mathop{\mathrm{ht}}\nolimits_E(e)=\mathop{\mathrm{ht}}\nolimits(e)$. The case when $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$ and $\mathop{\mathrm{ht}}\nolimits(e)$ is odd is slightly more involved. First we need some preliminary results. A proof of the following can be found in \cite[Prop.\ 2.4]{Pa2}. \begin{lem} \label{lem 7.30} Suppose that $\mathop{\mathrm{char}}\nolimits k=0$. If $e \in \N$ with $\mathop{\mathrm{ht}}\nolimits(e)$ odd, then the weighted Dynkin diagram $\Delta(e)$ contains no ``2'' labels. \end{lem} If $\Pi$ is a set of simple roots of $\Psi$ relative to a maximal torus $T$ which contains $\lambda(k^*)$, then for $\alpha \in \Pi$ we have \begin{equation} \label{eq 7.10} \gg_{\alpha} \subseteq \gg(i) \text{ where }i\in\{0,1\}. \end{equation} To see this, recall \eqref{eq 6.2}: $\Ad(\lambda(t))(e_{\alpha})=t^{m_{\alpha}}e_{\alpha}$, where $e_{\alpha}\in \gg_{\alpha}$ and $m_{\alpha}$ is the corresponding label of the weighted Dynkin diagram $\Delta(e)$ of $e$. Thus, by Lemma \ref{lem 7.30}, we have $m_{\alpha}\in \{0,1\}$. \begin{lem} \label{lem 7.40} If $\mathop{\mathrm{ht}}\nolimits(e) = d$ is odd, then $\gg(d-1)\neq\{0\}$. \end{lem} \begin{proof} The result follows easily, arguing as in the proof of Lemma \ref{lem 7.10} and using \eqref{eq 7.10}. \end{proof} \begin{cor} If $e \in \N$ with $\mathop{\mathrm{ht}}\nolimits(e)$ odd, then $\mathop{\mathrm{ht}}\nolimits_E(e)=\mathop{\mathrm{ht}}\nolimits(e)-1$. \end{cor} In particular, we have the following conclusion. \begin{cor} \label{cor 7.20} If $e \in \N$ with $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$, then $\mathop{\mathrm{ht}}\nolimits_E(e)\geqslant 4$.
\end{cor} Thus, by Lemma \ref{lem 7.100}, Corollary \ref{cor 7.20}, and the fact that $\Omega_G^a(e) \cap Y(G_E) = \Omega_{G_E}^a(e)$ (\cite[Thm.\ 1.1]{FoRo}), we have $\kappa_L(\gg_{E,\geqslant 2})>0$, where $\gg_{E,\geqslant 2}=\bigoplus_{i \geqslant 1}\gg(2i)$ and $L=C_G(\lambda)=C_{G_E}(\lambda)$. \begin{lem} \label{lem 7.50} If a Borel subgroup $B_L$ of $L$ acts on $\gg_{\geqslant 2}$ with a dense orbit, then $B_L$ acts on $\gg_{E,\geqslant 2}$ with a dense orbit. \end{lem} \begin{proof} This follows readily from Theorem \ref{lem 1.60}. \end{proof} Combining Lemmas \ref{lem 7.100}, \ref{lem 7.50} and Corollary \ref{cor 7.20}, we get the main result of this subsection. \begin{prop} \label{prop 7.20} Let $e \in \N$. If $\mathop{\mathrm{ht}}\nolimits(e)\geqslant 4$, then $e$ is non-spherical. \end{prop} \begin{comment} As we commented on in Section \ref{chak} Proposition \ref{prop 7.20} was first proved by D.I.\ Panyushev for a field of characteristic zero in \cite{Pa3}. The proof in \cite{Pa3} is very different from ours. See Section \ref{chak} for a review of Panyushev's proof. We note that our methods are valid in characteristic zero as well. \chapter{}\label{C8} Throughout this chapter $G$ is a connected reductive algebraic group over an algebraically closed field $k$, the Lie algebra of $G$ is denoted by $\gg$ and the characteristic of $k$ is good for $G$. Recall the standard setup from Chapter \ref{C4}. For $e \in \gg$ a non-zero nilpotent element we have $\lambda\in Y(G)$ an associated cocharacter for $e$ in $\gg$ and the canonical parabolic subgroup $P_{\lambda}=P=C_G(\lambda)R_u(P)$. Also recall the definition of height of a nilpotent element $e$, see Definition \ref{def 4.40}. In this chapter we prove that if $\mathop{\mathrm{ht}}\nolimits(e)=3$, then $e$ is spherical. In order to prove this we make use of Theorem \ref{thm 5.10} to see that $\kappa_G(G\cdot e)=\kappa_{C_G(\lambda)}(\gg_{\geqslant 2})$. 
We also prove this result using case-by-case arguments. \end{comment} \subsection{Nilpotent Orbits of Height Three} Let $e \in \N$ and let $\lambda \in \Omega_G^a(e)$. Let $P = P(e)$ be the destabilizing parabolic subgroup defined by $e$. Then $P = LR_u(P)$ for $L = C_G(\lambda)$. Let $B_L$ be a Borel subgroup of $L$ so that $\lambda(k^*)\leqslant B_L$. Write $B_L=TU_L$ for a Levi decomposition of $B_L$, where $U_L = R_u(B_L)$ and $T$ is a maximal torus of $G$ containing $\lambda(k^*)$. Let $\bb_L = \Lie B_L$, $\nn = \Lie U_L$, and $\mathfrak{t}=\Lie T$. \begin{lem} \label{lem 8.10} Let $e\in \gg$ be nilpotent and $\lambda$ be an associated cocharacter for $e$ in $\gg$. Then the following are equivalent. \begin{itemize} \item[(i)] The nilpotent element $e$ is spherical. \item[(ii)] There exists $e' \in \gg_{\geqslant 2}$ such that $\overline{\Ad(B_L)(e')}=\gg_{\geqslant 2}$. \item[(iii)] There exists $e' \in \gg_{\geqslant 2}$ such that $\dim C_{B_L}(e')=\dim B_L - \dim \gg_{\geqslant 2}$. \end{itemize} \end{lem} \begin{proof} Thanks to Theorem \ref{thm 5.10}, $\kappa_G(G\cdot e) = \kappa_L(\gg_{\geqslant 2})$. Thus (i) and (ii) are equivalent. The equivalence between (ii) and (iii) is clear. \end{proof} Recall from Subsection \ref{sub:not} the definition of the support of a nilpotent element in $\uu$. \begin{lem} \label{lem 8.20} Let $e \in \gg_{\geqslant 2}$. If $\mathop{\mathrm{supp}}\nolimits(e)$ is linearly independent, then $\dim C_{T}(e) = \dim T - |\mathop{\mathrm{supp}}\nolimits(e)|$. \end{lem} \begin{proof} Suppose that $\mathop{\mathrm{supp}}\nolimits(e)$ is linearly independent. Then $\dim \Ad(T)(e) = |\mathop{\mathrm{supp}}\nolimits(e)|$, e.g.\ see \cite[Lem.\ 3.2]{Go2}. The desired equality follows. \end{proof} The following is a standard consequence of orbit maps. \begin{lem} \label{lem 8.30} Let $e' \in \gg_{\geqslant 2}$. Then $\dim C_{B_L}(e')\leqslant \dim \cc_{\bb_L}(e')$ and $\dim C_{U_L}(e')\leqslant \dim \cc_\nn(e')$. 
\end{lem} \begin{comment} \begin{proof} Let $P = P(e')$ be the destabilizing parabolic subgroup of $G$ defined by $e'$ and set $V = \D R_u(P)$. Note that, as $\Lie R_u(P)=\gg_{\geqslant 1}$ and $\gg_{\geqslant 2}=[\gg_{\geqslant 1},\gg_{\geqslant 1}]$, we have $\Lie V = \gg_{\geqslant 2}$. Define $\hh = \bb_L\bigoplus\gg_{\geqslant 2}$, thus $e' \in \hh$. The fact that $e' \in \gg_{\geqslant 2} = \gg(2)\bigoplus \gg(3)$ and $\gg_{\geqslant 2}$ is abelian implies that $\cc_{\hh}(e') = \cc_{\bb_L}(e')\bigoplus \gg_{\geqslant 2}$. In particular, $\dim \cc_{\hh}(e')=\dim \cc_{\bb_L}(e')+\dim \gg_{\geqslant 2}$. Let $H$ be the connected subgroup of $G$ such that $\Lie H=\hh$, so $H=B_LV$. Next we claim that $\Ad(V)(e')=\{e'\}$: First write $e'=e'_2+e'_3$ where $e'_i\in \gg(i)$. Using \cite[5.10(3)]{Ja1} we see that for any $v \in V$, we have $\Ad(v)(e'_2+e'_3)=e'_2 + d_{3}$, for some $d_{3}\in \gg(3)$. Also observe that $\Ad(v^{-1})(e'_2+e'_3)=e'_2-d_{3}$. The claim then follows from the fact that $\Ad([V,V])(e') = \{e'\}$. Thus, $C_H(e')=C_{B_L}(e')V$. In particular, $\dim C_H(e')=\dim C_{B_L}(e')+\dim V=\dim C_{B_L}(e')+\dim \gg_{\geqslant 2}$. In general, $\Lie C_H(e') \subseteq \cc_{\hh}(e')$ (cf.\ \cite[\S 1.14]{Ca}), so $\dim C_{B_L}(e')+\dim \gg_{\geqslant 2}=\dim C_H(e')\leqslant \dim \cc_{\hh}(e')=\dim \cc_{\bb_L}(e')+\dim \gg_{\geqslant 2}$. Thus $\dim C_{B_L}(e')\leqslant \dim \cc_{\bb_L}(e')$. The proof of the second claim is similar, with $\nn\bigoplus\gg_{\geqslant 2}$ replacing $\hh$. \end{proof} \end{comment} In \cite[Prop.\ 5.4]{Go4}, Goodwin showed that each $U$-orbit in $\uu$ admits a unique so called \emph{minimal} orbit representative, see \cite[Def.\ 5.3]{Go4}. (This depends on a suitable choice of an ordering of the positive roots compatible with the height function, cf.\ \cite[Def.\ 3.1]{Go4}.) 
Moreover, a special case of \cite[Prop.\ 7.7]{Go4} gives that for $e$ the minimal representative of its $U$-orbit in $\uu$, we have $C_B(e) = C_T(e)C_U(e)$. As a consequence, we readily obtain the following. \begin{lem} \label{lem 8.40} Let $e' \in \gg_{\geqslant 2}$. Suppose that $e'$ is the minimal representative of its $U$-orbit in $\uu$. Then $C_{B_L}(e') = C_{T}(e')C_{U_L}(e')$. In particular, $\dim C_{B_L}(e') = \dim C_{T}(e')+\dim C_{U_L}(e')$. \end{lem} \begin{comment} \begin{proof} Since $e' \in \gg_{\geqslant 2}$, we have $e'=e'_2+e'_3$ where $e'_i\in \gg(i)$. As $B_L\leqslant L$, it follows from \cite[5.10(3)]{Ja1} that $\Ad(z)(e'_i)=e'_i$ for $z \in C_{B_L}(e')$. First we show that $C_{B_L}(e'_i)=C_T(e'_i)C_{U_L}(e'_i)$ for $i \in\{2,3\}$. Let $z_i \in C_{B_L}(e'_i)$. We can uniquely factor $z_i$ as $z_i=y_ix_i$ with $x_i \in T$ and $y_i \in U_L$. Now write $e'_i=e_{\beta_1}+\ldots +e_{\beta_k}$ where $e_{\beta_j} \in \gg_{\beta_j}\setminus\{0\}$ and $\beta_j \in \Psi^+$. We proceed by induction on $|\mathop{\mathrm{supp}}\nolimits(e'_i)|$. If $|\mathop{\mathrm{supp}}\nolimits(e'_i)|=0$, then there is nothing to prove. Suppose the result holds in case $|\mathop{\mathrm{supp}}\nolimits(e'_i)|=k-1$. Suppose that $|\mathop{\mathrm{supp}}\nolimits(e'_i)|=k$. Let $\alpha \in \mathop{\mathrm{supp}}\nolimits(e'_i)$ be a minimal root of $\mathop{\mathrm{supp}}\nolimits(e'_i)$, that is, if $\beta$ is any other root in $\mathop{\mathrm{supp}}\nolimits(e'_i)$, then $\beta \nprec \alpha$. The Chevalley relations imply that $\Ad(y_ix_i)(e_{\alpha})=\Ad(y_i)(te_{\alpha})=te_{\alpha} + d$, where $t \in k^*$ and $d$ is a sum of root vectors $e_{\gamma}$, where $\gamma\succneqq\alpha$ and $e_{\gamma}\in\gg_{\gamma}$ for $\gamma \in \Psi^+$. Since $y_ix_i \in C_{B_L}(e'_i)$ and $\alpha$ is a minimal root in $\mathop{\mathrm{supp}}\nolimits(e'_i)$, we must have $x_i,y_i \in C_{B_L}(e_{\alpha})$. 
Now $|\mathop{\mathrm{supp}}\nolimits(e'_i-e_{\alpha})|=k-1$ and by induction $x_i,y_i \in C_{B_L}(e'_i-e_{\alpha})$ and so $x_i,y_i \in C_{B_L}(e'_i-e_{\alpha})\cap C_{B_L}(e_{\alpha}) \leqslant C_{B_L}(e'_i)$. Thus, $C_{B_L}(e'_i)\leqslant C_T(e'_i)C_{U_L}(e'_i)$. The reverse implication is obvious, so $C_{B_L}(e'_i)=C_T(e'_i)C_{U_L}(e'_i)$. It now follows from $C_{B_L}(e'_2)\cap C_{B_L}(e'_3)=C_{B_L}(e'_2+e'_3)=C_{B_L}(e')$ and $C_T(e'_2)C_{U_L}(e'_2)\cap C_T(e'_3)C_{U_L}(e'_3)=C_T(e'_2+e'_3)C_{U_L}(e'_2+e'_3)=C_T(e')C_{U_L}(e')$ that $C_{B_L}(e')=C_T(e')C_{U_L}(e')$. \end{proof} \end{comment} \begin{prop} \label{prop 8.10} Let $G$ be a simple algebraic group. Table \ref{t:1} below gives a complete list of the height $3$ nilpotent orbits in $\gg$. \end{prop} \begin{proof} For the classical groups we use Proposition \ref{prop 4.50}. By Remark \ref{rem 3.30}, there are no height $3$ nilpotent orbits in types $A_n$ and $C_n$. Using the tables in \cite[\S 13]{Ca} and \eqref{eq 6.05}, one readily determines the desired orbits when $G$ is exceptional. \end{proof} In Table \ref{t:1} we either give the partition or the Bala--Carter label of the corresponding orbit, cf.~\cite[\S 13]{Ca}. \begin{table}[h] \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|} \hline Type of $G$ & Orbits \\ \hline $A_n$ & - \\ $B_n$ & $[1^j,2^{2i},3]$ with $i>0$\\ $C_n$ & - \\ $D_n$ & $[1^j,2^{2i},3]$ with $i>0$\\ \hline $G_2$ & $\tilde{A_1}$ \\ $F_4$ & $A_1 + \tilde{A_1}$ \\ $E_6$ & $3A_1$ \\ $E_7$ & $(3A_1)'$, $4A_1$\\ $E_8$ & $3A_1$, $4A_1$\\ \hline \end{tabular} \medskip \caption{The nilpotent orbits of height $3$.} \label{t:1} \end{table} \bigskip In the next three subsections we concentrate on the height $3$ orbits in types $B_n$, $D_n$, and the exceptional types, respectively. \subsection{Height Three Nilpotent Elements of $\mathfrak{so}_{2n+1}(k)$} \label{subsect:soodd} In this subsection let $G$ be of type $B_n$ for $n \geqslant 3$, so $\gg= \mathfrak{so}_{2n+1}(k)$. 
The nilpotent orbits in $\gg$ are classified by the partitions of $2n+1$ with even parts occurring with even multiplicity, see \cite[Thm.\ 1.6]{Ja1}. By Proposition \ref{prop 4.50}, the height $3$ nilpotent orbits correspond to partitions of $2n+1$ of the form $\pi_{r,s}=[1^{2s},2^{2r},3]$, where $r\geqslant 1, s\geqslant 0$ and $2r+s+1=n$. Denote the corresponding nilpotent orbit by $\mathcal{O}_{r,s}$ and a representative of such an orbit by $e_{r,s}$. \begin{lem} \label{lem 8.50} There are precisely $\left [\frac{n-1}{2} \right ]$ distinct height $3$ nilpotent orbits in $\gg$. \end{lem} \begin{proof} By our comments above, we need to show that there are precisely $\left [ \frac{n-1}{2} \right ]$ partitions of $2n+1$ of the form $\pi_{r,s}$. This is equivalent to finding all partitions of $n-1$ of the form $[1^{s},2^r]$. Thus $r$ satisfies $1\leqslant r\leqslant \frac{n-1}{2}$. Since $r$ is an integer, the result follows. \end{proof} Since the number $2r+1$ appears frequently in the sequel, we set $\widehat{r}=2r+1$.
Using \cite[\S 13]{Ca}, we readily see that $e_{r,s}$ has the following weighted Dynkin diagram: \vfill \eject \begin{figure}[ht] \beginpicture \setcoordinatesystem units <1.5cm,1.5cm> point at 0 0 \setplotarea x from -.5 to 4, y from 1.5 to 2 \put {$\Delta(e_{r,s})$:} [l] at 2.1 2 \multiput {$\bullet$} at 3.5 2 *2 .5 0 / \put {$>$} at 6.25 2 \multiput {$\bullet$} at 5 2 *3 .5 0 / \putrule from 3.54 2 to 3.96 2 \putrule from 4.54 2 to 5.54 2 \putrule from 6.04 2.02 to 6.46 2.02 \putrule from 6.03 1.98 to 6.47 1.98 \put {1} at 3.5 2.2 \put {0} at 4 2.2 \put {1} at 5 2.2 \put {0} at 5.5 2.2 \put {0} at 4.5 2.2 \put {0} at 6 2.2 \put {0} at 6.5 2.2 \put {$\widehat{r}$} at 5.0 1.8 \setdashes <.5mm, 1mm> \putrule from 4.04 2 to 5.96 2 \endpicture \caption{Labeling of $\Delta(e_{r,s})$.} \label{Dynkin3B} \end{figure} \begin{rem} \label{rem 8.10} Note that in $\Delta(e_{r,s})$ there are precisely two simple roots, $\alpha_1$ and $\alpha_{\widehat{r}}$, that are labeled with a ``1'' and that there is an odd number of simple roots between $\alpha_1$ and $\alpha_{\widehat{r}}$. Also, the short simple root is labeled with a ``1'' if and only if $s = 0$, and this can only happen when $n$ is odd. \end{rem} We refer to \cite[Planche II]{Bo2} for information regarding the root system of type $B_n$. Let $\alpha_1,\ldots,\alpha_n$ be the simple roots of $\Psi^+$ and let \begin{align*} \beta_{j,k}&=\alpha_j+\ldots +\alpha_k \text{ for } 1\leqslant j \leqslant k\leqslant n, \\ \gamma_{j,k}&=\alpha_j+ \ldots +\alpha_{k-1}+2\alpha_k+\ldots + 2\alpha_n \text{ for } 1\leqslant j< k\leqslant n, \end{align*} where $\beta_{j,j} = \alpha_j$. Note that all the possible $\beta$'s and $\gamma$'s exhaust $\Psi^+$. For a $T$-stable Lie subalgebra $\mm$ of $\uu$ recall the definition of the set of roots $\Psi(\mm)$ of $\mm$ with respect to $T$ from Subsection \ref{sub:not}. 
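As an aside, the exhaustion claim is easy to check by brute force: in the standard realization of \cite[Planche II]{Bo2} one has $\alpha_i=e_i-e_{i+1}$ for $i<n$ and $\alpha_n=e_n$, so that $\beta_{j,k}=e_j-e_{k+1}$ (with the convention $e_{n+1}=0$) and $\gamma_{j,k}=e_j+e_k$. The following Python sketch (illustrative only) verifies this for small $n$, taking $\gamma_{j,k}$ for all $1\leqslant j<k\leqslant n$:

```python
# Illustration only: check that the beta's and gamma's exhaust Psi^+ in
# type B_n.  Realization of [Bo2, Planche II]: alpha_i = e_i - e_{i+1}
# for i < n and alpha_n = e_n, written as coefficient vectors on e_1..e_n.

def simple_roots(n):
    simple = []
    for i in range(n):
        v = [0] * n
        v[i] = 1
        if i + 1 < n:
            v[i + 1] = -1
        simple.append(v)
    return simple

def beta(j, k, simple):
    """beta_{j,k} = alpha_j + ... + alpha_k (1-indexed)."""
    n = len(simple)
    return tuple(sum(simple[i][t] for i in range(j - 1, k)) for t in range(n))

def betas_and_gammas(n):
    simple = simple_roots(n)
    roots = {beta(j, k, simple) for j in range(1, n + 1) for k in range(j, n + 1)}
    for j in range(1, n + 1):        # gamma_{j,k} = beta_{j,n} + beta_{k,n}
        for k in range(j + 1, n + 1):
            bj, bk = beta(j, n, simple), beta(k, n, simple)
            roots.add(tuple(x + y for x, y in zip(bj, bk)))
    return roots

def positive_roots(n):
    """The standard positive roots e_i and e_i +/- e_j (i < j) of B_n."""
    roots = set()
    for i in range(n):
        v = [0] * n
        v[i] = 1
        roots.add(tuple(v))
        for j in range(i + 1, n):
            for sign in (1, -1):
                w = [0] * n
                w[i], w[j] = 1, sign
                roots.add(tuple(w))
    return roots

for n in range(2, 8):
    assert betas_and_gammas(n) == positive_roots(n)
    assert len(positive_roots(n)) == n * n
```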
\begin{lem} \label{lem 8.60} For an associated cocharacter of $e_{r,s}$ in $\gg$ we have \begin{itemize} \item[(i)] $\Psi(\gg(2))=\{ \beta_{1,j},\gamma_{1,m},\gamma_{l,k} \mid 1< l < k\leqslant \widehat{r}\leqslant j \text{ and } \widehat{r}<m\}$, and so $\dim \gg(2)=2r^2-r+2s+1$; \item[(ii)] $\Psi(\gg(3))=\{\gamma_{1,k} \mid k\leqslant \widehat{r}\}$, and so $\dim \gg(3)=2r$. \end{itemize} \end{lem} \begin{proof} For every $\delta \in \Psi$ we have that $\gg_{\delta}\subseteq \gg(i)$ for some $i \in \{0,\pm 1,\pm 2,\pm 3\}$. For the simple roots this information can be read off from $\Delta(e_{r,s})$, see \eqref{eq 6.2}. Let $\delta=\sum_{\alpha\in \Pi}c_{\delta,\alpha}\alpha$ be a positive root. Now $\gg_{\delta} \subseteq \gg(2)$ if and only if $c_{\delta,\alpha_1}+c_{\delta,\alpha_{\widehat{r}}}=2$. All of the roots listed above satisfy this condition, and no others do. Finally, $\gg_{\delta} \subseteq \gg(3)$ if and only if $c_{\delta,\alpha_1}+c_{\delta,\alpha_{\widehat{r}}}=3$. All of the roots listed above satisfy this condition, and no others do. \end{proof} \begin{lem} \label{lem 8.70} For an associated cocharacter of $e_{r,s}$ in $\gg$ we have \begin{itemize} \item[(i)] $\Psi(\bb_L)=\{\beta_{j,k},\gamma_{l,m} \mid \widehat{r} < j \text{ or } 1<j\leqslant k < \widehat{r}, \widehat{r}< l<m\}$. \item[(ii)] $\dim \bb_L=2r^2+s^2+s+r+1$. \end{itemize} \end{lem} \begin{proof} For every $\delta \in \Psi$ we have that $\gg_{\delta}\subseteq \gg(i)$ for some $i \in \{0,\pm 1,\pm 2,\pm 3\}$. As mentioned above, for the simple roots this information can be read off from $\Delta(e_{r,s})$, see \eqref{eq 6.2}. Let $\delta=\sum_{\alpha\in \Pi}c_{\delta,\alpha}\alpha \in \Psi^+$. Then $\gg_{\delta} \subseteq \bb_L$ if and only if $c_{\delta,\alpha_1}+c_{\delta,\alpha_{\widehat{r}}}=0$. All of the roots listed above satisfy this condition, and no others do. Consequently, $\dim \nn = 2r^2+s^2-r$. 
Since $\dim \mathfrak{t}=n$, we get $\dim \bb_L = 2r^2+s^2+s+r+1$. \end{proof} It follows from Figure \ref{Dynkin3B} that $L$ is of Dynkin type $A_{\widehat{r}-2} \times B_s$. Accordingly, there is a natural partition of the roots of $\bb_L$ into a union of two subsets, namely the positive roots of the $A_{\widehat{r}-2}$ and $B_s$ subsystems, respectively. Thus, we have $\Psi(\bb_L)=\Psi_1(\bb_L)\cup\Psi_2(\bb_L)$, where \begin{align*} \Psi_1(\bb_L)&=\{ \beta_{j,k} \mid 1<j\leqslant k < \widehat{r}\}, \\ \Psi_2(\bb_L)&=\{\beta_{j,k},\gamma_{l,m} \mid \widehat{r}<j\leqslant k , \widehat{r}< l < m\}. \end{align*} Similarly, we can decompose the roots of $\gg_{\geqslant 2}$ into two sets as follows: $\Psi(\gg_{\geqslant 2}) = \Psi_1(\gg_{\geqslant 2}) \cup \Psi_2(\gg_{\geqslant 2})$, where \begin{align*} \Psi_1(\gg_{\geqslant 2})&=\{\gamma_{j,k} \mid 1\leqslant j < k\leqslant \widehat{r} \}, \\ \Psi_2(\gg_{\geqslant 2})&=\{\beta_{1,j},\gamma_{1,k} \mid \widehat{r} \leqslant j, \widehat{r} < k\}. \end{align*} The sets $\Psi_i(\bb_L)$ and $\Psi_i(\gg_{\geqslant 2})$ satisfy the following property: \begin{equation} \label{eq 8.10} \delta \in \Psi_i(\bb_L), \eta \in \Psi_{3-i}(\gg_{\geqslant 2}) \ \Rightarrow \ \delta + \eta \notin \Psi,\ i \in \{ 1,2\}. \end{equation} Denote by $\bb_L^i$ the Lie subalgebras of $\bb_L$ such that $\Psi(\bb_L^i)=\Psi_i(\bb_L)$ for $i = 1,2$. For the rest of this subsection we show that the following element is a representative of the dense $B_L$-orbit in $\gg_{\geqslant 2}$; set: \[ e'_{r,s} := \sum_{j,k=0}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}} +e_{\gamma_{1,\widehat{r}-2k}}) +e_{\gamma_{1,\widehat{r}+1}}+e_{\beta_{1,\widehat{r}}},\] where $e_{\delta}\in \gg_{\delta}\setminus\{0\}$ for $\delta \in \Psi(\gg_{\geqslant 2})$. Recall from the paragraph before Lemma \ref{lem 8.40} the notion of minimal $U$-orbit representatives in $\uu$ from \cite{Go4}. 
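The dimension counts in Lemmas \ref{lem 8.60} and \ref{lem 8.70} can also be checked by listing $\Psi^+$ in simple-root coordinates and sorting the roots $\delta$ by the value of $c_{\delta,\alpha_1}+c_{\delta,\alpha_{\widehat{r}}}$. The following Python sketch (illustrative only; here $s=n-2r-1$) does this for small $n$:

```python
# Illustration only: recover the dimensions in Lemmas 8.60 and 8.70 by
# listing Psi^+ of B_n in simple-root coordinates and counting the roots
# delta according to the value of c_{delta,alpha_1} + c_{delta,alpha_rhat}.

def positive_root_coeffs(n):
    """Coefficient vectors of the beta's and gamma's on alpha_1..alpha_n."""
    roots = []
    for j in range(1, n + 1):
        for k in range(j, n + 1):
            v = [1 if j <= i <= k else 0 for i in range(1, n + 1)]
            roots.append(v)                      # beta_{j,k}
    for j in range(1, n + 1):
        for k in range(j + 1, n + 1):
            v = [0] * n
            for i in range(j, k):
                v[i - 1] = 1
            for i in range(k, n + 1):
                v[i - 1] = 2
            roots.append(v)                      # gamma_{j,k}
    return roots

def graded_dims(n, r):
    s = n - 2 * r - 1                            # so that n = 2r + s + 1
    rhat = 2 * r + 1
    roots = positive_root_coeffs(n)
    wt = lambda v: v[0] + v[rhat - 1]            # c_{alpha_1} + c_{alpha_rhat}
    dim_g2 = sum(1 for v in roots if wt(v) == 2)
    dim_g3 = sum(1 for v in roots if wt(v) == 3)
    dim_bL = n + sum(1 for v in roots if wt(v) == 0)  # torus plus nilradical
    return dim_g2, dim_g3, dim_bL, s

for n in range(3, 12):
    for r in range(1, (n - 1) // 2 + 1):
        g2, g3, bL, s = graded_dims(n, r)
        assert g2 == 2 * r * r - r + 2 * s + 1       # Lemma 8.60(i)
        assert g3 == 2 * r                           # Lemma 8.60(ii)
        assert bL == 2 * r * r + s * s + s + r + 1   # Lemma 8.70(ii)
```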
\begin{lem} \label{lem 8.80} Each $e'_{r,s}$ is the minimal representative of its $U$-orbit in $\uu$, $\mathop{\mathrm{supp}}\nolimits(e'_{r,s})$ is linearly independent, and $|\mathop{\mathrm{supp}}\nolimits(e'_{r,s})|=\left\{ \begin{array}{ll} 2r+2 & \text{ if } \:\:s>0; \\ 2r+1 & \text{ if } \:\:s=0. \\ \end{array} \right.$ \end{lem} \begin{proof} It is straightforward to check that $e'_{r,s}$ is the minimal representative of its $U$-orbit in $\uu$ in the sense of \cite{Go4} and one easily computes $|\mathop{\mathrm{supp}}\nolimits(e'_{r,s})|$. Note that the root $\gamma_{1,\widehat{r}+1}$ only occurs if $s > 0$. Suppose there exist scalars $\tau_j,\xi_k, \mu$ and $\nu$ such that \[ \sum_{j=0}^{r-1}\tau_j\gamma_{\widehat{r}-2j-1,\widehat{r}-2j} + \sum_{k=0}^{r-1}\xi_k\gamma_{1,\widehat{r}-2k} +\mu\gamma_{1,\widehat{r}+1} +\nu\beta_{1,\widehat{r}}=0. \] Since the coefficients of $\alpha_1,\alpha_2$, and $\alpha_3$ must be zero, we have \[ \sum_{k=0}^{r-1}\xi_k+\mu+\nu=0,\ \tau_{r-1}+\sum_{k=0}^{r-1}\xi_k+\mu+\nu=0, \ \text{ and }\ \xi_{r-1}+2\tau_{r-1}+\sum_{k=0}^{r-1}\xi_k+\mu+\nu=0. \] These three equations imply that $\tau_{r-1}=0=\xi_{r-1}$. Continuing in this way, we see that $\tau_j=0=\xi_j$ for all $j$. Thus we are left to show that $\gamma_{1,\widehat{r}+1}$ and $\beta_{1,\widehat{r}}$ are linearly independent; but this is obvious. \end{proof} Thanks to Lemma \ref{lem 8.80} it is harmless to assume that $\mathop{\mathrm{supp}}\nolimits(e'_{r,s})$ is part of a Chevalley basis of $\gg$. \begin{lem} \label{lem 8.111} $\dim \cc_\nn(e'_{r,s})=\left\{ \begin{array}{ll} (s-1)^2 & \text{ if } \:\:s>0; \\ 0 & \text{ if } \:\:s=0. \\ \end{array} \right.$ \end{lem} \begin{proof} Thanks to \eqref{eq 8.10}, we may consider the two summands $\sum_{j,k=0}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}}+e_{\gamma_{1,\widehat{r}-2k}})$ and $e_{\gamma_{1,\widehat{r}+1}}+e_{\beta_{1,\widehat{r}}}$ of $e'_{r,s}$ separately. 
Since $\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}+\gamma_{1,\widehat{r}-2k} \in \Psi_1(\gg_{\geqslant 2})$, we need only consider the root spaces $\gg_{\delta}$ for $\delta \in \Psi_1(\bb_L)$. So let $\beta_{i,m}\in \Psi_1(\bb_L)$. If $m=\widehat{r}-2l$ for some $0\leqslant l<r$, then, by the Chevalley commutator relations, $[e_{\gamma_{\widehat{r}-2l+1,\widehat{r}-2(l-1)}}, \gg_{\beta_{i,\widehat{r}-2l}}]=\gg_{\gamma_{i,\widehat{r}-2(l-1)}}$, since $\mathop{\mathrm{char}}\nolimits k$ is good for $G$. If $m=\widehat{r}-2l-1$ for some $0\leqslant l<r$, then $[e_{\gamma_{1,\widehat{r}-2l}},\gg_{\beta_{i,\widehat{r}-2l-1}}]=\gg_{\gamma_{1,i}}$. Next we observe that all the $\beta$'s above exhaust the set $\Psi_1(\bb_L)$. Consequently, $\cc_{\bb_L^1}(\sum_{j,k=0}^{r-1} (e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}}+e_{\gamma_{1,\widehat{r}-2k}})) =\{0\}$. Next we consider the summand $e_{\gamma_{1,\widehat{r}+1}}+e_{\beta_{1,\widehat{r}}}$. First observe that $[\nn,e_{\gamma_{1,\widehat{r}+1}}]=\{0\}$, so $\cc_\nn(e_{\gamma_{1,\widehat{r}+1}})=\nn$. Secondly, the root $\beta_{1,\widehat{r}}$ lies in $\Psi_2(\gg_{\geqslant 2})$. Thanks to property \eqref{eq 8.10}, we need only consider roots $\delta \in \Psi_2(\bb_L)$. We see that the only roots $\delta\in \Psi_2(\bb_L)$ with $\delta+\beta_{1,\widehat{r}}\in \Psi(\gg_{\geqslant 2})$ are of the form $\beta_{\widehat{r}+1,j}$ or $\gamma_{\widehat{r}+1,k}$ where $\widehat{r}+1\leqslant j\leqslant n$ and $\widehat{r}+1 < k\leqslant n$. Again the Chevalley commutator relations imply $[\gg_{\beta_{\widehat{r}+1,j}},e_{\beta_{1,\widehat{r}}}]= \gg_{\beta_{1,j}}$ and $[\gg_{\gamma_{\widehat{r}+1,k}},e_{\beta_{1,\widehat{r}}}]= \gg_{\gamma_{1,k}}$. We also observe that $\beta_{j,k}$ and $\gamma_{l,m}$ for $\widehat{r}+1<j,l$ have the property that $\beta_{1,\widehat{r}}+\gamma_{l,m},\beta_{1,\widehat{r}}+\beta_{j,k}\notin \Psi_2(\gg_{\geqslant 2})$. 
All the roots above exhaust $\Psi_2(\bb_L)$, so we conclude that precisely the roots $\beta_{j,k}$ and $\gamma_{l,m}$ of $\Psi_2(\bb_L)$ with $\widehat{r}+1<j,l$ are contained in $\Psi(\cc_\nn(e_{\beta_{1,\widehat{r}}}))$. If $s>0$, these roots form the set of positive roots of a root system of type $B_{s-1}$; since there are exactly $(s-1)^2$ positive roots in a root system of type $B_{s-1}$, we get $|\Psi(\cc_\nn(e_{\beta_{1,\widehat{r}}}))|=(s-1)^2$. Therefore, $\dim \cc_\nn(e'_{r,s})=(s-1)^2$. Clearly, if $s=0$, then $\dim \cc_\nn(e'_{r,s})=0$. \end{proof} \begin{prop} \label{prop 8.20} The $B_L$-orbit of $e'_{r,s}$ is dense in $\gg_{\geqslant 2}$. \end{prop} \begin{proof} Thanks to Lemma \ref{lem 8.10}, it is sufficient to show that $\dim B_L=\dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}$. Lemma \ref{lem 8.60} implies that $\dim \gg_{\geqslant 2}=2r^2+2s+r+1$ and Lemma \ref{lem 8.70} implies that $\dim B_L = 2r^2+s^2+s+r+1$. By Lemma \ref{lem 8.80}, $e'_{r,s}$ is the minimal representative of its $U$-orbit in $\uu$. Thus, by Lemma \ref{lem 8.40}, we have $\dim C_{B_L}(e'_{r,s})=\dim C_{T}(e'_{r,s})+\dim C_{U}(e'_{r,s})$. Consequently, Lemmas \ref{lem 8.30}, \ref{lem 8.80}, and \ref{lem 8.111} imply that, for $s>0$, $\dim C_{B_L}(e'_{r,s})\leqslant n-2r-2+(s-1)^2=s^2-s$. So \[ \dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}\leqslant s^2-s+2r^2+r+2s+1 = \dim B_L. \] This clearly implies $\dim B_L=\dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}$. Similarly, if $s=0$, we get $\dim B_L=\dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}$. \end{proof} \begin{cor} \label{cor 8.10} $\dim C_{B_L}(e'_{r,s})=s(s-1)$. \end{cor} Finally, from Lemma \ref{lem 8.10} we obtain \begin{cor} \label{cor 8.20} If $G$ is of type $B_n$ and $e\in \N$ with $\mathop{\mathrm{ht}}\nolimits(e)=3$, then $e$ is spherical. 
\end{cor} \subsection{Height Three Nilpotent Elements of $\mathfrak{so}_{2n}(k)$} \label{subsect:soeven} Assume for this subsection that $G$ is of type $D_n$ for $n\geqslant 4$, so $\gg=\mathfrak{so}_{2n}(k)$. We know that the nilpotent orbits in $\gg$ are classified by the partitions of $2n$ with even parts occurring with even multiplicity, see \cite[Thm.\ 1.6]{Ja1}. We showed that the height $3$ nilpotent orbits correspond to partitions of $2n$ of the form $\pi_{r,s}=[1^{2s+1},2^{2r},3]$ where $r\geqslant 1$, $s\geqslant 0$ and $2r+s+2=n$, see Proposition \ref{prop 4.50}. Similarly to the $B_n$ case, we denote the corresponding orbit by $\mathcal{O}_{r,s}$ and a representative of such an orbit by $e_{r,s}$. Because the proofs of the results in this subsection are virtually identical to the ones in Subsection \ref{subsect:soodd}, they are omitted. \begin{lem} \label{lem 8.90} There are precisely $\left [\frac{n-2}{2} \right ]$ distinct height $3$ nilpotent orbits in $\gg$. \end{lem} \begin{comment}\begin{proof} It is clearly sufficient to show that there are precisely $\left [ \frac{n-2}{2} \right ]$ partitions of $2n$ of the form $\pi_{r,s}$. This is equivalent to finding partitions of $n-2$ of the form $[1^s,2^r]$. Thus we find that $r$ must satisfy $1\leqslant r\leqslant \frac{n-2}{2}$. Since $r$ is an integer, the result follows. \end{proof}\end{comment} Using \cite[\S 13]{Ca}, we can easily calculate that for $s>0$, $e_{r,s}$ has the weighted Dynkin diagram $\Delta(e_{r,s})$ as shown in Figure \ref{Dynkin3D} below. 
\begin{figure}[ht] \beginpicture \setcoordinatesystem units <1.5cm,1.5cm> point at 0 0 \setplotarea x from -5.1 to 3, y from -1 to 1 \put {$\Delta(e_{r,s})$:} [l] at -2.2 0 \multiput {$\bullet$} at -1 0 *4 .5 0 / \put {$\bullet$} at 1.5 0 \putrule from -.96 0 to -.54 0 \putrule from 0.04 0 to .96 0 \put {$\bullet$} at 2 0.5 \put {$\bullet$} at 2 -0.5 \plot 1.53 0.04 1.96 0.5 / \plot 1.53 -0.02 1.96 -0.5 / \put {$0$} at 0 .2 \put {$1$} at .5 .2 \put {$0$} at 1.5 .2 \put {$0$} at 2.3 0.5 \put {$0$} at 2.3 -0.5 \put {$0$} at 1 .2 \put {$0$} at -.5 .2 \put {$1$} at -1 .2 \put {$\widehat{r}$} at .55 -.3 \setdashes <.5mm, 1mm> \putrule from -.54 0 to 1.46 0 \endpicture \caption{Labelling of $\Delta(e_{r,s})$ for $s>0$.} \label{Dynkin3D} \end{figure} Similarly, when $s=0$, the labelling of $\Delta(e_{r,0})$ is shown in Figure \ref{Dynkin3D0} below. \begin{figure}[ht] \beginpicture \setcoordinatesystem units <1.5cm,1.5cm> point at 0 0 \setplotarea x from -4.7 to 3, y from -1 to 1 \put {$\Delta(e_{r,0})$:} [l] at -1.2 0 \multiput {$\bullet$} at 0 0 *2 .5 0 / \put {$\bullet$} at 1.5 0 \putrule from 0.04 0 to .46 0 \put {$\bullet$} at 2 0.5 \put {$\bullet$} at 2 -0.5 \plot 1.53 0.04 1.96 0.5 / \plot 1.53 -0.02 1.96 -0.5 / \put {$1$} at 0 0.2 \put {$0$} at .5 0.2 \put {$0$} at 1.5 0.2 \put {$1$} at 2.3 0.5 \put {$1$} at 2.3 -0.5 \put {$0$} at 1 0.2 \setdashes <.5mm, 1mm> \putrule from .54 0 to 1.46 0 \endpicture \caption{Labelling of $\Delta(e_{r,0})$.} \label{Dynkin3D0} \end{figure} \begin{rem} \label{rem 8.20} Note that there is always an odd number of ``0'' labels between the first and second ``1'' labels in $\Delta(e_{r,s})$. If $s>0$, then there are $s+1$ ``0'' labels to the right of the second ``1'' label. Finally, $s=0$ only if $n$ is even. \end{rem} We refer to \cite[Planche IV]{Bo2} for information regarding the root system of type $D_n$. We use the following notation for the positive roots $\Psi^+$. 
Let $\alpha_1,\ldots,\alpha_n$ be the set of simple roots of $\Psi^+$ and let \begin{align*} \beta_{j,k}&=\alpha_j+\ldots +\alpha_k \text{ for } 1\leqslant j \leqslant k\leqslant n \text{ with } (j,k)\neq (n-1,n),\\ \beta_{j}&=\alpha_j+\ldots +\alpha_{n-2}+\alpha_n \text{ for } 1\leqslant j\leqslant n-2,\\ \gamma_{j,k}&=\alpha_j+ \ldots +\alpha_{k-1}+2\alpha_k+\ldots + 2\alpha_{n-2}+\alpha_{n-1}+\alpha_n \text{ for } 1\leqslant j< k\leqslant n-2. \end{align*} Here we again use the convention $\beta_{j,j} = \alpha_j$. Note that all the possible $\beta$'s and $\gamma$'s exhaust $\Psi^+$. \begin{comment} \begin{exmp}\label{ex 8.30} The positive roots of $D_4$ are: \[\Psi^+=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4,\beta_{1,2},\beta_{2,3},\beta_{1,3},\beta_{1},\beta_{2,4},\beta_{1,4},\beta_2,\gamma_{1,2}\}.\]Note that $\gamma_{1,2}$ is the highest root of $D_4$. In fact, $\gamma_{1,2}$ is always the highest root in a root system of type $D_n$.\end{exmp}\end{comment} Next we consider the structure of the abelian Lie subalgebra $\gg_{\geqslant 2}=\gg(2)\oplus\gg(3)$. \begin{lem} \label{lem 8.100} An associated cocharacter for $e_{r,s}$ affords the following. \begin{itemize} \item[(i)] $\Psi(\gg(2))=\left\{ \begin{array}{ll} \{ \beta_{1,j},\beta_{1},\gamma_{l,k},\gamma_{1,m} \mid 1< l < k\leqslant \widehat{r}\leqslant j, \widehat{r}<m\} & \text{ if } s>0; \\ \{ \beta_{1,n-1},\beta_{1},\beta_{i,n},\gamma_{j,k}\mid 2\leqslant i< \widehat{r}, 1<j<k< \widehat{r}\}& \text{ if } s=0. \\ \end{array}\right.$\\ In particular, $\dim \gg(2)=2r^2-r+2s+2$. \item[(ii)] $\Psi(\gg(3))=\left\{ \begin{array}{ll} \{\gamma_{1,k}\mid k\leqslant \widehat{r}\}& \text{ if } s>0; \\ \{\beta_{1,n},\gamma_{1,k}\mid 2\leqslant k< \widehat{r}\}& \text{ if } s=0. \\ \end{array}\right.$\\ In particular, $\dim \gg(3)=2r$. \end{itemize} \end{lem} \begin{comment} \begin{proof} Let $\lambda$ be an associated cocharacter for $e_{r,s}$ in $\gg$ and $T$ be a maximal torus of $G$ such that $\lambda(k^*)\leqslant T$. 
For every $\delta \in \Psi$ we have that $\gg_{\delta}\subseteq \gg(i)$ for some $i \in \{0,\pm 1,\pm 2,\pm 3\}$. The weighted Dynkin diagram of $e_{r,s}$ tells us where the simple root spaces lie, see \eqref{eq 6.2}. Let $\delta=\sum_{\alpha\in \Pi}c_{\delta,\alpha}\alpha$ be a positive root.\begin{itemize} \item A root space $\gg_{\delta}$ lies in $\gg(2)$ if and only if $c_{\delta,\alpha_1}+c_{\delta,\alpha_{\widehat{r}}}=2$. We can easily see that all of the roots listed above satisfy this condition, and no others do. A counting exercise gives the dimension. \item A root space $\gg_{\delta}$ lies in $\gg(3)$ if and only if $c_{\delta,\alpha_1}+c_{\delta,\alpha_{\widehat{r}}}=3$. Again we can easily see that all of the roots listed above satisfy this condition, and no others do. A simple counting exercise gives the dimension. \end{itemize} This completes the proof of the lemma. \end{proof}\end{comment} Next we look at the structure of the Lie subalgebra $\bb_L$ of $\gg(0)$. \begin{lem} \label{lem 8.110} An associated cocharacter for $e_{r,s}$ affords the following. $\Psi(\bb_L)=\left\{ \begin{array}{ll} \{\beta_{i},\beta_{j,k},\gamma_{l,m} \mid \widehat{r} < j \text{ or } 1<j\leqslant k < \widehat{r}, \widehat{r}<i\:,\: \widehat{r}< l<m\} & \text{ if } s>0; \\ \{\beta_{j,k}\mid 1<j\leqslant k < \widehat{r}\} & \text{ if } s=0. \\ \end{array} \right.$ \\ In particular, $\dim \bb_L=2r^2+s^2+r+2s+2$. \end{lem} \begin{comment}\begin{proof}Let $\lambda$ be an associated cocharacter for $e_{r,s}$ in $\gg$ and $T$ be a maximal torus of $G$ such that $\lambda(k^*)\leqslant T$. For every $\delta \in \Psi$ we have that $\gg_{\delta}\subseteq \gg(i)$ for some $i \in \{0,\pm 1,\pm 2,\pm 3\}$. The weighted Dynkin diagram of $e_{r,s}$ tells us where the simple root spaces lie, see \eqref{eq 6.2}. Let $\delta=\sum_{\alpha\in \Pi}c_{\delta,\alpha}\alpha$ be a positive root. 
A root space $\gg_{\delta}$ lies in $\bb_L$ if and only if $c_{\delta,\alpha_1}+c_{\delta,\alpha_{\widehat{r}}}=0$. We can easily see that all of the roots listed above satisfy this condition, and no others do. A counting exercise gives $\dim \nn = 2r^2+s^2-r+s$. Since $\dim \mathfrak{t}=n$, we get that $\dim \bb_L=2r^2+s^2+r+2s+2$. Alternatively, we note that the Lie algebra $\gg(0)$ is isomorphic to $\mathfrak{sl}_{\widehat{r}-1}(k)\times \mathfrak{so}_{s+1}(k)\times k^2$, thanks to the weighted Dynkin diagram. The structure and dimension of $\bb_L$ follows. \end{proof}\end{comment} Similarly to the $B_n$ case, the roots of $\bb_L$ naturally form two distinct subsets, namely the roots whose support lies strictly to the left of the second ``1'' label of the weighted Dynkin diagram and those whose support lies strictly to the right of it. More precisely, we have $\Psi(\bb_L)=\Psi_1(\bb_L)\cup\Psi_2(\bb_L)$ where \begin{align*} \Psi_1(\bb_L)&=\{ \beta_{j,k} \mid 1<j\leqslant k < \widehat{r}\}, \\ \Psi_2(\bb_L)&=\{\beta_{j,k},\beta_i,\gamma_{l,m} \mid \widehat{r}<j\leqslant k , \widehat{r}< i, \widehat{r}< l < m\}. \end{align*} Again we partition the roots of $\gg_{\geqslant 2}$ into two distinct subsets. More precisely, we write $\Psi(\gg_{\geqslant 2}) = \Psi_1(\gg_{\geqslant 2})\cup\Psi_2(\gg_{\geqslant 2})$, where for $s\geqslant 1$, we define \begin{align*} \Psi_1(\gg_{\geqslant 2})&=\{\gamma_{j,k} \mid 1\leqslant j < k\leqslant \widehat{r} \},\\ \Psi_2(\gg_{\geqslant 2})&=\{\beta_1,\beta_{1,j},\gamma_{1,k} \mid \widehat{r} \leqslant j, \widehat{r} < k\}, \end{align*} and for $s=0$, we define \begin{align*} \Psi_1(\gg_{\geqslant 2})&=\{\gamma_{j,k} \mid 1\leqslant j < k\leqslant \widehat{r} \},\\ \Psi_2(\gg_{\geqslant 2})&=\{\beta_1,\beta_{1,n-1},\beta_{j,n},\gamma_{1,k} \mid j\leqslant\widehat{r} < k\}. 
\end{align*} Again, we have the following property of these sets: \begin{equation} \label{eq 8.20} \delta \in \Psi_i(\bb_L), \eta \in \Psi_{3-i}(\gg_{\geqslant 2}) \ \Rightarrow\ \delta + \eta \notin \Psi, \ i \in \{ 1,2\}. \end{equation} For $s>1$, set \[ e'_{r,s} := \sum_{j,k=0}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}} +e_{\gamma_{1,\widehat{r}-2k}})+e_{\gamma_{1,\widehat{r}+1}} +e_{\beta_{1,\widehat{r}}} \in \gg_{\geqslant 2}, \] for $s=1$, set \[ e'_{r,1} := \sum_{j,k=0}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}} +e_{\gamma_{1,\widehat{r}-2k}})+e_{\beta_{1,n}}+e_{\beta_{1,\widehat{r}}} \in \gg_{\geqslant 2}, \] and for $s=0$, set \[ e'_{r,0} := \sum_{j,k=1}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}} +e_{\gamma_{1,\widehat{r}-2k}})+e_{\beta_{1,n}}+e_{\beta_{1,n-1}} +e_{\beta_{n-2,n}}+e_{\beta_{1}} \in \gg_{\geqslant 2}. \] \begin{lem} \label{lem 8.120} With the notation as above, we have $|\mathop{\mathrm{supp}}\nolimits(e'_{r,s})|=2r+2$, $\mathop{\mathrm{supp}}\nolimits(e'_{r,s})$ is linearly independent, and $\dim \cc_\nn(e'_{r,s})=s(s-1)$. \end{lem} \begin{comment}\begin{proof} A simple counting exercise gives the cardinality of $|\mathop{\mathrm{supp}}\nolimits(e'_{r,s})|$. The proof that the roots of $\mathop{\mathrm{supp}}\nolimits(e'_{r,s})$ are linearly independent for $s>0$ is very similar to the proof of Lemma \ref{lem 8.80}, in fact for $s>1$ the proof applies word-for-word. Thus, we only prove that the roots of $\mathop{\mathrm{supp}}\nolimits(e'_{r,0})$ are linearly independent. Clearly, $\{\alpha_i\}_{1\leqslant i\leqslant n}$ is a basis for $\Psi$. Suppose there exist scalars $\tau_j,\xi_k, a,b,c$ and $d$ such that \[\sum_{j=1}^{r-1}\tau_j\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}+\sum_{k=1}^{r-1}\xi_k\gamma_{1,\widehat{r}-2k} +a\beta_{1,n}+b\beta_{1,n-1}+c\beta_{n-2,n}+d\beta_{1}=0.\] Assume that $n\geqslant 6$, so that the roots $\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}$ and $\gamma_{1,\widehat{r}-2k}$ exist. 
Since the coefficients of $\alpha_1,\alpha_2$ and $\alpha_3$ must be zero, we have \[\sum_{k=1}^{r-1}\xi_k+a+b+d=0\:\:,\:\: \tau_{r-1}+\sum_{k=1}^{r-1}\xi_k+a+b+d=0\text{ and } \xi_{r-1}+2\tau_{r-1}+\sum_{k=0}^{r-1}\xi_k+a+b+d=0.\]These three equations imply that $\tau_{r-1}=0=\xi_{r-1}$. Continuing this argument we see that $\tau_j=0=\xi_j$ for all $j$. We are left to show that the roots $\beta_{1,n},\beta_{1,n-1},\beta_{n-2,n}$ and $\beta_{1}$ are linearly independent, this is also the case when $n=4$. This is equivalent to showing that the vectors $(1,1,1,1)$, $(1,1,1,0)$, $(0,1,0,1)$ and $(1,1,0,1)$ are linearly independent, this is trivial. The result follows. \end{proof}\end{comment} \begin{comment}\begin{proof} Again, the proof that $\dim \cc_\nn(e'_{r,s})=s(s-1)$ for $s>0$ is very similar to the proof of Proposition \ref{lem 8.80}, in fact for $s>1$ the proof applies word-for-word. Thus, for $s>0$, we have that $|\cc_\nn(e'_{r,s})|$ is equal to the cardinality of a positive system in a root system of type $D_s$, hence the result in these cases. Thus, we only need prove that $\dim \cc_\nn(e'_{r,0})=0$. So we are considering the element $\sum_{j,k=1}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}}+e_{\gamma_{1,\widehat{r}-2k}}) +e_{\beta_{1,n}}+e_{\beta_{1,n-1}}+e_{\beta_{n-2,n}}+e_{\beta_{1}}$. Note that $\cc_\nn(e_{\beta_{1,n-1}})=\nn = \cc_\nn(e_{\beta_{1}})$. So we need only consider the element $\sum_{j,k=1}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}}+e_{\gamma_{1,\widehat{r}-2k}}) +e_{\beta_{1,n}}+e_{\beta_{n-2,n}}$. If $m=\widehat{r}-2l$ for some $1\leqslant l<r$, then the Chevalley commutator relations imply that $[e_{\gamma_{\widehat{r}-2l+1,\widehat{r}-2l+2}},\gg_{\beta_{i,\widehat{r}-2l}}]=\gg_{\gamma_{i,\widehat{r}-2l+2}}$, since $\mathop{\mathrm{char}}\nolimits k$ is good for $G$. 
If $m=\widehat{r}-2l-1$ for some $1\leqslant l<r$, then, $[e_{\gamma_{1,\widehat{r}-2l}},\gg_{\beta_{i,\widehat{r}-2l-1}}]=\gg_{\gamma_{1,i}}$, again by the Chevalley commutator relations. Also $[e_{\beta_{1,n}},\gg_{\beta_{i,\widehat{r}-1}}]=\gg_{\gamma_{1,i}}$ Next we observe that all the $\beta$'s above exhaust the set $\Psi(\bb_L)$. Thus $C_{\bb_L}(\sum_{j,k=0}^{r-1}(e_{\gamma_{\widehat{r}-2j-1,\widehat{r}-2j}}+e_{\gamma_{1,\widehat{r}-2k}}))=\{0\}$. The result follows. \end{proof}\end{comment} \begin{prop} \label{prop 8.33} The $B_L$-orbit of $e'_{r,s}$ is dense in $\gg_{\geqslant 2}$. \end{prop} \begin{comment}\begin{proof} By Lemma \ref{lem 8.10} it is sufficient to show that $\dim B_L=\dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}$. Lemma \ref{lem 8.100} implies that $\dim \gg_{\geqslant 2}=2r^2+r+2s+2$ and Lemma \ref{lem 8.110} implies that $\dim B_L = 2r^2+s^2+r+2s+2$. One readily checks that $e'_{r,s}$ is the minimal representative of its $U$-orbit in $\uu$ in the sense of \cite{Go4}. Thus, by Lemma \ref{lem 8.40} we have $\dim C_{B_L}(e'_{r,s})=\dim C_{T}(e'_{r,s})+\dim C_{U}(e'_{r,s})$. Lemmas \ref{lem 8.30} and \ref{lem 8.120} and Proposition \ref{prop 8.30} imply that $\dim C_{B_L}(e'_{r,s})\leqslant n-2r-2+s(s-1)=s^2$. So $\dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}\leqslant s^2+2r^2+r+2s+2=\dim B_L$, this clearly implies $\dim B_L=\dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}$. Similarly, if $s=0$, we get $\dim B_L=\dim C_{B_L}(e'_{r,s})+\dim \gg_{\geqslant 2}$. \end{proof}\end{comment} \begin{cor} \label{cor 8.40} If $G$ is of type $D_n$ and $e\in \N$ with $\mathop{\mathrm{ht}}\nolimits(e)=3$, then $e$ is spherical. \end{cor} \subsection{Height Three Nilpotent Elements of the Exceptional Lie Algebras} \label{sub:ex} We fix an ordering of the roots $\alpha_1,\ldots,\alpha_r$ of $\Psi(\gg_{\geqslant 2})$ such that $\alpha_i\prec \alpha_j$ for $i<j$. 
Define the subalgebra $\mm_i$ of $\gg_{\geqslant 2}$ by setting $\mm_i=\bigoplus_{j=i+1}^r\gg_{\alpha_j}$ and the quotient $\qq_i$ by $\qq_i=\gg_{\geqslant 2}/\mm_i$ for $0 \leqslant i \leqslant r$. Let $B$ be a Borel subgroup of $G$ such that $\gg_{\geqslant 2}\subseteq \Lie R_u(B)=\uu$. Note that each $\qq_i$ is a $B$-module. The computer programme $\mathsf{DOOBS}$, devised by S.M.\ Goodwin, allows us to determine whether $B$ acts on $\gg_{\geqslant 2}$ with a dense orbit. For details of the $\mathsf{GAP4}$ (\cite{Ga}) computer algebra program of Goodwin, we refer the reader to \cite{Go2} and \cite{Go2.5}. Working inductively, starting with $i = 0$, at each stage the algorithm $\mathsf{DOOBS}$ either determines a representative $x_i+\mm_i$ of a dense $B$-orbit on $\qq_i$, with $\mathop{\mathrm{supp}}\nolimits(x_i)$ linearly independent, or decides that $B$ does not act on $\qq_i$ with a dense orbit. \begin{comment}Next an outline of how the $\mathsf{DOOBS}$ algorithm works. \begin{itemize} \item[$0^{th}$ step] $\mathsf{DOOBS}$ considers the action of $B$ on $\qq_0=\{0\}$. Trivially, $B$ acts on $\qq_0$ with a dense orbit, the algorithm chooses $0+\mm_0$ as a representative of a dense orbit and therefore sets $x_0=0$. \item[$i^{th}$ step] $\mathsf{DOOBS}$ has chosen the representative $x_{i-1}+\mm_{i-1}$ of a dense $B$-orbit on $\qq_{i-1}$ with $\mathop{\mathrm{supp}}\nolimits(x_{i-1})$ linearly independent. The algorithm considers the action of $B$ on $\qq_i$. \begin{itemize} \item[$\bullet$] First $\mathsf{DOOBS}$ considers the $B$-orbit of $x_{i-1}+\mm_i$. It calculates the dimension of $\cc_{\uu}(x_{i-1}+\mm_i)$, knowledge of the Chevalley commutator relations reduce this to a problem in linear algebra. If this dimension is equal to $|\mathop{\mathrm{supp}}\nolimits(x_{i-1})|$, then the algorithm decides that $B\cdot (x_{i-1}+\mm_i)$ is dense in $\qq_i$ and so sets $x_i=x_{i-1}$ and goes to the $(i+1)^{th}$ step. 
\item[$\bullet$] If the algorithm decides that $B\cdot (x_{i-1}+\mm_i)$ is not dense in $\qq_i$, then it considers the $B$-orbit of $x_{i-1}+e_{\alpha_i}+\mm_i$. The set of roots $\mathop{\mathrm{supp}}\nolimits(x_{i-1})\cup \{\alpha_i\}$ is considered; if these roots are linearly independent, then the algorithm decides that $B\cdot (x_{i-1}+e_{\alpha_i}+\mm_i)$ is dense in $\qq_i$. The algorithm sets $x_i=x_{i-1}+e_{\alpha_i}$ and goes to the $(i+1)^{th}$ step. \item[$\bullet$] If $\mathsf{DOOBS}$ decides that neither $B\cdot (x_{i-1}+\mm_i)$ nor $B\cdot (x_{i-1}+e_{\alpha_i}+\mm_i)$ is dense in $\qq_i$, then it decides that $B$ does not act on $\qq_i$ (and therefore it does not act on $\gg_{\geqslant 2}$) with a dense orbit and stops.\end{itemize} \item[$(r+1)^{th}$ step] $\mathsf{DOOBS}$ has chosen a representative of a dense orbit on $\qq_r=\gg_{\geqslant 2}$, so it concludes that $B$ does act on $\gg_{\geqslant 2}$ with a dense orbit and finishes.\end{itemize}\end{comment} $\mathsf{DOOBS}$ also keeps a record of the primes for which $\dim_p \cc_{\uu}(x_i+\mm_{i+1})>\dim_0 \cc_{\uu}(x_i+\mm_{i+1})$, where $\dim_p\cc_{\uu}(x_i+\mm_{i+1})$ and $\dim_0\cc_{\uu}(x_i+\mm_{i+1})$ denote the dimension of $\cc_{\uu}(x_i+\mm_{i+1})$ over a field of characteristic $p$ and characteristic $0$ respectively, see Remark 3.2 in \cite{Go2}. For these primes we cannot conclude that $B$ acts on $\gg_{\geqslant 2}$ with a dense orbit. If $\mathsf{DOOBS}$ determines that $B$ acts on $\gg_{\geqslant 2}$ with a dense orbit, then it calculates a representative of the dense orbit and a list of primes for which the result is not necessarily valid. There is a variant of $\mathsf{DOOBS}$ called $\mathsf{DOOBSLevi}$, see \cite{Go2.5}. This program considers a parabolic subgroup $P=LR_u(P)$ and determines whether a Borel subgroup $B_L$ of $L$ acts on an ideal of $\Lie R_u(P)$ with a dense orbit. 
The algorithm used to determine whether $B_L$ acts on an ideal with a dense orbit is essentially the same as the $\mathsf{DOOBS}$ algorithm, with $B_L$ replacing $B$. $\mathsf{DOOBSLevi}$ also records the primes for which its conclusions are not necessarily valid. \begin{comment}The input required for $\mathsf{DOOBSLevi}$ is the type of $G$, the rank of $G$, a generating set for the ideal and a generating set for the Levi subgroup. For this we number the roots of $\Psi^+$ as $\alpha_1,\ldots,\alpha_r$ and note that if $\mathop{\mathrm{rk}}\nolimits G=n$, then $\alpha_1, \ldots,\alpha_n$ are the simple roots of $\Psi^+$. For $\mathsf{DOOBSLevi}$ to generate the ideal we input a subset $\{i_1,\ldots,i_k\}$ of $\{1,\ldots,r\}$ such that the roots corresponding to $\{i_1,\ldots,i_k\}$ generate the ideal. For $\mathsf{DOOBSLevi}$ to generate a Levi subgroup we input a subset $\{i_1,\ldots,i_s\}$ of $\{1,\ldots,n\}$ such that $L$ is generated by the simple roots corresponding to $\{1,\ldots,n\}\setminus\{i_1,\ldots,i_s\}$.\end{comment} Let $e \in \N$ be of height $3$ and let $\lambda$ be a cocharacter of $G$ that is associated to $e$. We use the same numbering of the positive roots as in $\mathsf{GAP4}$. For the $7$ height $3$ nilpotent orbits of the simple exceptional groups (see Proposition \ref{prop 8.10}), Table \ref{tab 8.10} below lists the roots whose root subgroups together with $T$ generate the Levi subgroup $C_G(\lambda)$, as well as the roots whose root subspaces generate $\gg_{\geqslant 2}$ (as a $B$-submodule of $\gg$). These are determined by means of the weighted Dynkin diagrams. \begin{comment} Indeed, see Section \ref{levi} for how to determine the Levi subgroup from the weighted Dynkin diagram. 
A similar method can be used for $\gg_{\geqslant 2}$, or we can note that generators for $\gg_{\geqslant 2}$ can be read off Table 1 in \cite{Ro}.\end{comment} \begin{table}[ht] \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{|c|c|c|c|} \hline Type of $G$& Bala--Carter Label & Generators for $L$ & Generators for $\gg_{\geqslant 2}$ \\ \hline $G_2$ & $\tilde{A_1}$&$\alpha_2$ & $\alpha_4$ \\ $F_4$ & $A_1 + \tilde{A_1}$&$\Pi\setminus\{\alpha_4\}$ & $\alpha_{16}$ \\ $E_6$ & $3A_1$&$\Pi\setminus\{\alpha_4\}$ & $\alpha_{24}$ \\ $E_7$ & $(3A_1)'$&$\Pi\setminus\{\alpha_3\}$ & $\alpha_{37}$ \\ $E_7$ & $4A_1$&$\Pi\setminus\{\alpha_{2},\alpha_{7}\}$ & $\alpha_{30},\alpha_{53}$ \\ $E_8$ & $3A_1$ &$\Pi\setminus\{\alpha_7\}$ & $\alpha_{74}$ \\ $E_8$ & $4A_1$&$\Pi\setminus\{\alpha_2\}$ & $\alpha_{69}$ \\ \hline \end{tabular} \bigskip \caption{Height Three Nilpotent Orbits in the Exceptional Lie Algebras.} \label{tab 8.10} \end{table} \begin{comment} Next we reproduce the $\mathsf{GAP4}$ worksheet in which we computed using the $\mathsf{DOOBSLevi}$ algorithm. \begin{alltt} gap> Read("Gap/DOOBSLevi.gap");\\ gap> DOOBS("G",2,[4],[1]); v.4+v.5 is a representative of a dense B(L)-orbit on n. No prime restrictions\\ gap> DOOBS("F",4,[16],[4]); v.16+v.20+v.22+v.23 is a representative of a dense B(L)-orbit on n. No prime restrictions\\ gap> DOOBS("E",6,[24],[4]); v.24+v.30+v.34+v.35 is a representative of a dense B(L)-orbit on n. No prime restrictions\\ gap> DOOBS("E",7,[37],[3]); v.37+v.55+v.61+v.62 is a representative of a dense B(L)-orbit on n. No prime restrictions\\ gap> DOOBS("E",7,[30,53],[2,7]); v.30+v.47+v.53+v.56+v.59+v.60+v.62 is a representative of a dense B(L)-orbit on n. No prime restrictions\\ gap> DOOBS("E",8,[74],[7]); v.74+v.104+v.118+v.119 is a representative of a dense B(L)-orbit on n. No prime restrictions\\ gap> DOOBS("E",8,[69],[2]); v.69+v.91+v.106+v.112+v.114+v.115+v.117+v.119 is a representative of a dense B(L)-orbit on n. 
No prime restrictions \end{alltt} \begin{rem}\label{rem 8.30} The representatives are given in the form $\sum v.i$, in our notation $v.i$ means $e_{\alpha_i}$ where $e_{\alpha_i}\in \gg_{\alpha_i}\setminus\{0\}$. Also note that there are no prime restrictions in the cases considered above, so $\kappa_L(\gg_{\geqslant 2})=0$ for a field of arbitrary characteristic. However, we can only conclude that the corresponding nilpotent orbit is spherical if $\mathop{\mathrm{char}}\nolimits k$ is good for $G$, since we need to make use of Theorem \ref{thm 5.10} which requires the condition on the field.\end{rem}\end{comment} The height $3$ cases for the exceptional groups were analyzed using the $\mathsf{DOOBSLevi}$ algorithm. It turns out that there are no characteristic restrictions in these cases: \begin{lem} \label{lem 8.130} If $G$ is simple of exceptional type and $e\in \N$ with $\mathop{\mathrm{ht}}\nolimits(e)=3$, then $e$ is spherical. \end{lem} Corollaries \ref{cor 8.20} and \ref{cor 8.40} combined with Lemma \ref{lem 8.130} give the following result. \begin{prop} \label{prop 8.40} Let $G$ be a connected reductive algebraic group and let $e \in \N$. If $\mathop{\mathrm{ht}}\nolimits(e)=3$, then $e$ is spherical. \end{prop} \begin{proof} If $G$ is simple, then the statement follows from Corollaries \ref{cor 8.20} and \ref{cor 8.40} and Lemma \ref{lem 8.130}. In the general case we argue as in Lemma \ref{lem 6.50} to reduce to the simple case. \end{proof} \subsection{The Classification} \label{sub:class} Our main classification theorem now follows readily from Lemma \ref{lem 6.50} and Propositions \ref{prop 7.20} and \ref{prop 8.40}. \begin{thm} \label{thm 8.40} Let $G$ be a connected reductive algebraic group. Suppose that $\mathop{\mathrm{char}}\nolimits k$ is a good prime for $G$. Then a nilpotent element $e \in \gg$ is spherical if and only if $\mathop{\mathrm{ht}}\nolimits(e)\leqslant 3$. 
\end{thm} \begin{rem} \label{rem 8.50} Let $G$ be a simple algebraic group and let $\mathop{\mathrm{char}}\nolimits k$ be a good prime for $G$. Then the spherical nilpotent orbits are given in Table \ref{t:sherical}. We present the orbits by listing the corresponding partition in the classical cases or by giving the corresponding Bala--Carter label for the exceptional groups. \end{rem} \begin{table}[ht] \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{|c|l|} \hline Type of $G$ & Spherical Orbits \\ \hline $A_n$ & $[1^j,2^i]$\\ $B_n$ & $[1^j,2^{2i}]$, or $[1^j,2^{2i},3]$ with $i \geqslant 0$\\ $C_n$ & $[1^{2j},2^i]$ \\ $D_n$ & $[1^j,2^{2i}]$, or $[1^j,2^{2i},3]$ with $i \geqslant 0$\\ \hline $G_2$ & $A_1$ or $\tilde{A_1}$\\ $F_4$ & $A_1$, $\tilde{A_1}$, or $A_1 + \tilde{A_1}$\\ $E_6$ & $A_1$, $2A_1$, or $3A_1$\\ $E_7$ & $A_1$, $2A_1$, $(3A_1)'$, $(3A_1)''$, or $4A_1$\\ $E_8$ & $A_1$, $2A_1$, $3A_1$, or $4A_1$\\ \hline \end{tabular} \bigskip \caption{The spherical nilpotent Orbits for $G$ simple.} \label{t:sherical} \end{table} \begin{rem} \label{rem 8.55} Using the fact that in good characteristic a Springer map affords a bijection between the set of unipotent $G$-conjugacy classes and the set of nilpotent $G$-orbits (see \cite{serre}), Theorem \ref{thm 8.40} also gives a classification of the spherical unipotent classes in $G$. Here we define the height of a unipotent element $u$ of $G$ as the height of the image of $u$ in $\N$ under a Springer isomorphism. \end{rem} \begin{comment}\begin{proof} Use Theorems \ref{thm 6.30} and \ref{thm 8.10} \end{proof} We have now classified the spherical nilpotent orbits, over a field of good characteristic for $G$. The classification is the same as the classification over a field of characteristic zero, see \cite{Pa3}. In \emph{loc.\!\! cit.\!} the proof that height three implies spherical was also shown by case-by-case considerations, however a general proof was later found, see \cite{Pa1}. 
In the final chapter we discuss some applications of this result. \chapter{}\label{C9} In this chapter we discuss some applications of the main result, Theorem \ref{thm 8.40}. Throughout this chapter $G$ is a connected reductive algebraic group over an algebraically closed field $k$, the Lie algebra of $G$ is denoted by $\gg$ and the characteristic of $k$ is good for $G$, except in the final section. \end{comment} \section{Applications and Complements} \label{sec:appl} Here we discuss some applications of the main result and some further consequences. \subsection{Spherical Distinguished Nilpotent Elements} \label{sub:dist} Recall that a nilpotent element $e\in \N$ is distinguished in $\gg$ if every torus contained in $C_G(e)$ is contained in the centre of $G$. For now we assume that $G$ is simple, so $e$ is distinguished in $\gg$ if and only if any torus contained in $C_G(e)$ is trivial and hence $C_G(e)^{\circ}$ is unipotent. Further recall that $\kappa_G(G\cdot e)=\kappa_G(G/C_G(e)^{\circ})$, cf.\ equation \eqref{eq 1.10}. Since $C_G(e)^{\circ}$ is connected and unipotent, it is contained in the unipotent radical $U$ of a Borel subgroup $B = TU$ of $G$. Let $B^-=TU^-$ be the unique opposite Borel subgroup to $B=TU$ relative to $T$, see \cite[\S 26.2]{Hu}. Consequently, $B^-\cap C_G(e)^{\circ} \subseteq B^-\cap U = \{1\}$. Thus, by equation \eqref{SubCom}, we have $\kappa_G(G/C_G(e)^{\circ}) = \dim G-\dim C_G(e)^{\circ} -\dim B^- = \dim U -\dim C_G(e)$, or equivalently, $\kappa_G(G\cdot e) = |\Psi^+| - \dim C_G(e)$. We summarize what we have just shown. \begin{prop} \label{prop 20.11} Let $e \in \N$ be a distinguished nilpotent element. Then \[ \kappa_G(G\cdot e)=|\Psi^+|-\dim C_G(e). \] \end{prop} \begin{rem} \label{rem 20.10} Proposition \ref{prop 20.11} was first observed by Panyushev for a field of characteristic zero in \cite[Cor.\ 2.4]{Pa3}. 
\end{rem} If $G$ is a simple classical group, then the distinguished nilpotent elements are given as follows, see Lemmas 4.1 and 4.2 in \cite{Ja1}. \begin{lem} \label{lem 20.01} Let $e \in \N$ and let $\pi_e$ be the corresponding partition of $\dim V$. \begin{itemize} \item[(i)] If $G=\SL(V)$, then $e$ is distinguished if and only if $\pi_e=[\dim V]$. \item[(ii)] If $G=\SP(V)$, then $e$ is distinguished if and only if $\pi_e$ consists only of distinct even parts. \item[(iii)] If $G=\SO(V)$, then $e$ is distinguished if and only if $\pi_e$ consists only of distinct odd parts. \end{itemize} \end{lem} \begin{cor} \label{cor 20.1} If $G=\SO(V)$ and $e \in \N$ is spherical and distinguished, then $\mathop{\mathrm{ht}}\nolimits(e)=2$. \end{cor} \begin{proof} Thanks to Proposition \ref{prop 8.10}, the height $3$ nilpotent elements have partitions of the form $\pi=[1^s,2^{2r},3]$, where $r>0$. Thus such a partition has even parts and so is not distinguished. So if $e$ is spherical and distinguished, then $\mathop{\mathrm{ht}}\nolimits(e)=2$. \end{proof} Proposition \ref{prop 4.50} and Lemma \ref{lem 20.01} imply the following result. \begin{prop} \label{prop 20.02} Let $e \in \N$ be distinguished and $\pi_e$ be the corresponding partition of $\dim V$. \begin{itemize} \item[(i)] If $G=\SL(V)$, then $\mathop{\mathrm{ht}}\nolimits(e)=2$ if and only if $\pi_e=[2]$. \item[(ii)] If $G=\SP(V)$, then $\mathop{\mathrm{ht}}\nolimits(e)=2$ if and only if $\pi_e=[2]$. \item[(iii)] If $G=\SO(V)$, then $\mathop{\mathrm{ht}}\nolimits(e)=2$ if and only if $\pi_e=[3]$ or $\pi_e=[1,3]$. \end{itemize} \end{prop} \begin{thm} \label{thm 20.02} If $G$ is a simple algebraic group and $e \in \N$ is spherical and distinguished, then $G$ is of type $A_1$. \end{thm} \begin{proof} For $G$ simple classical, Proposition \ref{prop 20.02} implies that $G$ is of type $A_1$. 
For $G$ of exceptional type it follows from Remark \ref{rem 8.50} and the tables in \cite[\S 13]{Ca} that there are no nilpotent orbits in $\gg$ that are both spherical and distinguished. \end{proof} \subsection{Orthogonal Simple Roots and Spherical Nilpotent Orbits} \label{sub:orth} In \cite[Thm.\ 3.4]{Pa2}, Panyushev proved that if the characteristic of $k$ is zero, then $e\in \N$ is spherical if and only if there exist pairwise orthogonal simple roots $\alpha_1,\alpha_2,\ldots ,\alpha_t$ in $\Pi$ such that $G\cdot e$ contains an element of the form $\sum_{i=1}^{t}e_{\alpha_i}$ where $e_{\alpha_i}\in\gg_{\alpha_i}\setminus\{0\}$. By pairwise orthogonal we mean that $\langle\alpha_i,\alpha_j\rangle=0$ for $i\neq j$. In this subsection we show that this is also the case if the characteristic of $k$ is good for $G$. \begin{lem} \label{lem 20.15} Let $\D G$ be of type $A_1^{t}$ for some $t \geqslant 1$. Then there is precisely one distinguished nilpotent orbit in $\N$. \end{lem} \begin{proof} Since the nilpotent orbits of $G$ in $\gg$ are precisely the nilpotent orbits of $\D G$ in $\Lie \D G$, we may assume that $G$ is semisimple. Thus, $G=G_1 G_2 \cdots G_r$ and each $G_i$ is of type $A_{1}$. There is precisely one distinguished nilpotent orbit when $G_i$ is of type $A_1$: the unique non-zero nilpotent orbit. Also $G\cdot e$ is distinguished in $\gg$ if and only if $G_i\cdot e_i$ is distinguished in $\gg_i = \Lie G_i$ for all $i$, where $e = e_1+\ldots +e_r$ and $e_i \in \gg_i$ is nilpotent. \end{proof} \begin{lem} \label{lem 20.111} Let $e \in \N$ and $S$ be a maximal torus of $C_G(e)$. Then $\D C_G(S)$ is of type $A_{1}^{t}$ for some $t \geqslant 1$ if and only if there exist pairwise orthogonal simple roots $\alpha_1$, $\alpha_2$, $\ldots$, $\alpha_t$ in $\Pi$ such that $G\cdot e$ contains an element of the form $\sum_{i=1}^{t}e_{\alpha_i}$, where $e_{\alpha_i}\in\gg_{\alpha_i}\setminus \{0\}$. 
\end{lem} \begin{proof} Suppose that $\D C_G(S)$ is of type $A_{1}^{t}$. Let $\alpha_1,\ldots,\alpha_t$ be simple roots of $\Phi$, where $\Phi$ is the root system of $C_G(S)$ relative to a maximal torus $T$ of $C_G(S)$. As $\D C_G(S)$ is of type $A_{1}^t$, the roots $\alpha_1,\ldots,\alpha_t$ are pairwise orthogonal. Clearly, $e\in \Lie C_G(S) = \cc_{\gg}(S)$ and $e$ is distinguished in $\cc_{\gg}(S)$, see Proposition \ref{prop4.20}. By Lemma \ref{lem 20.15}, an element of the form $\sum_{i=1}^{t}e_{\alpha_i}$ is also distinguished in $\cc_\gg(S)$ and there is precisely one distinguished nilpotent orbit in $\cc_\gg(S)$. Thus, $e$ and $\sum_{i=1}^{t}e_{\alpha_i}$ are in the same $C_G(S)$-orbit, hence they are in the same $G$-orbit. So $G\cdot e$ contains an element of the desired form. Conversely, suppose that there exist pairwise orthogonal simple roots $\alpha_1,\alpha_2,\ldots,\alpha_t\in \Pi$ such that $G\cdot e$ contains an element of the form $e' = \sum_{i=1}^{t}e_{\alpha_i}$. Let $H$ be the subgroup of $G$ generated by $\{T,U_{\pm \alpha_i}\mid 1\leqslant i\leqslant t\}$, where $T$ is as in the previous paragraph. Then $\D H$ is of type $A_1^t$. By construction, $e'$ is distinguished in $\hh$. By Proposition \ref{prop4.20}, $H$ is of the form $C_G(S')$, where $S'$ is a maximal torus of $C_G(e')$. Thus, $\D C_G(S')$ is of type $A_{1}^t$. Since $e$ and $e'$ are $G$-conjugate, so are $C_G(e)$ and $C_G(e')$, as well as $S$ and $S'$. Finally, we get that $C_G(S)$ and $C_G(S')$ are $G$-conjugate. The result follows. \end{proof} \begin{lem} \label{lem 20.1} If $e \in \N$ is spherical, then $\D C_G(S)$ is of type $A_{1}^{t}$ for some $t \geqslant 1$. \end{lem} \begin{proof} Let $\lambda$ be a cocharacter of $C_G(S)$ that is associated to $e$, i.e.\ $\lambda \in \Omega_{C_G(S)}^a(e)$. Then, since $\Lie C_G(S)=\cc_\gg(S)$, it follows from \cite[Cor.\ 3.21]{FoRo} that $\lambda \in \Omega_G^a(e)$. 
As $e$ is spherical in $\gg$, we have $\mathop{\mathrm{ht}}\nolimits(e)\leqslant 3$, by Theorem \ref{thm 8.40}. As $\lambda \in \Omega_{C_G(S)}^a(e)$, we also have $\mathop{\mathrm{ht}}\nolimits(e)\leqslant 3$ when we regard $e$ as an element of $\cc_\gg(S)$. Thus, again by Theorem \ref{thm 8.40}, $e$ is spherical in $\cc_\gg(S)$. So $e$ is distinguished and spherical in $\cc_\gg(S)$ and so $\D C_{G}(S)$ is of type $A_{1}^t$, by Theorem \ref{thm 20.02}. \end{proof} In order to prove the reverse implication of Lemma \ref{lem 20.1} we first need to consider the group $C_G(S)$. If $G$ is classical, then the structure of $C_G(S)$ can be determined from the partition $\pi_e$ corresponding to $e$; see \cite[\S 4.8]{Ja1} for the following result. \begin{lem} \label{lem 20.11} Let $G$ be simple classical and $e\in \N$ with corresponding partition $\pi_e$. \begin{itemize} \item[(i)] If $G$ is of type $A_n$ and $\pi_e=[1^{r_1},2^{r_2},\ldots]$, then $\D C_G(S)$ is of type $\prod_{i\geqslant 1}A_{i-1}^{r_i}$. \item[(ii)] If $G$ is of type $B_n$ and $\pi_e=[1^{2s_1+\epsilon_1},2^{2s_2},3^{2s_3+\epsilon_3},\ldots]$, where $s_i\geqslant 0$ and $\epsilon_i \in \{0,1\}$, then $\D C_G(S)$ is of type $\prod_{i\geqslant 1}A_{i-1}^{s_i} \times B_{m}$, where $2m+1=\sum_{\epsilon_i \neq 0} i$. \item[(iii)] If $G$ is of type $C_n$ and $\pi_e=[1^{2s_1},2^{2s_2+\epsilon_2},3^{2s_3},4^{2s_4+\epsilon_4},\ldots]$, where $s_i\geqslant 0$ and $\epsilon_i \in \{0,1\}$, then $\D C_G(S)$ is of type $\prod_{i\geqslant 1} A_{i-1}^{s_i} \times C_{m}$, where $2m=\sum_{\epsilon_i \neq 0} i$. \item[(iv)] If $G$ is of type $D_n$ and $\pi_e=[1^{2s_1+\epsilon_1},2^{2s_2},3^{2s_3+\epsilon_3},\ldots]$, where $s_i\geqslant 0$ and $\epsilon_i \in \{0,1\}$, then $\D C_G(S)$ is of type $\prod_{i\geqslant 1} A_{i-1}^{s_i} \times D_{m}$, where $2m=\sum_{\epsilon_i \neq 0} i$. 
\end{itemize} \end{lem} \begin{lem} \label{lem 20.2} If $G$ is simple classical and $\D C_G(S)$ is of type $A_{1}^{t}$, then $e$ is spherical. \end{lem} \begin{proof} First suppose that $G$ is of type $A_n$. Since $\D C_G(S)$ is of type $A_{1}^{t}$, it follows from Lemma \ref{lem 20.11} that $r_i=0$ for all $i \geqslant 3$. Thus $\pi_e=[1^{r_1},2^{r_2}]$ and so $e$ is spherical, by Remark \ref{rem 8.50}. Let $G$ be of type $B_n$. Since $\D C_G(S)$ is of type $A_{1}^{t}$, it follows from Lemma \ref{lem 20.11} that $s_i=0$ for $i\geqslant 3$ and $m\leqslant 1$, so $2m+1 \leqslant 3$. Since $2m+1$ is a sum of distinct odd integers, we either have $2m+1=1$ or $2m+1=3$. Thus $\pi_e=[1^{2s_1+1},2^{2s_2}]$ or $\pi_e=[1^{2s_1},2^{2s_2},3]$ and so $e$ is spherical, again by Remark \ref{rem 8.50}. Let $G$ be of type $C_n$. Since $\D C_G(S)$ is of type $A_{1}^{t}$, it follows from Lemma \ref{lem 20.11} that $s_i=0$ for $i\geqslant 3$ and $m\leqslant 1$, so $2m\leqslant 2$. Since $2m$ is a sum of distinct even integers, we either have $2m=0$ or $2m=2$. Thus $\pi_e=[1^{2s_1},2^{2s_2}]$ or $\pi_e=[1^{2s_1},2^{2s_2+1}]$ and so, by Remark \ref{rem 8.50}, $e$ is spherical. Finally, let $G$ be of type $D_n$. Since $\D C_G(S)$ is of type $A_{1}^{t}$, it again follows from Lemma \ref{lem 20.11} that $s_i=0$ for $i\geqslant 3$ and $m\leqslant 2$, so $2m\leqslant 4$. Since $2m$ is a sum of distinct odd integers, we either have $2m = 0$ or $2m = 1+3$. Thus $\pi_e = [1^{2s_1},2^{2s_2}]$ or $\pi_e = [1^{2s_1+1},2^{2s_2},3]$ and so, by Remark \ref{rem 8.50}, $e$ is spherical. \end{proof} All that remains is to check the exceptional cases. The Bala--Carter label of $e \in \N$ gives the Dynkin type of a Levi subgroup $L$ of $G$ such that $e$ is distinguished in $\Lie \D L$. By Proposition \ref{prop4.20}, such a Levi subgroup is the centralizer of a maximal torus of $C_G(e)$. Thus, the Bala--Carter label gives the type of $\D C_G(S)$. 
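For example, if $G$ is of type $E_7$ and $e \in \N$ has Bala--Carter label $4A_1$, then $e$ is distinguished in the Lie algebra of the derived subgroup of a Levi subgroup of type $A_1^4$, and so $\D C_G(S)$ is of type $A_1^4$.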
It follows from the tables in \cite[\S 13]{Ca} and Remark \ref{rem 8.50} that any nilpotent orbit with Bala--Carter label $A_1^t$ is spherical. We summarize this in Table \ref{tabel20.1} below. \begin{table}[ht] \renewcommand{\arraystretch}{1.2} \centering \begin{tabular}{|c|c|c||c|c|c|} \hline Type & Bala--Carter Label & Height &Type & Bala--Carter Label & Height\\ \hline $G_2$ & $A_1$ & 2 &$E_7$ & $A_1$ & 2 \\ $G_2$ & $\tilde{A_1}$ & 3& $E_7$ & $2A_1$ & 2 \\\cline{1-3} $F_4$ & $A_1$ & 2 &$E_7$ & $(3A_1)''$ & 2 \\ $F_4$ & $\tilde{A_1}$ & 2 & $E_7$ & $(3A_1)'$ & 3 \\ $F_4$ & $A_1 + \tilde{A_1}$ & 3&$E_7$ & $4A_1$ & 3 \\\hline $E_6$ & $A_1$ & 2 &$E_8$ & $A_1$ & 2 \\ $E_6$ & $2A_1$ & 2 &$E_8$ & $2A_1$ & 2 \\ $E_6$ & $3A_1$ & 3 &$E_8$ & $3A_1$ & 3 \\ - &- &- & $E_8$ & $4A_1$ & 3 \\ \hline \end{tabular} \bigskip \caption{Orbits in Exceptional Lie Algebras with $\D C_G(S)$ of Type $A_{1}^t$.} \label{tabel20.1} \end{table} \begin{lem} \label{lem 20.25} If $G$ is a simple exceptional algebraic group and $\D C_G(S)$ is of type $A_{1}^{t}$, then $e$ is spherical. \end{lem} \begin{lem} \label{lem 20.3} Let $e\in \N$. If $\D C_G(S)$ is of type $A_{1}^{t}$, then $e \in \gg$ is spherical. \end{lem} \begin{proof} For $G$ simple, the result follows from Lemmas \ref{lem 20.2} and \ref{lem 20.25}. In the general case let $\D G=G_1G_2\cdots G_r$ be a commuting product of simple groups and $e = e_1+e_2+\ldots +e_r$, where $e_i \in \gg_i = \Lie G_i$ and each $e_i$ is nilpotent. A maximal torus $S$ of $C_G(e)$ is of the form $S_1S_2\cdots S_r$, where $S_i$ is a maximal torus of $C_{G_i}(e_i)$. The simple case implies that $\D C_{G_i}(S_i)$ is of type $A_{1}^{t}$. \end{proof} Lemmas \ref{lem 20.3} and \ref{lem 20.111} now imply the main result of this subsection. \begin{thm} \label{thm 20.2} Let $e \in \N$ and let $S$ be a maximal torus of $C_G(e)$. Then the following are equivalent. 
\begin{itemize} \item[(i)] $e$ is spherical; \item[(ii)] $\D C_G(S)$ is of type $A_{1}^t$; \item[(iii)] there exist pairwise orthogonal simple roots $\alpha_1,\alpha_2,\ldots ,\alpha_t\in \Pi$ such that $G\cdot e$ contains an element of the form $\sum_{i=1}^{t}e_{\alpha_i}$, where $e_{\alpha_i}\in\gg_{\alpha_i}\setminus\{0\}$. \end{itemize} \end{thm} \begin{comment} \begin{exmp}\label{ex 20.05}\begin{itemize}\item Let $G$ be of type $A_3$ and let $\alpha_1,\alpha_2$ and $\alpha_3$ be simple roots of $\Psi$. Clearly, the following sets contain pairwise orthogonal simple roots $\{\alpha_1\}\:,\:\{\alpha_2\}\:,\:\{\alpha_3\}$ and $\{\alpha_1,\alpha_3\}$. If $e_{\alpha_i}\in \gg_{\alpha_i}\setminus\{0\}$ for $i=1,2,3$, then $e_{\alpha_i}$ and $e_{\alpha_j}$ lie in the same $G$-orbit, since the roots $\alpha_i$ and $\alpha_j$ are conjugate under the Weyl group of $G$. So by Theorem \ref{thm 20.2} we have two distinct non-zero spherical nilpotent orbits in $\gg$, cf.\ Theorem \ref{thm 8.50}. \item Let $G$ be of type $G_2$ and let $\alpha$ and $\beta$ be simple roots of $\Psi$, with $\alpha$ the long root. Trivially the following sets contain pairwise orthogonal simple roots $\{\alpha\}$ and $\{\beta\}$. Since $\alpha$ and $\beta$ have different lengths, they are not conjugate under the Weyl group of $G$. So by Theorem \ref{thm 20.2} we have two distinct non-zero spherical nilpotent orbits in $\gg$, cf.\! Theorem \ref{thm 8.50}. \end{itemize}\end{exmp}\end{comment} \subsection{Spherical Orbits and ad-Nilpotent Ideals} \label{sub:ideals} In this section we generalize some results from \cite{PaRo1} and \cite{PaRo2} to a field of good characteristic. When $G$ is simple and classical, Panyushev gave simple algebraic criteria for a nilpotent element $e \in \N$ to be spherical in \cite[\S 4]{Pa3}. We show that these criteria are still valid for a field of good characteristic. \begin{lem} \label{lem 20.3a} Let $G$ be a simple classical algebraic group and $e \in \N$. 
\begin{itemize} \item[(i)] Let $e$ be a nilpotent matrix in $\mathfrak{sl}_n$ or $\mathfrak{sp}_n$. Then $e$ is spherical if and only if $e^2=0$. \item[(ii)] Let $e$ be a nilpotent matrix in $\mathfrak{so}_n$. Then $e$ is spherical if and only if the rank of $e^2$ is at most one. \end{itemize} \end{lem} \begin{proof} Let $e$ be a nilpotent matrix in $\mathfrak{sl}_n$ or $\mathfrak{sp}_n$. If $e$ is spherical, then $\pi_e=[1^j,2^i]$, for appropriate $i$ and $j$, see Remark \ref{rem 8.50}. By considering the corresponding Jordan blocks for $\pi_e$, we see that $e^2=0$. Conversely, if $e^2=0$, then $e$ is conjugate to an element $e'$ with partition $\pi_{e'}=[1^j,2^i]$ and so $e$ is spherical, again by Remark \ref{rem 8.50}. Let $e$ be a nilpotent matrix in $\mathfrak{so}_n$. If $e$ is spherical, then $\pi_e=[1^j,2^i]$ or $\pi_e=[1^j,2^i,3]$, for appropriate $i$ and $j$, see Remark \ref{rem 8.50}. By considering the corresponding Jordan blocks for $\pi_e$, we see that either $e^2 = 0$ or $e^2$ has partition $\pi_{e^2}=[1^k,2]$. Thus the rank of $e^2$ is either $0$ or $1$. Conversely, if the rank of $e^2$ is at most $1$, then $e$ is conjugate to an element $e'$ with partition $\pi_{e'} = [1^j,2^i]$ or $\pi_{e'} = [1^j,2^i,3]$ and so $e$ is spherical, again by Remark \ref{rem 8.50}. \end{proof} In \cite{PaRo1} and \cite{PaRo2}, D.I.\ Panyushev and the second author gave a classification of the spherical ideals of $\bb=\Lie B$ contained in $\bb_u=\Lie R_u(B)$, where $B$ is a Borel subgroup of $G$ in characteristic $0$. An ideal $\cc$ of $\bb$ is \emph{ad-nilpotent} if $\cc$ is contained in $\bb_u$. An ad-nilpotent ideal $\cc$ of $\bb$ is called \emph{spherical} if its $G$-saturation $G\cdot \cc=\{x\cdot e \mid x \in G, e \in \cc\}$ is a spherical $G$-variety. First in \cite[Cor.\ 2.4]{PaRo1} it is proved that if $\aaa$ is an Abelian ideal of $\bb$, then $\aaa$ is spherical. 
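For example, let $G=\SL_2(k)$ with $B$ the Borel subgroup of upper triangular matrices. The only non-zero ad-nilpotent ideal of $\bb$ is $\bb_u$ itself, the space of strictly upper triangular matrices in $\mathfrak{sl}_2(k)$; it is abelian, its $G$-saturation $G\cdot \bb_u$ is the nilpotent cone $\{e \in \mathfrak{sl}_2(k) \mid e^2 = 0\}$, and every $G$-orbit meeting it is spherical, by Lemma \ref{lem 20.3a}(i).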
In \cite[Prop.\ 4.1 and Thm.\ 4.2]{PaRo2} it is proved that there are non-abelian spherical ideals only if $G$ is not simply-laced, that is, if the Dynkin diagram of $G$ has a multiple bond. Theorem 2.3 in \cite{PaRo1} states that any $G$-orbit meeting an abelian ad-nilpotent ideal $\aaa$ is spherical. This is proved by means of the fact that an orbit $G\cdot e$ is spherical if and only if $\ad(e)^4=0$, see \cite[Cor.\ 2.2]{Pa3}. Unfortunately, this equivalence is no longer true in positive characteristic, see Example \ref{ex 20.10}. However, the forward implication of this equivalence is still valid in good characteristic. \begin{lem} \label{lem 20.30} If $e \in \N$ is spherical, then $\ad(e)^4=0$. \end{lem} \begin{proof} If $e$ is spherical, then $\mathop{\mathrm{ht}}\nolimits(e)\leqslant 3$, by Theorem \ref{thm 8.40}. Let $\gg=\bigoplus_{i=-3}^{3}\gg(i)$ be the grading of $\gg$ afforded by an associated cocharacter in $\Omega_G^a(e)$. We have that $e \in \gg(2)$. Consequently, $\ad(e)^4(\gg(i)) \subseteq \gg(i+8) = \{ 0 \}$ for any $-3 \leqslant i \leqslant 3$. Hence, $\ad(e)^4=0$ on all of $\gg$. \end{proof} The next example shows that the converse of Lemma \ref{lem 20.30} is not true in general in positive characteristic. \begin{exmp} \label{ex 20.10} Let $G=\SL_3(k)$ and $\mathop{\mathrm{char}}\nolimits k=3$. So $\gg=\mathfrak{sl}_3(k)$. Set $e=e_{2,1}+e_{3,2}$, where $e_{i,j}$ is the elementary matrix with a 1 in the $(i,j)$ position and 0's elsewhere. So $e$ is a regular nilpotent element in $\gg$. Consider the grading of $\gg$ afforded by an associated cocharacter in $\Omega_G^a(e)$. We have $\gg=\bigoplus_{i=-2}^{2}\gg(2i)$. In order to prove $\ad(e)^4=0$, it is sufficient to show that $\ad(e)^4(\gg(-4))=\{0\}$. Clearly, $\gg(-4)=ke_{1,3}$. Now $\ad(e)(e_{1,3})=e_{2,3}-e_{1,2}$ and $\ad(e)(e_{2,3}-e_{1,2})=e_{1,1}-2e_{2,2}+e_{3,3}$. 
Since $\mathop{\mathrm{char}}\nolimits k=3$, we have $e_{1,1}-2e_{2,2}+e_{3,3}=e_{1,1}+e_{2,2}+e_{3,3}$ and $e_{1,1}+e_{2,2}+e_{3,3} \in Z(\gg)$. Thus, $\ad(e)^4=0$. However, $e$ is not spherical, as $\pi_e=[3]$, see Remark \ref{rem 8.50}. \end{exmp} We note that Proposition 4.1 and Theorem 4.2 in \cite{PaRo2} both also hold in good characteristic, as their proofs only require properties of the underlying root system $\Psi$ and the results established in Lemmas \ref{lem 20.3a} and \ref{lem 20.30}. \begin{comment} We restate these two results below and refer to \cite{PaRo2} for the proofs. \begin{prop} \label{prop 20.4} Let $\cc$ be an ad-nilpotent ideal of $\bb$. \begin{itemize} \item[(i)] If $G$ is simply-laced and $[\cc,\cc]\neq \{0\}$, then $\cc$ is non-spherical. \item[(ii)] If $G$ is doubly-laced and $[\cc,[\cc,\cc]]\neq \{0\}$, then $\cc$ is non-spherical. \item[(iii)] If $G$ is of type $G_2$, then any spherical ideal is abelian. \end{itemize} \end{prop} \begin{thm} \label{thm 20.03} Suppose that $G$ is doubly-laced. \begin{itemize} \item[(i)] Let $G$ be of type $B_n\:\: (n\geqslant 2)$. Then there is a unique maximal non-abelian spherical ad-nilpotent ideal of $\bb$. \item[(ii)] Let $G$ be of type $C_n\:\: (n\geqslant 2)$. Then there are $n-1$ maximal non-abelian spherical ad-nilpotent ideals of $\bb$. \item[(iii)] Let $G$ be of type $F_4$. Then there are two maximal non-abelian spherical ad-nilpotent ideals of $\bb$. \end{itemize} \end{thm} We list the maximal non-abelian spherical ad-nilpotent ideals of $\bb$, see \cite[Table 1]{PaRo2}. 
\begin{table}[ht] \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{|c|l|c|} \hline Type of $G$ & Generators for $\cc$ & $\dim \cc$ \\ \hline $B_n$ & $\alpha_n$ & $\frac{n(n+1)}{2}$ \\\hline $C_n$ & $\alpha_1$ & $2n-1$ \\\hline $C_n$ & $\alpha_1+\ldots+\alpha_k, 2\alpha_k+\ldots +2\alpha_{n-1}+\alpha_n$ for $2\leqslant k \leqslant n-1$ & $2n+\frac{(k^2+3k)}{2}$ \\\hline $F_4$ & $(1,1,1,1)$ & 10 \\\hline $F_4$ & $(0,1,2,1)$ & 11 \\ \hline \end{tabular} \medskip \caption{Maximal Non-Abelian Spherical Ideals of $\bb$ in Type $B_n$, $C_n$ and $F_4$.} \label{tab 20.10} \end{table} \end{comment} So we are left to show that if $\aaa$ is an abelian ad-nilpotent ideal, then $\aaa$ is spherical. Since $G\cdot \aaa$ is irreducible, it is the closure of some nilpotent orbit, say $\overline{G\cdot e} = G\cdot \aaa$. The maximal abelian ad-nilpotent ideals of $\bb$ are the same in good characteristic as in characteristic zero, see Table 1 in \cite{Ro} and Tables I and II in \cite[\S 4]{PaRo1}. Using the description of the orbits in Tables I and II in \cite[\S 4]{PaRo1}, we infer that the Bala--Carter label of $G\cdot e$ is of the form $A_1^t$, so $G\cdot e$ is spherical, thanks to Theorem \ref{thm 20.2}. Since $G\cdot e$ is open in $G\cdot \aaa$, it follows that $G\cdot \aaa$ is spherical. It is straightforward to get the sphericity of $G\cdot\aaa$ for any abelian ideal $\aaa$ of $\bb$ from the sphericity result of the maximal abelian ideals. Thus we have established the following. \begin{thm} \label{thm 20.04} Let $\aaa$ be an abelian ad-nilpotent ideal of $\bb$. Then $\aaa$ is spherical. \end{thm} As a corollary of Theorem \ref{thm 20.04} we get \cite[Thm.\ 1.1]{Ro} in good characteristic. \begin{cor} \label{cor 20.2} Let $P$ be a parabolic subgroup of $G$ and let $\aaa$ be an abelian ideal of $\Lie P$ in $\Lie R_u(P)$. Then $P$ acts on $\aaa$ with finitely many orbits. 
\end{cor} \begin{rem} We note that Theorem \ref{thm 20.04} and Corollary \ref{cor 20.2} do in fact hold in arbitrary characteristic, cf.\ \cite[Thm.\ 1.1]{Ro}. \end{rem} \begin{rem} If $\cc$ is a spherical ideal of $\bb$, then clearly $B$ acts on $\cc$ with a finite number of orbits. However, the converse does not hold. There are many additional instances when $B$ acts on a given ideal $\cc$ of $\bb$ only with a finite number of orbits, e.g.\ see the results in \cite{hilleroehrle} and \cite{juergensroehrle}. \end{rem} \subsection{A Geometric Characterization of Spherical Orbits} \label{sub:geom} In this subsection we describe a formula, proved in \cite[Thm.\ 1]{CaCaCo}, that characterizes spherical conjugacy classes in a simple algebraic group $G$ in terms of elements of the Weyl group $W$ of $G$. For $x \in G$ the conjugacy class $G\cdot x$ is spherical if $G\cdot x$ is a spherical variety. While this characterization in {\it loc.\ cit.} is based on case-by-case arguments, recently G.\ Carnovale \cite[Thm.\ 2]{Carno} gave a proof of this result which is free of case-by-case considerations and applies in good odd characteristic. Using the arguments from \cite{CaCaCo} combined with our classification of the spherical unipotent classes, Remark \ref{rem 8.55}, we can generalize this formula to good characteristic. Let $G$ be simple and suppose that $\mathop{\mathrm{char}}\nolimits k$ is good for $G$. Fix a Borel subgroup $B$ of $G$. Let $W$ be the Weyl group of $G$ and let $BwB$ be the $(B,B)$-double coset of $G$ containing $w \in W$. The following was shown in \cite{CaCaCo} in an argument independent of the characteristic of the underlying field: Suppose that $\CO$ is a conjugacy class in $G$ which intersects the double coset $BwB$ so that $\dim \CO = \ell(w) + \mathop{\mathrm{rk}}\nolimits(1 - w)$ holds. Then $\CO$ is spherical. 
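To illustrate this criterion, let $G=\SL_2(k)$ and let $\CO$ be the conjugacy class of a regular unipotent element, so that $\dim \CO = 2$. The class $\CO$ meets the cell $BsB$, where $s$ is the simple reflection in $W$; since $\ell(s)=1$ and $\mathop{\mathrm{rk}}\nolimits(1-s)=1$, we have
\[ \dim \CO = 2 = \ell(s) + \mathop{\mathrm{rk}}\nolimits(1-s), \]
and indeed $\CO$ is spherical.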
Here $\mathop{\mathrm{rk}}\nolimits(1 - w)$ denotes the rank of the linear map $1 - w$ in the standard representation of $W$ and $\ell$ is the usual length function of $W$ with respect to a distinguished set of generators of $W$. Conversely, let $\CO$ be a spherical conjugacy class in $G$ and let $BwB$ be the $(B,B)$-double coset containing the dense $B$-orbit in $\CO$. Then $\dim \CO = \ell(w) + \mathop{\mathrm{rk}}\nolimits(1 - w)$, see \cite[Thm.\ 2]{Carno}. Consequently, this gives a geometric characterization of the spherical conjugacy classes in $G$. For proofs we refer the reader to \cite{CaCaCo} and \cite{Carno}. Observe that as a consequence of the finiteness of the Bruhat decomposition of $G$ and the fact that any $(B,B)$-double coset and any conjugacy class of $G$ are irreducible subvarieties of $G$, for a given conjugacy class $\CO$ in $G$ there is a unique $w \in W$ such that $\CO \cap BwB$ is dense in $\CO$. \begin{thm} (\cite[Thm.\ 1]{CaCaCo}) \label{thm:CaCaCo} Let $\CO$ be a conjugacy class in $G$ and let $w \in W$ be such that $\CO \cap BwB$ is dense in $\CO$. Then $\CO$ is spherical if and only if $\dim \CO = \ell(w) + \mathop{\mathrm{rk}}\nolimits(1 - w)$. \end{thm} \subsection{Bad Primes and Spherical Nilpotent Orbits} \label{sub:bad} Finally, we briefly discuss the situation when the characteristic of $k$ is bad for $G$. In this case the classification of the nilpotent orbits in $\N$ is different from that in good characteristic, see \cite[\S 5.11]{Ca}. However, there is still only a finite number of nilpotent orbits, \cite{HoltSpaltenstein}. Unfortunately, our methods do not allow us to give a classification of the spherical nilpotent orbits in this case. For, in our classification we made use of the height of a nilpotent orbit, where the height is defined via an associated cocharacter. However, it is not known whether associated cocharacters always exist for all nilpotent elements in bad characteristic, cf.\ \cite[\S 5.14, \S 5.15]{Ja1}. 
In principle one can still determine whether a given nilpotent orbit is spherical by a case by case analysis. Next we give two examples of this. In particular, we show that Theorem \ref{thm 20.2} fails in bad characteristic in general. These examples show that there can be additional spherical nilpotent orbits in bad characteristic. \begin{exmps} \label{eg 20.20} (i). Let $G$ be of type $B_2$ and $\mathop{\mathrm{char}}\nolimits k=2$. Let $\alpha$ and $\beta$ be the simple roots of $\Psi$ with $\alpha$ the long root. Let $e=e_{\alpha+\beta}+e_{\alpha+2\beta}$. According to \cite[\S 5.14]{Ja1} the centralizer $C_G(e)$ is the unipotent radical of a Borel subgroup of $G$. Thus, by Lemma \ref{lem 1.30}, $C_G(e)$ is a spherical subgroup of $G$ and so $e$ is spherical. Note that the $G$-orbit of $e$ does not contain an element of the form $e_{\alpha}$ or $e_{\beta}$, but $e$ is still spherical. Thus, Theorem \ref{thm 20.2} is no longer true in bad characteristic. Moreover, $e$ is distinguished in $\gg$, \cite[\S 5.14]{Ja1}. This shows that Theorem \ref{thm 20.02} can also fail for bad characteristic. (ii). Let $G$ be of type $G_2$ and $\mathop{\mathrm{char}}\nolimits k=3$. Let $\alpha$ and $\beta$ be the simple roots of $\Psi$ with $\alpha$ the long root. Let $e=e_{\alpha+2\beta}+e_{2\alpha+3\beta}$. According to \cite[\S 5.15]{Ja1}, the centralizer $C_G(e)$ is the unipotent radical of a Borel subgroup of $G$. Thus, by Lemma \ref{lem 1.30}, $C_G(e)$ is a spherical subgroup of $G$ and so $e$ is spherical. Again, the $G$-orbit of $e$ does not contain an element of the form $e_{\alpha}$ or $e_{\beta}$, but $e$ is spherical. Again, $e$ is distinguished in $\gg$, \cite[\S 5.15]{Ja1}. \end{exmps} \bigskip {\bf Acknowledgements}: The first author acknowledges funding by the EPSRC. 
We are grateful to S.M.~Goodwin for providing the relative version $\mathsf{DOOBSLevi}$ of his program $\mathsf{DOOBS}$ that was used in Subsection \ref{sub:ex} to determine the sphericity of the nilpotent orbits of height $3$ for the exceptional cases and for very helpful discussions and improvements of the paper. We would also like to thank the referee for suggesting some improvements. \bigskip
{ "timestamp": "2008-05-27T15:37:54", "yymm": "0708", "arxiv_id": "0708.0923", "language": "en", "url": "https://arxiv.org/abs/0708.0923", "abstract": "Let G be a connected reductive linear algebraic group defined over an algebraically closed field of characteristic p. Assume that p is good for G. In this note we classify all the spherical nilpotent G-orbits in the Lie algebra of G. The classification is the same as in the characteristic zero case obtained by D.I. Panyushev in 1994: for e a nilpotent element in the Lie algebra of G, the G-orbit G.e is spherical if and only if the height of e is at most 3.", "subjects": "Group Theory (math.GR); Rings and Algebras (math.RA)", "title": "Spherical Nilpotent Orbits in Positive Characteristic", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9875683498785867, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7096610844548071 }
https://arxiv.org/abs/1205.0095
Dependence of Kolmogorov widths on the ambient space
We study the dependence of the Kolmogorov widths of a compact set on the ambient Banach space.
\section{Introduction} Let $\mz $ be a subset of a Banach space $\mx $ and $x\in \mx $. The {\it distance from $x$ to $\mz $} is defined as \[E(x,\mz )=\inf\{||x-z||:~z\in \mz \}.\] \begin{definition} {\rm Let $K$ be a subset of a Banach space $\mx $, $n\in\mathbb{N}\cup\{0\}$. The {\it Kolmogorov $n$-width} (or {\it $n$-th Kolmogorov number}) of $K$ is given by \[d_n(K, \mx)=\inf_{\mx _n}\sup_{x\in K}E(x,\mx _n),\] where the infimum is over all subspaces $\mx_n \subset \mx$ of dimension not exceeding $n$. We use the notation $d_n(K)$ if $\mx$ is clear from context.} \end{definition} This notion was introduced by Kolmogorov \cite{Kol36} in 1936. It has been the subject of extensive study and has found many applications, both in Approximation Theory and in Functional Analysis, see \cite{CS}, \cite{LGM96}, \cite{Pie80}, \cite{Pin85}, and \cite{Tik60}. In \cite{OS09} it was discovered that some general asymptotic properties of Kolmogorov widths are useful in the study of closures of sets of operators in the weak operator topology. More results on asymptotic properties of Kolmogorov widths were obtained in \cite{Ost10}. The purpose of this paper is to continue the analysis of asymptotic properties of widths. \medskip Our emphasis in this paper is on the dependence of asymptotic properties of widths on the ambient space. It has been known for a long time (see \cite[\S 7]{Tik60}) that if $\my$ is a subspace of a Banach space $\mx$ and $K\subset\my$, then it can happen that $d_n(K,\my)>d_n(K,\mx)$. Furthermore, the quotient $d_n(K,\my)/d_n(K,\mx)$ can be arbitrarily large. 
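A toy numerical illustration of the definition (ours, not from the paper): in a Hilbert space the Kolmogorov widths of an ellipsoid are its semi-axes in decreasing order, and for an ellipse in $(\mathbb{R}^2,\|\cdot\|_2)$ this can be checked by brute force over one-dimensional subspaces. The sampling resolutions and names below are arbitrary choices.

```python
import math

def dist_to_line(p, theta):
    # Euclidean distance from p to the line spanned by (cos theta, sin theta):
    # the size of the component of p along the unit normal (-sin theta, cos theta).
    return abs(-p[0] * math.sin(theta) + p[1] * math.cos(theta))

# Boundary samples of the ellipse K with semi-axes 2 and 1 in (R^2, ||.||_2).
M = 1000
K = [(2.0 * math.cos(2.0 * math.pi * i / M), math.sin(2.0 * math.pi * i / M))
     for i in range(M)]

# d_0(K): approximation by the zero subspace, i.e. the maximum of ||x|| over K.
d0 = max(math.hypot(x, y) for x, y in K)

# d_1(K): best worst-case approximation by a line through the origin.
d1 = min(max(dist_to_line(p, math.pi * j / 500) for p in K)
         for j in range(500))

print(d0, d1)
```

Since the widths of this ellipse are its semi-axes, the brute-force values come out as $d_0 \approx 2$ and $d_1 \approx 1$, the latter attained by the major axis; finer grids only sharpen the approximation.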
An example with, in a certain sense, the optimal order of this quotient was found in \cite{Ost10}, where the following result was proved: \begin{theorem}[\cite{Ost10}]\label{T:DepSpace} For each $n$ the Banach space $\ell_1^{3n}$ contains a $2n$-dimensional subspace $\my_{2n}$ and a compact $K_{2n}\subset \my_{2n}$ such that $d_n(K_{2n},\ell_1^{3n})\le 1$ but $d_n(K_{2n},\my_{2n})\ge c\sqrt{n}$ for some absolute constant $c>0$. \end{theorem} \begin{remark}\label{R:Optimal} The order in Theorem \ref{T:DepSpace} is optimal in the following sense: Proposition \ref{P:nDim} implies that $d_n(K_{2n},\my_{2n})\le \sqrt{2n}\,d_n(K_{2n},\ell_1^{3n})$. \end{remark} The paper is structured as follows: in Section \ref{s:abs wi}, we introduce the notion of the absolute width $d_n^a(K)$ (Definition \ref{D:AW}) and collect the necessary basic facts. In general, $d_n^a(K) \leq d_n(K)$, but in some cases we obtain the equality, or at least proportionality, of the two quantities. In Section \ref{s:affine}, we study affine widths. This allows us to construct, in certain Banach spaces $X$, a compact convex set $K$ such that $d_1(K) > d_1^a(K)$. In Section \ref{s:compare} we note some connections of Kolmogorov and absolute widths to other $s$-number sequences (such as the sequence of Gelfand numbers). This provides us with some tools to be used later.\medskip We then pass to the study of the asymptotic behavior of Kolmogorov numbers. In Section \ref{s:vr}, we exhibit a large class of Banach spaces which contain a sequence of compact subsets $(K_n)$ such that $\lim_n d_{k_n}(K_n)/d_{k_n}^a(K_n) = \infty$ for some increasing sequence $(k_n)$. In Section \ref{s:ratio}, we sharpen this result by showing that, if a space $\mx$ satisfies certain conditions (for instance, if it is $K$-convex), then it contains a compact $K$ with the property that $\limsup_n d_n(K)/d_n^a(K) = \infty$. 
If, furthermore, $\mx$ contains $\ell_p$ ($1 < p < \infty$) as a complemented subspace, then it contains a compact subset $K$ such that $\liminf_n n^{-\sigma} d_n(K)/d_n^a(K) = \infty$ for some $\sigma > 0$. In Section \ref{s:restrict}, we examine compacts $K$ for which $d_n(K) = d_n^a(K)$ for any ambient space. Finally, Section \ref{s:image} is devoted to comparing the Kolmogorov widths of the sets $K$ and $u(K)$, where $u$ is a compact operator.\medskip Throughout the paper we pose some interesting geometric problems related to our study (Problems \ref{P:Nspace}, \ref{P:Linfty}, \ref{P:ProjRel}, \ref{P:ratio}, \ref{P:to infty}, \ref{P:RevKKM}, \ref{P:omalComp}). Problem \ref{P:ProjRel} could be of interest not only in the context of the theory of widths.\medskip We use basic Banach space theory and its standard notation. We denote by $\ball(\mx)$ the closed unit ball of a space $\mx$. \section{Absolute widths}\label{s:abs wi} The dependence of the sequence $\{d_n(K)\}_{n=0}^\infty$ on the ambient Banach space leads to the following definition. \begin{definition}[\cite{Ism74}]\label{D:AW} {\rm Let $K$ be a compact in a Banach space $\my$ and $n\in\mathbb{N}$. The $n$-th {\it absolute width} (or {\it number}) $d^a_n(K)$ of $K$ is defined by $d^a_n(K)=\inf_\mx d_n(K,\mx)$, where the $\inf$ is over all Banach spaces $\mx$ containing $\my$ as a subspace.} \end{definition} Absolute widths were studied in \cite{Ism74}, \cite{Koc90}, \cite{Oik95}, and \cite{Ost10}. Our main purpose in this paper is to study the asymptotic behavior of the quotients $d_n(K,\my)/d_n^a(K)$ under different assumptions. We start with the following natural open problem: characterize the Banach spaces $\my$ for which $d_n(K,\my)=d_n^a(K)$ for all compacts $K\subset \my$. We present a class of Banach spaces having this property. The following definition goes back to \cite{LP68}: Let $1\le \lambda<\infty$. 
A Banach space $\my$ is called an {\it $\mathcal{L}_{\infty,\lambda}$-space} if for every finite-dimensional subspace $S\subset \my$ there is a finite-dimensional subspace $F\subset \my$ such that $S\subset F$ and $d(F,\ell_\infty^m)\le\lambda$, where $m=\dim F$. A Banach space is called an {\it $\mathcal{L}_{\infty,\lambda+}$-space} if it is an $\mathcal{L}_{\infty,\nu}$-space for each $\nu>\lambda$. See \cite{Bou81} and \cite{LT73} for the theory of $\mathcal{L}_{p}$-spaces. More generally, a Banach space $\mx$ is called an {\it ${\mathcal{N}}_\lambda$-space} if, for every finite-dimensional subspace $E$ of $\mx$, there exists a finite-dimensional subspace $F$ satisfying $E \subset F \subset \mx$ and $\lambda(F)\le\lambda$. Here, following \cite{Tom89}, we define the ({\it absolute}) {\it projection constant} $\lambda(F)$ of $F$ as follows: for a superspace $G \supset F$, define the {\it relative projection constant} $\lambda(F,G)$ as the infimum of $\|P\|$, where $P$ runs over the projections from $G$ onto $F$. Then $\lambda(F) = \sup \lambda(F,G)$, with the supremum taken over all superspaces $G$. A Banach space $\mx$ is called an {\it ${\mathcal{N}}_{\lambda+}$-space} if it is an ${\mathcal{N}}_{\nu}$-space for each $\nu>\lambda$, and an {\it ${\mathcal{N}}$-space} if it is an ${\mathcal{N}}_\lambda$-space for some $1\le\lambda<\infty$. It is easy to see that each $\mathcal{L}_{\infty,\lambda}$-space is an $\mathcal{N}_\lambda$-space. However, the converse is false, see e.g. \cite{Sza90}. It is not known whether each $\mathcal{N}$-space is an $\mathcal{L}_{\infty,\lambda}$-space for some $\lambda<\infty$. This problem is a version of the well-known $P_\lambda$-problem (see \cite[Problem 7, p.~323]{LP68}), which is still open. However, it is known \cite{LL66} that, for a real Banach space $\mx$, the following are equivalent: (i) $\mx$ is an $\mathcal{N}_{1+}$-space; (ii) $\mx$ is an $\mathcal{L}_{\infty, 1+}$-space; (iii) $\mx^* = L_1(\mu)$, for some measure $\mu$. 
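To make the relative projection constant concrete, here is a small numerical sketch (ours, not from the paper). For the hyperplane $H=\ker f$ in $\ell_\infty^3$, with $f(x)=x_1+x_2+x_3$, every projection onto $H$ has the form $Px=x-f(x)u$ with $f(u)=1$, so $\lambda(H,\ell_\infty^3)$ reduces to a minimization over $u$; a direct computation gives the minimum $4/3$, attained at $u=(1/3,1/3,1/3)$.

```python
# Every projection of l_inf^3 onto H = ker f, f(x) = x1 + x2 + x3, has the
# form P = I - u f^T with f(u) = 1, i.e. P_{ij} = delta_{ij} - u_i.  On
# l_inf the operator norm of a matrix is its largest absolute row sum, and
# row i of P contributes |1 - u_i| + 2|u_i|.
def proj_norm(u):
    return max(abs(1.0 - ui) + 2.0 * abs(ui) for ui in u)

# Brute-force minimization over the constraint set f(u) = 1.
best = float("inf")
steps = 300
for a in range(steps + 1):
    for b in range(steps + 1):
        u1 = -0.5 + 2.0 * a / steps        # grid over [-0.5, 1.5]^2
        u2 = -0.5 + 2.0 * b / steps
        u = (u1, u2, 1.0 - u1 - u2)        # enforce f(u) = 1
        best = min(best, proj_norm(u))

print(best)  # -> 4/3, attained at u = (1/3, 1/3, 1/3)
```

The grid contains the optimal point, so the search recovers $\lambda(H,\ell_\infty^3)=4/3$ essentially exactly.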
\begin{proposition}\label{P:LinfAbs} Let $K$ be a compact in an $\mathcal{N}_{\lambda+}$-space $\my$. Then $d_n(K,\my)\le\lambda d_n^a(K)$ for all $n\in\mathbb{N}$. \end{proposition} \begin{proof} It suffices to show that for each $C>\lambda$ and $n \in \N$ we have $d_n(K,\my)\le C d_n^a(K)$. Pick $\ep>0$ so that $(1+3 \ep + \ep^2)\lambda < C$. By the definition of $d_n^a$ there exist a Banach space $\mx\supset \my$ and an $n$-dimensional subspace $\mx_n\subset\mx$ such that $E(x,\mx_n)\le (1+\ep)d_n^a(K)$ for any $x\in K$. Let $\{k_i\}\subset K$ be an $\ep\lambda d_n^a(K)$-net in $K$. Find a finite-dimensional subspace $F \subset \my$ containing $\{k_i\}$, such that there exists a projection $P:\mx\to F$ satisfying $\|P\|\le \lambda(1+\ep)$. Let $\my_n=P(\mx_n)$. Then $E(k_i,\my_n)=E(Pk_i,P\mx_n)\le (1+\ep)\lambda E(k_i,\mx_n)\le (1+\ep)^2\lambda d_n^a(K)$. Given $k\in K$, choose $k_i$ such that $||k-k_i||\le \ep\lambda d_n^a(K)$; then $$ E(k,\my_n)\le ||k-k_i||+E(k_i,\my_n)\le ((1+\ep)^2+\ep)\lambda d_n^a(K) \leq C d_n^a(K) . \, \, \, \qedhere $$ \end{proof} \begin{corollary}\label{C:LinfAbs} Let $K$ be a compact in an $\mathcal{L}_{\infty,1+}$-space $\my$. Then $d_n(K,\my)=d_n^a(K)$ for all $n\in\mathbb{N}$. \end{corollary} In this connection it is worth mentioning that all spaces of continuous functions on compacts with their $\sup$-norms are $\mathcal{L}_{\infty,1+}$-spaces, see \cite{LT73}. \begin{remark} Corollary \ref{C:LinfAbs} can be regarded as a generalization of the following result of Ismagilov \cite[Corollary of Theorem 2]{Ism74}: Let $K$ be a compact in a Banach space $\mx$ and $\mb$ be the Banach space of all bounded functions on $\ball(\mx^*)$ (the unit ball of $\mx^*$) with the $\sup$-norm. Let $i$ be the natural isometric embedding of $\mx$ into $\mb$. Then $d_n^a(K)=d_n(i(K),\mb)$. 
To get this result from Corollary \ref{C:LinfAbs} it suffices to combine the corollary with the well-known fact that $\mb$ is an $\mathcal{L}_{\infty, 1+}$-space (see \cite{LT73}). \end{remark} Do Proposition \ref{P:LinfAbs} and Corollary \ref{C:LinfAbs} characterize the ${\mathcal{N}}$-spaces and ${\mathcal{L}}_{\infty,1+}$-spaces, respectively? \begin{problem}\label{P:Nspace} Let a Banach space $\my$ be such that for some $1\le\lambda<\infty$ the condition $d_n(K,\my)\le\lambda d_n^a(K)$ holds for each compact $K\subset \my$ and each $n\in\mathbb{N}$. Does it follow that $\my$ is an $\mathcal{N}$-space? \end{problem} \begin{problem}\label{P:Linfty} Let a Banach space $\my$ be such that $d_n^a(K)=d_n(K,\my)$ for each compact $K\subset \my$ and each $n\in\mathbb{N}$. Does it follow that $\my$ is an $\mathcal{L}_{\infty,1+}$-space? \end{problem} Approaches to these questions may rely on Zippin's solution \cite{Zip81a,Zip81b,Zip84} of the close-to-isometric version of the $P_\lambda$-problem. (See \cite{Tom89} for a presentation of this result of Zippin and \cite{Zip00} for further results related to the $P_\lambda$-problem.) Corollary \ref{C:LinfAbs} can be used to estimate from above the quotient $d_k(K)/d_k^a(K)$ for an $n$-dimensional compact $K$. \begin{proposition}\label{P:nDim} Let $K$ be an $n$-dimensional compact in a Banach space $\my$. Then $d_k(K,\my)\le\sqrt{n}d_k^a(K)$ for all $k\in\mathbb{N}$. \end{proposition} \begin{proof} Every Banach space embeds isometrically into $\ell_\infty(I)$ for a suitable index set $I$, so we may consider $\my$ as a subspace of $\ell_\infty(I)$. It is easy to see that $\ell_\infty(I)$ is an $\mathcal{L}_{\infty,1+}$-space. By Corollary \ref{C:LinfAbs}, $d_k^a(K) = d_k(K, \ell_\infty(I))$ for every $k$. \medskip The inequality $d_k(K,\my)\le\sqrt{n}d_k^a(K)$ is trivially true for $k\ge n$. So let $k\in\{0,\dots,n-1\}$. Consider an arbitrary $\ep>0$. Let $\mx_k$ be a $k$-dimensional subspace of $\ell_\infty(I)$ such that $E(x,\mx_k)\le (1+\ep)d_k^a(K)$ for all $x\in K$. 
Let $P:\ell_\infty(I) \to\span[K]$ be a linear projection with norm $\le\sqrt{n}$, which exists by the Kadets-Snobar theorem \cite{KS71}, and let $\my_k=P\mx_k$. Then for all $x\in K$ we have $E(x,\my_k)=E(Px,P\mx_k)\le ||P||E(x,\mx_k)\le \sqrt{n}(1+\ep)d_k^a(K)$. \end{proof} As we already mentioned in Remark \ref{R:Optimal}, the estimate of Proposition \ref{P:nDim} is optimal up to a multiplicative constant.\medskip As a step towards the solution of Problems \ref{P:Linfty} and \ref{P:Nspace} we find a wide class of spaces $X$ for which the quotients $d_n(K,X)/d_n^a(K)$ can be arbitrarily large. This is the subject of Sections \ref{s:vr} and \ref{s:ratio}. \section{Affine widths, geometry, and injectivity}\label{s:affine} While dealing with arbitrary convex (not necessarily centrally symmetric) sets, it is convenient to use affine subspaces for approximation (see e.g.~\cite{AO}). \begin{definition}\label{D:aff_widths} {\rm Let $K$ be a compact in a Banach space $Y$ and $n\in\N \cup \{0\}$. The $n$-th {\it affine width} $\affd_n(K)$ of $K$ is set to be $\inf_Z \sup_{x \in K} E(x,Z)$, where the infimum runs over all affine subspaces $Z \subset Y$ of dimension not exceeding $n$. The $n$-th {\it absolute affine width} $\affd^a_n(K)$ of $K$ is defined by $\affd^a_n(K)=\inf_X \affd_n(K,X)$, where the $\inf$ is over all Banach spaces $X$ containing $Y$ as a subspace.} \end{definition} It is clear that $\affd^a_n(K) \leq \affd_n(K,X)$, and the equality is attained if $X$ is $1$-injective. Moreover (see \cite[Section 6.2]{AO}), $$ d_n(K) \geq \affd_n(K) \geq d_{n+1}(K \cup (-K)) . $$ Furthermore, $d_n(K) = \affd_n(K)$ if $K$ is centrally symmetric. The affine width $\affd_0$ has been considered previously. To summarize these results, we recall a few definitions. 
\begin{definition}\label{D:Jung} {\rm For a bounded subset $K$ of a Banach space $\my$, define its {\it diameter} $D(K)$ and {\it radius} $R(K)$ by setting $$ D(K) = \sup_{a,b \in K} \|a-b\| , \, \, \, R(K) = \inf_{y \in \my} \sup_{a \in K} \|a-y\| $$ (that is, $R(K)$ is the infimum of the radii of balls containing $K$). The {\it Jung constant} $J(\my)$ of a Banach space $\my$ is defined as the supremum (over bounded sets $K \subset \my$) of $2 R(K)/D(K)$.
{ "timestamp": "2012-05-02T02:01:40", "yymm": "1205", "arxiv_id": "1205.0095", "language": "en", "url": "https://arxiv.org/abs/1205.0095", "abstract": "We study the dependence of the Kolmogorov widths of a compact set on the ambient Banach space.", "subjects": "Functional Analysis (math.FA); Metric Geometry (math.MG)", "title": "Dependence of Kolmogorov widths on the ambient space", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9875683495127004, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7096610841918833 }
https://arxiv.org/abs/2011.07244
A Blaschke-Lebesgue Theorem for the Cheeger constant
In this paper we prove a new extremal property of the Reuleaux triangle: it maximizes the Cheeger constant among all bodies of (same) constant width. The proof relies on a fine analysis of the optimality conditions satisfied by an optimal Reuleaux polygon together with an explicit upper bound for the inradius of the optimal domain. As a possible perspective, we conjecture that this maximal property of the Reuleaux triangle holds for the first eigenvalue of the $p$-Laplacian for any $p\in (1,+\infty)$ (the current paper covers the case $p=1$ whereas the case $p=+\infty$ was already known).
\section{Introduction} Bodies of constant width (also named {\it orbiforms} after L. Euler) have attracted much attention in the mathematical community over the last centuries. Several surveys have been devoted to these objects, and contain an abundant literature. We refer notably to a chapter in Bonnesen-Fenchel's famous book \cite{BF}, a survey by Chakerian-Groemer in the book ``Convexity and its applications'' \cite{CG}, and the recent book by Martini-Montejano-Oliveros \cite{MMO}. In the plane, two bodies of constant width play a particular role: the disk, of course, and the Reuleaux triangle (obtained by drawing arcs of circle from each vertex of an equilateral triangle between the other two vertices). While all plane bodies of constant width have the same perimeter (this is Barbier's Theorem), they do not all have the same area, and the two extreme sets are precisely the disk (with maximal area, by the isoperimetric inequality) and the Reuleaux triangle (with minimal area). This last result is the famous Blaschke-Lebesgue Theorem, see \cite{Bla} for the proof of W. Blaschke or \cite{KM} for a more modern exposition, \cite{Le} for the original proof of H. Lebesgue, and \cite{BF}, where this proof is reproduced. Let us mention that many other proofs with very different flavours (more geometric or more analytic) appeared later, for example in \cite{Be}, \cite{CCG}, \cite{Eg}, \cite{Ga}, and \cite{Ha}. The disk and the Reuleaux triangle share these extremal properties for other geometric functionals like {\it the inradius} and {\it the circumradius}; in particular, the Reuleaux triangle minimizes the inradius among all bodies of constant width, see e.g. \cite{BF} or \cite{CG}. We believe that these extremal properties of the disk and the Reuleaux triangle hold for more complicated functionals. 
In particular, in Section \ref{conc}, we explain why we think that the Reuleaux triangle maximizes the first eigenvalue of the $p$-Laplacian (with Dirichlet boundary condition) for any $p$, $1\leq p\leq +\infty$. Note that it is well known that the disk (or the ball in any dimension) minimizes this eigenvalue for any $p$; the proof is done by spherical rearrangement. The aim of this paper is to make a first step in this direction by proving that the Reuleaux triangle maximizes the Cheeger constant among all bodies of constant width. Indeed, the Cheeger constant (defined below) can also be seen as the first eigenvalue of the $1$-Laplacian, see \cite{KaFr}. The Cheeger constant of a bounded plane domain $\Omega$ is defined as \begin{equation}\label{ch1} h(\Omega)=\min_{E\subset \Omega} \frac{P(E)}{|E|} \end{equation} where $P(E)$ is the perimeter of $E$ (defined as the perimeter in the sense of De Giorgi for measurable sets) and $|E|$ is the area of $E$. In \eqref{ch1}, the minimum is achieved as soon as $\Omega$ has a Lipschitz boundary. A set $E$ which realizes this minimum is called a {\it Cheeger set} of $\Omega$ and we denote it by $C_\Omega$. This notion, introduced by Jeff Cheeger in \cite{Che} (to obtain a geometric lower bound for the first eigenvalue of the Laplacian), has received extensive attention in recent decades. For an introductory survey on the Cheeger problem we refer for example to \cite{Pa1}. In general the Cheeger set is not unique, but it is unique if $\Omega$ is convex, see \cite{AlCa}. Moreover, for convex planar domains, there is a nice characterization of the Cheeger constant and the Cheeger set, see e.g. 
Lachand-Robert and Kawohl \cite{KLR}: the Cheeger constant reads \begin{equation}\label{defR} h(\Omega)=\frac{1}{R(\Omega)},\quad \hbox{where $R(\Omega)$ satisfies }\ |\Omega_{-R}|=\pi R^2, \end{equation} where $\partial \Omega_{-R}$ is the inner parallel set to $\partial \Omega$ at distance $R$, and the Cheeger set is $C_\Omega=\Omega_{-R(\Omega)}+B_{R(\Omega)}$ (the Minkowski sum of $\Omega_{-R(\Omega)}$ and the disk of radius $R(\Omega)$). Therefore, the main result of this paper is \begin{theorem}\label{maintheo} The Reuleaux triangle maximizes the Cheeger constant in the class of plane bodies of constant width. In other words, for any body $\Omega$ of constant width \begin{equation}\label{ineq1} h(\Omega)\leq h(\mathbb{T}) \end{equation} where $\mathbb{T}$ is the Reuleaux triangle of the same width. \end{theorem} Our strategy of proof is as follows. Without loss of generality, we work with bodies of width 1. First of all, we look at this maximization problem in the restricted class of Reuleaux polygons (with at most $2N+1$ sides). We will then generalize the result, exploiting the density of the Reuleaux polygons in the class of bodies of constant width. We begin with a simple observation on the inradius of the optimal domain: it must be small, more precisely, smaller than $r_0\simeq 0.4302$. Note that the minimal value, obtained by the Reuleaux triangle, is $r_{min}=1-1/\sqrt{3}\simeq 0.4226$. The key point to get such a precise estimate is the explicit computation of the minimal area of a body of constant width enclosed in a given annulus, which we obtained in a recent paper; see the Appendix and reference \cite{HL}. Now, in the class of Reuleaux polygons, after having proved the existence of a maximizer, we obtain optimality conditions thanks to the so-called {\it shape derivative}. For that purpose, we consider only a particular kind of perturbation allowing us to stay in the same class. 
These perturbations may be defined for any Reuleaux polygon (except the Reuleaux triangle) and have been used by W. Blaschke in his proof of the Blaschke-Lebesgue Theorem. They consist in sliding one vertex on its arc, moving in this way the three corresponding arcs of the polygon in order to respect the constant width condition, and leaving all the other arcs unchanged. The optimality condition we get is rather complicated, but it allows us to prove, through a precise analysis of the functions involved, that the optimal domain has arcs with very similar lengths: in Theorem \ref{theolengths} we give an estimate of the ratio of the lengths of two consecutive arcs, which happens to be close to 1. To conclude, we are able to use this property of the lengths to prove that the inradius of such a Reuleaux polygon must be larger than $r_0$, first with a general proof in the case $N\geq 7$, then for all the remaining values $N=2,3,4,5,6$ by a simple analysis. This proves that the optimal Reuleaux polygon cannot have more than 3 sides. \medskip In this paper, we define $\mathcal{B}^1$ as the class of plane bodies of constant width $1$ and $\mathcal{B}_N^1$ as the subclass of Reuleaux polygons with (at most) $2N+1$ sides. Throughout the paper we will always take the origin at the center of the incircle. \section{Existence and a first optimality condition} \subsection{Existence} First of all, we show that the functional $h$ is bounded above in $\mathcal B^1$ by explicit bounds. A first upper bound comes from two classical theorems: the Barbier Theorem (see, e.g. \cite{CG}) and the Blaschke-Lebesgue Theorem (see, e.g. \cite{Bla}). The former states that the perimeter of any plane body of constant width $1$ is $\pi$, the latter asserts that the Reuleaux triangle minimizes the area among plane bodies of constant width. 
By definition of $h$, we immediately get \begin{equation}\label{harea} h(\Omega)\leq \frac{\pi}{|\Omega|} \leq \frac{\pi}{|\mathbb{T}|}=\frac{2\pi}{\pi-\sqrt{3}}\sim 4.4576. \end{equation} Another possible strategy to get the boundedness is to exploit the monotonicity of $h$ with respect to inclusion, together with the fact that $\mathbb T$ minimizes the inradius $r(\Omega)$ in $\mathcal B^1$ (see, e.g. \cite{BF}): \begin{equation}\label{hrho} h(\Omega) \leq h(B(0,r(\Omega))) = \frac{2}{r(\Omega)}\leq \frac{2}{r(\mathbb{T})}=\frac{2}{1-1/\sqrt{3}}\sim 4.732. \end{equation} \begin{proposition}\label{exist} The functional $h$ admits a maximizer in $\mathcal B^1$. \end{proposition} \begin{proof} In \eqref{harea} (or \eqref{hrho}) we have shown that $h$ is bounded above in the class. Therefore, its supremum is finite. Let $\Omega_n$ be a maximizing sequence. Since the elements of $\mathcal B^1$ are convex bodies with prescribed constant width, we infer that they can all be enclosed in a compact set. Therefore, by the Blaschke selection theorem, up to a subsequence (not relabeled), $\Omega_n\to \Omega^*$ with respect to the Hausdorff metric, for some convex body $\Omega^*$. Now it is classical that the class $\mathcal B^1$ is closed for the Hausdorff metric (Hausdorff convergence is equivalent to uniform convergence of the support functions), thus $\Omega^*\in \mathcal B^1$. To conclude, we exploit the continuity of $h$ with respect to the Hausdorff metric. This is proved, e.g., in \cite[Proposition 3.1]{Pa2}. \end{proof} Actually, the same existence result can be proved in the subclass $\mathcal B_N^1$ of Reuleaux polygons with at most $2N+1$ sides. \begin{proposition}\label{existN} For every $N\in \mathbb N$, the functional $h$ admits a maximizer in $\mathcal B^1_N$. 
\end{proposition} \begin{proof} Arguing as in the proof of Proposition \ref{exist}, the statement follows by combining the boundedness of $h$ from above, the compactness of $\mathcal B^1_N$ with respect to the Hausdorff metric (see \cite[Proposition 2.2]{KM}), and the continuity of $h$ with respect to the Hausdorff metric. \end{proof} \subsection{The Cheeger constant of a Reuleaux triangle} In this paragraph we compute $h(\mathbb T)$ using the implicit formula \eqref{defR}. We recall that the boundary of the Reuleaux triangle is formed by three arcs of circle of radius 1 and arc length $\pi/3$, centered at three boundary points $P_1$, $P_2$, and $P_3$. Without loss of generality, we choose the orientation in such a way that \begin{equation}\label{P123} P_1=\frac{1}{\sqrt{3}}\,e^{i 11 \pi /6},\quad P_2=\frac{1}{\sqrt{3}}\,e^{i \pi /2},\quad P_3= \frac{1}{\sqrt{3}}\,e^{i 7 \pi /6}. \end{equation} Given an arbitrary $0<R<1$, the boundary of the inner parallel set $\Omega_{-R}$ is made of three arcs of circle, centered at the $P_i$, with radius $1-R$. They meet at three points $Q_i$, $i=1,2,3$, which, by symmetry, lie on the segments $P_iO$, where $O$ is the origin. \begin{figure}[h] \begin{center} {\includegraphics[height=4truecm] {triangle} \quad \quad \quad \includegraphics[height=4truecm] {CT} } \end{center} \caption{{\it Left: the Reuleaux triangle and an inner parallel set. Right: the Cheeger set of the Reuleaux triangle and the inner parallel set.}}\label{fig-parallel3} \end{figure} In order to determine the area of the inner parallel set, we need to compute the following objects: the angle $\alpha$ such that $Q_2=P_1+(1-R) e^{i\alpha}$, the distance $y:=|OQ_2|$, and the angle $j:=\widehat{Q_2P_1Q_3}$ (see also Fig. \ref{fig-parallel3}). Recalling formulas \eqref{P123} and imposing that the horizontal coordinate of $Q_2$ is zero, we get $$ \alpha=\arccos \left(- \frac{1}{2(1-R)}\right). 
$$ Similarly, evaluating the vertical component of $Q_2$, we obtain $$ y=(1-R)\sin\alpha -\frac{1}{2\sqrt{3}}\,. $$ Finally, it is immediate to check that $j=2(5\pi/6-\alpha)$. Let us now compute the area. Connecting each $Q_i$ with the origin, the inner parallel set $\Omega_{-R}$ is divided into three parts of equal area, and we have $$ |\Omega_{-R}|=\frac{3}{2} \left[ \frac{\sqrt{3}}{2}y^2 + (1-R)^2(j - \sin j)\right]. $$ Imposing \eqref{defR}, namely that $|\Omega_{-R}|=\pi R^2$, we find \begin{equation}\label{Rtriangle} 0.22802 \leq R=R(\mathbb{T}) \leq 0.22803, \end{equation} implying \begin{equation}\label{htriangle} h(\mathbb{T})\geq 4.3853. \end{equation} \subsection{A first optimality condition} The knowledge of $r(\mathbb T)$ and the computation of $h(\mathbb T)$ and $R(\mathbb T)$ allow us to get some necessary conditions on the values of the functionals $r$ and $R$ for maximizers. \begin{proposition}\label{propr0} Let $\Omega^*$ be a maximizer for $h$ in $\mathcal B^1$. Then \begin{eqnarray} 0.21132 \leq \frac{r(\mathbb T)}{2} \leq &\!\!\!\! R(\Omega^*)\!\!\!\!&\leq R(\mathbb T) \leq 0.22803, \label{estimatesR} \\ \smallskip\notag \\ 0.4226 \leq r(\mathbb T) \leq & \!\!\!\!r(\Omega^*)\!\!\!\!&\leq r_0:=0.4302. \label{r0} \end{eqnarray} \end{proposition} \begin{proof} Let us start with $R$. By definition, $R(\Omega^*)=1/h(\Omega^*)\leq 1/h(\mathbb T) = R(\mathbb T)$. On the other hand, exploiting \eqref{hrho}, we get $R(\Omega^*)\geq r(\mathbb T)/2=(1-1/\sqrt{3})/2$. These inequalities, together with \eqref{Rtriangle}, prove \eqref{estimatesR}. As already mentioned, the proof of the minimality of $\mathbb T$ for the inradius can be found in \cite{BF}, in particular $r(\mathbb T)\leq r(\Omega^*)$. 
In order to prove the upper bound for $r(\Omega^*)$, we introduce the auxiliary function \begin{equation}\label{A} \begin{array}{lll} \mathcal A:[1-1/\sqrt{3}; 1/2] & \longrightarrow \mathbb R^+ \\ &r \mapsto \mathcal A(r):=\min \left\{|\Omega|\ :\ \Omega\in \mathcal B^1,\ r(\Omega)=r \right\}. \end{array} \end{equation} In other words, $\mathcal A$ associates to $r$ the minimal area of a shape in $\mathcal B^1$ with prescribed inradius. Note that the endpoints of the domain of $\mathcal A$ are the minimal and maximal inradius of shapes in $\mathcal B^1$. The properties of $\mathcal A$ and of the optimal shapes are investigated in \cite{HL}. For the benefit of the reader, the main facts are gathered in the Appendix, in the last section of the paper. In view of definition \eqref{A} of $\mathcal A$, for every shape $\Omega$ in the class, we have $|\Omega| \geq \mathcal A(r(\Omega))$, so that, arguing as in \eqref{harea}, \begin{equation}\label{a1} h(\Omega)\leq \frac{\pi}{\mathcal A(r(\Omega))}. \end{equation} On the other hand, for $\Omega^*$ maximizer, there holds $h(\Omega^*)\geq h(\mathbb T)$. This fact, combined with \eqref{a1}, gives $$ \mathcal A(r(\Omega^*))\leq \frac{\pi}{h(\mathbb T)}. $$ Since $\mathcal A$ is strictly increasing, we infer that $$ r(\Omega^*) \leq \mathcal A^{-1} \left( \frac{\pi}{h(\mathbb T)}\right) <r_0:=0.4302, $$ concluding the proof. \end{proof} \section{Optimality conditions in the class of Reuleaux polygons} In this section we write a family of optimality conditions in the class of Reuleaux polygons, namely for the study of the maximization of $h$ in $\mathcal B_N^1$. To this aim, we need to fix some definitions. \subsection{Reuleaux polygons} The boundary of a Reuleaux polygon $\Omega$ of width 1 is made of an odd number of arcs of radius 1, centered at boundary points $P_k$, $k=1,\ldots, 2N+1$, for some $N\in \mathbb N$. Notice that in this case $\Omega \in \mathcal B^1_M$ for every $M\geq N$. 
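The arc structure of a Reuleaux polygon is easy to instantiate numerically. The sketch below (ours, not from the paper; it labels the vertices by their cyclic position around the polygon, which differs from the labeling used in the text) builds the regular Reuleaux $(2N+1)$-gon of width $1$ and checks that all its defining chords have length $1$ and that its perimeter equals $\pi$, in accordance with Barbier's theorem.

```python
import math

def reuleaux_vertices(N):
    # Vertices of the regular Reuleaux polygon with n = 2N+1 sides and width 1:
    # the regular n-gon whose long diagonals (N steps apart) have length 1.
    n = 2 * N + 1
    rho = 0.5 / math.sin(math.pi * N / n)    # circumradius of the n-gon
    return [(rho * math.cos(2.0 * math.pi * k / n),
             rho * math.sin(2.0 * math.pi * k / n)) for k in range(n)]

def arc_angle(P, A, B):
    # Opening angle at the center P of the boundary arc joining A and B.
    a = math.atan2(A[1] - P[1], A[0] - P[0])
    b = math.atan2(B[1] - P[1], B[0] - P[0])
    return abs(math.remainder(a - b, 2.0 * math.pi))

N = 3                                        # a regular Reuleaux heptagon
n = 2 * N + 1
V = reuleaux_vertices(N)

# The arc opposite to V[k] is centered at V[k] and joins V[k+N] and V[k+N+1],
# both at distance 1 (the width) from V[k].
chords = [math.dist(V[k], V[(k + N) % n]) for k in range(n)]
perimeter = sum(arc_angle(V[k], V[(k + N) % n], V[(k + N + 1) % n])
                for k in range(n))           # radius-1 arcs: length = angle

print(min(chords), max(chords), perimeter)   # chords ~ 1, perimeter ~ pi
```

By the inscribed angle theorem each boundary arc has opening $\pi/n$, so the $n$ arcs of radius $1$ sum to perimeter $\pi$, independently of $N$.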
The boundary arc centered at $P_k$ is denoted by $\gamma_k$ and is parametrized by \begin{equation}\label{param} \gamma_k:=\{P_k + e^{is}\ :\ s\in [\alpha_k, \beta_k]\}, \end{equation} for some pair of angles $\alpha_k, \beta_k$. We identify here the complex number $e^{is}$ with the point $(\cos s, \sin s)\in \mathbb R^2$. For brevity, we set \begin{equation}\label{defj} j_k:=\mathcal H^1(\gamma_k)\quad \mbox{(the length of $\gamma_k$)}. \end{equation} The vertices are ordered as follows: the points following and preceding $P_k$ are $$ P_{k+1}=P_k + e^{i\alpha_k}\quad \hbox{and}\quad P_{k-1}=P_{k} + e^{i\beta_k}, $$ respectively. Accordingly, the angles satisfy $$ \beta_{k+1}= \alpha_k +\pi \,\quad \hbox{mod }2\pi. $$ The concatenation of the parametrizations of the arcs provides a parametrization of the boundary of the Reuleaux polygon in the counterclockwise sense: the order is $\gamma_{2N+1}$, $\gamma_{2N-1}$, $\ldots$, $\gamma_{1}$, $\gamma_{2N}$, $\gamma_{2N-2}$, $\ldots$, $\gamma_2$, namely first the arcs with odd label followed by the arcs with even label, see e.g., Fig. \ref{figR}. \begin{figure}[h] \begin{center} {\includegraphics[height=4.5truecm] {figR}} \end{center} \caption{{\it Notation of vertices, arcs, and angles for the parametrization of a Reuleaux heptagon.}}\label{figR} \end{figure} \subsection{The Cheeger set of a Reuleaux polygon}\label{ssR} Let $\Omega$ be a Reuleaux polygon. According to \cite{KLR}, the boundary of the Cheeger set $C_\Omega$ is the union of a (non empty) portion of $\partial \Omega$ and arcs of circle of radius $R:=R(\Omega)$. Moreover, the arcs of circle meet $\partial \Omega$ tangentially. In view of the geometry of $\Omega$, we infer that the intersection $\partial C_\Omega \cap \partial \Omega$ is the union of arcs of circle of radius 1 of the form $\gamma_\ell':=\partial C_\Omega \cap \gamma_\ell$. 
We parametrize them as follows: \begin{equation}\label{arcprime} \gamma_\ell'= \{P_\ell+ e^{is}\ :\ s\in [\alpha_\ell', \beta_\ell']\}, \end{equation} for suitable $\ell$s and $\alpha_\ell\leq \alpha_\ell'\leq \beta_\ell'\leq \beta_\ell$. Notice that, a priori, there might be an index for which $\gamma_\ell'=\emptyset$. \medskip We now show that the fact that the ``free part'' of the boundary of $C_\Omega$ meets $\partial \Omega$ tangentially entails a relation among the contact angles, the lengths of the arcs, and $R$. More precisely, let us assume that $\partial C_\Omega$ intersects two consecutive arcs: $\gamma_\ell$ and $\gamma_{\ell-2}$. In a neighborhood of their common point $P_{\ell-1}$, the boundary of the Cheeger set is the concatenation of $\gamma_\ell'$, the arc of circle $$ \{ Q + R e^{is}\ :\ s\in [\beta_\ell', \alpha_{\ell-2}']\}, $$ and $\gamma_{\ell-2}'$. The point $Q$ is the intersection of the segments joining $P_\ell$ with the contact point $P_\ell + e^{i \beta_\ell'}$ and $P_{\ell-2}$ with $P_{\ell-2}+e^{i\alpha_{\ell-2}'}$. Let $M$ denote the midpoint of the segment $P_\ell P_{\ell-2}$. This structure is summarized in Fig. \ref{ballR}. \begin{figure}[h] \begin{center} {\includegraphics[height=5truecm] {ballR}} \end{center} \caption{{\it The boundary of the Cheeger set in a neighborhood of $P_{\ell-1}$.}}\label{ballR} \end{figure} Note that the angle $\widehat{P_\ell P_{\ell-1} P_{\ell-2}}$ is equal to $\alpha_{\ell-2}-\beta_\ell= j_{\ell-1}$. Let us write the point $Q$ in two different ways: \begin{equation}\label{systemQ} \left\{ \begin{array}{lll} Q=P_\ell + (1-R)e^{i \beta_\ell'} \\ Q= P_\ell + e^{i\beta_\ell} + |QP_{\ell-1}|e^{i (\beta_\ell+\pi + j_{\ell-1}/2)}. \end{array} \right. \end{equation} In order to compute $|QP_{\ell-1}|$, let us introduce the auxiliary function \begin{equation}\label{defU} U(x):= \arcsin (\sin(x)/\sqrt{a}), \end{equation} where $a:=(1-R)^2$.
According to this notation, we infer that the angle $\widehat {P_\ell Q M}=\widehat{M Q P_{\ell-2}}$ is nothing but $U(j_{\ell-1}/2)$. Therefore, we may write $$ |QP_{\ell-1}|=|MP_{\ell-1}| - |MQ|= \cos(j_{\ell-1}/2) - \sqrt{a}\cos(U(j_{\ell-1}/2)). $$ By combining the previous expression with \eqref{systemQ}, we conclude that \begin{equation}\label{proj1} \left\{ \begin{array}{lll} (1-R)\cos(\beta_\ell')&=\cos(\beta_\ell) - \left[\cos(j_{\ell-1}/2) - \sqrt{a}\cos(U(j_{\ell-1}/2))\right]\cos(\beta_\ell+j_{\ell-1}/2) \\ (1-R)\sin(\beta_\ell')&=\sin(\beta_\ell) -\left[\cos(j_{\ell-1}/2) - \sqrt{a}\cos(U(j_{\ell-1}/2))\right]\sin(\beta_\ell+j_{\ell-1}/2). \end{array} \right. \end{equation} Similarly, exploiting the fact that $$ \left\{ \begin{array}{lll} Q=P_{\ell-2} + (1-R)e^{i \alpha_{\ell-2}'} \\ Q= P_{\ell-2} + e^{i\alpha_{\ell-2}} + |QP_{\ell-1}|e^{i (\alpha_{\ell-2}+\pi - j_{\ell-1}/2)} \end{array} \right. $$ we get \begin{equation}\label{proj2} \left\{ \begin{array}{lll} (1-R)\cos(\alpha_{\ell-2}')&=\cos(\alpha_{\ell-2}) - \left[\cos(j_{\ell-1}/2) - \sqrt{a}\cos(U(j_{\ell-1}/2))\right]\cos(\alpha_{\ell-2} - j_{\ell-1}/2) \\ (1-R)\sin(\alpha_{\ell-2}')&=\sin(\alpha_{\ell-2}) -\left[\cos(j_{\ell-1}/2) - \sqrt{a}\cos(U(j_{\ell-1}/2))\right]\sin(\alpha_{\ell-2} - j_{\ell-1}/2). \end{array} \right. \end{equation} \subsection{Blaschke deformations} We now introduce a family of deformations in the class of Reuleaux polygons of width 1, which allow us to connect any pair of elements in a continuous way (with respect to the complementary Hausdorff distance), staying in the class. This notion has been introduced by W. Blaschke in \cite{Bla} and analysed by Kupitz and Martini in \cite{KM}. \begin{definition}\label{def-BD} Let $\Omega$ be a Reuleaux polygon with $2N+1$ sides. Let $k$ be one of the indexes in $\{1,\ldots, 2N+1\}$. A \emph{Blaschke deformation} acts by moving the point $P_{k}$ on the arc $\gamma_{k-1}$, increasing or decreasing the arc length.
Consequently, the point $P_{k+1}$ moves and the arcs $\gamma_{k}$, $\gamma_{k+1}$, and $\gamma_{k+2}$ are deformed, as in Fig. \ref{fig-def}. We say that a Blaschke deformation is \emph{small} if the arc length of $\gamma_{k-1}$ is changed by some $\varepsilon\in \mathbb R$, small in modulus. \end{definition} \begin{figure}[h] \begin{center} {\includegraphics[height=4.5truecm] {bdef}} \end{center} \caption{{\it A Blaschke deformation of a Reuleaux heptagon which moves $P_k$ on $\gamma_{k-1}$ changing $\alpha_{k-1}$ into $\alpha_{k-1}^\varepsilon:=\alpha_{k-1}+\varepsilon$, with $\varepsilon>0$ small.}}\label{fig-def} \end{figure} Let $\Omega_\varepsilon$ denote the Reuleaux polygon obtained from $\Omega$ after a small Blaschke deformation of parameter $\varepsilon$, moving $P_k$. When $\varepsilon$ is infinitesimal, $\Omega_\varepsilon$ can be written as the image of a small perturbation of the identity: $$ \Omega_\varepsilon = \phi_\varepsilon (\Omega),\quad \phi_\varepsilon (x)= x + \varepsilon V(x) + o(|\varepsilon|), $$ for a suitable vector field $V$. The behavior of $V$ on the boundary is described in \cite[\S 2.1, formulas (2.5) and (2.6)]{HL}; in particular, adopting the parametrization \eqref{param} of the boundary arcs, we have: \begin{equation}\label{Vn} V\cdot n = \left\{ \begin{array}{lll} \sin(s-\alpha_{k-1}) \quad & \hbox{on }\gamma_k \\ \displaystyle{-\frac{\sin j_k}{\sin j_{k+1}}\sin(s-\alpha_{k+1})}\quad & \hbox{on }\gamma_{k+1} \\ 0 & \hbox{else}, \end{array} \right. \end{equation} where, according to \eqref{defj}, $j_k$ and $j_{k+1}$ are the lengths of $\gamma_k$ and $\gamma_{k+1}$, respectively. \subsection{The first order shape derivative of $h$ with respect to Blaschke deformations} In order to derive optimality conditions, a classical idea is to impose that the first order shape derivative of $h$ at a critical Reuleaux polygon vanishes for every small deformation which preserves the constraints of $\mathcal B^1_N$.
For a generic convex set $\Omega$, denoting by $C_\Omega$ its (unique, see \cite{AlCa}) Cheeger set, the first order shape derivative of $h$ at $\Omega$ in direction $V\in C^1(\mathbb R^2;\mathbb R^2)$ reads (see \cite{PaSa}) \begin{equation}\label{hprime} \frac{\mathrm{d} h}{\mathrm{d} V}(\Omega) :=\lim_{\varepsilon \to 0}\frac{h((I + \varepsilon V)(\Omega) )- h(\Omega)}{\varepsilon}=\frac{1}{|C_\Omega|} \int_{\partial C_\Omega \cap \partial \Omega} (\mathcal C_\Omega - h(\Omega)) V\cdot n\, \mathrm{d} \mathcal H^{1}, \end{equation} where $I$ is the identity map and $\mathcal C_\Omega$ is the curvature. A consequence of this formula is that the Cheeger set of a maximizer in $\mathcal B_N^1$ intersects all the boundary arcs. \begin{proposition}\label{boundary} Let $\Omega^*$ be a maximizer for $h$ in $\mathcal B_N^1$. Then $\partial C_{\Omega^*} \cap \gamma_\ell \neq \emptyset$ for every $\ell$. \end{proposition} \begin{proof} Write $\Omega:=\Omega^*$ for brevity, and let us consider an infinitesimal Blaschke deformation which moves the vertex $P_k$ (see Definition \ref{def-BD}). Since $\Omega$ is a maximizer, and in particular a critical shape, the first order shape derivative of $h$ at $\Omega$ with respect to this deformation is zero. In formulas, exploiting \eqref{hprime}, \eqref{Vn}, and the fact that the curvature $\mathcal C_\Omega$ is equal to 1 on $\partial \Omega \cap \partial C_\Omega$, we derive the following optimality condition: \begin{equation}\label{op1} 0 = \int_{\partial C_\Omega \cap \gamma_k } \sin(s-\alpha_{k-1})\,\mathrm{d} s -\frac{\sin j_k}{\sin j_{k+1}} \int_{\partial C_\Omega \cap \gamma_{k+1}} \sin (s-\alpha_{k+1})\, \mathrm{d} s. \end{equation} Looking at \eqref{op1}, we see that if the Cheeger set does not meet the arc $\gamma_k$, the corresponding integral over $\partial C_\Omega \cap \gamma_k$ is zero, therefore the other integral has to be zero, meaning that either the Cheeger set does not meet $\gamma_{k+1}$ or the intersection is a single point.
Repeating this argument, we obtain that $\partial C_\Omega \cap \gamma_\ell$ is either empty or a singleton, for any arc $\gamma_\ell$ on the boundary. Let us prove that this is impossible (this is a general fact for Cheeger sets). Since the free parts of $C_\Omega$ are arcs of circle of radius $R(\Omega)$, if the Cheeger set touches the boundary of $\Omega$ only at singletons, we would have that the curvature $\mathcal{C}_{C_\Omega}$ of the Cheeger set is equal to $1/R(\Omega)$ almost everywhere. Thus, by the Gauss-Bonnet formula: $$\mathcal{H}^1(\partial C_\Omega)=\int_{\partial C_\Omega} \mathcal{C}_{C_\Omega} X\cdot n=\frac{1}{R(\Omega)} \int_{\partial C_\Omega} X\cdot n = \frac{2|C_\Omega|}{R(\Omega)},$$ which would yield $$h(\Omega)= \frac{\mathcal{H}^1(\partial C_\Omega)}{|C_\Omega|}=\frac{2}{R(\Omega)},$$ in contradiction with \eqref{defR}. \end{proof} Taking $\Omega$ a Reuleaux polygon and $V$ inducing an arbitrary Blaschke deformation, we obtain a family of optimality conditions. In order to state the result, let us introduce the following auxiliary functions: \begin{eqnarray} && G(x):=\sin^2 (x) + \sqrt{a} \cos(x) \cos(U(x)) =\sin^2 (x) + \cos(x) \sqrt{a-\sin^2(x)},\label{defg} \\ && F(x,y):=\sqrt{a}\cos(2x+y-U(y)), \label{defF} \\ && H(x,y,z):=\sin(2z)[G(x)-F(y,z)],\label{defH} \end{eqnarray} where $a$ is a constant depending on $\Omega$, $a:=(1-R(\Omega))^2$, and $U$ is the function defined in \eqref{defU}. \begin{proposition}\label{propoc} Let $\Omega$ be a critical shape for $h$ in the class of Reuleaux polygons. Then, for every $k$, \begin{equation}\label{oc3} H\left( \frac{j_{k-1}}{2}, \frac{j_{k}}{2}, \frac{j_{k+1}}{2} \right)= H\left( \frac{j_{k+2}}{2}, \frac{j_{k+1}}{2}, \frac{j_{k}}{2} \right). \end{equation} \end{proposition} \begin{proof} In the following, for brevity, the functional $R(\Omega)$ will be denoted by $R$. We use again Formula \eqref{op1}.
In view of the parametrization \eqref{arcprime} of $\partial C_\Omega\cap \partial \Omega$, we get \begin{align*} 0 & = \int_{\alpha_k' }^{\beta_k'} \sin(s-\alpha_{k-1})\,\mathrm{d} s -\frac{\sin j_k}{\sin j_{k+1}} \int_{\alpha_{k+1}'}^{\beta_{k+1}'} \sin (s-\alpha_{k+1})\, \mathrm{d} s \\ & = -\cos(\beta_k'-\alpha_{k-1}) + \cos(\alpha_k'-\alpha_{k-1}) + \frac{\sin j_k}{\sin j_{k+1}} \left[\cos(\beta_{k+1}'-\alpha_{k+1}) - \cos (\alpha_{k+1}' -\alpha_{k+1}) \right] \\ & = \cos(\beta_k'-\beta_k) - \cos(\alpha_k'-\beta_k) + \frac{\sin j_k}{\sin j_{k+1}} \left[\cos(\beta_{k+1}'-\alpha_{k+1}) - \cos (\alpha_{k+1}' -\alpha_{k+1}) \right], \end{align*} where in the last equality we have used $\alpha_{k-1}\equiv\beta_k + \pi$, modulo $2\pi$. Rearranging the terms, we rewrite the optimality condition as: \begin{equation}\label{oc1} \sin (j_{k+1}) [\cos (\beta_k-\beta'_k) - \cos(\beta_k-\alpha'_k)] = \sin(j_k) [\cos (\alpha'_{k+1}-\alpha_{k+1}) - \cos(\beta'_{k+1}-\alpha_{k+1})]. \end{equation} Exploiting \eqref{proj1} with $\ell=k$, we obtain \begin{align} \cos(\beta_k-\beta_k') &= \cos(\beta_k)\cos(\beta_k')+\sin(\beta_k)\sin(\beta_k')\notag \\ & = \left[\sin^2(j_{k-1}/2) +\sqrt{a} \cos(j_{k-1}/2) \cos(U(j_{k-1}/2))\right]/\sqrt{a}\notag \\ & =G\left(\frac{j_{k-1}}{2}\right)/\sqrt{a},\label{term1} \end{align} where $G$ is the function defined in \eqref{defg} and $a:=(1-R)^2$. Similarly, taking $\ell=k+2$ in \eqref{proj2}, we get \begin{align} \cos(\beta_k-\alpha_k') &= \left[ \cos(j_k) - \left( \cos(j_{k+1}/2)-\sqrt{a} \cos(U(j_{k+1}/2))\right)\cos(j_k+j_{k+1}/2) \right]/\sqrt{a}\notag \\ &= \left[\sin(j_{k+1}/2)\sin (j_k + j_{k+1}/2) + \sqrt{a} \cos(U(j_{k+1}/2))\cos(j_k+j_{k+1}/2) \right] /\sqrt{a}\notag \\ & =F\left(\frac{j_k}{2}, \frac{j_{k+1}}{2}\right)/\sqrt{a},\label{term2} \end{align} where $F$ is the function introduced in \eqref{defF}.
Here, for the last equality, we have used $\cos(x) \cos(y)=\cos(x-y)-\sin(x)\sin(y)$, together with the fact $\sqrt{a}\sin (U(x))=\sin(x)$. By combining \eqref{term1} with \eqref{term2}, we may rewrite the left-hand side of \eqref{oc1} as $$ H\left( \frac{j_{k-1}}{2}, \frac{j_{k}}{2}, \frac{j_{k+1}}{2} \right)/\sqrt{a}. $$ The same strategy adopted for the left-hand side of \eqref{oc1} also applies to the right-hand side, giving $\sin(j_k)\left[G\left(\frac{j_{k+2}}{2}\right)-F\left(\frac{j_{k+1}}{2}, \frac{j_{k}}{2}\right)\right]/\sqrt{a} = H\left( \frac{j_{k+2}}{2}, \frac{j_{k+1}}{2}, \frac{j_{k}}{2} \right)/\sqrt{a}$. By multiplying both sides by $\sqrt{a}$, we get \eqref{oc3}. \end{proof} \begin{remark} The optimality condition \eqref{oc3} is obviously satisfied by all the regular Reuleaux polygons. Actually, we believe that only the regular Reuleaux polygons are critical points for the Cheeger constant, and we give some support to this claim in Remark \ref{rem3.6}. If we were able to prove this fact, the proof of our main theorem would be much simpler, as we could then explicitly compute the Cheeger constant of any regular Reuleaux polygon. Let us also refer to the recent paper \cite{Phi}, where a similar comparison is carried out for the first Dirichlet eigenvalue and for the torsion among regular Reuleaux polygons, showing that the Reuleaux triangle is always the optimal domain in this restricted class. \end{remark} \subsection{Analysis of the optimality conditions} Throughout the subsection $\Omega$ will denote a maximizer for $h$ in $\mathcal B^1_M$ for some $M$, whose boundary is made of $2N+1$ arcs of lengths $j_1, \ldots, j_{2N+1}$. We will use the optimality conditions stated in Propositions \ref{boundary} and \ref{propoc} to obtain some information on the lengths of consecutive arcs.
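The chain of trigonometric identities used in the proof above can be sanity-checked numerically. The following Python sketch (with an arbitrary test value of $R$ inside the admissible range) verifies the identity $\sqrt a\,\sin(U(x))=\sin(x)$, the closed form of $G$ in \eqref{defg}, the rewriting of \eqref{term2} in terms of $F$, and the geometric consistency of the two representations of $Q$ in \eqref{systemQ}.

```python
import cmath
import math

R = 0.22                         # arbitrary test value in [0.21132, 0.22803]
a = (1.0 - R) ** 2
sa = math.sqrt(a)                # sqrt(a) = 1 - R

U = lambda x: math.asin(math.sin(x) / sa)                     # eq. (defU)
G = lambda x: math.sin(x)**2 + sa * math.cos(x) * math.cos(U(x))
F = lambda x, y: sa * math.cos(2 * x + y - U(y))              # eq. (defF)

for jk in (0.3, 0.5, 0.9):            # sample arc lengths
    for j1 in (0.2, 0.6, 1.0):
        x, y = jk / 2, j1 / 2
        # sqrt(a) sin(U(y)) = sin(y), and the closed form of G in (defg)
        assert abs(sa * math.sin(U(y)) - math.sin(y)) < 1e-12
        assert abs(G(x) - (math.sin(x)**2
                           + math.cos(x) * math.sqrt(a - math.sin(x)**2))) < 1e-12
        # the two expressions for sqrt(a) cos(beta_k - alpha_k') in (term2) agree
        lhs = math.cos(jk) - (math.cos(y) - sa * math.cos(U(y))) * math.cos(jk + y)
        assert abs(lhs - F(x, y)) < 1e-12
        # consistency of (systemQ): the point Q built from the second line
        # lies at distance 1 - R from P_ell, as required by the first line
        beta = 1.1                                   # arbitrary angle beta_ell
        m = math.cos(y) - sa * math.cos(U(y))        # |Q P_{ell-1}|, with j_{ell-1} = j1
        Q = cmath.exp(1j * beta) + m * cmath.exp(1j * (beta + math.pi + y))
        assert abs(abs(Q) - (1 - R)) < 1e-12
```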
\medskip A first rough estimate can be deduced from Proposition \ref{boundary}: \begin{lemma}\label{roughesti} Two consecutive lengths $j_k$ and $j_{k+1}$ satisfy $$ 0.1339 j_{k+1} \leq j_k \leq \frac{1}{0.1339} j_{k+1}. $$ \end{lemma} \begin{proof} In subsection \ref{ssR}, we have highlighted the relation between the lengths of the arcs $\gamma_\ell'\subset \partial \Omega \cap \partial C_\Omega$ and the arcs $\gamma_\ell$. This relation, valid for every $\ell$ thanks to Proposition \ref{boundary}, can be stated as follows (see also Fig. \ref{ballR} with $\ell=k+2$): $$\alpha'_k-\alpha_k = \arcsin\left(\frac{\sin (j_{k+1}/2)}{1-R}\right)-\frac{j_{k+1}}{2}=U\left(\frac{j_{k+1}}{2}\right) - \frac{j_{k+1}}{2}.$$ Now $j_k\geq \alpha'_k-\alpha_k$, thus using standard estimates for the arcsine and sine, we obtain $$ j_k \geq \arcsin\left(\frac{1}{1-R}\sin\left(\frac{j_{k+1}}{2}\right)\right) - \frac{j_{k+1}}{2} \geq \frac{R}{2(1-R)} j_{k+1} \geq 0.1339 j_{k+1}. $$ For the last inequality, we have used $R\geq 0.21132$, see \eqref{estimatesR}. The other bound in the statement can be obtained in a similar way, by considering the difference $\beta_{k+1}-\beta'_{k+1}$. \end{proof} The optimality condition \eqref{oc3} in Proposition \ref{propoc} allows us to obtain a refined estimate. \begin{theorem}\label{theolengths} Two consecutive lengths $j_k$ and $j_{k+1}$ satisfy \begin{equation} \tau j_{k+1} \leq j_k \leq \frac{1}{\tau} j_{k+1}\ \mbox{with $\tau \geq 0.99 - 0.05 h^2$,} \end{equation} where $h$ denotes the largest length (among all arcs of the Reuleaux polygon). \end{theorem} Before showing the proof of this result, let us make some comments about its consequences.
\begin{remark}\label{rem3.6} Since the maximal length of any arc is less than $\pi/3$ (an arc of maximal length joins two points on the outercircle and is tangent to the incircle: this computation is done for example in \cite{HL}), we deduce from the theorem that $\tau\geq 0.93$, and the smaller $h$ is, the better the estimate. For example, for $h\leq 0.45$, we get $\tau \geq 0.97$. In some sense, the optimal domain is close to a regular Reuleaux polygon. Note that if the polygon has $2N+1$ sides, the smallest one has a length that is at least $\tau^N h$ (because there are at most $N-1$ arcs between the largest and the smallest if we turn in the appropriate direction). \end{remark} We can deduce from Theorem \ref{theolengths} a bound for the maximal length of an arc of an optimal Reuleaux polygon with $2N+1$ sides. \begin{proposition}\label{propmaxlength} Let $h_N$ be the maximal length of an arc of a Reuleaux polygon with $2N+1$ sides satisfying the optimality conditions. Let us denote by $\tau_N$ the bound on the ratio of two consecutive lengths provided by Theorem \ref{theolengths}. Then \begin{equation}\label{maxlength} h_N\leq \frac{(1-\tau_N)\, \pi}{1+\tau_N - 2\tau_N^{N+1}}.
\end{equation} \end{proposition} \begin{remark} Thanks to this proposition, we obtain for example the following bounds for a Reuleaux polygon with $2N+1$ sides satisfying the optimality conditions: using the fact that the right-hand side of \eqref{maxlength} is decreasing in $\tau$, and iterating Theorem \ref{theolengths} to get better ratios, we infer that the maximal length of one side $h_N^{max}$ and the minimal length of one side $h_N^{min}$ (that is computed as $\tau_N^N h_N^{max}$) satisfy: \\ \begin{center} \begin{tabular}{c|c|c|c|c} $N$ & $2N+1$ & $\tau_N$ & $h_N^{max}$ & $h_N^{min}$ \\ \hline 2 & 5 & 0.9687 & 0.6526 & 0.6123 \\ 3 & 7 & 0.9791 & 0.4652 & 0.4367 \\ 4 & 9 & 0.9834 & 0.3622 & 0.3387\\ 5 & 11 & 0.9855 & 0.2971 & 0.2762 \\ 6 & 13 & 0.9868 & 0.2522 & 0.2328 \\ 7 & 15 & 0.9875 & 0.2194 & 0.2009 \\ 8 & 17 & 0.9881 & 0.1944 & 0.1765 \\ 9 & 19 & 0.9884 & 0.1746 & 0.1572 \\ \end{tabular} \captionof{table}{Table of ratios, maximal and minimal lengths for Reuleaux polygons}\label{table1} \end{center} confirming that we are not far from regular Reuleaux polygons. \end{remark} \begin{proof}[Proof of Proposition \ref{propmaxlength}] We start from the arc of length $h_N$. Its two neighbours have a length at least $\tau_N h_N$, the next neighbours have a length at least $\tau_N^2 h_N$, and so on, up to the farthest arcs (in the enumeration), which have a length at least $\tau_N^N h_N$. Therefore, since the sum of all the lengths is equal to the perimeter $\pi$, we have the inequality $$\pi \geq h_N\left(1+2\sum_{k=1}^N \tau_N^k\right)=h_N \frac{1+\tau_N -2\tau_N^{N+1}}{1-\tau_N},$$ therefore \begin{equation}\label{majhN} h_N\leq \frac{\pi (1-\tau_N)}{1+\tau_N -2\tau_N^{N+1}}. \end{equation} \end{proof} The remaining part of the subsection is devoted to the proof of Theorem \ref{theolengths}. We start with some inequalities for the functions $U,G,F,H$ defined above, see \eqref{defU} and \eqref{defg}-\eqref{defH}.
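As a side check, the entries of Table \ref{table1} can be reproduced directly from \eqref{maxlength}; a short Python verification on a few rows (the tolerance accounts for the 4-digit rounding of the table):

```python
import math

def h_max(N, tau):
    """Right-hand side of (maxlength): upper bound for the largest arc length."""
    return math.pi * (1 - tau) / (1 + tau - 2 * tau ** (N + 1))

# (N, tau_N, h_N^max, h_N^min) for a few rows of Table 1
rows = [(2, 0.9687, 0.6526, 0.6123),
        (3, 0.9791, 0.4652, 0.4367),
        (9, 0.9884, 0.1746, 0.1572)]
for N, tau, hmax, hmin in rows:
    assert abs(h_max(N, tau) - hmax) < 5e-4
    # the minimal length is computed as tau_N^N h_N^max
    assert abs(tau ** N * h_max(N, tau) - hmin) < 5e-4
```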
We assume that our variables satisfy the following conditions: $$ x,y,z \in \left [0,\frac{\pi}{6}\right],\quad 2y\geq U(z)-z. $$ We will also use everywhere, for the higher order terms, the estimate \eqref{estimatesR}, namely $R\in [0.21132,0.22803]$. Starting from $$x-\frac{x^3}{6}\leq \sin x\leq x - 0.16439 x^3\ \;\mbox{for } 0\leq x\leq \frac{\pi}{6},$$ we get successively \begin{equation}\label{estiu} \frac{x}{1-R} + 0.1284 x^3 \leq U(x) \leq \frac{x}{1-R} + 0.1831 x^3 \end{equation} $$ 1-R - \frac{R^2x^2}{2(1-R)} - 0.05 x^4 \leq G(x) \leq 1-R - \frac{R^2x^2}{2(1-R)} - 0.02 x^4 $$ $$F(y,z)\leq 1-R-2(1-R)y^2-\frac{R^2 z^2}{2(1-R)}+2Ryz+S_1(y,z)$$ with a remainder $$S_1(y,z)= 0.52579 y^4- 0.28176 y^3z + 0.06736 y^2z^2 +0.28376 yz^3- 0.03852 z^4.$$ In the same way, $$F(y,z)\geq 1-R-2(1-R)y^2-\frac{R^2 z^2}{2(1-R)}+2Ryz+S_2(y,z)$$ with a remainder $$S_2(y,z)= 0.49223 y^4-0.30404 y^3z + 0.0403 y^2z^2 +0.19159 yz^3-0.02727 z^4.$$ Putting together these different estimates yields for $H(x,y,z)$ the following bounds: \begin{equation}\label{estiH1} H(x,y,z)\leq 4(1-R)y^2z+\frac{R^2}{1-R}(z^3-zx^2)-4Ryz^2+T_1(x,y,z) \end{equation} with a remainder which is a polynomial of degree 5 given by \begin{align*}T_1(x,y,z)= & -0.04 x^4z +0.00732 x^3z^2 + 0.04491 x^2z^3 -0.98447 y^4z \\ & +0.60808 y^3z^2 -1.94515 y^2z^3 +0.26158 yz^4 +0.01257 z^5, \end{align*} and \begin{equation}\label{estiH2} H(x,y,z)\geq 4(1-R)y^2z+\frac{R^2}{1-R}(z^3-zx^2)-4Ryz^2+T_2(x,y,z) \end{equation} with a remainder given by \begin{align*} T_2(x,y,z)=& -0.1x^4z + 0.03774 x^2z^3 - 1.05158 y^4z \\ & +0.56352 y^3z^2 -2.21639 y^2z^3 -0.004 yz^4 + 0.02293 z^5.
\end{align*} Now writing the optimality condition $H\left( \frac{j_{k-1}}{2}, \frac{j_{k}}{2}, \frac{j_{k+1}}{2} \right)- H\left( \frac{j_{k+2}}{2}, \frac{j_{k+1}}{2}, \frac{j_{k}}{2} \right)=0$ and using estimates \eqref{estiH1}, \eqref{estiH2} yields the following two inequalities \begin{equation}\label{estiH3} 0\leq j_k j_{k+1}(j_k-j_{k+1}) + \frac{R^2}{4(1-R)}\left(j_{k+1}^3-j_{k+1}j_{k-1}^2+j_kj_{k+2}^2-j_k^3\right) +E_1(j_{k-1},j_k,j_{k+1},j_{k+2})/16 \end{equation} with \begin{eqnarray*} E_1(j_{k-1},j_k,j_{k+1},j_{k+2})=-0.04 j_{k-1}^4j_{k+1}+0.00732 j_{k-1}^3j_{k+1}^2+0.04491 j_{k-1}^2j_{k+1}^3 \\ -0.02293 j_k^5 -0.98047 j_k^4j_{k+1} +2.82447 j_k^3j_{k+1}^2-0.03774 j_k^3j_{k+2}^2 \\ -2.50867j_k^2j_{k+1}^3 +1.31316 j_kj_{k+1}^4 +0.1 j_kj_{k+2}^4 +0.01257 j_{k+1}^5 \end{eqnarray*} and \begin{equation}\label{estiH4} 0\geq j_k j_{k+1}(j_k-j_{k+1}) + \frac{R^2}{4(1-R)}\left(j_{k+1}^3-j_{k+1}j_{k-1}^2+j_kj_{k+2}^2-j_k^3\right) +E_2(j_{k-1},j_k,j_{k+1},j_{k+2})/16 \end{equation} with \begin{eqnarray*} E_2(j_{k-1},j_k,j_{k+1},j_{k+2})=-0.1 j_{k-1}^4j_{k+1}+0.03774 j_{k-1}^2j_{k+1}^3 -0.01257 j_k^5\\ -1.31316 j_k^4j_{k+1} + 2.50867 j_k^3j_{k+1}^2 -0.04491 j_k^3j_{k+2}^2-2.82447 j_k^2j_{k+1}^3 \\-0.00732 j_k^2j_{k+2}^3+0.98047 j_kj_{k+1}^4 +0.04 j_kj_{k+2}^4 + 0.02293 j_{k+1}^5. \end{eqnarray*} Note that the coefficient $R^2/(4(1-R))$ satisfies \begin{equation}\label{016} 0.01415 \leq \frac{R^2}{4(1-R)} \leq 0.01684. \end{equation} We will prove Theorem \ref{theolengths} in the case $j_k\leq j_{k+1}$ by using inequality \eqref{estiH3}, which leads to a simple inequality for a polynomial of degree 3 that is easy to analyse. In the case $j_k\geq j_{k+1}$ we can proceed exactly in the same way, but using inequality \eqref{estiH4} instead. In the sequel we use the following notation: $j_{k+1}=j$ and $j_{k-1}=u j$, $j_k=t j$, $j_{k+2}=v j$. The three numbers $t,u,v$ are positive and $t\leq 1$ by assumption.
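The numerical constants entering these bounds can be verified directly; a quick Python check of the sine estimates stated above and of \eqref{016}, sampled on a grid (the grid resolution is our choice):

```python
import math

# the sine bounds used for the expansions, sampled on (0, pi/6]
for i in range(1, 201):
    x = i * (math.pi / 6) / 200
    assert x - x**3 / 6 <= math.sin(x) <= x - 0.16439 * x**3 + 1e-15

# the bracketing (016) of R^2 / (4(1-R)) on the admissible range of R
for R in (0.21132, 0.22, 0.22803):
    c = R**2 / (4 * (1 - R))
    assert 0.01415 <= c <= 0.01684
```

Note that the upper sine bound is tight precisely at $x=\pi/6$, which is why the coefficient $0.16439$ cannot be improved on this interval.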
First, we need an estimate of the remainder $E_1$, which can be written as $$ E_1(u j , tj, j, vj) =j^5\left(F_1(t,v) + F_2(u)\right)$$ with $$ F_1(t,v)= -0.02293 t^5 -0.98047 t^4 + 2.82447 t^3-0.03774 t^3 v^2 -2.50867 t^2 +1.31316 t + 0.1 t v^4 + 0.01257 $$ and $$ F_2(u)= -0.04 u ^4 + 0.00732 u^3+ 0.04491 u^2. $$ \begin{lemma}\label{estiE1} For $v\leq 1$, we have $E_1(u j , tj, j, vj)\leq 0.71653873 j^5$ (for every $t\in [0,1]$ and every $u$). \\ For $v\geq 1$, we have $E_1(u j , tj, j, vj)\leq (0.1v^4- 0.03774 v^2+ 0.65427873) j^5$ (for every $t\in [0,1]$ and every $u$). \end{lemma} \begin{proof} Since the expression containing $u$ is a polynomial of degree 4, it is straightforward to prove that the maximum of $F_2$ on $\mathbb{R}_+$ is attained for $u\in [0.8210107,0.8210108]$ and that its value is less than $0.01614873$. For $F_1(t,v)$, computing the derivative with respect to $v$, we see that this derivative is negative when $v\leq \sqrt{0.07548/0.4}\, t$ and positive after. Since it is easy to check that, for all $t\in [0,1]$, $F_1(t,0)< F_1(t,1)$, we see that, to maximize $F_1$, we must choose $v=1$ when $v\leq 1$, and keep $v$ unchanged when $v\geq 1$. Now, the derivative of $F_1$ with respect to $t$ is $$\frac{\partial F_1}{\partial t}=- 0.11465 t^4 -3.92188 t^3 + 8.4734 t^2 -5.01734 t +1.31316 - 0.11322 t^2 v^2 + 0.1 v^4.$$ The last two terms satisfy $$0.1 v^4 - 0.11322 t^2 v^2 \geq 0.1 v^4 - 0.11322 v^2 \geq -0.0133,$$ thus we are led to study a polynomial of degree 4 in $t$, and it is immediate to check that this polynomial is positive for any $t\in [0,1]$. Therefore, to maximize $F_1$ we must choose $t=1$. The conclusion follows. \end{proof} \medskip We are now in a position to prove the theorem. \begin{proof}[Proof of Theorem \ref{theolengths}] We divide the proof into two steps.
In the first step, we prove that $\tau \geq 0.92$; then, using this estimate, we get the conclusion.\\ Assuming $j_k\leq j_{k+1}$, we have to consider four different cases: \begin{itemize} \item Case 1: $j_{k-1}\geq j_k$ and $j_{k+1}\geq j_{k+2}$ \item Case 2: $j_{k-1}\leq j_k$ and $j_{k+1}\geq j_{k+2}$ \item Case 3: $j_{k-1}\leq j_k$ and $j_{k+1}\leq j_{k+2}$ \item Case 4: $j_{k-1}\geq j_k$ and $j_{k+1}\leq j_{k+2}$ \end{itemize} We start from the inequality established in \eqref{estiH3}: recalling the upper bound in \eqref{016} for $R^2/(4(1-R))$, we obtain \begin{equation}\label{estiH3bis} 0\leq t(t-1)+0.01684(1-u^2+t v^2-t^3) + \frac{E_1(uj,tj,j,vj)}{16j^3}. \end{equation} The idea is to bound it from above in each of the four cases with a polynomial $q(t)$ of degree 3 in $t$ which satisfies the following properties: \begin{equation}\label{condiq} q(0)>0,\quad q(1)>0, \quad q\ \hbox{decreasing and then increasing in }[0,1]\,,\quad q<0 \ \hbox{in }[0.1,0.92]. \end{equation} The inequality $q(t)\geq 0$, together with the estimate $t\geq 0.1339$ provided in Lemma \ref{roughesti}, gives $t>0.92$. In each case, we will have to consider a polynomial $q$, depending on three coefficients $A,B,C$, given by $q(t)= t(t-1)+0.01684(1+At-Bt^2-t^3) +C$. It is immediate to check that if $A,B,C$ satisfy \begin{equation}\label{condiqs} 0.015 \leq B\leq A\leq 1.2\ \mbox{and } \; 0<C\leq 0.0514 \end{equation} then \eqref{condiq} holds true. \smallskip \noindent {\bf Case 1:} Here $u\geq t$ and $v\leq 1$, thus \eqref{estiH3bis}, Lemma \ref{estiE1}, and the fact that $j\leq \pi/3$, give $$0\leq t(t-1) +0.01684(1-t^2+t-t^3)+0.04912.$$ This polynomial satisfies \eqref{condiqs} and then \eqref{condiq}. \smallskip \noindent {\bf Case 2:} Here by assumption $u\leq t$ and $v\leq 1$. Moreover, using Lemma \ref{roughesti}, we get $u\geq 0.1339 t$. These estimates, together with Lemma \ref{estiE1}, imply that $$ 0\leq t(t-1) +0.01684(1-0.1339^2 t^2+t-t^3)+0.04912.
$$ This polynomial satisfies \eqref{condiqs} and then \eqref{condiq}. \smallskip \noindent {\bf Case 3:} In this case, the four lengths are increasing and may belong to a sequence of increasing numbers $j_{k-1} \leq j_k \leq j_{k+1} \leq j_{k+2} \ldots \leq j_m$ with $j_m\geq j_{m+1}$. We proceed by descending induction. From Case 2, we see that $j_{m-1}\geq 0.92 j_m$. Assume by induction that $j_{k+1}\geq 0.92 j_{k+2}$. Then $v\leq 1/0.92$. To estimate $j_k$ in terms of $j_{k+1}$ we use the same technique as above with $$ 1-u^2+t v^2-t^3\leq 1-0.1339^2 t^2 + t/0.92^2 -t^3. $$ We also have to change the estimate for the remainder $E_1$: according to Lemma \ref{estiE1}, and since $1\leq v\leq1/0.92$, we have here $$ \frac{E_1(uj,tj,j,vj)}{16 j^3} \leq (0.1/0.92^4-0.03774/0.92^2+0.65427873) \frac{j^2}{16}\leq 0.051355. $$ All in all, \eqref{estiH3bis} gives $$ 0 \leq t(t-1) + 0.01684 (1-0.1339^2 t^2 + t/0.92^2 -t^3) + 0.051355. $$ This polynomial satisfies \eqref{condiqs} and then \eqref{condiq}. \smallskip \noindent {\bf Case 4:} We proceed as in the third case, by induction starting at the last number $j_m$ of the increasing sequence $j_k\leq j_{k+1} \leq j_{k+2} \ldots \leq j_m$. Here the upper bound of \eqref{estiH3bis} is $$ t(t-1) + 0.01684(1-t^2+t/0.92^2-t^3)+ 0.051355, $$ which satisfies \eqref{condiqs} and \eqref{condiq}. \medskip We now get a better estimate by using this number $0.92$, without replacing $j$ by its upper bound $\pi/3$. Since the statement is valid for any pair of consecutive lengths, coming back to $t,u,v$, we have: $t\leq 1$, $u\geq 0.92 t$, $0.92 \leq v\leq 1/0.92$. Using these stronger estimates in \eqref{estiH3bis}, we get $$ 0\leq q(t):=t(t-1) + 0.01684(1-0.92^2 t^2+t/0.92^2-t^3) + 0.04682 j^2. $$ Here the constant term is the same as in Cases 3 and 4. The polynomial satisfies \eqref{condiq}. Therefore, if we show that its larger root is at least $0.99-0.05j^2$, we are done. In other words, we shall prove that $q(0.99-0.05 j^2)\leq 0$.
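Before developing the computation, the claimed inequality can be confirmed numerically; a short Python check of $q(0.99-0.05\,j^2)<0$ on a grid of $j\in[0,\pi/3]$ (the grid size is our choice):

```python
import math

def q(t, j):
    """The final comparison polynomial, with constant term 0.04682 j^2."""
    return (t * (t - 1)
            + 0.01684 * (1 - 0.92**2 * t**2 + t / 0.92**2 - t**3)
            + 0.04682 * j**2)

# q(0.99 - 0.05 j^2, j) < 0 on the whole range 0 <= j <= pi/3
for i in range(201):
    j = i * (math.pi / 3) / 200
    assert q(0.99 - 0.05 * j**2, j) < 0
```

The margin is smallest at $j=\pi/3$, consistently with the fact that the analytic bound below is close to zero at that endpoint.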
Developing the computation, we obtain $$ q(0.99-0.05j^2)\leq 10^{-3}(-3.672 + 0.722 j^2 + 2.34 j^4 + 0.002 j^6), $$ and since the right-hand side is negative for $j\leq \pi/3$, the claim follows. \end{proof} \subsection{Inradius of a Reuleaux polygon}\label{secinradius} In this paragraph we give a general formula for the inradius of a Reuleaux polygon. \begin{definition}\label{def-contact} We say that $M\in \partial \Omega$ is a {\it contact point} if it belongs to the incircle. \end{definition} In the particular case in which there exist three contact points, two belonging to consecutive arcs and the third belonging to the opposite arc, the inradius can be easily computed. \begin{lemma}\label{r5} Let $\Omega$ be a Reuleaux polygon with $2N+1$ sides. Assume that the arcs $\gamma_1$, $\gamma_2$, and $\gamma_{2N+1}$ are tangent to the incircle. Then \begin{equation}\label{formula5} r(\Omega)=1-\frac{1}{2\cos(j_1/2)}. \end{equation} \end{lemma} \begin{proof} The standing assumptions are summarized in Fig. \ref{fig-3}. \begin{figure}[h] \begin{center} {\includegraphics[height=3.5truecm] {inradius3}} \caption{{\it The configuration under study: contact points on two consecutive arcs $\gamma_2$, $\gamma_{2N+1}$, and on the opposite arc $\gamma_1$.}}\label{fig-3} \end{center} \end{figure} Since the arcs $\gamma_1$, $\gamma_2$, and $\gamma_{2N+1}$ are tangent to the incircle, denoting by $M_1$, $M_2$, and $M_3$ the three contact points, respectively, we infer that the lengths of $OM_1$, $OM_2$, and $OM_3$ are equal to $r(\Omega)$, so that the lengths of $OP_1$, $OP_2$, and $OP_{2N+1}$ are equal to $1-r(\Omega)$. In particular, the two triangles $P_1OP_2$ and $P_1OP_{2N+1}$ are congruent. Let us consider one of the two triangles: it is isosceles, with base of length 1, legs of length $1-r(\Omega)$, and base angle of amplitude $j_1/2$. Therefore, we conclude that $$ \big(1-r(\Omega)\big) \cos(j_1/2)= \frac12. $$ This concludes the proof.
\end{proof} \begin{remark}\label{penta} The situation described in Lemma \ref{r5} always occurs for regular Reuleaux polygons (actually, in this case all the boundary arcs are tangent to the incircle) and for all the Reuleaux pentagons. \end{remark} The previous situation is a particular case: what remains true in general is the existence of three contact points which do not lie in the same half-plane (bounded by a line through the origin); what changes is the number of boundary points between pairs of contact points. In order to clarify this fact, we need to introduce the notion of {\it sector}. \begin{definition}\label{def-sec} Let $M_1,M_2,M_3$ be three contact points (labeled in the direct sense) with polar angles $t_1,t_2,t_3$, not lying in the same half-plane (bounded by a line through the origin). The segments joining these contact points with their opposite boundary points (with polar angles $t_i+\pi$) pass through the origin and identify a partition of the interval $[0,2\pi]$ into six parts, that we call {\it sectors}: $$[t_2,t_1+\pi],[t_1+\pi,t_3],[t_3,t_2+\pi],[t_2+\pi,t_1],[t_1,t_3+\pi],[t_3+\pi,t_2].$$ Here the angles are understood modulo $2\pi$. The length of the sector $[t_i, t_{i-1}+\pi]$, $i\in \mathbb Z_3$, is denoted by $u_i$ or, when no ambiguity may arise, simply by $u$. \end{definition} \begin{remark} Note that each sector $[t_i, t_{i-1}+\pi]$, $i\in \mathbb Z_3$, is coupled with the opposite sector $[t_i+\pi, t_{i-1}]$ (again, with the angles understood modulo $2\pi$), which has the same length. \end{remark} Besides the length, we associate to each sector another characteristic parameter. To fix ideas, let us consider the first sector $[t_2,t_1+\pi]$. Up to relabeling the indexes, we may assume that $M_1$ belongs to the boundary arc $\gamma_1$ centered at $P_1$. Accordingly, $M_2$ lies on the boundary arc $\gamma_{2m}$ centered at $P_{2m}$, for some $m$.
Going along the boundary in the direct sense, namely in the counterclockwise sense, between $M_2$ and $P_1$ we find $P_{2m-1}, P_{2m-3}, \ldots, P_3, P_1$; whereas between $P_{2m}$ and $M_1$ we find the vertexes $P_{2m}, P_{2m-2}, \ldots, P_4, P_2$ (see also Fig. \ref{fig-t}). \begin{figure}[h] \begin{center} {\includegraphics[height=4truecm] {t21} \quad \includegraphics[height=4truecm] {t22}} \caption{{\it Left: the families $P_1,\ldots, P_{2m}$ and $P_2,\ldots,P_{2m}$ associated to two contact points $M_1$ and $M_2$. Right: an example with $m=1$.}}\label{fig-t} \end{center} \end{figure} This leads us to define inside the sector $[t_2,t_1+\pi]$ the sequence of numbers $t_2<x_1<x_2< \ldots <x_{2m-2}<x_{2m-1}<t_1+\pi$, where $$x_1=\beta_{2m},x_2=\alpha_{2m-2},x_3=\beta_{2m-2},\ldots x_{2m-2}=\alpha_2,x_{2m-1}=\beta_2.$$ As a function of the parameters $m$ and $u$, the inradius is given by the following. \begin{lemma}\label{lem-rmt} Let $\Omega$ be a Reuleaux polygon. Let $u$ and $m$ be the two parameters of a sector, as in Definition \ref{def-sec}. Then \begin{equation}\label{formulainradius} r(\Omega) = 1-\frac{\sum_{k=2}^{2m} \cos \beta_k}{\sin u} = 1- \frac{\sum_{k=1}^{2m-1} (-1)^{k-1}\cos x_k}{\sin u}. \end{equation} \end{lemma} \begin{proof} Throughout the proof, for brevity we set $r=r(\Omega)$. Without loss of generality, we may assume that we work with the first sector, delimited by $M_2$ and $P_1$. Up to a rigid motion, $M_1=(0,-r)$. Accordingly, $t_1=3\pi/2$, and the sector under study is $[t_2,\pi/2]$. The statement simply follows by writing $P_1$ in two different ways: by construction, $P_1=(0,1-r)$; on the other hand, exploiting the rule $P_{j-1}= P_{j} + e^{i \beta_j}$ and $P_{2m}=-(1-r)e^{it_2}$, we get $$ P_1=-(1-r) e^{i t_2} + \sum_{k=2}^{2m} e^{i \beta_k}.
$$ Taking the projections on the horizontal and vertical components, we conclude that \begin{equation}\label{sys} \left\{ \begin{array}{ccc} 0=-(1-r)\cos t_2 +\sum_{k=2}^{2m} \cos \beta_k \\ \smallskip \\ (1-r)= -(1-r)\sin t_2 +\sum_{k=2}^{2m} \sin \beta_k . \end{array} \right. \end{equation} The first line of the system, recalling that the length of the sector here is $\pi/2-t_2$, gives the first statement. The second one comes from the definition of the $x_k$'s and the relation $\alpha_k=\beta_{k+1}-\pi$. \end{proof} \begin{remark} Notice that we do not require $M_2$ to be the ``first'' contact point met in the path. Moreover, notice that $m$ and $u$ do not depend on the orientation chosen. \end{remark} We conclude the paragraph with some estimates for the length of a sector, which will be crucial in the next section. \begin{lemma}\label{lengthsector} The length $u$ of any sector satisfies \begin{equation}\label{lensec1} u\geq 2\left(\sqrt{1-2r}+r\left(2\arctan\left(\sqrt{4(1-r)^2-1}\right)-\arccos\left(\frac{r}{1-r}\right)\right)\right) \end{equation} where $r$ is the inradius of the Reuleaux polygon.\\ In particular, for a Reuleaux polygon with an inradius $r\leq r_0$ (for example an optimal Reuleaux polygon), we have \begin{equation}\label{lensec2} 0.9926 \leq u \leq 1.1563 . \end{equation} \end{lemma} \begin{proof} We work with the first sector, delimited by $M_2$ and $P_1$. Up to a rigid motion, we may assume that $M_1=(0,-r)$, so that $P_1=(0,1-r)$. In particular, $t_1=3\pi/2$ and the sector under study is $[t_2,\pi/2]$ with length $u:=u_2=\pi/2-t_2$. The length of the boundary of $\Omega$ between $M_2$ and $P_1$ is $$L(M_2P_1)=x_1-t_2+x_3-x_2+x_5-x_4+\ldots + x_{2m-1} - x_{2m-2} .$$ In the same way, the length of the opposite boundary from $P_2$ to $M_1$ is $$L(P_2M_1)=x_2-x_1+x_4-x_3+\ldots +\frac{\pi}{2}-x_{2m-1}.$$ Therefore, by addition, $u=\frac{\pi}{2} - t_2 = L(M_2P_1) + L(P_2M_1)$.
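As a quick numerical sanity check of the bound \eqref{lensec1}, one can evaluate its right-hand side over the admissible range of inradii $[1-1/\sqrt 3,\,1/2]$ of a body of constant width $1$ (recalled in the Appendix). The sketch below is ours (the helper name \texttt{sector\_lower\_bound} is not from the paper):

```python
import math

def sector_lower_bound(r):
    """Right-hand side of the sector-length bound (lensec1):
    2*(sqrt(1 - 2r) + r*(ell(r) - arccos(r/(1-r)))),
    with ell(r) = 2*arctan(sqrt(4*(1-r)^2 - 1))."""
    ell = 2.0 * math.atan(math.sqrt(4.0 * (1.0 - r) ** 2 - 1.0))
    return 2.0 * (math.sqrt(1.0 - 2.0 * r) + r * (ell - math.acos(r / (1.0 - r))))

# admissible inradii of a body of constant width 1: [1 - 1/sqrt(3), 1/2]
r_min, r_max = 1.0 - 1.0 / math.sqrt(3.0), 0.5
grid = [r_min + (r_max - r_min) * i / 200.0 for i in range(201)]
vals = [sector_lower_bound(r) for r in grid]
```

On this grid the values decrease monotonically from about $1.04$ at $r=1-1/\sqrt 3$ to $0$ at $r=1/2$, consistent with the monotonicity used below to deduce \eqref{lensec2}.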
Now let us introduce the point $P_0$ defined as the intersection of the arc of circle of radius 1 centered at $P_2$ with the outercircle (of radius $1-r$), see Figure \ref{fig-geod}. \begin{figure}[h] \begin{center} {\includegraphics[height=4.5truecm] {geodes.png}} \caption{{\it Computation of the length of a sector by comparison with a geodesic.}}\label{fig-geod} \end{center} \end{figure} By convexity, the point $P_1$ is after the point $P_0$ (in the direct sense) on the outercircle. We are going to make comparisons with the geodesics inside the annulus $\{r\leq |X|\leq 1-r\}$. We denote by $geod(A,B)$ the length of the geodesic between two points $A,B$ in the annulus. We have $$L(M_2P_1) \geq geod(M_2,P_1) \geq geod (M_2,P_0).$$ The same inequality holds for $L(P_2M_1)$. Thus, $u\geq 2 geod(M_2,P_0)$. Now, let us compute $geod(M_2,P_0)$. This geodesic is made of \begin{itemize} \item a segment $P_0H$ joining $P_0$ to the point $H$ at which the tangent from $P_0$ touches the incircle (see Figure \ref{fig-geod}), \item the arc of the incircle \t{$M_2H$}. \end{itemize} By Pythagoras' theorem, $|P_0H|=\sqrt{1-2r}$. Now we set $\ell(r)=2\arctan(\sqrt{4(1-r)^2-1})$. A simple trigonometric computation shows that the angle $\widehat{M_2P_2P_0}$ is $\ell(r)/2$. Relations between inscribed and central angles in a circle show that the angle $\widehat{P_0OM_2}$ is $\ell(r)$. Now the angle $\widehat{P_0OH}$ is $\arccos(r/(1-r))$, therefore the arc \t{$M_2H$} has length $r(\ell(r) - \arccos(r/(1-r)))$ and the inequality \eqref{lensec1} follows. For the inequality \eqref{lensec2}, we remark that the function on the right-hand side of \eqref{lensec1} is decreasing and we compute its value at $r=r_0$, getting the estimate from below. The estimate from above follows since the lengths of the three sectors satisfy $u_1+u_2+u_3=\pi$. \end{proof} \begin{remark}\label{5791113} The number of points $x_k$ we can have in a sector is odd but variable.
Nevertheless, we can bound this number for low values of $N$. For example, for a Reuleaux heptagon ($N=3$), there is necessarily at least one sector (two, if we also count its opposite sector) with only one point inside. For a Reuleaux nonagon ($N=4$), either there is one sector with only one point, or all sectors have three points. For $N=5$ or $N=6$ there is at least one sector with either one or three points (because if every sector had at least 5 points, the number of sides would be at least equal to $6$ (the number of sectors) times $5$ (the number of points) divided by $2$, that is, $15$). \end{remark} \section{Proof of the main theorem} The key results of this section concern the maximization of $h$ in the subclass of Reuleaux polygons with a prescribed maximal number of sides, namely in $\mathcal B_N^1$, for $N\in \mathbb N$. They are: \begin{proposition}\label{15} A $2N+1$-Reuleaux polygon with $N\geq 7$ cannot be the maximizer. \end{proposition} \begin{proposition}\label{autres} A $2N+1$-Reuleaux polygon with $2\leq N\leq 6$ cannot be the maximizer. \end{proposition} Once the propositions are proved (see the next two paragraphs), we are done: \begin{proof}[Proof of Theorem \ref{maintheo}] Let $N\in \mathbb N$ be fixed. In view of Proposition \ref{existN}, $h$ admits a maximizer $\Omega_N$ in the class $\mathcal B_N^1$. Thanks to Propositions \ref{15} and \ref{autres}, $\Omega_N$ is necessarily the Reuleaux triangle, in particular $\max_{\mathcal B_N^1} h= h(\mathbb T)$. Let us now consider the maximization problem in the whole class $\mathcal B^1$. As already shown in Proposition \ref{exist}, the problem admits a solution. Exploiting the density with respect to the Hausdorff metric of the Reuleaux polygons (cf.
\cite{BF} and \cite{Buc}), the continuity of $h$ with respect to the Hausdorff metric (see again \cite[Proposition 3.1]{Pa2}), and the fact that $\{\mathcal B^1_N\}_N$ is an increasing family with $N$, we infer that $$ \max_{\mathcal B^1} h = \sup_N \max_{\mathcal B^1_N} h= \lim_{N \to \infty} \max_{\mathcal B^1_N} h. $$ As shown at the beginning of the proof, the sequence $\{\max_{\mathcal B^1_N} h\}_N$ is stationary, equal to $h(\mathbb T)$. This concludes the proof. \end{proof} The next two paragraphs are devoted to the proofs of Propositions \ref{15} and \ref{autres}, respectively. \subsection{Reduction to Reuleaux polygons with fewer than 15 sides} We start by proving that an optimal Reuleaux polygon has fewer than 15 sides. \begin{theorem}\label{quinze} Let $\Omega$ be a $2N+1$-Reuleaux polygon satisfying the optimality condition \eqref{oc3}. Then its inradius satisfies \begin{equation}\label{minr} r\geq \frac{1}{2}-\frac{h_N}{4\sin u}-\frac{1-\tau_N}{4\tau_N}\left(1+\frac{h_N^2}{6} \frac{u}{\sin u}\right)-\frac{h_N^2}{24} \frac{u}{\sin u} \end{equation} where $h_N$ is the maximal length of an arc (given in Proposition \ref{propmaxlength}), $\tau_N$ is the ratio between two consecutive lengths (given in Theorem \ref{theolengths}), and $u$ is the length of a sector, as in Lemma \ref{lengthsector}. \end{theorem} \begin{proof} Following the notation of Section \ref{secinradius}, we consider a sector $[t_2,\pi/2]$ with $2m-1$ points $t_2<x_1<x_2< \ldots <x_{2m-1}<\frac{\pi}{2}$. For convenience, we will denote $x_0=t_2$ and $x_{2m}=\pi/2$. According to Formula \eqref{formulainradius}, in order to bound from below the inradius, we need to estimate from above $$C:=\cos x_1 -\cos x_2 + \ldots - \cos x_{2m-2} + \cos x_{2m-1}$$ which can also be written $C=\sum_{k=1}^m \int_{x_{2k-1}}^{x_{2k}} \sin t\, \mathrm{d} t$.
We also introduce $C'=\sum_{k=0}^{m-1} \int_{x_{2k}}^{x_{2k+1}} \sin t\, \mathrm{d} t$ so that $C+C'=\int_{t_2}^{\pi/2} \sin t \, \mathrm{d} t = \cos t_2 =\sin u$ since $u=\frac{\pi}{2}-t_2$. The idea of the proof is to use the trapezoidal rule to estimate both integrals $C$ and $C'$, taking advantage of the information we have on the lengths of each interval. Let us denote $h_k=x_{k+1}-x_k$; recall that, in view of Theorem \ref{theolengths}, for any $k$: \begin{equation}\label{encad} \tau_N h_{k-1} \leq h_k \leq \frac{h_{k-1}}{\tau_N}. \end{equation} Let us introduce the approximations of $C$ and $C'$ obtained by the trapezoidal rule: \begin{eqnarray*} C_h=\sum_{k=1}^m \frac{h_{2k-1}}{2}(\sin x_{2k-1} +\sin x_{2k})\\ C'_h=\sum_{k=0}^{m-1} \frac{h_{2k}}{2}(\sin x_{2k} +\sin x_{2k+1}). \end{eqnarray*} The classical error formulae in numerical integration provide \begin{eqnarray*} - \frac{h_N^2}{12}\sum_{k=1}^m h_{2k-1} \leq C-C_h\leq \frac{h_N^2}{12}\sum_{k=1}^m h_{2k-1}\\ - \frac{h_N^2}{12}\sum_{k=0}^{m-1} h_{2k} \leq C'_h-C'\leq \frac{h_N^2}{12}\sum_{k=0}^{m-1} h_{2k} \end{eqnarray*} which yields, by adding the two inequalities and using $u=x_{2m}-x_0$, \begin{equation}\label{inte1} -\frac{u h_N^2}{12} + C_h-C'_h \leq C-C' \leq +\frac{u h_N^2}{12} + C_h-C'_h \,. \end{equation} We write $$C_h-C'_h=-\frac{h_0}{2} \sin x_0 +\sum_{k=1}^{2m-1} (-1)^{k-1} \frac{h_k-h_{k-1}}{2} \sin x_k +\frac{h_{2m-1}}{2}\,.$$ Now, using \eqref{encad}, $$|h_k - h_{k-1}| \sin x_k \leq \left(\frac{1}{\tau_N}-1\right) \min (h_{k-1},h_k) \sin x_k \leq \left(\frac{1}{\tau_N}-1\right) \frac{h_{k-1}+h_k}{2} \sin x_k.$$ Thus $$C_h-C'_h \leq -\frac{h_0}{2} \sin x_0 + \frac{1}{2} \left(\frac{1}{\tau_N}-1\right) \sum_{k=1}^{2m-1} \frac{h_{k-1}+h_k}{2} \sin x_k + \frac{h_N}{2}\,.$$ We now use the midpoint integration rule to estimate the term $\sum_{k=1}^{2m-1} \frac{h_{k-1}+h_k}{2} \sin x_k$.
First with the odd points $x_{2k-1}$ on intervals of length less than $2h_N$: $$\left|\int_{x_0}^{x_{2m}} \sin t\, \mathrm{d} t - \sum_{k=1}^{m} (h_{2k-1}+h_{2k-2}) \sin x_{2k-1}\right| \leq \frac{u (2 h_N)^2}{24}$$ then with the even points $x_{2k}$: $$\left|\int_{x_1}^{x_{2m-1}} \sin t\, \mathrm{d} t - \sum_{k=1}^{m-1} (h_{2k-1}+h_{2k}) \sin x_{2k}\right| \leq \frac{u (2 h_N)^2}{24}\,.$$ Therefore by addition $$\sum_{k=1}^{2m-1} \frac{h_{k-1}+h_k}{2} \sin x_k \leq \frac{1}{2}(\cos x_0 +\cos x_1 -\cos x_{2m-1})+ \frac{u h_N^2}{6} \leq \cos x_0 + \frac{u h_N^2}{6}$$ and we infer \begin{equation} C_h-C'_h \leq \frac{1}{2} \left(\frac{1}{\tau_N}-1\right)(\cos x_0 + \frac{u h_N^2}{6}) + \frac{h_N}{2}\,. \end{equation} Finally, using \eqref{inte1} together with $C+C'=\sin u$, we obtain $$ C\leq \frac{\sin u}{2}+ \frac{h_N}{4} + \frac{1}{4} \left(\frac{1}{\tau_N}-1\right)(\sin u + \frac{u h_N^2}{6}) + \frac{u h_N^2}{24}, $$ which, combined with $1-r=C/\sin u$, gives \eqref{minr}. \end{proof} As a corollary, we can give the \begin{proof}[Proof of Proposition \ref{15}] When $N$ increases, according to Theorem \ref{theolengths} and Proposition \ref{propmaxlength}, the maximal length $h_N$ decreases while the rate $\tau_N$ increases. Therefore, the right-hand side of inequality \eqref{minr} is increasing with $N$. In other words, if we prove that the inradius of a $15$-Reuleaux polygon (satisfying the optimality conditions) is greater than $r_0$, it will also be true for any $2N+1$-Reuleaux polygon (satisfying the optimality conditions) with $N\geq 7$. Now, according to Proposition \ref{propr0}, this shows that these Reuleaux polygons cannot be optimal. We have seen in Lemma \ref{lengthsector}, Formula \eqref{lensec2}, that for an optimal domain (thus with an inradius less than $r_0$), we can choose the largest sector whose length satisfies $\frac{\pi}{3} \leq u \leq 1.1563$. This implies in particular $u/\sin u \leq 1.2633$ and $1/\sin u \leq 2/\sqrt{3}$. 
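A numerical evaluation of the right-hand side of \eqref{minr} under these bounds, with the values $h_N=0.2194$ and $\tau_N=0.9875$ from Table \ref{table1}, can be sketched as follows (the variable names are ours):

```python
import math

h_N, tau_N = 0.2194, 0.9875      # values from Table 1 for 15 sides (N = 7)
u_over_sinu = 1.2633             # upper bound for u / sin(u) when pi/3 <= u <= 1.1563
inv_sinu = 2.0 / math.sqrt(3.0)  # upper bound for 1 / sin(u), since u >= pi/3

# right-hand side of the inradius bound (minr), evaluated with these upper bounds;
# each substituted quantity enlarges a subtracted term, so r_bound is a lower
# bound for the right-hand side of (minr), hence for the inradius r
r_bound = (0.5
           - (h_N / 4.0) * inv_sinu
           - (1.0 - tau_N) / (4.0 * tau_N) * (1.0 + (h_N ** 2 / 6.0) * u_over_sinu)
           - (h_N ** 2 / 24.0) * u_over_sinu)
# r_bound is roughly 0.4309
```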
Plugging these bounds in \eqref{minr} together with $h_N=0.2194$ and $\tau_N=0.9875$ (see Table \ref{table1}) provides an inradius $r>r_0$, which proves the claim. \end{proof} \subsection{The case of polygons with a number of sides between 5 and 13} In this paragraph we rule out the intermediate cases, corresponding to Reuleaux polygons satisfying the optimality conditions and having a number of sides between $5$ and $13$. \begin{proof}[Proof of Proposition \ref{autres}] Throughout the proof, we consider a Reuleaux polygon satisfying the optimality conditions. Its inradius, for brevity, will be simply denoted by $r$. In view of Proposition \ref{propr0}, it is enough to show that $r>r_0$. The proof is organized as follows: first, we treat the case of Reuleaux pentagons ($N=2$); then we analyze the Reuleaux polygons with $N=3,4,5,6$, distinguishing the cases according to whether or not there is a sector with 1 point. Note that the former is always satisfied for heptagons ($N=3$); moreover, in the latter, there is always a sector with 3 points (see also Remark \ref{5791113}). \medskip \noindent {\it Step 1. The case of pentagons, $N=2$.} As already noticed in Remark \ref{penta}, for pentagons the inradius is given by formula \eqref{formula5}. Since $\mathcal H^1(\gamma_1)\leq h_2^{max}$, we get, thanks to Table 1: $$ r\geq 1- \frac{1}{2\cos(h_2^{max}/2)}>0.47>r_0. $$ This concludes the proof for pentagons. \medskip \noindent{\it Step 2. The case of a sector with $1$ point, for $N=3,4,5,6$}. Without loss of generality, up to a rotation, we may assume that such a sector is the segment $[t,\pi/2]$ with length $u= \pi/2-t$. According to this notation, we rewrite system \eqref{sys} as follows: $$ \left\{\begin{array}{ccc} &(1-r)\sin u=\cos x_1 \\ &(1-r)(1+\cos u)=\sin x_1, \end{array} \right. $$ so that, taking the quotient, we get $$ \tan(u/2)= \frac{1}{\tan x_1 }.
$$ Therefore \begin{align*} 1-r& = \frac{\cos x_1 }{\sin(u)} = \frac{\cos x_1 (1+ \tan^2(u/2))}{2 \tan (u/2)} \\ & = \frac{\cos x_1 (1 + 1/\tan^2(x_1)) \tan x_1}{2} \\ & = \frac{1}{2\sin x_1}. \end{align*} The value of $x_1$ is unknown; however, its distance from $\pi/2$ is at most the maximal length of one of the arcs: $x_1 \geq \pi/2-h_N^{max}$. Since $h_N^{max}$ is decreasing with respect to $N$, we get $$ 1-r\leq \frac{1}{2\sin (\pi/2-h_N^{max})}=\frac{1}{2 \cos (h_N^{max})}\leq \frac{1}{2 \cos (h_3^{max})}, $$ so that, using Table 1: \begin{equation}\label{estimate1} r \geq 1- \frac{1}{2\cos(h_3^{max})}>0.44 >r_0. \end{equation} This concludes the proof of the step. \medskip \noindent {\it Step 3. The case of a sector with 3 points, for $N=4,5,6$.} Let $N$ be fixed. As in Step 2, without loss of generality, up to a rotation, we may assume that such a sector is $[t,\pi/2]$ with length $u=\pi/2-t$. By assumption, there exists a sector with 3 points $x_1,x_2,x_3$. By Lemma \ref{lem-rmt}, we have $$ r= 1- \frac{\left( \cos x_1 -\cos x_2 + \cos x_3 \right) }{\cos t}. $$ We claim that \begin{equation}\label{claimt} t\in [t_0, t_1]\subset [\pi/11,4\pi/11]. \end{equation} Let us assume for now that the claim is true (it will be shown at the end of the proof). In order to have a lower bound for $r$ we look for an upper bound for $C:= \cos x_1 -\cos x_2 + \cos x_3$ in $[t_0, t_1]$. Let $h_1:=x_2-x_1$ and $h_2:=x_3-x_2$. Set $h:= (h_1+h_2)/2$ and $\delta:= (h_1-h_2)/2$. Therefore \begin{align*} C& =\cos(x_2-h -\delta) - \cos x_2 + \cos(x_2 + h -\delta) \\ & = \frac12 \left( \cos(x_2-h -\delta) - \cos x_2\right) + \frac12 \left( \cos(x_2+h -\delta) - \cos x_2 \right) \\ & \quad + \frac12 \left( \cos(x_2-h -\delta) +\cos(x_2+h-\delta)\right) \\ & = \cos(x_2-\delta) (1-4\sin^2(h/2))+2\sin(x_2-\delta/2)\sin(\delta/2). \end{align*} Without loss of generality, up to considering the opposite sector, we may assume that $$ x_2\geq \frac{t+\pi/2}{2}.
$$ Moreover, there holds $$ |\delta|\leq (1-\tau_N)\frac{h_N^{max}}{2}. $$ These two bounds give \begin{eqnarray*} &x_2-\delta \geq \frac{t}{2} + \frac{\pi}{4} - (1-\tau_N) h_N^{max}/2\quad \Rightarrow \quad \cos(x_2-\delta) \leq \cos (t/2 + \pi/4 - (1-\tau_N) h_N^{max}/2)& \\ &1-4\sin^2(h/2) \leq 1-4\sin^2(h_N^{min}/2)& \\ &2\sin(x_2-\delta/2)\sin(\delta/2) \leq 2 \sin (\delta/2) \leq 2 \sin ((1-\tau_N)h_N^{max}/4).& \end{eqnarray*} Using these bounds in the expression of $C$, we obtain the following upper bound for $C/\cos t$: \begin{align*} \frac{C}{\cos t} & \leq \frac{\cos (t/2 +\pi/4-(1-\tau_N) h_N^{max}/2) (1-4\sin^2(h_N^{min}/2)) + 2 \sin ((1-\tau_N)h_N^{max}/4)}{\cos t} \\ & \leq \frac{\cos (t/2 +\pi/4-(1-\tau_N) h_N^{max}/2)}{\cos t} \, (1-4\sin^2(h_N^{min}/2)) + 2 \frac{\sin ((1-\tau_N)h_N^{max}/4)}{\cos t_1}. \end{align*} Let us consider the first term. We want to show that it is decreasing, namely we claim that $$ \forall t\in [t_0,t_1]\quad f(t):=\frac{\cos (t/2 +\alpha_N)}{\cos t}\leq f(t_0), $$ with $\alpha_N:=\pi/4-(1-\tau_N) h_N^{max}/2$. To this aim, we prove that $f'<0$ in $[t_0,t_1]$: \begin{align*} \cos^2(t) f'(t) & = - \frac12\sin(t/2 + \alpha_N)\cos t + \sin t \cos(t/2+\alpha_N) \\ & =\sin (t/2) \cos(t/2)\cos(t/2+\alpha_N) + \frac12 \sin(t/2-\alpha_N) \\ & = \sin (t/2) \cos(\alpha_N)\left(\frac12 + \cos^2(t/2)\right) - \cos (t/2) \sin(\alpha_N)\left(\frac12 + \sin^2(t/2)\right). \end{align*} To prove that $$\sin (t/2) \cos(\alpha_N)\left(\frac12 + \cos^2(t/2)\right) < \cos (t/2) \sin(\alpha_N)\left(\frac12 + \sin^2(t/2)\right)$$ we square both sides and set $x=\cos^2(t/2)$; this leads us to consider the polynomial $$P(x)=x^3 -3\sin^2(\alpha_N) x^2 +\left(\frac94 \sin^2(\alpha_N) -\frac34\cos^2(\alpha_N)\right) x - \frac{\cos^2(\alpha_N)}{4},$$ and to determine when it is positive.
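The location of the real roots of $P$ can be checked numerically for the relevant values $\alpha_N\simeq 0.7824, 0.7832, 0.7837$ (obtained from Table 1 via $\alpha_N=\pi/4-(1-\tau_N)h_N^{max}/2$). The sketch below is ours; the helper names are hypothetical:

```python
import math

def P_coeffs(alpha):
    """Coefficients (a, b, c, d) of
    P(x) = x^3 - 3 sin^2(a) x^2 + (9/4 sin^2(a) - 3/4 cos^2(a)) x - cos^2(a)/4."""
    s2, c2 = math.sin(alpha) ** 2, math.cos(alpha) ** 2
    return 1.0, -3.0 * s2, 2.25 * s2 - 0.75 * c2, -0.25 * c2

def num_real_roots(a, b, c, d):
    """Count the real roots of a cubic via its critical values
    (the degenerate case of a repeated root is not needed here)."""
    P = lambda x: ((a * x + b) * x + c) * x + d
    disc1 = 4 * b * b - 12 * a * c            # discriminant of P'
    if disc1 <= 0:
        return 1                              # P is monotone: one real root
    s = math.sqrt(disc1)
    x1, x2 = (-2 * b - s) / (6 * a), (-2 * b + s) / (6 * a)
    return 1 if P(x1) * P(x2) > 0 else 3      # same-sign critical values: one root

def root_by_bisection(a, b, c, d):
    """Locate the root in [0, 1]: here P(0) = -cos^2(a)/4 < 0 and P(1) = sin^2(a)/4 > 0."""
    P = lambda x: ((a * x + b) * x + c) * x + d
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For the three values of $\alpha_N$ one finds a single real root, lying below $0.65$.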
Now, for the three values $(\alpha_4,\alpha_5,\alpha_6)\simeq (0.7824,0.7832,0.7837)$ obtained from Table 1, we see that the polynomial $P(x)$ has only one real root, which is less than $0.65$. Therefore, as soon as $\cos(t/2)\geq \sqrt{0.65}$ we have $f'<0$, and this is the case for $t\in [t_0,t_1]$. Therefore, we have \begin{equation}\label{lastestimate} r\geq 1 - \frac{\cos (t_0/2 +\pi/4-(1-\tau_N) h_N^{max}/2)}{\cos t_0} \, (1-4\sin^2(h_N^{min}/2)) - 2 \frac{\sin ((1-\tau_N)h_N^{max}/4)}{\cos t_1}. \end{equation} Let us now prove the claim \eqref{claimt}: \begin{itemize} \item for $N=4$ the presence of a sector with 3 points and the absence of a sector with 1 point occur only when all the sectors have 3 points. If we choose the sector with maximal length, in view of Lemma \ref{lengthsector}, Formula \eqref{lensec2}, we infer that $$t\in [t_0,t_1] := [\pi/2-1.1563,\pi/6]\subset [\pi/11, 4\pi/11]; $$ \item for $N=5$ the presence of a sector with 3 points and the absence of a sector with 1 point occur only when two sectors have 3 points and one sector has 5 points. In particular, there exists a sector with 3 points which attains either the maximal or the minimal length. In the first case, in view of Lemma \ref{lengthsector}, Formula \eqref{lensec2}, we may take (as for $N=4$) $$ [t_0,t_1] := [\pi/2-1.1563,\pi/6]\subset [\pi/11, 4\pi/11]; $$ in the second case, in view of Lemma \ref{lengthsector}, Formula \eqref{lensec2}, we may take $$[t_0,t_1] := [1.1563/2,\pi/2-2h_5^{min}]=[0.57815,1.0184]\subset [\pi/11, 4\pi/11];$$ \item for $N=6$, we obtain the bounds for $t$ in a different way: since between $t_2$ and $\pi/2$ by assumption we have 3 points, we infer that $$t\in [t_0,t_1]:= [\pi/2 - 4 h_6^{max}, \pi/2-2 h_6^{min}]= [0.5619,1.10505]\subset [\pi/11, 4\pi/11]. $$ \end{itemize} Using these $t_0$ and $t_1$ in \eqref{lastestimate}, we get $r> 0.46$ for $N=4$, $r>0.44$ for $N=5$, and $r> 0.45$ for $N=6$. These lower bounds are all greater than $r_0$.
This concludes the proof. \end{proof} \section{Conclusion and perspectives}\label{conc} The Cheeger constant is known to be the first eigenvalue of the $1$-Laplacian, see \cite{KaFr}. On the other hand, according to \cite{JuLiMa}, the first eigenvalue of the $\infty$-Laplacian is nothing other than $1/r$, the inverse of the inradius. Since the Reuleaux triangle maximizes $1/r(\Omega)$ in the class $\mathcal{B}^1$ and we have proved in this paper that it also maximizes the Cheeger constant, a very natural question and conjecture is: \smallskip \noindent {\bf Conjecture (Blaschke-Lebesgue Theorem for all eigenvalues):} Prove that the Reuleaux triangle maximizes the first eigenvalue of the $p$-Laplacian in the class $\mathcal{B}^1$ for all $p$, $1\leq p\leq +\infty$. \medskip We conclude by noticing that the problem under study could have been set in a different class of shapes: the planar convex sets with prescribed {\it minimal width} or {\it thickness} (i.e., the minimal distance between two parallel lines enclosing the set). It is immediate to check that a maximizer in this class is actually a body of constant width, namely, the width constraint is saturated in any direction. The very same reasoning applies to the minimization problem, replacing the thickness constraint with the diameter constraint. \section{Appendix} For the benefit of the reader, we gather here the main properties of the function $\mathcal A$, studied in \cite{HL}. As already mentioned, the function $\mathcal A$ associates to $r$ the minimal area of a body of constant width ($=1$) and inradius $r$. The domain of definition of the function is the interval $[1-1/\sqrt{3}, 1/2]$, which spans all the possible inradii of the bodies of constant width 1. Among them, we highlight the inradii of regular Reuleaux polygons, by labeling them $r_{_{2N+1}}$, where $2N+1$ is the number of sides. The sequence $\{r_{_{2N+1}}\}_{N\in \mathbb N}$ is increasing and runs from $1-1/\sqrt{3}$ to $1/2$ (not attained).
The optimizer is unique and is always a Reuleaux polygon (the regular one for a ``good'' inradius) with a precise structure, which we describe below. The characterization of the optimizer allows one to compute the area quite easily, providing an explicit formula for $\mathcal A(r)$. In \cite[Theorem 1.2]{HL} we have proved the following: \begin{itemize} \item If $r=r_{_{2N+1}}$ for some $N\in \mathbb N$, then the optimal set of $\mathcal A(r)$ is the regular Reuleaux $(2N+1)$-gon. \smallskip \item If instead $r_{_{2N-1}}<r<r_{_{2N+1}}$ for some $N\in \mathbb N$, $N\geq 2$, setting $$ \ell(r):=2 \arctan\left(\sqrt{4(1-r)^2-1}\right),\quad x(r):=\frac\pi2 - \frac{2N-1}{2}\, \ell(r), $$ the optimal set of $\mathcal A(r)$ is unique (up to rigid motions) and has the following structure: \begin{itemize} \item[i)] it is a Reuleaux polygon with $2N+1$ sides, all but one tangent to the incircle; \item[ii)] the non-tangent side has both endpoints on the outercircle and has length $$ a(r):=2\,\arcsin\Big((1-r)\sin(x(r))\Big), $$ its two opposite sides have one endpoint on the outercircle and meet at a point in the interior of the annulus; moreover, they both have length $$ b(r):= x(r) + \frac{\ell(r) -a(r)}{2}; $$ \item[iii)] the other $2N-2$ sides are tangent to the incircle, have both endpoints on the outercircle, and have length $\ell(r)$. \end{itemize} \smallskip \item Setting \begin{align*} A(r,x,a,b):=& (1-r)^2 \sin x \cos x + \frac{a-\sin a}{2} +b-\sin b \\ & +(1-r) \big(\cos(a/2)-(1-r)\cos x\big) \sin(x+\ell(r)), \end{align*} the least area reads $$ \mathcal A(r)=\left\{ \begin{array}{lll} (2N+1)A(r_{_{2N+1}},0,0,0)\quad & \hbox{if }r=r_{_{2N+1}}, \\ (2N-2)A(r,0,0,0)+A(r,x(r),a(r),b(r)) \quad & \hbox{if } r_{_{2N-1}}<r<r_{_{2N+1}}. \end{array} \right. $$ \smallskip \item The function $r\mapsto \mathcal A(r)$ is continuous and increasing.
\end{itemize} \bigskip \noindent {\bf Acknowledgements}: This work was partially supported by the project ANR-18-CE40-0013 SHAPO financed by the French Agence Nationale de la Recherche (ANR). IL acknowledges the Dipartimento di Matematica - Universit\`a di Pisa for the hospitality.
https://arxiv.org/abs/0907.4412
The Cohomology Ring of the Space of Rational Functions
Let Rat_k be the space of based holomorphic maps from S^2 to itself of degree k. Let beta_k denote Artin's braid group on k strings and let Bbeta_k be the classifying space of beta_k. Let C_k denote the space of configurations of length less than or equal to k of distinct points in R^2 with labels in S^1. The three spaces Rat_k, Bbeta_{2k}, C_k are all stably homotopy equivalent to each other. For an odd prime p, the F_p-cohomology rings of the three spaces are isomorphic to each other. The F_2-cohomology ring of Bbeta_{2k} is isomorphic to that of C_k. We show that for all values of k except 1 and 3, the F_2-cohomology ring of Rat_k is not isomorphic to that of Bbeta_{2k} or C_k. This in particular implies that the HF_2-localization of Rat_k is not homotopy equivalent to the HF_2-localization of Bbeta_{2k} or C_k. We also show that for k >= 1, Bbeta_{2k} and Bbeta_{2k+1} have homotopy equivalent HF_2-localizations.
\section{Introduction} Let $\beta_k$ denote Artin's braid group on $k$ strings. Let $B\beta_k$ be the classifying space of $\beta_k$. Let $C_k({\mathbb{R}}^2,S^1)$ denote the space of configurations of length less than or equal to $k$ of distinct points in ${\mathbb{R}}^2$ with labels in $S^1$, with some identifications. We use just $C_k$ to denote the space $C_k({\mathbb{R}}^2,S^1)$. Let $Rat_k$ be the space of based holomorphic maps from $S^2$ to itself of degree $k$. \cite{seg79},\cite{cohcoh91,cohcoh93} show that these three spaces $Rat_k$, $B\beta_{2k}$, $C_k$ are all stably homotopy equivalent. In fact, \cite{cohdav88, sna74} show that these spaces split stably as a wedge sum $\vee_{j \leq k} D_j(S^1)$, where $D_j = C_j/C_{j-1}$ is a space related to the Brown-Gitler spectra. The three spaces are closely related to $\Omega^2S^2$. We explain some facts about these spaces in the next section. Totaro \cite{tot90} has shown that the three spaces have isomorphic $\mathbb{F}_p$-cohomologies for an odd prime $p$. He has also shown that the $\mathbb{F}_2$-cohomology ring of $B\beta_{2k}$ is isomorphic to that of $C_k$ and if $k+1$ is not a power of 2, then the $\mathbb{F}_2$-cohomology ring of $Rat_k$ is not isomorphic to that of $B\beta_{2k}$ or $C_k$. This paper extends the result to all values of $k$ except when $k = 1$ or $k = 3$ [Theorem \ref{main}]. This in particular implies that $Rat_k$ is not homotopy equivalent to $C_k$ if $k$ is not equal to 1 or 3. Bousfield has defined the localization of spaces with respect to homology in \cite{bou75}. Two spaces $X$ and $Y$ have homotopy equivalent $HR$-localizations if and only if there are maps $X \ra X_1 \la Y$ such that each map induces an isomorphism on homology groups with coefficients in the ring $R = {\mathbb{Z}}, \mathbb{F}_p$ or ${\mathbb{Z}}[q^{-1}]$. Our result implies that the $H\mathbb{F}_2$-localizations of $Rat_k$ and $B\beta_{2k}$ are not homotopy equivalent.
We also show that for $k \geq 1$, $B\beta_{2k}$ and $B\beta_{2k+1}$ have isomorphic $\mathbb{F}_2$-cohomologies and the $H\mathbb{F}_2$-localizations of $B\beta_{2k}$ and $B\beta_{2k+1}$ are homotopy equivalent [Lemma \ref{braid}]. For $k=1$, the three spaces $Rat_1$, $B\beta_2$ and $C_1$ are all homotopy equivalent to $S^1$. For $k = 3$, it turns out that the corresponding three spaces have isomorphic cohomology rings with coefficients in $\mathbb{F}_p$ for any prime $p$. Moreover, the actions of the dual of the Steenrod algebra on the $\mathbb{F}_2$-homologies of $Rat_3$, $B\beta_6$ and $C_3$ are also isomorphic.\newline {\it Acknowledgements}: The author thanks his PhD supervisor Burt Totaro for introducing him to this subject and for numerous interesting discussions. \section{$B\beta_k$, $C_k$, $Rat_k$} In this section, we describe the three spaces $Rat_k$, $B\beta_k$ and $C_k$, their respective integral cohomologies and their relation with the space $\Omega^2S^2$. We also describe the coalgebra structure of their respective $\mathbb{F}_2$-homologies. For the rest of the paper, the default ring of coefficients is $\mathbb{F}_2$. Let $\Omega^2S^2$ be the double loop space of $S^2$, i.e.\ the space of based maps from $S^2$ to itself. \begin{align*} \pi_0(\Omega^2S^2) \cong \pi_2(S^2) \cong {\mathbb{Z}}, \end{align*} where the degree map induces an isomorphism \begin{align*} \xymatrix{ \pi_0(\Omega^2S^2) \ar[r]^{\cong} & {\mathbb{Z}}.} \end{align*} \subsection{$B\beta_k$} The braid space $B\beta_k$ is the classifying space of the braid group on $k$ strings, $\beta_k$. Let $F({\mathbb{R}}^2, k)$ denote the configuration space of $k$ points in ${\mathbb{R}}^2$, i.e. \begin{displaymath} F({\mathbb{R}}^2, k) = \{ (x_1,...,x_k)| x_i \in {\mathbb{R}}^2,i \neq j \Rightarrow x_i \neq x_j \}.\end{displaymath} The symmetric group on $k$ elements $\Sigma_k$ acts freely on $F({\mathbb{R}}^2, k)$. We can take $ F({\mathbb{R}}^2, k)/ \Sigma_k$ as a model of the classifying space of $\beta_k$.
Thus the space $B\beta_k$ is the space of unordered $k$-tuples of points in ${\mathbb{R}}^2$. The space $B\beta_k$ can also be described as the space of degree $k$ complex polynomials without multiple roots and with the leading coefficient equal to the unity. The rational cohomology of the braid groups is as follows (\cite{ver99}, Theorem 8.1-2). \begin{Lemma} For $k \geq 2$, the rational cohomology groups of $B\beta_k$ are trivial except for \begin{align*} H^0(B\beta_k;{\mathbb{Q}}) \cong& {\mathbb{Q}},\\ H^1(B\beta_k;{\mathbb{Q}}) \cong& {\mathbb{Q}}. \end{align*} And for $k \geq 1$, \begin{align*} H^i(B\beta_{2k+1};{\mathbb{Z}}) \cong H^i(B\beta_{2k};{\mathbb{Z}}). \end{align*} \end{Lemma} As the spaces $B\beta_{2k}$, $Rat_k$ and $C_k$ are stably homotopy equivalent to each other, \begin{align*} H^i(B\beta_{2k};{\mathbb{Z}}) \cong H^i(Rat_k;{\mathbb{Z}}) \cong H^i(C_k;{\mathbb{Z}}). \end{align*} F. Cohen has calculated the $\mathbb{F}_p$-homology of $\coprod B\beta_k$ and $\Omega^2S^2$ in \cite{cohlad76}. The spaces $\Omega^2S^2$ and $B\beta_k$ are $\mathcal{C}_2$-spaces, i.e. $\mathcal{C}_2$, the `little 2-cubes operad' acts on them. Hence there is the Araki-Kudo operation on the $\mathbb{F}_2$-homologies of $\coprod_k B\beta_k$ and $\Omega^2S^2$, $ Q : H_q \ra H_{2q+1} $ and the Pontrjagin product which makes their homologies commutative rings. Let $\Omega_k^2 S^2$ denote the $k^{th}$ component of $\Omega^2S^2$ corresponding to the degree $k$ maps $f: S^2 \ra S^2$. Then $Q$ maps $H_i(\Omega_k^2S^2)$ to $H_{2i+1}(\Omega^2_{2k}S^2)$. There is a natural map $\phi: B\beta_k \ra \Omega^2_kS^2$. This map can be described as follows. Replace the $k$-tuple of distinct points in ${\mathbb{R}}^2$ by $k$ disjoint unit circles in ${\mathbb{R}}^2$. 
Then define a map from ${\mathbb{R}}^2 \cup \infty$ to itself by sending everything except interiors of unit circles to the point at infinity and by sending the interior of each unit circle onto the whole of ${\mathbb{R}}^2$ homeomorphically. Identifying ${\mathbb{R}}^2 \cup \infty$ with $S^2$ by the stereographic projection gives a degree $k$ map from $S^2$ to itself. This is precisely the natural map $\phi$ from $B\beta_k$ to $\Omega^2_kS^2$. An algebraic construction of a map $B\beta_k \ra \Omega^2S^2$ is given in section 1, \cite{seg79}. Note that \begin{align*} \pi_1(B\beta_k)& \cong \beta_k,\\ \pi_1(\Omega^2S^2)& \cong {\mathbb{Z}},\\ \pi_n(B\beta_k)& \cong \{0\}, \forall n > 1. \end{align*} Hence the map $\phi$ cannot be a homotopy equivalence in any range of dimensions. But it turns out that the map induces an isomorphism of homologies up to dimension $\lfloor k/2 \rfloor := $\emph{the greatest integer smaller than or equal to }$k/2$. The map $\phi$ induces a map $\Phi: \coprod_{k \geq 0} B\beta_k \ra \Omega^2S^2$. Let $g$ be the generator of $H_0(B\beta_1)$. By using the map $\Phi$, let $g$ also denote the generator of $H_0(\Omega^2_1S^2)$. Then the homology of these two spaces is built up by the `Araki-Kudo' operation $Q$ and its iterations $Q^i(x) = Q(Q^{i-1}(x))$. To be precise, there are algebra isomorphisms (appendix III, \cite{cohlad76}) \begin{displaymath} H_*(\coprod B\beta_k) \cong \mathbb{F}_2[g, Qg, Q^2g, \ldots], \end{displaymath} \begin{displaymath} H_*(\Omega^2S^2) \cong \mathbb{F}_2[g, g^{-1}] \otimes \mathbb{F}_2[Qg, Q^2g, \ldots]. \end{displaymath} Note that the dimension in homology of $Q^ig$ is $2^i-1$ and that $Q^ig$ is contained in the $2^i$th component of $\Omega^2S^2$. Define the weight of a homology class to be the component in which that class lives. Hence, $H_*(B\beta_k)$ is the span of monomials in $g, Qg, Q^2g, ...$ of weight $k$, where $Q^ig$ has the weight $2^i$ and the dimension $2^i-1$.
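As an illustration of this description, the mod-2 Betti numbers of $B\beta_k$ can be computed by counting monomials of weight $k$ with their dimensions; the following sketch is ours (the function name `braid_betti` is not from any library):

```python
from collections import Counter

def braid_betti(k):
    """Mod-2 Betti numbers of B(beta_k): count monomials in
    F_2[g, Qg, Q^2 g, ...] of total weight k, where Q^i g has
    weight 2^i and homological dimension 2^i - 1."""
    gens, w = [], 1
    while w <= k:
        gens.append((w, w - 1))      # (weight 2^i, dimension 2^i - 1)
        w *= 2
    states = Counter({(0, 0): 1})    # (weight, dimension) -> number of monomials
    for w, d in gens:
        nxt = Counter()
        for (cw, cd), n in states.items():
            a = 0                    # exponent of the current generator
            while cw + a * w <= k:
                nxt[(cw + a * w, cd + a * d)] += n
                a += 1
        states = nxt
    return {dim: n for (wt, dim), n in states.items() if wt == k}
```

For instance `braid_betti(6)` returns a dictionary equal to `{0: 1, 1: 1, 2: 1, 3: 2, 4: 1}`: the top class in dimension $6-2=4$ is $Qg\cdot Q^2g$. One also checks that `braid_betti(2*k) == braid_betti(2*k + 1)`, in line with the isomorphism $H^*(B\beta_{2k})\cong H^*(B\beta_{2k+1})$ of \cite{arn68,fuk70}.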
Hence note that for any $k$, the top dimensional homology of $B\beta_k$ is generated by a single element. If the binary expansion of $k$ is $k = \sum_{j \in J} 2^j$, then this top dimension is $H_{k-|J|}$. Also notice that $Q(x^2) = x^2Qx + Qx \cdot x^2 = 0$ as the homology coefficients are in $\mathbb{F}_2$. Further, the operation $Q$ is linear and the Cartan formula holds (lemma 5.2, IX, \cite{cohlad76}) \begin{displaymath} Q(xy) = x^2Qy + Qx \cdot y^2. \end{displaymath} The coproduct structure on the homology, i.e. the cup product structure on the cohomology of $B\beta_k$, is as given below. It turns out that $H_*(B\beta_k)$ is a primitively generated Hopf algebra. Namely, let \begin{displaymath} \psi: H_* \ra H_* \otimes H_* \end{displaymath} denote the coproduct on the homology. Then $\psi(g) = g \otimes g$ and $Q^ig$ for $i \geq 1$ is primitive in its component, \begin{displaymath} \psi(Q^ig) = g^{2^i} \otimes Q^ig + Q^ig \otimes g^{2^i}. \end{displaymath} The coproduct $\psi$ is multiplicative: \begin{displaymath} \psi(xy) = \psi(x)\psi(y). \end{displaymath} The expressions for $Q(g^{-1})$ and $Q(g^{-1}Qg)$ in $H_*(\Omega^2S^2)$ in terms of $g$ and $Q^ig$ can be obtained using the Cartan formula. They are \begin{align*} Q(g^{-1}) &= g^{-4}Q(g), \\ Q(g^{-1}Qg) &= g^{-2}Q^2g + g^{-4}(Qg)^3. \end{align*} $H^*(B\beta_{2k})$ is isomorphic to $H^*(B\beta_{2k+1})$ \cite{arn68,fuk70}. We will show that $B\beta_{2k}$ and $B\beta_{2k+1}$ have homotopy equivalent $H\mathbb{F}_2$-localizations. \begin{Lemma}\label{braid} The cohomology ring $H^*(B\beta_{2k})$ is isomorphic to $H^*(B\beta_{2k+1})$. In fact, $B\beta_{2k}$ and $B\beta_{2k+1}$ have homotopy equivalent $H\mathbb{F}_2$-localizations. \end{Lemma} {\noindent{\sc Proof: }} Let $x \in H_i(B\beta_{2k})$. Then $x$ is an element of weight $2k$ and dimension $i$ in $\mathbb{F}_2[g,Qg,Q^2g,\cdots]$. Hence $gx$ is an element of weight $2k+1$ and dimension $i$, i.e. $gx \in H_i(B\beta_{2k+1})$.
Also, let $y$ be a monomial in $\mathbb{F}_2[g,Qg,Q^2g,\cdots]$ of dimension $i$ and weight $2k+1$, i.e. $y \in H_i(B\beta_{2k+1})$. As each of the $Q^ig$ has even weight, $y$ is divisible by $g$, and $y/g = x \in H_i(B\beta_{2k})$. Also \begin{align*} \psi(y) =& \psi(g)\psi(x)\\ =& (g \otimes g)\psi(x). \end{align*} Hence multiplication by $g$ induces an isomorphism of coalgebras \begin{align*} \xymatrix{ H_*(B\beta_{2k}) \ar[r]^{\cdot g} & H_*(B\beta_{2k+1}).} \end{align*} Hence, \begin{align*} H^*(B\beta_{2k}) \cong H^*(B\beta_{2k+1}). \end{align*} Furthermore, let $i_{2k}: B\beta_{2k} \ra B\beta_{2k+1}$ be the inclusion map given by adding a point away from a given $2k$-tuple to get a $(2k+1)$-tuple. Then note that $(i_{2k})_*$ is precisely multiplication by $g$. Hence $(i_{2k})_*$ induces an isomorphism on $\mathbb{F}_2$-homology, and so $B\beta_{2k}$ and $B\beta_{2k+1}$ have homotopy equivalent $H\mathbb{F}_2$-localizations. \qed The action of the dual of the Steenrod algebra on $H_*(\coprod B\beta_k)$ is given in the appendix of \cite{coh78}. Let $Sq_j^*: H_n(-) \ra H_{n-j}(-)$ be the dual of the $j$th Steenrod operation $Sq^j$. Then \begin{align*} Sq_j^*(Q^ig) =& 0 \text{ if } j \geq 2,\\ Sq_1^*(Q^ig) =& (Q^{i-1}g)^2 \text{ if } i \geq 2,\\ Sq_1^*(Qg) =& 0. \end{align*} \subsection{$C_k$} The following description of the configuration spaces is taken from \cite{tot90}, which in turn follows \cite{cohshi91}. Let $C({\mathbb{R}}^2,Y)$ denote the space of all configurations of distinct points in ${\mathbb{R}}^2$ with labels in $Y$. It is defined by \begin{displaymath} C({\mathbb{R}}^2,Y) = (\bigcup_{j=1}^{\infty} F({\mathbb{R}}^2,j) \times_{\Sigma_j} Y^j)/\sim \end{displaymath} and if $* \in Y$ is a fixed basepoint then the equivalence relation $\sim$ is given by \begin{displaymath} (x_1,\ldots, x_j) \times_{\Sigma_j} (t_1,\ldots , t_{j-1},*) \sim (x_1,\ldots,x_{j-1}) \times_{\Sigma_{j-1}}(t_1,\ldots,t_{j-1}).
\end{displaymath} Let $C_k({\mathbb{R}}^2,Y)$ denote the subspace of all configurations of length less than or equal to $k$, i.e. \begin{displaymath} C_k({\mathbb{R}}^2,Y) = (\bigcup_{j=1}^k F({\mathbb{R}}^2,j) \times_{\Sigma_j} Y^j)/\sim . \end{displaymath} We denote by $C_k$ the space $C_k({\mathbb{R}}^2, S^1).$ There is a relation between configuration spaces and iterated loop spaces (May--Milgram, Segal). If $Y$ is a connected CW-complex then $C({\mathbb{R}}^2,Y)$ is homotopy equivalent to the based loop space $\Omega^2\Sigma^2Y$ which is defined by \begin{displaymath} \Omega^2\Sigma^2Y = \{f: S^2 \ra \Sigma^2Y | f(\infty) = * \}. \end{displaymath} Hence $C_k$ can be considered as a finite dimensional approximation to $\Omega^2S^3$. Note that \begin{displaymath} \pi_1(C_k) \cong {\mathbb{Z}}. \end{displaymath} The Hopf map $S^3 \ra S^2$ induces a map of 2-fold loop spaces, from $\Omega^2S^3$ to $\Omega^2S^2$. The long exact sequence of the homotopy groups of the fibration $S^1 \ra S^3 \ra S^2$ implies that $\Omega^2S^3 \ra \Omega^2S^2$ is a homotopy equivalence from $\Omega^2S^3$ onto the component $\Omega^2_0S^2$. This yields the following result (theorem 3.1, III, \cite{cohlad76}), \begin{displaymath} H_*(\bigcup_{k \geq 0}C_k) \cong H_*(\Omega_0^2S^2) \cong \mathbb{F}_2[g^{-2}Qg, Q(g^{-2}Qg), \ldots]. \end{displaymath} $H_*(C_k)$ is the span of monomials of weight less than or equal to $k$, where the weight of $Q^i(g^{-2}Qg)$ is $2^i$ and it lives in dimension $2^{i+1} - 1$. Proposition 1 from \cite{tot90} shows that as coalgebras, \begin{displaymath} H_*(B\beta_{2k}) \cong H_*(C_k). \end{displaymath} Havlicek \cite{hav95} has described the precise cohomology ring of $C_k$ as the dual to this coalgebra. \subsection{$Rat_k$} The space $Rat_k({\mathbb{C}} P^1)$ or $Rat_k$ is the space of based holomorphic maps $S^2 \ra S^2$ of degree $k$.
It can be described more precisely as the space of rational functions from ${\mathbb{C}} \cup \infty$ to ${\mathbb{C}} \cup \infty$ which send $\infty$ to 1, i.e. \begin{displaymath} Rat_k := \{ \dfrac{f(z)}{h(z)} = \dfrac{z^k + a_{k-1}z^{k-1} + \ldots + a_0}{z^k + b_{k-1}z^{k-1} + \ldots + b_0}| \text{\emph{f}(\emph{z}) and \emph{h}(\emph{z}) are coprime} \}. \end{displaymath} $Rat_k$ is a nilpotent space up to dimension $k$ (corollary 6.3, \cite{seg79}), i.e. the fundamental group of $Rat_k$ acts nilpotently on the homotopy groups $\pi_i(Rat_k)$ for $2 \leq i \leq k$. Consider the map given by the resultant of the two polynomials \begin{align*} R : Rat_k \ra& {\mathbb{C}}^*\\ f/h \mapsto& \operatorname{resultant}(f,h). \end{align*} Then the map $R$ induces an isomorphism of fundamental groups (proposition 6.4, \cite{seg79}) \begin{displaymath} \pi_1(Rat_k) \cong {\mathbb{Z}}. \end{displaymath} There is a natural map $ Rat_k \ra \Omega^2_kS^2$ which simply forgets that a map in $Rat_k$ is holomorphic. This map is described in detail in \cite{boyman88, seg79}. It induces a map \begin{align*} \chi: \coprod_{k \geq 0} Rat_k \ra \Omega^2S^2. \end{align*} The map $\chi$ preserves the action of the $\mathcal{C}_2$ operad on the spaces $\coprod_{k \geq 0} Rat_k$ and $\Omega^2S^2$. The map $\chi$ induces a map on homologies, and in the proof of Theorem 1 in \cite{tot90} it is shown that this induced map is an injection. The image of this map is a polynomial ring generated by $g$ and $Q^i(g^{-1}Qg)$ for $i \geq 0$. To be precise, \begin{displaymath} H_*(\coprod_k Rat_k) = \mathbb{F}_2[g, g^{-1}Qg, Q(g^{-1}Qg), \ldots]. \end{displaymath} As before, $g$ has weight 1 and dimension zero, and $Q^{i-1}(g^{-1}Qg)$, $i \geq 1$, has weight $2^{i-1}$ and dimension $2^i - 1$. $H_*(Rat_k)$ as a sub-coalgebra of $H_*(\coprod_k Rat_k)$ is spanned by the monomials of weight $k$. Note again that the top dimensional homology of $Rat_k$ is generated by a single element.
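Since $B\beta_{2k}$, $Rat_k$ and $C_k$ are stably homotopy equivalent, the two monomial models above must give the same graded vector spaces, and this can be checked mechanically. The following sketch (our own illustrative code; the helper names are hypothetical) counts basis monomials by homological dimension for each model:

```python
def census(gens, weight):
    """Dimension census {dim: #monomials} of the monomials of total `weight`
    in polynomial generators given as (weight, dim) pairs."""
    counts = {}
    def rec(idx, rem, dim):
        if rem == 0:
            counts[dim] = counts.get(dim, 0) + 1
            return
        if idx == len(gens):
            return
        w, d = gens[idx]
        for e in range(rem // w + 1):       # exponent of the current generator
            rec(idx + 1, rem - e * w, dim + e * d)
    rec(0, weight, 0)
    return counts

def braid_gens(k):
    # Q^i g in H_*(B beta_k): weight 2^i, dimension 2^i - 1
    return [(1 << i, (1 << i) - 1) for i in range(k.bit_length())]

def rat_gens(k):
    # g, and Q^i(g^{-1}Qg) in H_*(Rat_k): weight 2^i, dimension 2^{i+1} - 1
    return [(1, 0)] + [(1 << i, (1 << (i + 1)) - 1) for i in range(k.bit_length())]
```

For example, `census(rat_gens(3), 3)` and `census(braid_gens(6), 6)` agree, in line with the one-to-one correspondence of generators discussed next.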
There is a one-to-one correspondence between the generators of $H_*(\coprod_k Rat_k)$ and $H_*(\coprod_k B\beta_{k})$. $g$ of course corresponds to $g$ and $Q^{i-1}(g^{-1}Qg)$ corresponds to $Q^ig$. Note that in this correspondence, except for $g$, the weights of the generators of $H_*(\coprod_k Rat_k)$ are exactly half the weights of the generators they correspond to in $H_*(\coprod_k B\beta_k)$. The coproduct structure on $H_*(Rat_k)$ is as follows. $g^{-1}Qg$ is primitive in its component, but $Q^i(g^{-1}Qg)$ is not primitive in its component for $i \geq 1$. \begin{align*} \psi(g^{-1}Qg) = g \otimes g^{-1}Qg + g^{-1}Qg \otimes g. \end{align*} And for $i \geq 1$, \begin{align*} \psi Q^i(g^{-1}Qg) = &\underbrace{g^{2^i} \otimes Q^i(g^{-1}Qg)}_{0,2^{i+1}-1} + \underbrace{Q^ig \otimes (g^{-1}Qg)^{2^i}}_{2^i-1,2^i} + \underbrace{(g^{-1}Qg)^{2^i} \otimes Q^ig}_{2^i,2^i-1} \\ &+ \underbrace{Q^i(g^{-1}Qg) \otimes g^{2^i}}_{2^{i+1}-1,0}. \end{align*} Numbers appearing below a symbol indicate the dimension in homology of the corresponding symbol; i.e. $(0,2^{i+1}-1)$ below $g^{2^{i}} \otimes Q^i(g^{-1}Qg)$ indicates that $g^{2^i}$ is zero dimensional and the dimension of $Q^i(g^{-1}Qg)$ is $2^{i+1}-1$. \section{$\mathbb{F}_2$-cohomology ring of $Rat_k$ is not isomorphic to that of $B\beta_{2k}$ or $C_k$} This section proves that if $k$ is not equal to 1 or 3, then the cohomology ring of $Rat_k$ with coefficients in $\mathbb{F}_2$ is not isomorphic to the cohomology ring of $B\beta_{2k}$ or $C_k$ with coefficients in $\mathbb{F}_2$. Totaro \cite{tot90} has shown this statement when $k+1$ is not a power of 2. For $k=1$, all three spaces $Rat_1$, $B\beta_2$ and $C_1$ are homotopy equivalent to $S^1$. For $k=3$, the three spaces $Rat_3$, $B\beta_6$ and $C_3$ have isomorphic $\mathbb{F}_p$-homology as coalgebras for any prime $p$. The following theorem establishes the result for the remaining values of $k$, that is, when $k+1$ is a power of 2 and $k>3$.
This in particular shows that if $k$ is not equal to 1 or 3, then there does not exist any sequence of maps \begin{align*} B\beta_{2k} \ra X_1 \la Rat_k \end{align*} each of which induces an isomorphism on $\mathbb{F}_2$-homology. Hence in the context of \cite{bou75} our result implies that the two spaces $Rat_k$ and $B\beta_{2k}$ cannot have homotopy equivalent $H\mathbb{F}_2$-localizations. For completeness, we include Totaro's argument when $k+1$ is not a power of 2. \begin{Theorem}\label{main} The $\mathbb{F}_2$-cohomology of $Rat_k$ is not isomorphic to the $\mathbb{F}_2$-cohomology of $B\beta_{2k}$ or $C_k$ except when $k = 1$ or $3$. \end{Theorem} {\noindent{\sc Proof: }} Firstly assume that $k+1$ is not a power of 2. Let \begin{align*} k = \sum_{j \in J} 2^j. \end{align*} The top dimensional homology group of both $Rat_k$ and $B\beta_{2k}$ is 1-dimensional. $H_{2k-|J|}$, the top dimensional homology of $Rat_k$, is spanned by $x$ equal to $\prod_{j\in J} Q^j(g^{-1}Qg)$ and that of $B\beta_{2k}$ by $y$ equal to $\prod_{j \in J} Q^{j+1}g$. Let $d = 2k-|J|$ denote this top dimension and consider the set \begin{displaymath} S(x) = \{s \geq 0 | \psi(x)|_{H_s \otimes H_{d-s}} \neq 0 \}.\end{displaymath} Similarly define $S(y)$. The aim is to show that $S(x)$ is not equal to $S(y)$, which implies that the homologies of $Rat_k$ and $B\beta_{2k}$ are not isomorphic as coalgebras. Let $r$ be the smallest positive integer such that $r \in J$ but $r-1 \notin J$. As $k+1$ is not a power of 2, such an $r$ exists. We observe that $\psi(x)$ is non-zero in $H_{2^r-1} \otimes H_{dim(x)-(2^r-1)}$. \begin{align*} \psi(x) = \prod_{j \in J} \psi(Q^j(g^{-1}Qg)). \end{align*} There is at least one term in dimension $H_{2^r-1} \otimes H_{dim(x)-(2^r-1)}$ in the expansion of $\psi(x)$, which is \begin{align*} Q^rg\prod_{j\in J, j \neq r}g^{2^j} \otimes (g^{-1}Qg)^{2^r}\prod_{j \in J, j \neq r} Q^j(g^{-1}Qg). \end{align*} As $r-1 \notin J$, there is no other term of this dimension in $\psi(x)$. Hence $2^r-1 \in S(x)$.
\begin{displaymath} \psi(y) = \prod_{j \in J} \Big( \underbrace{g^{2^{j+1}} \otimes Q^{j+1}g}_{0,2^{j+1}-1} + \underbrace{Q^{j+1}g \otimes g^{2^{j+1}}}_{2^{j+1}-1,0} \Big). \end{displaymath} Note that $\psi(y)$ is zero in dimension $H_{2^r-1} \otimes H_{dim(x) - (2^r-1)}$ as $r-1 \notin J$. Hence $2^{r}-1 \notin S(y)$ and $S(x) \neq S(y)$. This proves that whenever $k+1$ is not a power of 2, the cohomology rings $H^*(Rat_k)$ and $H^*(B\beta_{2k})$ or $H^*(C_k)$ are not isomorphic. Now assume that $k+1$ is a power of 2, say \begin{displaymath} k = \sum_0^r 2^j = 2^{r+1} -1.\end{displaymath} We continue to denote by $x$ the generator of the top dimensional homology group of $Rat_k$, \begin{displaymath} x = \prod_0^{r} Q^j(g^{-1}Qg), \end{displaymath} and by $y$ the generator of the top dimensional homology group of $B\beta_{2k}$, \begin{displaymath} y = \prod_0^{r} Q^{j+1}g. \end{displaymath} For both spaces, the top dimension of homology is $d = 2^{r+2} - r -3$. Let $S(x)$ and $S(y)$ be as before; we will show that $S(x) \neq S(y)$. \begin{align*} \psi(y) &= \prod_0^{r} \psi(Q^{j+1}g)\\ &= \prod_0^r \Big( \underbrace{g^{2^{j+1}} \otimes Q^{j+1}g}_{0,2^{j+1}-1} + \underbrace{Q^{j+1}g \otimes g^{2^{j+1}}}_{2^{j+1}-1,0} \Big) \end{align*} The numbers $(0,2^{j+1}-1)$ appearing below $g^{2^{j+1}} \otimes Q^{j+1}g$ indicate the dimension in homology of the corresponding element, i.e. $g^{2^{j+1}}$ has dimension $0$ and $Q^{j+1}g$ has dimension $2^{j+1}-1$. Hence the dimensions which appear in $\psi(y)$ are precisely those which appear in the expression \begin{displaymath} \Big( ((1,0) + (0,1)) \cdot ((0,3) + (3,0)) \cdot ((0,7)+(7,0)) \cdot \ldots \Big). \end{displaymath} From this expression, it is clear that $2 \notin S(y)$ and $5 \notin S(y)$. It turns out that although $2 \notin S(x)$, $5 \in S(x)$.
\begin{displaymath} \psi(x) = \psi(g^{-1}Qg)\prod_1^r \psi(Q^j(g^{-1}Qg)). \end{displaymath} Using the expressions for $\psi(g^{-1}Qg)$ and $\psi(Q^j(g^{-1}Qg))$, we get that the dimensions which appear in $\psi(x)$ are from the expression \begin{displaymath} ((0,1)+(1,0)) \cdot \Big( ((0,3)+(1,2)+(2,1)+(3,0)) \cdot ((0,7)+(3,4)+(4,3)+(7,0)) \cdot \ldots \Big). \end{displaymath} There are exactly two ways to obtain dimension $(2,d-2)$. Namely, $(1,0) \cdot (1,2) \cdot (0,7) \cdot \ldots$ and $(0,1) \cdot (2,1) \cdot (0,7) \cdot \ldots$. The expression corresponding to the dimensions $(1,0) \cdot (1,2)$ is $(g^{-1}Qg \otimes g) \cdot (Qg \otimes (g^{-1}Qg)^2)$, which is $g^{-1}(Qg)^2 \otimes g^{-1}(Qg)^2$. By symmetry, the expression corresponding to $(0,1) \cdot (2,1)$ is also $g^{-1}(Qg)^2 \otimes g^{-1}(Qg)^2$. Hence, \begin{align*} \psi(x)|_{H_2 \otimes H_{d-2}} &= g^{-1}(Qg)^2 \otimes g^{-1}(Qg)^2 \prod_2^r (g^{2^j} \otimes Q^j(g^{-1}Qg))\\ &+ g^{-1}(Qg)^2 \otimes g^{-1}(Qg)^2 \prod_2^r (g^{2^j} \otimes Q^j(g^{-1}Qg)). \end{align*} Because the coefficients of the homology are in $\mathbb{F}_2$, the two terms cancel each other. Hence, $2 \notin S(x)$. There are exactly four ways to obtain dimension $(5,d-5)$. Namely $(1,0) \cdot (1,2) \cdot (3,4)$, $(0,1) \cdot (2,1) \cdot (3,4)$, $(1,0) \cdot (0,3) \cdot (4,3)$ and $(0,1) \cdot (1,2) \cdot (4,3)$. From the paragraph above, the first two of these, $(1,0) \cdot (1,2) \cdot (3,4)$ and $(0,1) \cdot (2,1) \cdot (3,4)$, cancel each other.
Hence, \begin{align*}\psi(x)|_{H_5 \otimes H_{d-5}} &= (g^{-1}Qg \otimes g)(g^2 \otimes Q(g^{-1}Qg))((g^{-1}Qg)^4 \otimes Q^2g) \prod_3^r (g^{2^{j}} \otimes Q^j(g^{-1}Qg))\\ &+ (g \otimes g^{-1}Qg)(Qg \otimes (g^{-1}Qg)^2)((g^{-1}Qg)^4 \otimes Q^2g) \prod_3^r (g^{2^{j}} \otimes Q^j(g^{-1}Qg)) \end{align*} \begin{align*} \Rightarrow \psi(x)|_{H_5 \otimes H_{d-5}} &= g^2(g^{-1}Qg)^5 \otimes gQ^2gQ(g^{-1}Qg) \prod_3^r (g^{2^{j}} \otimes Q^j(g^{-1}Qg)) \\ &+ g^2(g^{-1}Qg)^5 \otimes (g^{-1}Qg)^3Q^2g \prod_3^r (g^{2^{j}} \otimes Q^j(g^{-1}Qg)) \end{align*} Using that $Q(g^{-1}Qg) = g^{-2}Q^2g + g^{-4}(Qg)^3$, we obtain \begin{displaymath} \psi(x)|_{H_5 \otimes H_{d-5}} = g^2(g^{-1}Qg)^5 \otimes g^{-1}(Q^2g)^2 \prod_3^r (g^{2^{j}} \otimes Q^j(g^{-1}Qg)). \end{displaymath} This implies that $5 \in S(x)$, proving the result when $k+1$ is a power of 2. \qed \section{Some Questions} The $\mathbb{F}_p$-cohomology rings of $Rat_3$, $B\beta_{6}$ and $C_3$ are isomorphic to each other. For $p > 2$, this follows from section 6, \cite{tot90}. For $p = 2$, we can see it by hand. $H_*(B\beta_6)$ is the span of monomials of weight 6 in $\mathbb{F}_2[g, Qg, Q^2g, \cdots]$. These are $g^6$ (dim 0), $g^4Qg$ (dim 1), $g^2(Qg)^2$ (dim 2), $g^2Q^2g$, $(Qg)^3$ (both dim 3) and $Q^2gQg$ (dim 4). $H_*(Rat_3)$ is the span of monomials of weight 3 in $\mathbb{F}_2[g, g^{-1}Qg, Q(g^{-1}Qg), \cdots]$. These are $g^3$ (dim 0), $gQg$ (dim 1), $g^{-1}(Qg)^2$ (dim 2), $g^{-3}(Qg)^3$, $gQ(g^{-1}Qg)$ (both dim 3) and $g^{-1}QgQ(g^{-1}Qg)$ (dim 4). It is easy to check from the above that $H_*(B\beta_6)$ is isomorphic as a coalgebra to $H_*(Rat_3)$, proving that $Rat_3$, $B\beta_{6}$ and $C_3$ have isomorphic $\mathbb{F}_2$-cohomology rings. We can also see by hand that the action of the Steenrod algebra on $H_*(Rat_3)$ and $H_*(B\beta_6)$ is the same. Consider $g^2Q^2g \in H_3(B\beta_6)$. Then \begin{align*} Sq_1^*(g^2Q^2g) = g^2(Qg)^2.
\end{align*} The element corresponding to $g^2Q^2g$ in $H_3(Rat_3)$ is $gQ(g^{-1}Qg)$. Since $gQ(g^{-1}Qg) = g^{-1}Q^2g + g^{-3}(Qg)^3$, \begin{align*} Sq_1^*(gQ(g^{-1}Qg)) =& Sq_1^*(g^{-1}Q^2g + g^{-3}(Qg)^3) \\ =& g^{-1}(Qg)^2. \end{align*} $g^2(Qg)^2 \in H_2(B\beta_6)$ corresponds to $g^{-1}(Qg)^2 \in H_2(Rat_3)$. Similarly, by checking each generator, we can verify that the action of $Sq_1^*$ on $H_*(Rat_3)$ and $H_*(B\beta_6)$ is the same. It is still unknown whether $Rat_3$, $B\beta_{6}$ and $C_3$ have homotopy equivalent $H\mathbb{F}_2$-localizations. It is also still unknown, for $p > 2$ and $k > 2$, whether $Rat_k$, $B\beta_{2k}$ and $C_k$ have homotopy equivalent $H\mathbb{F}_p$-localizations. Cohen-Shimamoto \cite{cohshi91} have shown that $Rat_2$ and $C_2$ are not homotopy equivalent to each other by considering natural ${\mathbb{Z}}$-coverings of these spaces. \begin{displaymath} \pi_1(Rat_k) \cong \pi_1(C_k) \cong {\mathbb{Z}}. \end{displaymath} Let ${\widetilde{Rat}}_k$ and ${\widetilde{C}}_k$ be the universal covers of $Rat_k$ and $C_k$ respectively. Let $D_2$ be the ${\mathbb{Z}}/2$-Moore space $S^2 \cup_2 e^3$. It is known that $C_2$ is stably homotopy equivalent to $S^1 \vee D_2$ \cite{sna74}. Cohen-Shimamoto show that $C_2$ is homotopy equivalent to $S^1 \vee D_2$. Hence we can precisely calculate \begin{displaymath} H_2({\widetilde{C}}_2; {\mathbb{Z}}) \cong \pi_2({\widetilde{C}}_2) \cong \pi_2(C_2), \end{displaymath} which is infinitely generated. On the other hand, ${\widetilde{Rat}}_k$ is homotopy equivalent to $R^{-1}(\{1\})$ for the resultant map $R: Rat_k \ra {\mathbb{C}}^*$. $R^{-1}(\{1\})$ is a finite $CW$-complex and hence $H_*({\widetilde{Rat}}_k; {\mathbb{Z}})$ is finitely generated. This shows that the ${\mathbb{Z}}$-homologies of ${\widetilde{Rat}}_2$ and ${\widetilde{C}}_2$ are not isomorphic and that \begin{align*} \pi_2(C_2) \ncong \pi_2(Rat_2). \end{align*} Let $\gamma_k$ be the commutator subgroup $[\beta_k, \beta_k]$ of $\beta_k$.
Then there is a short exact sequence \begin{displaymath} 0 \ra \gamma_k \ra \beta_k \ra {\mathbb{Z}} \ra 0. \end{displaymath} Let $B\gamma_k$ denote the classifying space of $\gamma_k$. The $\mathbb{F}_p$-homology of $B\gamma_k$ is calculated in (theorem 4, \cite{cal06}), and it is finitely generated. We conjecture that the homology of ${\widetilde{C}}_k$ is infinitely generated for many values of $k$, whereas we already know that the homologies of ${\widetilde{Rat}}_k$ and $B\gamma_k$ are finitely generated for all $k$.
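The coalgebra computations used in the proof that $H^*(Rat_k)$ and $H^*(B\beta_{2k})$ are not isomorphic are mechanical enough to be delegated to a computer. The following sketch (our own naive implementation; all names are ours) models $\mathbb{F}_2$-arithmetic in $H_*(\Omega^2S^2)=\mathbb{F}_2[g,g^{-1}]\otimes\mathbb{F}_2[Qg,Q^2g,\ldots]$, implements $Q$ via the Cartan formula and $\psi$ via multiplicativity, and recomputes $S(x)$ and $S(y)$ for $k=7$:

```python
NGEN = 6                                    # track Qg, Q^2 g, ..., Q^NGEN g
ZERO = (0,) * NGEN

def qgen(i):                                # the monomial Q^i g, i >= 1
    es = [0] * NGEN
    es[i - 1] = 1
    return (0, tuple(es))

def mmul(a, b):                             # product of two monomials
    return (a[0] + b[0], tuple(x + y for x, y in zip(a[1], b[1])))

def pmul(p, q):                             # product of F_2-polynomials (sets)
    out = set()
    for a in p:
        for b in q:
            out ^= {mmul(a, b)}             # symmetric difference = mod 2
    return out

def Q(p):
    """Q extended by linearity; iterating the Cartan formula gives
    Q(prod a_i) = sum_i Q(a_i) * prod_{j != i} a_j^2 on a monomial,
    with Q(g) = Qg, Q(g^{-1}) = g^{-4}Qg and Q(Q^i g) = Q^{i+1} g."""
    out = set()
    for eg, es in p:
        sq = [2 * e for e in es]
        if eg % 2:                          # odd power of g or of g^{-1}
            t = sq[:]; t[0] += 1
            out ^= {(2 * (eg - 1) if eg > 0 else 2 * (eg + 1) - 4, tuple(t))}
        for i, e in enumerate(es):
            if e % 2 and i + 1 < NGEN:      # truncate beyond Q^NGEN g
                t = sq[:]; t[i] = 2 * (e - 1); t[i + 1] += 1
                out ^= {(2 * eg, tuple(t))}
    return out

def tmul(p, q):                             # product in the tensor square
    out = set()
    for a in p:
        for b in q:
            out ^= {(mmul(a[0], b[0]), mmul(a[1], b[1]))}
    return out

def psi(p):
    """Coproduct: psi(g^e) = g^e (x) g^e (g is group-like) and
    psi(Q^i g) = g^{2^i} (x) Q^i g + Q^i g (x) g^{2^i}, extended
    multiplicatively and linearly."""
    out = set()
    for eg, es in p:
        t = {((eg, ZERO), (eg, ZERO))}
        for i, e in enumerate(es):
            gp = (2 ** (i + 1), ZERO)
            base = {(gp, qgen(i + 1)), (qgen(i + 1), gp)}
            for _ in range(e):
                t = tmul(t, base)
        out ^= t
    return out

def mdim(m):                                # homological dimension of a monomial
    return sum(e * (2 ** (i + 1) - 1) for i, e in enumerate(m[1]))

def S(p):                                   # {s : psi(p) nonzero in H_s (x) H_{d-s}}
    return {mdim(left) for left, right in psi(p)}

# k = 7 (so r = 2): top classes of Rat_7 and B(beta_14)
u = [{(-1, qgen(1)[1])}]                    # u_0 = g^{-1} Qg
for _ in range(2):
    u.append(Q(u[-1]))                      # u_1, u_2 = Q(u_0), Q(u_1)
x = u[0]
for uj in u[1:]:
    x = pmul(x, uj)                         # x = u_0 u_1 u_2, dimension 11
y = {mmul(mmul(qgen(1), qgen(2)), qgen(3))} # y = Qg Q^2g Q^3g
```

Running it reproduces the proof for $k=7$: $S(y)=\{0,1,3,4,7,8,10,11\}$, while $5\in S(x)$ and $2\notin S(x)$, so $S(x)\neq S(y)$.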
https://arxiv.org/abs/math/0511684
Global residues for sparse polynomial systems
We consider families of sparse Laurent polynomials f_1,...,f_n with a finite set of common zeroes Z_f in the complex algebraic n-torus. The global residue assigns to every Laurent polynomial g the sum of its Grothendieck residues over the set Z_f. We present a new symbolic algorithm for computing the global residue as a rational function of the coefficients of the f_i when the Newton polytopes of the f_i are full-dimensional. Our results have consequences in sparse polynomial interpolation and lattice point enumeration in Minkowski sums of polytopes.
\section{Introduction} Let $f_1=\cdots=f_n=0$ be a system of Laurent polynomial equations in $n$ variables whose Newton polytopes are $\Delta_1,\dots,\Delta_n$. Suppose the solution set $Z_f$ in the algebraic torus ${\mathbb T}=(\mathbb C-\{0\})^n$ is finite. This will be true if the coefficients of the $f_i$ are generic. The {\it global residue} assigns to every polynomial $g$ the sum over $Z_f$ of Grothendieck residues of the meromorphic form $$\omega_g=\frac{g}{f_1\dots f_n}\,\frac{dt_1}{t_1}\wedge\dots\wedge\frac{dt_n}{t_n}.$$ It is a symmetric function of the solutions and hence depends rationally on the coefficients of the system. Global residues were studied by Tsikh, Gelfond and Khovanskii, and later by Cattani, Dickenstein and Sturmfels \cite{Ts, GKh2, CaD, CDS}. There are numerous applications of global residues ranging from elimination algorithms in commutative algebra \cite{CDS} to integral representation formulae in complex analysis \cite{Ts}. There have been several approaches to the problem of computing the global residue explicitly. Gelfond and Khovanskii considered systems with {\it generically positioned} Newton polytopes. The latter means that for any linear function $\xi$, among the corresponding extremal faces $\Delta_1^\xi,\dots,\Delta_n^\xi$ at least one is a vertex. In this case Gelfond and Khovanskii found an explicit formula for the global residue as a Laurent polynomial in the coefficients of the system \cite{GKh2}. In general, the global residue is not a Laurent polynomial and one should not expect a closed formula for it. Another approach was taken by Cattani and Dickenstein in the {\it generalized unmixed} case when all $\Delta_i$ are dilates of a single $n$-dimensional polytope (see \cite{CaD}). Their algorithm requires computing a certain element in the homogeneous coordinate ring of a toric variety whose {\it toric residue} equals 1. 
Then the {\it Codimension Theorem} for toric varieties \cite{CD} allows global residue computation using Gr\"obner bases techniques. This approach was then extended by D'Andrea and Khetan to a slightly more general {\it ample} case \cite{DK}. However, it turned out to be a hard (and still open) problem to find an element of toric residue~1 for an arbitrary collection of polytopes.\footnote{For a combinatorial construction of such an element that provides a solution to the problem for a wide class of polytopes see~\cite{KS}.} In this paper we present a new algorithm for computing the global residue when $\Delta_1,\dots,\Delta_n$ are arbitrary $n$-dimensional polytopes (see \rs{algorithm}). The proof of its correctness is in the spirit of Cattani and Dickenstein's arguments, but avoids the toric residue problem. It relies substantially on the Toric Euler--Jacobi theorem due to Khovanskii~\cite{uspehi}. Also in our algorithm we replace Gr\"obner bases computations with solving a linear system. This gives an expression of the global residue as a quotient of two determinants. The same idea was previously used by D'Andrea and Khetan in \cite{DK}. There is an intimate relation between the global residue and polynomial interpolation. In the classical case this was understood already by Kronecker. In \rs{sparseinterpol} we show how the Toric Euler--Jacobi theorem gives sparse polynomial interpolation. Another consequence of this theorem is a lower bound on the number of interior lattice points in the Minkowski sum of $n$ full-dimensional lattice polytopes in $\mathbb R^n$ (\rc{bound}). Finally, our results give rise to interesting optimization problems about Minkowski sums which we discuss in \rs{geometry}. \subsection*{Acknowledgments} This work was inspired by discussions with Askold Khovanskii when I visited him in Toronto. I am grateful to Askold for the valuable input and his hospitality. 
I thank Amit Khetan for numerous discussions on this subject and Eduardo Cattani and David Cox for their helpful comments. \section{Global Residues in $\mathbb C^n$ and Kronecker interpolation} We begin with classical results on polynomial interpolation that show how global residues come into play. Let $f_1,\dots, f_n\in\mathbb C[t_1,\dots,t_n]$ be polynomials of degrees $d_1,\dots,d_n$, respectively. Let $\rho=\sum d_i-n$ denote the {\it critical degree} for the $f_i$. We assume that the solution set $Z_f\subset\mathbb C^n$ of $f_1=\dots=f_n=0$ consists of a finite number of simple roots. The following is a classical problem on polynomial interpolation. \begin{Prob} Given $\phi: Z_f\to\mathbb C$ find a polynomial $g\in\mathbb C[t_1,\dots,t_n]$ of degree at most $\rho$ such that $g(a)=\phi(a)$ for all $a\in Z_f$. \end{Prob} Kronecker \cite{K} suggested the following solution to this problem. Since each $f_i$ vanishes at every $a\in Z_f$ we can write (non-uniquely): \begin{equation}\label{e:1} f_i=\sum_{j=1}^n g_{ij}(t_j-a_j),\quad a=(a_1,\dots,a_n),\ \ g_{ij}\in\mathbb C[t_1,\dots,t_n]. \end{equation} The determinant $g_a$ of the polynomial matrix $[g_{ij}]$ is a polynomial of degree at most~$\rho$. Notice that the value of $g_a$ at $t=a$ equals the value of the Jacobian $J_f=\det\left(\frac{\partial f_i}{\partial t_j}\right)$ at $t=a$, since $\frac{\partial f_i}{\partial t_j}(a)=g_{ij}(a)$ from \re{1}. Also $g_a(a')=0$ for any $a'\in Z_f$, $a'\neq a$. Indeed, substituting $t=a'$ into \re{1} we get \begin{equation} 0=\sum_{j=1}^n g_{ij}(a')(a'_j-a_j),\nonumber \end{equation} which means that the non-zero vector $a'-a$ is in the kernel of the matrix $[g_{ij}(a')]$, hence $\det[g_{ij}(a')]=0$. It remains to put \begin{equation}\label{e:2} g=\sum_{a\in Z_f}\frac{\phi(a)}{J_f(a)}\,g_a. 
\end{equation} \begin{Rem} When choosing $g_{ij}$ in \re{1} one can assume that $$g_{ij}=\frac{\partial F_i}{\partial t_j}+\text{lower degree terms},$$ where $F_i$ is the main homogeneous part of $f_i$. Then \re{2} becomes \begin{equation}\label{e:3} g=\bigg(\sum_{a\in Z_f}\frac{\phi(a)}{J_f(a)}\bigg)J_F+\text{\rm lower degree terms}, \end{equation} where $J_F$ is the Jacobian of the $F_i$. \end{Rem} \begin{Def} Given $g\in\mathbb C[t_1,\dots,t_n]$ the sum of the local Grothendieck residues $$\cal R_f(g)=\sum_{a\in Z_f}\operatorname{res}_a\left(\frac{g}{f_1\dots f_n}\,dt_1\wedge\dots\wedge dt_n\right)$$ is called the {\it global residue} of $g$ for the system $f_1=\dots=f_n=0$. In the case of simple roots of the system we get $$\cal R_f(g)=\sum_{a\in Z_f}\frac{g(a)}{J_f(a)}.$$ \end{Def} \begin{Th} ({The Euler-Jacobi theorem}) Let $f_1=\dots=f_n=0$ be a generic polynomial system with $\deg(f_i)=d_i$. Then for any $h$ of degree less than $\rho=\sum d_i-n$ the global residue $\cal R_f(h)$ is zero. \end{Th} \begin{pf} Consider the function $h:Z_f\to\mathbb C$, $a\mapsto h(a)$. According to the previous discussion there is a polynomial of the form \re{3} which takes the same values on $Z_f$ as $h$. In other words, $$h\equiv\bigg(\sum_{a\in Z_f}\frac{h(a)}{J_f(a)}\bigg)J_F+\text{\rm lower degree terms}\quad (\text{mod }\ I),$$ where we used that the ideal $I=\langle f_1,\dots, f_n\rangle$ is radical and the roots are simple. Comparing the homogeneous parts of degree $\rho$ in this equation we see that either $J_F\in I$, which is equivalent to the $F_i$ having a non-trivial common zero (does not happen generically), or the coefficient of $J_F$ (the global residue of $h$) is zero. \end{pf} \section{Global Residues in ${\mathbb T}^n$ and sparse polynomial interpolation}\label{S:sparseinterpol} Now we will consider {\it sparse} polynomial systems and define the global residue in this situation. 
The word ``sparse'' indicates that instead of fixing the degrees of the polynomials we fix their {\it Newton polytopes}. The Newton polytope $\Delta(f)$ of a (Laurent) polynomial $f$ is defined as the convex hull in $\mathbb R^n$ of the exponent vectors in $\mathbb Z^n$ of all the monomials appearing in $f$. Note that the Newton polytope of a generic polynomial of degree $d$ is a $d$-dilate of the standard $n$-simplex. Such polynomials are usually called {\it dense}. In what follows, when we say generic sparse polynomial we will mean that its Newton polytope is fixed and the coefficients are generic. Let $f_1,\dots, f_n\in\mathbb C[t_1^{\pm 1},\dots,t_n^{\pm 1}]$ be Laurent polynomials whose Newton polytopes $\Delta_1,\dots, \Delta_n$ are $n$-dimensional. We will assume that the solution set $Z_f\subset{\mathbb T}^n$ of the system $f_1=\dots=f_n=0$ is finite. Here ${\mathbb T}^n$ denotes the $n$-dimensional algebraic torus $(\mathbb C-\{0\})^n$. Similarly to the affine case, define the {\it global residue} of a Laurent polynomial $g$ as the sum of the local Grothendieck residues $${\cR_f^{\T}}(g)=\sum_{a\in Z_f} \operatorname{res}_a\left(\frac{g}{f_1\dots f_n}\,\frac{dt_1}{t_1}\wedge\dots\wedge\frac{dt_n}{t_n}\right).$$ When the roots of the system are simple we obtain $${\cR_f^{\T}}(g)=\sum_{a\in Z_f}\frac{g(a)}{{J_f^{\,\T}}(a)},$$ where ${J_f^{\,\T}}=\det\left(t_j\frac{\partial f_i}{\partial t_j}\right)$ is the {\it toric Jacobian} of the polynomials $f_1,\dots,f_n$. Notice that $\cal R^{\mathbb T}_f$ is a linear function on the space of Laurent polynomials and depends rationally on the coefficients of the $f_i$, since it is symmetric in the roots $Z_f$. In \rs{algorithm} we will give an algorithm for computing ${\cR_f^{\T}}(g)$ as a rational function of the coefficients of the system. The following theorem due to A.~Khovanskii is a far-reaching generalization of the Euler--Jacobi theorem.
\begin{Th}\cite{uspehi}\label{T:Kh} Let $f_1=\dots=f_n=0$ be a generic system of Laurent polynomials with $n$-dimensional Newton polytopes $\Delta_1,\dots, \Delta_n$. Let $\Delta=\Delta_1+\dots+\Delta_n$ be the Minkowski sum. Then \begin{enumerate} \item (Toric Euler--Jacobi) for any Laurent polynomial $h$ whose Newton polytope lies in the interior of $\Delta$, the global residue ${\cR_f^{\T}}(h)$ is zero. \item (Inversion of Toric Euler--Jacobi) for any $\phi: Z_f\to \mathbb C$ with $\sum_{a\in Z_f}\phi(a)=0$ there exists a polynomial $h$ whose Newton polytope lies in the interior of $\Delta$ such that $\phi(a)=h(a)/{J_f^{\,\T}}(a)$ for all $a\in Z_f$. \end{enumerate} \end{Th} Let us denote by $S_{\Delta^\circ}$ the vector space of all Laurent polynomials whose Newton polytope lies in the interior $\Delta^\circ$ of $\Delta$. We have a linear map \begin{equation}\label{e:linearmap} \cal A:S_{\Delta^\circ}\to\mathbb C^{|Z_f|},\quad h\mapsto\bigg(\frac{h(a)}{{J_f^{\,\T}}(a)},\ a\in Z_f\bigg). \end{equation} Then the above theorem says that the image of $\cal A$ is the hyperplane $\{\sum x_i=0\}$ in~$\mathbb C^{|Z_f|}$. By Bernstein's theorem the number of solutions $|Z_f|$ is equal to the normalized mixed volume $n!\,V(\Delta_1,\dots,\Delta_n)$ of the polytopes \cite{Bern}. We thus obtain a lower bound on the number of interior lattice points of Minkowski sums. \begin{Cor}\label{C:bound} Let $\Delta_1,\dots, \Delta_n$ be $n$-dimensional lattice polytopes in $\mathbb R^n$ and $\Delta$ their Minkowski sum. Then the number of lattice points in the interior of $\Delta$ is at least $n!\,V(\Delta_1,\dots,\Delta_n)-1$. \end{Cor} It would be interesting to give a direct geometric proof of this inequality and determine all collections $\Delta_1,\dots,\Delta_n$ for which the bound is sharp.
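For $n=2$ the bound of the corollary above can be explored by brute force. The sketch below (our own illustrative code, not from the literature) computes the normalized mixed volume $2V(P,Q)=\operatorname{Area}(P+Q)-\operatorname{Area}(P)-\operatorname{Area}(Q)$ of two lattice polygons and counts the interior lattice points of $P+Q$:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Andrew's monotone chain; returns convex hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def area2(poly):                      # twice the area, by the shoelace formula
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1] for i in range(n)))

def minkowski(P, Q):                  # vertices of the Minkowski sum P + Q
    return hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

def normalized_mixed_volume(P, Q):    # 2 V(P, Q) for lattice polygons
    return (area2(minkowski(P, Q)) - area2(hull(P)) - area2(hull(Q))) // 2

def interior_points(poly):            # lattice points strictly inside a CCW polygon
    xs, ys = [p[0] for p in poly], [p[1] for p in poly]
    n = len(poly)
    return sum(1
               for x in range(min(xs), max(xs) + 1)
               for y in range(min(ys), max(ys) + 1)
               if all(cross(poly[i], poly[(i + 1) % n], (x, y)) > 0
                      for i in range(n)))
```

For instance, for the triangles $P=\operatorname{conv}\{(0,0),(1,0),(0,m_1)\}$ and $Q=\operatorname{conv}\{(0,0),(1,0),(0,m_2)\}$ with $m_1\le m_2$ one finds $2V(P,Q)=m_2$ and exactly $m_2-1$ interior lattice points, so the bound is attained.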
In the unmixed case $\Delta_1=\dots=\Delta_n=\Delta$ the inequality becomes \begin{equation}\label{e:unmixedbound} \#\big((n\Delta)^\circ\cap\mathbb Z^n\big)\geq n!\operatorname{Vol}_n(\Delta)-1 \end{equation} and can be deduced from Stanley's Positivity theorem for the Ehrhart polynomial \cite{Stan}. Recently Batyrev and Nill described all possible $\Delta$ which give equality in \re{unmixedbound} (see \cite{BN}). Here is a mixed-case example which shows that the bound in \rc{bound} is sharp. \begin{Ex} Let $\Gamma(m)$ denote the simplex defined as the convex hull in $\mathbb R^n$ of the $n+1$ points $\{0,e_1,\dots,e_{n-1},me_n\}$, where $e_i$ is the $i$-th standard basis vector. Consider a collection of $n$ such simplices $\Gamma(m_1),\dots,\Gamma(m_n)$ with $m_1\leq\dots\leq m_n$. It is not hard to see that their normalized mixed volume equals $m_n$. For example, one can consider a generic system with these Newton polytopes and eliminate all but the last variable to obtain a univariate polynomial of degree $m_n$. The number of solutions of such a system is $m_n$, which is the normalized mixed volume by Bernstein's theorem. Also one can see that the number of interior lattice points in $\Gamma(m_1)+\dots+\Gamma(m_n)$ is exactly $m_n-1$. (In fact, these lattice points are precisely the points $(1,\dots,1,k)$ for $1\leq k< m_n$.) \end{Ex} \begin{Cor} (Sparse Polynomial Interpolation)\label{C:interpol} Let $f_1=\dots=f_n=0$ be a generic system with $n$-dimensional Newton polytopes $\Delta_1,\dots,\Delta_n$ and let $\Delta$ be their Minkowski sum. Let $Z_f\subset{\mathbb T}^n$ denote the solution set of the system. Then for any function $\phi:Z_f\to\mathbb C$ there is a polynomial $g$ with $\Delta(g)\subseteq\Delta$ such that $g(a)=\phi(a)$ for all $a\in Z_f$. Moreover, $g$ can be chosen to be of the form $g=h+c{J_f^{\,\T}}$ for some $h$ with $\Delta(h)\subset\Delta^\circ$ and a constant $c$.
\end{Cor} \begin{pf} Consider a new function $\psi:Z_f\to\mathbb C$ given by $$\psi(a)=\frac{\phi(a)}{{J_f^{\,\T}}(a)}-\frac{c}{MV},\quad \text{where }\ c=\sum_{a\in Z_f}\frac{\phi(a)}{{J_f^{\,\T}}(a)},\ MV=n!\, V(\Delta_1,\dots,\Delta_n).$$ Then the sum of the values of $\psi$ over the points of $Z_f$ equals zero. Therefore there exists $h\in S_{\Delta^\circ}$ such that $h(a)={J_f^{\,\T}}(a)\psi(a)=\phi(a)-\frac{c}{MV}{J_f^{\,\T}}(a)$ for all $a\in Z_f$. It remains to put $g=h+\frac{c}{MV}{J_f^{\,\T}}$. \end{pf} \begin{Rem} \rt{Kh} is an instance of a more general result. Let $f_1,\dots,f_k$ be generic Laurent polynomials with $n$-dimensional Newton polytopes $\Delta_1,\dots,\Delta_k$, for $k\leq n$. The set of their common zeroes defines an algebraic variety $Z_k$ in ${\mathbb T}^n$. There is a way to embed ${\mathbb T}^n$ into a projective toric variety $X$ so that the algebraic closure of $f_i=0$ defines a Cartier divisor $D_i$ on $X$ and the closure $\overline Z_k$ is a complete intersection in $X$. In \cite{Kh2} Khovanskii described the space of top-degree holomorphic forms on $\overline Z_k$. The special case $k=n$ corresponds to the space of all functions on the finite set $\overline Z_n=Z_f$ whose description we gave in \rt{Kh}. This follows from a cohomology computation on complete intersections. In particular, for $k=n$, there is an exact sequence of global sections \begin{equation}\label{e:seq1} \dots\to \bigoplus_{i=1}^n H^0(X,\cal O(D-D_i+K))\to H^0(X,\cal O(D+K))\to \mathbb C^{|Z_f|}\to \mathbb C \to 0. \end{equation} Here $D=D_1+\dots+D_n$, $K$ is the canonical divisor, and $\cal O(L)$ is the invertible sheaf corresponding to a divisor $L$. The first non-zero map in \re{seq1} (from the right) is the trace map, the second map is the residue map we considered in \re{linearmap}, and the third one is given by $(f_1,\dots,f_n)$.
\end{Rem} \section{Some commutative algebra} As before consider a system of Laurent polynomial equations $f_1=\dots=f_n=0$ whose Newton polytopes $\Delta_1,\dots,\Delta_n$ are full-dimensional. For generic coefficients the system will have a finite number of simple roots $Z_f$ in the torus ${\mathbb T}^n$. We concentrate on the following problem. \begin{Prob} Given a Laurent polynomial $g$ compute the global residue ${\cR_f^{\T}}(g)$ as a rational function of the coefficients of the $f_i$. \end{Prob} We postpone the algorithm to the next section and now formulate our main tool for solving the problem. \begin{Th}\label{T:Cox} Let $\Delta_0,\dots, \Delta_n$ be $n+1$ full-dimensional lattice polytopes in $\mathbb R^n$ and assume that $\Delta_0$ contains the origin in its interior. Put \begin{equation}\label{e:notation} \tilde\Delta=\Delta_0+\dots+\Delta_n\quad\text{and}\quad \tilde\Delta_{(i)}=\Delta_0+\dots+\Delta_{i-1}+\Delta_{i+1}+\dots+\Delta_n. \end{equation} Then for generic polynomials $f_i$ with Newton polytopes $\Delta_i$, for $1\leq i\leq n$, the linear map \begin{equation}\label{e:resmap} \bigoplus_{i=0}^nS_{\tilde{\Delta}_{(i)}^\circ}\oplus\mathbb C\to S_{\tilde{\Delta}^\circ}, \quad (h_0,\dots,h_n, c)\mapsto h_0+\sum_{i=1}^n h_if_i+c{J_f^{\,\T}} \end{equation} is surjective. \end{Th} \begin{pf} Let $g$ be in $S_{\tilde{\Delta}^\circ}$. By \rc{interpol} there exists a polynomial $h_0$ supported in $\Delta^\circ=\tilde{\Delta}_{(0)}^\circ$ and a constant $c$ such that $g(a)=h_0(a)+c{J_f^{\,\T}}(a)$ for all $a\in Z_f$, i.e. the polynomial $g-h_0-c{J_f^{\,\T}}\in S_{\tilde{\Delta}^\circ}$ vanishes on $Z_f$. Now the statement follows from \rt{Noether} below. \end{pf} The following statement can be considered as the toric version of the classical Noether theorem in $\mathbb P^n$ (see \cite{Ts}, section 20.2). \begin{Th}\label{T:Noether} Let $f_1,\dots,f_n$ be generic Laurent polynomials with $n$-dimensional Newton polytopes $\Delta_1,\dots,\Delta_n$. 
Let $h$ be a Laurent polynomial vanishing on $Z_f$. Assume $\Delta(h)$ lies in the interior of $\tilde\Delta=\Delta_0+\Delta_1+\dots+\Delta_n$ for some $n$-dimensional polytope~$\Delta_0$. Then $h$ can be written in the form $$h=h_1f_1+\dots+h_nf_n,\quad\text{with }\ \Delta(h_i)\subset\tilde\Delta_{(i)}^\circ,$$ where $\tilde\Delta_{(i)}$ as in \re{notation}. \end{Th} \begin{pf} First we note that the statement remains true when $\Delta_0=\{0\}$, i.e. $h$ is supported in the interior of $\Delta=\Delta_1+\dots+\Delta_n$. This follows from the exact sequence \re{seq1}. Indeed, if one considers the toric variety $X$ associated with $\Delta$ then each $f_i$ defines a semiample divisor $D_i$ on $X$ with polytope $\Delta_i$. It is well-known that for any semiample divisor $L$ \begin{equation}\label{e:fact} H^{n-\dim\Delta_L}(X,\cal O(L+K))\cong S_{\Delta_L^\circ}, \end{equation} where $\Delta_L$ is the polytope of $L$ (see, for example \cite{Kh1,F}). Thus the first term in \re{seq1} is isomorphic to $S_{\Delta_{(1)}^\circ}\oplus\dots\oplus S_{\Delta_{(n)}^\circ}$, where $\Delta_{(i)}=\sum_{j\neq i}\Delta_j$. Since the sequence is exact and $h$ lies in the kernel of the second map we get the required representation. Now assume $\Delta_0$ is $n$-dimensional. Let $X$ be the toric variety associated with $\tilde\Delta$. Let $f_0$ be any monomial supported in $\Delta_0$. Then $f_0,\dots,f_n$ define $n+1$ semiample divisors $D_0,\dots, D_n$ on $X$ whose polytopes are $\Delta_0,\dots,\Delta_n$. Since $f_1,\dots,f_n$ are generic (and so all their common zeroes lie in ${\mathbb T}^n$) and $f_0$ is a monomial, the divisors $D_0,\dots, D_n$ have empty intersection in $X$. Then the following twisted Koszul complex of sheaves on $X$ is exact (see \cite{CD, DK}): $$0\to\cal O(K)\to\bigoplus_{i=0}^n\cal O(D_i+K)\to\dots\to\bigoplus_{i=0}^n\cal O(\tilde D-D_i+K)\to\cal O(\tilde D+K)\to 0,$$ where $\tilde D=D_0+\dots+D_n$ and $K$ the canonical divisor on $X$. 
The first few terms of the cohomology sequence are $$\dots\to\bigoplus_{i=0}^nH^0(X,\cal O(\tilde D-D_i+K))\to H^0(X,\cal O(\tilde D+K))\to 0,$$ where the middle map is given by $(f_0,\dots,f_n)$. This, with the help of \re{fact}, provides $$h=h_0f_0+h_1f_1+\dots+h_nf_n,\quad\text{where }\ \Delta(h_i)\subset\tilde\Delta_{(i)}^\circ.$$ Notice that $h_0$ vanishes on $Z_f$ and is supported in the interior of $\tilde\Delta_{(0)}=\Delta$. Therefore, there exist $h_i'$ such that $$h_0=h_1'f_1+\dots+h_n'f_n,\quad\text{with }\ \Delta(h_i')\subset\Delta_{(i)}^\circ$$ by the previous case. It remains to note that $\Delta(h_i'f_0)\subset\Delta_{(i)}^\circ+\Delta_0=\tilde\Delta_{(i)}^\circ$. \end{pf} \begin{Rem} \rt{Cox} has an interpretation in terms of the homogeneous coordinate ring $S_X$ of the toric variety $X$ associated with $\tilde\Delta$ (see \cite{Coxhom}). One can homogenize $f_0,\dots, f_n$ to get elements $F_0,\dots,F_n\in S_X$ of {\it big} and {\it nef} degrees. According to the Codimension Theorem of Cox and Dickenstein (see \cite{CD}) the codimension of~$I=\langle F_0,\dots, F_n\rangle$ in {\it critical} degree (corresponding to the interior of $\tilde\Delta$) equals 1. Then \rt{Cox} says that the homogenization of the Jacobian ${J_f^{\,\T}}$ to the critical degree generates the critical degree part of the quotient $S_X/I$. \end{Rem} \section{Algorithm for computing the global residue in ${\mathbb T}^n$}\label{S:algorithm} Now we will present an algorithm for computing the global residue ${\cR_f^{\T}}(g)$ for any Laurent polynomial $g$ assuming that the Newton polytopes of the system are full-dimensional. \begin{Alg} Let $f_1=\dots=f_n=0$ be a system of Laurent polynomial equations with $n$-dimensional Newton polytopes $\Delta_1,\dots,\Delta_n$. As before we let $\Delta$ denote the Minkowski sum of the polytopes.\newline {\it Input:} A Laurent polynomial $g$ with Newton polytope $\Delta(g)$.
\begin{enumerate} \item[{\it Step 1:}] Choose an $n$-dimensional polytope $\Delta_0$ with $0\in\Delta_0^\circ$ such that the Minkowski sum $\tilde{\Delta}=\Delta_0+\Delta$ contains $\Delta(g)$ in its interior. \item[{\it Step 2:}] Solve the system of linear equations \begin{equation}\label{e:linsys} g=h_0+\sum_{i=1}^n h_if_i+c{J_f^{\,\T}} \end{equation} for $c$, where $h_i$ are polynomials with unknown coefficients supported in the interior of $\tilde\Delta_{(i)}$ (see \re{notation}). \end{enumerate} {\it Output:} The global residue ${\cR_f^{\T}}(g)=c\,n!\,V(\Delta_1,\dots,\Delta_n)$. \end{Alg} \begin{pf} According to \rt{Cox}, given $g$ with $\Delta(g)\subset\tilde\Delta^\circ$ there exist $h_i$ supported in $\tilde\Delta_{(i)}^\circ$ and $c\in\mathbb C$ such that $g=h_0+\sum_{i=1}^n h_if_i+c{J_f^{\,\T}}$. Taking the global residue we have $${\cR_f^{\T}}(g)={\cR_f^{\T}}(h_0)+{\cR_f^{\T}}\Big(\sum_{i=1}^n h_if_i\Big)+c\,{\cR_f^{\T}}({J_f^{\,\T}})=c\,n!\,V(\Delta_1,\dots,\Delta_n),$$ where the first two terms vanish by \rt{Kh} (1) and the definition of the global residue. \end{pf} \begin{Rem}\label{R:jac} Notice that we can ignore those terms of ${J_f^{\,\T}}$ whose exponents lie in the interior of $\Delta$ since their residue is zero by \rt{Kh} (1), and work with the ``restriction'' of ${J_f^{\,\T}}$ to the boundary of $\Delta$. \end{Rem} We illustrate the algorithm with a small 2-dimensional example. \begin{Ex} Consider a system of two equations in two unknowns. $$\begin{cases}&\hspace{-10pt}f_1=a_1x+a_2y+a_3x^2y^2,\\ &\hspace{-10pt}f_2=b_1x+b_2xy^2+b_3x^2y^2.\end{cases}$$ The Newton polytopes $\Delta_1$, $\Delta_2$ and their Minkowski sum $\Delta$ are depicted in \rf{ex1}. \begin{figure}[h] \centerline{ \scalebox{0.80}{ \input{newex2.pstex_t}}} \caption{} \label{F:ex1} \end{figure} We compute the global residue of $g=x^5y^4$. Let $\Delta_0$ be the triangle with vertices $(-1,0)$, $(0,-1)$ and $(2,1)$. 
Then the Minkowski sum $\tilde\Delta=\Delta+\Delta_0$ contains $\Delta(g)=(5,4)$ in the interior (see \rf{ex2}). \begin{figure}[h] \centerline{ \scalebox{0.75}{ \input{newex3.pstex_t}}} \caption{} \label{F:ex2} \end{figure} The vector space $S_{\tilde\Delta^\circ}$ has dimension 15 and a monomial basis \begin{equation} S_{\tilde\Delta^\circ}=\langle xy, xy^2, xy^3, x^2, x^2y, x^2y^2, x^2y^3, x^3y, x^3y^2, x^3y^3, x^3y^4, x^4y^2, x^4y^3, x^4y^4, x^5y^4\rangle.\nonumber \end{equation} Now $\tilde\Delta_{(0)}=\Delta_1+\Delta_2=\Delta$, $\tilde\Delta_{(1)}=\Delta_0+\Delta_2$ and $\tilde\Delta_{(2)}=\Delta_0+\Delta_1$ and the corresponding vector spaces are of dimension 4, 6 and 6, respectively. Here are their monomial bases: $$ S_{\tilde{\Delta}_{(0)}^\circ}=\langle x^2y, x^2y^2, x^2y^3, x^3y^3\rangle,\quad S_{\tilde{\Delta}_{(1)}^\circ}=\langle x, xy, xy^2, x^2y, x^2y^2, x^3y^2\rangle, $$ $$S_{\tilde{\Delta}_{(2)}^\circ}=\langle y, x, xy, x^2y, x^2y^2, x^3y^2\rangle,$$ as \rf{ex3} shows. \begin{figure}[h] \centerline{ \scalebox{0.75}{ \input{newex1.pstex_t}}} \caption{} \label{F:ex3} \end{figure} We have $${J_f^{\,\T}}=-a_2b_1xy-a_2b_2xy^3+2a_1b_2x^2y^2-2a_2b_3x^2y^3+2(a_1b_3-a_3b_1)\,x^3y^2+2a_3b_2x^3y^4,$$ where we can ignore the third and the fourth terms (see \rr{jac}). Now the map \re{resmap} written in the above bases has the following $15\times 17$ matrix, which we denote by $\mathbf A$.
\begin{equation} \left[ {\begin{array}{rrrrccccccccccccc} 0 & 0 & 0 & 0 & {a_{2}} & 0 & 0 & 0 & 0 & 0 & {b_{1}} & 0 & 0 & 0 & 0 & 0 & - {a_{2}}{b_{1}} \\ 0 & 0 & 0 & 0 & 0 & {a_{2}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & {a_{2}} & 0 & 0 & 0 & {b_{2}} & 0 & 0 & 0 & 0 & 0 & - {a_{2}}{b_{2}} \\ 0 & 0 & 0 & 0 & {a_{1}} & 0 & 0 & 0 & 0 & 0 & 0 & {b_{1}} & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & {a_{1}} & 0 & 0 & 0 & 0 & 0 & 0 & {b_{1}} & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & {a_{1}} & {a_{2}} & 0 & 0 & 0 & {b_{2}} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & {a_{2}} & 0 & {b_{3}} & 0 & {b_{2}} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & {a_{1}} & 0 & 0 & 0 & 0 & 0 & {b_{1}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & {a_{3}} & 0 & 0 & 0 & {a_{1}} & 0 & 0 & {b_{3}} & 0 & 0 & {b_{1}} & 0 & 2({a_{1}}{b_{3}}- {a_{3}}{b_{1}}) \\ 0 & 0 & 0 & 1 & 0 & {a_{3}} & 0 & 0 & 0 & {a_{2}} & 0 & 0 & {b_{3}} & {b_{2}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & {a_{3}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {b_{2}} & 0 & 2{a_{3}}{b_{2}} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {a_{1}} & 0 & 0 & 0 & 0 & 0 & {b_{1}} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & {a_{3}} & 0 & 0 & 0 & 0 & 0 & {b_{3}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {a_{3}} & 0 & 0 & 0 & 0 & 0 & {b_{3}} & {b_{2}} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {a_{3}} & 0 & 0 & 0 & 0 & 0 & {b_{3}} & 0 \end{array}} \right]\nonumber \end{equation} It remains to solve the system {${\mathbf A}{\mathbf c}={\mathbf b}$}, where $\mathbf c$ is the vector of unknowns (the coefficients of the $h_i$ and $c$) and $\mathbf b$ is the monomial $g=x^5y^4$ written in the basis for $S_{\tilde\Delta^\circ}$, i.e. $\mathbf b=(0,\dots,0,1)^t$. 
With the help of {\tt Maple} we obtain $$c=\frac{1}{4}\,\frac{{a_{1}}^{2}\,{b_{2}}}{{a_{3}}\,( {a_{1}}\,{b_{3}} - {a_{3}}\,{b_{1}}) ^{2}}.$$ Since the mixed volume (area) of $\Delta_1$ and $\Delta_2$ equals 4, we conclude that $${\cR_f^{\T}}(x^5y^4)=\frac {{a_{1}}^{2}\,{b_{2}}}{{a_{3}}\,( {a_{1}}\,{b_{3}} - {a_{3}}\,{b_{1}}) ^{2}}.$$ \end{Ex} \section{Some geometry}\label{S:geometry} The first step of our algorithm is constructing a lattice $n$-dimensional polytope $\Delta_0$ such that the Minkowski sum $\tilde\Delta=\Delta_0+\Delta$ contains both $\Delta$ and $\Delta(g)$ in its interior. There are many ways of doing that. For example, one can take $\Delta_0$ to be a sufficiently large dilate of $\Delta$ (translated so it contains the origin in the interior). In general, this could result in an unnecessarily large dimension of $S_{\tilde\Delta^\circ}$, which determines the size of the linear system \re{linsys}. Therefore, to minimize the size of the linear system one would want to solve the following problem. \begin{Prob} Given two lattice polytopes $\Delta$ and $\Delta'$ in $\mathbb R^n$, $\dim \Delta=n$, find an $n$-dimensional lattice polytope $\Delta_0$ such that $\Delta+\Delta_0$ contains both $\Delta$ and $\Delta'$ in its interior and has the smallest possible number of interior lattice points. \end{Prob} This appears to be a hard optimization problem. Instead we will consider a less challenging one. First, since the global residue is linear we can assume that $g$ is a monomial, i.e. $\Delta(g)$ is a point. \begin{Prob}\label{Pr:segment} Given a convex polytope $\Delta$ and a point $u$ in $\mathbb R^n$ find a segment $I$ starting at the origin such that $u$ is contained in the Minkowski sum $\Delta+I$ and the volume of $\Delta+I$ is minimal. \end{Prob} If $I=[\hspace{1pt}0,m]$, for $m\in\mathbb Z^n$, is such a segment then we can take $\Delta_0$ to be a ``narrow'' polytope with $0$ in the interior and $m$ one of the vertices, as in \rf{ex2}. 
Then the volume (and presumably the number of interior lattice points) of $\tilde\Delta=\Delta+\Delta_0$ will be relatively small. We will now show how \rpr{segment} can be reduced to a linear programming problem. Let $\Delta\subset\mathbb R^n$ be an $n$-dimensional polytope, $I=[\hspace{1pt}0,v]$ a segment, $v\in\mathbb R^n$. First, notice that the volume of $\Delta+I$ equals $$\operatorname{Vol}_n(\Delta+I)=\operatorname{Vol}_n(\Delta)+|I|\cdot \operatorname{Vol}_{n-1}(\text{pr}_I\Delta),$$ where $\text{pr}_I\Delta$ is the projection of $\Delta$ onto the hyperplane orthogonal to $I$, $\operatorname{Vol}_k$ the $k$-dimensional volume, and $|I|$ the length of $I$. For each facet $\Gamma\subset\Delta$ let $n_\Gamma$ denote the outer normal vector whose length equals the ($n-1$)-dimensional volume of $\Gamma$. Then we can write $$|I|\cdot\operatorname{Vol}_{n-1}(\text{pr}_I(\Delta))=\frac{1}{2}\sum_{\Gamma\subset\Delta}|\langle n_\Gamma,v\rangle|.$$ But the latter is, up to the factor $\frac{1}{2}$, the support function $h_Z$ of a convex polytope (zonotope) $Z$, which is the Minkowski sum of segments: $$h_Z(v)=\sum_{\Gamma\subset\Delta}|\langle n_\Gamma,v\rangle|,\quad Z=\sum_{\Gamma\subset\Delta}[-n_\Gamma,n_\Gamma].$$ Indeed, $h_Z$ is the sum of the support functions of the segments. Also it is clear that $$h_{[-n_\Gamma,n_\Gamma]}(v)=\max_{-1\leq t\leq 1}\langle\hspace{1pt} tn_\Gamma,v\rangle=|\langle n_\Gamma,v\rangle|.$$ The following figure shows the polytopes $\Delta$ and $Z$, and the normal fan $\Sigma_Z$ of $Z$. \begin{figure}[h] \centerline{ \scalebox{0.55}{ \input{fig1.pstex_t}}} \caption{} \label{F:fig1} \end{figure} Now we get back to \rpr{segment}. After translating everything by $-u$ we may assume that $u$ is at the origin. Then \rpr{segment} is equivalent to finding $x\in\Delta$ such that the volume of $\Delta+[\hspace{1pt}0,-x]$ is minimal, which by the previous discussion means minimizing the support function $h_Z(-x)=h_Z(x)$ on $\Delta$. We can interpret this geometrically.
The function $h_Z$ is a nonnegative continuous function, linear on every cone of the normal fan $\Sigma_Z$. Its graph above the polytope $\Delta$ is a ``convex down'' polyhedral set in $\mathbb R^{n+1}$ (see \rf{fig2}). The set of points with the smallest last coordinate is a face of this polyhedral set. The projection of this face to $\Delta$ gives the solution to our minimization problem. \begin{figure}[h] \centerline{ \scalebox{0.80}{ \input{fig2.pstex_t}}} \caption{} \label{F:fig2} \end{figure} Finally, note that the normal fan $\Sigma_Z$ has a simple description. It is obtained by translating all the facet hyperplanes $H_\Gamma$, for $\Gamma\subset\Delta$, to the origin.
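The minimization just described can be set up as an explicit linear program. A sketch assuming {\tt scipy} is available (the square $\Delta$ and the point $u$ are illustrative choices): after translating by $-u$, minimizing $h_Z(x)=\sum_\Gamma|\langle n_\Gamma,x\rangle|$ over $x\in\Delta-u$ is linearized by introducing one slack variable per facet.

```python
import numpy as np
from scipy.optimize import linprog

# Outer facet normals of the unit square Delta, scaled by facet (edge) length.
normals = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
u = np.array([3.0, 0.5])   # the point that Delta + I must cover

# Variables z = (x1, x2, s1, ..., s4): minimize sum(s) subject to
# +-<n_G, x> <= s_G (so s_G = |<n_G, x>| at the optimum) and x in Delta - u.
c = np.concatenate([np.zeros(2), np.ones(4)])
A_ub = np.vstack([np.hstack([normals, -np.eye(4)]),
                  np.hstack([-normals, -np.eye(4)])])
b_ub = np.zeros(8)
# Delta - u = [-3, -2] x [-0.5, 0.5] happens to be a box, so use variable bounds.
bounds = [(-3.0, -2.0), (-0.5, 0.5)] + [(None, None)] * 4
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

print(res.x[:2], res.fun)   # x = (-2, 0) with h_Z(x) = 4
```

The optimal segment is $I=[0,-x]=[0,(2,0)]$, and indeed $u\in\Delta+I$.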
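As a consistency check on the example of \rs{algorithm}, the closed form for ${\cR_f^{\T}}(x^5y^4)$ can be reproduced numerically: eliminating $x$ from $f_2$ (legitimate in the torus, where $x\neq0$) leaves a quartic in $y$, in agreement with the mixed volume $4$, and summing $g(a)/{J_f^{\,\T}}(a)$ over the four roots recovers the formula. A sketch assuming {\tt numpy}; the sample coefficients are our own:

```python
import numpy as np

a1, a2, a3 = 1.0, 1.0, 1.0
b1, b2, b3 = 1.0, 1.0, 2.0

# f2 = x(b1 + b2 y^2 + b3 x y^2) and x != 0 give x = -(b1 + b2 y^2)/(b3 y^2).
# Substituting into f1 and clearing denominators yields a quartic in y:
#   a3 b2^2 y^4 + a2 b3^2 y^3 + (2 a3 b1 b2 - a1 b2 b3) y^2 + (a3 b1^2 - a1 b1 b3) = 0.
ys = np.roots([a3*b2**2, a2*b3**2, 2*a3*b1*b2 - a1*b2*b3, 0.0,
               a3*b1**2 - a1*b1*b3])
xs = -(b1 + b2*ys**2) / (b3*ys**2)

def toric_jacobian(x, y):
    # det(t_j dF_i/dt_j) for f1 = a1 x + a2 y + a3 x^2 y^2,
    #                        f2 = b1 x + b2 x y^2 + b3 x^2 y^2.
    return ((a1*x + 2*a3*x**2*y**2) * (2*b2*x*y**2 + 2*b3*x**2*y**2)
            - (a2*y + 2*a3*x**2*y**2) * (b1*x + b2*x*y**2 + 2*b3*x**2*y**2))

residue = sum(x**5 * y**4 / toric_jacobian(x, y) for x, y in zip(xs, ys))
closed_form = a1**2 * b2 / (a3 * (a1*b3 - a3*b1)**2)
print(residue, closed_form)   # agree up to rounding
```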
https://arxiv.org/abs/math/0511684
Global residues for sparse polynomial systems
We consider families of sparse Laurent polynomials f_1,...,f_n with a finite set of common zeroes Z_f in the complex algebraic n-torus. The global residue assigns to every Laurent polynomial g the sum of its Grothendieck residues over the set Z_f. We present a new symbolic algorithm for computing the global residue as a rational function of the coefficients of the f_i when the Newton polytopes of the f_i are full-dimensional. Our results have consequences in sparse polynomial interpolation and lattice point enumeration in Minkowski sums of polytopes.
https://arxiv.org/abs/0804.0427
Fibered orbifolds and crystallographic groups
In this paper, we prove that a normal subgroup N of an n-dimensional crystallographic group G determines a geometric fibered orbifold structure on the flat orbifold E^n/G, and conversely every geometric fibered orbifold structure on E^n/G is determined by a normal subgroup N of G, which is maximal in its commensurability class of normal subgroups of G. In particular, we prove that E^n/G is a fiber bundle, with totally geodesic fibers, over a b-dimensional torus, where b is the first Betti number of G.Let N be a normal subgroup of G which is maximal in its commensurability class. We study the relationship between the exact sequence 1 -> N -> G -> G/N -> 1 splitting and the corresponding fibration projection having an affine section. If N is torsion-free, we prove that the exact sequence splits if and only if the fibration projection has an affine section. If the generic fiber F = Span(N)/N has an ordinary point that is fixed by every isometry of F, we prove that the exact sequence always splits. Finally, we describe all the geometric fibrations of the orbit spaces of all 2- and 3-dimensional crystallographic groups building on the work of Conway and Thurston.
\section{Introduction} Let $E^n$ be Euclidean $n$-space. A map $\phi:E^n\to E^n$ is an isometry of $E^n$ if and only if there is an $a\in E^n$ and an $A\in {\rm O}(n)$ such that $\phi(x) = a + Ax$ for each $x$ in $E^n$. We shall write $\phi = a+ A$. In particular, every translation $\tau = a + I$ is an isometry of $E^n$. A {\it flat $n$-orbifold} is a $(E^n,{\rm Isom}(E^n))$-orbifold as defined in \S 13.2 of Ratcliffe \cite{R}. A connected flat $n$-orbifold has a natural inner metric space structure. If $\Gamma$ is a discrete group of isometries of $E^n$, then its orbit space $E^n/\Gamma = \{\Gamma x: x\in E^n\}$ is a connected, complete, flat $n$-orbifold, and conversely if $M$ is a connected, complete, flat $n$-orbifold, then there is a discrete group $\Gamma$ of isometries of $E^n$ such that $M$ is isometric to $E^n/\Gamma$ by Theorem 13.3.10 of \cite{R}. \vspace{.15in} \noindent{\bf Definition:} A flat $n$-orbifold $M$ {\it geometrically fibers} over a flat $m$-orbifold $B$, with {\it generic fiber} a flat $(n-m)$-orbifold $F$, if there is a surjective map $\eta: M \to B$, called the {\it fibration projection}, such that for each point $y$ of $B$, there is an open metric ball $B(y,r)$ of radius $r > 0$ centered at $y$ in $B$ such that $\eta$ is isometrically equivalent on $\eta^{-1}(B(y,r))$ to the natural projection $(F\times B_y)/G_y \to B_y/G_y$, where $G_y$ is a finite group acting diagonally on $F\times B_y$, isometrically on $F$, and effectively and orthogonally on an open metric ball $B_y$ in $E^m$ of radius $r$. This implies that the fiber $\eta^{-1}(y)$ is isometric to $F/G_y$. The fiber $\eta^{-1}(y)$ is said to be {\it generic} if $G_y = \{1\}$ or {\it singular} if $G_y$ is nontrivial. \vspace{.15in} An $n$-dimensional {\it crystallographic group} ({\it space group}) is a discrete group of isometries $\Gamma$ of $E^n$ such that $E^n/\Gamma$ is compact. 
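In coordinates the pairs $a+A$ are convenient to compute with: they compose by $(a+A)(b+B)=(a+Ab)+AB$ and invert by $(a+A)^{-1}=-A^{-1}a+A^{-1}$. A quick numerical check, a sketch assuming {\tt numpy} with an illustrative rotation, of the conjugation identity $(b+B)(a+I)(b+B)^{-1}=Ba+I$ that recurs in the proofs below:

```python
import numpy as np

def compose(f, g):
    # (a + A)(b + B) = (a + A b) + A B, since x -> a + A(b + B x).
    a, A = f
    b, B = g
    return (a + A @ b, A @ B)

def inverse(f):
    a, A = f
    Ainv = np.linalg.inv(A)
    return (-Ainv @ a, Ainv)

theta = np.pi / 3
B = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
phi = (np.array([0.7, -1.2]), B)           # an isometry b + B of E^2
tau = (np.array([1.0, 2.0]), np.eye(2))    # a translation a + I

conj = compose(compose(phi, tau), inverse(phi))
print(conj[0], B @ tau[0])   # equal vectors; conj[1] is the identity matrix
```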
We prove that if ${\rm N}$ is a normal subgroup of an $n$-dimensional space group $\Gamma$, then the flat orbifold $E^n/\Gamma$ geometrically fibers over a flat orbifold, with generic fiber a connected flat orbifold, naturally induced by ${\rm N}$. Conversely, we prove that if $E^n/\Gamma$ geometrically fibers over a flat orbifold $B$ with generic fiber a connected flat orbifold $F$, then this fibration is equivalent to a geometric fibration induced by a normal subgroup ${\rm N}$ of $\Gamma$. An {\it $m$-dimensional torus} or $m$-{\it torus} is a topological space homeomorphic to the Cartesian product of $S^1$ with itself $m$ times. Here a 0-{\it torus} is defined to be a point. We prove that if $\Gamma$ is an $n$-dimensional space group with first Betti number $\beta_1$, then the flat orbifold $E^n/\Gamma$ is a fiber bundle over a $\beta_1$-torus with totally geodesic fibers. We illustrate the theory by describing all the geometric fibrations of the orbit spaces of all $2$- and $3$-dimensional space groups, building on the work of Conway et al. \cite{C-T}. \section{Normal Subgroups of Crystallographic Groups} The fundamental theorem of crystallographic groups is the following theorem. \begin{theorem} {\rm (Bieberbach's Theorems)} \begin{enumerate} \item If $\Gamma$ is an $n$-dimensional space group, then the subgroup ${\rm T}$ of all translations in $\Gamma$ is a free abelian normal subgroup of rank $n$ and of finite index in $\Gamma$ such that $\{a: a+I\in {\rm T}\}$ spans $E^n$. \item Two $n$-dimensional space groups $\Gamma_1$ and $\Gamma_2$ are isomorphic if and only if they are conjugate by an affine homeomorphism of $E^n$. \item There are only finitely many isomorphism classes of $n$-dimensional space groups for each $n$. \end{enumerate} \end{theorem} In dimensions $0,1,\ldots,6$, there are $1,2,17,219, 4\,783, 222\,018, 28\,927\,922$ isomorphism classes of space groups, respectively. See Brown et al. \cite{B-Z} and Plesken and Schulz \cite{P-S}.
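The proof of Theorem 2 below uses the telescoping identity $(a+A)^r = a+Aa+\cdots+A^{r-1}a+I$ for $A$ of finite order $r$. A quick numerical illustration, assuming {\tt numpy}, with a rotation of order $4$ (here $\sum_{k}A^ka=0$, so $a+A$ itself has order $4$):

```python
import numpy as np

def compose(f, g):
    # (a + A)(b + B) = (a + A b) + A B.
    a, A = f
    b, B = g
    return (a + A @ b, A @ B)

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation by pi/2, so r = 4
a = np.array([2.0, 3.0])
phi = (a, A)

p = phi
for _ in range(3):
    p = compose(p, phi)   # p is now (a + A)^4

# Telescoped form: translational part a + Aa + A^2 a + A^3 a, rotational part A^4.
d = sum(np.linalg.matrix_power(A, k) @ a for k in range(4))
print(p[0], d)   # both the zero vector, so (a + A)^4 = I here
```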
Let $\Gamma$ be an $n$-dimensional space group. Define $\eta:\Gamma \to {\rm O}(n)$ by $\eta(a+A) = A$. Then $\eta$ is a homomorphism whose kernel is the group of translations in $\Gamma$. The image $\Pi$ of $\eta$ is a finite group by Part (1) of Theorem 1, called the {\it point group} of $\Gamma$. Let ${\rm N}$ be a normal subgroup of $\Gamma$. Define $${\rm Span}({\rm N}) = {\rm Span}\{a\in E^n:a+I\in {\rm N}\}.$$ The following theorem strengthens Theorem 17 of Farkas \cite{Farkas}. \begin{theorem} Let ${\rm N}$ be a normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. \begin{enumerate} \item If $b+B\in\Gamma$, then $BV=V$. \item If $a+A\in {\rm N}$, then $a\in V$ and $ V^\perp\subseteq{\rm Fix}(A)$. \item The group ${\rm N}$ acts effectively on each coset $V+x$ of $V$ in $E^n$ as a space group of isometries of $V+x$. \end{enumerate} \end{theorem} \begin{proof} (1) Let $a+I\in {\rm N}$ and let $b+B\in\Gamma$; then $(b+B)(a+I)(b+B)^{-1} = Ba+I\in {\rm N}$. Hence $B$ leaves $V$ invariant. (2) The coset space $E^n/V$ is a Euclidean space where the distance between cosets is the orthogonal distance in $E^n$. The quotient map from $E^n$ to $E^n/V$ maps $V^\perp$ isometrically onto $E^n/V$. The group $\Gamma$ acts isometrically on $E^n/V$ by $(b+B)(V+x) = V+b+Bx$. Let ${\rm T} = \{a+I\in\Gamma\}$. Then ${\rm N}/{\rm N}\cap{\rm T} \cong {\rm NT}/{\rm T} \subseteq \Gamma/{\rm T}$, and so ${\rm N}/{\rm N}\cap{\rm T}$ is a finite group. The group ${\rm N}\cap{\rm T}$ acts trivially on $E^n/V$, and so ${\rm N}/{\rm N}\cap{\rm T}$ acts isometrically on $E^n/V$. Therefore ${\rm N}/{\rm N}\cap{\rm T}$ fixes a point $V+x$ of $E^n/V$. This implies that the group ${\rm N}$ leaves the coset $V+x$ invariant. Let ${\rm N}' = (-x+I){\rm N}(x+I)$. Then ${\rm N}'$ leaves $V$ invariant. Let $a+A \in {\rm N}$. Then $(-x+I)(a+A)(x+I) = a+ Ax-x+A$. Let $a' = a+ (A-I)x$. Then $a'+ A\in {\rm N}'$. As $a'+A$ leaves $V$ invariant, $a'\in V$.
Let $b+I\in \Gamma$. Then $b+I \in (-x+I)\Gamma(x+I)$, since $b+I$ and $x+I$ commute. Let $a+A\in {\rm N}$. Then $(-b+I)(a'+A)(b+I) = a'+(A-I)b+A$ is in ${\rm N}'$. Hence $(A-I)b\in V$. Now $\{b: b+I\in \Gamma\}$ spans $E^n$ by Theorem 1. Let $W = \big({\rm Fix}(A)\big)^\perp$. Then $A-I$ maps $W$ isomorphically onto $W$, and so $A-I$ maps $E^n$ onto $W$. Hence $\{(A-I)b:b+I\in\Gamma\}$ spans $W$. Therefore $W\subseteq V$. Hence $V^\perp \subseteq W^\perp = {\rm Fix}(A)$. Let $a+ A\in {\rm N}$. Write $a= b+c$ with $b\in V$ and $c\in V^\perp$. Let $r$ be the order of $A$. Then $r$ is finite and \begin{eqnarray*} (a+A)^r & = & a + Aa + \cdots + A^{r-1}a + I\\ & = & b + Ab+ \cdots + A^{r-1}b + rc +I\ \ = \ \ d + I. \end{eqnarray*} As $b + Ab+ \cdots + A^{r-1}b,\ d \in V$, we have $rc=0$, and so $c=0$. Hence $a\in V$. (3) If $a+A\in{\rm N}$, then $a\in V$ and $AV=V$, and so $(a+A)V = V$. Hence the group ${\rm N}$ leaves $V$ invariant. As $V^\perp \subseteq {\rm Fix}(A)$ for each $a+A\in {\rm N}$, we deduce that ${\rm N}$ acts effectively on $V+x$ for each $x\in V^\perp$. Let $a_1+I,\ldots, a_k+I$ be translations in ${\rm N}$ such that $\{a_1,\ldots,a_k\}$ is a basis for $V$. Then $(V+x)/\langle a_1+I,\ldots,a_k+I\rangle$ is a $k$-torus for each $x\in V^\perp$. Hence $(V+x)/{\rm N}$ is compact for each $x\in V^\perp$. Therefore ${\rm N}$ acts effectively as a space group of isometries of $V+x$ for each $x\in V^\perp$. \end{proof} Let ${\rm N}$ be a normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. The group ${\rm N}$ acts trivially on $E^n/V$ by Theorem 2, and so $\Gamma/{\rm N}$ acts isometrically on $E^n/V$ by $$({\rm N}(b+B))(V+x) = {\rm N}((b+B)(V+x))= V+ b+Bx.$$ Observe that ${\rm N}(b+B)$ fixes $V$ if and only if $b\in V$. 
Hence the kernel of the corresponding homomorphism from $\Gamma/{\rm N}$ to ${\rm Isom}(E^n/V)$ is $\overline{\rm N}/{\rm N}$ where $$\overline{\rm N} = \{b+B\in \Gamma: b\in V\ \hbox{and}\ V^\perp\subseteq{\rm Fix}(B)\}.$$ As ${\rm N}\subseteq \overline{\rm N}$, the orbit space $V/{\rm N}$ projects onto $V/\overline{\rm N}$. The orbit space $V/{\rm N}$ is compact by Theorem 2, and so $V/\overline{\rm N}$ is compact. Hence, the group $\overline{\rm N}$ acts effectively as a space group of isometries of $V$. Therefore ${\rm N}$ has finite index in $\overline{\rm N}$. The group ${\rm N}$ may be a proper subgroup of $\overline{\rm N}$. For example, if ${\rm N} = \{a+I\in \Gamma\}$, then $\overline{\rm N} = \Gamma$. As $\overline{\rm N}/{\rm N}$ is a normal subgroup of $\Gamma/{\rm N}$, we have that $\overline{\rm N}$ is a normal subgroup of $\Gamma$. The group $\Gamma/\overline{\rm N}$ acts effectively on $E^n/V$ as a group of isometries. Let ${\rm H_1}$ and ${\rm H_2}$ be subgroups of a group $\Gamma$. Then ${\rm H_1}$ and ${\rm H_2}$ are said to be ${\it commensurable}$ if ${\rm H_1\cap H_2}$ has finite index in both ${\rm H_1}$ and ${\rm H_2}$. Commensurability is an equivalence relation on the set of subgroups of $\Gamma$. \begin{theorem} Let ${\rm N_1}$ and ${\rm N_2}$ be normal subgroups of a space group $\Gamma$. Then $\overline{\rm N_1} = \overline{\rm N_2}$ if and only if ${\rm N}_1$ and ${\rm N}_2$ are commensurable. \end{theorem} \begin{proof} Suppose ${\rm N}_1$ and ${\rm N}_2$ are commensurable. Let ${\rm T}=\{a+I\in\Gamma\}$. Now ${\rm N}_1\cap{\rm N}_2\cap{\rm T}$ has finite index in ${\rm N}_1\cap{\rm N}_2$ and ${\rm N}_1\cap{\rm N}_2$ has finite index in ${\rm N}_i$ for $i=1,2$. Hence ${\rm N}_1\cap{\rm N}_2\cap{\rm T}$ has finite index in ${\rm N}_i\cap{\rm T}$ for $i=1,2$. Therefore ${\rm Span}({\rm N}_1\cap{\rm N}_2) = {\rm Span}({\rm N}_i)$ for $i = 1,2$. Hence $\overline{\rm N_1} = \overline{\rm N_2}$. 
Conversely, suppose $\overline{\rm N_1} = \overline{\rm N_2}$. Now ${\rm N}_i$ has finite index in $\overline{{\rm N}_i}$ for $i=1,2$. As ${\rm N}_1{\rm N}_2\subseteq \overline{{\rm N}_1}=\overline{{\rm N}_2}$, we have that ${\rm N}_i$ has finite index in ${\rm N}_1{\rm N}_2$ for $i=1,2$. Now ${\rm N}_1/{\rm N}_1\cap{\rm N}_2\cong {\rm N}_1{\rm N}_2/{\rm N}_2$ and ${\rm N}_2/{\rm N}_1\cap{\rm N}_2\cong {\rm N}_1{\rm N}_2/{\rm N}_1$. Hence ${\rm N}_1\cap{\rm N}_2$ has finite index in both ${\rm N}_1$ and ${\rm N}_2$. Therefore ${\rm N}_1$ and ${\rm N}_2$ are commensurable. \end{proof} \begin{corollary} If ${\rm N}$ is a normal subgroup of a space group $\Gamma$, then $\overline{\overline{\rm N}} = \overline{\rm N}$ and $\overline{\rm N}$ is the unique maximal element of the commensurability class of normal subgroups of $\Gamma$ that contains ${\rm N}$. \end{corollary} \begin{corollary} If $\psi: \Gamma \to \Gamma'$ is an isomorphism of space groups, and ${\rm N}$ is a normal subgroup of $\Gamma$, then $\overline{\psi({\rm N})} = \psi(\overline{\rm N})$. \end{corollary} \section{Geometric Flat Orbifold Fibrations} We say that the normal subgroup ${\rm N}$ of a space group $\Gamma$ is {\it complete} if $\overline{\rm N}={\rm N}$. If ${\rm N}$ is a normal subgroup of $\Gamma$, then $\overline{\rm N}$ is a complete normal subgroup of $\Gamma$ by Theorem 3, called the {\it completion} of ${\rm N}$ in $\Gamma$. \begin{lemma} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V={\rm Span}({\rm N})$. Then $\Gamma/{\rm N}$ acts effectively as a discrete group of isometries of $E^n/V$. \end{lemma} \begin{proof} From the above discussion, we know that $\Gamma/{\rm N}$ acts effectively as a group of isometries of $E^n/V$. As ${\rm N}{\rm T}/{\rm N}$ is of finite index in $\Gamma/{\rm N}$, it suffices to show that ${\rm N}{\rm T}/{\rm N}$ acts as a discrete group of isometries of $E^n/V$.
Now ${\rm N}{\rm T}/{\rm N}\cong {\rm T}/{\rm N}\cap{\rm T}$. We claim that the group ${\rm T}/{\rm N}\cap{\rm T}$ is torsion-free. Let $\tau =a+I\in{\rm T}$ and suppose that $\tau^r\in {\rm N}\cap{\rm T}$ for some integer $r>0$. Then $ra+I \in {\rm N}\cap{\rm T}$. Hence $ra\in V$, and so $a\in V$. Therefore $\tau\in {\rm N}\cap{\rm T}$, since ${\rm N}$ is complete. Thus ${\rm T}/{\rm N}\cap{\rm T}$ is torsion-free. Hence ${\rm T}$ has a set of generators $\{b_1+I,\ldots, b_n+I\}$ such that $b_1+I,\ldots, b_k+I$ generate ${\rm N}\cap{\rm T}$. Then ${\rm N}(b_{k+1}+I),\ldots,{\rm N}(b_n+I)$ generate ${\rm N}{\rm T}/{\rm N}$. As $\{b_{k+1},\ldots, b_n\}$ projects to a basis of $E^n/V$, we have that ${\rm N}{\rm T}/{\rm N}$ acts as a discrete group of isometries of $E^n/V$ by Theorems 5.2.4 and 5.3.2 of Ratcliffe \cite{R}. \end{proof} \begin{theorem} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. Then the flat orbifold $E^n/\Gamma$ geometrically fibers over the flat orbifold $(E^n/V)/(\Gamma/{\rm N})$ with generic fiber the flat orbifold $V/{\rm N}$. \end{theorem} \begin{proof} Let $\eta:E^n/\Gamma\to (E^n/V)/(\Gamma/{\rm N})$ be the map defined by $$\eta(\Gamma x) = (\Gamma/{\rm N})(V+x) = \Gamma(V+x).$$ Then the following diagram commutes \[\begin{array}{ccc} E^n & {\buildrel \tilde{\eta} \over \longrightarrow} & E^n/V \\ \downarrow & & \downarrow \\ E^n/\Gamma & {\buildrel \eta \over \longrightarrow} &(E^n/V)/(\Gamma/{{\rm N}}) \end{array} \] where $\tilde\eta$ and the vertical maps are the quotient maps. Hence $\eta$ is a continuous surjection, and so $(E^n/V)/(\Gamma/{\rm N})$ is a compact flat $m$-orbifold with $m = {\rm dim}(E^n/V)$. Let $x\in V^\perp$, and let $$\Gamma_{V+x} = \{\gamma\in\Gamma: \gamma(V+x) = V+x\}.$$ Then the stabilizer of $V+x$ in $\Gamma/{\rm N}$ is $G_x=\Gamma_{V+x}/{\rm N}$. By Lemma 1, the group $G_x$ is finite.
Let $r$ be half the distance from $V+x$ to the set of remaining points of the orbit $(\Gamma/{\rm N})(V+x)$ in $E^n/V$. By Lemma 1, we have that $r>0$. Let $B_x$ be the open ball of radius $r$ centered at $V+x$ in $E^n/V$. The quotient map from $E^n/V$ to $(E^n/V)/(\Gamma/{\rm N})$ maps $B_x$ onto $(\Gamma/{\rm N})B_x/(\Gamma/{\rm N})$ which is isometric to $B_x/G_x$ with $G_x$ acting effectively and orthogonally on $B_x$. Let $U$ be the $r$-neighborhood of $V+x$ in $E^n$. Then we have $$\eta^{-1}((\Gamma/{\rm N})B_x/(\Gamma/{\rm N})) = \Gamma U/\Gamma.$$ Now $\Gamma U/\Gamma$ is isometric to $U/\Gamma_{V+x}$. Moreover we have $$U/\Gamma_{V+x} = (U/{\rm N})/(\Gamma_{V+x}/{\rm N}) = (U/{\rm N})/G_x.$$ The group ${\rm N}$ acts trivially on $E^n/V$. Hence $U/{\rm N}$ is isometric to $\big((V+x)/{\rm N}\big)\times B_x$. Let $F= V/{\rm N}$. Then $F$ is a compact flat $(n-m)$-orbifold. Now $F$ is isometric to $(V+x)/{\rm N}$, since ${\rm N}$ acts trivially on $E^n/V$. The finite group $G_x$ acts diagonally on $\big((V+x)/{\rm N}\big)\times B_x$, isometrically on $(V+x)/{\rm N}$, and effectively and orthogonally on $B_x$. We have a commutative diagram \[\begin{array}{ccc} \Gamma U/\Gamma & {\buildrel \overline\eta\over\longrightarrow} &(\Gamma/{\rm N})B_x/(\Gamma/{\rm N}) \\ \downarrow & & \downarrow \\ \big(\big((V+x)/{\rm N}\big)\times B_x\big)/G_x & \longrightarrow & B_x/G_x \end{array}\] where the vertical maps are isometries, $\overline\eta$ is the restriction of $\eta$, and the bottom map is the natural projection. Thus $E^n/\Gamma$ geometrically fibers over the flat $m$-orbifold $(E^n/V)/(\Gamma/{\rm N})$ with generic fiber the flat $(n-m)$-orbifold $F=V/{\rm N}$. \end{proof} Let ${\rm N}$ be a normal subgroup of an $n$-dimensional space group $\Gamma$. We call the map $\eta:E^n/\Gamma\to (E^n/V)/(\Gamma/\overline{\rm N})$ defined in the proof of Theorem 4, the {\it fibration projection determined by} ${\rm N}$. Let ${\rm T} = \{a+I\in\Gamma\}$. 
By Theorem 4, the fibration projection $\eta$ determined by ${\rm N}$ is an injective Seifert fibering, with typical fiber $V/({\rm N}\cap{\rm T})$, in the sense of Lee and Raymond \cite{L-R}. \begin{theorem} Let ${\rm N}$ be a normal subgroup of a space group $\Gamma$. Then the following are equivalent: \begin{enumerate} \item The quotient group $\Gamma/{\rm N}$ is a space group. \item The quotient group $\Gamma/{\rm N}$ has no nontrivial finite normal subgroups. \item The normal subgroup ${\rm N}$ of $\Gamma$ is complete. \end{enumerate} \end{theorem} \begin{proof} By Theorem 2, every normal subgroup of a space group is a space group. Hence a space group has no nontrivial finite normal subgroups. Therefore (1) implies (2). As $\overline{\rm N}/{\rm N}$ is a finite normal subgroup of $\Gamma/{\rm N}$, we have that (2) implies (3). Finally (3) implies (1) by Theorem 4. \end{proof} If $\Gamma$ is a group, let $Z(\Gamma)$ be the {\it center} of $\Gamma$. If $\Gamma$ is a finitely generated group, let $\beta_1$ be the {\it first Betti number} of $\Gamma$. Here $\Gamma/[\Gamma,\Gamma] \cong G\oplus {\mathbb Z}^{\beta_1}$ with $G$ a finite abelian group. The next theorem strengthens Theorem 6 of Farkas \cite{Farkas}. \begin{theorem} If $\Gamma$ is a space group, then every element of $Z(\Gamma)$ is a translation, the rank of $Z(\Gamma)$ is $\beta_1$, and $Z(\Gamma)$ is a complete normal subgroup of $\Gamma$. \end{theorem} \begin{proof} The translations in $\Gamma$ are characterized as the elements of $\Gamma$ with only finitely many conjugates. Hence $Z(\Gamma)$ is a subgroup of the group ${\rm T}$ of translations of $\Gamma$. Let $\Pi$ be the point group of $\Gamma$. If $a+I\in {\rm T}$ and $b+B\in\Gamma$, then $$(b+B)(a+I)(b+B)^{-1} = Ba+I.$$ Hence conjugation in $\Gamma$ induces an action of $\Pi$ on ${\rm T}$.
The group extension $$ 1 \to {\rm T} \longrightarrow \Gamma \longrightarrow \Pi \to 1$$ determines an exact sequence of cohomology groups $$H^1(\Pi,{\mathbb Z}) \to H^1(\Gamma, {\mathbb Z})\to H^1({\rm T},{\mathbb Z})^\Pi\to H^2(\Pi,{\mathbb Z}).$$ Here ${\rm T}, \Gamma$ and $\Pi$ act trivially on ${\mathbb Z}$ and $H^1({\rm T},{\mathbb Z})^\Pi$ is the subgroup of $H^1({\rm T},{\mathbb Z})$ of elements fixed under the induced action of $\Pi$. By the universal coefficients theorem, $H^1(\Pi,{\mathbb Z})\cong {\rm Hom}(H_1(\Pi),{\mathbb Z}) = 0$ and $\beta_1 = {\rm rank}(H^1(\Gamma,{\mathbb Z}))$. By Corollary 5.5 in Chapter IV of Mac Lane \cite{Mac}, we have $H^2(\Pi,{\mathbb Z}) \cong {\rm Hom}(\Pi,S^1)$, and so $H^2(\Pi,{\mathbb Z})$ is finite. Hence $\beta_1 = {\rm rank}(H^1({\rm T},{\mathbb Z})^\Pi)$. By the universal coefficients theorem, $$H^1({\rm T},{\mathbb Z})^\Pi \cong {\rm Hom}(H_1({\rm T}),{\mathbb Z})^\Pi \cong {\rm T}^\Pi = Z(\Gamma).$$ Thus ${\rm rank}(Z(\Gamma)) = \beta_1$. Let $V={\rm Span}(Z(\Gamma))$. If $a+I\in Z(\Gamma)$ and $b+B\in\Gamma$, then $$Ba+I = (b+B)(a+I)(b+B)^{-1} = a+I.$$ Hence $V\subseteq {\rm Fix}(B)$. Therefore $\overline{Z(\Gamma)}=\{a+I \in \Gamma: a \in V\} = Z(\Gamma)$. \end{proof} Let $Z(\Gamma)$ be the center of an $n$-dimensional space group $\Gamma$. Then every element of $Z(\Gamma)$ is a translation, the rank of $Z(\Gamma)$ is the first Betti number $\beta_1$ of $\Gamma$, and $Z(\Gamma)$ is a complete normal subgroup of $\Gamma$ by Theorem 6. Let $V={\rm Span}(Z(\Gamma))$. By Theorem 4, the flat orbifold $E^n/\Gamma$ geometrically fibers over the flat orbifold $(E^n/V)/(\Gamma/Z(\Gamma))$ with generic fiber the flat $\beta_1$-torus $V/Z(\Gamma)$. Suppose $x\in V^\perp$ and $b+B\in\Gamma_{V+x}$. Write $b=c+d$ with $c\in V$ and $d\in V^\perp$. Then $(b+B)(V+x) = V+d+Bx$, and so $d+Bx = x$. If $v\in V$, then $$(b+B)(v+x) = b+v+Bx = c+v+x.$$ Thus $\Gamma_{V+x}$ acts as a discrete group of translations on $V+x$.
As $\Gamma_{V+x}$ contains $Z(\Gamma)$, we have that $(V+x)/\Gamma_{V+x}$ is a $\beta_1$-torus. Thus all the fibers of the fibration projection $\eta: E^n/\Gamma\to (E^n/V)/(\Gamma/Z(\Gamma))$ are $\beta_1$-tori. The $\beta_1$-torus $V/Z(\Gamma)$, as an additive group, acts effectively on $E^n/\Gamma$ by $$(Z(\Gamma) v)(\Gamma x) = \Gamma(v+x).$$ The projection from $(V+x)/Z(\Gamma)$ to $(V+x)/\Gamma_{V+x}$ is a covering projection for each $x\in V^\perp$. Therefore the action of $V/Z(\Gamma)$ on $E^n/\Gamma$ is an injective toral action in the sense of Conner and Raymond \cite{C-R}. \section{Equivalence of Geometric Fibrations} Let $M$ be a flat $n$-orbifold. Suppose $M$ geometrically fibers over a flat $m$-orbifold $B_i$ with generic fiber a flat $(n-m)$-orbifold $F_i$ and fibration projection $\eta_i:M\to B_i$ for $i=1,2$. Then the fibration projections $\eta_1$ and $\eta_2$ are said to be {\it geometrically equivalent} if there is an isometry $\beta:B_1\to B_2$ such that $\beta\eta_1=\eta_2$. \begin{theorem} Let $\Gamma$ be an $n$-dimensional space group. Suppose that the flat orbifold $E^n/\Gamma$ geometrically fibers over a flat $m$-orbifold $B$ with generic fiber a connected flat $(n-m)$-orbifold $F$ and fibration projection $\eta:E^n/\Gamma\to B$. Then $\Gamma$ has a complete normal subgroup ${\rm N}$ such that $\eta$ is geometrically equivalent to the fibration projection determined by ${\rm N}$. \end{theorem} \begin{proof} The fibration projection $\eta$ is locally isometrically equivalent to a natural projection $(F\times D)/G\to D/G$, where $G$ is a finite group acting diagonally on $F\times D$, isometrically on $F$, and effectively and orthogonally on an open $m$-disk $D$. The fibers of the projection $(F\times D)/G\to D/G$ are connected, totally geodesic, and parallel, and so the fibers of $\eta$ are connected, totally geodesic, and parallel.
Hence there is a vector subspace $V$ of $E^n$ such that each coset of $V$ in $E^n$ projects onto a fiber of $\eta$ and $\Gamma$ maps each coset of $V$ to a coset of $V$. Let $b+B\in\Gamma$. Then $(b+B)V = b+BV$, and so $BV=V$. Let $V+x$ project to a generic fiber $F$ of $\eta$, that is, to a fiber of $\eta$ with $G = 1$. By conjugating $\Gamma$ by $-x+I$, we may assume that $x=0$. Now $F = \Gamma V/\Gamma$ which is isometric to $V/\Gamma_V$ where $\Gamma_V = \{\gamma\in\Gamma: \gamma(V) = V\}$. As $\eta$ is isometrically equivalent to the projection $F\times D \to D$ in a tubular neighborhood of the fiber $F$, we deduce that $$\Gamma_V = \{a+A\in \Gamma: a\in V\ \hbox{and}\ V^\perp\subseteq{\rm Fix}(A)\}.$$ We claim that $\Gamma_V$ is a normal subgroup of $\Gamma$. Let $b+B\in\Gamma$ and $a+A\in \Gamma_V$. Then we have $$(b+B)(a+A)(b+B)^{-1} = b+Ba-BAB^{-1}b+BAB^{-1}.$$ If $x\in V^\perp$, then $B^{-1}x\in V^\perp$, and so $BAB^{-1}x = x$. Therefore $V^\perp \subseteq {\rm Fix}(BAB^{-1})$. Write $b=c+d$ with $c\in V$ and $d\in V^\perp$. Then we have \begin{eqnarray*} b+Ba-BAB^{-1}b & = & b+Ba-BAB^{-1}c -BAB^{-1}d \\ & = & b+Ba-BAB^{-1}c -d \\ & = & c+Ba-BAB^{-1}c \end{eqnarray*} which is an element of $V$. Therefore $(b+B)(a+A)(b+B)^{-1}\in \Gamma_V$. Thus $\Gamma_V$ is a normal subgroup of $\Gamma$. Now $F= V/\Gamma_V$ is compact, and so $\Gamma_V$ acts as a space group of isometries of $V$. Therefore $V = {\rm Span}(\Gamma_V)$ by Part (1) of Theorem 1. Hence $\overline\Gamma_V=\Gamma_V$. The fibration projection $\eta$ and the fibration projection $\eta_V$ determined by $\Gamma_V$ have the same fibers. Hence there is a homeomorphism $\beta:B\to (E^n/V)/(\Gamma/\Gamma_V)$ such that $\beta\eta = \eta_V$. The map $\beta$ is an isometry, since the metrics on $B$ and $(E^n/V)/(\Gamma/\Gamma_V)$ are determined by the distance between fibers in $E^n/\Gamma$. Therefore $\eta$ is geometrically equivalent to $\eta_V$.
\end{proof} Let $M_i$ be a connected, complete, flat $n$-orbifold for $i=1,2$. Suppose $M_i$ geometrically fibers over a flat $m$-orbifold $B_i$ with generic fiber a flat $(n-m)$-orbifold $F_i$ and fibration projection $\eta_i:M_i\to B_i$ for $i=1,2$. The fibration projections $\eta_1$ and $\eta_2$ are said to be {\it isometrically equivalent} if there are isometries $\alpha:M_1\to M_2$ and $\beta:B_1\to B_2$ such that $\beta\eta_1=\eta_2\alpha$. \begin{theorem} Let $M$ be a compact, connected, flat $n$-orbifold. If $M$ geometrically fibers over a flat $m$-orbifold $B$ with generic fiber a flat $(n-m)$-orbifold $F$ and fibration projection $\eta:M\to B$, then there exists an $n$-dimensional space group $\Gamma$ and a complete normal subgroup ${\rm N}$ of $\Gamma$ such that $\eta$ is isometrically equivalent to the fibration projection $\eta_V: E^n/\Gamma\to (E^n/V)/(\Gamma/{\rm N})$ determined by ${\rm N}$, where $V = {\rm Span}({\rm N})$. \end{theorem} \begin{proof} There exists an $n$-dimensional space group $\Gamma$ and an isometry $\alpha: M \to E^n/\Gamma$ by Theorem 13.3.10 of Ratcliffe \cite{R}. The map $\eta\alpha^{-1}: E^n/\Gamma \to B$ is a geometric fibration projection. There exists a complete normal subgroup ${\rm N}$ of $\Gamma$ and an isometry $\beta: B\to (E^n/V)/(\Gamma/{\rm N})$ such that $\beta(\eta\alpha^{-1}) = \eta_V$ by Theorem 7. Hence $\beta\eta=\eta_V\alpha$, and so $\eta$ is isometrically equivalent to $\eta_V$. \end{proof} \begin{theorem} Let ${\rm N}_i$ be a complete normal subgroup of an $n$-dimensional crystal\-lographic group $\Gamma_i$ for $i=1,2$, and let $\eta_i:E^n/\Gamma_i \to (E^n/V_i)/(\Gamma_i/{\rm N}_i)$, with $V_i = {\rm Span}({\rm N}_i)$, be the fibration projection determined by ${\rm N}_i$ for $i=1,2$. Then $\eta_1$ and $\eta_2$ are isometrically equivalent if and only if there is an isometry $\xi$ of $E^n$ such that $\xi\Gamma_1\xi^{-1} = \Gamma_2$ and $\xi{\rm N}_1\xi^{-1} = {\rm N}_2$. \end{theorem} \begin{proof} Suppose $\eta_1$ and $\eta_2$ are isometrically equivalent.
Then there exist isometries $\alpha:E^n/\Gamma_1\to E^n/\Gamma_2$ and $\beta: (E^n/V_1)/(\Gamma_1/{\rm N}_1)\to(E^n/V_2)/(\Gamma_2/{\rm N}_2)$ such that $\beta\eta_1=\eta_2\alpha$. The isometry $\alpha$ lifts to an isometry $\xi$ of $E^n$ such that $\alpha(\Gamma_1 x) = \Gamma_2\xi(x)$ for each $x\in E^n$ by Theorem 13.2.6 of Ratcliffe \cite{R}. Therefore $\xi\Gamma_1\xi^{-1}=\Gamma_2$. As $\beta\eta_1=\eta_2\alpha$, the isometry $\alpha$ maps a generic fiber of $\eta_1$ onto a generic fiber of $\eta_2$. Let $V_1+x$ be a coset of $V_1$ that projects to a generic fiber of $\eta_1$. Then $\xi$ maps $V_1+x$ onto $V_2+\xi(x)$ and $V_2+\xi(x)$ projects to a generic fiber of $\eta_2$. Hence $\xi$ conjugates the stabilizer of $V_1+x$ in $\Gamma_1$ to the stabilizer of $V_2+\xi(x)$ in $\Gamma_2$. Therefore $\xi{\rm N}_1\xi^{-1} = {\rm N}_2$. Conversely, suppose $\xi$ is an isometry of $E^n$ such that $\xi\Gamma_1\xi^{-1} = \Gamma_2$ and $\xi{\rm N}_1\xi^{-1} = {\rm N}_2$. Then $\xi$ induces an isometry $\alpha: E^n/\Gamma_1 \to E^n/\Gamma_2$ defined by $\alpha(\Gamma_1x) = \Gamma_2\xi(x)$ for each $x\in E^n$. Let ${\rm T}_i$ be the group of translations of $\Gamma_i$ for $i=1,2$. Then $\xi{\rm T}_1\xi^{-1} = {\rm T}_2$, since ${\rm T}_i$ is the unique maximal abelian normal subgroup of $\Gamma_i$ for $i=1,2$. Hence $\xi({\rm N}_1\cap{\rm T}_1)\xi^{-1} = {\rm N}_2\cap{\rm T}_2$, and so if $\xi =b+B$, then $BV_1 = V_2$. Therefore $\xi$ maps each coset of $V_1$ in $E^n$ onto a coset of $V_2$ in $E^n$. Hence $\xi$ induces an isometry $\overline\xi: E^n/V_1\to E^n/V_2$ defined by $\overline\xi(V_1+x) = V_2+\xi(x)$. If $\gamma\in \Gamma_1$ and $x\in E^n$, then we have \begin{eqnarray*} \overline\xi({\rm N}_1\gamma(V_1+x)) & = & \overline\xi(V_1+\gamma(x)) \\ & = & V_2+\xi\gamma(x) \\ & = & V_2 + \xi\gamma\xi^{-1}\xi(x) \ \ = \ \ ({\rm N}_2\xi\gamma\xi^{-1})\overline\xi(V_1+x).
\end{eqnarray*} Hence $\overline\xi({\rm N}_1\gamma) = ({\rm N}_2\xi\gamma\xi^{-1})\overline\xi$, and so $\overline\xi({\rm N}_1\gamma)\overline\xi^{-1}={\rm N}_2\xi\gamma\xi^{-1}$. Therefore we have $\overline\xi(\Gamma_1/{\rm N}_1)\overline\xi\hbox{}^{-1} = \Gamma_2/{\rm N}_2$. Hence $\overline\xi$ induces an isometry $$\beta: (E^n/V_1)/(\Gamma_1/{\rm N}_1)\to(E^n/V_2)/(\Gamma_2/{\rm N}_2)$$ defined by $$\beta((\Gamma_1/{\rm N}_1)(V_1+x)) = (\Gamma_2/{\rm N}_2)\overline\xi(V_1+x) = (\Gamma_2/{\rm N}_2)(V_2+\xi(x)).$$ If $x\in E^n$, we have \begin{eqnarray*} \beta\eta_1(\Gamma_1x) & = & \beta((\Gamma_1/{\rm N}_1)(V_1+x)) \\ & = & (\Gamma_2/{\rm N}_2)(V_2+\xi(x)) \\ & = & \eta_2(\Gamma_2\xi(x)) \ \ = \ \ \eta_2\alpha(\Gamma_1x). \end{eqnarray*} Therefore $\beta\eta_1=\eta_2\alpha$. Thus $\eta_1$ and $\eta_2$ are isometrically equivalent. \end{proof} Let $M$ be a connected, complete flat $n$-orbifold. A {\it universal orbifold covering projection} is a geometric fibration projection $\pi: \tilde M \to M$ that is isometrically equivalent to the natural projection $\pi_\Gamma:E^n\to E^n/\Gamma$ for some discrete group $\Gamma$ of isometries of $E^n$. There exists a universal orbifold covering projection $\pi: \tilde M \to M$ by Theorem 13.3.10 of \cite{R}, and any two universal orbifold covering projections $\pi_i:\tilde M_i \to M$, $i=1,2$, are isometrically equivalent by Theorem 13.2.6 of \cite{R}. Let $\pi_i:\tilde M_i\to M_i$ be universal orbifold covering projections for $i=1,2$. A homeomorphism $\alpha: M_1\to M_2$ is said to be {\it affine} if there is an affine homeomorphism $\tilde\alpha: \tilde M_1 \to \tilde M_2$ such that $\alpha\pi_1=\pi_2\tilde\alpha$. Note that $\alpha: M_1\to M_2$ being affine does not depend on the choice of the universal orbifold covering projections $\pi_i:\tilde M_i\to M_i$. 
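To illustrate the notion of an affine homeomorphism of flat orbifolds, here is a simple example that is not part of the original text: an affine self-homeomorphism of the square flat torus that is not an isometry.

```latex
% Illustration (assumed example, not in the original text).
% Let M_1 = M_2 = E^2/\Gamma, where \Gamma = \mathbb{Z}^2 acts by translations,
% and let \pi: E^2 \to E^2/\Gamma be the universal orbifold covering projection.
% Consider the linear shear
\tilde\alpha(x) = Ax, \qquad
A = \left(\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right) \in {\rm GL}(2,\mathbb Z).
% Since A\mathbb{Z}^2 = \mathbb{Z}^2, the map \tilde\alpha normalizes \Gamma,
% and so it descends to a homeomorphism \alpha of E^2/\Gamma with
% \alpha\pi = \pi\tilde\alpha.  The map \alpha is affine in the sense above,
% but it is not an isometry, because A is not orthogonal.
```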
The fibration projections $\eta_1$ and $\eta_2$ are said to be {\it affinely equivalent} if there are affine homeomorphisms $\alpha:M_1\to M_2$ and $\beta:B_1\to B_2$ such that $\beta\eta_1=\eta_2\alpha$. \begin{theorem} Let ${\rm N}_i$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma_i$ for $i=1,2$, and let $\eta_i:E^n/\Gamma_i \to (E^n/V_i)/(\Gamma_i/{\rm N}_i)$, with $V_i = {\rm Span}({\rm N}_i)$, be the fibration projection determined by ${\rm N}_i$ for $i=1,2$. Then the following are equivalent: \begin{enumerate} \item The fibration projections $\eta_1$ and $\eta_2$ are affinely equivalent. \item There is an affine homeomorphism $\phi$ of $E^n$ such that $\phi\Gamma_1\phi^{-1} = \Gamma_2$ and $\phi{\rm N}_1\phi^{-1} = {\rm N}_2$. \item There is an isomorphism $\psi:\Gamma_1\to \Gamma_2$ such that $\psi({\rm N}_1) = {\rm N}_2$. \end{enumerate} \end{theorem} \begin{proof} The proof of the equivalence of (1) and (2) is the same as the proof of Theorem 9. The equivalence of (2) and (3) follows from Theorem 7.5.4 of Ratcliffe \cite{R}. \end{proof} \section{Reducibility of Crystallographic Groups} Let ${\rm N}$ be a normal subgroup of an $n$-dimensional space group $\Gamma$. The {\it dimension} of ${\rm N}$, denoted by ${\rm dim}({\rm N})$, is defined to be the dimension of $V = {\rm Span}({\rm N})$. Note that ${\rm dim}({\rm N})$ is equal to the virtual cohomological dimension of ${\rm N}$ by Theorem 2. The theory in this paper is nontrivial only if $0 < {\rm dim}({\rm N}) < n$, and so we should discuss when such a normal subgroup ${\rm N}$ exists. Let ${\rm T}$ be the group of translations of $\Gamma$, and let $\Pi$ be the point group of $\Gamma$. The group ${\rm T}$ is free abelian of rank $n$ and $\{b: b+I\in {\rm T}\}$ spans $E^n$ by Theorem 1. Choose a set $\{b_1+I,\ldots,b_n+I\}$ of $n$ generators of ${\rm T}$. Then $\{b_1,\ldots, b_n\}$ is a basis of $E^n$. Let $\gamma = b+B \in \Gamma$. Then $\gamma(b_j+I)\gamma^{-1} = Bb_j+I\in{\rm T}$ for each $j=1,\ldots,n$.
Hence there are integers $c_{ij}$ for $i,j=1,\ldots,n$ such that $Bb_j = \sum_{i=1}^nc_{ij}b_i$ for each $j$. The representation $\rho:\Pi\to {\rm GL}(n,\mathbb Z)$ defined by $\rho(B) = (c_{ij})$ is a monomorphism. The representation $\rho$ is said to be {\it reducible} if there is an integer $k$, with $0< k < n$, such that every matrix in the image of $\rho$ is in the block form $$\left(\begin{array}{cc} A & B \\ O & D \end{array}\right)$$ where $A$ is a $k\times k$ block and $O$ is an $(n-k)\times k$ block of zeros. The group $\Gamma$ is said to be $\mathbb Z$-{\it reducible} if there is a set of $n$ generators of ${\rm T}$ such that the corresponding representation $\rho:\Pi\to {\rm GL}(n,\mathbb Z)$ is reducible. \begin{theorem} Let $\Gamma$ be an $n$-dimensional space group with translation group ${\rm T}$ and point group $\Pi$. Then the following are equivalent: \begin{enumerate} \item The group $\Gamma$ is $\mathbb Z$-reducible. \item There exists a $\Pi$-invariant vector subspace $V$ of $E^n$ with basis $\{b_1,\ldots, b_k\}$ such that $0< k < n$ and $b_i+I \in {\rm T}$ for each $i=1,\ldots, k$. \item The group $\Gamma$ has a normal subgroup ${\rm N}$ of dimension $k$ with $0 < k < n$. \end{enumerate} \end{theorem} \begin{proof} Suppose $\Gamma$ is $\mathbb Z$-reducible. Then there is a set of generators $\{b_1+I,\ldots,b_n+I\}$ of ${\rm T}$ such that the corresponding representation $\rho:\Pi\to {\rm GL}(n,\mathbb Z)$ is reducible. Let $k$ be the integer in the definition of reducibility, and let $V={\rm Span}\{b_1,\ldots, b_k\}$. Then $V$ is a $\Pi$-invariant vector subspace of $E^n$ with basis $\{b_1,\ldots, b_k\}$ such that $0< k < n$ and $b_i+I \in {\rm T}$ for each $i=1,\ldots, k$. Thus (1) implies (2). Suppose there exists a $\Pi$-invariant vector subspace $V$ of $E^n$\! with basis $\{b_1,\ldots, b_k\}$ such that $0< k < n$ and $b_i+I \in {\rm T}$ for each $i=1,\ldots, k$.
Define $${\rm N}=\{a+I\in{\rm T}:\ a\in V\}.$$ As $V$ is $\Pi$-invariant, ${\rm N}$ is a normal subgroup of $\Gamma$. As $V = {\rm Span}({\rm N})$, we have that ${\rm dim}({\rm N}) = k$. Thus (2) implies (3). Suppose $\Gamma$ has a normal subgroup ${\rm N}$ of dimension $k$ with $0 < k < n$. Then ${\rm T}$ has a set of generators $\{b_1+I,\ldots, b_n+I\}$ such that $b_1+I,\ldots, b_k+I$ generate $\overline{{\rm N}}\cap{\rm T}$ by the proof of Lemma 1. As $\overline{\rm N}\cap{\rm T}$ is a normal subgroup of $\Gamma$, the additive group generated by $b_1,\ldots, b_k$ is $\Pi$-invariant. Hence the representation $\rho:\Pi\to {\rm GL}(n,\mathbb Z)$ determined by the set of generators $\{b_1+I,\ldots,b_n+I\}$ of ${\rm T}$ is reducible. Thus (3) implies (1). \end{proof} \begin{theorem} Let $\Gamma$ be an $n$-dimensional space group. If $\Gamma$ has a normal subgroup ${\rm N}$ of dimension $k$, then $\Gamma$ has a normal subgroup ${\rm N}'$ of dimension $(n-k)$. \end{theorem} \begin{proof} This follows from Theorem 11 and Proposition 2.1.1 of Brown, Neub\"user, and Zassenhaus \cite{B-N-Z}. \end{proof} \section{Geometric Fiber Bundles} Let $\Gamma$ be an $n$-dimensional space group. If $E^n/\Gamma$ geometrically fibers over a flat manifold $B$ with generic fiber a flat orbifold $F$, then $E^n/\Gamma$ is a fiber bundle over $B$ with totally geodesic fibers isometric to $F$. \vspace{.15in} \noindent{\bf Definition:} A flat $n$-orbifold $M$ is a {\it geometric fiber bundle} over a flat $m$-manifold $B$ with fiber a flat $(n-m)$-orbifold $F$ if there is a surjective map $\eta: M \to B$, called the {\it fibration projection}, such that for each point $y$ in $B$, there is an open metric ball $B(y,r)$ of radius $r>0$ centered at $y$ in $B$ such that $\eta$ is isometrically equivalent on $\eta^{-1}(B(y,r))$ to the natural projection $F\times B_y \to B_y$ onto an open metric ball $B_y$ in $E^m$ of radius $r$. 
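As a simple illustration of this definition, consider the following example, which is added here and is not taken from the original text: the square flat torus fibers as a geometric fiber bundle over a circle.

```latex
% Illustration (assumed example, not in the original text).
% Let \Gamma = \langle e_1+I,\, e_2+I \rangle act on E^2 by translations, and set
{\rm N} = \langle e_2+I\rangle, \qquad V = {\rm Span}\{e_2\}.
% Then {\rm N} is a complete normal subgroup of \Gamma, and \Gamma/{\rm N} \cong \mathbb{Z}
% is torsion-free.  Hence the fibration projection
\eta: E^2/\Gamma \to (E^2/V)/(\Gamma/{\rm N})
% exhibits the flat torus E^2/\Gamma as a geometric fiber bundle over the circle
% (E^2/V)/(\Gamma/{\rm N}) with fiber the circle V/{\rm N}.
```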
\vspace{.15in} \begin{lemma} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. Then $(E^n/V)/(\Gamma/{\rm N})$ is a flat manifold if and only if $\Gamma/{\rm N}$ is torsion-free. \end{lemma} \begin{proof} Suppose $(E^n/V)/(\Gamma/{\rm N})$ is a flat manifold. Then the quotient map from $E^n/V$ to $(E^n/V)/(\Gamma/{\rm N})$ is a universal covering projection with $\Gamma/{\rm N}$ its group of covering transformations. Therefore $(E^n/V)/(\Gamma/{\rm N})$ is an aspherical manifold, and so its fundamental group is torsion-free. Therefore $\Gamma/{\rm N}$ is torsion-free. Conversely if $\Gamma/{\rm N}$ is torsion-free, then $(E^n/V)/(\Gamma/{\rm N})$ is a flat manifold, since the finite group $G_x=\Gamma_{V+x}/{\rm N}$ is trivial for each $x\in V^\perp$. \end{proof} \begin{theorem} Let ${\rm N}$ be a normal subgroup of an $n$-dimensional space group $\Gamma$ such that $\Gamma/{\rm N}$ is torsion-free, and let $V = {\rm Span}({\rm N})$. Then ${\rm N}$ is complete, and the flat orbifold $E^n/\Gamma$ is a geometric fiber bundle over the flat manifold $(E^n/V)/(\Gamma/{\rm N})$ with fiber the flat orbifold $V/{\rm N}$. \end{theorem} \begin{proof} We have that ${\rm N}$ is complete by Theorem 5. The rest of the theorem follows from Lemma 2 and Theorem 4. \end{proof} \begin{theorem} Let $\Gamma$ be an $n$-dimensional space group with first Betti number $\beta_1$. Then $\Gamma$ has a unique normal subgroup ${\rm N}$ such that $\Gamma/{\rm N}$ is a free abelian group of rank $\beta_1$, and the flat orbifold $E^n/\Gamma$ uniquely fibers as a geometric fiber bundle over a flat $\beta_1$-torus. \end{theorem} \begin{proof} We have $\Gamma/[\Gamma, \Gamma] \cong G\oplus\mathbb Z^{\beta_1}$ with $G$ a finite abelian group. 
Hence the subgroup of $\Gamma$ containing $[\Gamma,\Gamma]$ that corresponds to $G$ is the unique normal subgroup ${\rm N}$ of $\Gamma$ such that $\Gamma/{\rm N}$ is a free abelian group of rank $\beta_1$. By Theorem 13, we have that ${\rm N}$ is complete. Let $V={\rm Span}({\rm N})$. Then $(E^n/V)/(\Gamma/{\rm N})$ is a flat $\beta_1$-torus, since $\Gamma/{\rm N}$ is a free abelian group of rank $\beta_1$. Therefore $E^n/\Gamma$ fibers as a geometric fiber bundle over a flat $\beta_1$-torus by Theorem 4. The flat orbifold $E^n/\Gamma$ uniquely fibers as a geometric fiber bundle over a flat $\beta_1$-torus by Theorem 7, since ${\rm N}$ is the unique normal subgroup of $\Gamma$ such that $\Gamma/{\rm N}$ is free abelian of rank $\beta_1$. \end{proof} \begin{lemma} If $\Gamma$ is an $n$-dimensional space group with translation group ${\rm T}$ and point group $\Pi$, then the transfer homomorphism ${\rm tr}: \Gamma \to {\rm T}$ is given by the formula $${\rm tr}(b+B) = \left({\textstyle\sum}\{A:A\in\Pi\}\right)b + I \ \ \hbox{for each}\ \ b+B\in\Gamma.$$ \end{lemma} \begin{proof} For each $A\in\Pi$, choose a coset representative $a_A+A$ of ${\rm T}$ in $\Gamma$ corresponding to $A$. Given an element $b+B\in \Gamma$ and a coset representative $a_A+A$, there exists a unique coset representative $a_{A'}+A'$ such that $$(a_A+A)(b+B)(a_{A'}+A')^{-1}\in{\rm T}.$$ The transfer homomorphism ${\rm tr}: \Gamma \to {\rm T}$ is defined by the formula $${\rm tr}(b+B) = {\textstyle\prod}\{(a_A+A)(b+B)(a_{A'}+A')^{-1}: A\in \Pi\}.$$ We have that $$(a_A+A)(b+B)(a_{A'}+A')^{-1} = a_A+Ab-AB(A')^{-1}a_{A'}+AB(A')^{-1}.$$ Therefore $AB(A')^{-1}=I$, and so $A' = AB$. Hence we have that \begin{eqnarray*} {\rm tr}(b+B) & = & {\textstyle\prod}\{a_A+Ab-a_{AB}+I: A\in\Pi\} \\ & = & \big({\textstyle\sum}\{a_A:A\in\Pi\} -{\textstyle\sum}\{a_{AB}:A\in\Pi\}+{\textstyle\sum}\{Ab:A\in\Pi\}\big) + I \\ & = & \big({\textstyle\sum}\{A:A\in\Pi\}\big)b + I.
\end{eqnarray*} \end{proof} \begin{lemma} If $\Pi$ is a finite group of orthogonal transformations of $E^n$, then \begin{enumerate} \item ${\rm Im}({\textstyle\sum}\{A:A\in \Pi\}) = {\rm Fix}(\Pi)$, \item ${\rm Ker}({\textstyle\sum}\{A:A\in \Pi\}) = {\rm Fix}(\Pi)^\perp$. \end{enumerate} \end{lemma} \begin{proof} Let $B\in \Pi$. Observe that $$(B-I)({\textstyle\sum}\{A:A\in \Pi\}) = O.$$ Now ${\rm Ker}(B-I) = {\rm Fix}(B)$. Hence we have $${\rm Im}({\textstyle\sum}\{A:A\in \Pi\}) \subseteq {\rm Fix}(\Pi).$$ Let $x\in {\rm Fix}(\Pi)$. Then we have $$({\textstyle\sum}\{A:A\in \Pi\})(x) = |\Pi|x.$$ Hence $x\in {\rm Im}({\textstyle\sum}\{A:A\in \Pi\})$. This proves (1). Let $x\in {\rm Ker}({\textstyle\sum}\{A:A\in \Pi\})$. Write $x = u+v$ with $u \in {\rm Fix}(\Pi)$ and $v\in {\rm Fix}(\Pi)^\perp$. Then we have that $$0 = ({\textstyle\sum}\{A:A\in \Pi\})(x) = |\Pi|u + ({\textstyle\sum}\{A:A\in \Pi\})(v).$$ Now $|\Pi|u \in {\rm Fix}(\Pi)$ and $({\textstyle\sum}\{A:A\in \Pi\})(v) \in {\rm Fix}(\Pi)^\perp$, since ${\rm Fix}(\Pi)^\perp$ is a $\Pi$-invariant subspace of $E^n$. Therefore $|\Pi|u = 0$, and so $u = 0$. Hence $x\in{\rm Fix}(\Pi)^\perp$. Conversely, suppose $x\in {\rm Fix}(\Pi)^\perp$. By (1), we have $$({\textstyle\sum}\{A:A\in \Pi\})(x)\in{\rm Fix}(\Pi)\cap {\rm Fix}(\Pi)^\perp = \{0\}.$$ Therefore $x\in {\rm Ker}({\textstyle\sum}\{A:A\in \Pi\})$. This proves (2). \end{proof} Let ${\rm N}$ and ${\rm N}'$ be normal subgroups of a space group $\Gamma$. We say that ${\rm N}$ and ${\rm N}'$ are {\it orthogonal} if ${\rm Span}({\rm N}') = ({\rm Span}({\rm N}))^\perp$. If ${\rm N}$ and ${\rm N}'$ are orthogonal, complete, normal subgroups of $\Gamma$, we define ${\rm N}^\perp = {\rm N}'$. \begin{theorem} Let $\Gamma$ be an $n$-dimensional space group with first Betti number $\beta_1$.
Then the kernel of the transfer homomorphism ${\rm tr}: \Gamma \to {\rm T}$ is the unique normal subgroup ${\rm N}$ of $\Gamma$ such that $\Gamma/{\rm N}$ is a free abelian group of rank $\beta_1$. Moreover ${\rm N}$ and $Z(\Gamma)$ are orthogonal, complete, normal subgroups of $\Gamma$. \end{theorem} \begin{proof} Let $\Pi$ be the point group of $\Gamma$. By Lemmas 3 and 4, we have that $${\rm tr}(Z(\Gamma)) = \{|\Pi|b+I: b+I\in Z(\Gamma)\}\subseteq {\rm Im}({\rm tr}) \subseteq Z(\Gamma).$$ Hence ${\rm Im}({\rm tr})$ is a free abelian group of rank $\beta_1$ by Theorem 6. Therefore ${\rm Ker}({\rm tr})$ is the unique normal subgroup ${\rm N}$ of $\Gamma$ such that $\Gamma/{\rm N}$ is a free abelian group of rank $\beta_1$. As $\Gamma/{\rm N}$ is free abelian, and so torsion-free, the group ${\rm N}$ is complete by Theorem 13. By Lemmas 3 and 4, we have that $${\rm N} = {\rm Ker}({\rm tr}) = \{b+ B\in \Gamma: b\in {\rm Fix}(\Pi)^\perp\}.$$ Hence ${\rm Span}({\rm N}) \subseteq {\rm Fix}(\Pi)^\perp$. By Theorems 6 and 14, we have that $${\rm dim}({\rm Span}({\rm N})) = n-\beta_1 = {\rm dim}({\rm Fix}(\Pi)^\perp).$$ Therefore ${\rm Span}({\rm N}) ={\rm Fix}(\Pi)^\perp$. Now as ${\rm Span}(Z(\Gamma)) ={\rm Fix}(\Pi)$, we conclude that ${\rm N}^\perp = Z(\Gamma)$. \end{proof} \section{Crystallographic Group Extensions} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, let $V = {\rm Span}({\rm N})$, and let $m = {\rm dim}(E^n/V)$. Then ${\rm N}$ is an $(n-m)$-dimensional space group by Theorem 2, and $\Gamma/{\rm N}$ is an $m$-dimensional space group by Theorem 4. We call the exact sequence of natural group homomorphisms, $$ 1 \to {{\rm N}}\ {\buildrel i\over \longrightarrow}\ \Gamma\ {\buildrel p\over\longrightarrow} \ \Gamma/{\rm N}\to 1,$$ a {\it space group extension}. In this section, we study the relationship between the point groups of ${\rm N}, \Gamma$, and $\Gamma/{\rm N}$. \begin{lemma} Let $\Gamma$ be a space group with group of translations ${\rm T}$ and point group $\Pi$.
Let ${\rm N}$ be a normal subgroup of $\Gamma$, and let $\Phi = \{A\in \Pi:a+A\in{\rm N}\ \hbox{for some}\ a\}$. Then \begin{enumerate} \item the group of translations of ${\rm N}$ is ${\rm N}\cap{\rm T}$, \item the point group of ${\rm N}$ is isomorphic to $\Phi$, and \item the group $\Phi$ is a normal subgroup of $\Pi$. \end{enumerate} \end{lemma} \begin{proof} Let $V = {\rm Span}({\rm N})$, and let $a+A\in {\rm N}$. By Theorem 2, we have that $a\in V$ and $V^\perp \subseteq {\rm Fix}(A)$. Hence $a+A\in{\rm N}$ acts as a translation on $V$ if and only if $A = I$. Thus (1) holds. Let $\eta:\Gamma\to\Pi$ be defined by $\eta(b+B) = B$. Then (2) follows from (1), since ${\rm Ker}(\eta|_{\rm N})={\rm N}\cap{\rm T}$, and (3) follows from (1), since ${\rm N}/({\rm N}\cap{\rm T}) \cong{\rm N}{\rm T}/{\rm T}$, and so $\Phi$ corresponds to the normal subgroup ${\rm N}{\rm T}/{\rm T}$ of $\Gamma/{\rm T}$. \end{proof} \begin{theorem} Let $\Gamma$ be a space group with point group $\Pi$. Let ${\rm N}$ be a complete normal subgroup of $\Gamma$, let $V = {\rm Span}({\rm N})$, let $\Psi = \{B\in\Pi: V^\perp\subseteq {\rm Fix}(B)\}$, let ${\rm M} = \{b+B\in\Gamma: B\in \Psi\}$, and let $\Phi$ be as in Lemma 5. Then \begin{enumerate} \item the group of translations of $\Gamma/{\rm N}$ is ${\rm M}/{\rm N}$, \item the group $\Phi$ is a normal subgroup of $\Psi$, \item the group $\Psi$ is a normal subgroup of $\Pi$, \item the point group of $\Gamma/{\rm N}$ is isomorphic to $\Pi/\Psi$. \end{enumerate} \end{theorem} \begin{proof} Let $b+B\in\Gamma$. Suppose $b+B$ acts as a translation on $E^n/V$. Then $(b+B)(V+x) = V + c + x$ for some $c\in E^n$ and all $x\in V^\perp$. Now $(b+B)(V+x) = V+b+Bx$. Taking $x=0$, we see that $b-c\in V$, and so $V+b+Bx = V+b+x$. Hence $Bx-x \in V\cap V^\perp = \{0\}$. Therefore $x\in {\rm Ker}(B-I) = {\rm Fix}(B)$. Hence $V^\perp \subseteq {\rm Fix}(B)$, and so $b+B\in {\rm M}$. Conversely if $b+B\in {\rm M}$, then $b+B$ obviously acts as a translation on $E^n/V$.
Thus (1) holds. Let $A\in \Phi$. By Theorem 2, we have that $A\in\Psi$. Thus (2) holds by Lemma 5. Let $B\in \Psi$, and let $C\in\Pi$. By Theorem 2, we have that $C$ leaves $V$ invariant. Hence $C$ leaves $V^\perp$ invariant. Therefore $CBC^{-1}\in\Psi$. Thus (3) holds. Let ${\rm T}$ be the translation group of $\Gamma$. Then (4) follows from (1), since $$(\Gamma/{\rm N})/({\rm M}/{\rm N})\cong \Gamma/{\rm M} \cong (\Gamma/{\rm T})/({\rm M}/{\rm T}) \cong \Pi/\Psi.$$ \vspace{-.2in} \end{proof} For example, let $e_1 = (1,0)$ and $e_2 = (0,1)$, let $t_i = e_i+I$ for $i=1,2$, let $\beta = \frac{1}{2}e_1+{\rm diag}(1,-1)$, and let $\Gamma = \langle t_1,t_2,\beta\rangle$. Then $\Gamma$ is a 2-dimensional space group, and $E^2/\Gamma$ is a flat Klein bottle. The group $\langle t_2\rangle$ is a complete normal subgroup of $\Gamma$ and ${\rm Span}\langle t_2\rangle = {\rm Span}\{e_2\}$. Hence $\Phi$ is trivial and $\Psi$ has order two. \begin{corollary} If $\Gamma$ is a space group with translation group ${\rm T}$, then the group of translations of $\Gamma/Z(\Gamma)$ is ${\rm T}/Z(\Gamma)$ and the point group of $\Gamma/Z(\Gamma)$ is isomorphic to the point group of $\Gamma$. \end{corollary} \section{Splitting Crystallographic Group Extensions} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, let $V = {\rm Span}({\rm N})$, and consider the corresponding space group extension $$ 1 \to {\rm N}\ {\buildrel i\over \longrightarrow}\ \Gamma\ {\buildrel p\over\longrightarrow} \ \Gamma/{\rm N}\to 1.$$ In this section, we study the relationship between the above space group extension splitting ($p$ having a right inverse) and the corresponding fibration projection $\eta: E^n/\Gamma \to (E^n/V)/(\Gamma/{\rm N})$ having an affine section. Here an {\it affine section} of $\eta$ is an affine map $\sigma: (E^n/V)/(\Gamma/{\rm N}) \to E^n/\Gamma$ such that $\eta\sigma$ is the identity map of $(E^n/V)/(\Gamma/{\rm N})$. 
A map $\sigma: (E^n/V)/(\Gamma/{\rm N}) \to E^n/\Gamma$ is {\it affine} if $\sigma$ lifts to an affine map $\tilde\sigma: E^n/V \to E^n$ with respect to the natural quotient maps. \begin{lemma} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. Let $\eta: E^n/\Gamma \to (E^n/V)/(\Gamma/{\rm N})$ be the fibration projection determined by ${\rm N}$. If the space group extension $$ 1 \to {\rm N}\ {\buildrel i\over \longrightarrow}\ \Gamma\ {\buildrel p\over\longrightarrow} \ \Gamma/{\rm N}\to 1$$ splits, then $\eta$ has an affine section. \end{lemma} \begin{proof} Suppose that the space group extension splits. Then $\Gamma$ has a subgroup $\Sigma$ such that $p$ maps $\Sigma$ isomorphically onto $\Gamma/{\rm N}$. By Theorem 5.4.6 of \cite{R}, the group $\Sigma$ has a free abelian subgroup ${\rm H}$ of rank $m$ and finite index, there is an $m$-plane $Q$ of $E^n$ such that ${\rm H}$ acts effectively on $Q$ as a discrete group of translations, and the $m$-plane $Q$ is invariant under $\Sigma$. Hence $$m = {\rm dim}({\rm H}) = {\rm dim}(\Gamma/{\rm N}) = {\rm dim}(E^n/V).$$ By conjugating $\Gamma$ by a translation, we may assume that $Q$ is a vector subspace of $E^n$. Let $a_1+A_1,\ldots, a_m+A_m$ be generators of ${\rm H}$. Then $a_i\in Q$ for each $i$ and $A_i$ fixes $Q$ pointwise for each $i$. Let $k$ be the order of the point group of $\Gamma$. Then $(a_i+A_i)^k = ka_i+I$ for each $i$. Hence, by replacing ${\rm H}$ by a subgroup of finite index, we may assume that $A_i=I$ for each $i$. Now $p(a_i+I)$ acts on $E^n/V$ by ${\rm N}(a_i+I)(V+x) = V+x+a_i$, and so $p(a_i+I)$ acts as a translation on $E^n/V$ for each $i$. As $p({\rm H})$ has finite index in $\Gamma/{\rm N}$, we have that $p({\rm H})$ has finite index in the translation subgroup of $\Gamma/{\rm N}$. Therefore the vectors $V+a_1,\ldots, V+a_m$ span $E^n/V$. Hence the quotient map $\pi:E^n\to E^n/V$ maps $Q$ isomorphically onto $E^n/V$. 
Therefore $V\cap Q = \{0\}$. If $x\in E^n$, then $x$ can be written uniquely as $x = x_V+x_Q$ with $x_V\in V$ and $x_Q\in Q$. Define $\phi:E^n/V\to Q$ by $\phi(V+x) = x_Q$. Then $\phi$ is a well-defined linear isomorphism. Let $a+A\in \Sigma$. Then $a\in Q$ and $A$ leaves both $V$ and $Q$ invariant. Observe that \begin{eqnarray*} \phi(p(a+A)(V+x)) & = & \phi({\rm N}(a+A)(V+x)) \\ & = & \phi(V+a+Ax) \\ & = & a + Ax_Q \\ & = & (a+A)x_Q \ \ = \ \ (a+A)\phi(V+x). \end{eqnarray*} Hence $\phi$ induces an affine map $\overline\phi:(E^n/V)/(\Gamma/{\rm N})\to E^n/\Gamma$ whose image is $\Gamma Q/\Gamma$. Observe that \begin{eqnarray*} \eta\overline\phi((\Gamma/{\rm N})(V+x)) & = & \eta(\Gamma\phi(V+x)) \\ & = & \eta(\Gamma x_Q) \\ & = & (\Gamma/{\rm N})(V+x_Q) \ \ = \ \ (\Gamma/{\rm N})(V+x). \end{eqnarray*} Therefore $\eta\overline\phi$ is the identity map. Thus $\overline\phi$ is an affine section of $\eta$. \end{proof} \begin{lemma} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. Let $\eta: E^n/\Gamma \to (E^n/V)/(\Gamma/{\rm N})$ be the fibration projection determined by ${\rm N}$. If $\eta$ has an affine section $\sigma:(E^n/V)/(\Gamma/{\rm N})\to E^n/\Gamma$ such that ${\rm Im}(\sigma)$ intersects a fiber $F_0$ of $\eta$ at an ordinary point $x_0$ of $F_0$, then the space group extension $ 1 \to {\rm N} \to \Gamma \to \Gamma/{\rm N}\to 1$ splits. \end{lemma} \begin{proof} By conjugating $\Gamma$ by a translation, we may assume that $x_0 = \Gamma 0$. 
Then $\sigma$ lifts to an affine map $\tilde\sigma:E^n/V\to E^n$ such that ${\rm Im}(\tilde\sigma)$ is a vector subspace $Q$ of $E^n$ and the following diagram commutes \[\begin{array}{ccc} E^n/V & {\buildrel \tilde{\sigma} \over \longrightarrow} & E^n \vspace{.1in}\\ \pi_{\Gamma/{\rm N}}\ \downarrow \hspace{.2in} & & \phantom{\pi_\Gamma}\downarrow\ \pi_\Gamma \vspace{.1in} \\ (E^n/V)/(\Gamma/{\rm N}) & {\buildrel \sigma \over \longrightarrow} & E^n/\Gamma \end{array} \] where the vertical maps are the quotient maps. Now we have $$\pi_\Gamma(Q) = \pi_\Gamma(\tilde\sigma(E^n/V)) = \sigma(\pi_{\Gamma/{\rm N}}(E^n/V)) = {\rm Im}(\sigma).$$ Let $\tilde\eta:E^n\to E^n/V$ be the quotient map. Then $\pi_{\Gamma/{\rm N}}\tilde\eta=\eta\pi_\Gamma$, and so $$\pi_{\Gamma/{\rm N}}\tilde\eta(Q) = \eta\pi_\Gamma(Q) = \eta({\rm Im}(\sigma)) = (E^n/V)/(\Gamma/{\rm N}).$$ As $\tilde\eta(Q)$ is a vector subspace of $E^n/V$, we deduce that $\tilde\eta(Q) =E^n/V$. Therefore $\tilde\eta\tilde\sigma: E^n/V\to E^n/V$ is an affine homeomorphism, and so $\tilde\sigma:E^n/V\to E^n$ is an affine embedding whose image is $Q$. Let $\Sigma$ be the stabilizer of $Q$ in $\Gamma$, and let $a+A\in{\rm N}\cap\Sigma$. Then $a\in V\cap Q =\{0\}$. Hence $A = I$, since $\Gamma 0$ is an ordinary point of the fiber $F_0 = \pi_\Gamma(V)$, which is isometric to $V/\Gamma_V$. Therefore $N\cap \Sigma = \{I\}$. Suppose $a+A\in\Sigma$ and $a+A$ fixes $Q$ pointwise. Then $a=0$, since $0\in Q$. Let $x \in Q$. Write $x = v+w$ with $v\in V$ and $w\in V^\perp$. Then $Ax = Av+Aw$, and so $x = Av+Aw$. Now $Av\in V$ and $Aw\in V^\perp$, and so $Av = v$ and $Aw=w$. Hence $A$ fixes $V^\perp$ pointwise, since $\tilde\eta(Q)=E^n/V$. Therefore $A\in {\rm N}$. Hence $A = I$, since $\Gamma 0$ is an ordinary point of $F_0$. Thus $\Sigma$ acts effectively on $Q$. Suppose $x\in Q$, and $\gamma\in \Gamma$, and $y=\gamma x\in Q$. 
Choose $r>0$ small enough so that $B(y,r)\cap B(\alpha y,r)=\emptyset$ unless $\alpha\in\Gamma_y$, the stabilizer of $y$ in $\Gamma$. Then we have $$\pi_\Gamma^{-1}\big(\pi_\Gamma(B(y,r)\cap Q)\big)\cap B(y,r) = \mathop{\cup}_{\alpha\in\Gamma_y}\alpha\big(B(y,r)\cap Q\big).$$ We have $\pi_\Gamma(B(y,r)\cap\gamma Q) = \pi_\Gamma(B(y,r)\cap Q)$. Therefore, we have $$B(y,r)\cap\gamma Q \ \subset \mathop{\cup}_{\alpha\in\Gamma_y}\big(B(y,r)\cap\alpha Q\big).$$ Hence, we have $$B(y,r)\cap\gamma Q\ = \mathop{\cup}_{\alpha\in\Gamma_y}\big(B(y,r)\cap\alpha Q\cap\gamma Q\big).$$ Therefore $\gamma Q = \alpha Q$ for some $\alpha\in \Gamma_y$, since $\Gamma_y$ is finite. Hence $\alpha^{-1}\gamma Q = Q$ and $\alpha^{-1}\gamma x = y$. Thus $\alpha^{-1}\gamma \in \Sigma$ and $\pi_\Gamma$ induces an isometry from $Q/\Sigma$ to ${\rm Im}(\sigma)$. We have a commutative diagram \[\begin{array}{ccc} Q \hspace{-.2in}& {\buildrel \tilde{\eta}_1 \over \longrightarrow} &\!\!\!\!\! E^n/V \vspace{.1in}\\ (\pi_\Gamma)_1 \downarrow \phantom{(\pi_\Gamma)_1} \hspace{-.2in}& & \phantom{\pi_{\Gamma/{\rm N}}}\downarrow\ \pi_{\Gamma/{\rm N}} \vspace{.1in} \\ {\rm Im}(\sigma) \hspace{-.2in}& {\buildrel \eta_1 \over \longrightarrow} & (E^n/V)/(\Gamma/{\rm N}) \end{array} \] with $\tilde\eta_1,\eta_1,(\pi_\Gamma)_1$ the restrictions of $\tilde\eta,\eta,\pi_\Gamma$, respectively; moreover, $\tilde\eta_1$ and $\eta_1$ are homeomorphisms and the fibers of $(\pi_\Gamma)_1$ are the orbits of the action of $\Sigma$ on $Q$. Let $x\in Q$ such that $\pi_{\Gamma/{\rm N}}\tilde\eta_1(x) = \pi_{\Gamma/{\rm N}}(V+x)$ is an ordinary point of $(E^n/V)/(\Gamma/{\rm N})$, and let ${\rm N}\gamma\in \Gamma/{\rm N}$. Then there exists $\gamma'\in\Sigma$ such that $\tilde\eta_1(\gamma'x) = ({\rm N}\gamma)\tilde\eta_1(x)$. Hence $({\rm N}\gamma')(V+x) = ({\rm N}\gamma)(V+x)$, and so ${\rm N}\gamma'={\rm N}\gamma$, since $\pi_{\Gamma/{\rm N}}(V+x)$ is an ordinary point of $(E^n/V)/(\Gamma/{\rm N})$. 
Therefore ${\rm N}\Sigma =\Gamma$. Hence, the space group extension $$ 1 \to {\rm N}\ {\buildrel i\over \longrightarrow}\ \Gamma\ {\buildrel p\over\longrightarrow} \ \Gamma/{\rm N}\to 1$$ splits with $p$ mapping $\Sigma$ isomorphically onto $\Gamma/{\rm N}$. \end{proof} \begin{theorem} Let ${\rm N}$ be a complete, torsion-free, normal subgroup of an $n$-dimen\-sional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. Let $\eta: E^n/\Gamma \to (E^n/V)/(\Gamma/{\rm N})$ be the fibration projection determined by ${\rm N}$. Then $\eta$ has an affine section if and only if the space group extension $1 \to {\rm N} \to \Gamma \to \Gamma/{\rm N}\to 1$ splits. \end{theorem} \begin{proof} If the space group extension $1 \to {\rm N} \to \Gamma \to \Gamma/{\rm N}\to 1$ splits, then $\eta$ has an affine section by Lemma 6. If $\eta$ has an affine section $\sigma$, then ${\rm Im}(\sigma)$ intersects every generic fiber of $\eta$ at an ordinary point, since every point of a generic fiber is an ordinary point, because ${\rm N}$ is torsion-free. Therefore, the space group extension $1 \to {\rm N} \to \Gamma \to \Gamma/{\rm N}\to 1$ splits by Lemma 7. \end{proof} \begin{lemma} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. Let $\eta: E^n/\Gamma \to (E^n/V)/(\Gamma/{\rm N})$ be the fibration projection determined by ${\rm N}$. If $V/{\rm N}$ has a point ${\rm N} v_0$ that is fixed by every isometry of $V/{\rm N}$, then the map $\sigma: (E^n/V)/(\Gamma/{\rm N}) \to E^n/\Gamma$, defined by $\sigma((\Gamma/{\rm N})(V+x)) = \Gamma(v_0+x)$ for each $x\in V^\perp$, is a section of $\eta$ and an affine isometric embedding. \end{lemma} \begin{proof} Let $b+B\in\Gamma$. Write $b= c+d$ with $c\in V$ and $d\in V^\perp$. Then $(b+B)V = V+d$. Let $a+A\in{\rm N}$ and let $v\in V$. 
Then we have $$(b+B)((a+A)v) = (b+B)(a+A)(b+B)^{-1}(b+B)v= (a'+A')((b+B)v)$$ with $a'+A'\in {\rm N}$, because ${\rm N}$ is a normal subgroup of $\Gamma$. Hence $b+B$ induces an isometry $(b+B)_\ast$ from $V/{\rm N}$ to $(V+d)/{\rm N}$ defined by $(b+B)_\ast({\rm N} v) = {\rm N}(b+Bv)$. Now $(b+B)_\ast({\rm N} v_0) = {\rm N}(b+Bv_0)$, and so ${\rm N}(b+Bv_0)$ is fixed by every isometry of $(V+d)/{\rm N}$. Next, we have $(d+I)V = V+d$ and $(d+I)((a+A)v) = (a+A)((d+I)v)$. Hence $d+I$ induces an isometry $(d+I)_\ast$ from $V/{\rm N}$ to $(V+d)/{\rm N}$ defined by $(d+I)_\ast({\rm N} v) = {\rm N}(v+d)$. Now $(d+I)_\ast({\rm N} v_0) = {\rm N}(v_0+d)$. Hence we have $$(d+I)_\ast(b+B)_\ast^{-1}({\rm N}(b+Bv_0)) = {\rm N}(v_0+d),$$ and so ${\rm N}(v_0+d) = {\rm N}(b+Bv_0)$. Therefore there is an $a+A\in{\rm N}$ such that $(a+A)(b+Bv_0) = v_0+d$. Hence $a+Ac+ABv_0 = v_0$. The map $\tilde\sigma:E^n/V\to E^n$, defined by $\tilde\sigma(V+x) = v_0+x$ for each $x\in V^\perp$, is an affine isometric embedding. Observe that \begin{eqnarray*} \tilde\sigma((b+B)(V+x)) & = & \tilde\sigma(V+d+Bx) \\ & = & v_0+d+Bx \\ & = & a+Ac+ABv_0+d+Bx \\ & = & a+Ac+ABv_0+Ad+ABx \\ & = & (a+Ab+AB)(v_0+x) \ \ = \ \ (a+A)(b+B)\tilde\sigma(V+x). \end{eqnarray*} Hence $\tilde\sigma$ induces a map $\sigma:(E^n/V)/(\Gamma/{\rm N})\to E^n/\Gamma$ defined by $\sigma((\Gamma/{\rm N})(V+x)) = \Gamma(v_0+x)$ for each $x\in V^\perp$. Now we have $$\eta\sigma((\Gamma/{\rm N})(V+x)) = \eta(\Gamma(v_0+x)) = (\Gamma/{\rm N})(V+x),$$ and so $\sigma$ is a section of $\eta$ and an affine isometric embedding. \end{proof} \begin{theorem} Let ${\rm N}$ be a complete normal subgroup of an $n$-dimensional space group $\Gamma$, and let $V = {\rm Span}({\rm N})$. If $V/{\rm N}$ has an ordinary point ${\rm N} v_0$ that is fixed by every isometry of $V/{\rm N}$, then the space group extension $1\to{\rm N}\to \Gamma\to \Gamma/{\rm N}\to 1$ splits. 
\end{theorem} \begin{proof} By conjugating $\Gamma$, we may assume that $\Gamma V/\Gamma$ is a generic fiber of $\eta$. Then $\Gamma V/\Gamma$ is isometric to $V/{\rm N}$. By Lemma 8, the fibration projection $\eta: E^n/\Gamma\to (E^n/V)/(\Gamma/{\rm N})$ has an affine section $\sigma$ such that ${\rm Im}(\sigma)$ intersects the fiber $\Gamma V/\Gamma$ in the ordinary point $\Gamma v_0$. Hence the space group extension $1\to{\rm N}\to \Gamma\to \Gamma/{\rm N}\to 1$ splits by Lemma 7. \end{proof} For example, consider a space group extension $1\to{\rm N}\to\Gamma\to\Gamma/{\rm N}\to1$ such that ${\rm N}$ is an infinite dihedral group. Then $V/{\rm N}$ is a closed interval. The midpoint of the closed interval $V/{\rm N}$ is an ordinary point of $V/{\rm N}$ that is fixed by the nonidentity isometry of $V/{\rm N}$. Hence the space group extension $1\to{\rm N}\to\Gamma\to\Gamma/{\rm N}\to1$ splits by Theorem 18. We next consider an example that shows that the hypothesis that ${\rm N} v_0$ is an ordinary point cannot be dropped in Theorem 18. Let $\Gamma$ be the 3-dimensional space group with IT number 113 in Table 1B of \cite{B-Z}. Then $\Gamma = \langle t_1,t_2,t_3,\alpha,\beta,\gamma\rangle$ where $t_i = e_i+I$ for $i=1,2,3$ are the standard translations, and $\alpha = \frac{1}{2}e_1+\frac{1}{2}e_2+A$, $\beta=\frac{1}{2}e_1+B$, $\gamma =\frac{1}{2}e_2+C$, and $$A = \left(\begin{array}{rrr} -1 & 0 & 0\\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array}\right),\ \ B = \left(\begin{array}{rrr} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & -1 \end{array}\right), \ \ C = \left(\begin{array}{rrr} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array}\right).$$ The group ${\rm N} = \langle t_1,t_2,\alpha,\beta\gamma\rangle$ is a complete normal subgroup of $\Gamma$ with $V = {\rm Span}({\rm N}) = {\rm Span}\{e_1,e_2\}$. The isomorphism type of ${\rm N}$ is $2\!\ast\!22$ in Conway's notation \cite{Conway} or $cmm$ in the international notation \cite{S}. The flat orbifold $V/{\rm N}$ is a pointed hood. 
The orbifold $V/{\rm N}$ has a unique cone point ${\rm N}(\frac{1}{4}e_1+\frac{1}{4}e_2)$ which corresponds to the fixed point $\frac{1}{4}e_1+\frac{1}{4}e_2$ of the halfturn $\alpha$. Hence the cone point of $V/{\rm N}$ is fixed by every isometry of $V/{\rm N}$. Therefore the fibration projection $\eta:E^3/\Gamma\to(E^3/V)/(\Gamma/{\rm N})$ has an isometric section by Lemma 8. However the space group extension $1\to {\rm N}\to \Gamma\to \Gamma/{\rm N}\to 1$ does not split, since $\gamma$ projects to an element of order 2 in $\Gamma/{\rm N}$, but $((d+D)\gamma)^2\neq I$ for all $d+D$ in ${\rm N}$. To see why, observe that there are only four possibilities for $D$, namely $D=I, A, BC, ABC$. Suppose $D=I$. Then $d=ke_1+\ell e_2$ for some integers $k$ and $\ell$, and we have $$\big((d+D)\gamma\big)^2 = (1+2\ell)e_2+I \neq I.$$ The proofs of the other three cases for $D$ are similar. This example also shows that the hypothesis that ${\rm N}$ is torsion-free cannot be dropped in Theorem 17. \section{Seifert Fibrations} We call a geometric fibration of a flat $n$-orbifold $M$ over a flat $(n-1)$-orbifold $B$ with generic fiber a connected, compact, flat 1-orbifold $F$ a {\it geometric Seifert fibration}. Here $F$ is either a circle or a closed interval. Seifert fibrations of compact flat 3-orbifolds have been studied by Bonahon and Siebenmann \cite{B-S}, Dunbar \cite{Dunbar}, and Conway et al. \cite{C-T}. Every Seifert fibration of a compact flat 2- or 3-orbifold is isotopic to a geometric Seifert fibration by Proposition 2.12 of Boileau, Maillot, and Porti \cite{B-M-P} and the discussion in \cite{B-S}. Let $F$ be a connected, compact, flat 1-orbifold, and let $B$ be a flat $(n-1)$-orbifold. Then $F\times B$ is a flat $n$-orbifold. The natural projection of $F\times B$ onto $B$ is a geometric Seifert fibration over $B$ with generic fiber $F$. 
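The case analysis in the IT number 113 example above reduces to finite affine-map arithmetic, so it can be checked mechanically. The sketch below is an illustrative verification, not part of the original argument; the helper \texttt{mul} and the choice of coset representatives for the translation parts of ${\rm N}$ are my own. It composes isometries $b+B$ as pairs \texttt{(b, B)} and confirms $((d+D)\gamma)^2\neq I$ in all four cases for $D$, over a small range of lattice translations.

```python
import numpy as np
from itertools import product

# Point-group generators from the IT number 113 example above.
A = np.diag([-1.0, -1.0, 1.0])
B = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, -1.0]])
C = np.diag([-1.0, 1.0, -1.0])

e1, e2 = np.eye(3)[0], np.eye(3)[1]

def mul(f, g):
    # Compose affine isometries: (a + A)(b + B) = (a + A b) + A B.
    a, Af = f
    b, Bg = g
    return (a + Af @ b, Af @ Bg)

gamma = (e2 / 2, C)

# The four point-group cases D = I, A, BC, ABC of N = <t1, t2, alpha, beta*gamma>,
# each with a representative translation part modulo the lattice Z e1 + Z e2.
cases = [(np.zeros(3), np.eye(3)),
         ((e1 + e2) / 2, A),
         (e1, B @ C),
         (-e1 / 2 + e2 / 2, A @ B @ C)]

for (d0, D), (k, l) in product(cases, product(range(-2, 3), repeat=2)):
    n = (d0 + k * e1 + l * e2, D)   # an element d + D of N
    c, E = mul(n, gamma)            # (d + D) gamma
    sq = mul((c, E), (c, E))        # ((d + D) gamma)^2
    # The square is never the identity, so gamma has no lift of order 2:
    assert not (np.allclose(sq[0], 0) and np.allclose(sq[1], np.eye(3)))
```

Each assertion holds for one of two reasons visible in the hand computation: for $D=I$ and $D=A$ the translation part of the square is a nonzero odd multiple of $e_2$ or $e_1$, while for $D=BC$ and $D=ABC$ the linear part of the square is a nontrivial rotation.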
Let ${\rm I}$ be a closed interval, let $\hat B$ be a connected, complete, flat $(n-1)$-orbifold, and let $\sigma$ be an involution of ${\rm I}\times \hat B$ which acts diagonally, as a reflection on ${\rm I}$, and effectively and isometrically on $\hat B$. Let $M = ({\rm I}\times \hat B)/\langle \sigma\rangle$. Then $M$ is a flat $n$-orbifold and $B = \hat B/\langle \sigma\rangle$ is a flat $(n-1)$-orbifold. The natural projection of $M$ onto $B$ is a geometric Seifert fibration over $B$ with generic fiber ${\rm I}$. The flat orbifold $M$ is called the {\it twisted ${\rm I}$-bundle} over $B$ determined by the orbifold double cover $\hat B$ of $B$. \begin{theorem} Let $M$ be a connected, complete, flat $n$-orbifold, let $B$ be a flat $(n-1)$-orbifold, and let $\eta:M\to B$ be a geometric Seifert fibration with generic fiber a closed interval ${\rm I}$. Let $\dot{{\rm I}}$ be the set of endpoints of ${\rm I}$, let $\dot{M}$ be the union of all the endpoints of the fibers of $\eta$ that are determined by the endpoints of ${\rm I}$, and let $\dot{\eta}:\dot{M} \to B$ be the restriction of $\eta$. Then $\dot{M}$ is a complete, flat $(n-1)$-orbifold, and $\dot{\eta}$ is a geometric fibration over $B$ with generic fiber the flat $0$-orbifold $\dot{{\rm I}}$. If $\dot{M}$ is disconnected, then $\eta$ is isometrically equivalent to the natural projection ${\rm I} \times B \to B$. If $\dot{M}$ is connected, then $\eta$ is isometrically equivalent to the natural projection of the twisted ${\rm I}$-bundle over $B$ determined by the orbifold double covering $\dot{\eta}: \dot{M} \to B$. \end{theorem} \begin{proof} That $\dot{M}$ is a flat $(n-1)$-orbifold and $\dot{\eta}$ is a geometric fibration over $B$ with generic fiber $\dot{{\rm I}}$ follows from the definition of the geometric fibration $\eta$. The orbifold $\dot{M}$ is complete, since $\dot{M}$ is a closed subspace of the complete metric space $M$. 
Let $B_{1/2}$ be the union of all the points of the fibers of $\eta$ that are determined by the midpoint of ${\rm I}$. Then $B_{1/2}$ is a flat $(n-1)$-orbifold, and $\eta$ maps $B_{1/2}$ isometrically onto $B$. The geometric fibration $\dot{\eta}:\dot{M}\to B$ is geometrically equivalent to the fiberwise projection from $\dot{M}$ to $B_{1/2}$. Suppose $\dot{M}$ is disconnected. Then $\dot{M}$ has exactly two connected components $B_0$ and $B_1$, and $\dot{\eta}$ maps $B_i$ isometrically onto $B$ for each $i$, since $\dot{\eta}:\dot{M} \to B$ is an orbifold double covering and $B$ is connected. Hence $B_{1/2}$ is two-sided in $M$. Parameterize ${\rm I}$ so that ${\rm I} = [0,\ell]$. Define $\phi: {\rm I}\times B \to M$ by $\phi(t,y) = y_t$ where $y_t$ is the point in $\eta^{-1}(y)$ which is at a distance $t$ from $B_0$ along $\eta^{-1}(y)$ towards $B_1$. Then $\phi$ is an isometry, and $\eta$ is isometrically equivalent to the natural projection ${\rm I}\times B\to B$. Suppose $\dot{M}$ is connected. Then $B_{1/2}$ is one-sided in $M$. Let $\tau:\dot{M}\to \dot{M}$ be the continuous involution that transposes the endpoints of the fibers of $\eta$ that are isometric to ${\rm I}$. Then $\tau$ is an isometry. Note that $x$ is fixed by $\tau$ if and only if $\eta^{-1}(\eta(x))$ is a singular fiber isometric to ${\rm I}$ folded in half. The geometric fibration $\dot{\eta}: \dot{M}\to B$ induces an isometry from $\dot{M}/\langle \tau\rangle$ to $B$. Define $\phi: {\rm I}\times \dot{M} \to M$ by $\phi(t,x) = x_t$ where $x_t$ is the point of $\eta^{-1}(\eta(x))$ which is at a distance $t$ from $x$ if $t\leq \ell/2$ or at a distance $\ell-t$ from $\tau(x)$ if $t\geq \ell/2$. Let $\sigma: {\rm I}\times \dot{M}\to {\rm I}\times \dot{M}$ be the isometry that acts diagonally, as the reflection in ${\rm I}$, and by $\tau$ on $\dot{M}$. 
Then $\phi$ induces an isometry from $({\rm I}\times \dot{M})/\langle\sigma\rangle$ to $M$, and $\eta$ is isometrically equivalent to the natural projection of the twisted ${\rm I}$-bundle over $B$ determined by the orbifold double covering $\dot{\eta}: \dot{M} \to B$. \end{proof} \begin{theorem} If $1 \to {\rm N}\ {\buildrel i\over \longrightarrow}\ \Gamma\ {\buildrel p\over\longrightarrow} \ \Gamma/{\rm N}\to 1$ is a space group extension such that ${\rm N}$ is an infinite dihedral group, then $\Gamma$ has a subgroup $\Sigma$ such that $p$ maps $\Sigma$ isomorphically onto $\Gamma/{\rm N}$, and either $\Gamma = {\rm N}\times\Sigma$ and $\Sigma$ is unique and orthogonal to ${\rm N}$, or else $\Sigma$ has a subgroup $\Sigma_0$ of index 2 such that if $\Gamma_0 = {\rm N}\Sigma_0$, then $\Gamma_0={\rm N}\times \Sigma_0$ and $\Sigma_0$ is unique, but $\Sigma$ is not necessarily unique; moreover $\Sigma_0$ is a complete normal subgroup of $\Gamma$, which is orthogonal to ${\rm N}$, and $\Gamma/\Sigma_0$ is an infinite dihedral group. \end{theorem} \begin{proof} Let $V = {\rm Span}({\rm N})$, and let $n={\rm dim}(\Gamma)$. Then ${\rm I} = V/{\rm N}$ is a closed interval and the fibration projection $\eta:E^n/\Gamma\to(E^n/V)/(\Gamma/{\rm N})$ is a geometric Seifert fibration with generic fiber ${\rm I}$. Let $M=E^n/\Gamma$. By Theorem 18, the group $\Gamma$ has a subgroup $\Sigma$ such that $p$ maps $\Sigma$ isomorphically onto $\Gamma/{\rm N}$, and by Theorem 19, either $\Gamma = {\rm N}\times\Sigma$, with $\Sigma$ orthogonal to ${\rm N}$, if $\dot{M}$ is disconnected, or else $\Sigma$ has a subgroup $\Sigma_0$ of index 2, if $\dot{M}$ is connected, corresponding to the orbifold double cover $\dot{M}$ of $(E^n/V)/(\Gamma/{\rm N})$, such that if $\Gamma_0 = {\rm N}\Sigma_0$, then $\Gamma_0={\rm N} \times \Sigma_0$, since ${\rm I}\times \dot{M}$ double covers $M$. 
If $\Gamma = {\rm N}\times\Sigma$, then $\Sigma$ is unique, since $\Sigma$ is the centralizer of ${\rm N}$ in $\Gamma$. If $\Gamma \neq {\rm N}\times\Sigma$, then $\Sigma_0$ is unique, since $\Sigma_0$ is the centralizer of ${\rm N}$ in $\Gamma$. To see that $\Sigma$ is not necessarily unique, see example (5) in the next section. The group $\Sigma_0$ is normal in $\Gamma$, since $\Sigma_0$ is the centralizer of a normal subgroup of $\Gamma$. The group $\Sigma_0$ is complete and $\Gamma/\Sigma_0$ is an infinite dihedral group, since $\Sigma_0$ corresponds to the generic fiber of the geometric fibration of $({\rm I}\times \dot{M})/\langle \sigma\rangle$ over ${\rm I}/\langle \sigma\rangle$ induced by projection along the first factor. Finally $\Sigma_0$ is orthogonal to ${\rm N}$, since $\Sigma$ can be chosen to be orthogonal to ${\rm N}$ as in the proof of Lemma 8. \end{proof} \section{Reducible 2-Dimensional Crystallographic Groups} Let $\Gamma$ be a 2-dimensional space group. Then every nontrivial geometric fibration of $E^2/\Gamma$ is a Seifert fibration. If $b+B\in \Gamma$ and $B$ has no 1-dimensional invariant vector space, then $E^2/\Gamma$ does not Seifert fiber. Hence if $B$ is a rotation of order 3 or 4, then $E^2/\Gamma$ does not Seifert fiber. This excludes 8 of the 2-dimensional space group isomorphism types. The orbifolds of the remaining 9 isomorphism types do Seifert fiber. We now describe all of these geometric Seifert fibrations. We denote a circle by ${\rm O}$ and a closed interval by ${\rm I}$. We consider the groups in the order of Table 1A of Brown et al. \cite{B-Z} which corresponds to the IT order \cite{IT}. We use the generators for the representatives of the isomorphism types of the 2-dimensional space groups listed in Table 1A of \cite{B-Z}. 
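Several identities used in the case analysis of this section, such as $(t_iC)^2=t_1t_2$ in case (5) and the computation $(t_1^k\beta)^2 = t_1^{2k+1}$ behind the glide-reflection and non-splitting claims in cases (4) and (8), are one-line affine calculations. They can be checked mechanically; in the illustrative sketch below the matrices agree with the generators fixed in the list that follows, while the helper \texttt{mul} is my own notation, not the paper's.

```python
import numpy as np

# Point-group generators for the 2-dimensional space groups listed below.
A = -np.eye(2)
B = np.diag([1.0, -1.0])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
e1, e2 = np.eye(2)

def mul(f, g):
    # Compose affine isometries of E^2: (a + A)(b + B) = (a + A b) + A B.
    return (f[0] + f[1] @ g[0], f[1] @ g[1])

# Case (5): (t_i C)^2 = t_1 t_2 for i = 1, 2.
for t in (e1, e2):
    sq = mul((t, C), (t, C))
    assert np.allclose(sq[0], e1 + e2) and np.allclose(sq[1], np.eye(2))

# Cases (4) and (8): (t_1^k beta)^2 = t_1^{2k+1}, so every t_1^k beta is a
# glide-reflection and neither extension by <t_1> splits.
for b in (e1 / 2, (e1 + e2) / 2):       # beta = b + B in cases (4) and (8)
    for k in range(-3, 4):
        g = (k * e1 + b, B)
        sq = mul(g, g)
        assert np.allclose(sq[0], (2 * k + 1) * e1)
        assert np.allclose(sq[1], np.eye(2))
```

The same two-line pattern $(c+E)^2 = (c+Ec) + E^2$ underlies all of the squared-element computations in this section.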
Let $t_1 = e_1 + I$ and $t_2 = e_2 + I$ be the standard basis translations, and let $$A = \left(\begin{array}{rr} -1 & 0 \\ 0 & -1 \end{array}\right),\ \ B = \left(\begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array}\right), \ \ C = \left(\begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array}\right).$$ Let $a,b$ be relatively prime integers, let $c,d$ be integers such that $ad-bc=1$, and let $\phi:E^2\to E^2$ be the linear automorphism defined by $\phi(e_1) = (a,b)$ and $\phi(e_2) = (c,d)$. The symbol $\rtimes$ denotes a semidirect product of groups. (1) Let $\Gamma=\langle t_1,t_2\rangle$. The isomorphism type of $\Gamma$ is $\circ\, (p1)$. Here $\circ$ is Conway's notation \cite{Conway} and $p1$ is the IT notation \cite{S}. The orbifold $E^2/\Gamma$ is a flat torus and $E^2/\Gamma$ is the cartesian product ${\rm O}\times {\rm O}$. The proper, complete, normal subgroups of $\Gamma$ are of the form $\langle t_1^at_2^b\rangle$. We have $\Gamma= \langle t_1^at_2^b\rangle \times \langle t_1^ct_2^d\rangle$, and so $E^2/\Gamma$ is a geometric trivial fiber bundle over ${\rm O}$, with fiber ${\rm O}$, in infinitely many ways. The linear automorphism $\phi$ normalizes $\Gamma$ and conjugates $\langle t_1\rangle$ to $\langle t_1^at_2^b\rangle$. Therefore, all the geometric Seifert fiberings of $E^2/\Gamma$ are affinely equivalent. (2) Let $\Gamma=\langle t_1,t_2, A\rangle$. The isomorphism type of $\Gamma$ is $2222\,(p2)$ and $E^2/\Gamma$ is a flat pillow. The proper, complete, normal subgroups of $\Gamma$ are of the form $\langle t_1^at_2^b\rangle$. We have $\Gamma= \langle t_1^at_2^b\rangle \rtimes \langle t_1^ct_2^d, A\rangle$, and so $E^2/\Gamma$ geometrically fibers over ${\rm I}$, with generic fiber ${\rm O}$, in infinitely many ways. The linear automorphism $\phi$ normalizes $\Gamma$ and conjugates $\langle t_1\rangle$ to $\langle t_1^at_2^b\rangle$. Therefore, all the geometric Seifert fiberings of $E^2/\Gamma$ are affinely equivalent. (3) Let $\Gamma=\langle t_1,t_2, B\rangle$. 
The isomorphism type of $\Gamma$ is $*\!*(pm)$ and $E^2/\Gamma$ is a flat annulus. The proper, complete, normal subgroups of $\Gamma$ are $Z(\Gamma)=\langle t_1\rangle$ and $\langle t_2,B\rangle$. We have $\Gamma = \langle t_1\rangle\times \langle t_2,B\rangle$, and so $E^2/\Gamma$ is the cartesian product ${\rm O}\times {\rm I}$. (4) Let $\Gamma=\langle t_1,t_2,\beta\rangle$ where $\beta = \frac{1}{2}e_1+B$. The isomorphism type of $\Gamma$ is $\times\!\times (pg)$ and $E^2/\Gamma$ is a flat Klein bottle. The proper, complete, normal subgroups of $\Gamma$ are $Z(\Gamma)=\langle t_1\rangle$ and $\langle t_2\rangle$. Now $\Gamma/\langle t_1\rangle \cong D_\infty$, and so $E^2/\Gamma$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm O}$. The extension $\langle t_1\rangle \to \Gamma \to D_\infty$ does not split, since $\Gamma$ is torsion-free. Also $\Gamma = \langle t_2\rangle\rtimes\langle \beta\rangle$, and so $E^2/\Gamma$ is a geometric fiber bundle over ${\rm O}$ with fiber ${\rm O}$. (5) Let $\Gamma=\langle t_1,t_2,C\rangle$. The isomorphism type of $\Gamma$ is $*\!\times (cm)$ and $E^2/\Gamma$ is a flat M\"obius band. The proper, complete, normal subgroups of $\Gamma$ are $Z(\Gamma)=\langle t_1t_2\rangle$ and $\langle t_1t_2^{-1},C\rangle$. Now $\Gamma/\langle t_1t_2\rangle \cong D_\infty$, and so $E^2/\Gamma$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm O}$. The extension $\langle t_1t_2\rangle \to \Gamma \to D_\infty$ does not split, since $\Gamma$ is not the direct product of $\langle t_1t_2\rangle$ and $D_\infty$. Also $\Gamma = \langle t_1t_2^{-1},C\rangle\rtimes\langle t_iC\rangle$ for $i=1,2$, and so $E^2/\Gamma$ is a geometric fiber bundle over ${\rm O}$ with fiber ${\rm I}$. We have $(t_iC)^2 = t_1t_2$ for $i=1,2$, and $\langle t_1t_2\rangle$ is the centralizer of $\langle t_1t_2^{-1},C\rangle$ in $\Gamma$. As $(t_1C)^{-1} = t_2^{-1}C$, we have that $\langle t_1C\rangle \neq \langle t_2C\rangle$. 
Hence the subgroup $\Sigma = \langle t_1C\rangle$ in Theorem 20 is not necessarily unique. (6) Let $\Gamma=\langle t_1,t_2, A, B\rangle$. The isomorphism type of $\Gamma$ is $\ast 2222\,(pmm)$ and $E^2/\Gamma$ is a square. The proper, complete, normal subgroups of $\Gamma$ are $\langle t_1, AB\rangle$ and $\langle t_2,B\rangle$. Now $\Gamma = \langle t_1, AB\rangle\times \langle t_2,B\rangle$, and so $E^2/\Gamma$ is the cartesian product ${\rm I}\times {\rm I}$. The linear automorphism $\sigma: E^2\to E^2$, defined by $\sigma(e_1) =e_2$ and $\sigma(e_2) = e_1$, normalizes $\Gamma$ and conjugates $\langle t_1, AB\rangle$ to $\langle t_2, B\rangle$. Therefore, the two geometric Seifert fiberings of $E^2/\Gamma$ are isometrically equivalent. (7) Let $\Gamma=\langle t_1,t_2,A,\beta\rangle$ where $\beta = \frac{1}{2}e_2+B$. The isomorphism type is $22\ast (pmg)$ and $E^2/\Gamma$ is a pillowcase. The proper, complete, normal subgroups of $\Gamma$ are $\langle t_1\rangle$ and $\langle t_2,\beta\rangle$. Now $\Gamma =\langle t_1\rangle \rtimes \langle A,\beta\rangle$, and so $E^2/\Gamma$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm O}$. Also $\Gamma = \langle t_2,\beta\rangle\rtimes\langle t_1,A\rangle$, and so $E^2/\Gamma$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm I}$. (8) Let $\Gamma=\langle t_1,t_2,A,\beta\rangle$ where $\beta = \frac{1}{2}e_1+\frac{1}{2}e_2+B$. The isomorphism type of $\Gamma$ is $22\!\times (pgg)$ and $E^2/\Gamma$ is a projective pillow. The proper, complete, normal subgroups of $\Gamma$ are $\langle t_1\rangle$ and $\langle t_2\rangle$. Now $\Gamma/\langle t_i\rangle \cong D_\infty$ for $i=1,2$, and so $E^2/\Gamma$ geometrically fibers over ${\rm I}$, with generic fiber ${\rm O}$, in two ways. The extension $\langle t_1\rangle \to \Gamma \to D_\infty$ does not split, since $\beta$ projects to an element of order 2 but $t_1^k\beta$ is a glide-reflection for each integer $k$. 
The linear automorphism $\sigma: E^2\to E^2$, defined by $\sigma(e_1) =e_2$ and $\sigma(e_2) = e_1$, normalizes $\Gamma$ and conjugates $\langle t_1\rangle$ to $\langle t_2\rangle$. Therefore, the two geometric Seifert fiberings of $E^2/\Gamma$ are isometrically equivalent. Hence the extension $\langle t_2\rangle \to \Gamma \to D_\infty$ also does not split. (9) Let $\Gamma=\langle t_1,t_2,A,C\rangle$. The isomorphism type of $\Gamma$ is $2\!\ast\! 22\,(cmm)$ and $E^2/\Gamma$ is a pointed hood. The proper, complete, normal subgroups of $\Gamma$ are $\langle t_1t_2,AC\rangle$ and $\langle t_1t_2^{-1},C\rangle$. Now $\Gamma =\langle t_1t_2, AC\rangle \rtimes \langle t_1A, C\rangle$ and $\Gamma =\langle t_1t_2^{-1}, C\rangle \rtimes \langle t_1A, AC\rangle$, and so $E^2/\Gamma$ geometrically fibers over ${\rm I}$, with generic fiber ${\rm I}$, in two ways. The linear automorphism $\sigma: E^2\to E^2$, defined by $\sigma(e_1) =e_1$ and $\sigma(e_2) = -e_2$, normalizes $\Gamma$ and conjugates $\langle t_1t_2,AC\rangle$ to $\langle t_1t_2^{-1},C\rangle$. Therefore, the two geometric Seifert fiberings of $E^2/\Gamma$ are isometrically equivalent. \section{Co-Seifert Fibrations} Let $M$ be a flat $n$-orbifold. We call a geometric fibration of $M$ over a connected, compact, flat 1-orbifold $B$, with generic fiber a flat $(n-1)$-orbifold $F$, a {\it geometric co-Seifert fibration}. Here $B$ is either a circle ${\rm O}$ or a closed interval ${\rm I}$. The structure of a geometric co-Seifert fibration $\eta:M\to B$ tells you a lot about the geometry and topology of the orbifold $M$. If the base $B$ is a circle ${\rm O}$, then $M$ is a geometric fiber bundle over ${\rm O}$ with fiber $F$. Hence $M$ is a mapping torus over $F$ with monodromy an isometry of $F$. Moreover, the singular set $\Sigma$ of $M$ is a fiber bundle over ${\rm O}$ with fiber the singular set $\Sigma_\ast$ of $F$. For example, take $\Gamma$ to be as in (5) in \S 10. 
Then $M = E^2/\Gamma$ is a flat M\"obius band, $M$ is a geometric fiber bundle over ${\rm O}$ with fiber ${\rm I}$, and $M$ is the mapping torus over ${\rm I}$ with monodromy the reflection of ${\rm I}$ about the midpoint of ${\rm I}$. The singular set $\Sigma$ of $M$ is the boundary of the M\"obius band, which is a nontrivial fiber bundle over ${\rm O}$ with fiber the endpoints of ${\rm I}$. Now suppose the base $B$ is a closed interval ${\rm I}$ with endpoints $0$ and $1$. Let ${\rm I}^\circ$ be the open interval $(0,1)$. Then $\eta^{-1}({\rm I}^\circ)$ is the cartesian product of $F$ and ${\rm I}^\circ$. Hence $M$ is obtained from the open tube $\eta^{-1}({\rm I}^\circ)$ by adjoining the two singular fibers $F_0 = \eta^{-1}(0)$ and $F_1=\eta^{-1}(1)$ of $\eta$ to the ends of the tube, one to each end. Let $G_0$ and $G_1$ be the finite groups of order 2 in the definition of the geometric fibration $\eta$ such that $F_i$ is isometric to $F/G_i$ for $i=0,1$. There are three cases to consider. The first case is when $G_0$ and $G_1$ both act trivially on $F$. Then $M$ is the cartesian product $F\times{\rm I}$. The singular set $\Sigma$ of $M$ is $F_0\cup (\Sigma_\ast\times {\rm I})\cup F_1$ where $\Sigma_\ast$ is the singular set of $F$. For example, take $\Gamma$ to be as in (6) in \S 10. Then $M= E^2/\Gamma$ is a square and $M$ is the cartesian product ${\rm I}\times{\rm I}$. The singular set of $M$ is the boundary of the square, which is the union of the 4 closed intervals $F_0$, the two intervals $\Sigma_\ast\times {\rm I}$, and $F_1$. The second case is when one of $G_0$ and $G_1$ acts trivially on $F$ and the other does not, say $G_1$ acts trivially on $F$ and $G_0$ does not. Then $M$ is isometric to $(F\times [-1,1])/G_0$ where $G_0$ acts diagonally, isometrically on $F$, and orthogonally on $[-1,1]$ by reflecting $[-1,1]$ about $0$; in other words, $M$ is a twisted ${\rm I}$-bundle over $F_0$ with monodromy determined by the action of $G_0$ on $F$. 
The singular set $\Sigma$ of $M$ is $\Sigma_0\cup(\Sigma_\ast\times (0,1])\cup F_1$ where $\Sigma_0$ is the singular set of $F_0$. For example, take $\Gamma$ to be as in (7) in \S 10. Then $M= E^2/\Gamma$ is a pillowcase and $M$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm O}$ as in the second case. We have that $M$ is a twisted ${\rm I}$-bundle over ${\rm I}$ with monodromy determined by a reflection of ${\rm O}$. The singular set of $M$ is the union of the boundary circle $F_1$ of $M$ together with the two cone points which form the singular set of the closed interval $F_0$. As another example, take $\Gamma$ to be as in (9) in \S 10. Then $M=E^2/\Gamma$ is a pointed hood and $M$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm I}$ as in the second case. We have that $M$ is a twisted ${\rm I}$-bundle over ${\rm I}$ with monodromy determined by the reflection of ${\rm I}$. The singular set $\Sigma$ of $M$ is the boundary of $M$ together with a single cone point. Observe that $\Sigma$ is the union of the closed interval $F_1$, two half-open intervals $\Sigma_\ast\times (0,1]$, and the end points of $F_0$, one of which is the cone point and the other joins the two half-open intervals $\Sigma_\ast\times (0,1]$ to form a closed interval. The third case is when $G_0$ and $G_1$ both act nontrivially on $F$. Let $M_0 = \eta^{-1}([0,1/2])$ and $M_1=\eta^{-1}([1/2,1])$, and let $F_{1/2} = \eta^{-1}(1/2)$. Then $M_i$ is a twisted ${\rm I}$-bundle over $F_i$ with monodromy determined by the action of $G_i$ on $F$ for $i=0,1$. Note that $M_i$ is a flat $n$-orbifold with totally geodesic boundary $F_{1/2}$ for each $i=0,1$. We have that $M = M_0\cup M_1$ with $M_0\cap M_1 = F_{1/2}$. The singular set $\Sigma$ of $M$ is $\Sigma_0\cup(\Sigma_\ast\times (0,1))\cup\Sigma_1$ where $\Sigma_i$ is the singular set of $F_i$ for $i=0,1$. For example, take $\Gamma$ to be as in (2) in \S 10. 
Then $M= E^2/\Gamma$ is a pillow and $M$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm O}$ as in the third case. Observe that $M_i$ is a pillowcase for each $i=0,1$, with $F_{1/2}$ their common boundary circle. The singular set of $M$ consists of four cone points, the endpoints of the closed intervals $F_0$ and $F_1$. As another example, take $\Gamma$ to be as in (8) in \S 10. Then $M= E^2/\Gamma$ is a projective pillow and $M$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm O}$ as in the third case. Here $G_0$ acts by a reflection on ${\rm O}$ and $G_1$ acts by a half-turn on ${\rm O}$. Therefore $M_0$ is a pillowcase and $M_1$ is a M\"obius band, with $F_{1/2}$ their common boundary circle. Hence $M$ is topologically a projective plane. The singular set of $M$ consists of two cone points which are the endpoints of the closed interval $F_0$. As another example, take $\Gamma$ to be as in (7) in \S 10. Then $M= E^2/\Gamma$ is a pillowcase and $M$ geometrically fibers over ${\rm I}$ with generic fiber ${\rm I}$ as in the third case. Here $G_i$ acts on ${\rm I}$ by reflection for $i=0,1$, $M_i$ is a pointed hood for each $i=0,1$, and $F_{1/2}$ is a closed interval which is half the boundary of each hood. The singular set consists of the endpoints of the closed intervals $F_0$ and $F_1$ and two open intervals $\Sigma_\ast\times (0,1)$. The two open intervals $\Sigma_\ast\times (0,1)$ together with one endpoint from each of $F_0$ and $F_1$ form the boundary circle of $M$. Let $\Gamma$ be a 3-dimensional, orientation preserving, space group. In \cite{Dunbar}, Dunbar describes the topology and geometry of the flat orbifold $E^3/\Gamma$. When $E^3/\Gamma$ co-Seifert fibers, the structure of a co-Seifert fibration can effectively be used to determine the topology and geometry of the orbifold $E^3/\Gamma$. 
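The non-splitting claim for example (8) in \S 10 can also be checked by direct matrix computation. A minimal Python sketch, under the assumption that the generators are realized by the standard representatives $B={\rm diag}(1,-1)$ and $t_1 = e_1+I$ (these explicit matrices are our assumption; they are not spelled out in the text):

```python
# Sketch (assumed representatives, not given explicitly in the text):
# an isometry is stored as (M, v), acting by u -> M u + v.  For Gamma as
# in example (8) of Section 10, beta = (1/2)e_1 + (1/2)e_2 + B, and every
# lift t_1^k beta of the order-2 image of beta squares to the nonzero
# translation (2k+1, 0).  So no lift has order 2 (each is a glide
# reflection) and the extension <t_1> -> Gamma -> D_infty cannot split.
from fractions import Fraction as Fr

def compose(f, g):
    """Compose affine isometries (M, v): u -> M u + v."""
    (M, v), (N, w) = f, g
    MN = tuple(tuple(M[i][0]*N[0][j] + M[i][1]*N[1][j] for j in range(2))
               for i in range(2))
    Mw_plus_v = tuple(M[i][0]*w[0] + M[i][1]*w[1] + v[i] for i in range(2))
    return (MN, Mw_plus_v)

I2 = ((1, 0), (0, 1))
B = ((1, 0), (0, -1))
beta = (B, (Fr(1, 2), Fr(1, 2)))          # beta = (1/2)e_1 + (1/2)e_2 + B

for k in range(-5, 6):
    t1k = (I2, (Fr(k), Fr(0)))            # t_1^k
    f = compose(t1k, beta)                # t_1^k beta
    M, v = compose(f, f)                  # its square
    assert M == I2 and v == (2*k + 1, 0)  # translation by (2k+1, 0) != 0
```

The same affine bookkeeping can be reused for the other examples of \S 10 once representative matrices are fixed.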
\section{Reducible 3-Dimensional Crystallographic Groups} Let $\Gamma$ be a 3-dimensional space group with point group $\Pi$. Suppose $E^3/\Gamma$ geometrically fibers over a 1- or 2-dimensional flat orbifold. By Theorem 7, the geometric fibration of $E^3/\Gamma$ is determined by a complete normal subgroup ${\rm N}$ of $\Gamma$. Let $V = {\rm Span}({\rm N})$. Then $V$ is either a 1- or 2-dimensional vector subspace of $E^3$. As both $V$ and $V^\perp$ are invariant under $\Pi$, the group $\Pi$ has a 1-dimensional invariant vector space. The group $\Gamma$ is said to be {\it reducible} or {\it irreducible} according as its point group $\Pi$ does or does not have a 1-dimensional invariant vector space. It is well known to crystallographers that reducibility of 3-dimensional space groups is equivalent to $\mathbb Z$-reducibility. Hence $E^3/\Gamma$ geometrically fibers over a flat 1- or 2-orbifold if and only if $\Gamma$ is reducible by Theorems 4, 7, and 11. There are six families of 3-dimensional space group isomorphism types: the triclinic, monoclinic, orthorhombic, tetragonal, hexagonal, and cubic families. See Table 1B of Brown et al.~\cite{B-Z} where the six families are listed in the above order. The cubic family consists of all the irreducible isomorphism types. There are 35 irreducible 3-dimensional space group isomorphism types and 184 reducible 3-dimensional space group isomorphism types. All the geometric Seifert fibrations of $E^3/\Gamma$, up to affine equivalence, were neatly described by Conway et al.~in their 2001 paper \cite{C-T}. We will describe the Conway et al.~\ formulation of a geometric fibration of $E^3/\Gamma$ over a flat 2-orbifold and show how it determines a dual geometric fibration of $E^3/\Gamma$ over a flat 1-orbifold with the same invariant 1-dimensional vector space. Let $\Gamma$ be a reducible 3-dimensional space group. 
Conway et al.~represent a geometric fibration of $E^3/\Gamma$ over a flat 2-orbifold by an exact sequence $$ 1 \to {\rm K} \longrightarrow \Gamma\ {\buildrel \pi\over\longrightarrow}\ {\rm H} \to 1$$ where ${\rm H}$ is a 2-dimensional space group and ${\rm K} = {\rm ker}(\pi)$. Here an element $g$ of $\Gamma$ is represented in the form $(g_H,g_V)$ where $\pi(g) = g_H$ and $g_V$ is the 3rd coordinate function of $g$. The group ${\rm K}$ is generated either by the translation $e_3+I$ alone or by $e_3+I$ together with the reflection $A={\rm diag}(1,1,-1)$. The group extension is specified by normalized choices of the 3rd coordinate functions for a set of generators of ${\rm H}$. The choices are such that $\{(I,g_V):\,g\in\Gamma\}$ is a discrete group $\Lambda$ of isometries of $E^3$ containing ${\rm K}$ as a normal subgroup of finite index. The projection $\phi: g\mapsto g_V$ is a group homomorphism whose image is a 1-dimensional space group isomorphic to $\Lambda$. Let ${\rm N} = {\rm ker}(\phi)$. Then ${\rm N}$ is complete by Theorem 5. Hence $E^3/\Gamma$ geometrically fibers over a flat 1-orbifold corresponding to the group extension $$ 1 \to {\rm N} \longrightarrow \Gamma\ {\buildrel \phi\over\longrightarrow}\ \Lambda \to 1$$ by Theorem 4 with ${\rm Span}({\rm N}) = {\rm Span}\{e_1,e_2\}$. Hence ${\rm K}$ and ${\rm N}$ are orthogonal. We call ${\rm N}$ the {\it orthogonal dual} of ${\rm K}$, and we say that ${\rm K}$ and ${\rm N}$ correspond under {\it orthogonal duality}. As ${\rm K}\cap {\rm N} = \{I\}$, we have that $\pi$ maps ${\rm N}$ isomorphically onto a normal subgroup of ${\rm H}$; moreover \begin{eqnarray*} {\rm H}/\pi({\rm N}) & \cong & (\Gamma/{\rm K})/({\rm KN}/{\rm K}) \\ & \cong & \Gamma/{\rm KN} \\ & \cong & (\Gamma/{\rm N})/({\rm KN}/{\rm N}) \ \ \cong \ \ \Lambda/{\rm K}. \end{eqnarray*} All the elements of ${\rm K}$ commute with all the elements of ${\rm N}$. The group $\Gamma$ is the direct product of ${\rm K}$ and ${\rm N}$ if and only if ${\rm K} = \Lambda$. 
This corresponds to the occurrence of a row of $0\!+ \cdots 0\!+0-$ (resp. $0\!+\cdots 0+$) in the couplings column of Table 1 of \cite{C-T} for interval (resp. circular) fibrations. The group $\Lambda$ is infinite cyclic if and only if all the elements of $\Lambda$ are translations. This corresponds to the occurrence of a row of all plus signs in the couplings column of Table 1 of \cite{C-T}. The tetragonal and hexagonal families of 3-dimensional space group isomorphism types (IT numbers 75-194) consist of all the reducible group isomorphism types whose point group has a unique 1-dimensional invariant vector space. If $\Gamma$ belongs to one of these 110 isomorphism types, then $\Gamma$ has a unique 1-dimensional, complete, normal subgroup ${\rm K}$ and a unique 2-dimensional, complete, normal subgroup ${\rm N}$; moreover ${\rm K}$ and ${\rm N}$ are orthogonal. This is the case if and only if in the Conway et al.~representation of $\Gamma$ the group ${\rm H}$ is irreducible. Hence, the classification of the geometric co-Seifert fibrations of $E^3/\Gamma$ corresponds via orthogonal duality to the Conway et al.~\ classification of the geometric Seifert fibrations of $E^3/\Gamma$ when $\Gamma$ belongs to the tetragonal or hexagonal families. The orthorhombic family (IT numbers 16-74) consists of all the reducible group isomorphism types whose point group has exactly three orthogonal 1-dimensional invariant vector spaces. If $\Gamma$ belongs to one of these 59 isomorphism types, then $\Gamma$ has exactly three 1-dimensional, complete, normal subgroups ${\rm K}_1,{\rm K}_2,{\rm K}_3$, and $\Gamma$ has exactly three 2-dimensional, complete, normal subgroups ${\rm N}_1,{\rm N}_2,{\rm N}_3$. This is the case if and only if in the Conway et al.~\ representation of $\Gamma$ the group ${\rm H}$ has exactly two proper, complete, normal subgroups. 
We order ${\rm K}_i$ and ${\rm N}_i$ so that $V_i ={\rm Span}({\rm K}_i)$ and $W_i={\rm Span}({\rm N}_i)$ are orthogonal complements for each $i$. The 1-dimensional vector spaces $V_1,V_2,V_3$ are mutually orthogonal. Suppose $\alpha:E^3\to E^3$ is an affine homeomorphism such that $\alpha\Gamma\alpha^{-1} = \Gamma'$ with $\Gamma'$ a space group. Define ${\rm K}_i' = \alpha{\rm K}_i\alpha^{-1}$ and ${\rm N}_i' = ({\rm K}_i')^\perp$ for $i=1,2,3$. Let $V_i' ={\rm Span}({\rm K}_i')$ and $W_i' = {\rm Span}({\rm N}_i')$ for each $i$. Let $\alpha = a + A$ with $A$ a linear automorphism of $E^3$. Then $AV_i = V_i'$ for each $i$. Now $\alpha{\rm N}_i\alpha^{-1} = {\rm N}_j'$ for some $j$ by Corollary 2. Therefore $AW_i = W_j'$. As $AV_i\cap AW_i = \{0\}$, we must have that $j=i$ for each $i=1,2,3$. Likewise if ${\rm N}_i' = \alpha{\rm N}_i\alpha^{-1}$ and ${\rm K}_i' = ({\rm N}_i')^\perp$ for each $i=1,2,3$, then $\alpha({\rm K}_i)\alpha^{-1} = {\rm K}_i'$ for each $i=1,2,3$. Thus if ${\rm N}$ is a complete normal subgroup of $\Gamma$, then $\alpha({\rm N}^\perp)\alpha^{-1} = (\alpha{\rm N}\alpha^{-1})^\perp$. By Theorem 10, the geometric fibrations of $E^3/\Gamma$ and $E^3/\Gamma'$ determined by complete normal subgroups ${\rm N}$ of $\Gamma$ and ${\rm N}'$ of $\Gamma'$ are affinely equivalent if and only if the geometric fibrations of $E^3/\Gamma$ and $E^3/\Gamma'$ determined by ${\rm N}^\perp$ and $({\rm N}')^\perp$ are affinely equivalent. Hence, the classification of the geometric co-Seifert fibrations of $E^3/\Gamma$ corresponds via orthogonal duality to the Conway et al.~\ classification of the geometric Seifert fibrations of $E^3/\Gamma$ when $\Gamma$ belongs to the orthorhombic family. The triclinic and monoclinic families consist of all the reducible space group isomorphism types whose point group has infinitely many 1-dimensional invariant vector spaces. 
If $\Gamma$ belongs to one of these 15 isomorphism types, then $\Gamma$ has infinitely many 1-dimensional, complete, normal subgroups, and $\Gamma$ has infinitely many 2-dimensional, complete, normal subgroups. This is the case if and only if in the Conway et al.~representation of $\Gamma$ the group ${\rm H}$ has infinitely many proper, complete, normal subgroups. The triclinic family consists of two isomorphism types (IT numbers 1 and 2). If $\Gamma$ belongs to the triclinic family, then all the geometric Seifert fibrations of $E^3/\Gamma$ are affinely equivalent and all the geometric co-Seifert fibrations of $E^3/\Gamma$ are affinely equivalent. The monoclinic family consists of 13 isomorphism types (IT numbers 3-15). If $\Gamma$ belongs to one of these 13 isomorphism types, then $\Gamma$ has a unique 1-dimensional, complete, characteristic subgroup ${\rm K}$ and a unique 2-dimensional, complete, characteristic subgroup ${\rm N}$; moreover ${\rm K}$ and ${\rm N}$ are orthogonal. The corresponding Seifert fibration is given by the primary name in Table 2b of \cite{C-T}. The classification of the geometric co-Seifert fibrations of $E^3/\Gamma$ corresponds via orthogonal duality to the Conway et al.~\ classification of the geometric Seifert fibrations of $E^3/\Gamma$ when $\Gamma$ belongs to the monoclinic family except for the two cases $(\ast\!:\!\times)$ and $(2\overline{\ast}2\!:\!2)$ in \cite{C-T}. In Table 1, we replace the Seifert fibration $(\ast\!:\!\times)$ of space group IT 9 with the affinely equivalent Seifert fibration $(\ast\!:\!\times_1)$, with couplings $\frac{1}{2}\!+\frac{1}{2}+$, and we replace the Seifert fibration $(2\overline{\ast}2\!:\!2)$ of space group IT 15 with the affinely equivalent Seifert fibration $(2\overline{\ast}2_1\!:\!2)$ with couplings $0\!-\frac{1}{2}\!-\frac{1}{2}+$. 
With these two substitutions, indicated by a $\dag$ in Table 1, the classification of the geometric co-Seifert fibrations of $E^3/\Gamma$ under affine equivalence corresponds via orthogonal duality to the Conway et al.~\ classification of the geometric Seifert fibrations of $E^3/\Gamma$. See Table 1 for the orthogonal correspondence between the classifications of the Seifert and co-Seifert fibrations of $E^3/\Gamma$. The first column of Table 1 is the IT number of the corresponding space group. The second column is the Conway fibrifold name of the Seifert fibration. The third column indicates whether or not the corresponding space group extension splits. If the generic fiber is a closed interval, then the space group extension splits by Theorem 18. The fourth column indicates the generic fiber of the orthogonal dual co-Seifert fibration. The fifth column indicates the base of the orthogonal dual co-Seifert fibration with a centered dot representing a circle and a dash representing a closed interval. The sixth column indicates whether or not the corresponding space group extension splits. If the base is a circle, then the space group extension splits, since $\Gamma/{\rm N}$ is an infinite cyclic group. The seventh column gives the index of ${\rm K}{\rm N}$ in $\Gamma$; in particular, the index is 1 if and only if both fibrations are direct products. If the generic fiber of the Seifert fibration is a closed interval, then the base of the co-Seifert fibration is also a closed interval and the index is 1 or 2 by Theorem 20. The information in Table 1 was obtained by computer calculations. Finally, the 10 closed flat space forms in Table 1 have IT numbers 1,4,7,9,19,29,33,76,144,169.
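The family counts quoted in this section admit a trivial arithmetic cross-check (no new data; the total of 219 is the standard count of isomorphism types of 3-dimensional space groups):

```python
# Consistency check on the counts quoted in this section:
# reducible families -- triclinic 2, monoclinic 13, orthorhombic 59,
# tetragonal + hexagonal 110 -- versus the 35 irreducible (cubic) types.
reducible = 2 + 13 + 59 + 110
assert reducible == 184            # all reducible isomorphism types
assert reducible + 35 == 219       # all 3-dimensional space group types
```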
https://arxiv.org/abs/2205.03928
Number of complete subgraphs of Peisert graphs and finite field hypergeometric functions
\documentclass{amsart} \usepackage{amsmath,amsthm,amssymb,amscd} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{result}[theorem]{Result} \newtheorem{corollary}[theorem]{Corollary} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \newtheorem{example}[theorem]{Example} \newtheorem{xca}[theorem]{Exercise} \newtheorem{problem}[theorem]{Problem} \theoremstyle{remark} \newtheorem{remark}[theorem]{Remark} \newtheorem{conj}[theorem]{Conjecture} \numberwithin{equation}{section} \allowdisplaybreaks \begin{document} \title[number of complete subgraphs of Peisert graphs] {number of complete subgraphs of Peisert graphs and finite field hypergeometric functions} \author{Anwita Bhowmik} \address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA} \email{anwita@iitg.ac.in} \author{Rupam Barman} \address{Department of Mathematics, Indian Institute of Technology Guwahati, North Guwahati, Guwahati-781039, Assam, INDIA} \email{rupam@iitg.ac.in} \subjclass[2020]{05C25; 05C30; 11T24; 11T30} \date{9th May 2022} \keywords{Peisert graphs; clique; finite fields; character sums; hypergeometric functions over finite fields} \begin{abstract} For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$. The Peisert graph $P^\ast(q)$ is defined as the graph with vertex set $\mathbb{F}_q$ where $ab$ is an edge if and only if $a-b\in\langle g^4\rangle \cup g\langle g^4\rangle$. We provide a formula, in terms of finite field hypergeometric functions, for the number of complete subgraphs of order four contained in $P^\ast(q)$. We also give a new proof for the number of complete subgraphs of order three contained in $P^\ast(q)$ by evaluating certain character sums. 
The computations for the number of complete subgraphs of order four are quite tedious, so we further give an asymptotic result for the number of complete subgraphs of any order $m$ in Peisert graphs. \end{abstract} \maketitle \section{introduction and statements of results} The arithmetic properties of Gauss and Jacobi sums have a very long history in number theory, with applications in Diophantine equations and the theory of $L$-functions. Recently, number theorists have obtained generalizations of classical hypergeometric functions that are assembled with these sums, and these functions have led to applications in graph theory. Here we make use of these functions, as developed by Greene, McCarthy, and Ono \cite{greene, greene2,mccarthy3, ono2}, to study substructures in Peisert graphs, which are relatives of the well-studied Paley graphs. \par The Paley graphs are a well-known family of undirected graphs constructed from the elements of a finite field. Named after Raymond Paley, they were introduced as graphs independently by Sachs in 1962 and Erd\H{o}s \& R\'enyi in 1963, inspired by the construction of Hadamard matrices in Paley's paper \cite{paleyp}. Let $q\equiv 1\pmod 4$ be a prime power. Then the Paley graph of order $q$ is the graph with vertex set the finite field $\mathbb{F}_q$, where $ab$ is an edge if and only if $a-b$ is a non-zero square in $\mathbb{F}_q$. \par It is natural to study the extent to which a graph exhibits symmetry. A graph is called \textit{symmetric} if, given any two edges $xy$ and $x_1y_1$, there exists a graph automorphism sending $x$ to $x_1$ and $y$ to $y_1$. Another kind of symmetry occurs if a graph is isomorphic to its complement, in which case the graph is called \textit{self-complementary}. While Sachs studied the self-complementarity properties of the Paley graphs, Erd\H{o}s \& R\'enyi were interested in their symmetries. It turns out that the Paley graphs are both self-complementary and symmetric. 
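Self-complementarity of a Paley graph can be seen concretely: multiplication by a fixed non-square carries differences that are squares to differences that are non-squares, hence edges to non-edges. A minimal Python sketch for $q=13$ (illustration only, not part of the paper):

```python
# Sketch: the Paley graph on F_p for p = 13 = 1 (mod 4) is
# self-complementary; x -> 2x (2 is a non-square mod 13) is an explicit
# isomorphism onto the complement.
p = 13
squares = {(x * x) % p for x in range(1, p)}          # non-zero squares mod p

edges = {frozenset((a, b)) for a in range(p) for b in range(a + 1, p)
         if (a - b) % p in squares}
all_pairs = {frozenset((a, b)) for a in range(p) for b in range(a + 1, p)}
complement = all_pairs - edges

g = 2                                                 # a non-square mod 13
image = {frozenset(((g * a) % p, (g * b) % p)) for a, b in edges}
assert image == complement                            # edges map onto non-edges
```

The same check works for any prime $q\equiv 1\pmod 4$ with any non-square in place of $2$.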
\par It is a natural question to ask for the classification of all self-complementary and symmetric (SCS) graphs. In this direction, Chao's classification in \cite{chao} sheds light on the fact that the only such possible graphs of prime order are the Paley graphs. Zhang \cite{zhang} gave an algebraic characterization of SCS graphs using the classification of finite simple groups, although it did not settle whether there exist such graphs other than the Paley graphs. In 2001, Peisert gave a full description of SCS graphs as well as their automorphism groups in \cite{peisert}. He showed that there is another infinite family of SCS graphs apart from the Paley graphs, and, in addition, one more graph not belonging to either of the two former families. He constructed the $P^\ast$-graphs (which are now known as \textit{Peisert graphs}) as follows. For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$, that is, $\mathbb{F}_q^\ast=\mathbb{F}_q\setminus\{0\}=\langle g\rangle$. Then the Peisert graph $P^\ast(q)$ is defined as the graph with vertex set $\mathbb{F}_q$ where $ab$ is an edge if and only if $a-b\in\langle g^4\rangle \cup g\langle g^4\rangle$. It is shown in \cite{peisert} that the definition is independent of the choice of $g$. The edge relation is well defined, since $q\equiv 1\pmod 8$ implies that $-1\in\langle g^4\rangle$. \par We know that a complete subgraph, or a clique, in an undirected graph is a set of vertices such that every two distinct vertices in the set are adjacent. The number of vertices in the clique is called the order of the clique. Let $G^{(n)}$ denote a graph on $n$ vertices and let $\overline{G^{(n)}}$ be its complement. Let $k_m(G)$ denote the number of cliques of order $m$ in a graph $G$. Let $T_m(n)=\text{min}\left(k_m(G^{(n)})+ k_m(\overline{G^{(n)}})\right) $ where the minimum is taken over all graphs on $n$ vertices. 
Erd\H{o}s \cite{erdos}, Goodman \cite{goodman} and Thomason \cite{thomason} studied $T_m(n)$ for different values of $m$ and $n$. Here we note that the study of $T_m(n)$ can be linked to Ramsey theory. This is because the diagonal Ramsey number $R(m,m)$ is the smallest positive integer $n$ such that $T_m(n)$ is positive. Also, for the function $k_m(G^{(n)})+ k_m(\overline{G^{(n)}})$ on graphs with $n=p$ vertices, $p$ being a prime, Paley graphs are minimal in certain ways; for example, in order to show that $R(4,4)$ is at least $18$, the Paley graph with $17$ vertices is the only graph (up to isomorphism) such that $k_4(G^{(17)})+ k_4(\overline{G^{(17)}})=0$. What followed was a study on $k_m(G)$, $G$ being a Paley graph. Evans et al. \cite{evans1981number} and Atanasov et al. \cite{atanasov2014certain} gave formulae for $k_4(G)$, where $G$ is a Paley graph with number of vertices a prime and a prime power, respectively. Subsequently, Lim and Praeger \cite{lim2006generalised} introduced generalizations of Paley graphs, and Dawsey and McCarthy \cite{dawsey} computed the number of cliques of orders $3$ and $4$ in those graphs. Very recently, we \cite{BB} have defined \emph{Paley-type} graphs of order $n$ as follows. For a positive integer $n$, the Paley-type graph $G_n$ has the finite commutative ring $\mathbb{Z}_n$ as its vertex set and edges defined as, $ab$ is an edge if and only if $a-b\equiv x^2\pmod{n}$ for some unit $x$ of $\mathbb{Z}_n$. For primes $p\equiv 1\pmod{4}$ and any positive integer $\alpha$, we have also found the number of cliques of order $3$ and $4$ in the Paley-type graphs $G_{p^{\alpha}}$. \par The Peisert graphs lie in the class of SCS graphs along with the Paley graphs, so it is natural to study the number of cliques in the former class too. There is no known formula for the number of cliques of order $4$ in the Peisert graph $P^{\ast}(q)$. The main purpose of this paper is to provide a general formula for $k_4(P^\ast(q))$. 
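For small $q$ the clique counts of $P^\ast(q)$ can be computed by brute force, which is a useful check on closed formulas. A Python sketch (illustration only, not the paper's method; it realizes $\mathbb{F}_{p^2}$ as $\mathbb{F}_p[x]/(x^2+1)$, valid since $p\equiv 3\pmod 4$ makes $-1$ a non-square, so it covers only the case $t=1$):

```python
# Sketch: brute-force construction of the Peisert graph P*(q), q = p^2
# with p = 3 (mod 4), and exhaustive counts of 3- and 4-cliques.
from itertools import combinations

def peisert_clique_counts(p):
    q = p * p
    # field arithmetic in F_p[x]/(x^2 + 1): (a, b) represents a + b x
    mul = lambda u, v: ((u[0]*v[0] - u[1]*v[1]) % p, (u[0]*v[1] + u[1]*v[0]) % p)
    elems = [(a, b) for a in range(p) for b in range(p)]

    def order(u):                       # multiplicative order of u in F_q^*
        w, n = u, 1
        while w != (1, 0):
            w, n = mul(w, u), n + 1
        return n

    g = next(u for u in elems if u != (0, 0) and order(u) == q - 1)
    # H = <g^4> U g<g^4>: the powers g^k with k = 0 or 1 (mod 4)
    H, w = set(), (1, 0)
    for k in range(q - 1):
        if k % 4 in (0, 1):
            H.add(w)
        w = mul(w, g)
    adj = {(u, v) for u in elems for v in elems
           if ((u[0] - v[0]) % p, (u[1] - v[1]) % p) in H}
    k3 = sum(1 for c in combinations(elems, 3)
             if all((a, b) in adj for a, b in combinations(c, 2)))
    k4 = sum(1 for c in combinations(elems, 4)
             if all((a, b) in adj for a, b in combinations(c, 2)))
    return k3, k4

# For p = 3 (q = 9): k3 = 6 and k4 = 0, in agreement with Table 1 below
# for k4; for p = 7 (q = 49) the counts match the paper's values as well.
```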
In \cite{jamesalex2}, Alexander found the number of cliques of order $3$ using the properties that Peisert graphs are edge-transitive and that any pair of vertices connected by an edge have the same number of common neighbors (a graph being edge-transitive means that, given any two edges in the graph, there exists a graph automorphism sending one edge to the other). In this article, we follow a character-sum approach to compute the number of cliques of orders $3$ and $4$ in Peisert graphs. In the following theorem, we give a new proof for the number of cliques of order $3$ in Peisert graphs by evaluating certain character sums. \begin{theorem}\label{thm1} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Then, the number of cliques of order $3$ in the Peisert graph $P^{\ast}(q)$ is given by $$k_3(P^\ast(q))=\dfrac{q(q-1)(q-5)}{48}.$$ \end{theorem} Next, we find the number of cliques of order $4$ in Peisert graphs. In this case, the character sums are difficult to evaluate. We use finite field hypergeometric functions to evaluate some of the character sums. Before we state our result on $k_4(P^\ast(q))$, we recall Greene's finite field hypergeometric functions from \cite{greene, greene2}. Let $p$ be an odd prime, and let $\mathbb{F}_q$ denote the finite field with $q$ elements, where $q=p^r, r\geq 1$. Let $\widehat{\mathbb{F}_q^{\times}}$ be the group of all multiplicative characters on $\mathbb{F}_q^{\times}$. We extend the domain of each $\chi\in \widehat{\mathbb{F}_q^{\times}}$ to $\mathbb{F}_q$ by setting $\chi(0)=0$, including for the trivial character $\varepsilon$. For multiplicative characters $A$ and $B$ on $\mathbb{F}_q$, the binomial coefficient ${A \choose B}$ is defined by \begin{align*} {A \choose B}:=\frac{B(-1)}{q}J(A,\overline{B}), \end{align*} where $J(A, B)=\displaystyle \sum_{x \in \mathbb{F}_q}A(x)B(1-x)$ denotes the Jacobi sum and $\overline{B}$ is the character inverse of $B$. 
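The Jacobi sums entering these binomial coefficients can be evaluated directly from the definition for small $q$. A Python sketch for $q=9$ (illustration only; $\mathbb{F}_9$ is realized as $\mathbb{F}_3[x]/(x^2+1)$ with primitive element $g=1+x$, and $\chi_4$ is normalized by $\chi_4(g)=i$); both $J(\chi_4,\chi_4)$ and $J(\chi_4,\varphi)$ come out to $3$:

```python
# Sketch for q = 9: evaluate J(chi_4, chi_4) and J(chi_4, phi) from the
# definition J(A, B) = sum_x A(x) B(1 - x), with chi(0) = 0 by convention.
# Characters are defined via discrete logarithms base g = 1 + x.
p = 3
mul = lambda u, v: ((u[0]*v[0] - u[1]*v[1]) % p, (u[0]*v[1] + u[1]*v[0]) % p)
g = (1, 1)                                   # 1 + x generates F_9^*

dlog, w = {}, (1, 0)                         # discrete-log table for F_9^*
for k in range(p * p - 1):
    dlog[w] = k
    w = mul(w, g)

def chi4(u, power=1):                        # chi_4^power; phi = chi_4^2
    return 0 if u == (0, 0) else 1j ** ((power * dlog[u]) % 4)

elems = [(a, b) for a in range(p) for b in range(p)]
one_minus = lambda u: ((1 - u[0]) % p, (-u[1]) % p)
J44   = sum(chi4(u) * chi4(one_minus(u)) for u in elems)
J4phi = sum(chi4(u) * chi4(one_minus(u), 2) for u in elems)
# Both sums equal 3 here; note |J| = sqrt(q) = 3, as expected.
```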
For a positive integer $n$, and $A_0,\ldots, A_n, B_1,\ldots, B_n\in \widehat{\mathbb{F}_q^{\times}}$, Greene \cite{greene, greene2} defined the ${_{n+1}}F_n$- finite field hypergeometric function over $\mathbb{F}_q$ by \begin{align*} {_{n+1}}F_n\left(\begin{array}{cccc} A_0, & A_1, & \ldots, & A_n\\ & B_1, & \ldots, & B_n \end{array}\mid x \right) :=\frac{q}{q-1}\sum_{\chi\in \widehat{\mathbb{F}_q^\times}}{A_0\chi \choose \chi}{A_1\chi \choose B_1\chi} \cdots {A_n\chi \choose B_n\chi}\chi(x). \end{align*} For $n=2$, we recall the following result from \cite[Corollary 3.14]{greene}: $${_{3}}F_{2}\left(\begin{array}{ccc} A, & B, & C \\ & D, & E \end{array}| \lambda\right)=\sum\limits_{x,y\in\mathbb{F}_q}A\overline{E}(x)\overline{C}E(1-x)B(y)\overline{B}D(1-y)\overline{A}(x-\lambda y).$$ Some of the biggest motivations for studying finite field hypergeometric functions have been their connections with Fourier coefficients and eigenvalues of modular forms and with counting points on certain kinds of algebraic varieties. For example, Ono \cite{ono} gave formulae for the number of $\mathbb{F}_p$-points on elliptic curves in terms of special values of Greene's finite field hypergeometric functions. In \cite{ono2}, Ono wrote a beautiful chapter on finite field hypergeometric functions and mentioned several open problems on hypergeometric functions and their relations to modular forms and algebraic varieties. In recent times, many authors have studied and found solutions to some of the problems posed by Ono. \par Finite field hypergeometric functions are useful in the study of Paley graphs, see for example \cite{dawsey, wage}. In the following theorem, we express the number of cliques of order $4$ in Peisert graphs in terms of finite field hypergeometric functions. \begin{theorem}\label{thm2} Let $p$ be a prime such that $p\equiv 3\pmod 4$. For a positive integer $t$, let $q=p^{2t}$. 
Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. If $\chi_4$ is a character of order $4$, then the number of cliques of order $4$ in the Peisert graph $P^{\ast}(q)$ is given by \begin{align*} k_4(P^\ast(q))=\frac{q(q-1)}{3072}\left[2(q^2-20q+81)+2 u(-p)^t+3q^2\cdot {_{3}}F_{2}\left(\begin{array}{ccc} \hspace{-.12cm}\chi_4, &\hspace{-.14cm} \chi_4, &\hspace{-.14cm} \chi_4^3 \\ & \hspace{-.14cm}\varepsilon, &\hspace{-.14cm} \varepsilon \end{array}| 1\right) \right]. \end{align*} \end{theorem} Using Sage, we numerically verify Theorem $\ref{thm2}$ for certain values of $q$. We list some of the values in Table \ref{Table-1}. We denote by ${_{3}}F_{2}(\cdot)$ the hypergeometric function appearing in Theorem \ref{thm2}. \begin{table}[ht] \begin{center} \begin{tabular}{|c |c | c | c | c | c | c|} \hline $p$ &$q$ & $k_4(P^\ast(q))$ & $u$ & $q^2 \cdot {_{3}}F_{2}(\cdot)$ & $k_4(P^\ast(q))$ &${_{3}}F_{2}(\cdot)$\\ && (by Sage) & & (by Sage) & (by Theorem \ref{thm2}) &\\\hline $3$ &$9$ & $0$ & $-1$ & $10$ & $0$& $0.1234\ldots$ \\ $7$ &$49$ & $2156$ & $7$ & $-30$ & $2156$& $-0.0123\ldots$\\ $3$ &$81$ & $21060$ & $7$ & $-62$ & $21060$& $-0.0094\ldots$\\ $11$ &$121$ & $116160$ & $7$ & $42$ & $116160$& $0.0028\ldots$\\ $19$ &$361$ & $10515930$ & $-17$ & $522$ & $10515930$& $0.0040\ldots$\\ $23$ &$529$ & $49135636$ & $23$ & $930$ & $49135636$& $0.0033\ldots$\\ \hline \end{tabular} \caption{Numerical data for Theorem \ref{thm2}} \label{Table-1} \end{center} \end{table} \par We note that the number of $3$-order cliques in the Peisert graph of order $q$ equals the number of $3$-order cliques in the Paley graph of the same order. The computations for the number of cliques of order $4$ are quite tedious, so we further give an asymptotic result in the following theorem, for the number of cliques of order $m$ in Peisert graphs, $m\geq 1$ being an integer. 
\begin{theorem}\label{asym} Let $p$ be a prime such that $p\equiv 3\pmod 4$. For a positive integer $t$, let $q=p^{2t}$. For $m\geq 1$, let $k_m(P^\ast(q))$ denote the number of cliques of order $m$ in the Peisert graph $P^\ast(q)$. Then $$\lim\limits_{q\to\infty}\dfrac{k_m(P^\ast(q))}{q^m}=\dfrac{1}{2^{\binom{m}{2}}m!}.$$ \end{theorem} \section{preliminaries and some lemmas} We begin by fixing some notation. For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$, that is, $\mathbb{F}_q^\ast=\mathbb{F}_q\setminus\{0\}=\langle g\rangle$. Now, we fix a multiplicative character $\chi_4$ on $\mathbb{F}_q$ of order $4$ (which exists since $q\equiv 1\pmod 4$). Let $\varphi$ be the unique quadratic character on $\mathbb{F}_q$. Then, we have $\chi_4^2=\varphi$. Let $H=\langle g^4\rangle\cup g\langle g^4\rangle$. Since $H$ is the union of two cosets of $\langle g^4\rangle $ in $\langle g\rangle $, we see that $|H|=2\times \frac{q-1}{4}=\frac{q-1}{2}$. We recall that a graph is vertex-transitive if, given any two of its vertices, there exists some graph automorphism sending one vertex to the other. Peisert graphs, being symmetric, are vertex-transitive. Also, the subgraphs induced by $\langle g^4\rangle$ and $g\langle g^4\rangle$ are both vertex-transitive: if $s,t$ are two elements of $\langle g^4\rangle$ (or $g\langle g^4\rangle$), then the map on the vertex set of $\langle g^4\rangle$ (or $g\langle g^4\rangle$) given by $x\longmapsto \frac{t}{s} x$ is an automorphism of the induced subgraph sending $s$ to $t$. The subgraph of $P^\ast(q)$ induced by $H$ is denoted by $\langle H\rangle$. \par Throughout the article, we fix $h=1-\chi_4(g)$.
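The objects just introduced are easy to experiment with. The following sketch is plain Python (the realization of $\mathbb{F}_{49}$ as $\mathbb{F}_7[i]$ with $i^2=-1$ and the search for a primitive element are our own implementation choices, not part of the construction): it builds $H=\langle g^4\rangle\cup g\langle g^4\rangle$ for $q=49$ and checks that $|H|=\frac{q-1}{2}$, that $H=-H$ (so that adjacency $x\sim y \Leftrightarrow x-y\in H$ is well defined), and that $x\longmapsto \frac{t}{s}x$ preserves adjacency on $\langle g^4\rangle$.

```python
# Build the Peisert-graph connection set H for q = 49 and verify the
# basic facts used in the text.  F_49 is realized as F_7[i], i^2 = -1
# (x^2 + 1 is irreducible mod 7 since 7 = 3 (mod 4)).
p = 7
q = p * p

def mul(a, b):                                  # multiply in F_49
    return ((a[0]*b[0] - a[1]*b[1]) % p, (a[0]*b[1] + a[1]*b[0]) % p)

def sub(a, b):                                  # subtract in F_49
    return ((a[0] - b[0]) % p, (a[1] - b[1]) % p)

elems = [(a, b) for a in range(p) for b in range(p)]
for g in elems:                                 # find a primitive element g
    if g == (0, 0):
        continue
    pows, z = [], (1, 0)
    for _ in range(q - 1):
        pows.append(z)
        z = mul(z, g)
    if len(set(pows)) == q - 1:
        break

fourth = set(pows[0::4])                        # <g^4>, the fourth powers
H = fourth | {mul(g, w) for w in fourth}        # H = <g^4> ∪ g<g^4>

assert len(H) == (q - 1) // 2                   # |H| = (q-1)/2 = 24
assert {sub((0, 0), w) for w in H} == H         # H = -H: adjacency is symmetric

# x -> (t/s)x is an automorphism of the subgraph induced by <g^4>:
s, t = pows[4], pows[8]                         # sample elements s = g^4, t = g^8
scale = mul(t, pows[q - 1 - 4])                 # t * s^{-1} = g^4
edges = {(x, y) for x in fourth for y in fourth if x != y and sub(x, y) in H}
assert {(mul(scale, x), mul(scale, y)) for x, y in edges} == edges
print("all checks passed for q =", q)
```

Multiplication by $t/s\in\langle g^4\rangle$ fixes the coset classes of $\langle g^4\rangle$, which is why the last check succeeds.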
For $x\in\mathbb{F}_q^\ast$, we have the following: \begin{align}\label{qq} \frac{2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)}{4} = \left\{ \begin{array}{lll} 1, & \hbox{if $\chi_4(x)\in\{1,\chi_4(g)\}$;} \\ 0, & \hbox{\text{otherwise.}} \end{array} \right. \end{align} We note here that for $x\neq 0$, $x\in H$ if and only if $\chi_4(x)=1$ or $\chi_4(x)=\chi_4(g)$. \par We have the following lemma which will be used in proving the main results. \begin{lemma}\label{rr} Let $q=p^{2t}$ where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a multiplicative character of order $4$ on $\mathbb{F}_q$, and let $\varphi$ be the unique quadratic character. Then, we have $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=-(-p)^t$. \end{lemma} \begin{proof} By \cite[Proposition 1]{katre}, we have $J(\chi_4,\chi_4)=-(-p)^t$. We also note that by Theorems 2.1.4 and 3.2.1 of \cite{berndt} (whose statements remain the same if we replace a prime by a prime power), we have $J(\chi_4,\varphi)=\chi_4(4)J(\chi_4,\chi_4)=a_4+ib_4$, where $a_4^2+b_4^2=q$ and $a_4\equiv -\left(\frac{q+1}{2}\right)\pmod 4$. Since $J(\chi_4,\chi_4)$ is real and $\chi_4(4)=\varphi(2)=\pm 1$, we get $b_4=0$ and hence $a_4=\pm p^t$. Moreover, $q\equiv 1\pmod 8$ gives $a_4\equiv -1\pmod 4$, which forces $a_4=-(-p)^t$. Thus, we obtain $J(\chi_4,\varphi)=J(\chi_4,\chi_4)=-(-p)^t$. \end{proof} Next, we evaluate certain character sums in the following lemmas. \begin{lemma}\label{lem1} Let $q\equiv 1\pmod 4$ be a prime power and let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ such that $\chi_4(-1)=1$, and let $\varphi$ be the unique quadratic character. Let $a\in\mathbb{F}_q$ be such that $a\neq0,1$.
Then, $$\sum_{y\in\mathbb{F}_q}\chi_4((y-1)(y-a))=\varphi(a-1)J(\chi_4,\chi_4).$$ \end{lemma} \begin{proof} We have \begin{align*} &\sum_{y\in\mathbb{F}_q}\chi_4((y-1)(y-a))=\sum_{y'\in\mathbb{F}_q}\chi_4(y'(y'+1-a))\\ &=\sum_{y''\in\mathbb{F}_q}\chi_4((1-a)y'')\chi_4((1-a)(y''+1)) =\varphi(1-a)\sum_{y''\in\mathbb{F}_q}\chi_4(y''(y''+1))\\ &=\varphi(1-a)\sum_{y''\in\mathbb{F}_q}\chi_4(-y''(-y''+1))\\ &=\varphi(1-a)J(\chi_4,\chi_4), \end{align*} where we used the substitutions $y-1=y'$, $y''=y'(1-a)^{-1}$, and replaced $y''$ by $-y''$. \end{proof} \begin{lemma}\label{lem2} Let $q\equiv 1\pmod 4$ be a prime power and let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ such that $\chi_4(-1)=1$. Let $a\in\mathbb{F}_q$ be such that $a\neq0,1$. Then, $$\sum_{y\in\mathbb{F}_q}\chi_4(y)\overline{\chi_4}(a-y)=-1.$$ \end{lemma} \begin{proof} We have \begin{align*} &\sum_{y\in\mathbb{F}_q}\chi_4(y)\overline{\chi_4}(a-y)=\sum_{y'\in\mathbb{F}_q}\chi_4(ay')\overline{\chi_4}(a-ay')\\ &=\sum_{y'\in\mathbb{F}_q}\chi_4(y')\overline{\chi_4}(1-y')\\ &=\sum_{y'\in\mathbb{F}_q}\chi_4\left(y'(1-y')^{-1}\right)\\ &=\sum_{y''\in\mathbb{F}_q, y''\neq -1}\chi_4(y'') =-1, \end{align*} where we used the substitutions $y' =y a^{-1}$ and $y'' =y'(1-y')^{-1}$, respectively. \end{proof} \begin{lemma}\label{lem3} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$. Then, \begin{align}\label{koro} \sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\overline{\chi_4}(x)\chi_4(y)\chi_4(1-y)\chi_4(x-y)=-2\rho \end{align} and \begin{align}\label{koro1} \sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\overline{\chi_4}(x)\chi_4(y)\chi_4(1-y)\overline{\chi_4}(x-y)=1-\rho. 
\end{align} \end{lemma} \begin{proof} By Lemma \ref{lem2}, we have \begin{align*} \sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\sum_{x\neq 0,1,y}\overline{\chi_4}(x)\chi_4(x-y) &=\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\left[-1-\chi_4(y-1) \right]\\ &=-\rho-\sum_y \chi_4(y)\varphi(1-y)=-2\rho, \end{align*} which proves \eqref{koro}. Next, using the substitution $x'=xy^{-1}$, we have \begin{align}\label{sum-new} \sum_x \overline{\chi_4}(x)\overline{\chi_4}(x-y)&=\sum_{x'} \overline{\chi_4}(x'y)\overline{\chi_4}(x'y-y)\notag \\ &=\varphi(y)\rho. \end{align} So, using \eqref{sum-new}, we find that \begin{align*} &\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\sum_{x\neq 0,1,y}\overline{\chi_4}(x)\overline{\chi_4}(x-y)\\ &=\sum_{y\neq 0,1}\chi_4(y)\chi_4(1-y)\left[\varphi(y)\rho-\overline{\chi_4}(y-1) \right]\\ &=\rho\sum_y \overline{\chi_4}(y)\chi_4(1-y)-\sum_y \chi_4(y)\\ &=-\rho+1. \end{align*} This completes the proof of the lemma. \end{proof} We need to evaluate several analogous character sums as in Lemma \ref{lem3}. To this end, we have the following two lemmas whose proofs merely involve Lemmas \ref{lem1} and \ref{lem2} (as in Lemma \ref{lem3}). \begin{lemma}\label{lema1} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$. Then, we have \begin{align*} &\sum\limits_{x,y\in\mathbb{F}_q, x\neq 1} \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y)\\ &=\left\{ \begin{array}{lll} -2\rho, & \hbox{if $(i_1, i_2, i_3)\in \{(1, 1, 1), (-1, -1, -1)\};$} \\ 2, & \hbox{if $(i_1, i_2, i_3)\in \{(1, 1, -1), (-1, -1, 1)\};$} \\ 1-\rho, & \hbox{if $(i_1, i_2, i_3)\in \{(1, -1, 1), (1, -1, -1), (-1, 1, 1), (-1, 1, -1)\}$.} \end{array} \right. \end{align*} \end{lemma} \begin{lemma}\label{corr} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. 
Let $\chi_4$ be a character on $\mathbb{F}_q$ of order $4$ and let $\varphi$ be the unique quadratic character. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where $\rho=-(-p)^t$. Then, for $i_1,i_2,i_3\in\{\pm 1\}$, we have the following tabulation of the values of the expression given below: \begin{align}\label{new-eqn1} \sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}A_x \cdot \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y). \end{align} For $w\in\{1,2,\ldots,8\}$ and $z\in \{1,2,\ldots,7\}$, the $(w,z)$-th entry in the table corresponds to \eqref{new-eqn1}, where $A_x$ is either $\chi_4(x),\overline{\chi_4}(x),\chi_4(1-x)$ or $\overline{\chi_4}(1-x)$ and the tuple $(i_1,i_2,i_3)$ depends on $w$. \begin{align*} \begin{array}{|l|l|l|l|l|l|l|} \cline {4 - 7 } \multicolumn{3}{c|}{} & \multicolumn{4}{|c|}{A_{x}} \\ \hline i_{1} & i_{2} & i_{3} & \chi_4(x) & \overline{\chi_4}(x) & \chi_4(1-x) & \overline{\chi_4}(1-x) \\ \hline 1 & 1 & 1 & -2 \rho & -2 \rho & -2 \rho & -2 \rho \\ 1 & 1 & -1 & 1-\rho & 1-\rho & 1- \rho & 1-\rho \\ 1 & -1 & 1 & {\rho}^2+1 & 2 & {\rho}^2-\rho & 1-\rho \\ 1 & -1 & -1 & 1-\rho & {\rho}^2-\rho & 2 & {\rho}^2+1 \\ -1 & 1 & 1 &{\rho}^2-\rho & 1-\rho &{\rho}^2+1 & 2 \\ -1 & 1 & -1 &2 & {\rho}^2+1 &1-\rho & {\rho}^2-\rho \\ -1 & -1 & 1 &1-\rho & 1-\rho &1-\rho & 1-\rho \\ -1 & -1 & -1 &-2\rho & -2\rho &-2\rho & -2\rho\\ \hline \end{array} \end{align*} For example, the $(3,6)$-th position contains the value ${\rho}^2-\rho$. Here $w=3$ corresponds to $i_1=1,i_2=-1,i_3=1$; $z=6$ corresponds to the column $A_x=\chi_4(1-x)$. So, $$\sum\limits_{x,y\in\mathbb{F}_q, x\neq 1}\chi_4(1-x)\chi_4(y) \overline{\chi_4}(1-y)\chi_4(x-y)={\rho}^2-\rho.$$ \end{lemma} \begin{proof} The calculations follow along the lines of Lemma \ref{lem1} and Lemma \ref{lem2}. 
For example, in Lemma \ref{lem3}, one can take $\chi_4(x),~\chi_4(x-1)$ or $\overline{\chi_4}(x-1)$ in place of $\overline{\chi_4}(x)$ in $\eqref{koro}$ and $\eqref{koro1}$ (which we denote by $A_x$), and easily evaluate the corresponding character sum. \end{proof} \begin{lemma}\label{lem4} Let $q=p^{2t}$, where $p\equiv 3\pmod 4$ is a prime and $t$ is a positive integer. Let $\chi_4$ be a character of order $4$. Let $\varphi$ and $\varepsilon$ be the quadratic and the trivial characters, respectively. Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. Then, \begin{align*} &{_{3}}F_2\left(\begin{array}{ccc} \chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon \end{array}\mid 1 \right)= {_{3}}F_2\left(\begin{array}{ccc}\overline{\chi_4}, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon \end{array}\mid 1\right)\\ &={_{3}}F_2\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}\mid 1\right) ={_{3}}F_2\left(\begin{array}{ccc}\overline{\chi_4}, & \chi_4, & \chi_4\\ & \varphi, & \varepsilon\end{array}\mid 1\right)\\ &=\frac{1}{q^2}[-2u(-p)^t]. \end{align*} \end{lemma} \begin{proof} Let $\chi_8$ be a character of order $8$ such that $\chi_8^2=\chi_4$. Now, Proposition 1 in \cite{katre} tells us that $J(\chi_4,\chi_4)=-(-p)^t$ and hence it is real. Again, by Theorem 3.3.3 and the paragraph preceding Theorem 3.3.1 in \cite{berndt}, $J(\chi_8,\chi_8^2)=\chi_8(-4)J(\chi_4,\chi_4)$, where $\chi_8(-4)=\pm 1$ (indeed $\chi_8(-1)=\pm 1$, and $\chi_8(4)=\chi_4(2)=\pm 1$ since $2\in\mathbb{F}_p$ is a square in $\mathbb{F}_q$); thus $J(\chi_8,\chi_8^2)$ is also real. By \cite[Theorem 4.37]{greene}, we have \begin{align}\label{doe} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)&=\binom{\chi_8}{\chi_8^2}\binom{\chi_8}{\chi_8^3}+\binom{\chi_8^5}{\chi_8^2}\binom{\chi_8^5}{\overline{\chi_8}}\notag \\ &=\frac{\chi_8(-1)}{q^2}[J(\chi_8,\chi_8^6)J(\chi_8,\chi_8^5)+J(\chi_8^5,\chi_8^6)J(\chi_8^5,\chi_8)].
\end{align} Using Theorems 2.1.5 and 2.1.6 in \cite{berndt}, together with the symmetry $J(A,B)=J(B,A)$, we obtain \begin{align*} &J(\chi_8,\chi_8^6)=\chi_8(-1)J(\chi_8,\chi_8),\\ &J(\chi_8,\chi_8^5)=J(\chi_8^5,\chi_8)=\chi_8(-1)J(\chi_8,\chi_8^2),\\ &J(\chi_8^5,\chi_8^6)=\chi_8(-1)\overline{J(\chi_8,\chi_8)}. \end{align*} Substituting these values in $\eqref{doe}$ and using \cite[Lemma 3.6 (2)]{dawsey}, we find that \begin{align}\label{real} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)&=\frac{\chi_8(-1)}{q^2}[J(\chi_8,\chi_8)J(\chi_8,\chi_8^2)+\overline{J(\chi_8,\chi_8)}J(\chi_8,\chi_8^2)]\notag \\ &=\frac{1}{q^2}J(\chi_8,\chi_8^2)\times 2 Re(J(\chi_8,\chi_8))\times \chi_8(-1)\notag \\ &=\frac{1}{q^2}[-2u(-p)^t]. \end{align} Since ${_{3}}F_{2}\left(\begin{array}{ccc}\overline{\chi_4}, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)$ is the complex conjugate of ${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \chi_4\\ & \varepsilon, & \varepsilon\end{array}\mid 1\right)$, and the value given in \eqref{real} is real, the two are equal. Using \cite[Theorem 4.37]{greene} again, we have \begin{align}\label{jack} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}| 1\right)&=\binom{\overline{\chi_8}}{\chi_8^2}\binom{\overline{\chi_8}}{\chi_8}+\binom{\chi_8^3}{\chi_8^2}\binom{\chi_8^3}{\overline{\chi_8}^3}\notag \\ &=\frac{\chi_8(-1)}{q^2}[J(\overline{\chi_8},\overline{\chi_8}^2)J(\overline{\chi_8},\overline{\chi_8})+J(\chi_8^3,\overline{\chi_8}^2)J(\chi_8^3,\chi_8^3)]. \end{align} Recalling Theorem 2.1.6 in \cite{berndt} gives $J(\chi_8,\chi_8)=J(\chi_8^3,\chi_8^3)$. Also, Theorem 2.1.5 in \cite{berndt} gives $J(\chi_8^3,\overline{\chi_8}^2)=\overline{J(\chi_8^5,\chi_8^2)}=\overline{J(\chi_8,\chi_8^2)}=J(\chi_8,\chi_8^2)$.
Hence, $\eqref{jack}$ yields \begin{align*} {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \overline{\chi_4}, & \overline{\chi_4}\\ & \varphi, & \varepsilon\end{array}| 1\right)&=\frac{1}{q^2}J(\chi_8,\chi_8^2)\times 2 Re(J(\chi_8,\chi_8))\times \chi_8(-1)\\ &=\frac{1}{q^2}[-2u(-p)^t], \end{align*} which is the same real number we found in $\eqref{real}$. Hence, its complex conjugate, namely ${_{3}}F_{2}\left(\begin{array}{ccc}\overline{\chi_4}, & \chi_4, & \chi_4\\ & \varphi, & \varepsilon\end{array}| 1\right)$ is also real and has the same value. This completes the proof of the lemma. \end{proof} Next, we note the following observations given in the beginning of the sixth section in \cite{dawsey}. We state it as a lemma since we shall use it in proving Theorem \ref{thm2}. Greene \cite{greene, greene2} gave some transformation formulae which we list here as follows. Let $A,B,C,D,E$ be characters on $\mathbb{F}_q$. Then, we have \begin{align} &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)={_{3}}F_{2}\left(\begin{array}{ccc}B\overline{D}, & A\overline{D}, & C\overline{D}\\ & \overline{D}, & E\overline{D}\end{array}| 1\right),\label{1}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=ABCDE(-1)\cdot {_{3}}F_{2}\left(\begin{array}{ccc}A, & A\overline{D}, & A\overline{E}\\ & A\overline{B}, & A\overline{C}\end{array}| 1\right),\label{2}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}|1\right)=ABCDE(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}B\overline{D}, & B, & B\overline{E}\\ & B\overline{A}, & B\overline{C}\end{array}| 1\right),\label{3}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E\end{array}| 1\right)=AE(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & E\overline{C}\\ & AB\overline{D}, & E\end{array}| 1\right),\label{4}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E\end{array}| 1\right)=AD(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}A, & 
D\overline{B}, & C\\ & D, & AC\overline{E} \end{array}| 1\right),\label{5}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=B(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\overline{A}D, & B, & C\\ & D, & BC\overline{E}\end{array}| 1\right),\label{6}\\ &{_{3}}F_{2}\left(\begin{array}{ccc}A, & B, & C\\ & D, & E \end{array}| 1\right)=AB(-1)\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\overline{A}D, & \overline{B}D, & C\\ & D, & DE\overline{AB}\end{array}| 1\right).\label{7} \end{align} Let $X=\{(t_1,t_2,t_3,t_4,t_5)\in\mathbb{Z}_4^5: t_1,t_2,t_3\neq 0,t_4,t_5;~t_1+t_2+t_3\neq t_4,t_5\}$. To each of the transformations in $\eqref{1}$ to $\eqref{7}$, Dawsey and McCarthy in \cite{dawsey} associated a map on $X$; for example, the transformation in $\eqref{1}$ gives that $${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_1}, & \chi_4^{t_2}, & \chi_4^{t_3}\\ & \chi_4^{t_4}, & \chi_4^{t_5}\end{array}| 1\right)={_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_2-t_4}, &\chi_4^{t_1-t_4} , & \chi_4^{t_3-t_4}\\ & \chi_4^{-t_4}, &\chi_4^{t_5-t_4}\end{array}| 1\right),$$ so it induces a map $f_1: X\rightarrow X$ given by $$f_1(t_1,t_2,t_3,t_4,t_5)=(t_2-t_4,t_1-t_4,t_3-t_4,-t_4,t_5-t_4).$$ Similarly, the other transformations in $\eqref{2}$ to $\eqref{7}$ led to the construction of the maps $f_2$ to $f_7$. \begin{lemma}\label{dlemma} Let $X=\{(t_1,t_2,t_3,t_4,t_5)\in\mathbb{Z}_4^5: t_1,t_2,t_3\neq 0,t_4,t_5;~t_1+t_2+t_3\neq t_4,t_5\}$. 
Define the functions $f_i:X\rightarrow X,~i\in\{1,2,\ldots,7\}$, in the following manner: \begin{align*} f_1(t_1,t_2,t_3,t_4,t_5)&=(t_2-t_4,t_1-t_4,t_3-t_4,-t_4,t_5-t_4),\\ f_{2}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{1}-t_{4}, t_{1}-t_{5}, t_{1}-t_{2}, t_{1}-t_{3}\right),\\ f_{3}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{2}-t_{4}, t_{2}, t_{2}-t_{5}, t_{2}-t_1,t_2-t_3\right),\\ f_{4}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{2}, t_{5}-t_{3}, t_{1}+t_{2}-t_{4}, t_{5}\right),\\ f_{5}\left(t_1, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{1}, t_{4}-t_{2}, t_{3}, t_{4}, t_{1}+t_{3}-t_{5}\right),\\ f_{6}\left(t_{1}, t_{2}, t_{3}, t_{4}, t_{5}\right)&=\left(t_{4}-t_{1}, t_{2}, t_{3}, t_{4}, t_{2}+t_{3}-t_{5}\right),\\ f_{7}\left(t_{1},t_{2}, t_{3},t_{4}, t_{5}\right)&=\left(t_{4}-t_{1},t_{4}-t_{2},t_{3}, t_{4}, t_{4}+t_{5}-t_{1}-t_{2}\right). \end{align*} Then the group generated by $f_1,\ldots,f_7$ under composition of functions is the set $$\mathcal{F}=\{f_0,f_i,f_j \circ f_l,f_4\circ f_1,f_6\circ f_2,f_5\circ f_3,f_1\circ f_4\circ f_1: 1\leq i\leq 7,~1\leq j\leq 3,~4\leq l\leq 7\},$$ where $f_0$ is the identity map. \\ Moreover, the group $\mathcal{F}$ acts on the set $X$. If we associate the $5$-tuple $(t_1,t_2,\ldots,t_5)\in X$ to the hypergeometric function ${_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_1}, & \chi_4^{t_2}, & \chi_4^{t_3}\\ & \chi_4^{t_4}, & \chi_4^{t_5}\end{array}| 1\right)$, then each orbit of the group action consists of a number of $5$-tuples $(t_1,t_2,\ldots,t_5)$, and the corresponding ${}_3 F_{2}$ terms have the same value. \end{lemma} \begin{proof} For a proof, see Section $6$ of \cite{dawsey}. \end{proof} The following famous theorem, due to Andr\'e Weil, is the crux of the proof of Theorem \ref{asym}. We state it here.
\begin{theorem}[Weil's estimate]\label{weil} Let $\mathbb{F}_q$ be the finite field of order $q$, and let $\chi$ be a multiplicative character of $\mathbb{F}_q$ of order $s>1$. Let $f(x)$ be a polynomial of degree $d$ over $\mathbb{F}_q$ such that $f(x)$ cannot be written in the form $c\cdot {h(x)}^s$ for any $c\in\mathbb{F}_q$ and any polynomial $h(x)$ over $\mathbb{F}_q$. Then $$\Bigl\lvert\sum_{x\in\mathbb{F}_q}\chi(f(x))\Bigr\rvert\leq (d-1)\sqrt{q}.$$ \end{theorem} The rest of the article is organized as follows. In Section $3$, we prove Theorem \ref{thm1}. In Section $4$, we prove Theorem \ref{thm2}. Finally, in Section $5$ we prove the asymptotic formula for the number of cliques of any order in Peisert graphs. To count the number of cliques in Peisert graphs, we note that since the graph is vertex-transitive, any two vertices in the graph are contained in the same number of cliques of a given order. We will also use the following notation throughout the proofs. For an induced subgraph $S$ of a Peisert graph and a vertex $v\in S$, we denote by $k_3(S)$ and $k_3(S,v)$ the number of cliques of order $3$ in $S$ and the number of cliques of order $3$ in $S$ containing $v$, respectively. \section{number of $3$-order cliques in $P^\ast(q)$} In this section, we prove Theorem \ref{thm1}. Recall that $\mathbb{F}_q^\ast=\langle g\rangle$ and $H=\langle g^4\rangle\cup g\langle g^4\rangle$. Also, $\langle H\rangle$ is the subgraph induced by $H$ and $h=1-\chi_4(g)$. \begin{proof}[Proof of Theorem \ref{thm1}] Using the vertex-transitivity of $P^\ast(q)$, we find that \begin{align}\label{trian} k_3(P^\ast(q))&=\frac{1}{3}\times q\times k_3(P^\ast(q),0)\notag \\ &=\frac{q}{3}\times \text{number of edges in }\langle H\rangle .
\end{align} Now, \begin{align}\label{ww-new} \text{the number of edges in~} \langle H\rangle =\frac{1}{2}\times \mathop{\sum\sum}_{\chi_4(x-y)\in \{1, \chi_4(g)\}} 1, \end{align} where the 1st sum is taken over all $x$ such that $\chi_4(x)\in\{1,\chi_4(g)\}$ and the 2nd sum is taken over all $y\neq x$ such that $\chi_4(y)\in\{1,\chi_4(g)\}$. Hence, using \eqref{qq} in \eqref{ww-new}, we find that \begin{align}\label{ww} &\text{the number of edges in~}\langle H\rangle \notag \\ &=\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))\notag\\ &\hspace{1.5cm}\times \sum\limits_{y\neq 0,x}[(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))]. \end{align} We expand the inner summation in $\eqref{ww}$ to obtain \begin{align}\label{ee} &\sum\limits_{y\neq 0,x}[4+2h\chi_4(y)+2\overline{h}\overline{\chi_4}(y)+2h\chi_4(x-y)+2\overline{h}\overline{\chi_4}(x-y)+2\chi_4(y)\overline{\chi_4}(x-y)\notag \\ & +2\overline{\chi_4}(y)\chi_4(x-y)-2\chi_4(g)\chi_4(y(x-y))+2\chi_4(g)\overline{\chi_4}(y(x-y))]. \end{align} We have \begin{align}\label{new-eqn3} \sum\limits_{y\neq 0,x}\chi_4(y(x-y))=\sum\limits_{y\neq 0,1}\chi_4(xy)\chi_4(x-xy)=\varphi(x) J(\chi_4,\chi_4). \end{align} Using Lemma \ref{lem2} and \eqref{new-eqn3}, \eqref{ee} yields \begin{align}\label{new-eqn2} &\sum\limits_{y\neq 0,x}[(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))]\notag \\ &=4(q-3)-4h\chi_4(x)-4\overline{h}\overline{\chi_4}(x)-2\chi_4(g)\varphi(x)J(\chi_4,\chi_4)+2\chi_4(g)\varphi(x)\overline{J(\chi_4,\chi_4)}. 
\end{align} Now, putting \eqref{new-eqn2} into \eqref{ww}, and then using Lemma \ref{rr}, we find that \begin{align*} &\text{the number of edges in }\langle H\rangle\\ =&\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}[(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(4(q-3)-4h\chi_4(x)-4\overline{h}\overline{\chi_4}(x))]\\ =&\frac{1}{2\times 4^3}\sum\limits_{x\neq 0}[8(q-5)+(4h(q-3)-8h)\chi_4(x)+(4\overline{h}(q-3)-8\overline{h})\overline{\chi_4}(x)]\\ =&\frac{(q-1)(q-5)}{16}. \end{align*} Substituting this value in $\eqref{trian}$ gives us the required result. \end{proof} \section{number of $4$-order cliques in $P^\ast(q)$} In this section, we prove Theorem \ref{thm2}. First, we recall again that $\mathbb{F}_q^\ast=\langle g\rangle$ and $H=\langle g^4\rangle\cup g \langle g^4\rangle$. Let $J(\chi_4,\chi_4)=J(\chi_4,\varphi)=\rho$, where the value of $\rho$ is given by Lemma \ref{rr}. Let $q=u^2+2v^2$ for integers $u$ and $v$ such that $u\equiv 3\pmod 4$ and $p\nmid u$ when $p\equiv 3\pmod 8$. Let $\chi_8$ be a character of order $8$ such that $\chi_8^2=\chi_4$. Note that in the proof we shall use the fact that $\chi_4(-1)=1$ multiple times. Recall that $h=1-\chi_4(g)$. \begin{proof}[Proof of Theorem \ref{thm2}] Noting again that $P^\ast(q)$ is vertex-transitive, we find that \begin{align}\label{tt} k_4(P^\ast(q)) &=\frac{q}{4}\times \text{ number of $4$-order cliques in $P^\ast(q)$ containing }0\notag \\ &=\frac{q}{4}\times k_3(\langle H\rangle). \end{align} Let $a, b\in H$ be such that $\chi_4(ab^{-1})=1$. We note that \begin{align}\label{new-eqn4} k_3(\langle H\rangle, a) =\frac{1}{2}\times \mathop{\sum\sum}_{\chi_4(x-y)\in \{1, \chi_4(g)\}} 1, \end{align} where the 1st sum is taken over all $x$ such that $\chi_4(x), \chi_4(a-x)\in\{1,\chi_4(g)\}$ and the 2nd sum is taken over all $y\neq x$ such that $\chi_4(y), \chi_4(a-y)\in\{1,\chi_4(g)\}$. 
Hence, using \eqref{qq} in \eqref{new-eqn4}, we find that \begin{align*} &k_3(\langle H\rangle, a)\\ &=\frac{1}{2\times 4^5}\sum_{x\neq 0,a}\sum_{y\neq 0,a,x}[(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\ &\times (2+h\chi_4(a-y)+\overline{h}\overline{\chi_4}(a-y))(2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))\\ &\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))]. \end{align*} Using the substitution $Y=ba^{-1}y$, the sum indexed by $y$ in the above yields \begin{align*} &k_3(\langle H\rangle, a)\\ &=\frac{1}{2\times 4^5}\sum_{x\neq 0,a}\sum_{Y\neq 0,b,ba^{-1}x} [(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\ &\times (2+h\chi_4(Y-b)+\overline{h}\overline{\chi_4}(Y-b))(2+h\chi_4(Y-ba^{-1}x)+\overline{h}\overline{\chi_4}(Y-ba^{-1}x))\\ &\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))] \\ &=\frac{1}{2\times 4^5}\sum_{Y\neq 0,b}\sum_{x\neq 0,a,ab^{-1}Y}[(2+h\chi_4(a-x)+\overline{h}\overline{\chi_4}(a-x))\\ &\times (2+h\chi_4(Y-b)+\overline{h}\overline{\chi_4}(Y-b)) (2+h\chi_4(Y-ba^{-1}x)+\overline{h}\overline{\chi_4}(Y-ba^{-1}x))\\ &\times (2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))]. \end{align*} Again, using the substitution $X=ba^{-1}x$ yields \begin{align*} &k_3(\langle H\rangle, a)\\ &=\frac{1}{2\times 4^5}\sum_{Y\neq 0,b}\sum_{X\neq 0,b,Y}[(2+h\chi_4(b-X)+\overline{h}\overline{\chi_4}(b-X))\\ &\times(2+h\chi_4(b-Y)+\overline{h}\overline{\chi_4}(b-Y))(2+h\chi_4(X-Y)+\overline{h}\overline{\chi_4}(X-Y))\\ &\times (2+h\chi_4(X)+\overline{h}\overline{\chi_4}(X))(2+h\chi_4(Y)+\overline{h}\overline{\chi_4}(Y))] \\ &=k_3(\langle H\rangle,b). \end{align*} Thus, if $a, b\in H$ are such that $\chi_4(ab^{-1})=1$, then \begin{align}\label{cond} k_3(\langle H\rangle,a)=k_3(\langle H\rangle,b). 
\end{align} Let $\langle g^4\rangle =\{x_1,\ldots,x_{\frac{q-1}{4}}\}$ with $x_1=1$ and $g\langle g^4\rangle=\{y_1,\ldots, y_{\frac{q-1}{4}}\}$ with $y_1=g$. Then, \begin{align}\label{pick} \sum_{i=1}^{\frac{q-1}{4}}k_3(\langle H\rangle,x_i)+\sum_{i=1}^{\frac{q-1}{4}}k_3(\langle H\rangle,y_i)=3\times k_3(\langle H\rangle). \end{align} By $\eqref{cond}$, we have $$k_3(\langle H\rangle,x_1)=k_3(\langle H\rangle,x_2)=\cdots=k_3(\langle H\rangle,x_{\frac{q-1}{4}})$$ and $$k_3(\langle H\rangle,y_1)=k_3(\langle H\rangle,y_2)=\cdots=k_3(\langle H\rangle,y_{\frac{q-1}{4}}).$$ Hence, \eqref{pick} yields \begin{align}\label{1g} k_3(\langle H\rangle)=\frac{q-1}{12}[k_3(\langle H\rangle, 1)+ k_3(\langle H\rangle, g)]. \end{align} Thus, we need to find only $k_3(\langle H\rangle, 1)$ and $k_3(\langle H\rangle, g)$. We first find $k_3(\langle H\rangle, 1)$. \par We have \begin{align}\label{xandy} &k_3(\langle H\rangle,1)\notag \\ &=\frac{1}{2\times 4^5}\sum_{x\neq 0,1}[ (2+h\chi_4(1-x)+\overline{h}\overline{\chi_4}(1-x))(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x))]\notag\\ &\hspace{1.5cm} \sum_{y\neq 0,1,x}[(2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)) (2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)) \notag \\ &\hspace{2.5cm}\times (2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))]. \end{align} Let $i_1,i_2,i_3\in\{\pm 1\} $ and let $F_{i_1,i_2,i_3}$ denote the term $\chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y)$. Using this notation, we expand and evaluate the inner summation in \eqref{xandy}. 
We have \begin{align}\label{sun} &\sum_{y\neq 0,1,x}[2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y)][2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)][2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)]\notag\\ &=\sum_{y\neq 0,1,x}[8+4h\chi_4(y)+4\overline{h}\overline{\chi_4}(y)+4h\chi_4(1-y)+4\overline{h}\overline{\chi_4}(1-y)+4h\chi_4(x-y)\notag\\&+4\overline{h}\overline{\chi_4}(x-y) +4\chi_4(y)\overline{\chi_4}(1-y)+4\overline{\chi_4}(y)\chi_4(1-y)+4\chi_4(y)\overline{\chi_4}(x-y)\notag\\ &+4\overline{\chi_4}(y)\chi_4(x-y) +4\chi_4(1-y)\overline{\chi_4}(x-y)+4\overline{\chi_4}(1-y)\chi_4(x-y)\notag\\ &+2h^2\chi_4(y)\chi_4(1-y)+2{\overline{h}}^2\overline{\chi_4}(y)\overline{\chi_4}(1-y)+2h^2\chi_4(y)\chi_4(x-y)\notag\\ &+2{\overline{h}}^2\overline{\chi_4}(y)\overline{\chi_4}(x-y) +2h^2\chi_4(1-y)\chi_4(x-y)+2{\overline{h}}^2\overline{\chi_4}(1-y)\overline{\chi_4}(x-y)\notag\\ &+h^3 F_{1,1,1}+2hF_{1,1,-1}+2hF_{1,-1,1}+2\overline{h}F_{1,-1,-1}+2hF_{-1,1,1}+2\overline{h}F_{-1,1,-1} \notag\\ &+2\overline{h}F_{-1,-1,1}+{\overline{h}}^3F_{-1,-1,-1}]. \end{align} Now, referring to Lemmas \ref{lem1} and \ref{lem2}, we can easily check that any term of the form $\sum\limits_{y}\chi_4(\cdot)\overline{\chi_4}(\cdot)$ gives $-1$, $\sum\limits_y \chi_4((y-1)(y-x))$ gives $\varphi(x-1)\rho$ and $\sum\limits_y \chi_4(y(y-x))$ gives $\varphi(x)\rho$. 
Hence, $\eqref{sun}$ yields \begin{align}\label{yonly} &\sum_{y\neq 0,1,x}[2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y)][2+h\chi_4(1-y)+\overline{h}\overline{\chi_4}(1-y)][2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y)]\notag \\ &=A+B\chi_4(x)+\overline{B}\overline{\chi_4}(x)+B\chi_4(x-1)+\overline{B}\overline{\chi_4}(x-1)-4\chi_4(x)\overline{\chi_4}(x-1)\notag \\ &-4\overline{\chi_4}(x)\chi_4(x-1)-2h^2\chi_4(x)\chi_4(x-1)-2{\overline{h}}^2\overline{\chi_4}(x)\overline{\chi_4}(x-1)\notag \\ &+h^3 F_{1,1,1}+2hF_{1,1,-1}+2hF_{1,-1,1}+2\overline{h}F_{1,-1,-1}+2hF_{-1,1,1}+2\overline{h}F_{-1,1,-1}\notag \\&+2\overline{h}F_{-1,-1,1}+{\overline{h}}^3F_{-1,-1,-1}\notag\\ &=:\mathcal{I}, \end{align} where $A=8(q-8)$ and $B=-12h$. \par Next, we introduce some notations. Let \begin{align*} B_1&=16(q-9)+6B+\overline{B}h^2,\\ D_1&=2\overline{B}-8\overline{h}+Bh^2-4h^3,\\ E_1&=8(q-9)+4Bh,\\ F_1&=16(q-9)+4 Re(B\overline{h}). \end{align*} For $i\in\{1,2,3,4\}$ and $j\in\{1,2,\ldots,8\}$, we define the following character sums. \begin{align*} T_j&:=\sum_{x\neq 0,1}\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y),\\ U_{ij}&:=\sum_{x\neq 0,1}\chi_4^l(m)\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y),\\ V_{ij}&:=\sum_x\chi_4^{l_1}(x)\chi_4^{l_2}(1-x)\sum_y \chi_4^{i_1}(y)\chi_4^{i_2}(1-y)\chi_4^{i_3}(x-y), \end{align*} where \begin{align*} l = \left\{ \begin{array}{lll} 1, & \hbox{if $i$ is odd,} \\ -1, & \hbox{\text{otherwise};} \end{array} \right. \end{align*} \begin{align*} m = \left\{ \begin{array}{lll} x, & \hbox{if $i\in\{1,2\}$,} \\ 1-x, & \hbox{\text{otherwise;}} \end{array} \right. \end{align*} and \begin{align*} (l_1,l_2) = \left\{ \begin{array}{lll} (1,1), & \hbox{if $i=1$,} \\ (1,-1), & \hbox{if $i=2$,} \\ (-1,1), & \hbox{if $i=3$,} \\ (-1,-1), & \hbox{if $i=4$.} \end{array} \right. \end{align*} Also, corresponding to each $j$, let $(i_1,i_2,i_3)$ take the value according to the following table: \begin{table}[h!] 
\begin{center} \begin{tabular}{ |c| c| c| c| } \hline $j$ & $i_1$ & $i_2$ & $i_3$ \\ \hline $1$ & $1$ & $1$ & $1$ \\ $2$ & $1$ & $1$ & $-1$ \\ $3$ & $1$ & $-1$ & $1$\\ $4$ & $1$ & $-1$ & $-1$\\ $5$ & $-1$ & $1$ & $1$\\ $6$ & $-1$ & $1$ & $-1$\\ $7$ & $-1$ & $-1$ & $1$\\ $8$ & $-1$ & $-1$ & $-1$\\ \hline \end{tabular} \end{center} \end{table}\\ Then, using $\eqref{yonly}$ and the notations we just described, $\eqref{xandy}$ yields \begin{align*} &k_3(\langle H\rangle,1)=\frac{1}{2048}\sum_{x\neq 0,1}[2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)][2+h\chi_4(1-x)+\overline{h}\overline{\chi_4}(1-x)]\times \mathcal{I}\\ =&\frac{1}{2048}\sum_{x\neq 0,1}\Big[ 32(q-15)+B_1\chi_4(x)+\overline{B_1}\overline{\chi_4}(x)+B_1\chi_4(x-1)+\overline{B_1}\overline{\chi_4}(x-1)\\ &+4 Re(Bh)\varphi(x)+4 Re(Bh)\varphi(x-1)+D_1\chi_4(x)\varphi(x-1)+\overline{D_1}\overline{\chi_4}(x)\varphi(x-1)\\ &+D_1\varphi(x)\chi_4(x-1)+\overline{D_1}\varphi(x)\overline{\chi_4}(x-1)+E_1\chi_4(x)\chi_4(x-1)+\overline{E_1}\overline{\chi_4}(x)\overline{\chi_4}(x-1) \\ &+F_1\chi_4(x)\overline{\chi_4}(1-x)+\overline{F_1}\overline{\chi_4}(x)\chi_4(x-1)\Big]\\ &+\frac{1}{2\times 4^5}\Big[ 4h^3T_1+8hT_2+8h T_3+8\overline{h}T_4+8h T_5+8\overline{h}T_6+8\overline{h}T_7+4{\overline{h}}^3 T_8\\ &+2h^4 U_{11}+4h^2 U_{12}+4h^2 U_{13}+8 U_{14}+4h^2 U_{15}+8 U_{16}+8 U_{17}+4{\overline{h}}^2 U_{18}\\ &+4h^2 U_{21} +8 U_{22}+8 U_{23}+4{\overline{h}}^2 U_{24}+8 U_{25}+4{\overline{h}}^2 U_{26}+4{\overline{h}}^2 U_{27}+2{\overline{h}}^4 U_{28}\\ &+2h^4 U_{31}+4h^2 U_{32}+4h^2 U_{33}+8 U_{34}+4h^2 U_{35}+8 U_{36}+8 U_{37}+4{\overline{h}}^2 U_{38}\\ &+4h^2 U_{41} +8 U_{42}+8 U_{43}+4{\overline{h}}^2 U_{44}+8 U_{45}+4{\overline{h}}^2 U_{46}+4{\overline{h}}^2 U_{47}+2{\overline{h}}^4 U_{48}\\ &+h^5 V_{11}+2h^3 V_{12}+2h^3 V_{13}+4h V_{14}+2h^3 V_{15}+4h V_{16}+4h V_{17}+4\overline{h} V_{18}\\ &+2h^3 V_{21}+4h V_{22}+4h V_{23}+4\overline{h}V_{24}+4h V_{25}+4\overline{h}V_{26}+4\overline{h}V_{27}+2{\overline{h}}^3 V_{28}\\ 
&+2h^3 V_{31}+4h V_{32}+4h V_{33}+4\overline{h}V_{34}+4h V_{35}+4\overline{h}V_{36}+4\overline{h}V_{37}+2{\overline{h}}^3 V_{38}\\ &+4h V_{41}+4\overline{h}V_{42}+4\overline{h}V_{43}+2{\overline{h}}^3 V_{44}+4\overline{h}V_{45}+2{\overline{h}}^3 V_{46}+2{\overline{h}}^3 V_{47}+{\overline{h}}^5 V_{48} \Big]. \end{align*} Using Lemmas \ref{lem3}, \ref{lema1} and \ref{corr}, we find that \begin{align}\label{bigex} &k_3(\langle H\rangle,1)=\frac{1}{2048}\left[32(q^2-20q+81) \right.\notag \\ &+h^5 V_{11}+2h^3 V_{12}+2h^3 V_{13}+4h V_{14}+2h^3 V_{15}+4h V_{16}+4h V_{17}+4\overline{h} V_{18}\notag \\ &+2h^3 V_{21}+4h V_{22}+4h V_{23}+4\overline{h}V_{24}+4h V_{25}+4\overline{h}V_{26}+4\overline{h}V_{27}+2{\overline{h}}^3 V_{28}\notag \\ &+2h^3 V_{31}+4h V_{32}+4h V_{33}+4\overline{h}V_{34}+4h V_{35}+4\overline{h}V_{36}+4\overline{h}V_{37}+2{\overline{h}}^3 V_{38}\notag \\ &\left.+4h V_{41}+4\overline{h}V_{42}+4\overline{h}V_{43}+2{\overline{h}}^3 V_{44}+4\overline{h}V_{45}+2{\overline{h}}^3 V_{46}+2{\overline{h}}^3 V_{47}+{\overline{h}}^5 V_{48}\right]. \end{align} Now, we convert each term of the form $V_{i j}$ $[i \in\{1,2,3,4\}, j\in\{1,2, \ldots, 8\}]$ into its equivalent $q^{2}\cdot {_{3}}F_{2}$ form. We use the notation $(t_{1}, t_{2}, \ldots, t_{5})\in \mathbb{Z}_4^5$ for the term $q^{2}\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4^{t_{1}}, & \chi_4^{t_{2}}, & \chi_4^{t_{3}}\\ & \chi_4^{t_{4}}, & \chi_4^{t_{5}}\end{array}| 1\right)$. Then, $\eqref{bigex}$ yields \begin{align}\label{bigexp} &k_3(\langle H\rangle,1)=\frac{1}{2048}\left[32(q^2-20q+81)\notag \right. 
\\ &\hspace{.5cm}+h^{5}(3,1,1,2,2)+2 h^{3}(1,1,3,2,0)+2 h^{3}(3,1,1,0,2)+4 h(1,1,3,0,0)\notag \\ &\hspace{.5cm}+2h^{3}(3,3,1,0,2)+4 h(1,3,3,0,0)+4 h(3,3,1,2,2)+4 \overline{h}(1,3,3,2,0) \notag\\ &\hspace{.5cm}+2 h^{3}(3,1,3,2,2)+4 h(1,1,1,2,0)+4 h(3,1,3,0,2)+4 \overline{h}(1,1,1,0,0)\notag\\ &\hspace{.5cm}+4 h(3,3,3,0,2)+4 \overline{h}(1,3,1,0,0)+4 \overline{h}(3,3,3,2,2)+2 {\overline{h}}^{3}(1,3,1,2,0)\notag \\ &\hspace{.5cm}+2 h^{3}(3,1,3,2,0)+4 h(1,1,1,2,2)+4 h(3,1,3,0,0)+4 \overline{h}(1,1,1,0,2)\notag\\ &\hspace{.5cm}+4 h(3,3,3,0,0)+4 \overline{h}(1,3,1,0,2)+4 \overline{h}(3,3,3,2,0)+2 {\overline{h}}^{3}(1,3,1,2,2) \notag \\ &\hspace{.5cm}+4 h(3,1,1,2,0)+4 \overline{h}(1,1,3,2,2)+4 \overline{h}(3,1,1,0,0)+2{\overline{h}}^{3}(1,1,3,0,2)\notag\\ &\hspace{.5cm}\left. +4 \overline{h}(3,3,1,0,0)+2 \overline{h}^{3}(1,3,3,0,2)+ 2\overline{h}^{3}(3,3,1,2,0)+{\overline{h}}^{5}(1,3,3,2,2)\right]. \end{align} Next, we use Lemma \ref{dlemma} along with the notation therein. We list the tuples $(t_1,t_2,\ldots, t_5)$ in each orbit of the group action of $\mathcal{F}$ on $X$, and then group the corresponding terms in $\eqref{bigexp}$ together. The orbit representatives $(1,1,1,0,0)$, $(3,3,3,0,0)$, $(1,3,3,2,0)$, $(3,1,1,2,0)$ and $(1,1,3,0,0)$ given in the proof of Corollary 2.7 in \cite{dawsey} are the ones whose orbits exhaust the hypergeometric terms in $\eqref{bigexp}$. We denote the $q^2\cdot {_{3}}F_{2}$ terms corresponding to these orbit representatives by $M_1,M_2,\ldots,M_5$, respectively. Then, $\eqref{bigexp}$ yields \begin{align}\label{mex} &k_3(\langle H\rangle,1)=\frac{1}{2048} \left[32(q^2-20q+81)\right.
\notag \\ &\hspace{.5cm}+h^{5}M_4+2 h^{3}M_1+2 h^{3}M_1+4 hM_5 +2h^{3}M_1+4 hM_5+4 hM_1+4 \overline{h}M_3 \notag\\ &\hspace{.5cm}+2 h^{3}M_4+4 hM_5+4 hM_2+4 \overline{h}M_1+4 hM_5+4 \overline{h}M_5+4 \overline{h}M_5+2 {\overline{h}}^{3}M_3\notag \\ &\hspace{.5cm}+2 h^{3}M_4+4 hM_5+4 hM_5+4 \overline{h}M_5 +4 hM_2+4 \overline{h}M_1+4 \overline{h}M_5+2 {\overline{h}}^{3}M_3 \notag \\ &\hspace{.5cm}+4 hM_4+4 \overline{h}M_3+4 \overline{h}M_5+2{\overline{h}}^{3}M_2 +4 \overline{h}M_5+2 \overline{h}^{3}M_2+\left. 2\overline{h}^{3}M_2+{\overline{h}}^{5}M_3\right]. \end{align} Using Lemma \ref{lem4} (note that we could not reduce $M_5$), $\eqref{mex}$ yields \begin{align}\label{mexp} k_3(\langle H\rangle,1)=\frac{1}{128}\left[2(q^2-20q+81)+2 u(-p)^t +3q^2\cdot{_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}| 1\right) \right]. \end{align} Returning to $\eqref{1g}$, we are now left to calculate $k_3(\langle H\rangle,g)$. Again, we have \begin{align}\label{gxandy} &k_3(\langle H\rangle,g)\notag\\ &=\frac{1}{2048}\sum_{x\neq 0,g}\sum_{y\neq 0,g,x}\left[ (2+h\chi_4(g-x)+\overline{h}\overline{\chi_4}(g-x)) (2+h\chi_4(g-y)+\overline{h}\overline{\chi_4}(g-y))\notag \right.\\ &\times\left. (2+h\chi_4(x-y)+\overline{h}\overline{\chi_4}(x-y))(2+h\chi_4(x)+\overline{h}\overline{\chi_4}(x)) (2+h\chi_4(y)+\overline{h}\overline{\chi_4}(y))\right]. \end{align} Using the substitutions $Y=yg^{-1}$ and $X=xg^{-1}$, and then using the fact that $h\chi_4(g)=\overline{h}$, \eqref{gxandy} yields \begin{align*} &k_3(\langle H\rangle,g)\\ &=\frac{1}{2048}\sum_{x\neq 0,1}\sum_{y\neq 0,1,x}\left[ (2+\overline{h}\chi_4(1-x)+h\overline{\chi_4}(1-x)) (2+\overline{h}\chi_4(1-y)+h\overline{\chi_4}(1-y))\notag \right.\\ &\times\left. (2+\overline{h}\chi_4(x-y)+h\overline{\chi_4}(x-y)) (2+\overline{h}\chi_4(x)+h\overline{\chi_4}(x))(2+\overline{h}\chi_4(y)+h\overline{\chi_4}(y))\right].
\end{align*} Comparing this with $\eqref{xandy}$, we see that the expansion of the expression inside this summation consists of the same summation terms as in $\eqref{xandy}$, except that each coefficient is replaced by its complex conjugate. Hence, to obtain the coefficients in the expansion of $\eqref{gxandy}$, we simply replace each corresponding coefficient in $\eqref{mex}$ by its complex conjugate. Now, $\eqref{mexp}$ is the final expression obtained from $\eqref{mex}$, and it contains three summands: two real numbers, and a ${}_3F_2$ term whose coefficient is also a real number. By the foregoing argument, $\eqref{gxandy}$ therefore yields the same value as $\eqref{mexp}$. Thus, $\eqref{1g}$ gives that \begin{align*} k_3(\langle H\rangle)=\frac{q-1}{768}&\left[2(q^2-20q+81)+2 u(-p)^t+3q^2\cdot {_{3}}F_{2}\left(\begin{array}{ccc}\chi_4, & \chi_4, & \overline{\chi_4}\\ & \varepsilon, & \varepsilon\end{array}| 1\right)\right]. \end{align*} Substituting the above value in $\eqref{tt}$, we complete the proof of the theorem. \end{proof} \section{Proof of Theorem \ref{asym}} Let $m\geq 1$ be an integer. We have observed that the calculations for computing the number of cliques of order $4$ in $P^\ast(q)$ become very tedious. However, we can obtain an asymptotic result on the number of cliques of order $m$ in $P^\ast(q)$ as $q\rightarrow\infty$. The method follows along the lines of \cite{wage}, and we proceed by induction. \begin{proof}[Proof of Theorem \ref{asym}] Let $\mathbb{F}_q^\ast=\langle g\rangle$. We fix a formal ordering of the elements of $\mathbb{F}_q$: $\{a_1<\cdots<a_q\}$. Let $\chi_4$ be a fixed character on $\mathbb{F}_q$ of order $4$ and let $h=1-\chi_4(g)$. First, we note that the result holds for $m=1,2$, so let $m\geq 3$.
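The edge-indicator identity behind \eqref{qq}, namely that $\frac{1}{4}[2+h\chi_4(a-b)+\overline{h}\chi_4^3(a-b)]$ equals $1$ when $ab$ is an edge of $P^\ast(q)$ and $0$ otherwise, can be checked numerically in the smallest case $q=9$. The sketch below is ours: it realizes $\mathbb{F}_9$ as $\mathbb{F}_3[i]$ with $i^2=-1$, takes $g=1+i$ as a generator, and normalizes $\chi_4(g)=i$ (any primitive fourth root of unity would do).

```python
from itertools import combinations

# GF(9) realized as F_3[i] with i^2 = -1 (x^2 + 1 is irreducible over F_3).
def mul(x, y):  # (a+bi)(c+di) = (ac-bd) + (ad+bc)i
    return ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)

def sub(x, y):
    return ((x[0] - y[0]) % 3, (x[1] - y[1]) % 3)

g = (1, 1)            # 1 + i, a generator of GF(9)^*
dlog, x = {}, (1, 0)  # discrete logarithms with respect to g
for j in range(8):
    dlog[x] = j
    x = mul(x, g)
assert len(dlog) == 8  # g really generates the multiplicative group

def chi4(x):  # quartic character, normalized here by chi4(g) = i
    return 1j ** dlog[x]

h = 1 - chi4(g)

def indicator(x):
    # (1/4)(2 + h*chi4(x) + conj(h)*chi4(x)^3), the edge indicator of P*(q)
    v = (2 + h * chi4(x) + h.conjugate() * chi4(x) ** 3) / 4
    assert abs(v.imag) < 1e-9  # the expression is always real
    return round(v.real)

# connection set of P*(9): <g^4> ∪ g<g^4> = {g^j : j ≡ 0, 1 (mod 4)}
conn = {x for x, j in dlog.items() if j % 4 in (0, 1)}
assert all(indicator(x) == (x in conn) for x in dlog)

# brute-force triangle count of P*(9)
verts = [(a, b) for a in range(3) for b in range(3)]
edges = {(u, v) for u in verts for v in verts if u != v and sub(u, v) in conn}
tri = sum((a, b) in edges and (b, c) in edges and (a, c) in edges
          for a, b, c in combinations(verts, 3))
print(tri)  # 6
```

As a by-product, the brute-force count gives $k_3(P^\ast(9)) = 6$.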
Let the induction hypothesis hold for $m-1$. We shall use the notation `$a_m\neq a_i$' to mean $a_m\neq a_1,\ldots,a_{m-1}$. Recalling \eqref{qq}, we see that \begin{align}\label{ss} k_m(P^\ast(q))&=\mathop{\sum\cdots\sum}_{a_1<\cdots<a_m}\prod_{1\leq i<j\leq m} \frac{2+h\chi_4(a_i-a_j)+\overline{h}\chi_4^3(a_i-a_j)}{4}\notag \\ &=\frac{1}{m}\mathop{\sum\cdots\sum}_{a_1<\cdots<a_{m-1}}\left[ \prod_{1\leq i<j\leq m-1}\frac{2+h\chi_4(a_i-a_j)+\overline{h}\chi_4^3(a_i-a_j)}{4}\right.\notag \\ &\left.\frac{1}{4^{m-1}}\sum\limits_{a_m\neq a_i}\prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}\right]. \end{align} In order to use the induction hypothesis, we try to bound the expression $$\sum\limits_{a_m\neq a_i}\prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}$$ in terms of $q$ and $m$. We find that \begin{align}\label{dd} \mathcal{J}&:=\sum\limits_{a_m\neq a_i} \prod_{i=1}^{m-1}\{2+h\chi_4(a_m-a_i)+\overline{h}\chi_4^3(a_m-a_i)\}\notag \\ &=2^{m-1}(q-m+1)\notag \\ &+\sum\limits_{a_m\neq a_i}[\text{the } (3^{m-1}-1) \text{ remaining terms, each containing } \chi_4]. \end{align} Each term in \eqref{dd} containing $\chi_4$ is of the form $$2^f h^{i'}\overline{h}^{j'}\chi_4((a_m-a_{i_1})^{j_1}\cdots (a_m-a_{i_s})^{j_s}),$$ where \begin{equation}\label{asy} \left.\begin{array}{l} 0\leq f\leq m-2,\\ 0\leq i',j'\leq m-1,\\ i_1,\ldots,i_s \in \{1,2,\ldots,m-1\},\\ j_1,\ldots,j_s \in \{1,3\},\text{ and}\\ 1\leq s\leq m-1. \end{array}\right\} \end{equation} Consider one such term containing $\chi_4$. Excluding the constant factor $2^fh^{i'}\overline{h}^{j'}$, we obtain a polynomial in the variable $a_m$: let $g(a_m)=(a_m-a_{i_1})^{j_1}\cdots (a_m-a_{i_s})^{j_s}$. Using Weil's estimate (Theorem \ref{weil}), we find that \begin{align}\label{asy1} \Big|\sum\limits_{a_m\in\mathbb{F}_q}\chi_4(g(a_m))\Big|\leq (j_1+\cdots+j_s-1)\sqrt{q}.
\end{align} Then, using \eqref{asy1}, we have \begin{align}\label{asy2} |2^fh^{i'}\overline{h}^{j'} \sum\limits_{a_m}\chi_4(g(a_m))|&\leq 2^{f+i'+j'}(j_1+\cdots+j_s-1)\sqrt{q}\notag \\ &\leq 2^{3m-4}(3m-4)\sqrt{q}\notag \\ &\leq 2^{3m}\cdot 3m\sqrt{q}. \end{align} Noting that the values of $\chi_4$ are roots of unity, using \eqref{asy2}, and using \eqref{asy} and the conditions therein, we obtain \begin{align*} &\Big| 2^f h^{i'}\overline{h}^{j'}\sum\limits_{a_m\neq a_i}\chi_4(g(a_m))\Big|\\ &=\Big| 2^fh^{i'}\overline{h}^{j'}\left\lbrace \sum\limits_{a_m}\chi_4(g(a_m))-\chi_4(g(a_1))-\cdots-\chi_4(g(a_{m-1})) \right\rbrace \Big|\\ &\leq 2^{3m}\cdot 3m\sqrt{q}+2^{2m-3}\\ &\leq 2^{2m}(1+2^m\cdot 3m\sqrt{q}), \end{align*} that is, $$-2^{2m}(1+2^m\cdot 3m\sqrt{q})\leq 2^f h^{i'}\overline{h}^{j'}\sum\limits_{a_m\neq a_i}\chi_4(g(a_m))\leq 2^{2m}(1+2^m\cdot 3m\sqrt{q}).$$ Then, \eqref{dd} yields \begin{align*} &2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)\\ &\leq \mathcal{J}\\ &\leq 2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1), \end{align*} and thus, \eqref{ss} yields \begin{align}\label{asy3} &[2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)]\times\frac{1}{m\times 4^{m-1}}k_{m-1}(P^\ast(q))\notag\\ &\leq k_m(P^\ast(q))\notag \\ &\leq [2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)]\times\frac{1}{m\times 4^{m-1}}k_{m-1}(P^\ast(q)). \end{align} Dividing by $q^m$ throughout in \eqref{asy3} and taking $q\rightarrow \infty$, we have \begin{align}\label{ff} &\lim_{q\rightarrow \infty}\frac{2^{m-1}(q-m+1)-2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}\times q}\lim_{q\rightarrow \infty}\frac{k_{m-1}(P^\ast(q))}{q^{m-1}}\notag \\ &\leq \lim_{q\rightarrow \infty}\frac{k_m(P^\ast(q))}{q^m}\notag \\ &\leq \lim_{q\rightarrow \infty}\frac{2^{m-1}(q-m+1)+2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}\times q}\lim_{q\rightarrow \infty}\frac{k_{m-1}(P^\ast(q))}{q^{m-1}}. \end{align} Now, using the induction hypothesis and noting that
\begin{align*} &\lim\limits_{q\to\infty}\frac{2^{m-1}(q-m+1)\pm 2^{2m}(1+2^m\cdot 3m\sqrt{q})(3^{m-1}-1)}{m\times 4^{m-1}q}\\ &=\frac{1}{m\times 4^{m-1}}2^{m-1}\\ &=\frac{1}{m\times 2^{m-1}} , \end{align*} we find that both the limits on the left hand side and the right hand side of \eqref{ff} are equal. This completes the proof of the result. \end{proof} Taking $m=3$ in Theorem \ref{asym}, we find that $$\lim\limits_{q\to\infty}\dfrac{k_3(P^\ast(q))}{q^3}=\frac{1}{48}.$$ We obtain the same limiting value from Theorem \ref{thm1} as well. \par Taking $m=4$ in Theorem \ref{thm2} and Theorem \ref{asym}, we obtain the following corollary which is also evident from Table \ref{Table-1}. \begin{corollary}\label{cor1} We have \begin{align*} \lim\limits_{q\to\infty} {_{3}}F_{2}\left(\begin{array}{ccc} \chi_4, & \chi_4, & \chi_4^3 \\ & \varepsilon, & \varepsilon \end{array}| 1\right)=0. \end{align*} \end{corollary} \begin{proof} Putting $m=4$ in Theorem \ref{asym}, we have \begin{align}\label{eqn1-cor1} \lim\limits_{q\to\infty}\dfrac{k_4(P^\ast(q))}{q^4}=\frac{1}{1536}. \end{align} Putting $m=4$ in Theorem \ref{thm2}, we have \begin{align}\label{eqn2-cor1} \lim\limits_{q\to\infty}\dfrac{k_4(P^\ast(q))}{q^4}=\frac{1}{1536}+3\times \lim\limits_{q\to\infty} {_{3}}F_{2}\left(\begin{array}{ccc} \chi_4, & \chi_4, & \chi_4^3 \\ & \varepsilon, & \varepsilon \end{array}| 1\right). \end{align} Combining \eqref{eqn1-cor1} and \eqref{eqn2-cor1}, we complete the proof. \end{proof} \section{Acknowledgements} We are extremely grateful to Ken Ono for previewing a preliminary version of this paper and for his helpful comments.
https://arxiv.org/abs/2205.03928
Number of complete subgraphs of Peisert graphs and finite field hypergeometric functions
For a prime $p\equiv 3\pmod{4}$ and a positive integer $t$, let $q=p^{2t}$. Let $g$ be a primitive element of the finite field $\mathbb{F}_q$. The Peisert graph $P^\ast(q)$ is defined as the graph with vertex set $\mathbb{F}_q$ where $ab$ is an edge if and only if $a-b\in\langle g^4\rangle \cup g\langle g^4\rangle$. We provide a formula, in terms of finite field hypergeometric functions, for the number of complete subgraphs of order four contained in $P^\ast(q)$. We also give a new proof for the number of complete subgraphs of order three contained in $P^\ast(q)$ by evaluating certain character sums. The computations for the number of complete subgraphs of order four are quite tedious, so we further give an asymptotic result for the number of complete subgraphs of any order $m$ in Peisert graphs.
https://arxiv.org/abs/1901.02652
$d$-Galvin families
The Galvin problem asks for the minimum size of a family $\mathcal{F} \subseteq \binom{[n]}{n/2}$ with the property that, for any set $A$ of size $\frac n 2$, there is a set $S \in \mathcal{F}$ which is balanced on $A$, meaning that $|S \cap A| = |S \cap \overline{A}|$. We consider a generalization of this question that comes from a possible approach in complexity theory. In the generalization the required property is, for any $A$, to be able to find $d$ sets from a family $\mathcal{F} \subseteq \binom{[n]}{n/d}$ that form a partition of $[n]$ and such that each part is balanced on $A$. We construct such families of size polynomial in the parameters $n$ and $d$.
\section{Introduction} \label{section:intro} \subsection{Galvin problem} The starting point of this paper is a question raised by Galvin in extremal combinatorics. Given two sets $A$ and $S$, we say that $S$ is \defin{balanced on $A$} if $|S \cap A| = \frac{|S|}{2}$. \begin{figure}[h!] \centering \includegraphics[scale=0.6]{3.png} \caption{$S$ balanced on $A$}\label{fig:balanced} \end{figure} \begin{definition}[Galvin family] If $4 \mid n$, a family $\F \subseteq \binom {[n]} {n/2}$ is said to be \defin{Galvin} if for any $A \in \binom {[n]} {n/2}$ there exists a set $S \in \F$ which is balanced on $A$ (\ie $|S \cap A| = \frac{n}{4}$). \end{definition} The \defin{Galvin problem} asks for the minimal size, denoted by $\mg{n}$, of a Galvin family. An upper bound of $\mg{n} \leq \frac{n}{2}$ follows from the family given by the sets $S_i = \{i, i+1, \dots, i + \frac{n}{2} -1 \}$ for $i \in [n/2]$. Lower bounds for the size of Galvin families are more subtle. An easy counting argument shows that $\mg{n} \geq \frac{\binom {n} {n/2}}{\binom {n/2} {n/4}^2} = \Theta(\sqrt{n})$, which is far from $n/2$. Frankl and Rödl~\cite{FR} established that $\mg{n} \geq \epsilon n$ for some $\epsilon > 0$ whenever $\frac{n}{4}$ is odd, as a corollary to a strong result in extremal set theory. This linear bound was later strengthened by Enomoto, Frankl, Ito and Nomura~\cite{EF} to $\mg{n} = n/2$, with the same parity constraint, thus showing the optimality of the construction in this special case. Later, using Gröbner basis methods and linear algebra, Hegedűs~\cite{H09} obtained that $\mg{n} \geq \frac{n}{4}$ whenever $\frac{n}{4} > 3$ is a prime. \begin{figure}[h!]
\centering \includegraphics[scale=0.23]{1.png} \caption{A Galvin family for $n=8$ consisting of 4 sets}\label{fig:galvin8} \end{figure} \subsection{Generalizations and related works} Surprisingly, problems closely related to Galvin's proved useful in arithmetic complexity theory, in order to give lower bounds on the size of arithmetic circuits computing some target polynomials. This connection was first noticed by Jansen~\cite{J08}, and was recently successfully used in a paper by Alon et al.~\cite{AlonKV18}. There the elements of the Galvin family $\F$ are allowed to be sets of size between $2 \tau$ and $n - 2\tau$ ($\tau$ being an integer). Furthermore, for a given $A \in \binom {[n]} {n/2}$, instead of asking for the existence of a set $S \in \F$ perfectly balanced on $A$, the authors look for a set $S$ which is nearly balanced, \ie $\left||S \cap A| - \frac{|S|}{2}\right| < \tau$ for the same $\tau$. For this setting, Alon, Kumar and Volk~\cite{AlonKV18} showed, using the so-called polynomial method, that $\mg{n} \geq \Omega(n/\tau)$. Alon, Bergmann, Coppersmith, and Odlyzko~\cite{AlonBCO88} investigate a problem dealing with $\{-1,+1\}$ vectors which looks similar to the Galvin one. Rephrased as an extremal problem over sets, it reads as follows: what is the minimal size $K(n,c)$ of a family $\F \subseteq \mathcal P([n])$ such that the following holds: $$\forall A \subseteq [n], \exists S \in \F, \left | |\comp A \triangle S| - |A \triangle S| \right | \leq c,$$ where $\triangle$ denotes the symmetric difference. Setting $c = 0$ and asking all sets to be of size $n/2$ is exactly the Galvin problem. However, there do not seem to be any evident dependencies between the two problems. We consider here a different type of generalization.
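As a quick sanity check of the interval construction from the introduction, the family $S_i = \{i,\dots,i+\frac n2-1\}$, $i \in [n/2]$, can be verified to be Galvin by brute force for $n=8$; a minimal sketch (all variable names ours):

```python
from itertools import combinations

n = 8
# the interval family S_i = {i, i+1, ..., i + n/2 - 1} for i in [n/2]
family = [set(range(i, i + n // 2)) for i in range(1, n // 2 + 1)]

def is_galvin(fam, n):
    """Every A of size n/2 must admit some S in fam with |S ∩ A| = n/4."""
    return all(any(len(S & set(A)) == n // 4 for S in fam)
               for A in combinations(range(1, n + 1), n // 2))

assert is_galvin(family, n)          # the n/2 intervals suffice
assert not is_galvin(family[:1], n)  # a single interval does not
```

The check enumerates all $\binom{8}{4}=70$ sets $A$, matching the $n/2$ upper bound for this small case.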
Asking for a set $S \in \F$ to be balanced on $A \in \binom {[n]} {n/2}$ is equivalent (up to a factor $2$ in the family size) to asking for a partition of $[n]$ into two parts, namely $(S, \comp S)$, such that each part is balanced on $A$ and such that $S$, $\comp S$ are elements of $\F$. Instead of splitting $[n]$ into two parts, we look for partitions that involve more sets. Introducing a parameter $d \in \mathbb{N}$, we want, for a given $A$, to be able to find $d$ sets in $\F$ that form a partition of $[n]$ and such that each set is balanced on $A$. The original motivation for considering this generalization stems from arithmetic circuits. There, an open question is whether there is a separation between two models of computation called multilinear algebraic branching programs (ml-ABPs) and multilinear circuits (ml-circuits). By ``separation'', we mean that there is some specific polynomial $f$ that can be computed by a small ml-circuit but any ml-ABP for $f$ must be of size superpolynomial in the degree and the number of variables of $f$. Proving that any generalized Galvin family (\ie with $d$ parts in the partitions -- see below for a formal definition) must be of superpolynomial size (in $n$ the size of the ground set, and $d$ the number of parts) would imply a separation between ml-ABPs and ml-circuits. Since our main result is to prove that generalized Galvin families of polynomial size exist, this approach is unfortunately not promising. Note that this does not call into question either the plausible separation between ml-ABPs and ml-circuits or the approach through a proof that ml-ABPs cannot efficiently compute so-called ``full rank polynomials''. This only rules out a specific approach to the question of whether ml-ABPs can efficiently compute full rank polynomials. However, we believe that the construction is of intrinsic combinatorial interest.
\section{$d$-Galvin families} \subsection{Definition} We start with the formal definition of generalized Galvin families: \begin{definition}[$d$-Galvin families] Given two integers $d,n \in \mathbb{N}$ such that $2d \mid n$, we say that a family $\F \subseteq \binom {[n]} {\frac{n}{d}}$ is \defin{$d$-Galvin} if for any $A \in \binom {[n]} {n/2}$, \defin{$A$ is handled by $\F$}, meaning that there exist $d$ sets $S_1,\dots,S_d \in \F$ such that: \begin{itemize} \item The $S_i$ form a partition of $[n]$, \item Each $S_i$ is balanced on $A$ (\ie $|S_i \cap A| = \frac{n}{2d}$). \end{itemize} \end{definition} \begin{figure}[h!] \centering \includegraphics[scale=0.5]{2.png} \caption{Set $A$ handled by a partition $S_1, S_2, \dots, S_d$}\label{fig:handled} \end{figure} \begin{remark} Note that a $2$-Galvin family is simply a Galvin family (up to adding the complements of the sets in the family). \end{remark} Somewhat surprisingly, small $d$-Galvin families exist. \begin{theorem} \label{thm:main} For any $d, n \in \mathbb{N}$ such that $2d \mid n$, there exists a $d$-Galvin family of size $\tilde\Theta(n^2 d^\finaldpow)$. \end{theorem} Here $\tilde\Theta(f(n,d))$ denotes some function $g$ such that $f(n,d) (\ln f(n,d))^{c_1} \leq g(n,d) \leq f(n,d) (\ln f(n,d))^{c_2}$ for some integers $c_1, c_2$. The next section is devoted to the construction of a $d$-Galvin family, yielding a proof of the main theorem. \subsection{Proof of Theorem~\ref{thm:main}} For technical reasons, we need to distinguish two cases in the proof of Theorem~\ref{thm:main}: we start by giving a construction when $d$ is reasonably small, then we show how to adapt it to handle larger $d$. \noindent{\textbf{First case: }$d < \frac{n}{(\ln n)^3}$} The overall idea is to construct a family $\F$ of size $\tilde \Theta(n d^{\finaldpow})$ such that a random set $A \in \binom {[n]} {n/2}$ is handled by $\F$ with probability at least $1/2$.
Taking the random family $\G$ which is the union of $n$ independent such $\F$ increases this probability to at least $1- 2^{-n}$. By the union bound, the probability that $\G$ handles all sets $A$ is non-zero, yielding the existence of the desired family. We now focus on the construction of such a family $\F$. \smallskip \noindent \textbf{Construction of $\F$} For a set $X$, we use the notation $A \sim X$ to denote that $A$ is a set chosen uniformly at random from $X$. We let $k := \frac n {2d}$ for the rest of the paper. \begin{lemma} \label{lem:randomf} When $d < \frac{n}{(\ln n)^3}$, there is a family $\F \subseteq \binom {[n]} {2k}$ of size $\tilde \Theta(n d^\finaldpow)$ such that $$\Pr_{A \sim \binom {[n]} {n/2}}(A \text{ is handled by } \F) \geq 1/2.$$ \end{lemma} Before going into the construction, let us see how we can prove the main theorem with Lemma~\ref{lem:randomf} in hand. \begin{proof}[Proof of Theorem~\ref{thm:main}, first case] Let $\sigma_1, \dots, \sigma_n$ be $n$ permutations of $[n]$, chosen uniformly at random. For each of these, construct the family $\F_{\sigma_i} = \sigma_i(\F)$, \ie the family from Lemma~\ref{lem:randomf} where every element $e \in [n]$ has been replaced by $\sigma_i(e)$. Consider the family $\G := \cup_{i \in [n]} \F_{\sigma_i}$. We aim to prove that $\G$ is $d$-Galvin with non-zero probability. Given a set $A$, let $\mathrm{H_i}$ be the event ``$A$ is handled by $\F_{\sigma_i}$''. $\mathrm{H_i}$ is equivalent to ``$\sigma_i^{-1}(A)$ is handled by $\F$''. As $\sigma_i^{-1}(A)$ is a uniformly random set, independent of $\sigma_{i'}^{-1}(A)$ for $i \neq i'$, the events $\mathrm{H_i}$ are mutually independent. From this we conclude that, for any fixed $A$, $$\Pr_{\sigma_1,\ldots,\sigma_n}(\forall i \in [n], A \text{ is not handled by } \F_{\sigma_i}) \le 2^{-n}.$$ Thus, by the union bound over all sets $A$, there is a non-zero probability that $\G$ handles all sets $A$, concluding the proof of the theorem.
\end{proof} The rest of the section consists of a proof of Lemma~\ref{lem:randomf}. The overall strategy is to divide the elements of $[n]$ into buckets, denoted by $\chi_i$, and build the sets $S$ from pairs of buckets $(\chi_i,\chi_j)$. Suppose the amounts by which these two buckets are unbalanced on $A$ are $R_i$ and $R_j$, respectively. If half the elements of $S$ are chosen from bucket $\chi_i$ and half from bucket $\chi_j$, then the amount by which $S$ is unbalanced on $A$ will be approximately normally distributed, with expectation depending on $R_i$ and $R_j$. Given a good upper bound on the $R_i$, the probability that $S$ is balanced is reasonably large, and picking only polynomially many random sets $S$ is sufficient. In fact, we must be slightly more careful, because the bucket errors accumulate as we pick many sets $S$. Fortunately, we can manage this by taking an ordering $\pi$ of the buckets such that the error of $\cup_{j \leq i} \chi_{\pi(j)}$ stays small for all $i$. \begin{proof}[Proof of Lemma~\ref{lem:randomf}] First, we divide $[n]$ into several intervals (recall that $k = \frac n {2d}$): \begin{itemize} \item $\chi_0 = (0, k]$, \item $\chi_i = ((2i-1)k, (2i+1) k]$ for $i \in [d-1]$, \item $\chi_d = ((2d-1) k, n]$. \end{itemize} For $i \in [d-1]$ we create sets $G_i = \{T_i^h, h \in [1,r] \}$ by sampling independently $r = \tilde \Theta(n^{1/2} d^{\semifinaldpow / 2})$ subsets $T_i^h \sim \binom {\chi_i}{k}$ and adding them to $G_i$. For technical reasons, we let $G_0$ be the singleton $\{\emptyset\}$ and $G_d = \{\chi_d\}$. Finally, let $\F = \{\comp {T_i^h} \cup T_{j}^{l}: i,j \in [0, d], T_i^h \in G_i, T_j^l \in G_j\}$, where $\comp {T_i^h}$ denotes $\chi_i \setminus T_i^h$. Now, we claim that such a random $\F$ handles $A \sim \binom {[n]} {n/2}$ with probability at least $1/2$, giving the existence of the desired family.
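For concreteness, the construction of $\F$ can be instantiated with small ad hoc parameters. The sketch below is ours (choices: $n=24$, $d=3$, and a fixed $r=8$ in place of the $\tilde\Theta(\cdot)$ sample size); it keeps only the non-degenerate pairs $i \neq d$, $j \neq 0$, which are the ones used in the partitions:

```python
import random

random.seed(0)
n, d = 24, 3        # ad hoc small parameters
k = n // (2 * d)    # k = n / (2d) = 4

# buckets chi_0 = (0,k], chi_i = ((2i-1)k, (2i+1)k], chi_d = ((2d-1)k, n]
chi = [set(range(1, k + 1))]
chi += [set(range((2*i - 1)*k + 1, (2*i + 1)*k + 1)) for i in range(1, d)]
chi.append(set(range((2*d - 1)*k + 1, n + 1)))
assert set().union(*chi) == set(range(1, n + 1))

r = 8               # stand-in for the tilde-Theta(...) number of samples
G = [[set()]]       # G_0 = {empty set}
for i in range(1, d):
    G.append([set(random.sample(sorted(chi[i]), k)) for _ in range(r)])
G.append([chi[d]])  # G_d = {chi_d}

# F = { (chi_i \ T_i) ∪ T_j } over the pairs actually used in partitions
F = [(chi[i] - Ti) | Tj
     for i in range(d) for j in range(1, d + 1) if i != j
     for Ti in G[i] for Tj in G[j]]
assert all(len(S) == 2 * k for S in F)  # every member has size 2k

# one chain T_0, ..., T_d assembles into a partition of [n] into d sets
T = [Gi[0] for Gi in G]
parts = [(chi[j - 1] - T[j - 1]) | T[j] for j in range(1, d + 1)]
assert sum(len(P) for P in parts) == n
assert set().union(*parts) == set(range(1, n + 1))
```

The final assertions check, for the identity ordering of the buckets, that any chain of choices $T_0,\dots,T_d$ yields a partition of $[n]$ into $d$ sets of size $2k$, as used in the proof below.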
As there are $\Theta(d^2)$ pairs $(i, j)$ to consider, and for each one we add $\tilde\Theta((n^{1/2} d^{\semifinaldpow/2})^2)$ sets $S$ to $\F$, this gives a total size $|\F| = \tilde\Theta(n d^\finaldpow)$. For $I \subseteq [0,d]$ we introduce an \defin{error term $R(I)$} to represent the error in balancing $A$. We let $\chi(I) = \cup_{i \in I} \chi_i$ and $R(I) = |A \cap \chi(I)| - \frac{|\chi(I)|} 2$. Furthermore, we write $R_i := R(\{i\})$. For reasons that will become clear later, we want to choose a permutation $\pi$ of $[0,d]$ with $\pi(0) = 0$ and $\pi(d) = d$ such that $\max_{i \in [0, d]} |R(\pi([0, i]))|$ is small. \begin{figure}[h!] \centering \includegraphics[scale=2]{4.png} \caption{An ordering $\pi$}\label{fig:ordering} \end{figure} \begin{claim}\label{goodpi} $\exists \pi : \max_{i \in [0, d]} |R(\pi([0, i]))| \le \max_{i \in [0,d]} |R_i|$ \end{claim} \begin{proof} We fix $\pi(0) = 0$, and for each $i \geq 0$, pick $\pi(i+1)$ among the remaining elements such that $R_{\pi(i+1)}$ has opposite sign from $R(\pi[0,i])$. If $R(\pi[0, i]) = 0$, pick any value of $\pi(i+1)$. Note that this is always possible as $R([0,d]) = 0$. \end{proof} We fix $\pi$ to be a permutation that fulfills Claim~\ref{goodpi} for the rest of the paper. \begin{claim}\label{low_max_lem} With probability at least $\frac 3 4$ we have $\max_{i \in [0,d]} |R_i| \le \sqrt{\ln (\maxlemconst d)} \sqrt k$. \end{claim} \begin{proof} For $i \in [1,d-1]$, each $R_i$ is the deviation from the mean of a hypergeometric distribution $H(\frac n 2, n, 2k)$. We get the following bound, due to Hoeffding~\cite{H63}: $$P(|R_i| > x) \le 2 \exp(-\frac {2x^2} {2k}).$$ With $x = \sqrt{ \ln (\maxlemconst d) } \sqrt k$ this becomes $2\exp(-\ln (\maxlemconst d)) = \frac 2 {\maxlemconst} \cdot \frac 1 d$. $R_0$ and $R_d$ correspond to the distribution $H(\frac n 2, n, k)$, which yields an even stronger bound for $i = 0$ and $i = d$.
Applying a union bound over all $i \in [0,d]$, the probability that at least one $|R_i|$ exceeds $\sqrt{\ln (\maxlemconst d)} \sqrt k$ is bounded by $\frac 2 \maxlemconst \frac {d+1} d < \frac 1 4$ (since $d \geq 2$). \end{proof} \begin{claim}\label{sample_lem} Suppose $d < \frac n {(\ln n)^3}$. Given some $T_i \in G_i$ for $i \in [0, d]$, let $S_j := \comp T_{\pi(j-1)} \cup T_{\pi(j)}$ for $j \in [d]$. If the $\{S_j\}_{j < i}$ are balanced on $A$, then $S_i$ is balanced on $A$ with probability at least $$\Theta\left( \exp(-\frac \dpowparam k \max \{R(\pi[0,i-1])^2, R_{\pi(i)}^2\}) \sqrt{\frac 1 k} \right).$$ \end{claim} \begin{proof} Let $t := -R(\pi[0,i-1])$. Since the $\{ S_j\}_{j < i}$ are balanced, we have: \begin{equation} \label{eq:bal} |A \cap \cup_{j=1}^{i-1} S_j| = (i-1) k \end{equation} On the other hand: \begin{align*} |A \cap \chi(\pi[0,i-1])| &= |A \cap \cup_{j=1}^{i-1} S_j| + |A \cap \comp T_{\pi(i-1)}| & \\ &= (i-1) k + |A \cap \comp T_{\pi(i-1)}| & \text{using (\ref{eq:bal})} \\ \intertext{and} |A \cap \chi(\pi[0,i-1])| &= (2i-1)\frac k 2 - t & \text{by definition of } R(\cdot) \end{align*} Therefore, $|A \cap \comp T_{\pi(i-1)}| = \frac k 2 - t$. For $S_i$ to be balanced, we must have $|A \cap T_{\pi(i)}| + |A \cap \comp T_{\pi(i-1)}| = k$. This means that the probability that $S_i$ is balanced is the probability that $|A \cap T_{\pi(i)}| = \frac k 2 + t$. Let $x := |A \cap T_{\pi(i)}|$ and $R := R_{\pi(i)}$. We have that $x$ follows a hypergeometric distribution with parameters $H(k + R, 2k, k)$. Claim~\ref{algebra} below then suffices to establish Claim~\ref{sample_lem}. \end{proof} \begin{figure}[h!] \centering \includegraphics[scale=2.8]{5.png} \caption{Conditions on $|A \cap T_{\pi(i)}|$}\label{fig:conditions} \end{figure} We now state an easy lemma, useful in Claim~\ref{algebra} for estimating binomial coefficients; a proof can be found in Spencer and Florescu~\cite{SF14}.
\begin{lemma} \label{lem:binomial} $\binom {n} {\frac n 2 - m} = 2^n \sqrt{\frac{2}{n\pi}} \exp\left(-\frac{2m^2}{n}\right)\left(1 + O(\frac{m^3}{n^2})\right)$ \end{lemma} \begin{claim} \label{algebra} We have that $x = \frac k 2 + t$ with probability at least $$\Theta \left( \exp(-\frac \dpowparam k \max \{t^2, \frac {R^2} 4\}) \sqrt{\frac 1 k} \right).$$ \end{claim} \begin{proof} As $x$ follows a hypergeometric distribution with parameters $H(k+R,2k,k)$, we have that \begin{equation} \label{eq:px} P(x = \frac k 2 + t) = \binom {k+R} {\frac k 2 + t} \binom {k-R} {\frac k 2 - t} {\binom {2k} k}^{-1}. \end{equation} As long as $(\frac R 2 - t)^3 = o(k^2)$, which is the case when $d < \frac{n}{(\ln n)^3}$, we may apply Lemma~\ref{lem:binomial} and find that (\ref{eq:px}) equals \begin{align*} =& 2^{k+R} \sqrt{\frac{2}{(k+R)\pi}} \exp\left(-\frac{2(\frac R 2 - t)^2}{k+R}\right) \\ &\times 2^{k-R} \sqrt{\frac{2}{(k-R)\pi}} \exp\left(-\frac{2(\frac R 2 - t)^2}{k-R}\right) \\ &\times \left( 2^{2k} \sqrt{\frac{2}{2k\pi}} \right)^{-1} (1 + o(1)) \\ =& \sqrt{\frac {4k} {(k+R)(k-R) \pi}} \exp \left( -2 (\frac R 2 - t)^2 (\frac 1 {k+R} + \frac 1 {k-R}) \right) (1 + o(1)) \\ =& \sqrt{\frac {4k} {(k^2 - R^2)\pi}} \exp \left( \frac {-4k(\frac R 2 - t)^2} {k^2 - R^2} \right) (1 + o(1)) \\ \intertext{By Claim~\ref{low_max_lem} we have $|t|, |R| \le \sqrt{ \ln(\maxlemconst d)} \sqrt k = o(k)$, therefore we finally get} =& \sqrt{\frac 4 {k\pi}} \exp\left(- \frac 4 k (\frac R 2 - t)^2\right) (1 + o(1)) \end{align*} \end{proof} Combining Claim~\ref{low_max_lem} and Claim~\ref{sample_lem}, we have a probability of \begin{align*} \Theta \left( \exp(-\frac \dpowparam k (k \ln (\maxlemconst d))) \sqrt{\frac 1 k} \right) &= \Theta \left( \exp(- \dpowparam \ln (\maxlemconst d)) \sqrt{\frac d n} \right)\\ &= \Theta((\maxlemconst d)^{-\semifinaldpow/2} n^{-1/2}) \end{align*} that $S_i$ is balanced. Call this probability $y$.
If $|G_i| = \frac {\ln(4d)} y$, then the probability that some choice of $T_{\pi(i)}$ balances $S_i$ is at least $1 - \frac 1{4d}$. By the union bound, the chance that $|R_i|$ is not bounded as in Claim~\ref{low_max_lem}, or that any $S_{i}$ is unbalanced, is at most $\frac 1 4 + d \frac 1{4d} = \frac 1 2$. Hence the probability that we get a $d$-Galvin partition is at least $\frac 1 2$, as desired. \end{proof} In the above proof we used $d < \frac n {(\ln n)^3}$ to apply Lemma~\ref{lem:binomial}. While this could perhaps be improved to $d = \frac n {\ln n}$, there is a real barrier here. When $d$ is this large, we expect some buckets to be entirely empty of elements from $A$, and the above proof does not work. We now handle the case where $d$ is larger. \noindent{\textbf{Second case: }$d \geq \frac n {(\ln n)^3}$} \begin{proof}[Proof of Theorem~\ref{thm:main}, second case] First, observe that Galvin families compose nicely; if $\F$ is an $a$-Galvin family over $[n]$, and if we take a $b$-Galvin family $\F_S$ over $S$ for each set $S \in \F$, then the union of all $\F_S$ forms an $ab$-Galvin family. Set $d' = \frac{n}{(\ln n)^3}$ and assume for the moment that $d'$ and $\frac d {d'}$ are both integers, so that they give a valid factorization of $d$. The idea is to start by constructing a $d'$-Galvin family $\F$ over $[n]$, using the previous construction. We then recursively apply the construction to get a $\frac d {d'}$-Galvin family $\F_S$ for each $S \in \F$, and the final family is the union of all $\F_S$. The elements of $\F$ are sets of size $(\ln n)^3$, therefore the families $\F_S$ are of size $\tilde \Theta(1)$, and the overall construction is of size $\tilde \Theta(n^2 d^\finaldpow)$. In the case that $d'$ and $\frac d {d'}$ do not give a valid factorization of $d$, we do the following. Let $k' = \lfloor \frac d {d'} \rfloor$.
The idea is to construct a family $\F$ with sets of size $2k'k$ and $2(k'+1)k$ that behaves like a Galvin family: we ask that any set $A$ has a partition of $[n]$ from sets in $\F$, where each set of the partition is balanced on $A$. We then recursively apply the construction to split the sets of size $2k'k$ and $2(k'+1)k$ until we get sets of size $k$. To create the family $\F$, we adapt the construction of the Galvin family for $d < \frac{n}{(\ln n)^3}$ in the following way. Note that in any partition of $[n]$ into sets of these sizes, the numbers of sets of size $2k'k$ and of size $2(k'+1)k$ are fixed (given by $d$ and $n$). We denote these numbers by $f$ and $c$. We need to ensure that the $\comp {T_i^h} \cup T_j^l$ are of the correct sizes (\ie $2k'k$ or $2(k'+1)k$). For that, we change the sizes of the $\chi_i$ in the following way: \begin{itemize} \item $|\chi_0| = k'k$ \item For $c$ values of $i \in [1,d-1]$, we have $|\chi_i| = 2(k'+1)k$ \item For the other $i \in [1,d-1]$ we have $|\chi_i| = 2k'k$ \item $|\chi_d| = k'k$. \end{itemize} We then choose the $T_i^h$ to be of size $k'k$, except for $i=0$ where the unique $T_0$ remains $\emptyset$. This gives the desired sizes for $|S_i|$, and it is not hard to see that the proof carries over to this case with some simple and obvious modifications. \end{proof} \subsection{Galvin family without the divisibility condition} The previous definition of a $d$-Galvin family requires $2d \mid n$. Here we present a relaxed version, which can be defined without the divisibility condition, and prove that such families of polynomial size can be obtained using our previous construction. When the divisibility condition does not hold we would like $d$ sets to be exactly or almost exactly balanced on $A$, and for those sets to be as close in size as possible. To be exactly balanced they must have an even number of elements, so if $n$ is odd then we must include a set of odd size which is imbalanced by one element.
The remaining sets can come no closer in size than differing by $2$ elements, being of size either $2\lfloor k \rfloor$ or $2\lceil k \rceil$. We are able to achieve this best possible outcome. \begin{definition}[$d$-Galvin family, second version] Given two integers $d, n \in \mathbb{N}$ with $d \leq n$, we say that a family $\F \subseteq 2^{[n]}$ is \defin{$d$-Galvin} if for any $A \in \binom {[n]} {\lceil n/2 \rceil}$, \defin{$A$ is handled by $\F$}, meaning that there exist $d$ sets $S_1,\dots,S_d \in \F$ such that: \begin{enumerate} \item $\forall i < d$, $|S_i| = 2 \lfloor k \rfloor$ or $|S_i| = 2 \lceil k \rceil$, \item $2 \lfloor k \rfloor \leq |S_d| \leq 2 \lceil k \rceil$, \item The $S_i$ form a partition of $[n]$, \item For $i < d$, each $S_i$ is balanced on $A$, \item $|\comp A \cap S_d| \leq |A \cap S_d| \leq |\comp A \cap S_d| + 1$. \end{enumerate} \end{definition} \begin{figure}[h!] \centering \includegraphics[scale=1.3]{9.png} \caption{For $n = 29, d = 6$, we have three sets of size $2 \lfloor k \rfloor$, two sets of size $2 \lceil k \rceil$, and one set of size $\lfloor k \rfloor + \lceil k \rceil$.} \label{fig:exGalvin} \end{figure} \begin{theorem} There exists a $d$-Galvin family of size polynomial in $d$ and $n$. \end{theorem} \begin{proof}[Sketch of the proof.] We modify the previous construction slightly in order to handle this more general setting. This is very similar to the proof of Theorem~\ref{thm:main} in the case $d \geq \frac{n}{(\ln n)^3}$. Suppose $k$ is not an integer and write $k' := \lfloor k \rfloor$. Furthermore, assume for the moment that $k = \omega((\ln n)^3)$ so that the construction from Claim~\ref{sample_lem} holds. Note that in any partition of $[n]$ into sets that respect properties $(1)$ and $(2)$ of the definition, the numbers of sets of size $2k'$, $2k'+1$, and $2(k'+1)$ are fixed (given by $d$ and $n$). We denote these numbers by $f, m$ and $c$.
We need to ensure that the $\comp {T_i^h} \cup T_j^l$ are of the correct size in order to be able to fulfill our definition. For that, we change the sizes of the $\chi_i$ in the following way: \begin{itemize} \item $|\chi_0| = k'$ if $m = 0$ and $k'+1$ otherwise \item For $c$ values of $i \in [1,d-1]$, we have $|\chi_i| = 2(k'+1)$ \item For the other $i \in [1,d-1]$ we have $|\chi_i| = 2k'$ \item $|\chi_d| = k'$. \end{itemize} We then choose the $T_i^h$ to be of size $k'$, except for $i=0$ where the unique $T_0$ remains $\emptyset$. By doing so, the partitions from the family respect properties $(1)$ and $(2)$; again, the proof that this gives a valid construction is very close to the original proof, and we omit the details. Finally, if $k = O((\ln n)^3)$ then we may have to simultaneously apply the adjustments above and the ones in the proof of the second case of Theorem~\ref{thm:main}. \end{proof} \section{Discussion and open questions} Our construction is probabilistic, and it could be interesting to derandomize it without increasing the size of the family too much. A way to tackle the problem is to carefully design the sets $T_i$ belonging to $G_i$ instead of taking them randomly. The given upper bound is nicely polynomial in $n$ and $d$, but it is unlikely to be tight. We suspect that even modifications of the current construction can yield some improvements. In particular, the family $\F$ from Lemma~\ref{lem:randomf} is constructed by taking the union $\comp T_i \cup T_j$ over all possible pairs $(T_i,T_j) \in G_i \times G_j$ for $i,j \in [d]$. It might be possible to restrict $(i,j)$ to come from the edges of a sparse graph over the vertices $[d]$, and still prove Claim \ref{goodpi}, maybe in some slightly weaker form, possibly saving a factor close to $d$.
Even if this is possible, the resulting family is still not likely to be of optimal size; hence we have not investigated this approach in detail, as it would lead to considerable complications and we prefer a simple construction. A truly optimal construction is likely to require some new ideas. While there is a linear lower bound for the original Galvin problem, it is not clear how to derive from it linear lower bounds for $d$-Galvin families for $d > 2$. An easy counting argument, similar to the one for the original Galvin problem, gives that $|\F|^{d-1} \geq \frac{\binom {n} {n/2}}{\binom {n/d} {k}^d}$ (since the number of possible partitions of $[n]$ with $d$ sets from $\F$ is bounded by $|\F|^{d-1}$), yielding $|\F| \geq \Omega(\frac {\sqrt n} {d^{\frac 1 2 -\frac 1 {2d}}})$. When focusing on large $d$ we get the simple bound below, which is an improvement in the regime $d = \Omega(n^{1/5})$: \begin{claim} A $d$-Galvin family must have size at least $\frac {d^2} 2$. \end{claim} \begin{proof} Let us fix a $d$-Galvin family $\F$ over $[n]$, and consider the set $B = \{(S,x) : S \in \F, x \in S\}$. We first prove that for any $a \in [n]$, there must be at least $\frac d 2$ sets from $\F$ that contain $a$. Suppose this is not the case for a particular $a \in [n]$, and consider a set $A$ of size $\frac n 2$ that contains $\cup_{S \text{ s.t.\ } a \in S}\, S$ (such an $A$ exists since, by the assumption, the union is of size at most $\frac n 2$). Any set $S\in \F$ that contains $a$ is completely included in $A$, and thus cannot be balanced on $A$. Therefore $A$ is not handled by $\F$. Finally, observe that the previous remark implies that $|B| \geq \frac{nd}{2}$. As each set $S \in \F$ is of size $\frac n d$, the number of sets in $\F$ must be at least $\frac{d^2}{2}$. \end{proof} \section*{Acknowledgements} We thank Andrew Morgan for helpful suggestions on the details of Claims 4 and 5.
The second author would like to thank Hervé Fournier for valuable discussions. \bibliographystyle{plain}
https://arxiv.org/abs/1511.04720
On some series formed by values of the Riemann Zeta function
The partial fraction expansion of coth($\pi$z), due to Euler, is generalized to power series having for coefficients the Riemann zeta function evaluated at certain arithmetic sequences. A further generalization using arbitrary Dirichlet series is also proposed. The resulting formulas are new, as far as we know, since they could not be found in any of the classical or recent handbooks of formulas that were at our disposal.
\section{Origin of the series} The starting point for this paper is the known formula \cite[p. 49]{Valiron:1942}: $$ \suny \frac1{n^2+z^2} = \begin{cases} \displaystyle \frac{\pi^2}{6} & \text{if }z=0, \\ \displaystyle \frac{\pi}{2z} \coth (\pi z) - \frac1{2z^2} & \text{if }0<|z|<1. \end{cases} $$ These two values must obviously be connected. Indeed, from the Taylor series at $z=0$, we obtain the expansion $$ \suny \frac1{n^2+z^2} = \frac{\pi^2}{6} - \frac{\pi^4}{90}z^2 + \frac{\pi^6}{945}z^4 + O(z^6), \qquad(z\to0), $$ in which the classical values of the Riemann zeta function at the even integers can be recognized immediately. The same question can now be asked for similar partial fraction expansions such as $$ \suny \frac1{n^3+z^3}, \quad\suny \frac1{n^4+z^4}, \quad\ldots $$ Our results presented below significantly extend some particular formulas published recently in \cite{ChoiSrivastava:1997} and \cite{Jameson:2014} (although this work was written before we became aware of these papers), notably by considering Cesàro summation and more general Dirichlet series. \section{Formula of order $p$} \begin{theo} For a complex number $z$ of modulus less than or equal to $1$, $z \neq -1$: \begin{equation} \label{n2plusz} \suny {\frac1{n^2+z}} = \suzy {(-1)^k \zeta(2k+2) z^{k}}, \end{equation} \begin{equation} \label{n3plusz} \suny {\frac1{n^3+z}} = \suzy {(-1)^k \zeta(3k+3) z^{k}}, \end{equation} and generally for $p\in\Cplx$ such that $\RE{p}>1$ ({\it formula of order $p$}): for all complex numbers $z$ of modulus less than or equal to $1$, $z \neq -1$, \begin{equation} \label{npplusz} \suny {\frac1{n^p+z}} = \suzy {(-1)^k \zeta(pk+p) z^{k}}. \end{equation} \end{theo} \begin{proof} We prove only the formula of order $p$, since the first two formulas are special cases. Let us first assume $|z|<r_0<1$. This condition implies the normal convergence\footnote{ \cite[p.
124]{Bromwich:1926} ``Series which satisfy the M-test have been called normally convergent by Baire.''} of the series and hence the legitimate inversion of the summations \cite[p. 104]{Remmert:1998}. We have $$\suny \frac1{n^p+z^p} = \suny \frac1{n^p}\frac1{1+(z/n)^p} = \suny \frac1{n^p} \suzy (-1)^k \left(\frac{z}{n}\right)^{kp}$$ and we obtain, with the inversion of summations, $$\suny \frac1{n^p+z^p} =\suzy (-1)^k z^{pk} \suny \frac1{n^{p(k+1)}}=\suzy {(-1)^k \zeta(pk+p) z^{pk}}.$$ Substituting $z$ for $z^p$ yields the formula of order $p$. We thus have an analytic function of $z$ in $D(0,r_0)$ which can be extended to the whole open disc $D(0,1)$, because the radius of convergence is $1$ by the Cauchy-Hadamard formula. \end{proof} On the convergence circle, the analytic function $$ f_p(z)=\suzy {(-1)^k \zeta(pk+p) z^{k}} $$ has a pole at $z=-1$. From the formula, the only problematic term on the left hand side is the first one, whose denominator is zero if $z=-1$. We conclude that the function $f_p(z)$ has a simple pole on the convergence circle $|z|=1$ at $z=-1$. \begin{theo} The function $f_p(z)$ is meromorphic on $\mathbb{C}$, and its singularities are an infinity of simple poles at $z=-n^p$ ($n \in \Nint$) with residue $1$. \end{theo} \begin{proof} All the singularities of $f_p$ must appear in the left hand side of the formula of order $p$. We begin with the pole at $-1$. We have to calculate the residue $$ \lim_{z \rightarrow -1} (z+1)f_p(z). $$ From the formula of order $p$, we have \begin{align*} \lim_{z \to -1} (z+1)f_p(z) &= \lim_{z \to -1} (z+1)\suny {\frac1{n^p+z}} \\ &=\lim_{z \to -1} (z+1)\frac1{1+z} + \lim_{z \to -1} (z+1)\sum_{n=2}^\infty {\frac1{n^p+z}} \\ &= 1 + 0 = 1. \end{align*} Hence this pole is simple and has residue $1$. For the other poles, the same argument applies with $n^p+z$ in place of $1+z$. \end{proof} \begin{rema} When $p$ is not real, the poles of $f_p(z)$ are on a logarithmic spiral.
The poles are on a straight line (the real axis) only when $p$ is real. It is the only analytic function that we know of whose poles are on a spiral, without being specially constructed using Mittag-Leffler's theorem on prescribed principal parts. \end{rema} \section{First generalization} Let $m$ and $p$ be two complex numbers such that $\RE{m} < \RE{p}-1$. We have, for $|z|<1$, $$ \suny \frac{n^m}{n^p+z^p} = \suny \frac1{n^{p-m}}\frac1{1+(z/n)^p} = \suny \frac1{n^{p-m}} \suzy (-1)^k \left(\frac{z}{n}\right)^{kp} $$ and then, since the condition on $m$ ensures the absolute convergence of the series, $$ \suny \frac{n^m}{n^p+z^p} = \suzy (-1)^k z^{pk} \suny \frac1{n^{p(k+1)-m}}. $$ Hence \begin{theo} Let $m$ and $p$ be two complex numbers such that $\RE{m} < \RE{p}-1$. Then, for $|z| \leq 1$, $z \neq -1$, \begin{equation} \suny \frac{n^m}{n^p+z} = \suzy {(-1)^k \zeta(pk+p-m) z^{k}}, \end{equation} and the power series $$ \suzy {(-1)^k \zeta(pk+p-m) z^{k}} $$ defines a meromorphic function which can be extended to the whole complex plane, except for an infinity of simple poles at $z=-n^p$ ($n \in \Nint$) with residue $n^m$. \end{theo} \section{Second generalization} Let $m$ be an integer. For $|z| < 1$, we have $$ \suny \frac{\ln^m(n)}{n^p+z^p} = \suny \frac{\ln^m(n)}{n^{p}}\frac1{1+(z/n)^p} = \suny \frac{\ln^m(n)}{n^{p}} \suzy (-1)^k \left(\frac{z}{n}\right)^{kp} $$ and then $$ \suny \frac{\ln^m(n)}{n^p+z^p} = \suzy (-1)^k z^{pk} \suny \frac{\ln^m(n)}{n^{p(k+1)}}. $$ Hence \begin{theo} Let $m$ be an integer. For $|z| \leq 1$, $z \neq -1$, \begin{equation} \suny \frac{\ln^m(n)}{n^p+z} = (-1)^m \suzy {(-1)^k \zeta^{(m)}(pk+p) z^{k}}, \end{equation} and the power series $$ \suzy {(-1)^k \zeta^{(m)}(pk+p) z^{k}} $$ defines a meromorphic function which can be extended to the whole complex plane, except for an infinity of simple poles at $z=-n^p$ ($n \in \Nint$) with residue $(-1)^m \ln^m(n)$. \end{theo} By combining the two previous theorems, we get a more general result.
\begin{theo} Let $m$ be an integer, and let $p$, $q$ be two complex numbers such that $\RE{q} < \RE{p}-1$. For $|z| \leq 1$, $z \neq -1$, \begin{equation} \suny \frac{n^q\ln^m(n)}{n^p+z} = (-1)^m \suzy {(-1)^k \zeta^{(m)}(pk+p-q) z^{k}}, \end{equation} and the power series $$ \suzy {(-1)^k \zeta^{(m)}(pk+p-q) z^{k}} $$ defines a meromorphic function which can be extended to the whole complex plane, except for an infinity of simple poles at $z=-n^p$ ($n \in \Nint$) with residue $(-1)^m n^q\ln^m(n)$. \end{theo} \section{Third generalization} The same principle works with more complex expressions. With two terms, we have, for $|z|<1/\max(|a|,|b|)$, $\RE{p}>1$, $\RE{q}>1$, \begin{equation} \suny \frac{1}{(n^p+az)(n^q+bz)} = \sum_{l=0}^\infty \suzy {(-1)^{k+l} \zeta(p+q +pk+lq) a^k b^l z^{k+l}}. \end{equation} With three terms, for $|z|<1/\max(|a|,|b|,|c|)$, $\RE{p}>1$, $\RE{q}>1$, $\RE{r}>1$, \begin{multline} \suny \frac{1}{(n^p+az)(n^q+bz)(n^r+cz)} =\\ \sum_{m=0}^\infty \sum_{l=0}^\infty \suzy {(-1)^{k+l+m} \zeta(p+q+r +pk+lq+mr) a^k b^l c^m z^{k+l+m}}. \end{multline} And so on. \section{General formulas} As previously, we work with expressions of the form $\sum a_n/(b_n+z)$ with $\sum a_n/b_n$ absolutely summable. We remark that these expressions, as functions of $z$, define absolutely convergent series for $z \in \Cplx - \{-b_n \mid n \in \Nint\}$. We also have \begin{lemma} (Abel, 1826) \label{Abellem} Let the power series $$f(z)=\suzy a_k z^k$$ have radius of convergence $1$. If $$\suzy a_k$$ converges to $L$, then, for $z$ in a Stolz angle with vertex $1$, $$\lim_{z \to 1} \suzy a_k z^k = L=f(1).$$ \end{lemma} \begin{theo} \label{thdirichlet} Consider the Dirichlet series $$ f(s)=\suny \frac{a_n}{n^s} $$ having $\sigma_a \neq +\infty$ as abscissa of absolute convergence. Let $s$ be a complex number such that $\RE{s}>\sigma_a$. For $|z| \leq 1$, $z \neq -1$, \begin{equation} \label{thdirichletf} \suny \frac{a_n}{n^s+z} = \suzy {(-1)^k f(ks+s) z^{k}}.
\end{equation} The power series $$ g(z)=\suzy {(-1)^k f(ks+s) z^{k}} $$ defines a meromorphic function $g$ which can be extended to the whole complex plane, except for an infinity\footnote{We assume the existence of an infinite subsequence of non-zero numbers $a_n$.} of simple poles at $z=-n^s$ ($n \in \Nint$) with residue $a_n$ if $a_n \neq 0$. \end{theo} \begin{proof} We have, for $|z| < 1$, $$\suny \frac{a_n}{n^s+z^s} = \suny \frac{a_n}{n^s}\frac1{1+(z/n)^s} = \suny \frac{a_n}{n^s} \suzy (-1)^k \left(\frac{z}{n}\right)^{ks}$$ and we obtain, with the inversion of summations, justified by absolute convergence, $$\suny \frac{a_n}{n^s+z^s} =\suzy (-1)^k z^{sk} \suny \frac{a_n}{n^{s(k+1)}}=\suzy {(-1)^k f(ks+s) z^{sk}}.$$ Substituting $z$ for $z^s$ yields formula (\ref{thdirichletf}). To extend it to $|z| \leq 1$, $z \neq -1$, we apply Abel's Lemma \ref{Abellem}. All the singularities of $g$ must appear in the left hand side of (\ref{thdirichletf}). We have \begin{align*} \lim_{z \to -n^s} (z+n^s)g(z) &= \lim_{z \to -n^s} (z+n^s)\suny {\frac{a_n}{n^s+z}} \\ &=\lim_{z \to -n^s} (z+n^s)\frac{a_n}{n^s+z} + \lim_{z \to -n^s} (z+n^s)\sum_{k \neq n} {\frac{a_k}{k^s+z}} \\ &= a_n + 0 = a_n. \end{align*} Hence $z=-n^s$ is a simple pole if $a_n \neq 0$. For a point $\alpha$ not of the form $-n^s$, $$\lim_{z \to \alpha} (z-\alpha)g(z) = \lim_{z \to \alpha}(z-\alpha)\suny {\frac{a_n}{n^s+z}} = 0,$$ so there is no other singularity in $\Cplx$. \end{proof} \begin{rema} We can work with expressions such as $$\suny \frac{a_n}{n^s(n^s+z)} = \suzy {(-1)^k f(ks+2s) z^{k}},$$ or $$\suny \frac{a_n}{n^{s_1}(n^{s_2}+\alpha z)} = \suzy {(-1)^k f(ks_2+s_1+s_2) \alpha^k z^{k}},$$ and so on. \end{rema} \begin{coro} Under the previous hypotheses, for an integer $m \ge 0$, we have \begin{equation} \suny \frac{a_n}{(n^s+z)^{m+1}} = \suky {(-1)^{k+m} f(ks+s) \frac{k!}{m!(k-m)!} z^{k-m}}.
\end{equation} \end{coro} \begin{proof} This is the derivative of order $m$ of formula (\ref{thdirichletf}) with respect to $z$. \end{proof} \begin{coro} Let $(\alpha_i)$ be a sequence of $m>1$ complex numbers, let $(\beta_i)$ be a sequence of $m$ complex numbers such that $\RE{\beta_i}>1$, and let $f$ be as in Theorem \ref{thdirichlet}. We have, for $|z| < 1/\max_{i=1, \ldots, m} (|\alpha_i|)$, \begin{multline} \suny a_n \prod_{i=1}^m \frac1{(n^{\beta_i}+\alpha_i z)} =\\ \sum_{m_1=0}^\infty \sum_{m_2=0}^\infty \ldots \sum_{m_m=0}^\infty{(-1)^{\sum_{i=1}^m {m_i}} f\left(\sum_{i=1}^m (m_i+1) \beta_i \right) \left(\prod_{i=1}^m \alpha_i^{m_i}\right) z^{\sum_{i=1}^m m_i}}. \end{multline} \end{coro} \begin{rema} If $\alpha_l=0$ then $m_l=0$ in the formula. \end{rema} \begin{theo} \label{thserie} Let the analytic function $f(s)$ be defined in the disc $D(0,R)$, $R>0$, by its power series $$ f(s) = \sum_{k=1}^\infty a_k s^k, $$ which vanishes at $0$. Fix a complex number $s$ such that $\RE{s} > 1$. For $|z| <R$, \begin{equation} \suny f\left(\frac{z}{n^s}\right) = \sum_{k=1}^\infty {a_k \zeta(ks) z^{k}}. \end{equation} If $a_1=0$ and $s$ is such that $\RE{s} > \frac12$, then \begin{equation} \suny f\left(\frac{z}{n^s}\right) = \sum_{k=2}^\infty {a_k \zeta(ks) z^{k}}. \end{equation} \end{theo} \begin{rema} This theorem contains the formulas of \cite{Jameson:2014} when $s=1$ and $z=1$ or $z=-1$. \end{rema} \begin{proof} The conditions ensure the absolute convergence of the series. We have \begin{align*} \suny f\left(\frac{z}{n^s}\right) &= \suny \sum_{k=1}^\infty a_k\left(\frac{z}{n^s}\right)^k\\ &= \sum_{k=1}^\infty {a_k \suny \left(\frac{z}{n^s}\right)^k}\\ &= \sum_{k=1}^\infty {a_k \zeta(ks) z^{k}}. \end{align*} \end{proof} By combining the last two theorems, we get a more general result. \begin{theo} \label{thdirichletserie} Let $f$ be the analytic function defined in the disc $D(0,R)$ by the series $$ f(z) = \sum_{k=1}^\infty{a_k z^k}, $$ which vanishes at $0$.
Let $g(s)$ be the analytic function defined by the Dirichlet series $$ g(s)=\suny \frac{b_n}{n^s} $$ in the half-plane $\RE{s}>\sigma_a$. Let $s$ be a complex number such that $\RE{s} \ge 1$, let $|z|<R$, and let $s'$ be another complex number such that $\RE{s+s'}>\sigma_a$. We have \begin{equation} \suny \frac{b_n}{n^{s'}}f\left(\frac{z}{n^s}\right) = \sum_{k=1}^\infty {a_k g(ks+s')} z^{k}. \end{equation} \end{theo} \begin{theo} \label{thserie2} Let the analytic function $f(s)$ be defined in the disc $D(0,R)$, $R>0$, by its power series $$ f(s) = \sum_{k=1}^\infty a_k s^k, $$ which vanishes at $0$. Let $(\lambda_n)$ be a sequence of positive real numbers increasing to infinity such that the sum $$\suny |\exp(-\lambda_n s)|$$ converges for $\RE{s}>\sigma_a$. Let $$D(s)=\suny \exp(-\lambda_n s)$$ and let $s$ be a complex number such that $\RE{s} > \sigma_a$. For $|z| <R$, \begin{equation} \suny f\left(ze^{-\lambda_n s}\right) = \sum_{k=1}^\infty {a_k D(ks) z^{k}}. \end{equation} \end{theo} \begin{theo} \label{thsuite} Let $(b_n)$ be a sequence of complex numbers whose moduli increase to infinity, and let $(a_n)$ be a sequence of complex numbers such that the sum $\suny |{a_n}/{b_n}|$ converges. Then \begin{equation} \label{thsuitef} \suny{\frac{a_n}{b_n-z}} = \suzy{\left(\suny \frac{a_n}{b_n^{k+1}}\right) z^k} \end{equation} for $|z| \leq |b_1|$, $z \neq b_1$. The power series $$ \suzy{\left(\suny \frac{a_n}{b_n^{k+1}}\right) z^k} $$ defines a meromorphic function which can be extended to the whole complex plane, except for simple poles at $z=b_n$ ($n \in \Nint$) with residue $-a_n$ when $a_n \neq 0$.
\end{theo} \begin{coro} Under the previous hypotheses, for an integer $m \ge 0$, we have \begin{equation} \suny{\frac{a_n}{(b_n-z)^{m+1}}} = \suky{\left(\suny \frac{a_n}{b_n^{k+1}}\right)\frac{k!}{m!(k-m)!} z^{k-m}} \end{equation} \end{coro} \begin{proof} This is the derivative of order $m$ of formula (\ref{thsuitef}) with respect to $z$. \end{proof} \section{The inverse problem} \section{Applications of the previous formulas} The notation $(C,k)$ means the Cesàro summation method of order $k$ \cite[p. 96]{Hardy:1949}. \subsection{} First formula (\ref{n2plusz}), with $z=1$ in the Cesàro sense \cite[p. 205]{Apostol:1974}: $$ \suny \frac1{n^2+1}=\sum_{k=1}^\infty(-1)^{k+1}\zeta(2k)\qquad(C,1). $$ \subsection{} Second formula (\ref{n3plusz}), with $z=1$ in the Cesàro sense: $$ \suny \frac1{n^3+1}=\sum_{k=1}^\infty(-1)^{k+1}\zeta(3k)\qquad(C,1). $$ \subsection{} Partial derivative of the first formula (\ref{n2plusz}) with respect to $z$, with $z=1$ in the Cesàro sense: $$ \suny \frac1{(n^2+1)^2}=\sum_{k=1}^\infty(-1)^{k+1}k\zeta(2k+2)\qquad(C,2). $$ \subsection{} Theorem \ref{thserie} for $f(s)=\ln(1+s)$, $\RE{s}>1$, then the limit $z \to 1^-$: $$ \suny \ln\left(1+\frac1{n^s}\right)=\sum_{k=1}^\infty(-1)^{k+1}\frac{\zeta(ks)}{k}. $$ \subsection{} Theorem \ref{thserie} for $f(s)=\exp(s)-1$: $$ \suny \left[\exp\left(\frac1{n^2}\right)-1\right]=\sum_{k=1}^\infty \frac{\zeta(2k)}{k!}. $$ \subsection{} Theorem \ref{thserie} for $f(z)=\sin(z)$, $\RE{s}>1$: $$ \suny \sin\left(\frac{z}{n^s}\right) = \suzy (-1)^k \frac{z^{2k+1}}{(2k+1)!}\zeta\left(2ks+s\right). $$ \subsection{} Theorem \ref{thdirichlet} for $f(s)=1/\zeta(s)$: $$ \suny \frac{\mu(n)}{n^2+z} = \suzy { \frac{(-1)^k}{\zeta(2k+2)} z^{k}}. $$ \subsection{} Theorem \ref{thdirichlet} for $f(s)=-\zeta'(s)/\zeta(s)$: $$ \suny \frac{\Lambda(n)}{n^2+z} = \suzy {(-1)^{k+1} \frac{\zeta'}{\zeta}(2k+2) z^{k}}.
$$ where $$\Lambda(n)=\begin{cases} \displaystyle \ln(p) & \text{if } n=p^m,\quad p\text{ prime,} \\ \displaystyle 0 & \text{otherwise.} \end{cases}$$ \subsection{} Theorem \ref{thdirichlet} for $f(s)=\zeta(s-1)/\zeta(s)$: $$ \suny \frac{\varphi(n)}{n^3+z} = \suzy {(-1)^{k} \frac{\zeta(3k+2)}{\zeta(3k+3)} z^{k}}, $$ where $\varphi(n)$ is Euler's totient function. In particular, $$ \suny \frac{\varphi(n)}{n^3+1} = \suzy {(-1)^{k} \frac{\zeta(3k+2)}{\zeta(3k+3)}}. $$ \subsection{} We can use Theorems \ref{thdirichlet} and \ref{thdirichletserie} and the two corollaries with Dirichlet $L$-functions. With $a_n=\chi(n)$, Theorem \ref{thdirichlet} gives $$\sum_{n=1}^\infty \frac{\chi(n)}{n^s+z} = \sum_{k=0}^\infty (-1)^k L(ks+s,\chi)z^k,$$ and Theorem \ref{thdirichletserie} gives $$\suny \frac{\chi(n)}{n^{s'}}f\left(\frac{z}{n^s}\right) = \sum_{k=1}^\infty {a_k L(ks+s',\chi)} z^{k}.$$ For example, with the Catalan beta function $$\beta(s)= \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^s},$$ we obtain $$\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^s+z} = \sum_{k=0}^\infty (-1)^k\beta(ks+s)z^k,$$ and so $$\sum_{n=1}^\infty \frac{(-1)^n}{(2n+1)^s+z} = \sum_{k=0}^\infty (-1)^k \left[\beta(ks+s)-1\right]z^k.$$ \subsection{} From $$ \suny \frac1{4n^2-1} = \frac12, $$ and $z=-1/4$ in the first formula (\ref{n2plusz}), we have $$ \suny \frac{\zeta(2n)}{4^n} = \frac12. $$ \subsection{} With $z=-1/16$, the first formula (\ref{n2plusz}) gives $$ \suny \frac{\zeta(2n)}{16^n} = \frac1{2} - \frac{\pi}{8}. $$ \subsection{} Some similar formulas are already known, for example \cite[p. 691, \S5.1.27]{BrychkovMarichevPrudnikov:1998} (with the sign of the fraction $\frac12$ corrected to be negative): $$ \suny \frac1{n^4+1} = \suzy (-1)^k{\zeta(4k+4)} = -\frac12+\frac{\pi\sqrt{2}}{4}\frac{\sinh(\pi\sqrt{2}) +\sin(\pi\sqrt{2})}{\cosh(\pi\sqrt{2})-\cos(\pi\sqrt{2})}.
$$ \subsection{} Theorem \ref{thsuite} for $a_n=1$, $b_n=n^2+n$, $z=1$: $$\suny{\frac{1}{n^2+n-1}} =1+\frac{\sqrt{5}}{5}\pi \tan \left(\frac{\pi \sqrt{5}}{2}\right)= \suzy{\left(\suny \frac{1}{(n^2+n)^{k+1}}\right)}$$ \section{Code for Maple} For Theorem \ref{thserie}:
\begin{verbatim}
#Definition of the function
f := x -> exp(x);
#The function must be 0 for x=0
g := x -> f(x) - f(0);
#We calculate the coefficients of the power series
a := n -> eval(diff(g(x), x$n), x = 0)/factorial(n);
#Values of s and z
z := 1/3; s := 2;
#Left hand side
s1 := sum(g(z/n^s), n = 1 .. infinity);
evalf(s1, 20);
#Right hand side
ss2 := sum(a(k)*Zeta(k*s)*z^k, k = 1 .. infinity);
evalf(ss2, 20);
\end{verbatim}
\bigskip\noindent Acknowledgements. The author would like to thank Jacques Gélinas for his help with this English translation of the original French version of the paper and the bibliography.
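As a complement to the Maple code, the closed-form values $\sum_{n\geq 1} \zeta(2n)/4^n = 1/2$ and $\sum_{n\geq 1} \zeta(2n)/16^n = \frac12 - \frac{\pi}{8}$ from the applications above can be checked independently. The following Python sketch (not code from the paper) uses a crude partial-sum approximation of $\zeta$:

```python
import math

def zeta(s, terms=10**5):
    # crude partial-sum approximation of the Riemann zeta function (s >= 2)
    return sum(m ** float(-s) for m in range(1, terms + 1))

zs = [zeta(2 * n) for n in range(1, 25)]              # zeta(2), zeta(4), ...
s4 = sum(z / 4 ** n for n, z in enumerate(zs, start=1))
s16 = sum(z / 16 ** n for n, z in enumerate(zs, start=1))
assert abs(s4 - 0.5) < 1e-4                    # sum zeta(2n)/4^n  = 1/2
assert abs(s16 - (0.5 - math.pi / 8)) < 1e-4   # sum zeta(2n)/16^n = 1/2 - pi/8
```

Truncating both the $\zeta$ sums and the outer sum is harmless here: the tail of $\zeta(2)$ contributes about $10^{-5}$ and the terms $\zeta(2n)x^n$ decay geometrically.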
https://arxiv.org/abs/1809.07398
On Permutation Weights and $q$-Eulerian Polynomials
Weights of permutations were originally introduced by Dugan, Glennon, Gunnells, and Steingrímsson (Journal of Combinatorial Theory, Series A 164:24-49, 2019) in their study of the combinatorics of tiered trees. Given a permutation $\sigma$ viewed as a sequence of integers, computing the weight of $\sigma$ involves recursively counting descents of certain subpermutations of $\sigma$. Using this weight function, one can define a $q$-analog $E_n(x,q)$ of the Eulerian polynomials. We prove two main results regarding weights of permutations and the polynomials $E_n(x,q)$. First, we show that the coefficients of $E_n(x, q)$ stabilize as $n$ goes to infinity, which was conjectured by Dugan, Glennon, Gunnells, and Steingrímsson (Journal of Combinatorial Theory, Series A 164:24-49, 2019), and enables the definition of the formal power series $W_d(t)$, which has interesting combinatorial properties. Second, we derive a recurrence relation for $E_n(x, q)$, similar to the known recurrence for the classical Eulerian polynomials $A_n(x)$. Finally, we give a recursive formula for the numbers of certain integer partitions and, from this, conjecture a recursive formula for the stabilized coefficients mentioned above.
\section{Introduction} Dugan, Glennon, Gunnells, and Steingr\'\i msson defined certain weights of permutations in their work on the combinatorics of tiered trees \cite{dugan2019tiered}. Tiered trees are a generalization of maxmin trees, which were originally introduced by Postnikov \cite{postnikov1997intransitive} and appear in the study of local binary search trees \cite{forge2014linial}, hypergeometric functions \cite{gelfand1997combinatorics}, and hyperplane arrangements \cite{postnikov2000deformations, stanley1996hyperplane}. In \cite{dugan2019tiered}, Dugan, Glennon, Gunnells, and Steingr\'\i msson defined a weight for tiered trees, motivated by the role these trees play in the enumeration of absolutely irreducible representations of certain quivers and of torus orbits on certain homogeneous varieties \cite{gunnells2018torus}. They showed that weight $0$ maxmin trees are in natural bijection with permutations. They defined the weight of a permutation $\sigma$ as the largest weight of a maxmin tree that can be constructed from $\sigma$. Computing the weight of $\sigma$ involves recursively counting descents of certain subpermutations of $\sigma$. Using the weight of a permutation, a new $q$-analog of the Eulerian polynomials $E_n(x,q)$ was defined in \cite{dugan2019tiered}. The $q$-Eulerian polynomial $E_n(x,q)$ differs from other $q$-Eulerian polynomials due to Carlitz \cite{carlitz1954}, Foata-Sch{\"u}tzenberger \cite{foata1978major}, Stanley \cite{stanley1976binomial}, and Shareshian-Wachs \cite{shareshian2007}, and exhibits interesting combinatorial properties. It was observed in \cite{dugan2019tiered} that the coefficients of $E_n(x,q)$ seemed to exhibit a certain $\textit{stabilization phenomenon}$. From this conjecture, the formal power series $W_d(t)$ was extracted from $E_n(x,q)$ and observed to have a connection with the enumeration of a certain type of partition $T(n,k)$ \cite{sloane2003line}.
We present two main results regarding permutation weights and the $q$-Eulerian polynomials $E_n(x,q)$. We prove the stabilization phenomenon conjectured in \cite{dugan2019tiered}, which enables a rigorous definition of the formal power series $W_d(t)$ introduced in \cite{dugan2019tiered}. We derive a recurrence relation for the $q$-Eulerian polynomials $E_n(x,q)$, similar to the known recurrence for the classical Eulerian polynomials $A_n(x)$. This paper is organized as follows. In Section \ref{sec:preliminaries}, we introduce preliminary definitions and notation. In Section \ref{sec:stabilization}, we define the weight disparity of a permutation and find a lower bound for weight disparity. We then use this result to show that the coefficients of $x^d$ in the formal power series $W_d(t)$ do indeed stabilize. In Section \ref{sec: formula}, we derive a recurrence relation for $E_n(x,q)$. In Section \ref{sec:second proof}, we determine the conditions under which coefficients of $E_n(x,q)$ are stabilized and give a second proof of the stabilization phenomenon. We conclude with a conjecture regarding the coefficients of $W_d(t)$ in Section \ref{sec: OEIS}. \section{Preliminaries} \label{sec:preliminaries} Let $S_n$ denote the set of permutations of $[n] = \{1,2, \dots, n\}$. We consider the permutation $\sigma \in S_n$ as an ordered sequence of integers $a_1 a_2 \dots a_n$. We say that $\sigma \in S_n$ is a permutation of length $n$. We say that $i \in [n-1]$ is a descent of $\sigma$ if $a_i > a_{i+1}$, and we write $\des(\sigma)$ to denote the number of descents in $\sigma$. The definitions in this section follow \cite{dugan2019tiered}. \begin{definition}\label{def:2.1} Given a permutation $\sigma = a_1 a_2 \cdots a_n$, we define a method for splitting $\sigma$ into certain subpermutations $\sigma_1, \sigma_2, \dots, \sigma_j$. \begin{enumerate}[label=(\roman*)] \item Find the minimum element of $\sigma$, call it $a_m$.
Divide $\sigma$ into subpermutations $\sigma_{\ell}$, $a_m$, and $\sigma_{r}$, where $\sigma_{\ell} = a_1 a_2 \cdots a_{m-1}$ and $\sigma_{r} = a_{m+1} a_{m+2} \cdots a_n$. We have $\sigma = \sigma_{\ell}$ $\cdot$ $a_m$ $\cdot$ $\sigma_r$. \item Find the largest element of $\sigma_{\ell}$, call it $a_k$. Let $\sigma_{\ell1} = a_1 a_2 \cdots a_k$ and let $\sigma_{\ell2} = a_{k+1} a_{k+2} \cdots a_{m-1}$. We have $\sigma = \sigma_{\ell1}$ $\cdot$ $\sigma_{\ell2}$ $\cdot$ $a_m$ $\cdot$ $\sigma_r$. \item Repeat step (ii) for $\sigma_{\ell2}$ until $\sigma_{\ell}$ cannot be divided further. This process results in a collection of subpermutations $\sigma_{\ell1}, \sigma_{\ell2}, \dots, \sigma_{\ell s}, a_m, \sigma_{r}$. \end{enumerate} \end{definition} \begin{example} \label{ex:2.2} We split the permutation $\sigma = 839562147$. By step (i), we have $839562 \cdot 1 \cdot 47$. By step (ii), we have $839 \cdot 562 \cdot 1 \cdot 47$. By step (iii), we have $839 \cdot 56 \cdot 2 \cdot 1 \cdot 47$. Since $\sigma$ cannot be split further, we are done. \end{example} \begin{definition}\label{def:2.3} Let $\sigma \in S_{n}$. Then the $\textit{weight}$ of $\sigma$ is the function $w: S_{n} \rightarrow \mathbb{Z}^{\geq 0}$ defined recursively as follows: \begin{enumerate}[label=(\roman*)] \item If $\sigma$ is the identity permutation or $n=1$, then $w(\sigma)=0$. \item Otherwise, consider $\sigma$ as a permutation of $S_{n+1}$ by appending $n+1$ to the right of $\sigma$. Let $\sigma_{1}, \sigma_{2}, \dots, \sigma_{j}$ be the subpermutations of this new permutation as in Definition \ref{def:2.1}, where each subpermutation is flattened to a permutation by replacing its entries with their relative order. Then the weight of $\sigma$ is defined as $$w(\sigma) = \sum_{i=1}^{j} (\des(\sigma_{i}) + w(\sigma_{i})).$$ \end{enumerate} \end{definition} \begin{example}\label{ex:2.4} Consider the permutation $\sigma = 781659243 \in S_9$. We complete $\sigma$ to $781659243A$ by adding a maximal element A to the end. After splitting, we have $78 \cdot 1 \cdot 659243A$.
The weight of $\sigma$ is given by $$w(\sigma) = (w(78) + \des(78)) + (w(1) + \des(1)) + (w(659243A) + \des(659243A)).$$ We compute $w(\sigma)$ as follows: \begin{itemize} \item Since the permutation $1$ is of length 1, we have $w(1) = 0$ and $\des(1)=0$. \item It is clear that $w(78)=0$ and $\des(78)=0$. \item If we consider the numbers in the permutation $659243A$ relative to each other, we can see that $659243A$ flattens to the permutation $5461327$. We complete the permutation to $54613278$ and split to get $546 \cdot 1 \cdot 3278$. So the weight of $659243A$ is given by $w(659243A) = (w(546) + \des(546)) + (w(1) + \des(1)) + (w(3278)+\des(3278))$. A quick computation shows that $w(546)=0$ and $w(3278)=0$, so we have $w(659243A) = (0+1)+(0+0)+(0+1)=2$. \end{itemize} Thus, $w(\sigma) = (0+0) + (0+0) + (2+3) = 5$. \end{example} \begin{definition}\label{def:2.5} The $\textit{Eulerian polynomial}$ $A_{n}(x)$ is defined as $$A_{n}(x) = \sum_{\sigma \in S_{n}} x^{\des(\sigma)}.$$ The coefficient of $x^m$ in $A_{n}(x)$ counts permutations in $S_n$ with $m$ descents; these coefficients are the $\textit{Eulerian numbers}$, denoted $A(n,m)$. \end{definition} Using weights of permutations, a new generalization of the Eulerian polynomials $E_n(x,q)$ was defined in \cite{dugan2019tiered}.
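The splitting procedure of Definition \ref{def:2.1} and the recursive weight of Definition \ref{def:2.3} can be made concrete in a short Python sketch. This is illustrative only and not part of the paper; the function names are ours, and each subpermutation is flattened to its relative order before recursing, as in Example \ref{ex:2.4}.

```python
def des(p):
    """Number of descents of the sequence p."""
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def flatten(p):
    """Replace the entries of p by their relative order (the pattern of p)."""
    rank = {v: i + 1 for i, v in enumerate(sorted(p))}
    return [rank[v] for v in p]

def split(p):
    """Split p into subpermutations as in Definition 2.1."""
    m = p.index(min(p))
    left, pieces = p[:m], []
    while left:                      # repeatedly cut just after the maximum of the remainder
        k = left.index(max(left))
        pieces.append(left[:k + 1])
        left = left[k + 1:]
    pieces.append([p[m]])            # the minimum is its own piece
    if p[m + 1:]:
        pieces.append(p[m + 1:])
    return pieces

def weight(p):
    """Recursive weight of Definition 2.3 (pieces are flattened before recursing)."""
    p = flatten(list(p))
    n = len(p)
    if n <= 1 or p == sorted(p):     # a single entry or the identity permutation
        return 0
    completed = p + [n + 1]          # append a new maximum element
    return sum(des(piece) + weight(piece) for piece in split(completed))

# Example 2.2: split(839562147) -> 839 . 56 . 2 . 1 . 47
print(split([8, 3, 9, 5, 6, 2, 1, 4, 7]))
# Example 2.4: w(781659243) = 5
print(weight([7, 8, 1, 6, 5, 9, 2, 4, 3]))
```

Running the sketch reproduces Example \ref{ex:2.2} and the value $w(781659243)=5$ from Example \ref{ex:2.4}.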
\begin{definition}\label{def:2.6} The $\textit{\textit{q}-Eulerian polynomial}$ $E_n(x,q)$ is defined as $$E_{n}(x,q) = \sum_{\sigma \in S_{n}} x^{\des(\sigma)}q^{w(\sigma)}.$$ \end{definition} \begin{example}\label{ex:2.7} We list the $q$-Eulerian polynomials $E_n(x,q)$ up to $n=7$: $E_{0}(x,q) = 1$ \\ $E_{1}(x,q) = 1$ \\ $E_{2}(x,q) = 1+x$ \\ $E_{3}(x,q) = 1+x(q+3) + x^2$ \\ $E_{4}(x,q) = 1+x(q^2+3q+7) + x^2(q^2 + 4q+6) + x^3$ \\ $E_{5}(x,q) = 1+x(q^3+3q^2+7q+15) + x^2(q^4 + 4q^3+11q^2 + 25q+25) + x^3(q^3 + 5q^2 + 10q + 10) + x^4$ \\ $E_{6}(x,q) = 1+x(q^4+3q^3+7q^2+15q+31) + x^2(q^6+4q^5+11q^4+31q^3+58q^2 + 107q+90) + x^3(q^6+5q^5+16q^4+34q^3 + 76q^2 + 105q + 65) + x^4(q^4+6q^3+15q^2+20q+15)+x^5$ \\ $E_{7}(x,q) = 1+x(q^5 +3q^4+7q^3+15q^2+31q+63) + x^2(q^8+4q^7+11q^6+31q^5+65q^4+149q^3+237q^2 + 392q+301) + x^3(q^9 + 5q^8+16q^7+41q^6+104q^5+203q^4+380q^3 + 609q^2 + 707q + 350) + x^4(q^8+6q^7+22q^6+55q^5+106q^4+210q^3+336q^2+315q+140)+x^5(q^5+7q^4+21q^3+35q^2+35q+21) + x^6$ \\ Additional $q$-Eulerian polynomials are in Appendix \ref{appendix:A}. \end{example} \section{Proof of Stabilization Phenomenon} \label{sec:stabilization} Let us fix $k$ and consider the coefficients of $x^k$ in $E_{n}(x,q)$. It was conjectured in \cite{dugan2019tiered} that as $n$ goes to infinity, these coefficients converge to a fixed sequence and thus display a $\textit{stabilization phenomenon}$. Observe that for $k=2$ in Example \ref{ex:2.7}, the coefficients of $x^2$ seem to stabilize to \begin{center} $q^N+4q^{N-1}+11q^{N-2}+31q^{N-3}+65q^{N-4}+ \cdots$ \end{center} where $N$ is the maximum weight of a permutation of length $n$ with $2$ descents. Using this conjectural observation, the formal power series $W_{d}(t) \in \mathbb{Z}[[t]]$ was extracted in \cite{dugan2019tiered} and observed to have interesting combinatorial properties. In this section, we prove the stabilization phenomenon conjectured in \cite{dugan2019tiered}. 
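The polynomials above, and the conjectural observation on their coefficients, can be reproduced by brute force over $S_n$. The Python sketch below is illustrative (ours, not from the paper) and implements the weight of Definition \ref{def:2.3} so as to be self-contained.

```python
from itertools import permutations

def des(p):
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def flatten(p):
    rank = {v: i + 1 for i, v in enumerate(sorted(p))}
    return [rank[v] for v in p]

def split(p):
    # splitting procedure of Definition 2.1
    m = p.index(min(p))
    left, pieces = p[:m], []
    while left:
        k = left.index(max(left))
        pieces.append(left[:k + 1])
        left = left[k + 1:]
    pieces.append([p[m]])
    if p[m + 1:]:
        pieces.append(p[m + 1:])
    return pieces

def weight(p):
    # recursive weight of Definition 2.3
    p = flatten(list(p))
    n = len(p)
    if n <= 1 or p == sorted(p):
        return 0
    return sum(des(s) + weight(s) for s in split(p + [n + 1]))

def E(n):
    """Coefficients of E_n(x,q) as a dict {(d, w): number of permutations}."""
    coeffs = {}
    for p in permutations(range(1, n + 1)):
        key = (des(p), weight(p))
        coeffs[key] = coeffs.get(key, 0) + 1
    return coeffs

# E_3(x,q) = 1 + x(q+3) + x^2:
e3 = E(3)
for key in sorted(e3):
    print(key, e3[key])
```

For example, `E(3)` returns exactly the coefficients of $E_3(x,q) = 1 + x(q+3) + x^2$, and larger $n$ reproduce the polynomials listed above.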
We first define the weight disparity of a permutation and find a lower bound for weight disparity in Theorem \ref{thm:disparity}. We then use this bound to prove Theorem \ref{thm:stabilization}. We thus give an explicit formula for the formal power series $W_{d}(t)$, enabling the study of certain combinatorial properties in the coefficients of $W_d(t)$. \begin{definition}\label{def:3.1} We denote the \textit{maximum weight} of a permutation of length $n$ with $d$ descents by $\maxwt(n,d)$. \end{definition} It was shown in \cite{dugan2019tiered} that $\maxwt(n,d) = d(n-d-1)$. \begin{lemma}\label{lem:3.2} Let $\sigma \in S_n$ be a permutation with $d$ descents. If the last entry of $\sigma$ is $1$, then $w(\sigma) \leq (d-1)(n-d-1)$. \end{lemma} \begin{proof} This follows directly from the proof of Theorem 6.10 in \cite{dugan2019tiered}. \end{proof} \begin{definition}\label{def:3.3} Let $\sigma \in S_n$ be a permutation with $d$ descents. We define the $\textit{weight disparity}$ $\Delta(\sigma)$ of $\sigma$ as $\Delta(\sigma) = \maxwt(n,d)-w(\sigma)$. \end{definition} We now prove a lower bound for the weight disparity $\Delta(\sigma)$. \begin{theorem}\label{thm:disparity} Let $\sigma \in S_n$ be a permutation with $d$ descents. If $\sigma$ does not start with $1$, then $\Delta(\sigma) \geq n-d-1$. \end{theorem} \begin{proof} Let $\sigma \in S_n$ be a permutation with $d$ descents such that $\sigma = \pi_L \cdot 1 \cdot \pi_R$, where $\pi_L$ and $\pi_R$ are subpermutations and $\pi_L$ is nonempty. We have two cases: \begin{enumerate}[label=(\roman*)] \item The subpermutation $\pi_{R}$ is empty. Then $\sigma = \pi_{L} \cdot 1$, and by Lemma \ref{lem:3.2}, we have $w(\sigma) \leq (d-1)(n-d-1)$, so $\Delta(\sigma) \ge n-d-1$. \item The subpermutation $\pi_{R}$ is nonempty. Then $\sigma = \pi_{L} \cdot 1 \cdot \pi_{R}$. Let $\pi_L$ be a permutation on $\ell$ numbers with $\des(\pi_L) = q$.
We have $\pi_{R} \in S_{n-\ell-1}$ and $\des(\pi_{R}) = d-q-1$. We proceed to bound $w(\sigma)$. Computing the weight of $\sigma$, we have \begin{center} $w(\sigma) = w(\pi_{L} \cdot 1) + w(\pi_{R}) + \des(\pi_{R}),$ \end{center} which implies \begin{center} $w(\sigma) \leq q(\ell-q-1) + (d-q-1)(n-\ell-d+q).$ \end{center} Since $\des(\pi_R) = d-q-1 \geq 0$ and $\des(\pi_L) = q < \ell$, we have $(q-d+1)(\ell-q-1) \leq 0$. Similarly, we have $\des(\pi_R)=d-q-1<n-\ell-1$ and so $q(-n+\ell+d-q) \leq 0$. Combining these two inequalities, we get \begin{equation}\label{eq: 1} (q-d+1)(\ell-q-1)+q(-n+\ell+d-q) \leq 0. \end{equation} Adding $(d-1)(n-d-1)$ to \eqref{eq: 1} and substituting for $w(\sigma)$, we obtain \begin{center} $w(\sigma) \leq (d-1)(n-d-1)$, \end{center} or equivalently, \begin{center} $\Delta(\sigma) \geq n-d-1$. \end{center} \end{enumerate} \end{proof} \begin{definition} \label{def:3.5} We denote the coefficient of $x^d$ in $E_n(x,q)$ by $E_n[x^d]$ and the coefficient of $x^dq^m$ in $E_n(x,q)$ by $E_n[x^dq^m]$. \end{definition} We now show that the coefficients of $E_{n}[x^d]$ display a stabilization phenomenon. \begin{theorem} \label{thm:stabilization} For all $k, d, m \in \mathbb{N}$ such that $m = d+k+1$ and $n \ge m$, the value of $E_n[x^dq^{\maxwt(n,d)-k}]$ is independent of $n$. \end{theorem} \begin{proof} We show that for sufficiently large $n$, the number of permutations of length $n$ with a given weight disparity is constant. Let $\sigma \in S_{m+1}$ such that $m=d+k+1$ for $k, m, d \in \mathbb{N}$. We determine the form of the permutations $\sigma$ counted by $E_{m+1}[x^dq^{\maxwt(m+1,d)-k}]$. We have the following cases: \begin{enumerate}[label=(\roman*)] \item Suppose $\sigma$ is of the form $1 \cdot \pi$, where $\pi \in S_m$ and $\des(\pi)=d$ and $w(\pi) = \maxwt(m,d)-k= (d-1)(m-d-1)$. Then we have \begin{equation*} \begin{split} w(\sigma) & = w(\pi)+d = \maxwt(m+1,d)-k.
\end{split} \end{equation*} So $\sigma$ is counted by $E_{m+1}[x^dq^{\maxwt(m+1,d)-k}]$. Note that removing $1$ from the beginning of $\sigma$ and subtracting $1$ from every element is a bijection that gives us permutations $\pi \in S_m$ with $d$ descents and weight equal to $w(\pi) = \maxwt(m,d)-k$. So $\sigma$ is also counted by $E_{m}[x^dq^{\maxwt(m,d)-k}]$. \item Suppose $\sigma $ is of the form $1 \cdot \pi$, where $\des(\pi) \neq d$. Then $\sigma$ is not counted by $E_{m+1}[x^dq^{\maxwt(m+1,d)-k}]$. \item Suppose $\sigma $ is of the form $1 \cdot \pi$, where $w(\pi) \neq d(m-d-1)-k$. Then $w(\sigma) = w(\pi)+d \neq \maxwt(m+1,d)-k$, and $\sigma$ is not counted by $E_{m+1}[x^dq^{\maxwt(m+1,d)-k}]$. \item Suppose $\sigma$ does not start with $1$. Let $p = \des(\sigma)$. By Theorem \ref{thm:disparity}, we have $\Delta(\sigma) \geq m - \des(\sigma)$, which implies the following: \begin{align*} \maxwt(m+1,p)-w(\sigma)+p &\geq m \Rightarrow \\ p(m+1-p-1)-w(\sigma)+p &> m-1 = d + k \Rightarrow \\ p-w(\sigma) &> d - (\maxwt(m+1, p)-k). \end{align*} So either $p \neq d$ or $w(\sigma) \neq \maxwt(m+1, p)-k$. Thus $\sigma$ is not counted by $E_{m+1}[x^dq^{\maxwt(m+1,d)-k}]$. \end{enumerate} Since the number of permutations with weight disparity $k$ and $d$ descents in $S_{m+1}$ is the same as those in $S_{m}$, we have \begin{center} $E_{m}[x^dq^{\maxwt(m,d)-k}] = E_{m+1}[x^dq^{\maxwt(m+1,d)-k}]$. \end{center} By a similar argument, we have \begin{center} $E_{m+1}[x^dq^{\maxwt(m+1,d)-k}] = E_{m+2}[x^dq^{\maxwt(m+2,d)-k}]$. \end{center} Thus for all positive integers $n>m$, we have $E_{n}[x^dq^{\maxwt(n,d)-k}] = E_{m}[x^dq^{\maxwt(m,d)-k}]$. \end{proof} Theorem \ref{thm:stabilization} shows that the formal power series $W_d(t)$ defined in \cite{dugan2019tiered} exists and that its coefficients are given by $W_d[t^k] = E_{d+k+1}[x^dq^{(d-1)k}]$. We can now extract $W_d(t)$ from the stabilized coefficients of $E_n(x,q)$.
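Theorem \ref{thm:stabilization} can be checked numerically for small parameters. The illustrative Python sketch below (ours, not from the paper) implements the weight of Definition \ref{def:2.3} and compares $E_n[x^dq^{\maxwt(n,d)-k}]$ across several values of $n$.

```python
from itertools import permutations

# Weight of Definition 2.3, via the splitting of Definition 2.1.
def des(p):
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def flatten(p):
    rank = {v: i + 1 for i, v in enumerate(sorted(p))}
    return [rank[v] for v in p]

def split(p):
    m = p.index(min(p))
    left, pieces = p[:m], []
    while left:
        k = left.index(max(left))
        pieces.append(left[:k + 1])
        left = left[k + 1:]
    pieces.append([p[m]])
    if p[m + 1:]:
        pieces.append(p[m + 1:])
    return pieces

def weight(p):
    p = flatten(list(p))
    n = len(p)
    if n <= 1 or p == sorted(p):
        return 0
    return sum(des(s) + weight(s) for s in split(p + [n + 1]))

def stable_coeff(n, d, k):
    """E_n[x^d q^{maxwt(n,d)-k}], with maxwt(n,d) = d(n-d-1)."""
    target = d * (n - d - 1) - k
    return sum(1 for p in permutations(range(1, n + 1))
               if des(p) == d and weight(p) == target)

# For d = 2 and k <= 2 the theorem applies once n >= d+k+1 = 5:
for k in range(3):
    print(k, [stable_coeff(n, 2, k) for n in (5, 6)])
```

The two entries printed for each $k$ agree (with values $1, 4, 11$ for $k = 0, 1, 2$), matching the bold pattern visible in the $x^2$ coefficients listed earlier.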
\begin{definition} \label{def:3.7} If $E_{n}[x^d]$ stabilizes to $q^N + a_{1}q^{N-1} + a_{2}q^{N-2}+ a_{3}q^{N-3} + \cdots$, where $N=d(n-d-1)$, then the formal power series $W_d(t)$ is defined in \cite{dugan2019tiered} as \begin{center} $W_{d}(t) = 1 + a_{1}t + a_{2}t^2 + a_{3}t^3 + \cdots$. \end{center} \end{definition} \begin{example} \label{ex:3.8} By Theorem \ref{thm:stabilization} and our data for the $q$-Eulerian polynomials, we have \begin{align*} W_1(t) &= 1+3t+7t^2+15t^3+31t^4+63t^5+\cdots \\ W_2(t) &= 1+4t+11t^2+31t^3+65t^4+157t^5+\cdots \\ W_3(t) &= 1+5t+16t^2+41t^3+112t^4+244t^5+\cdots \\ W_4(t) &= 1+6t+22t^2+63t^3+155t^4+393t^5+\cdots \\ W_5(t) &= 1+7t+29t^2+92t^3+247t^4+590t^5+\cdots \end{align*} \end{example} \section{A Recurrence Relation for $E_n(x,q)$} \label{sec: formula} The classical Eulerian polynomials $A_n(x)$ in Definition \ref{def:2.5} enumerate permutations according to their number of descents. The Eulerian polynomials $A_n(x)$ satisfy the following recurrence for $n \geq 1$: \begin{align*} A_0(x)&=1, \\ A_n(x) &= \sum_{i=1}^{n-1} \binom{n-1}{i} A_i(x) \cdot A_{n-i-1}(x) \cdot x + A_{n-1}(x). \end{align*} We now derive a recurrence relation for the $q$-Eulerian polynomials $E_n(x,q)$, similar to the recurrence for $A_n(x)$. \begin{theorem} \label{thm:formula} The $q$-Eulerian polynomials $E_n(x,q)$ satisfy the recurrence relation \begin{align*} E_{0}(x,q) &= 1, \\ E_n(x,q) &= \sum_{i=1}^{n-1} \binom{n-1}{i} E_i(x,q) \cdot E_{n-i-1}(qx,q) \cdot x + E_{n-1}(qx,q). \end{align*} \end{theorem} We introduce the following definitions and lemmas used in the proof of Theorem \ref{thm:formula}. \begin{definition} \label{def:4.2} We define $S_n^{\prime}$ to be the set of all permutations in $S_n$ ending in $1$. That is, $S^\prime_{n} = \{\sigma \in S_n \mid \sigma = \pi \cdot 1,$ where $\pi \in S_{n-1}\}$.
\end{definition} \begin{lemma} \label{lem:4.3} There exists a bijection $f:S_n \rightarrow S^\prime_{n+1}$ such that, for all $\sigma \in S_n$, $w(f(\sigma)) = w(\sigma)$ and $\des(f(\sigma)) = \des(\sigma) + 1$. \end{lemma} \begin{proof} We first show that there exists a function $f: S_n \rightarrow S^\prime_{n+1}$ that preserves weight and increases the number of descents by $1$. Let $\pi \in S_n$ such that $\pi = \pi_L \cdot 1 \cdot \pi_R$, where the subpermutations $\pi_{L}$ and $\pi_R$ can be empty. Consider the function $f: S_n \rightarrow S^\prime_{n+1}$ defined by \begin{center} $f(\pi) = f(\pi_L \cdot 1 \cdot \pi_R) = \pi_R \cdot (n+1) \cdot \pi_L \cdot 1$. \end{center} That is, $f$ replaces $1$ with the maximum element $n+1$, exchanges $\pi_L$ and $\pi_R$, and appends a minimum element $1$ at the end. Notice that for any $\pi \in S_n$, we have $f(\pi) \in S^\prime_{n+1}$ and $\des(f(\pi)) = \des(\pi) + 1$. The weight of $f(\pi)$ is given by \begin{center} $w(f(\pi)) = w(\pi_L \cdot 1) + w(\pi_R) + \des(\pi_R)$, \end{center} so $w(f(\pi)) = w(\pi)$. So $f$ preserves weight and increases the number of descents by $1$. We now show that there exists a function $g: S^\prime_{n+1} \rightarrow S_n$ that is an inverse of $f$, preserves weight, and decreases the number of descents by $1$. Let $\alpha \in S^\prime_{n+1}$ be such that $\alpha = \alpha_L \cdot (n+1) \cdot \alpha_R \cdot 1$, where the subpermutations $\alpha_L$ and $\alpha_R$ can be empty. Let $g: S^\prime_{n+1} \rightarrow S_n$ be the function defined by \begin{center} $g(\alpha) = g(\alpha_L \cdot (n+1) \cdot \alpha_R \cdot 1) = \alpha_R \cdot 1 \cdot \alpha_L$. \end{center} That is, $g$ removes $1$ from the end of the permutation, replaces $n+1$ with $1$, and exchanges $\alpha_L$ and $\alpha_R$. Notice that for any $\alpha \in S^\prime_{n+1}$, $g(\alpha) \in S_n$, and $\des(g(\alpha)) = \des(\alpha)-1$.
The weight of $g(\alpha)$ is given by \begin{center} $w(g(\alpha)) = w(\alpha_R \cdot 1) + w(\alpha_L) + \des(\alpha_L)$. \end{center} Since $w(\alpha) = w(\alpha_L \cdot (n+1)) + \des(\alpha_L \cdot (n+1))+ w(\alpha_R \cdot 1)$, we have $w(\alpha) = w(\alpha_L) + \des(\alpha_L) + w(\alpha_R \cdot 1) = w(g(\alpha))$. So $g$ preserves weight and decreases the number of descents by $1$. Finally, we show that $f$ is a bijection from $S_n$ to $S^\prime_{n+1}$. We have the following: \begin{center} $g(f(\pi)) = g(\pi_R \cdot (n+1) \cdot \pi_L \cdot 1) = \pi_L \cdot 1 \cdot \pi_R = \pi,$ $f(g(\alpha)) = f(\alpha_R \cdot 1 \cdot \alpha_L) = \alpha_L \cdot (n+1) \cdot \alpha_R \cdot 1 = \alpha.$ \end{center} Thus $g=f^{-1}$, and $f$ is a bijection from $S_n$ to $S^\prime_{n+1}$. \end{proof} \begin{definition} \label{def:4.5} We write $E_n^{*}(x,q)$ to denote the $q$-Eulerian polynomial $$\sum_{\sigma \in S_n^{\prime}} x^{\des(\sigma)}q^{w(\sigma)}.$$ \end{definition} \begin{lemma} \label{lem:4.5} We have \begin{center} $E_k[x^d] = E_{k+1}^{*}[x^{d+1}]$. \end{center} \end{lemma} \begin{proof} From Definition \ref{def:2.6} and Definition \ref{def:4.5}, we have the following: \begin{equation} \label{eq: 2} E_k[x^d] = \sum_{\substack{\sigma \in S_k \\ \des(\sigma)=d}} q^{w(\sigma)}, \end{equation} \begin{equation} \label{eq: 3} E_{k+1}^*[x^{d+1}] = \sum_{\substack{\alpha \in S^{\prime}_{k+1} \\ \des(\alpha)=d+1}} q^{w(\alpha)}. \end{equation} The coefficient of $q^m$ in \eqref{eq: 2} counts permutations in $S_k$ with $d$ descents and weight $m$. The coefficient of $q^m$ in \eqref{eq: 3} counts permutations in $S_{k+1}$ ending in $1$ with $d+1$ descents and weight $m$. By Lemma \ref{lem:4.3}, there exists a bijection between these two sets of permutations that preserves weight and increases the number of descents by $1$.
Hence, the sizes of these two sets are equal, and we have $E_k[x^d] = E_{k+1}^{*}[x^{d+1}].$ \end{proof} \begin{lemma} \label{lem:4.6} The $q$-Eulerian polynomials $E_n(x,q)$ satisfy the recurrence relation $$E_n[x^d] = \sum_{i=1}^{n-1}\binom{n-1}{i} \Big ( \sum_{k=1}^{i} E_{n-i-1}[x^{d-k}] \cdot E_i[x^{k-1}] \cdot q^{d-k} \Big ) + E_{n-1}[x^d]q^d.$$ \end{lemma} \begin{proof} Let $\sigma \in S_n$ be a permutation with $d$ descents. We have two cases: \emph{Case 1:} Suppose $\sigma = 1 \cdot \pi_1$ for some $\pi_1 \in S_{n-1}$. We have $w(\sigma) = w(\pi_1) + d$. So the number of permutations $\sigma \in S_n$ of this form with $d$ descents and weight $m$ is equal to the number of permutations $\pi_1 \in S_{n-1}$ with $d$ descents and weight $m-d$. These permutations are counted by $E_{n-1}[x^d]q^d$. \par \emph{Case 2:} Suppose $\sigma = \pi_L \cdot 1 \cdot \pi_R$, where $1$ is in position $i+1$ of $\sigma$ and $1 \leq i \leq n-1$. After flattening, $\pi_L \in S_i$ and $\pi_R \in S_{n-i-1}$. There are $\binom{n-1}{i}$ ways to select the $i$ elements in the subpermutation $\pi_L$. Let $k = \des(\pi_L \cdot 1)$, where $1\leq k \leq i$. Then we have $\des(\pi_R)=d-k$ and $w(\sigma) = w(\pi_L \cdot 1) + w(\pi_R) + \des(\pi_R)$. The permutations $\pi_L \cdot 1$ are counted by $E^*_{i+1}[x^{k}]$, and the permutations $\pi_R$ are counted by $E_{n-1-i}[x^{d-k}]$. The product $E_n[x^dq^w] \cdot E_{m}[x^eq^v]$ counts pairs of permutations with total length $n+m$, $d+e$ descents in total, and total weight $w+v$. Thus permutations of length $n$ with $d$ descents and weight $w(\pi_L \cdot 1) + w(\pi_R)$ are counted by \begin{center} $E^*_{i+1}[x^{k}] \cdot E_{n-1-i}[x^{d-k}]$. \end{center} We need to add $\des(\pi_R) = d-k$ to the weight of these permutations to count permutations of length $n$ with $d$ descents and weight $w(\sigma)$. Multiplying by $q^{d-k}$, we have \begin{center} $q^{d-k} \cdot E^*_{i+1}[x^{k}] \cdot E_{n-1-i}[x^{d-k}]$.
\end{center} Thus the permutations $\sigma$ are counted by $$\sum_{i=1}^{n-1} \binom{n-1}{i} \sum_{k=1}^{i} E_{i+1}^{*}[x^{k}] \cdot E_{n-1-i}[x^{d-k}] q^{d-k},$$ which by Lemma \ref{lem:4.5} is equal to $$\sum_{i=1}^{n-1} \binom{n-1}{i} \sum_{k=1}^{i} E_{i}[x^{k-1}] \cdot E_{n-1-i}[x^{d-k}]q^{d-k}.$$ Combining Case 1 and Case 2, we have $$E_n[x^d]=\sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} E_{i}[x^{k-1}] \cdot E_{n-1-i}[x^{d-k}]q^{d-k} \Big ) +E_{n-1}[x^d]q^d.$$ \end{proof} \begin{proof}[Proof of Theorem \ref{thm:formula}] From Lemma \ref{lem:4.6}, it follows that $$E_n(x,q) = \sum_{d=0}^{n-1} x^d \Big ( \sum_{i=1}^{n-1}\binom{n-1}{i} \Big ( \sum_{k=1}^{i} E_{n-i-1}[x^{d-k}] \cdot E_i[x^{k-1}] \cdot q^{d-k} \Big ) + E_{n-1}[x^d]q^d \Big ).$$ If we distribute the first summation and let $\ell = d-k$, we get $$E_n(x,q) = \sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} \Big ( \sum_{\ell=0}^{n-1-i} E_{n-i-1}[x^{\ell}] \cdot q^\ell \cdot E_{i}[x^{k-1}] \cdot x^{\ell+k} \Big ) \Big ) + \sum_{j=0}^{n-1} E_{n-1}[x^j] q^j x^j.$$ We can rearrange terms to get $$E_n(x,q)=\sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} E_{i}[x^{k-1}]x^{k-1} \Big ) \Big ( \sum_{\ell=0}^{n-i-1} E_{n-i-1}[x^\ell] \cdot q^\ell x^{\ell} \Big ) \cdot x +\sum_{j=0}^{n-1} E_{n-1}[x^j] q^j x^j.$$ Substituting the corresponding $q$-Eulerian polynomials, we have $$E_n(x,q) = \sum_{i=1}^{n-1} \binom{n-1}{i} E_i(x,q) \cdot E_{n-i-1}(qx,q) \cdot x + E_{n-1}(qx,q).$$ \end{proof} \begin{remark} We compare our recurrence relation for $E_n(x,q)$ with other recurrences in the literature. Note that if we set $q=1$, Theorem \ref{thm:formula} becomes the recurrence for the classical Eulerian polynomials $A_n(x)$. Since there are type B and D Eulerian polynomials that satisfy recurrence relations analogous to the recurrence for the polynomials $A_n(x)$ \cite{hyatt2016recurrences}, there may also be other $q$-analogs like the one we study. 
Finally, we note that the recurrence relation in Theorem \ref{thm:formula} resembles the recurrence derived by Postnikov for the polynomials $f_n(x)$, given by $$f_n(x) = x \Big ( \sum_{\ell=1}^{n-1} \binom{n-1}{\ell} \cdot f_\ell^{\prime}(x) f_{n-\ell-1}(x) + f_{n-1}(x) \Big ),$$ where $f_n(x)= \sum_k f_{nk}x^k$ and $f_{nk}$ denotes the number of all maxmin trees on the set $[n+1]$ with $k$ maxima \cite{postnikov1997intransitive}. Given that weight 0 maxmin trees are in bijection with permutations \cite{dugan2019tiered}, it would be interesting to explore this connection between all maxmin trees and permutations further. \end{remark} Using Lemma \ref{lem:4.6} and Theorem \ref{thm:formula}, we give a recursive formula for the coefficients of $E_n(x,q)$. \begin{corollary} \label{cor:4.9} The coefficients of the $q$-Eulerian polynomials $E_n(x,q)$ satisfy the recurrence relation $$E_n[x^dq^m] = \sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} \Big ( \sum_{j=0}^{m} E_{n-i-1}[x^{d-k}q^{m-j}] \cdot E_{i}[x^{k-1}q^{k+j-d}] \Big ) \Big ) + E_{n-1}[x^dq^{m-d}].$$ \end{corollary} \begin{proof} By Lemma \ref{lem:4.6}, we have $$E_n[x^d]=\sum_{i=1}^{n-1} \binom{n-1}{i} \sum_{k=1}^{i} E_{i}[x^{k-1}] \cdot E_{n-1-i}[x^{d-k}]q^{d-k}+E_{n-1}[x^d]q^d.$$ We now extract the coefficient of $q^m$ on both sides: writing the exponent of $q$ contributed by $E_{n-1-i}[x^{d-k}]$ as $m-j$, the factor $E_{i}[x^{k-1}]$ must contribute $q^{k+j-d}$, and summing over $j$ introduces the third summation and gives the stated formula. \end{proof} \section{Alternate Proof of the Stabilization Phenomenon} \label{sec:second proof} Using our recursive formula for the coefficients of $E_n(x,q)$ in Corollary \ref{cor:4.9}, we determine in Theorem \ref{thm:5.1} a lower bound on the exponent of $q$ above which the coefficients $E_n[x^dq^m]$ are stabilized. We then use this result to give a second proof of Theorem \ref{thm:stabilization}.
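The alternate proof below relies on Corollary \ref{cor:4.9}. As a sanity check, the recurrence of Theorem \ref{thm:formula} can be verified by brute force for small $n$. In the illustrative Python sketch below (ours, not from the paper), a polynomial in $x$ and $q$ is stored as a dictionary from exponent pairs $(d,m)$ to coefficients, so the substitution $x \mapsto qx$ sends $(d,m)$ to $(d,m+d)$.

```python
from itertools import permutations
from math import comb

# Weight of Definition 2.3.
def des(p):
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def flatten(p):
    rank = {v: i + 1 for i, v in enumerate(sorted(p))}
    return [rank[v] for v in p]

def split(p):
    m = p.index(min(p))
    left, pieces = p[:m], []
    while left:
        k = left.index(max(left))
        pieces.append(left[:k + 1])
        left = left[k + 1:]
    pieces.append([p[m]])
    if p[m + 1:]:
        pieces.append(p[m + 1:])
    return pieces

def weight(p):
    p = flatten(list(p))
    n = len(p)
    if n <= 1 or p == sorted(p):
        return 0
    return sum(des(s) + weight(s) for s in split(p + [n + 1]))

def E(n):
    """E_n(x,q) as a dict mapping (degree in x, degree in q) to the coefficient."""
    c = {}
    for p in permutations(range(1, n + 1)):
        key = (des(p), weight(p))
        c[key] = c.get(key, 0) + 1
    return c

def mul(a, b):
    r = {}
    for (d1, m1), c1 in a.items():
        for (d2, m2), c2 in b.items():
            key = (d1 + d2, m1 + m2)
            r[key] = r.get(key, 0) + c1 * c2
    return r

def add(a, b):
    r = dict(a)
    for key, c in b.items():
        r[key] = r.get(key, 0) + c
    return r

def qx(a):
    """Substitute x -> qx, i.e. send x^d q^m to x^d q^{m+d}."""
    return {(d, m + d): c for (d, m), c in a.items()}

def rhs(n):
    """Right-hand side of the recurrence in Theorem 4.1."""
    x = {(1, 0): 1}
    total = qx(E(n - 1))
    for i in range(1, n):
        term = mul(mul(E(i), qx(E(n - i - 1))), x)
        total = add(total, {key: comb(n - 1, i) * c for key, c in term.items()})
    return total

for n in (2, 3, 4, 5):
    assert E(n) == rhs(n)
print("recurrence verified for n = 2, ..., 5")
```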
\begin{theorem} \label{thm:5.1} If $m \geq (d-1)(n-d-1)+1$ for $d \neq 0$ and $d \neq n-1$, then $$E_n[x^dq^m] = E_{n-1}[x^dq^{m-d}].$$ \end{theorem} \begin{proof} By Corollary \ref{cor:4.9}, we have \begin{equation} \label{eq:4} E_n[x^dq^{m}] = \sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} \Big (\sum_{j=0}^{m} E_{n-i-1}[x^{d-k}q^{m-j}] \cdot E_i[x^{k-1}q^{k+j-d}] \Big ) \Big ) + E_{n-1}[x^dq^{m-d}]. \end{equation} For fixed $i$ and $k$, we attempt to find bounds for $j$ such that $E_{n-i-1}[x^{d-k}q^{m-j}] \cdot E_i[x^{k-1}q^{k+j-d}] \neq 0$. We consider the cases when $m$ is greater than, equal to, or less than $(d-1)(n-d-1)$. We show that in case (i), we must have $E_{n-i-1}[x^{d-k}q^{m-j}] \cdot E_i[x^{k-1}q^{k+j-d}] = 0$ and so $E_n[x^dq^m] = E_{n-1}[x^dq^{m-d}]$. \begin{enumerate}[label=(\roman*)] \item Suppose $m=(d-1)(n-d-1)+p$ for some positive integer $p$. We show that $E_n[x^dq^m] = E_{n-1}[x^dq^{m-d}]$. Note that $m-d \geq 0$ since $0<d<n-1$. By Corollary \ref{cor:4.9}, we have \begin{equation} \label{eq:5} \begin{split} E_n[x^dq^{(d-1)(n-d-1)+p}] = \sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} \Big (&\sum_{j=0}^{(d-1)(n-d-1)+p} E_{n-i-1}[x^{d-k}q^{(d-1)(n-d-1)+p-j}]\\ &\cdot E_i[x^{k-1}q^{k+j-d}] \Big ) \Big ) + E_{n-1}[x^dq^{(d-1)(n-d-1)+p-d}]. \end{split} \end{equation} The weight of a permutation of length $n$ with $d$ descents is bounded by $\maxwt(n,d)=d(n-d-1)$. So for the coefficients $E_{n-i-1}[x^{d-k}q^{(d-1)(n-d-1)+p-j}]$ and $E_i[x^{k-1}q^{k+j-d}]$ to be nonzero, we need the following inequalities: \begin{equation} \label{eq: ie3} (d-1)(n-d-1)+p-j \leq (d-k)(n-1-i-d+k-1), \end{equation} \begin{equation} \label{eq: ie4} k+j-d \leq (k-1)(i-k). \end{equation} Analyzing the terms and summation bounds in \eqref{eq:5} gives $1 \leq k \leq d$, $k \leq i$ and $n-1-i \geq d-k-1$. A straightforward computation shows that these summation bounds and \eqref{eq: ie3} and \eqref{eq: ie4} contradict each other.
Thus there does not exist $j$ such that $E_{n-i-1}[x^{d-k}q^{m-j}] \cdot E_i[x^{k-1}q^{k+j-d}] \neq 0$. We have $$\sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} \Big (\sum_{j=0}^{(d-1)(n-d-1)+p} E_{n-i-1}[x^{d-k}q^{(d-1)(n-d-1)+p-j}] \cdot E_i[x^{k-1}q^{k+j-d}] \Big ) \Big )=0.$$ So $E_n[x^dq^m] = E_{n-1}[x^dq^{m-d}]$. \\ \item Suppose $m = (d-1)(n-d-1)$. We show that $E_n[x^dq^m] \neq E_{n-1}[x^dq^{m-d}]$. Note that $m-d \geq 0$ since $0 < d < n-1$. Using the same reasoning as in case (i), we have the following inequalities: \begin{equation} \label{eq: ie5} (d-1)(n-d-1)-j \leq (d-k)(n-1-i-d+k-1), \end{equation} \begin{equation} \label{eq: ie6} k+j-d \leq (k-1)(i-k). \end{equation} A straightforward computation shows that there exist nonnegative integers $j$ that satisfy both \eqref{eq: ie5} and \eqref{eq: ie6}. Thus we have $$\sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} \Big (\sum_{j=0}^{(d-1)(n-d-1)} E_{n-i-1}[x^{d-k}q^{(d-1)(n-d-1)-j}] \cdot E_i[x^{k-1}q^{k+j-d}] \Big ) \Big ) \neq 0.$$ So $E_n[x^dq^m] \neq E_{n-1}[x^dq^{m-d}]$. \\ \item Suppose $m = (d-1)(n-d-1)-r$ for some $r \in \mathbb{Z}^{>0}$. Using the same reasoning as in cases (i) and (ii), we have the following inequalities: \begin{equation} \label{eq: ie7} (d-1)(n-d-1)-j-r \leq (d-k)(n-1-i-d+k-1), \end{equation} \begin{equation} \label{eq: ie8} k+j-d \leq (k-1)(i-k). \end{equation} A straightforward computation shows that there exist nonnegative integers $j$ that satisfy both \eqref{eq: ie7} and \eqref{eq: ie8}. Thus we have $$\sum_{i=1}^{n-1} \binom{n-1}{i} \Big ( \sum_{k=1}^{i} \Big (\sum_{j=0}^{(d-1)(n-d-1)-r} E_{n-i-1}[x^{d-k}q^{(d-1)(n-d-1)-r-j}] \cdot E_i[x^{k-1}q^{k+j-d}] \Big ) \Big ) \neq 0.$$ So $E_n[x^dq^m] \neq E_{n-1}[x^dq^{m-d}]$. \end{enumerate} \end{proof} Using Theorem \ref{thm:5.1}, we give an alternate proof of the stabilization phenomenon.
\begin{proof}[Proof of Theorem \ref{thm:stabilization}] By Theorem \ref{thm:5.1}, we have $E_n[x^dq^m] = E_{n-1}[x^dq^{m-d}]$ if $m \geq (d-1)(n-d-1)+1$, where $d \neq 0$ and $d \neq n-1$. Furthermore, $E_{n}[x^dq^m]$ is stabilized when $m \geq (d-1)(n-d-1)+1$ and is not stabilized when $m < (d-1)(n-d-1)+1$. It is easy to show that $E_n[x^{0}q^{0}]$ is stabilized and $E_n[x^{n-1}q^{0}]$ is not. \end{proof} \section{Connection Between $W_d(t)$ and Integer Partitions} \label{sec: OEIS} Theorem \ref{thm:formula} gives a recursive formula for the $q$-Eulerian polynomials $E_n(x,q)$, and Theorem \ref{thm:5.1} states the condition under which the coefficients $E_n[x^dq^m]$ are stabilized. We conclude with a conjecture regarding the coefficients of $W_d(t)$, which are the stabilized coefficients of $E_n(x,q)$. It was observed in \cite{dugan2019tiered} that the coefficients of $W_d(t)$ correspond to numbers in the sequence \texttt{A256193} by Alois P. Heinz in the Online Encyclopedia of Integer Sequences \cite{sloane2003line}. The numbers $T(n,k)$ count partitions of $n$ with exactly $k$ parts of a second type, denoted by a prime $^{\prime}$. For example, we have \\ \qquad \qquad \qquad \qquad \qquad $T(3,0) = 3$, corresponding to 111, 21, 3, \qquad \qquad \qquad \qquad \qquad $T(3,1) = 6$, corresponding to $1^{\prime}11$, $11^{\prime}1$, $111^{\prime}$, $2^{\prime}1$, $21^{\prime}$, $3^{\prime}$, \qquad \qquad \qquad \qquad \qquad $T(3,2) = 4$, corresponding to $1^{\prime}1^{\prime}1$, $1^{\prime}11^{\prime}$, $11^{\prime}1^{\prime}$, $2^{\prime}1^{\prime}$, \qquad \qquad \qquad \qquad \qquad $T(3,3) = 1$, corresponding to $1^{\prime}1^{\prime}1^{\prime}$. \\ Table \ref{tab:2} is a short table of $T(n,k)$ with $k$ constant along the columns. Numbers in bold correspond to coefficients of the power series $W_d(t)$.
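The numbers $T(n,k)$ can be computed directly. The illustrative Python sketch below uses the observation, consistent with the examples for $T(3,k)$ above, that a counted object is an ordinary partition $\lambda \vdash n$ together with a choice of which $k$ of its $\ell(\lambda)$ parts are primed, contributing $\binom{\ell(\lambda)}{k}$; this characterization is ours and should be checked against the OEIS entry.

```python
from math import comb

def partitions(n, largest=None):
    """Yield the partitions of n as weakly decreasing lists of parts."""
    if largest is None:
        largest = n
    if n == 0:
        yield []
        return
    for part in range(min(n, largest), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

def T(n, k):
    # choosing which k of the ell(lambda) parts are primed gives binom(ell, k)
    return sum(comb(len(lam), k) for lam in partitions(n))

print([T(3, k) for k in range(4)])      # matches T(3,0..3) = 3, 6, 4, 1

# Checking the relation of Theorem 6.1 for k = 2 and d = 4, 5, 6:
for d in range(4, 7):
    k = 2
    lhs = T(d + k, d)
    rhs = sum((-1) ** (i + 1) * comb(k, i) * T(d + k - i, d - i)
              for i in range(1, k + 1)) + 1
    assert lhs == rhs
```

The computed values reproduce the rows of Table \ref{tab:2}, and the final loop checks the relation of Theorem \ref{thm:w_d(t)} on small cases.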
\begin{table}[htb] \begin{tabular}{llllllllll} \textbf{1} & & & & & & & & & \\ 1 & \textbf{1} & & & & & & & & \\ 2 & \textbf{3} & \textbf{1} & & & & & & & \\ 3 & 6 & \textbf{4} & \textbf{1} & & & & & & \\ 5 & 12 & \textbf{11} & \textbf{5} & \textbf{1} & & & & & \\ 7 & 20 & 24 & \textbf{16} & \textbf{6} & \textbf{1} & & & & \\ 11 & 35 & 49 & \textbf{41} & \textbf{22} & \textbf{7} & \textbf{1} & & & \\ 15 & 54 & 89 & 91 & \textbf{63} & \textbf{29} & \textbf{8} & \textbf{1} & & \\ 22 & 86 & 158 & 186 & \textbf{155} & \textbf{92} & \textbf{37} & \textbf{9} & \textbf{1} & \\ 30 & 128 & 262 & 351 & 342 & \textbf{247} & \textbf{129} & \textbf{46} & \textbf{10} & \textbf{1} \end{tabular} \vspace{+.2cm} \caption{The numbers $T(n,k)$} \label{tab:2} \vspace{-.4cm} \end{table} We can therefore state the following theorem and conjecture: \begin{theorem} \label{thm:w_d(t)} For fixed $k \in \mathbb{N}$ and $d \geq 2k$, we have $$T(d+k, d) = \sum_{i=1}^{k} \Big ((-1)^{i+1} \cdot \binom{k}{i} \cdot T(d+k-i, d-i)\Big) + 1.$$ \end{theorem} \begin{conjecture} \label{conj:6.1} For fixed $k \in \mathbb{N}$ and $d \geq 2k$, we have $$W_d[t^{k}] = \sum_{i=1}^{k} \Big((-1)^{i+1} \cdot \binom{k}{i} \cdot W_{d-i}[t^{k}] \Big) + 1.$$ \end{conjecture} We proceed to prove the relation for $T(d+k,d)$ in Theorem \ref{thm:w_d(t)}. We first prove the following lemma: \begin{lemma} \label{lem:6.1} Let $n, k, b \in \mathbb{N}$ such that $b \leq 2k-n$. The numbers $T(n,k)$ satisfy the relation $$T(n,k)=\sum_{j=0}^{b} \binom{b}{j} \cdot T(n-b, k-j).$$ \end{lemma} \begin{proof} Let $n, k, b \in \mathbb{N}$ such that $b \leq 2k-n$ and let $0 \leq j \leq b$. Consider the partitions represented by $T(n-b,k-j)$. Appending a partition of $b$ 1s with $j$ of them $1^{\prime}$ results in partitions represented by $T(n,k)$. There are $\sum_{j=0}^{b} \binom{b}{j} \cdot T(n-b, k-j)$ such partitions. We show that this summation counts all partitions $T(n,k)$. 
By our assumption and the definition of $T(n,k)$, each partition represented by $T(n,k)$ ends in at least one 1 or $1^{\prime}$. The partition with the fewest 1s (1 or $1^{\prime}$) is the partition $\underbrace{2^{\prime}2^{\prime} \dots 2^{\prime}}_{n-k} \underbrace{1^{\prime}1^{\prime} \dots 1^{\prime}}_{2k-n}$. So every partition represented by $T(n,k)$ ends in at least $2k-n$ 1s and is counted by the summation. Thus for $b \leq 2k-n$, we have $T(n,k) = \sum_{j=0}^{b} \binom{b}{j} \cdot T(n-b, k-j)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:w_d(t)}] Let $k \in \mathbb{N}$ be fixed and $d \geq 2k$. By Lemma \ref{lem:6.1}, we get \begin{align*} T(d+k,d) &= \sum_{j=0}^{k} \binom{k}{j} \cdot T(d,d-j) \\ &= \sum_{j=1}^{k} \binom{k}{j} \cdot T(d,d-j) + 1. \end{align*} Note that $\binom{k}{j} = \sum_{i=1}^{j} \binom{k}{j} \binom{j}{i} (-1)^{i+1}$. We have $$T(d+k,d) = \sum_{j=1}^{k} T(d,d-j) \Big ( \sum_{i=1}^{j} \binom{k}{j} \binom{j}{i} (-1)^{i+1} \Big ) + 1.$$ By the identity $\binom{r}{s} \binom{s}{t} = \binom{r}{t} \binom{r-t}{s-t}$, we get $$T(d+k,d) = \sum_{j=1}^{k} T(d,d-j) \Big (\sum_{i=1}^{j} \binom{k}{i} \binom{k-i}{j-i} (-1)^{i+1} \Big ) + 1.$$ If we change the order of summations and let $a = j-i$, we obtain $$T(d+k,d)= \sum_{i=1}^{k} \binom{k}{i} (-1)^{i+1} \Big (\sum_{a=0}^{k-i} \binom{k-i}{a} \cdot T(d,d-a-i) \Big ) + 1.$$ Since $k-i \leq 2(d-i)-(d+k-i)$, by Lemma \ref{lem:6.1} we have $$T(d+k,d) = \sum_{i=1}^{k} \Big ( (-1)^{i+1} \binom{k}{i} \cdot T(d+k-i,d-i) \Big ) + 1.$$ \end{proof} \section*{Acknowledgements} The authors would like to thank Professor Einar Steingr\'\i msson for his support, advice, and invaluable feedback on this paper. We thank Professor Paul E. Gunnells for proposing this area of research and for his helpful suggestions. We are grateful to Roger Van Peski for his dedicated mentoring and edits. Finally, we thank the PROMYS program and the Clay Mathematics Institute, under which this research was made possible. \newpage